Prevent Production Downtime With Hyperconvergence

27 October 2020 | Blog

Michaël Racine

Telecommunications and Networking Analyst


Virtualization has come a long way over the years and has become the standard in production systems. Protecting virtualization systems and, by extension, the production system, requires a robust network architecture capable of withstanding emergency situations such as power outages, fires or equipment failures.

Hyperconvergence is a great way to ensure the resilience needed to protect virtualization and storage systems in the production system.

How does hyperconvergence work?

Hyperconvergence is a simple principle:

  • Software is installed on all virtualization hypervisors.
  • This software then enables resources from all these servers to be “combined” into a single logical whole.
  • Then, all these server resources can be managed from a central point.
  • Hyperconvergence delivers powerful new options for virtualization environments, better redundancy and improved performance.
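The pooling described above can be sketched in a few lines. This is a minimal illustration, not real hyperconvergence software; the server names and capacities are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Hypervisor:
    name: str
    cpu_cores: int
    ram_gb: int
    disk_tb: float

# A hypothetical cluster of three hypervisors running the
# hyperconvergence software
cluster = [
    Hypervisor("hv-01", 32, 256, 10.0),
    Hypervisor("hv-02", 32, 256, 10.0),
    Hypervisor("hv-03", 16, 128, 5.0),
]

# The hyperconvergence layer presents the resources of all servers
# as a single logical pool, managed from one central point
pool = {
    "cpu_cores": sum(h.cpu_cores for h in cluster),
    "ram_gb": sum(h.ram_gb for h in cluster),
    "disk_tb": sum(h.disk_tb for h in cluster),
}
print(pool)  # {'cpu_cores': 80, 'ram_gb': 640, 'disk_tb': 25.0}
```

From the administrator's point of view, virtual machines draw on this combined pool rather than on any single server's resources.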

Hyperconvergence features and benefits

While hyperconvergence features differ depending on the product, the following list of features is a market standard:

  • One of the most important features of hyperconvergence is data replication, which keeps multiple copies of the same virtual machine distributed across multiple servers at all times. These copies are made in real time and are therefore identical to the source virtual machine.
    • In a hyperconverged virtualization environment, a number of disks or servers can be lost without affecting virtual machines and data, which is perfect for highly sensitive environments.
    • You can set the desired number of copies for each virtual machine, as long as you have the necessary number of servers. For example, you can easily configure two copies for critical machines and no extra copies for less critical ones.
    • If a server breaks down, some virtual machines lose one of their copies. If the hyperconvergence system includes more than two virtualization servers, the software automatically copies the affected data to a different server, so the desired number of copies and a high degree of redundancy are maintained at all times. Some hyperconvergence products also let you set an acceptable period of “non-compliance” in the redundancy policy to avoid unnecessary copying during short or planned downtimes.
  • Resources are easily scalable (RAM, CPU, disk). For example, simply add a hard disk to a server and the additional space becomes available for the entire storage pool.
  • If existing servers no longer have slots to insert a hard disk, RAM or CPU, there is no need to get rid of old servers or replace components: you can simply add a server to the hyperconvergence environment. This server then provides additional resources AND redundancy. Yay for options!
  • Since you can always add components or servers to the environment, you avoid falling into the trap of overprovisioning for fear of running out of resources.
  • Unlike SAN-based options, storage is attached directly to the server that acts as a hypervisor, providing better performance. SAN storage is generally not designed to scale out and usually has a single point of failure.
  • Data is compressed to reduce the network load.
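The replication and rebuild behaviour described above can be sketched as a toy policy. This is a simplified, hypothetical model (the function names, server names and placement strategy are illustrative, not any vendor's actual algorithm):

```python
def place_replicas(vm, factor, servers):
    """Pick `factor` distinct servers to hold copies of `vm`."""
    if factor > len(servers):
        raise ValueError("not enough servers for the requested copies")
    return servers[:factor]

def rebuild_after_failure(placement, failed, servers):
    """Restore each VM's replication factor after a server failure."""
    healthy = [s for s in servers if s != failed]
    for vm, replicas in placement.items():
        survivors = [s for s in replicas if s != failed]
        factor = len(replicas)  # desired number of copies
        # Copy to healthy servers that do not already hold a replica
        # until the desired replication factor is restored
        candidates = [s for s in healthy if s not in survivors]
        while len(survivors) < factor and candidates:
            survivors.append(candidates.pop(0))
        placement[vm] = survivors
    return placement

servers = ["hv-01", "hv-02", "hv-03"]
placement = {
    "vm-critical": place_replicas("vm-critical", 2, servers),  # two copies
    "vm-lab": place_replicas("vm-lab", 1, servers),            # one copy
}
# hv-01 breaks down: the system re-copies data to healthy servers
placement = rebuild_after_failure(placement, "hv-01", servers)
print(placement)
```

After the failure, both virtual machines are back to their configured number of copies, now held only on the surviving servers.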

Setting up hyperconvergence

Certain best practices apply when choosing hyperconvergence, along with certain network architecture requirements. BBA’s telecommunications team can help you create a robust hyperconvergence architecture. Here are some basic rules to consider when setting up a hyperconverged system:

  • You must have at least three servers, including:
    • Two or more resource servers where the hard disks, RAM and processors used in the hyperconverged environment will be located
    • A server that will act as a witness: this server needs few resources, as its sole purpose is to monitor the health of the resource servers and avoid the so-called “split brain” scenario, in which isolated servers keep running the same virtual machines independently.
  • Servers must be placed in different locations, so they are not all affected by natural disasters or fires at once. Fibre optic cables between buildings must be redundant and ideally be run through different paths.
  • The more servers there are, the more redundancy there is for virtual machines.
  • Network links must provide enough bandwidth and low latency, since copies are written in real time across servers.
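The witness's role can be illustrated with a toy quorum check. This is a simplified sketch, assuming a hypothetical three-vote cluster (two resource servers plus one witness), not any vendor's actual quorum implementation:

```python
def has_quorum(reachable_votes, total_votes):
    """A partition may stay active only if it sees a strict majority."""
    return reachable_votes > total_votes // 2

total = 3  # two resource servers + one witness

# A network cut separates site A from site B;
# the witness can still reach site A.
site_a = 2  # one resource server + the witness
site_b = 1  # the isolated resource server

print(has_quorum(site_a, total))  # True: site A keeps serving VMs
print(has_quorum(site_b, total))  # False: site B shuts its VMs down
```

With three voters, at most one side of any split can hold a majority, so only one side keeps running virtual machines and the “split brain” scenario is avoided.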

As with any IT solution, a backup system must also be in place to take over in case an issue arises with the hyperconverged system. Although this type of system is highly reliable, you should always consider all possible emergency situations, so you are prepared for any eventuality.

This content is for general information purposes only. All rights reserved ©BBA
