If you’re an IT professional responsible for servers, storage, networking, applications, data management, and the like, chances are you’ve heard a lot about converged and hyperconverged infrastructure (or hyperconvergence) over the past year or so. But what does it really mean? In a nutshell, it is about reducing infrastructure complexity using virtualisation as an enabling technology.
However, like many emerging IT trends, the details differ depending on who you listen to. So to help, we have taken our experience of working with several vendors in this field and come up with our own interpretation.
We believe that converged infrastructure can be loosely grouped into three phases; let’s call them Waves One, Two, and Three:
- Wave One: This first wave sought to simplify the design and support of server, storage, and sometimes network infrastructure by adopting reference architectures and pre-configured solutions that could often be ordered with a single part number. This approach is often referred to as an “Integrated Solution”, but the core technologies still came from a range of vendors, albeit often best-of-breed, resulting in a complex technology stack with many points of management, a wide range of required skill sets, and high maintenance costs.
- Wave Two: The second wave saw the introduction of “Converged Infrastructure”, usually the result of bringing storage back into the server and sharing it between the applications on those servers by means of a “virtual storage controller”, often combining Flash/SSD and high-capacity HDD to address both performance and capacity needs. The technology stack may or may not be a holistic, single-vendor design, but it was a significant step in the right direction.
- Wave Three: The term “Hyper-Converged Infrastructure” (or “HyperConvergence”) came into use when single vendors developed a technology stack from the ground up; lower-end vendors even included their own basic hypervisor, whereas enterprise-class platforms hooked into the leading hypervisor vendors’ management tools. The technology stack includes not only server and storage elements but also de-duplication, backup, replication, fast recovery, WAN optimisation, and more.
At Frontier Technology we believe that the first two waves were steps towards what we really wanted, and that a truly hyper-converged infrastructure needs to do much more than simply integrate a few technologies in a box.
The leading vendor in the hyperconvergence market is Simplivity, and we believe their hyperconvergence proposition offers a paradigm shift in the way enterprises can build out scalable, highly cost-effective, and capable infrastructure for their virtual workloads.
Simplivity took the opportunity to step back and examine what they call “the data problem”. They had seen that de-duplication technology had been a successful strategy for improving the efficiency of storage platforms, mostly backup and archive, but it was being applied too late in the data lifecycle.
The same could be said of WAN optimisation devices and cloud gateways: all of these de-dupe and compress data very effectively, but with different technologies, algorithms, and management systems, and again too late in the lifecycle to be truly effective. What was needed was a fresh approach, a paradigm shift!
Storage de-duplication technology focuses on reducing storage capacity requirements, whereas WAN optimisation is more about reducing network IO. When you think about it, the reduction in IO is the critical factor; as computer genius Gene Amdahl once said, “The best IO is the one you don’t have to do”.
The Simplivity solution is a globally de-duplicated environment in which IOs are processed through the de-dupe engine before any data is ever written to storage. This results in considerable improvements in storage capacity and system performance (as there are fewer IO transactions per write), and it introduces many efficiencies into the backup and replication processes; in fact, legacy backup systems may soon be a thing of the past.
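To make the idea concrete, here is a minimal sketch of inline, content-addressed de-duplication: each block is hashed before it is written, and only previously unseen blocks ever reach backing storage, so duplicate writes cost a hash lookup rather than a physical IO. This is an illustrative toy (fixed-size blocks, SHA-256 content hashes, an in-memory dictionary standing in for storage), not a description of Simplivity’s actual engine; the `DedupeStore` class and its names are assumptions made for the example.

```python
import hashlib

class DedupeStore:
    """Toy inline-deduplicating block store: hash each block before
    writing; only unique blocks hit the backing storage."""

    def __init__(self):
        self.blocks = {}        # content hash -> block data (the "storage")
        self.refcount = {}      # content hash -> number of logical references
        self.writes_avoided = 0 # physical writes saved by de-duplication

    def write(self, data: bytes) -> str:
        """De-dupe a block inline; return its content address."""
        key = hashlib.sha256(data).hexdigest()
        if key in self.blocks:
            # Duplicate block: no physical write, just track the reference.
            self.writes_avoided += 1
        else:
            self.blocks[key] = data
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key

    def read(self, key: str) -> bytes:
        """Resolve a content address back to its block."""
        return self.blocks[key]

store = DedupeStore()
k1 = store.write(b"A" * 4096)  # unique block: one physical write
k2 = store.write(b"A" * 4096)  # duplicate: the IO is avoided entirely
k3 = store.write(b"B" * 4096)  # another unique block
```

Because duplicates are detected before the write, the same mechanism also makes backup and replication cheap: a copy of already-stored data is just another reference to an existing block.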