Hyperconverged Infrastructure

In the Beginning… of Hyperconverged Infrastructure

Hyperconverged infrastructure (HCI) emerged in the early 2000s as a response to the growing challenge enterprises faced in dealing with complex, multi-vendor, multi-system, multi-site IT infrastructure. It could be seen as the industry pendulum swinging back to a more centrally managed computing environment.

Evolution or Revolution?

In the late 1960s and early 1970s, enterprise IT infrastructure largely consisted of one or more mainframe computers supporting all workloads and applications. These machines typically were housed in a single data center, and all of the functions of the computer resided in a single cabinet (or a small number of cabinets).

In the late 1970s and into the 1980s, some workloads were offloaded to smaller, less costly systems known as minicomputers. These minicomputers were often easier to program and operate than the mainframes; they assisted the mainframe by supporting business-unit or divisional workloads and feeding data back to it.

In the 1980s and into the 1990s, processor, memory, storage, and networking capabilities advanced rapidly. Innovative suppliers re-examined the concept of a minicomputer — then called “midrange systems” — and decided to tease out individual functions into separate “server appliances.”

This approach allowed the performance and capability of each separate function to scale up or down as necessary through the addition (or removal) of individual appliances. At this time, another trend was observed — enterprises began to standardize on Intel x86 architectures that hosted Microsoft Windows and, later, Linux operating systems (OSes) and workloads. These systems became known as “Industry Standard Systems.”

The advantage of this distributed-system concept was that it supported exceptional levels of performance and scalability. It also helped enterprises reduce hardware costs: they purchased only the appliances required for their current workload and could scale up as the business grew by adding systems.

In the 2000s, enterprises began to experience the drawbacks of this approach. Each appliance often required that staff maintain expertise in its proprietary management tools. Furthermore, many of these appliances were built on proprietary OSes, memory, storage, and networking components.

As enterprises embraced this approach, their networks soon began to look like a patchwork quilt of appliances. The result was increasingly hard to manage, required staff with specialized expertise, and drove up costs.

Enter Virtual Computing

Although prominent in mainframe computing environments since the late 1960s, virtualization technology now began to emerge in the world of Industry Standard Systems. It became increasingly common for workload-hosting OSes to run as virtual machines atop hypervisor software. In addition, storage was increasingly virtualized to enhance performance and optimize utilization of available capacity. Similar enhancements were made to the networks supporting distributed workloads through network virtualization.

Once workloads and applications lived in virtual environments, enterprises wanted suppliers to offer the same flexibility and performance, but with a more unified approach to management. They also demanded support for their current OSes and their existing development and management approaches.

Vendors responded with products based on a converged infrastructure, in which processing, memory, and storage were brought back into a single enclosure and managed with a single set of tools. Later, a fourth function, networking, was brought into the enclosure as well; the result was called hyperconverged infrastructure.

How Did Vendors Respond?

Although nearly all systems suppliers offer HCI-based solutions today, the following are some of the earliest examples of this approach:

  • Oracle’s 2008 announcement of the HP Oracle Database Machine might be considered one of the first HCI computing solutions. Oracle and HP made the hardware and software for a complete database solution available under a single order number. These configurations included a system, an OS, and a database.
  • Cisco announced its Unified Computing System (UCS) shortly thereafter. UCS was a family of general-purpose systems that were scalable, flexible, and manageable through a unified set of tools. The hardware, however, was based on a proprietary combination of processor, memory, networking, and storage technology.
  • Acadia, an EMC/Cisco joint venture, emerged to sell Vblock systems. Later, when Intel and VMware joined the party, Acadia was renamed the Virtual Computing Environment (VCE). These solutions were based on Cisco servers, EMC storage, and VMware virtualization software. When EMC was acquired by Dell, VCE became the EMC Converged Platform Division.
  • IBM jumped in with its own approach, called “PureSystems.” As with the others, the company offered pre-configured systems. What was different was that these configurations included both x86 and Power architecture systems. Configurations could include any of four OSes (AIX, IBM i, Linux, and Windows) and any of five hypervisors (Hyper-V from Microsoft, KVM, PowerVM from IBM, VMware, or Xen).
  • Lenovo and HPE both offered their own pre-configured, converged systems at this time.
  • New market entrants SimpliVity, Nutanix, and Scale Computing appeared in this time frame.

All of these suppliers focused on some combination of the following use cases: in-house cloud platforms, support for enterprise-critical applications, and virtual desktop infrastructure (VDI).

Later, these suppliers began to focus on incorporating flash storage to improve overall performance.

Key Questions

Although the introduction of HCI has helped, the industry is still working through a few key questions, such as:

  • Will HCI actually reduce complexity in the enterprise IT infrastructure?
  • Will HCI really make it possible for enterprises to simplify the management of their IT infrastructure?
  • Will enterprises actually be able to reduce their IT costs through the use of HCI?
  • Will shared HCI infrastructure really make it possible for enterprises to break through their silos and operate in a more unified way?
  • Will the adoption of HCI create a more open, vendor-neutral IT architecture, or have vendors found a way to move their lock-ins up the stack?

A future post will examine how HCI has matured and include answers to some of these questions.