Hyperconverged Infrastructure

What Are the Best Uses for Hyperconverged Infrastructure?

In the very early days, HCI was pushed pretty hard for “edge” use cases. One such use case was virtual desktop infrastructure (VDI), but it wasn’t uncommon to see hyperconvergence touted for other, smaller use cases back then.

Over time, the perception of HCI has shifted from edge cases to an architecture that can support even the most demanding mission-critical workloads. The skepticism that marked the beginning of the HCI era has given way to excitement as organizations pursue the benefits of HCI adoption, including reduced operational overhead and faster time to value for new initiatives, among many others.

General Purpose Workloads

It should come as no surprise that HCI has become a staple for general purpose workloads. These are the functions that every organization has to have: infrastructure servers (DNS, DHCP, Active Directory, print servers, and so on), file servers, application servers, database servers, and anything else the company needs to operate.

In the context of “general purpose,” workloads can be defined in a lot of ways; the simplest definition is that, in essence, everything is included. Prior to flash storage, this might have been difficult to achieve, but with the kind of performance flash offers, even hybrid HCI environments can support a wide array of workloads all vying for storage, RAM, and compute resources.

Databases

Databases are the workhorses for most businesses, powering everything from ecommerce sites to point-of-sale systems to customer relationship management tools to enterprise resource planning systems.

Regardless of the actual application, databases all have one key trait in common: they have to perform, because they’re usually linked directly to the bottom line. Poor performance can hurt that bottom line in different ways: by increasing expenses when slow applications hold back employee productivity, or by reducing revenue when customers, frustrated that the checkout process is taking too long, simply walk away.

HCI solutions can support even the most intense database applications, thanks to their inclusion of flash storage and the efficiency of the storage stack on each cluster node. Moreover, as databases grow, HCI makes it far easier for companies to expand their storage footprint. The recipe: just add more nodes. That’s it! HCI solutions are purpose-built to enable easy scale, which was one of the most significant shortcomings of legacy environments.
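To make the scale-out recipe concrete, here’s a minimal capacity sketch in Python. The per-node capacity and replication factor are assumptions for the example; real figures vary by vendor, node model, and the data reduction achieved.

```python
import math

# Assumed, illustrative figures only: real capacity per node and the
# replication factor depend on the vendor, node model, and data reduction.
RAW_TB_PER_NODE = 20.0      # raw storage shipped with each node
REPLICATION_FACTOR = 2      # copies of each block kept for resilience

def usable_capacity_tb(nodes: int) -> float:
    """Rough usable capacity of a cluster: raw capacity divided by data copies."""
    return nodes * RAW_TB_PER_NODE / REPLICATION_FACTOR

def nodes_needed(dataset_tb: float) -> int:
    """Smallest cluster that can hold a dataset of the given size."""
    return math.ceil(dataset_tb / (RAW_TB_PER_NODE / REPLICATION_FACTOR))

if __name__ == "__main__":
    print(usable_capacity_tb(4))   # 4 nodes -> 40.0 TB usable
    print(nodes_needed(75.0))      # a 75 TB database -> 8 nodes
```

Because compute and memory arrive with every node as well, the same arithmetic applies to the rest of the resource pool, which is what makes “just add more nodes” a workable answer as a database grows.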

Logging and Analytics

In recent years, logging and analytics have emerged as key workloads requiring significant enterprise IT support, and they bring some fairly distinctive characteristics with them. For logging, the underlying infrastructure needs to support a significant level of data velocity: the platform has to sustain fast writes because so much data is coming in. It also has to support quick and easy capacity expansion, since logging can consume vast swaths of storage capacity as data is ingested into the system.
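As a rough illustration of how those two requirements interact, the sketch below sizes a cluster for a hypothetical log workload. The ingest rate, retention window, and per-node figures are assumptions made for the example, not vendor specifications.

```python
import math

# Illustrative sizing for a log-ingest workload. Every figure here is an
# assumption made for the example, not a vendor specification.
INGEST_GB_PER_DAY = 2_000        # raw log data arriving each day
RETENTION_DAYS = 30              # how long logs stay online
NODE_WRITE_MB_PER_SEC = 400      # assumed sustained write rate per node
NODE_USABLE_TB = 10.0            # assumed usable capacity per node

def nodes_for_velocity() -> int:
    """Nodes needed just to absorb the average write rate (bursts run higher)."""
    avg_mb_per_sec = INGEST_GB_PER_DAY * 1024 / 86_400
    return math.ceil(avg_mb_per_sec / NODE_WRITE_MB_PER_SEC)

def nodes_for_capacity() -> int:
    """Nodes needed to hold the full retention window."""
    retained_tb = INGEST_GB_PER_DAY * RETENTION_DAYS / 1024
    return math.ceil(retained_tb / NODE_USABLE_TB)

if __name__ == "__main__":
    # The cluster has to satisfy both constraints at once.
    print(max(nodes_for_velocity(), nodes_for_capacity()))
```

In this example it’s capacity, not write speed, that dictates the cluster size, which is exactly why easy expansion matters so much for logging.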

Analytics can have characteristics similar to logging; the type of workload generated depends on your usage. Are you gathering data to analyze? That’s a lot like logging. Are you mining data for insight? If that’s your goal, the environment needs to support fast reads, since the analytics platform will need to consume all the underlying data to yield results.

Regardless, HCI can support both write- and read-intensive applications. Of course, there still needs to be some thought given to how the hyperconverged deployment is architected: you can’t just deploy a bunch of spinning-disk nodes and expect high performance. You’ll need hybrid or all-flash nodes to achieve appropriate performance levels.

Data Protection

Data protection means different things to different people. It may mean just maintaining a high level of availability, which HCI solutions typically do by default; it might mean providing comprehensive disaster recovery services, or enabling a strong data protection and disaster recovery partner ecosystem.

Data protection is a strength of HCI environments. With easily scalable storage capabilities, it’s not hard to make sure that there is sufficient capacity to enable data protection services.

In recent years, the term “secondary storage” has come into use to describe non-primary storage needs, with backup and recovery as a core use case. HCI is a key enabler of secondary storage offerings, not just because of its easily harnessed and expanded storage footprint, but also because it sits so close to all the workloads operating in the environment.

File Storage

File servers are often the “dumping ground” for everything that doesn’t fit somewhere else; but they’re also chock full of corporate data and secrets, and IT needs to ensure that these resources are well supported and protected. Unlike other workloads, file servers don’t generally demand high levels of performance.

Capacity, on the other hand, is a different story. File servers can demand a lot of capacity, since they might store everything from small text files to corporate board reports to entire libraries of video content from the marketing department.

The sheer scalability of HCI makes it a great match for file services. If you need more capacity, just add nodes. Even better, companies such as Nutanix have added native file services to their hyperconverged platforms. Native file services allow customers to deploy powerful and highly scalable file storage structures in their HCI environments without having to build out separate Windows file servers, which simplifies the overall architecture. Of course, these services still integrate with Active Directory to enable secure authentication to what are often sensitive company resources.

Edge Computing

Edge computing describes computing activities that take place outside an organization’s data centers and cloud environments. Edge computing locations include remote office and branch office sites, but also extend to other locales, such as self-driving vehicles, which require tremendous computing power that’s also immediately accessible. See Figure 1.

In the traditional remote/branch office (ROBO) sense, HCI is a perfect fit, since hyperconverged clusters can often start very small. Even the biggest enterprises have very small needs at the edge. The ability for an HCI deployment to scale down to support these environments is critical.

Moreover, this is one use case in which ease of use and simple scalability are truly key. Many edge environments don’t have dedicated IT staff, so the infrastructure deployed into those locations needs to be rock-solid and easily administered. It also has to be scalable. If a store grows and needs more workload capacity, it should be easy to expand.

The needs around the edge are simplicity, scalability, and cost-effectiveness. HCI makes it possible for organizations to design a standard edge architecture and then deploy it as many times as necessary to support the needs of the business while retaining the ability to scale as needed.

Figure 1. Edge computing pushes processing out to the front lines of the business, where IT staff may not even exist.

VDI and Desktop-as-a-Service (DaaS)

VDI and Desktop-as-a-Service (DaaS) are two methods by which organizations seek to bring order to what can be desktop chaos. Around the time HCI was originally hitting the market, CIOs and desktop architects were struggling with VDI deployments, and often giving up on the promise of the technology. Often, VDI failure was due to underperforming storage, as well as sheer architectural complexity.

HCI collapsed the hard parts of VDI into a single appliance, often imbued with just enough flash storage to help organizations ride out the boot storms and login storms that plagued previous efforts. This is why VDI was paraded as one of the top HCI use cases at its outset (see Figure 2).

Figure 2. Hyperconverged infrastructure scales easily as your needs grow, turning virtual desktop infrastructure architecture into a simple building-block approach.

VDI’s cousin, DaaS, is everything you’d expect from a software-as-a-service (SaaS) offering. Initial iterations of DaaS, such as Nutanix’s Xi Frame, are fully managed, allowing customers to simply consume their desktops from the cloud. In the future, expect to see on-premises DaaS offerings—based on HCI—that will provide more flexibility and allow organizations to maintain an on-premises desktop environment while enjoying the consumption-centric benefits DaaS has to offer.

Test and Development

Test and development workloads are sometimes relegated to castoff IT gear that’s no longer in use elsewhere. This is a mistake. In an age in which those with the best software and processes win, giving developers the best gear is paramount. It’s critically important for developers to have access to gear that operates similarly to production, but is still cost-effective and can scale as development workloads increase.

HCI provides developers with a programmable infrastructure environment they can fold right into their development workflows. They can create and destroy VMs on command, and run everything on infrastructure that performs well, speeding their efforts.
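As an illustration of what that programmability can look like, here’s a minimal Python sketch that provisions a short-lived VM for a test run and tears it down afterward. The endpoint, payload fields, and credentials are hypothetical placeholders, not a specific vendor’s API; the real calls come from your platform’s documentation.

```python
import requests

# The endpoint, payload fields, and credentials below are hypothetical
# placeholders, not a specific vendor's API.
API = "https://hci.example.com/api/vms"
AUTH = ("dev-user", "dev-password")

def create_test_vm(name: str) -> str:
    """Provision a short-lived VM for a test run and return its ID."""
    spec = {"name": name, "vcpus": 4, "memory_gb": 8, "disk_gb": 100}
    resp = requests.post(API, json=spec, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

def destroy_test_vm(vm_id: str) -> None:
    """Tear the VM down as soon as the tests finish."""
    requests.delete(f"{API}/{vm_id}", auth=AUTH, timeout=30).raise_for_status()

if __name__ == "__main__":
    vm_id = create_test_vm("ci-build-1234")
    try:
        pass  # run the test suite against the new VM here
    finally:
        destroy_test_vm(vm_id)
```

Wrapping the teardown in a finally block keeps test VMs from lingering and consuming cluster resources after a failed run.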