The data center is now more than ever the core of the business – nearly all primary IT functions are housed in, routed through, and co-ordinated from this hub, and every phase of business relies on it.

However, over the years, many data centers have become hard to manage as disparate technologies have been rolled in, requiring increasingly demanding configurations to make everything work together.

Business demands on IT have added pressure to deliver agile and flexible environments that allow business operations to pivot quickly to meet market demands.

The price for this agility has traditionally been large investments in infrastructure, with costs that add up every year. The three key data center systems - servers, storage and networking equipment - need separate specialist teams to manage them, adding not just cost but complexity.

Specialist systems such as mission-critical database servers, tuned and optimized for a single application, jostle for attention with the myriad other devices in the data center, each of which requires management attention.

However, with the growing popularity of hyper-convergence, building a powerful back end for any business is becoming less about buying big, expensive boxes. Organizations are finding massive efficiencies, agility and manageability benefits by virtualizing the data center environment and running it from software.

Technology analyst firms believe hyper-converged technologies will be propelled into the mainstream within the next five years.

According to IDC, the converged marketplace will grow to US$14.3 billion by 2017, while Gartner stated that the market for hyper-converged integrated systems (HCIS) will grow 79 per cent to reach almost US$2 billion in 2016.

HCIS will be the fastest-growing segment of the overall market for integrated systems, reaching almost US$5 billion, which is 24 per cent of the market, by 2019, Gartner stated.

Gartner defines HCIS as a platform offering shared compute and storage resources, based on software-defined storage, software-defined compute, commodity hardware and a unified management interface. Hyper-converged systems deliver their main value through software tools, commoditizing the underlying hardware.

So what does this mean for businesses? Hyper-convergence (HC) presents an accessible, affordable way for organizations of any size to modernize their data centers and embrace 21st century business models. The advantage is being able to do this without investing heavily in new technology. That's not to say there won't be a need to purchase new products, but it won't require a complicated refurbishment or modernization project across the whole data center.

How Hyper-Convergence Works

HC is the logical successor to converged infrastructure, the pre-integrated and tested bundles of data center components that became popular several years ago.

At a technology level, HC assembles best-of-breed components, then adds software-driven automation that enables IT to deploy entire application and support stacks in a matter of clicks. This level of software-defined, automated operation creates cloud-like agility on-premises.

HC solutions bring IT one step closer to a truly software-defined data center (SDDC). The software layer lends itself to ever-greater automation - a hallmark of the SDDC - with application programming interfaces (APIs) dictating how control plane applications talk to one another.

Ultimately, the idea is to take all of the guesswork out of deploying, provisioning and managing the infrastructure. The key value of HC is that it puts the promised benefits of the SDDC within reach of smaller and mid-size businesses. For larger enterprises, it fits the need to set up data centers at branch offices and remote locations, because it's a single, compact appliance that can be managed by an IT generalist.

This type of hyper-converged architecture lends itself to exceptional resilience. The tight integration of data center elements makes for easier management, improved performance, and a turnkey solution to the problem created by systems diversity.

The business benefits are clear - they include less time spent operating servers and more time delivering applications to enable an agile business. This agility is also reflected in the system's ability to act as a bridge between internal resources and the cloud.

Companies today understand the value of cloud and cloud-like operational models. The self-service capabilities, service catalogs, and automation inherent in cloud operations eliminate hours and days of manual IT work. But combining the synergies of hyper-convergence with cloud can take the data center to a whole new level.

Hyper-Convergence and the Cloud

With a hyper-converged system, organizations can start small and fast with cloud and see how it can add value for business workloads and tasks. The cloud enables an enhanced level of end-user independence via self-service portals and APIs, enabling organizations to pull together all the resources they need for a big project with a few keystrokes.

All they have to do is create a template which, for example, might spin up three web service VMs, two app service VMs and one database VM. Then, with a click or two, the service is up and running.
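To make that concrete, below is a minimal sketch of what such a template-driven request might look like. The portal URL, the deployments endpoint and the provision_from_template() helper are assumptions invented for this illustration, not the API of any specific product.

    # Illustrative sketch only: a hypothetical self-service template describing
    # the small three-tier service mentioned above (3 web, 2 app, 1 database VM).
    # The endpoint and provision_from_template() are assumptions, not a vendor API.
    import requests

    SERVICE_TEMPLATE = {
        "name": "three-tier-demo",
        "vms": [
            {"role": "web", "count": 3, "vcpus": 2, "memory_gb": 4, "image": "web-server"},
            {"role": "app", "count": 2, "vcpus": 4, "memory_gb": 8, "image": "app-server"},
            {"role": "db", "count": 1, "vcpus": 8, "memory_gb": 32, "image": "database"},
        ],
        "network": "dev-segment",
    }

    def provision_from_template(portal_url: str, token: str, template: dict) -> dict:
        """Submit the template to a (hypothetical) self-service portal REST API."""
        response = requests.post(
            f"{portal_url}/api/v1/deployments",
            json=template,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()  # e.g. a deployment ID and the resulting VM inventory

    if __name__ == "__main__":
        result = provision_from_template("https://portal.example.local", "demo-token", SERVICE_TEMPLATE)
        print("Deployment submitted:", result.get("id"))

Whether the request is expressed as a template file, a catalog item or a form in a portal, the point is the same: the infrastructure details are captured once, and every subsequent deployment becomes a repeatable, near one-click operation.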

But as with any other technology change, the complexity lies not just in the infrastructure, but in the people and processes around it. What often happens is that organizations build a top-notch self-service cloud environment for their customers – but no one uses it.

People still end up picking up the phone and talking to the admin to get things done. That's why it's essential to start small and establish the right organizational processes around it. It's important to make sure the environment is functional, and more importantly that the lines of business and developers actually use it, before embarking on “cloudifying” the whole data center.

An excellent starting point for the cloud journey for organizations that have already virtualized is to think in terms of single virtual machines, rather than in terms of complex services that span across multiple machines and need a lot of advanced orchestration. For this kind of focused and phased approach, a hyper-converged infrastructure is ideal.

Hyper-convergence plus cloud gives organizations the ability to choose which cloud model suits them best.


  • For smaller organizations, the ease and convenience of the pay-as-you-go public cloud model is attractive.
  • A hyper-converged private cloud is a highly competitive alternative to self-configured options. While it does involve an up-front investment, it is not as expensive as the investment required for traditional cloud infrastructure.
  • Organizations won't be restricted to private cloud, because a hyper-converged solution supports a public cloud strategy, a hybrid approach, or anything in between.

The hyper-converged layer helps organizations rely on in-house infrastructure for their average use, and then turn to the public cloud when there's a spike in activity or a surge in demand. It also enables organizations to pursue a hybrid cloud strategy on hyper-converged platforms: with support for multiple hypervisors and public cloud providers, it is a great choice for companies looking for an open, hybrid ‘cloud in a box’.
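As a rough illustration of that burst-to-public-cloud pattern, the sketch below shows the kind of placement decision an automation layer might make. The 80 per cent utilization threshold and both functions are assumptions made up for this example, not part of any particular product.

    # Illustrative sketch of a hybrid "cloud bursting" placement decision:
    # run steady-state workloads on the in-house hyper-converged cluster and
    # overflow to a public cloud when local utilization passes a threshold.
    # The threshold and both functions below are assumptions for illustration.

    BURST_THRESHOLD = 0.80  # burst out once the local cluster would exceed 80% utilization

    def projected_utilization(used_vcpus: int, total_vcpus: int, requested_vcpus: int) -> float:
        """Utilization of the local cluster if the new workload were placed on it."""
        return (used_vcpus + requested_vcpus) / total_vcpus

    def choose_placement(used_vcpus: int, total_vcpus: int, requested_vcpus: int) -> str:
        """Return where a new workload should land: the local cluster or the public cloud."""
        if projected_utilization(used_vcpus, total_vcpus, requested_vcpus) <= BURST_THRESHOLD:
            return "on-premises"  # normal case: run on the local hyper-converged cluster
        return "public-cloud"     # spike in demand: burst out to the public cloud

    if __name__ == "__main__":
        # Example: a 320-vCPU cluster already running 230 vCPUs of workloads
        print(choose_placement(used_vcpus=230, total_vcpus=320, requested_vcpus=16))  # on-premises
        print(choose_placement(used_vcpus=230, total_vcpus=320, requested_vcpus=40))  # public-cloud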

From Concept to Reality

Q: Now what? Where do organizations even begin, and who should they speak to, about transforming their infrastructure with hyper-convergence?

Those were some of the questions one of America's leading genome research institutes, HudsonAlpha, was asking when it needed a powerful, easily managed platform that would deliver the flexibility and scalability to cope with increasing demands.

For the past six years, HudsonAlpha's data production has doubled every six months, and the amount of data under management is now growing by 1PB a month.

According to HudsonAlpha’s chief information officer, Peyton McNully, the scale of the company’s growth became larger than most enterprise systems are built to support.

"We not only have growth in the number of people but also growth in the amount of work each of them is doing,” he said. “Every single one of our 100+ researchers [is] doing something that we generalize as genomics but is very specific to their research, so there is a nuance difference in every one of these workloads".

HudsonAlpha Institute for Biotechnology is based on a 155-acre campus in Huntsville, Alabama and is a leading center for genomic research in the USA. In addition, the institute runs an education outreach program that educates 100,000 children a year from Alabama and nationally in genomics and genetics.

“The research that happens at HudsonAlpha results in some of the latest and greatest findings that are going to translate into better care, better crops and a better quality of life for humans all over the world,” McNully said.

With 200 individuals working on the non-profit side and 600 in 30+ associate life science companies housed on the HudsonAlpha campus, huge amounts of sequencing data are produced.

“Data is our biggest challenge. We generate over 1PB of data a month that needs to be stored, manipulated, computed and so forth,” said Jim Hudson, co-founder and chairman of the board.

“Managing that data and being able to query it is very essential for us. We analyze it on large computers but we needed a platform that would allow us to very efficiently bring all this together and take advantage of all the resources that we have in a much better way.”

"The entire organization is now focused on delivering changes to an IT environment that have business value, and that’s hugely important, because there is a lot less education that has to take place between the sales organization and the IT organization here at HudsonAlpha.”

HudsonAlpha decided that hyper-convergence was part of the answer to its challenging growth.

As users transition to a software-defined data center (SDDC), hyper-converged products serve as self-contained, modular building blocks that can handle changing workloads and accommodate new business. Because of its existing relationship with Hewlett Packard Enterprise and following a visit to HPE Discover in Las Vegas, HudsonAlpha chose the all-in-one compute, storage and virtualization platform, HPE Hyper Converged 380.

“At Discover, I was better able to understand the strategy in what’s taking Hewlett Packard Enterprise forward into transformation areas, and where those can be applied,” said McNully.
“The entire organization is now focused on delivering changes to an IT environment that have business value, and that’s hugely important, because there is a lot less education that has to take place between the sales organization and the IT organization here at HudsonAlpha.”

The Hyper Converged 380 is HPE's latest hyper-converged infrastructure platform, aimed at midsized businesses, enterprise remote office/branch office sites and enterprise line-of-business (legacy and DevOps) environments.

Each HC 380 “building block” combines extensible compute, storage, hypervisor and management capabilities into a compact, 2U scale-out appliance. More specifically, the HC 380 incorporates HPE ProLiant Gen9 DL380 x86-based server technology, HPE StoreVirtual software-defined storage, VMware vSphere® hypervisor, HPE OneView InstantOn software and the HPE OneView User Experience (UX) interface.

Each HC 380 appliance includes two server nodes. Up to 16 server nodes can be clustered as a system and managed from the same user interface. CPU, memory, networking, SSD and HDD are preconfigured for key workloads such as cloud and VDI.

Product features and attributes include:


  • Quick and easy deployment, expansion and maintenance.
  • All hardware and software components are factory-installed and pre-integrated, simplifying installation and setup.
  • A self-guided start up program (OneView InstantOn) streamlines initial system configuration as well as adding nodes to the cluster. (HPE asserts IT generalists can add capacity in as little as 15 minutes.) Firmware and driver updates can be applied with just three simple UI clicks.
  • Rapid IT service provisioning, using the HC 380 as a “VM vending machine” with a “consumer-inspired” user interface.
  • VM provisioning through HPE OneView User Experience can be carried out on a desktop or even a mobile device.
  • Outstanding density and scalability, supporting up to 576 cores per 16U system (16 two-socket server nodes x 18 cores per processor). It can easily handle traditional data-intensive applications such as large-scale databases, as well as newer applications like persistent VDI or Big Data analytics.
  • Transparent VM failover and data mobility across nodes, systems and sites for business continuity.
  • Backup and recovery through HPE StoreVirtual Application Aware Snapshot Manager and HPE StoreVirtual Recovery Manager.
  • Predictive analytics and troubleshooting tools to simplify performance management, capacity planning, and problem identification, isolation and resolution tasks.

The scale-out, software-defined storage platform provides data mobility across tiers and locations and between physical and virtual storage, enabling linear scaling of capacity and performance.

“Our best use of HPE Hyper Converged 380 so far has been in ultimately using it as a development test environment because it’s really similar to what a user will actually have in a production environment on HPE Synergy," McNully said. "It’s also functionally simpler for a developer to hit the FENIX web server user interface window, spin up their instance, log in, conduct their test and then deploy their workload to a production environment whenever they see fit.”

Conclusion

Balancing business demands with IT capabilities has been a challenging proposition, given limited budgets and the constant pressure to ‘do more with less.’

But agility and flexibility are vital for businesses that need to compete in the increasingly fast-paced and global 21st century marketplace. The price of agility has traditionally been a large up-front investment in infrastructure sized to handle peak loads. The resulting growth in equipment and complexity leaves IT admins wasting time and effort on day-to-day activities, just keeping the lights on rather than adding real value.

Q: So how can IT departments marry agility and flexibility with inflexible architectures and over-specialized systems?

The key to untangling this conundrum is a hyper-converged infrastructure, which can also provide a fundamental building block for cloud.

Address: 450 Alexandra Road, Singapore 119960
Phone: +65 6275 3888

Welcome to Hewlett Packard Enterprise. Our technology helps customers turn ideas into value. In turn, they transform industries, markets and lives.