Hyperconvergence without the hype: essentials for CEOs
Many CIOs and data centre managers have placed hyper-converged systems at the top of their list for new data centre server purchases. Here’s what CEOs need to know about hyperconvergence.
In business technology circles, attention has recently turned to organisations adopting hyperconvergence in the data centre. The primary aim is to eliminate the time and cost of running legacy infrastructure, but the lift it gives to IT performance and agility is also helping companies create new revenue streams faster.
Let’s examine this trend and answer a few questions for CEOs.
What is hyperconvergence? Who is using it, and why? And what does it mean for businesses and C-level leaders?
Hyperconvergence essentials for business leaders
Let’s start by looking at the difference between converged and hyper-converged infrastructure, as this question is often asked.
Converged IT systems were essentially a loose coupling of server, storage, networking and virtualisation technologies, overseen by a central software manager that worked alongside separate server, storage and networking software managers. If that sounds complicated, it was: simple management tasks became tedious because of the multiple layers in the system. Hyper-converged systems reduce the number of layers from four to two, and the intelligence of the system shifts to the more agile software layer. Hence, hyper-converged systems are considered software-defined.
Why is all this useful? There are three primary CEO-focused benefits of hyperconvergence: faster time to market for new applications, faster applications, and lower total cost of ownership (TCO). I will explain how each benefit is delivered:
- Faster time to market: In today’s complex and fast-paced environment, it’s important to constantly evolve business applications to meet changing consumer needs. This trend has given rise to ‘DevOps’, where applications are constantly tested with new features and put into production very quickly. Hyper-converged systems are essentially simpler in design than traditional three-tier architecture. Data that used to sit in an external storage area network (SAN) now lives inside the server itself, with multiple copies maintained for redundancy. Other than switches for network connectivity, no other hardware is needed. Because hyperconvergence combines all the layers into a single appliance, deploying new applications becomes very easy: IT does not have to size resources for every layer, only at a total level, and the system does the rest. Applications are therefore modified, updated and deployed faster than in three-tier systems.
- Faster applications: To use a classic analogy, it’s always faster to pick up a book on your desk than to go and get it from a library. With hyperconvergence, the data is stored on the system itself, so applications can access it faster. Add high-performance disks like solid-state drives (SSDs), and speeds can be up to 100 times faster than older disk-based systems; a rough latency comparison follows this list.
- Lower TCO: Hyperconvergence delivers a more attractive TCO for the entire infrastructure, as fewer layers of hardware are needed to achieve the same outcome. The footprint is smaller, with lower energy consumption and fewer systems to acquire and maintain. Software costs can increase, and will vary from vendor to vendor, but the overall solution still results in a lower TCO; a back-of-envelope comparison is sketched after the latency example below.
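To put rough numbers on the book-on-your-desk analogy, here is a minimal sketch in Python. The latency figures are illustrative assumptions chosen to show the order of magnitude involved, not measurements from any particular system:

```python
# Illustrative, order-of-magnitude access latencies in microseconds.
# These numbers are assumptions for the analogy, not vendor benchmarks.
LATENCY_US = {
    "local SSD on a hyper-converged node": 100,     # on-box flash read, ~0.1 ms
    "external HDD SAN across the network": 10_000,  # network hop plus disk seek, ~10 ms
}

baseline = LATENCY_US["external HDD SAN across the network"]
for path, us in LATENCY_US.items():
    print(f"{path}: ~{us:,} µs ({baseline / us:.0f}x the SAN baseline)")
```

Under these assumed figures, the local read comes out roughly 100 times faster, which is where claims of that scale come from.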
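And as a back-of-envelope illustration of the TCO point, the sketch below tallies hypothetical five-year costs for a three-tier stack versus a hyper-converged one. Every figure is a placeholder to be replaced with your own vendor quotes; the shape of the comparison, fewer hardware line items but a larger software line, is the point:

```python
# Hypothetical five-year cost lines in USD; placeholders, not real quotes.
three_tier = {
    "servers": 300_000,
    "external SAN storage": 250_000,
    "SAN fabric switches": 60_000,
    "power and cooling": 120_000,
    "software licences": 150_000,
}

hyper_converged = {
    "HCI appliances (compute plus storage)": 380_000,
    "network switches": 40_000,
    "power and cooling": 70_000,   # smaller footprint, fewer boxes
    "software licences": 200_000,  # software can cost more, as noted above
}

for name, costs in (("Three-tier", three_tier), ("Hyper-converged", hyper_converged)):
    print(f"{name} five-year TCO: ${sum(costs.values()):,}")
```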
Hyper-converged infrastructure began with virtual desktops, because those workloads usually had their own budget, were not mission critical, and required high performance plus linear scaling. Even now, virtual desktops and mailbox workloads are perfectly suited to the architecture. But hyperconvergence has matured: it not only handles traditional server workloads like web servers, test and development, and standard applications, it can also take on mission-critical workloads like SAP, SQL, and Oracle, as well as heavier workloads like business analytics and big data tools.
Hyperconvergence technology has actually existed for more than a decade, but the processing power and disk technology needed to meet performance requirements used to be cost-prohibitive. With multi-core processors, a drop in storage cost per terabyte, and faster disk technologies, that is no longer the case.
The market is responding. Around the region, the CIOs and data centre managers I speak to have placed hyper-converged systems at the top of their list for new data centre server purchases. The top driver is an imperative to consolidate workloads and hardware within the data centre. A 2016 Morgan Stanley survey echoes my conversations with C-level executives, reporting that 60 percent of CIOs plan to purchase hyper-converged servers within a year, a threefold increase on the year before.
The major analysts confirm the trend. IDC puts the market for hyper-converged systems at $2.6 billion in 2016 and tips it to grow to $6.4 billion by 2020, figures cited by CIO. Gartner has published comparable numbers, estimating the market will be worth almost $5 billion by 2019. Asia Pacific is tipped to be the fastest-growing region in the global hyper-converged infrastructure market between 2016 and 2022.
The right technology for a cloud world
Many of the advantages of the public cloud (lower costs, simplified control, faster deployment) are built into hyper-converged systems, allowing organisations to enjoy those benefits within their own data centres as a private cloud.
The common-sense approach to the cloud is hybrid, which gives CEOs and CIOs the best of both worlds. This is why the leading virtualisation vendors are all working towards hybrid cloud systems that let customers ‘burst’ workloads to the public cloud while keeping heavy, usually mission-critical, workloads on premises; the sketch below illustrates the placement logic.
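As a conceptual sketch only, assuming a simple policy rather than any vendor’s actual product or API, the decision behind bursting might look like this:

```python
# A conceptual hybrid cloud burst policy, not any vendor's actual API.
# Assumes we know whether a workload is mission-critical and how busy
# the private hyper-converged cluster currently is.

def place_workload(mission_critical: bool, cluster_utilisation: float) -> str:
    """Decide where a new workload runs under a simple hybrid cloud policy."""
    if mission_critical:
        # Heavy, mission-critical work stays on the private cluster.
        return "on-premises hyper-converged cluster"
    if cluster_utilisation > 0.85:
        # Private capacity is nearly full, so overflow to rented capacity.
        return "public cloud (burst)"
    return "on-premises hyper-converged cluster"

print(place_workload(mission_critical=False, cluster_utilisation=0.92))
# -> public cloud (burst)
```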
Hyper-converged systems are well suited to facilitating this approach, as they provide the simplicity of unified layers of IT. And while security in the public cloud has improved, the extra degree of data protection inherent in hyperconvergence remains attractive to organisations, an important consideration for leaders.
Hyperconvergence considerations for Asia Pacific data centres
In Asia Pacific, many organisations have opted for a modular approach to rolling out hyper-converged infrastructure, rather than using it to embark on big-picture, company-wide digital transformation projects. That isn’t to say a business could not do so, but for many of the C-level executives I speak with in this region, the preference is to work on discrete workloads, an approach that can be more manageable, cost-efficient, and easier to complete within desired time frames.
Workload by workload, companies can analyse more accurately and decide with more confidence. Does this workload or application need hyperconvergence, software-defined storage, or a bare-metal server? Will it be a back-end system, or a key production workload? At the micro level, teams can more easily ask and answer questions like these. It is not the only approach that works, but it seems a positive model for this region, and potentially beyond.
Hyperconvergence is an especially good fit today, as technology accelerates into a new era of data-intensive, Internet of Things (IoT)-fuelled smart cities, homes and workplaces.
Please share your thoughts or experiences on hyperconvergence in the comments below! You can also reach out to me on Twitter.