Giving your business the edge with high performance computing
What if we could predict earthquakes? Avert the next financial crisis? Cure cancer? The benefits of high-performance computing (HPC) to all of us are certainly far-reaching.
It’s a promising and exciting time for high-performance computing (HPC), a subset of the supercomputing field. New technology breakthroughs and cost-efficiencies offer much for business, government organisations and research institutions alike.
I’ll take a look at the big changes we’re seeing in this space and what impact they are having.
Firstly, what is HPC?
HPC is defined by a cluster of systems working together seamlessly as one unit to achieve performance goals not possible with disparate or monolithic systems. HPC features “clusters” of processors, memory, high-speed networks and high-speed storage. Individual computers within a cluster are referred to as “nodes”. Small to large business applications of HPC typically feature between four and 64 nodes.
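To make the cluster idea concrete, here is a toy sketch (not from a real HPC deployment) of the core pattern: one big job is split into chunks, each chunk is worked on independently by a “node”, and the partial results are combined. Real clusters distribute this across separate machines, typically with a framework such as MPI; here Python processes on a single machine stand in for nodes purely as an illustration.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently on one 'node': sum of squares of its chunk."""
    return sum(x * x for x in chunk)

def cluster_sum_of_squares(data, nodes=4):
    # Divide the dataset into one chunk per node (striped split).
    chunks = [data[i::nodes] for i in range(nodes)]
    # Each worker process computes its partial result in parallel.
    with Pool(nodes) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(cluster_sum_of_squares(data))
```

The key property is that each chunk can be processed without talking to the others, so adding nodes adds throughput; that is the same principle, writ small, that lets a 3,400-node machine tackle problems a single server never could.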
(To give you an idea of how big HPC can get – the Lenovo supercomputer at the Barcelona Supercomputing Centre is the world’s largest Intel-based supercomputer, running on over 3,400 server nodes.)
The beauty of HPC is that it can crack problems far more complex than those normal computers can handle. This is why we see stories like this one in Wired about HPC powering diverse science, from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition. The potential to hasten a cure for cancer, or to help scientists combat climate change, is inspiring a whole new generation of computer scientists and AI experts.
What does HPC mean for organisations?
If you consider that HPC helps organisations to crunch larger volumes of data simultaneously, it’s not much of a stretch of the imagination to see how HPC dramatically helps those engaged in big data, advanced analytics and artificial intelligence (AI) research.
As datasets get bigger and bigger, the fields of HPC, AI and data mining are all merging, as experts find new ways to approach everything from national security and climate modelling to product design and customer experience mapping.
As such, the market is now shifting. HPC customer deals used to fall into the $5 million to $20 million range, but I’m increasingly seeing HPC purchased as smaller installations. The technology is simply becoming more affordable, which is good news for data scientists… and business leaders.
To illustrate my point on cost: in Asia Pacific, we recently helped a heavily research-based government science institution with an HPC solution. We replaced a huge amount of hardware and infrastructure with new HPC hardware at a third of the cost of their existing HPC system. Forward-thinking organisations around the world are replicating installations like this right now.
One reason organisations are keen to adopt HPC is a desire to stay ahead of the curve. New products, markets and opportunities can open up when organisations gain analytical tools that were not previously available. I like to use the analogy of firing shells at a clay pigeon. Leaders are always looking ahead at trend cycles to gauge what’s worth adopting, and what’s not. You need to be thinking 12 to 18 months ahead of any cycle, aiming your shell in advance of the clay pigeon’s trajectory, if you will. Technology leaders in different industries have been telling me that HPC allows them to stay on top of these trend cycles by providing the computing power they need to conduct research at the bleeding edge.
Hybrid HPC is a further trend the industry is seeing. This is where an organisation runs a normal stack on its data centre infrastructure during the day, then, for example, at night the same hardware becomes a big data processing HPC superhero. The extra processing may be used for a distinct line of business, or the company may “rent” its computing power to another organisation as a service. The ratio of extra HPC usage is completely flexible for data centre operators – it can range from three hours per day to several months of the year.
There are many practical uses for high-performance computing. Here are just a few examples – HPC can be used to:
- Develop, redesign and model products
- Optimise production and delivery processes
- Analyse or develop large datasets
- Further artificial intelligence research
- Conduct large-scale research projects
- Simulate an earthquake and study its impact
- Perform consumer trend monitoring, searching or profiling
- Create computer visualisations that explain research results
- Carry out simulations and/or modelling of complex processes
- Render the next movie blockbuster
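Several of the uses above – earthquake simulation, trend monitoring, modelling complex processes – share one trait: they consist of many independent trials that can be farmed out across nodes. As a hypothetical illustration (not drawn from any system mentioned in this article), here is a minimal Monte Carlo simulation, estimating pi by random sampling, parallelised across worker processes the way an HPC scheduler would parallelise batches across nodes.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def hits_in_circle(args):
    """One worker's batch: count random points landing inside the unit quarter-circle."""
    seed, samples = args
    rng = random.Random(seed)  # independent, reproducible stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))

def estimate_pi(total_samples=400_000, workers=4):
    per_worker = total_samples // workers
    jobs = [(seed, per_worker) for seed in range(workers)]
    # Each batch runs in its own process; on a cluster, each would be a node.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(hits_in_circle, jobs))
    # Area ratio of quarter-circle to unit square is pi/4.
    return 4.0 * hits / (per_worker * workers)

if __name__ == "__main__":
    print(estimate_pi())
```

Because the batches never communicate mid-run, doubling the worker count roughly halves the wall-clock time – the same economics that make a 64-node cluster transformative for simulation-heavy research.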
We’re in an exciting period for high-performance computing, no doubt about it. Stay tuned for a related article I’m preparing about AI – what it is, what it means for CxOs, and how it will impact the data centre space.
I welcome your thoughts on HPC in the comments below. Or feel free to connect with me on LinkedIn or on Twitter to start a conversation about how HPC could be used in your organisation.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)