What is Cloud High Performance Computing?
High Performance Computing
High performance computing (HPC) is one of the most impactful technologies used by scientists, researchers, and engineers to develop today’s most advanced and most pressing innovations. HPC often describes the hardware and software tools used to digitally model or simulate new products and how they perform in the physical world. Since the mid-1900s, the invention of powerful computers combined with new engineering techniques has transformed the way advanced technologies, everyday products, and even city infrastructure are designed, optimized, and manufactured.
For decades, HPC was used mostly by large organizations with the significant financing and large IT teams needed to support complex supercomputers and broad stacks of hardware and software. With cloud computing infrastructure, any company has access to virtually limitless combinations of specialized hardware from anywhere with on-demand subscription models, significantly lowering the barriers to entry to these transformational technologies.
HPC Shifts to the Cloud
Cloud HPC has changed the way companies provide resources for research and development initiatives because of the agility and flexibility it offers in deployment and management. Compared to on-premises supercomputers, HPC clusters, and workstations, cloud HPC can be procured and deployed faster and can be easily modified based on the needs of the organization at any given moment. Cloud HPC eliminates the need for costly upfront capital and time investments to get started on strategic and time-sensitive projects. Today, approximately 70% of organizations surveyed have integrated cloud into their HPC practices, with over 50% reporting consistent usage (Source: 2022 State of Computational Engineering Report).
As global cloud infrastructure has matured in scale and capability, HPC practitioners, IT managers, and business leaders alike have discovered additional benefits of adopting cloud HPC. From improved collaboration and user experience to enhanced reporting and management, cloud HPC can improve end-user productivity, decrease operational risks, and ultimately lead to faster time-to-market. Organizations’ HPC hardware and software portfolios are becoming increasingly specialized and diverse, leading many to adopt multicloud strategies that continuously optimize application workloads to increase performance and/or decrease costs. With the high demand for computer-aided engineering and the explosion of new AI/ML techniques, technology leaders are turning to cloud HPC to de-risk their operations and ensure their most important projects have the hardware they need to succeed.
Business leaders who are considering strategic investments in computing should consider cloud HPC to optimize overall operational efficiency. Similar to how cloud has transformed many other aspects of enterprise technology, cloud HPC enables instant access, monitoring, governance, and forecasting, all from a single pane of glass in a browser anywhere in the world. This agility and flexibility are a compelling competitive advantage that will empower new discoveries for the organizations that adopt cloud-first HPC strategies.
Common HPC Use Cases for R&D and Engineering
High performance computing gives engineers, scientists, and researchers new methods to solve real-world problems digitally, reducing the need to build costly prototypes for physical testing. The use of computation-intensive analysis and simulation – often referred to as computational science and engineering or computer-aided engineering – has expanded to nearly every industry, from aerospace and defense, automotive, earth science, energy, life sciences, and manufacturing to many other industries that involve making physical goods. Computational science and engineering is likely to keep growing into new use cases as practitioners develop applications that lead to new discoveries at faster speeds. HPC is often associated with the following use cases from the industries mentioned above:
Designing and building new aircraft for transportation or space exploration involves using HPC to optimize nearly every component. From maximizing lift and propulsion to minimizing drag and weight, modern R&D efforts are focused on making air travel safer and more efficient so that it can be accessible to more people. HPC is commonly used to digitally test everything from aerodynamics, structural rigidity, and weight to launch trajectories (ballistics), making the odds of successful missions higher than ever. Current trends in aerospace research and development include vertical take-off and landing, urban air mobility, electrification, super- and hypersonic flight, low-boom designs, and autonomous systems. In recent years, the commercialization of space travel and space-based industry has become more efficient than ever due to increased HPC access for digital simulation. As faster speeds and sustainability take priority, aerospace HPC will undoubtedly play a role in developing new vehicle propellants and materials. Launching satellites and other space infrastructure has emerged as a major focus for R&D to make missions to low earth orbit (LEO), the moon, and Mars not only feasible but also economical and safe.
Few consumer products receive as much fine-tuned R&D and engineering as today’s cars, trucks, and other wheeled vehicles. With billions of vehicles on the road globally, optimizing automobile performance and efficiency is critical to staying competitive in today’s market. HPC use is ubiquitous among auto manufacturers and suppliers in the race to build better drivetrains, chassis, safety systems, and new electrified and autonomous systems that make driving more comfortable, safer, and environmentally cleaner. A common example of automotive HPC is the use of computational fluid dynamics (CFD) to improve engine combustion; it is even used for aesthetics and creature comforts, such as optimizing body and wheel curvatures for visual appeal and aerodynamics. Even the acoustic implications of various materials and methods of injection molding and casting are tested digitally before mass production begins. As vehicles become more autonomous and connected, engineers are simulating the many sensors, antennae, and semiconductors needed to navigate safely through complex environments. Chemical reactions for battery power sources can be tested along with thermodynamic properties to prevent overheating and potentially expensive recalls due to defects. As more new entrants begin testing production vehicles, there is increasing use of finite element analysis (FEA), also called finite element modeling (FEM), for structural analysis in crash tests conducted virtually. This can reduce the need to buy or rent expensive equipment to crash a perfectly functional vehicle, ultimately saving costs and likely saving lives.
Global demand for energy has steadily increased over the last 100 years, and as more countries rapidly modernize, we will need new innovations in power generation, storage, and distribution. Computing has wide implications in the energy sector, with goals ranging from improving the methods used to find and extract fossil fuels to improving the efficiency of our electrical grids. Before modern computing was introduced to geological surveying, oil-field discovery was invasive, whereas modern 3D seismic modeling techniques leave minimal impact and are far more precise. With the majority of global energy production and consumption coming from fossil fuels (including coal, petroleum, natural gas, and other gases), engineers have been hard at work to minimize environmental externalities and maximize the output of these systems, including on-shore and off-shore oil extraction. Undoubtedly, the need to develop new sources of cleaner energy has led to deploying HPC energy solutions for developing wind, wave, solar, and new sources of energy like small-scale nuclear or fusion. As we shift to new sustainable and renewable sources, these energy systems also need to be optimized for periods of volatility and for energy storage. Many common weather models used by meteorologists and scientists are now open source and available to help energy companies (and many other industries) better predict and respond to weather volatility. Whether engineers need to simulate physics or chemical energy reactions, a wide range of software is available to virtually build and test new solutions for a cleaner tomorrow.
One of the most notable examples of innovation acceleration can be seen in the impact of HPC on products such as pharmaceuticals, medical devices, and personalized medicine through the study of genomics. Life sciences and its many related fields of research and development have generated massive amounts of data, leading to new possibilities for therapies, vaccines, and drugs that can drastically improve patient outcomes. Scientists and researchers rely on HPC in life sciences more than ever to process all this data and better understand the human body down to an individual’s unique biology. With access to HPC on-demand in the cloud, organizations and research studies can easily conduct simulations and modeling at a cellular and molecular level to predict how diseases and therapeutics will affect the human body. Common analyses that benefit from HPC include genomic sequencing, molecular dynamics, pharmacokinetics, fluid dynamics, crystallization, protein folding, and computational and quantum chemistry. As a result of this increased access to flexible cloud resources, leading pharmaceutical companies are able to accelerate both drug discovery and time to market by decreasing time to trials. Through the use of digital simulation, medical and pharmaceutical companies can also increase R&D efficiency and effectiveness by reducing the number of failed trials and ineffective drug formulations. With digital transformation initiatives underway at many hospitals, digital record keeping and computational research (bioinformatics) will increasingly rely on flexible and secure resources to meet their growing scale. By taking advantage of the latest in specialized cloud capabilities, doctors and researchers will be able to continue discovering life-saving solutions previously out of reach.
Industry 4.0 has brought about significant changes in the way modern product companies and their suppliers bring new products to market. From smart factories to connected devices, digitization at every level of the product life cycle gives designers and engineers a digital thread to track critical product characteristics. This digital thread often begins long before the first prototype is ever created. This is possible through the use of HPC to build and test digital prototypes down to the smallest component parts, so that when production begins companies have confidence that their products will perform as expected. This approach can be seen across consumer packaged goods (CPG), heavy industry, electronics, and nearly every kind of product that needs to be produced in large quantities or is costly to manufacture. Engineers often conduct simulations with multi-physics, discrete element (DEM), fluid dynamics (CFD), and finite element methods. These specialized analyses provide a better understanding of a given product’s structural integrity, thermal management, and performance (or failure), and help predict how manufacturing processes can be automated. As new manufacturing techniques come online, like additive manufacturing (3D printing), digital twins, and the internet of things (IoT), the use of HPC in manufacturing is increasing to predict where investments should be made to ensure product quality, safety, and factory efficiency. It’s important to note that manufacturing companies uniquely benefit from cloud HPC because their computing needs are diverse, requiring flexible access to specific hardware and software for large teams and projects. Aside from building better products that improve customer satisfaction and reduce warranty risk, companies are also examining how to make their processes more sustainable, safe, and efficient.
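As a toy illustration of the finite element methods mentioned above, the sketch below assembles and solves a one-dimensional, two-element bar model in plain Python. The stiffness and load values are arbitrary examples, not data from any real product analysis; production FEA codes handle millions of elements in three dimensions.

```python
# Minimal 1D finite element sketch: a bar of two equal elements, fixed at
# one end and pulled with a force F at the free end. Illustrative only.

def solve_bar(n_elems, k, force):
    """Assemble the global stiffness matrix for a 1D bar of n_elems equal
    springs (stiffness k each), fix node 0, apply `force` at the free end,
    and solve K u = f by Gauss-Jordan elimination."""
    n = n_elems + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):  # scatter each 2x2 element matrix into K
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    # Apply the boundary condition u0 = 0 by reducing to free nodes 1..n-1
    A = [row[1:] for row in K[1:]]
    f = [0.0] * (n - 1)
    f[-1] = force
    # Naive elimination without pivoting (fine for a sketch this small)
    for i in range(len(A)):
        p = A[i][i]
        A[i] = [x / p for x in A[i]]
        f[i] /= p
        for j in range(len(A)):
            if j != i and A[j][i] != 0.0:
                m = A[j][i]
                A[j] = [a - m * b for a, b in zip(A[j], A[i])]
                f[j] -= m * f[i]
    return [0.0] + f  # nodal displacements, including the fixed node

# Two springs in series give u = [0, F/k, 2F/k]
print(solve_bar(2, 1000.0, 50.0))  # [0.0, 0.05, 0.1]
```

The same assemble-then-solve pattern underlies real structural solvers; the cost of the solve step is what drives demand for HPC-scale hardware.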
Different Options for Deploying Cloud HPC
When deciding on a path to cloud HPC, it is important for IT and business unit decision-makers to understand the motivations and desired outcomes along with the possible deployment models. While the typical motivations for cloud transformation include flexibility, choice, and simplification, many of the offerings available fail to take full advantage of the potential of the cloud. The following four models outline different ways to get started with cloud – however, buying committees beware: they are not all created equal.
Option 1 – Do-it-yourself “lift and shift” Cloud HPC Approach:
Organizations that are accustomed to making investments in on-premises HPC “clusters” may approach cloud strategies in a similar way. This includes purchasing a specific allotment of static compute resources, but instead of physical hardware, they are now utilizing (“lifting and shifting” to) public cloud computing infrastructure. The amount of capacity purchased is based on an upfront estimate of compute needed over a longer period of time. DIY lift-and-shift customers merely aim to recreate and directly manage the same technology stack they once had on-premises, which causes long lead times and complexity when building in a new environment.
Cloud HPC decision-makers and administrators typically bring with them legacy metrics like optimizing for high utilization, which recreates the same scarcity challenge of on-premises systems that forced end users to wait in long queues to run their workloads. In an effort to control costs, organizations seek long-term commitments, often with individual CSPs, without considering the potential benefits of optimizing cost-performance across multiple CSPs. As specialized hardware options have exploded in recent years, multi-cloud HPC customers have been able to realize 30% more value quarter over quarter by continuously finding more efficient hardware-software combinations.
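The continuous optimization described above can be framed as a simple selection problem: given benchmarked cost and runtime figures for candidate instance types across clouds, pick the cheapest option that still meets a deadline. A minimal sketch follows; the catalog names and numbers are entirely hypothetical and do not reflect real CSP pricing.

```python
# Illustrative multi-cloud cost-performance selection. Each candidate
# carries a benchmarked runtime for the workload and an hourly price.

def best_option(candidates, max_hours):
    """Return the lowest total-cost candidate whose runtime meets the
    deadline, or None if no candidate is feasible."""
    feasible = [c for c in candidates if c["hours"] <= max_hours]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c["price_per_hour"] * c["hours"])

# Hypothetical catalog of hardware-software combinations across clouds
catalog = [
    {"name": "cloudA.hpc64", "price_per_hour": 3.20, "hours": 10.0},
    {"name": "cloudB.c2-60", "price_per_hour": 2.80, "hours": 14.0},
    {"name": "cloudC.gpu8",  "price_per_hour": 9.50, "hours": 3.0},
]

pick = best_option(catalog, max_hours=12.0)
print(pick["name"])  # cloudC.gpu8: 28.50 total beats cloudA.hpc64 at 32.00
```

In practice a platform would rerun this selection as prices, hardware generations, and benchmark results change, which is what makes the optimization continuous rather than a one-time sizing exercise.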
Other challenges facing DIY customers include the ongoing maintenance of diverse software portfolios and building additional layers of reporting and controls not provided natively by cloud providers. Enterprise teams that need performance analytics and security reporting will most likely need to build these components from scratch to meet best practices. After weighing the likely downsides of added complexity against the loss of many cloud upsides like flexibility, the DIY lift-and-shift model may prove to be more work than it’s worth for most organizations.
Option 2 – ISV Cloud HPC Approach:
ISV Cloud HPC Advantages
Scientists, researchers, and engineers have long depended on computation-intensive simulation and modeling software. In an effort to increase the accessibility of this software, a few of the leading independent software vendors (ISVs) have partnered with infrastructure providers to provide packages with both the software licensing and the necessary compute to run workloads on-demand. These subscription models deliver simplicity for software users with pricing flexibility.
ISV Cloud HPC Disadvantages
The disadvantages of ISV cloud offerings are quickly evident for organizations with growing teams and/or mature HPC practices. A key trend that makes this model of cloud HPC infeasible is that the number of software packages in use is growing rapidly, leading to a diverse mix of commercial (ISV), open-source (free), and custom (proprietary) software. The typical organization with HPC practitioners runs multiple software packages, with 67% of teams running 2–5 and 12% running 6 or more (Source: State of Computational Engineering Report).
Additionally, many of these ISV offerings are locked into specific CSP partnerships, giving end users little choice or visibility over the underlying infrastructure and its cost economics.
Option 3 – Managed-Cloud HPC Approach:
This category of cloud HPC is a bucket of various managed-cloud offerings by which customers can outsource HPC infrastructure management while maintaining a reserved or private infrastructure model. Underlying these offerings is often a bare-metal private cloud architecture that can give customers a perception of always-dedicated infrastructure. The reality is that these offerings are typically fixed-capacity contracts and, again, lack the true benefits of cloud flexibility. Some providers offer single-cloud and occasionally dual-cloud offerings but lack the platform intelligence to dynamically shift to the most efficient cloud or compute architecture. In many cases, managed cloud HPC services are designed only for specific applications or use cases, which can make scaling up or expansion infeasible.
Option 4 – Cloud Automation Platform Approach:
Deploying a cloud HPC system that can adapt and grow with the needs of a business requires a solution that takes advantage of the best parts of cloud while abstracting away its complexities. This is why Rescale believes in delivering a purpose-built HPC solution designed for the cloud. To accomplish this, the various teams at Rescale have assembled an end-to-end solution to give customers in any industry access to the best HPC technologies available.
To define HPC Built for the Cloud, Rescale looks at 5 key attributes of strategic cloud HPC:
- User-centric – a solution that brings the resources to the user on-demand, always in context with project and business goals in mind. This also means simplifying the user experience to give more engineers and scientists access through flexible job submission options like a graphical user interface (GUI), application programming interface (API), and command line interface (CLI).
- Unlimited – a solution that enables any application to be run on any architecture (be it multi-cloud or hybrid/on-premises).
- Connected – a solution that unifies islands of analysis, fosters collaboration, and merges best-practices.
- Intelligent – a solution that recommends optimal software-hardware combinations and best-fit architectural configurations to match business goals.
- Automated – a solution that eliminates complexity and repetitive administrative tasks while accelerating end-user workflows and minimizing operational risks in budgets and security.
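To make the flexible job submission attribute concrete, here is a hypothetical sketch of building a job request for a generic cloud HPC REST API. The field names, software and hardware codes, and endpoint convention are invented for illustration; consult your platform’s actual API reference before writing a real client.

```python
# Hypothetical job-submission payload builder for a generic cloud HPC
# REST API. Every field name and code below is invented for illustration.
import json

def build_job(name, software, hardware, cores, command):
    """Assemble a job-submission payload as a JSON string."""
    if cores < 1:
        raise ValueError("core count must be positive")
    payload = {
        "name": name,
        "analysis": {"software": software, "command": command},
        "hardware": {"type": hardware, "cores": cores},
    }
    return json.dumps(payload)

job = build_job(
    name="wing-cfd-baseline",
    software="openfoam",           # hypothetical software code
    hardware="compute-optimized",  # hypothetical hardware tier
    cores=64,
    command="simpleFoam -parallel",
)
# In a real client this string would be POSTed to the platform's jobs
# endpoint with an API token; here we just inspect the payload.
print(json.loads(job)["hardware"]["cores"])  # 64
```

The same payload could be produced by a GUI form or a CLI flag parser, which is the point of offering all three interfaces over one API.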
In summary, a built-for-the-cloud approach starts with the goal of optimizing R&D speed and efficiency, focusing on the needs of the business rather than merely the infrastructure being utilized. This approach helps organizations align with the objectives of digital transformation, where the goals are to streamline how teams interact, eliminate bottlenecks, and drive new innovations.
If you want to learn more about our HPC Built for the Cloud approach, download the full eBook now.
High Performance Computing (HPC) vs. Supercomputing – Is There a Difference and Which is Best for Businesses?
The terms ‘high-performance computing’ and ‘supercomputing’ are often used interchangeably – but are they really the same thing? It is important to consider their definitions and approaches before getting started.
Supercomputing is the process of calculating or simulating real-world scenarios with the use of a supercomputer. A supercomputer consists of tens of thousands of processors, all of which work together to complete large and complex tasks that would be too difficult, costly, or time intensive to perform in the real world.
Supercomputers are used for a wide range of simulations in a variety of fields, including weather forecasting, molecular dynamics, and physical simulations, among others. Large academic and government-funded organizations often build supercomputers to conduct large-scale research projects. Supercomputers require significant upfront investment in components and facilities, expertise in IT and facilities management, and time spent on implementation and ongoing maintenance.
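A classic way to reason about why tens of thousands of processors are ganged together is Amdahl’s law, which bounds the achievable speedup by a workload’s serial fraction. The sketch below uses an illustrative 5% serial fraction; real workloads vary widely.

```python
# Amdahl's law: speedup <= 1 / (s + (1 - s) / N), where s is the
# fraction of the job that cannot be parallelized and N is the
# number of processors. The 5% figure below is just an example.

def amdahl_speedup(serial_fraction, n_procs):
    """Upper bound on speedup for a job with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for n in (10, 100, 10000):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

Even at 10,000 processors a 5% serial fraction caps the speedup near 20x, which is why supercomputer workloads are engineered to keep the serial fraction as small as possible.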
High Performance Computing
High-performance computing is a broad term that may include supercomputers but typically refers to aggregated clusters of high-performance computers or servers that contain specialized components, often assembled for specific applications or types of tasks.
HPC is often used to tackle the same kinds of computational problems as supercomputing; however, HPC is more commonly used because it can be carried out on anything from an engineer’s workstation up to a large data center. Across industries, technology providers are developing new tools to meet the growing HPC needs of engineers, scientists, and researchers. The recent proliferation of specialized software and hardware is driving businesses to devise strategies to simplify complex technology stacks and maximize their efficiency.
Making the Right Choice for Your Business
As companies depend on increased computational power to gain a competitive edge, technology leaders should choose solutions that maximize the value of existing resources like talent, software, and intellectual property. Today, most businesses choose an HPC approach because it can be purpose-built and right-sized for the needs of the business, whereas supercomputers typically feature static architectures, significant investment, and long time horizons. Cloud HPC and HPC-as-a-Service offerings have lowered the barriers to entry for organizations to deploy advanced computing operations that were previously too complex and/or too cost prohibitive. Having increased access to more computational engineering resources enables businesses to accelerate their R&D projects and even meet budgetary goals.