- Editor's note: This post is the introduction to a six-part blog series. Read the full report here.
2. The Five Winning Strategies: From Inflexible to Unlimited
Traditional HPC is Inflexible
Traditional HPC approaches lead to tightly coupled stacks, with performance optimized for specific software/hardware combinations. This tight coupling limits access to different hardware architectures, software packages and versions, cloud providers, and cloud computing regions, as well as the choice between physical, virtual, and container-based infrastructure. Even when HPC/IT organizations want to be flexible with traditional HPC approaches, providing that flexibility is cost-prohibitive.
HPC Built for the Cloud is Unlimited
Enterprise engineering organizations often use several applications for high performance computing. The 2021 Big Compute State of Cloud HPC Report found that nearly 40% of organizations use six or more vendors. Beyond commercial ISV packages, a significant fraction of computational science and engineering organizations rely on in-house developed models or open source software. The increasing use of AI/ML methods in engineering places additional demands on the range of tools that need support.
From an infrastructure perspective, architecture options in the cloud continue to multiply. Chip types such as x86, GPU, Arm, RISC-V, and FPGA provide unique options for optimizing the performance of computational engineering workloads. Deployment models also vary, from virtual infrastructure (the common model for cloud providers) to containers or even bare metal. The option best suited for each workload may come from architectures offered by different cloud providers.
As the use of HPC grows, HPC organizations will need to support an almost unlimited set of options to avoid becoming a bottleneck. These options span software, deployment models, cloud providers, and even geographic regions.
Any Architecture – Choosing the right architecture (e.g., high-speed interconnect, larger memory, or a high-clock-rate GPU) can have a significant impact on the cost and time-to-solve of engineering projects.
Any Application – Engineering and R&D organizations typically use software from several ISVs; popular vendors include Ansys, Cadence, Autodesk, and Synopsys. Engineers may choose from hundreds of applications available for a wide range of tasks.
Any Cloud Provider – Engineering organizations can choose cloud providers to take advantage of the unique functionality and performance of different architectures, based on need.
Physical, Virtual, or Container – While virtual machines are dominant in cloud computing today, containerization continues to gain traction. Bare metal remains common practice for many data center deployments.
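To make the container option above concrete, the sketch below shows how a solver can be packaged so that the same image runs unchanged on any infrastructure that can pull it. The solver name, input files, and paths are hypothetical placeholders for illustration, not references to a real product or to Rescale's tooling.

```dockerfile
# Hypothetical sketch: packaging a solver and its inputs into a portable image.
# "mysolver" and "case/" are illustrative placeholders only.
FROM ubuntu:22.04

# Copy the solver binary and the simulation case into the image.
COPY mysolver /usr/local/bin/mysolver
COPY case/ /work/case/

WORKDIR /work/case

# The same entrypoint runs identically on any VM, cluster, or cloud region
# whose container runtime can pull this image.
ENTRYPOINT ["mysolver", "-input", "run.cfg"]
```

Because the image bundles the solver with its dependencies, it can be built once, pushed to a registry, and pulled by any provider's container runtime, which is the portability property that makes containerization attractive across clouds.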
A Broad Set of High-Performance Computing Architectures
With Rescale’s intelligent built-for-the-cloud HPC platform, engineering and R&D users can run workloads on any system type, whether bare metal, virtualized, or container-based, on the cloud infrastructure of their choice: private cloud, hybrid, or multi-cloud across public cloud service providers.
For bare-metal workloads, Rescale can delegate to the scheduler that manages those systems. By default, the Rescale platform runs fully virtualized in the cloud and supports all major cloud service providers (e.g., AWS, Azure, Google, Oracle). Additionally, users can run any container-based workload, which can be deployed automatically to any cloud infrastructure available on the Rescale platform.