5 Winning Strategies to Accelerate Engineering: Cloud HPC is Unlimited (Part 2 of 5)

  • Editor's note: This post is part 2 of a 5-part blog series. Read the full report here.

2. The Five Winning Strategies: From Inflexible to Unlimited

Traditional HPC is Inflexible

Traditional HPC approaches lead to tightly coupled stacks, with performance optimized for specific software/hardware combinations. This tight coupling limits access to different hardware architectures, software packages (or software versions), cloud providers, and cloud computing regions, as well as the choice between physical, virtual, or container-based infrastructure. Even if HPC/IT organizations want flexibility, providing it with traditional HPC approaches would be cost-prohibitive.

HPC Built for the Cloud is Unlimited

Enterprise engineering organizations typically rely on several applications for high performance computing. The 2021 Big Compute State of Cloud HPC Report found that nearly 40% of organizations use six or more software vendors. Beyond commercial ISV packages, a significant fraction of computational science and engineering organizations use in-house developed models or open source software. The growing use of AI/ML methods in engineering places additional demands on the types of tools that need support.

From an infrastructure perspective, architecture options in the cloud continue to multiply. Chip types such as x86, GPU, Arm, RISC-V, and FPGA each provide unique options for optimizing the performance of computational engineering workloads. Deployment models also vary, from virtual infrastructure (the common model for cloud providers) to containers or even bare metal. The options best suited to each workload may come from architectures offered by different cloud providers.

As the use of HPC grows, HPC organizations will need to support an almost unlimited set of options in order to avoid becoming a bottleneck. These options span software, deployment models, cloud providers, and even geographic regions.

Any Architecture – Choosing the right architecture (e.g., high-speed interconnect, larger memory, or high-clock-rate GPU) can have a significant impact on the cost and time-to-solve of engineering projects.

Any Application – Engineering and R&D organizations typically use software from several ISVs; popular vendors include Ansys, Cadence, Autodesk, and Synopsys. Engineers may choose from hundreds of applications available for a wide range of tasks.

Any Cloud Provider – Engineering teams can choose cloud providers based on need, taking advantage of the unique functionality and performance of each provider's architectures.

Physical, Virtual, or Container – While virtual machines are dominant in cloud computing today, containerization continues to gain traction. Bare metal remains common practice for many data center deployments.

A Broad Set of High-Performance Computing Architectures: CPU, GPU, Memory, Interconnect, and Storage

With Rescale’s intelligent built-for-the-cloud HPC platform, engineering and R&D users can run workloads of any system type, including bare metal, virtualized, or container-based, on the cloud infrastructure of their choice, whether private cloud, hybrid, or multi-cloud across public cloud service providers.

For workloads on bare metal, Rescale can delegate to the scheduler that manages those systems. By default, the Rescale platform runs fully virtualized in the cloud and supports all major cloud service providers (AWS, Azure, Google Cloud, and Oracle Cloud). Additionally, Rescale lets users run any container-based workload, which can be deployed automatically to any cloud infrastructure available on the Rescale platform.

The Rescale Cloud HPC Platform

Authors

  • Garrett VanLee

    Garrett VanLee leads Product Marketing at Rescale, where he works closely with customers on the cutting edge of innovation across industries. He enjoys sharing success stories, user research, and best practices from Rescale engineers, scientists, and IT professionals to help other organizations. Garrett is currently focused on how trends in HPC, AI, and cloud are converging to redefine modern R&D.

  • Edward Hsu

    Edward is responsible for product strategy, design, roadmap, and go-to-market, and for driving the commercial success of Rescale’s product portfolio. Prior to Rescale, Edward ran product and marketing at D2IQ (formerly Mesosphere), as well as product marketing at VMware. Earlier in his career, Edward worked as a consultant at McKinsey & Company and served as an engineering lead at Oracle’s CRM division. Edward holds master’s and bachelor’s degrees in Electrical Engineering and Computer Science from MIT, and an MBA from NYU Stern School of Business.
