
Bringing the Power of AI to Optimize Engineering Simulations in the Cloud

How do you choose the best combinations of compute resources to optimize complex HPC simulations?

AI Recommendations Are Showing Us the Way Forward!

Complex simulations that rely on high-performance computing (HPC) are becoming harder to run well as hardware and software options proliferate. For many of us, cloud services can simplify the infrastructure stack, but the sheer number of possible configurations makes finding the best combination of price and performance fiendishly complicated.

In the past, life was simple; on-premises HPC offered relatively few choices. Cores, I/O bandwidth, memory, and storage were all fixed.

With my background in automotive CFD and motorsport engineering, I tend to think of on-premises HPC in the same way as owning, rather than renting, a car. Just weeks after you purchase the best car you can afford, someone brings out a better, faster, more feature-packed model. It’s the same with on-premises HPC: you select processors, memory, and more to create the highest-capacity stack you can afford… and the next day a new chipset hits the market.

As a counterpoint, HPC in the cloud benefits from continuous improvement. Unlike on-premises systems that are refreshed every three to five years, cloud providers compete to deliver the greatest capabilities by adopting the latest chipsets as soon as they reach the market. Back to motorsport: running HPC in the cloud is like driving a car that is constantly tweaked and upgraded. New power unit? Sure! Enhanced fuel flow? Of course!

But amid all this churn, how do you select the best solutions to optimize your simulation workload? You may run a standard workflow that delivers good results, but are you aware of just-launched chipsets that could cut runtimes? Workflow optimization is one of the hottest topics at HPC conferences right now, and with good reason.

Using AI to Optimize Your HPC Simulations

Here’s where AI steps in. Running a simulation generates huge quantities of metadata about the way jobs run – core utilization, memory consumption, I/O performance, and more – and the data from thousands of simulations provides the perfect training resource for AI engineering models. Trained on results from the past 10 years of HPC simulations run in the cloud, AI can make intelligent recommendations and optimizations for proposed workflows.
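
To make that idea concrete, here is a minimal, illustrative sketch of how historical job telemetry could drive a core-type recommendation: group past runs by software and core type, then suggest the core type with the best observed runtime. This is not Rescale’s actual model – all names and numbers below are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical telemetry: (software, core_type, runtime_hours).
# In practice this would come from thousands of past simulation runs.
history = [
    ("cfd_solver", "cpu_gen3", 9.5),
    ("cfd_solver", "cpu_gen4", 6.8),
    ("cfd_solver", "gpu_a",    4.1),
    ("fea_solver", "cpu_gen3", 3.2),
    ("fea_solver", "cpu_gen4", 2.9),
]

def recommend_core_type(software: str) -> str:
    """Recommend the core type with the lowest mean runtime for this software."""
    runtimes = defaultdict(list)
    for sw, core, hours in history:
        if sw == software:
            runtimes[core].append(hours)
    if not runtimes:
        raise ValueError(f"no history for {software}")
    return min(runtimes, key=lambda core: mean(runtimes[core]))

print(recommend_core_type("cfd_solver"))  # -> "gpu_a" on this toy data
```

A production system would of course model many more signals (mesh size, solver version, interconnect, memory pressure) rather than a simple per-core average, but the principle is the same: past telemetry predicts future performance.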

In addition, with multiple new and highly specialized chipsets coming to market, it has become nearly impossible to keep track of the latest options. As cloud providers add these offerings to their services, AI can examine the claimed capabilities and propose potential new options.

The result is that engineers like me do not need to track every chipset, GPU, or accelerator card release. AI will recommend new core types suited to the job you are running. For example, AI can suggest that a specific workload and software combination tends to produce faster results on a particular core type, and you can choose whether to follow that suggestion.

Now, in motorsport simulations, time is the critical factor: the key question is always how fast you can deliver results, because the race isn’t going to wait another two days. Slow simulations can impose very undesirable delays, simply because there are so many manufacturing steps to complete before your proposed modifications reach the physical vehicle. In the past, the answer was often to accelerate the simulation by compromising on the number of data points, which in turn could adversely affect design decisions.

With AI optimization assistance, simulations can run more data points or run faster – and often both. If I can run jobs more quickly, I will use the time to run more jobs. My total compute and software expense is likely to be around the same, but better optimization has enabled me to increase throughput.

Using Rescale to Automate Running Any HPC Workload

Disclosure alert: Rescale, where I work as a Solutions Architect, has built a cloud-based platform that uses AI to automate running any HPC workload on the best possible hardware-software configuration. Rescale automates job orchestration and resource management across multiple cloud vendors, and enables IT and HPC managers to review and select optimal workload configurations in a couple of clicks based on their business goals (e.g., solve time or cost savings). Rather than learning the intricacies of each provider’s offerings, engineers and scientists get AI-powered recommendations across a vast array of technologies, letting them focus on their day jobs rather than job setup, benchmarking, and troubleshooting.
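
As an illustration of what “select by business goal” means in practice, here is a simplified sketch – not Rescale’s API, and with invented configurations and numbers – that picks a candidate configuration by minimizing either solve time or cost:

```python
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    est_runtime_hours: float  # predicted from benchmark/telemetry data
    est_cost_usd: float       # predicted node-hours x hourly price

# Hypothetical candidate hardware-software configurations.
candidates = [
    Config("64x cpu_gen4", est_runtime_hours=6.8, est_cost_usd=520.0),
    Config("8x gpu_a",     est_runtime_hours=4.1, est_cost_usd=780.0),
    Config("32x cpu_gen3", est_runtime_hours=9.5, est_cost_usd=390.0),
]

def select(goal: str) -> Config:
    """Pick the candidate that best matches the stated business goal."""
    if goal == "solve_time":
        return min(candidates, key=lambda c: c.est_runtime_hours)
    if goal == "cost":
        return min(candidates, key=lambda c: c.est_cost_usd)
    raise ValueError(f"unknown goal: {goal}")

print(select("solve_time").name)  # -> "8x gpu_a"
print(select("cost").name)        # -> "32x cpu_gen3"
```

Note how the two goals pull toward different hardware: the fastest option is rarely the cheapest, which is exactly why the business goal has to be an explicit input to the selection.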

The task of selecting optimal configurations will only become more challenging as the market diversifies and specializes. For example, at the time of writing, new combined CPU and GPU chipsets promise super-high throughput, yet there is relatively little software benchmark data for them. Every cloud provider will bring these new technologies on board, creating a new sweet spot to add to the mix.

In addition to finding and recommending these configuration sweet spots, Rescale’s intelligence enables global cloud resource management: it automatically load-balances jobs so that users experience fast launch times, minimizes failed jobs, and directs workloads to wherever the selected cloud provider has the greatest available capacity, supporting business continuity.
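
To illustrate the capacity-aware routing idea – again a simplified sketch with invented provider and region names, not Rescale’s implementation – a scheduler could direct a job to the region of the selected provider with the most spare capacity:

```python
# Hypothetical snapshot of available capacity (node count) per provider region.
capacity = {
    ("provider_a", "us-east"): 120,
    ("provider_a", "eu-west"): 310,
    ("provider_b", "us-east"):  45,
}

def route_job(provider: str, nodes_needed: int) -> str:
    """Send the job to the provider's region with the most available nodes."""
    regions = {r: c for (p, r), c in capacity.items() if p == provider}
    region = max(regions, key=regions.get)
    if regions[region] < nodes_needed:
        raise RuntimeError("insufficient capacity; retry or fail over")
    return region

print(route_job("provider_a", nodes_needed=100))  # -> "eu-west"
```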

On this basis, we can see that simply moving workloads to the cloud is not enough. AI-powered optimization will be a necessary ingredient to get the most out of the cloud, especially as engineers create ever more complex simulations.

If you are interested in seeing Rescale’s AI optimization applied to your workloads, you can reach out to our team for a demo here.
