- Editor's note: This post is the introduction to a six-part blog series. Read the full report here.
1. From Hardware-centric to User-centric
Traditional HPC is Hardware-centric
The traditional HPC approach continues to focus on maximizing hardware utilization because IT's goal has typically been to ensure that the use of the hardware justifies its cost. With finite computing capacity, IT must ensure hardware resources are not wasted. A common tool businesses use to maximize on-premises hardware utilization is a scheduler, but schedulers don't provide the easy interface to hardware that engineers constantly need, and engineers end up competing for hardware availability.
HPC Built-for-the-Cloud is User-centric
Today, the most valuable asset in an organization is the engineer or researcher. The productivity of users – not the utilization of three-year-old hardware – has a much greater strategic impact on the business. Loss of engineering time can mean slower time to market for new products, or less performant products, and can ultimately mean the difference between winning and losing in the market.
In a user-centric model, everything that happens in the HPC environment is based on the user’s intentions. The HPC environment should automatically provide any policies or data that would help the user make decisions.
For example, an aerospace engineer working on Project Phoenix may be tasked with a new aircraft component design with specific aerodynamic and structural objectives. The engineer will need specific data on existing designs from which to start, as well as a set of software to perform the analysis. In this scenario, the proper R&D Context means the software (including version and licensing) is instantly available, templates applying best practices are instantly accessible, and the data needed from previous work is already in their shared workspace.
The proper Business Context for the user is Project Phoenix's budget and prioritization. For example, Project Phoenix may be so critical to the business that the engineer is allowed to use the most performant architectures to optimize time-to-solve, run over budget, and use funds allocated to other, lower-priority projects. The engineer should not need to escalate and ask for resources because the HPC environment is aware of business priorities.
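Priority-aware resource authorization like this can be sketched in a few lines. Everything here – the project names, budgets, and the rule that only top-priority projects may borrow from lower-priority allocations – is an invented illustration of the idea, not any vendor's actual logic or API:

```python
# Hypothetical sketch: resource requests resolved from business priority
# rather than manual escalation. All names and rules are assumptions.

projects = {
    "phoenix": {"priority": 1, "budget_left": 500.0},   # highest priority
    "archive": {"priority": 5, "budget_left": 2000.0},  # low priority
}

def authorize(project, requested_cost):
    """Approve a spend request; critical projects may overrun their own
    budget by borrowing from lower-priority projects' allocations."""
    p = projects[project]
    if requested_cost <= p["budget_left"]:
        return True
    if p["priority"] == 1:  # only top-priority projects may borrow
        borrowable = sum(q["budget_left"] for name, q in projects.items()
                         if name != project and q["priority"] > p["priority"])
        return p["budget_left"] + borrowable >= requested_cost
    return False

print(authorize("phoenix", 1200.0))  # True: borrows from lower priority
print(authorize("archive", 2500.0))  # False: no overrun allowed
```

Because the budget and priority rules live in the environment itself, the request is settled instantly instead of through an escalation chain.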
The IT & HPC Context speaks to applying policies consistent with company, security, or regulatory objectives. Perhaps Project Phoenix data should never leave a particular geography, should not be allowed to be downloaded, or should not be shared outside the team. A user-centric approach would automatically apply these policies so that the user doesn't need to track whom they can and cannot share data with.
Lastly, the Technology & Industry Context is an understanding of how new computing architectures are evolving. Perhaps a new architecture from Intel, Nvidia, or AMD has become available and will dramatically change the economics of simulations being performed. The user-centric HPC environment should perform full stack optimizations factoring in both performance and cost of the new hardware, as well as how using it would change the economics of software licensing.
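The economics argument above can be made concrete with a toy cost model: faster hardware may cost more per hour, but by shortening the solve it also shortens the hours of software licensing consumed. All architecture names, prices, and speedups below are made-up assumptions for the sketch:

```python
# Illustrative full-stack cost comparison across candidate architectures.
# Figures are invented; the point is that license cost scales with
# wall-clock time, so faster hardware can lower total cost.

architectures = [
    # (name, $ per core-hour, speedup vs. baseline, license $ per hour)
    ("baseline-x86", 0.05, 1.0, 10.0),
    ("new-gpu",      0.40, 6.0, 10.0),
    ("new-x86",      0.07, 1.6, 10.0),
]

BASELINE_HOURS = 24.0  # assumed solve time on the baseline architecture
CORES = 64

def total_cost(core_hour_rate, speedup, license_rate):
    """Hardware cost plus time-based license cost for one simulation."""
    hours = BASELINE_HOURS / speedup
    return hours * (core_hour_rate * CORES + license_rate)

for name, rate, speedup, lic in architectures:
    print(f"{name}: ${total_cost(rate, speedup, lic):,.2f}")

best = min(architectures, key=lambda a: total_cost(a[1], a[2], a[3]))
print("recommended:", best[0])  # the GPU wins despite a higher hourly rate
```

With these assumed numbers the premium architecture is cheapest overall, because the six-fold speedup cuts license hours more than the higher hourly rate adds – exactly the kind of full-stack trade-off a user-centric environment should evaluate automatically.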
Examples of running Ansys Fluent simulation software on cloud infrastructure
Rescale’s approach is to automate the entire stack to make the infrastructure essentially invisible, ensuring the optimum architecture is always used consistent with IT policies. Engineers and researchers can simply use a browser, upload the software input files, choose or customize recommended hardware, and submit the job in a few clicks, just as with any SaaS-like application. The time saved allows them to focus on important engineering tasks.