
Q&A: Insights into Simulation Data Management from SCALE

Simulation data experts explain how the rapidly expanding use of digital simulations is generating massive amounts of data that must be strategically managed

The rapidly expanding use of complex digital simulations to design and test new products is generating massive amounts of data that need to be strategically managed. 

In an effort to further automate and simplify how computational engineering teams use simulations and manage their data, Rescale has recently formed a partnership with SCALE, a leading simulation process and data management (SDM) company with strong roots in the German automotive manufacturing industry. 

Thanks to this partnership, SCALE’s software solution SCALE.sdm can now instantly access high performance computing (HPC) services in the cloud. With one click, engineers can automatically start new simulations that run on Rescale’s multi-cloud HPC infrastructure. 

Recently, we spoke with Marko Thiele, a product manager at SCALE with two decades of experience in SDM, to discuss the growing importance of simulation data management. 

What is SDM and what key challenges does it solve?

Marko Thiele: SDM stands for “simulation data management.” Some also use the term “SPDM” where the P stands for “process.” We still prefer to refer to it as “SDM,” since in our opinion this already includes the processes. That is a very important concept because what you are managing is not only the data but also the processes surrounding the simulation, both the preprocessing work and the post-processing analysis and management. 

The data part of SDM is typically file-based, usually solver input files. This data can describe 3D geometry or various physical parameters related to the simulation, such as material parameters, friction parameters, or timing parameters, really any kind of functional data needed to create the simulation.  

The process part involves the various processes surrounding the simulation. This starts with pre-processing, which usually involves changing the input parameters in some way to explore how the simulation output might change. You vary a parameter, such as the physical stress on an object, to see how it responds, testing the product under various conditions. 

Simulation processes also involve scripts that post-process the raw simulation result data by extracting key values, curves, pictures, videos, and so on, whatever you need from the simulation.  
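As a rough illustration of what such a post-processing script does, the sketch below extracts a couple of key values from raw solver output. The file format, column names, and values are hypothetical stand-ins, not taken from SCALE.sdm or any particular solver:

```python
import csv
import io

# Hypothetical raw solver output: time vs. force samples.
# In practice this would be read from a result file on disk.
raw_results = io.StringIO(
    "time,force\n"
    "0.00,0.0\n"
    "0.01,120.5\n"
    "0.02,340.2\n"
    "0.03,290.8\n"
)

def extract_key_values(results_file):
    """Reduce raw simulation output to a few key values for the SDM system."""
    rows = list(csv.DictReader(results_file))
    forces = [float(row["force"]) for row in rows]
    peak = max(forces)
    peak_time = float(rows[forces.index(peak)]["time"])
    return {"peak_force": peak, "peak_time": peak_time}

key_values = extract_key_values(raw_results)
print(key_values)  # {'peak_force': 340.2, 'peak_time': 0.02}
```

An SDM system would store these extracted values alongside the simulation they came from, so they remain searchable and traceable.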

The value of SDM is that all these processes and the resulting data are kept together with the source simulation data because the data and the processes are very tightly related.  

Why is “process” so important for simulation data management? 

Marko Thiele: Initially, teams focused only on the data part and neglected the process part, and that really proved to greatly limit the reproducibility of simulations. In other words, without all the process information, it was very hard for engineering teams to go back to a simulation and review how it was initially created.  

So six months or a year later, engineering teams couldn’t look at a simulation and understand how the team arrived at a given simulation result. You still had the input files, but you didn’t have all the process scripts anymore in the right versions to recreate a runnable simulation. Teams also had trouble finding new results from related tests with the simulation, as well as building new simulations based on the old data. 

So over time, engineering teams realized they needed to keep the processes close to the data and keep every version of the simulation data connected with the corresponding versions of the processes. 
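The idea of keeping data versions and process versions connected can be sketched in a few lines. This is a minimal illustration of the concept, assuming simple version labels; it is not SCALE.sdm's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationRun:
    """One run, recording the exact versions of its inputs AND its scripts."""
    run_id: str
    input_files: dict      # file name -> version label
    process_scripts: dict  # script name -> version label

# A registry of runs (a real SDM system would persist this in a database).
registry = {}

def record_run(run):
    registry[run.run_id] = run

record_run(SimulationRun(
    run_id="crash-frontal-0042",                       # hypothetical run
    input_files={"body.k": "v17", "materials.k": "v5"},
    process_scripts={"extract_peaks.py": "v3"},
))

# Months later, an engineer can look up exactly which script versions
# produced a given result and recreate a runnable simulation.
run = registry["crash-frontal-0042"]
print(run.process_scripts)  # {'extract_peaks.py': 'v3'}
```

The key design point is that the script versions are stored on the run itself, not looked up separately, which is what makes the run reproducible later.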

What are the trends driving the growing need for simulation data management? 

Marko Thiele: The need for SDM is driven by both the growing complexity of modern products, as well as the growing sophistication of simulations. This is certainly the case for our major automotive customers.  

Thanks to continually increasing computational power, simulations are becoming more complex and, hence, generating a lot more data to manage. Teams are also running many more simulations than in the past, making the tracking of all those versions increasingly challenging. 

For example, in the automotive industry many more parts of a vehicle are undergoing digital simulation testing. There are literally thousands of parts to track and test, and each part has to be modeled and then combined into larger systems of parts.  

It seems hard to imagine, but in the early days of digital simulations in the automotive industry, usually one engineer was in charge of running simulations for an entire vehicle. Now you have an entire team working on just the design of car seats or the suspension system, etc. 

All of these engineers need to bring their individual simulation data together. Each team needs to see what the other teams are doing at any point in time. One engineer should be able to easily use a simulation created by another engineer to explore other facets of the design. 

Another aspect of SDM is on the process side. There are certain very complex computer-aided engineering tasks that require highly specialized skills. Maybe only one or two engineers in a company have the skills to program these simulations. But by making scripts available in the SDM, everyone else can easily run these complex simulations. This democratizes and scales the process from just one or two engineers to hundreds of engineers. 

In what other ways is the use of computer-aided engineering evolving? 

Marko Thiele: Another trend that relates directly to our partnership with Rescale is regarding the new generation of engineers.  

Until recently, engineers needed to not only know how to use the tools to create digital models and run simulations, but also be technically adept enough to set up high performance computing batch jobs to run them. This required knowing things like how to connect to servers, submit a job to a queueing system, or boot up HPC cloud services. 
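For readers unfamiliar with that manual workflow, the steps might look roughly like the following, assuming a Slurm queueing system; the solver name, file names, and cluster address are illustrative, not from the article:

```shell
# 1. Write a batch job script describing resources and the solver command.
cat > crash_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=crash-frontal
#SBATCH --nodes=4
#SBATCH --time=08:00:00
srun my_solver -i crash_input.k
EOF

# 2. Copy input files to the cluster and submit the script to the queue
#    (the remote commands are shown here but not executed).
echo "scp crash_input.k user@hpc-cluster:/scratch/run42/"
echo "ssh user@hpc-cluster 'sbatch /scratch/run42/crash_job.sh'"
```

This is exactly the kind of per-run setup that a one-click integration aims to hide from the engineer.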

But engineers ultimately just want to focus on their work rather than spend their time setting up and maintaining HPC systems. And, given the widespread adoption of graphical user interfaces and low-code development tools, today’s engineers are accustomed to using modern, intuitive tools to automate repetitive tasks and reduce technical complexities. 

They just want to work on their simulation data. Submitting a job should be as easy as possible—one button to click on. 

And now they literally have that capability on the SCALE.sdm platform. Thanks to our partnership with Rescale, they can click one button to set up a batch job and run a simulation on the Rescale multi-cloud HPC platform. The engineer never needs to leave our software environment. And once the simulation concludes, all simulation result data is automatically transferred to our SDM system and the relevant post-processing tasks are executed automatically.  

This is clearly the future of computational engineering. Organizations like SCALE and Rescale are working to standardize, automate, and unify simulation processes and their data so R&D teams can focus on creating innovative products rather than spending their time chasing data, grappling with technical issues, or managing computing systems and software. This new partnership is a great next step in that evolution.

Learn more about the data challenges facing modern R&D teams.
Read the ebook “Solving the Data Challenges of Digital Engineering.”

Author

  • At Rescale Marketing, we're the driving force behind the seamless convergence of advanced technology and strategic marketing. Our team specializes in catalyzing the potential of High Performance Computing (HPC), Physics AI, and pioneering Cloud Research & Development (R&D) initiatives. We are a diverse blend of visionaries, strategists, and implementers focused on creativity, collaboration, and a drive for innovation. We thrive on challenges, pushing boundaries and redefining what is possible.
