An engineer at a phone manufacturer was explaining to me that consumers are demanding exponentially more features today. More features mean more components in the phone, which introduces more risk into the product development and design process. The number of test scenarios this engineer has to foresee to prevent costly field issues, such as a phone catching fire, has increased drastically over the past year. So how can an engineering team keep delivering highly innovative products on time while still meeting demanding quality standards?
Engineering software companies came up with their own buzzword: the digital twin, a time-effective tool for making decisions that would be costly or impossible to make through physical testing on a real model. It consists of a set of physical characteristics (thermal, structural, electromagnetic, fluid dynamics, etc.) overlaid with behavioral data collected from the real product. The physical characteristics are typically captured in high-fidelity virtual models built with different simulation software packages. In the car industry, for example, LS-DYNA might be used for crashworthiness, STAR-CCM+ for aerodynamics, and Abaqus for durability. These complex high-fidelity simulation scenarios run on high-performance computing (HPC) systems and take days, sometimes even weeks, to return results. Not only has the digital twin become a key component of the product development process, but reliance on simulation results is ever-increasing, driving larger model sizes, a higher number of runs, and more sophisticated multidisciplinary simulation workflows.
Chip manufacturers keep introducing more specialized hardware (memory-intensive chips, SSD drives, low-latency networks, GPUs, specialty co-processors, etc.), which simulation software increasingly leverages to deliver better performance. The head of engineering IT at an automotive OEM recently told me, “I can’t ignore anymore how fast hardware is improving, but keeping our in-house HPC up to date with the latest technology is simply impossible.” Manufacturing IT departments must transform to address the challenges of this fragmented, rapidly evolving IT ecosystem and fully enable the digital twin as an innovation tool. Engineers need access to the right software package for each specific engineering scenario, and they need to run their simulations in an optimal, scalable IT environment to deliver on the full promise of the digital twin.
Rescale empowers engineers not only with speed but also with the capabilities to efficiently interrogate the digital twin model: to conceptualize, compare alternatives, and collaborate. It is a turnkey simulation cloud platform that provides access to 180+ simulation and deep learning software packages through a tailored SaaS portal designed to accommodate every type of end user. Analyses can run on a broad set of core types through the Rescale global HPC network, including any data center and core type from AWS, a preferred Rescale partner.
While openness and speed are key to a successful digital twin deployment, manufacturing companies are also concerned about the cost of this transformation. The Rescale and AWS partnership goes beyond cloud-native use cases in order to protect a customer’s past investments. Rescale integrates easily with a customer’s legacy environment (on-premise HPC, homegrown tools and scripts, local license servers, etc.), enabling a seamless, progressive migration from on-premise to hybrid to a cloud-first environment. Additionally, AWS Spot pricing has enabled Rescale to offer greater pricing flexibility, allowing it to better meet customers’ requirements and to balance cost against performance goals for each simulation. Finally, manufacturing companies also need to account for the opportunity cost of an on-premise-only HPC solution: engineering wait time, hardware maintenance, and an overly conservative end product.
This article was written by Fanny Tréheux.