Rescale offers several price options for running your HPC simulations: On Demand, Low Priority, and Prepaid. This article analyzes compute usage to determine when a Prepaid plan makes sense from a cost standpoint.

Our Prepaid price plan provides a long-term reservation of Rescale cores at the lowest cost per hour, in exchange for paying for all of the reservation's hours up front. Prepaid cores are a great way to lower your hardware costs if your utilization is high enough. In the following, we will answer two questions:

  • How high does my utilization need to be for a particular core type in order for Prepaid to save me money?
  • Given some schedule of core usage, how many Prepaid cores should I buy to minimize my total hardware costs?

Prepaid cost savings by utilization

The first question is calculated by solving this equation:

Cost savings = Utilization * Total reservation hours * Price_on_demand – Total reservation hours * Price_prepaid

The Prepaid price option offers two time periods to choose from. Users can prepay cores for 1 year or 3 years. So in the case of our popular Nickel core type (which has a 3 year Prepaid cost of $0.04/core/hour), the break-even point where cost savings = 0 is:

Utilization = $0.04 per hour Prepaid / $0.15 per hour On Demand ≈ 27%

If your average utilization of a Nickel core is above 27%, Prepaid will save you money compared to On Demand rates.  If your average utilization over a year is 50%, you are saving $482 per year per core using the Prepaid option over a three year term, instead of On Demand.
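These break-even numbers are easy to check in code. Here is a small Python sketch of the savings formula above, using the Nickel prices quoted in the text (the function names are illustrative):

```python
def prepaid_cost_savings(utilization, reservation_hours,
                         on_demand_price, prepaid_price):
    # Cost savings = Utilization * hours * Price_on_demand
    #                - hours * Price_prepaid
    return (utilization * reservation_hours * on_demand_price
            - reservation_hours * prepaid_price)


def break_even_utilization(prepaid_price, on_demand_price):
    # Setting the savings above to zero and solving for utilization.
    return prepaid_price / on_demand_price
```

For Nickel, break_even_utilization(0.04, 0.15) gives roughly 0.267, i.e. about 27%.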

Pre-pay how much for a given job forecast?

The above calculation is simple but you may not have a target utilization in mind. Instead, you may have a forecast of all the compute jobs you are planning to run, often based on historical usage. How do we take a schedule of jobs and determine the optimal number of Prepaid cores to buy?

As before, we calculate something similar:

Prepaid savings(x cores Prepaid) = All On Demand cost – (Prepaid cost(x) + Residual On Demand cost(x))

All On Demand cost is the total core hours for all your jobs over the Prepaid period, multiplied by the On Demand core price.

Prepaid cost is just the Prepaid cost for x cores.

Residual On Demand cost is the tricky one to calculate. We need to derive the additional On Demand core hours needed, while maximizing utilization of our Prepaid cores.

To calculate the residual core hours, we need to look at the number of cores running per unit time, and then only consider the cores that are not Prepaid. Visually, we are packing core use time intervals from the bottom up and slicing out the bottom x cores across jobs in time (which are using Prepaid cores) and then just calculating the cost of the rest.


The above figure does not take into account that core time intervals will not always either be entirely overlapping or disjoint. The intervals really need to be cut up into smaller disjoint intervals and then we fill from the bottom with these interval parts.


Next, we just count up the cores for each time slice; missing slices implicitly have zero cores.


Here we have completely covered our reservation period with disjoint intervals. Next, we subtract off the Prepaid count. For example, if we have 2 Prepaid cores, we get these residual core intervals.


From these intervals, we then sum the products of the interval lengths and core counts to get our residual core hours.

Let’s look at how to do this same analysis programmatically. Our inputs are:

  • Schedule of compute jobs, which is a list of partially overlapping (start_time, end_time, core_count) tuples
  • Start of the prepaid reservation period
  • End of the prepaid reservation period
  • Number of prepaid cores

Most of the complexity here is hiding in chop_and_aggregate. Let’s look at an implementation of this. We start at reservation_start and step forward in time, keeping track of the intervals that are currently open. When an interval opens or closes, we create a new disjoint interval with the current core count between the last two time boundaries.
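A minimal Python sketch of the whole calculation follows, assuming times are expressed in hours and using the (start_time, end_time, core_count) tuples described above. The two function names come from the surrounding text; the implementation details are illustrative, not Rescale's actual code:

```python
def chop_and_aggregate(jobs, reservation_start, reservation_end):
    """Chop overlapping job intervals into disjoint time slices,
    each carrying the total core count open during that slice."""
    # Build (time, core_delta) events: +cores when a job starts,
    # -cores when it ends, clipped to the reservation period.
    events = []
    for start, end, cores in jobs:
        start = max(start, reservation_start)
        end = min(end, reservation_end)
        if start < end:
            events.append((start, cores))
            events.append((end, -cores))
    events.sort()

    disjoint_intervals = []
    open_cores = 0
    last_time = reservation_start
    for time, delta in events:
        # Emit a slice covering the gap since the last boundary;
        # slices with zero cores are implicitly skipped.
        if time > last_time and open_cores > 0:
            disjoint_intervals.append((last_time, time, open_cores))
        open_cores += delta
        last_time = time
    return disjoint_intervals


def calculate_residual_core_hours(jobs, reservation_start,
                                  reservation_end, prepaid_cores):
    """Core hours that still run On Demand after filling the bottom
    prepaid_cores of every time slice with Prepaid capacity."""
    residual = 0
    for start, end, cores in chop_and_aggregate(
            jobs, reservation_start, reservation_end):
        residual += (end - start) * max(0, cores - prepaid_cores)
    return residual
```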

Putting this all together, you can multiply the residual core hours by the On Demand price and get your cost savings.  You can even use calculate_residual_core_hours with zero prepaid cores to (inefficiently) get the all-on-demand cost:
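The savings formula might be sketched like this, where total_core_hours is the value calculate_residual_core_hours returns with zero Prepaid cores (the function and parameter names here are illustrative):

```python
def prepaid_savings(total_core_hours, residual_core_hours,
                    reservation_hours, prepaid_cores,
                    on_demand_price, prepaid_price):
    # All On Demand: every core hour billed at the On Demand rate.
    all_on_demand_cost = total_core_hours * on_demand_price
    # Prepaid cores are paid for every hour of the reservation.
    prepaid_cost = prepaid_cores * reservation_hours * prepaid_price
    # Core hours that spill over the Prepaid capacity stay On Demand.
    residual_cost = residual_core_hours * on_demand_price
    return all_on_demand_cost - (prepaid_cost + residual_cost)
```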

From here, you could do a binary search (more precisely, a ternary search, since the savings curve is concave in the core count) over the number of prepaid_cores to find your optimal savings. As an optimization, the disjoint_intervals only need to be calculated once and can be reused for all the different Prepaid core counts.
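Savings as a function of the Prepaid core count is concave, since each additional core covers no more hours than the one below it; that is what makes such a search valid. A sketch of a ternary search over integer core counts (this helper is illustrative; a linear scan also works for modest fleet sizes):

```python
def optimal_prepaid_cores(savings_fn, max_cores):
    """Find the integer core count in [0, max_cores] that maximizes
    a concave savings function."""
    lo, hi = 0, max_cores
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if savings_fn(m1) < savings_fn(m2):
            lo = m1 + 1   # the maximum lies to the right of m1
        else:
            hi = m2       # the maximum lies at or left of m2
    # Few enough candidates remain to check directly.
    return max(range(lo, hi + 1), key=savings_fn)
```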

Additional details

The above explanation is simplified from our real Prepaid option:

Different core types

You might use multiple core types (e.g. Marble & Nickel) across jobs that you typically run. In that case, you would separate your jobs by core type and perform the above analysis once for each core type batch. You would end up with the optimal number of Prepaid cores of each type. Note that in some situations, this might yield wasted capacity and could end up being more expensive than running on fewer core types, even if it means you are running on more powerful cores than are needed for a particular job.

We plan to soon release a Prepaid core calculator that will make recommendations based on your previous Rescale usage, doing the above analysis for you!

This article was written by Mark Whitney.


Implementing hybrid and pure-play cloud-based HPC has become essential for organizations looking to maintain a competitive edge. In an interview with CIO Review, Joris Poort, CEO of Rescale, gives his views on the industry dynamics driving CIOs and IT professionals to shift from deploying on-premise HPC clusters to an agile IT environment with scalable, secure, and elastic cloud HPC resources.

The forces driving simulation/HPC in the cloud

Simulation-driven design has become an indispensable tool for product development. Across all industries, simulation is driving a paradigm shift from traditional physical test and design to virtual prototyping, design space explorations, and optimizations that evaluate what-if scenarios and trade-off studies. As a result, organizations are running engineering simulations that are far more comprehensive and accurate, driving innovation and competitive advantage. The dramatically increasing and highly variable user demand for simulation presents a daunting challenge for traditional IT systems that cannot provision for peak demand, resulting in delays and bottlenecks. Providing state-of-the-art HPC resources that carry the computing horsepower and scalability while remaining elastic and flexible is key to delivering on these new IT demands.

Key evaluation factors to choose a cloud provider

Global reach, best-in-class hardware, application availability, on-demand licensing, security, support, cost, administrative functions for IT, tools to integrate with the existing computing infrastructure, and ease of use are all important aspects when choosing a robust enterprise HPC cloud provider. Rescale’s HPC cloud platform is natively built for the cloud and designed with these considerations in mind to efficiently deploy to Fortune 500 IT organizations. Its disruptive technology is transforming fixed-cost capital infrastructure into agile project-based operating expense. Rescale has the largest hardware footprint in the industry with an infrastructure network of over 30 of the most advanced data centers worldwide, as well as the most extensive simulation software selection (120+ software packages, 9 out of 10 top CAE vendors) including Siemens PLM, CD-adapco, Dassault Systemes, MSC Software, ANSYS, and many others.

Rescale—Ensuring safe operations in the cloud environment

Rescale has built the leading security solution for cloud HPC. The company invests heavily in the security and resiliency of every component within the Rescale ecosystem. As a result, Rescale complies with the strictest industry standards such as ITAR compliance for US export controlled activities and SOC 2, as well as end-to-end data encryption, cluster isolation, kernel encryption, data center security and independent external security audits.

Providing a powerful simulation platform to the engineers and scientists

Through a unique set of deployment tools, Rescale supports public, private cloud as well as hybrid on-premise/cloud deployment. Simulation customers can easily port their applications to or programmatically burst their compute jobs onto the Rescale cloud simulation platform to tap into the immense global infrastructure network, including collaboration, visualization and scheduler integration tools that are built into the platform. Rescale supports native integrations and deployments with all commercial schedulers.

Rescale’s enterprise platform includes administrative IT tools that provide controls for administrators to implement security policies, manage user account settings, and configure permissions, budgets, and privileges.

The future for HPC cloud service providers

There are challenges that must be addressed when adding cloud HPC to an enterprise IT organization – especially the total cost of ownership benefits and the security model, which must be directly addressed and understood. Providing a sound business model and helping application providers make the transition from annual licenses to on-demand software offerings is another critical effort. Rescale facilitates, promotes, and works closely with application providers to introduce on-demand licensing.

This article was written by Rescale.


San Francisco, CA – March 19, 2015 – Rescale, a leader in computer aided engineering (CAE) simulation and high performance computing (HPC), has been named by CIO Review as one of the 20 Most Promising HPC Solution Providers in 2015. The leading position of Rescale is based on evaluations of Rescale’s products for HPC simulation applications and its expertise in providing secure end-to-end HPC solutions.

The annual list of companies is selected by a panel of experts and members of CIO Review’s editorial board to recognize and promote technology entrepreneurship. “Rescale has been on our radar for some time now for stirring a revolution in HPC and we are happy to showcase them this year due to their continuing excellence in delivering top-notch technology-driven solutions,” said Harvi Sachar, Publisher and Founder, CIO Review. “Rescale’s solutions continued to break new ground within the past year benefiting its customers around the globe, and we’re excited to have them featured on our top companies list.”

“Rescale is pleased to be recognized by CIO Review’s panel of experts and thought leaders,” said Joris Poort, CEO, Rescale. “Providing state-of-the-art HPC resources that carry the computing horsepower and scalability while remaining elastic and flexible are key to deliver on today’s new IT demands. With Rescale’s simulation and HPC cloud platform, our customers enjoy a significant competitive advantage over their competition through the agility to seamlessly scale both software and hardware solutions based on internal demand.”

Rescale has quickly grown to have the largest hardware footprint in the industry with an infrastructure network of over 30 of the most advanced data centers worldwide. Rescale also supports the most extensive simulation software selection with over 120 solutions, including Siemens PLM, CD-adapco, Dassault Systemes, MSC Software, ANSYS, and many others. The company invests heavily in the security and resiliency of every component within the Rescale ecosystem and has built the leading security solution for cloud HPC. Rescale complies with the strictest industry standards such as ITAR compliance for US export controlled activities and SOC 2, as well as end-to-end data encryption, cluster isolation, kernel encryption, data center security, and independent external security audits.

You can download the full article here.

About Rescale

Rescale’s simulation platform is the leading global solution for the secure deployment of simulation software and high performance computing (HPC) solutions in the enterprise.  The platform is deployed securely and seamlessly to enterprises via a browser interface in an application environment powered by leading simulation software providers while backed by the largest commercially available HPC backbone.  Headquartered in San Francisco, CA, Rescale’s customers include global Fortune 500 companies in the aerospace, automotive, life sciences, and energy sectors. For more information on Rescale products and services, visit

About CIO Review

Published from Fremont, California, CIO Review provides influential IT and business executives with in-depth coverage of the topics most critical to their organizations’ IT infrastructure.

CIO Review constantly endeavors to identify “The Best” in a variety of areas important to tech business. Through nominations and consultations with industry leaders, our editors choose the best in different domains. HPC Technology Special is a listing of 20 Most Promising HPC Technology Solution Providers 2015.

This article was written by Rescale.



When running HPC on Rescale, or in any traditional HPC environment for that matter, it is essential that the solver can be run in a headless environment. What that means in practice is that ISVs have made sure that their solvers can be run in batch mode using simple (or not so simple) command-line instructions. This allows users of their software to submit jobs to a headless HPC environment. Traditionally, this environment is a set of servers sitting behind a scheduler. The engineer or scientist would write a script of command-line instructions to be submitted to the scheduler. The same is true on Rescale. The user enters a set of command-line instructions to run their jobs on Rescale’s platform.

Let’s take OpenFOAM, for example. An OpenFOAM user will usually write an Allrun script and invoke it on Rescale by simply calling the script:
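For instance (assuming the script is named Allrun and lives in the job's working directory):

```shell
./Allrun
```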

This is easy and applies to other solvers available on Rescale: LS-Dyna, CONVERGE, Star-CCM+, NX Nastran, XFlow and many more. All solvers on Rescale are instantiated using a simple command-line instruction.

The Headless Environment

Being able to run a solver using a command-line instruction does not mean the solver will run in batch. For example, trying to run Star-CCM+ without explicitly specifying batch mode would cause the program to launch its graphical user interface (GUI) and look for a display device; finding none, it would immediately exit. Star-CCM+ should, therefore, be called using a command-line instruction like:
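The invocation looks something like this (the macro and simulation file names are illustrative):

```shell
starccm+ -batch run.java simulation.sim
```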

This is simple enough. There is something to be said for being able to run both the batch solver and the GUI using the same program. Unfortunately, this type of implementation can be incomplete.

When ISVs decide to migrate their solver capabilities from the desktop environment to the HPC (batch) environment, they usually do so because they have implemented the ability to run their solver across more than a single machine. A solver that can only run on a single machine provides less of a benefit in an HPC environment. On initial iterations, ISVs may leave some latent artifacts of the original desktop implementation inside their batch solvers. Although these solvers can be executed from the command line, they may still require access to a display. Since we still want to be able to run these “almost-headless” solvers on Rescale, we make use of a tool called the X virtual frame buffer.

The X Virtual Frame Buffer

The X virtual frame buffer (Xvfb) renders a virtual display in memory, so applications that are not truly headless can use it to render graphical elements. At Rescale, we use the virtual frame buffers as a last resort because there is a performance penalty to launching and running them. The use of Xvfb requires us to implement a wrapper around these solver programs. In its simplest form, this can be implemented as follows:
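In its simplest form, the wrapper might look like the following sketch (the display number and screen geometry are arbitrary; the solver command is passed to the script as arguments):

```shell
#!/bin/sh
DISPLAY_NUM=99
# Launch a virtual display in memory
Xvfb :$DISPLAY_NUM -screen 0 1280x1024x24 &
XVFB_PID=$!
# Tell the environment to render to the virtual display
export DISPLAY=:$DISPLAY_NUM
# Run the solver command passed to this wrapper
"$@"
RC=$?
# Clean up the frame buffer and propagate the solver's exit code
kill $XVFB_PID
exit $RC
```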

This seems fairly simple. We launch a virtual frame buffer on a numbered display, tell our environment to use the display associated with the virtual frame buffer, launch our solver, and clean up at the end.

A Can Of Worms

One very powerful feature on Rescale is the parameter sweep/design of experiment (DOE) functionality. We can run multiple runs of a DOE in parallel, which also means that multiple runs of the DOE can land on the same server. Let’s imagine running the above script twice on the same node. Each instantiation of the script will now try to launch a frame buffer on the same display. This can lead to all sorts of problems: race conditions, process corruption, and so on. Beyond the low-level issues this may cause, the biggest high-level issue is when the solver hangs due to issues with the virtual frame buffer. A user who initiates a DOE with 100 runs may do so at the end of the day and let the job run overnight. The next morning, that user may find that 99 runs finished within a couple of hours, but one run has been hanging due to issues with Xvfb and has kept the cluster up the entire night. This kind of situation is one that we want to avoid at all costs.

The implementation of a virtual frame buffer requires us to write all kinds of robustness provisions into our wrapper script. We may decide to only launch a single Xvfb on a single display and use that display for all of our solver instantiations. We can check whether Xvfb is running and, if it isn’t, skip the launching of the frame buffer step:
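For example (again with an arbitrary display number):

```shell
# Launch Xvfb only if one is not already running on this node
if ! pgrep -x Xvfb > /dev/null; then
    Xvfb :99 -screen 0 1280x1024x24 &
fi
# All solver instantiations share the same virtual display
export DISPLAY=:99
```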

This has the side effect that we never know when we can shut down the frame buffer, requiring us to leave it up at all times. This may sometimes be okay depending on the requirements of the solver. If it’s not okay, we would have to increment the display number for each solver process, and clean up each frame buffer when each solver finishes. 

We can also explicitly check whether a solver is hanging. We can launch the solver in the background and interrogate its pid for the status of the program using a foreground polling loop. We can write a retry loop around the solver instantiation to restart the solver if it fails the first time. This may be the case if the frame buffer is still initializing while we are calling the solver.
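One way to sketch such a guard in shell (the retry count and poll interval are illustrative; real hang detection, such as watching solver log growth, would go inside the polling loop):

```shell
run_with_retry() {
    # Retry the given solver command up to 3 times, polling while it runs.
    attempt=1
    while [ $attempt -le 3 ]; do
        "$@" &                       # launch the solver in the background
        pid=$!
        while kill -0 $pid 2>/dev/null; do
            sleep 1                  # hang checks (e.g. log growth) go here
        done
        if wait $pid; then
            return 0                 # solver finished cleanly
        fi
        attempt=$((attempt + 1))     # e.g. Xvfb was still initializing
    done
    return 1
}
```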

The Case of the Invalid License

One of the CFD solvers we support requires a frame buffer. A customer launched a simple job without specifying a valid license address. Two days later, he was wondering whether his job was still running. It turned out that the solver had hung within seconds of being instantiated and had been sitting idle for two days. While debugging the issue, we decided to inspect the virtual frame buffer. We took a screenshot using:
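The screenshot was taken with a command along these lines (the display number is illustrative; convert is ImageMagick):

```shell
# Dump the root window of the virtual display, then convert it to PNG
xwd -display :99 -root -out screenshot.xwd
convert screenshot.xwd screenshot.png
```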

The resulting screenshot showed a window asking the user to enter a valid license location. This was obviously an artifact in the desktop implementation of the batch solver. A true headless batch program would have just exited with a message to the user. We have since fixed this issue and are more careful when making any use of this tool–putting in robustness provisions as described before.

Virtual Frame Buffer’s Post-processing Utility

A very useful application of Xvfb is rendering post-processing graphics in a headless environment. We can call tools such as ParaView or LS-PrePost to generate movies and images of scenes they would normally render on-screen.

Here is an example that uses Xvfb, OpenFOAM and ParaView to generate a scene image:
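A sketch of such a pipeline (the display number, screen geometry, and the ParaView batch script save_scene.py are illustrative):

```shell
# Start a virtual display for ParaView to render to
Xvfb :99 -screen 0 1920x1080x24 &
XVFB_PID=$!
export DISPLAY=:99
# An empty .foam file lets ParaView open the OpenFOAM case directly
touch case.foam
# Run an illustrative ParaView Python script that saves the scene image
pvbatch save_scene.py
kill $XVFB_PID
```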

Here is an example which uses Xvfb, LS-Dyna and LS-PrePost to generate a movie of a crash simulation:
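A similar sketch for LS-PrePost (the command file make_movie.cfile, which would load the LS-Dyna results and write the movie, is illustrative):

```shell
# Start a virtual display for LS-PrePost to render to
Xvfb :99 -screen 0 1920x1080x24 &
XVFB_PID=$!
export DISPLAY=:99
# Replay an illustrative LS-PrePost command file that writes the movie
lsprepost c=make_movie.cfile
kill $XVFB_PID
```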

Some of our users have used this capability to their advantage by creating a visual representation of their data, forgoing the need to download the raw data sets.

Lessons Learned

Since our first use of Xvfb, we’ve learned that it sometimes leads to adverse and unforeseen side effects. We have since worked to make all our use of Xvfb as robust as possible to prevent the worst side effect of all: the stalled job. Xvfb has also benefited us greatly, allowing us to run solvers that would not otherwise run in a traditional headless HPC environment, and to use certain post-processing tools to render images and movies in batch mode. We encourage ISVs who only have desktop implementations of their solvers to create solvers that run in a headless HPC environment–keeping in mind what it means to truly run a solver without a display.

This article was written by Mulyanto Poort.