Total Cost of Ownership (TCO) is a powerful financial tool that allows you to understand the direct and indirect expenses related to an asset, such as your HPC system. Calculating the TCO for an on-premise HPC system is straightforward: add up all expenses related to your system and its management over the entirety of its deployment. But what happens when you’re interested in switching to a cloud-enabled HPC environment? Can you confidently compare the cloud-enabled HPC system’s TCO with an on-premise HPC system’s TCO?

This question has been addressed by many different institutions.

Our view is simple: TCO is a poor financial tool for evaluating the value of cloud-enabled HPC. Comparing a static environment against a dynamic one produces an unreliable and misleading analysis. It is an apples-to-oranges comparison, and using TCO to assess cloud-enabled HPC is an attempt to make apple juice from oranges.

What is a static environment and how does it apply to my TCO analysis?

Static environments for TCO are used when you have a set expense for a set return. With an on-premise system, you get X amount of computing power for Y dollars. The same relationship holds for most expenses in the cost analysis of an on-premise HPC system, until you reach a comprehensive TCO. There are some variable costs involved (fluctuations in software pricing, staffing, energy, unpredicted errors, etc.); however, margins can be used to account for their influence on the TCO. Essentially, you end up with the general TCO analysis of X computing power = Y expenses ± margin of change. This is a great tool for comparing systems with little variation in expenses and known rewards, where costs and returns have a near-linear relationship. However, what happens when the computing power is nearly infinite and the expenses are reactive, as is the case for cloud computing?
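
As a rough illustration, that static relationship can be sketched in a few lines: a fixed capital expense plus recurring operating costs, bounded by a margin for the variable items. All figures below are hypothetical placeholders, not real pricing.

```python
# Hypothetical static TCO sketch for an on-premise HPC system.
# All figures below are illustrative placeholders, not real pricing.

capex = 1_500_000        # hardware, facilities, installation ($)
annual_opex = 300_000    # staff, software licenses, energy ($/year)
lifetime_years = 4       # planned deployment window
margin = 0.10            # allowance for pricing, energy, and staffing fluctuations

tco = capex + annual_opex * lifetime_years
low, high = tco * (1 - margin), tco * (1 + margin)

print(f"Estimated TCO: ${tco:,.0f} (range ${low:,.0f}-${high:,.0f})")
# The reward side is fixed as well: capacity is whatever was purchased up front.
```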

What is a dynamic environment and how does it apply to my TCO analysis?

A dynamic environment for a TCO analysis is one where the expenses and rewards are not directly correlated, making them difficult to define and compare. In a cloud-enabled HPC system, you pay for computing power when you need it; little initial capital expenditure is needed to use cloud-enabled HPC compared to an on-premise HPC system. In this environment, your expenses for HPC become less predictable and more reactive because they are generated by your computing demand. In addition, you are no longer constrained by a set capacity or architecture of computing resources, so your reward depends heavily on how you utilize HPC. This scalability and agility can strongly influence your HPC usage, especially if your current system is inhibiting your simulation throughput and potential Design of Experiments (DOE). The rewards of cloud computing raise the question: if you had fewer restrictions on HPC, would you utilize it differently?

What happens when you use TCO to compare on-premise vs cloud-enabled HPC systems?

TCO is a tool that is helpful for static environments, but when you try to take the same static tool and apply it to a highly dynamic environment, it becomes misleading. For example, suppose you want to calculate the TCO of an on-premise HPC system. First, you must predict the peak usage and utilization of a system that will be in service for approximately 3-5 years. To meet all of an organization’s requirements, trade-offs are made between peak capacity and the cost of obsolescence. Then you must pay the large initial capital expenditure to purchase the hardware and software, and hire the staff, required to assemble and operate the system. Calculate all these expenses and you get the TCO of a system that rewards you with a fixed, limited amount of computing resources.

Now, try to apply the same analysis to a cloud-enabled HPC system. Most people take the projected peak computing power and average utilization and multiply them by the price to compute with their prospective cloud service provider. This is the first problem: you’re already treating both systems as if their rewards and expenses are equal. With a cloud-enabled HPC system, you have instant access to the latest hardware and applications, which means you are always utilizing the best infrastructure for your workflow. Because computing power in the cloud is effectively unlimited, there is no reason to have a queue for running simulations, which increases your productivity. The limitless and diverse computing resources allow for innovations in the research and design process that are essential to getting better products to market before competitors, while the inability to easily scale and upgrade an on-premise HPC system can severely inhibit your ability to compete. These differences in rewards make it hard to quantify the cost of an aging on-premise HPC system’s effect on the new workflows that could help you out-compete your rivals.
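
To see where that estimate breaks down, here is a minimal sketch, using purely hypothetical rates and demand figures, that contrasts the “peak capacity × average utilization × price” calculation with pay-as-you-go billing that follows actual monthly demand:

```python
# Hypothetical comparison: naive cloud TCO estimate vs. demand-driven billing.
# All rates and core-hour figures are illustrative assumptions.

price_per_core_hour = 0.10     # assumed cloud rate ($ per core-hour)
peak_cores = 1000              # capacity sized for the busiest period
avg_utilization = 0.40         # utilization assumed by the naive estimate
hours_per_month = 730

# Naive estimate: treat the cloud like a fixed-capacity, on-premise system.
naive_monthly = peak_cores * avg_utilization * hours_per_month * price_per_core_hour

# Pay-as-you-go reality: billing follows the core-hours actually consumed,
# which rise and fall with engineering demand from month to month.
monthly_core_hours = [120_000, 80_000, 310_000, 50_000, 220_000, 140_000]
actual_costs = [ch * price_per_core_hour for ch in monthly_core_hours]

print(f"Naive flat estimate: ${naive_monthly:,.0f}/month")
for month, cost in enumerate(actual_costs, start=1):
    print(f"Month {month}: ${cost:,.0f}")
# The flat estimate hides the demand-driven spend and says nothing about the
# extra throughput (no queues, newer hardware) that the elasticity buys.
```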

When comparing HPC solutions’ TCOs, you must account for the rewards provided by each solution, because the lack of a reward should be reflected as an expense in the competing solution’s TCO. For example, if your cloud computing solution provides no queue time, better computing performance, and new DOEs, and your on-premise solution does not, then you must calculate the cost of the inefficiency corresponding to each absent reward in the on-premise system. That is the only way to level the TCO against the corresponding rewards, but it proves extremely difficult to put exact numbers on each reward, which makes TCO a misleading and inaccurate tool. Comparing the TCO and rewards of cloud-enabled and on-premise HPC systems is pointless because the tool does not address the reality of each system: one is static and requires a massive investment to create limited computing power, and the other is agile and incurs pay-as-you-go expenses for practically limitless computing power.
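
One way to level the comparison, at least conceptually, is to price a missing reward as an expense. The sketch below, using hypothetical queue times and labor rates, charges engineer-hours lost to the on-premise queue against the on-premise TCO:

```python
# Hypothetical adjustment: convert an absent reward (zero queue time in the
# cloud) into an expense on the on-premise side. All figures are illustrative.

queued_jobs_per_year = 400        # jobs that waited in the on-premise queue
avg_queue_hours = 12              # average wait before a job started
blocked_engineers_per_job = 1.5   # people stalled while waiting on results
loaded_rate_per_hour = 90         # fully loaded engineering cost ($/hour)

queue_penalty = (queued_jobs_per_year * avg_queue_hours
                 * blocked_engineers_per_job * loaded_rate_per_hour)

print(f"Annual cost of queue-time inefficiency: ${queue_penalty:,.0f}")
# Missed DOE iterations, delayed launches, and lost design insight are even
# harder to price, which is why a single TCO figure remains misleading.
```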

Determining the financial implications of incorporating cloud-enabled HPC into your HPC environment can be difficult. Thankfully, Rescale has many specialists and confidential tools to help define the benefit of cloud-enabled HPC for your organization.

Come talk to us today.

This article was written by Thomas Helmonds.

Introduction
The large and demanding oil and gas industry requires massive, computationally intensive simulations for everything from reservoir modeling to drilling applications to natural gas extraction. Additional factors, including seismic stability, weather patterns, and acoustic emissions, also contribute to the complexity of oil and gas analyses.

Running complex simulations is essential to production; however, these analyses are often difficult to execute. For example, a single drilling application model can contain millions of elements for a finite element analysis (FEA). Because many of these analyses are so compute-intensive, high-performance computing (HPC) plays a large role in executing simulations that can run for days or weeks. However, due to constrained on-site HPC resources, simulations are often left in long queues and engineers are left without the tools they need to reach optimal designs.

Rescale Solution
To address these concerns, a leading oilfield service company turned to Rescale. With access to a comprehensive suite of simulation tools and scalable HPC configurations, our customer utilized Rescale’s secure, web-based platform to evaluate acoustic emission properties among different drill bits and drilling scenarios.

All simulations were executed on Rescale using the commercially available software LS-DYNA. Developed by the Livermore Software Technology Corporation, LS-DYNA is used for computationally complex finite element analyses. Our customer conducted these analyses in support of R&D for drilling applications.

The user ran the simulation on three separate configurations simultaneously to compare Rescale’s platform performance across several different compute configurations. Identical simulations were executed on 16, 32, and 64 cores, respectively. Upon job submission by the user, the Rescale-powered solver performed as follows:

  • All processors were dynamically provisioned within five minutes of job submission
  • Results were gathered and delivered for post-processing and analysis
  • All computing instances across the cluster were deleted upon completion

Results
When executed on the customer’s 16-core, in-house cluster, the simulation converged in 77 hours. However, Rescale’s platform achieved convergence for the different configurations as follows:

  • 16 cores: 67 hours
  • 32 cores: 32 hours
  • 64 cores: 17 hours

When Rescale ran the simulation on the same number of cores as the customer, it achieved a 13% reduction in compute time, saving roughly an entire workday.

When the customer ran the simulation using 64 cores on the Rescale platform, it modeled 180 microseconds of acoustics per case and finished >75% faster than the in-house cluster. Running the job on Rescale with 64 cores cost the customer only $864, resulting in total savings of >$6,000 while reducing convergence time by 60 hours. Additionally, executing the simulation on Rescale revealed previously undiscovered observations about drilling scenarios and enabled our customer to determine an optimal acoustic emissions environment.
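
For reference, the speedup and scaling implied by the runtimes above can be reproduced in a few lines; only the runtimes come from the case study, and the arithmetic is the standard speedup calculation relative to the customer’s 16-core in-house baseline:

```python
# Speedup and scaling efficiency derived from the runtimes reported above.
baseline_cores, baseline_hours = 16, 77        # customer's in-house cluster

rescale_runs = {16: 67, 32: 32, 64: 17}        # cores -> hours on Rescale

for cores, hours in rescale_runs.items():
    speedup = baseline_hours / hours
    reduction = 1 - hours / baseline_hours
    # Efficiency relative to perfect linear scaling from the 16-core baseline;
    # values above 100% reflect the newer per-core hardware on Rescale.
    efficiency = speedup / (cores / baseline_cores)
    print(f"{cores:>2} cores: {speedup:.2f}x faster, "
          f"{reduction:.0%} less wall time, {efficiency:.0%} scaling efficiency")
```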

To learn more about Rescale, please visit www.rescale.com. To begin using Rescale for engineering and science simulations, please contact info@rescale.com.

This article was written by Ilea Graedel.