What is Reservoir Simulation?

Reservoir simulation is applied by the oil and gas industry in the development of oil and gas fields, as well as in production forecasts that help drive investment decisions. This branch of reservoir engineering is concerned with modeling the flow of oil, gas, and water inside the reservoir for production purposes. Most modern simulators allow for construction of 3D representations for use in either full-field or single-well models. There are a number of commercial software offerings in the field, including Schlumberger Eclipse, Halliburton NEXUS, Emerson Roxar, and Rock Flow Dynamics tNavigator, among many others. A number of open-source simulators exist as well, including BOAST and OPM.


Figure 1. A modern reservoir simulator interface

Reservoir simulation is perfect for the cloud

Reservoir engineering is a multi-faceted discipline that includes seismic acquisition and processing, reservoir analysis and simulation, well and drilling modeling, and flow modeling. Seismic acquisition and processing is highly data intensive: an oil field modeled in three-dimensional space can produce tens of terabytes of raw data. Processing such data in the cloud is cumbersome because high volumes of data have to be moved on and off the cloud.

Well and drilling modeling, as well as pipe flow modeling (oil and gas transportation), is relatively light in both data and processing needs. Even the largest models can be simulated on a modern laptop in seconds to a handful of minutes.

In reservoir simulation, the large volume of seismic data is distilled into a relatively small reservoir model. While the inputs are normally on the order of tens of megabytes, the processing needs are very high, with some models running for weeks at a time on traditional hardware. Adding more CPU power can reduce runtime, but this is not always straightforward.

Scalability challenges

Many reservoir simulators don’t scale well beyond 12-32 CPU cores for a single simulation job, so moving these to a large cluster or the cloud is of little help.

On the other hand, a common simulation scenario involves multiple instantiations of a single simulation using different parameters in an assisted optimization run. This embarrassingly parallel setup scales well to a large number of CPUs, and most simulators can take advantage of the speedup. This is the sweet spot for a cloud platform.
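As a generic illustration of why such parameter studies scale so well, every case below is fully independent of the others, so throughput grows linearly with the number of workers; the simulator executable name and its flags are hypothetical stand-ins, not any particular product's interface:

# Sketch of an embarrassingly parallel parameter study. Each case is
# independent, so cases can be fanned out across as many cores as exist.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_case(porosity):
    # Hypothetical simulator CLI; each run writes its own output file.
    subprocess.run(["simulator", "--porosity", str(porosity),
                    "--out", "case_%.2f.log" % porosity], check=True)

if __name__ == "__main__":
    porosities = [0.05, 0.10, 0.15, 0.20, 0.25]
    with ProcessPoolExecutor() as pool:
        list(pool.map(run_case, porosities))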

tNavigator + Rescale = Match Made in Cloud

tNavigator by Rock Flow Dynamics (RFD) is an extremely scalable, parallel, interactive reservoir simulator. It has been shown to achieve near-linear speedup using thousands of CPU cores, even on a single simulation run. Figure 2 shows tNavigator’s scalability in comparison to industry-standard simulators.


Figure 2. tNavigator’s scalability vs that of industry standard simulators

Coupled with an on-demand, pay-per-use cloud solution such as Rescale, tNavigator gives oil and gas companies a fully scalable solution. Users pay for the exact amount of computation they use, as both Rescale and tNavigator offer hourly pricing. This arrangement allows for 100% utilization of resources and lets very large clusters be created on demand to speed up computations.

Rescale and RFD have recently implemented a cloud-based solution for tNavigator. Users upload the input data via encrypted channels (similar to online banking systems), choose the number of CPUs to be used, and press Submit. The simulation results can be observed remotely at runtime from a user’s terminal, and the data is kept secure throughout a simulation run on Rescale’s platform.

From weeks to minutes – a case study

A real-field, three-phase model with 39 wells and 10 years of historical data was chosen to test the scalability of the system. The model contained 21.8 million active grid blocks. We configured a 512-node cluster, with each node consisting of two quad-core Intel® Xeon 5570 (Nehalem) processors, for a total of 4,096 simulation cores.

This setup reduced the computation time from 2.5 weeks to 19 minutes. The resulting speed-up coefficient compared to one calculation core is 1,328.
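The arithmetic behind that coefficient is straightforward: 2.5 weeks is 2.5 × 7 × 24 × 60 = 25,200 minutes, and 25,200 / 19 ≈ 1,326, in line with the reported 1,328 (presumably computed from the exact single-core runtime rather than the rounded 2.5-week figure).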

About Rock Flow Dynamics

Rock Flow Dynamics was founded in 2005 by mathematicians and physicists. The company is backed by Intel Capital and has customers on all continents, including some of the biggest names in the oil and gas industry: OXY, BG Group, Marathon Oil, Tullow Oil, PetroFac, Murphy Oil, PennWest, Tatweer Petroleum, and PetroChina, among others. Rock Flow Dynamics was established with a clear vision: to provide reservoir engineers worldwide with state-of-the-art dynamic reservoir simulation technology that meets the most demanding modern expectations for raw performance, rich modeling functionality, advanced graphical user interface capabilities, and smart license pricing.

About Rescale

Rescale is a secure, cloud-based, high-performance computing platform for engineering and scientific simulations. The platform allows engineers and scientists to quickly build, compute, and analyze large simulations on demand. Rescale partners with industry-leading software vendors to provide instant access to a variety of simulation packages while simultaneously offering customizable HPC hardware. Headquartered in San Francisco, CA, Rescale’s customers include global Fortune 500 companies in the areas of aerospace, automotive, life sciences, and energy.

This article was written by Rescale.


Photo credit: Audi

Not all analysis software available on Rescale makes use of multi-threading or MPI to scale across multiple cores or nodes. However, the workflow features available on Rescale provide practical ways to scale out these serial codes. I will show how an automotive engineer can use the Rescale platform to improve the design of an internal combustion (IC) engine ringpack using Rescale’s workflow features and the serial ringpack code CASE-Ring, developed by Mid-Michigan Research. The ringpack defines the geometry and function of the rings and ring grooves on a piston and is an essential part of the engine.

The Challenge

[Image: MAHLE ringpack cut-away]

During the development of an IC engine, a working prototype may be created to test its overall durability and performance. This prototype can be used to obtain baseline operational data for the engine, including data for the ringpack (cut-away above). Some of the main issues that a ringpack engineer is concerned with are:

  • Blowby: pressure loss due to gas escaping through the ringpack

  • Blowback: the reverse of blowby; blowback gases may be saturated with oil, degrading the emissions performance of the engine

  • Wear and failure: excessive movements of the ring or starvation of lubrication in the engine may result in increased wear, or even failure and disintegration of rings

In this post I will focus on the issue of blowby. Blowby is easily tested on a prototype engine by measuring the amount of gas escaping from the engine’s sump. The experimental numbers from the prototype engine are then used as a baseline and as a reference for tuning the simulation model.

The Simulation Model

The engine model I used is a purely hypothetical diesel engine with a displacement of 1.5 L per cylinder. A simple model can be put together in a few minutes using the CASE graphical user interface (GUI).

Tuning the Model

Once the engineer has some baseline blowby numbers, the engine model can be tuned to match the experimental values. CASE uses simple linear flow models for gas flow between different parts of the ringpack and therefore makes use of two kinds of flow coefficients:

  • discharge coefficients for complex discharge geometries

  • leakage coefficients for gas escaping through the micro-scale channels between two contacting rigid surfaces

Let’s say that the engineer ran the engine at the following load points and measured the resulting blowby:

Engine Load (bar)   Engine Speed (rpm)   Blowby (L/min)
27                  550                  4.6
36                  1800                 5.4
41                  2800                 10.0
53                  4000                 12.4

Running it on Rescale

On Rescale, a 7×13 parameter sweep was set up for discharge coefficients (dcoeff) between 0.2 and 0.8 and leakage coefficients (leakage) between 0.0 and 0.03. The idea is to run the model at each of these parameter points and compare the output to the experimental numbers. The parameter point with the lowest difference between the experimental and the simulation data will be used to run subsequent design of experiments (DOE) simulations.


The input files were set up as templates, each one containing the corresponding template tags for each parameter.

To run all 4 load points, the command to run on Rescale was entered as:

A post-processing script was used to extract the blowby values for each load point, compare them to the experimental values, and calculate an RMS error. The smallest error occurred at the point (dcoeff=0.70, leakage=0.01). The entire setup can be viewed by adding the shared job to your account.
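As a minimal sketch of what such a post-processing script might look like: the run-directory layout and file naming below are assumptions for illustration, since the actual script read the CASE output files produced by the Rescale job:

# Sweep the 7x13 coefficient grid, read each run's simulated blowby, and
# score it against the measured values with an RMS error.
import itertools, math

experimental = [4.6, 5.4, 10.0, 12.4]         # L/min, from the table above
dcoeffs = [0.2 + 0.1 * i for i in range(7)]    # 0.2 .. 0.8 in 7 steps
leakages = [0.0025 * i for i in range(13)]     # 0.0 .. 0.03 in 13 steps

def read_blowby(dcoeff, leakage):
    # Hypothetical layout: one blowby value per load point, per run folder.
    path = "run_d%.2f_l%.4f/blowby.txt" % (dcoeff, leakage)
    with open(path) as f:
        return [float(line) for line in f]

best = None
for dcoeff, leakage in itertools.product(dcoeffs, leakages):
    try:
        simulated = read_blowby(dcoeff, leakage)
    except FileNotFoundError:
        continue                               # skip runs with no output
    rms = math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experimental))
                    / len(experimental))
    if best is None or rms < best[0]:
        best = (rms, dcoeff, leakage)

print("best RMS %.4f L/min at dcoeff=%.2f, leakage=%.4f" % best)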

Similar to running a parameter sweep, the engineer may use an optimizer to find the best match for these coefficients. Rather than optimizing over the entire domain, purely for demonstration purposes I set up and ran an optimization job that refines the results of the previous parameter sweep by searching the domain (dcoeff=0.60-0.80, leakage=0.00-0.02). The setup of the optimization job can be viewed by adding the shared job to your account. The best result is conveniently highlighted with a green bar on the results page.


The results of these two jobs are shown below. The RMS error for the parameter sweep was 0.4060 L/min, while the refined RMS error was 0.4027 L/min.

Blowby (L/min)

Experimental   Best Sweep Point   Refined Point
4.60           4.97               5.04
5.40           5.88               5.96
10.00          9.86               10.00
12.40          11.88              12.02

The entire domain can be plotted as a response surface using Rescale visualization tools:

[Response-surface plot of RMS error over the parameter domain]

Engineering Analysis

We now have a tuned model and can do some real engineering work. The following example cases are implemented using the same workflow setup tools available in Rescale.

Minimize Blowby by Changing Static Twist

Let’s assume we want to optimize our ring design to minimize blowby. It is easy to change certain geometric properties of the ring. One of these properties is the ring’s static twist, which describes the rotation of the ring cross-section relative to its nominal angle (0 degrees). We can run a quick parameter sweep on this geometric property using Rescale, similar to how we set up the tuning sweep. I set up a cross-product parameter sweep for the twist of the compression ring (ring 1) and the twist of the scraper ring (ring 2):

[Screenshot: static-twist parameter sweep setup]

Let’s look at the results. To display them, I have chosen the Area Chart option of the Rescale visualization tools. Blue denotes low blowby (good!) and red corresponds to high blowby (bad!). Interestingly, the chart shows that the twists of the first and second rings do not independently affect the total blowby, as there is no clear, smooth trend.

[Area chart: total blowby vs. ring 1 and ring 2 static twist]

There are some interesting conclusions an engineer can draw from these results. The entire setup can be viewed by adding the shared job to your account.

And here’s a second study:

Minimize Blowby by Changing Groove Geometry

Let’s now look at the geometry of the grooves in which the rings reside. This time we changed the width of both the top and bottom grooves:

[Screenshot: groove-width parameter sweep setup]

Let’s look at the results. Once again, I have chosen the Area Chart option of the Rescale visualization tools to display them. Blue denotes low blowby and red corresponds to high blowby. This time, the chart shows that the geometries of the first and second grooves independently affect the total blowby, as there is a clear trend from the bottom left of the graph to the top middle.

[Area chart: total blowby vs. top and bottom groove width]

The entire setup can be viewed by adding the shared job to your account.

Summary

Even serial codes can be massively scaled out for the right applications. Rescale provides workflow tools to streamline the process of scaling out simulation software. Not only can these codes be easily scaled out, but embarrassingly parallel applications like parameter sweeps also scale linearly, so there is no cost-performance loss. Rescale also provides visualization tools to summarize data in a single plot or chart, helping engineers process output information faster.

CASE is a cylinder-kit analysis suite developed by Mid-Michigan Research. For licensing or more information about the analysis software used, please contact sales@mmrllc.com or Harold Schock at harold.schock@mmrllc.com.

Please check out our paper entitled “Piston Ring Wear Analysis for Diesel Engines” later this year at ICEF2014.

This article was written by Mulyanto Poort.


PETSc, the Portable, Extensible Toolkit for Scientific Computation, is a suite of data structures and routines developed by Argonne National Laboratory for the scalable (parallel) solution of scientific applications modeled by partial differential equations. PETSc is one of the world’s most widely used parallel numerical software libraries for partial differential equations and sparse matrix computations.

Traditionally, when a scientist or algorithm developer finishes a new parallel algorithm in PETSc, they need to run it on a multi-core compute cluster to test its scalability and speedup. A cluster is typically a shared compute resource at a university, government lab, or business, and maintaining such an HPC resource takes significant administrative work. In addition, to create a run, the scientist or developer has to prepare the environment, which can be difficult and time consuming, and anything unexpected during the run may cause a failure with no output data generated.

With Rescale, testing the scalability of a PETSc algorithm becomes much simpler. A scientist or developer can specify the hardware type and number of cores and then run the job with nothing more than an internet connection and a web browser.

Algorithm to Test

The algorithm I’m going to test comes from the official tutorials of the PETSc package. The code solves a linear system in parallel with KSP, PETSc’s linear solver component. I chose KSP because it is one of the most commonly used operations in the PETSc package. I made a slight change so that every process writes the timestamps of its PETSc function calls to a log file for further analysis.

Here is the source code after my modifications. I’ve also created a makefile for compiling it.

KSP solves the linear system AX = B for the vector X, which has n elements, where A is an m x n matrix and B is a vector with m elements.
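The job itself ran the modified C tutorial, but the same KSP workflow can be sketched compactly through PETSc’s Python bindings, petsc4py; the 1-D Laplacian below is my own stand-in operator, not the tutorial’s matrix:

# Assemble a sparse matrix, build a right-hand side, and solve A x = b
# with KSP. Run under MPI, e.g.: mpiexec -n 4 python ksp_demo.py
from petsc4py import PETSc

n = 1024
A = PETSc.Mat().createAIJ([n, n])       # sparse (AIJ) matrix
A.setUp()
rstart, rend = A.getOwnershipRange()    # rows owned by this MPI process
for i in range(rstart, rend):
    A.setValue(i, i, 2.0)               # stand-in operator: 1-D Laplacian
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

x, b = A.createVecs()
b.set(1.0)                              # simple right-hand side

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()                    # honors -ksp_type, -pc_type, etc.
ksp.solve(b, x)
PETSc.Sys.Print("converged in %d iterations" % ksp.getIterationNumber())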

Run Your Algorithm on Rescale

After you sign up on Rescale, you can create a new job that allows you to compile and run your PETSc algorithm.

On the Setup page, select PETSc from the Analysis Code section. In the Hardware Settings, select HPC+ as the Core Type, with 8 cores. The image below shows what your screen should look like.

[Screenshot: PETSc job setup page]

On the Workflow page, upload the source code and makefile. Alternatively, you can upload a compiled executable binary instead. In the Analysis command field, enter the command you want to execute.
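The exact command from the original job is not shown here, but for a PETSc tutorial build it would take roughly this shape (the executable name, makefile target, and options are illustrative assumptions):

make ex2
mpiexec -n 8 ./ex2 -ksp_monitor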


The Workflow page should look like the following:

[Screenshot: Workflow page with source code and command]

Click Submit in the lower right corner to send your job to Rescale. After job submission, the Status page will allow you to monitor the job in real time.

[Screenshot: real-time job status]

When the job is done, you can view and download the output files and log files from the Results page.

[Screenshot: Results page with output and log files]

KSP Test Results

In my scalability test, I chose 1, 2, 4, 8, 16, and 32 cores with Rescale’s HPC+ core type. The size of matrix A was 1024 x 1024. Here are the results for the average process execution time, the number of iterations, and the number of iterations per second.

[Chart: average process execution time vs. number of cores]

From the average process execution time, we can see that the time decreased as the number of cores increased, up to 16 cores, and then unexpectedly increased at 32 cores. This anomaly is explained by the number of iterations, which has to be taken into consideration.

Each time the algorithm started, the matrix and right-hand-side vector were randomly generated, so the number of iterations needed to converge to the target “norm of error” differed from run to run. The following chart shows the number of iterations for each run in our test.

[Chart: number of iterations to convergence per run]

The last chart shows the number of iterations per second, calculated as the number of iterations divided by the average process execution time. From the charts, we can see that parallel KSP scales well with an increasing number of cores.

[Chart: iterations per second vs. number of cores]

If you have a Rescale account, you can click HERE to clone the KSP test job I mentioned and try running the simulation with different hardware settings, number of cores, and parameters. You can also click HERE to clone a “PETSc helloworld” sample job. If you do not have an account, please click HERE to sign up.

This article was written by Irwen Song.


Graphic Credit: AlexanderAlUS/Wikipedia

With the recent hype about graphene, a new material that has been praised for its unparalleled strength, lightness, and electronic properties, I became curious and wondered whether I could run a molecular dynamics (MD) simulation of it, given its simple chemical structure (a hexagonal lattice). As someone new to the field of MD, with a background in front-end development, I cautiously set out to teach myself a thing or two about MD simulation over a weekend and run a basic simulation on Rescale’s platform.

Molecular Dynamics

In layman’s terms, MD is a computer simulation of the interactions between atoms (or molecules) using some basic laws of physics. More specifically, it is a method for determining the trajectories of atoms in phase space according to Newton’s equation of motion (F = ma); it does not attempt to solve quantum-mechanical equations. Solving the equations of motion on a computer from scratch would require mastering a range of numerical methods (e.g., differentiation and integration). Fortunately, there are open-source tools, namely analysis codes, that provide optimized algorithms, so a person can focus on the physical questions.
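To make the time-stepping idea concrete, here is a toy version of the update loop an MD engine runs internally, shown for a single particle on a harmonic spring; production codes like LAMMPS use the same kind of integrator (velocity Verlet here) with far more elaborate force fields:

# Velocity-Verlet integration of Newton's equation F = ma for one particle
# on a harmonic spring: a toy stand-in for an MD code's inner loop.
def force(x, k=1.0):
    return -k * x                       # Hooke's-law spring force

m, dt = 1.0, 0.01                       # mass and time step
x, v = 1.0, 0.0                         # initial position and velocity
a = force(x) / m
for step in range(1000):
    x += v * dt + 0.5 * a * dt * dt     # position update
    a_new = force(x) / m                # force at the new position
    v += 0.5 * (a + a_new) * dt         # velocity update with averaged force
    a = a_new
print("x = %.4f, v = %.4f" % (x, v))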

Analysis Code

There is more than one analysis code suited to running an MD simulation. The codes I found include CHARMM, AMBER, NAMD, GROMACS, DL_POLY, and LAMMPS. Among these six, the first four (CHARMM, AMBER, NAMD, GROMACS) are oriented toward biology (e.g., DNA force fields), while the last two are well suited to materials science, the type I was after. In the end, I decided to use LAMMPS, mostly owing to its active community and well-written documentation.

Installing LAMMPS

First, I had to install LAMMPS on my own machine to understand how a LAMMPS input file works. Contrary to my initial worry about having to set up a Linux environment, thanks to Derek Thomas I was able to install the code directly in my OS X environment using only a few commands and was ready to run an MD simulation within minutes. As a simple sanity check, I followed the basic tutorial and ran the “in.obstacles” sample using the following command:

lammps -in in.obstacles

And just to make sure that everything worked as intended, I ran the same job on Rescale, which yielded the same results as the run on my MacBook.

Now, a bigger challenge for me was to write an input script that would vary conditions (e.g., temperature) and observe differences in the carbon atoms’ physical properties (e.g., positions, kinetic energy, potential energy) over time. However, setting all this up without making any mistakes in the physics seemed difficult, so I decided to borrow an existing setup from an academic paper.

Uniaxial Tensile Test in LAMMPS

After hours of googling, I found the paper “Molecular Dynamics Study of Effects of Geometric Defects on the Mechanical Properties of Graphene” by Nuwan Dewapriya Mallika Arachchige, which used LAMMPS to achieve a goal similar to mine. The author was kind enough to include the setup portion of his experiment in the appendix, but I had to create my own sheet of graphene by writing a Python script that determines the experiment boundaries and the positions of the carbon atoms.
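The script below is a condensed sketch of that kind of generator, not the one used for the actual runs: it tiles the standard rectangular four-atom graphene unit cell (built from the 1.42 Å carbon-carbon bond length) and writes the coordinates as a minimal LAMMPS data file.

# Generate a rectangular graphene sheet and write it as a LAMMPS data file
# (atom_style atomic: id, type, x, y, z).
import math

bond = 1.42                                    # C-C bond length, angstroms
nx, ny = 10, 6                                 # unit cells in x and y
cx, cy = 3.0 * bond, math.sqrt(3.0) * bond     # rectangular cell dimensions
basis = [(0.0, 0.0), (bond, 0.0),
         (1.5 * bond, 0.5 * cy), (2.5 * bond, 0.5 * cy)]

atoms = [(i * cx + bx, j * cy + by)
         for i in range(nx) for j in range(ny) for bx, by in basis]

with open("graphene.data", "w") as f:
    f.write("graphene sheet\n\n")
    f.write("%d atoms\n1 atom types\n\n" % len(atoms))
    f.write("0.0 %.4f xlo xhi\n" % (nx * cx))
    f.write("0.0 %.4f ylo yhi\n" % (ny * cy))
    f.write("-5.0 5.0 zlo zhi\n\nAtoms\n\n")
    for idx, (x, y) in enumerate(atoms, start=1):
        f.write("%d 1 %.4f %.4f 0.0\n" % (idx, x, y))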

After some trial and error, I confirmed that things worked correctly by comparing the rate of increase in potential energy that I found to the results in Arachchige’s paper. Figure 1 shows the potential energy increase over time.

Figure 1. Potential Energy vs. Time

Rescale Platform

Having run a successful simulation on my machine, I decided to move it to the Rescale platform for better performance. The size of my simulation was 166,320,000 (1.66e8) atom-timesteps; in MD, the size of an experiment is determined by the product of the number of atoms (252) and the number of time steps (660,000).

I ran the job on different hardware settings to demonstrate the performance gains for different core types. Figure 2 shows the differences in hardware performance for my simulation.


Figure 2. Hardware Performance Overview

Discussion

Overall, it was a fun first-time experience with MD simulation. While graphing the increase in potential energy over time was interesting, I see some room for creativity in designing the experiment (e.g., people seem to be creating imperfect graphene models to see how a missing atom affects the stability of the overall structure).

As far as computing power goes, by using Rescale’s platform I was able to cut down my simulation time in a linear fashion. For those of you interested in running an MD simulation of graphene, you can see the results of the simulation and download the input files here.

For information on how to set up and run LAMMPS simulations on Rescale, please visit rescale.com/resources/lammps.

This article was written by Rescale.