
Last month Rescale attended the Society of Petroleum Engineers’ (SPE) Annual Technical Conference and Exhibition (ATCE), the largest technical conference in the field of oil and gas exploration and production (E&P). This year it was held in New Orleans, LA, from September 30th through October 2nd.

The SPE ATCE encompasses all technological aspects of oil and gas production, from drill bits, explosives, filtration units, and mobile platforms to software, sensors, and data processing. Computing technology is involved across the full range of the E&P process, including planning, seismic acquisition and processing, reservoir modeling and management, well modeling, flow modeling, and integrated asset modeling, among others.

A select few E&P applications are very data-heavy (seismic acquisition and processing, micro-seismic processing, etc.), and some require high-performance computing (HPC) clusters (e.g., reservoir modeling and simulation) for timely number crunching.

HPC can be useful across numerous oil and gas applications, which generally fall into the following categories:

Engineering: Finite element analysis (FEA) is used in designing and testing mechanical components (such as drill bits), especially under the extreme pressures and temperatures found in wells. Computational fluid dynamics (CFD) is used to model oil and gas flows in wells and transportation pipelines, as well as in processing plants. Computational intensity varies widely with the specific application.

Seismic processing: The very large amount of data collected in seismic surveys requires HPC clusters to process it into a human-interpretable format. Many specialized seismic analysis packages exist for this purpose, and due to the nature of the application, processing is easy to parallelize across many compute cores; a minimal sketch of this pattern follows the list below.

Reservoir modeling: History matching, specifically, is a very compute-intensive aspect of reservoir modeling that examines well performance over time, including predicting future performance.

Meteorology: The oil and gas industry is interested in weather patterns as they relate to the protection and safety of its assets and personnel, and it has funded some of the best weather models in existence. Weather pattern modeling is very computationally intensive, which is exacerbated by the time-sensitive nature of the processing: a model that predicts five days out cannot take five weeks to compute.
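Returning to the seismic processing point above: such workloads are largely embarrassingly parallel, since each trace (or shot gather) can be filtered and stacked independently. The sketch below illustrates the pattern with a hypothetical per-trace bandpass filter fanned out across local cores via Python’s multiprocessing; a production seismic package would distribute the same pattern across an entire cluster.

```python
# Minimal sketch of embarrassingly parallel per-trace processing.
# The synthetic gather and 5-60 Hz bandpass are illustrative stand-ins
# for a real seismic workflow, not any particular vendor's package.
from multiprocessing import Pool

import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500.0  # sampling rate, Hz

def process_trace(trace: np.ndarray) -> np.ndarray:
    """Bandpass one trace to a band typical of reflection data."""
    sos = butter(4, [5.0, 60.0], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, trace)

if __name__ == "__main__":
    # Stand-in for a shot gather: 10,000 traces of 2,000 samples each.
    gather = np.random.randn(10_000, 2_000)
    with Pool() as pool:  # one worker per available core by default
        filtered = pool.map(process_trace, gather)
    print(f"processed {len(filtered)} traces")
```

Because every trace is independent, the same map scales from a workstation’s cores to thousands of cluster nodes, which is what makes seismic processing such a natural fit for on-demand HPC.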

The general feeling among software vendors is that HPC is moving to the cloud, as many customers are moving away from maintaining their own in-house clusters. One vendor already offers access to their modeling software on an on-demand basis, fully in the cloud, via their own data center.

In-house HPC clusters are expensive to operate, and they lock internal users into particular hardware. Cloud-based hardware lets users choose the best configuration for each job on demand. For instance, few in-house installations offer 240GB of RAM on a single node, or an InfiniBand (high-bandwidth, low-latency) interconnect for distributed applications.

On-demand access to software will also bring innovation to licensing, since pay-per-use models make more sense for the cloud. Smaller players will see such pricing models as differentiation, which will move the entire space in that direction over time.

Rescale’s technology is a great complement to the on-demand licensing scheme, as Rescale already provides the pay-per-use HPC platform to accommodate the compute-heavy simulation software. Rescale is already in advanced discussions with simulation software vendors in the oil and gas industry, and will strive to set the trend for years to come.

To learn more about Rescale, please visit www.rescale.com. To begin using Rescale for engineering and science simulations, please contact info@rescale.com.

This article was written by Rescale.


Introduction
The large and demanding oil and gas industry requires massive, computationally intensive simulations for everything from reservoir modeling to drilling applications to natural gas extraction. Additional factors, including seismic stability, weather patterns, and acoustic emissions, also contribute to the complexity of oil and gas analyses.

Running complex simulations is essential to production; however, these analyses are often difficult to execute. For example, a single drilling application model can contain millions of elements for a finite element analysis (FEA). Because many of these analyses are so compute-intensive, high-performance computing (HPC) plays a large role in executing simulations that can run for days or weeks. Constrained on-site HPC resources, though, often leave simulations in long queues and engineers without the tools they need to reach optimal designs.

Rescale Solution
To address these concerns, a leading oilfield service company turned to Rescale. With access to a comprehensive suite of simulation tools and scalable HPC configurations, our customer utilized Rescale’s secure, web-based platform to evaluate acoustic emission properties among different drill bits and drilling scenarios.

All simulations were executed on Rescale using the commercially available software LS-DYNA. Developed by the Livermore Software Technology Corporation, LS-DYNA is used for computationally complex finite element analyses. Our customer conducted these analyses for R&D on drilling applications.

The user ran the simulation on three separate configurations simultaneously to compare performance across different compute configurations on Rescale’s platform. Identical simulations were executed on 16, 32, and 64 cores, respectively. Upon job submission, the Rescale-powered solver performed as follows (a hypothetical sketch of this pattern appears after the list):

  • All processors were dynamically provisioned within five minutes of job submission
  • Results were gathered and delivered for post-processing and analysis
  • All computing instances across the cluster were deleted upon completion
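In pseudo-code, the workflow amounts to launching the same job at several core counts and letting the platform handle provisioning and teardown. The client below is purely hypothetical (it is not Rescale’s actual API); it only illustrates the submit/poll/download pattern just described.

```python
# Hypothetical sketch of the submit/poll/download pattern described above.
# `HPCClient` and its methods are illustrative stand-ins, not a real API.
import time

class HPCClient:
    def submit(self, solver: str, input_file: str, cores: int) -> str: ...
    def status(self, job_id: str) -> str: ...
    def download_results(self, job_id: str, dest: str) -> None: ...

def run_sweep(client: HPCClient, core_counts=(16, 32, 64)) -> None:
    # Launch the identical LS-DYNA job at each core count simultaneously;
    # the input file name here is a made-up placeholder.
    pending = {n: client.submit("ls-dyna", "drill_bit.k", cores=n)
               for n in core_counts}
    while pending:  # poll until every configuration has converged
        for n, job_id in list(pending.items()):
            if client.status(job_id) == "completed":
                client.download_results(job_id, dest=f"results_{n}_cores/")
                del pending[n]
        time.sleep(60)
    # Compute instances are torn down by the platform after completion.
```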

Results
When executed on the customer’s 16-core in-house cluster, the simulation converged in 77 hours. By comparison, Rescale’s platform achieved convergence for the different configurations as follows:

  • 16 cores: 67 hours
  • 32 cores: 32 hours
  • 64 cores: 17 hours

When Rescale ran the simulation on the same number of cores as the customer, it achieved a 13% reduction in compute time, saving roughly an entire workday.

When the customer ran the simulation using 64 cores on the Rescale platform, it modeled 180 microseconds of acoustics per case and finished more than 75% faster than the in-house cluster. Running the job on Rescale with 64 cores cost the customer only $864, resulting in total savings of more than $6,000 while reducing convergence time by 60 hours. Additionally, executing the simulation on Rescale revealed previously undiscovered observations about drilling scenarios and enabled our customer to determine an optimal acoustic emissions environment.
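Working through the numbers also shows how well the job scaled. The short calculation below simply reproduces the figures quoted above: speedup relative to the 77-hour in-house baseline, and parallel efficiency relative to Rescale’s own 16-core run.

```python
# Reproduce the quoted speedups from the timings in this article.
baseline_hours = 77.0                      # customer's in-house 16-core run
rescale_hours = {16: 67.0, 32: 32.0, 64: 17.0}

for cores, hours in rescale_hours.items():
    reduction = 1.0 - hours / baseline_hours
    # Efficiency relative to ideal scaling from Rescale's 16-core run.
    efficiency = (rescale_hours[16] / hours) / (cores / 16)
    print(f"{cores:>2} cores: {hours:>4.0f} h, "
          f"{reduction:.0%} faster than in-house, "
          f"{efficiency:.0%} parallel efficiency")

# Output: 16 cores is 13% faster than in-house; 64 cores is 78% faster
# (the "more than 75%" above) at roughly 99% parallel efficiency.
```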

To learn more about Rescale, please visit www.rescale.com. To begin using Rescale for engineering and science simulations, please contact info@rescale.com.

This article was written by Ilea Graedel.

I served as a panelist at the ISC Cloud’13 Conference, held this year in Heidelberg, Germany, on September 23rd and 24th. Attendees included representatives from large manufacturers (HP, Intel, etc.), CAE software vendors (ANSYS, Simulia, etc.), large organizations and HPC end users (Ford, CERN, various universities), and of course, cloud simulation providers like Rescale.

The conference program included a broad mix of topics related to HPC in the cloud, covering a wide range of industries from financial services to biotechnology to, of course, engineering and manufacturing. In addition, a wide variety of interesting solutions and concepts were presented, including but not limited to:

a) a virtual real-time market for HPC capacity that would show available capacity and prices for anyone interested
b) cloud federation tools that enable the sharing and use of medical and health data in the Netherlands
c) a hybrid cloud setup that enables computation of mission-critical, financial insurance services in the cloud

However, the majority of the conference focused on engineering and scientific simulations, which have long shown the most consistent need for high-performance computing and are natural candidates for adoption of HPC in the cloud. I took away three main points of interest from the time I spent listening to other speakers and interacting with attendees at this conference.

1. There is a strong and growing wave of interest (and not just from vendors) in cloud-based engineering simulation. I met with numerous end users, especially from smaller and mid-market companies, who desperately need a more cost-effective way to perform engineering work, in terms of both computation and software licensing. Not surprisingly, the few large-enterprise representatives I spoke with expressed interest in the “burst” capability now offered by the cloud. Walking them through the business case for cloud simulation helped me understand their immediate and longer-term strategies for leveraging the cloud, and it bodes well for the future of this growing market.

2. Getting the technical details right will get you in the door with prospective customers, but all the “other stuff”, such as finding the right business model and ensuring data security, is tremendously important. Many of the hallway conversations I had were about security and understanding the financials. Part of the reason performance wasn’t a hot topic is that HPC is a well-known commodity by now, and the remaining issues with the cloud (interconnect, file transfer, etc.) are all well understood, even if not yet universally solved.

3. Engineering software vendors clearly recognize that the cloud is the next frontier for innovation and growth, and are implementing strategic plans to capture market share. We heard from four of the top vendors, and all unveiled or reiterated plans for a next generation of simulation built around the concept of various “apps” within a simulation ecosystem. Cloud is top-of-mind for many of these vendors, mostly due to demand from the various end-user segments that see the benefits of running simulations in the cloud.

Overall, it was a terrific learning experience. The conference was very well organized and provided a great deal of insight about the state of the market, and enabled several conversations with other parties interested in running simulations in the cloud. The future looks promising, and I left the conference excited about what is yet to come.

To learn more about Rescale, please visit www.rescale.com. To begin using Rescale for engineering and science simulations, please contact info@rescale.com.

This article was written by Rescale.


Stanford University Unstructured (SU2) is an open-source set of tools written in the C++ programming language. Designed to numerically solve problems involving Partial Differential Equations (PDEs), it is uniquely suited for PDE-constrained optimization problems. While Computational Fluid Dynamics (CFD) and aerodynamic shape optimization were influential in driving early development of the code, it has also proven useful in several other disciplines such as electrodynamics, linear elasticity, heat transfer, and chemically reacting flows. The Aerospace Design Laboratory (ADL), in Stanford’s Department of Aeronautics and Astronautics, is actively developing SU2, and external contributions from others are encouraged via their GitHub repository.

Computational analysis tools are an integral part of research, design, and product development in both academia and commercial environments, particularly within the aerospace industry. However, most established and reliable codes are proprietary, prohibitively expensive, or their source code is otherwise unavailable. Similarly, procuring and maintaining the computing resources essential to run simulations is yet another barrier facing end users. To help overcome these problems, the SU2 team provides an open-source, easy-to-use, and accurate flow solver, while Rescale offers its customers a scalable, on-demand, enterprise-class computing platform, with SU2 among its available computational tools.

To demonstrate SU2’s aerodynamic shape optimization capabilities, we ran an inviscid, lift-constrained design optimization to minimize pressure drag for a half-symmetry ONERA M6 wing model (Figures 1a and 1b).


Figure 1a: ONERA M6 analysis domain


Figure 1b: ONERA M6 analysis model

The ONERA M6 wing was designed in 1972 by the ONERA Aerodynamics Department as an experimental geometry for studying three-dimensional, high Reynolds number flows. This widely known wing geometry serves as a reference model for validating new numerical methods and CFD codes due to the availability of empirical data and intriguing flow features. The analysis model surface mesh used in this design optimization is depicted in Figure 2.


Figure 2: ONERA M6 (OM6) analysis model surface mesh

The objective of the design process was to minimize the drag coefficient (CD) by changing the shape of the wing’s airfoil sections along the span. A set of control points was defined as design variables that can move freely in the direction normal to the wing’s upper surface. Additionally, a set of conditions was prescribed to govern the simulation as follows:

Pressure: P0 = 2116 lb/ft²
Temperature: T0 = 32°F
Mach Number: M0 = 0.8395
Angle of Attack: α = 3.06°
Reference Area: A = 8.16 ft²
Lift Coefficient: CL ≥ 0.285
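For reference, these freestream values correspond to standard sea-level conditions; the quick conversion below (our own sanity check, not part of the SU2 case definition) makes that explicit in SI units.

```python
# Convert the prescribed freestream conditions to SI units
# (a sanity check on the case setup, not part of SU2 itself).
LBF_PER_FT2_TO_PA = 47.880  # 1 lb/ft^2 in pascals
FT2_TO_M2 = 0.09290304      # 1 ft^2 in square meters

p0_pa = 2116 * LBF_PER_FT2_TO_PA        # ~101,314 Pa
t0_k = (32 - 32) * 5.0 / 9.0 + 273.15   # 32 deg F = 0 deg C = 273.15 K
ref_area_m2 = 8.16 * FT2_TO_M2          # ~0.758 m^2

print(f"P0 = {p0_pa:,.0f} Pa (standard atmosphere is 101,325 Pa)")
print(f"T0 = {t0_k:.2f} K")
print(f"A  = {ref_area_m2:.3f} m^2")
```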

In drag minimization it is appropriate to constrain the lift coefficient because induced drag is a substantial component of the overall drag; without the constraint, the optimizer could reduce total drag simply by shedding lift. The solution proceeds by solving the direct flow problem, followed by the continuous adjoint problem. Next, the gradient of the objective function (pressure drag in this example) is calculated with respect to the movement of the control points, i.e., the design variables. At each design iteration, the optimizer drives the shape changes, based on the computed gradients, to minimize the prescribed objective function while maintaining lift. For our simulation, we ran sixteen design cycles and observed the results highlighted in Figures 3a, 3b, and 4 below.
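Structurally, each design cycle is one step of a gradient-based constrained optimization. The sketch below shows that loop using SciPy’s SLSQP optimizer, with solve_flow and solve_adjoint as hypothetical placeholders for the direct and continuous-adjoint SU2 solves (cheap analytic dummies are substituted so the script runs end to end; in practice SU2’s own Python utilities drive this loop).

```python
# Structural sketch of lift-constrained drag minimization.
# solve_flow / solve_adjoint are hypothetical stand-ins for SU2's
# direct and continuous-adjoint solves, not SU2's actual interface.
import numpy as np
from scipy.optimize import minimize

N_DV = 50  # number of free-to-move control points (design variables)

def solve_flow(x):
    """Direct problem stand-in: deform the shape by x, return (CD, CL)."""
    cd = 0.0118 + 0.5 * float(x @ x)       # dummy quadratic drag model
    cl = 0.286 - 0.01 * float(np.sum(x))   # dummy linear lift model
    return cd, cl

def solve_adjoint(x, objective):
    """Adjoint stand-in: gradient of the chosen objective w.r.t. x."""
    return x.copy() if objective == "drag" else np.full_like(x, -0.01)

def drag(x):
    cd, _ = solve_flow(x)
    return cd

def lift_margin(x):
    _, cl = solve_flow(x)
    return cl - 0.285  # SLSQP enforces inequality constraints as >= 0

result = minimize(
    drag,
    x0=np.zeros(N_DV),                     # start from the baseline shape
    jac=lambda x: solve_adjoint(x, "drag"),
    constraints=[{"type": "ineq",
                  "fun": lift_margin,
                  "jac": lambda x: solve_adjoint(x, "lift")}],
    method="SLSQP",
    options={"maxiter": 16},               # sixteen design cycles, as above
)
print(f"CD = {result.fun:.4f} after {result.nit} iterations")
```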


Figure 3a: OM6 initial pressure contours


Figure 3b: OM6 final pressure contours


A comparison of the pressure contours on the upper wing surface is illustrated in Figures 3a and 3b, where the initial direct flow solution is compared with results from the final iteration of the shape optimization. The sharp pressure rise associated with a second shock wave, further downstream of the initial shock, has been nearly eliminated. A plot of the drag coefficient, CD, versus design cycle is shown in Figure 4, highlighting how the pressure drag falls from roughly 0.0118 to 0.0089 (a 24.6% reduction) over the course of the shape optimization.


Figure 4: OM6 drag coefficient, CD, shape optimization history

For more information regarding SU2, please visit Stanford’s website at http://su2.stanford.edu. An active user community, with direct feedback from SU2’s developers, is also available online. For information regarding setting up and running SU2 on Rescale, please contact us at support@rescale.com.

This article was written by Rescale.