Guest post by Seth Morris, Aerodynamicist, Richard Childress Racing

In the world of stock-car racing, finding even the smallest competitive advantage is the difference between winning and losing.

That’s why at Richard Childress Racing, we design and build our race cars end-to-end. We engineer and machine our own chassis and suspension components, we design and fabricate our own bodies, and we test and build our own engines. Everything is built from the ground up at RCR.

Perhaps that’s why we’ve been so successful over our 48-year history – dating back to 1969, when our owner, Richard Childress, got his start in NASCAR. Since then, we’ve won 17 championships and 200 races and become the first team to win in all three of NASCAR’s top touring series. Richard’s partnership with the legendary Dale Earnhardt made RCR an elite organization, with six Cup Series championships stretching from 1986 to 1994.

Today, RCR fields eight full-time race teams and more than 500 employees on a 40-acre campus, with an engineering staff of over 50 spanning mechanical design, aerodynamics, simulation, strategy, and research and development. Our aerodynamics team is six engineers strong, with another four fabricators tasked with crafting the components we test in the wind tunnel. Three of those six engineers are dedicated to simulating the aerodynamics of the car using computational fluid dynamics (CFD).

This article was written by Rescale.

San Francisco, CA — Rescale is pleased to announce a channel partnership with Engineering Simulation and Scientific Software Ltd. (ESSS), an ANSYS channel partner based in Brazil, to deliver tailored cloud solutions to ESSS clients in Central and South America. ESSS brings innovative multi-physics CAE solutions to the Latin American market, distributing licenses for CAE software such as ANSYS CFD, ANSYS Mechanical, ANSYS Multiphysics, ANSYS Electronics (both high-frequency and low-frequency products), and Rocky.

This article was written by Rescale.

San Francisco, CA — Rescale, the turnkey platform provider in cloud high-performance computing, today announced that it has been named a “Cool Vendor” in the May 2017 report “Cool Vendors in Cloud Infrastructure, 2017” by leading industry analyst firm Gartner.

The report makes recommendations for infrastructure and operations (I&O) leaders seeking to modernize and exploit more agile solutions, including the following:

  • “I&O leaders should examine these Cool Vendors closely and leverage the opportunities that they provide.”
  • “As enterprises grapple with the right mix of on-premises, off-premises and native cloud, choosing a cloud infrastructure vendor becomes more critical.”

This article was written by Rescale.

Over the years, we have published a number of blog posts running MPI microbenchmarks against the offerings from the major public cloud providers. All of these providers have made networking improvements during that time, so we thought it would be useful to rerun these microbenchmarks against the latest generation of VMs. In particular, AWS has released a new version of “Enhanced Networking” that supports up to 20 Gbps, and Azure has released the H-series family of VMs, which offers virtualized FDR InfiniBand.

My colleague Irwen recently ran the point-to-point latency (osu_latency) and bidirectional bandwidth (osu_bibw) tests from the OSU Microbenchmarks library (version 5.3.2) against a number of different VM types on Google Compute Engine. For consistency, we’ll use the same library here with Azure and AWS. The table below includes the best-performing machine from Irwen’s post, the n1-highmem-32. The c4.8xlarge represents an AWS VM type from the previous Enhanced Networking generation, while the m4.32xlarge runs the newer version of Enhanced Networking.
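For readers who want to reproduce this kind of measurement, the sketch below shows one way to drive the two OSU tests from Python on a two-node cluster. It is a minimal illustration rather than the exact harness we used: the benchmark install path, the hostfile name, and the Open MPI-style mpirun flags are assumptions you would adapt to your own environment.

```python
import re
import subprocess

# Assumptions for illustration: adjust to wherever the OSU Microbenchmarks
# (v5.3.2) were built and to your own two-node hostfile.
OSU_BIN_DIR = "/opt/osu-micro-benchmarks/mpi/pt2pt"
HOSTFILE = "hosts"  # one hostname per line, two nodes

def run_osu(benchmark):
    """Run a point-to-point OSU benchmark across two nodes; return {msg_size: value}."""
    cmd = [
        "mpirun", "-np", "2",
        "--hostfile", HOSTFILE,
        "--map-by", "node",  # Open MPI syntax: place one rank on each node
        f"{OSU_BIN_DIR}/{benchmark}",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    results = {}
    for line in out.splitlines():
        match = re.match(r"^\s*(\d+)\s+([\d.]+)", line)  # "<size> <value>" rows
        if match:
            results[int(match.group(1))] = float(match.group(2))
    return results

latency = run_osu("osu_latency")   # latency in microseconds, keyed by message size
bandwidth = run_osu("osu_bibw")    # bidirectional bandwidth in MB/s, keyed by size

print("0-byte latency (us):", latency.get(0))
print("1MB bandwidth (MB/s):", bandwidth.get(1048576))
```

Averaging the reported values over several freshly provisioned VM pairs, as we did for the table below, helps smooth out placement-to-placement variation.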

In the table below, we list results averaged over 3 trials. A new pair of VMs was provisioned from scratch for each trial:

                      0-byte latency (µs)   1 MB bidirectional bandwidth (MB/s)
GCE (n1-highmem-32)         41.04                       1076
AWS (c4.8xlarge)            37.07                       1176
AWS (m4.32xlarge)           32.43                       1152
Azure (H16r)                 2.63                      10807

As you might expect, the Azure H-series VMs comfortably outpace the non-InfiniBand-equipped competition in these tests, with more than 10x lower latency and nearly 10x higher bandwidth. One of the frequent criticisms levied against using the public cloud for HPC is that networking performance is not up to the task of running a tightly coupled workload. Microsoft Azure has shown that it is possible to run a virtualized high-performance networking fabric at hyperscale.

That said, while this is interesting from a raw networking performance perspective, it is important to avoid putting too much stock in synthetic benchmarks like these. Application benchmarks are generally a much better representation of real-world performance. It is certainly possible to achieve strong scaling with some CFD solvers over virtualized 10GigE: AWS has published STAR-CCM+ benchmarks showing close to linear scaling on a 16M-cell model on runs of up to 700 MPI processes. Microsoft has also published STAR-CCM+ benchmarks showing close to linear scaling on up to 1,024 MPI processes with an older generation of InfiniBand-equipped VMs (note that this is not an apples-to-apples comparison, because Microsoft used a larger 100M-cell model in its tests). It is also important to highlight that specialized networking fabric typically comes at a higher price point.

Keep in mind, too, that network speed is just one dimension of performance. Disk I/O, RAM, CPU core count and generation, the type of simulation, and the model size all need to be taken into consideration when deciding which hardware profiles to use. One of the advantages of a multi-cloud platform like Rescale’s ScaleX Platform is that it makes it easy to run benchmarks, and indeed full enterprise HPC workloads, across a variety of hardware configurations simply by changing the core type in your job submission request, as sketched below.
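To make that last point concrete, here is a purely hypothetical sketch of sweeping the same benchmark job across several hardware configurations by changing only the core type. The submit_job helper, the field names, and the core type labels are illustrative placeholders, not Rescale’s actual job-submission API.

```python
# Hypothetical illustration only: submit_job(), the field names, and the core
# type labels are invented placeholders, not Rescale's actual API.

CORE_TYPES = ["coretype-a", "coretype-b", "coretype-c"]  # placeholder hardware profiles

BASE_JOB = {
    "name": "osu-microbenchmarks",
    "command": "mpirun -np 2 --map-by node ./osu_latency && "
               "mpirun -np 2 --map-by node ./osu_bibw",
    "node_count": 2,
}

def submit_job(job_spec):
    """Stand-in for whatever submission mechanism your platform exposes."""
    print(f"Submitting '{job_spec['name']}' on core type {job_spec['core_type']}")

# The job definition stays fixed; only the hardware selection changes per run.
for core_type in CORE_TYPES:
    job = dict(BASE_JOB, core_type=core_type,
               name=f"{BASE_JOB['name']}-{core_type}")
    submit_job(job)
```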

Finally, it is impressive to note how far things have come since the original Magellan report. There is a fierce battle going on right now between the public cloud heavyweights, and we are starting to see hardware refresh cycles that include not only high-performance interconnects but also modern CPU generations (Skylake), as well as GPU and FPGA availability at large scale. The “commodity” public cloud is increasingly viable for a growing number of HPC workloads.

This article was written by Ryan Kaneshiro.