Background and Challenge
Adaptive Corporation is a leading Digital to Physical Product Lifecycle Company that helps streamline business processes, reduce costs, and improve efficiencies for customers who need to bring new products to market. Our CAE team runs explicit crash/impact simulations and large non-linear analyses of structures and components for our customers, who include leading manufacturers in Industrial Equipment, Aerospace, Auto, and Life Sciences. For these simulations, we use Abaqus, Nastran, Adams, Isight, fe-safe, and Tosca.

Our CAE consulting workload is project-based and varies quite a bit over the course of the year. Prior to using Rescale, we custom-built or purchased our own HPC systems, and we were resource-constrained: we either had underutilized computing capacity between projects or not enough capacity when multiple projects were underway at the same time. Too often, we hit bottlenecks when we needed to solve multiple jobs simultaneously because we were limited by our smaller HPC system, and projects would run over schedule as a result.

Our mission is to craft tailored solutions that help our customers shorten development and production cycles throughout product planning, development, manufacturing, and aftermarket service processes. The simulation and analysis bottlenecks were undermining our ability to do that, so we needed to look for a solution. In addition, our on-premise hardware was out of date within two or three years, so every few years we spent a lot of time and effort upgrading it, and we needed an IT support team to maintain the servers and licenses.

The Rescale Solution
Two years ago, as bottlenecks became more frequent, we started to look to the cloud for bursting. We chose Rescale due to competitive pricing and their ability to work with us to customize the solution to our IT needs. We ditched our existing HPC system and now have a “reverse hybrid” cloud computing environment, in which we mostly rely on Rescale’s unlimited, on-demand capacity to meet the demands of our simulation engineers but occasionally run jobs on our desktops as well.

We have been running Abaqus on Rescale since 2014 and since we do so much computing on Rescale, we actually host our license on their server as well. On occasion, when we need to run simulations on our desktop computers, we’ll tap into our license server on Rescale. It’s a seamless process that saves us the headache of license management.

Results and Benefits
With instant access to more CPUs and faster hardware on Rescale, optimized to our simulation needs, our CAE consulting business has been able to solve jobs significantly faster than we could before: more than 10x faster. And with the availability of on-demand resources in the cloud, we now have access to hardware that can respond to our constantly changing compute needs. We've found it to be much more cost-effective to use the cloud, paying only for the hardware that we actually use, and to always have access to the current hardware offerings.

We’ve also been able to significantly reduce our IT overhead. We’ve eliminated the need for license server maintenance completely. We’ve reduced IT spending dramatically, never have underutilized compute capacity, and our employees can focus on engineering. We can now focus our resources on the value-added business activities, not on managing hardware. Rescale has helped us improve our operational efficiency.

About Adaptive Corporation
Adaptive Corporation is the leading Digital to Physical Product Lifecycle Company that helps streamline business processes, reduce costs, and improve efficiencies for customers who need to bring new products to market. Adaptive's growth is powered by working closely with its 500+ customers to overcome the challenges around product development. Our customer base includes leading manufacturers of industrial and consumer products and the suppliers that provide the underlying sub-systems. Adaptive's unique "Digital to Physical" product portfolio includes CAD/CAM, CAE, PLM, business analytics, metrology, and 3D printing solutions from leading IT providers. Using a combination of these offerings, the Adaptive team crafts tailored solutions that help our customers shorten development and production cycles throughout product planning, development, manufacturing, and after-market service processes. For more information, visit

This article was written by Adaptive Corporation.


San Francisco, CA – Rescale and ANSYS are pleased to announce a strategic partnership that will provide users of ANSYS’ suite of engineering simulation software the ability to run their simulations on Rescale’s scalable, turn-key, cloud-based platform. ANSYS’ new Elastic Licensing for cloud computing, which allows users to purchase pay-per-use licensing, is available to be hosted and deployed on Rescale, while ANSYS users can also use their own traditional lease and paid-up licenses.

ANSYS offers simulation-driven product development tools across the entire range of physics, including structural mechanics, computational fluid dynamics (CFD), electromagnetics, explicit dynamics, as well as the combination of these physics. Now, eight of those tools (including CFX, CHEMKIN-PRO, Fluent, HFSS, Maxwell, and Mechanical) are available, pre-configured and optimized, on Rescale’s powerful, turn-key simulation platform. By accessing the Rescale platform through any browser, ANSYS users can easily run sophisticated engineering simulations on Rescale’s global multi-cloud HPC network of 60+ data centers in 25+ locations. Engineers can scale out on thousands of cores and choose hardware configurations optimized to the diverse needs of the ANSYS suite, with options ranging from economical HPC configurations to cutting-edge bare metal systems, low-latency InfiniBand interconnect, and the latest Intel and NVIDIA chipsets.

Pay-as-you-go, no-commitment hardware and ANSYS Elastic Licensing ensure that dynamic HPC resources are responsive to the variability in simulation loads that engineering enterprises face, which in turn enables shortened product design cycles and increased IT agility. “Customers love the flexibility that our turn-key solution and pay-as-you-go hardware give them, and we are excited to be extending that flexibility to ANSYS software users. Agile resources will help our mutual customers utilize computing and simulation to drive new innovations faster across a broad spectrum of industries,” said Rescale co-founder and CEO Joris Poort. Additionally, enterprises can leverage Rescale’s built-in administration and collaboration tools to manage resources and improve results across the design portfolio, as well as trust in best-in-class security features such as built-in enterprise-grade encryption, SOC2 compliance, and an ITAR compliant platform.

Ray Milhem, VP of Enterprise Solutions and Cloud at ANSYS, echoed the importance of flexibility, saying, "We've always given our customers a number of deployment options with our comprehensive range of engineering simulation software products. Now, by making our software available on Rescale with Elastic Licensing, we're also giving them the option to elastically expand their simulations and hardware to better meet project timing. Engineers have more power and more flexibility than ever before."

About Rescale
Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s platform transforms traditional fixed IT resources into flexible, hybrid, private, and public cloud resources – built on the largest and most powerful high-performance computing network in the world. For more information on Rescale products and services, visit

This article was written by Rescale.


Sandi Metz recently wrote an article proclaiming that duplication is cheaper than the wrong abstraction. This article raises valuable points about the costs of speculative generalization, but it’s part of a long line of articles detailing and railing against those costs. By now it should be old hat to hear someone criticize abstraction, and yet the meme persists.

The aspect of Sandi Metz's article that I'd like to respond to in this post is the mindset it promotes, or at least the mindset that has responded to it the most. This mindset is very common in comments: just get the task done, nothing more. Sometimes that's appropriate and the right approach to take, but the problem here is that the costs of abstraction, especially when it's gotten wrong, are obvious, while the costs of the "simplicity first" mindset are not. I won't talk about the specific costs of duplicated code, as those are already well known. I will talk about the opportunity costs: the missed learning opportunities.

Good developers should be constantly learning, constantly honing their skills. There’s always room to improve. The skill that’s most important for developers to practice is recognizing profitable abstractions, because doing so correctly relies on honed intuition. It takes seeing costs manifest over the long term, and it takes making mistakes. Developers should be constantly evaluating their past decisions and taking risks on new ones.

Opportunity cost is an often overlooked aspect of technical debt. The reason accumulating technical debt is the cheaper choice in the moment is that it takes a path for which the solution is already known. There’s nothing to learn, just implement the hack. That’s fine in small doses, but it forgoes the opportunity to learn things about the codebase, to discover missing abstractions and create conceptual tools that can help solve the problem.

So what the developers in Sandi Metz’s example should have done is noticed that this particular abstraction was costing them more than it was benefiting them. That’s a good thing to notice – it’s a valuable learning experience. What specific aspects of the abstraction were slowing down development? Which parts confused new developers and led them to make it worse? These are questions the developers should have asked themselves in order to learn from the experience.

Our development team has a weekly practice that we call "Tech Talks," in which a developer talks about something they learned that week, some part of the codebase that was thornier than it should have been, and so on. This practice is invaluable for promoting a growth mindset, and the situation from Ms. Metz's article would have been a perfect example to bring up.

Developers shouldn’t focus on just cranking out code. Those who limit their attention in such a way aren’t growing and will soon be surpassed by better tools. Instead, we should recognize that the job of a developer is to understand which abstractions will prove valuable for the codebase. The only way to learn that is through experience.

This article was written by Alex Kudlick.

On November 11th, Norway’s Magnus Carlsen will defend his chess world championship against Sergey Karjakin of Russia. The unified championship will return to New York and American soil for the first time since 1990, when two chess legends, Kasparov and Karpov, met for the last time in a world chess championship match. Since then, chess’ popularity in the United States has slowly increased, as has the strength of its players. Just two months ago, the United States men’s team won the 42nd Chess Olympiad for the first time in 40 years. They are now led by Top 10 players Caruana, So and Nakamura.

Nevertheless, World Champion Magnus Carlsen has dominated chess for the last 5 years and is rightfully in position to defend his world championship. In preparation for important tournaments like the world championship match, Grandmasters almost always hire a team of "seconds" (other grandmasters) to assist them in preparation. Their main job is to analyze moves for the opening phase of the game to maximize the charted territory, if you will, of their player. One of the most important tools they use in this analysis is the computer engine: a piece of software that objectively evaluates any chess position.

Top chess Grandmasters have a bit of a love-hate relationship with computer chess engines. Although engines have become essential and invaluable tools for training and preparation, many lament the loss of creativity caused by the extensive charting of opening sequences known as the opening book. Players take fewer risks in openings because a well-prepared opponent will easily expose creative but unsound ideas. The capacity of top Grandmasters to memorize the thousands of variations in an opening book then becomes a limiting factor. Some grandmasters, such as Carlsen, intentionally play less-analyzed but slightly suboptimal moves early on so they can "just play chess" instead of challenging the opponent's preparation and memorization skills.

Today, grandmasters and amateurs alike use chess engines for training and analysis. There are many different chess engines (some paid, some open source), all of which essentially do the same thing: evaluate a chess position. When playing a game against a chess engine, the "strength" of the engine is not important for most beginner-to-intermediate players. An engine that runs on an iPhone can easily beat most amateur players; you'd have to artificially dial down its strength to get a competitive game. We have reached the point where chess engines can beat any human in a game with "classical" time controls (90 minutes for the first 40 moves). The best chess engines have Elo ratings of 3200+, while the highest rating ever achieved by a human player has been just shy of 2900. It is therefore no longer interesting for humans to compete against chess engines. Instead, there are now leagues that feature only chess engines, competing against each other under fixed conditions.


Image: Jerauld, Brian. "A Brief Postmortem." ChessBase, 27 Apr. 2015.
2. f4 was a real cool move you played there, Garry. I think… Ok, let's ask Stockfish

How it Works
Here is a very high-level overview of how chess engines work.

There are three distinct stages within a chess game that chess engines can handle differently. The opening uses an opening book, a database of predefined lines of moves. Once the engine is "out of book," it uses its evaluation and tree search capabilities to find its best moves. Lastly, in positions with few pieces on the board, an engine can use an endgame tablebase, which stores the perfect-play result and best move for every position with that material.
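As a sketch, the three stages above can be strung together like this. Everything here is invented for illustration: a "position" is just a tuple of moves played, and the book and tablebase contents are toy data, not real chess knowledge:

```python
# Sketch of how an engine picks the source of its move per game stage.
# Toy data: positions are move-history tuples; book/tablebase entries are made up.

OPENING_BOOK = {
    (): "e2e4",                # starting position -> a booked first move
    ("e2e4",): "e7e5",
}

TABLEBASE = {
    ("K", "R", "k"): "Ra8#",   # pretend 3-man table: K+R vs K is a known win
}

def search_best_move(position):
    """Placeholder for the evaluation + tree search stage."""
    return "g1f3"

def pick_move(position, pieces_on_board):
    # Stage 1: opening book lookup while the line is still "in book".
    if position in OPENING_BOOK:
        return OPENING_BOOK[position], "book"
    # Stage 3: with few enough pieces left, consult the endgame tablebase.
    if len(pieces_on_board) <= 3 and tuple(pieces_on_board) in TABLEBASE:
        return TABLEBASE[tuple(pieces_on_board)], "tablebase"
    # Stage 2: otherwise fall back to evaluation + tree search.
    return search_best_move(position), "search"
```

Real engines use binary book formats and multi-gigabyte tablebases, but the dispatch logic follows this shape.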

The most important part of a chess engine is its ability to evaluate a static position as efficiently as possible. It uses this evaluation in conjunction with a tree search to find the best move or moves in the current position. It can store evaluated positions in a hash table (a transposition table) so it doesn't have to recalculate a given position more than once. The deeper the engine can search the tree, theoretically, the more accurate its evaluation of the current position and its prediction of the best move.
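The evaluation-plus-tree-search loop with a position cache can be sketched in a few lines. This is a plain negamax search over an invented toy game tree, not a real chess search; the tree, leaf scores, and names are illustrative only:

```python
# Minimal negamax tree search with a transposition table.
# The "game" is a toy tree whose leaves carry static evaluations
# (from the side-to-move's perspective); a real engine would
# generate legal chess moves and run a static evaluator instead.

TREE = {                      # position -> list of child positions
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
EVAL = {"a1": 3, "a2": -2, "b1": 5, "b2": -4}   # static evaluations of leaves

def negamax(pos, table):
    if pos in table:             # transposition table hit: reuse prior result
        return table[pos]
    if pos not in TREE:          # leaf: return the static evaluation
        score = EVAL[pos]
    else:                        # interior node: best child, sign-flipped,
        score = max(-negamax(child, table) for child in TREE[pos])
    table[pos] = score           # memoize so repeats aren't re-searched
    return score
```

Production engines add alpha-beta pruning, iterative deepening, and move ordering on top of this skeleton, but the table lookup before searching is exactly the "don't recalculate a position twice" idea described above.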

Which specific algorithms make one chess engine better than another is up for debate. In general, being able to search a tree faster is not useful if your static evaluation is inaccurate. For this reason, you need to test engines against other engines, or against different versions of the same engine, to make sure that incremental improvements have the desired effect. There are also competitions, such as TCEC, which pit engines against each other; these competitions accelerate the development of new evaluation techniques in chess engines.

In general, tree search is a fairly brute-force way of evaluating a position, even though there is a lot of complex theory behind optimizing this kind of algorithm. Most chess engines today are therefore "dumb" tools: you give them a position and they evaluate it. When Grandmasters use an engine for preparation, there's always a human in the loop to tell the engine which positions to evaluate and when to stop evaluating. A next step in the development of chess engines is the inclusion of Artificial Intelligence (AI). As we saw in AlphaGo vs. Lee Sedol, incorporating AI in board game engines significantly increases their strength and utility. AI will allow players to use engines for specific purposes. For example, by feeding the engine every game an opponent has ever played, we can study how that opponent reacts to certain positions and what their tendencies are, and generate a strategy specifically for them. At a high level, we can learn from patterns in positions and correlate them to game outcomes given the strength of each player. With these new technologies and the increasing strength of engines, there is no doubt that the landscape of competitive chess will change.

Hardware Considerations
Today's strongest engines parallelize poorly or not at all across multiple nodes because of the current state of parallel tree search algorithms. Some attempts have been made to parallelize over multiple nodes using distributed search algorithms, but these versions are not used extensively in the chess community. So, the practical approach today is to analyze different positions on different instances of the chess engine, with a human in the loop. The single-node limitation of most chess engines means that a large multi-core SMP machine can significantly outperform, say, a number of laptops networked together.

Despite the single-node limitation of chess engines, clusters can still be used to run many evaluations in parallel for analysis. Clusters are also useful for developing chess engine software: simulating many games or positions is one of the only ways to make sure that changes to the engine code actually make it stronger.

How to Run a Chess Engine on Rescale
Rescale currently provides a framework for running UCI chess engines. It’s a bring-your-own engine setup. If you do not provide a chess engine, it will run Stockfish 7 by default.

Once you launch a job with the chess engine, it will broadcast and listen on port 30000. You will need to set up an SSH tunnel to forward a local port to port 30000 on the analysis node. See the video below for a complete overview of how to run Stockfish on Rescale with the Scid client:

You can run any UCI engine. Make sure you name the engine executable “engine” and upload it as an input file. Rescale will automatically use the uploaded engine:

You can even run two engines against each other. If you wanted to run, say, Komodo against Stockfish, you would start two jobs on Rescale, each running a different engine. Just make sure you forward a different local port to your second engine:

The key components of linking your UCI client to the engine running on Rescale are the SSH tunnel and a raw connection to the engine using either netcat (nc) on Linux/macOS or plink.exe on Windows.
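As a sketch, the tunnel and raw connection described above might look like the following. The hostnames, usernames, and local port numbers are placeholders you would replace with the values for your own jobs; port 30000 on the analysis node is the one the article says the engine listens on:

```shell
# Forward local port 9000 to port 30000 on the analysis node
# (-N: no remote command, just forwarding; run in the background).
ssh -N -L 9000:localhost:30000 rescale_user@analysis-node-address &

# In the UCI client, register an "engine" whose command is a raw
# connection to the forwarded port (use plink.exe on Windows):
nc localhost 9000

# For engine-vs-engine play, a second job (e.g. running Komodo) just
# needs a second tunnel on a different local port:
ssh -N -L 9001:localhost:30000 rescale_user@second-node-address &
nc localhost 9001
```

The UCI client then talks to `nc` exactly as if it had launched a local engine binary, with the SSH tunnel carrying the UCI text protocol to the cloud node.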

Armchair Chess QB
The transmission of live games with “live evaluation” on sites such as Chessbase or Chessbomb allows every chess enthusiast to be an armchair quarterback during tournaments such as the upcoming world championship. This year, take it a step further and do the analysis yourself using Chess Engines on Rescale.

This article was written by Mulyanto Poort.