Rescale is pleased to introduce a powerful new feature now available on our platform: In-Browser SSH. For jobs using Linux compute clusters on supported providers, you will now see an SSH panel under the Live Tailing panel that allows you to connect to the running cluster using SSH:

The new In-Browser SSH connection panel

For jobs running on supported hardware, this will be available even if you did not set up an SSH public key in your job settings. However, for maximum flexibility, we still recommend setting it up if you anticipate a need for SSH connectivity.

There are two ways to launch an In-Browser SSH session. Clicking the link itself will open a tab directly in the SSH panel, while clicking the pop-out icon to the right will create a new browser window/tab with the SSH session maximized.

Click the IP link (boxed in red) to open an In-Browser SSH panel in the SSH Sessions tab. Click the pop-out link (circled in blue) to open a session in a new browser window/tab

Opening new SSH sessions in tabs will open the tab directly in the SSH Sessions panel. ANSI colors are supported.

What’s more, if you have a multi-node cluster, you will see a connection link to each node in the cluster. This is tremendously useful if you are trying to quickly connect to several different nodes within the cluster.

A connection entry will appear for each separate node in the cluster.

Note that currently, the active SSH tabs will only persist while the Job Status view is active. In other words, switching away from the Status view to, for example, the Results view and then back will close the SSH panels. If you need to persist the SSH session, we recommend launching a session in a separate browser window or tab.

Since the In-Browser SSH terminal uses an HTML canvas element, copying and pasting must go through an intermediate clipboard. To open it, click the small tab near the top-right corner of the SSH window. Anything inside the text box in this side panel will be available to the SSH session. Paste it into the SSH terminal using Ctrl+Shift+V on Windows or Command+V on a Mac.

Click the tab near the top-right corner of the in-browser SSH panel to toggle the clipboard pane

Note that because the SSH terminal is rendered through a canvas element, no graphical device is available. In other words, this is text-mode only, without X11 forwarding. If you need X11 forwarding, you must use your OS’s native terminal with appropriate X11 support installed.

Also note that the in-browser SSH panel will only be available to owners of the job. If you share the job using the job-sharing functionality, the recipient will not automatically get SSH access to the cluster.

This article was written by Alex Huang.

As a green but eager developer at a fast-paced startup, being tasked with simultaneously learning the code base and writing production-ready, maintainable code can feel like staring straight up at El Capitan. At Rescale, developers are given wide latitude in making software design choices to implement a given feature. Recently, we implemented the ability to create persistent clusters that can accept multiple jobs. Choosing a design pattern that is scalable, maintainable, and aligned with the architecture of the application requires a measure of thoughtfulness. I will share my experience working on this awesome feature.

As a quick recap, a ‘job’ in Rescale parlance is the combination of software (for example, TensorFlow for machine learning or BLAST+ for genetic alignment queries, among the numerous other software packages that Rescale offers), a hardware cluster (e.g., a C4 instance from AWS), and a job-specific command. Prior to the persistent cluster feature, in the typical “basic” use case, the cluster would shut down after the job ran its course. Now, with the persistent cluster feature, a user can build a cluster according to their exact requirements, run multiple jobs throughout the day, and tear it down when done.

This feature comprises new API endpoints, built with the Python Django framework, and several view (UI) changes made with the Angular framework. One of the challenges on the UI side was how to reuse the cluster list view (referred to as the “clusterList View”), which is encapsulated as an Angular directive (referred to as the “clusterList Directive”) and is shown below.
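A minimal sketch of what such a directive definition might look like in AngularJS 1.x (the template path and registration details are assumptions; only the clusterList and clusterListCtrl names come from the article):

```javascript
// Hypothetical sketch of the clusterList directive's definition object,
// as it might be registered via
//   angular.module('app').directive('clusterList', clusterListDirective);
function clusterListDirective() {
  return {
    restrict: 'E',                      // used as an element: <cluster-list>
    templateUrl: 'cluster-list.html',   // assumed template path
    controller: 'clusterListCtrl',      // the controller named in the article
    scope: {}                           // isolate scope
  };
}
```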

A corresponding controller and template complete the directive’s encapsulation. Angular directives provide reusability by encapsulating the HTML template, view logic, and business logic. The core responsibility of this directive is to render a list of clusters; it also includes functionality to delete clusters. The controller for this view includes functionality for pagination and for navigating to a specific cluster for detailed information.

Below is the clusterList Directive reused in the hardware settings view (which itself is its own encapsulated directive —  referred to as the “hardwareSettings Directive”).  In the hardware settings view, a user can select either a persistent cluster (which a user previously spun up) or create a new cluster (which was the previous workflow).  

This view was composed of the following directives (attributes removed for brevity).
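The original markup is not reproduced here; a hypothetical reconstruction, with tag names inferred from the directive names in the article and attributes omitted as noted, might look like:

```html
<!-- hardwareSettings view embedding the reusable cluster list -->
<hardware-settings>
  <cluster-list></cluster-list>
</hardware-settings>
```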

The software design decision centered on how to communicate that a persistent cluster was selected from the clusterList Directive to the controller responsible for the hardwareSettings view.  The hardwareSettings controller includes logic to set the hardware settings on the Job object.  With the addition of the persistent clusters feature, these hardware settings either come from a persistent cluster or from a newly generated cluster.

Publish-Subscribe (‘pub-sub’) vs. Dependency Injection
With Angular, the $emit and $on functions on $rootScope can be used to create an application-wide pub-sub architecture.
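Stripped of the framework, the mechanism can be sketched as a minimal event bus (a simplified stand-in for $rootScope, not Angular’s actual implementation; the topic and payload follow the article):

```javascript
// Minimal event bus mimicking the $emit/$on surface of AngularJS's
// $rootScope (a simplified sketch, not Angular's implementation).
function createEventBus() {
  const listeners = {};
  return {
    // Subscribe a callback to a topic.
    $on(topic, fn) {
      (listeners[topic] = listeners[topic] || []).push(fn);
    },
    // Publish a payload to every subscriber of a topic.
    $emit(topic, payload) {
      (listeners[topic] || []).forEach(fn => fn({ name: topic }, payload));
    }
  };
}

// Subscriber side: react when a cluster is selected.
const $rootScope = createEventBus();
let selected = null;
$rootScope.$on('clusterSelected', (event, cluster) => { selected = cluster; });

// Publisher side: the cluster list announces a selection.
$rootScope.$emit('clusterSelected', { id: 42 });
// selected now holds the cluster object { id: 42 }
```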

The above code transmits a message with the topic ‘clusterSelected’, along with the cluster object, to the event bus within $rootScope. Other parts of the application can tap into this event bus by injecting $rootScope and subscribing to the topic with $on.

The pub-sub design pattern is an elegant way to communicate across boundaries (in this case, across directive boundaries) and is an often-used architecture for communicating across network boundaries. The ease of implementation in Angular, along with a ‘just make this work’ mindset when working under a deadline, sealed the deal for me: pub-sub seemed like the appropriate design pattern.

Another design choice I made was to reuse the cluster list controller (“clusterListCtrl”) in the hardware settings view:
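In AngularJS 1.x, that reuse can be expressed with the ng-controller directive; a sketch (the surrounding markup is hypothetical):

```html
<!-- hardwareSettings template reusing the cluster list's controller -->
<div ng-controller="clusterListCtrl">
  <cluster-list></cluster-list>
</div>
```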

With this choice I expanded clusterListCtrl with extra functions to achieve the behavior required in the hardwareSettings View, adding unnecessary bloat to the controller and further coupling the controller to the directive. I presented my design choices to Alex Kudlick, co-developer on this feature and a seasoned developer with a penchant for dispensing software development wisdom. We discussed the merits of pub-sub vs. dependency injection.

Dependency Injection
Dependency injection, which is closely related to inversion of control, is a pattern wherein the dependency flow is inverted. By contrast, in the pub-sub example, a dependency is created by all subscribers listening to the topic ‘clusterSelected’, which the clusterList Directive publishes when a click is detected; the dependency flows outward from the source.

With dependency injection, a click handler, ‘on-cluster-click’, is injected (passed in) as an attribute on the clusterList Directive, as shown below.
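A sketch of that binding, assuming AngularJS 1.x’s ‘&’ expression binding (the attribute and handler names follow the article; the rest is assumed):

```javascript
// Directive definition exposing an injectable click handler on its
// isolate scope. In markup this would be bound as, e.g.:
//   <cluster-list on-cluster-click="toggleClusterSelected(cluster)">
function clusterListDirective() {
  return {
    restrict: 'E',
    scope: {
      onClusterClick: '&'   // '&' binds the on-cluster-click expression
    }
    // When a row is clicked, the directive invokes the injected handler:
    //   scope.onClusterClick({ cluster: clickedCluster });
  };
}
```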

This attribute becomes a property on the isolate scope of the clusterList Directive. The controller for the hardwareSettings view includes a function on $scope called toggleClusterSelected, which contains the logic to handle a persistent cluster being selected. When the clusterList Directive detects a click, it invokes the toggleClusterSelected function.

The reusability the clusterList Directive gains from dependency injection is illustrated by how it handles clicks in the clusterList View.
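Concretely, the same directive can be bound to a different handler in each view (a sketch; the attribute and handler names follow the article):

```html
<!-- clusterList View: clicking navigates to the cluster's detail page -->
<cluster-list on-cluster-click="goToCluster(cluster)"></cluster-list>

<!-- hardwareSettings View: clicking selects the cluster for the job -->
<cluster-list on-cluster-click="toggleClusterSelected(cluster)"></cluster-list>
```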

The controller for the clusterList View includes the function goToCluster, which is invoked when the click handler is executed in this view. This illustrates the power of dependency injection: by binding different expressions (the toggleClusterSelected and goToCluster methods) to the click handler, the clusterList Directive can behave differently depending on the context in which it is used.

With the pub-sub architecture, transmitting an event application-wide to communicate across boundaries is elegant and simple to accomplish in Angular, and it satisfies the ideal of loose coupling. However, using $rootScope.$emit meant messages were transmitted via $rootScope, the parent scope of the Angular application. This presented a problem: listeners in different parts of the application (there were subscribers to ‘clusterSelected’ in both the hardwareSettings controller and the clusterList controller) would interfere with each other. For example, while in the hardwareSettings View, a click on the cluster list would trigger both goToCluster (not desired) and toggleClusterSelected (the desired behavior). goToCluster won out, and thus the correct behavior was not achieved. The work-around was to use $scope.$emit and $scope.$on, which limited messaging to the relevant scope rather than the application-wide $rootScope. However, the potential for side effects and interference still exists whenever multiple subscribers act on the same message.

Another downside was that the components subscribing to the ‘clusterSelected’ message (the hardwareSettings controller and the clusterList controller) formed an implicit dependency on the clusterList Directive. A future developer would need to examine the body of each controller to discover the dependency on the ‘clusterSelected’ message. With two subscribers this may not be a big deal, but as more subscribers are added, this approach becomes burdensome to maintain and can lead to brittle code.

With dependency injection, the dependencies are explicit: a developer can scan either the attributes in the directive markup or the isolate scope in the directive to get a complete list of dependencies.

Furthermore, the directive is reusable in different contexts by specifying the appropriate dependencies and expression bindings. I learned that, as a software engineer, time must be allocated to thinking through the consequences of a given design choice, with an eye toward maintainability and scalability. Ultimately, explicit is better than implicit.

This article was written by Nam Dao.

We talked to two CFD engineers at Modine Manufacturing Company, an international company that designs, manufactures, and tests heat transfer solutions, about how their company uses Rescale, their motivation for computing on the cloud, and their expectations for how Rescale will impact their design process in the future. Read the conversation below to hear how they’re using Rescale to expand their CFD simulations to larger, more complex transient models while actually saving time.

Rescale: Please introduce yourself and describe your role within the Modine organization.

John: My name is John Iselin. I’m a technical advisor, and I primarily do computational fluid dynamics analysis and provide feedback to our application and product development engineers as to how they might improve their design based on results of CFD simulations.

Victor: And my name is Victor Niño, I’m the virtual technology manager. I oversee all the simulation activity in computational fluid dynamics and finite element analysis, as well as performance model developments and durability and test development.

“Running a transient simulation on ten cores takes us about three months, but with Rescale we could do it in a few days.”

– Victor Niño, Virtual Technology Manager, Modine

Rescale: Can you describe Modine and give some examples of Modine’s product line?

Victor: At Modine, we provide heat transfer management solutions. We serve a lot of industries, including automotive, agriculture equipment, and construction equipment. Another big portion of Modine’s business is HVAC. We focus on designing heat transfer components and systems.

Rescale: Okay, and the product line includes things like radiators and transmission oil coolers?

Victor: Yeah, exactly. We’ll provide radiators, charge-air coolers for diesel engines, oil coolers, exhaust gas recirculation coolers, condensers and evaporators for air-conditioning systems, and battery coolers for electric vehicles.

John: Our primary business is in extended surface heat exchangers.

Rescale: Can you describe your simulation needs and your computing environment before you started using Rescale?

John: We’ve got relatively large workstation computing systems. In North America, we don’t have an in-house grid type of computing environment. Our typical workstations have 10 cores and 64 gigabytes of memory. Memory and computing bandwidth constrain the amount of parallelization we can use in our simulations.

Victor: Modine Germany has a grid that is updated every five years. The current grid has 32 processors, with initial plans for additional expansion.

Rescale: Could you talk a little bit more about the pain points that led you to explore other options, including Rescale?

Victor: It’s basically a computer limitation. We want to eliminate assumptions in our models, which adds complexity and increases the model size. Also, we want to do a lot more transient simulations, which are more complex models that take a long time, but it’s basically impossible to run transient simulations on our computers. Running a transient simulation on ten cores takes us about three months, but with Rescale we could do it in a few days.

John:  Right now, we’re developing methodologies to do conjugate transient simulations. We realized that we didn’t have the computing power, and Rescale allowed us to quickly investigate some of these methodologies without having to purchase our own computing equipment, which would have significantly delayed our development. Time savings is a big driving factor in using Rescale.

“We are now able to simulate the transient nature of the heat exchanger with CFD thanks to the unlimited cores that Rescale provides… We can also do it fast. Now we’re actually providing better results in a very reasonable time.”

– Victor Niño, Virtual Technology Manager, Modine

Rescale: What ultimately led you to select Rescale as your test platform?

Victor: It was a word-of-mouth recommendation. I talked to other CFD engineers at a STAR-CCM+ seminar, sharing our experiences and pain points. The CD-adapco representative said there are a few [cloud] services that you could use. He recommended Rescale based on feedback from other CFD engineers because it is easy to use—easy to upload files and use the GUI without coding.

Rescale: Victor, how important was the GUI in enabling you to do testing? Is the GUI important to the engineering community at Modine, or are you all familiar with command line interfaces?

Victor: I’m not a code developer. I just want to upload, hit a button, get my answer, and then download it—and that’s it. From a management standpoint, I really like that you guys say up front how much it’s going to cost per hour and you can put limits on how much time we want to run so we won’t go over budget. We also have control over the budget. Rescale has very good management capabilities, and that’s very helpful.

Rescale:  How was your experience opening your Rescale account and running initial tests from the beginning?

Victor:  The free trial was very useful. I used it to run a very quick transient simulation to compare how long it would take to run on Rescale. The second model I ran on Rescale was something that a customer needed. It was a really big model that would have taken us three months to run as a transient model, but we were able to run it in a few days on Rescale. The customer was very pleased with our time, especially because we’re providing more detailed results than before.

Rescale: Talk a little bit more about the types of simulations you’re running on Rescale, the size of those jobs, core counts and types, etc.

Victor: We purchased STAR-CCM+’s Power On Demand licensing, which gives us unlimited cores per hour. I have exclusively run transient conjugate heat transfer CFD. My meshes are about 18 million cells. I want to run my simulations as fast as possible; each second matters because transient simulation takes a long time. My licenses give me access to unlimited cores, so I have run about 500 cores in a job, for example. We use Onyx and Nickel core types.

Rescale: How has Rescale changed the way you approach simulation at your company?

John: It’s given us the possibility of doing things that we just couldn’t do before or that were unrealistic because of their time scale. For example, we just couldn’t tie up one of our workstations for two weeks just to grind through a large problem. Rescale has allowed us to respond to our customers in an appropriate fashion.

Rescale:  It sounds like faster time to market with your products—getting your results back to your customers faster—is the biggest benefit that you’re getting. What about enhanced product development? How has making your models more complex impacted the work that you’re doing?

Victor: We knew that transient CFD was the way to improve our methodology. It’s just that we didn’t have the capability, and you guys provided a platform where we could run [transient] simulations in a very short time. For example, with EGR (exhaust gas recirculation), exhaust gas gets piped back to the cylinders, but the gas first has to be cooled down using some coolant. EGR has thermal gradients that create a lot of thermal stresses. There are a lot of failures due to thermal expansion and contraction. We should be able to model that. We are now able to simulate the transient nature of the heat exchanger with CFD thanks to the unlimited cores that Rescale provides. Not only are we able to simulate transient behavior, but we can also do it fast. Now we’re actually providing better results in a very reasonable time. We’re getting rid of a lot of assumptions by going to a transient simulation. Basically, Rescale has allowed us to model the transient nature of the problem.

“The things we’ve learned through transient CFD analysis on Rescale have helped us create better products with a lower failure rate.”

– John Iselin, Technical Advisor, Modine

Rescale: Do you think this is going to have any impact on the amount of physical testing that your research center has to do?

Victor: Oh, simulation is definitely the way to go. Our goal is to do as much simulation within the company as possible. As you’re saying, faster time to market. We’ll save on prototyping and testing. Our goal here as a group is to know a lot about our products before they’re built. With Rescale’s platform allowing us to run these simulations faster, we are starting to see this impact [on the amount of physical testing we do].

John: Our work up until now has improved the quality of our products, but it hasn’t allowed us to cut back on physical prototype testing yet. As we’re able to model our products in a more realistic way, and as our development engineers realize that the simulations are getting the same results as the physical tests, they’re going to reduce physical testing. The things we’ve learned through transient CFD analysis on Rescale have helped us create better products with a lower failure rate.

Rescale: Can you comment on how you see Modine using Rescale in the future?

Victor: Our finite element analysis group is the same story as CFD, just different equations. They also want to add complexity to their models, but because of the limitations of the computers, they are not able to. They’re using a lot of linear calculations, but they want to go non-linear to get better resolution in their calculations. I see the same story happening on the FEA side as well.

John: We make some HVAC products that are sold in California, which has very strict noise requirements on air handling equipment, and I know Modine would like to be able to do some aero-acoustic simulations. Unless we go to cloud computing, we don’t have any hope of doing those types of simulations. That work is a couple of years down the road, but now we can at least talk about it. Before, the computations would be so astronomically long that we couldn’t even think about doing it. All of a sudden, we’re able to consider doing things and exploring areas that we couldn’t consider before.

Rescale: Okay. We do have a number of acoustic-specific codes already loaded on the Rescale platform, but if it’s something that we don’t have, we’re always open to adding new codes. We can make sure that we have the specific codes that you want to run.

John: When we initially established our trial account with Rescale, you did not have the version of STAR-CCM+ that we were using at the time, and within two or three days you had installed that version of the code. We were very appreciative of your response time in getting the exact version of simulation software that we’re using. For inter-compatibility, it’s important for us to have that. We appreciated your flexibility and response time.

About Modine
Modine, with fiscal 2016 revenues of $1.4 billion (prior to the Luvata HTS acquisition), specializes in thermal management systems and components, bringing highly engineered heating and cooling technology and solutions to diversified global markets. Modine products are used in light, medium and heavy-duty vehicles, heating, ventilation and air conditioning equipment, off-highway and industrial equipment and refrigeration systems. Modine is a global company headquartered in Racine, Wisconsin (USA), with operations in North America, South America, Europe, Asia and Africa. For more information about Modine, visit

Sign up for a free trial to see how Rescale can accelerate your simulations or add complexity to your models.

This article was written by Mika Pegors.

Advances in technology over the past decade have significantly decreased the cost of sequencing the human genome. With lower costs, researchers are able to perform population studies for various disorders and diseases. To perform these population studies, they need to sequence thousands of patients, generating a significant amount of data, with 70-80 GB of files for each sample. After sequencing these patients, they need to analyze the data with the goal of determining the cause of these genetic diseases and disorders.

Using the generated data, end users of HPC systems run various analysis workflows and draw their conclusions. On-premise systems have many limitations that affect this workflow, including the rapid growth of on-premise storage needs, the command-line user interface, and full utilization of compute resources. Taking into consideration the logistics of expanding a storage server (purchase order, shipping, and implementation), end users could wait over a month before they can start using their purchased storage. In academic institutions, graduate students (usually coming from a biology background) run analysis workflows on the generated data. Most of the time, these students have never seen a command-line prompt before. As a result, they must learn UNIX basics and HPC scheduler commands before they can even start running their analyses. Finally, when compute resources are fully utilized, queued jobs are delayed from being scheduled. This limitation affects researchers greatly when research paper submission deadlines need to be met.

Managing an on-premise HPC system creates a high workload for an organization’s IT team. IT workers must constantly analyze their cluster usage to optimize the performance of the system. This optimization includes tuning their job scheduler to run as many jobs as possible and capacity planning for growing resources in their data center. Depending on the country, clinical data must be retained for a predetermined number of years. To address this constraint, IT workers must also implement and manage an archive and disaster recovery system for their on-premise storage in the event that researchers’ studies are audited.

These end-user limitations can be resolved, and the workload on IT reduced, through the use of cloud services like Rescale. With Rescale storage, end users pay for exactly what they use and can provision their desired amount of storage instantly. Users can set policies in Rescale storage to automate archiving data. Through our cloud storage solutions, data redundancy is as simple as clicking a checkbox. What’s more, researchers who adopt a cloud-native compute environment will be best positioned to fully realize the benefits of the cloud by avoiding file-transfer bottlenecks. Researchers should first move their data to the cloud, then incrementally push newly sequenced data to the cloud. The one-time cost of this transfer pays off in the long run: the cloud offers researchers a highly flexible, scalable, long-term solution that puts unlimited compute resources at their fingertips so they can always meet their deadlines.

Rescale’s cloud platform enables researchers to increase the speed and scale of their genetic analyses. As a result, they are able to obtain the qualitative and quantitative data needed to publish their research. Discoveries made in these papers will advance personalized medicine and eventually be applied in clinical settings, with the goal of improving individuals’ health and quality of life, and creating a better world.

If you are interested in learning more about the Rescale platform or wish to start running your analysis in the cloud, create a free trial account at

This article was written by Brian Phan.