By Jerry Gutierrez, Global HPC Solution Leader, Bluemix Infrastructure (SoftLayer), IBM Cloud & Tyler Smith, Head of Partnerships, Rescale

IBM Bluemix offers an impressive array of leading high-performance computing (HPC) infrastructure, including bare metal servers and the latest GPUs, on an hourly basis to customers all over the world. But as HPC technology gets more advanced, it takes more knowledge to leverage effectively, and we recognize that often the people who need HPC aren’t experts in HPC; they’re experts in something else. They’re data scientists, automotive and aerospace manufacturers, and bioinformaticians. With challenging, high-stakes problems to solve, the intricacies of HPC implementation are just a hurdle to overcome on their quests.

Rescale’s web-based platform for running HPC on the cloud delivers the performance these engineers and scientists need in a turnkey, user-friendly experience. Rescale’s ScaleX platform gives users control over their data, hardware, and software while automating the complex tasks of HPC configuration and optimization. Running on IBM Bluemix infrastructure, Rescale makes sophisticated compute capabilities accessible to the users who need them most.

Here are just a few ways that Rescale simplifies HPC for all those trailblazers, helping put IBM Bluemix’s world-class HPC infrastructure network to work curing cancer, finding life on Mars, and predicting the future:

  1. Automated cluster configuration
    Building out your HPC cluster each time you want to run a job can be time-consuming and complicated, especially for the HPC novice. Doing hours of legwork just to get your computations started undermines the value of being able to burst to the cloud exactly when you need it. Rescale automates cluster configuration, making the user’s task as simple and quick as choosing their hardware and cluster size and clicking “Submit.” It takes just a few clicks to spin up a cluster (see the sketch after this list).
  2. Broad portfolio of pre-tuned and optimized software applications
    When a customer comes to IBM Bluemix, they install their own software based on their specific needs. Then they run benchmarks and tune that software to run on IBM Bluemix infrastructure. Again, this takes wizard-level HPC knowledge, and if you skip this step your performance will degrade or your problem might not even be solvable in the cloud. Rescale has a team of HPC, CAE, and deep learning experts who have automated and productized the tuning and optimization process for the applications on its platform. That saves a lot of time for our shared customers and ensures they effortlessly get the maximum performance out of our hardware. Plus, Rescale offers hourly on-demand licenses, cloud license-hosting, and license proxy tooling to simplify the tangle of cloud licensing models software users must navigate if they want to leverage the cloud.
  3. Cloud management features
    Using the cloud to scale out your computations virtually without limit is game-changing, but it presents its own set of challenges at enterprise scale. Rescale has an administrative portal for IT teams to manage budgets and permissions for large, multi-disciplinary teams and projects. It also integrates easily with existing private infrastructure hardware and schedulers for seamless hybrid cloud deployment. Collaboration features allow users to share jobs with their colleagues in real time, without having to download or transfer large files. These features and others give enterprise employees the raw power of HPC while ensuring the organization stays productive, cost-effective, and secure.
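
To make the first point concrete, here is a minimal sketch of what programmatic job submission can look like against a REST API like Rescale’s. The endpoint paths, field names, and core-type identifier below are illustrative assumptions rather than the verbatim API; consult Rescale’s API documentation for the exact schema.

```python
# Minimal sketch: creating and submitting a job through a Rescale-style
# REST API. Endpoint paths, field names, and the core-type identifier
# are assumptions for illustration -- check the current API docs.
import requests

API_BASE = "https://platform.rescale.com/api/v2"  # assumed base URL
API_TOKEN = "YOUR_API_TOKEN"                      # account-specific token
HEADERS = {"Authorization": f"Token {API_TOKEN}"}

job_spec = {
    "name": "demo-cluster-job",
    "jobanalyses": [{
        "analysis": {"code": "user_included"},  # hypothetical solver code
        "command": "./run_simulation.sh",       # the command your job runs
        "hardware": {
            "coreType": "hpc-plus",             # hypothetical core-type slug
            "coresPerSlot": 16,                 # cluster size, in cores
        },
    }],
}

# Create the job definition...
resp = requests.post(f"{API_BASE}/jobs/", json=job_spec, headers=HEADERS)
resp.raise_for_status()
job_id = resp.json()["id"]

# ...then submit it; cluster provisioning and teardown happen automatically.
requests.post(f"{API_BASE}/jobs/{job_id}/submit/", headers=HEADERS).raise_for_status()
print(f"Job {job_id} submitted")
```

Conceptually, this define-then-submit flow is what sits behind the web UI’s “Submit” button, which is why no scheduler or cluster setup appears anywhere in the user’s workflow.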

In short, Rescale’s synergies with IBM Bluemix open the doors to world-class HPC, increase utilization of our infrastructure, and make Rescale a valued partner. We have big things in the pipeline for 2017 and beyond, and we’re excited to bring them to market with Rescale at our side.

To learn more, watch Rescale’s Head of Partnerships, Tyler Smith, present on “How to Leverage New HPC and AI Capabilities Via the IBM Cloud” at IBM InterConnect on Thursday, March 23, 2017 at 11:30 am PDT in Mandalay Bay North, Level 0, South Pacific D.

See this post on the IBM Bluemix blog.

This article was written by Tyler Smith.

Advances in technology over the past decade have significantly decreased the cost of sequencing the human genome. With lower costs, researchers are able to perform population studies for various disorders and diseases. To perform these population studies, they need to sequence thousands of patients, generating a significant amount of data: 70-80 GB of files per sample. After sequencing these patients, they need to analyze the data with the goal of determining the cause of these genetic diseases and disorders.

Using the generated data, end users of HPC systems run various analysis workflows and draw their conclusions. On-premise systems have many limitations that affect this workflow: the rapid growth of on-premise storage needs, the command-line user interface, and contention for fully utilized compute resources. Taking into consideration the logistics of expanding a storage server (purchase order, shipping, and implementation), end users could be waiting over a month before they can start using their purchased storage. In academic institutions, graduate students (usually coming from a biology background) run the analysis workflows on the generated data. Most of the time, these students have never seen a command-line prompt before, so they must learn UNIX basics and HPC scheduler commands before they can even start running their analyses. Finally, when compute resources are fully utilized, queued jobs sit waiting to be scheduled; this delay hits researchers hardest when research paper submission deadlines need to be met.
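
To make that learning curve concrete, the sketch below shows the kind of scheduler boilerplate a new student must master just to queue one alignment job on a typical SLURM-managed cluster. The module name, file paths, and resource limits are illustrative placeholders, not values from any particular site.

```python
# Illustrative only: generating and queuing a single SLURM batch job for
# a read-alignment step. A new graduate student has to learn all of this
# (shell scripting, SBATCH directives, environment modules, scheduler
# commands) before any science happens. Paths and module names are
# hypothetical.
import subprocess
import textwrap

batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=align-sample-001
    #SBATCH --cpus-per-task=16
    #SBATCH --mem=64G
    #SBATCH --time=24:00:00
    module load bwa                    # site-specific environment module
    bwa mem -t 16 ref.fa \\
        sample_001_R1.fastq.gz sample_001_R2.fastq.gz > sample_001.sam
""")

with open("align.sbatch", "w") as f:
    f.write(batch_script)

# sbatch only queues the job; on a fully utilized cluster it may then
# wait hours or days for resources -- the delay described above.
subprocess.run(["sbatch", "align.sbatch"], check=True)
```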

Managing an on-premise HPC system creates a high workload on an organization’s IT team. IT workers must constantly analyze their cluster usage to optimize the performance of the system. This optimization includes tuning their job scheduler to run as many jobs as possible and capacity planning for growing resources in their data center. Depending on the country, clinical data must be retained for a predetermined number of years. To address this constraint, IT workers must also implement and manage an archive and disaster recovery system for their on-premise storage in the event that researchers’ studies are audited.

These end-user limitations can be resolved, and the workload on IT reduced, through the use of cloud services like Rescale. With Rescale storage, end users pay for exactly what they use and can provision their desired amount of storage instantly. Users can set policies on Rescale storage to automate archiving, and with our cloud storage solutions, data redundancy is as simple as clicking a checkbox. What’s more, researchers who adopt a cloud-native compute environment are best positioned to realize the full benefits of the cloud by avoiding file-transfer bottlenecks: move existing data to the cloud once, then incrementally push newly sequenced data as it is generated. The one-time cost of that initial transfer pays off in the long run, because the cloud offers researchers a highly flexible, scalable, long-term solution that puts effectively unlimited compute resources at their fingertips so they can always meet their deadlines.
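
A minimal sketch of that incremental-push pattern follows, assuming a Rescale-style file-upload endpoint and a local manifest to track which sequencing runs have already been transferred. The endpoint path and directory layout are assumptions for illustration.

```python
# Sketch: after the initial bulk transfer, push each newly completed
# sequencing run to cloud storage as it appears, so compute jobs never
# wait on file transfers. The upload endpoint and paths are assumed.
import json
from pathlib import Path

import requests

API_BASE = "https://platform.rescale.com/api/v2"       # assumed base URL
API_TOKEN = "YOUR_API_TOKEN"
HEADERS = {"Authorization": f"Token {API_TOKEN}"}
RUNS_DIR = Path("/data/sequencer/completed_runs")      # hypothetical layout
MANIFEST = Path("uploaded_runs.json")                  # local upload record

uploaded = set(json.loads(MANIFEST.read_text())) if MANIFEST.exists() else set()

for run_dir in sorted(RUNS_DIR.iterdir()):
    if not run_dir.is_dir() or run_dir.name in uploaded:
        continue  # already pushed, or not a run directory
    for fastq in sorted(run_dir.glob("*.fastq.gz")):
        with fastq.open("rb") as fh:
            requests.post(
                f"{API_BASE}/files/contents/",         # assumed endpoint
                files={"file": (fastq.name, fh)},
                headers=HEADERS,
            ).raise_for_status()
    uploaded.add(run_dir.name)
    MANIFEST.write_text(json.dumps(sorted(uploaded)))  # checkpoint per run
```

Run from cron or a systemd timer on the sequencing workstation, a script like this keeps cloud storage current without anyone babysitting transfers.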

Rescale’s cloud platform enables researchers to increase the speed and scale of their genetic analyses. As a result, they are able to obtain the qualitative and quantitative data needed to publish their research papers. Discoveries made in these papers will advance personalized medicine and eventually be applied in clinical settings, with the goal of improving individual health and quality of life, and creating a better world.

If you are interested in learning more about the Rescale platform or wish to start running your analysis in the cloud, create a free trial account at rescale.com.

This article was written by Brian Phan.