The collaboration will bring next-generation compute power to the cloud

San Francisco, CA — Rescale is pleased to introduce the ScaleX Labs with Intel® Xeon Phi™ processors and Intel® Omni-Path Fabric managed by R Systems. The collaboration brings lightning-fast, next-generation computation to Rescale’s cloud platform for big compute, ScaleX Pro.

The Intel Xeon Phi processor is a bootable host processor that delivers massive parallelism and vectorization to support the most demanding high-performance computing (HPC) applications. The joint cloud solution also features Intel Omni-Path Fabric to deliver fast, low-latency performance. R Systems hosts Intel’s technology at its remote HPC data centers in Champaign, Illinois, providing white-glove implementation and maintenance to make Intel’s hardware seamlessly accessible on the cloud through Rescale. “This is another example of how R Systems is committed to offering leading-edge, bare-metal technology to the HPC research community through its partnerships with Rescale and Intel,” added Brian Kucic, R Systems Principal.

These impressive HPC capabilities are available to users at no charge for four weeks through Rescale’s cloud platform for big compute, ScaleX Pro. ScaleX Pro provides users with an intuitive GUI for job execution (including pre- and post-processing) and seamless collaboration with peers, backed by best-in-class security protocols and certifications, including annual SOC 2 Type 2 certification and ITAR- and EAR-compliant infrastructure. ScaleX Labs users will also receive beta access to ScaleX Developer, Rescale’s product that allows software developers to create, publish, and run their own software on the ScaleX platform. Developing and deploying software to the cloud is straightforward on ScaleX Developer, which follows the same GUI workflow as Rescale’s other ScaleX products and requires no special knowledge of Rescale’s internals.

“We are proud to provide a remote access platform for Intel’s latest processors and interconnect, and appreciate the committed cooperation of our partners at R Systems,” said Rescale CEO Joris Poort. “Our customers care about both performance and convenience, and the ScaleX Labs with Intel Xeon Phi processors brings them both in a single cloud HPC solution at a price point that works for everyone.”

“Intel is investing to offer a balanced portfolio of products for high-performance computing, including our leading Intel Xeon Phi processors and low-latency Intel Omni-Path Architecture,” said Barry Davis, General Manager, Accelerated Workload Group, Intel. “With increasing adoption of HPC applications to drive discovery and innovation, the ScaleX Labs with Intel Xeon Phi processors provides customers the opportunity to access high-performance compute capability in the cloud.”

Try Intel Xeon Phi processors on ScaleX Labs now.

About Rescale
Rescale is the global leader in high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources, built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

About R Systems
R Systems is a service provider of high-performance computing resources. The company empowers research by pairing leading-edge technology with a knowledgeable technical team, delivering the best-performing results in a cohesive working environment. Offerings include lease time for bursting as well as for short-term and long-term projects, available at industry-leading prices. R Systems’ central mission is to help researchers, scientists, and engineers dramatically accelerate their time to solution. For more information, visit www.rsystemsinc.com or call (217) 954-1056.

Contact:
Mika Pegors
Rescale
1-855-RESCALE
mika@rescale.com

Intel, Xeon and Xeon Phi are trademarks or registered trademarks of Intel Corporation in the United States and other countries.

This article was written by Rescale.

Why high utilization doesn’t work for TSA and why it doesn’t work for HPC

Image: “Security at Denver International Airport” by oddharmonic via Flickr, CC BY-SA 2.0 (2010)

Executive Summary:

  • In the case of big compute, the purchase of large capital assets can create an organizational misalignment of incentives that places the needs of the end user last
  • Achieving high utilization rates of on-premise computing is a Pyrrhic victory; it creates winners and losers and puts a governor on the pace of innovation
  • Information technology leaders with high utilization rates of on-premise compute should establish a cloud bypass for work to encourage a culture of agility, innovation, and “outside-the-stacks” thinking
  • When calculating total cost of ownership (TCO) of on-premise computing, user experience, workflow cycle times, responsiveness to new requirements, and other factors must be considered

Airport Travelers and HPC Users Have the Same Complaints
While I was standing in an airport security line at LAX recently, the travelers behind me began engaging in a familiar sport: wondering whether there were better alternatives to the US airport security screening process. As some lines proved faster than others, the complaints ranged from line choice to the efficacy of the entire system. I had recently returned from several meetings with prospective cloud computing users, and their complaints were strikingly similar: wait times, capacity limitations, and perceived unfairness in the system.

High utilization rates of on-premise computing assets are often cited in a cost-based defense of maintaining a pure on-premise strategy for big compute (HPC) workloads. The argument goes like this: the higher the utilization rate of an on-premise system, the more costly it is to lift and shift those workloads to the cloud. This conclusion frequently comes from a total cost of ownership (TCO) study that compares an incomplete set of variables.

Such a TCO comparison is woefully incomplete, but missing pieces aside, even more glaring is the key assumption it makes about cloud computing: 100% utilization. The use of the assumption is understandable. Capital investments require financial justification and, depending on their scale, often a detailed NPV analysis. Unfortunately, it is difficult to compare a fixed, capitalized expenditure to a variable, operational expenditure in these analyses. Forecasting opex requires detailed logging of compute usage and the assumption that past behavior can predict future requirements. For simplicity, it is easier to assume 100% utilization of cloud computing and move on. However, the organizational implications of 100% utilization are very different for the cloud than for on-premise assets. Running a constrained on-premise compute asset at 100% utilization implies queue times, a constant reevaluation of internal resource priorities, and slow reaction times to new requirements. Using 100% of a given slice of the cloud’s immense capacity has none of these disadvantages.
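To make the utilization sensitivity concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (cluster size, capex, opex, amortization period, cloud price) is a hypothetical placeholder, not actual Rescale or vendor pricing; the point is only that the effective on-premise cost per used core-hour climbs steeply as utilization falls, while a pay-per-use cloud core-hour costs the same regardless.

```python
# Back-of-the-envelope comparison of effective on-premise cost per used
# core-hour versus cloud on-demand pricing. All figures below are
# hypothetical placeholders, not actual Rescale or hardware pricing.

HOURS_PER_YEAR = 24 * 365

def on_prem_cost_per_core_hour(capex, annual_opex, cores, years, utilization):
    """Amortized cost per *used* core-hour at a given utilization rate."""
    total_cost = capex + annual_opex * years
    used_core_hours = cores * HOURS_PER_YEAR * years * utilization
    return total_cost / used_core_hours

# Hypothetical 1,000-core cluster: $3M capex, $300K/yr in power, staff,
# and facilities, amortized over 4 years.
for util in (1.00, 0.80, 0.60, 0.40):
    cost = on_prem_cost_per_core_hour(3_000_000, 300_000, 1_000, 4, util)
    print(f"utilization {util:.0%}: ${cost:.3f} per used core-hour")

# A cloud core-hour has a fixed on-demand price, and you pay only for the
# hours you actually use, so utilization is 100% by construction.
```

With these placeholder numbers, the effective on-premise cost per used core-hour roughly doubles as utilization drops from 100% to 60%, which is why TCO studies that pin both sides at 100% utilization tell only part of the story.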

This brings us back to our TSA story.

A TSA Nightmare
Imagine that one day the TSA agents at a particular airport receive a peculiar directive: taxpayers are extremely sensitive to the purchase of capitalized assets, so it is now an agency priority to achieve 95% or greater capacity utilization of the newly installed scanners. What would be the consequences?

First, 95% utilization would require processing passengers through the line at all hours of the night, even though airplanes only arrive and depart between 6AM and midnight. Second, nineteen out of every twenty passengers arriving at the security line should expect a queue, regardless of when they arrived. Third, during peak travel periods, wait times would balloon, as queueing theory predicts for any system run near full capacity (see the sketch below). Fourth, in the long run, to hit the targets, the TSA agents would be incentivized to shut down additional security lines and laterally transfer “excess” scanners to other airports. Somewhere in the aftermath is the passenger, whose needs have been subordinated to the quest for high utilization rates. The passenger’s psychology changes, too: they begin planning for long queue times, devoting otherwise productive time to gaming a system with limited predictability.
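A minimal sketch of why waits balloon, modeling a single security line as a textbook M/M/1 queue (the service rate below is an assumed number chosen only for illustration):

```python
# Average queueing delay in a single-server (M/M/1) queue as utilization
# approaches capacity. The service rate is hypothetical, for illustration.

def mm1_mean_wait(arrival_rate, service_rate):
    """Mean time an arrival spends waiting before service: Wq = rho / (mu - lambda).
    Valid only while arrival_rate < service_rate."""
    rho = arrival_rate / service_rate            # utilization
    return rho / (service_rate - arrival_rate)

SERVICE_RATE = 10.0  # passengers per minute (assumed)
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    wait = mm1_mean_wait(rho * SERVICE_RATE, SERVICE_RATE)
    print(f"utilization {rho:.0%}: average wait {wait:.2f} minutes")
```

With these numbers, the average wait grows from 0.1 minutes at 50% utilization to nearly 10 minutes at 99%; pushing utilization from 95% to 99% alone multiplies the wait roughly fivefold. The same model also explains the nineteen-in-twenty figure above: in an M/M/1 queue, the probability that an arrival finds the server busy equals the utilization itself.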

In the case of the purchase of a large, fixed-capacity compute system, the misalignment of incentives begins almost immediately after the purchase of the asset. Finance wants to optimize the return on the asset, putting pressure on Information Technology leaders to use the smallest possible asset at the highest levels of utilization for the longest amount of time. Meanwhile, hardware requirements continue to diverge and evolve outside the walls of the company, artificially constraining the company to decisions made years prior, when business conditions were unlikely to resemble those of the present day. The very nature of a fixed asset creates winners and losers as workloads from some portions of the company are prioritized over others. Unlike airline travelers, however, engineers, researchers, and data scientists can be given options to bypass the system.

The cloud has inherent advantages over its on-premise counterpart. As a result, cloud big compute has earned its seat at the table in any organization that values agility, fast innovation cycles, and new approaches to problems. On-premise resources are inherently capacity-constrained and, over time, can place psychological governors on how employees think about finding solutions to problems. For example, an engineer may simply assume she has no other option and over-design a part rather than run a design study to understand its sensitivity to key parameters. The cloud is not a panacea for every problem that needs big compute. However, Information Technology leaders can do their part to encourage a culture of innovation merely by having a capable cloud strategy.

The cloud is more than TSA PreCheck; it is driving up on the tarmac and getting on the plane.

Learn more about the advantages of moving HPC to the cloud by downloading our free white paper: Motivations and IT Roadmap for Cloud HPC

This article was written by Matt McKee.