Rescale recently released the ScaleX Partner beta to serve our most trusted software partners. It gives software vendors a detailed understanding of how their customers use their software and consume on-demand licenses.

Rescale has 50+ software partners with over 180 software applications that are pre-configured, certified, and natively-integrated with the ScaleX platform. As a way to deliver additional value to software partners, Rescale created ScaleX Partner to allow partners to directly manage software and understand its usage across their customer portfolio from a single interface.

ScaleX Partner Dashboard (Note: this image depicts falsified data.)

From the “Customers” tab, software vendors can now manage their software package portfolios holistically and set permissions for customers on ScaleX Enterprise.

ScaleX Partner Customers (Note: this image depicts falsified data.)

Software vendors now have the power of usage reports (under the “Usage” tab) to get a snapshot of the dollar amount of usage per month broken down by specific software package.

ScaleX Partners Usage (Note: this image depicts falsified data.)

Lastly, the “Software Management” tab lists all software by version and by its visibility to customers on ScaleX Enterprise. Visibility falls into three categories:

  • Public: Visible on the platform and selectable by all customers (green in the image below).
  • Requestable: Visible on the platform but not selectable unless access is granted. Access may be requested (yellow in the image below).
  • Private: Hidden on the platform unless access is granted (red in the image below).

ScaleX Partner Software Management (Note: this image depicts falsified data.)

All of this information has provided great value to our software partners and their sales teams. Rescale can now trend software usage data over time to better forecast revenue, and the portal can also serve as a lead generation tool.

To request beta access to ScaleX Partner and other new Rescale products before general release, please visit info.rescale.com/beta.

This article was written by Cameron Fillmore.

In a previous post, we showed examples of using multiple GPUs to train a deep neural network (DNN) using the Torch machine learning library. In this post, we will focus on performing multi-GPU training using TensorFlow.

In particular, we will explore data-parallel GPU training with multi-GPU and multi-node configurations on Rescale. We will leverage Rescale’s existing MPI configured clusters to easily launch TensorFlow distributed training workers. For a basic example of training with TensorFlow on a single GPU, see this previous post.

Preparing Data
To make our multi-GPU training sessions more interesting, we will use some larger datasets. Later, we will show a training job on the popular ImageNet image classification dataset. Before we start with this 150 GB dataset, we will prepare a smaller dataset in the same format as ImageNet and use it to confirm that the TensorFlow trainer works properly. To keep data local during training, Rescale syncs the dataset to local storage on the GPU nodes before training starts, and waiting for a dataset as large as ImageNet to sync while iteratively developing a model on just a few examples is wasteful. For this reason, we will start with the smaller flowers dataset and move to ImageNet once we have a working example.

TensorFlow consumes images formatted as TFRecords, so first let’s download the flowers dataset and preprocess its images into this format. All the examples we show today come from the inception module in the tensorflow/models repository on GitHub, so we start by cloning that repository:
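A minimal version of that step looks like:

```bash
# Clone the tensorflow/models repository, which contains the inception module:
git clone https://github.com/tensorflow/models.git
```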

Now we use the bazel build tool to make the flowers download-and-preprocess script and then run that script:
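A sketch of this step, following the tensorflow/models README; the output directory name (flowers) is our choice:

```bash
cd models/inception
# Build the flowers download-and-preprocess target, then run it,
# writing the TFRecords into a directory named "flowers":
bazel build //inception:download_and_preprocess_flowers
bazel-bin/inception/download_and_preprocess_flowers flowers
```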

This should download a ~220MB archive and create something like this:

We assemble the resulting TFRecords into an archive and upload it to Rescale. Optionally, you can delete the raw-data directory and the originally downloaded archive, since all the necessary information is now encoded as TFRecords.
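A minimal sketch of the packaging step, assuming the TFRecords were written to a directory named flowers as above:

```bash
# Bundle the TFRecords for upload to Rescale:
tar czf flowers.tar.gz flowers
```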

We have assembled all these operations in a preprocessing job on Rescale here for you to clone and run yourself.

Next, let’s take the flowers.tar.gz file we just produced and convert it to an input file for the next step:


Now we have our preprocessed flower image TFRecords ready for training.

Single Node – Multiple GPUs
The next step is to take this input dataset and train a model with it. We will be using the Inception v3 DNN architecture from the tensorflow/models repository as mentioned above. Training on a single node with multiple GPUs looks something like this:

(from https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch)


We will first create a Rescale job that runs on a single node, since that has fewer moving parts than the multi-node case. The job will run three processes:

  • Main GPU-based model training
  • CPU-based evaluation of checkpointed models on the validation set
  • TensorBoard visualization tool

So let’s get started! First, build the training and evaluation scripts:
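A sketch of the build step, using the imagenet_train and imagenet_eval targets referenced below:

```bash
cd models/inception
# Build the GPU trainer and the CPU evaluator:
bazel build //inception:imagenet_train //inception:imagenet_eval
```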

Next, create some output directories and start the main training process: 
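A sketch of what this looks like; the flag names follow the inception README, the batch size is just an example, and the directory layout is assumed:

```bash
mkdir -p out/train out/eval
# Train Inception v3 on the flowers TFRecords, using every GPU in the slot:
bazel-bin/inception/imagenet_train \
  --num_gpus=$RESCALE_GPUS_PER_SLOT \
  --batch_size=32 \
  --data_dir=flowers \
  --train_dir=out/train
```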

$RESCALE_GPUS_PER_SLOT is a variable set in every Rescale job environment. In this command line, we point to the flowers directory containing our training images and to the empty out/train directory where TensorFlow will write logs and model files.

Evaluation of accuracy on the validation set can be done separately and does not need GPU acceleration:
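A sketch of the evaluation command; the interval value shown is illustrative:

```bash
# Hide the GPUs from this process so it does not grab GPU memory,
# then evaluate the latest checkpoint every 20 minutes:
CUDA_VISIBLE_DEVICES='' bazel-bin/inception/imagenet_eval \
  --data_dir=flowers \
  --checkpoint_dir=out/train \
  --eval_dir=out/eval \
  --eval_interval_secs=1200
```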

imagenet_eval sits in a loop and wakes up every eval_interval_secs seconds to evaluate the accuracy of the most recently trained model checkpoint in out/train against the validation TFRecords in the flowers directory. Accuracy results are logged to out/eval. CUDA_VISIBLE_DEVICES is important here: by default, TensorFlow claims GPU memory even when it will not use the GPU. Without this setting, the training and evaluation processes together would exhaust the GPU’s memory and cause training to fail.

Finally, TensorBoard is a handy tool for monitoring TensorFlow’s progress. TensorBoard runs its own web server to show plots of training progress, a graph of the model, and many other visualizations. To start it, we just point it at the out directory where our training and evaluation processes write their output:
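For example:

```bash
# Serve TensorBoard on its default port (6006), watching both
# out/train and out/eval:
tensorboard --logdir=out
```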

TensorBoard will pull in logs in all subdirectories of logdir so it will show training and evaluation data together.

Putting this all together:
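A condensed sketch of the whole single-node job script, combining the commands above; the 10-minute sleep before evaluation is an arbitrary delay, and paths are assumptions:

```bash
cd models/inception
bazel build //inception:imagenet_train //inception:imagenet_eval
mkdir -p out/train out/eval

# Background TensorBoard and the (delayed) evaluation loop.
tensorboard --logdir=out &
(sleep 600 && CUDA_VISIBLE_DEVICES='' bazel-bin/inception/imagenet_eval \
    --data_dir=flowers --checkpoint_dir=out/train \
    --eval_dir=out/eval --eval_interval_secs=1200) &

# Foreground: the GPU training process.
bazel-bin/inception/imagenet_train \
  --num_gpus=$RESCALE_GPUS_PER_SLOT \
  --batch_size=32 \
  --data_dir=flowers \
  --train_dir=out/train
```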

Since this all runs in a single shell, we background the TensorBoard and evaluation processes. We also delay start of the evaluation process since the training process needs a few minutes to initialize and create the first model checkpoint.

You can run this training job on Rescale here.

Since TensorBoard runs its own web server without any authentication, access is blocked by default on Rescale. The easiest way to get access to TensorBoard is to open an SSH tunnel to the node and forward port 6006:
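For example, using the SSH connection details shown for your running job:

```bash
# Forward local port 6006 to TensorBoard running on the cluster node:
ssh -L 6006:localhost:6006 <user>@<cluster-node-address>
```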


Now navigate to http://localhost:6006 and you should see something like this:


Multiple Nodes
The current state of the art limits the number of GPU cards that fit on a node to around 8. Additionally, CUDA peer-to-peer, the mechanism a TensorFlow process uses to distribute work amongst GPUs, is currently limited to 8 GPU devices. While these numbers will continue to increase, it is still convenient to have a mechanism to scale training out for large models and datasets. TensorFlow distributed training synchronizes updates between training processes over the network, so it can be used with any network fabric and is not limited by CUDA implementation details. Distributed training consists of some number of workers and parameter servers, as shown here:

(from https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch)


Parameter servers provide the model parameters, which the workers use to evaluate an input batch. After each worker finishes its batch, it feeds the error gradients back to the parameter servers, which use them to produce updated model parameters. In the context of a GPU cluster, we run one worker process per GPU in the cluster and then pick enough parameter servers to keep up with processing the gradients.

Following the instructions here, we set up a worker per GPU and a parameter server per node. We will take advantage of the MPI configuration that comes with every Rescale cluster.

To start, we need to generate the host strings that will be passed to each parameter server and worker, with each process getting a unique hostname:port combination, for example:
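Something like the following (hostnames and ports here are made up), with one parameter-server entry per node and one worker entry per GPU; the flag names are those used by imagenet_distributed_train:

```bash
--ps_hosts='node1:2222,node2:2222'
--worker_hosts='node1:2230,node1:2231,node1:2232,node1:2233,node2:2230,node2:2231,node2:2232,node2:2233'
```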

We want a single entry per host for the parameter servers and a single entry per GPU for the workers. We take advantage of the machine files that are automatically set up on every Rescale cluster: $HOME/machinefile is simply a list of hosts in the cluster, and $HOME/machinefile.gpu is a list of hosts annotated with the number of GPUs on each. We parse these files in a Python script, make_hoststrings.py, to generate the host strings.

Next, we have a script, tf_mpistart.sh, that takes these host strings and launches the imagenet_distributed_train script with the proper task ID and GPU whitelist:
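A rough sketch of what such a script can look like; the interface of make_hoststrings.py, the paths, and the flag values other than those named in the text are assumptions:

```bash
#!/bin/bash
# tf_mpistart.sh <ps|worker> -- launched once per MPI rank by mpirun.
JOB_NAME=$1

# Host strings with one ps entry per node and one worker entry per GPU
# (assumed to be printed, space-separated, by make_hoststrings.py).
read PS_HOSTS WORKER_HOSTS <<< "$(python make_hoststrings.py)"

if [ "$JOB_NAME" = "ps" ]; then
  # Parameter servers stay off the GPUs.
  export CUDA_VISIBLE_DEVICES=''
else
  # One worker per GPU: the node-local MPI rank picks the GPU this
  # worker is allowed to see.
  export CUDA_VISIBLE_DEVICES=$OMPI_COMM_WORLD_LOCAL_RANK
fi

# The global MPI rank doubles as the TensorFlow task index.
shared/models/inception/bazel-bin/inception/imagenet_distributed_train \
  --job_name="$JOB_NAME" \
  --task_id="$OMPI_COMM_WORLD_RANK" \
  --ps_hosts="$PS_HOSTS" \
  --worker_hosts="$WORKER_HOSTS" \
  --data_dir=shared/flowers \
  --train_dir=shared/out/train
```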

tf_mpistart.sh is run with Open MPI’s mpirun, so the $OMPI_* environment variables are injected automatically. We use $OMPI_COMM_WORLD_RANK to get a global task index and $OMPI_COMM_WORLD_LOCAL_RANK to get a node-local GPU index.

Now, putting it all together:
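A condensed sketch of the distributed job script, as described in the notes that follow; the shared/ layout, ports, and helper-script interfaces are assumptions:

```bash
# Inputs (the flowers TFRecords and helper scripts) live under shared/ so
# every node can see them.
mkdir -p shared/out/train shared/out/eval

# Keep bazel's build products on the shared filesystem instead of $HOME/.cache.
(cd shared/models/inception && \
  bazel --output_base=$PWD/bazel-base \
    build //inception:imagenet_distributed_train //inception:imagenet_eval)

# TensorBoard and the evaluation loop run only here, on the MPI master node.
tensorboard --logdir=shared/out &
(sleep 600 && CUDA_VISIBLE_DEVICES='' \
  shared/models/inception/bazel-bin/inception/imagenet_eval \
    --data_dir=shared/flowers --checkpoint_dir=shared/out/train \
    --eval_dir=shared/out/eval --eval_interval_secs=1200) &

# One parameter server per node, one worker per GPU.
mpirun -machinefile $HOME/machinefile ./tf_mpistart.sh ps &
mpirun -machinefile $HOME/machinefile.gpu ./tf_mpistart.sh worker
```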

We start with much of the same directory-creation and bazel build boilerplate. The two differences are:

1. We move all the input directories to the shared/ subdirectory so it is shared across nodes.
2. We now call the bazel build command with the --output_base option so that bazel doesn’t symlink the build products into $HOME/.cache and instead places them on the shared filesystem.

Next we launch TensorBoard and imagenet_eval locally on the MPI master. These 2 processes don’t need to be replicated across nodes with mpirun.

Finally, we launch the parameter servers by running tf_mpistart.sh ps with the one-entry-per-node machinefile, and the workers by running tf_mpistart.sh worker with the per-GPU machinefile.gpu.

Here is an example job performing the above distributed training on the flowers dataset using 2 Jade nodes (8 K520 GPUs). Note that because we use the MPI infrastructure already set up on Rescale, the same example works for any number of nodes or GPUs per node: the machinefiles automatically set the numbers of workers and parameter servers to match the available resources.

Training on ImageNet
Now that we have developed the machinery to launch a TensorFlow distributed training job on the smaller flowers dataset, we are ready to train on the full ImageNet dataset. Downloading ImageNet requires permission; you can request access here, and upon acceptance you will be given a username and password for downloading the necessary tarballs.

We can then run a preparation job similar to the flowers job above to download the dataset and format the images into TFRecords:
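A sketch, again following the inception module’s helper scripts; how the credentials are supplied to the download script (environment variables below) is an assumption:

```bash
cd models/inception
bazel build //inception:download_and_preprocess_imagenet

# Credentials granted by the ImageNet registration described above
# (variable names assumed to be what the download script reads).
export IMAGENET_USERNAME=<your_username>
export IMAGENET_ACCESS_KEY=<your_access_key>

# Download ~150 GB of tarballs and convert everything to TFRecords:
bazel-bin/inception/download_and_preprocess_imagenet imagenet-data
```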

You can clone and run this preparation job on Rescale here.

If you have already downloaded the 3 necessary inputs from the ImageNet site (ILSVRC2012_img_train.tar, ILSVRC2012_img_val.tar, and ILSVRC2012_bbox_train_v2.tar.gz) and have placed them somewhere accessible via HTTP (such as an AWS S3 bucket), you can customize models/inception/inception/data/download_imagenet.sh in the tensorflow/models repository to download from your custom location:
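For example, the download commands inside that script could be swapped for something like the following (the bucket URL is a placeholder):

```bash
# Fetch the three ImageNet tarballs from your own HTTP-accessible location
# instead of image-net.org:
BASE_URL="https://s3.amazonaws.com/your-bucket/imagenet"
wget -nv "${BASE_URL}/ILSVRC2012_img_train.tar"
wget -nv "${BASE_URL}/ILSVRC2012_img_val.tar"
wget -nv "${BASE_URL}/ILSVRC2012_bbox_train_v2.tar.gz"
```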

Clone and run this version of the preparation job here.

Finally, we can make some slight modifications to our multi-GPU flowers jobs to take the imagenet-data dataset directory instead of the flowers directory by changing the $DATASET variable at the top:
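For the single-node job, the change is a single line at the top of the script (variable name as described above):

```bash
DATASET=imagenet-data   # previously: DATASET=flowers
```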

And the distributed training case:
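And similarly in the distributed job script, where the dataset lives under shared/ (an assumption based on the layout above):

```bash
DATASET=shared/imagenet-data   # previously: DATASET=shared/flowers
```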

We have gone through the details of performing multi-GPU model training with TensorFlow on both single and multiple nodes. In an upcoming post, we will discuss the performance ramifications of distributed training and look at how well it scales on different server configurations.

Rescale Jobs
Here is a summary of the Rescale jobs used in this example. Click the links to import the jobs to your Rescale account.

Flowers dataset preprocessing:
https://platform.rescale.com/tutorials/tensorflow-flowers-preprocess/clone/

Single node flowers training: 
https://platform.rescale.com/tutorials/tensorflow-flower-train-single-node/clone/

Multiple nodes flowers training: 
https://platform.rescale.com/tutorials/tensorflow-flower-train-distributed/clone/

ImageNet ILSVRC2012 download and preprocessing: 
https://platform.rescale.com/tutorials/tensorflow-imagenet-preprocess/clone/

ImageNet ILSVRC2012 download from existing S3 bucket: 
https://platform.rescale.com/tutorials/tensorflow-imagenet-noauth-preprocess/clone/

This article was written by Mark Whitney.

This is a translation of a blog post by Alex Huang, an application engineer at Rescale.

Alex Huang – February 20, 2017

Rescale has introduced a powerful new feature on our platform: In-Browser SSH. For any job running on a compute cluster from a provider that supports Linux, an SSH panel for connecting to the running cluster appears below the Live Tailing panel, letting you use SSH directly from the browser.


This article was written by Rescale Japan.

This is a translation of a blog post written by Rahul Verghese of Rescale on January 19, 2017.

For the original article, see Introducing Persistent Clusters: a New Feature to Save Time & Money with Multiple Jobs.

In its latest deployment, Rescale has released a new feature: Persistent Clusters (clusters that are started and removed manually). With this feature enabled, you can launch multiple instances to build a cluster and submit multiple jobs to that same cluster in sequence through the Rescale workflow (web UI) without shutting it down (translator’s note: on Rescale, instances are deleted after shutdown). Previously, a cluster ran for each job and shut down automatically when the job completed, which could introduce delays when running many small jobs. The new feature makes iterative work faster, and it is especially convenient for testing and for multiple jobs that require the same hardware configuration.

Save Time and Money

In general, each cluster takes several minutes to spin up and shut down. With persistent clusters enabled, you save that time and cost for every additional job you submit to the cluster.

Why?
A standard cluster shuts down automatically when its job completes, and each subsequent job starts up and shuts down its own cluster in the same way, so each is billed as a separate cluster. (Translator’s note: billing is normally in one-hour increments, so even a 10-minute computation is billed as a full hour; two consecutive 10-minute jobs would therefore be billed as two hours.) With a persistent cluster, on the other hand, the cluster is immediately available to run the next job, so no time is wasted shutting down one cluster and starting another between jobs. For users who launch many similar jobs, this adds up to significant savings in both time and cost. (Translator’s note: in the example above, the two 10-minute computations can be run back to back with no waiting and fit within a single hour of billing.)

This article was written by Rescale Japan.