The Rescale Transfer Manager is a native Windows application that can be used to download output files from jobs. This is a more robust and faster alternative to downloading large files through the browser.

Getting Started

First, you’ll need to provision an API key for your account by navigating to Settings > API. Click the Generate button in the API section to create a new key if you do not already have one.

Then, select the “Click to Install” button in the Rescale Transfer Manager section to download and install the application on your desktop.
When the Transfer Manager launches, it will prompt you to enter your API key. Copy the API key that was provisioned above, paste it into the text box, and click the OK button to continue.


How do I download files?

Once the Transfer Manager is installed, there are several approaches that can be used to start your download depending on what you are trying to transfer.

If you simply want to download all job output files…
The easiest way to download all files is to navigate to a job’s result page in the browser and click the “Download with Rescale Transfer Manager” button in the upper right. This will launch the Transfer Manager if it is not already running, prompt you for a download location, and then start the download.


If you want to download a subset of the job output files…

For many jobs, there are only a small number of output files that need to be transferred to a local workstation. Transferring only the files that you need will save a lot of time and bandwidth.
First, you’ll need to make a note of the ID of the job that you want to download. The easiest way to obtain the job ID is to select the job in the browser and look at the address bar: the URL contains a short code made up of upper- and lowercase letters. Paste this code into the Job ID text box.


Next, you’ll need to provide a Search Query to restrict the files that are downloaded. Note that this is currently limited to a simple substring match; no globbing or regular expressions are allowed. In the screenshot below, any file that contains “d3dump” in its name will be downloaded.


If you want to download output files automatically when jobs complete…

The Transfer Manager has a simple background download feature that can be used to monitor your jobs and automatically start downloading output files when the job finishes. This can be a useful feature to enable if you are submitting a number of jobs at the same time and don’t want to manually download the results from each one individually.

To enable this feature, click on the gear icon in the upper right to open the Settings page.
On the settings page, you will need to first enable Automatic Downloads by checking the Enabled box.


This will reveal additional settings that control which jobs and files will be automatically downloaded:

  • Max job age (in days) specifies the oldest completed job that will be downloaded. In the screenshot above, all completed jobs that were created in the last 30 days will be downloaded.
  • Destination indicates the directory that jobs will be saved to.
  • Search Query will restrict downloads to the files that have names which contain the specified value.

Click the Save button to commit your changes. After a few moments, a job download should begin automatically. Note that the Transfer Manager application must remain open for automatic downloads to work.

The Rescale Transfer Manager is available for download today. Please email us if you have any questions or feature suggestions.

This article was written by Ryan Kaneshiro.


Rescale is a valuable regression and performance testing resource for software vendors. In this article, we will discuss how you can use our API and command line tools to run all or a subset of your in-house regression tests on Rescale. The advantages of testing on Rescale are as follows:

  1. Compute resources are on-demand, so you only pay when you are actually running tests
  2. Compute resources are also scalable, enabling you to run a large test suite in parallel and get feedback sooner
  3. Heterogeneous resources are available to test software performance on various hardware configurations (e.g. Infiniband, 10GigE, Tesla and Phi coprocessors)
  4. Testing your software on Rescale can then optionally enable you to provide private beta releases to other customers on Rescale

Test Setup

For the remainder of this article, we will assume you have the following sets of files:

  1. Full build tree of your software package, in a commonly supported archive format (tar.gz, zip, etc.)
  2. Archived set of reference test inputs and expected outputs
  3. Script or command line to run your software build against one or more test inputs
  4. Script to evaluate actual test output with expected output
  5. (Optional) Smaller set of incremental build products to overlay on top of the full build tree

In the examples below, we will be using our python SDK. A selection of these examples is available in the SDK repo. The SDK is a thin wrapper around our REST API, so you can port these examples to other languages by calling the corresponding endpoints directly.

Note that all these examples require:

  1. An account on the Rescale platform
  2. A local RESCALE_API_KEY environment variable set to your API key found in Settings->API from the main platform page
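For reference, calling the REST API directly only requires passing the key in an Authorization header. The following is a minimal sketch; the base URL and endpoint path here are assumptions, so check the API documentation for the exact values:

```python
import json
import os
import urllib.request

API_BASE = "https://platform.rescale.com/api/v2"  # base URL is an assumption

def auth_header():
    """Token auth header built from the RESCALE_API_KEY environment variable."""
    return {"Authorization": "Token " + os.environ["RESCALE_API_KEY"]}

def rescale_get(path):
    """Authenticated GET against the REST API, returning the decoded JSON body."""
    request = urllib.request.Request(API_BASE + path, headers=auth_header())
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example (requires a valid key and network access; endpoint path assumed):
#   files = rescale_get("/files/")
```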

Running tests from a single job

We will start with the simplest example, uploading a full build and test reference data as job input files, running the tests serially, and comparing the results. Let’s start with some example “software” which we will upload and run. Here is a list of the software package and test files:

Each software build and test case is archived separately. Here are the steps to prepare and run our test suite job:

  1. Upload build, reference test data, and results comparison script using the Rescale python SDK:
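The upload step might be sketched like this. The archive names are illustrative, and the constructor signature is an assumption based on the RescaleFile class name; the real SDK call is shown only in a comment:

```python
# Local artifacts to upload; these file names are illustrative.
INPUT_FILES = [
    "my_solver_build.tar.gz",   # full build tree
    "test_case_1.tar.gz",       # reference inputs and expected outputs
    "compare_results.sh",       # post-run comparison script
]

def upload_all(file_cls, paths):
    """Upload each local file and return the metadata objects the SDK hands back."""
    return [file_cls(file_path=path) for path in paths]

# With the real SDK (import path and signature assumed):
#   from rescale.client import RescaleFile
#   uploaded = upload_all(RescaleFile, INPUT_FILES)
```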

RescaleFile uploads the local file contents to Rescale and returns metadata to reference that file. At this point, you can view these files on the platform.

  2. Create the test suite job:
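A sketch of the job definition follows. The field names command, postProcessScriptCommand, and coresPerSlot come from the text below, but the overall payload schema and the SDK call are assumptions:

```python
# Hypothetical job payload; consult the API docs for the exact schema.
TEST_COMMAND = "./run_all_tests.sh"
POST_RUN_COMPARE_COMMAND = (
    'for d in test_case_*/; do diff "$d/out" "$d/expected_out"; done'
)

def make_job_spec(input_file_ids, core_type="marble", cores_per_slot=1):
    """Build a job definition running the test command on the given hardware."""
    return {
        "name": "regression-tests",
        "jobanalyses": [{
            "command": TEST_COMMAND,
            "postProcessScriptCommand": POST_RUN_COMPARE_COMMAND,
            "hardware": {
                "coreType": {"code": core_type},
                "coresPerSlot": cores_per_slot,
            },
            "inputFiles": [{"id": file_id} for file_id in input_file_ids],
        }],
    }

# With the real SDK (import path and signature assumed):
#   from rescale.client import RescaleJob
#   job = RescaleJob(json_data=make_job_spec(["aBcDe"]))
```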

RescaleJob creates a new job, which you can now view on the platform. Note that here we are running the job on a single Marble core. You can opt to run more cores by increasing coresPerSlot, or change the core type by selecting a different core type code from RescaleConnect.get_core_types().

Note that the command and postProcessScriptCommand fields can be any valid bash script, so there is quite a bit of flexibility in how you run your test and evaluate the results. In our very simple example, the post-test command comparison just diffs the out and expected_out files in each test case directory.

  3. Submit the job for execution and wait for it to complete:
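A simple submit-and-poll loop might look like the following; the submit and get_status method names on the job object are assumptions:

```python
import time

def submit_and_wait(job, poll_seconds=60):
    """Submit the job, then poll until it reaches a terminal status.

    The method names on the job object are assumptions about the SDK.
    """
    job.submit()
    while job.get_status() != "Completed":
        time.sleep(poll_seconds)
```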

Once the job cluster is provisioned, the input files are transferred to the cluster, decrypted, then decompressed in the work directory. Next, the TEST_COMMAND is run, followed by the POST_RUN_COMPARE_COMMAND.

  4. Download the test results. All Rescale job commands have stdout redirected to process_output.log, so let’s just download that one file to get the test result summary.
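A sketch of fetching just that log, where the method and attribute names on the SDK objects are assumptions:

```python
def download_summary(job, log_name="process_output.log"):
    """Download only the stdout log rather than every (possibly large) output
    file. Method and attribute names here are assumptions about the SDK."""
    for output in job.get_files():
        if output.name == log_name:
            output.download(target=log_name)
            return log_name
    return None
```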

It is important to note that by doing our test result comparison as a post-processing step in the job, we avoid downloading potentially large output files, which would delay getting the test results. This doesn’t address the fact that we still need to upload the test reference outputs, which will often be of similar size to the actual output. The key is that we only need to upload files to Rescale when they change, not for every test job we launch. Assuming our reference test cases do not change very often, we can reuse the files we just uploaded in later test runs.

You can find this example in full here.

Reusing reference test data

We will now modify the above procedure to avoid uploading reference test data for every submitted test job.

  1. (modified) Find test file metadata on Rescale and use as input file to subsequent jobs:

RescaleFile.get_newest_by_name just retrieves metadata for the test file that was already uploaded to Rescale. Note that if you uploaded multiple test archives with the same name, this will pick the most recently uploaded one.
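That selection rule can be illustrated with a small standalone helper; the record field names here are illustrative, not the SDK’s actual ones:

```python
def newest_by_name(file_records, name):
    """Mimic get_newest_by_name: among records sharing a name, pick the most
    recently uploaded. Field names in the records are illustrative."""
    matches = [f for f in file_records if f["name"] == name]
    if not matches:
        return None
    return max(matches, key=lambda f: f["uploadDate"])
```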

Steps 2 through 4 are the same as the previous example.

Parallelize long running tests

The previous examples just run all your tests sequentially; let’s now run some in parallel. For this example, we assume our tests are partitioned into “short” and “long” tests. The short tests are in an archive called all_short_tests.tar.gz and each long test is in a separate archive called long_test_.tar.gz.

We will now launch a single job for all the short tests and a job per test for the long tests. We assume these test files have already been uploaded to Rescale, as was done in the first example.
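A sketch of building that set of job specs follows; the core type codes and the long-test archive names are illustrative, and the spec layout is simplified from the real job schema:

```python
SHORT_TEST_ARCHIVE = "all_short_tests.tar.gz"
LONG_TEST_ARCHIVES = ["long_test_1.tar.gz", "long_test_2.tar.gz"]  # illustrative

def build_test_jobs(short_archive, long_archives):
    """One job spec for all short tests, plus one per long test on larger hardware."""
    jobs = [{
        "name": "short-tests",
        "inputFiles": [short_archive],
        "hardware": {"coreType": {"code": "marble"}, "coresPerSlot": 1},
    }]
    for archive in long_archives:
        jobs.append({
            "name": archive.rsplit(".tar.gz", 1)[0],
            "inputFiles": [archive],
            # 32 cores on Nickel corresponds to 2 nodes, as in the text.
            "hardware": {"coreType": {"code": "nickel"}, "coresPerSlot": 32},
        })
    return jobs
```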

In this example, we launched our short test job with a single Marble core and each of our long tests with a 32-core (2-node) Nickel MPI cluster.

This test job configuration is particularly appropriate for performance tests. To test that a particular build + test case combination scales, you might launch 4 jobs with 1, 2, 4, and 8 nodes respectively.

This example can be found here.

Incremental builds

In the above, we avoided re-uploading test data for each test run by reusing the same data already stored on Rescale. If we have a large software build we need to test, we would like to also reuse already uploaded data, but each build tested will generally be different. In many cases though, only a small subset of files from the whole package will change from build to build.

To leverage the similarity in builds, we can supply an incremental build delta that will be uncompressed on top of the base build tree we uploaded in the first job. There are just 2 requirements:

  1. The build delta must have the same directory structure as the base build
  2. We need to specify the build delta archive as an input file AFTER the base build archive

In the above, base_build_input comes from the file already on Rescale and incremental_build_input is uploaded each time.
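The ordering requirement can be captured in a small helper:

```python
def ordered_input_files(base_build, incremental_delta, *other_inputs):
    """Order matters: the delta archive must come AFTER the base build so its
    files overwrite matching paths when the archives are extracted in turn."""
    return [base_build, incremental_delta, *other_inputs]
```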

Parallelism in Design-of-Experiment (DOE) jobs

Another way to run tests is to group multiple tests into a single DOE job. The number of tests that can run in parallel is then defined by the number of task slots you configure for your job. You would then structure your test runs so that they can be parameterized by a templated configuration file.

This method has the advantage of eliminating per-job cluster setup time, compared to the multi-job case. The disadvantage is that each test run is limited to the same hardware configuration you define for a task slot. An example of how to set up a DOE job can be found in the python SDK repo.

Large file uploads

In the above examples, we have uploaded our input files with a simple PUT request. This will be slow, and often will not work at all, for multi-gigabyte files. An alternative is to use the Rescale CLI tool, which provides bandwidth-optimized file uploads and downloads and can resume transfers if they are interrupted.

For more information, see the Rescale CLI documentation.

Running tests on Rescale is a great way to reduce testing time and the strain on internal compute resources for large regression and performance test suites. The Rescale API provides a very flexible way to launch groups of tests on diverse hardware configurations, including high-memory, high-storage, InfiniBand, and GPU-enabled clusters. If you are interested in doing your own testing on Rescale, check out our SDK and example scripts, or contact us.

This article was written by Mark Whitney.