At Rescale, we meet with many IT and engineering management teams to discuss their HPC needs and how Rescale can help deliver on them. Customers are excited about the flexibility and scalability Rescale offers, but they also share concerns about managing and overseeing these new cloud services. One concern in particular is controlling platform use to stay within budgetary constraints: IT and engineering management require both visibility into and control over the overall use of HPC cloud resources across their large, geographically diverse teams.

Rescale has responded to these customer requests with ScaleX Enterprise, which includes a powerful, customizable IT dashboard for monitoring and managing the use of your HPC cloud infrastructure.

Administrative Management, Controls, & Visibility
The Rescale ScaleX Enterprise administrative portal provides detailed control over all users, groups, and projects. IT and engineering managers can set user permissions, control budgets, provision hardware and software, manage license server connections, identify project codes and IDs, and track overall and individual usage. Permissions can be set at both global and regional levels for projects or individual users.

The administrative portal allows organizations to manage simulation users. Administrative functionality related to cost controls can be customized to the specific needs of each customer.

The following is a partial list of the administrative cost control functions that enterprise customers use on the Rescale platform. (Note: this covers only the ScaleX Enterprise features specific to cost and usage management.)

  • Usage Reporting – Detailed usage reports are readily available in secure user accounts to allow companies or users to track their usage and spending, and to provision budgets and allocate funds accordingly. Rescale works with each company and user to implement the billing cycle and repayment method that best suits their needs.
  • Budget Control – Rescale allows company administrators to set budgets at several levels and then monitor in real time the activity and balances of the budgets. Budgets can be set for Company, Groups, Users and Projects.

A common question from IT and engineering management is “What if our engineers run too many jobs or use too many HPC resources and we exceed our budget?”

Rescale addresses this by continuously checking running jobs against the available budget. If a job is submitted and sufficient funds are not available, the job is held in a queue until the budget is replenished. If multiple jobs are running and the budget is exceeded, any active jobs are terminated.

Budget Definitions

  • Company-level budget – A budget that controls the total amount of money a company account can spend. This budget is adjustable by both Rescale and the company admin.
  • User-level budget – A budget that controls the total amount of money an individual user account can spend. This budget is adjustable by Rescale, the company admin (if the user is assigned to a company), and the individual user.
  • Project-level budget – A budget that controls the total amount of money users assigned to a project can spend on that project. This is a ScaleX Enterprise-only feature and can only be modified by the company admin.

Summary
The Rescale ScaleX Enterprise administrative portal provides a powerful, customizable IT dashboard to monitor and manage the use of your consolidated HPC cloud infrastructure. Cost controls can be applied at several levels by setting budgets for companies, users, and projects, ensuring that enterprise customers stay within their planned budgets.

This article was written by Jeff Stemler.

You might have run into situations where you’re calling asynchronous code inside a framework callback and need to test its side effects. For example, you might make API calls inside a React component’s componentDidMount() callback that in turn call setState() when the request completes, and you want to assert that the component reaches a certain state. This article shows techniques for testing these types of scenarios.

Take a simplified example. We have a class called PromisesHaveFinishedIndicator. The constructor takes in a list of promises. When all of the promises have resolved, the instance’s finished property is set to true:
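The original listing is missing from this copy of the article; a minimal sketch consistent with the description above might look like this (implementation details are assumed):

```javascript
// Sketch of the class described above: `finished` starts out false and
// flips to true once every promise passed to the constructor has resolved.
class PromisesHaveFinishedIndicator {
  constructor(promises) {
    this.finished = false;

    Promise.all(promises).then(() => {
      this.finished = true;
    });
  }
}
```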

A good test case would call the constructor with multiple promises whose resolution timing we can control, asserting on the value of this.finished as each promise is resolved.

In order to control resolution timings of promises in tests, we use Deferred objects which expose the resolve and reject methods:
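The original Deferred listing is also missing here; a common implementation matching the description (capturing resolve and reject from the Promise constructor) is:

```javascript
// A Deferred wraps a promise and exposes its resolve and reject functions,
// so a test can settle the promise at exactly the moment it chooses.
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}
```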

With this, we can set up a test for PromisesHaveFinishedIndicator. We use the Jest testing framework in this example, but the technique can be applied to other testing frameworks as well:

This test will actually fail because promise callbacks are asynchronous: due to run-to-completion semantics, any callbacks sitting in the queue run only after the last statement of the test. In other words, the promise callback for the Promise.all call, () => { this.finished = true; }, runs after this test has already exited!

Jest (and other testing frameworks) provides a way to deal with asynchrony by preventing the test from exiting after the last statement. We would have to call the provided done function in order to tell the runner that the test has finished. Now you may think something like this would work:

However, this will also fail. The reason lies in the implementation of Promise.all. When resolve is called on d1 (and d2 as well), Promise.all schedules a callback that checks whether all of the promises have resolved. If this check returns true, it resolves the promise returned by the Promise.all call, which in turn enqueues the () => { this.finished = true; } callback. That callback is still sitting in the queue by the time done is called!

Now the question is: how do we make the callback that sets this.finished to true run before calling done? To answer this we need to understand how promise callbacks are scheduled when promises are resolved or rejected. Jake Archibald’s article on Tasks, microtasks, queues and schedules goes in depth on exactly that topic, and I highly recommend reading it.

In summary: Promise callbacks are queued onto the microtask queue and callbacks of APIs such as setTimeout(fn) and setInterval(fn) are queued onto the macrotask queue. Callbacks sitting on the microtask queue are run right after the stack empties out, and if a microtask schedules another microtask, then they will continually be pulled off the queue before yielding to the macrotask queue.

With this knowledge, we can make this test pass by using setTimeout instead of then():

This works because, by the time the second setTimeout callback runs, we know that these promise callbacks have run:

  • The callback inside the implementation of Promise.all that checks that all promises have resolved, then resolves the returned promise.
  • The callback that sets this.finished = true.

Having a bunch of setTimeout(fn, 0) calls in our code is unsightly, to say the least. We can clean this up with the newer async/await syntax:

If you want to be extra fancy, you can use setImmediate instead of setTimeout in some environments (Node.js). It is faster than setTimeout but still runs after microtasks:
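A helper along these lines is a common pattern; the name flushPromises is our own, not necessarily the article's:

```javascript
// Yield to the macrotask queue via setImmediate (Node.js), which lets all
// pending microtasks (promise callbacks) run before the helper resolves.
const flushPromises = () => new Promise(resolve => setImmediate(resolve));

// Usage inside an async test:
//   d1.resolve();
//   await flushPromises();
//   expect(indicator.finished).toBe(true);
```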

When writing tests involving promises and asynchrony, it is beneficial to understand how callbacks are scheduled and the roles the different queues play in the event loop. This knowledge allows us to reason about the asynchronous code that we write.

This article was written by Kenneth Chung.