You might have run into situations where you’re calling asynchronous code inside a callback of some framework, and you need to test its side effects. For example, you might be making API calls inside of a React component’s componentDidMount() callback that will in turn call setState() when the request has completed, and you want to assert that the component ends up in a certain state. This article shows techniques for testing these types of scenarios.

Take a simplified example. We have a class called PromisesHaveFinishedIndicator. The constructor takes in a list of promises. When all of the promises have resolved, the instance’s finished property is set to true:
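A minimal sketch of such a class, assuming it simply hangs a callback off Promise.all (the original implementation isn’t reproduced here), might look like this:

```js
class PromisesHaveFinishedIndicator {
  constructor(promises) {
    this.finished = false;

    // Once every promise has resolved, flip the flag.
    Promise.all(promises).then(() => {
      this.finished = true;
    });
  }
}
```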

A good test case would involve calling the constructor with multiple promises whose resolution timing we can control, and writing expectations on the value of this.finished as each promise resolves.

In order to control resolution timings of promises in tests, we use Deferred objects which expose the resolve and reject methods:
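A Deferred can be implemented as a thin wrapper around the Promise constructor; a minimal sketch:

```js
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      // Expose resolve and reject so that tests can settle the
      // promise at a time of their choosing.
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}
```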

With this, we can set up a test for PromisesHaveFinishedIndicator. We use the Jest testing framework in this example, but the technique can be applied to other testing frameworks as well:
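A first attempt might look something like the following (the test name and structure are illustrative):

```js
test('finished is set to true after all promises resolve', () => {
  const d1 = new Deferred();
  const d2 = new Deferred();
  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  expect(indicator.finished).toBe(false);

  d1.resolve();
  expect(indicator.finished).toBe(false);

  d2.resolve();
  // Fails: the callback that sets finished to true has not run yet.
  expect(indicator.finished).toBe(true);
});
```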

This test will actually fail because promise callbacks are asynchronous: due to run-to-completion semantics, any callbacks sitting in the queue only run after the last statement of this test. In other words, the promise callback for the Promise.all call, () => { this.finished = true; }, will not run until this test has already exited!

Jest (and other testing frameworks) provides a way to deal with asynchrony by preventing the test from exiting after the last statement: we call the provided done function to tell the runner that the test has finished. Now you may think something like this would work:
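For example, chaining the assertions off the deferreds’ own promises and calling done at the end (again, a sketch):

```js
test('finished is set to true after all promises resolve', done => {
  const d1 = new Deferred();
  const d2 = new Deferred();
  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  expect(indicator.finished).toBe(false);

  d1.resolve();
  d1.promise.then(() => {
    expect(indicator.finished).toBe(false);

    d2.resolve();
    d2.promise.then(() => {
      // Still fails: the callback that sets finished is queued
      // behind this one on the microtask queue.
      expect(indicator.finished).toBe(true);
      done();
    });
  });
});
```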

However, this will also fail. The reason lies in the implementation of Promise.all. When resolve is called on d1 (and d2 as well), Promise.all schedules a callback that checks whether all of the promises have resolved. If this check passes, it resolves the promise returned from the Promise.all call, which then enqueues the () => { this.finished = true; } callback. That callback is still sitting in the queue by the time done is called!

Now the question is: how do we make the callback that sets this.finished to true run before calling done? To answer this we need to understand how promise callbacks are scheduled when promises are resolved or rejected. Jake Archibald’s article on Tasks, microtasks, queues and schedules goes in depth on exactly this topic, and I highly recommend reading it.

In summary: Promise callbacks are queued onto the microtask queue and callbacks of APIs such as setTimeout(fn) and setInterval(fn) are queued onto the macrotask queue. Callbacks sitting on the microtask queue are run right after the stack empties out, and if a microtask schedules another microtask, then they will continually be pulled off the queue before yielding to the macrotask queue.
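A quick illustration of that ordering:

```js
console.log('start');

setTimeout(() => console.log('macrotask: setTimeout callback'), 0);

Promise.resolve().then(() => console.log('microtask: promise callback'));

console.log('end of script');

// Output order:
//   start
//   end of script
//   microtask: promise callback
//   macrotask: setTimeout callback
```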

With this knowledge, we can make this test pass by using setTimeout instead of then():
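A sketch of that approach, resolving each deferred and asserting inside a setTimeout callback:

```js
test('finished is set to true after all promises resolve', done => {
  const d1 = new Deferred();
  const d2 = new Deferred();
  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  expect(indicator.finished).toBe(false);

  d1.resolve();
  setTimeout(() => {
    expect(indicator.finished).toBe(false);

    d2.resolve();
    setTimeout(() => {
      // By the time this macrotask runs, every pending microtask,
      // including the one that sets finished to true, has already run.
      expect(indicator.finished).toBe(true);
      done();
    }, 0);
  }, 0);
});
```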

The reason this works is that by the time the second setTimeout callback runs, we know that these promise callbacks have already run:

  • The callback inside the implementation of Promise.all that checks that all promises have resolved, then resolves the returned promise.
  • The callback that sets this.finished = true.

Having a bunch of setTimeout(fn, 0) calls in our code is unsightly, to say the least. We can clean this up with the new async/await syntax:
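One common shape for this is a small helper that wraps setTimeout in a promise and is awaited wherever we previously nested callbacks (the flushPromises name is just illustrative):

```js
// Resolves on the next macrotask, after pending microtasks have run.
const flushPromises = () => new Promise(resolve => setTimeout(resolve, 0));

test('finished is set to true after all promises resolve', async () => {
  const d1 = new Deferred();
  const d2 = new Deferred();
  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  expect(indicator.finished).toBe(false);

  d1.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(false);

  d2.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(true);
});
```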

If you want to be extra fancy, you can use setImmediate instead of setTimeout in some environments (Node.js). It is faster than setTimeout but still runs after microtasks:
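The same helper, sketched with setImmediate:

```js
// setImmediate (Node.js) queues a macrotask, so pending promise
// microtasks still drain before the callback runs.
const flushPromises = () => new Promise(resolve => setImmediate(resolve));

test('finished is set to true after all promises resolve', async () => {
  const d1 = new Deferred();
  const d2 = new Deferred();
  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  d1.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(false);

  d2.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(true);
});
```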

When writing tests involving promises and asynchrony, it is beneficial to understand how callbacks are scheduled and the roles that the different queues play in the event loop. Having this knowledge allows us to reason about the asynchronous code that we write.

This article was written by Kenneth Chung.


Sometimes I feel good after fixing a bug. More often, though, I feel like I’ve made things worse. Fixing bugs often makes the code a little harder to read and a little more difficult to understand. Worse, fixing bugs may accidentally introduce even more bugs.

Most of the time, bugs occur because programmers can’t envision all possible runtime behaviors of a program. These unhandled behaviors are sometimes called edge cases. Usually, an edge case can be addressed with a simple if statement: if we encounter this case, do something else. However, doing so makes the program more difficult to comprehend because the reader now has to visualize multiple code paths in their head. It gets worse when there are multiple edge cases, each with its own if statement piled on. When it’s time to refactor related code, these ifs have to be carried through the refactoring, which increases the likelihood of a regression.

When I find myself piling on if statements to fix bugs, I ask myself whether there are better ways to address the issue without making the program more difficult to understand and without the possibility of introducing more bugs. When there are, they usually involve refactoring the way data is modeled and handled. Below is a recounting of one of those times.

At Rescale, users can launch desktop instances in the cloud. These desktops can be in the not_started, starting, running, stopping, or stopped state. The desktops and their latest known states are returned from an API endpoint that we poll when displaying the desktops to the user.

There was a bug regarding the local UI state of the desktop. The state of the desktop is optimistically set to stopping when the user requests a desktop to be stopped. This is optimistic because it is set regardless of whether the stop request, which sends a message to queue a task for the desktop to be stopped, is successful.

There was a window of time in which, if the user requested the list of desktops again, the API would return running for the desktop that had just been asked to stop, because the task for stopping the desktop was still queued and hadn’t run yet. The UI would update with the latest status, and the user would see the desktop go from stopping to running. When the stop task finally ran, the user would see a transition from running back to stopping. Seeing stopping, then running, then stopping is a jarring experience for the user, so we needed to fix this.

An approach to fixing this would be to check whether the desktop is in the stopping state locally, and if so, skip updating its status to running. This is the “pile on an if statement” approach.

Instead, I decided to hold a set of statuses for each desktop. Whenever the desktop list API response came back, I would add the latest status of each desktop to its respective status set. A separate function takes a set of statuses and decides which status to display to the user. For example, if running and stopping are both in the set, it displays stopping; the function also has rules for handling starting and running.
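A sketch of this approach (the names and the exact display rules are illustrative assumptions, not the actual Rescale code; here the displayed status is simply whichever status in the set is furthest along in the lifecycle):

```js
// Lifecycle order, from furthest along to least far along.
const DISPLAY_PRIORITY = ['stopped', 'stopping', 'running', 'starting', 'not_started'];

// Display the status in the set that is furthest along in the lifecycle.
// With {running, stopping} this returns 'stopping'; with
// {starting, not_started} it returns 'starting'.
function displayStatus(statusSet) {
  return DISPLAY_PRIORITY.find(status => statusSet.has(status));
}

// On every poll, merge the latest status into the desktop's set
// instead of overwriting it.
function onDesktopPolled(desktop, latestStatus) {
  desktop.statusSet.add(latestStatus);
}
```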

This fixes the issue because the displayed status depends purely on what is in the status set, not on the order in which the statuses arrived. In other words, it is no longer possible to see stopping, then running, then back to stopping.

What’s great about this is that it also fixed a similar issue that I had forgotten about: when the user launches a desktop, the UI optimistically shows the desktop in the starting state, but the next API call may respond with not_started, so the user could see the desktop go from starting to not_started, then eventually back to starting. That issue was effectively fixed for free.

In conclusion, when tasked with fixing a bug, a simple solution may appear fine at first, but we should consider whether it encourages further complexity down the road. In the example above, I could have solved both cases with simple if statements. But if an issue later appeared with the status changing from stopped to stopping, another programmer might be encouraged to pile on yet another if statement. Sometimes it’s worth spending a little more time solving not just the bug, but the class of bugs that the issue represents.

This article was written by Kenneth Chung.