You might have run into situations where you’re calling asynchronous code inside a framework callback and need to test its side effects. For example, you might be making API calls inside a React component’s componentDidMount() callback that in turn call setState() when the request has completed, and you want to assert that the component ends up in a certain state. This article shows techniques for testing these types of scenarios.

Take a simplified example. We have a class called PromisesHaveFinishedIndicator. The constructor takes in a list of promises. When all of the promises have resolved, the instance’s finished property is set to true:
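The implementation isn’t shown here, so the following is a minimal sketch reconstructed from that description:

```javascript
class PromisesHaveFinishedIndicator {
  constructor(promises) {
    this.finished = false;

    // Once every promise has resolved, flip the flag.
    Promise.all(promises).then(() => {
      this.finished = true;
    });
  }
}
```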

A good test case would involve calling the constructor with multiple promises whose resolution timing we can control, and writing expectations of the value of this.finished as each promise is resolved.

In order to control resolution timings of promises in tests, we use Deferred objects which expose the resolve and reject methods:
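A common way to implement such a Deferred looks like this (a sketch; the exact implementation used here isn’t shown):

```javascript
class Deferred {
  constructor() {
    // Capture the resolve/reject functions so test code can settle
    // the promise at exactly the moment it chooses.
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}
```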

With this, we can set up a test for PromisesHaveFinishedIndicator. We use the Jest testing framework in this example, but the technique can be applied to other testing frameworks as well:
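A first attempt might look like the following sketch, using the Deferred helper just described:

```js
test('PromisesHaveFinishedIndicator', () => {
  const d1 = new Deferred();
  const d2 = new Deferred();

  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);
  expect(indicator.finished).toBe(false);

  d1.resolve();
  expect(indicator.finished).toBe(false);

  d2.resolve();
  expect(indicator.finished).toBe(true); // fails!
});
```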

This test will actually fail because promise callbacks are asynchronous: due to run-to-completion semantics, any callbacks sitting in the queue run only after the last statement of this test. In other words, the promise callback for the Promise.all call, () => { this.finished = true; }, will run after this test has already exited!

Jest (and other testing frameworks) provides a way to deal with asynchrony: declaring a done parameter prevents the test from finishing after its last statement, and we call the provided done function to tell the runner that the test has finished. Now you may think something like this would work:
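For example, chaining the final expectation off one of the deferred promises (a sketch):

```js
test('PromisesHaveFinishedIndicator', (done) => {
  const d1 = new Deferred();
  const d2 = new Deferred();

  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  d1.resolve();
  d2.resolve();

  d2.promise.then(() => {
    expect(indicator.finished).toBe(true); // still fails!
    done();
  });
});
```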

However, this will also fail. The reason lies in the implementation of Promise.all. When resolve is called on d1 (and likewise on d2), Promise.all schedules a callback that checks whether all of the promises have resolved. If that check passes, it resolves the promise returned from the Promise.all call, which in turn enqueues the () => { this.finished = true; } callback. That callback is still sitting in the queue by the time done is called!

Now the question is: how do we make the callback that sets this.finished to true run before calling done? To answer this we need to understand how promise callbacks are scheduled when promises are resolved or rejected. Jake Archibald’s article Tasks, microtasks, queues and schedules goes in depth on exactly that topic, and I highly recommend reading it.

In summary: Promise callbacks are queued onto the microtask queue and callbacks of APIs such as setTimeout(fn) and setInterval(fn) are queued onto the macrotask queue. Callbacks sitting on the microtask queue are run right after the stack empties out, and if a microtask schedules another microtask, then they will continually be pulled off the queue before yielding to the macrotask queue.

With this knowledge, we can make this test pass by using setTimeout instead of then():
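A passing version might be sketched like this, using setTimeout to let the microtask queue drain before each expectation:

```js
test('PromisesHaveFinishedIndicator', (done) => {
  const d1 = new Deferred();
  const d2 = new Deferred();

  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  d1.resolve();
  setTimeout(() => {
    // All microtasks scheduled by resolving d1 have run by now.
    expect(indicator.finished).toBe(false);

    d2.resolve();
    setTimeout(() => {
      // Both the Promise.all bookkeeping callback and the callback
      // that sets this.finished have run by now.
      expect(indicator.finished).toBe(true);
      done();
    }, 0);
  }, 0);
});
```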

The reason this works is that by the time the second setTimeout callback runs, we know that these promise callbacks have run:

  • The callback inside the implementation of Promise.all that checks that all promises have resolved, then resolves the returned promise.
  • The callback that sets this.finished = true.

Having a bunch of setTimeout(fn, 0) calls in our code is unsightly to say the least. We can clean this up with the async/await syntax:
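One way to sketch this is with a small helper (the flushPromises name is my own) that resolves on a fresh macrotask, so awaiting it lets all pending microtasks run first:

```js
const flushPromises = () => new Promise((resolve) => setTimeout(resolve, 0));

test('PromisesHaveFinishedIndicator', async () => {
  const d1 = new Deferred();
  const d2 = new Deferred();

  const indicator = new PromisesHaveFinishedIndicator([d1.promise, d2.promise]);

  d1.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(false);

  d2.resolve();
  await flushPromises();
  expect(indicator.finished).toBe(true);
});
```

Note that with an async test function, Jest awaits the returned promise, so done is no longer needed.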

If you want to be extra fancy, you can use setImmediate instead of setTimeout in some environments (Node.js). It is faster than setTimeout but still runs after microtasks:
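That swap might be sketched as follows, wrapping setImmediate in a promise so it can be awaited (the helper name is illustrative); the rest of the test stays the same:

```js
// Resolves in the event loop's "check" phase, after pending microtasks
// have run, without setTimeout's minimum delay.
const flushPromises = () => new Promise((resolve) => setImmediate(resolve));
```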

When writing tests involving promises and asynchrony, it pays to understand how callbacks are scheduled and the roles that the different queues play in the event loop. This knowledge lets us reason about the asynchronous code we write.

This article was written by Kenneth Chung.


Sometimes I feel good after fixing a bug. More likely though, I feel like I’ve made things worse. Fixing bugs often makes the code a little harder to read and a little more difficult to understand. Worse, fixing bugs may accidentally introduce even more bugs.

Most of the time, bugs occur because programmers can’t envision all possible runtime behaviors of a program. These unhandled behaviors are sometimes called edge cases. Usually, edge cases can be easily addressed with a simple if statement: if we encounter this case, do something else. However, doing so can make programs more difficult to comprehend because the reader now has to visualize multiple code paths in their head. It gets worse when there are multiple edge cases for which we pile on if statements. When it’s time to refactor some related code, these ifs would have to carry through the refactoring, and this increases the likelihood of a regression.

When I find myself piling on if statements to fix bugs, I ask myself whether there is a better way to address the issue without making the program more difficult to understand and without risking the introduction of more bugs. When there is, it usually involves refactoring the way data is modeled and handled. Below is an account of one of those times.

At Rescale, users can launch desktop instances in the cloud. These desktops can be in the not_started, starting, running, stopping, or stopped state. The desktops and their latest known state are returned from an API endpoint that we polled when displaying the desktops to the user.

There was a bug regarding the local UI state of the desktop. The state of the desktop is optimistically set to stopping when the user requests a desktop to be stopped. This is optimistic because it is set regardless of whether the stop request, which sends a message to queue a task for the desktop to be stopped, is successful.

There was a window of time in which, if the user requested the list of desktops again, the API would return running for the desktop that was just requested to be stopped, because the task for stopping the desktop was still queued and hadn’t run yet. The UI would update with the latest status, and the user would see the desktop go from stopping to running. When the stop task finally ran, the user would see a transition from running back to stopping. Seeing stopping, then running, then stopping is a jarring experience for the user, so we needed to fix this.

An approach to fixing this would be to check whether the desktop is in the stopping state locally, and if so, skip updating its status to running. This is the “pile on an if statement” approach.

Instead, I decided to hold a set of statuses for each desktop. Whenever the desktop list API response came back, I would add the latest status of each desktop to its respective status set. A function then takes a set of statuses and determines the appropriate status to display to the user. For example, if both running and stopping are in the set, it displays just stopping; the function also has rules for handling starting and running.
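A sketch of such a display function follows; the exact precedence rules are an assumption reconstructed from the description, the idea being that later lifecycle states win, so a stale earlier status from the API can never override an optimistic later one:

```javascript
// Later lifecycle states take precedence over earlier ones.
const PRECEDENCE = ['stopped', 'stopping', 'running', 'starting', 'not_started'];

// Maps the set of statuses seen so far to the one shown to the user.
function displayStatus(statuses) {
  for (const status of PRECEDENCE) {
    if (statuses.has(status)) {
      return status;
    }
  }
  return 'not_started'; // nothing observed yet
}
```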

This fixed the issue: the status displayed depends purely on what’s in the status set, not on the order in which the statuses arrived. In other words, it is no longer possible to see stopping, then running, then stopping again.

What’s great about this is that it fixed a similar issue that I had forgotten: when the user launches a desktop, the UI optimistically shows that the desktop is in the starting state, but the next API call may respond with not_started, and so the user could see that the desktop went from starting to not_started, then eventually back to starting. This issue had been effectively fixed for free.

In conclusion, when tasked with fixing a bug, a simple solution may appear fine at first, but we should consider whether it encourages further complexity down the road. In the example above, I could have solved both cases with simple if statements. But if an issue then arose with the status changing from stopped to stopping, another programmer might be encouraged to pile on yet another if statement. Sometimes it’s worth spending a little more time solving not just the bug, but the class of bugs that the issue represents.

This article was written by Kenneth Chung.


A little more than a month ago we shipped a redesigned version of our homepage. During its development, we thought it would be a good time to reevaluate the way we were writing our CSS.

At the time, our main CSS file was more than 2000 lines long, written in the classic semantic-class-names style (e.g. the news section gets the .news class) with a couple of utility classes here and there.

From skimming our CSS code, we felt a little uneasy going into this redesign because 1) many of the CSS classes we had written were probably not reusable in the new design, so we were going to end up adding a lot more CSS, and 2) there were going to be a lot more responsive components, and the way we were writing responsive CSS was difficult to follow and change.

One of the major pain points with our old CSS was that it was very hard to visualize how an element with a particular CSS class would behave on different screen sizes just from reading the source code. This arose because our media-query sections were separated by hundreds of lines of CSS. For example, with this CSS:
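For illustration, the structure looked roughly like this (a reconstruction, not our actual stylesheet):

```css
.footer {
  padding: 1rem;
  font-size: 1rem;
}

/* ...hundreds of lines of unrelated rules... */

@media screen and (min-width: 30em) {
  /* ...overrides for many other classes... */
  .footer {
    padding: 2rem;
  }
}

@media screen and (min-width: 60em) {
  /* ...more overrides... */
  .footer {
    padding: 4rem;
    font-size: 1.25rem;
  }
}
```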

If we were reading the .footer class under one of the media queries, it would be very difficult to remember what was in the base class, what base properties were being overridden, and which media query block we were even in. Adding more classes would only exacerbate the problem.

Around that time we stumbled across the Tachyons CSS library and an article written by its creator, Adam Morse, titled CSS and Scalability.

In it, Adam argues that the crux of CSS scalability and reusability problems is the use of semantic class names, and that with such a system we will never stop writing CSS.

Here are some of the most intriguing points made in that article:

If you’re going to build a new component, or change a piece of UI in your app – what do you do? I don’t know anyone that reads all the available CSS in the app to see if there is something they can reuse.

If you are starting from the assumption that your CSS isn’t reusable, your first instinct is to write new CSS. But most likely you aren’t creating a new classification of visual styles. In my experience it is likely that you are replicating visual styles that already exist.

In this model, you will never stop writing CSS. Refactoring CSS is hard and time consuming. Deleting unused CSS is hard and time consuming. And more often than not – it’s not work people are excited to do. So what happens? People keep writing more and more CSS.

With that, we adopted the Tachyons library and were amazed by how little CSS we had to write in order to implement our redesign, and that implementing responsive designs was fun again.

Tachyons comes with a bunch of CSS classes, each of which sets only one property. For example, .fl defines float: left. Furthermore, each class has variants with responsive modifiers. For example, .fl-ns means only apply float: left when the screen size is “not small”, or above 480px.

We use these classes as building blocks for components. For example, here’s one column of a responsive 4 column layout:
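In markup, one such column might look like this (a sketch; note that Tachyons’ -m modifier covers only the medium range, so -l is included to keep the width at 25% on large screens too):

```html
<!-- Full width by default (small screens), half width from "not small"
     up, quarter width from "medium" up -->
<div class="fl-ns w-50-ns w-25-m w-25-l">
  ...
</div>
```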

That says: float left and take up 50% of the width when the screen is bigger than “not small”, and take up 25% of the width when the screen is bigger than “medium”. If it’s smaller than “not small”, then use the default behavior of a div, which is to take up 100% of the width.

The other thing Tachyons buys us is a set of predefined font sizes and padding/margin spacings also known as font and spacing scales. This system has some pros and cons.

One of the biggest pros is that we spend a lot less time deciding on sizes. For example, prior to adopting Tachyons, we would often ask ourselves if a particular section should use a font size of 16px or 18px, or should the padding be 8px or 10px. Tachyons reduces the number of decisions while increasing consistency.

The con is, of course, when we want to use a size that isn’t on these scales. We handle this by simply inlining the style: one-off styles usually aren’t going to be reused anyway, so there’s no reason for them to live in the CSS.

The other problem is the repetition of elements with the same classes, usually among sibling elements. For the 4 column component above, the classes would be repeated 4 times. This is a minor issue with code editors that support multiple cursors, and if it becomes a real problem we can use a template system to abstract away the components.

Overall, we think this is the direction in which scalable CSS systems are heading. In other words, the best way to scale CSS is to stop writing CSS.

This article was written by Kenneth Chung.


React version 15 has deprecated valueLink, which was a property on form elements for expressing two-way binding between the value of the form element and a state property of the component using the element. The recommended replacement is to explicitly specify the value as a prop and to supply an onChange handler instead.

At Rescale, we use valueLink a lot. When creating components for modifying business objects—our business objects are usually immutable Record instances from the Immutable.js library—we use the pattern of passing the business object to the component via props, and then initializing a state property on the component to said object via getInitialState. Here’s pseudocode for illustration:
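A sketch of that pattern, in the createClass style of the time (component and field names are illustrative):

```jsx
const CompanyEditor = React.createClass({
  getInitialState() {
    // "Stage" the immutable company record from props into local state.
    return { company: this.props.company };
  },

  handleNameChange(name) {
    this.setState({ company: this.state.company.set('name', name) });
  },

  handleBudgetChange(budget) {
    this.setState({ company: this.state.company.set('budget', budget) });
  },

  render() {
    return (
      <form>
        <input
          type="text"
          valueLink={{
            value: this.state.company.get('name'),
            requestChange: this.handleNameChange,
          }}
        />
        <input
          type="number"
          valueLink={{
            value: this.state.company.get('budget'),
            requestChange: this.handleBudgetChange,
          }}
        />
      </form>
    );
  },
});
```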

What’s good about this pattern is that since our business objects are immutable, the company passed into the component (this.props.company) is guaranteed not to change in the course of modifying the company within this component, while the company referenced by state (this.state.company) may change when handleNameChange gets called. With this pattern, the state of the component can be thought of as a “staging” area for changes to the model. We can do things like compare the company referenced by state with the company referenced by props to determine whether any changes were made, and we can easily revert or abandon changes.

As you can see though, it gets a bit tedious creating handlers for every field of the model that can be modified by the form. In our example we needed handlers for changes to name and changes to budget. But it’s not too hard to create a function that will generate an object with an appropriate { requestChange, value } pair and then feed that object to valueLink. Indeed, that’s what react-addons-linked-state-mixin and reactlink-immutable are for.

However, in React 15 we are to use value and onChange instead of valueLink. Switching over to value isn’t that big of a deal; we just write it out as in the example above. The only caveat is that for checkboxes we need to use checked instead of value.

Creating a factory that generates handlers for onChange is something we decided to implement ourselves since we use a mix of simple component states (states with simple string or boolean values) and states that reference Immutable objects. Here’s such a factory for generating onChange handlers (named linkState, not to be confused with the this.linkState given by react-addons-linked-state-mixin):
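Here is a sketch of such a factory. The signature (an optional path argument marking this.state[key] as an Immutable collection to update via setIn) is a reconstruction, not necessarily the exact code we shipped:

```javascript
function linkState(component, key, path) {
  return (event) => {
    // Checkboxes report their state via `checked` rather than `value`.
    const value = event.target.type === 'checkbox'
      ? event.target.checked
      : event.target.value;

    if (path) {
      // Deep modification of an Immutable collection held in state.
      component.setState({ [key]: component.state[key].setIn(path, value) });
    } else {
      // Simple string/boolean state.
      component.setState({ [key]: value });
    }
  };
}
```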

We can use this to replace the valueLinks in the above example:
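The form fields might then be rewritten along these lines (a sketch, using the linkState factory described above):

```jsx
<input
  type="text"
  value={this.state.company.get('name')}
  onChange={linkState(this, 'company', ['name'])}
/>
<input
  type="number"
  value={this.state.company.get('budget')}
  onChange={linkState(this, 'company', ['budget'])}
/>
```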

What’s nice about our implementation is that we don’t need to use mixins which may be deprecated in the future, and we’re using the same factory function for creating handlers that deal with simple states as well as for deep modifications of immutable objects. This means we can remove the react-addons-linked-state-mixin and reactlink-immutable dependencies in our projects.

This article was written by Kenneth Chung.