
Sandi Metz recently wrote an article proclaiming that duplication is cheaper than the wrong abstraction. This article raises valuable points about the costs of speculative generalization, but it’s part of a long line of articles detailing and railing against those costs. By now it should be old hat to hear someone criticize abstraction, and yet the meme persists.

The aspect of Sandi Metz’s article that I’d like to respond to in this post is the mindset it promotes, or at least the mindset that has rallied around it. You see it everywhere in the comments: just get the task done, nothing more. Sometimes that’s the right approach, but the problem is that the costs of abstraction, especially when it’s gotten wrong, are obvious, while the costs of the “simplicity first” mindset are not. I won’t talk about the specific costs of duplicated code, as those are already well known. I will talk about the opportunity costs – the missed learning opportunities.

Good developers should be constantly learning, constantly honing their skills. There’s always room to improve. The skill that’s most important for developers to practice is recognizing profitable abstractions, because doing so correctly relies on honed intuition. It takes seeing costs manifest over the long term, and it takes making mistakes. Developers should be constantly evaluating their past decisions and taking risks on new ones.

Opportunity cost is an often overlooked aspect of technical debt. The reason accumulating technical debt is the cheaper choice in the moment is that it takes a path for which the solution is already known. There’s nothing to learn, just implement the hack. That’s fine in small doses, but it forgoes the opportunity to learn things about the codebase, to discover missing abstractions and create conceptual tools that can help solve the problem.

So what the developers in Sandi Metz’s example should have done is noticed that this particular abstraction was costing them more than it was benefiting them. That’s a good thing to notice – it’s a valuable learning experience. What specific aspects of the abstraction were slowing down development? Which parts confused new developers and led them to make it worse? These are questions the developers should have asked themselves in order to learn from the experience.

Our development team has a weekly practice that we call “Tech Talks,” in which a developer talks about something they learned that week, some part of the codebase that was thornier than it should have been, and so on. This practice is invaluable for promoting a growth mindset, and the situation from Ms. Metz’s article would have been a perfect example to bring up.

Developers shouldn’t focus on just cranking out code. Those who limit their attention in such a way aren’t growing and will soon be surpassed by better tools. Instead, we should recognize that the job of a developer is to understand which abstractions will prove valuable for the codebase. The only way to learn that is through experience.

This article was written by Alex Kudlick.


Sometimes I feel good after fixing a bug. More likely though, I feel like I’ve made things worse. Fixing bugs often makes the code a little harder to read and a little more difficult to understand. Worse, fixing bugs may accidentally introduce even more bugs.

Most of the time, bugs occur because programmers can’t envision all possible runtime behaviors of a program. These unhandled behaviors are sometimes called edge cases. Usually, an edge case can be addressed with a simple if statement: if we encounter this case, do something else. However, doing so makes the program harder to comprehend because the reader now has to hold multiple code paths in their head. It gets worse when there are multiple edge cases and we pile on if statements. When it’s time to refactor related code, each of those ifs has to be carried through the refactoring, which increases the likelihood of a regression.

When I find myself piling on if statements to fix bugs, I ask myself whether there’s a better way to address the issue without making the program harder to understand and without risking more bugs. When there is, it usually involves refactoring the way data is modeled and handled. Below is an account of one of those times.

At Rescale, users can launch desktop instances in the cloud. A desktop can be in the not_started, starting, running, stopping, or stopped state. The desktops and their latest known state are returned from an API endpoint that we poll while displaying the desktops to the user.
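To make the rest of the post concrete, here is a minimal sketch of the data shapes involved, in TypeScript. The names (DesktopStatus, Desktop, fetchDesktops) and the endpoint URL are illustrative assumptions, not Rescale’s actual code.

```typescript
// Illustrative data shapes only; these names and the endpoint URL are
// assumptions, not Rescale's actual code.
type DesktopStatus =
  | "not_started"
  | "starting"
  | "running"
  | "stopping"
  | "stopped";

interface Desktop {
  id: string;
  status: DesktopStatus; // latest known state reported by the API
}

// Polled periodically to refresh the desktop list shown to the user.
async function fetchDesktops(): Promise<Desktop[]> {
  const response = await fetch("/api/desktops/");
  return response.json();
}
```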

There was a bug in the local UI state of the desktop. When the user requests that a desktop be stopped, its state is optimistically set to stopping. This is optimistic because the status is set regardless of whether the stop request, which merely queues a task to stop the desktop, actually succeeds.
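A hypothetical stop handler might look something like this; the optimistic part is that the local status flips to stopping before we know whether the queued stop task will ever run. The updateLocal callback is an assumption for the sketch.

```typescript
// Hypothetical stop handler: flip the local status to "stopping" before the
// stop request (which only queues a task) has succeeded.
async function onStopClicked(
  desktop: Desktop,
  updateLocal: (desktop: Desktop) => void
): Promise<void> {
  updateLocal({ ...desktop, status: "stopping" }); // optimistic update
  await fetch(`/api/desktops/${desktop.id}/stop/`, { method: "POST" });
}
```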

There was a window of time in which, if the user requested the list of desktops again, the API would return running for the desktop that had just been asked to stop, because the stop task was still queued and hadn’t run yet. The UI would update with the latest status, and the user would see the desktop go from stopping to running. When the stop task finally ran, the user would see a transition from running back to stopping. Seeing stopping, then running, then stopping is a jarring experience, so we needed to fix it.

An approach to fixing this would be to check whether the desktop is in the stopping state locally, and if so, skip updating its status to running. This is the “pile on an if statement” approach.
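As a sketch, that fix would look something like this when merging a polled status into local state (the function name is hypothetical):

```typescript
// The "pile on an if statement" fix: ignore a stale "running" report while we
// are locally showing "stopping".
function mergeStatus(local: Desktop, polled: Desktop): Desktop {
  if (local.status === "stopping" && polled.status === "running") {
    return local; // skip the update
  }
  return { ...local, status: polled.status };
}
```

Each new race condition would need its own special case added to this function.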

Instead, I decided to hold a set of statuses for each desktop. Whenever the desktop list API response came back, I would add the latest status of each desktop to its respective status set. A separate function takes a set of statuses and decides which status to display to the user: for example, if running and stopping are both in the set, it displays stopping, and it has similar rules for handling starting and running.
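Here is a rough sketch of that approach; the precedence ordering and the helper names are my own illustration of the idea, not the original implementation.

```typescript
// Accumulate every status a desktop has been reported in, and derive the
// display from the set rather than from the most recent poll.
const LIFECYCLE: DesktopStatus[] = [
  "stopped",
  "stopping",
  "running",
  "starting",
  "not_started",
];

// The furthest-along lifecycle state present in the set wins, so a stale
// "running" report can never roll the display back from "stopping".
function displayStatus(statuses: Set<DesktopStatus>): DesktopStatus {
  for (const status of LIFECYCLE) {
    if (statuses.has(status)) {
      return status;
    }
  }
  return "not_started";
}

// Called for each desktop whenever the polled list comes back.
function recordStatus(
  statusSets: Map<string, Set<DesktopStatus>>,
  desktop: Desktop
): void {
  const set = statusSets.get(desktop.id) ?? new Set<DesktopStatus>();
  set.add(desktop.status);
  statusSets.set(desktop.id, set);
}
```

With this in place, a set containing running and stopping displays stopping no matter which status arrived first.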

This fixes the issue because the displayed status depends purely on what’s in the status set, not on the order in which the statuses arrived. In other words, it’s no longer possible to see stopping, then running, then stopping again.

What’s great about this is that it also fixed a similar issue I had forgotten about: when the user launches a desktop, the UI optimistically shows the desktop in the starting state, but the next API call may still respond with not_started, so the user could see the desktop go from starting to not_started and then eventually back to starting. That issue was fixed for free.

In conclusion, when tasked with fixing a bug, a simple solution may look fine at first, but we should ask whether it encourages further complexity down the road. In the example above, I could have handled both cases with simple if statements, but then the next time a status jumped from stopped to stopping, another programmer might be tempted to pile on yet another if statement. Sometimes it’s worth spending a little more time solving not just the bug, but the class of bugs that the issue represents.

This article was written by Kenneth Chung.