Refactoring to testable code

Imagine we have code that interacts with external systems. External systems could mean:

  • interacting with a database
  • interacting with a file-system
  • printing something to the console
  • interacting with a remote service
  • and so on

How do we test a program like this? How do we make sure it does the right thing after we change something in its logic or its structure, when we add a feature or fix a bug?

In this post we will look at a general approach to achieve this.

One sure way would be to restart the program each time we change its code. If the program expects user interaction, we would need to interact with it to test it. Additionally, such a program needs all the required external systems, maybe a database, maybe a remote service, to be available. Not only that, but often these systems need to contain specific data, or to be in a specific state.

As we can see this approach, while the most direct and intuitive one, is not without its drawbacks:

  • it is slow
  • it is error-prone (did we remember to test everything?)
  • it is complex to set up (all external systems need to be up-and-running)

To the latter point one could add: it might even be impossible to set up. While developing, we do not always have access to systems equivalent to those that will run in production. And running our tests against a production database or service is far from a good idea (provided we had access to those to begin with).

As we see, under certain circumstances testing our code manually can be slow, error-prone, complex, expensive, or even impossible.

What alternative do we have?

Imagine we could split our program into two parts:

  • The first part would comprise most of our code. It consists only of “pure logic” which is unaware of external systems. As a result, we can write small programs (tests) which in turn exercise this “logic” with specific inputs, checking for expected outputs. This is the core idea behind unit-tests (see the small sketch after this list). These tests run extremely fast, in a repeatable, reliable way – potentially after each small change we make to our code. And best of all: automatically, without our intervention, and without depending on external systems.
  • The second part would ideally be as small as possible. This part contains all the rest, the aspects which are difficult to test. They are isolated here and we can decide whether we want to test them in a manual or expensive way, or not test them at all, trusting that they “just work” (which you can safely do if your code is straightforward enough).
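To make the first part a bit more tangible, here is a minimal sketch of such a unit-test in Java, using a made-up pure function and JUnit 5 (all names are hypothetical, chosen only for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // “Pure logic”: a plain computation, no database, file-system or console involved.
        static int totalWithTax(int netCents, int taxPercent) {
            return netCents + netCents * taxPercent / 100;
        }

        @Test
        void addsTaxToTheNetPrice() {
            // Specific input, expected output – runs in milliseconds, with no setup.
            assertEquals(119, totalWithTax(100, 19));
        }
    }

Because nothing here touches an external system, a test like this can run on any machine, as often as we like.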

Beware: only when these two parts are cleanly separated is it possible to achieve what was promised for the first part. When they are mixed, which is often the case, fast testing and development is no longer achievable.

So, how would we proceed if we have a program in such a situation, where “pure logic” is intermingled with code that interacts with external systems?

We will look at a general approach now, using abstract diagrams representing a program. In a follow-up post we will look at more concrete examples using code. For now, we want to explore some ideas behind this.

Let’s consider this abstract representation of a program:

Program with mixed parts

Imagine the outer box is a piece of code. Each long, rounded box is some instruction. The “logic” ones are normal computations; they don’t interact with any external systems. The orange ones do: in this case they interact with the file-system and a database.

Because of this, if we wanted to write an automated test we would need a file-system available and a database running. Such an automated test could not run on a machine without access to a database, or where the file-system is read-only, or where the expected directory structure does not exist.
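To give the diagram a rough shape in code, here is a hypothetical Java sketch of such a mixed program. ReportJob, the JDBC URL and the file path are all made-up placeholders:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    class ReportJob {

        void run() throws Exception {
            // Interacts with the database (an “orange” instruction).
            List<Integer> amounts = new ArrayList<>();
            try (Connection connection = DriverManager.getConnection("jdbc:postgresql://prod-host/orders");
                 ResultSet rows = connection.createStatement().executeQuery("SELECT amount FROM orders")) {
                while (rows.next()) {
                    amounts.add(rows.getInt("amount"));
                }
            }

            // “Logic”: a plain computation, unaware of any external system.
            int total = amounts.stream().mapToInt(Integer::intValue).sum();

            // Interacts with the file-system (another “orange” instruction).
            Files.writeString(Path.of("/var/reports/total.txt"), "Total: " + total);
        }
    }

Any test of run() would need the database and the target directory to exist, which is exactly the problem described above.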

The first step in re-structuring our code would be to “extract” the instructions that interact with external systems into new classes / components. This is illustrated here:

Components extracted

Note how these newly created components interact directly with the external systems (the database and the file-system, respectively). We can see that these components HAVE a direct dependency on these systems.
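Sticking with the hypothetical sketch from above, the extraction might look roughly like this:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Talks directly to the database.
    class OrderDatabase {
        List<Integer> loadAmounts() throws SQLException {
            List<Integer> amounts = new ArrayList<>();
            try (Connection connection = DriverManager.getConnection("jdbc:postgresql://prod-host/orders");
                 ResultSet rows = connection.createStatement().executeQuery("SELECT amount FROM orders")) {
                while (rows.next()) {
                    amounts.add(rows.getInt("amount"));
                }
            }
            return amounts;
        }
    }

    // Talks directly to the file-system.
    class ReportFile {
        void write(String content) throws IOException {
            Files.writeString(Path.of("/var/reports/total.txt"), content);
        }
    }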

We took the instructions that deal with external systems out of our original code. However, the program’s behavior should not change: it still needs to talk to these external systems as part of its normal operation. We need to bring that back. How? A first, naive approach would be to create instances of the new components inside the original code:

Naive instantiation

With this, our program would do what it did before. Good. But at the same time our original code would be coupled to the implementation of our new components, which in turn are coupled to the external systems. Again: difficult to test. Not so good.
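In the running sketch, this naive version might look as follows: ReportJob creates the concrete components itself and therefore still depends on them, and through them on the external systems.

    class ReportJob {

        // Created directly inside the class: still coupled to the concrete components.
        private final OrderDatabase database = new OrderDatabase();
        private final ReportFile reportFile = new ReportFile();

        void run() throws Exception {
            int total = database.loadAmounts().stream().mapToInt(Integer::intValue).sum();
            reportFile.write("Total: " + total);
        }
    }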

Instead of doing this, we’ll do something else. It will seem more elaborate at first, but it will provide us with the flexibility we need later. We’ll separate the interface of our components from their implementation:

Interface and implementation

Here we see that the components implement their respective interfaces. What does this buy us?
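Before answering that question, here is roughly what this separation could look like in the running sketch; the interface names are again invented for illustration, and the component bodies stay exactly as before:

    import java.util.List;

    interface OrderSource {
        List<Integer> loadAmounts() throws Exception;
    }

    interface ReportSink {
        void write(String content) throws Exception;
    }

    // The components from the previous step simply declare the interfaces;
    // their bodies (the JDBC and file-system code) are unchanged:
    //
    //   class OrderDatabase implements OrderSource { ... JDBC code as before ... }
    //   class ReportFile    implements ReportSink  { ... file-system code as before ... }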

In the next step we’ll modify our original code to “receive” instances of the new interfaces from the outside. If your original code is a class, the constructor is a good place to pass these instances in; if it is a method / function, you can pass them as parameters. It is important to declare them with the interface types, not with the types of the concrete components (which implement the interfaces).
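In the running sketch, constructor injection could look roughly like this; note that the fields and parameters are typed by the interfaces, not by OrderDatabase or ReportFile:

    class ReportJob {

        // Declared as interfaces: ReportJob no longer knows the concrete components.
        private final OrderSource orders;
        private final ReportSink report;

        // The instances are handed in (“injected”) from the outside.
        ReportJob(OrderSource orders, ReportSink report) {
            this.orders = orders;
            this.report = report;
        }

        void run() throws Exception {
            // Pure logic in the middle; all external work goes through the interfaces.
            int total = orders.loadAmounts().stream().mapToInt(Integer::intValue).sum();
            report.write("Total: " + total);
        }
    }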

Injection of dependencies

Who creates the instances now? When and where does it happen?

As far as the code-to-test is concerned, it does not matter: the instances simply get passed in as “dependencies”. Because this happens “passively”, without the original code issuing an imperative command, we call this dependency injection.

In practice the instantiation needs to happen somewhere. Depending on your programming language, this can range from instantiating the components manually in your main / entry function to letting a dependency-injection framework do the job for you (like Spring or Guice in Java).
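Sticking with manual wiring, the entry point of our hypothetical sketch might simply do this:

    public class Main {
        public static void main(String[] args) throws Exception {
            // The real, production components are created once, at the edge of the program...
            OrderSource orders = new OrderDatabase();
            ReportSink report = new ReportFile();

            // ...and injected into the code that uses them.
            new ReportJob(orders, report).run();
        }
    }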

Once the dependencies are “injected”, we can invoke the respective operations on the components through the interfaces.

Call via dependencies

If all of this is in place, the newly structured program should not behave differently from before. So why do all the work?

Finally we come to the point where it all bears fruit. Remember that we separated out the new components, extracted their interfaces, and made them implementations of those interfaces. Ideally, these implementations should:

  • be easy to reason about
  • contain as little logic as possible (ideally none)
  • be simple enough that we can trust they work

If these criteria are met, we can suddenly create alternative implementations of these interfaces! Implementations which do not interact with the external systems, but just pretend that they do. It is almost as if we were making fun of the components we extracted and isolated! In fact, the term used to describe this is to “mock” a class or component.

Through “mocking” we can construct a parallel world in which the code-to-test gets mock instances “injected” into it – and is none the wiser.

Mocked dependencies

The code cannot tell, nor does it care, how the injected implementations work, as long as they behave as expected. If we are clever in constructing our “mocks”, we can suddenly “run” our program without ever touching a database or the file-system.
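To close the loop on the hypothetical sketch, here are two hand-written “mocks” and a test that exercises ReportJob without a database or a writable file-system (JUnit 5 assumed; in practice a mocking library such as Mockito could generate such stand-ins for you):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;

    import org.junit.jupiter.api.Test;

    class ReportJobTest {

        // Pretends to be the database: returns canned data straight from memory.
        static class FakeOrderSource implements OrderSource {
            @Override
            public List<Integer> loadAmounts() {
                return List.of(10, 20, 12);
            }
        }

        // Pretends to be the file-system: just remembers what would have been written.
        static class RecordingReportSink implements ReportSink {
            String written;

            @Override
            public void write(String content) {
                this.written = content;
            }
        }

        @Test
        void writesTheTotalOfAllOrders() throws Exception {
            RecordingReportSink sink = new RecordingReportSink();

            new ReportJob(new FakeOrderSource(), sink).run();

            assertEquals("Total: 42", sink.written);
        }
    }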

And this is precisely what we wanted to achieve!

Once we reach this stage we can write fast automated tests that exercise only our main code. We are able to test our program quickly, in any environment, without external dependencies, reliably, in a repeatable, automated way.

This is the idea. In a follow-up post we will see how this translates to a concrete example.