My Testing Best Practices – Part I : Functional Test Levels

In the course of my (still short) career as a software developer, I have been involved in writing quite a few tests. I don’t have a testing background, and everything I know comes either from university or from my own experience. Nevertheless, I have come up with a set of best practices that I use whenever I’m about to write new tests. As I will be talking about my own set of best practices, I may use some terminology the wrong way; I’m sorry if that happens, and feel free to correct me.

As there is much to say about testing, I have decided to split it up into a number of smaller blog posts. In this first part I will answer the three big questions about the functional test levels: which, how, and why.

I use three different levels, each of which answers its own question:

  1. Unit Tests: does the unit work as intended?
  2. Integration Tests: is the unit correctly using other parts?
  3. Acceptance Tests: does the system work as requested?
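To make the first two levels concrete, here is a minimal sketch in Python. The units (`PriceFormatter`, `Cart`) are hypothetical, invented purely for illustration; an acceptance test would exercise the full system end to end and is omitted here for brevity.

```python
# Hypothetical example: a price formatter (one unit) and a cart that uses it.

class PriceFormatter:
    def format(self, cents: int) -> str:
        return f"{cents // 100}.{cents % 100:02d} EUR"

class Cart:
    def __init__(self, formatter: PriceFormatter):
        self.formatter = formatter
        self.items = []

    def add(self, price_cents: int) -> None:
        self.items.append(price_cents)

    def total_label(self) -> str:
        return self.formatter.format(sum(self.items))

# Level 1 -- unit test: does PriceFormatter work as intended?
def test_formatter_unit():
    assert PriceFormatter().format(1234) == "12.34 EUR"

# Level 2 -- integration test: is Cart correctly using PriceFormatter?
def test_cart_integration():
    cart = Cart(PriceFormatter())
    cart.add(100)
    cart.add(250)
    assert cart.total_label() == "3.50 EUR"
```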

A higher level tests more functionality than a lower one, and thus should depend on it in such a way that if the lower level fails, so will the higher level. This in itself is only logical, as you want to know what fails. However, I don’t respect the separation of levels at all.

To use unit tests successfully you have to do a lot of mocking and stubbing, and I’ve never been a fan of that. So what do I do? If the unit does not need stubs or mocks, I write genuine unit tests. If the unit depends on other units, I also write ‘unit tests’, but for the unit as-is, without mocks or stubs. These tests aren’t really unit tests; they are integration tests. And as the units start to depend on more other units, the level of my tests grows, until I reach the point of testing the entire system and am thus writing acceptance tests.
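A sketch of what that looks like, with hypothetical units `A` and `B`: the test for B is written against the real A instead of a stub, so strictly speaking it is an integration test of A and B together, even though it lives next to the unit tests.

```python
# Hypothetical units: B depends on A.
class A:
    def normalize(self, text: str) -> str:
        return text.strip().lower()

class B:
    def __init__(self, a: A):
        self.a = a

    def greet(self, name: str) -> str:
        return "hello, " + self.a.normalize(name)

# A 'unit test' for B that uses the real A instead of a mock --
# technically an integration test of A and B together.
def test_b_with_real_a():
    assert B(A()).greet("  Alice ") == "hello, alice"
```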

This may sound like a very bad testing strategy, but I have a good reason for it: I’m a developer, and I like writing new code, code that does something. Test code is necessary, and I realize that, but it does not add functionality to the application. My main goal is to write tests that cover as much as possible (good tests) in the least amount of time.

So I have come to ask myself: “What’s the point of having a test for unit B (which heavily depends on unit A) without unit A?” The unit test of B will claim it works as intended, but it’s lying (read this post for more information). An integration test is required to actually test the integration.
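To illustrate how a mocked unit test of B can lie, here is a hypothetical example where the stub encodes a wrong assumption about A (euros instead of cents). The mocked test passes happily; only the integration test exposes the mismatch.

```python
from unittest import mock

class A:
    def parse(self, s: str) -> int:
        # The real A returns the amount in cents.
        return int(float(s) * 100)

class B:
    def __init__(self, a):
        self.a = a

    def double(self, s: str) -> int:
        return 2 * self.a.parse(s)

# Mocked unit test of B: the stub assumes A returns euros, not cents.
def test_b_mocked_lies():
    stub = mock.Mock()
    stub.parse.return_value = 3          # wrong assumption about A
    assert B(stub).double("3.00") == 6   # passes, but proves nothing about A+B

# Integration test with the real A: only this catches the mismatch.
def test_b_integrated():
    assert B(A()).double("3.00") == 600
```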

An often-claimed benefit of strict unit tests is that they make it clearer where a bug is located. This is completely true: imagine a system with 26 units (A – Z). If there is a bug in unit B, the tests for all the units depending on B will fail, obfuscating the actual error, which is not an integration error. If the unit tests are written as they should be, you can clearly see that only the unit tests of unit B are failing.

Though I agree with the idea behind it, I don’t agree with the way it is solved. Yes, you will see there is something wrong with unit B, but only if you look at the unit tests first, because all the integration tests will be failing as well. Will you just ignore the integration tests whenever there is a failure in a unit test?

Now imagine we have the same system, but there is an error in an integration. If it happens early in the dependency chain, you will have a lot of failing integration tests, and once again a hard time finding the root cause. How do we solve this? We can’t fall back to the unit tests, as they are all green.

In my opinion, all of this can be solved by simply defining dependencies between your tests. If you can specify that the tests of unit B depend on the tests of unit A, a failure in unit A will make the tests of unit B fail with a simple dependency error. This way we can follow the chain back to the root cause.

The only problem with my approach is that writing tests for a small unit is different from writing them for an entire system, yet until recently I wrote them in the same way. It is easy to see that simple, basic units can have simple, basic unit tests, but more advanced units require a more advanced framework, as I mentioned in my previous post.

