When writing software we are often focused on just that: getting the feature done. With the rise of Test Driven Development (TDD), some of us write tests before the actual code, and that is great. Personally I like the concept of TDD, but I can’t seem to make myself adhere to it, as I am always too eager to write the code instead of writing the test first.
Another reason I don’t do TDD is that I find it hard to write a test for something that doesn’t exist yet. At least the public interface should exist, but that interface often changes during my implementation because I don’t come up with a full design up front.
But even though I don’t do TDD, I spend a lot of time writing unit tests, making sure all of my code is covered and that I didn’t forget anything. I do this because it is the most reliable way to avoid bugs in the future. Others, however, spend much less time on writing tests, or none at all, but write a lot more code. To an outsider these developers may seem more productive, as they produce more code. There are, however, a couple of problems with this idea.
The main purpose of testing is to verify that code works as expected. Not just now, but in the future as well. It gives us confidence that we didn’t break anything and that no new bugs were introduced. Writing tests is therefore a crucial part of software development: it prevents us from having to do a lot of bug fixing after each change.
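As a minimal sketch of this idea (the `apply_discount` function and its rules are hypothetical, just for illustration), a unit test pins down the expected behaviour so that any future change that breaks it is caught immediately:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # 25% off 200.00 should be 150.00
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        # A 0% discount must leave the price unchanged
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_rejected(self):
        # Percentages outside 0-100 are a programming error
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running this suite (for example with `python -m unittest`) after every change is what turns the tests into a safety net rather than a one-off check.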
In a complex project with thousands or even millions of lines of code, it is important that nearly all of the code is tested. But how do we know we have enough tests? Simply counting the number of tests and requiring a certain ratio of tests to lines of code will not work, as some lines of code require more tests than others. Moreover, such a metric is easily manipulated if the team feels pressure to reach it.

A better approach is to determine all the scenarios that need to be tested. This gives us confidence that those scenarios work, but there may be other cases that were forgotten or that, due to implementation details, behave differently. Determining the scenarios in a white-box fashion prevents this, because the implementation details are taken into account. But the cases that have to be tested change whenever the implementation changes, and verifying that all cases are covered (and re-determining the interesting ones) is a manual task that takes up a lot of time. A much better way is to automatically gather information about which lines of code are executed while running the tests. This is called code coverage.
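Dedicated tools such as coverage.py do this for real projects. The sketch below (with a hypothetical `classify` function as the code under test) shows the underlying idea using only the standard library’s tracing hook: record which lines actually run during a test, and any line that never shows up is an untested branch.

```python
import sys

def classify(n):
    # Hypothetical function under test: two branches.
    if n < 0:
        return "negative"
    return "non-negative"

executed_lines = set()

def line_tracer(frame, event, arg):
    # Record every "line" event that fires inside classify.
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno)
    return line_tracer

sys.settrace(line_tracer)
classify(5)            # a single test input: only one branch runs
sys.settrace(None)

# Only two of the three body lines ran; the "negative" branch is
# uncovered, which tells us we still need a test with a negative input.
print(sorted(executed_lines))
```

A real coverage tool does exactly this across the whole test suite and then reports the uncovered lines per file, so the gap is found automatically instead of by manually re-deriving the interesting cases.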
As a software developer you will have to fix bugs reported by users. A good bug report contains these three essential elements:
- A description of the encountered bug.
- A clear scenario that caused the bug.
- The outcome the user expected.
A lot of users don’t write bug reports like this, but it is still a good idea to rewrite the report into a fixed format before starting to fix it. We have a standard format for bug reports, consisting of three sections:
- How to reproduce: containing the scenario
- Observed behaviour: what the result of those actions is
- Expected behaviour: what was actually expected
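One practical benefit of this format is that it maps almost one-to-one onto a regression test. As a sketch (the `parse_price` function and the bug itself are hypothetical), each section of the report becomes part of the test:

```python
def parse_price(text):
    # Hypothetical fix: accept European notation such as "€1.234,56".
    cleaned = text.lstrip("€$ ").replace(".", "").replace(",", ".")
    return float(cleaned)

def test_parse_price_european_notation():
    # How to reproduce: parse a price written in European notation.
    # Observed behaviour (before the fix): a ValueError was raised.
    # Expected behaviour: the numeric value 1234.56.
    assert parse_price("€1.234,56") == 1234.56
```

Keeping such a test in the suite means the original scenario is preserved verbatim, even if nobody remembers the report months later.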
We don’t rewrite bug reports to match this form, and that sometimes causes a lot of frustration. Especially when a bug report is considered non-critical, work on it is often delayed. But after a couple of months it becomes harder to reproduce the scenario, because changes in the product may have altered the scenario or the outcome.
So how do we handle a bug that we cannot reproduce?