Friday, February 12, 2010

The different types of tests

The development community has produced a huge number of types of tests, and they can be categorized in many different ways.

I am going to focus on splitting tests by their intention here.
I guess people generally think of tests as a tool for reducing bugs and maintaining robustness. Tests give us the courage to dare to perform large refactorings and introduce new features while being relatively confident that we aren't breaking things.

That's a very important part of writing tests but I'll argue that there's another side to it:
Tests help us design. Tests help us write clean code. Tests help us split code into isolated modules.

With these two in mind, consider the following two test strategies: Unit Testing and Blackbox Testing.

Blackbox tests
my definition:
A blackbox test looks at the code as if it were a black box, i.e. the internals of the system are unknown and irrelevant. You use your knowledge of the system requirements to provide the inputs and verify that the outputs are correct.
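To make this concrete, here is a minimal sketch using Python's unittest (the slugify function and its requirements are made up for illustration):

```python
import unittest

# Hypothetical system under test. A blackbox test doesn't care how
# slugify is implemented, only that its outputs match the requirements.
def slugify(title):
    return "-".join(title.lower().split())

class SlugifyBlackboxTest(unittest.TestCase):
    # Each test states a requirement purely in terms of input and output.
    def test_lowercases_the_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace_into_single_dashes(self):
        self.assertEqual(slugify("a   b"), "a-b")
```

Run with python -m unittest. Note that if slugify's internals were completely rewritten, these tests would keep passing as long as the requirements still hold.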

I really like writing blackbox tests because they are fairly immune to internal refactorings and can usually quite easily capture the requirements on a high level. Blackbox tests have the additional benefit of being able to test a lot of code at once, and only testing code that will actually be used by real users.

However, writing blackbox tests at too high a level doesn't force you to write clean and modularized code.

Unit tests
my definition:
A unit test verifies the correctness of individual components of the system (hence the name unit). If the component (or unit) needs some collaborators, you typically mock them and let them provide the inputs you want for your specific test. Unlike blackbox testing, you want to limit the scope of each test as much as possible.
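As a sketch, again using Python's unittest (the PriceCalculator and its tax-service collaborator are hypothetical), mocking the collaborator lets the test control exactly what inputs the unit receives:

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: it depends on a tax_service collaborator.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net_price):
        return net_price + self.tax_service.tax_for(net_price)

class PriceCalculatorUnitTest(unittest.TestCase):
    def test_adds_the_tax_reported_by_the_collaborator(self):
        # Mock the collaborator so only this one unit is under test.
        tax_service = Mock()
        tax_service.tax_for.return_value = 25
        calculator = PriceCalculator(tax_service)

        self.assertEqual(calculator.total(100), 125)
        tax_service.tax_for.assert_called_once_with(100)
```

The need to inject a mockable collaborator is itself the design pressure mentioned above: to be testable in isolation, the class has to receive its dependencies rather than reach out for them.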

Unit tests, on the other hand, usually have to be rewritten every time you refactor, and they can usually only capture requirements that are contained within a single class. The advantage of unit tests is that they force you to write your classes to be testable individually, which greatly helps you write good code.

Some people like to measure the coverage of their codebase, and diligently write unit tests until they're at an acceptable level. I think that writing unit tests for the sake of increasing coverage is wrong.

The problem is that you're potentially testing dead code, and looking at coverage reports is our only way of detecting dead code.

Instead, consider measuring coverage from blackbox tests and unit tests separately.
The blackbox tests should reflect the overall system requirements, so run coverage for those tests. Then, by looking at the coverage report, you can spot dead code: code that isn't used by any requirement.
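A small self-contained sketch of the idea, using Python's standard-library trace module in place of a full coverage tool (classify and dead_branch are made-up stand-ins for production code):

```python
import trace

# Stand-in production code.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

def dead_branch():  # no requirement ever exercises this
    return "unused"

# Run only the "blackbox tests" (here, plain requirement checks)
# under line tracing.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, -1)   # requirement: handle negative numbers
tracer.runfunc(classify, 5)    # requirement: handle non-negative numbers

# Lines executed by some requirement; dead_branch's body never shows up
# here, which flags it as a dead-code candidate.
executed = {line for (_, line) in tracer.results().counts}
```

A real project would use a proper coverage tool over the whole blackbox suite, but the principle is the same: any line no requirement-level test reaches deserves the two-option treatment below.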

If you spot any particular dead code section, you have two options:
  1. Remove the code - it isn't used! Dead code bloats your codebase.
  2. If the code can't be removed, it must be part of some requirement, so formalize that requirement as a blackbox test and run the coverage again.
If you try writing a unit test just to exercise that code, you haven't gained anything, just a false sense of code coverage.

When developing according to TDD practices, should you write blackbox tests or unit tests?
The answer is both! I like to start with writing blackbox tests that cover all the requirements I can come up with. Then continue iteratively with unit tests and code in the usual TDD cycle until both the blackbox tests and unit tests are happy.

This should mean that you capture the necessary requirements while maintaining a healthy code base.
