Tuesday, January 6, 2015

Why we Test: part 5: Two reasons for regression testing

In Part 1, I wrote: "I count on tests to catch mistakes before our customers do" and "Having tests means I can refactor safely".

In both cases, I want tests to catch my mistakes, but I now realize I should consider these separately.


In the first case I'm relying on the tests to confirm that I have written my code correctly, or that future functionality changes don't break previous functionality changes. I have written plenty of bugs, and I'm looking to the tests to tell me about them. I'm definitely going to keep writing new features, fixing old bugs, and shipping software.

When we decide to change old functionality, we'll want to change the old tests, so they have to be malleable to keep providing their value. They should also be readable and granular, so that when they fail I can decide whether to change the product or change the test.
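As a sketch of what "readable and granular" might mean in practice, here is a hypothetical example (the `order_total` function and its bulk-discount rule are invented for illustration): one rule per test, each named after the behavior it checks, so a failure points at exactly one functional change.

```python
import unittest

# Hypothetical product code: a 10% bulk discount kicks in at 10 items.
def order_total(quantity, unit_price):
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9
    return total

class OrderTotalTests(unittest.TestCase):
    # Granular: one behavior per test, named after that behavior.
    # If the discount rule changes on purpose, it's obvious which
    # tests must change with it.
    def test_no_discount_below_ten_items(self):
        self.assertEqual(order_total(9, 2.0), 18.0)

    def test_bulk_discount_at_ten_items(self):
        self.assertAlmostEqual(order_total(10, 2.0), 18.0)

if __name__ == "__main__":
    unittest.main()
```

When the discount threshold moves, the test names tell me which tests to rewrite and which product behavior I'm choosing to keep.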


In the second case, my decision about whether to refactor is heavily influenced by whether I have those tests. In a legacy (i.e. tightly coupled) system without good tests, most people will just leave things as-is instead of refactoring.

If I'm just looking for a safety net while I refactor, I can use Pinning Tests. These tests don't need to be malleable, since the product behavior is not changing. If they are fast, they don't need to be granular, since I can run them really often. They do need to be very reliable. They need to cover as many cases as I can manage, but only in the sections of code I'm touching. It's OK if the tests are ugly, since I'm just going to delete them at the end of my refactoring session (when my decoupled code is amenable to unit testing).
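A pinning test can be as crude as sweeping a batch of inputs through the code once, recording the outputs, and asserting they never change while I rewrite. A rough sketch, assuming a hypothetical legacy function `legacy_price` (the captured values come from the code as it stands, not from any spec):

```python
# Hypothetical legacy code I want to refactor.
def legacy_price(qty, member):
    p = qty * 3
    if member and qty > 5:
        p = p - qty
    return p

# Pin the current behavior: sweep a wide range of inputs once and
# record whatever the code does today. Ugly and exhaustive is fine;
# these pins get deleted when the refactoring session ends.
PINNED = {(q, m): legacy_price(q, m)
          for q in range(0, 20)
          for m in (False, True)}

def check_pins():
    # Run after every tiny rewrite; any behavior change fails fast.
    for (q, m), expected in PINNED.items():
        actual = legacy_price(q, m)
        assert actual == expected, f"behavior changed at {(q, m)}"

check_pins()
```

In real use I'd capture `PINNED` before touching the code (e.g. to a file), then rerun `check_pins` after each small change; the point is breadth and speed, not readability.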

(In this context, when I say "refactor", I don't mean "a highly disciplined process of changing code without changing behavior, according to a recipe", or "using a high-fidelity automated tool that will safely change code without changing behavior". You could say I mean "tiny rewrites.")
