In both cases, I want tests to catch my mistakes, but I now realize I should consider these separately.
Regression
In the first case, I'm relying on the tests to confirm that I have written my code correctly, and that future functionality changes don't break previous functionality. I have written plenty of bugs, and I'm looking to the tests to tell me about them. I'm definitely going to keep writing new features, fixing old bugs, and shipping software.

When we decide to change old functionality, we'll want to change the old tests, so they should be malleable to provide their value. They should be readable and granular, so when one fails I can decide whether to change the product or change the test.
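For instance, a granular regression test might look something like this sketch (using pytest and a hypothetical order_total function; the names and numbers here are illustrative, not from any real codebase):

```python
# A minimal sketch of granular regression tests, assuming a hypothetical
# order_total function. Each test pins one small rule, so a failure points
# straight at the behavior that broke.

def order_total(prices, discount_rate):
    """Sum the prices and apply a percentage discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_rate), 2)

def test_discount_applied_to_subtotal():
    # If this fails, I can read it quickly and decide whether to
    # change the product or change the test.
    assert order_total([10.00, 5.00], discount_rate=0.10) == 13.50

def test_no_discount_leaves_subtotal_unchanged():
    assert order_total([10.00, 5.00], discount_rate=0.0) == 15.00
```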
Pinning
In the second case, my decision about whether to refactor is heavily influenced by whether I have those tests. In a legacy (i.e. tightly coupled) system without good tests, most people will just leave things as-is instead of refactoring.

If I'm just looking for a safety net while I refactor, I can use Pinning Tests. These tests don't need to be malleable, since the product behavior is not changing. If they are fast, they don't need to be granular, since I can run them really often. They do need to be very reliable. They need to cover as many cases as I can manage, but only in the sections of code I'm touching. It's OK if the tests are ugly, since I'm just going to delete them at the end of my refactoring session (when my decoupled code is amenable to unit testing).
(In this context, when I say "refactor", I don't mean "a highly disciplined process of changing code without changing behavior, according to a recipe", or "using a high-fidelity automated tool that will safely change code without changing behavior". You could say I mean "tiny rewrites.")
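Here's a rough sketch of what a pinning test could look like, assuming a hypothetical legacy_price function standing in for the tightly coupled code; the snapshot file and input ranges are made up for illustration:

```python
# A rough sketch of a pinning (characterization) test. legacy_price is a
# stand-in for the real tightly coupled code I'm about to refactor; the
# snapshot file name and input ranges are assumptions, not anyone's real API.
import json
from pathlib import Path

def legacy_price(qty, tier):
    # Placeholder for the legacy behavior I want to preserve, warts and all.
    bonus = 5 if tier == "gold" else 0
    return qty * 3 + bonus

SNAPSHOT = Path("legacy_price_snapshot.json")

def capture_snapshot():
    # Run once before refactoring to record the current behavior
    # across as many cases as I can manage in the code I'm touching.
    results = {f"{qty},{tier}": legacy_price(qty, tier)
               for qty in range(500)
               for tier in ("standard", "gold", "none")}
    SNAPSHOT.write_text(json.dumps(results))

def test_behavior_is_unchanged():
    # Fast enough to run after every tiny rewrite; any difference
    # from the snapshot means I changed behavior, not just structure.
    if not SNAPSHOT.exists():
        capture_snapshot()
    expected = json.loads(SNAPSHOT.read_text())
    for key, value in expected.items():
        qty, tier = key.split(",")
        assert legacy_price(int(qty), tier) == value, key
```

A test like this is ugly and disposable on purpose: once the refactoring session is done and the decoupled code has proper unit tests, I delete it along with the snapshot.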