People claimed that TDD was or was not effective in some way based on those results. To make matters worse, I see wide variation in the stated purpose of TDD. If we don't agree on the purpose, then we aren't measuring effectiveness the same way. For example:
- ensure correctness of new code
- prevent regressions due to future work
- point me directly at my mistake
- be fast enough to run often
- provide a safety net during refactoring
- be the only way to be sure my tests are comprehensive
- stop me from writing code I don't need
- make code coupling obvious
- make DRY problems visible
- support cohesion
- create the context for entering a Flow state
- provide regular rewards as I make progress
- give me confidence (possibly false!) that my program will work
- explain to another human what my code is intended to do
(I sometimes group these into Bugs, Design Feedback, Psychological Benefits, and Specification.)
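To make a couple of these purposes concrete — "ensure correctness of new code" and "point me directly at my mistake" — here is a minimal sketch of one red/green cycle. The function and its bug are hypothetical illustrations, not taken from the post:

```python
# Hypothetical red/green cycle: the tests below were "written first",
# and the commented-out line shows the first attempt they caught.

def is_leap_year(year):
    # First attempt (red): forgot the century rule.
    # return year % 4 == 0
    # Fixed after the failing assertion pointed directly at the mistake:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2024)        # ensure correctness of new code
assert not is_leap_year(1900)    # this case failed the first attempt
assert is_leap_year(2000)
assert not is_leap_year(2023)    # kept around to prevent regressions
print("all tests pass")
```

A failing assertion here names the exact input that broke, which is the "point me directly at my mistake" purpose; keeping the assertions around afterward is the "prevent regressions" purpose.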
If you start by practicing TDD a certain way and see it succeed at one of the above, you'll be tempted to argue that that is the "true purpose" of TDD.
If you start with a belief about the true purpose of TDD and select a practice that doesn't serve that purpose, you'll conclude that TDD doesn't work. (See We Tried Baseball...: http://ronjeffries.com/xprog/articles/jatbaseball/)
I say all this because I hope people will shift to an "all of the above" mindset, and adjust their understanding and practice to make that happen.