Saturday, March 8, 2014

Stages of TDD

I've often heard people say this sort of thing about TDD:
I really believe in the importance of TDD. It helps me catch stupid bugs right away, instead of waiting for testers or customers to report them. I especially like that it catches any regressions.
TDD lets me refactor safely. Also, writing tests first helps me think about my design from the caller's perspective.
I still need a comprehensive suite of end-to-end tests to make sure the whole system works. I am concerned that, since I'm writing both the code and the tests, my blind spots will appear in both, allowing bugs to slip through. 
When requirements change, or we refactor a subsystem, a thousand tests break. Then we have to spend a lot of time fixing them.
TDD is expensive, but it's worth it.  
While others say something like:
TDD is not a testing activity; it's a design activity. "Test" is a misnomer. With TDD I write better code, faster. 
I don't need a bug database. My whole team only has a couple bugs per month. We do root cause analysis on every bug.
From the first perspective, the second sounds weird. It's hard to believe it's even possible. Maybe it only works in that specific context.

From the second perspective, the first sounds backwards and unenlightened. Why don't they get it?

I now believe that learning TDD takes time and moves through stages. From the early stages, it's hard to fully understand the later stages. From the later stages, you wonder why you ever wasted your time in the early stages.

The stages that I see are:
  1. TDD is about testing. I write tests to prove that my code works. Thanks to the safety afforded by those tests, I refactor more.
  2. I commit to 100% TDD. Every feature and every bug fix will be represented in tests. I immediately discover how hard this is, which leads me to mocks, dependency injection, marking methods as "public" for testing, etc.
  3. Tests give me feedback. I see "hard to test" as a code smell. I refactor to make things easier to test. Tests get easier and code gets cleaner. (There's a sketch of this move right after the list.)
  4. My tests are always easy to write, easy to read, and super fast. Code is clean. Every idea has a single canonical location. There is no duplication. Classes have great names. Cohesion is high and coupling is low.

    There are no bug farms, no part of the code I'm afraid to touch. Working in this code is inherently low risk. Since I rarely create bugs, testing for correctness is rarely fruitful. So yes, tests got me here, but not by catching bugs.
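To make stages 2 and 3 concrete, here's a minimal sketch in Python, with hypothetical names (not from any real project): the service takes its collaborator as a constructor argument, so a test can hand it a fake instead of the real thing. That's the "hard to test is a smell" move in miniature.

    # A minimal sketch (hypothetical names) of the stage-2/3 move from
    # "hard to test" to "easy to test" via dependency injection.

    import unittest


    class SmtpEmailer:
        """Stand-in for a real emailer that would talk to the network."""
        def send(self, to: str, body: str) -> None:
            raise RuntimeError("would send real email; don't call in tests")


    class WelcomeService:
        # Stage 2 pain: if this class constructed SmtpEmailer itself,
        # every test would hit the network. Injecting the dependency
        # (stage 3) makes the seam explicit and the test trivial.
        def __init__(self, emailer) -> None:
            self.emailer = emailer

        def welcome(self, user_email: str) -> None:
            self.emailer.send(user_email, "Welcome aboard!")


    class FakeEmailer:
        """Test double: records calls instead of sending anything."""
        def __init__(self) -> None:
            self.sent = []

        def send(self, to: str, body: str) -> None:
            self.sent.append((to, body))


    class WelcomeServiceTest(unittest.TestCase):
        def test_sends_welcome_message(self):
            fake = FakeEmailer()
            WelcomeService(fake).welcome("ada@example.com")
            self.assertEqual(fake.sent, [("ada@example.com", "Welcome aboard!")])


    if __name__ == "__main__":
        unittest.main()

The point isn't the fake itself; it's that the design now has an explicit seam, which is what makes the test easy to write and fast to run.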

I found #4 very hard to grasp a year ago. I'm not sure if seeing this roadmap would have helped. Perhaps the right way to begin is just to focus on #1: write tests to catch bugs. Make developers responsible for proving correctness of their work. Don't count a feature as "done" until the bugs are gone. Build from there.

There may be an additional stage in the middle. If you know it, tell me and I'll fill it in.

I'm really curious if there's a stage 5. Something I'm blind to. Got any?

5 comments:

alvaro said...

I've got a few for #5:

* Refactor to patterns
* Cleaner code
* Other (deeper) smells
* Realising that software is not about TDD. Releasing the dogma. Unlearning the learnt part. Now there are other tools in my toolbox: property-based testing, design up front, simple design, etc.

What do you think?

Yawar said...

'Testing shows the presence, not the absence of bugs' -Edsger Dijkstra ( https://en.wikiquote.org/wiki/Edsger_W._Dijkstra )

For me testing is about catching regression bugs. Once I've shipped something, I want to make sure I don't mess it up later. I think branch coverage is a handy metric for that: good branch coverage shows that I thought about all the corner cases in my code. It also works well with certain types of mutation testing that check that your tests really would catch regressions.
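As a rough illustration of the mutation idea (a hand-rolled sketch with made-up code, not the output of any particular tool): the tool makes a tiny change to the code and then expects at least one test to fail. If no test fails, the mutant "survives" and the suite has a gap.

    # Hand-rolled sketch of the mutation-testing idea (hypothetical code,
    # not any particular tool): mutate the code, then expect a test to fail.

    def is_adult(age: int) -> bool:
        return age >= 18          # original

    def is_adult_mutant(age: int) -> bool:
        return age > 18           # mutation: ">=" became ">"

    def test_boundary(fn) -> bool:
        # A test that exercises the boundary will kill this mutant.
        return fn(18) is True

    assert test_boundary(is_adult)              # passes against the original
    assert not test_boundary(is_adult_mutant)   # fails against the mutant: mutant killed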

episiarch said...

I hope by the time you reach stage #5 you've begun to think in terms of systems rather than in terms of language methods. That way you'll have given up thinking of software correctness as linear execution flows that can be easily unit-tested, and you'll be asking yourself:

How do I profile the behavior and performance of my whole microservice ecosystem, and what race conditions are hidden behind seemingly-sensible TDD-approved logic?

What are the limitations of exceptions, and how are errors bubbled up to the appropriate observer with context across my whole system?

How do I describe and test behaviors that span multiple hosts with different implied contracts?

How do I properly test code in X different microservices with different requirements that calls into a complicated Postgres query with non-deterministic behavior under load?

Michael R. Wolf said...

I just found this post (again) and shared it with the Hardware/Software team of my new employer.

First... congrats on this well-written blog post. Even though it is 6 years old, it has stood the test of time enough to float up in a Google search. Well done. Bravo, Jay.

Second... you closed this post 6 years ago asking if people have noticed a 5th stage. I'd be curious to learn what you have experienced.

Jay Bazuzi said...

Thanks Michael, I'm still pretty happy with it.

One thing I've learned is that "My whole team only has a couple bugs per month" might be insufficiently bold. Some teams have way fewer bugs than that.

I've also learned (from James Shore) that giving attention to "Code Coverage" impedes the transition from #2 to #3 (even though it sounds like a good tool for achieving #2).