Monday, February 20, 2017

Test-as-spec and assertion syntax



I like to say that tests should, first and foremost, be a human-readable spec. Let's look at what that can mean for how we write assertions.

Suppose we're writing a card game, and we want to assert that a deck of cards is sorted the way you'd find them when you first open the box. (I'm using this simple example as a proxy for the kinds of more complex problems that we see in legacy code. It's up to you to map these ideas to that context.)

An approach I see in a lot of code is to iterate over the cards, asserting on each one. Perhaps something like:
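
For illustration, suppose a minimal Card struct, and take "new deck order" to mean grouped by suit with ranks ascending within each suit (both are my assumptions, just to make the sketch concrete):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Card is a stand-in type for this illustration.
    struct Card { int suit; int rank; };

    void AssertNewDeckOrder(const std::vector<Card>& cards)
    {
        for (std::size_t i = 1; i < cards.size(); i++)
        {
            // Bool-only assertion: each card must follow its predecessor.
            assert(cards[i - 1].suit < cards[i].suit
                || (cards[i - 1].suit == cards[i].suit
                    && cards[i - 1].rank < cards[i].rank));
        }
    }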


This kind of code makes it obvious that an AssertEquals would be valuable, so that on failure you can see the expected and actual values in the test results.
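
For illustration, suppose an AssertEquals that reports both values, plus a newDeckOrder table of expected cards (both hypothetical names):

    // AssertEquals prints expected vs. actual when they differ.
    for (std::size_t i = 0; i < cards.size(); i++)
    {
        AssertEquals(newDeckOrder[i], cards[i]);
    }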

If this test fails, you only know about one incorrect card. If there are more, you won't know until you fix the current error and rerun the test.

A richer assertion library might offer AssertSorted. It could even take a set of one or more sort-key selectors. The result might look like:
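
Sketching it with a hypothetical AssertSorted that takes the collection plus one key-selector lambda per sort key:

    // AssertSorted checks ordering by suit, then by rank, and could
    // report every out-of-order card on failure, not just the first.
    AssertSorted(cards,
        [](const Card& c) { return c.suit; },
        [](const Card& c) { return c.rank; });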


(That's C++ lambda syntax, if you haven't seen it before).

Both of these approaches are "computer science" solutions - they work in the solution domain, and use the language of computer code. If I want my test to be a human-readable spec, I need to use the language of the problem domain. I could take a step in that direction by extracting a method, giving:
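
Say, with a helper name invented for illustration:

    // The loop from the first example moves behind a domain-language name.
    AssertIsInNewDeckOrder(cards);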


But we're also doing TDD. In TDD, we want the tests to give us feedback about the design of the code. And this test is saying "the notion of being sorted is missing from the code under test". Taking an intuitive leap, the class that should hold that notion is a "deck of cards", which is also missing from the code under test. That leads to:
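
Perhaps something like this sketch of the missing Deck class (reusing the hypothetical Card struct from above):

    // A Deck class that owns the notion of new-deck order.
    class Deck
    {
    public:
        bool IsInNewDeckOrder() const
        {
            for (std::size_t i = 1; i < cards.size(); i++)
            {
                if (cards[i - 1].suit > cards[i].suit) return false;
                if (cards[i - 1].suit == cards[i].suit
                    && cards[i - 1].rank >= cards[i].rank) return false;
            }
            return true;
        }

    private:
        std::vector<Card> cards;
    };

    // The test reads as a one-line, problem-domain statement:
    assert(deck.IsInNewDeckOrder());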


I like the improvements to the design of the code and the way the test reads, but I am sad to lose the ability to provide a detailed report when this assertion fails. I'm not sure how I would fix that, or if it would ever actually matter.

It's interesting to me that we're back to the bool-only assertion from the first example.

Saturday, February 18, 2017

Micro-ATDD

I strive to make all my tests both microtests and acceptance tests, an idea I learned from Arlo Belshee.

When I say this to people, they are usually confused first, then doubtful when I explain. I don't think I'm ready to address the doubt, but maybe I can address the confusion today.

Microtests

Coined by GeePaw Hill (see his article for his original definition), a microtest is like a unit test, but it has all the qualities I wish all unit tests had. It's fast and focused. It answers the question "does this little piece of code do what I intend it to do?"

Because microtests talk directly to the system under test, they are written in terms of the SUT.

It's obvious that a microtest can only be used on parts of the code that are simple and decoupled and isolated. An integration test is never a microtest.

Acceptance Tests

An acceptance test describes expected software behaviors from the point of view of a user or other stakeholder. It answers the question "does this system meet the requirements I expect it to meet?"

Because acceptance tests are written in conversation with that user, they are written in the language of that user. They are organized like a spec.

My ideal tests

I want tests that hold to all of the above. My ideal tests are super fast, 100% reliable, simple, isolated, written in the language of the user, and easy to read. (This requires the SUT to be decoupled, cohesive, well-named, and DRY - properties I already want.)

Every test is both an acceptance test and a microtest.

The doubt

The usual objection I hear is "while isolated unit tests tell you about each of the little pieces, you still need some kind of integration test to confirm that all the parts work when you put them together." 

Well, that's true for most programs, but it's only necessary because of how your code is organized. "parts work when you put them together" means "the desired behaviors of the program (the acceptance criteria) are emergent properties of the system". But we know how to refactor. If two parts of the system need to work together, we can put them together in the code, and then use a microtest to assert that desired behavior.
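
For illustration (Parse, Format, and RoundTrip are invented names, not from any particular codebase): instead of wiring two parts together at runtime and covering the seam with an integration test, put the collaboration in one function and microtest it directly:

    #include <cassert>
    #include <string>

    // Two small parts that must work together.
    int Parse(const std::string& s) { return std::stoi(s); }
    std::string Format(int n) { return std::to_string(n); }

    // Put the collaboration in the code itself...
    std::string RoundTrip(const std::string& s) { return Format(Parse(s)); }

    // ...so a plain microtest can assert the combined behavior directly.
    void TestRoundTripPreservesInput()
    {
        assert(RoundTrip("42") == "42");
    }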

Friday, February 17, 2017

AONW2017: Amazing Distributed Teams

My new job involves teams that are distributed over a bunch of locations, mostly on the West Coast of the USA. Each team has people at multiple locations.

I went to Agile Open NorthWest 2017 with the question "How can we make distributed teams awesome?" Here's what we came up with:

There are a bunch of known good practices to help distributed teams not suck too much. Doing them won't get us to "awesome", but at least we can get up to "not sucky". So let's start by writing down these practices:
  • Communicate a lot
  • Don't let remote people be 2nd-class. Make everyone equally remote, even if some are in the same building.
    • Chat room for all communication, even within an office
  • Experienced people who don't need to learn as much are at less of a disadvantage when remote*
  • Because remote pairing is more tiring, be deliberate about taking breaks. 
    • use Pomodoro
  • Do your homework before coming to meetings, so you don't need to rely as much on awkward VTC communication
  • Show up to meetings on time
  • Don't let people get blocked on questions. If someone raises a question in chat, don't leave them hanging.
  • Have the whole team mob for 1 hour to start the day, to get alignment on the day's work
  • Synchronize time of work
  • Meet face-to-face regularly. Have a budget to bring people together.
  • Many companies save money by having people work from home. Direct some of that savings to equipment, travel, etc. 
  • Consider paying out-of-pocket to make remote more awesome, and then ask for reimbursement if it helps.
  • Remember that patience online is short, and accommodate that fact.
  • Create team agreements - they're at least as important as for teams that sit together
  • Recognize the expression of Conway's Law: limited communications affect software architecture
  • Telepresence robots can help. Make sure they are human size (short robots get treated like children)
  • Retro often, with a relentless focus on the things that make remoting difficult
  • Build the team
    • Play games together remotely (poker, Halo)
    • Friday beers in VTC
Yet-unsolved problems that generally make remote work suck:
  • There is no good remote whiteboard
  • Estimating remotely is particularly bad (plug for #NoEstimates)
  • When tools/tech stop working right when we need them, we have a bad time
  • It's hard to influence the org beyond the team / hard to influence culture
And then we looked at advantages to remote work / distributed teams - how they can be better than teams that sit together:
  • 50 people can write on a Google Doc at once, while only a couple people can write on a whiteboard at once.
  • Better ergonomics are possible. No crowding around a single screen.
  • Remote breaks are real breaks. When you step away, no one can reach you. Go outside!
  • You have access to a broader pool of talent.
  • It increases diversity, even compared to the same people sitting together
  • Enables a 24-hour development cycle
  • Can accommodate people in new ways. For example, a person with a partial hearing loss can turn up their headphone volume, instead of asking everyone to remember to speak up. 
  • Can accommodate varying communication styles
We measure how awesome a team is with two questions:
  1. Are we delivering (steadily increasing) value?
  2. Are people happy?
*I don't think this is true, but it came up in the session, so I put it here in the list.