There's a ton of variation out there in how teams set up the pipeline from "edit code" to "live in production". I want to talk about my ideal, to use as a reference point in further discussion.
TL;DR: When a change is pushed to master, it is proven ready for production.
"pushed to master" is equivalent to "makes it off a development machine".
It's common in Git to make multiple commits locally before pushing them up to the official repository. I am fine with those local commits not all passing tests. It's the "push" or "merge" that matters.
I take the term "master" from popular Git usage, but that's not important - it could be "trunk" or "main" or whatever.
"Proven" here can mean a bunch of things. Obviously, it includes passing unit tests. It also includes compilation, so I will lean on the compiler. It also includes static analysis, which I will extend to eliminate classes of bugs.
It's important that this "proving" process be super fast, so that I never hesitate to run it. If it's slow, I'll want to separate the slow and fast parts, and require only the fast parts to be run on every change. The slow parts might run every few changes, or every night, or whatever - which means I no longer know that master is always ready for production. So I look for ways to make it all super fast.
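To make the fast/slow split concrete, here's a minimal sketch of how I might tag tests. This is my own illustration, assuming pytest and a marker I've chosen to call "slow"; nothing here prescribes a particular tool.

    # Sketch: tag expensive tests so the per-change gate can exclude them.
    # Per-change gate:  pytest -m "not slow"
    # Nightly run:      pytest -m "slow"
    # (Register the "slow" marker in pytest.ini to avoid an unknown-marker warning.)
    import time
    import pytest

    def add(a, b):
        return a + b  # stand-in for real production code

    def test_add_runs_on_every_push():
        assert add(2, 3) == 5

    @pytest.mark.slow
    def test_simulated_expensive_scenario():
        time.sleep(5)  # stands in for a multi-minute scenario with real data
        assert add(2, 3) == 5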
Sometimes a bug will slip through, and be caught by manual testing, or production monitoring, or by a customer. When this happens, I look for some way I can improve my "proving" to eliminate this class of bugs forever. In this way, over time my confidence in "ready for production" steadily grows.
Some teams have an "in development" branch, where changes can go before master, so that they can be shared between developers even if they're not production ready. In my ideal model, I don't need that. I use vertical slicing, safe refactoring, feature flags, etc. to be able to commit my changes quickly. My branches are short-lived. If my changes pass tests, I push them to master, and I'm done.
Some teams have an "in test" branch, where they'll take a snapshot of what's in master, and then run a testing pass before going to production (with some iteration for making additional fixes). In my ideal model, I don't need that. If my changes pass tests, I push them to master, and they're ready for production.
Ideally, there's an automated system that runs these builds + tests against proposed changes and then pushes them to master if they pass. In TFS they call this "gated checkin"; some people call it "Continuous Integration". The important thing is that you know for sure that master is always green - the validation always passes.
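The gate doesn't have to be fancy. As a rough local sketch - my own, not how TFS or any particular CI server implements it, and assuming the pytest setup above - run the validation, and only push if it all passes:

    # Bare-bones gate sketch: push only when the fast validation passes.
    # Real gated check-in does this on the server; this just illustrates the idea.
    import subprocess
    import sys

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    validation = [
        ["python", "-m", "compileall", "-q", "."],     # lean on the "compiler"
        ["python", "-m", "pytest", "-m", "not slow"],   # fast tests only
    ]

    if all(run(cmd) for cmd in validation):
        sys.exit(0 if run(["git", "push", "origin", "master"]) else 1)

    print("Validation failed; not pushing.")
    sys.exit(1)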
I want to reinforce the point that this is an ideal. I don't expect you to get there tomorrow. But I do want you to agree that this is both valuable and feasible, and start working towards this ideal today. Each step you take in this direction will make things a little better. You'll get there eventually.
And don't do something irresponsible like delete all your integrated tests, or fire your QA staff. Start moving towards this ideal, but keep your old process around until you can demonstrate that it is no longer giving you value.
Sunday, December 6, 2015
Types of integration/integrated test
I've noticed that people often use the terms "integration test" and "integrated test" interchangeably.
And when I look at the kinds of tests they're talking about, I see a bunch of different things. Each of these things is worth considering separately, but we lack crisp terminology for them. (I've touched on this before.)
1. Testing class A through B
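A minimal sketch of this shape (Python, with invented names): the test only talks to A, but A's behavior comes from B, so B ends up under test too.

    # "Testing A through B": the test only drives A, but A builds and uses
    # its own B, so B's logic is implicitly under test as well.
    class B:
        def tax_rate(self, region):
            return {"WA": 0.25, "OR": 0.0}[region]  # rates are made up

    class A:
        def __init__(self):
            self.b = B()  # A constructs its own collaborator

        def total(self, price, region):
            return price * (1 + self.b.tax_rate(region))

    def test_total_through_b():
        assert A().total(100, "WA") == 125  # B's tax table is implicitly exercised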
2. Testing class A, but B is incidentally along for the ride
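A minimal sketch (again with invented names): the assertion is entirely about A, but B gets executed because A happens to call it.

    # "B incidentally along for the ride": the test is about A's arithmetic;
    # B (imagine logging, metrics, caching) runs too, but nothing is asserted on it.
    class B:
        def log(self, message):
            pass  # present, executed, irrelevant to the test's intent

    class A:
        def __init__(self):
            self.b = B()

        def double(self, x):
            self.b.log(f"doubling {x}")
            return x * 2

    def test_double():
        assert A().double(21) == 42  # only A's behavior is what I mean to test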
3. I have tested classes A and B separately, but now I want to test that they work together.
That is, that they integrate correctly.
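A minimal sketch: each class has its own unit tests elsewhere; this test only checks the contract between them.

    # A deliberate collaboration test: A and B are each unit tested on their
    # own; this test checks only that they agree on the contract between them
    # (here, that money flows through as cents, not dollars).
    class Formatter:                      # "B"
        def money(self, cents):
            return f"${cents / 100:.2f}"

    class ReceiptPrinter:                 # "A"
        def __init__(self, formatter):
            self.formatter = formatter

        def line(self, cents):
            return f"TOTAL {self.formatter.money(cents)}"

    def test_printer_and_formatter_agree_on_units():
        # Would fail if A started passing dollars while B still expects cents.
        assert ReceiptPrinter(Formatter()).line(1999) == "TOTAL $19.99"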
4. My business logic is testable in isolation, but then I have an adapter for each external system; I test these adapters against the real external system. I call this a focused integration test, and it happens when I use Ports/Adapters/Simulators.
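A minimal sketch of that shape (invented names): the simulator and the real adapter share one contract check, and only the run against the real adapter is the focused integration test.

    # Ports/Adapters/Simulators sketch. The simulator backs fast unit tests;
    # the focused integration test runs the same contract against the real
    # adapter and the real external system.
    class InMemoryUserStore:              # simulator
        def __init__(self):
            self._users = {}

        def save(self, user_id, name):
            self._users[user_id] = name

        def load(self, user_id):
            return self._users.get(user_id)

    def check_user_store_contract(store):
        store.save("u1", "Ada")
        assert store.load("u1") == "Ada"
        assert store.load("missing") is None

    def test_simulator_honors_contract():
        check_user_store_contract(InMemoryUserStore())

    # The focused integration test would run the same contract against the
    # real thing, e.g. check_user_store_contract(SqlUserStore(connection)),
    # where SqlUserStore and its connection are hypothetical here.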
5. I have unit tested bits of my system to some degree, but I don't have confidence that it's ready to ship until I run it in a real(ish) environment with real(ish) load.
6. I am responsible for one service; you are responsible for another; our customers only care that they work together. We deploy our services to an integration environment, and run end-to-end tests there.
Every "Extract Method" starts with minus 1 points
Eric Gunnerson once wrote about the idea that, in programming language design, every potential language feature starts with "minus 100 points":
Every feature starts out in the hole by 100 points, which means that it has to have a significant net positive effect on the overall package for it to make it into the language. Some features are okay features for a language to have, they just aren't quite good enough to make it into the language.

Once a feature makes it in to a programming language, it's in there forever. If you later realize it could have been better if done a little differently, you're stuck. Features tend to join to create combinatoric complexity, so each feature you add now means potentially big costs down the line.
When refactoring, I say "Every 'Extract Method' starts with minus 1 points".
The default negative reflects the cost of looking in two places to understand your program, where previously everything was in one place. The extracted method has to provide some additional value to justify its existence.
If the new method lets you eliminate duplication, add points.
If the new method is poorly named (worse than good / accurate / honest), subtract points. If the name more clearly expresses intent, add points.
If the calling method is now easier to follow, add points.
It's not a very high bar, but if you can't get to positive territory before merging to master, throw away the refactoring.
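To make the scoring concrete, here's a small made-up example (mine, in Python, with invented names). The extraction starts at minus 1 because there's now a second place to look, then earns points back by removing duplication and naming the intent:

    # Before: the member-discount rule appears twice, inline.
    def invoice_total_before(items, is_member):
        total = sum(items)
        if is_member and total > 100:
            total -= 10
        return total

    def quote_total_before(items, is_member):
        total = sum(items)
        if is_member and total > 100:
            total -= 10
        return total

    # After: one Extract Method. It starts at minus 1 point (one more place
    # to look), then earns points back by removing the duplication and by
    # naming the business rule.
    def apply_member_discount(total, is_member):
        if is_member and total > 100:
            return total - 10
        return total

    def invoice_total(items, is_member):
        return apply_member_discount(sum(items), is_member)

    def quote_total(items, is_member):
        return apply_member_discount(sum(items), is_member)

    assert invoice_total([60, 60], True) == 110
    assert quote_total([40, 40], True) == 80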