Unagile Testing

Friday, November 18th, 2005

Teaching commitments in New York prevented me from sticking around to hear most of the talks at STARWest this week after my own sessions were over. However, I did listen to Wednesday’s keynotes from James Whittaker and Julie Gardiner. Reflecting on them, it struck me that while today’s programmers may be test-infected, testers aren’t yet agile-infected.

Whittaker’s keynote made the sensible point that we need to investigate our bugs to see what we’re doing wrong. However, he seems to believe that the only reason to do this is so testers can more easily find bugs. He holds out no hope that programmers can find the bugs themselves, or avoid putting them into the code in the first place. Because all software is buggy, he thinks no methodology is worth anything. He claims we don’t know how to write perfect code, and therefore we can only hope to detect the bugs after the fact.

Certainly all software has bugs, but not all software is created equal. Some software is demonstrably better than other software, and we can learn good techniques from it, even if it still has a few bugs in it for testers to find. He’s really suffering from the binary fallacy. The question is not whether software is perfect or imperfect. It’s a quantitative question of how imperfect any given piece of software is. We can strive for perfect software, but we can only achieve better or worse software. Still, we can achieve that, and it’s worth studying and striving for. (In a brief hallway conversation after the talk, James did hint that he was deliberately overstating his point to pump up the audience, but there were no such caveats in the keynote itself, even when I raised these points in Q&A.)

Julie Gardiner delivered the second keynote, “Why ‘Risk’ is a Tester’s Favorite Four-Letter Word.” I don’t have a big background in test theory, and I haven’t read a lot of books about testing (she claims to own over 60 and to have read most of them), so I may not have fully understood everything she said. However, it seemed to me that she was saying risk depends on time spent testing: the more time spent on after-the-fact testing, the more confidence you have that the software is bug free. You decide when to release based on how much risk you are willing to accept.

There’s something to that, but it’s not how I work, it’s not how I see companies like Microsoft working, and it’s certainly not agile. In agile development, the software is tested and the bugs are found much sooner. An agile team normally has a very good handle on how many bugs it has, and it operates with very high confidence that few bugs remain undiscovered. This is because testing is done simultaneously with coding rather than after it.

As far as releases go, a really agile team may release every two weeks. If there are undiscovered bugs, they can be fixed and the fix rolled out very quickly. In the world of shrink-wrapped software such fast release cycles may not be possible. There, though, the decision to ship is normally driven by two questions:

1. Are sufficient features implemented to justify the release?
2. Is any known bug, or combination of bugs, significant enough to delay the release?

The possibility of unknown bugs just doesn’t enter into it.

Note that I am not saying there’s no place for testers on an agile team. It’s just that I think the testers need to be brought in much earlier. Testing needs to be simultaneous with development, not subsequent to it.
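To make concrete what “simultaneous” can mean in practice, here is a minimal sketch in Python (the function and test names are my own hypothetical illustration, not anything from either keynote): the test is written in the same file, at the same time as the code it exercises, and runs on every change, so an undiscovered bug has very little time to hide.

```python
# Hypothetical example of testing alongside development:
# the test is written with the function, not after release.

def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The test lives right next to the code it exercises.
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("two words") == 2
    assert word_count("  padded   input  ") == 2

if __name__ == "__main__":
    test_word_count()
    print("all tests passed")
```

Run with the code on every change (or under a test runner such as pytest), a check like this gives the team its handle on how many bugs remain, rather than deferring that question to a separate after-the-fact testing phase.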