
TestDrivenDevelopment

Also called Test First Programming.

  1. Think about what you want to do.

  2. Think about how to test it.

  3. Write a small test. Think about the desired API.

  4. Write just enough code to fail the test.

  5. Run and watch the test fail. (The test-runner, if you're using JUnit, shows the "Red Bar"). Now you know that your test is going to be executed.

  6. Write just enough code to pass the test (and pass all your previous tests).

  7. Run and watch all of the tests pass. (The test-runner, if you're using JUnit, shows the "Green Bar".) If a test doesn't pass, you did something wrong; fix it now, since it has got to be something you just wrote.

  8. If you have any duplicate logic, or inexpressive code, refactor to remove duplication and increase expressiveness -- this includes reducing coupling and increasing cohesion.

  9. Run the tests again, you should still have the Green Bar. If you get the Red Bar, then you made a mistake in your refactoring. Fix it now and re-run.

  10. Repeat steps 1 through 9 until you can't find any more tests that drive writing new code.
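The cycle above might be sketched in Java like this. The class and method names here are invented for illustration, and a plain assertion helper stands in for JUnit's so the sketch is self-contained:

```java
// One pass through the cycle above (illustrative names, not from any book or library).
class Greeter {
    // Step 6: just enough code to pass the test below.
    String sayHello(String name) {
        return "Hello, " + name;
    }
}

public class GreeterTest {
    // A stand-in for JUnit's assertEquals, so the sketch runs on its own.
    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            // Step 5: the "Red Bar" -- the test ran and failed.
            throw new AssertionError("expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        // Step 3: a small test, written first, that pins down the desired API.
        assertEquals("Hello, Laurent", new Greeter().sayHello("Laurent"));
        // Step 7: the "Green Bar" -- all tests pass.
        System.out.println("Green Bar");
    }
}
```

To see the Red Bar of step 5, run the test before writing the body of sayHello() (for example, with it returning an empty string).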

See also http://groups.yahoo.com/group/testdrivendevelopment/

Drafts of Kent's book on this subject are in the Files section of that site.

There are several examples on the web, including simple ones like a converter to Roman numerals, prime-factors generator, etc., and even user-interface code in Java.


One thing our anonymous author left out is something at the meta-level.
  1. Keep a record of how many times you run a new test that you thought was sure to work (because you just changed the minimum amount) and it didn't work. This record tells you a lot about you, and a lot about the chances that you've missed something when you've finished running all your tests. - JerryWeinberg 2002.07.04

Most often, in fact, almost always, when a test that I wrote fails to pass after I wrote the code to make the test pass, the test is at fault. --KeithRay
I've been reading Kent Beck's draft chapters, and find some ideas that are both appealing and vaguely unsettling. He describes a development cycle that is very intentional. Thinking about your APIs in terms of how to test them (steps 2 and 3) is a good way to ground yourself and prevent flights of fancy that leave messy APIs and unused, untested code in their wakes. On the whole, I think the steps he lays out would be a significant win for many developers.

What bothers me is step 6, which moves from having a failing test to one that works. In the manuscript, this is a two-part step. The advice is to fake it at first, if necessary, such that the tests pass without the code necessarily working correctly. And then step 6' is to make it really work. This bothers me. It seems risky to move through an interim step where the tests pass, but the code doesn't work. One big interruption on a Friday afternoon, and when you next sit down on Monday morning, you risk forgetting about step 6', leaving problems that will later surface to bite you in integration tests. To be fair, Beck recommends keeping a separate checklist, but mine always tend to get out of sync.

DaveSmith, 8 Aug 2002


Don't forget that "Fake It" is but one technique at step 6, and Kent provides another. I'll get back to that, but I'd like to address the concern voiced.

Leaving problems that will later surface to bite you in integration tests : TDD is a development technique. Some specific risks might inhere in it, such as the production of acceptance tests lagging behind the production of unit tests to such an extent that the "goal gap" problem you outline here could cause defects. Reducing such risks is a matter of project management. In XP, a project management discipline, this particular risk is addressed by insisting on acceptance tests being provided on a very short cycle, as unit tests are; "later" in XP will be on the order of days, and at most two weeks.

However, "fake it" is part and parcel of how TDD turns development into an "intentional" process as you put it. The significance of "fake it", as I understand it, is this : it causes you to be perpetually on the alert for just how easy it is to fool your tests into giving your code a thumbs-up. And, consequently, how easy it is to fool yourself.

To quote Ron Jeffries, we want clean and correct code. He then defines "clean" and "correct" in very simple terms - clean means no duplication, correct means all the tests pass. This transforms the problem of coding into one of writing the necessary and sufficient tests to define the functionality we want. "Fake it" is a technique which probes the limits of "necessary and sufficient". If you will, it's a way of testing the tests.

To put this into concrete terms, suppose I have just written a test which compares a String returned by a method with a constant : "Hello Laurent !". The method sayHello() takes a String parameter. The task before me is to implement sayHello(). Kent identifies two ways to approach the implementation : "Fake It" and "Obvious Implementation". I will use "Obvious Implementation" when I bloody well know that I intended to pass the name as a parameter, and will implement as return "Hello "+name+" !";.

I will use "Fake It" when there is the slightest doubt as to what I meant by sayHello() - wait a minute, did I want the salutation or the name passed in as the parameter ? Or was it both ? In that case I will return the same constant that the test expects, then explore the relevant variation by writing another test : I'll compare with "Hello David !", for instance.
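In Java, that sequence might play out like this (a sketch; the Saluter class and its method body are illustrative, not Kent's or Laurent's actual code). The first test alone would be satisfied by a faked constant; the second test triangulates and forces the real implementation:

```java
// "Fake It", then triangulate with a second test (illustrative code).
class Saluter {
    String sayHello(String name) {
        // First pass ("Fake It"): return "Hello Laurent !";
        // That constant satisfies the first test -- and shows how easily tests are fooled.
        // The second test below forces this generalization:
        return "Hello " + name + " !";
    }
}

public class SaluterTest {
    public static void main(String[] args) {
        Saluter s = new Saluter();
        // Test 1: with only this test, the faked constant would show a Green Bar.
        if (!s.sayHello("Laurent").equals("Hello Laurent !"))
            throw new AssertionError("Red Bar on test 1");
        // Test 2: triangulation -- the fake can no longer pass; real code must.
        if (!s.sayHello("David").equals("Hello David !"))
            throw new AssertionError("Red Bar on test 2");
        System.out.println("Green Bar");
    }
}
```

The second test is what makes "Fake It" a way of testing the tests: until it exists, the tests are not yet necessary and sufficient to pin down the behavior we meant.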

The best way to see how it plays out, of course, isn't to read about a trivial example which I made trivial for the purposes of exposition - it is to try it in your own work for an hour or a day or a week. In trivial examples we always know all there is to know, we are never caught off our guard. In our own work we are often caught off our guard, in our own specific and quirky ways.

LaurentBossavit 2002.08.08


The significance of "fake it", as I understand it, is this : it causes you to be perpetually on the alert for just how easy it is to fool your tests into giving your code a thumbs-up. And, consequently, how easy it is to fool yourself.

That's the crux of my discomfort. I suspect that achieving an early "green bar" (all tests passing) will lull some people into a false sense of security, rather than causing them to be perpetually alert. A red bar creates a tension that needs to be resolved. I would much rather see that resolution be working code, rather than an interim fake.

The best way to see how it plays out, of course, isn't to read about a trivial example which I made trivial for the purposes of exposition - it is to try it in your own work for a hour or a day or a week.

I've been doing Test First development for a few years now (when I get to write code). Most of what Beck lays out is right on, except the "fake it" advice.

By the way, TDD isn't tied to XP, though they're compatible. Beck is reaching out to a larger audience with this book.

DaveSmith 8 Aug 2002


I think "faking it" may be less dangerous in any approach where the code is always open to more than one person, as in XP. If you use it in other methods, where one person "owns" code and doesn't show it to anyone else with any regularity, I wouldn't advise "faking it" as a technique.

Also, "test first" is dangerous with hidden code. Only if you know something about the clean way the code is to be designed and implemented can "test first" be comprehensive enough. When anything at all can be hidden in the code, test first can be a trap - with programmers writing to the test. In XP, however, the rest of the structure prevents this. You cannot just adopt test first willy-nilly - but in the hands of a competent and conscientious programmer like DaveSmith, it can work well. It's just that there aren't too many of those around these days, and perhaps there never were. - JerryWeinberg 2002.08.08


Jerry -- Thank you for that gracious assessment.

Only if you know something about the clean way the code is to be designed and implemented can "test first" be comprehensive enough.

Test First Design, at least as Kent Beck lays it out, is actually a design technique on a micro level. It's subtle, but when one of the first questions you pose to yourself when facing a blank screen is "how am I going to test this?", you avoid all manner of design nonsense. Starting to code by actually writing an idealized test case first helps you avoid nonsensical APIs. After working this way for a few years now, I have less confidence in approaches that want a complete design before starting to code. You're right, though, that Test First doesn't guarantee a comprehensive set of tests. It's not a replacement for integration and acceptance tests, and one has to be conscientious about adding new tests as the need arises.

DaveSmith 08 Aug 2002


I've been doing Test First development for a few years now...

Dave, I can see how I might have given the impression of lecturing. My exhortation to "just try it" was limited to these aspects of TDD that you feel uncomfortable with. At AYE we will have an opportunity to reveal to each other more about how our respective test-first styles "feel" when we're actually doing test-first; to pair program for a while if we're interested in that. And an opportunity not to do that if we're not interested.

I appreciate and value your and Jerry's comments because they draw attention away from the technique itself, and more to its context : the appropriate way to write code is very dependent on how you think the code you write will be received, appreciated, and in turn enhanced by others on the team.

LaurentBossavit 2002.08.09


I didn't see any computers at AYE last year! I'll bring my laptop with me this year, in case anyone wants to pair program with me.

KeithRay 2002.08.09


They were there, but we keep them hidden so nobody sneaks powerpoint slides into the room. - JerryWeinberg 2002.08.09 --> WhyWeDoNotUsePowerPoint


Keith, I haven't written code in about 10 years. I can still read code, I think. If you're willing to take on a novice (which is something Williams and Kessler suggest in their new book, "Pair Programming Illuminated"), I'd love to try pair programming with you. I really like their book. Have you read it yet?

When I programmed, I was too impatient to do TDD, until I'd had about 7 years of experience. I think that's why I wasn't as good a programmer as I wanted to be. When I was a tester, I took more time to think about the tests before plunging in. As a project manager or manager, I take much more time to think than when I was a programmer. Maybe now that I'm used to thinking, I would be a better programmer. -- JohannaRothman 2002.08.16


"Pair Programming Illuminated" is on my to-do list (have to buy it first). I'll be happy to practice TDD and PP with JR --KeithRay


I took the words of two people (Roger and Robert, I think their names were) on the TDD mailing list, and created this dialog...

<http://groups.yahoo.com/group/testdrivendevelopment/message/1732>

KeithRay 2002.08.25


Updated: Sunday, August 25, 2002