
PervasiveTesting

From AgilePractices


I no longer use a debugger much.

When I'm coding and make a mistake, my program's behaviour is affected: the program no longer does what it's supposed to do. I can become aware of that by executing part of the program's features - the parts which my mistake affects.

Previously, my development method was to write a bunch of code, at the same time running my program from time to time to check on the feature that I was writing. As a result, I was usually able to quickly discover those of my mistakes which created problems in the feature I was developing.

But if my mistakes resulted in problems affecting other parts of the program - different features, special cases not part of the task immediately at hand - it would take longer for me to become aware of the mistakes. In the meantime, I would have written lots of code related to many different features.

As a result, when a mistake was brought to light - by me, by a tester, or worse, by a user - I had to look for it in the entire program. I had to use a powerful tool to locate many of my mistakes: a debugger.

I work differently nowadays.

Whenever I write code for a feature, at a fine-grained level, I also write a test. I don't mean a documented test plan, which would entail running the program to check on this or that feature; I mean automated tests, written in Java or C++ or whatnot. My programs are self-testing, so to speak. Since the testing is automated, I can afford to retest all of the program's features several times a day.
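For instance, such a test might look like the following minimal sketch in Java with JUnit (the Order class, its methods, and the discount rule are invented here purely for illustration):

    import junit.framework.TestCase;

    // One small automated test written alongside the feature.
    // If a later change breaks the (hypothetical) discount rule,
    // this test fails within minutes and points at the code just written.
    public class DiscountTest extends TestCase {
        public void testTenPercentDiscountOverOneHundred() {
            Order order = new Order();
            order.addItem("widget", 120.00);
            assertEquals(108.00, order.total(), 0.001);
        }
    }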

And so, when I make a mistake, a failing test immediately lets me know. I nearly always know where I made a mistake: most of the time, it's in the code I wrote an hour or two ago, or ten minutes ago.

The more tests I have, the less I need the debugger.

LaurentBossavit 2003.04.16


I don't use a debugger much, either, for much the same reasons. If the steps you take are small enough, the need for a debugger is greatly reduced. The folks I know who are heavy debugger users tend to write for days or weeks before they make any serious attempt to test their code (beyond "it compiles") and integrate.

I've been reading a set of articles by Ron Jeffries (one of the XP heavyweights) where he describes the process of working through a problem in a new language (C#) using a combination of Agile strategies, including top-down Test Driven Development (TDD) and a few cases of bottom-up design. Well into having a small, working application, Ron mentions that he hasn't learned the .NET debugger yet, and that he's avoided doing so on purpose, in the belief that working from tests is a better way to learn. It's a good set of articles (so far). He shows his mistakes, and how he struggled with and fixed them.

--DaveSmith 2003.04.15


Michael Feathers has a start on a book currently titled Working Effectively with Legacy Code. The mailing list to discuss the book is http://groups.yahoo.com/group/welc/ and drafts of the book are in that Yahoo group's file section. The book is all about using techniques that have been popularized by test-first programming.

KeithRay 2003.04.15


In about twenty years of writing programs almost full time, I don't recall ever using a debugger successfully. Once or twice someone brought one in to help solve a particularly tough problem, but they never helped.

Looking back, a lot of that was because of the way we always worked - building in pieces that we were 99% sure had one bug or less. We always knew where to look if something went wrong. (The way our work product was structured had a lot to do with that, too.)

I never understood why there was so much fuss and enthusiasm for debuggers. The ones I saw in the past were far more likely to turn up a bug in themselves than in our code. Perhaps there's been a lot of progress in the last generation, and I'm just out of touch. Certainly some people are making a living selling debuggers. - JerryWeinberg 2003.04.16


Working for a debugger (Totalview) company (Etnus), I thought I'd offer a few observations. One, our major users are the national labs, where Totalview gets used for debugging seriously parallel code. Two, the second most important category is those more common but smaller-scale users who are deep into multithreading, i.e. code that is parallel in a different way. As I understand it, the asynchrony can get far too complex to keep entirely in mind when working a problem.

At the other places I've worked, debuggers (almost always gdb) were used extensively to sort out interactions in code that depends heavily on, and is perhaps tightly integrated with, the OS kernel or some brand-new (buggy) hardware.

That suggests parallelism, general complexity, and unstable platforms are seen as reasons for a debugger. These are all for developers whose abilities I deeply respect.

OTOH, I'm a tester. My code is, intentionally, rarely that complicated. And I have yet to find a debugger enough better than printfs to pay for the learning curve. (I also tend to do much of my coding in scripting languages.) Here, I certainly use Totalview, but only to test Totalview. The developers of Totalview use it extensively to debug Totalview, but then they paid for the learning curve a long time ago. I would say they use Totalview as much as a testing tool as they do a debugger, given their deep familiarity with it.

MikeMelendez 2003.04.17


Yeah, what Mike said. I run my code under a debugger when the code is interacting with something else, and the behaviour of the something else is not fully spec'd. I guess I use the debugger as a command-line interface to my code.

DaveLiebreich 2003.04.19


Sometimes I need to find out why/when a certain memory location is being modified, and I can do that in a good debugger. Unfortunately, when operating in a mix of Photoshop Host application, TWAIN plugin, Java background application, and interapplication communication via AppleEvents on MacOS 9, a debugger just doesn't handle it.

KeithRay 2003.04.19


Thought I'd chime in and agree with the consensus about debuggers. I've used them, but only for snooping about in something external to the source I have in hand. We did uncover a couple of kinds of precompiler strangeness with a debugger from time to time, and also some symbol collisions that hadn't been resolved correctly earlier.

The practice of using a debugger to step line by line through something you are building yourself confuses me. Why do this? -- JimBullock, 2003.04.19


Stepping through your code is a form of manual testing... I used to do it before I got into unit testing / test-driven-design. I think Steve McConnell recommended it in his books. It helps find your mistakes, but so do unit tests, and unit tests are faster and persistent.

KeithRay 2003.04.19


Copied from FrequentBuilds page arguing that frequent builds are necessary but not sufficient:

I haven't seen build-time issues based on hardware limits in quite a while. I have seen test-bed pushes and similar take a long time, still. Things like pushing out databases, and test sets, and registering services or interfaces. I have seen "agile" or "agile wannabe" projects stall when making and maintaining a suitable test bed for all those automated unit and acceptance tests turned out to be hard and time-consuming. JUnit can run Java unit tests. Well and fine. What about data to go against? In a known state? Recoverable, and covering the edge cases?
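One way to get data to go against, in a known state, is to have the test fixture itself rebuild a tiny test bed before every test. A minimal sketch in Java with JUnit - assuming an embedded in-memory database such as HSQLDB is on the classpath; the table and the query here are invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import junit.framework.TestCase;

    public class CustomerQueryTest extends TestCase {
        private Connection conn;

        // Every test starts from the same small, known data set,
        // so a failure points at the code rather than at stale test data.
        protected void setUp() throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");
            conn = DriverManager.getConnection("jdbc:hsqldb:mem:testbed", "sa", "");
            Statement s = conn.createStatement();
            s.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            s.execute("INSERT INTO customer VALUES (1, 'Alice')");
            s.execute("INSERT INTO customer VALUES (2, 'Bob')");
        }

        protected void tearDown() throws Exception {
            conn.createStatement().execute("SHUTDOWN");  // discards the in-memory data
            conn.close();
        }

        public void testFindsCustomerByName() throws Exception {
            Statement s = conn.createStatement();
            assertTrue(s.executeQuery("SELECT id FROM customer WHERE name = 'Alice'").next());
        }
    }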

I have also seen the agile testing approaches stall when the failures were more system-level than code-level. If a failure leaves the test system in a bollixed-up state - crashes the messaging middleware, hard, for example - that takes the test suite down, but may also require manual intervention to get the environment back up.

I have seen within the last couple of years several examples of "big ball of code" problems, where a large bunch of developers was working with a big, intermingled pile of code. Managing source changes that stepped on each other became a real burden - daily half-day check-in meetings involving dozens of people. The code was way too arbitrarily coupled. Since "everybody knew" what affected what, none of that was written down. And as it grew, with a bunch of very smart people living with the code pretty much 24 x 7, eventually the folklore needed to make changes exceeded the recollection and recounting rate available even for people who drink too much coffee and talk really fast.

There's something missing from the "frequent build" idea. It's necessary but not sufficient, I think. -- JimBullock, 2003.06.30


Good points, Jim. To reap those benefits, you have to design for testability. That can include the infrastructure for a mini "seeded" database, for mocking-out a database with XML stubs, or other means of supplying rapid test feedback.

I've seen a transaction processing system set up so all COMMIT in testing became ROLLBACK in order to preserve the initial conditions.
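The same trick can be sketched as a JUnit fixture - a minimal sketch, assuming the code under test works against a Connection handed to it by the test, rather than committing on its own:

    import java.sql.Connection;
    import junit.framework.TestCase;

    public abstract class TransactionalTestCase extends TestCase {
        protected Connection conn;

        protected void setUp() throws Exception {
            conn = openTestConnection();   // however the test bed connects (left abstract here)
            conn.setAutoCommit(false);     // nothing commits behind the test's back
        }

        protected void tearDown() throws Exception {
            conn.rollback();               // the "COMMIT becomes ROLLBACK" ploy:
            conn.close();                  // every test leaves the data exactly as it found it
        }

        protected abstract Connection openTestConnection() throws Exception;
    }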

I've seen stubs that provided probabilistic results, scripted.
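(A sketch of that idea, with invented names: a stub for an external service whose responses follow a scripted, seeded distribution, so the rare failure branches get exercised repeatably.)

    import java.util.Random;

    // Stub for an external payment service: succeeds most of the time, but
    // fails on a scripted fraction of calls so error-handling paths get tested.
    public class FlakyPaymentGatewayStub {
        private final Random random;
        private final double failureRate;

        public FlakyPaymentGatewayStub(long seed, double failureRate) {
            this.random = new Random(seed);   // seeded, so runs are repeatable
            this.failureRate = failureRate;
        }

        public boolean authorize(double amount) {
            return random.nextDouble() >= failureRate;
        }
    }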

One of the most useful testability ploys is using MVC (Model-View-Controller) to afford testing of APIs rather than requiring capture/replay of GUI control actions.
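To sketch the difference (the ShoppingCart model and its methods are invented for illustration): instead of scripting clicks against the GUI, the test talks straight to the model behind it.

    import junit.framework.TestCase;

    // No window, no capture/replay script: the View is bypassed entirely
    // and the Model is exercised through its ordinary API.
    public class ShoppingCartModelTest extends TestCase {
        public void testRemovingLastItemEmptiesCart() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("book");
            cart.remove("book");
            assertTrue(cart.isEmpty());
        }
    }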

Schemes for testability tap the experience of the team to avoid dead ends in project closure.

BobLee 2003.07.01


. . . to afford testing of APIs rather than requiring capture/replay of GUI control actions.

Well yeah. And no. There are at least two problems that I've seen not get handled very well, that bollix this stuff up.

  • Hard vs. soft interfaces.
  • Doing the work.

The power of MVC for testability isn't the pattern, but the interface it makes stable and opaque. So I think the big trick is to make these interfaces stable and opaque as early as you can, so that you can build test harnesses that use them, and even simulate the M before it's available. This works just fine except when you "refactor mercilessly", changing the M's API and trashing test setups, playback tests and so on.
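A minimal sketch of simulating the M before it's available (the interface and the names are invented for illustration): once the Model's interface is pinned down, the harness and the View/Controller tests can run against a fake that implements it.

    // The stable, opaque interface the rest of the system codes against.
    public interface AccountModel {
        double balanceFor(String accountId);
    }

    // A simulated Model: canned, predictable answers, good enough for the
    // test harness long before the real implementation (or its database) exists.
    class FakeAccountModel implements AccountModel {
        public double balanceFor(String accountId) {
            return "empty".equals(accountId) ? 0.0 : 100.0;
        }
    }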

If you're doing MVC in one language, the playback tools - JUnit, for example - work just fine. If the interface among the components is something else, or proprietary, that's harder. If you decide to refactor the component interface onto a different technology, that's harder yet. Not free. Way more expensive than making some code & design changes within a single language and technology stack, then launching the new version out through the automated build & test tool chain.

One project I initiated for an e-commerce company with its own middleware was a test harness / simulation tool for that middleware. It solves a big chunk of the test bed problem, along with making "hard APIs" testable.

Doing the work isn't easy either. The problems I have seen are at least these:

  • Skills. People doing "application coding" don't know how to set up the playback tool or the middleware, and don't want to learn.
  • Ownership. "That's evil database stuff - not my job." The C2 wiki has examples of this, with some of the developers dissing DBAs. It applies to any "other" tech.
  • Tool chains. Building or faking the additional "stuff" takes additional tool chains that don't necessarily integrate well with each other, haven't necessarily been licensed & deployed, and may have version / dependency incompatibilities among them.
  • Volatility of interfaces. When refactoring changes the contents of test beds, or the required tool chains, it isn't as cheap as a single-language source code refactoring.
  • Coupling across tool chains. One of the ThoughtWorks guys has instigated an "agile database" thing, based on automated database rebuilds and database refactoring. They have the same challenges we had 13 or 14 years ago - hand-crafting some hooks between the source code tools and the DB build tools. It works just grand once you get it to work, and it's brittle.

Solving this stuff isn't easy.

-- JimBullock, 2003.07.01


Solving this stuff isn't easy.

That's where having sufficient experience on the project team helps shorten the way. Having experience with testability successes (and failures!) like those quoted above pays off in easier closure and less latency in testing.

--BobLee 2003.07.01

