
FaultFeedbackRatio

Johanna has an article about this here:

ratio = (fixes that require more work) / (all the fixes)

What percentage of the time, when you try to fix things, do you introduce additional mistakes?
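
To make the arithmetic concrete, here is a minimal sketch in Python. The fix-record format is invented for illustration; it is not part of Johanna's definition:

 # Fault Feedback Ratio (FFR): bad fixes / all fixes, where a "bad fix"
 # is one that required more work (rework, side effects, new faults).
 # The record format below is a hypothetical example.

 def fault_feedback_ratio(fixes):
     if not fixes:
         return 0.0
     bad = sum(1 for fix in fixes if fix["required_more_work"])
     return bad / len(fixes)

 # Example: 3 of 10 fixes bounced back for more work -> FFR = 0.3
 fixes = [{"required_more_work": i < 3} for i in range(10)]
 print(fault_feedback_ratio(fixes))  # 0.3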


First, there's a more subtle point of view about "mistakes." I always assume that current work is done in terms of current practices and standards, which themselves evolve over time. Today's "perfect" is usually tomorrow's "mistake"; if it isn't, we're not learning anything about what we are doing, and that's boring.

Second, measured how and where? I just had the experience of adjusting this ratio at the company where I am currently VP of Engineering (interim).

- Our feedback ratio on baselined builds metered into QA was pretty awful back in the day. We were averaging about 3 tries to fix any particular feature-scale chunk, with as many as 6 go-arounds. Our success rate on the first try was approximately 0. Our ratio of unintended, bad side effects on the first try at a fix was actually higher than 1:1. If you count individual reported faults, we were batting maybe 50/50 at "fixing" the indicated fault. It wasn't pretty.

- Measuring from about February, we have had one build a week on two code lines, with an aggregate of about 5 "chunks" in each build. In that time we have had 6 weeks that included breakage in either code line. We had about four CM bobbles as we moved more of the product into change control, and about four pieces of broken code / function. Obviously we had a CM issue and a broken-code event in the same week a couple of times. Of the broken-function events, exactly two were actual regressions (new errors) vs. incomplete repairs. All the errors that got into a build were fixed successfully on the first try at repair.

- This is all grand. Yet to get there, we introduced various kinds of collaboration up-stream of the builds we deliver to others. Initially, this stuff was bouncing at about the same rate as the builds were failing before. Now, we tend to bounce the up-stream reviews about 1/4 of the time. Most of those bounces are actually about practices, risk management, specification / functionality, and similar concerns, and amount to a judgment call that we want to raise the bar on ourselves with this particular change, mainly because it's extra-important.

So what counts?

FWIW, the trend line of errors in the builds is pretty flat lately, while the trend line for errors in the up-stream checkpoints is still headed down. That points out the thing I like to watch, more than the fault feedback ratio: what's the trend line? More than that, I like to watch the trend of the trend line, wherever that is measured. Related to this, I like to track "tries" for any particular change. Once we get at all competent about what we intend, these "tries" tend to morph from the code not doing what we intended into us not knowing what we intend: a sort of multiple prototyping by stealth. That's an important thing to track as well.
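
To make "trend line" and "trend of the trend line" concrete, here is a minimal sketch in Python. The weekly error counts are invented for illustration, not actual project data:

 # Fit a least-squares slope to weekly error counts (the trend line),
 # then fit a slope to a rolling window of those slopes (the trend of
 # the trend line). Negative slopes mean things are improving.

 def slope(ys):
     n = len(ys)
     x_mean = (n - 1) / 2
     y_mean = sum(ys) / n
     num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
     den = sum((x - x_mean) ** 2 for x in range(n))
     return num / den

 weekly_errors = [9, 8, 8, 6, 5, 5, 4, 4, 4, 4]  # hypothetical data

 print(slope(weekly_errors))   # trend line: negative = improving

 window = 5
 rolling = [slope(weekly_errors[i:i + window])
            for i in range(len(weekly_errors) - window + 1)]
 print(slope(rolling))         # trend of the trend line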

-- JimBullock 2005.06.29 (Simple metrics lead to simplistic answers.)


Sounds to me like you found a use for Fault Feedback Ratio in your project. So what's the problem? I don't understand. -- JohannaRothman 2005.06.30

It's always important to track trends in your measurements, whatever they may be. Trends are more reliable indicators of what's happening than any particular measurement. -- JerryWeinberg 2005.07.01

"Sounds to me like you found a use for Fault Feedback Ratio in your project. So what's the problem? I don't understand."

Which fault-feedback ratio? The one after stuff is in the build and shipped to test, or the one up-stream of that? Or maybe the one after that, after we ship stuff out the door? We have driven that one to 0 lately. It won't hold, of course.

What counts as a "fault"? Depending on the granularity, and on what we're measuring, the ratio flails all over the place. It does seem to track directionally among the various measures.
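
As a sketch of how the answer changes with where you meter, here is a per-checkpoint version of the same ratio in Python. The stage names and fix records are invented for illustration:

 # Group fix records by the checkpoint where the fault surfaced and
 # compute an FFR per checkpoint; each stage can tell a different story.
 from collections import defaultdict

 def ffr_by_stage(fixes):
     totals, bad = defaultdict(int), defaultdict(int)
     for fix in fixes:
         totals[fix["stage"]] += 1
         if fix["required_more_work"]:
             bad[fix["stage"]] += 1
     return {stage: bad[stage] / totals[stage] for stage in totals}

 fixes = [
     {"stage": "upstream review", "required_more_work": True},
     {"stage": "upstream review", "required_more_work": False},
     {"stage": "upstream review", "required_more_work": False},
     {"stage": "upstream review", "required_more_work": False},
     {"stage": "build/QA", "required_more_work": False},
     {"stage": "build/QA", "required_more_work": False},
     {"stage": "shipped", "required_more_work": False},
 ]
 print(ffr_by_stage(fixes))
 # {'upstream review': 0.25, 'build/QA': 0.0, 'shipped': 0.0}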

-- JimBullock 2005.07.09

