WhatSlowsYourProjectsDown

In order to speed projects up, we need to know what slows them down. What slows down your projects? I'll seed this thread with a couple of things I know about:

  • Selecting a lifecycle that doesn't move quickly enough for the work you want done. For example, using a waterfall or phase-gate instead of using a cyclical or chunking lifecycle.
  • Not setting up systems to obtain data about the project, relying instead on people's gut feelings. People's guts are not particularly good at discerning whether the project is actually on target. Data is good.

What else? -- JohannaRothman 2004.04.25


  • Focusing overly on instantaneous speed at the expense of the velocity (speed plus direction) of the overall project.

--DaleEmery 2004.04.25


  • Replanning and re-estimating to find an acceptable answer. Lately almost every senior person on our project has spent virtually all their time in meetings or preparing for them.

Johanna, data is good if it is relevant, not so good when the data collected does not keep up with changes in how projects are run. On this project, the data showed we were almost on target when my gut (and not just mine) said we were in trouble. Maybe the lesson here is to investigate whatever points to the existence of a problem, rather than trusting whatever says we will be OK. SherryHeinze 2004.04.25


Believing that you can ski uphill - aka the mistaken belief that you have to rush through all the stuff at the start of the project because you know you've got loads of things to do later on. The latter rests, of course, on the mistaken belief that the sooner you get to those "loads of things" - whatever they may be - the quicker (and more cheaply) you'll be able to sort them out.

PhilStubbington 2004.04.27


Phil, you reminded me of this one too:
  • Believing that if you take lots of time at the beginning of the project, you can rush through the stuff at the end of the project because the requirements are so "good" or whatever adjective you'd like to use here.

JohannaRothman 2004.04.27


People being taken off the project to work on other projects (usually to fix bugs).

Low-quality code (no automated tests, no reviews or pairing, hard to understand, hard to change, overly interdependent) really slows things down: the developers are forced to test everything manually, spend time trying to understand the code ("is this line of code after the last return from the function unnecessary, or does it really belong before the return?"), and are afraid to make any changes that could improve the code quality while preserving the desired behavior (refactoring).

KeithRay 2004.04.28


I've seen a lot of planning time sacrificed on the altar of the illusion of certainty.

DaveSmith 2004.04.29


The underlying cause of all project slowdowns is mistakes.

The number one cause of mistakes is trying to go too fast. - JerryWeinberg 2004.04.28


KeithRay reminded me of another - the mistaken belief that a project can try to do more than one thing - like introduce automated testing and deliver whatever the Vision/Scope of the project says it's delivering.

... and thanks to JohannaRothman of course for her comment above!

PhilStubbington 2004.04.29


I'm pondering Jerry's comment about trying to go too fast. For me it takes these forms:
  • rushing through whatever you need to do initially to get to the end. This frequently takes the form of rushing through whatever you do for requirements and design.
  • rushing through the end to release (even if you didn't rush the beginning, pushing through the testing without allowing for data collection, for example)
  • not allowing the project to iterate slowly at the beginning to understand what you can do, so you can go faster at the end (starting off at a slower tempo and gradually increasing speed as you learn what you're doing and where you're going)

Let me give an example since I'm sure I lost everyone with that last bullet. If you try a HudsonsBayStart in a project, you try a small iteration slowly (at least, not fast). Or, if you try one small thing in the beginning, you learn where you can move quickly and where you can move slowly. You can use that information to speed your project up in later iterations. -- JohannaRothman 2004.04.30


We have been doing the textbook illustration of your first bullet for the last 6 months. Tomorrow we were to go live. Instead, we are starting over again with the analysis. Hopefully, we really won't return to doing the development before the analysis and design again. The one thing I am sure of is that it is slower to do it over than to do it right the first time. SherryHeinze 2004.04.30
The old saw once again: "We don't have enough time to do it right the first time, but we always have enough time to do it over." Unfortunately the old saw is a true saw. :-( CharlesAdams 2004.04.30
I really wish people would consider some form of iterative lifecycle before starting with a waterfall or phase-gate. Even if they just made their waterfalls or phase-gates shorter, so they could do two little projects in 6 months instead of one not-quite-successful project.

Charles, I've been thinking that the saw should be something more like this: "We don't have enough time to do it right the first time, so we'll do less, and do that right, instead of doing the whole thing wrong and having to do it over again." Well, that's not very spiffy as a saw. Maybe we can come up with a sweet saying. -- JohannaRothman 2004.05.01


Part of our problem here is that management thinks that what we did was iterative, specifically Agile. No amount of argument will get past a mental definition of Agile as meaning without any process at all at this point, nor is it likely to prevent Sales from promoting Agile to a client as an excuse for no process, no (or insufficient) client involvement, no time allowed for even unit testing, etc. I was told that the problem with all methodologies other than waterfall is that they never work. This is probably true, if the only one you actually follow is waterfall. Using the name Agile or RAD or Spiral does not invoke enough sympathetic magic to cause success. SherryHeinze 2004.05.02
  • Not choosing how to do the work.

I see reflections of this in several of the points above, not to mention in my own experience. Some examples follow. Start with the assumption that everyone knows the "right" way to do it. Justify what you did after the fact by giving it a name. Since estimating always fails, don't bother; just do the work. Just "trust" your best programmer. You shipped, didn't you? So you must have done it "right". -- MikeMelendez 2004.05.03


  • Spending lots and lots of time estimating when all your history shows your estimates to be incorrect

JohannaRothman 2004.05.05


Beautifully put! SherryHeinze 2004.05.05
"We don't have enough time to do it right the first time, so we'll do less that we can do right, instead of doing the whole thing wrong and having to do it over again." Well, that's not very spiffy as a saw. Maybe we can come up with a sweet saying.

How about "If you don't have time to do it over, do it right the first time"? DonGray 2004.05.05

Well said, Don! -- JohannaRothman 2004.05.08

Taking the time to focus and plan properly during a project might appear to slow it down. In my personal experience, just the opposite happens. In some of my own metrics that I keep on projects, I have found that for a medium-sized project taking about 6 to 8 months, each day of planning/focusing can save up to one week of time near the end of the project. In PSL, I remember learning that focus is a substitute for time. For those who are constantly looking for substitutes for time on drawn-out projects, Wayne Strider has a wonderful section called "Substitutes for Time" in his book Powerful Project Leadership. I like the book since it takes a Satir approach to project management.

--JohnSuzuki, 2004.05.06


I'm a huge fan of planning for projects. But I plan (and teach) this way: the PM had better plan to replan. The value you obtain from planning in the first place is that you're not making tradeoffs blindly during the project. That's why you save time at the end. But the value you obtain from replanning is the ability to adapt to reality.

John, when you talk about planning, do you include estimation? If so, what estimation techniques do you use, and how accurate are they? (I've just come off two projects where the estimates were 200-1000% off. Yes, that much off. Nowhere near close.) -- JohannaRothman 2004.05.08


Some of the fastest projects I have seen have made the least progress. I remember a tape of several Aikido demonstrations where the Shihan were attacked, showing this throw and that technique. The Shihan never seemed to be in a hurry. By and large they didn't move much, and when I thought about it, or stepped through the tape, they didn't move all that fast.

What slows projects down the most, at least what I have seen, is activity unrelated to progress. With apologies to Dijkstra, "Every activity advances the state of the project."

-- JimBullock, 2004.05.08


Related to Johanna's point, "Believing that if you take lots of time at the beginning of the project, you can rush through the stuff at the end of the project..."
  • believing that 'We'll make up time in the next phase/iteration/...'


ED 2004.05.16


  • Not knowing how much rework you'll need to do, how long it will take, and when it will occur.
-- JohannaRothman 2004.06.01

Rework reminds me of a related activity I call "Shuffling the Deck Chairs." In this activity, the client requests changes to the software before the previous changes have been thoroughly tested and it is known how the software will behave in THE REAL WORLD.

The negative impacts of Shuffling the Deck Chairs include:

  • Project Churn from Requirements to Deployment
  • Lack of a stable code baseline
  • Failure to THINK about what the real problem might be.

OTOH it looks like progress is being made. -- DonGray 2004.06.01


On the other hand, taking small "what's the next step" steps and not thinking big problems through to the point of paralysis can help a team move forward, often to completion. Big up-front thinking can often lead to over-analysis and over-design, in part to counter fears that never materialize.

--DaveSmith 2004.06.02


Dave and Don, I think you're talking about different things here. I certainly embrace the idea of trying out little things and seeing if that's enough. That's different from adding more features before you've finished any of the features you've already got. The worst way I saw this play out was during a project where the developers were trying to finish their assigned feature sets, and once the marketing guy saw a demo, he wanted to change the whole thing. Of course, the project wasn't sufficiently adaptable to accommodate his requests, so about half the developers tried to finish their original work and the other half changed what they were doing. Absolutely nothing got done until the project manager understood what was happening.

So, let me rephrase your points, if I may:

  • Embracing the next requirements set without completing the work you agreed to initially
  • Not selecting a lifecycle or schedule or plan or something to allow the project to adapt to changes in midstream

Did I do this right? -- JohannaRothman 2004.06.02

I agree with Johanna that Dave and I are talking about different "activities". I agree that "try a little" testing concepts and possibilities is A GOOD THING. My thought has more to do with "We don't have time to think, we need to be doing something." DonGray 2004.06.07

Some things that slow testing down:
  • pressuring development to meet an impossible deadline, so the developers only pretend they managed to unit test and deliver (surprise!) lousy-quality software that isn't ready for system test
  • not having someone who is accountable for delivering and maintaining a clean, complete, stable test environment
  • assuming Systems Integration Test is the same sort of animal as System Test, and will somehow morph organically out of System Test with no separate planning or additional resources to develop the approach and test cases

There are others, of course, but these are some of the worst and most ubiquitous. -- FionaCharles 2004.06.02


Fiona, to your first point:
  • claiming that you'd met a milestone without really doing so.

I saw this in a client where the developers met EVERY date in the schedule, but the way they met the schedule was to shortcut design and do no unit testing at all, and then they were surprised (!!) when the code got to testing and nothing worked. The managers could not understand how testing could be slowing up the project so badly. When I explained what happened (and I had data to prove it), they threw me out and told everyone I was wrong :-) They have successfully lost money every quarter for the last 4 or 5 years, and their releases take longer and longer to complete. They've had a layoff almost every quarter for the last 3 years too. But they didn't believe me. Maybe there's something else here:

  • Not understanding what causes slowdowns. By the time you've detected a slowdown, the event(s) that caused the slowing are long gone.
  • Believing your model when all the data points to something else as the cause of slowness
-- JohannaRothman 2004.06.03

As I read back over the posts, I'm trying to get a general feel for the areas we've covered. Good ways to slow your project down are:
  • Setting up wrong (method, planning, choosing how to work, responsibility)
  • Incorrect status (data, direction)
  • Thinking you can go faster
  • Going too fast
  • Doing too many things
  • Doing the wrong things

Any other major categories? I'm trying to summarize here, since the next obvious question is, HowToSpeedUpYourProject -- DonGray 2004.06.09


Don,

One category that might be missing, which I see all the time and which is supported by the literature, is having the wrong people on the project. You could add a category called People (hiring or assigning the wrong persons). Barry Boehm's work suggests that "people" have one of the largest impacts on software project success. The right people with the appropriate experience, in the correct numbers over time, are important according to Boehm. Along those lines, proper training must be given for individuals and teams to perform optimally.

--JohnSuzuki 2004.07.09


John (and everyone), it's not just that people in general have the largest impact on project success. It's even worse :-) I'm in the airport right now, so I don't remember the percentages off the top of my head, but in Capers Jones' _Software Assessments, Benchmarks, and Best Practices_ (I think), he claims that good management is much more important (a percentage of 85?%) than process (a percentage of 14?%) and that good people, meaning people who have the functional skills and domain expertise, are almost as important as good managers (75?%). Only reuse (300?%) is a more important factor in software projects. Boehm has similar data, but I'm not arithmetically capable enough to translate percentages into factors. (I don't claim these percentages are correct because I'm not looking at the book; they are only what I remember.)

What I take away from this is that hiring the right people for a project is more important than any kind of process thing you could do. I think the other issues we've discussed here are also important, and I don't know how to rank them. I do know that a good project manager and good people (in the sense that they have the functional skills and domain expertise) will trump any set of bad senior management (except in the choice of which project to do). Which is why I wrote the hiring book :-) -- JohannaRothman 2004.07.11


Johanna,

You asked: When you talk about planning, do you include estimation? Absolutely. Unfortunately, in my experience many organizations don't do formal estimation. Those that do rarely use historical data since they rarely collect it.

You asked: If so, what estimation techniques do you use, and how accurate are they? Well, I have tried many, ranging from formal parametric cost models to simpler techniques such as the Wideband Delphi method. On several projects based on Waterfall-like development (projects I kept detailed formal data on), I used a formal cost construction model like COCOMO, calibrated with a couple of years of similar-domain project data. Similar-domain data is absolutely important for realistic estimation.
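
For anyone who hasn't met a cost construction model, here is a minimal sketch of basic COCOMO 81, using the published textbook constants rather than anything calibrated - a calibrated model of the kind described above would refit these coefficients (and add cost drivers) from similar-domain history:

    # Basic COCOMO 81 (textbook constants, not calibrated to any domain).
    # Effort = a * KLOC^b person-months; schedule = c * Effort^d months.
    MODES = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def cocomo_basic(kloc, mode="semidetached"):
        """Return (effort in person-months, schedule in calendar months)."""
        a, b, c, d = MODES[mode]
        effort = a * kloc ** b
        schedule = c * effort ** d
        return effort, schedule

    effort, months = cocomo_basic(100)
    print("100 KLOC, semi-detached: %.0f person-months, %.0f months"
          % (effort, months))

This prints about 521 person-months over 22 months. Note that schedule grows much more slowly than effort in the model, which is one reason "12 months" wishes so often collide with 20-something-month realities.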

On one project (if I recall correctly) using this model and calibrated data, the model suggested a project duration of 22 +/- 3 months with hundreds of thousands of dollars of labor cost (say around 2X). The customer originally wanted the project in 12 months at a cost of X. The client had not done any formal estimation before bringing me into the project. Of course both management teams (client and customer) were unhappy with the new estimation information. After all was said and done, the project actually shipped to the customer in 21 months.

It was difficult explaining this to the management of both parties involved, but I stood firm. The project was late and overbudget for all the reasons you already know about. But after reviewing the final project statistics and historical data during a post mortem after shipment, the president and project manager (of the client) realized the importance of proper estimation. They were surprised at how closely the cost construction model predicted the final ship date. I spent considerable time explaining why, based on how the model was developed.

I believe that despite using cost construction models with calibrated data, you will still be off in your estimates - maybe 20, 40, or even 50 percent off. If you can limit the variance of your estimates to less than 10-20 percent over several projects, I believe you are a world-class software development organization. Organizations that don't use formal cost construction models often have project estimates that are 100 to 200 percent off (based on studies like the Standish Group's Chaos Study). In the practical project world I would accept an overrun of 20 to 50 percent versus 100 to 200 percent any day.

-- JohnSuzuki 2004.07.09


I think we can also limit the amount a project overruns by using iterative lifecycles and early and often deliverables. -- JohannaRothman 2004.07.11

I think there's something important about expectations and actuals that we can borrow from the manufacturing quality literature, SPC all the way back to Deming: the idea of control bands.

I think that any given organization has a range of variability for their projects. More important, different kinds of projects have different variability. The more discovery involved, the greater the variability. Conversely, the more precise the required outcome, the more the variability in the delivery process. Fewer unknowns are "in band" for the expected result, so they must get sucked up by the project activity.

Yet another unpublished essay of mine explores this more. It's unpublished because the preceding paragraph is all I have in crisp form. It's taking a while. Since I started that essay, however, my understanding of the methods of writing has improved, so perhaps that particular project has a narrower confidence band than before. Certainly writing projects as a group are showing less variability for me.

So, what slows projects down (and speeds them up?) Two things come to mind:

- Trying to over-control a project system, which, as Deming demonstrated all those years ago, only makes things worse. The predictability you have with this system is the predictability you have with this system. Accept and plan for the variability you have in your production planning, and everyone will be happier.

- Ignoring your unknowns. The blind insistence on an answer to something we just don't know yet causes more project grief than almost anything I have seen. "I need a number for how long it takes us to do a build." Well, what if we don't have such a number? In a sense that number "ought" to be something known, and small. But in many, many organizations that number is large, variable, and unknown. Insisting that there is a number, or on a small one, just generates noise. Planning based on that synthetic number just stacks up lots of contingent plans and activities that will have to be changed later - with associated noise.

I have spent most of my time, it turns out, with projects having a large component of discovery. Add to that my personal bias toward the chaotic part of an otherwise more predictable project, and I have a lot of practice with unknowns. Step 0 to "save time" on projects like this is to admit what you don't know, and what you know poorly.

-- JimBullock (approximately) 2004.07.10


I like the idea of control bands. Have you applied control bands to intermediate deliverables on a project? -- JohannaRothman 2004.07.10
. . . intermediate deliverables? - jb

Intermediate deliverables are the handoffs between groups or interim deliverables. Not a whole new build, but a new module as part of a build. Does that help? JohannaRothman 2004.07.15


  • failing to customize any/all aspects of the effort to the specific context of the project

(see ContextFitting and ContextAnalysis) --DaveRabinek 2004.07.14


Dave, does that mean things like Dwayne's 3Ps -- where the people, the process, and the product all have to match for a successful effort? I've found that mismatching the people, process, and product doesn't just slow the project down, it makes it fail. :-) JohannaRothman 2004.07.15

Re: Control Bands and intermediate deliverables.

Yeah, that helps. Control bands work for anything at all. More specifically, you can apply the idea to anything that you do repeatedly and can measure in some way.

Here's the illustrations and implications ramble: The classic examples are in manufacturing, where the measure is some aspect of the output of a process or step, and you track this over time. "Defects / 1,000,000" could be a measure. More interesting are things like: "Delta from target weight." You get a value for each sample (1,000,000 widgets in the first case, each widget in the second), and look at where they land vs. "nominal." The classic representation is a line graph X vs. Y, with each sample at a new X. Vertically, the measure goes on Y, with "nominal" as a solid line, and the control bands around that.
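
In code, that classic picture comes out something like this minimal sketch (matplotlib assumed; the measurements and band width are invented for illustration):

    # One point per sample, a solid "nominal" line, control bands around it.
    import matplotlib.pyplot as plt

    deltas = [0.3, -0.8, 0.5, 1.2, -0.4, 0.1, 2.9, -0.2, 0.6, -1.1]
    nominal, band = 0.0, 2.0   # band width, e.g. 3 sigma from earlier samples

    plt.plot(range(1, len(deltas) + 1), deltas, marker="o")
    plt.axhline(nominal, color="black")                       # nominal
    plt.axhline(nominal + band, color="red", linestyle="--")  # upper band
    plt.axhline(nominal - band, color="red", linestyle="--")  # lower band
    plt.xlabel("sample")
    plt.ylabel("delta from target weight")
    plt.show()

Sample 7 (2.9) lands outside the band; everything else is just noise around nominal.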

I'll provide a diagram if someone can point me to an example of how to get a graphic into this Wiki thing. Don't we have sketches somewhere around here that I could learn from? Can't find them at the moment.


Jim, the way you put a graphic on the wiki is to put it on a website somewhere, then give the url here. There are several examples already up--take a look below at how I put my photo up (look in edit mode). - JerryWeinberg 2004.07.16



For scheduling purposes, you might be inclined to measure some aspect of time, effort, or other investment actually consumed. Then you have some number of chunks of work of more or less equal size, or that can be normalized, and away you go. Several nice things about this kind of abstract tool:

  • You can side-step all kinds of lack of understanding by just simplifying the measure. You might like "weeks' effort per user story" but can't agree what a user story is. Fine. How about just starting to track "time to completion of things we estimate will take us a week"? It turns out that many people have a personal, implicit, and emotionally high-voltage control band for exactly that measure. Often something like "If we said it would take a week, it ought to be off by no more than a couple hours." Note that "a couple hours" vs. a 50-hour week is 4%, a control band that many manufacturing processes never reach - and tank reactors don't get distracted, redirected, or sick. So our expectations of predictability for estimates of human processes are way too stringent a lot of the time.
  • Throws attention to the process. When things flail around, well, they flail around. When they flail around more than you'd like, fix the process. When they flail around more than they used to, something happened. This was one of Deming's constant points.
  • Gives you some insight into what you think is similar. This one's sort of a stealth payoff. If you're grouping a bunch of things, say "requirements", into a pile in a control band chart, you're saying that these things are in some sense the same. When one of them acts differently, it begs the question: is this really the same? This was another of Deming's constant points. Variation doesn't exclusively, or even mostly, come from human error or cussedness. Often it comes from things simply being different.

For software and intermediate deliverables, control charts work nicely for iterative efforts. They don't do a damn bit of good for "big bang" approaches. With large systems, I'd be really inclined to use control bands to look at any step (transformation) where you have similar inputs and similar outputs, across multiple iterations. At a minimum, you use the first several iterations to calibrate what you might reasonably expect for the remainder. Then, if you like, you can look to see what might be done to change the set point of that step - training, tools, whatever. If you're real subtle, you might consider that the development or project environment is a system, and look to create a system that encourages that process step in a particular direction. This last is one of the models kind of hidden behind Highsmith's SoftwareDevelopmentEcosystems.
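
Here is a minimal sketch of that calibrate-from-early-iterations idea, using the standard XmR (individuals) chart limits of mean plus or minus 2.66 times the average moving range; the hours are invented for illustration:

    # Calibrate control limits from the first several iterations, then
    # flag later iterations that land out of band.
    def xmr_limits(baseline):
        mean = sum(baseline) / len(baseline)
        ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
        mr_bar = sum(ranges) / len(ranges)
        return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

    # Hours to finish "things we estimated at a week" (nominally 40h).
    baseline = [44, 38, 52, 41, 47, 39, 45, 50]   # calibration iterations
    later = [43, 71, 46, 40]                      # subsequent iterations

    low, center, high = xmr_limits(baseline)
    print("expected band: %.0f..%.0f hours around %.0f" % (low, high, center))
    for i, hours in enumerate(later, start=len(baseline) + 1):
        if not low <= hours <= high:
            print("iteration %d: %dh out of band - something happened" % (i, hours))

With these numbers the band comes out to roughly 23 to 66 hours around a 44-hour center - far wider than the "off by no more than a couple hours" expectation above, which is exactly the point about our predictability expectations being too stringent.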

-- JimBullock 2004.07.15 (That reply was in-band . . . for me.)


Lots of good ideas/comments/stories in this thread - the breadth of experience of AYE folks continues to amaze me...

My $.02:
Seems that an underlying theme of 'what slows things down', based on the comments in this thread and on my experience, is
a) not enough information
b) incorrect information
(or, as Jerry sez, 'It's not a crisis, it's just the end of an illusion')

Examples:
*participants with insufficient technical information (in terms of skills or experience)
*incorrect information on the expectations of end-users and other stakeholders
*not enough info to predict the effects of interactions with other software/systems
*insufficient information on requirements
*incorrect information on the reliability of reusable software components
*incorrect information about project status

The ability to judge
1) when there is enough information
2) when information may be incorrect
3) how difficult it will be to obtain the information
4) what information is missing (knowing what you don't know)
would seem to be critical.

Since it is so hard to judge the above (eg, to know when you're dealing with an illusion) in many large software development projects, a start-small iterative approach seems to often make sense, in terms of avoiding the big slowdowns.

-RickHower 2004.10.13


Good analysis, Rick. I'd like to add that meta-information underlies these things ("since it's hard to judge the above"), because in sick projects, the communication about the quality of the information gets sick, too. Or maybe it starts there. See my article among the AYE articles on destroying the information system. - JerryWeinberg 2004.10.13
Jerry, your article on destroying communications in software development really hits the nail on the head.

Sometimes I find that an entire organization and management structure has institutionalized "sick" communications, so projects are always "sick" from the get-go. And sometimes I work in organizations that enable healthy communication but the particular project that I'm involved in has, or develops, sick communications.
I find the former type of organization much more trying than the latter, since no matter where I turn, it can feel like a house of mirrors, where perspectives warp in strange ways, fiction and truth get confused, bad people look good, and good people look bad. - RickHower 2004.10.25


Updated: Monday, October 25, 2004