
MeasuringProgress

Measuring Progress
  • by Richard House
    • Director, i-logue Ltd
    • 212 Piccadilly
    • London W1J 9HG
    • Tel: 020 7830 9657
    • [email protected]

DickKarpinski put this here because it arrived at just the time I was composing TrashingRequirementsEngineering and seemed to support my thesis.

Executive Summary (copied from below for easy access)

Purpose

There has been much published on the subject of IT project failures in terms of cost and time overruns, but cost and time alone cannot describe success. If we accept that success is the goal of every project, then success is a fundamental business objective, yet in our experience it is often inadequately defined. Number one on the National Audit Office/Office of Government Commerce publication "List of Common Causes of Project Failure" is:

Lack of clear link between the project and the organisation's key strategic priorities, including agreed measures of success.

Where evidence of success is required, for example justifying the investment to a critical audience, then collecting that evidence by measurement requires a quantitative definition of success criteria in order to ensure that the right data is collected. Rather than solely focussing on financial measures and time, metrics for success should include how much the organisation is improving as a direct consequence of the implementation of the project.

The purpose of this paper is to examine the extent to which UK Government IT projects have defined measures of success and how they are using these measures to monitor project implementation.

End of copy from below. ======

Contents

  • 1 Executive Summary
  • 1.1 Purpose
  • 1.2 Key Findings
  • 1.3 Issues Raised
  • 2 Introduction
  • 2.1 Background
  • 2.2 Purpose of this Document
  • 2.3 Document Structure
  • 3 Approach
  • 3.1 Information Gathering
  • 3.2 Question Rationale
  • 3.3 Performance Measure Categorisation
  • 3.4 Assessment Metrics
  • 3.5 Validity of Responses
  • 3.6 Results Presentation
  • 4 Analysis
  • 4.1 Survey Responses
  • 4.2 Performance Measure Categories
  • 4.2.1 Quantitative Measures
  • 4.2.2 Internal and External Measures
  • 4.3 Goal Coverage
  • 4.4 Project Length
  • 4.5 Performance Reporting Start Points in the Project Timescale
  • 4.6 General Observations
  • 4.6.1 Problems with Performance Measure Definitions
  • 4.6.2 Baselining
  • 4.6.3 Quantitative Understanding
  • 5 Summary
  • 5.1 Conclusions
  • 5.2 Recommendations

  • Annex A Organisations Surveyed



References:

Gilb, Tom. Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage. Elsevier Butterworth-Heinemann, Oxford, 2005. Softcover, 480 pages. ISBN 0750665076.

Royal Academy of Engineering and British Computer Society Working Group. The Challenges of Complex IT Projects: The Report of a Working Group from the Royal Academy of Engineering and The British Computer Society. April 2004. ISBN 1903496152. http://www.raeng.org.uk

Reviewers

  • Tom Gilb
  • Lindsey Brodie
  • Chris Dale
  • John McCubbin
  • Derek Robertson
  • Dick Karpinski

Licence

This work is licensed under the Creative Commons Attribution-ShareAlike 2.0 License. To view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

You are free:

  • to copy, distribute, display, and perform the work
  • to make derivative works
  • to make commercial use of the work

Under the following conditions:

  • Attribution. You must give the original author credit.
  • Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under a licence identical to this one.
  • For any reuse or distribution, you must make clear to others the licence terms of this work.
  • Any of these conditions can be waived if you get permission from the copyright holder.
  • Your fair use and other rights are in no way affected by the above.


Executive Summary

Purpose

There has been much published on the subject of IT project failures in terms of cost and time overruns, but cost and time alone cannot describe success. If we accept that success is the goal of every project, then success is a fundamental business objective, yet in our experience it is often inadequately defined. Number one on the National Audit Office/Office of Government Commerce publication "List of Common Causes of Project Failure" is:

Lack of clear link between the project and the organisation's key strategic priorities, including agreed measures of success.

Where evidence of success is required, for example justifying the investment to a critical audience, then collecting that evidence by measurement requires a quantitative definition of success criteria in order to ensure that the right data is collected. Rather than solely focussing on financial measures and time, metrics for success should include how much the organisation is improving as a direct consequence of the implementation of the project.

The purpose of this paper is to examine the extent to which UK Government IT projects have defined measures of success and how they are using these measures to monitor project implementation.

Key Findings

  • The responses indicated that quantifiable measures for assessing success in projects are:
    • in place - {65%}
    • aligned to internal project assessment, rather than any business impact - {55%}
    • aligned to business goals - {45%}

Issues Raised

The proportion of projects with no quantitative performance measures in place indicates an apparent reluctance to quantify formal success criteria, a lack of skills to do so, or that this information is not readily available. The lack of explicit definitions of anticipated/expected business change increases project risk arising from misunderstanding objectives.

Fewer than half of projects have quantitative measures that correlate with project goals or reflect business impact. This suggests that even those that are quantifying measures might not be selecting the ones that matter to the business. Our analysis indicates only a quarter of respondents demonstrated an effective use of quantifiable measures of success.

The Government approach of incremental delivery seems not to have been uniformly adopted. The duration of some projects without any quantitative performance measures gives cause for concern over how these projects can be successfully managed towards outcomes that meet key strategic priorities.

The evidence from this survey suggests that there is considerable room for improvement in the way that IT projects specify and use measures of success.

The proportion of projects not reporting quantitative performance measures is of major concern for the accountability of public investment.

The majority of government departments need to reappraise how the business impact of their IT projects is captured and monitored.

Richard House Director, i-logue Ltd [email protected]

Introduction

"You get what you measure" is an often-quoted management maxim, implying that something is unlikely to be realised if it is not measured. Perhaps less obviously, measuring the wrong thing is probably going to result in a lot of wasted effort. Selecting the right performance metrics and making measurements should therefore be helpful to management; indeed, the right performance metrics should define what we mean by success.

Background

i-logue Ltd has many years' experience in the area of Government IT and has observed several initiatives to improve the success rate of IT projects. IT governance has increased on many projects with the application of PRINCE2, initiatives such as the Office of Government Commerce (OGC) Gateway review process, and more recently the introduction of Managing Successful Programmes (MSP). Cost and timescale elements of project management seem to be under better control, but there are still well-publicised exceptions.

Cost and timescale are not the only measures of success. The question therefore arises: are projects defining success in quantifiable terms (other than time and cost) that mean something to the business sponsors, and if so, is progress towards those success criteria being measured? From our own company experience we know that there is wide variation in how organisations monitor project progress towards success, and the research behind this paper sought to provide evidence of the variation that currently exists across a wide spectrum of projects.

In April 2004, a report1 by a working group from the Royal Academy of Engineering and British Computer Society commented that the success rate of IT projects remains too low (even taking into account recent improvements). The report identified several key success factors for complex IT projects with "measuring progress" amongst them:

  • client-supplier relationship;
  • evolutionary project management;
  • requirements management;
  • change management;
  • measuring progress;
  • contractual arrangements;
  • risk management;
  • technical issues.

The National Audit Office in conjunction with the Office of Government Commerce has publicised common causes of project failure2, the first in their list being:

  • Lack of clear link between the project and the organisation's key strategic priorities, including agreed measures of success.

These publications show that the measurement of progress towards success is already a matter of some public concern.

Purpose of this Document

This report summarises the results of a survey of {29} UK Government departments and agencies on how performance measures are used to report the progress of IT projects.

Document Structure

The following sections cover the approach taken and an analysis of the responses, and end with conclusions and recommendations based on the results.

Approach

Information Gathering

Information was gathered for the survey by using the Freedom of Information (FOI) Act (2000) process to approach Government departments and agencies with a number of questions relating to how they used performance measures in reporting project progress to the "business". The {29} organisations selected were relatively large, each potentially making a significant investment in IT projects.

Question Rationale

The questions posed in the requests for information were designed to be answered from information that might be readily available within the organisation. Under the FOI Act, public organisations have no obligation to supply information that they do not hold or that would require additional effort to provide. The information requested and the rationale behind it are explained below. The exact wording of the requests was:

  • A brief description (short paragraph) of a single current major IT project in your organisation, for example one that has executive board level interest.
  • Start date of the project.
  • Planned end date of the project.
  • Definitions of the performance measures used by the business to assess the success of the deployment of the system into the organisation.
  • The date when reporting against these measures started/is due to start.
  • To whom the reports are/will be provided.

The project description provided the business context of a project. Organisations were free to select one project from their IT programme. The nature of the project was not of major interest to the survey, but rather how its success was being measured. This freedom of project choice allowed organisations to present their best projects in terms of performance reporting; however, we have no means of knowing whether they did.

We draw a distinction between quantification and measurement. Quantification is the process of constructing phrases which provide a contextual meaning for numbers, i.e. it is a process of definition. Measurement is the practical process of acquiring actual numbers. The main thrust of the survey was around quantification, though a few responses did provide evidence of measurement by noting numeric values.

"But you can't measure everything. What about the qualitative or intangible elements of this project?" This is an exclamation we hear often. Indeed, you would not want to measure everything, and too much measurement represents no value to people. With the required skills, it is possible to quantify any degree of improvement sought, even with so-called qualitative elements or intangibles. Take as an example a project to change the way people perceive an issue. Finding out what percentage of a sample population perceives the issue in a certain way is always feasible. We assert that it is always possible to quantify the attributes of any desired change, but the subtlety is in finding and using those measures that represent value to people. All the dimensions of success should always be quantified for purposes of clarity and communication.
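As a concrete illustration of this kind of quantification, here is a minimal sketch of our own, broadly in the style of the Planguage approach in the Gilb reference listed above. It is not taken from any survey response; every name and number in it is invented for the example.

# Minimal sketch of a quantified success measure; all names and numbers
# here are invented for illustration, not drawn from the survey responses.
from dataclasses import dataclass

@dataclass
class QuantifiedMeasure:
    name: str        # the dimension of success being captured
    scale: str       # the units in which a reading is expressed
    meter: str       # how a reading would actually be taken
    baseline: float  # current level, measured before the change
    target: float    # level the project commits to reach

ease_of_access = QuantifiedMeasure(
    name="Ease of access to information",
    scale="% of surveyed staff who agree the system gives easy access to information",
    meter="Quarterly staff survey of a random sample of 200 users",
    baseline=35.0,   # hypothetical pre-project survey result
    target=70.0,     # hypothetical level agreed with the business sponsors
)

Written this way, even a perception goal passes the quantification test described later in the analysis: a number placed against the scale carries a clear meaning.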

Government publications on IT project management3 recommend an incremental approach to delivering system capability. The start date of reporting performance measures in relation to the start and end dates of the projects should provide an indication of the degree of incremental delivery; i.e. a project that does not start reporting until the project end date is unlikely to be delivering incremental capability.

The definitions of the performance measures provided the critical information for this research. We allowed organisations to make their own interpretation of "performance measure", but qualified it with a focus on the business assessment of successful deployment. We hoped to get more than just time and budget measures.

Performance Measure Categorisation

The questions we wished to answer from the research were:

To what degree are the performance measures in use within projects capable of being used quantitatively? More practically, this might be viewed as how many measures can you put a number against?

To what degree do the performance measures mean anything in terms of the impact on the business the project is supporting?

How "joined up" are the measures with the business goals of the project, i.e. can the measures be used to monitor progress towards what the organisation is trying to achieve through the project?

To what extent is the incremental delivery of projects evident from the use of performance measures?

In relation to these research questions, we used a system of categorising performance measures received in responses in the following terms:

Quantifiable. The Oxford English Dictionary defines the verb "measure" as "To ascertain or determine the spatial magnitude or quantity of something". For measures to be measurable, it should therefore be possible to get a numerical value that means something against the definition of the measure. For example, "efficiency" might be a measure, but without further definition an efficiency number carries little meaning, whereas "number of system users" is a quantifiable measure (though we would still need to understand what qualifies as a "system user"). Where measures were defined in sufficient detail to convey meaning with a number, they were deemed quantifiable.

External Measures. Measures quoted that have business impact outside the system being implemented were classified as external measures. The mental test for an external measure is to ask whether the measure would still be relevant if the system were taken away. For example, customer response time would probably be an external measure: the customer would still exist if the system were removed and would still have some expectation of a response time from the organisation.

Internal Measures. Measures that are not external (as explained above) but are relevant to the system being implemented are classed as internal measures. The rate of help desk calls from system users and system stability would be examples of internal measures.

Goal Addressing Measures. 95% of responses described one or more of the business goals of the project in the project description, e.g. financial savings, reduced bureaucracy, streamlined processes, or better customer support. We sought to identify whether these goals had any associated performance measures defined. These measures would implicitly be External Measures, but not all External Measures would be Goal Addressing Measures, i.e. business impact may be wider than the defined project goals.

Assessment Metrics

Our own definitions of the metrics used for this assessment were:

Quantifiable Measure Percentage: The percentage of measures provided for a project that were capable of being measured (i.e. of yielding a meaningful number). A 0% figure on this scale would indicate that none of the measures presented was defined in a way that would let a group of people consistently interpret a numeric value against the measure.

External Measure Percentage: The percentage of measures provided by an organisation that were both Quantifiable and External Measures (as defined above). A 100% figure on this scale would indicate that all measures reported were able to have a numeric value meaningfully put against them and that they all reflected the impact of the project in terms of business outcomes.

Goal Coverage: The percentage of project goals expressed that had at least one quantifiable measure related to them, e.g. if a project had a goal of "cutting bureaucracy" and stated a performance measure of "time taken in processing cases", the relationship was close enough for the goal to be counted as covered by the performance measure. Strictly speaking, in this example we would need to understand what the organisation's stakeholders meant by "cutting bureaucracy" (i.e. whether it meant cutting the number of forms), but the analysis gave the benefit of the doubt.

Performance Reporting Start Point: The percentage of the way through the project before performance reporting starts, e.g. if reporting starts on day one of the project the figure would be 0%, whereas if it starts at the end of the project the figure would be 100%. The earlier performance reporting starts in the project, the more chance management has to assess the results and take any necessary corrective action.
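To make these four assessment metrics concrete, here is a minimal sketch in Python of how they might be computed for a single survey response. It is ours, not the paper's; all function names, field names, dates and figures below are invented for illustration.

# Hedged sketch: one way the assessment metrics described above could be
# computed for a single response. All data here is illustrative only.
from datetime import date

def quantifiable_pct(measures):
    """Percentage of the reported measures judged quantifiable."""
    if not measures:
        return 0.0
    return 100.0 * sum(1 for m in measures if m["quantifiable"]) / len(measures)

def external_pct(measures):
    """Percentage of reported measures that are both quantifiable and external."""
    if not measures:
        return 0.0
    hits = sum(1 for m in measures if m["quantifiable"] and m["external"])
    return 100.0 * hits / len(measures)

def goal_coverage_pct(goals, measures):
    """Percentage of stated project goals with at least one related
    quantifiable measure (the goal/measure linkage is an analyst judgement)."""
    if not goals:
        return 0.0
    covered = sum(
        1 for g in goals
        if any(m["quantifiable"] and g in m["linked_goals"] for m in measures)
    )
    return 100.0 * covered / len(goals)

def reporting_start_pct(project_start, project_end, reporting_start):
    """How far through the project performance reporting begins:
    0% = reporting from day one, 100% = reporting only at the project end;
    values over 100% mean reporting starts after the project has finished
    (as with the 120% case noted later in the analysis)."""
    duration = (project_end - project_start).days
    return 100.0 * (reporting_start - project_start).days / duration

# Worked example with invented goals, measures and dates:
goals = ["cutting bureaucracy", "better customer support"]
measures = [
    {"quantifiable": True,  "external": True,  "linked_goals": {"cutting bureaucracy"}},
    {"quantifiable": False, "external": False, "linked_goals": set()},
]
print(quantifiable_pct(measures))          # 50.0
print(external_pct(measures))              # 50.0
print(goal_coverage_pct(goals, measures))  # 50.0
print(reporting_start_pct(date(2005, 1, 1), date(2006, 1, 1), date(2005, 7, 2)))  # approx. 50

Each figure is only as meaningful as the analyst's categorisation of the individual measures, which is why the categorisation rules above were stated first.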

Validity of Responses

This research could be challenged on the basis that the organisations' enquiry-response processes may have been ineffective at tracking down the information requested. The response content did, however, indicate that most organisations had gone to some effort in providing the detail required. Indeed, many organisations sought additional indirect clarification by e-mail or through visits to relevant areas of our company website.

We conducted the survey based on the supposition that if performance measures were being rigorously applied to projects then the details of these measures would be readily available, either through published sources (e.g. intranets, project reporting) or through a simple enquiry to a programme or project management office. If performance measures were not public internally, then there is cause for concern that they are either not being formally applied, or that their value as a channel for communicating project progress, success, or problems to stakeholders is being diminished through lack of availability. If, therefore, the information was not provided due to its lack of availability, then these results still provide a valid interpretation of the effective use of performance measures in organisations.

Results Presentation

We have avoided focussing on numeric averages in presenting the results, as averages obscure an important fact of life: things vary. The degree of variation is often more informative than the absolute value, so our assessment results are presented graphically to show the numerical spread. A second advantage of graphical presentation is that it highlights extremes. This is important in the context of this report because there is no "right" answer to the questions posed in the request for information. The assessment measures allow us to comment on the extremes in the analysis section.

Analysis

Survey Responses

The survey statistics are shown in the table below. More detail of the response performance in the context of the Freedom of Information Act (2000) is provided in an annex.

Table: Survey Response

Performance Measure Categories

The number of measures reported by each organisation varied, but it was the nature of the measures, rather than their number, that provided the subject of analysis. For reference, the number of measures reported by each organisation is shown in the figure below.

Figure: Distribution of Number of Measures Reported by Organisations

Note that we do not assume that these figures represent all the measures in use within projects. Our request for information emphasised that we were only interested in those that were used by the business for assessing successful deployment.

Quantitative Measures

We assessed each performance measure in the responses received by a quantification test, i.e. place a number in front of it and ask "does it make sense?" For example, one response included a performance measure "proportion of information held in shared areas". The quantification test would result in "42 proportion of information held in shared areas"; with a slight adjustment to "42% of information held in shared areas", this passes the quantification test. However, another response, "improved information sharing", does not pass the quantification test, as a number in front of it conveys no meaning. Even qualitative measures such as perceptions can be defined in quantifiable terms, e.g. "percentage of staff who believe that the system provides easy access to information".

All of the responses were analysed using this test. We assessed the measures as quantitative wherever minor amendments allowed us to, so the results are optimistic rather than pessimistic. Examples of some of the quantitative and non-quantitative measures provided in responses are shown in the table below.

Table: Example Quantitative Measures

The percentage of measures found to be quantitative was calculated by dividing the number of quantitative measures by the total number of measures in the response. For example, in one case two measures were reported and both were found to be quantitative, so the project scored 100% on this scale.

Figure: Distribution of Percentage of Measures Defined Quantitatively

The figure shows that {5} of the responses had expressed all their performance measures in quantifiable terms; however, {7} presented no quantifiable performance measures. {8} organisations had only a proportion of their measures quantifiable. {35%} of organisations appeared to have no quantifiable means of measuring success.

The responses within the 80%-100% range on this scale showed a systematic approach to measurement; however, those outside this range lacked consistency, which may indicate no formal approach to the issue.

Internal and External Measures

Each quantitative measure was analysed to discover whether it was measuring an external impact of the project on the business, or an internal feature of the system. The test for which category a measure fell into was to ask whether the measure would still be of concern to the business if the system were removed. Examples of responses classed as internal and external measures are shown in the table below.

Table: Internal and External Measures

Figure: Distribution of Percentage of Measures Found to be External Measures

{45%} of projects reported no quantitative measures of impact outside the project/system, i.e. no measure of business impact. For this set of projects there appears to be no commonly understood and agreed view of what quantitative business change will result from the project investment.

Goal Coverage

We analysed the performance measures to establish the degree to which the project goals provided in project descriptions were covered by performance measures. In most of the responses the number of performance measures was greater than the number of project goals (as we would expect), but if a project goal was important enough to be mentioned in the project description then hopefully there would have been at least one performance measure related to it. We did not seek a direct relationship but allowed some flexibility; for example, a performance measure of the number of "customer transactions" was close enough to the goal of "better customer support" to be counted as a goal-linked measure.

Figure: Distribution of Goal Coverage by Performance Measures

In {55%} of responses we could find no linkage between the performance measures and the goals of the project. This should be of particular concern to their business sponsors as this indicates a lack of quantifiable metrics to assess the degree of progress towards the goals of the project. This situation was emphasised in some responses by remarks that the metrics would be addressed at the end of the project when benefits were to be assessed. Without clear definition of the goals in metric terms, there can be little assurance that everyone has made the same interpretation of what success might be and made coordinated plans for it.

Project Length

Recommendation 12 of the Successful IT: Modernising Government in Action paper4 stated:

"Departments and agencies must adopt a modular and/or incremental approach to projects, unless there are very strong reasons for not doing so. The approach to be taken must be clearly documented before large projects are initiated and must explicitly consider the capabilities of the organisation and its supplier(s) and the size of each proposed increment."

The reasoning behind this recommendation was that it is easier to deliver positive results through a series of small steps rather than large ones, and that the negative effect of any step going wrong is minimised. Producing evidence to validate the outcomes of each step should involve the use of some quantitative performance measures. Without quantitative definitions, measurements would lack the consistency required for trends to be visible over the period of incremental deliveries.

The use of quantitative performance measures provides some indication of whether incremental approaches are being effectively deployed on large projects in terms of demonstrating value delivered and avoiding large scale negative impacts.

Figure: Project Length

{70%} of projects have a planned duration of more than 2 years. The figure shows that the {35%} of projects having no quantifiable metrics are spread across the range of project durations. The approach of incremental delivery may have been interpreted by some as delivering products by increments, with success only materialising when all the products have been delivered. The intent of incremental delivery is to achieve small steps of success through delivery of a number of products, an approach that should involve the early deployment of success measures for progress monitoring.

Performance Reporting Start Points in the Project Timescale

Only {50%, 10 from 20} of the responses provided a date in answer to the request for "The date when reporting against these measures started/is due to start". For two of those that did give a date, the responses indicated that these were not dates for reporting performance measures of success. Only {40% (8 from 20)} of responses provided valid answers to the question posed.

Figure: Start Time of Performance Reporting

On this chart, 0% represents projects that started reporting on success measures from the beginning of the project. 100% represents projects that started (or plan to start) reporting at the end of the project. The single project at 120% did not plan to report on success measures until the post project evaluation phase 6 months after the end of the project.

The quality of the answers to the question posed and the wide variation in performance reporting start points indicate that there is no widely recognised best practice in the deployment of performance reporting regimes.

General Observations

Problems with Performance Measure Definitions

In responding to the request for performance measure definitions, {30%} of organisations mixed descriptions of what the system is (e.g. a single system with a consistent look and feel) and what the system is designed to achieve (e.g. reduction in the learning time for new users). The remaining majority of responses demonstrated an understanding of performance measures being used to assess achievements, even though not all demonstrated an understanding of quantification.

Baselining

Baselining is the process of measuring the current levels of performance before the project implements an improvement. Baselining should be a key element of any change programme: without a known current state, target setting and planning are largely hypothetical, and the degree of improvement achieved later is impossible to quantify with any degree of assurance.
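As an illustration of the point (our own worked example, not taken from the responses): once a baseline and a target exist on the same scale, the degree of improvement delivered at any point reduces to simple arithmetic.

# Hedged sketch: with a measured baseline and an agreed target, progress
# towards the planned improvement can be quantified at any point.
def improvement_achieved_pct(baseline, target, current):
    """Percentage of the planned baseline-to-target change delivered so far."""
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target must differ from baseline")
    return 100.0 * (current - baseline) / planned_change

# Hypothetical numbers: case-processing time being cut from a 10-day baseline
# towards a 4-day target, currently measured at 7 days.
print(improvement_achieved_pct(baseline=10, target=4, current=7))  # 50.0

Without the baseline measurement, neither the 50% figure nor any later claim of improvement could be substantiated.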

Evidence from the responses suggested that only a few organisations had undertaken baselining. The use of the future tense in many of the responses (e.g. "...will include measures of...") indicated that scales of measurement had yet to be defined and that measurements of the current state had probably not been made. As no direct information on baselining was sought in the survey, no formal results are available, but the use of the future tense where baselining or measurement was mentioned in responses indicated that at least {65%} of projects had not conducted any measurements to date.

Quantitative Understanding

We assessed that about {20%} of the organisations providing responses demonstrated a sufficient understanding of quantitative performance measurement to enable them to deploy measurement processes practically and obtain numerical data. This is less than the {25%} who demonstrated a quantitative approach.

Summary

Conclusions

About a quarter of organisations responding to this survey demonstrated a quantitative approach to measuring progress towards success; however:

{35%} of organisations provided no quantifiable means of measuring success. This set of projects would appear to be relying on a successful outcome being determined at some future date, rather than on an explicit definition of the business change that is anticipated/expected. Without quantifiable measures, some doubts arise over how accountability for the investment is being handled.

{45%} of projects were assessed to have no quantitative measures that demonstrated impact outside the project/system, i.e. no quantitative measures of business impact. These projects are likely to have difficulty in providing evidence of success that is valued by business stakeholders.

{55%} of organisations reported no performance measures that linked to the goals of the project, i.e. no quantitative indication of progress towards business goals. Without this linkage one has to question whether the right things are being measured and reported.

The approach of incremental delivery of positive business results seems not to have been uniformly adopted. The length of some projects without any quantitative performance measures gives cause for concern in terms of corporately managing these projects towards successful outcomes.

It is worth considering that this analysis approach could be applied to any programme of work expecting to deliver change, not just IT projects.

Recommendations

The evidence from this survey suggests that there is considerable room for improvement in the way that IT projects specify and use measures of success. If the UK Government wishes to improve performance in this area, the following actions are recommended:

  • Provide training in quantitative performance measurement and analysis techniques.
  • Review with OGC whether the Gateway review process is assessing the deployment of performance measurement within projects.
  • Monitor and review current success measures for all major projects within each organisation as a means to drive local improvements.
  • Promote and develop performance assessment best practice guidance for individual organisations/departments that is appropriate to their individual needs.

Organisations Surveyed

Assessing the performance of the government departments and agencies in meeting the requirements of the FOI Act was not the purpose of this research. Their performance varied, however, and the data is recorded here for general interest. Most enquiries were submitted electronically via email or web forms, but two had to be sent by post as an electronic contact point was not readily visible on the organisations' web sites. The organisations from which information was requested are shown in the table below.

Table: Organisations Surveyed

Under the FOI Act (2000), public authorities must comply with a request promptly, and should provide the information within 20 working days (around a month). If they need more time, they must write to the requester and state when they will be able to answer the request, and why they need more time. The variation in the time from submitting the RFI to receiving a response is shown in the figure below.

Figure: FOI Response Times


I am approaching my first year on the Wiki and this page puzzles me for several reasons. I read the Recent Changes page to see what has happened of late, and that is how I follow threads. This page seems to have appeared there from out of nowhere. A search only reveals one link to it, and that is the Interesting Current Threads Page.

Secondly it seems very non-AYE-ish. For example there was no lead in to describe it nor was the Executive Summary easy to get to. Perhaps it could have an AYE type summary at the top or bottom? I would have to read it all to write it myself and I got lost trying to find the answer to my last puzzlement.

Can a project be a success if it is abandoned quickly? We might think of successful projects as ones that work, but maybe ending one after a week of analysis might be just as successful (as well as miraculous).

KurtSimmons 2006.08.18


It was written for a different purpose, but I have tried to ameliorate the problem you are explicit about by copying part of the Executive Summary up to the top. Is there a way to make a link to a place further down (or up) the page? DickKarpinski 20 Aug 2006
Dick, even if MeasuringProgress is Creative Commons, it's more appropriate to post a link to it, and then add something by way of starting a discussion (e.g., what you find interesting about the article, open questions, etc.).

--DaveSmith 2006.08.18


If I had had a link for it I would have done that in a minute instead of taking more than an hour to edit it enough to be somewhat readable from the PDF I had gotten from the main author. I believe it is not yet complete and is thus available only here. And I hope to get some comments to incorporate in my review from AYE folks.

Meanwhile I believe that the tables didn't make it from the pdf to the page here. Is there a way to load a pdf so that readers can see what I see using the Adobe Reader?

DickKarpinski 20 Aug 2006


Would they let you drop it on a server somewhere and link to it?

KurtSimmons 2006.08.21


Dick, How about posting the document on your Wiki rather than here?

Re: Success

I couldn't get myself to read the whole document so I scanned it.

1,753 words occur before the first use of the word, "customer" (counting from the start of the second occurrence of the Executive Summary). That's a problem for me.

Can a project succeed without stringent measurement programs? Yes, it happens regularly. Why? Because the project satisfies the people whose opinions count the most -- the customer(s). In my not so humble opinion, the focus for success is the customer rather than measurement.

When the customer sees the value and fully participates in the design of a measurement program, the measurements can help the project team steer the project to success.

When the customer doesn't see value and half-heartedly participates in the design of the measurement program, the measurements are as likely to steer the project team away from success as steer them to success. The measurements are overhead that get in the way and have to be dragged around with the project.

Am I advocating no measurement? Absolutely not. But my goodness, measurements are servants to the customer and project team rather than the other way around.

SteveSmith 2006.08.21


Steve, I read your comment above and thought about it quite a while with respect to my current biggest project. Some customers - the people who are to use the system being built - hate the system. The project is a failure for them. Some other customers - the people who will use the information that the system records - are dying to have the system be put into operation.

So, is the project a failure or success? I tend to go with the customers who will use the result of the system instead of the customers who will use the system. But, I am biased.

DwaynePhillips 22 August 2006

Hey Dwayne. I suspect that two things are going on here. One is that you have two communities with different needs and functions in the business. You said as much. The second is that these communities are looking at the system with local optimization goggles on. If the organization as a whole has a sufficiently clear strategy, and if everyone understands how they need others to contribute for the organization to work, well, you can get around the local optimization problem.

I wonder. Maybe this is an opportunity. Get folks together to talk about how the system helps and impedes them, and have them together look for a global optimum. When I work in an organization I do all kinds of stuff that isn't intrinsically thrilling, yet I'm glad to do it in aid of the larger goal. That gladness comes from some combination of understanding how it helps, believing the larger goal is worthy, and confidence that my contributions won't be wasted.

Having this conversation might raise exactly those issues, of understanding, goals, and confidence. Those are worth addressing if they exist.

- JimBullock 2006.08.22 (". . . and we will all go down together.")


Hi Steve,

Thanks for your comments. I put the paper here in part to get comments. I appreciate that you took the time to respond. But either I don't understand many of your comments or else I disagree. I hope that I can reveal my differing opinions without being offensively disagreeable. I encourage anyone who has another view to contribute theirs as well.

I wonder why you think I should load the paper elsewhere rather than here. That might make it easier to ignore, I suppose, but I doubt that there is a space or cost constraint that would make it significantly polite to avoid putting it here.

That's an interesting observation about the appearance of the word "customer". I'll mention it in my review, which I expect to copy here in case anybody cares. I guess the delayed reference to the customer suggests that the focus of the paper could be made more pertinent to the purpose as expressed.

The focus on measurement rather than on the customer per se is because the paper is advising the project managers on how to accomplish delivering value to the customer rather than attempting to convince the managers that the customer matters. That latter is assumed to be accepted already.

We agree that it helps when the customer sees the value. When she doesn't, we have a very major problem which outweighs any concern with project management techniques. It seems to me quite unwise to proceed to any extent without agreement on the value issues. If the customer and the project team do not agree on what values are sought, it seems unlikely that the project will succeed except by accident. Indeed that is the heart of my understanding of the topic.

If you know how to get projects to succeed without measurement to keep the project team going in the right directions, I want to know how you do it. This paper was written because the author observes that far too many projects fail, and the focus on measurement is intended to improve our odds of success. It is not to attend to measurement to the exclusion of concern for the customer, but rather to enhance and embrace heightened awareness of the customer's needs. Otherwise why bother with the measurement or indeed with the project itself?

Is this note reasonably clear?

DickKarpinski 22 Aug 2006


Hi Dwayne,

Thanks for your comments.

Do you believe that if the users hate the system, they will still use it well enough to give benefit to the customers who use the results of the system? I would have serious doubts. Indeed, I would try to explore the problem in such detail that a triple win plan could be devised. Otherwise I would incline to withdraw from the field, claiming that success was unlikely, and put my efforts someplace else.

DickKarpinski 22 Aug 2006


Dwayne: ... So, is the project a failure or success? ...

If the customer(s) whose opinion organizationally counts the most considers the project a success, it's a success.

SteveSmith 2006.08.22


Dick: I wonder why you think I should load the paper elsewhere rather than here. That might make it easier to ignore, I suppose, but I doubt that there is a space or cost constraint that would make it significantly polite to avoid putting it here.

I would prefer the document be stored somewhere else so it's easier to read. The document suffers from some formatting problems that you wouldn't see in the PDF version. With a document like the above, for me, I like to first read about why you would like me to read the document and what questions you would like me to answer. Regardless, I can live with you putting the document on the Wiki.

Dick: If you know how to get projects to succeed without measurement to keep the project team going in the right directions, I want to know how you do it.

Start with customers who have a moderate problem and moderate expectations. Hire a development team who have the skills necessary to solve the problem and who will create a strong bond between themselves and the customer: They like each other; they trust each other; they communicate well; they understand each other; they share each other's concerns. Trust will eliminate needless measurement and time expenditures.

Am I advocating this approach? I would if I thought it would produce the required results and it was a good match for the personalities. I think an approach that tunes itself to the people who are involved in the work rather than just on solving the problem will tend to be more successful.

Does this approach easily scale up? No. Will it work on complex problems whose solutions are well beyond the skills of the development team? No.

Dick: Is this note reasonably clear?

Yes.

SteveSmith 2006.08.22


And since I keyed off this I have to say I really like Steve's phrasing. It is much easier for me to come into a new wiki page and see: "I have this paper I am reviewing. It is about X, Y, and Z of measuring project progress. I am looking for opinions on the following questions: A, B, and C. You can find the paper here (with all its charts intact) if you are interested."

It takes less than a minute for me to see if I want to dive in.

KurtSimmons 2006.8.22


Updated: Tuesday, August 22, 2006