ClimbOutOfTechDebt © 2002 Johanna Rothman

Other good terms for Technical Debt are Software Entropy or Maintenance Entropy - every change makes things more brittle unless refactoring is actively practiced. See Martin Fowler's IEEE Software article: Refactoring.pdf Getting refactoring onto the project schedule is an interesting process. BobLee 02/24/02

Some development methods include refactoring as part of implementing features incrementally. Keith and other XPers, what has your experience been with refactoring as you go? EstherDerby 02/25/02

While refactoring is a useful idea for technological change to a software product, I think Johanna's article emphasizes management's role in producing a good product. Not ALL problems can be refactored away. It is very useful for managers to understand how their decisions affect quality. Here's a retake on a sign hanging in the kitchen of my friend who is a very busy Mom: "Superman doesn't live here anymore. Do it yourself."

True. There are 2 ways of climbing out of tech debt: preventative and restorative. While I much prefer the prevention route, I've had to participate in "productizing" a field-developed program that had 9 commercial customers on 7 incompatible versions when we bought it. It was worth rescuing: we made over $35 million per year selling "ENDEVOR/MVS" before Computer Associates ate us. Our other ENDEVOR/DB product had been designed and built to last. Last time I looked on CA's web site, both are still offered. ENDEVOR/DB is in release 15.0 after 18 years on the market. I don't believe we ever kept more than 1 1/2 people busy maintaining it. ENDEVOR/MVS swallowed teams of 15 for periods of 6-9 months at a time. Tech debt can still pay. BobLee 02/27/02

Bob, thinking about restorative, I am reminded of a building renovation in the 1980s. It was satisfying at one level, frustrating at others. Always seemed to be short of money or time or manpower. As a result it was a slower process than, say, designing it right the first time. But sometimes that's what we have to work with! And, as you point out, there can be payoffs. We sold our renovated building at a handsome profit. Pain = Gain? If I understand you, you are saying the management decisions and design are preventative and "productizing" is restorative. Isn't refactoring a restorative technique, too? BeckyWinant 2/28/02

Refactoring can be either preventative or restorative, depending on how much you let the entropy/tech debt build up. Practiced proactively, it reduces maintenance effort. Figuring out unnecessarily Byzantine code impedes both testing and maintenance. When you're afraid to touch it, you've got too much tech debt mortgage on your soul. Agility depends on clarity and surety of understanding. Whatever you can't understand can't be safely altered. Kent Beck and Martin Fowler discuss refactoring as the mechanism that flattens the "exponential increase in cost the further the project proceeds". The significance of Fowler's refactoring is that there are algorithmically safe transforms that do not alter the processing results. The other significant XP contribution to the art here is having sufficient unit tests for every unit of code. This flattens the entropy curves. Jerry's QSM vols. 1 & 2 point out the metrics of size complexity. Refactoring and O-O help keep the UNIT size in the flat part of the complexity curve. The order N**2 search/lookup cost for size in lines, modules, etc. is mitigated by the very small N that aggressive refactoring maintains.

Martin Fowler discourages object methods big enough to require temporary variables! It took me a little while to see that he was battling the complexity dynamic directly. The other dynamic is clean/clear class responsibility, so that "I know it's in there somewhere" doesn't turn into a wide file search. This is where O-O starts paying off over procedural/structural logic. The [human readable] size dynamic can be held to the flat part of the order N**2 curve. I find that any piece of software will be read for understanding at least 10 times more frequently than it is written. Keeping that read-for-comprehension cheap is significant to project throughput. Moreover, the 80/20 rule kicks in, so that 80% of the searches are into the ugliest 20% of the code - where your certainty is weakest. Small is beautiful for progress. Further, 100% of the value of a software deliverable comes in its post-development [production] period. Unless the metrics have shifted, I believe that 60-80% of the product cost is also incurred during this period. Too many managers look with tunnel vision at the "development cost" [meaning pre-production] rather than total cost of ownership and return on investment. This is akin to our mania with short-term results and market valuation hysteria. BobLee 02/28/02
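A minimal sketch of the kind of transform Fowler catalogs - Replace Temp with Query - written here in Python with invented names (an illustration only, not code from anyone in this thread). Once the temporary is gone, each method is small enough to read at a glance and the intermediate value can be reused or tested on its own.

    # Before: a temporary variable forces the whole calculation into one method.
    class Order:
        def __init__(self, quantity, item_price):
            self.quantity = quantity
            self.item_price = item_price

        def price(self):
            base_price = self.quantity * self.item_price  # the temp
            if base_price > 1000:
                return base_price * 0.95
            return base_price * 0.98

    # After Replace Temp with Query: the temp becomes a tiny query method,
    # so price() reads as a sentence and base_price() can be tested alone.
    class OrderRefactored:
        def __init__(self, quantity, item_price):
            self.quantity = quantity
            self.item_price = item_price

        def base_price(self):
            return self.quantity * self.item_price

        def price(self):
            if self.base_price() > 1000:
                return self.base_price() * 0.95
            return self.base_price() * 0.98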
Too many managers look with tunnel vision at the "development cost" [meaning pre-production] rather than total cost of ownership and return on investment.

In my experience it isn't necessarily natural tunnel vision, but tunnel vision induced by measurement/reward systems and accounting practices. When short development cycles (get something out the door to generate revenue!) and low development costs are rewarded (bonus, promotion, salary increase, recognition...), most people will follow the force. Cost accounting for projects rarely looks at cost over the life of the system. And traditional wisdom is that extending the development timeline only adds to cost and defers revenue. Plus, the people responsible for delivering costly-to-support systems are rarely the ones accountable for support budgets. I rarely see companies where it's easy to tie the cost to produce a system and the cost to support a system back to the revenue generated by that system. EstherDerby 03/01/02

Esther, I've wrestled with the various faces of software development. I agree with your statement about rarely seeing direct cost comparisons. The one place I've seen people track this closely is organizations that take on contracts for a specific piece of work. For instance, the government puts out bids and people respond. They may contract for development only and hold the cost down to get the contract fee and any bonuses. This means maintenance might skyrocket! If the contract calls for maintenance, too, well, then the cost is tracked there. You don't want maintenance eating into other new contracts. One dimension that affects cost analysis is whether an organization is handed requirements or has to discover them. Another is whether software development is a cost center (most IT) or a profit center (contract work and possibly commercial sales). Nevertheless, technical debt occurs in all! -BeckyWinant 3-01-2002

Added to that is the constant problem encountered by consulting companies or independent contractors: regardless of what we say, the client almost always believes that anything "extra", not directly coding, is added only to pad our fee.
You can tell them the truth - "to do it properly will cost 1/2 a million dollars" - or you can price it dishonestly - "we can do it for half of that" - and just keep billing, hoping that you will be allowed to finish it anyway, or you can build it cheaply and let the maintenance costs bury the real development cost. Honest pricing doesn't seem to sell well, most days. I am trying not to be cynical, but I am not succeeding. If the client who was sold a cheap project which went way over budget needs something else done soon, perhaps they remember. I am not suggesting underpricing. Too often, that seems to mean we take the shortcuts and let them worry about maintaining it after we go. I can't blame the companies to whom either of these things happens for distrusting software people in general. SherryHeinze 2002/03/02

It sure seems like maintenance has dropped below the radar in all the magazines I've read lately. Perhaps some SHAPErs or AYErs would like to start digging up the story on development vs. maintenance - total cost of ownership & ROI. Plant a few articles in IEEE Software, Software Development, and STQE, then see how awareness returns. I believe the last time I recall some serious discussion of maintenance as a virtue was in the early '90s. People used to be able to quote chapter & verse on maintenance cost/benefits, but I guess we've got a whole new generation and that's old, boring "solved" stuff. BobLee 03/02/02

... maintenance as a virtue ... Bob, is this a nod to refactoring, or do you have other ideas in mind? I suspect that costs for maintenance are buried deep in some accounting pit of miscellaneous. Hard to make the financial case. - BeckyWinant 3/3/02

Think of maintenance as a benefit rather than as a cost. Maintenance [as in enhancement] means that the system is succeeding in delivering value, that new requirements or newly altered requirements have surfaced, and that it wasn't necessary to commission expen$ive new development to replace the working system. Since we seem to have trouble conceiving of maintenance other than as a cost, I rest my case! We used to perform feasibility studies which estimated return on investment before undertaking development. These always had to account for planned estimated maintenance. The feasibility study prequel to project kick-off was and is a best practice that seems out-of-scope or offstage most of the time. Every project avoided rather than abandoned is a win. BobLee 3/3/02

I'm not sure what you mean by maintenance. Here are all the things I've heard called maintenance:
- fixing defects on a previously released code base
- adding small enhancements to a previously released code base
- point releases, regardless of their contents (hey, if it's 5.1, it's maintenance)
In my opinion, maintenance is only fixing defects on a previously released code base. In that case, most of the refactoring that people do is not maintenance, but small development projects. Too many project managers (and other managers) confuse what they do on small projects with what they do on big projects. Most people have no idea what they spend on fixing defects. I've written several articles about that, one most recently at Stickyminds.com, http://www.stickyminds.com/sitewide.asp?Function=WEEKLYCOLUMN&ObjectId=3223 JohannaRothman 3/6/02

Johanna -- That matches my experience, too. One client I worked with was under pressure to reduce costs and trouble incidents with their system.
The system group had very little data other than budget reports; however, the operations area had pretty good data about system outages. When it came time to look for areas to make improvements, the system folks pointed at the operations folks. EstherDerby 030602

OK, call it enhancement instead of maintenance. As Kent Beck says in Extreme Programming, development is an unnatural, 1-time thing. Get out of development and into maintenance [enhancement] as soon as possible, preferably in the first iteration. BobLee 3/6/02

Development is a poor word, if we take the biological analogy, for there, development is the expression of previously programmed (genetic, for example) change patterns. Adaptation would be a better term for changing to respond to new environments (though adaptability can develop). Strengthening would describe work that improves performance on various attributes - such as maintainability, speed, understandability. But we seem stuck with loaded and not very accurate words in our business. Better to describe the activities you're talking about in specific terms. - JerryWeinberg March 6, 2002

The origins of the terminology in "Software Development" were brought home to me the other day reading Applied Software Architecture by Hofmeister, Nord & Soni. They describe how our craft has successively peeled architectural "views" out of the process, and the first to be discovered was the Code Architecture View: prior to FORTRAN and easy CALL facilities, programs tended to reside in a single sequential source file. Now we expect a complex of directories, of headers and sources, of object libraries, and of run-time components. We've come a long way - nothing bigger than an academic exercise usually comes from a single source file anymore. We needed to learn to manage the dynamics of scattered module management to get to today's starting point. Is it any wonder that our terms that date back to the '50s, '60s and '70s seem a little stretched-out and inadequate? BobLee 03/06/02

TechnicalDebt extends beyond the code itself. Case in point: midway through a project, your compiler vendor issues a major upgrade. Should you move to the newer compiler? You know you're going to need to at some point. A nervous management, clinging to a tight schedule, insists on deferring. TechnicalDebt is incurred. The debt might be forgiven -- this could be the final release of your product. But more likely, the debt will begin to accumulate interest. Compiler bugs you might encounter now get fixed in the new version only. Other third-party tools that you use issue updates that only work with the latest compiler. Your development environment becomes a stagnant backwater. People who are familiar with stable regions of code move on to other projects. Interest on the debt, in the form of risk, grows. Now, when you decide to pay off the debt by upgrading, any problems that crop up are dealt with by people unfamiliar (or less familiar) with parts of the code. Plus, you're having to update several other tools in parallel. What could have been a 2-3 day effort earlier now stalls the project for 2 weeks. --DaveSmith 3/9/02

I've been in this spot. At one company where I worked, the development projects didn't learn about compiler upgrades until a week before they were going to happen, so the projects hadn't planned for them. It was common for project managers to learn with only a couple of days' notice that there was a new version of CICS being installed or that all DB calls had to be rewritten with backout/restart logic.
(This was a long time ago.) These were seldom 2-3 day efforts. They added to development and testing time. The schedule seldom expanded to accommodate the extra work, nor did the function scope contract.

The reward structure can work against making the "most sensible" decision: if managers are being rewarded on meeting schedules and will lose a portion of bonus or salary increase if the project doesn't meet its tight schedule, they have little incentive to pause and do the work to stay out of debt. Accounting practices can contribute, too. In some shops the organization that initially develops a product is separate from the organization that does all subsequent releases. So project managers push the cost of upgrades onto the organization that supports the product for the rest of its product life. (Didn't this come up on another thread?) EstherDerby 031002

I did a paper a number of years ago with a colleague on an internal project where we analyzed the total lifecycle costs for a current product. We used some well-known system engineering (full) lifecycle cost models. We analyzed field defects and other known data we had collected over the years. We looked at ROI and costs for preventative maintenance and were astounded to find that about 90-95% of the total project costs were associated with maintenance fixes and repairs. If I remember correctly, had we made some minor design changes during tradeoff studies early in the product lifecycle, we could have drastically changed the profit curve for the product and significantly reduced the cost of maintenance. The product over 15 years never made money (in the black) for the company. We eventually received approval to present the paper at two national conferences, but the paper generated no interest from our internal product development engineers and managers. The VP of R&D reviewed the paper and approved it for presentation but did little to use the information internally. Part of that was attributed to the lack of familiarity with life cycle cost analysis, and the other part was attributed to the culture that "we don't have time to do things like that." Another factor was that the organization was driven towards rewarding short-term behavior, such as getting a release out the door at any cost. Many of the staff engineers had never had to perform cost/tradeoff studies or demonstrate some form of ROI, and most had never studied this process in school. JohnSuzuki 3/11/02

John - That is a fascinating story! I've done some small data gathering in client organizations to look at costs, and the reactions have been similar. EstherDerby 03/11/02
"We don't have time to look at such stuff about why we're losing money on everything we build. We have to hurry and get some (crappy thing) out the door so we can get some revenue." - JerryWeinberg 3/15/02

I've seen a product that shipped over the protests of the QA department. It could have been refactored without having to understand it (just eliminate rampant duplication and dead code), but that would still preserve all the bugs -- since there were no tests, refactoring would have been unsafe. Since there were no detailed specifications (much less design docs and so on), fixing bugs was hard to do -- evidently some people here and there knew some of the intended behavior, but relating that to the underlying medium (SNMP - which isn't Simple at all) would have been very difficult. We declared that product unmaintainable -- it wasn't even written in an appropriate language for its intended design. I would want every product plan to include a phase-out plan. Creating new products, but never discontinuing support of old products, is an increasing burden on a small programming group, to the point that new product development is swamped by bug-fixing five-year-old products. KeithRay 03/17/02

Excellent point, Keith. The accumulation of old products is like cholesterol in the arteries. Our nation is great at eating fast food and generating cholesterol, right? An idea to try out in product requirements sessions would be "Which products can we phase out if this is completed?" (but be sure you've got the right stakeholders there!). This phase-out trade-off idea needs a management that favors upward communication to get authorization and buy-in on phase-outs. BobLee 03/17/02

Keith, a great reminder for those of us who do maintenance. Besides bug fixes, enhancements and feature releases, we had better add a phase-out plan and schedule to the maintenance plan. JohnSuzuki 3/17/02

A really good note about simple code (maintained by refactoring) versus patched-up code (maintained by bug-fixing) was made on the XP mailing list: [my abridged version of his message follows] > From: "Justin Sampson" <justin_t_sampson @ yahoo.com>

Proposition 1. In a [poorly] factored program, every change takes O(program size) effort. Rationale: Since design decisions are not encapsulated, any change potentially affects every part of the program. Therefore each change requires evaluating the consequences throughout the code.

Proposition 2. In an optimally factored program, every change takes O(log(program size)) effort. Rationale: Since design decisions are well encapsulated, any change only affects one part of the program. Therefore each change only requires looking through a localized portion of the code to find the place to change. If the class dependencies look like a balanced tree, then the effort to go from the application entry point to the point of change is at most the height of the tree, which is the logarithm of the size of the tree. [...]

Proposition 3. For a given amount of functionality (number of story points completed), the amount of code in the [poorly factored] case is exponential in the number of story points completed. Rationale: Since decisions are not encapsulated, each decision multiplies the amount of code in the system because of duplication.

Proposition 4. The amount of code in the optimal case is linear in the number of story points completed. Rationale: Since decisions are well encapsulated, each decision results in adding a small amount of localized code.

Therefore, in the [poorly factored] case:
cost of change = O(program size)
program size = O(exp(story points completed))
And in the optimal case:
cost of change = O(log(program size))
program size = O(story points completed)
There, nice and scientific. :) KeithRay 3/19/2002
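Purely as illustrative arithmetic under these propositions - the program size below is invented, not measured anywhere in this thread - a short Python sketch of what the O() claims imply for one change:

    import math

    # Illustrative only: applying Propositions 1 and 2 to an invented program size.
    program_size = 100_000                          # units of code: lines, methods, classes...
    cost_poorly_factored = program_size             # O(program size) effort per change
    cost_well_factored = math.log2(program_size)    # O(log(program size)) effort per change
    print(round(cost_poorly_factored / cost_well_factored))  # roughly a 6000x difference

Even a large constant factor on the well-factored side gets swamped as the program keeps growing.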
I was thinking about this, and wanted to show an example of how well-factoring depends somewhat on programming language. Let's compare making a change to a large C++ program with the same change to an equivalent Smalltalk program. Let's say that both programs were written using a string type that only permitted eight-bit characters, and then the requirements changed and the program had to work with most known languages, including Korean, Chinese and Japanese. So we want to change the string type to use Unicode, but let's also say that we can't modify the original string class. Because C++ has type declarations for all variables, parameters, and function returns, every piece of code that referenced a string object would have to be changed. In Smalltalk, there are no type declarations, so only the pieces of code that create string objects have to be modified, since creation of an object refers to its class explicitly. The Smalltalk program can be additionally well-factored by hiding the creation of string objects in a Factory Object or Factory Method, so that a fundamental change -- 8-bit strings versus Unicode strings -- can be isolated to changing ONE METHOD. What applies to C++ also applies to Java and most of the common programming languages; but Smalltalk, Python, Ruby, and a few other 'dynamically-typed' languages offer better chances for writing the ultimate well-factored program. KeithRay 3/29/2002
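A rough sketch of the Factory Method idea Keith describes, in Python (one of the dynamically-typed languages he mentions); the class and function names here are invented for illustration. Because client code never names a string class, the 8-bit-to-Unicode decision really does live in one method:

    # Hypothetical stand-ins for an 8-bit string class and a Unicode-capable one.
    class EightBitString:
        def __init__(self, text):
            self.data = text.encode("latin-1")   # only 8-bit characters survive

    class UnicodeString:
        def __init__(self, text):
            self.data = text                     # full Unicode

    def make_string(text):
        """Factory Method: the single place that decides which string class is used."""
        return UnicodeString(text)               # was: return EightBitString(text)

    # Client code asks the factory and never mentions either class by name, so
    # switching the whole program to Unicode means editing only make_string().
    greeting = make_string("안녕하세요")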
Good approach. I have trouble believing in "Proposition 4. The amount of code in the optimal case is linear in the number of story points completed." I would believe in the trend, but the amount of code infrastructure can't be linear with user story points in my experience. Stuff varies in a bell-shaped curve. Given that, and that the 80/20 rule says 80% of stuff comes from 20% of requirements, I reserve some judgement on:

From John Suzuki: "We looked at ROI and costs for preventative maintenance and were astounded to find that about 90-95% of the total project costs were associated with maintenance fixes and repairs."

Just catching up and this jumped out. It had me recalling a business section feature in the NYT (mid 80s) about DuPont's expenses for IT. The actuals for the previous year for "maintenance" were $1.4 million. The allocation for "development" was $350,000. This, along with other articles and books I read that year, seemed to indicate that maintenance ran 50-80% of total software costs. In looking at real dollars spent, is John's story typical or atypical? Another thing I run into is companies that don't really track maintenance costs. These are folks who do work on a contract basis, or embedded software like automotive, where software maintenance is still not considered a "manufacturing cost" contributing to cost of goods sold. Any other views of software costs in various markets? - BeckyWinant 3/21/02

I think that the ratio of maintenance to total cost depends a lot [at a point in time] on how active development is vs. how much they are riding on legacy systems. [Maybe afraid to try to replace them?] BobLee 3/21/02

Yes, I'm sure that is right, Bob. There are many different types of projects. At an organizational level I wonder what the budgets for various software activities look like for, say, a five-year period. What the breakdown and ratios are. Anyone have any pointers to something like this? - BeckyWinant 3/22/02

I recently had a good refactoring experience. The code to be refactored was a large class with LOTS of duplicate logic. There were dozens of methods where the only differences were in a string and a variable name. This class also had lots of unit test cases. Now, the writer of the class should have paused to think -- "hey, these test cases are almost identical, and these methods are almost identical" -- soon after writing the second or third duplicate method/test. But that didn't happen. Using "Extract Class", "Extract Field", "Extract Method" and other refactorings, I transformed this code into four small new classes and greatly simplified the original class by using those new classes as member variables. The public interface of the original class didn't change, and the unit test cases continued to pass (or, when they failed, they helped me find my mistakes). Many of these changes add up to what I call "replace procedural code with objects". I'll try writing a paper about that someday, but it is the basis of the power of object-oriented programming. I later did a little more refactoring, "replace conditional with polymorphism", to create a subclass of one of the four new classes. The failing unit tests then let me find which variables needed to have their types changed to the new subclass. Without the tests, I would not have been able to do these refactorings as quickly or as safely. The tests also let me knowingly change the behavior at times, which means I did a little rewriting as well as a lot of refactoring. After the refactoring was over, I moved test cases from the old class's test suite into new per-class test suites, and removed duplication of test cases. In the original class's test suite, I no longer needed so many tests, so I wrote a few big tests to cover all of the behavior in less detail.
I could then add new functionality (test-first) just by creating some new tests, and new member variables of the right types, and a few new methods to get values from those new members. KeithRay 3/29/2002
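A toy Python sketch of the moves Keith names - Extract Class/Extract Method on duplicated logic, then Replace Conditional with Polymorphism - with a unit test standing guard. The classes and names are invented and far smaller than the real code he describes:

    import unittest

    # Imagine dozens of near-identical methods differing only in a label and a
    # field name. After Extract Class / Extract Method, the shared logic lives
    # in one small helper object and the original class delegates to it.
    class FieldReporter:
        def __init__(self, label, value):
            self.label = label
            self.value = value

        def report(self):
            return "%s: %s" % (self.label, self.value)

    # Replace Conditional with Polymorphism: instead of an if/else on a flag,
    # a subclass overrides the one behavior that differs.
    class HiddenFieldReporter(FieldReporter):
        def report(self):
            return "%s: <hidden>" % self.label

    class Record:
        """Original class: public interface unchanged, now built from reporters."""
        def __init__(self, name, owner):
            self._reporters = [FieldReporter("name", name),
                               HiddenFieldReporter("owner", owner)]

        def report(self):
            return "\n".join(r.report() for r in self._reporters)

    # The unit tests are what make the refactoring safe: they pass before and
    # after each step, or they fail and point straight at the mistake.
    class RecordTest(unittest.TestCase):
        def test_report_shows_name_and_hides_owner(self):
            record = Record("invoice-42", "somebody")
            self.assertEqual(record.report(), "name: invoice-42\nowner: <hidden>")

    if __name__ == "__main__":
        unittest.main()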
Updated: Friday, March 29, 2002