
TypesofCodeReview

I know of several types of code review (these work for other documents, but I'm interested in code review right now):

  • walkthroughs
  • pair programming
  • inspections
  • buddy review (write a little code, hand it to your buddy who hands you his/her code). A subset of peer review.
  • peer review
  • expert offering opinions

The last one is the most troubling to me, "expert offering opinions." In the case I'm thinking of, a person with three to six years of professional development experience (aka programming for money) thinks he is providing a valuable service. He is mentoring the less seasoned developers, and at least one developer likes the feedback. But less than 10% of the department is using his expertise, because he offends the other people.

Have you seen ways where an outside expert (but still an employee) can offer feedback successfully over the long term? I haven't, but I still feel like a "youngster" sometimes :-)

BTW, in case you're wondering, I prefer pair-programming, buddy-review, and peer review as review mechanisms, in that order. I prefer them because it's people on the project interacting from most casually to most formally in that order. I prefer casual interaction with appropriate language for feedback. Do you have a preference? Why? -- JohannaRothman 2006.03.12


I haven't done professional programming in quite a while. But I have experience with reviews of my writing, which doesn't seem to me to be any different from reviews of programming.

I prefer to have a buddy review followed by a peer review. Why? I like to hear feedback first from someone whose opinion I thoroughly respect and who has history with me. This kind of buddy feedback is very safe for me. And the feedback helps me know whether I have something worth continued effort.

The next step is a peer review by 3-5 people. Why? I need feedback from a variety of readers to help me fine tune. Why not more people? Too much feedback can overwhelm me and end up paralyzing me. Despite knowing I can't please all my readers, I want to.

I haven't done pair writing but the idea appeals to me.

Something about the idea of an inspection turns me off. It sounds like I'm in the military. Hmm... Call it a preview rather than an inspection and my feelings are different.

SteveSmith 2006.03.12


I've been trying to recall a time in the last ten years when more than one person (other than the code's author) arrived at a scheduled peer review meeting having actually carefully read the material being reviewed.

In my world, only the immediate stuff seems to get attention. Pair programming review works, because it's immediate. (And it works better than the alternatives, because the feedback loop is tight enough that corrective action can be immediate.)

Having a process that requires someone to review code before it can be checked in works, because there's timely pressure on both the author and the reviewer.

But the more formal code reviews/inspections don't ever seem to work, either because management typically clings to the fanciful belief that the work of carefully preparing for a review meeting somehow happens for free, or because there's just too much urgent stuff on everyone's plate.

Agile methods that sequence the work so that a pair of developers is working on exactly one thing (with unit tests first) until that thing is done relieve the urgency of having too many things to do, and provide immediate peer review. I have yet to find an approach that's as effective.

--DaveSmith 2006.03.12


I had an interesting code review last week. The project lead I report to reviewed my code and sent me the results in an email. I thought that an email was not personal enough for something so sensitive (and it shows his preference for little communication throughout the project). I was also amazed that he thought a brief list of bullet points would be enough to get me to change the code. I had already demonstrated my thinking on the subject and he found it lacking; it will take more than a few words in an email to change my way of thinking about the code and get something radically different. I was sort of stunned, but then I am used to it. I asked him to show me what he was talking about in the code. It pained him to read the email to me, but I think this was more productive.

---KenEstes 2006.03.13


Ken, when I write my articles and books, I frequently receive comments in email. That works, because I don't often receive bullets. But especially a project lead (because of the hierarchy problem) needs to be sensitive to how people will receive the information.

Did he want something radically different? -- JohannaRothman 2006.03.14


Over the years, for all these kinds of reviews and many others, I've developed what I call the priority-first review. No matter how many people are reviewing, face-to-face or off line, code or writing or any other product, deliver your observations most-important-first. That way, you make the best use of the time you have, even if you're interrupted in the middle. You don't bore participants, and the severity of the earliest comments tells you how good the product is likely to be and/or how well the participants prepared. Trivial comments kill reviews, but with priority-first, you can chop off the review the first time you get one, with nothing lost. - JerryWeinberg 2006.08.15
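Jerry's priority-first rule can be sketched as a few lines of code (a hypothetical illustration; the severity scale and the comment structure are my assumptions, not anything Jerry specified): sort observations most-important-first, deliver them in that order, and cut the review at the first trivial comment.

```python
# Hypothetical sketch of a "priority-first" review ordering.
# The severity levels and dict structure are illustrative assumptions.
SEVERITY = {"blocker": 0, "major": 1, "minor": 2, "trivial": 3}

def priority_first(observations):
    """Yield review observations most-important-first, stopping at
    the first trivial one -- nothing of value is lost by quitting."""
    ordered = sorted(observations, key=lambda o: SEVERITY[o["severity"]])
    delivered = []
    for obs in ordered:
        if obs["severity"] == "trivial":
            break  # chop off the review here
        delivered.append(obs)
    return delivered

comments = [
    {"severity": "trivial", "text": "typo in comment"},
    {"severity": "blocker", "text": "null deref on error path"},
    {"severity": "minor", "text": "inconsistent naming"},
]
print([c["text"] for c in priority_first(comments)])
# → ['null deref on error path', 'inconsistent naming']
```

Even if the review is interrupted partway through the delivered list, the most important observations have already been heard.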

I don't know what you are talking about. If I don't know, I tend to suspect that few participants know. I shall reveal here the depth and possibly the breadth of my ignorance. If the following points occur above, then I failed to notice and remember them.
  1. It is required that there be consistent and strong management commitment to reviewing the documents being produced. People will do that on which they are actually measured and rewarded. (In truth, the necessity is that the effect reinforces the intended behavior; the recipient need not like it. Psych 101, B. F. Skinner.)

  2. Priority first makes sense, but it also makes sense to have that criterion applied in the choice of what to review and when. In particular, the first levels of design documents are the most critical since flaws there have the greatest impact on the remainder of the project. Consider the cone of influence.

  3. No mention yet of the conduct of the review process being specified, and yet there is not consistency of effectiveness without consistency of process. Books have been written on this: see "Inspection" by TomGilb.

  4. As a TomGilb adherent, I suggest: that the content be constrained to, say, twelve pages or less; that the two or three participants spend one to two hours separately and similar time together; that they all seek to find three kinds of flaws: wrong function, lost function, and gained function; that they use predefined guidelines and checklists which are particular to the kind of material under review; and finally that a last twenty minutes to an hour be spent considering how the guidelines and checklists should be revised for future reviews.

  5. In order to keep everyone (big boss on down) in line about doing the reviews as needed, every review result should be publicly presented without penalizing anyone. (As W. Edwards Deming pointed out half a century ago, continuous improvement requires driving out fear, not trying to use it as a management weapon.) What was the cost in staff hours and clock time? What was the benefit in deeper understandings and future rework requirements averted? What was the cost/benefit ratio?

  6. Note that while I complain that the earlier contributions to this topic were too vague to permit effective comparison of approaches and effectiveness, I have no basis to claim that my contribution is sufficient either. All I can claim is that it is more specific. That, of course, has the effect of permitting it to be more effectively criticised as insufficient, wrongheaded, and even ill-worded.
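The constraints in item 4 can be captured as a small data sketch (the field names and the helper function are my invention, not Gilb's notation; the numbers are the ones given above):

```python
# Sketch of the Gilb-style inspection constraints from item 4.
# Structure and names are illustrative assumptions, not Gilb's notation.
INSPECTION_RULES = {
    "max_pages": 12,                 # content constrained to ~12 pages or less
    "participants": (2, 3),          # two or three reviewers
    "solo_prep_hours": (1, 2),       # separate preparation time
    "meeting_hours": (1, 2),         # similar time together
    "defect_kinds": ("wrong function", "lost function", "gained function"),
    "retro_minutes": (20, 60),       # revise guidelines/checklists afterward
}

def in_scope(defect_kind, pages):
    """Check that a logged defect uses one of the three flaw kinds
    and that the reviewed material fits the page limit."""
    return (defect_kind in INSPECTION_RULES["defect_kinds"]
            and pages <= INSPECTION_RULES["max_pages"])

print(in_scope("lost function", 10))  # → True
```

Writing the rules down as data makes it easy to see (and argue about) each knob separately, which is in the spirit of item 4's closing step of revising the checklists for future reviews.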

DickKarpinski Nitpicker extraordinaire 16 Aug 2006

ps This topic is being treated in the fashion of an old style forum. When we make full use of the wikiness, we will construct a page which drops all the signatures and is built to enhance the ability of the reader to extract useful understanding efficiently. This page with signatures should then be the discussion page, behind and subsidiary to the "main article", to choose a term from Wikipedia.


WikiTags: RewriteAsAnArticle

The shop I was in that did the best job of consistently reviewing code (by pair programming, plus extra eyes for critical code) had no management commitment for reviews. Management was neutral on what practices developers followed, as long as we continued to produce good work that customers would pay for. The development team practiced eXtreme Programming (XP), and was self-policing on process.

We also had very little that you might recognize as design documents, other than diagrams on whiteboards that lived for a week or so until the code was complete. (We did take pictures of whiteboards, but seldom referred back to them.) We also had little written down in the way of requirements. Most of that was verbal with 4x6 cards as placeholders. But we did have very tight feedback loops; conversations with our product manager would turn into code (with unit and functional tests), and be in QA's hands within, at most, two weeks, with the product manager having reviewed the work along the way.

And the company has happy customers. (I've moved on, but stay in touch with the team.)

To be fair, we were a small shop, and we built network management software, not missile guidance systems or air traffic control code. We didn't have to spend time writing documentation to satisfy some process. Almost all of the artifacts that Gilb would expect were absent.

--DaveSmith 2006.09.11


Dave, Dick, I would be interested to hear more about your experiences regarding Gilb-style (or not) reviews/processes. Dave, do you think it is possible to do projects with world-wide distributed teams without thoroughly reviewed documents? Dick, do you think small-scale projects justify these reviews? --MarkusSchnell 2006-09-12
Markus, I have serious doubts about world-wide distributed teams even with thoroughly reviewed documents, but then I'm coming at it from a small company perspective. I would much rather see a distributed team work out the kinks of doing short iterations, so that misunderstandings could be made and fixed quickly in code, rather than delaying that by spending a lot of time up front working towards the appearance of agreement on paper. I've seen this done with minimal paperwork, lightly reviewed. --DaveSmith 2006-09-12
Let me tell you a little bit about my/our situation: Our customer is a US company. They have a product out in the market, for which they want a second source - that would be us (semiconductor company). Project management is in Munich (Germany), but the actual development takes place in Berlin and Italy with support from several locations world-wide. Production will be in China. Of course, this is not just software development, but also chip design, packaging, board layout, production test, system testing, ...

My experience so far: The most challenging part is communication. People with different cultural backgrounds talking on noisy telephone lines in a language similar to English. You can never be sure you are talking about the same thing. Face-to-face meetings help a lot here, but so would clear and correct specifications. Some of the specifications are read by more than 50 different people. To communicate this in face-to-face meetings is impossible, especially when you keep in mind that these specifications have to be referred to over and over again during development.

It's not enough to have a common understanding within our development team: at the end of the day our solution has to behave exactly the same as the first solution.

What do you think? -- MarkusSchnell, 2006-09-14

Markus wrote: ...our solution has to behave exactly the same as the first solution.

I'm confused about the reference to the first solution. Do you mean the US company's solution? If so, the resulting product must be identical, but the production process need only be compatible with, rather than identical to, the US organization's. Is my thought process moving in the right direction? SteveSmith 2006.09.15

Yes, Steve, it is similar to what you describe: Actually, the US company outsourced the first solution to several other companies as well (chip, board, and firmware were all done by different companies). Unfortunately, (detailed) specs came along only very late in the project, when we had already started working on the second solution. Meanwhile, the first solution is available and bought by a lot of people. Now, to the consumer, there should be no perceivable difference in behaviour, and none to the pre-existing production and test environment either. It is very difficult to make sure both products behave the same by 'just' testing. Every time we detect a new behavioural defect, I wonder why we didn't think of a test case for this. But then, I sometimes wonder how one could have thought of a test case for this. -- MarkusSchnell, 2006-09-19


I do work similar to what Markus describes everyday. Two things that I feel are critical to realize:

  • The work is not easy

  • The work is not cheap (inexpensive)

If people go into such a situation thinking that it will be easy and cheap, they will be disappointed.

DwaynePhillips 15 September 2006

Dwayne nails it: The work is not easy (that's a given), and it's not cheap. That extends to time also: The original schedule had the project finished at a point in time so long past that you could almost give history lessons on it. -- MarkusSchnell, 2006-09-19

Markus, I'm wondering what would be the simplest thing you (or your project) could do to push a single scenario through the project pipeline, resulting in a working artifact with some minimal functionality. That is, a way to demonstrate an exact duplication of some small, bounded subset of the original artifact's functionality.
That's what we did. And it was an easy job. The fun part started after that. -- MarkusSchnell, 2006-09-19

Such an exercise is a good test of whether the teams really understand one another, and might flush out a number of issues that would otherwise appear much later in the game.

I'm guessing, based on old (and probably outdated) experience on a project that involved fabbing chips, that fabbing is the activity that has the longest lead time and poses the greatest risk. Is it possible to lay out and test part or all of your board(s) using the original source's chips?

No. The main difference between 'their' and our solution is that we provide a single-chip-design. And you are right about the physical processing being the longest process. Generating the masks for processing is one of the most expensive steps. -- MarkusSchnell, 2006-09-19

--DaveSmith 2006-09-15


Markus, I would say your situation requires a strong culture of reviewing in almost every possible way you can think of. I've consulted with many international multi-cultural projects, and none of them could have succeeded without a large and ongoing investment in clear communication. Too often, the cost of this communication has not been figured into the original estimates, with resultant management unhappiness when you try to budget time and resources for reviews.
Jerry, did you consult on this project also? You definitely analyzed the situation correctly. :) -- MarkusSchnell 2006-09-19

But, I don't agree with Dick K. that strong management support is essential. He's right about needing reinforcement, but the best place to get that is from intrinsic reinforcement. Professionals LOVE well-run reviews, especially because of the strong learning they engender, so they will keep doing them once they've had this experience.

The kind of management support you do need is managers not interfering in the process, as so often happens. - JerryWeinberg 2006.09.17

I would like to understand your last sentence. In what way would managers interfere? Shouldn't they enable communication? Can you give an example? -- MarkusSchnell 2006-09-19

Markus, managers who do not understand about reviewing can interfere in any number of ways. For example:
  • Assigning people to other "emergency" tasks just before a review that they've been scheduled to participate in.
  • Insisting on sitting in on reviews as a way of evaluating personnel
  • Refusing phone budget for review calls (in international situations such as yours)
  • Failing to give credence to the considered judgment of a review, without explaining why in a way that satisfies the reviewers (who have invested a lot in it).
  • Simply cancelling reviews in order to "make schedules"
  • Insisting on placing unqualified people on review teams, rather than letting the reviews be truly "peer" reviews.
  • Not allowing certain people to review for a variety of reasons, such as "they couldn't possibly know enough to review," or "they're too busy," or "they're too negative," or "they're too positive" -- rather than letting the tech staff decide who will be helpful and who will not.

Does that help you understand? There are many more instances, but these are the first that came to mind (because they've all happened at clients of mine in the past couple of months). - JerryWeinberg 2006.09.19

In the end, the manager-specific ways to interfere net out to using their authority to starve reviews of resources. Managers can also engage in all the review-killing behaviors available to everybody else: not doing the work, not showing up, disrupting the process, threatening people, lying . . . the list is endless. Managers also have manager-specific ways to interfere by omission. When someone is disrupting reviews a manager can fail to declare that behavior unacceptable. -- JimBullock 2006.09.20 (Since when am I the pithy one?)

Jerry's examples help me to understand. But now I'm wondering if I'm doing some of them myself. For example: Am I a manager or a peer for the suppliers? I'm the one who wants to introduce the reviews. I also refrained from enforcing them because "it's too late, we have to tape-out tomorrow". (Tape-out is when you start computing the masks for chip production. No more code changes are possible.) Looking back, I see we would have had ample time to do the reviews. -- MarkusSchnell 2006-09-21
"Suppliers" to who? I believe I have the right and the obligation to review any work product handed to me to use. If I accept it, I'm asserting that I believe it is sufficient for me to use it to do my job in turn. How to figure out "sufficient" is my problem, and theirs. If either of us don't know, that's the issue to solve immediately.

So, supply what, to whom? If they're handing stuff to you, I think you have the right to review the stuff you are getting, and the obligation to assert that you ("You" singular, one guy trying to use this stuff to do his job.) need this review to do your job. It's part of your job, in fact.

-- JimBullock 2006.09.21 (Not that I have an opinion . . . )


Hindsight is amazing, isn't it? That's why I start projects knowing what done means, and selecting a lifecycle that will get us to done the fastest.

For me there are several pieces:
  • what kinds of reviews fit the culture?
  • are those adequate for the job the team has to do?
  • what do I need to do to facilitate those reviews?

It doesn't matter what kinds of projects and teams I consult to. Inevitably, people say, "Looking back, we would have time to do x," where x is whatever practice or practices people didn't think they had time to do.

But knowing that, what can you do *now*, going forward? -- JohannaRothman 2006.09.21


Markus, listen to Jim and Johanna (and Jerry--anyone whose name starts with J, apparently). Here's what Jerry has to say:
  1. It's great that you're thinking about what you might be inadvertently doing to impede reviews. I agree that most managers who do these things are, as always, trying their best to make things better. It's just that they don't understand review dynamics as well as they should.

  2. Reviews, properly done, are what I call "negative time" processes. That is, they don't take time, they give you time. They don't delay product development, they speed up product development. That's because errors are the number one cause of broken plans and project slowdown--and reviews catch errors early. Think what happens when you fabricate a chip and then find it has an error!

  3. It sounds like you may be doing reviews too late in the process, if it's just before tape-out time. Review early; review often. Keep track of how reviews are paying off, and use that to justify placing them properly in your product cycle. It may be hard to argue the value of catching an error early, but you can use experience with errors you did not catch early to make estimates. Ask yourself for these historical caught-late errors: "What kind of review, done when, could have caught this the earliest, or at least before we finally did catch it? What would the value of that review have been?" One review that catches one error that saves a product recall or extensive field support will pay the cost of all your reviewing for all time. - JerryWeinberg 2006.09.21

You have a fine opportunity to do Jerry's item # 3 right now, although not a code review, or more broadly a work product review.

Do a "Post Implementation Review." (Note, I rarely call them "post mortems." Way too dramatic if nobody died. If somebody really died, or might, well be dramatic. That is big stuff.) A post implementation review is part of the last cycle and the next - both. It's wrapping up the lessons you already paid for this time, so why not? It's the first review in the next round of doing things, really asking: "What do we know about what we're good at, and want to get better at?" so you can plan accordingly.

There are lots of guidelines & techniques for reviews. I insist on only three things (This "insist" is Jim the boss-guy coming out vs. Jim the facilitator or coach. If we're going to invest in a review - and it is an investment - we're going to set it up so we get the benefit. I Have Spoken.):

  • Get things to "sustain" and things to "improve", both. Start with at least some stuff we want to sustain. Sets the right tone.
  • Everything is fair game. Consensus is not required to speak up. Everybody gets to have an opinion. That's not a guarantee that we'll all agree, or that we'll do it, only that you'll be heard.
  • What we list to improve isn't what we commit to change immediately. Changing things is hard and far from free. So that's a separate question, and one for which we have to agree. (Note, again I didn't say "consensus.") We will have that conversation as part of the review.

Notice that I don't say exactly who will do each of these things or how. I just know that they're part of reviews being effective at all. So as boss, I require that they happen as part of anything called a review of this kind. You want something really poisonous, allow pseudo-reviews to happen. I'm not dumb enough to let that happen.

As for reviews as "negative time" processes, at a recent client I put in place a "critical review" for any change going into the code base - a software product in maintenance and next version development. Prior to this, changes were failing - not fixing the identified error or introducing side effects worse than the error - an average of 3.5 times, *none* had gone in clean the first time in the four months I had been there, and the max bounces in those four months was six tries. (As you might guess I wasn't running engineering those four months.<g>)

That was January. If memory serves it was April before a change got bounced in test. We hadn't seen one bounced in the field yet, in September when my interim gig wrapped up. Velocity of useful changes went way up as well, a factor of four at least directly in engineering. Do the math to get an idea what that single process change - introducing one review into the life-cycle - was worth.
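Jim's invitation to "do the math" can be taken literally with a back-of-the-envelope sketch (the figures of 3.5 attempts per change before the critical review and roughly one after are his; the hours-per-attempt cost is an illustrative assumption of mine):

```python
# Back-of-the-envelope rework arithmetic on Jim's numbers.
# attempts_before/after are from his account; hours_per_attempt
# is a hypothetical cost per code-test-release cycle.
attempts_before = 3.5   # average attempts per change, pre-review
attempts_after = 1.0    # changes going in clean after the review
hours_per_attempt = 8   # assumed effort per attempt

cost_before = attempts_before * hours_per_attempt   # 28.0 hours
cost_after = attempts_after * hours_per_attempt     # 8.0 hours
saved_per_change = cost_before - cost_after         # 20.0 hours

print(f"Saved per change: {saved_per_change} hours "
      f"({cost_before / cost_after:.1f}x reduction in rework)")
```

Whatever the true cost per attempt, the ratio stays 3.5x less rework per change, before even counting the factor-of-four velocity gain Jim mentions: a concrete instance of Jerry's "negative time" claim.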

However, the first step to instituting the critical reviews was an informal "post implementation review", pulling change history data from the previous four months. "So, how about we keep fixing things, and knowing what changed. Sustain that. Let's get a few more of these right before we launch them out of engineering. BTW, what else do we need to have a completed change, beyond code that works as advertised?"

If you're going to introduce a change this dramatic, however, realize that the system around your process is used to things the way they were. You'll get reactions even to an improvement, and we did. That's another whole digression.

-- JimBullock 2006.09.21 (Guess I'm done being pithy.)


Thanks for all the input. I have to ponder it a bit. I'll report back later. -- MarkusSchnell 2006-09-25
Thank you, Markus. A sincere question is a gift beyond price. Your question let me understand a bit more about reviews. When you report back, I'll learn even more. -- JimBullock 2006.09.25 (I seem to be the one netting out ahead here.)

Jim, that's the point of review, right? The people who review get as much out of the experience as those people whose work is reviewed. That's why mentor-reviewing makes me nuts. (What I called "expert offering opinions" above.) -- JohannaRothman 2006.09.26
Precisely so - er - no - er - in part. What about a review of all experts? I'm hearing several claims at once, JR, and I think I agree, disagree, and maybe, depending.
  • The folks "doing" a review can get more out of it than the person presenting. That's one huge potential payoff that's much more difficult with "expert reviews." - So, true.
  • If someone, indeed knows a subject deeply while someone else does not, applying that knowledge is a good thing, and sometimes the point. I like working with an expert in stuff I don't know yet (with some conditions.) precisely because I learn so much, so fast. You also get a much better work product. - So, false.
  • I think the essence of a good review is people offering their best stuff, based on their particular expertise and insights. A good review is *also* people accepting what people offer based in part on their particular expertise. Doing so doesn't necessarily eliminate any other kind of payoff. - So, maybe.
You weren't expecting a simple answer, were you? -- JimBullock

Here's a short update: Now that we have really taped out, I convinced our program/project manager to do a Lessons Learned review for the project. I hope we can schedule this before AYE.

Johanna said something which resonated very strongly with me (field stone?):

Hindsight is amazing, isn't it? That's why I start projects knowing what done means, and selecting a lifecycle that will get us to done the fastest.

I think we never knew what done meant. I know that I said we should be the same as the other solution. But you cannot do this down to the smallest detail. There is a point where you have to stop. So actually we should be similar to the other solution. But how similar? In what respect? That was never made clear. We basically gave our customer something to look at, asked "Done?" and he said "No. Try further."

Your input is very valuable. Thanks for this kind of review.

To be continued ...

-- MarkusSchnell 2006-10-09


So I've been thinking about these last couple of comments, Jim's and Markus'. Jim, when the expert is an expert in process, the solution-space domain expertise might not matter, which is the expert I referred to above. So, when you say "false" above, you are right--the expert can help if it's the process being reviewed. That's why my comment helped Markus.

But IMNHO, code review works best when people understand the product. And unless you've been working on the project and on the product, your solution-space domain expertise is suspect.

Thank you for making me articulate this. Markus, when and if you can, let us know what happened. -- JohannaRothman 2006.10.11


Updated: Wednesday, October 11, 2006