Dirty notes about code review; the most interesting part is in the references. Maybe it should be brought up as the first paragraph.
When a reviewer or consensus of reviewers determines that code must be changed before it is acceptable, it is a “defect.”
A single code review should be limited in scope: if the review is larger than a certain size, the reviewer wears out and stops caring. The actual size may vary, but the range of 200-300 LOC comes up frequently, and reviewing that range should take about one hour on average.
Note: Best Kept Secrets (the SmartBear book) says that below 2000 LOC is acceptable as long as the reviewer does not go faster than 1500 LOC/hour. That is not the same scale...
A reviewer should take their time: one hour for 300 LOC is the ideal, but not much longer, otherwise attention drops dramatically, and meanwhile the author is waiting for the review to be done before continuing their work. On the other hand, trivial reviews are not real reviews: if an entire review took 2 seconds, it is clear that the reviewer did nothing.
It is good practice to check for pending code reviews several times a day.
Nobody wants to be known as Jerry, the guy who never checks for NULL pointers. Code review encourages people to be more careful about their code, making them better developers immediately, even with only a 1-in-3 chance of being reviewed.
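The kind of defect a reviewer flags can be as small as a missing guard. A minimal C sketch of the "Jerry" defect and its fix (`safe_strlen` is a hypothetical helper, not something from these notes):

```c
#include <stddef.h>
#include <string.h>

/* Without the guard, strlen(NULL) is undefined behaviour --
   exactly the kind of defect a reviewer should catch. */
size_t safe_strlen(const char *s)
{
    if (s == NULL)      /* the check "Jerry" always forgets */
        return 0;
    return strlen(s);
}
```

Knowing that a colleague may read this function is often enough motivation to write the guard in the first place.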
Reviewers can also learn by asking questions about the code.
It is hard to receive criticism well.
All humans are fallible and make mistakes. The question to ask is: what caused the issue and how can we prevent it from happening again? Not: who caused it?
Developers fear that metrics will be turned against them. Did they take too long, or produce too many defects?
Hard code has more defects; it does not mean that the developer is bad. In other words, difficult code is expected to have flaws.
Each programmer has their own style. Just because you would not have written something a certain way does not mean that way is bad. When in doubt, propose to the original author to pair program your solution, then compare the two at the end and pick the one that best fits the problem at hand.
[SmartBear] says that the author should annotate their code before starting a code review: point the reviewers to the files that should be looked at first and explain why the changes were required. Thanks to that, the reviewer won't have to decipher the whole thing from the ground up. The author is also forced to think about what they have done and may spot issues before sending the code out for review, similar to rubber duck debugging.
It sounds like a good idea, but I am not sure it is really feasible in practice. Everything is due yesterday, right?
Author and reviewer should have a checklist for common issues. For example: did I add tests for the method, did I check all input values...
As opposed to the heavyweight code review used with the waterfall model, lightweight code review is the technique used in agile.
There are four ways to execute a code review:
- Over-the-shoulder: the author walks the reviewer through the code.
- Email pass-around: the author emails their changes to the reviewers.
- Pair Programming: two authors develop together.
- Tool-assisted: authors and reviewers use specialized tools.
http://smartbear.com/SmartBear/media/pdfs/11_Best_Practices_for_Peer_Code_Review.pdf
http://en.wikipedia.org/wiki/Rubber_duck_debugging
http://support.smartbear.com/support/media/resources/cc/CodeReviewSocialEffects.pdf
http://blog.codinghorror.com/egoless-programming-you-are-not-your-job/