SubMain Blog

April 2017 - Posts

  • Spring Cleaning Your Code Review

    Many of us have a natural tendency to let little things pile up.  This gives rise to the notion of the so-called spring cleaning.  The weather turns warm and going outside becomes reasonable, so we take the opportunity to do some kind of deep cleaning.

    Of course, this may not apply to you.  Perhaps you keep your house impeccable at all times, or maybe you simply have a cleaning service.  But I'll bet that, in some part of your life or another, you put little things off until they become bigger things.  Your cruft may not involve dusty shelves and pockets of house clutter, but it probably exists somewhere.

    Maybe it exists in your professional life in some capacity.  Perhaps you have a string of half written blog posts, or your inbox has more than a thousand messages.  And, if you examine things honestly, you almost certainly have some item that has been skulking around your to-do list for months.  Somewhere, we all have items that could use some tidying, cognitive or physical.

    With that in mind, I'd like to talk about your code review process.  Have you been executing it like clockwork for months or years?  Perhaps it has become too much like clockwork.  Turn a critical eye to it, and you might realize elements of it have become stale or superfluous.  So let's take a look at how you can apply a spring cleaning to your code review process.

    Beware The Cargo Cult

    During World War II, the Allies set up a temporary air base on an island in the Pacific Ocean.  The people living on the island observed the ground controllers waving at inbound planes to help them land.  Supplies then followed.  Not understanding the purpose of this ritual or the mechanics of airplanes, the locals learned that making these motions brought planes with supplies.  So after the Allies left, they mimicked the behavior, hoping for additional resources.  This execution of ritual without understanding earned the designation "cargo cult."

    In the world of software development, cargo cult programming involves adding code without understanding what it does.  You added it once, good things happened, so now you always add it.  You can think of this as a special case of programming by coincidence.  And it's something you should avoid.

    But cargo cult mentality can crop up in a code review as well.  Do you find your team calling out 'issues' during the review, but, if pressed, nobody could articulate why those are issues?  If so, you have a cargo cult practice, and you should cull it.

    Going Over the Same Stuff Repetitively

    Let's say that your team performs code review on a regular basis.  Does this involve an ongoing, constant uplift?  In other words, do you find learning spreads among the team, and you collectively sharpen your game and constantly improve?  Or do you find that the team calls out the same old issues again and again?

    If every code review involves noticing a method parameter dereference and saying, "you'll get an exception if someone passes in null," then you have stagnation.  Think of this as a team smell.  Why do people keep making the same mistake over and over again?  Why haven't you somehow operationalized a remedy?  And, couldn't someone have automated this?
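
    For illustration, here's a minimal sketch of that recurring finding and its remedy (the class and method names are hypothetical):

    using System;

    public class Customer
    {
        public string Name { get; set; }
    }

    public class GreetingService
    {
        // Before: dereferences the parameter, so a null argument blows up
        // with a NullReferenceException at runtime.
        public string GreetUnsafe(Customer customer)
        {
            return "Hello, " + customer.Name;
        }

        // After: guard the parameter and fail fast with a clear exception.
        public string Greet(Customer customer)
        {
            if (customer == null)
                throw new ArgumentNullException(nameof(customer));

            return "Hello, " + customer.Name;
        }
    }

    A static analysis tool can flag the unguarded version automatically, which beats rehashing it at every review.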

    Keep an eye out for this sort of thing.  If you notice it, pause and do some root cause analysis.  Don't just fix the issue itself -- fix it so the issue stops happening.

    Inconsistency in Reviews

    Another common source of woe arises from inconsistency in the code review process.  Not only does this result in potential issues within the code, but it also threatens to demoralize members of the team.  Imagine attending a review and having someone admonish you to add logging calls to all of your methods.  But then, during the next review, someone gives you a hard time about logging too much.  Enough of that nonsense and team members start updating their resumes rather than their methods.

    And inconsistency can mean more than just different review styles from different people (or the same person on different days, varying by mood).  You might find that your team's behavior and suggestions during review have become out of sync with a formal document like the team's coding standard.  Whatever the source, inconsistency creates drag for your team.

    Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.  Round up the team members and make sure they all have the same philosophies at code review time.  And then, make sure that unified philosophy lines up with anything documented.

    Cut Out the Nitpicking

    I've yet to see an organization where interpersonal code review didn't become at least a little political.  That makes sense, of course.  In essence, you're talking about an activity where people get together and offer (hopefully) constructive professional criticism.

    Because of the politics, interpersonal code review can degenerate and lead to infighting in numerous ways.  Chief among these, I've found, is excessive nitpicking.  If team members perceive the activity as a never-ending string of officious criticism, they start to hate coming to work.

    On top of that, people can only internalize so many lessons in a sitting.  After a while, they start to tune out or get tired.  So make the takeaways from the code review count.  Even if they haven't gotten every little thing just so, pick your battles and focus on big things.  And I file this under spring cleaning since it generally requires a concerted mental adjustment and since it will clear some of the cruft out of your review.

    Automate, Automate, Automate

    I will conclude by offering what I consider the most important item for any code review spring cleaning.  If the other suggestions involved metaphorical shelf dusting and shower scrubbing, think of this one as completely cleaning out an entire room that you had loaded with junk.

    So much of the time teams spend in code review seems to trend toward picking at nits.  But even when it involves more substantive considerations, many of these considerations could be automatically detected.  The team wastes precious time peering at the code and playing static analyzer.  Stop this!

    Spruce up your review process by automating as much of it as humanly possible.  You should constantly ask yourself if the issue you're discussing could be automatically detected (and fixed).  If you think it could, then do it.  And, as part of your spring cleaning, knock out as many of these as possible.

    Save human-centric code review for focus on design considerations, architectural discussions, and big picture issues.  Don't bog yourself down in cruft.  You'll all feel a lot cleaner and happier for it, just as you would after any spring cleaning.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can help you automate code reviews and improve the quality of your code.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • CodeIt.Right Rules Explained, Part 5

    Today, I'll do another installment of the CodeIt.Right Rules, Explained series.  This is post number five in the series.  And, as always, I'll start off by citing my two personal rules about static analysis guidance, along with the explanation for them.

    • Never implement a suggested fix without knowing what makes it a fix.
    • Never ignore a suggested fix without understanding what makes it a fix.

    It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

    In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

    Mark ISerializable Types with "Serializable" Attribute

    If you run across this rule, you might do so while writing an exception class.  For example, the following small bit of code in a project of mine triggers it.

    public class GithubQueryingException : Exception
    {
        public GithubQueryingException(string message, Exception ex)
            : base(message, ex)
        {
        }
    }

    It seems pretty innocuous, right?  Well, let's take a look at what went wrong.

    The rule actually describes its own solution pretty well.  Slap a serializable attribute on this exception class and make the tool happy.  But who cares?  Why does it matter if you don't mark the exception as serializable?

    To understand the issue, you need awareness of a concept called "application domains" within the .NET framework.  Going into much detail about this would take us beyond the scope of the post.  But suffice it to say, "application domains provide an isolation boundary for security, reliability, and versioning, and for unloading assemblies."  Think two separate processes running and collaborating.

    If some external process will call your code, it won't access and deal with your objects the same way that your own code will.  Instead, it needs to communicate by serializing the object and passing it along as if over some remote service call.  In the case of the exception above, it lacks the attribute marking it explicitly as serializable, in spite of implementing that interface.  So bad things will happen at runtime.  And this warning exists to give you the heads up.

    If you'll only ever handle this exception within the same app domain, it won't cause you any heartburn.  But, then again, neither will adding an attribute to your class.
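
    To make that concrete, here's what the fixed version of the class above might look like, with nothing added but the attribute the rule asks for:

    using System;

    [Serializable]
    public class GithubQueryingException : Exception
    {
        public GithubQueryingException(string message, Exception ex)
            : base(message, ex)
        {
        }
    }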

    Do Not Handle Non-CLS-Compliant Exceptions

    Have you ever written code that looks something like this?

    try
    {
        DoSomething();
        return true;
    }
    catch
    {
        return false;
    }

    In essence, you want to take a stab at doing something and return true if it goes well and false if anything goes wrong.  So you write code that looks something like the above.

    If you have, you'll run afoul of the CodeIt.Right rule, "do not handle non-cls-compliant exceptions."  You might find this confusing at first blush, particularly if you code exclusively in C# or Visual Basic.  This would confuse you because you cannot throw exceptions not compliant with the common language specification (CLS) in those languages.  All exceptions you throw inherit from the Exception class and thus conform.

    However, in the case of native code written in, say, C++, you can actually throw non-CLS-compliant exceptions.  And this code will catch them because you've said "catch anything that comes my way."  This earns you a warning.

    The CodeIt.Right warning here resembles one telling you not to catch the general exception type.  You want to be intentional about what exceptions you trap, rather than casting an overly wide net.  You can fix this easily enough by specifying the actual exception you anticipate might occur.
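
    As a sketch of the fix, suppose DoSomething performs file I/O and an IOException represents the failure you actually anticipate (the exception type here is just an assumption for illustration):

    try
    {
        DoSomething();
        return true;
    }
    catch (IOException)  // requires using System.IO
    {
        return false;
    }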

    Async Methods Should Return Task or Task<T>

    As of .NET Framework 4.5, you can use the async keyword to allow invocation of an asynchronous operation.  For example, imagine that you had a desktop GUI app and you wanted to populate a form with data.  But imagine that acquiring said data involved doing an expensive and time-consuming call over a network.

    With synchronous programming, the call out to the network would block, meaning that everything else would grind to a halt to wait on the network call... including the GUI's responsiveness.  That makes for a terrible user experience.  Of course, we solved this problem long before the existence of the async keyword.  But we used laborious threading solutions to do that, whereas the async keyword makes this more intuitive.

    Roughly speaking, designating a method as "async" indicates that you can dispatch it to conduct its business while you move on to do other things.  To accomplish this, the method synchronously returns something called a Task, which acts as a placeholder and a promise of sorts.  The calling method keeps a reference to the Task and can use it to get at the result of the method, once the asynchronous operation completes.

    But that only works if you return a Task or Task<T>.  If, instead, you create a void method and label it asynchronous, you have no means to get at it later and no means to explicitly wait on it.  There's a good chance this isn't what you want to do, and CodeIt.Right lets you know that.  In the case of an event handler, you might actually want to do this, but better safe than sorry.  You can fix the violation by returning a non-parameterized Task rather than declaring the method void.
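
    Here's a minimal sketch of the violation and the fix (the class and method names are invented for illustration):

    using System.Threading.Tasks;

    public class CustomerScreen
    {
        // Violation: "async void" hands the caller nothing to await,
        // and nothing through which to observe exceptions.
        // public async void Refresh() { ... }

        // Fix: return Task so callers can await completion.
        public async Task RefreshAsync()
        {
            var names = await FetchNamesAsync();  // the expensive network call
            // ...populate the form with names...
        }

        private Task<string[]> FetchNamesAsync()
        {
            // Stand-in for a real service call.
            return Task.FromResult(new[] { "Ada", "Grace" });
        }
    }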

    Until Next Time

    This post covered some interesting language and framework features.  We looked at the effect of crossing app domain boundaries and what that does to the objects whose structure you can easily take for granted.  Then we went off the beaten path a little by looking at something unexpected that can happen at the intersection of managed and native code.  And, finally, we delved into asynchronous programming a bit.

    As we wander through some of these relatively far-reaching concerns, it's nice to see that CodeIt.Right helps us keep track.  A good analysis tool not only helps you catch mistakes, but it also helps you expand your understanding of the language and framework.

    Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • Automated Code Review to Help with the Unknowns of Offshore Work

    I like variety.  In pursuit of this preference, I spend some time management consulting with enterprise clients and some time volunteering for "office hours" at a startup incubator.  Generally, this amounts to serving as "rent-a-CTO" for startup founders in half hour blocks.  This provides me with the spice of life, I guess.

    As disparate as these advice forums might seem, they often share a common theme.  Both in the impressive enterprise buildings and the startup incubator conference rooms, people ask me about offshoring application development.  To go overseas or not to go overseas?  That, quite frequently, is the question (posed to me).

    I find this pretty difficult to answer absent additional information.  In any context, people asking this bake two core assumptions into their question.  What they really want to say would sound more like this: "Will I suffer for the choice to sacrifice quality to save money?"

    They assume first that cheaper offshore work means lower quality.  And then they assume that you can trade quality for cost as if adjusting the volume dial in your car.  If only life worked this simply.

    What You Know When You Offshore

    Before going further, let's back up a bit.  I want to talk about what you actually know when you make the decision to pay overseas firms a lower rate to build software.  But first, let's dispel these assumptions that nobody can really justify.

    Understand something unequivocally.  You cannot simply exchange units of "quality" for currency.  If you ask me to build you a web app, and I tell you that I'll do it for $30,000, you can't simply say, "I'll give you $15,000 to build one-half as good."  I mean, you could say that.  But you'd be saying something absurd, and you know it.  You can reasonably adjust cost by cutting scope, but not by assuming that "half as good" means "twice as fast."

    Also, you need to understand that "cheap overseas labor" doesn't necessarily mean lower quality.  Frequently it does, but not always.  And not so frequently that you can just bank on it.

    So what do you know when you contract with an inexpensive, overseas provider?  Not a lot, actually.  But you do know that your partner will work with you mainly remotely, across a great deal of distance, and with significant communication obstacles.  You will not collaborate as closely with them as you would with an employee or a local vendor.

    The (Non) Locality Conundrum

    So you have a limited budget, and you go shopping for offshore app dev.  You go in knowing that you may deal with less skilled developers.  But honestly, most people dramatically overestimate the importance of that concern.

    What tends to torpedo these projects lies more in the communication gulf and less in the skill.  You give them wireframes and vague instructions, and they come back with what they think you want.  They explain their deliveries with passable English in emails sent at 2:30 AM your time.  This collaboration proves taxing for both parties, so you both avoid it, for the most part.  You thus mutually collude to raise the stakes with each passing week.

    Disaster then strikes at the end.  In a big bang, they deliver what they think you want, and it doesn't fit your expectations.  Or it fits your expectations, but you can't build on top of it.  You may later, using some revisionist history, consider this a matter of "software quality" but that misses the point.

    Your problem really lies in the non-locality, both geographically and more philosophically.

    When Software Projects Work

    Software projects work well with a tight feedback loop.  The entire agile movement rests firmly atop this premise.  Stop shipping software once per year, and start shipping it once per week.  See what the customer/stakeholder thinks and course correct before it's too late.  This helps facilitate success far more than the vague notion of quality.

    The locality issue detracts from the willingness to collaborate.  It encourages you to work in silos and save feedback for a later date.  It invites disaster.

    To avoid this, you need to figure out a way to remove unknowns from the equation.  You need to know what your partner is doing from week to week.  And you need to know the nature of what they're building.  Have they assembled throwaway, prototype code?  Or do you have the foundation of the future?

    Getting a Glimpse

    At this point, the paths for enterprises and startups diverge.  The enterprise has legions of software developers and can easily afford to fly to Eastern Europe or Southeast Asia or wherever the work gets done.  They want to leverage economies of scale to save money as a matter of policy.

    The startup or small business, on the other hand, lacks these resources.  They can't just ask their legion of developers to review the offshore work more frequently.  And they certainly can't book a few business class tickets over there to check it out for themselves.  They need to get more creative.

    In fact, some of the startup founders I counsel have a pretty bleak outlook here.  They have no one in their organization in a position to review code at all.  So they rely on an offshore partner for budget reasons, and they rely on that partner as expert adviser and service provider.  They put all of their eggs in that vendor's basket.  And they come to me asking, "have I made a good choice?"

    They need a glimpse into what these offshore folks are doing, and one that they can understand.

    Leveraging Automated Code Review

    While you can't address the nebulous, subjective concept of "quality" wholesale, you can ascertain properties of code.  And you can even do it without a great deal of technical knowledge, yourself.  You could simply take their source code and run an automated code review on it.

    You're probably thinking that this seems a bit reductionist.  Make no mistake -- it's quite reductionist.  But it also beats no feedback at all.

    You could approach this by running the review on each incremental delivery.  Ask them to explain instances where it runs afoul of the tool.  Then keep doing it to see if they improve.  Or, you could ask them to incorporate the tool into their own process and make delivering issue-free code a part of the contract.  Neither of these things guarantees a successful result.  But at least it offers you something -- anything -- to help you evaluate the work, short of in-depth knowledge and study yourself.

    Recall what I said earlier about how enterprises regard quality.  It's not as much about intrinsic properties, nor is it inversely proportional to cost.  Instead, quality shows itself in the presence of a tight feedback loop and the ability to sustain adding more and more capabilities.  With limited time and knowledge, automated code review gives you a way to tighten that feedback loop and align expectations.  It ensures at least some oversight, and it aligns the work they do with what you might expect from firms that know their business.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can help you automate code reviews and ensure the quality of delivered code.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • Transitioning from Manual to Automated Code Review

    I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  "There is no substitute for peer review."

    Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

    Today I want to talk about how one could replace manual code review with automated code review only, should the need arise.

    Why Would The Need for This Arise?

    You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise in particular probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, "we've decided we don't value the activity."

    First and most egregiously, a team's manager might come along with an eye toward cost savings.  "I need you to spend less time reading code and more time writing it!"  In that case, you'll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn't micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

    But let's consider less unfortunate situations.  Perhaps you currently work in a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work for a larger team, but everyone gradually becomes so busy and fragmented in responsibility as not to have the time for much manual peer review.

    In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.

    First, Take Inventory

    Assuming no one has yet forced your hand, pause to take inventory.  What currently happens as part of your review process?  What sorts of feedback do you get?

    If your reviews happen in some kind of asynchronous format, then great.  This should prove easy enough to capture since you'll need only to go through your emails or issues list or whatever you use.  Do you have in-person reviews, but chronicle the findings?  Just as good for our purposes here.

    But if these reviews happen in more ad hoc fashion, then you have some work to do.  Start documenting the feedback and resultant action items.  After all, in order to create a suitable replacement strategy for an activity, you must first thoroughly understand that activity.

    Automate the Automate-able

    With your list in place, you can now start figuring out how to replace your expiring manual process.  First up, identify the things you can easily automate that come up during reviews.

    This will include cosmetic concerns.  Does your code comply with the team standard?  Does it comply with typical styling for your tech stack?  Have you consistently cased and named things?  If that stuff comes up during your reviews, you should probably automate it anyway and not waste time discussing it.  But, going forward, you will need to automate it.

    But you should also look for anything that you can leverage automation to catch.  Do you talk about methods getting too long or about not checking parameters for null before dereferencing?  You can also automate things like that.  How about compliance with non-cosmetic best practices?  Automate that as well with an automated code review tool.

    And spend some time researching what you can automate.  Even if no analyzer or review tool catches something out of the box, you can often customize them to catch it (or write your own thing, if needed).
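
    If you do wind up writing your own thing, a custom Roslyn analyzer offers one way in.  Here's a minimal sketch that flags overly long methods; the diagnostic ID, message, and 30-line threshold are all invented for illustration, and this is not a CodeIt.Right feature:

    using System.Collections.Immutable;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class MethodLengthAnalyzer : DiagnosticAnalyzer
    {
        private const int MaxLines = 30;  // hypothetical team limit

        private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
            "TEAM001",                     // made-up rule ID
            "Method is too long",
            "Method '{0}' exceeds {1} lines",
            "Maintainability",
            DiagnosticSeverity.Warning,
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
            => ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            // Inspect every method declaration in the compilation.
            context.RegisterSyntaxNodeAction(AnalyzeMethod, SyntaxKind.MethodDeclaration);
        }

        private static void AnalyzeMethod(SyntaxNodeAnalysisContext context)
        {
            var method = (MethodDeclarationSyntax)context.Node;
            var span = method.GetLocation().GetLineSpan();
            var lineCount = span.EndLinePosition.Line - span.StartLinePosition.Line + 1;

            if (lineCount > MaxLines)
                context.ReportDiagnostic(Diagnostic.Create(
                    Rule, method.Identifier.GetLocation(),
                    method.Identifier.Text, MaxLines));
        }
    }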

    Checks and Balances for Conceptual Items

    Now, we move on to the more difficult things.  "This method seems pretty unreadable."  "Couldn't you use the builder pattern here?"  I'm talking here about the sorts of things for which manual code review really shines and serves its purpose.  You'll have a harder time replacing this.  But that doesn't mean you can't do something.

    First, I recommend that you audit the review history you've been compiling.  See what comes up the most frequently, and make a list of those things.  And group them conceptually.  If you see a lot of "couldn't you use Builder" and "couldn't you use Factory Method," then generalize to "couldn't you use a design pattern?"

    Once you have this list, if nothing else, you can use it as a checklist for yourself each time you commit code.  But you might also see whether you can conceive of some sort of automation.  Or maybe you just resolve to revisit the codebase periodically, with a critical eye toward these sorts of questions.

    You need to see if you can replace the human insights of a peer.  Admittedly, this presents a serious challenge.  But get creative and see what you can come up with.

    Adjust Your Approach

    The final plank I'll mention involves changing the way you approach development and review in general.  For whatever reason, human review of your work has become a scarce resource.  You need to adjust accordingly.

    Picking up a good bit of automated review makes up part of this adjustment, as does creating a checklist to apply to yourself.  But you need to go further as well.  Take an approach wherein you look to become more self-sufficient for the littler things and store up your scarce access to human reviewers for the truly weighty architectural decisions.  When these come up, enlist the help of someone else in your organization or even the internet.

    On top of that, look opportunistically for ways to catch your own mistakes and improve.  Everyone has to learn from their mistakes, but with less margin for error, you need to learn from them and automate their prevention going forward.  Again, automated review helps here, but you'll need to get creative.

    Having peer review yanked out from under you undeniably presents a challenge.  Luckily, however, you have more tools than ever at your disposal to pick up the slack.  Make use of them.  When you find yourself in a situation with the peer review safety net restored, you'll be an even better programmer for it.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    
