
SubMain Blog

Browse by Tags

All Tags » CodeAnalysis

  • Spring Cleaning Your Code Review

    Many of us have a natural tendency to let little things pile up.  This gives rise to the notion of the so-called spring cleaning.  The weather turns warm and going outside becomes reasonable, so we take the opportunity to do some kind of deep cleaning.

    Of course, this may not apply to you.  Perhaps you keep your house impeccable at all times, or maybe you simply have a cleaning service.  But I'll bet that, in some part of your life or another, you put little things off until they become bigger things.  Your cruft may not involve dusty shelves and pockets of house clutter, but it probably exists somewhere.

    Maybe it exists in your professional life in some capacity.  Perhaps you have a string of half written blog posts, or your inbox has more than a thousand messages.  And, if you examine things honestly, you almost certainly have some item that has been skulking around your to-do list for months.  Somewhere, we all have items that could use some tidying, cognitive or physical.

    With that in mind, I'd like to talk about your code review process.  Have you been executing it like clockwork for months or years?  Perhaps it has become too much like clockwork.  Turn a critical eye to it, and you might realize elements of it have become stale or superfluous.  So let's take a look at how you can apply a spring cleaning to your code review process.

    Beware The Cargo Cult

    During World War II, the Allies set up a temporary air base on an island in the Pacific Ocean.  The people living on the island observed the ground controllers waving at inbound planes to help them land.  Supplies then followed.  Not understanding the purpose of this ritual or the mechanics of airplanes, the locals learned that making these motions brought planes with supplies.  So after the Allies left, they mimicked the behavior, hoping for additional resources.  This execution of ritual without understanding earned the designation "cargo cult."

    In the world of software development, cargo cult programming involves adding code without understanding what it does.  You added it once, good things happened, so now you always add it.  You can think of this as a special case of programming by coincidence.  And it's something you should avoid.

    But cargo cult mentality can crop up in a code review as well.  Do you find your team calling out 'issues' during the review, but, if pressed, nobody could articulate why those are issues?  If so, you have a cargo cult practice, and you should cull it.

    Going Over the Same Stuff Repetitively

    Let's say that your team performs code review on a regular basis.  Does this involve an ongoing, constant uplift?  In other words, do you find learning spreads among the team, and you collectively sharpen your game and constantly improve?  Or do you find that the team calls out the same old issues again and again?

    If every code review involves noticing a method parameter dereference and saying, "you'll get an exception if someone passes in null," then you have stagnation.  Think of this as a team smell.  Why do people keep making the same mistake over and over again?  Why haven't you somehow operationalized a remedy?  And, couldn't someone have automated this?

    Keep an eye out for this sort of thing.  If you notice it, pause and do some root cause analysis.  Don't just fix the issue itself -- fix it so the issue stops happening.
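
    To make that concrete, here's a minimal C# sketch of the null-dereference example above; the names and types are purely illustrative.  The guard clause is exactly the kind of omission a static analysis tool can flag (or even fix) automatically, so reviewers never have to mention it again.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical types, for illustration only.
        public class LineItem
        {
            public decimal Price { get; set; }
            public int Quantity { get; set; }
        }

        public class Order
        {
            public List<LineItem> LineItems { get; set; } = new List<LineItem>();
        }

        public class InvoiceProcessor
        {
            public decimal CalculateTotal(Order order)
            {
                // The guard clause reviewers keep asking for.  Without it, a null
                // argument surfaces later as a NullReferenceException, far from
                // the actual mistake.
                if (order == null)
                {
                    throw new ArgumentNullException(nameof(order));
                }

                return order.LineItems.Sum(item => item.Price * item.Quantity);
            }
        }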

    Inconsistency in Reviews

    Another common source of woe arises from inconsistency in the code review process.  Not only does this result in potential issues within the code, but it also threatens to demoralize members of the team.  Imagine attending a review and having someone admonish you to add logging calls to all of your methods.  But then, during the next review, someone gives you a hard time about logging too much.  Enough of that nonsense and team members start updating their resumes rather than their methods.

    And inconsistency can mean more than just different review styles from different people (or the same person on different days, varying by mood).  You might find that your team's behavior and suggestions during review have become out of sync with a formal document like the team's coding standard.  Whatever the source, inconsistency creates drag for your team.

    Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.  Round up the team members and make sure they all have the same philosophies at code review time.  And then, make sure that unified philosophy lines up with anything documented.

    Cut Out the Nitpicking

    I've yet to see an organization where interpersonal code review didn't become at least a little political.  That makes sense, of course.  In essence, you're talking about an activity where people get together and offer (hopefully) constructive professional criticism.

    Because of the politics, interpersonal code review can degenerate and lead to infighting in numerous ways.  Chief among these, I've found, is excessive nitpicking.  If team members perceive the activity as a never-ending string of officious criticism, they start to hate coming to work.

    On top of that, people can only internalize so many lessons in a sitting.  After a while, they start to tune out or get tired.  So make the takeaways from the code review count.  Even if they haven't gotten every little thing just so, pick your battles and focus on big things.  And I file this under spring cleaning since it generally requires a concerted mental adjustment and since it will clear some of the cruft out of your review.

    Automate, Automate, Automate

    I will conclude by offering what I consider the most important item for any code review spring cleaning.  If the other suggestions involved metaphorical shelf dusting and shower scrubbing, think of this one as completely cleaning out an entire room that you had loaded with junk.

    So much of the time teams spend in code review seems to trend toward picking at nits.  But even when it involves more substantive considerations, many of these considerations could be automatically detected.  The team wastes precious time peering at the code and playing static analyzer.  Stop this!

    Spruce up your review process by automating as much of it as humanly possible.  You should constantly ask yourself if the issue you're discussing could be automatically detected (and fixed).  If you think it could, then do it.  And, as part of your spring cleaning, knock out as many of these as possible.

    Save human-centric code review for focus on design considerations, architectural discussions, and big picture issues.  Don't bog yourself down in cruft.  You'll all feel a lot cleaner and happier for it, just as you would after any spring cleaning.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can help you automate code reviews and improve the quality of your code.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • Automated Code Review to Help with the Unknowns of Offshore Work

    I like variety.  In pursuit of this preference, I spend some time management consulting with enterprise clients and some time volunteering for "office hours" at a startup incubator.  Generally, this amounts to serving as "rent-a-CTO" for startup founders in half hour blocks.  This provides me with the spice of life, I guess.

    As disparate as these advice forums might seem, they often share a common theme.  Both in the impressive enterprise buildings and the startup incubator conference rooms, people ask me about offshoring application development.  To go overseas or not to go overseas?  That, quite frequently, is the question (posed to me).

    I find this pretty difficult to answer absent additional information.  In any context, people asking this bake two core assumptions into their question.  What they really want to say would sound more like this: "Will I suffer for the choice to sacrifice quality to save money?"

    They assume first that cheaper offshore work means lower quality.  And then they assume that you can trade quality for cost as if adjusting the volume dial in your car.  If only life worked this simply.

    What You Know When You Offshore

    Before going further, let's back up a bit.  I want to talk about what you actually know when you make the decision to pay overseas firms a lower rate to build software.  But first, let's dispel these assumptions that nobody can really justify.

    Understand something unequivocally.  You cannot simply exchange units of "quality" for currency.  If you ask me to build you a web app, and I tell you that I'll do it for $30,000, you can't simply say, "I'll give you $15,000 to build one that's half as good."  I mean, you could say that.  But you'd be saying something absurd, and you know it.  You can reasonably adjust cost by cutting scope, but not by assuming that "half as good" translates to "half the price."

    Also, you need to understand that "cheap overseas labor" doesn't necessarily mean lower quality.  Frequently it does, but not always.  And, not even frequently enough that you can just bank on it.

    So what do you know when you contract with an inexpensive, overseas provider?  Not a lot, actually.  But you do know that your partner will work with you mainly remotely, across a great deal of distance, and with significant communication obstacles.  You will not collaborate as closely with them as you would with an employee or a local vendor.

    The (Non) Locality Conundrum

    So you have a limited budget, and you go shopping for offshore app dev.  You go in knowing that you may deal with less skilled developers.  But honestly, most people dramatically overestimate the importance of that concern.

    What tends to torpedo these projects lies more in the communication gulf and less in the skill.  You give them wireframes and vague instructions, and they come back with what they think you want.  They explain their deliveries with passable English in emails sent at 2:30 AM your time.  This collaboration proves taxing for both parties, so you both avoid it, for the most part.  You thus mutually collude to raise the stakes with each passing week.

    Disaster then strikes at the end.  In a big bang, they deliver what they think you want, and it doesn't fit your expectations.  Or it fits your expectations, but you can't build on top of it.  You may later, using some revisionist history, consider this a matter of "software quality" but that misses the point.

    Your problem really lies in the non-locality, both geographically and more philosophically.

    When Software Projects Work

    Software projects work well with a tight feedback loop.  The entire agile movement rests firmly atop this premise.  Stop shipping software once per year, and start shipping it once per week.  See what the customer/stakeholder thinks and course correct before it's too late.  This helps facilitate success far more than the vague notion of quality.

    The locality issue detracts from the willingness to collaborate.  It encourages you to work in silos and save feedback for a later date.  It invites disaster.

    To avoid this, you need to figure out a way to remove unknowns from the equation.  You need to know what your partner is doing from week to week.  And you need to know the nature of what they're building.  Have they assembled throwaway, prototype code?  Or do you have the foundation of the future?

    Getting a Glimpse

    At this point, the course for enterprises and startups diverges.  The enterprise has legions of software developers and can easily afford to fly to Eastern Europe or Southeast Asia or wherever the work gets done.  They want to leverage economies of scale to save money as a matter of policy.

    The startup or small business, on the other hand, lacks these resources.  They can't just ask their legion of developers to review the offshore work more frequently.  And they certainly can't book a few business class tickets over there to check it out for themselves.  They need to get more creative.

    In fact, some of the startup founders I counsel have a pretty bleak outlook here.  They have no one in their organization in a position to review code at all.  So they rely on an offshore partner for budget reasons, and they rely on that partner as expert adviser and service provider.  They put all of their eggs in that vendor's basket.  And they come to me asking, "have I made a good choice?"

    They need a glimpse into what these offshore folks are doing, and one that they can understand.

    Leveraging Automated Code Review

    While you can't address the nebulous, subjective concept of "quality" wholesale, you can ascertain properties of code.  And you can even do it without a great deal of technical knowledge, yourself.  You could simply take their source code and run an automated code review on it.

    You're probably thinking that this seems a bit reductionist.  Make no mistake -- it's quite reductionist.  But it also beats no feedback at all.

    You could approach this by running the review on each incremental delivery.  Ask them to explain instances where it runs afoul of the tool.  Then keep doing it to see if they improve.  Or, you could ask them to incorporate the tool into their own process and make delivering issue-free code a part of the contract.  Neither of these things guarantees a successful result.  But at least it offers you something -- anything -- to help you evaluate the work, short of in-depth knowledge and study yourself.
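
    If your offshore partner delivers .NET code, one low-tech way to formalize the "issue-free" arrangement is to have the build itself fail when analyzer warnings appear.  The sketch below uses a standard MSBuild property rather than any particular tool's own integration, and the project in question would be whatever they actually deliver.

        <!-- In the delivered project file (.csproj): analyzer warnings
             become build errors, so "clean" deliveries are verifiable
             rather than a matter of trust. -->
        <PropertyGroup>
          <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
        </PropertyGroup>

    Alternatively, you can enforce the same thing only on your side by passing a warnings-as-errors switch to the build (for example, dotnet build -warnaserror) each time you receive a delivery.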

    Recall what I said earlier about how enterprises regard quality.  It's not as much about intrinsic properties, nor is it inversely proportional to cost.  Instead, quality shows itself in the presence of a tight feedback loop and the ability to sustain adding more and more capabilities.  With limited time and knowledge, automated code review gives you a way to tighten that feedback loop and align expectations.  It ensures at least some oversight, and it aligns the work they do with what you might expect from firms that know their business.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can help you automate code reviews and ensure the quality of delivered code.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • The Case for a Team Standard

    In professional contexts, I think that the word "standard" has two distinct flavors.  So when we talk about a "team standard" or a "coding standard," the waters muddy a bit.  In this post, I'm going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we're not talking dictionary definition as much as the feelings that the word evokes.

    First, consider standard as "common."  To understand what I mean, let's talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer much better descriptions.  But it's called "standard" because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as "standard."

    In contrast, consider standard as "discerning."  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I have standards."

    Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members' reactions.  Some like the idea, and think, "it's important to have standards and take pride in our work."  Others hear, "check your creativity at the gate, because around here we write standard, default code."

    What I Mean by Standard

    Now that I've drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here -- we have standards.

    These can take somewhat fluid forms, and I don't mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

    For example, "all non-GUI code should be test driven" and "methods with more than 20 lines should require a conversation to justify them" represent the sort of standards I like my teams to have.  They say, "we believe in TDD" and "we view long methods as code smells," respectively.  In a way, they represent the coding ethos of the group.

    On the other side of the fence lie prescriptions like, "all class fields shall be prepended with underscores" and "all methods shall be camel case."  I consider such concerns cosmetic, since they govern appearance rather than design or runtime behavior.  Cosmetic concerns are not important... unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

    Having standards for your team's work product does not mean mandating total homogeneity.

    Why Have a Standard at All?

    Since I'm alluding to the potentially stifling effects of a team standard, you might reasonably ask why we should have them at all.  I can assert that I'm interested in the team being discerning, but is it really just about defining defaults?  Fair enough.  I'll make my case.

    First, consider something that I've already mentioned: maintenance.  If the team can easily read code, it can more easily maintain that code.  Logically, then, if the team all writes fairly similar code, they will all have an easier time reading, and thus maintaining that code.  A standard serves to nudge teams in this direction.

    Another important benefit of the team standard revolves around the integrity of the work product.  Many teams' standards incorporate methodology for security, error handling, logging, etc.  Thus the established standard arms the team members with ways to ensure that the software behaves properly.

    And finally, well-done standards can help less experienced team members learn their craft.  When such people join the team, they tend to look to established folks for guidance.  Sadly, those people often have the most on their plate and the least time.  The standard can thus serve as teacher by proxy, letting everyone know the team's expectations for good code.

    Forget the Conformity (by Automating)

    So far, all of my rationale follows a fairly happy path.  Adopt a team standard, and reap the rewards: maintainability, better software, learning for newbies.  But equally important is avoiding the dark side of team standards.  Often this dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.

    Please, please, please remember that a standard should not elevate conformity as a virtue.  It should represent shared values and protection of work product quality.  Therefore, in situations where conformity (uniformity) is justified, you should automate it.  Don't make your collaborative time about telling people where to put spaces and brackets -- program your IDE to do that for you.
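
    As a sketch of what that automation can look like on a .NET team, an EditorConfig file can encode the cosmetic rules so the IDE and analyzers enforce them silently.  The rule names below are arbitrary labels; the settings themselves are standard .NET code-style options.

        # .editorconfig (illustrative): cosmetic conventions enforced by tooling,
        # not argued about in review.
        [*.cs]

        # Private fields: underscore prefix, camelCase.
        dotnet_naming_rule.private_fields_underscored.symbols  = private_fields
        dotnet_naming_rule.private_fields_underscored.style    = underscore_camel
        dotnet_naming_rule.private_fields_underscored.severity = warning

        dotnet_naming_symbols.private_fields.applicable_kinds           = field
        dotnet_naming_symbols.private_fields.applicable_accessibilities = private

        dotnet_naming_style.underscore_camel.required_prefix = _
        dotnet_naming_style.underscore_camel.capitalization  = camel_case

        # Formatting belongs to the formatter, not to people.
        indent_size = 4
        csharp_new_line_before_open_brace = all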

    Make Justification Part of the Standard

    Another critical way to remove the authoritarian vibe from the team standard is one that I rarely see.  And that mystifies me a bit because you can do it so easily.  Simply make sure you justify each item contained in the standard.

    "Methods with more than 20 line of code should prompt a conversation," might find a home in your standard.  But why not make it, "methods with more than 20 lines of code should prompt a conversation because studies have demonstrated that defect rate increases more than linearly with lines of code per method?"  Wow, talk about powerful.

    This little addition takes the authoritarian air out of the standard, and it also helps defuse squabbles.  And, best of all, people might just learn something.

    If you start doing this, you might also notice that boilerplate items in a lot of team standards become harder to justify.  "Prepend your class fields with m underscore" becomes "prepend your class fields with m underscore because... wait, why do we do that again?"

    Prune and Always Improve

    When you find yourself trailing off at "because," you have a problem.  Something exists in your team standard that you can't justify.  If no one can justify it, then rip it out.  Seriously, get rid of it.  Having items that no one can justify starts to put you in conformity-for-the-sake-of-conformity territory.  And that's when your standard goes from "discerning" to "boring."

    Let this philosophy guide your standard in general.  Revisit it frequently, and audit it for valid justifications.  Sometimes justifications will age out of existence or seem lame in retrospect.  When this happens, do not hesitate to revisit, amend, or cull.  The best team standards are neither boring nor static.  The best team standards reflect the evolving, growing philosophy of the team.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Related resources

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

  • The Developer Feedback Loop

    If you write software, the term "feedback loop" might have made its way into your vocabulary.  It charted a slightly indirect route from its conception into the developer lexicon, though, so let's start with the term's origin.  In general systems terms, a feedback loop is one in which a system uses its output as one of its inputs.

    Kind of vague, huh?  I'll clarify with an example.  I'm actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I'm giving the machine a workout.  Its LED display reads 70 Fahrenheit, and it's cranking to make that happen.

    When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.

    Important for us here, though, are the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing; else, cool the room).  Climate control in buildings operates via feedback loop.
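
    If it helps to see that structure explicitly, here is a toy C# sketch of the loop, with made-up numbers.  The only point is that the output (cooling the room) feeds straight back into the next input (the temperature reading).

        using System;

        class Thermostat
        {
            const double Target = 70.0;

            static void Main()
            {
                double roomTemperature = 75.0;  // arbitrary starting point

                for (int minute = 0; minute < 10; minute++)
                {
                    // Output becomes input: the cooling decision changes the very
                    // reading the loop consumes on its next pass.
                    bool coolerOn = roomTemperature >= Target + 1;  // "if < 71, do nothing"
                    roomTemperature += coolerOn ? -1.0 : 0.2;       // cool, or drift back upward
                    Console.WriteLine($"minute {minute}: {roomTemperature:F1}F, cooler {(coolerOn ? "on" : "off")}");
                }
            }
        }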

    Appropriating the Term for Software Development

    It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely this happens because you become part of the system.  Most people find it harder to reason about things from within.

    In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

    If you have heard the term before, you've probably also heard the term "tightening the feedback loop."  Whether or not you've heard it, what people mean by this is reducing the cycle time of the aforementioned system.  People throwing that term around look to streamline the write->build->run->write again process.

    A History of Developer Feedback Loops

    At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history.  Long before my time came the punched card era.  Without belaboring the point, I'll say that this feedback loop would astound you, the modern software developer.

    Programmers would sit at key punch "kiosks", used to physically perforate forms (one mistake, and you'd start over).  They would then take these forms and have operators turn them into cards, stacks of which they would hold onto.  Next, they'd wait in line to feed these cards into the machines, which acted as a runtime interpreter.   Often, they would have to wait up to 24 hours to see the output of what they had done.

    Can you imagine?  Write a bit of code, then wait for 24 hours to see if it worked.  With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.

    When I went to college and started my programming career, these days had long passed.  But that doesn't mean my early days didn't involve a good bit of downtime.  I can recall modifying C files in projects I worked on, and then waiting up to an hour for the code to build and run, depending on what I had changed.  xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.

    Today, you don't see this as much, though certainly, you could find some legacy codebases or juggernauts that took a while to build.  Tooling, technique, modern hardware and architectural approaches all combine to minimize this problem via tighter feedback loops.

    The Worst Feedback Loop

    I have a hypothesis.  I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback.  For me, it's about 40 seconds.

    If I make some changes to something and see immediate results, then great.  Beyond immediacy, my impatience kicks in.  I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come.  But after about 40 seconds, I simply switch my attention elsewhere.

    Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.  I might pipeline my work, or carve out some other tasks with which I can be productive while waiting.  If, for instance, I can get feedback on something every 10 minutes, I'll kick it off and do some household chores, periodically checking on it.

    But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity.  I kick it off and check twitter.  40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site.  I check back, forget what I did, and then remember.  I try again and wait 40 seconds.  This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles.  I then hate myself.

    The Importance of Tightening

    Why do I offer this story about my most sub-optimal feedback period?  To demonstrate the importance of diligence in tightening the loop.  Wasting a few seconds while waiting hinders you.  But waiting enough seconds to distract you with other things slaughters your productivity.

    With software development, you can get into a state of what I've heard described as "flow."  In a state of flow, the feedback loop creates harmony in what you're doing.  You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity.  You discover a virtuous circle.

    But just the slightest dropoff in the loop pops that bubble.  And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless.  So much of your professional performance rides on keeping the loop tight.

    Tighten Your Loop Further

    Modern tooling offers so many options for you.  Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster.  GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup.  Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type.  Static code analysis tools offer you code review as you work, rather than in a code review session days later.  I could go on.

    The general idea here is that you should constantly seek ways to tune your day to day work.  Keep your eyes out for tools that speed up your feedback loop.  Read blogs and go to user groups.  Watch your coworkers for tips and tricks.  Claw, scratch, and grapple your way to shaving time off of your feedback loop.

    We've come a long way from punch cards and sword fights while code compiles.  But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques seem.  Put yourself at the forefront of that curve, and you'll distinguish yourself as a developer.

    Learn more about how CodeIt.Right can tighten the feedback loop and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    
