|
Browse by Tags
All Tags » CodeReviews (RSS)
-
Many of us have a natural tendency to let little things pile up. This gives
rise to the notion of the so-called spring cleaning. The weather turns warm
and going outside becomes reasonable, so we take the opportunity to do some kind of
deep cleaning.
Of course, this may not apply to you. Perhaps you keep your house impeccable
at all times, or maybe you simply have a cleaning service. But I'll bet that,
in some part of your life or another, you put little things off until they become
bigger things. Your cruft may not involve dusty shelves and pockets of house
clutter, but it probably exists somewhere.
Maybe it exists in your professional life in some capacity. Perhaps you have
a string of half written blog posts, or your inbox has more than a thousand messages.
And, if you examine things honestly, you almost certainly have some item that has
been skulking around your to-do list for months. Somewhere, we all have items
that could use some tidying, cognitive or physical.
With that in mind, I'd like to talk about your code review process. Have you
been executing it like clockwork for months or years? Perhaps it has become too much
like clockwork. Turn a critical eye to it, and you might realize elements of
it have become stale or superfluous. So let's take a look at how you can apply
a spring cleaning to your code review process.
Beware The Cargo Cult
During World War II, the Allies set up a temporary air base on an island in the Pacific
Ocean. The people living on the island observed the ground controllers waving at inbound planes to help them land. Supplies then followed. Not understanding the purpose of this ritual or the mechanics of airplanes, the locals learned that making these motions brought planes with supplies. So after the Allies left, they mimicked the behavior, hoping for additional resources. This execution
of ritual without understanding earned the designation "cargo cult."
In the world of software development, cargo
cult programming involves adding code without understanding what it does.
You added it once, good things happened, so now you always add it. You can think
of this as a special case of programming
by coincidence. And it's something you should avoid.
But cargo cult mentality can crop up in a code review as well. Do you find your
team calling out 'issues' during the review, but, if pressed, nobody could articulate
why those are issues? If so, you have a cargo cult practice, and you should
cull it.
Going Over the Same Stuff Repetitively
Let's say that your team performs code review on a regular basis. Does this
involve an ongoing, constant uplift? In other words, do you find learning spreads
among the team, and you collectively sharpen your game and constantly improve?
Or do you find that the team calls out the same old issues again and again?
If every code review involves noticing a method parameter dereference and saying,
"you'll get an exception if someone passes in null," then you have stagnation.
Think of this as a team smell. Why do people keep making the same mistake over
and over again? Why haven't you somehow operationalized a remedy? And,
couldn't someone have automated this?
Keep an eye out for this sort of thing. If you notice it, pause and do some
root cause analysis. Don't just fix the issue itself -- fix it so the issue
stops happening.
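To make the null-check example concrete, here is a minimal C# sketch (the types and names are hypothetical) of the sort of fix a team can settle once and then hand off to an automated review tool, so nobody has to raise it in a meeting again:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical domain types, for illustration only.
    public class LineItem { public decimal Amount { get; set; } }
    public class Invoice { public List<LineItem> LineItems { get; } = new List<LineItem>(); }

    public static class InvoiceMath
    {
        // Before: dereferences the parameter blindly, so a null argument
        // surfaces later as a NullReferenceException far from the real cause.
        public static decimal TotalDueUnchecked(Invoice invoice)
        {
            return invoice.LineItems.Sum(item => item.Amount);
        }

        // After: a guard clause makes the contract explicit and fails fast.
        // This is exactly the kind of rule an automated review tool can flag
        // (or even fix) so the team stops re-litigating it in every review.
        public static decimal TotalDue(Invoice invoice)
        {
            if (invoice == null)
                throw new ArgumentNullException(nameof(invoice));

            return invoice.LineItems.Sum(item => item.Amount);
        }
    }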
Inconsistency in Reviews
Another common source of woe arises from inconsistency in the code review process.
Not only does this result in potential issues within the code, but it also threatens
to demoralize members of the team. Imagine attending a review and having someone
admonish you to add logging calls to all of your methods. But then, during the
next review, someone gives you a hard time about logging too much. Enough of
that nonsense and team members start updating their resumes rather than their methods.
And inconsistency can mean more than just different review styles from different people
(or the same person on different days, varying by mood). You might find that
your team's behavior and suggestions during review have become out of sync with a
formal document like the team's coding standard. Whatever the source, inconsistency
creates drag for your team.
Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.
Round up the team members and make sure they all have the same philosophies at code
review time. And then, make sure that unified philosophy lines up with anything
documented.
Cut Out the Nitpicking
I've yet to see an organization where interpersonal code review didn't become at least
a little political. That makes sense, of course. In essence, you're talking
about an activity where people get together and offer (hopefully) constructive professional
criticism.
Because of the politics, interpersonal code review can degenerate and lead to infighting in numerous ways. Chief among these, I've found, is excessive nitpicking. If team members perceive the activity as a never-ending string of officious criticism,
they start to hate coming to work.
On top of that, people can only internalize so many lessons in a sitting. After
a while, they start to tune out or get tired. So make the takeaways from the
code review count. Even if the author hasn't gotten every little thing just so, pick your battles and focus on the big things. And I file this under spring cleaning
since it generally requires a concerted mental adjustment and since it will clear
some of the cruft out of your review.
Automate, Automate, Automate
I will conclude by offering what I consider the most important item for any code review
spring cleaning. If the other suggestions involved metaphorical shelf dusting
and shower scrubbing, think of this one as completely cleaning out an entire room
that you had loaded with junk.
So much of the time teams spend in code review seems to trend toward picking at nits.
But even when it involves more substantive considerations, many of these considerations
could be automatically detected. The team wastes precious time peering at the
code and playing static analyzer. Stop this!
Spruce up your review process by automating as much of it as humanly possible.
You should constantly ask yourself if the issue you're discussing could be automatically
detected (and fixed). If you think it could, then do it. And, as part
of your spring cleaning, knock out as many of these as possible.
Save human-centric code review for focus on design considerations, architectural discussions,
and big picture issues. Don't bog yourself down in cruft. You'll all feel
a lot cleaner and happier for it, just as you would after any spring cleaning.
Tools at your disposal
SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.
Related resources
Learn more about how CodeIt.Right can help you automate code reviews and improve the quality of your code.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
I like variety. In pursuit of this preference, I spend some time management
consulting with enterprise clients and some time volunteering for "office hours" at
a startup incubator. Generally, this amounts to serving as "rent-a-CTO" for
startup founders in half hour blocks. This provides me with the spice of life,
I guess.
As disparate as these advice forums might seem, they often share a common theme.
Both in the impressive enterprise buildings and the startup incubator conference rooms,
people ask me about offshoring application development. To go overseas or not
to go overseas? That, quite frequently, is the question (posed to me).
I find this pretty difficult to answer absent additional information. In any
context, people asking this bake two core assumptions into their question. What they really want to ask sounds more like this: "Will I suffer for the choice to sacrifice quality to save money?"
They assume first that cheaper offshore work means lower quality. And then they
assume that you can trade quality for cost as if adjusting the volume dial in your
car. If only life worked this simply.
What You Know When You Offshore
Before going further, let's back up a bit. I want to talk about what you actually know
when you make the decision to pay overseas firms a lower rate to build software.
But first, let's dispel these assumptions that nobody can really justify.
Understand something unequivocally. You cannot simply exchange units of "quality"
for currency. If you ask me to build you a web app, and I tell you that I'll
do it for $30,000, you can't simply say, "I'll give you $15,000 to build one-half
as good." I mean, you could say that. But you'd be saying something
absurd, and you know it. You can reasonably adjust cost by cutting scope, but not by assuming that "half as good" translates neatly into "half the cost."
Also, you need to understand that "cheap overseas labor" doesn't necessarily mean
lower quality. Frequently it does, but not always. And, not even frequently
enough that you can just bank on it.
So what do you know when you contract with an inexpensive, overseas provider?
Not a lot, actually. But you do know that your partner will work with you mainly
remotely, across a great deal of distance, and with significant communication obstacles.
You will not collaborate as closely with them as you would with an employee or a local
vendor.
The (Non) Locality Conundrum
So you have a limited budget, and you go shopping for offshore app dev. You
go in knowing that you may deal with less skilled developers. But honestly,
most people dramatically overestimate the importance of that concern.
What tends to torpedo these projects lies more in the communication gulf and less
in the skill. You give them wireframes and vague instructions, and they come
back with what they think you want. They explain their deliveries with passable
English in emails sent at 2:30 AM your time. This collaboration proves taxing
for both parties, so you both avoid it, for the most part. You thus mutually
collude to raise the stakes with each passing week.
Disaster then strikes at the end. In a big bang, they deliver what they think
you want, and it doesn't fit your expectations. Or it fits your expectations,
but you can't build on top of it. You may later, using some revisionist history,
consider this a matter of "software quality" but that misses the point.
Your problem really lies in the non-locality, both geographically and more philosophically.
When Software Projects Work
Software projects work well with a tight feedback loop. The entire agile movement
rests firmly atop this premise. Stop shipping software once per year, and start
shipping it once per week. See what the customer/stakeholder thinks and course
correct before it's too late. This helps facilitate success far more than the
vague notion of quality.
The non-locality issue detracts from the willingness to collaborate. It encourages
you to work in silos and save feedback for a later date. It invites disaster.
To avoid this, you need to figure out a way to remove unknowns from the equation.
You need to know what your partner is doing from week to week. And you need
to know the nature of what they're building. Have they assembled throwaway,
prototype code? Or do you have the foundation of the future?
Getting a Glimpse
At this point, the paths of enterprises and startups diverge. The enterprise
has legions of software developers and can easily afford to fly to Eastern Europe
or Southeast Asia or wherever the work gets done. They want to leverage economies
of scale to save money as a matter of policy.
The startup or small business, on the other hand, lacks these resources. They
can't just ask their legion of developers to review the offshore work more frequently.
And they certainly can't book a few business class tickets over there to check it
out for themselves. They need to get more creative.
In fact, some of the startup founders I counsel have a pretty bleak outlook here.
They have no one in their organization in a position to review code at all.
So they rely on an offshore partner for budget reasons, and they rely on that partner
as expert adviser and service provider. They put all of their eggs in that vendor's
basket. And they come to me asking, "have I made a good choice?"
They need a glimpse into what these offshore folks are doing, and one that they can
understand.
Leveraging Automated Code Review
While you can't address the nebulous, subjective concept of "quality" wholesale, you
can ascertain properties of code. And you can even do it without a great deal
of technical knowledge, yourself. You could simply take their source code and
run an automated code review on it.
You're probably thinking that this seems a bit reductionist. Make no mistake
-- it's quite reductionist. But it also beats no feedback at all.
You could approach this by running the review on each incremental delivery.
Ask them to explain instances where it runs afoul of the tool. Then keep doing
it to see if they improve. Or, you could ask them to incorporate the tool into
their own process and make delivering issue-free code a part of the contract.
Neither of these things guarantees a successful result. But at least it offers
you something -- anything -- to help you evaluate the work, short of in-depth knowledge
and study yourself.
Recall what I said earlier about when software projects work. Quality is not as
much about intrinsic properties, nor is it inversely proportional to cost. Instead,
quality shows itself in the presence of a tight feedback loop and the ability to sustain
adding more and more capabilities. With limited time and knowledge, automated
code review gives you a way to tighten that feedback loop and align expectations.
It ensures at least some oversight, and it aligns the work they do with what you might
expect from firms that know their business.
Related resources
Learn more about how CodeIt.Right can help you automate code reviews and ensure the quality of delivered code.
|
-
I can almost sense the indignation from some of you. You read the title and
then began to seethe a little. Then you clicked the link to see what kind of sophistry
awaited you. "There is no substitute for peer review."
Relax. I agree with you. In fact, I think that any robust review process
should include a healthy amount of human and automated review. And, of course,
you also need your test
pyramid, integration and deployment strategies, and the whole nine yards.
Having a truly mature software shop takes a great deal of work and involves standing
on the shoulders of giants. So, please, give me a little latitude with the premise
of the post.
Today I want to talk about how one could replace manual code review with automated
code review only, should the need arise.
Why Would The Need for This Arise?
You might struggle to imagine why this would ever prove necessary. Those of
you with many years logged in the enterprise in particular probably find this puzzling.
But you might find manual code inspection axed from your process for any number of
reasons other than, "we've decided we don't value the activity."
First and most egregiously, a team's manager might come along with an eye toward cost
savings. "I need you to spend less time reading code and more time writing it!"
In that case, you'll need to move away from the practice, and going toward automation
beats abandoning it altogether. Of course, if that happens, I also recommend
dusting off your resume. In the first place, you have a penny-wise, pound-foolish
manager. And, secondly, management shouldn't micromanage you at this level.
Figuring out how to deliver good software should be your responsibility.
But let's consider less unfortunate situations. Perhaps you currently work in
a team of 2, and number 2 just handed in her two weeks’ notice. Even if your
organization back-fills your erstwhile teammate, you have some time before the newbie
can meaningfully review your code. Or, perhaps you work for a larger team, but
everyone gradually becomes so busy and fragmented in responsibility as not to have
the time for much manual peer review.
In my travels, this last case actually happens pretty frequently. And then you
have to choose: abandon the practice altogether, or move toward an automated version.
Pretty easy choice, if you ask me.
First, Take Inventory
Assuming no one has yet forced your hand, pause to take inventory. What currently
happens as part of your review process? What sorts of feedback do you get?
If your reviews happen in some kind of asynchronous format, then great. This
should prove easy enough to capture since you'll need only to go through your emails
or issues list or whatever you use. Do you have in-person reviews, but chronicle
the findings? Just as good for our purposes here.
But if these reviews happen in more ad hoc fashion, then you have some work
to do. Start documenting the feedback and resultant action items. After
all, in order to create a suitable replacement strategy for an activity, you must
first thoroughly understand that activity.
Automate the Automate-able
With your list in place, you can now start figuring out how to replace your expiring
manual process. First up, identify the things you can easily automate that come
up during reviews.
This will include cosmetic concerns. Does your code comply with the team standard?
Does it comply with typical styling for your tech stack? Have you consistently
cased and named things? If that stuff comes up during your reviews, you should
probably automate it anyway and not waste time discussing it. But, going forward,
you will need to automate it.
But you should also look for anything else that automation can catch.
Do you talk about methods getting too long or about not checking parameters for null
before dereferencing? You can also automate things like that. How about
compliance with non-cosmetic best practices? Automate that as well with an automated
code review tool.
And spend some time researching what you can automate. Even if no analyzer or
review tool catches something out of the box, you can often customize them to catch
it (or write your own thing, if needed).
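As an illustration of what "automate-able" looks like in practice, consider a short C# snippet (hypothetical names, and the exact rules depend on your tool and ruleset) where every comment marks something a static analyzer or automated review tool can catch without a human in the room:

    using System.IO;

    public class report_builder                  // Naming: type names should be PascalCase.
    {
        public string title;                     // Design: public mutable field instead of a property.

        public string Build(string path)
        {
            var reader = new StreamReader(path); // IDisposable created but never disposed.
            string text = reader.ReadToEnd();
            if (text.Length > 0)
                if (title != null)               // Needless nesting; could be a single condition.
                    return title + ": " + text;
            return "";
        }
    }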
Checks and Balances for Conceptual Items
Now, we move on to the more difficult things. "This method seems pretty unreadable."
"Couldn't you use the builder pattern here?" I'm talking here about the sorts
of things for which manual code review really shines and serves its purpose.
You'll have a harder time replacing this. But that doesn't mean you can't do something.
First, I recommend that you audit the review history you've been compiling.
See what comes up the most frequently, and make a list of those things. And
group them conceptually. If you see a lot of "couldn't you use Builder" and
"couldn't you use Factory Method," then generalize to "couldn't you use a design pattern?"
Once you have this list, if nothing else, you can use it as a checklist for yourself
each time you commit code. But you might also see whether you can conceive of
some sort of automation. Or maybe you just resolve to revisit the codebase periodically,
with a critical eye toward these sorts of questions.
You need to see if you can replace the human insights of a peer. Admittedly,
this presents a serious challenge. But get creative and see what you can come
up with.
Adjust Your Approach
The final plank I'll mention involves changing the way you approach development and
review in general. For whatever reason, human review of your work has become
a scarce resource. You need to adjust accordingly.
Picking up a good bit of automated review makes up part of this adjustment, as does
creating a checklist to apply to yourself. But you need to go further as well. Take an approach wherein you look to become more self-sufficient for the smaller things and save your scarce access to human reviewers for the truly
weighty architectural decisions. When these come up, enlist the help of someone
else in your organization or even the internet.
On top of that, look opportunistically for ways to catch your own mistakes and improve.
Everyone has to learn from their mistakes, but with less margin for error, you need
to learn from them and automate their prevention going forward. Again, automated
review helps here, but you'll need to get creative.
Having peer review yanked out from under you undeniably presents a challenge.
Luckily, however, you have more tools than ever at your disposal to pick up the slack.
Make use of them. When you find yourself in a situation with the peer review
safety net restored, you'll be an even better programmer for it.
Related resources
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
|
-
Today, I'd like to offer a somewhat lighthearted treatment to a serious topic.
I generally find that this tends to offer catharsis to the frustrated. And the
topic of code review tends to lead to lots of frustration.
When talking about code review, I always make sure to offer a specific distinction.
We can divide code reviews into two mutually exclusive buckets: automated and manual.
At first, this distinction might sound strange. Most readers probably think
of code reviews as activities with exclusively human actors. But I tend to disagree.
Any static analyzer (including the compiler) offers feedback. And some tools,
like CodeIt.Right,
specifically regard their suggestions and automated fixes as an automation of the
code review process.
I would argue that automated code review should definitely factor into your code review
strategy. It takes the simple things out of the equation and lets the humans
involved focus on more complex, nuanced topics. That said, I want to ignore
the idea of automated review for the rest of the post. Instead, I'll talk exclusively
about manual code reviews and, more specifically, where they tend to get ugly.
You should absolutely do manual code reviews. Full stop. But you
should also know that they can easily go wrong and devolve into useless or even toxic
activities. To make them effective, you need to exercise vigilance with them.
And, toward that end, I'll talk about some manual code review anti-patterns.
The Gauntlet
First up, let's talk about a style of review that probably inspires the most disgust
among former participants. Here, I'm talking about what I call "the gauntlet."
In this style of code review, the person submitting for review comes to a room with
a number of self-important, hyper-critical peers. Of course, they might not
view themselves as peers. Instead, they probably imagine themselves as a panel
of judges for some reality show.
From this 'lofty' perch, they attack the reviewee's code with a malevolent glee.
They adopt a derisive tone and administer the third degree. And, frankly, they
crush the spirit of anyone subject to this process, leaving low morale and resentment
in their wake.
The Marathon
Next, consider a less awful but still ineffective style of code review. This one
I call "the marathon." I bet you can predict what I mean by this.
In the marathon code review, the participants sit in some conference room for hours.
It starts out as an enthusiastic enough affair, but as time passes, people's energy
wanes. Nevertheless, it goes on because of an edict that all code needs review
and because everyone waited until the 11th hour. And predictably, things get
less careful as time goes on and people space out.
Marathon code reviews eventually reach a point where you might as well not bother.
The Scattershot Review
Scattershot reviews tend to occur in organizations without much rigor around the code
review process. Perhaps their process does not formally include code review. Or maybe it offers no more specifics than "do it."
With a scattershot review process, the reviewer demonstrates no consistency or predictability
in the evaluation. One day he might suggest eliminating global variables, and
on another day, he might advocate for them. Or, perhaps the variance occurs
depending on the reviewer. Whatever the specifics, you can rest assured you'll never
receive the same feedback twice.
This approach to code review can cause some annoyance and resentment. But morale
issues typically take a backseat to simple ineffectiveness and churn in approach.
The Exam
Some of these can certainly coincide. In fact, some of them will likely coincide.
So it goes with "the exam" and "the gauntlet." But while the gauntlet focuses
mostly on the process of the review, the exam focuses on the outcome.
Exam code reviews occur when the parlance around what happens at the end involves
"pass or fail." If you hear people talking about "failing" a code review, you
have an exam on your hands.
Code review should involve a second set of eyes on something to improve it.
For instance, imagine that you wrote a presentation or a whitepaper. You might
ask someone to look it over and proofread it to help you improve it. If they
found a typo, they wouldn't proclaim that you had "failed." They'd just offer
the feedback.
Treating code reviews as exams generally hurts morale and causes the team to lose
out on the collaborative dynamic.
The Soliloquy
The review style I call "the soliloquy" involves paying lip service to the entire
process. In literature, characters offer soliloquies when they speak their thoughts
aloud regardless of anyone hearing them. So it goes with code review styles
as well.
To understand what I mean, think of times in the past where you've emailed someone
and asked them to look at a commit. Five minutes later, they send back a quick,
"looks good." Did they really review it? Really? You
have a soliloquy when you find yourself coding into the vacuum like this.
The downside here should be obvious. If people spare time for only a cursory
glance, you aren't really conducting code reviews.
The Alpha Dog
Again, you might find an "alpha dog" in some of these other sorts of reviews.
I'm looking at you, gauntlet and exam. With an alpha dog code review, you have
a situation where a particularly dominant senior developer rules the roost with the
team. In that sense, the title refers both to the person and to the style of
review.
In a team with a clear alpha dog, that person rules the codebase with an iron fist.
Thus the code review becomes an exercise in appeasing the alpha dog. If he is
present, this just results in him administering a gauntlet. But, even absent,
the review goes according to what he may or may not like.
This tends to lead team members to a condition known as "learned
helplessness," wherein they cease bothering to make decisions without the alpha
dog. Obviously, this stunts their career development, but it also has a pragmatic
toll for the team in the short term. This scales terribly.
The Weeds
Last up, I'll offer a review issue that I'll call "the weeds." This can happen
in the most well meaning of situations, particularly with folks that love their craft.
Simply put, they get "into the weeds."
What I mean by this colloquialism is that they get bogged down in details at the expense
of the bigger picture. Obviously, an exacting alpha dog can drag things into
the weeds, but so can anyone. They might wind up with a lengthy digression about
some arcane language point, of interest to all parties, but not critical to shipping
software. And typically, this happens with things that you ought to settle as matters of procedure, or even address with your automated code reviews.
The biggest issue with a "weeds" code review arises from the poor use of time.
It causes things to get skipped, or else it turns reviews into marathons.
Getting it Right
How to get code reviews right could easily occupy multiple posts. But I'll close
by giving a very broad philosophical outlook on how to approach it.
First of all, make sure that you get clarity up front around code review goals, criteria,
and conduct. This helps to stop anti-patterns wherein the review gets off track
or bogged down. It also prevents soliloquies and somewhat mutes bad behavior.
But, beyond that, look at code reviews as collaborative, voluntary sessions where
peers try to improve the general codebase. Some of those peers may have more
or less experience, but everyone's opinion matters, and it's just that -- an opinion for
the author to take under advisement.
While you might cringe at the notion that someone less experienced might leave something
bad in the codebase, trust me. The damage you do by allowing these anti-patterns
to continue in the name of "getting it right" will be far worse.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
|
-
I have long since cast my lot with the software industry. But, if I were going
to make a commercial to convince others to follow suit, I can imagine what it would
look like. I'd probably feature cool-looking, clear whiteboards, engaged people,
and frenetic design of the future. And a robot or two. Come help us build
the technology of tomorrow.
Of course, you might later accuse me of bait and switch. You entered a bootcamp,
ready to build the technology of tomorrow. Three years later, you found yourself
on safari in a legacy code jungle, trying to wrangle some SharePoint plugin.
Erik, you lied to me.
So, let me inoculate myself against that particular accusation. With a career
in software, you will certainly get to work on some cool things. But you will
also find yourself doing the decidedly less glamorous task of software maintenance.
You may as well prepare yourself for that now.
The Conceptual Difference: Build vs Maintain
From the software developer's perspective, this distinction might evoke various contrasts.
Fun versus boring. Satisfying versus annoying. New problem versus solved
problem. My stuff versus that of some guy named Steve who apparently worked
here 8 years ago. You get the idea.
But let's zoom out a bit. For a broader perspective, consider the difference
as it pertains to a business.
Build
mode (green field) means a push toward new capability. Usually, the business
will regard construction of this capability as a project with a calculated return
on investment (ROI). To put it more plainly, "we're going to spend $500,000
building this thing that we expect to make/save us $1.5 million by next year."
Maintenance mode, on the other hand, presents the business with a cost
center. They've now made their investment and (at least partially)
realized return on it. The maintenance team just hangs around to prevent backslides.
For instance, should maintenance problems crop up, you may lose customers or efficiency.
Plan of Attack: Build vs Maintain
Because the business regards these activities differently, it will attack them differently.
And, while I can't speak to every conceivable situation, my consulting work has shown
me a wide variety. So I can speak to general trends.
In green field mode, the business tends to regard the work as an investment.
So, while management might dislike overruns and unexpected costs, they will tend to
tolerate them more. Commonly, you see a "this will pay off later" mentality.
On the maintenance side of things, you tend to see far less forgiveness. Certainly,
all parties forgive unexpected problems a lot less easily. They view all of
it as a burden.
This difference in attitude translates to the planning as well. Green field
projects justifiably command full time people for the duration of the project.
Maintenance mode tends to get you familiar with the curious term "half of a person."
By this, I mean you hear things like "we're done with the Sigma project, but someone
needs to keep the lights on. That'll be half of Alice." The business grudgingly
allocates part time duty to maintenance tasks.
Why? Well, maintenance tends to arise out of reactive scenarios.
Reactive Mode and the Value of Automation
Maintenance mode in software will have some planned activities, particularly if it
needs scheduled maintenance. But most maintenance programmers find themselves
in a reactive, "wait and see" kind of situation. They have little to do on the
project in question until an outage happens, someone discovers a bug, or a customer
requests a new feature. Then, they spring into action.
Business folks tend to hate this sort of situation. After all, you need to plan
for this stuff, but you might have someone sitting around doing nothing. It
is from this fundamental conundrum that "half people" and "quarter people" arise.
Maintenance programmers usually have other stuff to juggle along with maintaining
"Sigma."
Because of this double duty, the business doubles down on pressure to minimize maintenance.
After all, not only does it create cost, but it takes the people away from other,
profit-driven things that they could otherwise do.
So how do we, as programmers, and we, as software shops, best deal with this?
We make maintenance as turnkey as possible by automating as much as possible.
Oh, and you should automate this stuff during green field time, when management is
willing to invest. If you tell them it means less maintenance cost, they'll
probably bite.
Automate the Test Suite
First up for automation candidates, think of the codebase's test suite. Hopefully,
you've followed my advice and built this during green field mode. But, if not,
it's never too late to start.
Think of how time consuming a job QA has. If manually running the software and
conducting experiments constitutes the entirety of your test strategy, you'll find
yourself hosed at maintenance time. With "half a person" allocated, no one has
time for that. Without an automated suite, then, testing falls by the wayside,
making your changes to a production system even more risky.
You need to automate a robust test suite that lets you know if you have broken anything.
This becomes even more critical when you consider that most maintenance programmers
haven't touched the code they modify in a long time, if ever.
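As a sketch of what that safety net looks like (hypothetical class, and assuming the xUnit test framework), even a handful of small tests like these, run on every build, catch regressions that "half a person" would never have time to check by hand:

    using Xunit;

    // Hypothetical production class inherited at maintenance time.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, int percent)
        {
            return price - (price * percent / 100m);
        }
    }

    public class PriceCalculatorTests
    {
        [Fact]
        public void ApplyDiscount_TakesTenPercentOff()
        {
            var calculator = new PriceCalculator();
            Assert.Equal(90m, calculator.ApplyDiscount(100m, 10));
        }

        [Fact]
        public void ApplyDiscount_WithZeroPercent_ReturnsOriginalPrice()
        {
            var calculator = new PriceCalculator();
            Assert.Equal(100m, calculator.ApplyDiscount(100m, 0));
        }
    }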
Automate Code Reviews
If I were to pick a one-two punch for code quality, that would involve unit tests
and code review. Therefore, just as you should automate your test suite, you
should automate
your code review as well.
If you think testing goes by the wayside in an under-staffed, cost-center model, you
can forget about peer review altogether. During the course of my travels, I've
rarely seen code review continue into maintenance mode, except in regulated industries.
Automated
code review tools exist, and they don't require even "half a person." An
automated code review tool serves its role without consuming bandwidth. And,
it provides maintenance programmers operating in high-risk scenarios with a modicum of comfort and a safety net.
Automate Production Monitoring
For my last maintenance mode automation tip of the post, I'll suggest that you automate
production monitoring capabilities. This covers a fair bit of ground, so I'll
generalize by saying these include anything that keeps your finger on the pulse of
your system's production behavior.
You have logging, no doubt, but do you monitor the logs? Do you keep track of
system outages and system load? If you roll software to production, do you have
a system of checks in place to know if something smells fishy?
You want to make the answer to these questions, "yes." And you want to make
the answer "yes" without you needing to go in and manually check. Automate various
means of monitoring your production software and providing yourself with alerts.
This will reduce maintenance costs across the board.
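To give a feel for the idea, here is a deliberately simple C# sketch of a production poller (the URL and interval are placeholders; a real setup would push alerts into email, chat, or a monitoring product and would run as a service rather than a console loop):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ProductionMonitor
    {
        private static readonly HttpClient Client = new HttpClient();

        public static async Task Main()
        {
            var healthUrl = new Uri("https://example.com/health"); // placeholder endpoint

            while (true)
            {
                try
                {
                    var response = await Client.GetAsync(healthUrl);
                    if (!response.IsSuccessStatusCode)
                        Console.WriteLine($"ALERT: {healthUrl} returned {(int)response.StatusCode}");
                }
                catch (HttpRequestException ex)
                {
                    Console.WriteLine($"ALERT: {healthUrl} unreachable: {ex.Message}");
                }

                // Check once a minute; tune to taste.
                await Task.Delay(TimeSpan.FromMinutes(1));
            }
        }
    }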
Automate Anything You Can
I've listed some automation examples that come to mind as the most critical, based
on my experience. But, really, you should automate anything around the maintenance
process that you can.
Now, you might think to yourself, "we're programmers, we should automate everything."
Well, that subject could make for a whole post in and of itself, but I'll speak briefly
to the distinction. Build mode usually involves creating something from nothing
on a large scale. While you can automate the scaffolding around this activity,
you'll struggle to automate a significant amount of the process.
But that ratio gets much better during maintenance time. So the cost center
nature of maintenance, combined with the higher possible automation percentage, makes
it a rich target. Indeed, I would argue that strategic automation defines the
art of maintenance.
Related resources
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
|
-
For years, I fought the good fight for unit testing. When I started
that fight, I understood a simple premise. We, as programmers, automate things.
So, why not automate testing?
Of all things, a grad school course in software engineering introduced me to the concept
back in 2005. It hooked me immediately, and I began applying the lessons to
my work at the time. A few years and a new job later, I came to a group that
had not yet discovered the wonders of automated testing. No worries, I figured,
I can introduce the concept!
Except, it turns out that people stuck in their ways kind of like those ways.
Imagine my surprise to discover that people turned up their nose at the practice.
Over the course of time, I learned to plead my case, both in technical and business
terms. But it often felt like wading upstream against a fast moving current.
Years later, I have fought that fight over and over again. In fact, I've produced
training materials, courses, videos, blog posts, and books on the subject. I've
brought people around to see the benefits and then subsequently realize those benefits
following adoption. This has brought me satisfaction.
But I don't do this in a vacuum. The industry as a whole has followed the same
trajectory, using the same logic. I count myself just another advocate among
a chorus of voices. And so our profession has generally come to accept unit
testing as a vital tool.
Widespread Acceptance of Automated Regression Tests
In fact, I might go so far as to call acceptance and adoption quite widespread.
This figure only increases if you include shops that totally mean to and will definitely
get around to it like sometime in the next six months or something. In other
words, if you count both shops that have adopted the practice and shops that feel
as though they should, acceptance figures certainly span a plurality.
Major enterprises bring me in to help them teach their developers to do it.
Still other companies consult with me and ask questions about it. Just about everyone
wants to understand how to realize the unit testing value proposition of higher quality,
more stability, and fewer bugs.
A quick clarification of terms is in order. We talk about unit testing and other forms of testing, and sometimes the lines blur. So let's get specific here. A
holistic testing strategy includes tests at a variety of granularities. These
comprise what some call "the
test pyramid." Unit tests address individual components (e.g. classes),
while service tests drive at the way the components of your application work together.
GUI tests, the least granular of all, exercise the whole thing.
Taken together, these comprise your regression test suite. It stands
against the category of bugs known as "regressions," or defects where something that
used to work stops working. For a parallel example in the "real world," think of the warning lights on your car's dashboard. The "low battery" light comes on
because the battery, which used to work, has stopped working.
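A tiny C# illustration of the lower two pyramid levels (hypothetical classes, assuming xUnit): the first test exercises one class in isolation, while the second exercises two components collaborating, with no GUI involved:

    using Xunit;

    // Hypothetical components, used only to illustrate the pyramid levels.
    public class TaxCalculator
    {
        public decimal Tax(decimal amount) => amount * 0.10m;
    }

    public class CheckoutService
    {
        private readonly TaxCalculator _tax;
        public CheckoutService(TaxCalculator tax) => _tax = tax;
        public decimal Total(decimal subtotal) => subtotal + _tax.Tax(subtotal);
    }

    public class PyramidExampleTests
    {
        // Unit level: one class, in isolation.
        [Fact]
        public void Tax_IsTenPercentOfAmount()
        {
            Assert.Equal(10m, new TaxCalculator().Tax(100m));
        }

        // Service level: components working together, still far below the GUI.
        [Fact]
        public void Total_IncludesTax()
        {
            var service = new CheckoutService(new TaxCalculator());
            Assert.Equal(110m, service.Total(100m));
        }
    }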
Benefits of Automated Regression Test Suites
Why do this? What benefits do automated regression test suites provide?
Well, let's take a look at some.
-
Repeatability and accuracy. A human running tests over and over again may produce
slight variances in the tests. A machine, not so much.
-
Speed. As with anything, automation produces a significant speedup over manual
execution.
-
Fast feedback. The automated test suite can tell you much more quickly if you
have broken something.
-
Morale. The fewer times a QA department comes back with "you broke this thing,"
the fewer opportunities for contentiousness.
I should also mention, as a brief aside, that I don't consider automated test suites
to be acceptable substitutes for manual testing. Rather, I believe
the two efforts should work in complementary fashion. If the automated test
suite executes the humdrum tests in the codebase, it frees QA folks up to perform
intelligent, exploratory testing. As Uncle
Bob once famously said, "it's wrong to turn humans into machines. If you
can write a script for a test procedure, then you can write a program to execute that
procedure."
Automating Code Review
None of this probably comes as much of a shock to you. If you go out and read
tech blogs, you've no doubt encountered the widespread opinion that people should
automate regression test suites. In fact, you probably share that opinion.
So don't you wonder why we don't more frequently apply that logic to other concerns?
Take code review, for instance. Most organizations do this in entirely manual
fashion outside of, perhaps, a so-called "linting" tool. They mandate automated
test coverage and then content themselves with siccing their developers on one another
in meetings to gripe over tabs, spaces, and camel casing.
Why not approach code review the same way? Why not automate the aspects of it
that lend themselves to automation, while saving human intervention for more conceptual
matters?
Benefits of Automated Code Reviews
In a study by Steve McConnell, referenced
in this blog post, "formal code inspections" produced better results for preemptively
finding bugs than even automated regression tests. So it stands to reason that
we should invest in code review in the same ways that we invest in regression testing.
And I don't mean simply time spent, but in driving forward with automation and efficiency.
Consider the benefits I listed above for automated tests, and look how they apply
to automated
code review.
-
Repeatability and accuracy. Humans will miss instances of substandard code if
they feel tired -- machines won't.
-
Speed. Do you want your code review to take seconds, or hours and days?
-
Fast feedback. Because of the increased speed of the review, the reviewee gets
the results immediately after writing the code, for better learning.
-
Morale. The exact same reasoning applies here. Having a machine point
out your mistakes can save contentiousness.
I think that we'll see a similar trajectory to automating code review that we did
with automating test suites. And, what's more, I think that automated code review
will gain steam a lot more quickly and with less resistance. After all, automating
QA activities blazed a trail.
I believe the biggest barrier to adoption, in this case, is the lack of awareness.
People may not believe automating code review is possible. But I assure you,
you can do it. So keep an eye out for ways to automate
this important practice, and get in ahead of the adoption curve.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
|
-
As a teenager, I remember having a passing interest in hacking. Perhaps this
came from watching the movie Sneakers.
Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking
other people's stuff. Therefore, what I know about hacking pretty much stops
at understanding terminology and high level concepts.
Consider the term "zero
day exploit," for instance. While I understand what this means, I have never
once, in my life, sat on discovery of a software vulnerability for the purpose of
using it somehow. Usually when I discover a bug, I'm trying to deposit a check
or something, and I care only about the inconvenience. But I still understand
the term.
"Zero day" refers to the amount of time the software vendor has to prepare for the
vulnerability. You see, the clever hacker gives no warning about the vulnerability
before using it. (This seems like common sense, though perhaps hackers with
more derring-do like to give vendors half a day to watch them scramble to release something
before the hack takes effect.) The time between announcement and reality is
zero.
Increased Deployment Cadence
Let's co-opt the term "zero day" for a different purpose. Imagine that we now
use it to refer to software deployments. By "zero day deployment," we thus mean
"software deployed without any prior announcement."
But
why would anyone do this? Don't you miss out on some great marketing opportunities?
And, more importantly, can you even release software this quickly? Understanding
comes from realizing that software deployment is undergoing a radical shift.
To understand this think about software release cadences 20 years ago. In the
90s, Internet Explorer won the first browser
war because it managed to beat Netscape's plodding cadence of three years between releases. With major software products, release cadences of a year or two dominated
the landscape back then.
But that timeline has shrunk steadily. For a highly visible example, consider
Visual Studio. In 2002, 2005, 2008, Microsoft released versions corresponding
to those years. Then it started to shrink with 2010, 2012, and 2013. Now,
the years no longer mark releases, per se, with Microsoft actually releasing major
updates on a quarterly basis.
Zero Day Deployments
As much as going from "every 3 years" to "every 3 months" impresses, websites and
SaaS vendors have shrunk it to "every day." Consider Facebook's
deployment cadence. They roll minor updates every business day and major
ones every week.
With this cadence, we truly reach zero day deployment. You never hear Facebook
announcing major upcoming releases. In fact, you never hear Facebook announcing
releases, period. The first the world sees of a given Facebook release is when
the release actually happens. Truly, this means zero day releases.
Oh, don't get me wrong. Rumors of upcoming features and capabilities circulate,
and Facebook certainly has a robust marketing department. But Facebook and companies
with similar deployment approaches have impressively made deployments a non-event.
And others are looking to follow suit, perhaps yours included.
Conceptual Impediments to Zero Day Deployments
If what I just said made you spit your drink at the screen, I understand. Perhaps
your deployment and release process takes so long that the thought of shrinking it
to a day made you laugh. Or perhaps it terrified you. Either way, I can understand
that it may seem quite a leap.
You may conceive of Facebook and other practitioners so alien to your own situation
that you see no path from here to there. But in reality, they almost certainly
do the same things you do as part of your longer process -- just optimized and automated.
Impediments take a variety of forms. You might have lengthy quality assurance
and vetting processes, perhaps that require many iterations between the developers
and quality assurance. You might still be packaging software onto DVDs and shipping
it to customers. Perhaps you run all sorts of checks and analytics on it.
But all will fall under the general heading of requiring manual intervention or consuming
a lot of time.
To get to zero day deployments, you need to automate and speed up considerably, and
this can seem daunting.
What's Common Today
Some good news exists, though. The same forces that let the Visual Studio team
see such radical improvement push on software shops across the board. We all
have access to helpful technologies.
For instance, the overwhelming majority of organizations now have continuous integration
via dedicated build machines. Software developers commit code, and these things
scoop it up, compile it, and bundle it into a deployable package. This activity
now happens on the order of minutes whereas, in the past, I can remember shops where
this was some poor guy's entire job, and he'd spend days on each build.
And, speaking of the CI server, a lot of them run automated test suites as part of
what they do. Most commonly, this means unit tests. But they might also
invoke acceptance tests and even more exotic things like smoke, GUI, and functionality
tests. You can thus accept commits, build the software, run a bunch of tests,
and get it ready to deploy.
Of course, you can also automate the actual deployment as well. It stands to
reason that, if your build machine can ball it up into a deliverable, it can deliver
that deliverable. This might be harder with physical media involved, but as
more software deliveries happen over networks, more of them get automated.
What We Need Next
With all of that in place, why don't we have more zero day deployments? What's
missing?
Again, discounting the problem of physical media, I'd say quality checks present the
biggest issue. We can compile, run automated tests, and deploy automatically.
But does this guarantee acceptable production behavior?
What about the important element of code reviews? How do you assure that, even
as automated tests pass, the application isn't piling up mountains of technical debt
and impeding future deployments? To get to zero day deployments, we must address
these issues.
Don't get me wrong. Other things matter here as well. Zero day deployments
require robust production checks and sophisticated "oops, that didn't work, rollback!"
capabilities. But I think that nothing will matter more than automated
quality checks.
Each time you commit code, you need an intelligent analysis of that code that should
fail the build as surely as failing tests if issues crop up. In a zero day deployment
context, you cannot afford best practice violations. You cannot afford slipping
quality, mounting technical debt, and you most certainly cannot afford code rot.
Today's rot in a zero day deployment scenario means tomorrow's inability to deploy
that way.
Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and reduce technical debt.
|
-
During my younger days, I worked for a company that made a habit of strategic acquisition.
They didn't participate in Time Warner style mergers, but periodically they would
purchase a smaller competitor or a related product. And on more than one occasion,
I inherited the lead role for the assimilating software from one of these organizations.
Lucky me, right?
If I think in terms of how to describe this to someone, a plumbing analogy comes to
mind. Over the years, I have learned enough about plumbing to handle most tasks
myself. And this has exposed me to the irony of discovering a small leak in
a fitting plugged by grit or debris. I find this ironic because two wrongs make
a right. A dirty, leaky fitting reaches sub-optimal equilibrium, and you spring
a leak when you clean it.
Legacy codebases have this issue as well. You inherit some acquired codebase,
fix a tiny bug, and suddenly the defect floodgates open. And then you realize
the perilousness of your situation.
While you might not have come by it in the same way that I did, I imagine you can
relate. At some point or another, just about every developer has been thrust
into supporting some creaky codebase. How should you handle this?
Put Your Outrage in Check
First, take some deep breaths. Seriously, I mean it. As software developers,
we seem to hate code written by others. In fact, we seem to hate our own
code if we wrote it more than a few months ago. So when you see the legacy
codebase for the first time, you will feel a natural bias toward disgust.
But don't indulge it. Don't sit there cursing the people that wrote the code,
and don't take screenshots to send to the
Daily WTF. Not only will it do you no good, but I'd go so far as to say
that this is actively counterproductive. Deciding that the code offers nothing
worth salvaging makes you less inclined to try to understand it.
The people that wrote this code dealt with older languages, older tooling, older frameworks,
and generally less knowledge than we have today. And besides, you don't know
what constraints they faced. Perhaps bosses heaped delivery pressure on them
like crazy. Perhaps someone forced them to convert to writing in a new, unfamiliar
language. Whatever the case may be, you simply didn't walk in their shoes.
So take a breath, assume they did their best, and try to understand what you have
under the hood.
Get a Visualization of the Architecture
Once you've settled in mentally for this responsibility, seek to understand quickly.
You won't achieve this by cracking open the code and looking through random source
files. But, beyond that, you also won't achieve it by looking at their architecture
documents or folder structures. Reality gets out of sync with intention, and
those things start to lie. You need to see the big picture, but in a way that
lines up with reality.
Look for tools that map dependencies and can generate a visual of the codebase.
Plenty of these tools exist for you and can automate visual depictions. Find
one and employ it. This will tell you whether the architecture resembles the
neat diagram given to you or not. And, more importantly, it will get you to
a broad understanding much more quickly.
Characterize
Once you have the picture you need of the codebase and the right frame of mind, you
can start doing things to it. And the first thing you should do is to start
writing characterization
tests.
If you have not heard of them before, characterization tests have the purpose of,
well, characterizing the codebase. You don't worry about correct or incorrect
behaviors. Instead, you accept at face value what the code does, and document
those behaviors with tests. You do this because you want to get a safety net
in place that tells you when your changes affect inputs and outputs.
As this XKCD cartoon ably demonstrates,
someone will come to depend on the application's production behavior, however problematic.
So with legacy code, you cannot simply decide to improve a behavior and assume your
users will thank you. You need to exercise caution.
But characterization tests do more than just provide a safety net. As an exercise,
they help you develop a deeper understanding of the codebase. If the architectural
visualization gives you a skeleton understanding, this starts to put meat on the bones.
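For example, a characterization test in C# might look like this (hypothetical legacy class, assuming xUnit); note that the tests pin down what the code currently does, oddities and all, rather than what anyone thinks it should do:

    using Xunit;

    // Hypothetical legacy class whose behavior we document rather than judge.
    public class LegacyShippingCalculator
    {
        public decimal Cost(int items)
        {
            // Inherited logic, quirks included.
            if (items <= 0) return 4.99m;
            return items * 2.50m;
        }
    }

    public class LegacyShippingCalculatorCharacterizationTests
    {
        // Zero items "costs" the base fee today. Maybe that's a bug, maybe a
        // feature someone relies on. Either way, we want to know if it changes.
        [Fact]
        public void ZeroItems_CurrentlyChargesBaseFee()
        {
            Assert.Equal(4.99m, new LegacyShippingCalculator().Cost(0));
        }

        [Fact]
        public void ThreeItems_CurrentlyChargesTwoFiftyEach()
        {
            Assert.Equal(7.50m, new LegacyShippingCalculator().Cost(3));
        }
    }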
Isolate Problems
With a reliable safety net in place, you can begin making strategic changes to the
production code beyond simple break/fix. I recommend that you start by finding
and isolating problematic chunks of code. In essence, this means identifying
sources of technical debt and looking to improve, gradually.
This can mean pockets of global state or extreme complexity that make for risky change.
But it might also mean dependencies on outdated libraries, frameworks, or APIs.
In order to extricate yourself from such messes, you must start to isolate them from
business logic and important plumbing code. Once you have it isolated, fixes
will come more easily.
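As a sketch of what "isolating" might look like in C#, imagine business logic that currently reaches directly into some piece of problematic global state. The names below are hypothetical; the point is the seam, not the specifics.

    // Hypothetical stand-in for the problematic global state you want to quarantine.
    public static class LegacyGlobalRates
    {
        public static decimal Lookup(string currencyCode) =>
            currencyCode == "USD" ? 1.0m : 0.9m;
    }

    // Business logic depends on this narrow interface instead of the global mess.
    public interface IExchangeRateProvider
    {
        decimal GetRate(string currencyCode);
    }

    // The outdated dependency now lives behind one adapter, so a later fix or
    // replacement touches a single class instead of the whole codebase.
    public class LegacyExchangeRateAdapter : IExchangeRateProvider
    {
        public decimal GetRate(string currencyCode) =>
            LegacyGlobalRates.Lookup(currencyCode);
    }

The business logic now talks to IExchangeRateProvider, and the legacy mess has exactly one door in and out.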
Evolve Toward Modernity
Once you've isolated problematic areas and archaic dependencies, it certainly seems
logical to subsequently eliminate them. And, I suggest you do just that as a
general rule. Of course, sometimes isolating them gives you enough of a win
since it helps you mitigate risk. But I would consider this the exception and
not the rule. You want to remove problem areas.
I do not say this idly nor do I say it because I have some kind of early adopter drive
for the latest and greatest. Rather, being stuck with old tooling and infrastructure
prevents you from taking advantage of modern efficiencies and gains. When some
old library prevents you from upgrading to a more modern language version, you wind
up writing more, less efficient code. Being stuck in the past will cost you
money.
The Fate of the Codebase
As you get comfortable and take ownership of the legacy codebase, never stop contemplating
its fate. Clearly, in the beginning, someone decided that the application's
value outweighed its liability factor, but that may not always continue to be true.
Keep your finger on the pulse of the codebase, while considering options like migration,
retirement, evolution, and major rework.
And, finally, remember that taking over a legacy codebase need not be onerous.
As initially shocked as I found myself with the state of some of those acquisitions,
some of them turned into rewarding projects for me. You can derive a certain
satisfaction from taking over a chaotic situation and gradually steering it toward sanity.
So if you find yourself thrown into this situation, smile, roll up your sleeves, own
it and make the best of it.
Related resources
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio for a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Learn
more how CodeIt.Right can identify technical debt, document it and gradually improve
the legacy code.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich

|
-
The balance among types of feedback drives some weird interpersonal dynamics.
For instance, consider the rather trite (if effective) management technique of the
"compliment sandwich." Managers with a negative piece of feedback precede and
follow that feedback with compliments. In that fashion, the compliments form
the "bun."
Different people and different groups have their preferences for how to handle this.
While some might bend over backward for diplomacy, others prefer environments where
people hurl snipes at one another and simply consider it "passionate debate."
I have no interest in arguing for any particular approach -- only in pointing out the
variety. As it turns out, we humans find this subject thorny.
To some extent, this complicated situation extends beyond human boundaries and into
automated systems. While we might not take quite the same umbrage as we would
with humans, we still get frustrated. If you doubt this, I challenge you to
tell me that you have never yelled at a compiler because you were sure your code had
no errors. I thought so.
So from this perspective, I can understand the frustration with static analysis feedback.
Often, when you decide to enable a new static analysis engine or linting tool on a
codebase, the feedback overwhelms. 28,326 issues in the code can demoralize anyone.
And so the temptation emerges to recoil from this feedback and turn off the tool.
But should you do this? I would argue that usually, you should not. But
situations do exist when disabling a static analyzer makes sense. Today, I'll
walk through some examples of times you might suppress such a warning.
False Positives
For the first example, I'll present something of a no-brainer. However, I will
also present a caveat to balance things.
If your static analysis tool presents you with a false positive, then you should suppress
that instance of the false positive. (No sense throwing the baby out with the
bathwater and suppressing the entire rule). Assuming that you have a true false
positive, the analysis warning simply constitutes noise and not signal. Get
rid of it.
That being said, take care when labeling warnings as false positives. A false
positive means that the tool has flagged a problem or potential error and gotten
it wrong. It does not mean that you disagree with the warning or don't care.
The tool's wrongness is a good reason to suppress -- your not liking its prognosis
falls short of that.
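When you do have a genuine false positive, suppress the single occurrence, not the rule. In the .NET world, one common mechanism is the SuppressMessage attribute, though your particular tool may offer its own in-place suppression. This is a minimal sketch; the rule category and ID below are placeholders, not real rule names.

    using System.Diagnostics.CodeAnalysis;

    public class ReportGenerator
    {
        // Suppress this one occurrence only -- the rule stays enabled everywhere else.
        // "Design" and "XX1234" are placeholders; use the category and ID your tool reports.
        [SuppressMessage("Design", "XX1234:SomeRule",
            Justification = "The analyzer cannot see that callers validate this input.")]
        public void Generate(string reportName)
        {
            // ...
        }
    }

Note the Justification property. Recording why you suppressed something keeps the decision honest and reviewable later.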
Non-Applicable Code
For the second kind of instance, I'll use the term "non-applicable code." This
describes code for which you have no interest in static analysis warnings. While
this may sound contradictory to the last point, it differs subtly.
You do not control all code in your codebase, and not all code demands the same level
of scrutiny about the same concepts. For example, do you have code in your codebase
driven by a framework? Many frameworks force some sort of inheritance scheme
on you or the implementation of an interface. If the name of a method on a third
party interface violates a naming convention, you need not be dinged by your tool
for simply implementing it.
In general, you'll find warnings that do not universally apply. Test projects
differ from your production code. GUI projects differ from data access layer
ones. And NuGet packages or generated code remain entirely outside of your control.
Assuming the decision to use these things happened in the past, turning off the analysis
warnings makes sense.
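For generated code in particular, .NET offers the GeneratedCode attribute, and many analyzers skip members that carry it -- though you should confirm whether your specific tool honors it or needs its own exclusion setting. The tool name and version here are hypothetical.

    using System.CodeDom.Compiler;

    // Many .NET analyzers skip types and members marked as generated; check
    // whether your tool honors this attribute or requires its own exclusion.
    [GeneratedCode("HypotheticalDesignerTool", "1.0.0")]
    public partial class CustomerForm
    {
        // Designer-emitted members live here; naming and layout rules need not apply.
    }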
Cosmetic Code Counter to Your Team's Standard
So far, I've talked about the tool making a mistake and the tool getting things right
on the wrong code. This third case presents a thematically similar consideration.
Instead of a mistake or misapplication, though, this involves a misfit.
Many tools out there offer purely cosmetic concerns. They'll flag field variables
not prepended with underscores or methods with camel casing instead of Pascal casing.
Assuming those jibe with your team's standards, you have no issues. But if they
don't, you have two options: change the tool or change your standard. Generally
speaking, you probably want to err on the side of complying with broad standards.
But if your team is set on its standard, then turn off those warnings or configure
the tool.
When You're Buried in Warnings
Speaking of warnings, I'll offer another point that relates to them, but with an entirely
different theme. When your team is buried in warnings, you need to take action.
Before I talk about turning off warnings, however, consider fixing them en masse.
It may seem daunting, but I suspect that you might find yourself surprised at how
quickly you can wrangle them down to a manageable number.
However, if this proves too difficult or time-consuming, consider force ranking the
warnings, and (temporarily) turning off all except the top, say, 200. Make it
part of your team's work to eliminate those, and then enable the next 200. Keep
at it until you eliminate the warnings. And remember, in this case, you're disabling
warnings only temporarily. Don't forget about them.
When You Have an Intelligent Disagreement
Last up comes the most perilous reason for turning off static analysis warnings.
This one also happens to occur most frequently, in my experience. People turn
them off because they know better than the static analysis tool.
Let's stop for a moment and contemplate this. Teams of workaday developers out
there tend to blithely conclude that they know their business. In fact, they
know their business better than people whose job it is to write static analysis tools
that generate these warnings. Really? Do you like those odds?
Below the surface, disagreement with the tool often masks resentment at being called
"wrong" or "non-compliant." Turning the warnings off thus becomes a matter of
pride or mild laziness. Don't go this route.
If you want to ignore warnings because you believe them to be wrong, do research first.
Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported
argument as to why you should do so.
When in Doubt, Leave 'em On
In this post, I have gingerly walked through scenarios in which you may want to turn
off static analysis warnings and guidance. For me, this exercise produces some
discomfort because I rarely find this advisable. My default instinct is thus
not to encourage such behavior.
That said, I cannot deny that you will encounter instances where this makes sense.
But whatever you do, avoid letting this become common or, worse, your default.
If you have the slightest bit of doubt, leave them on. Put your trust
in the vendors of these tools -- they know their business. And steering you
in bad directions is bad for business.
Learn
more how CodeIt.Right can automate your team standards, make it easy to ignore specific
guidance violations, and keep track of them.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
More years ago than I'd care to admit, I took a software engineering course as part
of my graduate CS program. At the time, I worked a full-time job during the
day and did remote classes in the evening. As a result, I disproportionately
valued classes with applicability to my job. And this class offered plenty of
that.
We scratched the surface on such diverse topics as agile methodologies, automated
testing, cost of code ownership, and more. But I found myself perhaps most interested
by the dive we did into refactoring. The idea of reworking the internal structure
of code while preserving inputs and outputs is a surprisingly complex one.
Historical Complexity of Refactoring
At the risk of dating myself, I took this course in the fall of 2006. While
automated refactorings in your IDE now seem commonplace, back then, they were hard.
In fact, the professor of the course considered them to be sufficiently difficult
as to steer a group of mine away from a project implementing some. In the world
of 2006, I suspect he had the right of it. We steered clear.
In 2016, implementing automated refactorings still presents a challenge.
But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.
Back then? Not so much.
Refactorings present a unique challenge to tool vendors because of the inherent risk.
They can really screw up users' code. If a mistake happens, the best-case scenario
is that the resultant code fails to compile because then, at least, it fails fast.
Worse still is semantically and syntactically correct code that somehow behaves improperly.
In this situation, a refactoring -- a safe change to code -- becomes a modification
to the behavior of production code instead. Ouch.
On top of the risk, the implementation of refactoring anywhere beyond the trivial
involves heady concepts such as abstract syntax trees. In other words, it's
not for lightweights. So to recap, refactoring is risky and difficult.
And this is the landscape faced by tool authors.
I Don't Fix -- I Just Flag
If you live in the US, you may have seen a commercial that features a funny quip.
If I'm not mistaken, it advertises for some sort of fraud prevention services.
(Pardon any slight inaccuracies, as I recount this as best I can, from memory.)
In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.
Off to the side, a woman stands near a security guard, asking him why he didn't do
anything to stop it. "I'm not a robbery prevention service -- I'm a robbery monitoring service.
Oh, by the way, there's a robbery."
It brings a chuckle, but it also brings an underlying point. In many situations,
monitoring alone can prove woefully ineffective, prompting frustration. As a
former manager and current consultant, I generally advise people that they should
only point out problems when they have also prepared proposed solutions. It
can mean the difference between complaining and solving.
So you can imagine and probably share my frustration at tools that just flag problems
and leave it to you to investigate further and fix them. We feel like the woman
standing next to the "robbery monitor," wondering how useful the service is to us.
Levels of Solution
Going back to the subject of software development, we see this dynamic in a number
of places. The compiler, the IDE, productivity add-ins, static analysis tools,
and linting utilities all offer us warnings to heed.
Often, that's all we get. The utility says, "hey, something is wrong here, but
you're going to have to figure out what." I tend to think of that as the basic
level of service, or level 0, if you will.
The next level, level 1, involves at least offering some form of next action.
It might be as simple as offering a help file, inline reading, or a link to more information.
Anything above "this is a problem."
Level 2 ups the ante by offering a recommendation for what to do next.
"You have a dependency cycle. You should fix this by looking at these three
components and removing one mutual dependency." It goes beyond giving you a
next thing to do and gives you the next thing to do.
Level 3 rounds out the field by actually performing the action for you (following
a prompt, of course). "You've accidentally hidden a method on the parent class.
Click here to rename or click here to make parent virtual." That's just an example
off the top of my head, of course, but it illustrates the interaction paradigm. "We've
noticed a problem, and you can click here to fix it."
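As a small C# illustration of that last example, consider a child class that accidentally hides a parent method. The compiler flags it (warning CS0108), and a level 3 tool turns that warning into clickable fixes; the class names here are invented for the sketch.

    public class Parent
    {
        public void Describe() => System.Console.WriteLine("Parent");
    }

    public class Child : Parent
    {
        // Compiler warning CS0108: 'Child.Describe()' hides inherited member
        // 'Parent.Describe()'. A level 3 tool offers the fixes directly: rename
        // this method, mark it 'new' to make the hiding explicit, or make the
        // parent method virtual and override it here.
        public void Describe() => System.Console.WriteLine("Child");
    }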
Fixes in Your Tooling
When
evaluating your own tools, look to climb as high up this hierarchy as you can.
Favor tools that not only identify problems but also offer fixes whenever possible.
There are a number of such tools out there, including CodeIt.Right.
Using tools like this is a pleasure because it removes the burden of research and
implementation from you. Well, you can always do the research if you want, but
at your own leisure. But it's much better to do research at your leisure than
when you're trying to accomplish something else.
The other important concern here is that you find trusted tooling to help you with
this sort of thing. After all, you don't want something messing with your source
code if it might mess up your source code. But, assuming you can trust it, this
provides an invaluable boost to your effectiveness by automatically resolving your
problems and by helping you learn.
In the year 2016, we have far more tooling available, with a far better track record,
than we did in 2006. Leverage it whenever possible so that you can focus on
solving the pressing problems of your day to day work.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio for a flexible
and intuitive "We've noticed a problem, and you can click here to fix it" solution.
Learn
more how CodeIt.Right can automate your team standards and improve code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
In professional contexts, I think that the word "standard" has two distinct flavors.
So when we talk about a "team standard" or a "coding standard," the waters muddy a
bit. In this post, I'm going to make the case for a team standard. But
before I do, I think it important to discuss these flavors that I mention. And
keep in mind that we're not talking dictionary definition as much as the feelings
that the word evokes.
First,
consider standard as "common." To understand what I mean, let's talk cars.
If you go to buy a car, you can have an automatic transmission or a standard transmission.
Standard represents a weird naming choice for this distinction since (1) automatic
transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer
much better descriptions. But it's called "standard" because of historical context.
Once upon a time, automatic was a new sort of upgrade, so the existing, default option
became boringly known as "standard."
In contrast, consider standard as "discerning." Most commonly you hear this
in the context of having standards. If some leering, creepy person suggested
you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I
have standards."
Now, take these common contexts for the word to the software team room. When
someone proposes coding standards, the two flavors make themselves plain in the team
members' reactions. Some like the idea, and think, "it's important to have standards
and take pride in our work." Others hear, "check your creativity at the gate,
because around here we write standard, default code."
What I Mean by Standard
Now that I've drawn the appropriate distinction, I feel it appropriate to make my
case. When I talk about the importance of a standard, I speak with the second
flavor of the word in mind. I speak about the team looking at its code with
a discerning attitude. Not just any code can make it in here -- we have standards.
These can take somewhat fluid forms, and I don't mean to be prescriptive. The
sorts of standards that I like to see apply to design principles as much as possible
and to cosmetic concerns only when they have to.
For example, "all non-GUI code should be test driven" and "methods with more than
20 lines should require a conversation to justify them" represent the sort of standards
I like my teams to have. They say, "we believe in TDD" and "we view long methods
as code smells," respectively. In a way, they represent the coding ethos of
the group.
On the other side of the fence lie prescriptions like, "all class fields shall be
prepended with underscores" and "all methods shall be camel case." I consider
such concerns cosmetic, since they affect appearance and not design or runtime behavior.
Cosmetic concerns are not important... unless they are. If the team struggles
to read code and becomes confused because of inconsistency, then such concerns become
important. If the occasional quirk presents no serious readability issues, then
prescriptive declarations about it stifle more than they help.
Having standards for your team's work product does not mean mandating total homogeneity.
Why Have a Standard at All?
Since I'm alluding to the potentially stifling effects of a team standard, you might
reasonably ask why we should have them at all. I can assert that I'm interested
in the team being discerning, but is it really just about defining defaults?
Fair enough. I'll make my case.
First, consider something that I've already mentioned: maintenance. If the team
can easily read code, it can more easily maintain that code. Logically, then,
if the team all writes fairly similar code, they will all have an easier time reading,
and thus maintaining that code. A standard serves to nudge teams in this direction.
Another important benefit of the team standard revolves around the integrity of the
work product. Many teams' standards incorporate methodology for security, error
handling, logging, etc. Thus the established standard arms the team members
with ways to ensure that the software behaves properly.
And finally, well-done standards can help less experienced team members learn their
craft. When such people join the team, they tend to look to established folks
for guidance. Sadly, those people often have the most on their plate and the
least time. The standard can thus serve as teacher by proxy, letting everyone
know the team's expectations for good code.
Forget the Conformity (by Automating)
So far, all of my rationale follows a fairly happy path. Adopt a team standard,
and reap the rewards: maintainability, better software, learning for newbies.
But equally important is avoiding the dark side of team standards. Often this
dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.
Please, please, please remember that a standard should not elevate conformity as a
virtue. It should represent shared values and protection of work product quality.
Therefore, in situations where conformity (uniformity) is justified, you should automate
it. Don't make your collaborative time about telling people where to put
spaces and brackets -- program
your IDE to do that for you.
Make Justification Part of the Standard
Another critical way to remove the authoritarian vibe from the team standard is one
that I rarely see. And that mystifies me a bit because you can do it so easily.
Simply make sure you justify each item contained in the standard.
"Methods with more than 20 line of code should prompt a conversation," might find
a home in your standard. But why not make it, "methods with more than 20 lines
of code should prompt a conversation because studies have demonstrated that defect
rate increases more than linearly with lines of code per method?" Wow, talk
about powerful.
This little addition takes the authoritarian air out of the standard, and it also
helps defuse squabbles. And, best of all, people might just learn something.
If you start doing this, you might also notice that boilerplate items in a lot of
team standards become harder to justify. "Prepend your class fields with m underscore"
becomes "prepend your class fields with m underscore because... wait, why do we do
that again?"
Prune and Always Improve
When you find yourself trailing off at "because," you have a problem. Something
exists in your team standard that you can't justify. If no one can justify it,
then rip it out. Seriously, get rid of it. Having items that no one can
justify starts to put you in conformity for the sake of conformity territory.
And that's when your standard goes from "discerning" to "boring."
Let this philosophy guide your standard in general. Revisit it frequently, and
audit it for valid justifications. Sometimes justifications will age out of
existence or seem lame in retrospect. When this happens, do not hesitate to
revisit, amend, or cull. The best team standards are neither boring nor static.
The best team standards reflect the evolving, growing philosophy of the team.
Related resources
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio for a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Learn
more how CodeIt.Right can automate your team standards and improve code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
If you write software, the term "feedback loop" might have made its way into your
vocabulary. It charted a slightly indirect route from its conception into the developer
lexicon, though, so let's start with the term's origin. In general systems terms, a
feedback loop uses its output as one of its inputs.
Kind of vague, huh? I'll clarify with an example. I'm actually writing
this post from a hotel room, so I can see the air conditioner from my seat.
Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this
time of year, so I'm giving the machine a workout. Its LED display reads 70
Fahrenheit, and it's cranking to make that happen.
When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take
a break. But as soon as the thermostat starts inching toward 71, it will turn
itself back on and start working again. Such is the Sisyphean struggle of climate
control.
Important for us here, though, is the mechanics of this system. The AC unit
alters the temperature in the room (its output). But it also uses the temperature
in the room as input (if < 71, do nothing, else cool the room). Climate control
in buildings operates via feedback loop.
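If it helps to see the loop spelled out, here is a toy version in C#. The numbers are invented, but the shape of the system is the point: the output of one cycle becomes the input of the next.

    public static class ClimateControl
    {
        // One pass through the loop: the room temperature (output) comes back
        // in as the input that decides what the unit does next.
        public static double RunOneCycle(double roomTemperature)
        {
            bool shouldCool = roomTemperature >= 71.0; // if < 71, do nothing
            return shouldCool
                ? roomTemperature - 0.5   // cooling nudges the output down...
                : roomTemperature + 0.1;  // ...and warm air drifts it back up
        }
    }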
Appropriating the Term for Software Development
It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback
loops. Most likely this happens because you become part of the system.
Most people find it harder to reason about things from within.
In software development, you complete the loop. You write code, the compiler
builds it, the OS runs it, you observe the result, and decide what to do to the code
next. The output of that system becomes the input to drive the next round.
If you have heard the term before, you've probably also heard the term "tightening
the feedback loop." Whether or not you've heard it, what people mean by this
is reducing the cycle time of the aforementioned system. People throwing that
term around look to streamline the write->build->run->write again process.
A History of Developer Feedback Loops
At the risk of sounding like a grizzled old codger, let me digress for a moment to
talk about feedback loop history. Long before my time came the punched
card era. Without belaboring the point, I'll say that this feedback loop
would astound you, the modern software developer.
Programmers would sit at key punch "kiosks", used to physically perforate forms (one
mistake, and you'd start over). They would then take these forms and have operators
turn them into cards, stacks of which they would hold onto. Next, they'd wait
in line to feed these cards into the machines, which acted as a runtime interpreter.
Often, they would have to wait up to 24 hours to see the output of what they had done.
Can you imagine? Write a bit of code, then wait for 24 hours to see if it worked.
With a feedback loop this loose, you can bet that checking and re-checking steps received
hyper-optimization.
When I went to college and started my programming career, these days had long passed.
But that doesn't mean my early days didn't involve a good bit of downtime. I
can recall modifying C files in projects I worked, and then waiting up to an hour
for the code to build and run, depending on what I had changed. xkcd
immortalized this issue nearly 10 years ago, in one of its most popular comics.
Today, you don't see this as much, though certainly, you could find some legacy codebases
or juggernauts that took a while to build. Tooling, technique, modern hardware
and architectural approaches all combine to minimize this problem via tighter feedback
loops.
The Worst Feedback Loop
I have a hypothesis. I believe that a specific amount of time exists for each
person that represents the absolute, least-optimal amount of time for work feedback.
For me, it's about 40 seconds.
If I make some changes to something and see immediate results, then great. Beyond
immediacy, my impatience kicks in. I stare at the thing, I tap impatiently,
I might even hit it a little, knowing no good will come. But after about 40
seconds, I simply switch my attention elsewhere.
Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.
I might pipeline my work, or carve out some other tasks with which I can be productive
while waiting. If, for instance, I can get feedback on something every 10 minutes,
I'll kick it off, do some household chores, periodically checking on it.
But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance
of productivity. I kick it off and check Twitter. 40 seconds turns into
5 minutes when someone posts a link to some cool astronomy site. I check back,
forget what I did, and then remember. I try again and wait 40 seconds.
This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4
Buzzfeed articles. I then hate myself.
The Importance of Tightening
Why do I offer this story about my most sub-optimal feedback period? To demonstrate
the importance of diligence in tightening the loop. Wasting a few seconds while
waiting hinders you. But waiting enough seconds to distract you with other things
slaughters your productivity.
With software development, you can get into a state of what I've heard described as
"flow." In a state of flow, the feedback loop creates harmony in what you're
doing. You make adjustments, get quick feedback, feel encouraged and productive,
which promotes more concentration, more feedback, and more productivity. You
discover a virtuous circle.
But just the slightest dropoff in the loop pops that bubble. And, another dropoff
from there (e.g. to 40 seconds for me) can render you borderline-useless. So
much of your professional performance rides on keeping the loop tight.
Tighten Your Loop Further
Modern tooling offers so many options for you. Many IDEs will perform speculative
compilation or interpretation as you code, making builds much faster. GUI components
can be rendered as you work, allowing you to see changes in real time as you alter
the markup. Unit tests slice your code into discrete, separately evaluated components,
and continuous testing tools provide pass/fail feedback as you type. Static
code analysis tools offer you code
review as you work, rather than at some code review session days later. I could
go on.
The general idea here is that you should constantly seek ways to tune your day to
day work. Keep your eyes out for tools that speed up your feedback loop.
Read blogs and go to user groups. Watch your coworkers for tips and tricks.
Claw, scratch, and grapple your way to shaving time off of your feedback loop.
We've come a long way from punch cards and sword fights while code compiles.
But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques
seem. Put yourself at the forefront of that curve, and you'll distinguish yourself
as a developer.
Learn
more how CodeIt.Right can tighten the feedback loop and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
In the world of programming, 15 years or so of professional experience makes me a
grizzled veteran. That certainly does not hold for the work force in general,
but youth dominates our industry via the absolute explosion of demand for new programmers.
Given the tendency of developers to move around between projects and companies, 15
years have shown me a great deal of variety.
Perhaps nothing has exemplified this variety more than the code review. I've
participated in code reviews that were grueling, depressing marathons. On the
flip side, I've participated in ones where I learned things that would prove valuable
to my career. And I've seen just about everything in between.
Our industry has come to accept that peer review works. In the book Code
Complete, author Steve McConnell cites it, in some circumstances, as the single
most effective technique for avoiding defects. And, of course, it helps with
knowledge transfer and learning. But here's the rub -- implemented poorly, it
can also do a lot of harm.
Today, I'd like to make the case for the automated code review. Let me be clear.
I do not view this as a replacement for any manual code review, but as a supplement
and another tool in the tool chest. But I will say that automated code review
carries less risk than its manual counterpart of having negative consequences.
The Politics
I mentioned extremely productive code reviews. For me, this occurred when working
on a team with those I considered friends. I solicited opinions, got earnest
feedback, and learned. It felt like a group of people working to get better,
and that seemed to have no downside.
But I've seen the opposite, too. I've worked in environments where the air seemed
politically charged and competitive. Code reviews became religious wars, turf
battles, and arguments over minutiae. Morale dipped, and some people went out
of their way to find ways not to participate. Clearly no one would view this
as a productive situation.
With automated code review, no politics exist. Your review tool is, of course,
incapable of playing politics. It simply carries out its mission on your behalf.
Automating parts of the code review process -- especially something relatively arbitrary
such as coding standards compliance -- can give a team many fewer opportunities to
posture and bicker.
Learning May Be Easier
As an interpersonal activity, code review carries some social risk. If we make
a silly mistake, we worry that our peers will think less of us. This dynamic
is mitigated in environments with a high trust factor, but it exists nonetheless.
In more toxic environments, it dominates.
Having an automated code review tool creates an opportunity for consequence-free learning.
Just as the tool plays no politics, it offers no judgment. It just provides
feedback, quietly and anonymously.
Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm.
I'd imagine that anyone would, to an extent. An automated code review tool points
out mistakes via a fast feedback loop and offers consequence-free opportunity to correct
them and learn.
Catching Everything
So far I've discussed ways to cut down on politics and soothe morale, but practical
concerns also bear mentioning. An automated code review tool necessarily lacks
the judgment that a human has. But it has more thoroughness.
If your team only performs peer review as a check, it will certainly catch mistakes
and design problems. But will it catch all of them? Or is it possible
that you might miss one possible null dereference or an empty catch block? If
you automate the process, then the answer becomes "no, it is not possible."
For the items in a code review that you can automate, you should, for the sake of
thoroughness.
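As a hypothetical C# illustration, here are two of the issues just mentioned -- a possible null dereference and an empty catch block -- the kind of thing a tired reviewer can scroll past but an analyzer flags every single time. The classes are invented for the sketch.

    public class Order
    {
        public decimal Total { get; set; }
    }

    public class OrderProcessor
    {
        public void Process(Order order)
        {
            // Possible null dereference: nothing guarantees 'order' isn't null here.
            var total = order.Total;

            try
            {
                Submit(total);
            }
            catch
            {
                // Empty catch block: the failure silently disappears. Easy for a
                // human to miss on page 12 of a review; trivial for a tool to flag.
            }
        }

        private void Submit(decimal total) { /* imagine a real call here */ }
    }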
Saving Resources and Effort
Human code review requires time and resources. The team must book a room, coordinate
schedules, use a projector (presumably), and assemble in the same location.
Of course, allowing for remote, asynchronous code review mitigates this somewhat,
but it can't eliminate the salary dollars spent on the activity. However you
slice it, code review represents an investment.
In this sense, automating parts of the code review process has a straightforward business
component. Whenever possible and economical, save yourself manual labor through
automation.
When there are code quality and practice checks that can be done automatically, do
them automatically. And it might surprise you to learn just how many such things
can be automated.
Improbable as it may seem, I have sat in code reviews where people argued about whether
or not a method would exhibit a runtime behavior, given certain inputs. "Why
not write a unit test with those inputs," I asked. Nobody benefits from humans
reasoning about something the build, the test suite, the compiler, or a static analysis
tool could tell them automatically.
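A sketch of that idea in C# with xUnit: rather than reasoning aloud about what a method does with a particular input, encode the question as a test and let the suite answer. The calculator and its discount rule here are hypothetical.

    using Xunit;

    // Hypothetical method whose runtime behavior the room was arguing about.
    public static class DiscountCalculator
    {
        public static decimal ApplyDiscount(decimal price, int quantity) =>
            quantity == 0 ? 0m : price * quantity * 0.9m;
    }

    public class DiscountCalculatorTests
    {
        [Fact]
        public void ApplyDiscount_WithZeroQuantity_ReturnsZero()
        {
            // The test settles the "what happens with zero?" debate definitively.
            Assert.Equal(0m, DiscountCalculator.ApplyDiscount(price: 100m, quantity: 0));
        }
    }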
Complementary Approach
As I've mentioned throughout this post, automated code review and manual code review
do not directly compete. Humans solve some problems better than machines, and
vice versa. To achieve the best of all worlds, you need to create a complementary
code review approach.
First, understand what can be automated, or, at least, develop a good working framework
for guessing. Coding standard compliance, for instance, is a no-brainer from
an automation perspective. You do not need to pay humans to figure out whether
variable names are properly cased, so let a review tool do it for you. You can
learn more about the possibilities by simply downloading and trying out review and
analysis tools.
Secondly, socialize the tooling with the team so that they understand the distinction
as well. Encourage them not to waste time making a code review a matter of checking
things off of a list. Instead, manual code review should focus on architectural
and practice considerations. Could this class have fewer responsibilities?
Is the builder pattern a good fit here? Are we concerned about too many dependencies?
Finally, I'll offer the advice that you can adjust the balance between manual and automated
review based on the team's morale. Do they suffer from code review fatigue?
Have you noticed them sniping a lot? If so, perhaps lean more heavily on automated
review. Otherwise, use the automated review tools simply to save time on things
that can be automated.
If you're currently not using any automated analysis tools, I cannot overstate how
important it is that you check
them out. Our industry built itself entirely on the premise of automating
time-consuming manual activities. We need to eat our own dog food.
Related resources
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio for a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Learn
more how CodeIt.Right can help with automated code review and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|