|
March 2017 - Posts
-
"You never concatenate strings. Instead, always use a StringBuilder."
I feel pretty confident that any C# developer who has ever worked in a group has
heard this admonition at least once. This represents one of those bits of developer
wisdom that the world expects you to just memorize. Over the course of
your career, these add up. And once they do, grizzled veterans engage in a sort
of comparative jousting for rank. The internet
encourages them and eggs them on.
"How can you call yourself a senior C# developer and not know how to serialize
objects to XML?!"
With two evenly matched veterans swinging language swords at one another, this volley
may continue for a while. Eventually, though, one falters and pecking order
is established.
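For the record, the admonition does carry a kernel of truth, at least inside loops. Each concatenation allocates a brand new string, while a StringBuilder appends into a growable buffer. A quick sketch of the two approaches:

```csharp
using System.Text;

static class StringExamples
{
    // Concatenation in a loop copies the accumulated string on every pass.
    static string Concatenate(string[] parts)
    {
        string result = string.Empty;
        foreach (string part in parts)
            result += part;
        return result;
    }

    // StringBuilder appends into a growable internal buffer instead.
    static string Build(string[] parts)
    {
        var builder = new StringBuilder();
        foreach (string part in parts)
            builder.Append(part);
        return builder.ToString();
    }
}
```

For a handful of one-off concatenations, though, the difference rarely matters -- a nuance the jousting veterans tend to leave out.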
Static Analyzers to the Rescue
I must confess. I tend to do horribly at this sort of thing. Despite having
relatively good memory retention ability in theory, I have a critical Achilles' heel
in this regard. Specifically, I can only retain information that interests me.
And building up a massive arsenal of programming language "how-could-yous" for dueling
purposes just doesn't interest me. It doesn't solve any problem that
I have.
And, really, why should it? Early in my career, I figured out the joy of static
analyzers in pretty short order. Just as the ubiquity of search engines means
I don't need to memorize algorithms, the presence of static analyzers saves me from
cognitively carrying around giant checklists of programming sins to avoid. I
rejoiced in this discovery. Suddenly, I could solve interesting problems
and trust the equivalent of programmer spell check to take care of the boring stuff.
Oh, don't get me wrong. After the analyzers slapped me, I internalized the lessons.
But I never bothered to go out of my way to do so. I learned only in response
to an actual, immediate problem. "I don't like seeing warnings, so let me figure
out the issue and subsequently avoid it."
My Coding Provincialism
This general modus operandi caused me to respond predictably when I first encountered
the idea of globalization in a programming language.  "Wait, so this helps when?  If someone
theoretically deploys code to some other country? And, then, they might see
dates printed in a way that seems strange to them? Huh."
For many years, this solved no actual problem that I had. Early in my career,
I wrote software that people deployed in the US. Much of it had no connectivity
functionality.  Heck, a lot of it didn't even have a user interface.  Worst case, I might have to account for some log file's timestamps landing in Mountain Time or something.
Globalization solved no problem that I had. So when I heard rumblings about
the "best practice," I generally paid no heed. And, truth be told, nobody suffered.
With the software I wrote for many years, this would have constituted a premature
optimization.
But it nevertheless instilled in me a provincialism regarding code.
A Dose of Reality
I've spent my career as a polyglot. And so at one point, I switched jobs, and
it took me from writing Java-based web apps to a desktop app using C# and WPF.
This WPF app happened to have worldwide distribution. And, when I say worldwide,
I mean just about every country in the world.
Suddenly, globalization went from "premature optimization" to "development table stakes."
And the learning curve became steep. We didn't just need to account for
the fact that people might want to see dates where the day, rather than the month,
came first.  The GUI needed translation into dozens of languages, selectable via a menu setting.
This included languages with text read from right to left.
How did I deal with this? At the time, I don't recall having the benefit of
a static analyzer that helped in this regard.  FxCop may have provided some relief,
but I don't recall one way or the other. Instead, I found myself needing to study and
laboriously create mental checklists. This "best practice" knowledge hoarding
suddenly solved an immediate problem. So, I did it.
CodeIt.Right's Globalization Features
Years have passed since then.  I've had several jobs in the interim, and, as a solo
consultant, I've had dozens of clients and gigs. I've lost my once encyclopedic
knowledge of globalization concerns. That happened because -- you guessed it
-- it no longer solves an immediate problem that I have.
Oh, I'd probably do better with it now than I did in the past. But I'd still
have to re-familiarize myself with the particulars and study up once again in order
to get it right, should the need arise. Except, these days, I could enlist
some help. CodeIt.Right,
installed on my machine, will give me the heads up I didn't have those years ago.
It has a number of globalization rules built right in.  Specifically, it will remind you about the following concerns.  I'll just list them here, saving detailed
explanations for a future "CodeIt.Right Rules, Explained" post.
-
Specify culture info
-
Specify string comparison (for culture)
-
Do not pass literals as localized parameters
-
Normalize strings to uppercase
-
Do not hard code locale specific strings
-
Use ordinal string comparison
-
Specify marshaling for PInvoke string arguments
-
Set locale for data types
That provides an excellent head start on getting savvy with globalization.
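To make a couple of those rules concrete, consider culture-sensitive formatting and comparison. The following sketch shows the sort of explicit intent the rules push you toward (the method and variable names are illustrative):

```csharp
using System;
using System.Globalization;

static class GlobalizationExamples
{
    static void Demonstrate()
    {
        var timestamp = new DateTime(2017, 3, 1);

        // "Specify culture info": state explicitly which culture formats the
        // date, rather than silently inheriting the current thread's culture.
        string forUsers = timestamp.ToString("d", CultureInfo.CurrentCulture);
        string forLogs = timestamp.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);

        // "Use ordinal string comparison": compare internal identifiers by raw
        // character value instead of culture-specific collation rules.
        bool sameKey = string.Equals("CONFIG", "config",
            StringComparison.OrdinalIgnoreCase);
    }
}
```

Whether you want the current culture or the invariant culture depends on whether a human or another program consumes the string -- but either way, you say so explicitly.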
The Takeaway
Throughout the post, I've talked about my tendency not to bother with things that
don't solve immediate problems for me. I realize philosophical differences in
approach exist, but I stand by this practice to this day. And I don't say this
only because of time savings and avoiding premature optimization. Storing up
an arsenal of specific "best practices" in your head threatens to entrench you in
your ways and to establish an approach of "that's just how you do it."
And yet, not doing this can lead to making rookie mistakes and later repeating them.
But, for me, that's where automated tooling enters the picture. I understand
the globalization problem in theory. That I have not forgotten.
And I can use a tool like CodeIt.Right to
bridge the gap between theory and specifics in short order, creating just-in-time
solutions to problems that I have.
So to conclude the post, I would offer the following in takeaway. Stop memorizing
all of the little things you need to check for at the method level in coding. Let
tooling do that for you, so that you can keep big picture ideas in your head.
I'd say, "don't lose sight of the forest for the trees," but with tooling, you can
see the forest and the trees.
Learn
more about how CodeIt.Right can help you automate code reviews, improve your code quality,
and ensure your code is globalization ready.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'd like to offer a somewhat lighthearted treatment to a serious topic.
I generally find that this tends to offer catharsis to the frustrated. And the
topic of code review tends to lead to lots of frustration.
When talking about code review, I always make sure to offer a specific distinction.
We can divide code reviews into two mutually exclusive buckets: automated and manual.
At first, this distinction might sound strange. Most readers probably think
of code reviews as activities with exclusively human actors. But I tend to disagree.
Any static analyzer (including the compiler) offers feedback. And some tools,
like CodeIt.Right,
specifically regard their suggestions and automated fixes as an automation of the
code review process.
I would argue that automated code review should definitely factor into your code review
strategy. It takes the simple things out of the equation and lets the humans
involved focus on more complex, nuanced topics. That said, I want to ignore
the idea of automated review for the rest of the post. Instead, I'll talk exclusively
about manual code reviews and, more specifically, where they tend to get ugly.
You should absolutely do manual code reviews. Full stop. But you
should also know that they can easily go wrong and devolve into useless or even toxic
activities. To make them effective, you need to exercise vigilance with them.
And, toward that end, I'll talk about some manual code review anti-patterns.
The Gauntlet
First up, let's talk about a style of review that probably inspires the most disgust
among former participants. Here, I'm talking about what I call "the gauntlet."
In this style of code review, the person submitting for review comes to a room with
a number of self-important, hyper-critical peers. Of course, they might not
view themselves as peers. Instead, they probably imagine themselves as a panel
of judges for some reality show.
From this 'lofty' perch, they attack the reviewee's code with a malevolent glee.
They adopt a derisive tone and administer the third degree. And, frankly, they
crush the spirit of anyone subject to this process, leaving low morale and resentment
in their wake.
The Marathon
Next, consider a less awful, but still ineffective, style of code review.  This one
I call "the marathon." I bet you can predict what I mean by this.
In the marathon code review, the participants sit in some conference room for hours.
It starts out as an enthusiastic enough affair, but as time passes, people's energy
wanes. Nevertheless, it goes on because of an edict that all code needs review
and because everyone waited until the 11th hour.  And predictably, reviews get less careful as time goes on and people space out.
Marathon code reviews eventually reach a point where you might as well not bother.
The Scattershot Review
Scattershot reviews tend to occur in organizations without much rigor around the code
review process.  Perhaps their process does not formally include code review.  Or, maybe, it offers no more specifics than "do it."
With a scattershot review process, the reviewer demonstrates no consistency or predictability
in the evaluation. One day he might suggest eliminating global variables, and
on another day, he might advocate for them. Or, perhaps the variance occurs
depending on reviewer. Whatever the specifics, you can rest assured you'll never
receive the same feedback twice.
This approach to code review can cause some annoyance and resentment. But morale
issues typically take a backseat to simple ineffectiveness and churn in approach.
The Exam
Some of these can certainly coincide. In fact, some of them will likely coincide.
So it goes with "the exam" and "the gauntlet." But while the gauntlet focuses
mostly on the process of the review, the exam focuses on the outcome.
Exam code reviews occur when the parlance around what happens at the end involves
"pass or fail." If you hear people talking about "failing" a code review, you
have an exam on your hands.
Code review should involve a second set of eyes on something to improve it.
For instance, imagine that you wrote a presentation or a whitepaper. You might
ask someone to look it over and proofread it to help you improve it. If they
found a typo, they wouldn't proclaim that you had "failed." They'd just offer
the feedback.
Treating code reviews as exams generally hurts morale and causes the team to lose
out on the collaborative dynamic.
The Soliloquy
The review style I call "the soliloquy" involves paying lip service to the entire
process. In literature, characters offer soliloquies when they speak their thoughts
aloud regardless of anyone hearing them. So it goes with code review styles
as well.
To understand what I mean, think of times in the past where you've emailed someone
and asked them to look at a commit. Five minutes later, they send back a quick,
"looks good." Did they really review it? Really? You
have a soliloquy when you find yourself coding into the vacuum like this.
The downside here should be obvious. If people spare time for only a cursory
glance, you aren't really conducting code reviews.
The Alpha Dog
Again, you might find an "alpha dog" in some of these other sorts of reviews.
I'm looking at you, gauntlet and exam. With an alpha dog code review, you have
a situation where a particularly dominant senior developer rules the roost with the
team. In that sense, the title refers both to the person and to the style of
review.
In a team with a clear alpha dog, that person rules the codebase with an iron fist.
Thus the code review becomes an exercise in appeasing the alpha dog. If he is
present, this just results in him administering a gauntlet. But, even absent,
the review goes according to what he may or may not like.
This tends to lead team members to a condition known as "learned
helplessness," wherein they cease bothering to make decisions without the alpha
dog. Obviously, this stunts their career development, but it also has a pragmatic
toll for the team in the short term. This scales terribly.
The Weeds
Last up, I'll offer a review issue that I'll call "the weeds." This can happen
in the most well meaning of situations, particularly with folks that love their craft.
Simply put, they get "into the weeds."
What I mean by this colloquialism is that they get bogged down in details at the expense
of the bigger picture. Obviously, an exacting alpha dog can drag things into
the weeds, but so can anyone. They might wind up with a lengthy digression about
some arcane language point, of interest to all parties, but not critical to shipping
software.  And typically, this happens with things that you ought to make matters of procedure, or even to address with your automated code reviews.
The biggest issue with a "weeds" code review arises from the poor use of time.
It causes things to get skipped, or else it turns reviews into marathons.
Getting it Right
How to get code reviews right could easily occupy multiple posts. But I'll close
by giving a very broad philosophical outlook on how to approach it.
First of all, make sure that you get clarity up front around code review goals, criteria,
and conduct. This helps to stop anti-patterns wherein the review gets off track
or bogged down. It also prevents soliloquies and somewhat mutes bad behavior.
But, beyond that, look at code reviews as collaborative, voluntary sessions where
peers try to improve the general codebase. Some of those peers may have more
or less experience, but everyone's opinion matters, and it's just that -- an opinion for
the author to take under advisement.
While you might cringe at the notion that someone less experienced might leave something
bad in the codebase, trust me. The damage you do by allowing these anti-patterns
to continue in the name of "getting it right" will be far worse.
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'll do another installment of the CodeIt.Right
Rules, Explained series. I have now made four such posts in this series.
And, as always, I'll start off by citing my two personal rules about static analysis
guidance.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I'm playing rhetorical games here. After all, I could
simply say, "learn the reasoning behind all suggested fixes." But I want to
underscore the decision you face when confronted with static analysis feedback.
In all cases, you must actively choose to ignore the feedback or address it.
And for both options, you need to understand the logic behind the suggestion.
In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules
today.
Type that contains only static members should be sealed
Let's start here with a quick example.
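In code form, that looks something like the following (the class and method names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

// A class with nothing but static members -- and no "sealed" modifier.
public class LinqUtils
{
    public static IEnumerable<T> WhereNotNull<T>(IEnumerable<T> source)
        where T : class
    {
        return source.Where(item => item != null);
    }
}
```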
Here, I've laid a tiny seed for a Swiss Army Knife, "utils" class. Presumably,
I will continue to dump any method I think might help me with Linq into this class.
But for now, it contains only a single method to make things easy to understand.
(As an aside, I discourage "utils" classes as a practice. I'm using this example
because everyone reading has most assuredly seen one of these things at some point.)
When you run CodeIt.Right analysis on this code, you will find yourself confronted
with a design issue. Specifically, "types that contain only static members should
be sealed."
You probably won't have a hard time discerning how to remedy the situation.
Adding the "sealed" modifier to the class will do the trick. But why does CodeIt.Right
object?
The Microsoft
guidelines contain a bit more information. They briefly explain that static
analyzers make an inference about your design intent, and that you can better communicate
that intent by using the "sealed" keyword. But let's unpack that a bit.
When you write a class that has nothing but static members, such as a static utils
class, you create something with no instantiation logic and no state. In other
words, you could instantiate "a LinqUtils," but you couldn't do anything
with it. Presumably, you do not intend that people use the class in that way.
But what about other ways of interacting with the class, such as via inheritance?
Again, you could create a LinqUtilsChild that inherited from LinqUtils, but
to what end?  Polymorphism requires instance members, and none exist here.
The inheriting class would inherit absolutely nothing from its parent, making the
inheritance awkward at best.
Thus the intent of the rule.  You can think of it as telling you the following.
"You're obviously not planning to let people use inheritance with you, so don't even
leave that door open for them to possibly make a mistake."
So when you find yourself confronted with this warning, you have a simple bit of consideration.
Do you intend to have instance behavior? If so, add that behavior and the warning
goes away. If not, simply mark the class sealed.
Async methods should have async suffix
Next up, let's consider a rule in the naming category.  Specifically, when you name an async method without suffixing "Async" onto its name, you see this warning.
Microsoft declares
this succinctly in their guidelines.
By convention, you append "Async" to the names of methods that have an async modifier.
So, CodeIt.Right simply tells us that we've run afoul of this convention. But,
again, let's dive into the reasoning behind this rule.
When Microsoft introduced this programming paradigm, they did so in a non-breaking
release. This caused something of a conundrum for them because of a perfectly
understandable language rule stating that method overloads cannot vary only by a return
type.  To take advantage of the new language feature, library authors would need to offer the new, async methods while preserving backward compatibility with existing method calls.
This put them in the position of needing to give the new, async methods different
names. And so Microsoft offered guidance on a convention for doing so.
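The resulting pattern looks something like this sketch (the class and method names are hypothetical):

```csharp
using System.IO;
using System.Threading.Tasks;

public class MessageReader
{
    private readonly Stream _stream;

    public MessageReader(Stream stream)
    {
        _stream = stream;
    }

    // The original, synchronous API stays put for existing callers.
    public int Read(byte[] buffer)
    {
        return _stream.Read(buffer, 0, buffer.Length);
    }

    // The async counterpart can't overload Read by return type alone,
    // so it takes the conventional "Async" suffix instead.
    public async Task<int> ReadAsync(byte[] buffer)
    {
        return await _stream.ReadAsync(buffer, 0, buffer.Length);
    }
}
```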
I'd like to make a call-out here with regard to my two rules at the top of each post.
This convention came about because of expediency and now sticks around for convention's
sake. But it may bother you that you're asked to bake a keyword into the name
of a method. This might trouble you in the same way that a method called "GetCustomerNumberString()"
might bother you. In other words, while I don't advise you go against convention,
I will say that not all warnings are created equally.
Always define a global error handler
With this particular advice, we dive into warnings specific to ASP. When you
see this warning, it concerns the Global.asax file. To understand a bit more
about that, you can
read this Stack Overflow question.  In short, Global.asax allows you to define responses to "system level" events in a single place.
CodeIt.Right is telling you to define just such an event -- specifically one in response
to the "Application_Error" event. This event occurs whenever an exception bubbles
all the way up without being trapped anywhere by your code.  And, that's
a perfectly reasonable state of affairs -- your code won't trap every possible
exception.
CodeIt.Right wants you to define a default behavior on application errors. This
could mean something as simple as redirecting to a page that says, "oops, sorry about
that." Or, it could entail all sorts of robust, diagnostic information.
The important thing is that you define it and that it be consistent.
You certainly don't want to learn from your users what your own application does in
response to an error.
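In Global.asax.cs, a minimal sketch of such a handler might look like this (the logging call and redirect target stand in for whatever your application actually needs):

```csharp
// Global.asax.cs -- a minimal sketch, not a production error strategy.
protected void Application_Error(object sender, EventArgs e)
{
    Exception lastError = Server.GetLastError();

    // Record the details somewhere durable before clearing the error.
    // (LogError is a placeholder for your logging framework of choice.)
    LogError(lastError);

    // Clear the error so ASP.NET doesn't also show the default error page,
    // then send the user somewhere consistent and friendly.
    Server.ClearError();
    Response.Redirect("~/Error");
}
```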
So spend a bit of time defining your global error handling behavior.  By all
means, trap and handle exceptions as close to the source as you can. But always
make sure to have a backup plan.
Until Next Time
In this post, I ran the gamut across concerns. I touched on an object-oriented
design concern. Then, I went into a naming consideration involving async, and,
finally, I talked specifically about ASP programming considerations.
I don't have a particular algorithm for the order in which I cover these subjects.
But, I like the way this shook out. It goes to show you that CodeIt.Right covers
a lot of ground, across a lot of different landscapes of the .NET world.
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
GhostDoc version 5.5 delivers compatibility with VS2017 RTM as well as a number of
fixes:
-
VS2017 RTM support
-
GhostDoc is now also available as VSIX for VS2017
-
Documentation Hints no longer visible in the Debug mode
-
Fixed issue wrapping lines within the <value></value> tag
-
In the Offline Activation Preview - the fields are now auto-selected on focus/click
for easy copying
-
GhostDoc is no longer adding an extra line when re-documenting a header in VB
-
GhostDoc is no longer appending generated XML comments to the existing comment when
using auto-generated properties in VB
For the complete list of changes, please see What's New in GhostDoc v5.
For an overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features.
Download the new build at http://submain.com/download/ghostdoc/
|
-
I have long since cast my lot with the software industry. But, if I were going
to make a commercial to convince others to follow suit, I can imagine what it would
look like. I'd probably feature cool-looking, clear whiteboards, engaged people,
and frenetic design of the future. And a robot or two. Come help us build
the technology of tomorrow.
Of course, you might later accuse me of bait and switch. You entered a bootcamp,
ready to build the technology of tomorrow. Three years later, you found yourself
on safari in a legacy code jungle, trying to wrangle some SharePoint plugin.
Erik, you lied to me.
So, let me inoculate myself against that particular accusation. With a career
in software, you will certainly get to work on some cool things. But you will
also find yourself doing the decidedly less glamorous task of software maintenance.
You may as well prepare yourself for that now.
The Conceptual Difference: Build vs Maintain
From the software developer's perspective, this distinction might evoke various contrasts.
Fun versus boring. Satisfying versus annoying. New problem versus solved
problem.  My stuff versus that of some guy named Steve who apparently worked
here 8 years ago. You get the idea.
But let's zoom out a bit. For a broader perspective, consider the difference
as it pertains to a business.
Build
mode (green field) means a push toward new capability. Usually, the business
will regard construction of this capability as a project with a calculated return
on investment (ROI). To put it more plainly, "we're going to spend $500,000
building this thing that we expect to make/save us $1.5 million by next year."
Maintenance mode, on the other hand, presents the business with a cost
center. They've now made their investment and (at least partially)
realized a return on it.  The maintenance team just hangs around to prevent backslides.
For instance, should maintenance problems crop up, you may lose customers or efficiency.
Plan of Attack: Build vs Maintain
Because the business regards these activities differently, it will attack them differently.
And, while I can't speak to every conceivable situation, my consulting work has shown
me a wide variety.  So I can speak to general trends.
In green field mode, the business tends to regard the work as an investment.
So, while management might dislike overruns and unexpected costs, they will tend to
tolerate them more. Commonly, you see a "this will pay off later" mentality.
On the maintenance side of things, you tend to see far less forgiveness. Certainly,
all parties forgive unexpected problems a lot less easily. They view all of
it as a burden.
This difference in attitude translates to the planning as well. Green field
projects justifiably command full time people for the duration of the project.
Maintenance mode tends to get you familiar with the curious term "half of a person."
By this, I mean you hear things like "we're done with the Sigma project, but someone
needs to keep the lights on. That'll be half of Alice." The business grudgingly
allocates part time duty to maintenance tasks.
Why? Well, maintenance tends to arise out of reactive scenarios.
Reactive Mode and the Value of Automation
Maintenance mode in software will have some planned activities, particularly if it
needs scheduled maintenance. But most maintenance programmers find themselves
in a reactive, "wait and see" kind of situation. They have little to do on the
project in question until an outage happens, someone discovers a bug, or a customer
requests a new feature. Then, they spring into action.
Business folks tend to hate this sort of situation. After all, you need to plan
for this stuff, but you might have someone sitting around doing nothing. It
is from this fundamental conundrum that "half people" and "quarter people" arise.
Maintenance programmers usually have other stuff to juggle along with maintaining
"Sigma."
Because of this double duty, the business doubles down on pressure to minimize maintenance.
After all, not only does it create cost, but it takes the people away from other,
profit-driven things that they could otherwise do.
So how do we, as programmers, and we, as software shops, best deal with this?
We make maintenance as turnkey as possible by automating as much as possible.
Oh, and you should automate this stuff during green field time, when management is
willing to invest. If you tell them it means less maintenance cost, they'll
probably bite.
Automate the Test Suite
First up for automation candidates, think of the codebase's test suite. Hopefully,
you've followed my advice and built this during green field mode. But, if not,
it's never too late to start.
Think of how time consuming a job QA has. If manually running the software and
conducting experiments constitutes the entirety of your test strategy, you'll find
yourself hosed at maintenance time. With "half a person" allocated, no one has
time for that. Without an automated suite, then, testing falls by the wayside,
making your changes to a production system even more risky.
You need to automate a robust test suite that lets you know if you have broken anything.
This becomes even more critical when you consider that most maintenance programmers
haven't touched the code they modify in a long time, if ever.
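Even a modest suite pays dividends here. A hedged sketch of what one such test might look like, in the xUnit style (the class under test is deliberately tiny and entirely hypothetical):

```csharp
using System.Linq;
using Xunit;

// A deliberately tiny class under test -- illustrative only.
public class InvoiceCalculator
{
    private readonly decimal _taxRate;

    public InvoiceCalculator(decimal taxRate)
    {
        _taxRate = taxRate;
    }

    public decimal Total(decimal[] lineItems)
    {
        return lineItems.Sum() * (1 + _taxRate);
    }
}

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_includes_line_items_and_tax()
    {
        var calculator = new InvoiceCalculator(taxRate: 0.08m);

        Assert.Equal(32.4m, calculator.Total(new[] { 10m, 20m }));
    }
}
```

Run on a build server, a suite like this tells the "half person" on maintenance duty whether a change broke anything, without requiring them to remember how the whole system works.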
Automate Code Reviews
If I were to pick a one-two punch for code quality, that would involve unit tests
and code review. Therefore, just as you should automate your test suite, you
should automate
your code review as well.
If you think testing goes by the wayside in an under-staffed, cost-center model, you
can forget about peer review altogether. During the course of my travels, I've
rarely seen code review continue into maintenance mode, except in regulated industries.
Automated
code review tools exist, and they don't require even "half a person." An
automated code review tool serves its role without consuming bandwidth. And,
it provides maintenance programmers operating in high risk scenarios with a modicum
of comfort and safety net.
Automate Production Monitoring
For my last maintenance mode automation tip of the post, I'll suggest that you automate
production monitoring capabilities. This covers a fair bit of ground, so I'll
generalize by saying these include anything that keeps your finger on the pulse of
your system's production behavior.
You have logging, no doubt, but do you monitor the logs? Do you keep track of
system outages and system load? If you roll software to production, do you have
a system of checks in place to know if something smells fishy?
You want to make the answer to these questions, "yes." And you want to make
the answer "yes" without you needing to go in and manually check. Automate various
means of monitoring your production software and providing yourself with alerts.
This will reduce maintenance costs across the board.
Automate Anything You Can
I've listed some automation examples that come to mind as the most critical, based
on my experience. But, really, you should automate anything around the maintenance
process that you can.
Now, you might think to yourself, "we're programmers, we should automate everything."
Well, that subject could make for a whole post in and of itself, but I'll speak briefly
to the distinction. Build mode usually involves creating something from nothing
on a large scale. While you can automate the scaffolding around this activity,
you'll struggle to automate a significant amount of the process.
But that ratio gets much better during maintenance time. So the cost center
nature of maintenance, combined with the higher possible automation percentage, makes
it a rich target. Indeed, I would argue that strategic automation defines the
art of maintenance.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.
Related resources
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|