-
Many of us have a natural tendency to let little things pile up. This gives
rise to the notion of the so-called spring cleaning. The weather turns warm
and going outside becomes reasonable, so we take the opportunity to do some kind of
deep cleaning.
Of course, this may not apply to you. Perhaps you keep your house impeccable
at all times, or maybe you simply have a cleaning service. But I'll bet that,
in some part of your life or another, you put little things off until they become
bigger things. Your cruft may not involve dusty shelves and pockets of house
clutter, but it probably exists somewhere.
Maybe it exists in your professional life in some capacity. Perhaps you have
a string of half written blog posts, or your inbox has more than a thousand messages.
And, if you examine things honestly, you almost certainly have some item that has
been skulking around your to-do list for months. Somewhere, we all have items
that could use some tidying, cognitive or physical.
With that in mind, I'd like to talk about your code review process. Have you
been executing it like clockwork for months or years? Perhaps it has become too much
like clockwork. Turn a critical eye to it, and you might realize elements of
it have become stale or superfluous. So let's take a look at how you can apply
a spring cleaning to your code review process.
Beware The Cargo Cult
During World War II, the Allies set up a temporary air base on an island in the Pacific
Ocean. The people living on the island observed the ground controllers waving
at inbound planes to help them land. Supplies then followed. Not understanding
the purpose of this ritual or the mechanics of airplanes, the locals learned that
making these motions brought planes with supplies. So after the Allies left,
they mimicked the behavior, hoping for additional resources. This execution
of ritual without understanding earned the designation "cargo cult."
In the world of software development, cargo
cult programming involves adding code without understanding what it does.
You added it once, good things happened, so now you always add it. You can think
of this as a special case of programming
by coincidence. And it's something you should avoid.
But cargo cult mentality can crop up in a code review as well. Do you find your
team calling out 'issues' during the review, but, if pressed, nobody could articulate
why those are issues? If so, you have a cargo cult practice, and you should
cull it.
Going Over the Same Stuff Repetitively
Let's say that your team performs code review on a regular basis. Does this
involve an ongoing, constant uplift? In other words, do you find learning spreads
among the team, and you collectively sharpen your game and constantly improve?
Or do you find that the team calls out the same old issues again and again?
If every code review involves noticing a method parameter dereference and saying,
"you'll get an exception if someone passes in null," then you have stagnation.
Think of this as a team smell. Why do people keep making the same mistake over
and over again? Why haven't you somehow operationalized a remedy? And,
couldn't someone have automated this?
Keep an eye out for this sort of thing. If you notice it, pause and do some
root cause analysis. Don't just fix the issue itself -- fix it so the issue
stops happening.
Inconsistency in Reviews
Another common source of woe arises from inconsistency in the code review process.
Not only does this result in potential issues within the code, but it also threatens
to demoralize members of the team. Imagine attending a review and having someone
admonish you to add logging calls to all of your methods. But then, during the
next review, someone gives you a hard time about logging too much. Enough of
that nonsense and team members start updating their resumes rather than their methods.
And inconsistency can mean more than just different review styles from different people
(or the same person on different days, varying by mood). You might find that
your team's behavior and suggestions during review have become out of sync with a
formal document like the team's coding standard. Whatever the source, inconsistency
creates drag for your team.
Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.
Round up the team members and make sure they all have the same philosophies at code
review time. And then, make sure that unified philosophy lines up with anything
documented.
Cut Out the Nitpicking
I've yet to see an organization where interpersonal code review didn't become at least
a little political. That makes sense, of course. In essence, you're talking
about an activity where people get together and offer (hopefully) constructive professional
criticism.
Because of the politics, interpersonal code review can degenerate and lead to infighting
in numerous ways. Chief among these, I've found, is excessive nitpicking.
If team members perceive the activity as a never ending string of officious criticism,
they start to hate coming to work.
On top of that, people can only internalize so many lessons in a sitting. After
a while, they start to tune out or get tired. So make the takeaways from the
code review count. Even if they haven't gotten every little thing just so, pick
your battles and focus on big things. And I file this under spring cleaning
since it generally requires a concerted mental adjustment and since it will clear
some of the cruft out of your review.
Automate, Automate, Automate
I will conclude by offering what I consider the most important item for any code review
spring cleaning. If the other suggestions involved metaphorical shelf dusting
and shower scrubbing, think of this one as completely cleaning out an entire room
that you had loaded with junk.
So much of the time teams spend in code review seems to trend toward picking at nits.
But even when it involves more substantive considerations, many of these considerations
could be automatically detected. The team wastes precious time peering at the
code and playing static analyzer. Stop this!
Spruce up your review process by automating as much of it as humanly possible.
You should constantly ask yourself if the issue you're discussing could be automatically
detected (and fixed). If you think it could, then do it. And, as part
of your spring cleaning, knock out as many of these as possible.
Save human-centric code review for design considerations, architectural discussions,
and big picture issues. Don't bog yourself down in cruft. You'll all feel
a lot cleaner and happier for it, just as you would after any spring cleaning.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio as a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Related resources
Learn
more about how CodeIt.Right can help you automate code reviews and improve the quality of
your code.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'll do another installment of the CodeIt.Right
Rules, Explained series. This is post number five in the series. And,
as always, I'll start off by citing my two personal rules about static analysis guidance,
along with the explanation for them.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I'm playing rhetorical games here. After all, I could
simply say, "learn the reasoning behind all suggested fixes." But I want to
underscore the decision you face when confronted with static analysis feedback.
In all cases, you must actively choose to ignore the feedback or address it.
And for both options, you need to understand the logic behind the suggestion.
In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules
today.
Mark ISerializable Types with "Serializable" Attribute
If you run across this rule, you might do so while writing an exception class.
For example, the following small bit of code in a project of mine triggers it.

public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex)
        : base(message, ex)
    {
    }
}
It seems pretty innocuous, right? Well, let's take a look at what went wrong.
The rule actually describes its own solution pretty well. Slap a serializable
attribute on this exception class and make the tool happy. But who cares?
Why does it matter if you don't mark the exception as serializable?
To understand the issue, you need awareness of a concept called "application
domains" within the .NET framework. Going into much detail about this would
take us beyond the scope of the post. But suffice it to say, "application domains
provide an isolation boundary for security, reliability, and versioning, and for unloading
assemblies." Think two separate processes running and collaborating.
If some external process will call your code, it won't access and deal with your objects
the same way that your own code will. Instead, it needs to communicate by serializing
the object and passing it along as if over some remote service call. In the
case of the exception above, it lacks the attribute marking it explicitly as serializable,
in spite of implementing that interface. So bad things will happen at runtime.
And this warning exists to give you the heads up.
If you'll only ever handle this exception within the same app domain, it won't cause
you any heartburn. But, then again, neither will adding an attribute to your
class.
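For concreteness, here's a sketch of the fix applied to the class from earlier. The serialization constructor is conventional for serializable exceptions, though the rule itself only demands the attribute.

using System;
using System.Runtime.Serialization;

[Serializable]
public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex)
        : base(message, ex)
    {
    }

    // Conventionally included so the type can rehydrate across app domains.
    protected GithubQueryingException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
    }
}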
Do Not Handle Non-CLS-Compliant Exceptions
Have you ever written code that looks something like this?

try
{
    DoSomething();
    return true;
}
catch
{
    return false;
}
In essence, you want to take a stab at doing something and return true if it goes
well and false if anything goes wrong. So you write code that looks something
like the above.
If you have, you'll run afoul of the CodeIt.Right rule, "do not handle non-cls-compliant
exceptions." You might find this confusing at first blush, particularly if you
code exclusively in C# or Visual Basic. This would confuse you because you cannot
throw exceptions not compliant with the common language specification (CLS).
All exceptions you throw inherit from the Exception class and thus conform.
However, in the case of native code written in, say, C++, you can actually
throw non-CLS-compliant exceptions. And this code will catch them because you've
said "catch anything that comes my way." This earns you a warning.
The CodeIt.Right warning here resembles one telling you not to catch the general exception
type. You want to be intentional about what exceptions you trap, rather than
casting an overly wide net. You can fix this easily enough by specifying the
actual exception you anticipate might occur.
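Sketching that fix against the earlier snippet, with an exception type chosen purely for illustration:

try
{
    DoSomething();
    return true;
}
catch (InvalidOperationException) // trap only the failure you actually expect
{
    return false;
}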
Async Methods Should Return Task or Task<T>
As of .NET Framework 4.5, you can use the async
keyword to allow invocation of an asynchronous operation. For example, imagine
that you had a desktop GUI app and you wanted to populate a form with data.
But imagine that acquiring said data involved doing an expensive and time consuming
call over a network.
With synchronous programming, the call out to the network would block, meaning
that everything else would grind to a halt to wait on the network call... including
the GUI's responsiveness. That makes for a terrible user experience. Of
course, we solved this problem long before the existence of the async keyword.
But we used laborious threading solutions to do that, whereas the async keyword makes
this more intuitive.
Roughly speaking, designating a method as "async" indicates that you can dispatch
it to conduct its business while you move on to do other things. To accomplish
this, the method synchronously returns something called a Task, which acts as a placeholder
and a promise of sorts. The calling method keeps a reference to the Task and
can use it to get at the result of the method, once the asynchronous operation completes.
But that only works if you return a Task or Task<T>. If, instead, you
create a void method and label it asynchronous, you have no means to get at it later
and no means to explicitly wait on it. There's a good chance this isn't what
you want to do, and CodeIt.Right lets you know that. In the case of an event
handler, you might actually want to do this, but better safe than sorry. You
can fix the violation by returning a non-parameterized Task rather than declaring
the method void.
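As a minimal sketch of the difference (the class and method bodies here are placeholders):

using System.Threading.Tasks;

public class Example
{
    // Violation: callers get no handle to await and no way to observe exceptions.
    public async void SaveFireAndForget()
    {
        await Task.Delay(1000); // stands in for real asynchronous work
    }

    // Compliant: the returned Task lets callers await completion.
    public async Task SaveAsync()
    {
        await Task.Delay(1000);
    }
}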
Until Next Time
This post covered some interesting language and framework features. We looked
at the effect of crossing app domain boundaries and what that does to the objects
whose structure you can easily take for granted. Then we went off the beaten
path a little by looking at something unexpected that can happen at the intersection
of managed and native code. And, finally, we delved into asynchronous programming
a bit.
As we wander through some of these relatively far-reaching concerns, it's nice to
see that CodeIt.Right helps
us keep track. A good analysis tool not only helps you catch mistakes, but it
also helps you expand your understanding of the language and framework.
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
I like variety. In pursuit of this preference, I spend some time management
consulting with enterprise clients and some time volunteering for "office hours" at
a startup incubator. Generally, this amounts to serving as "rent-a-CTO" for
startup founders in half hour blocks. This provides me with the spice of life,
I guess.
As disparate as these advice forums might seem, they often share a common theme.
Both in the impressive enterprise buildings and the startup incubator conference rooms,
people ask me about offshoring application development. To go overseas or not
to go overseas? That, quite frequently, is the question (posed to me).
I find this pretty difficult to answer absent additional information. In any
context, people asking this bake two core assumptions into their question. What
they really want to say would sound more like this. "Will I suffer for the choice
to sacrifice quality to save money?"
They assume first that cheaper offshore work means lower quality. And then they
assume that you can trade quality for cost as if adjusting the volume dial in your
car. If only life worked this simply.
What You Know When You Offshore
Before going further, let's back up a bit. I want to talk about what you actually know
when you make the decision to pay overseas firms a lower rate to build software.
But first, let's dispel these assumptions that nobody can really justify.
Understand something unequivocally. You cannot simply exchange units of "quality"
for currency. If you ask me to build you a web app, and I tell you that I'll
do it for $30,000, you can't simply say, "I'll give you $15,000 to build one-half
as good." I mean, you could say that. But you'd be saying something
absurd, and you know it. You can reasonably adjust cost by cutting scope, but
not by assuming that "half as good" means "twice as fast."
Also, you need to understand that "cheap overseas labor" doesn't necessarily mean
lower quality. Frequently it does, but not always. And it does not even happen frequently
enough that you can just bank on it.
So what do you know when you contract with an inexpensive, overseas provider?
Not a lot, actually. But you do know that your partner will work with you mainly
remotely, across a great deal of distance, and with significant communication obstacles.
You will not collaborate as closely with them as you would with an employee or a local
vendor.
The (Non) Locality Conundrum
So you have a limited budget, and you go shopping for offshore app dev. You
go in knowing that you may deal with less skilled developers. But honestly,
most people dramatically overestimate the importance of that concern.
What tends to torpedo these projects lies more in the communication gulf and less
in the skill. You give them wireframes and vague instructions, and they come
back with what they think you want. They explain their deliveries with passable
English in emails sent at 2:30 AM your time. This collaboration proves taxing
for both parties, so you both avoid it, for the most part. You thus mutually
collude to raise the stakes with each passing week.
Disaster then strikes at the end. In a big bang, they deliver what they think
you want, and it doesn't fit your expectations. Or it fits your expectations,
but you can't build on top of it. You may later, using some revisionist history,
consider this a matter of "software quality" but that misses the point.
Your problem really lies in the non-locality, both geographically and more philosophically.
When Software Projects Work
Software projects work well with a tight feedback loop. The entire agile movement
rests firmly atop this premise. Stop shipping software once per year, and start
shipping it once per week. See what the customer/stakeholder thinks and course
correct before it's too late. This helps facilitate success far more than the
vague notion of quality.
The locality issue detracts from the willingness to collaborate. It encourages
you to work in silos and save feedback for a later date. It invites disaster.
To avoid this, you need to figure out a way to remove unknowns from the equation.
You need to know what your partner is doing from week to week. And you need
to know the nature of what they're building. Have they assembled throwaway,
prototype code? Or do you have the foundation of the future?
Getting a Glimpse
At this point, the courses for enterprises and startups diverge. The enterprise
has legions of software developers and can easily afford to fly to Eastern Europe
or Southeast Asia or wherever the work gets done. They want to leverage economies
of scale to save money as a matter of policy.
The startup or small business, on the other hand, lacks these resources. They
can't just ask their legion of developers to review the offshore work more frequently.
And they certainly can't book a few business class tickets over there to check it
out for themselves. They need to get more creative.
In fact, some of the startup founders I counsel have a pretty bleak outlook here.
They have no one in their organization in a position to review code at all.
So they rely on an offshore partner for budget reasons, and they rely on that partner
as expert adviser and service provider. They put all of their eggs in that vendor's
basket. And they come to me asking, "have I made a good choice?"
They need a glimpse into what these offshore folks are doing, and one that they can
understand.
Leveraging Automated Code Review
While you can't address the nebulous, subjective concept of "quality" wholesale, you
can ascertain properties of code. And you can even do it without a great deal
of technical knowledge, yourself. You could simply take their source code and
run an automated code review on it.
You're probably thinking that this seems a bit reductionist. Make no mistake
-- it's quite reductionist. But it also beats no feedback at all.
You could approach this by running the review on each incremental delivery.
Ask them to explain instances where it runs afoul of the tool. Then keep doing
it to see if they improve. Or, you could ask them to incorporate the tool into
their own process and make delivering issue-free code a part of the contract.
Neither of these things guarantees a successful result. But at least it offers
you something -- anything -- to help you evaluate the work, short of in-depth knowledge
and study yourself.
Recall what I said earlier about how enterprises regard quality. It's not as
much about intrinsic properties, nor is it inversely proportional to cost. Instead,
quality shows itself in the presence of a tight feedback loop and the ability to sustain
adding more and more capabilities. With limited time and knowledge, automated
code review gives you a way to tighten that feedback loop and align expectations.
It ensures at least some oversight, and it aligns the work they do with what you might
expect from firms that know their business.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio as a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Related resources
Learn
more about how CodeIt.Right can help you automate code reviews and ensure the quality of
delivered code.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
I can almost sense the indignation from some of you. You read the title and
then began to seethe a little. Then you clicked the link to see what kind of sophistry
awaited you. "There is no substitute for peer review."
Relax. I agree with you. In fact, I think that any robust review process
should include a healthy amount of human and automated review. And, of course,
you also need your test
pyramid, integration and deployment strategies, and the whole nine yards.
Having a truly mature software shop takes a great deal of work and involves standing
on the shoulders of giants. So, please, give me a little latitude with the premise
of the post.
Today I want to talk about how one could replace manual code review with automated
code review only, should the need arise.
Why Would The Need for This Arise?
You might struggle to imagine why this would ever prove necessary. Those of
you with many years logged in the enterprise in particular probably find this puzzling.
But you might find manual code inspection axed from your process for any number of
reasons other than, "we've decided we don't value the activity."
First and most egregiously, a team's manager might come along with an eye toward cost
savings. "I need you to spend less time reading code and more time writing it!"
In that case, you'll need to move away from the practice, and going toward automation
beats abandoning it altogether. Of course, if that happens, I also recommend
dusting off your resume. In the first place, you have a penny-wise, pound-foolish
manager. And, secondly, management shouldn't micromanage you at this level.
Figuring out how to deliver good software should be your responsibility.
But let's consider less unfortunate situations. Perhaps you currently work in
a team of 2, and number 2 just handed in her two weeks’ notice. Even if your
organization back-fills your erstwhile teammate, you have some time before the newbie
can meaningfully review your code. Or, perhaps you work for a larger team, but
everyone gradually becomes so busy and fragmented in responsibility as not to have
the time for much manual peer review.
In my travels, this last case actually happens pretty frequently. And then you
have to choose: abandon the practice altogether, or move toward an automated version.
Pretty easy choice, if you ask me.
First, Take Inventory
Assuming no one has yet forced your hand, pause to take inventory. What currently
happens as part of your review process? What sorts of feedback do you get?
If your reviews happen in some kind of asynchronous format, then great. This
should prove easy enough to capture since you'll need only to go through your emails
or issues list or whatever you use. Do you have in-person reviews, but chronicle
the findings? Just as good for our purposes here.
But if these reviews happen in a more ad hoc fashion, then you have some work
to do. Start documenting the feedback and resultant action items. After
all, in order to create a suitable replacement strategy for an activity, you must
first thoroughly understand that activity.
Automate the Automate-able
With your list in place, you can now start figuring out how to replace your expiring
manual process. First up, identify the things you can easily automate that come
up during reviews.
This will include cosmetic concerns. Does your code comply with the team standard?
Does it comply with typical styling for your tech stack? Have you consistently
cased and named things? If that stuff comes up during your reviews, you should
probably automate it anyway and not waste time discussing it. But, going forward,
you will need to automate it.
But you should also look for anything that you can leverage automation to catch.
Do you talk about methods getting too long or about not checking parameters for null
before dereferencing? You can also automate things like that. How about
compliance with non-cosmetic best practices? Automate that as well with an automated
code review tool.
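To make that concrete, here's a sketch of the null dereference case (the names are invented for illustration). A tool can flag the first method mechanically, and the guard clause in the second is exactly the kind of fix it can suggest or even apply.

using System;

public class GreetingService
{
    // An analyzer can flag this: "name" gets dereferenced without a null check.
    public string GreetUnsafe(string name)
    {
        return "Hello, " + name.Trim();
    }

    // The mechanical fix: guard the parameter before dereferencing it.
    public string GreetSafe(string name)
    {
        if (name == null)
            throw new ArgumentNullException(nameof(name));

        return "Hello, " + name.Trim();
    }
}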
And spend some time researching what you can automate. Even if no analyzer or
review tool catches something out of the box, you can often customize them to catch
it (or write your own thing, if needed).
Checks and Balances for Conceptual Items
Now, we move onto the more difficult things. "This method seems pretty unreadable."
"Couldn't you use the builder pattern here?" I'm talking here about the sorts
of things for which manual code review really shines and serves its purpose.
You'll have a harder time replacing this. But that doesn't mean you can't do something.
First, I recommend that you audit the review history you've been compiling.
See what comes up the most frequently, and make a list of those things. And
group them conceptually. If you see a lot of "couldn't you use Builder" and
"couldn't you use Factory Method," then generalize to "couldn't you use a design pattern?"
Once you have this list, if nothing else, you can use it as a checklist for yourself
each time you commit code. But you might also see whether you can conceive of
some sort of automation. Or maybe you just resolve to revisit the codebase periodically,
with a critical eye toward these sorts of questions.
You need to see if you can replace the human insights of a peer. Admittedly,
this presents a serious challenge. But get creative and see what you can come
up with.
Adjust Your Approach
The final plank I'll mention involves changing the way you approach development and
review in general. For whatever reason, human review of your work has become
a scarce resource. You need to adjust accordingly.
Picking up a good bit of automated review makes up part of this adjustment, as does
creating a checklist to apply to yourself. But you need to go further as
well. Take an approach wherein you look to become more self-sufficient for any
of the littler things and store up your scarce access to human reviewers for the truly
weighty architectural decisions. When these come up, enlist the help of someone
else in your organization or even the internet.
On top of that, look opportunistically for ways to catch your own mistakes and improve.
Everyone has to learn from their mistakes, but with less margin for error, you need
to learn from them and automate their prevention going forward. Again, automated
review helps here, but you'll need to get creative.
Having peer review yanked out from under you undeniably presents a challenge.
Luckily, however, you have more tools than ever at your disposal to pick up the slack.
Make use of them. When you find yourself in a situation with the peer review
safety net restored, you'll be an even better programmer for it.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio as a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Related resources
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
"You never concatenate strings. Instead, always use a StringBuilder."
I feel pretty confident that any C# developer that has ever worked in a group has
heard this admonition at least once. This represents one of those bits of developer
wisdom that the world expects you to just memorize. Over the course of
your career, these add up. And once they do, grizzled veterans engage in a sort
of comparative jousting for rank. The internet
encourages them and eggs them on.
"How can you call yourself a senior C# developer and not know how to serialize
objects to XML?!"
With two evenly matched veterans swinging language swords at one another, this volley
may continue for a while. Eventually, though, one falters and pecking order
is established.
Static Analyzers to the Rescue
I must confess. I tend to do horribly at this sort of thing. Despite having
relatively good memory retention ability in theory, I have a critical Achilles Heel
in this regard. Specifically, I can only retain information that interests me.
And building up a massive arsenal of programming language "how-could-yous" for dueling
purposes just doesn't interest me. It doesn't solve any problem that
I have.
And, really, why should it? Early in my career, I figured out the joy of static
analyzers in pretty short order. Just as the ubiquity of search engines means
I don't need to memorize algorithms, the presence of static analyzers saves me from
cognitively carrying around giant checklists of programming sins to avoid. I
rejoiced in this discovery. Suddenly, I could solve interesting problems
and trust the equivalent of programmer spell check to take care of the boring stuff.
Oh, don't get me wrong. After the analyzers slapped me, I internalized the lessons.
But I never bothered to go out of my way to do so. I learned only in response
to an actual, immediate problem. "I don't like seeing warnings, so let me figure
out the issue and subsequently avoid it."
My Coding Provincialism
This general modus operandi caused me to respond predictably when I first encountered
the idea of globalization in language. "Wait, so this helps when? If someone
theoretically deploys code to some other country? And, then, they might see
dates printed in a way that seems strange to them? Huh."
For many years, this solved no actual problem that I had. Early in my career,
I wrote software that people deployed in the US. Much of it had no connectivity
functionality. Heck, a lot of it didn't even have a user interface. Worst
case, I might later have to realize that some log file's time stamps happened in Mountain
Time or something.
Globalization solved no problem that I had. So when I heard rumblings about
the "best practice," I generally paid no heed. And, truth be told, nobody suffered.
With the software I wrote for many years, this would have constituted a premature
optimization.
But it nevertheless instilled in me a provincialism regarding code.
A Dose of Reality
I've spent my career as a polyglot. And so at one point, I switched jobs, and
it took me from writing Java-based web apps to a desktop app using C# and WPF.
This WPF app happened to have worldwide distribution. And, when I say worldwide,
I mean just about every country in the world.
Suddenly, globalization went from "premature optimization" to "development table stakes."
And the learning curve became steep. We didn't just need to account for
the fact that people might want to see dates where the day, rather than the month,
came first. The GUI needed translation into dozens of languages, selectable as a menu setting.
This included languages with text read from right to left.
How did I deal with this? At the time, I don't recall having the benefit of
a static analyzer that helped in this regard. FxCop may have provided some relief,
but I don't recall one way or the other. Instead, I found myself needing to study and
laboriously create mental checklists. This "best practice" knowledge hoarding
suddenly solved an immediate problem. So, I did it.
CodeIt.Right's Globalization Features
Years have passed since then. I've had several jobs since then, and, as a solo
consultant, I've had dozens of clients and gigs. I've lost my once encyclopedic
knowledge of globalization concerns. That happened because -- you guessed it
-- it no longer solves an immediate problem that I have.
Oh, I'd probably do better with it now than I did in the past. But I'd still
have to re-familiarize myself with the particulars and study up once again in order
to get it right, should the need arise. Except, these days, I could enlist
some help. CodeIt.Right,
installed on my machine, will give me the heads up I didn't have those years ago.
It has a number of globalization concerns built right in. Specifically, it will
remind you about the following concerns. I'll just list them here, saving detailed
explanations for a future "CodeIt.Right Rules, Explained" post.
-
Specify culture info
-
Specify string comparison (for culture)
-
Do not pass literals as localized parameters
-
Normalize strings to uppercase
-
Do not hard code locale specific strings
-
Use ordinal string comparison
-
Specify marshaling for PInvoke string arguments
-
Set locale for data types
That provides an excellent head start on getting savvy with globalization.
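To give a flavor of what those rules steer you toward, here's a minimal sketch of culture-aware formatting and comparison (the values are invented for illustration):

using System;
using System.Globalization;

class GlobalizationExamples
{
    static void Main()
    {
        var timestamp = new DateTime(2017, 5, 4);

        // Specify culture info: make the intended audience explicit rather
        // than silently depending on the machine's current culture.
        string forUsers = timestamp.ToString("d", CultureInfo.CurrentCulture);
        string forLogs = timestamp.ToString("d", CultureInfo.InvariantCulture);

        // Use ordinal string comparison for non-linguistic checks,
        // like file names or protocol tokens.
        bool sameFile = "SETTINGS.XML".Equals("settings.xml",
            StringComparison.OrdinalIgnoreCase);

        Console.WriteLine("{0} / {1} / {2}", forUsers, forLogs, sameFile);
    }
}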
The Takeaway
Throughout the post, I've talked about my tendency not to bother with things that
don't solve immediate problems for me. I realize philosophical differences in
approach exist, but I stand by this practice to this day. And I don't say this
only because of time savings and avoiding premature optimization. Storing up
an arsenal of specific "best practices" in your head threatens to entrench you in
your ways and to establish an approach of "that's just how you do it."
And yet, not doing this can lead to making rookie mistakes and later repeating them.
But, for me, that's where automated tooling enters the picture. I understand
the globalization problem in theory. That I have not forgotten.
And I can use a tool like CodeIt.Right to
bridge the gap between theory and specifics in short order, creating just-in-time
solutions to problems that I have.
So to conclude the post, I would offer the following in takeaway. Stop memorizing
all of the little things you need to check for at the method level in coding. Let
tooling do that for you, so that you can keep big picture ideas in your head.
I'd say, "don't lose sight of the forest for the trees," but with tooling, you can
see the forest and the trees.
Learn
more about how CodeIt.Right can help you automate code reviews, improve your code quality,
and ensure your code is globalization ready.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'd like to offer a somewhat lighthearted treatment to a serious topic.
I generally find that this tends to offer catharsis to the frustrated. And the
topic of code review tends to lead to lots of frustration.
When talking about code review, I always make sure to offer a specific distinction.
We can divide code reviews into two mutually exclusive buckets: automated and manual.
At first, this distinction might sound strange. Most readers probably think
of code reviews as activities with exclusively human actors. But I tend to disagree.
Any static analyzer (including the compiler) offers feedback. And some tools,
like CodeIt.Right,
specifically regard their suggestions and automated fixes as an automation of the
code review process.
I would argue that automated code review should definitely factor into your code review
strategy. It takes the simple things out of the equation and lets the humans
involved focus on more complex, nuanced topics. That said, I want to ignore
the idea of automated review for the rest of the post. Instead, I'll talk exclusively
about manual code reviews and, more specifically, where they tend to get ugly.
You should absolutely do manual code reviews. Full stop. But you
should also know that they can easily go wrong and devolve into useless or even toxic
activities. To make them effective, you need to exercise vigilance with them.
And, toward that end, I'll talk about some manual code review anti-patterns.
The Gauntlet
First up, let's talk about a style of review that probably inspires the most disgust
among former participants. Here, I'm talking about what I call "the gauntlet."
In this style of code review, the person submitting for review comes to a room with
a number of self-important, hyper-critical peers. Of course, they might not
view themselves as peers. Instead, they probably imagine themselves as a panel
of judges for some reality show.
From this 'lofty' perch, they attack the reviewee's code with a malevolent glee.
They adopt a derisive tone and administer the third degree. And, frankly, they
crush the spirit of anyone subject to this process, leaving low morale and resentment
in their wake.
The Marathon
Next, consider a less awful, but not effective style of code review. This one
I call "the marathon." I bet you can predict what I mean by this.
In the marathon code review, the participants sit in some conference room for hours.
It starts out as an enthusiastic enough affair, but as time passes, people's energy
wanes. Nevertheless, it goes on because of an edict that all code needs review
and because everyone waited until the 11th hour. And predictably, things get
less careful as time goes on and people space out.
Marathon code reviews eventually reach a point where you might as well not bother.
The Scattershot Review
Scattershot reviews tend to occur in organizations without much rigor around the code
review process. Perhaps their process does not formally include code
review. Or, maybe, it offers no more specifics than "do it."
With a scattershot review process, the reviewer demonstrates no consistency or predictability
in the evaluation. One day he might suggest eliminating global variables, and
on another day, he might advocate for them. Or, perhaps the variance occurs
depending on reviewer. Whatever the specifics, you can rest assured you'll never
receive the same feedback twice.
This approach to code review can cause some annoyance and resentment. But morale
issues typically take a backseat to simple ineffectiveness and churn in approach.
The Exam
Some of these can certainly coincide. In fact, some of them will likely coincide.
So it goes with "the exam" and "the gauntlet." But while the gauntlet focuses
mostly on the process of the review, the exam focuses on the outcome.
Exam code reviews occur when the parlance around what happens at the end involves
"pass or fail." If you hear people talking about "failing" a code review, you
have an exam on your hands.
Code review should involve a second set of eyes on something to improve it.
For instance, imagine that you wrote a presentation or a whitepaper. You might
ask someone to look it over and proofread it to help you improve it. If they
found a typo, they wouldn't proclaim that you had "failed." They'd just offer
the feedback.
Treating code reviews as exams generally hurts morale and causes the team to lose
out on the collaborative dynamic.
The Soliloquy
The review style I call "the soliloquy" involves paying lip service to the entire
process. In literature, characters offer soliloquies when they speak their thoughts
aloud regardless of anyone hearing them. So it goes with code review styles
as well.
To understand what I mean, think of times in the past where you've emailed someone
and asked them to look at a commit. Five minutes later, they send back a quick,
"looks good." Did they really review it? Really? You
have a soliloquy when you find yourself coding into the vacuum like this.
The downside here should be obvious. If people spare time for only a cursory
glance, you aren't really conducting code reviews.
The Alpha Dog
Again, you might find an "alpha dog" in some of these other sorts of reviews.
I'm looking at you, gauntlet and exam. With an alpha dog code review, you have
a situation where a particularly dominant senior developer rules the roost with the
team. In that sense, the title refers both to the person and to the style of
review.
In a team with a clear alpha dog, that person rules the codebase with an iron fist.
Thus the code review becomes an exercise in appeasing the alpha dog. If he is
present, this just results in him administering a gauntlet. But, even absent,
the review goes according to what he may or may not like.
This tends to lead team members to a condition known as "learned
helplessness," wherein they cease bothering to make decisions without the alpha
dog. Obviously, this stunts their career development, but it also has a pragmatic
toll for the team in the short term. This scales terribly.
The Weeds
Last up, I'll offer a review issue that I'll call "the weeds." This can happen
in the most well meaning of situations, particularly with folks that love their craft.
Simply put, they get "into the weeds."
What I mean by this colloquialism is that they get bogged down in details at the expense
of the bigger picture. Obviously, an exacting alpha dog can drag things into
the weeds, but so can anyone. They might wind up with a lengthy digression about
some arcane language point, of interest to all parties, but not critical to shipping
software. And typically, this happens with things that you ought to make matters
of procedure, or even to address with your automated code reviews.
The biggest issue with a "weeds" code review arises from the poor use of time.
It causes things to get skipped, or else it turns reviews into marathons.
Getting it Right
How to get code reviews right could easily occupy multiple posts. But I'll close
by giving a very broad philosophical outlook on how to approach it.
First of all, make sure that you get clarity up front around code review goals, criteria,
and conduct. This helps to stop anti-patterns wherein the review gets off track
or bogged down. It also prevents soliloquies and somewhat mutes bad behavior.
But, beyond that, look at code reviews as collaborative, voluntary sessions where
peers try to improve the general codebase. Some of those peers may have more
or less experience, but everyone's opinion matters, and it's just that -- an opinion for
the author to take under advisement.
While you might cringe at the notion that someone less experienced might leave something
bad in the codebase, trust me. The damage you do by allowing these anti-patterns
to continue in the name of "getting it right" will be far worse.
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'll do another installment of the CodeIt.Right
Rules, Explained series. I have now made four such posts in this series.
And, as always, I'll start off by citing my two personal rules about static analysis
guidance.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I'm playing rhetorical games here. After all, I could
simply say, "learn the reasoning behind all suggested fixes." But I want to
underscore the decision you face when confronted with static analysis feedback.
In all cases, you must actively choose to ignore the feedback or address it.
And for both options, you need to understand the logic behind the suggestion.
In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules
today.
Type that contains only static members should be sealed
Let's start here with a quick example. I think the snippet below will suffice for
some number of words, if not necessarily one thousand.
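Something like this minimal sketch (the class and its lone method are my invention for illustration):

using System.Collections.Generic;
using System.Linq;

public class LinqUtils
{
    // A hypothetical helper; imagine more of these piling up over time.
    public static IEnumerable<T> WhereNotNull<T>(IEnumerable<T> source)
        where T : class
    {
        return source.Where(item => item != null);
    }
}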
Here, I've laid a tiny seed for a Swiss Army Knife, "utils" class. Presumably,
I will continue to dump any method I think might help me with Linq into this class.
But for now, it contains only a single method to make things easy to understand.
(As an aside, I discourage "utils" classes as a practice. I'm using this example
because everyone reading has most assuredly seen one of these things at some point.)
When you run CodeIt.Right analysis on this code, you will find yourself confronted
with a design issue. Specifically, "types that contain only static members should
be sealed."
You probably won't have a hard time discerning how to remedy the situation.
Adding the "sealed" modifier to the class will do the trick. But why does CodeIt.Right
object?
The Microsoft
guidelines contain a bit more information. They briefly explain that static
analyzers make an inference about your design intent, and that you can better communicate
that intent by using the "sealed" keyword. But let's unpack that a bit.
When you write a class that has nothing but static members, such as a static utils
class, you create something with no instantiation logic and no state. In other
words, you could instantiate "a LinqUtils," but you couldn't do anything
with it. Presumably, you do not intend that people use the class in that way.
But what about other ways of interacting with the class, such as via inheritance?
Again, you could create a LinqUtilsChild that inherited from LinqUtils, but
to what end? Polymorphism requires instance members, and none exist here.
The inheriting class would inherit absolutely nothing from its parent, making the
inheritance awkward at best.
Thus the intent of the rule. You can think of it as telling you the following.
"You're obviously not planning to let people use inheritance with you, so don't even
leave that door open for them to possibly make a mistake."
So when you find yourself confronted with this warning, you have a simple bit of consideration.
Do you intend to have instance behavior? If so, add that behavior and the warning
goes away. If not, simply mark the class sealed.
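Applied to the sketch above, the fix amounts to one keyword:

public sealed class LinqUtils
{
    // ...same static members as before...
}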
Async methods should have async suffix
Next up, let's consider a rule in the naming category. Specifically, when you
name an async method without suffixing "Async" on its name, you see the warning.
Microsoft declares
this succinctly in their guidelines.
By convention, you append "Async" to the names of methods that have an async modifier.
So, CodeIt.Right simply tells us that we've run afoul of this convention. But,
again, let's dive into the reasoning behind this rule.
When Microsoft introduced this programming paradigm, they did so in a non-breaking
release. This caused something of a conundrum for them because of a perfectly
understandable language rule stating that method overloads cannot vary only by a return
type. To take advantage of the new language feature, users would need to offer
the new, async methods, and also backward compatibility with existing method calls.
This put them in the position of needing to give the new, async methods different
names. And so Microsoft offered guidance on a convention for doing so.
I'd like to make a call-out here with regard to my two rules at the top of each post.
This convention came about because of expediency and now sticks around for convention's
sake. But it may bother you that you're asked to bake a keyword into the name
of a method. This might trouble you in the same way that a method called "GetCustomerNumberString()"
might bother you. In other words, while I don't advise you go against convention,
I will say that not all warnings are created equally.
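In sketch form, with invented names:

using System.Threading.Tasks;

public class GreetingReader
{
    // Triggers the naming warning: async modifier, but no "Async" suffix.
    public async Task<string> ReadGreeting()
    {
        return await Task.FromResult("hello");
    }

    // Follows the convention.
    public async Task<string> ReadGreetingAsync()
    {
        return await Task.FromResult("hello");
    }
}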
Always define a global error handler
With this particular advice, we dive into warnings specific to ASP. When you
see this warning, it concerns the Global.asax file. To understand a bit more
about that, you can
read this Stack Overflow question. In short, Global.asax allows you to define
responses to "system level" in a single place.
CodeIt.Right is telling you to define just such an event -- specifically one in response
to the "Application_Error" event. This event occurs whenever an exception bubbles
all the way up without being trapped anywhere by your code. And, that's
a perfectly reasonable state of affairs -- your code won't trap every possible
exception.
CodeIt.Right wants you to define a default behavior on application errors. This
could mean something as simple as redirecting to a page that says, "oops, sorry about
that." Or, it could entail all sorts of robust, diagnostic information.
The important thing is that you define it and that it be consistent.
You certainly don't want to learn from your users what your own application does in
response to an error.
So spend a bit of time defining your global error handling behavior. By all
means, trap and handle exceptions as close to the source as you can. But always
make sure to have a backup plan.
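By way of illustration, here's a minimal sketch of such a handler in Global.asax.cs (the error page path and logging hook are invented):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Capture the unhandled exception, then present a consistent response.
        Exception error = Server.GetLastError();
        // LogError(error); // hypothetical diagnostic hook
        Server.ClearError();
        Response.Redirect("~/Error.aspx");
    }
}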
Until Next Time
In this post, I ran the gamut across concerns. I touched on an object-oriented
design concern. Then, I went into a naming consideration involving async, and,
finally, I talked specifically about ASP programming considerations.
I don't have a particular algorithm for the order in which I cover these subjects.
But, I like the way this shook out. It goes to show you that CodeIt.Right covers
a lot of ground, across a lot of different landscapes of the .NET world.
Learn
more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
I have long since cast my lot with the software industry. But, if I were going
to make a commercial to convince others to follow suit, I can imagine what it would
look like. I'd probably feature cool-looking, clear whiteboards, engaged people,
and frenetic design of the future. And a robot or two. Come help us build
the technology of tomorrow.
Of course, you might later accuse me of bait and switch. You entered a bootcamp,
ready to build the technology of tomorrow. Three years later, you found yourself
on safari in a legacy code jungle, trying to wrangle some SharePoint plugin.
Erik, you lied to me.
So, let me inoculate myself against that particular accusation. With a career
in software, you will certainly get to work on some cool things. But you will
also find yourself doing the decidedly less glamorous task of software maintenance.
You may as well prepare yourself for that now.
The Conceptual Difference: Build vs Maintain
From the software developer's perspective, this distinction might evoke various contrasts.
Fun versus boring. Satisfying versus annoying. New problem versus solved
problem. My stuff versus that of some guy named Steve that apparently worked
here 8 years ago. You get the idea.
But let's zoom out a bit. For a broader perspective, consider the difference
as it pertains to a business.
Build
mode (green field) means a push toward new capability. Usually, the business
will regard construction of this capability as a project with a calculated return
on investment (ROI). To put it more plainly, "we're going to spend $500,000
building this thing that we expect to make/save us $1.5 million by next year."
Maintenance mode, on the other hand, presents the business with a cost
center. They've now made their investment and (at least partially)
realized return on it. The maintenance team just hangs around to prevent backslides.
For instance, should maintenance problems crop up, you may lose customers or efficiency.
Plan of Attack: Build vs Maintain
Because the business regards these activities differently, it will attack them differently.
And, while I can't speak to every conceivable situation, my consulting work has shown
me wide variety. So I can speak to general trends.
In green field mode, the business tends to regard the work as an investment.
So, while management might dislike overruns and unexpected costs, they will tend to
tolerate them more. Commonly, you see a "this will pay off later" mentality.
On the maintenance side of things, you tend to see far less forgiveness. Certainly,
all parties forgive unexpected problems a lot less easily. They view all of
it as a burden.
This difference in attitude translates to the planning as well. Green field
projects justifiably command full time people for the duration of the project.
Maintenance mode tends to get you familiar with the curious term "half of a person."
By this, I mean you hear things like "we're done with the Sigma project, but someone
needs to keep the lights on. That'll be half of Alice." The business grudgingly
allocates part time duty to maintenance tasks.
Why? Well, maintenance tends to arise out of reactive scenarios.
Reactive Mode and the Value of Automation
Maintenance mode in software will have some planned activities, particularly if it
needs scheduled maintenance. But most maintenance programmers find themselves
in a reactive, "wait and see" kind of situation. They have little to do on the
project in question until an outage happens, someone discovers a bug, or a customer
requests a new feature. Then, they spring into action.
Business folks tend to hate this sort of situation. After all, you need to plan
for this stuff, but you might have someone sitting around doing nothing. It
is from this fundamental conundrum that "half people" and "quarter people" arise.
Maintenance programmers usually have other stuff to juggle along with maintaining
"Sigma."
Because of this double duty, the business doubles down on pressure to minimize maintenance.
After all, not only does it create cost, but it takes the people away from other,
profit-driven things that they could otherwise do.
So how do we, as programmers, and we, as software shops, best deal with this?
We make maintenance as turnkey as possible by automating as much as possible.
Oh, and you should automate this stuff during green field time, when management is
willing to invest. If you tell them it means less maintenance cost, they'll
probably bite.
Automate the Test Suite
First up for automation candidates, think of the codebase's test suite. Hopefully,
you've followed my advice and built this during green field mode. But, if not,
it's never too late to start.
Think of how time consuming a job QA has. If manually running the software and
conducting experiments constitutes the entirety of your test strategy, you'll find
yourself hosed at maintenance time. With "half a person" allocated, no one has
time for that. Without an automated suite, then, testing falls by the wayside,
making your changes to a production system even more risky.
You need to automate a robust test suite that lets you know if you have broken anything.
This becomes even more critical when you consider that most maintenance programmers
haven't touched the code they modify in a long time, if ever.
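Even a single automated check, like this minimal xUnit sketch (the calculator class is invented for illustration), lets a "half person" modify unfamiliar code with some confidence:

using System.Linq;
using Xunit;

public class InvoiceCalculator
{
    public decimal Total(decimal[] lineItems) => lineItems.Sum();
}

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_AddsLineItems()
    {
        var calculator = new InvoiceCalculator();

        // If a maintenance change breaks the arithmetic, this fails immediately.
        Assert.Equal(30m, calculator.Total(new[] { 10m, 20m }));
    }
}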
Automate Code Reviews
If I were to pick a one-two punch for code quality, that would involve unit tests
and code review. Therefore, just as you should automate your test suite, you
should automate
your code review as well.
If you think testing goes by the wayside in an under-staffed, cost-center model, you
can forget about peer review altogether. During the course of my travels, I've
rarely seen code review continue into maintenance mode, except in regulated industries.
Automated
code review tools exist, and they don't require even "half a person." An
automated code review tool serves its role without consuming bandwidth. And,
it provides maintenance programmers operating in high risk scenarios with a modicum
of comfort and safety net.
Automate Production Monitoring
For my last maintenance mode automation tip of the post, I'll suggest that you automate
production monitoring capabilities. This covers a fair bit of ground, so I'll
generalize by saying these include anything that keeps your finger on the pulse of
your system's production behavior.
You have logging, no doubt, but do you monitor the logs? Do you keep track of
system outages and system load? If you roll software to production, do you have
a system of checks in place to know if something smells fishy?
You want to make the answer to these questions, "yes." And you want to make
the answer "yes" without you needing to go in and manually check. Automate various
means of monitoring your production software and providing yourself with alerts.
This will reduce maintenance costs across the board.
Automate Anything You Can
I've listed some automation examples that come to mind as the most critical, based
on my experience. But, really, you should automate anything around the maintenance
process that you can.
Now, you might think to yourself, "we're programmers, we should automate everything."
Well, that subject could make for a whole post in and of itself, but I'll speak briefly
to the distinction. Build mode usually involves creating something from nothing
on a large scale. While you can automate the scaffolding around this activity,
you'll struggle to automate a significant amount of the process.
But that ratio gets much better during maintenance time. So the cost center
nature of maintenance, combined with the higher possible automation percentage, makes
it a rich target. Indeed, I would argue that strategic automation defines the
art of maintenance.
Tools at your disposal
SubMain offers CodeIt.Right, which easily integrates into Visual Studio for a flexible
and intuitive automated code review solution that works in real time, on demand, at
source control check-in, or as part of your build.
Related resources
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
In what has become a
series of posts, I have been explaining some CodeIt.Right rules in depth.
As with the last post in the series, I'll start off by citing two rules that I, personally,
follow when it comes to static code analysis.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I'm playing rhetorical games here. After all, I could
simply say, "learn the reasoning behind all suggested fixes." But I want to
underscore the decision you face when confronted with static analysis feedback.
In all cases, you must actively choose to ignore the feedback or address it.
And for both options, you need to understand the logic behind the suggestion.
In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules
today.
Use Constants Where Appropriate
First up, let's consider the admonition to "use constants where appropriate."
Consider this code that I lifted from a
Github project I worked on once.
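(The post originally showed the snippet as a screenshot; here is a minimal reconstruction, in which everything besides UserAgentKey and its "user-agent" value is assumed for illustration.)

public class GithubQuery
{
    // These two declarations drew the warning.
    private static readonly string UserAgentKey = "user-agent";
    private static readonly string UserAgentValue = "GithubQueryBot";

    // ... rest of the class elided ...
}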
I received this warning on the first two lines of code for this class. Specifically,
CodeIt.Right objects to my usage of static readonly string. If I let
CodeIt.Right fix the issue for me, I wind up with the following code.
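(Again a reconstruction rather than the original screenshot:)

public class GithubQuery
{
    private const string UserAgentKey = "user-agent";
    private const string UserAgentValue = "GithubQueryBot";

    // ... rest of the class elided ...
}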
Now, CodeIt.Right seems happy. So, what gives? Why does this matter?
I'll offer you the
release notes of the version where CodeIt.Right introduced this rule. If
you look at the parenthetical next to the rule, you will see "performance."
This preference has something to do with code performance. So, let's get specific.
When you declare a variable using const or static readonly,
think in terms of magic values and their elimination. For instance, imagine
my UserAgentKey value. Why do you think I declared it the way
I did? I did it to name that string, rather than using it inline as a "magic"
string.
As a maintenance programmer, how frustrating do you find stumbling across lines of
code like, "if(x == 299)"? "What is 299, and why do we care?!"
So you introduce a variable (or, preferably, a constant) to document your intent.
In the made-up hypothetical, you might then have "if(x == MaximumCountBeforeRetry)".
Now you can easily understand what the value means.
Either way of declaring this (constant or static readonly field) serves the replacement
purpose. In both cases, I replace a magic value with a more readable, named
one. But in the case of static readonly, I replace it with a variable,
and in the case of const, I replace it with, well, a const.
From a performance perspective, this matters. You can think of a declaration
of const as simply hard-coding a value, but without the magic. So, when I switch
to const in my declaration, the compiler replaces every usage of UserAgentKey with
the string literal "user-agent". After compilation, you can't tell whether I
used a const or just hard-coded it everywhere.
But with a static readonly declaration, it remains a variable, even when
you use it like a constant. It thus incurs the relative overhead penalty of
performing a variable lookup at runtime. For this reason, CodeIt.Right steers
you toward considering making this a constant.
Parameter Names Should Match Base Declaration
For the next rule, let's return to the Github scraper project from the last example.
I'll show you two snippets of code. The first comes from an interface definition
and the second from a class implementing that interface. Pay specific attention
to the method, GetRepoSearchResults .
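(The original screenshots haven't survived here; the following reconstruction preserves the one detail the discussion depends on -- mismatched parameter names -- while the return type and surrounding names are assumed.)

public interface IGithubQuery
{
    string GetRepoSearchResults(string searchTerm);
}

public class GithubQuery : IGithubQuery
{
    // CodeIt.Right flags this parameter: "term" here versus
    // "searchTerm" in the interface (base) declaration.
    public string GetRepoSearchResults(string term)
    {
        return string.Empty; // implementation elided
    }
}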
If you take a look at the parameter names, it probably won't surprise you to see that
they do not match. Therein lies the problem that CodeIt.Right has with my code.
It wants the implementing class to match the interface definition (i.e. the "base").
But why?
In this case, we have a fairly simple answer. Having different names for the
conceptually same method creates confusion.
Specifically, maintainers will struggle to understand whether you meant to override
or overload the method. In our mind's eye, an identical method signature signals
a polymorphic approach, while the same name with different parameters signals an overload.
In a sense, changing the name of a parameter fakes maintenance programmers out.
Do Not Declare Externally Visible Instance Fields
I don't believe we need a screenshot for this one. Consider the following trivial
code snippet.
public class SomeClass
{
    public string _someVariable;
}
This warning says, "don't do that." More specifically, don't declare an instance
field with external (to the type) visibility. The question is, "why not?"
If you check out the Microsoft guidance on the subject, they explain that the "use
of a field should be as an implementation detail." In other words, they contend
that you violate encapsulation by exposing fields. Instead, they say, you should
expose this via a property (which simply offers syntactic sugar over a method).
Instead of continuing with abstract concepts, I'll offer a concrete example.
Imagine that you want to model a family and you declare an integer field called _numberOfChildren.
That works fine initially, but eventually you encounter the conceptually weird edge
case where someone tries to define a family with -1 children. With an integer
field, you can technically do this, but you want to prevent that from happening.
With clients of your class directly accessing and setting this field, you wind up
having to go install this guard logic literally everywhere your clients interact with
the field. But had you hidden the field behind a property, you could simply
add logic to the property setter wherein you throw an exception on an attempt to set
a negative value.
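A minimal sketch of that guard, using the hypothetical family example from above:

using System;

public class Family
{
    private int _numberOfChildren;

    public int NumberOfChildren
    {
        get { return _numberOfChildren; }
        set
        {
            // One guard clause, in one place, protects every client.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value),
                    "A family cannot have a negative number of children.");
            _numberOfChildren = value;
        }
    }
}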
This rule attempts to help you future-proof your code and follow good OO practice.
Until Next Time
Somewhat by coincidence, this post focused heavily on the C# flavor of object-oriented
programming. We looked at constants versus field access, but then focused on
polymorphism and encapsulation.
I mention this because I find it interesting to see where static analyzers take you.
Follow along for the rest of the series and, hopefully, you'll learn various useful
nuggets about the language you use.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
For years, I can remember fighting the good fight for unit testing. When I started
that fight, I understood a simple premise. We, as programmers, automate things.
So, why not automate testing?
Of all things, a grad school course in software engineering introduced me to the concept
back in 2005. It hooked me immediately, and I began applying the lessons to
my work at the time. A few years and a new job later, I came to a group that
had not yet discovered the wonders of automated testing. No worries, I figured,
I can introduce the concept!
Except, it turns out that people stuck in their ways kind of like those ways.
Imagine my surprise to discover that people turned up their nose at the practice.
Over the course of time, I learned to plead my case, both in technical and business
terms. But it often felt like wading upstream against a fast moving current.
Years later, I have fought that fight over and over again. In fact, I've produced
training materials, courses, videos, blog posts, and books on the subject. I've
brought people around to see the benefits and then subsequently realize those benefits
following adoption. This has brought me satisfaction.
But I don't do this in a vacuum. The industry as a whole has followed the same
trajectory, using the same logic. I count myself just another advocate among
a chorus of voices. And so our profession has generally come to accept unit
testing as a vital tool.
Widespread Acceptance of Automated Regression Tests
In fact, I might go so far as to call acceptance and adoption quite widespread.
That acceptance only increases if you include shops that totally mean to and will definitely
get around to it, like, sometime in the next six months or something. In other
words, if you count both shops that have adopted the practice and shops that feel
as though they should, acceptance certainly spans a plurality.
Major enterprises bring me in to help them teach their developers to do it.
Still other companies consult and ask questions about it. Just about everyone
wants to understand how to realize the unit testing value proposition of higher quality,
more stability, and fewer bugs.
This takes a simple form. We talk about unit testing and other forms of testing,
and sometimes the lines blur. But let's get specific here. A
holistic testing strategy includes tests at a variety of granularities. These
comprise what some call "the
test pyramid." Unit tests address individual components (e.g. classes),
while service tests drive at the way the components of your application work together.
GUI tests, the least granular of all, exercise the whole thing.
Taken together, these comprise your regression test suite. It stands
against the category of bugs known as "regressions," or defects where something that
used to work stops working. For a parallel example in the "real world" think
of the warning lights on your car's dashboard. "Low battery" light comes on
because the battery, which used to work, has stopped working.
Benefits of Automated Regression Test Suites
Why do this? What benefits do automated regression test suites provide?
Well, let's take a look at some.
-
Repeatability and accuracy. A human running tests over and over again may produce
slight variances in the tests. A machine, not so much.
-
Speed. As with anything, automation produces a significant speedup over manual
execution.
-
Fast feedback. The automated test suite can tell you much more quickly if you
have broken something.
-
Morale. The fewer times a QA department comes back with "you broke this thing,"
the fewer opportunities for contentiousness.
I should also mention, as a brief aside, that I don't consider automated test suites
to be acceptable substitutes for manual testing. Rather, I believe
the two efforts should work in complementary fashion. If the automated test
suite executes the humdrum tests in the codebase, it frees QA folks up to perform
intelligent, exploratory testing. As Uncle
Bob once famously said, "it's wrong to turn humans into machines. If you
can write a script for a test procedure, then you can write a program to execute that
procedure."
Automating Code Review
None of this probably comes as much of a shock to you. If you go out and read
tech blogs, you've no doubt encountered the widespread opinion that people should
automate regression test suites. In fact, you probably share that opinion.
So don't you wonder why we don't more frequently apply that logic to other concerns?
Take code review, for instance. Most organizations do this in entirely manual
fashion outside of, perhaps, a so-called "linting" tool. They mandate automated
test coverage and then content themselves with siccing their developers on one another
in meetings to gripe over tabs, spaces, and camel casing.
Why not approach code review the same way? Why not automate the aspects of it
that lend themselves to automation, while saving human intervention for more conceptual
matters?
Benefits of Automated Code Reviews
In a study by Steve McConnell and referenced
in this blog post, "formal code inspections" produced better results for preemptively
finding bugs than even automated regression tests. So it stands to reason that
we should invest in code review in the same ways that we invest in regression testing.
And I don't mean simply time spent, but in driving forward with automation and efficiency.
Consider the benefits I listed above for automated tests, and look how they apply
to automated
code review.
-
Repeatability and accuracy. Humans will miss instances of substandard code if
they feel tired -- machines won't.
-
Speed. Do you want your code review feedback in seconds, or in hours and days?
-
Fast feedback. Because of the increased speed of the review, the reviewee gets
the results immediately after writing the code, for better learning.
-
Morale. The exact same reasoning applies here. Having a machine point
out your mistakes can save contentiousness.
I think that we'll see a similar trajectory to automating code review that we did
with automating test suites. And, what's more, I think that automated code review
will gain steam a lot more quickly and with less resistance. After all, automating
QA activities blazed a trail.
I believe the biggest barrier to adoption, in this case, is the lack of awareness.
People may not believe automating code review is possible. But I assure you,
you can do it. So keep an eye out for ways to automate
this important practice, and get in ahead of the adoption curve.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
As a teenager, I remember having a passing interest in hacking. Perhaps this
came from watching the movie Sneakers.
Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking
other people's stuff. Therefore, what I know about hacking pretty much stops
at understanding terminology and high level concepts.
Consider the term "zero
day exploit," for instance. While I understand what this means, I have never
once, in my life, sat on discovery of a software vulnerability for the purpose of
using it somehow. Usually when I discover a bug, I'm trying to deposit a check
or something, and I care only about the inconvenience. But I still understand
the term.
"Zero day" refers to the amount of time the software vendor has to prepare for the
vulnerability. You see, the clever hacker gives no warning about the vulnerability
before using it. (This seems like common sense, though perhaps hackers with
more derring-do like to give vendors half a day and watch them scramble to release something
before the hack takes effect.) The time between announcement and reality is
zero.
Increased Deployment Cadence
Let's co-opt the term "zero day" for a different purpose. Imagine that we now
use it to refer to software deployments. By "zero day deployment," we thus mean
"software deployed without any prior announcement."
But
why would anyone do this? Don't you miss out on some great marketing opportunities?
And, more importantly, can you even release software this quickly? Understanding
comes from realizing that software deployment is undergoing a radical shift.
To understand this, think about software release cadences 20 years ago. In the
90s, Internet Explorer won the first browser
war partly because it beat Netscape's plodding cadence of three years between
releases. With major software products, release cadences of a year or two dominated
the landscape back then.
But that timeline has shrunk steadily. For a highly visible example, consider
Visual Studio. In 2002, 2005, 2008, Microsoft released versions corresponding
to those years. Then it started to shrink with 2010, 2012, and 2013. Now,
the years no longer mark releases, per se, with Microsoft actually releasing major
updates on a quarterly basis.
Zero Day Deployments
As much as going from "every 3 years" to "every 3 months" impresses, websites and
SaaS vendors have shrunk it to "every day." Consider Facebook's
deployment cadence. They roll minor updates every business day and major
ones every week.
With this cadence, we truly reach zero day deployment. You never hear Facebook
announcing major upcoming releases. In fact, you never hear Facebook announcing
releases, period. The first the world sees of a given Facebook release is when
the release actually happens. Truly, this means zero day releases.
Oh, don't get me wrong. Rumors of upcoming features and capabilities circulate,
and Facebook certainly has a robust marketing department. But Facebook and companies
with similar deployment approaches have impressively made deployments a non-event.
And others are looking to follow suit, perhaps yours included.
Conceptual Impediments to Zero Day Deployments
If what I just said made you spit your drink at the screen, I understand. Perhaps
your deployment and release process takes so long that the thought of shrinking it
to a day made you laugh. Or perhaps it terrified you. Either way, I can understand
that it may seem quite a leap.
You may conceive of Facebook and other practitioners so alien to your own situation
that you see no path from here to there. But in reality, they almost certainly
do the same things you do as part of your longer process -- just optimized and automated.
Impediments take a variety of forms. You might have lengthy quality assurance
and vetting processes, perhaps that require many iterations between the developers
and quality assurance. You might still be packaging software onto DVDs and shipping
it to customers. Perhaps you run all sorts of checks and analytics on it.
But all will fall under the general heading of requiring manual intervention or consuming
a lot of time.
To get to zero day deployments, you need to automate and speed up considerably, and
this can seem daunting.
What's Common Today
Some good news exists, though. The same forces that let the Visual Studio team
see such radical improvement push on software shops across the board. We all
have access to helpful technologies.
For instance, the overwhelming majority of organizations now have continuous integration
via dedicated build machines. Software developers commit code, and these things
scoop it up, compile it, and package it up in a deployable package. This activity
now happens on the order of minutes whereas, in the past, I can remember shops where
this was some poor guy's entire job, and he'd spend days on each build.
And, speaking of the CI server, a lot of them run automated test suites as part of
what they do. Most commonly, this means unit tests. But they might also
invoke acceptance tests and even more exotic things like smoke, GUI, and functionality
tests. You can thus accept commits, build the software, run a bunch of tests,
and get it ready to deploy.
Of course, you can also automate the actual deployment as well. It stands to
reason that, if your build machine can ball it up into a deliverable, it can deliver
that deliverable. This might be harder with physical media involved, but as
more software deliveries happen over networks, more of them get automated.
What We Need Next
With all of that in place, why don't we have more zero day deployments? What's
missing?
Again, discounting the problem of physical media, I'd say quality checks present the
biggest issue. We can compile, run automated tests, and deploy automatically.
But does this guarantee acceptable production behavior?
What about the important element of code reviews? How do you assure that, even
as automated tests pass, the application isn't piling up mountains of technical debt
and impeding future deployments? To get to zero day deployments, we must address
these issues.
Don't get me wrong. Other things matter here as well. Zero day deployments
require robust production checks and sophisticated "oops, that didn't work, rollback!"
capabilities. But I think that nothing will matter more than automated
quality checks.
Each time you commit code, you need an intelligent analysis of that code that should
fail the build as surely as failing tests if issues crop up. In a zero day deployment
context, you cannot afford best practice violations. You cannot afford slipping
quality, mounting technical debt, and you most certainly cannot afford code rot.
Today's rot in a zero day deployment scenario means tomorrow's inability to deploy
that way.
Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and reduce technical debt.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
A little while back, I started
a post series explaining some of the CodeIt.Right rules. I led into the
post with a narrative, which I won't retell. But I will reiterate the two rules
that I follow when it comes to static analysis tooling.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
Because I follow these two rules, I find myself researching every fix suggested to
me by my tooling. And, since I've gone to the trouble of doing so, I'll save
you that same trouble by explaining some of those rules today. Specifically,
I'll examine 3 more CodeIt.Right rules
today and explain the rationale behind them.
Mark assemblies CLSCompliant
If you develop in .NET, you've no doubt run across this particular warning at some
point in your career. Before we get into the details, let's stop and define
the acronyms. "CLS" stands for "Common Language Specification," so the warning
informs you that you need to mark your assemblies "Common Language Specification Compliant"
(or non-compliant, if applicable).
Okay, but what does that mean? Well, you can easily forget that many programming
languages besides your language of choice target the .NET runtime. CLS compliance
indicates that any language targeting the runtime can use your assembly. You
can write language-specific code, incompatible with other framework languages.
CLS compliance means you haven't.
Want an example? Let's say that you write C# code and that you decide to get
cute. You have a class with a "DoStuff" method, and you want to add a slight
variation on it. Because the new method adds improved functionality, you decide
to call it "DOSTUFF" in all caps to indicate its awesomeness. No problem, says
the C# compiler.
And yet, if you try to do the same thing in Visual Basic, a case-insensitive language,
you will encounter a compiler error. You have written C# code that VB code cannot
use. Thus you have written non-CLS compliant code. The CodeIt.Right rule
exists to inform you that you have not specified your assembly's compliance or non-compliance.
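A minimal sketch of the scenario (the class name is assumed):

public class Worker
{
    public void DoStuff() { }

    // Legal C#, but a case-insensitive .NET language like VB cannot
    // tell this apart from DoStuff -- so the code is not CLS compliant.
    public void DOSTUFF() { }
}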
To fix, go specify. Ideally, go into the project's AssemblyInfo.cs file and
add the following to call it a day.
[assembly:CLSCompliant(true)]
But you can also specify non-compliance for the assembly to avoid a warning.
Of course, you can do better by marking the assembly compliant on the whole and then
hunting down and flagging non-compliant methods with the attribute.
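Sketched out, that approach looks like this (Worker being the hypothetical class from above):

using System;

[assembly: CLSCompliant(true)]

public class Worker
{
    public void DoStuff() { }

    // Explicitly flagged, so the compiler won't complain about the
    // case-only clash with DoStuff while the assembly claims compliance.
    [CLSCompliant(false)]
    public void DOSTUFF() { }
}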
Specify IFormatProvider
Next up, consider a warning to "specify IFormatProvider." When you encounter
this for the first time, it might leave you scratching your head. After all,
"IFormatProvider" seems a bit... technician-like. A more newbie-friendly name
for this warning might have been, "you have a localization problem."
For example, consider a situation in which some external source supplies a date. Except
they supply the date as a string, and you have the task of converting it to a proper DateTime so
that you can perform operations on it. No problem, right?
var properDate = DateTime.Parse(inputString);
That should work, provided provincial concerns do not intervene. For those of
you in the US, "03/02/1995" corresponds to March 2nd, 1995. Of course, should
you live in Iraq, that date string would correspond to February 3rd, 1995. Oops.
Consider a nightmare scenario wherein you write some code with this parsing mechanism.
Based in the US and with most of your customers in the US, this works for years.
Eventually, though, your sales group starts making inroads elsewhere. Years
after the fact, you wind up with a strange bug in code you haven't touched for years.
Yikes.
By specifying a format provider, you can avoid this scenario.
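A minimal sketch of the remedy (the culture choice and sample input are illustrative, not prescriptive):

using System;
using System.Globalization;

var inputString = "03/02/1995";

// Interpret the string explicitly as a US-formatted date...
var properDate = DateTime.Parse(inputString, CultureInfo.GetCultureInfo("en-US"));

// ...or, for machine-generated dates, use the invariant culture.
var invariantDate = DateTime.Parse(inputString, CultureInfo.InvariantCulture);

Either way, the parse no longer silently depends on the regional settings of whatever machine happens to run the code.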
Nested types should not be visible
Unlike the previous rule, this one's name suffices for description. If you declare
a type within another type (say a class within a class), you should not make the nested
type visible outside of the outer type. So, the following code triggers the
warning.
public class Outer
{
    public class Nested
    {
    }
}
To understand the issue here, consider the object oriented principle of encapsulation.
In short, hiding implementation details from outsiders gives you more freedom to vary
those details later, at your discretion. This thinking drives the rote instinct
for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.
To some degree, the same reasoning applies here. If you declare a class or struct inside
of another one, then presumably only the containing type needs the nested one.
In that case, why make it public? On the other hand, if another type does, in
fact, need the nested one, why scope it within a parent type and not just the same
namespace?
You may have some reason for doing this -- something specific to your code and your
implementation. But understand that this is weird and will tend to create awkward,
hard-to-discover code. For this reason, your static analysis tool flags your
code.
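For completeness, a minimal sketch of the usual remedy: scope the nested type to its container (or else promote it to a sibling type in the same namespace).

public class Outer
{
    private class Nested
    {
    }
}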
Until Next Time
As I said last time, you can extract a ton of value from understanding code analysis
rules. This goes beyond just understanding your tooling and accepted best practice.
Specifically, it gets you in the habit of researching and understanding your code
and applications at a deep, philosophical level.
In this post alone, we've discussed language interoperability, geographic maintenance
concerns, and object oriented design. You can, all too easily, dismiss analysis
rules as perfectionism. They aren't; they have very real, very important applications.
Stay tuned for more posts in this series, aimed at helping you understand your tooling.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
CodeIt.Right v3.0 is here – the new major version
of our automated code review and code quality analysis product. Here are the v3.0
new feature highlights:
-
VS2017 RC integration
-
Official support for VS2015 Update 3 and ASP.NET 5/ASP.NET Core
1.0 solutions
-
Solution filtering by date, source control status and file patterns
-
Summary report view - provides a summary view of the analysis results and metrics,
customizable to your needs
-
New Review Code commands – review opened
files and review checked out files
-
Improved Profile Editor with advanced rule search and filtering
-
Improved look and feel for Violations Report and Editor violation
markers
-
Setting to keep the OnDemand and Instant Review profiles in
sync
-
New Jenkins integration plugin
-
Batch correction is now turned off by default
-
Almost every CodeIt.Right action can now be assigned a keyboard
shortcut
-
New rules
For the complete and detailed list of the v3.0 changes see What's
New in CodeIt.Right v3.0
Solution Filtering
The solution filtering feature allows you to narrow the code review scope using the
following options:
-
Analyze files modified Today/This Week/Last 2 Weeks/This Month
– so you can set the relative date once and not have to change the date every day
-
Analyze files modified since specific date
-
Analyze files opened in Visual Studio tabs
-
Analyze files checked out from the source control
-
Analyze only specific files – only include the files that match
a list of file patterns like *Core*.cs or Modules\*. See this
KB post for the file path pattern details and examples.
New Review Code commands
We have changed the Start Analysis menu to Review Code – it is still the same feature;
the new name just highlights the automated code review nature of the product.
We have also added the following Review Code commands:
-
Analyze Open Files menu - analyze only the files opened in Visual Studio tabs
-
Analyze Checked Out Files menu - analyze only the files that are checked out from
source control
Improved
Profile Editor
The Profile Editor now features:
-
Advanced rule filtering by rule id, title, name, severity, scope, target, and programming
language
-
Allows you to quickly show only active, only inactive, or all rules in the profile
-
Shows totals for the profile rules - total, active, and filtered
-
Improved adding rules with multiple categories
Summary Report
The Summary Report tab provides an overview of the analyzed source code quality.
It includes a high-level summary of the current analysis information, filters, violation
summary, top N violations, solution info, and metrics. Additionally, it provides a detailed
list of violations and excludes.
The report is self-contained – no external dependencies; everything it requires is
included within the HTML file. This makes it very easy to email the report to someone
or publish it on the team portal – see example.
The Summary Report is based on ASP.NET Razor markup within the Summary.cshtml template.
This makes it very easy for you to customize it to your needs.
You will find the summary report API documentation in the help file – CodeIt.Right
–> Help & Support –> Help –> Summary Report API.
How do I try it?
Download the v3.0 at http://submain.com/download/codeit.right/
Feedback is what keeps us going!
Let us know what you think of the new version here - http://submain.com/support/feedback/
Note to CodeIt.Right v2 users: the v2.x license codes won't work with
v3.0. For users with an active Software Assurance subscription, we have sent out the
v3.x license codes. If you have not received or have misplaced your new license, you can
retrieve it on the My Account page.
Users with an expired Software Assurance subscription will need to purchase the new version
– currently we are not offering an upgrade path other than the Software Assurance subscription.
For information about the upgrade protection, see our Software
Assurance and Support - Renewal / Reinstatement Terms
|
-
I've heard tell of a social experiment conducted with monkeys. It may or may
not be apocryphal, but it illustrates an interesting point. So, here goes.
Primates and Conformity
A group of monkeys inhabited a large enclosure, which included a platform in the middle,
accessible by a ladder. For the experiment, their keepers set a banana on the
platform, but with a catch. Anytime a monkey would climb to the platform, the
action would trigger a mechanism that sprayed the entire cage with freezing cold water.
The smarter monkeys quickly figured out the correlation and actively sought to prevent
their cohorts from triggering the spray. Anytime a monkey attempted to climb
the ladder, they would stop it and beat it up a bit by way of teaching a lesson.
But the experiment wasn't finished.
Once the behavior had been established, they began swapping out monkeys. When
a newcomer arrived on the scene, he would go for the banana, not knowing the social
rules of the cage. The monkeys would quickly teach him, though. This continued
until they had rotated out all original monkeys. The monkeys in the cage would
beat up the newcomers even though they had never experienced the actual negative
consequences.
Now before you think to yourself, "stupid monkeys," ask yourself how much better you'd
fare. This
video shows that humans have the same instincts as our primate cousins.
Static Analysis and Conformity
You might find yourself wondering why I told you this story. What does it have
to do with software tooling and static analysis?
Well, I find that teams tend to exhibit two common anti-patterns when it comes to
static analysis. Most prominently, they tune out warnings without due diligence.
After that, I most frequently see them blindly implement the suggestions.
I tend to follow two rules when it comes to my interaction with static analysis tooling.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
You syllogism buffs out there have, no doubt, condensed this to a single rule.
Anytime you encounter a suggested fix you don't understand, go learn about it.
Once you understand it, you can implement the fix or ignore the suggestion with eyes
wide open. In software design/architecture, we deal with few clear cut rules
and endless trade-offs. But you can't speak intelligently about the trade-offs
without knowing the theory behind them.
Toward that end, I'd like to facilitate that learning for some CodeIt.Right rules
today. Hopefully this helps you leverage your tooling to its full benefit.
Abstract types should not have public constructors
First up, consider the idea of abstract types with public constructors.
public abstract class Shape
{
    protected ConsoleColor _color;

    public Shape(ConsoleColor color)
    {
        _color = color;
    }
}

public class Square : Shape
{
    public int SideLength { get; set; }

    public Square(ConsoleColor color) : base(color)
    {
    }
}
CodeIt.Right will ding you for making the Shape constructor public (or
internal -- it wants protected). But why?
Well, you'll quickly discover that CodeIt.Right has good company in the form of the
.NET Framework guidelines and FxCop rules. But that just shifts the discussion
without solving the problem. Why does everyone seem not to like this
code?
First, understand that you cannot instantiate Shape, by design. The "abstract"
designation effectively communicates Shape's incompleteness. It's more of a template than
a finished class, in that creating a Shape makes no sense without the added specificity
of a derived type, like Square.
So the only way classes outside of the inheritance hierarchy can interact with Shape
is indirectly, via Square. They create Squares, and those Squares decide how to
go about interacting with Shape. Don't believe me? Try getting around
this. Try creating a Shape in code, or try deleting Square's constructor and
calling new Square(color). Neither will compile.
Thus, when you make Shape's constructor public or internal, you invite users of your
inheritance hierarchy to do something impossible. You engage in false
advertising and you confuse them. CodeIt.Right is helping you avoid this
mistake.
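The fix, sketched minimally: make the constructor protected, so only derived types like Square can call it, and the public surface stops advertising the impossible.

using System;

public abstract class Shape
{
    protected ConsoleColor _color;

    // Reachable only from derived types -- which matches what
    // "abstract" already promises.
    protected Shape(ConsoleColor color)
    {
        _color = color;
    }
}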
Do not catch generic exception types
Next up, let's consider the wisdom, "do not catch generic exception types."
To see what that looks like, consider the following code.
public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch (Exception ex)
    {
        _logger.Log($"Exception {ex.Message} occurred.");
        return false;
    }
}
Here we have a method that merges two users together, given their IDs. It accomplishes
this by fetching them from some persistence ignorance scheme, invoking a merge operation,
saving the merged one and deleting the vestigial one. Oh, and it wraps the whole
thing in a try block, and then logs and returns false should anything fail.
And, by anything, I mean absolutely anything. Business rules make merge
impossible? Log and return false. Server out of memory? Log it and
return false. Server hit by lightning and user data inaccessible? Log
it and return false.
With this approach, you encounter two categories of problem. First, you fail
to reason about or distinguish among the different things that might go wrong.
And, secondly, you risk overstepping what you're equipped to handle here. Do
you really want to handle fatal system exceptions right smack in the heart
of the MergeUsers business logic?
You may encounter circumstances where you want to handle everything, but probably
not as frequently as you think. Instead of defaulting to this catch all, go
through the exercise of reasoning about what could go wrong here and what you want
to handle.
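As a sketch of that exercise (InvalidOperationException stands in for whatever your merge logic actually throws when business rules block the merge -- an assumption, not part of the original example):

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch (InvalidOperationException ex)
    {
        // Handle only the failure we can reason about here; let fatal
        // system exceptions propagate to something equipped for them.
        _logger.Log($"Merge blocked: {ex.Message}");
        return false;
    }
}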
Avoid language specific type names in parameters
If you see this violation, you probably have code that resembles the following.
(Though, hopefully, you wouldn't write this actual method)
public int Add(int xInt, int yInt)
{
    return xInt + yInt;
}
CodeIt.Right does not like the name "int" in the parameters and this reflects a .NET
Framework guideline.
Here, we find something a single-language developer may not stop to consider.
Specifically, not all languages that target the .NET Framework use the same type name
conventions. You say "int" and a VB developer says "Integer." So if a
VB developer invokes your method from a library, she may find this confusing.
That said, I would like to take this one step further and advise that you avoid baking
types into your parameter/variable names in general. Want to know why?
Let's consider a likely outcome of some project manager coming along and saying, "we
want to expand the add method to be able to handle really big numbers." Oh,
well, simple enough!
public long Add(long xInt, long yInt)
{
    return xInt + yInt;
}
You just needed to change the datatypes to long, and voilà! Everything went
perfectly until someone asked you at code review why you have a long called "xInt."
Oops. You totally didn't even think about the variable names.
You'll be more careful next time. Well, I'd advise avoiding "next time" completely
by getting out of this naming habit. The IDE can tell you the type of a variable
-- don't encode it into the name redundantly.
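The habit-breaking version, for completeness:

public long Add(long x, long y)
{
    return x + y;
}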
Until Next Time
As I said in the introductory part of the post, I believe huge value exists in understanding
code analysis rules. You make better decisions, have better conversations, and
get more mileage out of the tooling. In general, this understanding makes you
a better developer. So I plan to continue with these explanatory posts from
time to time. Stay tuned!
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
We have just made available the Release Candidate of CodeIt.Right v3.0. Here are the
new feature highlights:
-
VS2017 RC integration
-
Solution filtering by date, source control status and file patterns
-
Summary report view (announced as the Dashboard in the Beta preview) - provides a
summary view of the analysis results and metrics, customizable to your needs
These features were announced as part of our recent v3 Beta:
-
Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core
1.0 solutions
-
New Review Code commands:
-
only opened files
-
only checked out files
-
only files modified after specific date
-
Improved Profile Editor with advanced rule search and filtering
-
Improved look and feel for Violations Report and Editor violation
markers
-
New rules
-
Setting to keep the OnDemand and Instant Review profiles in
sync
-
New Jenkins integration plugin
-
Batch correction is now turned off by default
-
Almost every CodeIt.Right action can now be assigned a keyboard
shortcut
-
For the Beta changes and screenshots, please see Overview
of CodeIt.Right v3.0 Beta Features
For the complete and detailed list of the v3.0 changes see What's
New in CodeIt.Right v3.0
To give the v3.0 Release Candidate a try, download it here - http://submain.com/download/codeit.right/beta/
Solution Filtering
In addition to the solution filtering by modified-since date and by open and checked-out
files available in the Beta, we are introducing a few more options:
-
Analyze files modified Today/This Week/Last 2 Weeks/This Month
– so you can set the relative date once and not have to change the date every day
-
Analyze only specific files – only include the files that match
a list of file patterns like *Core*.cs or Modules\*. See this
KB post for the file path pattern details and examples.
Summary Report
The Summary Report tab provides an overview of the analyzed source code quality.
It includes a high-level summary of the current analysis information, filters, violation
summary, top N violations, solution info, and metrics. Additionally, it provides a detailed
list of violations and excludes.
The report is self-contained – no external dependencies; everything it requires is
included within the HTML file. This makes it very easy to email the report to someone
or publish it on the team portal – see example.
The Summary Report is based on ASP.NET Razor markup within the Summary.cshtml template.
This makes it very easy for you to customize it to your needs.
You will find the summary report API documentation in the help file – CodeIt.Right
–> Help & Support –> Help –> Summary Report API.
Feedback
We would love to hear your feedback on the new features! Please email it to us at support@submain.com or
post in the CodeIt.Right
Forum.
|