|
-
For years, I can remember fighting the good fight for unit testing. When I started
that fight, I understood a simple premise. We, as programmers, automate things.
So, why not automate testing?
Of all things, a grad school course in software engineering introduced me to the concept
back in 2005. It hooked me immediately, and I began applying the lessons to
my work at the time. A few years and a new job later, I came to a group that
had not yet discovered the wonders of automated testing. No worries, I figured,
I can introduce the concept!
Except, it turns out that people stuck in their ways kind of like those ways.
Imagine my surprise to discover that people turned up their nose at the practice.
Over the course of time, I learned to plead my case, both in technical and business
terms. But it often felt like wading upstream against a fast-moving current.
Years later, I have fought that fight over and over again. In fact, I've produced
training materials, courses, videos, blog posts, and books on the subject. I've
brought people around to see the benefits and then realize those benefits
following adoption. This has brought me satisfaction.
But I don't do this in a vacuum. The industry as a whole has followed the same
trajectory, using the same logic. I count myself just another advocate among
a chorus of voices. And so our profession has generally come to accept unit
testing as a vital tool.
Widespread Acceptance of Automated Regression Tests
In fact, I might go so far as to call acceptance and adoption quite widespread.
That adoption only increases if you include shops that totally mean to and will definitely
get around to it, like, sometime in the next six months or something. In other
words, if you count both shops that have adopted the practice and shops that feel
as though they should, acceptance certainly reaches at least a plurality.
Major enterprises bring me in to help them teach their developers to do it.
Still other companies consult and ask questions about it. Just about everyone
wants to understand how to realize the unit testing value proposition of higher quality,
more stability, and fewer bugs.
This takes a simple form. We talk about unit testing and other forms of testing,
and sometimes the lines between them blur. But let's get specific here. A
holistic testing strategy includes tests at a variety of granularities. These
comprise what some call "the
test pyramid." Unit tests address individual components (e.g. classes),
while service tests drive at the way the components of your application work together.
GUI tests, the least granular of all, exercise the whole thing.
Taken together, these comprise your regression test suite. It stands guard
against the category of bugs known as "regressions": defects where something that
used to work stops working. For a parallel example in the "real world," think
of the warning lights on your car's dashboard. The "low battery" light comes on
because the battery, which used to work, has stopped working.
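To bring the pyramid's bottom layer to life, a unit test might look something like the following (a minimal sketch using NUnit; the Calculator class here is hypothetical).
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_Returns_Sum_Of_Operands()
    {
        // Exercises a single component in isolation -- the bottom of the pyramid.
        var calculator = new Calculator();

        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}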
Benefits of Automated Regression Test Suites
Why do this? What benefits do automated regression test suites provide?
Well, let's take a look at some.
-
Repeatability and accuracy. A human running tests over and over again may produce
slight variances in the tests. A machine, not so much.
-
Speed. As with anything, automation produces a significant speedup over manual
execution.
-
Fast feedback. The automated test suite can tell you much more quickly if you
have broken something.
-
Morale. The fewer times a QA department comes back with "you broke this thing,"
the fewer opportunities for contentiousness.
I should also mention, as a brief aside, that I don't consider automated test suites
to be acceptable substitutes for manual testing. Rather, I believe
the two efforts should work in complementary fashion. If the automated test
suite executes the humdrum tests in the codebase, it frees QA folks up to perform
intelligent, exploratory testing. As Uncle
Bob once famously said, "it's wrong to turn humans into machines. If you
can write a script for a test procedure, then you can write a program to execute that
procedure."
Automating Code Review
None of this probably comes as much of a shock to you. If you go out and read
tech blogs, you've no doubt encountered the widespread opinion that people should
automate regression test suites. In fact, you probably share that opinion.
So don't you wonder why we don't apply that logic more frequently to other concerns?
Take code review, for instance. Most organizations do this in entirely manual
fashion outside of, perhaps, a so-called "linting" tool. They mandate automated
test coverage and then content themselves with siccing their developers on one another
in meetings to gripe over tabs, spaces, and camel casing.
Why not approach code review the same way? Why not automate the aspects of it
that lend themselves to automation, while saving human intervention for more conceptual
matters?
Benefits of Automated Code Reviews
In a study by Steve McConnell, referenced
in this blog post, "formal code inspections" produced better results for preemptively
finding bugs than even automated regression tests. So it stands to reason that
we should invest in code review in the same ways that we invest in regression testing.
And I don't mean simply in time spent, but in driving forward with automation and efficiency.
Consider the benefits I listed above for automated tests, and look at how they apply
to automated code review.
-
Repeatability and accuracy. Humans will miss instances of substandard code if
they feel tired -- machines won't.
-
Speed. Do you want your code review to take seconds, or hours and days?
-
Fast feedback. Because of the increased speed of the review, the reviewee gets
the results immediately after writing the code, for better learning.
-
Morale. The exact same reasoning applies here. Having a machine point
out your mistakes can save contentiousness.
I think automated code review will follow a trajectory similar to the one automated
test suites did. And, what's more, I think it
will gain steam a lot more quickly and with less resistance. After all, automating
QA activities blazed a trail.
I believe the biggest barrier to adoption, in this case, is the lack of awareness.
People may not believe automating code review is possible. But I assure you,
you can do it. So keep an eye out for ways to automate
this important practice, and get in ahead of the adoption curve.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
As a teenager, I remember having a passing interest in hacking. Perhaps this
came from watching the movie Sneakers.
Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking
other people's stuff. Therefore, what I know about hacking pretty much stops
at understanding terminology and high level concepts.
Consider the term "zero
day exploit," for instance. While I understand what this means, I have never
once, in my life, sat on discovery of a software vulnerability for the purpose of
using it somehow. Usually when I discover a bug, I'm trying to deposit a check
or something, and I care only about the inconvenience. But I still understand
the term.
"Zero day" refers to the amount of time the software vendor has to prepare for the
vulnerability. You see, the clever hacker gives no warning about the vulnerability
before using it. (This seems like common sense, though perhaps hackers with
more derring-do like to give vendors half a day and watch them scramble to release something
before the hack takes effect.) The time between announcement and reality is
zero.
Increased Deployment Cadence
Let's co-opt the term "zero day" for a different purpose. Imagine that we now
use it to refer to software deployments. By "zero day deployment," we thus mean
"software deployed without any prior announcement."
But
why would anyone do this? Don't you miss out on some great marketing opportunities?
And, more importantly, can you even release software this quickly? Understanding
comes from realizing that software deployment is undergoing a radical shift.
To understand this, think about software release cadences 20 years ago. In the
90s, Internet Explorer won the first browser
war because it managed to beat Netscape's plodding pace of three years between
releases. With major software products, release cadences of a year or two dominated
the landscape back then.
But that timeline has shrunk steadily. For a highly visible example, consider
Visual Studio. In 2002, 2005, 2008, Microsoft released versions corresponding
to those years. Then it started to shrink with 2010, 2012, and 2013. Now,
the years no longer mark releases, per se, with Microsoft actually releasing major
updates on a quarterly basis.
Zero Day Deployments
As much as going from "every 3 years" to "every 3 months" impresses, websites and
SaaS vendors have shrunk it to "every day." Consider Facebook's
deployment cadence. They roll minor updates every business day and major
ones every week.
With this cadence, we truly reach zero day deployment. You never hear Facebook
announcing major upcoming releases. In fact, you never hear Facebook announcing
releases, period. The first the world sees of a given Facebook release is when
the release actually happens. Truly, this means zero day releases.
Oh, don't get me wrong. Rumors of upcoming features and capabilities circulate,
and Facebook certainly has a robust marketing department. But Facebook and companies
with similar deployment approaches have impressively made deployments a non-event.
And others are looking to follow suit, perhaps yours included.
Conceptual Impediments to Zero Day Deployments
If what I just said made you spit your drink at the screen, I understand. Perhaps
your deployment and release process takes so long that the thought of shrinking it
to a day made you laugh. Or perhaps it terrified you. Either way, I can understand
that it may seem quite a leap.
You may conceive of Facebook and other practitioners so alien to your own situation
that you see no path from here to there. But in reality, they almost certainly
do the same things you do as part of your longer process -- just optimized and automated.
Impediments take a variety of forms. You might have lengthy quality assurance
and vetting processes, perhaps ones that require many iterations between the developers
and quality assurance. You might still be packaging software onto DVDs and shipping
it to customers. Perhaps you run all sorts of checks and analytics on it.
But all will fall under the general heading of requiring manual intervention or consuming
a lot of time.
To get to zero day deployments, you need to automate and speed up considerably, and
this can seem daunting.
What's Common Today
Some good news exists, though. The same forces that let the Visual Studio team
achieve such radical improvement push on software shops across the board. We all
have access to helpful technologies.
For instance, the overwhelming majority of organizations now have continuous integration
via dedicated build machines. Software developers commit code, and these machines
scoop it up, compile it, and bundle it into a deployable package. This activity
now happens on the order of minutes whereas, in the past, I can remember shops where
this was some poor guy's entire job, and he'd spend days on each build.
And, speaking of the CI server, a lot of them run automated test suites as part of
what they do. Most commonly, this means unit tests. But they might also
invoke acceptance tests and even more exotic things like smoke, GUI, and functionality
tests. You can thus accept commits, build the software, run a bunch of tests,
and get it ready to deploy.
Of course, you can also automate the actual deployment as well. It stands to
reason that, if your build machine can ball it up into a deliverable, it can deliver
that deliverable. This might be harder with physical media involved, but as
more software deliveries happen over networks, more of them get automated.
What We Need Next
With all of that in place, why don't we have more zero day deployments? What's
missing?
Again, discounting the problem of physical media, I'd say quality checks present the
biggest issue. We can compile, run automated tests, and deploy automatically.
But does this guarantee acceptable production behavior?
What about the important element of code reviews? How do you assure that, even
as automated tests pass, the application isn't piling up mountains of technical debt
and impeding future deployments? To get to zero day deployments, we must address
these issues.
Don't get me wrong. Other things matter here as well. Zero day deployments
require robust production checks and sophisticated "oops, that didn't work, rollback!"
capabilities. But I think that nothing will matter more than automated
quality checks.
Each time you commit code, you need an intelligent analysis of that code that should
fail the build as surely as failing tests if issues crop up. In a zero day deployment
context, you cannot afford best practice violations. You cannot afford slipping
quality or mounting technical debt, and you most certainly cannot afford code rot.
Today's rot in a zero day deployment scenario means tomorrow's inability to deploy
that way.
Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality,
and reduce technical debt.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
A little while back, I started
a post series explaining some of the CodeIt.Right rules. I led into the
post with a narrative, which I won't retell. But I will reiterate the two rules
that I follow when it comes to static analysis tooling.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
Because I follow these two rules, I find myself researching every fix suggested to
me by my tooling. And, since I've gone to the trouble of doing so, I'll save
you that same trouble by explaining some of those rules today. Specifically,
I'll examine 3 more CodeIt.Right rules
today and explain the rationale behind them.
Mark assemblies CLSCompliant
If you develop in .NET, you've no doubt run across this particular warning at some
point in your career. Before we get into the details, let's stop and define
the acronym. "CLS" stands for "Common Language Specification," so the warning
informs you that you need to mark your assemblies "Common Language Specification Compliant"
(or non-compliant, if applicable).
Okay, but what does that mean? Well, you can easily forget that many programming
languages besides your language of choice target the .NET runtime. CLS compliance
indicates that any language targeting the runtime can use your assembly. You
can write language specific code, incompatible with other framework languages.
CLS compliance means you haven't.
Want an example? Let's say that you write C# code and that you decide to get
cute. You have a class with a "DoStuff" method, and you want to add a slight
variation on it. Because the new method adds improved functionality, you decide
to call it "DOSTUFF" in all caps to indicate its awesomeness. No problem, says
the C# compiler.
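In code form, that might look something like this (the class name is mine, for illustration):
public class Processor
{
    public void DoStuff() { }

    // Differs from DoStuff only by case -- perfectly legal C#.
    public void DOSTUFF() { }
}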
And yet, if you try to do the same thing in Visual Basic, a case-insensitive language,
you will encounter a compiler error. You have written C# code that VB code cannot
use. Thus you have written non-CLS compliant code. The CodeIt.Right rule
exists to inform you that you have not specified your assembly's compliance or non-compliance.
To fix, go specify. Ideally, go into the project's AssemblyInfo.cs file and
add the following to call it a day.
[assembly:CLSCompliant(true)]
But you can also specify non-compliance for the assembly to avoid a warning.
Of course, you can do better by marking the assembly compliant on the whole and then
hunting down and flagging non-compliant methods with the attribute.
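For that last approach, a sketch might look like this:
using System;

[assembly: CLSCompliant(true)]

public class Processor
{
    public void DoStuff() { }

    // Explicitly acknowledge the case-colliding member as non-compliant.
    [CLSCompliant(false)]
    public void DOSTUFF() { }
}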
Specify IFormatProvider
Next up, consider a warning to "specify IFormatProvider." When you encounter
this for the first time, it might leave you scratching your head. After all,
"IFormatProvider" seems a bit... technician-like. A more newbie-friendly name
for this warning might have been, "you have a localization problem."
For example, consider a situation in which some external source supplies a date. Except
they supply the date as a string, and you have the task of converting it to a proper DateTime so
that you can perform operations on it. No problem, right?
var properDate = DateTime.Parse(inputString);
That should work, provided provincial concerns do not intervene. For those of
you in the US, "03/02/1995" corresponds to March 2nd, 1995. Of course, should
you live in Iraq, that date string would correspond to February 3rd, 1995. Oops.
Consider a nightmare scenario wherein you write some code with this parsing mechanism.
Based in the US and with most of your customers in the US, this works for years.
Eventually, though, your sales group starts making inroads elsewhere. Years
after the fact, you wind up with a strange bug in code you haven't touched for years.
Yikes.
By specifying a format provider, you can avoid this scenario.
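In code, the fix amounts to supplying the culture explicitly (a sketch; pick the culture that actually matches your input):
using System;
using System.Globalization;

// Interpret the string as a US-format date, regardless of the
// culture settings on the machine running the code.
var properDate = DateTime.Parse(inputString, new CultureInfo("en-US"));

// Or, for culture-agnostic, machine-generated input:
var invariantDate = DateTime.Parse(inputString, CultureInfo.InvariantCulture);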
Nested types should not be visible
Unlike the previous rule, this one's name suffices for description. If you declare
a type within another type (say a class within a class), you should not make the nested
type visible outside of the outer type. So, the following code triggers the
warning.
public class Outer
{
    public class Nested
    {
    }
}
To understand the issue here, consider the object oriented principle of encapsulation.
In short, hiding implementation details from outsiders gives you more freedom to vary
those details later, at your discretion. This thinking drives the rote instinct
for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.
To some degree, the same reasoning applies here. If you declare a class or struct inside
of another one, then presumably only the containing type needs the nested one.
In that case, why make it public? On the other hand, if another type does, in
fact, need the nested one, why scope it within a parent type and not just the same
namespace?
You may have some reason for doing this -- something specific to your code and your
implementation. But understand that this is weird, and will tend to create awkward,
hard-to-discover code. For this reason, your static analysis tool flags your
code.
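If another type really does need the nested one, the typical fix looks like this sketch -- promote it to a sibling in the same namespace:
public class Outer
{
}

// Promoted to a top-level type, visible without reaching through Outer.
public class Nested
{
}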
Until Next Time
As I said last time, you can extract a ton of value from understanding code analysis
rules. This goes beyond just understanding your tooling and accepted best practice.
Specifically, it gets you in the habit of researching and understanding your code
and applications at a deep, philosophical level.
In this post alone, we've discussed language interoperability, geographic maintenance
concerns, and object oriented design. You can, all too easily, dismiss analysis
rules as perfectionism. They aren't; they have very real, very important applications.
Stay tuned for more posts in this series, aimed at helping you understand your tooling.
Learn
more how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Last month, I wrote a
post introducing you to T4 templates. Near the end, I included a mention
of GhostDoc's use of T4 templates in automatically generating code comments.
Today, I'd like to expand on that.
To recap very briefly, recall that GhostDoc allows you to generate things like method
header comments. I recommend that, in most cases, you let it do its thing.
It does a good job. But sometimes, you might have occasion to want to tweak
the result. And you can do that by making use of T4 Templates.
Documenting Chess TDD
To demonstrate, let's revisit my trusty toy code base, Chess
TDD. Because I put this code together for instructional purposes and not
to release as a product, it has no method header comments for IntelliSense's benefit.
This makes it the perfect candidate for a demonstration.
If I had released this as a library, I'd have started the documentation with the Board
class. Most of the client interaction would happen via Board, so let's document
that. It offers you a constructor and a bunch of semantics around placing and
moving pieces. Let's document the conceptually simple MovePiece method.
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
To add documentation to this method, I simply right click it and, from the GhostDoc
context menu, select "Document This." Alternatively, I can use the keyboard
shortcut Ctrl-Shift-D. Either option yields the following result.
/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Let's Make a Tiny Tweak
Alright, much better! If I scrutinize the comment, I can imagine what an IntelliSense-using
client will see. My parameter naming makes this conceptually simple to understand,
so the IntelliSense will tell the user that the first parameter represents the origin
square and the second parameter the destination.
But let's say that as I look at this, I find myself wanting to pick at a nit.
I don't care for the summary taking up three lines -- I want to condense it to one.
How might I do that?
Well, let's crack open the T4 template for generating a method header. Recall
that you do this in Visual Studio by selecting Tools->GhostDoc->Options,
and picking "Rules" from the options pane.
If you double click on "Method Template", as highlighted above, you will see an "Edit
Rule" Window. The first few lines of code in that window look like this.
<#@ template language="C#" #>
<# CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary>
///<# GenerateSummaryText(); #>
/// </summary>
<# if(codeElement.HasTypeParameters)
   {
       for(int i = 0; i < codeElement.TypeParameters.Length; i++)
       {
           TypeParameter typeParameter = codeElement.TypeParameters[i];
#>
Hmmm. I cannot count myself an expert in T4 templates, per se, but
I think I have an idea. Let's put that call to GenerateSummaryText() inline
between the summary tags. Like this:
<#@ template language="C#" #>
<# CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary><# GenerateSummaryText(); #></summary>
That should do it, right? Let's regenerate the comment and see what it looks
like. This results in the following.
/// <summary>Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Uh, oh. It made a difference, but somehow we only got halfway there. Why
might that be?
Diving Deeper
To understand, we need to look at the template in a bit more detail. The template
itself has everything on one line, and yet we see a newline in there somehow.
Could GenerateSummaryText cause this, somehow? Let's scroll down
to look at it. Since this method has a lot of code, here are the first few lines
only.
private void GenerateSummaryText()
{
    if(Context.HasExistingTagText("summary"))
    {
        this.WriteLine(Context.GetExistingTagText("summary"));
    }
    else if(IsAsyncMethod())
    {
        this.WriteLine(Context.ExecMacro("$(MethodName.Words.ExceptLast)") + " as an asynchronous operation.");
    }
    else if(IsMainMethod())
    {
        this.WriteLine("Defines the entry point of the application.");
    }
}
Aha! Notice that we're calling WriteLine. What if we did
a find and replace to change all of those to just Write? Let's try. (To
do more serious operations like this, you will want to copy the text out of the editor
and into your favorite text editor, which offers richer editing operations.)
Once you have replaced all instances of WriteLine with Write in the template,
here is the new result.
/// <summary>Moves the piece.</summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Success!
Validation
As you play with this, you might have noticed a "Validate" button in the rule editor.
Use this liberally! This button will trigger a parsing of the template and provide
you with feedback as to validity. The last thing you want to do is work in here
for many iterations and wind up with no idea what you broke and when.
When working with these templates, think of this as equivalent to compiling.
You wouldn't want to sit for 20 minutes writing code with no feedback as to whether
it builds or not. So don't do it with these templates.
The Power at Your Disposal
I'll wrap here for this particular lesson, but understand that we have barely scratched
the surface of what you can do. In this post, we just changed a bit of the formatting
to suit a whim I had. But you can really dive into ways of reasoning about and
documenting the code if you so choose.
Stay tuned for future posts on more advanced tips and tricks with your comment templates.
Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain
quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
CodeIt.Right v3.0 is here – the new major version
of our automated code review and code quality analysis product. Here are the v3.0
new feature highlights:
-
VS2017 RC integration
-
Official support for VS2015 Update 3 and ASP.NET 5/ASP.NET Core
1.0 solutions
-
Solution filtering by date, source control status and file patterns
-
Summary report view - provides a summary view of the analysis results and metrics,
customizable to your needs
-
New Review Code commands – review opened
files and review checked out files
-
Improved Profile Editor with advanced rule search and filtering
-
Improved look and feel for Violations Report and Editor violation
markers
-
Setting to keep the OnDemand and Instant Review profiles in
sync
-
New Jenkins integration plugin
-
Batch correction is now turned off by default
-
Almost every CodeIt.Right action can now be assigned a keyboard
shortcut
-
New rules
For the complete and detailed list of the v3.0 changes see What's
New in CodeIt.Right v3.0
Solution Filtering
The solution filtering feature allows you to narrow the code review scope using the
following options:
-
Analyze files modified Today/This Week/Last 2 Weeks/This Month
– so you can set the relative date once and not have to change the date every day
-
Analyze files modified since specific date
-
Analyze files opened in Visual Studio tabs
-
Analyze files checked out from the source control
-
Analyze only specific files – only include the files that match
a list of file patterns like *Core*.cs or Modules\*. See this
KB post for the file path pattern details and examples.
New Review Code commands
We have changed the Start Analysis menu to Review Code – it is still the same feature;
the new name just highlights the automated code review nature of the product.
We have also added the following Review Code commands:
-
Analyze Open Files menu - analyze only the files opened in Visual Studio tabs
-
Analyze Checked Out Files menu - analyze only files that are checked out from
source control
Improved
Profile Editor
The Profile Editor now features:
-
Advanced rule filtering by rule id, title, name, severity, scope, target, and programming
language
-
Lets you quickly show only active, only inactive, or all rules in the profile
-
Shows totals for the profile rules - total, active, and filtered
-
Improved adding rules with multiple categories
Summary Report
The Summary Report tab provides an overview of the analyzed source code quality. It
includes a high level summary of the current analysis information, filters, violation
summary, top N violations, solution info, and metrics. Additionally, it provides a detailed
list of violations and excludes.
The report is self-contained – no external dependencies; everything it requires is
included within the HTML file. This makes it very easy to email the report to someone
or publish it on the team portal – see example.
The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template.
This makes it very easy for you to customize it to your needs.
You will find the summary report API documentation in the help file – CodeIt.Right
–> Help & Support –> Help –> Summary Report API.
How do I try it?
Download the v3.0 at http://submain.com/download/codeit.right/
Feedback is what keeps us going!
Let us know what you think of the new version here - http://submain.com/support/feedback/
Note to CodeIt.Right v2 users: v2.x license codes won't work with
v3.0. We have sent v3.x license codes to users with an active Software Assurance
subscription. If you have not received or have misplaced your new license, you can
retrieve it on the My Account page.
Users with an expired Software Assurance subscription will need to purchase the new version
- currently we are not offering an upgrade path other than the Software Assurance subscription.
For information about the upgrade protection see our Software
Assurance and Support - Renewal / Reinstatement Terms
|
-
I've heard tell of a social experiment conducted with monkeys. It may or may
not be apocryphal, but it illustrates an interesting point. So, here goes.
Primates and Conformity
A group of monkeys inhabited a large enclosure, which included a platform in the middle,
accessible by a ladder. For the experiment, their keepers set a banana on the
platform, but with a catch. Anytime a monkey would climb to the platform, the
action would trigger a mechanism that sprayed the entire cage with freezing cold water.
The smarter monkeys quickly figured out the correlation and actively sought to prevent
their cohorts from triggering the spray. Anytime a monkey attempted to climb
the ladder, they would stop it and beat it up a bit by way of teaching a lesson.
But the experiment wasn't finished.
Once the behavior had been established, they began swapping out monkeys. When
a newcomer arrived on the scene, he would go for the banana, not knowing the social
rules of the cage. The monkeys would quickly teach him, though. This continued
until they had rotated out all original monkeys. The monkeys in the cage would
beat up the newcomers even though they had never experienced the actual negative
consequences.
Now before you think to yourself, "stupid monkeys," ask yourself how much better you'd
fare. This
video shows that humans have the same instincts as our primate cousins.
Static Analysis and Conformity
You might find yourself wondering why I told you this story. What does it have
to do with software tooling and static analysis?
Well, I find that teams tend to exhibit two common anti-patterns when it comes to
static analysis. Most prominently, they tune out warnings without due diligence.
After that, I most frequently see them blindly implement the suggestions.
I tend to follow two rules when it comes to my interaction with static analysis tooling.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
You syllogism buffs out there have, no doubt, condensed this to a single rule.
Anytime you encounter a suggested fix you don't understand, go learn about it.
Once you understand it, you can implement the fix or ignore the suggestion with eyes
wide open. In software design/architecture, we deal with few clear cut rules
and endless trade-offs. But you can't speak intelligently about the trade-offs
without knowing the theory behind them.
Toward that end, I'd like to facilitate that learning for some CodeIt.Right rules
today. Hopefully this helps you leverage your tooling to its full benefit.
Abstract types should not have public constructors
First up, consider the idea of abstract types with public constructors.
public abstract class Shape
{
    protected ConsoleColor _color;

    public Shape(ConsoleColor color)
    {
        _color = color;
    }
}

public class Square : Shape
{
    public int SideLength { get; set; }

    public Square(ConsoleColor color) : base(color)
    {
    }
}
CodeIt.Right will ding you for making the Shape constructor public (or
internal -- it wants protected). But why?
Well, you'll quickly discover that CodeIt.Right has good company in the form of the
.NET Framework guidelines and FxCop rules. But that just shifts the discussion
without solving the problem. Why does everyone seem not to like this
code?
First, understand that you cannot instantiate Shape, by design. The "abstract"
designation effectively communicates Shape's incompleteness. It's more of a template than
a finished class in that creating a Shape makes no sense without the added specificity
of a derived type, like Square .
So the only way classes outside of the inheritance hierarchy can interact with Shape
is indirectly, via Square. They create Squares, and those Squares decide how to
go about interacting with Shape. Don't believe me? Try getting around
this. Try creating a Shape in code, or try deleting Square's constructor and
calling new Square(color). Neither will compile.
Thus, when you make Shape's constructor public or internal, you invite users of your
inheritance hierarchy to do something impossible. You engage in false
advertising and you confuse them. CodeIt.Right is helping you avoid this
mistake.
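The suggested fix costs you nothing in capability, since derived types can still chain to the constructor. A sketch:
public abstract class Shape
{
    protected ConsoleColor _color;

    // Protected: reachable from Square and other derived types,
    // but no longer advertised to outside callers.
    protected Shape(ConsoleColor color)
    {
        _color = color;
    }
}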
Do not catch generic exception types
Next up, let's consider the wisdom, "do not catch generic exception types."
To see what that looks like, consider the following code.
public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch(Exception ex)
    {
        _logger.Log($"Exception {ex.Message} occurred.");
        return false;
    }
}
Here we have a method that merges two users together, given their IDs. It accomplishes
this by fetching them from some persistence ignorance scheme, invoking a merge operation,
saving the merged one and deleting the vestigial one. Oh, and it wraps the whole
thing in a try block, and then logs and returns false should anything fail.
And, by anything, I mean absolutely anything. Business rules make merge
impossible? Log and return false. Server out of memory? Log it and
return false. Server hit by lightning and user data inaccessible? Log
it and return false.
With this approach, you encounter two categories of problem. First, you fail
to reason about or distinguish among the different things that might go wrong.
And, secondly, you risk overstepping what you're equipped to handle here. Do
you really want to handle fatal system exceptions right smack in the heart
of the MergeUsers business logic?
You may encounter circumstances where you want to handle everything, but probably
not as frequently as you think. Instead of defaulting to this catch all, go
through the exercise of reasoning about what could go wrong here and what you want
to handle.
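Reworked along those lines, the method handles only what it can meaningfully respond to and lets everything else propagate. Consider this sketch, in which the specific exception type stands in for whatever your persistence layer actually throws:
public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch (InvalidOperationException ex)
    {
        // A business rule made the merge impossible -- something this
        // method can reasonably log and report as a failed merge.
        _logger.Log($"Merge failed: {ex.Message}");
        return false;
    }
    // Out of memory, inaccessible data, and other fatal conditions
    // propagate to callers better equipped to deal with them.
}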
Avoid language specific type names in parameters
If you see this violation, you probably have code that resembles the following.
(Though, hopefully, you wouldn't write this actual method)
public int Add(int xInt, int yInt)
{
    return xInt + yInt;
}
CodeIt.Right does not like the name "int" in the parameters, and this reflects a .NET
Framework guideline.
Here, we find something a single-language developer may not stop to consider.
Specifically, not all languages that target the .NET framework use the same type name
conventions. You say "int" and a VB developer says "Integer." So if a
VB developer invokes your method from a library, she may find this confusing.
That said, I would like to take this one step further and advise that you avoid baking
types into your parameter/variable names in general. Want to know why?
Let's consider a likely outcome of some project manager coming along and saying, "we
want to expand the add method to be able to handle really big numbers." Oh,
well, simple enough!
public long Add(long xInt, long yInt)
{
    return xInt + yInt;
}
You just needed to change the datatypes to long, and voilà! Everything went
perfectly until someone asked you at code review why you have a long called "xInt."
Oops. You totally didn't even think about the variable names.
You'll be more careful next time. Well, I'd advise avoiding "next time" completely
by getting out of this naming habit. The IDE can tell you the type of a variable
-- don't encode it into the name redundantly.
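In other words, prefer names that describe the parameter's role rather than its type from the get-go:
public long Add(long x, long y)
{
    return x + y;
}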
Until Next Time
As I said in the introductory part of the post, I believe huge value exists in understanding
code analysis rules. You make better decisions, have better conversations, and
get more mileage out of the tooling. In general, this understanding makes you
a better developer. So I plan to continue with these explanatory posts from
time to time. Stay tuned!
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Today, I'd like to tackle a subject that inspires ambivalence in me. Specifically,
I mean the subject of automated text generation (including a common, specific flavor:
code generation).
If you haven't encountered this before, consider a common example. When you
file->new->(console) project, Visual Studio generates a Program.cs file.
This file contains standard includes, a program class, and a public static void method
called "Main." Conceptually, you just triggered text (and code) generation.
Many schemes exist for doing this. Really, you just need a templating scheme
and some kind of processing engine to make it happen. Think of ASP MVC, for
instance. You write markup sprinkled with interpreted variables (i.e. Razor),
and your controller object processes that and spits out pure HTML to return as the
response. PHP and other server side scripting constructs operate this way and
so do code/text generators.
However, I'd like to narrow the focus to a specific case: T4
templates. You can use this powerful construct to generate all manner of
text. But use discretion, because you can also use this powerful construct to
make a huge mess. I wrote a
post about the potential perils some years back, but suffice it to say that you
should take care not to automate and speed up copy and paste programming. Make
sure your case for use makes sense.
The Very Basics
With the obligatory disclaimer out of the way, let's get down to brass tacks.
I'll offer a lightning fast getting started primer.
Open some kind of playpen project in Visual Studio, and add a new item. You
can find the item in question under the "General" heading as "Text Template."
Give it a name. For instance, I called mine "sample" while writing this post.
Once you do that, you will see it show up in the root directory of your project as
Sample.tt. Here is the text that it contains.
<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
Save this file. When you do so, Visual Studio will prompt you with a message
about potentially harming your computer, so something must be happening behind the
scenes, right? Indeed, something has happened. You have generated the
output of the T4 generation process. And you can see it by expanding the caret
next to your Sample.tt file as shown here.
If you open the Sample.txt file, however, you will find it empty. That's because
we haven't done anything interesting yet. Add a new line with the text "hello
world" to the bottom of the Sample.tt file and then save. (And feel free to
get rid of that message about harming your computer by opting out, if you want).
You will now see a new Sample.txt file containing the words "hello world."
Beyond the Trivial
While you might find it satisfying to get going, what we've done so far could be accomplished
with file copy. Let's take advantage of T4 templating in earnest. First
up, observe what happens when you change the output extension. Make it something
like .blah and observe that saving results in Sample.blah. As you can see, there's
more going on than simple text duplication. But let's do something more interesting.
Update your Sample.tt file to contain the following text and then click save.
<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
<#
for(int i = 0; i < 10; i++)
    WriteLine($"Hello World {i}");
#>
When you open Sample.txt, you will see the following.
Hello World 0
Hello World 1
Hello World 2
Hello World 3
Hello World 4
Hello World 5
Hello World 6
Hello World 7
Hello World 8
Hello World 9
Pretty neat, huh? You've used the <# #> tokens to surround first class
C# that you can use to generate text. I imagine you can see the potential here.
Oh, and what happens when you type malformed C#? Remove the semicolon and see
for yourself. Yes, Visual Studio offers you feedback about bad T4 template files.
Use Cases
I'll stop here with the T4 tutorial. After all, I aimed only to provide an introduction.
And I think that part of any true introduction involves explaining where and how the
subject might prove useful to readers. So where do people reasonably use these
things?
Perhaps the most common usage scenario pertains to ORMs and the so-called impedance
mismatch problem. People create code generation schemes that examine databases
and spit out source code that matches with them. This approach spares the significant
performance hit of some kind of runtime scheme for figuring this out, but without
forcing tedious typing on dev teams. Entity Framework makes use of T4 templates.
I have seen other uses as well, however. Perhaps your organization puts involved
XML configuration files into any new projects and you want to generate these without
copy and paste. Or, perhaps you need to replace an expensive reflection/runtime
scheme for performance reasons. Maybe you have a good bit of layering boilerplate
and object mapping to do. Really, the sky is the limit here, but always bear
in mind the caveat that I offered at the beginning of this post. Take care not
to let code/text generation be a crutch for cranking out anti-patterns more rapidly.
The GhostDoc Use Case
I will close by offering a tie-in with the GhostDoc offering
as the final use case. If you use GhostDoc to generate comments for methods
and types in your codebase, you should know that you can customize the default generations
using T4 templates. (As an aside, I consider this a perfect use case for templating
-- a software vendor offering a product to developers that assists them with writing
code.)
If you open GhostDoc's options pane and navigate to "Rules" you will see the following
screen. Double clicking any of the templates will give you the option to edit
them, customizing as you see fit.
You can thus do simple things, like adding some copyright boilerplate, for instance.
Or you could really dive into the weeds of the commenting engine to customize to your
heart's content (be careful here, though). You can exert a great deal of control.
T4 templates offer you power and can make your life easier when used judiciously.
They're definitely a tool worth having in your tool belt. And, if you make use
of GhostDoc, this is doubly true.
Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain
quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Version 5.4 of GhostDoc is a maintenance update for the v5.0 users:
-
VS2017 RC integration
-
New menu items - Getting Started Tutorial and Tutorials and Resources
-
(Pro) (Ent) Edit buttons in Options - Solution Ignore List and Options - Spelling
Ignore List
-
(Pro) (Ent) Test button in Options - Solution Ignore List
-
(Ent) GhostDoc now shows an error message when the Conceptual Content path is invalid in
the solution configuration file
-
Fixed a PathTooLongException when generating a preview/build help file for C++
projects
-
(Ent) Updated ClassLibrary1.zip, moved all conceptual content files inside the project
in GhostDoc Enterprise\Samples\Conceptual Content\
-
Improved documenting ReadOnly auto-properties in VB
-
Resolved issue with re-documenting a type at the top of a source code file in VB
-
Resolved issue with generating preview of the <seealso> tag for generics in
VB
For the complete list of changes, please see What's
New in GhostDoc v5
For an overview of the v5.0 features, visit Overview
of GhostDoc v5.0 Features
Download the new build at http://submain.com/download/ghostdoc/
|
-
We have just made available the Release Candidate of CodeIt.Right v3.0. Here are the
new feature highlights:
-
VS2017 RC integration
-
Solution filtering by date, source control status and file patterns
-
Summary report view (announced as the Dashboard in the Beta preview) - provides a
summary view of the analysis results and metrics, customizable to your needs
These features were announced as part of our recent v3 Beta:
-
Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core
1.0 solutions
-
New Review Code commands:
-
only opened files
-
only checked out files
-
only files modified after specific date
-
Improved Profile Editor with advanced rule search and filtering
-
Improved look and feel for Violations Report and Editor violation
markers
-
New rules
-
Setting to keep the OnDemand and Instant Review profiles in
sync
-
New Jenkins integration plugin
-
Batch correction is now turned off by default
-
Almost every CodeIt.Right action can now be assigned a keyboard
shortcut
For the Beta changes and screenshots, please see Overview
of CodeIt.Right v3.0 Beta Features
For the complete and detailed list of the v3.0 changes see What's
New in CodeIt.Right v3.0
To give the v3.0 Release Candidate a try, download it here - http://submain.com/download/codeit.right/beta/
Solution Filtering
In addition to the solution filtering by modified-since date and by open and checked
out files available in the Beta, we are introducing a few more options:
-
Analyze files modified Today/This Week/Last 2 Weeks/This Month
– so you can set the relative date once and not have to change the date every day
-
Analyze only specific files – only include the files that match
a list of file patterns like *Core*.cs or Modules\*. See this
KB post for the file path pattern details and examples.
Summary Report
The Summary Report tab provides an overview of the analyzed source code quality. It
includes a high level summary of the current analysis information, filters, violation
summary, top N violations, solution info, and metrics. Additionally, it provides a detailed
list of violations and excludes.
The report is self-contained – no external dependencies; everything it requires is
included within the HTML file. This makes it very easy to email the report to someone
or publish it on the team portal – see example.
The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template.
This makes it very easy for you to customize it to your needs.
You will find the summary report API documentation in the help file – CodeIt.Right
–> Help & Support –> Help –> Summary Report API.
Feedback
We would love to hear your feedback on the new features! Please email it to us at support@submain.com or
post in the CodeIt.Right
Forum.
|
-
We are looking for your input and we're willing to bribe you for answering one very
simple question: What are your biggest code documentation challenges right now?
The survey is super-quick, and we're offering a $20 discount code for
your time (good with any new SubMain product license purchase), which you will automatically
receive once you complete the survey, as our thank you.
We'd also appreciate it if you'd help us out by tweeting about this using the Share
on Twitter link or otherwise letting folks know we're interested in hearing about their code
documentation challenges.
Thanks for your help!
|
-
During my younger days, I worked for a company that made a habit of strategic acquisition.
They didn't participate in Time Warner-style mergers, but periodically they would
purchase a smaller competitor or a related product. And on more than one occasion,
I inherited the lead role for the assimilating software from one of these organizations.
Lucky me, right?
If I think in terms of how to describe this to someone, a plumbing analogy comes to
mind. Over the years, I have learned enough about plumbing to handle most tasks
myself. And this has exposed me to the irony of discovering a small leak in
a fitting plugged by grit or debris. I find this ironic because two wrongs make
a right. A dirty, leaky fitting reaches sub-optimal equilibrium, and you spring
a leak when you clean it.
Legacy codebases have this issue as well. You inherit some acquired codebase,
fix a tiny bug, and suddenly the defect floodgates open. And then you realize
the perilousness of your situation.
While you might not have come by it in the same way that I did, I imagine you can
relate. At some point or another, just about every developer has been thrust
into supporting some creaky codebase. How should you handle this?
Put Your Outrage in Check
First, take some deep breaths. Seriously, I mean it. As software developers,
we seem to hate code written by others. In fact, we seem to hate our own
code if we wrote it more than a few months ago. So when you see the legacy
codebase for the first time, you will feel a natural bias toward disgust.
But don't indulge it. Don't sit there cursing the people that wrote the code,
and don't take screenshots to send to the
Daily WTF. Not only will it do you no good, but I'd go so far as to say
that this is actively counterproductive. Deciding that the code offers nothing
worth salvaging makes you less inclined to try to understand it.
The people that wrote this code dealt with older languages, older tooling, older frameworks,
and generally less knowledge than we have today. And besides, you don't know
what constraints they faced. Perhaps bosses heaped delivery pressure on them
like crazy. Perhaps someone forced them to convert to writing in a new, unfamiliar
language. Whatever the case may be, you simply didn't walk in their shoes.
So take a breath, assume they did their best, and try to understand what you have
under the hood.
Get a Visualization of the Architecture
Once you've settled in mentally for this responsibility, seek to understand quickly.
You won't achieve this by cracking open the code and looking through random source
files. But, beyond that, you also won't achieve it by looking at their architecture
documents or folder structures. Reality gets out of sync with intention, and
those things start to lie. You need to see the big picture, but in a way that
lines up with reality.
Look for tools that map dependencies and can generate a visual of the codebase.
Plenty of these tools exist for you and can automate visual depictions. Find
one and employ it. This will tell you whether the architecture resembles the
neat diagram given to you or not. And, more importantly, it will get you to
a broad understanding much more quickly.
Characterize
Once you have the picture you need of the codebase and the right frame of mind, you
can start doing things to it. And the first thing you should do is to start
writing characterization
tests.
If you have not heard of them before, characterization tests have the purpose of,
well, characterizing the codebase. You don't worry about correct or incorrect
behaviors. Instead, you accept at face value what the code does, and document
those behaviors with tests. You do this because you want to get a safety net
in place that tells you when your changes affect inputs and outputs.
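For instance, a characterization test doesn't assert what the code should do -- it pins down what the code currently does. A sketch using NUnit, with a hypothetical class and values:
[Test]
public void GetShippingCost_Preserves_Current_Behavior()
{
    var calculator = new ShippingCalculator();

    // 4.50 isn't "correct" per any spec -- it's simply what the legacy
    // code returns today. If a change alters it, this test tells us.
    Assert.AreEqual(4.50m, calculator.GetShippingCost(weightInPounds: 2));
}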
As this XKCD cartoon ably demonstrates,
someone will come to depend on the application's production behavior, however problematic.
So with legacy code, you cannot simply decide to improve a behavior and assume your
users will thank you. You need to exercise caution.
But characterization tests do more than just provide a safety net. As an exercise,
they help you develop a deeper understanding of the codebase. If the architectural
visualization gives you a skeleton understanding, this starts to put meat on the bones.
Isolate Problems
With a reliable safety net in place, you can begin making strategic changes to the
production code beyond simple break/fix. I recommend that you start by finding
and isolating problematic chunks of code. In essence, this means identifying
sources of technical debt and looking to improve, gradually.
This can mean pockets of global state or extreme complexity that make for risky change.
But it might also mean dependencies on outdated libraries, frameworks, or APIs.
In order to extricate yourself from such messes, you must start to isolate them from
business logic and important plumbing code. Once you have it isolated, fixes
will come more easily.
Evolve Toward Modernity
Once you've isolated problematic areas and archaic dependencies, it certainly seems
logical to subsequently eliminate them. And, I suggest you do just that as a
general rule. Of course, sometimes isolating them gives you enough of a win
since it helps you mitigate risk. But I would consider this the exception and
not the rule. You want to remove problem areas.
I do not say this idly nor do I say it because I have some kind of early adopter drive
for the latest and greatest. Rather, being stuck with old tooling and infrastructure
prevents you from taking advantage of modern efficiencies and gains. When some
old library prevents you from upgrading to a more modern language version, you wind
up writing more, less efficient code. Being stuck in the past will cost you
money.
The Fate of the Codebase
As you get comfortable and take ownership of the legacy codebase, never stop contemplating
its fate. Clearly, in the beginning, someone decided that the application's
value outweighed its liability factor, but that may not always continue to be true.
Keep your finger on the pulse of the codebase, while considering options like migration,
retirement, evolution, and major rework.
And, finally, remember that taking over a legacy codebase need not be onerous.
As initially shocked as I found myself with the state of some of those acquisitions,
some of them turned into rewarding projects for me. You can derive a certain
satisfaction from taking over a chaotic situation and gradually steering it toward sanity.
So if you find yourself thrown into this situation, smile, roll up your sleeves, own
it, and make the best of it.
Related resources
Tools at your disposal
SubMain offers CodeIt.Right, which
easily integrates into Visual Studio for a flexible and intuitive automated code review
solution that works in real time, on demand, at source control check-in, or as part
of your build.
Learn more about how CodeIt.Right can identify technical debt, document it, and gradually improve
legacy code.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich

|
-
If you spend enough years writing software, sooner or later, your chosen vocation
will force you into reverse engineering. Some weird API method with an inscrutable
name will stymie you. And you'll have to plug in random inputs and examine the
outputs to figure out what it does.
Clearly,
this wastes your time. Even if you enjoy the detective work, you can't argue
that an employer or client would view this as efficient. Library and API code
should not require you to launch a mystery investigation to determine what it does.
Instead, such code should come with appropriate documentation. This documentation
should move your focus from wondering what the code does to contemplating how best
to leverage it. It should make your life easier.
But what constitutes appropriate documentation? What particular characteristics
does it have? In this post, I'd like to lay out some elements of helpful code
documentation.
Elements of Style
Before moving on to what the documentation should contain, I will speak first about
its stylistic properties. After all, poorly written documentation can tank understanding,
even if it theoretically contains everything it should. If you're going to write
it, make it good.
Now don't get me wrong -- I'm not suggesting you should invest enough time to make
it a literary masterpiece. Instead, focus on three primary characteristics of
good writing: clarity, correctness, and precision. You want to make sure that
readers understand exactly what you're talking about. And, obviously, you cannot
get anything wrong.
The importance of this goes beyond just the particular method in question. It
affects your entire credibility with your userbase. If you confuse them with
ambiguity or, worse, get something wrong, they will start to mistrust you. The
documentation becomes useless to them and your reputation suffers.
Examples
Once you've gotten your house in order with stylistic concerns in the documentation,
you can decide on what to include. First up, I cannot overstate the importance
of including examples.
Whether you find yourself documenting a class, a method, a web service call, or anything
else, provide examples. Show the users the code in action and let them
apply their pattern matching and deduction skills. In case you hadn't noticed,
programmers tend to have these in spades.
Empathize with the users of your code. When you find yourself reading manuals
and documentation, don't you look for examples? Don't you prefer to grab them
and tweak them to suit your current situation? So do the readers of your documentation.
Oblige them. (See <example />.)
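In C# XML doc comments, that might look like the following sketch. I invented the ConnectionParser class strictly for illustration.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class ConnectionParser
    {
        /// <summary>
        /// Parses a connection string into its component parts.
        /// </summary>
        /// <example>
        /// A typical call, ready to copy and tweak:
        /// <code>
        /// var parts = ConnectionParser.Parse("host=db01;port=5432");
        /// Console.WriteLine(parts["host"]); // prints "db01"
        /// </code>
        /// </example>
        public static Dictionary<string, string> Parse(string connectionString)
        {
            return connectionString
                .Split(';')
                .Select(pair => pair.Split('='))
                .ToDictionary(kv => kv[0], kv => kv[1]);
        }
    }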
Conditions
Next up, I'll talk about the general consideration of "conditions." By this,
I mean three basic types of conditions: preconditions,
postconditions, and invariants.
Let me define these in broad terms so that you understand what I mean. Respectively,
preconditions, postconditions, and invariants are things that must be true before
your code executes, things that must be true after it executes, and things that must
remain true throughout.
Documenting this information for your users saves them trial and error misery.
If you leave this out, they may have to discover for themselves that the method won't
accept a null parameter or that it never returns a positive number. Spare them
that trial and error experimentation and make this clear. By telling them explicitly,
you help them determine up front whether this code suits their purpose or not.
(See <remarks /> and <note />.)
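Here is a small sketch of how those conditions might read in practice, using an invented Mean method:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class Statistics
    {
        /// <summary>
        /// Computes the arithmetic mean of the supplied values.
        /// </summary>
        /// <param name="values">Must be non-null and non-empty (precondition).</param>
        /// <returns>The mean; finite whenever the inputs are finite (postcondition).</returns>
        /// <remarks>
        /// <note>The input collection is never modified (invariant).</note>
        /// </remarks>
        public static double Mean(IReadOnlyList<double> values)
        {
            if (values == null || values.Count == 0)
                throw new ArgumentException("values must be non-null and non-empty.");
            return values.Sum() / values.Count;
        }
    }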
Related Elements
Moving out from core principles a bit, let's talk about some important meta-information.
People don't always peruse your documentation in "lookup" mode, wanting help about
a code element whose name they already know. Instead, sometimes they will 'surf'
the documentation, brainstorming the best way to tackle a problem.
For instance, imagine that you want to design some behavior around a collection type.
Familiar with List, you look that up, but then maybe you poke around to see what inherits
from the same base or implements the same interface. By doing this, you hope
to find the perfect collection type to suit your needs.
Make this sort of thing easy on readers of your documentation by offering a concept
of "related" elements. Listing OOP classes in the same hierarchy represents
just one example of what you might do. You can also list all elements with a
similar behavior or a similar name. You will have to determine for yourself
what related elements make sense based on context. Just make sure to include
them, though. (See <seealso />.)
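The syntax for this costs you almost nothing. For instance, with an invented BoundedQueue class:

    /// <summary>
    /// A fixed-capacity queue that evicts the oldest item when full.
    /// </summary>
    /// <seealso cref="System.Collections.Generic.Queue{T}"/>
    /// <seealso cref="System.Collections.Concurrent.ConcurrentQueue{T}"/>
    public class BoundedQueue<T>
    {
        // ...
    }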
Pitfalls and Gotchas
Last, I'll mention an oft-overlooked property of documentation. Most commonly,
you might see this when looking at the documentation for some API call. Often,
it takes the form of "exceptions thrown" or "possible error codes."
But I'd like to generalize further here to "pitfalls and gotchas." Listing out
error codes and exceptions is great because it lets users know what to expect when
things go off the rails. But these aren't the only ways that things can go wrong,
nor are they the only things of which users should be aware.
Take care to list anything out here that might violate the principle
of least surprise or that could trip people up. This might include things
like, "common ways users misuse this method" or "if you get output X, check that you
set Y correctly." You can usually populate this section pretty easily whenever
a user struggles with the documentation as-is.
Wherever you get the pitfalls, just be sure to include them. Believe it or not,
this kind of detail can make the difference between adequate and outstanding documentation.
Few things impress users as much as you anticipating their questions and needs.
(See <exception />, <returns />, and <remarks />.)
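Concretely, a pitfalls section might shape up like this sketch, with an invented LoadConfiguration method:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public static class ConfigLoader
    {
        /// <summary>
        /// Loads and parses the configuration file at the given path.
        /// </summary>
        /// <returns>The parsed configuration; never null.</returns>
        /// <exception cref="FileNotFoundException">
        /// Thrown when no file exists at the given path.
        /// </exception>
        /// <remarks>
        /// <note type="caution">
        /// Gotcha: relative paths resolve against the current working directory,
        /// not the executable's directory.
        /// </note>
        /// </remarks>
        public static IDictionary<string, string> LoadConfiguration(string path)
        {
            if (!File.Exists(path))
                throw new FileNotFoundException("No configuration file found.", path);
            return File.ReadAllLines(path)
                .Select(line => line.Split('='))
                .ToDictionary(kv => kv[0], kv => kv[1]);
        }
    }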
Documentation Won't Fix Bad Code
In closing, I would like to offer a thought that returns to the code itself.
Writing good documentation is critically important for anyone whose code will be consumed
by others -- especially those selling their code. But it all goes for naught
should you write bad or buggy code, or should your API present a mess to your users.
Thus I encourage you to apply the same scrutiny to the usability of your API that
I have just encouraged you to do for your documentation. Look to ensure that
you offer crisp, clear abstractions. Name code elements appropriately.
Avoid surprises to your users.
Over the last decade or so, organizations like Apple have moved us away from hefty
user manuals in favor of "discoverable" interfaces. Apply the same principle
to your code. I tell you this not to excuse you from documentation, but to help
you make your documentation count. When your clean API serves as part of your
documentation, you will write less of it, and what you do write will have higher value
to readers.
Learn more about how GhostDoc can help simplify your XML Comments and produce
and maintain quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
The balance among types of feedback drives some weird interpersonal dynamics.
For instance, consider the rather trite (if effective) management technique of the
"compliment sandwich." Managers with a negative piece of feedback precede and
follow that feedback with compliments. In that fashion, the compliments form
the "bun."
Different people and different groups have their preferences for how to handle this.
While some might bend over backward for diplomacy, others prefer environments where
people hurl snipes at one another and simply consider it "passionate debate."
I have no interest in arguing for any particular approach -- only in pointing out the
variety. As it turns out, we humans find this subject thorny.
To some extent, this complicated situation extends beyond human boundaries and into
automated systems. While we might not take quite the same umbrage as we would
with humans, we still get frustrated. If you doubt this, I challenge you to
tell me that you have never yelled at a compiler because you were sure your code had
no errors. I thought so.
So from this perspective, I can understand the frustration with static analysis feedback.
Often, when you decide to enable a new static analysis engine or linting tool on a
codebase, the feedback overwhelms. Seeing 28,326 issues in the code can demoralize anyone.
And so the temptation emerges to recoil from this feedback and turn off the tool.
But should you do this? I would argue that usually, you should not. But
situations do exist when disabling a static analyzer makes sense. Today, I'll
walk through some examples of times you might suppress such a warning.
False Positives
For the first example, I'll present something of a no-brainer. However, I will
also present a caveat to balance things.
If your static analysis tool presents you with a false positive, then you should suppress
that instance of the false positive. (No sense throwing the baby out with the
bathwater and suppressing the entire rule). Assuming that you have a true false
positive, the analysis warning simply constitutes noise and not signal. Get
rid of it.
That being said, take care with labeling warnings as false positives. False
positive means that the tool has indicated a problem and a potential error and gotten
it wrong. False positive does not mean that you disagree with the warning or
don't care. The tool's wrongness is a good reason to suppress -- your not liking
its prognosis falls short of that.
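In .NET, for example, you can suppress the single flagged occurrence and record your reasoning, rather than disabling the rule. A sketch, with an invented class and justification:

    using System.Diagnostics.CodeAnalysis;
    using System.IO;

    public class ReportBuilder
    {
        // Suppresses this one occurrence -- not the rule -- and says why.
        [SuppressMessage("Microsoft.Reliability",
            "CA2000:DisposeObjectsBeforeLosingScope",
            Justification = "Ownership transfers to the caller, which disposes it.")]
        public StreamReader OpenReport(string path)
        {
            return new StreamReader(path);
        }
    }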
Non-Applicable Code
For the second kind of instance, I'll use the term "non-applicable code." This
describes code for which you have no interest in static analysis warnings. While
this may sound contradictory to the last point, it differs subtly.
You do not control all code in your codebase, and not all code demands the same level
of scrutiny about the same concepts. For example, do you have code in your codebase
driven by a framework? Many frameworks force some sort of inheritance scheme
on you or the implementation of an interface. If the name of a method on a third
party interface violates a naming convention, you need not be dinged by your tool
for simply implementing it.
In general, you'll find warnings that do not universally apply. Test projects
differ from your production code. GUI projects differ from data access layer
ones. And NuGet packages or generated code remain entirely outside of your control.
Assuming the decision to use these things happened in the past, turning off the analysis
warnings makes sense.
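As one .NET-flavored sketch, you can scope a suppression to the framework-mandated member instead of killing the rule for the whole codebase. The namespace and method names here are invented:

    // In GlobalSuppressions.cs
    using System.Diagnostics.CodeAnalysis;

    [assembly: SuppressMessage("Microsoft.Naming",
        "CA1707:IdentifiersShouldNotContainUnderscores",
        Scope = "member",
        Target = "MyApp.Tests.BoardTests.#MovePiece_Throws_On_Null()",
        Justification = "Test names follow the team's underscore convention.")]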
Cosmetic Code Counter to Your Team's Standard
So far, I've talked about the tool making a mistake and the tool getting things right
on the wrong code. This third case presents a thematically similar consideration.
Instead of a mistake or misapplication, though, this involves a misfit.
Many tools out there offer purely cosmetic concerns. They'll flag field variables
not prepended with underscores or methods with camel casing instead of Pascal casing.
Assuming those jibe with your team's standards, you have no issues. But if they
don't, you have two options: change the tool or change your standard. Generally
speaking, you probably want to err on the side of complying with broad standards.
But if your team is set with your standard, then turn off those warnings or configure
the tool.
When You're Buried in Warnings
Speaking of warnings, I'll offer another point that relates to them, but with an entirely
different theme. When your team is buried in warnings, you need to take action.
Before I talk about turning off warnings, however, consider fixing them en masse.
It may seem daunting, but I suspect that you might find yourself surprised at how
quickly you can wrangle them down to a manageable number.
However, if this proves too difficult or time-consuming, consider force ranking the
warnings, and (temporarily) turning off all except the top, say, 200. Make it
part of your team's work to eliminate those, and then enable the next 200. Keep
at it until you eliminate the warnings. And remember, in this case, you're disabling
warnings only temporarily. Don't forget about them.
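If you happen to use Visual Studio rulesets, that staged approach might look something like the following. Treat the rule IDs as placeholders for whatever tops your own force ranking:

    <RuleSet Name="Triage" Description="First batch only" ToolsVersion="14.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
             RuleNamespace="Microsoft.Rules.Managed">
        <Rule Id="CA2000" Action="Warning" /> <!-- in the current batch -->
        <Rule Id="CA1062" Action="Warning" /> <!-- in the current batch -->
        <Rule Id="CA1707" Action="None" />    <!-- parked, not forgotten -->
      </Rules>
    </RuleSet>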
When You Have an Intelligent Disagreement
Last up comes the most perilous reason for turning off static analysis warnings.
This one also happens to occur most frequently, in my experience. People turn
them off because they know better than the static analysis tool.
Let's stop for a moment and contemplate this. Teams of workaday developers out
there tend to blithely conclude that they know their business. In fact, they
know their business better than people whose job it is to write static analysis tools
that generate these warnings. Really? Do you like those odds?
Below the surface, disagreement with the tool often masks resentment at being called
"wrong" or "non-compliant." Turning the warnings off thus becomes a matter of
pride or mild laziness. Don't go this route.
If you want to ignore warnings because you believe them to be wrong, do research first.
Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported
argument as to why you should do so.
When in Doubt, Leave 'em On
In this post, I have gingerly walked through scenarios in which you may want to turn
off static analysis warnings and guidance. For me, this exercise produces some
discomfort because I rarely find this advisable. My default instinct is thus
not to encourage such behavior.
That said, I cannot deny that you will encounter instances where this makes sense.
But whatever you do, avoid letting this become common or, worse, your default.
If you have the slightest bit of doubt, leave them on. Put your trust
in the vendors of these tools -- they know their business. And steering you
in bad directions is bad for business.
Learn more about how CodeIt.Right can automate your team standards, make it easy
to ignore specific guidance violations, and keep track of them.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
More years ago than I'd care to admit, I took a software engineering course as part
of my graduate CS program. At the time, I worked a full-time job during the
day and did remote classes in the evening. As a result, I disproportionately
valued classes with applicability to my job. And this class offered plenty of
that.
We scratched the surface on such diverse topics as agile methodologies, automated
testing, cost of code ownership, and more. But I found myself perhaps most interested
by the dive we did into refactoring. The idea of reworking the internal structure
of code while preserving inputs and outputs is a surprisingly complex one.
Historical Complexity of Refactoring
At the risk of dating myself, I took this course in the fall of 2006. While
automated refactorings in your IDE now seem commonplace, back then, they were hard.
In fact, the professor of the course considered them to be sufficiently difficult
as to steer a group of mine away from a project implementing some. In the world
of 2006, I suspect he had the right of it. We steered clear.
In 2016, implementing automated refactorings still presents a challenge.
But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.
Back then? Not so much.
Refactorings present a unique challenge to tool vendors because of the inherent risk.
They can really screw up users' code. If a mistake happens, the best-case scenario
is that the resultant code fails to compile because then, at least, it fails fast.
Worse still is code that compiles and runs but somehow behaves improperly.
In this situation, a refactoring -- a safe change to code -- becomes a modification
to the behavior of production code instead. Ouch.
On top of the risk, the implementation of refactoring anywhere beyond the trivial
involves heady concepts such as abstract syntax trees. In other words, it's
not for lightweights. So to recap, refactoring is risky and difficult.
And this is the landscape faced by tool authors.
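For a small taste of what tool authors contend with, consider that even "find all the methods in this file" requires parsing the source into a syntax tree first. A minimal sketch using Roslyn (the Microsoft.CodeAnalysis.CSharp package):

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class AstDemo
    {
        static void Main()
        {
            var tree = CSharpSyntaxTree.ParseText(
                "class Board { void MovePiece() { } int Score() { return 0; } }");

            // Every refactoring starts from a structure like this, not raw text.
            var methods = tree.GetRoot()
                              .DescendantNodes()
                              .OfType<MethodDeclarationSyntax>();

            foreach (var method in methods)
                Console.WriteLine(method.Identifier.Text); // MovePiece, Score
        }
    }

And finding methods is the easy part; a rename or an extraction must then rewrite that tree without disturbing anything else.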
I Don't Fix -- I Just Flag
If you live in the US, you may have seen a commercial that features a funny quip.
If I'm not mistaken, it advertises for some sort of fraud prevention services.
(Pardon any slight inaccuracies, as I recount this as best I can, from memory.)
In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.
Off to the side, a woman stands near a security guard, asking him why he didn't do
anything to stop it. "I'm not a robbery prevention service -- I'm a robbery monitoring service.
Oh, by the way, there's a robbery."
It brings a chuckle, but it also brings an underlying point. In many situations,
monitoring alone can prove woefully ineffective, prompting frustration. As a
former manager and current consultant, I generally advise people that they should
only point out problems when they have also prepared proposed solutions. It
can mean the difference between complaining and solving.
So you can imagine and probably share my frustration at tools that just flag problems
and leave it to you to investigate further and fix them. We feel like the woman
standing next to the "robbery monitor," wondering how useful the service is to us.
Levels of Solution
Going back to the subject of software development, we see this dynamic in a number
of places. The compiler, the IDE, productivity add-ins, static analysis tools,
and linting utilities all offer us warnings to heed.
Often, that's all we get. The utility says, "hey, something is wrong here, but
you're going to have to figure out what." I tend to think of that as the basic
level of service, or level 0, if you will.
The next level, level 1, involves at least offering some form of next action.
It might be as simple as offering a help file, inline reading, or a link to more information.
Anything above "this is a problem."
Level 2 ups the ante by offering a recommendation for what to do next.
"You have a dependency cycle. You should fix this by looking at these three
components and removing one mutual dependency." It goes beyond giving you a
next thing to do and gives you the next thing to do.
Level 3 rounds out the field by actually performing the action for you (following
a prompt, of course). "You've accidentally hidden a method on the parent class.
Click here to rename or click here to make parent virtual." That's just an example
off the top, of course, but it illustrates the interaction paradigm. "We've
noticed a problem, and you can click here to fix it."
Fixes in Your Tooling
When
evaluating your own tools, look to climb as high up this hierarchy as you can.
Favor tools that not only identify problems but also offer fixes whenever possible.
There are a number of such tools out there, including CodeIt.Right.
Using tools like this is a pleasure because it removes the burden of research and
implementation from you. Well, you can always do the research if you want, but
at your own leisure. But it's much better to do research at your leisure than
when you're trying to accomplish something else.
The other important concern here is that you find trusted tooling to help you with
this sort of thing. After all, you don't want something messing with your source
code if it might mess up your source code. But, assuming you can trust it, this
provides an invaluable boost to your effectiveness by automatically resolving your
problems and by helping you learn.
In the year 2016, we have far more tooling available, with a far better track record,
than we did in 2006. Leverage it whenever possible so that you can focus on
solving the pressing problems of your day to day work.
Tools at your disposal
SubMain offers CodeIt.Right, which integrates into Visual Studio as a flexible and
intuitive "we've noticed a problem, and you can click here to fix it" solution.
Learn more about how CodeIt.Right can automate your team standards and improve code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Before I get down to the brass tacks of how to do some interesting stuff, I'm going
to spin a tale of woe. Well, I might have phrased that a little strongly.
Call it a tale of corporate drudgery.
In any case, many years ago I worked briefly in a little department, at a little company
that seemed to be a corporate drudgery factory. Oh, the place and people weren't
terrible. But the work consisted of, well, drudgery. We 'consulted' in
the sense that we cranked out software for other companies, for pay. Our software
plumbed the lines of business between client CRMs and ERPs or whatever. We would
write the software, then finish the software, then hand the software over, source
code and all.
Naturally, commenting our code and compliance with the coding standard attained crucial
importance. Why? Well, no practical reason. It was just that clients
would see this code. So it needed to look professional. Or something.
It didn't matter what the comments said. It didn't matter if the standard made
sense. Compliance earned you a gold star and a move onto the next project.
As I surveyed the scene surrounding me, I observed a mountain of vacuous comments
and dirty, but uniform code.
My Complex Relationship with Code Comments
My brief stay with (and departure from) this organization coincided with my growing
awareness of the Software Craftsmanship movement. Even as they copied and pasted
their way toward deadlines and wrote comments announcing that while(x < 6) would
proceed while x was less than 6, I became interested in the idea of self-documenting
code.
Up to that point, I had diligently commented each method, file, and type I encountered.
In this regard, I looked out for fellow and future programmers. But after one
too many occasions of watching my own comments turn into lies when someone changed
the code without changing the comments, I gave up. I stopped commenting my code,
focusing entirely on extractions, refactoring, and making my code as legible as possible.
I achieved an equilibrium of sorts. In this fashion, I did less work and stopped
seeing my comments become nasty little fibs. But a single, non-subtle flaw remained
in this absolutist approach. What about documentation of a public (or internal)
API?
Naturally, I tried to apply the craftsmanship-oriented reasoning unilaterally.
Just make the public API so discoverable as to render the issue moot. But that
never totally satisfied me because I still liked my handy help screens and IntelliSense
info when consuming others' code.
And so I came to view XML doc comments on public methods as an exception. These,
after all, did not represent "comments." They came packaged with your deliverables
as your product. And I remain comfortable with that take today.
Generating Help More Efficiently
Now, my nuanced, evolved view doesn't automatically mean I'll resume laboriously hand-typing
XML comments. Early in my career, a sort of sad pride in this "work harder,
not smarter" approach characterized my development. But who has time for that
anymore?
Instead, with a little bit of investment in learning and tooling, you can do some
legitimately cool stuff. Let me take you through a nifty sequence of steps that
you may come to love.
GhostDoc Enterprise
First up, take a look at the
GhostDoc Enterprise offering. Among other things, this product
lets you quickly generate XML comments, customize the default generation template,
spell check your code, generate help documentation and more. Poking through
all that alone will probably take some time out of your day. You should download
and play with the product.
Once you are done with that, though, consider how you might get more efficient at
beefing up your API. For the rest of this post, I will use as an example my
Chess TDD project. I use this as a toy codebase for all kinds of demos.
I never commented this codebase, nor did I generate any kind of documentation for
it. Why? I intended it solely as a teaching tool for test-driven development,
and never packaged it for others' consumption. Let's change that today.
Adding Comments
Armed with GhostDoc Enterprise, I will first generate some comments. The Board class
makes a likely candidate since that offers theoretical users the most value.
First up, I need to add XML doc comments to the file. I can do this by right-clicking
in the file and selecting "Document Type" from the GhostDoc Enterprise context
menu. Here's what the result looks like.
The default template offers a pretty smart guess at intent, based on good variable
naming. For my fellow clean code enthusiasts out there, you can even check how
self-documenting your code is by the quality of the comments GhostDoc creates.
But still, you probably want to take a human pass through, checking and tweaking where
needed.
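To give you a flavor, the shape of a template-generated comment looks something like this. Take it as a hypothetical illustration of the style, not GhostDoc's verbatim output:

    /// <summary>
    /// Gets the piece at the specified board coordinate.
    /// </summary>
    /// <param name="boardCoordinate">The board coordinate.</param>
    /// <returns>The piece at that coordinate.</returns>
    public Piece GetPiece(BoardCoordinate boardCoordinate)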
Building Help Documentation
All right. With comments in place for the public facing API of my little project,
we can move on to the actual documentation. Again, easy enough. Select
"Tools -> GhostDoc Enterprise -> Build Help Documentation" from the main menu.
You'll see this screen.
Notice that you have a great deal of control over the particulars. Going into
detail here is beyond the scope of my post, but you can certainly play around.
I'll take the defaults and build a CHM help file. Once I click "OK", here's
what I see (once I go to the board class).
Pretty slick, huh? Seriously. With just a few clicks, you get intelligently
commented public methods and a professional-looking help file. (You can also
have this as web-style documentation if you want). Obviously, I'd want to do
some housekeeping here if I were selling this, but it does a pretty good job even
with zero intervention from me.
Do It From the Build
Only one bit of automation remains at this point. And that's the generation
of this documentation from the build. Fortunately, GhostDoc Enterprise makes
that simple as well.
Any build system worth its salt will, of course, let you hook command line invocations
into your build. GhostDoc Enterprise offers one up for just this occasion.
You can read a
succinct guide on that right here. With a single command, you can point
it at your solution, a help configuration, and a project configuration, and generate
the help file. Putting it where you want is then easy enough.
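As one generic way to wire that in, you could add an MSBuild target along these lines. The Exec command below is a deliberate placeholder; the actual GhostDoc Enterprise invocation and arguments live in the guide linked above.

    <!-- In your .csproj or build script -->
    <Target Name="GenerateHelp" AfterTargets="Build"
            Condition="'$(Configuration)' == 'Release'">
      <!-- Placeholder: substitute the documented GhostDoc command line here. -->
      <Exec Command="echo TODO: invoke GhostDoc Enterprise here" />
    </Target>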
Tying this in with an automated build or CI setup really ties everything together,
including the theme of this post. Automating the generation of clean, helpful
documentation of your clean code, building it, and packaging it up all without human
intervention pretty much represents the pinnacle of delivering a professional product.
Learn more about how GhostDoc can help simplify your XML Comments and produce
and maintain quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|