October 2016 - Posts
-
If you spend enough years writing software, sooner or later, your chosen vocation
will force you into reverse engineering. Some weird API method with an inscrutable
name will stymie you. And you'll have to plug in random inputs and examine the
outputs to figure out what it does.
Clearly,
this wastes your time. Even if you enjoy the detective work, you can't argue
that an employer or client would view this as efficient. Library and API code
should not require you to launch a mystery investigation to determine what it does.
Instead, such code should come with appropriate documentation. This documentation
should move your focus from wondering what the code does to contemplating how best
to leverage it. It should make your life easier.
But what constitutes appropriate documentation? What particular characteristics
does it have? In this post, I'd like to lay out some elements of helpful code
documentation.
Elements of Style
Before moving on to what the documentation should contain, I will speak first about
its stylistic properties. After all, poorly written documentation can tank understanding,
even if it theoretically contains everything it should. If you're going to write
it, make it good.
Now don't get me wrong -- I'm not suggesting you should invest enough time to make
it a literary masterpiece. Instead, focus on three primary characteristics of
good writing: clarity, correctness, and precision. You want to make sure that
readers understand exactly what you're talking about. And, obviously, you cannot
get anything wrong.
The importance of this goes beyond just the particular method in question. It
affects your entire credibility with your userbase. If you confuse them with
ambiguity or, worse, get something wrong, they will start to mistrust you. The
documentation becomes useless to them and your reputation suffers.
Examples
Once you've gotten your house in order with stylistic concerns in the documentation,
you can decide on what to include. First up, I cannot overstate the importance
of including examples.
Whether you find yourself documenting a class, a method, a web service call, or anything
else, provide examples. Show the users the code in action and let them
apply their pattern matching and deduction skills. In case you hadn't noticed,
programmers tend to have these in spades.
Empathize with the users of your code. When you find yourself reading manuals
and documentation, don't you look for examples? Don't you prefer to grab them
and tweak them to suit your current situation? So do the readers of your documentation.
Oblige them. (See <example />)
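To make this concrete, here is a minimal sketch of what that looks like in C# XML doc comments. (DateCalculator and AddDays are made-up names for illustration -- the point is the <example> and <code> tags.)

    using System;

    public static class DateCalculator
    {
        /// <summary>
        /// Adds the specified number of days to a start date.
        /// </summary>
        /// <example>
        /// This shows a typical call and its result:
        /// <code>
        /// var due = DateCalculator.AddDays(new DateTime(2016, 10, 3), 5);
        /// // due is October 8, 2016
        /// </code>
        /// </example>
        public static DateTime AddDays(DateTime start, int days)
        {
            return start.AddDays(days);
        }
    }

Readers can lift that snippet straight out of the help page and tweak it to suit their own situation, which is exactly the behavior you want to encourage.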
Conditions
Next up, I'll talk about the general consideration of "conditions." By this,
I mean three basic types of conditions: preconditions,
postconditions, and invariants.
Let me define these in broad terms so that you understand what I mean. Respectively,
preconditions, postconditions, and invariants are things that must be true before
your code executes, things that must be true after it executes, and things that must
remain true throughout.
Documenting this information for your users saves them trial and error misery.
If you leave this out, they may have to discover for themselves that the method won't
accept a null parameter or that it never returns a positive number. Spare them
that trial and error experimentation and make this clear. By telling them explicitly,
you help them determine up front whether this code suits their purpose or not. (See <remarks /> and <note />)
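Here is a hedged sketch of how you might spell out those conditions with <remarks> and <note>. (The OrderParser class and its rules are invented for the sake of the example.)

    using System;

    public static class OrderParser
    {
        /// <summary>
        /// Counts the line items in raw order text.
        /// </summary>
        /// <param name="rawOrder">The raw order text. Must not be null, empty, or whitespace.</param>
        /// <returns>The number of line items; never negative.</returns>
        /// <remarks>
        /// Precondition: <paramref name="rawOrder"/> contains at least one non-whitespace character.
        /// Postcondition: the return value is zero or greater.
        /// <note type="caution">Whitespace-only input counts as empty and triggers an exception.</note>
        /// </remarks>
        public static int CountLineItems(string rawOrder)
        {
            if (string.IsNullOrWhiteSpace(rawOrder))
                throw new ArgumentException("Order text is required.", nameof(rawOrder));

            // Each non-empty line represents one line item.
            return rawOrder.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries).Length;
        }
    }

A reader scanning that comment knows immediately whether the method fits, without ever feeding it a null to see what happens.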
Related Elements
Moving out from core principles a bit, let's talk about some important meta-information.
People don't always peruse your documentation in "lookup" mode, wanting help about
a code element whose name they already know. Instead, sometimes they will "surf"
the documentation, brainstorming the best way to tackle a problem.
For instance, imagine that you want to design some behavior around a collection type.
Familiar with List, you look that up, but then maybe you poke around to see what inherits
from the same base or implements the same interface. By doing this, you hope
to find the perfect collection type to suit your needs.
Make this sort of thing easy on readers of your documentation by offering a concept
of "related" elements. Listing OOP classes in the same hierarchy represents
just one example of what you might do. You can also list all elements with a
similar behavior or a similar name. You will have to determine for yourself
what related elements make sense based on context. Just make sure to include
them, though. (See <seealso />)
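In C# XML doc comments, the <seealso> tag handles this nicely. A small, hypothetical sketch (MoveHistory is an invented type):

    using System.Collections.Generic;

    /// <summary>
    /// A last-in, first-out record of chess moves.
    /// </summary>
    /// <seealso cref="Stack{T}"/>
    /// <seealso cref="Queue{T}"/>
    /// <seealso cref="List{T}"/>
    public class MoveHistory
    {
        // Implementation omitted; the point here is the related-elements links.
    }

Documentation generators turn those cref values into links, so someone surfing for the right collection type can hop from one candidate to the next.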
Pitfalls and Gotchas
Last, I'll mention an oft-overlooked property of documentation. Most commonly,
you might see this when looking at the documentation for some API call. Often,
it takes the form of "exceptions thrown" or "possible error codes."
But I'd like to generalize further here to "pitfalls and gotchas." Listing out
error codes and exceptions is great because it lets users know what to expect when
things go off the rails. But these aren't the only ways that things can go wrong,
nor are they the only things of which users should be aware.
Take care to list anything out here that might violate the principle
of least surprise or that could trip people up. This might include things
like, "common ways users misuse this method" or "if you get output X, check that you
set Y correctly." You can usually populate this section pretty easily whenever
a user struggles with the documentation as-is.
Wherever you get the pitfalls, just be sure to include them. Believe it or not,
this kind of detail can make the difference between adequate and outstanding documentation.
Few things impress users as much as you anticipating their questions and needs. (See <exception />, <returns />, and <remarks />)
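As a sketch of how that might read in practice (ConfigReader and its key name are hypothetical):

    using System;
    using System.IO;

    public static class ConfigReader
    {
        /// <summary>
        /// Reads the connection string from the given configuration file.
        /// </summary>
        /// <returns>The connection string, or null when the file defines none.</returns>
        /// <exception cref="FileNotFoundException">Thrown when the file does not exist.</exception>
        /// <remarks>
        /// Gotcha: relative paths resolve against the current working directory,
        /// not the application directory. If you get null, check that the file
        /// spells the key exactly as "connectionString".
        /// </remarks>
        public static string ReadConnectionString(string path)
        {
            if (!File.Exists(path))
                throw new FileNotFoundException("Configuration file not found.", path);

            foreach (var line in File.ReadAllLines(path))
            {
                if (line.StartsWith("connectionString=", StringComparison.OrdinalIgnoreCase))
                    return line.Substring("connectionString=".Length);
            }

            return null;
        }
    }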
Documentation Won't Fix Bad Code
In closing, I would like to offer a thought that returns to the code itself.
Writing good documentation is critically important for anyone whose code will be consumed
by others -- especially those selling their code. But it all goes for naught
should you write bad or buggy code, or should your API present a mess to your users.
Thus I encourage you to apply the same scrutiny to the usability of your API that
I have just encouraged you to do for your documentation. Look to ensure that
you offer crisp, clear abstractions. Name code elements appropriately.
Avoid surprises to your users.
Over the last decade or so, organizations like Apple have moved us away from hefty
user manuals in favor of "discoverable" interfaces. Apply the same principle
to your code. I tell you this not to excuse you from documentation, but to help
you make your documentation count. When your clean API serves as part of your
documentation, you will write less of it, and what you do write will have higher value
to readers.
Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
The balance among types of feedback drives some weird interpersonal dynamics.
For instance, consider the rather trite (if effective) management technique of the
"compliment sandwich." Managers with a negative piece of feedback precede and
follow that feedback with compliments. In that fashion, the compliments form
the "bun."
Different people and different groups have their preferences for how to handle this.
While some might bend over backward for diplomacy, others prefer environments where
people hurl snipes at one another and simply consider it "passionate debate."
I have no interest in arguing for any particular approach -- only in pointing out the
variety. As it turns out, we humans find this subject thorny.
To some extent, this complicated situation extends beyond human boundaries and into
automated systems. While we might not take quite the same umbrage as we would
with humans, we still get frustrated. If you doubt this, I challenge you to
tell me that you have never yelled at a compiler because you were sure your code had
no errors. I thought so.
So from this perspective, I can understand the frustration with static analysis feedback.
Often, when you decide to enable a new static analysis engine or linting tool on a
codebase, the feedback overwhelms. 28,326 issues in the code can demoralize anyone.
And so the temptation emerges to recoil from this feedback and turn off the tool.
But should you do this? I would argue that usually, you should not. But
situations do exist when disabling a static analyzer makes sense. Today, I'll
walk through some examples of times you might suppress such a warning.
False Positives
For the first example, I'll present something of a no-brainer. However, I will
also present a caveat to balance things.
If your static analysis tool presents you with a false positive, then you should suppress
that instance of the false positive. (No sense throwing the baby out with the
bathwater and suppressing the entire rule). Assuming that you have a genuine false positive, the analysis warning simply constitutes noise and not signal. Get
rid of it.
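In the .NET world, for instance, you might suppress the single offending instance with the SuppressMessage attribute rather than disabling the rule everywhere. A sketch, assuming a rule like CA1823 fires on a field that your framework actually reads via reflection (the type and justification below are invented):

    using System.Diagnostics.CodeAnalysis;

    public class ReportGenerator
    {
        // The analyzer flags this field as unused, but the reporting framework
        // reads it via reflection -- a genuine false positive.
        [SuppressMessage("Microsoft.Performance", "CA1823:AvoidUnusedPrivateFields",
            Justification = "Read via reflection by the report templating engine.")]
        private readonly string _templateName = "quarterly";
    }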
That being said, take care with labeling warnings as false positives. False
positive means that the tool has indicated a problem and a potential error and gotten
it wrong. False positive does not mean that you disagree with the warning or
don't care. The tool's wrongness is a good reason to suppress -- your not liking its prognosis falls short of that.
Non-Applicable Code
For the second kind of instance, I'll use the term "non-applicable code." This
describes code for which you have no interest in static analysis warnings. While
this may sound contradictory to the last point, it differs subtly.
You do not control all code in your codebase, and not all code demands the same level
of scrutiny about the same concepts. For example, do you have code in your codebase
driven by a framework? Many frameworks force some sort of inheritance scheme
on you or the implementation of an interface. If the name of a method on a third-party interface violates a naming convention, you need not be dinged by your tool
for simply implementing it.
In general, you'll find warnings that do not universally apply. Test projects
differ from your production code. GUI projects differ from data access layer
ones. And NuGet packages or generated code remain entirely outside of your control.
Assuming the decision to use these things happened in the past, turning off the analysis
warnings makes sense.
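One hedged example of what this can look like in .NET: a project-level suppression file that waves off a naming rule for a member whose name a third-party interface dictates. (The rule ID, scope format, and names below are illustrative, not a prescription.)

    // GlobalSuppressions.cs
    using System.Diagnostics.CodeAnalysis;

    // The third-party plugin interface dictates the underscored method name,
    // so the naming rule does not apply to our implementation of it.
    [assembly: SuppressMessage("Microsoft.Naming",
        "CA1707:IdentifiersShouldNotContainUnderscores",
        Scope = "member",
        Target = "MyCompany.Integration.PluginAdapter.#do_work()",
        Justification = "Name is dictated by a third-party interface.")]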
Cosmetic Code Counter to Your Team's Standard
So far, I've talked about the tool making a mistake and the tool getting things right
on the wrong code. This third case presents a thematically similar consideration.
Instead of a mistake or misapplication, though, this involves a misfit.
Many tools out there offer purely cosmetic concerns. They'll flag field variables
not prepended with underscores or methods with camel casing instead of Pascal casing.
Assuming those jibe with your team's standards, you have no issues. But if they
don't, you have two options: change the tool or change your standard. Generally
speaking, you probably want to err on the side of complying with broad standards.
But if your team is settled on its standard, then turn off those warnings or configure
the tool.
When You're Buried in Warnings
Speaking of warnings, I'll offer another point that relates to them, but with an entirely
different theme. When your team is buried in warnings, you need to take action.
Before I talk about turning off warnings, however, consider fixing them en masse.
It may seem daunting, but I suspect that you might find yourself surprised at how
quickly you can wrangle a manageable number.
However, if this proves too difficult or time-consuming, consider force ranking the
warnings, and (temporarily) turning off all except the top, say, 200. Make it
part of your team's work to eliminate those, and then enable the next 200. Keep
at it until you eliminate the warnings. And remember, in this case, you're disabling
warnings only temporarily. Don't forget about them.
When You Have an Intelligent Disagreement
Last up comes the most perilous reason for turning off static analysis warnings.
This one also happens to occur most frequently, in my experience. People turn
them off because they know better than the static analysis tool.
Let's stop for a moment and contemplate this. Teams of workaday developers out
there tend to blithely conclude that they know their business. In fact, they
know their business better than people whose job it is to write static analysis tools
that generate these warnings. Really? Do you like those odds?
Below the surface, disagreement with the tool often masks resentment at being called
"wrong" or "non-compliant." Turning the warnings off thus becomes a matter of
pride or mild laziness. Don't go this route.
If you want to ignore warnings because you believe them to be wrong, do research first.
Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported
argument as to why you should do so.
When in Doubt, Leave 'em On
In this post, I have gingerly walked through scenarios in which you may want to turn
off static analysis warnings and guidance. For me, this exercise produces some
discomfort because I rarely find this advisable. My default instinct is thus
not to encourage such behavior.
That said, I cannot deny that you will encounter instances where this makes sense.
But whatever you do, avoid letting this become common or, worse, your default.
If you have the slightest bit of doubt, leave them on. Put your trust
in the vendors of these tools -- they know their business. And steering you
in bad directions is bad for business.
Learn more about how CodeIt.Right can automate your team standards, make it easy to ignore specific guidance violations, and keep track of them.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
More years ago than I'd care to admit, I took a software engineering course as part
of my graduate CS program. At the time, I worked a full-time job during the
day and did remote classes in the evening. As a result, I disproportionately
valued classes with applicability to my job. And this class offered plenty of
that.
We scratched the surface on such diverse topics as agile methodologies, automated
testing, cost of code ownership, and more. But I found myself perhaps most interested
by the dive we did into refactoring. The idea of reworking the internal structure
of code while preserving inputs and outputs is a surprisingly complex one.
Historical Complexity of Refactoring
At the risk of dating myself, I took this course in the fall of 2006. While
automated refactorings in your IDE now seem commonplace, back then, they were hard.
In fact, the professor of the course considered them difficult enough that he steered a group of mine away from a project implementing some. In the world
of 2006, I suspect he had the right of it. We steered clear.
In 2016, implementing automated refactorings still presents a challenge.
But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.
Back then? Not so much.
Refactorings present a unique challenge to tool vendors because of the inherent risk.
They can really screw up users' code. If a mistake happens, the best-case scenario is that the resultant code fails to compile because then, at least, it fails fast. Worse still is syntactically correct code that compiles but somehow behaves improperly.
In this situation, a refactoring -- a safe change to code -- becomes a modification
to the behavior of production code instead. Ouch.
On top of the risk, the implementation of refactoring anywhere beyond the trivial
involves heady concepts such as abstract syntax trees. In other words, it's
not for lightweights. So to recap, refactoring is risky and difficult.
And this is the landscape faced by tool authors.
I Don't Fix -- I Just Flag
If you live in the US, you may have seen a commercial that features a funny quip.
If I'm not mistaken, it advertises for some sort of fraud prevention services.
(Pardon any slight inaccuracies, as I recount this as best I can, from memory.)
In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.
Off to the side, a woman stands near a security guard, asking him why he didn't do
anything to stop it. "I'm not a robbery prevention service -- I'm a robbery monitoring service.
Oh, by the way, there's a robbery."
It brings a chuckle, but it also brings an underlying point. In many situations,
monitoring alone can prove woefully ineffective, prompting frustration. As a
former manager and current consultant, I generally advise people that they should
only point out problems when they have also prepared proposed solutions. It
can mean the difference between complaining and solving.
So you can imagine and probably share my frustration at tools that just flag problems
and leave it to you to investigate further and fix them. We feel like the woman
standing next to the "robbery monitor," wondering how useful the service is to us.
Levels of Solution
Going back to the subject of software development, we see this dynamic in a number
of places. The compiler, the IDE, productivity add-ins, static analysis tools,
and linting utilities all offer us warnings to heed.
Often, that's all we get. The utility says, "hey, something is wrong here, but
you're going to have to figure out what." I tend to think of that as the basic
level of service, or level 0, if you will.
The next level, level 1, involves at least offering some form of next action.
It might be as simple as offering a help file, inline reading, or a link to more information.
Anything above "this is a problem."
Level 2 ups the ante by offering a recommendation for what to do next.
"You have a dependency cycle. You should fix this by looking at these three
components and removing one mutual dependency." It goes beyond giving you a
next thing to do and gives you the next thing to do.
Level 3 rounds out the field by actually performing the action for you (following
a prompt, of course). "You've accidentally hidden a method on the parent class.
Click here to rename or click here to make parent virtual." That's just an example
off the top, of course, but it illustrates the interaction paradigm. "We've
noticed a problem, and you can click here to fix it."
Fixes in Your Tooling
When evaluating your own tools, look to climb as high up this hierarchy as you can. Favor tools that not only identify problems but also offer fixes whenever possible.
There are a number of such tools out there, including CodeIt.Right.
Using tools like this is a pleasure because it removes the burden of research and
implementation from you. Well, you can always do the research if you want, but
at your own leisure. But it's much better to do research at your leisure than
when you're trying to accomplish something else.
The other important concern here is that you find trusted tooling to help you with
this sort of thing. After all, you don't want something messing with your source
code if it might mess up your source code. But, assuming you can trust it, this
provides an invaluable boost to your effectiveness by automatically resolving your
problems and by helping you learn.
In the year 2016, we have far more tooling available, with a far better track record,
than we did in 2006. Leverage it whenever possible so that you can focus on
solving the pressing problems of your day to day work.
Tools at your disposal
SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive "we've noticed a problem, and you can click here to fix it" solution.
Learn more about how CodeIt.Right can automate your team standards and improve code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Before I get down to the brass tacks of how to do some interesting stuff, I'm going
to spin a tale of woe. Well, I might have phrased that a little strongly.
Call it a tale of corporate drudgery.
In any case, many years ago I worked briefly in a little department, at a little company
that seemed to be a corporate drudgery factory. Oh, the place and people weren't
terrible. But the work consisted of, well, drudgery. We 'consulted' in
the sense that we cranked out software for other companies, for pay. Our software
plumbed the lines of business between client CRMs and ERPs or whatever. We would
write the software, then finish the software, then hand the software over, source
code and all.
Naturally, commenting our code and complying with the coding standard attained crucial
importance. Why? Well, no practical reason. It was just that clients
would see this code. So it needed to look professional. Or something.
It didn't matter what the comments said. It didn't matter if the standard made
sense. Compliance earned you a gold star and a move onto the next project.
As I surveyed the scene surrounding me, I observed a mountain of vacuous comments
and dirty, but uniform code.
My Complex Relationship with Code Comments
My brief stay with (and departure from) this organization coincided with my growing
awareness of the Software Craftsmanship movement. Even as they copied and pasted
their way toward deadlines and wrote comments announcing that while(x < 6) would
proceed while x was less than 6, I became interested in the idea of the self-documenting
code.
Up to that point, I had diligently commented each method, file, and type I encountered.
In this regard, I looked out for fellow and future programmers. But after one
too many occasions of watching my own comments turn into lies when someone changed
the code without changing the comments, I gave up. I stopped commenting my code,
focusing entirely on extractions, refactoring, and making my code as legible as possible.
I achieved an equilibrium of sorts. In this fashion, I did less work and stopped
seeing my comments become nasty little fibs. But a single, non-subtle flaw remained
in this absolutist approach. What about documentation of a public (or internal)
API?
Naturally, I tried to apply the craftsmanship-oriented reasoning unilaterally.
Just make the public API so discoverable as to render the issue moot. But that
never totally satisfied me because I still liked my handy help screens and IntelliSense
info when consuming others' code.
And so I came to view XML doc comments on public methods as an exception. These,
after all, did not represent "comments." They came packaged with your deliverables
as your product. And I remain comfortable with that take today.
Generating Help More Efficiently
Now, my nuanced, evolved view doesn't automatically mean I'll resume laboriously hand-typing
XML comments. Early in my career, a sort of sad pride in this "work harder,
not smarter" approach characterized my development. But who has time for that
anymore?
Instead, with a little bit of investment in learning and tooling, you can do some
legitimately cool stuff. Let me take you through a nifty sequence of steps that
you may come to love.
GhostDoc Enterprise
First up, take a look at the
GhostDoc Enterprise offering. Among other things, this product
lets you quickly generate XML comments, customize the default generation template, spell check your code, generate help documentation, and more. Poking through
all that alone will probably take some time out of your day. You should download
and play with the product.
Once you are done with that, though, consider how you might get more efficient at
beefing up your API. For the rest of this post, I will use as an example my
Chess TDD project. I use this as a toy codebase for all kinds of demos.
I never commented this codebase, nor did I generate any kind of documentation for
it. Why? I intended it solely as a teaching tool for test-driven development,
and never packaged it for others' consumption. Let's change that today.
Adding Comments
Armed with GhostDoc Enterprise, I will first generate some comments. The Board class
makes a likely candidate since that offers theoretical users the most value.
First up, I need to add XML doc comments to the file. I can do this by right
clicking in the file, and selecting "Document Type" from the GhostDoc Enterprise context
menu. Here's what the result looks like.
The default template offers a pretty smart guess at intent, based on good variable
naming. For my fellow clean code enthusiasts out there, you can even check how
self-documenting your code is by the quality of the comments GhostDoc creates.
But still, you probably want to take a human pass through, checking and tweaking where
needed.
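To give a feel for the shape of the output, here is an approximation of a generated comment on one Board method -- my reconstruction for illustration, not GhostDoc's literal output, with Piece and BoardCoordinate reduced to stubs:

    public class Piece { }
    public class BoardCoordinate { }

    public class Board
    {
        /// <summary>
        /// Adds the piece at the specified target square.
        /// </summary>
        /// <param name="piece">The piece.</param>
        /// <param name="targetSquare">The target square.</param>
        public void AddPiece(Piece piece, BoardCoordinate targetSquare)
        {
            // Implementation elided; the generated comment is the point here.
        }
    }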
Building Help Documentation
All right. With comments in place for the public facing API of my little project,
we can move on to the actual documentation. Again, easy enough. Select
"Tools -> GhostDoc Enterprise -> Build Help Documentation" from the main menu.
You'll see this screen.
Notice that you have a great deal of control over the particulars. Going into
detail here is beyond the scope of my post, but you can certainly play around.
I'll take the defaults and build a CHM help file. Once I click "OK", here's
what I see (after navigating to the Board class).
Pretty slick, huh? Seriously. With just a few clicks, you get intelligently
commented public methods and a professional-looking help file. (You can also
have this as web-style documentation if you want). Obviously, I'd want to do
some housekeeping here if I were selling this, but it does a pretty good job even
with zero intervention from me.
Do It From the Build
Only one bit of automation remains at this point. And that's the generation
of this documentation from the build. Fortunately, GhostDoc Enterprise makes
that simple as well.
Any build system worth its salt will, of course, let you hook command line invocations
into your build. GhostDoc Enterprise offers one up for just this occasion.
You can read a
succinct guide on that right here. With a single command, you can point
it at your solution, a help configuration, and a project configuration, and generate
the help file. Putting it where you want is then easy enough.
Hooking this into an automated build or CI setup really ties everything together,
including the theme of this post. Automating the generation of clean, helpful
documentation of your clean code, building it, and packaging it up all without human
intervention pretty much represents the pinnacle of delivering a professional product.
Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|