January 2017 - Posts
-
For years, I can remember fighting the good fight for unit testing. When I started
that fight, I understood a simple premise. We, as programmers, automate things.
So, why not automate testing?
Of all things, a grad school course in software engineering introduced me to the concept
back in 2005. It hooked me immediately, and I began applying the lessons to
my work at the time. A few years and a new job later, I came to a group that
had not yet discovered the wonders of automated testing. No worries, I figured,
I can introduce the concept!
Except, it turns out that people stuck in their ways kind of like those ways.
Imagine my surprise to discover that people turned up their noses at the practice.
Over the course of time, I learned to plead my case, both in technical and in business
terms. But it often felt like wading upstream against a fast-moving current.
Years later, I have fought that fight over and over again. In fact, I've produced
training materials, courses, videos, blog posts, and books on the subject. I've
brought people around to see the benefits and then subsequently realize those benefits
following adoption. This has brought me satisfaction.
But I don't do this in a vacuum. The industry as a whole has followed the same
trajectory, using the same logic. I count myself just another advocate among
a chorus of voices. And so our profession has generally come to accept unit
testing as a vital tool.
Widespread Acceptance of Automated Regression Tests
In fact, I might go so far as to call acceptance and adoption quite widespread.
That acceptance only increases if you include shops that totally mean to and will definitely
get around to it, like, sometime in the next six months or something. In other
words, if you count both shops that have adopted the practice and shops that feel
as though they should, acceptance figures certainly span a plurality.
Major enterprises bring me in to help them teach their developers to do it.
Still other companies consult me and ask questions about it. Just about everyone
wants to understand how to realize the unit testing value proposition of higher quality,
more stability, and fewer bugs.
Some terminology will help here. We talk about unit testing and other forms of testing,
and the lines sometimes blur. So let's get specific. A
holistic testing strategy includes tests at a variety of granularities. These
comprise what some call "the
test pyramid." Unit tests address individual components (e.g. classes),
while service tests drive at the way the components of your application work together.
GUI tests, the least granular of all, exercise the whole thing.
Taken together, these comprise your regression test suite. It stands
guard against the category of bugs known as "regressions": defects where something that
used to work stops working. For a parallel example in the "real world," think
of the warning lights on your car's dashboard. The "low battery" light comes on
because the battery, which used to work, has stopped working.
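To make that idea concrete, here is a minimal sketch (using NUnit, with a hypothetical Battery class standing in for real production code) of the kind of unit test that would catch exactly that sort of regression.
using NUnit.Framework;

[TestFixture]
public class BatteryTests
{
    // Hypothetical example: if a later change breaks charging behavior,
    // this test fails on the very next build and flags the regression.
    [Test]
    public void Charge_Raises_Voltage_To_Operational_Level()
    {
        var battery = new Battery();

        battery.Charge();

        Assert.That(battery.Voltage, Is.GreaterThanOrEqualTo(12.0));
    }
}
Run such tests on every commit, and the suite tells you the moment something that used to work stops working.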
Benefits of Automated Regression Test Suites
Why do this? What benefits do automated regression test suites provide?
Well, let's take a look at some.
-
Repeatability and accuracy. A human running tests over and over again may produce
slight variances in the tests. A machine, not so much.
-
Speed. As with anything, automation produces a significant speedup over manual
execution.
-
Fast feedback. The automated test suite can tell you much more quickly if you
have broken something.
-
Morale. The fewer times a QA department comes back with "you broke this thing,"
the fewer opportunities for contentiousness.
I should also mention, as a brief aside, that I don't consider automated test suites
to be acceptable substitutes for manual testing. Rather, I believe
the two efforts should work in complementary fashion. If the automated test
suite executes the humdrum tests in the codebase, it frees QA folks up to perform
intelligent, exploratory testing. As Uncle
Bob once famously said, "it's wrong to turn humans into machines. If you
can write a script for a test procedure, then you can write a program to execute that
procedure."
Automating Code Review
None of this probably comes as much of a shock to you. If you go out and read
tech blogs, you've no doubt encountered the widespread opinion that people should
automate regression test suites. In fact, you probably share that opinion.
So don't you wonder why we don't more frequently apply that logic to other concerns?
Take code review, for instance. Most organizations do this in entirely manual
fashion, outside of, perhaps, a so-called "linting" tool. They mandate automated
test coverage and then content themselves with siccing their developers on one another
in meetings to gripe over tabs, spaces, and camel casing.
Why not approach code review the same way? Why not automate the aspects of it
that lend themselves to automation, while saving human intervention for more conceptual
matters?
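To make that division of labor concrete, here is a minimal sketch of the kind of mechanical check a machine can perform so that humans don't have to. It assumes the Roslyn compiler platform (the Microsoft.CodeAnalysis.CSharp NuGet package), and the 30-line threshold is an arbitrary number chosen for illustration, not a recommendation.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class MethodLengthCheck
{
    static void Main(string[] args)
    {
        // Parse a C# file and flag any method spanning more than 30 lines --
        // a mechanical check that frees human reviewers for design concerns.
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0]));
        var longMethods = tree.GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Where(m =>
            {
                var span = m.GetLocation().GetLineSpan();
                return span.EndLinePosition.Line - span.StartLinePosition.Line > 30;
            });

        foreach (var method in longMethods)
            Console.WriteLine($"Method too long: {method.Identifier.Text}");
    }
}
A check like this runs in milliseconds on every commit, leaving humans to review the things that machines can't.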
Benefits of Automated Code Reviews
In a study by Steve McConnell, referenced in this blog post, "formal code inspections"
produced better results for preemptively finding bugs than even automated regression
tests. So it stands to reason that we should invest in code review in the same
ways that we invest in regression testing. And I don't mean simply in time spent,
but in driving forward with automation and efficiency.
Consider the benefits I listed above for automated tests, and look at how they apply
to automated code review.
-
Repeatability and accuracy. Humans will miss instances of substandard code if
they feel tired -- machines won't.
-
Speed. Do you want your code reviews measured in seconds, or in hours and days?
-
Fast feedback. Because of the increased speed of the review, the reviewee gets
the results immediately after writing the code, for better learning.
-
Morale. The exact same reasoning applies here. Having a machine point
out your mistakes can save contentiousness.
I think that automating code review will follow the same trajectory that automating
test suites did. And, what's more, I think that automated code review
will gain steam a lot more quickly and with less resistance. After all, automating
QA activities blazed a trail.
I believe the biggest barrier to adoption, in this case, is the lack of awareness.
People may not believe automating code review is possible. But I assure you,
you can do it. So keep an eye out for ways to automate
this important practice, and get in ahead of the adoption curve.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
As a teenager, I remember having a passing interest in hacking. Perhaps this
came from watching the movie Sneakers.
Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking
other people's stuff. Therefore, what I know about hacking pretty much stops
at understanding terminology and high level concepts.
Consider the term "zero
day exploit," for instance. While I understand what this means, I have never
once, in my life, sat on discovery of a software vulnerability for the purpose of
using it somehow. Usually when I discover a bug, I'm trying to deposit a check
or something, and I care only about the inconvenience. But I still understand
the term.
"Zero day" refers to the amount of time the software vendor has to prepare for the
vulnerability. You see, the clever hacker gives no warning about the vulnerability
before using it. (This seems like common sense, though perhaps hackers with
more derring-do like to give the vendor half a day, just to watch them scramble to release
something before the hack takes effect.) The time between announcement and reality is
zero.
Increased Deployment Cadence
Let's co-opt the term "zero day" for a different purpose. Imagine that we now
use it to refer to software deployments. By "zero day deployment," we thus mean
"software deployed without any prior announcement."
But
why would anyone do this? Don't you miss out on some great marketing opportunities?
And, more importantly, can you even release software this quickly? Understanding
comes from realizing that software deployment is undergoing a radical shift.
To understand this, think about software release cadences 20 years ago. In the
90s, Internet Explorer won the first browser
war because it managed to beat Netscape's plodding pace of three years between
releases. With major software products, release cadences of a year or two dominated
the landscape back then.
But that timeline has shrunk steadily. For a highly visible example, consider
Visual Studio. In 2002, 2005, and 2008, Microsoft released versions corresponding
to those years. Then the gap started to shrink with 2010, 2012, and 2013. Now,
the years no longer mark releases, per se, with Microsoft actually releasing major
updates on a quarterly basis.
Zero Day Deployments
As much as going from "every 3 years" to "every 3 months" impresses, websites and
SaaS vendors have shrunk it to "every day." Consider Facebook's
deployment cadence. They roll minor updates every business day and major
ones every week.
With this cadence, we truly reach zero day deployment. You never hear Facebook
announcing major upcoming releases. In fact, you never hear Facebook announcing
releases, period. The first the world sees of a given Facebook release is when
the release actually happens. Truly, this means zero day releases.
Oh, don't get me wrong. Rumors of upcoming features and capabilities circulate,
and Facebook certainly has a robust marketing department. But Facebook and companies
with similar deployment approaches have impressively made deployments a non-event.
And others are looking to follow suit, perhaps yours included.
Conceptual Impediments to Zero Day Deployments
If what I just said made you spit your drink at the screen, I understand. Perhaps
your deployment and release process takes so long that the thought of shrinking it
to a day made you laugh. Or perhaps it terrified you. Either way, I can understand
that it may seem quite a leap.
You may conceive of Facebook and other practitioners as so alien to your own situation
that you see no path from here to there. But in reality, they almost certainly
do the same things you do as part of your longer process -- just optimized and automated.
Impediments take a variety of forms. You might have lengthy quality assurance
and vetting processes, perhaps ones that require many iterations between the developers
and quality assurance. You might still be packaging software onto DVDs and shipping
it to customers. Perhaps you run all sorts of checks and analytics on it.
But all will fall under the general heading of requiring manual intervention or consuming
a lot of time.
To get to zero day deployments, you need to automate and speed up considerably, and
this can seem daunting.
What's Common Today
Some good news exists, though. The same forces that let the Visual Studio team
see such radical improvement push on software shops across the board. We all
have access to the same helpful technologies.
For instance, the overwhelming majority of organizations now have continuous integration
via dedicated build machines. Software developers commit code, and these machines
scoop it up, compile it, and bundle it into a deployable package. This activity
now happens on the order of minutes whereas, in the past, I can remember shops where
this was some poor guy's entire job, and he'd spend days on each build.
And, speaking of the CI server, a lot of them run automated test suites as part of
what they do. Most commonly, this means unit tests. But they might also
invoke acceptance tests and even more exotic things like smoke, GUI, and functionality
tests. You can thus accept commits, build the software, run a bunch of tests,
and get it ready to deploy.
Of course, you can also automate the actual deployment as well. It stands to
reason that, if your build machine can ball it up into a deliverable, it can deliver
that deliverable. This might be harder with physical media involved, but as
more software deliveries happen over networks, more of them get automated.
What We Need Next
With all of that in place, why don't we have more zero day deployments? What's
missing?
Again, discounting the problem of physical media, I'd say quality checks present the
biggest issue. We can compile, run automated tests, and deploy automatically.
But does this guarantee acceptable production behavior?
What about the important element of code reviews? How do you assure that, even
as automated tests pass, the application isn't piling up mountains of technical debt
and impeding future deployments? To get to zero day deployments, we must address
these issues.
Don't get me wrong. Other things matter here as well. Zero day deployments
require robust production checks and sophisticated "oops, that didn't work, rollback!"
capabilities. But I think that nothing will matter more than automated
quality checks.
Each time you commit code, you need an intelligent analysis of that code that should
fail the build as surely as failing tests if issues crop up. In a zero day deployment
context, you cannot afford best practice violations. You cannot afford slipping
quality or mounting technical debt, and you most certainly cannot afford code rot.
Today's rot in a zero day deployment scenario means tomorrow's inability to deploy
that way.
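As a sketch of the mechanics, consider that most CI servers treat a nonzero exit code as a failed build step. So wiring quality analysis into the build can look as simple as the following console gate; RunAllChecks here is a hypothetical stand-in for whatever analysis your tooling actually performs.
using System;

class QualityGate
{
    static int Main(string[] args)
    {
        // Hypothetical stand-in for a real static analysis run.
        int violations = RunAllChecks(args[0]);

        if (violations > 0)
        {
            Console.Error.WriteLine($"{violations} quality violations found; failing the build.");
            return 1; // Nonzero exit code: the CI server fails this step.
        }

        return 0; // Clean: the pipeline proceeds toward deployment.
    }

    static int RunAllChecks(string path)
    {
        // Placeholder: invoke your analysis tooling against the given path here.
        return 0;
    }
}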
Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality,
and reduce technical debt.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
A little while back, I started
a post series explaining some of the CodeIt.Right rules. I led into the
post with a narrative, which I won't retell. But I will reiterate the two rules
that I follow when it comes to static analysis tooling.
-
Never implement a suggested fix without knowing what makes it a fix.
-
Never ignore a suggested fix without understanding what makes it a fix.
Because I follow these two rules, I find myself researching every fix suggested to
me by my tooling. And, since I've gone to the trouble of doing so, I'll save
you that same trouble by explaining some of those rules today. Specifically,
I'll examine 3 more CodeIt.Right rules
today and explain the rationale behind them.
Mark assemblies CLSCompliant
If you develop in .NET, you've no doubt run across this particular warning at some
point in your career. Before we get into the details, let's stop and define
the acronyms. "CLS" stands for "Common Language Specification," so the warning
informs you that you need to mark your assemblies "Common Language Specification Compliant"
(or non-compliant, if applicable).
Okay, but what does that mean? Well, you can easily forget that many programming
languages target the .NET runtime besides your language of choice. CLS compliance
indicates that any language targeting the runtime can use your assembly. You
can write language-specific code, incompatible with other framework languages.
CLS compliance means you haven't.
Want an example? Let's say that you write C# code and that you decide to get
cute. You have a class with a "DoStuff" method, and you want to add a slight
variation on it. Because the new method adds improved functionality, you decide
to call it "DOSTUFF" in all caps to indicate its awesomeness. No problem, says
the C# compiler.
And yet, if you try to do the same thing in Visual Basic, a case-insensitive language,
you will encounter a compiler error. You have written C# code that VB code cannot
use. Thus you have written non-CLS compliant code. The CodeIt.Right rule
exists to inform you that you have not specified your assembly's compliance or non-compliance.
To fix, go specify. Ideally, go into the project's AssemblyInfo.cs file and
add the following to call it a day.
[assembly:CLSCompliant(true)]
But you can also specify non-compliance for the assembly to avoid a warning.
Of course, you can do better by marking the assembly compliant on the whole and then
hunting down and flagging non-compliant methods with the attribute.
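To see the earlier casing example in code form, consider this sketch. With the assembly marked compliant, the C# compiler itself warns (CS3005) about the identifiers that differ only by case.
using System;

[assembly: CLSCompliant(true)]

public class StuffDoer
{
    public void DoStuff() { }

    // Warning CS3005: differs from DoStuff only by case, which a
    // case-insensitive language like Visual Basic cannot distinguish.
    public void DOSTUFF() { }
}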
Specify IFormatProvider
Next up, consider a warning to "specify IFormatProvider." When you encounter
this for the first time, it might leave you scratching your head. After all,
"IFormatProvider" seems a bit... technician-like. A more newbie-friendly name
for this warning might have been, "you have a localization problem."
For example, consider a situation in which some external source supplies a date. Except,
they supply the date as a string, and you have the task of converting it to a proper DateTime so
that you can perform operations on it. No problem, right?
var properDate = DateTime.Parse(inputString);
That should work, provided provincial concerns do not intervene. For those of
you in the US, "03/02/1995" corresponds to March 2nd, 1995. Of course, should
you live in Iraq, that date string would correspond to February 3rd, 1995. Oops.
Consider a nightmare scenario wherein you write some code with this parsing mechanism.
Based in the US and with most of your customers in the US, this works for years.
Eventually, though, your sales group starts making inroads elsewhere. Years
after the fact, you wind up with a strange bug in code you haven't touched for years.
Yikes.
By specifying a format provider, you can avoid this scenario.
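Extending the snippet above, either of these parses makes the expected interpretation explicit, so the result no longer depends on the culture of whatever machine the code happens to run on.
using System;
using System.Globalization;

// Pin the parse to a specific, known culture...
var properDate = DateTime.Parse(inputString, new CultureInfo("en-US"));

// ...or use the invariant culture for culture-agnostic formats.
var invariantDate = DateTime.Parse(inputString, CultureInfo.InvariantCulture);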
Nested types should not be visible
Unlike the previous rule, this one's name suffices for description. If you declare
a type within another type (say a class within a class), you should not make the nested
type visible outside of the outer type. So, the following code triggers the
warning.
public class Outer
{
    public class Nested
    {
    }
}
To understand the issue here, consider the object oriented principle of encapsulation.
In short, hiding implementation details from outsiders gives you more freedom to vary
those details later, at your discretion. This thinking drives the rote instinct
for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.
To some degree, the same reasoning applies here. If you declare a class or struct inside
of another one, then presumably only the containing type needs the nested one.
In that case, why make it public? On the other hand, if another type does, in
fact, need the nested one, why scope it within a parent type and not just the same
namespace?
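In code, the two reasonable resolutions look something like this sketch.
// Option 1: other types really do need it, so promote the nested
// type to a sibling in the same namespace.
public class Outer
{
}

public class Nested
{
}

// Option 2: only the containing type needs it, so hide it as an
// implementation detail.
public class Container
{
    private class Helper
    {
    }
}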
You may have some reason for doing this -- something specific to your code and your
implementation. But understand that this is weird, and will tend to create awkward,
hard-to-discover code. For this reason, your static analysis tool flags your
code.
Until Next Time
As I said last time, you can extract a ton of value from understanding code analysis
rules. This goes beyond just understanding your tooling and accepted best practice.
Specifically, it gets you in the habit of researching and understanding your code
and applications at a deep, philosophical level.
In this post alone, we've discussed language interoperability, geographic maintenance
concerns, and object oriented design. You can, all too easily, dismiss analysis
rules as perfectionism. They aren't; they have very real, very important applications.
Stay tuned for more posts in this series, aimed at helping you understand your tooling.
Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|
-
Last month, I wrote a
post introducing you to T4 templates. Near the end, I included a mention
of GhostDoc's use of T4 templates in automatically generating code comments.
Today, I'd like to expand on that.
To recap very briefly, recall that GhostDoc allows you to generate things like method
header comments. I recommend that, in most cases, you let it do its thing.
It does a good job. But sometimes, you might have occasion to want to tweak
the result. And you can do that by making use of T4 Templates.
Documenting Chess TDD
To demonstrate, let's revisit my trusty toy code base, Chess
TDD. Because I put this code together for instructional purposes and not
to release as a product, it has no method header comments for IntelliSense's benefit.
This makes it the perfect candidate for a demonstration.
If I had released this as a library, I'd have started the documentation with the Board
class. Most of the client interaction would happen via Board, so let's document
that. It offers you a constructor and a bunch of semantics around placing and
moving pieces. Let's document the conceptually simple MovePiece method.
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
To add documentation to this method, I simply right click it and, from the GhostDoc
context menu, select "Document This." Alternatively, I can use the keyboard
shortcut Ctrl-Shift-D. Either option yields the following result.
/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Let's Make a Tiny Tweak
Alright, much better! If I scrutinize the comment, I can imagine what an IntelliSense-using
client will see. My parameter naming makes this conceptually simple to understand,
so the IntelliSense will tell the user that the first parameter represents the origin
square and the second parameter the destination.
But let's say that as I look at this, I find myself wanting to pick at a nit.
I don't care for the summary taking up three lines -- I want to condense it to one.
How might I do that?
Well, let's crack open the T4 template for generating a method header. Recall
that you do this in Visual Studio by selecting Tools->GhostDoc->Options,
and picking "Rules" from the options pane.
If you double click on "Method Template" in the list of rules, you will see an "Edit
Rule" window. The first few lines of code in that window look like this.
<#@ template language="C#" #>
<# CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary>
///<# GenerateSummaryText(); #>
/// </summary>
<# if(codeElement.HasTypeParameters)
{
    for(int i = 0; i < codeElement.TypeParameters.Length; i++)
    {
        TypeParameter typeParameter = codeElement.TypeParameters[i];
#>
Hmmm. I cannot count myself an expert in T4 templates, per se, but
I think I have an idea. Let's put that call to GenerateSummaryText() inline
between the summary tags. Like this:
<#@ template language="C#" #>
<# CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary><# GenerateSummaryText(); #></summary>
That should do it, right? Let's regenerate the comment and see what it looks
like. This results in the following.
/// <summary>Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Uh, oh. It made a difference, but somehow we only got halfway there. Why
might that be?
Diving Deeper
To understand, we need to look at the template in a bit more detail. The template
itself has everything on one line, and yet we see a newline in there somehow.
Could GenerateSummaryText cause this, somehow? Let's scroll down
to look at it. Since this method has a lot of code, here are the first few lines
only.
private void GenerateSummaryText()
{
    if(Context.HasExistingTagText("summary"))
    {
        this.WriteLine(Context.GetExistingTagText("summary"));
    }
    else if(IsAsyncMethod())
    {
        this.WriteLine(Context.ExecMacro("$(MethodName.Words.ExceptLast)") + " as an asynchronous operation.");
    }
    else if(IsMainMethod())
    {
        this.WriteLine("Defines the entry point of the application.");
    }
}
Aha! Notice that we're calling WriteLine. What if we did
a find and replace to change all of those calls to just Write? Let's try. (For
bulk edits like this, you will want to copy the text out of the rule editor and into
your favorite text editor, which offers better find-and-replace support.)
Once you have replaced all instances of WriteLine with Write in the template,
here is the new result.
/// <summary>Moves the piece.</summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);
    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;
    ReconcileEnPassant(origin, destination, pieceToMove);
}
Success!
Validation
As you play with this, you might have noticed a "Validate" button in the rule editor.
Use this liberally! This button will trigger a parsing of the template and provide
you with feedback as to validity. The last thing you want to do is work in here
for many iterations and wind up with no idea what you broke and when.
When working with these templates, think of this as equivalent to compiling.
You wouldn't want to sit for 20 minutes writing code with no feedback as to whether
it builds or not. So don't do it with these templates.
The Power at Your Disposal
I'll wrap here for this particular lesson, but understand that we have barely scratched
the surface of what you can do. In this post, we just changed a bit of the formatting
to suit a whim I had. But you can really dive into ways of reasoning about and
documenting the code if you so choose.
Stay tuned for future posts on more advanced tips and tricks with your comment templates.
Learn more about how GhostDoc can help you simplify your XML comments and produce and maintain
quality help documentation.
About the Author
Erik Dietrich
I'm a passionate software developer and active blogger. Read about me at my
site. View
all posts by Erik Dietrich
|