A few thoughts about coding standards
Some disputes within the development community seem to be perennial, popping
up every so often like this year's crop of thistles (forgive the analogy; I live
in farming country). I'm not talking about the big stuff like .NET vs. Java or
even Linux vs. Windows. No, the thing that has me ruminating this morning is the
simple matter of coding standards.
Maybe it's just the places that I hang out online, but in both weblogs and
mailing lists I've recently seen a whole flurry of people maintaining that
practices of which they don't approve (primarily Hungarian notation) lead to bad
code, bad development practices, hair on your palms, or other unpleasant side
effects. Now, as far as I can tell, no one is asking or telling the posters that
they personally must use Hungarian notation. Rather, they're objecting to
other people using it.
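For anyone who hasn't run into it: Hungarian notation prefixes each identifier with an abbreviation of its type. Here's a minimal C# sketch of the style under dispute; the names and values are invented purely for illustration:

    using System;

    class HungarianDemo
    {
        static void Main()
        {
            // Systems Hungarian style: every identifier carries a type prefix.
            string strCustomerName = "Contoso"; // str = string
            int nOrderCount = 3;                // n = integer
            bool bIsTaxable = true;             // b = Boolean

            // The same data under plain, prefix-free names.
            string customerName = strCustomerName;
            int orderCount = nOrderCount;
            bool isTaxable = bIsTaxable;

            Console.WriteLine("{0}: {1} orders, taxable: {2}",
                customerName, orderCount, isTaxable);
        }
    }

Neither version is wrong; the whole argument is over which one reads better.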
The disputes aren't limited to Hungarian notation, of course. I've seen
similar battles over the use of whitespace, how to name variables, the proper
content of comments, how to structure error-handling routines...in short, just
about anything for which an organization could conceivably have a coding
standard in place. This state of affairs perplexes me.
The reason we have coding standards, of course, is that computer languages
do not completely specify the input necessary to generate a particular
application. As languages become more flexible, there's generally more room for
coding standards. If you did any work in assembly language a decade or more ago,
for example, you would have found precious little room for coding standards
(beyond, perhaps, which registers should be used for which purposes). The
language and the compiler specified everything in detail. As we move to
higher-level and looser languages, the developer has more leeway. Even among
similar languages, though, there are differences. In C#, for example, your
standards might dictate whether it's acceptable to use two identifiers that
differ only in case; in Visual Basic .NET, the issue doesn't even arise because
you can't do that.
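To make that difference concrete, the following C# compiles cleanly because the language is case-sensitive, while the equivalent Visual Basic .NET declarations would collide. (The class and member names here are hypothetical.)

    using System;

    class Customer
    {
        // Legal in C#: a private field and a public property that differ
        // only in case. In Visual Basic .NET this pair would collide at
        // compile time, so a VB coding standard never has to address it.
        private string name;

        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        static void Main()
        {
            Customer c = new Customer();
            c.Name = "Contoso";
            Console.WriteLine(c.Name);
        }
    }

In fact, pairing a lowercase backing field with a PascalCase property is a common C# idiom, which is exactly the kind of thing a C# coding standard might explicitly bless or forbid.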
So, coding standards exist to specify that part of the syntax that's up to
the developer. What I think some people miss is that there are several potential
audiences for such standards, and that not every standard is aimed at every
audience.
For example, standards for naming variables that are purely internal to your
application matter to you and to any maintenance developers who come along later
to deal with your source code. They don't (or shouldn't) matter to other
developers who consume your code through a small set of well-documented external
interfaces.
Standards for naming interfaces of public classes, on the other hand, matter
to the entire community that might use your code (or other code that interlocks
with your code, and so on). That's why we have things like the .NET
Framework Naming Guidelines. But in promulgating these guidelines, Microsoft
makes their limitations clear: "The goal of the .NET Framework design guidelines
is to encourage consistency and predictability in public APIs..." What you do in
the privacy of your own code is your own business.
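Here's a sketch of that split in practice: a hypothetical class whose public surface follows the PascalCase conventions of the Framework guidelines while its private internals use a team's in-house Hungarian style. Consumers of the class never see the prefixed names:

    using System;

    public class OrderProcessor
    {
        // Internal detail, named per a hypothetical in-house
        // Hungarian-style standard. Only this team ever sees it.
        private int nPendingOrders;

        // Public surface: PascalCase, no type prefixes, per the
        // .NET Framework Naming Guidelines.
        public int PendingOrders
        {
            get { return nPendingOrders; }
        }

        public void SubmitOrder()
        {
            nPendingOrders++;
        }
    }

    class Program
    {
        static void Main()
        {
            OrderProcessor processor = new OrderProcessor();
            processor.SubmitOrder();
            // Callers interact only with the guideline-compliant names.
            Console.WriteLine(processor.PendingOrders);
        }
    }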
Here's where I come down: when you're selecting coding guidelines, you need
to keep in mind who will deal with each guideline. For code to be shared with a
wide audience, adopting a widely respected guideline is best. For internal code,
though, you should do whatever works best for your organization. If your team is
used to Hungarian notation and finds that it benefits their work, use it. There
are no extra points for politically correct coding styles.
Mike Gunderloy has been developing software for a quarter-century now, and writing about it for nearly as long. He walked away from a .NET development career in 2006 and has been a happy Rails user ever since. Mike blogs at A Fresh Cup.