In-Depth
Debugging Quality Control
By Stephen Swoyer
March 1, 2006
The Big Idea
QUANTIFIABLE CODE
- As regulators clamp down, business users want a hands-on approach to software QA. Some in the industry are pushing for quality-level agreements.
- Developers say traditional metrics don’t always work. Many remain in the dark about their own companies’ internal QA processes. A defined QA process, however, may make their jobs easier.
- A growing trend among Global 2000 companies is to hire quality-control managers and develop QA programs outside their IT depts.
Bugs and other defects are inevitable byproducts of software development. No one disputes this. Beyond that, the consensus usually breaks down.
The disconnect between developers and business users is almost a tradition. Business customers often give short shrift to the concerns of developers. They want software that can handle their business needs, even if those requirements evolve or change. That's why last-minute change requests and eleventh-hour feature or functionality additions are a fact of life for enterprise codejockeys. These down-to-the-wire disruptions may account for defects or bugs that ultimately get fingered by quality assurance testers.
A productive working relationship with line-of-business customers may get even harder to achieve. Sarbanes-Oxley and the current regulatory climate require higher levels of accountability. Not surprisingly, business users want a more hands-on approach to software QA.
As a result, a growing number of codejockeys are grappling with software QA initiatives. They're being asked to take on new responsibilities for testing and other QA-related activities. Why is this a problem?
Because when it comes to QA, many developers are in the dark. Many programmers can't tell you how their companies assess the quality of the software they produce--and in some cases, they don't care to know.
Number of X and Y
Theodore Woo, a production support engineer with a global IT services firm, is working for a telco that has outsourced most of its IT operations. Woo's firm won its outsourcing pitch by promising to reduce the telco's IT costs and improve the overall quality of its in-house software dev, in part by bringing its developers up to CMMI Level 3 speed.
QLAs: Putting metrics in writing
As far as Agitar Software is concerned, the success of service enablement and a host of similar initiatives hinges on software quality. "If you are going to base your success on a service provided by someone else, how do you begin that relationship?" asks Jeffrey Fredrick, a principal with Agitar. "We think a key part of this relationship will be a transparency into the development process that is now almost unknown, and that quality-level agreements will be common."
Think of a QLA as a lot like a service-level agreement. It's a contract, more or less, that specifies requirements, expectations, responsibilities and other potential sticking points. QLAs can have teeth, too, because they may prescribe penalties or sanctions, monetary and otherwise, for quality breaches.
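To make the idea concrete, here is a minimal sketch, in Python, of how a QLA's terms might be written down as data and checked mechanically. Everything in it--the category names, limits and penalty figure--is invented for illustration; nothing here comes from an actual QLA.

    # Hypothetical QLA terms: maximum open defects allowed per severity
    # category at delivery, plus a flat penalty for each term breached.
    # All names and numbers below are invented for illustration.
    QLA_TERMS = {"category_1": 0, "category_2": 5, "category_3": 30}
    PENALTY_PER_BREACH = 10000  # hypothetical dollars per violated term

    def check_qla(open_defects, terms=QLA_TERMS, penalty=PENALTY_PER_BREACH):
        """Return (passed, total_penalty) for a delivery snapshot."""
        breaches = [cat for cat, limit in terms.items()
                    if open_defects.get(cat, 0) > limit]
        return (not breaches, len(breaches) * penalty)

    # One category-2 count over its limit: the delivery fails and owes 10000.
    print(check_qla({"category_1": 0, "category_2": 7, "category_3": 12}))

The point isn't the code; it's that once quality expectations are written down this precisely, a breach becomes something both sides can verify.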
Agitar's Open Quality Initiative is highlighting a disturbing trend in commercial and enterprise software development. As Fredrick and other Agitar officials see it, software defects are now accepted as inevitable by developers and consumers alike. This in turn has precipitated a serious degradation in software quality.
To remedy this situation, Agitar is calling upon vendors to provide data about their internal software-quality efforts. By the same token, officials say customers should demand to see quantitative measurements of quality in the IT products and services they consume. Customers should ask for and receive QLAs from their software vendors just as they commonly do with SLAs.
It's a view that some in the industry are warming up to. "This is something we're starting to see a lot more," says Mark Eshelby, quality solutions product manager for testing-tools specialist Compuware. QLAs provide a good way to make sure a project's start and finish criteria are reasonably well understood by both customers and programmers. One open question, Eshelby says, is how business users should participate in deciding what level of quality is acceptable and what criteria will be used to measure the quality of the finished project; the concept is still evolving.
QLAs offshore
A key QLA proving ground is the outsourcing arena. A surprising number of outsourced software dev projects don't deliver the goods, leaving orgs holding the bag. QLAs could give outsourcers more clout: If the finished project doesn't conform to the requirements set forth in the QLA, the outsourcing provider doesn't get paid.
It's no more unreasonable than some of the steps orgs currently take to protect themselves. However, it might not be as practicable, given the potential difficulty of enforcing contracts or judgments against offshore providers.
When Christopher Secord, a senior app developer with Wake Forest University, interviewed for a job with General Electric in 2001, he asked GE officials whether they had contractual agreements in place with their outsourcing providers in India to protect the company in the event of disaster.
"Instead of an agreement that says, 'This...app has to meet the following metrics,' they broke everything up," remembers Secord, "as in 'User interface module A has to meet these requirements; user interface module B has to meet these other requirements.' The absolute worst-case scenario, they hoped, was that only one module would be a failure and they could replace it, with another outsourced firm if necessary."
Given the popularity of SLAs, adoption of QLAs might seem like a foregone conclusion. Not everyone is comfortable with that idea, however.
"I think the bottom line is that business people and developers have to have a relationship in which they can trust that the other is working toward the goal of getting a quality product out the door," says Ken Auer, a principal with custom software dev house Role Model Software. " A QLA seems like a bad way to define that relationship. It says, 'We, the business, have reasonable expectations and you, the developers, are incompetent or negligent if you don't meet them perfectly.' The reality is that the business people are taking an educated guess at what they would like to have done, and the developers are taking an educated guess as to the best way to get it done. They are both very prone to error. If they accept the reality, they can work together to reduce it."
—Stephen Swoyer
Neither objective--cost reduction nor quality improvement--has been met yet, Woo says. And when they are met, exactly how the improvements will be measured is unclear. Proposals for software-quality standards, much less quality-level agreements, have yet to become prevalent in Woo's org.
"There have been discussions about
this, but whenever it comes up, it's
dropped because no one can agree on any
workable metrics," he says. Developers
tend to scoff at traditional quality metrics,
such as X number of bugs or defects per Y
lines of code, defects per week or tests per
day. Metrics of this kind are useless in many
cases, veteran codejockeys say. And in other
respects, they argue, such metrics are
completely inapposite.
"We have no metric that I'm aware of
that measures errors per line of code," Woo
says, "but the problem with our type of object-oriented programming is that counting
lines of code doesn't have much validity--unless you export everything to one
huge text file and basically count it there."
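Woo's point is easy to demonstrate. The sketch below--a deliberately crude Python line counter, with a hypothetical file name--counts the same file two ways and gets two different defect densities for the same twelve bugs. The metric moves even though the code and the bugs don't.

    # Crude LOC counter: the "size" of a file depends entirely on what you
    # decide to count, so defects-per-KLOC shifts with the counting rule.
    def loc_counts(path):
        raw = logical = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                raw += 1
                stripped = line.strip()
                # Skip blank lines and (crudely) full-line comments.
                if stripped and not stripped.startswith(("//", "/*", "*", "#")):
                    logical += 1
        return raw, logical

    raw, logical = loc_counts("Billing.java")  # hypothetical source file
    bugs = 12
    print(f"defects per KLOC (raw):     {bugs / raw * 1000:.1f}")
    print(f"defects per KLOC (logical): {bugs / logical * 1000:.1f}")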
Other metrics include cyclomatic complexity, a widely used static measurement that counts the number of independent paths through a program's control flow and that in some cases can be used to analyze a software dev project. The more complex a piece of software is, the more forgiving an org might be of defects in the code. Or so the theory goes.
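For a sense of how the measurement works, here is a minimal sketch, assuming Python source as the input: it approximates McCabe's number for a snippet as one plus the count of decision points. Production tools count more constructs and work across many languages.

    import ast

    # Node types treated as decision points in this simplified version.
    DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.ExceptHandler, ast.And, ast.Or)

    def cyclomatic_complexity(source):
        """Approximate McCabe complexity: 1 + number of decision points."""
        return 1 + sum(isinstance(node, DECISIONS)
                       for node in ast.walk(ast.parse(source)))

    SNIPPET = """
    def classify(n):
        if n < 0:
            return "negative"
        for d in range(2, n):
            if n % d == 0:
                return "composite"
        return "no small factors"
    """
    import textwrap
    print(cyclomatic_complexity(textwrap.dedent(SNIPPET)))
    # Prints 4: the base path plus three branches (two ifs and one loop).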
One metric Woo's org uses is function-point analysis. It, too, purports to measure the size and complexity of a software dev project. Function points are correlated with app features or requirements. Software QA testers can point to X bugs per Y function points, with the number of function points preferably being a large multiple of the number of bugs.
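The arithmetic itself is trivial, as the sketch below shows; the hard and contested part is the function-point count in the denominator, which under schemes such as IFPUG comes from weighting an app's inputs, outputs, inquiries and files. The numbers here are invented.

    def defects_per_function_point(defects, function_points):
        """Defect density: bugs found per counted function point."""
        if function_points <= 0:
            raise ValueError("function-point count must be positive")
        return defects / function_points

    # Hypothetical release: 18 defects logged against 450 function points.
    print(f"{defects_per_function_point(18, 450):.3f} defects per FP")  # 0.040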
"We get a request to assign a number to
what we produce, but none of us know
what this is supposed to represent," Woo
says. "A group in Mexico is actually performing
the analysis. They're the ones who
run the numbers on this stuff. But there
hasn't been any feedback between us."
The agile angle
Software QA isn't inherently inimical to agile concepts and methods. Still, software quality efforts, especially if they're imposed as top-down mandates, can strike many codejockeys as misguided--and anti-agile--management initiatives.
Consider the idea of software quality-level agreements, which are a lot like service-level agreements. To the extent that a QLA insists on an exhaustive up-front specification of project requirements, or presupposes the use of other waterfall-like methods, isn't there a danger that it will be incompatible with some agile disciplines?
This isn't to pick on QLAs, either: Isn't any software quality initiative that originates outside IT likely to be inimical to agile practices? Not necessarily, agile practitioners say. "If someone has some requirements and can suggest how to test them, they certainly should. That is not anti-agile," says Ken Auer, a principal with custom software dev house Role Model Software.
Auer, an agile adherent, says most agile methods aren't at all reflexively opposed to requirements. "Agile is not against requirements. It merely rejects the idea that it is smart to try to figure out all the requirements up front without getting feedback from working--or somewhat working--software, based on customer responses."
There's another wrinkle. One best practice that's espoused by many agile enthusiasts is the idea of top-down management buy-in. Because some agile approaches to software dev are so counter to the expectations of executives and line-of-business customers, this is key. In orgs in which agile practitioners have already greased the skids, there's an excellent chance software QA initiatives such as QLAs can be designed to comport with the specific flavor (or flavors) of agile used in that enterprise. After all, some folks say, even agile and waterfall share points in common.
"When we look at agile dev and agile testing techniques, you still need a way of managing your assets and your results and correlating them back again," says Mark Eshelby, quality solutions product manager for testing-tools specialist Compuware. "So in the case of agile, which takes a test-first type of approach, you want to make sure that at the end of that project, all of the tests have been run appropriately. You want to know what the pass/fail is. You want to adapt to measure stuff like that."
And if reconciliation isn't possible, says Auer, agile teams can take a page from the suits. "Why not have a reverse QLA? Every time the business people change a requirement or come up with a new requirement that they hadn't previously identified or had identified incorrectly, there is a negative consequence. This seems to work in the waterfall process where change orders demand a lot of ceremony and, usually, some extra dollars."
—Stephen Swoyer
QA on the brain
Most orgs perform some kind of internal
QA process. However, it's not clear
how well rank-and-file developers such
as Woo understand this process.
"The idea that a level of quality can be
identified easily is kind of ludicrous," says
Ken Auer, a principal with custom software
dev house Role Model Software.
Auer faces QAexpectations in his own
work. In most cases, it's just an inconvenience.
"I've been asked to make sure that
we ship with stuff like 0 category-1 bugs,
less than 5 category-2 bugs, less than 30
category 3-bugs and so on, with some
vague description of what each of the categories
are," he explains. "This kind of
thing is fine, as long as there is an acceptance
test period where bugs are identified
and adequate time is taken to remove
them...or lessen their severity."
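Criteria like Auer's are mechanical enough to check automatically. Here is a minimal sketch using his thresholds--zero category-1 bugs, fewer than 5 category-2, fewer than 30 category-3. The bug lists are invented; a real gate would read them from a defect tracker.

    from collections import Counter

    # Maximum open bugs allowed per category, per the thresholds Auer cites.
    SHIP_LIMITS = {1: 0, 2: 4, 3: 29}

    def ok_to_ship(open_bug_categories):
        """open_bug_categories: one category number per open bug."""
        counts = Counter(open_bug_categories)
        return all(counts.get(cat, 0) <= limit
                   for cat, limit in SHIP_LIMITS.items())

    print(ok_to_ship([2, 2, 3, 3, 3]))  # True: within every limit
    print(ok_to_ship([1, 2, 3]))        # False: any category-1 bug blocks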
Business execs have software quality on the brain. Want proof? Consider the popularity of standardization efforts such as CMMI and ISO 9001. To a large extent, execs embrace these initiatives because they promise to bring a degree of order, regularity or repeatability to a software dev process that to the unpracticed eye can appear anarchic and unpredictable.
In the age of Sarbanes-Oxley, process and repeatability have become more important than ever to corporate decision-makers.
New watchdogs
Software QA is traditionally aligned more closely with IT than with the bread-and-butter business. That isn't an ideal arrangement, however, because it leaves IT acting as watchdog over its own work. To avoid conflicts of interest, some proponents say, QA needs to be spun off as an independent entity with its own chain of command.
Over the last few years, a number of companies have done just that, introducing discrete QA programs and creating positions for quality managers. This trend is becoming the norm among the Global 2000 set.
Mark Eshelby, quality solutions product manager for testing-tools specialist Compuware, says: "We've typically had resources in an organization that are responsible for running tests or building test scripts. And the quality manager is the person responsible for orchestrating that team's work efforts. It's actually becoming very popular."
Compuware markets a line of QA testing tools such as QACenter and CARS 5.1. IBM's Rational Software Group, Mercury Interactive and many other vendors also market QA testing tools and complementary products.
QA testing tools aren't the problem. The problem is the absence of QA tools that address the concerns of both codejockeys and line-of-business customers.
It's possible to deliver a fully functioning app--one with no show-stopping defects--that fulfills all major requirements yet fails to address the needs of the line-of-business customer. "I once wrote a program that did exactly what the document said, and that turned out to be a bad thing," recalls one software engineer. "But I had a document that said, 'Looky here, this is what you wanted, and this is what you got.'"
Good enough to ship
Compuware's Eshelby acknowledges that some QA tools and metrics were not conceived with developers or line-of-business customers in mind. This doesn't have to be the case, he argues. Developers, especially, play an important role in QA. Codejockeys need to understand that the deck isn't stacked against them; an effective QA process can make their jobs easier.
"A lot of [QA] comes down to having a
defined process that all of the parties in
the organization can work together on,"
Eshelby says. "It could be as simple as the
idea that the entrance and exit criteria are
agreed upon by all of the parties in the organization.
"One way we work with a customer on
a CARS engagement," he continues, "is to
try to involve the end users so they understand
the requirements that they've been
asking for and what they might mean in
terms of risk and time."
He points out that when users learn how long it will take to build and test a feature, they often decide it's not really that important to them after all.
Role Model's Auer believes this kind of QA vetting, predicated on the early identification of app requirements, is misguided. "Let's face it," he says, "requirements are never even close to complete or accurate. If that's the case, what kind of quality are you measuring?"
Auer advocates a different approach. "The best you can do is to go through some sort of acceptance testing, and decide whether you're happy enough with the quality during acceptance testing that...you want to ship it," he says. "Typically the amount of acceptance testing will be proportionate to the cost of a bug slipping through. If a bug is life-threatening, that [cost] will be a whole lot."
Jeffrey Fredrick, a principal with software QA specialist Agitar Software, thinks the nature of QA testing is going to change, especially as companies continue to flesh out their SOA infrastructures and apps become more virtualized. Fredrick believes the extreme difficulty of downstream testing will drive more companies to adopt developer testing.
"This is the only trend I see that offers some relief to testing departments," he says. "By pushing the testing upstream, the quality of the code as it enters testing is much higher, and this in turn will allow [companies] to work more efficiently. At the same time, testing done by developers is something that can be measured, quantified and communicated."
Age of ignorance
QA testing-tools vendors and other QA proponents tend to paint rosy pictures of an app future-scape in which enterprise software dev is vastly improved. But for many codejockeys, things aren't getting better. Consider production support engineer Woo, whose employer laid off a significant percentage of its developer workforce.
As a result, the QA process at his company has changed drastically over the last few years. "We used to have weekly reviews, meetings where we'd actually put code on an overhead and discuss things [the programmer] might have done instead," he says. "I can't remember the last time we did a code review. It's apparently assumed that the developers just know what they're doing. I have no idea whether they're following the documentation protocols."
As for his employer's function-point analysis initiative, Woo pleads ignorance. "As far as I know, I'm not supposed to have that much of a grasp of it anyway."