McCabe speaks out on software issues -- 2000 and beyond
- By Jack Vaughan
- July 12, 2001
In 1977, Thomas McCabe took his novel work in the study of software metrics to market via McCabe & Associates.
The company has been
the home of many firsts -- its efforts to visually present the inner activity of software programs are current,
yet still pioneering. Managing Editor Jack Vaughan spoke with McCabe not long after he launched a new corporate
effort in the year 2000 date conversion arena.
What, today, brings a year 2000 project to its knees?
One of them is the size of the year 2K undertaking. It's much larger than almost any project most companies have taken on. The millions of lines of code involved are overwhelming. So the kinds of things that can bring it to its knees are the partitioning and sizing. It's very, very dangerous and delicate, because you have to partition it to make the effort doable from many points of view, one of them being testing.
But along with that, you have to partition it in such a fashion that you keep the dependencies, the inter-application ripple effect, there while you're testing. It takes, in terms of architecture, a very clean division of a huge problem so you can attack it in a reasonable fashion. The difficulty is that the amount of expertise or insight or planning is not typically there with year 2K projects, and yet it is needed ever so much because of the size.
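McCabe's partitioning point can be pictured with a toy sketch. The Python below is not his method -- the module names, sizes, and greedy grouping are all illustrative assumptions -- but it shows the idea: split a module dependency graph into workable chunks while keeping an explicit record of the cross-partition edges, the "ripple effect" paths that still need integration testing.

```python
# Toy sketch (not McCabe's actual method): partition a hypothetical
# module dependency graph into workable chunks, but keep the
# cross-partition dependencies visible so the inter-application
# "ripple effect" is not lost when each partition is tested.
from collections import defaultdict

# module -> modules it calls (hypothetical dependency graph)
deps = {
    "payroll":   ["dates_lib", "gl_post"],
    "billing":   ["dates_lib", "ar_post"],
    "gl_post":   ["dates_lib"],
    "ar_post":   ["dates_lib"],
    "dates_lib": [],
}

def partition(deps, max_size):
    """Greedily pack modules into partitions of at most max_size."""
    parts, current = [], []
    for mod in deps:
        current.append(mod)
        if len(current) == max_size:
            parts.append(current)
            current = []
    if current:
        parts.append(current)
    return parts

def cross_partition_edges(deps, parts):
    """Dependencies that cross partition boundaries -- these are the
    ripple-effect paths that still need integration testing."""
    where = {m: i for i, p in enumerate(parts) for m in p}
    edges = defaultdict(list)
    for src, targets in deps.items():
        for dst in targets:
            if where[src] != where[dst]:
                edges[(where[src], where[dst])].append((src, dst))
    return dict(edges)

parts = partition(deps, max_size=2)
print(parts)
print(cross_partition_edges(deps, parts))
```

A real partitioning would group by business function rather than greedily, but the bookkeeping is the same: every edge that crosses a partition boundary is a dependency the test plan must still exercise.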
When errors crop up because of changes that were made, [developers do not know] if remediation has caused the errors. In other words, they change the code, and they find out later, in testing, that there are errors. They don't know if they are legacy errors or errors caused by remediation. This can throw them into a big tailspin.
Now when you look at a big problem, one of the key things is to focus the testing where it will have an impact.
And if your testing has typically been very sloppy, if you don't take that kind of precise view of it, and you
start testing everything, the problem is enormous.
How do people pare the problem down so it is more manageable? How have you adapted your products to help [pare it down]?
When you look at a system and you look at the transactions and you look at the paths [within the program], the density of dates often is not that high -- it's between 2% and 12%; maybe 90% of projects fall in that range. So if you look at the number of paths through an algorithm, only a very small number of them hit dates. One of the things that really gives you a lot of leverage, then, is how you pick out the paths that are hitting dates.
That's what we do. And we produce the conditions to run the paths that hit those dates. This is an extension of what we had traditionally been doing. It's just that we now have a whole new department focusing on year 2K.
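The path-selection idea can be sketched in a few lines. The toy Python below is not the McCabe tooling -- the control-flow graph, node names, and date-detection rule are hypothetical, and the graph is assumed acyclic -- but it shows how enumerating the paths through a routine and keeping only those that touch a date field shrinks the testing problem.

```python
# Minimal sketch of date-path selection: enumerate the paths through
# a small (acyclic, hypothetical) control-flow graph and keep only
# those passing through a statement that touches a date field.
cfg = {  # node -> successors
    "entry":      ["check_type"],
    "check_type": ["fmt_date", "fmt_amount"],
    "fmt_date":   ["write_rec"],
    "fmt_amount": ["write_rec"],
    "write_rec":  ["exit"],
    "exit":       [],
}
touches_date = {"fmt_date"}  # nodes that read or write a date field

def all_paths(cfg, node="entry", path=None):
    path = (path or []) + [node]
    if not cfg[node]:       # exit node reached
        yield path
    for nxt in cfg[node]:
        yield from all_paths(cfg, nxt, path)

paths = list(all_paths(cfg))
date_paths = [p for p in paths if touches_date & set(p)]

print(f"{len(date_paths)} of {len(paths)} paths hit a date")
for p in date_paths:
    print(" -> ".join(p))  # the input conditions driving this path become a test case
```

The input conditions that drive each surviving path are exactly the "conditions to run the paths that hit those dates" McCabe describes; everything else can be deprioritized.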
The reason this is important is that you avoid the syndrome of 'I have to test everything.' When you hear
of projects that test everything, and the testing is the last thing they do, you know they're in trouble. When
you [properly] pre-test, it actually forces some definition and clarity on the issue. It forces you to come to
grips with how you think about the testing and the partitioning of the testing, and the meaning of the testing
and project management of the testing, and all of the above. Then it kind of forces most companies to face the
fact that their testing data is less than perfect. That's a euphemism sometimes for 'doesn't exist.'
So, must decisions be made in terms of which patients are likely to survive?
Absolutely. The analogy comes from the MASH units we saw on TV for a number of years. But what's missing in
the analogy is, obviously, if you look at the MASH unit, they don't do triage on the business function, they do
triage on the trauma. When patients come in, they don't ask their rank. They ask 'if I apply medical care, can
they survive?' That really is a technical analysis -- a medical analysis of the trauma someone has and how medical
care can help. Well, the problem is, in the year 2K, they're only taking the business view, typically. They're
looking at it in terms of will this function be critical to my business. And that's a good thing to do. But along
with that, you want to look at the disease or the trauma, in terms of risk analysis. You want to look at the algorithms
and the architecture and say: if I apply the remediation to this and test this, will this become reliable? In many cases, the answer is yes. And then in 20% of the cases, you have 80% of your problems. And we know that with risk analysis -- we know that with quality metrics. And if you're not sensitive to that, what happens is you end up spinning your wheels, because you'll end up attacking things you really can't fix.
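The 80/20 risk analysis can be illustrated with a toy ranking. In the sketch below, the risk score -- cyclomatic complexity weighted by date density -- is an assumed formula for illustration, not McCabe's published risk metric, and the module figures are invented.

```python
# Toy 80/20 triage: rank modules by an assumed risk score
# (cyclomatic complexity x fraction of paths hitting dates) so that
# remediation and testing effort lands on the riskiest 20% first.
modules = [  # (name, cyclomatic complexity, date-path density)
    ("rate_calc",  42, 0.10),
    ("report_gen", 15, 0.03),
    ("date_utils", 12, 0.95),
    ("ui_menu",     8, 0.00),
    ("batch_post", 55, 0.12),
]

ranked = sorted(modules, key=lambda m: m[1] * m[2], reverse=True)
for name, cc, dd in ranked:
    print(f"{name:12s} risk={cc * dd:6.2f}")
```

On these invented numbers, two of the five modules carry most of the risk -- the Pareto shape McCabe is describing.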
Is there a way of knowing that you are overtesting?
Yes. A lot of times people do overtest. If you realize, like I say, that the number of dates and date paths is between maybe 2% and 12%, and then you take on the syndrome where you test everything, maybe 88% of the time you're wasting your testing effort, because you're testing things you've gone through before or going through things that have no particular impact on the date.
There's spaghetti code and there's the year 2000 problem. If people go in to fix the problem and then decide
to fix the spaghetti, is that wise?
The short answer is 'no.' That's not what I'm recommending, because if you try to do that, you try to solve too many things at one time. We have the metrics that actually quantify the spaghetti code. [The software] will tell you specifically where it's all tangled up. When you realize that, you then want to look for a workaround instead of trying to change the code. If I try to change that code, I'll generate more errors than were already there.
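The metrics in question are presumably McCabe's own: cyclomatic complexity, M = E - N + 2P (edges, nodes, connected components), and essential complexity, which measures how much of a routine's control flow cannot be reduced to clean structured constructs. A minimal computation of M for one hypothetical, deliberately tangled routine:

```python
# Cyclomatic complexity M = E - N + 2P for a single routine (P = 1).
# The control-flow graph below is hypothetical and deliberately
# tangled: node "c" branches back into the middle of another branch.
def cyclomatic_complexity(cfg):
    """M = E - N + 2 for one connected routine."""
    nodes = len(cfg)
    edges = sum(len(succ) for succ in cfg.values())
    return edges - nodes + 2

tangled = {
    "a": ["b", "c"],
    "b": ["c", "d"],
    "c": ["b", "e"],  # jump back into the middle of another branch
    "d": ["e"],
    "e": [],
}
print(cyclomatic_complexity(tangled))  # 7 edges - 5 nodes + 2 = 4
```

High M flags routines with many decision paths; the tangles that resist reduction to if/while structure are what mark code as spaghetti and argue for a workaround rather than surgery.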
There is going to be a huge market after the year 2000 for reengineering, for crisis management, for contingency management, for dealing with legal issues -- and that's where a lot of this stuff is going to come in.
How did you approach rolling out a year 2000 offering?
First off, we looked at it and realized the testing issue was going to be huge. Then we spent some time really trying to come up with a way we could make a huge difference. And once we thought we had that, we first worked with a couple of clients to really get some kind of grounding on it, and then we organized a whole separate development group dedicated just to the year 2K. We set up a whole new sales and marketing group dedicated to the year 2K. So it's really a company within a company, with one focus, completely on the year 2000. And we're forming relationships with a number of service providers.
Switching gears, have you had time to reach any conclusions about Java?
My opinion is that it is a break from the classical model. It allows one to build applications in a very different way. And the stuff that people are building for which there already exist applets -- like a time function, a clock, like an interface, like access to a database -- will become more commercially viable. Some of the behavior of Java is better. The problem with C++ is that when you get to the deep semantics about pointers and so forth, it's ill-defined and inconsistent. What we believe about Java, at least at this point in time, is that the thing does hang together in terms of semantics and syntax. So when you do a detailed analysis, it looks like it's well-defined. I think it can have quite a positive impact on the way things are going.
What's very interesting is that after the year 2K, companies will come back to reality -- that is, they will start getting the resources back to face the Internet and Java and do new things. How do you bring these worlds together? How do you bring the legacy world back together with the Java world? That's where we think there's going to be an enormous amount of reengineering going on. And our core functionality is about doing that.