In-Depth

Testing key to component quality

Testing components is similar to testing ''regular code.'' But there are important differences as well, starting with how much more critical testing becomes when you develop your own components or build someone else's into your application.

There are also more stages of the life cycle during which major testing is needed, and more uncertainties or potential weak links to plan around and test for. Further, developers need to become more involved with testing than ever before, and are required to stay involved for longer periods, experts say. Add Web services components to the mix, and the ante is upped even higher. (See the related story ''Testing Web services: Even more complex.'')

The biggest difference, most observers agree, is that components require a higher degree of quality. ''A traditional application is 'good enough' 85% of the time,'' said Theresa Lanowitz, research director at Gartner Inc. in Stamford, Conn. ''But if you're using multiple components, the system is only as good as its weakest link. They have to be tested to a flawless state,'' she explained. ''The notion of 'good enough' software ceases to exist.''

Enzo Micali, chief technology officer at 1-800-flowers.com in Westbury, N.Y., agrees. ''The more shared the code is, the more dependent the rest of the environment is on it. So the performance of that particular component is critical.'' His shop is creating Java components that will serve as the middle tier of the company's computing architecture and hold the business logic used throughout all corporate applications. The company is doing unit, functional and stress/load testing of its components using multiple tools.

Testing is key because components can be reused in ways not always foreseen by developers when they initially sit down to code. In addition, the more pervasive the component -- that is, the more analysts and applications that touch it -- the higher the quality required.

Given this interaction with other components and pieces of a distributed system, testing must become more fundamental to the component development process. It is also something to plan actively during the component's concept phase. Some believe this requires an agile development approach, whether it is Extreme Programming or something else, that calls for a ''test first, develop second'' life cycle.
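
To make ''test first'' concrete, consider a minimal sketch in JUnit (the PriceCalculator class, its method and the discount rule are all invented for illustration). The test is written before the component exists, fails until the code is in place, and then guards every later change:

    import junit.framework.TestCase;

    // Step 1: written first, before PriceCalculator exists; the test fails
    // (or does not even compile) until the component is implemented.
    public class PriceCalculatorTest extends TestCase {
        public void testDiscountIsAppliedToOrderTotal() {
            PriceCalculator calc = new PriceCalculator();
            // A 10% discount on a $200.00 order should yield $180.00.
            assertEquals(180.00, calc.totalWithDiscount(200.00, 0.10), 0.001);
        }
    }

    // Step 2: the simplest implementation that makes the test pass.
    class PriceCalculator {
        double totalWithDiscount(double total, double rate) {
            return total * (1.0 - rate);
        }
    }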

''Buggy'' software costs the U.S. economy almost $60 billion each year, according to a recent study by the National Institute of Standards and Technology. Improvements in testing could reduce this cost by a third, the study said.

Getting to the highest quality components will require a shift in thinking on the part of developers, IT executives and their peers, most observers agree. ''Each organization will have to define a process that works for them -- I don't think we can pinpoint a specific methodology,'' said Gartner's Lanowitz. ''But it will require more cross-functional communication and meaningful reports.''

Sam Guggenheimer, senior director of technology at Rational Software Corp. in Cupertino, Calif., said most of the heavy-duty work in component testing has been happening in the embedded systems world, at places like the Federal Aviation Administration (FAA). ''There's a fairly evolved set of practices there,'' said Guggenheimer, ''the most rigorous of which is in a document the FAA uses.''

In general, a traditional application is ''fairly monolithic,'' Guggenheimer said, and ''you have a fairly small degree of variation in the kinds of things you expect to see and have happen.'' But in the component world, there are ''many possible points of failure,'' including the application server, back end and user interface, among others. In a traditional application, there are ''fewer moving parts'' than in the component world, he added.

Thus, the kind of testing needed in the component world is much more thorough than the testing done in the traditional mainframe or client/server worlds. There, one group controlled all the code, and there was a high degree of certainty about what was in the code, how it worked, where its weaknesses were and whether the company could ''live'' with those weaknesses until the next software version came along.

What testing is needed?
Rudolf Hauber, principal consultant at Koelsch & Altmann, an IT consultancy in Munich, Germany, has worked on numerous projects in which components were used for clients in the military, insurance, financial services and other fields. He said the kinds of testing components require differ from project to project. For instance, a real-time system might need testing for timing constraints to make sure a component has the sub-second performance it needs, while another project might require more functional testing.
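
A timing-constraint test can be as simple as a stopwatch around the call. In the hedged sketch below, the QuoteEngine component and its 500-millisecond budget are invented for illustration; wall-clock timing in a unit test is coarse, but it catches gross performance regressions early:

    import junit.framework.TestCase;

    public class QuoteEngineTimingTest extends TestCase {
        public void testQuoteReturnsWithinBudget() {
            long start = System.currentTimeMillis();
            new QuoteEngine().quote("SKU-100");
            long elapsed = System.currentTimeMillis() - start;
            // Fail loudly if the call blows its illustrative 500 ms budget.
            assertTrue("took " + elapsed + " ms", elapsed < 500);
        }
    }

    class QuoteEngine {
        double quote(String sku) { return 9.99; } // stand-in computation
    }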

That said, Hauber did offer some advice for all kinds of component projects. ''It's very important to have a testing strategy -- to know up-front how and what you want to test,'' he noted. Next, test at as low a level as possible -- class, component, subsystem -- whatever you can. ''There's less effort needed to test on a lower level than on a higher one,'' he explained. Finally, use a tool to automate testing as much as possible; this way you do not have to hand-code the tests themselves, he said.
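
The last two points fit together naturally in a framework such as JUnit. The sketch below is illustrative only -- the OrderValidator class is invented -- but it shows a cheap class-level test plus the suite mechanism that lets one automated run roll tests up from class to component to subsystem:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    // Lowest level first: a class-level test is the cheapest place to
    // catch a defect.
    public class OrderValidatorTest extends TestCase {
        public void testRejectsNegativeQuantity() {
            assertFalse(new OrderValidator().isValid(-1));
        }

        // Suites aggregate class-level tests, so one automated run can
        // grow to cover component and subsystem levels as well.
        public static Test suite() {
            TestSuite suite = new TestSuite("Component tests");
            suite.addTestSuite(OrderValidatorTest.class);
            // Further test classes would be added here as the component grows.
            return suite;
        }
    }

    class OrderValidator {
        boolean isValid(int quantity) { return quantity > 0; }
    }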

Hauber's firm automated its process with a tool it developed in-house. Called TestExpert, the tool is based on Rational Rose and was ''developed over a year, during the course of many projects,'' he said. Essentially, it builds a model of the system and then generates code to test each class, each operation in the interface and then the entire component.

In addition, components require the usual ''black-box,'' ''white-box'' or functional testing done in traditional applications. In black-box testing, components are sent a message to see how they respond and what functions they perform. It is ''black-box'' in the sense that testers see only the component's inputs and outputs, not the internal workings that produce them. Each function and feature is tested to make sure it works and does what it is expected to do. This is behavioral testing.

In white-box testing, one uses internal knowledge of the software to help define the tests that will be used. This is more structural or architectural testing, also known as glass-box or clear-box testing.
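
The difference is easiest to see side by side. In this illustrative sketch (the Discounter component and its threshold rule are invented), the black-box test sends a message and checks only the reply, while the white-box test knows the implementation branches at $1,000 and deliberately probes both sides of that internal boundary:

    import junit.framework.TestCase;

    public class DiscounterTest extends TestCase {

        // Black-box: send a message, check only the observable reply.
        public void testLargeOrderGetsSomeDiscount() {
            assertTrue(new Discounter().discountFor(5000.00) > 0.0);
        }

        // White-box: we can see the implementation branches at 1000.00,
        // so the test probes both sides of that internal boundary.
        public void testThresholdBoundary() {
            assertEquals(0.00, new Discounter().discountFor(999.99), 0.001);
            assertEquals(0.05, new Discounter().discountFor(1000.00), 0.001);
        }
    }

    class Discounter {
        double discountFor(double orderTotal) {
            return orderTotal >= 1000.00 ? 0.05 : 0.00; // internal branch
        }
    }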

With components, some observers maintain, there is more black-box testing required to make sure all the components work along multiple paths and with each other. There is also a need to combine different types of testing in new ways.

Rational's Guggenheimer, for example, suggested combining runtime analysis with black-box testing. This will show, he said, ''which lines of code have been covered, what functions have been tested, what the profile of the response is and whether there are specific bottlenecks that might be resolved.'' Guggenheimer also talked about ''gray-box'' testing, a process that combines white-box and black-box testing in new ways.
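
Even without specialized runtime-analysis tooling -- and this is a hand approximation of the gray-box idea, not what Rational's products do -- the combination can be sketched by driving the component strictly through its public interface, then checking internal evidence it exposes for diagnostics. The Catalog component below is invented for illustration:

    import junit.framework.TestCase;
    import java.util.HashMap;
    import java.util.Map;

    public class CatalogGrayBoxTest extends TestCase {
        public void testRepeatLookupIsServedFromCache() {
            Catalog catalog = new Catalog();
            catalog.findProduct("SKU-100"); // black-box: first call
            catalog.findProduct("SKU-100"); // black-box: repeat call
            // White-box evidence gathered at runtime: only the first
            // call missed the internal cache.
            assertEquals(1, catalog.getCacheMissCount());
        }
    }

    class Catalog {
        private final Map cache = new HashMap();
        private int cacheMisses = 0;

        Object findProduct(String sku) {
            if (!cache.containsKey(sku)) {
                cacheMisses++;
                cache.put(sku, "product record for " + sku); // stand-in load
            }
            return cache.get(sku);
        }

        int getCacheMissCount() { return cacheMisses; }
    }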

However, most of today's tools ''don't think about that combined usage,'' said Guggenheimer. ''Most are focused on either the black-box or white-box side.'' Another key set of features for testing includes the ability to automatically generate test harnesses for code that has never been exercised, and to rank the components according to risk, so ''I can know where I need to focus,'' he added.

Class I.Q., a testing vendor in Wilmington, Del., defines a four-tier process for components used to help integrate corporate applications: process testing, to test workflow processes independently of the apps; rule or middleware testing, to test the applications being integrated; component protocol or message testing, to make sure the component works through the interface (commonly known as ''black-box'' testing); and integrated component simulation, to simulate a component's response to a specific message.
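
The last tier -- simulating a component's response to a specific message -- is essentially what a hand-rolled stub does. In the hedged sketch below, the InventoryService interface, the OrderProcessor component and the canned response are all invented for illustration:

    import junit.framework.TestCase;

    public class OrderProcessorSimulationTest extends TestCase {

        // Simulated component: returns a canned response to one
        // specific message.
        private static class StubInventoryService implements InventoryService {
            public int unitsInStock(String sku) {
                return "SKU-100".equals(sku) ? 3 : 0;
            }
        }

        public void testOrderRejectedWhenStockInsufficient() {
            OrderProcessor processor =
                new OrderProcessor(new StubInventoryService());
            assertFalse(processor.accept("SKU-100", 5)); // only 3 simulated units
        }
    }

    interface InventoryService {
        int unitsInStock(String sku);
    }

    class OrderProcessor {
        private final InventoryService inventory;
        OrderProcessor(InventoryService inventory) { this.inventory = inventory; }
        boolean accept(String sku, int quantity) {
            return inventory.unitsInStock(sku) >= quantity;
        }
    }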

The general idea is to find any problems or bugs as early as possible, before the component is integrated with others and it becomes more difficult -- if not outright impossible -- to track down the source of a bug. Then again, some bugs occur only in combination with other components -- meaning each potential path the component might take needs to be tested. Tests also need to be repeatable across multiple components and paths, and the results must be tracked in reports that make sense.

Today's tools
A bevy of testing tools is already on the market (see the related story ''Representative testing tools''), and more are on the way. Compuware Corp.'s Remote Agent, for example, looks at different levels of an application to find out where problems are. ''Where the bug shows up may have nothing to do with where the error actually occurred,'' said Peter Varhol, product manager for the Farmington Hills, Mich., vendor's DevPartner Studio family. ''We test components on two levels: as standalone pieces and then as part of an integrated system.''

The software looks at what happens in each and every component of an application, and then correlates events across components, Varhol claims. ''We follow a transaction from one component to another, from beginning to end. If an error shows up, we can go into our distributed analysis feature and see exactly what event generated that error and in what component,'' he said.
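
The underlying technique -- correlating events across components -- can be sketched without any vendor tooling; the do-it-yourself version below is not Compuware's implementation, just an illustration of the idea. Each component logs the same transaction ID, so an error can be traced back to the component and event that produced it:

    import java.util.logging.Logger;

    public class CorrelationDemo {
        private static final Logger LOG = Logger.getLogger("demo");

        static void componentA(String txId) {
            LOG.info("[tx " + txId + "] componentA: order received");
            componentB(txId); // the ID travels with the transaction
        }

        static void componentB(String txId) {
            LOG.info("[tx " + txId + "] componentB: inventory checked");
        }

        public static void main(String[] args) {
            // One ID per transaction; every log line carries it.
            componentA("TX-" + System.currentTimeMillis());
        }
    }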

At Mercury Interactive Corp. in Sunnyvale, Calif., the idea is to make ''QA testing more accessible to earlier phases in the development life cycle,'' said Simon Berman, director of product marketing. ''We wanted to customize testing tools for server-side EJBs and make it available within the native IDE.'' Borland's JBuilder and BEA Systems' WebGain platforms now include those add-ins. And whatever tests are created by developers in those platforms can also be used by the QA staff via Mercury's LoadRunner, Berman explained.

The goal is to create a connection for component testing between the development and QA sides of the house, so that each group can use their preferred tools, but also share tests and test results. ''We've found a way of providing better cooperation, so testing can be done earlier and the assets can be shared,'' Berman said. The IDE add-ins are available free of charge, so developers can ''create all the unit and functional tests they want,'' he said.

The developer's role
Still, ''testing frameworks are no substitute for a good testing strategy,'' said Adam Wallace, vice president of research and development at Flashline Inc., a Cleveland-based vendor that sells component management software. ''You have to make sure you're testing for the things you want to,'' he explained.

Developers and testers need to work together to define what needs to be tested and when in the process that testing should happen. Developers also need to do more testing than perhaps they once did. ''The responsibilities of both the developer and tester need to change,'' said Rational's Guggenheimer. ''The developer's responsibility is to guarantee a level of reliability for this component. The tester then needs to creatively push the test, not just to look for clearly identified defects, but [for] general fitness for use.''

Neither side can simply ''test through straight-line, happy-day scenarios,'' said Guggenheimer; they have to ''pick representative cases that are sufficiently complex to exercise the code and illustrate the possible risks.''
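
In practice, that means deliberately feeding the component boundary, malformed and empty inputs, not just the values it was designed around. The AddressBook component below is invented for illustration:

    import junit.framework.TestCase;
    import java.util.HashMap;
    import java.util.Map;

    public class AddressBookEdgeCaseTest extends TestCase {
        public void testRejectsNullName() {
            try {
                new AddressBook().lookup(null);
                fail("lookup(null) should have been rejected");
            } catch (IllegalArgumentException expected) {
                // The component refused bad input instead of failing later.
            }
        }

        public void testEmptyBookReturnsNoMatchRatherThanError() {
            assertNull(new AddressBook().lookup("nobody"));
        }
    }

    class AddressBook {
        private final Map entries = new HashMap();

        String lookup(String name) {
            if (name == null) throw new IllegalArgumentException("name is null");
            return (String) entries.get(name);
        }
    }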

Similarly, ''we find developers involved right until the stages of integration testing, where you bring together all the components and make sure they work reliably,'' said Compuware Corp.'s Varhol. ''The application developer's job isn't done until all the components come together.''

Related
Testing key to component quality -- ADT, Oct. 2002
Testing Web services: Even more complex -- ADT, Oct. 2002
Representative testing tools -- ADT, Oct. 2002
Create your own test for Java/EJB code -- ADT, Oct. 2002

Individuals contacted for this story were presenters at the Sept. 2002 ''Software Test Automation Conference & Expo.'' To learn more, go to the conference page at http://www.sqe.com/testautomation/.