In-Depth

Extending the testing process

As testing responsibilities shift from software engineers to non-programmers, toolmakers scramble to simplify and integrate tools used for different stages of the testing process.

When testing tools first entered the marketplace 20 years ago, they did so to help developers produce quality applications. These first testing tools recorded keystrokes, played them back and compared screens. But these simple tests were difficult to maintain -- especially as applications progressed from one release to another -- and more powerful test-scripting tools soon emerged.
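
To see why those recorded tests aged so badly, consider a minimal sketch of the record/playback idea; the event names, coordinates and screen comparison below are invented for illustration, not taken from any actual tool.

```python
# A recorded test is just a fixed list of input events replayed verbatim,
# which is why a moved button or redesigned screen breaks it.
recording = [
    ("click", 120, 245),            # coordinates captured at record time
    ("keys", "admin"),
    ("click", 120, 300),
    ("expect_screen", "welcome.png"),
]

def playback(events, screen_matches):
    for event in events:
        if event[0] == "expect_screen":
            # Bitmap comparison fails if the UI changed at all between releases.
            assert screen_matches(event[1]), f"screen mismatch: {event[1]}"
        else:
            print("replaying", event)

# Stand-in comparator; a real tool would diff actual screenshots.
playback(recording, screen_matches=lambda name: True)
```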

But test-scripting tools required users to have extensive development skills. ''Ten years ago it was more likely a test engineer was also a developer by training,'' noted Greg Pope. In fact, before joining the University of California's Lawrence Livermore National Laboratory as a computer scientist, Pope was CEO of his own company. He started out in the 1970s, as many did, writing his own code and then changing hats to test it.

In an age when testers are not trained as software engineers or computer scientists, new testing tools are emerging that provide a pre-written test framework, significantly reducing the need to learn and use scripting languages. The purpose of these test frameworks, said Linda Hayes, CTO at Dallas-based Worksoft Inc., is threefold: to make automation accessible to testers who are not programmers, to simplify script development and to reduce the maintenance effort.
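
One common shape such frameworks take is keyword-driven testing. Here is a minimal sketch of the idea; the action names and the application under test are hypothetical, and a real framework would read such tables from spreadsheets and drive a live application rather than printing.

```python
# A tester authors rows of (keyword, arguments) -- often in a spreadsheet --
# and the framework dispatches each row to a pre-written action.
def open_app(name):
    print(f"launching {name}")

def enter_text(field, value):
    print(f"typing {value!r} into {field}")

def verify_text(field, expected):
    actual = expected  # a real framework would read this from the live UI
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

KEYWORDS = {
    "open": open_app,
    "type": enter_text,
    "verify": verify_text,
}

test_case = [
    ("open", "OrderEntry"),
    ("type", "customer_id", "C-1001"),
    ("verify", "status", "Ready"),
]

def run(steps):
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)    # dispatch each row to its action

run(test_case)
```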

Part of the reason test frameworks are gaining in popularity is the emphasis by American businesses to do more with less. ''There is tremendous pressure to meet deadlines that are very difficult, with less people than you'd like to have and with less budget than you'd like to have,'' said Lawrence Livermore's Pope.

In response, many software vendors, including Mercury Interactive Corp. and Rational Software Corp., are trying to hook different kinds of tools together so they can share information. ''We're starting to see cases where a lot of leverage has been developed by integrating a requirement management tool, a test execution tool and a defect-tracking tool,'' Pope explained.
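
A rough sketch of that leverage, with the three tools reduced to in-memory stand-ins (no vendor's actual API is used): a test that knows which requirement it verifies can file a defect automatically when it fails.

```python
# Requirement management, test execution and defect tracking sharing data.
requirements = {"REQ-42": "Orders over $500 require manager approval"}
defects = []

def file_defect(req_id, test_name, detail):
    defects.append({"requirement": req_id, "test": test_name, "detail": detail})

def run_test(test_name, req_id, check):
    try:
        check()
        print(f"PASS {test_name} (covers {req_id})")
    except AssertionError as exc:
        file_defect(req_id, test_name, str(exc))
        print(f"FAIL {test_name} -> defect filed against {req_id}")

def check_approval():
    approval_prompt_shown = False   # pretend this was read from the app
    assert approval_prompt_shown, "no approval prompt for a $600 order"

run_test("approval_required", "REQ-42", check_approval)
print(defects)
```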

''Testing tools can no longer simply stand alone, providing limited functionality,'' added Dave Kapelanski, director of ASQ global field technical support at Farmington Hills, Mich.-based Compuware Corp. ''What used to be somewhat effective testing means, in which navigation issues within an application could be trapped, now requires capabilities that allow for drilling down into the code base while ensuring that all groups involved within application development understand the entire project's magnitude.''

Maturing industry
''The emergence of test framework products is a sign that the software test automation industry is maturing,'' noted Worksoft's Hayes. Another sign is the growth of the industry, which currently comprises 45 testing tool vendors and 104 testing tools, according to research firm Gartner Inc., Stamford, Conn. Falling prices compared with the testing tools of old are a further indication of a maturing marketplace. The first testing tools, for example, were geared toward mainframes and, thus, were very expensive. That is no longer the case, as evidenced by the number of defect-reporting and automated testing tools available for less than $100.

Case in point: Spector, from Vero Beach, Fla.-based SpectorSoft Corp., was invented so individuals could see which sites their children or spouses were visiting on the Internet. But because of its low price, testers now use it to help them document the testing process. Spector runs in the background, takes a screenshot every second and records all of an individual's keystrokes. Other tools in the less-than-$100 category let individuals examine registry files or perform capture/playback, for example.
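
The screenshot half of that technique fits in a few lines of Python; this sketch assumes the Pillow imaging library (pip install Pillow) and a desktop session to capture, and is not Spector's actual implementation.

```python
import os
import time
from datetime import datetime

from PIL import ImageGrab

def document_session(seconds=5, folder="test-evidence"):
    """Save one screenshot per second as a crude record of a test session."""
    os.makedirs(folder, exist_ok=True)
    for i in range(seconds):
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        ImageGrab.grab().save(os.path.join(folder, f"{stamp}-{i}.png"))
        time.sleep(1)

if __name__ == "__main__":
    document_session()
```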

In the past, companies often purchased testing tools to meet an immediate need. Once a project was completed, however, the tools became shelfware. Lawrence Livermore's Pope claims that as many as ''50% to 75% of [the testing] tools that have been purchased are collecting dust.'' While some testing tools do still make their way to ''the shelf'' upon completion of a project, companies are becoming more aware of this trend and are looking at ways to alleviate the problem.

One way to do so is through integration, particularly between developers and testers. Pope urges organizations to make corporate-wide decisions about which tools to buy. If users think beyond their immediate needs, he said, they may wind up with tools that serve them better in the long run.

For testing tools to be truly effective, they must provide pertinent information to everyone on the project team. For example, requirement management tools should identify the test needs of every business function built into an application. Tools that assist developers in creating more efficient code should integrate with testing tools.
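
The kind of information such integration would share can be pictured as a simple traceability map; the requirement IDs and test names below are invented.

```python
# Which requirements does the test suite actually touch, and which go untested?
requirements = ["REQ-1", "REQ-2", "REQ-3"]

covers = {
    "test_login":    ["REQ-1"],
    "test_checkout": ["REQ-1", "REQ-2"],
}

def untested(reqs, cov):
    hit = {r for tested in cov.values() for r in tested}
    return [r for r in reqs if r not in hit]

print("requirements with no test:", untested(requirements, covers))
# prints: requirements with no test: ['REQ-3']
```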

Automatic execution debuggers should result in automatic notification of problematic code early in the development cycle. ''When these tools are integrated into the testing tool space, testers can include their analysis results directly from the test script utilized to verify the application,'' explained Compuware's Kapelanski. Performance analysis and code coverage are other areas that could benefit from the integration of development and testing tools.
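
As a small illustration of the coverage half, Python's standard-library trace module can stand in for a vendor coverage tool and flag the lines a test never exercised; the function under test is fabricated.

```python
import trace

def price_with_discount(total):
    if total > 500:              # a branch a weak test might never reach
        return total * 0.9
    return total

def test_small_order():
    assert price_with_discount(100) == 100

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(test_small_order)
# Writes per-line execution counts to a .cover file; lines that never ran
# are flagged, pointing testers at untested code early.
tracer.results().write_results(show_missing=True, coverdir=".")
```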

As integration occurs in more and more areas of technology, its emergence in the testing tools arena should be no exception. The problem users have to deal with at this stage of the game, however, is the inability to combine different testing tools from different vendors. Stated simply, said Lawrence Livermore's Pope, ''the tools won't talk to each other.''

Climbing stairs
One of the first steps to integration is a standardized testing terminology. For example, the term ''test plan'' could mean 12 different things to 12 different people. The tool vendors themselves are partly responsible for these ''foggy'' meanings because they tend to use whatever terminology sounds best in their marketing efforts. ''Tool vendors ought to pay more attention to what they call things,'' cautioned Pope.

''As a first step in getting tools to collaborate, you need standardization of the physical and logical structure of all documents, artifacts and so on,'' he continued. Once the industry can agree on testing terminology, the next step is to standardize the concepts.
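
What a standardized artifact might buy is easy to picture: if a ''test plan'' had one agreed structure, any cooperating tool could read and validate it. The field names in this toy validator are invented, not drawn from any real standard.

```python
REQUIRED_FIELDS = {"id", "objective", "test_cases", "environment"}

def validate_plan(plan):
    """Reject any test-plan artifact that lacks the agreed fields."""
    missing = REQUIRED_FIELDS - plan.keys()
    if missing:
        raise ValueError(f"test plan missing fields: {sorted(missing)}")
    return True

plan = {
    "id": "TP-7",
    "objective": "regression-test the billing module",
    "test_cases": ["TC-101", "TC-102"],
    "environment": "staging",
}
print(validate_plan(plan))   # True; a plan missing a field is rejected
```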

While vendors begin to integrate their own tools through product suites, the same needs to happen across suppliers. For cross-vendor integration to become a reality, alliances must form among tool makers, vendors and collaborating third parties such as outsourcing firms. This could start with a standard set of tests, such as those used to test billing systems, databases or customer relationship management (CRM) systems.

Pope likens this cross-vendor collaboration challenge to that of the PC industry. That market did not really take off until ''the hardware received some credibility from somebody like IBM, and then picked one operating system,'' he said. ''That's what has to happen in the test industry. We need a standard infrastructure environment to get developed.''

That environment has to start with a common infrastructure between tools. There is also a need for the testing world to merge back with the development world. ''If developers do use cases, we should be able to add to them and make them do useful test design,'' Pope said. ''Use cases don't go into enough detail for testers. There are a lot of things we could leverage if test automation was part of the whole development process, instead of this thing at the end.''
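
Pope's point can be made concrete: a use case's main flow and its extensions map naturally onto test cases. The use case content below is invented.

```python
# Each extension (exception path) of a use case becomes its own test case,
# alongside one happy-path test for the main flow.
use_case = {
    "name": "Withdraw cash",
    "main_flow": ["insert card", "enter PIN", "request amount", "dispense"],
    "extensions": {
        "wrong PIN": "prompt to retry, up to 3 times",
        "insufficient funds": "show error, dispense nothing",
    },
}

def derive_tests(uc):
    tests = [f"{uc['name']}: happy path ({' -> '.join(uc['main_flow'])})"]
    tests += [f"{uc['name']}: {condition} -> expect: {outcome}"
              for condition, outcome in uc["extensions"].items()]
    return tests

for test in derive_tests(use_case):
    print(test)
```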

Baby steps
Before true integration can be achieved, baby steps must be taken: designing for testability, organizing test case libraries for maintainability and automating test evaluation. ''The whole idea is that testers wish they had access at the design stage -- even the choice of programming language,'' noted Ross Collard, president of Collard & Company, a Manhattan training and consulting firm that specializes in software testing.

Design for testability involves choosing a programming language that will work with a testing tool. It means ''trying to get testers involved early enough in the design phase,'' Collard said. Organizing test case libraries for maintainability minimizes upkeep by packaging highly reusable, highly sharable test components and cataloging them in much the same way other development components are cataloged. In addition, ''the actual process of test evaluation is still highly manual,'' explained Collard. Testing tools currently run test cases, say, over a weekend; when testers return to work on Monday, they are bombarded with thousands of test results. So there is a definite need for this process to become more automated.
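
The Monday-morning problem in miniature: rather than eyeballing thousands of results, summarize them and group the failures by cause. The result records here are fabricated; a real script would parse a tool's log.

```python
from collections import Counter

results = [
    {"test": "tc_001", "status": "pass"},
    {"test": "tc_002", "status": "fail", "reason": "timeout"},
    {"test": "tc_003", "status": "fail", "reason": "timeout"},
    {"test": "tc_004", "status": "pass"},
]

tally = Counter(r["status"] for r in results)
print(f"{tally['pass']} passed, {tally['fail']} failed")

# Group failures by reason so one root cause reads as one line, not hundreds.
by_reason = Counter(r["reason"] for r in results if r["status"] == "fail")
for reason, count in by_reason.most_common():
    print(f"  {count} x {reason}")
```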

Similarly, there is a need for automation in deciding what should be tested. ''The big difficulty in testing is deciding what to test,'' noted Wayne Middleton, CEO of Software Quality Engineering, Orange Park, Fla. ''There is an infinite number of tests you might run.'' To reduce that infinite number, developers need to design software with an inherent knowledge base of the software built into it. That way software requirements will drive test cases and procedures. ''Now people have to make those intellectual decisions most of the time,'' Middleton said.
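
A toy version of letting requirements drive test selection: given which requirements changed in a release, run only the tests that trace to them. The traceability map below is assumed, not derived.

```python
# Which tests trace to the requirements that changed in this release?
trace_map = {
    "REQ-BILL-1": ["tc_invoice_total", "tc_tax_rounding"],
    "REQ-BILL-2": ["tc_late_fee"],
    "REQ-CRM-1":  ["tc_contact_merge"],
}

changed = {"REQ-BILL-2"}

to_run = sorted(test for req in changed for test in trace_map[req])
print("selected tests:", to_run)    # -> ['tc_late_fee']
```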

Selecting the right tool
In the meantime, how can an organization choose testing tools that will be successful both now and in the future? Worksoft's Hayes suggests buying test architectures instead of building them. ''What people want from test automation,'' she noted, ''is test cases, not testware.''

Lawrence Livermore's Pope suggests people look at a company and ask the following: How well is it capitalized? How long has it been in business? Who makes up the management team? How well do they understand testing? The success of Mercury Interactive, he asserts, was due in large part to the reputation of its founder, Aryeh Finegold, who was well known in the venture capital industry.

You should also find out where tool vendors stand on integrating tools and infrastructure, and what their long-range plans are for making their tools work together. This will be a key capability as the market matures. How easy or difficult will it be to integrate a particular firm's tool with the tools you already have or with others on the market?

Finally, what are the long-term goals of the testing tool company under consideration? Will it be around in five years or 10 years? How will it keep improving its tool(s)?

The next decade will be crucial as the industry attempts to standardize. By decade's end, individuals should be able to buy six different testing tools from six different vendors and have them all work together. Sound crazy? Pope believes it will happen. ''It has to happen or the industry will fail,'' he said.

But for now, we are on the right track. The bottom line is to realize that ''proper and complete testing requires more than just the few individuals within the quality assurance group,'' noted Compuware's Kapelanski. ''It is an effort that everyone involved within the application development life cycle must become committed to.''

Lawrence Livermore's Pope summed it up best. The human brain was the best testing tool in the past, he noted. In the future, he said, ''the best tool will still be the human brain.''

Individuals contacted for this story were presenters at the Sept. 2002 ''Software Test Automation Conference & Expo.'' To learn more, go to the conference page at http://www.sqe.com/testautomation/.