In-Depth

When objects collide, you must rethink your test strategies

There is an upheaval taking place in software development. Instead of hand-crafted, proprietary systems residing in a protected environment, applications today are virtual galaxies of software objects strewn across platforms. This shift is creating aftershocks in software quality, especially in the area of integration testing.

At the beginning of the software chain are objects, which may be developed or purchased. Next are middleware applications, which provide components to vendors of end-user products and which are themselves composed of distributed objects and subsystems. Further downstream are applications sold to an end user, which are likely built with multiple development languages and third-party components from different vendors, and which execute across multiple tiers. The final level of software development -- at least for now -- is the enterprise, where applications are integrated into complete operating environments.

Each of these levels poses unique challenges and is spawning new approaches and tools. A look at the most vivid trend today shows that integration testing is expanding not only in scope and complexity, but also across departments and company lines. Integrated testing joins all the participants up and down the software chain to leverage -- rather than struggle against -- diversity. Integration testing may save the day because it embodies the age-old principle that earlier is better when it comes to testing: each contributor is responsible not only for producing a quality product, but also for supporting everyone down the line.

Key elements of these integration testing methods include more sophisticated use of test hooks and monitors, better simulation of interfaces, and creative partnerships between development and test team members as well as between vendors and customers.

On ramp

Consider the object. At a technical level, it is a code unit that responds to a defined set of calls; the form of the calls may be determined by the standard to which it adheres, such as Corba or DCOM. In functional terms, it may provide any of a wide range of services: communications, device support, data services, even spell checking. For integration test purposes, the object's response to these calls from other objects or systems must be verified.
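In concrete terms, the call set is just an interface, and the verification drives each call and checks the response. A minimal C++ sketch of the idea follows; the SpellService interface and its calls are invented here for illustration, not drawn from any particular standard.

    #include <cassert>
    #include <set>
    #include <string>

    // A service object as seen by its callers: nothing but the call set.
    class SpellService {
    public:
        virtual ~SpellService() = default;
        virtual bool connect(const std::string& dictionary) = 0;
        virtual bool isCorrect(const std::string& word) = 0;
        virtual void disconnect() = 0;
    };

    // A stand-in implementation so the sketch runs end to end.
    class TinySpell : public SpellService {
        std::set<std::string> words{"testing", "object"};
    public:
        bool connect(const std::string&) override { return true; }
        bool isCorrect(const std::string& w) override { return words.count(w) > 0; }
        void disconnect() override {}
    };

    // Drive every call in the defined set and verify the object's responses.
    void verifyCallSet(SpellService& svc) {
        assert(svc.connect("en_US"));        // valid setup must be accepted
        assert(svc.isCorrect("testing"));    // known-good input succeeds
        assert(!svc.isCorrect("tessting"));  // known-bad input is rejected
        svc.disconnect();                    // teardown must not fault
    }

    int main() { TinySpell s; verifyCallSet(s); }

Scaled up to hundreds of calls and real transports, this is the heart of object-level integration testing: every call in the defined set gets driven, and every response gets checked.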

It is becoming easier to develop objects, but managing the environment they run in is becoming more complex. Application boundaries are blurred, both internally and externally. Creating and managing applications composed of objects has become the focus of a new class of development tools. ObjecTime Limited of Kanata, Ontario, Canada, is one of the companies delivering capabilities for developing large-scale, complex systems. ObjecTime is a spin-off from Brampton, Ontario, Canada-based Nortel (Northern Telecom), a telecommunications firm. That industry is a prime example of complexity, encompassing business applications as well as embedded data communications, switching and other services acquired from an assortment of hardware and software vendors.

"With ObjecTime Developer, we provide a visual modeling language that allows components to be developed and assembled into applications," described Steve Saunders, director of Product Marketing Management at ObjecTime. "This model is used to generate C or C++ code." Even externally developed objects can be included, using wrappers that convert them into "active objects" or "actors" that ObjecTime can incorporate. This model makes it easier to see the interrelationships among objects and the overall structure of the application.

"Testing support is built right in," said Saunders. "The generated application code can be run at any time during development, [and] instrumented to observe messaging among objects as well as monitor their internal states." ObjecTime also offers unique security aspects for components developed with it. "Black box packaging allows you to lock down a component and control access to it," explained Saunders. "This allows downstream vendors to inherit your objects and refine them without corrupting the core or revealing the implementation." This capability protects the intellectual property of object vendors who want to make their functionality available without revealing how they accomplished it.

The incorporation of test hooks, monitors and security is a positive indicator that quality is being designed into objects early for the benefit of downstream developers.

Middle of the road

The object developer will probably never know how object software will be used downstream, or in what environment it will be executed. In fact, the object vendor's customer is often itself a provider of components for yet another customer's product. An emerging class of higher-level components is positioned as middleware, software that is incorporated with other technology into end-user applications.

Tandem Telecom is a case in point. This Plano, Texas-based division of Tandem Computers in Cupertino, Calif., provides a software platform to telecommunications providers, who in turn create custom services for their end users, such as home location registry and local number portability. Tandem Telecom's software platform is an engine with which its customers can write applications. The engine is itself configurable, and customers can add options based on their needs.

"Our products provide capabilities in the form of API calls," explained Edna Clemens, Manager of Development Services at Tandem Telecom. As part of the integration test process, an application has been written to exercise each call, verify the parameters and return flags. But, as Clemens pointed out, "although we know what each call should do, we have no way of knowing what the customer will do with it." No two customers implement the same set of functionality the same way, and of course, the customer environment is itself a moving target.

As part of an ongoing process improvement program, integration testing is becoming more formalized at Tandem Telecom. The development group has devoted resources to establishing an integration environment and process and documenting a formal test plan. This phase focuses on testing all the different flavors available through varying configuration options and is the final stage before system test.

Delivering a solid build has alleviated some of the burden previously placed on system test, and the freed resources have been reinvested in further refining the system test function. Now, instead of being totally focused on test execution, system test has been divided into test development and test engineering. "Test development addresses new features and functions by extending our coverage," said Janet Brown, the test manager at Tandem Telecom. "Test engineering is responsible for actually executing tests and maintaining the test environment." This division of responsibility helps ensure that the schedule pressure of test execution does not detract from keeping pace with product enhancements.

Testing middleware requires exercising internal interfaces as well as external ones. The middleware itself comprises several subsystems, and the messaging between them must be verified. The API sets delivered to customers are also composed of building blocks. "We provide an engine that executes SIBs (service instruction blocks). Each SIB provides a certain set of functionality, such as support for a particular switch or protocol," explained Clemens. Some of these SIBs are internally developed, some are outsourced and some are licensed.
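In outline, such an engine is a registry of named blocks plus a dispatcher. The C++ sketch below is a drastic simplification with invented block names; it shows only the structural idea of an engine executing interchangeable blocks, whatever their origin.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // A service instruction block, reduced to a named unit of functionality.
    using Sib = std::function<void(const std::string&)>;

    // The engine: holds registered blocks and executes a service script.
    class Engine {
        std::map<std::string, Sib> blocks;
    public:
        void install(const std::string& name, Sib s) { blocks[name] = std::move(s); }
        void run(const std::vector<std::pair<std::string, std::string>>& script) {
            for (const auto& [name, arg] : script)
                blocks.at(name)(arg);  // throws if a script names a missing block
        }
    };

    int main() {
        Engine e;
        // Blocks may be internal, outsourced or licensed; the engine cannot tell.
        e.install("translate", [](const std::string& n) { std::cout << "translate " << n << "\n"; });
        e.install("route",     [](const std::string& n) { std::cout << "route "     << n << "\n"; });
        e.run({{"translate", "555-0100"}, {"route", "switch-3"}});
    }

Because the engine cannot tell an internal block from an outsourced or licensed one, each block's conformance to the engine's interface is exactly what integration testing must verify.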

External interfaces, such as call traffic, must be simulated or tested in conjunction with the customer's own facilities. The variety and complexity of the customer's environment -- different platforms, switches, protocols -- and the fact that each implementation of the API engine is unique make comprehensive test coverage by Tandem Telecom a virtual impossibility. Yet, in the telecommunications market, quality is essential. Clearly, cooperation is the best strategy.
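The simulation option amounts to building a stand-in for the customer's equipment. A toy version in C++, with an invented three-message vocabulary, looks like this:

    #include <iostream>
    #include <string>

    // A simulated peer: stands in for the customer's switch so the
    // external interface can be exercised without live call traffic.
    class FakeSwitch {
    public:
        std::string request(const std::string& msg) {
            if (msg == "SETUP")   return "CONNECT";   // happy path
            if (msg == "RELEASE") return "RELEASED";
            return "ERROR";                           // anything unexpected
        }
    };

    int main() {
        FakeSwitch sw;
        // The system under test would sit on the other side of these exchanges.
        std::cout << "SETUP -> "   << sw.request("SETUP")   << "\n";
        std::cout << "RELEASE -> " << sw.request("RELEASE") << "\n";
        std::cout << "PING -> "    << sw.request("PING")    << "\n";  // fault case
    }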

"We have recently established a Partner Lab with one of our key customers," said Clemens. "This enables us to test our system with the customer's application and environment." The shared lab has become increasingly popular as a means for accomplishing the transfer and acceptance of the software between the two companies, and is indicative of the cooperative trend that is taking shape. Essentially this approach recognizes that development spans companies, and so should integration testing.

Software freeway

At the next stop downstream are end-user application vendors. Consider Oacis Healthcare Systems, a Greenbrae, Calif., company that markets application software to the healthcare industry. As with so many applications houses today, complexity is a given. Its products -- eight front-end applications that communicate with three back-end processes -- integrate capabilities from a full dozen vendors, including two development languages (Visual Basic and Visual C++), two development libraries, a database, middleware and server services, all supporting four different operating systems and executing in a multitier configuration ... not to mention the hardware variations.

Oacis has a separate, formal integration test group. Herb Davis, the manager of the Integration Team, has implemented both formal and informal processes supported by a training program for team members. "In our environment, integration testing is the gatekeeper between development and the product line," said Davis. "The purpose of the Integration Team is to identify and isolate interactions among the components in the build, confirm compatibility and test for cross-application impact."

Developers must submit a formal Oacis Integration/Change form, detailing changes or additions being introduced into the product. These changes are then introduced in an orderly manner; the new build is first verified, then staged for acceptance and regression testing. "This approach has greatly increased the quality of our products," noted Herb Isenberg, quality assurance architect at Oacis. "It allows us to test earlier, and it has also forged a cooperative relationship between testing and development."

This cooperation reflects an increased understanding that everyone is on the same side, trying to produce a quality system, and that it is easier if everyone understands their role. Although formality is necessary to define the process, "it's also important to stay flexible," said Isenberg. "With this many components, we may have to react quickly on an exception basis." This mindset is critical to positioning integration test as an accelerator, not a roadblock.

But creating and managing an integration test environment is far easier decided than done. "Aside from maintaining communication and compatibility among all the development groups and coordinating schedules, the biggest issue involves environments," Isenberg pointed out. "We have to maintain three for each product -- the old version, the current one and the new one -- and keep multiple sets as we migrate to new environments." This requires a major commitment of hardware and software and a corresponding dedication of resources to manage and maintain it.

Not only is there an impressive array of components and subsystems, but they are further multiplied by the list of Oacis-supported customer configurations: workstations, servers, operating systems, third-party components such as databases and report writers ... the list goes on. Each component is versioned, and the minimum and maximum configurations are documented.

As already noted, the expansion of the integration test function does not happen in a vacuum. Process definition is critical to assuring that everyone understands their link in the chain. Tina Brooks, director of process improvement at Oacis, is defining the complete development and test cycle. Integration Test plans to expand its scope up to Alpha Acceptance, which will include the high-level system testing now performed by QA. This will in turn free QA to invest in expanding its coverage.

In an environment as complex as Oacis', uncovering -- let alone diagnosing -- issues that are introduced during integration is itself an adventure. Typical debugging and diagnostic aids are language-centric, provided with the compiler or development kit. How can you effectively test components written in different languages and objects supplied by other companies?

It is for this type of testing that products like BoundsChecker from NuMega Technologies are made. "Under the hood, all objects are just forms of dynamically linked libraries (DLLs), except with different extensions," said Doug Carrier, director of product marketing at NuMega of Nashua, N.H. "Our products are able to intercept and monitor events between objects at the system level. Using a runtime injection technique, we can see all API calls that the code makes outside itself." Runtime injection intercepts and logs calls made between objects.
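The interception idea can be illustrated without any real injection machinery. In the C++ sketch below, an explicit function pointer stands in for the patched entry point; a real injector rewires the call linkage inside the loaded image, but the logging shim plays the same role.

    #include <iostream>

    // The component boundary: some library-resident service.
    int realExport(int arg) { return arg * 2; }

    // An interposed logging shim: callers are redirected here, the shim
    // records the call and forwards it, imitating what runtime injection
    // does at the system level.
    int loggedExport(int arg) {
        std::cout << "[intercept] realExport(" << arg << ")";
        int result = realExport(arg);
        std::cout << " -> " << result << "\n";
        return result;
    }

    // A real injector would patch the entry point in the loaded image;
    // here the indirection is explicit.
    int (*exportPtr)(int) = loggedExport;

    int main() {
        exportPtr(21);  // every cross-component call leaves a log entry
    }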

These calls form the conversation among components. Although there are standards for component calls, such as Corba and COM/DCOM, each component has a unique set of parameters and flags, all with different constraints. Simply testing all valid and invalid ranges and types of calls to a single component may be fairly extensive. For this, NuMega has developed a utility called API Gen, which is part of the BoundsChecker offering.

"API Gen can generate an API test framework for component calls, creating a custom API checking capability," said Carrier. "Complete log files capture all calls and provide diagnostic information for debugging." This type of "on-board" test facility reinforces the concept that testing is an activity that spans all levels.

In the future, NuMega promises enhanced support for tracing issues all the way into the source code of the component. Current support for C and C++ is being extended to Visual Basic as part of the Fail Safe product line. "Most development language debuggers miss system level calls," Carrier said. "Our products execute at the protected system level and can monitor events at any desired level of granularity."

Crossroads

It is important to re-evaluate integration testing: who does it, as well as when and how it is done. Companies that direct their focus to this phase, and thus deliver a higher quality system to system test, enjoy increased quality and faster time to market. Issues are either prevented or discovered sooner, requiring less rework and leaving more time to invest in wider test coverage.

At their best, new development and test technologies promise a new age of "cooperative quality." Development and test groups are integrating their processes, leveraging their respective skills in a combined effort instead of the old throw-it-over-the-wall approach. Vendors and customers are joining forces: vendors are building test capabilities into their products and supporting their customers' application testing, while customers are teaming up with vendors to test their unique configurations.

Perhaps the most important advice is to pay attention. The rules of development are changing, and quickly. Test processes that worked before are simply no longer valid, and organizations that do not acknowledge and deal with these changes will find themselves mired in complexity, strangled by it instead of empowered by it.

--Linda Hayes


Test tools target 'suite' spots

The prospect of testing today's multitiered, multicomponent environments has made clear that there are gaping holes in test environments. The route to a truly integrated test environment remains to be traveled. After a fashion, a host of test-industry mergers has provided a step toward more integrated tools.

Rational Software Corp., Santa Clara, Calif., acquired SQA, Performance Awareness, and Pure Atria to better integrate their respective point solutions. The Pure Atria acquisition was finalized in July. "Complete product integration between all of the Rational units is underway, and customers should start to see the benefits by the end of the year," said Pamela Russos, vice president, Rational's Pure Atria development products business unit.

"Mergers such as ours will help pull the environment together," said Tom Bogan, vice president of Rational's SQA business unit. "People building and deploying applications have many different things to do. Products are available from many vendors, but what people really want is to look to one company to provide a complete solution. With the recent addition of the Pure Atria tools, Rational will have an integrated set for the entire life cycle."

There are different levels of integration between tools. While a high level of integration already exists among the tools that comprise the SQA Suite of GUI function/regression tools, Rational is still working toward that level of integration between SQA and the PreVue line of load/performance testing products acquired from Performance Awareness.

Pure Atria's products will complement the SQA GUI test functions with load/performance testing and test coverage analysis. "All kinds of things can be happening in the application," explained Russos. "But the GUI test won't see that."

Taking a similar path is Cyrano, Newburyport, Mass. Cyrano was formed as the result of a merger between Performance Software and Paris-based IMM. While many point solutions will test a client, a server or a database in isolation, the newly released Cyrano Suite integrates testing of the entire environment.

A different approach to integration has been taken by CenterLine Software, Cambridge, Mass. CenterLine's product, QC/Advantage, employs an open architecture that integrates any number of tools used by developers and QA managers.

"Our business really is quite different, and we're striving to get our message out there," said Alyssa Dver, CenterLine's vice president of product marketing. "Companies like Segue and Mercury offer point solutions for load or GUI testing. It's their business to provide the tools," she explained. "It's our business to integrate these tools, but in a complementary, not competitive, way."

Access to different tools is provided in a pull-down menu within QC/Advantage. Benefits include a common user interface across all tools, and the product's central repository, which allows tests and results to be integrated and shared.

As networking becomes key to modern systems operation, the industry will look for these vendors to add more "network smarts" to their suites. Often, one may guess, acquisition will be the route.

-- Deborah Melewski

Load Tests Help Sandia Labs' Web Applications

The technical staff at Sandia National Laboratories, Albuquerque, N.M., embarked on a testing strategy when faced with putting Web-based business applications online and ensuring they run within desired performance parameters -- while still supporting about 8,000 users. A key ingredient to successfully deploying Web applications, according to Ann Hodges, senior member of Sandia Labs' technical staff, was to know ahead of time whether each application and system configuration would function properly when implemented.

Once a weapons-only government R&D site, Sandia Labs expanded its mission over the years and now addresses a broad range of national security needs. Application testing in this diversified environment is no simple matter, said Hodges. When working closely with the application development team, Hodges said, it is important to understand the operational profile and the topology of the application.

The development team needed a highly scalable testing platform to realistically reflect real-life situations in the 8,000-employee organization. They needed the ability to run a variety of scenarios emulating a range of application configurations. "We wanted flexibility in how we define our load test," said Hodges. "Concurrency requirements may differ depending on the time of day and the day of the month. I may want to run all of my virtual users on one system or I may want to distribute the load to a set of nodes. With LoadRunner, I can do either." LoadRunner is a load testing tool from Mercury Interactive Corp., Santa Clara, Calif.
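Stripped of everything a commercial tool adds -- scenario recording, distribution across nodes, result collection -- a virtual user is just a concurrent worker issuing requests. A bare-bones C++ sketch of the concurrency level, with a sleep standing in for the real request:

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<int> completed{0};

    // One "virtual user": repeatedly issues a request against the system.
    void virtualUser(int iterations) {
        for (int i = 0; i < iterations; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10)); // stand-in request
            ++completed;
        }
    }

    int main() {
        const int users = 50;  // the concurrency level under test
        std::vector<std::thread> pool;
        for (int u = 0; u < users; ++u)
            pool.emplace_back(virtualUser, 20);
        for (auto& t : pool) t.join();
        std::cout << completed << " requests completed by "
                  << users << " concurrent users\n";
    }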

"Load and performance testing is crucial in making informed Web deployment decisions," said Hodges. "A Web application is more complex than earlier client/server applications. This kind of testing is important in fine-tuning not only the application but also the environment. You may want to make changes to the database and how the application is using the database."

According to Hodges, it is crucial not only to monitor the performance of the Web server during a load test but also to monitor the performance of the database machine and whatever nodes are involved. "On the node level, we look at CPU utilization and I/O utilization," said Hodges. "On the database machine, we look at the number of concurrent threads. Typically, that's where we find the bottlenecks."

Sandia's Electronic Timecard application was the first system to be tested with LoadRunner. This application automates the submission, review and approval of employee timesheets. Systems administrators were concerned that the Web-based application might not be able to handle the heavy load expected once all employees began using the system.

The test bed included an HP/Unix machine running a combination of LoadRunner Controller and LoadRunner GUI Virtual User. Sandia Labs uses an SGI Web server and a Sun/Unix server running a 1GB Sybase database. The setup also included a single PC that generated data on performance at the desktop. In one afternoon, the Sandia Labs technical team ran five tests that emulated loads of 20, 50, 100, 150 and 200 users. After integrating information gathered from different sources, including database monitors and OS utilities, Hodges said, the team was able to pinpoint the level at which bottlenecks could be expected to occur.

The Electronic Timecard application provided a solid proof of concept for automated load testing, and an early warning system for Sandia Labs' Web-based applications. "We are now quite confident about applying this technology to other systems," said Hodges.

-- Elizabeth U. Harding