In-Depth
Write once, test everywhere
- By Sandra Taylor
- June 27, 2001
The original selling point for Java was its platform independence.
The promise of compatibility soon became associated with the slogan: write once, run everywhere.
Now, in the harsh light of experience, the promise is somewhat suspect. Application developers tell
us there are too many platforms, too many Java virtual machines and too many Java Development
Kit versions. The slogan that emerges is: write once, test everywhere. From widgets through class libraries,
from classes to models, from models to packaged apps -- it all has to be tested. Things
have to be tested individually. They have to be tested in context. Thus the
new Java mantra: write once, test everywhere.
The good news is that we live in a time of tremendously expansive technologies that are both evolving and converging.
But that is also the bad news. Developing today's generation of business applications is akin to doing brain surgery
while riding Space Mountain. Just when you naively believe everything and everyone is stable, the world takes a
hard right.
Perhaps the most telling example of this technological roller coaster is component-based development using
the increasingly popular Java. Make no mistake here. Component-based Java development involves two very significant
yet different technologies that tend to overlap in many people's minds, and as is the case with Java, also overlap
technically. There are the language-independent issues that surround any plug-and-play component-based development.
There are also the language-specific issues relating to Java.
That Java use is accelerating should come as no surprise. In Natick, Mass.-based SPG Analyst Services' recent
market survey of information systems (I/S) organizations, Java ranked as the fastest growing language being used
to implement distributed systems -- on both the client side and the server side. The survey questions specifically
attempted to home in on the languages used to develop distributed applications (versus informational Web pages).
If these prediction levels hold, Java will become the second most popular client-side language, trailing Basic/VB
by only a hair's breadth. On the server side, the growth rate is even more pronounced, and Java enters the
space traditionally reserved for C++ and SQL.
These penetration rates are astounding given the youth of the language. But they are not unjustified, given
the twofold promise associated with this upstart. On one hand, there are the technically oriented robustness
and self-protection features -- sandbox security, strong typing facilities, garbage collection, elimination of
pointers and pure object orientation. But more important -- especially to development and operations staffs who
have suffered through the development/deployment process on the sort-of-but-not-really-compatible Unix variants
and a mixed client environment -- is the portability promise of write once, run everywhere. A manager's dream is
to develop once and deploy to all the clients and servers comprising the application runtime environment. Not having
to write code representing the least common denominator among the target systems is a developer's dream. While
this portability is the Java dream, it is also the largest obstacle Java has faced, faces today and will continue to face.
Problem definition versus problem solution
Stating the problem is easy; it is only the first of many steps, however. While press coverage at times gives
the impression that Java is fully developed, the truth is that developing a comprehensive solution will take time,
even with advancements moving at Web speed.
Evolving Java Development Kit (JDK) and Java Virtual Machine (JVM) Functionality
Even within the organization that spawned Java, one can track the evolutionary process at work. The history of
the SunSoft JDKs is one of discovery, enhancement and more discovery. This process has not been without speed
bumps that require upgrades on the part of the users of those JDKs and JVMs. On the other hand, it is only through
such a process that Java will mature and truly deliver on its potential.
Version-Sensitive System Elements
The intimate intertwining of Java and companion elements, most notably browsers, has not lessened the administrative complexity of the development/deployment
process. While the documentation regarding prerequisites has generally improved, coordinating the new releases
of companion elements is still an issue that I/S organizations must be prepared to address.
Evolving Developer Community Expertise
Consider that there are no developers
today who can satisfy the traditional clause in employment ads -- "five plus years of (Java) development experience
required." Technically, Java's similarity to C++ is an advantage, and the leap from the C++ environment to
Java is better likened to a short hop. However, it is entirely possible to write Java code that functions quite adequately,
but is not truly portable. And therein lies an exposure that only learning and experience can address.
What to do?
Five years ago, had you described today's bullet-paced environment and asked I/S managers whether this were an
environment they would opt to work in, most of them would have graciously declined. But now is now and there are few,
if any, alternatives. In this fluid, volatile environment, SPG Analyst Services believes there is only one logical
way to validate and verify the Java components being developed -- automated testing.
Dr. Roger Hayes, senior software engineer at SunTest, said he agrees. While his may not be a household
name among Java developers, his work is. Those who have spent time with the 100% Pure Java Cookbook on the SunTest
and/or KeyLabs Web sites have indirectly spent time with Hayes.
For those who chase the Java portability promise, Hayes said he believes there are three facets of component-based
testing:
1 A Java component must be tested for platform independence (purity);
2 A Java component must be tested to ensure it performs the functions it
was designed to perform (functionality); and
3 Java components must be tested to ensure they work together (compatibility).
To this list, SPG Analyst Services would also add the dimension of performance. While performance might be considered
an element of functionality, the newness of Java and its anticipated use in mission-critical applications demand
that the application components operate in a responsive manner.
4 Java components must be tested to ensure reasonable levels of performance (responsiveness).
PURITY What exactly is 100% pure Java? According to information in the 100% Pure Java Cookbook, a pure
Java program (an application, an applet, a class library, a servlet, a JavaBeans component or some reasonable combination
of these components) is one that relies only on the documented Java platform -- that is, it uses only the documented Java APIs.
This is the crux of the testing performed by Provo, Utah-based KeyLabs -- the testing lab chosen by Sun to certify
Java code. For an initial charge of $1,000, KeyLabs does a static, visual inspection of the code as well as a dynamic
automated test that looks for violations. Note that this testing provides a purity check as well as a code coverage
check. Note also that the tool suites used for the certification process are available -- free -- for downloading
and testing.
The most common violations? According to Hayes, platform-specific references top the list. But
Hayes also warned about using "neat things" that one may well find buried in a given vendor's JDK. Just
because a JDK is from a well-known vendor does not guarantee it is pure.
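To make the distinction concrete, consider a minimal sketch (the class and path below are hypothetical, chosen only for illustration). A hard-coded Windows path works on exactly one family of platforms; building the same path from the documented java.lang and java.io APIs keeps the class pure and portable.

    import java.io.File;

    public class ConfigLocator {
        // Impure (illustrative): a hard-coded Windows path such as
        //     String path = "C:\\app\\config.properties";
        // fails a purity check and breaks on Solaris, MacOS and other JVM hosts.

        // Pure: only documented APIs, so the same class file resolves a sensible
        // location on every platform.
        public static String configPath() {
            return System.getProperty("user.home") + File.separator
                    + "app" + File.separator + "config.properties";
        }

        public static void main(String[] args) {
            System.out.println(configPath());
        }
    }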
Purity does not necessarily guarantee portability. Purity tests go a long way in catching and identifying platform-specific
references, but they are not infallible. In some cases, the culprit may be one of the issues discussed above. In
other cases -- threads, for example -- platform-specific behavior may be the result of inherent differences in
the underlying operating system. As Java matures, some of these issues will naturally be addressed. For others,
the 100% Pure Java Cookbook should be considered mandatory reading, as it also attempts to provide a clearinghouse
for the capture, definition and presentation of pure Java work-arounds for many commonly encountered problems.
In any event, the watchwords for development organizations moving to Java are still write once, test everywhere.
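The threads case is worth a concrete illustration. In the sketch below (a hypothetical example, not drawn from the Cookbook), code that silently assumes the worker thread has already run depends on the host's scheduler and can behave differently from one JVM to the next; stating the ordering explicitly with join() keeps the result identical everywhere.

    public class PortableOrdering {
        public static void main(String[] args) throws InterruptedException {
            final StringBuffer log = new StringBuffer();

            Thread worker = new Thread(new Runnable() {
                public void run() {
                    log.append("worker ");
                }
            });
            worker.start();

            // Non-portable assumption: that the worker has finished by now.
            // Whether it has depends on the underlying scheduler, so results
            // vary by JVM and operating system.

            // Portable fix: make the ordering explicit.
            worker.join();
            log.append("main");

            System.out.println(log); // always prints "worker main"
        }
    }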
According to Hayes, there is another issue to keep in mind regarding purity and portability. In certain situations,
platform-specific code can be acceptable -- as long as that decision is consciously made. The worst case scenario
is accidental platform specificity.
Whereas the purity tests are unique to the Java environment, the classes of test described below are in reality
the same types of tests frequently performed in the distributed client/server world. Many of the vendors are, in
fact, the same vendors that cut their testing teeth on client/server applications and are now expanding their product
suites to include Java. Centerline, Mercury Interactive, NuMega (recently purchased by Compuware), Segue, and Rational
are addressing different aspects of Java testing and test management. There are the new players, such as RSW and
SunTest. And finally, there are the developer-oriented debugging facilities offered by all Java development tool
vendors.
FUNCTIONALITY Purity tests do nothing -- nor can they -- to validate that a component is functioning
properly. Such testing requires knowledge of either a) what the authoring developer intended or b) what the consuming
developer expects. In dealing with component functionality, opportunities abound for miscommunication between developers,
as well as for the eccentricities caused by technological issues previously discussed.
Enter the regression test, designed to exercise a reasonably granular unit of work. Care must be
taken to understand and set up the pre-conditions, invoke the component or component set, and validate the resulting
post-conditions.
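As a sketch of what such a test can look like, the fragment below uses the open-source JUnit framework (one possible harness; the DiscountCalculator component and its pricing rule are hypothetical). The setUp() method establishes the pre-conditions, each test method invokes the component, and the assertions validate the post-conditions.

    import junit.framework.TestCase;

    // Hypothetical component under test: applies a 10% discount to orders over $100.
    class DiscountCalculator {
        public double priceWithDiscount(double orderTotal) {
            return orderTotal > 100.00 ? orderTotal * 0.90 : orderTotal;
        }
    }

    public class DiscountCalculatorTest extends TestCase {
        private DiscountCalculator calculator;

        // Pre-condition: a freshly configured component before every test.
        protected void setUp() {
            calculator = new DiscountCalculator();
        }

        // Invoke the component, then validate the post-conditions.
        public void testLargeOrderGetsDiscount() {
            assertEquals("10% discount expected over $100",
                    180.00, calculator.priceWithDiscount(200.00), 0.001);
        }

        public void testSmallOrderGetsNoDiscount() {
            assertEquals("no discount expected at or below $100",
                    50.00, calculator.priceWithDiscount(50.00), 0.001);
        }
    }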
COMPATIBILITY System-level regression testing is for the skeptic who believes individual components may
operate as expected, but the integration of such components may well introduce errors. Such skepticism was well
justified within the realm of large-scale client/server applications. SPG Analyst Services sees the world of distributed
objects as more fragile than client/server, with markedly younger technologies. Add the high load potential generated
by use of these applications in an Internet-enabled environment, and for those large-scale systems, system-level
regression testing becomes an almost obvious requirement.
RESPONSIVENESS As with system-level regression testing, the idea of load testing an application is not
new. And as with system-level regression testing, the unknowns are the performance characteristics of the new technologies.
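At its simplest, a load test needs nothing beyond the standard java.net classes: start a handful of client threads, have each fetch the same URL repeatedly, and record how long each response takes. The sketch below does exactly that (the URL, thread count and request count are placeholders); commercial tools layer scripting, ramp-up control and reporting on top of the same basic idea.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SimpleLoadTest {
        static final String TARGET = "http://localhost:8080/order-entry"; // placeholder
        static final int CLIENTS = 10;   // concurrent simulated users
        static final int REQUESTS = 20;  // requests per simulated user

        public static void main(String[] args) throws Exception {
            Thread[] clients = new Thread[CLIENTS];
            for (int i = 0; i < CLIENTS; i++) {
                clients[i] = new Thread(new Runnable() {
                    public void run() {
                        for (int r = 0; r < REQUESTS; r++) {
                            long start = System.currentTimeMillis();
                            try {
                                HttpURLConnection conn =
                                        (HttpURLConnection) new URL(TARGET).openConnection();
                                InputStream in = conn.getInputStream();
                                while (in.read() != -1) { /* drain the response */ }
                                in.close();
                                System.out.println(Thread.currentThread().getName()
                                        + " request " + r + ": "
                                        + (System.currentTimeMillis() - start) + " ms");
                            } catch (Exception e) {
                                System.out.println("request failed: " + e);
                            }
                        }
                    }
                });
                clients[i].start();
            }
            for (int i = 0; i < CLIENTS; i++) {
                clients[i].join();   // wait for every simulated user to finish
            }
        }
    }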
Java in a maturing QA testing market
Peter Skrzypczak, SunTest system engineer, is a member of SunTest's SWAT team. For customers about to embark
on a major Java initiative, Skrzypczak is among the staff chartered to initially point the program in the right
direction. For customers who might have hit a snag in their testing program, Skrzypczak will work with the customer
team to get the program back on course.
We asked Skrzypczak which question he is asked most often. Two or three years ago, when
the QA testing market was still in its adolescence and before mission-critical client/server had become a generally
accepted practice, that question would have been 'Why do we need to go through these testing procedures?' Today
Skrzypczak sees that question as being 'Where and how do we start?' While that general shift in the lead question
indicates a maturing market, it also highlights the fact that an automated QA testing effort is still a new discipline
for many organizations.
For those companies embarking on a formalized testing program, there may be a few surprises in store. One will
be the level of personnel required to plan and manage the implementation of the program. As noted by Skrzypczak,
"Some organizations believe they can take some of their bright administrative personnel and mold them into
QA specialists. Well, it just doesn't work that way. Even though we've tried very hard to make our testing products
easy to use, they are sophisticated technical tools. Remember, we're using software to test software, and that's
not a simple process."
"Another thing that tends to make the process more complex is the rate at which developers seem to be turning
out code. Whoever's managing that QA process must have developed a testing philosophy and a plan to implement that
philosophy. By that, I mean it's impossible to test all the variables in a large, complex system. So what approach
do you take? Test the most commonly used code paths? Test the most critical paths in terms of business impact?
Or some combination of the two? Knowing what to test is a function of QA experience (or training), as well as knowing
what the application is all about. And those things mean you're talking about a QA specialist."
Skrzypczak continued, "And speaking of turning out that code, it's really important that QA and development
keep in sync. The process we recommend is one of develop, test, develop, test, develop, test, etc. If you wait
too long before starting to test, any errors can really snowball and it becomes a nightmare to figure out what's
going wrong."
SunTest's Hayes also brought up this concept of what he calls spiral development and some in the industry call
iterative development. The process involves building and testing a small functional kernel. That test, by the way,
is not merely a technical test, but also a functional test that involves representatives from the business side
of the house and representatives from the user community. The idea here is to shake out not only the technical
bugs, but the functional bugs as well.
These functional adjustments are just as important as bug fixing. Business conditions can change in the blink
of an eye. Additionally, the business analysts may not have thought of every applicable business rule during the
design phase. Seeing the application grow in small manageable increments gives both the technical and business
staffs time to shift directions if conditions warrant. Waiting until the end of the development cycle to conduct
these reviews is far too late in the process.
Other things also remain constant, regardless of development paradigm. Not all applications can justify the
investments in money and resources normally associated with a formal QA process. For simple applications, subjecting
the application to corporate standards verification and a self-conducted purity check might be all that is required.
On the other hand, the bar is a lot higher if we talk about a net-enabled order entry system or a full-blown supply
chain management system. Huge costs can be incurred by a mission-critical application that goes south because of
a software error. And while QA testing cannot absolutely guarantee the application is bug-free, it certainly moves
the application closer to that goal.
A view from the developer community
Founded in November 1996, C-bridge Internet Solutions designs and builds large-scale net-enabled business applications,
utilizing new wave technologies including client- and server-side Java, Object Request Brokers (ORBs), and the net
triumvirate technologies (Internet, Intranet and Extranet). C-bridge's customer roster includes Liz Claiborne,
Harley-Davidson, Informix and Trane.
We spoke with Ron Bodkin, C-bridge cofounder and chief technology officer. With his business-oriented cofounder
hat on, Bodkin plays a key role in defining company strategy and raising financing. As CTO, he directs the technical
evaluation and specifications of all tools and practices, as well as managing a core team of experts who provide
ongoing advice to project teams.
"The biggest complicating factor is multiple platforms. Open standards have reduced the proprietary lock-in.
And even though portability is better than ever before, we're expecting more. The nature of our business forces
us to deliver code that works in many environments. But guess what? Those operating systems, those JVMs, and ORBs
... they're all from different vendors and they don't all work identically. As a result, we've implemented strong
structured testing and benchmarking procedures," explained Bodkin, a strong advocate for the testing discipline.
Procedures are not the only thing being formalized. Bodkin and his technology team are creating an expanding
library of pre-built components and frameworks for use in their projects. Said Bodkin, "We're evolving our
library, in part with standard off-the-shelf component sets, and in part with our own components and a growing
set of frameworks." A portion of the C-bridge library is virtually predetermined by the company's business
needs. But C-bridge also learns from experience, and its project teams are constantly feeding ideas
and suggestions back to the core team.
Bodkin appears to be the ultimate skeptic. As for assuming that off-the-shelf components work as advertised, Bodkin
said he believes "it doesn't make any difference whether the component set is from a major system provider
or from one of the new commodity component providers; you have to validate those components in the context of your
particular environment.
"You need to understand -- really understand -- the behavior of those components, as well as the pre and
post conditions for those components. Even more basic than that, you need to make sure those components work at
all. We've had experience with a few components -- a very few -- that made me wonder if they were run through any
QA process at all. At C-bridge, we're fanatical enough about the quality of the components we use that we've set
up our own validation process."
His advice for dealing with purchased components? "If at all possible, try to get the source code for the
component. You may never need to look at it, but if something goes wrong and the component's not behaving as you
anticipated, having that code to look at can tell you a lot. The other issue is the ability of someone to do an
accurate and complete job of documentation. Even though components inherently provide a cleaner interface, we're
still talking about the human factor in documenting that interface, and more importantly, the behavior of the component
behind that interface. The more complicated the behavior, the greater the potential for misunderstanding. And the
only way you can be sure of that behavior is to test it."
Functionality is not the only facet of components that C-bridge has addressed. Said Bodkin, "You also have
to think about your application infrastructures -- from a technical viewpoint and from an application function
viewpoint. We've set up our system so it's as easy as we can make it to drop new components -- new functionality
-- into the app without having to tear things apart. We do that on the technical side, too. For example, we found
it was taking far too long to instantiate objects and then remove those objects when they were no longer being
used. So we create and manage our own dynamic object pool. But we've done it in a way that's clean and modularized
... one of the benefits of component-based development. If our system software ever provides that function as a
standard, we'll be able to pull out our code and switch to the standard software."
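Bodkin does not spell out C-bridge's pool, but the general shape of such a component is straightforward. In the sketch below (class names hypothetical), a synchronized free list of expensive-to-create objects sits behind one small class: callers check objects out and back in rather than instantiating and discarding them, and because the pooling logic is isolated there, it could later be swapped for a standard facility without disturbing the rest of the application.

    import java.util.LinkedList;

    // Hypothetical dynamic object pool of the kind Bodkin describes; it amortizes
    // the cost of instantiating heavyweight objects (connections, parsers, etc.).
    public class ObjectPool {
        public interface Factory {
            Object create();   // builds a new instance when none is free
        }

        private final LinkedList free = new LinkedList();
        private final Factory factory;

        public ObjectPool(Factory factory) {
            this.factory = factory;
        }

        // Check an object out of the pool, creating one only if none is available.
        public synchronized Object checkOut() {
            return free.isEmpty() ? factory.create() : free.removeFirst();
        }

        // Return an object so later callers can reuse it.
        public synchronized void checkIn(Object o) {
            free.addLast(o);
        }
    }

A caller wraps its expensive constructor in a Factory and brackets each use with checkOut() and checkIn(); if the platform ever ships an equivalent pooling service, only this one class needs to change.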
Bodkin continued, "We've spent a lot of time optimizing our systems for components. And it's paid off.
But the key to successful systems is the quality of our applications. And that means testing for quality from the
very beginning. You can't retrofit quality."
This all may be called waiting for a slow waiter on a fast train. By nature, we are impatient animals. We want
a totally portable Java -- today, not tomorrow. But we also tend to forget how far we have come, and how fast.
The journey will continue. In the meantime, there will also continue to be incompatibilities. The solution to finding
and dealing with these issues? Write once, test everywhere.