In-Depth
Test and test again
- By Dwight Deugo
- July 31, 2002
I help coach a soccer team. As a coach, you soon realize that you have to
tell your players the same thing over and over again before they get the
message. And even after they get the message, you have to tell them the same
thing again and again to help them remember it. So, for those of you reading
this, the message of this column is: Test!
There are those who believe that bugs can be managed in Java through the
strategic use of exceptions, an exception in Java being a way to indicate to a
method that an abnormal condition has occurred. If a method encounters an
exceptional condition that it can't handle itself, it throws an exception. The
plan -- or hope -- is that an exception will be caught by a handler positioned
along the thread's method invocation stack. The idea is that developers position
exception handlers so that their applications will catch, handle and recover
from all exceptions. However, when your disk becomes full and your application
can't write to it anymore, is that a bug or just an abnormal condition to
handle? It's better to think of exceptions as exceptional conditions, such as
your disk getting full, and not to think of them as bugs. It is important not to
use exceptions to capture bugs because that is the job of testing.
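As a sketch of that distinction, here is a handler that treats a failed write as a condition to recover from, not a bug. FullDiskStream is a hypothetical stream used only to simulate a full disk for illustration:

```java
import java.io.IOException;
import java.io.OutputStream;

public class DiskFullExample {
    // Hypothetical stream that simulates a full disk by always failing.
    static class FullDiskStream extends OutputStream {
        @Override
        public void write(int b) throws IOException {
            throw new IOException("No space left on device");
        }
    }

    static String saveReport(OutputStream out, byte[] data) {
        try {
            out.write(data);
            return "saved";
        } catch (IOException e) {
            // An exceptional condition, not a bug: report it and recover.
            return "retry later: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(saveReport(new FullDiskStream(), "report".getBytes()));
    }
}
```

The handler recovers from the full disk; it cannot, and should not try to, compensate for a coding mistake.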
There are many situations where an exception handler can't fix a problem in
code. And no matter how good a job those involved with the development of Java
as a language did, I still maintain that it's never a language that kills an
application, it's the skill of the developer using the language. Take, for
example, one of my favorite code samples:
public class Mistake {
    private static int methodWithAMistake(int a, int b, int c) {
        return (b * b) + (a * c * 4);
    }

    public static void main(String[] args) {
        System.out.println(methodWithAMistake(7, 2, 0));
        System.out.println(methodWithAMistake(7, 2, 1));
    }
}
Say a developer makes a mistake and uses addition in
methodWithAMistake when
it should have been subtraction. If the developer tests the method using only
one test case -- the one with 7, 2 and 0 as arguments -- the method looks correct
because it returns the same answer of 4 using either addition or subtraction.
However, if the developer tests with arguments 7, 2 and 1, 32 is returned when
it should have been -24. Clearly, passing a single test case is not enough
testing to demonstrate the absence of a bug. And, clearly, skilled developers
test.
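A small driver that runs both versions side by side makes the point concrete; methodFixed is a hypothetical corrected version that uses the subtraction the developer intended:

```java
public class MistakeTest {
    // The buggy version from the column, using addition.
    static int methodWithAMistake(int a, int b, int c) {
        return (b * b) + (a * c * 4);
    }

    // Hypothetical corrected version: the developer intended subtraction.
    static int methodFixed(int a, int b, int c) {
        return (b * b) - (a * c * 4);
    }

    public static void main(String[] args) {
        // One test case is not enough: both versions return 4 here.
        System.out.println(methodWithAMistake(7, 2, 0) == methodFixed(7, 2, 0)); // true

        // A second test case exposes the bug: 32 versus -24.
        System.out.println(methodWithAMistake(7, 2, 1)); // 32
        System.out.println(methodFixed(7, 2, 1));        // -24
    }
}
```

One passing case only shows the method works for that case; the second case is what separates the two implementations.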
Bugs, which are errors or faults that testing can catch, can be broken down
minimally into several types. An application may lack usability as a result of a
poor user interface. It might have a missing or incorrect capability. The
application might have side-effects and unanticipated, undesirable feature
interactions. Its performance may be lacking, or it might suffer from real-time
deadline failures, synchronization deadlock and livelock. The application's
output might also be wrong. And finally, something we are used to seeing, the
application might abruptly terminate -- crash to blue.
There are many causes of bugs. Often, important requirements aren't
identified and implemented. The implementation might not support the
requirements. The programmer may have just made a mistake, the chosen algorithm
might be inefficient, or the approach might not be feasible. As well, the
configuration might be invalid. So many more things might be wrong. And there
are so many things to test.
Are you ready to test yet? Good, but don't just go and run your application
to see if it works. Good test design involves the following steps. First
identify, model and analyze the responsibilities of your application under test.
Next, design test cases from an external view of your application. This view
might be from a user's perspective, whether that user is a person or another
application. Then add more test cases based on code analysis, suspicions and
heuristics. Finally, figure out the expected results of each test case or at
least develop an approach for evaluating whether a test case, when executed,
passes or fails. Does this sound like work? Let me put it this way. Would you
rather be responsible for your application working or failing? And keep in mind
that often many people's lives can depend on your software.
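The steps above can be sketched as a tiny test driver. Account is a hypothetical class standing in for the application under test, with a withdraw responsibility:

```java
public class AccountTest {
    // Hypothetical application under test.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        boolean withdraw(int amount) {
            if (amount <= 0 || amount > balance) return false;
            balance -= amount;
            return true;
        }
        int getBalance() { return balance; }
    }

    public static void main(String[] args) {
        // Cases designed from an external, user's-eye view of the responsibility.
        Account a = new Account(100);
        check(a.withdraw(40) && a.getBalance() == 60, "normal withdrawal");

        // Cases added from suspicions and heuristics: boundaries and bad input.
        check(!a.withdraw(0), "zero amount rejected");
        check(!a.withdraw(61), "overdraft rejected");
        check(a.withdraw(60) && a.getBalance() == 0, "withdraw to exactly zero");
    }

    // Each case carries its expected result, decided before it is run.
    static void check(boolean passed, String name) {
        System.out.println((passed ? "PASS " : "FAIL ") + name);
    }
}
```

Each check names its case and states its expected outcome in advance, so pass or fail is a mechanical judgment, not an impression.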
As important as testing is, it is still no substitute for good software
engineering practices. For example, if your system doesn't crash under testing
or you don't find any bugs, does that imply it has met all its functional
requirements? Of course not. However, testing can reveal interactions and
special scenarios that have not been considered and which cause the system to
crash. Using good software engineering practices, including testing from the
outset of the development process, is the best approach. Catching bugs early in
the process not only decreases the costs associated with finding them late in
the game, but just thinking about testing can impact the costs associated with
architectural considerations, designs, coding practices and personnel. Why wait
to consider testing? You know you have to do it.
Sometimes, a bug can suggest a new feature. For example,
I occasionally make a mistake and send a message to an object that it doesn't
understand. If you try hard enough, you might be able to get around the compiler
on this one. However, sometimes I actually want to send a message to an object
that it doesn't understand, but guess what happens? You're right. You get a
NoSuchMethodError
thrown.
What I wish Java would do is what Smalltalk does. It
forwards a doesNotUnderstand
message to the receiving object -- not the sending object --
so it can have one more chance at processing the message. Why? Here's one
reason: With this feature in Smalltalk, I can write a single proxy class that
stands in for every other class. In Java, well ... explaining that would take a
full article. In most cases, you need a separate proxy for each class. Those of you reading this
who use RMI know the consequences of this problem.
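For interfaces, at least, Java comes close: the dynamic proxies added in J2SE 1.3 (java.lang.reflect.Proxy) route every call on a proxy through a single InvocationHandler, a rough, interface-only analogue of doesNotUnderstand. A minimal sketch, with Greeter and RealGreeter as hypothetical names used for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxySketch {
    interface Greeter {
        String greet(String name);
    }

    static class RealGreeter implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Wrap any Greeter so one handler sees every message before the
    // target does -- one handler serves every interface, much as one
    // doesNotUnderstand implementation serves every Smalltalk class.
    static Greeter withLogging(final Greeter real) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args)
                    throws Throwable {
                System.out.println("intercepted: " + method.getName());
                return method.invoke(real, args);
            }
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
    }

    public static void main(String[] args) {
        Greeter proxied = withLogging(new RealGreeter());
        System.out.println(proxied.greet("reader"));
    }
}
```

The catch is the qualifier: this works only for interfaces. Plain classes still need a proxy apiece, which is exactly the complaint above.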
So, what have we learned? Test now, test tomorrow, test often and test under
many circumstances. It's not the language's fault when you make a mistake, it's
yours. If you still don't believe in testing, I suggest you check out Robert
Binder's book, Testing Object-Oriented Systems: Models, Patterns and
Tools (Reading, Mass.: Addison-Wesley, 2000). I did. He goes into far more
detail about testing than you could ever want to know. If you don't think
testing is important after reading his book, I guess there is no hope for you.
Let me conclude by saying that even though I know testing is work, and there
might be a specific feature or two in other languages I would like to use, I
still prefer developing and testing my applications in Java. Don't you?
About the Author
Dwight Deugo is a professor of computer science at Carleton University in Ottawa, Ontario. Dwight has been an editor for SIGS and 101communications publications, and serves as chair of the Java Programming track at the SIGS Conference for Java Development. He can be reached via e-mail at [email protected].