
Overview of Software Testing

Software testing is a systematic process to find differences between the expected behavior of the system
specified in the software requirements document and its observed behavior. In other words, it is an activity
for finding errors in the software system. There is no one agreed-upon goal of software testing. One school
of thought describes the goal of testing as demonstrating that errors are not present. Edsger Dijkstra (1930-2002)
observed that testing can show the presence of faults, but never their absence. The ultimate goal,
however, is to find errors and fix them so users can be confident that they can depend on the software.

Errors (also known as bugs or glitches) in software are generally introduced by people involved in software
development (including analysts, architects, designers, programmers, and the testers themselves).
Examples of errors include:

• Interface specification: Mismatch between requirements and implementation.
• Algorithmic faults: Missing initialization, branching errors, or missing tests for null.
• Mechanical faults: The user manual doesn't match actual conditions or operating procedures.
• Omissions: Some of the features described in the requirements documents are not implemented.

Many developers view the subject of software testing as "not fashionable," and as a result too few of them
really understand the job software testers do. Testing is an iterative process and should start from the
beginning of the project. Software developers need to get used to the idea of designing software with testing
in mind. Some of the new software development methodologies such as eXtreme Programming stress
incremental development and testing. eXtreme Programming is ideally suited for some types of applications,
depending on their size, scope, and nature. User interface design, for example, benefits greatly from rapid
prototyping and testing usability with actual users.

One way to make testing simple is to design applications with testing in mind. Organizing the system in a
certain way can make it much easier to test. Designing for testability also implies that the system must provide
enough functionality and output information to distinguish among its different functional features. It
is now common to describe a system's functional requirements (features that the system must provide) by
using the Unified Modeling Language (UML) to create a use case diagram, then detailing the use cases in a
consistent written form. Documenting the various uses of the system in this way simplifies the task of testing
the system by allowing the tester to generate test scenarios from the use cases. The scenarios represent all
expected paths users will traverse when they use the features that the system must provide. Developers
distinguish these functional requirements from nonfunctional requirements: constraints on performance,
configuration, usability, and other qualities not tied to any particular function.

Testing Activities

The testing that needs to be performed can be split into two classes: functional (black-box) testing and
structural (white-box) testing. In black-box testing, each of the components -- and ultimately the system as a
whole -- is treated as a black box, and testers verify that it supports all the features identified (often as use
cases) in the requirements documents. Black-box testing activities include:

• Unit (or Class) Testing: In this testing activity, components are tested separately. Because some objects
may depend on other objects that are not yet available, you may need to develop test drivers and test
stubs. A test driver simulates the part of the system that calls the component under test. A test stub
simulates a component called by the tested component (a minimal sketch of both follows this list).
• Integration Testing: In this activity, objects are integrated in increasingly large and complex subsystems.
This is an incremental testing process.
• System Testing: In this activity, the system is tested as a whole. Testers employ various techniques at
this stage, including functional testing (testing actual behavior against documented requirements),
performance testing (testing nonfunctional requirements), and acceptance and installation testing
(testing against the project agreement).
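
As a concrete illustration of drivers and stubs (a minimal sketch only; the QuoteService and PriceFormatter
names below are hypothetical, not part of any framework), consider a component that depends on a service that
isn't available yet:

// Hypothetical component under test: it depends on a QuoteService that may
// not exist yet when the unit test is written.
interface QuoteService {
    int getQuoteInCents(String symbol);
}

class PriceFormatter {
    private QuoteService service;
    PriceFormatter(QuoteService service) { this.service = service; }
    String format(String symbol) {
        int cents = service.getQuoteInCents(symbol);
        return symbol + ": $" + (cents / 100) + "." + (cents % 100);
    }
}

// Test stub: simulates the component called by the tested component by
// returning canned data.
class QuoteServiceStub implements QuoteService {
    public int getQuoteInCents(String symbol) { return 1250; }
}

// Test driver: simulates the part of the system that calls the component
// under test, then checks the result.
public class PriceFormatterDriver {
    public static void main(String[] args) {
        PriceFormatter formatter = new PriceFormatter(new QuoteServiceStub());
        String result = formatter.format("SUNW");
        System.out.println("SUNW: $12.50".equals(result) ? "PASS" : "FAIL: " + result);
    }
}
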
Black-box testing concerns itself with externally visible behavior, and ignores the source code. In white-box
testing, the focus is on the code that produces the behavior. One common white-box testing activity is path
testing, also known as code coverage. Its goal is to identify faults in the implementation by exercising all
possible paths through the code at least once. Testers check that every branch in the code has a test that
exercises that branch.
Note: The starting point of path testing is a flow graph consisting of nodes representing
executable blocks, and associations (or edges) representing flow of control. The minimum
number of tests necessary to cover all edges is equal to the number of independent paths
through the flow graph. This is known as the cyclomatic complexity (CC) of the flow graph,
which has the formula:
CC = number of edges - number of nodes + 2

Fortunately, you don't have to draw flow graphs for your code by hand, as several code coverage tools are
readily available.
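
To see how the formula works in practice, here is a small hypothetical method (not taken from any real
application) with two decision points. One reasonable flow graph for it has 6 nodes and 7 edges, so
CC = 7 - 6 + 2 = 3: three independent paths, and therefore at least three test cases, are needed to cover
every branch.

// A hypothetical method used only to illustrate the cyclomatic complexity formula.
public int classify(int x) {
    if (x < 0) {
        return -1;      // path 1: x < 0
    } else if (x == 0) {
        return 0;       // path 2: x == 0
    }
    return 1;           // path 3: x > 0
}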

Challenges of Testing Wireless Applications


The wide variety of Java technology-enabled devices such as wireless phones and PDAs results in each
device running a different implementation of the CLDC and MIDP. Varying display sizes add to the
complexity of the testing process. In addition, some vendors provide proprietary API extensions. As an
example, some J2ME vendors may support only the HTTP protocol, which the MIDP 1.0 specification
requires, while others support TCP sockets and UDP datagrams, which are optional.

To make your application both portable and easy to test, design it using standardized APIs defined through
the Java Community Process (JCP), so it will run as-is on devices with different J2ME implementations. If
you feel you must use vendor-specific extensions, design your application in such a way that it defaults to
the standard APIs if it's deployed on a device that doesn't support the extensions.
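
One way to implement that fallback (a minimal sketch; com.vendor.ext.SocketHelper is a hypothetical class name
standing in for whatever proprietary extension a vendor might provide) is to probe for the extension class at
run time and default to the standard HTTP APIs when it is missing:

public class ExtensionDetector {
    // Returns true only if the hypothetical vendor extension is present on this device.
    public static boolean vendorSocketsAvailable() {
        try {
            Class.forName("com.vendor.ext.SocketHelper");
            return true;                 // extension present: the optional API can be used
        } catch (ClassNotFoundException e) {
            return false;                // extension missing: fall back to standard HTTP
        }
    }
}

The application would typically call vendorSocketsAvailable() once at startup and choose the appropriate
networking code path accordingly.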

Testing Wireless Java Applications


The testing activities described above are applicable to testing wireless Java applications. In other words,
you perform unit or class testing, then you integrate components and test them together, and eventually you
test the whole system. In this section I provide guidelines for testing wireless applications.

Validating the Implementation

Ensuring that the application does what it's supposed to is an iterative process that you must go through
during the implementation phase of the project. Part of the validation process can be done in an emulation
environment such as the J2ME Wireless Toolkit, which provides several phone skins and standard input
mechanisms. The toolkit's emulation environment does not support all devices and platform extensions, but
it allows you to make sure that the application looks appealing and offers a user-friendly interface on a wide
range of devices. Once the application has been tested on an emulator, you can move on to the next step
and test it on a real device, and in a live network.

Usability Testing

In usability testing (also called GUI navigation testing), focus on the external interface and the relationships among the
screens of the application. As an example, consider an email application that supports entry and validation
of a user name and password, enables the user to read, compose, and send messages, and allows
maintenance of related settings, using the screens shown in Figure 1, among others.

Figure 1: Messaging Application


In this example, start the test at the Login window. Enter a valid user name and password and press the soft
button labeled Login. The application should display the main
menu. Does it? The main menu should display a SignOut button. Does it? Press the SignOut button. Does
the application return to the Login screen? Write yourself a note to raise the question, "Why does the user
'log' in but 'sign' out?" Now enter an invalid user name or incorrect password. The program should display a
meaningful message box with an OK button. Does it? Press the OK button. Does the application return to
the Login screen?

You need to test the GUI navigation of the entire system, making notes about usability along the way. If, for
example, the user must traverse several screens to perform a function that's likely to be very popular, you
may wish to consider moving that particular function up the screen layers.

Some of the questions you should ask during usability testing include:

• Is the navigation depth (the number of screens the user must go through) appropriate for each particular
function?
• Does the application minimize text entry -- painful on a wireless phone -- or should it provide more
selection menus?
• Can screens of all supported devices display the content without truncating it?
• If you expect to deploy the application in international markets, does it support the appropriate character sets?

The MIDP Style Guide provides helpful hints about user interface design.

Network Performance Testing

The goal of network performance testing is to verify that the application performs well under the harshest conditions
(for example, when the battery is low or the phone is passing through a tunnel). Testing performance in an
emulated wireless network is very important. The problem with testing in a live wireless network is that so
many factors affect the performance of the network itself that you can't repeat the exact test scenarios. In an
emulated network environment, it is easy to record the result of a test and repeat it later, after you have
modified the application, to verify that the performance of the application has improved.

Server-Side Testing

It is very likely that your wireless Java applications will communicate with server-side applications. If your
application communicates with servers you control, you have a free hand to test both ends of the
application. If it communicates with servers beyond your control (such as quotes.yahoo.com), you just
need to find out the prerequisites for their use and make the best of them. You can test server-side applications that
communicate over HTTP connections using HttpUnit, a Java API for accessing web sites without a browser. HttpUnit
is ideally suited for automated unit testing of web sites when combined with a Java unit test framework such as
JUnit, which I'll discuss in the next section. You can also measure a web site's performance using httperf, a tool
designed for measuring the performance of web servers.
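
As a rough sketch of how the two frameworks fit together (the URL and the expected content below are
placeholders, not part of any real service), a JUnit test case might drive HttpUnit like this:

import junit.framework.TestCase;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebResponse;

public class QuoteServerTest extends TestCase {

    public QuoteServerTest(String name) {
        super(name);
    }

    public void testQuotePage() throws Exception {
        WebConversation conversation = new WebConversation();
        WebResponse response =
            conversation.getResponse("http://localhost:8080/quoteserver/quotes");
        assertEquals("Unexpected HTTP status", 200, response.getResponseCode());
        assertTrue("Page should contain the quote symbol",
                   response.getText().indexOf("SUNW") != -1);
    }
}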

Automating Unit Testing

One of the unit-testing tools most widely used by Java developers is the JUnit framework. A similar
framework for J2ME unit testing is J2MEUnit. Here is a sample test:

import j2meunit.framework.*;

public class TestLoginWindow extends TestCase {

    public TestLoginWindow() {
        super("null");  // dummy test name; this constructor is used only to obtain the class in suite()
    }

    public TestLoginWindow(String s) {
        super(s);
    }

    protected void runTest() throws java.lang.Throwable {
        if (getTestMethodName().equals("testLoginWindow"))
            testLoginWindow();
    }

    public Test suite() {
        return new TestSuite(new TestLoginWindow().getClass(),
                             new String[] {"testLoginWindow"});
    }

    public void testLoginWindow() {
        // test it
        // use assert(String, boolean)
    }
}

When using J2MEUnit in your testing, you need to:

• Create a subclass of TestCase, like TestLoginWindow in the example.
• Override the method runTest() as in the example. Because J2MEUnit doesn't use reflection, when
you override runTest() you must call getTestMethodName() to check the name of the test method
(testLoginWindow() in the example above).
• Override the method suite() so that it returns a TestSuite object containing the name of the class
for the test case and the names of the test methods.
• To check a value, call the assert() method and pass a boolean that is true if the test succeeds.

J2MEUnit provides two test runners that allow you to run your tests, collect results, and display them:

1. Text-based: This test runner provides simple-text based output of the results. To use it, add the following
main method to any subclass of TestCase:

public static void main(String argv[]) {
    String[] runnerArgs = new String[] {"j2meunit.examples.TestOne"};
    j2meunit.textui.TestRunner.main(runnerArgs);
}

2. GUI-based: As the name implies, this test runner provides GUI output. To use it, you must compile and
preverify the J2MEUnit framework, then:
o Create a subclass of j2meunit.midp.ui.TestRunner.
o Override the startApp() method.

Debugging Information

Adding debugging information in your code is very important. You can display trace points, values of
variables, and other information during testing and debugging. One way to minimize the tedium of writing
System.out.println() calls is to write a utility method such as the following:

public void debug(String s) {
    System.out.println("DEBUG: " + s);
}

You can easily use the debug() method to display debugging information, then later remove the calls from
production code.
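
A common refinement (my own suggestion, not something the toolkit requires) is to guard the output with a
constant flag, so the calls can stay in the code and simply be switched off for release builds:

public class Debug {
    // Set to false before building the production version; because the flag is a
    // compile-time constant, the guarded println calls are then stripped entirely.
    public static final boolean ENABLED = true;

    public static void debug(String s) {
        if (ENABLED) {
            System.out.println("DEBUG: " + s);
        }
    }
}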

The J2ME Wireless Toolkit provides a debugger that's easy to use. If you use Sun ONE Studio 4, Mobile
Edition, see the article Debugging Wireless Applications with Mobile Edition for useful guidance.

Testing Checklists

This section provides checklists you will find useful when testing your application, in both emulation and live
environments. These checklists include tests that are usually performed in the Motorola Application
Certification Program, described a little later in the article.

Navigation Checklist

• Application name: Make sure your application displays a name in the title bar.
• Keep the user informed: If your application doesn't start up within a few seconds, it should alert the
user. For large applications, it is a good idea to have a progress bar.
• Readable text: Ensure that all kinds of content are readable on both grayscale and color devices. Also
make sure the text doesn't contain any misspelled words.
• Repainting screens: Verify that screens are properly painted and that the application doesn't cause
unnecessary screen repaints.
• Soft buttons: Verify that the functionality of soft buttons is consistent throughout the application. Verify
that the whole layout of screens and buttons is consistent.
• Screen Navigation: Verify that the most commonly used screens are easily accessible.
• Portability: Verify that the application will have the same friendly user interface on all devices it is likely
to be deployed on.

Network Checklist

• Sending/Receiving data: For network-aware applications, verify that the application sends and receives
data properly.
• Name resolution: Ensure that the application resolves host names to IP addresses correctly, and sends
and receives data properly.
• Sensitive Data: When transmitting sensitive data over the network, verify that the data is being masked
or encrypted, for example over an SSL connection (a minimal sketch follows this checklist).
• Error handling: Make sure that error messages concerning network error conditions (such as no
network coverage) are displayed properly, and that when an error message box is dismissed the
application regains control.
• Interruptions: Verify that, when the device receives system alerts, SMS messages, and so on while the
application is running, messages are properly displayed. Also make sure that when the message box is
dismissed the application continues to function properly.
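
For the sensitive-data item above, here is a minimal sketch of opening a secure connection. It assumes the
device supports MIDP 2.0's HttpsConnection and uses a placeholder URL; on MIDP 1.0 devices you would need
whatever secure connection mechanism the vendor provides.

import javax.microedition.io.Connector;
import javax.microedition.io.HttpsConnection;
import java.io.IOException;

public class SecureLogin {
    public void sendCredentials() throws IOException {
        HttpsConnection conn = null;
        try {
            // The https scheme tells the implementation to negotiate SSL/TLS.
            conn = (HttpsConnection) Connector.open("https://host.example.com/login");
            if (conn.getResponseCode() != HttpsConnection.HTTP_OK) {
                // report the error to the user in a meaningful message box
            }
        } finally {
            if (conn != null) {
                conn.close();
            }
        }
    }
}
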
Other Important Issues

• Successful startup and exit: Verify that your application starts up properly and its entry point is
consistent. Also make sure that the application exits properly.
• Classes outside the MIDP and CLDC specifications: Unless you are willing to sacrifice portability and, in
some environments, certification, ensure that the application does not use classes not included in the
MIDP and CLDC specifications.
• User manual: Verify that all product documentation is accurate, and consistent with the software's actual
behavior.
