
Introduction

All software needs to be tested. In fact, software testing is a major part of the overall software
development process that involves many people and countless hours of detailed work. Unfortunately,
most testing efforts are under-planned: some software testing professionals work in the field for years
without ever seeing a really comprehensive QA plan and test suite. Part of the problem is that QA
efforts often begin too late in the release cycle when there is too much pressure to take shortcuts.

This white paper shows how ReadySET Pro can be used to quickly create a comprehensive system test
suite with test cases. ReadySET Pro's unique content-rich templates help you write better test suites
that can improve your product quality. These templates make the writing process much faster than
starting from scratch, which means that you are more likely to be able to complete your plan, even
under time pressure.

The figure below illustrates where your QA plan and test suite fit with other project documents. This
white paper focuses on the yellow "Quality Assurance" box. Ideally, your testing documents are just
part of an overall set of project documents. But even if you do not have the others, you will be able to
follow the discussion of test planning documents.
[Figure: project document map. The Project Proposal (business need, market opportunity, target market segment, claimed customer benefits, user needs, classes of users, user stories, interview notes, requests from customers) feeds the Software Requirements Spec (use case suite and use cases, feature set and feature specifications, non-functional requirements, environmental requirements), which in turn feeds both the Design documents (structural design, behavioral design, user interface, build system, architecture, persistence, security) and the Quality Assurance documents (QA plan; test suite with test cases and test runs).]

Software development projects that don't have enough test planning tend to bog down with defects
that can put the entire project's success at risk. Test planning helps in the following specific areas:
Requirements Validation
Designing a system test suite forces you to deeply understand the requirements. As you
understand the requirements more, you will notice incompleteness, ambiguity, and inconsistency.
Correcting these problems early can speed up development and reduce the number of late
requirements changes.
Testing Coordination
Testing involves many people working together over time. For the team to be effective, their
efforts must be coordinated with a written plan.
Test Coverage
Testing only half of a large system is sure to allow thousands of defects into the shipping product.
A QA plan is needed to set coverage criteria and evaluate coverage. A test suite must be carefully
designed with the coverage criteria in mind.
Test Automation
Too often, QA teams hope to use automated testing, but end up stuck with ad-hoc manual
testing. This happens because they never really formalize the requirements, so they must always
rely on human judgment to evaluate test outputs. Creating automated test scripts without
outlining the test suite is like writing code without a design document. The following diagram
illustrates the gap between ad-hoc testing and automated testing, and how systematic testing
with a test suite bridges that gap.
Ad-Hoc Testing: just see if you can break it; make up test cases "on the fly"; human interpretation of requirements.
→ Systematic Testing: driven by explicit QA goals; test suite designed for coverage; tests specify expected output.
→ Automated Testing: driven by explicit QA goals; test suite designed for coverage; scripts need no human judgment.
The rest of this white paper works through the steps shown in the diagram below. (You may notice
that this is very similar to the use case writing steps.)
Test Suite Writing Steps

1: Overall QA Planning

2: Outline the Test Suite

3: List Test Case Names

4: Write Some Test Case Descriptions

5: Write Selected Test Cases

6: Evaluate Test Cases

Note that in steps 4 and 5, we recommend that you only specify the most important test cases in
detail. In any complex system, there will be a large number of potential test cases. We encourage you
to take a breadth-first approach: map out your test case suite first, then fill in details incrementally as
needed. This concept is key to getting the most value out of the limited time that you have for test
planning.

Step One: Overall QA Planning


Software quality is not one-size-fits-all: different software products need different types of testing
because they have different QA goals. For example, a real-time system may place a much higher priority
on performance than a typical desktop business application would.

The task of QA planning is discussed in detail in the "Quality Throughout the Life-Cycle" white paper.
The main parts of the overall QA plan are:

• Select and prioritize quality goals for this release
• Select QA activities to achieve those goals
• Evaluate how well the activities support the goals
• Plan the actions needed to carry out the activities

The overall QA plan addresses all quality activities. Quality can be achieved by building in better
quality from the start, and by testing to find and remove defects. Specific QA activities include: coding
preconditions, reviewing design and code, unit testing, integration testing, system testing, beta
testing, using analysis tools, and analyzing field failure reports, among others. The rest of this paper
focuses on just the system testing activity.

Step Two: Outline the Test Suite


Once you have prioritized your QA goals, it is time to outline the system test suite. A test suite
document is an organized table of contents for your test cases: it simply lists the names of all test
cases that you intend to write. The suite can be organized in several ways. For example, you can list
all the system components, and then list test cases under each. Or, you could list major product
features, and then list test cases for each of those.
One of the best test suite organizations is to use a grid where the rows are types of business objects
and the columns are types of operations. Each cell in the grid lists test cases that test one type of
operation on one type of object. For example, in an e-commerce system, a Product business object
would have test cases for each of the following operations: adding a product to the system, listing or
browsing products, editing products, deleting products, searching products, and calculating values
related to the product, such as shipping cost or days-until-shipment. The next row in an e-commerce
test suite grid might focus on the Customer Order business object and have test cases for almost all of
the same operations.
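
To make the grid concrete, here is a minimal sketch in Python. The object names, operations, and test case names are illustrative, loosely following the e-commerce example above; the "N/A" and "TODO" markers anticipate step three below.

    # A test suite grid: rows are business object types, columns are operations.
    # Each cell lists the names of the test cases covering that combination.
    test_suite_grid = {
        "Product": {
            "add":    ["product-add-1", "product-add-2"],
            "browse": ["product-browse-1"],
            "edit":   ["product-edit-1"],
            "delete": ["product-delete-1"],
            "search": ["product-search-1"],
            "calc":   ["product-shipping-cost-1", "product-days-until-ship-1"],
        },
        "CustomerOrder": {
            "add":    ["order-add-1"],
            "browse": ["order-browse-1"],
            "edit":   ["order-edit-1"],
            "delete": "N/A",   # orders are cancelled, never deleted
            "search": "TODO",  # logically needed, but not yet written
            "calc":   ["order-total-1", "order-sales-tax-1"],
        },
    }

    # Any "TODO" cell is a clearly visible gap in the test suite.
    for obj, row in test_suite_grid.items():
        for op, cell in row.items():
            if cell == "TODO":
                print(f"Missing test cases: {obj} / {op}")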

The advantage of using an organized list or grid is that it gives you the big picture, and it helps you
put your finger on any area that needs more work. For example, in the e-commerce grid, there might
be a business object "Coupon." It is obvious that shoppers use coupons, but it is easy to forget to test
the ability for administrators to create coupons. If it is overlooked, there will be a clearly visible blank
space in the test suite document. These clear indications of missing test cases allow you to improve
the test suite sooner, make more realistic estimates of the testing time needed, and find more defects.
Defects found sooner can be fixed sooner, and realistic estimates keep management expectations in
sync with reality, which helps keep the project out of crisis-management mode.

Step Three: List Test Case Names


After you have outlined your test suite, this step becomes much easier to do well. Having an organized
system test suite makes it easier to list test cases because the task is broken down into many small,
specific subtasks.

Put your finger, or cursor, on each list item or grid cell in your test suite. Then, for each one, ask
yourself about the relevant system requirements. If you have a written use case document, you will
often be able to turn each use case into one or more test cases. There may be some list items or grid
cells that really should be empty. For example, an e-commerce application might not have any delete
operation for the Customer Order business object. Explicitly mark with "N/A" any cells that logically
should not have test cases. If you cannot think of any test cases for a part of the suite that logically
should have some test cases, explicitly mark it as "TODO".

The name of each test case should be a short phrase describing a general test situation. Append a
unique number to each test for the given test situation. For example: login-1, login-2, login-3 for three
alternative ways to test logging in. And, sales-tax-in-state-1 and sales-tax-out-of-state-1 for two
different situations where collected sales taxes are reported to the government according to two
different procedures. Use distinct test cases when different steps will be needed to test each situation;
a single test case can be reused when the steps are the same and only the input values differ.
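
To illustrate that rule in code, here is a minimal pytest sketch; the login helper and its credentials are hypothetical stand-ins for the real system under test.

    import pytest

    # Hypothetical stand-in for the system under test: returns True when the
    # credentials are valid. A real test would drive the actual UI or API.
    def login(username, password):
        return username == "alice" and password == "secret"

    # login-1 and login-2 are distinct test cases because the steps differ.
    def test_login_1_valid_credentials():
        assert login("alice", "secret") is True

    def test_login_2_wrong_password():
        assert login("alice", "oops") is False

    # A single test case suffices when the steps are identical and only the
    # input values change: parameterize it over the test data.
    @pytest.mark.parametrize("username,password", [
        ("", "secret"),  # empty username
        ("alice", ""),   # empty password
        ("", ""),        # both fields empty
    ])
    def test_login_3_rejects_blank_fields(username, password):
        assert login(username, password) is False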

As you gradually fill in the test suite outline, you may think of features or use cases that should be in
the software requirements specification (SRS), but are not there yet. Quickly note any missing
requirements in the SRS document as you go along.

Before moving on to the next step, it is worth highlighting the value of having a fairly complete test
suite outline. The test suite outline is a useful asset that can help your project succeed. At this point,
you can already get a better feeling for the scope of the testing effort. You can already roughly
prioritize test cases. You are already starting to look at your requirements critically and you may have
identified missing or unclear requirements. And, you can already estimate the level of specification-
based test coverage that you will achieve.

Step Four: Write Some Test Case Descriptions


In step three, you may have generated between ten and fifty test case names on your first pass. That
number will go up as you continue to make your testing more systematic. The advantage of having a
large number of tests is that it usually increases the coverage.

The disadvantage of creating a big test suite is the effort it demands: it could take a long time to
fully specify every test case that you have mapped out, and the resulting document could become so
large that it is hard to maintain.

A good strategy is to be selective before drilling down to the next level of detail. For example, you
might prioritize the test cases based on the priorities of the features or use cases that they test. Also,
it's a good idea to first write descriptions rather than get into detailed steps for each test case. Going
deep into the details of just a few test cases may be enough to shake out ambiguity or incompleteness
in the requirements. The remaining cases should all be specified eventually; however, you might choose
to rely on ad-hoc testing for lower-priority features in early releases.

For each test case, write one to three sentences describing its purpose. The description should provide
enough information so that you could come back to it after several weeks and recall the same ad-hoc
testing steps that you have in mind now. Later, when you actually write detailed steps in the test case,
you will be able to expect any team member to carry out the test the same way that you intended.
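
One lightweight way to record such descriptions is sketched below; the field names and entries are illustrative, not a prescribed format.

    # Each entry pairs a test case name with a short statement of purpose.
    # A priority field helps select which cases get detailed steps first.
    test_case_descriptions = [
        {
            "name": "login-1",
            "priority": "high",
            "description": "Verify that a registered user can log in with "
                           "valid credentials and sees a personalized greeting.",
        },
        {
            "name": "sales-tax-out-of-state-1",
            "priority": "medium",
            "description": "Verify that an out-of-state order is charged no "
                           "sales tax and is reported under the out-of-state "
                           "procedure.",
        },
    ]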

The act of writing the descriptions forces you to think a bit more about each test case. When
describing a test case, you may realize that it should actually be split into two test cases, or merged
with another test case. And again, make sure to note any requirements problems or questions that
you uncover.

Step Five: Write Selected Test Cases


Now it is time for the main event: actually writing the test case steps and specifying test data. This is
a task that you can expect to take ten to forty-five minutes for each test case. That might work out to
approximately ten test cases in a typical work day. So, you must be selective to get the most value in
return for your limited available time.

Focus on the test cases that seem most in need of additional detail. For example, select system test
cases that cover:

• High priority use cases or features
• Software components that are currently available for testing (rather than specifying tests on
components that cannot actually be tested yet)
• Features that must work properly before other features can be exercised (e.g., if login does not work,
you cannot test anything that requires a logged-in user)
• Features that are needed for product demos or screenshots
• Requirements that need to be made more clear

Each test case should be simple enough to clearly succeed or fail, with little or no gray area in
between. Ideally, the steps of a test case are a simple sequence: set up the test situation, exercise the
system with specific test inputs, verify the correctness of the system outputs. You may use
programming constructs such as if-statements or loops, if needed.

Systems that are highly testable tend to have a large number of simple test cases that follow the set-
up-exercise-verify pattern. For those test cases, a one-column format can clearly express the needed
steps. However, not all test cases are so simple. Sometimes it is impractical to test one requirement at
a time. Instead, some system test cases may be longer scenarios that exercise several requirements
and verify correctness at each step. For those test cases, a two-column format can prove useful.
In the one-column format, each step is a brief verb phrase that describes the action that the tester
should take. For example, "enter username," "enter password," "click 'Login'," "see Welcome page,"
and "verify that greeting has correct username" are all steps. Verification of expected outputs are
written using the verbs "see" and "verify." If multiple inputs are needed, or multiple outputs must be
verified, one-column test cases will simply have more steps.
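
As a minimal sketch, a one-column test case can be represented as a simple list of step phrases; the login-1 steps below repeat the example from this paragraph.

    # One-column format: each step is a brief verb phrase. Steps beginning
    # with "see" or "verify" are the expected-output checks.
    login_1 = [
        "enter username",
        "enter password",
        "click 'Login'",
        "see Welcome page",
        "verify that greeting has correct username",
    ]

    for number, step in enumerate(login_1, start=1):
        print(f"{number}. {step}")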

In the two-column format, each test case step has two parts: a test input, and an expected output:
Test Input
The Test Input is a verb phrase describing what the tester should do in that step.
Expected Output
The Expected Output is a noun phrase describing all the output that the tester should observe at
that step.
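
A two-column test case can likewise be sketched as a list of (test input, expected output) pairs; the checkout scenario below is hypothetical.

    # Two-column format: each step pairs a test input (verb phrase) with the
    # expected output (noun phrase) the tester should observe at that step.
    checkout_1 = [
        ("add one in-stock product to the cart", "cart with 1 item and correct price"),
        ("click 'Checkout'", "shipping address form"),
        ("submit a valid shipping address", "order summary with shipping cost"),
        ("click 'Place Order'", "confirmation page with an order number"),
    ]

    for test_input, expected_output in checkout_1:
        print(f"{test_input:40} | {expected_output}")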

You may notice that the two formats for test cases mirror the two formats for use cases. The
difference is that use cases are a form of requirements, whereas test cases deal with more details of
the implemented system. Use cases focus mainly on the user's tasks and how the system supports
those tasks, while specifying as few implementation details as possible. A major advantage of use
cases is that they are simple enough to be read by actual users who can help validate requirements. In
contrast, test cases should be more technical documents, with enough implementation detail to allow
any member of the development team to carry out a test in exactly the same way.

If you have written use cases, they can be copied and pasted as a good starting point for test cases.
When leveraging use cases in this way, make sure to add enough detail to make the test reliably
repeatable.

If you only have one test input value for a given test case, then you could write that test data value
directly into the step where it is used. However, many test cases will have a set of test data values
which must all be used to adequately cover all possible inputs. We encourage you to define and use
test input variables. Each variable is defined with a set of its selected values, and then it is used in test
case steps just as you would use a variable in a programming language. When carrying out the tests,
the tester should repeat each test case with each possible combination of test variable values, or as
many as practical.
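
Here is a minimal sketch of test input variables, assuming hypothetical e-commerce variables; each variable is defined once with its selected values, and the tester (or a test script) iterates over the combinations.

    from itertools import product

    # Each test input variable is defined with a set of selected values and
    # then used in test case steps like a programming-language variable.
    test_variables = {
        "payment_method": ["credit card", "gift certificate"],
        "shipping_speed": ["standard", "overnight"],
        "destination":    ["in-state", "out-of-state"],
    }

    # Repeat the test case once per combination, or as many as practical
    # (2 x 2 x 2 = 8 combinations here).
    names = list(test_variables)
    for combination in product(*test_variables.values()):
        print(", ".join(f"{name}={value}" for name, value in zip(names, combination)))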

Carefully selecting test data is as important as defining the steps of the test case. The concepts of
boundary conditions and equivalence partitions are key to good test data selection. Try these steps to
select test data (a worked sketch follows the list):

• Determine the set of all input values that can possibly be entered for a given input parameter. For
example, the age of a person might be entered as any integer.
• Define the boundary between valid and invalid input values. For example, negative ages are nonsense.
You might also check for clearly unreasonable inputs. For example, an age entered as 200 is much
more likely to be a typo than a user who is actually two-hundred years old.
• Review the requirements and find boundaries in the valid range that should cause the system to
behave in different ways. For example, the system might treat minors differently than adults, so the
boundary would be age 18.
• Now you have a set of equivalence partitions: sets of values that the system should treat uniformly. For
example, all minors are treated one way, and all adults are treated another way. Double check the
requirements to make sure that you have not missed a partition division, e.g., not all adults are old
enough to drink alcohol in the U.S.
• Choose one input value somewhere in the middle of each equivalence partition (e.g., -5, 12, and 44),
one directly on each boundary (e.g., 0 and 18), and one on each side of each boundary (e.g., 1, 17,
and 19). Test data values that are expected to cause errors (e.g., -5) should be tested in separate
robustness test cases.
• In functional correctness test cases, make sure that you have inputs that will force the system to
generate each possible type of response to valid input. And, in robustness test cases, make sure to
force the system to generate each relevant error message.
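
Putting these steps together for the age example, here is a minimal sketch of the resulting test data selection; the boundaries at 0 and 18 come from the hypothetical requirements above.

    # Partitions for age: invalid (below 0), minor (0-17), adult (18 and up).
    mid_partition_values = [-5, 12, 44]  # one value inside each partition
    on_boundary_values   = [0, 18]       # one value directly on each boundary
    near_boundary_values = [1, 17, 19]   # values on each side of each boundary

    all_values = sorted(set(mid_partition_values + on_boundary_values
                            + near_boundary_values))

    # Error-causing values belong in separate robustness test cases.
    robustness_data = [age for age in all_values if age < 0]
    functional_data = [age for age in all_values if age >= 0]
    print("robustness test data:", robustness_data)  # [-5]
    print("functional test data:", functional_data)  # [0, 1, 12, 17, 18, 19, 44]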

Recall that one of the advantages of writing test cases is that it forces you to clearly think through the
requirements. Capture your insights by writing notes and questions as you go. If a test case step
exposes an unclear requirement, make a note of it in the appropriate part of the system requirements
specification.
