
SOFTWARE CONSTRUCTION

AND TESTING
CSSSPEC6
SOFTWARE DEVELOPMENT
WITH QUALITY ASSURANCE
<<professor>>
SOFTWARE CONSTRUCTION
SOFTWARE CONSTRUCTION is a fundamental act of software engineering: the construction of working, meaningful software through a combination of coding, validation, and testing (unit testing) by a programmer.



LOW-LEVEL SOFTWARE CONSTRUCTION
• Verifying that the groundwork has been laid so that construction can proceed
successfully
• Determining how your code will be tested
• Designing and writing classes and routines
• Creating and naming variables and named constants
• Selecting control structures and organizing blocks of statements
• Unit testing, integration testing, and debugging your own code
• Reviewing other team members’ low-level designs and code and having them
review yours
• Polishing code by carefully formatting and commenting it
• Integrating software components that were created separately
• Improving code and design



WHY IS SOFTWARE CONSTRUCTION IMPORTANT?
• Some Reasons
• Construction is a large part of software development
• Construction is the central activity in software development
• With a focus on construction, the individual programmer’s productivity can improve enormously
• Construction’s product, the source code, is often the only accurate description of the software



WHY IS SOFTWARE CONSTRUCTION IMPORTANT?

• Simple Answer
Construction is the only activity that’s
guaranteed to be done



IT’S ALL ABOUT QUALITY
• How do you ensure that the software
• does what it should?
• does it in the correct way?
• is robust?
• is reliable?
• is easy to use?
• is easy to change?
• is easy to correct?
• is easy to test?



WHAT IS SOFTWARE QUALITY?
• Formal Definitions
• The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. (ISO 8402: 1986, 3.1)
• The degree to which a system, component, or process meets specified requirements. (IEEE)
• A Practical One
• A product that satisfies the stakeholders’ needs (compliant product + good quality + delivery within budget/schedule).



QUALITY IS A COLLECTION OF “…ILITIES”
reliability: the ability to operate error-free
reusability: the ability to use parts of the software to solve other software problems
extendibility: the ability to have enhancement changes made easily
understandability: the ability to understand the software readily, in order to change/fix it (also called maintainability)
efficiency: the speed and compactness of the software
usability: the ability to use the software easily
testability: the ability to construct and execute test cases easily
portability: the ability to move the software easily from one environment to another
functionality: what the product does



CHALLENGES IN SOFTWARE DEVELOPMENT
• Reliability [correctness + robustness]
It should be easier to build software that functions correctly, and easier to guarantee what it does.
• Reusability [modifiability + extendibility]
We should be able to build less software by reusing parts of it!
Software should be easier to modify.
• Functionality [+ usability]
Ensure that the software does what the user expects and does it in an easy-to-use way.



SOFTWARE CONSTRUCTION STRATEGIES

• TOP-DOWN → high-level to low-level; user interface to detailed logic
• BOTTOM-UP → the reverse of the above
• MIDDLE-OUT → some of both



QUALITY AND CONSTRUCTION
GOAL
• The goal of software construction is to build a
product that satisfies the quality requirements
• “Good enough” software, not excellent software!



QUALITY AND SOFTWARE CONSTRUCTION
• Functionality [+ usability]
Build software as early as possible and give it to the user as often as possible.
• Reliability [correctness + robustness]
Run and test the software as often as possible.
• Reusability [modifiability + extendibility]
Redesign and improve the source code as often as possible.



CONSTRUCTION PROCESS INFRASTRUCTURE



QUALITY
Quality means “conformance to requirements.”
The best testers can only catch defects that are contrary to specification.
Testing does not make the software perfect.
If an organization does not have good requirements engineering practices, it will be very hard to deliver software that fulfills the users’ needs, because the product team does not really know what those needs are.



TEST PLANS
The goal of test planning is to establish the list of tasks
which, if performed, will identify all of the requirements
that have not been met in the software. The main work
product is the test plan.
The test plan documents the overall approach to the
test. In many ways, the test plan serves as a
summary of the test activities that will be
performed.
It shows how the tests will be organized, and
outlines all of the testers’ needs which must be met
in order to properly carry out the test.
The test plan should be inspected by members of the
engineering team and senior managers.
TEST PLAN OUTLINE



TEST CASES
A test case is a description of a specific interaction that a tester will have with the software in order to test a single behavior. Test cases are very similar to use cases, in that they are step-by-step narratives which define a specific interaction between the user and the software.
A typical test case is laid out in a table, and includes:
• A unique name and number
• A requirement which this test case is exercising
• Preconditions which describe the state of the software before the test case (which is often a previous test case that must always be run before the current test case)
• Steps that describe the specific steps which make up the interaction
• Expected Results which describe the expected state of the software after the test case is executed
Test cases must be repeatable.
Good test cases are data-specific, and describe each interaction
necessary to repeat the test exactly.
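Hypothetically, the same fields can be mirrored in code. Below is a minimal sketch of the table layout as a Python data structure; the field names follow the list above, and every concrete value is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the table layout described above.
    name: str                    # a unique name and number
    requirement: str             # the requirement this test case exercises
    preconditions: list[str]     # state of the software before the test
    steps: list[str]             # the specific steps making up the interaction
    expected_results: list[str]  # expected state after execution

# Hypothetical, data-specific example; every value below is invented.
tc = TestCase(
    name="TC-47: search and replace, case-insensitive",
    requirement="REQ-12: replacement preserves the replacement term's case",
    preconditions=["Document 'sample.txt' is open", "TC-46 has already run"],
    steps=[
        "Open Search and Replace",
        "Enter 'cat' in the search term field",
        "Enter 'Cat' in the replacement field",
        "Turn case sensitivity off and execute the search",
    ],
    expected_results=["Every occurrence of 'cat' is replaced by 'Cat'"],
)
print(tc.name)
```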
TEST CASES – GOOD EXAMPLE



TEST CASES – BAD EXAMPLE
Steps:
1. Bring up search and replace.
2. Enter a lowercase word from the document in the search term field.
3. Enter a mixed-case word in the replacement field.
4. Verify that case sensitivity is not turned on and execute the search.
Expected Results:
1. Verify that the lowercase word has been replaced with the mixed-case term in lowercase.
TEST EXECUTION
The software testers begin executing the test plan after the programmers deliver the alpha build, or a build that they feel is feature complete.
• The alpha should be of high quality: the programmers should feel that it is ready for release, and as good as they can get it.
There are typically several iterations of test execution.
• The first iteration focuses on new functionality that has been added since the last round of testing.
• A regression test is a test designed to make sure that a change to one area of the software has not caused any other part of the software which had previously passed its tests to stop working.
• Regression testing usually involves executing all test cases which have previously been executed; a small example is sketched below.
• There are typically at least two regression tests for any software project.
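As a concrete illustration (the scenario and the function below are invented, not from any real project), a regression test pins down behavior that once failed, so the defect cannot silently return in a later build:

```python
# Stand-in for the unit under test; in a real project this would be imported.
def replace_word(text: str, old: str, new: str) -> str:
    return text.replace(old, new)

def test_replacement_preserves_case():
    # Hypothetical past defect: mixed-case replacements were once lowercased.
    # Re-run in every regression pass so the fix stays fixed.
    assert replace_word("the cat sat", "cat", "Cat") == "the Cat sat"

if __name__ == "__main__":
    test_replacement_preserves_case()
    print("regression check passed")
```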
TEST EXECUTION
When is testing complete?
– No defects found
– Or defects meet acceptance criteria outlined in test plan



DEFECT TRACKING
The defect tracking system is a program that testers use to
record and track defects. It routes each defect between
testers, developers, the project manager and others,
following a workflow designed to ensure that the defect is
verified and repaired.
• Every defect encountered in the test run is recorded and entered into a defect tracking system so that it can be prioritized.
• The defect workflow should track the interaction between the testers who find the defect and the programmers who fix it. It should ensure that every defect can be properly prioritized and reviewed by all of the stakeholders to determine whether or not it should be repaired. This process of review and prioritization is referred to as triage (a workflow sketch follows below).
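The triage workflow can be pictured as a small state machine. The sketch below is purely illustrative; real defect trackers (Bugzilla, Jira, and the like) define their own states and transitions:

```python
# Illustrative defect workflow: each state lists the states it may move to.
WORKFLOW = {
    "NEW":      ["TRIAGED", "REJECTED"],   # triage prioritizes or rejects
    "TRIAGED":  ["ASSIGNED"],
    "ASSIGNED": ["FIXED"],
    "FIXED":    ["VERIFIED", "ASSIGNED"],  # tester verifies or reopens
    "VERIFIED": ["CLOSED"],
    "REJECTED": ["CLOSED"],
    "CLOSED":   [],
}

def move(state: str, new_state: str) -> str:
    if new_state not in WORKFLOW[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "NEW"
for step in ["TRIAGED", "ASSIGNED", "FIXED", "VERIFIED", "CLOSED"]:
    state = move(state, step)
    print(state)
```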
TEST ENVIRONMENT AND PERFORMANCE TESTING
The project manager should ask questions about desired performance as early as the vision and scope document:
– How many users?
– Concurrency? Peak times?
– Hardware? OS? Security?
– Updates and Maintenance?
Adequate performance testing will usually require a large investment in
duplicate hardware and automated performance evaluation tools.
– ALL hardware should match (routers, firewalls, load balancers)
– If the organization cannot afford this expense, they should not
be developing the software and should seek another solution.



SMOKE TESTS
A smoke test is a subset of the test cases that is typically representative of the overall test plan.
• Smoke tests are good for verifying proper deployment or other non-invasive changes.
• They are also useful for verifying that a build is ready to send to test.
• Smoke tests are not a substitute for actual functional testing (see the sketch below).
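One common way to carve a smoke-test subset out of a full suite is with markers. A minimal sketch assuming pytest; the marker name "smoke" is a project convention, not a pytest built-in:

```python
import pytest

@pytest.mark.smoke
def test_application_starts():
    # Representative check: the build deploys and basic wiring works.
    assert 1 + 1 == 2  # stand-in for a real startup check

def test_full_search_and_replace_matrix():
    # Exhaustive functional test: part of the full plan, not the smoke run.
    assert "cat".replace("c", "C") == "Cat"
```

Running "pytest -m smoke" then executes only the marked subset; registering the marker in pytest.ini keeps pytest from warning about it.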



TEST AUTOMATION
Test automation is a practice in which testers employ a software tool to reduce or eliminate repetitive tasks.
• Testers either write scripts or use record-and-playback tools to capture user interactions with the software being tested.
• This can save the testers a lot of time if many iterations of testing will be required.
• It costs a lot to develop and maintain automated test suites, so it is generally not worth developing them for tests that will be executed only a few times (see the sketch below).
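Scripted automation often amounts to replaying one interaction over many inputs. A minimal sketch, again assuming pytest: parametrization turns a single scripted test into many repeatable iterations:

```python
import pytest

# Each tuple is one recorded interaction: (document, search, replace, expected).
CASES = [
    ("the cat sat", "cat", "dog", "the dog sat"),
    ("aaa", "a", "b", "bbb"),
    ("no match here", "xyz", "q", "no match here"),
]

@pytest.mark.parametrize("text,old,new,expected", CASES)
def test_replace(text, old, new, expected):
    assert text.replace(old, new) == expected
```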
OVERVIEW OF SOFTWARE TESTING STRATEGIES

• Integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software.
• A testing strategy must always incorporate test planning, test case design, test execution, and the resultant data collection and evaluation.



OVERVIEW OF SOFTWARE TESTING STRATEGIES
Generic characteristics of all software testing strategies:
1. Testing begins at the module level and works "outward" toward the integration of the entire computer-based system
2. Different testing techniques are appropriate at different
points in time
3. Testing is conducted by the developer of the software
and (for large projects) an independent test group
4. Testing and debugging are different activities, but
debugging must be accommodated in any testing
strategy
VERIFICATION AND VALIDATION

• Verification refers to the set of activities that ensure that software correctly implements a specific function. It asks the question: “Are we building the product right?”
• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. It asks the question: “Are we building the right product?”



ORGANIZING FOR SOFTWARE TESTING

Misconceptions in software testing:

1. That the developer of software should not do any testing at all;
2. That the software should be "tossed over the wall" to strangers who will test it mercilessly;
3. That testers get involved with the project only when the testing steps are about to begin.



SOFTWARE TESTING STRATEGY



SOFTWARE TESTING STRATEGY
1. A strategy for software testing moves outward along the spiral.
2. Unit testing begins at the vortex of the spiral and concentrates
on each unit of the software as implemented in the source
code.
3. Testing progresses by moving outward along the spiral to
integration testing, where the focus is on the design and the
construction of the software architecture.
4. Validation testing is next encountered, where requirements
established as part of software requirement analysis are
validated against the software that has been constructed.
5. Finally, system testing is reached, where the software and other system elements are tested as a whole.
SOFTWARE TESTING STRATEGY
1. Unit tests: focus on each module and make heavy use of white-box testing
2. Integration tests: focus on the design and construction of the software architecture; black-box testing is most prevalent, with limited white-box testing
3. High-order tests: conduct validation and system tests; make use of black-box testing exclusively



UNIT TESTING
• Unit testing focuses verification effort on the smallest unit of software design: the module.
• Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
• The unit test is always white-box oriented (see the sketch below).
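As a minimal illustration of white-box unit testing (the module and its branches below are invented), the test cases are chosen so that every control path through the module is exercised:

```python
import unittest

# Module under test: two control paths, so path coverage needs two cases.
def classify(n: int) -> str:
    if n < 0:
        return "negative"
    return "non-negative"

class ClassifyPaths(unittest.TestCase):
    def test_negative_branch(self):       # exercises the 'if' path
        self.assertEqual(classify(-1), "negative")

    def test_non_negative_branch(self):   # exercises the fall-through path
        self.assertEqual(classify(0), "non-negative")

if __name__ == "__main__":
    unittest.main()
```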



UNIT TESTING



UNIT TESTING PROCEDURES
1. Because a module is not a stand-alone program, driver and/or stub software must be developed for each unit test.
2. A driver is nothing more than a "main program" that accepts test case data, passes such data to the module (to be tested), and prints the relevant results.
3. Stubs serve to replace modules that are subordinate to (called by) the module to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do nominal data manipulation, prints verification of entry, and returns.
4. Drivers and stubs also represent overhead: both must be written but are typically not delivered with the final product (see the sketch below).
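A hedged sketch of this vocabulary: price_order is the module under test, tax_lookup_stub stands in for a subordinate module that does not exist yet, and the driver is the "main program" that feeds test case data and prints results. All names are illustrative:

```python
# Module under test: calls a subordinate module to look up the tax rate.
def price_order(subtotal, tax_lookup):
    return subtotal * (1 + tax_lookup("default-region"))

# Stub: replaces the subordinate module; verifies entry, returns nominal data.
def tax_lookup_stub(region):
    print(f"stub entered with region={region}")  # verification of entry
    return 0.10                                  # nominal data manipulation

# Driver: a minimal "main program" that feeds test cases and prints results.
if __name__ == "__main__":
    for subtotal, expected in [(100.0, 110.0), (0.0, 0.0)]:
        result = price_order(subtotal, tax_lookup_stub)
        status = "PASS" if abs(result - expected) < 1e-9 else "FAIL"
        print(f"subtotal={subtotal} result={result} {status}")
```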
INTEGRATION TESTING
• Integration testing: a technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
• Objective: combine unit-tested modules and build a program structure that has been dictated by design.
• Two types: top-down integration; bottom-up integration.



TOP-DOWN TESTING: INTEGRATION PROCESS
1. The main control module is used as a test driver and stubs
are substituted for all modules directly subordinate to the
main control module
2. Subordinate stubs are replaced one at a time with actual
modules
3. Tests are conducted as each module is integrated
4. On the completion of each set of tests, another stub is
replaced with the real module
5. Regression testing (i.e., conducting all or some of the
previous tests) may be conducted to ensure that new errors
have not been introduced
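To make the process concrete, here is a miniature sketch in Python (the module names follow the generic A/B/C structure of the next slide; everything else is invented): A is tested first against stubs, and the stubs are then replaced one at a time, re-running the tests after each replacement.

```python
# Step 1: the main control module A is exercised with stubs for B and C.
def b_stub():
    return "b-stub"

def c_stub():
    return "c-stub"

def module_a(b, c):
    return f"A({b()},{c()})"

def test_a(b, c):
    assert module_a(b, c).startswith("A(")

# Real modules that will replace the stubs one at a time (steps 2-4).
def module_b():
    return "b-real"

def module_c():
    return "c-real"

test_a(b_stub, c_stub)      # all subordinates stubbed
test_a(module_b, c_stub)    # B integrated, C still stubbed
test_a(module_b, module_c)  # B and C integrated; earlier tests re-run
print("top-down integration steps passed")
```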
TOP-DOWN TESTING
For a program structure in which module A controls modules B, C, and D, the following test cases may be derived if top-down integration is conducted:

• Test case 1: Modules A and B are integrated
• Test case 2: Modules A, B and C are integrated
• Test case 3: Modules A, B, C and D are integrated (etc.)



TOP-DOWN TESTING
• There is a major problem in top-down integration:
inadequate testing at upper levels when data flows at low
levels in the hierarchy are required
Solutions to the above problem
1. Delay many tests until stubs are replaced with actual modules; but this can lead to difficulties in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach
2. Develop stubs that perform limited functions that
simulate the actual module; but this can lead to
significant overhead
3. Perform bottom-up integration
BOTTOM-UP TESTING
1. Low-level modules are combined into clusters
(sometimes called builds) that perform a
specific software subfunction
2. A driver (a control program for testing) is
written to coordinate test case input and
output
3. The cluster is tested
4. Drivers are removed and clusters are
combined moving upward in the program
structure
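Bottom-up integration reverses the roles: the low-level modules are real from the start, and a throwaway driver coordinates the cluster test. A minimal sketch with invented module names:

```python
# Cluster of low-level modules implementing one subfunction (names invented).
def parse(raw: str) -> list[int]:
    return [int(x) for x in raw.split(",")]

def total(values: list[int]) -> int:
    return sum(values)

# Driver: coordinates test case input and output for the cluster.
if __name__ == "__main__":
    for raw, expected in [("1,2,3", 6), ("10", 10)]:
        got = total(parse(raw))
        print(f"input={raw!r} got={got} expected={expected}",
              "PASS" if got == expected else "FAIL")
```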



BOTTOM-UP TESTING
Test case 1: Modules E and F are integrated
Test case 2: Modules E, F and G are integrated
Test case 3: Modules E, F, G and H are integrated
Test case 4: Modules E, F, G, H and C are integrated (etc.)
Drivers are used throughout.



VALIDATION TESTING
• Validation testing: ensuring that software functions in a manner that can be reasonably expected by the customer.
• Achieved through a series of black-box tests that demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
• A series of acceptance tests (including both alpha and beta testing) is conducted with the end users.



VALIDATION TESTING
Alpha testing
1. Is conducted at the developer's site by a
customer
2. The developer supervises the testing
3. Is conducted in a controlled environment

Beta testing
1. Is conducted at one or more customer sites by
the end user of the software
2. The developer is generally not present
3. Is conducted in a "live" environment



SYSTEM TESTING
• Ultimately, software is only one component of a larger
computer-based system. Hence, once software is
incorporated with other system elements (e.g. new
hardware, information), a series of system integration
and validation tests are conducted.
• System testing is a series of different tests whose
primary purpose is to fully exercise the computer-
based system.
• Although each system test has a different purpose, all
work to verify that all system elements have been
properly integrated and perform allocated functions.



RECOVERY TESTING
• A system test that forces software to fail in a variety of ways and verifies that recovery is properly performed.
• If recovery is automatic, reinitialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness.
• If recovery is manual, the mean time to repair is evaluated to determine whether it is within acceptable limits.
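A minimal sketch of an automatic-recovery check; the service, its checkpointing, and the failure injection below are all invented for illustration:

```python
# Invented example: a counter service with checkpointing and restart.
class Service:
    def __init__(self):
        self.count, self.checkpoint = 0, 0
    def work(self):
        self.count += 1
        self.checkpoint = self.count   # checkpoint after each unit of work
    def crash_and_restart(self):
        self.count = self.checkpoint   # recovery: restore from checkpoint

def test_automatic_recovery():
    s = Service()
    for _ in range(3):
        s.work()
    s.crash_and_restart()              # force the failure
    assert s.count == 3                # verify recovery was properly performed

test_automatic_recovery()
print("recovery verified")
```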



SECURITY TESTING
• Security testing attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration.
• Particularly important to a computer-based system that manages sensitive information or is capable of causing actions that can improperly harm (or benefit) individuals when targeted.
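As a tiny illustration, a security test feeds the system input crafted for improper penetration and asserts that the protection mechanism holds. Both the helper and the attack string below are invented for this sketch:

```python
# Invented protection mechanism: reject path-traversal file names.
def safe_filename(name: str) -> str:
    if ".." in name or name.startswith(("/", "\\")):
        raise ValueError("improper path rejected")
    return name

def test_rejects_path_traversal():
    try:
        safe_filename("../../etc/passwd")   # attempted penetration
    except ValueError:
        return                              # protection held
    raise AssertionError("traversal was not rejected")

test_rejects_path_traversal()
print("security check passed")
```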



STRESS TESTING
• Stress testing is designed to confront programs with abnormal situations where an unusual quantity, frequency, or volume of resources is demanded.
• A variation is called sensitivity testing: it attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
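A minimal stress-testing sketch: it escalates input volume far past normal against an invented unit and checks that processing stays correct; in a real stress test the volumes and the system under test would come from the test plan.

```python
# Invented unit under stress: should stay correct as volume grows abnormally.
def total(values):
    return sum(values)

for size in (10, 10_000, 1_000_000):   # escalate volume abnormally
    data = list(range(size))
    assert total(data) == size * (size - 1) // 2
    print(f"stable at volume {size}")
```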



PERFORMANCE TESTING
• This mode of testing seeks to test the run-time
performance of software within the context of
an integrated system.
• Extra instrumentation can monitor execution
intervals, log events (e.g., interrupts) as they occur,
and sample machine states on a regular basis
• Use of instrumentation can uncover situations that
lead to degradation and possible system failure
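Instrumentation can be as simple as timing execution intervals. A minimal sketch using Python's time.perf_counter; the workload is a stand-in for the integrated system under test:

```python
import time

def workload():
    return sum(i * i for i in range(100_000))  # stand-in for real work

# Sample execution intervals on a regular basis and log them.
samples = []
for run in range(5):
    start = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - start)
    print(f"run {run}: {samples[-1]:.4f}s")

print(f"worst interval: {max(samples):.4f}s")  # watch for degradation
```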

