By
Ajay Kumar Jha
Master of Science
in
Software Engineering
Melbourne, Florida
February 2007
_________________________
Cem Kaner, J.D., Ph.D.
Professor and Thesis Advisor
Computer Science
_________________________
Pat Bond, Ph.D.
Associate Professor
Computer Science
_________________________
Ivica Kostanic, Ph.D.
Professor and Director
Electrical and Computer Engineering
_________________________
William Shoaff, Ph.D.
Associate Professor and Head
Computer Science
Abstract
TITLE: A Risk Catalog for Mobile Applications
AUTHOR: Ajay Kumar Jha
MAJOR ADVISOR: Cem Kaner, J.D., Ph.D.
I used a sample mobile application in the education domain to build and refine the
risk catalog. Thereafter, I conducted a pilot study to determine the efficacy of this
risk catalog by using it as a guide to test a real-time financial mobile application.
Finally, I updated and enriched the risk catalog by building, analyzing and testing a
mobile application utilizing Web services.
Table of Contents
Abstract .................................................................................................................... iii
Table of Contents ..................................................................................................... iv
Keywords ................................................................................................................. xii
List of Figures ........................................................................................................ xiii
List of Tables .......................................................................................................... xiv
Acknowledgments ................................................................................................... xv
Dedication .............................................................................................................. xvi
Chapter 1: Introduction ............................................................................................. 1
1.1 Mobile Applications ........................................................................................ 2
1.1.1 What is a Mobile Application? ................................................................. 2
1.1.2 Types of Mobile Applications Considered ............................................... 6
1.2 Goal of the Thesis ............................................................................................ 6
1.2.1 Challenges in Testing Mobile Applications ............................................. 6
1.2.2 Solution Approach .................................................................................... 8
1.3 Organization of the Thesis .............................................................................. 9
Chapter 2: Risk-Based Testing ................................................................................ 12
2.1 Software Risks ............................................................................................... 12
2.1.1 Risk ......................................................................................................... 12
2.1.2 Types of Software Risks......................................................................... 13
2.1.3 Concept of Risk in Software Testing ..................................................... 14
2.2 Risk Analysis Methods .................................................................................. 15
2.2.1 Heuristic Analysis .................................................................................. 17
2.2.2 Hazard and Operability Study ................................................................ 17
2.2.3 Failure Mode and Effects Analysis ........................................................ 18
2.2.4 Fault Tree Analysis ................................................................................ 22
8.5.2 Testing the Smartphone Application Using the Risk Catalog .............. 172
Chapter 9: Conclusions and Closing Remarks ...................................................... 175
9.1 Usage and Utility of the Risk Catalog ......................................................... 175
9.2 Next Steps with the Risk Catalog ................................................................ 175
Appendix A: Overview of Mobile Computing Technology ................................. 177
Mobile Applications ......................................................................................... 177
Mobile Content Delivery and Middleware ....................................................... 178
Client-Side Devices .......................................................................................... 183
Wireless Networking Infrastructure ................................................................. 185
Glossary and Acronyms.................................................................................... 189
Appendix B............................................................................................................ 190
Issues Faced During Testing............................................................................. 190
Feedback Forms ................................................................................................ 201
Appendix C: Institutional Research Board Forms ................................................ 209
Student Application Research Involving Human Subjects............................... 209
Consent Form ................................................................................................... 214
Appendix D: Source Code Listing ........................................................................ 219
Form1.cs ........................................................................................................... 219
WSDL ............................................................................................................... 228
Service.cs .......................................................................................................... 233
References ............................................................................................................. 237
Keywords
Failure mode and effects analysis
Failure mode catalog
Heuristics
Mobile applications
Mobile application failures
Risk analysis
Risk-based testing
Risk catalog
Service-oriented architecture
Software testing
Web services
Wireless network failures
Wireless technology
List of Figures
Figure 1-1: Framework for mobile applications........................................................ 4
Figure 2-1: FTA on mobile book catalog ................................................................ 24
Figure 2-2: ETA on mobile book catalog ................................................................ 26
Figure 2-3: Cause consequence analysis on mobile book catalog .......................... 28
Figure 3-1: Heuristic test strategy model ................................................................ 46
Figure 4-1: Overview of Risk Catalog .................................................................... 55
Figure 6-1: Screenshot of Cells ............................................................................. 132
Figure 6-2: Summation of a data range using Cells .............................................. 133
Figure 6-3: PAAM on an instructor's device ........................................... 134
Figure 7-1: MidCast launching splash screens ...................................................... 144
Figure 7-2: Day chart window............................................................................... 145
Figure 8-1: Service oriented architecture .............................................................. 154
Figure 8-2: Web services framework .................................................................... 158
Figure 8-3: Mobile book catalog application layers .............................................. 164
Figure 8-4: User interface of the mobile book catalog .......................................... 165
Figure 8-5: Populated list-view with book information ........................................ 168
Figure 8-6: Overview of the risk catalog using mindmap ..................................... 173
Figure 8-7: Heuristic risk analysis for product elements ...................................... 174
Figure 9-1: Schematic of the Microsoft .NET Compact Framework ................ 179
List of Tables
Table 2-1: FMEA on the mobile Web services application .................................... 21
Table 3-1: Heuristics in How to Solve It .............................................................. 37
Table 3-2: Diversity Among Taxonomies ............................................................... 47
Table 4-1: Categorization of the Risk Catalog ........................................................ 57
Acknowledgments
I am deeply grateful to my advisor Dr. Cem Kaner for his guidance and
unfailing support through the trials of my graduate study at Florida Institute
of Technology. He has always been available for discussions and has
provided very valuable and insightful inputs.
I thank James Bach, who provided me with valuable input to my thesis and
guided me whenever I needed his help.
I thank my committee members Dr. Pat Bond and Dr. Ivica Kostanic for
their useful suggestions and comments on my thesis.
I thank Sam Oswald and Georgi Nikolov for their help with testing some
sample applications and providing very valuable feedback on the strengths
and weaknesses of the risk catalog.
I thank my family, my lab mates at the Center for Software Testing and Research,
and my friends for the constant encouragement and support they provided. I
thank Amy Bowman for providing feedback on my work.
I thank Shagun Kumar for helping me with the printing of this thesis.
Finally, I would like to thank my wife, Deepika, who was always available
to assist me by providing feedback, suggestions for improvement and help
in all ways possible.
Dedication
This thesis is dedicated to the English author Aldous Leonard Huxley (July 26,
1894 – November 22, 1963), whose writings have been the biggest source of inspiration
and intellectual quest for me over the years (http://somaweb.org/index.html, last
accessed January 29, 2007).
I also dedicate this thesis to my best friend and wife, Deepika Gupta, who was a
constant source of motivation and without whose support this work would have
been impossible.
Chapter 1: Introduction
Handheld devices are evolving and becoming increasingly complex with the
continuous addition of features and functionalities. The rapid proliferation of the
Internet Protocol (IP)-based wireless networks, the maturation of cellular
technology, and the business value discovered in deploying mobile solutions in
different sectors like education, enterprise, entertainment, and personal productivity
are some of the drivers of these changes. Computing and communication
technologies are converging, as with communications-enabled Personal Digital
Assistants (PDAs) and smart phones, and the mobile landscape is being flooded
with devices in a wide variety of form factors.
Testing is challenging in the handheld, wireless world because problems are new,
or they show up in new ways. Not many software testers have experience testing
this new breed of applications; consequently, they run out of test ideas and test
cases. To facilitate risk-based testing in this area, I present and organize a catalog
of a broad set of risks that are publicly reported and potential failures that could
occur with mobile applications running on handheld devices. A software tester of
mobile applications can derive test ideas for more-focused testing that explores the
risks outlined in this risk catalog and, accordingly, can achieve better test coverage
by executing the tests based on different categories of risks.
Another way to categorize mobile applications is by the layering of the system,
which is based on its software and hardware infrastructure. Varshney and Vetter
(2002) proposed a framework for mobile commerce application development to
separate the responsibilities and functionalities provided by different entities, and
to implement mobile systems.
The framework developed by Varshney and Vetter (2002), shown in the figure, has
two planes: the user plane, and the developer and provider plane. The framework
has four layers in the user plane: m-commerce applications, user infrastructure,
middleware, and network infrastructure. Each layer has a well-defined
responsibility and provides a standard interface to the adjoining layers. For
example, the user infrastructure layer shows that the design of new mobile
applications should take into consideration the general capabilities of the mobile
device, and should not be device-specific. Similarly, the middleware layer hides the
details of the underlying wireless network from the application layer. In the
developer and provider plane, this framework has separation of responsibilities
between the application developer, content providers, and the service providers.
Varshney and Vetter (2002) state that: "Each one of these could build its products
and services using the functionalities provided by others. A content provider can
build its service using applications from multiple application developers. They can
also aggregate content from other content providers and can supply the aggregated
content to a network operator or service provider. Service providers can also act as
content aggregators, but are unlikely to act as either an application or content
provider due to their focus on the networking and service aspects of m-commerce.
A service provider can also act as a clearing house for content and application
providers in advertising and distributing their products to its customers. In any
case, the developer and provider plane in our framework is likely to have multiple
layers."
Mobile application layer: This layer includes the application software that
is responsible for user authentication and privacy, for establishing the
communication partners, and for determining the constraints on data and
other application services.
Also, with diverse mobile applications available, and rapidly evolving mobile
technology, testers find themselves at a loss to identify and create a risk profile
because of a lack of experience in this domain. Therefore, test cases may not be
sufficiently powerful, focused, or comprehensive (Jha & Kaner, 2003).
Test automation often expedites test execution by reducing the manual input
required; it saves substantial time by letting computers run repetitive tests. For
mobile applications, however, even mundane tests are difficult to automate because
of inherent hardware constraints, such as limited memory and weak processing
power, on the devices where these applications run. These tests have to be executed
manually, which demands more testing resources and time. Risk-based test case
prioritization therefore becomes increasingly important, to minimize the number of
tests and to separate the more powerful tests from the weak ones. A risk catalog
helps with test case prioritization by allowing the software tester to focus on the
failure categories of interest and to map the risks in the application under test from
a pre-structured risk profile.
One of my goals in developing a risk catalog is to broaden the risk analysis that
testers use to guide their testing. A catalog provides a wider range of examples (and
categories of risk) than any one person is likely to think of while designing her or
his tests. It also provides training material for testers new to wireless mobile
applications to come up to speed quickly (Jha & Kaner, 2003).
I have used this catalog to test three mobile applications in different sectors
(education, enterprise, and mobile Web services), to refine and enrich the failure
categories, and to provide more examples of potential and known faults and failures
that could occur in these domains. The risk catalog is presented in chapter 5 of this
thesis; its usage in testing mobile applications is described in chapters 6, 7, and 8.
Chapter 3: Heuristics and Risk Catalogs introduces the concept of heuristics and
an application of risk-based heuristics testing in a popular model to strategize
testing. It describes and contrasts errors, faults, and failures,
and clarifies the kind of examples that are populated in the risk catalog for mobile
applications. It briefly outlines the previous work researchers have done on
taxonomies and bug catalogs.
Chapter 4: Building and Using a Risk Catalog contains the experience report in
using heuristics to organize the failure lists by restructuring the bug taxonomies
created earlier. Kaner, Falk, and Nguyen (1999) published a taxonomy for common
software errors as an appendix in the book, Testing Computer Software. This
chapter also provides a high-level overview of key categories and structure of the
risk catalog for testing mobile applications.
Chapter 5: Mobile Application Risk Catalog contains the risk catalog for mobile
applications. The risk catalog contains the known and potential problems in mobile
applications, categorized by quality attributes and product elements. This catalog is
used to test three different types of applications to refine, validate, and enhance the
initial version of the risk catalog developed in 2003 (Jha & Kaner, 2003).
Chapter 6: Refining the Risk Catalog describes the process of refining the risk
catalog. Two mobile applications in education were tested using the initial risk list,
and the catalog developed during the testing of these applications. Cells and Palm
OS Artifact and Assessment Manager (PAAM) were the applications tested.
Chapter 9: Conclusions and Closing Remarks summarizes the usage of the risk
catalog and the possible next steps for the development of the risk catalog.
Appendix D contains the source code listing for the sample application developed
utilizing mobile Web services. This application is described in chapter 8 of this
thesis.
2.1.1 Risk
The American Heritage Dictionary of the English Language (2000) offers a formal
definition of risk. In simple terms, a risk is a potential future loss or harm that can
occur if proper remedial action is not taken.
Poyhonen (2001) states that risk has many aspects.
Risks can also be quantified as the product of two factors: the severity of the
potential failure and the probability of its occurrence (Rosenberg, Stapko, & Gallo,
1999, p. 1).
Risk = Σ p(E_i) · c(E_i), summed over i = 1, 2, …, n,

where n is the number of unique failure events, E_i are the possible failure events,
p is the probability of an event, and c is its cost.
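As a sketch of this formula (the events, probabilities, and costs below are illustrative assumptions, not values from the thesis):

```python
# Quantified risk: the sum, over all unique failure events, of each
# event's probability times its cost. All numbers here are illustrative
# assumptions for a mobile book catalog example.
failure_events = [
    # (event E_i, probability p(E_i), cost c(E_i) in arbitrary units)
    ("items do not download", 0.10, 500),
    ("application does not open", 0.02, 1000),
    ("stale prices are displayed", 0.05, 200),
]

risk = sum(p * c for _event, p, c in failure_events)
print(risk)  # 0.10*500 + 0.02*1000 + 0.05*200 = 80.0
```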
Project Risk: These risks relate to the project in its own context. Factors
such as availability of skills, suppliers, contractual agreements, availability
of tools, schedules, budget, and so on determine the fate of the project.
Project management takes on the responsibility of handling these issues.
There could be many risks associated with a project, and the most severe one
is project chaos. The Software Engineering Institute at Carnegie Mellon
University published a method of risk identification at the project level.
This is available at
http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr06.93.pdf, last
accessed December 11, 2006 (Carr, 1993).
Bach (2003) proposes following a generic chain of cause and effect to maintain the
frame of reference while identifying risks. He defines the following terms:
Problem: "Something the product does that we wish it wouldn't do. (You can
also call this 'failure,' but I can imagine problems that aren't failures,
strictly speaking.)"
There are many well-known risk analysis techniques used in multiple disciplines.
Some techniques relevant to this thesis are explained in the following section.
Product risk analysis methods are broadly divisible into qualitative and quantitative
categories (McGary, 2005).
Qualitative risk analysis is based on the experience and intuition of the person
carrying out the risk analysis. It uses techniques like brainstorming, surveys,
interviews, and polling to prioritize risks.
Quantitative risk analysis applies statistical techniques to evaluate the effect of risk
events on the project objectives.
There are advantages and disadvantages in both quantitative and qualitative risk
analysis methodologies. Quantitative risk analysis uses the familiar mathematical
language of probability and statistics. Because it produces concrete numbers, the
outcome of the analysis is easier to communicate, and it supports statistical
analysis of risks, which can feed the cost-benefit analyses that management uses to
make decisions. The disadvantage of quantitative analysis is the uncertainty in the
risk results, owing to the assignment of numerical scores to items that are
intrinsically qualitative. Quantitative analysis also requires complex calculations
and skilled manpower for gathering input data and for successful computation
(McGary, 2005).
Sections 2.2.1 through 2.2.5 describe five different risk analysis methods. I have
provided examples of how to carry out each risk analysis using a sample mobile
Web services application. Details of the application are provided in Section 8.3.
The sample application runs on a Windows Mobile device. It displays a list of
books and their prices on the handheld screen: the user clicks a button labeled
"Get Items," and a list of books and their prices downloads from a database server
using a Web service. The books, along with their prices, are shown in the grid-view
widget of the Windows Mobile handheld.
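As a hypothetical sketch of the flow just described (the real client is a .NET application calling a Web service; the function names and book data below are assumptions for illustration only):

```python
# Hypothetical "Get Items" flow: fetch (title, price) rows from a
# service and format them for display in a grid view. The stub below
# stands in for the actual Web service call.

def get_items(fetch_from_service):
    """Download book rows and format prices for the grid view."""
    rows = fetch_from_service()
    return [(title, f"${price:.2f}") for title, price in rows]

def fake_service():
    # Assumed sample data; the real list comes from the database server.
    return [("Testing Computer Software", 39.99), ("How to Solve It", 11.95)]

grid = get_items(fake_service)
print(grid[0])  # ('Testing Computer Software', '$39.99')
```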
is identified. Considering possible flow problems in a process line, for example, the
guide word MORE OF corresponds to a high flow rate, while LESS THAN
corresponds to a low flow rate. The consequences of the hazard, and measures to
reduce the frequency with which the hazard will occur, are then discussed. This
technique has gained wide acceptance in the process industries as an effective tool
for plant safety and operability improvements (Jouko & Veikko, 1993).
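A HAZOP-style pass can be sketched as combining guide words with system parameters to enumerate candidate deviations; the parameters below are illustrative assumptions for a mobile application:

```python
# HAZOP sketch: pair each guide word with each system parameter to
# generate deviation hypotheses worth investigating. The parameters
# are illustrative assumptions, not taken from the thesis.
guide_words = ["NO", "MORE OF", "LESS THAN", "REVERSE", "OTHER THAN"]
parameters = ["data flow from server", "response time", "battery power"]

deviations = [f"{g} {p}" for g in guide_words for p in parameters]
print(len(deviations))  # 5 guide words x 3 parameters = 15
print(deviations[0])    # NO data flow from server
```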
discipline to software, see the discussions in
http://www.stuk.fi/julkaisut/tr/stuk-yto-tr190.pdf. Computer-aided FMEA is
discussed in the paper by Hecht (Hecht, Xuegao, & Hecht, 2003).
A failure mode is, essentially, a way in which the product can fail. A failure mode's
effect is the consequence of the failure. The purpose of the FMEA is to identify
possible failures, to rate them according to priority, and to take actions to eliminate
or reduce the failures, starting with those having the highest priority.
The detailed steps involved in carrying out an FMEA, as described by the American
Society for Quality (source: http://www.asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html, last accessed August 11, 2006), are as follows:
7. For each cause, identify current process controls. These are tests, procedures or
mechanisms that are currently in place to keep failures from reaching the
customer.
8. For each control, determine the detection rating, or D. This rating estimates
how well the controls can detect either the cause or its failure mode after they
have happened but before the customer is affected. Detection is usually rated on
a scale from 1 to 10, where 1 means the control is absolutely certain to detect
the problem and 10 means the control is certain not to detect the problem (or no
control exists).
9. Calculate the risk priority number, or RPN, which equals S × O × D.
10. Identify recommended actions. These actions may include design or process
changes to lower severity or occurrence. There may be additional controls to
improve detection. Also note who is responsible for the actions and target
completion dates.
11. Follow-up and update the risks after the recommended controls are
implemented.
Example: Table 2-1 identifies risks, and the severity, occurrence and detection of
the risks. Risk analysts use or test the application, gain some experience, and make
these estimates subjectively. They calculate the risk priority number of each risk as
RPN = S × O × D. The numbers given in the table below provide an example and
will vary with the application and the organization.
[Table 2-1: FMEA on the mobile Web services application. Columns: Risk;
Severity (S, 1-10); Occurrence (O, 1-10); Detection (D, 1-10); Risk Priority
Number (RPN). The rated failure modes include "Items do not download,"
"Application does not open," and "Application does not connect to the database";
the RPN values in the table included 360, 180, 135, 100, and 84. The table's
row-by-row layout was lost in extraction.]
The application can then be tested, or the failures corrected, based on the RPN.
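The prioritization step can be sketched as follows (the S, O, and D ratings below are illustrative assumptions, not the values from Table 2-1):

```python
# FMEA prioritization sketch: RPN = Severity x Occurrence x Detection,
# each rated subjectively on a 1-10 scale. Ratings are illustrative
# assumptions for the mobile Web services example.
failure_modes = {
    "items do not download":              (9, 5, 8),   # (S, O, D)
    "application does not open":          (10, 2, 5),
    "application does not connect to DB": (9, 3, 5),
}

def rpn(ratings):
    s, o, d = ratings
    return s * o * d

# Test (or fix) the highest-RPN failure modes first.
ranked = sorted(failure_modes, key=lambda m: rpn(failure_modes[m]), reverse=True)
print(ranked[0])  # items do not download  (RPN 9*5*8 = 360)
```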
For this thesis, I provide a risk catalog that assists in identifying failure modes or
risks, so that, if required, the FMEA process can be applied to the software. FMEA
allows for improved test management (explained in detail in Section 2.5).
Bell Telephone Laboratories developed the concept of fault tree analysis in 1962
for the U.S. Air Force for use with the Minuteman system. Mission-critical systems
have used FTA for a long time to determine the systems reliability and safety, by
identifying the probability of each top-level failure
(http://reliasoft.com/newsletter/2q2003/fta.htm, last accessed March 20, 2004).
Lyu (1995) defined a fault tree model as "a graphical representation of logical
relationships between events (usually failure events)."
Continue resolving the possible causes or failures and create a cause chain
until the root or leaf level is reached.
Connect the causes using AND, OR, and M-out-of-N logic gates.
Estimate and assign probabilities to each failure event at the lowest level.
Calculate the probability of the failure event at the next higher level, and
continue upward until the probability of the top-level failure has been calculated.
The following example illustrates the concept of fault tree analysis more clearly.
Example: Figure 2-1 examines the probability of the failure: the application does
not download the list of books along with their prices. The probabilities of the leaf
level nodes are estimated subjectively, and with the help of the AND and OR gates,
the probability for the top level risk is calculated.
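A minimal sketch of the gate arithmetic, assuming independent leaf events with illustrative probabilities (not the values in Figure 2-1):

```python
# Fault tree sketch for "book list does not download." Independent
# leaf events are combined with OR (union) and AND (joint) gates.
# All probabilities are illustrative assumptions.

def gate_or(*probs):
    # P(A or B or ...) for independent events = 1 - prod(1 - p)
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def gate_and(*probs):
    # P(A and B and ...) for independent events = prod(p)
    result = 1.0
    for p in probs:
        result *= p
    return result

# Leaf events (subjective estimates):
p_network_down = 0.05
p_server_down  = 0.02
p_client_bug   = 0.01
p_retry_fails  = 0.50   # a dropped connection AND a failed retry must both occur

transient = gate_and(p_network_down, p_retry_fails)    # 0.025
top = gate_or(transient, p_server_down, p_client_bug)  # ~0.054
print(round(top, 6))
```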
Newsletter available at
http://www.theiet.org/publicaffairs/health/hsb26c.pdf, last accessed August
11, 2006
ETA has seen application in the nuclear industry, both for operability analysis of
nuclear power plants and for analyzing the accident sequence of the Three Mile
Island-2 reactor accident (source:
http://home1.pacific.net.sg/~thk/risk.html#2.3%20%20%20%20Event%20tree,
last accessed July 4, 2006).
The following example illustrates the concept of event tree analysis more clearly.
Example: Figure 2-2 examines the same failure: the application does not download
the list of books along with their prices. Starting from an initiating event, the event
tree traces the success or failure of each subsequent safeguard; branch probabilities
are estimated subjectively and combined along each path to calculate the
probability of each outcome.
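An event tree path probability can be sketched as the product of the branch probabilities along that path (the initiating event and safeguards below are illustrative assumptions):

```python
# Event tree sketch: starting from an initiating event, each safeguard
# either succeeds or fails, and branch probabilities multiply along
# each path. All numbers are illustrative assumptions.
p_initiating = 0.10   # e.g., the wireless connection drops
p_retry_ok   = 0.80   # safeguard 1: an automatic retry succeeds
p_cache_ok   = 0.60   # safeguard 2: a cached book list is shown

# Path where both safeguards fail -> books and prices not displayed.
p_failure_path = p_initiating * (1 - p_retry_ok) * (1 - p_cache_ok)
print(round(p_failure_path, 6))  # 0.10 * 0.20 * 0.40 = 0.008
```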
Figure 2-3 shows a typical cause-consequence analysis in which the cause portion
of the analysis is represented by a fault tree. Cause occurrence is expressed as a
probability score. The figure then shows the different paths that the initial event
may trigger and their associated consequence scores; this accounts for the
consequence portion of the analysis. Each subsequent event occurring in the system
has a probability attached to it, based on the set of behaviors possible within the
system.
Lutz and Woodhouse (1996) further described the integration of SFMEA and SFTA
in a two-step process:
Software testers can utilize the failure modes described in chapter 5 in a deductive
as well as inductive fashion. They can identify failure modes using the catalog and
carry out further analysis using risk analysis techniques such as those described
above.
The next sections describe software testing techniques and how risk-based testing is
done.
For more details on the different testing techniques, refer to the course notes on
black box software testing by Kaner and Bach (2005). Testing results are best
when testers combine different techniques during testing. This thesis focuses on
risk-based testing for mobile applications.
risk-based test management. The second type of risk analysis finds errors that lead
to risk-based testing. I will focus on risk-based test management in this section and
discuss risk-based testing in the next section.
Risk Identification
Risk Strategy
Risk Assessment
Risk Mitigation
Risk Reporting
Risk Prediction
Amland (1999) stated that the main objective of risk analysis is to identify the
potential problems that can affect the cost or outcome of a project. He identified
three main sources for the risk analysis:
Bach (1999) explained risk analysis for the purpose of finding software errors. The
steps in his approach are as follows:
As risks evaporate and new ones emerge, adjust your test effort to stay
focused on the current risk set.
I started by defining some key terms related to risk and risk analysis. The HAZOP
technique is most applicable where the requirements are clear and failures are
easily identified. In this method, guide words such as MORE THAN or LESS
THAN are applied to the software to help identify risks. In software, requirements are
often unclear. The FTA and ETA methods, although highly effective in identifying
risks in the context of an engineering project, require a formalized process for
identifying failure modes and the cause chain of these failure modes. FMEA, on the
other hand, is a simplified process in which failure modes are identified and their
effects weighed to decide what needs to be tested first. Heuristic analysis is a very
simple, yet very effective, technique to identify risks. It is explained in detail in
Chapter 3 and is the basis for the creation and refinement of the risk catalog.
I introduced the concept of risk-based test management and risk-based testing. The
primary focus of this thesis is risk-based testing. Bach (1999) provides a method
for risk-based testing using heuristics, as explained in sections 3.1.3 and 3.1.4.
The American Heritage Dictionary of the English Language (2000) defines
heuristic as relating to or using "a problem-solving technique in which the most
appropriate solution of several found by alternative methods is selected at
successive stages of a program for use in the next step of the program."
Heuristics may be used in a wide variety of disciplines. These rules mostly work
well, but at times lead to biases. The following are examples of heuristics used in
different disciplines:
In psychology: Dreams often reflect your desires. For instance, if you dream
of winning a race against your brother, you probably desire to outdo him in
real life.
General: If you want to ask for something from someone, ask when the
person is in a good mood. You are more likely to get it then.
The mathematician George Polya first published How to Solve It in 1945. How to
Solve It is a collection of ideas about heuristics that he taught to math students. This
book provides ways of looking at problems and formulating solutions (Polya,
2004). He states in his book that "heuristic reasoning is not regarded as final and
strict but as provisional and plausible only, whose purpose is to discover the
solution to the present problem."
Polya (2004) suggested the following steps when solving a mathematical problem:
Devising a plan: Find the connection between the data and the unknown.
You may be obliged to consider auxiliary problems if an immediate
connection cannot be found. You should obtain a plan of the solution.
Looking back: Examine the solution obtained and consider ways to improve
it.
Table 3-1 maps the informal descriptions of Polya's heuristics to formal heuristics:

Informal Description                                  Formal Heuristic
Generalization                                        Generalization
Induction                                             Induction
Variation of the problem                              Search
Auxiliary problem                                     Sub goal
Pattern recognition                                   Pattern matching
Specialization                                        Specialization
Decomposing and recombining                           Divide and conquer
Working backward                                      Backward chaining
Draw a figure                                         Diagrammatic reasoning
Auxiliary elements                                    Extension
Here is a problem related to yours and solved before  Analogue
1. Visibility of system status: The system should always keep users informed
about what is going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the user's
language, with words, phrases and concepts familiar to the user, rather than
system-oriented terms. Follow real-world conventions, making information
appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and
will need a clearly marked emergency exit to leave the unwanted state
without having to go through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different
words, situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design
which prevents a problem from occurring in the first place. Either eliminate
error-prone conditions or check for them and present users with a confirmation
option before they commit to the action.
6. Recognition rather than recall: Minimize the user's memory load by making
objects, actions, and options visible. The user should not have to remember
information from one part of the dialogue to another. Instructions for use of the
system should be visible or easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators, unseen by the novice user,
may often speed up the interaction for the expert user such that the system can
cater to both inexperienced and experienced users. Allow users to tailor
frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information
which is irrelevant or rarely needed. Every extra unit of information in a
dialogue competes with the relevant units of information and diminishes their
relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should
be expressed in plain language (no codes), precisely indicate the problem, and
constructively suggest a solution.
10. Help and documentation: Even though it is better if the system can be used
without documentation, it may be necessary to provide help and documentation.
Any such information should be easy to search, focused on the user's task, list
concrete steps to be carried out, and not be too large.
(http://www.useit.com/papers/heuristic/heuristic_list.html, last accessed
September 5, 2006)
Apart from human computer interaction where the term heuristics is used as a rule
of thumb, the term heuristic has two well-defined technical meanings in computer
science. They are described below:
1. Heuristic Algorithms
Two fundamental goals in computer science are finding algorithms with good run
times and with optimal solution quality. A heuristic is an algorithm that gives up
one or both of these goals; for example, it usually finds pretty good solutions, but
there is no proof the solutions could not get arbitrarily bad; or it usually runs
reasonably quickly, but there is no argument that this will always be the case. For
many practical problems, a heuristic algorithm may be the only way to get good
solutions in a reasonable amount of time.
Kaner & Bach (2005) described three classes of heuristics for identifying risks in
their course notes on risk-based testing
(http://www.testingeducation.org/k04/documents/bbstRisk2005.pdf, last accessed
January 29, 2007):
Recognize common project warning signs (and test things associated with
the risky aspects of the project).
Apply failure mode and effects analysis to (many or all) elements of the
product and to the products key quality criteria.
Bach (1999) observed in his course notes that heuristics are methods of generating
solutions quickly. He suggested that heuristics are a guide, not a checklist:
Guideword Heuristics: Words or labels that help you access the full
spectrum of your knowledge and experience as you analyze something.
Subtitle Heuristics: Help you reframe an idea so you can see alternatives
and bring out assumptions during a conversation.
Heuristic Procedure or Rule: Plans of action that may help solve a class of
problems. (p. 46)
These heuristics encourage software developers and testers to use their skills and
thinking to identify as many risks as possible in minimal time.
Bach's Heuristic Test Strategy Model (Bach, 2006a), as shown in Figure 3-1,
provides a clear structure that we can fit failure modes into. Therefore, this is the
methodology used for this thesis. I have subdivided the quality criteria into
operational quality criteria and development quality criteria subsections and
presented the model in Figure 4-1.
Taxonomy is defined as "the science of systematics, which classifies animals and
plants into groups showing the relationship between each" (Bishop & Bailey, 1996).
Taxonomies have been developed for many different objectives. Table 3-2 below
lists some of the common taxonomies and data models with their attributes and
years of publication.
Author          Year of Publication
Rubey           late 1970s
Glass           1981
Knuth           1989
Grady           1992

Attributes: Security-oriented

Author          Year of Publication
Landwehr        1995
Aslam           1995
Beizer          1990
Many taxonomies related to software testing have been developed. Some important
ones are:
Testing Computer Software contains a broad-level bug catalog that lists almost 500
common bugs. Kaner suggests using the list as follows:
Ask whether the software under test could have this defect.
If it is theoretically possible that the program could have the defect, ask
how you could find the bug if it was there.
Ask how plausible it is that this bug could be in the program and how
serious the failure would be if it was there.
For each potential defect, ask whether the software under test could have
this defect.
If it is theoretically possible that the program could have the defect, ask
whether the test plan could find the bug if it was there.
3. Getting unstuck
Expose them to what can go wrong, challenge them to design tests that
could trigger those failures. (Kaner & Bach, 2005, Risk-Based Testing, p. 17)
Error: A human action that produces an incorrect result. Errors may lead to
one or many faults in the system. Designing a software application without
considering all possible program states is an example of an error, because it is
caused by a problem in a software engineer's thought process.
The term failure refers to a behavioral deviation from the user requirement or the
product specification; fault refers to an underlying condition within software that
causes certain failure(s) to occur; error refers to a missing or incorrect human
action resulting in certain fault(s) being injected into software. Sometimes error is
also used to refer to human misconceptions or other misunderstandings or
ambiguities that are the root cause for the missing or incorrect actions (Tian, 2001).
A causal relationship exists between the three kinds of software defects. An error
injects faults into the software that, when executed, result in failures. A specific
failure may be caused by several faults; some faults may never cause a failure.
Similarly, an error can lead to single or multiple faults injected into software.
1. The bloaters
2. The object orientation abusers
Marick has published a catalog of test ideas applicable at the fault level. It is
available at http://www.testing.com/writings/shortcatalog.pdf#search=%22brian%20marick%20catalog%22
(last accessed September 5, 2006). More details on the topic are available in his
book (Marick, 1995).
The following section clarifies the scope of the risk catalog for mobile applications
with respect to the type of examples presented in the risk catalog.
domains. This effort enabled me to imagine how applications can fail in different
ways.
Project Environment
Operational Quality
Development Quality
Product Elements
Steps taken to structure the risk catalog for mobile applications are presented in
section 6.3.
The catalog is based on Bach's Heuristic Test Strategy Model (Bach, 2006a), with
categories and subcategories as shown in Table 4-1. I have deleted some sections
that were not very relevant or useful in testing mobile applications, and added
some sections that were required in the context of mobile application testing.
Bach's original model can be found at
http://www.satisfice.com/tools/satisfice-tsm-4p.pdf (last accessed January 24,
2007).
Table 4-1: Categorization of the Risk Catalog

Categorization / Test Idea Heuristics (descriptions adapted from Bach, 2006a)

Product Elements
    Structure: everything that comprises the physical product
        Code, Interfaces, Hardware, Non-Executable Files (files integral to
        the product)
    Functions: everything that the product does
        User Interface, System Interface, Calculations, Startup/Shutdown,
        Error Handling, Interactions
    Data: everything that the product processes
        Input, Output, Noise (Memory Management, Memory Leaks)
    Platform: everything on which the product depends
        External Hardware (Mobile Switching Center Failures, Hardware
        Failures), External Software (Third-Party Software, Micro-browser
        Failures, Wireless Network Failure, Location Registers, Software
        Upgrade Errors), Internal Components (Mobile Database, Database
        Server, Mobile Middleware Interface Failures)
    Operations: how the product will be used
        Environment (Mobility and Resource Management Failures, Location
        Management), Common Use (Transaction Errors)
    Time: any relationship between the product and time (Time Failure)
    Synchronization: how the data will be synchronized
        Hardware Interface, Wireless connectivity

Operational Criteria
    Capability: can it perform the required functions?
        Suitability, Accuracy, Interoperability, Compliance (with applicable
        standards or conventions or regulations)
    Dependability: will it work well and resist failure in all required situations?
        Fault Tolerance, Maturity, Recoverability, Reliability
    Usability: how easy is it for a real user to use the product?
        Learnability, Efficiency, Satisfaction, Memorability (easy to
        remember), Accessibility, Error Messages
    Security: how well is the product protected against unauthorized use or intrusion?
        Authentication, Authorization, Privacy and Confidentiality, Data
        Integrity, Wireless Network Security, Availability
    Scalability: how well does the deployment of the product scale up or down?
        Horizontal, Vertical
    Performance
    Installability: how easily can it be installed onto its target platform(s)?
        System Requirements, Configuration, Uninstallation (can it be
        removed cleanly?), Upgrades
    Compatibility: how well does it work with external components and configurations?
        Application Compatibility, Operating System Compatibility, Hardware
        Compatibility, Backward Compatibility (with earlier versions of
        itself), Resource Usage, Quality of Service (wireless network's
        service)

Development Criteria
    Supportability
    Testability
        Visibility (Field Failures), Control
    Maintainability
        Analyzability, Changeability, Stability
    Portability
        Adaptability, Conformance, Replaceability

Project Environment
    Customers, Information, Developer Relations, Test Team, Equipment and
    Tools, Schedule, Test Items, Deliverables
5.1.1 Structure
Everything that comprises the physical product.
5.1.1.1 Code
Typical failures in this category arise from not taking into consideration the
limited processing power and memory of a mobile device.
Failure Modes
Designing network-centric routines that require large network data streams
for proper execution.
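One defensive pattern against this failure mode is to read network data in small fixed-size chunks instead of buffering the whole stream on the device. The sketch below is illustrative only: plain Python with an in-memory stream standing in for the network, and the 1 KB buffer size is an assumption rather than a device requirement.

```python
import io

CHUNK_SIZE = 1024  # illustrative buffer size for a memory-constrained device

def process_stream(stream, handle_chunk):
    """Consume a stream in fixed-size chunks so the whole payload never
    sits in the device's limited memory at once."""
    total = 0
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        handle_chunk(chunk)   # process and discard each chunk immediately
        total += len(chunk)
    return total
```

Here `handle_chunk` would typically write to local storage or update the display incrementally, so memory usage stays bounded by the chunk size.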
5.1.1.2 Interfaces
Wireless Application Protocol (WAP) Gateway Failures: A WAP gateway is a server
responsible for converting a Wireless Transport Protocol (WTP) request made by a
smart phone into an HTTP request to be processed by a Web server. The WAP
gateway also translates an HTML Web page into wireless markup language (WML)
if required.
Failure Modes
Problems arising when the size of a WML deck exceeds the device limit.
Failures arising from the use of client-side JavaScript or other scripts that
are available only in Web browsers running on desktop machines.
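A test harness can guard against the deck-size failure mode with a simple check of the compiled deck against the device limit. The function name and the 1400-byte limit below are hypothetical; real limits vary by handset and must be taken from the device specification.

```python
# Hypothetical deck-size guard. The 1400-byte cap is illustrative only;
# actual WML deck limits differ from handset to handset.
MAX_COMPILED_DECK_BYTES = 1400

def deck_fits_device(compiled_deck: bytes, limit: int = MAX_COMPILED_DECK_BYTES) -> bool:
    """Return True if a compiled WML deck fits within the device's limit."""
    return len(compiled_deck) <= limit
```

A tester would run this check against each compiled deck the gateway can serve, using the limit of the smallest target device.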
5.1.1.3 Hardware
Client-side devices play a more important role in mobile applications than in
their desktop counterparts. A diverse set of devices is available, with varying
capabilities. Mobile application developers and testers should take into
consideration the hardware platform on which the application will execute.
Failure Modes
Help system does not respond to the keyboard help button or a device-specific
help button.
5.1.2 Functions
Everything that the product does.
Does the application allow data input from the touch screen as well as the
device keyboard?
"An enabling layer of software that resides between the business application and
the networked layer of heterogeneous (diverse) platforms and protocols. It
decouples the business applications from any dependencies on the plumbing layer,
which consists of heterogeneous operating systems, hardware platforms and
communication protocols." (Source: International Systems Group)
Failure Modes
User interface widgets in the middleware not optimized for mobile devices.
5.1.2.3 Calculation
One of the sample applications I tested is Cells, which is described in more
detail in chapter 6. This application provides basic arithmetic operations to
calculate sums, averages and other operations available in a spreadsheet. Some
potential failure modes that are calculation-specific are listed below.
Failure Modes
ASCII values are calculated when the user enters a character other than a
number.
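The ASCII-value failure mode above can be probed with a parser that rejects non-numeric cells explicitly instead of silently coercing them. This is a sketch of the idea, not code from the Cells application:

```python
def safe_sum(cells):
    """Sum only the cells that parse as numbers; report the others
    explicitly instead of treating characters as their ASCII codes."""
    total = 0.0
    invalid = []
    for cell in cells:
        try:
            total += float(cell)
        except ValueError:
            invalid.append(cell)   # surface bad input rather than miscompute
    return total, invalid
```

A calculation routine built this way can show the user which cells were rejected instead of producing a silently wrong sum.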
Failure Modes
Mobile application does not terminate all open connections to the wireless
network when shutting down.
Mobile application worker process continues to hold memory even after the
application exits.
Mobile application does not save data or state after an unexpected shutdown.
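A shutdown path that avoids the failure modes above closes every connection and persists state even when an individual close fails. The sketch below assumes hypothetical connection objects with a `close()` method and a caller-supplied `save_state` callable:

```python
def shutdown(connections, save_state):
    """Close every open connection and persist application state, even
    when one close fails or shutdown was triggered unexpectedly."""
    errors = []
    try:
        for conn in connections:
            try:
                conn.close()
            except Exception as exc:   # keep closing the remaining links
                errors.append(exc)
    finally:
        save_state()                   # state is saved no matter what
    return errors
```

The `finally` clause is the point of the sketch: state persistence must not depend on every connection closing cleanly.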
5.1.2.6 Interactions
With the advent of service-oriented architecture, many application interfaces now
reside on a server. Mobile applications have to connect to the server that contains
the methods and interfaces to get or set data and carry out tasks. The mobile book
catalog described in chapter 8 uses this mechanism to get the list of books from a
backend database.
Failure Modes
Problem in retrieving data on the client device due to data corruption over
the wireless network.
Client device not informed of any error arising on the server component.
Data payload too big over the wireless network, resulting in a delay in
receiving the response from the server.
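These failure modes suggest a client that verifies payload integrity and surfaces server errors explicitly. The sketch below uses an MD5 digest purely for illustration; `transport` is a hypothetical callable standing in for the wireless request, returning a (payload, digest, status) triple.

```python
import hashlib

class ServerError(Exception):
    """Raised on the client so server-side failures are not silently ignored."""

def fetch_payload(transport):
    """Fetch a payload from a hypothetical transport and verify its
    integrity on the device, guarding against corruption in transit."""
    payload, digest, status = transport()
    if status != "ok":
        raise ServerError(status)   # inform the client device of server errors
    if hashlib.md5(payload).hexdigest() != digest:
        raise ValueError("payload corrupted in transit")
    return payload
```

Raising distinct exceptions lets the client distinguish a server fault from wireless corruption, which are separate failure modes above.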
5.1.3 Data
Everything that the product processes.
5.1.3.1 Input
A well-designed mobile application allows only valid input across its subsystem
boundaries and interfaces. Processing invalid input is expensive, as it consumes
memory and processing power. Data input on mobile devices is difficult for the end
user: many devices have no keyboard, and even when a keyboard is present it is
not as usable as one available for desktop machines. Input to the application
therefore has to be minimized and optimized to provide maximum efficiency to
the end user.
Failure Modes
Corruption of the file system due to invalid input not handled at the user
interface.
No default values for the common fields in the user interface of the
application under test.
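One way to avoid both failure modes is to validate every field at the user-interface boundary and substitute a default when validation fails. The field names, validators and defaults below are illustrative only:

```python
def read_form(fields, raw_input):
    """Validate input at the UI boundary and substitute defaults, so
    invalid values never reach the file system.  `fields` maps each field
    name to a (validator, default) pair."""
    clean = {}
    for name, (validator, default) in fields.items():
        value = raw_input.get(name, "")
        clean[name] = value if validator(value) else default
    return clean
```

Supplying defaults for common fields also reduces the amount of typing demanded of the user, which the paragraph above identifies as a priority on handhelds.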
5.1.3.2 Output
This category highlights failures that can arise due to improper output of data
after processing. Some failure modes in this category are from the sample
application (mobile book catalog) described in chapter 8.
Failure Modes
User interface does not support scrolling to display all available data.
Book titles do not download with the right price on the mobile device.
Failure Modes
In the case of the book catalog (mobile Web service), are special characters
handled appropriately if present in the book price?
5.1.3.4 Noise
Memory Management
Mobile devices are highly resource-constrained with respect to the amount of
primary and secondary storage. Special attention is required while developing and
testing to avoid memory leaks and wild pointers.
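A common leak pattern on handhelds is a cache that grows without bound; a bounded LRU cache is one mitigation. This is a generic sketch, not tied to any particular mobile platform, and the 64-entry cap is an arbitrary illustration:

```python
from collections import OrderedDict

class BoundedCache:
    """LRU cache with a hard size cap, sketching how to keep a handheld's
    in-memory cache from growing without bound."""
    def __init__(self, max_entries=64):
        self.max_entries = max_entries
        self._items = OrderedDict()

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        while len(self._items) > self.max_entries:
            self._items.popitem(last=False)   # evict least recently used

    def get(self, key, default=None):
        if key in self._items:
            self._items.move_to_end(key)      # mark as recently used
            return self._items[key]
        return default
```

Testing for this failure mode means checking that long-running use of the application keeps memory flat rather than steadily climbing.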
Memory leaks
Failure Modes
5.1.4 Platform
Everything on which the product depends (and that is outside your project).
Hardware failures
Micro-browser failures
A micro-browser offers the same basic functionality as a desktop browser. It is
used to submit user requests, receive and interpret results and allow the users to
surf Web pages using their handheld (Nguyen, 2003).
Failure Modes
Pocket Internet Explorer Quits When You Connect to an SSL Site with the
DES56 Cipher: http://support.microsoft.com/default.aspx?scid=kb;en-us;320894
PRB: You Receive an Unknown Error When You Call a Method of the
MFC ActiveX Control:
http://support.microsoft.com/default.aspx?scid=kb;en-us;310566
Loss of signal.
Coverage issues.
Failure Modes
Failure to update the HLR on the status of the mobile host after it enters a
new VLR (Biaz & Vaidya, 1998).
Excessive load on the network signaling resource due to the mobility of the
mobile hosts.
Excessive load on the database due to the frequent updates required by the
mobility of the node.
Failure Modes
as a Web page with the help of a micro-browser. Since availability of the wireless
network still has some issues, this model is not very suitable for data-intensive
applications. The alternative model keeps significant data on the handheld, in a
local relational database.
Database Server
A database server is software that manages data in a database. Database
management functions, such as locating the requested record and updating,
deleting and protecting the data, are performed by the database server. A
database server also provides access control and concurrency control. So, while
testing a mobile application that connects to a database, if erratic data is
encountered, the database server could be the culprit and should be tested.
5.1.5 Operations
How the product will be used.
5.1.5.1 Environment
Mobility and Resource Management Failures
This category targets the failures that occur due to the mobility of the node and
improper resource management to offer uninterrupted wireless connectivity to the
user.
Failure modes
Location management
Location management is an extremely important functionality in location-based
mobile applications. A location-based mobile application utilizes the knowledge of
the location of the mobile node to serve location-specific information. It is used in
telematics, route directions, call routing, billing and several other applications.
Failure modes
Change in the logical identity of the device or the owner. A logical identity
could be MAC address, IP address or anything else used to identify a
mobile node.
Problems arising due to mobile node not re-registering with the base station.
Failure in receiving GPS data, in case of GPS being used to locate mobile
nodes.
Failure Modes
5.1.6 Time
Any relationship between the product and time.
Failure Modes
5.1.7 Synchronization
How the data will be synchronized.
Synchronization is a feature that exchanges, transforms and synchronizes data
between two different applications or data stores. Synchronization can be either
cradle-based or wireless. The SyncML Consortium is on a mission to get mobile
application developers and handheld device makers to use a common, XML-based
data synchronization technology. This category lists the different failures that
could be encountered while synchronizing data between two applications.
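One simple conflict-resolution policy a tester might probe during synchronization is last-write-wins by timestamp. The record layout below, an id mapped to a (timestamp, payload) pair, is an assumption for illustration, not a SyncML construct:

```python
def merge_records(local, remote):
    """Last-write-wins merge of two record stores keyed by id, where each
    value is a (timestamp, payload) tuple.  The newer timestamp wins."""
    merged = dict(local)
    for key, (ts, payload) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, payload)
    return merged
```

Probing this policy with equal timestamps, clock skew between devices, and records deleted on one side exposes the synchronization failures this category is about.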
Intellisynch error:
http://www.pdastreet.com/forums/showthread.php?threadid=779
Corrupted Data File May Prevent Mobile Application From Opening Up:
http://www.filemaker.com/ti/108092.html
Failure Modes
Non-compliance with the guidelines for preformatting the Web pages for
mobile devices.
Operational quality criteria are criteria that relate to the product in use. We
distinguish them from development criteria, which relate to the product as a static
object under development.
5.2.1 Capability
Can it perform the required functions?
ISO 9126 defines functionality as "a set of attributes that bear on the existence of
a set of functions and their specified properties. The functions are those that satisfy
stated or implied needs. This set of attributes characterizes what the software does
to fulfill needs, whereas the other sets mainly characterize when and how it does
so." (Source: http://www.issco.unige.ch/ewg95/node14.html)
5.2.1.1 Suitability
Attributes of software that bear on the presence and appropriateness of a set of
functions for specified tasks. (ISO9126, 1991)
Failure Modes
Failure in the filter function of PAAM (right click for their suggestion).
5.2.1.2 Accuracy
Attributes of software that bear on the provision of right or agreed results or
effects. (ISO9126, 1991)
Failure Modes
Multiple copies of a file with the same name but different content exist on
the handheld.
5.2.1.3 Interoperability
Attributes of software that bear on its ability to interact with specified systems.
(ISO9126, 1991)
Failure Modes
Application not available for all the leading handheld platforms like Palm
OS, Pocket PC, Blackberry and Symbian OS.
Application can run only on one kind of network. For example, if CDMA or
1XRTT is used for voice and data, it can only work in North America and
places where CDMA is in use.
Problems When You Convert Files Between Excel and Pocket Excel:
http://support.microsoft.com/default.aspx?scid=kb;en-us;185921
5.2.1.4 Compliance
Attributes of software that make the software adhere to application related
standards or conventions or regulations in laws and similar prescriptions.
(ISO9126, 1991)
Failure Modes
5.2.2 Dependability
Will it work well and resist failure in all required situations?
Dependability is a term encompassing many notions, such as reliability,
recoverability, availability and safety (Malloy, Varshney, & Snow, 2002). ISO 9126
divides reliability into three separate categories: Fault Tolerance, Maturity and
Recoverability.
Reliability within the mobile context could be defined as the "ability of the
wireless and mobile networks to perform their designated set of functions under
certain conditions for a certain operational time" (Malloy et al., 2002). I have
included the listing of failure modes with respect to Fault Tolerance, Maturity and
Recoverability in this thesis, as that seemed the most appropriate way to address
the dependability of wireless applications and networks.
Failure Modes
5.2.2.2 Maturity
Attributes of software that bear on the frequency of failure by faults in the
software. (ISO9126, 1991)
Failure Modes
5.2.2.3 Recoverability
Recoverability is the capability of a system or application to maintain services
during an attack or when not all resources are available.
Failure Modes
Application does not switch to offline mode when there is a loss in network
connectivity.
No reconnection attempt after the device fails to establish the wireless link.
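A client that addresses the reconnection failure mode retries the wireless link with exponential backoff rather than giving up after the first failure. `try_connect` below is a hypothetical callable that raises ConnectionError while the link is down; the attempt count and delays are illustrative:

```python
import time

def connect_with_backoff(try_connect, max_attempts=5, base_delay=0.01):
    """Retry the wireless link with exponential backoff instead of giving
    up after the first failure."""
    for attempt in range(max_attempts):
        try:
            return try_connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                     # out of attempts; surface the failure
            time.sleep(base_delay * (2 ** attempt))
```

While retries are in progress the application would normally switch to an offline mode, which is the other recoverability failure mode listed above.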
5.2.2.4 Reliability
This checks if the product will work well and resist failure in all required situations.
It includes the following:
Error handling: the product resists failure in the case of errors, is graceful
when it fails, and recovers readily.
Data Integrity: the data in the system is protected from loss or corruption.
Safety: the product will not fail in such a way as to harm life or property.
5.2.3 Usability
How easy is it for a real user to use the product?
Usability is the "effectiveness, efficiency and satisfaction with which a specified set
of users can achieve a specified set of tasks in a particular environment."
(ISO9241-11, 1998) According to Jakob Nielsen, usability subsumes the notions of
learnability, memorability, efficiency, error rate/recovery and satisfaction (Nielsen
& Mack, 1994). ISO 9126 divides usability into understandability, learnability and
operability.
Usability issues are highly pronounced on handheld devices due to the limitations
of wireless handheld devices (Passani, 2000). They have a limited form-factor and
the display units are smaller than their desktop counterparts. Designing for such
small screen size needs more thinking and better navigation structures. Another
factor worth taking into consideration is the data input in a PDA or smart phone.
The keyboard or the soft input panel of these devices is not very spacious. This
warrants new and innovative ways of reducing the amount of data a user must
enter. In this thesis, usability failure modes and risks are divided into
learnability, efficiency, memorability, error recovery and satisfaction.
5.2.3.1 Learnability
The system should be easy to learn so that the user can rapidly start getting some
work done with the system. (Nielsen & Mack, 1994)
Failure Modes
Inconsistent layout.
5.2.3.2 Efficiency
The system should be efficient to use, so that once the user has learned the system,
a high level of productivity is possible. (Nielsen & Mack, 1994)
Failure modes
All functionality implemented without priority given to the main activities
of mobile users.
Main activities of the user not implemented in the fastest possible manner.
5.2.3.3 Satisfaction
The system should be pleasant to use, so that users are subjectively satisfied when
using it. Users should like the system. (Nielsen & Mack, 1994)
Failure Modes
5.2.3.4 Memorability
The system should be easy to remember so that the casual user is able to return to
the system after some period of not having used it without having to learn
everything all over again. (Nielsen & Mack, 1994)
Failure Modes
5.2.3.5 Accessibility
Can it be used by everyone?
A system is said to be accessible when it can be used by anyone irrespective of
their physical or technical capabilities.
Failure Modes
User not able to press a button on the handheld device due to improper
placement of the button.
User not able to use biometric security feature of the handheld due to strict
requirement of the hand movement.
5.2.4 Security
How well is the product protected against unauthorized use or intrusion?
Security issues could be subdivided into six subcategories: privacy and
confidentiality, access control and authorization, authentication, data integrity,
wireless network security, and availability.
5.2.4.1 Authentication
Authentication implies establishing identity of users, process or hardware
components.
Failure Modes
No authentication mechanism is used.
Weak passwords that can be easily broken.
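A minimal check for the weak-password failure mode flags credentials that are too short or drawn from too few character classes. The thresholds below are illustrative, not a security recommendation:

```python
import string

def is_weak_password(password, min_length=8):
    """Flag passwords that are too short or use fewer than two character
    classes.  Thresholds are illustrative only."""
    if len(password) < min_length:
        return True
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) < 2
```

A tester could run such a check against the application's password-acceptance rules to see whether trivially breakable credentials are allowed.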
Failure Modes
Disclosure of passwords.
Failure Modes
No application level (on top of Bluetooth stack) encryption used for highly
sensitive data.
Break of stream cipher - DES/RC4 are weaker than 3DES and AES.
Man-in-the-middle attack.
If the plaintext message is known and the attacker has a copy of the
ciphertext, the key could be obtained by getting the IV and using a
dictionary attack.
Cipher stream reuse: the key stream could be recovered from a WEP packet.
Weak AP password.
Man-in-the-middle attack.
Cache poisoning.
Phone cloning resulting due to the Electronic Serial Number (ESN) and
Mobile identification number (MIN) being read by attackers.
Hijacking of the voice channel by increasing the power level of the cellular
phone.
5.2.4.6 Availability
System availability is the duration of time a system is available for use by its
intended users. The opposite of availability is denial of service (DoS), when the
system is not available, either fully or partially.
Failure Modes
Failure initiating and maintaining the wireless link due to interference with
external devices.
Airborne Viruses:
http://www.networkmagazine.com/article/NMG20001130S0001
5.2.5 Scalability
How well does the deployment of the product scale up or down?
Scalability can be sub-divided into two categories: horizontal scalability and
vertical scalability.
5.2.6 Performance
Mobile applications run on wireless links that have high latency and low
bandwidth. A data packet takes multiple hops before reaching another device or
application. Special consideration is required while designing the system
to enhance the performance of such applications.
Failure Modes
Problems arising because of poor mobile client server architecture, i.e., fat
client vs. thin client approaches. (Yang, Nieh, Krishnappa, Mohla, &
Sajjadpour, 2003)
Throughput of the system is very low when many users log in and try to use
a feature.
Weak points detected by directing load at a specific portion of the system
that is suspected of not being robust. This is also known as hot-spot
testing. (Collard, 2002)
System performance degrades when the load varies from low to high
following some pattern of load fluctuation. (Collard, 2002)
Problems occur when the system is exposed to an abrupt load. This is also
known as spike or bounce testing. Load balancing and resource reallocation
problems surface during such tests. (Collard, 2002)
Delay caused due to high RTT and slow startup phase for the system to
utilize the wireless link. (Chakravorty & Pratt, 2002)
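The RTT and bandwidth failure mode can be reasoned about with a first-order latency model: one round trip per hop plus serialization delay. The model ignores slow start, retransmission and queuing, so it is a back-of-the-envelope estimate only:

```python
def transfer_time(payload_bytes, bandwidth_bps, rtt_s, hops=1):
    """First-order transfer-time estimate over a wireless path: one round
    trip per hop plus the time to serialize the payload onto the link.
    Ignores slow start, retransmissions and queuing delay."""
    return hops * rtt_s + (payload_bytes * 8) / bandwidth_bps
```

For example, a 1250-byte payload over a 10 kbit/s link with a 0.5 s RTT and two hops comes out to about 2 seconds, which shows why trimming the payload matters more than raw bandwidth on high-RTT wireless links.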
5.2.7 Installability
How easily can it be installed onto its target platform(s)?
Attributes of software that bear on the effort needed to install the software in a
specified environment. (ISO9126, 1991) Many of the failure modes described in
this section are inspired by Agruss's article on installation testing (Agruss, 2000).
Failure Modes
5.2.7.2 Configuration
This identifies the configuration, or the resources required, during installation.
Failure Modes
The desktop PC's registry clobbered with user configuration data that is not
cleaned up after the installation procedure.
"Cannot Find Pocket Streets" Error Message When You Try to Install
Pocket Streets: http://support.microsoft.com/default.aspx?scid=kb;en-us;319689
5.2.7.3 Uninstallation
This checks if all parts of the product have been removed from the system after
uninstalling.
Failure Modes
5.2.7.4 Upgrades
This ensures that the product can be upgraded smoothly, maintaining current user
configuration.
Newer version cannot detect and remove older version of the software.
5.2.8 Compatibility
How well does it work with external components & configurations?
Failure Modes
High bit error rate due to mobility of the node connected to the wireless
network.
Alteration of the QoS parameters in the wireless networks not taken into
account while designing the system.
5.3.1 Supportability
The user may have suggestions for feature enhancements or bug reports.
Guidelines need to be established for providing support for such user needs.
5.3.2 Testability
Testability of mobile applications can be subdivided into visibility (including
field failures) and control.
5.3.2.1 Visibility
Visibility is our ability to observe the states, outputs, resource usage and other side
effects of the software under test. (Pettichord, 2002)
Field failures
These are the failures that escape the unit testing stage or any other kind of testing
done using the simulators or emulators. Errors and failures are encountered when
the application runs on the actual device or on the actual wireless network in the
production environment.
Failure Modes
5.3.2.2 Control
Control is our ability to apply inputs to the software under test.
5.3.3 Maintainability
Will it be easy to maintain?
Maintainability is defined as the ease with which changes can be made to a
software system. These changes may be necessary for the correction of faults,
adaptation of the system to meet a new requirement, or the addition of new
functionality.
(Source: http://www.testingstandards.co.uk/maintainability_guidelines.htm)
5.3.3.1 Analyzability
Attributes of software that bear on the effort needed for diagnosis of deficiencies
or causes of failures, or for identification of parts to be modified. (ISO9126, 1991)
Failure Modes
5.3.3.2 Changeability
Attributes of software that bear on the risk of the unexpected effect of
modifications. (ISO9126, 1991)
Failure Modes
5.3.3.3 Stability
Attributes of software that bear on the risk of unexpected effects of
modifications. (ISO9126, 1991)
Failure Modes
5.3.4 Portability
How easy will it be to change the environment?
The ease with which a system or component can be transferred from one hardware
or software environment to another. (IEEE, 1991)
5.3.4.1 Adaptability
Attributes of software that bear on the opportunity for its adaptation to different
specified environments without applying other actions or means than those
provided for this purpose for the software considered. (ISO9126, 1991)
Failure Modes
5.3.4.2 Conformance
Attributes of software that make the software adhere to standards or conventions
relating to portability. (ISO9126, 1991)
Failure Modes
5.3.4.3 Replaceability
Attributes of software that bear on the opportunity and effort of using it in the
place of specified other software in the environment of that software. (ISO9126,
1991)
5.3.5 Localizability
Can I adapt the application to serve a bigger market?
Internationalization (sometimes shortened to "i18n," meaning "i, eighteen letters, n")
is the process of planning and implementing products and services so that they
can easily be adapted to specific local languages and cultures, a process called
localization. The internationalization process is sometimes called translation or
localization enablement. (Source:
http://whatis.techtarget.com/definition/0,,sid9_gci212303,00.html)
5.3.6 Scalability
Can I increase the capacity with ease?
The ease with which a system or component can be modified to fit the problem
area. (IEEE, 1991)
Failure Modes
broad development project and the testing sub-project. Aspects of either can be a
source of constraints, problems, or opportunities for the tester.
This section draws on Bach (Bach, 2006a) and Bach ((Bach, 2003), p. 2), with his
permission.
Project Environment includes resources, constraints, and other forces in the project
that enable us to test, while also keeping us from doing a perfect job. Make sure
that you make use of the resources you have available, while respecting your
constraints ((Bach, 2003), p. 1). Creating and executing tests is the heart of the test
project; however, there are many factors in the project environment that are critical
to your decision about which particular tests to create. In each category below,
consider how that factor may help or hinder your test design process. Try to exploit
the resources you have available while minimizing the impact of constraints.
5.4.1 Customers
Stakeholders of the project determine what kinds of tests they want run. These
may include the end customer, the project manager, the test manager, or the
business analyst. Use of the failure mode catalog will depend on the expectations
of the clients. If it is possible to have a discussion with the customer, do so.
Failure Modes
5.4.2 Information
Mobile applications are used in a variety of horizontal and vertical industries. Some
of the vertical applications are stock trading, airline reservation, healthcare
solutions, and warehouse inventory solutions. Among the horizontal
applications, the most prominent are wireless e-mail and personal information
management, wireless office data solutions, and sales force automation. Testing
will depend on the context in which the application will be used.
Failure Modes
Failure Modes
Does the programmer have any input on the possible risks, and are they
forthcoming about them?
Does the programmer talk about the difficulties they faced? If not, more
care needs to be taken when testing.
Does the programmer refute the validity of bugs, or insist that bugs are
features? In that case, test cases will need to be designed so that refutation
becomes difficult.
Does the programmer gloss over certain sections when explaining or talking
about them? These sections may need to be looked at more closely.
5.4.4 Team
Experience, skills and expertise in special test techniques of the people responsible
for carrying out testing should be considered while formulating a test strategy.
Failure Modes
Are the reviews effective? Which sections have been more thoroughly
reviewed?
Is the development environment healthy? If not, more bugs may have been
introduced in the application.
Is there any team member with experience testing similar products? Input
may be taken from him/her if so.
Is there any team member with a skill in testing in a particular way? Testing
may be assigned accordingly.
Has a team member undergone a personal problem recently? The parts (s)he
coded might need to be tested more thoroughly.
Has there been a long weekend or have long leaves been taken? Work done
around that time might need more careful testing.
Bugs in J2ME:
http://search.java.sun.com/search/java/index.jsp?qt=%2Bcategory%3Amidprofile+%2Bstate%3Aopen&nh=10&qp=&rf=1&since=&country=&language=&charset=&variant=&col=javabugs
Problems in BREW:
http://www.qualcomm.com/brew/developer/resources/ds/faq/techfaq14.html
5.4.6 Schedules
The schedules of the development and testing teams, and the sequence or
process in which the activities are carried out, affect the application and should be
taken into consideration when testing.
Failure Modes
Was the schedule of the development team too tight? If so, testing will need
to be more rigorous.
Is the schedule of the test team too tight? If so, test prioritization will need
to be carried out.
Failure Modes
Does the product have new features that haven't been tested before?
Does the application need to be tested for features that will allow for
compatibility with future releases of the same product?
5.4.8 Deliverables
The deliverables of the project include work products such as the test cases or test
reports. For example, test cases and test reports will need to be prepared as
required, following the standard guidelines. If someone else needs to run the test
cases again, they will need to be documented accordingly.
This chapter explains the process through which I refined the risk catalog. I tested
two mobile applications used in education using the initial risk list. This exercise
generated further test heuristics, which I then added to the catalog. The following
sections of this chapter describe the sample applications and the refinement of the
risk catalog.
The typical wireless networks in use are the Ethernet-based 802.11 family, which
provides high-bandwidth data transfer. Wireless network hardware designed
exclusively for classrooms utilizes infra-red wireless technology and works as a
beaming station. These beaming stations distribute reading material and
assignments to the students.
The following section describes the educational handheld applications that I tested
to refine the risk catalog.
using applications like the Palm OS Artifact and Assessment Manager (PAAM) and
TI-NAVIGATOR from Texas Instruments.
Two applications, namely Cells and PAAM (GoKnow, 2004), were tested. Cells
version 1.1 was used for the experiment and was available from the University of
Michigan's Center for Highly Interactive Computing in Education at
http://www.handhelds.hice-dev.org/beta.php, last accessed July 21, 2003. The
current version of Cells is 1.2, available from GoKnow Inc. at
http://www.goknow.com/Products/Cells/, last accessed November 28, 2006.
The following sections describe the applications and the tests conducted on them.
Screen Display: Transflective TFT 320 x 320 color display supports more
than 65,000 colors
I tested PAAM using a trial account provided by GoKnow Inc. I installed PAAM
on a Pocket PC and ran it in the browser Internet Explorer, version 6.0. I faced
some installation issues and logged them in the risk catalog under the category
Installation Failure. In the initial stage of testing, my primary focus was on the
functional categories of failures, like suitability, accuracy, and calculation. These
were the features that I was able to test without setting up the wireless network,
and they were mostly confined to the standalone device. At the end of the initial
phase of testing, I started exploring other categories, like usability, compatibility,
and security, for test ideas. At this stage, the risk catalog came in very handy, as it
provided me with high-level risk heuristics to channel my thinking process and
direct my testing. I was able to focus on a particular class of problems and drill
deeper into related potential issues and risks. I tested PAAM and its
communication with the Pocket PC, and added more failures and potential
problems to the risk catalog in the appropriate categories. After exhausting the
categories of the risk catalog, I started looking for additional categories and ways
in which I could restructure the risk catalog. These are described in the subsequent
sections.
Another problem was the lack of empirical data on the usage of risk catalogs for
mobile applications. I had created the catalog and populated the failure categories
with generic problems that could occur in mobile applications, but had not put the
catalog to the test. My advisor, Dr. Cem Kaner, suggested that I test a mobile
application and enrich the risk categories with the problems that I encountered
while testing the application. I have described the process that I followed in
section 6.3.3.
Models can be represented either explicitly or implicitly, depending on their type.
For example, in operational modeling, a state transition diagram can be drawn
explicitly, either using a word processor or using more sophisticated tools like
Rational XDE or Microsoft Visio. On the other hand, models can be implicit, that
is, not expressed as a diagram or in a document. In risk-based testing, when a tester
tries to develop a risk profile of the application, the guiding factor is the way the
application could fail. A risk-based tester writes tests that expose risks in the
application by following risk heuristics. These risk heuristics help to draw a mental
picture or model of how a problem could occur in the application.
and component lists. This was a good first step, as it allowed me to focus on one
kind of problem at a time. When the lists grew and I had more examples of failures
and potential problems, I realized that my risk catalog was difficult to read through
and apply. This problem stemmed from the lack of categorization heuristics. To
overcome it, I followed Bach's Heuristic Test Strategy Model (Bach, 2006a),
which gives a risk-based tester guidance on how to categorize test ideas
into more actionable lists. Bach suggests three high-level categories to analyze and
test a product. These top-level categories are:
Product Elements
Quality Criteria
Project Environment
I subdivided the quality criteria further into operational and development quality
lists to support more focused risk exploration and test design. Bach has further
guideword heuristics under each top-level category in his test model, which I used
to organize my failure lists. There are some categories, like synchronization, that I
thought deserved a higher level in the failure categorization scheme because of
mobile-application-specific technology.
6.3.3 Refining and Enhancing the Catalog with PAAM and Cells
As mentioned in the second paragraph of section 6.3.1, to gather empirical data on
the usage of the risk catalog I tested the applications PAAM and Cells. Apart from
filtering and refining the failure categories identified for the catalog, my secondary
objective in carrying out this exercise was to enrich the risk catalog for mobile
applications with more examples of risks and failures. This activity assisted in
populating the sub-categories of the risk catalog with risk heuristics. Many risks
and failures provided in the catalog were inspired by the tests executed on these
applications. The risks identified were either failed tests or potential problems
imagined during testing. The process I followed to refine and enrich the risk
catalog was as follows:
1. In the first step, I allocated a 3-by-5 color card to each risk heuristic in the risk
catalog by writing the name of the heuristic on the card.
2. I then brainstormed possible failures and problems that could occur in the
applications under test (PAAM and Cells) and wrote them on 3-by-5 cards of a
different color to differentiate them from the cards with risk heuristics.
3. Next, I checked for patterns of failures in the cards containing the failure modes
and placed each card under the corresponding risk heuristic card.
4. There were some cards with failure modes that I could not place under a
predefined failure category, so I created my own subcategories to hold them.
Some examples are WAP gateway failures and mobility management.
5. Finally, I made a pass through all the cards with risk heuristics and removed the
thinly populated categories.
This chapter describes the tests that were conducted by two software testers at
Florida Institute of Technology. The testers tested an enterprise wireless
application, MidCast, using the risk catalog. The primary objective in testing this
application was to evaluate the effectiveness and efficiency of the risk catalog in
discovering failures in mobile applications.
MidCast, the application tested using the risk catalog, is a stock-quote application
that streams real-time stock prices and news to handhelds through a central
server.
Mobilization obviates the need to re-enter the same data on multiple systems and
increases the efficiency of the business process. For example, field personnel can
enter data directly into the system using the mobile application; hence they do not
need to enter the same data twice, once in the field and again in the office.
The next section describes an experiment that was conducted with two software
testers at Florida Institute of Technology to use the risk catalog to test an enterprise
wireless application.
7.2.1 MidCast
MidCast runs on any Java-enabled handheld device. The client application of
MidCast running on the handheld device communicates directly with the MidCast
server using a wireless Wide Area Network (WAN) or Local Area Network (LAN)
over the Internet. Real-time information on quotes, charts, graphs, and news is
pushed to the handheld.
More information on MidCast is available at
http://www.hillcast.com/Website/products/midcast/index.asp, last accessed on
March 9, 2004.
The MidCast client communicates wirelessly with the MidCast server, which runs
on J2EE technology, using a wireless WAN. The software is available for many
handheld operating systems, like Motion, Palm OS, Windows Mobile, and
Motorola's iDEN handhelds. Either a wireless LAN, like the 802.11 family, or a
wireless WAN, like GPRS or 1xRTT, can be used.
Figure 7.2 shows a day chart window of the MidCast client. In the first screenshot,
the user selects the stock MSFT and clicks on the day chart. The chart loads and
can be seen in the third screenshot.
Screen Display: Transflective TFT 320 x 320 color display supports more
than 65,000 colors
Wireless Network:
AT&T Wireless GPRS network.
There was a problem using the AT&T Wireless GPRS network within the Olin
Engineering building of Florida Institute of Technology. This resulted from
reduced signal strength within the building. Testing was conducted at a place where
the signal strength was approximately 75% of the peak value. Mobility of the
device was minimal and localized within 10 meters.
Additional Environments:
Java HQ version 1.0 from Hillcast Technologies
Colors: Thousands
Networking: Enabled
Many issues were discovered during testing. Some of the important ones are
presented in detail.
The two software testers signed a consent form to be human subjects for a study
that had been approved by the Human Subjects Institutional Research Board at
Florida Institute of Technology.
I then asked the students to design test cases for MidCast and execute them using
the actual device and network. Details of the hardware, software, and wireless
network used are described in section 8.3. They spent approximately 10 hours
each testing MidCast and discovered around 35 issues of varying severity.
The two testers then filled out a survey describing their experience and providing
feedback on their use of the risk catalog. The survey forms are provided in
section 7.4.2.
How did you use the risk catalog while testing the sample applications?
Which portion of the catalog was most useful while designing and
executing your tests? You could say something like: "I found operational
quality criteria to be more useful than any other category of failure."
What are some ways in which the risk catalog could be made more useful?
What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?
How much coverage of the risk catalog did you achieve while testing the
sample applications?
Responses from the two testers are included in appendix B of this thesis.
In any distributed system, discrete software agents work together to perform some
tasks. In service-oriented architecture, a group of autonomous services co-operate
to carry out a task. There are some common elements in all the definitions and
representations of service-oriented architecture. Some core traits of service
orientation are:
There are three typical roles found in service-oriented systems: the
service provider, the service requestor, and the services infrastructure.
A service provider is the software module that implements a service and publishes
its interface, along with the service contract, to the service infrastructure.
The service infrastructure is the broker that provides the core facilities like service
information, contract, and interface to the service requestors. This infrastructure is
also responsible for maintaining standards across different services, implementing
quality of service, and security protocols.
A service requestor is the software module that consumes a service by invoking the
service published by the service provider. It binds itself to the service infrastructure
and adheres to the policies and protocols required by the service infrastructure. A
service requestor completes a desired business task after consuming the appropriate
services required to fulfill the business flow.
reusability at a higher level than the traditional object-oriented paradigm. In an
object-oriented system, objects that encapsulate data and behavior are the typical
unit of reuse. In service-oriented systems, reuse is promoted at the service level,
which is a task or set of tasks that the system is required to perform.
(Figure: Service A and Service B exchanging messages through their operations,
Operation A and Operation B.)
Some problems that this model of application architecture tries to solve are
(Chande, 2005):
Reuse of components
Web services are defined as reusable software components that are published,
located, and invoked over a network, and that encapsulate the business logic
required to complete a task. Web services are applications that use standard
transports, encodings, and protocols to allow systems to communicate over the
network in a secure, transacted and reliable manner.
(http://msdn.microsoft.com/Webservices/default.aspx?pull=/library/en-us/dnWebsrv/html/wsmsplatform.asp#wsmsplat_topic2, last accessed July 28,
2006).
Web Services use XML for data representation, have mechanisms to describe the
service, and provide features like:
Support for composite applications such as business process flows, multi-channel access, and rapid integration. (IBM, 2005)
to be self-governing and extensible. The SOAP body houses the actual content,
or the payload, of the message.
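As a concrete illustration, a minimal SOAP 1.1 envelope might look like the following. The element structure follows the SOAP 1.1 specification; the GetItems payload and its namespace are hypothetical examples, not taken from any actual service in this thesis.

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- Optional header blocks go here, e.g. WS-* extension metadata. -->
  </soap:Header>
  <soap:Body>
    <!-- The body carries the actual payload of the message. -->
    <GetItems xmlns="http://example.com/bookservice" />
  </soap:Body>
</soap:Envelope>
```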
WS-*, or the Web-service extended specifications, are extensions to the SOAP and
Web-services infrastructure, primarily in the areas of:
Metadata management: WS-Addressing, WS-MessageDelivery, WS-Policy,
Web Services Policy Language, and WS-MetadataExchange, for defining ways
in which cooperating Web services can discover each other's supported features
and interoperate.
The SOAP header block provides a way to enhance the messages sent to and
received from service providers and requestors, and to include additional metadata
that helps in implementing WS-*, as required by the application. Usually, a choice
is made about which extensions to implement for the framework, on the basis of
the context and the specific requirements of the system.
Web services require explicit contracts for communication through the use of a
description language. They provide loose coupling and hide all the details of
implementation from the service requestor. They are platform- and programming-language-neutral; hence the service provider and consumer can be implemented in
different environments. All Web services adhere to standards and are published,
located, and invoked over a network. All these characteristics make Web services a
very good fit for service-oriented architecture, and for these reasons they have
become a dominant paradigm in the implementation of service-oriented
architecture.
Developing effective and usable mobile Web services also requires a services
infrastructure that addresses issues related to identity management, security, the
machine-readable description of Web services, and metadata management
(Hirsch & Kemp, 2006). An ongoing effort between software companies and
mobile software stakeholders aims to define consistent standards and models and
to enable service-oriented architecture across different mobile middleware and
platforms.
This application's user interface is designed using Microsoft .NET Windows Forms
controls for smart devices, and it utilizes a Web service created to fetch book
titles and prices from a database running on Microsoft SQL Server 2000. The
following diagram (Figure 8-3) illustrates the tiered architecture of the mobile
application consuming a Web service. In the figure, the mobile application (Mobile
Book Catalog) consumes the Book Web service hosted on the Web server, which
returns a list of titles and prices stored on the database server. Communication
between the service provider (Book Web service) and consumer (Mobile Book
Catalog) occurs over HTTP and SOAP. The Book Web service returns a dataset
(a Microsoft .NET data type) that is used to populate the view on the mobile
device.
Failure modes arising from design and architectural errors can be imagined
and visualized easily if the mobile application tester is aware of the underlying
design and internal components of the mobile system. I chose to develop and test
this application to gain better insight into these kinds of problems and failure
modes.
The mobile client uses a Web service called BookWebService that runs the
business component necessary to get the required information stored in the
database. When the user clicks the Get Items button, the method attached to the
button is invoked. In this method, data from the Web service is extracted and
assigned to a temporary dataset. If the DataSet downloads successfully from the
Web service, the temporary dataset, TempDS, is assigned to the book catalog
dataset, BookCatalogDS. The following code fragment shows the method fired
after the user clicks the Get Items button.
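The handler described above can be sketched as follows. This is an illustrative sketch only, not the actual listing (which appears in Appendix C); the handler name GetItemsButton_Click and the proxy class invocation are assumed from the surrounding description.

```csharp
using System.Data;
using System.Windows.Forms;

// Sketch: fired when the user clicks the Get Items button.
private void GetItemsButton_Click(object sender, System.EventArgs e)
{
    try
    {
        BookWebService bookService = new BookWebService();
        DataSet TempDS = bookService.GetItems();  // call the Web service
        if (TempDS != null)
        {
            BookCatalogDS = TempDS;  // assign on successful download
        }
    }
    catch (System.Net.WebException)
    {
        // The device may be out of coverage or the server unreachable;
        // the previous catalog, if any, is left intact.
        MessageBox.Show("Could not reach the Book Web service.");
    }
}
```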
application. The dataset returned from the service is then used to populate the list
view Windows control on the mobile device's Windows Form. The dataset
returned from the service contains a data table with multiple rows of book
information. The list view control is populated by iterating through the DataRows
and taking a DataItem from each DataRow to insert into the list view control.
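The iteration just described can be sketched as below. This is an assumed illustration, not the Appendix C listing; the control name BookListView, the column names Title and Price, and the use of Details view are placeholders.

```csharp
using System.Data;
using System.Windows.Forms;

// Sketch: fill the ListView from the downloaded catalog dataset.
// Assumes BookListView is in Details view with Title and Price columns.
private void PopulateListView()
{
    BookListView.Items.Clear();
    foreach (DataRow row in BookCatalogDS.Tables[0].Rows)
    {
        // The title becomes the list item; the price becomes a sub-item.
        ListViewItem item = new ListViewItem(row["Title"].ToString());
        item.SubItems.Add(row["Price"].ToString());
        BookListView.Items.Add(item);
    }
}
```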
The GetItems() Web method returns a dataset that is populated from the database
using the method GetDataSet(). The GetDataSet() method connects to a SQL
Server 2000 database and executes a SQL query on the database to fetch the data
rows. The following code fragment depicts how the dataset is populated to be
returned by the Web service.
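A minimal sketch of GetDataSet() is shown here; the connection string, table name, and column names are placeholder assumptions, not the values from the actual source in Appendix C.

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: connect to SQL Server 2000 and fill a DataSet with book rows.
private DataSet GetDataSet()
{
    string connStr =
        "Server=(local);Database=BookCatalog;Integrated Security=SSPI;";
    DataSet ds = new DataSet();
    using (SqlConnection conn = new SqlConnection(connStr))
    {
        // Fill() opens the connection, runs the query, and closes it.
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT Title, Price FROM Books", conn);
        adapter.Fill(ds);
    }
    return ds;
}
```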
This method is then called in the constructor of the Web service class as shown
below to populate the BookCatalog declared in the BookWebService class.
Appendix C contains the complete source code listing for the Web service.
public BookWebService()
{
    InitializeComponent();
    BookCatalog = GetDataSet();
}
Figure 8.5 demonstrates the state of the mobile application when the user clicks
the button to get the list of titles and prices.
I installed the Software Development Kit (SDK) for Windows Mobile 2003 Pocket
PC and Smartphone to write Mobile Book Catalog. This SDK provided the required
functionality to write managed code in C# or VB.NET for smart devices, utilizing
either connected or disconnected modes of communication. The SDK also provided
the required support to seamlessly call the APIs of the .NET Compact Framework,
which is a trimmed-down version of the .NET Framework designed for mobile
devices. The SDK for mobile devices also contained an emulator that uses a virtual
machine to run the Pocket PC 2003 and Smartphone software independently of the
operating system on the development machine. I utilized the emulator images to
deploy and test Mobile Book Catalog. Apart from Visual Studio .NET and the SDK
for mobile devices, I used Microsoft ActiveSync 3.8.0 as the synchronization
software between the Windows Mobile-based smartphone and the Windows
desktop development workstation.
The following list outlines the configuration and development tools installed on the
development workstation for Mobile Book Catalog.
configuration file to allow requests from a remote machine. Actual and potential
problems encountered while developing and testing the mobile application are
listed under the testability and supportability categories in the risk catalog.
thoughts and list the issues. To enhance the risk catalog further with these kinds of
risks, I searched bug databases, online reports of failures in Web services, and
trade press articles, and inserted failures and possible problems into the risk
catalog after finishing work on Mobile Book Catalog. A detailed report on the
process that I followed to test Mobile Book Catalog is presented in the next
section.
expanded the risk analysis of Mobile Book Catalog by imagining failures that
could happen in quality risk categories like security, scalability, and performance,
as well as product dimensions like operations and data. Since I was the developer
of the mobile application, during the initial stages of this experiment I was
focusing more on the design and the ways in which the product would work. This
limited my thought process and risk analysis in imagining the ways the application
could fail when an end user is using it. With the help of the risk catalog, I could
now think of problems like: what will happen if I click the button to get the list of
books while other applications are running on the handheld simultaneously?
Next, I created further associative branches for each subcategory of the catalog
and hyperlinked them to the original diagram. This level was granular enough to
start thinking about the specific failures and risks for each failure category. Using
the combination of the source code and the application running on the device, I
then populated these categories with specific potential risks and failures. At this
stage I was able to model the user behavior and was more empirical in my testing.
I eventually used these risks to update the risk catalog in chapter 5.
To give an example, the diagram for Product Elements, detailing the categories and
risks identified against each category, is shown below:
catalog suggested that a bird's-eye view of the risk catalog, with some sort of
navigable links to drill down into the failure categories, would be helpful. I plan to
build a risk modeling tool on the basis of this risk catalog. The primary
functionality that this tool will provide is better navigation through the risk
categories. Another helpful feature of the tool will be to allow users of the catalog
to enter their own risks and failure modes under the categories. This will assist in
populating the failure categories even further and help in diversifying or focusing
the catalog for users based on the application under test and its context.
Mobile applications.
Client-side devices.
Mobile Applications
Many of these mobile solutions are already in use in vertical industries like
healthcare, education, and delivery services. Some of the most commonly
encountered mobile applications are mobile e-mail and personal information
management (PIM). Other applications in use and emerging are: mobile financial
applications, like banking and stock trading; mobile advertising, which
is location-specific; mobile entertainment services and games; mobile office
The following figure demonstrates the architecture of the Compact Framework and
the way it interacts with the native code of the machine. More information on the
Microsoft .NET Compact Framework is available at
Java ME
Sun Microsystems offers a highly optimized runtime environment targeted at
handheld devices with limited resources. Java ME provides some core APIs,
classes, emulators, and technologies for wireless programming under its Wireless
Toolkit. It also follows a community process to define and allow implementers to
create new combinations of runtimes optimized for different devices. Java ME is
divided into configurations, profiles, and optional packages. More information is
available at the Sun Microsystems Website: http://java.sun.com/javame/index.jsp
WAP / WML
Wireless Markup Language (WML) and the Wireless Application Protocol (WAP)
are closely tied. They are used to display information on narrowband wireless
clients like cell phones and pagers. WML is used for creating Web pages for
handheld devices; WAP is the application communication protocol used to access
services and information. A consortium consisting of Unwired Planet, Motorola,
Ericsson, and Nokia was responsible for the creation of WAP and WML. More
information can be obtained at: http://www.wapforum.org/.
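To give a feel for the language, a minimal WML deck with a single card might look like this (the card id, title, and text are illustrative placeholders):

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- A deck contains one or more cards; the microbrowser shows one card at a time. -->
  <card id="home" title="Welcome">
    <p>Hello from a WML deck.</p>
  </card>
</wml>
```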
HDML
HDML stands for Handheld Device Markup Language. HDML and HDTP, the
accompanying protocol, were created by Unwired Planet in 1997. A
microbrowser, called UP.browser, was also introduced; it runs on cell
phones and similar devices.
cHTML
Compact HTML is a subset of HTML 2.0, 3.2, and 4.0. The goal of the language is
quite similar to that of WML. The cHTML standard exists only as a W3C note
rather than a well-established standard. Compact HTML strips normal HTML
down to the bare bones, making it suitable for narrowband and constrained devices.
It uses normal HTTP for data transfer, making it easier to serve up content for the
handheld devices that support it.
VoiceXML
VoiceXML is an application of XML, so it possesses the same structure,
restrictions, and benefits as XML. It is designed for creating audio dialogues with
human beings. It allows for a combination of synthesized speech and digitized
audio (output from the server side), recognition of spoken and DTMF key input,
and recording of spoken input. VoiceXML minimizes client/server interactions
by specifying multiple interactions per document. The major goal is to bring the
advantages of Web-based development and content delivery to interactive voice
response applications.
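A minimal VoiceXML document illustrating a synthesized-speech prompt is shown below; the prompt text is an illustrative placeholder:

```xml
<?xml version="1.0"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- A form is the basic dialog unit; this one just speaks a prompt. -->
  <form>
    <block>
      <prompt>Welcome to the book catalog.</prompt>
    </block>
  </form>
</vxml>
```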
Simplified HTML
This is a simplified version of HTML. PQA (Palm Query Application) uses a
subset of HTML and is one of the main browsing languages in the Palm handheld
market.
XHTML
XHTML is the replacement for HTML as the Web browser language
recommended by the W3C. XHTML 1.0 was a reformulation of HTML 4.01 in
XML. XHTML Basic is defined as a proper subset of XHTML for mobile
applications.
i-mode
i-mode is a wireless data service developed by NTT DoCoMo. It is packet-based,
as opposed to circuit-switched voice systems. cHTML is used to write i-mode
pages. More information is available at: http://www.ai.mit.edu/people/hqm/imode/
SyncML
SyncML is a mobile data synchronization protocol that synchronizes data between
a networked server or desktop and a mobile device. It offers support for a variety
of transport protocols and applications, thereby enhancing interoperability.
Client-Side Devices
The first handheld computing device that acquired a significant market share was
the Apple Newton. Since then, the handheld space has evolved to the point where
there are literally thousands of different combinations of hardware devices,
software capabilities and wireless networking features. These are some of the
devices that are in use in the commercial, industrial and personal sectors.
Smart phones
These are cellular phones with the display hardware and software needed for
wireless Internet connectivity. They have a micro browser and an amount of
memory that is continuously being expanded. Such phones go by many different
names depending on the technology used for Internet services and information: in
Japan they are known as imode phones, in Europe as WAP phones, and in many
places simply as Web phones. (Beaulieu, 2002)
PDA
A PDA is a miniature computer with a special OS, storage, a keyboard or soft input
panel, and a display. In general, PDAs have much more computing power than a
smart phone. They too go by different names, such as handheld, palm-top and
communicator. There are two different kinds of handheld: the industrial and the
consumer handheld (Beaulieu, 2002). The main difference is in the packaging. The
PDAs used in the consumer market are mostly based on Palm OS, Microsoft
Pocket PC OS and Blackberry OS. Some manufacturers of industrial handhelds are
Symbol, Intermec, Itronix and Husky. Industrial handhelds mostly connect to a
wireless LAN rather than a WAN.
Pagers
A pager is a handheld wireless device that uses a paging network for data
communication (Beaulieu, 2002). Pagers can be one-way, two-way or uplink. An
uplink pager transmits telemetry or location information and is normally used
for asset management. Pagers are more cost-effective and time-sensitive than a
cell phone, and have longer battery life.
Appliances
iAppliances is the generic name for the class of devices with a specialized purpose
and limited Internet or wireless data connectivity. Examples of such devices
include e-book readers, e-mail stations and Internet radios.
The Hybrids
A series of handheld-compatible phones has been rolled out. They could be called
communication devices that compute, or computing devices that communicate.
They can run high-level applications and still work as cellular phones. Java phones
are early devices in this category that deliver voice as well as data. The trend is
toward a Swiss-army-knife kind of device that combines all the benefits of the
above-mentioned devices into a single ideal handheld device. (Beaulieu, 2002)
HiperLAN
In European countries the set of wireless network communication standards is
known as HiperLAN. Two specifications have been adopted by ETSI (the
European Telecommunications Standards Institute): HiperLAN/1 and HiperLAN/2.
Appendix B
Issues Faced During Testing
The following issues were faced during testing of MidCast:
Issue # 1
Connection Lost During Connection Process
1. Begin MidCast
2. If prompted, connect and enable mobile
3. If the connection is lost at any point during this process, the following message
is displayed: Service Connection in Progress "Error: PPP timeout (0x1231)"
Issue # 2
Cannot navigate during connection hang-ups - REAL-TIME FAILURE
1. Get MidCast running and connected
2. During periods in which the connection is slow, the user is unable to navigate
using the onscreen prompts.
Usability Issues
Issue #3
When deleting a stock, you are NOT prompted to verify that the correct action is
being performed.
Issue #4
No way to look up stock names.
Issue #5
Cannot cycle back and forth through action options; only forward movement is
possible.
Issue #6
Menus are left-justified to cells; however, "ID" is centered over the stock names.
Issue #7
The "Action" button appears to have no relationship to the other button, but it
actually controls it.
Issue #8
Cannot re-sort the list of stock names.
Issue #9
Each time you start MidCast, a null stock appears, even after deletion
Issue # 10
Could not handle some NYSE stocks
Issue # 11
Real-Time Refreshing - REAL-TIME FAILURE
After some refreshes, the stock change was (e.g.) "+2.00" but the trade would still
show a red down arrow, meaning that the stock was down from its opening price
Issue # 12
Heap Memory Error - DATA INSTANCE ERROR
Issue # 13
Reappearing Stocks
1. Begin MidCast
2. From the main console, tap on "Day chart" 3 or 4 times continuously
3. Hit "cancel" once it displays "obtaining data" and it will then exit and reenter a
Day Chart
4. Return to the main console, and then repeat steps 2 and 3 with "RT Chart"
5. Repeat 2,3 and 4 a few times, and Stocks that have been deleted reappear, even
if they were deleted in previous sessions or if you have powered off
Issue # 14
Buffer Overrun and Memory Leak Requiring a Soft Reboot
1. Start MidCast
2. Once at the main console, tap on the "Day chart" or "RT Chart" options
continuously until "Uncaught exception java/lang/OutOfMemoryError" appears
3. Repeat this two or three times
4. MidCast will run progressively slower until it eventually locks up
Issue # 15
Connection Errors
Issue # 16
Daytime and Real-time Chart Overload
1. Begin MidCast
2. Click on "Day chart" continuously until "uncaught exception java/lang/..."
message appears
3. Click okay, and then immediately begin clicking on "Day chart" again about ten
times.
MidCast will now enter and exit the daytime charts ten times before locking up
and requiring a soft reboot
Issue # 17
Strange IP Value Error
Issue # 18
Null Pointer Exception
1. Start MidCast
2. Tap on "RT Chart" 5 times continuously
3. Once in the RT Chart, click back
4. Error message: "Uncaught exception java/lang/NullPointerException", causing a
crash
Issue # 19
Fatal Alert
1. Start MidCast
2. Click on "Day chart" continuously until "uncaught exception..." error appears
3. Click okay, then when MidCast says "obtaining data", hit cancel
4. Repeat steps 2 and 3 until "Fatal Exception" error asks you to restart
Issue # 20
No news button as advertised in the description.
Issue # 21
There is no scroll bar, just a dotted line which does not allow scrolling.
The only way to view the remaining items is with the hardware buttons on the
Palm, but not all PDAs have hardware buttons.
Issue # 22
Pressing Action modifies the second button instead of bringing up a menu, as a
user would expect.
ACCURACY
Issue # 23
On page 16 of
http://www.hillcast.com/Website/support/user_manuals/pdf/MidCastGuide_PalmOS.pdf
the arrows for wick, volume and candlestick point to the wrong locations
(they are shifted one inch to the left).
Issue # 24
The application claims that its day chart "displays the intraday highs and lows for a
specific stock in 20-minute time intervals throughout the trading day"
1. Add stock R
2. Select day chart
At 1:46 PM the last time the chart had been updated was 9:30 AM.
At 2:45 PM the last price information was from 9:30 AM; the volume information
was from 12:30 PM.
Issue # 25
For nonexistent stocks whose names start with characters like ', the displayed
information is different from that displayed for the rest of the nonexistent stocks.
Issue # 26
Spelling error at the button Day Chart
1. Press Action several times until you see the Day Chart button on the right.
EFFICIENCY
Issue # 27
To perform an action the user has to click through all possible actions
Issue # 28
The user is not given proper feedback. Creating a day chart for a nonexistent stock
makes the application send and receive data, but does not show any error message
or graph (just a blank screen with a back button).
1. Add stock W
2. Click action button until the button on the right is Day Chart
3. Click on Day Chart
Issue # 29
Sometimes, for no apparent reason, the application quits and brings the user to
the mobile panel. This is probably because of a problem with the network, but the
user is not given any feedback.
RECOVERABILITY
Issue # 30
Pressing Day Chart many times crashes the application. The displayed error is:
Error Uncaught exception java/lang/OutOfMemoryError OR Error
1. Select a stock
2. Press Day Chart many times
The fault is probably that the application queues all user requests without checking
the size of the queue, so if there are too many requests, at some point the
application simply runs out of memory.
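A guard of the kind this diagnosis suggests can be sketched as follows (in Java
rather than the MIDlet's own source, which is not available here; the class name
and the MAX_PENDING limit are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedRequestQueue {
    // Hypothetical cap; a real value would be tuned to the device's heap size.
    static final int MAX_PENDING = 10;

    private final Queue<Runnable> pending = new ArrayDeque<Runnable>();

    // Rejects the request instead of queueing without bound,
    // so repeated taps cannot exhaust heap memory.
    public synchronized boolean offer(Runnable request) {
        if (pending.size() >= MAX_PENDING) {
            return false; // caller can show a "busy, please wait" notice
        }
        return pending.add(request);
    }

    public synchronized Runnable poll() {
        return pending.poll();
    }

    public synchronized int size() {
        return pending.size();
    }
}
```

With a guard like this, tapping "Day chart" repeatedly would surface a benign
"busy" notice instead of an OutOfMemoryError.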
Issue # 31
It is impossible to connect in some rooms.
ERROR MESSAGES
Issue # 32
If the Palm is not connected to GPRS, the application asks the user to connect. If
the user selects cancel, and cancel again on the next prompt, then instead of
quitting, the application displays an error (Error: Net.lin interface error:
0x00002F37) and still tries to connect.
Issue # 33
There is no way to know whether the stock market feed is down or the application
is not working, because no error message or information is given to the user; the
data is simply not updated at all.
AUTHENTICATION
Issue # 34
It is possible to make a stock appear not to exist when it does.
The result is that the screen shows stock K, which exists, but the information
shows that it does not.
Issue # 35
An error message appeared without any obvious reason:
Error Strange value of this IP
Feedback Forms
Undergraduate Student 1
3. How did you use the failure mode catalog while testing the sample
applications?
I started with Operational Quality Criteria -> Functionality -> Suitability. I read the
general description of the category. This focused my thinking in this area. Then I
read the failure modes, and for each of them I tried to imagine how I can apply the
same idea to my current application under test. Then I moved to the next category,
and so on. I traversed the categories in a sequential order (so I wouldn't miss
anything).
4. Which portion of the catalog was most useful while designing and
executing your tests? You could say something like: I found operational
qualities criteria to be more useful than any other category of failure.
I found operational qualities criteria to be more useful than any other category of
failure for a few reasons. First of all, it was perfectly suitable for the type of testing
I was doing (black box testing). Second, I didn't need much understanding of the
wireless technology to perform the tests suggested by this category.
6. What are some ways in which the failure mode catalog could be made
more useful?
For me the core of the paper is the failure modes and everything else is auxiliary.
That is why I believe that the most important thing would be to add more failure
modes in each category. For example, currently Suitability has 8, Accuracy 6, and
so on. It would be better if there were something like 15-20 in each category.
Also, it would be better if the general structure of the paper were improved. I saw
the HTML version, where each category was a link so the tester can navigate very
quickly. That was a great idea. The problem with the PDF version of the paper is
that navigation is hard. Maybe page numbers would be a little more helpful, so the
chart with the catalog would work something like a table of contents.
The purpose of the Sample Application section that immediately follows the
catalog chart is unclear. It looks like an introduction, and if it is, it would be
better placed in the introduction section.
7. What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?
Generally, I would find more relevant categories, and more failure modes in each
one, useful.
8. How much coverage of the risk catalog did you achieve while testing the
sample applications?
I spent 90% of the time on the Operational Quality Criteria. I still went through the
whole catalog, but I was going rather quickly because I was aware of the time
constraints.
How would you prefer to split the time if 30 hours are allocated in total to
finish all these tasks?
I believe that the distribution of my time was appropriate.
a) I assume that you mean familiarization with testing using a fault model catalog.
In this case 1 hour to read the paper would be enough (maybe one more hour to
read through other relevant papers).
b) I would say no more than 5% of the time, which in this case is an hour and a half.
Note: If at some point I realize that I need additional information to perform a test,
Expand the categories and add more failure modes in each of them
Undergraduate Student #2
3. How did you use the failure mode catalog while testing the sample
applications?
I used the catalog mainly as a checklist and a starting point. I would find something
that I was either ready or comfortable to test, then dive into the application with
that in mind. Once inside the application test, I would check back with the catalog
to see whether I had found something new, or else to get some idea of what I
should look for next. The catalog was a great resource for focusing the testing.
4. Which portion of the catalog was most useful while designing and
executing your tests? You could say something like: I found operational
qualities criteria to be more useful than any other category of failure.
Not to throw out a general response, but I found the entire catalog very useful. As I
said earlier, each section of the catalog was just a focal point to tackle. I certainly
made use of some more than others, but that in no way detracts from the usefulness
of all of them. I feel there may have been some areas that should have been
sub-classified a little more, as well as some other areas that should have been
broadened, but on the whole the catalog functioned much like a thorough
task-managing solution.
6. What are some ways in which the failure mode catalog could be made
more useful?
Certainly, more examples for each of the sections would make the catalog more
usable.
Similarly, more sub-classifications so that there is a broader range of focal points
(which I assume will simply come as the catalog develops over time). With my
experience, I found the catalog more than sufficient. Perhaps someone with more
experience might find shortcomings, but I did not. The only constructive suggestion
I would make would be to include more detail in the examples, such that there is
more of an idea on how to locate certain bugs, or where it is that they arise (such as
heap memory failures, etc.). Also, perhaps definitions of the bugs, so that there is a
clear and precise understanding of what a tester is working with.
7. What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?
Again, definitions would be fantastic. Rather than just simply links with the bug's
name, more explanation on what the bug is and, especially, what damage it can
cause. Additionally, I'm not sure it would be beneficial to organize the bugs in a
priority manner. The current version of the catalog was not organized that way, and
I feel that it should stay as such.
8. How much coverage of the risk catalog did you achieve while testing the
sample applications?
Actually, a fair amount. What I found was that many of the bugs either did not exist
or did not apply in the program I was testing. However, a lot of the bugs I did find I
was not expecting, and probably would not have found without the catalog.
How would you prefer to split the time if 30 hours are allocated in total to
finish all these tasks?
I would have enjoyed more time testing some sample applications. The application
I did test thoroughly did not cover a lot of the listed bugs in the catalog, and for
that reason I feel that I did not get a chance to fully familiarize myself with many
of the catalog's features. If I had to make one request it would be more time
testing, because even a crash course in testing provides ample (and some of the
best) time spent familiarizing oneself with the catalog. Perhaps 5 hours toward
familiarization with testing techniques, wireless technology, and a general
overview of the catalog, and then the rest of the time simply allowing the user to
get right into the test application. In this way, you can really see where the
shortcomings in the catalog are just by any confusion that may reside in the tester
once they get going.
Directions: Please provide the information requested below (please type). Use a
continuation sheet if necessary, and provide appropriate supporting documentation.
Please sign and date this form and then return it to your major advisor. You should
consult the university's document "Principles, Policy and Applicability for
Research Involving Human Subjects" prior to completing this form. Copies may be
obtained from the Office of the Vice President for Research.
We will provide the tester with the online version of the failure mode
catalog for mobile applications.
Then we will provide the tester with some examples of bugs that have
occurred in a similar application falling under a failure category.
Testers will then fill out a survey describing their experience and providing
feedback on their use of the risk catalog.
To evaluate the results of the experiments, we will have some context-setting at the
start of the experiment and some results analysis left to do at the end:
Everyone will take an oral pretest that will give us an indication of the skills
they already have in the type of testing they are required to carry out.
5. Describe the procedures you will use to maintain confidentiality for your
research subjects and project data. What problems, if any, do you
anticipate in this regard?
We will not be using any personal data collected, such as names, in the thesis or
elsewhere.
Subjects will be assigned numbers, which they will use on all materials they hand
in. We will keep a list matching subject name and number, primarily for auditing
purposes. We will keep this list in an offsite file (Kaner's house) and will not share
it with others unless they have a lawful need to know. We explain this in the
consent form.
6. Describe your plan for obtaining informed consent (attach proposed form).
Florida Tech IRB: 2/96
Consent will be sought using the attached consent form.
7. Discuss what benefits will accrue to your subjects and the importance of
the knowledge that will result from your study.
Pay by the hour (typically $150 for the experiment or $10 per hour)
8. Explain how your proposed study meets the criteria for exemption from
Institutional Review Board review.
The following protocol applies to my research, which should be exempted because
it meets the criteria:
1. to accept responsibility for the scientific and ethical conduct of this research
study.
2. to obtain prior approval from the Institutional Review Board before amending
or altering the research protocol or implementing changes in the approved
consent form.
3. to immediately report to the IRB any serious adverse reactions and/or
unanticipated effects on subjects which may occur as a result of this study.
4. to complete, on request by the IRB, a Continuation Review Form if the study
exceeds its estimated duration.
Signature:
Ajay K Jha
Date:
This is to certify that I have reviewed this research protocol and that I attest to the
scientific merit of the study, the necessity for the use of human subjects in the study
to the student's academic program, and the competency of the student to conduct
the project.
Consent Form
CONSENT FORM: FAILURE MODE CATALOG EXPERIMENT BY AJAY
JHA
We are seeking your participation in a research project in which we are developing
a failure mode catalog for testing mobile applications. We will use data from this
experiment to improve the risk catalog and adapt it to be more useful.
If you agree to participate, we may ask you to attend one or more lectures, read
materials, complete practice exercises, and take written tests at various points in the
study.
When you do an exercise or take a test, you will fill out an answer sheet. To
preserve your privacy, you will identify yourself on answer sheets with an
experimenter-assigned number. You might review answers written by other
students and they might review your answers.
You may be assigned to work on an exercise with another student, and if so, each
of you will fill out your own answer sheet.
The experiment will require several hours of your participation. If you cannot
participate for all of the scheduled hours, please do not begin this experiment. We
cannot use partial data and we cannot compensate you for partial participation.
We will split the experiment into sessions, which will last between 15 and 18
hours.
Participants in most of the phases of this work will be paid. Some of our colleagues
will serve as volunteers during the exploratory phases of the experiment and will
not be paid.
You will be paid at the completion of your role in the experiment (all sessions
assigned to you). We can afford to pay you only if you attend all the assigned
sessions and complete the required assignments/exercises/quizzes. We cannot use
your data if you skip any session.
Your participation will not subject you to any physical pain or risk. We do not
anticipate that you will be subject to any stress or embarrassment.
We will ask you to fill out one or more questionnaires that give us demographic
information about you and/or that give us insight into how you learn.
Your name will not be recorded on any answer sheet. You will be assigned an
anonymous code number. You will use that code number on your answer sheet.
Your responses will be tracked under that code number, not under your name. Any
reports about this research will contain only data of an anonymous or statistical
nature. Your name will not be used.
For auditing purposes, the experimenter will keep a list of all people who
participated in the experiment and the anonymous code assigned to them. That list
might be reviewed by the student experimenter, Ajay Jha, the project's Principal
Investigator, Cem Kaner, or by anyone designated by the Florida Institute of
Technology or an agency of the Government of the United States, including the
National Science Foundation, as having such legitimate administrative interests in
the project as analysis of the treatment of the subjects, the legitimacy of the data, or
the financial management of the project. We will file this list in a place we consider
safe and secure and take what we consider to be reasonable measures to protect its
confidentiality. We will treat it with the same (or greater) care as we would treat
our own confidential materials.
Any questions you have regarding this research may be directed to the
experimenter (Ajay Jha) or to Cem Kaner at Florida Tech's Department of
Computer Sciences, 321-674-7137. Information involving the conduct and review
of research involving humans may be obtained from the Chairman of the
Institutional Review Board of the Florida Institute of Technology, Dr. Ronald
Hansrote at 321-674-8120.
Your signature (below) indicates that you agree to participate in this research and
that:
You understand that you are free to discontinue participation at any time
without penalty or loss of benefits to which you are otherwise entitled,
except that you won't be entitled to be paid for the experiment if you do
not attend all the sessions.
___________________________________________
__________________
Participant
Date
___________________________________________
__________________
Experimenter
Date
namespace BookCatalogAppCS
{
    public class Form1 : Form
    {
        [DllImport("coredll.dll")]
        private static extern int LoadCursor(int zeroValue, int cursorID);

        [DllImport("coredll.dll")]
        private static extern int SetCursor(int cursorHandle);

        public Form1()
        {
            InitializeComponent();
        }
    }
    #endregion
    /// <summary>
    /// The main entry point for the application.
    /// </summary>
    static void Main()
    {
        Application.Run(new Form1());
    }

        ShowWaitCursor(true);
        try
        {
            // Get the data from the Web service and assign it to a temporary DataSet.
            // If the DataSet downloads successfully from the Web service,
            // assign TempDS to BookCatalogDS.
            TempDS = ws.GetItems();
            BookCatalogDS = TempDS;
            BookCatalogTable = BookCatalogDS.Tables["Titles"];
            AddDataToListView();
        }
        catch (WebException we)
        {
            MessageBox.Show("Unable to connect. Error: " + we.Message,
                "Connection Failed");
        }
        ShowWaitCursor(false);
    }
        if (listView1.SelectedIndices.Count > 0)
        {
            row = BookCatalogTable.Rows[listView1.SelectedIndices[0]];
            textBox1.Text = "Description:\r\n" + row["notes"].ToString();
            try
            {
                // pictureBox1.Image = new Bitmap(new MemoryStream((byte[])row["Image"]));
            }
            catch (InvalidCastException ne)
            {
                MessageBox.Show("Could not load image. Exception: " + ne.Message);
            }
        }
    }
        XmlWriter Writer;
        ShowWaitCursor(true);
            BookCatalogDS.WriteXml(Writer, XmlWriteMode.WriteSchema);
            Writer.Close();
            }
        }
        ShowWaitCursor(false);
    }
        // pictureBox1
        // pictureBox1.Image = new System.Drawing.Bitmap(Assembly.GetExecutingAssembly().GetManifestResourceStream("BookCatalogAppCS.logo.gif"));
        // pictureBox1.Size = pictureBox1.Image.Size;
    }

        listView1.Clear();
        listView1.Columns.Add("Title", listView1.Width - 60, HorizontalAlignment.Left);
        listView1.Columns.Add("Price", 45, HorizontalAlignment.Right);
        listView1.View = View.Details;
            item.SubItems.Add(String.Format("{0:F2}", (decimal)row["price"]));
            }
            listView1.Items.Add(item);
        }
        if (CatalogFile.Exists)
        {
            try
            {
                BookCatalogDS = new DataSet();
                BookCatalogDS.ReadXml(DataFile);
                BookCatalogTable = BookCatalogDS.Tables["Titles"];
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
            AddDataToListView();
        }
    }

    private static void ShowWaitCursor(bool value)
    {
    }
}
WSDL
<?xml version="1.0" encoding="utf-8"?>
<definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:s0="http://tempuri.org/"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
targetNamespace="http://tempuri.org/"
xmlns="http://schemas.xmlsoap.org/wsdl/">
<types>
<s:schema elementFormDefault="qualified"
targetNamespace="http://tempuri.org/">
<s:import namespace="http://www.w3.org/2001/XMLSchema" />
<s:element name="GetItems">
<s:complexType />
</s:element>
<s:element name="GetItemsResponse">
<s:complexType>
<s:sequence>
<s:element minOccurs="0" maxOccurs="1" name="GetItemsResult">
<s:complexType>
<s:sequence>
<s:element ref="s:schema" />
<s:any />
</s:sequence>
</s:complexType>
</s:element>
</s:sequence>
</s:complexType>
</s:element>
<s:element name="DataSet" nillable="true">
<s:complexType>
<s:sequence>
<s:element ref="s:schema" />
<s:any />
</s:sequence>
</s:complexType>
</s:element>
</s:schema>
</types>
<message name="GetItemsSoapIn">
<part name="parameters" element="s0:GetItems" />
</message>
<message name="GetItemsSoapOut">
<part name="parameters" element="s0:GetItemsResponse" />
</message>
<message name="GetItemsHttpGetIn" />
<message name="GetItemsHttpGetOut">
<part name="Body" element="s0:DataSet" />
</message>
<message name="GetItemsHttpPostIn" />
<message name="GetItemsHttpPostOut">
<part name="Body" element="s0:DataSet" />
</message>
<portType name="Service1Soap">
<operation name="GetItems">
<input message="s0:GetItemsSoapIn" />
<output message="s0:GetItemsSoapOut" />
</operation>
</portType>
<portType name="Service1HttpGet">
<operation name="GetItems">
<input message="s0:GetItemsHttpGetIn" />
<http:urlEncoded />
</input>
<output>
<mime:mimeXml part="Body" />
</output>
</operation>
</binding>
<binding name="Service1HttpPost" type="s0:Service1HttpPost">
<http:binding verb="POST" />
<operation name="GetItems">
<http:operation location="/GetItems" />
<input>
<mime:content type="application/x-www-form-urlencoded" />
</input>
<output>
<mime:mimeXml part="Body" />
</output>
</operation>
</binding>
<service name="Service1">
<port name="Service1Soap" binding="s0:Service1Soap">
<soap:address
location="http://apps.gotdotnet.com/netcf/BookCatalogWS/Service1.asmx" />
</port>
<port name="Service1HttpGet" binding="s0:Service1HttpGet">
<http:address
location="http://apps.gotdotnet.com/netcf/BookCatalogWS/Service1.asmx" />
</port>
Service.cs
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Web;
using System.Web.Services;
using System.Data.SqlClient;
using System.Net;
using System.IO;
using System.Drawing;
namespace BookCatalogWS
{
/// <summary>
/// Summary description for Service1.
/// </summary>
public class Service1 : System.Web.Services.WebService
{
DataSet BookCatalog;
public Service1()
{
// CODEGEN: This call is required by the ASP.NET Web Services Designer
InitializeComponent();
BookCatalog = GetDataSet();
}
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
}
/// <summary>
/// Clean up any resources being used.
/// </summary>
protected override void Dispose( bool disposing )
{
if(disposing && components != null)
{
components.Dispose();
}
base.Dispose(disposing);
}
#endregion
        DataTable dt = ds.Tables["Titles"];
        DataColumn dc = new DataColumn();
        dc.DataType = typeof(byte[]);   // Column holds the raw image bytes
        dc.ColumnName = "Image";        // Column name is 'Image'
        dc.Unique = false;              // Not unique
        dc.ReadOnly = false;            // Read/write
        dt.Columns.Add(dc);             // Add column
        ds.AcceptChanges();
        return ds;
    }

    [WebMethod]
    public DataSet GetItems()
    {
        return BookCatalog;
    }
}
}
References
Agruss, C. (2000). Software Installation Testing. Software Testing & Quality
Engineering, 2(4), from
http://www.stickyminds.com/stickyfile.asp?i=1866150&j=29860&ext=.pdf.
Altshuller, G. (1997). 40 Principles: TRIZ Keys to Technical Innovation (Vol. 1).
Worcester, MA, USA: Technical Innovation Center.
Amland, S. (1999). Risk Based Testing and Metrics. Paper presented at the 5th
International Conference, EuroSTAR '99.
Bach, J. (1999). Risk-Based Testing. Software Testing & Quality Engineering,
1(6), 22-29, from http://www.satisfice.com/articles/hrbt.pdf.
Bach, J. (2003). Troubleshooting Risk-Based Testing. Software Testing & Quality
Engineering, 5(3), 28-33, from http://www.satisfice.com/articles/rbttrouble.pdf.
Bach, J. (2006a). Heuristic Test Strategy Model (4.8 ed., pp. 1-5).
http://www.satisfice.com: Satisfice, Inc.
Bach, J. (2006b). Rapid Software Testing - Course Notes (1.9.8.3 ed.).
http://www.satisfice.com: Satisfice, Inc.
Beizer, B. (1990). Software Testing Techniques (2 ed.). New York, NY, USA: Van
Nostrand Reinhold Co.
Biaz, S., & Vaidya, N. H. (1997). Tolerating location register failures in mobile
environments. Texas, USA: Tech. Rep. No. 97-015, Texas A&M
University, Department of Computer Science.
Biaz, S., & Vaidya, N. H. (1998). Tolerating visitor location register failures in
mobile environments. Paper presented at the The 17th IEEE Symposium on
Reliable Distributed Systems.
Bishop, M., & Bailey, D. (1996). A Critical Analysis of Vulnerability Taxonomies.
Davis, CA, USA: University of California at Davis.
Bloom, B. S. (1956). Taxonomy of Educational Objectives, Handbook I: The
Cognitive Domain. Longman, New York, USA: Addison Wesley Publishing
Company.
Cao, G. (2000). Designing Efficient Fault-Tolerant Systems on Wireless Networks.
Paper presented at the Proceedings of the Third IEEE Information
Survivability Workshop.
Chakravorty, R., & Pratt, I. (2002). Performance Issues with General Packet Radio
Service. Journal of Communications and Networks, 4(2), 266-281, from
cl.cam.ac.uk/users/rc277/jcn02.ps.
Chande, S. (2005). Mobile Web Services. University of Helsinki from
http://www.cs.helsinki.fi/u/chande/courses/cs/MWS/.
Chatterjee, S., & Webber, J. (2003). Developing Enterprise Web Services: An
Architect's Guide. East Patchogue, NY, USA: Prentice Hall PTR.
Cheng, S., Lai, K., & Baker, M. (1999). Analysis of HTTP/1.1 Performance on a
Wireless Network. Stanford, CA, USA: Computer Systems Laboratory,
Stanford University from
http://citeseer.ist.psu.edu/cache/papers/cs/2053/http:zSzzSzgunpowder.stanf
ord.eduzSz~laikzSzprojectszSzwireless_httpzSzpublicationszSztech_report
zSzwireless_http.pdf/cheng99analysis.pdf.
Collard, R. (2002). Performance, Load and Stress Testing. Software Productivity
Center Inc.
Czerny, B. J., D'Ambrosio, J. G., Murray, B. T., & Sundaram, P. (2005). Effective
Application of Software Safety Techniques for Automotive Embedded
Control Systems. Unpublished SAE Technical Paper Series. SAE
International.
Dreamtech Software Team. (2002). Programming for Embedded Systems: Cracking
the Code (Bk&CD-Rom ed.). New York, NY, USA: Wiley Publishing Inc.
Erl, T. (2005). Service-Oriented Architecture (SOA): Concepts, Technology, and
Design. Upper Saddle River, New Jersey, USA: Prentice Hall Professional
Technical Reference.
Fowler, M., Beck, K., Brant, J., Opdyke, W., & Roberts, D. (2003). Refactoring:
Improving the design of existing code. Boston, MA, USA: Addison-Wesley
Longman Inc.
Gerrard, P., & Thompson, N. (2002). Risk-Based E-Business Testing (1 ed.).
Boston, MA, USA: Artech House Publishers.
Giguere, E. (1999). Palm database programming: The complete developer's guide.
Indianapolis, IN, USA: John Wiley & Sons.
GoKnow, Inc. (2004). Palm OS PAAM Conduit for Windows. Ann Arbor, MI, USA:
GoKnow, Inc. from
http://paam.goknow.com/files/PAAMWalkthrough_021403.pdf
Hecht, H., Xuegao, A., & Hecht, M. (2003). Computer aided software FMEA for
Unified Modeling Language based software. Paper presented at the
Reliability and Maintainability, 2004 Annual Symposium - RAMS.
Henley, E. J., & Kumamoto, H. (1992). Probabilistic Risk Assessment: Reliability
Engineering, Design, and Analysis. New York, NY, USA: IEEE Press.
Hirsch, F., & Kemp, J. (2006). Mobile web services: Architecture and
implementation. West Sussex, England: John Wiley & Sons.
IBM. (2005). WebSphere Version 5.1 Application Developer 5.1.1 Web Services
Handbook. IBM WebSphere Software, Redbooks from
http://www.redbooks.ibm.com/redbooks/pdfs/sg246891.pdf.
IEEE. (1991). IEEE standard computer dictionary: A compilation of IEEE
standard computer glossaries. New York, NY, USA: IEEE Press.
ISO9126. (1991). Information technology - Software product evaluation - Quality
characteristics and guidelines for their use. Geneva, Switzerland:
International Standard ISO/IEC 9126.
ISO9241-11. (1998). Ergonomic requirements for office work with visual display
terminals: Guidance on Usability: American National Standards Institute.
Jha, A., & Kaner, C. (2003). Bugs in the brave new unwired world. Paper presented
at the Pacific Northwest Software Quality Conference.
Jouko, S., & Veikko, R. (1993). Quality Management of Safety and Risk Analysis.
Tampere, Finland: Elsevier Science Publishers Co.
Kaner, C., & Bach, J. (2005). Black box software testing. Unpublished Course
Notes. Florida Institute of Technology from
http://www.testingeducation.org/BBST/index.html.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons learned in software testing (1
ed.). New York, NY, USA: John Wiley & Sons.
Kaner, C., Falk, J., & Nguyen, H. Q. (1999). Testing Computer Software (2 ed.).
New York, NY, USA: John Wiley and Sons.
Karygiannis, T., & Owens, L. (2002). Wireless network security. National Institute
of Standards and Technology from
http://www.csrc.nist.gov/publications/nistpubs/800-48/NIST_SP_80048.pdf.
Ko, H.-P. (1996). Attacks on cellular systems. GTE Laboratories Incorporated from
http://seclab.cs.ucdavis.edu/projects/cmad/4-1996/pdfs/Ko.PDF.
Lee, V., Schneider, H., & Schell, R. (2004). Mobile applications: Architecture,
design and development. Indianapolis, Indiana, USA: Prentice Hall
Professional Technical Reference.
Luchini, K., Quintana, C., & Soloway, E. (2004). Evaluating the Impact of Small
Screens on the Use of Scaffolded Handheld Learning Tools. University of
Michigan. Paper presented at the American Educational Research
Association, 2004.
Lutz, R. R., & Woodhouse, R. M. (1996, April 15-16, 1996). Experience report:
Contributions of SFMEA to requirements analysis. Paper presented at the
Second IEEE International Conference on Requirements Engineering,
Colorado Springs, CO, U.S.A.
Lutz, R. R., & Woodhouse, R. M. (1997). Requirements Analysis Using Forward
and Backward Search. Annals of Software Engineering, Special Volume on
Requirements Engineering, 3.
Lyu, M. R. (1995). Handbook of software reliability engineering. New York, NY,
USA: IEEE Computer Society Press and McGraw-Hill Book Company.
Malloy, A. D., Varshney, U., & Snow, A. P. (2002). Supporting mobile commerce
applications using dependable wireless networks. Mobile Networks and
Applications, 7(3), 225-234.
Mäntylä, M. (2003). Bad smells in software - A taxonomy and empirical study.
Helsinki University of Technology, Helsinki, Finland.
Marick, B. (1995). The Craft of Software Testing. Upper Saddle River, New Jersey,
USA: Prentice Hall.
Carr, M. J., Konda, S. L., Monarch, I., Ulrich, F. C., & Walker, C. F. (1993).
Taxonomy-Based Risk Identification.
McDermid, J. A., Nicholson, M., Pumfrey, D. J., & Fenelon, P. (1995). Experience
with the application of HAZOP to computer-based systems. Heslington,
York, U.K.: British Aerospace Dependable Computing Systems Centre and
High Integrity Systems Engineering Group, Department of Computer
Science, University of York from
http://citeseer.ist.psu.edu/cache/papers/cs/16867/ftp:zSzzSzftp.cs.york.ac.uk
zSzhise_reportszSzsafetyzSzexperience.pdf/mcdermid95experience.pdf.
McDermid, J. A., & Pumfrey, D. J. (1994). Towards Integrated Safety Analysis
and Design. Paper presented at the COMPASS 94: Proceedings of the
Ninth Annual Conference on Computer Assurance, Gaithersburg, MD,
USA.
McGary, R. (2005). Passing the PMP Exam: How to Take It and Pass It
(Bk&CD-Rom ed.). Indianapolis, Indiana, USA: Prentice Hall
Professional Technical Reference.
Newcomer, E., & Lomow, G. (2004). Understanding SOA with Web Services.
Boston, MA, USA: Addison-Wesley Professional.
Nguyen, H. Q. (2003). Testing applications on the web (2 ed.). New York, NY,
USA: John Wiley and Sons.
Nielsen, J., & Mack, R. L. (1994). Usability Inspection Methods. New York, NY,
USA: John Wiley & Sons Inc.
Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Paper
presented at the ACM CHI '90 Conference, 249-256.
Yang, S. J., Nieh, J., Krishnappa, S., Mohla, A., & Sajjadpour, M. (2003, 20-24
May, 2003). Web browsing performance of wireless thin-client computing.
Paper presented at the Twelfth International World Wide Web
Conference, Budapest, Hungary. from
http://www.ncl.cs.columbia.edu/publications/www2003_fordist.pdf.