
WATERFALL MODEL:

The classic Waterfall model was the first SDLC methodology; it describes the various phases involved in development.

Briefly on the different phases:

Feasibility
The feasibility study is used to determine if the project should get the go-ahead. If the project is to proceed, the feasibility study will produce a project plan and budget estimates for the future stages of development.

Requirement Analysis and Design
Analysis gathers the requirements for the system. This stage includes a detailed study of the business needs of the organization. Options for changing the business process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like) and data design (what data will be required). During these phases, the software's overall structure is defined. Analysis and Design are crucial in the whole development cycle: any glitch in the design phase can be very expensive to fix in the later stages of development, so much care is taken during this phase. The logical system of the product is developed in this phase.

Implementation
In this phase the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding, and the right programming language is chosen according to the type of application.

Testing
In this phase the system is tested. Normally programs are written as a series of individual modules, and these are subjected to separate and detailed tests. The separate modules are then brought together and tested as a complete system. The system is tested to ensure that the interfaces between modules work (integration testing), that the system works on the intended platform and with the expected volume of data (volume testing) and that the system does what the user requires (acceptance/beta testing).

Maintenance
Inevitably the system will need maintenance. Software will undergo change once it is delivered to the customer. There are many reasons for change: change could happen because of unexpected input values into the system, and changes in the system's environment can directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.

V-Model or VV Model:

Diagram: the V-model pairs each development phase on the left arm with a corresponding test phase on the right arm - CRS & Feasibility with UAT, SRS Documentation/Review with System Testing, High Level Design with Integration Testing, Detailed Design with Unit/Functional Testing, and Coding at the base of the V.

The V model is a classical software development and testing process model which ensures the quality of the product. At each testing stage, the corresponding planning stage is referred to, ensuring the system accurately meets the goals specified in the analysis and design stages. It encapsulates the Verification and Validation activities for each step in the SDLC.

Advantages (project conditions that suit the V model):
Clear project objectives
Stable project requirements
Knowledgeable user
No immediate need to install
Inexperienced team members
Fluctuating team composition
Less experienced project leader
Need to conserve resources
Strict requirement for approvals

Prototyping Model:

Diagram: the prototyping flow - requirement specification, minimal development of the prototype, then a decision point; if the prototype is not accepted the cycle repeats, and once it is accepted development follows any SDLC.

"Prototyping addresses the inability of many users to specify their information needs, and the difficulty of systems analysts to understand the user's environment, by providing the user with a tentative system for experimental purposes at the earliest possible time." Thus Prototyping is an iterative process that lets users work with a small-scale mock up of their system, experience how it might function in production, and request changes until it meets their requirements. Typically, once the user is satisfied with the prototype, then the prototype "becomes a working requirements statement" and design of the actual system begins. While most prototypes are done with the expectation that they will be discarded, it is possible to evolve from prototype to working system. Prototypes can be used to realistically model important aspects of a system during each phase of the traditional life cycle". All of these views see Prototyping as a subset within a larger development methodology. Advantages: ProjecSt objectives are unclear Functional requirements are changing User is not fully knowledgeable Immediate need to install something Experienced team members (particularly if the prototype is not throw-away) Stable team composition Experienced project leader No need to absolutely minimize resource consumption No strict policy or cultural bias favoring approvals Analysts/users appreciate business problems involved, before they begin project Innovative, flexible designs that will accommodate future changes are not critical Disadvantages:

There is the risk that only the most obvious and superficial needs will be addressed with the prototype. Very small projects may not be able to justify the added time and money of prototyping. Designers under time pressure may prototype too quickly, without sufficient up-front user needs analysis, resulting in an inflexible design with a narrow focus. Designers may also neglect documentation, resulting in insufficient justification for the final product and inadequate records for the future.

Spiral Model:
"The model holds that each cycle involves a progression through the same sequence of steps, for each portion of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."

Advantages (project conditions that suit the spiral model):
Risk avoidance is a high priority
No need to absolutely minimize resource consumption
Project manager is highly skilled and experienced
Policies or cultural bias favor approvals
Project might benefit from a mix of other development methodologies
Organization and team culture appreciate precision and controls
Delivery date takes precedence over functionality, which can be added in later versions

Diagram: the spiral model cycles through analysis (CRS, SRS), design (HLD, LLD), prototyping, coding, and implementation & testing on each loop of the spiral.

TESTING
Testing involves operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Testing can be broadly classified into White Box Testing and Black Box Testing.

WHITE BOX TESTING:


Also known as glass box, structural, clear box and open box testing. White box testing is a software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. The purpose of white box testing is to:
Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
Provide a complementary function to black box testing.
Perform complete coverage at the component level.
Improve quality by optimizing performance.

White Box Testing Techniques


Basis Path Testing
A path is defined to be a sequence of program statements that are executed by the software under test in response to a specific input. In most software units there is a potentially (near) infinite number of different paths through the code, so complete path coverage is impractical. Notwithstanding that, a number of structural techniques that involve the paths through the code can lead to a reasonable test result. Test cases derived to exercise the basis set will execute every statement in the program at least once.

Flow Graph Notation
The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Branch Testing

The structure of the software under test can be shown using a control flow diagram, illustrating statements and decision points. The number of unique paths from the start to the end of the code is equal to the number of decision points plus one; this number is just the cyclomatic complexity of the program. Using the control flow diagram, test cases can be designed such that each exercises at least one new segment of the control flow graph. In theory the number of test cases required should be equal to the cyclomatic complexity; in practice it is often difficult to achieve this fully. (A basis path example is sketched below.)

Conditions Testing
Condition testing is a test case design method that exercises the logical conditions contained in a program module. A condition may be:
Relational expression: two arithmetic expressions E1 and E2 compared by a relational operator.
Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.
Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
Boolean expression: a condition without relational expressions.

Data Flow Testing
Selects test paths according to the locations of definitions and uses of variables.

Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured.
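As a minimal sketch (not from the original text), consider a small hypothetical function and the basis path test cases derived from its cyclomatic complexity; the function, its inputs and the grade bands are assumptions for illustration.

```python
def classify_grade(score):
    """Return a grade band for a numeric score (hypothetical example)."""
    if score > 100:        # decision point 1
        return "invalid"
    if score >= 60:        # decision point 2
        return "pass"
    return "fail"

# Cyclomatic complexity V(G) = number of decision points + 1.
# classify_grade has 2 decision points, so V(G) = 3, and a basis set of
# 3 independent paths is enough to execute every statement at least once:
basis_path_cases = [
    (150, "invalid"),  # path 1: first condition true
    (75, "pass"),      # path 2: first false, second true
    (40, "fail"),      # path 3: both conditions false
]

for score, expected in basis_path_cases:
    actual = classify_grade(score)
    assert actual == expected, f"{score}: expected {expected}, got {actual}"
print("all basis path cases passed")
```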

White Box Testing Types


Unit Testing:
Unit testing is performed on the smallest unit of software. The term unit testing refers to the individual testing of separate units of a software system. In object-oriented systems, these units typically are classes and methods. The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating the units into modules to test the interfaces between modules. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. For unit testing, driver and/or stub software is developed to execute the unit under test. It is important to realize that unit testing will not catch every error in the program: by definition, it only tests the functionality of the units themselves, so it will not catch integration errors, performance problems or other system-wide issues. Unit testing is only effective if it is used in conjunction with other software testing activities. (A minimal unit test sketch follows.)
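A minimal sketch of a unit test, assuming Python's built-in unittest framework; the function under test and its expected values are hypothetical.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): apply a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test isolates one behaviour of the unit, with no other modules involved.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```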

Integration Testing:
Software is generally composed of multiple subsystems, which in turn are composed of multiple units, which in turn are composed of multiple modules. Testing the working and interaction of all the modules, units and subsystems together against the architectural design is defined as integration testing. Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The number of errors found in integration testing should generally be lower than in unit testing, but errors found during integration testing are generally more complex and more time-consuming and costly to diagnose and fix. Possible approaches to integration tests are white box integration tests, black box integration tests and performance tests. There are two common ways to conduct integration testing:
Non-incremental Integration Testing
Incremental Integration Testing

Non-incremental Integration Testing (big bang or umbrella): All the software units are assembled into the entire program. This assembly is then tested as a whole from the beginning, usually resulting in a chaotic situation, as the causes of defects are not easily isolated and corrected.

Incremental Integration Testing: The program is constructed and tested in small increments by adding a minimum number of components at each interval. Therefore, errors are easier to isolate and correct, and the interfaces are more likely to be tested completely. There are two common approaches to conducting incremental integration testing:
Top-Down Incremental Integration Testing
Bottom-up Incremental Integration Testing

Top-Down Incremental Integration Testing: The top-down approach to integration testing requires the highest-level modules to be tested and integrated first. Modules are integrated from the main module (main program) down to the subordinate modules in either a depth-first or breadth-first manner.


Integration testing starts with the highest-level control module, or main program, with all its subordinates replaced by stubs. Stubs are replaced, one at a time, by the actual units, which in turn contain stubs for their own subordinates.

Bottom-up Incremental Integration Testing: The bottom-up approach requires the lowest-level units to be tested and integrated first. The lowest-level sub-modules are integrated and tested, then the successively higher-level components are added and tested, traversing the hierarchy from the bottom upwards.

Related clusters of these units are combined, and a driver is written for each cluster (or the one written for unit testing is reused) which coordinates calls to, and the passage of test data between, the cluster's components. This is carried on up the program's control structure until the top/main program is reached.

Drivers and Stubs: A software application is made up of a number of units, where the output of one unit goes as the input of another unit. For example, a Sales Order Printing program takes a Sales Order as an input, which is actually an output of the Sales Order Creation program. Due to such interfaces, independent testing of a unit becomes impossible. But that is what we want to do; we want to test a unit in isolation. So here we use a Stub and a Driver. The driver simulates a calling unit and the stub simulates a called unit. A driver is effectively a specially written main program which accepts test case data, feeds it to the unit being tested, and displays the results of testing for comparison with expected results. A test driver allows you to call a function and display its return values.

Diagram: a Driver calls the Module Under Test, whose calls to its subordinate units are replaced by Stub1 and Stub2.

Stubs are dummy units which stand in for the units that are subordinate to (called by) the component being tested. Use of stubs can be tricky, especially if the subordinate unit returns data to the unit being tested. A stub returns a value that is sufficient for testing. Example - for unit testing of the Sales Order Printing program, a Driver program will have the code which creates Sales Order records using hardcoded data and then calls the Sales Order Printing program. Suppose this printing program uses another unit which calculates sales discounts through some complex calculations. Then the call to this unit will be replaced by a Stub, which will simply return fixed discount data.
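A minimal sketch of the driver/stub idea in Python, following the Sales Order example above; all names and the fixed discount value are assumptions for illustration.

```python
# Unit under test (hypothetical): prints a sales order using a discount calculator.
def print_sales_order(order, discount_calculator):
    discount = discount_calculator(order["amount"])
    total = order["amount"] - discount
    return f"Order {order['id']}: amount={order['amount']}, discount={discount}, total={total}"

# Stub: stands in for the complex discount-calculation unit and
# simply returns a fixed discount that is sufficient for testing.
def discount_stub(amount):
    return 10.0

# Driver: a specially written "main" that builds hardcoded test data,
# feeds it to the unit under test, and displays the result for comparison.
def driver():
    test_order = {"id": "SO-001", "amount": 100.0}   # hardcoded sales order record
    result = print_sales_order(test_order, discount_stub)
    print("actual  :", result)
    print("expected:", "Order SO-001: amount=100.0, discount=10.0, total=90.0")

if __name__ == "__main__":
    driver()
```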

BLACK BOX TESTING:


Also known as behavioral, functional, opaque-box, closed-box or concrete box testing. Black box testing treats the system as a "black box", so it does not explicitly use knowledge of the internal structure; in other words, the test engineer need not know the internal workings of the black box. It focuses on the functionality of the module. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. Black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled.

Black Box Testing Techniques

Error Guessing
Error guessing comes with experience with the technology and the project. Error guessing is the art of guessing where errors may be hidden. There are no specific tools or techniques for this; you write test cases depending on the situation, either when reading the functional documents or when you are testing and find an error that you have not documented.

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. The input domain is divided into classes or groups of data; these classes are known as equivalence classes, and the process of making equivalence classes is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions. In equivalence partitioning, a test case is designed so as to uncover a group or class of error, thereby reducing the number of test cases needed. The general method followed is:
Identify the entire input data space of the unit under test.
Partition this input space into different classes.
Select a data element from each class and execute the unit.
Using this input, check that the output is as expected.
Equivalence classes may be defined according to the following guidelines:
If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
If an input condition is Boolean, then one valid and one invalid equivalence class are defined.

Boundary Value Analysis
It is based on the assumption that the developer is more likely to make mistakes when dealing with special cases at the boundary of an equivalence class. Boundary Value Analysis (BVA) is a test data selection technique (a functional testing technique) where the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that if a system works correctly for these special values, then it will work correctly for all values in between. There are two types of boundary:
Upper boundary value
Lower boundary value
Test min, min-1, max, max+1 and typical values for these boundaries. It has been observed that programs that work correctly for a set of values in an equivalence class fail on some special values. These values often lie on the boundary of the equivalence class, and the boundary value of each equivalence class should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output that lies at the boundary of a class of output data. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. BVA guidelines include:

For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b.
If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
Select a data element from the boundary, execute the unit with it as input, and check that the output is as expected.

An input condition may be a range, a value, a set or a Boolean.

Cause Effect Graphing
Cause-effect graphing is a test case design approach that offers a concise depiction of logical conditions and associated actions. The approach has four stages:
Causes (input conditions) and effects (actions) are listed for a module.
A cause-effect graph is created.
The graph is converted into a decision table.
Decision table rules are converted into test cases.
It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications. Cause-effect graphing addresses the question of what causes what: it is a graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases. As many causes and effects as possible should be listed. Once the cause-effect graph has been constructed, a decision table is created by tracing back through the graph to determine the combinations of causes that result in each effect. The decision table is then converted into test cases with which the module is tested.

Comparison Testing
There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. In these situations redundant software and hardware are often used to ensure continuing functionality. When redundant software is produced, separate software engineering teams produce independent versions of an application from the same specification. These independent versions form the basis of a black box testing technique called comparison testing or back-to-back testing: each version is tested with the same test data to ensure they all produce the same output. Other black box testing techniques are also performed on the separate versions; if they produce the same output they are assumed to be identical, and if not, they are examined further.
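Referring back to the equivalence partitioning and boundary value analysis guidelines above, the following sketch (not from the original text) derives test values for a hypothetical input field that accepts an integer age from 18 to 60; the field and its limits are assumptions for illustration.

```python
# Hypothetical unit under test: validates an age field that must be 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one valid class and two invalid classes for the range.
equivalence_classes = {
    "valid   (18..60)": 35,   # representative value from the valid class
    "invalid (< 18)  ": 5,    # representative value below the range
    "invalid (> 60)  ": 70,   # representative value above the range
}

# Boundary value analysis: min, min-1, max, max+1 plus a typical value.
boundary_values = [17, 18, 19, 35, 59, 60, 61]

for label, value in equivalence_classes.items():
    print(f"EP  {label}: age={value:3d} -> valid={is_valid_age(value)}")

for value in boundary_values:
    print(f"BVA age={value:3d} -> valid={is_valid_age(value)}")
```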

Black Box Testing Types

Functional Testing:
Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. The objective of this test is to ensure that each element of the application meets the functional requirements of the business as outlined. In functional testing, we treat the program, or any component of it, as a function (whose inner workings we may not be able to see) and test the function by giving it inputs and comparing its outputs to expected results. This stage includes validation testing, which is intensive testing of the new front-end fields and screens: Windows GUI standards; valid, invalid and limit data input; screen and field look and appearance; and overall consistency with the rest of the application.

Smoke Testing (sanity testing, dry run, skim):
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. The system must be in a 'sane' enough condition to warrant further testing in its current state. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project on a frequent basis. The smoke test should exercise the entire system from end to end. Smoke testing covers all the functionality in less time to make sure that the application works fine under normal conditions; it verifies the major functionality at a high level in order to determine if further testing is possible. Smoke test scenarios should emphasize breadth more than depth: all components should be touched, and every major feature should be tested briefly. If the smoke test fails, the build is returned to the developers un-tested.

Exploratory Testing (ad-hoc, monkey, guerrilla):
Exploratory tests are categorized under black box tests and are aimed at testing in conditions where sufficient time is not available for testing or proper documentation is not available. Exploratory testing is testing while exploring: when you have no idea how the application works, exploring the application with the intent of finding errors can be termed exploratory testing. The following can be used to perform exploratory testing:
Learn the application.
Learn the business which the application addresses.
Learn, to the maximum extent, the technology on which the application has been designed.
Learn how to test.
Plan and design tests as per the learning.

Exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: the outcome of this test influences the design of the next test.

Regression Testing
Rerunning test cases which a program has previously executed correctly, in order to detect errors spawned by changes or corrections made during software development and maintenance. It ensures that changes have not propagated unintended side effects. Regression may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. Most of the time the testing team is asked to check last-minute changes in the code just before making a release to the client; in this situation the testing team needs to check only the affected areas. So, in short, for regression testing the testing team should get input from the development team about the nature and amount of change in the fix, so that the testing team can first check the fix and then the side effects of the fix. The regression test suite contains different classes of test cases:
Regional Regression Testing: tests that focus on the software components that have been changed.
Full Regression Testing: tests that exercise all software functions.

System Testing:
System testing concentrates on testing the complete system with a variety of techniques and methods. System testing comes into the picture after the unit and integration tests. System testing is a series of different tests whose main aim is to fully exercise the computer-based system. Although each test has a different role, all should verify that all system elements have been properly integrated and perform their allocated functions. Various types of system testing are:

Compatibility Testing (portability testing):
Compatibility testing concentrates on testing whether the given application works well with third-party tools, software or hardware platforms: testing whether the system is compatible with other systems with which it should communicate, and testing how well the software performs in a particular hardware/software/operating system/network environment. For example, suppose you have developed a web application. The major compatibility issue is that the web site should work well in various browsers. Similarly, when you develop an application on one platform, you need to check whether it works on other operating systems as well. This is the main goal of compatibility testing.

Compatibility testing is very crucial to organizations developing their own products. The products have to be checked for compliance with third-party tools, hardware and software platforms, including those of competitors. For example:
OS: Windows, Linux, Macintosh, etc.
Browser: IE (4.0, 5.01, 5.5, 6.0, 7.0 beta), Netscape (4.01-4.08, 4.5, 4.51, 4.7, 8.0), Opera, Mozilla, etc.

Recovery Testing:
Testing aimed at verifying the system's ability to recover from varying degrees of failure: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, data recovery and restart should be evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits. In certain cases, a system needs to be fault-tolerant; in other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.

Security Testing:
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration. Any computer-based system that manages sensitive information or performs operations that can improperly harm individuals is a target for improper or illegal penetration.

Usability Testing (UI testing):
Usability is the degree to which a user can easily learn and use a product to achieve a goal; usability testing tests the ease with which users can learn and use a product. It is the system testing which attempts to find any human-factor problems. A simpler description is testing the software from a user's point of view. Essentially it means testing software to prove/ensure that it is user-friendly, as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardization, etc. Tests are designed to evaluate the machine/user interface: are the communication devices designed in a manner such that information is displayed in an understandable fashion, enabling the operator to correctly interact with the system? User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

---------------------------------------------------------------------------------------------
Client Server Architecture
A client is defined as a requester of services and a server is defined as the provider of services. A single machine can be both a client and a server depending on the software configuration. The client/server architecture reduces network traffic by providing a query response rather than total file transfer, and it improves multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or structured query language (SQL) statements are typically used to communicate between the client and server.

A client/server network is an architecture in which each computer or process on the network is either a client or a server. Servers are powerful computers or processes dedicated to managing disk drives (file servers), printers (print servers), or network traffic (network servers). Clients are PCs or workstations on which users run applications. Clients rely on servers for resources such as files, devices, and even processing power.

In the simplest sense, the client and server can be defined as follows:
A client is an individual user's computer or a user application that does a certain amount of processing on its own. It also sends and receives requests to and from one or more servers for other processing and/or data.
A server consists of one or more computers that receive and process requests from one or more client machines. A server is typically designed with some redundancy in power, network, computing and file storage. However, a machine with dual processors is not necessarily a server, and an individual workstation can function as a server.

Although client/server in its simplest form is two-tier (server and client), there are newer, more powerful architectures that are three-tier (where the application logic lives in the middle tier, separated from the data and the user interface) or even n-tier (where there are several middle-tier components within a single business transaction). Sometimes client/server is referred to as distributed computing; they share the same basic concepts.

A Thin Client is a computer (client) in a client-server architecture network which has little or no application logic, so it depends primarily on the central server for processing activities. The word "thin" refers to the small boot image which such clients typically require - perhaps no more than is required to connect to a network and start up a dedicated web browser. A thin client does most of its processing on a central server, with as little hardware and software as possible at the user's location and as much as possible at a centrally managed site.

Thick/Fat Client
A thick or fat client does as much processing as possible and passes only data required for communications and archival storage to the server. In computing, a Fat Client (also known as a Rich Client) is a term from client-server architecture for a client that performs the bulk of the data processing operations. The data itself is stored on the server.

Two Tier Architecture:
Two tier refers to client/server architectures in which the user interface runs on the client and the database is stored on the server. The actual application logic can run on either the client or the server. Two tier architectures consist of three components distributed in two layers: the client (requester of services) and the server (provider of services). The three components are:
1. User System Interface (such as session, text input, dialog, and display management services)
2. Processing Management (such as process development, process enactment, process monitoring, and process resource services)
3. Database Management (such as data and file services)
The two tier design allocates the user system interface exclusively to the client. It places database management on the server and splits the processing management between client and server, creating two layers. The figure depicts the two tier software architecture.
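As a minimal sketch of the two tier flow (not from the original text), a client might talk to the database management server through a call-level interface by sending SQL statements; the driver, connection and table used here are assumptions for illustration.

```python
# Hypothetical two tier client: the user system interface runs here,
# while database management runs on a separate server.
import sqlite3  # stand-in for a real client/server database driver

def fetch_pending_orders(connection):
    # The client sends an SQL statement; the database server executes it
    # and returns only the query result, not the whole file.
    cursor = connection.execute(
        "SELECT id, customer, amount FROM orders WHERE status = ?", ("PENDING",)
    )
    return cursor.fetchall()

if __name__ == "__main__":
    # In a real deployment this would be a network connection to the DB server.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, status TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'Acme', 250.0, 'PENDING')")
    for row in fetch_pending_orders(conn):
        print(row)   # user system interface: display the result to the user
```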

In general, the user system interface client invokes services from the database management server. In many two tier designs, most of the application logic processing is in the client environment. The database management server usually provides the portion of the processing related to accessing data (often implemented in stored procedures). Clients commonly communicate with the server through SQL statements or a call-level interface. It should be noted that connectivity between tiers can be dynamically changed depending upon the user's request for data and services. It is possible for a server to function as a client to a different server, in a hierarchical client/server architecture; this is known as a chained two tier architecture design.

Three Tier Architecture:
A newer client/server architecture, called a three-tier architecture, introduces a middle tier for the application logic. Three tier refers to a special type of client/server architecture consisting of three well-defined and separate processes, each running on a different platform:
The user interface, which runs on the user's computer (the client).
The functional modules that actually process data. This middle tier runs on a server and is often called the application server.
A database management system (DBMS) that stores the data required by the middle tier. This tier runs on a second server called the database server.

In the three tier architecture, a middle tier is added between the user system interface client environment and the database management server environment. There are a variety of ways of implementing this middle tier, such as transaction processing monitors, message servers, or application servers. The middle tier can perform queuing, application execution, and database staging. For example, if the middle tier provides queuing, the client can deliver its request to the middle layer and disengage, because the middle tier will access the data and return the answer to the client. In addition, the middle layer adds scheduling and prioritization for work in progress. The three tier client/server architecture has been shown to improve performance for groups with a large number of users (in the thousands) and improves flexibility when compared to the two tier approach.

Multi-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which an application is executed by more than one distinct software agent. For example, an application that uses middleware to service data requests between a user and a database employs a multi-tier architecture. The most widespread use of "multi-tier architecture" refers to three-tier architecture. An n-tier architecture uses several "tiers" of computers (servers) to interpret requests and transfer data between one place and another. The 0th tier is at the source of the data. Each tier is completely independent of all the other tiers, except for those immediately above and below it: the nth tier only has to know how to handle a request from the (n+1)th tier, how to forward that request on to the (n-1)th tier (if there is one), and how to handle the results of the request.

---------------------------------------------------------------------------------------------
Performance Testing:
Performance testing of a web site is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the Response Time, Throughput, and Utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low response time, high throughput, and low utilization. Typical performance testing tools include e-LOAD, LoadRunner, Astra LoadTest, PerformanceStudio, QALoad, SilkPerformer and WebLoad.

Response Time
Response Time is the delay experienced when a request is made to the server and the server's response to the client is received. It is usually measured in units of time, such as seconds or milliseconds.

Figure: typical characteristics of latency versus user load.

Network response time refers to the time it takes for data to travel from one server to another. Application response time is the time required for data to be processed within a server. Figure 2 shows the different response times in the entire process of a typical web request:

Total Response Time = (N1 + N2 + N3 + N4) + (A1 + A2 + A3)

where Nx represents the network response time and Ax represents the application response time. In general, the response time is mainly constrained by N1 and N4.

Throughput
Throughput refers to the number of client requests processed within a certain unit of time. Typically, the unit of measurement is requests per second or pages per second. From a marketing perspective, throughput may also be measured in terms of visitors per day or page views per day.

Figure: typical characteristics of throughput versus user load.

Utilization
Utilization refers to the usage level of different system resources, such as the server's CPU(s), memory, network bandwidth, and so forth. It is usually measured as a percentage of the maximum available level of the specific resource. Utilization versus user load for a web server typically produces a curve, as shown in the figure.

Figure: typical characteristics of utilization versus user load.
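A minimal sketch (not from the original text) of how response time and throughput might be measured while simulating concurrent virtual users; the URL, user counts and think time are assumptions for illustration.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.test/"   # hypothetical application under test
VIRTUAL_USERS = 10             # simulated concurrent users
REQUESTS_PER_USER = 5
THINK_TIME = 0.5               # seconds a virtual user "thinks" between requests

def virtual_user():
    """One virtual user: issue requests and record each response time."""
    samples = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except Exception:
            pass                        # a failed request still counts as a sample
        samples.append(time.time() - start)
        time.sleep(THINK_TIME)
    return samples

if __name__ == "__main__":
    wall_start = time.time()
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        all_samples = [s for user in pool.map(lambda _: virtual_user(),
                                              range(VIRTUAL_USERS)) for s in user]
    elapsed = time.time() - wall_start
    print(f"average response time: {sum(all_samples) / len(all_samples):.3f} s")
    print(f"throughput: {len(all_samples) / elapsed:.1f} requests/second")
```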

Types of performance testing:
Stress testing
Load testing
Volume testing
Scalability testing

Load testing
Load testing is a much-used industry term for the effort of performance testing. Here load means the number of users or the traffic for the system. Load testing is defined as testing to determine whether the system is capable of handling the anticipated number of users or not. In load testing, virtual users are simulated to exhibit real user behavior as closely as possible; even user think time, such as how long users take to think before inputting data, is emulated. It is carried out to establish whether the system performs well for the specified limit of load. For example, say an online shopping application is anticipating 1000 concurrent user hits at the peak period, and the peak period is expected to last 12 hours. Then the system is load tested with 1000 virtual users for 12 hours. These kinds of tests are carried out in levels: first 1 user, then 50 users, 100 users, 250 users, 500 users and so on until the anticipated limit is reached. The testing effort is closed at exactly 1000 concurrent users. The objective of load testing is to check whether the system performs well for the specified load. The system may be capable of accommodating more than 1000 concurrent users, but validating that is not within the scope of load testing; no attempt is made to determine how many more concurrent users the system is capable of servicing.

Stress testing
Stress testing is another industry term for performance testing. Though load testing and stress testing are used synonymously for performance-related efforts, their goals are different. Unlike load testing, where testing is conducted for a specified number of users, stress testing is conducted for a number of concurrent users beyond the specified limit.

The objective is to identify the maximum number of users the system can handle before breaking down or degrading drastically. Since the aim is to put more stress on the system, the user's think time is ignored and the system is exposed to excess load. Taking the same online shopping example, stress testing determines the maximum number of concurrent users the online system can service, which may be beyond 1000 users (the specified limit). However, there is a possibility that the maximum load that can be handled by the system is found to be the same as the anticipated limit.

Volume Testing
Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or could involve database updates and/or data retrieval. Volume testing seeks to verify the physical and logical limits of a system's capacity and to ascertain whether such limits are acceptable to meet the projected capacity of the organisation's business processing. Volume testing aims to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers and across disk partitions on one server.

Scalability testing
We perform scalability testing to determine how effectively a web site will expand to accommodate an increasing load. Scalability testing allows you to plan web site capacity improvements as your business grows and to anticipate problems that could cost you revenue down the line. Scalability testing also reveals when your site cannot maintain good performance at higher usage levels, even with increased capacity.

---------------------------------------------------------------------------------------------
Installation Testing
Installation testing is often the most under-tested area in testing. This type of testing is performed to ensure that all installed features and options function properly. It is also performed to verify that all necessary components of the application are, indeed, installed. Installation testing should take care of the following points:
Check whether, while installing, the product checks for dependent software/patches, say Service Pack 3.
The product should check for the version of the same product on the target machine; say, the previous version should not be installed over the newer version.
The installer should give a default installation path, say C:\programs\.
The installer should allow the user to install at a location other than the default installation path.
Check if the product can be installed over the network.
Installation should start automatically when the CD is inserted.
The installer should give the Remove/Repair options.
Try to install the software without administrative privileges (login as guest).
Try installing on different operating systems.
Try installing on a system having a non-compliant configuration, such as less memory/RAM/HDD.

We also test all of the user setup options (full, typical, and custom), navigational buttons (Next, Back, Cancel, etc.), and user input fields to ensure that they function properly and yield the expected result. The uninstallation of the product also needs to be tested to ensure that all data, executables, and DLL files are removed. The uninstallation of the application is tested using the DOS command line, the Add/Remove Programs menu, and through the manual deletion of files.

Alpha Testing
Alpha testing happens at the development site just before the roll-out of the application to the customer. Alpha tests are conducted replicating the live environment where the application will be installed and running. This is a software prototype stage when the software is first available to run: the software has the core functionality in it, but complete functionality is not aimed at. It is able to accept inputs and give outputs, and usually the most used functionality (parts of code) is developed more fully. The test is conducted at the developer's site only. In a software development cycle, depending on the functionality, the number of alpha phases required is laid down in the project plan itself. During this stage the testing is not a thorough one, since only the prototype of the software is available: basic installation/uninstallation tests and the completed core functionality are tested. The functionality-complete areas of the alpha stage are taken from the project plan document.

Acceptance Testing (UAT or Beta testing)
Also called beta testing, application testing, and end user testing, this is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially. It is the formal means by which we ensure that the new system or process does actually meet the essential user requirements. Each module to be implemented will be subject to one or more User Acceptance Tests (UAT) before being signed off as meeting user needs.

Acceptance testing allows customers to ensure that the system meets their business requirements. In fact, depending on how your other tests were performed, this final test may not always be necessary. If the customers and users participated in system tests, such as requirements testing and usability testing, they may not need to perform a formal acceptance test. However, this additional test will probably be required for the customer to give final approval for the system. The actual acceptance test follows the general approach of the Acceptance Test Plan. After the test is completed, the customer either accepts the system or identifies further changes that are required. After these subsequent changes are completed,

either the entire test is performed again or just those portions in question are retested. Results of these tests will allow both the customers and the developers to be confident that the system will work as intended.

Testing Life cycle

Diagram: the testing life cycle flows from the CRS through a Requirement Study (producing a Requirement Checklist) to the Functional Specifications/SRS; from these, the Test Plan/Strategy is prepared (identify scenarios, design test cases, write test cases using techniques, review, approval), a Traceability Matrix is built, test cases are executed (with reviews and defect tracking), and the cycle ends with a retrospective. Reviews along the way may take the form of walkthroughs or inspections.

TEST PLAN/STRATEGY:
1. Introduction / Overview
This gives a brief introduction of the project and its functionality. It also defines the purpose of this document; the target audience is the testing team.

1.1 Purpose and Scope
The purpose of this test plan is to outline the plans for conducting the tests on this project. The objective is to ensure that all necessary tests are identified, adequately staffed, scheduled, conducted, and monitored. This document does not include the detailed test cases and expected results; the detailed test cases will be provided in a separate document. This plan covers the testing phase of the software development life cycle. Irrespective of the life cycle model chosen, this plan becomes applicable during the testing phase of the project.

1.2 Not in Scope
The main test types that will not be performed for this release are defined.

1.3 Reference Documents
All the documents which have been referred to in preparing the test plan are listed, and the path of each document is also given.

1.4 Definitions and Acronyms
All the definitions and abbreviations of the terms that will be used in the document are defined.

2. Test Strategy
This sets the scope of system testing, the overall strategy to be adopted, the activities to be completed, the general resources required and the methods and processes to be used to test the release. It also details the activities, dependencies and effort required to conduct the system test.

3. Types of Testing
This section defines the types of testing that the software test team will implement in this project, dependent on the type of application under test and the requirements/needs for the application. The team's project plan will reflect this accordingly.

4. Roles and Responsibilities
This is a list of possible tasks with the possible roles and responsibilities of each participant. The roles and responsibilities of the test leader, individual testers and project manager are to be clearly defined at a project level in this section. This may not have names associated, but the role has to be very clearly defined. One or more participants will be Responsible for completing each task; each task has ONE participant who is Accountable; some participants will be Consulted before the task is completed and some will be Informed after the task is completed. The number of participants depends on the size of the project. The review and approval mechanism must be stated here for test plans and other test documents. We also have to state who reviews the test cases and test records and who approves them. The documents may go through a series of reviews or multiple approvals, and these have to be mentioned here.

5. Test Schedule
A detailed time schedule for preparing all the documents related to testing, and the time required to execute and analyze the results, is prepared in detail in a spreadsheet. This document clearly mentions the start date and end date for a particular action and the person responsible for it.

6. Documentation/Deliverables
All the deliverables, such as Test Cases, Defect Report, Test Summary Report, Defect Analysis Chart and Traceability Matrix, will be the output of the test plan. The documents pertaining to the entire testing process will be listed, for example:

Document          | Location | Document name/Number
Test Plan         |          |
Test Cases        |          |
Validation Report |          |

7. Test Environment/Requirements
7.1 Hardware
The Quality Assurance group will have control during testing of one or more application/database/web server(s), separate from any used by non-test members of the project team, along with any needed firewall/proxy/load balancing hardware. The server(s) will be set up to either duplicate or mimic the targeted production environment.
7.2 Software
The Quality Assurance group will use the targeted user operating systems, browsers, databases, and related applications, along with the QA software test tools necessary to complete all tests.
7.3 Database
The Quality Assurance group will have control during testing of one or more databases separate from any used by non-test members of the project team. The database(s) will be set up to either duplicate or mimic the targeted production environment.
7.4 Operating Systems
All the necessary operating systems defined in the requirement document are listed, and testing is carried out on these operating systems.
7.5 Browsers
All the browsers on which the application has to be tested are defined.

8. Tools
The various tools to be used in the entire testing process are listed, for example:
Activity            | Tool
Defect Tracking     |
Regression Testing  |
Performance Testing |

9. Entry/Exit Criteria
9.1 Entry Criteria
The criteria specified by the system test controller should be fulfilled before system test can commence. In the event that any criterion has not been achieved, the system test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.
All developed code must be unit tested; unit and integration testing must be completed and signed off by the development team.
System test plans must be signed off by the Business Analyst and Test Controller.
All human resources must be assigned and in place.
All test hardware and environments must be in place, and free for system test use.
The acceptance tests must be completed, with a pass rate of not less than 80%.

9.2 Exit Criteria
The criteria detailed below must be achieved before the Phase 1 software can be recommended for promotion to Operations Acceptance status:
All high-priority errors from system test must be fixed and tested.
If any medium- or low-priority errors are outstanding, the implementation risk must be signed off as acceptable by the Business Analyst and Business Expert.
The Business Acceptance Test must be signed off by the Business Expert.

9.3 Acceptance/Rejection Criteria
Each test case contains expected results. The actual results must agree with the expected results. When the actual results differ from the expected results, the test team generates a defect report using the bug tracking system. The defect report is submitted to the software developers. The test team assists the developers by documenting test case steps, testing tool procedures, baseline data and test data. They can correct any expected results that differ from actual results if the expected results were incorrect due to training or misunderstanding by the test team.

9.4 Suspending/Resuming Criteria
When developers correct the associated software, they resubmit the software to the test team to resume testing. The test team re-tests the system and evaluates the results based on the acceptance/rejection criteria previously developed.

10. Training
Any specific training that has to be given pertaining to the application has to be defined.

11. Risks and Mitigation
Any risks that will affect the testing process must be listed along with their mitigation. By documenting the risks in this document, we can anticipate their occurrence well ahead of time and then proactively prevent them from occurring. Sample risks are dependency on completion of coding done by sub-contractors, capability of testing tools, etc.

12. Test Report/Evaluation
All test results after execution will be reported in the Traceability Matrix, Test Cases, Test Defect Logs, Test Incident/Defect Reports, and a Change Control Log when and where appropriate.

12.1 Pass/Fail Criteria
In order for the software quality assurance test team to begin testing, the project must be at an established phase or beta so that one or more complete test cases can be executed.

12.2 Results Analysis
Analysis will be based on the pass-to-fail ratio across all test cases within the test matrix. The software quality assurance test individual in charge of the test project will be the person making the final test evaluation.

13. Error Management/Configuration Management
During system test, errors will be recorded as they are detected on Error Report forms. These forms will be input into the Error Management System each evening with status "Error Raised" or "Query Raised". The Error Review Team will meet to review and prioritise the defects, and assign them or drop them as appropriate. This section defines how and when errors will be managed. How the configuration of each release and all the documents will be maintained in a configuration management tool such as VSS or CVS also has to be defined.

TEST CASE:
A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior for a given set of inputs.
Test Case Report
Project Information: Project ID | Project Name | Project Manager | Test Manager/Lead | Test Start Date | Test End Date

Change Record for Test Cases: Version | Description of Changes | Date | Modified By | Remarks

TEST CASE
SRS Reference | Test Case ID | Test Case Description | Test Input Data | Expected Results | Actual Result
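For illustration, one way the columns above could be captured as a record in an automated test log. The field names follow the template; the values and IDs are hypothetical.

from dataclasses import dataclass

@dataclass
class TestCase:
    srs_reference: str
    test_case_id: str
    description: str
    test_input_data: str
    expected_result: str
    actual_result: str = ""   # filled in after execution

tc = TestCase("SRS-4.2", "TC_LOGIN_001",
              "Verify login with a valid user id and password",
              "user id = alice, password = secret",
              "User is taken to the home page")
tc.actual_result = "User is taken to the home page"
print("PASS" if tc.actual_result == tc.expected_result else "FAIL")   # PASS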

TRACEABILITY MATRIX:
1. Overview
1.1 Scope
The traceability matrix is a table used to trace project life cycle activities and work products to the project requirements. The matrix establishes a thread that traces requirements from identification through implementation. Document the scope of the requirements traceability matrix in this section.
1.2 Responsibilities
The traceability matrix will be maintained by the project manager or designee. However, input to the table may be required from other team members. Document the responsibilities in this section.
2. Requirements Traceability Matrix
2.1 Description of Matrix Fields
Develop a matrix to trace the requirements back to the project objectives identified in the Project Plan and forward through the remainder of the project life cycle stages. Place a copy of the matrix in the Project File. Expand the matrix in each stage to show traceability of work products to the requirements and vice versa.
2.2 Requirements Traceability Matrix
Reqmt ID | Description | Reqmt. Source | Components Implementing the Reqmt. (Program/Module) | Design Specification ID | Test Case No. & Date | Release | Remarks
Requirement1 | | | | | | |
Requirement2 | | | | | | |
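As a rough sketch only, the matrix above can be kept as a simple mapping that is expanded at each stage, which also makes forward and backward tracing easy to automate. The requirement, module, and test case identifiers below are hypothetical.

# Each requirement traces forward to its design, implementation and test artifacts.
matrix = {
    "REQ-001": {"description": "User can log in",
                "source": "CRS 2.1",
                "module": "auth",
                "design_spec_id": "DS-10",
                "test_cases": ["TC_LOGIN_001", "TC_LOGIN_002"],
                "release": "R1.0",
                "remarks": ""},
}

# Backward trace: which requirement(s) does a given test case cover?
def requirements_for(test_case_id):
    return [req for req, row in matrix.items() if test_case_id in row["test_cases"]]

print(requirements_for("TC_LOGIN_002"))   # ['REQ-001']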

DEFECT TRACKING
Bug Life Cycle
A bug, error, or defect in a software product is any exception that can hinder the functionality of either the whole software or a part of it.
[Bug life cycle diagram: states shown are New, Incomplete/Evaluation, Open, Fixed, Re-test, Closed, and Re-Open, with resolutions Invalid, Won't Fix, Duplicate, Works for Me, Later, On Hold, and Enhancement.]

The typical lifecycle of a bug is as follows:
1. Bug is identified in the system and created in the tracking tool.
2. Bug is assigned to a developer.
3. Developer resolves the bug or clarifies it with the user.
4. Developer sets the bug to "Fixed" status and assigns it back to the original user.
5. The original user verifies that the bug has been resolved and, if so, sets the bug to "Closed" status. Only the original user who created the bug has access to "Close" the bug.

6. If the bug was not resolved to the user's satisfaction, they may assign it back to the developer with a description and the status Re-Open (by adding a new detail). If this occurs, the bug returns to step 2 above.
It is important to note that throughout the lifecycle of a bug, it should be assigned to someone. The system will allow a bug to be unassigned, but the usage of this feature should be minimal. By ensuring that a bug is always assigned to a user or a developer, system administrators will maintain a high level of accountability for all bugs submitted to the system.
STATUS RESOLUTION:
New: This bug has recently been added to the assignee's list of bugs and must be processed. Bugs in this state may be accepted and become ASSIGNED, passed on to someone else and remain NEW, or resolved and marked RESOLVED.
Assigned: This bug is not yet resolved, but is assigned to the proper person. From here bugs can be given to another person and become NEW, or resolved and become RESOLVED.
Open: Once the developer starts working on the bug, he/she changes its status to Open to indicate that he/she is working on finding a solution.
Re-test: The testing team changes the status of a bug previously marked FIXED to RE-TEST and assigns it to a tester for retesting.
Reopened: This bug was once resolved, but the resolution was deemed incorrect. For example, a WORKSFORME bug is REOPENED when more information shows up and the bug becomes reproducible. From here bugs are either marked ASSIGNED or RESOLVED.
Resolved: A resolution has been taken and is awaiting verification by QA. From here bugs are either re-opened and become REOPENED, marked VERIFIED, or closed for good and marked CLOSED.
Verified: QA has looked at the bug and the resolution and agrees that the appropriate resolution has been taken. Bugs remain in this state until the product they were reported against actually ships, at which point they become CLOSED.
Closed: The bug is considered dead and the resolution is correct. Any bug that resurfaces must do so by becoming REOPENED.
Fixed: A fix for this bug has been checked into the tree and tested by the person marking it FIXED.
Invalid: The problem described is not a bug, or not a bug in the application under test.

Won't Fix: The problem described is a bug which will never be fixed, or the problem report describes a "feature", not a bug.
Duplicate: The problem is a duplicate of an existing bug. Marking a bug as a duplicate requires the bug number of the duplicating bug and adds a comment with that number into the description field of the bug it duplicates.
Works for Me: All attempts at reproducing this bug in the current build were unsuccessful. If more information appears later, the bug can be re-opened; for now, it is simply filed.
Later: Sometimes, testing of a particular bug has to be postponed for an indefinite period. This may occur for many reasons, such as unavailability of test data or of a particular functionality. At this stage the bug is assigned the LATER status so that it can be tested in the next release.
On Hold: In some cases a particular bug is of no immediate importance and can be set aside; it is then marked with the ON HOLD status.
Enhancement: Request for enhancement.
Severity
This field describes the impact of a bug.
Blocker/Show stopper: Blocks development and/or testing work.
Critical: Crashes, loss of data, severe memory leak.
Major: Major loss of function.
Minor: Minor loss of function, or another problem where an easy workaround is present.
Trivial: Cosmetic problem such as misspelled words or misaligned text.
Priority
This field describes the importance and order in which a bug should be fixed, and is used by the programmers/engineers to prioritize their work. The available priorities range from P1 (most important) to P5 (least important). A sketch of the lifecycle as a simple state machine follows.
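The status values above form a small state machine. The sketch below is a simplification of the lifecycle described in this section, not the API of any particular bug-tracking tool; the transition table and the sample bug are illustrative.

# Simplified bug life cycle as a state machine (illustrative transitions only).
TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open", "New"},
    "Open":     {"Fixed", "Invalid", "Wont Fix", "Duplicate",
                 "Works for Me", "Later", "On Hold"},
    "Fixed":    {"Re-test"},
    "Re-test":  {"Closed", "Re-Open"},
    "Re-Open":  {"Assigned"},
    "Closed":   {"Re-Open"},
}

def move(bug, new_status):
    # Change a bug's status only if the lifecycle allows the transition.
    if new_status not in TRANSITIONS.get(bug["status"], set()):
        raise ValueError(f"Illegal transition {bug['status']} -> {new_status}")
    bug["status"] = new_status

bug = {"id": 101, "status": "New", "severity": "Major", "priority": "P2"}
for step in ("Assigned", "Open", "Fixed", "Re-test", "Closed"):
    move(bug, step)
print(bug)   # ends in the Closed state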

RETROSPECT
Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. The main goal of reviews is to find defects. Reviews are a good complement to testing and help assure quality. A few purposes of SQA reviews are as follows:

Assure the quality of deliverables before the project moves to the next stage. Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.

Types of reviews include Management Reviews, Technical Reviews, Inspections, Walkthroughs, and Audits.
Management Reviews
Management reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation.
Technical Reviews
Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines, and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification. The main objectives of Technical Reviews can be categorized as follows:
Ensure that the software conforms to the organization's standards.
Ensure that any changes in the development procedures (design, coding, testing) are implemented per the organization's pre-defined standards.
Walkthrough
A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems. The objectives of a Walkthrough can be summarized as follows:
Detect errors early.
Ensure that (re)established standards are followed.
Train and exchange technical information among the project teams that participate in the walkthrough.
Increase the quality of the project, thereby improving the morale of the team members.
The participants in Walkthroughs assume one or more of the following roles: a) Walkthrough leader b) Recorder c) Author d) Team member
Inspection
A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc. The participants in Inspections assume one or more of the following roles: a) Inspection leader b) Recorder

c) Reader d) Author e) Inspector
All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members, and individual participants may act in more than one role. Individuals holding management positions over any member of the inspection team shall not participate in the inspection. Specifically, the inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only minor rework (for example, rework that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies the rework.
c) Re-inspect. Schedule a re-inspection to verify the rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as the side effects of those changes.

Configuration Management Tools
ClearCase (Rational)
Visual SourceSafe (VSS) (Microsoft)
Concurrent Versions System (CVS)
Revision Control System (RCS)
Source Configuration Management (SCM)
Performance Testing Tools
Apache JMeter (Apache)
SilkPerformer Lite (Segue)
SilkPerformer (Segue)
LoadRunner (Mercury)
IBM Rational Performance Tester (IBM)
Functional Testing Tools
SilkTest
IBM Rational Robot
IBM Rational Functional Tester
TestComplete (Automated QA)
QuickTest Pro (QTP)
WinRunner

Test Harness
A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers).
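A tiny sketch of the idea: the stub stands in for a component that is not yet available, and the driver feeds test cases to the unit under test and checks the results. The module, function names, and tax rate below are all hypothetical.

# Illustrative test harness: a stub replaces a missing dependency, a driver runs the tests.

def tax_service_stub(amount):
    # Stub for a tax service that is not yet integrated; returns a fixed 10% rate.
    return round(amount * 0.10, 2)

def total_price(amount, tax_service):
    # Unit under test: adds tax obtained from the (possibly stubbed) service.
    return amount + tax_service(amount)

def driver():
    # Test driver: executes the test cases and reports pass/fail.
    cases = [(100.0, 110.0), (0.0, 0.0)]   # (input, expected) pairs
    for given, expected in cases:
        actual = total_price(given, tax_service_stub)
        print(f"input={given} expected={expected} actual={actual}",
              "PASS" if actual == expected else "FAIL")

driver()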
