
Test Estimation Techniques - Tools Journal

Gone are the days when software testing was an additional role and responsibility of the developer. The complexity and size of software have increased many-fold, and with that the types of testing and the complexity of testing have also increased.

Why Test Estimation?

Testing has to be done at the project level, which involves Unit Testing, Integration Testing, System Testing and User Acceptance Testing, and at the product level, which involves Load Testing, Volume Testing, Functional Testing, End-to-End Testing, Parallel Testing, Concurrent Testing, Stress Testing, Positive Testing, User Manual Testing, Deployment Testing, Sanity Testing, Regression Testing, Performance Testing, Usability Testing, Comparison Testing, Intuitive Testing, Globalization Testing, Mobile Testing, Security Testing etc. The list is endless; it is not possible for organizations to do all of them, and the types of testing to be done depend upon the product and end-user requirements.

Hence software testing has emerged as an independent domain. For the end product to be delivered on time and in compliance with the customer's requirements, it is important to estimate the effort involved in testing.

What exactly is Test Estimation?


Precisely, Test Estimation is the estimation of the testing size, testing effort, testing cost and testing schedule for a specified software testing project in a specified environment, using defined methods, tools and techniques.

The factors that influence Testing Estimates are - Manpower cost, Cost of infrastructure, Operational costs, Special software, Communication, Travel, Training, Overheads, Shared facilities, Infrastructure overheads, Cost of Tools, Cost of Electricity/welfare/overtime, Testing equipment/environment cost etc.

So, a lot of things have to be kept in mind in coming up with a reasonable Test Estimation. There are some techniques that are helpful in the Test Estimation process; I have made an attempt to list as many techniques as possible so that people can pick and choose the ones that are apt for their projects. Following are various estimation techniques that can be used for estimating testing activities:

- Percent of Development Effort
- Metrics Based Approach
- Implicit Risk Context Approach
- Iterative Approach
- Delphi Method
- Three Point Estimation
- Test Case Enumeration Based Estimate
- Task Based Estimate (WBS)
- Testing Size/Points Based Estimate
- Function Point Analysis
- Use Case Point Estimate
- Object Point or Algorithmic Technique


Percent of Development Effort

A very simple and basic estimation technique is approximating the test estimate as a percentage of the development effort. It is effectively used when not much information is available and a high-level estimate has to be made.

Some organizations use this quick method to estimate testing effort from the estimated programming effort; the testing effort is generally taken to be 30%-40% of the programming effort. This approach may or may not be useful, depending on project-to-project variations in risk, personnel, types of applications, levels of complexity and other factors.

Metrics Based Approach

The past experiences of an organization on various projects, and the associated test efforts, are tracked. Once a substantial database of past information is built up, it is used for future project planning. This is essentially 'judgement based on documented experience', and is not easy to do successfully.

Implicit Risk Context Approach

A typical approach to test estimation is for a project manager or QA manager to implicitly use risk context, in combination with past personal experience in the organization, to choose a level of resources to allocate to testing. In many organizations, the 'risk context' is assumed to be similar from one project to the next, so there is no explicit consideration of it. (Risk context might include factors such as the organization's typical software quality levels, the software's intended use, the experience level of developers and testers, etc.) This is essentially an intuitive guess based on experience.

Iterative Approach

In this approach, for large test efforts, an initial rough testing estimate is made. Once testing begins, a more refined estimate is made after a small percentage of the first estimate's work is done. As the testers gain additional project knowledge and a better understanding of issues, general software quality and risk, test plans and schedules are re-factored if necessary and new estimates are made. The cycle is repeated as appropriate as more testing is done.

Delphi Method

The original method involved a panel of experts iteratively discussing the effort required for a testing project, giving their judgements, narrowing down those judgements with each iteration, and reaching a consensus. This is a primitive approach, and with time other variants have come to light, namely:

- Policy Delphi: designed for normative and explorative use, particularly in the area of social policy and public health.


- Mini Delphi/Estimate-Talk-Estimate (ETE): based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups. When this principle is adapted for use in face-to-face meetings, it is called mini-Delphi or Estimate-Talk-Estimate.
- Wideband Delphi: the combination of the Delphi technique and the Three Point technique (discussed later). Team members give three numbers for each task. The lowest and highest estimators for each of the three numbers on each task explain their estimates. The process is repeated twice, then the averages of the expected-case estimates become the final estimate, and the average best- and worst-case numbers for each task become the range.
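A minimal sketch of the Wideband Delphi arithmetic (the function name and sample triples are illustrative; the discussion rounds themselves are a human process that code cannot capture):

```python
def wideband_delphi(task_estimates):
    """task_estimates: one (best, expected, worst) triple per estimator for a task.
    Returns the averaged expected case plus the averaged best/worst range."""
    n = len(task_estimates)
    best = sum(b for b, _, _ in task_estimates) / n
    expected = sum(e for _, e, _ in task_estimates) / n
    worst = sum(w for _, _, w in task_estimates) / n
    return expected, (best, worst)

# Three estimators' triples for one task, after the discussion rounds:
estimate, (lo, hi) = wideband_delphi([(4, 6, 10), (5, 7, 12), (3, 5, 8)])
# estimate == 6.0, range (4.0, 10.0)
```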

Three Point Estimation

It is based on statistical methods in which each testing task is broken down into sub-tasks and three estimates are made for each task. The first number is the best-case estimate, i.e., assuming everything goes well. The second number is the most-likely (expected-case) estimate. The third number is the worst-case estimate, i.e., assuming our worst fears are realized. The weighted average is the final estimate, but the best-case and worst-case estimates are documented to understand the accuracy of the estimate and to feed into the test planning and risk management processes. The formula used by this technique is:

Test Estimate = (P + 4N + E) / 6

where P = best-case (positive) estimate, N = most-likely (normal) estimate, E = worst-case (exceptional) estimate. The standard deviation for the technique is calculated as:

Standard Deviation (SD) = (E - P) / 6
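The formula can be expressed directly in code (the sample values are illustrative):

```python
def three_point_estimate(p, n, e):
    """p = best-case (positive), n = most-likely (normal), e = worst-case (exceptional)."""
    estimate = (p + 4 * n + e) / 6   # most-likely case is weighted four times
    sd = (e - p) / 6                 # spread between the extremes
    return estimate, sd

est, sd = three_point_estimate(20, 30, 50)  # est is roughly 31.7, sd is 5.0
```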


Test Case Enumeration Based Estimate

In this method, first all the test cases are enumerated, and then the testing effort required for each test case is estimated, using person-hours or person-days consistently. Best Case, Normal Case and Worst Case scenarios are used for estimating the effort needed for each test case. The Expected Effort for an individual test case is then calculated using the Beta Distribution as Expected Effort = (Best Case + Worst Case + 4 × Normal Case) / 6. These values are summed up to get the expected effort estimate for the project.
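A sketch of the enumeration approach, assuming a hypothetical list of per-test-case (best, normal, worst) estimates:

```python
def expected_effort(best, normal, worst):
    # Beta-distribution (PERT) weighting: the normal case counts four times
    return (best + 4 * normal + worst) / 6

# Hypothetical (best, normal, worst) estimates per test case, in person-hours
test_cases = [
    (1.0, 2.0, 4.0),
    (0.5, 1.0, 2.0),
    (2.0, 3.0, 6.0),
]
project_effort = sum(expected_effort(b, n, w) for b, n, w in test_cases)
```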

Task Based Estimate (WBS)

This method looks at the project from the standpoint of the tasks to be performed in executing it. For a testing project these tasks could be study of the specifications, determination of the types of tests to be conducted, determination of the test environment, estimation of the testing project's size, effort, cost and schedule, estimation of the team size, review and approval of the estimate, design of test cases, execution of tests, defect reporting etc. The estimation is done as in the Test Case Enumeration method; the only difference is that the unit of estimation is a Task instead of a Test Case.
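The same arithmetic applied to tasks instead of test cases; the task names and person-day figures below are purely illustrative:

```python
# Hypothetical work breakdown with (best, normal, worst) person-day estimates per task
wbs_tasks = {
    "study specifications": (2, 3, 5),
    "design test cases": (5, 8, 13),
    "set up test environment": (1, 2, 4),
    "execute tests and report defects": (8, 12, 20),
}

def pert(best, normal, worst):
    return (best + 4 * normal + worst) / 6

total_effort = sum(pert(*t) for t in wbs_tasks.values())  # roughly 26.3 person-days
```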


Testing Size/Points Based Estimate

A Test Point is a size measure for a software-testing project; a Test Point is equivalent to a normalized test case, i.e. one having one input and one corresponding output. The estimation is done by following these steps:

- Use an existing software development size estimate
- Convert the software size into Unadjusted Test Points (UTP) using a conversion factor based on the application type, i.e. standalone, client-server or web-based
- Compute a Composite Weight Factor (CWF):
  i. Sum up the individual weights of the selected test types
  ii. Multiply by the Application Weight
  iii. Multiply by the language weight if Unit Testing is selected
  iv. Multiply by the Tools Weight if tool usage is selected
  (The following weights could be considered: application weight, programming language weight, and weights for each type of testing, namely Unit Testing, Integration Testing, System Testing, Acceptance Testing (Positive Testing), Load Testing, Parallel Testing, Stress Testing, End-to-End Testing, Functional Testing, Negative Testing etc. All weights are project-specific.)
- Multiply the Unadjusted Test Points by the CWF to obtain the testing size in Test Points
- The Productivity Factor indicates the amount of time for a test engineer to complete the testing of one Test Point
- Testing effort in person-hours is computed by multiplying the Test Point size by the Productivity Factor


Function Point Analysis

Function Point Analysis (FPA) is based on the functions involved in the Application Under Test (AUT). The application is broken down in terms of External Inputs, External Outputs, External Inquiries, Internal Logical Files and External Interface Files, and an Unadjusted Function Point count is estimated. In addition to these five functional components there are two adjustment factors that need to be considered in Function Point Analysis.

- Functional Complexity: the first adjustment factor considers the functional complexity of each unique function, determined by the combination of data groupings and data elements of that function. The number of data elements and unique groupings are counted and compared to a complexity matrix that rates the function as low, average or high complexity. All of the functional components are analyzed in this way and added together to derive the Unadjusted Function Point count.
- Value Adjustment Factor: the Unadjusted Function Point count is multiplied by the second adjustment factor, the Value Adjustment Factor. This factor considers the system's technical and operational characteristics and is calculated by answering 14 questions. The factors are: Data Communications, Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-line Data Entry, End-User Efficiency, On-line Update, Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites, Facilitate Change.

Each of these factors is scored based on its influence on the system being counted. The resulting score can increase or decrease the Unadjusted Function Point count by up to 35%. This calculation yields the Adjusted Function Point (AFP) count. Using the AFP value, we can derive the effort and cost of the testing required.
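The Value Adjustment Factor arithmetic can be sketched as follows (the Unadjusted Function Point count and the uniform scores are illustrative):

```python
def adjusted_function_points(ufp, gsc_scores):
    """gsc_scores: the 14 general-system-characteristic ratings, each 0-5."""
    assert len(gsc_scores) == 14 and all(0 <= s <= 5 for s in gsc_scores)
    vaf = 0.65 + 0.01 * sum(gsc_scores)  # ranges from 0.65 to 1.35, i.e. +/- 35%
    return ufp * vaf

afp = adjusted_function_points(120, [3] * 14)  # VAF = 1.07, AFP is about 128.4
```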


Use Case Point Estimate

The Use Case Point Method is based on the use cases, where we calculate the unadjusted actor weights and unadjusted use case weights to determine the software testing estimate.

The formula used for this technique is:

- Unadjusted actor weights = total no. of actors (positive, negative and exceptional)
- Unadjusted use case weights = total no. of use cases
- Unadjusted use case points = unadjusted actor weights + unadjusted use case weights
- Adjusted use case points = unadjusted use case points * [0.65 + (0.01 * 50)]
- Total effort = adjusted use case points * 2
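The simplified formula above can be sketched as follows (the actor and use case counts are illustrative; the fixed 50 and the factor of 2 follow the formula given above):

```python
def use_case_point_effort(num_actors, num_use_cases,
                          tcf_sum=50, effort_per_ucp=2):
    uaw = num_actors                         # unadjusted actor weights
    uucw = num_use_cases                     # unadjusted use case weights
    uucp = uaw + uucw                        # unadjusted use case points
    aucp = uucp * (0.65 + 0.01 * tcf_sum)    # adjusted use case points
    return aucp * effort_per_ucp             # total effort

effort = use_case_point_effort(num_actors=8, num_use_cases=12)  # 20 * 1.15 * 2 = 46.0
```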

Object Point or Algorithmic Technique

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model. The model uses a basic regression formula with parameters that are derived from historical project data and current project characteristics.

There are three variants of this:

- Basic COCOMO computes software development effort (and cost) as a function of program size, expressed in estimated thousands of source lines of code (SLOC).
- Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes.
- Detailed COCOMO incorporates all characteristics of the intermediate version, with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process.

The detailed model uses different effort multipliers for each cost driver attribute; these Phase Sensitive effort multipliers are used to determine the amount of effort required to complete each phase.
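As a sketch, Basic COCOMO with the published constants for an "organic" (small, familiar) project class is a single formula; the 32 KLOC figure is illustrative:

```python
# Basic COCOMO: Effort (person-months) = a * KLOC ** b
# a = 2.4, b = 1.05 are the published constants for the "organic" project class
def basic_cocomo(kloc, a=2.4, b=1.05):
    return a * kloc ** b

effort_pm = basic_cocomo(32)  # roughly 91 person-months for a 32 KLOC organic project
```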
