How shifting quality to day zero of the delivery lifecycle can lead to a better product at a lower total cost
Robb La Velle & Ewald Roodenrijs
Table of Contents
My grandfather's Oldsmobile
How far we've come
Another Lesson from Manufacturing
The Economic Argument for PointZERO
An Approach for Implementing a PointZERO Strategy
1. Deploy Quality-Driving Tools
2. Enable Quality through the Empowerment of the QA Team
3. Arm a Quality Program with the Industry-Leading Processes
Conclusion
About the authors
My grandfather's Oldsmobile
The day my grandfather walked into the Oldsmobile showroom and the keys were handed over to him was a momentous one. The year was 1952, the war was behind him, and a bright future with his growing family lay ahead. What better means of propelling him into that new, wide-open future than a new Super 88? But was the 88 really all that super, much less the state of quality in the entire American automotive industry? W. Edwards Deming had arrived in Japan only a few years earlier and used his military assignment assisting a Japanese census to begin training engineers in statistical process control and concepts of quality, lessons that would ultimately transform their industry and the world. But Flint, Michigan was 9,000 miles and a paradigm away from Tokyo and the burgeoning quality revolution.

The old Olds was subject to the same approach to quality that had been in place since the inception of mass production: post-production quality control. This reactive process of inspection entailed receiving each unit into the Quality Control Department off the line and running through a checklist designed to identify defects in the manufacturing process. The defective vehicles would then be staged for re-work and ultimately released for delivery.

Today's automotive manufacturing environment looks nothing like that of 1952. Developments in statistical quality control, quality in design and engineering, six sigma manufacturing processes, lean manufacturing, design simulation and other methods and tools now provide us with products that have quality built in from the beginning of their lifecycle rather than hammered on at its tail end. Yet despite the huge advances realized in the car industry, it is sad to say that the way quality is managed in today's enterprise software development projects has not progressed far beyond the methods used for the old 88.
Special cause variability, as termed by Deming, should be the target of an end-to-end quality effort. To give an example, normal variability exists when business requirements are still in a state of flux while teams collaborate on a solution; ultimately, a process of iteration and agreement stabilizes requirements and purges ambiguity. Special cause variability, however, emerges when changes to these requirements are not accurately propagated across project teams and defects are allowed to take root. As this story continues, requirements completion lags as limited access to business users constrains progress, and definitions become murkier in the sprint to meet deadlines. Design teams hamstrung by the same resource bottleneck do their utmost to turn requirements into cohesive functional designs, often without the vital insight required to fit all of the pieces of the puzzle together. Design review sessions are held as often as possible, but the difficulty of getting the right representation when needed compromises the end-to-end integrity of the solution. Developers now enter the picture, translating completed functional designs into technical specifications. But ongoing modification of requirements sends a wave of volatility through the solution landscape even as efforts are made to keep the whole entity in sync. Code and configuration commence against finalized technical specifications without the benefit of clarity on adjacent technical components; this is akin to installing a kitchen without the luxury of knowing how the plumbing will run. Hours that had been dedicated to initial unit and assembly testing are now consumed completing objects held hostage by upstream deliverables. Structural misalignments creep into the source code and go undetected by development leads.
And while this great machine is being specified, designed and built, like fitting new shocks to a car moving at high speed, quality control begins in the form of functional, performance and security testing. If all goes to plan, testing can commence against a relatively complete code base. But if stage-containment violations persist, causing a smear of deliverables across project phases rather than the clean hand-off as planned, test leads are usually asked to make do and manage the test-phase squeeze as best they can. Sometimes the team can effectively risk-assess what needs to be done, and deployment can happen without a grand explosion. But more often than not, the churn of late code against a previously stable system, compression of the testing timeline, and compromises to its scope lead to poor test effectiveness and unacceptable defect leakage into production. The most unacceptable aspect of this very typical scenario is that quality is purely of the reactive, inspective variety outlined in the Olds 88 example. As the project nears the finish
line, resources that would have been rolled off are extended and assigned to the all-hands-on-deck effort to identify and rectify as many of these embedded defects as possible. This firefighting approach to defect purging invariably adds unforeseen costs to the project budget.

Deming outlined 14 key principles for transforming business effectiveness in his book Out of the Crisis. The second of these principles is directly applicable to this problem and is the basis for this paper: to build quality into a finished good, teams must cease dependence on inspection to achieve quality; instead, they must eliminate the need for massive inspection by building quality into the product in the first place.

The arguments against transcending this state, or even considering a better way to mitigate its inevitable occurrence, are frequent and varied. As indicated before, an often-heard viewpoint in the field is that quality is something the combined team will hammer through at the end of the project. Again, the strong-arm approach sometimes works, but often it does not. Another view is that the cost of implementing lifecycle-spanning quality measures exceeds the cost of fixing errors later in production. This may be true depending on the applications in question and, of course, the nature of the production incidents. But the arguments that consistency drives quality and quality drives productivity are difficult for anyone with a background in manufacturing to dispute. Even more difficult to ignore is the simple and industry-accepted fact that the longer a defect remains nested in a solution, the more difficult (and expensive) it is to resolve.
The message for managers of enterprise software projects should be resoundingly clear: by identifying errors at the point where they occur, we can deliver better software at a lower overall cost.

The Capgemini Group, combining Capgemini and Sogeti, has been an industry leader in application quality assurance services for over 30 years and has understood this problem for some time. That experience has culminated in a quality framework designed to shift quality upstream in the software development continuum, right up to the point where projects have their inception: a "quality from day zero" approach to software delivery that the Group calls PointZERO.

The essence of PointZERO is that quality should be a primary focus from the very start of every project, and that doing so requires an orchestrated quality strategy and the people, processes and tools to enable it. But more than anything, it requires a change in thinking from a design/build/test model to one based on an end-to-end quality stream. PointZERO offers this quality stream in the form of an umbrella framework of components aligned for this purpose. It defines a holistic quality strategy, enables an overarching quality organization, and provides the individual tools and processes required to execute the quality program from day zero. Its ultimate goal is to allow businesses to deliver higher-quality software products at a lower overall cost.
Now nested in the solution, the flaw persists as technical specifications are drafted, the new order type is coded, interfaces are modified, data objects are defined and mapped, and test cases are written and executed. Not until User Acceptance Test is the design flaw identified. At that point, it is estimated that the effort to resolve the defect, including modifying impacted technical and data objects, changing supporting designs and re-writing test cases, will run into thousands of man-hours. In fact, several studies have indicated that fixing this defect during UAT will cost about 50 times what it would have cost to resolve during the design phase. Scenarios like this are anything but rare on large enterprise projects.

The cost avoidance offered by PointZERO strategies can be easily quantified by considering the average cost of fixing the same defect at various stages of the software development lifecycle and an average distribution of defects through the phases of a typical project.
Figure 1: Distribution of software development project defects

[Bar chart of defect distribution across eleven origin categories, including Requirements, Architecture, Design, Code, Unit Test, Data, Security, Web, Test case, Document and Bad-fix defects, with values of 19%, 17%, 15%, 13%, 11%, 8%, 7%, 4%, 2%, 2% and 2%. 36% of defects originate in the Requirements & Design phases.]

Source: Capers Jones. Data collected from 1984 through 2011; about 675 companies (150 clients in the Fortune 500 set); about 35 government/military groups; about 13,500 total projects; new data of about 50-75 projects per month; data collected from 24 countries.
In a study conducted by Capers Jones, a leading researcher in software engineering methodologies and Chief Scientist Emeritus of Software Productivity Research, defect data collected from 675 companies representing 13,500 individual projects indicates that 36% of defects have their origins in the Requirements and Design phases. Adding a further dimension to this analysis, a study by TRW Emeritus Professor of Software Engineering Barry Boehm assessed that the average cost of fixing a defect increases 100-fold between the initial Design phase of a project and deployment to Production. Where a defect found in that initial phase costs $140 to fix, the cost increases to $500 during Build, $1,000 during Unit Test, $2,500 during Integration Test, $4,500 during System Test, $7,000 during Operational Readiness Testing/UAT and ultimately $14,000 in Production.

The data points of these two studies enable the estimation of very realistic scenarios. Figure 2 distributes 1,000 Severity 1 & 2 defects to each of the offending phases using the Capers Jones study. If we assume that 50% of these defects are not found until System Test, applying the Boehm defect resolution costs noted above yields a resolution cost that is almost $2m higher than it would have been had the defects been detected and resolved in the phase of their origin. The defects from the early Requirements and Design phases contribute nearly 40% of these increased fix costs.
Figure 2: Model estimating cost to resolve 1,000 Severity 1 & 2 defects at their origin vs cost to resolve same defects during System Test

Defect origin             | SDLC Distribution | # Defects | Cost to fix at Source | Add'l Cost to fix 50% during SIT
1. Design defects         | 17.00%            | 170       | $23,800               | $358,700
2. Code defects           | 15.00%            | 150       | $21,000               | $316,500
3. Unit Test defects      | 13.00%            | 130       | $18,200               | $274,300
4. Data defects           | 11.00%            | 110       | $15,400               | $232,100
…                         | …                 | …         | …                     | …
10. Document defects      | 2.00%             | 20        | $2,800                | $42,200
11. Architecture defects  | 2.00%             | 20        | $2,800                | $42,200
TOTAL DEFECTS             | 100.00%           | 1,000     | $140,000              | $2,110,000
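The arithmetic behind Figure 2 can be reproduced in a few lines. The per-phase fix costs are the Boehm figures quoted above; the function name and its parameters are illustrative, a sketch of the model rather than any tooling from the framework itself:

```python
# Boehm's average cost to fix a defect, by the phase in which it is found
FIX_COST = {
    "Design": 140, "Build": 500, "Unit Test": 1_000,
    "Integration Test": 2_500, "System Test": 4_500,
    "ORT/UAT": 7_000, "Production": 14_000,
}

# The 100-fold escalation from Design to Production reported by Boehm
assert FIX_COST["Production"] == 100 * FIX_COST["Design"]

def additional_cost(n_defects, found_in="System Test", fraction=0.5):
    """Cost of fixing `fraction` of the defects in `found_in`, less the cost
    of having fixed all of them at their origin (the Design-phase rate)."""
    late = int(n_defects * fraction) * FIX_COST[found_in]
    at_source = n_defects * FIX_COST["Design"]
    return late - at_source

print(additional_cost(170))    # Design-defect row of Figure 2: 358700
print(additional_cost(1_000))  # all 1,000 defects: 2110000
```

Running the helper over the 170 Design defects reproduces the $358,700 in Figure 2's first row, and over all 1,000 defects yields the $2,110,000 total.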
Just as in the example of the nested engine component in the expensive Italian motorcycle, the argument to identify and resolve defects early, or better, prevent them altogether, is a compelling one that all IT executives should consider.
to enable the integrated management of requirements, test design & execution and defects, as well as specialty tools designed to automate testing and compress testing cycles. During the Build Phase, vendor tools provide an automated approach to capturing and quantifying the quality, complexity and size of business applications by analyzing their structural quality. Structural quality analysis tools complement a PointZERO quality strategy perfectly by analyzing all tiers of a complex application at the source code level, measuring adherence to architectural and coding standards, and providing a bottom-up view of development quality and technical debt along with software engineering advice to Application Development teams.

During the Testing Phase, more emphasis should be placed on the automated creation of standardized test cases, the checking of non-functional requirements, and the execution of full end-to-end tests; together these provide thorough insight into application quality while keeping the focus on the PointZERO strategy. In particular, the creation of standardized test cases speeds up the execution of the full Test Phase. Using models to design standard test cases enables better, faster and cheaper testing. Models are unambiguous (at least good models are), so a test suite derived from such a model consists of a predictable set of test cases, produced by a formal test design technique. A growing number of automated test design tools are now available online, and with their help a test suite can be built out of models in a matter of minutes, where manual test case specification would easily take days or more. The extra effort put into creating models during test preparation is thus won back several times over during the test specification activity.
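As a toy illustration of that idea, the following sketch shows how a predictable test suite falls out of an unambiguous model. The order workflow, its states and its actions are all invented for this example:

```python
# A hypothetical order workflow expressed as a state-transition model:
# (current state, action) -> next state. Every name here is illustrative.
MODEL = {
    ("new", "submit"): "pending",
    ("pending", "approve"): "approved",
    ("pending", "reject"): "rejected",
    ("approved", "ship"): "shipped",
}

def derive_test_cases(model):
    """One test case per transition: (start state, action, expected end state)."""
    return sorted((start, action, end) for (start, action), end in model.items())

for case in derive_test_cases(MODEL):
    print(case)
```

Each derived tuple is a test case: put the system in the start state, apply the action, and check the resulting state. Real model-based test design tools apply formal test design techniques to far richer models, but the principle of a predictable, mechanically derived suite is the same.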
and quality management, as well as experience in upstream project activities. This elevated role would have responsibility for defining the overall quality strategy, assembling the resources and assets required to implement it, and the clout to stand up to project leadership on implementation, particularly at Quality Gates as collaboration points.
Likewise, in other lifecycle phases, co-ordinated and structured use case development, model-based testing, design reviews and other quality measures help ensure the integrity of work in progress and better position these outputs as inputs into the phase that follows. While the quality asset library of each project may differ, a core of measures by development phase could consist of the following:

1. Business Case
   a. Business Case Workshops - An effective tool to enable projects to maintain focus on the desired business result. By processing opportunities and threats for the business outcomes, workshops keep the project objectives in line with the strategic goals.
2. Requirements
   a. Testability Specifications - Provide the business teams with a clearly defined format for each requirement to ensure that the required attributes are covered.
   b. Requirements Validation - Sessions constructed to enable the review of requirements across the solution landscape, including leaders from the design, build and test teams.
   c. Requirements Traceability - In order to validate coverage of all requirements in testing, requirements must be systematically mapped to test cases prior to execution.
3. Design
   a. User Story/Use Case Authoring - The creation of detailed, requirements-driven user stories enables the development of functional requirements as well as test scenarios.
   b. Evaluation (Reviews & Inspections) - Determining the difference between the actual properties and the required properties of an intermediate product and/or process in the development process.
   c. Model-Based Reviewing - Actively seeking knowledge and understanding of the subject matter, and revealing as many flaws as possible.
4. Development
   a. Test-Driven Development - The practice of developing test cases immediately after requirements have been defined and validated. These test cases then form a primary input into the coding process, such that code is written to pass the test case.
   b. Code Structure Validation - The use of code analysis tools that validate all application tiers at the source code level and measure adherence to architectural and coding standards.
5. Testing & Acceptance
   a. End-to-End Testing - A complete approach to end-to-end testing helps control quality across the software development lifecycle.
   b. Common Test Tool Platform - Utilization of a common test tool for all projects in order to maximize efficiency.
   c. Test Automation - The use of tools that automate the test design and execution process, enabling efficiencies over manual methods and increased quality through more frequent regression cycles.
   d. Model-Based Test Design - The creation of a test model that describes some of the expected behavior (usually functional) of the test object. The purpose is to review the requirements thoroughly and/or to derive test cases in whole or in part from the test model. The test models are derived from the requirements or designs.
   e. Test Infrastructure in the Cloud - A pay-per-use, cloud-enabled Test Infrastructure Service that can be accessed on demand, without the need for capital investment and large-scale testing resources, providing a comprehensive, easily accessible, flexible and low-cost service for test execution.

Capgemini's and Sogeti's concerted PointZERO quality program aggregates these methods and many others under one umbrella, providing enterprise projects with the tools, processes and metrics to implement end-to-end quality without re-inventing the wheel.
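As an illustration of the Requirements Traceability measure above (item 2c), a minimal sketch of a coverage check; the requirement and test case IDs are invented for the example:

```python
# Hypothetical traceability data: which requirements each test case exercises.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
test_case_coverage = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-003"},
}

def uncovered_requirements(reqs, coverage):
    """Requirements not mapped to any test case: gaps to close before execution."""
    covered = set().union(*coverage.values()) if coverage else set()
    return sorted(reqs - covered)

print(uncovered_requirements(requirements, test_case_coverage))  # ['REQ-002']
```

In practice this mapping lives in a requirements or test management tool rather than in code, but the check is the same: an empty result is a precondition for entering test execution.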
Most critical in this orchestration is the standardization of quality measures and the definition of exit and entry criteria at each of the stage gates throughout the lifecycle. As fundamental as this may sound for any enterprise software implementation, numerous test maturity assessments conducted across industries have shown that this is a basic quality principle that is very often neglected when the rubber hits the road.
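A minimal sketch of such a stage-gate check, where the exit criteria of one phase double as the entry criteria of the next; the metric names and thresholds are illustrative, not part of the PointZERO framework itself:

```python
# Illustrative exit criteria for a phase: reviews fully complete,
# no open Severity 1 defects.
EXIT_CRITERIA = {"reviews_complete_pct": 100, "open_sev1_defects": 0}

def gate_passes(metrics, criteria=EXIT_CRITERIA):
    # Percentages must reach their target; defect counts must not exceed it.
    return (metrics["reviews_complete_pct"] >= criteria["reviews_complete_pct"]
            and metrics["open_sev1_defects"] <= criteria["open_sev1_defects"])

print(gate_passes({"reviews_complete_pct": 100, "open_sev1_defects": 0}))  # True
print(gate_passes({"reviews_complete_pct": 90, "open_sev1_defects": 2}))   # False
```

The value of standardizing the check is that a failed gate becomes an objective, visible event rather than a negotiable one, which is exactly the stage containment the preceding sections argue for.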
Figure 4: Framework for PointZERO Quality Solution
Conclusion
As consumers, we have grown accustomed to a high level of quality in the products and services we buy. What started half a century ago with Deming has evolved and permeated both the manufacturing and services sectors of industry. We live in an era in which vendors of inferior-quality products simply do not survive. Why should the products of enterprise software development projects be any different? For those businesses with the insight to survey the total picture, quality as a weapon to reduce total cost is an opportunity definitely worth pursuing.
About the authors

Ewald Roodenrijs

Mr. Roodenrijs is a member of the business development team within Sogeti Netherlands and the global lead of the Test Cloud offerings within the Sogeti Group. As a member of the business development team, he works on test innovations such as testing clouds, model-based services, PointZERO, and the use of new media in testing. He won the Capgemini/Sogeti Innovation Award in 2011. He is also the co-author of the book TMap NEXT BDTM, a contributor to the book Seize the Cloud, the author of the book TMap NEXT Testing Clouds, a speaker at international conferences, the author of various national and international articles in expert magazines, and has created various training courses.