Quality metrics: Software quality attributes and their rankings, Part two
The rankings come from observations in about 600 companies and 13,000 projects, answered Capers Jones and Olivier Bonsignour when questioned about the table in their book, The Economics of Software Quality. The table lists 121 software quality attributes and their rankings on a scale from +10 (extremely valuable) to -10 (extremely harmful). In part one of this series, we learned that use of Agile methodologies and hybrid methodologies are both ranked as 9.00, while use of the waterfall methodology is ranked as 1.00. In this second part of the three-part series, we explore more of the attributes and their rankings.

SSQ: Though Use of Agile methods ranked a 9.00, I didn't see line items for many techniques that some Agile teams practice, such as test-driven development, pair programming and continuous integration. Have there been studies to demonstrate which Agile methods are the most beneficial in achieving higher quality?

Capers Jones/Olivier Bonsignour: One caveat is that Agile is not the only method that is effective: both the Team Software Process (TSP) and the Rational Unified Process (RUP) are also successful. While we've been able to get some measurement regarding Agile methods overall, the industry around Agile is not very data-rich. We have not seen much activity in the Agile community toward quality measurement programs, especially at a level of refinement that would provide insight into such questions as variation between Agile methods. When we had our research paper on technical debt measurement accepted at this year's 10th anniversary Agile conference, the organizers struggled to figure out where to feature the talk, since there was no measurement track.

SSQ: There also aren't line items for specific Agile methodologies such as Scrum and XP, or mention of Lean or Kanban. What are your thoughts on the different Agile methodologies?

Jones/Bonsignour: It's probably too early to derive any conclusion.
Most of the Agile methodologies are less than 10 years old, and moreover, most of them have been built on the ground defined by the previous ones. There are interesting points in all the different approaches, and most of the time it's quite difficult to fully embrace one and only one methodology. Most of the companies we've been talking with are in fact using a mix of different methodologies, picking the appropriate parts of each of them. For sure, with Scrum currently being so popular as to often be synonymous with Agile, you will hear most companies say that they are using or implementing Scrum. But XP includes more technical practices, and Kanban and Lean nicely complement the good ideas and common-sense behavior the others promote. Lean is not so much a method as an approach focused on eliminating waste from the process. Lean tends to be a broader management initiative that introduces measurements to identify areas where cycle time is being degraded, identify root causes, and then measure again in a cycle of continuous improvement. As such, Structural Quality is an important component of Lean application management. But again, we are not aware of any definitive studies of their relative effectiveness. The question here is whether we should even look for one. Aren't the principal characteristics of Agile its agility and flexibility? So
shouldn't the Agile practitioners be Agile enough not to stick to a single approach and instead continue to do what the community has been doing for 10 years: quickly adopt the new improvements?

SSQ: Use of automated test tools scored an 8.00, but there are many different kinds of automated test tools. Which type was this referring to, and what were the pros and cons?

Jones/Bonsignour: For a number of reasons it is not possible to name specific tools. In the case of automated testing there are a number of vendors and a number of tools available. Tools that run test scripts, defect tracking tools, test library control tools, and several other categories are in this class.

SSQ: Line items 109 and 110, Certification of test personnel and Certification of SQA personnel, are both ranked as 8.0. What kind of certification are we talking about? There are some who would argue that hiring managers depend too heavily on certifications. What are your thoughts on that?

Jones/Bonsignour: As we all know, the software industry does not yet use licensing or board specialties as the fields of medicine and law do. However, there is evidence that test and quality personnel who care enough about their work to take courses and pass certification exams, from either non-profit groups or companies that provide such training and certifications, have overall levels of defect removal efficiency higher than those of equivalent projects with untrained and uncertified personnel.

SSQ: Line item 47 ranks Use of a formal SQA team as a 9.0, yet many who practice Agile discourage formal teams organized by domain area, due to the organizational silos that can develop. What are the reasons behind a formal SQA team enhancing quality?

Jones/Bonsignour: If you look at the nature of Agile development, it is aimed primarily at small projects below 1,000 function points, where total team size is less than a dozen people and they are all located close enough to have Scrum sessions and frequent meetings.
When you deal with projects large enough to have teams of 500 people scattered across half a dozen countries, you need the consistency provided by a software quality assurance group. Below 1,000 function points in size, quality assurance teams seldom form. Above 10,000 function points in size, quality assurance teams are much more common. In addition to a formal SQA group, if the team is enhancing or maintaining an application larger than 1,000 function points, Structural Quality issues also begin to surface. One change to a large, interconnected application can cause unexpected behavior in ways that are hard to predict without analyzing the structure of that application with each build or release cycle.
Jones/Bonsignour: For more than 40 years, customer satisfaction has had a strong correlation with the volume of defects in applications when they are released to customers. Released defect levels are a product of defect potentials and defect removal efficiency. The Agile community has not yet done a good job of measuring defect potentials, defect removal efficiency, delivered defects or customer satisfaction. The Agile groups will not achieve good customer satisfaction if defect removal efficiency is below 85%, and it will be that low unless measurements are used.

SSQ: Over the years, the way we develop software, as well as our quality metrics, has changed. An example is that Lines of code quality measures is ranked as a -5.00. We also see the trend toward the use of Agile methodologies and away from the waterfall approach to software development. What do you see as the biggest changes in how we measure software quality today as opposed to how we've measured it historically?

Jones/Bonsignour: The general level of quality understanding in the software industry is roughly equivalent to the level of medical understanding before sterile surgical procedures were introduced. Achieving good quality requires a combination of defect measurement, defect prevention, pre-test defect removal such as static analysis, testing with scientific test case design, and automated tools. But many companies, even in 2011, have no knowledge of defect prevention, bypass pre-test activities, design test cases without proper methods, and depend upon untrained developers as testers rather than using certified test personnel. Also, the inability to control Structural Quality issues hampers management visibility into the root causes of software failure and cost. This explains why the average percentage of bugs removed prior to release is only about 85% when it should be 99%.
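The relationship the authors describe, released defects as the product of defect potentials and defect removal efficiency, can be illustrated with a few lines of Python. The 85% and 99% efficiency figures come from the interview; the 1,000-defect potential is a made-up round number for illustration only:

```python
# Sketch of the authors' stated relationship:
#   delivered defects = defect potential * (1 - defect removal efficiency)
# The defect potential of 1,000 is hypothetical, chosen for easy arithmetic.

def delivered_defects(defect_potential: int, removal_efficiency: float) -> float:
    """Defects that escape to customers after pre-release removal."""
    return defect_potential * (1.0 - removal_efficiency)

potential = 1000  # hypothetical defects injected during development

print(round(delivered_defects(potential, 0.85)))  # industry average: 150 escape
print(round(delivered_defects(potential, 0.99)))  # target level: only 10 escape
```

The arithmetic makes the interview's point concrete: moving removal efficiency from 85% to 99% cuts delivered defects by a factor of 15.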
Traditionally, metrics of structural software quality counted the structural elements of a component, such as the number of decisions in the control flow. However, these metrics only suggested the possibility of a problem. Today we are basing structural measures of software quality on detecting patterns in the code that represent known violations of good architectural or coding practice. These newer measures are direct measures of quality rather than correlated measures.

SSQ: What would be the biggest takeaway you would like readers to get from your book?

Jones/Bonsignour: If you approach software quality using state-of-the-art methods, you will achieve a synergistic combination of high levels of defect removal efficiency, happier customers, better team morale, shorter development schedules, lower development costs, lower maintenance costs, and a total cost of ownership (TCO) less than 50% of that of the same kinds of projects that botch up quality. You can't manage what you don't measure. By measuring the software product, which is the output of the software development process, software executives can manage their organization and the assets on which their business depends.

This Q&A is based on The Economics of Software Quality (Jones/Bonsignour), Addison-Wesley Professional, which can be purchased by going to http://www.informit.com/title/0132582201
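The older, structural-element style of metric the authors contrast with pattern detection, counting decisions in a component's control flow, can be sketched briefly. This is a hypothetical minimal counter using Python's standard-library `ast` module, not a tool discussed in the book:

```python
import ast

# Node types treated as decision points in the control flow -- the classic
# "count the structural elements" style of metric. This node list is an
# illustrative assumption, not a standard definition.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def decision_count(source: str) -> int:
    """Count decision points in a piece of Python source code."""
    tree = ast.parse(source)
    return sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            pass
    return "done"
"""
print(decision_count(sample))  # 3: one for-loop and two ifs
```

A count like this flags complexity but, as the interview notes, only suggests the possibility of a problem; it says nothing about which specific coding or architectural rules the code violates.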