
Bug Management Process and Tool

The bug-management process (Figure 7.2) will typically have seven stages.

All should be tool-supported:

1. Execute tests or reviews. Testers run system and/or unit tests and gather results. Note that it is usually pointless bureaucracy to record failures of the unit tests executed by developers. Only when a unit test fails on submission to configuration management is it worth recording. The unit tests should occur before the unit is submitted for configuration management, and the unit should never be admitted if it fails.

2. Identify and record bugs. Testers analyze the results, identify bugs, and submit a bug report (see Chapter 8 for an example). Note that everyone in a project should be free to submit bug reports. This has several advantages:
   a. All whinges can be dealt with in the project manager's time.
   b. Verbal whinges can be quickly repressed: write it up in a bug report!
   c. Everyone knows how to get a whinge answered.
   d. The project manager can direct all bugs to a deputy.
   e. Field- or customer-discovered bugs are included along with developer- and tester-discovered bugs. In this way it is more likely that patterns of bugs will emerge.

3. Review and triage bugs. Bug reports are reviewed by the test manager, the project manager, the design authority, the configuration manager, a user representative, and possibly senior developers. There are various outcomes:
   a. The bug is confirmed.
   b. The bug severity is changed because it has been misclassified. See Appendix C.
   c. The bug is assigned a priority. This simply determines the speed with which it will be fixed and is usually related to the degree to which the bug delays testing. It is not necessarily related to bug severity.
   d. The bug is confirmed but cannot be resolved by the developers: it may be in the compiler or in some COTS software which will not be changed in time. The user representative and management agree that the bug exists and must remain unfixed.
   e. The bug is confirmed but is being addressed in a bug-management plan which will become operative at a later date.
   f. The bug report is rejected for one of the following reasons:
      i. The bug has already been raised. (If bugs are raised by more than one test group then note this: it will help you estimate the number of bugs in the system. See section 18.10.4 in Chapter 18 for more on this.)
      ii. The bug report has been wrongly raised: the system is working as specified. (Note that on occasion the specification may be wrong, and the rejected bug report can become the basis of a specification change request.)

4. Assign bugs. The bug is assigned to a developer to fix.

5. Fix and unit-test bugs. The developer makes the fix, unit-tests it, updates the headers in the various code files which have been changed, and marks the bug as ready for inclusion in the next build. The configuration manager will then incorporate the changed code in the next build. The change may also require changes to user documents, and these should be reviewed too.

6. Retest bugs. After finishing the smoke and confidence tests, the system test team retests the unit-tested bug fixes and marks them as either closed (if the bug no longer appears) or open otherwise. For bug-fix-only releases this will then be followed by a regression test.

7. Manage the bug database. The bug management tool manager:

   a. Identifies all the unit-tested bug fixes to be included in the next build.
   b. Monitors the state of unit-tested bug fixes in the present build to ensure they are tested as early as possible.
   c. Generates bug logs and bug charts.
   d. Lists the rarely occurring, irreproducible, but potentially dangerous problems.
   e. Looks for patterns of, and in, bugs.
   f. Issues an updated bug metrics log.
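Most bug-management tools represent these stages as a small state machine. The Python sketch below is only an illustration of that idea: the state names and permitted transitions are assumptions chosen to mirror the seven stages above, not the vocabulary of any particular tool.

from enum import Enum, auto

class BugState(Enum):
    # Hypothetical states mirroring the seven-stage cycle described above.
    RAISED = auto()      # stage 2: bug report submitted
    TRIAGED = auto()     # stage 3: confirmed, severity and priority set
    REJECTED = auto()    # stage 3f: duplicate, or working as specified
    ASSIGNED = auto()    # stage 4: given to a developer
    FIXED = auto()       # stage 5: fixed and unit-tested, awaiting a build
    CLOSED = auto()      # stage 6: retest passed
    REOPENED = auto()    # stage 6: retest failed

# Permitted transitions; anything else should be challenged at triage.
TRANSITIONS = {
    BugState.RAISED:   {BugState.TRIAGED, BugState.REJECTED},
    BugState.TRIAGED:  {BugState.ASSIGNED},
    BugState.ASSIGNED: {BugState.FIXED},
    BugState.FIXED:    {BugState.CLOSED, BugState.REOPENED},
    BugState.REOPENED: {BugState.ASSIGNED},
}

def move(current: BugState, target: BugState) -> BugState:
    # Return the new state, or raise if the transition is not permitted.
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition {current.name} -> {target.name}")
    return target

Whatever names your tool uses, the point is that every bug report is always in exactly one state and can only leave it along an agreed route; the triage meeting and the tool manager's reports in stage 7 both depend on that discipline.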

This is a simple bug cycle, and the one you evolve will probably differ. One probable cause of such a difference is the use of the change request, whereby all bug reports authorized to be fixed have that authorization embodied in a change request form, thus uniting the flow of information from bugs with that deriving from customer- or other-inspired changes. Other changes may derive from differing terms used by the tool supplier.

7.6.1 Tracking Bug Reports

Testing is the prime means of providing objective information about both the quality of the software and the process that produced it. To save time it is essential that all bugs and accompanying fixes, as well as any enhancements, are tracked on a database. See Appendix D for details of some commercially available ones.

7.6.2 Bug-Tracking Graphs

The bug reporting database should also be able to produce graphs showing:

- The expected number of bugs found and their type
- The actual number of bugs found and their type
- The number of bugs fixed
- The length of time some bug has been in the software
- The number of outstanding bugs discovered over one month ago but not yet fixed
- The cause of each bug (see Appendix C for a classification scheme)
- Which units have the most bugs

Quite apart from the reassurance such graphs can give to senior management, they can also provide considerable reassurance to project management and staff. The sorts of graphs which can be obtained are shown in Figure 7.3, which is a chart of the bugs accumulated during a typical system testing phase over a period of twenty-five days. The curve forms an S-shape whose beginning and end show, first, the difficulty normally experienced in running tests for the first time and, second, that by the end of the cycle very few bugs are being found.
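As an illustration, a cumulative-bug chart of the kind shown in Figure 7.3 can be produced directly from the bug database. The Python sketch below assumes a hypothetical list of bugs found per day over a twenty-five-day system test (the figures are invented to show the shape of the curve, not taken from the text); a real chart would be driven from the tool's own export.

import matplotlib.pyplot as plt

# Hypothetical bugs found per day over a 25-day system test phase:
# a slow start, a steep middle, and a tail-off, giving the S-shaped
# cumulative curve described in the text.
daily_bugs = [1, 1, 2, 2, 3, 5, 7, 9, 11, 12, 13, 13, 12, 11, 9,
              7, 6, 4, 3, 3, 2, 2, 1, 1, 0]

cumulative = []
total = 0
for found in daily_bugs:
    total += found
    cumulative.append(total)

plt.plot(range(1, len(cumulative) + 1), cumulative, marker="o")
plt.xlabel("System test day")
plt.ylabel("Cumulative bugs found")
plt.title("Bugs accumulated during system testing")
plt.show()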

7.6.3 Bug Analysis

As tests are run, a quality profile of the software can be developed. Quality profiles using analyzers may be run on the software before testing, but it is as the software is being tested that the software's behavior, and the importance of the tool-derived profiles, become evident.

It is useful to distinguish between bugs and failures. A failure is only the expression of the existence of one or more bugs. A failure is what we see: for example, when a radio starts to smoke. The bug which causes the smoke lay in plugging the aerial into the power point. Bugs can be analyzed using the following headings:

1. What kind of bug? Use [Beizer 3] or Appendix C to classify the bugs.

2. Where was the bug made? In a specification? In a manual? In the code? The answer to this question helps answer several others:
   a. Which units have the greatest number of bugs?
   b. Is there any correspondence between the most bug-laden units and any other unit characteristics such as complexity (see section 18.8.4), size, or number of changes?
   c. Which units/interfaces/features should be most heavily tested?
   d. Is any (part of any) specification particularly bug-related?
   e. Which variables have provoked the bug to throw a failure?

3. When was the bug made? In which phase of the project was the bug committed? (See Table 7.1 for an illustration.) This can show:
   a. How many bugs have persisted through two or more phases.
   b. Which reviews or tests failed to find the bug.

4. Who made the bug (and using what tools)? Which groups, individuals, and environments were involved? Note that a degree of collective responsibility is required: it is pointless blaming some programmer for a bug in some unit if that unit has been reviewed. Similarly, if bugs can be traced to the use or non-use of some tool, such a relationship can provide management with new insights.

5. What was done wrong? This can be partly answered by using the classification shown in Appendix C. A fuller answer can be derived from the following questions:
   a. Why was the bug made? Answers to this question are central to improving the software project process. See section 15.7.1 as a means of analyzing the operations most likely to expose bugs. Those bugs remaining in the system for more than one phase provoke the further question: why wasn't the bug found earlier?
   b. In which phase was the bug made?
   c. How often should the bug have been caught by a test?
   d. Which test(s) should have caught it?
   e. Which review should have caught it?
   f. Which bugs were related to it?
   g. Which test group found it (if there is more than one)?

6. Which program constructs have to be changed most frequently? (See section 13.6 for more on this.)

7. What is the relationship between the bugs? (An analysis of the circumstances likely to cause the bug. See section 15.7.1 for an example.)

8. What could have been done to prevent this bug? This is an antidote to the pious hand-wringing that frequently succeeds a major problem.

9. Which test approach found (or could have found) this bug? The answers to this question help us decide:
   a. Which testing methods are most effective at finding what kind of bug.
   b. (When the system is in use) Which kinds of bugs we failed to find and which kinds of extra tests are needed.

By setting up a system to answer such questions well before testing begins, management has the information required:

- To make tactical decisions during the testing and support phases
- To make strategic decisions during the planning phase of the next project
- To justify such choices to senior and customer management

7.6.4 Bug Source Table Example

From the bug details in the bug management database you can create a pivot table in Excel, as shown in Table 7.1. This partial table shows that 45% of critical bugs are being introduced by design and 50% are being introduced by coding. Trap the data as it is created, typically at bug triage sessions where those most knowledgeable are together.

7.6.5 Bug Detection Effectiveness

A more detailed analysis, shown in Table 7.2, demonstrates that although the requirements review was fairly effective (thirty-nine bugs found out of sixty-four), the architecture design review (43/(64 + 106 − 39)) was not. On deployment sixty-six bugs were found out of a total of 374, so the bug elimination level (or overall bug removal effectiveness) so far is 82%. Note that:

- You will be unable to complete this table until the release to which it refers is removed from service. Only then can any deployment (field) bugs be totaled.
- The table assumes that at each stage the bugs discovered at a previous stage have been removed.

Calculate the bug detection effectiveness as shown in Figure 7.10.
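The arithmetic behind these effectiveness figures can be made explicit. The Python sketch below uses only the counts quoted above (64 bugs introduced at requirements, 106 at architectural design, 39 and 43 found by the respective reviews, and 66 found on deployment out of 374 in total); the remaining rows of Table 7.2 are not reproduced here.

# Counts quoted in section 7.6.5.
introduced_at_requirements = 64
introduced_at_architecture = 106
found_by_requirements_review = 39
found_by_architecture_review = 43
total_bugs_so_far = 374        # all bugs known up to and including deployment
found_on_deployment = 66       # field bugs found so far

# Detection effectiveness = bugs found at a stage / bugs present at that stage,
# assuming bugs found at earlier stages have been removed.
req_effectiveness = found_by_requirements_review / introduced_at_requirements
present_at_architecture = (introduced_at_requirements
                           + introduced_at_architecture
                           - found_by_requirements_review)
arch_effectiveness = found_by_architecture_review / present_at_architecture

# Overall bug removal effectiveness so far: the share of all known bugs
# removed before deployment.
overall = (total_bugs_so_far - found_on_deployment) / total_bugs_so_far

print(f"Requirements review effectiveness: {req_effectiveness:.0%}")   # about 61%
print(f"Architecture review effectiveness: {arch_effectiveness:.0%}")  # about 33%
print(f"Overall removal effectiveness:     {overall:.0%}")             # about 82%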

B.5 Requirements Analysis Checklist

B.5.1 Requirements Analysis General Checklist

1. Do the documents conform to standards?
2. Has an initial concepts document or some such been prepared? Have the requirements been mapped against it?
3. Is each requirement separately identifiable?
4. What checks for completeness and consistency have been made on the requirements?
5. Which requirements are the most likely to change? Why?
6. How and where are the boundaries of the system defined? Are they clear? Do they include the user classes?
7. From what H/W, communications, or application bugs must the system always recover?
8. Within the requirements specification, is there a clear and concise statement of:
   a. each safety- or mission-related function to be implemented?
   b. the information to be given to the operator at any time?
   c. the required action on each operator command, including illegal or unexpected commands?
   d. the communications requirements between the system and other equipment?
   e. the initial states for all internal variables and external interfaces?
   f. the required action on power-down and recovery?
   g. the different requirements for each phase of system operation (e.g., start-up, normal operation, shutdown)?
   h. the anticipated ranges of input variables and the required action on out-of-range variables?
   i. the required performance in terms of speed, accuracy, and precision?
   j. the constraints put on the software by the hardware (e.g., speed, memory size, word length)?
   k. internal self-checks to be carried out and the action on detection of a failure?
   l. any components to be replaced while the system is running? Is any downtime foreseen?
9. Can each requirement be reinterpreted in terms of IF ... THEN ... ELSE?

Software–Software Integration Test Review Checklist

1. Is the software–software integration strategy clear? Is the rationale for the strategy clear? Is there an integration test for every build step?
2. Are the assumptions behind the integration strategy explicit? Are they likely to change?
3. Have all automated checks been run, and have they failed to find any errors?
4. Are there any errors, such as overflow/underflow or divide by zero, that must be identified or repaired, or that require special recovery?
5. Does the bug handling contain adequate error-detection facilities allied to bug containment, recovery, or safe-shutdown procedures?
6. Are software packages reused? If so:
   a. have they been developed and tested to the same integrity level?
   b. have any modifications to them been carried out to the original standards and procedures?
   c. is there a change control procedure for the control of changes to library programs?
7. Have all mission-critical elements been distributed over redundant elements?
8. Has compatibility between the interfaces been defined?
9. Has every requirement been mapped to some system feature?

10. Does every mapping represent a sufficient transformation of some requirement with respect to the configuration management plan?
11. If graphic and prose definitions are used, is there evidence that consistency checks on both have been carried out?
12. Does each integration test cover the relevant interfaces sufficiently?
13. Does each integration test cover the relevant features which that build level represents, to give you confidence that the build is worth continuing?
14. Do you have a sufficient regression test for each integration step?
