
UNIT-I

Software Process Assessment overview


Introduction

A software process assessment is the means whereby organizations can identify and assess their strengths, weaknesses, existing improvement activities, and key areas for further improvement. It is carried out by an internal or external software process expert who investigates and evaluates the current processes against a reference model. Based on the results of this investigation, the organisation can determine the current state of its software process, which allows for the development of action plans to facilitate future improvement.

The Role of Assessment in SPI

An increasingly popular way of starting an SPI program is to perform an assessment in order to determine the state of the organization's current software processes, to identify high-priority issues, and to obtain organizational commitment for SPI.

Why Perform Assessments?

In many cases, process assessment can help software organisations improve themselves by identifying their critical problems and establishing improvement priorities before attempting a solution. The main reasons to perform a software process assessment are:
1. To understand and determine the organisation's current software engineering practices, and to learn how the organisation works.
2. To identify strengths, major weaknesses, and key areas for software process improvement.
3. To facilitate the initiation of process improvement activities, and enrol opinion leaders in the change process.
4. To provide a framework for process improvement actions.
5. To help obtain sponsorship and support for action through following a participative approach to the assessment.

Assessment Phases
Assessments are typically conducted in six phases.

1. Selection Phase: During the first phase, an organization is identified as a candidate for assessment. The SEI contacts the organization to set up an executive-level briefing.

2. Commitment Phase: In the second phase, the organization commits to the full assessment process. An assessment agreement is signed by senior representatives of the organization and the SEI. This commitment includes the personal participation of the senior site manager, site representation on the assessment team, and agreement to take action on the assessment recommendations.

3. Preparation Phase: The third phase is devoted to preparing for the on-site assessment. An assessment team, composed of members from the SEI and the organization being assessed, receives training. In addition, the on-site assessment is planned. The assessment participants are selected and briefed about the assessment process, including the times, duration, and purpose of their participation. The questionnaire can also be filled out at this time.

4. Assessment Phase: In the fourth phase, the on-site assessment is conducted. On the first day, senior management and assessment participants are briefed as a group about the objectives and activities of the assessment. The project representatives complete the questionnaire if they have not done so previously. The resulting data and information are reviewed and analyzed by the assessment team. The team then holds discussions with each project. On the second day, the team conducts discussions with the functional area representatives or key software practitioners, who provide further insight into the software process. Over the course of the third day, the assessment team formulates findings based upon the information that has been collected on the previous days and gets feedback from the project representatives. On the last day, the findings are reviewed with the project representatives to help ensure that the assessment team understands the issues correctly. The findings are revised, if necessary, and presented to the assessment participants and senior site management. The assessment ends with formulation of the recommendations that address the findings.

5. Report Phase: The fifth phase is concerned with the final formulation and communication of the assessment findings and the specific recommendations that address those findings. The assessment team prepares a formal written report.

6. Assessment Follow-Up Phase: In the final phase, an action team composed entirely of professionals from the assessed organization is assembled and charged with formulating an action plan and facilitating its implementation. Typically, there is also some continuing support and guidance from the SEI. After approximately eighteen months, a reassessment or self-assessment is done by the organization to determine progress and to continue the software process improvement cycle.

Assessment Principles
Assessments are a challenging activity. Time is short; organizations are complex; and the software process tends to be personnel-intensive in nature. To meet these challenges, the SEI has developed the principles described in this section. They are the keys to conducting a successful assessment:
- Sponsorship
- Confidentiality
- Teamwork
- Action orientation

Sponsorship

A sponsor is an individual or group that authorizes the assessment and assumes final responsibility for it. The sponsor ensures that the assessment has legitimate backing and that software process improvement has official support and financial guarantees. The sponsor is usually the senior site manager, the person who sets the operational priorities for the organization. The sponsorship role involves the following:
- Providing authorization and support for the assessment, a responsibility which cannot be delegated.
- Being visible and personally involved in the assessment and follow-up activities.
- Assigning the resources and qualified people for planning and implementing the assessment and follow-up activities.
- Building sustaining sponsorship through middle management.
- Educating oneself and sustaining sponsors in the assessment process and software process improvement. This understanding helps sponsors make the decisions necessary to support improvement activities and secure the critical resources needed.
- Agreeing to and signing the assessment agreement.

Confidentiality

The power of an assessment is that it taps the knowledge and skills of the organization's software experts and project leaders (the assessment participants). Their accounts of the software process as it is practiced at the organization influence the results of the assessment. Because assessments depend upon the honest, open, and accurate information that comes from the assessment participants, it is vital that they feel they can speak in confidence. Confidentiality is needed at all organizational levels. No leaks can occur: not to others in the organization, not to participants' bosses, and not to the organization's chief executive.

Assessments build confidentiality by several means:
- Composite assessment team findings: individuals are not named, and projects are treated as a group (five to six projects are reviewed at one time).
- An assessment agreement that has confidentiality provisions built in.
- An assessment team training program and assessment presentations that teach confidentiality guidelines.

While many people agree to confidentiality in principle, it is difficult to enforce and maintain. Yet if confidentiality is lost, the organization runs the following risks:
- Participants will be less open and provide less information.
- The assessment team will have incomplete and, therefore, less accurate information on which to base its findings and recommendations.
- The assessment is likely to fail to meet its goals and objectives.

Teamwork

A successful assessment is a collaborative effort to identify present good practice and key areas for improvement. Teamwork occurs on several levels: within the assessment team; between the team and assessment participants; and between those involved in the assessment and the rest of the organization. One strength of the assessment process is that it is conducted as a structured study by a team of knowledgeable and experienced professionals. Team members from the organization and the SEI each make a valuable contribution. The site team members understand the organization's software process and organizational culture, and the SEI members add an independent professional viewpoint to the assessment. The training they receive together, including team-building exercises, helps them to form an effective and efficient assessment team. Assessments are based on the assumption that local software experts are in the best position to understand the organization's software process issues. With the leading in-house professionals contributing their knowledge and skills, the assessment can be a catalyst that motivates the entire organization to self-improvement.

Action Orientation

An organization which chooses to participate in an assessment must be directed toward software process improvement. Assessment findings focus on identifying problems that are key software process issues currently facing the organization. Improvement goals and expectations are set as a result of conducting an assessment. There is a risk: if no improvement actions are subsequently taken, the assessment may have a negative effect on the current situation. In the past, the local practitioners could assume that management did not entirely understand the issues and, thus, could not be expected to address them. After the assessment, this is clearly not the case. Senior management receives a written report that describes the current state of the practice, identifies key areas needing improvement, and lists a set of recommendations. If management does not then take action, the morale of the software professionals will suffer. An organization must be prepared to take action, or it should not conduct an assessment.

Software Process Framework

An assessment implies that there exists a standard or framework to measure against: an organization's software process needs to be reviewed in comparison with some vision of how software processes should be performed. Without this standard or framework, an assessment can easily become a loosely directed, intuitive exploration. With the framework, there is a basis for orderly exploration as well as a means for establishing improvement priorities. The framework gives the team a focus for working together on the key issues and recommendations.

Conducting the Assessment

This chapter discusses the sequence of activities constituting the on-site portion of the assessment. Typically, these activities require four days to complete.

a) Introductory Management Meetings

The introductory management meetings include the assessment overview and the assessment participant briefing.
Assessment Overview

The assessment begins with presentations to the senior site manager, his or her immediate staff, and the assessment participants. The main objective of this presentation is to provide an overview of the assessment process and describe its relationship to software process improvement and software process management. It is vital that senior management attend this meeting to show their commitment to the assessment and their support for software process improvement.
Assessment Participant Briefing

After the assessment overview meeting, assessment participants receive further information about the assessment, including the schedule. The objectives of this meeting are to ensure that the participants' questions are answered and to confirm that they understand their role in the assessment: where they should be, when they should be there, and what is expected of them.

b) Questionnaire Data Analysis Session

The questionnaire can be filled out either before or during the assessment. After the questionnaires have been completed, the assessment team meets in a private session to analyze the responses and determine the initial maturity level of the organization. Team members analyze questionnaire data for similarities, differences, anomalies, and misunderstandings, and they identify areas where questions need to be asked and supporting materials need to be requested. The session is private because confidentiality has been guaranteed to both the assessment participants and the individual projects. Supporting materials are important for verifying that a particular question has been answered accurately. What is important here is verifying the actual practice in an organization, that is, the software process that is used on a daily basis. The assessment team is interested in what is actually being done, not what may be written in a set of manuals. For example, one question on the questionnaire may be "Are internal software design reviews conducted?" The supporting material requested may be the review action items or minutes of any design reviews conducted on the project.

c) Project Discussions, Session I

After the questionnaire data has been analyzed, the assessment team meets with the project representatives, one project at a time. Meeting on a project-by-project basis maintains confidentiality among projects. The purpose of this session is to allow the team to get a sense of whether the questionnaire was filled out accurately and whether the maturity level of the projects seems accurate, to verify project differences and similarities, and to request supporting materials. The assessment team members listen to the representatives, asking questions about how the project does its day-to-day work of developing and maintaining software. For example, the assessment team asks about configuration management, project management, and quality control.

Assessment Team Wrap-Up Meeting

After the project discussions, the assessment team meets alone to discuss the progress of the assessment, review initial findings, and prepare for the functional area discussions.

d) Functional Area Discussions

On the second day of the assessment, the team conducts discussions with software professionals from each major functional area of the organization's software process. The overall objective of this session is to enable the assessment team to learn what the software practitioners (non-management) consider the most important software engineering issues facing their functional area or the organization as a whole.

The discussions are informal sessions in which assessment participants are asked to describe how they do their work. These sessions allow the assessment team to learn the details of the actual day-to-day software practice. Managers do not attend these discussions so that software practitioners will feel more comfortable about speaking freely.

Near the end of the session, the assessment team leader asks each functional area representative a key question: "If you could change one thing to improve the quality or productivity of your work, what would it be?" The answers to this question are important because experience has shown that the people working directly with the product (software, documentation, etc.) generally have many good ideas for improvement.

Assessment Team Wrap-Up Meeting

To conclude day two, the assessment team holds a wrap-up meeting to review the day's findings, contrast and compare them with the previous day's findings, and prepare for the next day of the assessment. Considerable time is spent in this session on reaching team consensus on the preliminary findings.

e) Project Discussions, Session II

To begin the third day of the assessment, the assessment team again meets with the project representatives, one project at a time. The purpose of these meetings is to discuss any open issues, to review the requested supporting materials, and to discuss the assessment team's preliminary findings with the project representatives. During the meeting, more issues may come up and further discussion may be needed. The intent here is to gather additional data to support and refine the findings, and to identify any other problems or issues.

f) Final Findings Formulation

After the project discussions are completed, the assessment team meets to formulate the findings and prepare the findings presentation. The findings represent the assessment team's view of the most important software process issues that currently face the organization. The findings are based on the maturity questionnaire responses, discussions with the assessment participants, and discussions among the assessment team members. The findings represent the starting point for formulating recommendations.

Extended and intense discussions are often required to reach a team consensus on the findings. Since there is no time scheduled the next day for continuing these discussions, this meeting often continues into the evening. (This is where the team-building exercises during the assessment team training can pay off.) Once the findings have been formulated, the team then prepares the findings presentation.

At the conclusion of the third day, the assessment team should have achieved a team consensus for the assessment findings and completed the preliminary findings presentation.

g) Assessment Findings Presentation

The findings presentation includes the following:

Scope of the assessment -- This includes the site or division assessed, the names of the projects assessed, and the names of the functional areas that were interviewed.

Conduct of the assessment -- This is a high-level discussion of the assessment and how it went in general. All assessment participants and support staff are thanked for their time, cooperation, and assistance.

Composite organizational status -- The software process maturity level of the organization is noted here. It is emphasized that the success of an organization is based on the action taken in response to recommendations, and not on a score.

Strengths -- Any organizational strengths are noted here (e.g., examples of good project work).

Findings -- Findings are summarized, and each finding is then discussed in detail. The consequences of each finding are pointed out, and general examples of the findings are used if appropriate.

Next steps -- The steps following an assessment are discussed: the recommendations formulation, the final report, and the action plan, along with an anticipated schedule.

Assessment Team Session

On the last day of the assessment, the assessment team leader presents the findings presentation to the assessment team. The purpose of this session is to catch any errors and fine-tune the presentation.
Project Composite Feedback

Next, the assessment team leader gives a dry run of the findings presentation to all the project representatives together, along with the assessment team. The purpose of this meeting is to hear any concerns from the project representatives, get early feedback before the final findings presentation to senior management, and answer any questions.
Final Assessment Findings Presentation

Finally, the assessment team leader gives the final assessment findings presentation to the senior site manager, his or her immediate staff, and the assessment participants. The purpose of this session is to present the assessment team's view of the most important software process issues facing the organization. Because the organization's findings are a composite, confidentiality is ensured.

h) Senior Management Meeting

After the final findings presentation, the senior site manager, immediate staff (if desired), and the assessment team leader hold an executive meeting to discuss next steps. The purpose of this meeting is to confirm the time for the recommendations presentation and final report, discuss the importance of forming the action plan team and developing the action plan, and address any questions or concerns of management.

i) Recommendations Formulation

After the assessment findings presentation and the senior management meeting, there is one more on-site session: the initial formulation of the recommendations to address the findings. The purpose of this meeting is to obtain a team consensus on the recommendations to be documented in the final report and to assign a portion of that report to each assessment team member.

Some guidelines used by the team for formulating the recommendations follow:
- Address each key finding, though there need not be a one-to-one correspondence between findings and recommendations.
- Limit the number of recommendations.
- Make the recommendations specific and concise.
- Prioritize the recommendations.
- Focus on what the recommendations are, not how they will be implemented.
- Be sensitive to how the recommendations affect organizational resources.
- Recommendations should be realistic enough to be accomplished.

Software Development & Quality Management:


Definitions

The aim of Software Quality Management (SQM) is to manage the quality of software and of its development process.

A quality product is one which meets its requirements and satisfies the user. A quality culture is an organizational environment where quality is viewed as everyone's responsibility.

SQM Roles

- To ensure that the required level of quality is achieved in a software product.
- To encourage a company-wide "quality culture" where quality is viewed as everyone's responsibility.
- To reduce the learning curve and help with continuity in case team members change positions within the organization.
- To enable in-process fault avoidance and fault prevention through proper development.

Many people use the terms SQM and SQA interchangeably.

The elements of a software quality system


There are two goals of the software quality system (SQS). The first goal is to build quality into the software from the beginning. This means assuring that the problem or need to be addressed is clearly and accurately stated, and that the requirements for the solution are properly defined, expressed, and understood. Nearly all the elements of the SQS are oriented toward requirements validity and satisfaction. The second goal of the SQS is to keep that quality in the software throughout the software life cycle (SLC).

The 10 elements of the SQS are as follows:
1. Standards;
2. Reviewing;
3. Testing;
4. Defect analysis;
5. Configuration management (CM);
6. Security;
7. Education;
8. Vendor management;
9. Safety;
10. Risk management.

1. Standards

As implied by the figure, the standards manual can have inputs from many sources. Standards are intended to provide consistent, rigorous, uniform, and enforceable methods for software development and operation activities. The development of standards, whether by professional societies such as the Institute of Electrical and Electronics Engineers (IEEE), international groups such as the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee One (ISO/IEC JTC1), industry groups, or software development organizations for themselves, recognizes and furthers that movement.

Figure: Standards sources.

Standards cover all aspects of the SLC, including the very definition of the SLC itself. More, probably, than any of the other elements, standards can govern every phase of the life cycle.

Standards can describe considerations to be covered during the concept exploration phase. They can also specify the format of the final report describing the retirement of a software system that is no longer in use. Whether a standard comes from within a company, is imposed by government, or is adopted from an industry source, it must have several characteristics. These include the following:
- Necessity. No standard will be observed for long if there is no real reason for its existence.
- Feasibility. Common sense tells us that if it is not possible to comply with the tenets of a standard, it will be ignored.
- Measurability. It must be possible to demonstrate that the standard is being followed.

2. Reviewing

Reviews permit ongoing visibility into the software development and installation activities. Product reviews, also called technical reviews, are formal or informal examinations of products and components throughout the development phases of the life cycle. They are conducted throughout the software development life cycle (SDLC). Informal reviews generally occur during SDLC phases, while formal reviews usually mark the ends of the phases, as the following figure illustrates.

Figure: SDLC reviews.

Informal reviews include walkthroughs and inspections. Walkthroughs are informal, but scheduled, reviews, usually conducted in and by peer groups. The author of the subject component (a design specification, test procedure, coded unit, or the like) walks through his or her component, explaining it to a small group of peers. The role of the peers is to look for defects in or problems with the component. These are then corrected before the component becomes the basis for further development. Inspections are a more structured type of walkthrough. Though the basic goal of an inspection, removal of defects, is the same as that of the walkthrough, the format of the meeting and the roles of the participants are more strictly defined, and more formal records of the proceedings are prepared.

3. Testing

Tests provide increasing confidence and, ultimately, a demonstration that the software requirements are being satisfied. Test activities include planning, design, execution, and reporting. The figure presents a simple conceptual view of the testing process. The basic test process is the same whether it is applied to system testing or to the earliest module testing.

Figure: Simplified test process.

Test planning begins during the requirements phase and parallels the requirements development. As each requirement is generated, the corresponding method of test for that requirement should be a consideration. A requirement is faulty if it is not testable. By starting test planning with the requirements, non-testability is often avoided. In the same manner that requirements evolve and change throughout the software development, so, too, do the test plans evolve and change. This emphasizes the need for early, and continuing, CM of the requirements and test plans.

Test design begins as the software design begins. Here, as before, a parallel effort with the software development is appropriate. As the design of the software takes form, the test cases, scenarios, and data are developed that will exercise the designed software. Each test case also includes specific expected results, so that a pass-fail criterion is established. As each requirement must be measurable and testable, so must each test be measurable. A test whose completion is not definitive tells little about the subject of the test. Expected results give the basis against which the success or failure of the test is measured.

4. Defect analysis

Defect analysis is the combination of defect detection and correction, and defect trend analysis. Defect detection and correction, together with change control, presents a record of all discrepancies found in each software component. It also records the disposition of each discrepancy, perhaps in the form of a software problem report or software change request. As shown in the figure, each needed modification to a software component, whether found through a walkthrough, review, test, audit, operation, or other means, is reported, corrected, and formally closed. A problem or requested change may be submitted by anyone with an interest in the software. The situation will be verified by the developers, and the CM activity will agree to the change. Verification of the situation assures that the problem or need for the change actually exists. CM may wish to withhold permission for the change or delay it until a later time, perhaps because of concerns such as interference with other software, schedule and budget considerations, the customer's desires, and so on. Once the change is completed and tested, it will be reported by CM to all concerned parties, installed into the operational software by the developers or operations staff, and tested for functionality and compatibility in the full environment.

Figure: Typical change procedure.
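The reporting-and-closure cycle described above maps naturally onto a simple record structure. Below is a minimal Python sketch, with hypothetical names throughout (Status, ProblemReport, the example component), of a problem report that carries its own disposition history:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    """States a change request passes through in the procedure above."""
    REPORTED = auto()
    VERIFIED = auto()      # developers confirm the problem exists
    APPROVED = auto()      # CM agrees to (or defers) the change
    IMPLEMENTED = auto()
    TESTED = auto()        # functionality and compatibility checks
    CLOSED = auto()

@dataclass
class ProblemReport:
    report_id: str
    component: str
    description: str
    status: Status = Status.REPORTED
    history: list = field(default_factory=list)

    def advance(self, new_status: Status, note: str = "") -> None:
        # Record each disposition so the report forms an audit trail.
        self.history.append((self.status, note))
        self.status = new_status

# Usage: a discrepancy found in a design walkthrough.
pr = ProblemReport("SPR-042", "billing module", "incorrect rounding in totals")
pr.advance(Status.VERIFIED, "reproduced by developer")
pr.advance(Status.APPROVED, "CM board approved for next release")
```

Each advance() call preserves the prior state, so a closed report doubles as the formal record of the discrepancy and its disposition.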

5. Configuration management

CM is a three-fold discipline. Its intent is to maintain control of the software, both during development and after it is put into use and changes begin. As shown in the figure, CM is, in fact, three related activities: identification, control, and accounting. If the physical and functional audits are included as CM responsibilities, there are four activities. Each of the activities has a distinct role to play. As system size grows, so do the scope and importance of each of the activities. In very small, or one-time use, systems, CM may be minimal. As systems grow and become more complex, or as changes to the system become more important, each activity takes on a more definite role in the overall management of the software and its integrity. Further, some CM may be informal, for the organization itself, to keep track of how the development is proceeding and to maintain control of changes, while other CM will be more formal and be reported to the customer or user.

Figure: CM activities.
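As a small illustration of the three activities named above, the Python sketch below (all names hypothetical) keeps identification, control, and accounting data together on one configuration item:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """Hypothetical record tying together the three CM activities."""
    name: str                      # identification: what the item is
    version: str                   # identification: which revision
    approved_changes: list = field(default_factory=list)  # control
    status_log: list = field(default_factory=list)        # accounting

    def apply_change(self, change_id: str, new_version: str) -> None:
        # Control: only recorded, agreed changes move the version forward.
        self.approved_changes.append(change_id)
        # Accounting: the log shows which change produced which version.
        self.status_log.append((self.version, new_version, change_id))
        self.version = new_version

ci = ConfigurationItem("payroll.calc", "1.4")
ci.apply_change("SCR-017", "1.5")  # the log now records the 1.4 -> 1.5 history
```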

6. Security

Security activities are applied both to data and to the physical data center itself. These activities are intended to protect the usefulness of the software and its environment. The highest quality software system is of no use if the data center in which it is to be used is damaged or destroyed. Such events as broken water pipes, fire, malicious damage by a disgruntled employee, and storm damage are among the most common causes of data center inoperability. Even more ominous is the rising incidence of terrorist attacks on certain industries and in various countries around the world.

Another frequent cause of degraded output from an otherwise high-quality software system is data that has been unknowingly modified. If the data on which the system is operating has been made inaccurate, whether intentionally or by accident, the results of the software will not be correct. To the user or customer, this appears to be inadequate software.

7. Education

Education assures that the people involved with software development, and those people using the software once it is developed, are able to do their jobs correctly. It is important to the quality of the software that the producers be educated in the use of the various development tools at their disposal. A programmer charged with writing object-oriented software in C++ cannot perform well if the only language he or she knows is Visual Basic. It is necessary that the programmer be taught to use C++ before beginning the programming assignment. Likewise, the use of operating systems, data modeling techniques, debugging tools, special workstations, and test tools must be taught before they can be applied beneficially. The proper use of the software once it has been developed and put into operation is another area requiring education. In this case, the actual software user must be taught proper operating procedures, data entry, report generation, and whatever else is involved in the effective use of the software system's capabilities.

8. Vendor management

When software is purchased, the buyer must be aware of, and take action to gain confidence in, its quality. Not all purchased software can be treated in the same way, as will be demonstrated here. Each type of purchased software will have its own software quality system approach, and each must be handled in a manner appropriate to the degree of control the purchaser has over the development process used by the producer. The following are three basic types of purchased software:
1. Off-the-shelf;
2. Tailored shell;
3. Contracted.

Off-the-shelf software is the package we buy at the store. Microsoft Office, Adobe Photoshop, virus checkers, and the like are examples. These packages come as they are, with no warranty that they will do what you need to have done. They are also almost totally outside the buyer's influence with respect to quality.

The second category may be called the tailored shell. In this case, a basic, existing framework is purchased, and the vendor then adds specific capabilities as required by the contract. This is somewhat like buying a stripped version of a new car and then having the dealer add a stereo, sunroof, and other extras. The only real quality influence is over the custom-tailored portions.

The third category is contracted software. This is software that is contractually specified and provided by a third-party developer. In this case, the contract can also specify the software quality activities that the vendor must perform and which the buyer will audit.

The software quality practitioner has the responsibility in each case to determine the optimum level of influence to be applied, and how that influence can be most effectively applied. The purchaser's quality practitioners must work closely with the vendor's quality practitioners to assure that all required steps are being taken.

9. Safety

As computers and software grow in importance and impact more and more of our lives, the safety of these devices becomes a major concern. Every software project must consciously consider the safety implications of the software and the system of which it is a part. The project management plan should include a paragraph describing the safety issues to be considered. If appropriate, a software safety plan should be prepared.

10. Risk management

There are several types of risk associated with any software project. Risks range from the simple, such as the availability of trained personnel to undertake the project, to the more threatening, such as improper implementation of complicated algorithms, to the deadly, such as failure to detect an alarm in a nuclear plant. Risk management includes identification of the risk; determining the probability, cost, or threat of the risk; and taking action to eliminate, reduce, or accept the risk. Risk and its treatment is a necessary topic in the project plan and may deserve its own risk management plan.

Tools in Software Quality Control:


Ishikawa promoted seven statistical tools for quality control. These statistical tools support process and quality control at the project and organization level, and hence are used by project leaders and project experts. In contrast, they do not provide specific information to software developers on how to improve the quality of the design or implementation. The seven tools are:

1) Pareto Chart
2) Check Sheet or Check List
3) Histogram
4) Run Chart
5) Scatter Diagram
6) Control Chart (Flow Charts)
7) Cause and Effect or Fishbone Diagram
1) PARETO CHART

The Pareto diagram is named after Vilfredo Pareto, a 19th-century Italian economist who postulated that a large share of wealth is owned by a small percentage of the population. This basic principle translates well into quality problems: most quality problems result from a small number of causes. Quality experts often refer to the principle as the 80-20 rule; that is, 80% of problems are caused by 20% of the potential sources. A Pareto diagram puts data in a hierarchical order, which allows the most significant problems to be corrected first. The Pareto analysis technique is used primarily to identify and evaluate nonconformities, although it can summarize all types of data. It is perhaps the diagram most often used in management presentations. To create a Pareto diagram, the operator collects random data, regroups the categories in order of frequency, and creates a bar graph based on the results.
A PARETO CHART IS USED FOR:

- Focusing on critical issues by ranking them in terms of importance and frequency (example: Which course causes the most difficulty for students? Which problem with Product X is most significant to our customers?)
- Prioritizing problems or causes to efficiently initiate problem solving (example: Which discipline problems should be tackled first? What is the most frequent complaint by clients regarding efficiency? The solution of which production problem will improve quality most?)
- Analyzing problems or causes by different groupings of data (e.g., by program, by programmer, by resources, by machine, by team)
- Analyzing the before and after impact of changes made in a process (example: What is the most common complaint after the new modification was made? Has the initiation of a quality improvement program reduced the number of defectives?)

STEPS IN CONSTRUCTING A PARETO CHART WITH STEP-BY-STEP EXAMPLE:

1. Determine the categories of problems or causes to be compared. Begin by organizing the problems or causes into a narrowed-down list of categories (usually 8 or fewer).

2. Select a standard unit of measurement and the time period to be studied. It could be a measure of how often something occurs (defects, errors, tardies, cost overruns, etc.); frequencies of reasons cited in surveys as the cause of a certain problem; or a specific measurement of volume or size. The time period to be studied should be a reasonable length of time to collect the data.

3. Collect and summarize the data. Create a three-column table with the headings "error or problem category", "frequency", and "percent of total". In the "error or problem category" column, list the categories of problems or causes previously identified. In the "frequency" column, write in the totals for each of the categories over the designated period of time. In the "percent of total" column, divide each number in the "frequency" column by the total number of measurements. This provides the percentage of the total.

Sample data (raw counts by symptom):

  Title             Symptom count
  Abnormal End      5
  Address Error     5
  Incorrect Output  53
  Infinite Loop     9
  Error Message     0
  Endless Wait      29

Sorted by frequency, with cumulative percent of total (n = 101):

  Title             Data  Cumulative percent
  Incorrect Output  53    52%
  Endless Wait      29    81%
  Infinite Loop     9     90%
  Abnormal End      5     95%
  Address Error     5     100%
  Error Message     0     100%

4. Create the framework for the horizontal and vertical axes of the Pareto chart. The horizontal axis will hold the categories of problems or causes in descending order, with the most frequently occurring category on the far left (at the beginning of the horizontal line). There will be two vertical axes: one on the far left and one on the far right. The vertical axis on the far left will indicate the frequency for each of the categories. Scale it so the value at the top of the axis is slightly higher than the highest frequency number. The vertical axis on the far right will represent the percentage scale and should be scaled so that the point for the number of occurrences on the left matches the corresponding percentage on the right.

5. Plot the bars on the Pareto chart. Using a bar graph format, draw the corresponding bars in decreasing height from left to right, using the frequency scale on the left vertical axis. To plot the cumulative percentage line, place a dot above each bar at a height corresponding to the scale on the right vertical axis. Then connect these dots from left to right, ending with the 100% point at the top of the right vertical axis.

6. Interpret the Pareto chart. Use common sense: just because a certain problem occurs most often doesn't necessarily mean it demands your greatest attention. Investigate all angles to help solve the problems. What makes the biggest difference? What will it cost to correct the problem? What will it cost if we don't correct it?
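The steps above are easy to automate. The following is a minimal Python sketch using matplotlib (an assumed tool, not prescribed by the text) with the sample data from step 3; it performs the sorting of step 4 and the bar-and-line plotting of step 5:

```python
import matplotlib.pyplot as plt

# Sample data from step 3: defect categories and observed frequencies.
counts = {
    "Incorrect Output": 53,
    "Endless Wait": 29,
    "Infinite Loop": 9,
    "Abnormal End": 5,
    "Address Error": 5,
    "Error Message": 0,
}

# Step 4: sort categories by frequency, descending.
items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
freqs = [v for _, v in items]
total = sum(freqs)  # n = 101

# Cumulative percentage for the line overlay (step 5).
cumulative = []
running = 0
for f in freqs:
    running += f
    cumulative.append(100 * running / total)

fig, ax1 = plt.subplots()
ax1.bar(labels, freqs)             # frequency bars, left axis
ax1.set_ylabel("Frequency of occurrence")
ax2 = ax1.twinx()                  # cumulative % line, right axis
ax2.plot(labels, cumulative, marker="o")
ax2.set_ylim(0, 100)
ax2.set_ylabel("Cumulative percent")
plt.title(f"Pareto Chart (n={total})")
plt.tight_layout()
plt.show()
```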

Figure: Pareto chart of the sample data (n = 101), with frequency-of-occurrence bars on the left axis and a cumulative-percentage line on the right axis.

2) Run Charts: Run charts are frequently used for software project management. These charts serve as real-time statements of quality as well as work load. Often these run charts are compared to historical data or a projection model so that the interpretation can be placed into proper perspective.

Example (error rates in defects/kloc, against the eleven-project average):

  Project    Error rate  Average
  Project1   8.7         4.445455
  Project2   1.9         4.445455
  Project3   4.8         4.445455
  Project4   4.5         4.445455
  Project5   5.3         4.445455
  Project6   5.2         4.445455
  Project7   2.1         4.445455
  Project8   5.9         4.445455
  Project9   3.6         4.445455
  Project10  3.5         4.445455
  Project11  3.4         4.445455
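A minimal Python sketch of a run chart for this example, again assuming matplotlib: the per-project error rates are plotted in sequence, with the historical average overlaid as a reference line:

```python
import matplotlib.pyplot as plt

# Error rates from the example table (defects/kloc per project).
rates = [8.7, 1.9, 4.8, 4.5, 5.3, 5.2, 2.1, 5.9, 3.6, 3.5, 3.4]
projects = [f"Project{i}" for i in range(1, len(rates) + 1)]
average = sum(rates) / len(rates)  # ~4.45 defects/kloc

plt.plot(projects, rates, marker="o", label="Error rate")
plt.axhline(average, linestyle="--", label=f"Average ({average:.2f})")
plt.ylabel("Defects/kloc")
plt.xticks(rotation=45)
plt.legend()
plt.tight_layout()
plt.show()
```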

Figure: Run chart of the project error rates (defects/kloc) plotted against the average line.

3) Scatter Diagram: A scatter diagram shows how two variables are related and is thus used to test for cause-and-effect relationships. It cannot prove that one variable causes the change in the other, only that a relationship exists and how strong it is. In a scatter diagram, the horizontal (x) axis represents the measurement values of one variable, and the vertical (y) axis represents the measurements of the second variable.

Example (defects plotted against lines of code, with a fitted line y = mx + b):

  Defects  LOC    y = mx + b
  8        1535   1415.521
  9        1964   1603.863
  16       2593   2922.257
  17       4658   3110.599
  18       4602   3298.941
  21       3479   3863.967
  30       6352   5559.044
  31       5731   5747.386
  33       5743   6124.070
  34       4353   6312.412
  34       4487   6312.412
  36       6482   6689.096
  44       5762   8195.832
  46       10038  8572.516
  54       14919  10079.250
  59       9031   11020.960
  62       16999  11585.990
  65       7894   12151.010
  65       13411  12151.010
  68       15795  12716.040
  74       10300  13846.090
  77       8003   14411.120
  79       11393  14787.800
  79       17361  14787.800
  103      23688  19308.010
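As a sketch of how the fitted line y = mx + b could be produced from the table, the code below performs a least-squares fit with numpy and draws the scatter diagram; numpy and matplotlib are assumptions, and the computed m and b should roughly reproduce the fitted column above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Paired observations from the example table.
defects = np.array([8, 9, 16, 17, 18, 21, 30, 31, 33, 34, 34, 36,
                    44, 46, 54, 59, 62, 65, 65, 68, 74, 77, 79, 79, 103])
loc = np.array([1535, 1964, 2593, 4658, 4602, 3479, 6352, 5731, 5743,
                4353, 4487, 6482, 5762, 10038, 14919, 9031, 16999, 7894,
                13411, 15795, 10300, 8003, 11393, 17361, 23688])

# Least-squares fit of the line y = mx + b (degree-1 polynomial).
m, b = np.polyfit(defects, loc, 1)

plt.scatter(defects, loc, label="Observations")
plt.plot(defects, m * defects + b, label=f"y = {m:.1f}x + {b:.1f}")
plt.xlabel("Defects")
plt.ylabel("LOC")
plt.legend()
plt.show()
```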

4) Check Sheet: Check sheets help organize data by category. They show how many times each particular value occurs, and their information is increasingly helpful as more data are collected. More than 50 observations should be available to be charted for this tool to be really useful. Check sheets minimize clerical work since the operator merely adds a mark to the tally on the prepared sheet rather than writing out a figure. By showing the frequency of a particular defect and how often it occurs in a specific location, check sheets help operators spot problems. The check sheet example shows a list of molded part defects on a production line covering a week's time. One can easily see where to set priorities based on results shown on this check sheet. Assuming the production flow is the same on each day, the part with the largest number of defects carries the highest priority for correction.

Figure: Check sheet tallying molded part defects by defect type and shift over one week.
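In code, a check sheet is simply a tally keyed by category and location (here, defect type and day); the defect types and observations below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical stream of (defect_type, day) observations from the line.
observations = [
    ("Scratch", "Mon"), ("Crack", "Mon"), ("Scratch", "Tue"),
    ("Flash", "Tue"), ("Scratch", "Wed"), ("Crack", "Thu"),
    ("Scratch", "Thu"), ("Flash", "Fri"), ("Scratch", "Fri"),
]

# One tally per (defect type, day) cell, as on a paper check sheet.
sheet = Counter(observations)
totals = Counter(d for d, _ in observations)  # row totals by defect type

for (defect, day), n in sorted(sheet.items()):
    print(f"{defect:<8} {day}: {'|' * n}")
# The defect type with the largest total carries the highest priority.
print("Priority order:", [d for d, _ in totals.most_common()])
```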

Cause & Effect Diagram: The cause and effect diagram is sometimes called an Ishikawa diagram after its inventor. It is also known as a fish bone diagram because of its shape. A cause and effect diagram describes a relationship between variables. The undesirable outcome is shown as effect, and related causes are

shown as leading to, or potentially leading to, the said effect. This popular tool has one severe limitation, however, in that users can overlook important, complex interactions between causes. Thus, if a problem is caused by a combination of factors, it is difficult to use this tool to depict and solve it. A fish bone diagram displays all contributing factors and their relationships to the outcome to identify areas where data should be collected and analyzed. The major areas of potential causes are shown as the main bones, e.g., materials, methods, people, measurement, machines, and design. Later, the sub areas are depicted. Thorough analysis of each cause can eliminate causes one by one, and the most probable root cause can be selected for corrective action. Quantitative information can also be used to prioritize means for improvement, whether it be to machine, design, or operator. To construct the skeleton, remember: For manufacturing - the 4 Ms man, method, machine, material For service applications equipment, policies, procedures, people
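A fishbone structure can be captured as a mapping from main bones to candidate sub-causes before the diagram is drawn; the effect and causes listed here are purely illustrative:

```python
# Hypothetical fishbone data: main bones mapped to candidate sub-causes.
fishbone = {
    "Methods": ["no coding standard", "reviews skipped"],
    "People": ["insufficient C++ training"],
    "Machines": ["flaky build server"],
    "Materials": ["ambiguous requirements"],
}
effect = "High defect rate in release 2.1"

# Print the skeleton: effect at the head, causes along the bones.
print(f"Effect: {effect}")
for bone, causes in fishbone.items():
    print(f"  {bone}:")
    for cause in causes:
        print(f"    - {cause}")
```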

5) Histograms: The histogram plots data in a frequency distribution table. What distinguishes the histogram from a check sheet is that its data are grouped into rows so that the identity of individual values is lost. Commonly used to present quality improvement data, histograms work best with small amounts of data that vary considerably. When used in process capability studies, histograms can display specification limits to show what portion of the data does not meet the specifications. After the raw data are collected, they are grouped in value and frequency and plotted in a graphical form. A histogram's shape shows the nature of the distribution of the data, as well as central tendency (average) and variability. Specification limits can be used to display the capability of the process.
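A minimal Python sketch of the grouping described above, using matplotlib's hist to bin hypothetical measurements; the data and the specification limit shown are illustrative only:

```python
import matplotlib.pyplot as plt

# Hypothetical raw measurements, e.g. module review times in minutes.
data = [32, 45, 38, 51, 47, 36, 42, 55, 40, 44, 39, 48, 61, 43, 37]

# Group values into bins; individual identities are lost, as described above.
plt.hist(data, bins=6, edgecolor="black")
plt.axvline(60, linestyle="--", label="Specification limit")  # illustrative
plt.xlabel("Review time (minutes)")
plt.ylabel("Frequency")
plt.legend()
plt.show()
```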

6) Flowcharts: Flowcharts describe a process in as much detail as possible by graphically displaying the steps in proper sequence. A good flowchart should show all process steps under analysis by the quality improvement team, identify critical process points for control, suggest areas for further improvement, and help explain and solve a problem.

Figure: A sample flow chart (process and decision symbols linked by process flow).

7) Control Charts: A control chart shows deviation from the mean, and provides upper and lower specifications (bounds) and a range.
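One common construction for defect-count data is the c-chart, where the center line is the mean count and the control limits sit three standard deviations (the square root of the mean, for count data) above and below it. The text does not prescribe a particular chart type, so the Python sketch below, with hypothetical counts, is just one possibility:

```python
import math
import matplotlib.pyplot as plt

# Hypothetical defect counts for 16 projects (the figure plots similar data).
defects = [7, 10, 12, 9, 14, 8, 11, 13, 10, 9, 15, 12, 8, 11, 10, 13]

# c-chart limits for count data: CL = mean, UCL/LCL = mean +/- 3*sqrt(mean).
cl = sum(defects) / len(defects)
ucl = cl + 3 * math.sqrt(cl)
lcl = max(0.0, cl - 3 * math.sqrt(cl))

plt.plot(range(1, len(defects) + 1), defects, marker="o")
plt.axhline(cl, label=f"CL = {cl:.1f}")
plt.axhline(ucl, linestyle="--", label=f"UCL = {ucl:.1f}")
plt.axhline(lcl, linestyle="--", label=f"LCL = {lcl:.1f}")
plt.xlabel("Projects")
plt.ylabel("Number of defects")
plt.legend()
plt.show()
```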

Figure: Control chart of the number of defects across sixteen projects, showing the center line (CL) and upper control limit (UCL).

SQA Assurance Plans

The SQA plan outline recommended by the IEEE is as follows:

I. Purpose of plan
II. References
III. Management
    a. Organization
    b. Tasks
    c. Responsibilities
IV. Documentation
    a. Purpose
    b. Required s/w engineering documents
    c. Other documents
V. Standards, Practices, and Conventions
    a. Purpose
    b. Conventions
VI. Reviews and audits
    a. Purpose
    b. Review requirements
       - s/w requirements review
       - design reviews
       - s/w verification and validation reviews
       - functional audits
       - physical audit
       - in-process audits
       - management reviews
VII. Test
VIII. Problem reporting and corrective action
IX. Tools, Techniques, and Methodologies
X. Code control
XI. Media control
XII. Supplier control
XIII. Record collection, Maintenance, and Retention
XIV. Training
XV. Risk Management

SQA Considerations

Verification and Validation


Validation checks that the product design satisfies or fits the intended usage (high-level checking); i.e., that you built the right product. This is done through dynamic testing and other forms of review.

Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. In other words, validation ensures that the product actually meets the user's needs and that the specifications were correct in the first place, while verification ensures that the product has been built according to the requirements and design specifications. Validation ensures that you built the right thing; verification ensures that you built it right. Validation confirms that the product, as provided, will fulfill its intended use.

From a testing perspective:

Fault - a wrong or missing function in the code.
Failure - the manifestation of a fault during execution.
Malfunction - the system does not meet its specified functionality; that is, it fails to behave according to its specification.

Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:

Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data, are accurate representations of the real world from the perspective of the intended use(s).

Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.

Verification is the process of determining that the implementation of a computer model, simulation, or federation of models and simulations, and its associated data, accurately represents the developer's conceptual description and specifications.
