Quality Management
Quality management is a method for ensuring that all the activities necessary to
design, develop and implement a product or service are effective and efficient with
respect to the system and its performance. Quality management can be considered
to have three main components: quality control, quality assurance and quality
improvement. Quality management is focused not only on product quality, but also
on the means to achieve it. Quality management therefore uses quality assurance
and control of processes, as well as products, to achieve more consistent quality.
Components:
Quality Control
Quality Assurance
Quality Improvement
Software Quality Factors
Quality Control
Usually, it is not the job of a quality control team or professional to correct quality
issues. Typically, other individuals are involved in the process of discovering the
cause of quality issues and fixing them. Once such problems are overcome, the
product, service, or process continues production or implementation as usual.
Quality control can cover not just products, services, and processes, but also people.
Employees are an important part of any company. If a company has employees that
don’t have adequate skills or training, have trouble understanding directions, or
are misinformed, quality may be severely diminished. When quality control is
considered in terms of human beings, it concerns correctable issues. However, it
should not be confused with human resource issues.
Often, quality control is confused with quality assurance. Though the two are very
similar, there are some basic differences. Quality control is concerned with the
product, while quality assurance is process-oriented.
Even with such a clear-cut difference defined, identifying the differences between the
two can be hard. Basically, quality control involves evaluating a product, activity,
process, or service. By contrast, quality assurance is designed to make sure
processes are sufficient to meet objectives. Simply put, quality assurance ensures a
product or service is manufactured, implemented, created, or produced in the right
way; while quality control evaluates whether or not the end result is satisfactory.
Example
Quality Control refers to quality related activities associated with the creation of
project deliverables. Quality control is used to verify that deliverables are of
acceptable quality and that they are complete and correct. Examples of quality
control activities include deliverable peer reviews and the testing process.
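The distinction can be sketched in code. Below, a hypothetical quality-control check evaluates the deliverable itself against acceptance criteria; the deliverable format, section names, and criteria are all invented for illustration, not part of any standard.

```python
# Hypothetical sketch: quality control evaluates the deliverable itself.
# The deliverable format and the required sections are invented examples.

def quality_control_check(deliverable, required_sections):
    """Return a list of findings; an empty list means the deliverable passes."""
    findings = []
    for section in required_sections:
        if section not in deliverable:
            findings.append(f"missing section: {section}")
        elif not deliverable[section].strip():
            findings.append(f"empty section: {section}")
    return findings

report = {"scope": "Orders subsystem", "requirements": "REQ-1..REQ-42", "signoff": ""}
findings = quality_control_check(report, ["scope", "requirements", "signoff"])
```

A quality-assurance audit, by contrast, would not look inside `report` at all; it would ask whether a check like this was defined and actually run.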
Quality Assurance refers to the process used to create the deliverables, and can be
performed by a manager, client, or even a third-party reviewer. Examples of quality
assurance include process checklists and project audits. If your project gets audited,
for instance, an auditor might not be able to tell if the content of a specific
deliverable is acceptable (quality control). However, the auditor should be able to tell
if the deliverable seems acceptable based on the process used to create it (quality
assurance). That's why project auditors can perform a quality assurance review on
your project, even if they do not know the specifics of what you are delivering. They
don't know your project, but they know what good processes look like.
Here's an example to drive home the point. Let's say a project manager asked the
sponsor to approve the Business Requirements Report. If you were the sponsor, how
would you validate that the business requirements seemed complete and correct?
One solution would be for you to actually review the document and the business
requirements. If you did that, you would be performing a quality control activity,
since your actions would be based on validating the deliverable itself.
However, let's say the document was thirty pages long and that you (as the sponsor)
did not have the expertise, the time, or the inclination to do a specific content
review. In that case, you wouldn't ask to review the document itself. Instead, you
would ask the project manager to describe the process used to create the document.
Let us say you received the following reply.
Project manager - "I gathered eight of your major users in a facilitated session. After
the meeting, I documented the requirements and asked the group for their feedback,
modifications, etc. I then took these updated requirements to representatives from
the Legal, Finance, Manufacturing and Purchasing groups and they added
requirements that were needed to support company standards. We then had a
meeting with the four managers in your area that are most impacted by this system.
These managers added a few more requirements. I then asked your four managers
to sign off on the requirements and you can see their signatures on the last page."
If you were the sponsor, would you now feel comfortable signing off on the
requirements? If it were me, I would feel pretty comfortable.
That's the difference. Quality control activities are focused on the deliverable itself.
Quality assurance activities are focused on the process used to create the
deliverable. They are both powerful techniques and both must be performed to
ensure that the deliverables meet your customers' quality requirements.
Quality Assurance
One of the most widely used paradigms for QA management is the PDCA (Plan-Do-
Check-Act) approach, also known as the Shewhart cycle. Other widely used quality
assurance methods and standards include:
• Failure testing
• Statistical control
• ISO 17025
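The PDCA cycle can be made concrete with a small sketch. The defect-rate figures and the assumption that each cycle halves the defect rate are invented for illustration; a real cycle would act on measured process data.

```python
# Illustrative PDCA loop: plan a change, apply it, check against the target,
# and act by keeping the change and repeating. The 50% improvement per cycle
# is an invented assumption for this sketch.

def pdca(defect_rate, target, max_cycles=10):
    cycles = 0
    while defect_rate > target and cycles < max_cycles:
        planned = defect_rate * 0.5   # Plan: aim to halve the defect rate
        defect_rate = planned         # Do: apply the planned change
        cycles += 1                   # Check: loop condition re-tests target;
                                      # Act: keep the change and iterate
    return defect_rate, cycles

rate, cycles = pdca(defect_rate=8.0, target=1.0)
```

Starting at 8 defects per unit with a target of 1, the loop converges in three cycles (8 → 4 → 2 → 1).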
Company Quality
During the 1980s, the concept of “company quality”, with its focus on management
and people, came to the fore. It was realised that success was possible only if all
departments approached quality with an open mind and management led the quality
improvement process.
The quality of the outputs is at risk if any of these aspects is deficient in any way.
The approach to quality management given here is therefore not limited to
manufacturing but can be applied to any business activity:
• Design work
• Administrative services
• Consulting
• Banking
• Insurance
• Computer software
• Retailing
• Transportation
Still, before company quality took hold, the quality work being carried out was
shop-floor inspection, which did not control the major quality problems. This led to
quality assurance and total quality control, which have come into being more recently.
Documentation Standards specify form and content for planning, control, and
product documentation and provide consistency throughout a project. The NASA
Data Item Descriptions (DIDs) are documentation standards.
Design Standards specify the form and content of the design product. They provide
rules and methods for translating the software requirements into the software design
and for representing it in the design documentation.
Code Standards specify the language in which the code is to be written and define
any restrictions on use of language features. They define legal language structures,
style conventions, rules for data structures and interfaces, and internal code
documentation. Procedures are explicit steps to be followed in carrying out a
process. All processes should have documented procedures. Examples of processes
for which procedures are needed are configuration management, nonconformance
reporting and corrective action, testing, and formal inspections.
If developed according to the NASA DID, the Management Plan describes the
software development control processes, such as configuration management, for
which there have to be procedures, and contains a list of the product standards.
Standards are to be documented according to the Standards and Guidelines DID in
the Product Specification. The planning activities required to assure that both
products and processes comply with designated standards and procedures are
described in the QA portion of the Management Plan.
Product evaluation and process monitoring are the SQA activities that assure the
software development and control processes described in the project's Management
Plan are correctly carried out and that the project's procedures and standards are
followed. Products are monitored for conformance to standards and processes are
monitored for conformance to procedures. Audits are a key technique used to
perform product evaluation and process monitoring. Review of the Management Plan
should ensure that appropriate SQA approval points are built into these processes.
Product evaluation is an SQA activity that assures standards are being followed.
Ideally, the first products monitored by SQA should be the project's standards and
procedures. SQA assures that clear and achievable standards exist and then
evaluates compliance of the software product to the established standards. Product
evaluation assures that the software product reflects the requirements of the
applicable standard(s) as identified in the Management Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry
out the process are being followed. SQA monitors processes by comparing the actual
steps carried out with those in the documented procedures. The Assurance section of
the Management Plan specifies the methods to be used by the SQA process
monitoring activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product
in depth, comparing them to established procedures and standards. Audits are used
to review management, technical, and assurance processes to provide an indication
of the quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being
followed, that required documentation is maintained, and that the developer's status
reports accurately reflect the status of the activity. The SQA product is an audit
report to management consisting of findings and recommendations to bring the
development into conformance with standards and/or procedures.
SQA assures that software Configuration Management (CM) activities are performed
in accordance with the CM plans, standards, and procedures. SQA reviews the CM
plans for compliance with software CM policies and requirements and provides
follow-up for nonconformances. SQA audits the CM functions for adherence to
standards and procedures and prepares reports of its findings.
Approved changes to baselined software are made properly and consistently in all
products, and no unauthorized changes are made.
Formal software reviews should be conducted at the end of each phase of the life
cycle to identify problems and determine whether the interim product meets all
applicable requirements. Examples of formal reviews are the Preliminary Design
Review (PDR), Critical Design Review (CDR), and Test Readiness Review (TRR). A
review looks at the overall picture of the product being developed to see if it satisfies
its requirements. Reviews are part of the development process, designed to provide
a ready/not-ready decision to begin the next phase. In formal reviews, actual work
done is compared with established standards. SQA's main objective in reviews is to
assure that the Management and Development Plans have been followed, and that
the product is ready to proceed with the next phase of development. Although the
decision to proceed is a management decision, SQA is responsible for advising
management and participating in the decision.
SQA assures that formal software testing, such as acceptance testing, is done in
accordance with plans and procedures. SQA reviews testing documentation for
completeness and adherence to standards. The documentation review includes test
plans, test specifications, test procedures, and test reports. SQA monitors testing
and provides follow-up on nonconformances. By test monitoring, SQA assures
software completeness and readiness for delivery.
The objectives of SQA in monitoring formal software testing are to assure that:
The test procedures are testing the software requirements in accordance with test
plans.
The correct or "advertised" version of the software is being tested (by SQA
monitoring of the CM activity).
Nonconformances occurring during testing (that is, any incident not expected in the
test procedures) are noted and recorded.
Software testing verifies that the software meets its requirements. The quality of
testing is assured by
verifying that project requirements are satisfied and that the testing process is in
accordance with the test plans and procedures.
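One such check can be sketched as a requirements-traceability audit: every requirement in the test plan must be covered by at least one passing test. The requirement IDs, test names, and data layout below are invented for the sketch.

```python
# Hedged sketch of one SQA test-monitoring check: every requirement must be
# covered by at least one passing test. IDs and test names are invented.

def uncovered_requirements(requirements, test_results):
    """test_results maps test name -> (requirement id it covers, passed?)."""
    passing = {req for req, passed in test_results.values() if passed}
    return sorted(set(requirements) - passing)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_results = {
    "test_login":  ("REQ-1", True),
    "test_report": ("REQ-2", False),  # a nonconformance to record and follow up
}
gaps = uncovered_requirements(requirements, test_results)  # REQ-3 was never tested
```

A gap list that is non-empty at delivery time is exactly the kind of nonconformance SQA is expected to report and track to resolution.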
SQA should be involved in both writing and reviewing the Management Plan in order
to assure that the processes, procedures, and standards identified in the plan are
appropriate, clear, specific, and auditable. During this phase, SQA also provides the
QA section of the Management Plan.
During the software requirements phase, SQA assures that software requirements
are complete, testable, and properly expressed as functional, performance, and
interface requirements.
Software Architectural Design Phase
SQA activities during the architectural design phase include:
• Assuring that the Interface Control Documents agree with the standard in form
and content.
• Reviewing PDR documentation and assuring that all action items are resolved.
• Reviewing CDR documentation and assuring that all action items are resolved.
SQA activities during the implementation phase include:
• Auditing the results of coding and design activities, including the schedule
contained in the Software Development Plan.
• Assuring that all tests are run according to test plans and procedures and that
any nonconformances are reported and resolved.
• Certifying that testing is complete and that the software and documentation are
ready for delivery.
• Participating in the Test Readiness Review and assuring that all action items are
completed.
As a minimum, SQA activities during the software acceptance and delivery phase
include assuring the performance of a final configuration audit to demonstrate that
all deliverable items are ready for delivery.
During this phase, there will be mini-development cycles to enhance or correct the
software. During these development cycles, SQA conducts the appropriate phase-
specific activities described above.
Quality Improvement
Complexity: Unnecessary work; any activity that makes a work process more
complicated without adding value to the resulting product or service.
Control Chart: A line graph that identifies the variation occurring in a work process
over time; helps distinguish between common-cause variation and special-cause
variation.
Cross Functional: Involving the cooperation of two or more departments within the
organization (e.g., Marketing and Product Development).
Customer: Any person or group inside or outside the organization who receives a
product or service.
Deming Cycle (also known as Shewhart's Wheel): A model that describes the
cyclical interaction of research, sales, design, and production as a continuous work
flow, so that all functions are involved constantly in the effort to provide products
and services that satisfy customers and contribute to improved quality. (See also
PDCA.)
Internal Customer: Anyone in the organization who relies on you for a product or
service. (See also Customer.)
Internal Supplier: Anyone in the organization you rely on for a product or service.
(See also Supplier.)
Juran Trilogy: The interrelationship of three basic managerial processes with
which to manage quality: quality planning, quality control, and quality improvement.
PDCA Cycle: An adaptation of the Deming Cycle, which stresses that every
improvement activity can best be accomplished by following the steps: plan, do,
check, act. (See Deming Cycle.)
"the efficient production of the quality that the market expects" (Deming)
"fitness for use"; "product performance and freedom from deficiencies" (Juran)
"the totality of features and characteristics of a product or service that bear on its
ability to satisfy a given need" (American Society for Quality Control)
Special-Cause Variation: Any variation arising from circumstances that are not a
normal part of the work process. (See also Common-Cause Variation.)
Supplier: Any person or group inside or outside the organization that produces a
product or service. Suppliers improve quality by identifying customer expectations
and adjusting work processes so that products and services meet or exceed those
expectations. (See also Customer.)
Work Process: A series of work steps that produce a particular product or service
for the customer.
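The Control Chart and Special-Cause Variation entries above can be sketched in a few lines. Control limits are derived from an in-control baseline sample; points outside mean ± 3 sigma are flagged as special-cause variation, and everything inside is treated as common-cause. All numbers below are invented for illustration.

```python
# Minimal control-chart sketch: limits come from an in-control baseline;
# points beyond mean +/- 3 sigma signal special-cause variation.
from statistics import mean, pstdev

def control_limits(baseline):
    """Return (LCL, UCL) computed as mean +/- 3 sigma of the baseline."""
    centre = mean(baseline)
    sigma = pstdev(baseline)
    return centre - 3 * sigma, centre + 3 * sigma

baseline = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.0]  # in-control history
lcl, ucl = control_limits(baseline)

new_points = [10.1, 9.9, 12.5]  # 12.5 lies outside the limits
flagged = [x for x in new_points if x < lcl or x > ucl]
```

Deriving the limits from a separate baseline, rather than from the sample being judged, matters: a large outlier included in its own sigma calculation can inflate the limits enough to mask itself.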
The CMM (also known as Humphrey's Capability Maturity Model) was originally
described by Watts Humphrey in the book Managing the Software Process
(Addison-Wesley Professional, Massachusetts, 1989). The CMM was conceived by
Humphrey, who based it on the earlier work of Phil Crosby. Active development of
the model by the SEI (the US Department of Defense's Software Engineering
Institute, located at Carnegie Mellon University in Pittsburgh) began in 1986.
The CMM was originally intended as a tool for objectively assessing the ability of
government contractors' processes to perform a contracted software project. Though
it comes from the area of software development, it can be (and has been and still is
being) applied as a generally applicable model to assist in understanding the process
capability maturity of organisations in diverse areas. For example, software
engineering, system engineering, project management, risk management, system
acquisition, information technology (IT), personnel management. It has been used
extensively for avionics software and government projects around the world.
Though still thus widely used as a general tool, for software development purposes
the CMM has been superseded by CMMI (Capability Maturity Model Integration). The
old CMM was renamed the Software Engineering CMM (SE-CMM), and organizations'
accreditations based on the SE-CMM expired on December 31, 2007.
Other variants of the CMM include the Systems Security Engineering CMM
(SSE-CMM) and the People CMM. Other maturity models, such as ISM3, have also
emerged.
Contents:
Maturity Model
Structure of CMM
Levels of the CMM
Key process areas
Software process framework for SEI's CMM
History
Controversial aspects
Beneficial Elements of CMM Level 2 and 3
Maturity Model
A maturity model is a structured collection of elements that describe certain aspects
of maturity in an organization. A maturity model may provide, for example:
• a place to start
• the benefit of a community’s prior experiences
• a common language and a shared vision
• a framework for prioritizing actions
• a way to define what improvement means for your organization.
Structure of CMM
Level 1 - Initial
At maturity level 1, processes are usually not documented and change based on the
user or event. The organization does not have a stable environment and may not
know or understand all of the components that make up the environment. As a
result, success in these organizations depends on the institutional knowledge, the
competence and heroics of the people in the organization, and the level of effort
expended by the team. In spite of this chaotic environment, maturity level 1
organizations often produce products and services; however, they frequently exceed
the budget and schedule of their projects. Due to the lack of formality, level 1
organizations often overcommit, abandon processes during a crisis, and are unable
to repeat past successes. There is very little planning and executive buy-in for
projects, and process acceptance is limited. IT organizations at level 1 are often
seen as a service instead of a partner.
Level 2 - Repeatable
Process discipline is unlikely to be rigorous, but where it exists it may help to ensure
that existing practices are retained during times of stress. When these practices are
in place, projects are performed and managed according to their documented plans.
Project status and the delivery of services are visible to management at defined
points (for example, at major milestones and at the completion of major tasks).
Basic project management processes are established to track cost, schedule, and
functionality. The minimum process discipline is in place to repeat earlier successes
on projects with similar applications and scope. There is still a significant risk of
exceeding cost and time estimates.
Level 3 - Defined
The organization’s set of standard processes, which are the basis for level 3, are
established and subject to some degree of improvement over time. These standard
processes are used to establish consistency across the organization. Projects
establish their defined processes by applying the organization’s set of standard
processes, tailored, if necessary, within similarly standardized guidelines.
Level 4 - Managed
Using process metrics, management can effectively control the process (e.g., for
software development ). In particular, management can identify ways to adjust and
adapt the process to particular projects without measurable losses of quality or
deviations from specifications. Organizations at this level set quantitative
quality goals for both software process and software maintenance.
Subprocesses are selected that significantly contribute to overall process
performance. These selected subprocesses are controlled using statistical
and other quantitative techniques. A critical distinction between maturity level 3
and maturity level 4 is the predictability of process performance. At maturity level 4,
the performance of processes is controlled using statistical and other quantitative
techniques, and may be quantitatively predictable. At maturity level 3, processes are
only qualitatively predictable.
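The level 3 versus level 4 distinction can be sketched quantitatively: a level 4 organization keeps a numerical baseline for a subprocess and predicts the range of its future performance from it. The metric (defects per KLOC from peer reviews) and the data below are invented for this sketch.

```python
# Hedged sketch of quantitative predictability at maturity level 4: future
# subprocess performance is predicted from a historical baseline. The metric
# and the sample data are invented.
from statistics import mean, pstdev

def predicted_range(history, k=2):
    """Predict future performance as baseline mean +/- k * sigma."""
    m, s = mean(history), pstdev(history)
    return m - k * s, m + k * s

# e.g. defects found per KLOC by the peer-review subprocess on past projects
defects_per_kloc = [4.0, 5.0, 4.5, 5.5, 4.0, 5.0]
low, high = predicted_range(defects_per_kloc)
```

A level 3 organization could say only that peer reviews "usually find most defects"; a level 4 organization can say the next project's review yield should fall in the predicted interval, and treat a result outside it as a signal to investigate.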
Level 5 - Optimizing
Optimizing processes that are nimble, adaptable and innovative depends on the
participation of an empowered workforce aligned with the business values and
objectives of the organization. The organization’s ability to rapidly respond to
changes and opportunities is enhanced by finding ways to accelerate and share
learning.
A critical distinction between maturity level 4 and maturity level 5 is the type of
process variation addressed. At maturity level 4, processes are concerned with
addressing special causes of process variation and providing statistical
predictability of the results. Though processes may produce predictable results,
the results may be insufficient to achieve the established objectives. At maturity
level 5, processes are concerned with addressing common causes of process
variation and changing the process (that is, shifting the mean of the process
performance) to improve process performance (while maintaining statistical
predictability) to achieve the established quantitative process-improvement
objectives.
Extensions
CMM Integration (CMMI) later added higher-maturity process areas, including:

  OPP  Organizational Process Performance        - Process Management, level 4
  QPM  Quantitative Project Management           - Project Management, level 4
  OID  Organizational Innovation and Deployment  - Process Management, level 5
  CAR  Causal Analysis and Resolution            - Support, level 5
Checklist types and their descriptions:

Policy: Describes the policy contents and KPA goals recommended by the CMM.
Standard: Describes the recommended content of selected work products described
in the CMM.
Process: Describes the process information content recommended by the CMM. The
process checklists are further refined into checklists for: roles, entry criteria,
inputs, activities, outputs, exit criteria, measurements, documented procedures,
training, and tools.
Procedure: Describes the recommended content of documented procedures
described in the CMM.
Level Overview: Provides an overview of an entire maturity level. The level
overview checklists are further refined into checklists for: KPA goals, policies,
standards, process descriptions, procedures, training, tools, and measurements.
History of CMM
The Capability Maturity Model was initially funded by military research. The United
States Air Force funded a study at the Carnegie-Mellon Software Engineering
Institute to create an abstract model for the military to use as an objective
evaluation of software subcontractors. The result was the Capability Maturity Model,
published as Managing the Software Process in 1989. The CMM is no longer
supported by the SEI and has been superseded by the more comprehensive
Capability Maturity Model Integration (CMMI), of which version 1.2 has now been
released.
Context
The rapid growth of software development caused growing pains; project failure
became more commonplace, not only because the field of computer science was still
in its infancy, but also because projects became more ambitious in scale and
complexity. In response, individuals such as Edward Yourdon, Larry Constantine,
Gerald Weinberg, Tom DeMarco, and David Parnas published articles and books with
research results in an attempt to professionalize the software development process.
Watts Humphrey's Capability Maturity Model (CMM) was described in the book
Managing the Software Process (1989). The CMM as conceived by Watts Humphrey
was based on the work a decade earlier of Phil Crosby, who published the Quality
Management Maturity Grid in his 1979 book Quality is Free. Active development of
the model by the SEI (US Dept. of Defense Software Engineering Institute) began in
1986.
The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. Though it comes from the area
of software development, it can be, has been, and continues to be widely applied as
a general model of the maturity of processes (e.g., IT Service Management
processes) in IS/IT (and other) organizations.
Note that the first application of a staged maturity model to IT was not by the
CMM/SEI, but rather by Richard L. Nolan in 1973.
1. Initial (chaotic, ad hoc, heroic) the starting point for use of a new process.
2. Repeatable (project management, process discipline) the process is used
repeatedly.
3. Defined (institutionalized) the process is defined/confirmed as a standard
business process.
4. Managed (quantified) process management and measurement takes place.
5. Optimising (process improvement) process management includes deliberate
process optimization/improvement.
Within each of these maturity levels are KPAs (Key Process Areas) which characterise
that level, and for each KPA there are five definitions identified:
1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
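This structure can be sketched as a small data model: maturity levels contain KPAs, and each KPA is examined against the same five common features. The class names are invented for the sketch; the sample KPAs are drawn from the CMM's level 2.

```python
# Minimal sketch, with invented class names, of the KPA structure described
# above: levels contain KPAs, and each KPA carries the five common features.
from dataclasses import dataclass, field

COMMON_FEATURES = ["Goals", "Commitment", "Ability", "Measurement", "Verification"]

@dataclass
class KeyProcessArea:
    name: str
    # each common feature maps to the practices that satisfy it
    features: dict = field(default_factory=lambda: {f: [] for f in COMMON_FEATURES})

@dataclass
class MaturityLevel:
    number: int
    name: str
    kpas: list

level2 = MaturityLevel(2, "Repeatable", [
    KeyProcessArea("Requirements Management"),
    KeyProcessArea("Software Project Planning"),
    KeyProcessArea("Software Configuration Management"),
])
```

An appraisal, in these terms, walks each KPA of the target level and checks that every one of the five features is populated with institutionalized practices.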
The KPAs are not necessarily unique to CMM, representing — as they do — the
stages that organizations must go through on the way to becoming mature.
N.B.: The CMM was originally intended as a tool to evaluate the ability of
government contractors to perform a contracted software project. It may be suited
for that purpose. When it became a general model for software process
improvement, there were many critics.
Origins
In the 1980s, several military projects involving software subcontractors ran over
budget and were completed much later than planned, if they were completed at all.
In an effort to determine why this was occurring, the United States Air Force funded
a study at the SEI. The result of this study was a model for the military to use as an
objective evaluation of software subcontractors. In 1989, the Capability Maturity
Model was published as Managing the Software Process. The basis for the model is
the Quality Management Maturity Grid introduced by Philip Crosby in his 1979 book
'Quality is Free'.
Current State
Although these models have proved useful to many organizations, the use of multiple
models has been problematic. Further, applying multiple models that are not
integrated within and across an organization is costly in terms of training, appraisals,
and improvement activities. The CMM Integration project was formed to sort out the
problem of using multiple CMMs. The CMMI Product Team's mission was to combine
three source models: the Capability Maturity Model for Software (SW-CMM), the
Systems Engineering Capability Model (SECM), and the Integrated Product
Development CMM (IPD-CMM).
CMMI is the designated successor of the three source models. The SEI has released a
policy to sunset the Software CMM and previous versions of the CMMI. The same can
be said for the SECM and the IPD-CMM; these models were superseded by CMMI.
World’s Largest Portal on Software Testing Information & Jobs -
http://www.OneStopTesting.com
Join Software Testing Community at
http://groups.yahoo.com/group/OneStopTesting/
Over 5,000 Testing Interview Questions at
http://www.CoolInterview.com
Software Testing : Quality Management eBook from www.OneStopTesting.com
Future Direction
With the release of the CMMI Version 1.2 Product Suite, the possibility of multiple
CMMI models was created. There is now a CMMI for Development (CMMI-DEV), V1.2
and a CMMI for Acquisition (CMMI-ACQ), V1.2. A version of the CMMI for Services is
being developed by a Northrop Grumman-led team under the auspices of the SEI,
with participation from Boeing, Lockheed Martin, Raytheon, SAIC, SRA, and Systems
and Software Consortium (SSCI).
Suggestions for improving CMMI are welcomed by the SEI. For information on how to
provide feedback, see the CMMI Web site.
Controversial Aspects
The software industry is diverse and volatile. All methodologies for creating software
have supporters and critics, and the CMM is no exception.
Praise
Criticism
• CMM has failed to take over the world. It's hard to tell exactly how widespread
it is, as the SEI only publishes the names and achieved levels of compliance of
companies that have requested this information to be listed.
The most current Maturity Profile for CMMI is available online.
• The CMM does not describe how to create an effective software development
organization. The CMM contains behaviors or best practices that successful
projects have demonstrated. Being CMM compliant is not a guarantee that a
project will be successful, however being compliant can increase a project's
chances of being successful.
• Peer Review of Code (Code Review) with metrics that allow developers to
walk through an implementation, and to suggest improvements or changes.
(Note - This is problematic because the code has already been developed, and
a bad design potentially cannot be fixed by "tweaking".) The Code Review
gives complete code a formal approval mechanism.
• The idea that there is a "right way" to build software, that it is a scientific
process involving engineering design and that groups of developers are not
there to simply work on the problem du jour.
This page points you to places where you can find more information about CMMI,
and describes the worldwide adoption and benefits of CMMI.
Worldwide Adoption
The SEI is excited about the response that organizations around the world are having
to the CMMI Product Suite. CMMI is being adopted worldwide, including North
America, Europe, Asia, Australia, South America, and Africa. This kind of response
has substantiated the SEI's commitment to the CMMI models and the Standard CMMI
Appraisal Method for Process Improvement (SCAMPI).
The following are some of the benefits and business reasons for implementing
process improvement:
• The quality of a system is highly influenced by the quality of the process used
to acquire, develop, and maintain it.
• Process improvement increases product and service quality as organizations
apply it to achieve their business objectives.
• Process improvement objectives are aligned with business objectives.