
QNo-1: Explain the layered approach of SRE for the improvement of software quality.
SRE is a layered technology. It rests on an organizational commitment to quality with a
continuous process-improvement culture, and has its foundation in the process layer. The
process defines the framework for management control of software projects and establishes
the context in which technical methods are applied, work products are produced, quality is
ensured, and change is properly managed. SRE methods provide the technical "how-to" for
building the software, while tools provide automated or semi-automated support for the
processes and methods [6].

SRE management techniques work by applying two fundamental ideas:

1. Deliver the desired functionality more efficiently by quantitatively characterizing the
expected use, using this information to optimize resource usage with a focus on the most
used and/or critical functions, and making the testing environment representative of the
operational environment (a minimal allocation sketch follows this list).
2. Balance customer needs for reliability, schedule, and cost-effectiveness by setting
quantitative reliability, schedule, and cost objectives, and engineering strategies to meet
those objectives.
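As a concrete illustration of the first idea, the sketch below allocates a test budget across
functions in proportion to an operational profile, so that testing effort mirrors expected
field use. The function names, profile values, and budget are hypothetical.

# Sketch: allocate a test budget across functions in proportion to an
# operational profile (relative frequency of use in the field).
# All names and numbers below are hypothetical, for illustration only.

operational_profile = {
    "login": 0.40,         # fraction of field usage per function (sums to 1.0)
    "search": 0.35,
    "checkout": 0.20,
    "admin_report": 0.05,
}
test_budget = 1000  # total number of test cases available

# Heavily used functions receive proportionally more test cases.
allocation = {f: round(p * test_budget) for f, p in operational_profile.items()}
print(allocation)  # {'login': 400, 'search': 350, 'checkout': 200, 'admin_report': 50}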
The activities of SRE include:
1. Studying the attributes and metrics of product design, the development process, system
architecture, and the software's operational environment, and their implications for
reliability.
2. Software reliability measurement: estimation and prediction.
3. Applying this knowledge to specify and guide system and software architecture,
development, testing, acquisition, use, and maintenance.
There exist sound process models of SRE, known as software development life cycle (SDLC)
models, which describe the various stages of software development in a sequential and
planned manner. Most models divide the SDLC into the following stages: requirements
analysis and definition, system design, program design, coding, testing, system delivery,
and maintenance. The tools and techniques of SRE give the software engineer the means to
monitor, control, and improve software quality throughout the SDLC.

QNo-2: How do we determine the predictive validity of reliability modeling?
Predictive validity refers simply to the accuracy of model estimates. The foremost
requirement for achieving predictive validity is to make sure that the input data are
accurate and reliable. As discussed in an earlier chapter, there is much room for
improvement in data quality in the software industry in general, including defect tracking
in software development. Within the development process, the tracking system and the data
quality are usually better at the back end (testing) than at the front end (requirements
analysis, design reviews, and code inspections). Without accurate data, it is impossible to
obtain accurate estimates.

Second, and no less important, to establish predictive validity, model estimates
and actual outcomes must be compared and empirical validity must be established.
Such empirical validity is of utmost importance because the validity of software
reliability models, according to the state of the art, is context-specific. A model may
work well in a certain development organization for a group of products using
certain development processes, but not in dissimilar environments. No universally
good software reliability model exists. By establishing empirical validity, we ensure
that the model works in the intended context. To improve its predictive validity, we
calibrated the model output with an adjustment factor, which is the mean difference
between the Rayleigh estimates and the actual defect rates reported. The calibration
is logical, given the similar structural parameters in the development process among
the three computer systems, including organization, management, and workforce.
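A minimal sketch of such a calibration, with hypothetical defect-rate figures, computes the
adjustment factor as the mean difference between the model estimates and the actual
outcomes and applies it to new estimates:

# Sketch: calibrate model output with an adjustment factor, defined as the
# mean difference between model (e.g., Rayleigh) estimates and the actual
# defect rates reported. All figures are hypothetical.

rayleigh_estimates = [9.2, 11.5, 10.1]    # estimated defect rates, three past systems
actual_defect_rates = [10.0, 12.3, 11.4]  # actual defect rates reported

adjustment = sum(actual - est for actual, est in
                 zip(actual_defect_rates, rayleigh_estimates)) / len(rayleigh_estimates)

def calibrated(estimate):
    # Apply the adjustment factor to a new model estimate.
    return estimate + adjustment

print(f"adjustment factor = {adjustment:.2f}")          # 0.97
print(f"calibrated estimate = {calibrated(10.8):.2f}")  # 11.77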

QNo-3: Define failure rate. Derive the mathematical expression of reliability in terms of
failure rate.
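The answer follows from the standard reliability-theory definitions; the derivation below
uses R(t) for reliability, F(t) for the cumulative failure distribution, and f(t) for its
density.

% Failure rate (hazard rate): the instantaneous rate at which failures occur
% at time t, given survival up to time t.
\[
\lambda(t) = \frac{f(t)}{R(t)} = \frac{-\,dR(t)/dt}{R(t)},
\qquad R(t) = 1 - F(t), \quad f(t) = \frac{dF(t)}{dt}.
\]
% Integrating both sides from 0 to t, with R(0) = 1:
\[
\int_0^t \lambda(x)\,dx = -\ln R(t)
\quad\Longrightarrow\quad
R(t) = \exp\!\left(-\int_0^t \lambda(x)\,dx\right).
\]
% For a constant failure rate, lambda(t) = lambda, this reduces to
\[
R(t) = e^{-\lambda t}.
\]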

QNo-4: Explain how the reliability growth curve differs in software and hardware
reliability.
Reliability, applied either to software or to hardware, refers to the quality of the
product and strives systematically to reduce or eliminate system failures. The major
difference in the reliability analysis of the two systems lies in their failure mechanisms.
Failures in hardware are caused primarily by material deterioration, aging, random
failures, misuse, changes in environmental factors, design errors, etc., while software
failures are caused by incorrect logic, misinterpretation of requirements and
specifications, design errors, inadequate and insufficient testing, incorrect input,
unexpected usage, etc. Software faults are more difficult to visualize, classify, detect,
and correct, because no standard techniques are available for the purpose; doing so
requires a thorough understanding of the system, its uncertain operational environment,
and the testing and debugging process.

Another important difference in the reliability analysis of the two systems lies in their
failure trend. The failure curve of hardware systems is typically a bathtub curve with
three phases: burn-in, useful life, and wear-out, as shown in Fig. 1. Software, on the
other hand, does not have stable reliability in the useful-life phase; instead it exhibits
reliability growth (failure decay) during testing and operation, since software faults are
detected and removed during these phases, as shown in Fig. 2. The last phase of software
life also differs from hardware in that software does not wear out but becomes obsolete
due to major improvements in software functions and technology changes.
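The contrast can be made concrete with a small numerical sketch (all parameter values
hypothetical): hardware in its useful-life phase is commonly modeled with a constant
failure rate, while software under test shows a failure intensity that decays as faults are
found and removed, here written in the exponential form that reappears in QNo-6.

# Sketch: constant hardware failure rate (useful-life phase of the bathtub
# curve) versus a decaying software failure intensity during testing.
# Parameter values are hypothetical.
import math

hw_rate = 0.02        # constant hardware failure rate (failures per hour)
a, b = 100.0, 0.05    # software: expected total faults, fault detection rate

for t in [0, 10, 20, 40, 80]:
    sw_intensity = a * b * math.exp(-b * t)  # exponential-SRGM failure intensity
    print(f"t = {t:2d} h   hardware: {hw_rate:.3f}   software: {sw_intensity:.3f}")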

Hardware reliability theory relies on the analysis of stationary processes, because
only physical defects are considered. However, with the increase in software system
size and complexity, reliability theory based on stationary processes becomes
unsuitable for addressing non-stationary phenomena such as reliability growth. This
makes software reliability a challenging problem, which requires several intelligent
methods of attack [1], and it forms the basis for the software engineering approach
of constructing models that represent the system failure and fault-removal process
in terms of model parameters. The difference in the effect of fault removal requires
software reliability to be defined differently from hardware reliability.

On the removal of a fault, a hardware component returns to its previous level of
reliability, subject to the reliability of the repair activity. A software repair, by
contrast, implies either reliability improvement (in the case of perfect repair) or
decline (in the case of imperfect repair). The techniques of hardware reliability aim
to maintain the hardware's standard reliability, with improvements, if required, made
in the design standards. SRE, on the other hand, aims at continuous system reliability
improvement.

QNo-5: Even if software is tested for an infinite time, it cannot be 100% reliable.
Software testing is the process of executing a program or system with the intent of
finding errors; alternatively, it involves any activity aimed at evaluating an attribute
or capability of a program or system and determining that it meets its required
results. Software is not unlike other physical processes in which inputs are received
and outputs are produced. Where software differs is in the manner in which it fails.
Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast,
software can fail in many bizarre ways, and detecting all of the different failure
modes of software is generally infeasible.

Software reliability has important relations with many aspects of software, including
its structure and the amount of testing it has been subjected to. Based on an
operational profile (an estimate of the relative frequency of use of the various inputs
to the program), testing can serve as a statistical sampling method to gain failure data
for reliability estimation.
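A minimal sketch of this idea, with a hypothetical operational profile and a stand-in for
the program under test, samples inputs at their expected field frequencies and estimates
per-run reliability as the observed success rate:

# Sketch: statistical testing driven by an operational profile. Inputs are
# sampled with their expected field frequencies, and reliability is estimated
# as the fraction of sampled runs that succeed. The profile and the
# system_under_test stub are hypothetical stand-ins.
import random

profile = {"op_a": 0.6, "op_b": 0.3, "op_c": 0.1}           # relative frequency of use
failure_prob = {"op_a": 0.01, "op_b": 0.05, "op_c": 0.20}   # stub behavior

def system_under_test(op):
    # Stand-in for the real program: returns True when the run succeeds.
    return random.random() > failure_prob[op]

random.seed(42)
runs, failures = 10_000, 0
ops, weights = list(profile), list(profile.values())
for _ in range(runs):
    if not system_under_test(random.choices(ops, weights=weights)[0]):
        failures += 1

print(f"estimated reliability per run: {1 - failures / runs:.4f}")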
Software testing is not mature. It remains an art, because we still cannot make it a
science. We are still using the same testing techniques invented 20-30 years ago, some
of which are crafted methods or heuristics rather than good engineering methods.
Software testing can be costly, but not testing software is even more expensive,
especially in situations where human lives are at stake. Solving the software-testing
problem is no easier than solving the Turing halting problem: we can never be sure
that a piece of software is correct, we can never be sure that the specifications are
correct, no verification system can verify every correct program, and we can never be
certain that a verification system is itself correct.

QNo-6: Solve the expression m'(t) = b(a - m(t)) for the GO model.

Goel-Okumoto Model
The model [4] is based on the following assumptions:
1. The failure observation phenomenon is modeled by an NHPP.
2. Failures observed during execution are caused by remaining faults in the software.
3. Each time a failure is observed, an immediate effort is made to find its cause, and the
isolated fault is removed prior to future test occasions.
4. All faults in the software are mutually independent.
5. The debugging process is perfect, and no new fault is introduced during debugging.
The above assumptions can be described mathematically by the following differential
equation:

m'(t) = b(a - m(t)), with m(0) = 0,

where a is the expected total number of faults in the software and b is the fault
detection rate.
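Solving this equation by separation of variables gives the well-known mean value function
of the model:

% Solve m'(t) = b(a - m(t)) with m(0) = 0 by separation of variables.
\[
\frac{dm}{a - m} = b\,dt
\quad\Longrightarrow\quad
-\ln\bigl(a - m(t)\bigr) = bt + C.
\]
% The initial condition m(0) = 0 gives C = -ln a, so
\[
\ln\frac{a}{a - m(t)} = bt
\quad\Longrightarrow\quad
m(t) = a\left(1 - e^{-bt}\right),
\]
% and the corresponding failure intensity is
\[
\lambda(t) = m'(t) = ab\,e^{-bt}.
\]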
The model is known as the exponential NHPP model, as it describes an exponential failure
curve. The GO model has been applied to a variety of testing environments in practice, and
in a number of situations it provides good estimation and prediction of reliability; hence
it can be considered a useful reliability model. The two main aspects of a good model are
that it must be stable during the test period and remain stable until the end of the test
phase for any particular test environment, and that it must provide a reasonably accurate
prediction of field reliability. Following the general assumptions of the GO model, other
exponential SRGMs were proposed by Ohba [14] and by Yamada and Osaki [15]. Ohba assumed
that the software consists of a number of independent modules, whereas Yamada and Osaki
assumed that there are two types of errors in the software. Both models describe the
failure phenomenon for each module/error type by a GO model with different parameters, and
the mean value function is the sum of the mean value functions for each module/error type.
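In practice the parameters a and b are estimated from observed failure data (maximum
likelihood is the usual choice in the literature). A minimal sketch, fitting the mean value
function to hypothetical weekly cumulative failure counts by nonlinear least squares, might
look like:

# Sketch: fit the Goel-Okumoto mean value function m(t) = a*(1 - exp(-b*t))
# to cumulative failure counts by nonlinear least squares.
# The weekly failure data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative failures by time t.
    return a * (1.0 - np.exp(-b * t))

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([12, 22, 30, 36, 41, 44, 47, 49], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=(60.0, 0.2))
print(f"a = {a_hat:.1f} expected total faults, b = {b_hat:.3f} per week")
print(f"predicted cumulative failures by week 12: {mean_value(12.0, a_hat, b_hat):.1f}")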
