
Assignments

Software Engineering (MC 0019)

Amitava Mudi,
MCA – 4th Semester, Roll Number: 510545511, LC Code 1608

Assignments MCA Semester 4 1


Table of Contents
1 Assignment Set 1
1.1 Describe the following with respect to Software Engineering
1.2 Describe the need for SRS. Explain the Requirement Analysis and Specification Process
1.3 Discuss the following with respect to coding
1.4 Describe the theory of Basis Path Testing
1.5 Explain the theory of Metrics Reliability Estimation
1.6 What is Musa’s basic model? Explain
2 Assignments Set 2
2.1 Explain the following with respect to Project Management
2.2 Discuss about code verification techniques
2.3 Explain the theory of detailed design metrics
2.4 Explain Integration and Validation testing techniques
2.5 What is unit testing? Explain
2.6 Explain Metrics for source code, testing, and maintenance

1 Assignment Set 1

1.1 Describe the following with respect to Software Engineering:
• Software crisis
• Software engineering problem

Software crisis
The term is used to describe the impact of rapid increases in computer
power and the complexity of the problems which could be tackled. In
essence, it refers to the difficulty of writing correct, understandable, and
verifiable computer programs. The roots of the software crisis are
complexity, expectations, and change.
The causes of the software crisis were linked to the overall complexity of
hardware and the software development process. The crisis manifested
itself in several ways:

• Projects running over budget.
• Projects running over time.
• Software was very inefficient.
• Software was of low quality.
• Software often did not meet requirements.
• Projects were unmanageable and code difficult to maintain.
• Software was never delivered.

Various processes and methodologies have been developed over the last
few decades to "tame" the software crisis, with varying degrees of
success. However, it is widely agreed that there is no "silver bullet", that
is, no single approach which will prevent project overruns and failures in
all cases. In general, software projects which are large, complicated,
poorly specified, and involve unfamiliar aspects are still particularly
vulnerable to large, unanticipated problems.

Software Engineering problem


Large, complex, industrial-strength software systems with a myriad of
conflicting requirements have problems related to design,
implementation, maintenance and extensibility. Some important problems
are:

Software is expensive – Software is costly because developing a
software product is labour intensive. Special expertise is required to
develop software, hence the cost of labour is quite high. Productivity is
another important aspect that affects cost.
Late, costly and unreliable – The software industry has gained a
reputation for not delivering software within schedule and budget, and for
producing poor-quality products. Delays cause cost overruns, and time
pressure causes poor quality. Projects are sometimes scrapped when the
budget and schedule go out of control.
Unreliability – Software often does not do what it is supposed to do, and
sometimes does what it is not supposed to do. Unlike in other industries,
software problems start appearing only after the system becomes
operational. Unreliability causes inconvenience and budget overruns.
Problem of change and rework – Once the software is delivered and
becomes operational, maintenance is required to fix residual errors
remaining in the system as and when they are discovered. Changed user
requirements need to be accommodated as well.

Principles of Software Engineering


• Separation of Concerns - Separation of concerns is recognition of the
need for human beings to work within a limited context.
• Modularity - The principle of modularity is a specialization of the
principle of separation of concerns. Following the principle of
modularity implies separating software into components according to
functionality and responsibility.
• Abstraction - The principle of abstraction is another specialization of
the principle of separation of concerns. Following the principle of
abstraction implies separating the behavior of software components
from their implementation. It requires learning to look at software and
software components from two points of view: what it does, and how it
does it.
• Anticipation of Change - Computer software is an automated solution to a
problem. The problem arises in some context or domain that is familiar to the
users of the software. The domain defines the types of data that the users need
to work with and the relationships between those types of data. If the context
or any other external factor changes, the software must be flexible
enough to adapt to the change. Anticipation of change deals with such
problems.
• Generality - The principle of generality is closely related to the
principle of anticipation of change. It is important in designing
software that is free from unnatural restrictions and limitations. One
excellent example of an unnatural restriction or limitation is the use of
two-digit year numbers, which led to the "year 2000" problem:
software that garbled record keeping at the turn of the century.
Although the two-digit limitation appeared reasonable at the time,
good software frequently survives beyond its expected lifetime.
• Incremental Development - In this process, you build the software in
small increments; for example, adding one use case at a time. An
incremental software development process simplifies verification. If
you develop software by adding small increments of functionality then,
for verification, you only need to deal with the added portion. If there
are any errors detected then they are already partly isolated so they
are much easier to correct.
• Consistency - The principle of consistency is a recognition of the fact
that it is easier to do things in a familiar context. For example, coding
style is a consistent manner of laying out code text. This serves two
purposes. First, it makes reading the code easier. Second, it allows
programmers to automate part of the skills required in code entry,
freeing the programmer's mind to deal with more important issues.

1.2 Describe the need for SRS. Explain the Requirement Analysis and Specification Process.
A Software Requirements Specification (SRS) is a complete description of
the behavior of the system to be developed. It includes a set of use cases
that describe all the interactions the users will have with the software. Use
cases are also known as functional requirements. In addition to use cases,
the SRS also contains non-functional (or supplementary) requirements.
Non-functional requirements are requirements which impose constraints
on the design or implementation (such as performance engineering
requirements, quality standards, or design constraints).

Requirements analysis
Requirements analysis, in systems engineering and software engineering,
encompasses the tasks that go into determining the needs or conditions
to be met by a new or altered product, taking account of the possibly
conflicting requirements of the various stakeholders, such as beneficiaries
or users.

Requirements analysis is critical to the success of a development project.


Requirements must be actionable, measurable, testable, related to
identified business needs or opportunities, and defined to a level of detail
sufficient for system design. Requirements can be functional and non-
functional.

Specification process
Requirements specification is a complete description of the behaviour of
the system to be developed. It includes a set of use cases that describe all
of the interactions that the users will have with the software. Use cases
are also known as functional requirements. In addition to use cases, the

specification also contains non-functional (or supplementary)
requirements. Non-functional requirements are requirements which
impose constraints on the design or implementation (such as performance
requirements, quality standards, or design constraints).
The specification process requires analysis and documentation of all
requirements as necessary for the project or the customer. Specification
types are chosen based on the needs of the software system.
Types of Requirements
• Customer Requirements
• Functional Requirements
• Non-functional Requirements
• Performance Requirements
• Design Requirements
• Derived Requirements
• Allocated Requirements

1.3 Discuss the following with respect to coding:


• Code Reading - It deals with specific techniques for reading code
written by others and outlines common programming concepts. It
includes concepts related to code that are likely to appear before a
software developer's eyes, including programming constructs, data
types, data structures, control flow, project organization, coding
standards, documentation, and architectures.
• Code Review - Code review is systematic examination (often as peer
review) of computer source code intended to find and fix mistakes
overlooked in the initial development phase, improving both the
overall quality of software and the developers' skills. Code reviews can
often find and remove common vulnerabilities such as format string
exploits, race conditions, memory leaks and buffer overflows, thereby
improving software security. Online software repositories based on
Subversion with Trac, Mercurial, Git or others allow groups of
individuals to collaboratively review code. Additionally, specific tools
for collaborative code review can facilitate the code review process.

1.4 Describe the theory of Basis Path Testing.


Basis path testing is a white-box technique. It allows the design and
definition of a basis set of execution paths. The test cases created from
the basis set allow the program to be executed in such a way as to
examine each possible path through the program by executing each

statement at least once.
To be able to determine the different program paths, the engineer needs a
representation of the logical flow of control. The control structure can be
illustrated by a flow graph. A flow graph can be used to represent any
procedural design.
Next, a metric called cyclomatic complexity can be used to determine the
number of independent paths; it gives the number of test cases that have
to be designed. This ensures coverage of all program statements.
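As a rough illustration of the metric just described, the sketch below computes cyclomatic complexity from a flow graph using V(G) = E - N + 2, where E is the number of edges and N the number of nodes. The flow graph and its node numbering are hypothetical, standing in for a function with one if/else decision and one loop:

```python
# Minimal sketch: cyclomatic complexity of a flow graph,
# using V(G) = E - N + 2 (E = edges, N = nodes).
def cyclomatic_complexity(edges, num_nodes):
    """edges: list of (from_node, to_node) pairs in the flow graph."""
    return len(edges) - num_nodes + 2

# Hypothetical flow graph for a function with one if/else (node 2)
# and one loop (node 5); nodes are numbered 1..6.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
print(cyclomatic_complexity(edges, num_nodes=6))  # prints 3
```

The result, 3, is the number of independent paths in the basis set, and hence the number of test cases needed to execute every statement at least once.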

1.5 Explain the theory of Metrics Reliability Estimation.


After testing is done and the software is delivered, development is
considered over. It is clearly desirable to know, in quantifiable terms, the
reliability of the software being delivered. As testing directly impacts
reliability, and most reliability models use data obtained during testing to
predict reliability, reliability estimation is the main product metric of
interest at the end of the testing phase. We will focus our attention on this
metric in this section.
Before we discuss the reliability modeling and estimation let us briefly
discuss a few main metrics that can be used for process evaluation at the
end of the project.
Once the project is finished, one can look at the overall productivity
achieved by the programmers during the project. As discussed earlier,
productivity can be measured as lines of code (or function points) per
person-month.
Another process metric of interest is defect removal efficiency. The defect
removal efficiency of a defect removing process is defined as the
percentage reduction of the defects that are present before the start of
the process [104]. The cumulative defect removal efficiency of a series of
defect removal processes is the percentage of defects that have been
removed by this series. The defect removal efficiency cannot be
determined exactly as the defects remaining in the system are not known.
However, at the end of testing, as most defects have been uncovered,
removal efficiencies can be estimated.
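The estimate described above can be sketched as follows. The counts are hypothetical: defects found during a removal activity (e.g. testing) are taken as removed, and defects surfacing afterwards approximate those it missed:

```python
def defect_removal_efficiency(found_during, found_after):
    """Fraction of the defects present at the start of a defect-removal
    activity that the activity actually removed (estimated)."""
    total = found_during + found_after
    return found_during / total if total else 0.0

# Testing found 180 defects; 20 more surfaced later, so testing
# removed an estimated 90% of the defects it faced.
print(defect_removal_efficiency(180, 20))  # prints 0.9
```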

1.6 What is Musa’s basic model? Explain.


Musa has produced a classification scheme for software reliability models
which considers the following traits of each model:
• Time unit: Whether the natural time is measured as calendar or
execution time.
• Category: Whether the model allows for a finite or infinite number of
failures in an infinite time.
• Type: The distribution of the failures over the time interval.

• Class: Functional form of the failure intensity; applies to finite
failure category models only.
• Family: Functional form of the failure intensity; applies to
infinite failure models only.
A number of models have been proposed covering a wide range of
scenarios. In terms of practical use, however, Musa argues that only those
using execution time as the time domain are of use. He also argues that a
model must not be overly complicated to apply, unlike the Bayesian model
of Littlewood-Verrall. Empirical evidence from industrial applications has
demonstrated that the Musa basic model and the non-homogeneous
Poisson process model meet these criteria. [1] Both of these models are
exponential failure class models.
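To make the basic model concrete, a common formulation (assumed here, not taken from the text above) expresses failure intensity as a linearly decreasing function of observed failures, λ(μ) = λ0·(1 − μ/ν0), and expected cumulative failures over execution time τ as μ(τ) = ν0·(1 − e^(−λ0·τ/ν0)). The parameter values below are purely illustrative:

```python
import math

# Sketch of the Musa basic execution-time model (illustrative parameters).
# lam0: initial failure intensity (failures per unit of execution time)
# nu0:  total expected failures over infinite execution time
def failure_intensity(mu, lam0, nu0):
    """Failure intensity after mu failures have been experienced."""
    return lam0 * (1.0 - mu / nu0)

def expected_failures(tau, lam0, nu0):
    """Expected cumulative failures after tau units of execution time."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

lam0, nu0 = 10.0, 100.0                    # assumed, for illustration
print(failure_intensity(50, lam0, nu0))    # intensity halves at 50 failures
print(expected_failures(10, lam0, nu0))
```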

2 Assignments Set 2

2.1 Explain the following with respect to Project Management:


Role of Software Metrics
A software metric is a measure of some property of a piece of software or
its specifications.
Since quantitative methods have proved so powerful in the other sciences,
computer science practitioners and theoreticians have worked hard to
bring similar approaches to software development. It is quite often said
that "you can't control what you can't measure."
Common software measurements include:
1. Code coverage
2. Cohesion
3. Comment density
4. Coupling
5. Cyclomatic complexity
6. Function point analysis
7. Instruction path length
8. Program load time
9. Number of classes and interfaces
10. Number of lines of customer requirements
11. Program size
12. Robert Cecil Martin’s software package metrics
13. Bugs per line of code
14. Source lines of code
15. Execution time
Software metrics are useful in measurement of different parameters. The
statistics can be used to improve quality and reduce development time
through effective resource management.

Size Oriented Metrics


These are source code metrics. Software science assigns quantitative laws
to the development of computer software, using a set of primitive
measures that may be derived after code is generated or estimated once
design is complete. These are as follows:
n1 = Number of distinct operators that appear in a program
n2 = Number of distinct operands that appear in a program
N1 = Total number of operator occurrences
N2 = Total number of operand occurrences
Program volume may be defined by
V = N log2 (n1 + n2), where N = N1 + N2 is the program length.
The volume ratio (program level) L is defined by
L = (2/n1) × (n2/N2)
The primitive measures are used to develop expressions for the overall
program length, potential minimum volume for an algorithm, the actual
volume, the program level, the language level, and other features such as
development effort, development time and the projected number of faults
in the software.
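The measures above can be sketched in a few lines. The operator and operand counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Sketch of the Halstead software-science measures defined above.
def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    N = N1 + N2                     # program length
    V = N * math.log2(n1 + n2)      # program volume
    L = (2.0 / n1) * (n2 / N2)      # program level (volume ratio)
    return V, L

# Hypothetical counts: 10 distinct operators, 15 distinct operands,
# 40 operator occurrences, 30 operand occurrences.
V, L = halstead(n1=10, n2=15, N1=40, N2=30)
print(round(V, 1), round(L, 3))
```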

Function Oriented Metrics


These are related to metrics for testing. Function-based metrics can be used
as a predictor of overall testing effort. Various project-level characteristics
for past projects can be collected and correlated with the number of FP
produced by the team. The team can then project estimated values of
those characteristics for the current project.

2.2 Discuss about code verification techniques.


Code verification techniques are as follows:

Code reading
Code reading involves careful reading of the code by the programmer to
detect any discrepancies between the design specifications and the actual
implementation. It involves determining the abstraction of a module and
then comparing it with its specifications. The process is the reverse of
design: in design we start from an abstraction and move toward its
concrete implementation, while in code reading we recover the
abstraction from the implementation.

Static analysis
Static code analysis is the analysis of computer software that is performed
without actually executing programs built from that software (analysis
performed on executing programs is known as dynamic analysis). In most
cases the analysis is performed on some version of the source code and in
the other cases some form of the object code. The term is usually applied
to the analysis performed by an automated tool, with human analysis
being called program understanding, program comprehension or code
review.

Symbolic Execution
In computer science, the analysis of programs by tracking symbolic rather
than actual values is called symbolic execution (sometimes symbolic
evaluation). The field of symbolic simulation applies the same concept to
hardware. Symbolic computation applies the concept to the analysis of
mathematical expressions.
Symbolic execution is used to reason about all the inputs that take the
same path through a program.
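The idea can be sketched with a deliberately tiny toy (real engines such as KLEE use constraint solvers; everything below is illustrative): instead of running a function on concrete values, we enumerate its branches, recording the path condition and the symbolic result for each path.

```python
# Toy sketch of symbolic execution for an abs-like function: track
# symbolic expressions (strings here) rather than concrete values,
# and record the path condition leading down each branch.
def symbolic_abs(x):
    """Return (path_condition, symbolic_result) pairs, one per path."""
    paths = []
    paths.append((f"{x} < 0", f"-({x})"))  # then-branch: x is negative
    paths.append((f"{x} >= 0", x))         # else-branch: x is non-negative
    return paths

for condition, result in symbolic_abs("x"):
    print(f"path condition: {condition!r} -> result: {result!r}")
```

Each pair describes all the concrete inputs that take the same path, which is exactly what symbolic execution lets us reason about.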

Proving correctness
In proof of correctness, the aim is to formally prove that the program is
correct with respect to its specification. This is in contrast to testing,
where correctness is never really established but only implied by the
absence of detected errors.

Code inspection or review


Code review is systematic examination (often as peer review) of computer
source code intended to find and fix mistakes overlooked in the initial
development phase, improving both the overall quality of software and the
developers' skills.

Unit Testing
In computer programming, unit testing is a software verification and
validation method in which a programmer tests if individual units of source
code are fit for use. A unit is the smallest testable part of an application. In
procedural programming a unit may be an individual function or
procedure.

2.3 Explain the theory of detailed design metrics.


Design principles are:
• Correctness – To ensure behavior is as expected
• Robustness – Ability to handle anomalous situations
• Flexibility – Adaptability to changes in requirements
• Reusability – Develop once, use multiple times
• Efficiency – How well it uses resources
Detailed design metrics aim to measure all of the above.

2.4 Explain Integration and Validation testing techniques.
Integration testing
Integration testing (sometimes called integration and testing, abbreviated
"I&T") is the activity of software testing in which individual software
modules are combined and tested as a group. It occurs after unit testing
and before system testing. Integration testing takes as its input modules
that have been unit tested, groups them in larger aggregates, applies
tests defined in an integration test plan to those aggregates, and delivers
as its output the integrated system ready for system testing.

The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design
items", i.e. assemblages (or groups of units), are exercised through their
interfaces using Black box testing, success and error cases being
simulated via appropriate parameter and data inputs. Simulated usage of
shared data areas and inter-process communication is tested and
individual subsystems are exercised through their input interface. Test
cases are constructed to test that all components within assemblages
interact correctly, for example across procedure calls or process
activations, and this is done after testing individual modules, i.e. unit
testing. The overall idea is a "building block" approach, in which verified
assemblages are added to a verified base which is then used to support
the integration testing of further assemblages.

Some different types of integration testing are big bang, top-down, and
bottom-up.

Validation testing
In the computer architecture and hardware world, validation refers to the
process of verifying that the operations of a piece of hardware or
architecture meet the specification. In some cases, validation not only
refers to finding bugs in the hardware but also proving absence of certain
critical bugs which may not have workarounds and may lead to project
cancellation or product recall.
Validation refers to the process of data validation, ensuring that data
inserted into an application satisfies pre-determined formats or complies
with stated length and character requirements and other defined input
criteria. It may also ensure that only data that is either true or real can be
entered into a database.

2.5 What is unit testing? Explain.
In computer programming, unit testing is a software verification and
validation method in which a programmer tests if individual units of source
code are fit for use. A unit is the smallest testable part of an application. In
procedural programming a unit may be an individual function or
procedure.
Ideally, each test case is independent from the others: substitutes like
method stubs, mock objects, fakes and test harnesses can be used to
assist testing a module in isolation. Unit tests are typically written and run
by software developers to ensure that code meets its design and behaves
as intended. Its implementation can vary from being very manual (pencil
and paper) to being formalized as part of build automation.
Unit testing is a fundamental part of quality modern software
development.
The goal of unit testing is to isolate each part of the program and show
that the individual parts are correct. A unit test provides a strict, written
contract that the piece of code must satisfy. As a result, it affords several
benefits. Unit tests find problems early in the development cycle.
Unit testing allows the programmer to refactor code at a later date and
make sure the module still works correctly (i.e. regression testing). The
procedure is to write test cases for all functions and methods so that
whenever a change causes a fault, it can be quickly identified and fixed.
Readily-available unit tests make it easy for the programmer to check
whether a piece of code is still working properly.
In continuous unit testing environments, through the inherent practice of
sustained maintenance, unit tests will continue to accurately reflect the
intended use of the executable and code in the face of any change.
Depending upon established development practices and unit test
coverage, up-to-the-second accuracy can be maintained.

2.6 Explain Metrics for source code, testing, and maintenance.


Metrics for source code
Software science assigns quantitative laws to the development of
computer software, using a set of primitive measures that may be derived
after code is generated or estimated once design is complete. These follow
n1 = Number of distinct operators that appear in a program
n2 = Number of distinct operands that appear in a program
N1 = Total number of operator occurrences
N2 = Total number of operand occurrences
The length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
And program volume may be defined by
V = N log2 (n1+n2)
The volume ratio (program level) L is defined by
L = (2/n1) × (n2/N2)
The primitive measures are used to develop expressions for the overall
program length, potential minimum volume for an algorithm, the actual
volume, the program level, the language level, and other features such as
development effort, development time and the projected number of faults
in the software.
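The estimated-length formula above can be sketched directly; the counts used below (10 distinct operators, 15 distinct operands) are hypothetical:

```python
import math

# Sketch of Halstead's estimated program length,
# N = n1*log2(n1) + n2*log2(n2), using the counts defined above.
def estimated_length(n1, n2):
    """n1: distinct operators, n2: distinct operands."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

# Hypothetical counts for illustration:
print(round(estimated_length(10, 15), 1))  # prints 91.8
```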

Metrics for testing


In general testers must rely on analysis, design, and code metrics to guide
them in the design and execution of test cases.
Function-based metrics can be used as a predictor of overall testing
effort. Various project-level characteristics for past projects can be
collected and correlated with the number of FP produced by the team.
The bang metric can provide an indication of the number of test cases
required by examining the primitive test measures.
Architectural design metrics provide information on the ease or difficulty
associated with integration testing and the need for specialized testing
software. Cyclomatic complexity lies at the core of basis path testing. In
addition, cyclomatic complexity can be used to target modules as
candidates for extensive unit testing.

Metrics for maintenance


The software maturity index (SMI) provides an indication of the stability of
a software product, based on changes that occur for each release of the
product. The following information is determined:
MT = The number of modules in the current release
Fc = The number of modules in the current release that have been
changed
Fa = The number of modules in the current release that have been added
Fd = The number of modules from the preceding release that have been
deleted in the current release.
The software maturity index is computed in the following manner:
SMI = [MT – (Fa + Fc + Fd)] / MT
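The SMI formula above is a one-liner; the release figures used below are hypothetical:

```python
# Sketch of the Software Maturity Index as defined above.
def smi(mt, fa, fc, fd):
    """mt: modules in current release; fa/fc/fd: added/changed/deleted."""
    return (mt - (fa + fc + fd)) / mt

# A hypothetical 120-module release with 6 added, 9 changed,
# and 3 deleted modules:
print(smi(mt=120, fa=6, fc=9, fd=3))  # prints 0.85
```

As SMI approaches 1.0, the product is approaching stability, since fewer modules change between releases.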
