
CONTENTS

CHAPTER TITLE


1. INTRODUCTION
1.1 Software Engineering Basics
1.2 Object Oriented Basics
2. LITERATURE REVIEW
2.1 Software Maintenance
2.2 Software Engineering Metrics
2.3 Characteristics of Object Oriented Metrics
2.4 Related Work
2.4.1 Testing Methods
2.4.2 OOPS-Specific Software Faults
3. PROBLEM DESCRIPTION
4. SYSTEM DESIGN
4.1 Dependent Variable
4.2 Independent Variable
4.2.1 Design Complexity
4.2.2 Maintenance Task
4.2.3 Summary of Research Variables
5. SYSTEM IMPLEMENTATION & RESULTS
5.1 System Implementation
5.2 Results
6. SCOPE FOR FUTURE DEVELOPMENT
7. CONCLUSION
APPENDIX
A. SAMPLE SCREEN
B. SOURCE CODE
BIBLIOGRAPHY

ABSTRACT

The Object-Oriented paradigm has become increasingly popular in recent years. Researchers agree that, although maintenance may turn out to be easier for Object-Oriented systems, it is unlikely that the maintenance burden will completely disappear. One approach to controlling software maintenance costs is the utilization of software metrics during the development phase, to help identify potential problem areas. Many new metrics have been proposed for Object-Oriented systems, but only a few of them have been validated.

The purpose of this research is to empirically explore the validation of three existing Object-Oriented design complexity metrics and, specifically, to assess their ability to predict maintenance time. This research reports the results of validating three metrics: Interaction Level (IL), Interface Size (IS), and Operation Argument Complexity (OAC).

A controlled experiment was conducted to investigate the effect of design complexity (as measured by the above metrics) on maintenance time. Each of the three metrics by itself was found to be useful in the experiment in predicting maintenance performance. A Java-based system was developed to estimate the maintenance time using the design complexity metrics.
1. INTRODUCTION

1.1 Software Engineering Basics

Software engineering is the technological and managerial discipline concerned with the systematic production and maintenance of software products that are developed and modified on time and within cost estimates [5, 6].

The primary goals of software engineering are to improve the quality of software products and to increase the productivity and job satisfaction of software engineers. Software engineering is concerned with the development and maintenance of technological products using problem-solving techniques common to all engineering disciplines.

Engineering problem-solving techniques provide the basis for project planning, project management, systematic analysis, methodical design, careful fabrication, extensive validation, and ongoing maintenance activities. Appropriate notations, tools, and techniques are applied in each of these areas.

A fundamental principle of software engineering is to design software products that minimize the intellectual distance between problem and solution; the variety of approaches to software development, however, is limited only by the creativity and ingenuity of the programmer.

The use of interfaces between software modules also distinguishes software engineering from the traditional engineering disciplines. A fundamental principle for managing complexity is to decompose a large system into smaller, more manageable subunits with well-defined interfaces. This approach of divide and conquer is routinely used in the engineering disciplines, in architecture, and in other disciplines that involve analysis and synthesis of complex artifacts. In software engineering, the units of decomposition are called modules.

A software module has both control and data interfaces. Control interfaces are established by the calling relationships among modules, and data interfaces are manifest in the parameters passed between modules as well as in the global data items shared among modules.

Software quality is a primary concern of software engineers. The quality attributes of importance for any particular software product depend, of course, on the nature of the product. In some instances, transportability of the software product between machines may be an attribute of prime importance, while efficient utilization of memory space may be paramount in other cases. The most important quality attribute a software product can possess is usefulness.

1.2 Object Oriented Programming Basics

Definition

Object oriented programming is an approach that provides a way of modularizing programs by creating partitioned memory areas for both data and functions that can be used as templates for creating copies of such modules on demand. An object is considered to be a partitioned area of computer memory that stores data and a set of operations that can access that data. Since the memory partitions are independent, objects can be used in a variety of different programs without modification. [1]

Object-Oriented programming extends the design model into the executable domain. An OO programming language is used to translate the classes, attributes, operations, and messages into a form that can be executed by a machine.

Class
An object class describes a group of objects with similar properties
(attributes), common behavior (operations), common relationships to other
objects, and common semantics. The abbreviation class is often used instead
of object class. Objects in a class have the same attributes and behavior
patterns. [1]

Super Class

Generalization is the relationship between a class and one or more refined versions of it. The class being refined is called the super class and each refined version is called a sub class. A super class is thus a class that has one or more members which are themselves (more specialized) classes.

Sub class

A subclass is a class which has a link to a more general class; the class that does the inheriting is called the subclass. Therefore, a subclass is a specialized version of a super class. It inherits all of the instance variables and methods defined by the super class and adds its own, unique elements.
Dynamic Binding (method resolution)

One aspect of object-oriented languages that seems inefficient is the use of method resolution at run time to implement polymorphic operations. Method resolution is the process of matching an operation on an object to a specific method. This would seem to require a search up the inheritance tree at run time to find the class that implements the operation for a given object. Most languages, however, optimize the look-up mechanism to make it more efficient. As long as the class structure remains unchanged during program execution, the correct method for every operation can be stored locally in the subclass. With this technique, known as method caching, dynamic binding can be reduced to a single hash table look-up and performed in constant time regardless of the depth of the inheritance tree or the number of methods in the class. [1]
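The following minimal Java sketch illustrates these ideas; the Shape and Circle names and the area computation are purely illustrative. The call s.area() is bound at run time to the subclass method, which is the dynamic binding described above.

class Shape {
    double area() {
        return 0.0;                       // default behaviour defined in the super class
    }
}

class Circle extends Shape {              // Circle is a sub class of Shape
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {                       // overrides the inherited operation
        return Math.PI * radius * radius;
    }
}

public class DynamicBindingDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);        // static type Shape, dynamic type Circle
        System.out.println(s.area());     // resolved at run time to Circle.area()
    }
}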

Features of Object Oriented Programming

♦ Emphasis is on data rather than procedure.
♦ Data structures are designed such that they characterize the objects.
♦ Functions that operate on the data of an object are tied together in the data structure.
♦ Data is hidden and cannot be accessed by external functions (a minimal sketch follows this list).
♦ Objects may communicate with each other through functions.
♦ New data and functions can be easily added whenever necessary.
♦ Follows a bottom-up approach in program design. [1]
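As a minimal illustration of data hiding in Java (the Account name and its methods are hypothetical, not part of the system described in this work), the field below is private and can be reached only through the methods of the object:

public class Account {
    private double balance;               // hidden data: not visible to external functions

    public void deposit(double amount) {  // the only way to change the hidden data
        if (amount > 0) {
            balance += amount;
        }
    }

    public double getBalance() {          // controlled, read-only access
        return balance;
    }
}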
Benefits of OOP

Object orientation contributes to the solution of many problems associated with the development and quality of software products. The technology promises greater programmer productivity, better quality of software, and lower maintenance cost.
♦ Through inheritance, we can eliminate redundant code and extend the use of existing classes.
♦ We can build programs from standard working modules that communicate with one another, rather than having to start writing the code from scratch. This leads to a saving of development time and higher productivity.
♦ The principle of data hiding helps the programmer to build secure programs that cannot be invaded by code in other parts of the program.
♦ It is possible to have multiple instances of an object co-exist without any interference.
♦ It is possible to map objects in the problem domain to objects in the program.
♦ It is easy to partition the work in a project based on objects.
♦ The data-centered design approach enables us to capture more details of a model in implementable form.
♦ Object oriented systems can be easily upgraded from small to large systems.
♦ Message passing techniques for communication between objects make the interface descriptions with external systems much simpler.
♦ Software complexity can be easily managed.

It is possible to incorporate all these features in an object oriented system; their importance depends on the type of the project and the preference of the programmer.

Object libraries must be available for reuse. The technology is still developing and current products may be superseded quickly. Strict controls and protocols need to be developed if reuse is not to be compromised. Developing software that is easy to use makes it harder to build. It is hoped that object oriented programming tools will help manage this problem. [1]
2. LITERATURE REVIEW

2.1 SOFTWARE MAINTENANCE

The term “software maintenance” is used to describe the software engineering activities that occur following delivery of a software product to the customer. The maintenance phase of the software life cycle is the time period in which a software product performs useful work. Typically, the development cycle for a software product spans 1 or 2 years, while the maintenance phase spans 5 to 10 years.

Maintenance activities involve making enhancements to software products, adapting products to new environments, and correcting problems. Software product enhancement may involve providing new functional capabilities, improving user displays and modes of interaction, upgrading external documents and internal documentation, or upgrading the performance characteristics of a system. Adaptation of software to a new environment may involve moving the software to a different machine, or, for instance, modifying the software to accommodate a new telecommunications protocol or additional disk drives. Problem correction involves modification and revalidation of software to correct errors. Some errors require immediate attention, some can be corrected on a scheduled, periodic basis, and others are known but never corrected.
It is well established that maintenance activities consume a large portion of the total life-cycle budget (LIE80). It is not uncommon for software maintenance to account for 70 percent of total software life-cycle costs (with development accounting for 30 percent).

As a general rule of thumb, the distribution of effort for software maintenance is 60 percent of the maintenance budget for enhancement and 20 percent each for adaptation and correction.

If maintenance consumes 70 percent of the total life-cycle effort devoted to a particular software product, and if 60 percent of maintenance goes to enhancing the product, then 42 percent of the total life-cycle effort for that product is dedicated to product enhancement. Given this perspective, it is apparent that the product delivered to the customer at the end of the development cycle is only the initial version of the system. Some authors have suggested that the appropriate life cycle model for software is development – evolution – evolution – evolution.

This perspective makes it apparent that the primary goal of software development should be production of maintainable software systems. Maintainability, like all high-level quality attributes, can be expressed in terms of attributes that are built into the product. The primary product attributes that contribute to software maintainability are clarity, modularity, and good internal documentation of the source code, as well as appropriate supporting documents.
It should also be observed that software maintenance is a microcosm of the software development cycle. Enhancement and adaptation of software may reinitiate development in the analysis phase, while correction of a software problem may reinitiate the development cycle in the analysis phase, the design phase, or the implementation phase. Thus, all of the tools and techniques used to develop software are potentially useful for software maintenance.

Analysis activities during software maintenance involve understanding the scope and effect of a desired change, as well as the constraints on making the change. Design during maintenance involves redesigning the product to incorporate the desired changes. The changes must then be implemented, internal documentation of the code must be updated, and new test cases must be designed to assess the adequacy of the modification. Also, the supporting documents (requirements, design specifications, test plan, principles of operation, user’s manual, cross-reference directories, etc.) must be updated to reflect the changes. Updated versions of the software (code and supporting documents) must then be distributed to the various customer sites, and configuration control records for each site must be updated.

All of these tasks must be accomplished using a systematic, orderly approach to tracking and analysis of change requests, and careful redesign, reimplementation, revalidation, and redocumentation of the changes. Otherwise, the software product will quickly degrade as a result of the maintenance process. It is not unusual for a well designed, properly implemented, and adequately documented initial version of a software product to become unmaintainable due to inadequate maintenance procedures. This can result in situations in which it becomes easier and less expensive to reimplement a module or subsystem than to modify the existing version. Software maintenance activities must not destroy the maintainability of software. A small change in the source code often requires extensive changes to the test suite and the supporting documents. Failure to recognize the true cost of a “small change” in the source code is one of the most significant problems in software maintenance.

In subsequent sections of this chapter we discuss development-cycle activities that enhance maintainability, the managerial aspects of software maintenance, configuration management, the role of source-code metrics in maintenance, and tools and techniques for accomplishing maintenance.

ENHANCING MAINTAINABILITY DURING DEVELOPMENT

Many activities performed during software development enhance the maintainability of a software product. Some of these activities are listed in the table below and discussed in the paragraphs that follow.

Analysis Activities

The analysis phase of software development is concerned with determining customer requirements and constraints, and establishing the feasibility of the product. From the maintenance viewpoint, the most important activities that occur during analysis are: establishing standards and guidelines for the project and the work products to ensure uniformity of the products; setting milestones to ensure that the work products are produced on schedule; specifying quality assurance procedures to ensure development of high-quality documents; identifying product enhancements that will most likely occur following initial delivery of the system; and estimating the resources (personnel, equipment, floor space) required to perform maintenance activities.

Table: Development activities that enhance software maintainability

Analysis Activities
Develop standards and guidelines
Set milestones for the supporting documents
Specify quality assurance procedures
Identify likely product enhancements
Determine resources required for maintenance
Estimate maintenance costs

Architectural Design Activities
Emphasize clarity and modularity as design criteria
Design to ease likely enhancements
Use standardized notations to document data flow, function, structure, and interconnections
Observe the principles of information hiding, data abstraction, and top-down hierarchical decomposition

Detailed Design Activities
Use standardized notations to specify algorithms, data structures, and procedure interface specifications
Specify side effects and exception handling for each routine
Provide cross-reference directories

Implementation Activities
Use single entry, single exit constructs
Use standard indentation of constructs
Use simple, clear coding style
Use symbolic constants to parameterize routines
Provide margins on resources
Provide standard documentation prologues for each routine
Follow standard internal commenting guidelines

Other Activities
Develop a maintenance guide
Develop a test suite
Provide test suite documentation

Software maintenance may be performed by the developing organization, by the customer, or by a third party on behalf of the customer. In any case, the customer must be given an estimate of the resources required and the likely costs to be incurred in maintaining the system. These estimates may exert strong influences on the feasibility of the system requirements, and may result in modification of the requirements. An estimate of the resources required for maintenance allows planning for and procurement of the necessary maintenance facilities and personnel during the development cycle, and minimizes unpleasant surprises for the customer.

Standards and guidelines. Various types of standards and guidelines can be developed to enhance the maintainability of software. Standard formats for requirements documents and design specifications, structured coding conventions, and standardized formats for the supporting documents such as the test plan, the principles of operation, the installation manual, and the user’s manual contribute to the understandability and hence the maintainability of software. The quality assurance group can be given responsibility for developing and enforcing the various standards and guidelines during software development. Managers can ensure that milestones are being met and that documents are being developed on schedule in conjunction with the design specifications and source code.
Design activities
Architectural design is concerned with developing the functional components, conceptual data structures, and interconnections in a software system. The most important activity for enhancing maintainability during architectural design is to emphasize clarity, modularity, and ease of modification as the primary design criteria. Given alternative ways of structuring a system, the designers will choose a particular structure on the basis of certain design criteria that may be explicitly stated or implicitly understood. The criteria may include coupling and cohesion of modules, efficiency considerations, and interfaces to existing software. Explicit emphasis on clarity, modularity, and ease of modification will usually result in a system that is easier to maintain than one designed using efficiency in execution time and minimization of memory space as the primary design criteria.

Design concepts such as information hiding, data abstraction, and top-down hierarchical decomposition are appropriate mechanisms for achieving a clearly understandable, modular, and easily modified system structure. For ease of understanding, and for ease of verifying completeness and consistency of the design, standardized notations such as data flow diagrams, structure charts, and/or HIPOs should be used. These forms of design documentation aid the software maintainer, who must understand the software product well enough to modify it and revalidate it.

Detailed design is concerned with specifying algorithmic details, concrete data representations, and details of the interfaces among routines and data structures. Standardized notations should be used to describe algorithms, data structures, and interfaces. Procedure interface specifications should describe the modes and problem-domain attributes of parameters and global variables used by each routine. In addition, selectively shared data areas, global variables, side effects, and exception handling mechanisms should be documented for each routine that incorporates those features. A call graph and cross-reference directory should be prepared to indicate the scope of effect of each routine; call graphs and directories provide the information needed to determine which routines and data structures are affected by modifications to other routines.

Implementation Activities.

Implementation, like design, should have the primary goal of producing software that is easy to understand and easy to modify. Single entry, single exit coding constructs should be used, standard indentation of constructs should be observed, and a straightforward coding style should be adopted. Ease of maintenance is enhanced by use of symbolic constants to parameterize the software, by data encapsulation techniques, and by adequate margins on resources such as table sizes and overflow tracks on disks. Standard prologues in each routine should provide the author’s name, the date of development, the name of the maintenance programmer, and the date and purpose of each modification. In addition, input and output assertions, side effects, and exceptions and exception handling actions should be documented in the prologue of each routine.
Supporting documents. There are two particularly important supporting documents that should be prepared during the software development cycle in order to ease maintenance activities. These documents are the maintenance guide and the test suite description. The maintenance guide provides a technical description of the operational capabilities of the entire system, together with hierarchy diagrams, call graphs, and cross-reference directories for the system. An external description of each module, including its purpose, input and output assertions, side effects, global data structures accessed, and exceptions and exception handling actions, should be specified in the maintenance guide.

A test suite should accompany every delivered software product. A test suite is a file of test cases developed during system integration testing and customer acceptance testing. The test suite should contain a set of test data and the actual results from those tests. When software is modified, test cases are added to the test suite to validate the modifications, and the entire test suite is rerun to verify that the modifications have not introduced any unexpected side effects. Execution of a test suite following software modification is referred to as regression testing.
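The sketch below shows what two cases in such a regression test suite might look like, assuming JUnit 4 and the illustrative Account class introduced in Chapter 1; it is not the test suite of the system developed in this work. Re-running these tests after every modification checks that the change has not introduced unexpected side effects.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountRegressionTest {

    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(100.0);
        assertEquals(100.0, account.getBalance(), 0.0001);
    }

    @Test
    public void negativeDepositIsIgnored() {
        Account account = new Account();
        account.deposit(-50.0);
        assertEquals(0.0, account.getBalance(), 0.0001);
    }
}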

Documentation for the test suite should specify the system configuration, assumptions and conditions for each test case, the rationale for each test case, the actual input data for each test, and a description of expected results for each test. During product development, the quality assurance group is often given responsibility for preparing the acceptance test and maintenance test suites.
2.2 Software Engineering Metrics

Metrics are units of measurement. The term "metrics" is also frequently used to mean a set of specific measurements taken on a particular item or process. Software engineering metrics are units of measurement that are used to characterize: [8]

♦ Software engineering products, e.g., designs, source code, and test cases,
♦ Software engineering processes, e.g., the activities of analysis, designing, and coding, and
♦ Software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.

If used properly, software engineering metrics can allow us to:

♦ Quantitatively define success and failure, and/or the degree of success or failure, for a product, a process, or a person,
♦ Identify and quantify improvement, lack of improvement, or degradation in products, processes, and people,

♦ Make meaningful and useful managerial and technical decisions,

♦ Identify trends, and

♦ Make quantified and meaningful estimates.

There are some common trends among software engineering metrics. Here are some observations:

♦ A single software engineering metric in isolation is seldom useful. However, for a particular process, product, or person, 3 to 5 well-chosen metrics seem to be a practical upper limit, i.e., additional metrics (above 5) do not usually provide a significant return on investment.

♦ Although multiple metrics must be gathered, the most useful set of metrics for a given person, process, or product may not be known ahead of time.

♦ Metrics are almost always interrelated. Specifically, attempts to influence one metric usually have an impact on other metrics for the same person, process, or product.
♦ To be useful, metrics must be gathered systematically and regularly -- preferably in an automated manner.

♦ Metrics must be correlated with reality. This correlation must take place before meaningful decisions, based on the metrics, can be made.
♦ Faulty analysis (statistical or otherwise) of metrics can render metrics useless, or even harmful.
♦ To make meaningful metrics-based comparisons, both the similarities and dissimilarities of the people, processes, or products being compared must be known.
♦ Those gathering metrics must be aware of the items that may influence the metrics they are gathering. For example, there are the "terrible H's," i.e., the Heisenberg effect and the Hawthorne effect.
♦ Metrics can be harmful. More properly, metrics can be misused.

Object-oriented software engineering metrics are units of measurement that are used to characterize:

♦ object-oriented software engineering products, e.g., designs, source code, and test cases,
♦ object-oriented software engineering processes, e.g., the activities of analysis, designing, and coding, and
♦ object-oriented software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.

Difference between Object-Oriented and Conventional Software Engineering Metrics

Object-oriented software engineering (OOSE) [8] metrics are different because of:

♦ Localization

♦ Encapsulation

♦ Information hiding

♦ Inheritance

♦ Object abstraction techniques.


Localization is the process of placing items in close physical proximity to each other:

♦ Functional decomposition processes localize information around functions.
♦ Data-driven approaches localize information around data.
♦ Object-oriented approaches localize information around objects.

In most conventional software (e.g., software created using functional decomposition), localization is based on functionality. Therefore:

♦ A great deal of metrics gathering has traditionally focused largely on functions and functionality.
♦ Units of software were functional in nature, thus metrics focusing on component interrelationships emphasized functional interrelationships, e.g., module coupling.

Encapsulation is the packaging (or binding together) of a collection of items:

♦ Low-level examples of encapsulation include records and arrays.
♦ Subprograms (e.g., procedures, functions, subroutines, and paragraphs) are mid-level mechanisms for encapsulation.
♦ In object-oriented (and object-based) programming languages, there are still larger encapsulating mechanisms, e.g., C++'s classes, Ada's packages, and Modula 3's modules.

Objects encapsulate:

♦ Knowledge of state, whether statically maintained, calculated upon demand, or otherwise
♦ Advertised capabilities (sometimes called operations, method interfaces, or method selectors), and the corresponding algorithms used to accomplish these capabilities (often referred to simply as methods)
♦ Other objects
♦ Exceptions
♦ Constants

In many object-oriented programming languages, encapsulation of objects (e.g., classes and their instances) is syntactically and semantically supported by the language. In others, the concept of encapsulation is supported conceptually, but not physically.

Encapsulation has two major impacts on metrics:

♦ The basic unit will no longer be the subprogram, but rather the object, and
♦ The ways of characterizing and estimating systems must be modified accordingly.

Information hiding is the suppression (or hiding) of details:

♦ There are degrees of information hiding, ranging from partially restricted visibility to total invisibility.
♦ Encapsulation and information hiding are not the same thing, e.g., an item can be encapsulated but may still be totally visible.

Information hiding plays a direct role in such metrics as object coupling and the degree of information hiding.

Inheritance is a mechanism whereby one object acquires characteristics from one, or more, other objects:

♦ Some object-oriented languages support only single inheritance, i.e., an object may acquire characteristics directly from only one other object.
♦ Some object-oriented languages support multiple inheritance, i.e., an object may acquire characteristics directly from two, or more, different objects.
♦ The types of characteristics which may be inherited, and the specific semantics of inheritance, vary from language to language.

Many object-oriented software engineering metrics are based on inheritance (one such metric is illustrated in the sketch after the following list), e.g.:

♦ Number of children (number of immediate specializations)

♦ Number of parents (number of immediate generalizations)

♦ Class hierarchy nesting level (depth of a class in an inheritance hierarchy).
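As a minimal sketch, the class hierarchy nesting level of a Java class can be counted as the number of superclass links between the class and java.lang.Object; the InheritanceDepth name is illustrative, and the counting convention (Object at depth 0) is an assumption.

public class InheritanceDepth {

    // Walks up the inheritance chain via reflection and counts the steps.
    static int nestingLevel(Class<?> type) {
        int depth = 0;
        for (Class<?> c = type.getSuperclass(); c != null; c = c.getSuperclass()) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(nestingLevel(Object.class));              // 0
        System.out.println(nestingLevel(java.util.ArrayList.class)); // 3 in current JDKs
    }
}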

Abstraction is a mechanism for focusing on the important (or essential) details of a concept or item, while ignoring the inessential details:

♦ There are different types of abstraction, e.g., functional, data, process, and object abstraction.
♦ In object abstraction, objects are treated as high-level entities (i.e., as black boxes).

There are three commonly used (and different) views on the definition of "class":
♦ A class is a pattern, template, or a blueprint for a category of
structurally identical items. The items created using the class are
called instances. This is often referred to as the "class as a `cookie
cutter'" view.

♦ A class is a thing that consists of both a pattern and a mechanism for creating items based on that pattern. This is the "class as an `instance factory'" view. Instances are the individual items that are "manufactured" (created) by using the class's creation mechanism.

♦ A class is the set of all items created using a specific pattern, i.e., the
class is the set of all instances of that pattern.

A metaclass is a class whose instances are themselves classes. Some object-oriented programming languages directly support user-defined metaclasses. In effect, metaclasses may be viewed as classes for classes: to create an instance, specific parameters are supplied to the metaclass, and these are used to create a class. A metaclass is an abstraction of its instances.

A parameterized class is a class some or all of whose elements may be parameterized. New (directly usable) classes may be generated by instantiating a parameterized class with its required parameters. Templates in C++ and generic classes in Eiffel are examples of parameterized classes. Some people differentiate metaclasses and parameterized classes by noting that metaclasses (usually) have run-time behavior, whereas parameterized classes (usually) do not have run-time behavior.
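A minimal Java sketch of a parameterized class follows; Java generics play the same role here as C++ templates and Eiffel generic classes, and the Pair name and fields are illustrative only.

public class Pair<A, B> {
    private final A first;
    private final B second;

    public Pair(A first, B second) {
        this.first = first;
        this.second = second;
    }

    public A getFirst()  { return first; }
    public B getSecond() { return second; }

    public static void main(String[] args) {
        // Instantiating the parameterized class with concrete type parameters
        Pair<String, Integer> entry = new Pair<>("classes", 3);
        System.out.println(entry.getFirst() + " = " + entry.getSecond());
    }
}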
Several object-oriented software engineering metrics are related to the class-instance relationship, e.g.:

♦ Number of instances per class per application
♦ Number of parameterized classes per application
♦ Ratio of parameterized classes to non-parameterized classes.


2.3 The Characteristics of Object-Oriented Metrics

The software engineering viewpoint stresses OOA and OOD, and regards OOP (coding) as an important, but secondary, activity that is an outgrowth of analysis and design. The reason for this is simple: as the complexity of systems increases, the design architecture of the end product has a significantly stronger influence on its success than the programming language that has been used. [3]

Metrics for any engineered product are governed by the unique characteristics of the product. Object oriented software is fundamentally different from software developed using conventional methods. For this reason, the metrics for OO systems must be tuned to the characteristics that distinguish OO from conventional software.

Berard defines five characteristics that lead to specialized metrics: localization, encapsulation, information hiding, inheritance, and object abstraction techniques. [8]

Localization

Localization is a characteristic of software that indicates the manner in which information is concentrated within a program. Data-driven methods localize information around specific data structures. In the OO context, information is concentrated by encapsulating both data and process within the bounds of a class or object.

Because conventional software emphasizes function as a localization mechanism, software metrics have focused on the internal structure or complexity of functions (e.g., module length, cohesion, or cyclomatic complexity) or the manner in which functions connect to one another (e.g., module coupling).

In OO systems, metrics must instead apply to the class (object) as a complete entity. In addition, the relationship between operations (functions) and classes is not necessarily one to one. Therefore, metrics that reflect the manner in which classes collaborate must be capable of accommodating one-to-many and many-to-one relationships.

Encapsulation

Berard [8] defines encapsulation as “the packaging of a collection of items. Low-level examples of encapsulation (for conventional software) include records and arrays; subprograms (e.g., procedures, functions, subroutines, and paragraphs) are mid-level mechanisms for encapsulation.”

For OO systems, encapsulation encompasses the responsibilities of a class, including its attributes (and other classes, for aggregate objects) and operations, and the states of the class, as defined by specific attribute values.

Encapsulation influences metrics by changing the focus of measurement from a single module to a package of data (attributes) and processing modules (operations). In addition, encapsulation encourages measurement at a higher level of abstraction.
Inheritance

Inheritance is a relationship among classes, wherein one class shares the structure or behavior defined in one (single inheritance) or more (multiple inheritance) other classes. Inheritance defines an “is-a” hierarchy among classes in which a subclass inherits from one or more generalized super classes. A subclass typically specializes its super classes by augmenting or redefining existing structure and behavior.

Information Hiding

Information hiding is a fundamental design concept for software. When a software system is designed using the information hiding approach, each module in the system hides the internal details of its processing activities, and modules communicate only through well-defined interfaces.

Other candidates for information hiding include:

♦ A data structure, its internal linkage, and the implementation details of the procedures that manipulate it (this is the principle of data abstraction)
♦ The format of control blocks, such as those for queues in an operating system (a control-block module)
♦ Character codes, ordering of character sets, and other implementation details
♦ Shifting, masking, and other machine-dependent details
2.4 Related work

Different methods of testing are used to solve the problem. A great deal of research work has already been done in this area.

2.4.1 Testing Methods

An examination of testing methods for conventional programming language systems follows, as well as a look at the applicability of these testing methods to object-oriented programming systems. A discussion of testing methods specific to OOPS will then be presented.

Much has been written concerning the testing of conventional (or procedural) language systems. Some of the earlier works include The Art of Software Testing by Myers [12], "Functional Program Testing" by Howden [9], and more recently Software Testing Techniques by Boris Beizer [2, 17]. The reference [12] focused on explaining testing and leading the reader through realistic examples. It also discussed numerous testing methods and defined testing terminology. Howden's reference focused on functional testing, which is probably the most frequently applied testing method. Finally, Beizer's text provided a veritable encyclopedia of information on the many conventional testing techniques available and in use. [7, 20]

The test method taxonomy of Miller is used [11]. Testing is broken into several categories: general testing; special input testing; functional testing; realistic testing; stress testing; performance testing; execution testing; competency testing; active interface testing; structural testing; and error-introduction testing. General testing refers to generic and statistical methods for exercising the program. These methods include unit/module testing, system testing, regression testing, and ad-hoc testing. Special input testing refers to methods for generating test cases to explore the domain of possible system inputs. Specific testing methods included in this category are random testing and domain testing.

Functional testing refers to methods for selecting test cases to assess the required functionality of a program. Testing methods in the functional testing category include specific functional requirement testing and model-based testing.

Realistic test methods choose inputs/environments comparable to the intended installation situation. Specific methods include field testing and scenario testing. Stress testing refers to choosing inputs/environments which stress the design/implementation of the code. Testing methods in this category include stability analysis, robustness testing, and limit/range testing.

Performance testing refers to measuring various performance aspects with realistic inputs. Specific methods include sizing/memory testing, timing/flow testing, and bottleneck testing.

Execution testing methods actively follow (and possibly interrupt) a sequence of program execution steps. Testing methods in this category include thread testing, activity tracing, and results monitoring.

Competency testing methods compare the output "effectiveness" against some pre-existing standard. These methods include gold standard testing, effectiveness procedures, and workplace averages. Active interface testing refers to testing various interfaces to the program. Specific methods include data interface testing, user interface testing, and transaction-flow testing [2].

Structural testing refers to testing selected aspects of the program structure. Methods in this category include statement testing, branch testing, path testing, test-coverage analysis testing, and data-flow testing [2]. Error-introduction testing systematically introduces errors into the program to assess various effects. Specific methods include error seeding and mutation testing. When utilizing conventional programming languages, software systems are usually tested in a bottom-up fashion. First, units or modules are tested and debugged (unit testing). This is followed by integration testing, which exercises sets of modules. Testing of the fully integrated system (system testing) is accomplished next. In some cases system testing is followed by acceptance testing (usually accomplished by/for the customer and/or end user).

Applicability to Object-Oriented Systems

To understand the applicability of conventional testing methods to object-oriented programming systems, it is vital to examine the components of these systems. OOPS can be seen as having five components: (1) objects, (2) their associated messages and methods, (3) hierarchically-organized classes of objects, (4) external interfaces, and (5) tools and utilities. Objects are code modules that contain both data and procedures. The methods are one type of object-procedure and are responsible for actions of computation, display, or communication with other objects. Communication is accomplished through the sending of messages. Objects are described by abstract classes (or types). Specific objects are created as instances of a class. Inheritance is used to pass down information from parent classes to their subclasses.

External interfaces deal with the connection of OOPS to databases, communication channels, users, etc. Tools and utilities refers to general application programs which may be used in building the objects or assisting in any other features of the OOPS. As one might expect, certain OOPS components can be handled very easily by applying conventional testing methods to them, while other components will require distinctive treatment. The hierarchically-organized classes can be viewed as declarative knowledge structures. These components use syntax and naming conventions to explicitly represent details of application knowledge. They are therefore very amenable to a verification and validation philosophy of formal verification. Formal verification refers to the use of formal mathematical theorem-proving techniques to prove a variety of properties about system components, such as redundancy, incompleteness, syntax violations, and inconsistencies. Although this approach is not yet mature, it is the most effective approach for the class component.

The tools and utilities component is seen as an example of a highly reusable component. Certain objects will also fall into this category (this must be evaluated on a case-by-case basis). A highly reusable component can be reused over a wide variety of applications without needing any customization to specific systems. A certification procedure is recommended for these components, which establishes the functional and performance characteristics of the component, independent of the application. Software certification, like formal methods, could easily be the subject of an entire paper and will not be addressed. The remaining components, including the integrated system itself, some objects, messages and methods, and external interfaces, fall into a third, catch-all category. The traditional set of conventional testing methods can be applied to these components.

OOPS can be seen as comprising five components. Of these, the objects which are not highly reusable, the messages and methods, and the external interfaces can be tested using conventional testing methods. Formal methods should be applied to the class component. Certification procedures should be applied to the tools and utilities component and to highly reusable objects.

Object-Oriented System Specific Test Methods

In examining the literature on object-oriented programming systems and testing, several testing methods were discovered which are specific to OOPS. The unit repeated inheritance hierarchy testing method, the inheritance method, the identity method, the set and examine method, and the state-based testing method are described below.

Unit Repeated Inheritance (URI) Hierarchy Method

Repeated inheritance is defined as a class (e.g., class D) that multiply inherits from two or more classes (e.g., classes B and C), and these classes (B and C) are descendants of the same parent class (e.g., class A).
Inheritance Method

Smith and Robson [15] have identified a framework for testing object-oriented systems which uses seven different testing strategies. Though not all of these strategies are specific to object-oriented systems, the inheritance method is. The inheritance method uses regression analysis to determine which routines should be tested (when a change has been made to the system) and then performs the tests based upon how the super class was successfully tested. This applies to sub-classes inherited from the parent class. The sub-class under test is treated as a flattened class, except that the routines from the parent that are unaffected by the subclass are not retested [15].

Identity Method

Another method proposed by Smith and Robson is the identity method. This method searches for pairs (or more) of routines that leave the state as it was originally (before any routines were invoked). This list of routines is reported to the tester, who can examine the pairs and ensure that the unaltered state is the desired result [15].

Set and Examine Method

This Smith and Robson method is similar to the identity method. Pairs of routines that set and examine a particular aspect of the state are related and are used in conjunction to run tests. For example, a clock object may have one routine that sets the time and another that checks the time. The time can be set, then immediately checked using this pair of routines. Boundary and error values can be checked using this method [15].
State-Based Testing Method

Turner and Robson [16] have suggested a new technique for the validation of OOPS which emphasizes the interaction between the features and the object’s state. Each feature is considered as a mapping from its starting or input states to its resultant or output states, affected by any stimuli [16]. Substates are defined, which are the values of a data item at a specific point in time. These are then analyzed for specific and general values. Next, the set of states that the i-th feature actually accepts as input (Ii) and the set of states it is able to generate as output (Oi) are determined for all the features of the class. Test cases are then generated using general guidelines provided. For example, one test case should allocate one substate per data item. Turner and Robson have found this technique to work best for classes which have many interacting features.

2.4.2 OOPS-Specific Software Faults [10]

A fault is defined as a textual problem with the code resulting from a mental mistake by the programmer or designer [13]. A fault is also called a defect. Fault-based testing refers to the collection of information on whether classes of software faults (or defects) exist in a program. Since testing can only prove the existence of errors and not their absence, this testing approach is a very sound one. It is desirable to be able to implement such a testing approach for object oriented systems. Although lists of error types can be found in the current object-oriented literature, at present there does not exist a comprehensive taxonomy of defect types inherent to object-oriented programming systems. This paper takes a first step toward such a taxonomy by consolidating the fault types found in the literature.
Three major sources of object-oriented faults were examined. Each source examined object-oriented faults and attempted to describe the types of test methods that could be applied to detect the faults. Firesmith concentrated on conventional test methods such as unit testing and integration testing, while Miller et al. concentrated on a prototype static analyzer called Verification of Object-Oriented Programming Systems (VOOPS) for detecting faults. Purchase and Winder presented nine types of faults, seven of which are detectable using debugging tools. Duplicate/related fault types must be eliminated or grouped. Dynamic testing methods should be identified to detect each of the faults currently detected by VOOPS (this is not mandatory; one can simply broaden the definition of testing to include static and dynamic methods). Similarly, static testing methods should be identified for as many fault types as possible. The taxonomy must then be organized to address either OOPS components, the object-oriented model, or causal and diagnostic fault types. These are all areas for future research. The approach proposed in this paper differs from that of Miller, Firesmith, and Purchase & Winder [14] in that it looks not only at object-oriented faults and not only at conventional methods applied to these faults. It looks at both of these items and also examines methods specific to OOPS. It is therefore a step toward a more comprehensive approach.
3. Problem Description

The object-oriented (OO) paradigm has become increasingly popular in recent years, as is evident from more and more organizations introducing object-oriented methods and languages into their software development practices. Claimed advantages of OOP (object-oriented programming) include easier maintenance through better data encapsulation [10]. There is some evidence to support the claim that these benefits may be achieved in practice [36], [44]. Although maintenance may turn out to be easier for programs written in OO languages, it is unlikely that the maintenance burden will completely disappear [50]. Maintenance, in its widest sense of "post deployment software support," is likely to continue to represent a very large fraction of total system costs. Maintainability of software thus continues to remain a critical area even in the object-oriented era. Object-oriented design can play an important role in maintenance, especially if design-code consistency is maintained [6], [24].

The control of software maintenance costs can be approached in several ways. One approach is the utilization of software metrics during the development phase. These metrics can be utilized as indicators of system quality and can help identify potential problem areas [19], [38], [43]. Several metrics applicable during the design phase have been developed. Several studies have been conducted examining the relationships between design complexity metrics and maintenance performance and have concluded that design-based complexity metrics can be used as predictors of maintenance performance. Many of these studies, however, were done in the context of traditional software systems [20], [25], [29], [40], [41].

The OO approach involves modeling the real world in terms of its objects, while more traditional approaches emphasize a function-oriented view that separates data and procedures. OO designs are relatively richer in information and, therefore, metrics, if properly defined, can take advantage of the information available at an early stage in the life cycle. Unfortunately, most of the prior research does not exploit this additional information. Three metrics, interaction level [1], [2], interface size [1], and operation argument complexity [15], which are the focus of the current paper, are among the metrics proposed and/or studied that seem to take advantage of some of the additional information available in an OO design.

The three metrics use interface size information in slightly different ways. The interaction level metric is the most complex of the three and additionally captures the potential interactions that may occur in an execution sequence. Operation argument complexity is the simplest of the three metrics.

Prior research on the interaction level [1], [2], [9], interface size [1], and operation argument complexity [15] metrics has not validated the proposed metrics empirically. The metrics have, however, been subjectively validated, where the metric values are compared to expert judgments. In a comparative study [16] of OO design quality metrics (including three Chidamber/Kemerer metrics [17]) by Binkley and Schach [9], the interaction level metric (also known as the permitted interaction metric) was found to be the second best for predicting implementation and maintenance effort.

The objective of the current paper is to present the results of assessing the validity of predicting maintenance time from the design complexity of a system, as measured by the three metrics mentioned above. These metrics have also been analytically validated [7] using the relevant mathematical properties specified by Weyuker [49].
4. System Design

The research design suggests that design complexity, maintenance task, and programmer ability all influence maintenance performance. Maintenance performance is the dependent variable, and design complexity, maintenance task, and programmer ability are independent variables. This work reports on only the first two of these independent variables.

4.1 Dependent Variable

Maintainability is defined as the ease with which systems can be understood and modified [25]. In past work, it has been operationalized as "number of lines of code changed" [33], [34], as time (required to make changes) and accuracy [20], [25], and as "time to understand, develop, and implement modification" [39]. In this work, following Rising [39], maintainability was operationalized as "time to understand, develop, and actually make modifications to existing programs." Accuracy was not included in the maintenance measurement for the following reasons: 1) An inverse relationship exists between time (for making changes) and accuracy. 2) For the measured accuracy to be statistically useful, the maintenance should be done in some restricted amount of time. 3)
4.2 Independent Variables

4.2.1 Design Complexity

Interaction level (IL) [1], [2], interface size (IS) [1], and operation
argument complexity (OAC) [15] were chosen as measures of design
complexity in this work. All three metrics have been subjectively validated
by comparing their values to experts' judgments and have been found to
perform well [1], [2], [9], [15].

The fundamental basis for the interaction level metric, as well as for
the other two metrics, is the assumption that the greater the interface, the
more scope for (direct) interactions and interaction increases complexity.
This assumption is consistent with the notions of complexity suggested by
various researchers.

In the case of objects and classes, the methods and data attributes are
the set of properties and, therefore, complexity of a class is a function of the
interaction between the methods and the data attributes. The concept of IL
specifies the amount of potential (direct) interaction that can occur in a
system, class, or method.

For example, the IL of a method indicates the amount of (direct) interaction that can occur whenever the method is invoked. To explain further, whenever a method is invoked, its parameters are used for some internal computation along with some of the data attributes associated with the class to which that method belongs. Also, a value (object) may be passed back to the calling routine. (Thus, the parameter count used in IL includes both the regular method parameters and any return value if one exists.) There is said to be an "interaction" between two entities A and B if the value of entity A is calculated directly based on the value of entity B, or vice versa. In the context of the interaction level metric, if the value of some data attribute is calculated directly based on the value of one or more of the parameters, or vice versa, then there is said to be an interaction between the parameters and the data attribute. It is expected that a higher interaction level will correlate with an increased difficulty in determining how to implement or modify a design. The interaction level metric can be computed at varying levels of granularity: the interaction level of a class is the sum of the interaction levels of its methods, and the interaction level of a design is the sum of the interaction levels of its classes. The current study validates IL and the other two metrics at the design level.

Both the interaction level and interface size metrics use the concepts of "number" and "strength." For example, the interaction level of a method depends on the number of interactions and the strength of interactions. The size of a parameter (argument) or attribute is a specified constant, signifying the complexity of the parameter/attribute type. The strength of an interaction is defined as the product of the sizes of the parameters/attributes involved in the interaction. It is necessary to use both number and strength because they typically have an inverse relationship, in the sense that decreasing one increases the other and vice versa. Also, a large increase in either the number or the strength (of interactions) could increase the complexity. Accordingly, the interaction level (IL) of a method is defined as: IL = K1 * (number of interactions) + K2 * (sum of strengths of interactions).

The constants K1 and K2 used in the linear combination are tentatively set to 1 for simplicity and to balance the effect of the strength of interactions and the number of interactions; these values may be revised as experience is gained with the metric. This approach is consistent with assumptions made by other researchers in tentatively fixing a value for the constants in metric definitions [16]. It is to be noted that the interaction level metric is derived based on the number and the strength of the interactions "permitted" by the design. These interactions may or may not actually occur in realizing the method.

For example, a parameter of a method may, upon implementation, be
seen to interact with only one of the data attributes, not all of them.
Nonetheless, the design of the method has created the mechanism for these
interactions to occur and hence "permits" them. Whether or not all the
interactions occur and how many times they occur is an implementation
issue.

The presence or absence of the mechanism is a design issue and,
hence, serves as an appropriate base for a design metric. The concept of
interface size gives a measure of the means for information to flow in and
out of class encapsulation. Some classes define many methods, perhaps
many of which have complex signatures (i.e., parameter lists) that provide
abundant means for information to flow in and out of their encapsulation.
Other classes may provide few methods, many of which have simple
signatures. It is expected that a larger interface size will correlate with an
increased difficulty in comprehending how to select and correctly use the
services provided by a class. Interface size (IS) of a method is defined as:
IS = K3 * (number of parameters) + K4 * (sum of sizes of parameters). As in the
case of the definition of IL, the constants K3 and K4 used in the linear
combination are tentatively set to 1, for simplicity and to balance the effect of
the number of parameters against the sizes of the parameters.
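
A similarly hedged Java sketch of the interface size computation for one method, again with a hypothetical class name and with K3 = K4 = 1, might look as follows; whether the return value is counted as an extra parameter is left to the caller, since the definition above does not state it explicitly.

    // Hypothetical sketch: interface size (IS) of one method.
    class InterfaceSize {
        static final int K3 = 1;   // weight for the number of parameters
        static final int K4 = 1;   // weight for the sum of parameter sizes

        // IS = K3 * (number of parameters) + K4 * (sum of sizes of parameters)
        static int ofMethod(int[] parameterSizes) {
            int sizeSum = 0;
            for (int size : parameterSizes) {
                sizeSum += size;
            }
            return K3 * parameterSizes.length + K4 * sizeSum;
        }
    }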

Interface size of a class is the sum of the interface sizes of its
methods. The interface size of a design (the focus of the current study) is the
sum of the interface sizes of its classes.

Operation argument complexity is the simplest of the three metrics.
Operation argument complexity (OAC) of a method is defined as:
OAC = Σ P(i), where P(i) is the size of parameter i. Operation argument
complexity of a class is the sum of the operation argument complexities of its
methods. The operation argument complexity of a design (the focus of the
current work) is the sum of the operation argument complexities of its classes.
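
Operation argument complexity therefore reduces to a plain summation. The sketch below is illustrative only; the OperationArgumentComplexity class name is hypothetical.

    // Hypothetical sketch: OAC of one method is the sum of its parameter sizes.
    class OperationArgumentComplexity {
        static int ofMethod(int[] parameterSizes) {
            int total = 0;
            for (int size : parameterSizes) {
                total += size;   // OAC = sum of P(i)
            }
            return total;
        }
    }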

4.2.2 Maintenance Task

The second independent variable in the work was maintenance task.
Most researchers categorize maintenance activities as adaptive, corrective,
and perfective [32]. Adaptive maintenance is environment-driven. The need
and perfective [32]. Adaptive maintenance is environment-driven. The need
for adaptive maintenance arises when there are changes in hardware,
operating systems, files, or compilers, which impact the system. Corrective
maintenance is error-driven. This activity is equivalent to debugging, but it
occurs after the system is placed in operation. Since programs are never truly
error free, corrective maintenance is required throughout the life of a system.
Perfective maintenance is user driven. Most perfective maintenance occurs
in the form of report modifications to meet changing user requirements [32].
The bulk of maintenance activities are of this latter type. To be
representative, two maintenance tasks were used in the work, one of which
was perfective and the other was corrective.

4.2.3 Summary of Research Variables

Based on the above research model, the main research objective of this
work was to focus on the relationship between the design complexity metrics
and maintenance performance. Since the aim is to validate the metrics used to
measure design complexity, if these metrics are indeed valid metrics of design
complexity, then the expectation is to see a positive correlation between design
complexity and maintenance time. This work examines that relationship in the
contexts of both perfective and corrective maintenance tasks.
DATA TYPES AND SIZE

S.No   Data Type   Size
1.     Byte        1
2.     Short       2
3.     Int         4
4.     Long        4
5.     Float       4
6.     Double      8
7.     Char        2
8.     Boolean     1
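
Assuming the table above is used as a lookup from Java type names to size constants, a minimal sketch of such a lookup could be written as follows; the TypeSizes name and the default value for user-defined types are assumptions, since the report does not specify how non-primitive types are weighted.

    import java.util.Map;

    // Hypothetical sketch of a size-constant lookup based on the table above.
    class TypeSizes {
        static final Map<String, Integer> SIZE = Map.of(
                "byte", 1, "short", 2, "int", 4, "long", 4,
                "float", 4, "double", 8, "char", 2, "boolean", 1);

        static int of(String typeName) {
            // User-defined or unknown types default to 1 here; the report does
            // not specify how such types are weighted (an assumption).
            return SIZE.getOrDefault(typeName.toLowerCase(), 1);
        }
    }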

ARCHITECTURE OF OBJECT ORIENTED SYSTEM

[Figure: architecture of an object-oriented system. A system is composed of
classes, and each class consists of attributes and methods.]


Hypotheses

The hypotheses for the work are derived from the following
proposition:
P1. There is a relationship between the complexity of a system's design and
the maintenance time required to make changes.
Propositions are generic statements made based on the research model; P1 is
such a statement.

There are numerous ways to assess whether "a relationship" exists
between two variables: t-test/ANOVA, correlation, regression, etc. For each
of the metrics of interest in this study, three types of tests (ANOVA,
correlation, and regression) were used to assess whether a relationship indeed
seems to exist and whether each complexity metric can be used as a reliable
indicator of expected maintenance time. Each test is expressed in terms of a
hypothesis. Both the null (HO) and the alternate (HA) hypotheses are shown.
The null hypothesis says that maintenance time does not vary as a function of
the metric. If a metric is valid, a significant relationship between the metric
and the maintenance time should be found, and hence the objective is to be
able to reject the null hypotheses.

The following hypotheses formalize these tests:

H1O: There is no difference in the maintenance time required to make
changes to systems, irrespective of whether they have low- or high-
complexity designs.
H1A: There is a difference in the maintenance time required to make
changes to systems, depending on whether they have low- or high-
complexity designs.
H2O: There is no correlation between the complexity of a system's design
and the maintenance time required to make changes to that system: ρ = 0.
H2A: There is a nonzero correlation between the complexity of a system's
design and the maintenance time required to make changes to that system:
ρ ≠ 0.
H3O: There is no linear regression relationship between the complexity of a
system's design and the maintenance time required to make changes to that
system.
H3A: There is a nonzero linear regression relationship between the
complexity of a system's design and the maintenance time required to make
changes to that system.

The system's complexity was measured with each of the three
metrics, IL, IS, and OAC, and each of the hypotheses was applied to each of
the three metrics. Thus, nine tests were run in order to assess Proposition P1.
Proposition P1 and the resulting nine tests served as the main objective of
the research, which was to validate the metrics (IL, IS, and OAC).
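
The report does not show the statistical computations themselves; as one illustrative piece of the H2O/H2A test, a hand-rolled Pearson correlation between complexity and maintenance time could look like the sketch below. The Correlation class is hypothetical, and the sample values in main are taken from Figure 5.1.

    // Hypothetical sketch of the Pearson correlation used informally for H2O/H2A.
    class Correlation {
        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0, sumYY = 0;
            for (int i = 0; i < n; i++) {
                sumX += x[i];
                sumY += y[i];
                sumXY += x[i] * y[i];
                sumXX += x[i] * x[i];
                sumYY += y[i] * y[i];
            }
            double numerator = n * sumXY - sumX * sumY;
            double denominator = Math.sqrt((n * sumXX - sumX * sumX)
                                         * (n * sumYY - sumY * sumY));
            return numerator / denominator;
        }

        public static void main(String[] args) {
            // Sample complexity and maintenance-time values from Figure 5.1.
            double[] complexity = {32, 63, 32, 87, 78};
            double[] time = {40, 78.75, 40, 108.75, 97.5};
            System.out.println("r = " + pearson(complexity, time));
        }
    }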
5. SYSTEM IMPLEMENTATION AND RESULTS

5.1 System Implementation

The system uses the three design complexity metrics to predict the
software maintenance time. It is implemented as a GUI-based Java
application. Its major operations are collecting information about a system
and its components, calculating the design complexity metric values, and
estimating the maintenance time. The system has three major modules:
class information, metric analysis, and maintenance time prediction. The
class information module is designed to collect and maintain the system
information. The metric analysis module is designed to calculate the design
complexity values for the three metrics. The maintenance time is estimated
by the maintenance time prediction module.

This module is designed to collect the system information. Each
system is composed of a set of classes, and a class is formed from a set of
attributes and methods. The system information module has three sub-
modules: class information, attribute information, and method information.
The system name and the purpose of each system are maintained by the
system information module. The user can add a new system and remove an
existing system from the system details. The application maintains a
separate file for the system information.
SYSTEM ARCHITECTURE

[Figure: system architecture, showing the System Information, Class
Information, Attribute Information, and Method Information components,
the Interaction Level, Interface Size, and Operation Argument Complexity
modules, the Complexity Report, and Maintenance Time Prediction.]
MAINTENANCE TIME PREDICTION PROCESS

1. Collect system information
2. Collect class information
3. Collect method and attribute information
4. Calculate complexity
5. Estimate maintenance time

A system may have one or more classes. The information for each
class is maintained by the class information module. The user can add a new
class to a system and remove an existing class from it. The application stores
all the class information in a separate file, and all the class details are
displayed separately. The attribute information can be added separately along
with the class details; the attribute name and the type of each attribute are
recorded as the attribute information.

A class may have one or more methods. The method information
consists of the following details: method name, return type, argument name,
and argument type. The user can add the method details by using the method
entry form. The method details for each class are stored in a data file. This
system uses the Java data types for the return type and argument types.
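
As a rough sketch of the records implied by these sub-modules (the class names AttributeInfo, MethodInfo, ClassInfo, and SystemInfo are hypothetical; the actual application persists this information in separate data files):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the information kept per system, class, method, and attribute.
    class AttributeInfo {
        String name;
        String type;                       // a Java data type name
    }

    class MethodInfo {
        String name;
        String returnType;
        List<String> argumentNames = new ArrayList<>();
        List<String> argumentTypes = new ArrayList<>();
    }

    class ClassInfo {
        String name;
        List<AttributeInfo> attributes = new ArrayList<>();
        List<MethodInfo> methods = new ArrayList<>();
    }

    class SystemInfo {
        String name;
        String purpose;
        List<ClassInfo> classes = new ArrayList<>();
    }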

Metric Analysis

This module is designed to calculate the design complexity metrics.
It has three sub-modules: interaction level, interface size, and operation
argument complexity. The interaction level is calculated from the attribute
information and the method information; the numbers of attributes and
arguments, together with their strength values, are used in the interaction
level calculation. The interface size is calculated from the method
information only, and the operation argument complexity values are
calculated in the same manner. All complexity values are calculated per
method; the sum of the method complexities is defined as the class
complexity, and the sum of the class complexities is defined as the system
complexity.
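
A minimal sketch of this aggregation, reusing the hypothetical record classes shown earlier and treating the per-method metric as a pluggable function, might be:

    import java.util.function.ToIntFunction;

    // Hypothetical sketch: class complexity is the sum over its methods, and
    // system complexity is the sum over its classes, for any per-method metric.
    class ComplexityAggregator {
        static int classComplexity(ClassInfo cls, ToIntFunction<MethodInfo> methodMetric) {
            int total = 0;
            for (MethodInfo method : cls.methods) {
                total += methodMetric.applyAsInt(method);
            }
            return total;
        }

        static int systemComplexity(SystemInfo system, ToIntFunction<MethodInfo> methodMetric) {
            int total = 0;
            for (ClassInfo cls : system.classes) {
                total += classComplexity(cls, methodMetric);
            }
            return total;
        }
    }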
Maintenance Time Prediction

The maintenance time is calculated by using the design complexity
metrics. It can be calculated with any one of the three metrics; this system
uses all three and also calculates the average of the three estimates for the
system maintenance time. The system complexity value is used in the
maintenance time prediction process, and the application shows the required
maintenance time for each class separately.
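
The report does not spell out the exact estimation formula. A minimal sketch, assuming a simple linear model in which maintenance time is a coefficient times the complexity value, is shown below; the coefficients in main are illustrative placeholders chosen to be roughly consistent with the sample values reported in Section 5.2, not values prescribed by this work.

    // Hypothetical sketch: maintenance time as a linear function of complexity.
    class MaintenanceTimeEstimator {
        private final double coefficient;   // placeholder, to be calibrated from data

        MaintenanceTimeEstimator(double coefficient) {
            this.coefficient = coefficient;
        }

        double estimate(double systemComplexity) {
            return coefficient * systemComplexity;
        }

        public static void main(String[] args) {
            // Illustrative coefficients only, roughly consistent with the sample
            // values in Section 5.2; they are not prescribed by this work.
            MaintenanceTimeEstimator byIL  = new MaintenanceTimeEstimator(1.25);
            MaintenanceTimeEstimator byIS  = new MaintenanceTimeEstimator(1.45);
            MaintenanceTimeEstimator byOAC = new MaintenanceTimeEstimator(1.30);
            double average = (byIL.estimate(32) + byIS.estimate(25) + byOAC.estimate(21)) / 3.0;
            System.out.println("Average estimated maintenance time: " + average);
        }
    }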

5.2 Results

The system is tested with the sample application information. All the
class information, method information and attribute information are updated
into the system. Initially the design complexity metrics are calculated. The
maintenance time is estimated with the help of the complexity metrics. The
application calculates the maintenance time for each class separately. The
overall system maintenance time is calculated by using the maintenance time
for all the classes.

The complexity values and the maintenance time are calculated for
different sample systems, and the values are analyzed using graphs. A
maintenance time value is calculated from each complexity value. The
interaction level complexity and the maintenance time are plotted in Fig. 5.1;
this chart shows how the complexity values affect the maintenance time for
the system. The complexity values are estimated for each class in the system.
In the same way, the maintenance time for the interface size is shown in
Fig. 5.2, and that for the operation argument complexity is shown in Fig. 5.3.
All the charts demonstrate that the maintenance time depends upon the
complexity values.

The comparison of the different complexity values and their
corresponding maintenance times is presented in Fig. 5.4. This chart shows
that the estimated maintenance times do not differ greatly across the metrics,
so the user can use any of the complexity metrics to estimate the maintenance
time. The average of the three metric-based estimates also predicts the
maintenance time accurately. From these results, all three metrics are suitable
for predicting the maintenance time.
Complexity   Maintenance Time
32           40
63           78.75
32           40
87           108.75
78           97.5

Figure 5.1 Interaction Level Vs Maintenance Time


Complexity   Maintenance Time
25           36.25
30           43.5
25           36.25
38           55.1
35           50.75

Figure 5.2 Interface Size Vs Maintenance Time


Complexity   Maintenance Time
21           27.3
25           32.54
21           27.3
33           42.9
29           37.7

Figure 5.3 Operation Argument Complexity Vs Maintenance Time


6. SCOPE FOR FUTURE DEVELOPMENT

This research work was conducted to predict maintenance time using the
design complexity metrics Interaction Level, Interface Size, and Operation
Argument Complexity. The experimental study can be extended and
replicated in several directions:

The original metric definitions did not explicitly address unique
object-oriented concepts such as inheritance. Future research can define
appropriate metric computations for inheritance, aggregation, and
association, and conduct a study to validate the metrics with respect to these
Object-Oriented concepts.

A study can be conducted to separately capture the time required to
understand the system and task, make changes, and test the changes. Also,
an analysis of the different ways the changes are made can be performed.
This can provide additional information on the impact of design
complexity on detailed maintenance activities.

A longitudinal investigation of one or more actively maintained
systems can be conducted. The design complexity metrics being studied
should be applied to the systems at the outset of the study and recomputed
after each modification. Data can be gathered to evaluate how design
complexity contributes to system deterioration, frequency of maintenance
changes, system reliability, etc. This should provide useful information
both to project managers as well as to system developers.
7. CONCLUSION

The main objective of this research was to empirically explore the
validation of three object-oriented design complexity metrics: interaction
level (IL), interface size (IS), and operation argument complexity (OAC).
To predict the maintenance time at the design level, the metrics have also
been analytically validated against the relevant set of properties. For
empirical validation, an experiment was conducted to achieve the research
objective.

The following results are obtained from this work. Each of the three
complexity metrics, by itself, is found to be useful in measuring design
complexity. It is not necessary to measure all three metrics for a given
design; instead, any one of the three metrics (IL, IS, OAC) may be used in
predicting maintenance performance (the time to perform a given
maintenance task). Interface Size and Operation Argument Complexity each
explained more of the variance than did Interaction Level, so using one of
them may be the best approach.

The relative performance of Interaction Level in this regard was
lower, and it is also the most complex of the three metrics to compute.
Nevertheless, the user can choose all three metrics, or any one of them, to
predict the maintenance time; all three design complexity metrics predict the
maintenance time with only a little variation. The user can thus use the
metrics to predict software maintenance time at the design level. However,
other dependent variables are not considered in these metrics.
BIBLIOGRAPHY

Books

1. Balagurusamy E., "Object Oriented Programming with C++", Tata McGraw-Hill, 1997.

2. Beizer B., "Software Testing Techniques", Second Edition, Van Nostrand Reinhold, New York, 1990.

3. Grady Booch, "Object Oriented Analysis and Design", Tata McGraw-Hill, 1980.

4. Naughton P. and Schildt H., "Java 2: The Complete Reference", McGraw-Hill, 1999.

6. Richard E. Fairley, "Software Engineering Concepts", Tata McGraw-Hill, 2000.

7. Roger S. Pressman, "Software Engineering", Tata McGraw-Hill.


JOURNALS

8. Abbott D., "A Design Complexity Metric for Object-Oriented Development", Master's thesis, Dept. of Computer Science, Clemson Univ., 1993.

9. Abbott D.H., Korson T.D., and McGregor J.D., "A Proposed Design Complexity Metric for Object-Oriented Development", Technical Report TR 94-105, Computer Science Dept., Clemson Univ., 1994.

10. Abreu B.F. and Melo W.L., "Evaluating the Impact of Object-Oriented Design on Software Quality", Proc. Third Int'l Software Metrics Symp., Mar. 1996.

11. Bandi R.K., "Using Object-Oriented Design Complexity Metrics to Predict Maintenance Performance", PhD dissertation, Georgia State Univ., 1998.

12. Chen J-Y. and Lu J-F., "A New Metric for Object-Oriented Design", Information and Software Technology, pp. 232-240, Apr. 1993.

13. Chidamber S.R. and Kemerer C.F., "Towards a Metrics Suite for Object-Oriented Design", Proc. Sixth ACM Conf. Object-Oriented Programming Systems, Languages, and Applications (OOPSLA), pp. 197-211, Nov. 1991.

14. Howden W.E., "Functional Program Testing", IEEE Transactions on Software Engineering, SE-6(2), March 1980.

15. Jane Huffman Hayes, "Testing of Object-Oriented Programming Systems (OOPS): A Fault-Based Approach".

16. Taylor D., "Software Metrics for Object-Oriented Technology", Object Magazine, pp. 22-28, Mar.-Apr. 1993.

17. Weyuker E.J., "Evaluating Software Complexity Measures", IEEE Trans. Software Eng., pp. 1357-1365, Sept. 1988.

18. Wilde N. and Huitt R., "Maintenance Support for Object-Oriented Programs", IEEE Trans. Software Eng., pp. 1038-1044, Dec. 1992.

WEBSITES

19. www.cs.bham.ac.uk/se/2002/kv/Lecture5/OOTestingSpecialReq5.pdf
20. www.mass.edu/p_p/includes/pipeline/includes/metricswentt.pdf
21. www.ics.ltsn.ac.uk/events/jicc8/jicc8/enggmetricessw.doc
22. http://selab.netlab.uky.edu/Homepage/ISOOMS.pdf
