ABSTRACT
Definition
Class
An object class describes a group of objects with similar properties
(attributes), common behavior (operations), common relationships to other
objects, and common semantics. The abbreviation class is often used instead
of object class. Objects in a class have the same attributes and behavior
patterns. [1]
Super Class
Sub class
A class that has a link to a more general class. The class that does the
inheriting is called a subclass; a subclass is therefore a specialized version
of a super class. It inherits all of the instance variables and methods defined
by the super class and adds its own, unique elements.
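The relationship can be sketched in Python; the class names and members here are illustrative, not taken from the text:

```python
class Shape:                          # super class
    def __init__(self, name):
        self.name = name              # instance variable inherited by subclasses

    def describe(self):
        return self.name

class Circle(Shape):                  # subclass: a specialized version of Shape
    def __init__(self, radius):
        super().__init__("circle")    # reuses the super class's initialization
        self.radius = radius          # unique instance variable added by the subclass

    def area(self):                   # unique method added by the subclass
        return 3.14159 * self.radius ** 2

c = Circle(2.0)
c.describe()                          # inherited behavior
c.area()                              # subclass-specific behavior
```

The subclass gets `name` and `describe` for free and adds `radius` and `area` on top.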
Dynamic Binding (method resolution)
Analysis Activities
Develop standards and guidelines
Set milestones for the supporting documents
Specify quality assurance procedures
Identify likely product enhancements
Determine resources required for maintenance
Estimate maintenance costs
Implementation Activities
Other Activities
♦ Localization
♦ Encapsulation
♦ Information hiding
♦ Inheritance
Objects encapsulate
♦ Objects
♦ Exceptions
♦ Constants
♦ The basic unit will no longer be the subprogram, but rather the object.
♦ Encapsulation and information hiding are not the same thing, e.g., an
item can be encapsulated but may still be totally visible.
There are three commonly used (and different) views on the definition of
"class":
♦ A class is a pattern, template, or a blueprint for a category of
structurally identical items. The items created using the class are
called instances. This is often referred to as the "class as a `cookie
cutter'" view.
♦ A class is the set of all items created using a specific pattern, i.e., the
class is the set of all instances of that pattern.
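Both views can be made concrete in a short Python sketch (the `Point` class is an illustrative example, not from the text):

```python
class Point:
    # View 1: the class as a "cookie cutter" -- a blueprint for
    # structurally identical items
    def __init__(self, x, y):
        self.x, self.y = x, y

# Items created using the class are called instances
p1 = Point(0, 0)
p2 = Point(3, 4)

# View 2: the class seen as the set of all items created
# using that pattern, i.e., the set of all its instances
all_points = [p1, p2]
```

Every element of `all_points` is structurally identical: each has an `x` and a `y` stamped out by the same blueprint.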
Localization
Encapsulation
Information Hiding
Smith and Robson [15] have identified a framework for testing object-
oriented systems which uses seven different testing strategies. Though not
all of these strategies are specific to object-oriented systems, the inheritance
method is. The inheritance method uses regression analysis to determine
which routines should be tested when a change has been made to the
system, and then performs the tests based upon how the super class was
successfully tested. This applies to sub-classes derived from the parent
class. The sub-class under test is treated as a flattened class, except that the
routines from the parent that are unaffected by the subclass are not retested
[15].
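A minimal sketch of that selection rule, assuming we already know which inherited routines the subclass affects (the function and set names are illustrative):

```python
def routines_to_retest(parent_routines, subclass_routines, affected_by_subclass):
    """Treat the subclass as a flattened class (its own routines plus the
    inherited ones), but skip parent routines the subclass leaves unaffected,
    since those were already tested successfully in the super class."""
    flattened = set(parent_routines) | set(subclass_routines)
    unaffected_parent = set(parent_routines) - set(affected_by_subclass)
    return flattened - unaffected_parent

# 'draw' is overridden (affected) by the subclass; 'move' is untouched,
# so only 'draw' and the new routine 'resize' need retesting
retest = routines_to_retest({"draw", "move"}, {"draw", "resize"}, {"draw"})
```

The regression saving comes from the subtraction: the larger the unaffected part of the parent, the fewer routines are re-run.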
Identity Method
Turner and Robson [16] have suggested a new technique for the
validation of object-oriented programs which emphasizes the interaction
between the features and the object's state. Each feature is considered as a
mapping from its starting (input) states to its resultant (output) states,
affected by any stimuli [16]. Substates are defined as the values of a data
item at a specific point in time. These are then analyzed for specific and
general values. Next, the set of states that the ith feature actually accepts as
input (Ii) and the set of states it is able to generate as output (Oi) are
determined for all the features of the class. Test cases are then generated
using the general guidelines provided; for example, one test case should
allocate one substate per data item. Turner and Robson have found this
technique to work best for classes which have many interacting features.
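The "one substate per data item" guideline can be sketched as a simple enumeration; the data items and substate values below are hypothetical, chosen only to illustrate the mechanics:

```python
from itertools import product

# Hypothetical substates (values of a data item at a point in time)
# for two data items of a class under test, covering specific and
# general values
substates = {
    "balance": [0, 100, -1],
    "status":  ["open", "closed"],
}

# Guideline: each test case allocates exactly one substate per data item,
# so the candidate input states are the combinations of the substates
test_cases = [dict(zip(substates, combo))
              for combo in product(*substates.values())]
```

With 3 substates for `balance` and 2 for `status`, this yields 3 × 2 = 6 candidate input states to exercise against each feature's input set Ii.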
The research work on the interaction level [1], [2], [9], interface size
[1], and operation argument complexity [15] metrics has not validated the
proposed metrics empirically. The metrics have, however, been subjectively
validated, where the metric values are compared to expert judgments. In
one such study of OO design quality metrics (including three
Chidamber/Kemerer metrics [17]) by Binkley and Schach [9], the interaction
level metric (also known as the permitted interaction metric) was found to be the
second best at predicting implementation and maintenance effort.
Interaction level (IL) [1], [2], interface size (IS) [1], and operation
argument complexity (OAC) [15] were chosen as measures of design
complexity in this work. All three metrics have been subjectively validated
by comparing their values to experts' judgments and have been found to
perform well [1], [2], [9], [15].
The fundamental basis for the interaction level metric, as well as for
the other two metrics, is the assumption that the greater the interface, the
more scope for (direct) interactions and interaction increases complexity.
This assumption is consistent with the notions of complexity suggested by
various researchers.
In the case of objects and classes, the methods and data attributes are
the set of properties and, therefore, complexity of a class is a function of the
interaction between the methods and the data attributes. The concept of IL
specifies the amount of potential (direct) interaction that can occur in a
system, class, or method.
Both interaction level and interface size metrics use the concept of
"number" and "strength." For example, the interaction level of a method
depends on the number of interactions and the strength of interactions. The
size of a parameter (argument) or attribute is a specified constant, signifying
the complexity of the parameter/attribute type. The strength of interaction is
defined as the product of the sizes of the parameters/attributes involved in
the interaction. It is necessary to use both number and strength because they
typically have an inverse relationship in the sense that decreasing one
increases the other and vice versa. Also, a large increase in either number or
strength (of interactions) could increase the complexity. Accordingly, the
interaction level (IL) of a method is defined as: IL = K1 * (number of
interactions) + K2 * (sum of the strengths of interactions).
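A minimal sketch of this definition, assuming pairwise interactions between named parameters/attributes; the size constants and the K1, K2 weights are illustrative:

```python
def interaction_level(interactions, size, k1=1.0, k2=1.0):
    """IL = K1 * (number of interactions) + K2 * (sum of strengths),
    where the strength of an interaction is the product of the sizes
    of the parameters/attributes involved in it."""
    number = len(interactions)
    strength_sum = sum(size[a] * size[b] for a, b in interactions)
    return k1 * number + k2 * strength_sum

# Hypothetical size constants signifying the complexity of each type
size = {"x": 1, "name": 2, "items": 3}

# Two interactions: (x, name) with strength 1*2 = 2,
# and (name, items) with strength 2*3 = 6
il = interaction_level([("x", "name"), ("name", "items")], size)
```

The number term and the strength term are kept separate precisely because, as noted above, decreasing one typically increases the other.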
Based on the above research model, the main research objective of this
work was the relationship between the design complexity metrics and
maintenance performance. Since the aim is to validate the design complexity
measured using these metrics, if the metrics are indeed valid measures of
design complexity, then a positive correlation between design complexity
and maintenance time is expected. This work examines that relationship in
the contexts of both perfective and corrective maintenance tasks.
The hypotheses for the work are derived from the following
proposition:
P1. There is a relationship between the complexity of a system's design and
the maintenance time required to make changes.
Propositions are generic statements made based on the research model;
P1 is one such statement.
MAINTENANCE TIME PREDICTION PROCESS
[Diagram: system information (class, attribute, and method information) → calculate complexity → metric analysis → complexity report]
5.2 Results
The system is tested with the sample application information. All the
class, method, and attribute information is loaded into the system. First, the
design complexity metrics are calculated, and the maintenance time is then
estimated with the help of these metrics. The application calculates the
maintenance time for each class separately, and the overall system
maintenance time is obtained from the maintenance times of all the classes.
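The aggregation step can be sketched as follows; the linear per-class model and its coefficients are assumptions for illustration, not values from the text:

```python
def class_maintenance_time(complexity, base=5.0, rate=0.4):
    # Assumed linear model: maintenance time for a class grows
    # with its design complexity value
    return base + rate * complexity

def system_maintenance_time(class_complexities):
    # Overall system maintenance time aggregates the per-class estimates
    return sum(class_maintenance_time(c) for c in class_complexities)

# Per-class complexity values for a hypothetical five-class system
total = system_maintenance_time([32, 63, 32, 87, 78])
```

Whatever the fitted model is, the structure is the same: estimate per class from the complexity metric, then sum over all classes.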
The complexity values and the maintenance time are calculated for
different sample systems, and their values are analyzed using graphs. The
maintenance time is calculated for each complexity value. The interaction
level complexity and the maintenance time are plotted in Fig. 5.1; this chart
shows how the complexity values affect the maintenance time for the
system. The complexity values are estimated for each class in the system. In
the same way, the maintenance time for the interface size is shown in
Fig. 5.2, and the operation argument complexity is displayed in Fig. 5.3. All
the charts demonstrate that the maintenance time depends upon the
complexity values.
[Fig. 5.1: Maintenance time vs. interaction level complexity (complexity values 32, 63, 32, 87, 78)]
[Fig. 5.2: Maintenance time vs. interface size complexity (complexity values 25, 30, 25, 38, 35)]
[Fig. 5.3: Maintenance time vs. operation argument complexity (complexity values 21, 25, 21, 33, 29)]
The following results are obtained by this system. Each of the three
complexity metrics by itself is found to be useful in measuring design
complexity. It is not necessary to measure all three metrics for a given
design; instead, any one of the three metrics (IL, IS, OAC) may be used to
predict maintenance performance (the time to perform a given maintenance
task). Since Interface Size and Operation Argument Complexity each
explained more of the variance than Interaction Level did, using one of
them may be the best approach.
Books
McGraw-Hill, 1997.
Hill, 1980.
5. McGraw-Hill, 1999.
a. Hill, 2000.
10. Abreu B.F. and Melo W.L., "Evaluating the Impact of Object-Oriented Design on Software Quality."
12. Chen J-Y. and Lu J-F., "A New Metric for Object-Oriented Design."
13. Chidamber S.R. and Kemerer C.F., "Towards a Metrics Suite for Object-Oriented Design."
WEBSITES
19. www.cs.bham.ac.uk/se/2002/kv/Lecture5/OOTestingSpecialReq5.pdf
20. www.mass.edu/p_p/includes/pipeline/includes/metricswentt.pdf
21. www.ics.ltsn.ac.uk/events/jicc8/jicc8/enggmetricessw.doc
22. http://selab.netlab.uky.edu/Homepage/ISOOMS.pdf