Software Engineering
PRACTICAL FILE
CSE-316

INDEX

Sr.No.  TOPIC
1.  To study software & software engineering. Explain its characteristics, goals and principles.
2.  To study the phase development process of software.
3.  To study SRS (Software Requirements Specification). Explain its need & characteristics.
4.  To study DFD and draw a DFD to calculate the RMS value.
5.  To study what risks are. Explain risk management by different factors.
6.  To study the term cost estimation. Explain the COCOMO model.
7.  To study software design. Explain its fundamentals and design methods.
8.  Study of software testing. Explain testing fundamentals and various techniques.
9.  To study the terms unit testing, integration testing, validation testing, system testing and debugging.
10. To study the term software maintenance. Explain its characteristics and side effects.

EXPERIMENT-1

AIM: To study software and software engineering. Explain its characteristics, goals and principles.

SOFTWARE:

Software is a general term for the various kinds of programs used to operate computers and related devices. (The term hardware describes the physical aspects of computers and related devices.) Software can be described as: a set of computer programs, procedures, and associated documentation concerned with the operation of a data processing system, e.g., compilers, library routines, manuals, and circuit diagrams; information (generally copyrightable) that may provide instructions for computers, data for documentation, and voice, video, and music for entertainment or education; or simply computer instructions or data. Anything that can be stored electronically is software. The storage devices and display devices are hardware.

The terms software and hardware are used as both nouns and adjectives. For example, you can say: "The problem lies in the software," meaning that there is a problem with the program or data, not with the computer itself. You can also say: "It's a software problem." The distinction between software and hardware is sometimes confusing because they are so integrally linked. Clearly, when you purchase a program, you are buying software. But to buy the software, you need to buy the disk (hardware) on which the software is recorded.

Software is often divided into two categories:


Systems software: includes the operating system and all the utilities that enable the computer to function.

Applications software: includes programs that do real work for users. For example, word processors, spreadsheets, and database management systems fall under the category of applications software.

Figure: Software = Program + Operating Procedures + Documentation

Figure: Documentation, which comprises:
1. Specification: context diagram, DFD (data flow diagram)
2. Design: flowchart, ER diagrams
3. Implementation: source code listing, cross-reference listing
4. Test: test data, test results
5. Operating procedures: user manual, operational manual


SOFTWARE ENGINEERING:

Definition 1: "The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software."

Definition 2: "An engineering discipline that is concerned with all aspects of software production."

Definition 3: "The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines."

Definition 4: Software Engineering is an approach to developing software that attempts to treat it as a formal process, more like traditional engineering than the craft that many programmers believe it is.

Definition 5: "Software engineering employs engineering methods, processes, techniques and measurement."

Software Engineering has come to mean at least two different things in our industry. First of all, the term "software engineer" has generally replaced the term "programmer". So, in that sense, there is a tendency to extrapolate in people's minds that Software Engineering is merely the act of programming. Secondly, the term "Software Engineering" has been used to describe the "building of software systems which are so large or so complex that they are built by a team or teams of engineers", as was used in Fundamentals of Software Engineering by Ghezzi, Jazayeri, and Mandrioli.


SOFTWARE VS HARDWARE:

The distinction between hardware and software is artificial. The difference between the two is only one of scale: hardware works in the world of matter, software works in the world of energy. However, whether you're manipulating atoms or electrons, you're still just using the 'rearranging of the physical' as a tool. This is why it is possible for software and hardware to interact. This is why, when you press a mechanical key on your keyboard, a 'virtual' character can appear on the screen. This is why, when a 'virtual' trigger is sprung, a physical activity can be initiated (when a conditional statement is true, the gears in your printer will turn). Hardware and software are not two separate worlds. Rather, they are more like the ocean and the atmosphere of Earth: two varieties of the same concept. In practical terms, we think of the ocean as full of something, but of the atmosphere as empty. Really, we are just swimming in an ocean of air. The ocean and atmosphere are both fluids; one of water, one of air.

CHARACTERISTICS OF SOFTWARE ENGINEERING:

Quality characteristics are refined into sub-characteristics until the attributes or measurable properties are obtained. In this context, a metric or measure is defined as a measurement method, and measurement means to use a metric or measure to assign a value. In order to monitor and control software quality during the development process, the external quality requirements are translated or transferred into the requirements of intermediate products obtained from development activities. The translation and selection of the attributes is a non-trivial activity, depending on the stakeholder's personal experience, unless the organization provides an infrastructure to collect and analyze previous experience on completed projects. The definition of the main quality characteristics of the ISO 9126-1 standard for software quality measurement is shown in the table below. The model should be adapted or customized to the specific application


or product domain. In this sense, for a particular software product we could have a subset of the six characteristics.

Functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions (what the software does to fulfil needs).

Reliability: The capability of the software product to maintain its level of performance under stated conditions for a stated period of time.

Usability: The capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions (the effort needed for use).

Efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.

Maintainability: The capability of the software product to be modified. Modifications may include corrections, improvements or adaptations of the software to changes in the environment and in the requirements and functional specifications (the effort needed to be modified).

Portability: The capability of the software product to be transferred from one environment to another. The environment may include the organizational, hardware or software environment.

SOFTWARE ENGINEERING GOALS:

Software Engineering embodies many techniques that could arguably require volumes to describe. Generally such practices as top-down design, structured programming and design, pseudocode with iterative refinement, walk-throughs and OOP (Object Oriented Programming) are considered to be part of this discipline. Many advanced constructs in modern programming languages such as C++ were incorporated to meet the Software Engineering Goals given below.

All programming projects should begin with functional descriptions which become the highest level pseudocode. Every module and every routine, high or low level, should have a functional description appropriate to its level of abstraction.

Functional descriptions determine high-level pseudocode, which should be iteratively refined to low-level pseudocode. The pseudocode becomes internal documentation.

Pseudocode should be organized into modules, thus ultimately organizing the project into modules.

Computer code for all routines and main programs should be generated from low level pseudocode. The pseudocode should become internal documentation.

Module interface specifications should be created so that modules may be created independently.

Programs should be constructed from modules. All support routines should reside in separately compiled modules. Each module should only contain routines whose functions are related. Inter-module communication should be accomplished solely through passing parameters. No global variables should be used. Parameters should be passed "by value" where required access is read-only.

Communication between routines in the same module should also only be accomplished through parameter passing.
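As a minimal illustration of these goals, the hypothetical Python sketch below keeps a "module" self-contained: its routines have simple, explicit functions, communicate only through parameters and return values, use no global variables, and leave the caller's data unmodified where access should be read-only. The module and function names are invented for this example, not prescribed by the lab.

# stats_module.py: a self-contained "module" whose routines communicate
# only through parameters and return values (no global variables).

def mean(values):
    """Low-level routine with a simple, explicit function: average a list."""
    return sum(values) / len(values)

def scale(values, factor):
    """Pure function: returns a new list; the caller's list is read-only."""
    return [v * factor for v in values]

# Main program: built from the module's routines, passing data explicitly.
if __name__ == "__main__":
    readings = [3.0, 4.0, 5.0]
    print(mean(scale(readings, 2.0)))   # prints 8.0; readings is unchanged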


Low level routines should have simple and explicit functions. For example low level routines should not do any I/O (especially interaction with the user) unless that is the sole function of that routine.

Support routines should be evaluated with respect to flexibility and efficiency. Include files should be used where required to effect inter-module independence and consistency. For example data-type definitions should be included to attain consistent definitions global to all modules.

Changes (such as changes to underlying data structures) in low level routines should not require source code changes in high level routines. This is accomplished by carefully constructing the calling interface to adhere to data abstraction principles.

Changes (such as changes to the driving application) in high level routines should not require source code changes in low level routines. This is accomplished by carefully constructing the calling interface to adhere to data abstraction principles.

Changes to high level routines should not require low level routines to be recompiled.

Changes to low level routines should not require high level routines to be recompiled. (This is generally very difficult to achieve.)

Modules should be constructed so that private support routines are not accessible outside the module. This may be accomplished using submodules or language-dependent protection mechanisms.

Modules should be constructed so that internal data structures can be accessed only through authorized routines.


All program changes should start with a change to the highest level pseudocode appropriate to that change. This ensures that pseudocode accurately reflects the structure of the underlying code.

PRINCIPLES OF SOFTWARE ENGINEERING:

1. Make quality the number one priority.
2. High-quality software is possible.
3. Give products to customers early.
4. Determine the problem before writing requirements.
5. Evaluate design alternatives.
6. Use an appropriate process model.
7. Use different languages for different phases.
8. Minimize intellectual distance.
9. Put technique before tools.
10. Get it right before you make it faster.
11. Inspect code.
12. Good management is more important than good technology.


EXPERIMENT-2

AIM: To study the phase development process of software.

Software development is achieved through a collection of activities that lead to the building of a software product. Software development is traditionally divided into various phases, which we describe briefly in the following sections. For small projects, the phases are carried out in the order shown; for larger projects, the phases are interleaved or even applied repetitively with gradually increasing refinement. The strict definition and organization of these phases is called a software process.
1. Software specification
2. Software development
3. Software validation
4. Software evolution

Software development is the way by which we produce software according to the requirements.

Software specification: the functioning of the software and the constraints on its operation must be defined.
Software development: software that meets the specification must be developed.
Software validation: the software must be validated to ensure that it does what the customer needs.
Software evolution: the software must evolve to meet the changing needs of the customer.

Study of various software development phases:


The different phases, starting from the feasibility study to the integration and system testing phase, are known as the development phases. The life cycle is broken down into an intuitive set of phases. The different phases are: feasibility study, requirements analysis and specification, design, coding and unit testing, integration and system testing, and maintenance.

Figure: Phases of the software life cycle:
Feasibility study -> Requirements analysis and specification -> Design -> Coding and unit testing -> Integration and system testing -> Maintenance


Each phase of the life cycle has well-defined starting & ending criteria which typically need to be documented in the form of text description. So it is known when to stop and start a phase. Good software development organizations normally document all the information regarding the outputs to be produced at the end of different phases.

FEASIBILITY STUDY:

The main aim of this activity is to determine whether it would be financially and technically feasible to develop the product. The feasibility study activity involves the analysis of the problem and collecting the relevant information regarding the product. The collected data are analyzed to arrive at the following:

1. An abstract problem definition, which is a rough description of the problem that considers only the important requirements and ignores the rest.
2. Formulation of the different solution strategies.
3. Analysis of alternative solution strategies to compare their benefits and shortcomings. This usually requires making approximate estimates of the resources required, cost of development, and development time for each of the options.

Once the best solution is identified, all the later phases of development are carried out as per this solution. During the feasibility study it may also come to light that none of the solutions is feasible due to high cost or technical reasons.


REQUIREMENTS ANALYSIS AND SPECIFICATION:

The goal of this phase is to understand the exact requirements of the customer and to document them properly. This activity is usually executed together with the customer, as the goal is to document all functional, performance and interfacing requirements for the software. The requirements describe the "what" of a system. This phase produces a large document containing a description of what the system will do, without describing how it will be done. The resultant document is known as the software requirements specification (SRS) document. The SRS document may act as a contract between the developer and the customer. The customer requirements identified during the requirements gathering and analysis activity are organized into the SRS document. The important components are the functional and non-functional requirements, and the goals of implementation. The SRS document is written using end-user terminology, which makes the document understandable by the customer.

DESIGN:

The goal of the design phase is to transform the requirements specified in the SRS document into a structure that is suitable for implementation in some programming language. Here the overall software architecture is defined, and the high-level and detailed design work is performed. This work is documented and known as the software design description (SDD) document.

CODING AND UNIT TESTING:


During this phase the design is implemented, provided the SDD is complete. The implementation proceeds smoothly because all the information needed by the software developers is contained in the SDD. A coding standard addresses issues such as the standard ways of laying out the program code, the template for laying out the function and module headers, commenting guidelines, and the maximum number of source lines permitted in each module.

During this phase, each module is unit tested to determine the correct working of all the individual modules. It involves testing each module in isolation, as this is the most efficient way to debug the errors identified at this stage. Unit testing also involves a precise definition of test cases, testing criteria, and management of test cases.
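To make the idea of a test case concrete, here is a minimal, hypothetical unit test sketch in Python using the standard unittest module. The function under test (rms, anticipating Experiment 4) and its test values are invented for illustration, not part of any prescribed lab code.

import math
import unittest

def rms(values):
    """Function under test: root mean square of a list of numbers."""
    return math.sqrt(sum(v * v for v in values) / len(values))

class TestRms(unittest.TestCase):
    def test_known_value(self):
        # RMS of 3 and 4 is sqrt((9 + 16) / 2) = sqrt(12.5)
        self.assertAlmostEqual(rms([3, 4]), math.sqrt(12.5))

    def test_single_value(self):
        # The RMS of a single value is the value itself.
        self.assertAlmostEqual(rms([5]), 5.0)

if __name__ == "__main__":
    unittest.main()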

INTEGRATION AND SYSTEM TESTING:

This is a very important phase. Effective testing will contribute to the delivery of a higher quality software product, more satisfied users, lower maintenance costs, and more accurate and dependable results; testing is, however, a very expensive activity. Integration is normally carried out incrementally over a number of steps. During each integration step, the partially integrated system is tested and a set of previously planned modules are added to it. Finally, when all the modules have been successfully integrated and tested, system testing is carried out. After the requirements specification phase, a system test plan can be prepared which documents the plan for system testing.


MAINTENANCE:

Software maintenance is a task that every development group has to face when the software is delivered to the customer's site, installed, and made operational. Software maintenance is a very broad activity that includes error correction, enhancement of capabilities, and optimization.

Maintenance of a typical software product requires much more effort than the effort necessary to develop the product itself. Maintenance involves performing any one or more of the following three kinds of activities.

Corrective maintenance: correcting errors that were not discovered during the product development phase.

Perfective maintenance: improving the implementation of the system, and enhancing the functionalities of the system according to the customer's requirements.

Adaptive maintenance: porting the software to work in a new environment.


EXPERIMENT-3

AIM: To study software requirements and specification. Explain the needs, goals and characteristics of an SRS.

Software Requirements Specification (SRS)

An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and dependencies at a particular point in time (usually) prior to any actual design or development work. It's a two-way insurance policy that assures that both the client and the organization understand the other's requirements from that perspective at a given point in time.

The SRS document itself states in precise and explicit language those functions and capabilities a software system (i.e., a software application, an eCommerce Web site, and so on) must provide, as well as states any required constraints by which the system must abide. The SRS also functions as a blueprint for completing a project with as little cost growth as possible. The SRS is often referred to as the "parent" document because all subsequent project management documents, such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans, are related to it. It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be.

Requirement Gathering:-

Requirement gathering is usually the first part of any software product. This stage starts when you are thinking about developing software. In this phase, you meet customers or prospective customers, analyzing market requirements and features that


are in demand. You also find out if there is a real need in the market for the software product you are trying to develop. In this stage, marketing and sales people, or people who have direct contact with the customers, do most of the work. These people talk to the customers and try to understand what they need. A comprehensive understanding of the customer's needs and writing down the features of the proposed software product are the keys to success in this phase. This phase is actually the base for the whole development effort. If the base is not laid correctly, the product will not find a place in the market. If you develop a very good software product which is not required in the market, it does not matter how well you build it. You can find many stories about software products that failed in the market because the customers did not require them.

Requirement Analysis:-

Requirement analysis is basically an organization's understanding (in writing) of a customer or potential client's system requirements prior to any actual design or development work. Good requirement analysis practices reduce project risk and help the project run smoothly. Having seen so many projects delivered, we understand how to do project requirement analysis to provide a better outcome for the project. iLink provides the requirement analysis document for the representation of the software for the customer's review and approval. We will help you identify your problems even if you don't know clearly what you really want.


From the above diagram, the analyst is the technical person. Scoping contains the business entities, stakeholders and actors. Stakeholders and actors are business decision makers and influencers. Influencers will interconnect the client and the service provider (like iLink). The requirement analysis document contains both functional and non-functional requirements. Non-functional requirements cover usability, scalability, extensibility, performance, security and maintainability. They are all very important for a project, even though they are not functional requirements.

Tools of requirement analysis:-


1. Data flow diagram (DFD)
2. Data dictionary (DD)
3. Entity-Relationship diagrams (E-R diagrams)

Software Requirements Specification

There are many good definitions of System and Software Requirements Specifications that will provide us a good basis upon which we can both define a great specification and help us identify deficiencies in our past efforts. There is also a lot of great stuff on the web about writing good specifications. The problem is not lack of knowledge about how to create a correctly formatted specification or even what should go into the specification. The problem is that we don't follow the definitions out there. We have to keep in mind that the goal is not to create great specifications but to create great products and great software. Can you create a great product without a great specification? Absolutely! You can also make your first million through the lottery but why take your chances? Systems and software these days are so complex that to embark on the design before knowing what you are going to build is foolish and risky.

Needs of an SRS:-

Establish the basis for agreement between the customers and the suppliers on what the software product is to do. The complete description of the functions to be performed by the software specified in the SRS will assist the potential users to determine if the software specified meets their needs, or how the software must be modified to meet their needs. [NOTE: We use it as the basis of our contract with our clients all the time.]

Reduce the development effort. The preparation of the SRS forces the various concerned groups in the customer's organization to consider rigorously all of the requirements before design begins, and reduces later redesign, recoding, and retesting.


Careful review of the requirements in the SRS can reveal omissions, misunderstandings, and inconsistencies early in the development cycle, when these problems are easier to correct.

Provide a basis for estimating costs and schedules. The description of the product to be developed as given in the SRS is a realistic basis for estimating project costs and can be used to obtain approval for bids or price estimates. [NOTE: Again, we use the SRS as the basis for our fixed price estimates.]

Provide a baseline for validation and verification. Organizations can develop their validation and verification plans much more productively from a good SRS. As a part of the development contract, the SRS provides a baseline against which compliance can be measured. [NOTE: We use the SRS to create the Test Plan.]

Characteristics of an SRS:-
a) Correct
b) Unambiguous
c) Complete
d) Consistent
e) Ranked for importance and/or stability
f) Verifiable
g) Modifiable
h) Traceable

Correct - This is like motherhood and apple pie. Of course you want the specification to be correct. No one writes a specification that they know is incorrect. We like to say "Correct and ever correcting." The discipline is keeping the specification up to date when you find things that are not correct.


Unambiguous - An SRS is unambiguous if, and only if, every requirement stated therein has only one interpretation. Again, easier said than done. Spending time on this area prior to releasing the SRS can be a waste of time. But as you find ambiguities - fix them.

Complete - A simple judge of this is that it should be all that is needed by the software designers to create the software.

Consistent - The SRS should be consistent within itself and consistent with its reference documents. If you call an input "Start and Stop" in one place, don't call it "Start/Stop" in another.

Ranked for Importance - Very often a new system has requirements that are really marketing wish lists. Some may not be achievable. It is useful to provide this information in the SRS.

Verifiable - Don't put in requirements like - "It should provide the user a fast response." Another of my favorites is - "The system should never crash." Instead, provide a quantitative requirement like: "Every key stroke should provide a user response within 100 milliseconds."

Modifiable - Having the same requirement in more than one place may not be wrong but tends to make the document not maintainable.

Traceable - Often, this is not important in a non-politicized environment. However, in most organizations, it is sometimes useful to connect the requirements in the SRS to a higher level document. Why do we need this requirement?


Goals of an SRS:-

A well-designed, well-written SRS accomplishes four major goals:

1. It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in natural language (versus a formal language, explained later in this article), in an unambiguous manner that may also include charts, tables, data flow diagrams, decision tables, and so on.

2. It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.

3. It serves as an input to the design specification. As mentioned previously, the SRS serves as the parent document to subsequent documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the functional system requirements so that a design solution can be devised.

4. It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will be applied to the requirements for verification.


EXPERIMENT-4

AIM: To study DFD and draw a DFD to calculate the RMS value.

Data Flow Diagram (DFD):-

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. A data flow diagram can also be used for the visualization of data processing (structured design). It is common practice for a designer to draw a context-level DFD first, which shows the interaction between the system and outside entities. This context-level DFD is then "exploded" to show more detail of the system being modeled. It is also known as a bubble chart. Data flow diagrams (DFDs) are one of the three essential perspectives of the Structured Systems Analysis and Design Method. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a data flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's data flow diagrams can be drawn up and compared with the new system's data flow diagrams to draw comparisons and implement a more efficient system. Data flow diagrams can be used to provide the end user with a physical idea of where the data they input ultimately has an effect upon the structure of the whole system, from order to dispatch to restock; how any system is developed can be determined through a data flow diagram. Developing a DFD helps in identifying the transaction data in the data model.

Top-Down Approach

1. The system designer makes a context level DFD, which shows the interaction (data flows) between the system (represented by one process) and the system environment (represented by terminators).
2. The system is decomposed in the lower level DFD (level 0) into a set of processes, data stores, and the data flows between these processes and data stores.
3. Each process is then decomposed into an even lower level diagram containing its subprocesses.
4. This approach then continues on the subsequent subprocesses, until a necessary and sufficient level of detail is reached, which is called the primitive process (aka chewable in one bite).


Purpose/Objective:

The purpose of data flow diagrams is to provide a semantic bridge between users and systems developers. The diagrams are:

1. graphical, eliminating thousands of words;
2. logical representations, modeling WHAT a system does, rather than physical models showing HOW it does it;
3. hierarchical, showing systems at any level of detail; and
4. jargon-less, allowing user understanding and reviewing.

The goal of data flow diagramming is to have a commonly understood model of a system. The diagrams are the basis of structured systems analysis. Data flow diagrams are supported by other techniques of structured systems analysis such as data structure diagrams, data dictionaries, and procedure-representing techniques such as decision tables, decision trees, and structured English.

Data flow diagrams have the objective of avoiding the cost of:

1. user/developer misunderstanding of a system, resulting in a need to redo systems or in not using the system;
2. having to start documentation from scratch when the physical system changes, since the logical system (WHAT gets done) often remains the same when technology changes;
3. systems inefficiencies, because a system gets "computerized" before it gets "systematized";
4. being unable to evaluate system project boundaries or degree of automation, resulting in a project of inappropriate scope.


Symbols of DFD

Table: DFD diagram elements and their graphical presentation: process, external entity, data store, data flow. (The symbols are described below.)

There are essentially four different types of symbols used to construct DFDs. These primitive symbols are depicted in fig. 1. The meaning of each symbol is explained below:

Function symbol. This symbol is called a process or a bubble. Bubbles are annotated with the names of the corresponding functions.


External entity symbols. A rectangle represents an external entity such as a librarian, a library member, etc. The external entities are essentially those physical entities which are external to the software system and interact with the system by inputting data to the system or by consuming the data produced by the system.

Data Flow Symbol. A directed arc or an arrow is used as a data flow symbol. A data flow symbol represents the data flow occurring between two processes or between an external entity and a process in the direction of the data flow arrow. Data flow symbols are usually annotated with the corresponding data names.

Data store symbol. Open boxes are used to represent data stores. A data store represents a logical file, a data structure or a physical file on disk.

How to draw the DFD:-

1. Identify all the external entities which act as sources or sinks for the system.
2. Draw a top-level, single process, data flow diagram which shows the above external entities.
3. The context diagram is refined by decomposing the single process into several more, maintaining the data flows with the external entities.
4. Repeat the above step for each diagram produced.
5. Starting from the sources, ask "What process needs this input?"
6. Draw that process, then ask, "What output does this process produce?" This will give a clue as to the next processes in the chain.
7. Repeat the first question to give the identity of the next process. Connect the processes by a data flow.
8. Make chains in the graph. Use the previous questions to produce some chains; the common links (processes, flows and entities) in the chain can then be collapsed to produce a directed graph.
9. Do not draw a single layer with more than approximately 5-7 processes.
10. Do not connect a source directly to a sink.

DFD Principles

The general principle in Data Flow Diagramming is that a system can be decomposed into subsystems, and subsystems can be decomposed into lower level subsystems, and so on.


Each subsystem represents a process or activity in which data is processed. At the lowest level, processes can no longer be decomposed. Each 'process' (and from now on, by 'process' we mean subsystem and activity) in a DFD has the characteristics of a system. Just as a system must have input and output (if it is not dead), so a process must have input and output. Data enters the system from the environment; data flows between processes within the system; and data is produced as output from the system

Leveling of the DFD :-

Context Diagram (Level 0 DFD):-

The context diagram is the most abstract data flow representation of a system. It represents the entire system as a single bubble. This bubble is labeled according to the main function of the system. The various external entities with which the system interacts and the data flows occurring between the system and the external entities are also represented. The data input to the system and the data output from the system are represented as incoming and outgoing arrows. These data flow arrows should be annotated with the corresponding data names. The name context diagram is well justified because it represents the context in which the system is to exist, i.e. the external entities (users) who would interact with the system, the specific data items they would be supplying to the system, and the data items they would be receiving from the system. The context diagram is also called the level 0 DFD. To develop the context diagram of the system, we have to analyze the SRS document to identify the different types of users who would be using the system, the kinds of data they would be inputting to the system, and the data they would be receiving from the system. Here, the term users of the system also includes the external systems which supply data to or receive data from the system. The bubble in the context diagram is annotated with the name of the software system being developed (usually a noun). This is in contrast with the bubbles in all other

Software Engineering
PRACTICAL FILE
CSE-316
GOLPURA, BARWALA CSE LAB (SE) 030
PAGE NO

levels, which are annotated with verbs. This is to be expected, since the purpose of the context diagram is to capture the context of the system rather than its functionality.

Level 1 DFD:-

To develop the level 1 DFD, examine the high-level functional requirements. If there are between three to seven high-level functional requirements, then these can be directly represented as bubbles in the level 1 DFD. We can then examine the input data to these functions and the data output by these functions and represent them appropriately in the diagram. If a system has more than seven high-level requirements, then some of the related requirements have to be combined and represented in the form of a bubble in the level 1 DFD. These can be split in the lower DFD levels. If a system has less than three high-level functional requirements, then some of the high-level requirements need to be split into their sub functions so that we have roughly about five to seven bubbles on the diagram.

DFD for the RMS value:-
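The original figure for this DFD is not reproduced here. As a hedged sketch of what such a diagram typically computes, the level 1 bubbles of an RMS calculator would be: read the input numbers, square each value, take the mean of the squares, take the square root, and output the result. The Python program below mirrors that decomposition; the function names are invented to match these hypothetical bubbles.

import math

# Each function below corresponds to one level 1 "bubble" of the
# hypothetical RMS data flow diagram.

def read_numbers(raw):
    """Input bubble: parse whitespace-separated numbers."""
    return [float(tok) for tok in raw.split()]

def square_all(values):
    """Process bubble: square each input value."""
    return [v * v for v in values]

def mean_of(values):
    """Process bubble: mean of the squared values."""
    return sum(values) / len(values)

def rms(values):
    """Process bubble: square root of the mean square."""
    return math.sqrt(mean_of(square_all(values)))

if __name__ == "__main__":
    data = read_numbers("3 4 5")
    # RMS of 3, 4, 5 = sqrt((9 + 16 + 25) / 3), approximately 4.082
    print("RMS value:", rms(data))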


Experiment-5

AIM: To study what risks are and to explain risk management and the different factors.

RISKS

A risk is a probability that some adverse circumstance will actually occur. Risks may threaten the project, the software that is being developed or the organization. These categories of risks can be defined as:

1. Project risks are the risks which affect the project schedule or resources.
2. Product risks are risks which affect the quality or performance of the software being developed.
3. Business risks are risks which affect the organization developing or procuring the software.

RISK MANAGEMENT

Risk management is the process of measuring, or assessing, risk and developing strategies to manage it. Strategies include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk. Traditional risk management focuses on risks stemming from physical or legal causes (e.g. natural disasters or fires, accidents, death, and lawsuits). Financial risk management, on the other hand, focuses on risks that can be managed using traded financial instruments.


STEPS IN RISK MANAGEMENT PROCESS

Figure: Steps in the risk management process, each stage with its output:
1. Risk identification -> list of potential risks
2. Risk analysis -> prioritized risk list
3. Risk planning -> risk avoidance and contingency plans
4. Risk monitoring -> risk assessment

ESTABLISH THE CONTEXT

1. Planning the remainder of the process.
2. Mapping out the following: the scope of the exercise, the identity and objectives of stakeholders, and the basis upon which risks will be evaluated.
3. Defining a framework for the process and an agenda for identification.
4. Developing an analysis of the risk involved in the process.

RISK IDENTIFICATION

After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems. Hence, risk identification can start with the source of problems, or with the problem itself.


Source analysis: Risk sources may be internal or external to the system that is the target of risk management. Examples of risk sources are: stakeholders of a project, employees of a company, or the weather over an airport.

Problem analysis: Risks are related to identified threats. For example: the threat of losing money, the threat of abuse of privacy information, or the threat of accidents and casualties. The threats may exist with various entities, most importantly with shareholders, customers and legislative bodies such as the government.

When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; privacy information may be stolen by employees even within a closed network; lightning striking a Boeing 747 during takeoff may make all people onboard immediate casualties.

The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are:

Objectives-based risk identification: Organizations and project teams have objectives. Any event that may endanger achieving an objective partly or completely is identified as risk. Objective-based risk identification is at the basis of COSO's Enterprise Risk Management - Integrated Framework.

Scenario-based risk identification: In scenario analysis, different scenarios are created. The scenarios may be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. Any event that triggers an undesired scenario alternative is identified as risk - see Futures Studies for the methodology used by Futurists.

Taxonomy-based risk identification: The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled.


The answers to the questions reveal risks. Taxonomy-based risk identification in the software industry can be found in CMU/SEI-93-TR-6.

Common-risk checking: In several industries, lists of known risks are available. Each risk in the list can be checked for application to a particular situation. An example of known risks in the software industry is the Common Vulnerability and Exposures list found at http://cve.mitre.org.

Risk charting: This method combines the above approaches by listing resources at risk, threats to those resources, modifying factors which may increase or reduce the risk, and the consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively, one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.

Categories of risks:

Schedule risk: The project schedule slips when project tasks and schedule release risks are not addressed properly. Schedule risks mainly affect the project and, finally, the company economy, and may lead to project failure. Schedules often slip due to the following reasons:

1. Wrong time estimation.
2. Resources (staff, systems, skills of individuals, etc.) are not tracked properly.
3. Failure to identify complex functionalities and the time required to develop those functionalities.
4. Unexpected project scope expansions.

Budget risk:

1. Wrong budget estimation.
2. Cost overruns.
3. Project scope expansion.

Operational risks: Risks of loss due to improper process implementation, a failed system, or some


external events. Causes of operational risks:

1. Failure to address priority conflicts.
2. Failure to resolve the responsibilities.
3. Insufficient resources.
4. No proper subject training.
5. No resource planning.
6. No communication in the team.

Technical risks: Technical risks generally lead to failure of functionality and performance. Causes of technical risks are:

1. Continuously changing requirements.
2. No advanced technology available, or the existing technology is in its initial stages.
3. The product is complex to implement.
4. Difficult project module integration.

Programmatic risks: These are the external risks beyond the operational limits. They are all uncertain risks that are outside the control of the program. These external events can be:

1. Running out of funds.
2. Market development.
3. Changing customer product strategy and priority.
4. Government rule changes.

Project risk: In advanced capital budgeting topics, the total risk associated with an investment project. Sometimes referred to as stand-alone project risk. In advanced capital budgeting, project risk is partitioned into systematic and unsystematic project risk.

Business risk: The uncertainty associated with a business firm's operating environment and reflected in the variability of earnings before interest and taxes (EBIT). Since this earnings measure has not had financing expenses removed, it reflects the risk


associated with business operations rather than methods of debt financing. This risk is often discussed in General Business Management courses.

RISK ASSESSMENT

Once risks have been identified, they must then be assessed as to their potential severity of loss and to the probability of occurrence. These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan.

The fundamental difficulty in risk assessment is determining the rate of occurrence since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for immaterial assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should produce such information for the management of the organization that the primary risks are easy to understand and that the risk management decisions may be prioritized. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is:

Risk = rate of occurrence x impact of the event
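A hedged sketch of how this formula is applied in practice: compute the exposure of each identified risk as probability times impact, then sort to obtain the prioritized risk list described earlier in the risk management process. The example risks and numbers below are invented purely for illustration.

# Hypothetical risk register: (risk name, probability of occurrence,
# impact in cost units). Exposure = probability x impact, per the formula.
risks = [
    ("Key staff leave the project", 0.30, 50000),
    ("Requirements change late",    0.60, 20000),
    ("Hardware delivery delayed",   0.10, 80000),
]

# Prioritize: highest exposure first.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, impact in prioritized:
    print(f"{name}: exposure = {prob * impact:,.0f}")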

Later research has shown that the financial benefits of risk management are less dependent on the formula used but are more dependent on the frequency and how risk assessment is performed.


In business it is imperative to be able to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for the US governmental agencies. The formula proposes calculation of ALE (annualised loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis).

POTENTIAL RISK TREATMENTS

Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories: (Dorfman, 1997)

1. Tolerate (aka retention)
2. Treat (aka mitigation)
3. Terminate (aka elimination)
4. Transfer (aka buying insurance)

Ideal use of these strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions.

RISK AVOIDANCE

This includes not performing an activity that could carry risk. An example would be not buying a property or business in order to not take on the liability that comes with it. Another would be not flying in order to not take the risk that the airplane were to be hijacked. Avoidance may seem the answer to all risks, but


avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.

RISK REDUCTION

Involves methods that reduce the severity of the loss. Examples include sprinklers designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy.

Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration. A current trend in software development, spearheaded by the Extreme Programming community, is to reduce the size of iterations to the smallest size possible, sometimes as little as one week is allocated to an iteration.

RISK RETENTION

Involves accepting the loss when it occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Also, any amounts of potential loss (risk) over the

Software Engineering
PRACTICAL FILE
CSE-316
GOLPURA, BARWALA CSE LAB (SE) 040
PAGE NO

amount insured are retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great it would hinder the goals of the organization too much.

RISK TRANSFER

Means causing another party to accept the risk, typically by contract or by hedging. Insurance is one type of risk transfer that uses contracts. Other times it may involve contract language that transfers a risk to another party without the payment of an insurance premium. Liability among construction or other contractors is very often transferred this way. On the other hand, taking offsetting positions in derivatives is typically how firms use hedging to financially manage risk.

Some ways of managing risk fall into multiple categories. Risk retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group up front, but instead losses are assessed to all members of the group.

CREATE THE PLAN

Decide on the combination of methods to be used for each risk. Each risk management decision should be recorded and approved by the appropriate level of management. For example, a risk concerning the image of the organization should have top management decision behind it whereas IT management would have the authority to decide on computer virus risks.


The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing anti virus software. A good risk management plan should contain a schedule for control implementation and responsible persons for those actions.

According to ISO/IEC 27001, the stage immediately after completion of the Risk Assessment phase consists of preparing a Risk Treatment Plan, which should document the decisions about how each of the identified risks should be handled. Mitigation of risks often means selection of Security Controls, which should be documented in a Statement of Applicability, which identifies which particular control objectives and controls from the standard have been selected, and why.

IMPLEMENTATION

Follow all of the planned methods for mitigating the effect of the risks. Purchase insurance policies for the risks that have been decided to be transferred to an insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce others, and retain the rest.

REVIEW AND EVALUATION OF THE PLAN

Initial risk management plans will never be perfect. Practice, experience, and actual loss results will necessitate changes in the plan and contribute information to allow possible different decisions to be made in dealing with the risks being faced.

Risk analysis results and management plans should be updated periodically. There are two primary reasons for this:


1. to evaluate whether the previously selected security controls are still applicable and effective, and
2. to evaluate the possible risk level changes in the business environment. Information risks, for example, are a good example of a rapidly changing business environment.

Limitations

If risks are improperly assessed and prioritized, time can be wasted in dealing with risks of losses that are not likely to occur. Spending too much time assessing and managing unlikely risks can divert resources that could be used more profitably. Unlikely events do occur, but if the risk is unlikely enough to occur it may be better to simply retain the risk and deal with the result if the loss does in fact occur. Prioritizing the risk management processes too highly could keep an organization from ever completing a project or even getting started. This is especially true if other work is suspended until the risk management process is considered complete. It is also important to keep in mind the distinction between risk and uncertainty. Risk can be measured by impacts x probability.


EXPERIMENT 6

AIM: To study the term cost estimation. Explain the COCOMO model.

Introduction

Software cost estimation is the process of predicting the amount of effort required to build a software system. Cost estimates are needed throughout the software lifecycle. Preliminary estimates are required to determine the feasibility of a project. Detailed estimates are needed to assist with project planning. The actual effort for individual tasks is compared with estimated and planned values, enabling project managers to reallocate resources when necessary. Analysis of historical project data indicates that cost trends can be correlated with certain measurable parameters. This observation has resulted in a wide range of models that can be used to assess, predict, and control software costs on a real-time basis. Models provide one or more mathematical algorithms that compute cost as a function of a number of variables.

Size

Size is a primary cost factor in most models. There are two common ways to measure software size: lines of code and function points.

Lines of Code

The most commonly used measure of source code program length is the number of lines of code (LOC) (Fenton, 1997). The abbreviation NCLOC represents a non-commented source line of code, sometimes also called effective lines of code (ELOC); NCLOC is therefore a measure of the uncommented length. The commented length is also a valid measure, depending on whether or not line documentation is considered to be a part of programming effort. The abbreviation CLOC represents a commented source line of code (Fenton, 1997). By measuring NCLOC and CLOC separately we can define:

    total length (LOC) = NCLOC + CLOC
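As a rough illustration of the NCLOC/CLOC split, the sketch below counts lines in a Python source file. Treating any line that starts with '#' as a comment line is a simplification; real counters are language-aware and handle mixed code-and-comment lines. The file name is hypothetical.

```python
# Minimal sketch: split a source file into NCLOC and CLOC.
# Simplification: a line starting with '#' counts as a comment line;
# blank lines are counted in neither measure.
def count_loc(path: str) -> tuple[int, int]:
    ncloc = cloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue                 # blank line: ignored
            if stripped.startswith("#"):
                cloc += 1                # commented line
            else:
                ncloc += 1               # non-commented (effective) line
    return ncloc, cloc

ncloc, cloc = count_loc("example.py")    # hypothetical file name
print(f"NCLOC={ncloc}  CLOC={cloc}  total LOC={ncloc + cloc}")
```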


KLOC is used to denote thousands of lines of code.

Function Points

Function points (FP) measure size in terms of the amount of functionality in a system. Function points are computed by first calculating an unadjusted function point count (UFC). Counts are made for the following categories (Fenton, 1997):

External inputs - items provided by the user that describe distinct application-oriented data (such as file names and menu selections)
External outputs - items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these)
External inquiries - interactive inputs requiring a response
External files - machine-readable interfaces to other systems
Internal files - logical master files in the system
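To make the UFC computation concrete, here is a minimal sketch using the commonly cited "average" complexity weights for the five categories; both the use of average weights throughout and the counts themselves are illustrative assumptions:

```python
# Minimal sketch: unadjusted function point count (UFC), using the
# commonly cited "average" complexity weights. Counts are hypothetical.
WEIGHTS = {
    "external_inputs":    4,
    "external_outputs":   5,
    "external_inquiries": 4,
    "external_files":     7,
    "internal_files":    10,
}

counts = {
    "external_inputs":    20,
    "external_outputs":   15,
    "external_inquiries": 10,
    "external_files":      4,
    "internal_files":      6,
}

ufc = sum(WEIGHTS[category] * n for category, n in counts.items())
print(f"UFC = {ufc}")   # 20*4 + 15*5 + 10*4 + 4*7 + 6*10 = 283
```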

THE COCOMO MODEL

A number of algorithmic models have been proposed as the basis for estimating the effort, schedule and costs of a software project. These are conceptually similar but use different parameter values. In 1981, Barry Boehm designed COCOMO to give an estimate of the number of man-months it will take to develop a software product. References to this model typically call it COCOMO 81. In 1990, a new model called COCOMO II appeared. Generally, references to COCOMO before 1995 refer to the original COCOMO model, and references after 1995 refer to COCOMO II. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components.

This "COnstructive COst MOdel" drew on a study of about sixty projects at TRW, a Californian automotive and IT company that Northrop Grumman acquired in late 2002. The study examined programs ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I.


COCOMO consists of a hierarchy of three increasingly detailed and accurate forms.

Basic COCOMO - a static, single-valued model that computes software development effort (and cost) as a function of program size expressed in estimated lines of code.
Intermediate COCOMO - computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes.
Detailed COCOMO - incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process.

Basic COCOMO

Basic COCOMO is a form of the COCOMO model. COCOMO applies to three classes of software projects:

Organic projects - relatively small, simple software projects in which small teams with good application experience work to a set of less than rigid requirements.
Semi-detached projects - intermediate (in size and complexity) software projects in which teams with mixed experience levels must meet a mix of rigid and less than rigid requirements.
Embedded projects - software projects that must be developed within a set of tight hardware, software, and operational constraints.

The basic COCOMO equations take the form

    E = a_b * (KLOC)^(b_b)
    D = c_b * (E)^(d_b)
    P = E / D

where E is the effort applied in person-months, D is the development time in chronological months, KLOC is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b and d_b are given in the following table.


Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs, but it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on software costs, which limits its accuracy.

COCOMO 81 assumed that the software would be developed according to a waterfall process using standard imperative programming languages such as C or FORTRAN. However, there have been radical changes to software development since this initial version was proposed. Prototyping and incremental development are commonly used process models. Software is now often developed by assembling reusable components with off-the-shelf systems and gluing them together with a scripting language. Data-intensive systems are developed using a database programming language such as SQL and a commercial database management system. Existing software is re-engineered to create new software. CASE tool support for most software process activities is now available.

Intermediate COCOMO

The Intermediate COCOMO is an extension of the Basic COCOMO model, and estimates the programmer time to develop a software product. This extension considers a set of four "cost driver attributes", each with a number of subsidiary attributes:

Product attributes
  o Required software reliability
  o Size of application database
  o Complexity of the product
Hardware attributes
  o Run-time performance constraints
  o Memory constraints
  o Volatility of the virtual machine environment


  o Required turnaround time
Personnel attributes
  o Analyst capability
  o Software engineer capability
  o Applications experience
  o Virtual machine experience
  o Programming language experience
Project attributes
  o Use of software tools
  o Application of software engineering methods
  o Required development schedule

Example estimate using Intermediate COCOMO

Mode: organic. Size = 200 KDSI.

Cost drivers:
    Low reliability => 0.88
    High product complexity => 1.15
    Low application experience => 1.13
    High programming language experience => 0.95
    Other cost drivers assumed to be nominal => 1.00

    C = 0.88 * 1.15 * 1.13 * 0.95 = 1.086
    Effort = 3.2 * (200^1.05) * 1.086 = 906 MM
    Development time = 2.5 * (906^0.38) = 33 months (approximately)

Advantages of COCOMO I

i. COCOMO is transparent; you can see how it works, unlike other models such as SLIM.
ii. Drivers are particularly helpful to the estimator in understanding the impact of the different factors that affect project costs.
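A minimal sketch reproducing the worked example above; the 3.2 coefficient is the Intermediate COCOMO value for organic mode, and the product of the cost drivers is often called the effort adjustment factor (EAF):

```python
# Minimal sketch of the Intermediate COCOMO worked example above.
# Organic mode: E = 3.2 * KDSI^1.05 * EAF.
cost_drivers = {
    "low reliability":                      0.88,
    "high product complexity":              1.15,
    "low application experience":           1.13,
    "high programming language experience": 0.95,
    # all other drivers nominal => 1.00
}

eaf = 1.0
for multiplier in cost_drivers.values():
    eaf *= multiplier                       # effort adjustment factor ~ 1.086

effort = 3.2 * 200 ** 1.05 * eaf            # ~906 person-months
duration = 2.5 * effort ** 0.38             # ~33 months
print(f"EAF = {eaf:.3f}, E = {effort:.0f} MM, D = {duration:.1f} months")
```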


Drawbacks of COCOMO I

i. It is hard to accurately estimate KDSI early in the project, when most effort estimates are required.
ii. KDSI is not really a size measure; it is a length measure.
iii. The model is extremely vulnerable to mis-classification of the development mode.
iv. Success depends largely on tuning the model to the needs of the organization, using historical data which is not always available.

COCOMO II

To take these changes into account, the COCOMO II model recognizes different approaches to software development such as prototyping, development by component composition and use of database programming. COCOMO II supports a spiral model of development and embeds several sub-models that produce increasingly detailed estimates. These can be used in successive rounds of the development spiral. The figure below shows the COCOMO II sub-models and where they are used.

The sub-models that are part of the COCOMO II model are:

1. An application-composition model. This assumes that systems are created from reusable components, scripting or database programming. It is designed to make estimates of prototype development. Software size estimates are based on application points, and a simple size/productivity formula is used to estimate the effort required. Application points are the same as object points, but the name was changed to avoid confusion with objects in object-oriented development.
2. An early design model. This model is used during early stages of the system design, after the requirements have been established. Estimates are based on function points, which are then converted to the number of lines of source code. The formula follows the standard form discussed above, with a simplified set of seven multipliers.
3. A reuse model. This model is used to compute the effort required to integrate reusable components and/or program code that is automatically generated by design or program translation tools. It is usually used in conjunction with the post-architecture model.
4. A post-architecture model. Once the system architecture has been designed, a more accurate estimate of the software size can be made. Again, this model uses the standard formula for cost estimation discussed above. However, it includes a more extensive set of 17 multipliers reflecting personnel capability and product and project characteristics.


Of course, in large systems, different parts may be developed using different technologies, and you may not have to estimate all parts of the system to the same level of accuracy. In such cases, you can use the appropriate sub-model for each part of the system and combine the results to create a composite estimate.

(Figure: the COCOMO II models and where they are used.)


EXPERIMENT 7

AIM: To study software design. Explain design fundamentals and design methods.

Software Design

Software design deals with transforming the customer's requirements, as described in the SRS, into a structure that can be implemented using some programming language. The design phase consists of:

1. Identifying the different modules required
2. Establishing control relationships among the identified modules
3. Defining interfaces among the different modules
4. Choosing the data structures of the individual modules
5. Devising the algorithms required to implement the modules

The SRS document is the input to the design phase. The SRS tells what the system does, without saying how it will be done; design supplies the "how".
(Figure: design transforms "what" into "how" - from conceptual design, oriented to the customer, to technical design, oriented to the system builder.)



Two types of design:

1. Preliminary design
2. Detailed design

Preliminary design: concerned with the transformation of requirements into data and software architecture (covers steps 1, 2 and 3 above).

Detailed design: focuses on refining the architectural representation into detailed data structures and algorithmic representations (covers steps 4 and 5 above).

Design methodologies:

1. Structured design
2. Object-oriented design

Structured analysis: an activity used to transform a textual problem description into a graphical model. During structured analysis the major processing tasks of the system are analyzed, and the data flow among these tasks is represented graphically. Structured analysis is based on the following principles:


1. Top-down approach (modular design)
2. Divide-and-conquer principle
3. Graphical representation using DFDs

Structured Design

The main aim of structured design is to transform the result of structured analysis into a structure chart. A structure chart represents the software architecture: the modules that are used to create the system, the module dependencies, and the parameters that are passed among the different modules. The structure chart representation can be easily implemented using some programming language.

Structured design tools:

1. DFD (data flow diagram)
2. DD (data dictionary)
3. Structure chart
4. Pseudo code

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. A data flow diagram can also be used for the visualization of data processing (structured design). It is common practice for a designer to draw a context-level DFD first, which shows the interaction between the system and outside entities. This context-level DFD is then "exploded" to show more detail of the system being modeled. It is also known as a bubble chart.

Data flow diagrams are one of the three essential perspectives of the Structured Systems Analysis and Design Method. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a dataflow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's dataflow diagrams can be drawn up and compared with the new system's dataflow diagrams to draw comparisons and implement a more efficient system. Dataflow diagrams can be used to give the end user a physical idea of where the data they input ultimately has an effect upon the structure of the whole system, from order to


dispatch to restocking. How any system is developed can be determined through a dataflow diagram.

Rules for DFDs:
1. All names should be unique.
2. A DFD does not contain a decision box.
3. A DFD does not depict the order of events.
4. A DFD shows the flow of information within the system.

Data dictionaries are used to store information about all the data items defined in the DFDs. A data dictionary is an organized list of all the data items; like any other dictionary, it contains the names of data elements and descriptions of those elements. Information included in a data dictionary:

1. Name of the data item
2. Other names (aliases) of the data item
3. Description of its purpose
4. Related data items
5. Range of values
6. Data structure definition

Structure chart: the structure chart is one of the most commonly used methods for system design. The system is divided into modules, which are treated as black boxes. "Black box" means the functionality is known to the user without knowledge of the internal design: inputs are given to the black box and appropriate outputs are generated. This concept reduces the complexity of the design, and such systems are easy to construct. In the structure chart hierarchy, modules at the top level call the modules at the lower levels. The connections between modules are represented by solid lines with arrowheads. Modules are numbered so the structure chart is easy to follow. Modules can pass parameters to each other, and they can call modules at different levels.


(Figure: structure chart notation - (1) module, (2) control, (3) data, (4) library module, (5) physical storage, (6) repetitive call, (7) conditional call.)


Pseudo code: pseudo-code notation can be used in both preliminary and detailed design. Like flowcharts, pseudo code can be used at any desired level of abstraction. Using pseudo code, the designer describes system characteristics with keywords such as BEGIN, if-then-else, while and do-while; these keywords represent processing actions. Pseudo code can replace flowcharts and reduce the amount of external documentation required to describe the system.

Relationship between Object Oriented Analysis (OOA) and Object Oriented Design (OOD):

(Figure: analysis models the problem domain; design models the solution domain.)

The object-oriented approach to software development consists of OOA and OOD. When we combine the features of these tools we get the method OOAD (object-oriented analysis and design). The fundamental difference between OOA and OOD is that OOA models the problem domain, leading to an understanding and specification of the problem, whereas OOD models the solution to that problem. In other words, the objects during OOA focus on the problem domain and the objects during OOD focus on the solution domain. Analysis deals with the specification and understanding of the problem in the form of objects, while design deals with the solution of that problem using object-oriented concepts. The solution domain representation created by OOD contains much of the representation created by OOA. The objects used in the problem domain are called semantic objects, and the solution


domain consists of semantic objects as well as three other kinds of objects: interface, application and utility objects.

Interface objects deal with the user interface.
Application objects specify the control mechanism for the proposed solution.
Utility objects are those needed to support the services of the semantic objects or to implement them efficiently, for example trees, queues, tables, stacks, arrays, etc.

The basic goal of OOA and OOD is to produce the object design for the system, represented by an object diagram.

Object-Oriented Design: the basic concepts (features) of OOD are:

1. Objects: objects are the basic run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data or any item that the program has to handle. A programming problem is analyzed in terms of objects and the nature of communication between them. Program objects should be chosen such that they match closely with real-world objects. Objects take up space in memory. When a program is executed, the objects interact by sending messages to one another.

2. Classes: objects contain data, and code to manipulate that data. The entire set of data and code of an object can be made a user-defined data type with the help of a class. In fact, objects are variables of the type class; a class is thus a collection of objects of similar type. For example, "fruit mango" creates an object mango belonging to the class fruit.

3. Data abstraction and encapsulation: the wrapping up of data and functions into a single unit (called a class) is known as encapsulation. The data is not accessible to the outside world; only those functions which are wrapped in the class can access it. These functions provide the interface between the object's data and the program. The insulation of data from direct access by the program is called data hiding or information hiding.


Abstraction refers to the act of representing essential features without including background details or explanations. Classes use the concept of abstraction and are defined as a list of abstract attributes such as size, weight and cost, together with functions to operate on these attributes.

4. Inheritance: inheritance is the process by which objects of one class acquire the properties of objects of another class. It supports the concept of hierarchical classification. For example, the bird robin is part of the class flying bird, which is in turn part of the class bird. In OOP, the concept of inheritance provides the idea of reusability: we can add additional features to an existing class without modifying it, by deriving a new class from the existing one. The new class will have the combined features of both classes.

5. Polymorphism: polymorphism is a Greek term meaning the ability to take more than one form. An operation may exhibit different behaviour in different instances; the behaviour depends on the type of data used in the operation. For example, for two numbers the + operation generates the sum, whereas if the operands are strings the operation produces a third string by concatenation. The process of making an operator exhibit different behaviour in different instances is known as operator overloading.

6. Dynamic binding: dynamic binding (also known as late binding) means that the code associated with a given procedure call is not known until the time of the call at run time. It is associated with polymorphism and inheritance: which function a polymorphic reference invokes depends on the dynamic type of that reference.

7. Message passing: objects communicate with one another by sending and receiving information, much the same way as people pass messages to one another. Message passing involves specifying the name of the object, the function, and the information to be sent.
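The concepts above can be made concrete in a short sketch. The classes below are hypothetical illustrations, not part of any particular design:

```python
# Minimal sketch of the OO concepts listed above.
class Bird:                                  # class: blueprint for objects
    def __init__(self, name: str):
        self.name = name                     # encapsulated data

    def move(self) -> str:                   # interface to the object's data
        return f"{self.name} moves"

class FlyingBird(Bird):                      # inheritance: reuse and extend Bird
    def move(self) -> str:                   # polymorphism: the same message
        return f"{self.name} flies"          # produces different behaviour

robin = FlyingBird("Robin")                  # object: a run-time instance
print(robin.move())                          # message passing -> "Robin flies"
```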


EXPERIMENT 8

AIM: To study software testing. Explain testing fundamentals and various testing techniques.

Software testing is the process used to assess the quality of computer software. Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Software Testing Fundamentals

Testing objectives include:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.

What is testing: testing is the process of executing a program with the intent of finding errors. We should not test a program to show that it works; rather, we should start with the assumption that the program contains errors and then test it to find as many of them as possible.


Why should we test:
1. To make the software efficient.
2. To make testing less expensive in the future.
3. To fulfil the requirements of the customer.

Who should do the testing: it is very difficult for developers to find errors in their own creation, so separate team members are allotted for testing.

Black Box Testing

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this kind of testing, test groups are often used: "Test groups are sometimes called professional idiots... people who are good at designing incorrect data."

(Figure: input domain (input test data) -> system under test -> output domain (output test data).)

Boundary Value Analysis Testing

Boundary value analysis is the technique of making sure that the behaviour of the system is predictable for the input and output boundary conditions. The reason boundary conditions are so important for testing is that defects can be introduced at the boundaries very easily. In boundary value analysis, test inputs are chosen just below the boundary and just above the


boundary. Suppose we have an input value x ranging from 1 to 100; the boundary values are 1, 50 and 100, as well as values just below 1 and just above 100. Consider a program with two input variables x and y; the boundary conditions can be specified as

    a <= x <= b
    c <= y <= d

Here the inputs x and y are bounded to the intervals [a, b] and [c, d] respectively. For input x we design test cases with the values a and b, just above a and just below b; the same applies to input y in terms of c and d. Suppose x and y both range from 100 to 300.

(Figure: the valid input region is the square 100 <= x <= 300, 100 <= y <= 300.)

The test cases in the valid input region are:

(200, 100) (200, 101) (200, 200) (200, 299) (200, 300)
(100, 200) (101, 200) (299, 200) (300, 200)


Equivalence Partitioning

In this method, the input domain of a program is partitioned into a finite number of equivalence classes, such that a test of a representative value of each class is equivalent to a test of any other value in that class. That is, if one test case in a class detects an error, all other test cases in the class would be expected to find the same error; and if a test case does not detect an error, the other test cases in the class would be expected not to either. Two steps are used:

1. Identify the equivalence classes by taking each input condition and dividing it into valid and invalid classes.
2. Generate the test cases using the equivalence classes, by writing test cases covering all the valid equivalence classes as well as the invalid equivalence classes.

For example, for an input item bounded by [1, 999]:

    valid:   1 <= item <= 999
    invalid: item < 1, and item > 999

Decision Table Based Testing

Decision tables are useful for describing situations in which a number of combinations of conditions lead to different actions being taken. There are four parts in a decision table:

1. Condition stub
2. Action stub
3. Condition entry
4. Action entry


When conditions c1, c2 and c3 are all true, actions a1 and a2 occur. When conditions c1 and c2 are true and c3 is false, actions a1 and a3 occur. The condition entries take the values true (T) or false (F), and an X marks the actions taken; for the two rules just described:

    Condition stub    Condition entry
    c1                T    T
    c2                T    T
    c3                T    F
    Action stub       Action entry
    a1                X    X
    a2                X
    a3                     X

Cause-Effect Graph Technique

One drawback of boundary value analysis and equivalence class partitioning is that they do not consider combinations of inputs; they consider only single input conditions. For this reason we can use the cause-effect technique. The following steps are used:

1. Identify the causes and effects in the specification. A cause is an input condition and an effect is an output condition.
2. Analyze the semantic content of the specification and convert it into a Boolean graph linking all causes and effects.
3. Trace out the conditions in the graph and convert the graph into a decision table.
4. Convert the columns of the decision table into test cases.

(Figures (a)-(b): the identity relation (E1 = C1) and the not relation (E1 = not C1).)


(Figures (c)-(d): the or relation (E1 = C1 or C2) and the and relation (E1 = C1 and C2).)

Advantages of Black Box Testing

- more effective on larger units of code than glass box testing
- the tester needs no knowledge of the implementation, including specific programming languages
- the tester and the programmer are independent of each other
- tests are done from a user's point of view
- will help to expose any ambiguities or inconsistencies in the specifications
- test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing

- only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- without clear and concise specifications, test cases are hard to design
- there may be repetition of test inputs if the tester is not informed of test cases the programmer has already tried
- may leave many program paths untested
- cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
- most testing-related research has been directed toward glass box testing

White Box Testing

White box testing is a complementary approach to functional testing, in which we are concerned with the internal structure and details of the program. White box testing carefully examines the software design, architecture or code for bugs without executing it, so it is sometimes referred to as structural analysis or static white-box testing. Methods used in this testing:

1. Path testing
   I. Flow graph


   II. DD path graph

1. Path testing: path testing is the name given to a group of test techniques based on selecting a set of test paths through the program. If the set of paths is properly chosen, we have achieved a measure of test thoroughness, e.g. ensuring that every statement in the program is executed at least once. It requires complete knowledge of the program's structure. This type of testing involves:

1. Generating a set of paths that will cover every branch in the program.
2. Finding a set of test cases that will execute every path in this set of program paths.

1.1 Flow graph: the control flow of a program can be analyzed using a graphical representation called a flow graph. A flow graph is a directed graph in which nodes are either entire statements or fragments of a statement, and edges represent the flow of control. There is an edge from node i to node j if the statement corresponding to node j can be executed immediately after the statement corresponding to node i. A flow graph can be easily generated from the code of any program. Symbols used:

(Figure: flow graph symbols for (a) sequence, (b) if-then-else, (c) while loop, (d) repeat-until loop, and (e) switch statement.)

PAGE NO


1.2 DD path graph: flow graph generation is the first step of path testing. The second step is to draw a DD path graph from the flow graph. A DD path graph (decision-to-decision path graph) considers only the decision boxes: nodes of the flow graph which are in sequence are combined into a single node. A DD path graph is a directed graph in which nodes are sequences of statements and edges represent the control flow between nodes.

2. Cyclomatic complexity: cyclomatic complexity is also known as structural complexity because it gives an internal view of the code. This approach is used to find the number of independent paths in a program. It provides an upper bound on the number of tests that must be conducted to ensure that all statements are executed at least once and every condition is exercised on both its true and false sides. If a program has a backward branch, then it may have an infinite number of paths. The cyclomatic complexity of a graph with n vertices, e edges and P connected components can be calculated as

    V(G) = e - n + 2P

Here we take a directed graph that has unique entry and exit nodes; each node in the graph corresponds to a block of code in the program. This graph is called a flow graph, and it is assumed that each node can


be reached from the entry node and that the exit node can be reached from each node. For example, consider the flow graph below:

(Figure: example flow graph with nodes a, b, c, d, e, f; e = 9 edges, n = 6 nodes, P = 1 connected component.)

Path1: a c f
Path2: a b e f
Path3: a d c f
Path4: a b e b e f
Path5: a b e a c f
Path6: a b e a b e f

    V(G) = e - n + 2P = 9 - 6 + 2(1) = 5

where P is the number of connected components.
1st method: V(G) = e - n + 2P.
2nd method: the cyclomatic complexity V(G) of a flow graph is equal to the number of predicate nodes (decision nodes) + 1.
3rd method:


Cyclomatic complexity is equal to the number of regions of the flow graph, so V(G) = number of regions.
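A minimal sketch of the three equivalent computations of V(G) described above, applied to the example values e = 9, n = 6, P = 1:

```python
# Minimal sketch: the three ways to compute cyclomatic complexity.
def cyclomatic_edges_nodes(e: int, n: int, p: int = 1) -> int:
    return e - n + 2 * p            # method 1: V(G) = e - n + 2P

def cyclomatic_predicates(predicate_nodes: int) -> int:
    return predicate_nodes + 1      # method 2: decision nodes + 1

def cyclomatic_regions(regions: int) -> int:
    return regions                  # method 3: number of regions

print(cyclomatic_edges_nodes(9, 6))  # 9 - 6 + 2 = 5, as computed above
```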

3. Graph matrices: whenever graphs are used for testing, we are interested in finding the independent paths. The main aim is to trace all the links of the graph at least once. Path tracing is not an easy task, and doing it by hand invites errors in testing; as the size of the graph increases it becomes difficult to do the path tracing manually, so testing tools are used. To develop such a tool, a data structure called the graph matrix is used. A graph matrix is a square matrix with one row and one column for every node in the graph; the size of the matrix (the number of rows and columns) is equal to the number of nodes in the flow graph. In the graph matrix there is a place to record every possible direct connection between one node and any other node. A connection from node i to node j in a directed graph does not imply a connection from j to i. If there are several links between two nodes, the entry is the sum of all the parallel links. So a graph matrix is nothing but a tabular representation of a flow graph. If we assign a weight to each entry, the graph matrix can be used to calculate useful information for testing. The simplest weighting is 1 if there is a connection and 0 if there is no connection.

(Figure: an example flow graph with nodes a, b, c, d and its graph matrix.)



4. Data flow testing: this is another form of structural testing. It has nothing to do with data flow diagrams. Data flow testing uses the concept of variables, and the two main points of interest are:

1) Statements where variables receive values.
2) Statements where these values are used or referenced.

Variables are defined and referenced within a program. The basic anomalies for a variable are:

i.   A variable is defined but never used or referenced.
ii.  A variable is used but never defined.
iii. A variable is defined twice before it is used.

PAGE NO

EXPERIMENT- 9

AIM- To study the terms unit testing, integration testing, validation testing, system testing and debugging.

Unit testing

In computer programming, unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function or procedure, while in object-oriented programming the smallest unit is a class, which may be a base/super class, abstract class or derived/child class. Units are distinguished from modules in that modules are typically made up of units. Ideally, each test case is independent from the others; mock objects and test harnesses can be used to assist in testing a module in isolation. Unit testing is typically done by the developers and not by end users.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.

Facilitates change

Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (i.e. regression testing). The procedure is to write test cases for all functions and methods, so that whenever a change causes a fault, it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. Good unit test design produces test cases that cover all paths through the unit, with attention paid to loop conditions.
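As a minimal sketch of a unit test, the following uses Python's standard unittest framework; the add() function standing in for the unit under test is hypothetical:

```python
# Minimal sketch: a unit test for a hypothetical unit under test.
import unittest

def add(x: int, y: int) -> int:      # the unit under test
    return x + y

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()                  # runs both test cases and reports results
```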

PAGE NO

Simplifies integration

Unit testing helps to eliminate uncertainty in the units themselves and can be used in a bottom-up testing approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier. A heavily debated matter is the need to perform manual integration testing: while an elaborate hierarchy of unit tests may seem to have achieved integration testing, this presents a false sense of confidence, since integration testing evaluates many other objectives that can only be proven through the human factor.

Documentation

Unit testing provides a sort of "living document". Clients and other developers looking to learn how to use the module can look at the unit tests to determine how to use the module to fit their needs and gain a basic understanding of the API. Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate or inappropriate use of a unit, as well as negative behaviours that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development. On the other hand, ordinary narrative documentation is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g. design changes, feature creep, relaxed practices for keeping documents up to date).

Applications: Extreme Programming

The cornerstone of Extreme Programming (XP) is the unit test. XP relies on an automated unit testing framework, which can be either third party (e.g. xUnit) or created within the development group. Extreme Programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't


implemented yet, or because it intentionally exposes a defect in the existing code. Then the developer writes the simplest code to make the test, along with other tests, pass. All classes in the system are unit tested. Developers release unit testing code to the code repository in conjunction with the code it tests. XP's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression test.

Techniques

Unit testing is commonly automated, but may still be performed manually; the IEEE does not favor one over the other. A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this, and enables the many benefits listed here. Conversely, if not planned carefully, a careless manual unit test case may execute as an integration test case that involves many software components, and thus preclude the achievement of most if not all of the goals established for unit testing. Using an automation framework, the developer codifies criteria into the test to verify the correctness of the unit. During execution of the test cases, the framework logs those that fail any criterion. Many frameworks will also automatically flag and report a summary of these failed test cases. Depending upon the severity of a failure, the framework may halt subsequent testing. As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies. This practice promotes healthy habits in software development. Design patterns, unit testing, and refactoring often work together so that the most ideal solution may emerge.

Unit testing frameworks

Unit testing frameworks, which help simplify the process of unit testing, have been developed for a wide variety of languages. It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertion, exception, or early-exit mechanisms to signal failure. This approach is valuable in that there is a negligible barrier to the adoption of unit testing. However, it is also limited in that many advanced features of a proper framework are missing or must be hand-


coded. To address this issue, the D programming language offers direct support for unit testing.

Integration testing

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. Integration testing takes as its input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

Purpose

The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing the individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. The different types of integration testing are big bang, top-down, bottom-up and backbone.

Big Bang: in this approach, all or most of the developed modules are coupled together to form a complete software system or a major part of the system, and then used for integration testing. The big bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more


complicated and may prevent the testing team from achieving the goal of integration testing.

Bottom Up: all the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Validation and system testing

Validation testing is a concern which overlaps with integration testing. Ensuring that the application fulfils its specification is a major criterion for the construction of an integration test. Validation testing also overlaps to a large extent with system testing, where the application is tested with respect to its typical working environment. Consequently, for many processes no clear division between validation and system testing can be made. Specific tests which can be performed in either or both stages include the following.

Regression testing. This version of the software is tested with the automated test harnesses used with previous versions, to ensure that the required features of the previous version still work in the new version.

Recovery testing. The software is deliberately interrupted in a number of ways, for example by taking its hard disc offline or even turning the computer off, to ensure that the appropriate techniques for restoring any lost data will function.

Security testing. Unauthorised attempts to operate the software, or parts of it, are made. This might also include attempts to obtain access to the data, or to harm the software installation or even the system software. As with all types of security, it is recognised that someone sufficiently determined will be able to obtain unauthorised access, and the best that can be achieved is to make this process as difficult as possible.

Stress testing. Abnormal demands are made upon the software by increasing the rate at which it is asked to accept data, or the rate at which it is asked to produce information. More complex tests may attempt to create very large data sets or cause the software to make excessive demands on the operating system.

Performance testing. The performance requirements, if any, are checked. These may include the size of the software when installed, the


amount of main memory and/or secondary storage it requires, the demands made of the operating system when running within normal limits, and the response time.

Usability testing. The process of usability measurement was introduced in the previous chapter. Even if usability prototypes have been tested whilst the application was constructed, a validation test of the finished product will always be required.

Alpha and beta testing. This is where the software is released to the actual end users. An initial release, the alpha release, might be made to selected users who would be expected to report bugs and other detailed observations back to the production team. Once the application has passed through the alpha phase, a beta release, possibly incorporating changes necessitated by the alpha phase, can be made to a larger, more representative set of users, before the final release is made to all users.

The final process should be a software audit, where the complete software project is checked to ensure that it meets production management requirements. This ensures that all required documentation has been produced, is in the correct format and is of acceptable quality. The purpose of this review is firstly to assure the quality of the production process (and, by implication, the product), and secondly to ensure that all is in order before the initial project construction phase concludes and the maintenance phase commences. A formal hand-over from the development team at the end of the audit will mark the transition between the two phases.

Debugging

Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another.

Tools

Debugging is, in general, a cumbersome and tiring task. The debugging skill of the programmer is probably the biggest factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the programming language used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, re-start it, run it in slow motion, change values in memory and


even, in some cases, go back in time. The term debugger can also refer to the person who is doing the debugging.

Generally, high-level programming languages such as Java make debugging easier, because they have features such as exception handling that make real sources of erratic behaviour easier to spot. In lower-level programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened; in those cases, sophisticated debugging tools may be needed.

In certain situations, general-purpose software tools that are language-specific in nature can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code. All such issues detected by these tools would rarely be picked up by a compiler or interpreter; thus they are not syntax checkers, but semantic checkers. Some tools claim to be able to detect 300+ unique problems. Both commercial and free tools exist for various languages. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walkthroughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. Another example would be performing strong type checking when the language does not require it. Thus, they are better at locating likely errors than actual errors, and as a result these tools have a reputation for false positives. The old Unix lint program is an early example.

For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers or in-circuit emulators (ICEs) are often used, alone or in combination. An ICE may perform many of the typical software debugger's tasks on low-level software and firmware.

Basic steps

Although each debugging experience is unique, certain general principles can be applied in debugging. This section particularly addresses debugging software, although many of these principles can also be applied to debugging hardware. The basic steps in debugging are:

Recognize that a bug exists
Isolate the source of the bug
Identify the cause of the bug

Determine a fix for the bug
Apply the fix and test it

Recognize that a bug exists

Detection of bugs can be done proactively or passively. The goal of this step is to identify the symptoms of the bug. Observing the symptoms of the problem, the conditions under which the problem is detected, and what work-arounds, if any, have been found will greatly help the remaining steps of debugging the problem.

Isolate the source of the bug

This step is often the most difficult (and therefore most rewarding) step in debugging. The idea is to identify what portion of the system is causing the error. Unfortunately, the source of the problem isn't always the same as the source of the symptoms. This step often involves iterative testing. The programmer might first verify that the input is correct, then that it was read correctly, processed correctly, and so on. For modular systems, this step can be a little easier by checking the validity of data passed across interfaces between different modules. If the input was correct but the output was not, then the source of the error is within the module. By iteratively testing inputs and outputs, the debugger can identify within a few lines of code where the error is occurring.

Identify the cause of the bug

Having found the location of the bug, the next step is to determine the actual cause of the bug, which might involve other sections of the program. A good understanding of the system is vital to successfully identifying the source of the bug. A trained debugger can isolate where a problem originates, but only someone familiar with the system can accurately identify the actual cause behind the error. In some cases it might be external to the system: the input data was incorrect. In other cases it might be due to a logic error, where correct data was handled incorrectly. Other possibilities include unexpected values, where the initial assumptions were that a given field can have only "n" values when in fact it can have more, as well as unexpected combinations of values in different fields


(field x was only supposed to have that value when field y was something different). Another possibility is incorrect reference data, such as a lookup table containing values that are incorrect relative to the record that was corrupted. Having determined the cause of the bug, it is a good idea to examine similar sections of the code to see if the same mistake is repeated elsewhere. If the error was clearly a typo, this is less likely, but if the original programmer misunderstood the initial design and/or requirements, the same or similar mistakes could have been made elsewhere.

Determine a fix for the bug

Having identified the source of the problem, the next task is to determine how the problem can be fixed. An intimate knowledge of the existing system is essential for all but the simplest of problems. This is because the fix will modify the existing behavior of the system, which may produce unexpected results. Furthermore, fixing an existing bug can often either create additional bugs or expose other bugs that were already present in the program but never exposed because of the original bug. These problems are often caused by the program executing a previously untested branch of code, or under previously untested conditions. In some cases, a fix is simple and obvious. This is especially true for logic errors where the original design was implemented incorrectly. On the other hand, if the problem uncovers a major design flaw that permeates a large portion of the system, then the fix might range from difficult to impossible, requiring a total rewrite of the application. In some cases, it might be desirable to implement a "quick fix", followed by a more permanent fix. This decision is often made by considering the severity, visibility, frequency, and side effects of the problem, as well as the nature of the fix and product schedules (e.g., are there more pressing problems?).

Fix and test

After the fix has been applied, it is important to test the system and determine that the fix handles the former problem correctly. Testing should be done for two purposes: (1) to confirm that the fix handles the original problem correctly, and (2) to make sure the fix hasn't created any undesirable side effects. For large systems, it is a good idea to have regression tests, a series of test runs that exercise the system. After significant changes and/or bug fixes, these tests


After significant changes and/or bug fixes, these tests can be repeated at any time to verify that the system still executes as expected. As new features are added, additional tests can be included in the test suite.
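As a sketch of what such a regression test might look like, the snippet below uses Python's standard unittest module; the function under test and its expected values are invented for the example:

import unittest

# Hypothetical function that was just fixed; the regression suite locks in
# both the original behaviour and the behaviour introduced by the fix.
def word_count(text):
    return len(text.split())

class RegressionTests(unittest.TestCase):
    def test_original_behaviour_still_works(self):
        self.assertEqual(word_count("one two three"), 3)

    def test_fixed_bug_stays_fixed(self):
        # The former defect: runs of spaces were counted as empty words.
        self.assertEqual(word_count("one  two"), 2)

if __name__ == "__main__":
    unittest.main()

Rerunning this suite after every significant change gives a cheap, repeatable check that the fix has not created undesirable side effects elsewhere.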



EXPERIMENT- 10

AIM- To study the term Software Maintenance. Explain its characteristics and Side effects

In software engineering, software maintenance is the process of enhancing and optimizing deployed software (software release), as well as remedying defects. Software maintenance is one of the phases in the software development process, and follows deployment of the software into the field. The software maintenance phase involves changes to the software in order to correct defects and deficiencies found during field usage as well as the addition of new functionality to improve the software's usability and applicability.

Need for maintenance

Maintenance is carried out to:
o Correct errors
o Correct requirements and design flaws
o Improve the design
o Make enhancements
o Interface with other systems
o Convert programs so that other hardware, software, ... can be used
o Migrate legacy systems
o Retire systems

Major aspects:
o Maintaining control over the system's day-to-day functions
o Maintaining control over system modification
o Perfecting existing acceptable functions
o Preventing system performance from degrading to unacceptable levels

To see why maintenance is needed, consider what happens when the system is delivered to the users. The users operate the system and may find things wrong with it, or identify things they would like to see added to it. Via management feedback, the maintainer makes the approved corrections or improvements and the improved system is delivered to the users.


The cycle then repeats itself, thus perpetuating the loop of maintenance and extending the life of the product. In most cases the maintenance phase ends up being the longest process of the entire life cycle, and far outweighs the development phase in terms of time and cost. Figure 1.1 shows the lifecycle of maintenance on a software product and why (theoretically) it may be never-ending.

Lehman's (1980) first two laws of software evolution help explain why the Operations and Maintenance phase can be the longest of the life-cycle processes. His first law is the Law of Continuing Change, which states that a system needs to change in order to remain useful. The second law is the Law of Increasing Complexity, which states that the structure of a program deteriorates as it evolves. Over time, the structure of the code degrades until it becomes more cost-effective to rewrite the program.

Figure 1.1 The Maintenance Lifecycle

Types of Software Maintenance

In order for a software system to remain useful in its environment, it may be necessary to carry out a wide range of maintenance activities upon it. Swanson (1976) was one of the first to examine what really happens during maintenance, and he identified three different categories of maintenance activity.



1 Corrective

Changes necessitated by actual errors (defects or residual "bugs") in a system are termed corrective maintenance. These defects manifest themselves when the system does not operate as it was designed or advertised to do. A defect or bug can result from design errors, logic errors and coding errors. Design errors occur when, for example, changes made to the software are incorrect, incomplete or wrongly communicated, or when the change request is misunderstood. Logic errors result from invalid tests and conclusions, incorrect implementation of the design specification, faulty logic flow or incomplete test data. Coding errors are caused by incorrect implementation of the detailed logic design and incorrect use of the source code logic. Defects are also caused by data processing errors and system performance errors. All these errors, sometimes called residual errors or bugs, prevent the software from conforming to its agreed specification.

In the event of a system failure due to an error, actions are taken to restore operation of the software system. The approach here is to locate the original specifications in order to determine what the system was originally designed to do. However, due to pressure from management, maintenance personnel sometimes resort to emergency fixes known as patching. The ad hoc nature of this approach often gives rise to a range of problems, including increased program complexity and unforeseen ripple effects. Increased program complexity usually arises from degeneration of the program structure, which makes the program increasingly difficult, if not impossible, to comprehend. This state of affairs is sometimes referred to as the spaghetti syndrome or software fatigue. Unforeseen ripple effects mean that a change to one part of a program may affect other sections in an unpredictable fashion. This is often due to a lack of time to carry out a thorough impact analysis before effecting the change. Corrective maintenance has been estimated to account for 20% of all maintenance activities.
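To make the idea of a corrective change concrete, here is a minimal, hypothetical sketch: a residual logic error (an off-by-one bound) is located and repaired so that the code once again conforms to its agreed specification. The function and its specification are invented for this illustration.

# Hypothetical specification: return the sum of the first n items of data.
def sum_first_n(data, n):
    total = 0
    # Corrective fix: the faulty version iterated over range(n - 1), an
    # off-by-one logic error that silently dropped the last item. Using
    # range(n) makes the code conform to the specification.
    for i in range(n):
        total += data[i]
    return total

assert sum_first_n([2, 4, 6, 8], 3) == 12   # regression check for the fix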


2 Adaptive

Any effort initiated as a result of changes in the environment in which a software system must operate is termed adaptive maintenance. Adaptive change is driven by the need to accommodate modifications in the environment of the software system, without which the system would become increasingly less useful until it became obsolete.

The term environment in this context refers to all the conditions and influences which act upon the system from outside, for example business rules, government policies, work patterns, and software and hardware operating platforms. A change to the whole or part of this environment warrants a corresponding modification of the software. Unfortunately, with this type of maintenance the user does not see a direct change in the operation of the system, yet the software maintainer must expend resources to effect the change. This task is estimated to consume about 25% of the total maintenance activity.

3 Perfective

The third widely accepted category is perfective maintenance. This is actually the most common type of maintenance, encompassing enhancements both to the function and to the efficiency of the code. It includes all changes, insertions, deletions, modifications, extensions and enhancements made to a system to meet the evolving and/or expanding needs of the user. A successful piece of software tends to be subjected to a succession of changes, resulting in an increase in its requirements. This is based on the premise that as the software becomes useful, the users tend to experiment with new cases beyond the scope for which it was initially developed. Expansion in requirements can take the form of enhancement of existing system functionality or improvement in computational efficiency. As the program continues to grow with each enhancement, the system evolves from an average-sized program of average maintainability to a very large program that offers great resistance to modification. Perfective maintenance is by far the largest consumer of maintenance resources; estimates of around 50% are not uncommon.
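As a small, hypothetical illustration of a perfective change, the sketch below improves computational efficiency without altering observable behaviour: a repeated linear membership test is replaced by a set lookup. All names and values are invented for the example.

# Before the change (correct but slow): every call scans the whole list.
blocked_list = [101, 205, 999]               # hypothetical user ids

def is_blocked_slow(user_id):
    return user_id in blocked_list           # O(n) per lookup

# After the perfective change: identical observable behaviour, better
# computational efficiency, one common form of perfective work.
blocked_set = frozenset(blocked_list)

def is_blocked(user_id):
    return user_id in blocked_set            # O(1) average per lookup

assert is_blocked(205) == is_blocked_slow(205)
assert is_blocked(7) == is_blocked_slow(7)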


The categories of maintenance above were further defined in the 1993 IEEE Standard on Software Maintenance (IEEE 1219 1993), which goes on to define a fourth category.

4 Preventive

The long-term effect of corrective, adaptive and perfective change is expressed in Lehman's law of increasing entropy: as a large program is continuously changed, its complexity, which reflects deteriorating structure, increases unless work is done to maintain or reduce it (Lehman 1985).

The IEEE defined preventive maintenance as "maintenance performed for the purpose of preventing problems before they occur" (IEEE 1219 1993). This is the process of changing software to improve its future maintainability or to provide a better basis for future enhancements. Preventive change is usually initiated from within the maintenance organisation with the intention of making programs easier to understand, and hence of facilitating future maintenance work. Preventive change does not usually give rise to a substantial increase in the baseline functionality.

Preventive maintenance is rare (only about 5%), the reason being that other pressures tend to push it to the end of the queue. For instance, a demand may come to develop a new system that will improve the organisation's competitiveness in the market. This will likely be seen as more desirable than spending time and money on a project that delivers no new function. Still, if one considers the probability of a software unit needing change, and the time pressures that are often present when the change is requested, it makes a lot of sense to anticipate change and to prepare accordingly.

The most comprehensive and authoritative study of software maintenance was conducted by B. P. Lientz and E. B. Swanson (1980). Figure 1.2 depicts the distribution of maintenance activities by category, as a percentage of time, from the Lientz and Swanson study of some 487 software organisations. Clearly, corrective maintenance (that is, fixing problems and routine debugging) is a small percentage of overall maintenance costs; Martin and McClure (1983) provide similar data.

Figure 1.2 Distribution of maintenance by categories

5 Maintenance as Ongoing Support


This category of maintenance work refers to the service provided to satisfy non-programming-related work requests. Ongoing support, although not a change to the software itself, is essential for successful communication of desired changes. The objectives of ongoing support include effective communication between maintenance and end-user personnel, training of end users, and providing business information to users and their organisations to aid decision making.

Effective communication is essential, as maintenance is probably the most customer-intensive part of the software life cycle, since a greater proportion of maintenance effort is spent providing enhancements requested by customers than is spent on other types of system change. Good customer relations are important for several reasons and can lead to a reduction in the misinterpretation of users' change requests, a better understanding of users' business needs and increased user involvement in the maintenance process. Failure to achieve the required level of communication between the maintenance organisation and those affected by the software changes may eventually lead to software failure.

Training of end users - typical services provided by the maintenance organisation include manuals, telephone support, a help desk, on-site visits, informal short courses, and user groups.

Business information - users need various types of timely and accurate business information (for example, time, cost and resource estimates) to enable them to take strategic business decisions. Questions such as "should we enhance the existing system or replace it completely?" may need to be considered.

Swanson's definitions allow the software maintenance practitioner to tell the user that a certain portion of a maintenance organisation's effort is devoted to user-driven or environment-driven requirements. The user requirements should not be buried with other types of maintenance.



Fig 1.3 The relationship between the different types of software change

The point here is that these types of updates are not corrective in nature; they are improvements. No matter which definitions are used, it is imperative to discriminate between corrections and enhancements. By studying the types of maintenance activities above, it is clear that regardless of which tools and development model are used, maintenance is needed. The categories clearly indicate that maintenance is more than fixing bugs. This view is supported by Jones (1991), who comments that organisations lump enhancements and the fixing of bugs together. He goes on to say that this distorts both activities and leads to confusion and mistakes in estimating the time and budget needed to implement changes. Even worse, this "lumping" perpetuates the notion that maintenance is just fixing bugs and mistakes. Because many maintainers do not use maintenance categories, there is confusion and misinformation about maintenance.

Maintenance Process

The term process here refers to any activity carried out or action taken, either by a machine or by maintenance personnel, during software maintenance. The facets of a maintenance process which affect the evolution of the software or contribute to maintenance costs include:

The difficulty of capturing change (and changing) requirements - requirements and user problems only become clearer when a system is in use. Also, users may not be able to express their requirements in a form understandable to the analyst or programmer - the 'information gap'.


The requirements and changes evolve; the maintenance team is therefore always playing catch-up.

Variation in programming practice - this may present difficulties if there is no consistency; standards or stylistic guidelines are therefore often provided. Working practices impact the way a change is effected. Time to change can be adversely affected by clever code, undocumented assumptions, and undocumented design and implementation decisions. After some time, programmers find it difficult to understand even their own code.

Paradigm shift - older systems developed prior to the advent of structured programming techniques may be difficult to maintain. However, existing programs may be restructured or 'revamped' using techniques and tools such as structured programming, object orientation, hierarchical program decomposition, reformatters and pretty-printers.

Error detection and correction - error-free software is virtually non-existent. Software products tend to have 'residual' errors. The later these errors are discovered, the more expensive they are to correct, and the cost gets even higher if the errors are detected during the maintenance phase (Figure 1.5).

Figure 1.5 Cost of fixing errors increases in later phases of the life cycle



Obviously the factors of product, environment, user and maintenance personnel do not exist in isolation but interact with one another. Three major types of relation and interaction that can be identified are product/environment, product/user and product/maintenance personnel (Figure 1.6).

Relationship between product and environment - as the environment changes, so must the product in order to remain useful.

Relationship between product and user - in order for the system to stay useful and acceptable to its users, it also has to change to accommodate their changing requirements.

Interaction between personnel and product - the maintenance personnel who implement changes also act as receptors of the changes. That is, they serve as the main avenue by which changes in the other factors - user requirements, maintenance process, organisational and operational environments - act upon the software product. The nature of the maintenance process used and the attributes of the maintenance personnel will impact upon the quality of the change.

Maintenance side-effects

A maintenance side-effect is any error or undesirable behavior that occurs as a result of modifications to a system. Three kinds of side-effects can be distinguished:


o Coding side-effects (inadvertent removal of vital code, changes in the semantics of code, unexpected changes in the execution path)
o Data side-effects (changes in data structures render older data invalid or incomplete, changes in global constants, changes in data ranges)
o Documentation side-effects (forgetting to document code or data structure changes, changes not reflected in user manuals or the interface documentation)
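A minimal, hypothetical sketch of a data side-effect from the list above: a maintenance change to a global constant silently invalidates records that were stored under the old value. All names and values here are invented for illustration.

# Records were stored while TAX_RATE was 0.05, so the saved totals have the
# old rate baked in. Changing the constant during maintenance creates a data
# side-effect: existing data no longer agrees with the code that reads it.
TAX_RATE = 0.08                                  # was 0.05 before the change

stored_record = {"net": 100.0, "total": 105.0}   # written under the old rate

def expected_total(net):
    return net * (1 + TAX_RATE)

# The mismatch below is exactly the kind of side-effect that a data
# migration or a regression suite should catch after the change.
print(expected_total(stored_record["net"]))      # 108.0, not the stored 105.0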

