
1. INTRODUCTION
Along with the rapid development of computing and communication technologies, a great deal of data is generated. These massive data need stronger computation resources and greater storage space. Over the last few years, cloud computing has satisfied these application requirements and grown very quickly. Essentially, it offers data processing as a service, covering storage, computing, data security, etc. By using the public cloud platform, clients are relieved of the burden of storage management and gain universal data access independent of geographical location. Thus, more and more clients would like to store and process their data using the remote cloud computing system.

ORGANIZATION PROFILE

Software Solutions is an IT solution provider for a dynamic environment where business and technology strategies converge. Their approach focuses on new ways of doing business, combining IT innovation and adoption while also leveraging an organization's current IT assets. They work with large global corporations to develop new products or services and to implement prudent business and technology strategies in today's environment.

Xxxxxxx's RANGE OF EXPERTISE INCLUDES:

Software Development Services
Engineering Services
Systems Integration
Customer Relationship Management
Product Development
Electronic Commerce
Consulting
IT Outsourcing
We apply technology with innovation and responsibility to achieve two broad objectives:

Effectively address the business issues our customers face today.
Generate new opportunities that will help them stay ahead in the future.
THIS APPROACH RESTS ON:

A strategy where we architect, integrate and manage technology services and solutions - we call it AIM for success.
A robust offshore development methodology and reduced demand on customer resources.
A focus on the use of reusable frameworks to provide cost and time benefits.
They combine the best people, processes and technology to achieve excellent results, consistently. We offer customers the advantages of:

SPEED:

They understand the importance of timing, of getting there before the competition. A rich portfolio of reusable, modular frameworks helps jump-start projects. A tried and tested methodology ensures that we follow a predictable, low-risk path to achieve results. Our track record is testimony to complex projects delivered within, and even before, schedule.

EXPERTISE:

Our teams combine cutting-edge technology skills with rich domain expertise. What's equally important - they share a strong customer orientation that means they actually start by listening to the customer. They're focused on coming up with solutions that serve customer requirements today and anticipate future needs.

A FULL SERVICE PORTFOLIO:

They offer customers the advantage of being able to architect, integrate and manage technology services. This means that customers can rely on one, fully accountable source instead of trying to integrate disparate multi-vendor solutions.

SERVICES:

Xxx is providing its services to companies in the fields of production, quality control, etc. With their rich expertise and experience in information technology, they are in the best position to provide software solutions for distinct business requirements.

2. SYSTEM ANALYSIS
2.1 INTRODUCTION

Software Development Life Cycle:-

There are various software development approaches defined and designed for use during the development process of software; these approaches are also referred to as "Software Development Process Models". Each process model follows a particular life cycle in order to ensure success in the process of software development.

Requirements

Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during a requirements gathering phase. This produces a large list of functionality that the system should provide, which describes functions the system should perform, business logic that processes data, what data is stored and used by the system, and how the user interface should work. The overall result describes the system as a whole and what it should do, not how it is actually going to do it.

Design

The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is the phase in which their focus lies. This is where the details of how the system will work are produced. Architecture, including hardware and software, communication, and software design (UML is produced here) are all part of the deliverables of the design phase.

Implementation

Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to actually automate the production of code using information gathered and produced during the design phase.

Testing

During testing, the implementation is tested against the requirements to make sure that the
product is actually solving the needs addressed and gathered during the requirements
phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a
specific component of the system, while system tests act on the system as a whole.

So, in a nutshell, that is a very basic overview of the general software development life cycle model. Now let's delve into some of the traditional and widely used variations.

SDLC METHODOLOGIES

This document plays a vital role in the software development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by developers and will be the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement". This model was not the first model to discuss iterative development, but it was the first model to explain why the iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase
starts with a design goal and ends with a client reviewing the progress thus far. Analysis and
engineering efforts are applied at each phase of the project, with an eye toward the end goal of
the project.

[Figure: Spiral model]

The steps for Spiral Model can be generalized as follows:

The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.

A preliminary design is created for the new system.

A first prototype of the new system is constructed from the preliminary design. This
is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.

A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.

The existing prototype is evaluated in the same manner as was the previous prototype,
and if necessary, another prototype is developed from it according to the fourfold
procedure outlined above.

The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.

The final system is constructed, based on the refined prototype.

The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

2.2 STUDY OF THE SYSTEM

For flexibility of use, the interface has been developed with graphical concepts in mind, accessed through a browser interface. The GUIs at the top level have been categorized as follows:

1. Administrative User Interface Design

2. The Operational and Generic User Interface Design

The administrative user interface concentrates on the consistent information that is practically part of the organizational activities and which needs proper authentication for data collection. The interface helps the administration with all the transactional states like data insertion, data deletion, and data updating along with executive data search capabilities.

The operational and generic user interface helps the users of the system in transactions through the existing data and required services. The operational user interface also helps the ordinary users in managing their own information in a customized manner as per the assisted flexibilities.

Modules Involved:

Public Cloud Server.

Security Analysis.

Remote.

Symmetric Key Distribution Method.

MODULE DESCRIPTION

Public Cloud Server:

There exist many different security problems in cloud computing. This paper is based on the research results of proxy cryptography, identity-based public key cryptography and remote data integrity checking in the public cloud. In some cases, the cryptographic operation will be delegated to a third party, for example a proxy. Thus, we have to use proxy cryptography. Proxy cryptography is a very important cryptographic primitive. In 1996, Mambo et al. proposed the notion of the proxy cryptosystem. When bilinear pairings are brought into identity-based cryptography, identity-based cryptography becomes efficient and practical. Since identity-based cryptography becomes more efficient because it avoids certificate management, more and more experts are apt to study identity-based proxy cryptography. In 2013, Yoon et al. proposed an ID-based proxy signature scheme with message recovery. Chen et al. proposed a proxy signature scheme and a threshold proxy signature scheme from the Weil pairing. By combining proxy cryptography with encryption techniques, some proxy re-encryption schemes have been proposed. Liu et al. formalize and construct the attribute-based proxy signature. Guo et al. presented a non-interactive CPA (chosen-plaintext attack)-secure proxy re-encryption scheme, which is resistant to collusion attacks in forging re-encryption keys. Many other concrete proxy re-encryption schemes and their applications have also been proposed.

Security Analysis:

The security of our ID-PUIC protocol mainly consists of the following parts: correctness, proxy-protection and unforgeability. The correctness has been shown in subsection III-B. In the following paragraphs, we study proxy-protection and unforgeability. Proxy-protection means that the original client cannot pass himself off as the proxy to create the tags. Unforgeability means that when some challenged blocks are modified or deleted, the PCS cannot send a valid response that can pass the integrity checking.

Remote:

Clients upload their data to the PCS and check their remote data's integrity over the Internet. When the client is an individual manager, some practical problems will arise. If the manager is suspected of being involved in commercial fraud, he will be taken away by the police. During the period of investigation, the manager will be restricted from accessing the network in order to guard against collusion. But the manager's legal business will go on during the period of investigation. When a large amount of data is generated, who can help him process these data? If these data cannot be processed just in time, the manager will face the loss of economic interest. In order to prevent this case from happening, the manager has to delegate a proxy to process his data, for example, his secretary. But the manager will not want others to have the ability to perform the remote data integrity checking. Public checking will incur some danger of leaking privacy. For example, the stored data volume can be detected by malicious verifiers. When the uploaded data volume is confidential, private remote data integrity checking is necessary. Although the secretary has the ability to process and upload the data for the manager, he still cannot check the manager's remote data integrity unless he is delegated by the manager. We call the secretary the proxy of the manager.

Symmetric key distribution method:

Balanced incomplete block design (BIBD) is a combinatorial design methodology used in key pre-distribution schemes. BIBD arranges v distinct key objects of a key pool into b different blocks, each block representing a key ring assigned to a node. Each BIBD design is expressed with a quintuplet (v, b, r, k, λ), where v is the number of keys, b is the number of key rings, r is the number of key rings sharing a key, and k is the number of keys in each key ring. Further, each pair of distinct keys occurs together in exactly λ blocks. Any BIBD design can be expressed with the equivalent tuple (v, k, λ), because the relationships bk = vr and λ(v - 1) = r(k - 1) always hold.
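As a quick check of these identities, here is a minimal C# sketch; the parameter values used below are illustrative (the classic (7, 7, 3, 3, 1) design), not taken from the project.

using System;

// Minimal sketch: checks whether a candidate (v, b, r, k, lambda) quintuplet
// satisfies the standard BIBD identities b*k = v*r and lambda*(v-1) = r*(k-1).
class BibdCheck
{
    static bool IsValidBibd(int v, int b, int r, int k, int lambda)
    {
        return b * k == v * r && lambda * (v - 1) == r * (k - 1);
    }

    static void Main()
    {
        // 7 keys, 7 key rings, each key in 3 rings, 3 keys per ring,
        // every pair of keys together in exactly 1 ring.
        Console.WriteLine(IsValidBibd(7, 7, 3, 3, 1));   // True
        Console.WriteLine(IsValidBibd(7, 7, 3, 3, 2));   // False
    }
}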

System Architecture
[Figure: Tree-based index with the document collection]

We construct a special keyword balanced binary tree as the index, and propose a Greedy Depth-
first Search algorithm to obtain better efficiency than linear search.
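The exact index construction is not reproduced here; the following hedged C# sketch only illustrates the greedy, pruning depth-first idea, with the node layout and scores as assumptions: each node stores an upper bound on the relevance score of every document in its subtree, so a subtree whose bound cannot beat the best score found so far is skipped.

using System;

class IndexNode
{
    public double UpperBoundScore;   // max relevance score below this node
    public int DocumentId = -1;      // valid only on leaf nodes
    public IndexNode Left, Right;
}

class GreedyDepthFirstSearch
{
    int bestDoc = -1;
    double bestScore = double.NegativeInfinity;

    public int Search(IndexNode root)
    {
        Visit(root);
        return bestDoc;
    }

    void Visit(IndexNode node)
    {
        if (node == null || node.UpperBoundScore <= bestScore)
            return;                              // prune this subtree
        if (node.Left == null && node.Right == null)
        {
            bestDoc = node.DocumentId;           // better leaf found
            bestScore = node.UpperBoundScore;
            return;
        }
        // Greedy step: explore the more promising child first.
        IndexNode first = node.Left, second = node.Right;
        if (second != null &&
            (first == null || second.UpperBoundScore > first.UpperBoundScore))
        {
            first = node.Right;
            second = node.Left;
        }
        Visit(first);
        Visit(second);
    }
}

class GdfsDemo
{
    static void Main()
    {
        var root = new IndexNode
        {
            UpperBoundScore = 0.9,
            Left = new IndexNode { UpperBoundScore = 0.9, DocumentId = 1 },
            Right = new IndexNode { UpperBoundScore = 0.4, DocumentId = 2 }
        };
        // Finds document 1 and prunes the right subtree entirely.
        Console.WriteLine(new GreedyDepthFirstSearch().Search(root));
    }
}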

2.3 HARDWARE & SOFTWARE SPECIFICATIONS

Hardware Requirements:

System: Pentium IV 3.5 GHz.
Hard Disk: 40 GB.
Monitor: 14" Colour Monitor.
RAM: 2 GB.

Software Requirements:

Operating System: Windows 7 Ultimate or above.
Coding Language: ASP.NET with C#.
Front-End: Visual Studio 2015.
Database: SQL Server 2014.

2.4 PROPOSED SYSTEM

The proof process is almost the same as in the Shacham-Waters protocol, so we only give the differences. In the Shacham-Waters protocol, u is randomly picked from G1. In our ID-PUIC protocol, u is calculated by using the hash function h. In the random oracle model, h's output value is indistinguishable from a random value in the group G1. In the phase TagGen, the proxy-key is used in the ID-PUIC protocol, while the data owner's secret key a is used in the Shacham-Waters protocol. For the PCS, the proxy-key and a serve the same function in generating the block tags. When the PCS is dishonest, since the Shacham-Waters protocol is existentially unforgeable in the random oracle model, our proposed ID-PUIC protocol is also existentially unforgeable in the random oracle model. The detailed proof process is omitted since it is very similar to that of the Shacham-Waters protocol.
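The pairing-based TagGen procedure itself is beyond a short example, so the following C# sketch substitutes HMAC-SHA256 purely to illustrate the shape of per-block tagging with a delegated proxy-key. This is not the ID-PUIC construction, and all names are assumptions.

using System;
using System.Security.Cryptography;
using System.Text;

// Loose illustration only: the real TagGen produces pairing-based tags in
// the group G1; HMAC stands in here to show per-block tagging with a key.
class BlockTagSketch
{
    static byte[] TagBlock(byte[] proxyKey, int blockIndex, byte[] block)
    {
        using (var hmac = new HMACSHA256(proxyKey))
        {
            // Bind the tag to both the block index and the block content.
            byte[] indexed = Encoding.UTF8.GetBytes(blockIndex.ToString());
            byte[] input = new byte[indexed.Length + block.Length];
            Buffer.BlockCopy(indexed, 0, input, 0, indexed.Length);
            Buffer.BlockCopy(block, 0, input, indexed.Length, block.Length);
            return hmac.ComputeHash(input);
        }
    }

    static void Main()
    {
        byte[] proxyKey = Encoding.UTF8.GetBytes("demo-proxy-key");
        byte[] tag = TagBlock(proxyKey, 0, Encoding.UTF8.GetBytes("block data"));
        Console.WriteLine(BitConverter.ToString(tag));
    }
}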

2.5 FUNCTIONAL REQUIREMENTS

The data owner can log in to the system.

The data owner uploads the data in encrypted form (a minimal sketch of this step follows the list).

The data owner allows the user to share the data in the data-storing center.

The user can access the data from the data-storing center.
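A minimal C# sketch of the encrypted-upload step, assuming AES for the encryption; the file names are placeholders, and key management and the actual upload call are out of scope.

using System;
using System.IO;
using System.Security.Cryptography;

class OwnerUploadSketch
{
    // Encrypts a file with AES; the resulting ciphertext file is what would
    // be uploaded to the data-storing center.
    static void EncryptFile(string inputPath, string outputPath, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        using (var input = File.OpenRead(inputPath))
        using (var output = File.Create(outputPath))
        using (var crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
        {
            input.CopyTo(crypto);
        }
    }

    static void Main()
    {
        using (var aes = Aes.Create())   // generates a fresh key and IV
        {
            EncryptFile("report.docx", "report.docx.enc", aes.Key, aes.IV);
            Console.WriteLine("Encrypted file ready for upload.");
        }
    }
}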

Non-Functional Requirements

Non-functional requirements describe user-visible aspects of the system that are not directly related to the functionality of the system.

User Interface

A menu-driven interface has been provided to make the system user friendly.

Performance Constraints

Requests should be processed with minimal delay.

Users should be authenticated before accessing the requested data.

Error Handling and Extreme Conditions

In case of user error, the system should display a meaningful error message to the user, such that the user can correct the error.

The high-level components in the proposed system should handle exceptions that occur while connecting to various database servers, IO exceptions, etc., as sketched below.
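A minimal C# sketch of this exception handling; the connection string is a placeholder assumption.

using System;
using System.Data.SqlClient;
using System.IO;

class DataAccessSketch
{
    static void Main()
    {
        const string connectionString =
            "Data Source=.;Initial Catalog=CloudDb;Integrated Security=True";
        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to the database.");
            }
        }
        catch (SqlException ex)
        {
            // Surface a meaningful message instead of crashing.
            Console.WriteLine("Database error: " + ex.Message);
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O error: " + ex.Message);
        }
    }
}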

Quality Issues

Quality issues mainly refer to a reliable, available and robust system. In developing the proposed system, the developer must be able to guarantee the reliability of transactions so that they will be processed completely and accurately.

The ability of the system to detect failures and recover from those failures refers to the availability of the system.

2.6 PROCESS MODEL

The following commands specify access control identifiers, and they are typically used to authorize and authenticate the user (command codes are shown in parentheses).

USER NAME (USER)

The user identification is that which is required by the server for access to its file system. This command will normally be the first command transmitted by the user after the control connection is made (some servers may require this).

PASSWORD (PASS)

This command must be immediately preceded by the user name command and, for some sites, completes the user's identification for access control. Since password information is quite sensitive, it is desirable in general to "mask" it or suppress its typeout.
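To make the USER/PASS sequence concrete, here is a minimal C# sketch using FtpWebRequest, which issues these commands on the control connection when credentials are supplied; the host and credentials are placeholder assumptions.

using System;
using System.Net;

class FtpLoginSketch
{
    static void Main()
    {
        var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/");
        request.Method = WebRequestMethods.Ftp.ListDirectory;
        // Setting credentials causes USER and PASS to be sent on the
        // control connection before the listing is requested.
        request.Credentials = new NetworkCredential("username", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription);
        }
    }
}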

3. FEASIBILITY REPORT
Preliminary investigation examines project feasibility, the likelihood the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economical feasibility of adding new modules and debugging the old running system. All systems are feasible if they have unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:

Technical Feasibility
Operational Feasibility
Economical Feasibility
3.1. TECHNICAL FEASIBILITY

The technical issues usually raised during the feasibility stage of the investigation include the following:

Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the data required to use the new system?
Will the proposed system provide adequate responses to inquiries, regardless of the number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier, no system existed to cater to the needs of the Secure Infrastructure Implementation System. The current system developed is technically feasible. It is a web-based user interface for audit workflow at NIC-CSD. Thus it provides easy access to the users. The database's purpose is to create, establish and maintain a workflow among various entities in order to facilitate all concerned users in their various capacities or roles. Permission to the users would be granted based on the roles specified. Therefore, it provides the technical guarantee of accuracy, reliability and security. The software and hardware requirements for the development of this project are not many and are already available in-house at NIC or are available as free and open source. The work for the project is done with the current equipment and existing software technology. The necessary bandwidth exists for providing fast feedback to the users irrespective of the number of users using the system.

3.2. OPERATIONAL FEASIBILITY

Proposed projects are beneficial only if they can be turned into information systems that will meet the organization's operating requirements. Operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following:

Is there sufficient support for the management from the users?
Will the system be used and work properly once it is developed and implemented?
Will there be any resistance from the users that will undermine the possible application benefits?
This system is targeted to be in accordance with the above-mentioned issues. Beforehand, the management issues and user requirements have been taken into consideration. So there is no question of resistance from the users that could undermine the possible application benefits.

The well-planned design would ensure the optimal utilization of the computer resources and
would help in the improvement of performance status.

3.3. ECONOMICAL FEASIBILITY

A system that can be developed technically, and that will be used if installed, must still be a good investment for the organization. In the economical feasibility, the development cost of creating the system is evaluated against the ultimate benefit derived from the new system. Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware or software. Since the interface for this system is developed using the existing resources and technologies available at NIC, there is only nominal expenditure, and economical feasibility is certain.

4. SOFTWARE REQUIREMENTS
Scope: This document plays a vital role in the development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

DEVELOPER'S RESPONSIBILITIES OVERVIEW:

The developer is responsible for:

Developing the system, which meets the SRS and solves all the requirements of the system.
Demonstrating the system and installing the system at the client's location after the acceptance testing is successful.
Submitting the required user manual describing the system interfaces to work on it and also the documents of the system.
Conducting any user training that might be needed for using the system.
Maintaining the system for a period of one year after installation.

4.1. FUNCTIONAL REQUIREMENTS


OUTPUT DESIGN

Outputs from computer systems are required primarily to communicate the results of processing to users. They are also used to provide a permanent copy of the results for later consultation. The various types of outputs in general are:

External outputs, whose destination is outside the organization.
Internal outputs, whose destination is within the organization and which are the user's main interface with the computer.
Operational outputs, whose use is purely within the computer department.
Interface outputs, which involve the user in communicating directly.

OUTPUT DEFINITION

The outputs should be defined in terms of the following points:


Type of the output
Content of the output
Format of the output
Location of the output
Frequency of the output
Volume of the output
Sequence of the output
It is not always desirable to print or display data as it is held on a computer. It should be decided which form of output is the most suitable.

INPUT DESIGN

Input design is a part of the overall system design. The main objectives during input design are as given below:

To produce a cost-effective method of input.
To achieve the highest possible level of accuracy.
To ensure that the input is acceptable to and understood by the user.

INPUT STAGES:

The main input stages can be listed as below:

Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation

Data correction

INPUT TYPES:

It is necessary to determine the various types of inputs. Inputs can be categorized as follows:

External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational inputs, which are the computer department's communications to the system.
Interactive inputs, which are inputs entered during a dialogue.
INPUT MEDIA:

At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to the following:

Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Easy to use
Portability
Keeping in view the above description of the input types and input media, it can be said that most of the inputs are of the internal and interactive form. As input data is to be directly keyed in by the user, the keyboard can be considered the most suitable input device.

ERROR AVOIDANCE

At this stage care is to be taken to ensure that input data remains accurate from the stage at which it is recorded up to the stage at which the data is accepted by the system. This can be achieved only by means of careful control each time the data is handled.

ERROR DETECTION

Even though every effort is made to avoid the occurrence of errors, a small proportion of errors is always likely to occur. These types of errors can be discovered by using validations to check the input data.

DATA VALIDATION

Procedures are designed to detect errors in data at a lower level of detail. Data validations have been included in the system in almost every area where there is a possibility for the user to commit errors. The system will not accept invalid data. Whenever invalid data is keyed in, the system immediately prompts the user, and the user has to key in the data again; the system will accept the data only if it is correct. Validations have been included where necessary.

The system is designed to be user friendly. In other words, the system has been designed to communicate effectively with the user. The system has been designed with popup menus.
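A minimal console sketch of this validation behaviour: invalid input is rejected and the user is re-prompted until the data is valid. The "age" field and its valid range are illustrative assumptions.

using System;

class InputValidationSketch
{
    static void Main()
    {
        int age;
        Console.Write("Enter age: ");
        // Invalid data is not accepted; prompt the user again.
        while (!int.TryParse(Console.ReadLine(), out age) || age < 0 || age > 120)
        {
            Console.Write("Invalid age, please re-enter: ");
        }
        Console.WriteLine("Accepted: " + age);
    }
}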

USER INTERFACE DESIGN

It is essential to consult the system users and discuss their needs while designing the user
interface:

USER INTERFACE SYSTEMS CAN BE BROADLY CLASSIFIED AS:

1. User-initiated interfaces: the user is in charge, controlling the progress of the user/computer dialogue.
2. Computer-initiated interfaces: the computer selects the next stage in the interaction.

In computer-initiated interfaces, the computer guides the progress of the user/computer dialogue. Information is displayed, and on the user's response the computer takes action or displays further information.

USER-INITIATED INTERFACES

User-initiated interfaces fall into two approximate classes:

1. Command-driven interfaces: in this type of interface the user inputs commands or queries which are interpreted by the computer.
2. Forms-oriented interfaces: the user calls up an image of the form on his/her screen and fills in the form. The forms-oriented interface is chosen because it best suits this application.
COMPUTER-INITIATED INTERFACES

The following computer-initiated interfaces were used:

1. The menu system: the user is presented with a list of alternatives and chooses one of them.
2. Question-answer type dialog system: the computer asks a question and takes action based on the user's reply.

Right from the start the system is menu driven: the opening menu displays the available options. Choosing one option gives another popup menu with more options. In this way every option leads the user to a data entry form where the user can key in the data.

ERROR MESSAGE DESIGN:

The design of error messages is an important part of user interface design. As the user is bound to commit some errors while using the system, the system should be designed to be helpful by providing the user with information regarding the error he/she has committed.

This application must be able to produce output at different modules for different inputs.

4.2. PERFORMANCE REQUIREMENTS


Performance is measured in terms of the output provided by the application.

Requirement specification plays an important part in the analysis of a system. Only when the requirement specifications are properly given is it possible to design a system that will fit into the required environment. It rests largely with the users of the existing system to give the requirement specifications, because they are the people who will finally use the system. This is because the requirements have to be known during the initial stages so that the system can be designed according to those requirements. It is very difficult to change the system once it has been designed; on the other hand, designing a system which does not cater to the requirements of the user is of no use.

The requirement specification for any system can be broadly stated as given below:

The system should be able to interface with the existing system.
The system should be accurate.
The system should be better than the existing system.

The existing system is completely dependent on the user to perform all the duties.

5. LITERATURE SURVEY
5.1 INTRODUCTION

Portability

The design of the .NET Framework allows it to theoretically be platform agnostic, and
thus cross-platform compatible. That is, a program written to use the framework should
run without change on any type of system for which the framework is implemented.
Microsoft's commercial implementations of the framework cover Windows, Windows

CE, and the Xbox 360. In addition, Microsoft submits the specifications for the Common
Language Infrastructure (which includes the core class libraries, Common Type System,
and the Common Intermediate Language), the C# language, and the C++/CLI language to
both ECMA and the ISO, making them available as open standards. This makes it
possible for third parties to create compatible implementations of the framework and its
languages on other platforms.

Architecture

[Figure: Visual overview of the Common Language Infrastructure (CLI)]

Common Language Infrastructure

The core aspects of the .NET framework lie within the Common Language Infrastructure, or
CLI. The purpose of the CLI is to provide a language-neutral platform for application
development and execution, including functions for exception handling, garbage collection,
security, and interoperability. Microsoft's implementation of the CLI is called the Common
Language Runtime or CLR.

Assemblies

The intermediate CIL code is housed in .NET assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. An assembly consists of one or more files, one of which must contain the manifest, which has the metadata for the assembly. The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. The public key token is a unique hash generated when the assembly is compiled; thus two assemblies with the same public key token are guaranteed to be identical from the point of view of the framework. A private key can also be specified, known only to the creator of the assembly, and can be used for strong naming and to guarantee that the assembly is from the same author when a new version of the assembly is compiled (required to add an assembly to the Global Assembly Cache).
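A small C# sketch showing how the complete name of an assembly can be inspected at runtime through reflection:

using System;
using System.Reflection;

class AssemblyNameDemo
{
    static void Main()
    {
        // The assembly containing System.String (mscorlib on the
        // classic .NET Framework).
        Assembly assembly = typeof(string).Assembly;
        // Prints the full name: simple name, version, culture,
        // and public key token.
        Console.WriteLine(assembly.FullName);
    }
}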

Metadata

All CLI code is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly, and is also used to implement the reflective programming capabilities of the .NET Framework.
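A minimal C# sketch of developer-defined metadata: a custom attribute is declared, applied, and read back via reflection. The attribute and class names are illustrative assumptions.

using System;

[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name { get; private set; }
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Project Team")]
class TaggedComponent { }

class MetadataDemo
{
    static void Main()
    {
        // Read the custom metadata back through reflection.
        var attribute = (AuthorAttribute)Attribute.GetCustomAttribute(
            typeof(TaggedComponent), typeof(AuthorAttribute));
        Console.WriteLine(attribute.Name);   // Project Team
    }
}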

Security

.NET has its own security mechanism with two general features: Code Access Security (CAS),
and validation and verification. Code Access Security is based on evidence that is associated
with a specific assembly. Typically the evidence is the source of the assembly (whether it is
installed on the local machine or has been downloaded from the intranet or Internet). Code
Access Security uses evidence to determine the permissions granted to the code. Other code can
demand that calling code is granted a specified permission. The demand causes the CLR to
perform a call stack walk: every assembly of each method in the call stack is checked for the
required permission; if any assembly is not granted the permission a security exception is
thrown.

When an assembly is loaded the CLR performs various tests. Two such tests are validation and
verification. During validation the CLR checks that the assembly contains valid metadata and
CIL, and whether the internal tables are correct. Verification is not so exact. The verification
mechanism checks to see if the code does anything that is 'unsafe'. The algorithm used is quite
conservative; hence occasionally code that is 'safe' does not pass. Unsafe code will only be
executed if the assembly has the 'skip verification' permission, which generally means code that
is installed on the local machine.
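A minimal sketch of a CAS demand in the classic .NET Framework described here; the file path is an illustrative assumption.

using System;
using System.Security;
using System.Security.Permissions;

class CasDemandDemo
{
    static void Main()
    {
        try
        {
            var permission = new FileIOPermission(
                FileIOPermissionAccess.Read, @"C:\data\report.txt");
            // Triggers the call-stack walk; every caller must have been
            // granted read access to this path.
            permission.Demand();
            Console.WriteLine("Read permission granted.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("Read permission denied.");
        }
    }
}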

.NET Framework uses appdomains as a mechanism for isolating code running in a process.
Appdomains can be created and code loaded into or unloaded from them independent of other
appdomains. This helps increase the fault tolerance of the application, as faults or crashes in one
appdomain do not affect rest of the application. Appdomains can also be configured
independently with different security privileges. This can help increase the security of the
application by isolating potentially unsafe code. The developer, however, has to split the
application into sub domains; it is not done by the CLR.
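A minimal sketch of creating and unloading an appdomain in the classic .NET Framework:

using System;

class AppDomainDemo
{
    static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine("Created: " + sandbox.FriendlyName);

        // Code loaded into "Sandbox" could fail without taking down the
        // default domain; here we simply unload it again.
        AppDomain.Unload(sandbox);
        Console.WriteLine("Unloaded.");
    }
}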

Class library

Namespaces in the BCL


System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions

The Microsoft .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either the System.* or Microsoft.* namespaces. It encapsulates a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all .NET languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the Common Language Runtime. The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in both the .NET Framework and its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire
class library that ships with .NET Framework. It includes an expanded set of libraries, including
WinForms, ADO.NET, ASP.NET, Language Integrated Query, Windows Presentation
Foundation, Windows Communication Foundation among others. The FCL is much larger in
scope than standard libraries for languages like C++, and comparable in scope to the standard
libraries of Java.

Memory management

The .NET Framework CLR frees the developer from the burden of managing memory (allocating
and freeing up when done); instead it does the memory management itself. To this end, the
memory allocated to instantiations of .NET types (objects) is done contiguously from the
managed heap, a pool of memory managed by the CLR. As long as there exists a reference to an
object, which might be either a direct reference to an object or via a graph of objects, the object
is considered to be in use by the CLR. When there is no reference to an object, and it cannot be
reached or used, it becomes garbage. However, it still holds on to the memory allocated to it.
.NET Framework includes a garbage collector which runs periodically, on a separate thread from
the application's thread, that enumerates all the unusable objects and reclaims the memory
allocated to them.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been used or there is enough
pressure for memory on the system. Since it is not guaranteed when the conditions to reclaim
memory are reached, the GC runs are non-deterministic. Each .NET application has a set of
roots, which are pointers to objects on the managed heap (managed objects). These include
references to static objects and objects defined as local variables or method parameters currently
in scope, as well as objects referred to by CPU registers. When the GC runs, it pauses the
application, and for each object referred to in the root, it recursively enumerates all the objects
reachable from the root objects and marks them as reachable. It uses .NET metadata and
reflection to discover the objects encapsulated by an object, and then recursively walks them. It
then enumerates all the objects on the heap (which were initially allocated contiguously) using
reflection. All objects not marked as reachable are garbage. This is the mark phase. Since the
memory held by garbage is not of any consequence, it is considered free space. However, this
leaves chunks of free space between objects which were initially contiguous. The objects are
then compacted together, by using memcpy to copy them over to the free space to make them
contiguous again. Any reference to an object invalidated by moving the object is updated to
reflect the new location by the GC. The application is resumed after the garbage collection is
over.

The GC used by .NET Framework is actually generational. Objects are assigned a generation;
newly created objects belong to Generation 0. The objects that survive a garbage collection are
tagged as Generation 1, and the Generation 1 objects that survive another collection are
Generation 2 objects. The .NET Framework uses up to Generation 2 objects. Higher generation
objects are garbage collected less frequently than lower generation objects. This helps increase
the efficiency of garbage collection, as older objects tend to have a larger lifetime than newer
objects. Thus, by removing older (and thus more likely to survive a collection) objects from the
scope of a collection run, fewer objects need to be checked and compacted.
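A small C# illustration of generational promotion using the GC class; forcing collections with GC.Collect is for demonstration only.

using System;

class GcGenerationDemo
{
    static void Main()
    {
        var data = new byte[1024];
        Console.WriteLine(GC.GetGeneration(data));   // 0 - newly allocated

        GC.Collect();                                // force one collection
        Console.WriteLine(GC.GetGeneration(data));   // 1 - survived once

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data));   // 2 - survived twice
    }
}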

Versions
Microsoft started development on the .NET Framework in the late 1990s originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of
.NET 1.0 were released.

[Figure: The .NET Framework stack]

Client Application Development

Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers. Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the
Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft Visual Basic. The .NET Framework incorporates aspects of
these existing products into a single, consistent development environment that drastically
simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be
used for GUI development. You can easily create command windows, buttons, menus, toolbars,
and other screen elements with the flexibility necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating system does not support
changing these attributes directly, and in these cases the .NET Framework automatically
recreates the forms. This is one of many ways in which the .NET Framework integrates the
developer interface, making coding simpler and more consistent.

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server.

This model provides you with all the features of the common language runtime and
class library while gaining the performance and scalability of the host server.

Server-side managed code

ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.
5.2 C#.NET

The Relationship of C# to .NET

C# is a new programming language, and is significant in two respects:

It is specifically designed and targeted for use with Microsoft's .NET Framework (a
feature rich platform for the development, deployment, and execution of distributed
applications).

It is a language based upon the modern object-oriented design methodology, and when designing it Microsoft has been able to learn from the experience of all the other similar languages that have been around over the 20 years or so since object-oriented principles came to prominence.

30 | P a g e
MCA, AIET.
One important thing to make clear is that C# is a language in its own right. Although it is designed to generate code that targets the .NET environment, it is not itself part of .NET. There are some features that are supported by .NET but not by C#, and you might be surprised to learn that there are actually features of the C# language that are not supported by .NET, like operator overloading. However, since the C# language is intended for use with .NET, it is important for us to have an understanding of this Framework if we wish to develop applications in C# effectively; so, this chapter looks at the Framework alongside the language.
The Common Language Runtime:

Central to the .NET framework is its run-time execution environment, known as the
Common Language Runtime (CLR) or the .NET runtime. Code running under the control of
the CLR is often termed managed code.
However, before it can be executed by the CLR, any source code that we develop (in C# or some
other language) needs to be compiled. Compilation occurs in two steps in .NET:
1. Compilation of source code to Microsoft Intermediate Language (MS-IL)
2. Compilation of IL to platform-specific code by the CLR
At first sight this might seem a rather long-winded compilation process. Actually, this two-
stage compilation process is very important, because the existence of the Microsoft Intermediate
Language (managed code) is the key to providing many of the benefits of .NET. Let's see why.
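To make the two compilation steps concrete, the following minimal program can be compiled and inspected; the csc and ildasm commands shown in the comments are the standard .NET Framework command-line tools.

// Step 1: the C# compiler produces IL plus metadata inside the assembly:
//   csc Hello.cs        -> produces Hello.exe containing IL
//   ildasm Hello.exe    -> shows the IL instructions
// Step 2: when Hello.exe runs, the CLR JIT-compiles the IL to native code.
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("Compiled to IL, then JIT-compiled at run time.");
    }
}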
Advantages of Managed Code

Microsoft Intermediate Language (often shortened to "Intermediate Language", or "IL") shares with Java byte code the idea that it is a low-level language with a simple syntax (based on numeric codes rather than text), which can be very quickly translated into native machine code. Having this well-defined, universal syntax for code has significant advantages.
Platform Independence

First, it means that the same file containing byte code instructions can be placed on any
platform; at runtime the final stage of compilation can then be easily accomplished so that the
code will run on that particular platform. In other words, by compiling to Intermediate Language

we obtain platform independence for .NET, in much the same way as compiling to Java byte
code gives Java platform independence.

You should note that the platform independence of .NET is only theoretical at present
because, at the time of writing, .NET is only available for Windows. However, porting .NET to
other platforms is being explored (see for example the Mono project, an effort to create an open
source implementation of .NET, at http://www.go-mono.com/).
Performance Improvement

Although we previously made comparisons with Java, IL is actually a bit more ambitious than Java byte code. Significantly, IL is always Just-In-Time compiled, whereas Java byte code was often interpreted. One of the disadvantages of Java was that, on execution, the process of translating from Java byte code to native executable resulted in a loss of performance (except in more recent cases, where Java is JIT-compiled on certain platforms).

Instead of compiling the entire application in one go (which could lead to a slow start-up
time), the JIT compiler simply compiles each portion of code as it is called (just-in-time). When
code has been compiled once, the resultant native executable is stored until the application exits,
so that it does not need to be recompiled the next time that portion of code is run. Microsoft argues that this process is more efficient than compiling the entire application code at the start, because of the likelihood that large portions of any application code will not actually be executed in any given run. Using the JIT compiler, such code will never get compiled.
This explains why we can expect that execution of managed IL code will be almost as
fast as executing native machine code. What it doesn't explain is why Microsoft expects that we
will get a performance improvement. The reason given for this is that, since the final stage of
compilation takes place at run time, the JIT compiler will know exactly what processor type the
program will run on. This means that it can optimize the final executable code to take advantage
of any features or particular machine code instructions offered by that particular processor.
Traditional compilers will optimize the code, but they can only perform optimizations
that will be independent of the particular processor that the code will run on. This is because
traditional compilers compile to native executable before the software is shipped. This means
that the compiler doesn't know what type of processor the code will run on beyond basic

generalities, such as that it will be an x86-compatible processor or an Alpha processor. Visual Studio 6, for example, optimizes for a generic Pentium machine, so the code that it generates cannot take advantage of hardware features of Pentium III processors. On the other hand, the JIT compiler can do all the optimizations that Visual Studio 6 can, and in addition it will optimize for the particular processor the code is running on.
Language Interoperability

We have seen how the use of IL enables platform independence, and how JIT compilation should improve performance. However, IL also facilitates language interoperability. Simply put, you can compile to IL from one language, and this compiled code should then be interoperable with code that has been compiled to IL from another language.
Intermediate Language

From what we learned in the previous section, Intermediate Language obviously plays a
fundamental role in the .NET Framework. As C# developers, we now understand that our C#
code will be compiled into Intermediate Language before it is executed (indeed, the C# compiler
only compiles to managed code). It makes sense, then, that we should now take a closer look at
the main characteristics of IL, since any language that targets .NET would logically need to
support the main characteristics of IL too.
Here are the important features of the Intermediate Language:

Object-orientation and use of interfaces


Strong distinction between value and reference types
Strong data typing
Error handling through the use of exceptions
Use of attributes
Support of Object Orientation and Interfaces

The language independence of .NET does have some practical limits. In particular, IL, however it is designed, is inevitably going to implement some particular programming methodology, which means that languages targeting it are going to have to be compatible with that methodology. The particular route that Microsoft has chosen to follow for IL is that of classic object-oriented programming, with single implementation inheritance of classes.

Besides classic object-oriented programming, Intermediate Language also brings in the idea of interfaces, which saw their first implementation under Windows with COM. .NET interfaces are not the same as COM interfaces; they do not need to support any of the COM infrastructure (for example, they are not derived from IUnknown, and they do not have associated GUIDs). However, they do share with COM interfaces the idea that they provide a contract, and classes that implement a given interface must provide implementations of the methods and properties specified by that interface.
Object Orientation and Language Interoperability

Working with .NET means compiling to the Intermediate Language, and that in turn
means that you will need to be programming using traditional object-oriented methodologies.
That alone is not, however, sufficient to give us language interoperability. After all, C++ and
Java both use the same object-oriented paradigms, but they are still not regarded as interoperable.
We need to look a little more closely at the concept of language interoperability.

An associated problem was that, when debugging, you would still have to independently debug components written in different languages. It was not possible to step between languages in the debugger. So what we really mean by language interoperability is that classes written in one language should be able to talk directly to classes written in another language. In particular:

A class written in one language can inherit from a class written in another language.
A class can contain an instance of another class, no matter what the languages of the two classes are.
An object can directly call methods of another object written in another language.
Objects (or references to objects) can be passed around between methods.
When calling methods between languages, we can step between the method calls in the debugger, even where this means stepping between source code written in different languages.

This is all quite an ambitious aim, but amazingly, .NET and the Intermediate Language
have achieved it. For the case of stepping between methods in the debugger, this facility is really
offered by the Visual Studio .NET IDE rather than from the CLR itself.
Strong Data Typing

One very important aspect of IL is that it is based on exceptionally strong data typing.
What we mean by that is that all variables are clearly marked as being of a particular, specific
data type (there is no room in IL, for example, for the Variant data type recognized by Visual
Basic and scripting languages). In particular, IL does not normally permit any operations that
result in ambiguous data types.
For instance, VB developers will be used to being able to pass variables around without
worrying too much about their types, because VB automatically performs type conversion. C++
developers will be used to routinely casting pointers between different types. Being able to
perform this kind of operation can be great for performance, but it breaks type safety. Hence, it is
permitted only in very specific circumstances in some of the languages that compile to managed
code. Indeed, pointers (as opposed to references) are only permitted in marked blocks of code in
C#, and not at all in VB (although they are allowed as normal in managed C++). Using pointers
in your code will immediately cause it to fail the memory type safety checks performed by the
CLR.
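As a small illustration of the restriction just described, the following C# sketch confines pointer use to an unsafe block; the file must be compiled with the /unsafe compiler switch.

using System;

class UnsafeDemo
{
    static void Main()
    {
        int value = 42;
        unsafe
        {
            int* p = &value;        // pointer use is confined to this block
            Console.WriteLine(*p);  // 42
        }
    }
}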
You should note that some languages compatible with .NET, such as VB.NET, still allow
some laxity in typing, but that is only possible because the compilers behind the scenes ensure
the type safety is enforced in the emitted IL.
Although enforcing type safety might initially appear to hurt performance, in many cases
this is far outweighed by the benefits gained from the services provided by .NET that rely on
type safety. Such services include:
Language Interoperability
Garbage Collection
Security
Application Domains
Common Type System (CTS)

This data type problem is solved in .NET through the use of the Common Type System
(CTS). The CTS defines the predefined data types that are available in IL, so that all languages
that target the .NET framework will produce compiled code that is ultimately based on these
types.
The CTS doesn't merely specify primitive data types, but a rich hierarchy of types, which
includes well-defined points in the hierarchy at which code is permitted to define its own types.
The hierarchical structure of the Common Type System reflects the single-inheritance object-
oriented methodology of IL.

[Figure: the hierarchy of CTS types, shown in the original report.]
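
As a small illustration (not from the project code) of why this matters: each language's primitive types are just aliases for CTS types, so values pass between languages without any conversion.

public class CtsDemo
{
    public static void Main()
    {
        int csharpValue = 42;                 // C# alias for the CTS type System.Int32
        System.Int32 ctsValue = csharpValue;  // the very same type; no conversion occurs
        // A VB.NET Integer or a managed C++ int compiles to this same CTS type.
        System.Console.WriteLine(ctsValue.GetType().FullName);  // prints "System.Int32"
    }
}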

Common Language Specification (CLS)

The Common Language Specification works with the Common Type System to ensure
language interoperability. The CLS is a set of minimum standards that all compilers targeting
.NET must support. Since IL is a very rich language, writers of most compilers will prefer to
restrict the capabilities of a given compiler to only support a subset of the facilities offered by IL
and the CTS. That is fine, as long as the compiler supports everything that is defined in the CLS.
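
A minimal sketch (illustrative) of what CLS compliance looks like in practice: marking an assembly CLS-compliant makes the compiler flag public members whose types fall outside the CLS subset, such as unsigned integers.

[assembly: System.CLSCompliant(true)]

public class Calculator
{
    // CLS-compliant: int (System.Int32) is part of the CLS subset,
    // so any .NET language can call this method.
    public int Add(int a, int b)
    {
        return a + b;
    }

    // A public member using uint would draw compiler warning CS3003,
    // because unsigned integers are not CLS-compliant:
    // public uint Total;
}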

Garbage Collection

The garbage collector is .NET's answer to memory management, and in particular to the
question of what to do about reclaiming memory that running applications ask for but never
explicitly release themselves.
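
A compact sketch (illustrative only) of managed allocation: objects are created with new, but there is no matching delete; the garbage collector reclaims them once they become unreachable.

public class GcDemo
{
    private static void CreateGarbage()
    {
        byte[] buffer = new byte[1024 * 1024];  // allocated on the managed heap
        // ... buffer is used here ...
    }   // on return, buffer becomes unreachable and eligible for collection

    public static void Main()
    {
        CreateGarbage();
        System.GC.Collect();   // forcing a collection is possible, but rarely advisable
    }
}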

6. SYSTEM DESIGN
6.1 NORMALIZATION

It is a process of converting a relation to a standard form. The process is used to handle
the problems that can arise due to data redundancy, i.e. repetition of data in the database, to
maintain data integrity, and to handle the problems that can arise due to insertion, update, and
deletion anomalies.

Decomposing is the process of splitting relations into multiple relations to eliminate
anomalies and maintain data integrity. To do this we use normal forms, or rules for structuring
relations.

Insertion anomaly: Inability to add data to the database due to absence of other data.

Deletion anomaly: Unintended loss of data due to deletion of other data.

Update anomaly: Data inconsistency resulting from data redundancy and partial update.

Normal Forms: These are the rules for structuring relations that eliminate anomalies.

FIRST NORMAL FORM:

A relation is said to be in first normal form if the values in the relation are atomic for
every attribute in the relation. By this we mean simply that no attribute value can be a set of
values or, as it is sometimes expressed, a repeating group.

SECOND NORMAL FORM:

A relation is said to be in second normal form if it is in first normal form and it satisfies
any one of the following rules:

1) The primary key is not a composite primary key.
2) No non-key attributes are present.
3) Every non-key attribute is fully functionally dependent on the full set of the primary key.
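
For example (an illustrative schema, not one of the project's tables), a relation Order(OrderId, ProductId, ProductName, Qty) with composite key (OrderId, ProductId) violates rule 3, since ProductName depends only on ProductId; moving ProductName into a separate Product(ProductId, ProductName) relation brings the design into second normal form.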

THIRD NORMAL FORM:

A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency: If two non-key attributes depend on each other as well as on the
primary key, then they are said to be transitively dependent. For example, if EmployeeId
determines DeptId and DeptId determines DeptName, then DeptName is transitively dependent
on EmployeeId.

The above normalization principles were applied to decompose the data into multiple tables,
thereby keeping the data in a consistent state.

6.2. E R DIAGRAMS

The relations within the system are structured through a conceptual ER diagram, which not only
specifies the existential entities but also the standard relationships through which the system
exists and the cardinalities that are necessary for the system state to continue.

The entity relationship diagram (ERD) depicts the relationships between the data objects.
The ERD is the notation that is used to conduct the data modeling activity; the attributes of
each data object noted in the ERD can be described using a data object description.

The set of primary components that are identified by the ERD are:

data objects, relationships, attributes, and various types of indicators.

The primary purpose of the ERD is to represent data objects and their relationships.

6.3. DATA FLOW DIAGRAMS

A data flow diagram is a graphical tool used to describe and analyze the movement of data
through a system. These are the central tool and the basis from which other components are
developed. The transformation of data from input to output, through processes, may be
described logically and independently of the physical components associated with the system.
These are known as logical data flow diagrams. Physical data flow diagrams show the actual
implementation and movement of data between people, departments and workstations. A full
description of a system actually consists of a set of data flow diagrams. The data flow diagrams
are developed using two familiar notations: Yourdon and Gane & Sarson. Each component

in a DFD is labeled with a descriptive name. A process is further identified with a number that
is used for identification purposes. The development of DFDs is done in several levels. Each
process in a lower-level diagram can be broken down into a more detailed DFD at the next level.
The top-level diagram is often called the context diagram. It consists of a single process, which
plays a vital role in studying the current system. The process in the context-level diagram is
exploded into other processes in the first-level DFD.

The idea behind the explosion of a process into more processes is that understanding at one
level of detail is exploded into greater detail at the next level. This is done until no further
explosion is necessary and an adequate amount of detail is described for the analyst to
understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements
in a graphical form; this led to modular design.

A DFD, also known as a bubble chart, has the purpose of clarifying system
requirements and identifying major transformations that will become programs in system design.
So it is the starting point of the design, down to the lowest level of detail. A DFD consists of a
series of bubbles joined by data flows in the system.

DFD SYMBOLS:

In the DFD, there are four symbols

1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing
data flows.
4. An open rectangle is a data store - data at rest, or a temporary repository of data.

[Figure: the four DFD symbols - process, source or destination of data, data flow, and data store.]

CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:

1. Processes should be named and numbered for easy reference. Each name should be
representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow
from the source to the destination, although they may flow back to the source. One way to
indicate this is to draw a long flow line back to the source. An alternative way is to repeat the
source symbol as a destination. Since it is used more than once in the DFD, it is marked with
a short diagonal.
3. When a process is exploded into lower-level details, the details are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow
names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store. Each data store should
contain all the data elements that flow in and out.

Questionnaires should contain all the data elements that flow in and out. Missing
interfaces, redundancies, and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFDs

1. The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, whether the data flow
takes place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS


1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL:
In the current physical DFD, process labels include the names of people or their positions, or
the names of the computer systems that might provide some of the overall system processing.
The label includes an identification of the technology used to process the data. Similarly, data
flows and data stores are often labeled with the names of the actual physical media on which
data are stored, such as file folders, computer files, business forms or computer tapes.

CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current
system is reduced to its essence: the data and the processes that transform them, regardless of
their actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user is completely happy with the
functionality of the current system but has problems with how it is implemented. Typically,
the new logical model will differ from the current logical model in having additional functions,
obsolete functions removed, and inefficient flows reorganized.

NEW PHYSICAL:

The new physical represents only the physical implementation of the new system.

RULES GOVERNING THE DFDS

PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb phrase label.

DATA STORE

1) Data cannot move directly from one data store to another data store; a process must move
the data.
2) Data cannot move directly from an outside source to a data store; a process, which receives
data from the source, must place the data into the data store.
3) A data store has a noun phrase label.

SOURCE OR SINK

The origin and / or destination of data.

1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.

DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions
between a process and a data store to show a read before an update; the latter is usually
indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different
processes, data stores or sinks to a common location.

3) A data flow cannot go directly back to the same process it leaves. There must be at least one
other process that handles the data flow, produces some other data flow, and returns the
original data to the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
6) A data flow has a noun phrase label. More than one data flow noun phrase can appear on a
single arrow, as long as all of the flows on the same arrow move together as one package.

DFD Diagrams:

Context Level Diagram (0 Level)

[Figure: context-level DFD for Identity-Based Proxy-Oriented Data Uploading and Remote Data
Integrity Checking in Public Cloud - in the data input stage, the Admin, Transaction Manager,
and VTTP user actors work through UI screens against the data storage (databases and
policies); the data output stage produces managerial reports.]

Login DFD:

[Figure: login DFD - the user opens the login form and enters a user name and password; the
system validates the data against Tbl_LoginMaster and, on success, opens the user home page.]
Admin DFD:

[Figure: admin DFD - the admin opens the form (1.0.0), enters login details (1.0.1), and the data
is validated against the login DB (1.0.2); the admin can then view user registrations (1.0.3,
Tbl_Registration) and transaction reports (1.0.4, Tbl_Request).]
Transaction manager:-

1st Level

[Figure: first-level DFD for the transaction manager - the manager opens the form (2.0.0) and
enters login details (2.0.1), which are validated against the user login master; new managers
register (2.0.4, Tbl_RegistrtionForm), while validated managers manage personal details (2.0.3),
upload files (2.0.5, Tbl_Files), and view files (2.0.6, Tbl_View Files).]
Transaction manager:-

2nd Level

[Figure: second-level DFD for the transaction manager - after login validation, the manager
opens the upload form (4.0.0), adds upload file details (2.5.1), uploads files (2.5.2), and adds or
updates file details (2.5.3 - 2.5.5) against Tbl_Upload and Tbl_UploadType.]
Customer Functionalities

1st Level

[Figure: first-level DFD for the customer - the customer opens the form (4.0.0) and enters login
details (4.0.1), which are validated against the user login master; new customers register (4.0.3,
Tbl_RegistrtionForm), while validated customers manage personal details (4.0.2), request files
(4.0.3, Tbl_Request), and receive responses from the transaction manager (4.0.4).]
ER-DIAGRAM

[Figure: ER diagram of the system, shown in the original report.]
6.4. UML DIAGRAMS

The Unified Modeling Language (UML) is used to specify, visualize, modify, construct and
document the artifacts of an object-oriented, software-intensive system under development. The
UML uses mostly graphical notations to express the design of software projects. UML offers a
standard way to visualize a system's architectural blueprints, including elements such as:

actors
business processes
(logical) components
activities
programming language statements
database schemas, and
reusable software components.

UML Diagrams Overview

UML combines the best techniques from data modeling (entity relationship diagrams),
business modeling (work flows), object modeling, and component modeling. It can be used with
all processes, throughout the software development life cycle, and across different
implementation technologies. UML has synthesized the notations of the Booch method, the
Object-modeling technique (OMT) and Object-oriented software engineering (OOSE) by fusing
them into a single, common and widely usable modeling language. UML aims to be a standard
modeling language which can model concurrent and distributed systems.

Use Case Diagram:-

[Figure: use case diagram - the actors User, Transaction Manager, and VTTP share the use
cases Login, Check Authentication, Provide Access Data, User Reports, Revoke Access Data
Category, Authentication Policies, Profile Update, Reply to User, and Logout.]

Sequence Diagrams:

From the name Interaction it is clear that the diagram is used to describe some type of
interactions among the different elements in the model. So this interaction is a part of the
dynamic behavior of the system.

This interactive behavior is represented in UML by two diagrams known as Sequence diagram
and Collaboration diagram. The basic purposes of both the diagrams are similar.

A sequence diagram emphasizes the time sequence of messages, and a collaboration diagram
emphasizes the structural organization of the objects that send and receive messages.

A sequence diagram is an interaction diagram. From the name it is clear that the diagram deals
with some sequences, which are the sequence of messages flowing from one object to another.

Interaction among the components of a system is very important from implementation and
execution perspective.

So a sequence diagram is used to visualize the sequence of calls in a system to perform a specific
functionality.

Login Sequence Diagram

User Request:

[Figure: sequence diagram - the user opens FileAccess.aspx and enters details; the request is
sent through BAL:UserClass and DAL:SqlHelper (ExecuteNonQuery) to the database, and the
results are returned and shown to the user.]

Transaction Manager Response:

[Figure: sequence diagram - the transaction manager opens CheckTrustedAuthority.aspx and
enters details; the ValidUser call passes through BAL:ExpertClass and DAL:SqlHelper
(ExecuteNonQuery) to the database, and the results are returned and shown.]

VTTP Response:

[Figure: sequence diagram - the VTTP opens ProvidetheDataAccess.aspx and enters details;
the AcceptTheUser call passes through BAL:ExpertClass and DAL:SqlHelper
(ExecuteNonQuery) to the database, and the results are returned and shown.]

DATA DICTIONARY

After carefully understanding the requirements of the client, the entire data storage
requirements were divided into tables. The tables below are normalized to avoid any anomalies
during the course of data entry.

Collaboration Diagram

Login Collaboration:-

[Figure: collaboration diagram - the user opens Default.aspx (1) and enters a user name (2);
BAL:LoginClass checks the user (3) through DAL:SqlHelper (ExecuteDataSet, 4) against the
database, and the result is returned (5, 6) and shown (7).]
Transaction Manager Report:

[Figure: collaboration diagram - the transaction manager opens Checkfrm.aspx (1) and enters
details (2); BAL:UserClass creates the state (3) through DAL:SqlHelper (ExecuteNonQuery, 4)
against the database, and the result is returned (5, 6) and shown (7).]

VTTP Response:

[Figure: collaboration diagram - the VTTP opens AccessData.aspx (1) and enters details (2);
BAL:UserClass sends the request (3) through DAL:SqlHelper (ExecuteDataSet, 4) against the
database, and the result is returned (5, 6) and shown (7).]

Class Diagram:

[Figure: class diagram of the system, shown in the original report.]
Activity Diagrams:-

Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modeling Language,
activity diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.

Activity diagrams are constructed from a limited number of shapes, connected with arrows. The
most important shape types are:

rounded rectangles represent activities;
diamonds represent decisions;
bars represent the start (split) or end (join) of concurrent activities;
a black circle represents the start (initial state) of the workflow;
an encircled black circle represents the end (final state).

Arrows run from the start towards the end and represent the order in which activities happen.

Hence they can be regarded as a form of flowchart. Typical flowchart techniques lack constructs
for expressing concurrency. However, the join and split symbols in activity diagrams only
resolve this for simple cases; the meaning of the model is not clear when they are arbitrarily
combined with decisions or loops.

User Activity:-

[Figure: activity diagram - the user submits a user name and password; invalid credentials
return to the login step, while valid credentials lead to the personal data, authentication
policies, and access data activities.]
Transaction Manager Activity:

[Figure: activity diagram - the transaction manager submits a user name and password; on
successful validation the manager can check user data, provide access to data, and view
reports.]
VTTP:

[Figure: activity diagram - the VTTP submits a user name and password; on successful
validation the VTTP can accept users, provide data access, and view reports.]
7. SYSTEM SECURITY

7.1 INTRODUCTION
The protection of computer-based resources that include hardware, software, data,
procedures and people against unauthorized use or natural disaster is known as system security.

System Security can be divided into four related issues:

Security
Integrity
Privacy
Confidentiality
SYSTEM SECURITY refers to the technical innovations and procedures applied to the
hardware and operating systems to protect against deliberate or accidental damage from a
defined threat.

DATA SECURITY is the protection of data from loss, disclosure, modification and destruction.

SYSTEM INTEGRITY refers to the proper functioning of hardware and programs, appropriate
physical security, and safety against external threats such as eavesdropping and wiretapping.

PRIVACY defines the rights of the user or organizations to determine what information they are
willing to share with or accept from others and how the organization can be protected against
unwelcome, unfair or excessive dissemination of information about it.

CONFIDENTIALITY is a special status given to sensitive information in a database to
minimize the possible invasion of privacy. It is an attribute of information that characterizes its
need for protection.

7.2 SECURITY SOFTWARE


It is a technique used for the purpose of covert communication: it transfers
messages secretly by embedding them into a cover medium with the use of information hiding
techniques. It is one of the conventional techniques capable of hiding a large secret message in a
cover image without introducing many perceptible distortions. .NET has two kinds of security:

Role Based Security
Code Access Security

The Common Language Runtime (CLR) allows code to perform only those operations that the
code has permission to perform. So CAS is the CLR's security system that enforces security
policies by preventing unauthorized access to protected resources and operations. Using
Code Access Security, you can do the following:

Restrict what your code can do


Restrict which code can call your code
Identify code
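
As a brief sketch (illustrative; it assumes the classic .NET Framework CAS model and the System.Security.Permissions namespace, and the path shown is made up), a declarative permission demand looks like this - the CLR walks the call stack and throws a SecurityException unless every caller has been granted the permission:

using System;
using System.Security.Permissions;

public class ReportReader
{
    // Demands read permission on the (hypothetical) C:\Reports path from
    // every assembly in the call stack before the method body runs.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\Reports")]
    public string ReadSummary()
    {
        return System.IO.File.ReadAllText(@"C:\Reports\summary.txt");
    }
}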

8. CODING
8.1 STRATEGIC APPROACH

The software engineering process can be viewed as a spiral. Initially system engineering
defines the role of software and leads to software requirement analysis where the information
domain, functions, behavior, performance, constraints and validation criteria for software are
established. Moving inward along the spiral, we come to design and finally to coding. To
develop computer software we spiral in along streamlines that decrease the level of abstraction
on each turn.

[Figure: testing phases - unit testing and module testing (component testing), sub-system
testing and system testing (integration testing), and acceptance testing (user testing).]

8.1.2 UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design, the
module. The unit testing we performed is white-box oriented, and for some modules the steps
were conducted in parallel.
1. WHITE BOX TESTING
This type of testing ensures that:

All independent paths have been exercised at least once


All logical decisions have been exercised on their true and false sides
All loops are executed at their boundaries and within their operational bounds
All internal data structures have been exercised to assure their validity.
To follow the concept of white box testing, we tested each form we created independently, to
verify that data flow is correct, all conditions are exercised to check their validity, and all loops
are executed on their boundaries.
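
As a small hedged illustration (NUnit-style; the validator class below is hypothetical, standing in for one of the project's form-validation helpers):

using NUnit.Framework;

// Illustrative system under test; stands in for a form-validation helper.
public static class LoginValidator
{
    public static bool IsValid(string user, string pwd)
    {
        return !string.IsNullOrEmpty(user) && !string.IsNullOrEmpty(pwd);
    }
}

[TestFixture]
public class LoginValidatorTests
{
    [Test]
    public void BothFieldsPresent_IsValid()      // exercises the true side
    {
        Assert.IsTrue(LoginValidator.IsValid("alice", "secret"));
    }

    [Test]
    public void EmptyPassword_IsInvalid()        // exercises the false side
    {
        Assert.IsFalse(LoginValidator.IsValid("alice", ""));
    }
}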

2. BASIS PATH TESTING

The established technique of flow graphs with cyclomatic complexity was used to derive test
cases for all the functions. The main steps in deriving test cases were:

Use the design of the code and draw the corresponding flow graph.

Determine the cyclomatic complexity of the resultant flow graph, using the formula:

V(G) = E - N + 2, or

V(G) = P + 1, or

V(G) = number of regions

where V(G) is the cyclomatic complexity,

E is the number of edges,

N is the number of flow graph nodes,

P is the number of predicate nodes.

Determine the basis set of linearly independent paths.
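
For instance (an illustrative flow graph, not one from the project), a graph with 8 edges, 7
nodes, and 2 predicate nodes gives V(G) = 8 - 7 + 2 = 3 by the first formula, V(G) = 2 + 1 = 3
by the second, and 3 enclosed regions by the third; all three agree, so three linearly independent
paths must be covered by test cases.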


3. CONDITIONAL TESTING

In this part of the testing, each condition was tested for both its true and false
outcomes, and all the resulting paths were tested, so that each path that may be generated by a
particular condition is traced to uncover any possible errors.

4. DATA FLOW TESTING

This type of testing selects the paths of the program according to the locations of the definitions
and uses of variables. This kind of testing was used only where some local variables were
declared. The definition-use chain method was used in this type of testing. It was particularly
useful in nested statements.

5. LOOP TESTING

In this type of testing, all the loops are tested at all the limits possible. The following exercise
was adopted for all loops:
All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
For concatenated loops, the values of the dependent loops were set with the help of the
connected loop.
Unstructured loops were resolved into nested loops or concatenated loops and tested as
above.

8.2 SAMPLE CODE

using BPA_Service.DAL;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace BPA_Service
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change the
    // class name "Service1" in code, svc and config file together.

    // NOTE: In order to launch WCF Test Client for testing this service, please select
    // Service1.svc or Service1.svc.cs at the Solution Explorer and start debugging.

    public class Service : IService
    {
        // Registers a new user. The record is saved with Status = "Pending"
        // until an administrator accepts or rejects it.
        public string InsertUserRegistration(UserRegistration ur)
        {
            string msg = string.Empty;

            if (ur.Email != null && ur.FirstName != null && ur.Password != null &&
                ur.UserName != null)
            {
                try
                {
                    tbl_UserRegistration tblur = new tbl_UserRegistration();
                    tblur.UserId = ur.UserId;
                    tblur.UserName = ur.UserName;
                    tblur.Password = ur.Password;
                    tblur.FirstName = ur.FirstName;
                    tblur.MiddleName = ur.MiddleName;
                    tblur.LastName = ur.LastName;
                    tblur.DOB = ur.DOB;
                    //tblur.DOR = ur.DOR;
                    tblur.Gender = ur.Gender;
                    tblur.Email = ur.Email;
                    tblur.PhoneNo = ur.PhoneNo;
                    tblur.Address = ur.Address;
                    //tblur.Image = ur.Image;
                    tblur.Status = "Pending";
                    tblur.FileName = ur.Image;
                    tblur.Role = ur.Role;

                    using (bpaEntities entity = new bpaEntities())
                    {
                        entity.tbl_UserRegistration.Add(tblur);
                        entity.SaveChanges();

                        // Refresh the entity so the database-generated UserId is visible.
                        entity.Entry(tblur).GetDatabaseValues();

                        if (tblur.UserId != 0)
                            msg = "Registration Successful...";
                        else
                            msg = "Failure...!";
                    }
                }
                catch (Exception ex)
                {
                    throw new ArgumentException(ex.Message);
                }
            }

            return msg;
        }

        // Returns the login details for an accepted user, or null if the
        // credentials do not match. Note that passwords are compared in
        // plain text, exactly as stored by the registration method above.
        public LoginModel GetUserLogin(string UserName, string Password)
        {
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var result = (from user in entity.tbl_UserRegistration
                                  where user.UserName == UserName &&
                                        user.Password == Password &&
                                        user.Status == "Accept"
                                  select new LoginModel
                                  {
                                      UserName = user.UserName,
                                      Password = user.Password,
                                      Role = user.Role,
                                      Image = user.FileName,
                                      UserId = user.UserId
                                  }).FirstOrDefault();
                    return result;
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
        }

        // Returns all users whose registration is still pending. Note: the
        // Role parameter is not used in the query in the original listing.
        public List<UserRegistration> GetUsersByRole(string Role)
        {
            List<UserRegistration> lur = new List<UserRegistration>();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    lur = (from user in entity.tbl_UserRegistration
                           where user.Status == "Pending"
                           select new UserRegistration
                           {
                               UserId = user.UserId,
                               UserName = user.UserName,
                               Password = user.Password,
                               FirstName = user.FirstName,
                               MiddleName = user.MiddleName,
                               LastName = user.LastName,
                               DOB = user.DOB,
                               //DOR = user.DOR,
                               Gender = user.Gender,
                               Email = user.Email,
                               PhoneNo = user.PhoneNo,
                               Address = user.Address,
                               //Image = user.Image,
                               Image = user.FileName,
                               Role = user.Role,
                           }).ToList();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return lur;
        }

        // Sets a user's registration status to "Accept" or "Reject".
        public bool UserAcceptOrReject(int UserId, string Status)
        {
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = (from user in entity.tbl_UserRegistration
                               where user.UserId == UserId
                               select user).FirstOrDefault();
                    if (obj != null)
                    {
                        obj.Status = Status;
                        entity.SaveChanges();
                        return true;
                    }
                    return false;   // not visible in the report's listing; added so all paths return
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
        }

        public UserProfileModel GetUserDetails(int UserId)
        {
            var ur = new UserProfileModel();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    ur = (from user in entity.tbl_UserRegistration
                          where user.UserId == UserId &&
                                user.Status == "Accept"
                          select new UserProfileModel
                          {
                              UserId = user.UserId,
                              UserName = user.UserName,
                              Password = user.Password,
                              FirstName = user.FirstName,
                              MiddleName = user.MiddleName,
                              LastName = user.LastName,
                              DOB = user.DOB,
                              //DOR = user.DOR,
                              Gender = user.Gender,
                              Email = user.Email,
                              PhoneNo = user.PhoneNo,
                              Address = user.Address,
                              //Image = user.Image,
                              Image = user.FileName,
                              Role = user.Role,
                          }).FirstOrDefault();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return ur;
        }

        // Updates the editable profile fields for an existing user.
        public string UpdateProfile(UserUpdateModel ur)
        {
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = (from user in entity.tbl_UserRegistration
                               where user.UserId == ur.UserId
                               select user).FirstOrDefault();
                    if (obj != null)
                    {
                        obj.FirstName = ur.FirstName;
                        obj.MiddleName = ur.MiddleName;
                        obj.LastName = ur.LastName;
                        obj.PhoneNo = ur.PhoneNo;
                        obj.Address = ur.Address;
                        //obj.Image = ur.Image;
                        obj.FileName = ur.Image;
                        entity.SaveChanges();
                        return "Your profile was updated successfully.";
                    }
                    return "Something went wrong, try again.";
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
        }

        // Changes a user's password after checking the current one. Note: the
        // ConfirmPwd parameter is never compared with NewPwd in the original
        // listing; that check is presumably performed client-side.
        public string ChangePassword(int UserId, string OldPwd, string NewPwd, string ConfirmPwd)
        {
            string msg = "";
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = entity.tbl_UserRegistration.SingleOrDefault(m => m.UserId == UserId);
                    if (obj != null)
                    {
                        if (OldPwd == NewPwd)
                        {
                            msg = "New Password must be different from the Current Password...!";
                        }
                        else if (OldPwd != obj.Password)
                        {
                            msg = "Current Password is not valid...!";
                        }
                        else
                        {
                            obj.Password = NewPwd;
                            entity.SaveChanges();
                            msg = "Password Changed Successfully";
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return msg;
        }

        // Returns the stored password when the user name and e-mail match.
        public string RecoverPassWord(string UserName, string Email)
        {
            string msg = string.Empty;
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = (from user in entity.tbl_UserRegistration
                               where user.UserName == UserName && user.Email == Email
                               select user).FirstOrDefault();
                    if (obj != null)
                        msg = obj.Password;   // the password is stored and returned in plain text
                    else
                        msg = "UserName/Email are not valid...!";
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return msg;
        }

        public UserUpdateModel GetUserDetailsForEdit(int UserId)
        {
            var ur = new UserUpdateModel();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    ur = (from user in entity.tbl_UserRegistration
                          where user.UserId == UserId &&
                                user.Status == "Accept"
                          select new UserUpdateModel
                          {
                              UserId = user.UserId,
                              FirstName = user.FirstName,
                              MiddleName = user.MiddleName,
                              LastName = user.LastName,
                              Email = user.Email,
                              PhoneNo = user.PhoneNo,
                              Address = user.Address,
                              Image = user.FileName,
                          }).FirstOrDefault();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return ur;
        }

        // Lists uploaded files for a user. The join against FileRequests is
        // commented out in the original listing, so every file is returned.
        public List<RequestFileModel> GetUserRequestFiles(int UserId)
        {
            var ur = new List<RequestFileModel>();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    ur = (from f in entity.tbl_UploadFile
                          orderby f.FileId
                          //join g in entity.FileRequests on f.FileId equals g.FileId
                          //where g.UserId == UserId && g.Status != "Accept" && g.Status != "Requested"
                          select new RequestFileModel
                          {
                              UserId = UserId,
                              FileName = f.FileName,
                              FileDescription = f.Desc,
                              FileId = f.FileId,
                              //Status = g.Status
                          }).ToList();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return ur;
        }

        // Returns a file's content only when the supplied secret key matches.
        public RequestFileModel GetDownLoadFile(int FileId, string key)
        {
            var item = new RequestFileModel();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    item = (from f in entity.tbl_UploadFile
                            where f.FileId == FileId && f.SecretKey == key
                            select new RequestFileModel
                            {
                                FileId = f.FileId,
                                FileName = f.FileName,
                                FileDescription = f.Desc,
                                File = f.FileContent,
                                DocumentName = f.DocumentName
                            }).SingleOrDefault();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return item;
        }

        public RequestFileModel GetDownLoadFileContent(int FileId)
        {
            var item = new RequestFileModel();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    item = (from f in entity.tbl_UploadFile
                            where f.FileId == FileId
                            select new RequestFileModel
                            {
                                FileId = f.FileId,
                                FileName = f.FileName,
                                FileDescription = f.Desc,
                                File = f.FileContent,
                                HdnSecretKey = f.SecretKey,
                                DocumentName = f.DocumentName
                            }).SingleOrDefault();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return item;
        }

        // Records/checks a user's request for access to a file. The remainder
        // of this method is not shown in the original report's listing.
        public string GetRequesttoaFile(int UserId, int FieldId)
        {
            var msg = "";
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = (from f in entity.FileRequests
                               where f.FileId == FieldId && f.UserId == UserId
                               select f).FirstOrDefault();
                    // ... (the rest of the method body is elided in the original report)
                }
            }
            catch (Exception ex)
            {
                // assumed to follow the error-handling pattern of the other methods
                throw new ArgumentException(ex.Message);
            }
            return msg;
        }

        //Admin Functionality Rejected Files
        public List<RequestFileModel> GetAdminRejectedFiles()
        {
            var ur = new List<RequestFileModel>();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    ur = (from f in entity.tbl_UploadFile
                          join g in entity.FileRequests on f.FileId equals g.FileId
                          where g.Status == "Rejected"
                          select new RequestFileModel
                          {
                              UserId = g.UserId,
                              RequestId = g.RequestId,
                              FileName = f.FileName,
                              FileDescription = f.Desc,
                              FileId = f.FileId,
                              Status = g.Status
                          }).ToList();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return ur;
        }

        //Admin Functionality All Files
        public List<RequestFileModel> GetAllFiles()
        {
            var ur = new List<RequestFileModel>();
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    ur = (from f in entity.tbl_UploadFile
                          join g in entity.FileRequests on f.FileId equals g.FileId
                          //where g.Status == "Rejected"
                          select new RequestFileModel
                          {
                              UserId = g.UserId,
                              RequestId = g.RequestId,
                              FileName = f.FileName,
                              FileDescription = f.Desc,
                              FileId = f.FileId,
                              Status = g.Status
                          }).ToList();
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message);
            }
            return ur;
        }

        //Admin Functionality AdminAcceptOrReject file
        public bool AdminAcceptOrReject(int RequestId, string Status)
        {
            try
            {
                using (bpaEntities entity = new bpaEntities())
                {
                    var obj = (from f in entity.FileRequests
                               where f.RequestId == RequestId
                               select f).FirstOrDefault();
                    if (obj != null)
                    {
                        obj.Status = Status;
                        entity.SaveChanges();
                        return true;
                    }
                    return false;
                }
            }
            catch (Exception ex)
            {
                // the catch block was lost to a page break in the report; this
                // follows the pattern of the other methods
                throw new ArgumentException(ex.Message);
            }
        }

        // NOTE: this method's signature was lost to a page break in the
        // original report. The name and parameter type below are a plausible
        // reconstruction (a model carrying Read/Write/Download flags plus
        // FileId and UserId), not the report's own text.
        public string GiveFilePermissions(RequestFileModel model)
        {
            string msg = string.Empty;

            if ((model.Read || model.Write || model.Download) && model.FileId != null &&
                model.UserId != null)
            {
                try
                {
                    var permissionsRead = model.Read == true ? "read" : "";
                    var permissionsWrite = model.Write == true ? "write" : "";
                    var permissionsDownload = model.Download == true ? "download" : "";
                    var userid = Convert.ToInt32(model.UserId);
                    var fileId = Convert.ToInt32(model.FileId);

                    using (bpaEntities entity = new bpaEntities())
                    {
                        var permissionsItem = entity.tbl_Permissions.SingleOrDefault(
                            m => m.fileId == fileId && m.userId == userid);

                        if (permissionsItem != null)
                        {
                            try
                            {
                                if (permissionsItem != null)   // redundant re-check, kept from the original
                                {
                                    // Update the existing permission record.
                                    permissionsItem.fileId = fileId;
                                    permissionsItem.userId = userid;
                                    permissionsItem.PermissionRead = permissionsRead;
                                    permissionsItem.PermissionWritre = permissionsWrite;   // (sic: property name)
                                    permissionsItem.PermissionDownload = permissionsDownload;
                                    entity.SaveChanges();
                                    return "File permissions updated successfully.";
                                }
                                return "Something went wrong, try again.";
                            }
                            catch (Exception ex)
                            {
                                throw new ArgumentException(ex.Message);
                            }
                        }
                        else
                        {
                            #region Insert permissions
                            var tblur = new tbl_Permissions();
                            tblur.fileId = fileId;
                            tblur.userId = userid;
                            tblur.PermissionRead = model.Read == true ? "read" : "";
                            tblur.PermissionWritre = model.Write == true ? "write" : "";
                            tblur.PermissionDownload = model.Download == true ? "download" : "";
                            entity.tbl_Permissions.Add(tblur);
                            entity.SaveChanges();
                            entity.Entry(tblur).GetDatabaseValues();

                            // Mark the corresponding file request as accepted
                            // (assumes a matching request row exists).
                            entity.FileRequests.SingleOrDefault(m => m.FileId == model.FileId &&
                                m.UserId == model.UserId).Status = "Accepted";
                            entity.SaveChanges();

                            if (tblur.permissionId != 0)
                                return "File permissions are given successfully to user.";
                            else
                                return "Something went wrong, please try again.";
                            #endregion
                        }
                    }
                }
                catch (Exception ex)
                {
                    throw new ArgumentException(ex.Message);
                }
            }
            else
            {
                return "Something went wrong, please try again.";
            }
        }

        //public RequestFileModel UserFilePermissions(int fileId, int userId)
        //{
        //    var model = new RequestFileModel();
        //    using (bpaEntities entity = new bpaEntities())
        //    {
        //        var obj = (from f in entity.tbl_Permissions
        //                   where f.fileId == fileId && f.userId == userId
        //                   select f).FirstOrDefault();
        //        if (obj != null)
        //        {
        //            model.pRead = obj.PermissionRead;
        //            model.pWrite = obj.PermissionWritre;
        //            model.pDownload = obj.PermissionDownload;
        //            model.RequestId = obj.permissionId;
        //            model.FileId = obj.fileId;
        //            model.UserId = obj.userId;
        //        }
        //        return model;
        //    }
        //}
    }
}
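
For completeness, the following is a hypothetical sketch (not part of the report's listing) of how a client application might consume this WCF service through a ChannelFactory; the binding and endpoint address shown here are assumptions, and in practice they would come from the client's configuration file.

using System;
using System.ServiceModel;

class ServiceClientDemo
{
    static void Main()
    {
        // Binding and address are illustrative placeholders.
        var factory = new ChannelFactory<IService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/Service.svc"));

        IService proxy = factory.CreateChannel();
        var login = proxy.GetUserLogin("alice", "secret");
        Console.WriteLine(login != null ? "Role: " + login.Role : "Login failed");

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}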
9. SCREEN SHOTS

[Screenshots of the application's pages appear here in the original report.]
10.CONCLUSION

Motivated by application needs, this paper proposes the novel security concept
of ID-PUIC in the public cloud. The paper formalizes ID-PUIC's system model and
security model. Then, the first concrete ID-PUIC protocol is designed by using the
bilinear pairings technique. The concrete ID-PUIC protocol is shown to be provably
secure and efficient through formal security proof and efficiency analysis. On the other
hand, the proposed ID-PUIC protocol can also realize private remote data integrity
checking, delegated remote data integrity checking and public remote data integrity
checking based on the original client's authorization.
