1. INTRODUCTION
We present the design and evaluation of Platypus, a source routing system that, like many
source-routing protocols before it, can be used to implement efficient overlay forwarding, select
among multiple ingress/egress routers, provide virtual AS multi-homing, and address many other
common routing deficiencies. The key advantage of Platypus is its ability to ensure policy
compliance during packet forwarding. Platypus enables packets to be stamped at the source as being
policy compliant, reducing policy enforcement to stamp verification. Hence, Platypus allows for
management of routing policy independent of route export and path selection.
Our approach to reducing the complexity of wide-area routing is to separate the issues of
connectivity discovery and path selection. Removing policy constraints from route discovery presents an opportunity for end
users and edge networks. The key challenge becomes determining whether a particular source route is
appropriate. ASes have no incentive to forward arbitrary traffic; currently they only wish to forward
traffic for their customers or peers. We argue, however, that this is simply a poor approximation of
the real goal: ASes want to forward traffic only if they are compensated for it. Henceforth, we will
consider traffic policy compliant at a particular point in the network if the AS can identify the
appropriate party to bill, and that party has been authorized by the AS to use the portion of the
network in question.
It is well known that multiple paths often exist between any two points in today’s Internet.
The central tenet of any source routing scheme is that no single route will be best for all parties.
Instead, sources should be empowered to select their own routes according to whatever criteria they
determine. Protocols for efficient wide-area route discovery and selection, however, are beyond the
scope of this paper.
2. SYSTEM ANALYSIS
Existing system:
Network operators and academic researchers alike recognize that today’s wide-area Internet
routing does not realize the full potential of the existing network infrastructure in terms of
performance, reliability or flexibility. While a number of techniques for intelligent, source-controlled
path selection have been proposed to improve end-to-end performance, reliability, and flexibility,
they have proven problematic to deploy due to concerns about security and network instability. In
particular, today’s primary wide area routing protocol, the Border Gateway Protocol (BGP), is
extraordinarily difficult to describe, analyze, or manage. Autonomous systems (ASes) express their
local routing policy during BGP route advertisement by affecting the routes that are chosen and
exported to neighbors.
Disadvantages:
Configuring BGP is an overly complex task, one for which the outcome is rarely certain.
BGP’s complexity affects Internet Service Providers (ISPs) and end users alike.
ISPs struggle to understand and configure their networks while end users are left to wonder
why end-to-end connectivity is so poor.
Proposed System:
Our approach to reducing this complexity is to separate the issues of connectivity discovery
and path selection. We present the design and evaluation of Platypus, a source routing system that,
like many source-routing protocols before it, can be used to implement efficient overlay forwarding,
select among multiple ingress/egress routers, provide virtual AS multi-homing, and address many
other common routing deficiencies.
Advantages:
Platypus builds on this basic infrastructure, allowing entities to select paths other than the
default.
It increases end-to-end performance.
Platypus allows for management of routing policy independent of route export and path
selection.
3. PROBLEM FORMULATION
4. SOFTWARE DESCRIPTION
Java is a great programming language for the development of enterprise-grade applications.
The language evolved from a language named Oak, which was developed in the early
nineties at Sun Microsystems as a platform-independent language aimed at allowing entertainment
appliances such as video game consoles and VCRs to communicate. Oak was first slated to appear in
television set-top boxes designed to provide video-on-demand services. When Oak proved
unsuccessful, Sun renamed the language Java in 1995 and modified it to take advantage of the
burgeoning World Wide Web.
Java is an object-oriented language that is syntactically similar to C++, but simplified to
eliminate language features that cause common programming errors. Java source code files are
compiled into a format called bytecode, which can then be executed by a Java interpreter.
To start programming in Java, you use a text editor to create and edit the source code. The
Java compiler translates the source code into bytecode, which can be run on any platform that has a
Java interpreter to convert the bytecode into instructions suitable for that operating system.
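The compile-and-run cycle described above can be sketched with a minimal class; the class name and the greeting text are illustrative, not taken from the report:

```java
// Save as HelloPlatypus.java, compile with `javac HelloPlatypus.java`
// (producing HelloPlatypus.class bytecode), then run with `java HelloPlatypus`.
public class HelloPlatypus {
    static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        // The java launcher starts a JVM instance, which interprets
        // (or just-in-time compiles) the bytecode in HelloPlatypus.class.
        System.out.println(greeting());
    }
}
```

The same .class file runs unchanged on any operating system with a JVM, which is the portability property discussed above.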
Introduction
Java is a well-known technology that allows software to be designed and written once for a
"virtual machine" and then run on many different computers, spanning operating systems such as
Windows PCs, Macintoshes, and Unix machines. On the web, Java is popular on servers and is used
by many of the largest interactive websites. Java is used to create standalone applications that run on
a single computer or across a distributed network. It can also be used to create small application
programs, called applets, that are embedded in Web pages and make it easy for users to interact with
those pages.
Java is a high-level programming language and a powerful software platform. A full implementation
of the Java platform gives you the following features:
JDK Tools: The JDK tools support compiling, interpreting, running, monitoring, debugging,
and documenting your applications. The main tools are the javac compiler, the java
launcher, and the javadoc documentation tool.
Application Programming Interface (API): The API provides the core functionality of the
Java programming language. It offers a wide collection of useful classes, ready for use in
your own applications, providing everything from basic objects to interfaces for networking
and security, XML generation, database access, and much more.
Deployment Technologies: The JDK software provides two deployment technologies, Java
Web Start and Java Plug-In, for deploying your applications to end users.
Graphical User Interface Toolkits: The Swing and Java 2D toolkits make it possible to
create Graphical User Interfaces (GUIs).
Integrated Libraries: Integrated libraries such as the Java IDL API, the JDBC API, the Java
Naming and Directory Interface (JNDI) API, Java RMI, and Java Remote Method Invocation
over Internet Inter-ORB Protocol (Java RMI-IIOP) enable database access and manipulation
of remote objects.
Java Platform
The Java Virtual Machine is the base of the Java platform and is ported onto various hardware-
based platforms.
The API is a vast collection of software components that provide many useful capabilities to an
application. It is grouped into logical collections of related classes and interfaces; these collections
are known as packages.
The API and Java Virtual Machine insulate the program from hardware.
Because Java works in a platform-independent environment, the Java platform can be somewhat
slower than native code. However, advances in compilers and virtual machines have brought
performance close to that of native code without threatening portability or security.
All source code is written in plain text files (for example, in a simple editor such as Notepad) and
saved with the .java extension.
The source files are compiled into .class files by the javac compiler. A .class file contains bytecodes,
the machine language of the Java Virtual Machine (JVM). The java launcher tool then runs your
application with an instance of the JVM.
Because the JVM is available on many different operating systems, the same .class files (bytecode)
are capable of running on all of them. Some virtual machines, such as the Java HotSpot virtual
machine, perform additional steps at runtime to boost application performance, such as finding
frequently used sections of code and recompiling them to native code.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de facto standard for
Windows programs to interface with database systems, programmers had to use proprietary languages
for each database they wanted to connect to. Now, ODBC has made the choice of the database system
almost irrelevant from a coding perspective, which is as it should be. Application developers have
much more important things to worry about than the syntax that is needed to port their program from
one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular database
that is associated with a data source that an ODBC application program is written to use. Think of an
ODBC data source as a door with a name on it. Each door will lead you to a particular database. For
example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts
Payable data source could refer to an Access database. The physical database referred to by a data
source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they are
installed when you setup a separate database application, such as SQL Server Client or Visual Basic
4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is
also possible to administer your ODBC data sources through a stand-alone program called
ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a
separate list of ODBC data sources.
From a programming perspective, the beauty of ODBC is that the application can be written
to use the same set of function calls to interface with any data source, regardless of the database
vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server.
We only mention these two as an example. There are ODBC drivers available for several dozen
popular database systems. Even Excel spreadsheets and plain text files can be turned into data
sources. The operating system uses the Registry information written by ODBC Administrator to
determine which low-level ODBC drivers are needed to talk to the data source (such as the interface
to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application
program. In a client/server environment, the ODBC API even handles many of the network issues for
the application programmer.
The advantages of this scheme are so numerous that you are probably thinking there must be
some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the
native database interface. Many detractors have charged that ODBC is too slow.
Microsoft has always claimed that the critical factor in performance is the quality of the driver
software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has
improved a great deal recently. And anyway, the criticism about performance is somewhat analogous
to those who said that compilers would never match the speed of pure assembly language. Maybe not,
but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you
finish sooner. Meanwhile, computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java, Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access
mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is
achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor
wishes to have JDBC support, he or she must provide the driver for each platform that the database
and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing
JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a
completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8,
1996. Because of user input, the final JDBC v1.0 specification was released soon after.
The remainder of this section will cover enough information about JDBC for you to know
what it is about and how to use it effectively. This is by no means a complete overview of JDBC.
That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind, and JDBC is no exception: its
many goals drove the development of the API. These goals, in conjunction with early reviewer
feedback, have finalized the JDBC class library into a solid framework for building database
applications in Java.
The goals that were set for JDBC are important. They will give you some insight as to why
certain classes and functionalities behave the way they do. The eight design goals for JDBC are as
follows:
1. SQL Level API
The designers felt that their main goal was to define a SQL interface for Java. Although not the
lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to
be created. Conversely, it is at a high enough level for application programmers to use it confidently.
Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of
JDBC’s complexities from the end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort to
support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the
underlying database driver. This allows the connectivity module to handle non-standard functionality
in a manner that is suitable for its users.
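As a sketch of how this pass-through works in practice, the fragment below builds a driver-specific connection URL and hands a query, unchanged, to a PreparedStatement. The Peer table, host name, and database name are hypothetical examples, not taken from the report:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PeerDao {
    // Hypothetical SQL Server connection settings; the report's actual
    // database name and credentials are not given.
    static String buildUrl(String host, int port, String db) {
        return "jdbc:sqlserver://" + host + ":" + port + ";databaseName=" + db;
    }

    // JDBC's "SQL conformance" goal: the SQL text below is passed through
    // to the underlying driver unchanged; only parameter binding is generic.
    static boolean peerExists(Connection conn, String name) throws SQLException {
        String sql = "SELECT 1 FROM Peer WHERE name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```

Swapping database vendors would change only the URL (and the driver on the classpath), not the query code, which is the portability benefit the JDBC goals describe.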
Networking
Networking is the practice of linking two or more computing devices together for the
purpose of sharing data. Networks are built with a mix of computer hardware and computer software.
Area Networks
Networks can be categorized in several different ways. One approach defines the type of
network according to the geographic area it spans. Local area networks (LANs), for example,
typically reach across a single home, whereas wide area networks (WANs), reach across cities, states,
or even across the world. The Internet is the world's largest public WAN.
Network Design
Computer networks also differ in their design. The two types of high-level network design
are called client-server and peer-to-peer. Client-server networks feature centralized server computers
that store email, Web pages, files and or applications. On a peer-to-peer network, conversely, all
computers tend to support the same functions. Client-server networks are much more common in
business and peer-to-peer networks much more common in homes.
A network topology represents its layout or structure from the point of view of data flow. In
so-called bus networks, for example, all of the computers share and communicate across one common
conduit, whereas in a star network, all data flows through one centralized device. Common types of
network topologies include bus, star, ring and mesh.
Network Protocols
In networking, the communication language used by computer devices is called the protocol.
Yet another way to classify computer networks is by the set of protocols they support. Networks often
implement multiple protocols to support specific applications. Popular protocols include TCP/IP, the
most common protocol found on the Internet and in home networks.
Many of the same network protocols, like TCP/IP, work in both wired and wireless networks.
Networks with Ethernet cables predominated in businesses, schools, and homes for several decades.
Recently, however, wireless networking alternatives have emerged as the premier technology for
building new computer networks.
Networking methods
Networking is a complex part of computing that makes up most of the IT Industry. Without
networks, almost all communication in the world would cease to happen. It is because of networking
that telephones, televisions, the internet, etc. work.
One way to categorize computer networks is by their geographic scope, although many real-world
networks interconnect local area networks (LANs) via wide area networks (WANs) and wireless
wide area networks (WWANs). These three broad types are described below.
A local area network is a network that spans a relatively small space and provides services to
a small number of people. A LAN may include single-service servers, where each server performs
one task (such as a file server or a print server), as well as servers that not only act as file and print
servers but also perform calculations and use them to provide information to clients (Web/intranet
servers). Computers may be connected in many different ways, including Ethernet cables, wireless
networks, or other media such as power lines or phone lines.
The ITU-T G.hn standard is an example of a technology that provides high-speed (up to 1 Gbit/s)
local area networking over existing home wiring (power lines, phone lines and coaxial cables).
A wide area network is a network where a wide variety of resources are deployed across a
large domestic area or internationally. An example of this is a multinational business that uses a
WAN to interconnect its offices in different countries. The largest and best example of a WAN is the
Internet, a network composed of many smaller networks that is considered the largest network in the
world [7]. The PSTN (Public Switched Telephone Network) is also an extremely
large network that is converging to use Internet technologies, although not necessarily through the
public Internet.
A Wide Area Network involves communication through the use of a wide range of different
technologies. These technologies include Point-to-Point WANs such as Point-to-Point Protocol (PPP)
and High-Level Data Link Control (HDLC), Frame Relay, ATM (Asynchronous Transfer Mode) and
Sonet (Synchronous Optical Network). The differences between WAN technologies lie in the
switching capabilities they provide and the speed at which bits of information (data) are sent and
received.
A metropolitan area network is a network that is too large for even the largest of LANs but is
not on the scale of a WAN. It integrates two or more LANs over a specific geographical area (usually
a city) to extend the network and the flow of communication. The LANs in question are usually
connected via "backbone" lines.
For more information on WANs, see Frame Relay, ATM and Sonet.
A wireless network is basically the same as a LAN or a WAN except that there are no wires
between hosts and servers; the data is transferred over sets of radio transceivers. These networks are
beneficial when it is too costly or inconvenient to run the necessary cables. For more information, see
Wireless LAN and Wireless wide area network. The media access protocols for LANs come from the
IEEE.
The most common IEEE 802.11 WLANs cover, depending on antennas, ranges from
hundreds of meters to a few kilometers. For larger areas, communications satellites of various types,
cellular radio, and wireless local loop (IEEE 802.16) each have advantages and disadvantages.
Depending on the type of mobility needed, the relevant standards may come from the IETF or the
ITU.
Network topology
The network topology defines the way in which computers, printers, and other devices are
connected, physically and logically. A network topology describes the layout of the wire and devices
as well as the paths used by data transmissions.
A topology can be described as physical or logical. Common basic topologies include:
• Bus
• Star
• Tree (hierarchical)
• Linear
• Ring
• Mesh
o partially connected
o fully connected (sometimes known as fully redundant)
The network topologies mentioned above are only a general representation of the kinds of topologies
used in computer network and are considered basic topologies.
Server Sockets
There are two ends to each connection: the client, the host that initiates the connection, and
the server, the host that responds to the connection. Clients and servers are connected by sockets.
On the server side instead of connecting to a remote host, a program waits for other hosts to
connect to it. A server socket binds to a particular port on the local machine. Once it has successfully
bound to a port, it listens for incoming connection attempts. When it detects a connection attempt, it
accepts the connection. This creates a socket between the client and the server over which the client
and the server communicate.
Multiple clients can connect to the same port on the server at the same time. Incoming data is
distinguished by the port to which it is addressed and the client host and port from which it came. The
server can tell for which service (like http or ftp) the data is intended by inspecting the port. It can tell
which open socket on that service the data is intended by looking at the client address and port stored
with the data.
No more than one server socket can listen on a particular port at a time. Since a server may
need to handle many connections at once, server programs tend to be heavily multi-threaded.
Generally, the server socket listening on the port only accepts connections; it passes the actual
processing of each connection to a separate thread.
Incoming connections are stored in a queue until the server can accept them. (On most
systems the default queue length is between 5 and 50. Once the queue fills up further incoming
connections are refused until space in the queue opens up.)
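The bind/listen/accept cycle and the thread hand-off described above can be sketched in Java. The class name and the single-connection echo behavior are illustrative assumptions, not part of the project's actual server:

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoSketch {
    // Bind a server socket to an ephemeral port (port 0), accept one
    // connection in a background thread, and echo a single line back.
    // Returns the bound port so a client knows where to connect.
    public static int startOnce() {
        try {
            ServerSocket server = new ServerSocket(0); // bind; 0 = any free port
            Thread worker = new Thread(() -> {
                try (ServerSocket s = server;
                     Socket client = s.accept(); // blocks until a client connects
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the line back
                } catch (IOException ignored) {
                }
            });
            worker.setDaemon(true);
            worker.start(); // processing happens off the accepting thread
            return server.getLocalPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Client side: connect to the server, send one line, return the echo.
    public static String sendLine(int port, String message) {
        try (Socket socket = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sendLine(startOnce(), "hello"));
    }
}
```

A production server would loop on accept() and spawn (or pool) a thread per connection; this sketch handles a single connection to keep the bind/listen/accept sequence visible.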
5. SYSTEM DESIGN
Design Overview
Design involves identification of classes, their relationships, and their collaborations. In
Objectory, classes are divided into entity classes, interface classes, and control classes. The Computer
Aided Software Engineering (CASE) tools that are available commercially do not provide any
assistance in this transition. CASE tools take advantage of meta-modeling, which is helpful only after
the construction of the class diagram. In the Fusion method, some object-oriented approaches like
Object Modeling Technique (OMT), Classes, Responsibilities, Collaborators (CRC), etc., are used.
Objectory uses the term "agents" to represent some of the hardware and software systems. In the
Fusion method there is no requirements phase, in which a user supplies the initial requirements
document. Any software project is worked out by both the analyst and the designer: the analyst
creates the use case diagram, and the designer creates the class diagram, but only after the analyst
creates the use case diagram. Once the design is over, it is essential to decide which software is
suitable for the application.
[Use case diagram: the Sender, Receiver, and DB Server participate in the use cases Network
Creation, Peer Login, Find Traffic, and Message Transmission; the flow selects the best path, finds
an alternative path, checks whether the current peer is the destination, and performs decryption at the
destination.]
Class Diagram
Class diagrams are used for a wide variety of purposes, including both conceptual/domain
modeling and detailed design modeling. Although I prefer to create class diagrams on whiteboards,
because simple tools are more inclusive, most of the diagrams shown in this article are drawn using a
software-based drawing tool so you may see the exact notation.
[Class diagram: classes PathGeneration, DataBase_Connection, ConnectingPeer, Peer, and
Login_Page. Attributes include Source, Destination, pathWeight, Weight, Connection, Statement,
and ResultSet; operations include getPath(), FindPeer(), FindWeight(), DBconnection(),
UpdatePeerDetails(), getPeerName(), Update_PeerConnection(), getPeerDetails(), getAlterPath(),
and PortDetails().]
Sequence Diagram
A sequence diagram shows the interactions between objects and the messages exchanged
between them, in the order in which they occur. This allows the specification of simple runtime
scenarios in a graphical manner.
[Sequence diagram: the peer checks whether it is the destination.]
6. MODULE DESCRIPTION
Modules:
Network Creation.
Peer Login
Find Traffic
Message Transmission
Network Creation:
This module is used to construct the topology. The user enters the peer name, IP address,
and port number. If the node name and IP address are already available in the database, the message
box "Enter the correct peer details" is displayed; otherwise, the message box "Successfully updated
the peer details" is displayed. After updating all peers, the user clicks the Complete button to display
the Connection frame, which is used to connect the peers and enter the weight of each connection.
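The duplicate check at the heart of this module can be sketched as follows; an in-memory map stands in for the project's SQL database, and the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class PeerRegistry {
    // name -> "ip:port"; stands in for the Peer table in the database.
    private final Map<String, String> peers = new HashMap<>();

    // Returns false (and leaves the registry unchanged) when the peer
    // name or address is already registered, mirroring the
    // "Enter the correct peer details" check described above.
    public boolean register(String name, String ip, int port) {
        String addr = ip + ":" + port;
        if (peers.containsKey(name) || peers.containsValue(addr)) {
            return false; // duplicate name or duplicate IP:port
        }
        peers.put(name, addr);
        return true; // "Successfully updated the peer details"
    }
}
```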
Peer Login:
In this module each peer logs in. The user enters the peer name, IP address, and port
number of the peer. The server checks whether these details are available in the database; if they are,
the message box "Peer login successful" is displayed and the peer enters the listening state.
Otherwise, the message box "These details are not in the database" is displayed.
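The login check can be sketched similarly; an in-memory map again stands in for the database table, the names are hypothetical, and the returned strings mirror the message boxes described above:

```java
import java.util.HashMap;
import java.util.Map;

public class PeerLoginCheck {
    // Stored peer details (name -> "ip:port"); stands in for the database.
    private final Map<String, String> registered = new HashMap<>();

    public void addPeer(String name, String ip, int port) {
        registered.put(name, ip + ":" + port);
    }

    // The entered details must exactly match a stored record before the
    // peer may proceed to the listening state.
    public String login(String name, String ip, int port) {
        String stored = registered.get(name);
        if (stored != null && stored.equals(ip + ":" + port)) {
            return "Peer login successful";
        }
        return "These details are not in the database";
    }
}
```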
Find Traffic:
This module is used to check the traffic of each alternative path. The sender selects an
alternative path in the list box and clicks the Find Traffic button to calculate the traffic from source
to destination. Clicking the Path button displays the traffic path and the best path.
Message Transmission:
The message transmission module is used to transmit the message via the selected waypoint.
When the path is selected, the server generates a temporal key. When the source sends a message,
the Platypus framework is used and the message is encrypted. Each subsequent peer checks whether
it is the destination. If it is, the user first enters the temporal key and then the master key; the
message is decrypted and the original message is displayed. Otherwise, the peer enters only the
temporal key, marks the IP address stamp, and forwards the message to the next peer.
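The two-layer protection implied by the temporal and master keys can be sketched as two nested layers of symmetric encryption: the destination strips the temporal layer first, then the master layer. The report does not specify its cipher, so AES and this key layering are assumptions:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MessageCrypto {
    // AES in its default (ECB) mode keeps the sketch short; a real
    // deployment should use an authenticated mode such as AES/GCM.
    private static byte[] aes(int mode, byte[] key16, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES");
            c.init(mode, new SecretKeySpec(key16, "AES"));
            return c.doFinal(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Sender: encrypt with the master key, then wrap with the temporal key.
    public static String encrypt(String plain, byte[] masterKey, byte[] temporalKey) {
        byte[] inner = aes(Cipher.ENCRYPT_MODE, masterKey,
                plain.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(
                aes(Cipher.ENCRYPT_MODE, temporalKey, inner));
    }

    // Destination: strip the temporal layer first, then the master layer,
    // mirroring the key-entry order described above.
    public static String decrypt(String cipherText, byte[] masterKey, byte[] temporalKey) {
        byte[] inner = aes(Cipher.DECRYPT_MODE, temporalKey,
                Base64.getDecoder().decode(cipherText));
        return new String(aes(Cipher.DECRYPT_MODE, masterKey, inner),
                StandardCharsets.UTF_8);
    }
}
```

An intermediate peer holding only the temporal key could verify and re-wrap the outer layer without ever seeing the plaintext, which matches the forwarding behavior the module describes.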
7. SYSTEM TESTING
PROCESS:
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies, and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests, and each
test type addresses a specific testing requirement.
TYPES OF TESTS
UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program input produces valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the application,
and it is done after the completion of an individual unit and before integration. This is structural
testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic
tests at the component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately to
the documented specifications and contains clearly defined inputs and expected results.
INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic outcome
of screens or fields. Integration tests demonstrate that although the components were individually
satisfactory, as shown by successful unit testing, the combination of components is correct and
consistent. Integration testing is specifically aimed at exposing the problems that arise from the
combination of components.
FUNCTIONAL TESTING
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation and user manuals.
SYSTEM TESTING
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
WHITE BOX TESTING
White box testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used to test areas that
cannot be reached from a black-box level.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot "see"
into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two
distinct phases.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects. The task
of the integration test is to check that components or software applications (e.g., components in a
software system or, one step up, software applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
8. SYSTEM IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned out into a
working system. Thus it can be considered to be the most critical stage in achieving a successful new
system and in giving the user, confidence that the new system will work and be effective.
The implementation stage involves careful planning, investigation of the existing system and
it’s constraints on implementation, designing of methods to achieve changeover and evaluation of
changeover methods.
Implementation is the process of converting a new system design into operation. It is the
phase that focuses on user training, site preparation and file conversion for installing a candidate
system. The important factor that should be considered here is that the conversion should not disrupt
the functioning of the organization.
The implementation can proceed through sockets in Java, but plain sockets alone would
provide only one-to-all communication; for proactive broadcasting we need dynamic linking. Java is
therefore well suited, given its platform independence and networking support. For maintaining
route information we use SQL Server as the database back end.
The main objective of secure and policy-compliant source routing is to avoid the default path.
The proposed system implements security and policy compliance, so messages are encrypted and
decrypted. Our approach to reducing this complexity is to separate the issues of connectivity
discovery and path selection.
9. CONCLUSION
We argue that capabilities are uniquely well-suited for use in wide-area Internet routing. The
Internet serves an extremely large number of users with an even larger number of motivations, all
attempting to simultaneously share widely distributed resources. Most importantly, there exists no
single arbiter (for example, a system administrator or user logged in at the console) who can make
informed access decisions. Moreover, we believe that much of the complexity of Internet routing
policy stems from inflexibility of existing routing protocols.
We aim to study how one might implement inter-AS traffic engineering policies through
capability pricing strategies. For example, an AS with multiple peering routers that wishes to
encourage load balancing may be able to do so through variable pricing of capabilities for the
corresponding Platypus waypoints. While properly modeling the self-interested behavior of external
entities may be difficult, we are hopeful that this challenge is simplified by the direct mapping
between Platypus waypoints and path selection.
Screenshots