
NEXT GENERATION INTERNET

Contents

Introduction

Working principles of Internet2

Stages of development of Internet2

Background:
This chapter deals with the reasons behind the creation of the next-generation Internet, its genesis, and the various stages involved in creating and deploying a working model of Internet2.
1.1 Introduction

What is Internet2?

Internet2 is a not-for-profit advanced networking consortium led by US universities in partnership with government and industry, working together to develop and deploy advanced network applications and technologies, thereby creating tomorrow's Internet. In 2009, Internet2 member rolls included over 200 higher education institutions, over 40 members from industry, over 30 research and education network and connector organizations, and over 50 affiliate members (the list is provided below). The global scale of the collaboration has led to physical interconnection with nearly 30 countries, creating a worldwide community of advanced Internet development.

Internet2 is built on the following five principles:

1. Address the advanced networking needs and interests of the research and education
community.
Since Internet2 is a university-led organization, it is aware that many scientific fields, such as genomics, genetics, astronomy, and particle physics, need networking capabilities significantly beyond those available from the commercial Internet. In addition, high-speed networks are an important prerequisite for the development of new teaching methods in academia. Internet2 aims to meet the scientific community's need for high-speed networks by encouraging their development and deployment.

2. Provide leadership in the evolution of global Internet.


Today's global Internet grew out of collaboration among research scientists; the World Wide Web itself originated with scientists at CERN. Similarly, one of the goals of Internet2 is to serve as a prototype and testing ground for the future development of the Internet. Just as the first-generation Internet served as a proof of concept for underlying technologies and protocols such as TCP/IP, the WWW, e-mail, and the Domain Name System, the Internet2 community functions as a large-scale model for testing today's advanced technologies. In this role, Internet2 serves as an advocate for principles such as an advanced end-to-end architecture, which is extremely important for the Internet's continued improvement and growth. End-to-end architecture is the consistent, uninterrupted ability of any Internet device to connect to another without intermediaries such as firewalls, caches, or network address translators being inserted in the communications path and interfering with device and application performance.

3. Implement a systems approach to a scalable and vertically integrated advanced


networking infrastructure.
Today's Internet faces end-to-end security and performance issues whose solution requires an integrated approach. Internet2, with its collaboration between academia and industry, is well suited to tackle these issues without compromising its foundational principles of continued growth and innovation.

4. Create strategic relationships among academia, industry and government.


The current Internet was created as a collaboration among academia, industry, and government. Internet2 builds upon this partnership by providing a framework within which individuals and organizations can work together on new networking technologies and advanced applications. The Internet, in addition to being a tool for research and education, has also become an indispensable tool for international commerce and communication. Internet2 fosters and improves the partnerships that address the complex interests crucial to the development of the Internet.

5. Catalyze activities that cannot be accomplished by individual organizations.


Internet2 serves as a keystone and framework for increasing the effectiveness of its members' collective efforts. It performs these functions by supporting working groups and initiatives, convening workshops and meetings, and offering a base of operations for projects that serve the entire Internet2 community. As an organization, Internet2 focuses on deployable, scalable, and sustainable technologies and solutions.
1.2 Internet development spiral

Figure 1 - Internet Development Spiral

There are four phases in the Internet development spiral. They are

1. Research and Development:


This phase is the initial stage of Internet development. It takes place in university, government, and industrial laboratories.
2. Partnerships:
In this phase, research efforts which have promising potential are converted into leading-
edge production uses for the education community.

3. Privatization:
Not all technologies that reach the partnership phase are successfully integrated into the mainstream. Only those that prove to be commercially viable are adopted and improved upon by private players.
4. Commercialization:
This is the final phase of the Internet development life cycle where the technologies are
finally integrated into the mainstream for everyday usage.

Conclusion

The result of this development has been a steady increase in investment in the higher education community over the past decade. This has led to the establishment of a robust, technologically superior infrastructure and to the development of several new high-speed technologies.
NEXT GENERATION INTERNET
Architecture
Contents

1. Background

2. Backbone Architecture

3. Network Management and Control Plane

4. Internet2 Subnet Models

5. Conclusion
Background:

This chapter deals with two of the backbone technologies available in the current scenario. It covers the infrastructure used and the organizations involved in setting up the backbone.
NEXT GENERATION INTERNET
Architecture
Contents: Backbone

1. Introduction

2. The Internet2 Network (Abilene Network)

3. vBNS (Very High Performance Backbone Network)

4. Conclusion
2.2.2 The Internet2 Network (Abilene Network)

Figure 2 Internet2 network

Abilene is a partnership between Indiana University, Juniper Networks, Cisco Systems, Nortel Networks and Qwest Communications. As the figure indicates, the Abilene network is a nationwide high-performance backbone network operated by the Internet2 consortium. In 2007, the name Abilene Network was retired as the network was transitioned to an upgraded infrastructure utilizing Level 3 Communications' fiber optic cable. The upgraded network is known as the Internet2 Network.

The backbone connects regional network aggregation points, called gigaPoPs, to support the work of Internet2 universities as they develop advanced Internet applications. GigaPoPs can be thought of as the neurons in the central nervous system of Internet2: they send information to each other in packet bursts, and the data is reassembled into its original form at the destination. Internet2 consists of dozens of these gigaPoPs connected to each other by fiber optics. A gigaPoP is a ‘one-stop shopping’ connection point that provides exceedingly cost-effective access to the major national commodity Internet Service Providers (ISPs) as well as to ‘aggregation pools’ and mechanisms that ensure alternate data paths, data paths with especially high quality, end-to-end performance for specific applications, and links to partners.

2007 Infrastructure Upgrade

Previously, the Abilene project utilized optical fiber networks provided by Qwest Communications. In March 2006, Internet2 announced that it was planning to move its infrastructure to Level 3 Communications. Unlike the previous architecture, Level3 manages and operates an Infinera-based DWDM system devoted to Internet2. Internet2 controls and uses the 40-lambda capacity to provide IP backbone connectivity as well as transport for a new SONET-based dynamic provisioning network based on the Ciena CoreDirector platform. The IP network continues to be based on the Juniper Networks T640 routing platform. When the transition to the new Level3-based infrastructure was completed in 2007, the name Abilene Network was changed to the Internet2 Network.
2.2.3 vBNS (very high performance Backbone Network Service)

Figure 3 - vBNS Backbone Network Map


The vBNS is the other major network backbone of Internet2 and is comparable to the Internet2 Network (Abilene) in aspects such as speed, reliability, and native multicasting. According to the vBNS website (vBNS, 2000),
"vBNS+ is a network that supports high-performance, high-bandwidth applications. Originating in
1995 as the vBNS, vBNS+ is the product of a five-year cooperative agreement between MCI
Worldcom and the National Science Foundation. Now Business can experience the same
unparalleled speed, performance and reliability enjoyed by the Supercomputer Centers, Research
Organizations and Academic Institutions that were part of the vBNS."

vBNS+ may be the first step toward getting Internet2 technology out to the general population. Anyone can purchase an OC-3 connection to vBNS+, although the price is still hefty ($21,600/month). It is still used by Internet2, but commercial businesses can now connect to it. Although it was probably commercialized solely to recover some of the expenses associated with it, it has had the unintentional effect of becoming a sort of intermediate developmental stage. It is not too difficult to imagine Abilene remaining the research network in years to come, leaving universities their own playground, and vBNS+ becoming the source of high-speed connections for the ordinary customer. While most people are probably not willing to pay such a large sum of money each month even for such a capable product as vBNS+, more people will connect to it as the price comes down. The vBNS+ network map shown in the figure above depicts its connections.
NEXT GENERATION INTERNET
Architecture

Contents: Network Management and Control

1. Background

2. Middleware

3. 4D Architecture

4. Maestro

5. Conclusion
Background:

Middleware is the software glue that holds the various technologies and applications together. This chapter deals with the concepts used in setting up the fundamental policies of a middleware framework.
2.3.2 Middleware

Middleware is the software that binds network applications together. It is the umbrella term for the layer of software between applications and the network. This software provides services such as identification, authentication, authorization, directories, and security. In today's Internet, applications are usually required to provide security features by themselves, which leads to competing and incompatible standards. Internet2 encourages standardization and interoperability in the middleware, making advanced network applications much easier to build and manage. The Internet2 Middleware Initiative (I2-MI) is working toward the use of core middleware services at Internet2 universities. Shibboleth, released in 2003, is one of these initiatives. Shibboleth is an open source package that supports sharing of web resources among institutions subject to access controls. It is working on establishing single sign-on technologies and other ways to authenticate users across the network. Version 1.2 of the software is now available for use. The following concepts are fundamental to Shibboleth's policy framework:

1. Federated Administration: The origin campus (home to the browser user) provides details about the attributes of the user to the target site. A certain level of trust exists between campuses, which allows them to identify the user and set a trust level for that particular user. This trust, or the combined set of security policies, is the framework for the federation. The campuses are spread widely over the network, so a single technical approach or a centralized solution is not feasible. Origin sites are therefore responsible for security for their own users and may choose their own ways of providing it.

2. Access control based on attributes: As mentioned above, access control decisions at the target are made using the attributes released by the user's origin site. These attributes may include identity, but not all sites require identity. Shibboleth has defined a standard set of attributes. The first set is based on the eduPerson object class, which includes attributes widely used in higher education.

3. Active management of privacy: The origin site and the browser user control the information released to the target. The usual default is membership in a particular community, that is, the user must be a member of the university as a student or faculty member. Individuals can then manage attribute release via a web-based user interface. This means that users are no longer dependent on the security policies of the target website.

4. Reliance on standards: Shibboleth uses OpenSAML for the message and assertion formats and protocol bindings, which are based on the Security Assertion Markup Language (SAML) developed by the OASIS Security Services Technical Committee.

5. Framework for multiple, scalable trust and policy sets (clubs): Shibboleth uses the concept of a club to specify a set of parties who have agreed to a common set of policies. A given site can belong to multiple clubs, giving it more flexibility in operation. This concept expands the trust framework beyond bilateral arrangements and provides flexibility when different situations require different policy sets.
Figure 4 - Example of Federated Enterprise

This shared trust environment is illustrated in the figure above. Federation is the basic framework for higher education in general and for Internet technology in particular. This federated approach to administration is now gaining widespread acceptance in academia as well as in the corporate sector. One example is the Liberty Alliance, a consortium of over 150 companies defining standards for secure and interoperable federations, of which Internet2 is also a member.

Federated administration ultimately benefits the end user, as it allows the user to use a uniform single sign-on method to access network applications provided by external partners within the federation. The user also has control over which attributes he or she sends to the target site. Authorization may be based on membership in a group (student/faculty) rather than on a person's personal information.
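To make the federated flow above concrete, the following sketch (hypothetical attribute names and policy; not Shibboleth's actual API) shows an origin site releasing only policy-approved attributes and a target granting access on affiliation rather than identity.

```python
# Hypothetical sketch of attribute release and attribute-based access control,
# in the spirit of the Shibboleth flow described above (not Shibboleth's real API).

# Attributes the origin site holds about a user (identity is NOT required by targets).
user_attributes = {
    "eduPersonAffiliation": "student",
    "eduPersonPrincipalName": "alice@origin.example.edu",
    "displayName": "Alice",
}

# Attribute release policy managed by the origin site / the user:
# only these attributes may be sent to this particular target.
release_policy = {
    "journals.example.org": ["eduPersonAffiliation"],
}

def release_attributes(target, attributes, policy):
    """Return only the attributes the policy allows for this target."""
    allowed = policy.get(target, [])
    return {name: value for name, value in attributes.items() if name in allowed}

def target_access_decision(released):
    """Target grants access based on affiliation, not on personal identity."""
    return released.get("eduPersonAffiliation") in ("student", "faculty")

released = release_attributes("journals.example.org", user_attributes, release_policy)
print(released)                          # {'eduPersonAffiliation': 'student'}
print(target_access_decision(released))  # True
```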
2.3.3 Four Dimensional Architecture:

In the 4D approach, the change is driven by a very specific observation about the current Internet architecture. The current Internet takes a box-centric approach: routers, switches, and the management and control planes act as independent boxes that interact with each other. This box-centric approach has the following disadvantages:

1> Since the boxes are independent, each needs to be configured manually, and manual configuration is error prone. In a large network, manual configuration is bound to produce errors.

2> Whenever the network topology changes, context-specific manual reconfiguration is needed.

3> Protocols, as implemented, do not follow a policy language. To make a protocol respond according to policy, its input parameters have to be changed.

4> In addition, it is difficult to perform network troubleshooting, and isolating failures is a difficult task in large networks.

Due to the lack of sufficient mechanisms and proper interfaces between inter-domain and intra-domain protocols, the current Internet architecture suffers from instability.

The 4D architecture was proposed under the FIND (Future Internet Design) research program. The four D's in the 4D architecture are Data, Discovery, Dissemination, and Decision. It is a centralized architecture that enforces control over distributed entities in order to meet network-level policy requirements. The planes can be explained as follows:
4-D Architecture

1> Data Plane: The data plane handles individual packets and processes them according to the state produced by the decision plane. This state can include routing tables, packet filter placement, and address translations.

2> Decision Plane: The decision plane outputs the state described above. It takes the network topology and the network-level policies into account and computes each element of the output state, e.g., packet filter placement.

3> Discovery Plane: The discovery plane can be imagined as a scout that maintains the network view and discovers the characteristics of routers, performing neighbor discovery and link-layer discovery.

4> Dissemination Plane: The dissemination plane provides a channel between each network node and the decision elements. It carries the information found by the discovery plane.
The centralized architecture helps decisions to be made based on the network topology and organizational policy. With the help of the dissemination plane, the decision plane can make decisions that are eventually executed by the data plane. The decision plane can also re-evaluate its decisions and introduce additional measures to enforce the policies.
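As a rough illustration of the plane separation described above, the sketch below (assumed topology, simplified shortest-path and policy handling; not the published 4D implementation) shows a centralized decision plane computing the per-router state that the dissemination plane would push to each data plane.

```python
# Minimal sketch of a centralized decision plane: it takes the discovered
# topology plus a network-level policy and computes the state (next-hop tables,
# packet filters) that the dissemination plane would deliver to each router.
from collections import deque

topology = {            # discovered by the discovery plane (hypothetical)
    "R1": ["R2"],
    "R2": ["R1", "R3"],
    "R3": ["R2"],
}
policy = {"block_pairs": {("R1", "R3")}}   # network-level policy: R1 must not reach R3

def shortest_next_hop(topo, src, dst):
    """BFS over the topology; return src's next hop toward dst, or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            hop = dst
            while prev[hop] != src:
                hop = prev[hop]
            return hop
        for nbr in topo[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    return None

def decision_plane(topo, pol):
    """Compute per-router forwarding tables and packet filters from topology + policy."""
    state = {r: {"next_hop": {}, "filters": set()} for r in topo}
    for src in topo:
        for dst in topo:
            if src == dst:
                continue
            if (src, dst) in pol["block_pairs"]:
                state[src]["filters"].add(dst)      # drop traffic toward dst
            else:
                state[src]["next_hop"][dst] = shortest_next_hop(topo, src, dst)
    return state

# The dissemination plane would deliver state["R1"] to router R1, and so on.
print(decision_plane(topology, policy)["R1"])
```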

The idea of the decision and dissemination planes has also been extended by making network devices such as routers behave more like simple forwarders. These routers are then controlled and managed by a Routing Control Platform.

Although the 4D architecture is quite impressive, it has known scalability issues. For example, the discovery plane bases its judgement on network broadcasts, but for a huge subnet, network flooding is not a feasible option. This can, however, be remedied by using a DHT-based network architecture.

Conclusion: The 4D architecture is, at its core, an idea for making the subnet more knowledgeable and intelligent. Complexity Oblivious Network Management (CONMan) is another such architecture built on the foundations of 4D.
2.3.4 Maestro:

The Maestro architecture takes an operating-system view of network control and management. Like a standard operating system, which supports scheduling, synchronization, inter-application communication, and resource allocation, Maestro provides the same kinds of services for network control.

Maestro is a clean-slate architectural approach and, unlike its contemporaries 4D and CONMan, it has explicit mechanisms for handling network invariants. This provides buffering against configuration errors propagating from the higher level to the lower level.

Maestro: Architecture

As seen in the figure, Maestro uses a Meta-Management System (MMS), similar to 4D's dissemination plane, which creates a channel between network devices and decisions. Like 4D, it also has a discovery mechanism to acquire knowledge of the network topology and other network information from the MMS. Based on these inputs, the operating system creates a virtual view for the control applications running on top of it. Each application, depending on its requirements, is provided with the relevant view of the network.

The major difference between the 4D architecture and Maestro is that 4D takes a monolithic view of the control architecture, while Maestro supports multiple functions by using an operating-system approach, with network-level invariants synchronizing functions and buffering errors.

NEXT GENERATION INTERNET


Architecture
Contents: Internet2 Subnet models

1. Background

2. Content Centric Internet

3. World Wide Wisdom

4. Edge based Next Generation model

5. Virtualization based Next Generation model

6. Conclusion
CONTENT CENTRIC
NEXT GENERATION INTERNET

Contents: Internet2 Subnet models

Introduction

Principles of content centric Internet

Content Naming

Content Routing

Content Delivery

Content Distribution

Conclusion
Background:

There are several approaches currently being researched. Most of these follow either the clean-slate way or evolutionary deployment. The novel way is the approach in which a clean-slate architecture is deployed. The other makes use of existing concepts, such as cognitive computing and cloud computing, that are already present; however, applying these concepts to the Next Generation Internet is still a challenge and is still to be deployed. The novel, or clean-slate, architecture provides sound reasoning for its success, but since the Internet is very large, implementing a clean-slate architecture will be a greater challenge than the evolutionary way.

The content-centric approach discussed in this topic is a clean-slate architecture deployment. The content-centric approach rests on the argument that the Internet is mostly sought out for content; hence the architecture should be focused on content. The following topic deals with how this can be achieved.
2.4.1 CONTENT CENTRIC NEXT GENERATION INTERNET

Introduction:

Next generation Internet technology is required because current Internet technology cannot manage the increasing number of Internet subscribers. The Internet today is heavily used, and the number of its users is increasing in great volumes. The measures taken today, especially the ad-hoc mechanisms that will be briefly described, are not able to cope with increasing Internet demands. Thus an altogether different Internet architecture is being proposed.

When the usage of the Internet is observed, it is seen that most Internet traffic involves accessing the Internet for data, or more specifically for content. The volume of content is increasing rapidly. Thus one can infer that most Internet traffic is content-centric. In this way content delivery becomes a critical part of Internet design today. The traffic is basically HTTP traffic, plus some traffic used for locating the content and finding a suitable content delivery method.

Coming back to our discussion of the current policies for supporting increasing user demands, we see that these ad-hoc policies have to violate the rules of the current network architecture. For example, the DNS, which resides in the application layer, has to find out about routing information to facilitate content delivery. The scenario can be explained as follows: suppose we need to access a website; if conventional content routing is used, it will cause a lot of overhead. The client first accesses the authoritative name server. It then issues another query for a nearby content server, causing another round trip, and next it may have to be redirected to another web server, which makes the entire content delivery very slow.

So dynamic content delivery is used instead, in which the DNS emulates a router and performs content routing. We thus gain a lot of flexibility by violating a basic law of network architecture, namely that higher layers get services from lower layers. So we basically have routing in both layers.

Another such violation is transparent caching. In transparent caching, ISPs cache network traffic without the consent of users and without needing to check the browser settings. In a way it forces caching on users, which may cause a security issue. Thus we see that in order to maintain the flexibility of the network, transport-level connections are hijacked.

In the current Internet scenario, whenever a user uses application tools like Facebook, Skype, or Google, we deal with content: for example, we use the Google search engine to get content, and in the case of Skype we connect to people by using their names. We see that most applications are content-centric, but they are resolved in an address-centric way. There is thus a basic mismatch between content-driven applications and the address-centric approach that is taken. Apart from this, there are other instances where a content-centric network layer would help. We can summarize the points as follows:

Persistence of the Names:

In the current Internet scenario, names are applied to content on a where/what basis: when we access content by name, the access depends on the location. For example, a URL (Uniform Resource Locator) depends on “where” the content is stored in order to get “what” is stored, i.e., the content. So names are not persistent if the location changes; the URL needs to be changed in this case. This problem of the address-centric network can be resolved by the content-centric Internet or, in particular with reference to this section, the content-centric network. In this case the user can access the name irrespective of where the content is stored, so the name and its access will be persistent even if the location has changed.

Content Distribution

Until now, the reliability of content has been achieved simply by replicating the content in different geographical regions. Because there are replicas of the same content, traversal of the backbone is kept limited, which matters given the huge amount of other traffic passing over the backbone. To handle this, content distribution is managed by application-layer systems such as the Akamai CDN or web proxies. The problem with involving third parties is that they do not cooperate with each other and are not found everywhere. Not being free of charge, they are also not economical. Also, since the IP layer only handles routing based on addresses, it is agnostic of the content, i.e., the address-centric approach gives little or no importance to content. With a content-centric network layer, we provide intelligence to the network layer, thereby making replication easy and economical.
Lookup-delay:

In the address-centric scenario, the client gets the content only after it has been mediated through the DNS. It has to query the DNS server to find out the IP address of the requested site. The DNS finds the IP address by searching the domains and their respective IP addresses. After the IP address is acquired, we then route and request the content. This results in considerable delay: although we have faster Internet connections, the subsequent queries and request-responses still pose a problem. If the network layer is content-centric, we will not need the DNS server, since the network now knows the content by name and can route directly to it, reducing the lookup-time overhead.

Mobility:

An IP address is an identity of both a location and an end point. In other words, whenever the location changes, the IP address changes, so we have to be redirected at every IP address change. If we make the network content-centric, we achieve greater mobility, since the content name does not change with location.

Security:

In the current Internet scenario, for content obtained through the Internet the user has to trust the content provider rather than the actual content. A simple scenario is this: if we search for the eBay website through Google, we trust the search engine, and when we click the link provided we trust the DNS server. In this way phishing of a website can take place. In a content-centric network, security information comes embedded with the content. Even if it is provided by an untrusted server, it can be validated by the customer by checking the security information in the content. This also gives an advantage over the address-centric infrastructure by allowing replication of secured content: it not only gives flexibility but also ensures that replicated content is secure. In the next section we outline the principles of a content-centric Internet.
Principles of content centric network layer:

Since in the content-centric scenario our interest is in the content, we base our principles on content.

1> Instead of addressing hosts, that is, by using IP addresses, we use the content name as an address.

2> For routing, we use the destination content name instead of the destination IP address.

3> Since we use the content name as an identity, security is embedded in the content. This also prevents the use of fake content names.

4> The design aims to allow more and more address-less hosts to which we deliver content.

5> Caching is provided to achieve efficiency in content delivery.

The above principles can be lucidly explained by the following illustration:

Bob stores bob.smith.video in the device. This device is connected to more than one routers.
Alice wished to view the contents of bob.smith.video by requesting at the network layer directly
.The routers the route the request based on the name “bob.smith.video” to the desitnation device

The destination device then forwards the content to Alice. In this entire process we have relied on
routers for requesting the content using its name. The content centric Internet does intend to make
use of the network layer for identifying the content based on its name alone.

It is now important to decide how to name content so that routing can be effective. The previous example is only an illustration of how content routing would take place. We now deal with the issues and approaches for content naming.
Content Naming:

Since the content name is so important in the next generation Internet, naming the content becomes a big issue that needs to be resolved first. Since naming matters to the user, the naming scheme should be treated as a parameter of the protocol design. For deciding how naming should work, we take the help of Zooko's Triangle, in which we can have only two of the three corners and have to let go of the third.

Figure - Zooko's Triangle (corners: Memorable, Global/Decentralized, Secure)

Secure: The name addresses that content only. In other words, the content cannot be spoofed or duplicated under the same name.

Memorable: The name should be memorable, i.e., it can be easily remembered and hence, we can say, easily accessed.

Global/Decentralized: The name can be chosen at free will; it is not assigned by a central naming authority.

The issue in naming is that in the current scenario, when we access a website we trust the DNS server; since naming in this case is authoritative, we can trust the server. In the content-centric domain, however, we have to decide which two corners of Zooko's triangle to keep.

If we name content based on Memorable and Global and let go of Security, then we have a major issue: since the content name is not secure, an untrusted router may cache the content of an (original) website and pose as the original. Hence it is important to keep security. Now we have to decide between decentralization and memorability. Since memorable and decentralized are both equally important, we decide based on the application; depending on the application, we have either secure-memorable or secure-decentralized names. For proprietary websites, like the website of a company, we make the name memorable-secure. For content that requires a more informal approach, for example status updates, we make it secure-decentralized. However, secure-memorable and secure-decentralized names do become an issue, since we try to make both of them coexist in the same network.
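One way the secure-decentralized corner is often realized is with self-certifying names; the sketch below (hypothetical naming format, not a specific standard) derives a name from a publisher key and a content hash so that content served by any untrusted cache can still be verified against its name.

```python
# Hypothetical sketch of a "secure-decentralized" name from the Zooko's-triangle
# trade-off above: the name is derived from the publisher's key and a hash of the
# content, so any node (or untrusted cache) serving it can be verified, but the
# name is not human-memorable.
import hashlib

def make_name(publisher_key: bytes, content: bytes) -> str:
    """Self-certifying name: publisherID/contentID, both cryptographic hashes."""
    publisher_id = hashlib.sha256(publisher_key).hexdigest()[:16]
    content_id = hashlib.sha256(content).hexdigest()[:16]
    return f"{publisher_id}/{content_id}"

def verify(name: str, content: bytes) -> bool:
    """Anyone receiving the content from any cache can check it matches the name."""
    return name.split("/")[1] == hashlib.sha256(content).hexdigest()[:16]

key = b"bob-public-key"
video = b"...bytes of bob.smith.video..."
name = make_name(key, video)
print(name)                      # secure and decentralized, but not memorable
print(verify(name, video))       # True
print(verify(name, b"fake"))     # False: a router cannot pass off fake content
```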
Content Routing:

The main essence of the content-centric Internet lies in content routing, i.e., how routing should take place using the name of the content. The primary objective in content routing is to forward content based on the request issued by the host. The routing is based on the name only.

Currently two types of architecture are being proposed: one is advertise-based and the other is rendezvous-based architecture. We discuss the basic concepts of both.

Advertise-based routing:

In advertise-based routing we keep the traditional routing concept, but instead of advertising and routing on IP addresses, we route based on the content name. Hence in this context the routing table does not have IP address entries but instead has content names as entries.

In advertise-based routing we keep the same network-topology concepts as OSPF and BGP; the only change is that instead of IP addresses we use content names. Advertise-based architectures are a feasible solution in terms of transport; however, there are routing and scalability issues. The number of entries in a routing table of content names is considerably large, and there need to be effective mechanisms to compress routing tables. Stability is also a concern, since the convergence delay for content-based routing is worse than for IP-based routing. So the major tradeoff in the advertise-based architecture is scalability and routing, and its greatest advantage is that it suits the request-response type of traffic.
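A minimal sketch of the idea, with an assumed name-component forwarding table, is shown below: the table is keyed by content-name prefixes and lookup is a longest-prefix match on name components rather than on IP prefixes.

```python
# Sketch (assumed, simplified) of advertise-based content routing: the routing
# table is keyed by content-name prefixes instead of IP prefixes, and lookup
# uses longest-prefix match on name components.

fib = {   # name prefix -> outgoing interface, built from content advertisements
    ("edu", "example"): "if0",
    ("edu", "example", "videos"): "if1",
    ("com", "news"): "if2",
}

def longest_prefix_match(name: str):
    """Return the interface for the longest advertised prefix of the name."""
    components = tuple(name.split("/"))
    for length in range(len(components), 0, -1):
        iface = fib.get(components[:length])
        if iface is not None:
            return iface
    return None   # no route: the request cannot be forwarded

print(longest_prefix_match("edu/example/videos/bob.smith.video"))  # if1
print(longest_prefix_match("edu/example/papers/p1.pdf"))           # if0
```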
Rendezvous based architecture:

Another type of architecture is the rendezvous-based architecture. A rendezvous is basically a meeting place or conversation place for two parties. In this type of architecture, the rendezvous node contains information such as the name of the content and other content-related information that will be needed. The rendezvous node is found by using standardized functions. All content requests are first forwarded to the rendezvous node by the network layer. The rendezvous node thus acts as an intermediate node that handles all the transactions, so the path is divided into user-to-rendezvous-node and rendezvous-to-content segments. These routing protocols are inspired by overlay networks.

The rendezvous-based architecture is suitable for the publish-subscribe type of traffic and cannot handle request-response as fast as the advertise-based architecture, since requests are first forwarded to the rendezvous node and only then routed to the requested content provider.
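The sketch below (hypothetical node names and a hash-based rendezvous function) illustrates the two-step path: publishers register a content name at its rendezvous node, and a subscriber's request is resolved at that node before reaching the provider.

```python
# Sketch (assumed names) of rendezvous-based routing: publishers register
# content at a rendezvous node chosen by a standardized function (here a hash),
# and subscriber requests are first routed to that node, which then points
# toward the content provider.
import hashlib

RENDEZVOUS_NODES = ["RN0", "RN1", "RN2"]

def rendezvous_for(content_name: str) -> str:
    """Standardized function mapping a content name to its rendezvous node."""
    digest = int(hashlib.sha256(content_name.encode()).hexdigest(), 16)
    return RENDEZVOUS_NODES[digest % len(RENDEZVOUS_NODES)]

registry = {}   # state kept at rendezvous nodes: content name -> provider node

def publish(content_name: str, provider: str):
    registry.setdefault(rendezvous_for(content_name), {})[content_name] = provider

def subscribe(content_name: str):
    """Request goes to the rendezvous node first, then to the provider."""
    node = rendezvous_for(content_name)
    provider = registry.get(node, {}).get(content_name)
    return node, provider

publish("bob.smith.video", provider="bob-device")
print(subscribe("bob.smith.video"))   # (rendezvous node, 'bob-device'): the extra hop
```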
Content Delivery:

Content delivery concerns how to forward the content from the storage point to the host. Since we no longer intend to keep IP addresses, and the routing tables contain only content names and nothing about the host, it becomes a challenge to transfer the content to the host, because we no longer support the idea of addresses. So the literature takes refuge in providing a temporary response channel. A response channel must, for its part, interface with the content and take responsibility for following the content from the storage device to the host. It accomplishes this in one of two ways.

The first approach applies the idea of source routing, in which the specific routes or sequences are mentioned in the header of the data units transporting the content. The second approach is that each downstream node simply stores information about the next-hop link-layer interface.

Another potential problem faced in content delivery is that the network is subject to congestion, so we introduce congestion control mechanisms in the delivery protocol. We use a request-response exchange for sending and receiving content, with requests and responses carried in fixed-size units, e.g., 512 bytes. This strategy prevents loss of data and provides flexibility in controlling congestion.
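A simplified sketch of this chunked request-response delivery is given below (the 512-byte unit size follows the example above; the window-based pacing is an assumption standing in for a real congestion control scheme).

```python
# Sketch (hypothetical) of the request-response delivery described above:
# content is pulled in fixed-size units, one request per unit, so the receiver
# can pace requests (a crude form of congestion control) and detect lost units.
CHUNK_SIZE = 512   # bytes per data unit, as in the example above

def serve_chunk(content: bytes, index: int) -> bytes:
    """Provider side: return the fixed-size unit with the given index."""
    return content[index * CHUNK_SIZE:(index + 1) * CHUNK_SIZE]

def fetch(content_name: str, provider_content: bytes, window: int = 4) -> bytes:
    """Receiver side: request units one window at a time and reassemble."""
    received, index = [], 0
    while True:
        burst = []
        for i in range(index, index + window):        # request a window of units
            chunk = serve_chunk(provider_content, i)  # stands in for a network request
            if not chunk:
                return b"".join(received + burst)
            burst.append(chunk)
        received.extend(burst)
        index += window   # a real receiver would shrink the window on loss/congestion

data = bytes(range(256)) * 10          # 2560 bytes of example content
assert fetch("bob.smith.video", data) == data
```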
Content Distribution:

Content distribution deals with caching content in the network so as to reduce end-path delays, thereby lowering latency. This concept is popularly known as “in-network” caching. The router thus becomes a device that has some amount of memory assigned to store content. It is therefore also important to give structure to the content's network data units so that sequencing and caching are relatively easy. It is recommended that the size of chunks be around 256-512 kB.

In-network caching is performed either in an autonomous way or in a coordinated way. In the autonomous method, with the help of a locally running algorithm, a data unit is cached at the router closest to the host. The greatest disadvantage, however, is that all the nearby routers may end up caching the same content.

In the coordinated caching technique we use caching algorithms that decide where a data unit should be cached. Content distribution, or in-network caching, should also cooperate with content routing. This is achieved by using an advertise-based or rendezvous-based architecture. In the advertise-based architecture, the router advertises whenever it caches content; in the rendezvous-based architecture, user requests are forwarded toward a rendezvous node by the routing algorithm. Since the rendezvous node knows the location of the content, the request is forwarded to the appropriate router, and if an intermediate router has the content, it will serve the request.
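As a small illustration of in-network caching, the sketch below (assumed behaviour, not a specific published scheme) shows a router with a bounded LRU store answering repeat requests for a name without going back to the source.

```python
# Sketch of autonomous in-network caching: a router keeps a small LRU store of
# recently forwarded chunks and answers later requests for the same name itself.
from collections import OrderedDict

class CachingRouter:
    def __init__(self, capacity_chunks: int = 4):
        self.cache = OrderedDict()          # content name -> chunk data
        self.capacity = capacity_chunks

    def handle_request(self, name: str, fetch_upstream):
        """Serve from cache if possible; otherwise fetch, cache, and forward."""
        if name in self.cache:
            self.cache.move_to_end(name)    # refresh LRU position
            return self.cache[name], "cache-hit"
        data = fetch_upstream(name)         # request travels toward the source
        self.cache[name] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used chunk
        return data, "cache-miss"

router = CachingRouter()
origin = lambda name: f"<data for {name}>".encode()
print(router.handle_request("edu/example/videos/v1", origin))  # miss: fetched from source
print(router.handle_request("edu/example/videos/v1", origin))  # hit: served by the router
```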
Conclusion

The concept of the content-centric Internet is a novel idea, but it is important to know whether this clean-slate approach can satisfy the requirements of Internet users who view the Internet primarily as a source of content. It is also important to note that this new architecture is meant to replace TCP/IP in the network architecture. This also implies that the future Internet will have software routers running different network-layer protocols by creating virtual networks.

At this point we list the general requirements that a CONET (Content-centric Internet) should satisfy:

1> A CONET should have control over the locations where content, or links that lead to the content, are stored. This is especially important within a geographical or administrative domain. This is needed because we do not want content to be stored at random nodes.

2> A CONET should advertise its content, although care must be taken to limit this advertisement to the domain or a definite section of the network.

3> A CONET should support persistent naming, e.g., for a song, movie, or book, but it should also support naming based on purpose or service, like a weather service. It should also allow content to change while keeping the same name, e.g., a revised paper.

4> A CONET must have a facility to delete or update content. It must also be able to attach an expiry date to content so that old content does not remain too long in the network alongside its revised version. It must also allow users to edit or delete content, or make it unavailable to the general public; as an example, Wikipedia allows content to be edited or deleted.

5> There should be a way to view or data-mine content by version, so that users can access the latest content while still using the same content name.

6> A CONET should also provide functionality for sessions that involve interactive exchange of data between two upper-layer entities, e.g., a client and a server. It must do so for content that is unnamed but important for upper-layer entities. CONET does not imply that every piece of content should be named, since some data is not significant enough to be named and is used only internally. Thus CONET should support content retrieval and also traditional services.

7> A CONET should provide in-built caching at each node and at the user terminal. Users can thus get the desired content from anywhere; it need not always come from the original source. It must also be possible for a user to retrieve content even when disconnected from the CONET and connected only to a node that has the content cached inside it. The CONET must also be basically aware of the contents in the network. This functionality gives network operators more control and a way to handle network traffic.

The Content Centric Internet is one novel approach to the Next Generation Internet. Many approaches have been taken; one that will be discussed next is an evolutionary approach in which we offload computing and network management to the edge routers.
EDGE BASED NEXT GENERATION
INTERNET

Contents: Internet2 Subnet models

Introduction

Architecture of Edge based Next Generation Internet

Issues of Edge based architecture.

Benefits of Edge based architecture.


2.4.2 Edge Cloud based Next Generation Internet:

Background:

Structure of the Internet: The Internet can be subdivided into three components: core, edge, and access networks. The core is a backbone consisting of routers that support multiple telecommunication interfaces, switching and forwarding at very high rates. The edge routers form the outer concentric circle around the core network and are closer to the consumer; an edge router may be connected to more than one core router. The outermost circle is the network of access routers, which are connected to the edge routers. The access routers are mostly concerned with how the consumer wants to utilize the Internet, i.e., they depend on the customer's subscription plan, which in turn determines the bandwidth and data transfer rates.
Content centric Internet

CDNs are an evolution of the client-server model in which we bring the content closer to the user, applying an analogy similar to cache memory in a computer. To explain how basic content delivery takes place, consider the figure given below.

Suppose user A requests content; since the content is not found in edge router ER1, the request is routed to the source server, and the content is then cached at ER1. Now when user B requests the same data, instead of routing the content from the source server, it is accessed directly from edge router ER1. As seen, not only is the response time improved, but the providing server is offloaded. Since Internet usage is observed to be mostly content-centric, it is important to deploy a better CDN in the Next Generation Internet.

The content-centric Internet approach extends the CDN concept explained above by making content the main focus. We decouple the location of the content from its identity. This implies that content should be accessible irrespective of its location and only by its identity. We thus make content distributed and offload the server. The main goal of this architecture is to make content available to users by adding services to it. We attempt to provide intelligence to the edge and to transform content from an unprocessed entity into a value-added service.

Life at the Edge: Until now the content-centric approach has been discussed in terms of offloading the server and making servers more available. But it is equally advantageous from the client side to offload the required computation to the edge, thus giving the client a leaner platform. This can be realized through virtualization and an edge cloud that provides better services. We have thus combined the content-centric approach, cloud computing, and virtualization. By using virtualization we can combine different overlay platforms into a single infrastructure.
Edge based Next Generation Internet :

The biggest transition from the current Internet architecture to the edge-cloud-based architecture is that computing is performed at the edge only and not by the client, thus making the edge intelligent. By delegating the computing load to the edge we make the core simpler, reducing its function to packet forwarding. We deploy the cloud at the edge instead of at the core.

Architecture of the Edge Cloud:

As seen from the figure above, there are three layers: the access, the edge, and the core. In the access layer, Infrastructure as a Service is provided. The infrastructure provides services such as storage, servers, networks, etc.; this storage service is called the storage cloud. The middle layer provides Platform as a Service; this layer builds the platform by virtualization of the underlying infrastructure.
Inside the Edge cloud:

Surrogate:

The term surrogate is defined in RFC 3040 as a gateway co-located with an origin server, or at a different point in the network, that works and operates on behalf of the associated server. Surrogation helps accommodate protocol requirements: the surrogate can adapt to the requirements, removing constraints on the server.

The surrogate thus supports a wide range of clients, including those with minimal capabilities. It achieves this through web-based virtualization. From the user's perspective, the surrogate is a special gateway, as described above, that can provide services such as unified communications and content-specific services (ads, mash-ups, etc.). The surrogate also provides both computing and storage.

For example, a web-based GUI shows the available media, and the user requests any one of them. The delivery is done either from local storage or from the edge cloud. Since the proposed Internet is content-centric, live media is captured and streamed using the streaming server. It is also noteworthy that the surrogate is stateful: it maintains session information. Also, because the surrogate hosts the client software, it creates the illusion that service is always continuous, even though at the back end the terminals get disconnected and reconnected, possibly with different IP addresses.

HTTP:

As seen from the diagram, HTTP is used; we explain here how HTTP is used in the surrogate. The user interacts through the web-based GUI, which implements the virtual client side on the user platform (UE); the web browser in turn supports this. The edge cloud thus implements a web server, which receives user input entered through the GUI. With the emergence of Google Docs we see how a web application can serve as a virtual appliance; therefore a web-browser-based GUI is the most suitable choice for virtual clients. Since most web pages use markup languages, HTTP is used to support the transfer.
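A minimal sketch of such a surrogate web server is shown below (the paths, port, and in-memory edge storage are assumptions for illustration); the browser-based GUI issues plain HTTP GETs and the surrogate answers from local edge storage or reports that the content is not yet cached.

```python
# Minimal sketch (hypothetical paths and storage) of the surrogate's web server:
# the browser-based GUI issues ordinary HTTP requests, and the surrogate answers
# from its local edge storage, standing in for computation offloaded to the edge.
from http.server import BaseHTTPRequestHandler, HTTPServer

EDGE_STORAGE = {                       # content cached in the edge cloud (assumed)
    "/media/clip1": b"<clip1 bytes>",
}

class SurrogateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = EDGE_STORAGE.get(self.path)
        if body is None:
            # In a full design the surrogate would fetch from the origin or
            # another edge-cloud node, then cache the result locally.
            self.send_error(404, "not yet cached at this surrogate")
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SurrogateHandler).serve_forever()
```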

Content Access:

In this section we give the basic concept of how content access is achieved. Content has to be provided with or without virtualization. This requires content mapping, i.e., associating a server with a piece of content, and traffic engineering, i.e., deciding how the content is to be delivered. The index engine accomplishes the server mapping, and traffic engineering is performed using the ISP topology and the overlay network conditions.

The ISP thus provides the physical resources to the edge cloud, though the content provider should have the required resources to provide content, applications, and index engines according to user requests. The edge cloud therefore provides separate interfaces to both the infrastructure provider and the content provider.
Content Overlays:

Content Distribution Architecture

Depending on the ISP services and contents, different edge clouds result, and together they provide an overlay forming a logical content-centric Internet architecture. Overlay networks are networks that are built on top of other networks. For successful operation of this model, a control and management plane is fitted between the overlay network and the infrastructure layer. There are three roles in this model, which can be explained as follows:

1> The Infrastructure Provider (InP) provides the infrastructure to the edge cloud. The InP maintains physical resources such as storage, surrogates, physical links, etc. The InP also provides an interface to the CP via the Virtual Network Provider. In addition, the InP transports raw bit streams and provides processing services to the vendors. In this fashion the ISP is a potential InP.

2> The Virtual Network Provider (VNP) ties many InPs together and builds a virtual network on top of them (thereby defining the overlay network); the virtual network, as expected, is composed of virtual nodes and links. The VNP's function is to provide an interface to the CPs. The VNP also arranges QoS with the InP to maintain a guaranteed level of infrastructure services.

3> The Content Provider's (CP) main function is to maintain the applications of the surrogate and the storage of the edge cloud. These functions are embedded in the interfaces provided by the VNP. To facilitate the system, the VNP offers interfaces to the CP at convenient locations at the edge.

So the overlay network described above can be summarized as follows: we have the InPs at the bottom layer; the VNP builds a virtual network using these ISPs; and on top of the VNP we have services like the surrogate and storage. In this way multiple VNPs and CPs can exist in parallel.
Issues for Edge based Internet

Before implementing the edge-model network, the following challenges need to be addressed:

Secured Communication between Virtual Client and Surrogate:

The situation is that the surrogate is located at the edge and not necessarily at the ISP; hence security and scalability become an issue. To resolve this we could require multiple authentications, but this is not a feasible solution; we need to provide scalability to this architecture by offering single sign-on capability. In effect, user profiles, billing, and authorization can then also be handled between the two parties, providing more security.

Secured Content Management:

Since in the content-centric network the content is distributed, securing the content becomes difficult to manage. The proposed architecture must guarantee integrity, authenticity, digital rights management, etc. In the current CDN network these services are provided; the proposed architecture, however, needs to provide self-certified and context-based techniques in addition to other models. In the proposed model, the involvement of the ISPs makes it possible to engage in a secured content delivery model.

Streaming Media Delivery:

A problem with virtualization is that it provides little support for multimedia applications; this is seen primarily in virtual desktop platforms. In the proposed model HTTP is used as the virtual client protocol, and the issue is that the HTML5 video and audio tags are protocol-agnostic. One solution is to use the RTP/RTCP protocols, but browsers still do not support RTP/RTCP-based streaming. Another issue that needs to be resolved concerns the codecs supported by the browsers.

There is also another thing that should be resolved: the application should perform an exchange of capabilities between the user client and the media server before delivering the content. In our case, the surrogate model must have knowledge of the capabilities of the user terminal before negotiating with the media server. If the user terminal does not have the ability to play the media, the content is first delivered to the surrogate, transcoded there, and then made available to the user terminal.

Performance of the Surrogate:

The surrogate is the major backbone of the edge-based network. It maintains the sessions, connections, and state information from different computations. We can follow the approach of grid computing, that is, implement load balancing to improve the performance of the infrastructure. However, the distributed implementation has some issues.
Future benefits of Edge based Internet:

Simple UE: Since we have offloaded computing from the client side, we no longer need to make changes in the UE (User Equipment); any changes that are required are done at the edge, which is easy since the edge is implemented as a cloud.

Works under limited user facilities: We are using web-based clients, so the model works in places where there are organizational restrictions or policing. For example, it is not advisable to install software or programs on college workstations; here web-based clients would still work.

Fixed Mobile Convergence: Since we are using a virtual client, it can be used regardless of the type of network, i.e., the network can be either fixed or mobile.

Future Internet: Multiple implementations of the future Internet can co-exist with this model through virtualization and service-supported resources.

Reuse of mash-up content: A mash-up, as the name suggests, is an aggregation of content received from different sources. Similarly, in this model we have data stored in caches and content repositories, so mash-up content can also be stored. The content can be served from the local cache if it is up to date.

Enhanced security and billing: An end user will have to be authenticated to access the network. This authentication will be done by the ISP using a single sign-on technique. The ISPs will also have the rights to gauge usage for billing; the usage is reported to the content provider, and the billing is then calculated.
2.4.3 World Wide Wisdom

Current theories on the future Internet involve making it not only faster but also computationally intelligent. So a new theory was developed based on human intellect, which is subdivided into knowledge, experience, skills, and wisdom. Wisdom provides the highest form of intelligence, involving queries over reasoning and judgment.

However, the current Internet is still information-driven, so the approach is to build the next generation Internet as a wisdom network. In World Wide Wisdom (WWW+), the network infrastructures and technologies are based on cognitive informatics and cognitive computers (CC).

In WWW+, each node is a cognitive computer (CC), an autonomous and intelligent computer able to think, perceive, and learn. Cognitive computing is a special branch of computer science that deals with machine learning and machine perception. It is an active field today; however, using these computers as network nodes is still to be implemented.

The theoretical basis of WWW+ is cognitive informatics, with the use of denotational mathematics. Denotational mathematics encompasses concept algebra, system algebra, real-time process algebra, granular algebra, and visual semantic algebra. This can make cognitive computers and their design more expressive for design, modeling, and implementation.

This kind of architecture will closely resemble the brain and the neural activities in the human body. Each node will act as a super neural cell capable of thinking, perceiving, and processing. The WWW+ approach has been identified with several applications, such as networks of computational intelligence, distributed cognitive sensor networks, and distributed remote control systems.

VIRTUALIZATION BASED
NEXT GENERATION INTERNET

Contents: Internet2 Subnet models

1. Introduction

2. Components of Virtualization based Architecture


2.4.4 VIRTUALIZATION BASED NEXT GENERATION INTERNET

Background:

The topic covered here deals with control, management, and measurement of the Internet. It deals with how the issues in the current Internet can be solved by using a virtualization-based network architecture. The approach given in this topic makes the network more intelligent rather than just a service or data provider. The approach is an incremental change rather than a complete, clean-slate overhaul.
Introduction

The Internet architecture developed in the 1970s, which is still in use, has protocols that carry data without knowing the content. Because of this, many security attacks can be mounted. The current Internet also faces difficulty in managing and controlling the network.

The current protocols and devices do not support abstraction, i.e., configuration details are not hidden. Because of this, debugging, installation, and other network management functions are complex. Also, in order to make any change in the configuration, many adjustments need to be made before the change takes place. For example, to change the routing and traffic load in the network, the routing protocols need to be reconfigured, and since the network architecture is distributed, this information needs to be changed on many devices.

This article proposes a virtualization-based network architecture for the next generation Internet. The concept applied here is that, just as we create multiple virtual machines from a single physical infrastructure, we separate the major network functions, such as management, control, and measurement, into architectural modules. As a result of this separation we greatly enrich the network functions. To add flexibility and feasibility we make it an incremental model so that further changes can be made easily. For analyzing the software and network architecture, the Alloy modeling language is used.

Change of approach: Virtualization in the Internet was previously viewed as a tool to test sets of new Internet architectures and protocols. However, this situation has changed: instead, we deploy virtualization in the network, that is, the Internet itself, to build our models. Since we use virtualization, we can basically run multiple network architectures over the same physical network. With this approach the service provider can provide multiple services to end users using the same physical infrastructure. Toward this end, a 4D-style architecture is adopted: in the 4D architecture we have the decision, dissemination, discovery, and data planes, while with respect to our scenario we have four planes: control, management, knowledge, and data. So that they do not interfere with each other, and to maintain and operate them properly, we implement them through virtual networks. So that the design does not become monolithic, we make it an incremental model.

Virtualization based Network Architecture:

System model and features: The current Internet architecture is very large, and making changes to
it is very time consuming; it may take many years. The solution is therefore to divide the network
into two subnets on the basis of architecture: a Current Internet (CI) subnet and a Next Generation
Internet (NGI) subnet. The CI subnet and the NGI subnet are logically separated from each other.
Because virtualization is used, the CI can keep running in parallel while the NGI is deployed and
tested. Since the Next Generation Internet is not completely deployed, the user is given the choice
between the Next Generation Internet subnet and the Current Internet subnet.

Virtualization based Architecture:

Coming down to the network topology, the CI (Current Internet) and the NGI (Next Generation
Internet), also named CMMI (Control Manageable and Measurable Internet), differ in only a few
basic ways. The physical, MAC, data link, network and application layers are kept the same.
Following the 4D architecture, the NGI adds further dimensions: a knowledge plane, a control
plane and a management plane. The data plane, as seen in the figure, is the same as in the current
Internet architecture. Perception-based network knowledge processing and independent network
management help solve the issues currently faced in the Internet. These planes are logically
separated from each other. On top of the application layer a user selection plane is added, which
gives the user the option to stay with the current Internet subnet or switch to the next generation
Internet subnet.

User selection plane:

The user selection plane, which resides above the application layer, allows the user to use either of
the two modes: the current Internet subnet mode or the next generation Internet mode. This is
implemented simply by introducing a notification bit called SUB_NET in the IP header, which
tells the underlying architecture how to treat the packet. If SUB_NET = 0 is received, the packet is
meant for the current Internet subnet; if it is set to 1, it is meant for the next generation Internet.
With SUB_NET = 0 the user selection layer identifies that the packet belongs to the current
Internet subnet and passes it through that subnet, and at the receiver side the selection layer
likewise learns from the SUB_NET field that the packet has come from the current Internet subnet.
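A minimal sketch of this selection logic, written in Python purely for illustration (the field and handler names below are assumptions, not part of the proposal):

```python
# Hypothetical sketch: dispatch a packet to the CI or NGI subnet based on a
# SUB_NET notification bit assumed to be carried in the IP header.

SUB_NET_CI = 0   # packet belongs to the Current Internet (CI) subnet
SUB_NET_NGI = 1  # packet belongs to the Next Generation Internet (NGI) subnet

def handle_ci(packet):
    """Forward the packet through the Current Internet subnet (placeholder)."""
    print("CI subnet handles:", packet["payload"])

def handle_ngi(packet):
    """Forward the packet through the NGI/CMMI subnet (placeholder)."""
    print("NGI subnet handles:", packet["payload"])

def user_selection_layer(packet):
    """Inspect the SUB_NET flag and hand the packet to the matching subnet."""
    if packet.get("sub_net", SUB_NET_CI) == SUB_NET_NGI:
        handle_ngi(packet)
    else:
        handle_ci(packet)

# Example: the sender sets sub_net = 1, so the receiver's selection layer
# knows the packet travelled over the next generation Internet subnet.
user_selection_layer({"sub_net": 1, "payload": b"hello"})
```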

Co-operation among Data, Knowledge, Management and Control Planes.

As described earlier, four planes are added to the subnet, and these four planes are separated into
four virtual subnets by virtualization. The data plane is responsible for delivering the application
data, while the other planes control and manage the whole network, as outlined below (a toy
sketch of their co-operation follows the list).

1> The data plane manages the transfer and delivery of application data throughout the
network. It does this through an interface, and users send and receive their data through
this interface.

2> The knowledge plane performs self-analysis, self-learning and network measurement, and
thus provides the network knowledge. This knowledge is supplied to the management
plane and the control plane and forms a database. To provide complete analysis and
knowledge it collects information such as status reports on the link, network, transport and
application layers.

3> The management plane, as the name suggests, manages the network. It takes as input
management commands from administrators and the network knowledge received from
the knowledge plane. On the whole, the management plane reports running statuses,
processes management commands, and translates those commands into policy.

4> The control plane controls the data plane by applying control policies. These control
policies are built from the management plane's output, the network knowledge from the
knowledge plane, and the control primitives.
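The co-operation among the four planes can be pictured with a toy Python sketch; the class and method names below are illustrative assumptions, not the paper's implementation:

```python
# Toy illustration: the knowledge plane gathers status reports, the management
# plane turns administrator commands plus that knowledge into policies, and the
# control plane applies the policies to the data plane.

class KnowledgePlane:
    def __init__(self):
        self.database = []
    def collect(self, report):
        # status reports from link, network, transport and application layers
        self.database.append(report)
    def network_knowledge(self):
        return list(self.database)

class ManagementPlane:
    def translate(self, command, knowledge):
        # translate an administrator command into a control policy,
        # taking the current network knowledge into account
        return {"policy": command, "based_on_reports": len(knowledge)}

class ControlPlane:
    def apply(self, policy, data_plane):
        data_plane.configure(policy)

class DataPlane:
    def configure(self, policy):
        print("data plane configured with", policy)

knowledge = KnowledgePlane()
knowledge.collect({"layer": "link", "status": "up"})
policy = ManagementPlane().translate("limit-bulk-traffic", knowledge.network_knowledge())
ControlPlane().apply(policy, DataPlane())
```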
System Modeling and Evaluation:

To implement the above architecture we use ALLOY, which provides the required logic and
language. In ALLOY, a function entity is used as an abstract entity that undertakes functions
related to the architecture, and layers such as the user selection layer are likewise represented as
entities. These entities, for example the user entity and the selection entity, interact with each
other by means of connectors. The connectors are divided into protocol connectors and service
connectors. A protocol connector models a horizontal connection: peer nodes at the same layer
exchange information with each other. A service connector models a vertical connection: a layer
below in the topology provides services to the layers above it. Once the entities are represented,
the ALLOY Analyzer is used to evaluate the model through simulation.
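The article's models are written in ALLOY; the following Python sketch only mirrors the entity/connector vocabulary described above, so the names and layer numbers are illustrative assumptions:

```python
# Illustrative only: mirrors the entity/connector structure used in the model.

class Entity:
    """An abstract function entity, e.g. an application or user selection layer."""
    def __init__(self, name, layer):
        self.name, self.layer = name, layer

class ProtocolConnector:
    """Horizontal connector: peer entities at the same layer exchange information."""
    def __init__(self, a, b):
        assert a.layer == b.layer, "protocol connectors join peers at one layer"
        self.ends = (a, b)

class ServiceConnector:
    """Vertical connector: a lower layer provides services to the layer above it."""
    def __init__(self, lower, upper):
        assert lower.layer < upper.layer, "services are offered upwards"
        self.ends = (lower, upper)

sender_selection = Entity("user selection", layer=2)
receiver_selection = Entity("user selection", layer=2)
sender_application = Entity("application", layer=1)

peer_link = ProtocolConnector(sender_selection, receiver_selection)    # horizontal
service_link = ServiceConnector(sender_application, sender_selection)  # vertical
print(peer_link.ends[0].name, "<->", peer_link.ends[1].name)
print(service_link.ends[0].name, "->", service_link.ends[1].name)
```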
Conclusion:

We have used the virtualization-based architecture approach to solve the control and management
issues of the current Internet. This future Internet approach creates individual virtual subnets that
carry the management, control and measurement functions. The network becomes cognitive by
adding a knowledge plane that measures and collects network statuses. The goal is a concrete
network architecture that is controllable, manageable and measurable.

The network architecture will be tested and evaluated using ALLOY and the ALLOY Analyzer.
The test bed that will be used to implement this architecture is PlanetLab.
A Systems Approach to Internet Architecture:

Internet2 consists of applications, middleware, network devices and the physical network.
Applying a systems approach to Internet2 enables these components to be viewed as a whole. The
systems approach covers not just technology but also users and policy, and it allows improvement
in any one area to be leveraged for greater overall gain in user satisfaction. For instance, the
simplicity of the Internet architecture allows users to run applications without any knowledge of
the physical network; if the PC operating system knows how the underlying network is operating,
application performance can be increased further, improving the user experience as well. As
changes occur in the network layer, such as IPv6 and IP multicast, new applications using these
services become available to the user.

This is a continuous cycle for Internet2 in which advanced network facilities create a platform for
better applications and vice-versa as illustrated in the figure. Also end to end system performance
and security enhancements cannot be achieved unless all the individual components are given
simultaneous attention.
Fig 7 - End to end Performance (diagram: applications, middleware, services and networks stacked, with security enabling and end to end performance motivating the development cycle)

NEXT GENERATION INTERNET


Contents: Transition from IPv4 to IPv6

1. Introduction

2. Issues with IPv4

3. Features of IPv6

4. Comparison of IPv4 and IPv6

5. Conclusion

3.1 Transition to IPv6 from IPv4


Background:

This chapter deals with the transition of the next generation Internet from IPv4 to IPv6. We discuss
the various drawbacks of IPv4 and the advanced features of IPv6.
3.2 Issues with IPv4

An Internet Protocol address is simply a big number that identifies a computer on the Internet.
Packets of data sent across the Internet include the destination address. When we send an e-mail or
watch an online video, any number of computers, switches, routers, and other devices scrutinize
the IP address on these packets and forward them along to their eventual destination. The Internet
currently uses Internet Protocol version 4 (IPv4). IPv4 addresses are 32-bit numbers, meaning that
there are about 4.3 billion possible addresses. This might look like a lot of addresses, but it isn't,
and the number has remained the same since 1981. While the size of the address space has stayed
constant for 30 years, the number of devices that connect to the Internet has grown exponentially.
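A quick back-of-the-envelope check of these figures (Python, for illustration only):

```python
# Quick check of the address-space figures quoted above.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128          # discussed later in this chapter
print(f"IPv4: {ipv4_addresses:,}")   # 4,294,967,296  (~4.3 billion)
print(f"IPv6: {ipv6_addresses:.3e}") # ~3.403e+38
```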

Consider one example: technology giant Apple alone has sold more than 150 million mobile iOS
devices (iPad, iPhone, iPod Touch), not counting all the computers, routers and other
Internet-connected devices sold during the past 30 years. With the pace at which technology is
hurtling along, we have long since run out of IPv4 addresses. At a ceremony in Florida in
February, the last blocks of IPv4 addresses were allocated to the Regional Internet Registries,
whose job it is to further distribute these final addresses to others. The main reason this was not
planned for is that no one believed so many addresses could ever be exhausted. As Vint Cerf -
Google's chief Internet evangelist, "the father of the Internet," and the person responsible for
choosing 32-bit numbers - said in an interview earlier this year, "Who the hell knew how much
address space we needed?"

Although the lack of address space is the major issue with IPv4, there are several other concerns as
well. IPv4 follows a flat routing infrastructure: individual address prefixes are assigned, and each
address prefix contributes a new entry to the routing tables of the Internet backbone routers.
Configuration is also a major problem in IPv4. IPv4 networks must be configured either manually
or through the Dynamic Host Configuration Protocol (DHCP). DHCP allows a network to be
expanded beyond its present capacity, but DHCP itself must still be configured and managed
manually.

Security is another major concern with IPv4. The Internet was first designed with a friendly
environment in mind, and security was left to the end nodes. For instance, if an application such as
e-mail requires encryption services, it is the responsibility of the e-mail application at the end
nodes to provide them. The original Internet is still transparent, and there is no proper security
framework in place for threats such as Denial of Service (DoS) attacks, malicious code distribution
(worms and virus attacks), fragmentation attacks and port scanning attacks. The address space of a
single subnet is so small that scanning a Class C network for vulnerable ports takes only about
four minutes.

Prioritization of certain packets, such as special handling for low delay and low delay variance for
voice or video traffic, is possible with IPv4. However, it relies on a new interpretation of the IPv4
Type of Service (ToS) field, which is not supported by all devices on the network. Additionally,
the packet flow must be identified using an upper-layer protocol identifier such as a TCP or User
Datagram Protocol (UDP) port. This additional processing of the packet by intermediate routers
makes forwarding less efficient.

As mentioned above there has been a proliferation in mobile devices like phones and music players
that have the capability to connect to the Internet from virtually any location. Mobility is a new
requirement for Internet-connected devices, in which a node can change its address as it changes
its physical attachment to the Internet and still maintain existing connections. Although there is a
specification for IPv4 mobility, due to a lack of infrastructure, communications with an IPv4
mobile node are inefficient.

These problems are addressed by Internet Protocol version 6 (IPv6). IPv6 is not a superset of IPv4
but a completely new set of protocols. The various features of IPv6 are described below.

3.3 IPv6 Features


• Large Address Space: IPv6 addresses are 128 bits long, creating an address space with
3.4 × 10^38 unique addresses. This is plenty of address space for the foreseeable future and
allows all manner of devices to connect to the Internet without the use of NATs. Address
space can also be allocated internationally in a more equitable manner.

• Hierarchical Addressing: Global addresses are those IPv6 addresses that are reachable on
the IPv6 portion of the Internet. There is sufficient address space for the hierarchy of
Internet service providers (ISPs) that typically exist between an organization or home and
the backbone of the Internet. Global addresses are designed to be summarizable and
hierarchical, resulting in relatively few routing entries in the routing tables of Internet
backbone routers.
In IPv6 there are three types of addressing modes: unicast, multicast and anycast. Unicast
addresses are assigned to a single IPv6 node. Multicast addresses are assigned to multiple nodes
within a single multicast group; when a packet is sent to a multicast group, it must be delivered to
all the nodes within that group. Anycast addressing is similar to multicast, the only difference
being that the packet is delivered to just one node in the group.

• Stateless and stateful address configuration: IPv6 allows hosts to acquire IP addresses
either in a stateless, autonomous way or through a controlled mechanism such as
DHCPv6. IPv6 hosts can automatically configure their own IPv6 addresses and other
configuration parameters even in the absence of an address configuration infrastructure
such as DHCP (a small sketch of the stateless case appears after this list of features).

• Quality of Service: The IPv6 packet header contains fields that facilitate the support for
QoS for both differentiated and integrated services.

• Better Performance: IPv6 provides significant improvements such as better handling
of packet fragmentation, hierarchical addressing, and provisions for header chaining that
reduce routing table size and processing time.
• Built in security: Unlike IPv4, IPv6 support for IPSec protocol headers is required.
Applications can always rely on industry standard security services for data sent and
received. However, the requirement to process IPSec headers does not make IPv6
inherently more secure. IPv6 packets are not required to be protected with Authentication
Header (AH) or Encapsulating Security Payload (ESP).

• Extensibility: Even though an IPv6 address is four times as long as an IPv4 address, the
IPv6 header (shown in the figure below) is only 40 bytes (the IPv4 header is 20 bytes).
There are no checksum or option fields in the fixed IPv6 header; optional information is
carried in extension headers added after it, which gives more extensibility, reduces header
processing time and thereby increases network performance.

Figure 8 - IPv6 Header


• Mobility: IPv6 provides mechanisms that allow mobile nodes to change their locations and
addresses without losing the existing connections through which those nodes are
communicating. This service is supported at the Internet level and therefore is
fully transparent to upper-layer protocols. Rather than attempting to add mobility to an
established protocol with an established infrastructure (as with IPv4), IPv6 can support
mobility more efficiently.
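As promised under the stateless configuration feature above, here is a minimal Python sketch of stateless address formation, assuming the classic modified EUI-64 interface identifier; the prefix and MAC address below are made-up examples, not values from this report:

```python
# Minimal sketch (not a full SLAAC implementation): form an IPv6 address by
# combining a router-advertised /64 prefix with a modified EUI-64 interface
# identifier derived from the interface's MAC address.
import ipaddress

def eui64_interface_id(mac: str) -> int:
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    return int.from_bytes(eui64, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.ip_network(prefix)     # e.g. a /64 advertised by a router
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

# Example with an illustrative prefix and MAC address:
print(slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8:1:2:21a:2bff:fe3c:4d5e
```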

Improvements in IPv6 security

IPv4 has little or no built-in security, leaving it vulnerable to external attacks, so the improvements
in IPv6 make a huge difference in terms of network security.

1. Prevention of Port Scanning Attack


As mentioned above, port scanning allows attackers (black hats) to probe ports that are known to
be vulnerable. In IPv4, port scanning is relatively simple. Most IPv4 segments belong to Class C
networks, which have 8 bits for host addressing, so scanning a typical IPv4 subnet at the rate of
one host per second translates into
• 256 hosts x (1 second / 1 host) x (1 minute / 60 seconds) ≈ 4.27 minutes.

• In IPv6 networks, subnets use 64 bits for allocating host addresses, so scanning a typical
IPv6 subnet requires

2^64 hosts x (1 second / 1 host) x (1 year / 31,536,000 seconds) ≈ 585 billion years.

Scanning such a huge address space is close to impossible, which largely solves the port
scanning problem.
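The two estimates can be reproduced with a short calculation (Python, assuming one probed host per second as above):

```python
# Back-of-the-envelope check of the scanning estimates above.
SECONDS_PER_YEAR = 31_536_000

ipv4_class_c_hosts = 2 ** 8          # a Class C (/24) subnet
ipv6_subnet_hosts = 2 ** 64          # a standard /64 IPv6 subnet

print(ipv4_class_c_hosts / 60)                  # ~4.27 minutes
print(ipv6_subnet_hosts / SECONDS_PER_YEAR)     # ~5.85e11 years (~585 billion)
```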
2. IPSec
IPSec consists of a set of cryptographic protocols that provide secure data communication and key
exchange. IPSec uses two wire-level protocols, Authentication Header (AH) and Encapsulating
Security Payload (ESP), which are responsible for authentication, data integrity and
confidentiality. In IPv6 both AH and ESP are defined as part of the extension headers.
Additionally, there is a third suite of protocols, the Internet Key Exchange (IKE), which is
responsible for key exchange and protocol negotiation. IKE provides the initial information needed
to establish and negotiate security parameters between end devices, and it keeps track of this
information so that communication remains secure for as long as it lasts.

2.1 Authentication Header: As mentioned above the authentication header prevents the IP
packets from being altered or tampered with. In an IPv4 packet, the AH is part of the payload.
The figure below shows an example of an IPv4 packet with an AH in the payload.

Figure 9. Authentication Header in IPv4 Packet

When the AH protocol was implemented there was some concern about how to integrate it
into the new IPv6 format. The problem was that IPv6 header extensions can change in transit
through the network as the information they contain gets updated during transit.

To solve this problem, IPv6 AH was designed with flexibility in mind—the protocol
authenticates and does integrity check only on those fields in the IPv6 packet header that do
not change in transit. Also, in IPv6 packets, the AH is intelligently located at the end of the
header chain—but ahead of any ESP extension header or any higher level protocol such as
TCP/UDP. A typical sequence of IPv6 extension headers is shown in the figure below.

Figure 10. IPv6 Extension Headers Order

2.2 Encapsulating Security Payload:

In addition to providing the same functionality the AH protocol provides (authentication, data
integrity, and replay protection), ESP also provides confidentiality. In the ESP extension header,
the security parameter index (SPI) field identifies what group of security parameters the sender is
using to secure communication. ESP supports any number of encryption mechanisms; however,
the protocol specifies DES-CBC as its default. Also, ESP does not provide the same level of
authentication available with AH: while AH authenticates the whole IP header (in fact, only those
fields that do not change in transit), ESP authenticates only the information that follows it [1].

ESP provides data integrity by implementing an integrity check value (ICV) that is part of the ESP
header trailer—the authentication field. The ICV is computed once any encryption is complete and
it includes the whole ESP header/trailer—except for the authentication field, of course. The ICV
uses hash message authentication code (HMAC) with SHA-1 and MD5 as the recommended
cryptographic hash functions. Figure below shows a typical ESP extension header.

Figure 11. IPv6 Encapsulating Security Payload (Header and Trailer)
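The ICV described above is essentially an HMAC over the ESP data. The following Python sketch shows only the HMAC step with the recommended hash functions; the key and the covered bytes are simplified placeholders, not a faithful ESP encoding:

```python
# Illustrative only: compute an ESP-style integrity check value (ICV) as an
# HMAC over the ESP header, payload and trailer (everything except the
# authentication field itself).
import hashlib
import hmac

def esp_icv(key: bytes, covered_bytes: bytes, algo=hashlib.sha1) -> bytes:
    return hmac.new(key, covered_bytes, algo).digest()

key = b"example-shared-key"
covered = b"\x00\x00\x00\x2a" + b"encrypted payload..."  # SPI + data (simplified)
print(esp_icv(key, covered, hashlib.sha1).hex())   # HMAC-SHA-1 ICV
print(esp_icv(key, covered, hashlib.md5).hex())    # HMAC-MD5 ICV
```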


2.3 Transport and tunnel modes

In IPv4 networks, IPSec provides two modes of securing traffic. The first one is called
transport mode and it is intended to provide secure communication between endpoints by securing
only the packet’s payload. The second one is called tunnel mode and it is intended to protect the
entire IPv4 packet. However, in IPv6 networks, there is no need for a tunnel mode because, as
mentioned above, both the AH and ESP protocols provide enough functionality to secure IPv6
traffic.

2.4 Protocol negotiation and key exchange management

In addition to AH and ESP, IPSec also specifies additional functionality for protocol negotiation
and key exchange management. IPSec encryption capabilities depend on the ability to negotiate
and exchange encryption keys between parties. To accomplish this task, IPSec specifies an
Internet key exchange (IKE) protocol. IKE provides the following functionality:
a. Negotiating with peers the protocols, encryption algorithms, and keys to use.

b. Exchanging keys easily, including changing them often.

c. Keeping track of all these agreements.

To keep track of all protocol and encryption algorithm agreements, IPSec uses the SPI field in both
the AH and ESP headers. This field is an arbitrary 32-bit number that represents a security
association (SA). When communication is negotiated, the receiver node assigns an available SPI
which is not in use, and preferably one that has not been used in a while. It then communicates this
SPI to its communication partner establishing a security association. From then until that SA
expires, whenever a node wishes to communicate with the other using the same SA, it must use the
same SPI to specify it.
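A toy Python sketch of this SPI bookkeeping (the data structures are illustrative assumptions, not an IPSec implementation):

```python
# Toy sketch: the receiver picks an unused 32-bit SPI, records the negotiated
# parameters as a security association (SA), and can later look the SA up by SPI.
import secrets

security_associations = {}   # SPI -> negotiated parameters

def assign_spi(parameters: dict) -> int:
    while True:
        spi = secrets.randbits(32)
        if spi not in security_associations:    # preferably one not recently used
            security_associations[spi] = parameters
            return spi

spi = assign_spi({"protocol": "ESP", "cipher": "DES-CBC", "auth": "HMAC-SHA-1"})
print(f"SPI {spi:#010x} ->", security_associations[spi])
```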

3. Neighbor discovery and address auto configuration


Neighbor discovery (ND) is the mechanism responsible for router and prefix discovery,
duplicate address and network unreachability detection, parameter discovery and link-layer
address resolution. This protocol is based entirely on Layer 3. ND operates in tandem with
auto-configuration, which is the mechanism used by IPv6 nodes to acquire either stateful or
stateless configuration information. In stateless mode, every node can obtain global
configuration information on its own, including potentially illegitimate nodes; in stateful mode,
configuration information can be provided selectively, reducing the possibility of rogue nodes.
Both ND and address auto-configuration contribute to making IPv6 more secure than IPv4: ND
messages are sent with the maximum hop limit of 255, which prevents ND packets or duplicate
addresses from being sourced from outside the local link.
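A small sketch of that hop-limit safeguard, written in Python for illustration only:

```python
# Sketch of the hop-limit check: Neighbor Discovery messages are sent with the
# maximum hop limit (255), so a receiver can reject any ND packet that has
# crossed a router, i.e. any packet sourced from off-link.
ND_REQUIRED_HOP_LIMIT = 255

def accept_nd_message(hop_limit: int) -> bool:
    # A packet forwarded by even one router arrives with hop limit < 255.
    return hop_limit == ND_REQUIRED_HOP_LIMIT

print(accept_nd_message(255))   # True: originated on the local link
print(accept_nd_message(254))   # False: came through a router
```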

Mobility

Mobility is a totally new feature of IPv6 that was not available in its predecessor. Mobility is a
very complex function that raises considerable security concerns. Mobility uses two types of
addresses: the permanent (home) address and the temporary (care-of) address. The first is a typical
IPv6 address contained in an extension header; the second is a temporary address contained in the
IP header. Because of the characteristics of these networks (and even more so with wireless
mobility), the temporary component of a mobile node's address could be exposed to spoofing
attacks on the home agent. Mobility therefore requires special security measures, and network
administrators must be fully aware of them.
Comparison of IPv4 and IPv6

Deployed: IPv4 1981; IPv6 1999.
Address size: IPv4 32-bit number; IPv6 128-bit number.
Address format: IPv4 dotted decimal notation (e.g. 192.149.252.76); IPv6 hexadecimal notation (e.g. 3FFE:091A:19D6:12BC:AF89:AADD:1123:A101).
Prefix notation: IPv4 192.149.0.0/24; IPv6 3FFE:F200:0234::/48.
Number of addresses: IPv4 2^32; IPv6 2^128.
Routing infrastructure: IPv4 flat routing; IPv6 hierarchical routing.
Configuration: IPv4 manual configuration of ports and end devices; IPv6 automatic configuration of ports and end devices.
Security features: IPv4 dependent on end nodes; IPv6 built-in security.
Port scanning: IPv4 scanning a Class C network takes about 4 minutes; IPv6 scanning a typical /64 subnet would take roughly 585 billion years.
Packet forwarding: IPv4 less efficient (intermediate routers must identify flows from upper-layer ports); IPv6 more efficient header processing.
Mobility: IPv4 lacks infrastructure for mobile nodes; IPv6 good support for mobility.
Priority of packets: IPv4 depends on the ToS field; IPv6 has a separate field for identifying packet priority.
Quality of Service: IPv4 QoS does not work if the packet or data is encrypted; IPv6 standardized QoS for both differentiated and integrated services in the packet header itself.
Extensibility: IPv4 header is 20 bytes for 32-bit addresses; IPv6 header is only 40 bytes (twice that of IPv4) for 128-bit addresses.
Efficiency: IPv4 not efficient; IPv6 very efficient.
Authentication Header: IPv4 part of the packet payload; IPv6 part of an extension header.

World IPv6 Day

On 8 June 2011, Google, Facebook, Yahoo!, Akamai and Limelight Networks will be among the
major organizations offering their content over IPv6 for a 24-hour "test drive". The goal of the
event is to motivate organizations across the industry (Internet service providers, hardware
makers, operating system vendors and web companies) to prepare their services for IPv6 and
ensure a successful transition as IPv4 addresses run out.

Link to test if your network devices are IPv6 ready


http://test-ipv6.com/

Conclusion:

In order to fully deploy and use technologies like IPv6 and Internet2, all major Internet industry
players will need to take action to ensure a successful transition. For example:

• Internet service providers need to make IPv6 connectivity available to their users
• Web companies need to offer their services over IPv6
• Operating system makers may need to implement specific software updates
• Backbone providers may need to establish IPv6 peering with each other.
• Hardware and home gateway manufacturers may need to update firmware
EDGE BASED NEXT GENERATION
INTERNET

Contents: Applications of Internet2

Introduction

Public Television’s Next-Generation Interconnection Pilot

High-Definition Television

Theater-Quality Film Transmission

The Space Physics and Aeronomy Research Collaboration

The Neptune Project

The California Orthopedic Research Network

Conclusion
4.1 Introduction

Advanced Broadband

According to the ITU, broadband is the transmission capacity of a channel that is faster than
primary rate Integrated Services Digital Network (ISDN), i.e. 1.5 or 2.0 Megabits per second
(Mbps). Broadband has reached a stage where it is prevalent almost throughout the country.
Internet2 seeks to improve upon broadband by deploying advanced broadband, the next generation
in networking. Advanced broadband is the transmission capacity of a channel offering
multiple-Mbps flows in full duplex operation between computers and other networking devices.
Another feature of advanced broadband is that the device is always connected to the network, so
applications and services are accessible round the clock. Some of the advanced services supported
by the advanced Internet are IP multicast, IPv6, network performance measurement and advanced
service monitoring, among others.

Some of the applications of Internet2 are:

1. Public Television’s Next-Generation Interconnection Pilot


2. High-Definition Television
3. Theater-Quality Film Transmission
4. The Space Physics and Aeronomy Research Collaboration
5. The Neptune Project
6. The California Orthopedic Research Network
4.2 Public Television’s Next-Generation Interconnection Pilot

A one-way satellite system is the dominant method used to connect the present public television
network. The Public Broadcasting Service (PBS) stations at the University of Wisconsin and
Washington State University, along with the universities in the Internet2 consortium, are using the
Internet2 network to create and deploy advanced applications. Using broadband IP video
connections, the project members are testing station-to-station video quality, live HD video
streaming, video segmentation and search, server-based video-on-demand broadcast and
collaborative program editing. The goal of this application is to demonstrate how the television
production process can be streamlined, offering better viewing options to subscribers.

4.3 High-Definition Television

High Definition TV has become the norm in today's Internet-driven media transmission. The
Research Channel Consortium, based at the University of Washington, is at the forefront of
transmitting high definition video over advanced networks. The data rates range from high quality
uncompressed HD video at 1.5 Gbps, through editable studio quality HD video at 270 Mbps, down
to production quality HD video at 19.2 Mbps. Another advantage of this transmission is that
different data formats can be delivered simultaneously in a single stream, in real time, through
Internet2.
4.4 Theater-Quality Film Transmission

Internet2 brings you the entire cinema experience without your leaving the house. In collaboration
with the Nippon Telegraph and Telephone (NTT) Corporation, the University of Illinois and the
University of Southern California have transmitted real-time, theater-quality video over the
Internet2-based network. This partnership transmitted super high definition (SHD) video over the
Abilene Network to the Fall 2002 Internet2 Member Meeting. An NTT system at the UIC
Electronic Visualization Laboratory in Chicago sent SHD video to the digital arts center at USC in
California. SHD is four times the data rate of the high definition streaming used in today's
video-on-demand services and HD cable television broadcasts. The SHD stream was compressed
to 400 Mbps using an experimental video encoder, stored on the network, sent over the Abilene
network to a real-time NTT decoder, and displayed in a theater via an 8-megapixel projector to an
audience of cinema experts and technologists.
4.5 The Space Physics and Aeronomy Research Collaboration

The most important, and perhaps game-changing, application of Internet2 is the ability to
collaborate over large distances. In education this can take the form of professors or scientists who
are actively involved in the field yet can share their knowledge with students around the world
using applications deployed over the Internet2 network. One such instance of collaboration is the
University of Michigan's Space Physics and Aeronomy Research Collaboration (SPARC). There is
now no need to travel to Greenland and other remote locations to study the earth's upper
atmosphere: SPARC tools (like those shown in the figure) give scientists real-time access to their
experiments from the comfort of their labs. A consequence of this is that many students have
developed mentoring relationships with the faculty, since they now have full-time access to the
staff, research tools and data.
Figure 5 - Space Physics and Aeronomy Research Collaboration

4.6 The NEPTUNE Project

In addition to university-level collaboration, NEPTUNE is an international, multi-institutional
project that is part of a global effort to develop regional, coastal and global ocean observatories.
The project has a 3,000 km network of sea-floor fiber-optic cable across the Pacific Ocean. A
series of experimental data collection centers are set up along the cable to obtain data from the
tops of the waves to the core of the earth beneath the ocean floor. Hardwired to advanced
telecommunication and network equipment, the project will help collect real-time oceanic data
across the world. This data will be sent to classrooms and laboratories around the world so that
events can be studied as they occur, providing a better understanding of them. Because the
instruments are remotely operated and automatically recharged, this application can also be a
potential lifesaver, avoiding the dangers of working on the deep ocean floor.

The image below is a general overview of the NEPTUNE system. Using this system, students will
be able to view data for a specific location over long periods of time, or view data for a large
section of the ocean floor at the same time, which will help them get a better grasp of the ocean
floor.

Figure 6 - The NEPTUNE Project


4.7 The California Orthopedic Research Network

Internet2 can also play an active role in the health care system. There is a dedicated network in the
Internet2 consortium called the California Orthopedic Research Network (CORN), which has
expanded to include surgeons and medical students from hospitals and medical centers all over the
world. They have the opportunity to observe and learn from live surgical procedures and can
communicate with each other through high definition video conferencing. Facilities are also
provided for surgeons to operate on patients in distant locations using remote applications.

Conclusion:

Thus, with real-time collaboration, distance is no longer an inhibitor. By using advanced
broadband networks we not only do things faster, we also create and deploy entirely new business
processes. Advanced applications can strengthen existing divisions within a multi-faceted
organization and encourage the contribution of new ideas. As Internet2 becomes the norm, these
applications will become widely available to the general public and we will wonder how we ever
lived without them.
Future Trends

If the past is a guide, the Internet is likely to continue to grow at a fast and furious pace. And as it
grows, geographic location will count less and less. The “Information Superhighway” is not only
here, it is already crowded. The Internet is being divided, as are the highways of many cities, allowing
for the equivalent of HOV lanes and both local and express routes. The electronic highway now
connects schools, businesses, homes, universities, and organizations.

And it provides both researchers and business leaders with opportunities that seemed like science
fiction no more than a decade ago. Even now, some of these high-tech innovations—including
virtual reality, computer conferencing, and telemanufacturing— have already become standard
fare in some laboratories.

Tele-manufacturing allows remote researchers to move quickly from computer drawing boards to a
physical mock-up. At the San Diego Supercomputer Center (SDSC), the Laminated Object
Manufacturing (LOM) machine turns files into models using either plastic or layers of laminated
paper. The benefits are especially pronounced for molecular biologists who learn how their
molecules actually fit together, or dock. Even in a typical computer graphics depiction of the
molecules, the docking process and other significant details can get lost among the mounds of
insignificant data. SDSC’s models can better depict this type of information. They are also relevant
to the work of researchers studying plate tectonics, hurricanes, the San Diego Bay region, and
mathematical surfaces.

To make the move from the virtual to the physical, researchers use the network to send their files
to SDSC. Tele-manufacturing lead scientist Mike Bailey and his colleagues then create a list of
three-dimensional triangles that bound the surface of the object in question. With that information,
the LOM builds a model. Researchers can even watch their objects take shape. The LOMcam uses
the Web to post new pictures every forty-five seconds while a model is being produced.
“We made it incredibly easy to use so that people who wouldn’t think about manufacturing are
now manufacturing,” says Bailey. For some researchers, the whole process has become so easy
that “they think of it no differently than you do when you make a hard copy on your laser printer,”
he adds. SDSC’s remote lab has moved out of the realm of science fiction and into the area of
everyday office equipment.

While other remote applications are not as far along, their results will be dramatic once the bugs
are ironed out, according to Tom DeFanti of the University of Illinois at Chicago and his
colleagues. DeFanti and many others are manipulating the computer tools that provide multimedia,
interaction, virtual reality, and other applications. The results, he says, will move computers into
another realm. DeFanti is one of the main investigators of I-WAY, or the Information Wide Area
Year, a demonstration of computing power and networking expertise. For the 1995
Supercomputing Conference in San Diego, he and his colleagues, Rick Stevens of the Argonne
National Laboratory and Larry Smarr of the National Center for Supercomputing Applications,
linked more than a dozen of the country's fastest computer centers and visualization environments.

The computer shows were more than exercises in pretty pictures; they demonstrated new ways of
digging deeply into the available data. For example, participants in the Virtual Surgery
demonstration were able to use the National Medical Library’s Visible Man and pick up a “virtual
scalpel” to cut “virtual flesh.” At another exhibit, a researcher demonstrated tele-robotics and tele-
presence. While projecting a cyber-image of himself into the conference, the researcher worked
from a remote console and controlled a robot that interacted with conference attendees.

Applications such as these are just the beginning, says DeFanti. Eventually the Internet will make
possible a broader and more in-depth experience than is currently available. “We’re taking the
computer from the two-dimensional ‘desktop’ metaphor and turning it into a three dimensional
‘shopping mall’ model of interaction,” he says. “We want people to go into a computer and be able
to perform multiple tasks just as they do at a mall, a museum, or even a university." The future is
truly here.
Conclusion

What can one do to realize the vision of widely available highly advanced broadband? First we
have to do all we can and be an early adopter so that we can live in the future and help define
tomorrow’s Internet. Access is available at the national level through Internet2 membership and at
the local level through school and community projects. Second, the middleware technology has to
be designed and further developed. Progress has to be made particularly in the fields of security,
privacy and trust so that we can more quickly deploy advanced applications that provide value for
consumers, business, government and education. Co-operation is the key to facilitating knowledge
and technology transfer from early adopters to the broad Internet community.

Academia, industry and government developed the Internet in its initial stages. Internet2 continues
that partnership by providing a framework for individuals and organizations from different sectors
to work on advanced network technologies and advanced applications. As a result of these
collaborations universities are better able to fulfill their missions in teaching, learning, research,
clinical practice and outreach while corporations are positioned to test and deploy the next
generation of services and applications.
Acronyms:

ALLOY: A light object modeling notation.

BGP : Border Gateway Protocol

CE: Consumer Edge

CONET: Content Centric Network

DHT: Distributed Hash Table

DNS: Domain Name System

FIND: Future Internet Design

HTTP: Hyper Text Transfer Protocol

ISP : Internet Service Provider



NGI: Next Generation Internet

PE: Provider Edge

PlanetLab: a global research network test bed

OSPF: Open Shortest Path First

URL: Uniform Resource Locator


Glossary of Terms:

Alloy: a language for describing structural properties. It offers a declaration syntax compatible
with graphical object models and a set-based formula syntax for expressing constraints, and its
models can be analyzed automatically.

Concept algebra: an algebraic system in which concepts or patterns are combined according to a
defined set of combination rules.

System algebra: System algebra is an abstract mathematical structure for the formal treatment of
abstract and general systems as well as their algebraic relations, operations, and associative rules
for composing and manipulating complex systems.

DHT (Distributed Hash Table): a decentralized lookup structure that maps keys to the nodes responsible for the corresponding values.


References:

http://ruccs.rutgers.edu/~jacob/Papers/feldman_algebra.pdf

http://enel.ucalgary.ca/People/wangyx/Publications/Papers/DM/IJCINI-2202-SystemAlgebra.pdf
Acknowledgment:

We sincerely thank our professor, Dr Keyvan Moataghed, for providing us the opportunity to work
on the Next Generation Internet. The Next Generation Internet, often abbreviated as NGI, is
transforming the current Internet into a completely new Internet. It has been a pleasure working on
this project report. We also thank our professor for the help provided towards its completion and
for the flexibility of time that was given.

This report is a joint effort, which allowed us to cover the major aspects of the next generation
Internet. We also thank the Santa Clara University library for providing easy access to the
technical papers and research material that form the major sources for this project.

Thank you
List of Figures

1. Internet development spiral


2. Internet2 Network
3. vBNS Backbone Network Map
4. Space Physics and Aeronomy Research Collaboration
5. The NEPTUNE Project
6. Example of Federated Enterprise
7. End to end Performance
8. IPv6 Header
9. Authentication Header in IPv4 Packet
10. IPv6 Extension Headers Order
11. IPv6 Encapsulating Security Payload (Header and Trailer)

List of Tables

1. Comparison of IPv4 and IPv6

List of Sources

1. IPv6 Addressing Architecture in IPv4 Network (2010 Second International Conference on
Communication Software and Networks) - IPv6 header data
2. Study on Multi-dimensional Extendibility of Next-generation Internet Architecture
(2009 International Conference on Multimedia Information Networking and Security) -
middleware architecture
3. NGI and Internet2: Accelerating the Creation of Tomorrow's Internet - Tsinghua
University, Beijing 100084, China - Internet2 genesis
4. http://www.cise.nsf.gov/ncri/vbnsaup.html - vBNS data
5. http://www.pnw-gigapop.net/ - gigaPoP data
6. http://computertechnos.blogspot.com/2008/12/disadvantages-of-ipv4.html - disadvantages of IPv4
7. http://www.readwriteweb.com/archives/the_last_block_of_ipv4addressesallocated.php -
disadvantages of IPv4
8. http://ipv6.internet2.edu/ - features of IPv6
9. http://www.nsf.gov/about/history/nsf0050/pdf/internet.pdf - history of Internet2
10. http://www.nitrd.gov/pubs/bluebooks/2000/lsn.html - applications of Internet2
11. The Broadband Millennium: Communication Technologies and Markets, by Don Flournoy -
applications of Internet2

Appendix

List of organizations in Internet2.

List of Acronyms

1. AH – Authentication Header
2. CORN – California Orthopedic Research Network
3. DoS – Denial of Service
4. DHCP – Dynamic Host Configuration Protocol
5. DHCPv6 - Dynamic Host Configuration Protocol version 6
6. DES-CBC - Data Encryption Standard (DES) algorithm in the Cipher Block Chaining
(CBC) mode of operation
7. ESP – Encapsulating Security Payload
8. Gbps – Gigabit per second
9. HD – High Definition
10. HOV – High Occupancy Vehicle Lanes
11. HMAC – Hash Message Authentication Code
12. ISDN - Integrated Services Digital Network (ISDN)
13. ITU – International Telecommunications Union
14. IP-Multicast – Internet Protocol Multicast
15. IPv4 – Internet Protocol version 4
16. IPv6 – Internet Protocol version 6
17. I2-MI – Internet2 Middleware Initiative
18. IPSec – Internet Protocol Security
19. IKE - Internet Key Exchange
20. ICV – Integrity Check Value
21. I-WAY – Information Wide Area Year
22. LOM – Laminated Object Manufacturing
23. Mbps – Mega bits per second
24. MD5 – Message Digest algorithm 5
25. NTT – Nippon Telegraph and Telephone
26. NAT – Network Address Translators
27. OASIS – Organization for Advancement of Structured Information Standards
28. Open SAML – Open Security Assertion Markup Language
29. PBS – Public Broadcasting Service
30. QoS – Quality of Service
31. SPARC - Space Physics and Aeronomy Research Collaboration
32. SAML - Security Assertion Markup Language
33. SHA-1 – Secure Hash Algorithm
34. SA – Security Association
35. SDSL - Symmetric Digital Subscriber Line
36. ToS – Type of Service
37. TCP – Transmission Control Protocol
38. TTL – Time To Live
39. UIC – University of Illinois, Chicago
40. USC – University of Southern California
41. UDP – User Datagram Protocol
42. vBNS - very high performance Backbone network Service
