
A Term Paper On

Subject

Network & Telecommunication


In

Partial Requirement for the Program

MBA (EB) 3rd Sem. (2010-11)

Submitted To: Mr. Ashish Sonker

Submitted By: Rajesh Kumar, Roll No. 35

Department Of Business Administration


University Of Lucknow
ACKNOWLEDGEMENT

Every task requires a great deal of assistance and guidance from
the people concerned, and this particular term paper is no exception;
a project of this nature is surely the result of tremendous support,
guidance, encouragement and help.

I express my gratitude to Mr. Ashish Sonker, faculty member of
the Department of Business Administration, University of Lucknow, and
thank him for his constructive help and constant support in completing
this term paper.

I would also like to thank my friends and all those individuals
who gave me proper references, provided me with relevant information
on this topic, shared important web links, and helped me a great deal.

Finally, I thank my parents and God with my deepest gratitude.

Rajesh Kumar
Ethernet
Ethernet is a family of frame-based computer networking technologies for local area
networks (LANs). The name comes from the physical concept of the ether. Ethernet defines a
number of wiring and signaling standards for the Physical Layer of the OSI networking model, a
means of network access at the Media Access Control (MAC) protocol (a sub-layer of the Data
Link Layer), and a common addressing format.

Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions of
Ethernet for connecting end systems to the network, along with the fiber optic versions for site
backbones, is the most widespread wired LAN technology. It has been in use from around
1980[1] to the present, largely replacing competing LAN standards such as token ring, FDDI, and
ARCNET.

The figure shows a standard 8P8C (often called RJ45) connector, most commonly used on
Cat 5 cable, a type of cabling used primarily in Ethernet networks.

History
Ethernet was developed at Xerox PARC between 1973 and 1975. In 1975, Xerox filed a
patent application listing Robert Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as
inventors, U.S. Patent 4,063,220 "Multipoint data communication system (with collision
detection)". In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a
seminal paper.

The experimental Ethernet described in the 1976 paper ran at 3,000,000 bits per second
(3 Mbit/s) and had eight-bit destination and source address fields, so the original Ethernet
addresses were not the MAC addresses they are today. By software convention, the 16 bits after
the destination and source address fields specified a "packet type", but, as the paper says,
"different protocols use disjoint sets of packet types". Thus the original packet types could vary
within each different protocol, rather than the packet type in the current Ethernet standard which
specifies the protocol being used.

Metcalfe left Xerox in 1979 to promote the use of personal computers and local area
networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to
promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it
specified the 10 megabits/second Ethernet, with 48-bit destination and source addresses and a
global 16-bit type field. The first standard draft was first published on September 30, 1980 by the
Institute of Electrical and Electronics Engineers (IEEE). It competed with two largely proprietary
systems, Token Ring and Token Bus. Because finalization of the Ethernet "Carrier sense
multiple access with collision detection" (CSMA/CD) standard was delayed by the difficult
decision processes in the "open" IEEE and by the competing Token Ring proposal strongly
supported by IBM, support for CSMA/CD in other standardization bodies (i.e. ECMA, IEC and
ISO) was instrumental to its success. The proprietary systems soon found themselves buried
under a tidal wave of Ethernet products. In the process, 3Com became a major company; 3Com
built the first 10 Mbit/s Ethernet adapter in 1981. This was followed quickly by DEC's Unibus to
Ethernet adapter, which DEC sold and used internally to build its own corporate network,
reaching over 10,000 nodes by 1986, far and away the largest then extant computer network in
the world.

The advantage of CSMA/CD was that, unlike Token Ring and Token Bus, all nodes
could "see" each other directly. All "talkers" shared the same medium - a single coaxial cable -
however, this was also a limitation; with only one speaker at a time, packets had to be of a
minimum size to guarantee that the leading edge of the propagating wave of the message got to
all parts of the medium before the transmitter could stop transmitting, thus guaranteeing that
collisions (two or more packets initiated within a window of time which forced them to overlap)
would be discovered. Minimum packet size and the physical medium's total length were thus
closely linked.

Through the first half of the 1980s, Digital's ethernet implementation utilized a coaxial
cable about the diameter of a US nickel (5¢ coin) which became known as "thick wire ethernet"
when its successor, "thin wire ethernet" was introduced. Thin-wire ethernet was in essence a
high-quality version of the cable used on closed-circuit television of the era. The emphasis was
on making the physical routing of cable easier, less costly, and, whenever possible, on utilizing
existing wiring. The observation that there was plenty of excess capacity in unused "twisted pair"
(sometimes "twisted copper") telephone wiring already installed in commercial buildings
provided another opportunity to expand the installed base and thus twisted-pair ethernet was the
next logical development.
Twisted-pair Ethernet systems were developed in the mid-1980s, beginning with
StarLAN, and became widely known with 10BASE-T. These systems replaced the coaxial cable
on which early Ethernets were deployed with a system of hubs linked with unshielded twisted
pair (UTP), ultimately replacing the CSMA/CD scheme in favor of a switched full duplex system
offering higher performance.

Standardization
Notwithstanding its technical merits, timely standardization was instrumental to the
success of Ethernet. It required well-coordinated and partly competitive activities in several
standardization bodies such as the IEEE, ECMA, IEC, and finally ISO.

In February 1980 IEEE started a project, IEEE 802 for the standardization of Local Area
Networks (LAN).

The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel) and Bob Printis (Xerox)
submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN
specification. Since IEEE membership is open to all professionals including students, the group
received countless comments on this brand-new technology.

In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and
henceforward supported by General Motors) were also considered as candidates for a LAN
standard. Due to the goal of IEEE 802 to forward only one standard and due to the strong
company support for all three designs, the necessary agreement on a LAN standard was
significantly delayed.

For the Ethernet camp, the delay put at risk the market introduction of the Xerox Star workstation
and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle
(General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal
of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office
communication market, including Siemens' support for the international standardization of
Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved
broader support for Ethernet beyond IEEE by the establishment of a competing Task Group
"Local Networks" within the European standards body ECMA TC24. As early as March 1982
ECMA TC24 with its corporate members reached agreement on a standard for CSMA/CD based
on the IEEE 802 draft. The speedy action taken by ECMA decisively contributed to the
conciliation of opinions within IEEE and approval of IEEE 802.3 CSMA/CD by the end of 1982.

Approval of Ethernet on the international level was achieved by a similar, cross-partisan
action, with Fromm as liaison officer working to integrate IEC TC83 and ISO TC97SC6, and the
ISO/IEEE 802/3 standard was approved in 1984.
General description

The figure shows a 1990s network interface card. This combination card supports both
coaxial-based 10BASE2 (BNC connector, left) and twisted pair-based 10BASE-T (8P8C
modular connector, right).

Ethernet was originally based on the idea of computers communicating over a shared
coaxial cable acting as a broadcast transmission medium. The methods used show some
similarities to radio systems, although there are fundamental differences, such as the fact that it is
much easier to detect collisions in a cable broadcast system than a radio broadcast. The common
cable providing the communication channel was likened to the ether and it was from this
reference that the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex
networking technology that today underlies most LANs. The coaxial cable was replaced with
point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs,
increase reliability, and enable point-to-point management and troubleshooting. StarLAN was
the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair
network. The advent of twisted-pair wiring dramatically lowered installation costs relative to
competing technologies, including the older Ethernet technologies.

Above the physical layer, Ethernet stations communicate by sending each other data
packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs,
each Ethernet station is given a single 48-bit MAC address, which is used to specify both the
destination and the source of each data packet. Network interface cards (NICs) or chips normally
do not accept packets addressed to other Ethernet stations. Adapters generally come programmed
with a globally unique address, but this can be overridden, either to avoid an address change
when an adapter is replaced, or to use locally administered addresses.
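
The two flag bits in the first octet of a 48-bit MAC address can be inspected in software: the
lowest-order bit marks multicast (group) addresses, and the next bit marks locally administered
addresses. The following Python fragment is purely illustrative and not part of any standard API:

def describe_mac(mac: str) -> dict:
    """Decode the two flag bits in the first octet of a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    if len(octets) != 6:
        raise ValueError("a MAC address is six octets")
    first = octets[0]
    return {
        "multicast": bool(first & 0x01),             # I/G bit: 1 = group (multicast)
        "locally_administered": bool(first & 0x02),  # U/L bit: 1 = locally administered
    }

# A factory-assigned, globally unique unicast address:
print(describe_mac("00:1A:2B:3C:4D:5E"))  # both flags False
# A locally administered address, as set by an operator:
print(describe_mac("02:00:00:00:00:01"))  # locally_administered is True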

Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10
Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet
(excluding early experimental versions) share the same frame formats (and hence the same
interface for higher layers), and can be readily interconnected.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to
support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now
build the functionality of an Ethernet card directly into PC motherboards, eliminating the need
for installation of a separate network card.

Dealing with multiple clients

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a
building or campus to every attached machine. A scheme known as carrier sense multiple access
with collision detection (CSMA/CD) governed the way the computers shared the channel. This
scheme was simpler than the competing token ring or token bus technologies. When a computer
wanted to send some information, it used the following algorithm:

Main procedure

1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes ready and wait the interframe gap period (9.6 µs
in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure

1. Continue transmission until minimum packet time is reached (jam signal) to ensure that all
receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.

This can be likened to what happens at a dinner party, where all the guests talk to each other
through a common medium (the air). Before speaking, each guest politely waits for the current
speaker to finish. If two guests start speaking at the same time, both stop and wait for short,
random periods of time (in Ethernet, this time is generally measured in microseconds). The hope
is that by each choosing a random period of time, both guests will not choose the same time to
try to speak again, thus avoiding another collision. Exponentially increasing back-off times
(determined using the truncated binary exponential backoff algorithm) are used when there is
more than one failed attempt to transmit.
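
The main procedure and collision detected procedure above, combined with truncated binary
exponential backoff, can be sketched in a few lines of Python. This is a simplified illustration
rather than real driver code; the medium object and its methods (is_idle, transmit,
collision_detected, send_jam_signal, wait) are hypothetical stand-ins for the hardware:

import random

SLOT_TIME_US = 51.2      # one slot time at 10 Mbit/s (512 bit times)
INTERFRAME_GAP_US = 9.6  # interframe gap at 10 Mbit/s
MAX_ATTEMPTS = 16        # transmission is aborted after too many collisions
BACKOFF_LIMIT = 10       # truncation point of the backoff exponent

def csma_cd_send(frame, medium):
    """Main procedure: send one frame, falling into the collision
    detected procedure and backing off as required."""
    attempts = 0
    while attempts < MAX_ATTEMPTS:
        while not medium.is_idle():        # 2. wait for the medium to become ready,
            pass
        medium.wait(INTERFRAME_GAP_US)     #    then wait the interframe gap period
        medium.transmit(frame)             # 3. start transmitting
        if not medium.collision_detected():
            return True                    # 5. success: reset counters and finish
        medium.send_jam_signal()           # collision: jam so all receivers see it
        attempts += 1                      # increment the retransmission counter
        k = min(attempts, BACKOFF_LIMIT)   # truncated binary exponential backoff:
        slots = random.randint(0, 2 ** k - 1)
        medium.wait(slots * SLOT_TIME_US)  # wait a random number of slot times
    return False                           # too many attempts: abort transmission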

Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in
turn connected to the cable (later with thin Ethernet the transceiver was integrated into the
network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not
reliable for large extended networks, where damage to the wire in a single place, or a single bad
connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone
to very strange failure modes when an electrical discontinuity reflects the signal in such a
manner that some nodes would work properly while others work slowly because of excessive
retries or not at all (see standing wave for an explanation of why); these could be much more
painful to diagnose than a complete failure of the segment. Debugging such failures often
involved several people crawling around wiggling connectors while others watched the displays
of computers running a ping command and shouted out reports as performance changed.

Since all communications happen on the same wire, any information sent by one computer is
received by all, even if that information is intended for just one destination. The network
interface card interrupts the CPU only when applicable packets are received: the card ignores
information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all
listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet
network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means
that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the
network and nodes restart after a power failure.

Repeaters and hubs

For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size
which depended on the medium used. For example, 10BASE5 coax cables had a maximum
length of 500 meters (1,640 ft). Also, as was the case with most other high-speed buses, Ethernet
segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each
end of the cable had a 50 ohm (Ω) resistor attached. Typically this resistor was built into a male
BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to
the end of the cable just past the last device. If termination was not done, or if there was a break
in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the
end. This reflected signal was indistinguishable from a collision, and so no communication
would be able to take place.

A greater length could be obtained by an Ethernet repeater, which took the signal from
one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater
transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to
connect segments such that there were up to five Ethernet segments between any two hosts, three
of which could have attached devices. Repeaters could detect an improperly terminated link from
the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of
cable breakages: when an Ethernet coax segment broke, while all devices on that segment were
unable to communicate, repeaters allowed the other segments to continue working - although
depending on which segment was broken and the layout of the network the partitioning that
resulted may have made other segments unable to reach important servers and thus effectively
useless.

People recognized the advantages of cabling in a star topology, primarily that only faults
at the star point will result in a badly partitioned network, and network vendors began creating
repeaters having multiple ports, thus reducing the number of repeaters required at the star point.
Multiport Ethernet repeaters became known as "Ethernet hubs". Network vendors such as DEC
and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also
"multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax
backbone. A well-known early example was DEC's DELNI. These devices allowed multiple
hosts with AUI connections to share a single transceiver. They also allowed creation of a small
standalone Ethernet segment without using a coaxial cable.

Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and
continuing with 10BASE-T, was designed for point-to-point links only, and all termination was
built into the device. This changed hubs from a specialist device used at the center of large
built into the device. This changed hubs from a specialist device used at the center of large
networks to a device that every twisted pair-based network with more than two machines had to
use. The tree structure that resulted from this made Ethernet networks more reliable by
preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from
affecting other devices on the network, although a failure of a hub or an inter-hub link could still
affect lots of users. Also, since twisted pair Ethernet is point-to-point and terminated inside the
hardware, the total empty panel space required around a port is much reduced, making it easier to
design hubs with lots of ports and to integrate Ethernet onto computer motherboards.

Despite the physical star topology, hubbed Ethernet networks still use half-duplex and
CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal,
in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and
security problems aren't addressed. The total throughput of the hub is limited to that of a single
link and all links must operate at the same speed.

Collisions reduce throughput by their very nature. In the worst case, when there are lots
of hosts with long cables that attempt to transmit many short frames, excessive collisions can
reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of
having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the
same Ethernet segment. The results showed that, even for the smallest Ethernet frames (64B),
90% throughput on the LAN was the norm. This is in comparison with token passing LANs
(token ring, token bus), all of which suffer throughput degradation as each new node comes into
the LAN, due to token waits.

This report was controversial, as modeling showed that collision-based networks became
unstable under loads as low as 40% of nominal capacity. Many early researchers failed to
understand the subtleties of the CSMA/CD protocol and how important it was to get the details
right, and were really modeling somewhat different networks (usually not as good as real
Ethernet).

Bridging and switching

While repeaters could isolate some aspects of Ethernet segments, such as cable
breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on
how many machines could communicate on an Ethernet network. Also as the entire network was
one collision domain and all hosts had to be able to detect collisions anywhere on the network,
the number of repeaters between the farthest nodes was limited. Finally segments joined by
repeaters had to all operate at the same speed, making phased-in upgrades impossible.

To alleviate these problems, bridging was created to communicate at the data link layer
while isolating the physical layer. With bridging, only well-formed Ethernet packets are
forwarded from one Ethernet segment to another; collisions and packet errors are isolated.
Bridges learn where devices are, by watching MAC addresses, and do not forward packets across
segments when they know the destination address is not located in that direction.

Prior to discovery of network devices on the different segments, Ethernet bridges (and
switches) work somewhat like Ethernet hubs, passing all traffic between segments. However, as
the bridge discovers the addresses associated with each port, it only forwards network traffic to
the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all
network segments. Bridges also overcame the limits on total segments between two hosts and
allowed the mixing of speeds, both of which became very important with the introduction of Fast
Ethernet.
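
This learning-and-forwarding behaviour can be captured in a short sketch. The Python below is
purely illustrative; the frame and port objects are hypothetical stand-ins:

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningBridge:
    """Forward frames between segments, learning which port each
    source MAC address lives behind."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}              # MAC address -> port it was last seen on

    def handle_frame(self, frame, in_port):
        # Learn: the sender is reachable through the port the frame arrived on.
        self.mac_table[frame.src_mac] = in_port
        out_port = self.mac_table.get(frame.dst_mac)
        if frame.dst_mac == BROADCAST or out_port is None:
            # Broadcast or not-yet-learned destination: flood to all other ports.
            for port in self.ports:
                if port is not in_port:
                    port.send(frame)
        elif out_port is not in_port:
            # Known destination on another segment: forward only there.
            out_port.send(frame)
        # If the destination is on the arrival segment, the bridge stays silent.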

Early bridges examined each packet one by one using software on a CPU, and some of
them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially
when handling many ports at the same time. This was in part because the entire Ethernet packet
would be read into a buffer, the destination address compared with an internal table of known
MAC addresses, and a decision made as to whether to drop the packet or forward it to another or
all segments.
In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet
switch. This worked somewhat differently from an Ethernet bridge, in that only the header of the
incoming packet would be examined before it was either dropped or forwarded to another
segment. This greatly reduced the forwarding latency and the processing load on the network
device. One drawback of this cut-through switching method was that packets that had been
corrupted at a point beyond the header could still be propagated through the network, so a
jabbering station could continue to disrupt the entire network. The remedy for this was to make
available store-and-forward switching, where the packet would be read into a buffer on the
switch in its entirety, verified against its checksum and then forwarded. This was essentially a
return to the original approach of bridging, but with the advantage of more powerful, application-
specific processors being used. Hence the bridging is then done in hardware, allowing packets to
be forwarded at full wire speed. It is important to remember that the term switch was invented by
device manufacturers and does not appear in the 802.3 standard.
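
The contrast between cut-through and store-and-forward switching can be made concrete with a
small sketch. The Python below is illustrative only; it assumes the frame is available as raw
bytes (without preamble or interframe gap) and uses zlib.crc32 as a stand-in for the Ethernet
frame check sequence, glossing over bit- and byte-order details:

import zlib

def cut_through(frame: bytes, lookup):
    """Forward as soon as the 6-octet destination address has been read.
    Latency is minimal, but a frame corrupted beyond the header is
    propagated anyway."""
    out_port = lookup(frame[0:6])        # destination MAC: first six octets
    out_port.send(frame)

def store_and_forward(frame: bytes, lookup):
    """Buffer the whole frame and verify its checksum before forwarding,
    so corrupted frames are dropped at the switch."""
    body, received_fcs = frame[:-4], frame[-4:]
    computed = zlib.crc32(body).to_bytes(4, "little")
    if computed != received_fcs:
        return                           # bad frame check sequence: drop
    out_port = lookup(body[0:6])
    out_port.send(frame)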

Since packets are typically only delivered to the port they are intended for, traffic on a
switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched
Ethernet should still be regarded as an insecure network technology, because it is easy to subvert
switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth
advantages, the slightly better isolation of devices from each other, the ability to easily mix
different speeds of devices and the elimination of the chaining limits inherent in non-switched
Ethernet have made switched Ethernet the dominant network technology.

When a twisted pair or fiber link segment is used and neither end is connected to a hub,
full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can
transmit and receive to/from each other at the same time, and there is no collision domain. This
doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed
(e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only
double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the
collision domain also means that all the link's bandwidth can be used and that segment length is
not limited by the need for correct collision detection (this is most significant with some of the
fiber variants of Ethernet).

Dual speed hubs

In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices.
Hubs suffered from the problem that if there were any 10BASE-T devices connected then the
whole network needed to run at 10 Mbit/s. Therefore a compromise between a hub and a switch
was developed, known as a dual speed hub. These devices consisted of an internal two-port
switch, dividing the 10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments. The device
would typically consist of more than two physical ports. When a network device becomes active
on any of the physical ports, the device attaches it to either the 10BASE-T segment or the
100BASE-T segment, as appropriate. This prevented the need for an all-or-nothing migration
from 10BASE-T to 100BASE-T networks. These devices are hubs because the traffic between
devices connected at the same speed is not switched.

More advanced networks

Simple switched Ethernet networks, while an improvement over hub based Ethernet, suffer
from a number of issues:

• They suffer from single points of failure. If any link fails, some devices will be unable to
communicate with other devices, and if the link that fails is in a central location, many users
can be cut off from the resources they require.
• It is possible to trick switches or hosts into sending data to your machine even if it's not
intended for it (see switch vulnerabilities).
• Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of
network size can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic forming a denial of
service attack against any hosts that run at the same or lower speed as the attacking device.
o As the network grows, normal broadcast traffic takes up an ever greater amount of
bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated like broadcast
traffic due to being directed at a MAC with no associated port.
o If switches discover more MAC addresses than they can store (either through network size
or through an attack) some addresses must inevitably be dropped and traffic to those
addresses will be treated the same way as traffic to unknown addresses, that is essentially
the same as broadcast traffic (this issue is known as failopen).
• They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some switches offer a variety of tools to combat these issues including:

• Spanning-tree protocol to maintain the active links of the network as a tree while allowing
physical loops for redundancy.
• Various port protection features, as it is far more likely an attacker will be on an end system
port than on a switch-switch link.
• VLANs to keep different classes of users separate while using the same physical
infrastructure.
• Fast routing at higher levels to route between those VLANs.
• Link aggregation to add bandwidth to overloaded links and to provide some measure of
redundancy, although the links won't protect against switch failure because they connect the
same pair of switches.
Autonegotiation and duplex mismatch
Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex,
100BASE-TX half duplex, …) exist for Ethernet over twisted pair cable using 8P8C modular
connectors (not to be confused with FCC's RJ45), and most devices are capable of several
modes of operation. In 1995, the IEEE 802.3u standard (100BASE-TX) was released, allowing
two network interfaces connected to each other to autonegotiate the best possible shared mode of
operation. This works well for a network in which every device is set to autonegotiate.

The autonegotiation standard contained a mechanism for detecting the speed but not the
duplex setting of an Ethernet peer that did not use autonegotiation. An autonegotiating device
defaults to half duplex, when the remote does not negotiate, as the remote peer is assumed to be a
hub (which always has autonegotiation disabled and supports only half duplex mode). If the
remote is operating in half duplex mode, this works; but if the remote is in full duplex mode, this
generates a duplex mismatch. When two interfaces are connected and set to different "duplex"
modes, the effect of the duplex mismatch is a network that works, but is much slower than its
nominal speed, and generates more collisions. The primary rule for avoiding this is to never set
one end of a connection to a forced full duplex setting and the other end to autonegotiation.

Interoperability problems led some network administrators to manually fix the mode of
operation of interfaces on network devices: typically, some device would fail to autonegotiate
and therefore had to be forced to one setting or another. This often led to duplex
setting mismatches. In particular, when two interfaces are connected to each other with one set to
autonegotiation and one set to full duplex mode, a duplex mismatch results because the
autonegotiation process fails and half duplex is assumed. The interface in full duplex mode then
transmits at the same time as receiving, and the interface in half duplex mode then gives up on
transmitting a frame. The interface in half duplex mode is not ready to receive a frame, so it
signals a collision, and transmissions are halted, for amounts of time based on backoff (random
wait times) algorithms. When both interfaces try to transmit again, they interfere again and
the backoff strategy may result in a longer and longer wait time before attempting to transmit
again; eventually a transmission succeeds but this then causes the flood and collisions to resume.

Because of the wait times, the effect of a duplex mismatch is a network that is not
completely 'broken' but is incredibly slow. This behaviour can be tolerated on a low-traffic
link, but becomes dramatic under heavy transfers and can lead to a complete stop of
the traffic.

While autonegotiation is not required for 10/100 Mbit/s, it is recommended as the default
behaviour by IEEE 802.3u. Moreover, 1000BASE-T devices require autonegotiation to be active
in order to elect the clock master (the source of timing). Enabling autonegotiation on every node
eases the transition from 10/100 Mbit/s to 1000BASE-T switches and LANs.

There are no disadvantages to keeping autonegotiation active on all devices, because the
complete physical link behaviour is controlled through autonegotiation (speed, duplex, clock
master and flow control). For example, to force a single-speed link you can keep negotiation on
but negotiate only one speed. The old practice of switching autonegotiation off is therefore
deprecated everywhere, on both switches and LAN cards.
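
On a Linux host, the negotiated speed and duplex of an interface can be read from sysfs, which
makes a quick sanity check for suspected duplex mismatches easy to script. This is a minimal
sketch assuming the standard /sys/class/net attributes; the interface name is only an example:

from pathlib import Path

def link_status(iface: str) -> tuple[str, str]:
    """Read the negotiated speed (Mbit/s) and duplex mode of an
    interface from Linux sysfs."""
    base = Path("/sys/class/net") / iface
    speed = (base / "speed").read_text().strip()    # e.g. "100" or "1000"
    duplex = (base / "duplex").read_text().strip()  # "half" or "full"
    return speed, duplex

speed, duplex = link_status("eth0")   # "eth0" is an example interface name
if duplex == "half":
    # On a modern switched port, half duplex is often the footprint of a
    # mismatch: one end forced to full duplex, the other autonegotiating.
    print(f"warning: {speed} Mbit/s half duplex - possible duplex mismatch")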

Physical layer
The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a
shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with
BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T
used twisted pair connected to Ethernet hubs with 8P8C (RJ45) modular connectors.

Currently Ethernet has many varieties that vary both in speed and physical medium used.
The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three
utilize Category 5 cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and
1 Gbit/s, respectively.

Fiber optic variants of Ethernet are commonly used in structured cabling applications.
These variants have also seen substantial penetration in enterprise datacenter applications, but
are rarely seen connected to end user systems for cost/convenience reasons. Their advantages lie
in performance, electrical isolation and distance (up to tens of kilometers with some versions). 10
gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with
development starting on 40 Gbit/s and 100 Gbit/s Ethernet. Metcalfe now believes commercial
applications using terabit Ethernet may occur by 2015 though he says existing Ethernet standards
may have to be overthrown to reach terabit Ethernet.

A data packet on the wire is called a frame. A frame viewed on the actual physical wire
would show Preamble and Start Frame Delimiter, in addition to the other data. These are
required by all physical hardware. They are not displayed by packet sniffing software because
these bits are removed by the Ethernet adapter before being passed on to the host.

The table below shows the complete Ethernet frame, as transmitted, for the MTU of 1500
bytes (some implementations of gigabit Ethernet and higher speeds support larger jumbo
frames):

Preamble: 7 octets of 10101010
Start of frame delimiter: 1 octet, 10101011
MAC destination: 6 octets
MAC source: 6 octets
802.1Q tag (optional): 4 octets
EtherType or length: 2 octets
Payload: 46 to 1500 octets
Frame check sequence (32-bit CRC): 4 octets
Interframe gap: 12 octets

Note that the bit patterns in the preamble and start of frame delimiter are written as bit
strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are
transmitted least significant bit first). This notation matches the one used in the IEEE 802.3
standard. One octet is eight bits of data (i.e., a byte on most modern computers).

10/100M transceiver chips (MII PHY) work with four bits (one nibble) at a time.
Therefore the preamble will be 7 instances of 0101 + 0101, and the Start Frame Delimiter will be
0101 + 1101. 8-bit values are sent low 4-bit and then high 4-bit. 1000M transceiver chips (GMII)
work with 8 bits at a time, and 10 Gbit/s (XGMII) PHY works with 32 bits at a time.

After a frame has been sent, transmitters are required to transmit 12 octets of idle
characters before transmitting the next frame.

From this table, we may calculate the efficiency and net bit rate for Ethernet:

Efficiency = payload size / (payload size + overhead)

Maximum efficiency is achieved with the largest allowed payload size; it is 1500/1538 = 97.53%
for untagged packets and 1500/1542 = 97.28% when 802.1Q is used.

Net bit rate may be calculated from efficiency:

Net bit rate = efficiency × raw line bit rate

The maximum net bit rate for 100BASE-TX Ethernet without 802.1Q is therefore 97.53 Mbit/s.
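
These figures follow directly from the per-frame overhead listed above, and the arithmetic can
be checked with a few lines of Python (a worked illustration, not part of the standard):

# Per-frame overhead, in octets, as transmitted on the wire.
PREAMBLE, SFD, MAC_HEADER, FCS, INTERFRAME_GAP = 7, 1, 14, 4, 12
OVERHEAD = PREAMBLE + SFD + MAC_HEADER + FCS + INTERFRAME_GAP  # 38 octets
Q_TAG = 4  # the optional 802.1Q tag adds four more

def efficiency(payload: int, tagged: bool = False) -> float:
    """Share of the wire actually carrying payload for one frame."""
    on_wire = OVERHEAD + payload + (Q_TAG if tagged else 0)
    return payload / on_wire

print(f"{efficiency(1500):.2%}")               # 97.53%: untagged, maximum payload
print(f"{efficiency(1500, tagged=True):.2%}")  # 97.28%: with an 802.1Q tag
# Net bit rate = efficiency x raw bit rate; for 100BASE-TX:
print(f"{100 * efficiency(1500):.2f} Mbit/s")  # 97.53 Mbit/s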

Ethernet frame types and the EtherType field


There are several types of Ethernet frames:

• The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC,
Intel, and Xerox); this is the most common today, as it is often used directly by the Internet
Protocol.
• Novell's non-standard variation of IEEE 802.3 ("raw 802.3 frame") without an IEEE 802.2
LLC header.
• IEEE 802.2 LLC frame
• IEEE 802.2 LLC/SNAP frame

In addition, all four Ethernet frame types may optionally contain an IEEE 802.1Q tag to
identify what VLAN a frame belongs to and its IEEE 802.1p priority (quality of service). This
encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame
size by 4 bytes, to 1522 bytes.

The different frame types have different formats and MTU values, but can coexist on the
same physical medium.
The most common Ethernet Frame format, type II

Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification have a 16-bit
sub-protocol label field called the EtherType. The new IEEE 802.3 Ethernet specification
replaced that with a 16-bit length field, with the MAC header followed by an IEEE 802.2 logical
link control (LLC) header. The maximum length of a frame was 1518 bytes for untagged (1522
for 802.1p or 802.1Q tagged) classical Ethernet v2 and IEEE 802.3 frames. The two formats were
eventually unified by the convention that values of that field of 1500 and below indicate the
use of the new 802.3 Ethernet format with a length field, while values of 1536 decimal (0600
hexadecimal) and greater indicate the use of the original DIX or Ethernet II frame format with
an EtherType sub-protocol identifier.[10] This convention allows software to determine whether a
frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards
on the same physical medium. See also Jumbo Frames.
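
Applied mechanically, the convention amounts to reading the 16-bit field after the source
address and comparing it with 1536 (0x0600); the first payload octets then separate the LLC,
LLC/SNAP and Novell "raw" variants discussed below. The following Python sketch is
illustrative only and assumes the frame is available as raw bytes without the preamble:

import struct

def classify_frame(frame: bytes) -> str:
    """Classify a raw frame (no preamble/SFD) by the 16-bit field that
    follows the two 6-octet MAC addresses."""
    (field,) = struct.unpack_from("!H", frame, 12)   # network byte order
    if field >= 0x0600:        # 1536 and above: an EtherType
        return f"Ethernet II (DIX), EtherType 0x{field:04X}"
    # 1500 and below: an IEEE 802.3 length field; inspect what follows it.
    if frame[14:16] == b"\xff\xff":
        return "Novell raw IEEE 802.3 (IPX, no LLC header)"
    if frame[14:16] == b"\xaa\xaa":
        return "IEEE 802.3 with LLC/SNAP header"
    return "IEEE 802.3 with LLC header"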

By examining the 802.2 LLC header, it is possible to determine whether it is followed by
a SNAP (subnetwork access protocol) header. Some protocols, particularly those designed for
the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram
and connection-oriented network services. The LLC header includes two additional eight-bit
address fields, called service access points or SAPs in OSI terminology; when both source and
destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header
allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private
protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly
allow the 16-bit field after the MAC addresses to be used as either a length field or a type
field.

Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used this
as a starting point to create the first implementation of its own IPX Network Protocol over
Ethernet. They did not use any LLC header but started the IPX packet directly after the length
field. This does not conform to the IEEE 802.3 standard, but since IPX always has FF FF in the
first two bytes (while in IEEE 802.2 LLC that pattern is theoretically possible but extremely
unlikely), in practice this mostly coexists on the wire with other Ethernet implementations, with
the notable exception of some early forms of DECnet which got confused by this.

Novell NetWare used this frame type by default until the mid-nineties, and since NetWare
was very widespread back then, while IP was not, at some point in time most of the world's
Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to
IEEE 802.2 with LLC (NetWare Frame Type Ethernet_802.2) when using IPX. (See "Ethernet
Framing" in References for details.)

Mac OS uses 802.2/SNAP framing for the AppleTalk V2 protocol suite on Ethernet
("EtherTalk") and Ethernet II framing for TCP/IP.

The 802.2 variants of Ethernet are not in widespread use on common networks currently,
with the exception of large corporate NetWare installations that have not yet migrated to NetWare
over IP. In the past, many corporate networks supported 802.2 Ethernet to support transparent
translating bridges between Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most
common framing type used today is Ethernet Version 2, as it is used by most Internet Protocol-
based networks, with its EtherType set to 0x0800 for IPv4 and 0x86DD for IPv6.

There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2
frames with LLC/SNAP headers.[11] It is almost never implemented on Ethernet (although it is
used on FDDI and on token ring, IEEE 802.11, and other IEEE 802 networks). IP traffic cannot
be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC
protocol type for IP, there is no LLC protocol type for ARP. IP Version 6 can also be transmitted
over Ethernet using IEEE 802.2 with LLC/SNAP, but, again, that's almost never used (although
LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).

The IEEE 802.1Q tag, if present, is placed between the Source Address and the
EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID)
value of 0x8100. This is located in the same place as the EtherType/Length field in untagged
frames, so an EtherType value of 0x8100 means the frame is tagged, and the true
EtherType/Length is located after the Q-tag. The TPID is followed by two bytes containing the
Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The
Q-tag is followed by the rest of the frame, using one of the types described above.
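
This layout translates directly into a parsing routine. The Python below is an illustrative
sketch, again assuming raw frame bytes without the preamble:

import struct

def strip_q_tag(frame: bytes):
    """Return (vlan_id, priority, ethertype_or_length, payload_offset);
    vlan_id and priority are None for an untagged frame."""
    (tpid,) = struct.unpack_from("!H", frame, 12)  # field after the source MAC
    if tpid != 0x8100:
        return None, None, tpid, 14                # untagged frame
    (tci,) = struct.unpack_from("!H", frame, 14)   # Tag Control Information
    priority = tci >> 13                           # top 3 bits: 802.1p priority
    vlan_id = tci & 0x0FFF                         # low 12 bits: VLAN id
    (etype,) = struct.unpack_from("!H", frame, 16) # true EtherType/length
    return vlan_id, priority, etype, 18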

Runt frames

A runt frame is an Ethernet frame that is less than the IEEE 802.3 minimum length of 64
bytes. Possible causes are collision, underruns, bad network card or software.[12][13]

Varieties of Ethernet
Early varieties

• 10BASE5: the original standard, using a single coaxial cable into which a connection is
literally tapped by drilling into the cable to connect to the core and screen. Largely obsolete,
though due to its widespread deployment in the early days, some systems may still be in use.
It was also known as Thick Ethernet.
• 10BROAD36: Obsolete. An early standard supporting Ethernet over longer distances. It
utilized broadband modulation techniques, similar to those employed in cable modem
systems, and operated over coaxial cable.
• 10BASE2 (also called ThinNet or Cheapernet): 50 Ω coaxial cable connects machines
together, each machine using a T-adaptor to connect to its NIC. Requires terminators at each
end. For many years this was the dominant 10 Mbit/s Ethernet standard.
• 1BASE5: AKA StarLAN, it operated at 1 Mbit/s over twisted pair to an active hub. Although
a commercial failure, 1BASE5 defined the architecture for all subsequent Ethernet evolution.

10 Mbit/s Ethernet

• 10BASE-T: runs over four wires (two twisted pairs) on a Category 3 or Category 5 cable. An
active hub or switch sits in the middle and has a port for each node. This is also the
configuration used for 100BASE-T and gigabit Ethernet.
• FOIRL: Fiber-optic inter-repeater link. The original standard for Ethernet over fibre.
• 10BASE-F: A generic term for the new family of 10 Mbit/s Ethernet standards: 10BASE-FL,
10BASE-FB and 10BASE-FP. Of these only 10BASE-FL is in widespread use.
o 10BASE-FL: An updated version of the FOIRL standard.
o 10BASE-FB: Intended for backbones connecting a number of hubs or switches, it is now
obsolete.

Fast Ethernet

• 100BASE-T: A term for any of the three standards for 100 Mbit/s Ethernet over twisted pair
cable. Includes 100BASE-TX, 100BASE-T4 and 100BASE-T2. As of 2009, 100BASE-TX
has totally dominated the market, and is often considered synonymous with 100BASE-T in
informal usage.
o 100BASE-TX: 100 Mbit/s Ethernet over Category 5 cable (using two out of four pairs).
Similar star-shaped configuration to 10BASE-T.
o 100BASE-T4: 100 Mbit/s Ethernet over Category 3 cable (as used for 10BASE-T
installations). Uses all four pairs in the cable, and is limited to half-duplex. Now obsolete,
as Category 5 cables are the norm.
o 100BASE-T2: 100 Mbit/s Ethernet over Category 3 cable. Uses only two pairs, and
supports full-duplex. It is functionally equivalent to 100BASE-TX, but supports old
cable. No products supporting this standard were ever manufactured.
• 100BASE-FX: 100 Mbit/s Ethernet over fiber.
Gigabit Ethernet

• 1000BASE-T: 1 Gbit/s over unshielded twisted pair copper cabling (at least Category 5
cable, with Category 5e strongly recommended).
• 1000BASE-SX: 1 Gbit/s over short range multi-mode fiber.
• 1000BASE-LX: 1 Gbit/s over long range single-mode fiber.
• 1000BASE-CX: A short-haul solution (up to 25 m) for running 1 Gbit/s Ethernet over special
copper cable. Predates 1000BASE-T, and now obsolete.

10-gigabit Ethernet

The 10 gigabit Ethernet family of standards encompasses media types for single-mode fibre
(long haul), multi-mode fibre (up to 300 m), copper backplane (up to 1 m) and copper twisted
pair (up to 100 m). It was first standardised as IEEE Std 802.3ae-2002, but is now included in
IEEE Std 802.3-2008.

• 10GBASE-SR: designed to support short distances over deployed multi-mode fiber cabling,
it has a range of between 26 m and 82 m depending on cable type. It also supports 300 m
operation over a new 2000 MHz·km multi-mode fiber.
• 10GBASE-LX4: uses wavelength division multiplexing to support ranges of between 240 m
and 300 m over deployed multi-mode cabling. Also supports 10 km over single-mode fiber.
• 10GBASE-LR and 10GBASE-ER: these standards support 10 km and 40 km respectively
over single-mode fiber.
• 10GBASE-SW, 10GBASE-LW and 10GBASE-EW. These varieties use the WAN PHY,
designed to interoperate with OC-192 / STM-64 SONET/SDH equipment. They correspond
at the physical layer to 10GBASE-SR, 10GBASE-LR and 10GBASE-ER respectively, and
hence use the same types of fiber and support the same distances. (There is no WAN PHY
standard corresponding to 10GBASE-LX4.)
• 10GBASE-T: designed to support copper twisted pair; specified by IEEE Std 802.3an-2006,
which has been incorporated into IEEE Std 802.3-2008.

As of 2009, 10 gigabit Ethernet is predominantly deployed in carrier networks, where
10GBASE-LR and 10GBASE-ER enjoy significant market shares.

40 Gigabit Ethernet and 100 Gigabit Ethernet

As of 2009, 40 Gigabit Ethernet and 100 Gigabit Ethernet (100GbE) standards are still in draft
status.
Related standards
• Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the
Ethernet frame format, and are capable of interoperating with it.
o LattisNet—A SynOptics pre-standard twisted-pair 10 Mbit/s variant.
o 100BaseVG—An early contender for 100 Mbit/s Ethernet. It runs over Category 3
cabling. Uses four pairs. Commercial failure.
o TIA 100BASE-SX—Promoted by the Telecommunications Industry Association.
100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is
incompatible with the official 100BASE-FX standard. Its main feature is interoperability
with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s
operation – a feature lacking in the official standards due to the use of differing LED
wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.
o TIA 1000BASE-TX—Promoted by the Telecommunications Industry Association, it was
a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than
the official 1000BASE-T standard so the electronics can be cheaper, but requires
Category 6 cabling.
o G.hn—A standard developed by ITU-T and promoted by HomeGrid Forum for high-
speed (up to 1 Gbit/s) local area networks over existing home wiring (coaxial cables,
power lines and phone lines). G.hn defines an Application Protocol Convergence (APC)
layer that accepts Ethernet frames and encapsulates them into G.hn MSDUs.

• Networking standards that do not use the Ethernet frame format but can still be connected to
Ethernet using MAC-based bridging.
o 802.11—A standard for wireless local area networks (LANs), often paired with an
Ethernet backbone.
o 802.16—A standard for wireless metropolitan area networks (MANs), including WiMAX
• 10BaseS—Ethernet over VDSL
• Long Reach Ethernet
• Avionics Full-Duplex Switched Ethernet
• TTEthernet — Time-Triggered Ethernet for design of mixed-criticality embedded systems
• Metro Ethernet

It has been observed that Ethernet traffic has self-similar properties, with important
consequences for traffic engineering.
