

EDITORIAL GUIDE
The Expanding World of Data Centers
Data center requirements have grown increasingly complex, requiring new technology that operates at higher speeds. Whether it's servers, storage, or the networks that connect them, data center elements are evolving at a rapid pace, as these articles illustrate.
Six considerations for optimizing the cloud network
Effective troubleshooting of FCoE and iSCSI at 40G
Reducing the cost of broadband by lowering data-center footprint
Originally published May 1, 2013
Six considerations for optimizing the cloud network
Network operators likely will find that traditional approaches won't keep pace with emerging cloud-based services requirements.
By DR. ROBERT KEYS
BY 2016 NEARLY two-thirds of data-center traffic will be cloud traffic. This expansion of cloud services will derive from both consumer and enterprise demands for ubiquitous access to services from anywhere and at any time. Most CIOs have already defined a roadmap for the transition of enterprise services to public or hybrid cloud models. It has become clear that the cloud provides flexibility to scale capacity as needed with powerful economics to meet varying demands.
Content and service providers in particular face challenges from this trend, since they must scale their networks efficiently to support daily demands from consumers and businesses - 250 million photos uploaded on Facebook, 864,000 hours of video uploaded on YouTube, 22 million hours of TV and movies watched on Netflix, and more than 35 million applications downloaded. An impatient public complicates these demands; for example, a 100-msec delay costs Amazon 1% in sales, and Bing found that a 2-sec slowdown reduced queries by 1.8% and revenue per user by 4.3%.
Cloud network and business challenges for content and service providers include:
:: Delivery of timely and differentiated cloud services.
:: Better use of expensive resources in the network.
:: Reduction of operational (opex) and capital expenses (capex).
:: Keeping up with scale and capacity demands efficiently.
The growth in the volume of data traffic - which has increased eightfold over the last five years according to Cisco's Visual Networking Index - creates the opportunity for providers to take a bigger role in delivering cloud services and drive new revenue streams. But with the evolution from traditional to cloud data centers, providers must understand and accommodate the changes in network traffic patterns via such strategies as allocating workloads across multiple data centers to handle varied demand requirements and virtualizing multiple data centers as a shared pool of resources.
The dynamic traffic flows created by the growth of new cloud services are unpredictable. To keep up with escalating regional and global processing demands, content and service providers are expanding data-center capacities and adding more locations and servers, while increasing power use and floor space needs. Additionally, corresponding growth in inter-data-center network traffic often means leasing more lines, building provider-owned networks (for control and cost), and purchasing more point networking devices, including routers, switches, and optical transport devices that are expensive to deploy and manage. These approaches typically increase operational complexities and costs, affect network use, and can reduce performance. In short, this exponential bandwidth growth has created space, power, scale, management, control, cost, and service flexibility challenges, particularly for service and content providers.
The emergence of new or enhanced technologies such as software-defined networking (SDN), label-switch routing (LSR), and the integration of applications with management and control layer software has created substantial opportunities to dramatically improve the cloud network. Factors like performance, cost, scale, complexity, and responsiveness to new services all have to be considered when making decisions to transform the cloud network.
These innovations offer significant improvements such as real-time access to analytical data, new levels of control for application flows, a reduction in the number of network elements to simplify operations, and the ability to drive massive scale and capacity with high performance.
[Figure: Today's inter-data-center cloud connectivity versus optimized inter-data-center cloud connectivity. The traditional design (application servers, switches, routers, and separate optical transport) offers no applications awareness, limited scale/performance, high latency, high cost and complexity, and inefficient utilization. The optimized design (SDN-enabled platforms with management/control software) is application-aware and delivers high performance/scale, reduced opex and capex, fully optimized resources, and service innovation. As cloud-based services drive more traffic and revenues, operators must rethink their data-center network strategies.]
Six steps to transformation
Areas that can most affect performance and scale in the cloud - and are
correspondingly ripe for improvement driven by new hardware and software
innovations - include:
1. Increased applications awareness. Inter-data-center connectivity is often supported with separate layers of discrete switching, routing, and optical networking elements that support specific protocols and functions and offer little to no application awareness. Conversely, an approach with centralized software control and applications integration enables programmability, increased control, and enhanced performance. These network applications can reside on separate servers or, for highest performance, on embedded applications modules
that are co-resident in an integrated platform. Applications can include traffic analysis/analytics, traffic steering, application-level optimization, and security. Additionally, the use of applications software with open APIs strengthens innovation and differentiation of new services.
2. Improved network analytics. To optimize cloud-service performance, network resources must be allocated based on application needs. Linking network intelligence through analytics and application control increases the value of network assets. Network analytics enables data mining for improved response times to changing traffic patterns and identifies new revenue-generating opportunities. Additionally, analytics can optimize network economics by knowing where and when to increase capacity, locating the best peering routes, and highlighting the most profitable applications. Real-time analytics can also effectively monitor how the network handles data flows and what the policy engine requires for maximum efficiency.
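The flow-level analytics described above can be sketched in a few lines of code. The snippet below is a minimal illustration rather than any vendor's feature: it assumes hypothetical flow records (application label, start time, byte count), totals traffic per application per hour, and flags applications whose traffic pattern has shifted sharply.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    """Hypothetical flow record from a collector; field names are illustrative."""
    app: str            # application label, e.g. "video" or "backup"
    start: datetime     # flow start time
    byte_count: int     # bytes carried by the flow

def hourly_app_volume(flows):
    """Total bytes per (application, hour) bucket."""
    totals = defaultdict(int)
    for f in flows:
        hour = f.start.replace(minute=0, second=0, microsecond=0)
        totals[(f.app, hour)] += f.byte_count
    return totals

def flag_growth(totals, threshold=2.0):
    """Return applications whose latest hourly volume grew past threshold x the prior hour."""
    series = defaultdict(list)
    for (app, hour), volume in sorted(totals.items(), key=lambda kv: kv[0][1]):
        series[app].append(volume)
    return [app for app, vols in series.items()
            if len(vols) >= 2 and vols[-2] > 0 and vols[-1] / vols[-2] >= threshold]
```

Fed with records from whatever collector an operator already runs, output like this points to where and when capacity should be added and which applications are driving the change.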
3. Tighter control of the network. Use of new SDN capabilities extends network programmability to the optical layer and flattens the network architecture. Moving network control from physical devices to a programmable transport layer provides a high level of automation through the connection to higher-level cloud and virtualization platforms. New SDN converged packet-optical platforms with virtual-machine awareness give service and content providers new intelligence on the communications needs of applications. They also make the applications aware of the capabilities and state of the network. Providing centralized intelligence virtually in the network eases configuration and enables a controlled response to changes and to the delivery of new services and applications.
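As a rough sketch of the programmability described here, the snippet below asks an SDN controller to provision a path between two data centers over a REST interface. The controller URL, endpoint path, and JSON fields are hypothetical placeholders, not a specific vendor's API.

```python
import requests

CONTROLLER = "https://sdn-controller.example.net/api/v1"  # hypothetical endpoint

def request_intercenter_path(src_dc, dst_dc, gbps, max_latency_ms):
    """Ask the controller to set up a packet-optical path between two data centers."""
    payload = {
        "source": src_dc,
        "destination": dst_dc,
        "bandwidth_gbps": gbps,
        "max_latency_ms": max_latency_ms,
    }
    resp = requests.post(f"{CONTROLLER}/paths", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["path_id"]  # response field is assumed for illustration

# Example: steer a 100G replication flow between two sites.
# path_id = request_intercenter_path("dc-east", "dc-west", gbps=100, max_latency_ms=20)
```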
4. Reduced layers of complexity. With data centers experiencing major changes to accommodate unprecedented levels of machine-to-machine, or M2M, traffic, the proliferation of layers and devices has resulted in massive underuse of resources and significant operational complexities and costs. Attempts to manage separate layers also lead to poor performance, high latency, inefficient platform and network use, and multiple points of failure. The integration of such functions as optical transport, switching, and network applications with management software can converge multiple layers of the network and drastically reduce complexity. Benefits include high performance, massive scale
and capacity, improved economics, fully optimized resources, and new levels of
service innovation through SDN applications.
5. Driving scale and performance through optical. Optical networking technologies are vital in transforming data centers to handle unpredictable and dynamic cloud-based traffic flows. The convergence of packet and optical provides scale and increased performance and reduces the requirement for separate discrete devices. Optical communications provides the flexibility required to deliver connectivity and bandwidth where and when needed and eliminates the constraints of fixed connections. Operators can reconfigure the network easily to match application traffic flows.
6. Driving scale and performance through LSR. The cloud introduces significant capacity planning and traffic engineering challenges for content and service providers. Server and desktop virtualization, distributed computing platforms such as Hadoop, and web application architectures dramatically alter traffic flows and volumes inside and outside the data center. Combining optical transport and LSR will lower the cost per bit through hardware integration and deliver the scale and performance required.
Time is right
Networks built on closed products lack application awareness, suffer performance degradation, and cannot scale for capacity demands. The constant addition of new devices causes complexity and rising costs. Large content providers have already begun to tackle these issues through the use of flexible platforms that incorporate SDN, network intelligence, and analytics as well as deliver massive scale, density, and increased performance - all at lower opex and capex. There has never been a better time for service and content providers to assess their cloud network opportunities and deploy new platforms and strategies to optimize emerging revenue streams.
DR. ROBERT KEYS is chief technology officer at BTI Systems.
Originally published August 30, 2013
Effective troubleshooting of FCoE and iSCSI at 40G
By SIMON THOMAS, Teledyne LeCroy
THE MOST IMPORTANT factor in effective troubleshooting is being able to trust your data. When your data is wrong, you have to add a step to the debugging process: determining whether a problem is real before you dedicate resources to trying to resolve it.
For example, consider a switch with four 10G ports aggregated to a single 40G port. An analyzer that can capture data on the 10G ports may not be able to maintain line rate on the 40G port without dropping any packets. Suddenly it looks like the switch is dropping packets when it's really a shortcoming of the analyzer.
The analyzer is a key tool for Fibre Channel over Ethernet (FCoE) and Internet Small Computer System Interface (iSCSI) developers designing reliable and efficient 40G systems. Software analyzers, also known as sniffers, modify the protocol stack to intercept data. This is an intrusive process, given that the analyzer runs on the same CPU that is passing traffic, thus affecting performance and reliability. A network interface card (NIC), for example, has only enough capacity to perform the tasks for which it was designed. When CPU cycles are diverted to capture and analyze traffic, the NIC will no longer be able to operate at wire speed, resulting in congestion and dropped packets.
Throughput can be a problem for software-based analyzers even at less than
line rate. For example, the CPU has other tasks for which it has reserved
memory and system resources like storage bandwidth. As a result, software
analyzers cannot guarantee available processing cycles or memory bandwidth,
and packets may have to be dropped even when the device under test is
operating with light traffc loads.
For these and other reasons that we'll discuss below, users have increasingly begun to turn to hardware-based analyzers for accurate, efficient FCoE and iSCSI troubleshooting.
Packet loss
For lossless protocols like Fibre Channel (FC) and FCoE, packet loss cannot be tolerated at any level, so software analyzers are simply not an option in FC applications. However, dropped packets are a potential problem even for lossy protocols like Ethernet and iSCSI (see the sidebar "Phantoms in the network").

With iSCSI, for example, iSCSI messages can be striped across multiple packet payloads, and the analyzer must follow every packet through a particular connection to keep track of where the iSCSI headers are. If the analyzer drops a packet, there won't be a retransmission as there is when the link drops a packet. The loss of even a single packet could prevent reassembly of messages, and analysis tools will lose the ability to visualize what is passing over the link. Thus, even though iSCSI is a lossy protocol, analysis of iSCSI must be lossless.

Phantoms in the network
You have to have a reliable analyzer that you can trust, or you may find yourself in serious trouble trying to solve problems that aren't really there. Lou Dickens, protocol test engineer at a Fortune 100 company, talks from experience. He analyzes Fibre Channel, iSCSI, and FCoE traffic on a daily basis, depending upon what he is currently testing.
"When packets are dropped by your analyzer, you can find yourself trying to solve a phantom problem that isn't really there. For example, if the analyzer drops the packet that indicates an exchange is complete, it appears as if the completion packet was never sent. Now you've got a false target issue pulling your attention away from the real problem," he explains.
Lost packets can manifest as a variety of intermittent problems that are difficult to repeat consistently and even more difficult to resolve. "You can lose weeks tracking a phantom problem through the network trying to find the source. It's even worse when you're working as a team because everyone has a different idea of what's gone wrong," Dickens adds.
Bad data can lead to other problems as well. "One time the team replaced all the cables trying to fix a problem and, in the rush, put a bad one in, which created a whole new set of problems that weren't there before," Dickens says. "Even worse than wasting time debugging phantom problems is fixing them. Your analyzer dropping a packet can lead you to the wrong conclusions and the wrong fixes. Now you've got another problem to clean up after."
Reliability is even more important in the field. "When I'm troubleshooting at a customer site, their IT department has to approve any additions to make sure they aren't going to bring down the network," Dickens states. "In one case, it took a month just to get an analyzer in place. If we had to deal with phantom problems as well, we would need to request moving the analyzer throughout the network. You'd be surprised at how fast six months can go by."
"For these reasons, we don't use software analyzers because we can't afford to be sidetracked like that," Dickens asserts. "We have found over the years, hardware is far more stable than software, as a general rule." Dickens has experienced firsthand the cost of product delays caused by analyzer-induced phantom errors. In one case, product shipment was delayed at the cost of thousands of dollars per day.
Another time it was a field problem with huge visibility that could affect the company's reputation. "We had 30 people tied up trying to solve a problem that wasn't even there," Dickens recalls. "That's why it's so critical that the information you act on is accurate. If it isn't, you can chase your tail for months."

A hardware analyzer works in-line to traffic. A 1:2 splitter serves as the front end, passing traffic to its destination while a copy is passed to analyzer hardware for processing and storage. Hardware analyzers are able to guarantee 100% capture at line rate because they perform their function in dedicated hardware rather than load and impede the device under test. When they are also non-blocking, traffic passes through the analyzer transparently without materially affecting network operation. Other than minimal latency (delay through a hardware analyzer ranges from 100 ps to a few nanoseconds), it's as if the analyzer isn't there. However, this also means that because the analyzer is just listening, it has no control over traffic or the ability to apply backpressure.

To achieve non-blocking functionality, a hardware analyzer must have sufficient memory bandwidth to store not just packets but the metadata associated with each packet, including timestamps and error flags. To prevent dropped packets, the system must also have enough memory to sustain throughput during worst-case traffic conditions.
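To put that worst case in numbers, the short calculation below assumes minimum-size 64-byte Ethernet frames plus the standard 20 bytes of preamble and inter-frame gap, and an illustrative 16 bytes of per-packet metadata (timestamp and error flags); actual metadata sizes vary by analyzer.

```python
LINE_RATE_BPS = 40e9      # 40G link
MIN_FRAME_BYTES = 64      # minimum Ethernet frame
OVERHEAD_BYTES = 20       # preamble (8) + inter-frame gap (12)
METADATA_BYTES = 16       # assumed per-packet timestamp + error flags

wire_bytes = MIN_FRAME_BYTES + OVERHEAD_BYTES
packets_per_sec = LINE_RATE_BPS / (wire_bytes * 8)                     # ~59.5 million packets/s
capture_bps = packets_per_sec * (MIN_FRAME_BYTES + METADATA_BYTES) * 8

print(f"worst-case packet rate: {packets_per_sec / 1e6:.1f} Mpps")
print(f"sustained capture bandwidth needed: {capture_bps / 1e9:.1f} Gbps")
```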
Visibility
With a hardware analyzer, developers can see the raw 66-bit scrambled signal
on the line. This means they can set triggers on any aspect of traffc, including
primitives and packet order, not just at the protocol layer. Triggering is non-
blocking as well, even with multi-stage triggers, since dedicated state machines
in hardware are used. Timestamp accuracy is also improved since triggers are
resolved quickly and do not add overhead as is the case with software-based
approaches.
Software analyzers have only limited access to what is actually passing over
the wire. For example, low-level primitives are not accessible at the stack level
because they have already been stripped off by the NIC. The timing of packets is
affected too, since there is a delay between when the packet reaches the NIC and
when it is passed to the protocol stack.
Visibility can be impaired in the other direction as well. For example, a software analyzer cannot probe beyond the NIC into a switch. This means that developers cannot observe what is happening when 10G ports are aggregated into a 40G port. In contrast, hardware analyzers can provide visibility at every point in the traffic chain, from both before and after the NIC to the server/switch and through aggregation points. A single analyzer with multiple ports can also verify traffic in and out of a target and automatically correlate and compare results.
A single hardware analyzer is also able to support multiple protocols at different speeds. This gives developers greater flexibility and allows them to leverage tool investment across multiple applications. The ability to use a single tool to debug multiple protocols also enables seamless correlation of traffic across ports and protocols in a way not possible when separate analyzers are used. This is critical for analyzing traffic that crosses domains, such as FCoE.
Advanced troubleshooting
Just having access to captured data, however, is not enough to always know what is stored in a packet. For example, protocol data units (PDUs), which are delimited within the TCP byte stream, can be spread across multiple packets. To be able to analyze traffic, developers need to be able to identify where PDUs are and aggregate them. However, if there is a retransmission of a packet, at 40G that packet may
be hundreds of millions of events further down the trace. Manually reassembling PDUs that span multiple packets is a tedious and error-prone process. In addition, ambiguities can arise as to whether the entire packet was lost or just a single PDU.
When the analyzer can handle processes like this automatically and without packet loss, it can save tremendous effort on the part of developers. To ease troubleshooting, all relevant data can be seen within a small window that shows traffic as it was transmitted (see Figure 1), enabling each PDU to be verified as well as shown with the data with which it is associated. In addition, a huge buffer is no longer required, as only data of interest needs to be captured.
Two key capabilities for enabling efficient data capture and debugging are multi-state triggers and enhanced filtering. Consider that even though a session may run over a short time, at 40G a lot of data passes through the system. Triggers and filters pare down the amount of information that the analyzer captures and that developers need to sift through to resolve an issue.
FIGURE 1. Proper presentation of test data can speed analysis and trouble resolution.
For example, developers can more clearly define when to begin capturing traffic by configuring triggers to very specific conditions. Enhanced filtering complements multi-state triggers by automatically removing traffic that is not of interest, such as data to devices that are not being debugged. As a result, the trace buffer required to capture even a simple connection will contain just traffic that is relevant rather than spanning gigabytes of data.
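In software terms, a multi-state trigger combined with a filter behaves like the small state machine below. This is only a sketch of the concept (a hardware analyzer does the same thing in dedicated logic at line rate): capture arms on a start condition, keeps only packets that match a filter, and stops on a stop condition.

```python
def triggered_capture(packets, start_cond, stop_cond, keep):
    """Two-state trigger with filtering over an iterable of packet objects.

    start_cond, stop_cond, and keep are callables taking a packet and returning bool.
    """
    armed = False
    captured = []
    for pkt in packets:
        if not armed:
            if start_cond(pkt):
                armed = True              # state 1 -> state 2: begin capturing
            continue
        if keep(pkt):                     # enhanced filtering: drop uninteresting traffic
            captured.append(pkt)
        if stop_cond(pkt):
            break                         # trigger complete: stop the capture
    return captured
```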
Flexibility in analyzer output is important as well; if captured data is locked to a proprietary tool, developers cannot take advantage of their offline analysis tools of choice. Analyzers that enable export of data to tools like Wireshark that support custom analysis enable teams to leverage their existing test benches.
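For example, once a trace has been exported in pcap format, an existing test bench can post-process it with standard libraries. The sketch below uses Scapy to pull the iSCSI-port TCP traffic out of an exported capture; the file names are placeholders.

```python
from scapy.all import rdpcap, wrpcap, TCP  # pip install scapy

ISCSI_PORT = 3260  # IANA-registered iSCSI port

packets = rdpcap("exported_trace.pcap")    # capture exported from the analyzer
iscsi = [p for p in packets
         if TCP in p and ISCSI_PORT in (p[TCP].sport, p[TCP].dport)]

print(f"{len(iscsi)} of {len(packets)} packets are iSCSI traffic")
wrpcap("iscsi_only.pcap", iscsi)           # hand the subset to other offline tools
```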
Measurement of latency and performance verification in 40G systems requires accurate timestamps as well. Today, hardware analyzers offer timestamps with accuracy to 1 ns. This capability enables developers to identify potential bottlenecks, analyze flow-control issues, accurately measure port-to-port delay, and even explore low-level interactions at the LINK and PHY layers that can affect performance.
Non-intrusive probing
Variations in how long it takes to pass traffic through the analyzer introduce unwanted jitter to traffic. With software analyzers, high-priority host tasks like interrupts and locked access to storage or memory can increase latency. Advanced triggering and filtering, if supported, will add delay as well. In addition, because triggering introduces loading on the CPU, software analyzers do not support the complex multi-state triggers developers need to identify and locate performance and reliability issues.
Hardware analyzers avoid introducing jitter by minimizing added latency. Because processing is performed by dedicated hardware, this latency is also deterministic and consistent, eliminating any issues that might arise from jitter. The result is little to no impact on traffic.
To be completely non-intrusive, hardware analyzers also need to redrive signals rather than retime them. Retimers have the potential to materially alter traffic by adding or removing symbols when passing the traffic through. A redriver, in
contrast, retransmits traffic without retiming what was received from the optics and so preserves traffic integrity.
The sheer amount of data to analyze brings unique challenges to designing 40G systems. Even when a protocol such as iSCSI can tolerate packet loss, packets dropped by an analyzer can create phantom problems that distract developers from real issues, potentially causing costly product delays. Through deep visibility and 100% data capture at line rate, developers can effectively troubleshoot systems to ensure optimal performance and reliability.
SIMON THOMAS is product manager at Teledyne LeCroy.
Originally published May 1, 2013
Reducing the cost of broadband by lowering data-center footprint
With data-center space at a premium - and expensive - designers and managers must find new ways to maximize the efficiency of their floor plans.
By CHERI BERANEK
MUCH HAS BEEN written about carbon footprint. But for data-center designers and managers, reducing footprint is not just about going green. Let's review the opportunities associated with going lean and reducing data-center space requirements.
The media barrage us with messages about how to lose weight. Regardless of the method, the overriding message is that carrying around excess fat stresses our skeletal systems. Our broadband networks are no different. As increasing bandwidth and performance requirements have resulted in an explosion of deployed fiber, more and more data is being pushed through these networks for dynamic storage and retrieval. The resulting expansion of data centers has been rapid, and real estate is being quickly consumed.
With data centers expanding in capacity and
complexity, costs have risen as well. Source: Google Inc.
The strain this expansion has put on data centers extends beyond the capacity and redundancy of servers and switches to the physical media - the highway used to transfer it all around. Multimode fiber has replaced copper. Newer 50-µm laser-optimized fiber is now replacing conventional multimode - and in turn is being replaced by the limitless bandwidth potential of singlemode fiber.
Effective data-center design is crucial to the long-term viability of cost-effective
broadband management. To ensure a lean footprint today and into many
tomorrows, forethought is required.
Data-center architectures and requirements
Today's data centers are complex facilities. Most consume more power than a small town and have auxiliary generators in the event they lose utility power. With all that power being consumed, cooling the data center consumes even more power. Some companies are building data centers in traditionally cold locations (such as Duluth, MN, and Finland) to use the cold air to cool the data center more economically.
The wiring for a data center can be a work of art or a bird's nest. Most data centers have at least three different types of cabling (copper, fiber, and power). In addition, to provide increased resiliency, data centers have two sets of each type of cable, referred to as A side and B side. That allows the systems housed in the data center to connect to redundant power, networks, etc., by being connected to both the A and B side cabling. Keeping separation between the copper wires and the power cords is important because the power cords could cause disruption to the signals on the copper. Of course, fiber is not affected by that type of interference.
There are two options for running the cabling: under the floor and above the rack. Some data centers use a combination of the two options, but the general trend is to run the cables through troughs mounted above the racks. This approach makes the cable much easier to run, locate, and maintain.
All these cables connect equipment in racks. Servers, storage arrays, and
network switches are all rack-mounted with rack depths approaching 3.5 feet.
Unfortunately for Hollywood, reel-to-reel tape has been retired, and what tape is
in use today looks more like VHS. As noted earlier, most equipment can accept
redundant connections for power, networking, and storage, and today's racks can be configured with in-rack power distribution units (PDUs), dual PDUs, temperature sensors, and remote power control.
Current cost considerations for size and space
With all the power, cooling, and cabling requirements, data centers are expensive. Current studies indicate that building a data center can cost $1,500 to $2,000 per square foot of data-center space, not including the costs for racks, servers, etc. Renting square footage within someone else's data center (normally referred to as co-location) can run $30 to $50/square foot/month, depending on the type of data center.
The Uptime Institute has developed a classification system for data centers based on the redundancy and resiliency built into the facility. Four tiers categorize data centers from least resilient (Tier 1) to most resilient (Tier 4). Redundant hardware and cross connections can add more resiliency. The more resilient the data center, the more expensive it will be.
With the increased use of the Internet and customer demands for systems
that are available 24/7, companies are beginning to move their applications
from traditional in-house Tier 1 or, at best, Tier 2 data centers to professionally
managed Tier 3 or 4 co-location facilities. These facilities cost more to use, so
companies are expending considerable resources to reduce their footprint and the
amount of wasted space (aisles, etc.).
Technologies such as blade servers and virtualization have enabled companies to reduce their footprint. But the amount of wasted space is still high because of requirements to have access aisles on both sides of current racks: one side to access the front of the server/switch/etc. and the other side to access the cabling on the back of the equipment. Traditional designs using traditional equipment that require 3- to 4-foot-wide aisles on each side of racks waste a significant amount of space. New fiber management technology and cassette connections enable the elimination of the back aisle because all of the functionality and cables can be accessed from the front.
Scalability crucial to cost-effective growth
For today's data centers, traditional fiber management methods are just overkill - they have too many needless components that drive up costs. Rather than changing the lipstick, today's fiber management must be designed from conception for high-density environments. Fiber management today must be very flexible and configured so it can be scaled to meet a variety of environments. In addition, today's economic environment forces all data-center providers to maximize the alignment between capital equipment and data-center utilization rates. As a result, it makes economic sense to take fiber management down to the lowest common denominator that aligns with fiber constructions of the day - 12 fibers at a time. Fiber management that scales in 12-port increments enables data-center designers to upgrade their port counts. And the upgrade must be done without loss of space at full configuration.
[Figure: Comparison of the increased number of racks a high-density front access system (a) can support in cages of various sizes versus a standard fiber management approach (b).]
But careful attention must be placed on every element of the fiber. One area of concern is the protection of buffer tubes. Most traditional approaches are inadequate since the buffer tube is stored in a common route path and area. Alternatives that address this challenge with in-device buffer-tube storage reduce the footprint of the overall device and enhance the protection of the fiber sub-unit.
Superior density reduces real estate costs
Fiber management design has long promoted the need for density. We push to increase the number of ports per rack-unit space while balancing the need for fiber access.
Regardless of whether you own your central office or rent or choose to co-locate, real estate is a cost of doing business. To create a cost metric for the savings of space that fiber management can provide, let's use the cost per square foot to rent space in a co-location environment (a cage) from a third party. While some locations will be significantly higher, Table 1 uses $30/square foot/month.
Assuming the port count is fully maximized, dividing the cost of the footprint per year by the number of ports on the frame calculates the cost per port for the high-density approach to be 80 cents per port per year. That's 24 cents per port per year less than the $1.04 cost of the traditional approach, delivering a cost savings of 23%.

TABLE 1: Cost comparison between traditional and new era fiber management
Traditional frame: 1,728 ports; 24" x 30" footprint (5 square feet per rack); $30/square foot/month; $150 footprint cost/month; $0.0868/port/month if maximized; $1,800 footprint cost/year; $1.04/port/year.
High-density frame: 2,016 ports; 18" x 36" footprint (4.5 square feet per rack); $30/square foot/month; $135 footprint cost/month; $0.067/port/month if maximized; $1,620 footprint cost/year; $0.80/port/year.
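The arithmetic behind Table 1 is straightforward; the sketch below reproduces it from the figures quoted above (the square footage, the $30/square foot/month rate, and the port counts come from the table).

```python
RATE = 30.0  # $ per square foot per month

def cost_per_port_per_year(square_feet, ports, rate=RATE):
    """Annual real-estate cost spread across a fully maximized frame."""
    return square_feet * rate * 12 / ports

traditional = cost_per_port_per_year(square_feet=5.0, ports=1728)    # ~$1.04
high_density = cost_per_port_per_year(square_feet=4.5, ports=2016)   # ~$0.80
savings = (traditional - high_density) / traditional

print(f"traditional: ${traditional:.2f}/port/year; high-density: ${high_density:.2f}/port/year")
print(f"savings: {savings:.0%}")   # ~23%
```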
Front/rear access implications
The cost savings associated with real estate goes beyond density. Fiber management approaches that provide full access to the fibers using only the front of the frame offer better use of the cage or full data center. While many data centers require up to 4 feet for aisle access, some conservative environments may use as little as 30 inches. Using this estimated 30 inches of aisle space for access, when a front access technology is deployed, that 30 inches of aisle space is required only on the front, with zero space required in the rear. That compares to the 30 inches of aisle space on the front and rear of the frame required with a standard platform. As a result, the amount of square footage required for the high-density front access system is dramatically less than the alternative solutions.
To establish the cost savings of these designs, Table 2 outlines the cost of each approach on a per-port, per-year basis, accounting for both the footprint of the two racks and the required aisle space. The 24 square feet of the high-density front access frame will house 4,032 ports, while the 35 square feet of the traditional method houses 3,546 ports. Assuming the port count is fully maximized in this two-frame example, the annual cost per port for the high-density front access frame is $2.04 per port - $1.61/port/year less than the $3.65 cost of the traditional frame, a savings of 44%.

TABLE 2: Cost of traditional vs. front access-enabled approach
Traditional frame: 3,546 ports; 24" x 30" frame footprint; 35 square feet; $30/square foot/month; $1,050 footprint cost/month; $0.3038/port/month if maximized; $12,600 footprint cost/year; $3.65/port/year.
High-density front access frame: 4,032 ports; 18" x 36" frame footprint; 24 square feet; $30/square foot/month; $720 footprint cost/month; $0.17/port/month if maximized; $8,640 footprint cost/year; $2.04/port/year.
Full cage implications
To further demonstrate the savings associated with reducing the real estate requirements for fiber management, the real-world implications of optimizing the floor plan of a cage exemplify its full impact. High-density front access designs can be deployed either in a back-to-back layout or against the wall (see Figure). As Table 3 details, 32,256 ports could be housed in a 12' x 12' cage with the high-density front access system, while only 13,824 could be housed using the traditional platform. The high-density front access design supports 2.3 times as many ports per cage as the traditional approach without jeopardizing ease of use.

TABLE 3: Comparison of space efficiency for various cage sizes
12' x 12' cage: 144 square feet; $4,320 monthly cost; 13,824 maximum ports with traditional frames; 32,256 maximum ports with high-density front access frames.
10' x 12' cage: 120 square feet; $3,500 monthly cost; 13,824 maximum ports with traditional frames; 24,192 maximum ports with high-density front access frames.
10' x 10' cage: 100 square feet; $3,000 monthly cost; 13,824 maximum ports with traditional frames; 18,144 maximum ports with high-density front access frames.
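Using only the port counts quoted in Table 3, the capacity gain per cage works out as follows.

```python
# (cage size, traditional max ports, high-density front access max ports) from Table 3
cages = [("12' x 12'", 13824, 32256),
         ("10' x 12'", 13824, 24192),
         ("10' x 10'", 13824, 18144)]

for size, traditional, high_density in cages:
    ratio = high_density / traditional
    print(f"{size}: {ratio:.1f}x the ports of the traditional layout ({ratio - 1:.0%} more)")
# 12' x 12': 2.3x (133% more); 10' x 12': 1.8x (75% more); 10' x 10': 1.3x (31% more)
```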
Investing in fiber management is not a necessary evil. Protecting fiber with management devices that scale to your capacity requirements will pay dividends in the future. Unlike a standard resolution to lose weight, this commitment to establishing a lean infrastructure for your data center will pay dividends long into the future.
CHERI BERANEK is president and CEO of fiber management product developer Clearfield Inc.
Company Description:
LeCroy Corporation is a worldwide leader in serial data test solutions, creating
advanced instruments that drive product innovation by quickly measuring,
analyzing and verifying complex electronic signals. The Company offers high-
performance oscilloscopes, serial data analyzers and global communications
protocol test solutions used by design engineers in the computer, semiconductor
and consumer electronics, data storage, automotive and industrial,
telecommunications and military and aerospace markets.
LINKS:
New Flexport allows flexible Ethernet and Fibre Channel testing up to 40G
Videos: Introduction to SierraFC
Lowest Cost Ethernet and Fibre Channel Analyzers on the Market -
starts at $19,950
Full range of Fibre Channel testing available
Compact, Portable, Lightweight SierraNet 408 measures FC and
Ethernet up to 40G in a 1U Chassis