
Optical Interconnection Network Design Study


Abstract: This paper gives a brief summary of optical interconnects in future data center networks. It begins by describing the advantages of using optical interconnects over electrical interconnects in data centers. The aim of this paper is to identify technologies that can enable terabit/second speeds in data center networks.

I. INTRODUCTION

The idea of using light beams to replace wires now
dominates all long-distance communications and is
progressively taking over in networks over shorter
distances. But should we use light beams at the much shorter
distances inside digital computers, possibly connecting
directly to the silicon chips, or even for connections on chips?
If so, why, and, ultimately, when? [1]
The paper is organized as follows. Section II presents an
overview of the advantages of using optical interconnects. I
list the possible practical benefits of optical interconnects and
highlight three critical requirements for a terabit/s-class
transceiver used in data center networks. In Section III, I
propose a terabit/second optical transceiver design based on
the IBM Terabus project. Each component of the transmitter,
the receiver, and the Optocard is described, and the feasibility
of this optical link is analyzed using the 10G Ethernet link
model. The results of the link-model spreadsheet are presented
in the paper. Sections IV and V address the network
architecture that can be used to connect the top-of-rack
switches in the data center. Here I present a recent network
architecture employed by Facebook in its newly built Altoona
data center. An independent analysis of the optical and
electrical power budgets is made using existing Finisar optical
transceivers and Lenovo servers. Lastly, I analyze how the
proposed network design compares with existing technologies,
keeping in mind the three critical requirements mentioned in
Section II.
II. OPTICAL INTERCONNECTS

For five decades, the semiconductor industry has
distinguished itself by the rapid pace of improvement in its
products. The principal categories of improvement trends,
with examples of each, are described in [2]. Most of these
trends have resulted principally from the industry's ability to
exponentially decrease the minimum feature sizes used to
fabricate integrated circuits. Of course, the most frequently
cited trend is in integration level, which is usually expressed
as Moore's Law (that is, the number of components per chip
doubles roughly every 24 months). The most significant trend
is the decreasing cost per function, which has led to
significant improvements in economic productivity and
overall quality of life through the proliferation of computers,
communication, and other industrial and consumer electronics.

This evolution is shifting the balance between devices and
interconnection in digital processing systems:
interconnections, at least as we know them today, do not scale
up with the devices.[3] Optical fiber has already taken over the
task of long distance communications from electrical cables
and is increasingly advancing in connections between different
parts of large electronic systems.[4]
With the advent of data center networks and the
requirement of higher speeds, the use of optical interconnects
is increasing exponentially. In this section we will discuss the
physical reasons for using optics for interconnection within
otherwise electronic information processing machines. The
idea of replacing the electrical interconnects used in computing
and switching systems with optical interconnects is a radical
one, and it is important to understand the practical and
physical benefits of doing so. Some of the resulting possible
practical benefits of optical interconnects are as follows
[5], [3]:
1) Design Simplification:
a) Absence of electromagnetic wave
phenomena (impedance matching, crosstalk,
and inductance difficulties such as inductive
voltage drops on pins and wires)
b) Distance independence of performance of
optical interconnects
c) Frequency independence of optical
interconnects
2) Architectural Advantages:
a) Large synchronous zones
b) Architectures with large numbers of long
high-speed connections (i.e. avoiding the
architectural aspect ratio scaling limit of
electrical interconnects). [6]
c) Regular interconnections of large numbers
of crossing wires (useful, for example, in
some switching and signal processing
architectures).
d) Ability to have 2-D interconnects directly
out of the area of the chip rather than from
the edge.
e) Avoidance of the necessity of an
interconnect hierarchy (once in the form of
light, the signal can be sent very long
distances without changing its form, or
amplifying it or reshaping it, even on
physically thin connections).
3) Timing:
a) Predictability of timing of signals.
b) Precision of the timing of the clock signal.
c) Removal of timing skew in signals.
d) Reduction in power and area for clock
distribution
4) Other physical benefits:
a) Reduction of power dissipation in
interconnects.

b) Voltage isolation.
c) Higher interconnect density especially for
longer off-chip interconnects.
d) Possible noncontact, parallel testing of chip
operation.
e) Option of use of short optical pulses for
synchronization and improved circuit
performance.
f) Possibility of wavelength division
multiplexed interconnects without the use of
any electrical multiplexed circuitry.
With these potential benefits of optical interconnects, the
prognosis for the use of optics in digital computing and data
center networks is more optimistic and realistic now than it
has likely been at any point in the past. However, if optics is
to compete with conventional electrical backplanes for
interconnects, significant advances in speed, power
consumption, density, and cost have to be made. The problems
to be tackled span from the base level of materials (stability,
processability) and devices (reliability, lifetime), over the
subsystem level of packages (concepts, cost-efficient assembly
and alignment), all the way up to the system level (link
architecture, system packaging, heat management) [7]. Of all
the issues mentioned above, the three critical requirements for
a terabit/s-class optical transceiver used in data center
networks are interconnect density, power consumption, and speed.
A terabit/s-class data center will have a very large number of
server connections, and consequently the number of optical
switch and transceiver connections will also be very large. The
amount of wiring and the number of connections can be
estimated from Fig. 1. To keep these connections manageable,
a higher interconnect density is needed. Furthermore, the
progress of transmission technology has been increasing the
data rate of transmission signals such as Ethernet and
InfiniBand from the conventional 10 Gb/s to 40 Gb/s and
100 Gb/s. All of this is expected to increase the volume of
data connections in the chassis at an accelerated rate in the
future [8]. Hence, with high data rates, interconnect density is
a critical requirement for designing future data center
networks. Equally critical is power consumption, which is one
of the most challenging issues in the design and deployment of
a data center. According to some estimates [9], the servers in
data centers consume around 40% of the total IT power,
storage up to 37%, and the network devices around 23%.
Moreover, as the total power consumption of the IT devices in
data centers continues to increase rapidly, so does the power
of the heating, ventilation, and air-conditioning (HVAC)
equipment needed to keep the temperature of the data center
site steady. Therefore, a reduction in the power consumption
of the network devices has a significant impact on the overall
power consumption of the data center site [10].

Figure 1 Schematic view of Terabus package

III. TRANSCEIVER DESIGN


The bandwidth and density requirements for interconnects
within high-performance computing systems are growing fast,
owing to increasing chip speeds, wider buses, and larger
numbers of processors per system. A number of research
programs have started to develop components and work on the
integration for high-density on-board optical interconnects.
Two-dimensional (2-D) arrays with up to 540 optical
transmitter and receiver elements have been demonstrated,
high-speed driver and receiver circuits with low power
consumption have been designed, low-loss polymer materials
with optical waveguides have been developed, and schemes
that allow optical coupling between optoelectronic modules
and waveguides on backplanes, compatible with manufacturing
processes, are being pursued [11]. However, it is challenging to
fulfil all these requirements together and to develop simple
packaging processes that permit the dense integration of
high-speed components [11].
Terabus is one such optical interconnect program by IBM,
in which optical data bus technologies are developed that can
support terabit/second chip-to-chip data transfers over organic
cards for high-performance servers, switch routers, and other
compute-intensive systems. The Terabus project addresses the
three critical requirements highlighted in Section II, i.e., speed
(high bit rates), channel density, and low power consumption.
The Optomodule consists of four separate flip-chip-bonded
chips: a VCSEL array, the VCSEL drivers, a photodiode array,
and the receiver amplifiers. On the Optocard, the optical
devices emit into and receive from an array of lenses, across a
free-space gap, and through a matching array of lenses that
couples to an array of flexible polymer waveguides.
Some examples of the design choices to meet these
requirements include the following:
1) extensive use of the flip-chip technologies in order to avoid
the parasitics associated with wirebonds
2) the choice of surface-laminar-circuitry (SLC) as an organic
card because of the higher wiring density allowed by such
build-up technologies
3) the use of a silicon carrier for the Optochip package
because the through-vias allow direct solder attachment of the
Optochip to the Optocard along with high-density wiring to
the IC
4) the use of CMOS integrated circuits (CMOS ICs) to
minimize IC power and cost
5) an operating wavelength of 985 nm, which permits a simple
optical design with emission through the GaAs and InP
substrates without the need to thin the OE substrates. This
wavelength also permits the direct integration of lenses into
the substrates
The Terabus components and packaging are described in
detail below.
A. Optocard with integrated waveguides
The Optocard is a 15 cm × 15 cm printed circuit board made
using SLC technology; a top view of the Optocard is shown in
[11]. The build-up layers of SLC have a dielectric constant
of 3.4 and a loss tangent of 0.027. Short differential striplines
(typically <10 mm) of 20-µm width with a spacing of 50 µm
are used to connect signal probe pads to the sites onto which
the Optochips are mounted. The measured attenuation of these
lines is 1.5 dB/cm at 20 GHz. An acrylate layer is deposited on
top of the SLC card by doctor blading, and waveguides are
photolithographically patterned into this layer by UV
exposure through a contact mask [11]. The unexposed regions
are removed by a solvent. Upon completion, the
cladding-core-cladding stack is thermally baked to complete
the cure. The Optocard has 48 integrated multimode
waveguides with a cross section of 35 µm × 35 µm on a
62.5-µm pitch.
Figure 3 Frequency response of photodiodes with diameters of 30, 50,
and 60 µm at 1.5-V reverse bias

Figure 2 Array (4 × 12) of 10-Gb/s VCSEL eye diagrams

B. VCSELs
The VCSELs [12] are grown in a metalorganic chemical
vapor deposition (MOCVD) reactor on semi-insulating GaAs
substrates with multiple strained InGaAs quantum wells. The
devices have an oxide-confined structure optimized for low
series resistance, low parasitics, and high-speed operation at
low current densities. VCSELs with apertures of 4, 6, and
8 µm are fabricated. The VCSELs are optimized for operation
at 70 °C with an emission wavelength around 985 nm. A 4 × 12
array of 10-Gb/s eye diagrams at 70 °C is shown in Fig. 2. The
VCSELs have diameters of 4 µm (rows A-I) and 6 µm (rows J
and K), and their bandwidths are above 15 GHz. The bias is
2 mA for the 4-µm devices and 3 mA for the 6-µm VCSELs.
The modulation is identical for all 48 devices, and the
extinction ratios are above 6 dB on each channel. Fig. 2 also
shows a zoom on a 10- and a 20-Gb/s eye of a 6-µm VCSEL
at 70 °C.
C. Photodiodes
Photodiodes with a mesa device structure are grown on an
Fe-doped InP substrate. They are backside-illuminated,
which means that the light enters through the substrate lens
and passes through the p-InGaAs contact before reaching
the intrinsic layer. Optical absorption in the p-InGaAs layer
is detrimental to the photodiode responsivity, which means
that the p-InGaAs layer needs to be as thin as possible. The
responsivity is measured as 0.65 A/W at 985 nm.
Photodiodes with diameters of 30, 40, 50, and 60 µm are
fabricated.
At a reverse bias of 1.5 V, the capacitances range from 90 fF
for the smallest to 230 fF for the largest devices. The
frequency response is calculated from a Fourier transform of
impulse-response measurements, using 2-ps pulses at 985 nm.
Fig. 3 shows that the 3-dB bandwidths range from 13 GHz
(for 60-µm-diameter photodiodes) up to 30 GHz (for
30-µm-diameter photodiodes) at a reverse bias of 1.5 V.
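The bandwidth extraction described above can be illustrated numerically. The following sketch is only a minimal illustration, not the measurement code used in [11]: it assumes a sampled impulse response is available as a NumPy array (the array, the sampling step, and the synthetic single-pole example are all hypothetical) and estimates the 3-dB bandwidth from its Fourier transform.

import numpy as np

def bandwidth_3db(impulse_response, dt):
    # Estimate the 3-dB bandwidth (Hz) of a detector from a sampled
    # impulse response with sample spacing dt (seconds).
    H = np.fft.rfft(impulse_response)               # frequency response
    f = np.fft.rfftfreq(len(impulse_response), d=dt)
    mag_db = 20 * np.log10(np.abs(H) / np.abs(H[0]))
    below = np.where(mag_db <= -3.0)[0]             # first bin 3 dB down
    return f[below[0]] if below.size else None

# Synthetic single-pole response standing in for a measured 2-ps-pulse trace;
# tau is chosen so the expected 3-dB bandwidth is about 30 GHz.
tau = 1 / (2 * np.pi * 30e9)
t = np.arange(0.0, 200e-12, 0.1e-12)
h = np.exp(-t / tau)
print(bandwidth_3db(h, dt=0.1e-12) / 1e9, "GHz")    # ~30 GHz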

D. CMOS IC Arrays
The laser diode driver (LDD) and receiver (RX) IC arrays
[11] were fabricated by IBM in a standard 0.13-µm CMOS
process. The LDD and RX arrays share a common electrical
pad layout and a 3.9 mm × 2.3 mm footprint, so that either
chip can be attached to a common silicon carrier. The
performance of both the LDD and RX array ICs benefits from
the Terabus packaging configuration: the flip-chip bonding of
the OE element to the IC provides a very short electrical path
that minimizes parasitic effects at this critical interconnection
point. Both arrays consist of 48 individual amplifier elements
and utilize two voltage supplies to minimize power
dissipation. The power supply to each array is further divided
into eight domains such that blocks of six channels share the
same power connections. This configuration enables the
characterization of channel-to-channel crosstalk both within
and between power blocks.
E. VCSEL Driver Circuits
The 48-channel LDD array is powered by a 1.8-V supply
for the input amplifier circuitry and a 3.3-V supply for the
output stage and bias. As shown in Fig. 4, each driver circuit
contains a differential preamplifier followed by a dc-coupled
transconductance output amplifier that supplies the
modulation current to the VCSEL. Each driver has differential
inputs with a 100-Ω floating termination and fully differential
signal paths except for the output stage. Transformer peaking
is utilized in all of the predrivers to achieve a large voltage
swing and to provide fast transition times to the output stage.
The transconductance output stage has a single-ended current
output with an adjacent ground pad to provide a low-inductance
return path for the modulation current.

Figure 4 Block diagram of an individual channel of the VCSEL driver IC

Figure 5 Block diagram of an individual channel of the receiver IC

Figure 6 OE flip chip bonded onto the IC/silicon-carrier assembly

F. Receiver Circuits
The 48-channel receiver array is powered by dual 1.8-V
supplies for the amplifier circuits and a separate 1.53-V
supply for the photodiode bias. Each receiver element is
comprised of a low-noise differential transimpedance
amplifier (TIA) followed by a limiting amplifier (LA) and an
output buffer (Fig 5). The array is configured so that the TIA
and LA circuits occupy the central region of the chip and share
one 1.8-V supply, whereas the output buffers are located at the
chip edges and are powered with a separate 1.8-V supply. The
input of the TIA is ac-coupled using on-chip three dimensional
(3-D) interdigitated vertical parallel plate capacitors that
provide a high capacitance per unit area and a low parasitic
capacitance to the substrate. The TIA is a modified common-gate circuit similar to the one described in [13], and utilizes
inductive peaking in series with both the TIA inputs and loads
to enhance the circuit bandwidth.
A top view of the silicon carrier design with a clear central
region for OE cavity, differential microstrip lines, and through
vias is shown in Fig. 6. The through-vias for signal, power,
and ground are distributed on three sides of the silicon carrier.
The fourth side is left free to accommodate space for the
waveguides underneath the carrier on the Optocard.

Now it is important to analyze how the system will work
together as an entire link. It is prudent to check whether the
worst case still closes using the 10G Ethernet link model.
Power penalties related to noise are computed and power
budget allocations are made; as can be seen from Fig. 8, these
calculations feed directly into the Ethernet model. We
calculate each of the parameters for the Terabus circuits and
components, along with the associated power penalties, and
use the Ethernet link model to see whether the optical link
works. From [11], it is possible to extract the details required
to calculate the noise contributions and the associated losses.
Once the power penalties and budgeting are done, the receiver
and transmitter eye diagrams can be determined. The plot of
power penalties versus distance indicates the values of the
different power penalties at the target distance and hence
gives an indication of whether the link will work. The eye
diagram is an indication of signal distortion: if the eye is
closed, the signal cannot be detected at the receiver. Fig. 9
consolidates the parameters required by the Ethernet link
model, taken from the Terabus paper. The parameter BWm
(the product of the modal bandwidth and the target reach)
turns out to be the key quantity.
After carefully studying the results in Fig. 10 and Fig. 11, it
can be concluded that with a lower BWm (i.e., a lower target
reach) the link does not work, as can be seen in Fig. 11. The
eye diagram in Fig. 11 collapses, with the Tx eye height
reduced to -50.9%, and the power penalties due to Pcross and
Prin shoot up with distance. Hence it can be concluded from
the Ethernet link spreadsheet that, for the Terabus model to
work, the distance has to be greater than that of the existing
15 cm × 15 cm Optocard.
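To make the worst-case check concrete, the fragment below sketches a drastically simplified version of such a link budget: it starts from the transmitter OMA and receiver OMA sensitivity listed in the parameter table, subtracts an assumed coupling loss, propagation loss, and a lumped power-penalty term, and reports the remaining margin. The loss and penalty numbers are placeholders for illustration, not values taken from the Terabus paper or from the official 10G Ethernet spreadsheet.

# Simplified worst-case link-budget check; "assumed" values are placeholders.
tx_oma_dbm       = 0.0    # transmitter OMA (parameter table)
rx_sens_oma_dbm  = -10.0  # receiver OMA sensitivity (parameter table)
coupling_loss_db = 3.0    # assumed lens/waveguide coupling loss
prop_loss_db     = 1.0    # assumed waveguide propagation loss over the card
penalties_db     = 3.5    # assumed lumped ISI + RIN + crosstalk + jitter penalty

rx_oma_dbm = tx_oma_dbm - coupling_loss_db - prop_loss_db - penalties_db
margin_db  = rx_oma_dbm - rx_sens_oma_dbm
print(f"OMA at receiver = {rx_oma_dbm:.1f} dBm, margin = {margin_db:.1f} dB")
# A positive margin means the worst-case link closes; when the penalties blow
# up (e.g., at low BWm) the margin goes negative, corresponding to a closed eye.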

Figure 7 Bandwidth consumption of Facebook from March 2011 to May 2012

Figure 8 The Ethernet link model: power budgets, margins & penalties

PARAMETERS FROM TERABUS LINK USED IN THE ETHERNET LINK MODEL
(parameter: value derived from Terabus [11])
  Transmitter wavelength: 985 nm
  Transmitter power (OMA): 0 dBm
  Transmitter extinction ratio: 6 dB
  Base rate and Q: 15000 MBd and 12
  Ts(20-80): 15 ps
  BER: <10^-12
  Pulse shrinkage jitter = Det. jitter: 1.3 ps
  Receiver OMA: -10 dBm
  BWm: will vary depending on target reach
  Type of fiber: 50 um MMF
  Target reach: variable

Figure 9 Parameters from the Terabus link used in the Ethernet link model

Figure 12 Example of a pod (the unit of the entire network) and schematic of the data center fabric

Figure 10 BWm = 39; Tx eye height = 69.2%
Figure 11 BWm = 3.9; Tx eye height = -50.9%

IV. NETWORK DESIGN


The proposed interconnection network is based on a fairly
recent next-generation architecture that has been successfully
deployed at Facebook's fourth data center, at Altoona, which
officially went online and began serving traffic on November
14, 2014 [14]. All previous Facebook data centers were built
using clusters, but the cluster-focused architecture has its
limitations: the size of a cluster is limited by the port density
of the cluster switch.
A typical data center has many such clusters. On the front
end, these data centers push traffic out to the Internet, to the
people who use Facebook and its services. On the back end,
clusters talk to each other. Fig. 7 [15] shows the bandwidth
consumption between March 2011 and May 2012; as can be
seen, the growth in machine-to-user traffic is dwarfed by the
growth in inter-cluster machine-to-machine traffic. There is a
limit to the traffic that clusters can accommodate, and
allocating more ports to inter-cluster traffic takes away from
the cluster sizes. With rapid and dynamic growth, this
balancing act never ends unless the rules are changed [16].

Instead of large devices and clusters, the entire network is
broken down into small identical units, called server pods, and
uniform high-performance connectivity is created between the
pods in the data center. Fig. 12 shows a sample pod of the
network. What is different is the much smaller size of this new
unit: each pod has only 48 server racks, and this form factor is
always the same for all pods. It is an efficient building block
that fits nicely into various data center floor plans, and it
requires only basic mid-size switches to aggregate the TORs.
The smaller port density of the fabric switches makes their
internal architecture very simple, modular, and robust, and
there are several easy-to-find options available from multiple
sources. Another notable difference is how the pods are
connected together to form a data center network. For each
downlink port to a TOR, an equal amount of uplink capacity is
reserved on the pod's fabric switches, which allows the
network performance to scale up to statistically
non-blocking [16].
To implement building-wide connectivity, four independent
planes of spine switches are created, each scalable up to 48
independent devices within a plane. Each fabric switch of each
pod connects to each spine switch within its local plane.
Together, pods and planes form a modular network topology
capable of accommodating hundreds of thousands of
10G-connected servers, scaling to multi-petabit bisection
bandwidth, and covering the data center buildings with
non-oversubscribed rack-to-rack performance.
The fabric offers a multitude of equal paths between any two
points on the network, making individual circuits and devices
unimportant; such a network is able to survive multiple
simultaneous component failures with no impact. Smaller and
simpler devices also mean easier troubleshooting. The modular
design and component sizing allow the same mid-size switch
hardware platform to be used for all roles in the network
(fabric switches, spine switches, and edge switches), making
them simple "Lego-style" building blocks that can be procured
from multiple sources. Fig. 12 shows the schematic of such a
data center network architecture.
To understand the data capacity, connectivity, and hardware
required for this kind of network, let us analyze how many
optical ports are required. The network fabric consists of
individual elements called pods, and there are 48 such pods in
the network architecture. Each pod has 48 top-of-rack (TOR)
server switches, and each TOR switch has 48 ports, each of
which is a 40 Gigabit Ethernet (40GbE) port. In addition to the
pods, there are 4 spine planes above the pods, each with up to
48 spine switches. This calculation yields 2304 TOR switches
and 110592 40GbE ports for the pods, plus 192 switches for
the spine planes. These large numbers immediately lead to the
conclusion that full-duplex optical transceiver modules with
multiple channels are needed. The network employs 40GbE
ports, and hence transceivers at that speed should be selected.
We therefore choose a Finisar optical transceiver module with
40GbE, four channels, and full-duplex operation; it has a
QSFP+ form factor. Transceivers with a different form factor
and higher speeds (CXP) were available, but this is the ideal
choice for compatibility with our network architecture. The
optical power budget calculations for this transceiver module
are carried out in the next section, which further justifies the
choice of the aforementioned transceiver.
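As a quick cross-check of the arithmetic above, the short sketch below simply recomputes the switch, port, and transceiver counts from the stated topology parameters (48 pods, 48 TOR switches per pod, 48 ports per TOR, 4 spine planes of 48 switches, and the paper's assumption of one 40GbE port per transceiver channel).

# Re-derive the fabric counts used above.
pods                = 48
tors_per_pod        = 48
ports_per_tor       = 48      # 40GbE ports per TOR switch
spine_planes        = 4
spines_per_plane    = 48
channels_per_module = 4       # the paper assigns one 40GbE port per channel

tor_switches   = pods * tors_per_pod                   # 2304
ports_40gbe    = tor_switches * ports_per_tor          # 110592
spine_switches = spine_planes * spines_per_plane       # 192
transceivers   = ports_40gbe // channels_per_module    # 27648
print(tor_switches, ports_40gbe, spine_switches, transceivers)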
V. ENERGY REQUIREMENT
In this section, we calculate the optical and electrical power
requirements for the interconnection network. The
contributions for the optical power budget can be obtained
from the energy calculations carried out in the Ethernet link
model. The electrical power budget can be estimated from the
wattage ratings of a standard server. It is important to highlight
that the estimation carried out in this way portrays only the
power requirements of servers and networking equipment.
The five distinct sub-systems that account for most of a data
center's power draw are listed in Table 1.
In order to carry out the calculations for Electrical Power, it
is important that we understand Power Usage Effectiveness
(PUE). PUE is a measure of how efficiently a computer data
center uses energy, especially how much energy is used by the
computing equipment (in contrast to cooling and other
overheads). A PUE of 2.0 means that for every watt of IT
power, an additional watt is consumed to cool and distribute
power to the IT equipment. A PUE closer to 1.0 means nearly
all of the energy is used for computing. Modern data centers
such as the Prineville data center display real-time PUE, WUE,
and other parameters on the web. These values help in
estimating the overall power consumption and power
efficiency of a data center [17].
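As a simple illustration of the PUE definition above (the IT loads used here are arbitrary round numbers, not measured Prineville or Altoona figures):

def facility_power_kw(it_power_kw, pue):
    # Total facility power implied by an IT load and a given PUE.
    return it_power_kw * pue

print(facility_power_kw(100.0, 2.0))   # PUE 2.0: 100 kW of IT implies 200 kW in total
print(facility_power_kw(100.0, 1.1))   # a near-ideal facility adds only 10% overhead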
To calculate the electrical power associated with the server
racks, the average consumption of a server in watts is needed;
representative values are given in [18]. These values are
somewhat dated, and the latest servers from Dell and Lenovo
have lower power ratings. For example, the Lenovo
ThinkServer RS140 [19] has a power rating of 300 W. In
practice a server will not be fully utilized and drawing 300 W
at all times, but for calculation purposes let us assume full
utilization. As per the calculations in the previous section for
the network architecture, each pod has 48 servers, and there
are 48 such pods. Hence the total power consumption turns
out to be 48 × 48 × 300 W, which amounts to about 691 kW of
electrical power for the servers and storage systems. [20] gives
an estimate of a typical data center power breakdown, which
has been reproduced in Table 1. This table can be used to get a
fair estimate of the average power associated with cooling,
power conditioning, and other overheads.

Table 1 Typical Data Center Power Breakdown

The optical power budget calculations help determine the
power associated with the network. As indicated in the
previous section, we choose the Finisar 40GbE optical
transceiver module for inter- and intra-rack connections [21].
This serves our purpose of having 40 Gigabit Ethernet;
moreover, it has 4 channels and is a full-duplex system. As
calculated in the previous section, there are 110592 40 Gigabit
Ethernet ports in the network architecture. Since the optical
transceiver module has 4 channels, we require 27648
transceivers in all, and the optical power budget calculation
has to be done for 27648 transceivers. From Fig. 2 of [21], we
can conclude that the maximum power consumption per end is
1.5 W. Hence the total optical power budget for 27648
transceivers is 41.472 kW. Cross-verification of the calculated
electrical power budget and the optical power budget against
the percentage contributions mentioned in Table 1 gives
consistent results. Table 1 can also be used to get a fair
estimate of the power associated with cooling systems and
power conditioning.
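Both budget figures above follow from straightforward multiplication; the sketch below merely reproduces them under the same assumptions (300 W per server, 48 × 48 servers, 1.5 W per transceiver end, and 110592/4 transceivers), which is also where the roughly 730 kW total used in Section VI comes from.

# Reproduce the electrical and optical power-budget estimates of this section.
servers        = 48 * 48          # 48 pods x 48 servers (paper's assumption)
server_power_w = 300              # Lenovo ThinkServer RS140 rating, full utilization
electrical_kw  = servers * server_power_w / 1000        # 691.2 kW

transceivers = 110592 // 4        # 40GbE ports / 4 channels per module
trx_power_w  = 1.5                # maximum power per end, Finisar datasheet [21]
optical_kw   = transceivers * trx_power_w / 1000        # 41.472 kW

total_it_kw = electrical_kw + optical_kw                # ~730 kW (used in Section VI)
print(f"{electrical_kw:.1f} kW + {optical_kw:.3f} kW = {total_it_kw:.1f} kW")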
VI. BENCHMARKING AND CONCLUSION
The three critical requirements highlighted in Section II for a
terabit/s class of optical interconnect are interconnect density,
power consumption, and speed. Present data center networks
use 10 GbE for interconnections, whereas our system uses
40 GbE for inter- and intra-rack connections. This not only
increases the speed, but the QSFP+ form factor also ensures a
high interconnect density. In our case, the combined optical
and electrical power consumption turns out to be around
730 kW; if the ratios mentioned in Table 1 are considered, the
total consumption turns out to be around 1100 kW.
Now refer to the calculation in Fig. 13 [22]. The value
arrived at from our calculations is close to the estimate in
Fig. 13. It can be concluded that the system proposed in this
paper consumes about as much power as a present-day data
center, but at that same cost we are getting tens of terabit/s of
speed.

Figure 13 Analysis of power consumption in a typical data center

REFERENCES
[1] David A. B. Miller, Optical Interconnects to Silicon, IEEE Journal on
Selected Topics in Quantum Electronics, Vol. 6, No. 6,
November/December 2000.
[2] The International Technology Roadmap for Semiconductors 2012
update review.
[3] David A.B. Miller, Rationale and Challenges for Optical Interconnects
to Electronic Chips
[4] David A.B. Miller, Device Requirements for Optical Interconnects to
Silicon Chips
[5] David A.B. Miller, Optical Interconnects to Silicon
[6] David A.B. Miller, Optical interconnects to electronic chips
[7] C. Berger, B. J. Offrein, and M. Schmatz, Challenges for the
introduction of board-level optical interconnect technology into
product development roadmaps, presented at Photonics West, San
Jose, CA, Jan. 21-26, 2006, Paper 6124-18.
[8] T. Yamamoto, K. Tanaka, S. Ide, T. Aoki, Optical Interconnect
Technology for High Bandwidth Data connection in next generation
servers.
[9] Report to Congress on Server and Data Center Energy Efficiency,
U.S. Environmental Protection Agency, ENERGY STAR Program.
[10] C. Kachris and I. Tomkos, A survey on Optical interconnects for Data
Centers.
[11] Terabus: Terabit/Second-Class Card-level Optical Interconnect
Technologies by IBM T.J. Watson Research Center, Yorktown Heights,
NY
[12] C. K. Lin, A. Tandon, K. D. Djordjev, S. Corzine, and M. R. T. Tan,
High-speed 985 nm bottom-emitting VCSEL arrays for chip-to-chip
parallel optical interconnects.
[13] D. Guckenberger, J. D. Schaub, D. Kucharski, and K. T. Kornegay,
1 V, 10 mW, 10 Gb/s CMOS optical receiver front-end.
[14] Facebook Altoona Data Center: http://www.lightedge.com/datacenters/altoona-data-center/
[15] http://www.enterprisetech.com/2014/11/18/facebook-gives-lessonsnetwork-datacenter-design/
[16] Data Center Fabric:
https://code.facebook.com/posts/360346274145943/introducing-datacenter-fabric-the-next-generation-facebook-data-center-network/
[17] Prineville Data Center:
https://www.facebook.com/PrinevilleDataCenter/app_399244020173259
[18] Facebook Altoona Data Center: http://www.lightedge.com/datacenters/altoona-data-center/
[19] Lenovo ThinkServer RS140:
http://www.lenovo.com/images/products/server/pdfs/datasheets/thinkserver_rs140_ds.pdf
[20] S. Pelley, D. Meisner, T. F. Wenisch, and J. W. VanGilder,
Understanding and Abstracting Total Data Center Power.
[21] Finisar optical transceiver module:
http://www.finisar.com/products/optical-modules/QSFP/FTL410QD2C
[22] Emerson Network Power, Energy Logic: Reducing Data Center Energy
Consumption by Creating Savings that Cascade across Systems
