I. INTRODUCTION
b) Voltage isolation.
c) Higher interconnect density especially for
longer off-chip interconnects.
d) Possible noncontact, parallel testing of chip
operation.
e) Option of use of short optical pulses for
synchronization and improved circuit
performance.
f) Possibility of wavelength-division-multiplexed
interconnects without the use of any electrical
multiplexing circuitry.
With the idea of optical interconnects, the prognosis for the
use of optics in digital computing and data center networks is
more optimistic and realistic now than it has likely been at any
point in the past. However, if optics is to compete with
conventional electrical backplanes for interconnects,
significant advances in speed, power consumption, density,
and cost have to be made. The problems to be tackled span
from the base level of materials (stability, processability) and
devices (reliability, lifetime), through the subsystem level of
packages (concepts, cost-efficient assembly and alignment), all
the way up to the system level (link architecture, system
packaging, heat management) [7]. Of all the issues mentioned
above, the three critical requirements for a terabit/s-class
optical transceiver used in data center networks are
interconnect density, power consumption, and speed.
A terabit/s-class data center will have a very large number of
server connections, and the number of optical switch and
transceiver connections will grow accordingly. The scale of the
wiring and connections can be estimated from Fig. 1. To
reduce these connections, a higher interconnect density is
needed. Furthermore, advances in transmission technology
have been increasing the data rate of transmission signals such
as Ethernet and InfiniBand from the conventional 10 Gb/s to
40 Gb/s and 100 Gb/s. All of these trends are expected to
increase the volume of data connections in the chassis at an
accelerating rate in the future [8]. Hence, at high data rates,
interconnect density is a critical requirement for designing
future data center networks. Equally critical is power
consumption, one of the most challenging issues in the design
and deployment of a data center. According to some estimates
[9], the servers in a data center consume around 40% of the
total IT power, storage up to 37%, and the network devices
around 23%. Moreover, as the total power consumption of the
IT devices in data centers continues to increase rapidly, so
does the power of the heating, ventilation, and air-conditioning
(HVAC) equipment needed to hold the temperature of the data
center site steady. Therefore, reducing the power consumption
of the network devices has a significant impact on the overall
power consumption of the data center site [10].
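To make the leverage of network-device power reduction concrete, the following sketch shows how a cut in network power propagates to facility-level power once cooling overhead is accounted for. Only the 23% network share comes from the estimates above; the IT load, PUE value, and 50% savings figure are assumptions for illustration.

```python
# Illustrative sketch (assumed numbers, not from the paper): a reduction in
# network-device power is amplified at the facility level because HVAC/cooling
# power scales with the IT load it must remove.

def facility_power(it_power_kw, pue=1.8):
    """Total facility power for a given IT power, assuming a PUE of 1.8."""
    return it_power_kw * pue

it_load = 1000.0        # assumed total IT load in kW
network_share = 0.23    # network devices' share of IT power (from [9])
savings = 0.5           # hypothetical 50% cut in network-device power

delta_it = it_load * network_share * savings       # IT-level saving: 115 kW
delta_facility = facility_power(delta_it)          # facility-level: 207 kW
print(f"IT power saved: {delta_it:.0f} kW")
print(f"Facility power saved at PUE 1.8: {delta_facility:.0f} kW")
```

The point of the multiplication by PUE is that every watt removed from the network devices also removes the cooling power that would have been spent extracting that watt as heat.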
are used to connect signal probe pads to the sites onto which
the Optochips are mounted. The measured attenuation of these
lines is 1.5 dB/cm at 20 GHz. An acrylate layer is deposited on
top of the SLC card by doctor blading, and waveguides are
photo lithographically patterned into this layer by UV
exposure through a contact mask [40]. The unexposed regions
are removed by a solvent. Upon completion, the
cladding-core-cladding stack is thermally baked to complete the cure.
The Optocard has 48 integrated multimode waveguides with a
cross section of 35 × 35 µm on a 62.5-µm pitch.
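The 1.5 dB/cm attenuation figure quoted above can be put in perspective by converting it to the fraction of signal power surviving a given line length. The 4 cm length below is an assumed example, not a value from the paper; only the 1.5 dB/cm figure comes from the measurement.

```python
# Minimal sketch: converting the measured line attenuation (1.5 dB/cm at
# 20 GHz) into a linear power ratio over an assumed trace length.

def surviving_fraction(atten_db_per_cm, length_cm):
    """Fraction of input power remaining after the given length."""
    total_db = atten_db_per_cm * length_cm
    return 10 ** (-total_db / 10)

frac = surviving_fraction(1.5, 4.0)   # 6 dB total loss over an assumed 4 cm
print(f"{frac:.3f}")                  # ~0.251 of the power remains
```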
Figure 3. Frequency response of photodiodes with diameters of 30, 50,
and 60 µm at 1.5-V reverse bias.
B. VCSELs
The VCSELs [12] are grown in a metal-organic chemical
vapor deposition (MOCVD) reactor on semi-insulating GaAs
substrates with multiple strained InGaAs quantum wells. The
devices have an oxide-confined structure optimized for low
series resistance, low parasitics, and high-speed operation at
low current densities. VCSELs with apertures of 4, 6, and
8 µm are fabricated. The VCSELs are optimized for operation
at 70 °C with an emission wavelength around 985 nm. A 4 × 12
array of 10-Gb/s eye diagrams at 70 °C is shown in Fig. 2. The
VCSELs have diameters of 4 µm (rows A–I) and 6 µm (rows J
and K), and their bandwidths are above 15 GHz. The bias is
2 mA for the 4-µm devices and 3 mA for the 6-µm VCSELs.
The modulation is identical for all 48 devices, and the
extinction ratios are above 6 dB on each channel. Fig. 4 also
shows a zoom on a 10- and a 20-Gb/s eye of a 6-µm VCSEL
at 70 °C.
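The >6 dB extinction ratio reported above can be related to the optical modulation amplitude (OMA) of each channel. In this hedged sketch, only the 6 dB extinction ratio comes from the text; the 1 mW average output power is assumed purely for illustration.

```python
# Sketch: OMA from average power and extinction ratio, using
# P_avg = (P1 + P0) / 2 and ER = P1 / P0 (linear).
# The 1 mW average power is an assumed example, not a measured value.

def oma_from_avg(p_avg_mw, er_db):
    """Optical modulation amplitude (P1 - P0) in mW."""
    er = 10 ** (er_db / 10)           # linear extinction ratio P1/P0
    p0 = 2 * p_avg_mw / (er + 1)      # "0" level
    p1 = er * p0                      # "1" level
    return p1 - p0

print(f"OMA = {oma_from_avg(1.0, 6.0):.2f} mW")   # ~1.20 mW
```

Note how the OMA approaches twice the average power as the extinction ratio grows, which is why a ratio above 6 dB already recovers most of the available signal swing.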
C. Photodiodes
Photodiodes with mesa device structure are grown on an
Fe-doped InP substrate. They are backside illuminated,
which means that the light enters through the substrate lens
and passes through the p-InGaAs contact before reaching
the intrinsic layer. Optical absorption in the p-InGaAs layer
is detrimental to the photodiode responsivity, which means
that the p-InGaAs layer needs to be as thin as possible. The
responsivity is measured as 0.65 A/W at 985 nm.
Photodiodes with diameters of 30, 40, 50, and 60 µm are
fabricated.
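The measured 0.65 A/W responsivity can be put to work in a short sketch: the photocurrent it yields for a given received power, and the external quantum efficiency it implies at 985 nm. The 1 mW (0 dBm) incident power below is an assumed example.

```python
# Sketch: photocurrent and external quantum efficiency from the measured
# responsivity of 0.65 A/W at 985 nm. Incident power is assumed (1 mW).

PLANCK = 6.626e-34   # Planck constant, J*s
C = 3.0e8            # speed of light, m/s
Q = 1.602e-19        # electron charge, C

def photocurrent_ma(resp_a_per_w, p_in_mw):
    """Photocurrent in mA (A/W x mW gives mA directly)."""
    return resp_a_per_w * p_in_mw

def quantum_efficiency(resp_a_per_w, wavelength_m):
    """External quantum efficiency: electrons out per photon in."""
    return resp_a_per_w * PLANCK * C / (Q * wavelength_m)

print(f"I_ph = {photocurrent_ma(0.65, 1.0):.2f} mA")       # 0.65 mA
print(f"eta  = {quantum_efficiency(0.65, 985e-9):.2f}")    # ~0.82
```

The ~0.82 quantum efficiency makes concrete why absorption in the p-InGaAs contact layer matters: every photon lost there before reaching the intrinsic layer lowers this figure directly.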
D. CMOS IC Arrays
The laser diode driver (LDD) [42] and receiver (RX) [32]
IC arrays were fabricated by IBM in a standard 0.13-µm
CMOS process. The LDD and RX arrays share a common
electrical pad layout and a 3.9 mm × 2.3 mm footprint, so that
either chip can be attached to a common silicon carrier. The
performance of both the LDD and RX array ICs benefits from
the Terabus packaging configuration: the flip-chip bonding of
the OE element to the IC provides a very short electrical path
that minimizes parasitic effects at this critical interconnection
point. Both arrays consist of 48 individual amplifier elements
and utilize two voltage supplies to minimize power
dissipation. The power supply to each array is further divided
into eight different domains such that blocks of six channels
share the same power connections. This configuration enables
the characterization of channel-to-channel crosstalk both
within and between power blocks.
E. VCSEL Driver Circuits
The 48-channel LDD array is powered by a 1.8-V supply
for the input amplifier circuitry and a 3.3-V supply for the
output stage and bias. As shown in Fig. 4, each driver circuit
contains a differential preamplifier followed by a dc-coupled
transconductance output amplifier that supplies the
modulation current to the VCSEL. Each driver has differential
inputs with a 100-Ω floating termination and fully differential
signal paths except for the output stage. Transformer peaking
is utilized in all of the predrivers to achieve a large voltage
swing and to provide fast transition times to the output stage.
The transconductance output stage has a single-ended current
output.
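Since the output stage delivers a single-ended current into the VCSEL, a simple L-I (light versus current) model shows how the bias and modulation currents set the optical "0" and "1" levels. Only the 2 mA bias for the 4-µm VCSELs comes from the text; the threshold current, slope efficiency, and modulation swing below are illustrative assumptions, not device parameters from the paper.

```python
import math

# Hedged sketch (assumed device parameters): mapping driver bias and
# modulation currents to optical levels through an idealized linear L-I curve
# P = slope * (I - I_th) above threshold.

def vcsel_power_mw(i_ma, i_th_ma=0.5, slope_mw_per_ma=0.4):
    """Optical output power for drive current i_ma (assumed L-I model)."""
    return max(0.0, (i_ma - i_th_ma) * slope_mw_per_ma)

i_bias = 2.0   # mA, bias for the 4-um VCSELs (from the text)
i_mod = 1.5    # mA peak-to-peak modulation swing (assumed)

p0 = vcsel_power_mw(i_bias - i_mod / 2)   # "0" level
p1 = vcsel_power_mw(i_bias + i_mod / 2)   # "1" level
er_db = 10 * math.log10(p1 / p0)
print(f"P0={p0:.2f} mW, P1={p1:.2f} mW, ER={er_db:.1f} dB")
```

With these assumed numbers the model lands below the >6 dB extinction ratio the devices actually achieve, which illustrates why the bias point must sit close enough to threshold: pushing the "0" level down is what buys extinction ratio.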
F. Receiver Circuits
The 48-channel receiver array is powered by dual 1.8-V
supplies for the amplifier circuits and a separate 1.53-V
supply for the photodiode bias. Each receiver element
comprises a low-noise differential transimpedance
amplifier (TIA) followed by a limiting amplifier (LA) and an
output buffer (Fig. 5). The array is configured so that the TIA
and LA circuits occupy the central region of the chip and share
one 1.8-V supply, whereas the output buffers are located at the
chip edges and are powered with a separate 1.8-V supply. The
input of the TIA is ac-coupled using on-chip three dimensional
(3-D) interdigitated vertical parallel plate capacitors that
provide a high capacitance per unit area and a low parasitic
capacitance to the substrate. The TIA is a modified
common-gate circuit similar to the one described in [13], and utilizes
inductive peaking in series with both the TIA inputs and loads
to enhance the circuit bandwidth.
A top view of the silicon carrier design, with a clear central
region for the OE cavity, differential microstrip lines, and
through-vias, is shown in Fig. 6. The through-vias for signal, power,
and ground are distributed on three sides of the silicon carrier.
The fourth side is left free to accommodate space for the
waveguides underneath the carrier on the Optocard.
[Table: optical link parameters. Recoverable entries: transmitter
wavelength, 985 nm; transmitter power (OMA), 0 dBm; transmitter
extinction ratio, 6 dB. The values of the remaining entries (base rate
and Q, rise time Ts(20-80), BER, pulse shrinkage jitter, deterministic
jitter, receiver OMA, modal bandwidth, type of fiber, target reach,
penalties) are not recoverable from the source.]
REFERENCES
[1] David A. B. Miller, Optical Interconnects to Silicon, IEEE Journal of
Selected Topics in Quantum Electronics, Vol. 6, No. 6,
November/December 2000.
[2] The International Technology Roadmap for Semiconductors 2012
update review.
[3] David A.B. Miller, Rationale and Challenges for Optical Interconnects
to Electronic Chips
[4] David A.B. Miller, Device Requirements for Optical Interconnects to
Silicon Chips
[5] David A.B. Miller, Optical Interconnects to Silicon
[6] David A.B. Miller, Optical interconnects to electronic chips
[7] C. Berger, B. J. Offrein, and M. Schmatz, Challenges for the
introduction of board-level optical interconnect technology into
product development roadmaps, presented at Photonics West, San
Jose, CA, Jan. 21–26, 2006, Paper 6124-18.
[8] T. Yamamoto, K. Tanaka, S. Ide, and T. Aoki, Optical Interconnect
Technology for High-Bandwidth Data Connection in Next-Generation
Servers.
[9] Report to Congress on Server and Data Center Energy Efficiency, U.S.
Environmental Protection Agency, ENERGY STAR Program.
[10] C. Kachris and I. Tomkos, A survey on Optical interconnects for Data
Centers.
[11] Terabus: Terabit/Second-Class Card-level Optical Interconnect
Technologies by IBM T.J. Watson Research Center, Yorktown Heights,
NY
[12] C. K. Lin, A. Tandon, K. D. Djordjev, S. Corzine, and M. R. T. Tan,
High-speed 985 nm bottom-emitting VCSEL arrays for chip-to-chip
parallel optical interconnects.