
HYBRID OPTICAL WIRELESS ACCESS NETWORKS

A DISSERTATION SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

Wei-Tao Shaw March 2009

Copyright by Wei-Tao Shaw 2009 All Rights Reserved

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

(Leonid G. Kazovsky) Principal Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

(Donald C. Cox)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

(David A. B. Miller)

Approved for the University Committee on Graduate Studies.


Abstract
Next-generation access networks will require flexible deployment, large backbone capacity, upgradeability, scalability with user number and demand, and economic feasibility. One example is providing ubiquitous, blanketed broadband access service in a metropolitan area. Such requirements are intrinsically impossible to meet if the network is designed with any single access technology. A hybrid optical and wireless access network, on the other hand, combines high optical capacity with flexible, economical, and scalable wireless deployment. This dissertation focuses on hybrid optical and wireless access networks that consist of a wireless mesh network based on multi-hop wireless communications and novel optical backhaul networks. Two novel optical backhaul networks are proposed, analyzed, and experimentally evaluated. The first optical backhaul network is a reconfigurable architecture based on Time Division Multiplexing Passive Optical Networks. The reconfigurable architecture can optimize network efficiency and performance through dynamic bandwidth allocation. An experimental testbed is built to demonstrate its feasibility for realistic applications. The second optical backhaul network is based on a grid structure that provides broadband, scalable, blanket-coverage, and cost-effective connectivity. Advanced optical devices are employed to achieve centralized control, bandwidth scalability, resource sharing, statistical multiplexing gain, and ease of deployment and management. An experimental testbed is built for performance evaluation and demonstration of QoS capability. An integrated routing paradigm is developed to enhance the wireless mesh network performance by leveraging the optical backhaul. Finally, a novel feedback-based burst-mode clock and data recovery architecture is proposed for both optical backhaul networks. It can rapidly recover the clock for data sampling and provides a jitter tolerance that feed-forward CDR circuits cannot achieve.

Acknowledgement
I am grateful to have worked with many wonderful people at Stanford. They made these years one of the most memorable times in my life.

First of all, I want to thank my advisor, Professor Leonid Kazovsky, for his invaluable advice and constant support during my study in the Photonics and Networking Research Laboratory (PNRL). I appreciate the research environment he created, which allowed me to explore various interesting research topics. Under his guidance throughout the years, I not only learned how to solve problems but also built a vision for future research directions.

I would also like to thank my associate advisor, Professor Donald Cox, from whom I took one of my first classes at Stanford. He is a very knowledgeable and kind teacher. Later we worked together on a joint research project, the Grid Reconfigurable Optical and Wireless Network, and he gave me much valuable advice and came up with good ideas. It has been my pleasure to work with him. Another person I would like to thank is Professor David Miller. He is a very kind and patient professor when students need his help. I closely followed his research, and it helped my own research both directly and indirectly.

The project sponsors were vital to my research and to me. I am grateful to the Industrial Technology Research Institute (ITRI) in Taiwan and the National Science Foundation (NSF). I also thank my past and current group mates in PNRL. I received countless help from each of them in different aspects of my life. Without them, my time in the office and laboratory would not have been as enjoyable and memorable.

Finally, I want to thank my family, especially my uncle Jean, who is the most important person in my life. Since I was a kid, he has influenced me deeply in many aspects. His unchanging love, care, patience, and support made me what I am and who I am. Although he has left me, I will always remember him with love. I am also grateful for my dear wife, Minny. She and her love made me complete.

Contents

Abstract
Acknowledgement

1 Introduction
  1.1 Communication Network Hierarchy
  1.2 Bottleneck: Access Networks
  1.3 Brief Introduction to Access Technologies
    1.3.1 Digital Subscriber Line
    1.3.2 Hybrid Fiber Coax
    1.3.3 Passive Optical Networks
    1.3.4 Wireless Access Technologies
  1.4 Convergence of Optical and Wireless Access
  1.5 Dissertation Outline

2 Enabling Optical and Wireless Access Technologies
  2.1 Passive Optical Networks
    2.1.1 TDM PON
    2.1.2 WDM PON
    2.1.3 PON Deployment Method and Cost
    2.1.4 Summary of Optical Access Technologies
  2.2 Enabling Wireless Access Technologies
    2.2.1 Multi-Hop Wireless Communication Network
    2.2.2 Wireless Mesh Networks
    2.2.3 Client Wireless Mesh Networks
    2.2.4 Infrastructure Wireless Mesh Networks
    2.2.5 PHY and MAC Layers of WMN
    2.2.6 Routing Algorithms of Wireless Multi-Hop Networks
    2.2.7 Capacity Scalability of WMN
    2.2.8 Backhaul Solutions for WMN
    2.2.9 Candidate Solutions for Broadband Backhaul in WMN
    2.2.10 Summary of Wireless Access Technologies

3 Hybrid Architecture Enabling Smooth Wireless Access Upgrade
  3.1 Introduction
  3.2 Network Architecture
    3.2.1 Upgrading Path
  3.3 Reconfigurability of the Proposed Optical Backhaul
    3.3.1 Performance simulation of the reconfigurable optical backhaul
  3.4 Experimental testbed of reconfigurable optical backhaul
    3.4.1 Handshaking protocol for network reconfiguration
    3.4.2 Enabling Devices
    3.4.3 Experimental Results

4 Next-Generation Hybrid Access Network
  4.1 Introduction
  4.2 Requirements of Next-Generation Optical Backhaul Network
  4.3 A Novel Optical Grid Backhaul
  4.4 Optical Grid Unit
    4.4.1 Node Structures of the Central Hub
    4.4.2 Node Structure of the Optical Terminal
  4.5 Multiplexing of Tunable Lasers
    4.5.1 Statistical Multiplexing Gain of Tunable Lasers
    4.5.2 Simulations of Packet Delay and Jitter Improvement due to Statistical Multiplexing Gain
  4.6 Optimization between Bandwidth Scalability and Cost-efficiency
  4.7 System testbed of the optical grid unit
    4.7.1 Enabling devices
    4.7.2 Downstream Transmission Experiment
    4.7.3 Upstream transmission experiment
    4.7.4 QoS Experiments of the Optical Backhaul
  4.8 Integrated Routing Algorithm for Optical Grid Unit
    4.8.1 Integrated Routing Algorithm
    4.8.2 Algorithm Simulation

5 Burst-Mode Clock and Data Recovery Technique
  5.1 Introduction
  5.2 Clock and Data recovery in conventional optical communication systems
  5.3 Review of Up-to-date Burst-mode CDR techniques
    5.3.1 Digital ring oscillator
    5.3.2 Instantaneously locked clock and data recovery circuit
    5.3.3 Nonlinear clock extraction circuit
    5.3.4 Summary among the three burst-mode CDR techniques
  5.4 Dual-loop burst-mode CDR technique
    5.4.1 Simplified Delay lock loop (DLL)
    5.4.2 Charge Pump DLL (CPDLL)
    5.4.3 Charge pump phase lock loop (CPPLL)
    5.4.4 The third-order CPPLL
    5.4.5 Simulation of dual-loop CDR structure

Conclusion

Bibliography

List of Tables

1.1 Multimedia applications and their bandwidth requirements
1.2 Comparison of bandwidth and reach for popular access technologies
2.1 Cost of fiber deployment for PON
2.2 Summary of IEEE 802.11 standards
5.1 Summary of three CDR techniques

List of Figures

1.1 Modern Communication Network Hierarchy
1.2 Synergy of Optical and Wireless to Construct Access Network
2.1 Passive optical networks
2.2 TDM PON
2.3 WDM PON
2.4 IEEE 802.16 WiMAX: Single-Hop Wireless Communication Network
2.5 IEEE 802.16j: Mobile Multi-hop Relay WiMAX Network
2.6 Generic Wireless Mesh Network (WMN)
2.7 Client Wireless Mesh Network
2.8 Infrastructure Wireless Mesh Network
2.9 Two Categories of Routing Algorithms in Wireless Ad-Hoc Networks
2.10 Scalability in a one-dimensional exemplar WMN. MR: Wireless Mesh Router, GR: Wireless Gateway Router, D_agg: Aggregated data rate, C_MR-MR: Link capacity between adjacent MRs, C_MR-user: Link capacity between MR and users, D_MR: Total user demand within the area served by an MR
2.11 One-dimensional WMN
2.12 Throughput per router and overall capacity of a one-dimensional WMN
2.13 Hierarchical Wireless Access Network (proposed by Google-Earthlink for San Francisco City)
2.14 Fundamental limit of transmission distance of copper wire technologies
2.15 Comparison of broadband backhaul technology candidates for WMN
3.1 Hybrid optical wireless access network as proposed by Google and Earthlink for the San Francisco Metro Wireless Networks Project
3.2 Smooth upgrading path of the wireless backhaul proposed by Google and Earthlink for the San Francisco City wireless access network
3.3 Reconfigurable Optical Backhaul
3.4 System architecture of the central office to facilitate the reconfigurable optical backhaul
3.5 Traffic throughput at P1 in Figure 3.4
3.6 Buffer depth and packet rejection ratio of PON1 in both architectures
3.7 Throughput of each PON and combined throughput of both architectures, measured at P2 in Figure 3.4
3.8 Long-term average packet delay of both PONs (varying load for PON1 and fixed load for PON2)
3.9 Experimental testbed of the reconfigurable optical backhaul
3.10 Timing diagram of the reconfiguration protocol
3.11 State diagrams of the RCIs in the central office, heavily and lightly loaded OLTs, ONU, and RCI behind ONU
3.12 Optical tunable receiver used on the reconfigurable optical backhaul experimental testbed
3.13 Transient response of the optical tunable receiver in Figure 3.12
3.14 FPGA and 1.25 Gbps SerDes board
3.15 Functionalities implemented on the two FPGA boards
3.16 Control packet format
3.17 Experimental result of the reconfiguration protocol
4.1 GROW-Net architecture
4.2 Optical grid unit structure
4.3 Node structure of the central hub
4.4 Tunable optical transmitter at the central office
4.5 DWDM optical filter used in the optical terminal
4.6 Wavelength duplexer implemented by DWDM filter
4.7 RSOA structure and upstream transmission
4.8 Incremental bandwidth scalability
4.9 M/M/1 queue model of tunable optical transmitter
4.10 Packet delay and jitter improvement due to statistical multiplexing gain
4.11 Bandwidth scalability of optical grid unit
4.12 Optical loss along the longest path
4.13 Further scale-up by splitting the optical grid unit
4.14 System testbed of the optical grid unit
4.15 Fast tunable laser
4.16 RSOA module employed on the system testbed
4.17 Downstream transmission experiment
4.18 Upstream transmission experiment
4.19 QoS implementation and experiment results on the system testbed
4.20 Appropriate optical link allocation to improve WMN performance
4.21 System operation of the proposed integrated routing algorithm
4.22 Bounded flooding of link state information
4.23 Simulation scenario
4.24 Normalized average throughput distributions
4.25 Average throughput and packet delay comparison
5.1 Burst-mode transmission on optical grid unit and burst-mode receiver
5.2 Burst-mode clock and data recovery
5.3 PLL CDR
5.4 LPF responses and eye diagram at different loop bandwidths
5.5 Digital ring oscillator
5.6 Two feedback loops in DR-OSC CDR
5.7 A single pulse applied to the DR-OSC CDR
5.8 Pulse train applied to the DR-OSC CDR
5.9 Simulation of DR-OSC CDR
5.10 Instantaneous clock and data recovery circuit
5.11 Gated Voltage Controlled Oscillator
5.12 The outputs of GVCOs and the resultant clock
5.13 Nonlinear clock extraction circuit
5.14 Frequency spectrum of random non-return-to-zero (NRZ) signal
5.15 Math model of the nonlinear clock recovery circuit
5.16 Input and output waveforms and corresponding average frequency spectrum
5.17 Waveforms of NRZ burst input and recovered clock
5.18 Impact of BPF bandwidth on the recovered clock quality
5.19 Dual-loop burst-mode CDR circuit architecture
5.20 Delay lock loop (DLL)
5.21 Simulation results of the VCDL control voltage with different LPF bandwidths
5.22 VCDL output clock waveforms with different loop gains
5.23 Charge pump delay lock loop (CPDLL)
5.24 Hogge phase detector
5.25 Waveforms of the nodes in the Hogge PD when the incoming data and the clock are aligned
5.26 Waveforms of the nodes in the Hogge PD when the clock leads the incoming data by 0.02 ns (bit period 0.1 ns, or 10 Gbps)
5.27 Control voltages of the DLLs with different loop gains
5.28 Phase relationships between input data and DLL output clock at different points of the burst
5.29 Control voltage under unequal frequency conditions
5.30 Phase tracking error of VCDL output clock in the presence of clock frequency deviation
5.31 VCDL output clock rising-edge variation during the random payload
5.32 Charge pump phase lock loop (CPPLL)
5.33 Root locus plot of 2nd-order CPPLL
5.34 VCO control voltage (Vctrl) of 2nd-order under-damped CPPLL
5.35 VCO control voltage (Vctrl) of 2nd-order over-damped CPPLL
5.36 VCO control voltage (Vctrl) responses to (1) input data consisting of alternating logic zeros and ones and (2) a real packet consisting of preamble and random payload
5.37 Third-order CPPLL
5.38 Locking dynamics comparison between 2nd-order and 3rd-order CPPLLs
5.39 Dual-loop CDR architecture
5.40 The VCDL control voltages under different conditions
5.41 Input data and clock phase relationship at different moments of time
5.42 The response of VCO control voltage
5.43 Jitter tolerance masks of dual-loop CDR structure and typical CPPLL

Chapter 1 Introduction
Since the first deployment of the Advanced Research Projects Agency Network [1] in 1969, communication networks have dramatically changed how people live, work, and interact. Historically, communication networks transport three types of services: voice, video, and data, together referred to as triple play. Conventional voice traffic is a two-way, point-to-point, continuous 3.4 kHz analog signal with a very stringent delay requirement. The standard TV signal is a broadcast, point-to-multipoint, continuous 6 MHz analog signal. Data traffic is typically bursty, with varying bandwidth and delay requirements. Since the traffic characteristics of voice, data, and video services and their requirements for quality of service (QoS) were fundamentally different, three major types of networks were built to render these services separately in a cost-effective manner: the public switched telephone network (PSTN) for voice conversation, hybrid fiber coax (HFC) networks for video distribution, and the Internet for data transfer.

Due to advances in digital communication technology, voice and video signals have been digitized to accommodate different transport platforms. Emerging multimedia applications such as video-on-demand, e-learning, and interactive gaming are often packaged and transported together with voice, data, and video services. Fueled by advances in computer technology, these applications are growing in size and demanding more and more bandwidth for transport. Table 1.1 lists common end-user applications and their bandwidth requirements. As demand continues to grow, the required bandwidth may even exceed 50 Mbps in the foreseeable future.

Table 1.1: Multimedia applications and their bandwidth requirements

Application              | Bandwidth   | Latency
Voice over IP (VoIP)     | 64 kb/s     | 200 ms
Video conference         | 2 Mb/s      | 200 ms
File sharing             | 3 Mb/s      | 1 s
SDTV                     | 4.5 Mb/s/ch | 10 s
Interactive game         | 5 Mb/s      | 200 ms
Telemedicine             | 8 Mb/s      | 50 ms
Real-time video          | 10 Mb/s     | 200 ms
Video on demand          | 10 Mb/s/ch  | 10 s
HDTV                     | 10 Mb/s/ch  | 10 s
Network-hosted software  | 25 Mb/s     | 200 ms

Depending on the application, additional requirements include protection, content distribution, low packet loss, multicasting, and security.
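To see how quickly the per-household total approaches that 50 Mbps mark, a quick sum over the Table 1.1 rates is enough; a minimal Python sketch follows, where the particular household service mix is an assumed example rather than data from the dissertation.

```python
# Back-of-the-envelope check of the claim that demand may exceed 50 Mbps,
# using the per-application rates of Table 1.1. The simultaneous household
# mix below is a hypothetical example, not data from the dissertation.
TABLE_1_1_MBPS = {
    "VoIP": 0.064,
    "Video conference": 2.0,
    "SDTV": 4.5,                  # per channel
    "Interactive game": 5.0,
    "Video on demand": 10.0,      # per channel
    "HDTV": 10.0,                 # per channel
    "Network-hosted software": 25.0,
}

# Assumed mix: two HDTV channels, one video-on-demand stream, one game
# session, one VoIP call, and network-hosted software running concurrently.
mix = ["HDTV", "HDTV", "Video on demand", "Interactive game", "VoIP",
       "Network-hosted software"]

total_mbps = sum(TABLE_1_1_MBPS[app] for app in mix)
print(f"Aggregate household demand: {total_mbps:.1f} Mbps")  # about 60 Mbps
```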

Breakneck increases in bandwidth demand spurred rapid evolution of communication technologies and fierce competition between service providers. Driven by user demands and stiff competition, service providers began to integrate these services and applications and strove to build new communication infrastructure to meet the new challenges. As a result, we have witnessed rapid development of communication infrastructure around the world and explosive growth of the Internet within the past decade.

Today's communication network, taken as a whole, is an exceptionally complicated system covering the entire globe. Such an intricate system is built and managed on a hierarchical structure, which consists of four network layers. These network layers cooperate to achieve the ultimate goal: anyone, anywhere, anytime, and any-media communications.

1.1

Communication Network Hierarchy

The hierarchical structure of today's communication network consists of local area, access, metropolitan area, and wide area networks, as illustrated in Figure 1.1.

Figure 1.1: Modern Communication Network Hierarchy

Local area networks (LANs) connect computers and other electronic devices
such as printers within a home, office, or building. Hence, the geographical coverage of LANs is relatively small, spanning from a few meters to hundreds of meters. LANs are generally not part of any public network but are independently owned and operated by private organizations. The most popular LANs are Ethernet based, supporting up to a few hundred users with bit rates reaching 1 Gbps.

Access networks connect the computers and communication equipment of a private organization to a public telecommunication network, bridging end users to service providers via twisted pairs (phone lines), coaxial cables, optical fiber, or wireless links. The typical distance covered by an access network ranges from a few kilometers up to twenty kilometers. Today's access networks are mostly based on copper wires and can provide users with up to approximately 10 Mbps of bandwidth.

Metropolitan area networks (MANs) aggregate the traffic from multiple access networks and transport it at higher speed. MANs typically cover regions ranging from metropolitan areas to small regions in the countryside. The topology is usually a fiber ring connecting multiple central offices, with transmission data rates typically reaching up to 40 Gbps.

Wide area networks (WANs) carry considerable amounts of traffic coming in from cities, countries, and continents. WANs multiplex traffic from MANs, transporting the resulting aggregated traffic at a significantly higher bandwidth. The data transmission rate can reach up to 1 Tbps when using wavelength division multiplexing (WDM) technology over optical fibers. The coverage of WANs can span thousands of kilometers.

Beyond WANs, submarine optical links connect different continents. Generally, these submarine systems are point-to-point links with large capacities covering long distances. Submarine links are presently deployed across the Pacific and Atlantic oceans. Shorter submarine links are also widely used in the Mediterranean, Asian Pacific, and African regions.


1.2 Bottleneck: Access Networks

Within the past decade, increasingly demanding multimedia applications have fueled the explosive growth of the Internet, which has progressively pervaded every part of our lives, from home to workplace. As bandwidth usage by end users keeps growing, all four network layers need to provide sufficient bandwidth to accommodate the accelerating demand. Among the four network layers, WANs and MANs are built upon long-distance fiber optic links, where bandwidth can be readily enhanced using advanced photonic technologies. In the case of LANs, which are deployed in small areas, short-distance broadband technologies such as Gigabit Ethernet and 802.11n WiFi can be used to enhance the bandwidth. Access networks, however, are mostly based on copper wires, which have fundamental bandwidth limits. For this reason, access networks have become the so-called first/last-mile bottleneck in the communication network hierarchy (since the access network bridges users and central offices that are miles apart, the link is often called the last mile by service providers, but from an end user's perspective it is the first mile). The resulting delays in data access and audio/video downloads have earned the Internet the nickname "World Wide Wait". Alleviating this bottleneck has been a very challenging task for service providers.

In addition to the bandwidth bottleneck, increasing mobility requirements present new challenges for service providers. Mobility is highly desirable for users because it enables access to the Internet regardless of location. The term "quadruple play", referring to voice, video, data, and mobility, has recently emerged, indicating the importance of mobility alongside the three conventional services. Furthermore, unlike WANs and MANs, which consist of point-to-point links, access networks may have to reach millions of users individually, with the network infrastructure widely deployed in metropolitan areas. To enable wide deployment and affordable fees for users, the cost of network deployment and maintenance must be low, which is why conventional access networks were built on previously existing twisted pair and cable technology. Hence, any new access technology based on a different infrastructure must take cost into account.

1.3 Brief Introduction to Access Technologies

For broadband access services, there is strong competition among several technologies, namely digital subscriber line, hybrid fiber coax, wireless, and FTTx (fiber to the x, where x stands for home, curb, neighborhood, office, business, premises, or user). For comparison, Table 1.2 summarizes the bandwidth (per user) and physical range of these technologies. The following sections briefly introduce these access technologies.

1.3.1 Digital Subscriber Line

Digital subscriber line (DSL) is a family of access technologies that utilize the telephone line (twisted pair) to provide access service. Although the audio signal (voice) carried by a telephony system is limited to the 300 Hz - 3.4 kHz range, the twisted pair is capable of handling frequencies up to tens of megahertz. DSL takes advantage of this unused band and transmits data over multiple frequency channels in the presence of the analog telephony signal. DSL comes in different flavors, supporting various downstream and upstream bit rates and access distances. DSL standards are defined in ANSI T1 and ITU-T Recommendations G.992/993. Collectively, these DSL technologies are referred to as xDSL. Two commonly deployed DSL standards are ADSL and VDSL.

ADSL supports asymmetric rates between downstream (central office to user) and upstream (user to central office) transmission, as its name suggests. Depending on the length and signal-to-noise ratio of the twisted pair, the downstream bit rate can be as high as 10 times the upstream rate. The maximum reach of ADSL is 5.5 km. ADSL1 supports downstream bit rates up to 8 Mb/s and upstream data rates up to 896 kb/s, while ADSL2 supports up to 15 Mb/s downstream and 3.8 Mb/s upstream.

To support higher bit rates, the very-high-speed DSL (VDSL) standard was developed. Trading transmission distance for data rate, VDSL supports much higher data rates but with very limited reach. The VDSL1 standard specifies data rates of 50 Mb/s for downstream and 30 Mb/s for upstream transmission. The maximum reach of VDSL1 is limited to about 1,500 meters. A newer version, VDSL2, is an enhancement of VDSL1, supporting data rates up to 100 Mb/s but with a transmission distance of 500 meters. At 1 kilometer, the bit rate drops to 50 Mb/s. For reaches longer than 1.6 kilometers, VDSL2 performance is close to ADSL. Because of its higher data rates and ADSL-like long-reach performance, VDSL2 is considered a very promising solution for upgrading existing ADSL infrastructure.

1.3.2 Hybrid Fiber Coax

Coaxial cable networks were originally developed for TV signal distribution, so they are optimized for one-way, point-to-multipoint broadcasting. Owing to the maturity of optical communication technologies, most cable TV systems have gradually been upgraded to hybrid fiber coax (HFC) networks, eliminating numerous electronic amplifiers along the trunk line. In HFC networks, analog TV signals are carried from the cable head end to distribution nodes via optical fiber, and from each distribution node, coaxial cable drops are deployed to serve 500 to 2,000 subscribers. An HFC network is a shared-medium system with a tree topology. A cable modem deployed at the subscriber end provides the data connection to the cable network, while at the head end a cable modem termination system connects to a variety of data servers, providing service to subscribers.

Compared with the twisted pairs in the telephone system, coaxial cable supports a much higher bandwidth (roughly 1,000 MHz in total). However, as cable systems are shared-medium networks, this bandwidth is shared by all the cable modems connected to the network. By contrast, DSL uses a dedicated twisted pair for each user, so no bandwidth sharing occurs across users. Furthermore, on a shared medium, when congestion occurs in a specific channel, the medium access control (MAC) protocol must instruct cable modems to tune to a different channel, which leads to significant overhead. As a result, depending on the signal-to-noise ratio of the coaxial cable, downstream rates can reach values such as 40 Mbps, while upstream transmission is typically limited to a fraction of the downstream rate, such as 10 Mbps.
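As a rough illustration of the sharing penalty just described, the following sketch divides one downstream and one upstream channel among the active modems on a single node. The 40 Mbps and 10 Mbps channel rates come from the text; the subscriber counts and the 20% concurrency factor are assumptions made only for illustration.

```python
# Rough per-subscriber estimate for a shared HFC node. The 40 Mbps downstream
# and 10 Mbps upstream channel rates come from the text; the subscriber counts
# and the 20% concurrency factor are assumptions for illustration only.
def per_user_rate(channel_mbps: float, subscribers: int, active_fraction: float = 0.2) -> float:
    """Average rate per active cable modem sharing one channel."""
    active_modems = max(1, int(subscribers * active_fraction))
    return channel_mbps / active_modems

for subs in (500, 2000):
    down = per_user_rate(40.0, subs)
    up = per_user_rate(10.0, subs)
    print(f"{subs} subscribers -> {down:.2f} Mbps down, {up:.2f} Mbps up per active modem")
```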

1.3.3 Passive Optical Networks


Due to physical media constraints, the product of bandwidth and transmission distance for both copper wire and wireless access technologies is very limited. In response to the foreseeable increase in demand, fiber optic access solutions, i.e., FTTx, have long been expected to be the most promising technology for widening the access pipes. Among the various optical solutions, the passive optical network (PON) [2], [3] is the most promising because of its cost-effectiveness, which is important in access networks. As the name implies, no active component is deployed in the field; active devices exist only in the central office and at the user premises. The passive infrastructure leads to low-cost network deployment and system maintenance. Depending on the multiplexing scheme, PONs can be categorized into Time Division Multiplexing (TDM) PONs and Wavelength Division Multiplexing (WDM) PONs. Current TDM PONs provide approximately 1 Gbps shared by up to 64 users, while next-generation TDM PONs will provide up to 10 Gbps of total bandwidth. In WDM PONs, a dedicated link is provided to each user, who can have up to 10 Gbps of bandwidth [4]. The next chapter introduces these two categories of PONs in more detail.
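The per-user contrast between the two PON families quoted above can be made concrete with a few lines of arithmetic; the sketch below uses the shared 1 Gbps TDM PON capacity, the typical split ratios, and the dedicated 10 Gbps WDM PON wavelength mentioned in the text.

```python
# A minimal sketch of the per-user bandwidth contrast described above:
# a 1 Gbps TDM PON time-shared among up to 64 users versus a dedicated
# wavelength of up to 10 Gbps per user in a WDM PON (figures from the text).
def tdm_pon_per_user(total_gbps: float, split: int) -> float:
    """Capacity each ONU sees when the PON bandwidth is time-shared."""
    return total_gbps / split

for split in (16, 32, 64):
    per_user_mbps = tdm_pon_per_user(1.0, split) * 1000
    print(f"TDM PON, 1 Gbps, 1:{split} split -> {per_user_mbps:.1f} Mbps per ONU")

print("WDM PON, dedicated wavelength -> up to 10 Gbps per ONU")
```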

1.3.4 Wireless Access Technologies

As mentioned previously, mobility is highly desirable for end users because it enables them to access information regardless of location. Wireless communications technologies are undoubtedly the key to providing such mobile and ubiquitous connections to Internet information services. As a result, a number of wireless access technologies, such as free-space optical communications, WiMAX, and Wi-Fi mesh networks, have recently been developed as alternatives to conventional wired access service. Except for free-space optical communications, these wireless access technologies use radio frequency (RF) signals to establish communication links between the central office and subscribers. The choice of radio access technology largely depends on the application, required data rate, available frequency spectrum, and transmission distance.

WiMAX is an access technology based on the IEEE 802.16 standards [5], [6]. A WiMAX base station can support a total data rate of up to 75 Mbps for residential and business users within a kilometer range. For WiMAX networks of normal size, residential and business subscribers typically receive up to 1 Mbps and a few Mbps of bandwidth, respectively. To support high-bandwidth and delay-sensitive multimedia applications such as voice and video streaming, Quality of Service (QoS) capabilities are implemented in WiMAX to guarantee the user experience.

Wireless Fidelity (Wi-Fi), based on the IEEE 802.11 standards, was developed in the 1990s for wireless local area networks (WLANs). Wi-Fi mesh networks have emerged as an access solution in urban areas because of the high and wide penetration of Wi-Fi in personal electronic devices. A Wi-Fi mesh network is based on wireless links running among multiple Wi-Fi routers, forming a wireless mesh backbone. User traffic usually passes through several hops before reaching its destination. These access technologies are summarized in Table 1.2.

Table 1.2: Comparison of bandwidth and reach for popular access technologies

Service | Medium       | Downstream (Mb/s) | Upstream (Mb/s) | Max reach (km)
ADSL    | Twisted pair | 8                 | 0.896           | 5.5
ADSL2   | Twisted pair | 15                | 3.8             | 5.5
VDSL1   | Twisted pair | 50                | 30              | 1.5
VDSL2   | Twisted pair | 100               | 30              | 0.5
HFC     | Coax cable   | 40                | 9               | 25
BPON    | Fiber        | 155 or 622        | 155             | 20
GPON    | Fiber        | 1244 or 2488      | 155-2488        | 20
EPON    | Fiber        | 1250              | 1250            | 20
WiFi    | Free space   | 54                | 54              | 0.1
WiMAX   | Free space   | 75                | 75              | 5
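One way to see why copper and wireless access technologies hit a wall, anticipating the bandwidth-distance argument in Section 1.3.3, is to compute the bandwidth-distance product for each row of Table 1.2. The sketch below does exactly that, copying the table values and using the larger downstream figure where a range is given.

```python
# Bandwidth-distance product for the technologies in Table 1.2 (a small sketch;
# numbers copied from the table, larger downstream figure used where a range is given).
table_1_2 = {
    # name: (downstream Mb/s, max reach km)
    "ADSL": (8, 5.5), "ADSL2": (15, 5.5), "VDSL1": (50, 1.5), "VDSL2": (100, 0.5),
    "HFC": (40, 25), "BPON": (622, 20), "GPON": (2488, 20), "EPON": (1250, 20),
    "WiFi": (54, 0.1), "WiMAX": (75, 5),
}

for name, (mbps, km) in sorted(table_1_2.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name:6s}  {mbps * km:8.0f} Mb/s·km")
# Copper and wireless technologies cluster orders of magnitude below the PONs,
# which is the bandwidth-distance limitation elaborated in Section 1.3.3.
```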


Figure 1.2: Synergy of Optical and Wireless to Construct Access Network (PNRL: Photonics & Networking Research Lab, Stanford University; WCRG: Wireless Communications Research Group, Stanford University)

1.4 Convergence of Optical and Wireless Access

Although a variety of broadband access technologies have been developed to address the major challenges of access networks, such as bandwidth, mobility, and cost, it is difficult for any individual technology to resolve all of them, since each technology is designed to address a specific issue, often with tradeoffs affecting the others. For example, although PONs enable broadband provisioning, the dedicated infrastructure leads to significant deployment cost, and network availability is confined to a residential or business unit. On the other hand, despite their ubiquitous and flexible connectivity, wireless access technologies have limited bandwidth, preventing simultaneous broadband access for a large number of users.

In light of the complementary characteristics of optical and wireless technologies, a hybrid optical-wireless access network may provide a desirable compromise among bandwidth, cost, and network availability. Specifically, a hybrid optical-wireless access architecture consists of wireless networks in the users' proximity and optical networks as the backhaul, as shown in Figure 1.2. In the front end, wireless networks are deployed to penetrate into the users' vicinity, facilitating ubiquitous connectivity and minimizing deployment cost. In the back end, broadband optical backhaul networks connect the wireless networks to the central office, which manages the entire network. Compared with wireless access solutions, which are limited in bandwidth, a hybrid optical-wireless architecture should be able to facilitate bandwidth upgrades in the wireless segment by deploying more resources in the optical segment. On the other hand, compared with optical-only access solutions that require deploying infrastructure all the way to the users' proximity, a hybrid optical-wireless architecture should be able to reduce the service cost by replacing the last/first hundreds of feet of the optical segment with ubiquitous wireless links. As a result, a hybrid optical-wireless access network should enable scalable bandwidth provisioning according to demand in a cost-effective manner.¹

1.5 Dissertation Outline

This dissertation focuses on two novel optical backhaul architectures for hybrid optical-wireless access networks and a new optical burst-mode clock and data recovery (CDR) technique. The first optical backhaul architecture is designed to smoothly upgrade the wireless backhaul of a deployed wireless mesh network. The second architecture aims to enable a high-performance next-generation hybrid access network based on a clean-slate deployment. The new optical burst-mode CDR technique is an enabling device for the two proposed optical backhaul architectures.

The rest of the dissertation is organized as follows. Chapter 2 provides a detailed description of the key optical and wireless access technologies upon which the hybrid optical-wireless access network is built. Chapter 3 describes the hybrid optical wireless access network architecture, which facilitates a smooth upgrade from hierarchical wireless access networks using commercial optical access technologies. Chapter 4 describes the next-generation hybrid optical-wireless access network based on a clean-slate infrastructure deployment, and an integrated routing algorithm that can improve load balancing for hybrid optical wireless access networks. Chapter 5 first introduces up-to-date optical burst-mode CDR techniques and their limitations, and then describes the proposed dual-loop CDR architecture and evaluates its performance with analysis and simulations.
¹The hybrid optical wireless access network is a joint research project between the Wireless Communications Research Group (WCRG) and the Photonics and Networking Research Laboratory (PNRL) of the Electrical Engineering Department at Stanford University, which design and develop the wireless and optical segments, respectively.

Chapter 2 Enabling Optical and Wireless Access Technologies


2.1 Passive Optical Networks

As introduced in Chapter 1, passive optical networks (PONs) offer a promising optical access solution for significantly enhancing the bandwidth of access networks. Figure 2.1 illustrates the architecture of a passive optical network. At the central office, an optical line terminal (OLT) is deployed to communicate with the optical network units (ONUs) deployed on the user side. As mentioned in Section 1.3.3, there is no active component between the central office and the user premises; active devices exist only in the central office and at the user premises. From the central office, a feeder fiber runs to a 1:N passive optical multiplexer/demultiplexer near the user premises, which distributes the downstream traffic and aggregates the upstream traffic. The multiplexing ratio supported by a PON can span from 2 to 128 depending on the power budget; typical ratios are 16, 32, or 64. In all PONs, the downstream and upstream traffic are carried on different wavelengths to avoid collision. The output ports of the passive multiplexer connect to the ONUs via individual distribution fibers. The total length of feeder and distribution fiber is typically less than 20 km. The fibers and passive components between the central office and the user premises are commonly referred to as the optical distribution network. At the tail end of this network are the ONUs, which can be located at the user's home or office, or in a curbside cabinet. The resulting fiber connection, whether to the home, office, business, neighborhood, curb, or premises, is generalized by the term fiber-to-the-x (FTTx). In the case of fiber-to-the-neighborhood/curb/node, twisted pairs are typically deployed to connect end users to the ONUs, thus providing a hybrid fiber/DSL access solution. PONs can be categorized into time division multiplexing (TDM) PONs and wavelength division multiplexing (WDM) PONs, which the following two sections introduce.

Figure 2.1: Passive optical networks (OLT: Optical Line Terminal; ONU: Optical Network Unit)
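The link between split ratio and power budget mentioned above comes down to the 10*log10(N) loss of a 1:N splitter plus fiber loss over the reach. The sketch below works through that arithmetic; the 28 dB optical budget, 3.5 dB excess loss, and 0.3 dB/km fiber loss are illustrative assumptions rather than values from the dissertation, while the 1:2 to 1:128 range and 20 km reach come from the text.

```python
import math

# Power-budget sketch for the 1:N passive splitter and ~20 km reach discussed
# above. The 28 dB optical budget, 3.5 dB excess/connector loss, and
# 0.3 dB/km fiber loss are illustrative assumptions, not values from the text.
def splitter_loss_db(n: int, excess_db: float = 3.5) -> float:
    """Ideal 1:N splitting loss, 10*log10(N), plus a lumped excess loss."""
    return 10 * math.log10(n) + excess_db

BUDGET_DB = 28.0
FIBER_DB_PER_KM = 0.3
REACH_KM = 20

for n in (2, 16, 32, 64, 128):
    margin = BUDGET_DB - splitter_loss_db(n) - FIBER_DB_PER_KM * REACH_KM
    print(f"1:{n:<3d} split -> splitter loss {splitter_loss_db(n):5.1f} dB, margin {margin:5.1f} dB")
# With these assumptions the margin turns negative around a 1:128 split,
# which is consistent with typical deployments using 1:16, 1:32, or 1:64.
```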

2.1.1 TDM PON

TDM PON is well standardized and has been deployed to provide services in different countries [2], [3]. A typical TDM PON architecture is illustrated in Figure 2.2. The OLT transmits downstream data on the 1490 nm wavelength, and broadcast video is sent on the 1550 nm wavelength. Downstream transmission uses a "broadcast and select" scheme: the downstream data and video are broadcast to all the users, who are assigned different MAC addresses, and the data are then selected for each user based on the MAC address labeled on each packet. At the user end, all the ONUs transmit upstream data on the 1310 nm wavelength. To avoid collision, a multiple access protocol based on the time division multiple access principle is employed: each ONU is assigned time slots for upstream transmission by the OLT (the name TDM PON is derived from this TDM scheme). In a TDM PON, the multiplexer deployed in the field (as in Figure 2.1) is implemented with a low-loss passive optical splitter. Note that in addition to the passive infrastructure facilitating cost-effectiveness, optical devices such as the laser and optical receiver in the OLT are shared by all the ONUs. This resource sharing is another reason why TDM PON is a cost-competitive solution compared with other optical access solutions.

Figure 2.2: TDM PON

Early work on passive optical networks started in the 1990s, when telecom service providers and system equipment vendors formed the Full Service Access Networks (FSAN) working group [7]. The common goal of the FSAN group is to develop truly broadband fiber access networks. To provide services with guaranteed performance, the first PON standard, ATM PON (APON), was built on asynchronous transfer mode (ATM) technology because of its traffic management capabilities and robust QoS support. APON supports 622.08 Mbps for downstream transmission and 155.52 Mbps for upstream traffic. All user traffic is encapsulated in standard ATM cells, which consist of a 5-byte control header and 48 bytes of user data. The APON standard was ratified by ITU-T in 1998 in Recommendation G.983.1. In the early days, APON was mostly deployed for business applications, e.g., fiber-to-the-office, but APON networks have since largely been replaced by higher-bit-rate versions: BPON and GPON.

Based on APON, ITU-T further developed the broadband passive optical network (BPON) standard, specified in a series of recommendations in G.983. BPON is an enhanced version of APON that supports higher transmission rates and detailed control protocols. BPON provides a maximum downstream data rate of 1.2 Gb/s and a maximum upstream data rate of 622 Mb/s. ITU-T G.983 also specifies dynamic bandwidth allocation (DBA), management and control interfaces, and network protection. There has been large-scale deployment of BPON in support of fiber-to-the-premises applications.

The growing demand for higher bandwidth in access networks stimulated further development of PON standards with capacities beyond even BPON. Starting in 2001, the FSAN group developed a new standard called Gigabit PON (GPON), which became the ITU-T G.984 standard. GPON supports a maximum downstream/upstream data rate of 2.488 Gb/s, and its transmission convergence layer specifies the GPON frame format, media access control, operation and maintenance procedures, and encryption method. Besides the higher bit rate, GPON adopted the GPON Encapsulation Method (GEM), based on ITU-T G.7041 Generic Framing Procedure, to support different layer-2 protocols such as ATM and Ethernet. GEM enables backward compatibility with APON and BPON and exhibits better bandwidth efficiency than the Ethernet framing used by Ethernet PON (EPON), GPON's competing standard. Field trials and deployment of GPON have already started in North America for new installations and for replacement of existing BPONs.
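To make the upstream time-division multiple access idea described at the start of this section concrete, here is a minimal scheduling sketch in which the OLT turns per-ONU bandwidth requests into non-overlapping grants within one upstream frame. The frame size and request values are made-up illustrative numbers, and real PON grant mechanisms (e.g., EPON's MPCP or GPON's DBA) are considerably more elaborate.

```python
from typing import List, Tuple

# Minimal sketch of the upstream time-slot assignment principle described above:
# the OLT grants each ONU a non-overlapping transmission window within a frame.
# The frame length and per-ONU requests are made-up illustrative numbers.
def grant_slots(requests_bytes: List[int], frame_bytes: int) -> List[Tuple[int, int]]:
    """Return (start, length) grants per ONU, scaled to fit one upstream frame."""
    total = sum(requests_bytes)
    scale = min(1.0, frame_bytes / total) if total else 0.0
    grants, cursor = [], 0
    for req in requests_bytes:
        length = int(req * scale)
        grants.append((cursor, length))
        cursor += length  # next ONU starts after the previous one: no collisions
    return grants

print(grant_slots([3000, 500, 1500, 8000], frame_bytes=10000))
```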


While ITU-T rolled out the BPON and GPON standards, the IEEE Ethernet in the First Mile working group developed the Ethernet PON (EPON) standard based on Ethernet [2]. As its name suggests, EPON encapsulates and transports user data in Ethernet frames. EPON is thus a natural extension of the local area networks at the user premises, and it connects LANs to the Ethernet-based MAN/WAN infrastructure. EPON supports a maximum of 1.25 Gb/s (an effective data rate of 1.0 Gb/s) for both downstream and upstream transmission. Since there is no data fragmentation or reassembly in EPON and its requirements on the physical-medium-dependent layer are more relaxed, EPON equipment is less expensive than GPON equipment. As Ethernet has been widely used in local area networks, EPON has emerged as a very attractive access technology. Currently, EPON networks have been deployed on a large scale in Japan, serving millions of users.
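The gap between the 1.25 Gb/s line rate and the 1.0 Gb/s effective rate quoted above is consistent with EPON's 8b/10b line coding, in which every 8 payload bits are carried in 10 line bits; a one-line check:

```python
line_rate_gbps = 1.25
effective_gbps = line_rate_gbps * 8 / 10  # 8 payload bits per 10 line bits (8b/10b)
print(effective_gbps)                     # 1.0
```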

2.1.2 WDM PON

As user bandwidth demands keep increasing, TDM PON will eventually no longer be able to satisfy the growing demand. A few candidate solutions exist for enhancing the bandwidth of existing PONs in the short term. One solution is to physically split a single PON into multiple PONs by connecting some ONUs with new fibers and OLTs, so that each PON serves a smaller number of users and each user in turn has more available bandwidth. Another solution is to upgrade the TDM PON bit rate, for instance from 1.25 Gbps to 10 Gbps; in fact, the IEEE 802.3av study group is drafting a standard for 10 Gbps EPON. However, these two solutions are not very cost-effective and cannot scale up easily as bandwidth demands further increase. In addition, since the power distribution of the passive splitter is fixed, the first solution leads to an uneven power budget across users and limits the transmission distance. As a result, the WDM PON is widely considered the future-proof technology that can satisfy any bandwidth demand [4].

Figure 2.3 shows a generic network architecture of WDM PON. Optical transmitters with different wavelengths are deployed in both the OLT and the ONUs, and a passive wavelength division multiplexer, such as an arrayed waveguide grating (AWG), is deployed at the distribution node to separate and combine the downstream and upstream wavelengths. Each ONU is assigned a dedicated wavelength set for upstream and downstream transmission. If the user bandwidth demands are low, in other words if a small number of users can still share a single wavelength, then a passive optical splitter following the wavelength division multiplexer is used to broadcast the downstream traffic and combine the upstream traffic. In this case, multiple wavelengths separate a single PON into multiple logical TDM PONs: each PON runs on a different wavelength, and a smaller number of users share the bandwidth of each TDM PON. In addition, since the optical power is split among a smaller number of users, WDM PONs are less subject to optical power budget constraints and can support longer reach to the ONUs. If a user requires a large amount of bandwidth (e.g., a few gigabits per second), then a single wavelength can be dedicated to this specific user. In some extreme cases, the user can even be provided with multiple wavelengths. Hence, very large amounts of bandwidth can be provided to a single user if needed. Note that unlike TDM PON, the optical devices and resources at the OLT in a WDM PON are not shared by the ONUs.

Although the cost of WDM PON is inherently higher than that of TDM PON, the migration from TDM PON to WDM PON is believed to be ultimately necessary to address users' growing, insatiable hunger for bandwidth. To facilitate the migration, a smooth and cost-effective upgrade with minimum effect on legacy users is highly desirable. The best migration approach is still under debate. Various approaches to implementing WDM PON have been and are being explored, and field deployment has already started in Asia (Korea, to be exact [8]). A number of schemes for incorporating WDM technology into access networks have been studied and tested in experiments, and those WDM PON architectures exhibit exceptional features in the WDM implementation in the downstream direction, the upstream direction, or both. As optical technology becomes cheaper and easier to deploy and end users demand more and more bandwidth, WDM PONs will eventually make the first/last-mile bottleneck history [9].
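A small sketch of the dedicated wavelength-pair assignment described above follows: each ONU receives one downstream and one upstream wavelength routed through the AWG. The start wavelengths and the 0.8 nm (100 GHz) grid spacing are illustrative assumptions, not values taken from the dissertation.

```python
# Sketch of the dedicated wavelength-pair assignment described above: each ONU
# is given one downstream and one upstream wavelength routed through the AWG.
# The start wavelengths and 0.8 nm grid spacing are illustrative assumptions.
def assign_wavelengths(num_onus: int,
                       down_start_nm: float = 1530.0,
                       up_start_nm: float = 1550.0,
                       spacing_nm: float = 0.8) -> dict:
    plan = {}
    for i in range(num_onus):
        plan[f"ONU{i + 1}"] = {
            "downstream_nm": round(down_start_nm + i * spacing_nm, 2),
            "upstream_nm": round(up_start_nm + i * spacing_nm, 2),
        }
    return plan

for onu, lambdas in assign_wavelengths(4).items():
    print(onu, lambdas)
```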


Figure 2.3: WDM PON

2.1.3 PON Deployment Method and Cost

Historically, fiber optic telecommunications technology was costly, so it was employed mainly for long-haul systems. Nonetheless, in order to realize the last-mile optical connection to end users, new optical fiber deployment techniques and strategies have been developed for passive optical networks to reduce the deployment cost. Currently there are three deployment methods: (1) aerial: stringing fibers above the ground, sometimes using existing utility poles; (2) buried: digging trenches to install fiber; (3) conduit: placing fiber in an existing conduit underground. The typical costs of these deployment methods are summarized in Table 2.1.

Note that besides deployment of new optical fibers, there are existing unused dark and dim fibers (fibers of which only part of the bandwidth is used for communications) deployed underneath streets in urban areas. These dark or dim fibers can be readily employed as part of PONs at even lower cost.


Table 2.1: Cost of fiber deployment for PON

Deployment Method  | Deployment Cost
Aerial¹            | 900 USD/km
Trenching¹         | 1,200 USD/km
Existing conduit¹  | 700 USD/km
New conduit²       | 4,000 USD/km

¹ NFOEC 2007 keynote by Ronald Heron, Alcatel Inc.
² "Optimized Passive Optical Network Deployment", M. Hajduczenia et al., Journal of Optical Networking, Sep. 2007
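Using the per-kilometre figures in Table 2.1, a deployment budget can be estimated for a given route mix, as in the sketch below; the 20 km mix of conduit, aerial, and trenched segments is a hypothetical plan for illustration, not a case from the text.

```python
# Cost sketch using the per-kilometre figures from Table 2.1. The 20 km route
# mix below (mostly existing conduit, plus some aerial and trenched segments)
# is a hypothetical plan for illustration, not a case from the text.
COST_PER_KM_USD = {
    "aerial": 900,
    "trenching": 1200,
    "existing_conduit": 700,
    "new_conduit": 4000,
}

route_km = {"existing_conduit": 12, "aerial": 5, "trenching": 3}  # assumed mix

total_usd = sum(COST_PER_KM_USD[method] * km for method, km in route_km.items())
print(f"Estimated fiber deployment cost: {total_usd:,} USD")  # 16,500 USD
```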

2.1.4 Summary of Optical Access Technologies

The demand for broadband access has spurred rapid development and installation of PONs over the last several years. As we will see in Chapters 3 and 4, the high-speed communications technologies and low-cost infrastructure deployment developed for PONs are essential to realizing the optical backhaul of the hybrid optical wireless access architecture.

2.2 Enabling Wireless Access Technologies

As introduced in Chapter 1, wireless access technologies offer promising solutions to the mobility issue in access networks. In this section we introduce the wireless mesh network (WMN), the wireless access technology that enables the front-end wireless connectivity of the hybrid optical wireless access network.

2.2.1 Multi-Hop Wireless Communication Network

Like conventional cellular systems, current wireless access technologies, such as IEEE 802.16 Mobile Worldwide Interoperability for Microwave Access (WiMAX), are single-hop wireless communications systems. In a single-hop communication system, as depicted in Figure 2.4, packets pass through only one wireless link (between the user's device and the base station) on the way from sender to destination. To enhance service coverage and flexibility and to reduce infrastructure deployment cost, multi-hop wireless communications have been considered and added to current standards. For example, a new task group, IEEE 802.16j [10], was formed in 2006 to support mobile multi-hop relay (MMR) operation in the current IEEE 802.16e standard. The multi-hop structure of an IEEE 802.16j MMR network is depicted in Figure 2.5, where wireless packets are usually relayed by multiple base stations and hence pass through multiple wireless links.

Figure 2.4: IEEE 802.16 WiMAX: Single-Hop Wireless Communication Network

Figure 2.5: IEEE 802.16j: Mobile Multi-hop Relay WiMAX Network


2.2.2 Wireless Mesh Networks

Although multi-hop packet relay was only recently adopted by the IEEE 802.16 WiMAX standard, the idea of using multi-hop communications to enhance wireless network coverage and flexibility is not new. In 1972, the Department of Defense sponsored the Packet Radio Network (PRNET) project to provide packet-switched networking to mobile battlefield elements in an infrastructureless, hostile environment. Building on ancestor technologies such as PRNET, mobile ad hoc networks (MANETs) later emerged and drew a great deal of research attention in the wireless networking field [11]. A MANET is a system of wireless mobile nodes that can freely and dynamically self-organize into an arbitrary and temporary network topology without the need for a wired backhaul or centralized administration. Personnel and devices can be seamlessly internetworked in areas without any preexisting communication infrastructure. By periodically broadcasting and exchanging node information with adjacent nodes, each node discovers and maintains wireless connectivity with nearby nodes and keeps track of full or partial routing information to other nodes on the MANET. Data packets are usually handled by multiple nodes before reaching the destination node. Applications of MANETs include battlefield communications, emergency networks, and wireless sensor networks [11].

Fueled by the concepts, technologies, and algorithms developed for MANETs, wireless mesh networks (WMNs) have emerged within the last decade and been applied to more general applications such as extensions of wireless local area networks, city-wide wireless access networks, community wireless networks, and public safety networks [12]. WMNs consist of multi-hop wireless communication links which forward traffic en route to and from wired Internet entry points [13], as depicted in Figure 2.6. In a WMN, any node can communicate with any other node through a route comprised of multiple wireless links. For instance, node A and node B can communicate via a route formed by nodes 1, 2, and 3. WMNs dynamically self-configure and automatically maintain the mesh connectivity among nodes in the network. One important feature of WMNs is the self-healing capability: for example, if the link between nodes 1 and 2 fails, the traffic between nodes A and B can instead detour through the route formed by nodes 4, 5, and 6.
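The self-healing behaviour just described, with traffic between A and B detouring from the route through nodes 1-2-3 to the route through 4-5-6 when a link fails, can be sketched with plain shortest-hop routing on the example mesh. The breadth-first search below is a stand-in for a real WMN routing protocol, such as those surveyed later in this chapter.

```python
from collections import deque

# Sketch of the self-healing behaviour described above: shortest-hop routing on
# the example mesh, recomputed after the link between nodes 1 and 2 fails so
# that traffic between A and B detours over nodes 4, 5, and 6.
def shortest_path(adj, src, dst):
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None  # no route available

mesh = {"A": ["1", "4"], "1": ["A", "2", "4"], "2": ["1", "3", "5"],
        "3": ["2", "B", "6"], "4": ["A", "1", "5"], "5": ["4", "2", "6"],
        "6": ["5", "3", "B"], "B": ["3", "6"]}

print("normal route: ", shortest_path(mesh, "A", "B"))   # ['A', '1', '2', '3', 'B']
mesh["1"].remove("2")
mesh["2"].remove("1")                                    # link 1-2 fails
print("after failure:", shortest_path(mesh, "A", "B"))   # ['A', '4', '5', '6', 'B']
```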


Figure 2.6: Generic Wireless Mesh Network (WMN)

Depending on the applications and the type of nodes, WMNs can be categorized into (1) client WMNs and (2) infrastructure WMNs.

2.2.3 Client Wireless Mesh Networks

A client WMN is an infrastructureless, peer-to-peer wireless network. As illustrated in figure 2-7, a client WMN consists of only one type of mesh node, namely the client devices that provide the application interface to end-users. These client devices perform traffic routing and mesh configuration functionalities among themselves, and they are usually mobile. Recently, an IEEE technical task group under IEEE 802.11 has been working to develop a new client WMN standard, IEEE 802.11s [14]. Based on the PHY interface and MAC protocol defined by IEEE 802.11a/b/g, IEEE 802.11s focuses on network formation/maintenance and traffic routing. For portable client devices, such as laptops, Personal Digital Assistants (PDAs), etc., the transmission power is usually limited, so IEEE 802.11s is considered an extension of wireless local area networks (WLANs) [15].


Figure 2.7: Client Wireless Mesh Network

2.2.4 Infrastructure Wireless Mesh Networks

An infrastructure WMN has a hierarchical network architecture, which consists of two types of wireless nodes: wireless mesh routers and client devices, as illustrated in figure 2-8. The wireless mesh routers are deployed as infrastructure without mobility, while the client devices are usually mobile and provide the application interface to the end-users. In this architecture there are two link layers: (1) the links between mesh clients and mesh routers, and (2) the links between mesh routers. As shown in figure 2-8, the wireless mesh routers form a self-configuring, self-healing mesh network. One or more of the mesh routers have a wired connection to the Internet; these are called the gateway routers, and traffic is distributed from or aggregated to them. The upstream traffic from an end-user is first transmitted by the mesh client and received by a nearby mesh router. The packets are then relayed to one of the nearby gateway routers. The downstream traffic is forwarded from the gateway router to the end user in the same manner but in the reverse direction. IEEE 802.11 technologies are widely used in most infrastructure WMNs because of their high popularity among portable electronic devices and PCs and their low cost [12], [13]. The higher-layer links can be built using various types of radio technologies in addition to the IEEE 802.11 technologies. Nowadays infrastructure WMNs are emerging as a promising wireless access solution to complement or even replace wired access networks because of their cost-effectiveness, scalable architecture, and flexible network deployment. Since


the emphasis of this thesis is on access networks, in the rest of this chapter we will focus on the infrastructure WMN, and the term "WMN" will hereafter refer to the infrastructure WMN.

*" Mesh ^ Router

^Client ^Device

^Gateway L Router P

Figure 2.8: Infrastructure Wireless Mesh Network Given the rapid advance in wireless communications, WMNs have been commercialized using high-speed and cost-effective wireless interfaces and technologies. In some US cities, for example, WMNs using IEEE802.il a/b/g as the interface have been deployed to provide wireless Internet access services [16], [17], [18]. By upgrading to higher data rate technologies, such as mobile WiMAX [19], WMNs have been considered a promising wireless access solution to replace or compliment existing access networks in metropolitan areas. Hence, in the hybrid optical-wireless architecture to be introduced, WMNs will serve as the wireless segment. In the following sections we will introduce WMNs in detail.


Table 2.2: Summary of IEEE 802.11 standards

Parameters                   802.11b       802.11a        802.11g
Max Bit Rate                 11Mbps        54Mbps         54Mbps
Non-overlapping channels     3 x 20MHz     15 x 20MHz     3 x 20MHz
Max Uplink Distance          100m          50m            100m

2.2.5 PHY and MAC Layers of WMN

As mentioned, IEEE 802.11b/a/g (Wi-Fi) technologies are currently widely exploited in the links between mesh routers [16], [17], [18], and such Wi-Fi based WMNs are being deployed in urban areas in North America. However, because of the multi-hop communications among mesh routers in an open environment, the co-channel interference among mesh routers increases significantly as the network load grows. Since they were originally designed for local area networks, the PHY technology and the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol of Wi-Fi are not optimized for high-load situations. Therefore the higher-layer network efficiency of a Wi-Fi based WMN degrades significantly when the network is highly loaded. The family of IEEE 802.11 standards is summarized in table 2-2. To improve the overall network performance of WMNs, new technologies and protocols in the PHY and MAC layers, as well as new routing protocols, have been proposed [12]. In the PHY layer, smart antennas, multi-input multi-output (MIMO), and multi-channel/multi-interface systems are being explored to enhance network capacity. MAC protocols based on distributed time and code division multiple access are expected to improve bandwidth efficiency over CSMA/CA [12]. Furthermore, since packets are routed among mesh routers in the presence of interference, shadowing, and fading, a cross-layer design is required to optimize routing in WMNs. To accommodate the projected demand increase of Wi-Fi based WMNs, WMN vendors have proposed using WiMAX (IEEE 802.16) to enhance the higher-layer link capacity [19]. Ultra-high-bandwidth standards such as the emerging IEEE 802.16m [20], which aims to provide 1Gb/s and 100Mb/s shared bandwidth for residential and mobile users respectively, can be employed to further enhance capacity.


2.2.6 Routing Algorithms of Wireless Multi-Hop Networks

Routing for wireless multi-hop networks has been extensively researched in the context of wireless ad-hoc networks. Several routing algorithms and protocols exist, with varying performance characteristics. Basically, these routing protocols can be categorized into two classes: the proactive class, represented by Optimized Link State Routing (OLSR) [21], and the reactive class, represented by the Ad-hoc On-Demand Distance Vector (AODV) [22] protocol. A proactive routing protocol computes possible paths to all destinations, regardless of the traffic demand between communicating node pairs, and stores them in routing tables. Since these tables need to be updated in a time-varying environment, the overhead may become prohibitively high for a large network. Reactive routing protocols, on the other hand, compute routing paths on an on-demand basis, depending on the dynamics of the traffic and the network topology. The overhead of maintaining a large routing table can thus be avoided; however, the delay for establishing a new path is longer. The two categories of proactive and reactive routing protocols and their features are summarized in figure 2-9.
Figure 2.9: Two Categories of Routing Algorithms in Wireless Ad-Hoc Networks. Proactive routing: routes are calculated before they are needed and routing information to all nodes is kept; issue: large routing tables and bandwidth overhead for large networks; example: Optimized Link State Routing (OLSR). Reactive routing: routes are calculated when needed and routing information to all nodes is not kept; issue: high delay for large networks; example: Ad-hoc On-Demand Distance Vector (AODV).

More recently, in the context of wireless community networks, several routing


metrics have been suggested to incorporate underlying physical layer conditions [23], [24], [25], [26]. For example, the expected transmission count (ETX) is proposed in [25] to minimize the expected total number of packet transmissions required to successfully deliver a packet to the destination. The metric incorporates the effects of the packet loss rates of both the forward and reverse directions of a link and of the interference among the successive hops of a path. In [25], ETX was adapted into two conventional shortest-path routing protocols, Destination-Sequenced Distance Vector routing (DSDV) [27] and Dynamic Source Routing (DSR) [28], and was shown to improve network performance via simulations and a test-bed implementation consisting of 29 nodes in an office building. Another similar metric, the expected transmission time (ETT) proposed in [23], [24], considers the bandwidth of a link as well as its loss rate. The metric was used in a routing scheme for networks employing multiple radio interfaces and was shown to improve performance over a scheme based on ETX. In [26], another routing algorithm based on a cross-layer approach is proposed. The authors introduce physical layer metrics representing the transmission rate, interference, and packet error rate (PER), and develop a heuristic algorithm for joint power control and routing to optimize the information-theoretic capacity [29] of "dense" multi-hop wireless networks. Simulation results showed that the heuristic algorithm improved network throughput noticeably when compared to the commonly benchmarked routing protocols AODV [22], DSDV [27], and DSR [28].

Although a WMN is a kind of wireless multi-hop network, it is very different from wireless ad-hoc networks in terms of network architecture and applications. For example, wireless ad-hoc networks are designed to support mobility among the nodes, minimize power consumption, and forward peer-to-peer traffic among the nodes. In contrast, the wireless mesh routers in a WMN have no mobility, fewer power consumption constraints, and higher computational capability. In addition, the traffic pattern in WMNs may be very different from the peer-to-peer traffic of wireless ad-hoc networks, since traffic is mostly forwarded between the gateway routers and end-users [30]. To date, however, most WMN routing protocols are retrofitted from wireless ad-hoc network architectures. Hence, research on routing algorithms specifically aimed at optimizing WMN network performance is currently very active.
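To make the ETX metric concrete, we give a minimal sketch below (our own illustration, not code from [25] or from this dissertation): the ETX of a link is computed from the measured forward and reverse probe delivery ratios as 1/(d_f × d_r), and a path's cost is the sum of its link ETX values. The delivery-ratio numbers in the example are arbitrary.

    # Minimal illustration of the ETX routing metric: link ETX = 1 / (d_f * d_r),
    # where d_f and d_r are the measured forward and reverse probe delivery
    # ratios; a path's cost is the sum of its link ETX values.

    def link_etx(d_f: float, d_r: float) -> float:
        """Expected number of transmissions (data + ACK) needed on this link."""
        if d_f <= 0 or d_r <= 0:
            return float("inf")        # link effectively unusable
        return 1.0 / (d_f * d_r)

    def path_etx(links) -> float:
        """links: iterable of (d_f, d_r) pairs along a candidate route."""
        return sum(link_etx(d_f, d_r) for d_f, d_r in links)

    # Example: a 3-hop route with a lossy middle hop vs. a 2-hop route with a weak link.
    route_a = [(0.90, 0.90), (0.60, 0.70), (0.90, 0.95)]
    route_b = [(0.50, 0.50), (0.95, 0.95)]
    print(path_etx(route_a), path_etx(route_b))   # the smaller sum wins route selection

The route with the smaller summed ETX is preferred, which is how the metric replaces plain hop count in DSDV- or DSR-style shortest-path routing.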


Figure 2.10(a): Scalability in a one-dimensional exemplar WMN. MR: Wireless Mesh Router, GR: Wireless Gateway Router, Dagg: aggregated data rate, CMR-MR: link capacity between adjacent MRs, CMR-User: link capacity between an MR and its users, DMR: total user demand within the area served by an MR.

2.2.7 Capacity Scalability of WMN

Capacity scalability is a challenging issue for WMNs. Unlike fiber-optic networks, where capacity can be enhanced simply by adding wavelengths and transceivers, a WMN has scarce RF bandwidth. In WMNs, frequency channels are reused as in a cellular system. In cellular systems, cell splitting, which involves deploying more base stations in an area (so that each cell or base station serves a smaller area), is a common technique for enhancing overall system capacity. In WMNs, however, straightforward cell splitting, i.e. deploying more wireless mesh routers in an area, will not lead to the same capacity enhancement as in cellular systems [13], [12]. To clarify this issue, let us consider a one-dimensional WMN as illustrated in figure 2-10(a), where each mesh router (MR) is modeled as a node connected to other nodes and to a "local water tank" via pipes. The water tank models the service area (or cell) of each mesh router, and in each tank the traffic load of the users (DMR) is modeled as water. The thickness of a pipe signifies the capacity of the corresponding link (CMR-User or CMR-MR), and we assume that the capacities of the two kinds of links (layers) are independent.


Figure 2.10(b): Scalability in a one-dimensional exemplar WMN — overall load increase (con't).

The gateway router (GR), which has a backhaul connection to the Internet, is the outlet of the entire system, and we assume that there is only upstream traffic. As the traffic load from the end users under each mesh router increases, the link capacity between the mesh router and its end users may be exhausted, as indicated in figure 2-10(b). To reduce the load on each mesh router, we can deploy more mesh routers with lower transmission power in a given area, as in figure 2-10(c), which is analogous to cell splitting in cellular systems. Note that after cell splitting, the pipe between the gateway router and its adjacent mesh router needs to conduct the aggregated flows (Dagg) from more mesh routers; in this example the number increases from three (figure 2-10(a)) to five (figure 2-10(c)). Also note that the pipes connecting the routers may become thinner after cell splitting because of the increased co-channel interference from the other routers. Since the link connecting the gateway router and its adjacent mesh router now aggregates traffic from a larger number of mesh routers, the aggregated load (Dagg) may soon exhaust the capacity of this link, as shown in figure 2-10(d). The bandwidth insufficiency of this link will choke each mesh router's throughput and the overall network capacity of the WMN.


Figure 2.10(c): Scalability in a one-dimensional exemplar WMN — cell splitting to reduce the average loading; the pipes between routers may become thinner due to increased interference (con't).

In other words, after cell splitting, although each router's average load is reduced, the throughput of each router and the overall capacity of the WMN may not necessarily increase. To improve the performance, a straightforward solution is to upgrade the link capacity by exploiting more frequency channels and radio interfaces on the routers [12], i.e. widening the pipes of the model. However, since the number of available channels in any wireless system is limited and the right to use licensed bands is very expensive, this approach is constrained by either the finite capacity or the cost. Instead of upgrading the capacity of the wireless links, the throughput per router and the overall capacity can be enhanced by scaling the number of gateway routers along with the number of mesh routers. This can be accomplished by replacing the middle mesh router with another gateway router, as illustrated in figure 2-10(e). After placing this gateway router, traffic from several mesh routers can be routed to the new gateway router, and the bottleneck at the original gateway router link is thus mitigated. Having gained qualitative insight into the scalability issues of WMNs and the proposed solution, let us analyze the maximum throughput per router and the overall capacity of a one-dimensional WMN.


Figure 2.10(d): Scalability in a one-dimensional exemplar WMN — overall load keeps increasing (con't).

Figure 2.10(e): Scalability in a one-dimensional exemplar WMN — addition of a wireless gateway router. MR: Wireless Mesh Router, GR: Wireless Gateway Router, Dagg: aggregated data rate, CMR-MR: link capacity between adjacent MRs, CMR-User: link capacity between an MR and its users, DMR: total user demand within the area served by an MR.


In this analysis, we first assume that there is only one gateway router at the end of the one-dimensional WMN, as in figure 2-11, and that the link speed of the higher layer of links among the mesh and gateway routers is 54Mbps and does not interfere with the links of the lower layer. We further assume that the distance between any two adjacent mesh routers of the WMN is a constant Δ, as shown in figure 2-11, and that all the routers are identical in transmission power, receiver sensitivity, and type of antenna (omni-directional).
" Interference Range - - Transmission Range

.<<**

>.

'"0a

j-0- -0-0-f-:-0-r-|-O
g *

O
*

Of
tt

*
*

%
i. /^

#
^a

*
* *

%
* #

I'
*

hd

7^

Figure 2.11: 1-Dimensional WMN The radio channel(s) are spatially reused, and the transmission of one mesh router will introduce co-channel interference to other mesh routers. By applying the simplified range model in [31], [32], we assume that the transmission range (TR) is A+ (A< A+ <2A), and the interference range (IR) is 2A+ (2A< 2A+ <3A). In other words, each mesh router can only correctly receive packets from its two adjacent routers and is interfered by routers that are up to two hops away. Given these assumptions, we simulate the throughput per router and the overall capacity as a function of the number of mesh routers. As shown in figure 2-12(a) and figure 2-12(b), the maximum throughput drops to zero and the overall network capacity reaches a finite value as the number of routers is increased. On the other hand, if we add one gateway router to every four additional mesh routers to keep the ratio of gateway routers to mesh routers constant, the average throughput will approach a finite value and the overall capacity will continue scaling


Figure 2.12(a): Throughput per router and overall capacity of a one-dimensional WMN — throughput vs. number of routers without an increase in the number of gateways (link rate 54Mbps). GW: gateway router, MR: mesh router.

Another important observation revealed in figure 2-12(c) and figure 2-12(d) is that, for the same number of mesh routers in the WMN, increasing the ratio of gateway routers to mesh routers improves both the throughput per router and the overall network capacity. This is because, with a higher gateway-to-mesh-router ratio, the average number of hops a packet takes is reduced. Consequently, to enable a flexible increase in the number of wireless gateway routers in a WMN, a broadband and scalable backhaul network is required. The backhaul network needs to cost-effectively connect numerous wireless gateway routers that are geographically scattered over a large area, and to accommodate future increases in their number. The backhaul network for WMNs can be built on either wireless or wired infrastructure. In the following sections, we review these backhaul technologies.


Figure 2.12(b): Overall capacity vs. number of routers without an increase in the number of gateways (link rate 54Mbps) (con't).

2.2.8 Backhaul Solutions for WMN

In terms of reducing deployment time and cost, wireless backhauls provide an attractive solution for network operators. Taking the proposed wireless access network for San Francisco [33] as an example, wireless point-to-multipoint and ring links are deployed to collect and distribute WMN traffic across the city. The network architecture, shown in figure 2-13(a), forms a hierarchical wireless access network with five kinds of nodes: wireless mesh routers, wireless gateway routers, access towers, aggregation towers, and master towers. As shown in figure 2-13(b), there are three layers of wireless links between the different kinds of nodes. (1) The 1st layer (Mesh Layer) consists of the wireless links between the wireless mesh routers and gateway routers of the WMN, which are deployed throughout the urban area and penetrate into users' premises to provide ubiquitous, blanket-coverage connection.


Figure 2.12(c): Throughput per router vs. number of routers with the number of gateways increased in proportion (link rate 54Mbps) (con't).

(2) The 2nd layer (Capacity Injection Layer) consists of high-speed point-to-multipoint wireless links between an access tower and the wireless gateway routers of the WMN; it aggregates the traffic from the Mesh Layer. For each access tower, approximately 300Mbps of bandwidth [34] is shared among multiple wireless gateway routers. (3) The 3rd layer (Backhaul Layer) consists of (a) point-to-point wireless links between the access towers and the aggregation towers, and (b) point-to-point wireless links among the aggregation towers and the master tower. The wireless links among the aggregation towers and the master tower form a ring network, and a pair of radios is used to provide redundancy. Note that in this layer each point-to-point link has dedicated (unshared) bandwidth of less than 1Gbps.


Figure 2.12(d): Overall capacity vs. number of routers with the number of gateways increased in proportion (link rate 54Mbps) (con't).

Note that the 2nd and 3rd layers jointly constitute the backhaul for the lower WMN layer. In [33], proprietary wireless technologies are employed to realize the backhaul network. Although this proposed wireless backhaul can be deployed readily to bring up the service quickly, its capacity is very limited in terms of supporting the aggregated traffic of an entire city. The capacity of the 2nd and 3rd layers will therefore soon be exhausted after the network is deployed. The bandwidth insufficiency in the backhaul layers will be further exacerbated as the bandwidth of the lower WMN layer is enhanced by advanced wireless technologies.

2.2.9 Candidate Solutions for Broadband Backhaul in WMN


To aggregate the huge volume of WMN traffic from an urban area, a broadband backhaul network is critically important. Besides providing high capacity, the backhaul network should also support long-distance transmission to collect traffic from numerous wireless gateway routers scattered around the urban area.


Figure 2.13: Hierarchical Wireless Access Network (proposed by Google-Earthlink for San Francisco City). (a) Network architecture: master tower, aggregation towers, access towers, wireless gateway routers, wireless mesh routers, and end users. (b) The three layers of the hierarchical architecture: Backhaul Layer (point-to-point wireless links), Capacity Injection Layer (point-to-multipoint, ~300Mbps shared), and Mesh Layer (wireless mesh, 54Mbps shared).


To fulfill these two requirements simultaneously, copper wire technologies do not appear to be an appropriate solution because of their limited bandwidth-distance product. The bit rate and transmission distance of the most advanced Digital Subscriber Loop technology, VDSL2, and of coaxial cable are summarized in figure 2-14(a) and figure 2-14(b). As a result, we envision ultra-broadband wireless technologies and fiber optics as the most appropriate candidates.

Figure 2.14: Fundamental limits on the transmission distance of copper wire technologies. (a) Bit rate versus distance of VDSL2 (e.g. 100Mb/s symmetric at 500m); (b) attenuation of coaxial cable.


For wireless approaches, millimeter-wave [35] and free space optical [36] technologies are the two candidates because of their high capacity. However, due to their high carrier frequencies, these broadband wireless technologies are limited to line-of-sight conditions. Hence, reliability becomes an issue: links may degrade or break because of obstruction or even harsh weather conditions, and a degraded or broken backhaul link will affect numerous downstream WMN users. In addition, these wireless technologies can operate at full rate only within certain distance limits: the maximum distance between millimeter-wave transceivers is 1.2km [35], and the maximum distance between free space optical transceivers is 2km [36]. These distance limits increase the deployment cost and constrain the flexibility of backhaul deployment. The data rate, cost, and transmission distance of commercial millimeter-wave and free space optical products are summarized in the first two columns of figure 2-15.
Figure 2.15: Comparison of broadband backhaul technology candidates for WMN. MM-Wave at 60GHz (e.g. Proxim GigaLink 6451e): 1.25Gbps per TX/RX, 1.2km maximum distance, point-to-point links, line-of-sight limited reliability, about 15,000 USD per TX/RX. Free space optics (e.g. MRV Terescope series): 1.25Gbps per TX/RX, 2km maximum distance, point-to-point links, line-of-sight limited reliability, about 46,000 USD per TX/RX. TDM PON over fiber: 2.5/1.25Gbps per wavelength, 20km reach, point-to-multipoint, scalable via WDM, with fiber deployment as the dominant cost.

In contrast to these broadband wireless backhaul technologies, fiber-optic backhaul offers ultra-high capacity, long transmission distance, and high reliability. Some may argue that deploying optical network infrastructure requires an expensive investment. Thanks to technological advances, however, the cost of optical devices


has dropped significantly, so that it is now lower than that of broadband wireless equipment. Comparing the dominant costs of the wireless technologies with those of optical access networks (the fiber deployment cost is discussed in section 2.1.3), fiber-optic networks are clearly justified as an economically viable solution for backhaul connection to the WMN. Note that besides deploying new optical fibers, urban areas already contain dark or dim fibers (fibers that are only partly used for communications) deployed along the streets. These existing dark or dim fibers can be readily employed as part of the backhaul network at even lower cost.

2.2.10 Summary of Wireless Access Technologies

Compared to a single-hop wireless network, the multi-hop communications of a WMN enable cost-effective infrastructure deployment. Although the wireless interfaces of currently commercialized WMNs are based on Wi-Fi, given the rapid advance of wireless technologies we believe that higher-rate and more reliable wireless interfaces, including WiMAX, will emerge and be applied to enhance the fundamental speed of WMNs. The network efficiency of a WMN can also be improved using cross-layer design and optimized routing algorithms. In the hybrid optical wireless access architecture, a WMN is therefore chosen as the front-end wireless segment. To resolve its capacity scalability issue, an optical backhaul is a viable solution, and the resulting network is a hybrid optical wireless access network.

Chapter 3

Hybrid Architecture Enabling Smooth Wireless Access Upgrade


3.1 Introduction

As mentioned in chapter 2, wireless mesh networks (WMNs) are favorable wireless front ends for hybrid optical wireless access networks, and fiber-optic networks, in turn, are a promising solution for providing broadband backhaul connectivity for WMNs, with deployment costs that can be justified. We now consider the question: what kind of fiber-optic backhaul network is appropriate for aggregating WMN traffic across a metropolitan area? Cellular systems and their backhaul networks are perhaps among the first hybrid optical-wireless networks. In this kind of hybrid network, wireless base stations are connected to the central office/network server through dedicated optical point-to-point links. Compared to the aggregated voice/data traffic of a wireless base station, however, the dedicated high-speed optical link over-provisions bandwidth, and the infrastructure topology prohibits bandwidth and device sharing among multiple base stations. Another type of hybrid optical and wireless network is based on radio over fiber (ROF) technology [37]. ROF technologies allow the signals of wireless communication systems, such as cellular systems and WLANs, to be transmitted transparently over optical fiber. Since the optical fiber system is transparent to the wireless signal, all


the processing, such as modulation/demodulation, encoding/decoding, handover, etc., is carried out in a central station. The two ends of the optical system only translate the radio signal to and from the optical domain at the physical (PHY) layer. Since our focus is on the hybrid network architecture, which spans the physical, data link, and networking layers, ROF is outside the scope of our interest. In this chapter, we focus on a novel approach to smoothly upgrading the wireless backhaul of WMNs deployed in metropolitan areas using commercially available optical access technologies.

3.2 Network Architecture

As mentioned in section 2.2.8, the wireless backhaul links in figure 2-13(a) and (b) are used to aggregate WMN traffic in the hierarchical wireless access network proposed in [33]. In [33], it is noted that the insufficient bandwidth of the wireless backhaul will soon become a bottleneck as traffic grows. To upgrade this architecture, [38], [39] propose a hybrid optical wireless architecture that uses TDM PON technology to connect to the WMN deployed in a metropolitan area. TDM PON technologies are leveraged because of: (1) technological maturity: TDM PON has already been deployed and is providing service in several countries; (2) cost-effectiveness: TDM PON allows large portions of the infrastructure and of the devices in the central office to be shared among the end nodes, i.e. the ONUs/wireless nodes; (3) bandwidth efficiency: the inherent statistical multiplexing of TDM PON optimizes bandwidth usage; and (4) topology flexibility: TDM PON supports different network topologies, such as tree, bus, and ring [40], for connecting to widely scattered WMN routers. The generic hybrid optical and wireless architecture is illustrated in figure 3-1, where the backhaul network consists of a fiber ring and tree networks. ONUs are deployed at the ends of the tree networks and connected to the wireless nodes. Since TDM PON streams are multiplexed at different wavelengths, a large number of TDM PON streams will require dense wavelength division multiplexing (DWDM) for both downstream and upstream traffic. At the joint of the ring and a tree network, a low-loss wavelength add/drop filter is installed to add or drop a wavelength set,


consisting of multiple wavelengths that separate the downstream and upstream traffic. The wavelength add/drop filters segregate the physical network into numerous point-to-multipoint networks, on which dedicated wavelengths carry traffic between the central office and the wireless gateway routers. Hence, TDM PON technology can be readily employed. Note that, although not shown here, if a large number of ONUs are connected at the ends of the tree network, optical amplifiers may be required to compensate for the component losses along the optical ring network and at the joint nodes.

Figure 3.1: Hybrid optical wireless access network as proposed by Google and Earthlink for the San Francisco Metro Wireless Networks Project (legend: optical backbone node, optical aggregation node, central hub, wireless gateway, wireless mesh router, wireless aggregation point, end user; wireless and optical links).

Under this hybrid optical-wireless architecture, the upstream traffic is first received by a nearby wireless mesh router via the links of the lower layer and then relayed to


one of the nearby wireless gateway routers through multi-hop communications. In figure 3-1, mesh router 4 aggregates the traffic of nearby end users and relays it over routers 3, 2, and 1 to reach gateway router A. Once the traffic reaches the gateway router, it is forwarded toward the central office over the optical backhaul. Downstream packets are first routed to one of the gateway routers, such as B in figure 3-1, and then forwarded along a specific route, e.g. through routers 5, 3, and 4, to the end user. Since the optical backhaul and the WMN are implemented with different technologies, such as Ethernet PON (EPON) and Wi-Fi, interoperability is needed at the interface between the ONU and the wireless gateway router. To address this issue, the optical backhaul and the WMN can either be fused at the networking layer using an IP router, or employ application-specific integrated circuits (ASICs) designed to translate the packet formats. The integration of the point-to-multipoint optical backhaul and the WMN creates multiple routes between the central hub and the end user. Therefore an integrated routing paradigm that can dynamically choose the optimum route is essential for a hybrid optical-wireless network. Although routing in WMNs is by itself a challenging issue, as described in the previous sections, we envision that the optical backhaul can help to collect network conditions in the WMN, such as link status, traffic load, and interference, and thus help to determine the optimum route. [39] proposed an integrated routing algorithm that achieves load balancing when congestion occurs in the wireless mesh network; this integrated routing algorithm will be discussed in chapter 5.

3.2.1 Upgrading Path

The proposed hybrid architecture is designed to smoothly upgrade the hierarchical wireless access network of figure 2-13(a) and (b). The upgrading path proposed in [38] and [39] begins with the replacement of the wireless links of the backhaul layer in figure 2-13(b): since this top layer aggregates traffic from the lower layers, it will become the first bandwidth bottleneck. Figure 3-2(a) shows the upstream wireless segments of the ring network (figure 2-13(b)), which are closer to the central office. These wireless links aggregate traffic from the downstream segments along the ring, so they need to be upgraded first.


Figure 3.2(a): Smooth upgrading path of the wireless backhaul proposed by Google and Earthlink for the San Francisco City wireless access network — a segment of the wireless links in figure 3-1.

Figure 3-2(b) shows the upgrade of the first wireless link: a fiber is deployed to replace the wireless link, and a TDM PON stream on a pair of wavelengths (the λ1 pair) is allotted to facilitate communications between the central office and the first aggregation tower. The λ1 pair consists of two different wavelengths separating the downstream and upstream traffic. At the first aggregation tower, a low-loss optical wavelength add/drop is installed to add or drop the wavelength pair λ1. The wireless links under the first aggregation tower (i.e. the capacity injection layer in figure 2-13) and the wireless link between the first and second aggregation towers (i.e. the backhaul layer in figure 2-13) remain intact. Note that the first aggregation tower enjoys the full bandwidth provisioned by the TDM PON stream and, if needed, more wavelength pairs carrying TDM PON streams can be allotted in addition to the λ1 pair. Figure 3-2(c) illustrates how the upgrade proceeds to the second wireless link in the backhaul layer: a λ2 pair is allotted to carry a new TDM PON stream, which bypasses the first aggregation tower and is dropped at the second aggregation tower. The same replacement can be applied to the subsequent wireless links until the entire backhaul layer is upgraded.

Just as the wireless links of the ring network can be upgraded with optical links, the wireless links between the aggregation and access towers can also be upgraded with optical links. Note that in [33] a point-to-point wireless link is dedicated to each connection between an aggregation tower and a nearby access tower, so the point-to-multipoint topology of the aggregation and access towers is built on many expensive point-to-point links. To upgrade these links, we can install an ONU at the access tower to replace the wireless link.


Figure 3.2(b): Smooth upgrading path of the wireless backhaul proposed by Google and Earthlink for the San Francisco City wireless access network — upgrade of the upper wireless link (con't).

Figure 3.2(c): Smooth upgrading path of the wireless backhaul — further upgrade of the upper wireless links, with λ1 and λ2 add/drops (con't).


Figure 3.2(d): Smooth upgrading path of the wireless backhaul — upgrade of the lower wireless links, with splitters and ONUs at the access towers (con't).

Figure 3-2(d) shows that, as the wireless links are gradually upgraded, the point-to-multipoint MAC protocol of TDM PON automatically manages the resulting point-to-multipoint network and its bandwidth allocation. Once the bottleneck in the backhaul layer is resolved by upgrading to optical links, the capacity injection layer becomes the next bandwidth bottleneck. Its upgrade can be realized by the same principle, further deploying ONUs at the wireless gateway routers to gradually replace the wireless links, as shown in figure 3-2(e). If a certain district needs more bandwidth (for example, district X in figure 3-2(e)), another TDM PON stream on a different wavelength pair can be allotted; passive optical filters are then required at the ONUs to separate the different wavelengths. Note that the upgrade eventually results in the hybrid optical and wireless architecture illustrated in figure 3-1.


Figure 3.2(e): Smooth upgrading path of the wireless backhaul — further upgrade of the lower wireless links, with λ-band add/drop filters (<1dB loss), splitters, and ONUs; district X is served by an additional TDM PON stream.


3.3 Reconfigurability of the Proposed Optical Backhaul

In a hybrid optical-wireless network, the bandwidth demand from different districts can vary drastically over a day. For example, a business district has high bandwidth demand during the daytime but little at night, while a residential district shows the opposite pattern. Instead of over-provisioning for the peak demand, or under-provisioning to lower the network investment cost, it is desirable to have a reconfigurable network in which bandwidth can be reallocated among districts. Given the hybrid TDM/WDM backhaul structure, this reconfigurability can be realized by using a tunable optical transceiver at the ONU. For example, in figure 3-3 both districts X and Y are connected to the same physical tree network and are served by PON1 and PON2, respectively. Assume that tunable transceivers are used at the ONUs, and that each ONU transmits and receives only on certain wavelength pairs. If PON1 is heavily loaded while PON2 carries little load, PON1's load can be reduced by tuning the transmitting and receiving wavelengths of ONU3 from λ1D, λ1U to λ2D, λ2U, thereby enabling ONU3 to join PON2. In this way the bandwidth is reallocated based on the dynamic demand.
Figure 3.3: Reconfigurable Optical Backhaul (central office, backbone fiber with wavelength add/drops, splitter/couplers, and tunable optical transceivers at ONU1-ONU6; PON1 serves heavily loaded district X and PON2 serves lightly loaded district Y).


To optimize the bandwidth utilization with the proposed reconfigurability, the central office needs to monitor the variation of the bandwidth demand and reallocate bandwidth on top of the TDM PON technologies. Figure 3-4 shows an example implementation in the central office for this purpose. Behind the OLTs, a network terminal (NT) is devised to manage multiple TDM PONs. In the NT, the system bandwidth management module continually monitors the buffer depth of each OLT for the downstream traffic. When any PON is heavily loaded, its OLT buffer depth will exceed a certain threshold, and the system bandwidth management module will instruct the heavily loaded PON to deregister some ONUs and re-register them to lightly loaded PON(s) on different wavelength pair(s). The number of ONUs to be moved depends on the average loads of the PONs, which are continuously monitored by the traffic estimators (TE) in figure 3-4. Before an ONU is deregistered, the packets queued for it in the OLT are emptied first; after it is deregistered, the incoming traffic for that ONU is temporarily stored in the queue of the NT until re-registration is complete. Note that the ONU deregistration and registration can be readily realized by the Multi-Point Control Protocol Data Unit (MPCPDU) messages (deregistration and registration processes) [2] for EPON and by the Physical Layer Operation and Maintenance (PLOAM) messages (deactivate and ONU activation procedures) [3] for GPON.

In the context of optical access networks, tunable transceivers have been proposed for different purposes such as bandwidth efficiency improvement [41], network scalability [42], and inventory simplification [43]. Owing to the rapid advance of optical technology, various tunable laser technologies have been developed [43]. To optimize cost, tunable long-wavelength vertical-cavity surface-emitting lasers (VCSELs) can be used [44]; their lower cost results from integrated manufacturing and easy packaging and testing. Recently, an integrated fast wavelength-selective photo-detector was also developed [45]; its tuning time is in the nanosecond range, and its monolithic design facilitates cost reduction.

With slow tunable transceivers, the tuning time can be a major overhead during reconfiguration, which leads to network performance degradation. As the traffic load on a PON oscillates, reconfiguration will be triggered frequently, causing the network efficiency to suffer.
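To make the decision logic of the system bandwidth management module concrete, we give a minimal, hypothetical sketch below (ours, not the NT/FPGA implementation): it watches each OLT's downstream buffer depth and, when the reconfiguration threshold is crossed, selects ONUs to move from the congested PON to the most lightly loaded one based on the traffic estimators' averaged loads. The names, data structures, and the equal-load-per-ONU simplification are assumptions made only for illustration.

    from dataclasses import dataclass, field

    BUFFER_SIZE_BITS = 128 * 8 * 2**20      # 128MB OLT buffer (cf. section 3.3.1)
    TRIGGER_FRACTION = 0.5                   # reconfigure when buffer depth exceeds 50%

    @dataclass
    class PonState:
        olt_id: int
        buffer_depth_bits: float             # monitored downstream OLT buffer depth
        est_load: float                      # traffic-estimator output, fraction of line rate
        onus: list = field(default_factory=list)

    def plan_reconfiguration(pons):
        """Return a list of (onu, from_olt, to_olt) moves, or [] if none is needed."""
        moves = []
        light = min(pons, key=lambda p: p.est_load)
        for p in pons:
            if p is light or not p.onus:
                continue
            if p.buffer_depth_bits <= TRIGGER_FRACTION * BUFFER_SIZE_BITS:
                continue                     # this PON is not congested
            share = p.est_load / len(p.onus) # crude assumption: equal load per ONU
            if share <= 0:
                continue
            n_move = int((p.est_load - light.est_load) / (2 * share))
            for _ in range(max(n_move, 0)):
                onu = p.onus.pop()           # a real system would pick by per-ONU DBA reports
                light.onus.append(onu)
                p.est_load -= share
                light.est_load += share
                moves.append((onu, p.olt_id, light.olt_id))
        return moves

    # Example: PON1 congested at 90% estimated load, PON2 at 20%.
    pons = [PonState(1, 0.8 * BUFFER_SIZE_BITS, 0.9, ["ONU1", "ONU2", "ONU3"]),
            PonState(2, 0.1 * BUFFER_SIZE_BITS, 0.2, ["ONU4"])]
    print(plan_reconfiguration(pons))        # -> [('ONU3', 1, 2)]

In practice, the ONUs to move would be chosen from per-ONU load reports gathered by the dynamic bandwidth allocation process rather than by assuming equal shares.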


Figure 3.4: System architecture of the central office facilitating the reconfigurable optical backhaul (network terminal card with system bandwidth management module monitoring the OLT buffer depths, traffic estimators TE(1)-TE(N), scheduler, buffers, OLT(1)-OLT(N) at λ1-λN, and MUX/DMUX; TE: traffic estimator, BUF: buffer, N: number of OLTs, M: maximum number of ONUs under an OLT).

To minimize this performance loss, the buffer size can be increased together with a higher threshold for triggering reconfiguration. With these changes, when a reconfiguration is triggered, the tuning time represents a relatively small fraction of the overhead compared to the time needed to empty the packets stored in the OLT buffer, thereby improving network efficiency. However, the packets then suffer more delay from the longer queuing time in the buffer, resulting in lower QoS. To relieve this QoS degradation, traffic differentiation, i.e. forwarding high-priority traffic first, can be used during reconfiguration.
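As a simple illustration of that traffic-differentiation idea (our own sketch, not the testbed's scheduler), the OLT buffer can be drained strictly by priority class while a reconfiguration is pending, so that high-priority packets absorb as little of the added queuing delay as possible; the class values and packet contents below are arbitrary.

    # Illustrative only: drain the OLT buffer by priority class before an ONU is
    # deregistered, so high-priority packets see less added delay during
    # reconfiguration.

    import heapq
    from itertools import count

    class PriorityDrainBuffer:
        def __init__(self):
            self._heap = []            # entries: (priority, seq, packet); lower = more urgent
            self._seq = count()

        def enqueue(self, packet, priority):
            heapq.heappush(self._heap, (priority, next(self._seq), packet))

        def drain(self, n_bytes):
            """Pop packets in priority order until n_bytes of capacity is used."""
            sent = []
            while self._heap and n_bytes > 0:
                _, _, pkt = heapq.heappop(self._heap)
                sent.append(pkt)
                n_bytes -= len(pkt)
            return sent

    buf = PriorityDrainBuffer()
    buf.enqueue(b"voice-frame", priority=0)
    buf.enqueue(b"bulk-data-frame", priority=2)
    print(buf.drain(n_bytes=64))       # the voice frame leaves first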

3.3.1 Performance simulation of the reconfigurable optical backhaul

The performance of the proposed reconfigurable architecture is compared with that of a fixed architecture via simulations. Both the reconfigurable and fixed architectures are assumed to consist of two EPONs, and the aggregated traffic is Poisson traffic. First we simulate the average packet delay of the two PONs in both architectures. In the simulation, (1) the average load of PON2 is fixed at 0.2 and that of PON1 is


changed from 0 to 1.8 at t = 1s; (2) the buffer size in the OLT is 128MB [46]; (3) network reconfiguration is triggered when the OLT buffer depth exceeds 50% (the buffer depth refers to the total queued bits/bytes in a buffer); and (4) the total reconfiguration overhead is 250ms (including ONU deregistration, transceiver tuning time, ONU registration, and propagation delays). The traffic estimator is realized by a low-pass filter (LPF) with an averaging window of 10ms. Figure 3-5 shows the dynamic response to the variation of the traffic load. The traffic allocation between the two PONs is measured at P1 in figure 3-4, the outputs of the NT card. As shown, the reconfigurable architecture is able to balance the overall traffic load between the two PONs after a time T_P1, which is approximately:

    T_P1 ≈ TH / (D_in − D_out) + OV        (3.1)

where TH is the buffer-depth threshold that triggers reconfiguration, D_out is the OLT data rate, D_in is the traffic rate allocated to the OLT, and OV is the reconfiguration overhead, i.e. the delay introduced by the tunable devices. Figure 3-6 shows the buffer depth and packet loss of PON1 in both architectures. After the reconfiguration period, the buffer depth of PON1 decreases in the reconfigurable architecture thanks to load balancing, while in the fixed architecture it keeps increasing, eventually resulting in packet loss. The estimated maximum buffer depth of PON1, BD_MAX, is:

    BD_MAX = T_P1 × (D_in − D_out)        (3.2)
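For concreteness (using our own illustrative numbers, assuming the 1Gbps EPON line rate and the 0.2 → 1.2 load step labeled in figure 3-6): TH = 50% of 128MB ≈ 0.51Gb and D_in − D_out ≈ 0.2Gbps, so T_P1 ≈ 0.51/0.2 + 0.25 ≈ 2.8s, and BD_MAX ≈ 2.8s × 0.2Gbps ≈ 0.56Gb, i.e. roughly 70MB of the 128MB buffer.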

Figure 3-7 shows the throughput of each PON and the combined throughput of both architectures, measured at P2 in figure 3-4. The results show that, in the reconfigurable architecture, the combined throughput of both PONs rises to the maximum rate while the stored packets in OLT1 are de-queued, and the time required for this, T_P2, is approximately:

    T_P2 ≈ T_P1 + BD_MAX / (D_out − D_in,new)        (3.3)

where D_in,new is the new traffic rate allotted to OLT1 after reconfiguration. Figure 3-8 shows the long-term average packet delay of the two PONs on log and linear scales, with varying load for PON1 and a fixed load for PON2.


Figure 3.5: Traffic throughput at P1 in figure 3-4 for the fixed and reconfigurable backhaul architectures.


Figure 3.6: OLT1 buffer depth and packet rejection ratio of PON1 in both architectures (PON1 input loading stepped from 0.2 to 1.2).


Figure 3.7: Throughput of each PON and combined throughput of both architectures, measured at P2 in figure 3-4 (PON1 input loading stepped from 0.2 to 1.2).


On the linear scale, the fixed architecture's performance reaches its limit as the load reaches 90%, while the reconfigurable system's does not until the overall load reaches 180%. On the log scale, beyond 90% the reconfigurable backhaul is insensitive to the load increase, thanks to reconfiguration.

3.4 Experimental testbed of reconfigurable optical backhaul

Having reviewed the performance improvement obtained from the dynamic load balancing realized by a reconfigurable backhaul, it is desirable to investigate its feasibility, its performance, and its compatibility with TDM PON technologies. For these purposes, an experimental testbed was built as illustrated in figure 3-9 [39]. It consists of two nodes: one emulates the central office containing two OLTs, and the other emulates the ONU; the two nodes are connected by a 100m single-mode optical fiber. The two OLTs are equipped with fixed optical transceivers, and the ONU is equipped with an optical tunable transceiver. The TDM PON standard we selected is the IEEE 802.3ah EPON standard, and the functionalities of the OLT and the ONU as specified by the EPON standard are programmed in commercial Field Programmable Gate Array (FPGA) chips. To facilitate reliable network reconfiguration, so-called "reconfiguration control interfaces" (RCIs) are implemented in the FPGAs at the ONU and at the central office (as shown in figure 3-9). These RCIs sit at a level above the MAC layer defined by the EPON standard and coordinate through a handshaking protocol to achieve safe network reconfiguration.

3.4.1 Handshaking protocol for network reconfiguration

The handshaking protocol is illustrated by the timing diagram in figure 3-10, in which the dashed lines indicate the transmission of RCI reconfiguration commands and the solid lines signify the MPCPDU packets specified in the EPON standard. When the traffic load needs to be re-balanced, the system bandwidth management module will issue a network reconfiguration trigger to the RCI at the central office.


Figure 3.8: Long-term average packet delay of both PONs on linear and log scales (varying load for PON1, PON2 load fixed at 20%).


Figure 3.9: Experimental testbed of the reconfigurable optical backhaul: the central office with OLT1, OLT2, and the RCI implemented on FPGA+SERDES, connected through optical fiber and a splitter to the ONU, which carries a tunable transmitter and tunable receiver (tuning between 1550.1nm and 1550.9nm, and between 1591nm and 1586nm) and its own FPGA+SERDES. Tunable TX: tunable transmitter, Tunable RX: tunable receiver, RCI: Reconfiguration Control Interface.

Upon reception of the trigger signal, this RCI coordinates with the RCI(s) at the ONU(s) to deregister from the heavily loaded PON (e.g. PON1) and register to a lightly loaded PON (e.g. PON2). The detailed steps are as follows:

(1) The RCI at the central office sends the new wavelength information, e.g. λ2d,u of the lightly loaded OLT (i.e. OLT2), to the heavily loaded OLT (i.e. OLT1), and the OLT arranges the next available data packet to deliver the new wavelength information to the ONU to be deregistered. Note that this information can be piggybacked on the data packets specified by the TDM PON standards.

(2) Upon receiving the RCI information, the ONU passes it to the RCI behind the ONU.

(3) The RCI behind the ONU stores the new wavelength information, generates an acknowledgement (ACK) signal, and passes it to the ONU for transmission.


Figure 3.10: Timing diagram of the reconfiguration protocol between the system bandwidth management module, the central-office RCI, OLT1, OLT2, the ONU, and the ONU-side RCI (dashed arrows: RCI reconfiguration commands; solid arrows: EPON MPCPDU commands).


The ONU will then arrange the next available data packet to deliver the ACK. (4) Upon reception of the ACK signal from upstream packets, the heavily loaded OLT passes it to the RCI at the central office. The RCI will proceed to instruct the heavily loaded OLT to start the standard deregistration process specified in TDM PON standards [2], [3]. (5) After the deregistration ACK is delivered by the ONU, the RCI at the ONU will tune the wavelengths of tunable transceiver to the new wavelengths of the designated OLT (e.g. from A ld)U t o A 2d,u)(6) After receiving the deregistration ACK from the ONU, the RCI at the central office will wait for a certain amount of time (the waiting time is preset according to the tuning time of the optical tunable transceiver at the ONU), and instruct the designated OLT (e.g. OLT2) to start the standard discovery process defined in TDM PON standards [2], [3] to discover and register the new ONU. To realize and examine the handshaking protocol, figure 3-11 (a)-(e) summarizes the state diagrams of RCI's at the central hub, the heavily and lightly loaded OLTs, the reconfigurable ONU, and the RCI at the reconfigurable ONU which are implemented in the FPGAs shown on figure 3-9. Note that the discovery, de-registration, registration processes are programmed according to the IEEE 802.3ah EPON standards.
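To make the handshaking concrete, the following is a minimal behavioral sketch (in Python) of the central-office RCI state machine of figure 3-11(a). It is illustrative only: the event names, the message-passing helpers (send_reconfig_info, start_deregistration, start_discovery, report_failure), and the timeout handling are assumptions, not the FPGA implementation described above.

# Minimal behavioral sketch of the central-office RCI state machine (figure 3-11(a)).
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    WAIT_ACK_FROM_HEAVY_OLT = auto()
    WAIT_ONU_RETUNE = auto()          # reconfiguration period of the ONU transceiver
    WAIT_ONU_REGISTRATION = auto()

class CentralOfficeRCI:
    RETUNE_PERIOD_US = 50             # preset per the ONU tunable-receiver tuning time

    def __init__(self, heavy_olt, designated_olt, bw_manager):
        self.heavy_olt = heavy_olt            # heavily loaded OLT (e.g. OLT1)
        self.designated_olt = designated_olt  # lightly loaded OLT (e.g. OLT2)
        self.bw_manager = bw_manager
        self.state = State.IDLE
        self.pending_onu = None

    def on_event(self, event, **info):
        if self.state == State.IDLE and event == "reconfig_trigger":
            # step (1): deliver the new wavelength pair via the heavily loaded OLT
            self.pending_onu = info["onu_id"]
            self.heavy_olt.send_reconfig_info(info["new_wavelength_pair"], self.pending_onu)
            self.state = State.WAIT_ACK_FROM_HEAVY_OLT
        elif self.state == State.WAIT_ACK_FROM_HEAVY_OLT:
            if event == "ack_from_heavy_olt":
                # step (4): start standard EPON deregistration, then wait for the ONU to retune
                self.heavy_olt.start_deregistration(self.pending_onu)
                self.state = State.WAIT_ONU_RETUNE
            elif event == "timer_expired":
                self.bw_manager.report_failure("no ACK from heavily loaded OLT")
                self.state = State.IDLE
        elif self.state == State.WAIT_ONU_RETUNE and event == "timer_expired":
            # step (6): after the 50us reconfiguration period, run discovery on the designated OLT
            self.designated_olt.start_discovery()
            self.state = State.WAIT_ONU_REGISTRATION
        elif self.state == State.WAIT_ONU_REGISTRATION:
            if event == "onu_registered":
                self.state = State.IDLE       # reconfiguration complete
            elif event in ("timer_expired", "registration_incorrect"):
                self.bw_manager.report_failure("ONU registration failed")
                self.state = State.IDLE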

3.4.2 Enabling Devices

This section introduces in detail the enabling devices used on the experimental testbed. The tunable transmitter (laser) will be introduced in chapter 4.

(a) State diagram of the reconfiguration control interface at the central office (states: Idle; Wait for ACK from heavily loaded OLT; Wait for reconfiguration period of the ONU transceiver; Wait for ONU registration from designated OLT; a timer expiry or incorrect ONU registration reports reconfiguration failure to the system bandwidth management module)

Figure 3.11: State diagrams of the RCIs in the central office, the heavily and lightly loaded OLTs, the ONU, and the RCI behind the ONU

(b) State diagram of the heavily loaded OLT (upon a reconfiguration instruction from the RCI, send the PHY information of the designated OLT to the to-be-deregistered ONU, wait for its ACK, deregister the ONU, and inform the RCI; a timer expiry reports reconfiguration failure)

Figure 3.11: State diagrams of the RCIs in the central office, the heavily and lightly loaded OLTs, the ONU, and the RCI behind the ONU (con't)

(c) State diagram of the designated lightly loaded OLT (autodiscovery is performed periodically or upon instruction from the RCI; a newly registered ONU's MAC address is reported to the RCI; N failed autodiscovery trials report a registration failure)

Figure 3.11: State diagrams of the RCIs in the central office, the heavily and lightly loaded OLTs, the ONU, and the RCI behind the ONU (con't)

(d) State diagram of the reconfigured ONU (normal operation; upon receiving the reconfiguration information, inform the RCI at the ONU, deregister, reconfigure the transceiver, and rejoin through the autodiscovery process)

Figure 3.11: State diagrams of the RCIs in the central office, the heavily and lightly loaded OLTs, the ONU, and the RCI behind the ONU (con't)

(e) State diagram of the reconfiguration control interface at the reconfigured ONU (upon receiving the reconfiguration instruction and the λ pair of the designated OLT, deregister from the current PON, tune the transceiver to the new λ pair, and wait for registration; if registration fails, tune back to the λ pair of the original OLT)

Figure 3.11: State diagrams of the RCIs in the central office, the heavily and lightly loaded OLTs, the ONU, and the RCI behind the ONU (con't)

Optical Tunable Receiver

The tunable receiver at the ONU is implemented with a MEMS tunable filter, as shown in figure 3-12, which has a wide tuning range, from 1591nm to 1525nm, corresponding to control voltages of 0V to 35V. Throughout the tuning range, the insertion loss is less than 1dB. To examine the transient response of the tunable filter, a light beam at 1586nm is continuously transmitted to the filter, and the filter is tuned between 1591nm and 1586nm by changing its control voltage. As shown in figure 3-13, after the control voltage is changed, it takes 33.6µs for the filter to stabilize and receive the 1586nm light. Since this tuning time is much longer than that of the tunable transmitter, the reconfiguration overhead incurred in the PHY layer is dominated by the tunable receiver. To accommodate the tuning time, the period between de-registration (step 2: de-register the ONU from the heavily loaded PON) and re-registration (step 4: re-register the ONU to the lightly loaded PON) issued by the RCI at the central office is programmed as 50µs in the FPGA.

Figure 3.12: Optical tunable receiver used on the reconfigurable optical backhaul experimental testbed

FPGA and SerDes

Figure 3-14 shows the PC board carrying the Field Programmable Gate Array (FPGA) and the SerDes (serializer-deserializer) used on the testbed. The FPGA is made by Altera Inc. and processes 16 bits in parallel at a 77.6MHz basic clock rate. For the transmitter (TX) function, the serializer of the SerDes combines the 16 parallel bits within one basic clock cycle into a 1.25Gbps serial data stream. For the receiver (RX) function, the deserializer segregates the 1.25Gbps serial data stream into 16 parallel data streams at the 77.6MHz rate. The OLT, ONU, and RCI functionalities depicted in the state diagrams are programmed in the FPGAs. Figures 3-15(a) and (b) depict the configuration of the OLT and RCI at the central office and of the ONU and its associated RCI. Each board carries two FPGAs, which are synchronized to the same basic clock and communicate via the central bridge. The state diagrams are programmed in Static Random Access Memory (SRAM). In the next section we will see how these FPGAs coordinate to implement the reconfiguration process.
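As a purely illustrative aid, the following Python sketch mimics the SerDes behavior described above: the serializer flattens 16-bit parallel words (one per basic clock cycle) into a serial bit stream, and the deserializer regroups the stream into 16-bit words. The bit ordering and helper names are assumptions; the real SerDes is a hardware device operating at 1.25Gbps.

# Behavioral sketch of the serializer/deserializer function (illustrative only).
from typing import List

def serialize(words_16bit: List[int]) -> List[int]:
    bits = []
    for word in words_16bit:
        for i in range(15, -1, -1):          # emit MSB first within each clock cycle
            bits.append((word >> i) & 1)
    return bits

def deserialize(bits: List[int]) -> List[int]:
    words = []
    for k in range(0, len(bits) - len(bits) % 16, 16):
        word = 0
        for b in bits[k:k + 16]:             # regroup 16 serial bits per parallel word
            word = (word << 1) | b
        words.append(word)
    return words

if __name__ == "__main__":
    tx_words = [0xABCD, 0x1234]              # two 77.6MHz parallel words
    assert deserialize(serialize(tx_words)) == tx_words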

Figure 3.13: Transient response of the optical tunable receiver in figure 3-12 (control signal and filter response measured after the photodiode; tuning from 1591nm to 1586nm settles in 33.6µs)

Figure 3.14: FPGA and 1.25Gbps SerDes board

(a) OLT and RCI FPGA configuration (two FPGAs with SRAM, data SERDES, tunable TX/RX headers, statistics counters, error monitoring, Ethernet controllers, and a generic parallel interface, implementing OLT1, OLT2, and the RCI at the central office)

(b) ONU and RCI FPGA configuration (two FPGAs with SRAM, control-channel and data SERDES, tunable TX/RX headers, statistics counters, error monitoring, Ethernet controllers, and a generic parallel interface, implementing the ONU and its RCI)

Figure 3.15: Functionalities implemented on the two FPGA boards

3.4.3 Experimental Results

To facilitate the communication between the RCIs on the ONU and central office sides, a control packet is used on the testbed; its format is shown in figure 3-16. It consists of the following parts:

1. Preamble (60 Bytes): for clock and level recovery, specifically to facilitate upstream reception.
2. Sync Bytes (4 Bytes): delimiter for bit synchronization.
3. Frame ID: for statistics and error-checking purposes.
4. TX: sender ID.
5. RX: receiver ID.
6. Frame type: indicates the type of MAC frame.
7. Payload: carries RCI commands such as de-registration, ACK, new wavelength information, etc.

Figure 3.16: Control packet format (preamble 30-60 bytes, sync 4 bytes, frame ID 2 bytes, TX 2 bytes, RX 2 bytes, frame type 2 bytes, payload 100 bytes)
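A small sketch of how such a control packet could be assembled is given below. The field widths follow figure 3-16; the preamble and sync byte patterns, the numeric encodings of the IDs and frame types, and the helper name are assumptions made only for illustration.

# Sketch of the RCI control packet of figure 3-16, packed as raw bytes (assumed encodings).
import struct

PREAMBLE = b"\x55" * 60          # clock/level recovery pattern (assumed)
SYNC     = b"\xD5\xD5\xD5\xD5"   # bit-synchronization delimiter (assumed)

def build_control_packet(frame_id: int, tx_id: int, rx_id: int,
                         frame_type: int, payload: bytes) -> bytes:
    payload = payload.ljust(100, b"\x00")[:100]        # pad with dummy bits to 100 bytes
    header = struct.pack(">HHHH", frame_id, tx_id, rx_id, frame_type)
    return PREAMBLE + SYNC + header + payload

# Example: piggyback the new wavelength information for a to-be-reconfigured ONU
pkt = build_control_packet(frame_id=1, tx_id=0x0001, rx_id=0x0010,
                           frame_type=0x0002,          # e.g. "reconfiguration command" (assumed)
                           payload=b"LAMBDA_2DU")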

The experimental results are shown in figure 3-17, which demonstrates the ONU deregistration from OLT1, the reconfiguration period, and the ONU discovery and registration performed by OLT2. The results are captured by an HP16500B Logic Analyzer, which supports data rates up to 4Gb/s and can display up to 32 waveforms and 0/1 transitions simultaneously, allowing the system to be monitored and its operation verified. The signals marked with numbers in figure 3-17 are outputs measured on the two FPGA boards at the central office and ONU sides, and they indicate the following events:

(1) OLT1 sends the deregistration message;
(2) ONU receives the deregistration message;
(3) ONU sends the ACK;
(4) OLT1 receives the ACK;
(5) after the 50µs reconfiguration period, OLT2 sends the discovery gate message;
(6) ONU receives the discovery gate message;
(7) ONU sends the registration request;
(8) OLT2 receives the registration request;
(9) OLT2 sends the registration message;
(10) ONU receives the registration message;
(11) ONU sends the registration ACK;

(12) OLT2 receives the registration ACK.

Figure 3.17: Experimental result of the reconfiguration protocol (logic analyzer capture of OLT1_TX, OLT2_TX, ONU_RX, ONU_TX, and OLT_RX; events (1)-(12) and the 50µs reconfiguration period are marked)

One thing to note is that in figure 3-17 the signal propagation delay through 2km of fiber is insignificant compared to the reconfiguration period, which is on the scale of tens of µs. The results show that the optical and electrical devices and the handshaking protocol coordinate as expected, and that the scheme can be applied in a real hybrid system using EPON technologies.

Chapter 4 Next-Generation Hybrid Access Network


4.1 Introduction

In order to provide blanket coverage of service in a metropolitan area, numerous wireless mesh routers need to be deployed near user premises. The traffic of these wireless routers, which are geographically scattered across an urban area, is forwarded to a few central offices or network servers. To connect these widely scattered wireless routers and accommodate ever-increasing traffic growth from users, a broadband optical backhaul technology is a desirable solution, as mentioned in previous chapters. In chapter 3, a smooth upgrade for hierarchical wireless access networks was proposed. The upgrade incrementally replaces the wireless backhaul links with a reconfigurable optical backhaul network based on commercial TDM PON technology, and its reconfigurability optimizes bandwidth utilization and investment cost. Although the reconfigurable optical backhaul can provide an upgrade solution for existing wireless backhauls, if we were to build a clean-slate hybrid optical-wireless access network, what kind of backhaul structure could optimize the cost, bandwidth capacity, and network efficiency? In addition, as mentioned in chapter 2, current wireless mesh networks


(WMN) are based on commercial Wi-Fi technologies that are not designed for access network applications. Hence the next-generation hybrid access network also requires a new WMN designed to address the capacity requirement, adapt to the propagation environment, and optimize network efficiency in access networks. To enable the next-generation hybrid optical-wireless access network, the Grid Optical Wireless Network (GROW-Net) project was jointly established by the Photonics and Networking Research Lab and the Wireless Communications Research Lab in the Electrical Engineering Department of Stanford University in January 2007. (This joint research project is sponsored by the National Science Foundation (NSF), award number 0627085.) With their different expertise in optical and wireless network systems, the two labs address different issues of the optical and wireless segments that are key to the next-generation hybrid access network. In this chapter we will focus on a novel optical backhaul structure that aims for clean-slate deployment of the next-generation hybrid optical-wireless access networks.

4.2 Requirements of Next-Generation Optical Backhaul Network

In the hybrid optical-wireless access network, we focus on the optical backhaul network and assume that the wireless segment, i.e. the WMN, has been widely deployed to cover most users. Besides broadband provisioning, the next-generation optical backhaul network should be designed to meet the following objectives:

(1) Scalable bandwidth provisioning: the bandwidth provisioning of a hybrid optical-wireless network should be incrementally scalable to population or demand growth.

(2) Flexible and extensible infrastructure: the hybrid access network should provide flexible connectivity to widely scattered wireless routers. As the service area expands, for example from urban to suburban, the optical backhaul should be easily and cost-effectively extended.


(3) Network/resource usage efficiency: high network and resource usage efficiency is desirable to optimize cost and power consumption. To achieve high efficiency, the resources, infrastructure, and bandwidth should be shared by different parts of the network. In any case, synergy between the optical and wireless segments is required to optimize overall efficiency and performance.

4.3 A Novel Optical Grid Backhaul

To achieve the aforementioned objectives, a novel optical grid backhaul for the next-generation hybrid access network is proposed as part of the GROW-Net project [47], [48]. The generic GROW-Net architecture is shown in figure 4-1(a); it consists of an optical grid backhaul and a WMN densely deployed throughout the metropolitan area, as shown in figure 4-1(b). Since the street layout in most modern cities follows a grid topology, the grid optical backhaul can be readily constructed by deploying fibers along the streets. To reduce the deployment cost, the fiber deployment methods developed for PONs (in section 2.1.3) can be employed. If dark or dim fibers are available along the streets, they can be readily used as part, if not all, of the grid network. The optical grid network is composed of many optical grid units: each grid unit consists of 4 basic squares, as shown in figure 4-1(b), and contains a central hub. A central hub has two major functions: (1) collect and distribute the local WMN traffic within a grid unit; (2) coordinate with other central hubs to facilitate traffic transactions on the optical grid network. Certain central hubs are special in that they have connections to external long-haul networks, such as central hubs X and Y in figure 4-1(a); hence, they are called gateway hubs. At the gateway hubs, IP routers are deployed as the interface between GROW-Net and the external IP-based Internet.

(a) Optical grid backhaul network (grid units with central hubs A, B, C, H, ...; gateway hubs X and Y connected to IP routers)

Figure 4.1: GROW-Net architecture

(b) Optical grid unit and WMN (optical fibers along the streets, wireless mesh routers, and a central hub)

Figure 4.1: GROW-Net architecture (con't)


Figure 4-1(a) illustrates an example of traffic routing from the external long-haul network to a user within GROW-Net: the inbound traffic from the Internet enters GROW-Net at one of the gateway hubs, such as hub X, travels via hubs A and B, and arrives at the destination hub C, which manages the grid unit where the user is located. Hub C then schedules transmission through optical fiber links to one of the wireless gateway routers close to the user. Before entering the WMN, the packet format is converted according to the requirements of the WMN. The wireless mesh routers then coordinate to relay the packets toward the user. Note that the uplink traffic routing (from the user to the external long-haul network) is conducted in the same manner but in the reverse direction. To achieve efficient routing on such a large-scale heterogeneous network, it is desirable to divide the end-to-end routing into several sections, each managed by a different part of the network. Given that the WMN by itself forms a subnetwork that handles its own traffic routing, we divide the optical backhaul into two layers, as illustrated in figure 4-1(c): Layer 1, or the intra grid unit layer, handles the traffic between the central hub and all the wireless gateway routers within an optical grid unit. Note that in figure 4-1(c) an optical terminal is employed next to each wireless gateway router; it coordinates with the central hub to facilitate traffic transactions and performs packet format conversion. Layer 2, or the inter grid unit layer, handles traffic routing between central hubs on the entire grid network. This thesis focuses on the intra grid unit layer, i.e. layer 1. In this chapter we will describe the network architecture design, network performance simulations, and an experimental testbed of the optical grid unit.

4.4 Optical Grid Unit

The detailed optical grid unit structure is illustrated in figure 4-2, which provides broadband connectivity to multiple wireless gateway routers via fiber infrastructure. The solid lines in figure 4-2 signify the optical fiber links associated with the grid unit, and the dashed lines signify the links belonging to the neighboring grid unit.

(c) Three layers in the GROW-Net architecture (layer 2: inter grid unit; layer 1: intra grid unit, connecting the central hub to optical terminals co-located with wireless gateway routers; WMN: wireless mesh routers)

Figure 4.1: GROW-Net architecture (con't)

Repetition of the grid unit pattern in figure 4-2 results in the optical grid network illustrated in figure 4-1(b). Note that the optical links on the edges need to carry both intra grid unit (layer 1) and inter grid unit (layer 2) traffic. To isolate the two types of traffic in the same fiber, different wavelength bands, such as the C band (1525nm to 1565nm) and the L band (1570nm to 1620nm), can be used to carry them. On each of the four branches (branches I-IV) that come out of the central hub, every optical terminal is assigned a unique pair of Dense Wavelength Division Multiplexing (DWDM) channels (λs), as shown in figure 4-2, one for downstream transmission and the other for upstream transmission. This pair of DWDM wavelength channels is reused among different branches; for example, λ1U,D is reused by terminals A, X, Y, and Z on branches I, II, III, and IV, respectively. For each optical terminal, this pair of DWDM wavelength channels serves as its unique address. For example, when the central hub needs to send traffic to optical terminal E on branch I, its optical transmitter first tunes to λ5,D and then injects the signal into branch I. The signal then bypasses the intermediate optical terminals A and D and is received by optical terminal E. For upstream traffic, optical terminal E first requests the central hub to transmit a period of continuous lightwave at λ5,U as a seed light.

Figure 4.2: Optical grid unit structure (central hub with branches I-IV; optical terminals A, D, E, X, Y, Z, ..., each assigned a DWDM wavelength pair λx,U/D; wireless gateway routers and wireless mesh routers)


When optical terminal E receives the seed light, it modulates the upstream data onto the seed light and reflects it back to the central hub. In the next section, we describe the node structures of the central hub and the optical terminals and explain how the wavelength routing is realized.
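The wavelength addressing described above can be summarized by a simple lookup, sketched below in Python. The terminal labels and channel values in the table are illustrative placeholders, not the actual wavelength plan of a deployed grid unit.

# Sketch of the per-branch wavelength addressing: each optical terminal owns one
# downstream/upstream DWDM channel pair, and the pair is reused on other branches.
WAVELENGTH_PLAN = {
    # branch: {terminal: (lambda_down_nm, lambda_up_nm)}  -- illustrative values
    "I":  {"A": (1550.9, 1551.7), "D": (1552.5, 1553.3), "E": (1554.1, 1554.9)},
    "II": {"X": (1550.9, 1551.7)},   # the same pair reused on a different branch
}

def downstream_wavelength(branch: str, terminal: str) -> float:
    """Wavelength the central-hub TX must tune to before sending to a terminal."""
    return WAVELENGTH_PLAN[branch][terminal][0]

def upstream_seed_wavelength(branch: str, terminal: str) -> float:
    """Seed-light wavelength the hub transmits so the terminal's RSOA can reply."""
    return WAVELENGTH_PLAN[branch][terminal][1]

# Sending to terminal E on branch I: tune the TL to E's downstream channel and
# inject on branch I; the intermediate terminals A and D pass the signal by.
assert downstream_wavelength("I", "E") == 1554.1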

4.4.1 Node Structure of the Central Hub

The node structure of a central hub is illustrated in figure 4-3. There are four sets of optical transceivers (TX and RX), and each transceiver set is dedicated to one branch to facilitate communications with the optical terminals on that branch. The optical transceiver consists of three key optical components:

Figure 4.3: Node structure of the central hub (TX: optical transmitter; PD: photo detector; AWG: arrayed waveguide grating; RSOA: reflective semiconductor optical amplifier; O-W: optical-wireless interface; circulators, 1x3 splitters, and DWDM filters/multiplexers on the four branches)


(1) Fast tunable optical transmitter: As illustrated in figure 4-3, one optical transmitter (TX) is dedicated to transmitting downstream data or upstream seed light on each of the four branches. As illustrated in figure 4-4, the optical transmitter consists of multiple fast tunable lasers (TLs) multiplexed with a passive optical coupler [49]. The employed fast tunable laser can tune between DWDM channels across most of the C and L bands in 100ns [39]. This wavelength tunability allows a tunable laser to be shared among multiple optical terminals in a wavelength routing system. For example, in figure 4-4(a), until t = t1 the TL transmits continuous seed light for optical terminal A at λ1U, and within Δt it can tune to λ2D to transmit downstream traffic to optical terminal B, as in figure 4-4(b). Besides resource sharing, multiplexing TLs brings several benefits; we will explain the details in section 4.5.1.
Figure 4.4: Tunable optical transmitter at the central office ((a) at t = t1 a TL transmits at λ1U; (b) at t = t1 + Δt it transmits at λ2D)

(2) Optical receiver: The optical receiver receives uplink traffic from multiple optical terminals. As shown in figure 4-3, the traffic from different optical terminals is carried on different DWDM wavelength channels (λ1U, λ2U, ...), so a wavelength de-multiplexer, such as an arrayed waveguide grating (AWG) de-multiplexer, is required to separate the uplink streams at the central hub. After de-multiplexing, each stream is converted to an electrical signal by a photo detector for processing.

(3) Optical circulator: To separate the downlink and uplink streams, optical circulators are utilized in the central hub. As shown in figure 4-3, a circulator forwards the downstream traffic that enters from port 1 (connected to the TX) to port 2 (connected to the external fiber link), and the upstream traffic from port 2 to port 3 (connected to the RX).


A typical circulator has low loss and high isolation over a wide band: the insertion loss is less than 1dB and the isolation is more than 50dB in both the 1550nm and 1310nm bands [50].

4.4.2 Node Structure of the Optical Terminal

As its name suggests, an optical terminal remotely terminates the optical link of a grid unit and, from the central hub's point of view, interfaces with the WMN. Since many optical terminals are deployed in a grid unit, a simplified node structure is preferred for low-cost deployment and easy maintenance. In addition, a node structure enabling centralized network control is desirable, because centralized control reduces the protocol complexity of a point-to-multipoint network. To achieve these goals, three types of optical terminals are constructed from the following four basic optical components, as shown in figure 4-3:

(1) Optical DWDM filter: as shown in figure 4-3, an optical DWDM filter is required at the front end of an optical terminal to pass the two assigned DWDM channels, λxU and λxD, and filter out the other channels. A commercial thin-film DWDM filter has three ports (figure 4-5): (A) the input port, (B) the transmission port, and (C) the reflection port. As multiple wavelengths enter the input port, only the DWDM channels designated for communication, such as λ2U and λ2D in figure 4-5, are forwarded to the transmission port, and the remaining wavelengths are reflected to the reflection port. Commercial DWDM filters are reciprocal, polarization insensitive, and have low insertion loss (less than 1dB), high isolation (at least 30dB from adjacent channels), and low chromatic dispersion (less than 40.6ps/nm and 8.3ps/nm to the transmission and reflection ports, respectively) [51]. Most importantly, these filters can be custom designed to allow flexible pass wavelengths and bandwidths across the C and L bands.

(2) Wavelength duplexer: the function of a wavelength duplexer is to enable simultaneous bi-directional communication. Since the downstream and upstream signals are carried on two separate DWDM channels, a DWDM filter can be used as the wavelength duplexer. Unlike the DWDM filter used at the front end of an optical terminal, however, the DWDM filter serving as the wavelength duplexer is custom designed to pass only one DWDM channel, as shown in figure 4-6.

Figure 4.5: DWDM optical filter used in the optical terminal ((A) input port, (B) transmission port, (C) reflection port; courtesy of JDSU, http://www.jdsu.com)


Figure 4.6: Wavelength duplexer implemented by a DWDM filter (the pass band covers a single DWDM channel)

(3) Photo detector: a photo detector is connected to the reflection port of the wavelength duplexer to receive the downstream signal and convert it to an electrical signal for processing.

(4) Reflective Semiconductor Optical Amplifier (RSOA): To simplify the node structure, no light-emitting source is used at the optical terminal; a reflective optical device, such as a Reflective Semiconductor Optical Amplifier (RSOA), is employed instead. Figure 4-7 illustrates the structure of the RSOA and how it facilitates the upstream transmission. The RSOA consists of an anti-reflection coating at the front end, a gain medium, and a high-reflection coating at the back end. The gain of the gain medium is controlled by the external current injected through the modulation (Mod) port. As shown in figure 4-7, due to path and component losses, the continuous seed light at λ4U is attenuated by the time it arrives at the RSOA.


The RSOA reflects, modulates, and amplifies the seed light, and the modulated light is injected back into the optical fiber. Since the wavelength is unchanged, the reflected and modulated light traces back to the central hub along the same path, completing the upstream transmission. The employment of the RSOA makes each optical terminal colorless for upstream transmission, i.e. optical terminals do not need the precise wavelength locking or stabilization required by lasers in typical DWDM systems. This feature is highly desirable for mass deployment of optical terminals.
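The behavior of the optical terminal front end described in components (1)-(4) can be illustrated with the following toy model: the three-port DWDM filter passes only the terminal's assigned channel pair and reflects the remaining wavelengths toward the next terminal, while the RSOA amplifies and on-off modulates the attenuated seed light. The loss and gain numbers and the function names are placeholders, not measured device parameters.

# Toy model of an optical terminal's front end (illustrative, not a device model).
def dwdm_filter(input_channels, assigned_pair):
    """Split input wavelengths into (transmission port, reflection port)."""
    transmitted = [ch for ch in input_channels if ch in assigned_pair]
    reflected   = [ch for ch in input_channels if ch not in assigned_pair]
    return transmitted, reflected

def rsoa_upstream(seed_power_dbm, gain_db, bits):
    """Reflect the seed light with gain; a '0' bit suppresses the output."""
    out_level_dbm = seed_power_dbm + gain_db
    return [out_level_dbm if b else None for b in bits]   # None ~ light off

# Seed arrives attenuated (assumed -15 dBm) after path/component losses;
# the RSOA gain is set to an assumed 10 dB and on-off keys the upstream bits.
passed, bypassed = dwdm_filter(["ch30", "ch31", "ch32", "ch33"], {"ch30", "ch31"})
upstream = rsoa_upstream(seed_power_dbm=-15.0, gain_db=10.0, bits=[1, 0, 1, 1, 0])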
Figure 4.7: RSOA structure and upstream transmission (anti-reflection coating, gain medium with modulation port, and high-reflection coating; the attenuated CW seed light at λ4U is reflected back as a modulated upstream signal at λ4U)

Using reflective optical devices, the optical terminals become "passive" in the sense that they cannot actively transmit an uplink signal unless the central hub sends continuous seed light. In this way the medium access control protocol can be simplified, because the transmission in both directions is controlled by the central hub. Behind the RSOA, an optical/wireless interface (the O-W in figure 4-7) is critical for integrating the two heterogeneous communication networks. The O-W interface needs to perform different functions at different layers of the OSI model. In the data link layer (layer 2 of the OSI model), the optical/wireless interface needs to convert the packet format between the optical grid and the wireless mesh network.


In the network layer (layer 3 of the OSI model), it should enable an integrated routing paradigm to optimize the end-to-end network performance. In practice, the O-W interface can be realized by a layer 3 router or in an Application Specific Integrated Circuit (ASIC) for mass production.

Three types of optical terminals

Depending on its location on a branch, an optical terminal can take one of three possible node structures:

(1) Optical terminal at the end of an optical fiber link (e.g. node 5 in figure 4-3): this kind of optical terminal sits at the end of a fiber link, so the reflection port of its DWDM filter is left unconnected.

(2) Optical terminal along an optical fiber (e.g. node 4 in figure 4-3): this kind of optical terminal needs to bypass signals to other optical terminals along the fiber link, so all three DWDM filter ports are used to perform the bypassing.

(3) Optical terminal at the intersection of two optical fibers (e.g. node 1 in figure 4-3): this kind of optical terminal needs to bypass signals to optical terminals on multiple optical fiber links, so an optical splitter is employed to split or couple the downstream or upstream signals to or from the different optical fibers.

Given the optical terminal node structures described above, the wavelength of a signal emitted from the central hub determines how the signal is routed and which node receives it. To communicate with any optical terminal, the central hub only needs to tune to the assigned wavelength channel. In the grid unit, the downstream and upstream traffic received by the optical terminals and the central hub is bursty, not continuous as in conventional optical systems. To handle the bursty traffic, an optical burst-mode receiver is a critical component. In chapter 5, we will review optical burst-mode receivers and propose a novel burst-mode clock and data recovery technique that is suitable for signal transmission on the grid unit.



4.5 Multiplexing of Tunable Lasers

Since both downstream and upstream transmissions rely on the optical light sources in the central hub, these optical transmitters play an important role in the overall network performance. In conventional optical communication systems, a laser is dedicated to transmission to a single node. Given the superior bandwidth provisioning of optical technologies (up to 10Gbps per wavelength for optical access technologies) over wireless technologies (up to hundreds of Mbps provisioned by advanced wireless access technologies), the unused bandwidth of such a dedicated optical stream is wasted. Therefore the bandwidth sharing enabled by fast tunable lasers is highly desirable because it improves cost-efficiency. As shown in figure 4-8, for example, if the bandwidth of a TL can satisfy the demand of up to 5 optical terminals, then a new TL is added for every 5 new optical terminals deployed on a branch. Besides cost-efficiency, this incremental bandwidth investment, or investing only when needed, is also highly desirable for service providers.
Figure 4.8: Incremental bandwidth scalability (number of TLs versus number of optical terminals, i.e. overall bandwidth demand)

Another benefit of multiplexing tunable lasers is the statistical multiplexing gain. In the next sections we will investigate how the statistical multiplexing gain can improve the packet delay and jitter.

4.5.1 Statistical Multiplexing Gain of Tunable Lasers

The statistical multiplexing gain enabled by multiplexing TLs can be explained and understood through queuing theory. As shown in figure 4-9, the system is simplified and modeled as an M/M/1 queue based on the following assumptions:

(i) The incoming traffic to the system is a Poisson process with parameter $\lambda$ (the average arrival rate), i.e. the time between two consecutive packet arrivals is exponentially distributed. For $t \geq 0$, the probability density function is

$$f(t) = \lambda e^{-\lambda t} \qquad (4.1)$$

(ii) Assuming each optical terminal has average demand $D_{OT}$ and there are $N$ optical terminals on a branch, the arrival rate is proportional to the overall bandwidth demand, i.e.

$$\lambda = N \cdot D_{OT} \qquad (4.2)$$

(iii) The service time at the output of the system is exponentially distributed with parameter $\mu$. For $t \geq 0$, the probability density function is

$$g(t) = \mu e^{-\mu t} \qquad (4.3)$$

(iv) Assuming each fast tunable laser has peak data rate $C_{FT}$ and there are $P$ fast tunable lasers at the central hub, the service rate is proportional to the total system capacity, which in turn is proportional to the number of fast tunable lasers installed at the central hub:

$$\mu = P \cdot C_{FT} \qquad (4.4)$$

(v) There is only one server in the system. Note that although there are $P$ tunable lasers, at any time instant they transmit at different wavelengths to serve different optical terminals, so these tunable lasers are modeled as a single server.

(vi) The queue length is infinite.

Figure 4.9: M/M/1 queue model of the tunable optical transmitter (case 1: one TL serving $\sigma$ optical terminals with arrival rate $\lambda_1$ and service rate $\mu_1$; case 2: $M$ multiplexed TLs serving $M\sigma$ optical terminals with arrival rate $\lambda_2 = M\lambda_1$ and service rate $\mu_2 = M\mu_1$)

To investigate the statistical multiplexing gain, we compare two special cases: (1) 1 tunable laser and $\sigma$ optical terminals, i.e. service rate $\mu_1 = C_{FT}$ and arrival rate $\lambda_1 = \sigma \cdot D_{OT}$; (2) $M$ tunable lasers and $M\sigma$ optical terminals, i.e. service rate $\mu_2 = M \cdot C_{FT}$ and arrival rate $\lambda_2 = M \cdot \sigma \cdot D_{OT}$, where $M$ is an integer larger than 1. For both cases, the ratio of the arrival rate to the service rate, defined as $\rho$, is kept constant:

$$\rho_1 = \frac{\lambda_1}{\mu_1} = \rho_2 = \frac{\lambda_2}{\mu_2} = \rho_0 < 1 \qquad (4.5)$$

To ensure that the system is stable, it should be designed such that $\rho < 1$, i.e. the system capacity is larger than the average bandwidth demand. The ratio between $\lambda_1$ and $\lambda_2$ (as well as between $\mu_1$ and $\mu_2$) is:


$$\frac{\lambda_1}{\lambda_2} = \frac{\mu_1}{\mu_2} = \frac{1}{M} \qquad (4.6)$$

Let the number of packets stored in the queue be a random process $n$; the expected number of packets, or buffer depth (BD), for both systems is then

$$BD = \sum_{n=0}^{\infty} n \rho_0^{n} (1-\rho_0) = \frac{\rho_0}{1-\rho_0} = \frac{\lambda}{\mu - \lambda} \qquad (4.7)$$

According to Little's law, in steady state the average packet queuing times of the two systems are

$$T_1 = \frac{BD}{\lambda_1} = \frac{1}{\mu_1 - \lambda_1} \qquad (4.8)$$

$$T_2 = \frac{BD}{\lambda_2} = \frac{1}{\mu_2 - \lambda_2} = \frac{1}{M(\mu_1 - \lambda_1)} = \frac{T_1}{M} \qquad (4.9)$$

In other words, given the same ratio of demand to capacity, the system that multiplexes more tunable lasers has a smaller packet delay. The results are summarized in figure 4-9.
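A quick numeric check of Eqs. (4.7)-(4.9) is sketched below: with the same utilization ρ0, the system that multiplexes M tunable lasers (and serves M times as many optical terminals) has an average queuing delay M times smaller. The parameter values are arbitrary and chosen only to exercise the formulas.

# Numeric check of the M/M/1 delay scaling (arbitrary example parameters).
def mm1_delay(arrival_rate, service_rate):
    assert arrival_rate < service_rate                             # stability: rho < 1
    buffer_depth = arrival_rate / (service_rate - arrival_rate)    # Eq. (4.7)
    return buffer_depth / arrival_rate                             # Little's law

C_FT, D_OT, sigma, M = 1.25e9, 100e6, 8, 4   # laser rate, per-terminal demand, terminals per TL, M

T1 = mm1_delay(sigma * D_OT,     C_FT)       # 1 TL,  sigma terminals       (Eq. 4.8)
T2 = mm1_delay(M * sigma * D_OT, M * C_FT)   # M TLs, M*sigma terminals     (Eq. 4.9)
print(T1 / T2)                               # ~= M = 4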

4.5.2 Simulations of Packet Delay and Jitter Improvement due to Statistical Multiplexing Gain

To quantitatively evaluate the statistical multiplexing gain of the proposed tunable laser scheme, we simulate the packet delay and jitter performance of the system. As depicted in figure 4-10(a), the simulation scenario is defined as follows:

(1) Input packets follow a Poisson arrival process;

(2) Incoming packets have random Ethernet packet lengths;

(3) A traffic scheduler inserts each incoming packet into the buffer with the least buffer depth;


(4) Buffers are first-in-first-out (FIFO) with infinite length;

(5) The tuning time of a tunable laser is 30ns;

(6) The average fiber length to the optical terminals on the branch is 2km;

(7) The tunable laser data rate is 1.25Gbps;

(8) The ratio of the number of tunable lasers to the number of optical terminals is 1/8;

(9) The total traffic loading of all optical terminals is varied from 0 to 90% of the total capacity, and we investigate the average packet delay and jitter at each load.

(a) Simulation scenario (incoming packets are scheduled into per-TL buffers)

Figure 4.10: Packet delay and jitter improvement due to statistical multiplexing gain

Figure 4-10(b) shows the simulation results for three cases: 1, 2, and 4 tunable lasers. As shown, when the total load is 50% of the system capacity, the average packet delay and jitter of the 4-tunable-laser case are improved by almost 10 times and 5 times, respectively, over the 1-tunable-laser case. The packet delay improvement is better than the factor of 4 predicted by queuing theory. This is because the theory assumes that the service time is exponentially distributed, whereas in the simulated system the de-queue scheme is active and deterministic, which helps the packet delay performance.
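For reference, a condensed re-creation of this simulation scenario is sketched below: Poisson packet arrivals with random Ethernet lengths are dispatched to the least-backlogged of M tunable-laser queues, and each transmission pays a 30ns tuning overhead. It is a simplified stand-in (aggregate arrivals, no per-terminal modeling, propagation delay excluded) and not the simulator used to produce figure 4-10(b).

# Simplified discrete-event sketch of the TL multiplexing simulation scenario.
import random

RATE_BPS, TUNE_S = 1.25e9, 30e-9

def simulate(num_tls, load, n_packets=200_000, seed=1):
    rng = random.Random(seed)
    capacity = num_tls * RATE_BPS
    mean_len_bits = 8 * (64 + 1518) / 2                  # uniform Ethernet sizes
    mean_gap = mean_len_bits / (load * capacity)         # Poisson inter-arrival mean
    free_at = [0.0] * num_tls                            # when each TL queue drains
    t, delays = 0.0, []
    for _ in range(n_packets):
        t += rng.expovariate(1.0 / mean_gap)
        bits = 8 * rng.randint(64, 1518)
        q = min(range(num_tls), key=lambda i: free_at[i])  # least buffer depth
        start = max(t, free_at[q])
        free_at[q] = start + TUNE_S + bits / RATE_BPS
        delays.append(free_at[q] - t)
    return sum(delays) / len(delays)

for m in (1, 2, 4):
    print(m, "TL(s):", simulate(m, load=0.5) * 1e6, "us average delay")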

(b) Packet delay and jitter simulation results (average delay (µs) and jitter (µs), propagation delay removed, versus traffic loading (%) for 1 TL with 8 optical terminals, 2 TLs with 16 optical terminals, and 4 TLs with 32 optical terminals)

Figure 4.10: Packet delay and jitter improvement due to statistical multiplexing gain (con't)

4.6 Optimization between Bandwidth Scalability and Cost-efficiency

An important feature of a generic hybrid optical-wireless access network is the possibility of optimizing between bandwidth scalability and cost-efficiency. This feature stems from the network heterogeneity: the wireless segment facilitates low-cost infrastructure deployment, and the optical segment enables broadband provisioning. Therefore, by changing the weighting between the optical and wireless segments, a hybrid network can satisfy a wide range of demand with a minimum amount of infrastructure deployment and investment cost. In this section we investigate how the grid unit architecture and node structures can realistically enable this network scalability.

Low Demand

Let us begin with a low-demand situation to investigate the network scale-up strategy. As shown in figure 4-11(a), when the demand from the WMN is low in an L by L grid unit, we assume that four wireless gateway routers are sufficient to satisfy the demand. Although not shown, wireless mesh routers are densely deployed to cover the users. As the grid unit pattern of figure 4-11(a) is repeated in both the X and Y directions, each gateway router serves an L/2 x L/2 area. In the low-demand case, since gateway routers are relatively sparsely deployed, the average number of hops experienced by packets in the WMN is relatively high. On the other hand, the number of optical terminals that packets pass through is low; for example, in figure 4-11(a) only one hop is required from the central hub to any optical terminal on the backhaul. Hence the wireless segment carries more weighting, as illustrated at the bottom of figure 4-11(a).

Medium Demand

As bandwidth demand increases, more wireless gateway routers need to be deployed to enhance the capacity of the WMN. Figure 4-11(b) illustrates a way to quadruple the number of gateway routers within the grid unit. Some of the added gateway routers are deployed along the existing fibers of figure 4-11(a), while the others require extended fiber deployment.

(a) Low demand case (four wireless gateway routers per L x L grid unit, each serving an L/2 x L/2 area; the weighting is toward the WMN)

Figure 4.11: Bandwidth scalability of the optical grid unit

Compared with figure 4-11(a), the distance between two adjacent wireless gateway routers is halved after the gateway routers are added. The average number of hops experienced by packets is thus reduced, effectively enhancing the WMN capacity as explained in section 2.2.7. Although not shown, besides adding optical terminals and wireless gateway routers, the number of tunable lasers and receivers in the central hub is increased accordingly. Compared to the low-demand case, the addition of optical terminals and wireless gateway routers moves the weighting toward the optical segment, as illustrated at the bottom of figure 4-11(b).

High Demand

As demand keeps growing, more gateway routers can be added and connected to the optical backhaul in the way illustrated in figure 4-11(c) to further enhance the WMN capacity. After adding the gateway routers, the distance between two adjacent gateway routers is again halved, so the number of hops can be further reduced to enhance the capacity of the WMN. Compared to the medium-demand case, the optical segment carries more weighting, as illustrated at the bottom of figure 4-11(c).

Further Scale-Up to Meet Higher Demand

By repeating the approach described above, the number of gateway routers can be quadrupled within an optical grid unit each time. However, there is a PHY-layer limitation to scaling up the bandwidth with this approach. The optical components, such as the circulators, DWDM filters, and optical splitters that coordinate to pave the wavelength routing paths to the optical terminals, inevitably introduce some loss. As the bandwidth is scaled up, more optical terminals are added, and eventually the accumulated loss attenuates the signal to an unacceptable level. As depicted in the example in figure 4-12, with typical losses of commercial optical components, the maximum total added loss is up to 18dB (fiber loss is ignored because an optical grid unit has a small physical size when deployed in an urban area).

(b) Medium demand case (gateway routers spaced L/4 apart along the fibers; the weighting shifts toward the optical segment)

Figure 4.11: Bandwidth scalability of the optical grid unit (con't)

(c) High demand case (gateway routers further densified along the fibers; the weighting shifts further toward the optical segment)

Figure 4.11: Bandwidth scalability of the optical grid unit (con't)

Assuming that the power level of the output signal from the central hub is 0dBm, this total loss almost consumes the margin of the optical power budget of a 2.5Gbps optical transceiver. As a result, we envision that further scale-ups cannot simply rely on the aforementioned network expansion in the PHY layer.
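A back-of-the-envelope check of this power-budget argument is sketched below. The per-component losses and the receiver sensitivity are assumed placeholder values chosen to be representative of commercial parts; fiber loss is neglected as in the text.

# Rough power-budget check along the longest wavelength-routing path (assumed values).
COMPONENT_LOSS_DB = {
    "circulator": 0.8,
    "dwdm_filter_bypass": 1.0,   # reflection-port pass at each intermediate terminal
    "splitter_1x3": 5.0,         # intersection terminals split power three ways
}

def worst_path_loss(n_bypass_filters, n_splitters):
    return (COMPONENT_LOSS_DB["circulator"]
            + n_bypass_filters * COMPONENT_LOSS_DB["dwdm_filter_bypass"]
            + n_splitters * COMPONENT_LOSS_DB["splitter_1x3"])

launch_dbm = 0.0                 # output power of the central hub
sensitivity_dbm = -18.0          # assumed 2.5Gbps receiver sensitivity
loss = worst_path_loss(n_bypass_filters=7, n_splitters=2)   # ~18 dB as in figure 4-12
margin = launch_dbm - loss - sensitivity_dbm
print(f"longest-path loss = {loss:.1f} dB, remaining margin = {margin:.1f} dB")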

Figure 4.12: Optical loss along the longest path (optical terminals, wireless gateway routers, and wireless mesh routers along one branch of the grid unit)

To further scale up the bandwidth, we can split each grid unit into four smaller grid units, as illustrated in figure 4-13, by replacing four existing optical terminals with four new central hubs. The four new grid units need to be enclosed by new fibers, indicated by the dashed lines. After the split, each new grid unit is reset to the situation illustrated in figure 4-11(b), so the PHY-layer network expansion can be applied again to scale up the capacity.

Figure 4.13: Further scale-up by splitting the optical grid unit (the old grid unit is split into four new grid units enclosed by new fibers)

4.7 System testbed of the optical grid unit

To demonstrate the feasibility and evaluate the performance of the proposed optical grid unit structure, we constructed a system testbed on which downlink and uplink transmission experiments were performed. As shown in figure 4-14, the testbed is composed of one central hub and two optical terminals. The central hub consists of a tunable laser, a photo detector, a circulator, and a Field Programmable Gate Array (FPGA) board with a 1.25Gbps serializer-deserializer (SERDES). The FPGA board, which was described in section 3.4.2, controls the fast tunable laser and the modulation of the downstream and upstream data.

Figure 4.14: System testbed of the optical grid unit (central hub with tunable laser, AWG, photo detectors, circulator, and FPGA+SERDES; optical terminal 1 on fiber X and optical terminal 2 on fiber Y, each with 100GHz/200GHz DWDM filters, a photo detector, an RSOA, and an FPGA+SERDES board)

As mentioned before, in figure 4-14 the optical terminals consist of DWDM filters at the front end, a DWDM filter as the duplexer, a photo detector to receive the downstream traffic, and an RSOA to facilitate the upstream transmission. Note that optical terminal 1 has an extra 1x3 optical splitter to emulate the splitting in a real system.

Two sets of DWDM channels are assigned: channel 33 (1550.9nm) and channel 32 (1551.7nm) to optical terminal 1, and channel 31 (1552.5nm) and channel 30 (1553.3nm) to optical terminal 2. FPGA boards are also deployed at the optical terminals to coordinate with the central hub, modulate the upstream traffic, and demodulate the downstream traffic. Optical terminal 1 is connected to the central hub and to terminal 2 with fiber X and fiber Y, respectively. The lengths of fiber X and Y are varied throughout the experiment in order to evaluate different aspects of system performance.

4.7.1 Enabling devices

Fast Tunable Laser

Fast tunable lasers (TLs) are a key component for enabling the next-generation optical backhaul. A promising TL should have a fast tuning capability and a wide wavelength tuning range. As technology has progressed, semiconductor TLs [52] [53] [54] [55] [56] [57] with tuning times of less than 100ns have been developed and demonstrated. Among the various types of semiconductor tunable lasers, those based on the free-plasma effect, such as grating-assisted codirectional coupler with sampled reflector (GCSR) lasers and Sampled Grating Distributed Bragg Reflector (SG-DBR) lasers, provide a wide wavelength tuning range (40nm - 70nm) and fast tuning within tens of nanoseconds [52], [55], [56]. Our previous research has demonstrated fast and fine tuning of a GCSR tunable laser using digitally controlled overdrive currents [53], [54]. To further explore the capabilities of semiconductor tunable lasers, we investigate and employ SG-DBR lasers on the proposed backhaul system testbed.

Figure 4-15(a) illustrates the fast tunable laser used. It consists of a laser module made by Agility Inc. and a control circuit board that is custom designed to achieve fast wavelength tunability. As shown in figure 4-15(b), the laser module has three tuning sections:

(1) Front-mirror section: located at the front side of the laser resonant cavity. It reflects a comb of wavelengths back into the cavity.

(a) The tunable laser module used on the system testbed (laser module and control circuit board)

Figure 4.15: Fast tunable laser

(2) Back-mirror section: located on the other side of the cavity. It reflects a comb of wavelengths whose pitch deviates slightly from that of the front-mirror comb. Hence only the wavelengths common to both mirrors can be amplified in the cavity.

(3) Phase section: located within the resonant cavity. It adjusts the effective optical length.

Each section can be independently controlled by an electrical current, so by injecting different current values the lasing wavelength can be tuned. We explored the wavelength tuning range by adjusting the currents, and the experimental results show that nearly 50nm of tuning range (1520nm - 1570nm) is achieved. To generate accurate control currents for the three sections, three commercial 16-bit digital-to-analog converter (DAC) chips are used. Since the laser modules have slight manufacturing differences, characterization is required to construct a current-wavelength map for each laser module. The map is then coded into a Field Programmable Gate Array (FPGA) chip, and during operation the DAC chips output the required currents according to the map. To enhance the tuning speed, an overshooting technique is employed [54].
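The tuning control just described can be sketched as a simple lookup-and-overdrive routine, shown below. The table entries, the DAC interface, and the overdrive factor and duration are hypothetical; the actual map is measured per module and implemented in the FPGA and DAC hardware.

# Sketch of the per-module current-wavelength lookup with a brief overdrive step.
CURRENT_MAP_MA = {            # channel -> (front_mirror, phase, back_mirror) currents (assumed)
    32: (12.4, 3.1, 18.7),
    33: (14.0, 2.8, 21.2),
}

def tune(dac, channel, overdrive=1.3, overdrive_ns=20):
    """Drive the three tuning sections to reach `channel`, with a short overshoot."""
    target = CURRENT_MAP_MA[channel]
    dac.write([overdrive * i for i in target])     # brief overdrive pulse to speed the transition
    dac.wait_ns(overdrive_ns)
    dac.write(list(target))                        # settle at the calibrated currents

class FakeDAC:                                     # stand-in for the 16-bit DAC chips
    def write(self, currents_ma): print("DAC currents (mA):", currents_ma)
    def wait_ns(self, ns): pass

tune(FakeDAC(), channel=33)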

(b) Tunable laser structure (SG-DBR with front-mirror, phase, gain, and rear-mirror sections and sampled gratings; the laser output is at the wavelength common to the front and rear section reflection combs; courtesy of Agility, http://www.agility.com)
Figure 4.15: Fast tunable laser (con't)

Figure 4-15(c) shows the transient response of tuning the wavelength from 1550.1nm to 1550.9nm; the output stabilizes in less than 10ns. Throughout the 50nm tuning range, the tuning times between any two 100GHz DWDM channels are less than 100ns. Note that the same tunable laser module is also used on the system testbed of the reconfigurable optical backhaul architecture in section 3.4.3.

Reflective Semiconductor Optical Amplifier

The Reflective Semiconductor Optical Amplifier (RSOA) is another enabling device for the optical grid unit. RSOAs have attracted considerable attention in WDM-PON research [58], [59] as a low-cost alternative with the following advantages:

(1) Combination of amplification and high-speed modulation.

(2) Wide optical bandwidth (more than 60nm) with a flexible center wavelength between 1200nm and 1550nm.

(3) Improved noise figure: around 6dB [60].

(c) Tuning transient response from 1550.1nm to 1550.9nm

Figure 4.15: Fast tunable laser (con't)

In WDM-PONs, the RSOA is usually located at the remote node to reflect the seed light transmitted by the central office. In [59], modulation at up to 5Gbps over a 20km fiber link is demonstrated. A typical RSOA has three sections, as shown in figure 4-7:

(1) Front facet: uses an anti-reflection coating to realize a transparent interface.

(2) Back mirror: uses a high-reflection coating to reflect the input light back through the RSOA.

(3) Gain medium: the light traveling in both directions is amplified in the gain medium; the gain is controlled by the current at the modulation port.

Figure 4-16 illustrates the RSOA module used on the testbed, which is manufactured by Kamelian Inc. Its maximum modulation speed is 1.5Gbps, its maximum optical gain is 10dB, and its optical bandwidth is 30nm centered at 1550nm. We built a control circuit running at 1.25Gbps to control the modulation current.



Figure 4.16: RSOA module (Kamelian) employed on the system testbed

4.7.2 Downstream Transmission Experiment

A 1.25Gbps downstream transmission experiment is depicted in figure 4-17(a). Besides the generic devices of figure 4-14, two fixed lasers are employed to emulate simultaneous transmission by other TLs at the central hub. The downstream traffic to terminals 1 and 2 is transmitted at 1550.9nm and 1552.5nm, respectively. As we evaluate the transmission to optical terminal 2, the two interfering lasers are set to the adjacent channels, i.e. 1553.3nm and 1551.7nm. The lengths of fiber X and Y are 2km and 13km, respectively. The eye diagrams and bit error rates in different situations are measured to evaluate the system performance. Since the TL is shared among optical terminals, the transmission duration to each terminal needs to be scheduled, which results in burst transmission. We scheduled the transmission bursts to terminals 1 and 2 to be 100µs and 20µs, respectively. Figure 4-17(b) shows the signals received by the PDs of optical terminals 1 and 2 on the oscilloscope. Note that the tuning transient is approximately 20ns. To evaluate the signal degradation due to different factors, we measured the eye diagram of the transmitted signal in three scenarios: (i) no interference (both FLs off), measured at the output of the central hub; (ii) two interferers on, measured at the output of the central hub; (iii) two interferers on, measured at optical terminal 2's receiver.

(a) Experimental testbed configuration (central hub with TL, two interfering FLs at 1551.7nm (Ch.32) and 1553.3nm (Ch.30), BERT, and oscilloscope; fiber X, 2km, to optical terminal 1 (1550.9nm); fiber Y, 13km, to optical terminal 2 (1552.5nm))

Figure 4.17: Downstream transmission experiment

(b) Waveforms of the signals received by optical terminal 1 (λ1D = 1550.9nm) and optical terminal 2 (λ2D = 1552.5nm)

Figure 4.17: Downstream transmission experiment (con't)

(c) Eye diagrams received by optical terminal 2 in the three scenarios (no fiber, no interference; no fiber, two interferers; 15km fiber, two interferers)

Figure 4.17: Downstream transmission experiment (con't)

The three eye diagrams in figure 4-17(c) show the signal degradation due to interference, path loss, and component impairments. The eye closure from (i) to (ii) is due to the interference, and from (ii) to (iii) to the signal attenuation. Note that with two interfering lasers and 15km of transmission length, i.e. case (iii), the eye opening is still sufficient for photo detection. The bit error rate (BER) measurements in figure 4-17(d) quantitatively show the signal degradation: at a BER of 10^-9, the fiber and component impairments introduce 0.36dB of degradation, and the 1st and 2nd interferers introduce 0.05dB and 0.08dB, respectively.

4.7.3 Upstream transmission experiment

A 1.25 Gbps upstream transmission experiment is depicted in figure 4-18(a). As in the downstream experiment, two fixed lasers are used to emulate simultaneous transmission by other TLs at the central hub. The seed light to terminals 1 and 2 is transmitted at 1551.7 nm and 1553.3 nm, respectively. As we evaluate the upstream


(d) BER measurements of the three scenarios (curves: baseline, 15 km with no interference, 15 km with 1 interferer, 15 km with 2 interferers; x-axis: received power, dBm; the fiber and component impairments introduce about 0.36 dB of penalty)

Figure 4.17: Downstream Transmission Experiment (con't)


transmission of optical terminal 2, the two interfering lasers are set at the adjacent channels, i.e. 1554.1 nm and 1552.5 nm. A variable optical attenuator (VOA) is installed to facilitate the transmission experiment. The lengths of fiber X and Y are 1 km and 1 km, respectively. We used a shorter fiber length than in the downstream transmission because a longer fiber introduces unacceptable Rayleigh backscattering, which degrades the signal quality. This is in fact a challenge for using RSOAs in access networks with reach up to 20 km. Fortunately, since the hybrid optical-wireless network is designed for urban areas, we envision that 2 km of fiber connectivity for one branch will be sufficient for most cases. If we need to extend the fiber length, techniques such as [61] can effectively reduce Rayleigh backscattering.

(a) Experimental testbed configuration: the central hub contains the tunable laser (TL), FPGA/SERDES, BERT, two interfering fixed lasers (FLs) at 1552.5 nm (Ch. 31) and 1554.1 nm (Ch. 29), and a 3x1 coupler; a variable optical attenuator (VOA), fiber X (1 km), and fiber Y (1 km) connect to optical terminals 1 (1551.7 nm) and 2 (1553.3 nm); an oscilloscope monitors the received signals. TL: tunable laser, FL: fixed laser, BERT: bit error rate tester, VOA: variable optical attenuator.

Figure 4.18: Upstream transmission experiment

To evaluate the signal degradation due to Rayleigh backscattering, we measure the eye diagrams of the transmitted signal in two cases: (i) Back-to-back connection: fiber patch cords are used as fiber X and Y. The RSOA gain is fixed at 10.5 dB, and we adjusted the VOA so that the received signal power


at the central hub is -23 dBm. (ii) 2 km total fiber length: two 1 km fibers are used as fiber X and Y. The RSOA gain is fixed at 10.5 dB, and we adjusted the VOA so that the received signal power at the central hub is -23 dBm.

(b) Eye diagrams measured at the central hub: back-to-back (RSOA gain = 10.5 dB, received power = -23 dBm) and 2 km (1 km + 1 km) total fiber length (RSOA gain = 10.5 dB, received power = -23 dBm)

Figure 4.18: Upstream transmission experiment (con't)

Figure 4-18(b) shows the eye diagrams measured at the output of the PD at the central hub. Compared to the back-to-back case, the eye diagram for the 2 km total fiber length shows obvious vertical and horizontal eye closure. To quantitatively investigate the degradation due to backscattering and RSOA noise, the bit error rates (BERs) are measured in the following four situations: (i) Back-to-back connection. The input power to the RSOA was set to -3.5 dBm by adjusting the VOA attenuation. Then we adjusted the RSOA gain so that the output power of the RSOA equals 0 dBm. (ii) Back-to-back connection. The input power to the RSOA was set to -10.5 dBm by adjusting the VOA attenuation. Then the RSOA gain was adjusted so that the output power of the RSOA was equal to 0 dBm.


(c) BER measurements of the four scenarios (RSOA BER curve comparison; x-axis: receiver input power, dBm)

Figure 4.18: Upstream transmission experiment (con't)

(iii) 2 km fiber link. The input power to the RSOA was set to -3.5 dBm by adjusting the VOA attenuation. Then the RSOA gain was adjusted so that the output power of the RSOA was equal to 0 dBm. (iv) 2 km fiber link. The input power to the RSOA was set to -10.5 dBm by adjusting the VOA attenuation. Then the RSOA gain was adjusted so that the output power of the RSOA was equal to 0 dBm. Figure 4-18(c) shows the BER curves corresponding to the four scenarios. The BER degradation from (i) to (ii) and from (iii) to (iv) is due to lower SNR, where the noise is dominated by the RSOA noise. The BER degradation from (i) to (iii) and from (ii) to (iv) is due to the Rayleigh backscattering along the optical fibers.


4.7.4 QoS Experiments of the Optical Backhaul

Quality of Service (QoS) is critical to the user's experience of communication networks. QoS refers to resource reservation and control mechanisms that provide the capability to prioritize different applications, users, or data flows. With QoS, the bit rate, packet delay, packet jitter, packet loss, or bit error rate associated with different applications, users, or data flows can be guaranteed. QoS is thus imperative for real-time multimedia applications such as voice over IP (VoIP), online games, and IP-TV, because these applications usually require a constant bit rate or are delay sensitive. On the proposed optical backhaul, the central hub controls communications with all optical terminals, so QoS can be readily implemented by having the central hub schedule packet transmissions. In this section, we study the QoS performance over the experimental testbed described in figure 4-14. Among the different QoS metrics, packet delay is selected because it is critical to all broadband multimedia applications. We differentiate the traffic into 4 classes with different packet delay guarantees, as defined in the wireless IEEE 802.11e standard. The 4 classes are: (1) High Priority (HP) traffic: highest priority for transmission when the medium is available, such as voice over IP traffic. (2) Medium Priority (MP) traffic: second highest priority for transmission when the medium is available, such as video-on-demand traffic. (3) Normal Priority (NP) traffic: third highest priority for transmission when the medium is available, such as video broadcast. (4) Best Effort (BE) traffic: lowest priority for transmission when the medium is available, such as email and web browsing. In figure 4-19(a), the setup for the QoS performance measurement over the optical grid unit is illustrated. On the FPGA board we implemented the following functional blocks to facilitate QoS:


(a) FPGA configuration enabling QoS: a packet generator and packet stripper feed the HP, MP, NP, and BE buffers (held in DRAM); a scheduler moves packets to the TL1 and TL2 buffers driving the two tunable lasers at the hub. Parameters: 4 traffic classes, 10^6 packets generated and divided among the 4 classes; fixed packet length of 40-byte preamble, 8-byte header, and 200-byte payload; the DRAM buffer can store up to 1024 packets.

Figure 4.19: QoS implementation and experiment results on the system testbed

(1) Packet generator: randomly generates the four classes of packets with equal probability and a fixed packet size (40-byte preamble, 8-byte header, and 200-byte payload) to emulate random incoming traffic to the central hub. (2) Packet stripper: inspects the packet class and routes the packet to the appropriate buffer according to its class. (3) Buffers for the four traffic classes: these buffers temporarily store the packets before they are routed to the tunable laser buffer for transmission. The on-board memory holds a maximum of 1024 packets per class and rejects any additional packets. (4) Packet scheduler: moves packets to the tunable laser buffer according to the


absolute scheduling policy: packets are de-queued according to priority class, and lower-priority buffers are not de-queued until the higher-priority buffers are empty. De-queued packets are assigned to the tunable laser buffer with the least buffer depth. (5) Buffers for the tunable lasers: temporarily store the packets before they are transmitted by the tunable laser. The on-board memory holds a maximum of 1024 packets and rejects any additional packets. A total of 10^6 packets are generated according to a Poisson random process. We conducted experiments to measure the packet delay, TL buffer depth, and packet loss. These quantities are measured under 40% and 80% traffic loading, where the traffic loading is defined as:

\mathrm{Loading} = \frac{B_{total}}{D_{link}\cdot\Delta T} \qquad (4.10)

where B_total is the total number of bits generated, D_link is the data rate of the link, and ΔT is the duration of the experiment. The experiment results are summarized in figure 4-19(b) through figure 4-19(e). Figure 4-19(b) shows the packet delay distribution among the four traffic classes under the 40% load case. As expected, the delay distribution indicates that higher-priority packets experience less delay; less than 1% of the HP traffic is delivered with more than 300 µs of delay. Figure 4-19(c) and figure 4-19(d) show the average buffer depth distribution of the four buffers under the 40% and 80% traffic load cases, respectively. The values of buffer depth are averaged throughout the experiment. For both cases, the HP traffic was almost never queued in the HP buffer. As the traffic load increases, the lower-priority buffers tend to queue more packets. Figure 4-19(e) shows the packet loss of the four traffic class buffers under the 40% and 80% traffic load cases. Packet loss is due to buffer overflow. In both cases, all HP packets are successfully delivered with similar delay statistics. Under 40% traffic load, about 130 ppm of BE packets are


(b) Distribution of packet delay (µs) for the HP, MP, NP, and BE classes under 40% traffic loading (statistics in %; total of 10^6 packets generated)

Figure 4.19: QoS implementation and experiment results on the system testbed (con't)

"HP

Total of 10" packets generated


W

WIP ^BE

i
0.1 I

r
I
mi

|
J

i
?

Statistics (%)

| i .

61

92 122 153 183 214 245 275 306 336 367 398 426 457 487 518 548 579 610 640 671 701 732 763

Buffer depth (packet), loading = 0.4

(c) Average buffer depth of the 4 traffic class buffers under 40% traffic loading

Figure 4.19: QoS implementation and experiment results on the system testbed (con't)


(d) Average buffer depth (packets) of the 4 traffic class buffers under 80% traffic loading (statistics in %; total of 10^6 packets generated)

Figure 4.19: QoS implementation and experiment results on the system testbed (con't)

Priority    Packet loss (loading = 0.4)    Packet loss (loading = 0.8)
HP          0                              0
MP          1                              21
NP          59                             701
BE          344                            15098

(e) Packet loss of the four traffic class buffers under 40% and 80% traffic loading

Figure 4.19: QoS implementation and experiment results on the system testbed (con't)


dropped, which is a small fraction of the overall BE traffic. Under 80% traffic load, still only about 6% of the best effort traffic is dropped.
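To make the absolute-priority scheduling policy of this section concrete, the following minimal Python sketch mimics the strict-priority de-queueing and the least-depth tunable-laser assignment described above. The class, buffer, and method names are illustrative placeholders and are not taken from the FPGA implementation.

```python
from collections import deque

CLASSES = ["HP", "MP", "NP", "BE"]   # strict priority order
MAX_DEPTH = 1024                      # per-buffer capacity (packets)

class AbsolutePriorityScheduler:
    def __init__(self, num_tl=2):
        self.class_buffers = {c: deque() for c in CLASSES}
        self.tl_buffers = [deque() for _ in range(num_tl)]
        self.dropped = {c: 0 for c in CLASSES}

    def enqueue(self, pkt, cls):
        # Reject packets once a class buffer is full (models buffer-overflow loss).
        if len(self.class_buffers[cls]) >= MAX_DEPTH:
            self.dropped[cls] += 1
        else:
            self.class_buffers[cls].append(pkt)

    def schedule_one(self):
        # De-queue from the highest-priority non-empty class only; lower classes
        # wait until every higher-priority buffer is empty.
        for cls in CLASSES:
            buf = self.class_buffers[cls]
            if buf:
                pkt = buf.popleft()
                tl = min(self.tl_buffers, key=len)   # least-depth TL buffer
                if len(tl) < MAX_DEPTH:
                    tl.append(pkt)
                else:
                    self.dropped[cls] += 1
                return cls
        return None
```

Under this policy, BE packets are only served when the HP, MP, and NP buffers are empty, which is consistent with their loss growing fastest as the load increases in the measurements above.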

4.8 Integrated Routing Algorithm for Optical Grid Unit

Under the optical grid unit architecture, there are usually multiple routing paths between the central hub and any user. Different routing paths consist of different combinations of optical and wireless links. In general, the wireless links have limited bandwidth compared to the optical links. Hence, as congestion occurs among certain wireless routers of the WMN, it is desirable to allocate alternative route(s) to reduce the congestion and improve the overall performance. In this section, we focus on an integrated routing paradigm that jointly allocates optical and wireless links to achieve network and resource efficiency. In section 2.2.6, we mentioned that most routing protocols for WMNs are derived from wireless ad-hoc networks, which differ from WMNs in terms of node mobility, power constraints, and computation capability. In addition, traffic patterns in WMNs are very different from those of wireless ad-hoc networks, because the traffic in a WMN is mostly, if not entirely, forwarded between the wireless gateway routers and the end users, as in other access networks [30]. Hence, conventional proactive and reactive routing protocols developed for wireless ad-hoc networks need to be modified if they are to be applied to WMNs for performance optimization. In determining an end-to-end routing path, the required resource, such as bandwidth consumption, is called the routing cost. A routing path is usually evaluated according to its routing cost. One approach to a good routing algorithm is to allocate routing paths so that the overall routing cost is minimized. In hybrid optical wireless access networks, wireless links generally have a much higher routing cost than optical links. This is because wireless links share limited RF spectrum in open space, and an active wireless link will interfere with neighboring wireless links. On the other hand, the optical backhaul has abundant bandwidth, so


allocating optical links along the end-to-end route incurs negligible routing cost. Although the optical backhaul can be ignored in the routing cost calculation, allocating the appropriate optical link or equivalent optical terminal along the routes can help to improve the network performance. In figure 4-20, for example, there is a "hot zone" around wireless gateway router A where wireless links are heavily loaded. To send traffic to mesh router a that is equally far from wireless gateway routers A, B, C, and D, it is better to route through B, C, or D to avoid the hot zone. Therefore a desirable routing algorithm for a hybrid optical wireless architecture should allocate appropriate optical links (or optical terminals) based on the real time WMN situation for performance optimization. Since both optical and wireless segments are involved in routing decision, the resultant algorithm is called an integrated routing algorithm.


Figure 4.20: Appropriate optical link allocation to improve WMN performance

4.8.1 Integrated Routing Algorithm

We proposed an integrated routing algorithm that computes the optimum route based on the up-to-date wireless link state and average traffic load [39]. This algorithm will adapt to each wireless mesh router's link status and select the optimum end-to-end route on the hybrid network to dynamically balance the load. The proposed integrated routing algorithm is described by the following steps and the corresponding system operation is illustrated in figure 4-21: (1) Wireless link state update: Each wireless mesh router periodically probes the


(Block diagram: the central hub contains the route assignment module and TX/RX; optical terminals 1 and 2 connect through interfaces 1 and 2 to gateways 1 and 2, which serve mesh routers 1 through 4)

Figure 4.21: System operation of proposed integrated routing algorithm


link states with neighboring routers. The probing can be done by measuring the retransmission ratio [25] or the transmission loss rate [24] in both directions. The measurement verifies the connectivity and reflects the quality and capacity of wireless links that vary over time. Each wireless mesh router then broadcasts the link states in a bounded flooding manner, i.e. the packet can only propagate on the WMN for a limited number of hops, as illustrated in figure 4-22. The hop-count limit, H_max, is assigned according to the (wireless gateway router)/(wireless mesh router) ratio: a small ratio requires a large hop-count limit. As such, only the nearby wireless gateway routers receive the broadcast from each mesh router. Compared to wireless ad-hoc networks, the overhead of the link state update under the hybrid architecture is significantly lower because the fixed infrastructure and network engineering lead to less channel variation. A high gateway/router ratio will further reduce the overhead by limiting the propagation of update broadcasts on the WMN.


Figure 4.22: Bounded flooding of link state information

(2) Local WMN route calculation: Based on the link state updates, the interface between the optical terminal and the wireless gateway router calculates the optimum route for each mesh router within the hop-count limit, in both downstream and upstream directions, using a shortest-path algorithm with the link states as costs [24], [25].


(3) Route cost report: Each optical terminal reports to the central hub the calculated route cost in both directions for each wireless mesh router within H_max. (4) Gateway association: After receiving the reports from all optical terminals, the route assignment module associates every wireless mesh router with the gateway router that has the lowest cost. The comparison result between a wireless mesh router and the different wireless gateway routers is stored for future reference to enable load balancing. (5) Congestion monitoring: At the interface between the optical terminal and the wireless gateway router, a low-pass filter is dedicated to each associated wireless mesh router to measure the average flow rates in both directions. Based on the measurement, a capacity table is continuously updated to monitor the overall loading of the local WMN. To calculate the capacity, the interference in the local WMN can be measured using a technique proposed in [62]. If the loading in a certain area exceeds a congestion threshold, the interface will locate the k furthermost router(s) RT{1,2,...,k} whose flows pass the hot-zone and would sufficiently reduce the congestion if the flows were removed. If some routers among RT{1,2,...,k} exceed a minimum hop-count threshold H_min, these routers are categorized into RT_flow_control{1,2,...,i}, and the rest of the routers into RT_load_balancing{1,2,...,j}; note that i + j = k. (6) Congestion report: The interface that detects congestion (e.g. interface 1 in figure 4-21) sends a congestion report including RT_load_balancing{1,2,...,j} to the route assignment module. (7) Alternative gateway router look-ups: For each router in RT_load_balancing{1,2,...,j}, the route assignment module checks the gateways that received its link state update (e.g. interface 2 in figure 4-21), based on the stored comparison results described in step 4. (8) Wireless gateway router re-association: The re-association begins with the gateways with the lowest cost. The interfaces of the gateways will then check whether


the loading threshold will be exceeded if the flows are added. If the threshold would be exceeded, then the route assignment module will negotiate with the gateway with the second lowest cost, and so on. If routers among RT_load_balancing{1,2,...,j} fail to be associated with a new gateway, they will be included in RT_flow_control. (9) Flow control: After RT_load_balancing{1,2,...,j} are re-associated, flow control will be executed for RT_flow_control.
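The gateway-association and congestion-driven re-association steps above can be summarized by the following Python sketch. The data structures (per-router route-cost tables, per-gateway load and capacity) and the threshold value are hypothetical placeholders used only to illustrate the control flow, not the actual implementation at the central hub.

```python
def associate_gateways(route_costs):
    """Step (4): route_costs[router][gateway] holds the cost reported by each
    optical terminal; each router is associated with its lowest-cost gateway."""
    return {router: min(costs, key=costs.get) for router, costs in route_costs.items()}

def rebalance(assoc, route_costs, congested_routers, router_flow,
              gw_load, gw_capacity, threshold=0.8):
    """Steps (7)-(9): for each router flagged for load balancing, try the
    alternative gateways in order of stored route cost; a move is accepted only
    if the target gateway stays below its loading threshold. Routers that
    cannot be moved are returned for flow control."""
    flow_control = []
    for router in congested_routers:
        flow = router_flow[router]
        current = assoc[router]
        moved = False
        for gw in sorted(route_costs[router], key=route_costs[router].get):
            if gw == current:
                continue
            if gw_load[gw] + flow <= threshold * gw_capacity[gw]:
                gw_load[current] -= flow          # remove flow from the hot gateway
                gw_load[gw] += flow               # add it to the new gateway
                assoc[router] = gw
                moved = True
                break
        if not moved:
            flow_control.append(router)
    return assoc, flow_control
```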

4.8.2 Algorithm Simulation

The performance improvement achieved by the proposed algorithm was simulated with NS-2 [63]. The simulation scenario consists of 4 gateways and 192 mesh routers, as shown in figure 4-23, where the distance between any two adjacent nodes is 100 m. An IEEE 802.11b module implemented in NS-2 is used, and the data rate is up to 11 Mbps. Poisson random traffic and a two-ray radio propagation model are used throughout the simulation. It is assumed that every mesh router has uniform transmission power and receiver sensitivity. By applying the range model described in [31], [32], we assume that the transmission range and the interference range equal 120 m and 180 m, respectively. Each node has only one radio interface and is equipped with an omni-directional antenna. Note that in this simulation scenario, shortest-path routing with link state as cost reduces to minimum-hop routing. We inject uniform background traffic at each mesh router, whereby the average flow rate out of each gateway to the 48 nearby mesh routers is 1% of the data rate. Assuming there is a hot-zone as shown in figure 4-23, additional traffic is injected at the wireless mesh routers in the hot-zone, increasing the overall traffic from 0.1 Mbps to 1.5 Mbps. The congestion threshold is set to 8% (0.88 Mbps) of the data rate, i.e. after the flow rate of a certain router exceeds 0.88 Mbps, the interface will detect congestion and send a congestion report to the central hub. Figure 4-24(a) and figure 4-24(b) show the normalized throughputs of the wireless mesh routers using the conventional minimum-hop routing algorithm and the integrated


(Scatter plot of optical-wireless gateways and wireless mesh routers on a roughly 1400 m x 1400 m grid, with the hot-zone marked; axes X (m) and Y (m))
Figure 4.23: Simulation scenario


routing algorithm when the overall traffic equals 0.88 Mbps. The conventional routing algorithm, one of the simplest routing algorithms used in many networks, allocates the route to a specific node with the minimum hop count. As shown, with the integrated routing algorithm the throughput of the wireless mesh routers is balanced throughout the WMN, because the traffic load at the hot zone is shared by less congested wireless gateway routers. Figure 4-25(a) and figure 4-25(b) show the average throughput and packet delay of the wireless mesh routers in the hot-zone. As shown, after the loading of the hot zone exceeds 0.77 Mbps (in the presence of 0.11 Mbps background traffic), flows to the boundary wireless mesh routers begin to shift to the other three wireless gateway routers to balance the overall load, based on the proposed algorithm. At high loads, the throughput and delay are improved by about 25% using the proposed integrated routing algorithm.

(a) Throughput distribution with the minimum-hop algorithm

Figure 4.24: Normalized average throughput distributions


(b) Throughput distribution with the integrated routing algorithm (normalized throughput versus X (100 m) and Y (100 m))

Figure 4.24: Normalized average throughput distributions (con't)


(a) Throughput comparison: integrated routing algorithm versus minimum hop algorithm (nominal data rate 11 Mbps); throughput versus load (Mbps)

Figure 4.25: Average throughput and packet delay comparison


(b) Latency comparison: minimum hop algorithm versus integrated routing algorithm (nominal data rate 11 Mbps); delay versus load (Mbps)

Figure 4.25: Average throughput and packet delay comparison (con't)

Chapter 5

Burst-Mode Clock and Data Recovery Technique


5.1 Introduction

In chapter 4 we introduced a novel optical grid unit backhaul that uses a fast tunable laser and a wavelength routing infrastructure to facilitate communications between the central hub and the optical terminals. To enable resource sharing and bandwidth efficiency, the total bandwidth of the fast tunable lasers in the central hub is shared by all the optical terminals for both upstream and downstream transmission. In the time domain, the received signal at the central hub and at any optical terminal is bursty, i.e. the received signal has a detectable level for only a portion of the time. For example, in figure 5-1 receiver 1 (RX1) at the central hub receives the upstream signal from optical terminal 1, which is carried at λ1. Since the upstream traffic from optical terminal 1 can only be sent when the tunable laser at the central hub is available, the signal received by RX1 is segregated into bursts from different optical terminals. To receive burst-mode traffic, a burst-mode receiver is required, which consists of two key blocks: the level recovery and the clock and data recovery (CDR) circuits. The level recovery circuit amplifies input bursts to a sufficiently high level to facilitate subsequent electrical processing. The clock and data recovery circuit recovers a clock to sample the input data bits. Compared to a continuous-mode receiver in


conventional optical communication systems, the challenge of a burst-mode receiver is that it has to recover the level and clock within a period of time that is short compared to the burst payload. To enhance the clock and data recovery speed, the burst begins with a preamble that consists of alternating zeros and ones, as shown in figure 5-2. The consecutive transitions minimize the time required to recover the clock. After the preamble, the payload carries random data bits, and the recovered clock should align with the middle of the data bits for sampling. In this chapter, we will focus on burst-mode CDR techniques and present a novel burst-mode CDR technique. We will first introduce CDR technology for conventional continuous-mode transmission and show why it is not suited to burst-mode applications. Next we will review up-to-date burst-mode CDR solutions. Then we introduce the proposed new CDR technique, which can rapidly recover the clock for burst-mode transmission while providing jitter tolerance.

(The tunable laser at the central hub serves multiple terminals, so RX1 sees bursts from terminal 1 at λ1, each consisting of a preamble and payload; the burst-mode receiver chain comprises a photodetector, TIA, limiting amplifier, decision circuit with clock recovery, and DEMUX)

Figure 5.1: Burst-mode transmission on the optical grid unit and the burst-mode receiver


(Received signal after level recovery: a preamble followed by the payload drives the clock recovery block, which outputs the recovered clock)

Figure 5.2: Burst-mode clock and data recovery

5.2 Clock and Data Recovery in Conventional Optical Communication Systems

The Phase Locked Loop (PLL) is the conventional solution for clock and data recovery in continuous communication traffic, where phase-coherent data flows continuously. The PLL CDR circuit tracks the phase of incoming data continuously and provides a clock that is aligned with the data bits for data sampling. Because of its inherently long phase locking time, however, the PLL CDR is not a high-performance candidate for burst-mode applications. Let's first investigate the limitation of PLL CDR with mathematical analysis and simulations.


PD: Phase Detector, LPF: Low Pass Filter, VCO: Voltage Controlled Oscillator

Figure 5.3: PLL CDR

Figure 5-3 shows a second-order PLL mathematical model, which consists of a phase detector (PD), a first-order low pass filter (LPF), and a voltage-controlled


oscillator (VCO). The PD compares the phase difference between the input data phase φ_in and the VCO output clock phase φ_out. The LPF filters out the high-frequency components of the PD output, and the low-frequency components control the VCO frequency. In the phase domain, a second-order PLL has two poles: one from the first-order LPF, the other from the VCO, which is modeled as an integrator. The open-loop transfer function of the second-order PLL is:

H(s)_{open} = \frac{\phi_{out}}{\phi_{in}}(s) = \frac{K_{PD}K_{VCO}}{s}\cdot\frac{\omega_{LPF}}{s + \omega_{LPF}} \qquad (5.1)

where K_PD and K_VCO are the gains of the PD and the VCO, respectively. The second term on the right-hand side is the transfer function of the first-order LPF. From the open-loop transfer function, we can derive the closed-loop transfer function [64]:

H(s)_{close} = \frac{K_{PD}K_{VCO}\,\omega_{LPF}}{s^2 + \omega_{LPF}\,s + K_{PD}K_{VCO}\,\omega_{LPF}} \qquad (5.2)

For this second-order system, if a phase step function is applied at the input, the output response time constant is:

\tau_{lock} = \frac{2}{\omega_{LPF}} \qquad (5.3)

As shown in Eq. 5.3, the time constant that determines the locking time, or clock recovery time, of the second-order PLL is inversely proportional to the LPF bandwidth. Hence, an increase in LPF bandwidth leads to a shorter locking time. Figure 5-4 shows the LPF dynamic response upon phase locking and the eye diagrams after locking at two loop bandwidths: 100 kHz and 1 MHz. The data rate is 1 Gbps. From figure 5-4, if we increase the LPF bandwidth by 10 times, the resultant control voltage of the VCO, and thus the clock eye diagram, degrade dramatically. The trade-off between locking time and clock quality is demonstrated. To mitigate the performance degradation, a second-order LPF is usually employed in a PLL CDR. The second-order LPF is explained in detail in section 5.4.3. With proper design, some commercial PLL CDR circuits can push the locking time from thousands of bits down to hundreds of bits. For example, in the EPON specification [2],


(a) VCO control voltage response (loop bandwidth = 100 kHz); (b) VCO control voltage response (loop bandwidth = 1 MHz); (c) eye diagram after phase locking (loop bandwidth = 100 kHz); (d) eye diagram after phase locking (loop bandwidth = 1 MHz)

Figure 5.4: LPF responses and eye diagrams at different loop bandwidths


the clock recovery time must be less than 400 ns, which is 500 bits at a 1.25 Gbit/s line rate, and some legacy PLL CDRs from long-haul optical communication systems can be tuned to fulfill the specification. However, compared to the typical Ethernet packet length (hundreds of bytes) used to encapsulate traffic in EPON, 500 bits is a significant fraction of the packet length. This reduces the bandwidth efficiency of EPON compared to its competitor, GPON. As a result, a faster clock recovery circuit is desirable for high-performance burst-mode optical communication systems.
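As a rough illustration of the locking-time/bandwidth trade-off discussed above, the sketch below numerically integrates the linearized phase-domain model of figure 5-3 for two LPF bandwidths. The gain values and time step are arbitrary assumptions chosen only to make the trend visible; they are not parameters of any particular CDR chip.

```python
import numpy as np

def pll_phase_step_response(w_lpf, k_pd=1.0, k_vco=2*np.pi*50e6,
                            phi0=1.0, t_end=20e-6, dt=1e-9):
    """Forward-Euler integration of the linearized second-order PLL phase model:
    phase detector -> first-order LPF (bandwidth w_lpf) -> VCO integrator.
    Returns the time axis and the VCO excess phase responding to a phi0 step."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = 0.0           # LPF output (VCO control voltage)
    phi_out = 0.0     # VCO excess phase (rad)
    out = np.empty(n)
    for i in range(n):
        err = k_pd * (phi0 - phi_out)     # PD output (DC term only)
        v += dt * w_lpf * (err - v)       # first-order low-pass filter
        phi_out += dt * k_vco * v         # VCO integrates its control voltage
        out[i] = phi_out
    return t, out

# A wider loop filter locks faster but passes more ripple to the VCO control:
for bw in (2 * np.pi * 100e3, 2 * np.pi * 1e6):
    t, phi = pll_phase_step_response(bw)
    err = np.nonzero(np.abs(phi - 1.0) > 0.1)[0]   # samples still >10% off
    settle = t[err[-1]] if err.size else 0.0
    print(f"w_LPF = {bw:.2e} rad/s -> settles within 10% after ~{settle*1e6:.1f} us")
```

In this toy model the 100 kHz case takes roughly ten times longer to settle than the 1 MHz case, consistent with the inverse dependence of the locking time on the LPF bandwidth noted above.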

5.3 Review of Up-to-date Burst-mode CDR Techniques

In this section we will review three major burst-mode CDR techniques that can facilitate rapid data and clock recovery.

5.3.1 Digital ring oscillator

This digital ring oscillator (DR-OSC) CDR was first proposed in [65]. Its architecture is shown in figure 5-5. The incoming signal is first split into two branches after level recovery. The upper branch consists of an electrical delay line, which requires manual tune-up to align the data with the recovered clock for data sampling. The lower branch, which is the heart of the CDR, consists of positive and negative feedback loops to recover clock. The clock frequency is the common frequency of the two loops. A preamble (alternating logic zeros and ones) is required at the beginning of the incoming burst, because the recovered clock is generated by duplicating the preamble. The switch in figure 5-5 is a key component: it turns on upon the burst arrival and turns off at the boundary between the preamble and data payload. Note that if the preamble has the same rate as the data, a double edge triggering is necessary for the decision circuit (i.e. the D Flip-Flop in figure 5-5). This double-edge triggering, however, makes this CDR circuit vulnerable to clock duty-cycle distortion.


(Upper branch: manually tuned delay line for phase alignment feeding a double-edge-triggered decision circuit; lower branch: positive and negative feedback loops, with a switch that opens at t = L)

Figure 5.5: Digital ring oscillator

Operation principle

The oscillation of the DR-OSC can be analyzed as follows. Figure 5-6 shows the two loops in the DR-OSC circuit:

Figure 5.6: The two feedback loops (positive and negative) in the DR-OSC CDR

The resonance frequencies of the positive feedback loop are expressed as [65]:

f_p(i) = \frac{i}{L} \qquad (5.4)

where L is the round-trip time of the positive loop, and i = 1, 2, 3, .... The resonance frequency of the negative loop is:


f_n(j) = \frac{2j - 1}{2(L - \Delta L)} \qquad (5.5)

where (L - ΔL) is the round-trip time of the negative loop, and j = 1, 2, 3, .... The coefficient 2 in the denominator of the negative loop fulfills the Barkhausen criterion. The DR-OSC operates at a matched resonance frequency of the two loops, i.e. f_p(i) = f_n(j). Hence, by selecting proper L and ΔL, we can obtain the specific frequency we want. Although the above analysis determines the clock frequency, the time-domain waveform provides more insight into the operation. As shown in figure 5-7, the number 0 or 1 in black at each node indicates the initial condition. For simplification, let's assume that the logic gate delay is negligible. In addition, assume ΔL equals m × T_bit, where m is an odd integer equal to or larger than 1 (in figure 5-7, m = 1), and L is the length of the preamble, n × T_bit, with n > m. Let's first consider the situation when a pulse with width T_bit is applied at the input. After the pulse comes out of the AND gate on the left-hand side, it experiences a delay of ΔL in the delay buffer. Therefore, the output of the following AND gate generates a delayed version of the pulse after L. By feeding this delayed pulse into the inverter, which is necessary to drive the external feedback delay line, the looped-back inverted pulse appears at the starting point of the loop (L - ΔL) later, and continues the next lap. The time period for this process is exactly L, and the pulse will propagate around the loop until the loop is broken. From the discussion above, it is clear that if a "pulse train" with pulse period T_bit and total length L = n × T_bit is fed into the loop, a continuous clock will emerge at the output port ΔL later. The operation is shown in figure 5-8, where ΔL is equal to 3 T_bit and L equals 6 T_bit. A simulation was performed and the result verifies the analysis of the DR-OSC CDR. Figure 5-9(a) shows the applied 2 ns preamble, and figure 5-9(b) shows the generated (duplicated) clock. As expected, the clock recovery time is 50 ps.



Figure 5.7: A single pulse applied to the DR-OSC CDR

Figure 5.8: Pulse train applied to the DR-OSC CDR


(a) Applied preamble; (b) Recovered clock

Figure 5.9: Simulation of the DR-OSC CDR

Discussion

(1) For the DR-OSC, the accuracy of the delay buffer and the delay line is critical. If ΔL is longer than m × T_bit, it will lead to clock distortion and receiver sensitivity degradation. Furthermore, if the external delay line is longer than the signal preamble length, then the duplicated clock will be distorted periodically. As a result, tunable delay lines/buffers and manual tune-up are required for performance optimization. (2) Even if the optimum delay is derived by adjusting the two delay elements, the clock quality may still drift due to temperature changes and electromigration on the monolithic chip. Hence, temperature compensation and performance monitoring circuitry is mandatory. (3) The DR-OSC CDR is vulnerable to input phase jitter. The duplicated preamble is uncorrelated with the payload in the presence of random phase jitter.


(4) The gate delay cannot be ignored at high data rates. An accurate gate delay is critical when designing the preamble length L and the acquisition time ΔL. The achievable clock recovery time (ΔL) is fundamentally limited by the inverter in front of the ΔL delay line.

5.3.2 Instantaneously locked clock and data recovery circuit

The instantaneously locked clock and data recovery circuit was proposed by [66]. The circuit is shown in figure 5-10.
(GVCO-A and GVCO-B are gated by the data and its complement and their outputs are combined into the recovered clock, which drives a D flip-flop to recover the data; GVCO-C, a phase detector, and a loop filter form a PLL locked to the local reference clock, and the three GVCOs share the same frequency-control voltage)

Figure 5.10: Instantaneous clock and data recovery circuit

The incoming random data are split into two branches. One branch is fed into the CDR module, and the recovered clock is used to sample the signal on the other branch. The CDR module is composed of three identical Gated Voltage Controlled Oscillators (GVCOs), logic gates, and a frequency synthesizer (PLL). In figure 5-10, the frequency synthesizer PLL generates a clock at the data rate based on the local reference clock. Figure 5-11 shows the structure of the GVCO, which is the heart of the


circuit.

(Gate inputs: a frequency control voltage and a start/stop gate, where logic 1 stops and logic 0 starts the oscillation)

Figure 5.11: Gated Voltage Controlled Oscillator

The GVCO is composed of logic gates and forms a ring oscillator. The oscillation frequency is equal to the reciprocal of the loop delay. The bias voltage can be adjusted to tune the oscillation frequency. The NOR gate serves as a switch to turn the GVCO on or off. When a logic one is applied, the NOR output is forced to zero, and the whole loop is turned off. When a logic zero is applied, the output rises to logic one after a gate delay and the whole loop becomes a typical ring oscillator. The circuit is notable for its controllability and quasi-instantaneous start-up of oscillation (a short delay contributed by the NOR gate).

Operation principle

First let's assume the three GVCOs in figure 5-10 (GVCO-A, GVCO-B, and GVCO-C) are perfectly matched in every aspect. As part of the PLL, GVCO-C is always locked to the local reference clock, driven by a certain control voltage value. Since GVCO-A and GVCO-B are identical to GVCO-C, by sharing the same control voltage, GVCO-A and GVCO-B will oscillate at the same frequency when they are turned on. Due to the inverter between the gate inputs of GVCO-A and GVCO-B, only one of them is turned on at any moment. According to figure 5-12, for example, when the input data bit is at the logic high level, GVCO-A is in the "on" state and thus feeds its clock output into the following NOR gate. It is the same case for GVCO-B when


the input data bit is at logic low level. During the off state, the output of GVCO-A or GVCO-B is always held low, so the following NOR gate will faithfully duplicate the inverse version from one of its inputs. In other words, GVCO-A and GVCO-B provide complementary oscillating periods, as shown in figure 5-12. The NOR gate (point C) thus combines the outputs of the two GVCOs and provides a continuous output clock. The recovered clock is refreshed every time the gated oscillators restart due to an input data transition.
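A behavioral sketch of this gating and recombination is given below. It idealizes the gate delay to zero and assumes the PLL has set the oscillation period exactly equal to the bit period, so it only illustrates how the two complementary oscillators hand the clock off at each data transition; it is not a circuit-level model.

```python
def gated_cdr_clock(data_bits, t_bit=0.1e-9, dt=1e-12):
    """Idealized model: GVCO-A free-runs while the data is high, GVCO-B while it
    is low; each restarts from phase zero when re-enabled, and the gated-off
    oscillator output is held low. A NOR gate combines the two outputs."""
    samples = int(t_bit / dt)
    t_on_a = t_on_b = 0.0          # time since each GVCO was (re-)enabled
    clock = []
    for bit in data_bits:
        for _ in range(samples):
            if bit:                                   # GVCO-A on, GVCO-B gated off
                a = 1 if (t_on_a % t_bit) < t_bit / 2 else 0
                b = 0
                t_on_a += dt
                t_on_b = 0.0
            else:                                     # GVCO-B on, GVCO-A gated off
                b = 1 if (t_on_b % t_bit) < t_bit / 2 else 0
                a = 0
                t_on_b += dt
                t_on_a = 0.0
            clock.append(1 - (a | b))                 # NOR of the two GVCO outputs
    return clock

# The combined output is a continuous clock whose phase is refreshed at every
# data transition, mirroring the waveforms of figure 5-12.
clk = gated_cdr_clock([1, 0, 0, 1, 1, 1, 0])
```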

(Waveforms: data, GVCO-A output, GVCO-B output, and the combined clock at point C)

Figure 5.12: The outputs of the GVCOs and the resultant clock

Discussion

(1) Since the control voltage of GVCO-C is shared by GVCO-A and GVCO-B to generate a clock with the same frequency, the three GVCOs must be matched perfectly, because the operation frequency is determined by the total gate delay in the GVCO. However, mismatch is inevitable in monolithic integrated circuits (ICs). Hence, mismatch may cause clock frequency deviation among the three GVCOs. If the frequency deviation among the GVCOs is significant, the recovered clock will gradually shift away from the optimum sampling point, resulting in sensitivity degradation. (2) Even if the GVCOs are perfectly matched, since the clock is generated freely according to the local reference clock, there is no tolerance to phase jitter on the incoming signal.


5.3.3 Nonlinear clock extraction circuit

The nonlinear clock extraction circuit [67] is illustrated in figure 5-13: a level recovery circuit first amplifies the incoming bursts above a certain signal level. After level recovery, the signal is split into two branches.
(Upper branch: adjustable delay line and a single-edge D flip-flop decision circuit; lower branch: LPF, amplifier, frequency doubler, BPF, and limiting amplifier producing the clock)

Figure 5.13: Nonlinear clock extraction circuit

In the lower branch, the LPF first filters out high-frequency noise. The amplifier following the LPF further amplifies the signal above a certain level before the frequency doubler. Conceptually, the frequency doubler is a mixer with both input ports fed by the same signal. The output of the frequency doubler is fed into a band-pass filter (BPF) with a passband centered at the specified clock frequency to extract power at the desired clock frequency. Finally, the filtered clock passes through a limiting amplifier for constant output amplitude. In the upper branch, there is an adjustable delay line, which is manually adjusted to align the recovered clock with the input data.

Operation Principle

For a Non-Return-to-Zero (NRZ) signal, there is no spectral component at the desired clock frequency, f_clk, as shown in figure 5-14. One straightforward way to obtain the desired clock is simply to double the frequency of the random NRZ signal and extract the clock with a BPF centered at f_clk (f_clk = 1/T_bit), where T_bit is the bit period. This is the basic idea of this clock recovery circuit based on nonlinear techniques.



Figure 5.14: Frequency spectrum of a random non-return-to-zero (NRZ) signal

Mathematical analysis of clock recovery time

To derive a closed form of the clock recovery time, we first simplify the clock recovery circuit as in figure 5-15.

Figure 5.15: The math model of the nonlinear clock recovery circuit (input x(t) = u(t)·sin(ω_0 t/2), output y(t))

The preamble of the input signal is a sinusoidal function with its frequency at half of the desired clock frequency. We assume that the BPF is a second-order
BPF with a transfer function:

H_{BPF}(s) = \frac{\frac{\omega_0}{Q}\,s}{s^2 + \frac{\omega_0}{Q}\,s + \omega_0^2} \qquad (5.6)

where Q is the quality factor of the BPF and ω_0 is the center frequency. The 3 dB bandwidth is ω_0/Q.


Assuming that the input is sinusoidal, the output of the frequency doubler (square operator) is:

\left[u(t)\sin\!\left(\tfrac{\omega_0}{2}t\right)\right]^2 = \tfrac{1}{2}\left[u(t) - u(t)\cos(\omega_0 t)\right] \qquad (5.7)

where u(t) is the unit step function. Given this input, the output of the BPF is:

Y(s) = \frac{\frac{\omega_0}{Q}\,s}{s^2 + \frac{\omega_0}{Q}\,s + \omega_0^2}\cdot\frac{1}{2}\left[\frac{1}{s} - \frac{s}{s^2 + \omega_0^2}\right] \qquad (5.8)

After applying the inverse Laplace transform to Y(s), the time-domain response y(t) is:

y(t) = \frac{1}{2}\,u(t)\left[e^{-\frac{\omega_0 t}{2Q}}\cos(k\omega_0 t) + \frac{1}{2Qk}\,e^{-\frac{\omega_0 t}{2Q}}\sin(k\omega_0 t) - \cos(\omega_0 t)\right] \qquad (5.9)

where

k = \sqrt{1 - \frac{1}{4Q^2}} \qquad (5.10)

If Q is much larger than 1, which is the case for a desirable BPF, then k is approximately 1, so y(t) can be approximated as:

y(t) \approx -\frac{1}{2}\left(1 - e^{-\frac{\omega_0 t}{2Q}}\right)\cos(\omega_0 t) \qquad (5.11)

Therefore, if Q is much larger than 1, the envelope of the transient output response is (1 - e^{-\omega_0 t/(2Q)}). This reveals that the clock recovery waveform is exponential and is a function only of the BPF's 3 dB bandwidth, ω_0/Q. The clock recovery time τ is:

\tau = \frac{2Q}{\omega_0} = \frac{2}{\omega_{3dB}} \qquad (5.12)
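As a quick numeric check of Eq. (5.12), assuming the quoted BPF bandwidths are 3 dB bandwidths expressed in hertz, the recovery time constant can be evaluated directly; the helper below is only a convenience for this arithmetic.

```python
import math

def clock_recovery_tau(bw_3db_hz):
    # Eq. (5.12): tau = 2Q/w0 = 2/w_3dB, with w_3dB = 2*pi*BW
    return 2.0 / (2.0 * math.pi * bw_3db_hz)

for bw in (60e6, 120e6):
    print(f"BPF 3 dB bandwidth {bw/1e6:.0f} MHz -> tau = {clock_recovery_tau(bw)*1e9:.1f} ns")
# A 60 MHz bandwidth gives roughly 5.3 ns, matching the simulated value quoted below.
```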


Simulation of nonlinear clock recovery

We verify the mathematical analysis with simulations. Figure 5-16 shows a 10 Gbps input consisting of alternating zeros and ones, and the output clock waveform when the BPF bandwidth equals 60 MHz and 120 MHz:

(Panels, shown in both the time domain and the frequency domain (average power spectral density): 5 GHz sinusoidal input; output clock with BPF = 60 MHz; output clock with BPF = 120 MHz)

Figure 5.16: Input and output waveforms and the corresponding average frequency spectra

As shown, qualitatively the output clock grows exponentially, and the time constant is inversely proportional to the BPF bandwidth. The time instant for the


extrapolation of the time constant is approximately 5.3ns, which agrees with the mathematical analysis. Next we investigate the impact of random input data on the clock recovery.

Figure 5.17: Waveforms of the NRZ burst input and the recovered clock

Figure 5-17(a) shows the input burst, which is composed of a preamble (i.e. alternating zeros and ones) and a random NRZ payload. The bit period is 0.1 ns (i.e. data rate = 10 Gbps). As shown, the output clock amplitude fluctuates during the burst payload. There are two important issues: (1) If the payload has no transitions for many bits, the output clock level will decay exponentially in the same way as it recovers. To maintain the clock amplitude above a certain level, a coding scheme that guarantees the transition density, such as 8B10B coding [68], is required. (2) The fluctuation of the clock amplitude needs to be eliminated by a limiting amplifier before the clock triggers the high-speed decision circuit. Similar to the PLL CDR, when we increase the BPF bandwidth to enhance the clock recovery speed, the recovered clock quality degrades. This is illustrated by the


eye-diagrams in figure 5-18:

(a) Jittered Input

(b) Output Clock (BPF BW=60MHz)

(c) Output Clock (BPF BW=10MHz)

Figure 5.18: Impact of BPF bandwidth on the recovered clock quality

Figure 5-18(a) shows the jittered random input to the CDR circuit. Figures 5-18(b) and 5-18(c) show the eye diagrams of the recovered clocks when the BPF bandwidth equals 60 MHz and 10 MHz, respectively. The flat edges of the eyes are the result of the trimming by the limiting amplifier.

Discussion

(1) If the BPF center frequency is not located at 1/T_bit, the clock sampling point will drift away from the center of the eye through the packet; in other words, the data sampling point will drift from the optimum point. The amount of sampling-point drift depends on the frequency deviation and the packet length. (2) There is a trade-off between clock jitter suppression and clock recovery time, as shown in figure 5-18. As the data rate increases, the recovery time needs to be reduced to maintain the same number of preamble bits, and therefore a wider bandwidth should be used. A wider BPF bandwidth introduces more jitter to the recovered clock and, as a result, a worse BER.


(3) This circuit is difficult to implement on a single chip. For high-data-rate systems, a high-Q BPF is required. For most VLSI technologies, a monolithic LC tank is not a reliable choice because of its poor controllability of both the center frequency and the quality factor. If off-chip components are allowed, then an RF BPF, a SAW filter, and a dielectric resonator are possible candidates. (4) The upper-branch delay line requires manual adjustment to align the clock phase to the data bits.

Table 5.1: Summary of three CDR techniques

                        Nonlinear Clock Extraction    DR-OSC          Instantaneous CDR
Recovery time           < 15 ns [HORNET]              few bits (1)    0 ns (2)
Clk freq. deviation     Low                           Medium          High
Monolithic ability      No                            Yes (3)         Yes
Match requirement       N/A                           Low (4)         High (5)
Jitter suppression      Yes                           No              No
Ext. phase alignment    Yes                           Yes             No

5.3.4 Summary of the three burst-mode CDR techniques

Compared to conventional PLL CDR, these three CDR techniques can recover clock rapidly. Table 5-1 is a summary of the three techniques. To achieve fast clock recovery, these circuits are based on feedforward architectures. Without a feedback loop to track the input burst, the recovered clock has potential frequency deviation from the input data rate and cannot tolerate phase jitter. In addition, these techniques are not adequate for monolithic mass production. In the next section, we will describe a new burst-mode CDR technique based on feedback loops to fix the frequency deviation and provide good jitter tolerance.
Table 5.1 notes: (1) depends on the total logic gate delay; (2) quasi-instantaneous; (3) if the external delay line is replaced with an on-chip buffer; (4) matching among on-chip delay elements; (5) matching among GVCOs.


5.4 Dual-loop burst-mode CDR technique

We propose a new burst-mode dual-loop CDR technique that consists of a delay lock loop (DLL) and a phase lock loop (PLL) to achieve (1) rapid clock recovery, (2) precise frequency tracking, and (3) good jitter tolerance. The dual-loop CDR architecture is illustrated in figure 5-19.
(The incoming data feeds two loops: a PLL (PD, loop filter, VCO), the slow loop for frequency acquisition, and a DLL (PD, loop filter, VCDL), the fast loop for phase alignment; together they produce the recovered clock and recovered data)

Figure 5.19: Dual-loop burst-mode CDR circuit architecture

In principle, the DLL inherently has a fast response in the phase domain, so it is employed to rapidly align the local clock phase with the input data upon burst arrival. After the phase is aligned and locked, the PLL gradually pulls the local clock frequency to the input data rate. In the following sections, we will first analyze the DLL and the PLL separately to see how their functionality is accomplished as an integral part of the dual-loop CDR structure. Mathematical analysis and simulation will provide insight into their enabling characteristics. Then we will integrate the two loops as in figure 5-19 and simulate the performance. The simulation results and analysis will illustrate how the two loops cooperate to achieve the required performance goals.


5.4.1 Simplified Delay lock loop (DLL)

The simplified DLL schematic and mathematical model are shown in figure 5-20(a) and figure 5-20(b), respectively. The simplified DLL structure is similar to the PLL in section 5.2, except that the VCO is replaced by a voltage-controlled delay line (VCDL). The VCDL takes a clock input from the local receiver (RX) reference clock. The delay of the VCDL is controlled by the LPF output, and the negative feedback loop adjusts the VCDL delay to eliminate the phase difference between the VCDL output clock and the input data. In steady state, the VCDL output clock is locked to the input by the DLL.

(a) DLL schematic with a voltage-controlled delay line (VCDL); (b) math model of the DLL

Figure 5.20: Delay lock loop (DLL)

Math analysis of the simplified DLL model

To analyze the phase-locking dynamics of the DLL, we will focus on the excess phase of the input, φ_in(t), and that of the VCDL output clock, φ_clk(t), as shown in the mathematical model.(1)

(1) The total phase is φ_total = ω_0 t + φ(t), where ω_0 is the frequency of the input data rate or local reference clock. Note that the input data rate is the same as the transmitter (TX) reference clock frequency.


To simplify the mathematical analysis, we assume: (1) the VCDL is linear, having a gain of K_VCDL (rad/V). (2) The PD is modeled as a multiplier that outputs some high-frequency components and a DC component equal to K_PD(φ_clk - φ_in), where K_PD is the gain of the PD. (3) The low-pass filter (LPF) is a first-order LPF with its -3 dB frequency located at ω_LPF. In practice, the components in a DLL are not linear. For example, the transfer function φ_out/V_ctrl of a real VCDL is not linear. Therefore, to precisely verify the loop behavior of a monolithic DLL, a professional computer-aided design (CAD) tool is required for simulation. Although the following analysis is a mathematical approximation, it provides useful insight for the design and understanding of the loop dynamics. The closed-loop transfer function is:
H(s)_{close-loop} = \frac{\Phi_{clk}}{\Phi_{in}}(s) = \frac{K_{PD}K_{VCDL}}{\frac{s}{\omega_{LPF}} + 1 + K_{PD}K_{VCDL}} \qquad (5.13)

Assume that there is a sudden input phase change due to the arrival of a burst, which is modeled by a step function of the input excess phase, i.e. φ_in(t) = φ_0 u(t) (without loss of generality, the phase change is assumed to occur at t = 0); in the s-domain this corresponds to Φ_in(s) = φ_0/s. The VCDL output clock phase response is:

\Phi_{clk}(s) = \frac{\phi_0}{s}\cdot\frac{K_{PD}K_{VCDL}}{\frac{s}{\omega_{LPF}} + 1 + K_{PD}K_{VCDL}} \qquad (5.14)

\phi_{clk}(t) = \phi_0\,\frac{K_{PD}K_{VCDL}}{1 + K_{PD}K_{VCDL}}\left\{1 - e^{-\omega_{LPF}\left(1 + K_{PD}K_{VCDL}\right)t}\right\} \qquad (5.15)


Discussion of the DLL mathematical analysis

(1) The final value of the VCDL clock output φ_clk(t) is not equal to φ_0, unless K_PD·K_VCDL approaches infinity.

(2) The time constant (τ) that determines the phase acquisition (or clock recovery) speed is:

\tau = \frac{1}{\omega_{LPF}\left(1 + K_{PD}K_{VCDL}\right)} \qquad (5.16)

Therefore, to speed up the phase acquisition, we can either increase the LPF bandwidth or the product of K_PD and K_VCDL. However, if the LPF bandwidth is too large, the high-frequency components of the LPF output will introduce significant jitter on the VCDL output clock. We will address this issue with simulation later. (3) Comparing the transfer functions of the PLL and the DLL, the DLL is inherently one order lower than the PLL. This is because the VCDL directly translates the control voltage to clock phase, while the VCO translates the control voltage to frequency, and the resultant excess phase is accumulated over a period of time. The accumulation, or integration, contributes a pole to the transfer function. The lower order makes the DLL more stable than the PLL and allows more flexibility to adjust other performance metrics, such as locking speed.

Simulation results

We perform simulations to verify the mathematical analysis. We use a 5 GHz continuous sinusoidal wave to emulate continuous input data. In figure 5-21, the VCDL outputs are compared for different cases: LPF bandwidths equal to 50 Mrad/s and 100 Mrad/s, respectively. The loop gain, defined as K_PD·K_VCDL, is fixed at 1. As shown, a wider bandwidth results in faster acquisition (figure 5-21(a) and figure 5-21(c)), whereas it introduces larger ripple on the delayed clock at steady state (figure 5-21(b) and figure 5-21(d)). Note that the dynamics follow the predicted exponential function in approaching the steady state. If we define the time constant as the 10%-90% time (t_10%-90%) of


the exponential waveform, the time constant is approximately 12 ns for the 100 Mrad/s, loop gain = 1 case, corresponding to 120 bits at 10 Gbps. The theoretical value predicted by eq. 5.16 is 2.2 × τ, which is approximately 11 ns. Hence the simulation and theoretical values match well.
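The agreement quoted above can be reproduced with a one-line evaluation of Eq. (5.16), using the simulation's stated parameters (ω_LPF = 100 Mrad/s and a loop gain of 1); the snippet below is only this arithmetic, not a loop simulation.

```python
w_lpf = 100e6          # rad/s
loop_gain = 1.0        # K_PD * K_VCDL
tau = 1.0 / (w_lpf * (1.0 + loop_gain))   # Eq. (5.16): ~5 ns
t_10_90 = 2.2 * tau                        # 10%-90% rise time of an exponential
print(f"tau = {tau*1e9:.1f} ns, 10%-90% time = {t_10_90*1e9:.1f} ns")  # ~11 ns
```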

(a) VCDL output phase (LPF = 50 Mrad/s, loop gain = 1); (b) corresponding steady-state clock (LPF = 50 Mrad/s, loop gain = 1); (c) VCDL output phase (LPF = 100 Mrad/s, loop gain = 1); (d) corresponding steady-state clock (LPF = 100 Mrad/s, loop gain = 1)

Figure 5.21: Simulation results of the control voltage of the VCDL with different LPF bandwidths

In figure 5-22, when the loop gain equals 4, the resultant clock is significantly distorted compared to the loop gain = 1 case. Although a lower loop gain gives better clock quality due to the smaller ripple of the VCDL control voltage, it also introduces a larger phase difference between the input continuous wave and the VCDL output clock. The trade-off between the phase locking time, phase difference,


and clock quality is thus manifested.

Figure 5.22: VCDL output clock waveforms with different loop gains

5.4.2 Charge Pump DLL (CPDLL)

In the last section, the simplified DLL model is analyzed and simulated with continuous sinusoidal input. In practice the input is a random NRZ bit stream. In addition, the finite phase difference at steady state is required to sustain the nonzero VCDL control voltage. The finite phase difference is not desirable for the CDR, because the phase difference requires an additional phase alignment scheme to align the input data and VCDL output clock. To lock to random NRZ data and eliminate the finite phase difference, a Charge Pump DLL (CPDLL) is typically employed. The architecture of a CPDLL is illustrated in figure 5-23. During input data transitions, the PD provides a charge or discharge signal depending on the phase difference; otherwise it is idle. The charge pump (CP) consists of current sources that are controlled by charge/discharge signals. It pumps/sinks current to/from the following capacitor. The charge stored in the capacitor will determine the control voltage to the VCDL.
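The charge-pump/capacitor interaction can be captured by a tiny update rule, sketched below with arbitrary illustrative values for the pump current, capacitance, and time step (none of these are parameters of the circuit in figure 5-23).

```python
def charge_pump_step(v_ctrl, charge, discharge, i_cp=100e-6, cap=10e-12, dt=10e-12):
    """One time step of an ideal charge pump: the charge pulse sources i_cp into
    the capacitor, the discharge pulse sinks it, and the stored charge sets the
    VCDL control voltage."""
    i = i_cp * (1.0 if charge else 0.0) - i_cp * (1.0 if discharge else 0.0)
    return v_ctrl + i * dt / cap
```

Because the pump only acts on data transitions, the control voltage, and hence the VCDL delay, stays frozen during long runs of identical bits.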


Figure 5.23: Charge pump delay lock loop (CPDLL). The random input data and the local reference clock (at approximately the data rate) drive the phase detector; the charge pump and the storage component (capacitor) generate the control voltage of the VCDL, which produces the output clock.

Phase detector and charge pump

Since the input is random data, the phase detector (PD) in figure 5-23 must be a specialized design that can compare the phase of the random input against the RX reference clock. Figure 5-24 shows the Hogge phase detector [69], a popular linear phase detector for random data. It consists of an inverter, two D flip-flops, and two XOR gates. Because an XOR outputs a logic one when its inputs are complementary, node X becomes a logic one whenever the incoming data has a transition. The duration of that logic one depends on the phase relationship between the input data and the local clock. If the clock leads the input data, i.e. the triggering edge is to the left of the optimum sampling point, the logic-one duration is less than half a clock cycle. Conversely, if the clock lags, the triggering edge is to the right of the optimum point and the logic-one duration is longer than half a clock cycle. As such, the voltage at node X serves as an indication of the phase relationship. Node Y, on the other hand, always stays at logic one for exactly half a clock cycle whenever the input data has a transition, because the rising edge of the local clock sets Y high and the next falling edge resets it.
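The X/Y pulse-width behavior described above can be checked with a small behavioral model. The sketch below is a deliberate simplification of the circuit of figure 5-24 (ideal flip-flops, a single data transition, arbitrary oversampling step); it only reproduces the pulse-width relationships, not the analog behavior.

```python
import numpy as np

Tbit = 0.1e-9                      # 10 Gbps bit period
dt = Tbit / 1000                   # oversampling step
t = np.arange(0, 4 * Tbit, dt)

def hogge(lead):
    """Widths of X and Y (ns) when the clock edge leads the bit center by `lead`."""
    data = (t >= Tbit).astype(int)                       # single 0->1 transition at t = Tbit
    clk = ((t - Tbit / 2 + lead) % Tbit) < (Tbit / 2)    # rising edges at bit centers, shifted left by `lead`
    q1 = np.zeros_like(data); q2 = np.zeros_like(data)
    for i in range(1, len(t)):
        q1[i] = data[i] if (clk[i] and not clk[i - 1]) else q1[i - 1]   # D-FF clocked on rising edge
        q2[i] = q1[i] if ((not clk[i]) and clk[i - 1]) else q2[i - 1]   # D-FF clocked on falling edge
    x = data ^ q1                  # node X (charging signal)
    y = q1 ^ q2                    # node Y (discharging signal)
    return x.sum() * dt * 1e9, y.sum() * dt * 1e9

print(hogge(0.0))        # aligned clock:      X ~ 0.05 ns, Y ~ 0.05 ns
print(hogge(0.01e-9))    # clock 0.01 ns early: X ~ 0.04 ns, Y ~ 0.05 ns
```

As the model shows, X shrinks by exactly the amount the clock leads, while Y stays at half a clock cycle, which is what makes their difference usable as a linear phase error.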


Figure 5.24: Hogge phase detector. The input data and the local clock feed two D flip-flops (outputs Q1 and Q2, the second clocked through an inverter); two XOR gates generate node X (the charging signal) and node Y (the discharging signal).

Figure 5-25 and figure 5-26 demonstrate the waveforms at the different nodes of a Hogge PD. In figure 5-25, the local clock rising edge is aligned with the midpoint of the data bit, so X and Y both stay at logic one for half a bit (i.e. 0.05 ns at a 10 Gbps data rate) whenever the input data has a transition. In figure 5-26, the local clock leads the data bit by 0.01 ns, i.e. the clock rising edge is shifted to the left by 0.01 ns. As shown, the resultant X logic-one duration is only 0.04 ns while Y remains at logic one for half a clock cycle. Therefore, the resultant (-X+Y) has a non-zero total area.

Mathematical analysis

To simplify the analysis of the CPDLL, we assume that the Hogge PD and the VCDL are linear², and that the gains of the Hogge PD, charge pump (CP), capacitor, and VCDL are K_PD (V/sec)³, K_CP (A/V), 1/(C₁s) (sec/F), and K_VCDL (sec/V), respectively. For notational simplicity, we lump the four gains into a single term K_loop/s, where K_loop = K_PD·K_CP·K_VCDL/C₁. We will call K_loop the loop gain.

² The details of the linear approximation of the Hogge PD can be found on pages 267-269 of [64].
³ Here time is used to indicate the phase difference, i.e. one clock or bit period corresponds to 2π of phase.



Figure 5.25: Waveform of the nodes in Hogge PD when the incoming data and the clock are aligned



Figure 5.26: Waveform of the nodes in the Hogge PD when the clock leads the incoming data by 0.02 ns (bit period is 0.1 ns, i.e. 10 Gbps)


From the closed loop illustrated in figure 5-23, the VCDL output phase $\Phi_{clk}(s)$ is:

$$\Phi_{clk}(s) = \left(\Phi_{in}(s) - \Phi_{clk}(s)\right) K_{PD}\, K_{CP}\, \frac{1}{C_1 s}\, K_{VCDL} \qquad (5.17)$$

$$\frac{\Phi_{clk}(s)}{\Phi_{in}(s)} = \frac{K_{PD} K_{CP} K_{VCDL}}{C_1 s + K_{PD} K_{CP} K_{VCDL}} = \frac{K_{loop}}{s + K_{loop}} \qquad (5.18)$$

where, again, $K_{loop} = K_{PD} K_{CP} K_{VCDL}/C_1$. The closed-loop transfer function is therefore:

$$H(s)_{closed\text{-}loop} = \frac{K_{loop}}{s + K_{loop}} \qquad (5.19)$$

When a burst arrives, we assume that there is a sudden phase change at the input of the CPDLL, which translates directly into an input excess phase $\phi_{in}(t)$. We model this excess phase change as $\Phi_0\, u(t)$, where $u(t)$ is the unit step function. From the closed-loop transfer function, the response $\Phi_{clk}(s)$ is:

$$\Phi_{clk}(s) = \frac{K_{loop}}{s + K_{loop}} \cdot \frac{\Phi_0}{s} \qquad (5.20)$$

In the time domain, the response of the VCDL output clock phase $\phi_{clk}(t)$ to the input phase change is:

$$\phi_{clk}(t) = \Phi_0\left[1 - e^{-K_{loop} t}\right] u(t) \qquad (5.21)$$

As the last equation implies, the phase acquisition time of the CPDLL during the preamble (alternating zeros and ones) is inversely proportional to the loop gain K_loop. Moreover, at steady state there is no finite phase difference between the local reference clock and the input data. This is desirable because, when the CPDLL is used as a CDR, no additional phase adjustment is required.

CPDLL simulation result

In this section we simulate a CPDLL to verify the analysis and evaluate its performance. The input data fed into the CPDLL is a realistic packet with a 40 ns preamble (400 bits at 10 Gbps) and a random data payload; the data rate is 10 Gbps. The preamble is composed of alternating logic zeros and ones. Upon a packet (burst) arrival, the initial VCDL output clock leads the input data bits by 0.03 ns (30% of the bit period). A Hogge phase detector (figure 5-24) is employed as the PD. The control voltages for different loop gains, together with their zoomed-in waveforms at steady state, are illustrated in figure 5-27. The phase acquisition speed is determined by the loop gain, as indicated by eq. 5.22. For a loop gain K_loop = 0.5×10⁹, the 10%-90% acquisition time is about 4.4 ns (44 bits at 10 Gbps), and 22 ns (220 bits at 10 Gbps) for a loop gain of 0.1×10⁹. These values agree with the mathematical prediction.
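The quoted acquisition times follow directly from the exponential response derived above; a short check using the loop-gain values of the simulation reproduces them.

```python
import math

# 10%-90% acquisition of the first-order step response 1 - exp(-K_loop*t)
# is ln(9)/K_loop ~ 2.2/K_loop.
for k_loop in (0.5e9, 0.1e9):                  # loop gains used in the simulation
    t_acq = math.log(9) / k_loop
    print(f"K_loop = {k_loop:.1e} 1/s -> 10%-90% acquisition ~ {t_acq*1e9:.1f} ns "
          f"({t_acq*10e9:.0f} bits at 10 Gbps)")
# -> ~4.4 ns (44 bits) and ~22 ns (220 bits), matching the simulated values above.
```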

Figure 5.27: The control voltages of the DLLs with different loop gains (preamble followed by the random NRZ payload; the lower trace corresponds to loop gain = 0.1×10⁹).

In figure 5-27, the control voltage exhibits ripple at steady state, as in the simplified DLL simulation. Larger ripple introduces more variation of the clock's zero-crossing points, and this variation degrades the clock quality. As shown in figure 5-27, when the loop gain K_loop equals 0.5×10⁹, the peak clock variation is 0.0025 ns, or 2.5% of the bit period, which is an acceptable clock jitter. Figure 5-28 shows the phase relationship between the local clock and the input data, both initially and at steady state. At steady state the DLL output locks to the input data and the phase difference is zero.

Figure 5.28: Phase relationship between the input data and the DLL output clock at different points of the burst; panel (a) shows the initial phase difference (data bits and delayed clock over 0.3 ns-0.8 ns) and panel (b) the steady-state phase difference (99.5 ns-100 ns).

Using CPDLL for CDR

Given the above discussion and simulation results, the rapid phase acquisition and the zero steady-state phase difference make the CPDLL a promising burst-mode CDR candidate. However, the assumption that the RX local clock frequency equals the input data rate does not hold in most systems. Typically, communication systems use commercial crystal resonators to generate the reference frequencies at the TX and the RX independently, and these reference frequencies may deviate from each other by up to 200 ppm. Therefore we need to investigate the performance of a CPDLL used as a CDR in the presence of TX-RX reference frequency deviation.

Figure 5.29: VCDL control voltage when the TX reference clock frequency deviates from that of the RX: (a) by -200 ppm, (b) by -1000 ppm, and (c) by +200 ppm.

Assume that the RX clock period is 0.1 ns, i.e. the clock frequency is 10 GHz. Figure 5-29(a) shows the control voltage of the VCDL when the TX clock period is 0.10001 ns, i.e. the TX frequency is deviated by -100 ppm from the nominal frequency. Figure 5-29(b) shows the case when the TX clock period is 0.1001 ns, i.e. the TX frequency is deviated by -1000 ppm from the nominal frequency. As shown, the phase difference between the input data and the local clock accumulates over time when the TX and RX frequencies differ, so the CPDLL keeps adjusting the clock delay to accommodate the phase difference. Figure 5-29(c) shows the opposite case, where the input data bit period is 0.09999 ns.


Phase tracking error when the TX and RX reference frequencies deviate

Next, let us investigate the recovered clock phase in the presence of frequency deviation between the TX and the RX. Assuming that the input data is alternating, the excess phase of the input is mathematically a ramp whose slope is set by the frequency difference Δf, i.e. the input excess phase is φ_in(t) = 2πΔf·ramp(t), where ramp(t) is the unit ramp function. The response of the VCDL output clock to the frequency deviation is⁴:

$$\Phi_{clk}(s) = \frac{2\pi\Delta f}{s^2}\cdot\frac{K_{loop}}{s + K_{loop}} \qquad (5.22)$$

$$\phi_{clk}(t) = 2\pi\Delta f\, t - \frac{2\pi\Delta f}{K_{loop}}\left[1 - e^{-K_{loop} t}\right]u(t) \qquad (5.23)$$

As the last equation implies, if the loop gain K_loop approaches infinity, φ_clk(t) follows the input excess phase instantaneously and coincides with the ramp exactly. When the loop gain is finite, the time required to follow the input ramp is determined by the loop gain, which results in a non-zero phase difference. Figure 5-30 presents two plots for different loop gains: assuming a 100 ppm frequency deviation between the TX and the RX, it shows the phases of the input ramp (blue line) and of the VCDL output clock (green line) versus time for loop gains of 0.5×10⁹ and 5×10⁹. The phase error at steady state is a function of the loop gain. Note that the phase tracking error is only 0.3% of a UI, even for a loop gain of 0.5×10⁹; the tracking error is therefore insignificant. In reality, however, since the input data is random, there are periods when the input data has no transitions and the CPDLL cannot drive the VCDL to track the phase (the PD only outputs charge or discharge signals when there are transitions). Therefore simulation is required to estimate the phase error during a random payload.

⁴ Note that the unit ramp function corresponds to 1/s² in the Laplace domain.
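A quick numerical check of the steady-state lag implied by eq. 5.23, for the 100 ppm deviation and the two loop gains used in figure 5-30 (this evaluates only the formula, not the full simulation, so it differs slightly from the simulated 0.3% UI):

```python
# Once the exponential term dies out, the recovered clock lags the input phase
# ramp by a constant 2*pi*df/K_loop radians, i.e. df/K_loop unit intervals.
bit_rate = 10e9
df = 100e-6 * bit_rate                 # 100 ppm deviation at 10 Gbps -> 1 MHz
for k_loop in (0.5e9, 5e9):            # loop gains of figure 5-30
    print(f"K_loop = {k_loop:.1e} 1/s -> steady-state lag ~ {df/k_loop*100:.2f}% UI")
# Both values are a small fraction of a percent of a UI, of the same order as the
# simulated ~0.3% UI, supporting the conclusion that the error is insignificant.
```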


Figure 5.30: Phase tracking error of the VCDL output clock in the presence of clock frequency deviation, for loop gains of 0.5×10⁹ and 5×10⁹.


Figure 5-31 shows the histogram of the clock edge variation over a 10,000-bit random stream, with the loop gain K_loop fixed at 0.5×10⁹. For a -1000 ppm frequency deviation, the clock edge variation is less than 0.1 unit interval (UI); for a -100 ppm deviation the variation is even smaller. From the above discussion and simulation results, the VCDL output clock can track the optimum sampling point quite well in most situations.

Figure 5.31: VCDL output clock rising-edge variation during the random payload: (a) histogram for 10,000 random bits with frequency deviation = -100 ppm, and (b) with frequency deviation = -1000 ppm (loop gain = 0.5×10⁹ in both cases); the optimum sampling point of the input data is marked in each panel.

Summary

In summary, a CPDLL can rapidly recover the clock to sample an input random burst or packet, and in the presence of frequency deviation between the TX and RX reference clocks the steady-state phase tracking error is negligible. However, there are two issues with using a CPDLL as a burst-mode CDR when the TX and RX reference clocks deviate in frequency:

(1) If there is a long run of consecutive logic ones or zeros, the accumulated phase error cannot be corrected and will cause sampling failure. The maximum acceptable run length of consecutive zeros or ones is determined by the frequency deviation. If the deviation is 200 ppm from the data bit rate (as in figure 5-29(a)) and the jitter margin is assumed to be 0.2 UI (unit interval, i.e. a fraction of the mean bit period), then 0.3 UI is the maximum tolerable phase error accumulation. As such, 300 bits is the maximum run of consecutive zeros or ones that the CPDLL can endure.

(2) A VCDL has a finite delay range. If the burst is long, the VCDL will be driven beyond its maximum or minimum delay in the presence of clock frequency deviation. Once the VCDL exceeds its delay range, it can no longer track the optimum sampling point, which eventually leads to significant bit errors.

To address these two issues, we need a mechanism that locks the frequency of the RX reference clock to the TX frequency. We therefore employ a charge pump phase lock loop (CPPLL) to achieve frequency locking; the resultant architecture is the dual-loop CDR. In the next section we introduce and analyze the CPPLL.

5.4.3 Charge pump phase lock loop (CPPLL)

Figure 5-32 shows a typical second-order CPPLL. The CPPLL is similar to the CPDLL except that the VCDL is replaced with a VCO and the low pass filter has an additional resistor. The resistor in the LPF is added in series with the capacitor to stabilize the loop [64]; without it, the loop is unstable because the poles contributed by the VCO and the capacitor are both located at the origin.

Mathematical analysis of the second-order CPPLL

In figure 5-32, the gains of the PD and CP are combined as K_PD,CP (A/rad). The gain of the low pass filter is the impedance seen by the charge pump, R_z + 1/(C_1 s) (Ohm), and the gain of the VCO is K_VCO (Hz/V). If there is initially a step phase difference Φ = Φ_0, then, similarly to the CPDLL analysis, we derive:

$$V_{ctrl}(s) = \Phi_{in}(s)\, K_{PD,CP}\, \frac{R_z C_1 s + 1}{C_1 s} \qquad (5.24)$$


Figure 5.32: Charge pump phase lock loop (CPPLL). The random input data and the local reference clock (at approximately the data rate) drive the PD/CP; the loop filter output controls the VCO, which produces the output clock.

The excess clock output of the VCO is:

$$\Phi_{clk}(s) = \Phi_{in}(s)\,\frac{K_{PD,CP}\, K_{VCO}\,(R_z C_1 s + 1)}{C_1 s^2} \qquad (5.25)$$

By definition, the open-loop transfer function is:

$$H(s)_{open\text{-}loop} = \frac{K_{PD,CP}\, K_{VCO}\,(R_z C_1 s + 1)}{C_1 s^2} \qquad (5.26)$$

The closed-loop transfer function becomes:

$$H(s)_{closed\text{-}loop} = \frac{H(s)_{open\text{-}loop}}{1 + H(s)_{open\text{-}loop}} = \frac{K_{PD,CP}\, K_{VCO}\,(R_z C_1 s + 1)}{C_1 s^2 + K_{PD,CP} K_{VCO} R_z C_1 s + K_{PD,CP} K_{VCO}} \qquad (5.27)$$

We define:


$$\zeta = \frac{R_z}{2}\sqrt{K_{PD,CP}\, K_{VCO}\, C_1} \qquad (5.28)$$

$$\omega_n = \sqrt{\frac{K_{PD,CP}\, K_{VCO}}{C_1}} \qquad (5.29)$$

where ζ is the damping ratio and ω_n is the natural frequency; these are the standard parameters of a second-order system. The root locus plot of the second-order CPPLL is shown in figure 5-33.
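As a sanity check of eqs. 5.28-5.29, the short script below uses the example gains quoted later in this section (K_PD,CP = 0.01, K_VCO = 1×10⁵, C_1 = 1×10⁻¹⁴) and reproduces the damping ratios given for the under-damped and over-damped simulations.

```python
import math

K_PDCP, K_VCO, C1 = 0.01, 1e5, 1e-14
wn = math.sqrt(K_PDCP * K_VCO / C1)                 # natural frequency (rad/s)
print(f"wn = {wn:.3e} rad/s")
for Rz in (1e4, 1e5, 3e5, 16e5, 2e6, 2.5e6):        # under- and over-damped cases
    zeta = 0.5 * Rz * math.sqrt(K_PDCP * K_VCO * C1)
    print(f"Rz = {Rz:.2e} Ohm -> damping ratio = {zeta:.3f}")
# -> approximately 0.016, 0.16, 0.47, 2.53, 3.16, 3.95, matching the ratios
#    quoted for figures 5-34 and 5-35.
```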

Figure 5.33: Root locus plot of the 2nd-order CPPLL.

(1) Under-damped: for the second-order CPPLL, when the damping ratio is smaller than unity, the loop is under-damped. The poles are located at:

$$s_{1,2} = \left(-\zeta \pm j\sqrt{1-\zeta^2}\right)\omega_n \qquad (5.30)$$

s_1 and s_2 have the same real part, and this real part determines the time constant in the time domain:

$$\tau = \frac{1}{\zeta\,\omega_n} = \frac{2}{K_{PD,CP}\, K_{VCO}\, R_z} \qquad (5.31)$$

Figure 5-34 shows the under-damped case, where K_PD,CP, K_VCO, and the capacitor C_1 are equal to 0.01, 1×10⁵, and 1×10⁻¹⁴, respectively, and we change the damping ratio by changing R_z. The Hogge phase detector is employed as in the CPDLL, the VCO's free-running frequency is set at 9 GHz, and the input data rate is 10 Gbps.


Figure 5.34: VCO control voltage (V_ctrl) of the under-damped 2nd-order CPPLL, for damping ratios of 0.016, 0.16, and 0.474.


The damping ratios are shown next to the time-response waveforms in figure 5-34 and correspond to R_z = 1×10⁴, 1×10⁵, and 3×10⁵, respectively. As shown, a larger R_z increases the damping ratio and reduces the time constant, which is proportional to the acquisition time. In the second-order CPPLL, a larger damping ratio therefore results in faster acquisition, but also in worse clock quality because of the larger ripple on the control voltage. The acquisition time in the under-damped case is determined by the real part of the poles: as we increase R_z, the real part of the complex poles keeps decreasing (its magnitude grows) until the poles merge at the critical-damping point, where their magnitude equals 2(R_z C_1)⁻¹.

(2) Over-damped: when the damping ratio is larger than unity, the system is over-damped. The poles are located at:

$$s_{1,2} = \left(-\zeta \pm \sqrt{\zeta^2 - 1}\right)\omega_n \qquad (5.32)$$

Both poles are real. When ζ is much larger than unity, one pole approaches negative infinity while the other approaches -1/(R_z C_1); therefore the time constant approaches R_z C_1 when the loop is heavily over-damped.
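The behavior of the slow pole can be verified numerically with the same gains as above (a rough check; the approximation only becomes tight for large ζ).

```python
import math

# Slow (dominant) pole of eq. 5.32 versus the -1/(Rz*C1) approximation.
K_PDCP, K_VCO, C1 = 0.01, 1e5, 1e-14
wn = math.sqrt(K_PDCP * K_VCO / C1)
for Rz in (16e5, 2e6, 2.5e6):                       # over-damped cases of figure 5-35
    zeta = 0.5 * Rz * math.sqrt(K_PDCP * K_VCO * C1)
    slow = (-zeta + math.sqrt(zeta**2 - 1)) * wn    # slow pole (rad/s)
    print(f"Rz = {Rz:.1e}: slow pole = {slow:.3e} rad/s, "
          f"-1/(Rz*C1) = {-1/(Rz*C1):.3e} rad/s")
```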

Figure 5-35 illustrates the VCO control voltages at different damping ratios for the over-damped case. Again, K_PD,CP, K_VCO, and the capacitor C_1 are fixed at 0.01, 1×10⁵, and 1×10⁻¹⁴, respectively, the VCO free-running frequency is 9 GHz, and the incoming data rate is 10 Gbps. The damping ratios are shown next to the plots and translate to resistor values of 16×10⁵, 2×10⁶, and 2.5×10⁶, respectively. As shown, a larger damping ratio in the over-damped case slows down the phase locking, because when ζ is larger than one the dominant pole approaches -(R_z C_1)⁻¹, as suggested by the root locus. Note also that higher damping leads to higher ripple, which degrades the output clock quality.

The effect of random NRZ on the 2nd-order CPPLL dynamics

The analysis and simulation presented so far are based on input data consisting of alternating logic zeros and ones. In reality, however, the CPPLL of a dual-loop CDR experiences a random payload after the phase is locked by the CPDLL during the preamble. As a result, the total acquisition time of the CPPLL is longer, because the control voltage is only updated when the input data has transitions.


Figure 5.35: VCO control voltage (V_ctrl) of the over-damped 2nd-order CPPLL, for damping ratios of 2.53, 3.16, and 3.95, together with the VCO output clock waveform for the damping ratio = 3.95 case.


Figure 5.36: VCO control voltage (V_ctrl) responses to (1) input data consisting of alternating logic zeros and ones and (2) a real packet consisting of a preamble and a random payload.

Figure 5-36 shows the waveforms of the VCO control voltage (V_ctrl) in response to (1) input data consisting of alternating logic zeros and ones and (2) a real packet consisting of a preamble and a random payload. The input packet, as in the CPDLL simulation, is composed of a 40 ns preamble followed by a random NRZ payload. During the preamble (t < 40 ns) the two control voltages are identical; after entering the random payload, the phase acquisition for the real packet is much slower than in the alternating zeros-and-ones case.

Problems with a second-order CPPLL

According to the clock waveform at the bottom of figure 5-35 (damping ratio = 3.95), the VCO output clock at steady state is highly distorted, so the jitter margin, and thus the BER, will degrade. The distortion stems from the steady-state ripple of V_ctrl. To mitigate this ripple, the loop can be purposely designed to be under-damped: for an under-damped loop, the ripple amplitude is reduced by more than two orders of magnitude compared with the over-damped loop. Unfortunately, process-induced parameter deviations vary considerably and make the damping ratio hard to control (note how the ripple amplitude varies as the damping ratio is increased). Furthermore, as will be discussed later, an under-damped second-order CPPLL presents a very poor jitter transfer function. Therefore a third-order CPPLL is more suitable for realistic applications.

5.4.4 The third-order CPPLL

As shown in figure 5-37, the third-order CPPLL employs a loop filter composed of a series RC combination with an extra capacitor C_2 in parallel. Intuitively, C_2 acts as a high-frequency filter that removes the resistor-induced ripple. However, C_2 introduces an extra pole, which makes the loop only conditionally stable, because three poles introduce a maximum phase shift of 270°. Generally C_2 is made much smaller than C_1 so that the disturbance from the extra pole is minimized.
Figure 5.37: Third-order CPPLL. The random input data and the local reference clock (at approximately the data rate) drive the PD/CP; the loop filter (series R_z-C_1 with a parallel C_2) controls the VCO, which produces the output clock.

Figure 5-38 compares the locking responses of second-order and third-order CPPLLs, where the VCO free-running frequency is 9 GHz and the input data rate is 10 Gbps. With properly chosen loop parameters, the ripple amplitude of the third-order CPPLL can be reduced significantly, so the resultant clock quality is much better than that of the second-order loop. Note that the fundamental trade-off between acquisition time and V_ctrl ripple still exists for the third-order loop.
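A back-of-the-envelope look at the loop-filter impedance shows why a small C_2 helps. The values below are taken from one of the third-order cases of figure 5-38; the impedance expression is the standard series-RC-in-parallel-with-C form and is an illustration rather than the exact design procedure used here.

```python
# Loop-filter impedance Z(s) = (Rz + 1/(C1*s)) || 1/(C2*s): the stabilizing zero
# stays at 1/(Rz*C1), while the extra pole lands at 1/(Rz*Cp) with
# Cp = C1*C2/(C1+C2). For C2 << C1 the pole sits well above the zero, so it
# filters the resistor-induced ripple without disturbing the loop much.
Rz, C1, C2 = 1e5, 1e-13, 1e-14
Cp = C1 * C2 / (C1 + C2)
zero = 1 / (Rz * C1)
pole = 1 / (Rz * Cp)
print(f"zero at {zero:.2e} rad/s, extra pole at {pole:.2e} rad/s "
      f"({pole/zero:.0f}x higher)")
```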

Figure 5.38: Locking dynamics comparison between 2nd-order and 3rd-order CPPLLs. The panels show: a 2nd-order CPPLL with R_z = 4×10⁶, C_1 = 1.56×10⁻¹⁴, K_PDCP = 4×10⁻³, K_VCO = 1×10⁵; a 3rd-order CPPLL with R_z = 1×10⁶, C_1 = 1×10⁻¹³, C_2 = 1×10⁻¹⁴, K_PDCP = 50×10⁻³, K_VCO = 1×10⁵; a 3rd-order CPPLL with R_z = 1×10⁵, C_1 = 1×10⁻¹³, C_2 = 1×10⁻¹⁴, K_PDCP = 100×10⁻³, K_VCO = 1×10⁵; and the steady-state VCO output clock of the latter 3rd-order CPPLL.

5.4.5 Simulation of the dual-loop CDR structure

The discussion in the previous sections suggests that the dual-loop CDR (shown in figure 5-39) is potentially a promising burst-mode CDR solution because: (1) during the preamble of a newly arrived burst, the CPDLL can rapidly respond to the input burst and recover the clock phase, and (2) the CPPLL can gradually pull the VCO frequency to the data rate of the input burst.


Figure 5.39: Dual-loop CDR architecture (the input data burst drives both loops; the recovered clock is taken at the VCDL output).


Note that the third-order loop is chosen for the CPPLL because of its better clock quality, as discussed above. In the CPDLL, the VCDL is initially set to the middle of its operation (delay) range when a new burst arrives, and the delay range is long enough to accommodate the phase accumulation due to frequency deviation between the TX and the RX. Below we design a practical dual-loop CDR architecture and demonstrate by simulation that it can (1) rapidly recover the clock, (2) precisely track the frequency, and (3) provide good jitter tolerance.

Simulation scenario:
1. Frequency deviation: -1000 ppm (TX data rate = 10 Gbps, RX VCO free-running frequency = 9.999 GHz)
2. Initial phase deviation: local clock leads the input data by 0.008 ns (0.08 UI)
3. VCDL operation range: 0.4 ns (4 bits)
4. VCDL initial delay: 0.2 ns (at the center of the delay range)
5. Data burst length: 10,000 bits
6. Burst payload: pseudo-random bit stream generated by MATLAB

Design parameters:
1. CPDLL: K = 0.05, K_PD,CP = 5×10⁻³, K_VCDL = 1
2. CPPLL: R_z = 1×10⁵ Ohm, C_1 = 10×10⁻¹² F, C_2 = 0.1×10⁻¹² F, K_VCO = 1×10⁵

The performance of the dual-loop CDR is demonstrated in terms of (1) the control voltage of the VCDL (V_ctrl,DLL), (2) the phase alignment at different points in the packet, and (3) the control voltage of the VCO (V_ctrl,PLL).

Performance simulations

Figure 5-40(a) shows the control voltage of the VCDL (V_ctrl,DLL) of the dual-loop CDR structure upon receiving the input burst. Figure 5-40(b) shows V_ctrl,DLL of a single-CPDLL CDR with the same design parameters when the RX and TX reference clock frequencies are equal, and figure 5-40(c) shows V_ctrl,DLL of a single-CPDLL CDR with the same design parameters when the RX deviates from the TX reference clock frequency by -1000 ppm. Comparing figures 5-40(a) and 5-40(b), we see that the dual-loop CDR structure recovers the clock phase within the preamble (4 ns), as fast as the single-CPDLL CDR. In figure 5-40(c), V_ctrl,DLL, and thus the excess phase of the CPDLL output clock, keeps decreasing to track the input data phase because of the frequency deviation; in the dual-loop structure, by contrast, the CPPLL gradually pulls the RX clock frequency toward the input burst data rate during the payload until they are equal. As a result, the dual-loop CDR technique eliminates the risk of exhausting the delay range of the VCDL.
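A back-of-the-envelope check (assuming the phase simply drifts at Δf/f per bit when the frequency is not pulled in) illustrates the delay-range issue that the CPPLL removes:

```python
# How long would an unaided CPDLL survive in the stated scenario?
bit_period_ns = 0.1                     # 10 Gbps
ppm = 1000                              # TX/RX frequency deviation in the scenario
drift_per_bit_ns = ppm * 1e-6 * bit_period_ns
headroom_ns = 0.2                       # from the 0.2 ns initial delay to either rail of the 0.4 ns range
bits_to_rail = headroom_ns / drift_per_bit_ns
print(f"Unaided CPDLL would hit the VCDL rail after ~{bits_to_rail:.0f} bits "
      f"({bits_to_rail * bit_period_ns:.0f} ns), well before the 10,000-bit burst ends.")
```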

Figure 5.40: The VCDL control voltages under different conditions: (a) V_ctrl,DLL response of the dual-loop CDR; (b) V_ctrl,DLL response of a single CPDLL when the RX reference clock equals the input burst data rate; (c) V_ctrl,DLL response of a single CPDLL when the RX deviates from the TX reference clock by -1000 ppm.

Figure 5-41 presents the phase relationship between the input data burst and the VCDL output clock at different moments of interest: (1) at the data burst arrival (figure 5-41(a)), (2) in the middle of the preamble (figure 5-41(b)), (3) in the middle of the data burst (figure 5-41(c)), and (4) at the end of the data burst (figure 5-41(d)). The red line indicates the center of the data bit. As shown in figure 5-41(a), within the preamble period (4 ns long), the VCDL output clock is aligned well with the data bit center around 15 ns. Afterwards the clock remains well aligned with the input data burst, even in the presence of the frequency tracking conducted by the CPPLL, until the end of the burst.

Figure 5-42 shows the response of the VCO control voltage (V_ctrl,PLL). The VCO frequency is gradually pulled to the input data rate after approximately 1000 bits. Note that if a line coding scheme such as 8B/10B is used, the frequency locking can be made even faster.

Jitter Tolerance of the dual-loop CDR structure

In optical communications systems, jitter tolerance is a key jitter metric for a CDR circuit: it is the ability of the recovered clock to track the input data bits in the presence of jitter. To track jittered input data, the CDR needs a feedback loop that continuously compares the phase of the recovered clock with that of the input data. Given their open-loop structures, the three CDR techniques introduced at the beginning of this chapter cannot provide jitter tolerance. The phase-locked dual-loop CDR, on the other hand, provides good jitter tolerance because of its phase-locked loops. Both the CPPLL and the CPDLL provide phase tracking capability; however, the CPDLL is designed to respond much faster than the CPPLL, so when the input phase varies due to jitter, the CPDLL's response dominates the output excess phase. As a result, the jitter tolerance is mainly determined by the CPDLL of the dual-loop CDR structure.

Quantitatively, jitter tolerance specifies how much input jitter a CDR circuit must tolerate without increasing the bit error rate (BER). In general, jitter tolerance is specified with a mask as a function of the jitter frequency; this mask delineates the maximum excess phase variation of the input data at each jitter frequency. Assume that the excess phase of the input data is φ_in and the excess phase of the recovered clock is φ_clk.


(a) Phase relationship between the input data bit and VCDL output upon burst arrival (0.4ns-0.5ns)

(b) Phase relationship between the input data bit and VCDL output in the middle of preamble (15.1ns-15.2ns)

(c) Phase relationship between the input data bit and VCDL output in the middle of the burst (around 500ns)

(d) Phase relationship between the input data bit and VCDL output at the end of the burst (around 970ns)

Figure 5.41: Input data and clock phase relationship at different moments of time



Figure 5.42: The response of the VCO control voltage.

In general, at a given jitter frequency the BER will not increase until the phase error approaches half a unit interval (UI). Thus an approximate condition for avoiding a BER increase is:

$$\phi_{in} - \phi_{clk} < 0.5\,\mathrm{UI} \qquad (5.33)$$

For the loop, the phase error is:

$$\phi_{in} - \phi_{clk} = \phi_{in}\left(1 - H(s)_{closed\text{-}loop}\right) \qquad (5.34)$$

Then

$$\phi_{in} < \frac{0.5\,\mathrm{UI}}{1 - H(s)_{closed\text{-}loop}} \qquad (5.35)$$

Hence we can express the jitter tolerance as

$$F_{JT}(\omega) = \frac{0.5\,\mathrm{UI}}{1 - H(s)_{closed\text{-}loop}} \qquad (5.36)$$

Substituting the closed-loop transfer function of the CPDLL into eq. 5.36, we derive

$$F_{JT}(s) = \frac{s + K\, K_{VCDL}}{2s} \qquad (5.37)$$

Figure 5-43 shows the jitter tolerance masks of the dual-loop CDR structure and of a typical 2nd-order PLL CDR for comparison. The |F_JT(ω)| of the dual-loop CDR structure falls at a -20 dB/decade rate up to the corner frequency ω_D, which equals K·K_VCDL. Since, for typical burst-mode applications, the product of K and K_VCDL is made high in order to achieve rapid clock recovery, the jitter tolerance is correspondingly enhanced.

Figure 5.43: Jitter tolerance masks (in unit intervals, versus jitter frequency on a log scale) of the dual-loop CDR structure and of a typical CPPLL; the dual-loop mask moves up as the loop gain increases and flattens out at 0.5 UI at high jitter frequencies.
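For illustration, the sketch below evaluates the mask of eq. 5.37 at a few jitter frequencies. The value chosen for K·K_VCDL is a placeholder, not a parameter taken from the design above.

```python
import numpy as np

# Jitter-tolerance mask of eq. 5.37: |F_JT(jw)| = |(jw + K*K_VCDL) / (2jw)| in UI.
K_KVCDL = 0.5e9                       # assumed product of K and K_VCDL (1/s)
w = np.logspace(6, 11, 6)             # jitter angular frequencies (rad/s)
mask_ui = np.abs((1j * w + K_KVCDL) / (2j * w))
for wi, m in zip(w, mask_ui):
    print(f"w = {wi:.1e} rad/s -> tolerated jitter ~ {m:.2f} UI")
# Below the corner at w_D = K*K_VCDL the mask grows at 20 dB/decade toward low
# jitter frequencies; far above the corner it flattens out at 0.5 UI.
```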

Summary

In this chapter, we reviewed, analyzed, and simulated three state-of-the-art burst-mode CDR techniques. In light of the shortcomings of the open-loop CDR designs, namely the free-running RX clock frequency and the lack of jitter tolerance, we proposed a dual-loop CDR structure as a promising burst-mode technique. This dual-loop CDR can recover a clock rapidly for data sampling, and the clock frequency can be locked to the input data rate. In addition, the feedback loop provides a tolerance to jitter that the open-loop burst-mode CDR techniques cannot achieve. An exemplar design of the dual-loop CDR structure was presented and its performance simulated: the results show that the design can recover a clock in less than 4 ns (40 bits at 10 Gbps) and lock the clock frequency to the input data rate within 600 ns. The jitter tolerance of the dual-loop CDR structure was also investigated and analyzed mathematically.

Chapter 6

Conclusion

This dissertation focuses on hybrid optical and wireless access networks that provide a broadband, scalable, cost-effective, and ubiquitous access solution. Under the heterogeneous network architecture, the wireless segment at the user end is enabled by a wireless mesh network based on multi-hop wireless communications. We proposed two novel optical backhaul networks: one aims to smoothly upgrade the wireless backhaul of a deployed wireless mesh network using commercial optical access techniques, and the other aims to enable next-generation hybrid access based on a clean-slate optical fiber grid deployment.

The first optical backhaul network employs TDM PON, a mature and cost-efficient optical access technology. The optical backhaul can be deployed to connect the wireless gateway routers of the wireless mesh network, or to upgrade the wireless backhaul of a hierarchical wireless access network; in the latter case, TDM PON allows seamless upgrade of wireless links at different layers. We developed a novel reconfigurable architecture based on tunable optical devices that achieves load balancing among different parts of the backhaul network. Simulation shows that this reconfigurable architecture outperforms a fixed architecture in terms of network bandwidth efficiency and performance through dynamic bandwidth allocation. An experimental testbed was built to demonstrate the feasibility for realistic applications. The key components, an advanced optical tunable filter and tunable lasers, are incorporated on



the testbed, and we developed and evaluated the handshaking protocol that facilitates the network reconfiguration.

The second optical backhaul network is part of the Grid Reconfigurable Optical Wireless Network (GROW-Net), a joint research project between the Photonics and Networking Research Lab and the Wireless Communications Research Group at the Electrical Engineering Department of Stanford University. The fiber grid structure is deployed evenly to distribute traffic to, and collect traffic from, the wireless gateway routers, which are widely scattered across a metropolitan area; it can provide broadband, scalable, blanket-coverage, and cost-effective connectivity to the wireless mesh network. As the fundamental unit of the optical grid structure, we focused on the design of the grid unit that directly interfaces with the wireless mesh network. Within the grid unit, we employ a fast tunable laser and reflective optical devices to achieve centralized control, resource sharing, and ease of deployment and management. The multiplexing scheme based on tunable lasers provides statistical multiplexing gain that enhances the network performance compared with a conventional dedicated-transmitter scheme. An experimental testbed was also built for performance evaluation and demonstration of QoS capability; the enabling components, a fast tunable laser and a reflective semiconductor optical amplifier, are incorporated into the testbed to demonstrate the feasibility for realistic applications. An integrated routing algorithm was developed to enhance the wireless mesh network performance by leveraging the optical links. The simulation results show that the integrated routing algorithm outperforms shortest-hop routing in terms of throughput and packet delay by 25% by evenly allocating the load across the WMN.

Under the proposed optical backhaul networks, the optical burst-mode clock and data recovery circuit is a critical component. A novel feedback-based burst-mode CDR architecture is proposed. It consists of two feedback loops that cooperate to rapidly recover the clock for data sampling and to provide the jitter tolerance that existing feed-forward burst-mode CDR architectures cannot achieve.

Future Works

This thesis focuses on the optical backhaul design of the hybrid optical wireless access network. The wireless mesh network (WMN), although not covered in this thesis, is as


important as the optical backhaul. To improve the overall performance of the WMN, new PHY-layer technologies and MAC protocols are required. In the PHY layer, for example, smart antennas, multiple-input multiple-output (MIMO), and multi-channel/multi-interface systems are being explored to enhance network capacity. MAC protocols based on distributed time- and code-division multiple access are expected to improve the bandwidth efficiency over CSMA/CA protocols. Furthermore, since packets in a WMN are routed among mesh routers in the presence of interference, shadowing, and fading, a cross-layer design is required to optimize routing. To accommodate the projected growth in demand on Wi-Fi based WMNs, WMN vendors have proposed using WiMAX (IEEE 802.16) to enhance the capacity of the higher-layer links. Ultra-high-bandwidth standards such as the emerging IEEE 802.16m, which aims to provide 1 Gb/s and 100 Mb/s shared bandwidth for residential and mobile users respectively, can be employed to enhance the capacity further.

The optical backhaul under the GROW-Net architecture consists of two layers, as described in section 4.3: the intra-grid-unit layer (Layer 1) and the inter-grid-unit layer (Layer 2). This thesis focuses on several aspects of the intra-grid-unit layer, such as the network architecture, system implementation, transmission and QoS experiments, and the integrated routing algorithm. Although the foundation is laid, we still lack a Medium Access Control (MAC) protocol to manage traffic flows and allocate bandwidth among the optical terminals. The inter-grid-unit layer is not covered in this thesis; it requires further research to address the network architecture, system implementation, MAC protocol, network scalability, and routing paradigm. As for the integrated routing paradigm on the hybrid optical and wireless access network, although we proposed an integrated routing algorithm, many other issues remain to be addressed, such as end-to-end QoS, mobility control in the wireless segment, and cross-layer design spanning both the optical and wireless segments.

Bibliography
[1] L. Roberts, "The Arpanet and computer networks," Proceedings of the ACM Conference on the History of Personal Workstations, pp. 51-58, 1986.

[2] IEEE 802.3ah: Ethernet in the First Mile.

[3] ITU-T G.984: Gigabit-capable passive optical networks.

[4] R. Feldman, E. Harstead, S. Jiang, T. Wood, and M. Zirngibl, "An evaluation of architectures incorporating wavelength division multiplexing for broad-band fiber access," Journal of Lightwave Technology, vol. 16, issue 9, pp. 1546-1559, September 1998.

[5] IEEE Standard 802.16 (2004): Air Interface for Fixed Broadband Wireless Access Systems.

[6] IEEE Standard 802.16e: Air Interface for Fixed and Mobile Broadband Wireless Access Systems - Amendment for Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands.

[7] http://www.fsanweb.org/default.asp.

[8] C.-H. Lee, S.-M. Lee, K.-M. Choi, J.-H. Moon, S.-G. Mun, K.-T. Jeong, J. H. Kim, and B. Kim, "WDM-PON experiences in Korea," Journal of Optical Networking, vol. 6, no. 5, pp. 451-464, 2007.

[9] L. G. Kazovsky, W.-T. Shaw, D. Gutierrez, and S.-W. Wong, "Next-Generation Broadband Optical Access Networks," Journal of Lightwave Technology, vol. 25, issue 11, pp. 3428-3442, November 2007.


[10] IEEE 802.16's Relay Task Group, http://wirelessman.org/relay/index.html.

[11] G. Aggelou, Mobile Ad Hoc Networks: From Wireless LANs to 4G Networks. McGraw-Hill, 2004.

[12] I. Akyildiz, "A Survey on Wireless Mesh Networks," IEEE Radio Communications, vol. 43, issue 9, pp. S23-S30, September 2005.

[13] R. Bruno, M. Conti, and E. Gregori, "Mesh Networks: Commodity Multihop Ad Hoc Networks," IEEE Communications Magazine, vol. 43, issue 3, pp. 123-131, March 2005.

[14] IEEE 802.11s Task Group, http://www.ieee802.org/11/Reports/tgs_update.htm.

[15] H. Aoki, A. Takeda, K. Yaggu, and A. Yamada, "IEEE 802.11s Wireless LAN Mesh Network Technology," NTT DOCOMO Technical Journal, vol. 8, no. 2, pp. 13-21, 2006.

[16] J. Bicket, D. Aguayo, S. Biswas, and R. Morris, "Architecture and Evaluation of an Unplanned 802.11b Mesh Network," Conference on Mobile Computing and Networking, pp. 31-42, August 2005.

[17] Tropos Networks, http://www.tropos.com/.

[18] Belair Networks, http://www.belairnetworks.com/.

[19] Tropos WiMAX Strategy, http://www.tropos.com/pdf/tropos_wimax_strategy.pdf.

[20] IEEE 802.16 Task Group m, http://www.ieee802.org/16/tgm.

[21] T. Clausen and P. Jacquet, "Optimized link state routing protocol (OLSR)," Internet Engineering Task Force (IETF), RFC 3626, October 2003. http://www.ietf.org/rfc/rfc3626.txt.

[22] C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc on-demand distance vector (AODV) routing," Internet Engineering Task Force (IETF), RFC 3561, July 2003. http://www.ietf.org/rfc/rfc3561.txt.


[23] R. Draves, J. Padhye, and B. Zill, "Comparison of routing metrics for static multi-hop wireless networks," ACM Special Interest Group on Data Communications (SIGCOMM), pp. 133-144, August 2004.

[24] R. Draves, J. Padhye, and B. Zill, "Routing in multi-radio, multi-hop wireless mesh networks," ACM International Conference on Mobile Computing and Networking (MOBICOM), pp. 114-128, 2004.

[25] D. Couto, D. Aguayo, J. Bicket, and R. Morris, "A high-throughput path metric for multi-hop wireless routing," International Conference on Mobile Computing and Networking, vol. 11, no. 4, pp. 134-146, 2003.

[26] L. Iannone and S. Fdida, "MRS: A simple cross-layer heuristic to improve throughput capacity in wireless mesh networks," Conference on Future Networking Technologies (CoNEXT), October 2005.

[27] C. Perkins and P. Bhagwat, "Highly dynamic destination-sequenced distance-vector (DSDV) routing for mobile computers," ACM SIGCOMM Computer Communication Review, vol. 24, issue 4, pp. 234-244, October 1994.

[28] D. Johnson and D. Maltz, Mobile Computing, vol. 353, pp. 153-181. Springer US, 1996.

[29] P. Gupta and P. Kumar, "The capacity of wireless networks," IEEE Transactions on Information Theory, vol. 46, pp. 388-404, March 2002.

[30] J. Jun and M. L. Sichitiu, "The Nominal Capacity of Wireless Mesh Networks," IEEE Wireless Communications, vol. 10, issue 5, pp. 8-14, October 2003.

[31] J. Broch, D. A. Maltz, D. B. Johnson, Y.-C. Hu, and J. Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," International Conference on Mobile Computing and Networking, pp. 85-97, 1998.

[32] J. Li, C. Blake, D. S. J. D. Couto, H. I. Lee, and R. Morris, "Capacity of Ad Hoc Wireless Networks," ACM International Conference on Mobile Computing and Networking, July 2001.


[33] San Francisco wireless access network, http://www.sfgov.org/site/tech_connect_page.asp.

[34] Motorola Canopy System, http://motorola.canopywireless.com/.

[35] GigaLink Millimeter Wave Point-to-Point Link System, http://www.connectronics.com/proxim/GigaLink.htm.

[36] Terescope Free Space Optical Transport System, http://www.mrv.com/opticaltransport/terescope/.

[37] A. Koonen, M. G. Larrod, A. Ng'oma, K. Wang, H. Yang, Y. Zheng, and E. Tangdiongga, "Perspectives of Radio over Fiber Technologies," Optical Fiber Communication/National Fiber Optic Engineers Conference, 2008.

[38] W.-T. Shaw, S.-W. Wong, N. Cheng, and L. G. Kazovsky, "MARIN Hybrid Optical-Wireless Access Network," Optical Fiber Communication Conference, March 2007.

[39] W.-T. Shaw, S.-W. Wong, N. Cheng, K. Balasubramanian, M. M. X. Zhu, and L. G. Kazovsky, "Hybrid Architecture and Integrated Routing in a Scalable Optical-Wireless Access Network," Journal of Lightwave Technology, vol. 25, issue 11, pp. 3443-3451, November 2007.

[40] G. Kramer and G. Pesavento, "Ethernet Passive Optical Network (EPON): Building a Next-Generation Optical Access Network," IEEE Communications, vol. 40, pp. 66-73, February 2002.

[41] T. Chen, H. Woesner, Y. Ye, and I. Chlamtac, "Wireless Gigabit Ethernet Extension," Broadband Networks, vol. 1, pp. 425-433, 2005.

[42] Y.-L. Hsueh, W.-T. Shaw, L. G. Kazovsky, A. Agata, and S. Yamamoto, "SUCCESS PON Demonstrator: Experimental Exploration of Next-Generation Optical Access Networks," IEEE Optical Communications, vol. 43, issue 8, pp. S26-S33, August 2005.


[43] J. Buus and E. Murphy, "Tunable Lasers in Optical Networks," Journal of Lightwave Technology, vol. 24, issue 1, pp. 5-11, January 2006.

[44] C. J. Chang-Hasnain, "Tunable VCSEL," IEEE Journal on Selected Topics in Quantum Electronics, vol. 6, pp. 978-987, November/December 2000.

[45] R. Chen, D. A. B. Miller, K. Ma, and J. S. Harris, Jr., "Novel Electrically Controlled Rapidly Wavelength Selective Photodetection using MSMs," IEEE Journal on Selected Topics in Quantum Electronics, vol. 11, pp. 184-189, January/February 2005.

[46] Teknovus EPON OLT, http://www.teknovus.com/files/TK3722%20PBl.pdf.

[47] W.-T. Shaw, D. Gutierrez, K. S. Kim, N. Cheng, S.-W. Wong, and S.-H. Yen, "Grid Reconfigurable Optical Wireless Network (GROW-Net) - A New Hybrid Optical Wireless Access Network Architecture," Joint Conference on Information Sciences (JCIS), 2006.

[48] W.-T. Shaw, S.-W. Wong, S.-H. Yen, and L. G. Kazovsky, "An Ultra-Scalable Broadband Architecture for Municipal Hybrid Wireless Access Using Optical Grid Network," Optical Fiber Communication Conference, March 2009.

[49] W.-T. Shaw, G. Kalogerakis, S.-W. Wong, Y.-L. Hsueh, N. Cheng, S.-H. Yen, M. E. Marhic, and L. G. Kazovsky, "MARIN: Metro-Access Ring Integrated Network," Global Communications Conference, 2006.

[50] JDSU Circulator, http://www.jdsu.com/products/optical-communications/products/passive-components-and-modules/circulators/circulator-compact-high-power.html.

[51] JDSU DWDM filter, http://www.jdsu.com/products/optical-communications/products/passive-components-and-modules/couplers-spliters-wdms/wdm-filter-100-ghz-itu-component.html.


[52] R. J. Rigole, M. Shell, S. Nilsson, D. J. Blumenthal, and E. Berglind, "Fast wavelength switching in a widely tunable GCSR laser using a pulse pre-distortion technique," Optical Fiber Communication Conference Technical Digest, vol. 6, March 1997.

[53] Y. Fukashiro, K. Shrikhande, M. Avenarius, M. Rogge, I. White, D. Wonglumsom, and L. Kazovsky, "Fast and fine wavelength tuning of a GCSR laser using a digitally controlled driver," Optical Fiber Communication Conference, vol. 2, pp. 338-340, 2000.

[54] K. Shrikhande, I. White, M. Rogge, F.-T. An, A. Srivatsa, E. Hu, S.-H. Yam, and L. Kazovsky, "Performance demonstration of a fast-tunable transmitter and burst-mode packet receiver for HORNET," Optical Fiber Communication Conference, vol. 4, 2001.

[55] C.-K. Chan, K. Sherman, and M. Zirngibl, "A fast 100-channel wavelength-tunable transmitter for optical packet switching," IEEE Photonics Technology Letters, vol. 13, issue 7, pp. 729-731, 2001.

[56] J. Simsarian, A. Bhardwaj, J. Gripp, K. Sherman, Y. Su, C. Webb, L. Zhang, and M. Zirngibl, "Fast switching characteristics of a widely tunable laser transmitter," IEEE Photonics Technology Letters, vol. 15, issue 8, pp. 1038-1135, 2003.

[57] B. Mason, J. Barton, G. Fish, L. Coldren, and S. Denbaars, "Design of sampled grating DBR lasers with integrated semiconductor optical amplifiers," IEEE Photonics Technology Letters, vol. 12, issue 7, pp. 762-764, July 2000.

[58] H. C. Shin, J. S. Lee, I. K. Tun, S. W. Kim, H. I. Kim, H. S. Shin, S. T. Hwang, Y. J. Oh, Y. K. Oh, and C. S. Shim, "Reflective SOAs optimized for 1.25 Gbit/s WDM PONs," Laser and Electro-Optics Society, vol. 1, pp. 308-309, November 2004.

[59] P. Chanclou, F. Payoux, T. Soret, N. Genay, R. Brenot, F. Blache, M. Coix, J. Landreau, O. Legouezigou, and F. Mallecot, "Demonstration of RSOA-based remote modulation at 2.5 and 5 Gbit/s for WDM PON," Optical Fiber Communication Conference, March 2007.


[60] T. Briant, P. Grangier, R. Tualle-Brouri, A. Belleman, R. Brenot, and B. Thedrez, "Accurate determination of the noise figure of polarization-dependent optical amplifiers," Journal of Lightwave Technology, vol. 24, pp. 1499-1503, March 2006.

[61] P. J. Urban, A. M. J. Koonen, G. D. Khoe, and H. de Waardt, "Rayleigh backscattering suppression in a WDM access network employing a reflective semiconductor optical amplifier," Laser and Electro-Optics Society, vol. 1, pp. 147-150, November 2004.

[62] J. Padhye, S. Agarwal, V. N. Padmanabhan, L. Qiu, A. Rao, and B. Zill, "Estimation of Link Interference in Static Multi-hop Wireless Networks," Internet Measurement Conference, pp. 305-310, 2005.

[63] Network Simulator 2 (NS2), http://www.isi.edu/nsnam/ns/.

[64] B. Razavi, Design of Integrated Circuits for Optical Communications. McGraw-Hill, 1st ed., 2002.

[65] Y. Yamada, S. Mino, and K. Habara, "Ultra-fast clock recovery for burst-mode optical packet communication," Optical Fiber Communication Conference, vol. 1, pp. 114-116, 1999.

[66] Y. Ota, R. Swartz, V. D. Archer, S. Korotky, M. Banu, and A. Dunlop, "High-speed, burst-mode, packet-capable optical receiver and instantaneous clock recovery for optical bus operation," Journal of Lightwave Technology, vol. 12, issue 2, pp. 325-331, 1994.

[67] I. White, E. S.-T. Hu, Y.-L. Hsueh, K. Shrikhande, M. Rogge, and L. Kazovsky, "Demonstration and system analysis of the HORNET architecture," Journal of Lightwave Technology, vol. 21, issue 11, pp. 2489-2498, 2003.


[68] A. X. Widmer and P. A. Franaszek, "A DC-Balanced, Partitioned-Block, 8B/10B Transmission Code," IBM Journal of Research and Development, vol. 27, no. 5, p. 440, 1983.

[69] C. R. Hogge, "A Self-Correcting Clock Recovery Circuit," IEEE Journal of Lightwave Technology, vol. 3, pp. 1312-1314, December 1985.
