
COMMENTARY

SOFTWARE DEFINED NETWORKING OPPORTUNITIES FOR TRANSPORT


BY DAVE MCDYSAN, VERIZON

ABSTRACT
The economic landscape, in which the trends of computing and networking are diverging, creates motivation for new engineering paradigms. One emerging paradigm is software defined networking (SDN), whose attributes include: separation of control from forwarding; joint optimization of computing and networking deployments; software-defined functions at a number of layers, including the transport (i.e., optical and TDM) layer; simpler interworking of different administrative and technology domains; and application-aware allocation of transport network resources. This column provides background on the aspects of SDN related to transport networking, summarizes major drivers, trends, and issues to be addressed, and concludes by discussing areas of potential research and development for transport-related future SDN work.

INTRODUCTION
Software defined networking (SDN) is an acronym that has been assigned a number of interpretations since its earliest mention in the literature [1]. This column focuses on the following aspects of SDN relevant to transport networking:
- Separation of control from forwarding, using abstractions of the forwarding technology to enable new or improved network and service structures [1, 2]
- Extraction of simplicity, instead of managing the ever increasing complexity of networking [2]
- Support for logically centralized control implementations that are more efficient than current distributed system implementations and overcome the limitations of these legacy systems [3, 4]
- Software-definable forwarding (modulation/demodulation, encoding/decoding, etc.); much work is occurring on software-defined radio technology, and some of it may be applicable to software-defined optical technology
- Intelligent interconnection and management of transport networks in different domains, using dynamically controlled and coordinated multilayer network capabilities to meet service needs [5]
- Cross stratum optimization in complex multilayer systems that include computing, storage, and application resources as well as transport and packet networks [4]
- Simple implementation of software-defined transponders and of flexible grid and elastic networking
- Virtualized, partitioned network views offered to service provider customers so that they can dynamically signal bandwidth and other requirements, via an overlay and/or interworking with traditional control and management plane protocols

DRIVERS FOR SOFTWARE-DEFINED TRANSPORT NETWORKING


A number of drivers for SDN-based transport networking arise from trends in the relative price-performance of computing and networking technologies, from better matching the network to user traffic patterns, and from using direct transport connections when warranted for cloudbursting-style services.

The price to achieve the same level of throughput and performance for processing and storage is declining much faster than the price of routing and transmission. The price-performance of electronic routing and switching is improving more rapidly than that of transport systems, which is improving at the slowest rate of all. Transport system price-performance improves more slowly than the other technologies for a number of reasons: the complexity of modulation, encoding, and impairment compensation at very high line rates (e.g., 100 Gb/s); the need to aggregate traffic onto fewer transport routes to achieve the lower unit cost of higher line rates; and the lower overall deployed volume of interface capacity compared with the processing, storage, and routers/switches deployed in data centers, where each packet may traverse many devices.

These trends motivate deploying the processing and storage that implement middlebox and/or server-based services in more distributed network locations, since this decreases the overall cost of providing services compared with carrying traffic back and forth over an indirect route to a regionalized data center [6]. Furthermore, placing processing and shared storage closer to users can achieve the very low latency needed for highly interactive applications.

Since transport system price-performance is improving at the slowest rate, there is a tremendous need to make transmission systems more responsive to application traffic patterns and service provider needs, and to reduce the number of transport components in the network. Regular time-of-day and day-of-week usage patterns are seen for specific classes of users and access networks. For example, enterprise users generate most traffic during weekdays and normal business hours, while consumers typically generate most traffic during nights and weekends. This means that time-of-day sharing between enterprise and consumer usage patterns is possible.

Rearranging transport network topologies to interconnect networking and computing more economically, based on time of day and day of week, to better serve these predictable phases of behavior could significantly reduce overall networking cost. The economics of virtualization will compel enterprises to outsource computing needs to network-based cloud and virtual private network (VPN) providers. Transferring computing state and storage to other locations in a reasonable timeframe requires commensurate bandwidth for the duration of such transfers, and transport communication should be the least expensive networking method when a large amount of relatively constant traffic must be carried in a point-to-point or broadcast manner.

A difficulty for many applications is accurately determining the network parameters (e.g., bandwidth, and quality measures such as latency and loss) needed to meet higher-level service objectives. However, signaling such parameters for every session has serious scaling challenges. Instead, network traffic engineering is usually done at an aggregate level, providing statistical guarantees for quality. What is needed is a closer tie between application needs and the combination of resources (e.g., transport networking, electronic networking, computing, and storage). Time-of-day and time zone optimization, where non-real-time applications are shifted to other locations with spare computing and network capacity and/or different power usage rates, can also be employed to reduce the cost of providing services [6].
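As a rough illustration of the time-of-day sharing argument above, the following sketch compares the transport capacity needed when each traffic class is provisioned to its own peak against a shared pool sized to the aggregate peak. The hourly profiles and all figures are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: time-of-day capacity sharing between an
# enterprise profile (peaks midday) and a consumer profile (peaks
# in the evening). All figures are invented for illustration.

def dedicated_capacity(profiles):
    """Total capacity when each class is provisioned to its own peak."""
    return sum(max(hours) for hours in profiles.values())

def shared_capacity(profiles):
    """Capacity of a single shared pool sized to the aggregate peak."""
    return max(sum(p[h] for p in profiles.values()) for h in range(24))

# Gb/s demand for each hour of the day (hours 0-23)
profiles = {
    "enterprise": [2] * 8 + [10] * 9 + [2] * 7,   # busy 08:00-16:59
    "consumer":   [3] * 17 + [9] * 5 + [3] * 2,   # busy 17:00-21:59
}

dedicated = dedicated_capacity(profiles)  # 10 + 9 = 19 Gb/s
shared = shared_capacity(profiles)        # aggregate peak = 13 Gb/s
print(f"dedicated={dedicated} Gb/s, shared={shared} Gb/s")
```

Under these made-up profiles, sizing one rearrangeable pool to serve whichever class is busy cuts required capacity from 19 to 13 Gb/s, roughly 30 percent, which is the kind of saving topology rearrangement targets.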


IEEE Communications Magazine March 2013

Figure 1. Example for coordinating virtual networking and computing: a) packet networking operation; b) SDN-based multilayer networking.

EXAMPLES OF SDN TRANSPORT NETWORKING USE CASES


This section provides illustrative examples and references to related publications that provide greater detail. The use cases covered are coordinated networking and computing for a cloudbursting-style service, multilayer and cross-stratum (computing and networking) optimization, and multidomain and multilayer networking optimization.

COORDINATED NETWORKING AND COMPUTING FOR CLOUDBURSTING


Figure 1 illustrates a simple example environment for coordinating networking and computing between an enterprise data center connected via a packet and transport network to two service provider cloud data centers. Cloudbursting describes an event in which an enterprise temporarily augments its data center capacity (e.g., processing, storage, software) by purchasing this capacity in a cloud provider data center. What is missing in cloudbursting today is efficient coupling of the network and its properties with the computing environment [7, 8]. Cloudbursting is desirable when large amounts of data need to be moved to the cloud for initial service activation (e.g., moving the customer's database and/or virtual machine [VM] instances into the cloud), to support migration of the cloud service for resource optimization, or to implement high-bandwidth storage replication.

As shown in Fig. 1a, packet networks normally interconnect customer applications located in private enterprise data centers to cloud provider data centers. These packet networks may offer bandwidth-on-demand features, which are useful when the bandwidth is relatively low (e.g., n x 1GE). However, when the capacity required for communication between enterprise and cloud data centers becomes large (e.g., 10GE), a more efficient solution could be to use dynamically configured transport connections instead of packet networking, as illustrated in Fig. 1b. Additionally, there may be a choice between cloud provider data centers to meet customer application requirements for diversity and latency, and/or to select the cloud data center with available capacity. OpenFlow-based SDN can be used to implement such a networking scheme [7]. In some cases, in order to achieve improved efficiency and avoid blocking, it may be necessary to rearrange the underlying transport connectivity of the packet network, as shown in the figure. This requires the OpenFlow controller to work with traditional packet and transport management and control plane systems. What is needed is an integration of cloud computing with the multilayer networking bandwidth-on-demand features of heterogeneous networks.
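The decision logic in this use case (ride the packet network for modest demands, set up a dynamic transport connection for large ones, and pick a cloud data center that satisfies latency and capacity constraints) can be outlined as follows. The 10 Gb/s threshold, site names, and attributes are hypothetical assumptions, not part of any cited system.

```python
# Hypothetical sketch of the cloudbursting connectivity decision.
# Threshold and site data are illustrative assumptions only.

def choose_connectivity(demand_gbps, threshold_gbps=10.0):
    """Small demands ride the packet network's bandwidth-on-demand
    service; large, sustained demands justify a dynamically
    configured transport connection."""
    return "transport" if demand_gbps >= threshold_gbps else "packet"

def choose_site(sites, max_latency_ms, needed_gbps):
    """Among cloud data centers meeting the latency bound and having
    spare capacity, prefer the lowest-latency site."""
    feasible = [s for s in sites
                if s["latency_ms"] <= max_latency_ms
                and s["spare_gbps"] >= needed_gbps]
    return min(feasible, key=lambda s: s["latency_ms"]) if feasible else None

sites = [
    {"name": "cloud-dc-1", "latency_ms": 5,  "spare_gbps": 4},
    {"name": "cloud-dc-2", "latency_ms": 12, "spare_gbps": 40},
]
print(choose_connectivity(0.3))            # packet
print(choose_connectivity(10))             # transport
print(choose_site(sites, 20, 10)["name"])  # cloud-dc-2 (dc-1 lacks capacity)
```

A real controller would fold in diversity requirements and would also have to coordinate the transport setup with the packet layer, as the text notes, rather than choosing in isolation.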

GLOBAL OPTIMIZATION OF TRANSMISSION, SWITCHING, COMPUTING, AND STORAGE


Figure 2 shows a simple example of three data centers interconnected by packet and transport network elements. The data centers have a logical packet reachability requirement that is full mesh. Figure 2a shows how logical full mesh packet reachability is achieved by packet switches P.A, P.B, and P.C using transport connectivity between P.A and P.C and between P.A and P.B, over the corresponding transport switches T.A to T.C and T.A to T.B, respectively. Instead of drawing the full mesh of packet flows, the dots in packet switches P.A and P.B show how packet switching in these nodes implements the logical full mesh.

Similar to the cloudbursting scenario previously described, a service provider may need to markedly increase the capacity between specific data centers. In the example of Fig. 2b, there is a need for increased capacity between data centers A and C. In this example, the packet network connectivity is rearranged and full mesh logical packet reachability is maintained, but now a more efficient transport flow path can be dynamically configured connecting data centers A and C. The overall management system must make changes in the relevant equipment in data centers A and C to direct traffic onto this transport flow path.

Figure 2. Example for multilayer optimization of transport and packet switching: a) packet networking operation; b) SDN-based multilayer networking.

Either OpenFlow or a stateful path computation element (PCE) [9] approach could be used to implement this SDN-based networking paradigm. Of course, one would not want to rearrange the underlying connectivity of the packet switches too frequently, because this can cause packet reordering and potentially packet loss. A promising approach might be a stateful PCE [9] commanded by an external SDN control system to apply specific policies to label switched path (LSP) signaling messages, optimizing characteristics that a distributed signaling and routing system does not, such as the cases in this article and the cited references where the transport network below the packet network is rearranged to more efficiently serve specific traffic patterns.

Another use of the stateful PCE approach is in cross stratum optimization (CSO) across a network stratum (transmission, switching, PCE) and an application stratum (computing, software, and storage resources) [4]. Users interact with an application stratum controller, which can query and command one or more network stratum controllers. The network controllers may communicate with one or more PCEs, and there may also be communication between network controllers. Each network controller provides the application controller with an abstraction of the attributes of the underlying network layers and domains.
Combining this network-level information with knowledge of computing resources, the application controller is able to make globally optimized decisions regarding placement of compute and storage resources in specific data center sites along with the required multilayer networking capacity. A study based on a 14-node topology showed a reduction in blocking probability of 10 to 20 percent for wavelength provisioning for joint application and network optimization vs. a strategy that selected only the data center site with the lowest utilization [4].
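The difference between the two placement strategies compared in the cited study can be caricatured in a few lines: a utilization-only strategy picks the least-loaded data center regardless of the network, while a cross-stratum strategy scores each candidate jointly on compute load and network path cost. The scoring weights and site figures below are invented for illustration; this is not the algorithm of [4].

```python
# Hypothetical caricature of cross stratum optimization (CSO):
# joint compute-plus-network scoring vs. compute utilization alone.
# Weights and site data are invented; this is not the algorithm of [4].

def pick_utilization_only(sites):
    """Application stratum view only: choose the least-loaded data center."""
    return min(sites, key=lambda s: s["cpu_util"])

def pick_joint(sites, w_net=1.0, w_cpu=1.0):
    """Cross-stratum view: weigh the network path cost reported by the
    network controller together with compute utilization."""
    return min(sites,
               key=lambda s: w_cpu * s["cpu_util"] + w_net * s["path_cost"])

sites = [
    {"name": "dc-east", "cpu_util": 0.40, "path_cost": 0.10},
    {"name": "dc-west", "cpu_util": 0.30, "path_cost": 0.90},
]
print(pick_utilization_only(sites)["name"])  # dc-west (lowest CPU load)
print(pick_joint(sites)["name"])             # dc-east (better jointly)
```

The toy example shows how the utilization-only choice can land on a site that is cheap to run but expensive to reach, which is exactly the blocking the joint strategy in [4] reduces.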

MULTILAYER AND INTERDOMAIN OPTIMIZATION


Figure 3 shows a simple example illustrating multilayer and interdomain optimization for data center interconnection. In the drawing there are two domains (e.g., service providers, or organizations within an enterprise), each having its own set of interconnected data center, packet, and transport elements. Figure 3a shows a typical case where the transport network flows connect a set of packet switches in an underlay manner. As previously described, the packet switches implement logical full mesh packet reachability overlaid on these transport flows (the packet flows are not shown in the figure in the interest of clarity). In this example, a need arises to create a large amount of capacity between data centers B.1 and A.2. The packet network may not have sufficient capacity, or may not be able to support this demand economically. Figure 3b shows a potential solution to this problem: reconfiguring the transport underlay of the packet network to free up capacity and provide a direct transport (e.g., time-division multiplexing or lambda) connection between data centers B.1 and A.2.

Figure 3. Example for multilayer and interdomain optimization: a) packet networking; b) SDN-based multilayer networking.

Current interdomain multilayer routing and signaling standards do not support this capability. What is needed to achieve this result is a higher-level SDN-based interface that can perform resource discovery and configuration across multiple domains. Various standards bodies (e.g., the Internet Engineering Task Force and the Optical Internetworking Forum) have been working on extending packet-based routing (e.g., IGP) and signaling (e.g., RSVP-TE) standards to support transport-based profiles. This suite of standards is often referred to as generalized multiprotocol label switching (GMPLS) and has been implemented by a number of transmission vendors. However, the distributed routing and signaling GMPLS approach has some known suboptimal characteristics [3], such as inefficient bin packing, lack of determinism for LSP placement, potential for creating deadlocks, and lack of support for scheduling/calendaring the placement of traffic onto transmission and packet layer equipment. An important requirement for multilayer and interdomain networking is route selection [5], which could be implemented using a stateful PCE to specify explicit routes in a (G)MPLS routed network.
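At its core, a stateful PCE producing an explicit route over an abstracted multi-domain topology is performing a constrained shortest-path computation on the graph the network controllers expose. The toy graph below, with node names loosely echoing Fig. 3 and made-up link costs, sketches that core with plain Dijkstra; a real PCE would add traffic engineering constraints, policy, and PCEP signaling.

```python
import heapq

# Toy sketch: explicit-route computation over an abstracted two-domain
# topology. Node names and link costs are invented for illustration.

def compute_explicit_route(graph, src, dst):
    """Plain Dijkstra over the abstracted topology; returns (path, cost)."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Abstracted view: transport switches in domain 1 (T.A1, T.B1) and
# domain 2 (T.A2, T.B2), joined by a single inter-domain link.
graph = {
    "T.B1": [("T.A1", 1)],
    "T.A1": [("T.B1", 1), ("T.B2", 5)],   # inter-domain link
    "T.B2": [("T.A1", 5), ("T.A2", 1)],
    "T.A2": [("T.B2", 1)],
}
path, cost = compute_explicit_route(graph, "T.B1", "T.A2")
print(path, cost)  # ['T.B1', 'T.A1', 'T.B2', 'T.A2'] 7
```

The higher-level SDN interface the text calls for would supply this computation with per-domain abstractions and then install the resulting explicit route via (G)MPLS signaling, which is where the stateful PCE extensions of [9] come in.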

SUMMARY AND CONCLUSIONS


A revolution is occurring in the world of networking and computing. This revolution will not suddenly replace all of our existing network infrastructure; it will emerge as a living, evolving computing and networking overlay that interworks with the traditional control and management plane functions of the packet and transport networks deployed in enterprises, service providers, and data centers. In effect, the stateful PCE and OpenFlow approaches extract simplicity in the form of an abstraction that maps more directly to the real-world problems being solved. Further work is needed to more precisely define the needed abstractions, and to create software and standards that can leverage already deployed network elements and operational support systems. The focus should be on integration of cloud computing with the multilayer networking bandwidth-on-demand features of heterogeneous networks. As part of this effort, research, development, experimentation, and standardization will be needed. Initially, research on control paradigms beyond those of legacy routing and switching that are provably correct, stable, and scalable will be an important foundation for the future. Development using new or evolved protocols such as OpenFlow and stateful PCE, modern management interfaces, and higher-level application programming interfaces (APIs) to solve new problems and/or realize significant optimization as proof of concept prior to standardization is important in a software-defined world. Experimentation in laboratories and field trials is the next important step to ensure operability and manageability. For certain aspects, mature standards will be needed to ensure interoperability and reusability.

REFERENCES

[1] G. Goth, "Software-Defined Networking Could Shake Up More than Packets," IEEE Internet Computing, July/Aug. 2011.
[2] S. Shenker, "The Future of Networking, and the Past of Protocols," Open Networking Summit, Oct. 18, 2011, http://opennetsummit.org/talks/shenker-tue.pdf.
[3] E. Crabbe and V. Valancius, "SDN at Google: Opportunities for WAN Optimization," IETF 84, Vancouver, Aug. 1, 2012, http://www.ietf.org/proceedings/84/slides/slides-84-sdnrg-4.pdf.
[4] D. Dhody, Y. Lee, and H. Yang, "Cross Stratum Optimization (CSO) Enabled PCE Architecture," 10th IEEE Int'l. Symp. Parallel and Distributed Processing with Applications, Madrid, Spain, July 10-13, 2012.
[5] A. Isogai et al., "Global-Scale Experiment on Multi-Domain Software Defined Transport Network," 10th Int'l. Conf. Optical Internet, Yokohama, Japan, May 29-31, 2012.
[6] Intel, "Software-Defined Networking and Services on Intel Processors," 2012, http://www.intel.com/content/www/us/en/communications/communications-sw-defined-networking-paper.html.
[7] Open Networking Foundation, "OpenFlow-Enabled Hybrid Cloud Services Connect Enterprise and Service Provider Data Centers," Nov. 13, 2012, https://www.opennetworking.org/images/stories/downloads/solution-briefs/sb-hybrid-cloud-services.pdf.
[8] D. McDysan, "Cloudbursting Use Case," IETF Internet-Draft, work in progress, Oct. 2011.
[9] E. Crabbe et al., "PCEP Extensions for Stateful PCE," IETF Internet-Draft, work in progress, Jan. 22, 2012.

BIOGRAPHY
DAVID E. MCDYSAN [SM] (dave.mcdysan@verizon.com) received a B.S. degree in electrical engineering from Virginia Tech, and M.S. and D.Sc. degrees in electrical engineering and computer science from George Washington University, in 1976, 1979, and 1989, respectively. He is a Principal Member of Technical Staff in the Verizon network architecture department. His responsibilities and interests include cloud computing infrastructure, software infrastructure, IP and data services technology and standards, software-defined networking, and how these relate to the overall service and network architecture. He investigates new and emerging technologies, defines architectural approaches for them, interacts with other organizations to address important business aspects, and models the economic and performance advantages of new and refined architectures.
