Data Center LAN Migration Guide

Table of Contents
Chapter 1: Why Migrate to Juniper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Introduction to the Migration Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Audience. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Data Center Architecture and Guide Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Why Migrate?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Scaling Is Too Complex with Current Data Center Architectures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

The Case for a High Performing, Simplified Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Why Juniper? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Other Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14

Chapter 2: Pre-Migration Information Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Pre-Migration Information Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Technical Knowledge and Education. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

QFX3500. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Chapter 3: Data Center Migration—Trigger Events and Deployment Processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

How Migrations Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Trigger Events for Change and Their Associated Insertion Points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Considerations for Introducing an Alternative Network Infrastructure Provider. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

Trigger Events, Insertion Points, and Design Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

IOS to Junos OS Conversion Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Data Center Migration Insertion Points: Best Practices and Installation Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

New Application/Technology Refresh/Server Virtualization Trigger Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Network Challenge and Solutions for Virtual Servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

Network Automation and Orchestration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

Data Center Consolidation Trigger Event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Best Practices: Designing the Upgraded Aggregation/Core Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Best Practices: Upgraded Security Services in the Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Aggregation/Core Insertion Point Installation Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Consolidating and Virtualizing Security Services in the Data Center: Installation Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Business Continuity and Workload Mobility Trigger Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Best Practices Design for Business Continuity and HADR Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Best Practices Design to Support Workload Mobility Within and Between Data Centers . . . . . . . . . . . . . . . . . . . . . . . . . 48

Best Practices for Incorporating MPLS/VPLS in the Data Center Network Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Six Process Steps for Migrating to MPLS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Completed Migration to a Simplified, High-Performance, Two-Tier Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Juniper Professional Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


Chapter 4: Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Troubleshooting Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

OSI Layer 1: Physical Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

OSI Layer 2: Data Link Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Virtual Chassis Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

OSI Layer 3: Network Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

OSPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

VPLS Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Multicast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Quality of Service/Class of Service (CoS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

OSI Layer 4-7: Transport to Application Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Troubleshooting Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Chapter 5: Summary and Additional Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Additional Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Data Center Design Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Training Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Juniper Networks Professional Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

About Juniper Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66


Table of Figures
Figure 1: Multitier legacy data center LAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Figure 2: Simpler two-tier data center LAN design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Figure 3: Data center traffic flows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Figure 4: Collapsed network design delivers increased density, performance, and reliability. . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Figure 5: Juniper Networks 3:2:1 Data Center Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Figure 6: Junos OS - The power of one. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Figure 7: The modular Junos OS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Figure 8: Junos OS lowers operations costs across the data center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Figure 9: Troubleshooting with Service Now . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Figure 10: Converting IOS to Junos OS using I2J . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Figure 11: The I2J input page for converting IOS to Junos OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Figure 12: Inverted U design using two physical servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Figure 13: Inverted U design with NIC teaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Figure 14: EX4200 top-of-rack access layer deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Figure 15: Aggregation/core layer insertion point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

Figure 16: SRX Series platform for security consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Figure 17: Workload mobility alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Figure 18: Switching across data centers using VPLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Figure 19: Transitioning to a Juniper two-tier high-performance network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52



Chapter 1:
Why Migrate to Juniper


Introduction to the Migration Guide


IT has become integral to business success in virtually all industries and markets. Today’s data center is the centralized
repository of computing resources enabling enterprises to meet their business objectives. Today’s data center traffic
flows and performance requirements have changed considerably from the past with the advent of cloud computing
and service-oriented architecture (SOA)-based applications. In addition, increased mobility, unified communications,
compliance requirements, virtualization, the sheer number of connecting devices, and changing network security
boundaries present new challenges to today’s data center managers. Architecting data centers based on old traffic
patterns and outdated security models is inefficient and results in lower performance, unnecessary complexity,
difficulty in scaling, and higher cost.

A simplified, cloud-ready, two-tier data center design is needed to meet these new challenges—without any compromise in performance. Migrating to such a data center network can theoretically take place at any time. Practically speaking, however, most enterprises will not disrupt a production data center except for a limited time window to perform scheduled maintenance and business continuity testing. Fortunately, within this constraint, migration to a simpler two-tier design can begin at various insertion points in an existing legacy data center architecture and proceed in a controlled way.

Juniper’s Data Center LAN Migration Guide identifies the most common trigger events at which migration to a
simplified design can take place together with design considerations at each network layer for a successful migration.

The guide is segmented into two parts. For the business decision maker, Chapter 1: Why Migrate to Juniper will be most
relevant. The technical decision maker will find Chapters 2 and 3 most relevant, particularly Chapter 3, which covers
the data center “trigger events” that can stimulate a transition and the corresponding insertion points, designs, and
best practices associated with pre-install, install, and post-install tasks.

Audience
While much of the high-level information presented in this document will be useful to anyone making strategic
decisions about a data center LAN, this guide is targeted primarily to:

• Data center network and security architects evaluating the feasibility of new approaches in network design

• Data center network planners, engineers, and operators designing and implementing new data center networks

• Data center managers, IT managers, network and security managers planning and evaluating data center
infrastructure and security requirements

Data Center Architecture and Guide Overview


One of the primary ways to increase data center efficiency is to simplify the infrastructure. Most data center networks
in place today are based on a three-tier architecture. A simplified two-tier design, made possible by the enhanced
performance and more efficient packaging of today’s Ethernet switches, reduces cost and complexity, and increases
efficiency without compromising performance.

During the 1990s, Ethernet switches became the basic building block of enterprise campus network design. Networks
were typically built in a three-tier hierarchical tree structure to compensate for switch performance limitations. Each
tier performed a different function and exhibited different form factors, port densities, and throughputs to handle the
workload. The same topology was deployed when Ethernet moved into the data center displacing Systems Network
Architecture (SNA), DECnet, and token ring designs.


(Figure: a three-tier legacy network with WAN edge and data center interconnect above an Ethernet core, aggregation layer, and access layer; servers, NAS, and FC storage connect at the access layer, with Fibre Channel links into the FC SAN.)

Figure 1: Multitier legacy data center LAN

This multitiered architecture, shown in Figure 1, worked well in a client/server world where the traffic was primarily “north
and south,” and oversubscription ratios at tiers of the network closest to the endpoints (including servers and storage)
could be high. However, traffic flows and performance requirements have changed considerably with the advent of
applications based on SOA, increased mobility, Web 2.0, unified communications, compliance requirements, and the
sheer number of devices connecting to the corporate infrastructure. Building networks today to accommodate 5- to 10-year-old traffic patterns is not optimal, and results in lower performance, unnecessary complexity, and higher cost.

A new data center network design is needed to maximize IT investment and easily scale to support the new
applications and services a high-performance enterprise requires to stay competitive. According to Gartner,
“Established LAN design practices were created for an environment of limited switch performance. Today’s high-
capacity switches allow new design approaches, thus reducing cost and complexity in campus and data center LANs.
The three-tier concept can be discarded, because all switch ports can typically deliver rich functionality without
impacting performance.” 1

1. Neil Rikard, "Minimize LAN Switch Tiers to Reduce Cost and Increase Efficiency," Gartner Research, ID Number G00172149, November 17, 2009.


(Figure: a two-tier design with a core layer of EX8216 switches, MX Series routers, and SRX5800 gateways, and QFX3500 access switches connecting GbE servers, NAS, and FC storage, with Fibre Channel links into the FC SAN.)

Figure 2: Simpler two-tier data center LAN design

Juniper Networks offers a next-generation data center solution, shown in Figure 2, which delivers:

• Simplified design for high performance and ease of management

• Scalable services and infrastructure to meet the needs of a high-performance enterprise

• Virtualized resources to increase efficiency

This two-tier data center LAN architecture provides a more elastic and more efficient network that can also easily scale.

This guide covers the key considerations in migrating an existing three-tier data center network to a simplified, cloud-
ready, two-tier design. From a practical perspective, most enterprises won’t initiate a complete data center redesign
for an existing, operational data center. However, there are several events, such as bringing a new application or
service online or a data center consolidation, which require an addition to the existing data center infrastructure. We
call these common events at which migration can begin trigger events. Trigger events generate changes in design at a
given network layer, which we call an insertion point. In Chapter 3 of this guide, we cover the best practices and steps
involved for migration at each of the insertion points presented by a specific trigger event. By following these steps and
practices, it is possible to extend migration to other legacy network tiers and continue towards a simplified two-tier
Juniper infrastructure over time.

In summary, this Data Center LAN Migration Guide describes:

• Pre-migration information requirements

• Migration process overview and design considerations

• Logical migration steps and Juniper best practices for transitioning each network layer insertion point

• Troubleshooting steps

• Additional resources


Why Migrate?
IT continues to become more tightly integrated with business across all industries and markets. Technology is the
means by which enterprises can provide better access to information in near or real time to satisfy customer needs,
while simultaneously driving new efficiencies. However, today’s enterprise network infrastructures face growing
scalability, agility, and security challenges. This is due to factors such as increased collaboration with business
partners, additional workforce mobility, and the sheer proliferation of users with smart mobile devices requiring
constant access to information and services. These infrastructure challenges are seriously compounded when growth
factors are combined with the trend towards data center consolidation. What is needed is a new network infrastructure
that is more elastic, more efficient, and can easily scale.

Scalability is a high priority, as it is safe to predict that much of the change facing businesses today will come in the form of requirements for more storage, more processing power, and more flexibility.

Recent studies by companies such as IDC suggest that global enterprises will be focusing their investments and
resources in the next 5 to 10 years on lowering costs while continuing to look for new growth areas. Industry analysts
have identified several key data center business initiatives that align with these directions:

• Data center consolidation: Enterprises combine data centers as a result of merger or acquisition to reduce cost as
well as centralize and consolidate resources.

• Virtualization: Server virtualization is used to increase utilization of CPU resources, provide flexibility, and deliver
“on-demand” services that easily scale (currently the most prevalent virtualization example).

• Cloud computing: Pooling resources within a cloud provides a cost-efficient way to reconfigure, reclaim, and reuse
resources to deliver responsive services.

• I/O convergence or consolidation: Ethernet and Fibre Channel are consolidated over a single wire on the server side.

• Virtual Desktop Infrastructure (VDI): Applications are run on centralized servers to reduce operational costs and
also provide greater flexibility.

These key initiatives all revolve around creating greater data center efficiencies. While meeting these business
requirements, it is vital that efficient solutions remain flexible and scalable systems are easy to manage to maximize
all aspects of potential cost savings.

In today’s data center, applications are constantly being introduced, updated, and retired. Demand for services is
unpredictable and ever changing. Remaining responsive, and at the same time cost efficient, is a significant resource
management challenge, and adding resources needs to be a last resort since it increases the cost basis for service
production and delivery. Having the ability to dynamically reconfigure, reclaim, and reuse resources positions the data
center to effectively address today’s responsiveness and efficiency challenges.

Furthermore, existing three-tier architectures are built around a client/server model that is less relevant in today’s
application environment. Clearly, a new data center LAN design is needed to adapt to changing network dynamics,
overcome the complexity of scaling with the current multitiered architecture, as well as capitalize on the benefits of
high-performance platforms and a simplified design.


(Figure: network topologies should mirror the nature of the traffic they transport; up to 70% of data center traffic flows east-west between servers.)

Figure 3: Data center traffic flows

Applications built on SOA and those delivered in the software as a service (SaaS) model require an increasing number of interactions among servers in the data center. These technologies generate a significant amount of server-to-server traffic; in fact, up to 70% of data center LAN traffic is between servers. Additional server-to-server traffic may also be produced by the increased adoption of virtualization, where shared resources such as a server pool are used at greater capacity to improve efficiency. Today's network topologies need to mirror the nature of the traffic being transported.

Existing three-tier architectures were not designed to handle server-to-server traffic without going up and back through
the many layers of tiers. This is inherently inefficient, adding latency at each hop, which in turn impacts performance,
particularly for real-time applications like unified communications, or in industries requiring high performance such as
financial trading.

Scaling Is Too Complex with Current Data Center Architectures


Simply deploying ever more servers, storage, and devices in a three-tier architecture to meet demand significantly
increases network complexity and cost. In many cases, it isn’t possible to add more devices due to space, power,
cooling, or throughput constraints. And even when it is possible, it is often difficult and time-consuming to manage due
to the size and scope of the network. Even then, the approach is inherently inefficient: it has been estimated that as much as 50% of all ports in a typical data center are used for connecting switches to each other rather than for the more important work of interconnecting storage to servers and applications to users. Additionally, large Layer 2 domains using Spanning Tree Protocol (STP) are prone to failure and poor performance. This creates barriers to the efficient distribution of resources in the data center and fundamentally prevents fast, flexible network scale-out. Similarly, commonly deployed data center technologies like multicast don't perform at scale across tiers and devices in a consistent fashion.

Legacy security services may not easily scale and are often not efficiently deployed in a data center LAN due to the difficulty of incorporating security into a legacy, multitiered design. Security blades that are bolted into switches at the aggregation layer consume excessive power and space, impact performance, and don't protect virtualized resources. Another challenge of legacy security service appliances is their limited performance scalability, which may fall far below the throughput requirements of most high-performance enterprises consolidating applications or data centers. The ability to cluster firewalls as a single logical entity to increase scalability without added management complexity is another important consideration.

Proprietary systems may also limit further expansion with vendor lock-in to low performance equipment. Different
operating systems at each layer may add to the complexity to operate and scale the network. This complexity is costly,
limits flexibility, increases the time it takes to provision new capacity or services, and restricts the dynamic allocation of
resources for services such as virtualization.


The Case for a High Performing, Simplified Architecture


Enhanced, high-performance LAN switch technology can help meet these scaling challenges. According to Network World,
“Over the next few years, the old switching equipment needs to be replaced with faster and more flexible switches. This
time, speed needs to be coupled with lower latency, abandoning spanning tree and support for the new storage protocols.
Networking in the data center must evolve to a unified switching fabric.”2

New switching technology such as that found in Juniper Networks® EX Series Ethernet Switches has caught up to
meet or surpass the demands of even the most high-performance enterprise. Due to specially designed application-
specific integrated circuits (ASICs) which perform in-device switching functions, enhanced switches now offer high
throughput capacity of more than one terabit per second (Tbps) with numerous GbE and 10GbE ports, vastly improving
performance and reducing the number of uplink connections. Some new switches also provide built-in virtualization
that reduces the number of devices that must be managed, yet can rapidly scale with growth. Providing much greater
performance, enhanced switches also enable the collapsing of unnecessary network tiers—moving towards a new,
simplified network design. Similarly, scalable enhanced security devices can be added to complement such a design,
providing security services throughout the data center LAN.

A simplified, two-tier data center LAN design can lower costs without compromising performance. Built on high-
performance platforms, a collapsed design requires fewer devices, thereby reducing capital outlay and the operational
costs to manage the data center LAN. Having fewer network tiers also decreases latency and increases performance,
enabling wider support of additional cost savings and high bandwidth applications such as unified communications.
Despite having fewer devices, a simplified design still offers high availability (HA) with key devices being deployed in
redundant pairs and dual homed to upstream devices. Additional HA is offered with features like redundant switching
fabrics, dual power supplies, and the other resilient capabilities available in enhanced platforms.

(Figure: collapsing a multitier legacy network into a two-tier design increases density, performance, and reliability.)

Figure 4: Collapsed network design delivers increased density, performance, and reliability

2. Robin Layland/Layland Consulting, "10G Ethernet shakes Net Design to the Core/Shift from three- to two-tier architectures accelerating," Network World, September 14, 2009.


Two-Tier Design Facilitates Cloud Computing

By simplifying the design, by sharing resources, and by allowing for integrated security, a two-tier design also enables
the enterprise to take advantage of the benefits of cloud computing. Cloud computing delivers on-demand services to
any point on the network without requiring the acquisition or provisioning of location-specific hardware and software.
These cloud services are delivered via a centrally managed and consolidated infrastructure that has been virtualized.
Standard data center elements such as servers, appliances, storage, and other networking devices can be arranged in
resource pools that are shared securely across multiple applications, users, departments, or any other way they should
be logically shared. The resources are dynamically allocated to accommodate the changing capacity requirements of
different applications and improve asset utilization levels. This type of on-demand service and infrastructure simplifies
management, reduces operating and ownership costs, and allows services to be provisioned with unprecedented
speed. Reduced application and service delivery times mean that the enterprise is able to capitalize on opportunities
as they occur.

Achieving Power Savings and Operating Efficiencies

Fewer devices require less power, which in turn reduces cooling requirements, thus adding up to substantial
power savings. For example, a simplified design can offer more than a 39% power savings over a three-tier legacy
architecture. Ideally, a common operating system should be used on all data center LAN devices to reduce errors,
decrease training costs, ensure consistent features, and thus lower the cost of operating the network.

Consolidating Data Centers

Due to expanding services, enterprises often have more than one data center. Virtualization technologies like server
migration and application load balancing require multiple data centers to be virtually consolidated into a single, logical
data center. Locations need to be transparently interconnected with LAN interconnect technologies such as virtual
private LAN service (VPLS) to interoperate and appear as one.

All this is possible with a new, simplified data center LAN design from Juniper Networks. However, as stated earlier,
Juniper recognizes that it is impractical to flash migrate from an existing, operational, three-tier production data center
LAN design to a simpler two-tier design, regardless of the substantial benefits. However, migration can begin as a result
of any of the following trigger events:

• Addition of a new application or service

• Refresh cycle

• Server virtualization migration

• Data center consolidation

• Business continuity and workload mobility initiatives

• Data center core network upgrade

• Higher performance and scalability for security services

The design considerations and steps for initiating migration from any of these trigger events is covered in detail in
Chapter 3: Data Center Migration—Trigger Events and Deployment Processes.


Why Juniper?
Juniper delivers high-performance networks that are open to and embrace third-party partnerships to lower total cost
of ownership (TCO) as well as to create flexibility and choice. Juniper is able to provide this based on its extensive
investment in software, silicon, and systems.

• Software: Juniper’s investment in software starts with Juniper Networks Junos® operating system. Junos OS offers
the advantage of one operating system with one release train and one modular architecture across the enterprise
portfolio. This results in feature consistency and simplified management throughout all platforms in the network.

• Silicon: Juniper is one of the few network vendors that invests in ASICs which are optimized for Junos OS to
maximize performance and resiliency.

• Systems: The combination of the investment in ASICs and Junos OS produces high-performance systems that
simultaneously scale connectivity, capacity, and the control capability needed to deliver new applications and
business processes on a single infrastructure that also reduces application and service delivery time.

Juniper’s strategy for simplifying the data center network is called the 3-2-1 Data Center Network Architecture, which
eliminates layers of switching to “flatten” and collapse the network from today’s three-tier tree structure to two
layers, and in the future just one (see Figure 5). A key enabler of this simplification is achieved by deploying Juniper’s
Virtual Chassis fabric technology, which interconnects multiple physical switches to create a single, logical device that
combines the performance and simplicity of a switch with the connectivity and resiliency of a network. Organizations
can migrate from a three-tier to a two-tier network beginning with a Trigger Event such as adding a new POD or a
technology refresh. Migration Trigger Events will be presented in more detail in Chapter 3. Alternatively, they can
move directly into a Juniper-enabled data center fabric as it becomes available. Creating a simplified infrastructure
with shared resources and secure services delivers significant advantages over other designs. It helps lower costs,
increase efficiency, and keep the data center agile enough to accommodate any future business changes or technology
infrastructure requirements. The migration from an existing three-tier network to a flatter design, as articulated by the Juniper Networks 3-2-1 Data Center Network Architecture, is built on four core principles:

• Simplify the architecture: Consolidating legacy siloed systems and collapsing inefficient tiers results in fewer
devices, a smaller operational footprint, and simplified management from a “single pane of glass.”

• Share the resources: Segmenting the network into simple, logical, and scalable partitions with privacy, flexibility,
high performance, and quality of service (QoS) enables network agility to rapidly adapt to an increasing number of
users, applications, and services.

• Secure the data flows: Integrating scalable, virtualized security services into the network core provides benefits to all
users and applications. Comprehensive protection secures data flows into, within, and between data centers. It also
provides centralized management and the distributed dynamic enforcement of application and identity-aware policies.

• Automate network operations at each step: An open, extensible software platform reduces operational costs
and complexity, enables rapid scaling, minimizes operator errors, and increases reliability through a single network
operating system. A powerful network application platform with innovative applications enables network operators
to leverage Juniper or third-party applications for simplifying operations and scaling application infrastructure to
improve operational efficiency.


(Figure: the 3-2-1 evolution from a legacy three-tier data center, to a Juniper two-tier data center, to Juniper's data center fabric, with up to 75% of traffic flowing east-west.)

Figure 5: Juniper Networks 3:2:1 Data Center Network Architecture

Juniper’s data center LAN architecture embodies these principles and enables high-performance enterprises to build
next-generation, cloud-ready data centers. For information on Building the Cloud-Ready Data Center, please refer to:
www.juniper.net/us/en/solutions/enterprise/data-center.

Other Considerations
It is interesting to note that even as vendors introduce new product lines, the legacy three-tier architecture remains the reference architecture for data centers, and it retains the same limitations in terms of scalability and complexity.

Additionally, migrating to a new product line, even with an incumbent vendor, may require adopting a new OS,
modifying configurations, and replacing hardware. The potential operational impact of introducing new hardware
is a key consideration for insertion into an existing data center infrastructure, regardless of the platform provider.
Prior to specific implementation at any layer of the network, it is sound practice to test interoperability and feature
consistency in terms of availability and implementation. When considering a new platform, even from an incumbent vendor, any enterprise organization weighing migration from its existing platform should also evaluate moving towards a simpler, high-performing Juniper-based solution, which can deliver substantial incremental benefits. (See Chapter 3: Data Center Migration—Trigger Events and Deployment Processes for more details about introducing a second switching infrastructure vendor into an existing single-vendor network.)

In summary, migrating to a simpler data center design enables an enterprise to improve the end user experience and
scale without complexity, while also driving down operational costs.


Chapter 2:
Pre-Migration Information
Requirements


Pre-Migration Information Requirements


Migrating towards a simplified design is based on a certain level of familiarity with the following Juniper solutions:

• Juniper Networks Junos operating system

• Juniper Networks EX Series Ethernet Switches and MX Series 3D Universal Edge Routers

• Juniper Networks SRX Series Services Gateways

• Juniper Networks Network and Security Manager, STRM Series Security Threat Response Managers, and Junos Space
network management solutions

Juniper Networks Cloud-Ready Data Center Reference Architecture communicates Juniper’s conceptual framework and
architectural philosophy in creating data center and cloud computing networks robust enough to serve the range of
customer environments that exist today. It can be downloaded from: www.juniper.net/us/en/solutions/enterprise/
data-center/simplify/#literature.

Technical Knowledge and Education


This Migration Guide assumes some experience with Junos OS and its rich tool set, which will not only help simplify
the data center LAN migration but also ongoing network operations. A brief overview of Junos OS is provided in the
following section. Juniper also offers a comprehensive series of Junos OS workshops. Standardization of networking
protocols should ease the introduction of Junos OS into the data center since the basic constructs are similar. Juniper
Networks offers a rich curriculum of introductory and advanced courses on all of its products and solutions.

Learn more about Juniper’s free and fee-based online and instructor-led hands-on training offerings at:
www.juniper.net/us/en/training/technical_education.

Additional education may be required for migrating security services such as firewall and intrusion prevention system (IPS).

If needed, Juniper Networks Professional Services can provide access to industry-leading IP experts to help with all
phases of the design, planning, testing, and migration process. These experts are also available as training resources,
to help with project management, risk assessment, and more. The full suite of Juniper Networks Professional Services
offerings can be found at: www.juniper.net/us/en/products-services/consulting-services.

Junos OS Overview

Enterprises deploying legacy-based solutions today are most likely familiar with the number of different operating
systems (OS versions) running on switching, security, and routing platforms. This can result in feature inconsistencies, software instability, and time-consuming fixes and upgrades. It's not uncommon for a legacy data center to be running
many different versions of a switching OS, which may increase network downtime and require greater time, effort, and
cost to manage the network. From its beginning, Juniper set out to create an operating system that addressed these
common problems. The result is Junos OS, which offers one consistent operating system across all of Juniper’s routing,
switching, and security devices.


(Figure: the Junos OS portfolio spanning security platforms (SRX100 through the SRX5000 line), routers (J Series, LN1000, M Series, MX Series, T Series), and switches (EX2200 through EX8216 lines and the QFX3500), managed through NSM, NSMXpress, Junos Space, and Junos Pulse: one OS, one release track, one architecture.)

Figure 6: Junos OS - The power of one

Junos OS serves as the foundation of a highly reliable network infrastructure and has been at the core of the world’s
largest service provider networks for over 10 years. Junos OS brings the same carrier-class performance and reliability to enterprise data center LANs of any size. In addition, through open, standards-based protocols and an API, Junos OS can be customized to meet enterprise-specific requirements.

What sets Junos OS apart from other network operating systems is the way it is built: one operating system (OS)
delivered in one software release train, and with one modular architecture. Feature consistency across platforms and
one predictable release of new features ensure compatibility throughout the data center LAN. This reduces network
management complexity, increases network availability, and enables faster service deployment, lowering TCO and
providing greater flexibility to capitalize on new business opportunities.

Junos OS’ consistent user experience and automated tool sets make planning and training easier and day-to-day
operations more efficient, allowing for faster changes. Further, integrating new software functionality protects not just
hardware investments, but also an organization’s investment in internal systems, practices, and knowledge.

Junos OS Architecture

The Junos OS architecture is a modular design conceived for flexible yet stable innovation across many networking
functions and platforms. The architecture’s modularity and well-defined interfaces streamline new development and
enable complete, holistic integration of services.


(Figure: open management interfaces, including the CLI, NSM/Junos Space, J-Web, and the scripts toolkit, sit above a control plane of modular routing, interface, and management processes running over a common kernel; a services plane hosts service applications, and a data plane performs packet forwarding across the physical interfaces.)

Figure 7: The modular Junos OS architecture

The advantages of modularity reach beyond the operating system software’s stable, evolutionary design. For example,
the Junos OS architecture’s process modules run independently in their own protected memory space, so one module
cannot disrupt another. The architecture also provides separation between control and forwarding functions to support
predictable high performance with powerful scalability. This separation also hardens Junos OS against distributed
denial-of-service (DDoS) attacks. Junos operating system’s modularity is integral to the high reliability, performance,
and scalability delivered by its software design. It enables unified in-service software upgrade (ISSU), graceful Routing
Engine switchover (GRES), and nonstop routing.
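
To make this concrete, the following is a minimal configuration sketch (not taken from this guide) showing how GRES, nonstop routing, and synchronized commits are typically enabled on a dual Routing Engine platform; statement support varies by platform and Junos OS release, so verify against the documentation for your version.

    chassis {
        redundancy {
            /* GRES: the backup Routing Engine mirrors kernel and interface state */
            graceful-switchover;
        }
    }
    routing-options {
        /* NSR: routing protocol state is maintained through a Routing Engine switchover */
        nonstop-routing;
    }
    system {
        /* required with NSR so both Routing Engines always hold the same configuration */
        commit synchronize;
    }

A unified ISSU is then invoked from operational mode with request system software in-service-upgrade <package-name> on platforms that support it.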

Automated Scripting with Junoscript Automation

With Junoscript Automation, experienced engineers can create scripts that reflect their own organization’s needs and
procedures. The scripts can be used to flag potential errors in basic configuration elements such as interfaces and
peering. The scripts can also automate network troubleshooting and quickly detect, diagnose, and fix problems as
they occur. In this way, new personnel running the scripts benefit from their predecessors’ long-term knowledge and
expertise. Networks using Junoscript Automation can increase productivity, reduce OpEx, and increase high availability
(HA), since the most common reason for a network outage is operator error.

For more detailed information on Junos Script Automation, please see: www.juniper.net/us/en/community/junos.
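
As a simple illustration of the kind of automation described above, the following sketch enables a hypothetical commit script and an event policy that gathers diagnostic output when a link goes down; the script and policy names are invented for this example, and the SLAX script itself would be written separately and copied to the switch.

    system {
        scripts {
            commit {
                /* hypothetical script that warns when an interface has no description */
                file check-interface-descriptions.slax;
            }
        }
    }
    event-options {
        policy capture-on-link-down {
            events snmp_trap_link_down;
            then {
                /* automatically collect state the moment a link drops */
                execute-commands {
                    commands [ "show interfaces extensive" "show log messages" ];
                }
            }
        }
    }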


A key benefit of using Junos OS is lower TCO as a result of reduced operational challenges and improved operational
productivity at all levels in the network.

Critical categories of enterprise network operational costs (relative to a baseline across all network operating systems):

• Switch and router downtime costs: 27% lower with Junos OS (based on reduction in frequency and duration of unplanned network events)

• Switch and router maintenance and support costs: 54% lower with Junos OS (a "planned events" category)

• Switch and router deployment time costs: 25% lower with Junos OS (the "adding infrastructure" task)

• Unplanned switch and router events resolution costs: 40% lower with Junos OS (the time needed to resolve unplanned network events)

• Overall switch and router network operations costs: 41% lower with Junos OS (the combined total savings associated with planned, unplanned, planning and provisioning, and adding infrastructure tasks)

Multiple network operating systems diminish efficiency.3

Figure 8: Junos OS lowers operations costs across the data center

An independent commissioned study conducted by Forrester Consulting3 (www.juniper.net/us/en/reports/junos_tei.pdf) found that the use of Junos OS and Juniper platforms produced a 41% reduction in overall operations costs for network operational tasks including planning and provisioning, deployment, and planned and unplanned network events.

Juniper Platform Overview

The ability to migrate from a three-tier network design to a simpler two-tier design with increased performance,
scalability, and simplicity is predicated on the availability of hardware-based services found in networking platforms
such as the EX Series Ethernet Switches, MX Series 3D Universal Edge Routers, and the SRX Series Services Gateways.
A consistent and unified view of the data center, campus, and branch office networks is provided by Juniper’s “single
pane of glass” management platforms, including the recently introduced Junos Space.

The following section provides a brief overview of the capabilities of Juniper’s platforms. All of the Junos OS-based
platforms highlighted provide feature consistency throughout the data center LAN and lower TCO.

EX4200 Switch with Virtual Chassis Technology

Typically deployed at the access layer in a data center, the Juniper Networks EX4200 Ethernet Switch provides chassis-class high availability features and high-performance throughput in a pay-as-you-grow, one rack unit (1 U) switch. Depending on the size of the data center, the EX4200 may also be deployed at the aggregation layer. Offering flexible cabling options, the EX4200 can be located at the top of a rack or end of a row. Several different port configurations are available with each EX4200 switch, providing up to 48 wire-speed, non-blocking 10/100/1000 ports with full or partial Power over Ethernet (PoE). Despite its small size, this high-performance switch also offers multiple GbE or 10GbE uplinks to the core, eliminating the need for an aggregation layer. And because of its small size, it takes less space, requires less power and cooling, and costs less to deploy and spare.

Up to 10 EX4200 switches can be connected, configured, and managed as a single logical device through built-in Virtual Chassis technology. The actual number deployed in a single Virtual Chassis instance depends upon the physical layout of your data center and the nature of your traffic. Connected via a 128 Gbps backplane, a Virtual Chassis can be composed of EX4200 switches within a rack or row, or it can use a 10GbE connection anywhere within a data center or across data centers up to 40 km apart.
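
As an illustrative sketch (the values are assumptions, not prescriptions from this guide), a two-member Virtual Chassis with deterministic Routing Engine roles can be configured as follows; assigning the same highest priority to the intended master and backup prevents mastership preemption after a failed master returns.

    virtual-chassis {
        member 0 {
            /* intended master Routing Engine */
            mastership-priority 255;
        }
        member 1 {
            /* intended backup; equal priority avoids mastership preemption */
            mastership-priority 255;
        }
    }

For extended Virtual Chassis configurations that span racks, rows, or data centers, a 10GbE uplink can be converted into a Virtual Chassis port from operational mode with request virtual-chassis vc-port set pic-slot 1 port 0 (slot and port numbers will vary), and overall health is verified with show virtual-chassis status.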
3. "The Total Economic Impact of Junos Network Operating Systems," a commissioned study conducted by Forrester Consulting on behalf of Juniper Networks, February 2009.


Juniper’s Virtual Chassis technology enables virtualization at the access layer, offering three key benefits:

1. It reduces the number of managed devices by up to a factor of 10.

2. The network topology now closely maps to the traffic flow. Rather than sending inter-server traffic up to an
aggregation layer and then back down in order to send it across the rack, it’s sent directly “east-to-west,” reducing
the latency for these transactions. This also more easily facilitates workload mobility when server virtualization is
deployed.

3. Since the network topology now maps to the traffic flows directly, the number of uplinks required can be reduced.

The Virtual Chassis also delivers best-in-class performance. According to testing done by Network World (see full
report at www.networkworld.com/slideshows/2008/071408-juniper-ex4200.html), the EX4200 offers the lowest
latency of any Ethernet switch they had tested, making the EX4200 an optimal solution for high-performance, low
latency, real-time applications. There has also been EX4200 performance testing done in May 2010 by Network Test
which demonstrates the low latency high performance and high availability capabilities of the EX 4200 series, viewable
at http://networktest.com/jnprvc.

When multiple EX4200 platforms are connected in a Virtual Chassis configuration, they offer the same software high
availability as traditional chassis-based platforms. Each Virtual Chassis has a master and backup Routing Engine pre-
elected with synchronized routing tables and routing protocol states for rapid failover should a master switch fail. The
EX4200 line also offers fully redundant power and cooling.

To further lower TCO, Juniper includes core routing features such as OSPF and RIPv2 in the base software license, providing a no-incremental-cost option for deploying Layer 3 at the access layer.
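
A minimal sketch of this no-incremental-cost Layer 3 option follows, using an assumed server VLAN, routed VLAN interface, addressing, and uplink interface name.

    vlans {
        servers {
            vlan-id 100;
            /* routed VLAN interface provides the default gateway for the server subnet */
            l3-interface vlan.100;
        }
    }
    interfaces {
        vlan {
            unit 100 {
                family inet {
                    address 10.10.100.1/24;
                }
            }
        }
    }
    protocols {
        ospf {
            area 0.0.0.0 {
                /* advertise the server subnet without forming adjacencies toward servers */
                interface vlan.100 {
                    passive;
                }
                /* routed uplink toward the core carries the OSPF adjacency */
                interface ae0.0;
            }
        }
    }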

In every deployment, the EX4200 reduces network configuration burdens and measurably improves performance for
server-to-server communications in SOA, Web services, and other distributed application designs.

For more information, refer to the EX4200 Ethernet Switch data sheet for a complete list of features, benefits, and
specifications at: www.juniper.net/us/en/products-services/switching/ex-series.

QFX3500
QFabric consists of edge, interconnect, and control devices that work together to create a high-performance, low
latency fabric that unleashes the power of the data center. QFabric represents the “1” in Juniper Networks 3-2-1
architecture, dramatically reducing complexity in the data center by delivering any-to-any connectivity while lowering
capital, management, and operational expenses.

The first QFabric product, the Juniper Networks QFX3500, represents a new level of integration and performance for top-of-rack switches by being the first to combine all of the following in 1 RU:

• Ultra-low latency - matching industry-best latency for a 48+ port Ethernet switch

• L2 - full Layer 2 switching functionality

• L3 - routing and IP addressing functions (future)

• Storage convergence - Ethernet storage (NAS, iSCSI, FCoE) and Fibre Channel gateway

• 40G - high-capacity uplinks (future)

Refer to the QFX3500 data sheet for more information at: www.juniper.net/us/en/local/pdf/datasheets/1000361-en.pdf

EX4500 10GbE Switch

The Juniper Networks EX4500 Ethernet Switch delivers a scalable, compact, high-performance platform for supporting a mix of GbE and high-density 10 gigabit per second (10 Gbps) data center top-of-rack deployments, as well as data center, campus, and service provider aggregation deployments. (The QFX3500 is the preferred platform for 10 Gbps top-of-rack deployments.) The Junos OS-based EX4500 is a 48-port wire-speed switch whose ports can be provisioned as either gigabit Ethernet (GbE) or 10GbE ports in a two rack unit (2 U) form factor. The 48 ports are allocated as 40 1000BaseT ports in the base unit and 8 optional uplink module ports. The EX4500 delivers 960 Gbps of throughput (full duplex) for both Layer 2 and Layer 3 protocols. The EX4500 also supports Virtual Chassis technology.


For smaller data centers, the EX4500 can be deployed as the core layer switch, aggregating 10GbE uplinks from
EX4200 Virtual Chassis configurations in the access layer. Back-to-front and front-to-back cooling ensure consistency
with server designs for hot and cold aisle deployments.
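
As a hedged example of how those uplinks might be bundled, the following sketch builds an LACP link aggregation group from two 10GbE uplinks on an access-layer Virtual Chassis (one port on each member) toward the core; interface names and counts are assumptions, and a matching bundle would be configured on the core switch.

    chassis {
        aggregated-devices {
            ethernet {
                /* number of aggregated Ethernet (ae) bundles supported on this switch */
                device-count 2;
            }
        }
    }
    interfaces {
        xe-0/1/0 {
            ether-options {
                /* first 10GbE uplink, on Virtual Chassis member 0 */
                802.3ad ae0;
            }
        }
        xe-1/1/0 {
            ether-options {
                /* second uplink, on Virtual Chassis member 1, for link and member redundancy */
                802.3ad ae0;
            }
        }
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;
                }
            }
            unit 0 {
                family ethernet-switching;
            }
        }
    }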

Juniper plans to add EX4500 support for Data Center Bridging and Fibre Channel over Ethernet (FCoE) in upcoming product releases, providing FCoE transit switch functionality.

Refer to the EX4500 Ethernet Switch data sheet for more information at: www.juniper.net/us/en/products-services/
switching/ex-series/ex4500/#literature.

The QFX3500 is the preferred platform for organizations building out a high-density 10-gigabit data center. It is also a building block towards the single data center fabric that Juniper will be providing in the future. For data center architectures with a mix of primarily gigabit and 10-gigabit connectivity, the EX4500 is the appropriate platform.

EX8200 Line of Ethernet Switches

The Juniper Networks EX8200 line of Ethernet switches is a high-performance chassis platform designed for the high
throughput that a collapsed core layer requires. This highly scalable platform supports up to 160,000 media access
control (MAC) addresses, 64,000 access control lists (ACLs), and wire-rate multicast replication. The EX8200 line
may also be deployed as an end-of-rack switch for those enterprises requiring a dedicated modular chassis platform.
The advanced architecture and capabilities of the EX8200 line, similar to the EX4200, accelerate migration towards a
simplified data center design.

The EX8200-40XS line card brings 10GbE to the access layer for end-of-row configurations. This line card will deliver
25 percent greater density per chassis and consume half the power of competing platforms, reducing rack space and
management costs. The EX8200 line is expected to add Virtual Chassis support later in 2010 with additional features
being added in early 2011. With the new 40-port line card, the EX8200 line with Virtual Chassis technology will enable
a common fabric of more than 1200 10GbE ports.

The most fundamental challenge that data center managers face is the challenge of physical plant limitations. In this
environment, taking every step possible to minimize power draw for the required functionality becomes a critical goal.
For data center operators searching for the most capable equipment in terms of functionality for the minimum in rack
space, power, and cooling, the EX8200 line delivers higher performance and scalability in less rack space with lower
power consumption than competing platforms.

Designed for carrier-class HA, each EX8200 line model also features fully redundant power and cooling, fully
redundant Routing Engines, and N+1 redundant switch fabrics.

For more information, refer to the EX8200 line data sheets for a complete list of features and specifications at:
www.juniper.net/us/en/products-services/switching/ex-series.

MX Series 3D Universal Edge Routers

It’s important to have a consistent set of powerful edge services routers to be able to interconnect the data center to
other data centers and out to dispersed users. The MX Series with the new Trio chipset delivers cost-effective, powerful
scaling that allows enterprises to support application-level replication for disaster recovery or virtual machine migration
between data centers by extending VLANs across data centers using mature, proven technologies such as VPLS.

It is worth noting the following observation from the Day 3 Data Center Interconnect session at the 2010 MPLS Ethernet
World Conference: “VPLS is the most mature technology today to map DCI requirements.”

Delivering carrier-class HA, each MX Series model features fully redundant power and cooling, fully redundant Routing
Engines, and N+1 redundant switch fabrics.

For more information, refer to the MX Series data sheet for a complete list of features, benefits, and specifications at:
www.juniper.net/us/en/products-services/routing/mx-series.


Consolidated Security with SRX Series Services Gateways

The SRX Series Services Gateways replace numerous legacy security solutions by providing a suite of services in one
platform, including a firewall, IPS, and VPN services.

Supporting the concept of zones, the SRX Series can provide granular security throughout the data center LAN. The
SRX Series can be virtualized and consolidated into a single pool of security services via clustering, and it scales up to
10 million concurrent sessions, allowing it to handle large increases in throughput rapidly without additional devices,
multiple cumbersome device configurations, or operating systems.

The highly scalable performance capabilities of the SRX Series platform, like those of the EX Series switches, lay the
groundwork for a simplified data center infrastructure and enable enterprises to easily scale to meet future growth
requirements. This is in contrast to legacy integrated firewall modules and standalone appliances, which have limited
performance scalability. Even when multiple firewall modules are used, the aggregate performance may still be far
below the throughput required for consolidating applications or data centers, where aggregate firewall throughput of
greater than 100 gigabits may be required. The lack of clustering capabilities in some legacy firewalls not only limits
performance scalability but also increases management and network complexity.

The SRX Series provides HA features such as redundant power supplies and cooling fans, as well as redundant switch
fabrics. This robust platform also delivers carrier-class throughput. The SRX5600 is the industry’s fastest firewall and
IPS by a large margin, according to Network World.

For more information, refer to the SRX Series data sheet for a complete list of features, benefits, and specifications at:
www.juniper.net/us/en/products-services/security/srx-series.

Juniper Networks vGW Virtual Gateway

To address the unique security challenges of virtualized networks and data centers, the vGW virtual firewall and cloud
protection software provides network and application visibility and granular control over virtual machines (VMs).
Combining a powerful stateful virtual firewall, VM Introspection, and automated compliance assessment, the vGW
Virtual Gateway protects virtualized workloads and integrates easily into Juniper environments featuring any of the
following:

• SRX Series Services Gateways

• STRM Series Security Threat Response Managers

• IDP Series Intrusion Detection and Prevention Appliances

The vGW integrations preserve customers’ investment in Juniper security and extend it to the virtualized
infrastructure with similar features, functionality, and enterprise-grade capabilities such as high performance,
redundancy, and central management.

Juniper customers can deploy the vGW software on the virtualized server and integrate security policies, logs, and
related workflow into existing SRX Series, STRM Series, and IDP Series infrastructure. Customers benefit from layered,
granular security without the management and OpEx overhead. vGW exports firewall logs and inter-VM traffic flow
information to the STRM Series to deliver a single pane of glass for threat management. Customers who have deployed
Juniper Networks IDP Series and have management processes around threat detection and mitigation can extend them
to the virtualized server infrastructure with no additional CapEx investment.

The vGW Virtual Gateway’s upcoming enhancements with the SRX Series and Junos Space continue the vision of
delivering ‘gapless’ security with a common management platform. The vGW-SRX Series integration will ensure that
trust zone integrity is guaranteed to the last mile, which is particularly relevant in cloud and shared-infrastructure
deployments. vGW integration with Junos Space will bridge the gap between management of physical resources and
virtual resources to provide a comprehensive view of the entire data center.

Refer to the vGW Virtual Gateway data sheet for more information at: www.juniper.net/us/en/local/pdf/datasheets/1000363-en.pdf


MPLS/VPLS for Data Center Interconnect

The consolidation of network services increases the need for Data Center Interconnect (DCI). Resources in one data
center are often accessed from one or more other data centers. Different business units, for example, may share
information across multiple data centers via VPNs. Compliance regulations may require that certain application traffic
be kept on separate networks throughout data centers. Or businesses may need a real-time synchronized standby
system to provide optimum HA in a service outage.

MPLS is a suite of protocols developed to add transport and virtualization capabilities to large data center networks.
MPLS enables enterprises to scale their topologies and services. An MPLS network is managed using familiar protocols
such as OSPF or Integrated IS-IS and BGP.

MPLS provides complementary capabilities to standard IP routing. Moving to an MPLS network provides business
benefits like improved network availability, performance, and policy enforcement. MPLS networks can be employed for
a variety of reasons:

• Inter Data Center Transport: To connect consolidated data centers to support mission critical applications. For
example, real-time mainframe replication or disk, database, or transaction mirroring.

• Virtualizing the Network Core: For logically separating network services. For example, providing different levels of
QoS for certain applications or separate application traffic due to compliance requirements.

• Extending L2VPNs for Data Center Interconnect: To extend L2 domains across data centers using VPLS. For example,
to support application mobility with virtualization technologies like VMware VMotion, or to provide resilient business
continuity for HA by copying transaction information in real time to another set of servers in another data center.

The MX Series provides high capacity MPLS and VPLS technologies. MPLS networks can also facilitate migration
towards a simpler, highly scalable and flexible data center infrastructure.
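
As an illustration only, the following minimal sketch shows how a BGP-signaled VPLS routing instance might be defined
on an MX Series router to extend a VLAN between data centers. The interface, VLAN ID, route distinguisher, and route
target values are hypothetical, and a working design also assumes an underlying MPLS LSP and BGP configuration
between the data center edge routers:

# customer-facing interface carrying the stretched VLAN (hypothetical values)
set interfaces ge-1/0/0 vlan-tagging
set interfaces ge-1/0/0 encapsulation flexible-ethernet-services
set interfaces ge-1/0/0 unit 100 encapsulation vlan-vpls vlan-id 100
# VPLS routing instance shared by the participating data center edge routers
set routing-instances DC-VPLS instance-type vpls
set routing-instances DC-VPLS interface ge-1/0/0.100
set routing-instances DC-VPLS route-distinguisher 65000:100
set routing-instances DC-VPLS vrf-target target:65000:100
set routing-instances DC-VPLS protocols vpls site-range 8
set routing-instances DC-VPLS protocols vpls site DC1 site-identifier 1
set routing-instances DC-VPLS protocols vpls no-tunnel-services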

For more information on MPLS/VPLS, please refer to the “Implementing VPLS for Data Center Interconnectivity”
Implementation Guide at: www.juniper.net/us/en/solutions/enterprise/data-center/simplify/#literature.

Juniper’s Unified Management Solution

Juniper provides three powerful management solutions for the data center LAN: Network and Security Manager (NSM),
the STRM Series, and Junos Space.

Network and Security Manager

NSM offers a single pane of glass to manage and maintain Juniper platforms as the network grows. It also helps
maintain and configure consistent routing and security policies across the entire network, and it helps delegate roles
and permissions.

Delivered as a software application or a network appliance, NSM provides many benefits:

• Centralized activation of routers, switches, and security devices

• Granular role-based access and policies

• Global policies and objects

• Monitoring and investigative tools

• Scalable and deployable solutions

• Reliability and redundancy

• Lower TCO


The comprehensive NSM solution provides full life cycle management for all platforms in the data center LAN.

• Deployment: Provides a number of options for adding device configurations into the database, such as importing a
list of devices, discovering and importing deployed network devices, manually adding a device and configuration in
NSM, or having the device contact NSM to add its configuration to the database.

• Configuration: Offers central configuration to view and edit all managed devices. Provides offline editing/modeling
of device configuration. Facilitates the sharing of common configurations across devices via templates and policies.
Provides configuration file management for backup, versioning, configuration comparisons, and more.

• Monitoring: Provides centralized event log management with predefined and user-customizable reports. Provides
tools for auditing log trends and finding anomalies. Provides automatic network topology creation using standards-
based discovery of Juniper and non-Juniper devices based on configured subnets. Offers inventory management
for device management interface (DMI)-enabled devices, and Job Manager to view device operations performed by
other team members.

• Maintenance: Delivers a centralized Software Manager to track software image versions for network devices. Other
tools transform and validate between user inputs and device-specific data formats via DMI schemas.

Using open standards like SNMP and system logging, NSM has support for third-party network management solutions
from IBM, Computer Associates, InfoVista, HP, EMC, and others.

Refer to the Network and Security Manager data sheet for a complete list of features, benefits, and specifications:
www.juniper.net/us/en/products-services/security/nsmcm.

STRM Series Security Threat Response Managers

Complementing Juniper’s portfolio, the STRM Series offers a single pane of glass to manage security threats. It
provides threat detection, event log management, and compliance reporting through the following capabilities:

• Log Management: Provides long-term collection, archival, search, and reporting of event logs, flow logs, and
application data.

• Security Information and Event Management (SIEM): Centralizes heterogeneous event monitoring, correlation, and
management. Unrivaled data management greatly improves IT’s ability to meet security control objectives.

• Network Behavior Anomaly Detection (NBAD): Discovers aberrant network activities using network and application
flow data to detect new threats that others miss.

Refer to the STRM Series data sheet for a complete list of features, benefits, and specifications: www.juniper.net/us/
en/products-services/security/strm-series.

Junos Space

Another of IT’s challenges has been adding new services and applications to meet ever-growing demand. Historically,
this has not been easy, requiring months of planning, with changes made only in strict maintenance windows.

Junos Space is a new, open network application platform designed for building applications that simplify network
operations, automate support, and scale services. Organizations can take control of their own networks through self-
written programs or third-party applications from the developer community. Delivered on a number of appliances
across Juniper’s routing, switching, and security portfolio, Junos Space lets an enterprise seamlessly add new
applications, devices, and device updates as they become available from Juniper and the developer community,
without ever restarting the system, for full plug and play.


Junos Space applications include:

• Junos Space Virtual Control allows users to monitor, manage, and control the virtual network environments that
support virtualized servers deployed in the data center. Virtual Control provides a consolidated solution for network
administrators to gain end-to-end visibility into, and control over, both virtual and physical networks from a single
management screen. By enabling network-wide topology, configuration, and policy management, Virtual Control
minimizes errors and dramatically simplifies data center network orchestration, while at the same time lowering
total cost of ownership by providing operational consistency across the entire data center network. Virtual Control
also greatly improves business agility by accelerating server virtualization deployment.

Juniper has also formed a new collaboration with VMware that takes advantage of VMware’s open APIs to achieve seamless
orchestration across both physical and virtual network elements by leveraging Virtual Control. The combination of
Junos Space Virtual Control and VMware vSphere™ provides automated orchestration between the physical and virtual
networks, wherein a change in the virtual network is seamlessly carried over to the physical network and vice versa.

• Junos Space Ethernet Design is a Junos Space software application that enables end-to-end campus and data
center network automation. Ethernet Design provides full automation, including configuration, provisioning,
monitoring, and administration of large switch and router networks. Designed to enable rapid endpoint connectivity
and operationalization of the data center, Ethernet Design uses best practice configurations and scalable workflows
to grow data center operations with minimal operational overhead. It is a single-pane-of-glass platform for end-to-
end network automation that improves productivity via a simplified, “create one, use extensively” configuration and
provisioning model.

• Junos Space Security Design enables fast, easy, and accurate enforcement of security state across the enterprise
network. Security Design enables quick conversion of business intent to device-specific configuration, and it enables
auto-configuration and provisioning through workflows and best practices to reduce the cost and complexity of
security operations.

• Service Now and Junos Space Service Insight consist of Junos Space applications that enable fast and proactive
detection, diagnosis, and resolution of network issues. (See Automated Support with Service Now for more details.)

• Junos Space Network Activate facilitates fast and easy setup of VPLS services, and allows for full lifecycle
management of MPLS services.

In addition, the Junos Space Software Development Kit (SDK) will be released to enable development of a wide range
of third-party applications covering all aspects of network management. Junos Space is designed to be open and
provides northbound, standards-based APIs for integration to third-party data center and service provider solutions.
Junos Space also includes DMI, based on NETCONF, an IETF standard, which can enable management of DMI-compliant
third-party devices.

Refer to the following URL for more information on Junos Space applications: www.juniper.net/us/en/products-
services/software/junos-platform/junos-space/applications.


Automated Support with Service Now

Built on the Junos Space platform, Service Now delivers on Juniper’s promise of network efficiency, agility, and
simplicity by delivering service automation that leverages Junos OS embedded technology.

For devices running Junos OS 9.x and later releases, Service Now aids in troubleshooting for Juniper’s J-Care Technical
Services. Junos OS contains scripts that provide device and incident information, which is relayed to the Service
Now application where it is logged, stored, and, with the customer’s permission, forwarded to Juniper Networks
Technical Services for immediate action by the Juniper Networks Technical Assistance Center (JTAC).

Not only does Service Now provide automated incident management, it offers automated inventory management for
all Junos OS devices running release 9.x and later. These two elements provide substantial time savings in the form
of more network uptime and less time spent on administrative tasks like inventory data collection. This results in a
reduction of operational expenses and streamlined operations, allowing key personnel to focus on the goals of the
network rather than its maintenance—all of which enhance Juniper’s ability to simplify the data center.

Figure 9: Troubleshooting with Service Now

The Service Insight application, available in Fall 2010 on the Junos Space platform, takes service automation to the
next level by delivering proactive, customized support for networks running Juniper devices. While Service Now enables
automation for reactive support components such as incident and inventory management for efficient network
management and maintenance, Service Insight brings a level of proactive, actionable network insight that helps
manage risk, lower TCO, and improve application reliability.

The first release of Service Insight will consist of the following features:

• Targeted product bug notification: Proactive notification to the end user of any new product defect that could
impact network performance and availability, with analysis of which devices could be vulnerable to the defect. This
capability can avoid network incidents due to known product issues, as well as save numerous hours of manual,
system-wide network impact analysis.

• EOL/EOS reports: On-demand view of the end of life (EOL), end of service (EOS), and end of engineering (EOE)
status of devices and field-replaceable units (FRUs) in the network. This capability brings efficiency to network
management operations and mitigates the risk of running obsolete network devices and/or software/firmware.
With this capability, the task of taking network inventory and assessing the impact of EOL/EOS announcements is
reduced to the touch of a button instead of a time-consuming analysis of equipment and software revision levels
and compatibility matrices.


Chapter 3: Data Center Migration - Trigger Events and Deployment Processes


How Migrations Begin


Many enterprises have taken on server, application, and data center consolidations to reduce costs and to increase the
return on their IT investments. To continue their streamlining efforts, many organizations are also considering the use of
cloud computing in their pooled, consolidated infrastructures. While migrating to a next-generation cloud-ready data
center design can theoretically take place at any time, most organizations will not disrupt a production facility except
for a limited time-window to perform scheduled maintenance and continuity testing, or for a suitably compelling
reason whose return is worth the investment and the work.

In Chapter 3 of this guide, we identify a series of such reasons—typically stimulated by trigger events—and the way
these events turn into transitions at various insertion points in the data center network. We also cover the best
practices and steps involved in migration at each of the insertion points presented by a specific trigger event. By
following these steps and practices, it is possible to extend migration to legacy network tiers and move safely towards
a simplified data center infrastructure.

Trigger Events for Change and Their Associated Insertion Points


Change in the data center network is typically determined by the type of event triggering the organization to make
that change. What follows is a short description of trigger events which can stimulate an organization to make the
investments related to these events:

• Provisioning a new area of infrastructure or point of delivery (POD) in an existing data center due to
additional capacity required for new applications and services. The new applications may also have higher
performance requirements that cannot be delivered by the existing infrastructure.

• Technology refresh due to either EOL on a given product line or an upgrade to the latest switching and/or server
technology. A refresh can also be driven by the end of an equipment depreciation cycle, company policy regarding
available headroom capacity, or for adding capacity to meet planned future expansion.

• Infrastructure redesign due to increased use of server virtualization.

• Data center consolidation due to merger or acquisition, cost saving initiatives, or moving from an existing co-
location facility. Due to the increased scalability, performance, and high availability requirements, data center
consolidation may also require a technology refresh.

• Business continuity and workload mobility initiatives. Delivering HA and VM/application mobility typically involves
“VLAN stretching” within or between data centers.

• Upgrade to the core data center network for higher bandwidth and capacity to support new capabilities such as
server virtualization/workload mobility or higher application performance. This may also be due to a technology
refresh as a result of the retirement of legacy equipment which is at end of life (EOL).

• Need for higher performance and scale in security. Existing security gateways, whether integrated in a chassis
or running as standalone appliances, may not be able to deliver the higher performance required to support the
increased traffic from data center consolidation, growth in connected devices, increased extranet collaboration,
and internal/external compliance and auditing requirements. Server, desktop, and application virtualization may
also drive changes in the security model, to increase the strength of security in the new environments and ease
complexity in management. Enhancements can be made to the core, edge, or virtual server areas of the data center
network to deal with these requirements.

• OnRamp to QFabric: QFabric represents the “1” in Juniper Networks 3-2-1 architecture, dramatically reducing
complexity in the data center by delivering any-to-any connectivity while lowering capital, management, and
operational expenses. QFabric consists of edge, interconnect, and control devices that work together to create a
high-performance, low latency fabric that unleashes the power of the data center. The QFabric technology also
offers unprecedented scalability with minimal additional overhead, supporting converged traffic and making it
easy for enterprises to run Fibre Channel, Ethernet, and Fibre Channel over Ethernet on a single network. The high-
performance, non-blocking, and lossless QFabric architecture delivers much lower latency than traditional network
architectures, which is crucial for server virtualization and the high-speed communications that define the modern data
center. The first QFabric product, the Juniper Networks QFX3500, delivers a 48-port 10GbE top-of-rack switch that
provides low latency, high-speed access for today’s most demanding data center environments. When deployed
with the other QFabric components, the QFX3500 offers a fabric-ready edge solution that contributes to the QFabric
architecture’s highly scalable, highly efficient support of the exponentially growing data center.


Addressing any or all of these trigger events results in deployment of new technology into the access, aggregation,
core, or services tiers of an existing data center network.

Considerations for Introducing an Alternative Network Infrastructure Provider


In some installations, a key consideration when evolving an existing infrastructure is the impact of introducing another
vendor. Organizations can minimize any impact by using the same best practices they employ in a single vendor
network. For example, it is sound practice to test interoperability and feature consistency before an implementation at
any network layer. Many enterprises do this today, since there are often multiple inconsistent versions of an operating
system within a single vendor’s portfolio, or even completely different operating systems within that portfolio. For
example, the firewall or intrusion prevention system (IPS) platforms may have a different OS and interface from the
switching products. Even within a switching portfolio, there may be different operating systems, each supporting
different feature implementations.

It is also sound practice to limit fault domains and contain risks when introducing an additional vendor. This can be
accomplished with a building block design for the target insertion point, when deploying into an existing LAN. This
approach allows for definition of the new insertion as a functional module, testing of the module in proof-of-concept
(PoC) environments before deployment, and clean insertion of the new module into production after testing. As
mentioned earlier, PoC testing is often done as a best practice in a single vendor network as well.

Other steps that can ensure successful insertion of Juniper Networks technology into an existing data center LAN include:

• Training

• Multivendor automation and management tools

Training

The simplicity of Juniper’s implementations typically minimizes the need for extensive training to accompany
deployment; however, Juniper also offers a variety of training resources to accelerate deployments. To start with,
standardization of protocols within the network typically eases introduction, since basic constructs are similar and
interoperability has usually been tested and proven ahead of time by Juniper. Beyond the protocols, differences in
command-line interface (CLI) are usually easier to navigate than people initially think. Time after time, people familiar
with other CLIs find themselves able to make the transition quickly due to the consistent, intuitive nature of Junos
operating system’s implementation (it is easy to learn and use). Junos OS also has a tremendous amount of flexibility
and user support built into it. For example, to ease migration from Cisco’s IOS, there is a Junos OS command to display
a configuration file in a format similar to IOS. Additionally, hands-on training is available in the form of a two-day boot
camp. Customized training can also be mapped to address any enterprise’s specific environment. Training not only
gives an opportunity to raise the project team’s skill level, but also to get experience with any potential configuration
complexities prior to entering the implementation phase of a project.
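
For example, the Junos OS CLI pipe filters can render the hierarchical configuration as flat, one-line set commands
that read much like an IOS configuration. A brief illustrative sample follows; the host name, VLAN, and interface shown
are hypothetical:

user@switch> show configuration | display set
set system host-name dc-access-01
set vlans app-tier vlan-id 100
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members app-tier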

Junos OS also provides embedded automation capabilities. A library of scripts that automate common operations
tasks is readily available online for viewing and downloading. Because the scripts are categorized by function, the one
with the best fit can easily be found. Refer to the Junos OS Script Library for a complete list: www.juniper.net/us/en/community/junos/
script-automation/library.

Multivendor Automation and Management Tools

In a multivendor environment, it is often critical to establish a foundation of multivendor management tools that
work with existing suppliers as well as with Juniper. There are well established multivendor tools available in the
fault and performance analysis areas. These tools work with equipment from all of the major vendors in the market
including Juniper and other vendors. In the provisioning, configuration, inventory management, and capacity planning
areas, existing third-party tools typically don’t scale or leverage the capabilities and best practices of each vendor’s
equipment. In these situations, it works well to leverage an integrated platform like Junos Space to support the Juniper
infrastructure consistently, and, where possible, to incorporate support for other vendors’ platforms and applications
if the APIs and SDKs of Junos Space can be used to complete that integration. Junos Space provides APIs and SDKs
to customize existing applications, and also enables partners to integrate their applications into this homogenous
application platform.


Trigger Events, Insertion Points, and Design Considerations


To summarize the preceding discussions, the table below highlights the mapping between a trigger event, the
corresponding network insertion point, and the network design considerations that are pertinent in each of the data
center tiers.

Table 1: Trigger Event Design Consideration Summary


Trigger event: New application or service; technology refresh; server virtualization
Network insertion layer(s): New switching infrastructure for the access, aggregation, and core layers
Design considerations:
• Top-of-rack or end-of-row deployment
• Cabling changes
• Traffic patterns among servers
• VLAN definitions for applications/services segmentation
• L2 domain for VM workload mobility
• Server connectivity speed (GbE/10GbE): application latency requirements, network interface card (NIC) teaming, Fibre Channel storage network convergence
• Uplinks: oversubscription ratios, number and placement of uplinks, GbE/10GbE uplinks, use of L2 or L3 for uplinks, IEEE Spanning Tree Protocol (STP), redundant trunk groups as an STP alternative, link aggregation sizing/protocol
• QoS: classification and prioritization, policing
• High availability (HA) requirements
• Multicast scalability/performance requirements
• Interoperability testing definitions
• IOS configuration for conversion to Junos OS

Trigger event: Data center consolidation
Network insertion layer(s): Aggregation/core layer; access layer; services layer
Design considerations:
• Sufficiency of existing physical capacity
• Interior gateway protocol (IGP) and exterior gateway protocol (EGP) design
• Multicast requirements
• L2/L3 domain boundaries
• IEEE STP
• Default gateway/root bridge mapping
• Virtualized services support requirements (VPN/VRF)
• Uplinks: density, speed, link aggregation
• Scaling requirements for MAC addresses and ACLs
• Latency requirements
• Fibre Channel storage network convergence
• HA features
• QoS
• Security policies: existing policy migration, ACL migration, new policy definitions
• Interoperability testing definitions
• IOS configuration for conversion to Junos OS


Trigger event: Business continuity; workload mobility; data center core network upgrade
Network insertion layer(s): Core layer
Design considerations:
• Sufficiency of existing physical capacity
• VLAN definitions for applications/services segmentation
• VLAN/VPN routing and forwarding (VRF) mapping
• Latency/performance requirements
• Traffic engineering/QoS requirements
• Connecting with legacy/proprietary IGP protocols
• Data Center Interconnect (DCI) method
• Stretching VLANs: physical layer extension via dense wavelength-division multiplexing (DWDM), VPLS/MPLS
• Interoperability testing definitions
• IOS configuration for conversion to Junos OS

Trigger event: Higher performance security services
Network insertion layer(s): Aggregation/core layer insertion
Design considerations:
• Capacity and throughput requirements
• Compliance and audit requirements
• Existing security policy rules migration
• Zone definitions
• Virtual machine security requirements for virtual firewall
• Interoperability testing definitions

IOS to Junos OS Conversion Tools


For organizations with existing Cisco IOS configurations, Juniper provides an IOS to Junos OS (I2J) migration tool.
I2J is a web-based software translation tool that converts Cisco IOS configurations for Cisco Catalyst switches and
Cisco routers into their Junos OS equivalents. I2J lowers the time and cost of migrating to Juniper routing and switching
platforms. The I2J tool supports configurations for physical and logical interfaces, routing protocols, routing policies,
packet filtering functions, and system commands. I2J also automatically flags any conversion errors or incompatible
configurations. It can be used for IOS configuration conversions at all network insertion points. Figure 10 shows a
sample I2J screen.

Since not all network operating systems are created equal, it should be noted that a straight IOS to Junos OS
conversion may mask opportunities to improve the overall network configuration or to implement features or functions
not possible with IOS. Conversely, I2J also helps identify IOS-specific features which may not map directly to Junos OS
in a 1:1 manner and instead need to be implemented in an alternate way.

Figure 10: Converting IOS to Junos OS using I2J

I2J is only available to Juniper customers or partners with a support contract via a secure Advanced Encryption
Standard (AES) 256-bit encrypted website found at: https://i2j.juniper.net.


IOS Input Page

The main interface for I2J supports upload or cut and paste of IOS configuration files. You can adjust a variety of
translation options, such as outputting verbose IOS comments or consolidating firewall terms. Then use the Translate
button to convert the IOS file into Junos OS. The output is displayed with statistics, the Junos OS configuration output,
and the IOS source with messages.

Figure 11: The I2J input page for converting IOS to Junos OS

Data Center Migration Insertion Points: Best Practices and Installation Tasks
The legacy three-tier design found in many of today’s data centers was depicted in Figure 1. This is the baseline for
layer insertion points addressed in this document. For a specific insertion point such as the access layer, for example,
the recommended Juniper best practices pertaining to that layer are provided first. This is then followed by the
recommended preinstallation, installation, and post installation tasks.

Recommended best practices and installation-related tasks focus primarily on currently shipping products and
capabilities.

A dedicated Troubleshooting chapter detailing Juniper recommended guidelines for the most commonly encountered
migration and installation issues is also included in this guide.

New Application/Technology Refresh/Server Virtualization Trigger Events


These events are often driven by a lack of capacity in the existing infrastructure to support a new application or service.
They may also occur when an organization is trying to maximize its processor capacity through use of virtual servers.
Redesigns based on these triggers can involve upgrades to either data center access or aggregation tiers (or both) as
described later in this section. They may also involve a redesign across data centers, addressed later in the Business
Continuity and Workload Mobility Trigger Events section. In general, server virtualization poses an interesting set of
design challenges, detailed at the end of this section.

The insertion point for each of these triggers often involves provisioning one or more new Points of Delivery (PODs) or
designated sections of a data center layout, including new switches for the network’s access layer. The new POD(s)
may also include an upgrade of the related aggregation layer which, depending on the requirements, could potentially
later serve as the core in a simplified two-tier design. The process may also involve a core switch/router upgrade or
replacement to increase functionality and bandwidth for each new POD requirement.

For 10GbE server connectivity in a top-of-rack deployment, Juniper’s recommended design is based on either the
QFX3500 or the EX4500. The choice depends on several factors:

• The QFX3500 would be the preferred platform where sub-microsecond latency is required, such as in high-
performance compute clusters and financial services transactions.


• The QFX3500 is also preferred for 10 Gigabit data center designs and for deployments requiring FCoE and
Fibre Channel gateway functionality.

• The QFX3500 is also a building block for Juniper’s QFabric technology, which delivers a data center fabric that
enables exponential gains in scale, performance, and efficiency.

The EX4500 is typically deployed in smaller enterprises as a data center aggregation switch or in smaller campus
environments. When the requirement is for GbE server connectivity, the EX4200 is the platform of choice; up to
480 GbE server connections may be accommodated within a single Virtual Chassis.

For 10 Gigabit connectivity, the QFX3500 enables IT organizations to take advantage of the opportunity to converge
I/O in a rack. Servers can connect to the QFX3500 using a converged network adapter (CNA) over 10GbE, channeling
both IP and SAN traffic over a single interface. The QFX3500 devices in turn connect directly to the data center
SAN array, functioning as a Fibre Channel (FC) gateway. Additionally, the solution can support operating the QFX3500
as an FC transit switch connected to an external SAN director that fulfills the FC gateway functionality.

Design Options and Best Practices: New Application/Technology Refresh/Server Virtualization Trigger Events

When deploying a new access layer as part of these trigger events, there are issues related to uplink oversubscription,
STP, and Virtual Chassis. Understanding the design options and best practices for each of these topics is important to
a successful deployment. This section covers access layer migration for GbE-connected servers; a future version of this
Data Center LAN Migration Guide will cover 10 Gigabit as well as Fibre Channel convergence best practices.

Tunable Oversubscription in Uplink Link Aggregation

In legacy networks, oversubscription is an ongoing issue, leading to unpredictable performance and the inability to
deploy or provision a new application with confidence. Oversubscription typically occurs between the access and the
core/aggregation switches (on the access network uplinks).

Juniper provides a tunable oversubscription ratio from 1:1 to 12:1 between the access and the core with the EX4200
Virtual Chassis. Up to 10 EX4200 switches can be configured together in a Virtual Chassis to perform and be managed as
one device. Each member switch supports up to 48 GbE connections and up to two 10GbE uplinks.

A full configuration of 10 EX4200s has up to 480 GbE ports + 2 x (m) x 10GbE uplinks, where m is 1 to 10. The
oversubscription ratio is tuned by adjusting the number of members in the Virtual Chassis, the number of GbE ports, and
the number of 10GbE uplinks. An oversubscription ratio of 1:1, delivering full wire rate, is achieved when there are no more
than two members in a Virtual Chassis configuration and 40 GbE user ports provisioned. An oversubscription ratio of 12:1
is achieved if there are 480 GbE ports and 4 x 10GbE uplinks.

The target oversubscription ratio is always going to be based on the applications and services that the Virtual Chassis
is expected to support. It can be easily adjusted by adding or removing 10GbE uplinks, or by increasing or reducing
the number of member switches in the Virtual Chassis. For example, a 5 to 6 member Virtual Chassis using two to
four 10GbE uplinks delivers oversubscription ratios between 7:1 and 12:1. Or, a 7 to 8 member Virtual Chassis using 4
10GbE uplinks, two in the middle and two at the ends, delivers oversubscription ratios between 8:1 and 9:1. While these
oversubscription levels are common, the most important point is that they can be adjusted as needed.
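
As a simple worked illustration of how these ratios are derived (assuming every access port and uplink runs at line rate):

• 2 members, 40 GbE access ports in use, 4 x 10GbE uplinks: 40 Gbps / 40 Gbps = 1:1 (wire rate)
• 5 members, 240 GbE access ports, 2 x 10GbE uplinks: 240 Gbps / 20 Gbps = 12:1
• 6 members, 288 GbE access ports, 4 x 10GbE uplinks: 288 Gbps / 40 Gbps = 7.2, roughly 7:1
• 10 members, 480 GbE access ports, 4 x 10GbE uplinks: 480 Gbps / 40 Gbps = 12:1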

Spanning Tree Alternatives for Access Layer Insertion Point

STP offers benefits to the data center, but it is also plagued with a number of well-known and hard-to-overcome issues.
These include inefficient link utilization and configuration errors that can potentially bring an entire L2 domain down,
among other problems. To avoid these issues, enterprises may consider alternative architectures to STP when inserting
a new POD into the network. These can include an inverted U design, Redundant Trunk Groups (RTGs), and L3 uplinks.

Inverted U Design

An enterprise can create an STP-free data center topology, even when using an L2 domain to facilitate virtual machine
mobility, through the way it connects servers to the access layer switches. An inverted U design can be used such that
no L2 loops are created. While technologies like STP or RTG aren’t required to prevent a loop in this design, best
practices still recommend provisioning STP to prevent accidental looping due to incorrect configuration.


There are two basic options to provision this design: two physically separate servers connected to two separate access
layer switches, or two separate Virtual Chassis units in a Juniper deployment as depicted in Figure 12.

Figure 12: Inverted U design using two physical servers

The bold lines show that there are no loops in this upside down or inverted U design. Load balancers are typically used
in this design for inbound server traffic, with a global load-balancing protocol used for outbound server traffic.

The next option is to use NIC teaming within a VMware infrastructure, as depicted in Figure 13.

Figure 13: Inverted U design with NIC teaming

This option, using network interface teaming in a VMware environment, is the more prevalent form of an inverted
U design. NIC teaming is a feature of VMware Infrastructure 3 that allows you to connect a single virtual switch to
multiple physical Ethernet adapters. A team can share traffic loads between physical and virtual networks and provide
passive failover in case of an outage. NIC teaming policies are set at the port group level.


For more detailed information on NIC teaming using VMware Infrastructure 3, refer to:

www.vmware.com/technology/virtual-networking/virtual-networks.html

www.vmware.com/files/pdf/virtual_networking_concepts.pdf

The main advantages of the inverted U design are that all ports on the aggregation switches have usable bandwidth
100% of the time, traffic flows between access and aggregation are always deterministic, and there is deterministic
latency to all Virtual Chassis connected to a single aggregation or core switch.
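
A minimal sketch of how the 802.3ad LAG uplinks with 802.1Q trunking shown in these figures might be configured on
the EX4200 Virtual Chassis (the uplink ports, the ae number, and the VLAN names are hypothetical, and the VLANs are
assumed to be defined elsewhere in the configuration):

# one aggregated Ethernet interface bundling a 10GbE uplink from two members
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/1/0 ether-options 802.3ad ae0
set interfaces xe-1/1/0 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
# trunk the server VLANs toward the aggregation/core switch
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ app-tier db-tier ]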

Redundant Trunk Groups

LAN designs using the EX4200 Ethernet Switch with Virtual Chassis technology also benefit from the RTG feature as a
built-in, optimized replacement for STP, providing sub-second convergence and automatic load balancing.

RTG is an HA link feature of the EX Series Ethernet Switches that eliminates the need for STP. In fact, STP can’t be
provisioned on an RTG link. Ideally implemented on a switch with a dual-homed connection, RTG configures one link
as active and forwarding traffic, and the other link as blocking and acting as backup to the active link. RTG provides
extremely fast convergence in the event of a link failure. It is similar in practice to the Rapid Spanning Tree Protocol
(RSTP) root and alternate port behavior, but doesn’t require an RSTP configuration.
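
A minimal sketch of an RTG configuration on an EX4200 Virtual Chassis dual-homed to two aggregation switches (the
group name and uplink interfaces are hypothetical; consistent with the note above, spanning tree is not run on these
links):

# primary link forwards; the second link blocks until the primary fails
set ethernet-switching-options redundant-trunk-group group rtg-uplink interface xe-0/1/0.0 primary
set ethernet-switching-options redundant-trunk-group group rtg-uplink interface xe-1/1/0.0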

Layer Three Uplinks

Another alternative is to use L3 uplinks from the access layer to the aggregation/core switches. This limits the L2 domain
and VLANs to a single Virtual Chassis. Equal-cost multipath (ECMP), which is included in the base license covering L3
OSPF on EX Series data center switches, is used for the uplinks. An advanced license is needed if BGP, IS-IS, or IPv6 is
required. Up to 480 servers can be provisioned with one EX4200 Virtual Chassis. This allows applications to maintain L2
adjacency within a single switch; however, servers are typically collocated within a single row which is part of the Virtual
Chassis. The VLAN boundary is the access layer for low latency application data transfer. Virtual Chassis extension would
allow distant servers to be accommodated as well; these servers would be part of a single Virtual Chassis domain.
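
A minimal sketch of routed uplinks with OSPF and ECMP load balancing on an EX Series access switch (interface names
and addresses are hypothetical):

# routed point-to-point uplinks toward two aggregation/core switches
set interfaces xe-0/1/0 unit 0 family inet address 10.10.1.1/30
set interfaces xe-1/1/0 unit 0 family inet address 10.10.2.1/30
set protocols ospf area 0.0.0.0 interface xe-0/1/0.0
set protocols ospf area 0.0.0.0 interface xe-1/1/0.0
# install both equal-cost next hops in the forwarding table
set policy-options policy-statement ecmp-lb then load-balance per-packet
set routing-options forwarding-table export ecmp-lb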

Virtual Chassis Best Practices

These best practices should be followed when deploying and operating the EX4200 at the access layer:

• When designing a Virtual Chassis configuration, consider a deployment in which ports are distributed across as many
switches as possible to provide the highest resiliency and the smallest failure domain.
• When possible, place uplink modules in the Virtual Chassis configuration line card switches, and place uplinks in
devices which are separated at equal distances by member hop.
• Use the virtual management Ethernet (VME) interface as the management interface to configure Virtual Chassis
technology options.
• Evenly space master and backup switches by member hop when possible.
• When installing a Virtual Chassis configuration, explicitly configure the mastership priority of the members that you
want to function as the master and backup switches.
• When removing a switch from a Virtual Chassis configuration, immediately recycle its member ID so that the ID
becomes the next lowest available unused ID. In this way, the replacement switch automatically is assigned that
member ID and inherits the configuration of the original switch.
• Specify the same mastership priority value for the master and backup switches in a Virtual Chassis configuration.
• Configure the highest possible mastership priority value (255) for the master and backup switches.
• Place the master and backup switches in separate locations when deploying an extended Virtual Chassis
configuration.
• For maximum resiliency, interconnect Virtual Chassis member devices in a ring topology.
• When changing configuration settings on the master switch, propagate changes to all other switches in the Virtual
Chassis configuration via a “commit sync” command.
Refer to Juniper’s Virtual Chassis Technology Best Practices Implementation Guide for more information:
www.juniper.net/us/en/products-services/switching/ex-series/ex4200/#literature.
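
A few of the practices above can be expressed in configuration as shown in this minimal sketch (the member numbers,
VME address, and the member ID being recycled are hypothetical):

# give the intended master and backup the same, highest mastership priority
set virtual-chassis member 0 mastership-priority 255
set virtual-chassis member 1 mastership-priority 255
# manage the Virtual Chassis through the VME interface
set interfaces vme unit 0 family inet address 10.0.0.10/24
# synchronize configuration changes to all members on every commit
set system commit synchronize

# operational command to free the member ID of a removed switch
request virtual-chassis recycle member-id 2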


Figure 14 below depicts an EX4200 Virtual Chassis top-of-rack access layer deployment.


Figure 14: EX4200 top-of-rack access layer deployment

Access Layer Preinstallation Tasks

• One of the first tasks is to determine space requirements for the new equipment. If the new access layer switches are
to be housed in new racks, make sure that there is adequate space for the racks in the POD or data center. Or, if it is
a switch refresh, ensure that there is sufficient space in the existing racks to accommodate the new switches. If the
existing racks have the capacity, the eventual switchover becomes a matter of simply switching server cables from
the old switches to the new ones. New racks are usually involved when a server refresh is combined with a switch
refresh. It is assumed that data center facilities already have the power, cooling, airflow, and cabling required for any
new equipment being provisioned.

• In a top-of-rack configuration, the EX4200 with Virtual Chassis technology can be logically viewed as a single
chassis horizontally deployed across racks. It is important to understand the traffic profiles, since this determines the
number of required uplinks. If the traffic flows are predominantly between servers, often referred to as “east-west”
traffic, fewer uplinks to the core network layer are required since inter-server traffic primarily traverses the Virtual
Chassis’ 128 Gbps backplane. The number of uplinks required is also a function of acceptable oversubscription
ratios, which can be easily tuned as per the Virtual Chassis Best Practices section. The EX4200 with Virtual Chassis
technology may also be used in an end-of-row deployment taking these same considerations into account.

• When connecting to an existing non-Juniper aggregation layer switch, it’s important to use open standard protocols
such as 802.1Q for trunking VLANs, and Multiple VLAN Registration Protocol (MVRP) if VLAN propagation is desired.
To ensure interoperability, one of the standards-based STPs (IEEE STP, Rapid Spanning Tree Protocol, or Multiple
Spanning Tree Protocol) should also be used. (A configuration sketch appears at the end of this section.)

• If they exist, company standard IOS-based access layer configurations should be collected. They can be quickly and
easily converted into Junos OS using Juniper’s I2J translation tool as previously described.

• To simplify the deployment process and ensure consistent configurations when installing multiple access layer
switches, Junos OS automation tools such as AutoInstall may be used. Refer to the following for more information:
www.juniper.net/techpubs/software/junos-security/junos-security96/junos-security-admin-guide/config-
autoinstall-chapter.html.

• To further save time and ensure consistency, configuration files may be generated in advance, and then uploaded to
a Trivial File Transfer Protocol (TFTP)/HTTP or FTP server for later downloading. The operations support systems
(OSS) in place for switch provisioning and configuration management determine the type of server to use.

• To test feature consistency and feature implementations for the new Juniper access layer switches, a proof-of-
concept (PoC) lab could be set up, as previously noted. Feature and compatibility testing is greatly reduced with a
single OS across all platforms such as Junos OS, which maintains a strict, serial release cycle. Feature testing in a
PoC lab could include:

-- Interface connectivity

-- Trunking and VLAN mapping

-- Spanning-tree interoperability with aggregation layer

-- QoS policy—classification marking, rate limiting

-- Multicast

• Provision the new access layer switches into any existing terminal servers for out of band CLI access via SSH.

• Network Automation: The approach to use for automating management in these scenarios depends on which other
vendors’ equipment will be collocated in the data center, what other multivendor tools will be employed (Tivoli,
OpenView, etc.), and how responsibilities will be partitioned across teams for different parts of the infrastructure.
If the insertion is relatively small and contained, and other tools supporting each vendor’s platforms are already
deployed, testing for compatible integration of the Juniper equipment is the best PoC step. However, if this insertion
is the start of a longer term, more expanded, strategic deployment of Juniper infrastructure into multiple PODs
and sites (because of the importance of new applications, for example), then it may make sense to also test for
integration of Junos Space into the environment because of the likely value of Space to the installation over time.
Ethernet Design, Network Activate, or other Junos Space data center management applications could be considered.
Space could be deployed as either a physical or a virtual appliance, depending on the allocation of responsibilities
for management and the allocation of resources within the design. VM requirements for running Junos Space are 8
GB RAM and 40 GB hard disk in a production environment. If Junos Space VM is installed for lab/trial purposes, 2 GB
RAM and 8 GB hard disk space is sufficient.

For more information on deploying the Junos Space platform and managing nodes in the Junos Space fabric, please
refer to the technical documentation: www.juniper.net/techpubs/en_US/junos-space1.0/information-products/
index-junos-space.html.
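
As referenced in the pre-installation tasks above, connecting the new access layer to a non-Juniper aggregation switch
relies on open standards such as 802.1Q trunking, MVRP, and a standards-based spanning tree. A minimal illustrative
sketch on an EX Series access switch follows; the VLAN names and uplink port are hypothetical, and since RSTP is the
default spanning-tree mode on EX Series switches, the statements below simply make the intent explicit:

set vlans app-tier vlan-id 100
set vlans db-tier vlan-id 200
# 802.1Q trunk toward the existing aggregation layer
set interfaces ge-0/0/47 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/47 unit 0 family ethernet-switching vlan members [ app-tier db-tier ]
set protocols rstp interface ge-0/0/47.0
# enable MVRP on the trunk if dynamic VLAN propagation is desired
set protocols mvrp interface ge-0/0/47.0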

Installation

After comprehensive PoC testing, physical installation of the new POD should be relatively straightforward. As
previously mentioned, if the installation only involves adding new access layer switches to existing racks, switchover
becomes a matter of changing server cables from the old switches to the new ones. If a server refresh is combined
with a switch refresh, those components should be physically installed in the new racks. Uplink interfaces are then
connected to the aggregation switch(es).

Post Installation

Procedures which are similar to installing a new POD within a single vendor environment should be employed after
physical installation is complete. As a best practice, verify:

• Access to the new POD via CLI or network management tools.

• Monitoring and alerting through OSS tools is functional.

• Security monitoring is in place and working. To limit false positives, security tuning should be conducted if a Security
Incident and Event Manager (SIEM) system such as the STRM Series is in place.

• Interface status for server connections and uplinks via CLI or network management tools.

• L2/STP state between access and aggregation tiers.

• QoS consistency between access and aggregation tiers.

• Multicast state (if applicable).

• Traffic is being passed via ping tests and traffic monitoring tools.

• Applications are running as anticipated via end user test, performance management, and other verifications.
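
A few operational commands that are typically useful for these verifications on EX Series access switches (a sketch
only; the address used in the ping test is hypothetical):

show interfaces terse
show virtual-chassis status
show spanning-tree bridge
show ethernet-switching table
show lacp interfaces
ping 10.10.1.1 rapid count 5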


Network Challenge and Solutions for Virtual Servers


Server virtualization has introduced a new layer of software to the data center called the hypervisor. This layer
allows multiple dissimilar operating systems, and the applications running within them, to share the server’s physical
resources such as CPU, memory, and I/O using entities called virtual machines (VMs). A Virtual Ethernet Bridge (VEB)
within the physical server switches traffic between the VMs and applications running within the same server, as well
as between VMs and external network resources. Instead of interfacing directly to the physical NICs, the applications
now talk to “virtual ports” on the internal VEB (or vSwitch). This creates virtual network endpoints, each with its own
virtual IP and MAC addresses. Unique to server virtualization, a virtual switch (vSwitch) is a Layer 2 switch with limited
function and security. Physical switches (or pSwitches) like the EX Series Ethernet Switches are actual pieces of
network equipment.

The network design for server virtualization requires an organization to consider the connections between the
vSwitch and the pSwitch, including network functionality such as VLAN tagging and link aggregation (LAG) between
network nodes. As VM-to-physical-server ratios increase, server-based networking becomes more complex and may require multiple virtual switches, different VLANs, QoS tags, security zones, and more. Adding this amount of network processing to physical server infrastructures adds considerable overhead to the servers' workloads, and it requires more networking functionality to be added to hypervisors. A strategic alternative for reducing this overhead and simplifying operations is the Virtual Ethernet Port Aggregator (VEPA), in which servers remain focused on application processing and the hypervisor, with visibility into the network, delegates switching tasks to collaborating physical switches. Networking for a large number of physical and logical servers can also be simplified by attaching many of them to the wide configuration of a single Juniper Virtual Chassis switch. As previously described, EX Series switches (initially the EX4200) can be logically combined into a single switch spanning over 400 physical server access ports and supporting several thousand virtual machines using Virtual Chassis technology.
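As a small illustration of the vSwitch-to-pSwitch connectivity just described, the following sketch shows a two-member 802.3ad LAG from an EX Series access switch to a virtualized server, trunking two VLANs used by the VMs. The interface names, VLAN names, and VLAN IDs are hypothetical placeholders, and the # lines are annotations only.

# VLANs carried to the hypervisor host
set vlans vm-production vlan-id 10
set vlans vm-vmotion vlan-id 20

# Aggregated Ethernet bundle with LACP toward the server NICs
set chassis aggregated-devices ethernet device-count 2
set interfaces ge-0/0/10 ether-options 802.3ad ae0
set interfaces ge-1/0/10 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active

# 802.1Q trunk carrying the VM VLANs toward the vSwitch uplinks
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ vm-production vm-vmotion ]

The same VLAN names and tags would then be mirrored in the vSwitch port group configuration so that tagging is consistent on both sides of the link.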

It is important to note that vSwitches and pSwitches are not competing technologies. The IEEE standards body is
working on solving vSwitch issues within its VEPA standards working group. VEPA proposes to offload all switching
activities from hypervisor-based vSwitches to the actual physical switches. In addition, VEPA proposes to ease
management issues. The IEEE 802.1Qbg (Edge Virtual Bridging) and 802.1Qbh (Bridge Port Extension) VEPA standards
specify how to move networking from virtual servers to dedicated physical Ethernet switches. This will help by
concentrating networking functions into equipment that is purpose-built for tasks, enhancing performance, security,
and management within the data centers. It will also help reduce computing overhead on virtual servers, allowing the
CPU cycles to be spent on application processing as opposed to networking tasks. Offloading switching activities from
hypervisor-based vSwitches will also result in an increase in the supported number of VMs. Most importantly, a ratified
VEPA standard will be universal as opposed to a vendor-specific, proprietary solution. For more information on VEPA,
please see: www.ieee802.org/1/files/public/docs2008/new-congdon-vepa-1108-v01.pdf.

In addition to Virtual Chassis and VEPA-based forwarding, a management application such as Junos Space Virtual
Control allows users to monitor, manage, and control the virtual network environments that support virtualized servers
deployed in the data center. Virtual Control provides a consolidated solution for network administrators to gain end-to-
end visibility into, and control over, both virtual and physical networks from a single management screen.

Network Automation and Orchestration


Automation is a crucial tool for running a scalable and resilient data center at all levels. When developing an
automation plan, it often works well to partition functions to be covered by the automation platforms and detail how
these functions map into operations teams and data center domains. Within these boundaries, an open architecture
should be selected that addresses the functions across the respective equipment suppliers’ platforms and maps them
to the appropriate applications. In a multivendor infrastructure, the level of vendor integration typically varies: sometimes integration is possible within a single multivendor application, while in other cases the integration is less complete and some of each vendor's specific tools need to be kept in service to accomplish the necessary tasks.


Significant levels of integration can be done in fault, performance, and network configuration and change
management. As an example, IBM Tivoli NetCool is a well-established, leading multivendor solution for fault management, and Juniper licenses and integrates NetCool into its Junos Space offering. There are also well-established multivendor performance management solutions, including IBM NetCool Proviso, CA eHealth, InfoVista, and HP OpenView, that an enterprise can consider for managing network and application performance across both incumbent vendor and Juniper network infrastructures.

There are currently over a dozen Network Configuration and Change Management (NCCM) vendors with multivendor
tools. These tools bring more structure to the change management process and also enable automated configuration
management. NCCM vendors include IBM Tivoli, AlterPoint, BMC (EmprisaNetworks), EMC (Voyence), HP (Opsware),
and others. Prior to introducing Juniper into an existing single vendor infrastructure, Juniper recommends that you
replace manual network configuration management processes and vendor-specific tools with automated multivendor
NCCM tools. It is also good practice to establish standard network device configuration policies which would apply to
all vendors in the network infrastructure. Automated network configuration management is more efficient and also
reduces operational complexity. An IT management solution should be built around the standards outlined by the
Fault, Configuration, Accounting, Performance, Security (FCAPS) Model.

Refer to the following URL for more information: http://en.wikipedia.org/wiki/FCAPS.

Integrating Management and Orchestration with Junos Space

Some amount of integration for management and orchestration can be accomplished using Junos Space. Junos Space
is a network application platform for developing and deploying applications that simplify data center operations. Junos
Space abstracts network intelligence such as traffic flows, routing topology, security events, and user statistics (to
name a few), and makes this available as services used both by Juniper-developed applications and applications from
third-party vendors leveraging the Junos Space SDKs/APIs. Junos Space applications are designed to be collaborative.
They share common services such as inventory and configuration management, job scheduling, HA/clustering, etc.,
and they use a common security framework. This enables users to optimize scale, security, and resources across their
application environment. The Junos Space application portfolio currently includes Ethernet Design (rapid endpoint and switch port configuration in campus/data center environments), Security Design (fast, accurate deployment of security devices and services), Network Activate (point-and-click setup and management of VPLS services), Route Insight (visibility, troubleshooting, and change modeling for L3 MPLS/IP networks), Virtual Control (end-to-end visibility into, and control over, both virtual and physical networks from a single management screen), and Service Now (automated case and incident management).

Data Center Consolidation Trigger Event


Numerous large enterprises are consolidating their geographically distributed data centers into mega data
centers to take advantage of cost benefits, economies of scale, and increased reliability, and to fully exploit the
latest virtualization technologies. According to industry research, more than 50% of the companies surveyed had
consolidated data centers within the last year and even more planned to consolidate in the upcoming year. Data center
consolidation can involve multiple insertion points into access and core aggregation layers as well as consolidation of
security services.

Best Practices in Designing for the Access Layer Insertion Point

We have already discussed the recommended best practices for the access layer insertion point. In this section, we will
highlight key best practice/design considerations for the other insertion points related to the consolidation trigger event.


Best Practices: Designing the Upgraded Aggregation/Core Layer


The insertion point for an enterprise seeking to initially retain its existing three-tier design may be at the current design’s
aggregation layer. In this case, the recommended Juniper aggregation switch, typically an EX8200 line or MX Series
platform, should be provisioned in anticipation of it eventually becoming the collapsed core/aggregation layer switch.

If the upgrade is focused on the aggregation layer in a three-tier design (to approach transformation of the data
center network architecture in an incremental way, as just described), the most typical scenario is for the aggregation
switches to be installed as part of the L2 topology of the data center network, extending the size of the L2 domains
within the data center and interfacing them to the organization’s L3 routed infrastructure that typically begins at the
core tier, one tier “up” in the design from the aggregation tier.

In this case, the key design considerations for the upgraded aggregation tier include:

• Ensuring sufficient link capacity for the necessary uplinks from the access tier, resiliency between nodes in the
aggregation tier, and any required uplinks between the aggregation and core tiers in the network

• Supporting the appropriate VLAN, LAG, and STP configurations within the L2 domains

• Incorporating the correct configurations for access to the L3 routed infrastructure at the core tier, especially knowledge of the default gateway in a VRRP (Virtual Router Redundancy Protocol) environment (see the configuration sketch after this list)

• Ensuring continuity in QoS and policy filter configurations appropriate to the applications and user groups supported
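A minimal sketch of the VRRP default gateway consideration above, using a routed VLAN interface (RVI) on one member of an EX8200 aggregation pair, might look like the following. The VLAN ID, addresses, priority, and group number are hypothetical, the peer switch would carry a mirror-image configuration with a lower priority, and the # lines are annotations only.

# L2 VLAN and its routed VLAN interface (RVI)
set vlans server-vlan-10 vlan-id 10
set vlans server-vlan-10 l3-interface vlan.10

# VRRP group presenting 10.1.10.1 as the servers' default gateway
set interfaces vlan unit 10 family inet address 10.1.10.2/24 vrrp-group 10 virtual-address 10.1.10.1
set interfaces vlan unit 10 family inet address 10.1.10.2/24 vrrp-group 10 priority 200
set interfaces vlan unit 10 family inet address 10.1.10.2/24 vrrp-group 10 preempt
set interfaces vlan unit 10 family inet address 10.1.10.2/24 vrrp-group 10 accept-data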

At a later point of evolution (or perhaps at the initial installation, depending on your requirements), it may be that these
nodes perform an integrated core and aggregation function in a two-tier network design. This would be the case if it suited
the organization’s economic and operational needs, and could be accomplished in a well managed insertion/upgrade.

In such a case, the new “consolidated core” of the network would most typically perform both L2 and L3 functions. The
L2 portions would, at a minimum, include the functions described above. They could also extend or “stretch” the L2
domains in certain cases to accommodate functions like application mirroring or live migration of workloads in a virtual
server operation between parts of the installation in other areas of the data center or even in other data centers. We
describe design considerations for this later in this guide.

In addition to L2 functions, this consolidated core will provide L3 routing capabilities in most cases. As a baseline, the
L3 routing capabilities to be included are:

• Delivering a resilient interface of the routed infrastructure to the L2 access portion of the data center network. This
is likely to include VRRP default gateway capabilities. It is also likely to include one form or another of an integrated
routing/bridging interface in the nodes such as routed VLAN interfaces (RVIs), or integrated routing and bridging
interfaces (IRBs), to provide transition points between the L2 and L3 forwarding domains within the nodes.

• Resilient, HA interfaces to adjacent routing nodes, typically at the edge of the data center network. Such high
availability functions can include nonstop active routing (NSR), GRES, Bidirectional Forwarding Detection (BFD),
and even MPLS fast reroute depending on the functionality and configuration of the routing services in the site.
For definitions of these terms, please refer to the section on Node-Link Resiliency. MPLS fast reroute is a local
restoration network resiliency mechanism where each path in MPLS is protected by a backup path which originates
at the node immediately upstream.

• Incorporation of the appropriate policy filters at the core tier for enforcement of QoS, routing area optimization,
and security objectives for the organization. On the QoS level, this may involve the use of matching Differentiated
Services code points (DSCPs) and MPLS traffic engineering designs with the rest of the routed infrastructure to
which the core is adjacent at the edge, as well as matching priorities with the 802.1p settings being used in the L2
infrastructure in the access tier. On the security side, it may include stateless filters that forward selected traffic to
security devices such as firewall/IDP platforms at the core of the data center to enforce appropriate protections for
the applications and user groups supported by the data center (see the next section of the core tier best practices
for a complementary discussion of the firewall/IDP part of the design).
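The QoS point in the bullet above can be illustrated with a brief class-of-service sketch that classifies traffic arriving from the routed core on DSCP and rewrites 802.1p bits toward the L2 access tier so that priorities match end to end. The classifier, rewrite-rule, and interface names are hypothetical, the code points shown are common EF/AF values, and lines starting with # are explanatory notes.

# DSCP classification on the core-facing interface
set class-of-service classifiers dscp core-in forwarding-class expedited-forwarding loss-priority low code-points ef
set class-of-service classifiers dscp core-in forwarding-class assured-forwarding loss-priority low code-points af31
set class-of-service interfaces xe-0/0/0 unit 0 classifiers dscp core-in

# 802.1p rewrite toward the access tier
set class-of-service rewrite-rules ieee-802.1 access-out forwarding-class expedited-forwarding loss-priority low code-point 101
set class-of-service rewrite-rules ieee-802.1 access-out forwarding-class assured-forwarding loss-priority low code-point 011
set class-of-service interfaces ae1 unit 0 rewrite-rules ieee-802.1 access-out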

In some cases, the core design may include use of VPN technology—most likely VPLS and MPLS—to provide
differentiated handling of traffic belonging to different applications and user communities, as well as to provide
special networking functions between various data center areas, and between data centers and other parts of the
organization’s network.


The most common case will be use of VPLS to provide “stretched” VLANs between areas of a large data center
network, or between multiple distant data centers using VPLS (over MPLS) to create a transparent extension of the
LAN to support nonstop application services (transparent failovers), transaction mirroring, database backups, and
dynamic management of virtual server workloads across multiple data center sites.

In these cases, the core nodes will include VPLS instances matching the L2 topology and VLAN configurations required
by the applications, as well as the appropriate implementation of MPLS between the core nodes and the rest of the
organization’s routed IP/MPLS network. This design includes ensuring highly available and resilient access from the L2 access tier into the “elastic” L2 infrastructure enabled by VPLS in the core, as well as use of appropriate traffic engineering and HA features of MPLS to deliver the proper QoS and degree of availability for the traffic carried on the transparent VLAN network. Details on these design points, including how multiple sites are incorporated into the data center network design using MPLS, are covered later in this guide in the Six Process Steps for Migrating to MPLS section.

Best Practices: Upgraded Security Services in the Core


Frequently a data center consolidation requires consolidating previously separate and siloed security appliances into
a more efficient security tier integrated into the L2 and L3 infrastructures at the core network layer. Here we describe
design considerations for accomplishing that integration of security services in the core.

• All security appliances should be consolidated and virtualized into a single pool of security services with a platform
such as the SRX Series Services Gateways.

• To connect to and protect all core data center network domains, the virtual appliance tier should optimally
participate in the interior gateway routing protocols within the data center network.

• Security zones should be defined to apply granular and logically precise protection for network partitions and virtualized resources wherever they reside in the network, above and beyond the granularity of traditional perimeter defenses (a brief zone and policy sketch follows this list).

• The security tier should support the performance required by the data center’s applications and be able to inspect
information up to L7 at line rate. A powerful application decoder is necessary on top of the forwarding, firewall
filtering, and IDP signature detection also applied to the designated traffic streams. Including this range of logic
modularly in a high-performance security architecture for the core helps reduce the number of devices in the network
and increase overall efficiency.

• Scalable, strong access controls for remote access devices and universal access control should be employed to
ensure that only those with an organizational need can access resources at the appropriate level. Integration of
secure access with unified policies and automation using coordinated threat control not only improves security
strength but also increases efficiency and productivity of applications within the data center.

• Finally, incorporation of virtual appliances such as virtual firewalls and endpoint verification servers into the data
center’s security design in a way that integrates protection for the virtual servers, desktops, and related network
transports provides an extension of the common security fabric into all of the resources the IT team needs to protect.
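The zone-based approach described in this list can be sketched with a simple SRX Series example that places two server segments in separate zones and permits only one application between them, with the implicit default deny covering everything else. The zone names, interfaces, address entry, custom application, and port are hypothetical placeholders, and the # lines are annotations only.

# Assign L3 interfaces to zones
set security zones security-zone web-tier interfaces xe-1/0/0.10
set security zones security-zone db-tier interfaces xe-1/0/1.20

# Address book entry for the database servers
set security zones security-zone db-tier address-book address db-servers 10.1.20.0/24

# Custom application for the database service (placeholder port)
set applications application app-db protocol tcp destination-port 3306

# Permit only the database application from the web tier to the database tier, and log it
set security policies from-zone web-tier to-zone db-tier policy web-to-db match source-address any
set security policies from-zone web-tier to-zone db-tier policy web-to-db match destination-address db-servers
set security policies from-zone web-tier to-zone db-tier policy web-to-db match application app-db
set security policies from-zone web-tier to-zone db-tier policy web-to-db then permit
set security policies from-zone web-tier to-zone db-tier policy web-to-db then log session-init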

Aggregation/Core Insertion Point Installation Tasks

Preinstallation Tasks

The tasks described in this section pertain to a consolidation within an existing data center that has the required space
and power to support consolidation. Alternatively, a consolidation may take place in a new facility, sometimes referred
to as a “greenfield” installation. That scenario would follow the best practices outlined in Juniper’s Cloud-Ready Data
Center Reference Architecture: www.juniper.net/us/en/solutions/enterprise/data-center/simplify/#literature.

The steps outlined here also apply to a case in which the organization wants to stay with its existing three-tier design,
at least for the initial steps in the process. In such a case, deployment and provisioning should be done leaving
flexibility to move to a two-tier design at some future date.


The scenario for transitioning the core data center network in a design upgrade triggered by a consolidation project
should encompass the following kinds of tasks, suited to each organization’s case:

• Ensure that the power, cooling, airflow, physical rack space, and cabling required for any new equipment has
been designed, ordered, and installed (however your organization breaks down these responsibilities between
departments, suppliers, and integrators).

• Size the new/upgraded switching platforms using your organization’s policy for additional performance and capacity
headroom for future growth.

• Design should include a pair of switches that can eventually serve as the new collapsed core/aggregation layer. Initial
design can be tuned to the exact role the switches will perform. For example, if they will be focused on pure core
functions in the initial phase, they can be focused on capacity/functionality required for that role (e.g., IGP policies
and area design appropriate to a core data center switch). If they will be focused on pure aggregation functions
initially, design can focus on L2 and L3 interface behaviors appropriate to that role. Or, if they will be performing a
blended core/aggregation role (as would be appropriate in many two-tier data center networks), a mix of L2 and L3
functions can be designed to fit the network’s requirements.

• If the new switches are aggregating existing access layer switches, industry standard protocols (such as 802.1D,
802.1Q, 802.1s, 802.3ad, etc.) should be used to ensure interoperability.

• Configuration checklist should include:

- - IGP and EGP requirements such as area border roles, routing metrics and policies, and possible route
redistributions

- - L2/L3 domain demarcations

›› VLAN to VRF mapping

›› Virtualized service support requirements (VPN/VRF)

›› Default gateway/root bridge mapping

›› Hot Standby Router Protocol (HSRP) to Virtual Router Redundancy Protocol (VRRP) mappings

- - Uplink specifications

›› Density

›› Speeds (number of GbE and 10GbE links)

›› Link aggregations

›› Oversubscription ratios

- - Scaling requirements for MAC addresses

- - Firewall filters (from IOS ACL mapping)

- - QoS policies

- - Multicast topology and performance

- - Audit/compliance requirements should be clarified and any logging or statistics collection functions designed in.

- - IOS configurations should be mapped to Junos OS as outlined in the section on IOS to Junos OS translation tools.

• As noted earlier, a PoC lab could be set up to test feature consistency and implementation in the new Juniper
infrastructure. Testing could include:

- - Interface connections

- - Trunking and VLAN mapping

- - STP interoperability with any existing switches

- - QoS policy (classification marking; rate limiting)

- - Multicast


Installation Tasks

Refer to Figure 15 when considering the tasks described in this section:

Figure 15: Aggregation/core layer insertion point (EX8200/EX8216 “West” and “East” switches inserted alongside the legacy “West” and “East” aggregation layer switches and security appliances, above the legacy access layer)

• If the new aggregation layer switches are part of a new POD that includes new access layer switches, the installation
should be straightforward.

• If the new aggregation layer switches are part of a replacement for existing legacy switches, it is important to create
a fallback position in the event of any unforeseen issues. Typically, EX8200 line switches would be provisioned and
deployed as a pair, replacing the legacy switches.

• Once the EX8200 line switches are installed, they should be connected to the existing core layer. Again, this scenario
is appropriate in cases where the organization initially will maintain its existing three-tier architecture. Appropriate
IGP and EGP configurations as identified in the preinstallation configuration checklist would have been provisioned
and interoperability verified by checking neighbor state and forwarding tables.

• An initial access layer switch’s uplinks would then be connected to the EX8200 line switches and connectivity
verified as outlined in the post installation checklist. Once this baseline is established, additional access layer
switches would then be migrated to the EX8200 line.

Post Installation

As previously noted, after physical installation is complete, procedures similar to those employed when installing a new aggregation switch in a single-vendor environment can be used to verify successful operation. As a best practice, verify:

• Access to the new switches via CLI or network management tools.

• Interface status for server connections and uplinks via CLI or network management tools.

• L2/STP status between network tiers.

• QoS consistency between network tiers.

• Multicast state (if applicable).

• Traffic passing via ping tests.

• Application connectivity and flows with statistics or end user tests (or both).


Consolidating and Virtualizing Security Services in the Data Center: Installation Tasks
In addition to cyber theft and increasing malware levels, organizations must guard against new vulnerabilities
introduced by data center technologies themselves. To date, security in the data center has been applied primarily
at the perimeter and server levels. However, this approach isn’t comprehensive enough to protect information and
resources in new system architectures. In traditional data center models, applications, compute resources, and
networks have been tightly coupled, with all communications gated by security devices at key choke points. However,
technologies such as server virtualization and Web services eliminate this coupling and create a mesh of interactions between systems that introduces subtle and significant new security risks within the interior of the data center. For a
complete discussion of security challenges in building cloud-ready, next-generation data centers, refer to the white
paper, “Security Considerations for Cloud-Ready Data Center”: www.juniper.net/us/en/local/pdf/implementation-
guides/8010046-en.pdf.

A key requirement for this insertion point is for security services platforms to provide the performance, scalability, and
traffic visibility needed to meet the increased demands of a consolidated data center. Enterprises deploying platforms
which do not offer the performance and scalability of Juniper Networks SRX Series Services Gateways and their associated
management applications are faced with a complex appliance sprawl and management challenge, where numerous
appliances and tools are needed to meet requirements. This is a more costly, less efficient, and less scalable approach.

Preinstallation Tasks for Security Consolidation and Virtualization

Figure 16: SRX Series platform for security consolidation (an SRX5800 paired with EX82XX switches replacing legacy security appliances)

• Ensure that the appropriate power, cooling, airflow, physical rack space, and cabling required to support the new
equipment have been ordered and installed.

• Ensure that the security tier is sized to meet the organization’s requirements for capacity headroom for future growth.

• Define and provision the routing/switching infrastructure first (see the prior section). This establishes the L3/L2 foundation domains upon which the security “zones” enforced by the SRX Series will be built. The SRX Series supports a pool of virtualized security services that can be applied to any application flow traversing the data center network. Setting up the network with this foundation of subnets and VLANs feeding the dynamic security enforcement point segments the data center resources properly and identifies what is being protected and what level of protection is needed. With SOA, for example, there are numerous data flows between servers within the data center and across the perimeter, but security is often insufficient for these flows. Policies based on role/function, applications, business goals, or regulatory requirements can be enforced using a mix of VLAN, routing, and security zone policies, enabling the SRX Series to apply the appropriate security posture to each flow in the network.

• The performance and scalability requirements should be scoped. SRX Series devices can be paired together in a cluster to scale to 120 Gbps of firewall throughput while also providing HA.

• Virtual machine security requirements should also be defined. Juniper’s vGW Virtual Gateway is hypervisor neutral,
eliminating VM security blind spots. For more information on Juniper’s virtual firewall solution, refer to Chapter 2 (vGW
Virtual Gateway).


• In the preinstallation phase, security policies must be developed. This typically takes time and can be complex to
coordinate. Juniper Professional Services can be used as a resource to help analyze and optimize security policies at all
enforcement points. The full suite of Juniper Networks Professional Services offerings can be found at:
www.juniper.net/us/en/products-services/consulting-services.

• Establish a migration plan, identifying a time line and key migration points, if all appliances cannot be migrated in a
flash cut.

• As with the other insertion points, PoC testing can be done and could include:

- - Establishing the size of the target rule base to be used post conversion

- - Checking the efficacy of the zone definitions

- - Determining the effectiveness of the IPS controls

- - Determining the suitability and implementation of the access controls to be used

Installation Tasks

As with the aggregation/core insertion point, it is important to have a fallback position to existing appliances in the
event of any operational issues. The current firewall appliances should be kept on hot standby. The key to a successful
migration is to have applications identified for validation and to have a clear test plan for success criteria. There are
three typical options for migration, with option 1 being the one most commonly used.

Migration Test Plan (Option 1)

• Test failover by failing the master firewall (legacy vendor) to the backup (this confirms that HA works and the other
devices involved in the path are working as expected).

• Replace the primary master, which was just manually failed, with the Juniper firewall. The traffic should still be
flowing through the secondary, which is the legacy vendor firewall.

• Turn off the virtual IP (VIP) address or bring the interface down on the backup (legacy vendor firewall) and force
everything through the new Juniper firewall.

• A longer troubleshooting window helps to ensure that the switchover has happened successfully. Also, initially turn off TCP SYN checks (syn-checks) so that already established sessions can still be processed, since the TCP handshake occurred on the legacy firewall. This ensures that the newly inserted Juniper firewall does not drop all active sessions as it takes over.
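On SRX Series gateways, the SYN and sequence checks mentioned in the last bullet can be relaxed during cutover and restored once legacy sessions have aged out. This is only a sketch of the relevant knobs, to be applied and removed within a planned change window; the # lines are comments, not commands.

# Temporarily accept mid-stream TCP sessions that were established through the legacy firewall
set security flow tcp-session no-syn-check
set security flow tcp-session no-sequence-check

# After the troubleshooting window, restore default TCP session validation
delete security flow tcp-session no-syn-check
delete security flow tcp-session no-sequence-check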

Migration Test Plan (Option 2)

• This is essentially a flash cut option in which an alternate IP address is configured for the new firewall, and the routers and hosts are then pointed to the new firewall. If there is an issue, gateways and hosts can be provisioned to fall back to the legacy firewalls. With this option, organizations will sometimes choose to leave IPsec VPNs or other terminations on their old legacy firewalls and gradually migrate them over a period of time.

Migration Test Plan (Option 3)

• This option is typically used by financial organizations due to the sensitive nature of their applications.

• A Switched Port Analyzer (SPAN) session will be set up on the relevant switches with traffic sent to the Juniper
firewalls, where traffic is analyzed and session tables are built. This provides a clear understanding of traffic
patterns and provides more insight into the applications being run. This option also determines whether there is any
interference due to filter policies or IPS, and it creates a more robust test and cutover planning scenario. This option
typically takes more time than the other options, so organizations typically prefer to go with option 1. Again, this is a
more common option for companies in the financial sector.


Post Installation

As previously noted, procedures similar to those used when installing new security appliances in a single-vendor environment can be used after physical installation is complete. As a best practice (sample verification commands follow this list):

• Verify access via CLI or network management tools.

• Verify interface status via CLI or network management tools.

• Verify that traffic is passing through the platform.

• Verify that rules are operational and behaving as they should.

• Confirm that Application Layer Gateway (ALG) policies/IPS are stopping anomalous or illegal traffic in the
application layer, while passing permitted traffic.

• Confirm that security platforms are reporting appropriately to a centralized logging or SIEM platform.
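A few Junos OS operational commands that support these checks on an SRX Series platform are sketched below. The zone and policy names reuse the hypothetical examples from the preinstallation discussion, and lines beginning with # are descriptive notes.

# Platform, cluster (if deployed as a chassis cluster), and interface state
show chassis cluster status
show interfaces terse

# Zone and policy verification
show security zones
show security policies
show security policies from-zone web-tier to-zone db-tier detail

# Confirm that traffic is creating sessions and that IDP is active
show security flow session summary
show security idp status

# Confirm that events are being logged locally (and forwarded to the SIEM collector)
show log messages | last 20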

Business Continuity and Workload Mobility Trigger Events


Sometimes the need to improve the availability of systems to external and internal users drives a critical initiative to enhance the availability of data center infrastructures, either within an individual data center or between sets of data centers such as primary, backup, and distributed sites. The goal is almost always to preserve a business's value to its stakeholders, and achieving it often requires upgrades or extensions to critical infrastructure areas.

Business continuity or disaster recovery sites can be set up as active/active, warm-standby, or cold-standby
configurations. A cold-standby site could involve an agreement with a provider such as SunGard, in which backup tapes are trucked to a SunGard backup data center facility. A warm-standby site could interconnect primary and
standby data centers for resumption of processing after a certain amount of backup/recovery system startup has
occurred. A hot-standby, active/active configuration involves continuously available services running in each site that
allow transparent switching between “primary” and “secondary” as needed, driven by planned or unplanned outages. A
large organization may have instances of each.

Business continuity and workload mobility are tightly coupled. Business continuity or high availability disaster recovery
(HADR) often involves provisioning between two or more data centers. The design could involve replicating an entire
data center (essentially a greenfield installation), or it could involve adding capacity to one or
more existing data centers. The specific insertion points could be at any of the tiers of an existing three-tier design. We
have already outlined best practices and specific installation tasks for several of these network insertion points in this
chapter. Once provisioning for the disaster recovery data center has been done, users should be able to connect into
any of the data centers transparently.

Since we have already described the installation tasks for access and aggregation/core switching and services tiers
of the new data center network, we won’t repeat those here. The same procedures can be used to enhance the data
center infrastructures that will take part in the HADR system. To the extent that MPLS and VPLS are involved in the
configuration between centers, we will address the steps associated with that part of the network in the section on
workload mobility further on in this guide.

Best Practices Design for Business Continuity and HADR Systems


• Business continuity is enabled using a mix of device-level, link-level, and network-level resiliency within and between
an organization’s data center sites. In most cases, it also involves application and host system resiliency capabilities
that need to interwork seamlessly with the network to achieve continuity across multiple sites.

• In this section, we first concentrate on the network-level design within the data center sites.

• In the following section (on workload mobility), we also describe capabilities that extend continuity to the
network supporting multiple data center sites and to certain considerations around host and application resiliency
interworking with the network.


• Link-level redundancy in data center networks can be implemented with the following network technologies:

- - Link Aggregation Group (LAG)

- - Redundant Trunk Groups (RTGs)

- - Spanning Tree Protocol (STP) and its variations

- - Bidirectional Forwarding Detection (BFD)

- - MPLS

We have already discussed LAG, RTG, and STP earlier in this guide. BFD is rapidly gaining popularity in data center deployments because it is a simple, low-layer protocol that aids rapid network convergence (detection intervals of 30 to 300 ms, resulting in sub-second convergence). BFD uses a hello mechanism between two devices, and the communication can run across directly connected links or across a virtualized communications path such as MPLS.
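As a small illustration, BFD is typically enabled under the routing protocol that should react to the failure. The sketch below attaches BFD to an OSPF adjacency on an inter-switch link; the interface name and the 300 ms interval are placeholders, and the # lines are annotations only.

# Run BFD on the OSPF adjacency over the inter-switch link
set protocols ospf area 0.0.0.0 interface ae0.0 bfd-liveness-detection minimum-interval 300
set protocols ospf area 0.0.0.0 interface ae0.0 bfd-liveness-detection multiplier 3

# After committing, session state can be checked from operational mode with: show bfd session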

• Node level resiliency can be achieved using the following technologies:

- - Graceful Routing Engine switchover (GRES)

- - Graceful restart

- - Nonstop active routing (NSR) and nonstop bridging (NSB)

GRES is a feature used to handle planned and unplanned platform restarts gracefully, without any disruptions, by
deploying a redundant Routing Engine in a chassis. The Routing Engines synchronize and share their forwarding state
and configuration. Once synchronized, if the primary Routing Engine fails due to a hardware or software problem, the
secondary Routing Engine comes online immediately, resulting in minimal traffic forwarding interruption.

Rather than being a feature, graceful restart is a standards-based protocol that relies on routing neighbors to
orchestrate and help a restarting router to continue forwarding. There is no disruption in control or in the forwarding
path when the graceful restart node and its neighbors are participating fully and are employing the standard
procedures.

NSR builds on GRES and implements a higher level of synchronization between the Routing Engines. In addition to
the synchronizations and checkpoints between the Routing Engines that GRES achieves, NSR employs additional
protective steps and results in no disruption to the control or data planes, hiding the failure from the rest of the
network. And, it does not require any help from its neighbors to achieve these results. Note that graceful restart and
NSR are mutually exclusive—they are two different means to achieve the same high availability goal.

NSB is similar to GRES and preserves interface and Layer 2 protocol information. In the event of a planned or
unplanned disruption in the primary Routing Engine, forwarding and bridging are continued during the switchover
resulting in minimal packet loss.
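On a platform with redundant Routing Engines, the features above map to a handful of configuration statements. The sketch below enables GRES with synchronized commits plus NSR and NSB; graceful restart is deliberately not configured, since it is mutually exclusive with NSR. Lines beginning with # are comments.

# Graceful Routing Engine switchover with synchronized commits
set chassis redundancy graceful-switchover
set system commit synchronize

# Nonstop active routing and nonstop bridging on top of GRES
set routing-options nonstop-routing
set protocols layer2-control nonstop-bridging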

Node-level HA includes many aspects, starting with the architecture of the node itself and ending with the protocols
that the node uses to network with other components. Paralleling the Open Systems Interconnection (OSI) Reference
Model, you can view the network infrastructure components starting from the physical layer, which is the node’s
internal architecture and protocols, and ending with the upper tiers, which would include components such as OSPF,
IS-IS, BGP, MPLS, GRES, NSR, etc. No single component provides HA. It is all of the components working together
which creates one architecture and results in high availability.

For more detailed information on HA features on Juniper platforms, refer to: www.juniper.net/techpubs/en_US/
junos10.1/information-products/topic-collections/swconfig-high-availability/frameset.html.

To complement the node-level availability mechanisms highlighted above, devices and systems deployed at critical
points in the data center design should include redundancy of important common equipment such as power supplies,
fans, and Routing Engines at the node levels, so that the procedures mentioned above can have a stable hardware
environment to build upon.

In addition, the software/firmware in these devices should be based on a modular architecture to prevent software
failures or upgrade events from impacting the entire device. There should also be a clean separation between control
plane and data processes to ensure system availability. Junos OS is an example of a multitasking OS that operates in
this manner, ensuring that a failure in one process doesn’t impact any others.


Best Practices Design to Support Workload Mobility Within and Between Data Centers
In current and future data centers, the application workloads will increasingly be handled on an infrastructure of
extensively virtualized servers, storage, and supporting network infrastructure. In these environments, operations teams
will frequently need to move workloads from test into production, and distribute the loads among production resources
based on performance, time of day, and other considerations. This kind of workload relocation among servers may
occur between servers within the same rack, within the same data center, and increasingly, between multiple data
centers depending on the organization’s size, support for cloud computing services for “bursting overloads,” and other
similar considerations.

In general, we refer to this process of relocating virtual machines and their supporting resources as “workload mobility.”
Workload mobility plays an important part in maintaining business continuity and availability of services, and, as such,
is discussed within this section of the guide.

Workload mobility can be deployed in two different ways, as shown in Figure 17.

Figure 17: Workload mobility alternatives (rack to rack: a Layer 2 domain extended across racks and across the data center with Virtual Chassis; cloud to cloud: a Layer 2 domain extended between data centers across a virtual private LAN using VPLS)

With Juniper’s Virtual Chassis technology, a Layer 2 domain can be easily extended across racks or rows, allowing
server administrators to move virtual machines quickly and easily within a data center.

When moving workloads between data center sites, consideration should be given to the latency requirements
of applications spanning the data center sites to be sure that the workload moves can meet application needs.
If it is important to support the movement of workloads between data centers, the L2 domain supporting the VM
infrastructure can be extended, or stretched, across data centers in two different ways:

• If the data centers are less than 100 km apart and the sites are directly connected physically, a Virtual Chassis can be extended across the sites using the 10-Gigabit chassis extension ports, supporting configurations up to the maximum of 10 switches in a single Virtual Chassis (a configuration sketch follows this list). Note that directly connected means that ports are directly connected to one another, i.e., a single fiber link or an L1 signal repeater connects one device to the other.

• If the connection between data centers is at a distance requiring services of a WAN, MPLS can be used to create
transparent virtual private LANs at very high scale using a Juniper infrastructure.
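For the first option, extending a Virtual Chassis between directly connected sites uses the same mechanisms as extending one across racks: uplink ports are converted into Virtual Chassis ports. The member numbers, serial numbers, PIC slot, and port numbers below are placeholders, the vc-port commands are operational-mode commands run on each switch before cabling the sites together, and the # lines are annotations only.

# Preprovision member roles (serial numbers are placeholders)
set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine serial-number AA0123456789
set virtual-chassis member 1 role routing-engine serial-number AA9876543210

# Convert a 10GbE uplink on each member into a Virtual Chassis extension port (operational mode)
request virtual-chassis vc-port set pic-slot 1 port 0
request virtual-chassis vc-port set pic-slot 1 port 1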


At the level of the data center network core and the transparent virtual WAN that an organization can use to support
its business continuity and workload mobility goals, MPLS provides a number of key advantages over any other
alternative:

• MPLS virtualization enables the physical network to be run as many separate virtual networks. The benefits include
cost savings, improved privacy through traffic segmentation, improved end user experience with traffic engineering
and QoS, and improved resiliency with functionality such as MPLS fast reroute and BFD. This can be done in a
completely private network context (e.g., the enterprise owns the entire infrastructure), or it can be achieved through
the interworking of the organization’s private data center and WAN infrastructures with an appropriately deployed
carrier service.

• VPLS provides Ethernet-based point-to-point, point-to-multipoint, and multipoint-to-multipoint (full mesh) transparent LAN services over an IP/MPLS infrastructure. It allows geographically dispersed LANs to connect across an MPLS backbone, so that connected nodes (such as servers) behave as though they are on the same Ethernet LAN. VPLS thus provides an efficient and cost-effective method for communicating at L2 across two or more data center sites. This can be useful for transaction mirroring in active/active or other backup configurations, and it is necessary for supporting workload mobility and migration of virtual machines between locations over a WAN.

• MPLS can provide private L3VPN networks between data center sites that share the same L3 infrastructure. A
composite, virtualized L2 and L3 infrastructure can thus be realized. Very useful security properties can be achieved
in such a design as well. For example, by mapping L3VPNs to virtual security zones in an advanced firewall such as
the SRX Series, many security policies can be selectively layered on the traffic.

Also in support of business continuity, MPLS’ traffic engineering (TE) and fast reroute capabilities combine
sophisticated QoS and resiliency features into a multiservice packet core for superior performance and economics.
TE could be used to support real-time data replication and transaction mirroring, along with service-level agreement
(SLA) protection for real-time communications such as video conferencing and collaboration services. Fast reroute
delivers rapid path protection in the packet-based network without requiring redundant investments in SONET or SDH
level services (e.g., superior performance for lower cost).

For workload mobility that involves extending a Layer 2 domain across data centers to support applications such as VMware VMotion, archiving, backup, and mirroring, L2VPNs using VPLS can be used between the data centers. VPLS allows the connected data centers to share the same L2 domain while maintaining the bandwidth required for backup purposes, ensuring that other production applications are not overburdened.

Best Practices for Incorporating MPLS/VPLS in the Data Center Network Design
Current L2/L3 switching technologies designed for the LAN do not scale well while providing the levels of rerouting, availability, security, QoS, and multicast capability needed to achieve the required performance and availability. As a result,
when redesigning or upgrading the data center, an upgrade to MPLS is frequently appropriate and justified to meet
business operational demands and cost constraints. MPLS often simplifies the network for the data center, removing
costly network equipment and potential failure points while providing complete network redundancy and fast rerouting.

When fine grained QoS is required with traffic engineering for the data center, RSVP should be used to establish
bandwidth reservations based upon priorities, available bandwidth, and server performance capacities. MPLS-based
TE is a tool made available to the data center network administrators which is not presently available in common
IP networks. Furthermore, MPLS virtualization capabilities can be leveraged to segment and secure server access,
becoming a very important part of maintaining a secure data center environment.

For this section of the Data Center LAN Migration Guide, the prior construct of best practice, preinstall, install, and post
install is going to be combined into six process steps for migrating to MPLS, keeping in mind that VPLS runs over an IP/
MPLS network.


Switching across data centers using VPLS is depicted below in Figure 18.

Figure 18: Switching across data centers using VPLS (an M Series/MX Series core interconnecting EX4200/EX4500 Virtual Chassis in Data Center 1 and Data Center 2, with application VLANs 10 and 20 mirrored across the sites)

Six Process Steps for Migrating to MPLS


The following approach using a phased series of steps is one we have found useful in many enterprises. However, there
may be specific circumstances which could dictate a different approach in a given case.

Step 1: Upgrade the IP network to MPLS-capable platforms, yet continue to run it as an IP network. In step one,
upgrade the routers connecting the data centers to routers capable of running MPLS, yet configure the network as an IP
network without MPLS. Use this time to verify a stable and properly performing inter data center connection. This will
provide the opportunity to have the MPLS network in place and to be sure routers are configured and working correctly
to support IP connectivity. If you’re presently running Enhanced IGRP (EIGRP), use this opportunity to migrate to OSPF or one of the other L3 protocols that will perform better with MPLS. Depending upon how many data centers will be interconnected, once you’ve migrated to OSPF and/or IS-IS, it is a good time to enable BGP as well. BGP can be used
for automatic MPLS label distribution. Juniper has multiple sources of design guidelines and practical techniques for
accomplishing these tasks, which can be delivered in either document-based or engineering professional services
modes. Please refer to the Additional Resources sections in Chapter 5 for specific URLs.
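A minimal Junos OS sketch of this first step, running plain IP between two data center edge routers, might look like the following. The interface, area, AS number, and loopback addresses are hypothetical placeholders, and the # lines are annotations only.

# OSPF as the IGP between the data center edge routers
set interfaces xe-0/1/0 unit 0 family inet address 192.168.100.1/30
set protocols ospf area 0.0.0.0 interface xe-0/1/0.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive

# Internal BGP between loopbacks, ready to carry additional address families later
set routing-options autonomous-system 65000
set protocols bgp group dc-ibgp type internal
set protocols bgp group dc-ibgp local-address 10.255.0.1
set protocols bgp group dc-ibgp neighbor 10.255.0.2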


Step 2: Build the MPLS layer. Once you have migrated to an MPLS capable network and tested and verified its
connectivity and performance, activate the MPLS overlay and build label-switched paths (LSPs) to reach other data
centers. Label distribution is automated with the use of LDP and RSVP with extensions, to support creation and
maintenance of LSPs and to create bandwidth reservations on LSPs (RFC 3209). BGP can also be used to support
label distribution at the customer’s choice. The choice of protocol for label distribution on the network depends on the
needs of the organization and applications supported by the network. If traffic engineering or fast reroute is required on the network, you must use RSVP with extensions for MPLS label distribution. Indeed, the decision of whether to traffic engineer the network or require fast reroute frequently determines the choice between LDP and RSVP for MPLS label distribution.
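Building the MPLS layer on top of the IP foundation from step 1 can then be sketched as follows, with RSVP chosen here on the assumption that traffic engineering and fast reroute will be required. The interface name, LSP name, and remote loopback address are placeholders, and the # lines are comments.

# Enable MPLS and RSVP on the core-facing interface
set interfaces xe-0/1/0 unit 0 family mpls
set protocols mpls interface xe-0/1/0.0
set protocols rsvp interface xe-0/1/0.0

# Advertise traffic engineering information in the IGP
set protocols ospf traffic-engineering

# Label-switched path to the remote data center router, with fast reroute protection
set protocols mpls label-switched-path dc1-to-dc2 to 10.255.0.2
set protocols mpls label-switched-path dc1-to-dc2 fast-reroute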

Step 3: Configure MPLS VPNs. MPLS VPNs can segregate traffic based on departments, groups, or users, as well as
by applications or any combination of user group and application. Let’s take a step back and look at why we call MPLS
virtualized networks “VPNs.” First, they are networks because they provide connectivity between separately defined
locations. They are private because they have the same properties and guarantees as a private network in terms of
network operations and in terms of traffic forwarding. And lastly, they are virtual because they may use the same
transport links and routers to provide these separated transport services. Since each network to be converged onto
the newly built network has its own set of QoS, security, and policy requirements, you will want to define MPLS-based
VPNs that map to the legacy networks already built. MPLS VPNs can be defined by:

Department, business unit, or other function: Where there is a logical separation of traffic that goes to a more
granular level than the network, perhaps down to the department or business unit, application, or specific security
requirement level, you will want to define VPNs on the MPLS network, each to support the logical separation
required for a unique QoS, security, and application support combination.

Service requirements: In the context of this Data Center LAN Migration Guide, VPLS makes it appear as though
there is a large LAN extended across the data center(s). VPLS can be used for IP services or it can be used to
interconnect with a VPLS service provider to seamlessly network data center(s) across the cloud. IP QoS in the
LAN can be carried over the VPLS service with proper forwarding equivalence class (FEC) mapping and VPN
configuration. If connecting to a service provider’s VPLS service, you will either need to collocate with the service
provider or leverage a metro Ethernet service, as VPLS requires an Ethernet hand-off from the enterprise to the
service provider.

QoS needs: Many existing applications within your enterprise may run on separate networks today. To be properly
supported, these applications and their users make specific and unique security and quality demands on the
network. This is why, as a best practice, it is suggested that you start by creating VPNs that support your existing
networks. This is the minimum number of VPNs you will need.

Security requirements: Security requirements may be defined by user groups such as those working on sensitive and
confidential projects, by compliance requirements to protect confidential information, and by application to protect
special applications. Each “special” security zone can be sectioned off with enhanced security via MPLS VPNs.

Performance requirements: Typically, your applications and available bandwidth will determine traffic engineering and fast reroute requirements; however, users and business needs may impact these considerations as well.

Additional network virtualization: Once MPLS-based VPNs are provisioned to support your existing networks,
user groups, QoS, and security requirements, consideration should be given to new VPNs that may be needed. For
example, evolving compliance processes supporting requirements such as Sarbanes-Oxley or Health Insurance
Portability and Accountability Act (HIPAA) may require new and secure VPNs. Furthermore, a future acquisition of a
business unit may require network integration and this can easily be performed on the network with the addition of
a VPN to accommodate the acquisition.
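As one example of such a VPN, a BGP-signaled VPLS instance that stretches a single VLAN between data centers could be sketched as follows. The instance name, VLAN ID, route distinguisher, route target, site identifier, and interface are hypothetical placeholders, and the # lines are annotations only.

# Access interface toward the data center LAN, encapsulated for VPLS
set interfaces ge-1/0/0 vlan-tagging
set interfaces ge-1/0/0 encapsulation flexible-ethernet-services
set interfaces ge-1/0/0 unit 10 encapsulation vlan-vpls
set interfaces ge-1/0/0 unit 10 vlan-id 10

# BGP-signaled VPLS routing instance for the stretched VLAN
set routing-instances dc-vlan10 instance-type vpls
set routing-instances dc-vlan10 interface ge-1/0/0.10
set routing-instances dc-vlan10 route-distinguisher 10.255.0.1:10
set routing-instances dc-vlan10 vrf-target target:65000:10
set routing-instances dc-vlan10 protocols vpls site-range 8
set routing-instances dc-vlan10 protocols vpls site dc1 site-identifier 1

# The iBGP sessions built in step 1 must also carry the l2vpn signaling family
set protocols bgp group dc-ibgp family l2vpn signaling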

Step 4: Transfer networks onto the MPLS VPNs. All of the required VPNs do not have to be defined before initiating the
process of migrating the existing network(s) to MPLS. In fact, you may wish to build the first VPN and then migrate the
related network, then build the second VPN and migrate the next network and so on. As the existing networks converge to
the MPLS network, monitor network performance and traffic loads to verify that expected transport demands are being
met. If for some reason performance or traffic loads vary from expected results, investigate further as MPLS can provide
deterministic traffic characteristics, and resulting performance should not vary greatly from the expected results. Based
upon findings, there may be opportunities to further optimize the network for cost and performance gains.


Step 5: Traffic engineer the network. Step 5 does not require steps 3 or 4 above to be completed before initiating this
step. Traffic engineering may begin as soon as the MPLS network plane is established. However, as a best practice it
is recommended to first migrate some of the existing traffic to the MPLS plane before configuring TE. This will allow
you to experience firsthand the benefits and granular level of control you have over the network through the traffic
engineering of an MPLS network. Start by assessing the existing traffic demand of applications across data center(s).
Group traffic demand into priority categories: for instance, voice and video may be gathered into a “real-time” priority category, while private data is grouped into a second category and Internet traffic into a third.
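These priority categories can then be reflected directly in the LSP definitions, for example by giving a real-time LSP a bandwidth reservation and a stronger setup/hold priority than a bulk LSP. The LSP names, bandwidth value, and priorities below are placeholders (in Junos OS, 0 is the highest priority and 7 the lowest), and the # lines are annotations only.

# Real-time traffic: reserved bandwidth and strongest setup/hold priority
set protocols mpls label-switched-path realtime-dc1-dc2 to 10.255.0.2
set protocols mpls label-switched-path realtime-dc1-dc2 bandwidth 500m
set protocols mpls label-switched-path realtime-dc1-dc2 priority 0 0

# Bulk and Internet traffic: no reservation, lowest priority, eligible for preemption
set protocols mpls label-switched-path bulk-dc1-dc2 to 10.255.0.2
set protocols mpls label-switched-path bulk-dc1-dc2 priority 7 7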

Step 6: Monitor and manage. As with any network, you must continue to monitor and manage the network once it
is deployed and running while supporting new service loads and demands. An advantage MPLS provides above and
beyond IP is its capability to traffic engineer based upon utilization and application demands as the business evolves.

For more information on MPLS, refer to: www.juniper.net/techpubs/software/junos/junos53/swconfig53-mpls-apps/html/mpls-overview.html.

For more information on VPLS, refer to: www.juniper.net/techpubs/en_US/junos10.2/information-products/pathway-pages/config-guide-vpns/config-guide-vpns-vpls.html.

High-Performance Security Services Trigger Event

Please refer to the best practices and related installation information outlined in the Best Practices: Upgraded Security Services in the Core section, as that trigger event covers the security services network insertion point.

Completed Migration to a Simplified, High-Performance, Two-Tier Network


As discussed throughout this document, enterprises with an existing legacy three-tier data center architecture can
begin their migration to a next-generation, cloud-ready, Juniper-based two-tier design from any of the common trigger
events outlined in this chapter. We’ve identified the best practices and the key prescriptive installation steps needed to
ensure successful insertion into the existing architecture. You can transition to a simplified architecture from an existing
legacy multitier architecture, as shown in Figure 19, by provisioning each of these network insertion points.

Figure 19: Transitioning to a Juniper two-tier high-performance network (a three-tier legacy network with separate core, aggregation, and access layers collapsed into a simpler two-tier design with an SRX5800/EX82XX collapsed aggregation/core layer and an EX4200/EX4500 Virtual Chassis access layer connecting servers, NAS, and FC storage/FC SAN)


Juniper Professional Services


Juniper Networks offers a suite of professional services that span the entire migration process—from the design and
planning to the implementation and operation phases of a new environment. These services provide an expeditious
and thorough approach to getting the network up and running, while at the same time minimizing the risk of business
disruptions and optimizing the network’s performance. High- and low-level design services, as well as network consulting,
help determine the high-level technical requirements to support business needs, including network assessment and
recommendations. Implementation services offer a thorough review of the system configurations to be migrated,
complete the actual migration/implementation tasks, and provide the necessary testing and troubleshooting activities
to ensure a successful network implementation. Custom and fixed scope installation, quick start and transition services
are available to ease the installation and enable a complete knowledge transfer of Juniper technology. These services
also help accelerate the migration and minimize the risk and cost in moving from legacy products. Subsequent to the migration activities, Juniper consulting expertise can be obtained when and where it provides the most benefit during operation: to efficiently fill potential skill gaps, to help assess current network performance, and to provide recommendations that meet specific operational needs and keep the network running optimally.

The full suite of Juniper Networks Professional Services offerings can be found at: www.juniper.net/us/en/products-
services/consulting-services.



Chapter 4:
Troubleshooting


Troubleshooting

Introduction
The scope of this section is to provide an overview of common issues that might be encountered at different insertion
points when inserting Juniper platforms as a result of a trigger event (adding a new application or service to the
organization). This section does not provide exhaustive troubleshooting details; however, it describes the principal recommended approaches to troubleshooting the most common issues and provides guidelines for identification, isolation, and resolution.

Troubleshooting Overview
When investigating the root cause of a problem, it is important to determine the problem’s nature and analyze its
symptoms. When troubleshooting a problem, it is generally advisable to start at the most general level and work
progressively into the details, as needed. Using the OSI model as a reference, troubleshooting typically begins at the
lower layers (physical and data link) and works progressively up toward the application layer until the problem is found.
This approach tends to quickly identify what is working properly so that it can be eliminated from consideration, and
narrows the problem domain for quick problem identification and resolution.

The following list of questions provides a methodology on how to use clues and visible effects of a problem to reduce
the diagnostic time.

• Has the issue appeared just after a migration, a deployment of new network equipment, a new link connection, or
a configuration change? This is the context being presented in this Data Center LAN Migration Guide. The Method of
Procedure (MOP) detailing the steps of the operation in question should include the tasks to be performed to return
to the original state before the network event, should any abnormal conditions be identified. If any issue arises during
or after the operation that cannot be resolved in a timely manner, it may be necessary to roll back and disconnect
newly deployed equipment while the problem is researched and resolved. The decision to back out should be
made well in advance, prior to the expiration of the maintenance window. This type of problem is likely due to an
equipment misconfiguration or planning error.

• Does the problem have a local or a global impact on the network? The possible causes of a local problem are most likely found at L1 or L2, or the problem could be related to an Ethernet switching issue at the access layer. An IP routing problem may potentially have a global impact on networks, and the operator should focus the investigation on the aggregation and core layers of the network.

• Is it an intermittent problem? When troubleshooting an intermittent problem, system logging and traceoptions provide the primary debugging tools on Juniper Networks platforms, and can be focused on various protocol mechanisms at various levels of detail. Events occurring in the network cause state transitions related to physical interfaces, logical interfaces, or protocols to be logged to local or remote files for analysis.

• Is it a total or partial loss of connectivity or is it a performance problem? All Juniper Networks platforms have
a common architecture in that there are separate control and forwarding planes. For connectivity issues, Juniper
recommends that you first focus on the control plane to verify routing and signaling states and then concentrate
on the forwarding or data plane, which is implemented in the forwarding hardware (Packet Forwarding Engine or
PFE). If network performance is adversely affected by packet loss, delays, and jitter impacting one or multiple traffic
types, the root cause is most likely related to network congestion, high link utilization, and packet queuing along the
traversed path.


Hardware

The first action to take when troubleshooting a problem and also before making any change in the network is to ensure
proper functionality and integrity of the network equipment and systems. A series of validation checks and inspection
tests should be completed to verify that the hardware and the software operate properly and there are not any fault
conditions. The following presents a list of relevant show commands from the Junos OS CLI, along with a brief description of expected outcomes.

• show system boot-messages

Review the output and verify that no abnormal conditions or errors occurred during the booting process. POST (power-
on self-test) results are captured in the bootup message log and stored on the hard drive.

• show chassis hardware detail

Verify that all hardware appears in the output (i.e., routing engines, control boards, switch fabric boards, power
supplies, line cards, and physical ports). Verify that no hardware indicates a failure condition.

• show chassis alarms

Verify that there are no active alarms.

• show log messages

Search the log for errors and failures and review it for any abnormal conditions. The search can be narrowed to specific keywords using the CLI's match pipe filter (a grep-like function); see the example after this list.

• show system core-dumps

Check for any transient software failures. Under a fatal fault condition, Junos OS creates a core file of the kernel or the process in question for diagnostic analysis.
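As a simple example of narrowing the log review described above (the keyword pattern shown is illustrative), the CLI pipe filters can be combined with show log messages:

    show log messages | last 100
    show log messages | match "error|fail|alarm"

The first command displays only the most recent entries, while the second performs a grep-like keyword search across the log.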

For more details on platform specifics, please refer to the Juniper technical documentation that can be found at:
www.juniper.net/techpubs.

OSI Layer 1: Physical Troubleshooting


An OSI Layer 1 problem or physical link failure can occur in any part of the network. Each media type has different
physical and logical properties and provides different diagnostic capabilities. Focus here will be on Ethernet, as it is
universally deployed in data centers at all tiers and in multiple flavors: GbE, 10GbE, copper, fiber, etc.

• show interfaces extensive command produces the most detailed and complete information about all interfaces. It
displays input and output errors for the interface displayed in multiple categories such as carrier transition, cyclic
redundancy check (CRC) errors, L3 incomplete errors, policed discard, L2 channel errors, static RAM (SRAM) errors,
packet drops, etc. It also contains interface status and setup information at both physical and logical layers. Ethernet
networks can present many symptoms, but troubleshooting can be helped by applying common principles: verify
media type, speed, fiber mode and length, interface and protocol maximum transmission unit (MTU), flow control
and link mode. The physical interface may have a link status of “up” because the physical link is operational with no
active alarm, but the logical interface has a link status of “down” because the data link layer cannot be established
end to end. If this occurs, refer to the next command.

• monitor interface provides real-time packet and byte counters, as well as displaying error and alarm conditions.

After an equipment migration or a new link activation, the network operator should ping a locally connected host to
verify that the link and interface are operating correctly and monitor if there are any incrementing error counters. The
do-not-fragment flag in a ping test is a good tool to detect MTU problems which can adversely affect end-to-end
communication.
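A minimal sketch of such a test on a standard 1,500-byte MTU Ethernet link follows (the address and packet sizes are illustrative):

    ping 10.1.100.1 do-not-fragment size 1472 count 5
    ping 10.1.100.1 do-not-fragment size 1473 count 5

With a 20-byte IP header and an 8-byte ICMP header, the first ping should succeed while the second should fail on a 1,500-byte MTU path; if the smaller ping also fails, suspect an MTU mismatch or a physical problem along the path.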

For 802.3ad aggregated Ethernet interfaces, we recommend enabling Link Aggregation Control Protocol (LACP) as a
dynamic bundling protocol to form one logical interface with multiple physical interfaces. LACP is designed to provide
link monitoring capabilities and fast failure detection over an Ethernet bundle connection.
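A minimal EX Series configuration sketch for an LACP-enabled bundle follows (interface names and the member count are hypothetical; the member-link syntax differs slightly on other platforms, for example gigether-options on MX Series):

    chassis {
        aggregated-devices {
            ethernet {
                device-count 2;            # number of ae interfaces to create
            }
        }
    }
    interfaces {
        ge-0/0/10 {
            ether-options {
                802.3ad ae0;               # member link of bundle ae0
            }
        }
        ge-0/0/11 {
            ether-options {
                802.3ad ae0;
            }
        }
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;                # actively negotiate LACP
                    periodic fast;         # fast PDU interval for quicker failure detection
                }
            }
            unit 0 {
                family ethernet-switching;
            }
        }
    }

show lacp interfaces and show interfaces ae0 extensive can then confirm that all member links are collecting and distributing.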


OSI Layer 2: Data Link Troubleshooting


Below are some common steps to assist in troubleshooting issues at Layer 2 in the access and aggregation tiers:

• Are the devices using DHCP to obtain an IP address? Is the Dynamic Host Configuration Protocol (DHCP) server functioning properly so that host devices receive an IP address assignment? If the request is routed, is the DHCP request being correctly forwarded?

• monitor traffic interface ge-0/0/0 command provides a tool for monitoring local traffic. Expect to see all packets
that are sent out and received to and from ge-0/0/0. This is particularly useful to verify the Address Resolution
Protocol (ARP) process over the connected LAN or VLAN. Use the show arp command to display ARP entries.

• Is the VLAN in question active on the switch? Is a trunk active on the switch that could interfere with the ability to communicate? Is the routed VLAN interface (RVI) configured with the correct prefix and attached to the corresponding VLAN? Is VRRP functioning properly and showing one unique routing node as master for the virtual IP (VIP) address? (A configuration sketch for these checks follows this list.)

• Virtual Chassis, Layer 3 uplinks, inverted U designs, and VPLS offer different alternatives to prevent L2 data
forwarding loops in a switching infrastructure without the need to implement Spanning Tree Protocols (STPs).
Nevertheless, it is common best practice to enable STP as a protection mechanism to prevent broadcast storms in
the event of a switch misconfiguration or a connection being established by accident between two access switches.
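A minimal EX Series sketch covering the VLAN, RVI, and VRRP checks above (the VLAN name, IDs, and addresses are hypothetical):

    vlans {
        server-vlan {
            vlan-id 100;
            l3-interface vlan.100;                        # attach the RVI to the VLAN
        }
    }
    interfaces {
        vlan {
            unit 100 {
                family inet {
                    address 10.1.100.2/24 {
                        vrrp-group 1 {
                            virtual-address 10.1.100.1;   # VIP used as the servers' default gateway
                            priority 200;                 # highest priority becomes master
                            accept-data;
                        }
                    }
                }
            }
        }
    }

show vlans, show interfaces vlan.100 terse, and show vrrp summary then confirm that the VLAN is active, the RVI carries the expected prefix, and exactly one node is master for the VIP.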

Virtual Chassis Troubleshooting


Configuring a Virtual Chassis is essentially plug and play. However, if there are connectivity issues, the following
section provides the relevant commands to perform operational analysis and troubleshooting. To troubleshoot the
configuration of a Virtual Chassis, perform the following steps.

Check and confirm Virtual Chassis configuration and status with the following commands:

• show configuration virtual-chassis

• show virtual-chassis member-config all-members

• show virtual-chassis status

Check and confirm Virtual Chassis interfaces:

• show interfaces terse

• show interfaces terse vcp*

• show interfaces terse *me*

Verify that the mastership priority is assigned appropriately:

• show virtual-chassis status

• show virtual-chassis vc-port all-members

Verify the Virtual Chassis active topology and neighbors:

• show virtual-chassis active-topology

• show virtual-chassis protocol adjacency

• show virtual-chassis protocol database extensive

• show virtual-chassis protocol route

• show virtual-chassis protocol statistics

In addition to the verifications above, also check the following:

• Check the cable to make sure that it is properly and securely connected to the ports. If the Virtual Chassis port (VCP)
is an uplink port, make sure that the uplink module is model EX-UM-2XFP.

• If the VCP is an uplink port, make sure that the uplink port has been explicitly set as a VCP.

• If the VCP is an uplink port, make sure that you have specified the options (pic-slot, port-number, member-id)
correctly.
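For reference, an uplink port is explicitly converted to a VCP with an operational-mode command of the following form (the pic-slot, port, and member numbers shown are illustrative):

    request virtual-chassis vc-port set pic-slot 1 port 0 member 2

show virtual-chassis vc-port then confirms that the port is listed as a VCP and is up.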


OSI Layer 3: Network Troubleshooting


While L1 and L2 problems have limited effect and are local, L3 or routing issues may affect other networks by
propagation and may have a global impact. In the data center, the aggregation/core tiers may be affected. The
following focuses on the operation of OSPF and BGP as they are commonly implemented in data centers to exchange
internal and external routing information.

In a next-generation or newly deployed network, OSPF's primary responsibility typically is to discover endpoints for internal BGP. Unlike OSPF, BGP may play multiple roles that include providing connectivity to an external network, exchanging information between VRFs for L3 MPLS VPN or VPLS, and possibly carrying the data center's internal routes to access routers.

OSPF
A common problem in OSPF is troubleshooting adjacency issues which can occur for multiple reasons: mismatched IP
subnet/mask, area number, area type, authentication, hello/dead interval, network type, or mismatched IP MTU.

The following are useful commands for troubleshooting an OSPF problem:

• show ospf neighbor displays information about OSPF neighbors and the state of the adjacencies which must be
shown as “full.”

• show ospf interface displays information about the status of OSPF interfaces.

• show ospf log logs shortest-path-first (SPF) calculation.

• show ospf statistics displays number and type of OSPF packets sent and received.

• show ospf database displays entries in the OSPF link-state database (LSDB).

OSPF traceoptions provide the primary debugging tool, and the OSPF operation can be flagged to log error packets
and state transitions along with the events causing them.
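A minimal traceoptions sketch for this purpose follows (the file name and sizes are illustrative); remember to remove or deactivate tracing once the investigation is complete:

    protocols {
        ospf {
            traceoptions {
                file ospf-trace size 1m files 3;   # bounded local trace file
                flag error;                        # log error packets
                flag state;                        # log adjacency state transitions
            }
        }
    }

The resulting trace can then be reviewed with show log ospf-trace.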

BGP

show bgp summary is a primary command used to verify the state of BGP peer sessions, and it should display that the
peering is “established” to be fully operational.

BGP has multiprotocol capabilities made possible through simple extensions that add new address families. This
command also helps to verify which address families are carried over the BGP session, for example, inet-vpn if L3 MPLS
VPN service is required or L2VPN for VPLS.

BGP is a policy-driven routing protocol. It offers flexibility and granularity when implementing routing policy for path
determination and for prefix filtering. A network operator must be familiar with the rich set of attributes that can
be modified and also with the BGP route selection process. Routing policy controls and filters can modify routing
information entering or leaving the router in order to alter forwarding and routing decisions based on the following criteria:

• What should be learned about the network from all protocols?

• What routes should be shared with other routing protocols?

• What should be advertised to other routers?

• What routing information should be modified, if any?

Consistent policies must be applied across the entire network to filter/advertise routes and modify BGP route
attributes. The following commands assist in the troubleshooting of routing policies:

• show route receive-protocol bgp <neighbor> displays received attributes.

• show route advertising-protocol bgp <neighbor> displays route and attributes sent by BGP to a specific peer.

• show route hidden extensive displays routes not usable due to BGP next-hop problems and routes filtered by an
inbound route filter.
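To make the policy discussion concrete, the following is a minimal, hypothetical export policy sketch (prefixes, community values, and the group name are illustrative only); the commands above can then be used to confirm what is actually advertised and received:

    policy-options {
        policy-statement export-dc-routes {
            term internal-routes {
                from {
                    protocol ospf;
                    route-filter 10.10.0.0/16 orlonger;   # data center aggregate
                }
                then {
                    community add dc-internal;
                    accept;
                }
            }
            term reject-everything-else {
                then reject;
            }
        }
        community dc-internal members 65000:100;
    }
    protocols {
        bgp {
            group core-peers {
                export export-dc-routes;
            }
        }
    }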

Logging of peer state transitions and flagging BGP operations provides a good source of information when investigating
BGP problems.


VPLS Troubleshooting
This section provides a logical approach to take when determining the root cause of a problem in a VPLS network. A good place to start is to verify the configuration setup. Following is a brief configuration snippet with corresponding descriptors.

routing-instances {
    vpls_vpn1 {                               # arbitrary name
        instance-type vpls;                   # VPLS type
        vlan-tags outer 4094 inner 4093;      # VLAN normalization; must match if configured
        interface ge-1/0/0.3001;              # int.unit
        route-distinguisher 65000:1001;       # RD carried in MP-BGP
        vrf-target target:65000:1001;         # VPN RT; must match on all PEs in this VPLS
        protocols {
            vpls {
                mac-table-size {
                    100;                      # maximum MAC table size
                }
                interface-mac-limit {
                    50;                       # maximum MACs learned from all CE-facing interfaces
                }
                no-tunnel-services;           # use lsi interfaces for tunneling
                site site-1 {                 # arbitrary name
                    site-identifier 1001;     # unique site ID
                    interface ge-1/0/0.3001;  # list of int.unit in this VPN
                }
            }
        }
    }
}

The next step is to verify the control plane with the following operation commands:

• show route receive-protocol bgp <neighbor> table <vpls-vpn> detail displays BGP routes received from an MP-
iBGP peer for a VPLS instance. Use the detail/extensive option to see other BGP attributes such as route-target RT,
label base, and site-ID. The BGP next hop must have a route in the routing table for the mapping to a transport MPLS
LSP.

• show vpls connections is an excellent command to verify the VPLS connection status and to aid in troubleshooting.

After the control plane has been validated as fully functional, the forwarding plane should be checked next by issuing
the following commands. Note that the naming of devices maps to a private MPLS network, as opposed to using a
service provider MPLS network.

On local switch:

• show arp

• show interfaces ge-0/0/0

On MPLS edge router:

• show vpls mac-table

• show route forwarding-table

On MPLS core router:

• show route table mpls


The commands presented in this section should highlight the proper VPLS operation as follows:

• When sending to an unknown MAC address, the VPLS edge router floods the frame to all members of the VPLS.

• When sending to a known MAC address, the VPLS edge router maps the frame to an outer and inner label.

• When receiving a MAC address, the VPLS edge router identifies the sender and maps the MAC address to a label stack in the MAC address cache.

• The VPLS provider edge (PE) router periodically ages out unused entries from the MAC address cache.

Multicast
Looked at simplistically, multicast routing is upside down unicast routing. Multicast routing functionality is focused on
where the packet came from and directs traffic away from its source. When troubleshooting multicast, the following
methodology is recommended:

• Gather information

In one-to-many and many-to-many communications, it is important to have a good understanding of the expected
traffic flow to clearly identify all sources and receivers for a particular multicast group.

• Verify receiver interest by issuing the following commands:

show igmp group <mc_group> displays information about Internet Group Management Protocol (IGMP) group
membership received from the multicast receivers on the LAN interface.

show pim interfaces is used to verify the designated router for that interface or VLAN.

• Verify knowledge of the active source by issuing the following commands:

show multicast route group <mc_group> source-prefix <ip_address> extensive displays the forwarding state (pruned
or forwarding) and the rate for this multicast route.

show pim rps extensive determines whether the source designated router has the right rendezvous point (RP) and displays tunnel interface-related information for register message encapsulation/de-encapsulation.

• Trace the forwarding state backwards, working your way back towards the source IP and looking for Protocol Independent Multicast (PIM) problems along the way with the following commands:

show pim neighbors displays information about PIM neighbors.

show pim join extensive <mc_group> validates the outgoing interface list and upstream neighbor, and displays source tree and shared tree (rendezvous point tree, or RPT) state, with join/prune status.

show multicast route group <mc_group> source-prefix <ip_address> extensive checks whether traffic is flowing and shows a positive traffic rate.

show multicast rpf <source_address> displays the reverse path forwarding (RPF) check result. Multicast routing uses an RPF check: a router forwards a multicast packet only if it is received on the upstream interface toward the source. Otherwise, the RPF check fails, and the packet is discarded.
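As a point of reference for the RP-related checks above, a minimal sparse-mode PIM configuration with a statically defined RP might look like the following (the RP address is hypothetical, and anycast-RP or auto-RP designs will differ):

    protocols {
        pim {
            rp {
                static {
                    address 10.255.0.100;    # rendezvous point; must be consistent across routers
                }
            }
            interface all {
                mode sparse;
            }
        }
    }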

Quality of Service/Class of Service (CoS)


Link congestion in the network can be the root cause of packet drops. The show interfaces queue command provides
CoS queue statistics for all physical interfaces to assist in determining the number of packets dropped due to tail drop,
and the number of packets dropped due to random early detection (RED).


OSI Layer 4-7: Transport to Application Troubleshooting


This type of problem is most likely to occur on firewalls or on routers secured with firewall filters. Below are some
important things to remember when troubleshooting Layer 4-7 issues:

• Standard troubleshooting tools such as ping and traceroute may not work. Generally, ping and traceroute are not
enabled through a firewall except in specific circumstances.

• Firewalls are routers too. In addition to enforcing stateful policies on traffic, firewalls also have the responsibility of routing packets to their next hop. To do this, firewalls must have a working and complete routing table, statically or dynamically defined. If the table is incomplete or incorrect, the firewall will not be able to forward traffic correctly.

• Firewalls are stateful and build state for every session that passes through the firewall. If a non-SYN packet arrives and the firewall does not have a session open for that packet, it is considered an "out of state" packet. This can be the sign of an attack, or of an application that has been dormant longer than the firewall's session timeout and is attempting to resume sending traffic.

• By definition, stateful firewalls enforce traffic through their policy based on the network and transport layers of the OSI model. In addition, firewalls may also perform protocol anomaly checks and signature matches on the application layer for selected protocols.

• This function is implemented by ALGs. ALGs recognize application-specific sequences, change the application layer to make protocols compatible with Port Address Translation (PAT) and Network Address Translation (NAT), and deliver higher layer content to deep inspection (DI), antivirus, URL filter, and spam filter features, if enabled.

• If you experience a problem that involves the passing or blocking of traffic, the very first place to look is the firewall
logs. Often the log messages will give strong hints about the problem.
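A minimal SRX Series policy sketch illustrating session logging follows (zone, policy, and application names are hypothetical):

    security {
        policies {
            from-zone trust to-zone untrust {
                policy allow-web {
                    match {
                        source-address any;
                        destination-address any;
                        application junos-http;
                    }
                    then {
                        permit;
                        log {
                            session-init;     # log session creation
                            session-close;    # log session teardown
                        }
                    }
                }
            }
        }
    }

With logging enabled, show security flow session and the configured syslog destination indicate whether traffic is matching the expected policy or being dropped elsewhere.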

Tools
Junos OS has embedded script tools to simplify and automate tasks for network engineers. Commit scripts, operation (op) scripts, and event scripts provide self-monitoring, self-diagnosing, and self-healing capabilities to the network. The apply-macro configuration statement feeds user-defined data to a commit script, which can extend and customize the router configuration based on that data and on templates. Together, these tools offer an almost infinite number of applications to reduce downtime, minimize human error, accelerate service deployment, and reduce overall operational costs. For more information, refer to: www.juniper.net/us/en/community/junos/script-automation.
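As a brief illustration of the commit script capability (the check performed is arbitrary and the file name is hypothetical), a SLAX script placed in /var/db/scripts/commit and referenced under the [edit system scripts commit] hierarchy can warn at commit time about configurations that violate a local standard:

    version 1.0;

    ns junos = "http://xml.juniper.net/junos/*/junos";
    ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
    ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";

    import "../import/junos.xsl";

    /* Warn about any interface configured without a description */
    match configuration {
        for-each (interfaces/interface[not(description)]) {
            <xnm:warning> {
                <message> "Interface " _ name _ " has no description configured.";
            }
        }
    }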

Troubleshooting Summary
Presenting an exhaustive and complete troubleshooting guide falls outside the scope of this Data Center LAN
Migration Guide. Presented in this section is a methodology to understand the factors contributing to a problem and a
logical approach to the diagnostics needed to investigate root causes. This method relies on the fact that IP networks
are modeled around multiple layered architectures. Each layer depends on the services of the underlying layers. From
the physical network topology comprised of access, aggregation, and core tiers to the model of IP communication
founded on the 7 OSI layers, matching symptoms to the root cause layer is a critical step in the troubleshooting
methodology. Juniper platforms have also implemented a layered architecture by integrating separate control and
forwarding planes. Once the root cause layer is correctly identified, the next steps are to isolate the problem and to
take the needed corrective action at that specific layer.

For more details on platform specifics, please refer to the Juniper technical documentation that can be found at:
www.juniper.net/techpubs.



Chapter 5: Summary and
Additional Resources


Summary
Today’s data center is a vital component to business success in virtually all industries and markets. New trends and
technologies such as cloud computing and SOA-based applications have significantly altered data center traffic flows
and performance requirements. Data center designs based on the performance characteristics of older technology
and traffic flows have resulted in complex architectures which do not easily scale and also may not provide the
performance needed to efficiently meet today’s business objectives.

A simplified, next-generation, cloud-ready two-tier data center design is needed to meet these new challenges without
compromising performance or security. Since most enterprises won’t disrupt a production data center except for
scheduled maintenance and business continuity testing, a gradual migration to the “new network” is often most practical.

This guide has identified and described how migration towards a simpler two-tier design can begin at any time and
at various insertion points within an existing legacy data center architecture. These insertion points are determined
by related trigger events such as adding a new application or service, or by larger events such as data center
consolidation. This guide has outlined the design considerations for migration at each network layer, providing
organizations with a path towards a simplified high-performance network which can not only lower TCO, but also
provide the agility and efficiency to enable organizations to gain a competitive advantage by leveraging their data
center network.

Juniper Networks has been delivering a steady stream of network innovations for more than a decade. Juniper brings
this innovation to a simplified data center LAN solution built on three core principles: simplify, share, and secure.
Creating a simplified infrastructure with shared resources and secure services delivers significant advantages over
other designs. It helps lower costs, increase efficiency, and keep the data center agile enough to accommodate any
future business changes or technology infrastructure requirements.

Additional Resources

Data Center Design Resources


Juniper has developed a variety of materials that complement this Migration Guide and are helpful to the network
design process in multiple types of customer environments and data center infrastructures. On our public Internet
website, we keep many of these materials at the following locations:

1. Data Center Solution Reference Materials Site:

www.juniper.net/us/en/solutions/enterprise/data-center/simplify/#literature

At this location you will find information helpful to the design process at a number of levels, organized by selectable
tabs according to the type of information you are seeking—analyst reports, solution brochures, case studies, reference
architecture, design guides, implementation guides, and industry reports.

The difference between a reference architecture, design guide, and implementation guide is the level of detail the
document addresses. The reference architecture is the highest level organization of our data center network approach;
the design guide provides guidance at intermediate levels of detail appropriate to, say, insertion point considerations
(in the terms of this Migration Guide); and implementation guides provide specific guidance for important types
of network deployments at different tiers of the data center network. Implementation guides give customers and
other readers enough information to start specific product implementation tasks appropriate for the most common
deployment scenarios, and are quite usable in combination with an individual product’s installation, configuration, or
operations manual.

2. Information on Juniper’s individual lines relevant to the insertion scenarios described in this guide can be found at:

Ethernet switching: www.juniper.net/us/en/products-services/switching

IP routing: www.juniper.net/us/en/products-services/routing

Network security: www.juniper.net/us/en/products-services/security

Network management: www.juniper.net/us/en/products-services/software/junos-platform/junos-space


3. Information on the way Juniper's offerings fit with various alliance partners in the data center environment can be found at: www.juniper.net/us/en/company/partners/enterprise-alliances.

4. Information on Juniper’s Professional Services to support planning and design of migration projects can be found at:
www.juniper.net/us/en/products-services/consulting-services/#services.

For more detailed discussions of individual projects and requirements, please contact your Juniper or authorized
Juniper partner representative directly.

Training Resources
Juniper Networks offers a rich curriculum of introductory and advanced courses on all of its products and solutions.

Learn more about Juniper’s free and fee-based online and instructor-led hands-on training offerings:
www.juniper.net/us/en/training/technical_education.

Juniper Networks Professional Services


Juniper Networks Professional Services organization is uniquely qualified to help enterprises or service providers design,
implement, and optimize their networks for confident operation and rapid returns on infrastructure investments.
The Professional Services team understands today’s Internet demands and those that are just around the corner—
for bandwidth efficiency, best-in-class security, solid reliability, and cost-effective scaling. These highly trained,
experienced professionals augment your team to keep your established network protected, up-to-date, performing at
its best, and aligned with the goals of your organization.

• Planning and design consulting—Ensures that the new design is aligned with your stated business and technical goals.
Incorporates critical business information in the network re-architecture. Eliminates costly redesigns and lost time.

• Implementation and configuration services—Helps you achieve the deployment of complex products and technologies efficiently with minimal business disruption. These services shorten the interval between network element installation and revenue generation. This may include services such as onsite and remote support for installation, configuration, and testing. It may also include production of site requirements, site survey documentation, hardware installation documentation, and network test documentation.

• Knowledge transfer—Juniper Networks’ IP experts can train internal teams in best-in-class migration practices or
any specific enterprise issues.

• Project management—A dedicated project manager can be assigned to assist with administration and management
throughout the entire migration project. In general, this service provides an emphasis on tasks such as project
planning, reports that track project tasks against scheduled due dates, and project documentation.

• Resident engineer—A Juniper Networks Resident Engineer (RE) can be placed onsite at any desired enterprise location to engage with the engineering or operations staff on a daily basis to support a data center network. Functioning as part of the enterprise team, REs are available for 12-month engagements in the specific networking environment, and provide technical assistance such as network implementation and migration, troubleshooting and operations support, network and configuration analysis, and assistance in testing Juniper product features and functionality, helping to optimize the value of high-performance networking in an evolving business environment.

Learn more about Juniper Networks Professional Services at: www.juniper.net/us/en/products-services/consulting-services.

Juniper Networks Technical Assistance Center (JTAC) provides a single point of contact for all support needs with
skilled engineers and automatic escalation alerts to senior management.

The Customer Support Center (CSC) provides instant, secure access to critical information, including the Juniper
Networks Knowledge Base, frequently asked questions, proactive technical bulletins, problem reports, technical notes,
release notes, and product documentation.


About Juniper Networks


Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud
providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of
networking. The company serves customers and partners worldwide. Additional information can be found at
www.juniper.net.


Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000, Fax: 408.745.2100, www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636, Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600, EMEA Sales: 00800.4586.4737, Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

7100128-002-EN Feb 2011 Printed on recycled paper
