Table of Contents
Chapter 1: Why Migrate to Juniper
    Introduction to the Migration Guide
        Audience
        Data Center Architecture and Guide Overview
    Why Migrate?
        Scaling Is Too Complex with Current Data Center Architectures
        The Case for a High-Performing, Simplified Architecture
    Why Juniper?
        Other Considerations
Chapter 2: Pre-Migration Information Requirements
    Pre-Migration Information Requirements
        Technical Knowledge and Education
Chapter 3: Data Center Migration - Trigger Events and Deployment Processes
    How Migrations Begin
        Trigger Events for Change and Their Associated Insertion Points
        Considerations for Introducing an Alternative Network Infrastructure Provider
        Trigger Events, Insertion Points, and Design Considerations
        IOS to Junos OS Conversion Tools
    Data Center Migration Insertion Points: Best Practices and Installation Tasks
        New Application/Technology Refresh/Server Virtualization Trigger Events
        Design Options and Best Practices: New Application/Technology Refresh/Server Virtualization Trigger Events
        Network Challenge and Solutions for Virtual Servers
        Network Automation and Orchestration
        Data Center Consolidation Trigger Event
        Best Practices: Designing the Upgraded Aggregation/Core Layer
        Best Practices: Upgraded Security Services in the Core
        Aggregation/Core Insertion Point Installation Tasks
        Consolidating and Virtualizing Security Services in the Data Center: Installation Tasks
        Business Continuity and Workload Mobility Trigger Events
        Best Practices Design for Business Continuity and HADR Systems
        Best Practices Design to Support Workload Mobility Within and Between Data Centers
        Best Practices for Incorporating MPLS/VPLS in the Data Center Network Design
        Six Process Steps for Migrating to MPLS
        Completed Migration to a Simplified, High-Performance, Two-Tier Network
        Juniper Professional Services
Chapter 4: Troubleshooting
    Troubleshooting
        Introduction
        Troubleshooting Overview
        OSI Layer 1: Physical Troubleshooting
        OSI Layer 2: Data Link Troubleshooting
        Virtual Chassis Troubleshooting
        OSI Layer 3: Network Troubleshooting
        OSPF
        VPLS Troubleshooting
        Multicast
        Quality of Service/Class of Service (CoS)
        OSI Layer 4-7: Transport to Application Troubleshooting
        Tools
    Troubleshooting Summary
Chapter 5: Summary and Additional Resources
    Summary
    Additional Resources
        Data Center Design Resources
        Training Resources
        Juniper Networks Professional Services
    About Juniper Networks
Table of Figures
Figure 1: Multitier legacy data center LAN
Figure 2: Simpler two-tier data center LAN design
Figure 3: Data center traffic flows
Figure 4: Collapsed network design delivers increased density, performance, and reliability
Figure 5: Junos OS - The power of one
Figure 6: The modular Junos OS architecture
Figure 7: Junos OS lowers operations costs across the data center
Figure 8: Troubleshooting with Service Now
Figure 9: Converting IOS to Junos OS using I2J
Figure 10: The I2J input page for converting IOS to Junos OS
Figure 11: Inverted U design using two physical servers
Figure 12: Inverted U design with NIC teaming
Figure 13: EX4200 top-of-rack access layer deployment
Figure 14: Aggregation/core layer insertion point
Figure 15: SRX Series platform for security consolidation
Figure 16: Workload mobility alternatives
Figure 17: Switching across data centers using VPLS
Figure 18: Transitioning to a Juniper two-tier high-performance network
Audience
While much of the high-level information presented in this document will be useful to anyone making strategic decisions about a data center LAN, this guide is targeted primarily at:

- Data center network and security architects evaluating the feasibility of new approaches in network design
- Data center network planners, engineers, and operators designing and implementing new data center networks
- Data center managers, IT managers, and network and security managers planning and evaluating data center infrastructure and security requirements
[Figure 1: Multitier legacy data center LAN]
Neil Rikard, "Minimize LAN Switch Tiers to Reduce Cost and Increase Efficiency," Gartner Research, ID Number G00172149, November 17, 2009.
[Figure 2: Simpler two-tier data center LAN design]
Why Migrate?
IT continues to become more tightly integrated with business across all industries and markets. Technology is the means by which enterprises can provide better access to information in near or real time to satisfy customer needs, while simultaneously driving new efficiencies. However, today's enterprise network infrastructures face growing scalability, agility, and security challenges, driven by factors such as increased collaboration with business partners, greater workforce mobility, and the sheer proliferation of users with smart mobile devices requiring constant access to information and services. These infrastructure challenges are seriously compounded when such growth is combined with the trend toward data center consolidation.

What is needed is a new network infrastructure that is more elastic, more efficient, and able to scale easily. Scalability is a high priority: it is safe to predict that much of the change facing businesses today will arrive as demand for more storage, more processing power, and more flexibility. Recent studies by firms such as IDC suggest that global enterprises will focus their investments and resources over the next 5 to 10 years on lowering costs while continuing to look for new growth areas. Industry analysts have identified several key data center business initiatives that align with these directions:

- Data center consolidation: Enterprises combine data centers as a result of merger or acquisition to reduce cost as well as centralize and consolidate resources.
- Virtualization: Server virtualization is used to increase utilization of CPU resources, provide flexibility, and deliver on-demand services that easily scale (currently the most prevalent virtualization example).
- Cloud computing: Pooling resources within a cloud provides a cost-efficient way to reconfigure, reclaim, and reuse resources to deliver responsive services.
- I/O convergence or consolidation: Ethernet and Fibre Channel are consolidated over a single wire on the server side.
- Virtual Desktop Infrastructure (VDI): Applications run on centralized servers to reduce operational costs and provide greater flexibility.

These key initiatives all revolve around creating greater data center efficiencies. While meeting these business requirements, it is vital that solutions remain flexible, and that scalable systems stay easy to manage, in order to maximize all aspects of potential cost savings. In today's data center, applications are constantly being introduced, updated, and retired. Demand for services is unpredictable and ever changing. Remaining responsive, and at the same time cost efficient, is a significant resource management challenge, and adding resources needs to be a last resort since it increases the cost basis for service production and delivery. The ability to dynamically reconfigure, reclaim, and reuse resources positions the data center to effectively address today's responsiveness and efficiency challenges. Furthermore, existing three-tier architectures are built around a client/server model that is less relevant in today's application environment. Clearly, a new data center LAN design is needed: one that adapts to changing network dynamics, overcomes the complexity of scaling the current multitiered architecture, and capitalizes on the benefits of high-performance platforms and a simplified design.
Network topologies should mirror the nature of the traffic they transport.
Figure 4: Collapsed network design delivers increased density, performance, and reliability
Robin Layland, Layland Consulting, "10G Ethernet Shakes Net Design to the Core: Shift from Three- to Two-Tier Architectures Accelerating," Network World, September 14, 2009.
Why Juniper?
Juniper delivers high-performance networks that are open to and embrace third-party partnerships to lower total cost of ownership (TCO) as well as to create flexibility and choice. Juniper is able to provide this based on its extensive investment in software, silicon, and systems:

- Software: Juniper's investment in software starts with the Juniper Networks Junos operating system. Junos OS offers the advantage of one operating system, with one release train and one modular architecture, across the enterprise portfolio. This results in feature consistency and simplified management throughout all platforms in the network.
- Silicon: Juniper is one of the few network vendors that invests in ASICs, which are optimized for Junos OS to maximize performance and resiliency.
- Systems: The combination of the investment in ASICs and Junos OS produces high-performance systems that simultaneously scale connectivity, capacity, and the control capability needed to deliver new applications and business processes on a single infrastructure, while also reducing application and service delivery time.

Juniper Networks has been delivering a steady stream of network innovations for more than a decade. Juniper brings this innovation to a simplified data center LAN solution built on four core principles: simplify, share, secure, and automate. Creating a simplified infrastructure with shared resources and secure services delivers significant advantages over other designs. It helps lower costs, increase efficiency, and keep the data center agile enough to accommodate any future business changes or technology infrastructure requirements.

- Simplify the architecture: Consolidating legacy siloed systems and collapsing inefficient tiers results in fewer devices, a smaller operational footprint, and simplified management from a single pane of glass.
- Share the resources: Segmenting the network into simple, logical, and scalable partitions with privacy, flexibility, high performance, and quality of service (QoS) gives the network the agility to rapidly adapt to an increasing number of users, applications, and services.
- Secure the data flows: Integrating scalable, virtualized security services into the network core benefits all users and applications. Comprehensive protection secures data flows into, within, and between data centers, and provides centralized management along with distributed, dynamic enforcement of application- and identity-aware policies.
- Automate network operations at each step: An open, extensible software platform reduces operational costs and complexity, enables rapid scaling, minimizes operator errors, and increases reliability through a single network operating system. A powerful network application platform with innovative applications lets network operators leverage Juniper or third-party applications to simplify operations and scale application infrastructure, improving operational efficiency.

Juniper's data center LAN architecture embodies these principles and enables high-performance enterprises to build next-generation, cloud-ready data centers. For information on building the cloud-ready data center, please refer to: www.juniper.net/us/en/solutions/enterprise/data-center.
Other Considerations
It is interesting to note that even as vendors introduce new product lines, the legacy three-tier architecture remains the reference architecture for data centers. This legacy three-tier architecture retains the same limitations in terms of scalability and increased complexity.
Additionally, migrating to a new product line, even with an incumbent vendor, may require adopting a new OS, modifying configurations, and replacing hardware. The potential operational impact of introducing new hardware is a key consideration for insertion into an existing data center infrastructure, regardless of the platform provider. Prior to implementation at any layer of the network, it is sound practice to test interoperability and feature consistency in terms of both availability and implementation. Any enterprise weighing a migration from its existing platform to a new one, even one offered by the incumbent vendor, should therefore also evaluate moving to a simpler, high-performing Juniper-based solution, which can deliver substantial incremental benefits. (See Chapter 3: Data Center Migration - Trigger Events and Deployment Processes for more details about introducing a second switching infrastructure vendor into an existing single-vendor network.) In summary, migrating to a simpler data center design enables an enterprise to improve the end user experience and scale without complexity, while also driving down operational costs.
Junos OS Overview
Enterprises deploying legacy-based solutions today are most likely familiar with the number of different operating systems (OS versions) running on their switching, security, and routing platforms. This can result in feature inconsistencies, software instability, and time-consuming fixes and upgrades. It's not uncommon for a legacy data center to be running many different versions of a switching OS, which may increase network downtime and require greater time, effort, and cost to manage the network. From its beginning, Juniper set out to create an operating system that addressed these common problems. The result is Junos OS, which offers one consistent operating system across all of Juniper's routing, switching, and security devices.
[Figure 5: Junos OS - The power of one]
Junos OS serves as the foundation of a highly reliable network infrastructure and has been at the core of the world's largest service provider networks for over 10 years. Junos OS offers the same carrier-class performance and reliability to any size of enterprise data center LAN. Through open, standards-based protocols and an API, Junos OS can also be customized to optimize for any enterprise-specific requirement. What sets Junos OS apart from other network operating systems is the way it is built: one operating system (OS), delivered in one software release train, with one modular architecture. Feature consistency across platforms and one predictable release schedule for new features ensure compatibility throughout the data center LAN. This reduces network management complexity, increases network availability, and enables faster service deployment, lowering TCO and providing greater flexibility to capitalize on new business opportunities. Junos OS's consistent user experience and automated tool sets make planning and training easier and day-to-day operations more efficient, allowing for faster changes. Further, integrating new software functionality protects not just hardware investments, but also an organization's investment in internal systems, practices, and knowledge.
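This one-OS model means the same operational workflow applies whether the device is a switch, router, or security gateway. The following sketch shows the standard Junos OS candidate-configuration workflow; the device and interface names are placeholders:

```
user@device> configure
user@device# set interfaces ge-0/0/0 unit 0 family ethernet-switching
user@device# commit confirmed 5
user@device# commit
```

Here `commit confirmed 5` activates the change but rolls it back automatically after five minutes unless a plain `commit` confirms it, and `rollback 1` would restore the previous configuration. This safety net behaves identically across EX Series switches, MX Series routers, and SRX Series gateways.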
Junos OS Architecture
The Junos OS architecture is a modular design conceived for flexible yet stable innovation across many networking functions and platforms. The architecture's modularity and well-defined interfaces streamline new development and enable complete, holistic integration of services.
[Figure 6: The modular Junos OS architecture]
A key benefit of using Junos OS is lower TCO as a result of reduced operational challenges and improved operational productivity at all levels in the network.
[Figure 7: Junos OS lowers operations costs across the data center (the combined total savings associated with planned, unplanned, planning and provisioning, and adding infrastructure tasks)]
"The Total Economic Impact of Junos Network Operating Systems," a commissioned study conducted by Forrester Consulting on behalf of Juniper Networks, February 2009.
Up to 10 EX4200 line switches can be connected, configured, and managed as a single logical device through built-in Virtual Chassis technology. The actual number deployed in a single Virtual Chassis instance depends upon the physical layout of your data center and the nature of your traffic. Connected via a 128 Gbps backplane, a Virtual Chassis can be comprised of EX4200 switches within a rack or row, or it can use a 10GbE connection anywhere within a data center, or across data centers up to 40 km apart. Juniper's Virtual Chassis technology enables virtualization at the access layer, offering three key benefits:

1. It reduces the number of managed devices by a factor of 10.
2. The network topology now closely maps to the traffic flow. Rather than sending inter-server traffic up to an aggregation layer and back down again to cross the rack, traffic is sent directly east-to-west, reducing the latency of these transactions. This also more easily facilitates workload mobility when server virtualization is deployed.
3. Because the topology maps directly to the traffic flows, the number of uplinks required can be reduced.

The Virtual Chassis also delivers best-in-class performance. According to testing done by Network World (see the full report at www.networkworld.com/slideshows/2008/071408-juniper-ex4200.html), the EX4200 offered the lowest latency of any Ethernet switch they had tested, making the EX4200 an optimal solution for high-performance, low latency, real-time applications. Additional EX4200 performance testing, done in May 2010 by Network Test, demonstrated the low latency, high performance, and high availability capabilities of the EX4200 series; the results are viewable at http://networktest.com/jnprvc.

When multiple EX4200 platforms are connected in a Virtual Chassis configuration, they offer the same software high availability as traditional chassis-based platforms. Each Virtual Chassis has a master and a backup Routing Engine, preelected, with synchronized routing tables and routing protocol states for rapid failover should the master switch fail. The EX4200 line also offers fully redundant power and cooling. To further lower TCO, Juniper includes core routing features such as OSPF and RIPv2 in the base software license, providing a no-incremental-cost option for deploying Layer 3 at the access layer. In every deployment, the EX4200 reduces network configuration burdens and measurably improves performance for server-to-server communications in SOA, Web services, and other distributed application designs. For more information, refer to the EX4200 Ethernet Switch data sheet for a complete list of features, benefits, and specifications at: www.juniper.net/us/en/products-services/switching/ex-series.
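As a concrete sketch, a preprovisioned Virtual Chassis that pins the master and backup Routing Engine roles described above might be configured as follows; the member serial numbers here are hypothetical placeholders, and your deployment should follow the EX4200 documentation:

```
set virtual-chassis preprovisioned
set virtual-chassis member 0 serial-number HYP0000001
set virtual-chassis member 0 role routing-engine
set virtual-chassis member 1 serial-number HYP0000002
set virtual-chassis member 1 role routing-engine
set virtual-chassis member 2 serial-number HYP0000003
set virtual-chassis member 2 role line-card
```

After committing, `show virtual-chassis` displays each member's ID, assigned role (master, backup, or linecard), and neighbor links. Because OSPF is included in the base license, Layer 3 at the access layer is then a one-line addition per interface, for example `set protocols ospf area 0.0.0.0 interface vlan.100`.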
The comprehensive NSM solution provides full life cycle management for all platforms in the data center LAN:

- Deployment: Provides a number of options for adding device configurations into the database, such as importing a list of devices, discovering and importing deployed network devices, manually adding a device and configuration in NSM, or having the device contact NSM to add its configuration to the database.
- Configuration: Offers central configuration to view and edit all managed devices. Provides offline editing/modeling of device configurations. Facilitates the sharing of common configurations across devices via templates and policies. Provides configuration file management for backup, versioning, configuration comparisons, and more.
- Monitoring: Provides centralized event log management with predefined and user-customizable reports, tools for auditing log trends and finding anomalies, and automatic network topology creation using standards-based discovery of Juniper and non-Juniper devices based on configured subnets. Offers inventory management for device management interface (DMI)-enabled devices, and Job Manager to view device operations performed by other team members.
- Maintenance: Delivers a centralized Software Manager to version-track software images for network devices. Other tools transform and validate between user inputs and device-specific data formats via DMI schemas.

Using open standards like SNMP and system logging, NSM supports third-party network management solutions from IBM, Computer Associates, InfoVista, HP, EMC, and others. Refer to the Network and Security Manager data sheet for a complete list of features, benefits, and specifications: www.juniper.net/us/en/products-services/security/nsmcm.
Junos Space
Another of IT's challenges has been adding new services and applications to meet ever-growing demand. Historically, this has not been easy, requiring months of planning and changes made only in strict maintenance windows. Junos Space is a new, open network application platform designed for building applications that simplify network operations, automate support, and scale services. Organizations can take control of their own networks through self-written programs or third-party applications from the developer community. Embodied in a number of appliances across Juniper's routing, switching, and security portfolio, Junos Space lets an enterprise seamlessly add new applications, devices, and device updates as they become available from Juniper and the developer community, without ever restarting the system, for full plug and play.
Several applications will be available on Junos Space throughout 2010. Junos Space applications introduced as of the first half of 2010 include:

- Junos Space Virtual Control (expected availability in Q3 2010) allows users to monitor, manage, and control the virtual network environments that support virtualized servers deployed in the data center. Virtual Control provides a consolidated solution for network administrators to gain end-to-end visibility into, and control over, both virtual and physical networks from a single management screen. By enabling network-wide topology, configuration, and policy management, Virtual Control minimizes errors and dramatically simplifies data center network orchestration, while at the same time lowering total cost of ownership by providing operational consistency across the entire data center network. Virtual Control also greatly improves business agility by accelerating server virtualization deployment. Juniper has also formed a new collaboration with VMware that takes advantage of its open APIs to achieve seamless orchestration across both physical and virtual network elements by leveraging Virtual Control. The combination of Junos Space Virtual Control and VMware vSphere provides automated orchestration between the physical and virtual networks, wherein a change in the virtual network is seamlessly carried over to the physical network and vice versa.
- Junos Space Ethernet Design (available now) enables end-to-end campus and data center network automation. Ethernet Design provides full automation, including configuration, provisioning, monitoring, and administration of large switch and router networks. Designed to enable rapid endpoint connectivity and operationalization of the data center, Ethernet Design uses best-practice configurations and scalable workflows to scale data center operations with minimal operational overhead. It is a single-pane-of-glass platform for end-to-end network automation that improves productivity via a simplified "create once, use extensively" configuration and provisioning model.
- Junos Space Security Design (available now) enables fast, easy, and accurate enforcement of security state across the enterprise network. Security Design enables quick conversion of business intent into device-specific configuration, and it enables auto-configuration and provisioning through workflows and best practices to reduce the cost and complexity of security operations.
- Service Now and Junos Space Service Insight (available now) consist of Junos Space applications that enable fast and proactive detection, diagnosis, and resolution of network issues. (See Automated Support with Service Now for more details.)
- Junos Space Network Activate (expected availability Q4 2010) facilitates fast and easy setup of VPLS services, and allows for full lifecycle management of MPLS services.

In addition, the Junos Space Software Development Kit (SDK) will be released to enable development of a wide range of third-party applications covering all aspects of network management. Junos Space is designed to be open and provides northbound, standards-based APIs for integration with third-party data center and service provider solutions. Junos Space also includes DMI based on NETCONF, an IETF standard, which can enable management of DMI-compliant third-party devices. Refer to the following URL for more information on Junos Space applications: www.juniper.net/us/en/productsservices/software/junos-platform/junos-space/applications.
[Figure: Automated support with Service Now: devices with AI-Scripts installed connect through a gateway over the Internet to Juniper]
Training
The simplicity of Juniper's implementations typically minimizes the need for extensive training to accompany deployment; however, Juniper also offers a variety of training resources to accelerate deployments. To start with, standardization of protocols within the network typically eases introduction, since basic constructs are similar and interoperability has usually been tested and proven ahead of time by Juniper. Beyond the protocols, differences in the command-line interface (CLI) are usually easier to navigate than people initially expect. Time after time, people familiar with other CLIs find themselves able to make the transition quickly due to the consistent, intuitive nature of the Junos operating system's implementation (it is easy to learn and use). Junos OS also has a tremendous amount of flexibility and user support built in. For example, to ease migration from Cisco's IOS, there is a Junos OS command to display a configuration file in a format similar to IOS. Additionally, hands-on training is available in the form of a two-day boot camp, and customized training can be mapped to any enterprise's specific environment. Training not only offers an opportunity to raise the project team's skill level, but also to gain experience with any potential configuration complexities before entering the implementation phase of a project. Junos OS also provides embedded automation capabilities. A library of scripts that automate common operations tasks is readily available online for viewing and downloading. Categorized by function, the script with the best fit can easily be found. Refer to the Junos OS Script Library for a complete list: www.juniper.net/us/en/community/junos/script-automation/library.
Access layer design considerations:
- Top-of-rack or end-of-row deployment
- Cabling changes
- Traffic patterns among servers
- VLAN definitions for application/service segmentation
- L2 domain for VM workload mobility
- Server connectivity speed: GbE/10GbE
- Application latency requirements
- Network interface card (NIC) teaming
- Uplinks
  - Uplink oversubscription ratios
  - Number and placement of uplinks
  - GbE/10GbE uplinks
  - Use of L2 or L3 for uplinks
- IEEE Spanning Tree Protocol (STP)
  - Redundant trunk groups as STP alternative
- Link aggregation sizing/protocol
- QoS
  - Classification and prioritization
  - Policing
- High availability (HA) requirements
- Multicast scalability/performance requirements
- Interoperability testing definitions
- IOS configuration for conversion to Junos OS
Aggregation layer design considerations:
- Sufficiency of existing physical capacity
- Interior gateway protocol (IGP) and exterior gateway protocol (EGP) design
- Multicast requirements
- L2/L3 domain boundaries
- IEEE STP
- Default gateway/root bridge mapping
- Virtualized services support requirements (VPN/VRF)
- Uplinks
  - Density
  - Speed
  - Link aggregation
- Scaling requirements for MAC addresses and ACLs
- Latency requirements
- HA features
- QoS
- Security policies
  - Existing policy migration
  - ACL migration
  - New policy definitions
- Interoperability testing definitions
- IOS configuration for conversion to Junos OS
Core layer design considerations:
- Sufficiency of existing physical capacity
- VLAN definitions for application/service segmentation
- VLAN/VPN routing and forwarding (VRF) mapping
- Latency/performance requirements
- Traffic engineering/QoS requirements
- Connecting with legacy/proprietary IGP protocols
- Data Center Interconnect (DCI) method
  - Stretching VLANs
  - Physical layer extension via dense wavelength-division multiplexing (DWDM)
  - VPLS/MPLS
- Interoperability testing definitions
- IOS configuration for conversion to Junos OS
Security design considerations:
- Capacity and throughput requirements
- Compliance and audit requirements
- Existing security policy rules migration
- Zone definitions
- Virtual machine security requirements for virtual firewall
- Interoperability testing definitions
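Several of the checklists above call out uplink oversubscription ratios. The ratio is simply total server-facing capacity divided by total uplink capacity; a minimal Python sketch (the port counts and speeds below are illustrative, not taken from this guide):

```python
def oversubscription_ratio(server_ports: int, server_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of downstream (server-facing) to upstream (uplink) capacity."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 GbE server ports on a top-of-rack switch with two 10GbE uplinks
ratio = oversubscription_ratio(48, 1.0, 2, 10.0)
print(f"{ratio:.1f}:1 oversubscription")  # 2.4:1
```

Whether a given ratio is acceptable depends on the traffic patterns among servers identified in the same checklist.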
Figure 10: The I2J input page for converting IOS to Junos OS

Data Center Migration Insertion Points: Best Practices and Installation Tasks
The legacy three-tier design found in many of today's data centers was depicted in Figure 1. This is the baseline for the layer insertion points addressed in this document. For a specific insertion point, such as the access layer, the recommended Juniper best practices pertaining to that layer are provided first, followed by the recommended preinstallation, installation, and post installation tasks. Recommended best practices and installation-related tasks focus primarily on currently shipping products and capabilities. The next revision of the Data Center LAN Migration Guide will include more implementation detail on the switching platforms and software introduced as part of Juniper's New Network for the Data Center announcements of May 2010: www.juniper.net/us/en/solutions/enterprise/data-center. A dedicated Troubleshooting chapter detailing Juniper's recommended guidelines for the most commonly encountered migration and installation issues is also included in this guide.
EX4500 and EX4200 switches will be able to work in the same Virtual Chassis, allowing companies to mix GbE and 10GbE attached devices within the same fabric. When the requirement is for GbE server connectivity, the EX4200 would be the platform of choice, where up to 480 GbE server connections may be accommodated within a single Virtual Chassis.
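The 480-connection figure follows from simple Virtual Chassis arithmetic; this sketch assumes a 10-member maximum per Virtual Chassis with 48 GbE ports per EX4200 (the member limit is our understanding of the platform, not stated in the text above):

```python
# Virtual Chassis capacity arithmetic behind the 480-port figure.
# Assumption: up to 10 member switches, each EX4200 offering 48 GbE ports.
members_per_vc = 10
gbe_ports_per_member = 48
total_server_ports = members_per_vc * gbe_ports_per_member
print(total_server_ports)  # 480 GbE server connections per Virtual Chassis
```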
Design Options and Best Practices: New Application/Technology Refresh/Server Virtualization Trigger Events
When deploying a new access layer as part of this trigger event, there are issues related to uplink oversubscription, STP, and Virtual Chassis. Understanding design options and best practices for each of these topics is important to a successful deployment.
Inverted U Design
An enterprise can create an STP-free data center topology when using an L2 domain to facilitate virtual machine mobility, depending on how servers are connected to the access layer switches. An inverted U design can be used so that no L2 loops are created. While technologies like STP or RTG aren't required to prevent loops in this design, best practice is still to provision STP to guard against accidental loops caused by incorrect configuration.
There are two basic options to provision this design: two physically separate servers connected to two separate access layer switches, or two separate Virtual Chassis units in a Juniper deployment as depicted in Figure 11.
[Figure 11: Inverted U design, Virtual Chassis variant: EX82XX aggregation/core switches connected via 802.3ad LAG and 802.1Q trunks to two EX4200 Virtual Chassis in the access layer]
[Figure 12: Inverted U design, standalone-switch variant: EX82XX aggregation/core switches connected via 802.3ad LAG and 802.1Q trunks to two EX4200 access switches, with servers dual-homed via pNIC1/pNIC2]
For more detailed information on NIC teaming using VMware Infrastructure 3, refer to:
www.vmware.com/technology/virtual-networking/virtual-networks.html
www.vmware.com/files/pdf/virtual_networking_concepts.pdf
The main advantages of the inverted U design are that all ports on the aggregation switches have usable bandwidth 100% of the time, traffic flows between access and aggregation are always deterministic, and latency is deterministic for all Virtual Chassis connected to a single aggregation or core switch.
Figure 13 below depicts an EX4200 Virtual Chassis top-of-rack access layer deployment.
[Figure 13: Top-of-rack deployment: EX4200 Virtual Chassis in the access layer uplinked to legacy L2/L3 switches in the aggregation layer]
PoC lab could include:
- Interface connectivity
- Trunking and VLAN mapping
- Spanning-tree interoperability with aggregation layer
- QoS policy (classification marking, rate limiting)
- Multicast
Provision the new access layer switches into any existing terminal servers for out-of-band CLI access via SSH.
Network Automation: The approach to use for automating management in these scenarios depends on which other vendors' equipment will be collocated in the data center, what other multivendor tools will be employed (Tivoli, OpenView, etc.), and how responsibilities will be partitioned across teams for different parts of the infrastructure. If the insertion is relatively small and contained, and other tools supporting each vendor's platforms are already deployed, testing for compatible integration of the Juniper equipment is the best PoC step. However, if this insertion is the start of a longer term, more expanded, strategic deployment of Juniper infrastructure into multiple PODs and sites (because of the importance of new applications, for example), then it may make sense to also test for integration of Junos Space into the environment, because of the likely value of Space to the installation over time. Ethernet Design, Network Activate, or other Junos Space data center management applications could be considered. Space could be deployed as either a physical or a virtual appliance, depending on the allocation of responsibilities for management and the allocation of resources within the design. VM requirements for running Junos Space are 8 GB RAM and 40 GB hard disk in a production environment; if the Junos Space VM is installed for lab/trial purposes, 2 GB RAM and 8 GB hard disk space is sufficient. For more information on deploying the Junos Space platform and managing nodes in the Junos Space fabric, please refer to the technical documentation: www.juniper.net/techpubs/en_US/junos-space1.0/information-products/index-junos-space.html.
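The Junos Space VM sizing figures above can be captured in a small validation helper; a sketch in which only the thresholds come from the text, while the function and profile names are our own:

```python
# Sizing figures quoted above: 8 GB RAM / 40 GB disk for production,
# 2 GB RAM / 8 GB disk for a lab or trial deployment.
REQUIREMENTS = {
    "production": {"ram_gb": 8, "disk_gb": 40},
    "lab":        {"ram_gb": 2, "disk_gb": 8},
}

def meets_requirements(ram_gb: float, disk_gb: float,
                       profile: str = "production") -> bool:
    """Check a candidate VM host against the documented minimums."""
    req = REQUIREMENTS[profile]
    return ram_gb >= req["ram_gb"] and disk_gb >= req["disk_gb"]

print(meets_requirements(4, 40))                 # False: too little RAM for production
print(meets_requirements(4, 40, profile="lab"))  # True
```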
Installation
After comprehensive PoC testing, physical installation of the new POD should be relatively straightforward. As previously mentioned, if the installation only involves adding new access layer switches to existing racks, switchover becomes a matter of changing server cables from the old switches to the new ones. If a server refresh is combined with a switch refresh, those components should be physically installed in the new racks. Uplink interfaces are then connected to the aggregation switch(es).
Post Installation
Procedures similar to those for installing a new POD within a single vendor environment should be employed after physical installation is complete. As a best practice, verify:
- Access to the new POD via CLI or network management tools.
- Monitoring and alerting through OSS tools is functional.
- Security monitoring is in place and working. To limit false positives, security tuning should be conducted if a Security Incident and Event Manager (SIEM) system such as the STRM Series is in place.
- Interface status for server connections and uplinks via CLI or network management tools.
- L2/STP state between access and aggregation tiers.
- QoS consistency between access and aggregation tiers.
- Multicast state (if applicable).
- Traffic is being passed, via ping tests and traffic monitoring tools.
- Applications are running as anticipated, via end user tests, performance management, and other verifications.
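Verification steps like these map naturally onto a scripted checklist; a sketch in Python, where the check names and pass/fail lambdas are placeholders for real probes (SNMP polls, ping tests, CLI queries):

```python
# Sketch: drive post-installation verification as named checks, each a
# callable returning True/False, and report any failures.
def run_checklist(checks):
    failures = [name for name, check in checks if not check()]
    return failures  # an empty list means every verification passed

checks = [
    ("CLI access to new POD",      lambda: True),   # placeholder probes
    ("Interface status (uplinks)", lambda: True),
    ("L2/STP state",               lambda: True),
    ("QoS consistency",            lambda: False),
]
print(run_checklist(checks))  # ['QoS consistency']
```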
Significant levels of integration can be achieved in fault, performance, and network configuration and change management. As an example, IBM's Tivoli NetCool is a well established, leading multivendor solution for fault management, and Juniper licenses and integrates NetCool into its Junos Space offering. There are also well established multivendor solutions for performance management, including IBM NetCool Proviso, CA eHealth, InfoVista, and HP OpenView, that an enterprise can consider for managing network and application performance across both incumbent vendor and Juniper network infrastructures. There are currently over a dozen Network Configuration and Change Management (NCCM) vendors with multivendor tools. These tools bring more structure to the change management process and also enable automated configuration management. NCCM vendors include IBM Tivoli, AlterPoint, BMC (Emprisa Networks), EMC (Voyence), HP (Opsware), and others. Prior to introducing Juniper into an existing single vendor infrastructure, Juniper recommends replacing manual network configuration management processes and vendor-specific tools with automated multivendor NCCM tools. It is also good practice to establish standard network device configuration policies that apply to all vendors in the network infrastructure. Automated network configuration management is more efficient and also reduces operational complexity. An IT management solution should be built around the standards outlined by the Fault, Configuration, Accounting, Performance, Security (FCAPS) model. Refer to the following URL for more information: http://en.wikipedia.org/wiki/FCAPS.
The most common case will be the use of VPLS to provide stretched VLANs between areas of a large data center network, or between multiple distant data centers, using VPLS (over MPLS) to create a transparent extension of the LAN that supports nonstop application services (transparent failovers), transaction mirroring, database backups, and dynamic management of virtual server workloads across multiple data center sites. In these cases, the core nodes will include VPLS instances matching the L2 topology and VLAN configurations required by the applications, as well as the appropriate implementation of MPLS between the core nodes and the rest of the organization's routed IP/MPLS network. This design will include ensuring highly available, resilient access from the L2 access tier into the elastic L2 infrastructure enabled by VPLS in the core, along with the use of appropriate traffic engineering and HA features of MPLS to provide the proper QoS and degree of availability for the traffic carried on the transparent VLAN network. Details on these design points are included in the Six Process Steps for Ensuring MPLS Migration section, which covers incorporating multiple sites into the data center network design using MPLS.
The scenario for transitioning the core data center network in a design upgrade triggered by a consolidation project should encompass the following kinds of tasks, suited to each organization's case: Ensure that the power, cooling, airflow, physical rack space, and cabling required for any new equipment have been designed, ordered, and installed (however your organization divides these responsibilities among departments, suppliers, and integrators). Size the new/upgraded switching platforms using your organization's policy for additional performance and capacity headroom for future growth. The design should include a pair of switches that can eventually serve as the new collapsed core/aggregation layer. The initial design can be tuned to the exact role the switches will perform. For example, if they will be focused on pure core functions in the initial phase, design can focus on the capacity and functionality required for that role (e.g., IGP policies and area design appropriate to a core data center switch). If they will be focused on pure aggregation functions initially, design can focus on L2 and L3 interface behaviors appropriate to that role. Or, if they will perform a blended core/aggregation role (as would be appropriate in many two-tier data center networks), a mix of L2 and L3 functions can be designed to fit the network's requirements. If the new switches are aggregating existing access layer switches, industry standard protocols (such as 802.1D, 802.1Q, 802.1s, 802.3ad, etc.) should be used to ensure interoperability.
The configuration checklist should include:
- IGP and EGP requirements such as area border roles, routing metrics and policies, and possible route redistributions
- L2/L3 domain demarcations
  - VLAN to VRF mapping
  - Virtualized service support requirements (VPN/VRF)
  - Default gateway/root bridge mapping
  - Hot Standby Router Protocol (HSRP) to Virtual Router Redundancy Protocol (VRRP) mappings
- Uplink specifications
  - Density
  - Speeds (number of GbE and 10GbE links)
  - Link aggregations
  - Oversubscription ratios
- Scaling requirements for MAC addresses
- Firewall filters (from IOS ACL mapping)
- QoS policies
- Multicast topology and performance
- Audit/compliance requirements, clarified with any logging or statistics collection functions designed in
- IOS configurations mapped to Junos OS as outlined in the section on IOS to Junos OS translation tools
As noted earlier, a PoC lab could be set up to test feature consistency and implementation in the new Juniper infrastructure. Testing could include:
- Interface connections
- Trunking and VLAN mapping
- STP interoperability with any existing switches
- QoS policy (classification marking, rate limiting)
- Multicast
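The checklist item on mapping IOS ACLs to Junos OS firewall filters can be illustrated with a toy translation. This sketch handles only one simple permit form; real conversions (e.g., via the I2J tool) cover far more syntax, and the filter term name here is invented:

```python
# Toy sketch: render one simple IOS extended-ACL entry as a Junos OS
# firewall-filter term. The term name is invented for illustration.
def acl_to_filter_term(proto: str, dest_ip: str, dest_port: int,
                       term: str = "t1") -> str:
    return (
        f"term {term} {{\n"
        f"    from {{\n"
        f"        protocol {proto};\n"
        f"        destination-address {dest_ip}/32;\n"
        f"        destination-port {dest_port};\n"
        f"    }}\n"
        f"    then accept;\n"
        f"}}"
    )

# IOS: access-list 101 permit tcp any host 10.0.0.5 eq 80
print(acl_to_filter_term("tcp", "10.0.0.5", 80))
```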
Installation Tasks
Refer to Figure 14 when considering the tasks described in this section:
[Figure 14: New EX8216 core switch (West) installed alongside the legacy switch (West)]
Post Installation
As previously noted, procedures similar to those employed when installing a new aggregation switch in a single vendor environment can be used, after physical installation is complete, to verify successful operations. As a best practice, verify:
- Access to the new configuration via CLI or network management tools.
- Interface status for server connections and uplinks via CLI or network management tools.
- L2/STP status between network tiers.
- QoS consistency between network tiers.
- Multicast state (if applicable).
- Traffic passing via ping tests.
- Application connectivity and flows with statistics or end user tests (or both).
Consolidating and Virtualizing Security Services in the Data Center: Installation Tasks
In addition to cyber theft and increasing malware levels, organizations must guard against new vulnerabilities introduced by data center technologies themselves. To date, security in the data center has been applied primarily at the perimeter and server levels. However, this approach isn't comprehensive enough to protect information and resources in new system architectures. In traditional data center models, applications, compute resources, and networks have been tightly coupled, with all communications gated by security devices at key choke points. However, technologies such as server virtualization and Web services eliminate this coupling and create a mesh of interactions between systems that introduces subtle and significant new security risks within the interior of the data center. For a complete discussion of security challenges in building cloud-ready, next-generation data centers, refer to the white paper, Security Considerations for Cloud-Ready Data Center: www.juniper.net/us/en/local/pdf/implementationguides/8010046-en.pdf. A key requirement for this insertion point is for security services platforms to provide the performance, scalability, and traffic visibility needed to meet the increased demands of a consolidated data center. Enterprises deploying platforms which do not offer the performance and scalability of Juniper Networks SRX Series Services Gateways and their associated management applications face complex appliance sprawl and management challenges, where numerous appliances and tools are needed to meet requirements. This is a more costly, less efficient, and less scalable approach.
[Figure: SRX5800 Services Gateway deployed with EX82XX core switches]
- Ensure that the appropriate power, cooling, airflow, physical rack space, and cabling required to support the new equipment have been ordered and installed.
- Ensure that the security tier is sized to meet the organization's requirements for capacity headroom for future growth.
- Define and provision the routing/switching infrastructure first (see prior section). This sets the L3/L2 foundation domains upon which the security zones the SRX Series enforces will be built. The SRX Series supports a pool of virtualized security services that can be applied to any application flow traversing the data center network. Setting up the network with this foundation of subnets and VLANs feeding the dynamic security enforcement point segments the data center resources properly and identifies what is being protected and what level of protection is needed. With SOA, for example, there are numerous data flows between servers within the data center and perimeter, but security is often insufficient for securing these flows. Policies based on role/function, applications, business goals, or regulatory requirements can be implemented using a mix of VLAN, routing, and security zone policies, enabling the SRX Series to enforce the appropriate security posture for each flow in the network.
- Scope the performance and scalability requirements. SRX Series devices can be paired in a cluster to scale to 120 Gbps of firewall throughput, while also providing HA.
- Define virtual machine security requirements. Juniper's partnership with Altor Networks is hypervisor neutral, eliminating VM security blind spots. For more information on Altor's virtual firewall solution, refer to Chapter 2 (Altor Networks Virtual Firewall).
In the preinstallation phase, security policies must be developed. This typically takes time and can be complex to coordinate. Juniper Professional Services can be used as a resource to help analyze and optimize security policies at all enforcement points. The full suite of Juniper Networks Professional Services offerings can be found at: www.juniper.net/us/en/products-services/consulting-services. Establish a migration plan, identifying a timeline and key migration points, if all appliances cannot be migrated in a flash cut. As with the other insertion points, PoC testing can be done and could include:
- Establishing the size of the target rule base to be used post conversion
- Checking the efficacy of the zone definitions
- Determining the effectiveness of the IPS controls
- Determining the suitability and implementation of the access controls to be used
Installation Tasks
As with the aggregation/core insertion point, it is important to have a fallback position to the existing appliances in the event of any operational issues. The current firewall appliances should be kept on hot standby. The key to a successful migration is to have applications identified for validation and a clear test plan with success criteria. There are three typical options for migration, with option 1 being the most commonly used.
Migration Test Plan (Option 1)
- Test failover by failing the master firewall (legacy vendor) to the backup. This confirms that HA works and that the other devices in the path are working as expected.
- Replace the primary master, which was just manually failed, with the Juniper firewall. Traffic should still be flowing through the secondary, which is the legacy vendor firewall.
- Turn off the virtual IP (VIP) address, or bring the interface down on the backup (legacy vendor firewall), and force everything through the new Juniper firewall. A longer troubleshooting window helps to ensure that the switchover has happened successfully.
- Also, turn off TCP synchronization checks (syn-checks) initially so that already established sessions are processed, since the TCP handshake has already occurred on the legacy firewall. This ensures that the newly installed Juniper firewall does not drop all active sessions as it starts up.
Migration Test Plan (Option 2)
This is essentially a flash cut option in which an alternate IP address is configured for the new firewall, and the routers and hosts then point to the new firewall. If there is an issue, gateways and hosts can be provisioned to fall back to the legacy firewalls. With this option, organizations will sometimes choose to leave IPsec VPNs or other terminations on their legacy firewalls and gradually migrate them over a period of time.
Migration Test Plan (Option 3)
This option is typically used by financial organizations due to the sensitive nature of their applications.
A Switched Port Analyzer (SPAN) session will be set up on the relevant switches with traffic sent to the Juniper firewalls, where traffic is analyzed and session tables are built. This provides a clear understanding of traffic patterns and provides more insight into the applications being run. This option also determines whether there is any interference due to filter policies or IPS, and it creates a more robust test and cutover planning scenario. This option typically takes more time than the other options, so organizations typically prefer to go with option 1. Again, this is a more common option for companies in the financial sector.
Post Installation
As previously noted, procedures similar to those used when installing new security appliances in a single vendor environment can be used after physical installation is complete. As a best practice:
- Verify access via CLI or network management tools.
- Verify interface status via CLI or network management tools.
- Verify that traffic is passing through the platform.
- Verify that rules are operational and behaving as they should.
- Confirm that Application Layer Gateway (ALG) policies/IPS are stopping anomalous or illegal traffic at the application layer, while passing permitted traffic.
- Confirm that security platforms are reporting appropriately to a centralized logging or SIEM platform.
Link-level redundancy in data center networks can be implemented with the following network technologies:
- Link Aggregation Group (LAG)
- Redundant Trunk Groups (RTGs)
- Spanning Tree Protocol (STP) and its variations
- Bidirectional Forwarding Detection (BFD)
- MPLS
We have already discussed LAG, RTG, and STP earlier in this guide. BFD is rapidly gaining popularity in data center deployments because it is a simple protocol that aids rapid network convergence (detection intervals of 30 to 300 ms, resulting in subsecond convergence times). BFD is a simple, low layer protocol involving a hello mechanism between two devices. The communication can run across directly connected links or across a virtualized communications path such as MPLS.
Node-level resiliency can be achieved using the following technologies:
- Graceful Routing Engine switchover (GRES)
- Graceful restart
- Nonstop active routing (NSR) and nonstop bridging (NSB)
GRES is a feature used to handle planned and unplanned platform restarts gracefully, without disruption, by deploying a redundant Routing Engine in a chassis. The Routing Engines synchronize and share their forwarding state and configuration. Once synchronized, if the primary Routing Engine fails due to a hardware or software problem, the secondary Routing Engine comes online immediately, resulting in minimal traffic forwarding interruption.
Rather than being a feature, graceful restart is a standards-based protocol that relies on routing neighbors to orchestrate and help a restarting router continue forwarding. There is no disruption in the control or forwarding path when the graceful restart node and its neighbors participate fully and employ the standard procedures.
NSR builds on GRES and implements a higher level of synchronization between the Routing Engines. In addition to the synchronization and checkpointing between Routing Engines that GRES achieves, NSR employs additional protective steps and results in no disruption to the control or data planes, hiding the failure from the rest of the network. And it does not require any help from its neighbors to achieve these results. Note that graceful restart and NSR are mutually exclusive; they are two different means of achieving the same high availability goal. NSB is similar to GRES and preserves interface and Layer 2 protocol information. In the event of a planned or unplanned disruption to the primary Routing Engine, forwarding and bridging continue during the switchover, resulting in minimal packet loss.
Node-level HA includes many aspects, starting with the architecture of the node itself and ending with the protocols the node uses to network with other components. Paralleling the Open Systems Interconnection (OSI) Reference Model, you can view the network infrastructure components starting from the physical layer, which is the node's internal architecture and protocols, and ending with the upper tiers, which include components such as OSPF, IS-IS, BGP, MPLS, GRES, NSR, etc. No single component provides HA; it is all of the components working together in one architecture that results in high availability. For more detailed information on HA features on Juniper platforms, refer to: www.juniper.net/techpubs/en_US/junos10.1/information-products/topic-collections/swconfig-high-availability/frameset.html.
To complement the node-level availability mechanisms highlighted above, devices and systems deployed at critical points in the data center design should include redundancy of important common equipment such as power supplies, fans, and Routing Engines, so that the procedures mentioned above have a stable hardware environment to build upon. In addition, the software/firmware in these devices should be based on a modular architecture to prevent software failures or upgrade events from impacting the entire device. There should also be a clean separation between control plane and data processes to ensure system availability. Junos OS is an example of a multitasking OS that operates in this manner, ensuring that a failure in one process doesn't impact any others.
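BFD's contribution to fast convergence can be quantified: a neighbor is declared down after a configured number of consecutive missed hellos, so worst-case detection time is the transmit interval multiplied by the detect multiplier. A quick sketch (the 100 ms interval and multiplier of 3 are illustrative values, not recommendations from this guide):

```python
def bfd_detection_time_ms(tx_interval_ms: int, multiplier: int) -> int:
    """Worst-case BFD failure detection time: a neighbor is declared down
    after `multiplier` consecutive missed hellos."""
    return tx_interval_ms * multiplier

# Example: 100 ms hello interval with a detect multiplier of 3
print(bfd_detection_time_ms(100, 3))  # 300 (ms), well under one second
```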
Best Practices Design to Support Workload Mobility Within and Between Data Centers
In current and future data centers, application workloads will increasingly be handled on an infrastructure of extensively virtualized servers, storage, and supporting network infrastructure. In these environments, operations teams will frequently need to move workloads from test into production, and distribute loads among production resources based on performance, time of day, and other considerations. This kind of workload relocation may occur between servers within the same rack, within the same data center, and increasingly, between multiple data centers, depending on the organization's size, support for cloud computing services for bursting overloads, and other similar considerations. In general, we refer to this process of relocating virtual machines and their supporting resources as workload mobility. Workload mobility plays an important part in maintaining business continuity and availability of services, and, as such, is discussed within this section of the guide. Workload mobility can be deployed in two different ways, as shown in Figure 16.
[Figure 16: Workload mobility deployments: rack to rack between Virtual Chassis within a data center, and cloud to cloud between data centers over VPLS]
At the level of the data center network core and the transparent virtual WAN that an organization can use to support its business continuity and workload mobility goals, MPLS provides a number of key advantages over any other alternative:
- MPLS virtualization enables the physical network to be run as many separate virtual networks. The benefits include cost savings, improved privacy through traffic segmentation, improved end user experience with traffic engineering and QoS, and improved resiliency with functionality such as MPLS fast reroute and BFD. This can be done in a completely private network context (e.g., the enterprise owns the entire infrastructure), or it can be achieved through the interworking of the organization's private data center and WAN infrastructures with an appropriately deployed carrier service.
- VPLS provides Ethernet-based point-to-point, point-to-multipoint, and multipoint-to-multipoint (full mesh) transparent LAN services over an IP/MPLS infrastructure. It allows geographically dispersed LANs to connect across an MPLS backbone, so that connected nodes (such as servers) operate as if they were on the same Ethernet LAN. VPLS thus provides an efficient and cost-effective method for communicating at L2 across two or more data center sites. This can be useful for transaction mirroring in active/active or other backup configurations, and it is necessary for supporting workload mobility and migration of virtual machines between locations over a WAN.
- MPLS can provide private L3VPN networks between data center sites that share the same L3 infrastructure. A composite, virtualized L2 and L3 infrastructure can thus be realized. Very useful security properties can be achieved in such a design as well. For example, by mapping L3VPNs to virtual security zones in an advanced firewall such as the SRX Series, many security policies can be selectively layered on the traffic.
Also in support of business continuity, MPLS traffic engineering (TE) and fast reroute capabilities combine sophisticated QoS and resiliency features into a multiservice packet core for superior performance and economics. TE can be used to support real-time data replication and transaction mirroring, along with service-level agreement (SLA) protection for real-time communications such as video conferencing and collaboration services. Fast reroute delivers rapid path protection in the packet-based network without requiring redundant investments in SONET or SDH level services (i.e., superior performance at lower cost). For workload mobility that involves extending a Layer 2 domain across data centers to support relevant applications like VMware VMotion, archiving, backup, and mirroring, L2VPNs using VPLS can be used between data centers. VPLS allows the connected data centers to be in the same L2 domain, while maintaining the bandwidth required for backup purposes. This ensures that other production applications are not overburdened.
Best Practices for Incorporating MPLS/VPLS in the Data Center Network Design
Current L2/L3 switching technologies designed for the LAN do not scale well with the appropriate levels of rerouting, availability, security, QoS, and multicast capabilities to achieve the required performance and availability. As a result, when redesigning or upgrading the data center, an upgrade to MPLS is frequently appropriate and justified to meet business operational demands and cost constraints. MPLS often simplifies the network for the data center, removing costly network equipment and potential failure points while providing complete network redundancy and fast rerouting. When fine-grained QoS is required with traffic engineering for the data center, RSVP should be used to establish bandwidth reservations based upon priorities, available bandwidth, and server performance capacities. MPLS-based TE is a tool available to data center network administrators that is not present in common IP networks. Furthermore, MPLS virtualization capabilities can be leveraged to segment and secure server access, becoming a very important part of maintaining a secure data center environment. For this section of the Data Center LAN Migration Guide, the prior construct of best practice, preinstall, install, and post install is combined into six process steps for migrating to MPLS, keeping in mind that VPLS runs over an IP/MPLS network.
Switching across data centers using VPLS is depicted below in Figure 17.
[The figure shows M Series and MX Series routers in the core interconnecting Data Center 1 and Data Center 2, each site with EX4200 Virtual Chassis and EX4500 access switches; applications on VLAN 10 and VLAN 20 are mirrored between the sites over VPLS]
Figure 17: Switching across data centers using VPLS
Six Process Steps for Migrating to MPLS
The following approach using a phased series of steps is one we have found useful in many enterprises. However, there may be specific circumstances which could dictate a different approach in a given case. Step 1: Upgrade the IP network to MPLS-capable platforms, yet continue to run it as an IP network. In step one, upgrade the routers connecting the data centers to routers capable of running MPLS, yet configure the network as an IP network without MPLS. Use this time to verify a stable and properly performing inter-data center connection. This will provide the opportunity to have the MPLS network in place and to be sure routers are configured and working correctly to support IP connectivity. If you're presently running Enhanced IGRP (EIGRP), use this opportunity to migrate to OSPF or one of the other L3 protocols that will perform better with MPLS. Depending upon how many data centers will be interconnected, once you've migrated to OSPF and/or IS-IS, it is a good time to enable BGP as well. BGP can be used for automatic MPLS label distribution. Juniper has multiple sources of design guidelines and practical techniques for accomplishing these tasks, which can be delivered in either document-based or engineering professional services modes. Please refer to the Additional Resources sections in Chapter 5 for specific URLs.
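A minimal Step 1 configuration for the routers connecting the data centers might resemble the following sketch. The interface names and addresses are example values only; an actual design would reflect the site's addressing plan:

```
protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/1.0;     # inter-data center link (example interface)
            interface lo0.0 {
                passive;              # advertise the loopback used for BGP peering
            }
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 10.0.0.1;   # this router's loopback (example value)
            neighbor 10.0.0.2;        # remote data center router's loopback (example value)
        }
    }
}
```

Running this as a plain IP network first lets you verify IGP adjacencies and BGP sessions before any MPLS state is introduced.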
Step 2: Build the MPLS layer. Once you have migrated to an MPLS-capable network and tested and verified its connectivity and performance, activate the MPLS overlay and build label-switched paths (LSPs) to reach other data centers. Label distribution is automated with the use of LDP, or of RSVP with extensions to support the creation and maintenance of LSPs and to create bandwidth reservations on LSPs (RFC 3209). BGP can also be used to support label distribution at the customer's choice. The choice of protocol for label distribution on the network depends on the needs of the organization and the applications supported by the network. If traffic engineering or fast reroute is required on the network, you must use RSVP with extensions for MPLS label distribution. Indeed, it is the decision of whether or not to traffic engineer the network or require fast reroute that frequently decides between LDP and RSVP for MPLS label distribution. Step 3: Configure MPLS VPNs. MPLS VPNs can segregate traffic based on departments, groups, or users, as well as by applications or any combination of user group and application. Let's take a step back and look at why we call MPLS virtualized networks VPNs. First, they are networks because they provide connectivity between separately defined locations. They are private because they have the same properties and guarantees as a private network in terms of network operations and in terms of traffic forwarding. And lastly, they are virtual because they may use the same transport links and routers to provide these separated transport services. Since each network to be converged onto the newly built network has its own set of QoS, security, and policy requirements, you will want to define MPLS-based VPNs that map to the legacy networks already built.
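For Step 2, a sketch of an RSVP-signaled LSP between two data center PE routers follows. The interface name and loopback address are assumptions for illustration; note that family mpls must also be enabled on each core-facing interface:

```
protocols {
    rsvp {
        interface ge-0/0/1.0;              # enable RSVP on core-facing links
    }
    mpls {
        interface ge-0/0/1.0;
        label-switched-path dc1-to-dc2 {   # hypothetical LSP name
            to 10.0.0.2;                   # loopback of the remote PE (example value)
        }
    }
    ospf {
        traffic-engineering;               # distribute TE information in the IGP
        area 0.0.0.0 {
            interface ge-0/0/1.0;
        }
    }
}
interfaces {
    ge-0/0/1 {
        unit 0 {
            family mpls;                   # MPLS must be enabled per interface as well
        }
    }
}
```

An LDP-based alternative would replace the rsvp stanza with an ldp stanza listing the same interfaces, at the cost of traffic engineering and fast reroute capability.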
MPLS VPNs can be defined by:
- Department, business unit, or other function: Where there is a logical separation of traffic that goes to a more granular level than the network, perhaps down to the department or business unit, application, or specific security requirement level, you will want to define VPNs on the MPLS network, each supporting the logical separation required for a unique QoS, security, and application support combination.
- Service requirements: In the context of this Data Center LAN Migration Guide, VPLS makes it appear as though there is a large LAN extended across the data center(s). VPLS can be used for IP services, or it can be used to interconnect with a VPLS service provider to seamlessly network data centers across the cloud. IP QoS in the LAN can be carried over the VPLS service with proper forwarding equivalence class (FEC) mapping and VPN configuration. If connecting to a service provider's VPLS service, you will either need to collocate with the service provider or leverage a metro Ethernet service, as VPLS requires an Ethernet hand-off from the enterprise to the service provider.
- QoS needs: Many existing applications within your enterprise may run on separate networks today. To be properly supported, these applications and their users make specific and unique security and quality demands on the network. This is why, as a best practice, it is suggested that you start by creating VPNs that support your existing networks. This is the minimum number of VPNs you will need.
- Security requirements: Security requirements may be defined by user groups such as those working on sensitive and confidential projects, by compliance requirements to protect confidential information, and by application to protect special applications. Each special security zone can be sectioned off with enhanced security via MPLS VPNs.
- Performance requirements: Typically, your applications and available bandwidth will determine traffic engineering and fast reroute requirements; however, users and business needs may impact these considerations as well.
- Additional network virtualization: Once MPLS-based VPNs are provisioned to support your existing networks, user groups, QoS, and security requirements, consideration should be given to new VPNs that may be needed. For example, evolving compliance processes supporting requirements such as Sarbanes-Oxley or the Health Insurance Portability and Accountability Act (HIPAA) may require new and secure VPNs. Furthermore, a future acquisition of a business unit may require network integration, and this can easily be performed on the network with the addition of a VPN to accommodate the acquisition.
Step 4: Transfer networks onto the MPLS VPNs. All of the required VPNs do not have to be defined before initiating the process of migrating the existing network(s) to MPLS. In fact, you may wish to build the first VPN and then migrate the related network, then build the second VPN and migrate the next network, and so on. As the existing networks converge onto the MPLS network, monitor network performance and traffic loads to verify that expected transport demands are being met. If performance or traffic loads vary from expected results, investigate further: MPLS can provide deterministic traffic characteristics, and resulting performance should not vary greatly from expectations. Based upon findings, there may be opportunities to further optimize the network for cost and performance gains.
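As an illustration of Steps 3 and 4, a department-level L3VPN is defined in Junos OS as a VRF routing instance. The sketch below assumes a hypothetical finance network being migrated onto the MPLS core; the instance name, interface, and community values are illustrative only:

```
routing-instances {
    finance-vpn {                         # hypothetical name for a department VPN
        instance-type vrf;
        interface ge-0/0/2.100;           # interface toward the legacy finance network
        route-distinguisher 65000:100;
        vrf-target target:65000:100;      # route target; must match on all PEs in this VPN
        protocols {
            ospf {                        # continue running the legacy IGP toward the CE side
                area 0.0.0.0 {
                    interface ge-0/0/2.100;
                }
            }
        }
    }
}
```

Each legacy network migrated in Step 4 would receive its own such instance, keeping its routing and policy isolated from the others while sharing the same physical core.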
Step 5: Traffic engineer the network. Step 5 does not require Steps 3 or 4 above to be completed before initiating it. Traffic engineering may begin as soon as the MPLS network plane is established. However, as a best practice, it is recommended to first migrate some of the existing traffic to the MPLS plane before configuring TE. This will allow you to experience firsthand the benefits and the granular level of control you have over the network through the traffic engineering of an MPLS network. Start by assessing the existing traffic demand of applications across the data center(s). Group traffic demand into priority categories; for instance, voice and video may be gathered into a real-time priority category, while private data is grouped into a second category and Internet traffic into a third. Step 6: Monitor and manage. As with any network, you must continue to monitor and manage the network once it is deployed and running while supporting new service loads and demands. An advantage MPLS provides above and beyond IP is its capability to traffic engineer based upon utilization and application demands as the business evolves. For more information on MPLS, refer to: www.juniper.net/techpubs/software/junos/junos53/swconfig53-mplsapps/html/mpls-overview.html. For more information on VPLS, refer to: www.juniper.net/techpubs/en_US/junos10.2/information-products/pathway-pages/config-guide-vpns/config-guide-vpns-vpls.html.
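The priority categories identified in Step 5 typically map to separate RSVP-signaled LSPs, each with its own bandwidth reservation and setup/hold priorities. The following sketch assumes two hypothetical LSPs between the same pair of PEs; names, bandwidths, and priority values are example figures, not recommendations:

```
protocols {
    mpls {
        label-switched-path dc1-dc2-realtime {
            to 10.0.0.2;                  # remote PE loopback (example value)
            bandwidth 200m;               # reservation for voice/video (example figure)
            priority 2 2;                 # setup and hold priorities (0 is highest)
        }
        label-switched-path dc1-dc2-data {
            to 10.0.0.2;
            bandwidth 500m;               # reservation for private data (example figure)
            priority 5 5;                 # lower priority than the real-time LSP
        }
    }
}
```

With this in place, higher-priority LSPs can preempt lower-priority reservations under contention, giving the real-time category the deterministic behavior described above.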
[Figure: Two-tier data center design with SRX5800 and EX8200 platforms at the aggregation layer and EX4200 Virtual Chassis and EX4500 switches at the access layer, connecting servers, NAS, and FC SAN storage over Ethernet]
Chapter 4: Troubleshooting
Troubleshooting
Introduction
The scope of this section is to provide an overview of common issues that might be encountered at different insertion points when inserting Juniper platforms as a result of a trigger event (adding a new application or service to the organization). This section won't provide exhaustive troubleshooting details; however, we do describe the principal recommended approaches to troubleshooting the most common issues and provide guidelines for identification, isolation, and resolution.
Troubleshooting Overview
When investigating the root cause of a problem, it is important to determine the problem's nature and analyze its symptoms. When troubleshooting a problem, it is generally advisable to start at the most general level and work progressively into the details, as needed. Using the OSI model as a reference, troubleshooting typically begins at the lower layers (physical and data link) and works progressively up toward the application layer until the problem is found. This approach tends to quickly identify what is working properly so that it can be eliminated from consideration, and narrows the problem domain for quick problem identification and resolution. The following list of questions provides a methodology for using clues and visible effects of a problem to reduce diagnostic time.
Has the issue appeared just after a migration, a deployment of new network equipment, a new link connection, or a configuration change? This is the context presented in this Data Center LAN Migration Guide. The Method of Procedure (MOP) detailing the steps of the operation in question should include the tasks to be performed to return to the original state before the network event, should any abnormal conditions be identified. If any issue arises during or after the operation that cannot be resolved in a timely manner, it may be necessary to roll back and disconnect newly deployed equipment while the problem is researched and resolved. The decision to back out should be made well in advance, prior to the expiration of the maintenance window. This type of problem is likely due to an equipment misconfiguration or planning error.
Does the problem have a local or a global impact on the network? The possible causes of a local problem may likely be found at L1 or L2, or it could be related to an Ethernet switching issue at the access layer.
An IP routing problem may potentially have a global impact on networks, and the operator should focus the investigation on the aggregation and core layers of the network.
Is it an intermittent problem? When troubleshooting an intermittent problem, system logging and traceoptions provide the primary debugging tools on Juniper Networks platforms, and they can be focused on various protocol mechanisms at various levels of detail. Events occurring in the network cause the logging of state transitions related to physical interfaces, logical interfaces, or protocols to local or remote files for analysis.
Is it a total or partial loss of connectivity, or is it a performance problem? All Juniper Networks platforms share a common architecture with separate control and forwarding planes. For connectivity issues, Juniper recommends that you first focus on the control plane to verify routing and signaling states, and then concentrate on the forwarding or data plane, which is implemented in the forwarding hardware (Packet Forwarding Engine, or PFE). If network performance is adversely affected by packet loss, delays, and jitter impacting one or multiple traffic types, the root cause is most likely related to network congestion, high link utilization, and packet queuing along the traversed path.
Hardware
The first action to take when troubleshooting a problem, and also before making any change in the network, is to ensure proper functionality and integrity of the network equipment and systems. A series of validation checks and inspection tests should be completed to verify that the hardware and the software operate properly and that there are no fault conditions. The following is a list of relevant show commands from the Junos OS CLI, with a brief description of expected outcomes:
- show system boot-messages: Review the output and verify that no abnormal conditions or errors occurred during the booting process. POST (power-on self-test) results are captured in the bootup message log and stored on the hard drive.
- show chassis hardware detail: Verify that all hardware appears in the output (i.e., Routing Engines, control boards, switch fabric boards, power supplies, line cards, and physical ports). Verify that no hardware indicates a failure condition.
- show chassis alarms: Verify that there are no active alarms.
- show log messages: Search the log for errors and failures and review it for any abnormal conditions. The search can be narrowed to specific keywords using the grep function.
- show system core-dumps: Check for any transient software failures. Under a fatal fault condition, Junos OS creates a core file of the kernel or process in question for diagnostic analysis.
For more details on platform specifics, please refer to the Juniper technical documentation that can be found at: www.juniper.net/techpubs.
OSPF
A common problem in OSPF is troubleshooting adjacency issues, which can occur for multiple reasons: mismatched IP subnet/mask, area number, area type, authentication, hello/dead interval, network type, or mismatched IP MTU. The following are useful commands for troubleshooting an OSPF problem:
- show ospf neighbor: Displays information about OSPF neighbors and the state of the adjacencies, which must show as Full.
- show ospf interface: Displays information about the status of OSPF interfaces.
- show ospf log: Displays the log of shortest-path-first (SPF) calculations.
- show ospf statistics: Displays the number and type of OSPF packets sent and received.
- show ospf database: Displays entries in the OSPF link-state database (LSDB).
OSPF traceoptions provide the primary debugging tool; OSPF operation can be flagged to log error packets and state transitions along with the events causing them.
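The traceoptions mentioned above are configured under the protocol hierarchy. A minimal sketch for capturing OSPF errors and adjacency state transitions might look like this (the file name and size limits are arbitrary example values):

```
protocols {
    ospf {
        traceoptions {
            file ospf-trace size 1m files 3;  # hypothetical file name; rotate at 1 MB, keep 3 files
            flag error detail;                # log malformed or mismatched OSPF packets
            flag state detail;                # log adjacency state transitions
        }
    }
}
```

The resulting trace file can then be examined with show log ospf-trace, typically revealing which of the mismatch causes listed above is breaking the adjacency.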
BGP
show bgp summary is the primary command used to verify the state of BGP peer sessions; a peering must display Established to be fully operational. BGP has multiprotocol capabilities made possible through simple extensions that add new address families. This command also helps to verify which address families are carried over the BGP session, for example, inet-vpn if L3 MPLS VPN service is required, or l2vpn for VPLS.
BGP is a policy-driven routing protocol. It offers flexibility and granularity when implementing routing policy for path determination and for prefix filtering. A network operator must be familiar with the rich set of attributes that can be modified and also with the BGP route selection process. Routing policy controls and filters can modify routing information entering or leaving the router in order to alter forwarding and routing decisions based on the following criteria:
- What should be learned about the network from all protocols?
- What routes should be shared with other routing protocols?
- What should be advertised to other routers?
- What routing information should be modified, if any?
Consistent policies must be applied across the entire network to filter/advertise routes and modify BGP route attributes. The following commands assist in the troubleshooting of routing policies:
- show route receive-protocol bgp <neighbor>: Displays received routes and attributes.
- show route advertising-protocol bgp <neighbor>: Displays routes and attributes sent by BGP to a specific peer.
- show route hidden extensive: Displays routes not usable due to BGP next-hop problems, as well as routes filtered by an inbound route filter.
Logging of peer state transitions and flagging of BGP operations provide a good source of information when investigating BGP problems.
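To make the policy questions above concrete, the following is a hedged sketch of a Junos OS routing policy applied inbound to a BGP group. The policy and term names are hypothetical, and the prefix-length cutoff is an arbitrary example:

```
policy-options {
    policy-statement reject-long-prefixes {       # hypothetical policy name
        term too-specific {
            from {
                route-filter 0.0.0.0/0 prefix-length-range /25-/32;
            }
            then reject;                          # drop overly specific prefixes
        }
        term accept-rest {
            then accept;
        }
    }
}
protocols {
    bgp {
        group internal {
            import reject-long-prefixes;          # apply the inbound filter to this peering
        }
    }
}
```

After applying such a policy, show route receive-protocol bgp <neighbor> confirms what the peer actually sent, which helps distinguish a policy problem from an advertisement problem.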
VPLS Troubleshooting
This section provides a logical approach to take when determining the root cause of a problem in a VPLS network. A good place to start is to verify the configuration setup. Following is a brief configuration snippet with corresponding descriptions.
routing-instances {
    vpls_vpn1 {                               # arbitrary name
        instance-type vpls;                   # VPLS type
        vlan-tags outer 4094 inner 4093;      # VLAN normalization; must match if configured
        interface ge-1/0/0.3001;              # int.unit
        route-distinguisher 65000:1001;       # RD carried in MP-BGP
        vrf-target target:65000:1001;         # VPN RT; must match on all PEs in this VPLS
        protocols {
            vpls {
                mac-table-size {
                    100;                      # max MAC table size
                }
                interface-mac-limit {
                    ...                       # max MACs that may be learned from all CE-facing interfaces
                }
                no-tunnel-services;           # use lsi interfaces for tunneling
                site site1 {                  # arbitrary name
                    site-identifier 1;        # unique site ID
                    interface ge-1/0/0.3001;  # list of int.unit in this VPN
                }
            }
        }
    }
}
The next step is to verify the control plane with the following operational commands:
- show route receive-protocol bgp <neighbor> table <vpls-vpn> detail: Displays BGP routes received from an MP-iBGP peer for a VPLS instance. Use the detail/extensive option to see other BGP attributes such as the route target (RT), label base, and site ID. The BGP next hop must have a route in the routing table for the mapping to a transport MPLS LSP.
- show vpls connections: An excellent command to verify the VPLS connection status and to aid in troubleshooting.
After the control plane has been validated as fully functional, the forwarding plane should be checked next by issuing the following commands. Note that the naming of devices maps to a private MPLS network, as opposed to using a service provider MPLS network.
On the local switch:
- show arp
- show interfaces ge-0/0/0
On the MPLS edge router:
- show vpls mac-table
- show route forwarding-table
On the MPLS core router:
- show route table mpls
The commands presented in this section should highlight proper VPLS operation as follows:
- Sending to an unknown MAC address, the VPLS edge router floods to all members of the VPLS.
- Sending to a known MAC address, the VPLS edge router maps it to an outer and inner label.
- Receiving a MAC address, the VPLS edge router identifies the sender and maps the MAC address to a label stack in the MAC address cache.
- The VPLS provider edge (PE) router periodically ages out unused entries from the MAC address cache.
Multicast
Looked at simplistically, multicast routing is upside-down unicast routing: multicast routing functionality is focused on where the packet came from, and it directs traffic away from its source. When troubleshooting multicast, the following methodology is recommended:
Gather information. In one-to-many and many-to-many communications, it is important to have a good understanding of the expected traffic flow, to clearly identify all sources and receivers for a particular multicast group.
Verify receiver interest by issuing the following commands:
- show igmp group <mc_group>: Displays information about Internet Group Management Protocol (IGMP) group membership received from the multicast receivers on the LAN interface.
- show pim interfaces: Used to verify the designated router for that interface or VLAN.
Verify knowledge of the active source by issuing the following commands:
- show multicast route group <mc_group> source-prefix <ip_address> extensive: Displays the forwarding state (pruned or forwarding) and the rate for this multicast route.
- show pim rps extensive: Determines whether the source designated router has the right rendezvous point (RP) and displays tunnel interface-related information for register message encapsulation/de-encapsulation.
Trace the forwarding state backwards, working your way back toward the source IP and looking for Protocol Independent Multicast (PIM) problems along the way, with the following commands:
- show pim neighbors: Displays information about PIM neighbors.
- show pim join extensive <mc_group>: Validates the outgoing interface list and upstream neighbor, and displays source tree and shared tree (rendezvous point tree, or RPT) state with join/prune status.
- show multicast route group <mc_group> source-prefix <ip_address> extensive: Checks whether traffic is flowing and has a positive traffic rate.
- show multicast rpf <source_address>: Multicast routing uses a reverse path forwarding (RPF) check. A router forwards a multicast packet only if it is received on the upstream interface toward the source; otherwise, the RPF check fails and the packet is discarded.
Tools
Junos OS has embedded script tools to simplify and automate tasks for network engineers. Commit scripts, operation (op) scripts, and event scripts provide self-monitoring, self-diagnosing, and self-healing capabilities to the network. The apply-macro command feeds a commit script to extend and customize the router configuration based on user-defined data and templates. Together, these tools offer an almost infinite number of applications to reduce downtime, minimize human error, accelerate service deployment, and reduce overall operational costs. For more information, refer to: www.juniper.net/us/en/community/junos/script-automation.
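The configuration hooks for these tools are brief. The following sketch shows how a commit script is registered and how an event policy can automatically collect diagnostic output when a link goes down; the script file name and policy name are hypothetical, and the script itself would be written separately:

```
system {
    scripts {
        commit {
            file check-interfaces.slax;    # hypothetical commit script, stored on the router
        }
    }
}
event-options {
    policy capture-link-down {             # hypothetical policy name
        events snmp_trap_link_down;        # trigger on a link-down trap
        then {
            execute-commands {
                commands {
                    "show interfaces extensive";  # gather interface state at the moment of failure
                }
            }
        }
    }
}
```

Capturing state at the instant of an event is particularly valuable for the intermittent problems discussed earlier in this chapter, which are often gone by the time an operator logs in.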
Troubleshooting Summary
Presenting an exhaustive and complete troubleshooting guide falls outside the scope of this Data Center LAN Migration Guide. Presented in this section is a methodology to understand the factors contributing to a problem and a logical approach to the diagnostics needed to investigate root causes. This method relies on the fact that IP networks are modeled around multiple layered architectures. Each layer depends on the services of the underlying layers. From the physical network topology comprised of access, aggregation, and core tiers to the model of IP communication founded on the seven OSI layers, matching symptoms to the root cause layer is a critical step in the troubleshooting methodology. Juniper platforms have also implemented a layered architecture by integrating separate control and forwarding planes. Once the root cause layer is correctly identified, the next steps are to isolate the problem and to take the needed corrective action at that specific layer. For more details on platform specifics, please refer to the Juniper technical documentation that can be found at: www.juniper.net/techpubs.
Summary
Today's data center is a vital component of business success in virtually all industries and markets. New trends and technologies such as cloud computing and SOA-based applications have significantly altered data center traffic flows and performance requirements. Data center designs based on the performance characteristics of older technology and traffic flows have resulted in complex architectures which do not easily scale and also may not provide the performance needed to efficiently meet today's business objectives. A simplified, next-generation, cloud-ready two-tier data center design is needed to meet these new challenges without compromising performance or security. Since most enterprises won't disrupt a production data center except for scheduled maintenance and business continuity testing, a gradual migration to the new network is often most practical. This guide has identified and described how migration towards a simpler two-tier design can begin at any time and at various insertion points within an existing legacy data center architecture. These insertion points are determined by related trigger events such as adding a new application or service, or by larger events such as data center consolidation. This guide has outlined the design considerations for migration at each network layer, providing organizations with a path towards a simplified high-performance network which can not only lower TCO, but also provide the agility and efficiency to enable organizations to gain a competitive advantage by leveraging their data center network. Juniper Networks has been delivering a steady stream of network innovations for more than a decade. Juniper brings this innovation to a simplified data center LAN solution built on three core principles: simplify, share, and secure. Creating a simplified infrastructure with shared resources and secure services delivers significant advantages over other designs.
It helps lower costs, increase efficiency, and keep the data center agile enough to accommodate any future business changes or technology infrastructure requirements.
Additional Resources
Data Center Design Resources
Juniper has developed a variety of materials that complement this Migration Guide and are helpful to the network design process in multiple types of customer environments and data center infrastructures. On our public Internet website, we keep many of these materials at the following locations:
1. Data Center Solution Reference Materials Site: www.juniper.net/us/en/solutions/enterprise/data-center/simplify/#literature
At this location you will find information helpful to the design process at a number of levels, organized by selectable tabs according to the type of information you are seeking: analyst reports, solution brochures, case studies, reference architectures, design guides, implementation guides, and industry reports. The difference between a reference architecture, design guide, and implementation guide is the level of detail the document addresses. The reference architecture is the highest-level organization of our data center network approach; the design guide provides guidance at intermediate levels of detail appropriate to, say, insertion point considerations (in the terms of this Migration Guide); and implementation guides provide specific guidance for important types of network deployments at different tiers of the data center network. Implementation guides give customers and other readers enough information to start specific product implementation tasks appropriate for the most common deployment scenarios, and are quite usable in combination with an individual product's installation, configuration, or operations manual.
2. Information on Juniper's individual product lines relevant to the insertion scenarios described in this guide can be found at:
- Ethernet switching: www.juniper.net/us/en/products-services/switching
- IP routing: www.juniper.net/us/en/products-services/routing
- Network security: www.juniper.net/us/en/products-services/security
- Network management: http://www.juniper.net/us/en/products-services/software/junos-platform/junos-space
3. Information on the way Juniper's offerings fit with various alliance partners in the data center environment can be found at: www.juniper.net/us/en/company/partners/enterprise-alliances.
4. Information on Juniper's Professional Services to support planning and design of migration projects can be found at: www.juniper.net/us/en/products-services/consulting-services/#services. For more detailed discussions of individual projects and requirements, please contact your Juniper or authorized Juniper partner representative directly.
Training Resources
Juniper Networks offers a rich curriculum of introductory and advanced courses on all of its products and solutions. Learn more about Juniper's free and fee-based online and instructor-led hands-on training offerings: www.juniper.net/us/en/training/technical_education.
Corporate and Sales Headquarters Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, CA 94089 USA Phone: 888.JUNIPER (888.586.4737) or 408.745.2000 Fax: 408.745.2100 www.juniper.net
APAC Headquarters Juniper Networks (Hong Kong) 26/F, Cityplaza One 1111 Kings Road Taikoo Shing, Hong Kong Phone: 852.2332.3636 Fax: 852.2574.7803
EMEA Headquarters Juniper Networks Ireland Airside Business Park Swords, County Dublin, Ireland Phone: 35.31.8903.600 EMEA Sales: 00800.4586.4737 Fax: 35.31.8903.601
To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or authorized reseller.
Copyright © 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
7100128-001-EN
Aug 2010