
Wireless Cloud Element

Overview Presentation

23.06.2014
WCE Cabinet/Shelf Level Growth – Customer Offerings

Growth path: 1x c7000 (WCE1) → 2x c7000 (WCE2) → 3x c7000 (WCE3) → 4x c7000 (WCE4)

•WCE1 starts with the seismic rack, a c7000 enclosure and blades
•Each growth step adds a c7000 enclosure and blades


BOM Building Block Packages

•Primary Cabinet (Group 1)
•Expansion Cabinet (Group 2)
•Standalone Enclosure (Group 3)
•BL460 G8 Bladeserver (Group 4)
•WCE Platform Management Server (Group 5)
Enclosure C7000 and blades – front view

[Front view: blade bays 1–8 (upper row) and 9–16 (lower row)]
Enclosure C7000 Primary and blades – Rear view

[Rear view: 6125XLG Switch-1 and Switch-2, 6125G Switch-1 and Switch-2, OA-1 and OA-2]
Enclosure C7000 Expansion and blades – Rear view

[Rear view: 6125XLG Switch-1 and Switch-2, OA-1 and OA-2]
Wireless Cloud Element Cabinet Racking Layout
Primary Cabinet Single c7000 – Telecom Version
Rack layout (top to bottom): DC Breaker Panel #1 (bottom c7000), DC Fuse Panel, Filler Panel(s), NetApp e5400 10G iSCSI SAN, Filler Panels, 1U Seismic Brace, c7000 BladeSystem Enclosure #1

Redundant Power
•The c7000 has its own independent breaker panel with A and B feeds
•Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V feeder/return pair
•The e5400 SAN has its own independent source of power, also with A and B feeds

NetApp e5400 SAN
•Redundant 10GbE iSCSI controllers
•Redundant fan/power supply modules
•24x SFF 600GB 10K RPM SAS HDD

c7000 BladeSystem
•16x BL460c G8 Ivy Bridge servers
•Redundant 6125XLG Blade Switches
•Redundant 6125G Blade Switches for OAM aggregation
•Redundant Onboard Administrator OAM modules
•N+N spared fan modules, 10 total
•N+N spared power supplies, 6 total
Wireless Cloud Element Cabinet Racking Layout
Primary Cabinet Dual c7000 – Telecom Version
Rack layout (top to bottom): DC Breaker Panel #1 (bottom c7000), DC Breaker Panel #2 (upper c7000), DC Fuse Panel, Filler Panels, NetApp e5400 10G iSCSI SAN, 1U Seismic Brace, c7000 BladeSystem Enclosure #2, 1U Seismic Brace, c7000 BladeSystem Enclosure #1

Redundant Power
•Each c7000 has its own independent breaker panel with A and B feeds
•Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V feeder/return pair
•The e5400 SAN has its own independent source of power, also with A and B feeds

NetApp e5400 SAN
•Redundant 10GbE iSCSI controllers
•Redundant fan/power supply modules
•24x SFF 600GB 10K RPM SAS HDD

c7000 BladeSystem
•16x BL460c G8 Ivy Bridge servers
•Redundant 6125XLG Blade Switches
•Redundant Onboard Administrator OAM modules
•Redundant 6125G Blade Switches for OAM aggregation (lower c7000 enclosure only)
•N+N spared fan modules, 10 total
•N+N spared power supplies, 6 total
Wireless Cloud Element Cabinet Racking Layout
Expansion Cabinet Single c7000 – Telecom Version
Rack layout (top to bottom, in a 36U seismic cabinet): DC Breaker Panel #1 (bottom c7000), Filler Panels, 1U Filler Panel, Filler Panels, 1U Seismic Brace, c7000 BladeSystem Enclosure #1

Redundant Power
•The c7000 has its own independent breaker panel with A and B feeds
•Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V feeder/return pair

c7000 BladeSystem
•16x BL460c G8 Ivy Bridge servers
•Redundant 6125XLG Blade Switches
•Redundant Onboard Administrator OAM modules
•N+N spared fan modules, 10 total
•N+N spared power supplies, 6 total
Wireless Cloud Element Cabinet Racking Layout
Expansion Cabinet Dual c7000 – Telecom Version
Rack layout (top to bottom, in a 36U seismic cabinet): DC Breaker Panel #1 (bottom c7000), DC Breaker Panel #2 (upper c7000), Filler Panels, 1U Seismic Brace, c7000 BladeSystem Enclosure #2, 1U Seismic Brace, c7000 BladeSystem Enclosure #1

Redundant Power
•Each c7000 has its own independent breaker panel with A and B feeds
•Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V feeder/return pair

c7000 BladeSystem
•16x BL460c G8 Ivy Bridge servers
•Redundant 6125XLG Blade Switches
•Redundant Onboard Administrator OAM modules
•N+N spared fan modules, 10 total
•N+N spared power supplies, 6 total
WCE Platform Management Server
• The WCE Platform Management Server is the maintenance platform for HP hardware, VM management, and NetApp SAN management
• This server is not integrated into the WCE cabinets
– It is installed into a rack provided by the customer in an operations center
• One WCE Platform Management Server is intended to manage multiple WCE installations
• This server is a DL380p Gen8 server equipped with application and backup disks
– Contains dual Sandy Bridge Xeon E5-2658 8-core CPUs and 64GB of memory
– The OS and applications are installed on a pair of mirrored disks
– vCenter and HP hardware databases are stored on a second pair of mirrored disks
– Logs and non-critical data are stored on a non-mirrored disk
– Backups are captured to three disks configured as a RAID 5 group
• The WCE Platform Management Server uses Windows Server 2012 as its operating system
– Microsoft SQL Server 2012 is the database engine used for the VMware and HP Systems Insight Manager databases
Platform Management Server
HP DL380p

Front view
2x Processors: Xeon E5-2658, 8-core, 2.1GHz
Memory: 64GB RAM
8x HDD:
- 5x HP 300GB 6G SAS 15K 2.5in SC ENT HDD running with RAID 1+0
- 3x HP 1TB 6G SAS 15K 2.5in SC ENT HDD running with RAID 5
RAID configuration:
• C: Disks 1, 2 – used for the Windows OS
• D: Disks 3, 4 – used for SQL data
• E: Disk 5 – used for SQL logs
• F: Disks 6, 7, 8 – used for Windows Backup

Rear view
2x Power Sources for redundancy
4x 1Gb Ethernet ports, iLO Ethernet port


Platform Management Server – Example: Connect to iLO

A local monitor, mouse and keyboard are used only to set the iLO IP address, netmask and gateway for the first time.

iLO (HP Integrated Lights-Out) makes it possible to perform activities on an HP server from a remote location (for example: reset, power off/on, remote console, mount a CD/DVD/ISO image).

The iLO card has a separate network connection (and its own IP address) to which one can connect via HTTPS; a small reachability-check sketch follows below.

[Diagram: a PC connected to the iLO IP address through the IP network]
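The following is a minimal sketch (not part of the original material) that checks whether an iLO interface answers on HTTPS once its address has been set from the local console; the IP address used here is a placeholder.

```python
# Probe an iLO interface over HTTPS. iLO ships with a self-signed certificate,
# so certificate verification is disabled for this simple reachability check.
import ssl
import http.client

ILO_ADDRESS = "192.168.100.10"   # placeholder: the iLO IP set during first-time setup

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

conn = http.client.HTTPSConnection(ILO_ADDRESS, 443, timeout=10, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(f"iLO at {ILO_ADDRESS} answered: HTTP {response.status} {response.reason}")
conn.close()
```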
Platform Management Server – Example: Open iLO remote console
Platform Management Server - Installation

- Is used to manage the WCE
- Comes pre-installed from the factory with Microsoft Windows 2012 Enterprise (64-bit)

The following operations must be performed to finish the installation:
- Install Microsoft SQL Server 2012 (databases used for vCenter and HP SIM)
- Install the VMware software (server and client software)
- Install HP Systems Insight Manager (connects to the iLO of all the HP machines)
- Install the SANtricity client (connects to the NetApp e5400 SAN)
- Perform Windows settings.

For more information about the Platform Management Server installation, please use the following document:
9YZ-04157-0125-RJZZA – Alcatel-Lucent 9771 Wireless Cloud Element – Platform Installation Methods: Management Software Installation
The blades do not have local HDDs, so all of the HDDs are housed in a separate unit: a NetApp e5400 SAN with 24x 600GB HDDs.
This solution was chosen because the SAN offers redundancy.

[Front view: 24x HDD. Rear view: 2 controllers, each with a 2x 10Gbps HIC card, and 2 power supplies]
WCE Cabinet Data Interconnect Overview

[Diagram: WCE1 through WCE4 growth, showing 40G faceplate IRF links, 40G backplane IRF links and a 40G ARP MAD link between enclosures]
WCE 1 Interconnect Topology

[Diagram: e5400 SAN controllers A (ports A1/A2) and B (ports B1/B2) connect over 10GbE iSCSI to the C7000 #1 6125XLG pair; the two 6125XLG switches are joined by 40GbE IRF, 4x 10GbE backplane IRF and a 40GbE MAD link; multi-chassis LAG groups uplink to the active and standby NHRs]

Legend: 10GbE DAC cable, 10GbE MM fiber, 40GbE QSFP+ DAC cable, multi-chassis LAG link

*NOTE: If Core Network and RAN NHR pairs are required, the uplink connections are replicated to the second pair of NHRs
WCE 2 Interconnect Topology

[Diagram: e5400 SAN controllers A (A1/A2) and B (B1/B2) connect over 10GbE iSCSI into the 6125XLG fabric; C7000 #1 and C7000 #2 each have a 6125XLG pair joined by 40GbE IRF and 4x 10GbE backplane IRF, with 40GbE MAD links between shelves; multi-chassis LAG groups uplink to the active and standby NHRs]

Legend: 10GbE DAC cable, 10GbE MM fiber, 40GbE QSFP+ DAC cable, multi-chassis LAG link

*NOTE: If Core Network and RAN NHR pairs are required, the uplink connections are replicated to the second pair of NHRs
WCE 3 Interconnect Topology

[Diagram: e5400 SAN controllers A (A1/A2) and B (B1/B2) connect over 10GbE iSCSI into the 6125XLG fabric; C7000 #1, #2 and #3 each have a 6125XLG pair joined by 40GbE IRF and 4x 10GbE backplane IRF, with 40GbE MAD links between shelves; multi-chassis LAG groups uplink to the active and standby NHRs]

Legend: 10GbE DAC cable, 10GbE MM fiber, 40GbE QSFP+ DAC cable, multi-chassis LAG link

*NOTE: If Core Network and RAN NHR pairs are required, the uplink connections are replicated to the second pair of NHRs
WCE 4 Interconnect Topology
[Diagram: e5400 SAN controllers A (A1/A2) and B (B1/B2) connect over 10GbE iSCSI into the 6125XLG fabric; C7000 #1 through #4 each have a 6125XLG pair joined by 40GbE IRF and 4x 10GbE backplane IRF, with 40GbE MAD links between shelves; multi-chassis LAG groups uplink to the active and standby NHRs]

Legend: 10GbE DAC cable, 10GbE MM fiber, 40GbE QSFP+ DAC cable, multi-chassis LAG link

*NOTE: If Core Network and RAN NHR pairs are required, the uplink connections are replicated to the second pair of NHRs
6125XLG Faceplate Port Assignments – WCE Internal Ports
WCE Primary Shelf – WCE1
•Port 1 (40G) – MAD – Mate 6125XLG
•Port 2 (40G) – IRF – Mate 6125XLG
•Port 3 (40G) – Reserved
•Port 4 (40G) – Reserved
•Port 9 (10G) – SAN Controller A
•Port 10 (10G) – SAN Controller B
•Port 11 (10G) – RNC/VMware OAM Link to 6125G OAM Switch

WCE Primary Shelf – Multi-Shelf WCE
•Port 1 (40G) – MAD – Mate 6125XLG
•Port 2 (40G) – IRF – Next shelf 6125XLG
•Port 3 (40G) – MAD – Next shelf 6125XLG
•Port 4 (40G) – Reserved
•Port 9 (10G) – SAN Controller A
•Port 10 (10G) – SAN Controller B
•Port 11 (10G) – RNC/VMware OAM Link to 6125G OAM Switch

WCE Expansion Shelves
•Port 1 (40G) – MAD – Mate 6125XLG
•Port 2 (40G) – IRF – Next shelf 6125XLG
•Port 3 (40G) – MAD – Next shelf 6125XLG
•Port 4 (40G) – Reserved
6125XLG Faceplate Port Assignments – Uplink Ports: Common Network
WCE Primary Shelf – WCE1
•Port 5 (10G) – RAN & Core Network NHR
•Port 6 (10G) – RAN & Core Network NHR
•Port 7 (10G) – Unused
•Port 8 (10G) – Unused
•Port 12 (10G) – Unused

WCE Primary Shelf – Multi-Shelf WCE
•Port 5 (10G) – RAN & Core Network NHR
•Port 6 (10G) – Unused
•Port 7 (10G) – Unused
•Port 8 (10G) – Unused
•Port 12 (10G) – Unused
WCE1 Single Network

[Diagram: the Shelf1 6125-L and 6125-R switches form one IRF domain; multi-chassis LAG uplinks go to NHR-1 and NHR-2 (Core & RAN)]
WCE2 Single Network

[Diagram: the 6125-L/6125-R switches of Shelf1 and Shelf2 form one IRF domain; multi-chassis LAG uplinks go to NHR-1 and NHR-2 (Core & RAN)]
WCE3 Single Network

[Diagram: the 6125-L/6125-R switches of Shelf1, Shelf2 and Shelf3 form one IRF domain; multi-chassis LAG uplinks go to NHR-1 and NHR-2 (Core & RAN)]
WCE4 Single Network

[Diagram: the 6125-L/6125-R switches of Shelf1 through Shelf4 form one IRF domain; multi-chassis LAG uplinks go to NHR-1 and NHR-2 (Core & RAN)]
Active/Active NHR MC-LAG Implementation
WCE4 Single Network

[Diagram: as on the previous slide — the 6125-L/6125-R switches of Shelf1 through Shelf4 form one IRF domain, uplinked to NHR-1 and NHR-2 (Core & RAN)]

Depending on the implementation of MC-LAG on the NHRs, all 8 links could be load-sharing (i.e. active/active).
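As a hedged illustration of why all links can carry traffic at once: a LAG member link is typically chosen by hashing per-flow fields, so different flows spread across the active links. The fields actually hashed by the NHRs and 6125XLGs depend on their configuration; the addresses below are placeholders.

```python
# Toy model of LAG load-sharing: map each flow to one of 8 active member links
# by hashing its source/destination addresses (placeholder values).
import zlib

NUM_LINKS = 8   # all 8 uplinks active/active in this MC-LAG example

def pick_link(src_ip: str, dst_ip: str) -> int:
    """Deterministically map a flow to one of the LAG member links."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % NUM_LINKS

for host in range(1, 6):
    src, dst = f"10.0.0.{host}", "192.168.1.1"
    print(f"{src} -> {dst}  uses link {pick_link(src, dst)}")
```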
WCE OAM Switches

6125G OAM Switch Port Allocation
•c7000 Enclosure #1 Onboard Admin
•c7000 Enclosure #2 Onboard Admin
•c7000 Enclosure #3 Onboard Admin
•c7000 Enclosure #4 Onboard Admin
•Customer OAM Port
•Inter-Switch MAD Link
•Primary Shelf RNC OAM Port
•NetApp SAN OAM Port

RJ-45 SFPs are required for four of these ports
•SFPs are only required as c7000 enclosures are added


WCE OAM Network Connectivity with In-Rack Aggregation Switch (6125G)

[Diagram: the primary shelf's 6125G-L and 6125G-R in-rack aggregation switches collect OAM traffic from each enclosure's Onboard Administrators (OAa/OAs), blade iLOs (BL460-1 … BL460-16) and 6125XLG-L/R switches, from the NetApp SAN management ports, and from the WCE Mgmt Server (iLO plus NICs hosting HPI, WMS, SANtricity, NTP server and vCenter), and uplink to the customer's OAM edge switches]

Link types shown: C7000 management backplane standby links, 10G->1G RNC OAM uplinks, 1G SAN mgmt links, 1G OA mgmt links, 1G->FE Mobily OAM links, spare port (could be used in the uplink LAG)
WCE1 – No interlink cabling required

[Diagram: Onboard Administrator tray with inter-enclosure link connectors, holding Onboard Administrator 1 and 2; the head-of-"daisy-chain" port is used for connection to the next enclosure in the chain, and the Local Craft/Operator Port sits at the head of the "daisy-chain"]
WCE2 – Single Inter-Enclosure Link Required

[Diagram: in the Primary Cabinet, the Primary Shelf (head of chain) connects to the Expansion Shelf (tail of chain) with a 1m CAT5e cable; the Local Craft/Operator Port is at the head of the chain and the end-of-chain port is on the expansion shelf]
WCE4 – Three Inter-Enclosure Links Required

[Diagram: the chain runs Primary Shelf (head of chain) → 1st Expansion Shelf (link in chain) → 2nd Expansion Shelf (link in chain) → 3rd Expansion Shelf (tail of chain), linked with 1m and 5m CAT5e cables across the Primary and Expansion Cabinets; the Local Craft/Operator Port is at the head of the chain and the end-of-chain port is on the last shelf]
Insight Display
WCE 1
HP BladeSystem Onboard Administrator
Set the EBIPA IP addresses for the blade bays – the iLO IP addresses of each blade.
The iLO IP address of a blade is actually set on the enclosure, so it does not need to be set again if the blade is replaced.
The c7000 enclosure, the SSWs, the blades and the NetApp SAN already have an initial configuration from the factory. Only customized IP addresses must be set on site. (A small EBIPA address-plan sketch follows below.)
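The sketch below is an illustration only, not an OA command: EBIPA gives every device bay a fixed iLO address, typically consecutive from a starting address, and this simply prints such an address plan for the 16 blade bays. All addresses are placeholders, not values from the slides.

```python
# Print a hypothetical EBIPA-style iLO address plan for the 16 c7000 blade bays.
import ipaddress

EBIPA_START = ipaddress.IPv4Address("10.1.1.101")  # hypothetical first-bay iLO address
NETMASK = "255.255.255.0"                           # hypothetical
GATEWAY = "10.1.1.1"                                # hypothetical

for bay in range(1, 17):                            # half-height bays 1-16
    ilo_ip = EBIPA_START + (bay - 1)                # one consecutive address per bay
    print(f"Bay {bay:2d}: iLO {ilo_ip}  mask {NETMASK}  gw {GATEWAY}")
```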
NetApp e5400 SAN configuration:
1- using a laptop and an SSW connected locally to set the initial IP addresses
2- using the SANtricity client: create volumes

Configuration flow:
•EBIPA IP addresses for the blades (iLO IP addresses)
•OA IP addresses
•Command line: HP 6125XLG switches (SSWs) initial configuration: IP, mask, GW, LAG to NHR
•SAN settings: IQN number for each blade; blade IQN -> LUN mapping (see the sketch after this list)
•VLAN 4094 set on SSWs
•ESXi installed on each blade; ESXi IP address configuration
•VLAN 4093 – OAM management
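A hedged illustration of the "blade IQN -> LUN mapping" step named above: in practice each blade's iSCSI initiator IQN is read from the ESXi host and mapped to its LUN on the e5400 through SANtricity. The IQN strings and LUN numbers below are placeholders, not values from the slides.

```python
# Build a hypothetical blade IQN -> LUN mapping table for 16 blades.
blade_iqns = {
    f"blade-{bay:02d}": f"iqn.1998-01.com.vmware:wce-blade-{bay:02d}"  # hypothetical IQNs
    for bay in range(1, 17)
}

# One LUN per blade, numbered to match the bay (hypothetical scheme).
iqn_to_lun = {iqn: bay for bay, iqn in enumerate(blade_iqns.values(), start=1)}

for iqn, lun in iqn_to_lun.items():
    print(f"{iqn}  ->  LUN {lun}")
```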
HP Systems Insight Manager (HP SIM)
HP Smart Update Manager (HP SUM)
NetApp SANtricity Management GUI for e5400 SAN
vSphere Client – vCenter GUI

•Create a Datacenter
•Create a folder
•Create a cluster
•Declare the hosts (blades)
•Declare Resource Pool, vApp and VMs
•Create the template folders
•Upload the SW to the Datastore:
 - RedHat software for the VMs (WCE_RHEL6.4_Ver02.iso)
 - Upload the LRCEMgr template folder
 - Upload the 3goam template folder
 - Upload the DA template folder
•Deploy the templates: LRCMgr, DA, 3goam, CMU, PC, UMU
•Deploy the LRCEMgr VM:
 - Power on the LRCEMgr VM
 - From the LRCEMgr command line, the DAs are deployed

(A pyVmomi sketch of the first of these steps follows below.)
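The same first steps – datacenter, folder, cluster, host – can also be driven through the vSphere API. The sketch below uses pyVmomi and is not from the original material; the vCenter address, credentials, names and ESXi address are all placeholders.

```python
# Hedged sketch: create a datacenter, a VM folder and a cluster, then add one
# ESXi host (blade) to that cluster via the vSphere API (pyVmomi).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl.create_default_context()          # vCenter often uses a self-signed cert
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.local",          # placeholder vCenter address
                  user="administrator@vsphere.local",    # placeholder credentials
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()

# 1. Create a Datacenter under the root folder
dc = content.rootFolder.CreateDatacenter(name="WCE-DC")

# 2. Create a VM folder inside the datacenter (e.g. for templates)
folder = dc.vmFolder.CreateFolder(name="WCE-Templates")

# 3. Create a cluster (default spec; DRS/HA settings could be added to the spec)
cluster = dc.hostFolder.CreateClusterEx(name="WCE-Cluster",
                                        spec=vim.cluster.ConfigSpecEx())

# 4. Declare one ESXi host (blade) into the cluster
host_spec = vim.host.ConnectSpec(hostName="10.1.1.21",   # placeholder ESXi address
                                 userName="root",
                                 password="password",
                                 force=True)
# In practice host_spec.sslThumbprint must be set to the host's certificate
# thumbprint, otherwise vCenter rejects the connection.
task = cluster.AddHost_Task(spec=host_spec, asConnected=True)

Disconnect(si)
```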
vSphere Web Client – vCenter GUI
LRC Mgr
•Management of virtual resources within an enterprise data-center is traditionally done at the individual VM level
•Telco network elements or functions tend to span many tens or even hundreds of VMs and need coordinated management
•WCE's LRC Manager bridges the Network Function requirements to standard enterprise management systems (sometimes called Cloud Operating Systems)
WCE Tenant
•Groups of virtual machines acting together to implement a single Network Function (e.g. an RNC) are referred to as a Tenant
•All of the VMs within a Tenant may be operated on as a unit (see the sketch below), thus providing functionality like:
 •Create Tenant
 •Reset Tenant
 •Delete Tenant
•New capabilities like the "Shadow" are possible with VMs and Tenants
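As a hedged illustration of operating on a group of VMs as one unit (this is not the LRC Manager's actual mechanism), the sketch below treats the VMs grouped under one vCenter folder as a "tenant" and resets them together. The folder name, address and credentials are placeholders.

```python
# Hedged sketch: "Reset Tenant" modelled as resetting every powered-on VM in a folder.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the tenant's VM folder by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Folder], True)
tenant_folder = next((f for f in view.view if f.name == "RNC-Tenant-1"), None)
view.DestroyView()

# Operate on every VM grouped under that folder as one unit.
if tenant_folder is not None:
    for entity in tenant_folder.childEntity:
        if isinstance(entity, vim.VirtualMachine) and entity.runtime.powerState == "poweredOn":
            entity.ResetVM_Task()   # hard reset of each VM in the tenant

Disconnect(si)
```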
Wireless Cloud Element – 3G RNC Architecture

RNC Components:
Spared Roles (SWACT in a few seconds):
•CMU (N+M) – Cell Management
•3gOAM (1+1) – OAM termination
•PC (N+1) – Public / Private network gateway
Unspared Role (return to capacity ~60s):
•UMU – UE Management (UE Call) & RAB control, packet processing

Platform Components:
•vCenter Server – VM Management (VMware)
•LRC Mgr – Tenant Management (unspared)
•Disk Access – NAS Front End to SAN

Each RNC component represents a single virtual machine.

[Diagram: the 3gOAM role hosts the OMU and Netconf signaling; the CMU hosts the NoB CP C-Plane, the common-channel U-Plane (RPM-Cell) and the SCTP/M3UA NI; the UMU hosts the UE CP C-Plane, the dedicated-channel U-Plane (GTP/RLC/MAC) and SCCP; the PC provides NAT on the C-Plane and U-Plane NI toward Iub/Iur/IuCS/IuPS (Iub/NBAP); vCenter Server, LRC Mgr and Disk Access sit alongside as platform components]
WCE – 3G RNC Architecture – 3GOAM Role

The 3gOAM role acts as the termination point for Operation, Administration and Maintenance of the RNC and consists of two primary sub-roles:
•3G application management as implemented by the RNC 9370's OMU, largely unchanged, and
•platform management via a Netconf interface.
In addition, the 3gOAM acts as a host for monitoring and control of the internal components of the virtual RNC. The 3gOAM node is 1+1 spared.

[Diagram: 3GOAM role containing the OMU (VxEll) and ConfD]
WCE – 3G RNC Architecture – CMU Role
The CMU is the role responsible for the creation and management of all of the UMTS cells in the RAN. It consists primarily of the following sub-roles:
•the C-Plane for cells, which consists of the NodeB Core Process from the RNC 9370 architecture,
•the U-Plane for cells, which is an instance of the 9370 RAB processes but specifically targeted to handle common channel traffic on the cells,
•the lower two layers of the SS7 networking stack (for IP only), specifically the SCTP and M3UA protocols, and
•a proxy for a distributed version of TBM which handles management of transport resources (UDP port numbers and link bandwidth).
CMUs are N+M spared, where M is defined as the number of instances of the emulated VxWorks running within a virtual machine.

[Diagram: CMU role containing SS7 and 3g App (VxEll) instances]
WCE – 3G RNC Architecture – UMU Role

The UMU role is responsible for all aspects of UE management and consists of the following sub-roles:
•the C-Plane for UEs, which consists of the UE Core Process from the RNC 9370 architecture,
•the U-Plane for UE traffic, which is an instance of the 9370 RAB processes but specifically targeted to handle UE traffic, and
•the upper layer of the SS7 networking stack, specifically SCCP.
All of the context and processing for a single UE happens within a single UMU role and does not depend on the presence of any other UMU role. As UMUs do not support any form of sparing at either the control, user or signaling plane level, a failure of a UMU will cause the calls that were hosted on that UMU to be lost.

[Diagram: UMU role containing 3g App (VxEll) instances]
WCE – 3G RNC Architecture – PC Role

The Protocol Converter role consists of three primary functions:
•UDP NAT, which allows private IP addresses unique to the controllers to be hidden from external nodes, giving advantages such as reduced consumption of externally visible addresses and enhanced security.
•GDP-U NAT (a proprietary function), which, much like UDP NAT, hides private IP addresses from the core network.
•Transport Resource Management, i.e. the allocation of UDP ports and link bandwidth.
The PC role is 1+1 spared.

[Diagram: PC role containing NAT (VxEll) instances and TRM]
Carrier Grade Overview
Two CG Domains:
•Spared nodes (3gOAM, CMU, PC & DA) that are required to maintain cells and allow new calls to be originated. Characterized by shared data and a rapid (on the order of seconds) switch of activity.
•Un-spared nodes (UMU) that are UE specific, where sharing data is not required and return to service is slow (on the order of minutes).

[Diagram: vCenter Server and LRC Mgr – un-spared; Disk Access (NAS front end to SAN) – 1+1 (2); RNC Operations – 1+1 (2); Cell Management – N+M (~10s); User Management – un-spared (~100s); Transport Termination – N+1 (~10s)]
GPS Troubleshooting Guide

https://umtsweb.ca.alcatel-lucent.com/wiki/bin/view/WcdmaRNC/OttGps_wceTroubleshooting
END
