
ECOMP (Enhanced Control, Orchestration, Management & Policy) Architecture White Paper

AT&T Inc.
Abstract: AT&T's Domain 2.0 (D2) program is focused on leveraging cloud technologies (the AT&T Integrated Cloud, AIC) and network virtualization to offer services while reducing Capital and Operations expenditures and achieving significant levels of operational automation. The ECOMP Software Platform delivers product/service independent capabilities for the design, creation and lifecycle management of the D2 environment for carrier-scale, real-time workloads. It consists of eight (8) software subsystems covering two major architectural frameworks: a design time environment to design, define and program the platform, and an execution time environment to execute the logic programmed in the design phase utilizing closed-loop, policy-driven automation.

ECOMP is critical in achieving AT&T's D2 imperatives to increase the value of our network to customers by rapidly on-boarding new services (created by AT&T or 3rd parties), enabling the creation of a new ecosystem of cloud consumer and enterprise services, reducing Capital and Operational Expenditures, and providing Operations efficiencies. It delivers an enhanced customer experience by allowing customers to reconfigure their network, services, and capacity in near real-time. While ECOMP does not directly support legacy physical elements, it works with traditional OSSs to provide a seamless customer experience across both virtual and physical elements.

ECOMP enables network agility and elasticity, and improves Time-to-Market/Revenue/Scale via the AT&T Service Design and Creation (ASDC) visual modeling and design. ECOMP provides a policy-driven operational management framework for security, performance and reliability/resiliency utilizing a metadata-driven repeating design pattern at each layer in the architecture. It reduces Capital Expenditure through a closed loop automation approach that provides dynamic capacity and consistent failure management when and where it is needed. ECOMP facilitates Operations efficiency through the real-time automation of service/network/cloud delivery and lifecycle management provided by the Operational Management Framework and application components.

It provides high utilization of network resources by combining dynamic, policy-enforced functions for component and workload shaping, placement, execution, and administration. These functions are built into the AT&T ECOMP Platform and utilize the AIC; when combined, they provide a unique level of operational and administrative capability for workloads that run natively within the ecosystem, and will extend many of these capabilities into 3rd party cloud ecosystems as interoperability standards evolve.

Diverse workloads are at the heart of the capabilities enabled by the ECOMP Platform. The service design and creation capabilities and policy recipes eliminate many of the manual and long running processes performed via traditional OSSs (e.g., break-fix largely moves to a plan and build function). The ECOMP platform provides external applications (OSS/BSS, customer apps, and 3rd party integration) with secured, RESTful API access to ECOMP services, events, and data via AT&T gateways. In the near future, ECOMP will be available in the AT&T D2 Incubation & Certification Environment (ICE), making ECOMP APIs available to allow vendors, cloud providers, and other 3rd parties to develop solutions using the ECOMP and AIC reference architecture (current and future-looking).

1. Introduction

This whitepaper is intended to give cloud and telecommunication providers, solution vendors, and other interested 3rd parties an overall context for how the Enhanced Control, Orchestration, Management and Policy (ECOMP) platform enables the AT&T Domain 2.0 (D2) initiative and operates within the AT&T Integrated Cloud (AIC) infrastructure. In order to more completely understand what drove the definition of the ECOMP Architecture and the functions ascribed to it, it is helpful to consider the

2016 AT&T Intellectual Property, all rights reserved. AT&T, Globe Logo, Mobilizing Your World and DIRECTV are
registered trademarks and service marks of AT&T Intellectual Property and/or AT&T Affiliated Companies. All other
marks are the property of their respective owners.

initial drivers of AT&T's D2 and network function virtualization efforts.

When the D2 effort began, cloud technology was being adopted, focusing primarily on Information Technology (IT) or corporate applications. Cloud technology provided the ability to manage diverse workloads of applications dynamically. Included among the cloud capabilities were:

- Real-time instantiation of Virtual Machines (VMs) on commercial hardware where appropriate
- Dynamic assignment of applications and workloads to VMs
- Dynamic movement of applications and dependent functions to different VMs on servers within and across data centers in different geographies (within the limits of physical access tie-down constraints)
- Dynamic control of resources made available to applications (CPU, memory, storage)

At the same time, efforts were underway to virtualize network functions that had been realized in purpose-built appliances: specialized hardware and software (e.g., routers, firewalls, switches). Network function virtualization (NFV) was focused on transforming network appliances into software applications.

AT&T's D2 strategy is grounded in the confluence of network function virtualization, Software Defined Networking (SDN), and cloud technology. With virtual network functions running as cloud applications, D2 takes advantage of the dynamic cloud capabilities cited above in defining, instantiating and managing network infrastructures and services. This strategy shaped the definition of ECOMP, its architecture and the capabilities/functions it provides. It also shaped the AT&T Integrated Cloud (AIC) infrastructure that converges multiple clouds into one corporate smart cloud capable of interoperating dynamically with ECOMP-controlled virtual functions and 3rd party clouds.

In D2 the dynamic cloud capabilities are applied to applications - i.e., virtual network functions (vNFs) - thus applying the benefits of cloud to virtual network elements. For example, vNFs such as routers, switches, and firewalls can be spun up on commodity hardware, moved from one data center to another dynamically (within the limits of physical access tie-down constraints), and resources such as CPU, memory and storage can be dynamically controlled.

The ECOMP architecture and Operational Management Framework (OMF) were defined to address the business/strategic objectives of D2 as well as the new technical challenges of managing a highly dynamic environment of virtual network functions and services, i.e., a software ecosystem where functions such as network and service provisioning, service instantiation, and network element deployment all occur dynamically in real-time.

ECOMP enables the rapid on-boarding of new services (created by AT&T or 3rd parties) desired by our customers and the reduction of OpEx and CapEx through its metadata driven service design and creation platform and its real-time operational management framework: a framework that provides real-time, policy driven automation of management functions. The metadata driven service design and creation capabilities enable services to be defined with minimal IT development required, thus contributing to reductions in CapEx. The real-time OMF provides significant automation of network management functions, enabling the detection and correction of problems in an automated fashion and contributing to reductions in OpEx.

One of the challenges for service providers in a traditional telecommunications environment is that unique and proprietary interfaces are required for many network management and element management systems (EMS), leading to significant integration, startup, and operational costs. Standards evolve slowly in this space.

As AT&T transitions to an SDN/NFV cloud-based environment, we plan to continue contributing to and leveraging the open source community to facilitate agile and iterative standards that incorporate incremental improvements. In the Domain 2.0 ECOMP ecosystem, we look to rapidly onboard vendor VNFs with standard processes, and to operate these resources via vendor-agnostic controllers and standard management, security, and application interfaces. Configuration and management is model driven using YANG and will utilize standards as they become available. To this end, AT&T supports open cloud standards (e.g., OpenStack, TOSCA) and engages in many Cloud and Network Virtualization industry initiatives (e.g., NETCONF, YANG, OPNFV). As further

standardization occurs, AT&T will incorporate them into ECOMP as appropriate.

AT&T's objective is to virtualize and operate over 75% of target network workloads within AT&T's Domain 2.0 Architecture by 2020. Our goal for the first year is to deploy initial capabilities to validate the architecture. This early investment establishes the metadata driven foundation that enables rapid onboarding of new resources, capabilities, and services without requiring long development cycles.

2. ECOMP Platform

The ECOMP Platform enables product/service independent capabilities for design, creation and lifecycle management. There are many requirements that must be met by ECOMP to support the D2/ECOMP vision. Of those many requirements, some are key in supporting the following foundational principles:

- The architecture will be metadata-driven and policy-driven to ensure flexible ways in which capabilities are used and delivered
- The architecture shall enable sourcing best-in-class components
- Common capabilities are developed once and used many times
- Core capabilities shall support many AT&T Services
- The architecture shall support elastic scaling as needs grow or shrink

These capabilities are provided using two major architectural frameworks: (1) a design time framework to design, define and program the platform (uniform onboarding), and (2) a runtime execution framework to execute the logic programmed in the design time framework (uniform delivery and lifecycle management). Figure 1 shows the ECOMP Platform architecture.

Figure 1: ECOMP Platform
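The division of labor between the two frameworks can be illustrated with a small sketch: design time produces pure metadata (a service template with an ordered recipe), and a service-independent runtime engine merely interprets it. This is an illustrative Python sketch, not ECOMP's actual schema or APIs; the service name, step names, and handler shapes are all assumptions.

```python
# Design time artifact: metadata only, no executable service-specific code.
service_template = {
    "service": "vFirewall",               # hypothetical service
    "instantiation_recipe": [             # ordered steps for the runtime
        "home_and_place", "create_networks", "spin_up_vms",
        "configure_vnf", "test_and_turn_up",
    ],
}

def execute(template, handlers):
    """Generic runtime execution: sequence the recipe steps found in the
    metadata; the engine itself contains no service-specific logic."""
    results = []
    for step in template["instantiation_recipe"]:
        results.append(handlers[step](template))
    return results

# Step handlers stand in for orchestrator/controller actions.
handlers = {step: (lambda t, s=step: f"{s}:{t['service']}")
            for step in service_template["instantiation_recipe"]}

print(execute(service_template, handlers))
```

A new service is then onboarded by distributing a new template, not by modifying the engine.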


The design time framework component is an integrated development environment with tools, techniques, and repositories for defining/describing AT&T assets. The design time framework facilitates re-use of models, thus improving efficiency as more and more models become available for reuse. Assets include models of D2 resources, services and products. The models include various process specifications and policies (e.g., rule sets) for controlling behavior and process execution. Process specifications are used by ECOMP to automatically sequence the instantiation, delivery and lifecycle management aspects of D2-based resources, services, products and the ECOMP components themselves. The design time framework supports the development of new capabilities, augmentation of existing capabilities and operational improvements throughout the lifecycle of a service. ASDC, Policy, and Data Collection, Analytics and Events (DCAE) SDKs allow operations/security, 3rd parties (e.g., vendors), and other experts to continually define/refine new collection, analytics, and policies (including recipes for corrective/remedial action) using the ECOMP Design Framework Portal. Certain process specifications (aka recipes) and policies are geographically distributed to many points of use to optimize performance and maximize autonomous behavior in D2's federated cloud environment.

Figure 2 provides a high-level view of the ECOMP Platform components. These components use micro-services to perform their roles. The Platform provides the common functions (e.g., data collection, control loops, meta-data recipe creation, policy/recipe distribution, etc.) necessary to construct specific behaviors. To create a service or operational capability, it is necessary to develop service/operations-specific collection, analytics, and policies (including recipes for corrective/remedial action) using the ECOMP Design Framework Portal.

Figure 2: ECOMP Platform components
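To make the idea of a machine-readable, distributable policy concrete, the sketch below shows a hypothetical trigger/condition/action rule of the kind a design framework could define and distribute, together with a minimal evaluator. The field names, metric, and threshold are invented for illustration and do not reflect ECOMP's actual policy format.

```python
# Hypothetical policy: a trigger condition plus the action to take.
vnf_scaling_policy = {
    "name": "scale-out-on-high-load",     # illustrative policy name
    "trigger": {"metric": "cpu_utilization", "operator": ">", "threshold": 80},
    "action": "scale_out",
}

def evaluate(policy, event):
    """Return the policy's action if the event satisfies the trigger,
    else None. Updating behavior means editing the rule, not the code."""
    trig = policy["trigger"]
    value = event.get(trig["metric"])
    if value is None:
        return None                        # event lacks the triggering metric
    crossed = {">": value > trig["threshold"],
               "<": value < trig["threshold"]}[trig["operator"]]
    return policy["action"] if crossed else None

assert evaluate(vnf_scaling_policy, {"cpu_utilization": 93}) == "scale_out"
assert evaluate(vnf_scaling_policy, {"cpu_utilization": 40}) is None
```

Changing the threshold or action in the rule alters component behavior without any software rewrite, which is the point made in the Policy discussion that follows.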

The two primary components of the design time framework are the AT&T Service Design and Creation (ASDC) and Policy Creation components. ASDC is an integrated development environment with tools, techniques, and repositories to define/simulate/certify D2 assets as well as their associated processes and policies. Each asset is categorized into one of four (4) asset groups: Resources, Services, Products, or Offers. The Policy Creation component deals with Policies; these are conditions, requirements, constraints, attributes, or needs that must be provided, maintained, and/or enforced. At a lower level, Policy involves machine-readable rules enabling actions to be taken based on triggers or requests. Policies often consider specific conditions in effect (both in terms of triggering specific policies when conditions are met, and in selecting specific outcomes of the evaluated policies appropriate to the conditions). Policy allows rapid updates by simply changing rules, thus updating the technical behavior of the components in which those policies are used without requiring rewrites of their software code. Policy permits simpler management/control of complex mechanisms via abstraction.

The design and creation environment supports a multitude of diverse users via common services and utilities. Using the design studio, Product and Service designers onboard/extend/retire resources, services and products. Operations, Engineers, Customer Experience Managers, and Security Experts create workflows, policies and methods to implement Closed Loop Automation and manage elastic scalability.

The runtime execution framework executes the rules and policies distributed by the design and creation environment. This allows us to distribute policy enforcement and templates among various ECOMP modules such as the Master Service Orchestrator (MSO), Controllers, Data Collection, Analytics and Events (DCAE), Active and Available Inventory (A&AI), and a Security Framework. These components advantageously use common services that support logging, access control, and data management.

Orchestration is the function, defined via a process specification, that is executed by an orchestrator component which automates the sequences of activities, tasks, rules and policies needed for on-demand creation, modification or removal of network, application or infrastructure services and resources. The MSO provides orchestration at a very high level, with an end-to-end view of the infrastructure, network, and application scopes. Controllers are applications which are intimate with cloud and network services; they execute the configuration and real-time policies, and control the state of distributed components and services. Rather than using a single monolithic control layer, AT&T has chosen to use three distinct Controller types that manage resources in the execution environment according to their assigned controlled domain: cloud computing resources (Infrastructure Controller, typically within the cloud layer), network configuration (Network Controller) and applications (Application Controller).

DCAE and other ECOMP components provide FCAPS (Fault, Configuration, Accounting, Performance, Security) functionality. DCAE supports closed loop control and higher-level correlation for business and operations activities. It is the ecosystem component supporting analytics and events: it collects performance, usage, and configuration data; provides computation of analytics; aids in troubleshooting; and publishes events, data and analytics (e.g., to policy, orchestration, and the data lake).

A&AI is the ECOMP component that provides real-time views of Domain 2.0 Resources, Services, Products and their relationships. The views provided by Active and Available Inventory relate data managed by multiple ECOMP Platforms, Business Support Systems (BSS), Operation Support Systems (OSS), and network applications to form a top-to-bottom view ranging from the Products customers buy to the Resources that form the raw material for creating the Products. Active and Available Inventory not only forms a registry of Products, Services, and Resources, it also maintains up-to-date views of the relationships between these inventory items. To deliver the dynamism of the Domain 2.0 vision, Active and Available Inventory will manage these multi-dimensional relationships in real-time.

Active and Available Inventory is updated in real-time by controllers as they make changes in the Domain 2.0 environment. A&AI is metadata driven, allowing new inventory item types to be added dynamically and quickly via ASDC catalog definitions, eliminating the need for lengthy development cycles.

The platform includes a real-time dashboard, controller and administration tools to monitor and manage all the ECOMP components via an OA&M (Operations, Administration & Management) instance of ECOMP. It allows the design studio to onboard ECOMP components and create recipes, and it allows the policy framework to define ECOMP automation.

ECOMP delivers a single, consistent user experience based on the user's role and allows D2 role changes to be configured within the single ecosystem. This user experience is managed by the ECOMP Portal. The ECOMP Portal provides access to design, analytics and operational control/administration functions via a common role-based menu or dashboard. The portal architecture provides web-based capabilities including application onboarding and management, centralized access management, and dashboards, as well as hosted application widgets. The portal provides an SDK that drives multiple development teams to adhere to consistent UI development requirements by taking advantage of built-in capabilities (Services/API/UI controls), tools and technologies.

ECOMP provides common operational services for all ECOMP components including activity logging, reporting, a common data layer, access control, resiliency, and software lifecycle management. These services provide access management and security enforcement, data backup, restoration and recovery. They support standardized VNF interfaces and guidelines.

The virtual operating environment of D2 introduces new security challenges and opportunities. ECOMP provides increased security by embedding access controls in each ECOMP platform component, augmented by analytics and policy components specifically designed for the detection and mitigation of security violations.

3. ETSI NFV MANO and ECOMP Alignment

The European Telecommunications Standards Institute (ETSI) developed a Reference Architecture Framework and specifications in support of NFV Management and Orchestration (MANO). The main components of the ETSI-NFV architecture are the Orchestrator, the VNF Manager, and the VI (Virtualized Infrastructure) Manager. ECOMP expands the scope of ETSI MANO coverage by including Controller and Policy components. Policy plays an important role in controlling and managing the behavior of the various VNFs and the management framework. ECOMP also significantly increases the scope of ETSI MANO's resource description to include complete meta-data for lifecycle management of the virtual environment (Infrastructure as well as VNFs). The ECOMP design framework is used to create resource / service /

product definitions (consistent with MANO) as well as engineering rules, recipes for various actions, and policies and processes. The metadata-driven Generic VNF manager (i.e., ECOMP) allows us to quickly on-board new VNF types without going through long development and integration cycles, and to efficiently manage cross-dependencies between various VNFs. Once a VNF is on-boarded, the design time framework facilitates rapid incorporation into future services.

ECOMP can be considered an enhanced Generic VNF manager as described by the MANO NFV Management and Orchestration Architectural Options (see Figure 3).
Figure 3: Comparison of ETSI MANO and AT&T ECOMP Architectures
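The metadata-driven onboarding described above can be reduced to a toy sketch: a new VNF type enters the catalog as a descriptor, and a single generic code path instantiates any registered type. The class, descriptor fields, and hook names below are invented for illustration and are not ECOMP interfaces.

```python
class VnfCatalog:
    """Toy catalog: VNF types are added as metadata, never as new code."""
    def __init__(self):
        self._types = {}

    def onboard(self, descriptor):
        # Registering the descriptor is the whole onboarding step.
        self._types[descriptor["type"]] = descriptor

    def instantiate(self, vnf_type, name):
        d = self._types[vnf_type]          # generic lookup; no per-type code
        return {"name": name, "type": vnf_type,
                "vcpus": d["vcpus"], "hooks": list(d["hooks"])}

catalog = VnfCatalog()
catalog.onboard({"type": "vRouter", "vcpus": 4, "hooks": ["heal", "scale"]})
instance = catalog.instantiate("vRouter", "vRouter-01")
```

Adding a second VNF type is another `onboard` call with a new descriptor, which is the sense in which onboarding avoids a development and integration cycle.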

In addition, ECOMP subsumes the traditional FCAPS functionality supported by EMSs in the MANO reference architecture. This is critical to implementing analytic-driven closed loop automation across the infrastructure and VNFs, analyzing cross dependencies and responding to the root cause of problems as quickly as possible. To successfully implement ECOMP, the Ve-Vnfm-vnf interface (as well as Nf-Vi) is critical. AT&T expects the Ve-Vnfm-vnf interface to be a standard interface defined by ETSI. AT&T's approach is to document detailed specifications for such interface(s) to collect rich real-time data in a standard format from a variety of VNFs and quickly integrate with ECOMP without long custom development work.

4. Metadata Driven Design Time and Runtime Execution

Metadata is generally described as data about data and is a critical architectural concept dealing with both abstraction and methodology. Metadata expresses structural and operational aspects of the virtualized elements comprising products, services, and resources, expressed in terms of logical objects within a formally defined model space. The attributes of these objects, and the relationships among them, embody semantics that correspond to real-world aspects of the modeled elements. The modeling process abstracts common features and internal behaviors in order to drive architectural consistency and operational efficiency. The logical representations of underlying elements can be manipulated and extended by designers in consistent ways, and uniformly consumed by the runtime execution framework. Alignment between elements in the model space and those in the real world is continuously maintained through tooling that resolves dependencies, handles exceptions, and infers required actions based on the metadata associated with the modeled elements.

One of the key benefits of AT&T's D2 plans for a virtualized network architecture is a significantly decreased time from service concept to market. The need to operationalize on a service-specific basis would be a major obstacle to this goal. Thus, AT&T plans to manage its D2 network and services via an execution environment driven by a common (service-independent) operations support model populated with service-specific metadata. In conjunction with

this, these support systems will provide high levels of service management automation through the use of metadata driven, event based control loops.

Such an approach achieves another benefit: driving down support systems costs through the ability to support new services with changes only in the metadata, and without any code modifications. In addition, a common operations support model across all AT&T offered services, coupled with high automation, drives down operations support costs.

AT&T is implementing the metadata driven methodology in D2 by centralizing the creation of rules and policies in ASDC. All ECOMP applications and controllers will ingest the metadata that governs their behavior from ASDC. Implementation of the metadata-driven methodology requires an enterprise-wide paradigm change of the development process. It demands an upfront agreement on the overall metadata model across the business (e.g., product development) and the software developers writing code for ECOMP, BSS, OSS, and AIC. This agreement is a behavioral contract that permits the detailed business analysis and the construction of software to run in parallel. The software development teams focus on building service-independent software within a common operations framework for runtime automation, while the business teams can focus on tackling the unique characteristics of the business needs by defining metadata models to feed the execution environment. The metadata model content itself is the glue between the design time and runtime execution frameworks, and the result is that the service-specific metadata models drive the common service-independent software for runtime execution.

The metadata model is managed through a design environment (ASDC) which guides a series of designers from 3rd party resource onboarding through service creation, verification, and distribution. The modular nature of the ASDC metadata model provides a catalog of patterns that can be reused in future services. This provides a rich forward-engineering approach to quickly extend resource functions to manageable services and sellable products, further realizing benefits in time to market.

ASDC is the ECOMP component that supports AT&T's metadata model design. Many different models of metadata exist to address different business and technology domains. ASDC integrates various tools supporting multiple types of data input (e.g., YANG, HEAT, TOSCA, YAML, BPMN/BPEL, etc.). It automates the formation of an AT&T internal metadata format to drive end-to-end runtime execution. ASDC provides a cohesive and collaborative environment for design, test, certification, version control, and distribution of metadata models across the Resource, Service, Product, and Offer development lifecycle.

In ASDC the resource metadata is created from the description of the cloud infrastructure (e.g., AIC requirements) and the configuration data attributes needed to support the implementations. Subsequently, the resource metadata descriptions become a pool of building blocks that may be combined in developing services, and the combined service models in turn form products.

In addition to describing the object itself, modeling the management needs of an object using metadata patterns in ASDC may be applied to almost any business and operations function in AT&T. For example, metadata can be used to describe everything from the mapping of resource attributes to relational tables, through the definition of rules around the runtime session management of a group of related resources, to the signatures of services offered by a particular type of resource. Such rules form the policy definitions that can be used to control the underlying behavior of the software function. Another example is to describe the workflow steps as a process in the metadata that can be used by ECOMP Orchestration to fulfill a customer service request.

D2 policies are one type of metadata, and will eventually be quite numerous and support many purposes. Policies are created and managed centrally so that all policy actions, both simple and complex, are easily visualized and understood together, and validated properly prior to use. Once validated and corrected for any conflicts, the policies are precisely distributed to the many points of use/enforcement; likewise, the decisions and actions taken by policy are distributed, but remain part of the policy component of ECOMP. In this manner, policies will already be available when needed by a component, minimizing real-time requests to a central policy engine / PDP (Policy Decision Point), or to Policy Distribution. This improves scalability and reduces latency.

ASDC and Policy Creation exist in close relationship. Policies are created by many user groups (e.g., service designers, security personnel, operations staff, etc.). Various techniques are used to validate newly created policies and to help identify and resolve

potential conflicts with pre-existing policies. Validated policies are then stored in repositories. Subsequently, policies are distributed in two ways: (1) service-related policies are initially distributed in conjunction with the distribution of recipes (created via ASDC), e.g., for service instantiation, and (2) other policies (e.g., some security and operations policies) are unrelated to particular services and are therefore independently distributable. In any case, policies are updatable at any time as required.

5. Closed-Loop Automation

Given the D2 vision described above, we expect network elements and services to be instantiated by customers and providers in a significantly dynamic process with real-time response to actionable events. In order to design, engineer, plan, bill and assure these dynamic services, we have three (3) major requirements:

- A robust design framework that allows specification of the service in all aspects: modeling the resources and relationships that make up the service, specifying the policy rules that guide the service behavior, and specifying the applications, analytics and closed-loop events needed for the elastic management of the service
- An orchestration and control framework (MSO and Controllers) that is recipe/policy driven to provide automated instantiation of the service when needed and to manage service demands in an elastic manner
- An analytic framework that closely monitors the service behavior during the service lifecycle based on the specified design, analytics and policies, enabling response as required from the control framework to deal with situations ranging from those that require healing to those that require scaling of resources to elastically adjust to demand variations

The following sections describe the ECOMP frameworks designed to address these major requirements. The key pattern that these frameworks help automate is:

Design -> Create -> Collect -> Analyze -> Detect -> Publish -> Respond.

We refer to this automation pattern as closed-loop automation in that it provides the necessary automation to proactively respond to network and service conditions without human intervention. A high-level schematic of the closed-loop automation and the various phases within the service lifecycle using the automation is depicted in Figure 4.

Figure 4: ECOMP Closed Loop Automation
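The Collect -> Analyze -> Detect -> Publish -> Respond portion of the pattern can be compressed into a short sketch, with the design phase represented by a pre-distributed policy. The metric, threshold, condition name, and response below are illustrative assumptions, not ECOMP definitions.

```python
# Design phase output: a policy distributed before the loop ever runs.
policy = {"metric": "packet_loss_pct", "threshold": 5.0,
          "response": "reroute_traffic"}

def analyze(samples):
    # Analyze: reduce raw collected telemetry to a single signature value.
    return sum(samples) / len(samples)

def detect(value, policy):
    # Detect: compare the analytic result against the policy threshold.
    return value > policy["threshold"]

def respond(policy, publish):
    # Publish the detected condition, then return the policy-chosen action.
    publish({"condition": policy["metric"] + "_exceeded",
             "action": policy["response"]})
    return policy["response"]

published = []                            # stand-in for the event bus
samples = [6.2, 7.1, 5.9]                 # Collect: loss readings from a VNF
value = analyze(samples)
action = respond(policy, published.append) if detect(value, policy) else None
```

No human sits in the loop: the policy authored at design time selects the response at runtime, which is the sense of "closed loop" used here.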


The various phases shown in the service lifecycle above are supported by the Design, Orchestration & Analytic frameworks described below.

Design Framework

The service designers and operations users must, during the design phase, create the necessary recipes, templates and rules for the instantiation, control, data collection and analysis functions. Policies and their enforcement points (including those associated with closed loop automation) must also be defined for the various service conditions that require a response, along with the actors and their roles in orchestrating the response. This upfront design ensures that the logic/rules/metadata is codified to describe and manage the closed-loop behavior. The metadata (recipes, templates and policies) is then distributed to the appropriate Orchestration engines, Controllers, and DCAE components.

Orchestration and Control Framework

Closed Loop Automation includes the service instantiation or delivery process. The orchestration and control framework provides the automation of the configuration (both the initial configuration and subsequent changes) necessary for the resources at the appropriate locations in the network to ensure smooth operation of the service. For closed loop automation to occur during the lifecycle management phase of the service, the instantiation phase must ensure that the inventory, monitoring and control functions are activated, including the types of closed loop control related to the participating virtual functions and the overall service health.

Figure 5 illustrates a Runtime Execution of a Service Instantiation high-level use case. As requests come into ECOMP (1), whether they are customer requests, orders, or an internal operation triggering a network build-out, the Orchestration framework will first decompose the request and retrieve the necessary recipe(s) for execution.

Figure 5: Service Instantiation Use Case
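The first orchestration action in this use case, decomposing the request and retrieving the matching recipe, can be sketched as follows. The recipe store contents, request fields, and step names are invented for illustration; ECOMP's actual recipe format is not shown here.

```python
# Recipes distributed at design time, keyed by service type (illustrative).
recipe_store = {
    "vFirewall": ["home_and_place", "create_networks", "spin_up_vms",
                  "configure_vnf", "update_inventory", "test_and_turn_up"],
}

def decompose(request):
    # Split an incoming request into the service type and its parameters.
    params = {k: v for k, v in request.items() if k != "service_type"}
    return request["service_type"], params

def retrieve_recipe(service_type):
    # Look up the pre-distributed recipe; no per-request code generation.
    return recipe_store[service_type]

service_type, params = decompose(
    {"service_type": "vFirewall", "customer": "ent-42", "region": "west"})
recipe = retrieve_recipe(service_type)
```

The steps that follow in the text (homing and placement, controller triggering, inventory update, test and turn-up) would then be executed in the order the retrieved recipe specifies.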

The initial steps of the recipes include a homing and placement task using constraints specified in the requests. Homing and Placement are micro-services involving orchestration, inventory, and controllers responsible for infrastructure, network, and application. The goal is to allow algorithms to use real-time network data and determine the most efficient use of available infrastructure capacity. Micro-services are policy driven. Examples of policy categories may include geographic server area, LATA/regulatory restrictions, application latency, network resource and bandwidth, infrastructure and VNF capacity, as well as cost and rating parameters.

When a location is recommended and the assignment of resources is done, Orchestration then triggers the various controllers to create the internal datacenter and WAN networks (L2 VLANs or L3 VPNs), spin up the VMs, load the appropriate VNF software images, and connect them to the designated
data plane and control plane networks. Orchestration may also instruct the controllers to further configure the virtual functions for additional L3 features and L4+ capabilities. When the controllers make the changes in the network, autonomous events from the controllers and the networks themselves are emitted for updating the active inventory. Orchestration (MSO) completes the instantiation process by triggering the Test and Turn-Up tasks, including the ability for ECOMP to collect service-related events and policy-driven analytics that trigger the appropriate closed loop automation functions. Similar orchestration/controller recipes and templates, as well as the policy definitions, are also essential ingredients for any closed loop automation, as Orchestration and Controllers will be the actors that execute the recommended actions deriving from a closed loop automation policy.

Analytic Framework

The analytic framework ensures that the service is continuously monitored to detect both the anomalous conditions that require healing and the service demand variations that require scaling the resources (up or down) to the right level.

Once the Orchestration and Control framework completes the service instantiation, the DCAE Analytic framework begins to monitor the service by collecting the various data and listening to events from the virtual functions and their agents. The framework processes the data, analyzes the data, stores it as necessary for further analysis (e.g., establishing a baseline, performing trending, looking for a signature), and provides the information to the Application Controller. The applications in the framework look for specific conditions or signatures based on the analysis. When a condition is detected, the application publishes a corresponding event. The subsequent steps depend on the specific policies associated with the condition. In the simplest case, the orchestration framework proceeds to perform the changes necessary (as defined by the policy and design for the condition) to alleviate the condition. In more complex cases, the actor responsible for the event executes the complex policy rules (defined at design time) for determining the response. In other cases, where the condition does not uniquely identify specific response(s), the responsible actor conducts a series of additional steps (defined by recipes at design time, e.g., running a test, querying history) to further analyze the condition. The results of such diagnosis might result in publishing a more specific condition for which there is a defined response. The conditions referred to here could be ones related to the health of the virtualized function (e.g., blade down, software hung, service table overflow) that require healing. The conditions could be related to the overall service (not a specific virtual function, e.g., network congestion) that requires healing (e.g., re-routing traffic). The conditions could also relate to capacity (e.g., based on service demand variation, or congestion), resulting in a closed-loop response that appropriately scales the service up (or down). In cases where anomalous conditions are detected but specific responses are not identifiable, the condition is presented in an Operations portal for additional operational analysis.

The analytic framework also includes applications that analyze the history of the service lifecycle to discern patterns governing usage, thresholds, events, policy effectiveness, etc., and enable the feedback necessary to effect changes in the service design, policies or analytics. This completes the service lifecycle and provides an iterative way to continuously evolve the service toward better utilization, better experience and increased automation.

6. ASDC

ASDC is the component within the design time environment that provides multiple organizations the ability to create and manage AT&T D2 assets in terms of models. ASDC asset models are generally categorized into four object types: Resource, Service, Product and Offer.

A Resource model represents a fundamental capability of D2, developed by AT&T or a 3rd party supplier. Resources, either hardware or software, can be categorized as:

- Infrastructure Resource (the Cloud resources, e.g., Compute, Storage).
- Network Resource (network connectivity functions & elements).
- Application Resource (the features and capabilities of a software application).

A Service model represents a well-formed object with one or more resources (compute + connectivity + app functions/features) composed and operationalized in the AT&T environment. In some cases a Service supports
multiple customers, while in other cases a Service will be dedicated to a single customer.

A Product model includes one or more services packaged with base commercialization attributes for customer ordering and billing of the underlying service(s). An Offer model specifies the bundling of products with specific Marketing configurations for selling to the customers.

Figure 6 provides a high level overview of the aspects of the metadata-driven model methodology using ASDC Models.

Figure 6: ASDC Meta-Driven Methodology
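The four object types and their nesting — Resources composed into Services, Services packaged into Products, Products bundled into Offers — can be sketched roughly as below. The field names are assumptions for illustration, not the actual ASDC schema.

```python
# Illustrative sketch of the four ASDC model types and their nesting
# (Resource -> Service -> Product -> Offer). Field names are assumptions,
# not the actual ASDC schema.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    category: str  # "infrastructure" | "network" | "application"

@dataclass
class Service:
    name: str
    resources: list  # one or more Resource objects

@dataclass
class Product:
    name: str
    services: list   # one or more Service objects
    commercial_attrs: dict = field(default_factory=dict)  # ordering/billing attributes

@dataclass
class Offer:
    name: str
    products: list   # products bundled with marketing configuration

vm = Resource("compute-pool", "infrastructure")
svc = Service("vFW", [vm, Resource("vFW-app", "application")])
prod = Product("managed-firewall", [svc], {"billing": "monthly"})
offer = Offer("smb-security-bundle", [prod])
```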

The specific models stored in the ASDC Master Reference Catalog are reusable by the entire enterprise. The content from the Master Catalog is distributed to the runtime execution engines so they can efficiently interoperate and consistently execute.

This model driven approach allows AT&T to vastly reduce the time needed to bring new products, services, and resources to market, and enables AT&T to open its Network to partners and industry developers for rapid on-boarding of 3rd party solutions.

6.1 ASDC Model Framework

A model in ASDC is the profile of an asset item expressed with its Descriptor and Management Recipes. ASDC provides basic templates for modeling Resources, Services, and Products. They are all derived from a generic extendable template as depicted in Figure 7.

Figure 7: Common Model Framework
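One way to picture the generic extendable template is as a Descriptor plus a variable set of Management Recipes keyed by the function they target, as sketched below. The structure and field names are illustrative assumptions, not the actual ASDC model definition.

```python
# Minimal sketch of the common template: a Descriptor plus 0..N Management
# Recipes keyed by the operations or control function they target. The
# structure and field names are illustrative, not the ASDC definition.
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    common: dict                                     # required on every asset (ID, name)
    additional: dict = field(default_factory=dict)   # 0..N model-specific parameters
    composition: list = field(default_factory=list)  # 0..N composition graphs
    apis: list = field(default_factory=list)         # 0..N exposed APIs
    executables: list = field(default_factory=list)  # 0..N software executables

@dataclass
class AssetModel:
    descriptor: Descriptor
    recipes: dict = field(default_factory=dict)      # 0..N management recipes

vfw = AssetModel(
    Descriptor(common={"id": "r-001", "name": "vFW"},
               executables=["vfw-image-1.0.qcow2"]),
    recipes={"configuration": "tosca: ...", "monitoring": "collect: ..."},
)
```

The "0 to N" cardinality in the text maps directly to the optional, defaulted fields: an execution system that cannot consume a given recipe type simply sees no entry for it.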

The Descriptor defines the capability of the resource, service, or product item provided by the developer/designer. It contains the following information:

- Common Parameters, which provide required information across all asset items, such as ID and name.
- Additional parameters (0 to N) that may be added for any information related to a specific asset model.
- Composition graphs (0 to N) that may be included to express the composition & relationship with the underlying components.
- APIs (0 to N) that are exposed by the asset item itself to provide functional or management capabilities.
- Software executables (0 to N).

A Management Recipe includes the Parameters, Processes, and Policies related to AT&T's methods of instantiation and management of an actual asset instance. These recipes are fed to AT&T Management Systems via APIs for metadata-driven execution, with each recipe targeted for a specific operations or control function. The number of recipes included in a particular asset model is variable (from 0 to N) and determined by the capability of the corresponding execution system for accepting the metadata to drive its execution. For example:

- A Configuration Recipe may be created to specify the execution process for instantiation, change, and deletion of an asset instance.
  o At the Resource layer, the configuration recipe may be expressed in standards-based syntax and Open Source models (e.g., TOSCA and HEAT for infrastructure, and YANG for the Network) to drive the controller execution.
  o At the Service layer, a Configuration Recipe may be expressed by BPEL instructions to drive the ECOMP MSO execution.
- An Ordering Recipe at the product layer may be composed of BPEL instructions to provide the ordering flow executed by an Ordering BSS.
- A Monitoring Recipe for fault, a Performance Recipe for performance analysis, or a Usage Recipe for usage measurement at the service layer may be used by ECOMP DCAE or other service assurance OSSs.

6.2 ASDC Functional Architecture

ASDC employs a set of studios to design, certify, and distribute standardized models including the relationships within and across them. From a bottom-up view, ASDC provides tools to onboard resources such as building blocks and make them available for enterprise-wide composition. From a top-down view, ASDC supports product managers who compose new products from existing services/resources in the ASDC catalog or acquire new resource capabilities from internal or external developers. The components of ASDC are shown in Figure 8.

Figure 8: Design Studio
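The studio flow implied by Figure 8 — onboard, design, certify, then distribute — can be sketched as a simple state progression in which a model cannot be distributed until it has been certified. The state names and the single-step transition rule are an illustrative assumption, not the actual ASDC workflow engine.

```python
# Illustrative sketch of the studio flow implied by Figure 8: a model can
# only be distributed after it has been certified. States and the single
# allowed transition per state are assumptions, not the ASDC workflow.
ALLOWED = {
    "onboarded": "designed",
    "designed": "certified",
    "certified": "distributed",
}

def advance(state, target):
    """Move a model to the next lifecycle state, refusing any skipped step."""
    if ALLOWED.get(state) != target:
        raise ValueError(f"cannot move from {state} to {target}")
    return target

state = "onboarded"
for step in ("designed", "certified", "distributed"):
    state = advance(state, step)
```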

The ASDC Design Studio consists of a portal and back-end tooling for onboarding, iterative modeling, and validation of AT&T assets. The Design Studio includes a set of basic model templates that each project can further configure and extend with additional parameters, parameter value ranges and validation rules. The configured project-specific templates are used by the Design Studio GUI (as drop-down menus or rule-based validation) to validate the designs. This ensures the models contain the necessary information and valid values based on the type of model and the project requirements.

ASDC provides access to modeling tools that can be used to create executable process definitions that will be used by the various D2 components. Processes can be created with standard process modeling tools such as BPMN (Business Process Model and Notation) tools. The processes created are stored in the Process Repository, and asset models in the catalog may refer to them.

The models in the Master Reference Catalog can be translated to any industry-standard or required proprietary format by the Distribution Studio. The modeling process does not result in any instantiation in the run-time environment until the MSO receives a request to do so.

ASDC will integrate with AT&T's Third Party Supplier Management functions as needed for onboarding, modeling, and cataloging software assets along with their entitlement and license models as part of the Design Studio.

Data Repositories

ASDC Data Repositories maintain the design artifacts and expose content to the designers, testers, and distributors. The repositories include:

- Master Reference Catalog: the data store of designed models, including resources, services, and products, the relationships of the asset items, and their references to the process & policy repositories.
- Process Repository: the data store of designed processes.
- Policy Repository: the data store of designed policies.
- Resource Images: the data store that contains the resource executables.
- Service Images: the data store that contains the executables for the service.
- Certification Repository: the data store of testing artifacts.

Certification Studio

ASDC provides a Certification Studio with expanded use of automated simulation and test tools along with access to a shared, virtualized testing sandbox. The model-driven approach to testing enables a reduction in overall deployment cost, complexity, and cycle time. The studio:

- Allows reuse and reallocation of hardware resources as needed for testing rather than dedicated lab environments
- Supports test environment instantiation of various sizes when needed using a mix of production and test components, using the standardized and automated D2 operations framework
- Provides expanded use of automated test tools beginning in design through deployment for simulation/modeling behaviors, conflict and error identification, test planning and execution, and results/certification

Distribution Studio

The D2 models are stored in an internal AT&T technology-independent format, which provides AT&T greater flexibility in selecting the D2 execution engines that consume the data. In a model-driven, software-based architecture, controlling the distribution of model data and software executables is critical. This Studio provides a flexible, auditable mechanism to format, translate when needed, and distribute the models to the various D2 components. Validated models are distributed from the design time environment to a runtime repository. The runtime Distribution component supports two modes of access: 1) models can be sent to the component using the model, in advance of when they are needed, or 2) models can be accessed in real-time by the runtime components. This distribution is intelligent, such that each function automatically receives or has access to only the specific model components which match its needs and scope.

7. Policy

The Policy platform plays an important role in realizing the D2.0 vision of closed loop automation and lifecycle management. The Policy platform's main objective is to control/affect/modify the behavior of the complete D2.0 environment (NFVI, VNF, ECOMP, etc.) using field-configurable policies/rules without always requiring a development cycle. Conceptually, Policy is an approach to intelligently constrain and/or influence the behaviors of functions and systems. Policy permits simpler management/control of complex mechanisms via abstraction. Incorporating high-level goals, a set of technologies and architecture, and supportive methods/patterns, Policy is based on easily updateable conditional rules, which are implemented in various ways such that policies and resulting behaviors can be quickly changed as needed. Policy is used to control, to influence, and to help ensure compliance with goals.

A policy in the sense of a particular D2.0 policy may be defined at a high level to create a condition, requirement, constraint, or need that must be provided, maintained, and enforced. A policy may also be defined at a lower or functional level, such as a machine-readable rule or software condition/assertion which enables actions to be taken based on a trigger or request, specific to particular selected conditions in effect at that time. This can include XACML policies, Drools policies, etc. Lower-level policies may also be embodied in models such as YANG, TOSCA, etc.

7.1 Policy Creation

The ECOMP policy platform has a broad scope supporting infrastructure, product/service, operation automation, and security-related policy rules. These policy rules are defined by multiple stakeholders (Network/Service Designers, Operations, Security, customers, etc.). In addition, input from various sources (ASDC, Policy Editor, Customer Input, etc.) should be collected and rationalized. Therefore, a centralized policy creation environment will be used to validate policy rules, identify and resolve overlaps & conflicts, and derive policies where needed. This creation framework should be universally accessible, developed and managed as a common asset, and provide editing tools to allow users to easily create or change policy rules. Offline analysis of performance/fault/closed-loop action data is used to identify opportunities to discover new signatures and refine existing signatures and closed loop operations. Policy translation/derivation functionality is also included to derive lower-level policies from higher-level policies. Conflict detection and mitigation are used to detect and resolve policies that may potentially cause conflicts, prior to distribution. Once validated and free of conflicts, policies are placed in an appropriate repository.

7.2 Policy Distribution

After completing initial policy creation or modification to existing policies, the Policy Distribution Framework sends policies (e.g., from the repository) to their points of use, in advance of when they are needed. This distribution is intelligent and precise, such that each distributed policy-enabled function automatically receives only the specific policies which match its needs and scope.
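A toy illustration of the conflict-detection step described above: two rules clash when they fire on the same condition but demand different actions, and validation blocks distribution until the set is clean. The rule format here is invented for illustration and is not a D2 policy syntax.

```python
# Illustrative sketch of pre-distribution conflict detection: two rules
# conflict when they share a trigger condition but disagree on the action.
# The rule format is an invented illustration, not a D2 policy syntax.
def conflicts(p1, p2):
    same_trigger = p1["condition"] == p2["condition"]
    return same_trigger and p1["action"] != p2["action"]

def validate(policies):
    clashes = [
        (a["name"], b["name"])
        for i, a in enumerate(policies)
        for b in policies[i + 1:]
        if conflicts(a, b)
    ]
    if clashes:
        raise ValueError(f"conflicting policies: {clashes}")
    return policies  # free of conflicts -> ready for the repository

rules = [
    {"name": "scale-out", "condition": "cpu>90", "action": "add-vm"},
    {"name": "heal", "condition": "heartbeat-lost", "action": "restart-vf"},
]
validated = validate(rules)
```

Real conflict detection would of course reason about overlapping (not just identical) conditions, but the gate — validate before anything reaches the repository — is the point being illustrated.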

Notifications or events can be used to communicate links/URLs for policies to components needing policies, so that components can utilize those links to fetch particular policies or groups of policies as needed. Components in some cases may also publish events indicating they need new policies, eliciting a response with updated links/URLs. Also, in some cases policies can be given to components indicating they should subscribe to one or more policies, so that they receive updates to those policies automatically as they become available.

7.3 Policy Decision and Enforcement

Runtime policy decision and enforcement functionality is a distributed system that can apply to the various ECOMP modules in most cases (there could be some exceptions). For example, policy rules for data collection and their frequency are enforced by DCAE data collection functionality. Analytic policy rules, anomalous/abnormal condition identification, and publication of events signaling detection of such conditions are enforced by DCAE Analytic applications. Policy rules for associated remedial or other action (e.g., further diagnosis) are enforced by the right actor/participant in a control loop (MSO, Controller, DCAE, etc.).

Policy Decision/Enforcement functionality generally receives policies in advance, via Policy Distribution. In some cases a particular runtime Policy engine may be queried in real time for policies/guidance, as indicated in the previous section. Additional unifying mechanisms, methods, and attributes help manage complexity and ensure that Policy is not added inefficiently as separate islands. Attribute values may be defined at creation time; examples include Policy Scope attributes, described in the following section (Policy Unification and Organization). Note also that Policy objects and attributes will need to be included in a proper governance process to ensure that correct intended outcomes are achieved for the business.

Policy-related APIs can provide the ability to: 1) get (read) policies from a component, i.e., on demand; 2) set (write) one or more policies into a component, i.e., immediately pushed/updated; and 3) distribute a set of policies to multiple components that match the scope of those policies, for immediate use (forced) or later use (upon need, e.g., time-determined) by those entities.

Figure 9: D2 Policy Architecture Framework

Figure 9 shows Policy Creation on the left, Policy Repository & Distribution at the bottom, and Policy use on the right (e.g., in Control Loops, or in VNFs). As shown in Figure 9, Policy Creation is in close association with ASDC. When fully integrated, policies will be created either in conjunction with Products & Services (for policy scopes that are related to these), or separately for policies of scopes orthogonal to these (i.e., unrelated to particular products & services). Orthogonal policies may include various policies for operations, security, infrastructure optimization, etc.
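Scope-based matching, as used by the distribution and decision functions described above, can be sketched as a simple attribute filter: each policy carries scope attributes, and a policy-enabled function receives only the policies whose scope it satisfies. The attribute names (domain, region) are hypothetical, not defined D2 Policy Scope dimensions.

```python
# Sketch of scope-matched policy delivery: a function receives only the
# policies whose scope attributes it satisfies. Attribute names are
# hypothetical illustrations, not defined D2 Policy Scope dimensions.
def matches(policy_scope, function_scope):
    """A policy matches when every scope attribute it declares is satisfied."""
    return all(function_scope.get(k) == v for k, v in policy_scope.items())

def policies_for(function_scope, policies):
    return [p for p in policies if matches(p["scope"], function_scope)]

policies = [
    {"name": "fw-config", "scope": {"domain": "security", "region": "us-east"}},
    {"name": "lb-tuning", "scope": {"domain": "traffic"}},
]
enforcement_point = {"domain": "security", "region": "us-east", "tech": "vFW"}
received = policies_for(enforcement_point, policies)
```

A policy that declares fewer scope attributes matches more broadly — which is one simple way the same mechanism can serve both narrowly targeted and widely applicable policies.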

Note that the architecture shown is a logical architecture, and may be implemented in various ways. Some functions, in whole or in part, may be implemented either as separate virtualized elements or within other (non-policy) functions.

7.4 Policy Unification and Organization

In an expandable, multi-purpose Policy Framework, many types of Policy may be used. Policy may be organized using many convenient dimensions in order to facilitate the workings of the Framework within D2.0. A flexible organizing principle termed Policy Scope will enable a set of attributes to specify (to the degree/precision desired, and using any set of desired dimensions) the precise scope of both policies and policy-enabled functions/components. Useful organizing dimensions of Policy Scope may include: (a) Policy type or category, e.g., taxonomical, (b) Policy ownership/administrative domain, (c) geographic area or location, (d) technology type and/or specifics, (e) Policy language, version, etc., (f) security level or other security-related values/specifiers/limiters, (g) particular defined grouping, and (h) any other dimensions/attributes as may be deemed helpful, e.g., by Operations. Note that attributes can be defined for each dimension.

By then setting values for these attributes, Policy Scope can be used to specify the precise scope of: (A) Policy events or requests/triggers, to allow each event/request to self-indicate its scope, which can then be examined by a suitable function for specifics of routing/delivery; (B) Policy decision/enforcement functions or other Policy functions, to allow each Policy function to self-indicate its scope of decision making, enforcement, or other capabilities; (C) Virtual Functions of any type, for auto-attachment to the appropriate Policy Framework and distribution mechanism instances; and, most importantly, (D) individual policies, to aid in management and distribution of the policies.

7.5 Policy Technologies

D2 Policy will utilize rather than replace various technologies; examples of possible policy areas are shown in the following table. These will be used, e.g., via translation capabilities, to achieve the best possible solution that takes advantage of helpful technologies while still providing, in effect, a single D2.0 Policy brain.

7.6 Policy Use

At runtime, policies that were previously distributed to policy-enabled components will be used by those components to control or influence their functionality and behavior, including any actions that are taken. In many cases, those policies will be utilized to make decisions, where these decisions will often be conditional upon the current situation.

A major example of this approach is the feedback/control loop pattern driven by DCAE. Many
specific control loops can be defined. In a particular control loop, each participant (e.g., orchestrator, controller, DCAE, virtual function) will have received policies determining how it should act as part of that loop. All of the policies for that loop will have been previously created together, ensuring proper coordinated closed-loop action. DCAE can receive specific policies for data collection (e.g., what data to collect, how to collect it, and how often), data analysis (e.g., what type and depth of analysis to perform), and signature and event publishing (e.g., what analysis results to look for, as well as the specifics of the event to be published upon detecting those results). The remaining components of the loop (e.g., orchestrators, controllers, etc.) can receive specific policies determining the actions to be taken upon receiving the triggered event from DCAE. Each loop participant could also receive policies determining the specific events to which it subscribes.

8. Orchestration

In general, Orchestration can be viewed as the definition and execution of workflows or processes to manage the completion of a task. The ability to graphically design and modify a workflow process is the key differentiator between an orchestrated process and a standard compiled set of procedural code.

Orchestration provides adaptability and improved time-to-market due to the ease of definition and change without the need for a development engagement. As such, it is a primary driver of flexibility in the architecture. Interoperating with Policy, the combination provides a basis for the definition of a flexible process that can be guided by business and technical policies and driven by process designers.

Orchestration exists throughout the D2 architecture and should not be limited to the constraints implied by the term "workflow", which typically implies some degree of human intervention. Orchestration in D2 will not involve human intervention/decision/guidance in the vast majority of cases. The human involvement in orchestration is typically performed up front in the design process, although there may be processes that require intervention or alternate action, such as exception or fallout processing.

To support the large number of Orchestration requests, the orchestration engine will be exposed as a reusable service. With this approach, any component of the architecture can execute process recipes. Orchestration Services will be capable of consuming a process recipe and executing against it to completion. The Service model maintains consistency and reusability across all orchestration activities and ensures consistent methods, structure and version of the workflow execution environment.

Orchestration Services will expose a common set of APIs to drive consistency across the interaction of ECOMP components. To maintain consistency across the platform, orchestration processes will interact with other platform components or external systems via standard and well-defined APIs.

The Master Service Orchestrator's (MSO's) primary function is the automation of end-to-end service instance provisioning activities. The MSO is responsible for the instantiation/release and migration/relocation of VNFs in support of overall ECOMP end-to-end service instantiation, operations and management. The MSO executes well-defined processes to complete its objectives and is typically triggered by the receipt of service requests generated by other ECOMP components or by Order Lifecycle Management in the BSS layer. The orchestration recipe is obtained from the Service Design and Creation (ASDC) component of the ECOMP Platform, where all Service Designs are created and exposed/distributed for consumption.

Controllers (Infrastructure, Network and Application) participate in service instantiation and are the primary players in ongoing service management, e.g., control loop actions, service migration/scaling, service configuration, and service management activities. Each Controller instance supports some form of orchestration to manage operations within its scope.

Figure 10 illustrates the use of Orchestration in the two main areas: Service Orchestration, embodied in the Master Service Orchestrator, and Service Control, embodied in the Infrastructure, Application and Network Controllers. It illustrates the two major domains of D2 that employ orchestration. Although the objectives and scope of the domains vary, they both follow a consistent model for the definition and execution of orchestration activities.

Figure 10: Comparison of MSO and Controllers
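The closed-loop division of labor between DCAE and the orchestration actors described above can be sketched as follows: DCAE publishes a detected condition, and the participant named by the matching design-time policy executes the response, with unmatched conditions falling through to an Operations portal. All condition, actor, and action names here are invented for illustration and are not actual ECOMP identifiers.

```python
# Hypothetical sketch of closed-loop dispatch: a condition detected by DCAE
# is mapped, via design-time policies, to the loop participant responsible
# for the response. Names are invented illustrations, not ECOMP identifiers.
LOOP_POLICIES = {  # distributed to the loop participants in advance
    "vnf-heartbeat-lost": {"actor": "app-controller", "action": "restart-vf"},
    "network-congestion": {"actor": "network-controller", "action": "reroute"},
    "demand-spike": {"actor": "mso", "action": "scale-out"},
}

def on_event(condition):
    """Return (actor, action) for a condition; undefined ones go to Operations."""
    policy = LOOP_POLICIES.get(condition)
    if policy is None:
        return ("ops-portal", "manual-analysis")  # no defined response
    return (policy["actor"], policy["action"])

actor, action = on_event("network-congestion")
```

The fallback branch mirrors the earlier point that anomalies without an identifiable response are surfaced in an Operations portal rather than acted on automatically.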

Depending on the scope of a network issue, the MSO may delegate, or a Controller may assume, some of the activities identified above. Although the primary orchestrator is called the Master Service Orchestrator (MSO), its job is to manage orchestration at the top level and to facilitate the orchestration that takes place within the underlying controllers, marshaling data between the Controllers such that they have the process steps and all the ingredients to complete the execution of their respective recipes. For new services, this may involve determination of service placement and identification of existing controllers that meet the Service Request parameters and have the required capacity. If existing controllers (Infrastructure, Network or Application) do not exist or do not have capacity, the MSO will obtain a recipe for instantiation of a new Controller under which the requested Service can be placed.

ASDC is the module of ECOMP where orchestration process flows are defined. These process flows will start with a template that may include common functions such as homing determination, selection of Infrastructure, Network and Application Controllers, consultation of policies, and interrogation of A&AI to obtain necessary information to guide the process flows. The MSO does not provide any process-based functionality without a recipe for the requested activity, regardless of whether that request is a Customer Order or a Service adjustment/configuration update to an existing service.

The MSO will interrogate A&AI to obtain information regarding existing Network and Application Controllers to support a Service Request. A&AI will provide the addresses of candidate Controllers that are able to support the Service Request. The MSO may then interrogate the Controller to validate its continued available capacity. The MSO and the Controllers report reference information back to A&AI upon completion of a Service Request, to be used in subsequent operations.

8.1 Application, Network and Infrastructure Controller Orchestration

As previously stated, orchestration is performed throughout the D2 Architecture by various components, primarily the MSO and the Application, Network and Infrastructure controllers. Each will perform orchestration for:

- Service Delivery or Changes to existing Service
- Service Scaling, Optimization, or Migration
- Controller Instantiation
- Capacity Management

Regardless of the focus of the orchestration, all recipes will include the need to update A&AI with configuration information, identifiers and IP addresses.

Infrastructure Controller Orchestration

Like the MSO, Controllers will obtain their orchestration process and payload (templates/models) from Service Design & Creation (ASDC). For Service instantiation, the MSO maintains overall end-to-end responsibility for ensuring that a request is completed. As part of that responsibility, the MSO
will select the appropriate controllers (Infrastructure, Network, and Application) to carry out the request. Because a Service Request is often composed of one or more Resources, the MSO will request the appropriate Controllers to obtain the recipe for the instantiation of a Resource within the scope of the requested Controller. After service placement is determined, the MSO may request the creation of a Virtual Machine (VM) at one or more locations, depending on the breadth of the service being instantiated and whether an existing instance of the requested service can be used. If new VM resources are required, the MSO will place the request to the Infrastructure Controller for the specific AIC location. Upon receipt of the request, the Infrastructure Controller may obtain its Resource Recipe from ASDC. The Infrastructure Controller will then begin orchestrating the request. For Infrastructure Controllers, this typically involves execution of OpenStack requests for the creation of virtual machines and for the loading of the Virtual Function (VF) software into the new VM container. The Resource Recipe will define VM sizing, including compute, storage and memory. If the Resource-level Recipe requires multiple VMs, the MSO will repeat the process, requesting each Infrastructure Controller to spin up one or more VMs and load the appropriate VFs, again driven by the Resource Recipe of the Infrastructure Controller. When the Infrastructure Controller completes the request, it will pass the virtual resource identifier and access (IP) information back to the MSO to provide to the Network and Application Controllers. Along the entire process, the MSO may write identifier information to A&AI for inventory tracking.

Network Controller Orchestration

Network Controllers are constructed and operate in much the same manner as Application and Infrastructure Controllers. New Service Requests will be associated with an overall recipe for instantiation of that Service. The MSO will obtain compatible Network Controller information from A&AI and will in turn request LAN or WAN connectivity and configuration to be performed. This may be done by requesting the Network Controller to obtain its resource recipe from ASDC. It is the responsibility of the MSO to request the (virtual) network connectivity, which will be included in the recipe and configured to meet the instance-specific customer or service requirements at each level. Physical access might need to be provisioned in the legacy provisioning systems prior to requesting the MSO to instantiate the service.

Application Controller Orchestration

Application Controllers will also be requested by the MSO to obtain the application-specific component of the Service Recipe from ASDC and execute the orchestration workflow. The MSO continues to be responsible for ensuring that the Application Controller successfully completes its Resource configuration as defined by the recipe. As with Infrastructure and Network Controllers, all workflows, whether focused on instantiation, configuration or scaling, will be obtained or originate from ASDC. In addition, workflows will also report their actions to A&AI as well as to the MSO.

Note that not all changes in network or service behavior are the result of orchestration. For example, application Virtual Functions can change network behavior by changing rules or policies associated with Controller activities. These policy changes can dynamically enable service behavior changes.

9. DCAE

In the D2 vision, virtualized functions across various layers of functionality are expected to be instantiated in a significantly dynamic manner that requires the ability to provide real-time responses to actionable events from virtualized resources, ECOMP applications, as well as requests from customers, AT&T partners and other providers. In order to engineer, plan, bill and assure these dynamic services, DCAE within the ECOMP framework gathers key performance, usage, telemetry and events from the dynamic, multi-vendor virtualized infrastructure in order to compute various analytics and respond with appropriate actions based on any observed anomalies or significant events. These significant events include application events that lead to resource scaling, configuration changes, and other activities, as well as faults and performance degradations requiring healing. The collected data and computed analytics are stored for persistence as well as use by other
between the components and to ensure that the applications for business and operations (e.g., billing,
selected Network Controller successfully completes ticketing). More importantly, DCAE has to perform a
the Network configuration workflow. A Service may lot of these functions in real-time. One of the key
have LAN, WAN and Access requirements, each of
20

design patterns we expect this component to help capabilities, behavior and scale to support the
realize is evolution of the various network functions that are
virtualized over time.
Collect Data Analyze & Correlate Detect 9.1 Platform Approach to DCAE
Anomalies Publish need for Action. It is essential that we take the ECOMP Platform
approach described earlier in order to fulfill the DCAE
We expect this pattern in various forms to be realized goals. The notion of Platform here is a reference to
at multiple layers (e.g., infrastructure, network, and the core DCAE capabilities that help define how data
service). We envision the data to be collected once is collected, moved, stored and analyzed within DCAE.
and made available to multiple applications (AT&T, These capabilities can then be used as a foundation
vendors, partners, others) for consumption. We to realize many applications serving the needs of a
expect applications supporting various operational diverse community. Figure 11 is a functional
and business functions to be key consumers of the architecture rendition of the DCAE Platform to enable
data & events made available by DCAE. We envision analytic applications within the ECOMP/DCAE
DCAE to be an open analytic framework in allowing environment.
AT&T to be able to extend and enhance its
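The closed-loop pattern named in Section 9 (Collect Data -> Analyze & Correlate -> Detect Anomalies -> Publish need for Action) can be sketched as a minimal pipeline. This is an illustrative sketch only; the class, rolling window, and sigma threshold below are assumptions and not part of any ECOMP interface.

```python
from collections import deque
from statistics import mean, stdev

class ClosedLoopDetector:
    """Illustrative DCAE-style loop: collect -> analyze -> detect -> publish."""

    def __init__(self, window=20, sigma=3.0):
        self.samples = deque(maxlen=window)   # rolling collection window
        self.sigma = sigma                    # anomaly threshold in std deviations
        self.actions = []                     # stand-in for a publish bus

    def collect(self, metric):
        self.samples.append(metric)
        self.analyze(metric)

    def analyze(self, metric):
        if len(self.samples) < 5:
            return                            # not enough history to correlate
        baseline = list(self.samples)[:-1]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(metric - mu) > self.sigma * sd:
            self.publish({"event": "anomaly", "value": metric, "baseline": mu})

    def publish(self, action):
        self.actions.append(action)           # real DCAE would notify Policy/MSO

detector = ClosedLoopDetector()
for cpu in [50, 51, 49, 50, 52, 51, 50, 99]:  # last sample deviates sharply
    detector.collect(cpu)
print(detector.actions[0]["event"])           # prints: anomaly
```

In a real deployment the publish step would hand the detected condition to policy-driven automation (e.g., scaling or healing) rather than append to a list; the sketch only shows the shape of the loop.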

Figure 11: DCAE Platform Approach to Analytic Applications

As Figure 11 suggests, the DCAE Platform requires a development environment with well-documented capabilities and a toolkit that allows the development and on-boarding of platform components and applications. The DCAE platform and applications depend on the rich instrumentation made available by the underlying AIC infrastructure and the various virtual and physical elements present in that infrastructure to enable the collection, processing, movement and analysis necessary for elastic management of the infrastructure resources. In addition, it relies on robust interfaces with key ECOMP components for the reference information about the managed elements (A&AI), and the rules (Policy) and templates (ASDC) that govern the behavior of the managed elements and the ECOMP response.

9.2 DCAE Platform Components

The DCAE Platform consists of multiple components: Common Collection Framework, Data Movement, Edge & Central Lake, Analytic Framework, and Analytic Applications. These are described below.

Common Collection Framework

The collection layer provides the various collectors necessary to collect the instrumentation made available in the AIC infrastructure. The scope of the data collection includes all of the physical and virtual elements (Compute, Storage and Network) in the AIC infrastructure. The collection includes the types of event data necessary to monitor the health of the managed environment, the types of data to compute the key performance and capacity indicators necessary for elastic management of the resources, the types of granular data (e.g., flow, session & call records) needed for detecting network & service conditions, etc. The collection will support both real-time streaming as well as batch methods of data collection.

Data Movement

This component facilitates the movement of messages and data between various publishers and interested subscribers. While a key component within DCAE, this is also the component that enables data movement between various ECOMP components.

Edge & Central Lake

DCAE needs to support a variety of applications and use cases, ranging from real-time applications that have stringent latency requirements to other analytic applications that need to process a range of unstructured and structured data. The DCAE storage lake needs to support all of these needs, and must do so in a way that allows for incorporating new storage technologies as they become available. This will be done by encapsulating data access via APIs and minimizing application knowledge of the specific technology implementations.

Given the scope of requirements around the volume, velocity and variety of data that DCAE needs to support, the storage will leverage the technologies that Big Data has to offer, such as support for NOSQL technologies, including in-memory repositories, and support for raw, structured, unstructured and semi-structured data. While there may be detailed data retained at the DCAE edge layer for detailed analysis and troubleshooting, applications should optimize the use of precious bandwidth & storage resources by ensuring they propagate only the required data (reduced, transformed, aggregated, etc.) to the Core Data Lake for other analyses.

Analytic Framework

The Analytic Framework is an environment that allows for development of real-time applications (e.g., analytics, anomaly detection, capacity monitoring, congestion monitoring, alarm correlation, etc.) as well as other non-real-time applications (e.g., analytics, forwarding synthesized, aggregated or transformed data to Big Data stores and applications); the intent is to structure the environment to allow for agile introduction of applications from various providers (Labs, IT, vendors, etc.). The framework should support the ability to process both a real-time stream of data as well as data collected via traditional batch methods. The framework should support methods that allow developers to compose applications that process data from multiple streams and sources. Analytic applications are developed by various organizations; however, they all run in the DCAE framework and are managed by the DCAE controller. These applications are micro-services developed by a broad community and adhere to ECOMP Framework standards.

Analytic Applications

The following list provides examples of the types of applications that can be built on top of DCAE and that depend on the timely collection of detailed data and events by DCAE.

Analytics: These will be the most common applications, processing the collected data and deriving interesting metrics or analytics for use by other applications or Operations. These analytics range from very simple ones (from a single source of data) that compute usage, utilization, latency, etc., to very complex ones that detect specific conditions based on data collected from various sources. The analytics could be capacity indicators used to adjust resources, or performance indicators pointing to anomalous conditions requiring response.

Fault / Event Correlation: This is a key application that processes events and thresholds published by managed resources or other applications that detect specific conditions. Based on defined rules, policies, known signatures and other knowledge about the network or service behavior, this application would determine the root cause for various conditions and notify interested applications and Operations.

Performance Surveillance & Visualization: This class of application provides a window to Operations, notifying them of network and service conditions. The notifications could include outages and impacted services or customers based on various dimensions of interest to Operations. They provide visual aids ranging from geographic dashboards to virtual information model browsers to detailed drilldown to specific service or customer impacts.

Capacity Planning: This class of application provides planners and engineers the ability to adjust forecasts based on observed demands, as well as to plan specific capacity augments at various levels, e.g., NFVI level (technical plant, racks, clusters, etc.), Network level (bandwidth, circuits, etc.), and Service or Customer levels.

Testing & Trouble-shooting: This class of application provides operations the tools to test and trouble-shoot specific conditions. They could range from simple health checks for testing purposes, to complex service emulations orchestrated for troubleshooting purposes. In both cases, DCAE provides the ability to collect the results of the health checks and tests that are conducted. These checks and tests could be done on an ongoing basis, scheduled, or conducted on demand.

Security: Some components of AIC may expose new targets for security threats. Orchestration and control, decoupled hardware and software, and commodity hardware may be more susceptible to attack than proprietary hardware. However, SDN and virtual networks also offer an opportunity for collecting a rich set of data for security analytics applications to detect anomalies that signal a security threat, such as a DDoS attack, and automatically trigger mitigating action.

Other: We note that the applications listed above are by no means exhaustive, and the open architecture of DCAE will lend itself to integration of application capabilities over time from various sources and providers.

10. Active & Available Inventory (A&AI)

Active and Available Inventory (A&AI) is the ECOMP component that provides real-time views of D2 Resources, Services, Products, and Customer Subscriptions for D2 services. Figure 12 provides a functional view of A&AI. The views provided by Active and Available Inventory relate data managed by multiple ECOMP, BSS, OSS, and network applications to form a top-to-bottom view, ranging from the Products customers buy to the Services and Resources used to compose the Products. Active and Available Inventory not only forms a registry of Products, Services, and Resources; it also maintains up-to-date views of the relationships between these inventory items across their lifecycles. To deliver the vision of the dynamism of D2, Active and Available Inventory will manage these multi-dimensional relationships in real-time.
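As a toy illustration of the registry-and-relationships view A&AI maintains, the inventory can be modeled as a graph and walked to produce the dependency chains used for impact analysis and homing. All item names and the adjacency structure below are hypothetical.

```python
from collections import deque

# Hypothetical inventory graph: each item maps to the items it decomposes into
# (product -> services -> resources -> VMs), in the spirit of A&AI's graph store.
inventory = {
    "product:vpn-gold": ["service:vpn"],
    "service:vpn": ["resource:vFW", "resource:vRouter"],
    "resource:vFW": ["vm:vm-101"],
    "resource:vRouter": ["vm:vm-102"],
}

def dependency_chain(root):
    """Breadth-first traversal from a top-level item down to its supporting items."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        item = queue.popleft()
        order.append(item)
        for child in inventory.get(item, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

# Impact analysis reads the chain top to bottom.
print(dependency_chain("product:vpn-gold"))
```

A production inventory would hold these relationships in a graph database and expose them via query APIs; the sketch only shows why a graph traversal yields the chain of dependent items.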

Figure 12: A&AI Functional View

Active and Available Inventory maintains real-time Inventory and Topology data by being continually updated as changes are made within the AT&T Integrated Cloud. It uses graph data technology to store relationships between inventory items. Graph traversals can then be used to identify chains of dependencies between items. A&AI data views are used by homing logic during real-time service delivery, root cause analysis of problems, impact analysis, capacity management, software license management and many other D2 functions.

The Inventory and Topology data includes resources, services, products, and customer subscriptions, along with the topological relationships between them. Relationships captured by A&AI include top-to-bottom relationships such as those defined in ASDC when products are composed of services, and services are composed of resources. It also includes side-to-side relationships such as end-to-end connectivity of virtualized functions to form service chains. A&AI also keeps track of the span of control of each controller, and is queried by MSO and placement functions to identify which controller to invoke to perform a given operation.

A&AI is metadata driven, allowing new inventory item types to be added dynamically and quickly via AT&T Service Design & Creation (ASDC) catalog definitions, reducing the need for lengthy development cycles.

10.1 Key A&AI Requirements

The following list provides the key A&AI requirements.

- Provide accurate and timely views of Resource, Service, and Product Inventory and their relationship to the customer's subscription.
- Deliver topologies and graphs.
- Maintain relationships to other key entities (e.g., location) as well as non-D2 inventory.
- Maintain the state of active, available and assigned inventory within ECOMP.
- Allow introduction of new types of Resources, Services, and Products without a software development cycle (i.e., be metadata driven).
- Be easily accessible and consumable by internal and external clients.
- Provide functional APIs that expose invariant services and models to clients.
- Provide highly available and reliable functions and APIs capable of operating as generic cloud workloads that can be placed arbitrarily within the AT&T AIC cloud infrastructure capable of supporting those workloads.
- Scale incrementally as ECOMP volumes and the AIC infrastructure scale.
- Perform to the requirements of clients, with quick response times and high throughput.
- Enable vendor product and technology swap-outs over time, e.g., migration to a new technology for data storage or migration to a new vendor for MSO (Master Service Orchestrator) or Controllers.
- Enable dynamic placement functions to determine which workloads are assigned to specific ECOMP components (i.e., Controllers or VNFs) for optimal performance and utilization efficiency.
- Identify the controllers to be used for any particular request.

10.2 A&AI Functionality

A&AI functionality includes Inventory and Topology Management, Administration, and Reporting & Notification.

Inventory and Topology Management

A&AI federates inventory using a central registry to create the global view of D2 inventory and topology. A&AI receives updates from the various inventory masters distributed throughout the D2 infrastructure, and persists just enough to maintain the global view. As transactions occur within D2, A&AI persists asset attributes and relationships into the federated view based on configurable metadata definitions for each activity that determine what is relevant to the A&AI inventory. A&AI provides standard APIs to enable queries from various clients regarding inventory and topology. Queries can be supported for a specific asset or a collection of assets. The A&AI global view of relationships is necessary for forming aggregate views of detailed inventory across the distributed master data sources within D2.

Administration

A&AI also performs a number of administrative functions. Given the model-driven basis of ECOMP, metadata models for the various catalog items are stored, updated, applied and versioned dynamically as needed, without taking the system down for maintenance. Given the distributed nature of A&AI, as well as its relationships with other ECOMP components, audits are periodically run to assure that A&AI is in sync with the inventory masters such as controllers and MSO. Adapters allow A&AI to interoperate with non-D2 systems as well as 3rd-party cloud providers via evolving cloud standards.

Reporting and Notification

Consistent with other ECOMP applications, A&AI produces canned and ad-hoc reports, integrates with the ECOMP dashboards, publishes notifications that other ECOMP components can subscribe to, and performs logging consistent with configurable framework constraints.

11. Business Support Systems Require a Pivot to Take Advantage of D2

D2-based Offers/Products will be designed, created, deployed and managed in near real-time, rather than requiring software development cycles. ECOMP is the framework that provides the service creation and operational management of D2, which enables significant reductions in the time and cost required to develop, deploy, operate and retire AT&T's products, services and networks. The Business Support Systems that care for such capabilities as sales, ordering, and billing will interact with ECOMP in the D2 architecture, and those systems will need to pivot to this new paradigm.

While BSSs exist for today's network, they will need to change in order to work and integrate with ECOMP. These changes will need to be made with the assumption that the BSSs must also support existing products (perhaps in a new format) and enable quick Time to Market (TTM) for new product creation & configuration on the D2 network.

The BSS transformation to support the dynamic D2 environment is based on the following:

- Building blocks: BSS migration from monolithic systems to a platform building-block architecture that can enable upstream User Experience (UX) changes in how products are sold, ordered, and billed.
- Catalog driven: BSSs will become catalog driven to enable agility and quick time to market, and to reduce Technology Development costs.
- Improved data repositories: support accuracy and improved access to dynamically changing data (e.g., customer subscriptions).
- Real-time APIs: BSS platforms must expose functionality via real-time APIs to improve flexibility and reduce cycle time.
- New Usage Data & Network Events: BSSs that have a need to know about network events (e.g., billing) will be re-tooled to support new information from DCAE and A&AI.
- Expose BSS functions: provide BSS functions directly to our customers to streamline processes and allow new distribution channels.

11.1 BSS Scope

Figure 13 shows the BSS scope, which includes Customer Management, Sales & Marketing, Order Lifecycle Management, Usage and Event Management, Billing, Customer Finance, User Experience, and End-to-End BSS Orchestration.
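The catalog-driven pivot described above can be sketched as BSS behavior derived from distributed catalog metadata rather than hard-coded logic: rating and orderability come from data, so a product can change without a software release. The catalog fields and product names below are illustrative assumptions, not the ASDC schema.

```python
# Hypothetical slice of a catalog entry as it might be distributed to BSSs
# as metadata; the BSS reads behavior from data instead of compiled-in rules.
catalog = {
    "product:vpn-gold": {
        "billing": {"model": "usage", "rate_per_gb": 0.05},
        "orderable": True,
    },
    "product:legacy-dsl": {
        "billing": {"model": "flat", "monthly": 30.0},
        "orderable": False,   # retired by a metadata change, not a release
    },
}

def monthly_charge(product_id, gb_used):
    """Rate a product purely from catalog metadata."""
    billing = catalog[product_id]["billing"]
    if billing["model"] == "usage":
        return round(billing["rate_per_gb"] * gb_used, 2)
    return billing["monthly"]

def can_order(product_id):
    """Orderability is also a catalog attribute, not code."""
    return catalog[product_id]["orderable"]

print(monthly_charge("product:vpn-gold", 200))   # usage-rated from metadata
print(can_order("product:legacy-dsl"))
```

Introducing a new offer then amounts to distributing a new catalog entry, which is the agility the catalog-driven bullet above is pointing at.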

Figure 13: BSS Scope

The BSS scope includes the following areas.

Customer Management: focuses on customer information, retention of the customer, insight about the customer, managing customer service level agreements and building customer loyalty. Key new consolidated data stores of customer information for D2 are the customer profile, customer subscription, and customer interaction history.

Sales & Marketing: provides all of the capabilities necessary to attract customers to the products and services AT&T offers, create solutions that meet customers' specific needs, and contract to deliver specific services. Sales & Marketing hand off contracts to billing for implementation, and solutions to ordering for service provisioning.

Order Lifecycle Management: provides the capabilities necessary to support the end-to-end handling of a customer's order. Orders may be for new service, or may be for moves, changes, or cancellation of an existing service. The experience of ordering will change in D2, as customers will experience provisioning in near real-time.

Usage and Event Management: focuses on end-to-end management of D2 usage and events, including transforming from the traditional vertically oriented architecture to a decomposed, function-based architecture. This will drive common collection, mediation, distribution, controls and error processing. The scope of this includes usage and events that are required for real-time rating and balance management, offline charging, configuration events, customer notifications, etc.

Billing: focuses on providing the necessary functionality to manage billing accounts, calculate charges, and perform rating, as well as format and render bills. Billing will evolve under D2 to become more real-time, and to be decomposed into modules that can be individually accessed via configurable, reusable APIs.

Customer Finance: manages the customer's financial activities. It includes the Accounts Receivable Management Functions, Credit & Collections Functions, and Journals and Reporting Functions, as well as bill inquiries, including billing disputes and any resulting adjustments. As in the case of Billing, Customer Finance will evolve under D2 to expose more API-based components that are reusable across platforms.

User Experience: provides a single presentation platform to internal and external users, each of whom receives a customized view based on role. The user experience is divided into the self-service view for external customers, the B2B API Gateway for direct use of AT&T APIs by external applications, and the internal view for AT&T sales agents, care center agents and other internal users.

End-to-End BSS Orchestration: These functions identify and manage the business-level processes and events required to enable a customer request and the interactions between domains. They trigger activities across domains and manage status, and provide a tool for Care to holistically manage the end-to-end request, as well as the customer relationship, for the duration of the enablement/activation of the request.

11.2 BSS Interaction with ECOMP Components

BSS interaction with multiple ECOMP Components is critical to streamline the introduction of new offers, products and services.

AT&T Service Design and Creation - Offer/Product Creation

AT&T Service Design and Creation (ASDC) is key to the BSS agility we require for D2. ASDC Offer/Product models, recipes/templates and policies will be expressed as metadata that is distributed to BSS systems for real-time use or consumption in the execution of their work. BSSs will have to be significantly retooled to enable them to be configured by the information contained in ASDC. This will enable Customers to configure their own products via the self-service portal.

The ASDC platform contains the Master Reference Catalog, which describes D2 assets in terms of a hierarchical Offer, Product, Service and Resource model. Only the Offer and Product Models are sent to the BSS layer. Products are created from Services by Product Designers, and then Product Managers make products and offers sellable by adding descriptors containing pricing, discounting and promotion specifications. These design-time relationships will be through the Offer and Product levels of the ASDC catalog. Each offer and product in the Model has a description (profile) with a recipe (model) containing the processes, policies & rules that are distributed and stored locally in each BSS.

Each ASDC level has associated definitions, processes, and policies for management and execution. BSSs will integrate seamlessly with ECOMP via these shared definitions and models to radically improve time-to-market for new services, products, and offers. This will also enable Customers to configure their own products via the Customer Product Composer UI by
chaining products together from the same product definitions the Product Designer would use. These interactions will be through the ASDC catalog Offer and Product levels. ASDC distributes Offer and Product models to BSSs, and distributes Service and Resource models to ECOMP components.

MSO (Master Service Orchestrator)

The Master Service Orchestrator's (MSO's) primary function is the automation of end-to-end service instance provisioning activities. The MSO provides the BSS systems an interface to orchestrate delivery of D2 services.

BSS End-to-End Orchestration will decompose the customer request into D2 and non-D2 services based on rules from the product catalog. The BSS End-to-End Orchestration triggers the MSO to initiate ECOMP activity, and the MSO manages the provisioning/network activity by interacting with Controllers as required. The MSO sends the business-level status back to the BSS Orchestration. The BSS Orchestration does not maintain provisioning logic nor manage provisioning processes.

Data Collection, Analytics and Events (DCAE)

DCAE provides the infrastructure for collection of autonomous events from the network and other D2 components, making them available to subscribing applications, including the BSS applications.

The DCAE environment forwards usage and other information that can be used by the BSS to generate billable events, by Usage Event Management for collecting D2 usage, and for other events and records. Based on the timing requirements provided by Usage Management and other applications (e.g., seconds or minutes vs. hours vs. days), the BSSs will obtain the data from DCAE distribution channels or from the Data Lake. For example, the Billing function in a BSS can support near real-time balance management by receiving streaming analytics from DCAE.

Usage and Event Management BSSs can be created as applications on top of the DCAE environment, as well as applications outside DCAE. The allocation of these applications will be determined as use cases are further developed and the associated allocation of functions is delineated. Usage and Event Management applications will collect customer events and perform mediation of the usage/events to downstream BSSs/OSSs. Similarly, BSSs can collect network events such as bill-impacting configuration changes, consumption, or any new bill-impacting network product or service. BSSs use the data for rating, balance management, charge calculations, etc.

ECOMP data sources will provide data to the BSS for Customer SLA credit processing. SLA data sources will collect the information needed to perform Customer SLA Management. The ECOMP data sources are as follows: DCAE (network measurement), ECOMP Orchestration (service orchestration and activation), Policy (rules/violations), and ASDC (SLA definitions and data templates for products). Note that both the ECOMP data sources and the BSS will need to be informed about the fields to be gathered for those SLAs by the product definition in ASDC. The actual determination of whether an SLA has been violated, the calculation of credits, and the application of those credits to the customer bill will be done by the BSS, not by the ECOMP data sources. However, if an elastic response is required as an SLA violation is being approached, the service may require an action by ECOMP to monitor and respond to a defined policy.

Active & Available Inventory (A&AI)

Customer Information Management (CIM) / Customer Subscription Management maintains a view of items purchased or in the process of being purchased by a customer. Customer Subscription entitlements are created by service designers in advance, then bound to Product/Service/Resource inventory instances in ECOMP's Active & Available Inventory (A&AI) at the time they are provisioned. Non-virtual items may still be related to one another in static, infrequently changing relationships. Customer Subscription will be able to associate the product instance to the subscription instance, which will enable support of DCAE functions such as routing events about the customer subscription to the BSS. Subscription data contains linkages to the AT&T Service Design & Creation (ASDC) platform's Product Catalog for static attributes (e.g., product name and type) and pricing and adjustment lists. Subscription items will also contain certain configurable parameters that may be changed by the customer.

Data collection performed by the Usage and Event Management applications built on top of the DCAE will require interaction with A&AI. This includes association of all events belonging to a given Product Instance ID. The A&AI view will be used to aggregate all of the events from resource instance to service instance to product instance. These BSS applications will then associate the Product Instance ID to the Customer via the corresponding Customer Product Subscription.

Policy

BSSs interact with Policy at the Product layer when Product Designers set the initial Policy associated with a Product in the Product level of the ASDC Catalog. This can take the form of a minimum Service Level Agreement guaranteeing certain levels of uptime, response and throughput in traditional products, or be set as ranges with associated price levels and left open for the customer to specify later.

The customer purchases a product, which creates an order containing customer subscription information, including the initial product-level parameters, which get passed to ECOMP. These are used by ECOMP to set up the customer service with the underlying service-level policies. The customer can then optimize their network performance, within a product-allowed minimum and maximum range, through the customer portal by setting these parameters themselves, e.g., varying Class of Service or Bandwidth. Other limitations set by policy could be the maximum allowed changes per day, the minimum time period between changes, etc.

While Policy can be created and employed by DCAE at any layer of the execution environment, BSS sets Policy at the Offer/Product/Service level visible to the customer. At the Product level these would be rules that govern the workings of an AT&T Product for a particular customer at a point in time. Policy usage in ECOMP focuses on closed-loop patterns that will keep the Product performing at the Customer's desired level. BSSs will move from using tightly bound service Policy parameters, coded into each application as part of a Technology Development release, to parameters dynamically driven by ASDC.

12. Security

There are two main aspects to security in relation to the ECOMP Platform: security of the platform itself, and the capability to integrate security into the cloud services created and orchestrated by the ECOMP platform. This approach is referred to as security by design.

The enabler for these capabilities within ECOMP is an API-based Security Framework, depicted in Figure 14 below. This illustration also shows how the AT&T security platforms work with ECOMP to provide security functionality. The AT&T Security platform is connected to ECOMP through a set of security APIs. While the security platform is specific to AT&T, the ECOMP framework can be utilized by other security platforms. This framework allows security platforms and applications existing outside of ECOMP to be used for platform security and for security by design for the services ECOMP orchestrates.
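The product-level Policy limits discussed in the Policy subsection above (allowed parameter ranges, a cap on changes per day, a minimum interval between changes) can be sketched as a simple validation step applied to a customer-requested change. The field names and limit values are hypothetical, not an ECOMP policy schema.

```python
from datetime import datetime, timedelta

# Illustrative product policy as it might be distributed from the catalog.
policy = {
    "bandwidth_mbps": {"min": 10, "max": 100},
    "max_changes_per_day": 2,
    "min_seconds_between_changes": 3600,
}

def validate_change(parameter, value, history, now):
    """Check a customer-requested change against the product's policy limits.

    history is the list of prior change timestamps, oldest first.
    """
    bounds = policy[parameter]
    if not bounds["min"] <= value <= bounds["max"]:
        return "rejected: out of allowed range"
    recent = [t for t in history if now - t < timedelta(days=1)]
    if len(recent) >= policy["max_changes_per_day"]:
        return "rejected: too many changes today"
    if history and (now - history[-1]).total_seconds() < policy["min_seconds_between_changes"]:
        return "rejected: changed too recently"
    return "accepted"

now = datetime(2016, 6, 1, 12, 0)
print(validate_change("bandwidth_mbps", 50, [], now))
print(validate_change("bandwidth_mbps", 500, [], now))
print(validate_change("bandwidth_mbps", 50, [now - timedelta(minutes=5)], now))
```

In the architecture described above, checks like these would be driven by policy metadata set by the Product Designer rather than coded into the portal, so the limits can change without a release.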

Figure 14: ECOMP Platform Decomposition

Security of the platform begins with a strong foundation of security requirements and following security best practices as an inherent part of the ECOMP design. Some examples include:

- deployment of the platform on a secure physical and network infrastructure
- adherence to secure coding best practices
- security analysis of source code
- vulnerability scanning
- a defined vulnerability patching process

Building upon this foundation, external security platforms that provide additional security capabilities, such as identity and access management, micro-perimeter controls and security event analysis, are integrated onto the platform through advantageous use of the ECOMP Security Framework. The additional security these external platforms provide is described below.

Security modules such as the Identity and Access Management (IAM) platform provide critical security capabilities to the ECOMP solution. Access management enhancements deliver preventive and detective access controls for the ECOMP portal and related front ends. Options for fine-grained authorization capability also exist. For identity lifecycle management, this platform provides user provisioning, access request, approval and review capabilities, and is designed to minimize administrative burden.

Internal to AT&T, security such as micro-perimeter controls can be provided by Astra, the AT&T-developed, innovative and award-winning (ISE Northeast Project Award Winner, 2015) cloud security platform; this platform enables continuous protection for the AT&T Integrated Cloud (AIC). The Astra security ecosystem and framework allows virtual security protections to be enabled effortlessly via APIs and automated intelligent provisioning, creating micro-perimeters around the platform and applications. Astra enables security function virtualization as well as dynamic real-time security controls in response to the ever-evolving threat landscape. For example, based on security analytics using big data intelligence, Astra enables virtual security functions on demand, leveraging our SDN-enabled network, to dynamically mitigate security threats.

Security event analysis, provided by a security analytics platform, will use the ECOMP DCAE data collection and analytics engine to gather VNF data, network data, logs and events. Once the security analysis has determined that a security event has occurred, a pre-determined policy can be invoked via the ECOMP platform. The ability to respond automatically to a security-related event, such as a Distributed Denial of Service (DDoS) attack, will enable closed-loop security controls, such as modifying firewall rules or updating Intrusion Prevention System (IPS) signatures. In the event that a pre-determined policy has not been created for an event, it will be sent to a ticket system, and then a new policy can be generated for the next time that event occurs.

The ECOMP platform also enables security by design for services it orchestrates by engaging a security trust model and engine. This begins with validation of security characteristics of resources as part of the ASDC resource certification process. This assures service designers are using resource modules that have accounted for security. Using the ECOMP security framework to access an external security engine, additional security logic can be applied and enforced during service creation.

ECOMP is a platform for many types of services. Because of its inherent security, it is also a powerful means to provide security as a service. In many ways, security services are similar to other services; however, even more so than other services, security services must be provided via a platform/infrastructure that is inherently secure.

Many types of security services can be offered, spanning access control, authentication, authorization, compliance monitoring, logging, threat analysis and management, etc. Management of vFW (virtual Firewall) capabilities can be described to illustrate this opportunity. For example, when a customer has a need for firewall capability, the customer provides the needed information via the portal to enable ECOMP to determine and orchestrate the firewall placement. In addition, the firewall capabilities (e.g., rules, layer 7 firewall) are instantiated at the appropriate locations within the architecture. If necessary, many security controls and technologies, including firewalls, URL blocking, etc., can be service-chained to provide all the needed functionality. As part of an overall security architecture, the log data from the firewalls can be captured by DCAE and used by the threat management application to perform security analytics. Should a threat be detected, various mitigation steps can be taken, such as altering IPS settings, changing routing, or deploying more resources to better absorb an attack. This can be achieved by Astra working with ECOMP to deploy the appropriate updates across the infrastructure, thereby minimizing the service interruption due to the security threat.
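The closed-loop response described above — detect a security event from DCAE-gathered telemetry, invoke the pre-determined policy if one exists, apply its mitigations (e.g., firewall rule or IPS signature updates), and fall back to a ticket when no policy exists — can be sketched as a simple dispatch loop. This is a minimal illustration only; the event names, policy table and action strings are hypothetical, not actual ECOMP or DCAE interfaces.

```python
# Pre-determined policies: event type -> ordered list of mitigation actions.
# These names are illustrative placeholders, not real ECOMP policy recipes.
POLICIES = {
    "ddos_detected": ["update_firewall_rules", "update_ips_signatures"],
    "port_scan": ["update_firewall_rules"],
}

def open_ticket(event_type: str) -> str:
    """Fallback path: no policy exists, so a ticket is raised; a new
    policy can later be authored for the next occurrence of the event."""
    return f"ticket opened for unhandled event '{event_type}'"

def handle_security_event(event_type: str) -> list[str]:
    """Closed-loop dispatch: invoke the pre-determined policy if one
    exists, otherwise escalate to the ticketing system."""
    actions = POLICIES.get(event_type)
    if actions is None:
        return [open_ticket(event_type)]
    # In a real platform each action would be an orchestrated change
    # (firewall rule push, IPS signature update, routing change, ...).
    return [f"executed {a}" for a in actions]
```

Under this sketch, `handle_security_event("ddos_detected")` yields both mitigation actions, while an event type with no pre-determined policy produces only a ticket, mirroring the fallback behavior described above.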
13. Today and Tomorrow

In 2015, AT&T successfully deployed 74 AIC nodes (exceeding its target of 69) and surpassed its goal of virtualizing 5% of the target network load. These achievements in virtualization were largely made possible by the delivery of various ECOMP Platform components in support of early D2 projects such as Network Function on Demand (NFoD), Hosted BVoIP platform consolidation, Mobility Call Recording (MCR), Virtual Universal Service Platforms (vUSP) and IP Interconnect. Initial ECOMP Platform capabilities were released at a component level, with the operational components of A&AI, Infrastructure Controller, MSO, Network Controller, DCAE, and Portal seeing multiple releases. The design components of ASDC and Policy had initial releases centered upon basic metadata-driven capabilities, with a plan to rapidly build out the more sophisticated modeling framework, including complex policy creation, distribution and enforcement.

Much of ECOMP's success can be attributed to the combined use of agile development methodologies and a holistic architecture approach. The end state consists of multiple policy-driven control loops with template/pattern/recipe-driven application flows, resulting in a dynamically controlled D2 environment. This environment cannot be directly managed by human operators and requires the support of intelligent automation constructed using a DevOps approach, which synergizes the combined expertise of software experts, network experts, and operations SMEs. Incremental agile delivery and DevOps are transforming AT&T's technical culture in lock step with overall D2 evolution and are corporately recognized as key success factors.

In the near future, ECOMP will be providing open platform capabilities via the ECOMP Design Framework that enables 3rd parties to make use of, integrate with, create and enhance capabilities of D2 functions or services. Key functions will be exposed via open APIs which align to industry/AT&T standards and are supported by an open and extensible information/data model. A success factor is that applications/services and application/service components are programmable by AT&T (and users in many cases) via policy and rules to eliminate/minimize the need for per-service developments. Services are expected to move through a lightweight development process where completion of the software effort is in an AIC sandbox environment which can be directly toggled into production, ready for fully automated deployment and lifecycle management.

To expedite the delivery of Platform components, AT&T has a preference towards open source software and industry standards. AT&T will engage with D2 Suppliers for co-development and off-the-shelf software when appropriate.

14. Summary

ECOMP is an open software platform, capable of supporting any business domain. It is designed and built for real-time workloads at carrier scale. It is currently optimized to deliver an open management platform for defining, operating and managing products and services based upon virtualized network and infrastructure resources and software applications. As an open software platform, it includes components enabling a reduced time to market via the rapid introduction of new functions using dynamic definitions and implementations based on standard metadata and policies, all managed using a visual design studio. As a network management platform, it includes a framework, consistent across cloud infrastructure, applications and network management components, that enables rapid introduction of compute, memory, and storage resources used to dynamically instantiate, operate and lifecycle-manage services, virtualized network functions, applications and smart cloud infrastructure. The network management platform generates value by enabling the virtualization of network functions via automation of the definition, delivery and management of virtualized functions, and by the dynamic shaping and placement of infrastructure-agnostic workloads. Value is further enhanced by enabling interoperability across 3rd party cloud infrastructures and between virtualized networks.

ECOMP is critical in achieving AT&T's D2 imperatives: increase the value of our network to customers by rapidly on-boarding new services (created by AT&T or 3rd parties), reduce CapEx and OpEx, and provide Operations efficiencies. It delivers enhanced
customer experience by allowing them, in near real-time, to reconfigure their network, services, and capacity. ECOMP enables network agility and elasticity, and improves Time-to-Market/Revenue/Scale via ASDC visual modeling and design. ASDC utilizes catalog-driven visual modeling and design that allows the quick on-boarding of new services, reducing cycle time from many months to a few days, and facilitates new business models and associated monetization paradigms. ECOMP provides a Policy-driven operational management framework for security, performance and reliability/resiliency, utilizing a metadata-driven repeating pattern at each layer in the architecture. This approach dramatically improves reliability and resiliency as well as operational flexibility and speed. ECOMP reduces CapEx through a closed-loop automation approach that provides dynamic capacity and consistent failure management when and where it is needed. This closed-loop automation approach reduces the capital cost of spare capacity deployed today for worst-case failover scenarios. Managing shared resources across various network and service functions is enabled by aggregating capacity, thus maximizing overall CapEx utilization. ECOMP facilitates OpEx efficiency through the real-time automation of service/network delivery and lifecycle management provided by the OMF framework and application components. The nexus of these capabilities is a family of deterministic control loops driven by ASDC and Policy recipes that eliminates many of the manual and long-running processes performed via traditional OSSs (e.g., break-fix largely moves to a plan-and-build function). AT&T achieves economy of scale by using ECOMP as a single platform that manages a shared AIC infrastructure, and provides operational efficiency by focusing Network and Service Management control, automation and visualization capabilities on managing scale and virtualized application performance and availability.

The ECOMP platform provides external applications (OSS/BSS, customer apps, and 3rd party integrations) with secured, RESTful API access to ECOMP services, events, and data via AT&T gateways. In the near future, ECOMP will be available in AT&T's D2 Incubation & Certification Environment (ICE), making ECOMP APIs available to allow vendors, cloud providers, and other 3rd parties to develop solutions using the ECOMP and AIC reference architecture (current and future-looking).

For Further Information

To provide technical feedback on the whitepaper, or to express interest in driving this initiative forward, write to us at ecomp-feedback@research.att.com. For ECOMP supply chain questions, please visit https://www.attsuppliers.com/domain2.asp.

This document presents information about the current plans and considerations for AT&T's ECOMP
System. This information is subject to change without notice to you. Neither the furnishing of this document
to you nor any information contained in this document is or should be interpreted by you as a legal
representation, express or implied warranty, agreement, or commitment by AT&T concerning (1) any
information or subjects contained in or referenced by this document, or (2) the furnishing of any products or
services by AT&T to you, or (3) the purchase of any products or services by AT&T from you, or (4) any other
topic or subject. AT&T owns intellectual property relating to the information presented in this
document. Notwithstanding anything in this document to the contrary, no rights or licenses in or to this or
any other AT&T intellectual property are granted, either expressly or impliedly, either by this document or
the furnishing of this document to you or anyone else. Rights to AT&T intellectual property may be obtained
only by express written agreement with AT&T, signed by AT&T's Chief Technology Officer (CTO) or the CTO's
authorized designate.
