
The Infinity Paradigm®

The Standards Framework
For The Application Ecosystem™

Author & Editor:
Mehdi Paryavi, DCA®
Founder & Chairman
International Data Center Authority

Co-Editors:
Steve Hambruch, DCA® | Dikran Kechichian, DCA®

AE360®
Copyright 2017 International Data Center Authority



Acknowledgements

The author wishes to extend his gratitude and recognition to all those whose continuous
feedback and support have been the bedrock of our commitments to the industry:

IDCA Board, Technical Standards Committee and Advisory Panel:


Steve Hambruch, DCA®, Chairman, Technical Standards Committee
Richard Zanatta, DCIE® | DCOM®, Former: Information Technology Chief at White House Communications Agency and Director of Communications and Facilities at U.S. House
Rick Collins, DCIE®, Sr. Director, Global Data Center Operations, AIG
Eric Wendorf, DCOM®, Staff Engineer, LinkedIn
Dikran Kechichian, DCA®, Chief Consultant and Auditor, TechXact Group Corporation
Timothy Grant, DCE® | DCM®, Vice President, Technical Facilities, MLB Advanced Media
Marchelle Adams, DCOM®, Architecture Branch Chief, Cloud Technology & Hosting Office, U.S. Courts
Boris Privalikhin, DCIE® | DCOM®, Data Center Manager, Time Warner Cable
Zouhir Almai, DCA®, Executive Director, TechXact Group Corporation
Michelle S. Walker, DCIE® | DCOM®, Director, Systems & Integrations Office, U.S. Department of State
Dr. Tarique Mustafa, CEO & CTO, Ghangor Cloud
David Shaw, Former: Global Director at Perot Systems, SVP & COO at IO Data Centers, Managing Director of North America at Uptime Institute, Director of IT Systems at HonorHealth
Joseph Lance Walker, DCE® | DCM®, Data Center Program Manager, CHRISTUS Health
Jonathan Perrigo, IT Security Manager/Architect, Infrastructure, Pittsburgh Glass Works
Steve Acosta, Senior Manager, Applications Administration, Martin Marietta
DeWayne Moore, DCOM®, Lead Systems Architect, VISA
Mark Thiele, Chief Strategic Officer, Apcera, and Former EVP at Switch
Ben Calvert, DCOM®, CISO, SLAC National Accelerator Laboratory at Stanford
Carlos Limon, Data Center Director, NBC Universal
Eric Flaum, Head of Data Center Floor Management, HSBC North America
Carlos Tavares, DCE® | DCM®, Board Member and Strategy Council, IDCA
Christine Hoerbinger, Board Member, IDCA
Richard Bowie, Board Member, Programs Development Council, IDCA
Ashley Sharif, DCOM®, Board Member and Legal Counsel, IDCA
Brian Branson, Board Member, IDCA, Former CEO at Global Knowledge

Fervent Industry Members:


Robert J. Cassiliano, Chairman & CEO, 7x24 Exchange International
John Kacperski, DCIE®, Chairman of BICSI International Standards Committee
Robert Ayr, DCIE®, Data Center Director, AIG
Reuben Toll, DCIE®, Sr. Manager, Critical Facilities Engineering, T-Mobile
David Schirmacher, President, 7x24 Exchange International
Greg Crumpton, Vice President of Critical Environments & Technologist, Service Logic
Dwayne Wilson, Vice President of Data Centers, Salesforce
Lynn Gibson, CTO/Vice President, CHRISTUS Health
Christopher Neil, DCOM®, Director of Data Centers, Expedia
Solomon Edun, DCE® | DCM®, Technical Director, IDCA
Alan Thomas, DCE® | DCM®, IDCA Instructor
Ali Al-Lawati, DCE® | DCM®, Former: Head of IM&T Projects Delivery at PDO
Dan Boling, DCOM®, Chief Technology Architect, Ginnie Mae
Brett Kirby, DCIE® | DCOM®, Director, Power & Tel
Steve Geffin, DCE®, Vice President, Strategic Initiatives, Vertiv Co.
Robert Long, DCE® | DCM®, IDCA Instructor



Table of Contents
The Infinity Paradigm®
Acknowledgements
Table of Contents
Executive Summary
Preface
The Problem: Legacy Principles, Modern Requirements
The Solution: The Infinity Paradigm®
    Application Layer
    Platform Layer
    Compute Layer
    IT Infrastructure (ITI) Layer
    Site Facilities Infrastructure (SFI) Layer
    Site Layer
    Topology Layer
Grade Levels (Gs)
    Grade Level 4
    Grade Level 3
    Grade Level 2
    Grade Level 1
    Grade Level 0
Efficacies
Efficacy Ratings
    Availability Efficacy Rating
    Capacity Efficacy Rating
    Operations Efficacy Rating
    Security Efficacy Rating
    Resilience Efficacy Rating
    Efficiency Efficacy Rating
    Innovation Efficacy Rating
Efficacy Score Rating
The Infinity Paradigm® Grading Model
The Infinity Paradigm® Evaluation Model
Controls
    Examples of Infinity Paradigm® Controls
Sub-Controls
Controls Evaluation
    Qualitative Assessment
    Linear Value Assessment
    Binary Assessment
Normalizing Evaluation Outcomes
Assigning Controls to Efficacies
Calculating Evaluation Outcomes and Efficacy Ratings for Controls
Calculating the Overall Efficacy Scores for a Layer
Calculating the Grade Level (G) for a Layer
Efficacy Weighting
Grade Level Calculation Formula with Efficacy Weighting
Calculating the Efficacy Score Rating (ESR®)
Using the ESR® Score
The Infinity Paradigm® in Action
    Case Study 1: FinanceCorp, Inc.
    Case Summary
    Case Study 2: Federal Government Agency
    Case Summary
Infinity Paradigm® Benefits
    Comprehensive & Cloud-inclusive
    Operations Conducive
    Efficiency and Innovation Driven
    International, Yet Localized
    Effective & Application-Centric
    Vendor Neutrality
    Enabling Service Provider Integration Strategies
    Open Community Effort
Conclusion
Table of Illustrations



Executive Summary

Business owners and technology stakeholders have never had a systematic, dynamic
mechanism that provides a holistic, end-result-driven evaluation of the effectiveness of the
vast and disparate array of information, applications, resources, technologies, processes,
personnel, infrastructure, documentation, standards, and governance that together, despite
their inherent complexity, form the ecosystem that powers their business. Nor have they had
a common language allowing them to interact effectively with their executive leadership and
peers, as well as with industry stakeholders, vendors, and service providers, in a way that
universally assures projected business results, until now. The Infinity Paradigm® empowers
organizations to truly synchronize technology strategies, designs, plans, implementations,
and operations with business strategy, and to optimize business strategy with accurate data
indicating the true capabilities, vulnerabilities, and competencies of every layer of the
Application Ecosystem™.

Preface

Until recently, a data center was most commonly understood to be a single facility located at
a specific site. IDCA’s Infinity Paradigm® considers this definition to be completely
inadequate, and thus has redefined it to measure up to its modern task. The function of the
data center is to serve the application, and today’s business application requirements
usually demand that it be delivered across multiple sites and facilities. Whereas its
predecessors tend to focus on a narrow (micro) view of a single element, the Infinity
Paradigm® provides stakeholders with both big-picture (macro) and detailed (micro) views
of their entire Application Ecosystem™ stack.

Figure 1: The Application Ecosystem



The rapid evolution in the areas of information technology, cloud, data center facilities, IoT,
big data, cyber security, business sustainability and national security has revealed and
highlighted unresolved infrastructure issues, and shortcomings in the practices and the
ideological principles of the past.

To this end, key questions have remained unanswered: How do we address the ever-
growing challenges with data center cooling, ensure power efficacy, and be redundant in
components and paths while eliminating redundant costs? How can we obtain the
necessary resilience, remain efficient, and achieve security and safety at the same time?
How do we align a staff of platform, network, storage, security, electrical and mechanical
engineering professionals to serve a single common goal? How do we deploy application
delivery models that actually meet our unique business and application needs, as opposed
to dealing with “one size fits all” solutions?

A data center is not simply a room that contains computer and network equipment. If it were,
then every computer vendor’s warehouse would be a data center. The purpose of a data
center is to deliver applications and data to end users in accordance with specific and
unique business requirements. Therefore, the physical infrastructure must work together as
a system to create logical services in furtherance of application delivery. Even the most
basic Application Ecosystems have a myriad of interdependent components and innate
complexities that cannot be properly quantified simply by analyzing their individual
components or operational aspects in isolation. When two disparate components must
interoperate to support the Application, a third measurable is introduced into consideration.
There are observable characteristics of each of the two components individually, of course,
but now there are also observable characteristics of the logical behavior of the system those
two components create via their interdependency. That logical system is a distinct entity,
separate from the components of which it is composed, and as such it may have its own
efficacies, performance requirements, security requirements, controls, etc.

The problem has never been about advising individuals how to design or operate electrical
or mechanical infrastructure, nor has it ever been about configuring switches, routers and
storage devices. The problem has always been about making individuals and institutions
understand how to design, deploy and operate such infrastructure cohesively to facilitate
data and application delivery according to business requirements. Therefore, developing a
collective state of mind, whereby all practices and resources are driven by common
business requirements, is a fundamental necessity of the modern era.

The Problem: Legacy Principles, Modern Requirements

How do we address the dichotomies we are all faced with - operations vs. infrastructure,
distribution vs. consolidation, CapEx vs. OpEx, energy efficiency vs. resilience, security vs.
complexity, and more - all while staying on budget? How do we address the non-technical
trends that may impact our organization, such as legislation and governance, geopolitics,
climate or sustainability trends, and other diverse and conflicting priorities?

There is a pressing need for a universal language and understanding amongst all stakeholders
in the data center industry. This need can present itself in the fine details, such as the lack
of an industry-wide color-coding scheme for cabling and piping, which increases the
probability of human-error-related outages as workers move from one company to another. It



can also present itself on the broader strategic level, where service providers and their
customers have no common basis for quantifying and qualifying performance requirements
and capabilities.

According to legacy industry guidelines and commonly employed practices, the design
requirements for the data center facilities of factories and airports are exactly the same.
However, it is easy to recognize that each of these institutions has a vastly different
mission, and therefore different operational requirements. Yet the prevailing guidance is so
blind to this as to suggest that high availability cannot be achieved for a factory's data
center if it is located too close to an airport, and vice versa. This presents a conundrum:
if a factory's data center cannot achieve high availability because it is too close to an
airport, how can an airport's data center, which is located at the airport, achieve high
availability? We need mission-specific guidance that factors in the unique operational
requirements of specific industries and organizations.

Furthermore, organizations famously suffer from a disconnect between IT and facilities,
often resulting in critical and costly misalignments. They also lack any clear guidance on
how the use of cloud providers or cloud technologies impacts their overall application
performance.

These challenges are compounded by a workforce that too often cleaves to “the way we
have always done it”, or conversely, reflexively adopts technologies simply because they are
new and buzzworthy. There is a measurable shortage of skilled workers that understand
both the operational disciplines and technical advancements that must combine to propel an
organization forward in pursuit of a focused objective. This shortage is aggravated by the
speed of technological evolution, making it very difficult for organizations to keep their staff
trained and up to date on even a fraction of the disciplines that they are in charge of, while
simultaneously maintaining the rigors of daily operations.

The Solution: The Infinity Paradigm®

Established by the International Data Center Authority, the Infinity Paradigm® is the
fundamental framework for organizations to reduce business risk, and to optimize and align
performance with defined business requirements. It is the framework for visualizing and
analyzing an entire information technology system as a holistic and dynamic model, in a
manner that qualitatively and quantitatively illustrates the myriad interdependencies of its
various supporting components. This model is called the Application Ecosystem™, and it is
comprised of various physical layers of technical infrastructure as well as conceptual or
logical layers derived from those physical layers.

These layers are commonly depicted in a pyramid fashion to illustrate the concept of the
interdependent infrastructure stack. The layers include the Topology (T) Layer at the
foundation of the pyramid, followed by the Site layer, the Site Facilities Infrastructure (SFI)
layer, the Information Technology Infrastructure (ITI) layer, the Compute (C) layer, the
Platform (P) layer, and finally the Application (A) layer.



Figure 2: Application Ecosystem Layers

Each layer, except for the Topology layer, can be comprised of multiple discrete instances of
infrastructure working in concert to support the layers above. These instances may have a
one-to-one, one-to-many, or many-to-many relationship with the instances of infrastructure
in the layers below.

Application Layer

The Application Layer is a set of software services that fulfill organizational
requirements. As an example, an ERP application is composed of accounting, payroll,
inventory, and asset management software services.

Platform Layer

The Platform Layer represents the methodology by which the application is delivered.
Common delivery platforms are Business as a Service (BaaS), Software as a Service (SaaS),
Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Nothing as a Service
(NaaS).

Compute Layer

The Compute Layer is a logical layer where the processing requirements of the application
are defined in abstract form. The compute cloud subsequently maps to actual virtual or
physical cores in the IT Infrastructure layer.



IT Infrastructure (ITI) Layer

The ITI Layer is a set of physical information technology instances such as network
infrastructure, servers, storage, and other IT infrastructure utilized to support
application delivery.

Site Facilities Infrastructure (SFI) Layer

The SFI Layer is the collection of facilities infrastructure components that support a
Data Center Node (DCN). The SFI layer includes the power, cooling, and other
infrastructure and related services that a data center node requires to meet the
business requirements.

Site Layer

A Site is an independently operated space to house instances of site facilities and IT
infrastructure. The Site layer involves land, building, utilities, civil infrastructure, etc.

Topology Layer

The Topology Layer specifies the physical locations, interconnectivity, and interrelation of
data center nodes. The Topology is the ultimate global map of data center nodes, depicting
the physical distribution and layout of the infrastructure that supports the business
application.

In addition to these layers, several other defined logical constructs, including but not limited
to the following, are part of the Infinity Paradigm® concept as well:

• Data Center (DC): A data center is the infrastructure supporting the Application
Ecosystem (AE).
• Data Center Node (DCN): A data center node is a combination of Site, SFI, and ITI
infrastructure supporting the AE at a specific topology node.
• Data Center Cloud (DCC): A data center cloud is a set of data center nodes (DCNs)
arranged in a specific Topology.
• Application Delivery Model (ADM): An ADM is the platform or combination of different
platforms used to support the Application.
• Application Delivery Infrastructure (ADI): ADI is the aggregate collection of physical
and logical infrastructure components that support the Application.
• Application Delivery Zone (ADZ): An ADZ describes a discrete logical combination of
compute, platform, and application instances, formed into a logical failure domain.
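A minimal sketch of how these constructs might be modeled in code, assuming Python; the class and field names below are illustrative conventions, not definitions from the standard:

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    """Application Ecosystem layers, foundation (Topology) to top (Application)."""
    TOPOLOGY = 0
    SITE = 1
    SFI = 2          # Site Facilities Infrastructure
    ITI = 3          # IT Infrastructure
    COMPUTE = 4
    PLATFORM = 5
    APPLICATION = 6

@dataclass
class DataCenterNode:
    """DCN: Site, SFI, and ITI infrastructure supporting the AE at one topology node."""
    name: str
    site: str
    sfi_components: list = field(default_factory=list)
    iti_components: list = field(default_factory=list)

@dataclass
class DataCenterCloud:
    """DCC: a set of DCNs arranged in a specific Topology."""
    nodes: list

# Two hypothetical nodes forming a small Data Center Cloud:
dcc = DataCenterCloud(nodes=[
    DataCenterNode("dcn-east", "Site A"),
    DataCenterNode("dcn-west", "Site B"),
])
print(len(dcc.nodes))  # 2
```

Such a model makes the one-to-many relationships between layers explicit: a DCC aggregates DCNs, and each DCN in turn aggregates SFI and ITI instances.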

One important aspect of these layers is the ability to achieve agility, conformity, and
universal comprehension among service providers and end-user organizations, in aligning
the needs of application owners and end-users with the capabilities of service providers and



suppliers. For example, colocation providers offer services in the Site and Site Facilities
Infrastructure layers, managed hosting providers offer services in the Information
Technology Infrastructure layer, and cloud service providers manage everything from the
Compute layer down to the Topology layer. None of these providers directly competes with
the others, because they operate in different layers of the Infinity Paradigm® model and each
serves a defined need. The ability to identify where each provider operates within the
Application Ecosystem™ stack makes vendor selection and management much more
straightforward. Further, this model provides the common language by which service
providers and end-users can agree on and manage performance requirements.

The layer model also facilitates job classification and skills requirements for personnel.
No longer will cloud services, IT, network, cooling, and electrical engineers all be referred
to as "data center engineers," confusing candidates, recruiters, and hiring managers alike.
Instead, job roles can be categorized more accurately by layer: Site Facility Engineer, IT
Infrastructure Engineer, Platform Architect, etc.

Grade Levels (Gs)

Grade Levels, or “Gs” are the method of performance classification within the various
Application Ecosystem™ layers. Gs range from G4 to G0, with G4 representing the
maximum allowable level of design, infrastructure and operational vulnerabilities, such as
probability of failure, security risks, inefficiencies, operational lags, capacity insufficiencies,
and lack of resilience, while G0 essentially mandates total elimination of all such
vulnerabilities. Thus, G0 represents the highest Grade Level that can be achieved at any
given layer of the Application Ecosystem™.

Achieving G0 (Infinity) at any or every layer of the AE is not typically mandated by most
business requirements (only the most demanding mission critical environments would
warrant it), but it is nevertheless a goal to strive for. G0 represents a comprehensive
grasp of excellence in the areas of Availability, Capacity, Operations, Resilience, Safety &
Security, Efficiency, and Innovation. While an Application Ecosystem™ layer that qualifies
for any G Level has demonstrated at least some fundamental competencies, the closer an
organization is to G0 level grades in their AE Layers, the closer it is to achieving unrivaled
infrastructural and operational superiority.

Further, while Availability is one of the efficacies that are measured using Infinity Paradigm®
Grade Levels, it is important to note that Gs are not simply an expression of availability.
Achieving a certain Grade Level for a particular Layer indicates systemic performance
across a wide range of efficacies, and availability is merely one component of the total
evaluation. Therefore, a Grade Level provides a much more comprehensive evaluation of



actual or predicted performance to both IT/Facilities management and business leadership
than any measurement or Key Performance Indicator (KPI) that evaluates availability alone.

Grade Level 4

Grade Level 4 is the lowest level of achievement that can be attained by any
application, platform, compute, ITI, SFI, Site or Topology layer. G4 represents
the highest acceptable levels of risk, insecurity, inefficiency, vulnerability
and probability of failure.

Grade Level 3

Grade Level 3 is an elementary level of achievement that can be attained by
any application, platform, compute, ITI, SFI, Site or Topology layer. G3
represents above average exposure to risk, insecurity, inefficiency,
vulnerability and probability of failure.

Grade Level 2

Grade Level 2 is an intermediate level of achievement that can be attained by
any application, platform, compute, ITI, SFI, Site or Topology layer. G2
represents an average exposure to risk, insecurity, inefficiency, vulnerability
and probability of failure.

Grade Level 1

Grade Level 1 is an advanced level of achievement that can be attained by
any application, platform, compute, ITI, SFI, Site or Topology layer. G1
represents below average levels of risk, insecurity, inefficiency, vulnerability
and probability of failure.

Grade Level 0

Grade Level 0 is the highest level of achievement that can be attained by the
Application Ecosystem™ layers. G0 represents the lowest levels of risk,
insecurity, inefficiency, vulnerability and probability of failure.
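Assuming a hypothetical normalized performance score in [0, 1], the five grades above could be mapped with a simple threshold lookup. The band boundaries below are invented purely for illustration; the Infinity Paradigm defines its own evaluation formulas and thresholds later in the document:

```python
def grade_level(score: float) -> str:
    """Map a normalized performance score in [0, 1] to a Grade Level.

    The bands used here are hypothetical and only illustrate the ordering
    G4 (lowest achievement) through G0 (highest achievement).
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.95:   # near-total elimination of vulnerabilities
        return "G0"
    if score >= 0.80:   # below-average risk exposure
        return "G1"
    if score >= 0.60:   # average risk exposure
        return "G2"
    if score >= 0.40:   # above-average risk exposure
        return "G3"
    return "G4"         # highest acceptable risk exposure

print(grade_level(0.97))  # G0
print(grade_level(0.50))  # G3
```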

While it may be tempting to make a subjective judgment as to the relative value of each of
these grades, such as concluding that G1 is “better” than G3, the real value of these grades
is how they aggregate across all layers to reflect alignment with business requirements. If
the business requirements dictate that G3 level performance is all that is required, then
investing in infrastructure, services, and staffing to achieve G1 level performance would not
only be unwarranted, but may also add unjustified complexity. Therefore, while designing for
G1 is better than G3 in the abstract because it yields higher performance, it may not be a
better solution for a particular organization that does not require that level of performance.

Additionally, while a business application might require a G1 grade overall, that does not
necessarily mean that the business must invest in a G1 data center node to support that
application. The required G1 level results for a given application might actually be
supported by a Topology that achieves G1 level performance by aggregating multiple less
expensive G2, G3 or even G4 Data Center Nodes.
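The aggregation effect follows from standard parallel-redundancy arithmetic: a topology in which any one node can carry the load fails only if every node fails at once. A sketch with assumed per-node availabilities (the figures are illustrative, not IDCA grade definitions):

```python
def parallel_availability(node_availabilities):
    """Availability of a topology where any single node can carry the load:
    the system is unavailable only when every node is unavailable at once."""
    unavailability = 1.0
    for a in node_availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# Two hypothetical lower-grade nodes, each 99.0% available on its own,
# combine to roughly 99.99% availability at the topology level:
print(parallel_availability([0.99, 0.99]))  # about 0.9999
```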



Efficacies

To understand the interdependencies of the AE layers, and to assess their performance in
support of the AE business requirements, various common criteria have been defined for
which design, implementation and operational guidance is provided, and measurements and
calculations can be performed. These criteria are grouped into categories called “efficacies”,
and these efficacies are common across all layers. Efficacies include Availability, Capacity,
Operations, Resilience, Safety and Security, Efficiency, and Innovation. Thus, a matrix is
created by this conceptual construct, and this matrix is the heart of the Infinity Paradigm’s
Standards Framework. These measurements and calculations are used to derive an
Efficacy Rating for each Efficacy.

An important consideration made apparent by the Infinity Paradigm® is that
proficiency in one efficacy at one layer does not indicate competency for that efficacy across
all layers. For example, having highly redundant site facilities infrastructure in a site may
indicate proficiency for the Availability efficacy for the SFI layer, but that alone does not
indicate overall proficiency for Availability for the entire Application Ecosystem. With rare
exceptions, a deficiency in one layer is difficult or impossible to correct by overbuilding a
different layer. The net effect typically is that the deficiency negates the proficiency gained in
that other layer. So, a Data Center Node that achieves 100% uptime is negated by a flawed
network infrastructure that experiences numerous service disruptions per day.

Additionally, due to inherent interdependencies, proficiency in one efficacy can sometimes
have a negative impact on other efficacies. For example, the more redundant infrastructure
we build to increase availability, the more operationally complex and less energy and space
efficient an ecosystem can become. So an inverse relationship can exist between the
Availability efficacy, and the Operations and Efficiency efficacies. The key is to find the right
balance between them. Similar interdependent relationships can be found between other
efficacies as well.

Efficacy Ratings
Efficacy Ratings are measures of the effectiveness, capability and performance level of the
designs, practices, strategies, methods, resources, systems, and/or components
implemented in any layer, related to each of seven defined efficacies. Measures for these
efficacies are evaluated at the layer level and then aggregated across all layers to provide a
comprehensive measure of the Application Ecosystem® as it relates to the business
requirements for that efficacy.

Availability Efficacy Rating

AER is a figure of merit that quantifiably and qualitatively grades the
probability that the layer or system will be operational at or above the
minimum performance levels defined by the business requirements during
the entirety of its service mission. It evaluates both intrinsic and inherent
availability, as well as the suitability of the design, infrastructure or
operation for meeting targeted performance levels without excess
redundancy.

Capacity Efficacy Rating

CER is a figure of merit that quantifiably grades a layer or system's ability
to support current transactional or computational load requirements, and
also scale to support future growth demand without the need for system
redesign, impacting required redundancy levels, compromising efficiency,
or the ability to support the scalability of dependent layers.

Operations Efficacy Rating

OER is a figure of merit that quantifiably grades the effectiveness of an
organization's ability to plan, design, implement, maintain, and sustainably
manage the complex and interdependent systems, documents, processes,
resources and technologies required to support the business
requirements, including the rigor of its staffing, policies, practices,
methods, protocols, training, analytics, standards, and regulatory
compliance.

Security Efficacy Rating

SER is a figure of merit that quantifiably grades a layer or system's ability
to protect itself; the data it contains; the overall system; the people who
maintain, operate or depend on it; and the organization as a whole from
harm resulting from vulnerability to physical or logical attack; intentional or
accidental misuse; fire, flood, and geological events; maintenance,
modification and repair; or other hazards. This includes all systems, tools,
methods and practices for assuring information security and integrity, as
well as all physical security elements and life-safety measures.

Resilience Efficacy Rating

RER is a business continuity figure of merit that quantifiably grades a
layer, system or process’ designs, functionality, capabilities and potential
for tolerating and computing through infrastructure component failures, as
well as its ability to recover from catastrophic failures or force majeure
events. It encapsulates both the disaster readiness and disaster recovery
functions.

Efficiency Efficacy Rating

EER is a figure of merit that quantifiably grades a layer, system or process’
transactional, computational, environmental, operational, energy, and
financial effectiveness as a function of the resources consumed or likely to
be consumed in production. It illustrates the technical cost of production,
with particular clarity regarding the inverse relationship to increased
capabilities in other efficacies, and also reveals performance in operational
processes and workflows, process improvement, technology development,
and automation initiatives.

Innovation Efficacy Rating

IER is a figure of merit that measures the level of effective creativity in the
selection or development of technologies and the modernizing of
processes, designs, systems, and methodologies that represent
revolutionary or evolutionary steps forward for the information technology
or facility departments, the organization they serve, the organization’s
industry, or the information technology industry as a whole.

Efficacy Score Rating

The Efficacy Score Rating is the most reliable scoring mechanism for
evaluating and comparing the performance of a complex system. It is an
aggregated value derived from an algorithmic evaluation of the results of
each layer’s G grade performance as well as the results of each Efficacy’s
performance across layers. This scoring methodology calculates the
inherent effect of interdependencies and independencies across and
between layers and efficacies, and applies weightings to favor functions that are more
central to supporting the fundamental business requirements.

No other existing KPI provides such a simple, comprehensively informed view of a system
as complex as an Application Ecosystem™. The ultimate purpose of evaluating the layers
and efficacies is to be able to produce an ESR® for the total system. This score provides a
simple, straightforward, quantitative output - a single number - that represents the
Application Ecosystem’s ability to reliably and sustainably meet business requirements.

The Infinity Paradigm® Grading Model

To understand the mathematical models used to derive the various grades and scores,
conceptually arrange the layers and efficacies into a matrix, with the various efficacies on
the X axis, and the various layers on the Y axis. Within this matrix, each cell represents the
grouping of all the controls that govern that specific efficacy for that layer. Values are
assigned to each control based on the level of compliance with the stated objective. The
values assigned are then tallied to provide an overall score for that cell. A subsequent
calculation factoring all the cell scores in a layer (with appropriate weighting) results in the G
score for that layer. Similarly, a subsequent calculation factoring all the cell scores for a
given efficacy renders an Efficacy Rating for that efficacy. Finally, the G scores and
Efficacy ratings are factored into a calculation to determine the overall Efficacy Score
Rating. These controls and calculations will be described in detail in the next section of this
document.

Table 1: The Infinity Paradigm® Grading Matrix

The matrix pairs the seven layers (rows) with the seven efficacies (columns):

Layers (rows): Application | Platform | Compute | Information Technology Infrastructure (ITI) | Site Facilities Infrastructure (SFI) | Site | Topology

Efficacies (columns): Availability | Capacity | Operations | Safety & Security | Resilience | Efficiency | Innovation

Each cell contains the controls governing that efficacy for that layer (e.g., Application Availability Controls, SFI Security Controls). The rightmost column holds each layer’s Aggregate Grade (Gx), and the bottom row holds the Efficacy Scores for each efficacy (AER™, CER™, OER™, SER™, RER™, EER™, IER™), with the ESR® as the overall aggregate.

The Infinity Paradigm® Evaluation Model

The Infinity Paradigm® Evaluation Model enforces rigor and detailed analysis of the entire
Application Ecosystem™ by establishing various controls corresponding to each layer and
efficacy, providing a defined means of quantifying compliance with those controls,
normalizing the KPIs and metrics associated with those controls, and employing defined
mathematical formulas to illustrate the impact of interdependencies and assign Grades.

Controls

An Infinity Paradigm® Control is a measure that provides a reasonable assurance that any
AE component operates as intended, and in compliance with business requirements.

In many cases, these controls are defined by existing, well established, and competent third
party industry standards or guidelines that have relevance to a given topical area, such as
BICSI, ISO, ASHRAE, NFPA, and many others. These existing third party standards are
continuously evaluated by IDCA to determine suitability and applicability within the Infinity
Paradigm® Standards Framework requirements. Wherever gaps exist between existing
standards - or in cases where an existing standard is determined to no longer represent
modern requirements, technologies, or operational realities - IDCA has defined its own
relevant and applicable controls.

In addition to technical or operational criteria, designs, or practices, some controls are based
on regulatory and governance criteria, such as Sarbanes-Oxley, PCI, or FISMA. These
controls reflect the Application Ecosystem’s operational compliance with these regulations,
wherever and whenever they may be applicable.

Since these third party standards have been integrated into the Infinity Paradigm®
Standards Framework, end users who have invested in compliance with those standards
are protected. There is no need to abandon existing standards investments in order to adopt
the Infinity Paradigm®. Indeed, additional benefits can be realized from those
investments as a result of adopting the Infinity Paradigm® Standards Framework.

Examples of Infinity Paradigm® Controls
1. Site Location conforms to DC selection criteria.
2. Network Security is compliant with the relevant policies.
3. A Risk Management process is in place and effective.
4. Utility Feed Architecture and Implementation Criteria is met.

Sub-Controls

A Control is evaluated by checking several elements, structured as sub-controls
associated with that control. For example:

1. Site Location conforms to DC site selection criteria (capacity, hazards, extreme
environment, accessibility, etc.)
1.1 All relevant documents are complete and accessible.
1.2 Capacity is sufficient to support future growth.
1.3 All required utilities are available at site location.
1.4 Historical environmental aspects are validated and transparent.

2. Network Security is compliant with the related policy
2.1 An effective network security policy is formalized and maintained.
2.2 Regular audits evaluating network security policy compliance efforts are
performed to identify gaps.
2.3 Compliance gaps are fully remediated.

3. A Risk Management process is in place and effective
3.1 A managed risk analysis policy is formalized and maintained.
3.2 Risk levels are regularly reported.
3.3 Mitigation / countermeasures are fully analyzed & implemented.
3.4 Compliance with local codes is implemented and routinely evaluated.

Controls Evaluation

Infinity Paradigm® controls are graded based on the level of compliance with their
associated sub-controls. To facilitate such grading, sub-controls are assigned numeric
values. These values range from 0 to 4. Depending on the nature of the control, these
values may represent either a qualitative assessment of the compliance with that control, a
linear value representing the presence of a certain type of component or function, or a
binary value for controls that present a true/false or yes/no style assessment. The resulting
values of these sub-controls are averaged to produce an overall score for the control. In all
evaluation modes, “Not Applicable” is a possible evaluation outcome for any given
sub-control. In such cases, selection of NA will not negatively or positively alter the overall
score for that control.
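This averaging rule can be sketched in a few lines of Python (a minimal illustration; the function name and the use of None to represent NA are our own conventions, not part of the standard):

```python
def control_score(sub_values):
    """Average sub-control evaluation values (0-4 scale), skipping NA.

    NA (represented here as None) neither raises nor lowers the score.
    """
    applicable = [v for v in sub_values if v is not None]
    if not applicable:
        return None  # every sub-control was NA, so the control has no score
    return sum(applicable) / len(applicable)

# Sub-control values 4, 2, NA, 3: the NA is dropped,
# so the control scores (4 + 2 + 3) / 3 = 3.0
print(control_score([4, 2, None, 3]))  # 3.0
```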

Qualitative Assessment

Qualitative Assessments apply to sub-controls where a range of increasing compliance
levels is possible. In such cases, the value does not represent a subjective judgment of
compliance, but rather whether that compliance is reinforced by additional factors.
Qualitative Assessments typically use the following compliance grading scale:

NA = Not Applicable
0 = Not Met
1 = Partially Met
2 = Fully Met
3 = Fully Met & Documented
4 = Fully Met, Documented & Monitored

In this evaluation model, a value of 0 represents 0% compliant, 1 represents 25% compliant,
2 represents 50% compliant, 3 represents 75% compliant, and 4 represents 100%
compliant.

Linear Value Assessment

Linear Value Assessments apply to sub-controls where a range of increasingly robust or
rigorous solutions is possible. They are used where a design, system, or operational choice is
being evaluated, rather than compliance with a pre-defined standard. In such cases, the
value represents the robustness or rigor of the deployed solution, relative to other possible
solutions. Linear Value Assessments will always be unique to the technical domain of the
sub-control being evaluated. A zero score is only an option where the control is
applicable but the complete absence of a solution is a possible choice. An example of such an
assessment, in this case for the Utility Power Feed Class control, would be:

NA = Not Applicable
1 = Class 1: Single Utility Feed – Single Substation
2 = Class 2: Dual Utility Feed – Single Substation
3 = Class 3: Dual Utility Feed – Dual Substation
4 = Class 4: Dual Utility Feed – Fully Redundant

In this evaluation model, the values do not represent a percentage of compliance, since not
all data center nodes are built to the same requirements.

Binary Assessment

Binary Assessments apply to sub-controls where only two outcomes are possible – such as
compliance or non-compliance, True or False, and Yes or No. In such cases the value is
purely objective, and does not attempt to evaluate the quality or rigor of that compliance.
Binary Assessments typically use the following compliance grading scale:

NA = Not Applicable
0 = Not Met
4 = Fully Met

In this evaluation model, a value of 0 represents 0% compliant, and 4 represents 100%
compliant.

Normalizing Evaluation Outcomes

The different types of assessments each have a range of possible outcomes, and are
measuring their criteria in different ways. Therefore it is important that their outputs be
normalized so that their impact on a Layer Grade or Efficacy Rating is consistent from one
type of assessment to another. Certain normalization occurs within the assessment
evaluations themselves. For example, in all evaluation models, 0 represents the lowest
possible value, and 4 represents the highest possible value. If the possible values for binary
assessments were 0 and 1, then 1 would be the best possible outcome for that control,
whereas 1 would be among the lowest possible outcomes for the other assessment types.
That difference would make any qualitative comparison or calculation using these
assessment values very difficult.

As a further step in the process of normalization, once the controls are assigned to the
various efficacies, the evaluation outcomes are converted to percentages and become the
grades for that control in those efficacies. An evaluation value of 4 = 1.0 or 100%, an
evaluation value of 3 equals 0.75 or 75%, an evaluation value of 2 equals 0.50 or 50%, an
evaluation value of 1 equals 0.25 or 25%, and an evaluation value of 0 = 0%. An evaluation
value of NA is null and has no value. Refer to Table 2 for illustration.
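This value-to-percentage conversion amounts to a simple division by 4 (a sketch, again using None for NA):

```python
def normalize(value):
    """Convert a 0-4 evaluation value to a 0.0-1.0 grade; NA (None) stays null."""
    if value is None:
        return None
    return value / 4  # 4 -> 1.00, 3 -> 0.75, 2 -> 0.50, 1 -> 0.25, 0 -> 0.00

print([normalize(v) for v in (4, 3, 2, 1, 0)])  # [1.0, 0.75, 0.5, 0.25, 0.0]
```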

Assigning Controls to Efficacies

Any given control or sub-control can impact multiple efficacies. For example, the presence
or absence of a documented change management process affects not just the system
availability, but also other efficacies such as efficiency, security, resilience and operations.

Therefore, since a control may have an impact on both the Availability and Operations
efficacies, its evaluation outcome is reflected in the efficacy ratings for both efficacies. In
such cases, there is no weighting or division of the score to reflect more heavily on one
efficacy than another. If the evaluation outcome for a sub-control that affects both
Availability and Operations was a 3, then the corresponding value (0.75 or 75%) would be
applied to both efficacies for that control. There are some weightings placed on aggregate
calculated values in latter parts of the scoring process, but not at this stage.
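The unweighted assignment can be sketched as follows (the data layout here is illustrative only; the efficacy letters follow the document's A, C, O, R, S, E, I convention):

```python
# Each sub-control carries one evaluation value plus the efficacies it affects.
sub_controls = [
    # (description, evaluation value 0-4, affected efficacies)
    ("Documented change management process", 3, ("A", "O")),
]

efficacy_grades = {}
for description, value, efficacies in sub_controls:
    grade = value / 4  # normalize: 3 -> 0.75
    # The same grade is applied, with no weighting or division,
    # to every efficacy the sub-control affects.
    for eff in efficacies:
        efficacy_grades.setdefault(eff, []).append(grade)

print(efficacy_grades)  # {'A': [0.75], 'O': [0.75]}
```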

Calculating Evaluation Outcomes and Efficacy Ratings for Controls

The evaluation value for a control is derived from the average of the values for its
corresponding sub-controls, and is not directly evaluated or assigned. If a control includes
only two sub-controls, with values of 4 and 2 respectively, the derived value for the control
itself will become 3 (the average of 4 and 2). This evaluation methodology can illustrate the
significance of the impact that even small deficiencies can have on a complex system.

Similarly, the efficacy scores for a control are derived from the average of the efficacy
values for its corresponding sub-controls.

Table 2 illustrates this evaluation and grading process. It has one Control Header,
containing two controls. The first of these two controls contains four sub-controls, while the
second control contains two sub-controls. In the first control, “4.3.1 Storage Capacity
Planning and Alignment”, the numbers 4.3.1 simply represent the ID number of the control
and are not part of the evaluation or grading process. In this Table, the sub-controls each
have been evaluated, and respective values of 4, 2, 3, and 1 have been assigned to them.

Under the heading “Affected Efficacies”, there are seven columns labeled A, C, O, R, S, E,
and I. These represent the efficacies of Availability, Capacity, Operations, Resilience, Safety
& Security, Efficiency, and Innovation. For each sub-control, a marker (in this case the letter
x) is placed to designate which efficacies are impacted by this sub-control.

Under the heading “Efficacy Scores”, there are another seven columns. These columns are
labeled AER, CER, OER, RER, SER, EER, and IER. These columns represent the
Availability Efficacy Rating, the Capacity Efficacy Rating, the Operations Efficacy Rating, the
Resiliency Efficacy Rating, the Security Efficacy Rating, the Efficiency Efficacy Rating, and
the Innovation Efficacy Rating respectively.

Within these columns, grades are calculated as described above for the efficacies
designated in the previous step, and shown in decimal form.

Table 2: Controls Evaluation, Efficacy Assignment, and Efficacy Grading

Control Header: 4.3 ITI System Architecture – Storage – Capacity Management

Control: 4.3.1 Storage Capacity Planning and Alignment
Evaluation: 2.5 | Efficacy Scores: AER 0.75, CER 0.625, OER 0.625
Sub-Controls:
- A formalized capacity planning process is in place and maintained
  (Evaluation: 4; affects A, C, O; grades 1.00, 1.00, 1.00)
- System resource planning is aligned to capacity planning process
  (Evaluation: 2; affects A, C, O; grades 0.50, 0.50, 0.50)
- Capacity modeling process is documented
  (Evaluation: 3; affects C, O; grades 0.75, 0.75)
- Capacity alignment monitoring process is implemented
  (Evaluation: 1; affects C, O; grades 0.25, 0.25)

Control: 4.3.2 Storage Capacity Performance Evaluation
Evaluation: 3 | Efficacy Scores: AER 1.00, CER 0.75, OER 0.50
Sub-Controls:
- Total installed primary data storage capacity meets current application requirements
  (Evaluation: 4; affects A, C; grades 1.00, 1.00)
- Supply chain process and agreements in place to assure projected storage capacity growth requirements are met
  (Evaluation: 2; affects C, O; grades 0.50, 0.50)

Calculating the Overall Efficacy Scores for a Layer

Once the Efficacy Scores have been calculated for all the controls in a layer, the overall
Efficacy Scores for that Layer can be calculated. This is accomplished by calculating the
average of the efficacy scores for the controls in that layer, for each efficacy. Only the
control efficacy scores are used; the sub-control scores are no longer included in the
calculations at this stage.
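That per-efficacy averaging can be sketched as follows, using only the two storage controls from Table 2 as inputs (a real layer would include many more controls):

```python
def layer_efficacy_scores(control_scores):
    """Average the controls' efficacy scores, per efficacy, for one layer.

    control_scores: one dict per control mapping efficacy -> score;
    controls that do not touch an efficacy simply omit that key.
    """
    totals, counts = {}, {}
    for scores in control_scores:
        for eff, s in scores.items():
            totals[eff] = totals.get(eff, 0.0) + s
            counts[eff] = counts.get(eff, 0) + 1
    return {eff: totals[eff] / counts[eff] for eff in totals}

controls = [
    {"AER": 0.75, "CER": 0.625, "OER": 0.625},  # 4.3.1
    {"AER": 1.00, "CER": 0.75,  "OER": 0.50},   # 4.3.2
]
print(layer_efficacy_scores(controls))
# {'AER': 0.875, 'CER': 0.6875, 'OER': 0.5625}
```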

In Table 3 below, the previously calculated efficacy scores for each control in a layer are
populated into a matrix. The bottom row illustrates the averages that were calculated from
the values in each column. For illustration purposes, only a small number of controls are
shown for this layer. However, in actual practice many controls would be involved.

Table 3: Calculating the overall Efficacy Scores for a Layer

Information Technology Infrastructure Layer (control efficacy scores):
- 4.3.1 Storage Capacity Planning and Alignment: 0.75 (AER), 0.625 (CER), 0.625 (OER)
- 4.3.2 Storage Capacity Performance Evaluation: 1.00 (AER), 0.75 (CER), 0.50 (OER)
- 4.3.3 Storage Area Network Capacity Planning and Alignment: 0.75, 0.50, 0.88, 1.00
- 4.3.4 Storage Area Network Performance: 0.88, 0.90, 0.90
- 4.4.1 Network Infrastructure Design: 0.95, 0.66, 0.90, 0.50, 0.75
- 4.4.2 Network Infrastructure Security: 0.92, 0.50, 0.88
- 4.4.3 Network Infrastructure Redundancy: 0.93, 0.75, 0.25

Overall Layer Efficacy Scores: AER 0.89, CER 0.61, OER 0.56, RER 0.84, SER 0.89, EER 0.58, IER 0.83

Calculating the Grade Level (G) for a Layer

Once the overall Efficacy Scores have been calculated for a layer, the layer’s Grade Level
or “G” can be determined. The various overall efficacy scores for the layer serve as inputs
into this calculation.

Efficacy Weighting

Before a final Grade Level can be determined, the various overall efficacy scores must be
properly weighted. Weighting is the process of placing greater or lesser emphasis on certain
numerical inputs relative to the proportion of their influence over the total efficacy of the
Application Ecosystem™. In all but the rarest Application Ecosystem™ business
requirements, Availability, Capacity, Operations, Resilience, Safety & Security, Efficiency
and Innovation all have significant if not equal influence over Application Ecosystem®
design, implementation and operation.

Grade Level Calculation Formula with Efficacy Weighting

The overall efficacy scores for the Availability, Capacity, Operations, Resilience, and Safety
& Security efficacies are averaged, and the result is multiplied by 0.85. These five efficacies
account for 85% of the total Grade Level. Next, the overall Efficiency efficacy score is
multiplied by 0.1. This efficacy represents 10% of the total Grade Level. Lastly, the overall
Innovation efficacy is multiplied by 0.05, as it represents just 5% of the total grade level.
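The weighting described above reduces to a short formula (a sketch; the framework's mapping of the resulting number onto the G4-G0 bands is pre-defined by IDCA and not reproduced here):

```python
def layer_grade(scores):
    """G = 0.85 * mean(A, C, O, R, S) + 0.10 * E + 0.05 * I"""
    core = (scores["A"] + scores["C"] + scores["O"]
            + scores["R"] + scores["S"]) / 5
    return 0.85 * core + 0.10 * scores["E"] + 0.05 * scores["I"]

# Using the overall layer efficacy scores from Table 3:
g = layer_grade({"A": 0.89, "C": 0.61, "O": 0.56,
                 "R": 0.84, "S": 0.89, "E": 0.58, "I": 0.83})
print(round(g, 4))  # 0.7438
```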

Note that the efficacy scores of Efficiency and Innovation are given less weight
than the others as inputs into the final Grade Level calculation. This does not
mean that innovation or efficiency should be of less value or importance to a
given organization. On the contrary, by including efficiency and innovation
efficacies, IDCA emphasizes and promotes the important and critical nature of
these two efficacies to the business and its Application Ecosystem™. Much of
the business bottom line and other efficacies are directly affected by the merits
of both efficiency and innovation.

However, for certain organizations, Efficiency and Innovation cannot be a
primary or equal consideration due to the nature and purpose of the Application,
while for others they may be an important but ancillary consideration. It would
present an inaccurate calculation to weight these two equally with the other
efficacies, and unfairly impact the Grade Levels and ESR Scores by placing
more emphasis on these efficacies than the organization’s application warrants
or could support.

The results of these calculations are then summed to form an overall numerical Layer Grade
that comprehensively accounts for the various efficacies, deficiencies and
interdependencies within that layer. This numerical Layer Grade will fall within pre-defined
ranges for Grade Levels, such that certain numerical grades correspond to G4, G3, G2, G1,
or G0. This process is performed for each layer in the Infinity Paradigm® Standards
Framework.

Calculating the Efficacy Score Rating (ESR®)

The Efficacy Score Rating is calculated in two steps. First, the Layer Efficacy Scores for
each efficacy (that were calculated in the previous step) are averaged to form an overall
Application Ecosystem™ efficacy score for each efficacy. Second, those overall efficacy
scores are summed together, resulting in an ESR grade. This ESR grade is a single number
that holistically represents all of the inherent performance and operational characteristics,
and interdependencies of the entire Application Ecosystem™, informed by potentially
thousands of individual data points and dozens of sub-calculations. As such, it is the most
comprehensive and useful number to the business.
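The two steps can be sketched as follows (the layer scores below are hypothetical, purely for illustration):

```python
def esr(layer_scores):
    """Average each efficacy across all layers, then sum the seven averages."""
    efficacies = layer_scores[0].keys()
    overall = {e: sum(layer[e] for layer in layer_scores) / len(layer_scores)
               for e in efficacies}
    return sum(overall.values())

# Two hypothetical layers, each with all seven efficacy scores:
layers = [
    {"A": 0.9, "C": 0.8, "O": 0.7, "R": 0.9, "S": 0.8, "E": 0.6, "I": 0.7},
    {"A": 0.7, "C": 0.6, "O": 0.9, "R": 0.7, "S": 0.8, "E": 0.8, "I": 0.9},
]
print(round(esr(layers), 2))  # 5.4
```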

Using the ESR® Score

Once an ESR® Score has been calculated, it becomes useful to the business in a number of
ways. Among these are Business Analytics, Requirements Definitions, Forecasting,
Predictive Analysis, Budget, Resource and Cost Management, Organizational Planning,
Efficiency Planning, Safety Soundness, Security Validation, Personnel Management,
Business Process and Application Effectiveness, Design Evaluations, Technology and
Service Provider Selections, Vendor Management, Mergers and Acquisitions, and more.

The ESR®, and the data that supports it, is an excellent tool for predictive analysis, to
illustrate gaps or lapses in regulatory compliance, or understand the impact that certain
practices, processes or capabilities are having on the ability of the Application Ecosystem™
to meet business requirements. Organizations will be able to see, prior to changes, what
impact the change will have on overall system performance. For example, if the organization
requires a certain minimum ESR® and is falling short, an inspection of the layer grades will
reveal which layers, and which efficacies are most significantly impacting that score. This
allows organizations to focus their time and capital on the specific areas of weakness, rather
than casting a wide and expensive net hoping that general improvements across the board
will have the desired effect. Conversely, organizations that are exceeding their required
ESR® score can make data driven determinations as to which kinds of budgetary or
technology cuts may be advisable.

The Infinity Paradigm® in Action

In order to gain perspective on the actual implementation of the above concepts, we will
evaluate two Case Studies to better illustrate some common ways the Infinity Paradigm®
methodology improves operational rigor, system performance, and cost efficiency. While
these are real world scenarios, the names have been changed.

Case Study 1: FinanceCorp, Inc.

FinanceCorp, a large financial services provider, is experiencing challenges keeping up with
capacity demand growth while maintaining required levels of availability and resilience.
Further, their large 8-year-old data center sites are not designed for the rack power densities
that this future capacity growth appears to require. Their average rack density has ballooned
to 20kW higher than their data center footprints were designed to support, resulting in
stranded floor capacity as power and cooling resources are consolidated to support a
smaller number of higher-density racks.

Given that the production system is responsible for nearly 100% of corporate revenue
generation and transaction values that amount to hundreds of billions of dollars annually, the
company’s business requirement for application availability is actually 99.999% uptime.
However, management has been unable to properly articulate this requirement into an
achievable goal, and therefore only holds the business accountable to a 99.95% availability
requirement and the ability to restore operations after catastrophic failures in no more
than 8 hours. The current capacity demand growth projection is 25% YoY.

FinanceCorp’s application and database systems, as well as its data center facilities were all
designed according to relevant technical standards and common legacy design practices for
their industry type, with the primary design consideration being to achieve certain prescribed
levels of availability and transaction processing speed. Given this design, the cost of
adding capacity escalates each year and is becoming prohibitive.

The company employs two data center sites, 500 miles apart, each built and operated in
accordance with legacy data center specifications that provide fully operational N+1
redundancy. One serves as the primary site, while the other serves as a warm standby
Disaster Recovery Site. Within the Primary site, the application components are separated
into three Application Delivery Zones. Two redundant zones support identical
implementations of front-end and mid-tier application services, each sized to 100% of
required capacity, while a third zone supports back-end database systems and storage.
Under normal operation, these two front-end Application Delivery Zones are load balanced
so that neither is handling more than 50% of the load. The purpose of this redundancy is to
facilitate scheduled maintenance or endure short duration service disruptions without having
to initiate a failover to the DR site. Meanwhile, the Disaster Recovery Site supports a third
Application Delivery Zone of front and mid tier application services (also sized to 100% of
required capacity), and one zone of back-end Database systems. It does not serve any
traffic unless a failover event has occurred.

Figure 3: Legacy Topology

As is common practice, the data center sites themselves are somewhat overbuilt from an
availability standpoint, using the common but typically false rationalization that excess
availability at the Site Facilities Infrastructure layer will somehow translate to increased
application availability. The applications, in turn, are designed based on the premise that
their back-end databases will always be local, and therefore no latency management
functionality is included in their design. Further, they are not topology aware, and therefore
include no functionality that would allow them to actively and automatically participate in
managing the Application Ecosystem’s health.

As we analyze this Application Ecosystem™ and its capacity management challenges
through the lens of the Infinity Paradigm®, several characteristics of the system become
apparent:

● Classic legacy architecture: designed for availability, not efficiency
● Requires 300% of target IT capacity
● Zones A and B can never run at more than 50% utilization, to allow for failover,
effectively stranding 50% of their power and cooling
● Zone C (the DR site) runs at 0% utilization, stranding 100% of its power and cooling
● The design does not scale effectively
● The Application Delivery Model architecture is the primary (and significant) limiter of
efficiency and capacity
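The stranded-capacity arithmetic behind these points can be sketched in a few lines. This is an illustrative model, not from the source; the zone sizes and utilization caps mirror the description above:

```python
# Illustrative model of the legacy topology's stranded capacity.

N = 100.0  # required IT capacity (arbitrary units)

# Two active zones sized at 100% of N each (capped at 50% utilization
# to preserve failover headroom), plus a dormant DR zone at 100% of N.
zones = {
    "Zone A (primary)": {"size": N, "max_utilization": 0.50},
    "Zone B (primary)": {"size": N, "max_utilization": 0.50},
    "Zone C (DR)":      {"size": N, "max_utilization": 0.00},
}

deployed = sum(z["size"] for z in zones.values())
usable = sum(z["size"] * z["max_utilization"] for z in zones.values())

print(f"Deployed capacity: {deployed / N:.0%} of N")              # 300% of N
print(f"Usable capacity:   {usable / N:.0%} of N")                # 100% of N
print(f"Stranded capacity: {(deployed - usable) / deployed:.0%}")  # 67%
```

Two-thirds of the deployed power and cooling can never be used, which is exactly the inefficiency the bullets above describe.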
So, while the business was focused on its capacity management challenges, it may have
been blind to the efficiency challenges that were actually their root cause. No amount
of effort spent increasing rack density, improving cooling, or anything of the sort will ultimately
overcome the systemic efficiency and capacity limitations introduced by the flawed
Application Delivery Model. Yet the business's most likely approach, based on legacy
thinking, would have been to investigate retrofitting its power and cooling capabilities to
support additional high-density racks, or to investigate building another major data center to
offload some of the new capacity demand. Either solution would consume tens or hundreds of
millions of capital dollars, take months or years to complete, and still not solve the problem.
The controls and metrics in the Infinity Paradigm® have revealed to business leadership that
the real constraint, which causes both the inability to support future capacity requirements
and the upheaval on the data center floors themselves, is not a power constraint or a
cooling constraint. It is actually a few hundred lines of missing application code: code that
would make the application aware of the topology it is hosted on, commands that it can
automatically execute to correct for various failure scenarios, and, most importantly, code to
provide the capability to operate with larger back-end latency tolerances. The current
application code not only assumes that the entire application and database system operates
in one data center room on one network; it actually depends on it. That is the source of the
problem.

Now that the Infinity Paradigm® has revealed the precise constraints and flaws in the
Application Delivery Model (“ADM”), a more effective and focused solution can be designed.
Once the application has been updated with the required capabilities, the infrastructure can
be transitioned to a much more efficient and scalable architecture.

Solution
Based on the Infinity Paradigm® evaluation results, several key deficiencies were identified.
FinanceCorp used a predictive analysis process with the Infinity Paradigm® evaluation
methodology to analyze several potential solutions. Various changes were “plugged into” the
model to see what effect they would have on overall Application Ecosystem™ performance,
with specific focus on the areas identified as weaknesses in the baseline evaluation. As a
result, they were able to identify a specific recipe of changes that would have the most
significant overall impact.

The first step was to change the data center topology to a distributed N+2 architecture.
Secondly, to increase availability and maximize cost efficiency, they decided to deploy
multiple smaller Application Delivery Zones rather than fewer large ones. This made adding
capacity easier, and reduced the size and availability requirements of each individual zone.
They decided to size each availability zone at 33% of the required capacity for N, and to
deploy five zones in total to achieve N+2. Each of these zones operates autonomously
and independently as a discrete failure domain. They can be hosted off campus at
co-location facilities, or within modular data centers located anywhere within the network
latency boundary (about 500 miles from the database zones). In addition, two back-end
database zones are geographically and logically isolated from each other (in the existing two
legacy data centers), and all front-end Application Delivery Zones are configured to connect
to either back-end database zone as needed, such that if one database zone fails, all
front-end zones that were connected to it automatically and independently connect to the
other database zone. This capability is made possible not only by enlarging the latency
tolerance of the front-end applications, but also by proactively caching the most commonly
used and most static data on local database copies within each front-end zone itself. This
dramatically reduces the number of database calls to the back-end database zones, and
as a result reduces their capacity requirements and improves performance.
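The behavior described above, topology awareness, automatic failover to the alternate database zone, and local caching of static data, might look roughly like the following sketch. All class, method, and zone names here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: a front-end zone that serves cached data locally,
# prefers its primary back-end database zone, and fails over automatically.

class DatabaseZone:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def is_healthy(self):
        return self.healthy

    def query(self, key):
        if not self.healthy:
            raise ConnectionError(f"{self.name} unreachable")
        return f"value-for-{key}"

class FrontEndZone:
    def __init__(self, name, primary_db, alternate_db):
        self.name = name
        self.db_zones = [primary_db, alternate_db]  # preference order
        self.local_cache = {}  # proactively cached, mostly static data

    def read(self, key):
        # Serve commonly used, mostly static data locally to cut
        # back-end database calls.
        if key in self.local_cache:
            return self.local_cache[key]
        # Topology awareness: try database zones in preference order,
        # automatically failing over if the primary zone is down.
        for db in self.db_zones:
            if db.is_healthy():
                value = db.query(key)
                self.local_cache[key] = value
                return value
        raise RuntimeError("no back-end database zone available")

# Failover scenario: the primary DB zone is lost, yet reads continue.
db_east = DatabaseZone("db-east", healthy=False)
db_west = DatabaseZone("db-west")
zone = FrontEndZone("fe-zone-1", db_east, db_west)
print(zone.read("account-42"))  # served by db-west despite the db-east outage
```

In a real system the health check and cache-invalidation logic would of course be far more involved; the point is only that the failover decision lives in the application, not in the facility.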

Further, since all front-end Application Delivery Zones are active simultaneously in an N+2
configuration, the company can take any zone down at any time for maintenance without
impacting system capacity, availability, or resilience. It can also drive system utilization
levels to 80% or beyond, as opposed to having to maintain unused failover headroom.
Moreover, since each zone holds just 33% of the N capacity, N+2 is achieved using just
166% of required system capacity rather than 300%, greatly reducing costs. To add
capacity, the company can deploy it in 33% increments by adding zones, or, for smaller
additions, by simply adding supplemental capacity equally across existing zones. In either
case, the cost and time to market for capacity additions are significantly reduced.
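The capacity arithmetic above generalizes cleanly: with k active zones each sized at N/k, an N+2 deployment uses (k + 2)/k of the required capacity N. A quick back-of-the-envelope check (an illustration, not from the source):

```python
# Deployed capacity as a fraction of required capacity N, for an
# N+spares topology built from k equally sized zones.

def deployed_fraction(k, spares=2):
    """Fraction of N deployed when each of k zones holds N/k."""
    return (k + spares) / k

for k in (3, 6, 10):
    print(f"{k} zones at {1 / k:.0%} of N each "
          f"-> {deployed_fraction(k):.1%} of N deployed")
```

Three zones of 33% each give the 166% figure cited above (5/3, or about 166.7%), versus 300% for the legacy topology; more, smaller zones push the overhead down further.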

Copyright 2017

International Data Center Authority Page 26



Figure 4: Distributed Topology


The key elements and requirements of this architecture:
● Applications must be topology aware and have internal diagnostics to trigger
self-failover to the alternate database zone.
● Each front-end Application Delivery Zone has direct, independent connectivity to both
back-end database zones.
● No direct interconnectivity between front-end Application Delivery Zones (from one to
another) is required.
● All capacity is active at 80+% potential utilization, maximizing electrical and cooling
efficiency; no infrastructure is left dormant or under-utilized in order to achieve
availability objectives.
● There are no Disaster Recovery sites or processes; the system self-heals and computes
through the loss of back-end or front-end nodes while maintaining full capacity,
eliminating the 8-hour DR failover/failback timeline.
● All Application Delivery Zones are hosted in discrete data center node failure
domains. They cannot share common network, storage, power, cooling, or building
infrastructure, and ideally should not exist at the same physical site.
● The availability requirements for the individual data center node designs can be as
low as G4 and still yield in excess of G1-grade Data Center Cluster availability
(>99.999%) at N.
● The inter-zone network is primarily a data replication network that synchronizes
perimeter databases with the back-end database masters, significantly reducing direct
database calls from front-end zones to back-end databases.
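The cluster-availability claim in the bullets above can be sanity-checked with a simple binomial model. The assumptions here are illustrative and not from the source: zone failures are treated as independent, and each zone is given a modest 99% standalone availability:

```python
from math import comb

def cluster_availability(zones=5, needed=3, zone_avail=0.99):
    """Probability that at least `needed` of `zones` independent zones
    are up, i.e. that full N capacity is maintained."""
    p, q = zone_avail, 1 - zone_avail
    return sum(comb(zones, k) * p**k * q**(zones - k)
               for k in range(needed, zones + 1))

# Five zones, any three sufficient for N: even 99%-available zones
# yield better than "five nines" at the cluster level.
print(f"{cluster_availability():.7f}")  # ~0.9999901
```

Under this model, relatively low-grade individual nodes (e.g. G4) can indeed combine into a cluster exceeding 99.999% availability at N, which is the efficiency trade the architecture exploits.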

This approach accomplishes a number of key objectives:
● A scalable High Availability solution (scales across sites or modular DCNs)
● Greatly improved space, electrical, and cooling efficiency
● Topology is N+2 at 166% of required capacity, rather than 300%
● Topology is concurrently maintainable without capacity loss or availability
degradation
● Supports theoretically unlimited growth with the addition of new data center nodes
● The application is topology aware, and automatically transfers database calls to the
surviving database in the event of a failure (no dormant DR site)
● The application architecture is the key enabler of massive efficiency gains



Case Summary

Through the use of the Infinity Paradigm®, FinanceCorp was able to avoid a very costly and
time consuming retrofit of their legacy data centers to support increased capacity through
higher rack density. The Infinity Paradigm® evaluation process revealed not only the actual
source of their challenges, but also that their planned solution would not have solved the
problem. FinanceCorp was able to address all the misalignments between their actual
business requirements and their Application Delivery Model, while simultaneously improving
on numerous other lagging KPIs. As a result, FinanceCorp is saving 30% on capacity adds,
and has improved availability, and extended the useful service life of existing data center
facilities investments. Further, the solution revolutionized how FinanceCorp deploys
compute capacity, and its highly precise and innovative architecture has achieved
remarkable improvement in operations and efficiency.



Case Study 2: Federal Government Agency

The Department of Administrative Affairs was experiencing significant challenges meeting
ever-increasing demands for new service offerings and expanded capabilities while
managing to a very restrictive budget, and a web of regulations and other governance
that stifled agility and innovation.

The CIO had no method of properly articulating risk, and found it difficult to translate new
directives from stakeholders into actionable information technology objectives. There were
several main causes for this. First, regardless of the requests from constituents for new or
expanded service offerings and capabilities, the agency’s information technology budget
was capped by legislative mandate. The budget was the same every year, regardless of the
services desired.

Secondly, since the agency is not a business, it was difficult for its IT leadership to
express budgetary or technology risks, limitations, or constraints in a metric that
stakeholders or senior leadership would understand. In a typical business, the currency is
dollars. If there is an outage, it can be measured in dollars lost. If there is a new capability
desired, its value can be measured in return on investment. But in the public sector, where
the purpose of the organization has nothing to do with generating revenue or increasing
shareholder value, dollars are an essentially meaningless currency for measuring IT business
effectiveness. They needed a better approach.

Lastly, the myriad regulations and governance requirements to which their IT operations
were subject were significant. Regulations, in general, are designed to produce, or even
force, a specific desired outcome, typically with little concern for their impact on efficiency
or the cost associated with compliance. When combined with a capped budget, their impact
can be particularly disruptive. The CIO had no way to properly and accurately determine his
day-to-day state of compliance, how changes in his environment would affect compliance,
or what the true operational and budget impact of that compliance would be.

Analysis

The CIO decided to perform an audit of his entire Application Ecosystem™ using the Infinity
Paradigm® Standards Framework as a basis. This exhaustive analysis revealed a wealth of
information about his operations, efficiency, and overall state of readiness to meet future
demands. It also quantifiably confirmed some previously held assumptions. He was able to
form several important conclusions from this effort.

Since the CIO faced expanding demand for new services, but no corresponding ability to
set a budget to facilitate those services, it was paramount that he understand, at all times,
the design, operational limitations, and capabilities of his environment.

He was able to identify the key transaction types his applications produce that were core to
his agency's mission, and then view his entire operation through the lens of those
transaction types. By identifying these units of work, and translating his business
requirements into those units of work, he now had a gauge by which to identify alignment
across the various layers and efficacies in his Application Ecosystem™. He no longer viewed
his systems' capabilities in terms of kilowatts, terabytes, or budget dollars. He viewed them
in terms of the capability to produce the required quantity and quality of specific transaction
types at specific performance levels.
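As an illustrative sketch (the transaction types and numbers here are invented, not from the source), this "units of work" view can be as simple as comparing required versus deliverable transaction rates per transaction type:

```python
# Hypothetical "units of work" gauge: business demand and system
# capability both expressed in transactions per second (TPS).
required_tps = {"case-filing": 1200, "records-search": 2500, "reporting": 300}
deliverable_tps = {"case-filing": 1500, "records-search": 2000, "reporting": 900}

for tx_type, need in required_tps.items():
    have = deliverable_tps[tx_type]
    status = "aligned" if have >= need else f"short by {need - have} TPS"
    print(f"{tx_type}: need {need}, can deliver {have} -> {status}")
```

A report like this surfaces misalignment directly in mission terms; budget dollars, kilowatts, and terabytes become inputs to the gauge rather than the gauge itself.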

When viewing it this way, he was able to more clearly see where some existing services and
projects (with their corresponding budget dollars) were not improving his ability to meet
expanding demands. These observations fell into several key areas, including Data Center
Site, Site Facilities Infrastructure, Finance & Cost Efficiency, Platform & ADM, and
Regulatory Compliance.

For Data Center Site & Site Facilities Infrastructure (SFI), he observed that:

• His existing primary data center facility was originally designed 20 years ago to
support mainframe equipment. A long-term tech refresh plan was underway to
retrofit this facility to a more modern design, because the cost of building a new
facility could not be included in the budget. This new design was conceived 6 years
ago, and progress toward completing it has been slow due to budget constraints.
This all but guarantees that once the retrofit is complete, the new design will be
nearly as obsolete as the one it was intended to replace. Without a significant
increase in budget, the organization is simply not well positioned to keep pace with
the modernization demands that new technologies place on aging facilities
infrastructure.

• In addition to technical modernization, the CIO also identified that the data center
staff were largely composed of legacy, mainframe-centric thinkers who lacked
practical experience with the modern technologies he was being asked to support.
As a result, there was a significant skills gap, which would be challenging to address,
since agencies like his have difficulty attracting new talent when competing with
private-sector firms for the same applicants.

• In the current geopolitical climate, the location of his data center is no longer
desirable for the assurance of continuity of operations in the face of modern threat
profiles. This was much less of a concern when the facility was built than it is today.

With these realizations, the CIO had a basis to carefully scrutinize the rationale for
continuing the data center modernization project as opposed to seeking alternate solutions.

In the area of Finance and Cost Efficiency, he observed that:

• His organization depreciates information technology assets too slowly to keep pace
with technology demands. A five-year depreciation cycle forces him to maintain
older, less energy-efficient, less powerful equipment in the data center environment.
Some Information Technology Infrastructure (ITI) in the data center node is even
older than five years.

• Although part of the rationale for the long depreciation cycle was that it reduced the
frequency of capital expenditures on technology upgrades, this was revealed to be a
short-sighted view that failed to consider Total Cost of Ownership as a primary
purchase factor. Newer equipment would allow the CIO to meet expanding demands
for compute while significantly increasing compute output per watt of power and
cooling, and reducing floor-space requirements. In other words, a smaller number of
more powerful servers is typically more energy- and cost-efficient than a larger
number of less powerful ones, and this efficiency gain can often offset the cost of
the upgrades.

• Since the supply chain was managed by an external agency as part of a multi-agency
bargaining agreement, the CIO had little influence over sourcing, terms for
service and support contracts, or other cost drivers, even though he was forced to
pay those costs out of his budget. The multi-agency purchase agreements
sometimes included requirements mandated by other agencies that exceeded the
needs of his department, and therefore cost him more budget dollars than necessary
due to extraneous features or services he would never require.

In the area of Platform & ADM, he observed that:

• The absence of a cohesive, well-aligned Application Delivery Model and Platform
specification allowed technology selections to be made on their own technical merits
in isolation, rather than on how those technologies aligned with the requirements of
their respective layers and, ultimately, the Application Delivery Model as a whole.
This resulted in overbuilding certain layers, such as the network layer, while falling
short of requirements in others, such as the compute layer.

In the area of Regulatory Compliance, he observed that:

• Regulatory compliance was heavily impacting the system's and the department's
efficiency, largely due to a lack of automation in tracking key auditable data points. A
large operational burden could be lifted by introducing programmatic controls and
collection of compliance data, as opposed to manual oversight and data collection. It
was estimated that as much as 70% of the IT staff spent 25% of their time on tasks
related to regulatory compliance. The net effect was that staffing levels were roughly
18% higher than otherwise necessary, simply to account for regulatory tasks that
could be automated.
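The staffing arithmetic behind that bullet is worth making explicit (a simple illustration of the figures quoted above):

```python
# If 70% of staff spend 25% of their time on compliance tasks, the
# equivalent fraction of total staff capacity consumed is:
overhead = 0.70 * 0.25
print(f"{overhead:.1%} of total staff capacity")  # 17.5% of total staff capacity
```

That 17.5% of consumed capacity is the basis for the roughly 18% excess-staffing figure cited above.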

Solution

Armed with the facts surfaced by the Infinity Paradigm® audit, the CIO and his leadership
team significantly revised their planning for the coming budget cycles. Several major
changes were enacted.

• More accurate requirements definitions led to the creation of well-aligned
specifications for the Application Delivery Model. This in turn led to better-aligned
requirements and specifications for each of the supporting layers, which took into
account the interdependencies between those layers.



• The data center node retrofit project was suspended. The DR site was phased out,
and the primary site was re-purposed as a diminished-capacity DR site.

• The agency outsourced its primary data center node operations to an approved
collocation provider.
o The cost of hosting in a collocation arrangement was lower than the cost to
operate, upgrade, and maintain their existing primary and DR sites.
o The collocation provider is located outside the area designated in their
geopolitical threat assessment, but still within the network latency boundaries
specified in the Application Delivery Model.
o The collocation facility was designed to support modern information
technology rack profiles, and any future upgrades that may be required are a
cost of doing business for the collocation provider, not a cost that must be
budgeted by the agency.
o The collocation facility has sufficient capacity to support the agency’s current
and future growth projections.
o The availability, safety and security, and efficiency capabilities meet or
exceed the Agency’s requirements.
o And most importantly, the collocation provider could provide a set of IDCA-certified
audit results documenting the G Scores and Efficacy Ratings of its
infrastructure, which the CIO could plug directly into his own Application
Ecosystem™ model, as if that data center were his own. This eliminated all
assumptions as to the true capabilities of the service provider, and meant the
CIO did not have to rely on service level agreements as his sole indicator of
expected performance. Service level agreements are an arbiter of past
performance, not an indicator of future performance.

• The CIO then turned to regulatory compliance. Custom monitoring and data
collection application systems were designed and implemented to automate most of
the monitoring for regulatory compliance. Programmatic controls were introduced to
automatically prevent actions or changes that would violate regulations.
o Once implemented, the operational burden of manual data collection and
documentation, manual process auditing, change management, operational
reviews, and other regulation-driven activities was significantly reduced,
freeing a large number of man-hours that could be redirected toward
operational objectives, or eliminated through force reduction.
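A minimal, entirely hypothetical sketch of such a programmatic control, a pre-change gate that blocks changes violating machine-readable compliance rules (the rule names and change fields are invented for illustration):

```python
# Hypothetical compliance gate: each proposed change is checked against
# simple machine-readable rules before it can be applied.
RULES = [
    ("encryption-required",
     lambda c: c.get("encrypts_data_at_rest", False)
     or c.get("touches_sensitive_data") is False),
    ("change-window",
     lambda c: c.get("scheduled_hour", 12) in range(0, 6)),
]

def validate_change(change):
    """Return the list of violated rule names (empty means compliant)."""
    return [name for name, check in RULES if not check(change)]

change = {"touches_sensitive_data": True,
          "encrypts_data_at_rest": False,
          "scheduled_hour": 3}
violations = validate_change(change)
print("BLOCKED:" if violations else "APPROVED", violations)
```

Encoding rules this way means compliance evidence is collected as a by-product of normal operations, rather than reconstructed manually at audit time.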

• From a staff-readiness standpoint, the CIO was able to evaluate the skills gaps by
contrasting his staff's current capabilities with future requirements. He was also able
to re-evaluate required workforce headcounts based on the additional headroom
created by the regulation process automation project, as well as the staffing needs
associated with upcoming deliverables. From this effort, he was able to craft a
staffing plan that included proper training for the team members with the smallest
skills gaps, a force reduction plan for team members whose major skills were no
longer mission-relevant, and a recruiting and internship program to better attract
candidates with 21st-century skills. The increased emphasis on staff training and
certification also yielded a more engaged and capable staff that was better aligned
with mission objectives and less prone to error.



• In the area of supply chain management, he was able to identify misalignments
between the current SKUs defined in the multi-agency purchase agreements and the
actual SKUs defined by his applications' requirements. In many cases, due to the
system architecture, extended warranty services and 4-hour support response
guarantees were not warranted for his needs, even though they were built into the
SKUs based on other agencies' needs. He was able to reduce his costs by working
with supply chain management to define alternate custom SKUs for his
requirements, rather than accepting "one size fits all" purchasing models. He further
created an accelerated asset disposition and tech refresh program to ensure that
OpEx efficiency, and not CapEx cost, was the primary driver for determining useful
service life definitions for his assets.

Table 4: High Level Budget Comparison
(Annual budgets are legislatively capped at $10 million; figures in thousands)

                                  Previous budget   New budget   Difference
Site Facilities Infrastructure         $5,000         $3,000        -40%
  Owned Facility 1                     $3,000         $2,000        -33%
    Operations                         $1,500         $1,500
    Maintenance                          $500           $500
    Retrofit Project                   $1,000             $0
  Owned Facility 2                     $2,000             $0       -100%
    Operations                         $1,500             $0
    Maintenance                          $500             $0
  Collocation Provider                     $0         $1,000        100%
    Operations                             $0         $1,000
Operations                             $5,000         $7,000         40%
  IT Tech Refresh                        $400         $2,000        400%
  New IT Assets                        $1,000         $1,000          0%
  Application Development                $500         $1,000        100%
  Training                               $100           $540        440%
  Staffing                             $3,000         $2,460        -18%
Total                                 $10,000        $10,000          0%
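The percentage deltas in Table 4 can be verified mechanically; a quick sketch with two of the top-level budget lines:

```python
# Check the top-level deltas from Table 4 (figures in thousands).
previous = {"Site Facilities Infrastructure": 5000, "Operations": 5000}
new = {"Site Facilities Infrastructure": 3000, "Operations": 7000}

for line in previous:
    delta = (new[line] - previous[line]) / previous[line]
    print(f"{line}: {delta:+.0%}")  # -40% and +40%, matching the table
```

The facilities savings and the operations increase offset exactly, which is how the total stays at the legislated $10 million cap.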

Case Summary

As a result of employing the Infinity Paradigm® model, the CIO was able to revolutionize his
department's ability to meet future demands within his mandated budget constraints. He
was able to modernize and right-size his data center facilities capabilities through
outsourcing, cancelling a major, costly retrofit project in the process and producing a net
annual CapEx/OpEx budget savings of 40% for facilities. This allowed a 40% increase in
other, more effective spending. He modernized and streamlined his workforce, making it
much more cost- and production-efficient and enabling him to meet expanding demands
without expanding cost. He aligned his supply chain with his technical and business
requirements, reducing IT costs. He was able to combine those savings with cost savings
from other areas to invest in modernizing his Information Technology Infrastructure fleet to
meet future requirements.



Figure 5: Meeting the Demand Curve
[Two side-by-side charts plot Transactions Per Second (capacity vs. demand) across
budget years 2016-2020, comparing the Previous Budget Plan with the New Budget Plan.]

The CIO’s use of the Infinity Paradigm® enabled data driven and targeted decisions based
on quantifiable performance and operational metrics that were holistic and comprehensive.
The end result was a complete course correction – one that enabled his organization to
align on business requirements as opposed to budget constraints, and to finally get out in
front of the demand curve. Due to the nature of the changes enabled by this analysis, the
organization actually becomes more and more efficient over each successive budget cycle,
as the effect of this methodology phases in. It would not have been possible to accomplish
this under the previous methods and practices.



Infinity Paradigm® Benefits

As we have seen in the preceding scenarios, the Infinity Paradigm® can present a wide
variety of benefits, based on the challenges each end user is attempting to discover or
solve. From a C level executive down to an entry-level staff engineer, from a public sector
organization to a global corporation to a local small business, every adopter and constituent
is empowered by its comprehensive capabilities and relentlessly thorough approach. Among
the many benefits presented by the Infinity Paradigm®, certain key features stand out:

Comprehensive & Cloud-inclusive
The Infinity Paradigm® was founded on the basis of bridging existing industry gaps. It
enforces a comprehensive view of the logical and physical aspects of information
technology, data centers, and cloud, as well as their investments, designs, infrastructure,
personnel, performance metrics, overall operations, and management. Virtualization is
essential to modern-era computing environments, and the Infinity Paradigm® pioneered the
introduction of cloud and virtualized services as a key factor in a next-generation standards
framework.

Operations Conducive
Operational conduciveness is a foundational element of the Infinity Paradigm®. The industry
has favored availability and resilience over operational effectiveness for far too long. IDCA’s
philosophy dictates that an infrastructure’s availability, capacity, security, safety, compliance,
etc. cannot be measured unless its operational requirements are fully evaluated and met.
Only a demonstrable operational competency with each component’s individual function and
the interdependencies between components will fulfill Infinity Paradigm’s requirements of
compliance.

Efficiency and Innovation Driven
IDCA's standards framework promotes efficiency and innovation as vital efficacies. Their
continuous involvement and active roles within the organization are the driving force that
constantly evolves an organization's Application Ecosystem™ through phases of
development, implementation, and operation, bringing cost efficiency, viability, feasibility,
enhanced productivity, competitive edge, and enriched efficacies. The notion of "Infinity" is
meant to symbolize the infinite possibilities that result from the continuous pursuit of
perfection.

International, Yet Localized
The Infinity Paradigm® was designed to bring the world's best and brightest together,
allowing all cloud, application, information technology, facility, and data center stakeholders
to enjoy the synergy that globalized collaboration can provide. It is extensible and adaptable
to localized technical, language, environmental, regulatory, political, and industrial
requirements and capabilities. This unprecedented level of "reality-centric" customization
allows for ease of access, enhanced efficiency, safety, compliance, and effectiveness.



Effective & Application-Centric
One of the Infinity Paradigm's core missions is ensuring effectiveness through practicality.
Its framework matrix enables a practical, intuitive, detailed, and holistic understanding of
the entire ecosystem. As the first organization to acknowledge and promote application
delivery as the purpose of data centers, IDCA has effectively redefined the data center as
"the infrastructure supporting the Application Ecosystem™". This application-centric
approach eliminates redundant and goalless efforts, homing in on a data center's true
purpose and enabling stakeholders to set accurate expectations and properly strategize
toward satisfying their application needs.

Vendor Neutrality
IDCA does not solicit or accept input from technology manufacturers, and therefore the
standards framework is not biased toward or against any particular manufacturers,
technologies, or technical solutions.

Enabling Service Provider Integration Strategies
The Infinity Paradigm® enables and empowers the integration of cloud and infrastructure
service providers into the end user's Application Ecosystem™ by providing a common
grading model through which service providers can validate performance capabilities far
beyond basic service level agreements. Service providers' Efficacy Ratings and G Scores
can be integrated directly into the end user's ESR score, providing management with the
most powerful tool yet for understanding the impact that partner integration has on their
overall operations.

Open Community Effort


The Infinity Paradigm® is a free, open data center framework available worldwide. It
empowers functional performance on a global scale, while adapting to local requirements. It
provides the stakeholder with an all-inclusive standards framework granting total control
over Availability, Efficiency, Security, Capacity, Operation, Innovation and Resilience based
on the specific needs of the business.

Conclusion
The Infinity Paradigm® bridges existing industry gaps by providing a comprehensive open
standards framework capable of handling the complexity of the post-legacy information
technology and data center age. It maintains a focus on a set of unique core values:
comprehensiveness, effectiveness, practicality, and adaptability. This framework, and the
standards, controls, and guidance that support it, will grow and evolve over time through the
input and collaboration of its users around the world, continuously redefining the very
essence of what information technology, cloud, and the data center are and do. It integrates
legacy standards and guidelines, remaining fully aware of their strengths and limitations, in a
forward-looking and practical way. As a result, the Infinity Paradigm® provides information
technology, cloud, and data center designers, planners, builders, operators, and end users
with a holistic and constructive approach, addressing the problems of today and the
challenges of tomorrow.



Table of Illustrations

Figure 1: The Application Ecosystem™
Figure 2: Application Ecosystem™ Layers
Figure 3: Legacy Topology
Figure 4: Distributed Topology
Figure 5: Meeting the Demand Curve

Table 1: The Infinity Paradigm® Grading Matrix
Table 2: Controls Evaluation, Efficacy Assignment, and Efficacy Grading
Table 3: Calculating the Overall Efficacy Scores for a Layer
Table 4: High Level Budget Comparison

