Table of contents
1. Introduction
2. Context
Appendix B: References
1. Introduction
1.1 Purpose
The Data Centre Services (DCS) Reference Architecture Document (RAD) has been developed
to describe the reference architecture models for a common, shared and consolidated DC
environment for the Government of Canada (GC) as an enterprise. The DCS RAD defines the
end-to-end architectural model and the components that will form the basis for developing
the Shared Services Canada (SSC) target end-state DC services for the Data Centre
Consolidation Program (DCCP). The architecture will support the delivery of DCCP services for
SSC's partner organizations over the near term (less than three years), with an evolutionary
capability to encompass a hybrid cloud service delivery model for long-term strategic
planning.
1.2 Scope
This document is limited to articulating the conceptual data centre infrastructure architecture.
The logical and physical architecture required to meet the intent of the conceptual
architecture described in this deliverable is the subject of associated Technical Architecture
Documents (TADs).
The target baseline security profile for SSC's end-state data centre services is Protected B,
Medium Integrity, Medium Availability (PBMM). End-state data centres will also support
workload profiles above PBMM (e.g. Secret) where required through appropriate safeguarding
measures over and above those implemented for the PBMM baseline. Partner organizations
are responsible for implementing application-level security safeguards over and above those
implemented by SSC in its data centres in order to meet their particular information security
requirements.
The security controls that are identified in this document are strongly influenced by
Communications Security Establishment Canada's (CSEC's) Information Technology Security
Guidelines (ITSG) publications. In addition, this document constitutes a key deliverable for
achieving Security Assessment and Authorization (SA&A) and overall service authorization
successfully. This approach will allow risk management groups to validate the compliance of
each component's design and implementation with this document specification, thereby
facilitating assessment efforts and accuracy.
The architecture is based on current concepts and technologies available within the data
centre space. As the technologies and surrounding infrastructures evolve, the architecture will
also need to evolve. The architecture presented in this document will set the standard for the
target data centre services that will allow SSC to re-engineer, virtualize and consolidate DC
services, and enable integration of various other partners and service providers.
This document covers the following topics:
DC requirements,
DC architecture models,
DC service management.
Future releases of this document will elaborate on public/hybrid cloud computing architecture
models, usage and integration with the GC community cloud, security profiles higher
than PBMM, and partner organization applications.
The document maps shown in Figure 1 and Figure 2 illustrate the associated end-state
deliverable reference documents that will form an evolving document container for RADs and
TADs.
This document:
takes a pragmatic and integrated delivery approach for planning, building and
operating the DCs;
provides traceability and direction in the creation of the TADs, Detailed Design
Specifications (DDSs) and Build Books;
provides a security by design view of the infrastructure elements and the service
specific elements that support DC services; and
identifies a security architecture that aligns with ITSG security guidelines, as well as the
IT Shared Services Security Domain and Zones Architecture documents.
1.4 References
This section identifies reference material that has been utilized for the development of the DC
Reference Architecture. Refer to Appendix B: References for a list of documents utilized for the
creation of this RAD.
Further details on the NIST Cloud Computing Reference Architecture (CCRA) are available in
the publication NIST Cloud Computing Reference Architecture.
1.4.2 OpenStack
OpenStack, an Infrastructure as a Service (IaaS) cloud computing project, is a cloud
operating system that provides a flexible architecture to enable the convergence and
provisioning of on-demand compute, storage and network resources for building highly
scalable public and private clouds. Further details on the OpenStack cloud computing
reference architecture are available at www.openstack.org. SSC is currently investigating how
cloud operating systems such as OpenStack can be leveraged going forward.
Footnote 1
NIST SP 500-292, "NIST Cloud Computing Reference Architecture"
2. Context
service capacity varies greatly from one DC to another: some have excess computing
capacity that is unused, while others strain to meet demand;
many have outdated heating and cooling systems that are not energy efficient and
require frequent maintenance; and
most DCs have their own reliability and security standards, requiring multiple service
teams and varying service contracts.
Across the GC, the DCCP will deliver efficient, scalable and standardized DC services that will
reduce operating costs for government DC services as a whole.
2.3.1 Vision
The DCCP vision includes the consolidation of more than 400 DCs to fewer than ten state-of-the-art facilities providing enterprise-class application hosting services. Data centres will
utilize a secure containment strategy to host the workloads of partner organizations within a
shared domain/zone configuration. Data centres will be deployed in a manner that provides
partner organizations with High Availability (HA) and Disaster Recovery (DR) capabilities to
support enhanced and mission-critical systems. The model defined to support this goal is
referred to as a 2+1 Availability Strategy and will be accomplished through the operation
of two DCs within a region forming a High-Availability pair ("Intra-Region HA"), with one
DC outside the region providing Disaster Recovery ("Out-of-Region DR").
DCCP will also provide SSC partner organizations with a set of defined target services that are
coupled with advanced features of the underlying infrastructure in order to:
provide a dynamic, "just in time" computing environment that meets the varied
application and data processing needs of SSC partner organizations on an ongoing
basis;
adapt and evolve over time in a manner that aligns with ever-changing technological
and market landscapes, without incurring penalties due to decisions made;
support service model deployment innovation and cost savings through private-sector
engagement; and
enable online brokerage and orchestration services with the capability to leverage
private, public and hybrid cloud computing services.
consolidation and standardization. Finally, reducing the number of DCs will save on power and
cooling, and improve security.
While DC consolidation will provide the GC with significant advantages in the near to medium
term, the development of a dynamic and flexible sourcing strategy that leverages the
capabilities of workload mobility, open standards and hybrid cloud computing resources will
enable SSC to future-proof service delivery, with the ability to broker, orchestrate, provision,
deliver and repatriate standards-based IT services from multiple sources.
new applications that should result in increased utilization of existing assets, not the
acquisition of new assets.
Priority | Definition | Strategic Requirement
Costs and Funding | Category relates to GC … constraints | …
Availability | … the infrastructure so as to support delivery of GC … | …
… | … | Meet variable computing demands. Offer a range of standard services, service levels and service level monitoring and reporting capabilities in order to meet the full variety of business needs across the GC.
Performance | … quickly to changing GC requirements | …
Policy Compliance and Security | Category relates to … DC services environment | …
Strategic Alignment and … | … respect to DC services | …
Political Sensitivity | … | …
3.1 Assumptions
The following assumptions are made:
1. The Government of Canada (GC) is a single enterprise that will make use of a common,
shared data centre (DC) and telecommunications network infrastructure.
2. Applications will be migrated to the target architecture as part of the application
lifecycle, either with new deployments or re-engineering of existing applications driven
by partner organizations.
3. Full traceability of Detailed Design Specifications (DDS) documents to the architectural
requirements identified in the DC Reference Architecture Documents (RADs) and
Technical Architecture Documents (TADs) will be possible.
4. Various build groups will be able to provide full traceability via Build Books to the
certifier before the solution goes into production.
5. The Information Protection Centre (IPC) will collect, analyze and aggregate information
from logs when required, and as part of their incident handling and investigation best
practices.
6. Data centre services in scope of this RAD will service SSC's 43 partner organizations, as
well as clients from other government departments and agencies.
7. Network connectivity between partners and clients and the new DC services will be
provided through the common GCNet Inter-Building Network.
8. Shared Services Canada (SSC) will develop the security profile for Protected
B/Medium/Medium (PBMM) and socialize with partner organizations prior to production
deployment.
thin client access methods, including desktops, mobile platforms and web
browsers;
Enterprise Security
Service Management
Smart Evergreening
Consolidation Principles
1. As few data centres as possible
2. Locations determined objectively for the long-term
3. Several levels of resiliency and availability (establish in pairs)
4. Scalable and flexible infrastructure
5. Infrastructure transformed, not "fork-lifted" from old to new
6. Separate application development environment
7. Standard platforms which meet common requirements (not re-architecting of
applications)
Business intent
Business to Government
Government to Government
Citizens to Government
capability of workloads to move between public and GC private clouds versus remaining
on GC-controlled infrastructure.
Application Hosting,
Database Hosting,
Distributed Print,
Bulk Print,
on a case-by-case basis. The above platform service options reside on SSC's standard
File Service (GCDrive) (PaaS) provides a centralized, highly scalable, secure online
storage solution for unstructured data and files. File service allows users to store,
access and share files from a virtual file server anywhere on the GC network, without
having to know the physical location of the file. Service provides:
anti-virus protection;
Distributed Print Service (SaaS) provides a fully managed printing service where
users can print efficiently and securely, and coordinate all activities related to printing
services on a GC network and in the Government of Canada Community Cloud (GCCC).
Users are provided with self-service print management to associate printers with their
user account, select the printer and printer properties for each print job, and receive
updates regarding job status and progress. The service includes centralized monitoring
and management of policies, printers and consumption; providing alerts and analytics
for optimal productivity; and cost efficiency.
Bulk Print (SaaS) provides a standardized and fully managed print service for
consumers requiring very high volume and specialized print media, with high-volume
distribution and mailing capabilities in secure, centralized printing facilities.
Data Archival,
Facilities Management,
Remote Administration.
Compute and Storage Provisioning (IaaS) provides a highly available, secure and
fully managed capability for computing and storage. Compute provides a fully managed
virtual infrastructure platform with container isolation for guest OS and workloads
(physical bare-metal and virtual machine). Storage provides various levels of data
protection, data availability and data performance, in a highly available online data
repository. Storage infrastructure provides both block-level and file-level capacity in the
form of Storage Area Network (SAN) and Network Attached Storage (NAS) respectively.
Data Archival (IaaS) provides secure storage of older or less utilized data for
longer-term retention. Archived data are indexed and accessible by business users. The
Archive Service makes use of redundant SAN technologies and interacts with the
compute and storage provisioning service.
The business platform enables the delivery of IT-as-a-service, while the technology platform
leverages the three architectural components (compute, network, storage) to render three
basic service delivery offerings: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
(PaaS) and Software-as-a-Service (SaaS). The business platform provides the framework for
IT business alignment incorporating business drivers that are leveraged against the
technology platform.
The framework consists of:
target services from which partner organizations select those IT services they require;
and
The technology platform provides the framework for delivering IT-as-a-service to consumers.
Characteristics of that delivery are influenced by the business platform. The technology
platform framework incorporates:
security services that will provide secure workloads enabling the confidentiality, integrity
and availability of services.
regional outages. Each production DC will have capacity to host in-region High Availability
(HA) requirements ("enhanced'' service level profile), as well as capacity to host out-of-region
DR requirements ("Premium'' service level profile). All production and development DCs will
operate in a "lights-out'' manner, where no human interaction inside the DC secure space will
be allowed outside of pre-approved, limited installation and maintenance activities.
Figure 6: SSC Data Centre Facilities
Text Version of Figure 6: SSC Data Centre Facilities
deployed, or acquire new infrastructure. The CIs are sized and deployed based on templates
that allow for implementation and growth with a predefined approach. This removes the
planning and configuration burdens of traditional deployments and the heavy reliance on
human interaction during the provisioning phase. The simplified architecture accelerates
deployment of new capacity, provides greater flexibility of services and increased efficiency of
deployed capacity, while lowering operational risks. Converged Infrastructure provides a
blueprint for the DC to accelerate the provisioning of services and applications, and will be
utilized to deploy the large majority of workloads within each Shared Services Canada (SSC)
DC as the infrastructure of choice. Figure 8 illustrates the various benefits and capabilities of
components within a CI.
Converged Infrastructure that is used to host the large majority of partner workloads (e.g.
common J2EE, .Net, Commercial off-the-Shelf (COTS) Application Hosting) is referred to as
the "general purpose platform,'' contrasted with "special purpose platforms'' that are geared
to particular needs not well suited for the general purpose platform (e.g. high performance
computing, mainframe, VoIP Gateway Appliance).
The compute infrastructure is implemented using a stateless compute node model where
compute, storage and networking are assembled in a software-defined fashion to provide
dynamically configurable compute infrastructure containers.
for the CIs where required (i.e. overflow capacity), and to store large data repositories such
as video, backups and big data that might not be well suited for CI storage. Traditional
enterprise storage will also be used by non-CI servers such as mainframes, and also as the
main repository for enterprise backup/recovery and data archival services. Figure
10 represents a conceptual view of the storage service model.
Block level access is to be used for applications that require high Input/Output per Second
(IOPS) and availability. File level access is to be used for Common Internet File System
(CIFS) and Network File System (NFS) file shares. Commonly, file storage is implemented in
the form of a gateway appliance that connects to externally attached back-end block storage.
File storage is used primarily for unstructured user data, and possibly even virtual hosts
installed on NFS partitions, due to its high scalability and ease of management.
Storage optimization techniques will be deployed to reduce cost and improve performance.
These will include:
Automated Storage Tiering: moves data blocks across multiple storage media
without impact to hosts accessing the storage, in order to align the performance
requirements to the storage media capabilities. Data blocks that are accessed more
frequently are relocated onto faster, more expensive media, while infrequently accessed
data blocks are placed onto slower and less expensive media. This entire process is
transparent to the end client.
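The tiering behaviour described above can be sketched as a simple placement rule. This is an illustrative sketch only; the tier names and access-frequency thresholds are assumptions for the example, not SSC specifications.

```python
# Illustrative sketch of an automated storage tiering policy.
# Tier names and thresholds are hypothetical assumptions, not SSC values.

# Tiers ordered fastest/most expensive first; a block is promoted to the
# first tier whose access-frequency threshold it meets.
TIERS = [
    ("ssd", 1000),         # >= 1000 accesses/day: fastest, most expensive media
    ("fast_disk", 100),    # >= 100 accesses/day: mid-range media
    ("capacity_disk", 0),  # everything else: slower, less expensive media
]

def place_block(accesses_per_day: int) -> str:
    """Return the target tier for a data block based on its access frequency."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

def rebalance(block_stats):
    """Map block ids to target tiers; hosts see no change (the move is transparent)."""
    return {block: place_block(freq) for block, freq in block_stats.items()}
```

In a real array this relocation runs continuously in the background; the sketch captures only the placement decision itself.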
Figure 11 illustrates the overall GCNet (Footnote 3) and Data Centre Network (DCN) high-level architectural
components and position within the SSC DCs in terms of the major architecture blocks, DC
components and their relationship. The DCN is the foundation for all DC services, and
provides the transport infrastructure and connectivity between all components within the DC
(intra-DCN), security services and interconnectivity to external networks. The inter-DCN is an
overlay network that leverages GCNet to provide connectivity between facilities.
The DCN architecture described within this Reference Architecture Document (RAD) relates to
the virtual networking within the compute infrastructure and access layer switching included
with the CI. Each CI (compute, access, storage) architecture includes the physical layer 2
access switches required to connect all the compute and storage components and virtualized
networking and security. These access switches are then connected to the upstream DCN
infrastructure for layer 3 connectivity, advanced services (e.g. Application Delivery Controller
(ADC), external firewalls, etc.). Switching components within the CIs provide connectivity
between compute components and storage services.
Component Identifier | Component Name | Component Description
C4.1 | Core Connectivity | …
C4.2.n | Perimeter Security | …
A large portion of the network and security segregation will be performed within the
hypervisors. All inter-server communications will be achieved through virtual networking and
firewalls to restrict unauthorized flows. Unlike traditional deployments that have all security
performed on physical devices, virtual networking and firewalls will allow filtering to be
implemented on flows between Virtual Machines (VMs) within the hypervisor. Three
deployment models will be used.
Virtual Deployment Model (Default): fully virtualized, where all components are
deployed within the hypervisor. The DC architecture is developed to accommodate
primarily virtualized workloads. Under the virtualized model, network and security
separation of workloads is contained within the hypervisor. This allows servers within
different zones to communicate without leaving the compute layer; provides a single
communication point for users' access to devices within the Public Access Zone (PAZ) or
Operational Zone (OZ); and enables restricted access to backend application and
database servers. The presentation server will respond to user requests through an
interface that provides network access either through a firewall or directly. All backend
communication between the presentation application and data layers will be achieved
through private networks secured by security devices.
Hybrid Deployment Model: workloads are both virtual and physical. The hybrid
approach will leverage a combination of the above options to accommodate solutions
developed with both virtual and physical workloads.
Text Version and Expanded View of Figure 12: Platform Architecture Model
DCs will be deployed to provide both HA and DR solutions to the end systems that are
deployed within. The default model defined to support this goal is referred to as a 2+1
Availability Strategy. The deployment of this 2+1 Availability Strategy will be
accomplished through the operation of two DCs within a region forming a HA pair ("Intra-Region HA"), with one DC outside the region providing DR ("Out-of-Region DR"). The
geographical limit placed upon intra-regional DC pairs is established by technological
constraints on synchronous data replication and application response time latency. Because of
this, another availability strategy will also be utilized where HA is provided within one DC
("Intra-DC HA"), with one DC outside the region providing DR. There is no practical
geographical limit placed on inter-regional DCs.
Intra-Region and Intra-DC HA design is driven by extremely stringent service recovery time
objectives and data recovery point objectives, whereas inter-region DR design is driven more
by survivability of mission-critical applications in case of large regional disaster situations
rather than individual DC outages.
As illustrated in Figure 13, three generic service level profiles will be available:
1. Standard, where services will be provided through a single DC, with local system
redundancies built in, but no advanced HA capabilities (e.g. synchronous data
replication);
2. Enhanced, where services will be provided through two DCs within a region (default) or
one DC (optional) with advanced HA capabilities (e.g. synchronous data replication);
3. Premium, where services are deployed with an out of region DR capability over and
above the "Enhanced" capability.
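The three profiles above differ only in how many DCs host a workload and which availability capabilities apply, which can be summarized in a small lookup. The field names below are illustrative assumptions, not SSC service catalogue definitions.

```python
# Hedged sketch of the three service level profiles described above.
# Field names are illustrative assumptions, not SSC catalogue definitions.

PROFILES = {
    "Standard": {"dcs": 1, "advanced_ha": False, "out_of_region_dr": False},
    # Enhanced defaults to two DCs in a region; a single-DC option also exists.
    "Enhanced": {"dcs": 2, "advanced_ha": True, "out_of_region_dr": False},
    "Premium":  {"dcs": 3, "advanced_ha": True, "out_of_region_dr": True},
}

def out_of_region_dr(profile: str) -> bool:
    """Whether the given profile includes out-of-region disaster recovery."""
    return PROFILES[profile]["out_of_region_dr"]
```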
For the HA pairs, synchronous storage array-based replication will be used for workloads that
have a Recovery Point Objective (RPO) of zero. Synchronous storage array-based replication
provides the fastest and most reliable form of data replication known today. For greater
distance, asynchronous replication will be used. Asynchronous replication leverages IP as the
transport and has an RPO greater than zero, but is far less expensive than synchronous
storage array-based replication. Asynchronous replication can support virtually unlimited
distances.
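The trade-off between the two replication modes can be sketched as follows; the semantics encoded here (acknowledged synchronous writes always exist remotely, asynchronous exposure bounded by the replication lag) are assumptions for the example rather than a product specification.

```python
# Sketch of the replication trade-off described above (assumed semantics only):
# synchronous replication acknowledges a write only once the remote array has
# committed it (RPO = 0); asynchronous replication ships updates over IP with
# some lag, so writes made since the last shipped update are at risk.

def recovery_point_objective(mode: str, replication_lag_s: float = 0.0) -> float:
    """Worst-case seconds of data loss if the primary site is lost."""
    if mode == "synchronous":
        return 0.0  # every acknowledged write already exists at the remote site
    if mode == "asynchronous":
        return replication_lag_s  # bounded by the configured replication lag
    raise ValueError(f"unknown replication mode: {mode}")
```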
There are various implementation options to achieve HA/DR, including storage array-based
replication, database- or host-based replication, and application-based replication. Each
implementation option has its own advantages and disadvantages, specifically related to cost
and performance. For example, array-based replication is the most costly but provides the
greatest performance and availability, while host-based replication is lower in cost and
performance benefits compared to array-based replication. The various methodologies will be
described in further detail within the various DC Technical Architecture Documents (TADs). It
must be noted that, gradually in the coming years, applications, whether developed in-house
or commercially, will be developed with cloud-aware capabilities that assume an
"unreliable" underlying platform and infrastructure. In other words, applications will be able
to call up cloud resources directly from different sources to compensate for unreliable
underlying services. This should reduce the need to build in complex and costly HA
capabilities in the infrastructure over time.
Figure 14 illustrates the HA/DR configuration.
Figure 14: Conceptual High Availability/Disaster Recovery Architecture
Text Version of Figure 14: Conceptual High Availability/Disaster Recovery Architecture
New features, technologies and/or software images will go through the Infrastructure/Service
Development lifecycle to be deemed production ready. Once this has been completed, they
will be deployed into the development DC infrastructure, followed by production.
Development: this is used for development and basic testing of new applications and
features. The applications are deployed to confirm functionality before introducing
network segmentation and security.
Test: as with the development environment, the test environment allows the developer
to confirm the basic functionality of an application, but deployed in a production-like
model that includes, for example, network segmentation and security.
User Acceptance Testing (UAT): the UAT environment is where selected users test
the applications before they are deployed into production. Applications already in testing
are usually one release ahead of production.
Training (TRNG): the training environment allows programs to provide user training
on new applications or added features before being released.
The Application Development Environments constitute the steps or gates that applications
may pass through in order to be released into production. To optimize the use of SSC DC
infrastructure, engineering and support resources, as well as to enable consolidation and
rationalization of DCs, it is important that partner organizations agree on standardized
requirements for the type and number of environments. Without this, customized
environments will emerge that require more resources.
Both of these offerings will be provided from SSC's secure and robust Development Data
Centres. Each capability will include support services similar to those of SSC's other services,
but with an approach tailored to systems development (e.g. less stringent service-level
targets, an SDLC emphasis on technical support and professional services, etc.). SSC will use
the same SDE platform for the development, engineering and maintenance of its own service
platforms, and internal business and service management systems.
Figure 17: Standard Development Environment Lifecycle Process and Context
Text Version and Expanded View of Figure 17: Standard Development Environment Lifecycle
Process and Context
Security Class | Class Description
Technical | Security controls primarily implemented through security mechanisms contained in hardware, software and firmware components. For example, Identification and Authentication supports the unique identification of users and the authentication of these users when attempting to access information system resources.
Operational | Information system security controls primarily implemented through processes executed by people.
Management | Security controls that focus on the management of IT security and information system risks.
The specification of security requirements along with their implementation will be documented
in Service Definitions, Technical Architecture Documents (TADs), Detailed Design
Specifications documents and Build Books, as per SSC implementation of ITSG-33's
Information System Security Implementation Process (ISSIP). Refer
to Section 1.3 Document Map for further details.
Figure 19 identifies the containment area selection process that will be utilized to identify how
workloads will be deployed.
Figure 19: Containment Area Selection
Each containment area will be divided into zones based on CSEC's ITSG-22 Security
Guidance, Network Security Zoning and SSC's Secure Domain and Zones Architecture. Figure
20 illustrates the zones that will be deployed within each physical and virtual containment
area. Partners will be provided with a Public Access Zone (PAZ) for external-facing
presentation servers, an Operational Zone (OZ) for internal-facing presentation servers, an
Application Restricted Zone (APPRZ) for application servers (business logic) and Database
Restricted Zone (DBRZ) for servers housing application data. A Restricted Extranet Zone
(REZ) will also be available for presentation servers used by trusted business partners.
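The zone assignments above amount to a mapping from workload role to zone, which can be sketched directly. The role names below are hypothetical examples, not an SSC taxonomy.

```python
# Illustrative mapping of workload roles to the ITSG-22-style zones described
# above; role names are hypothetical examples, not an SSC taxonomy.

ZONE_BY_ROLE = {
    "external_presentation": "PAZ",    # Public Access Zone
    "internal_presentation": "OZ",     # Operational Zone
    "partner_presentation":  "REZ",    # Restricted Extranet Zone
    "application_server":    "APPRZ",  # Application Restricted Zone
    "database_server":       "DBRZ",   # Database Restricted Zone
}

def zone_for(role: str) -> str:
    """Return the network zone in which a workload role is deployed."""
    if role not in ZONE_BY_ROLE:
        raise ValueError(f"no zone defined for role: {role}")
    return ZONE_BY_ROLE[role]
```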
Figure 20: Physical and Virtual Containment Areas
The management containment area will host workloads that administer and support the
infrastructure components and partner organizations' workloads across all DCs. Unlike the
other containment areas that deploy zones based on types of workloads, the zones within this
containment area are based on function. All infrastructure components within the DC
containment areas will have system management or out-of-band management interfaces
connected to the Management Restricted Zone (MRZ). As depicted in Figure 20, the
management containment area will consist of the following zones:
Zone | Description
Management Access Zone (MAZ) | …
Management Restricted Zone (MRZ) | … intrusion prevention.
Services normally deployed on the server are now also able to be deployed at the vNIC.
By deploying services normally run in single instances on the server via a common service run
on the hypervisor, associated operational costs should decrease.
If one steps back and examines the fully converged virtual server world now available on the
x86 platform, in many ways it resembles a traditional mainframe architecture: a shared CPU,
memory and storage pool shared by a number of virtual machines isolated and connected to
each other by a hypervisor. In fact, the term hypervisor comes from mainframe computing in
the 1960s.
It is recognized that even with an improved security posture through virtualization, some
applications may be better served via deployment on physical networks, firewalls and servers.
However, it is also worth noting that such applications are typically not best suited for
deployment within a consolidated DC infrastructure.
Footnote 2
Does not include HPC Data Centre
Footnote 3
Refer to GCNet Reference Architecture Document and related Technical
Architecture Documents developed by Enterprise Architecture for the Telecom
Transformation Program.
Footnote 4
Statement of Sensitivity, Threat Assessment, Risk Assessment and Legislation,
etc., will define the security requirements
Footnote 5
This may be because the roots of ITSG-22 (ITSD-02) predate the widespread
use of virtualization.
allocation and, eventually, a fully automated orchestration capability that adjusts dynamically
when its monitoring indicates that live operation is failing to meet predefined orchestration
policies, workload profiles and performance objectives.
These new capabilities enable dramatic improvements in innovation, development cycles,
scalability, elasticity and self-service provisioning of infrastructure; reduced downtime;
better use of existing assets and licensing; and a reduction in the IT effort and cost required
to provision what traditionally would have required complex, labour-intensive engineering and changes.
The following illustrates service provisioning within the target-state DC services:
1. A partner organization subscribes to a DC service such as Application Hosting with an
Enhanced Tiered Service Package.
2. A Partner Infrastructure Lead responsible for managing the DC's cloud infrastructure
(for the partner organization's application teams) is trained in the use of the DC
service's cloud-based infrastructure and the overall Cloud Manager tool with its various
capabilities (service portal, orchestration guidelines, self-service provisioning,
monitoring and status, show-back, etc.).
3. The Partner Infrastructure Lead consults with the application specialists (and an SSC DC
technical liaison, if required) to define resource quotas (compute, storage, memory,
network, etc.), orchestration policies, workload profiles, and other aspects of
collaboration, responsibility, integration, etc.
4. Based on earlier requirements from the partner organization's application expert, the
Partner Infrastructure Lead uses the Cloud Manager to request the provisioning of the
application hosting instances, which are then automatically identified and committed to
the partner organization's Cloud Manager (the self-service provisioning request is
fulfilled automatically).
5. The partner organization loads its workloads onto the new application hosting instances,
and initiates its testing and acceptance before releasing the application to operations
and business users.
6. The Partner Infrastructure Lead monitors or is alerted to workload operational
performance and dynamic resource allocation to ensure the systems stay within
operational limits. Orchestration, dynamic resource allocation and performance
management activities are mostly automatic.
7. For more critical cases, however (depending on predefined criteria, policies and
profiles that flag certain actions as critical), the Partner Infrastructure Lead may be
required to approve some orchestration and dynamic resource allocation actions before
they are carried out.
8. The Partner Infrastructure Lead maintains communication with the application team to
ensure they are apprised of workload performance and issues, and to field any new
requests or changes that need to be provisioned or discussed. Note that partner
organizations require their own change management, not to provision resources but, for
example, to assess and decide where to use resources among competing application
team demands, to schedule their provisioning as part of a larger application release,
etc.
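The quota-checked, automatically fulfilled provisioning of steps 3 and 4 can be sketched as follows. The ResourceQuota, CloudTenant and provision names are illustrative assumptions, not the actual Cloud Manager interface, and a real fulfilment would create hosting instances rather than list entries.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceQuota:
    """Resource quota agreed between the partner organization and SSC (step 3)."""
    cpu_cores: int
    memory_gb: int
    storage_gb: int

@dataclass
class CloudTenant:
    """A partner organization's slice of the shared DC infrastructure."""
    quota: ResourceQuota
    used: ResourceQuota = field(default_factory=lambda: ResourceQuota(0, 0, 0))
    instances: list = field(default_factory=list)

def provision(tenant: CloudTenant, name: str, cpu: int, mem: int, disk: int) -> bool:
    """Fulfil a self-service request automatically if it fits within the quota (step 4)."""
    if (tenant.used.cpu_cores + cpu > tenant.quota.cpu_cores or
            tenant.used.memory_gb + mem > tenant.quota.memory_gb or
            tenant.used.storage_gb + disk > tenant.quota.storage_gb):
        return False  # request exceeds the agreed quota: not fulfilled automatically
    tenant.used.cpu_cores += cpu
    tenant.used.memory_gb += mem
    tenant.used.storage_gb += disk
    tenant.instances.append(name)  # instance committed to the tenant's Cloud Manager
    return True

# A hypothetical tenant with the quotas defined in step 3
tenant = CloudTenant(quota=ResourceQuota(cpu_cores=16, memory_gb=64, storage_gb=500))
```

A rejected request would typically be routed back to the Partner Infrastructure Lead (step 8) to renegotiate quotas or arbitrate among competing application team demands.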
As illustrated in Figure 5: Conceptual Business and Technology Platform, several features are
required for the successful implementation of this target capability.
5.1 Self-Service
Both traditional IT self-service and cloud self-service will be required by SSC partner
organizations and users of DC services. Self-service in the traditional IT service context
generally refers to the ability for users to access a service portal, where they can report
incidents or submit requests (usually from a request catalogue), and search for answers in an
FAQ or a knowledge base maintained by the Service Desk.
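A minimal sketch of the request-catalogue lookup such a portal might offer; the catalogue entries and the search_catalogue helper are hypothetical examples for illustration, not SSC's actual request catalogue.

```python
# Hypothetical request catalogue entries for a traditional IT self-service portal.
REQUEST_CATALOGUE = {
    "new-vm": "Provision a standard virtual machine",
    "password-reset": "Reset a user account password",
    "storage-increase": "Extend allocated storage for an existing service",
}

def search_catalogue(term: str) -> list:
    """Simple keyword search, standing in for the portal's catalogue/FAQ lookup."""
    term = term.lower()
    return sorted(key for key, desc in REQUEST_CATALOGUE.items()
                  if term in key.lower() or term in desc.lower())
```

Cloud self-service builds on the same portal concept but fulfils qualifying requests automatically, as in the provisioning walkthrough above.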
Ref / Capability:
PL-1: Access to Data. The architecture must:
PL-2: Service Levels
PL-3: Government of Canada Regulation and Legislation
Enterprise Requirements
The following enterprise requirements were developed in consultation with SSC partner
organizations and stakeholders.
Ref / Capability:
ER-1: IT Service Management. The architecture must support:
ER-2: Catalogue
ER-3: Service Monitoring/Reporting
ER-4: Information Lifecycle Management (ILM)
Ref ID / Capability:
PR-1: Standard Delivery
PR-2: Service On-Demand
PR-3: Security
PR-4: Lower CAPEX/OPEX
PR-5: Reduced Footprint
PR-6: Efficiency
PR-7: Scalability
PR-8: Elasticity
PR-9: High Availability/Business Continuity
PR-10
PR-11: High-Performance Computing
PR-12: Storage
PR-13: Data Access
PR-14: Mobility (access technologies)
PR-15: Reporting
PR-16: Performance Monitoring
Security Requirements
The following security requirements were developed in consultation with SSC partner
organizations and stakeholders.
Ref / Capability:
SR-1: Security. The architecture must support the following security requirements:
Technology Requirements
The following table identifies the high-level technology requirements that have been
generated through design efforts and consultations with partner organizations.
Ref ID / Capability:
TR-1: Multi-Tenancy
TR-2: Orchestration
TR-3: Security Separation
TR-4: Secured Environments (INTRAnet/INTERnet)
TR-5: Common Services
TR-6: Disaster Recovery
TR-7: Responsiveness
TR-8: Virtualization
TR-9: Consolidation
TR-10: Standardization
TR-11: Workload Mobility
TR-12: Single ITSM/ESM Capability
Appendix B: References
The following documents form part of the reference material library that has been utilized for
the creation of the Data Centre (DC) Reference Architecture Document (RAD).
Ref ID / Details:
1: Shared Services Canada, Data Centre Services Target Services Data Sheets
10: Shared Services Canada, IT Shared Services Security Domain & Zones Architecture
11
12
13
14
15
16
17
Acronym or Term: Definition
AD
ADC
ADO.NET: a set of classes that expose data access services for .NET Framework programmers
ALM
App-RZ
ASP.NET
AV
B2G: Business to Government
C2G: Citizen to Government
CESG: academia.
CI
CIFS
CNA
COTS
CPU
DB-RZ: Database Restricted Zone, a network security zone for sensitive and/or critical data stores
DCN: Data Centre Network, the network and security infrastructure deployed within the data centre (DC)
DHCP
DNS
DR
EJB
FCoE: Fibre Channel over Ethernet, a storage protocol that enables Fibre Channel communications to run directly over Ethernet
FCP
FICON
G2G: Government to Government
GCNet
GPP
HA: High Availability, provides DC service protection within and across DCs in the same geographic region, using various techniques such as automated failover, clustering and synchronous replication at the network, platform and storage layers.
HBA
HIDPS
HPC
HSM
IaaS
IDE
IDPS
IDS
Internet POP
Inter-Region HA
IOPS
IP PBX
IPAM
IPC
IRC2
iSCSI
ITIL
ITSG
ITSM
J2EE: Java 2 Platform, Enterprise Edition, a Java platform designed for the mainframe-scale computing typical of large enterprises.
JDBC: a Java-based data access technology that defines how a client may access a database
JSP
LAMP: Linux, Apache, MySQL, PHP, an open-source web development platform that uses Linux as the operating system, Apache as the web server, MySQL as the relational database management system and PHP as the object-oriented scripting language.
LUNs
NAS
NCR
NFS
NIC
NIST
NTP
ODBC
OPI
OS
Out-of-Region DR
OZ
PaaS
PAZ: Public Access Zone, a network security zone that controls traffic between a public zone and either an Operational Zone or a Restricted Zone
PHP
PMI
PZ: Public Zone, a network security zone that is unsecured and outside the control of the GC. The best example of a PZ is the Internet.
RACI
REZ: Restricted Extranet Zone, a network security zone for connecting with trusted partners.
RPO
RTO: Recovery Time Objective, the duration of time and a service level within which a business process must be restored after a disaster
RZ
SaaS
SAN
SAS
SATA
SDE
SDK
Service Catalogue: partners, including description and types of services, supported SLAs, and who can view or use the services.
SNAP
SPP
SRM
SSD
TCP/IP
UAT
UPS
VDI
VM: Virtual Machine
VoIP: Voice over IP
VPN
WAN
Workload Container