The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This Reference Architecture is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2013 Microsoft Corporation. All rights reserved.

Smart Energy Reference Architecture V2.0

Contents


Introduction ..... 15
An Update ..... 16
Smart Grid/Smart Energy Ecosystem Developments Worldwide ..... 17
New Developments in the Smart Energy Reference Architecture ..... 21
1.0 Evolution of the Power System and the Utility Industry ..... 26
1.1 Grid Standards Evolution Offer Example ..... 27
1.2 The Build-out of the Smart Energy Ecosystem ..... 29
1.2.3 Innovation in the Smart Energy Ecosystem: Virtual Power Plants ..... 31
1.3 Participants within the Smart Energy Ecosystem ..... 32
1.4 Collaboration within the Ecosystem ..... 32
1.4.1 European Union Smart Ecosystem Standards for Collaboration ..... 34
1.4.1.1 EU Smart Metering Roll-out Preparations ..... 34
1.4.1.2 Common Functional Requirements for Smart Meters ..... 36
1.4.1.3 Standards for Smart Grids ..... 36
1.4.1.4 Mandates for Electric Vehicles ..... 37
1.4.1.5 Requirements for communications standards in Smart Metering Systems ..... 38
2.0 Changing Demands on the Utility Business ..... 38
2.1 Changing demands on generation companies ..... 39
2.2 Changing demands on transmission companies ..... 40
2.3 Changing demands on distribution companies ..... 40
2.4 Changing demands on retailers ..... 41
2.5 Energy resources, constraints and challenges ..... 41
2.6 Business Factors ..... 42
2.6.1 Utility Workforce Optimization ..... 42
2.6.2 Workforce Demographic Changes ..... 42
2.6.3 Equipment Collaboration Optimization ..... 43
2.6.4 Outsourcing and Contracting Optimization ..... 43
2.6.5 Workforce Mobilization ..... 43
2.7 Technology Enablers ..... 43
2.7.1 Advanced Sensors and Web Integration ..... 44
2.7.2 AMI and Communication Networks ..... 44
2.7.3 New Computing Paradigms ..... 46
3.0 Architecture ..... 48
3.1 Approach ..... 49
3.1.1 Performance Oriented Infrastructure ..... 49
3.1.2 Holistic Life-user Experience ..... 50
3.1.3 Operations Optimization ..... 51
3.1.4 Partner-enabling Rich Applications Platform ..... 51
3.1.5 Interoperability ..... 52
3.2 User Experience (UX) ..... 52
3.2.1 Visualization ..... 53
3.2.2 Analysis ..... 55
3.2.3 Business Intelligence ..... 56
3.2.4 Reporting ..... 56
3.3 Collaboration ..... 58
3.3.1 Collaboration ..... 58
3.3.2 Orchestration ..... 59
3.3.3 Notification Infrastructure ..... 59
3.3.4 Chain of Command Notification plus Workflow ..... 60
3.4 Information ..... 60
3.4.1 Standards and Domain Models ..... 61
3.4.2 International Electrotechnical Commission (IEC) Common Information Model ..... 63
3.4.2.1 Models and Tools of the CIM ..... 65
3.4.2.2 Extending CIM ..... 65
3.4.2.3 Enterprise Data Models ..... 67
3.4.2.4 Industry Data Models ..... 68
3.4.2.5 Data Models for Enterprise Data Warehouses ..... 70
3.4.3 Master Data Management ..... 71
3.4.3.1 CIM for Master Data Management ..... 71
3.4.3.2 Master Data Management for Network Operations ..... 73
3.4.3.3 Master Data Management for Network Extension Planning ..... 74
3.4.4 CIM for Topology Data Exchange ..... 74
3.4.5 Measurement Data ..... 75
3.4.5.1 Historians ..... 75
3.4.5.2 Operations Databases ..... 76
3.4.5.3 Data Warehouses ..... 76
3.4.5.4 Spatial Data Considerations ..... 80
3.4.6 Interoperability ..... 81
3.4.6.1 Harmonizing CIM and IEC 61850 ..... 83
3.4.7 Messages and Interfaces ..... 83
3.4.8 Big Data ..... 87
3.4.8.1 Advanced Analytics for Big Data ..... 88
3.4.8.2 Analyzing Customer Behavior Using Big Data ..... 88
3.4.8.3 Unstructured Data in Analytics ..... 88
3.4.8.4 Additional Types of Big Data Analytics ..... 89
3.4.9 Event Clouds and Complex Event Processing ..... 90
3.4.10 Moving from Raw Data ..... 92
3.5 Application Architecture ..... 92
3.6 Deployment Strategies ..... 93
3.6.1 Deployment Migration ..... 94
3.6.2 Deployment Considerations ..... 94
3.6.3 Deployment Roadmaps ..... 95
3.7 Integration ..... 95
3.7.1 Integration Patterns ..... 98
3.7.2 Service-Oriented Architecture ..... 98
3.7.3 Enterprise Service Bus in SOAs ..... 100
3.7.4 Applications ..... 101
3.7.5 Power Systems Operations Control Center (PSOCC) ..... 102
3.7.6 Business-to-business Integration ..... 105
3.7.7 Customer Integration ..... 106
3.7.8 Power System Grid ..... 108
3.7.8.1 Intelligent End Devices ..... 110
3.7.8.2 Distributed Network Protocol ..... 110
3.7.8.3 Inter-Control Center Protocol ..... 111
3.7.8.4 IEEE C37.118 ..... 111
3.7.8.5 IEEE 1547.3 ..... 111
3.7.8.6 ANSI C12 ..... 111
3.7.8.7 ZigBee and IPSO ..... 111
3.7.8.8 Open Automated Demand Response ..... 112
3.7.8.9 BACnet ..... 112
3.7.8.10 LonWorks ..... 112
3.7.9 Common Services ..... 113
3.7.10 Cloud Services ..... 113
3.7.10.1 Attributes of Cloud Deployments ..... 114
3.7.10.2 Levels of Cloud Deployments ..... 114
3.7.10.3 Movement among Levels of Cloud Deployments ..... 115
3.7.10.4 Challenges to Cloud Adaptations ..... 116
3.7.10.4.1 Security ..... 116
3.7.10.4.2 Privacy and Data Sovereignty ..... 117
3.7.10.4.3 Regulatory Compliance ..... 117
3.7.10.4.4 Performance ..... 118
3.7.10.4.5 Availability ..... 118
3.7.10.4.6 Interoperability ..... 118
3.7.10.4.7 Integration ..... 118
3.7.10.4.8 Configurability and Customizability ..... 119
3.7.10.4.9 Hard to move back ..... 119
3.7.10.4.10 Expense versus Capital Investment ..... 119
3.7.11 Application to Application Integration Using CIM ..... 119
3.7.11.1 Specifying the Context Model ..... 120
3.7.11.2 Specifying Message Models ..... 122
3.7.11.3 Specifying Message Schemas ..... 123
3.7.11.4 Specifying Service Interfaces ..... 123
3.7.11.5 Specifying Service Orchestration ..... 123
3.7.11.6 Repository for Artifacts ..... 123
3.8 Security ..... 124
3.8.1 Physical Security ..... 124
3.8.2 Operational Security ..... 124
3.8.3 Cybersecurity ..... 124
3.8.3.1 SERA First Principles for Cybersecurity ..... 126
3.8.3.2 Effective Security and the Microsoft Secure Development Lifecycle ..... 126
3.8.4 Industry Guidance for Security ..... 127
3.8.4.1 The Microsoft Secure Development Lifecycle ..... 127
3.8.4.1.1 NIST IR 7628 ..... 127
3.8.4.1.2 IEC 62443 ..... 127
3.8.4.1.3 IEC/ISO 27034: Application Security ..... 127
3.8.4.1.4 EC M/490 ..... 128
3.8.4.2 Industry Guidance for Risk Management ..... 128
3.8.5 Regulatory Guidance for Cybersecurity ..... 129
3.8.5.1 NERC CIPs ICT Systems Considerations ..... 129
3.8.5.1.1 NERC CIPs for Functional Security Zones ..... 130
3.8.5.1.2 NERC CIPs for Bulk Power Systems ..... 130
3.8.5.1.3 NERC CIPs for Distribution Level Systems ..... 130
3.8.5.1.4 NERC CIPs for Device Management ..... 130
3.8.5.1.5 NERC CIPs Grey Areas ..... 131
3.8.6 Preparing with a Holistic Security Strategy ..... 132
3.8.6.1 Setting Priorities for Security Strategy ..... 132
3.8.6.2 Applying SDL ..... 133
3.8.6.3 Independent Solution Vendor Application of SDL ..... 133
3.9 European Union Security and Smart Ecosystem Standards ..... 134
3.9.1 EU Smart Metering Roll-out Preparations ..... 134
3.9.2 Common Functional Requirements for Smart Meters ..... 135
3.9.3 Standards for Smart Grids ..... 136
3.9.4 Mandates for Electric Vehicles ..... 137
3.9.5 Requirements for communications standards in Smart Metering Systems ..... 138
4.0 SERA Alignment Guidelines and Capability Maturity Model ..... 139
4.1 SERA Alignment Considerations for the Utility ..... 141
4.1.1 User Experience and Information Composition ..... 142
4.1.2 Service-Oriented Architecture and Business Process Management ..... 143
4.1.3 Business Intelligence ..... 145
4.1.4 Data Integration and Enterprise-wide Data Mapping/Flow ..... 146
4.1.5 Master Data & Enterprise-wide Modeling ..... 147
4.1.6 Enterprise-wide Eventing and Complex Event Processing (CEP) ..... 148
4.1.7 Security ..... 149
4.1.8 Governance, Risk and Compliance ..... 151
4.1.9 SERA Futures Guidelines and Software + Services ..... 152
4.2 Microsoft SERA Capability Maturity Model ..... 154
4.2.1 SERA Maturity Assessment Sliders, Solution Assessment Sliders and SERA Building Blocks ..... 155
4.2.1.1 User Experience and Devices ..... 155
4.2.1.2 Cross-enterprise Integration ..... 156
4.2.1.3 Security ..... 156
4.2.1.4 SERA IO ..... 156
4.2.1.5 Smart Grid Domains and SERA Business Capabilities ..... 157
4.2.1.6 Consumer Equity SERA Assessment ..... 157
4.2.1.7 Generation SERA Assessment ..... 157
4.2.2 Translating SERA Assessment Slider Results into an Actionable Program ..... 158
4.3 SERA ISV and SI Partner Alignment ..... 158
4.3.1 User Experience (UX) and Information Composition (Web 2.0) ..... 158
4.3.2 Business Intelligence ..... 159
4.3.3 Business Processes ..... 159
4.3.4 Data Integration and Enterprise-wide Data Mapping/Flow ..... 159
4.3.5 Enterprise-wide Eventing & Complex Event Processing (CEP) ..... 159
4.3.6 Master Data & Enterprise-wide Modeling ..... 159
4.3.7 Security ..... 159
4.3.8 Governance, Risk and Compliance ..... 160
4.4 SERA Futures Guidelines ..... 160
4.4.1 Software + Services and Making New Developments Location Agnostic ..... 160
4.5 Microsoft Infrastructure Optimization Models ..... 161
4.5.1 Core Infrastructure Optimization ..... 161
4.5.2 Business Productivity Infrastructure Optimization ..... 162
4.6 Microsoft Consulting Services Enterprise Architecture and Strategy ..... 163
5.0 Microsoft Technology Stack ..... 164
5.1 Stack Integration Overview ..... 164
5.2 Capability-based Information Architecture ..... 166
5.2.1 Event-driven Enterprise Services Bus and BizTalk Server ..... 167
5.2.2 Data Integration Architecture ..... 168
5.3 Microsoft Cloud Technologies and Services ..... 170
5.3.1 Microsoft Global Foundation Services (GFS) ..... 172
5.3.2 Microsoft Cloud Operating Systems ..... 172
5.3.2.1 Windows Server 2012 ..... 173
5.3.2.1.1 Windows Server Virtualization ..... 173
5.3.2.1.2 Windows Server Networking ..... 174
5.3.2.1.3 Windows Server Network Virtualization ..... 175
5.3.2.1.4 Windows Server Identity and Access ..... 175
5.3.2.1.5 Windows Server Storage ..... 176
5.3.2.1.6 Windows Server Manageability and Automation ..... 177
5.3.2.1.7 Windows Server Web and Application Platform ..... 177
5.3.2.1.8 Windows Server Virtual Desktop Infrastructure (VDI) ..... 177
5.3.2.1.9 Windows Server Active Directory ..... 177
5.3.2.2 Windows Azure ..... 178
5.3.2.2.1 Windows Azure Virtual Machines ..... 179
5.3.2.2.2 Windows Azure Data Management ..... 179
5.3.2.2.3 Windows Azure Business Analytics ..... 180
5.3.2.2.4 Windows Azure Active Directory Identity and Security ..... 180
5.3.2.2.5 Windows Azure Networking ..... 181
5.3.2.2.6 Windows Azure HPC Scheduler ..... 183
5.3.2.2.7 Windows Azure Standards-Based Interoperability Surfaces ..... 183
5.3.2.2.8 Windows Azure Service Bus Notification Hubs ..... 185
5.4 Collaboration Services ..... 185
5.4.1 Windows Azure Services Platform ..... 185
5.4.2 Microsoft Office SharePoint Server ..... 187
5.5 Business Software ..... 190
5.5.1 Dynamics CRM ..... 190
5.5.2 Dynamics xRM ..... 191
5.5.3 Microsoft Dynamics AX ..... 192
5.5.4 Microsoft Dynamics Partners ..... 192
5.6 Process Integration ..... 194
5.6.1 Service Oriented Architecture ..... 196
5.6.2 Enterprise Service Bus using BizTalk Server ..... 197
5.6.2.1 Business Activity Monitoring Using BizTalk ..... 197
5.6.2.2 Multi-Platform Adapters ..... 198
5.6.2.3 The BTC CIM Accelerator for Microsoft BizTalk ..... 198
5.6.2.4 Additional Microsoft BizTalk Capabilities ..... 199
5.6.3 The IP-Network as Bus for Process Integration ..... 202
5.6.4 Service Bus Process Integration ..... 203
5.7 Databases, Data Warehouses and Data Management ..... 206
5.7.1 Microsoft SQL Server ..... 206
5.7.1.1 SQL Server Database Engine ..... 207
5.7.1.2 SQL Server Management Studio ..... 208
5.7.1.3 SQL Analysis Services ..... 208
5.7.1.4 SQL xVelocity ..... 208
5.7.1.5 SQL Server Integration Services ..... 208
5.7.1.6 SQL Server Replication ..... 208
5.7.1.7 SQL Server High Availability ..... 208
5.7.1.8 SQL Reporting Services ..... 209
5.7.1.9 SQL Service Broker ..... 209
5.7.1.10 SQL BI Semantic Model ..... 209
5.7.1.11 Master Data Services ..... 209
5.7.1.12 Data Quality Services ..... 209
5.7.2 Parallel Data Warehouse ..... 209
5.7.3 Microsoft GeoFlow Add-in to Excel ..... 212
5.7.4 ADRM Software with SQL Server ..... 212
5.8 Business Intelligence ..... 214
5.8.1 Balancing Enterprise Data Warehouses and LOB Needs ..... 216
5.9 Microsoft Big Data Strategy ..... 217
5.9.1 Complex Event Processing ..... 218
5.10 Mobility ..... 220
5.11 Microsoft Security Stack: Management and Security ..... 221
5.11.1 Systems Management ..... 222
5.11.1.1 System Center ..... 222
5.11.1.2 System Center for the Smart Energy Ecosystem ..... 223
5.11.1.3 General System Center Capabilities and Components ..... 223
5.11.1.3.1 App Controller ..... 224
5.11.1.3.2 Data Protection Manager ..... 224
5.11.1.3.3 Operations Manager ..... 224
5.11.1.3.4 Configuration Manager ..... 225
5.11.1.3.5 Endpoint Protection ..... 225
5.11.1.3.6 Orchestrator ..... 225
5.11.1.3.7 Virtual Machine Manager ..... 226
5.11.1.4 System Center Regulatory Compliance Capabilities and Components ..... 226
5.11.2 Windows Intune ..... 226
5.11.3 Microsoft Consulting Services ..... 227
5.11.4 Other Microsoft Security Tools ..... 227
5.12 End-to-End Trust ..... 228
5.12.1 End-to-End Trust in SERA ..... 228
5.12.2 Rights Management Services ..... 229
5.12.3 Certificate Services ..... 229
5.12.4 Domain Services ..... 229
5.12.5 Federation Services ..... 230
5.12.6 Claims-Based Applications ..... 231
5.12.7 Security Token Service ..... 232
5.12.8 Lightweight Directory Services ..... 233
5.12.9 Trusted Boot ..... 234
5.12.10 BitLocker ..... 234
5.12.11 Trusted Platform Module Management ..... 234
5.12.12 AppLocker ..... 235
5.12.13 Comprehensive Access Security ..... 235
5.12.14 Identity Lifecycle Manager ..... 236
5.12.15 Network Policy and Access Services ..... 237
5.12.16 IPsec ..... 238
5.12.17 Direct Access Connections ..... 239
5.12.18 Azure Cloud Security ..... 240
5.12.18.1 Windows Azure Trust Center Security ..... 240
5.12.19 Secure Development Lifecycle ..... 240
5.12.20 Secure Development Lifecycle Optimization Model ..... 242
5.12.21 Cybersecurity Maturity Model ..... 243
5.12.22 Secure Operations ..... 245
5.13 Privacy ..... 245
5.13.1 Location of Customer Data ..... 245
5.13.2 Data Protection ..... 246
5.13.3 Privacy-by-design (PbD) and Privacy Enhancing Technologies ..... 247
5.14 Compliance ..... 248
5.14.1 Windows Azure Trust Center Compliance ..... 248
5.14.2 ISO/IEC 27001:2005 Audit and Certification ..... 248
5.14.3 SSAE 16/ISAE 3402 Attestation ..... 249
A.1 APPENDIX: Non-Microsoft Guidelines and Models ..... 251
A1.1 U.S. Department of Energy Risk Management Process Guideline ..... 251
A1.2 The Carnegie Mellon Smart Grid Maturity Model ..... 252
A1.2.1 Strategy, Management, Regulatory ..... 252
A1.2.2 Organization & Structure ..... 252
A1.2.3 Grid Operations ..... 252
A1.2.4 Work and Asset Management ..... 253
A1.2.5 Technology ..... 253
A1.2.6 Customer Management and Experience ..... 253
A1.2.7 Value Chain Integration ..... 253
A1.2.8 Societal & Environmental ..... 253
A1.3 The Carnegie Mellon Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2) and Risk Management Guidelines ..... 253
A1.4 The Open Group Architecture Framework (TOGAF) ..... 254
A1.5 Innovation Value Institute Capability Maturity Framework ..... 255

The Microsoft Smart Energy Reference Architecture


Introduction
The structure, engineering, and objectives of the world's power systems have been undergoing dramatic rethinking and significant change over the last several years due to new driving forces. Concerns about climate change abatement, the introduction of novel market participants such as plug-in hybrid electric vehicles, and ever-increasing worldwide demand for energy are combining to drive development of a smart energy ecosystem that integrates the generating plant with its grid and end users in a holistic and, aspiringly, automated way.

Many observers believe that the transformation to a smart energy ecosystem could change societies on the same scale as the inception of electricity distribution itself. If true, and there's no reason to doubt it, this dynamic will affect every part of the power utility industry and how companies operate. For our part, Microsoft and its partners have focused on enabling the technology innovation and advancement needed to create such ecosystems, whether they reside in Moscow or Timbuktu, Buenos Aires or Shanghai, Kansas City or Melbourne. Each will have its own distinct needs, requiring the core attributes of flexibility and security, ease of use, and unwavering reliability.

This Smart Energy Reference Architecture, now in its second iteration, reflects those attributes as well as the hard-won lessons of the industry over the last two years. It adds to our first version with new technologies, new concepts, and new protocols. It examines the state of the industry to date and which paths might make the most sense to pursue going forward. And it continues to offer the much-needed guidance our partners and customers require to think about their smart energy ecosystems, no matter what stage of development they are in. Join us again as we venture forward into tinkering with the largest machine ever known to mankind: the power systems of the world.


An Update
Since the release of its first version in October 2009, the Microsoft Smart Energy Reference Architecture has traveled the world, with presentations to members of the worldwide utility community on virtually every continent and in more countries than we could count. In that time, we humbly received many accolades for SERA's intent and content, but one from John Shaw, the CIO and head of information technology at Mainstream Renewable Power, summed up its value most succinctly.

Figure 1: Mainstream Renewable Power CIO John Shaw summarized the value of SERA to his new company's ICT strategy.

We bring this example to your attention because it is typical of the feedback we've heard, though not usually as succinct. As a greenfield start-up, Mainstream had the opportunity to leverage SERA's integrated offerings through several Microsoft partners from its inception, without the headaches of adjusting to legacy systems. Our SERA work with the company has been instructive and gratifying; we strive to provide you with a similar experience through this second version of SERA.

Of particular importance to SERA v2's development throughout 2011 and 2012 was the valuable input of the SERA Advisory Council, a group that worked with us to articulate a more advanced version of the Smart Energy Reference Architecture through the inclusion of new considerations affecting utility company IT/OT decisions. Their contributions and content revisions are reflected throughout this document. The section "New Developments in the Smart Energy Reference Architecture" reflects the specific topics they wanted us to address with this iteration, and we feel the document is much advanced because of this input. We wish to offer our sincere thanks and appreciation to all SERA Advisory Council members who have participated with us and contributed to this new version.

Smart Grid/Smart Energy Ecosystem Developments Worldwide


In recent months, we've sensed a bit of a decline in the hype factor around the smart grid, which in our view is not a bad thing. Hype creates unrealistic expectations, and it has been important that the industry stay grounded in reality as to what the smart grid can achieve. The last few years have been noteworthy for the number of pilot projects and benchmarking studies that have occurred to prove the ability of technology to accomplish the core reliability mission implicit in any implementation of the smart energy economy.

Figure 2: Members of the Smart Grid Advisory Council. Also special thanks to Doug Houseman, Enernex, for his contributions.

Here's what we do know from various snapshots of measures:

• An Innovation Observatory report believes that 80 percent of worldwide smart grid investments will concentrate in 10 countries in the run-up to 2030. The United States will lead smart grid spending for several years, with a $60 billion total expenditure, but China's investment will reach $99 billion by 2030. India, Brazil, France, Germany, Spain, and the United Kingdom will all have leading systems, as will Japan and South Korea.1

• The number of utility companies implementing smart grid technologies increased 25% from 2010 to 2011, according to a Microsoft/OSIsoft Worldwide Utility Industry Survey of more than 200 worldwide utility executives. Another 28% are planning implementations, while 24% have not started any smart grid adaptation. The survey showed that more utility companies are adding new devices to the grid and incorporating new data sets into their operational capabilities. However, many are encountering significant interoperability and integration challenges.2

1 "Smart Grid spend to be concentrated in 10 countries," Intelligent Utility, March/April 2012, p. 8.


• The European Network of Transmission System Operators for Electricity (ENTSO-E) put Regulation (EC) 714/2009 into force on 3 March 2011, calling for ENTSO-E to undertake the drafting of network codes on the basis of framework guidelines adopted by ACER. Due to their importance for effective system operation, market integration, and system development, network codes will cover topic areas related to operations, development, and markets. Together, these codes have the potential to become the framework of consistent, detailed rules needed for the secure operation of European power systems and for the implementation of a liberalized, Europe-wide electricity market.

• The European Technology Platform for Electricity Networks of the Future held its fourth General Assembly in early 2012, setting forth its vision of a smart energy system characterized by optimal flexibility in demand and generation, including full customer participation, integration of solutions for energy management, free choice in services and providers, a ubiquitous energy internet, and new businesses and markets.3

• The European Electricity Grid Initiative (EEGI), an industrial initiative of the European Strategic Energy Technology Plan, aims to enable the distribution of up to 35% of electricity from dispersed and concentrated renewable sources by 2020. The EEGI compiled a list of select national projects, their achieved and expected results, and their allocation to functional projects. Over 203 projects from 22 European countries were identified. Many projects are slated to complete in 2012.4

Figure 3: Smart Grids projects in Europe mapped to EEGI functional projects. Darker colors indicate more projects running during that year.
2 "Smart grid implementation rises 25 percent in the past year, according to utilities industry survey," Microsoft/OSIsoft Worldwide Utility Industry Survey, Jan. 24, 2012.
3 Smart Grids European Technology Platform,
4 "Mapping and Gap Analysis of current European Smart Grids Projects," Report by the EEGI Member States Initiative: A pathway toward functional projects for distribution grids, April 2012, p. 3.


• A survey of 50 initial participants from US and Canadian utilities by the Newton-Evans Research Company suggests that power utilities are likely to upgrade, retrofit, and buy energy management systems (EMS), supervisory control and data acquisition (SCADA) systems, distribution management systems (DMS), and outage management systems (OMS) in the next three years. Plans for procurements of new DMS and OMS are significant, with more than one quarter planning to purchase a new or replacement DMS and nearly one in five planning OMS procurements. One-third of these early respondents showed interest in combining DMS and OMS on a common platform, but several operations officials looking into such system combinations voiced cyber security concerns.5

• During a 2012 briefing to lawmakers in Washington, DC, FERC Chairman Jon Wellinghoff identified three primary trends that will affect US utilities in the coming years. Wellinghoff noted that electricity usage is growing at an anemic 1% due to ever-greater energy efficiency efforts, a trend that will likely continue and should compel utilities to identify other revenue streams. He also observed how the internet of things is forcing utilities to dedicate ever more attention to bringing IT skills, concepts, and systems into their operational technologies. Finally, Wellinghoff forecast ever-increasing uptake of distributed generation using renewable power sources by non-utility entities. This could amplify the first trend he noted, but likewise offer utilities additional business opportunities.6

• The growth and transformation of the smart grid is increasing access to critical infrastructure. The number of cyber-attacks on U.S. critical infrastructure increased 52% in 2012, according to a report by the U.S. Department of Homeland Security (DHS). Attacks on gas pipeline companies were successful and could facilitate remote unauthorized operations. Energy sector companies suffered 82 attacks, while water sector companies reported 29 attacks.7 The National Institute of Standards and Technology is therefore focusing on cybersecurity in four primary areas of concern: protecting data and information in a mobile environment across different use cases; continuous monitoring of information security, to enable the exchange of security data and leverage security automation; identity management through standardized work and personal identity verification cards; and completion of a new national standard for a cryptographic hash algorithm, SHA-3.8

• U.S. President Barack Obama issued an Executive Order directing better cybersecurity information sharing and creation of a cybersecurity framework, with directions to DHS to encourage development and adoption.9

• China has announced an aggressive framework for smart grid deployment and is supporting it with billions of dollars. In 2010, the North China Power Grid Company completed a smart community demonstration project consisting of 655 households and 11 buildings, including a

5 "Preliminary findings point to solid growth for EMS, SCADA, DMS and OMS during 2013-2015 among North American Electric Power utilities."
6 "Get Ready! FERC spotlights 3 major challenges for utilities," by Jesse Berst, Smart Grid News, Dec. 7, 2012.
7 "Hacker hits on U.S. power and nuclear targets spiked in 2012," by David Goldman, CNN Money, January 9, 2013.
8 "Inside NIST's cybersecurity strategy," by Nick Wakeman, Regulatory Cyber Security, March 27, 2012.
9 "Cyber Security and Getting the President's Ear," by Prudence Parks, EnergyBiz, March 27, 2013.


low-voltage electricity network, power usage information collection, an interactive service platform, smart household installment, electric vehicle charging facilities, distributed power generation and energy storage, automatic electricity distribution, and an integrated network of internet, television, and telephone, as well as a showcase of the smart community technologies.10

• The International Electrotechnical Commission (IEC) created a comprehensive framework of common technical standards for the smart grid. The body developed five core standards for any smart grid implementation (a schematic illustration of the kind of CIM payload two of these standards define appears at the end of this section):
o IEC/TR 62357 Framework of power automation standards and description of the SOA (Service Oriented Architecture) concept
o IEC 61850 Substation automation and beyond
o IEC 61970 Energy Management System CIM and GID definitions
o IEC 61968 Distribution Management System CIM and CIS definitions
o IEC 62351 Security11

• The Institute of Electrical and Electronics Engineers (IEEE) in September 2011 published its Draft Guide for Smart Grid Interoperability of Energy Technology and Information Technology Operation with the Electric Power System (EPS), and End-Use Applications and Loads, in a standards publication called IEEE P2030. The guide provides a knowledge base addressing terminology, characteristics, functional performance and evaluation criteria, and the application of engineering principles for smart grid interoperability of the electric power system with end-use applications and loads. It also discusses alternate approaches to good practices for the smart grid.12

• The National Institute of Standards and Technology in early 2012 released the NIST Framework and Roadmap for Smart Grid Interoperability Standards, Release 2.0, laying out a plan for transforming the US electric power system into an interoperable smart grid. The release included a new chapter on the roles of the SGIP; an expanded view of the architecture of the smart grid; a number of developments related to ensuring cybersecurity for the smart grid, including a Risk Management Framework to provide guidance on security practices; a new framework for testing the conformity of devices and systems to be connected to the smart grid, the Interoperability Process Reference Manual; information on efforts to coordinate the smart grid standards effort for the United States with similar efforts in other parts of the world; and an overview of future areas of work, including electromagnetic disturbance and

10 "A Strong Smart Grid: the Engine of Energy Innovation and Revolution," by the State Grid Corporation of China, 2009.
11 IEC Smart Grid Standardization Roadmap, by SMB Smart Grid Strategic Group, June 2010.
12 2030-2011 - IEEE Guide for Smart Grid Interoperability of Energy Technology and Information Technology Operation with the Electric Power System (EPS), End-Use Applications, and Loads.


The final framework added 22 standards, specifications, and guidelines to the 75 NIST recommended in January 2010. Of note, the final draft contained a risk management framework offering guidance on security, as well as a framework for testing the conformity of devices and systems to be connected to the smart grid.13

Microsoft is committed to supporting these global efforts by taking a leadership role in the development of the smart energy ecosystem. Steve Ballmer, the chief executive officer of Microsoft, commented on the company's commitment to smart energy at the Cambridge Energy Research Associates annual conference in 2012:

"Between now and 2035, energy demand will grow, I don't know, 40 percent or more due to a growing population, a shift of people from rural areas to cities, the growth in Asia. In just 15 years alone the population of the world's cities is expected to increase by approximately 2 billion people. To me it's clear that we're not going to be able to just conserve our way into our energy future. Something more than that has got to happen, and it's going to have to happen, like it does in most industries, on the back not just of efficiency and conservation, but on the back of innovation… We share a vision for transforming today's energy infrastructure by using the power of cloud computing to enable utilities, operators, transportation system managers, and individual citizens, to help participate in the power generation right on through to consumption processes."14

New Developments in the Smart Energy Reference Architecture


Just as standards have been the enablers of the development of the smart energy ecosystem, Microsoft views as equally important the establishment of an architectural philosophy with a vision and strong foundation for migrating to the new infrastructure, as well as outlining the services necessary to monitor, control, and report on the assets of this new power system. In support of that view, Microsoft updated this reference architecture to articulate an industry vision for the smart energy ecosystem. Since the architecture's release in 2009, we've sought input from the industry and our partners to expand upon the document's relevance for the transformative work at hand. We convened a Smart Grid Advisory Panel, began meeting with them in May 2011, and have continued a dialogue well into 2013 about the issues affecting the industry. SERA v2 adds new sections that come directly from those discussions. It is much richer as a result.

13 NIST releases final smart grid Framework 2.0 document, contact Chad Boutin, February 28, 2012.
14 Video: Steve Ballmer at CERAWeek: Energy in Motion, 2012.


New Focus Areas in SERA v2


Big Data
Here's how Forrester Research defines Big Data:
• Volume: exceeds the physical limits of vertical scalability
• Velocity: the decision window is small compared to the data change rate
• Variety: many different formats make integration expensive
• Variability: many options or variable interpretations confound analysis

In the utility context, though, Big Data should also be viewed as:
• collecting existing data, new data, and even social data that previously had not been considered
• rapidly deriving information from all the data sets after realizing a discovery or new insight
• validating the findings to achieve confidence
• leveraging the new insights by committing them to production.

Utilities should seek to use Big Data to gain insights into business issues that were never evident before. They can do this by utilizing Big Data's capability to provide 360-degree views of customers, supply chains, operations, and employees, with much less latency. The Big Data promise applies also to the aggregation of operations, financial and customer data at scale. Additionally, Big Data solutions enable the consolidation and composition of both structured and unstructured data to derive wholly new levels of business insight.
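To make the collect-and-derive cycle concrete, here is a minimal sketch that flags anomalous consumption in a batch of smart meter interval reads. The meter IDs, readings, and the simple z-score test are illustrative assumptions only; they do not reflect any particular utility's data model.

```python
from collections import defaultdict
from statistics import mean, stdev

# Fabricated 15-minute interval reads: (meter_id, timestamp, kWh).
reads = [
    ("meter-001", "2013-06-01T00:00", 0.42),
    ("meter-001", "2013-06-01T00:15", 0.40),
    ("meter-001", "2013-06-01T00:30", 0.41),
    ("meter-001", "2013-06-01T00:45", 0.39),
    ("meter-001", "2013-06-01T01:00", 0.43),
    ("meter-001", "2013-06-01T01:15", 3.90),  # unusual spike
    ("meter-002", "2013-06-01T00:00", 0.55),
    ("meter-002", "2013-06-01T00:15", 0.57),
    ("meter-002", "2013-06-01T00:30", 0.53),
    ("meter-002", "2013-06-01T00:45", 0.56),
]

# Collect: group usage by meter.
by_meter = defaultdict(list)
for meter_id, _, kwh in reads:
    by_meter[meter_id].append(kwh)

# Derive and validate: flag intervals far outside each meter's own norm.
for meter_id, usage in by_meter.items():
    mu, sigma = mean(usage), stdev(usage)
    for kwh in usage:
        if sigma > 0 and abs(kwh - mu) / sigma > 1.5:
            print(f"{meter_id}: read of {kwh} kWh deviates from mean {mu:.2f} kWh")
```

In practice the same grouping and scoring would run over distributed storage and streaming feeds rather than an in-memory list, but the discover-validate-productionize loop is the same.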

Master Data Management


Most software systems have lists of data that are shared and used by several of the applications that make up the system. Master data is often one of the key assets of a company; it's not unusual for a company to be acquired primarily for access to its Customer Master data. Master Data Management (MDM) is the technology, tools, and processes required to create and maintain consistent and accurate lists of master data. Two points deserve emphasis. First, MDM is not just a technological problem. In many cases, fundamental changes to business process will be required to maintain clean master data, and some of the most difficult MDM issues are more political than technical. Second, MDM includes both creating and maintaining master data. Investing a lot of time, money, and effort in creating a clean, consistent set of master data is a wasted effort unless the solution includes tools and processes to keep the master data clean and consistent as it is updated and expanded.
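As a small illustration of the "maintain" half of MDM, the following sketch merges two hypothetical customer records into a single golden record using a newest-non-empty-value survivorship rule. The field names and the rule are assumptions for the example, not a prescription for any MDM product.

```python
from datetime import date

# Two source-system views of the same customer (fabricated data).
crm_record     = {"customer_id": "C-100", "name": "J. Smith",   "phone": "",         "updated": date(2012, 3, 1)}
billing_record = {"customer_id": "C-100", "name": "John Smith", "phone": "555-0142", "updated": date(2013, 1, 15)}

def merge_master(records):
    """Build one golden record: for each field, keep the newest non-empty value."""
    golden = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if value not in ("", None):
                golden[field] = value  # later (newer) records overwrite older ones
    return golden

print(merge_master([crm_record, billing_record]))
# {'customer_id': 'C-100', 'name': 'John Smith', 'phone': '555-0142', 'updated': datetime.date(2013, 1, 15)}
```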

Cloud
Cloud services continue to drive the overall cost of information technology lower and enable completely new classes of applications and solutions, largely because of service delivery through multiple formats. Think of the cloud applications that would be appropriate to mobile users, or those for generating fleets. Often, an IT department will start with a transition from on-premise infrastructure to a private cloud environment, perhaps focusing on deploying virtual servers. This is commonly referred to as Infrastructure as a Service and represents one of the flavors of IT as a service available within the world of cloud computing. Since this is only a single example of the possible cloud models, IT professionals must become familiar with a variety of cloud standards so that they can select appropriately based on the needs of the enterprise. It is common to divide cloud computing into three categories and four deployment models:

Categories
• Infrastructure as a Service (IaaS), which provides hardware for storage, servers and network, and resource provisioning to support flexible ways to create, use and manage virtual machines (VMs).
• Platform as a Service (PaaS), focused on providing the higher-level capabilities, beyond just VMs, required to support applications.
• Software as a Service (SaaS), the applications that provide business value for users.

Deployment models
• Systems on-premise. The most conventional model, especially for Energy Management Systems and other core mission-critical operations systems.
• Private clouds. Virtualization, resource pooling and resource consolidation create on-premise private clouds, or hosted private clouds at cloud service providers.
• Hybrid cloud. Combines elements of the other cloud models but keeps sensitive data or mission-critical processing behind the utility's on-premise firewall.
• Public cloud. Either dedicated, servicing only one customer, or multitenant.

Patterns for Integration


Given today's complex technical and business environment, how do you create an integrated portfolio of applications and services for your enterprise? Known good ways to integrate systems exist; patterns describe them. An enterprise's integration architecture balances the requirements of the business and the requirements of individual applications. Inside this integration architecture, you often find an overwhelming maze of systems, connections, and channels. If you study enough of these, you see common combinations of integrated systems (such as portals), networks of connections (such as message brokers, buses, and point-to-point connections), and numerous individual connections and channels. To understand the maze, it is helpful to understand how many of these integration architectures evolve: one application at a time. Although this approach works well from the perspective of a single application, connecting all applications in this way is unlikely to produce a well-ordered set of applications. Instead, you need a logical design at the integration level, just like you need a logical design at the application level. To think clearly about an integrated portfolio of applications and services at the enterprise level, you must invert your viewpoint. You must first consider the needs of the enterprise as an integrated whole and then consider how to expose shared functionality through networked applications. This kind of thinking is quite different from traditional monolithic application development or n-tier development. It begs the question: what is an application anyway?
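One frequently cited integration pattern is the message broker, in which systems exchange messages through named topics instead of direct connections. The toy sketch below illustrates that decoupling; the topic name and the OMS/CIS handlers are hypothetical.

```python
from collections import defaultdict

class MessageBroker:
    """A toy in-process broker illustrating the broker integration pattern:
    publishers and subscribers know only topics, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

broker = MessageBroker()
# Hypothetical integration: an outage event fans out to OMS and CIS handlers.
broker.subscribe("grid.outage", lambda m: print("OMS: create outage ticket for", m))
broker.subscribe("grid.outage", lambda m: print("CIS: notify customers on", m))
broker.publish("grid.outage", "feeder-12")
```

Because publisher and subscribers share only the topic, either side can be replaced without touching the other, which is exactly the well-ordering that a logical design at the integration level aims for.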


Capability Maturity Models


The Capability Maturity Model for Software (CMM) is a framework that describes the key elements of an effective software process. There are CMMs for non-software processes as well, such as Business Process Management (BPM). The CMM describes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process. The CMM covers practices for planning, engineering, and managing software development and maintenance. When followed, these key practices improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The CMM establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization's software process and compare it to the state of the practice of the industry. The CMM can also be used by an organization to plan improvements to its software process. It also reflects the needs of individuals performing software process improvement, software process assessments, or software capability evaluations; it is documented; and it is publicly available.
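The "yardstick" idea can be illustrated with a small scoring sketch. The practice areas, assessed levels, and the weakest-link rule below are simplifications invented for illustration; they do not reproduce the official CMM key process areas.

```python
LEVELS = {1: "Initial", 2: "Repeatable", 3: "Defined", 4: "Managed", 5: "Optimizing"}

# Hypothetical assessed level per practice area.
assessment = {"planning": 3, "engineering": 2, "management": 3, "maintenance": 2}

# Staged maturity models rate overall maturity by the weakest practice area.
overall = min(assessment.values())
print(f"Overall maturity: level {overall} ({LEVELS[overall]})")
for area, level in sorted(assessment.items()):
    if level == overall:
        print(f"Improvement target: {area} is at the limiting level {level}")
```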

Predictive Analytics
Analytics is an increasingly powerful business tool, one that is critical for competing successfully in today's utility industry. In the financial services sector, the value of predictive analytics is well proven in enabling better-quality, high-speed trading decisions and outcomes, and the same may be applied to many functions and business operations of the modern-day utility. Today's analytics tools can be used across lines of business and within business functions to enrich insight and intelligence, monitor operations, evaluate customer relationships, and manage those relationships profitably. In addition, analytics can help firms provide market-leading advice both internally and to clients of all types. The value lies in offering these capabilities in real time and in the stream of business operations. In short, analytics provides the high-quality decision-making capability that will define the difference between winners and losers in tomorrow's industry.
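As a minimal illustration of predictive analytics in a utility setting, the sketch below fits peak load as a linear function of temperature with ordinary least squares and then predicts a new day. The observations are fabricated, and production forecasting would use far richer models and data.

```python
# Fabricated history: daily peak temperature (deg F) and observed peak load (MW).
temps = [68, 75, 82, 90, 95]
loads = [410, 480, 560, 655, 720]

n = len(temps)
mean_t = sum(temps) / n
mean_l = sum(loads) / n

# Ordinary least squares for a single predictor.
slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(temps, loads))
         / sum((t - mean_t) ** 2 for t in temps))
intercept = mean_l - slope * mean_t

forecast_temp = 88
predicted = intercept + slope * forecast_temp
print(f"Forecast peak load at {forecast_temp}F: {predicted:.0f} MW")
```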

Hierarchical Controls
Hierarchical data stores the relationships between other data. It may be stored as part of an accounting system or separately, as descriptions of real-world relationships such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding, and sometimes discovering, the relationships between master data.
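A small sketch of hierarchical master data and a roll-up traversal follows; the organizational units are invented for illustration.

```python
# Parent -> children relationships for a hypothetical utility holding company.
hierarchy = {
    "Utility Holdings": ["Generation Co", "Delivery Co"],
    "Delivery Co": ["Transmission Ops", "Distribution Ops"],
    "Distribution Ops": ["Metering Services"],
}

def descendants(node, tree):
    """Return every unit below `node`, however deep."""
    result = []
    for child in tree.get(node, []):
        result.append(child)
        result.extend(descendants(child, tree))
    return result

print(descendants("Delivery Co", hierarchy))
# ['Transmission Ops', 'Distribution Ops', 'Metering Services']
```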

Security
Established as a mandatory policy in 2004, the Microsoft SDL was designed as an integral part of the software development process at Microsoft. The development, implementation, and constant improvement of the SDL represent our strategic investment in the security effort. This is an evolution in the way that software is designed, developed, and tested, and it has now matured into a well-defined methodology. Our commitment to a more secure and trustworthy computing ecosystem has also inspired the creation of guidance papers, tools, and training resources available to the public.

Electric Vehicle Use Cases: to be developed (deferred to SERA 3.0)

SERA 2020: to be developed (deferred to SERA 3.0)

Of particular importance to this updated SERA are maturity models that offer our guidance as to how the evolution of a utility company's smart energy ecosystem might look. The maturity model matrices are useful in assigning values to the progress of various smart energy ecosystem initiatives already underway. For example, utilities planning their smart grid might define their goals for operational improvements, have their plan approved by management, and then incorporate their plan into the organization's business strategy as the first three steps of its development. They can then optimize their operations and venture further into innovation as a result of their continuing implementation of strategy. We believe this maturity modeling will be helpful to organizations that need step-by-step benchmarks for every facet of development of their smart energy ecosystem, from advances in customer service that respond to the consumerization of information technology, to their responses to ever more demanding regulations for renewable power sources or environmental protection.


Observers will also note that the Microsoft Smart Energy Ecosystem Reference Architecture was designed to maximize agility and enable role-based productivity while ensuring secure IT and operations with the very best ROI for all participants of the smart energy ecosystem. It is intended to ensure this balance now, with existing legacy systems, and in the future, as system requirements expand to address increasing complexities. As such, you'll see that SERA v2.0 has five major sections that seek to address the next evolution of the smart energy ecosystem:
• The first section, Evolution of the Power System, offers a high-level point of view of the forces shaping the future direction of the industry. The section provides an overview of the challenges coming to the fore and sets the stage for the technology solutions that we and our partners offer.
• The second section, Changing Demands on the Business, offers an industry architectural vision that details the entire value chain, from the utility to the end-use consumer, whether commercial, industrial, or residential. Business decision makers will gain a greater understanding of the business challenges they will face as the smart energy ecosystem emerges.
• The Architecture section will be the most useful to software developers, system integrators, and solution specialists who already have an in-depth understanding of the industry and information architecture and are most focused on the use of Microsoft technologies.
• The SERA Alignment Guidelines and Capability Maturity Model section is new to SERA and responds to the SERA Advisory Council's request for us to identify and offer guidelines as to how utilities might align their technology planning and implementation to this document. The council observed how utilities and Microsoft's solution partners could similarly benefit from these guidelines in their effort to understand how to become a first-order citizen in the SERA Enterprise Architecture Guidelines.
• The fifth and final section, the Microsoft Technology Stack, is updated from Version 1 and identifies Microsoft products and solutions, as well as partner-led solutions, that enable this architectural vision. This section has been updated with the release of many new game-changing technologies now provided by Microsoft.

Finally, throughout this document we:
• offer detailed guidance and hyperlinks to the specific topics and solutions mentioned;
• provide references, where available and applicable, to accelerate development and guide deployments for the smart energy ecosystem;
• compile a large appendix that provides information on non-Microsoft related items that may be on the minds of utilities that originally embarked on other paths toward the smart energy ecosystem.



Even while we endeavor to offer this view of an underlying architecture, Microsoft urges the reader to acknowledge with us that achievement of the smart energy ecosystem is a journey and not a destination. The inclusion of the maturity model will help define the journey and as such is a major contribution to this second version of SERA. It's helpful to reiterate that SERA seeks to maximize Microsoft's value to customers by articulating a clear vision for the smart energy ecosystem and then describing the Microsoft and partner technologies that can realize that vision.


1.0

Evolution of the Power System and the Utility Industry

Since its inception in the late 1800s, the power system has understood as its core challenge the mission of reliability: keeping the production of electricity in constant balance with demand. Toward this end, steam power replaced water power as the most capable mechanical source for maintaining reliability, and then the industry addressed the challenge of transmitting electricity over distance. Simply put, the industry has been shaped by an infrastructure that was first deployed to generate and then transmit electricity, in that order. Arguably, this simple mission has led to the creation of the largest machine in the history of the world.

Figure 4: The traditional electricity value chain from generation to customer is fairly one-dimensional: power is transferred from a generating station to the end user with minimal feedback to the system through meters.

The concept of the smart energy ecosystem is based on the observation that the power system of the future will be radically different from the traditional electricity value chain indicated above. The new drivers of change have reached critical mass and are tipping the industry in directions it had not foreseen. Photovoltaics, electric vehicles, wind turbines, utility-scale storage, solar array generating farms, a focus on cybersecurity, and advanced consumer interaction with power consumption are a few of the changes moving the industry in new, dynamic directions. Each of these, and many others, adds its own management challenges.

Figure 5: The smart energy ecosystem is much more dynamic than the traditional electricity value chain depicted in the previous figure.

To help the industry keep pace, information technology has developed at a phenomenal rate. Together with our partners, Microsoft offers information technology solutions that address the new operational challenges common to all utilities around the world. These technologies integrate components for fuel acquisition, generation, supply, delivery, sales, service, and regulatory compliance in such a way that utilities can first cope with, and then change, their business models in ways that may better profit from the changing power ecosystem. The technologies also take into account the fact that IT-like devices now improve and inform all components of the ecosystem.

1.1

Grid Standards Evolution Offers an Example

As an example of the larger changes occurring to the energy ecosystem, utilities are adding new devices to the grid at a rapid pace. These grid devices add new data sets to the utilities' knowledge bases and operational domains, provided they are first interoperable with existing investments and can also accept future innovations through adherence to new, evolving industry standards.


While SERA v1 outlined a vision for the dramatic improvements in microprocessors, software, and communications that are enabling new grid devices and capabilities, SERA v2 needs to make sure readers understand the holistic nature of changes affecting the utility industry, at every level of the company, from executive management to customer service and every line of business in between. Indeed, utilities are already deploying many devices with the microprocessors and two-way communication that will enable a wide variety of capabilities not possible before, including collection of more information, local decision-making, and coordination.

One of the largest challenges to grid devices is the advancement of industry standards by various governing bodies, each within very specific domains. As an example, the Electric Power Research Institute (EPRI) is a non-profit research center mainly funded by the American power industry. EPRI plays an important role in the development and dissemination of the Common Information Model (CIM), an effort to address interoperability issues. EPRI has been working to establish several sets of standards for components of the smart energy ecosystem. As another example, in collaboration with the Western Electricity Coordinating Council (WECC) Renewable Energy Modeling Task Force, the North American Electric Reliability Corporation's Integration of Variable Generation Task Force, the IEEE Dynamic Performance of Wind Generation Working Group, and the International Electrotechnical Commission's Technical Committee 88, Working Group 27, EPRI has led an industry-wide effort to develop generic and public models for steady-state and dynamic performance of variable-generation technologies for power system analysis. Other EPRI standards-development efforts have been advanced to the International Electrotechnical Commission (IEC) for standardization and have led to the development of active user groups. These include the:
• Inter-Control Center Protocol (ICCP)
• Utility Communication Architecture (UCA)
• Common Information Model (CIM)

Other standardization efforts are worth mentioning as well. The IEEE, a professional association for the advancement of technology, has helped create many important communications and power engineering standards. In early 2012, IEEE-SA published IEEE 1591.1-2012, Standard for Testing and Performance of Hardware for Optical Ground Wire (OPGW); IEEE P1909.1, Recommended Practice for Smart Grid Communication Equipment, intended to document testing and installation procedures; IEEE P1703, Standard for Local Area Network/Wide Area Network (LAN/WAN) Node Communication Protocol to complement the Utility Industry End Device Data Tables; and IEEE P1854, Smart Distribution Applications Guide, to categorize and describe important smart distribution applications and fill a gap for standardized definitions of such systems.15


In 2004, the US Department of Energy (DOE) and the GridWise Alliance agreed to work together to realize the vision of a transformed national electricity grid in the United States. An effort from the International Council on Large Electric Systems (CIGRE), called D2.24, is driving requirements and architecture for next-generation energy market and energy management systems. Standards developed by the IEC and IEEE are now finding their way into NIST-led efforts related to the smart grid. Finally, as the smart energy ecosystem evolves to include the end-use consumer, whether commercial or residential, Web services standards bodies such as OASIS will play a greater role.

1.2

The Build-out of the Smart Energy Ecosystem

A smarter grid, composed of new, intelligent connected devices, is the backbone and core enabler of the smart energy ecosystem. Improving and modernizing the grid enables utilities to offer the many new capabilities that respond to, as well as drive, changing consumer behavior and attitudes toward energy.

15 Are we seeing real progress on smart grid applications? Smart Grid News, April 24, 2012.


Figure 6: Whereas many discussions focus exclusively on grid mechanics and engineering, the smart grid will become the smart energy ecosystem as consumers transform into customers by availing themselves of new grid capabilities and the services that are created.

The grid will transform into an ecosystem as those same customers become sources of generation as well. Thus, all these new interactions with customers will give utilities opportunities to create new business and customer relationships. Feedback from websites, customer service calls, social media and other feedback loops will need to be incorporated into utility operations at all business levels.

Let's consider that the smart energy ecosystem will likely need to operate in a customer-driven, autonomous fashion, for example at some point accepting new power inputs coming from customers' premises. That might be electricity generated from solar arrays on the rooftops of commercial buildings and private homes, or electricity stored in neighborhood energy storage devices. And when millions of individuals own plug-in hybrid electric vehicles (PHEVs), the smart energy ecosystem could conceivably be configured to allow consumers to buy electricity from the grid during late-night, non-peak hours for charging. Then, when the grid needs power during peaking events, the utility might draw from the stored power in those very same PHEVs. The same would be true of solar array investments on residential or commercial rooftops. All these new sources of electricity create new billing, customer service, and contracting impacts on the IT systems required to manage and monitor them.

1.2.3 Innovation in the Smart Energy Ecosystem: Virtual Power Plants

The smart energy ecosystem is already creating new innovation of the type Microsoft CEO Steve Ballmer mentioned, with Virtual Power Plants (VPP) quickly becoming one of the most promising emerging applications of smart energy technologies. A VPP is an IT system that can optimise the use of energy-related assets through various controls and coordination of generating and demand efforts. Power resources are accessed through software instead of through physical generating facilities.16

Figure 7: A VPP controls a wide range of generating assets, from large generating units like combined heat and power (CHP) and hydro plants, to smaller generating assets like emergency generators or wind turbines. Source: Dong Energy

16 Virtual Power Plants: systems of the future, by Chuck Ross, Electrical Contractor Magazine, July 2011.


VPP technology has the potential to help address a variety of power system challenges, such as a lack of generation capacity, grid congestion handling and outage prevention, and the integration of large shares of intermittent power production while maintaining supply reliability and security. VPPs provide a good example of the type of innovation that will likely continue to emerge from a smart energy ecosystem.
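A merit-order dispatch loop captures the essence of what a VPP optimizer does. The sketch below, with invented units and costs, activates the cheapest available capacity first until a contracted demand is met; real VPPs also model ramping limits, heat demand, and forecasts.

```python
# Hypothetical VPP asset portfolio (capacities and marginal costs are fabricated).
units = [
    {"name": "CHP plant",         "capacity_mw": 40, "cost_per_mwh": 55},
    {"name": "Hydro plant",       "capacity_mw": 25, "cost_per_mwh": 30},
    {"name": "Emergency gensets", "capacity_mw": 10, "cost_per_mwh": 120},
    {"name": "Wind turbines",     "capacity_mw": 15, "cost_per_mwh": 5},
]

def dispatch(units, demand_mw):
    """Meet demand by activating the cheapest units first (merit order)."""
    plan = []
    for unit in sorted(units, key=lambda u: u["cost_per_mwh"]):
        if demand_mw <= 0:
            break
        output = min(unit["capacity_mw"], demand_mw)
        plan.append((unit["name"], output))
        demand_mw -= output
    return plan

for name, mw in dispatch(units, demand_mw=70):
    print(f"{name}: {mw} MW")
# Wind turbines: 15 MW, Hydro plant: 25 MW, CHP plant: 30 MW
```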

1.3

Participants within the Smart Energy Ecosystem

As the smart energy ecosystem expands its potential with each new technology-enabled device and business model, there are wide and growing sets of active new participants developing their new roles, interests, and associated responsibilities. The following list provides a sample of the participants:
• Utilities and related companies, including:
  o Distribution companies
  o Independent System Operators (ISOs)
  o Regional Transmission Operators (RTOs)
  o Transmission market operators
  o Transmission companies
  o Generation companies
  o Distribution balancing authorities
• Service providers, including:
  o Energy aggregators
  o Maintenance service providers
  o Metering service providers
  o Weather forecasting
  o Retail energy providers
  o Equipment providers (PHEVs, solar panels, storage, etc.)
• Customers, including:
  o Residential
  o Commercial
  o Industrial
  o Governmental

The reference architecture that follows describes how these participant interchanges will work, and provides guidance for implementing these systems based upon Microsoft platform technologies.

1.4

Collaboration within the Ecosystem

By perceiving the power system as an energy ecosystem, with all its attendant complexities, it becomes immediately evident that there is a serious need for collaboration between organizations and equipment.


Microsoft believes that the critical factor for organizational success is empowering people, specifically, those people who create, analyze, distribute, or consume information as part of their jobs: the information workers. Microsoft's collaboration vision is about providing software and services that deliver pervasive capabilities to enable people to work together more effectively. Microsoft is addressing challenges in the four areas critical to effective collaboration:
• integrated communications
• collaborative workspaces
• access to information and people
• people-driven processes.

Collaboration and associated business processes must occur between:
• users
• businesses
• individual customers
• a variety of technology systems, resources, and intelligent devices.

Collaborative relationships may be cooperative or competitive. Utilities and market operators may cooperate to resolve a critical outage that threatens grid stability. Market participants may collaborate with the electricity market in a competitive environment. Indeed, collaboration must occur for many purposes:
• To operate the electricity grid
• To buy and sell energy through an energy market
• To cost-effectively utilize energy
• To maximize the use of electricity when consumed
• To participate in energy (e.g. demand response, efficiency) programs to better manage use of energy
• Scheduling of resources
• Scheduling of consumption
• Settlement of accounts
• Maintenance of the electrical infrastructure

Responsibilities within the ecosystem are federated: there may be interactions between different types of organizations, as well as their interactions with the electricity grid and customer infrastructures. Some organizations may take on multiple roles, as in the case of a vertically integrated utility that may have combined responsibilities for generation, transmission, and distribution. There also may be many types of service providers, such as those offering metering, maintenance, weather forecasts, or load aggregation for participation in demand response programs.


In addition, all of them will need to interact and collaborate. Historically, participants have been people or organizations, but the capacity for local decision-making by devices extends the participant/participation model.

1.4.1 European Union Smart Ecosystem Standards for Collaboration

Microsoft SERA's recognition of the need for collaboration within the smart energy ecosystem is being codified by the European Commission through a series of mandates by different Directorates-General. To summarize their work, the European Commission has mandated that the European Standards Organisations issue standards for the smart grid in Europe. The EC set up the Smart Grids Task Force in 2009 to develop policy and regulatory directions for smart grid deployment under the Third Energy Package, a gas and electricity market liberalization for the European Union. The commission has issued recommendations for the major components of the smart grid/smart energy ecosystem, including:
• Preparations for the roll-out of smart metering systems
• Sets of common functional requirements for smart meters
• Standards for smart grids
• Mandates for electric vehicles
• The adoption of communication standards for smart grids

The following sections describe in brief some characteristics of the above mandates and standards.

1.4.1.1 EU Smart Metering Roll-out Preparations

The following recommendations apply to utility companies from 2012/148/EU: Commission Recommendation of 9 March 2012 on preparations for the roll-out of smart metering systems.

For the metering operator:
(c) Allow remote reading of meters by the operator. This functionality relates to the supply side (metering operators). There is a broad consensus that this is a key functionality.
(d) Provide two-way communication between the smart metering system and external networks for maintenance and control of the metering system. This functionality relates to metering. There is a broad consensus that this is a key functionality.
(e) Allow readings to be taken frequently enough for the information to be used for network planning. This functionality relates to both the demand side and the supply side.


For commercial aspects of energy supply:
(f) Support advanced tariff systems. This functionality relates to both the demand side and the supply side. Smart metering systems should include advanced tariff structures, time-of-use registers and remote tariff control. This should help consumers and network operators to achieve energy efficiencies and save costs by reducing the peaks in energy demand. This functionality, together with the functionalities referred to in points (a) and (b), is a key driving force for empowering the consumer and for improving the energy efficiency of the supply system. It is strongly recommended that the smart metering system allow automatic transfer of information about advanced tariff options to the final customers, e.g. via the standardised interface mentioned under (a).
(g) Allow remote on/off control of the supply and/or flow or power limitation. This functionality relates to both the demand side and the supply side. It provides additional protection for the consumer by allowing grading in the limitations. It speeds up processes such as moving home: the old supply can be disconnected and the new supply connected quickly and simply. It is needed for handling technical grid emergencies. It may, however, introduce additional security risks, which need to be minimised.

For security and data protection:
(h) Provide secure data communications. This functionality relates to both the demand side and the supply side. High levels of security are essential for all communications between the meter and the operator. This applies both to direct communications with the meter and to any messages passed via the meter to or from any appliances or controls on the consumer's premises. For local communications within the consumer's premises, both privacy and data protection are required.
(i) Fraud prevention and detection. This functionality relates to the supply side: security and safety in the case of access. The strong consensus shows the importance attached to this functionality. This is necessary to protect the consumer, for example from hacking access, and not just for fraud prevention.

For distributed generation:
(j) Provide import/export and reactive metering. This functionality relates to both the demand side and the supply side. Most countries are providing the functionalities necessary to allow renewable and local micro-generation, thus future-proofing meter installation. It is recommended that this function should be installed by default and activated/disabled in accordance with the wishes and needs of the consumer.

1.4.1.2 Common Functional Requirements for Smart Meters

Based on the analysis of 11 cost-benefit assessments, the European Commission established the set of common functional requirements of the smart meter, with the intent of achieving cost efficiencies for member states, the metering industry, utilities, and regulators in their investments, roll-outs, and reference definitions. The following chart is a summary of findings of DG ENER and DG INFSO towards the Digital Agenda, Action 73.

Figure 8 summarizes some of the EC's requirements for smart meters.

1.4.1.3 Standards for Smart Grids

The CEN/CENELEC/ETSI Joint Working Group (JWG) on standards for smart grids produced a report addressing standards for smart grids, including the following:
• Final Report of the CEN/CENELEC/ETSI Joint Working Group on standards for smart grids
• Recommendations for smart grid standardization in Europe

On 1 March 2011, the European Commission issued Mandate M/490, requesting the three European Standards Organisations (ESOs), CEN, CENELEC, and ETSI, "to develop a framework to enable European Standardisation Organisations to perform continuous standard enhancement and development in the field of smart grids, while maintaining transverse consistency and promote continuous innovation."

1.4.1.4 Mandates for Electric Vehicles

CEN and CENELEC, as European Standards Organisations, have sought to standardize all aspects of electro-mobility. Their task has involved finding common ground for different standards communities with different perspectives (those dealing with vehicles, those responsible for the electrical system and its components, even the ICT community); the international work already done necessarily has to accommodate different rules and requirements in different regions of the globe; at the European level there are (still) national regulations that make the application of single solutions difficult; some of the requirements are still evolving; and some technical solutions are still not fully mature. They created a short-term focus group to prepare an advisory report, subject to approval by the CEN and CENELEC Technical Boards, which will decide on the implementation of the various recommendations. The top-level recommendations from their report, Standardization for road vehicles and associated infrastructure, follow:

Connectors and charging systems. In order to facilitate the adoption of electro-mobility, charging should be as convenient and as low cost as possible while providing an acceptable level of safety. This inherently implies the implementation of simple, uniform charging systems both for the home and for publicly accessible charging places.

Smart charging. Smart charging is seen as a necessity to optimise the use of the electrical grid for efficient EV charging, while maximizing the use of renewable energy. It is considered that the customer should be encouraged to charge at the best possible moment in terms of available energy, by providing a smart charging mechanism based on information supplied by the electric grid and on the physical environment (Energy Management System, EMS). This would help to avoid the need for extensive new investment in Europe for the grid. The active interaction of the vehicle with the grid could even be beneficial for global energy usage. New standardization activities are recommended, especially in relation to the requirements for storage, labeling, and battery switching stations. The possibility of standard battery designs and the re-use of batteries in alternative applications may be an interesting possibility that could be explored; this requires additional standardization work.


Electromagnetic compatibility (EMC). EMC standards are needed to ensure that electrical and radio apparatus does not cause interference to other such equipment, and that it is adequately protected against such interference. This is a heavily regulated area, the relevant EU Directives being "new approach" directives in which European Standards are agreed to meet the essential legal requirements. However, the various standards that are available in this domain were not usually designed with mass-market use of equipment for electric vehicle charging in mind, or for the high concentrations of EVs in one place that may result from such a deployment. This may imply the need for a number of detailed amendments to the standards portfolio. The present Directive 72/245/EEC defines provisions for the whole vehicle and for the electrical and electronic sub-assemblies, but does not cover the connection of an EV to the grid. UNECE Regulation 10 is presently under revision; it will cover these same requirements and take account of all the EMC aspects for the EV.

Regulation. The report contains some complementary considerations regarding the interface between standardization and regulation at the European level, for instance the Low Voltage Directive, as well as European vehicle type-approval Directives and the UNECE regulations relating to vehicles.

1.4.1.5 Requirements for communications standards in Smart Metering Systems

The Smart Metering Coordination Group identified six key functionalities for communications among smart metering systems in their May 2011 Technical Report, Functional Reference Architecture for Communications in Smart Metering Systems:
• Remote reading of metrological register(s) and provision to designated market organisations
• Two-way communication between the metering system and designated market organisation(s)
• To support advanced tariffing and payment systems
• To allow remote disablement and enablement of supply and flow/power limitation
• To provide secure communication enabling the smart meter to export metrological data for display and potential analysis to the end consumer or a third party designated by the end consumer
• To provide information via web portal/gateway to an in-home/building display or auxiliary equipment

2.0

Changing Demands on the Utility Business

The economics that emerge with the development of the smart energy ecosystem will change every facet of the utility business, from generation through transmission and distribution, all the way to retailers. Each facet will have its own technology enablers, as well as enablers that affect all participants.

Consider, for example, the previously mentioned Virtual Power Plant that's enabled by connected information technologies. The VPP combines generation units into a portfolio of assets that can deliver energy to those markets where it can attain the highest value, whether that is the power exchange, markets for ancillary services, or even markets that have exceeded the capacity of their existing generation assets. Later, when the contracted electricity must be delivered, the VPP activates the cheapest units that can be combined to deliver the services. On the demand side, the units will normally have limited flexibility, and here the task is to optimise this flexibility so it is utilised on the days and hours where it creates the greatest value. Another example of this optimization capability of VPPs within a smart energy ecosystem might be to optimize CHPs so that they deliver the combined heat and power contracted in the most efficient fashion, taking into consideration the different load factors, the available heat accumulator and current heat need, the different fuels available, or the varying ramping capabilities at each CHP. Imagine being the utility executive team watching this sort of innovation operate underneath its traditional model!

Now consider that this is but one future scenario possibly affecting the traditional utility company. There are many others, and this section considers their implications on utility business models:
• Changing demands on generation companies
• Changing demands on transmission companies
• Changing demands on distribution companies
• Changing demands on retailers
• Energy resources and constraints
• Business factors
• Technology enablers

2.1

Changing demands on generation companies

Generation companies (GenCos) must become exceptionally good at managing a host of new inputs to their demand equations and matching those against their ability to either produce power or buy it on the open market. In addition, GenCos will be required to add renewable power sources, like solar arrays and offshore wind, to their fleets to meet the environmental demands of state and federal regulators. As such, the successful GenCo will be adroit in its ability to monitor, estimate, and forecast the energy it generates from renewable and traditional power resources, in real time at various voltage levels. The GenCo's margin for profitability will rely on its forecasting, predictive analytics, and real-time decision making to assess the various impacts of, say, microclimates in its operating region, using the feedback from its grid devices.

Key technology enablers: Big Data, Patterns for Integration

2.2

Changing demands on transmission companies

Today's transmission grid has operational constraints that still need to be carefully managed. Transmission line modernization includes repairs and replacements of various components as well as the installation of new equipment. More mature economies will need to overhaul their systems for the increasing loads they will carry, while emerging economies will have the advantage of adding new infrastructure that already comes with the latest smart energy ecosystem enabling technologies. Modernization might include core power electronic and thyristor replacements, the installation of new digital controls and user interfaces, and the upgrading of extant cooling systems, as is the case with Alstom's modernization of an Alaskan transmission line. The Alaska upgrade is like new infrastructure in many developing economies, where the electricity system can't rely on backups from other power systems. Ongoing repair-and-replace strategies will need development to overcome aging and already-worn equipment assets.

Key technology enabler: Predictive analytics

2.3

Changing demands on distribution companies

Distribution companies are tasked with improving the efficiency of their grid, ensuring reliability within the limits set by regulators, and improving their fault analysis and switching capabilities. Full-scale automation of distribution technologies is the ultimate goal and can be achieved through enhanced fault location, isolation and service restoration, and integrated automated switching plan management. Optimizing the grid for advanced capabilities must include such capabilities as the adoption of green energy or renewable energy sources, community energy storage, and plug-in electric vehicles. Customers will come to expect ever-faster outage repairs, with personalized notifications of estimated service restoration.

In the last mile of distribution, new factors and their constraints are emerging around the basic utility function of metering. In the past, it was only possible to measure usage for all but large consumers on an aggregate monthly basis. With advanced meter deployments, it is now possible to measure usage for all customers in near real-time on an interval basis, where every customer's usage may be reported every 15 minutes. Such interval reporting provides new opportunities to charge customers more for electricity consumption during more expensive peak hours, or provide reduced rates for usage during off-peak hours. This time-of-use pricing provides customers with the incentive to change their consumption behaviors and/or leverage devices within their home or business to rationalize overall energy costs. The communication infrastructure used for the advanced meter then becomes a gateway between the customer and utility or service providers for additional services including demand response, outage detection, power quality monitoring, and more.

Once thought to be an emerging critical issue, distribution network constraints may yet become more apparent as consumers purchase more PHEVs and deploy more distributed resources. Where a distribution feeder may have been designed for average customer loads of 1.5 kW, the charging cycle of a single PHEV can add a load of up to 20 kW. As more PHEVs come onto the grid, they can easily exceed the capacity of a distribution feeder, requiring large-scale physical upgrades and/or coordination of PHEV recharging. The utility will prefer the coordination option, in order to minimize peaks and provide for balanced operation of feeders within their designed limits.

Key technology enablers: Master data management, Predictive analytics
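As a concrete illustration of the time-of-use pricing described above, the sketch below prices a handful of fabricated 15-minute interval reads against an assumed two-tier tariff; the hours and prices are illustrative only.

```python
# Assumed tariff: higher price during an afternoon peak window.
PEAK_HOURS = range(14, 20)                 # 2 pm - 8 pm
PEAK_PRICE, OFFPEAK_PRICE = 0.24, 0.09     # $ per kWh (fabricated)

# Fabricated reads: (hour of day, kWh consumed in one 15-minute interval).
intervals = [(13, 0.4), (14, 0.6), (15, 0.9), (19, 0.7), (22, 0.3)]

bill = sum(
    kwh * (PEAK_PRICE if hour in PEAK_HOURS else OFFPEAK_PRICE)
    for hour, kwh in intervals
)
print(f"Time-of-use charge: ${bill:.2f}")  # $0.59 for this sample
```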

2.4

Changing demands on retailers

As the middlemen between consumers and producers, energy marketers, traders, and retail service providers play a pivotal role in balancing the supply and demand of energy. They must manage multiple commodities and complex transactions, and monitor all sorts of risks to the equilibrium they try to achieve. As a result, retailers must become incredibly good at developing competitive pricing that responds to quickly changing market demands while remaining within various regulatory requirements for financial dealings and reporting. For retail electric providers (REPs), there will be no end to new expectations from their consumers and the need to fulfill their requests and changing demands. For example, imagine the REPs who might see an opportunity to specialize in electric vehicle recharging, and the consumer outreach and pricing they will regularly create to attract customers. Gas stations advertise their prices for the passing motorist and keep marquees updated with numeric posters; electric stations might have minute-by-minute changes to their electricity prices.

Key technology enabler: Patterns for Integration
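To illustrate how minute-by-minute retail pricing might work mechanically, here is a toy sketch in which a posted EV charging price tracks a wholesale feed plus a margin. The price formula and all values are assumptions for the example, not any retailer's actual method.

```python
def retail_price(wholesale_per_kwh, margin=0.04, peak=False):
    """Posted price = wholesale cost plus a fixed margin, with a peak surcharge."""
    surcharge = 0.05 if peak else 0.0
    return round(wholesale_per_kwh + margin + surcharge, 4)

# Fabricated minute-by-minute wholesale feed ($/kWh).
for minute, wholesale in [(0, 0.062), (1, 0.071), (2, 0.068)]:
    posted = retail_price(wholesale, peak=(minute == 1))
    print(f"t+{minute} min: posted price ${posted:.3f}/kWh")
```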

2.5

Energy resources, constraints and challenges

The increasing diversity of energy resources will be another notable driver of new business models. Among fossil fuels, there is currently a worldwide drive to harness the shale gas available through unconventional resource drilling, sometimes very near to existing power plants or available to future plants. In North America, the viability of shale gas has made natural gas plentiful to the point that the region could become a net energy exporter in the future. Indeed, some observers have noted how some manufacturing plants sited near these natural gas resources may seek to generate their own electricity on site, without the assistance of a utility company. Coal is still widely used as a generating source in much of Asia, but its future in North America and Europe may be further challenged by cleaner-burning natural gas and mandates for the use of renewable power sources. Meanwhile, renewable power sources like wind, solar, geothermal, and other forms of distributed energy generation are becoming common and more cost effective, even though they have far different operational, economic, and control characteristics than conventional plants. Consider the operational complexity when a utility combines such variable generation sources with demand response, where the energy not used (sometimes referred to as a negawatt) can be considered an energy resource if demand can be controlled.

Key technology enabler: Patterns for Integration

2.6

Business Factors

The wide variety of economic and technical changes that will occur with the advent of the smart energy ecosystem will require everyday business processes to become more adaptable and flexible. The new marketplace will offer many new opportunities to profit if a company can change its business processes quickly and cost effectively. Such flexibility will require an information technology architecture that supports and anticipates each next stage of the evolution toward the smart energy ecosystem. The architecture's value will be measured by how cost-effectively implementations support the ongoing evolution of specific business processes. In fact, that ongoing flexibility and capability to adapt will be the primary reason that an architectural framework is needed from the outset. The following industry issues demonstrate the challenges facing utility businesses and the technology solutions that may address them.

2.6.1 Utility Workforce Optimization

Like companies in other industries, utilities face mounting pressures to minimize the number of people needed to support their business processes. Whereas business process execution in the past may have required several persons with knowledge of specific applications, it is now possible to leverage workflow technologies to hide the underlying application details from users. Doing so can provide users with a simplified, streamlined view of the process so that it can be executed more efficiently, even with less training. Workflow technologies also automate many steps and avoid redundant data entry, improving accuracy and efficiency and ensuring that business process execution follows corporate compliance policies and procedures.

2.6.2 Workforce Demographic Changes

The architecture supporting the smart energy ecosystem will also need to consider and enable new dynamics occurring within the changing utility workforce. Much has been written about how the aging of the baby boomer generation will equate to senior resources and their experience leaving the workplace. In addition to the proverbial brain drain, a new workforce demographic will demand new work tools: the so-called millennials who are entering the workforce have heightened expectations about the sophisticated tools they think they should be using to execute the utility's work processes. These dynamics will drive businesses to seek technology systems that address both workforce demographic challenges. Those businesses that adapt the most quickly to these changing conditions will benefit more quickly. But in order to achieve this flexibility, people throughout the business will require timely access to the information they need, in a form they can use, through tools that provide collaboration, knowledge management, data repositories, and process integration. Businesses will need an architecture that is able to support pragmatic integration as an enabler of their evolution to the smart energy ecosystem.


2.6.3 Equipment Collaboration Optimization

Adding equipment to the grid also serves as an example of a process where workflow automation can facilitate the updating of planning and operations models, as well as asset management systems, geospatial information systems, and, potentially, customer information systems (in the case of phase rebalancing).

2.6.4 Outsourcing and Contracting Optimization

Business process outsourcing and contracting optimization continue to assist the utility industry with cost rationalization for certain activities that can be performed virtually, by harnessing information technology to lower-cost staff centers around the world. Utilities, however, are still unsure of some aspects of outsourcing, from call center performance to personal information security. Extracting greater business value through the adoption of more robust strategies and technologies like the cloud may spur the innovation needed to make this activity more attractive.

2.6.5 Workforce Mobilization

The modern workforce is mobile, traveling over weekends to Monday morning meetings, working remotely from home, or taking orders for field repairs while in the truck. Making data available to all these types of mobile employees is critical to speed and effectiveness; business-critical data cannot be limited to desktop-only viewing. This new mobile workforce will require the utility to enable them through access to all the resources necessary to be most effective on the job. This includes access to such things as documents, drawings, photos, work orders, and maintenance manuals for field equipment, and all of them must be available on easy-to-use, familiar devices. The mobile workforce also needs tools that enable collaboration with other employees across the enterprise, so that they may gain access to, say, engineering subject matter experts at any time from any location. They will need to know that people are available, and then be able to initiate an IM chat or conversation, or in some cases a video conversation, to immediately address the work issue they face.

In sum, as customers leverage technical advances and rationalize their energy consumption, and as other outside factors affect how utilities must change their business operations, utility companies will find themselves looking to leverage a variety of technical and business innovations to assist their journey toward a smart energy ecosystem.

2.7

Technology Enablers

The technology architecture of the smart energy ecosystem won't be confined to the need to revise business practices for workforce, consumer and regulatory changes. It will also need to be an enabler of new technologies, some we know about and some that are yet to come. This section considers the three most promising new technologies driving utility innovation:
• Advanced Sensors and Web Integration
• AMI and Communication Networks
• New Computing Paradigms


2.7.1 Advanced Sensors and Web Integration

New, advanced sensors will expand the capabilities of the smart energy ecosystem with increased integration with the Web. These include:
• Global Positioning Systems (GPS)
• Phasor Measurement Units (PMU)
• Interval meter readings
• Centralized Remedial Action Schemes (C-RAS)

For instance, by leveraging technologies like GPS, it is now possible for devices to take measurements with a very precise view of time. This makes it possible to measure phase angles at locations on the grid using PMUs and to take grid-wide measurement snapshots. Interval meter readings will enable more accurate load models. Together, these technologies provide new opportunities for improvements in network analysis, monitoring, and control, thereby offering improvements in grid stability and security, as well as facilitating better grid utilization.

Another example of advanced sensors and Web integration is C-RAS. Utilities have demonstrated that C-RAS can be used to create fast grid event mitigation schemes that can lead to material reductions in reserve margins while maintaining or improving overall reliability. The ability to automatically trigger pre-enabled grid response actions greatly enhances autonomous, reliable grid operation.

Other core components of the smart energy ecosystem technology architecture will be the Web technologies, integration standards, and related products that now offer increased collaboration at many levels. These technologies provide opportunities for more pragmatic, lower-cost implementations and will overcome previous cost barriers to integration.

2.7.2 AMI and Communication Networks

Advanced metering infrastructure (AMI) is yet another important enabler, one that some people consider synonymous with smart grid. Because of its two-way communication capabilities, AMI has created many new opportunities, including:
• More timely measurement of usage, providing opportunities for new pricing options beyond billing that's based on total monthly consumption.
• Automatic detection and confirmation of outages, with automatic verification of restoration.
• Detection of customer-level power quality issues, such as momentary outages and voltage levels.

44

Providing a gateway to home area networks, such as those now provided using ZigBee, where home devices can react to pricing and load control signals as needed to implement demand response programs. Management of schedules for local energy consumption, where the user can minimize costs based upon their preferences and the utility can balance loads and make better utilization of the distribution networks.
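To make the first opportunity concrete, the sketch below prices the same day of interval reads under a time-of-use tariff and under a flat rate. The rates, peak window, and read format are illustrative assumptions invented for this sketch, not part of any standard tariff.

```python
# Hypothetical illustration: interval (kWh) reads from an AMI meter priced
# under a time-of-use (TOU) tariff versus a flat monthly-consumption rate.
PEAK_HOURS = range(14, 20)          # 2 p.m. to 8 p.m., an assumed peak window
PEAK_RATE, OFFPEAK_RATE, FLAT_RATE = 0.24, 0.09, 0.13  # assumed $/kWh

def tou_charge(interval_reads):
    """interval_reads: list of (hour_of_day, kwh) tuples, e.g. 15-minute reads."""
    return sum(kwh * (PEAK_RATE if hour in PEAK_HOURS else OFFPEAK_RATE)
               for hour, kwh in interval_reads)

def flat_charge(interval_reads):
    """Price the same reads as if only total consumption were known."""
    return FLAT_RATE * sum(kwh for _, kwh in interval_reads)

# One day of synthetic data: four 0.3 kWh reads per hour.
reads = [(h, 0.3) for h in range(24) for _ in range(4)]
print(f"TOU bill:  ${tou_charge(reads):.2f}")
print(f"Flat bill: ${flat_charge(reads):.2f}")
```

The point of the comparison is simply that interval granularity is what makes the TOU price signal computable at all; a monthly register read cannot distinguish peak from off-peak energy.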

Because of their role as an enabler of the smart energy ecosystem, communication networks should be considered a primary component in any architectural blueprint. The field networks currently used to communicate with AMI devices are typically private, often using proprietary or utility industry-specific protocols. Alternatively, broadband internet services offer a communication infrastructure that is open, cost effective, higher bandwidth, and already widely deployed.17 Because it is already deployed, it is cost competitive with the lower performing utility-specific infrastructure. The recent FCC commitment to net neutrality removes the biggest remaining broadband concern. As long as security is addressed up front, metering and home area network (HAN) communications infrastructures allow new families of devices to be added to the set of monitored and controllable devices on the grid, including:
 Smart thermostats
 Smart appliances
 Plug-in hybrid electric vehicles (PHEVs), which can be in states for charging, storage, and discharging
 ZigBee18 Smart Energy (SE) profile devices
 HomePlug devices
 IPSO devices
 Residential solar and wind
 Building automation

A new generation of field and home devices that have the ability to make local decisions using two-way communication capabilities will allow customers to better monitor, control, and schedule energy consumption, as well as respond to demand response events and pricing signals. Utilities or independent service providers could use these devices to extend their operational capabilities by facilitating registration of the devices in energy programs that permit the power provider to adjust schedules to provide more efficient and balanced operation of distribution networks.19

17 It should be noted that there is a price for openness and cost effectiveness: as metering infrastructures and gateways to HANs leverage the internet, the overall architecture must pay careful attention to security issues.

18 ZigBee is a set of specifications created by the ZigBee Alliance and built around the IEEE 802.15.4 wireless protocol, targeting low-power, low-cost sensor networks.

2.7.3 New Computing Paradigms

New computing paradigms will require new approaches to the smart energy ecosystem. These paradigms include:
 Advances in communications technology
 Advances in storage
 Cloud computing technologies
 Datacenters
 Participation of unreliable entities
 Scale

For example, multiple cores in processors will be commonplace. Applications will need to transition to multi-core, multi-processor, multi-threaded designs. Inexpensive, low-power, massively parallel computing will dominate infrastructures and drive application design. Even while preserving existing investment through co-existence, application disaggregation will be necessary to capitalize upon new hardware platforms. It is clear that Moore's Law will be maintained via scale-out rather than continued scale-up via faster clock rates and processor capacities. A transition to multi-threading, and to new technologies such as MapReduce and Hadoop for solving massively parallel, complex analysis problems, will emerge.

In addition, communication capacities, both wireless and hardwired, continue to expand. Indeed, bandwidth is expanding faster than Moore's Law. However, communication can be unreliable, either at certain times or in certain geographic locations (commonly referred to as "cell holes"). Solutions will need to be flexible and resilient to momentary loss or interruption of communication. As a result, autonomous operation will need to be a constant consideration.

The scale of connected smart energy systems will grow to new levels with the addition of the active participation of loads (end-use customers) and a multitude of tiny new devices. Tight coupling of unreliable autonomous participants will itself prove unreliable. Systems will need to be designed to be flexible and adaptive to autonomous behavior. The true measure of success will be building a working system out of small, autonomous, independent, unreliable devices and participants.

As a result, for some parts of the smart energy system, mastership cannot be assumed. The system design should expect the same computing problem to be addressed in multiple locations. For example, micro-grids and integrated control centers may both calculate energy balancing of a given distribution segment:
 In the case of micro-grids, the solution can support effective operation of the micro-grids in the event of loss of control center communications.
 In the case of control centers, the solution can be coordinated between all neighboring feeders.

19 It is also important to note that HAN technologies provide a monitoring and control infrastructure that can extend beyond electricity to include other energy and non-energy related services, including gas, water, home security, home monitoring and remote control, pre-payment metering services, and home healthcare.

Real-time energy management systems, whether at the transmission or distribution level, will continue to have rigorous performance and reliability constraints. The smart energy reference architecture recognizes that close coupling of all the new participants to the operation of the real-time systems will prove fragile and unreliable over the long term. Systems must be designed to be adaptive and resilient to the autonomous, independent, potentially unexpected or non-responsive behavior of the new participants, whether at scale, as in the case of end-use residential customers, or in bulk, as with large-scale renewable energy sources.
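To illustrate the scale-out point above, here is a minimal sketch of contingency analysis in the MapReduce style: because each N-1 case is independent, the cases can be mapped across worker processes and the violations reduced into a single report. The network, the stand-in "solver," and the 90% threshold are all invented for illustration; a real deployment would call an actual power-flow engine per case.

```python
# Minimal scale-out sketch: N-1 contingency cases are independent, so they can
# be "mapped" across worker processes and the violations "reduced" to a report.
from multiprocessing import Pool

BRANCHES = [f"branch-{i}" for i in range(200)]   # hypothetical network elements

def solve_contingency(outaged_branch):
    """Map step: run one power flow with a single element removed (stubbed)."""
    # Deterministic stand-in for a solver's worst post-contingency loading.
    overload = sum(ord(c) for c in outaged_branch) % 100 / 100.0
    return (outaged_branch, overload)

def main():
    with Pool() as pool:                          # scale-out across CPU cores
        results = pool.map(solve_contingency, BRANCHES)
    # Reduce step: keep only the cases that violate the assumed limit.
    violations = [b for b, loading in results if loading > 0.9]
    print(f"{len(violations)} contingencies cause overloads: {violations[:5]}")

if __name__ == "__main__":
    main()
```

The same decomposition applies whether the workers are cores on one server or nodes in a cloud datacenter, which is why the paradigms listed above favor disaggregated, loosely coupled application design.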


3.0 Architecture

A reference architecture is a consistent framework that can guide implementation within a particular domain. The Microsoft SERA reflects best practices and attempts to understand and incorporate the likely impacts of technical, business, and regulatory trends. The resulting implementations and deployments then form a smart energy ecosystem. The incredible diversity of energy generation and delivery systems makes it impossible to coherently offer a single, detailed view of one particular architectural framework that will work in every instance. The Microsoft SERA is instead intended to address prevailing systems and issues in enough detail to be useful, but without so much detail as to be untenable. SERA seeks to provide in one place a level of understanding about those products and technologies that exist in dozens of sources. It's our hope that if an organization is interested in some specific smart grid component, say, implementing demand response solutions, it will find enough information here to know that Microsoft and its partners have the technology components that would fit a larger framework of capability. SERA components include:
 Approach
 User Experience
 Collaboration
 Information
 Application Architecture
 Deployment Strategies
 Integration
 Security


3.1 Approach

The Microsoft SERA offers an approach based on five foundational pillars:

Figure 9: The five pillars of SERA's approach offer benchmarks for capabilities.

 Performance Oriented Infrastructure
 Holistic Life-User Experience
 Operations Optimization
 Partner Enabling Rich Application Platform
 Interoperability

3.1.1 Performance Oriented Infrastructure

A performance-oriented infrastructure includes those features that make an architecture complete and appropriate to business needs. These include:
 Economic: The infrastructure must provide cost effective means to deploy and integrate functionality.
 Deployment: Components have to consider flexibility in how and where they can be deployed.
 Location agnostic: Services are designed so that they can be deployed on-premise or in the cloud.
 Always-connected: Users and software components have access to platforms and services wherever they are located.
 Manageability: Infrastructure components can be efficiently deployed, managed, and monitored.
 Transferability: Functionality and information can be migrated easily from one version of underlying infrastructure components to another with minimal interruption or intervention.
 Secure: Deployed components, functionality, and associated information are protected from unauthorized access or malicious attacks.
 High performing and scalable: Support for more users, larger models, increased transaction volumes, etc. can be accommodated through increasing hardware performance (scale-up) or the linear addition of hardware and network resources (scale-out).
 Virtualization: Components can be deployed in a manner that optimizes the use of hardware resources.
 Highly available and self-healing: Support for transition to new equipment in the event of equipment failure.
 Disaster recovery and backup: Capability to move to a new platform or facility, to recover from a natural disaster or terrorist event, and to back up results to facilitate the transition.

3.1.2 Holistic Life-User Experience

A holistic life-user experience enables all participants to view the smart energy ecosystem from the perspective of other participants. To Microsoft, this equates to ensuring that the host company understands how customers experience the world and how technology fits into that experience. A technology architecture that facilitates the smart energy ecosystem will then necessarily consist of:
 A rich, integrated technology user experience for home, car, control center, and field workers.
 Browser-based collaboration using SMS, mobile devices, personal computers, tablets, and ruggedized personal computers.
 Non-conventional devices, like TVs and in-home displays.
 Supporting functionality for collaboration, information aggregation, and mash-ups through the use of Microsoft Office SharePoint Server and services.
 A unified communications infrastructure, where the nature of the underlying communication infrastructures is transparent to users.


3.1.3 Operations Optimization

Microsoft SERA permits an energy-operating network to connect smart devices. An optimized energy network incorporates:
 Flexible communications: Deployments can leverage a variety of communications paths and technologies and are easily reconfigured, minimizing the time required to make new information available to users.
 Smart connected devices: Intelligence is added to devices, and they are connected to the communications network, enabling both intelligent autonomous operation and visibility of the operation of the network.
 Desktop, server, embedded, and mobile operating systems: Operating systems (OS) can be effectively employed by leveraging the right OS, at the right level, for the right role, with the right performance.
 Application architecture: The architecture supplies application infrastructure and services for commonly used capabilities so developers can focus on domain-specific functionality, optimizing speed to market and the reliability of solutions.

3.1.4 Partner-Enabling Rich Applications Platform

Microsoft SERA acknowledges from the outset that no one vendor is able to provide all the application functionality needed to implement the smart energy ecosystem. As such, this reference architecture seeks to offer a rich platform that makes it easy for partners to develop and deploy their applications. Notable aspects of the applications platform include services for:
 Analytics: Rich statistical and analysis packages for data mining, discovery, and reporting for diverse information consumers.
 Collaboration: Tools, services, and applications enabling interaction between users and equipment.
 Complex event processing: Stream processing engines that can detect and filter events.
 Integration: Messaging and database technology for linking together workflow, processes, and data optimization.
 Service bus: Services and components for the communication of device and equipment data.
 Storage: Repositories for capturing and enabling analysis of utility operational and business data.
 Workflow: Services for managing the automation of applications as well as business processes.

By providing these services to developers, Microsoft partners need only apply their expertise to the solution of domain-specific problems, leaving the platform to provide the common capabilities needed across many vertical domains. As a result, multiple vendors can provide competitive, platform-consistent products and services, giving customers better offerings and more choices that are easy to leverage.
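As a small illustration of the complex event processing service mentioned above, the sketch below filters a raw measurement stream down to a composite event of interest, in this case a sustained voltage sag. The event shape, thresholds, and window length are invented for this sketch; a production deployment would use a stream processing engine rather than hand-rolled code.

```python
# Minimal complex-event-processing sketch: detect a composite event
# (a sustained voltage sag) in a stream of simple per-unit voltage readings.
from collections import deque

def detect_voltage_sags(stream, window=3, threshold=0.95):
    """Emit an alert when `window` consecutive readings dip below threshold."""
    recent = deque(maxlen=window)
    for meter_id, volts_pu in stream:      # single-meter stream for simplicity
        recent.append(volts_pu)
        if len(recent) == window and all(v < threshold for v in recent):
            yield (meter_id, list(recent))
            recent.clear()                 # start a fresh detection window

events = [("m1", 0.99), ("m1", 0.94), ("m1", 0.93), ("m1", 0.92), ("m1", 1.00)]
for alert in detect_voltage_sags(events):
    print("sustained sag detected:", alert)
```

The value the platform adds is exactly this kind of windowing and filtering logic, provided once as a service so partner applications only supply the domain rule.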


3.1.5 Interoperability

Microsoft SERA seeks to enable interoperability so that the ecosystem can develop in a cost effective manner; otherwise, the vision for the ecosystem will go unfulfilled. New solutions must work with previous utility technology systems in order to protect those investments. Pragmatic integration approaches will need to be considered, and the SERA should be flexible enough to allow deploying new components without custom integration. Interoperability considerations include:
 Standards: Consistent industry-wide interfaces that allow new component deployment.
 Published interfaces: Interfaces that are transparently publicized for open industry use, even where a standard is not available, and that satisfy important interoperability needs.
 Information models: Consistent ontology for referring to equipment and assets to enable exchange of information throughout the enterprise and the value chain.
 User interfaces: Consistent content and behavior in presentation of information and interaction with the user.
 Components: Well defined sets of functionality packaged for developer and integrator reuse.
 Message formats: A key construct of service-oriented architecture (SOA)20 defining format and content so that services can exchange messages using the defined format (e.g., the publish-subscribe pattern).
 Interface definitions: All the elements of an interface, so that applications can be independently developed to leverage the interface.
 Communication protocols: Format, content, and exchange mechanism, so applications can be written to transfer information using the protocol definition.
 Security: Definition of the security implementation, including authentication, authorization, identity lifecycle management, certificates, claims, and threat models, to enable secure interoperable design and deployment.

3.2 User Experience (UX)

In addition to the reference architecture having a codified approach, the overall framework must identify goals and characteristics that can be achieved, for both the utility company's employees and its customers. This section on user experience and subsequent sections will discuss those goals and characteristics, for internal and external audiences.
20 In computing, service-oriented architecture (SOA) provides a set of principles governing concepts used during phases of systems development and integration. Such an architecture will package functionality as interoperable services: software modules provided as a service can be integrated or used by several organizations, even if their respective client systems are substantially different. (Wikipedia.org)


Goal 1: Interfaces must provide utility company employees with access to information and services appropriate for their role within each organization. Similarly, utility company customers must be presented with information that is easy to understand, graphically compelling, and intuitive, in terms the consumer can identify with.

Goal 2: The UX for utility employees and customers should also allow for a composable front-end that provides consistency in how data is displayed, but does not lock an enterprise into using yet another standalone portal that does not integrate with assets the enterprise already owns.

Characteristic 1: Role-based UX will necessarily have capabilities that provide users with secure, location independent access to functionality.

Characteristic 2: Beyond these basic requirements, there is a need for richness, efficiency, quality, and consistency of UX that depends upon information technology systems that enable visualization, analysis, business intelligence, and reporting.

3.2.1 Visualization

This reference architecture contains as a primary tenet the ability to integrate information from many sources into a visual representation in a location agnostic manner. Visualization tools offer users the ability to select which data types, often referred to as thematic layers, they want to view, and then to create overlays that inform and instruct. As an example, visualization can create wide-area situational awareness capabilities to manage the grid, or display real-time phasor measurements across a grid. The visualizations might be created by layering the utility network information over various source sets, say a topographical, satellite, or roadmap image that includes weather conditions, like fog or rain. It should be noted that whereas many sets of source information may be read-only, others may be transactional through underlying services.


Figure 10: This screenshot of the Alstom Grid e-terravision solution demonstrates how visualizations can depict numerous system notifications and different terrain and weather conditions over a transmission network. (source: Alstom Grid, e-terravision)

The integration of data to create visual representations produces numerous useful applications for the smart energy ecosystem, including:
 Transmission operations staff can prepare emergency and repair crews at times when bad weather (e.g., high winds, ice, and lightning) may cause outages.
 Network operations can use anticipated changes in temperature, wind, and luminosity to revise load forecasts and adjust generation and interchange schedules.
 Load planners can visualize changes in wind patterns that may affect output from wind farms and require replacement energy to be purchased or produced by alternate generation sources.

The technology capabilities of visualization tools require:
 the ability to connect to a diverse set of data feeds securely
 the ability to link to a variety of data sources, and then correlate objects to a geo-coded spatial position
 the ability to overlay geospatially a wide variety of information
 the ability to drill through the display and view the underlying data that drove what was presented to the user; this requires mapping and information integration of the underlying data for the graphics rendered on the display
 computing performance
 rich graphics rendering
 user-configurable composite applications
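As a toy illustration of the overlay idea, the sketch below correlates network assets to geo-coded positions and joins them against a weather layer expressed as a bounding box, flagging the assets inside a storm cell. All names and coordinates are invented for this sketch; a real implementation would use GIS services and true polygon geometry.

```python
# Minimal sketch: join geo-coded network objects against a weather overlay so
# operators can see which assets sit inside a storm cell.
SUBSTATIONS = {"sub-A": (47.61, -122.33), "sub-B": (47.10, -122.90)}
STORM_CELL = {"lat": (47.5, 47.7), "lon": (-122.5, -122.2)}   # bounding box

def in_overlay(point, box):
    """True if a (lat, lon) point falls inside the overlay's bounding box."""
    lat, lon = point
    return (box["lat"][0] <= lat <= box["lat"][1]
            and box["lon"][0] <= lon <= box["lon"][1])

at_risk = [name for name, pos in SUBSTATIONS.items() if in_overlay(pos, STORM_CELL)]
print("Assets inside the weather overlay:", at_risk)   # -> ['sub-A']
```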


3.2.2 Analysis

The quantity and quality of meter data coming from smart meters is creating what EPRI terms a "Data Tsunami." But it would be shortsighted to overlook all the data that's being generated by other utility operations, including generation, delivery, sales and services, and regulatory compliance and controls. To think about Big Data in these larger contexts, it's important to begin an analytical process with the following starter questions:
 What really is the issue?
 How do we take steps to address the impact so that it can be handled through our evolving architectural and enterprise integration strategies and investments?
 How can we put a technology foundation in place to handle massive amounts of data in a rapid and effective way?
 How much data do we keep, and why?
 What tools can be used, or are needed, to utilize the data?
 How do you know if Big Data is right for you? What are the guiding principles to determine what is good enough for you?
 Is storage really so cheap that you don't have to worry about keeping all data?
 How do we align quantity versus quality?
 How does the enterprise data warehouse fit in this pattern? Data historians?

Electric utilities perform a variety of electrical network analytical computations in the course of their everyday work managing the grid. Some of these computations are highly specialized and complex, and often involve taking a model and applying current, historical, or possible future states using a diverse set of data sources. Examples of these situations include:
 Analytics of large quantities of meter data to produce actionable information and sound decision-making.
 Contingency analysis to determine if the network will remain stable if one or more pieces of equipment fail.
 Dynamic feeder loading analysis for customer energy usage at the distribution feeder level.
 Feeder analysis, where the voltage and loading characteristics of a given feeder can be studied.
 Market analysis of customer responsiveness to demand response programs.
 Outage analysis to determine the point of failure given a set of trouble calls and other inputs.
 Power flow, where power, current, and voltages are calculated for a point or node in the network.
 Reliability analysis to determine the failure rates of certain types of equipment.

Factors that should be considered for analysis tools include the ability to:
 drive analysis using different input sources
 produce high-speed, low-latency, easily configurable and rich expressions
 look at underlying network models at different points in time
 integrate output of analysis with a variety of visualizations.

Historically, these analysis functions have been implemented as applications in energy management systems and distribution management systems. New smart devices and more powerful computing platforms enable new architectures for deployment. For example, metering systems have access to customer outage information and can identify outages much closer to the field equipment. Contingency analysis requires significant compute power, solving many individual power flows with potentially failed equipment removed, so massively parallel high-performance computing provides the potential for detecting contingencies much more quickly than conventional deployments. Packaging the analysis functions as location agnostic services allows for execution at the most appropriate location.

3.2.3 Business Intelligence

Business intelligence helps utility executives and managers acquire a better understanding of the commercial context of activities, thereby improving the value of their decisions and enhancing their decision-making capabilities. Business intelligence tools often leverage information captured within a data warehouse to create information, and then present that intelligent data to the right people through a variety of visual mechanisms that make the most sense for the task at hand. Technology factors to be considered include:
 Ease of development (including composability, such as third party Web parts)
 Breadth of visualization capabilities
 Integration capabilities
 Ease of deployment
 Ease of maintenance and support
 Secure access
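To ground the dynamic feeder loading example from the analysis list above, here is a minimal sketch that rolls customer interval reads up to their feeders and flags intervals approaching a feeder rating. The readings, ratings, and meter-to-feeder mapping are invented for illustration.

```python
# Minimal feeder loading sketch: aggregate customer interval reads up to the
# feeder and flag intervals that approach an assumed feeder rating.
from collections import defaultdict

METER_TO_FEEDER = {"m1": "f1", "m2": "f1", "m3": "f2"}   # assumed connectivity
FEEDER_RATING_KW = {"f1": 10.0, "f2": 8.0}               # assumed ratings

# (meter, interval_index, average kW over the interval)
reads = [("m1", 0, 6.2), ("m2", 0, 4.5), ("m3", 0, 3.0),
         ("m1", 1, 2.0), ("m2", 1, 2.5)]

loading = defaultdict(float)
for meter, interval, kw in reads:
    loading[(METER_TO_FEEDER[meter], interval)] += kw    # roll up to feeder

for (feeder, interval), kw in sorted(loading.items()):
    pct = kw / FEEDER_RATING_KW[feeder]
    flag = "  OVER 90%" if pct > 0.9 else ""
    print(f"{feeder} interval {interval}: {kw:.1f} kW ({pct:.0%}){flag}")
```

The same aggregation, run continuously over AMI data and joined to the network model, is what lets the analysis functions described above move from monthly snapshots to near-real-time feeder visibility.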

3.2.4 Reporting

In addition to business intelligence tools, a utility, market operator, or service provider may define, create, maintain, publish, and/or use a variety of reports, including:
 Demand response event history
 Equipment failures
 Generation schedules
 Load forecast
 Load history
 Market nodal prices
 Market transaction history
 Meter usage
 Outage history
 Outage schedules

While some reports may be generated periodically for widespread distribution or long-term retention, others may be generated on demand, where a user may request a report with specific filters for a given period of time. There also may be constraints on access, where some reports may be public, while others may only provide specific information to certain users. Where business intelligence focuses on making better decisions, reporting is more general in nature, providing information for a broader set of users for a broader set of purposes.


3.3 Collaboration

Collaboration will be another characteristic of the smart energy ecosystem of the future. By collaboration, we mean the need for people, organizations, applications, and/or devices to actively participate and interact upon sets of inter-related business processes. Examples of collaboration include:
 Aggregation, where a service provider will identify, register, and manage a set of resources (e.g., distributed generation, controllable loads, etc.) and their participation in market programs.
 Demand response, where, as an extension of the energy market processes, devices may respond automatically to market pricing signals to take local actions related to energy usage.
 Energy markets, where organizations will register resources and participate in the trading and settlement of energy in different markets.
 Load balancing, where load and available energy supply must be balanced. For example, the charging of plug-in vehicles may require coordination between devices (including vehicles) on the feeder, between devices within substations, and with energy market dispatch schedules.

This section describes various information and data exchange styles, including:
 Collaboration
 Orchestration
 Notification Infrastructure
 Chain of Command Notification plus Workflow

3.3.1 Collaboration

Utilities can use the Web and associated technologies and products as primary infrastructure to create a collaborative environment of several different forms, including:
 Human-to-human collaboration, with delivery mechanisms such as Web portals and messages.
 System-to-human collaboration (and vice versa), with delivery mechanisms such as messages, emails, instant messages, feeds, alert indicators, etc.
 System-to-system collaboration, with data exchange automated via orchestrations or publish-subscribe message collaboration.

The underlying services can be deployed through cloud-based computing as provided by Windows Azure or within an enterprise with secure external access through a portal. Protocols that simultaneously address both security and privacy will be required to enable Web-based collaboration, as well as associated orchestration and notification.


3.3.2 Orchestration

The term orchestration is usually applied to more complex, long running processes that coordinate the execution of multiple Web services into automated business processes; these may have many steps and require user interaction. This is distinct from the term workflow, which is commonly applied to a set of coordinated, short running tasks. Users (either specific individuals or those holding a given role in an organization) and/or systems participate in orchestrated business processes either within an organization or across organizational boundaries.

3.3.3 Notification Infrastructure

Collaboration requires the ability to notify a user or group of users whenever there is a condition of potential interest so that, if necessary, they can take appropriate actions. One example would be when an industrial participant in a demand response program is made aware that a load curtailment is scheduled for later in the day; the participant can then revise factory production schedules. Notifications also can cover a wide range of other conditions, including:
 perimeter security notices
 equipment state changes
 alarm limit violations.

Since users are fundamentally mobile (they may be at work, at home, or otherwise away from their personal computer), notifications can be issued using:
 e-mail
 Web feeds (such as RSS or ATOM)
 dashboard icons
 SMS messages (to mobile devices)
 voice, where voicemail can be issued
 mobile platform notification systems (APNS, WNS, etc.).

For some notifications, a positive acknowledgement is important. The basic need is to be sure that the user receives and acknowledges the message within a reasonable time. If they do not, it may be necessary to escalate/transfer that message to another user. Examples of this could be planned outages or emergency load reductions, where the user needs to be aware that power may be out for a period of time so they have appropriate advance warning. In other cases, such as voluntary load reductions, the acknowledgement may need to be more involved, where the user can indicate whether or not they will participate.

Business process automation also requires the ability to filter the notification types that a user or group will receive and how they receive them. Role-based notifications limit the distribution to subscribers relevant to the event, and the notion of presence can ensure notifications go to users that are available at the time of notification. Subscription patterns and rules engines can be used to decouple notification subscriptions from actual business process flow. For example, some users may be interested in informative events such as pricing signals, where others are only interested in emergency events.

3.3.4 Chain of Command Notification plus Workflow

Combining notification with managed workflow can be an effective way to organize human process within the smart energy ecosystem. As the number of participants in the ecosystem increases and the nature of activities becomes more diverse, assigning tasks and tracking the completion of activities will be challenging. Combining notification with managed workflows can also be a way to improve the timeliness of resolution of issues, be they field or enterprise related. Cross-organizational boundaries can be efficiently handled through managed workflows, and notification automates collaboration for resolution of the issues. Tracking and reporting on the managed workflow can also provide evidence of timely response and notification for regulators.
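A minimal sketch of the chain-of-command pattern just described: deliver a notification, wait for a positive acknowledgement, and escalate up a role hierarchy if none arrives. The roles, channel stub, and timeout are invented; a real deployment would ride on the workflow and messaging services discussed earlier rather than this standalone code.

```python
# Minimal chain-of-command notification sketch: notify, await acknowledgement,
# escalate to the next role if no ack arrives within the timeout.
import time

CHAIN = {"field-crew": "shift-supervisor", "shift-supervisor": "ops-manager"}

def send(recipient, message):
    """Stub delivery channel; pretend no acknowledgement was received."""
    print(f"notify {recipient}: {message}")
    return False

def notify_with_escalation(role, message, timeout_s=0.1):
    while role is not None:
        acked = send(role, message)
        time.sleep(timeout_s)              # stand-in for waiting on an ack
        if acked:
            return role
        role = CHAIN.get(role)             # escalate up the chain of command
    raise RuntimeError("no acknowledgement anywhere in the chain")

try:
    notify_with_escalation("field-crew", "planned outage on feeder f1 at 14:00")
except RuntimeError as err:
    print(err)
```

Logging each hop, as the print statements hint at, is what produces the audit trail of timely response that regulators may ask for.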

3.4 Information

Information within the smart energy ecosystem includes logical and physical models that are used to describe database schemas, message structures, and interface definitions. Due to the complex nature of the electricity infrastructure and the associated business processes, many different systems, applications, and sources of information are used.

Figure 11 demonstrates how information can be organized physically, logically and conceptually as well as by functional area.


This information section discusses:
 Standards and Domain Models
 IEC Utilities Common Information Model
 Master Data Management
 CIM Topology for Data Exchange
 Measurement Data
o Historians
o Operations Databases
o Data Warehouses
o Spatial Data Considerations
 Interoperability Messages and Interfaces
 Big Data
 Event Cloud and Complex Event Processing
 Moving from Raw Data

3.4.1 Standards and Domain Models

The smart energy ecosystem will require a wide variety of information to be managed, accessed, and analyzed. Some of this information is specific to the domain of the electricity industry, while some is common to a wide variety of other industries. In either case, the specific information models can be viewed as ontologies. In computer science and information science, an ontology is a formal representation of a set of concepts within a domain and the relationships between those concepts. It is used to reason about the properties of that domain, and may be used to define the domain. Many of these information models are either directly or indirectly defined by industry standards, such as the IEC Common Information Model (CIM), while others can be a consequence of more broad-based standards or even proprietary information models that are defined by systems vendors. The collective set of information models used by the electricity industry can be viewed as a federation of ontologies. Figure 12 illustrates a view of the logical relationships between domain models either defined or implied by the various standards and specifications that are being proposed by the IEC. This reference architecture is known as the seamless integration reference architecture (IEC 62357).

Figure 12: The IEC 62357 Reference Architecture for Power System information exchange.

As seen, there are many different standards relevant for smart grids. The layers in the upper part of Figure 12 (above the Data Acquisition and Control Front-End layer) mostly address business integration, data definitions, and applications; the CIM is the only standard located in this area. All layers below this area are standards used for direct communication with field devices. The two vertical boxes at the left of the diagram represent cross-cutting standards for data and communication security.


The following table provides relevant standards within the reference architecture.

Standards in the Reference Architecture21

60495 and 60663: Planning of (single-sideband) power line carrier systems. (WG20)
60870-5: Standards for reliable data acquisition and control on narrow-band serial data links or over TCP/IP networks between SCADA masters and substations. (WG3)
60870-6: Standards for the exchange of real-time operational data between control centers over Wide Area Networks (WANs). This standard is known officially as TASE-2 and unofficially as ICCP. (WG7)
61334: Standards for data communications over distribution line carrier systems. (WG9)
61850: Standards for communications and data acquisition in substations. These standards are known unofficially as the UCA2 protocol standards. They also include standards for hydroelectric power plant communication, monitoring, and control of distributed energy resources and hydroelectric power plants. (WG10, 17, 18)
61968: Standards for Distribution Management System (DMS) interfaces for information exchange with other IT systems. These include the distribution management parts of the CIM and eXtensible Markup Language (XML) message standards for information exchange between a variety of business systems, such as meter data management, asset management, work order management, Geographical Information Systems (GIS), etc. (WG14)
61970: Standards to facilitate integration of applications within a control center, exchange of network power system models with other control centers, and interactions with external operations in distribution as well as other external sources/sinks of information needed for real-time operations. These standards include the generation and transmission parts of the Common Information Model (CIM), profiles for power system model exchange and other information exchanges, and XML file format standards for information exchange. (WG13)
62325: Standards for deregulated energy market communications. (WG16)
62351: Standards for data and communication security. (WG15)

3.4.2 International Electrotechnical Commission (IEC) Common Information Model

The Common Information Model (CIM) is an international standard, codified by the International Electrotechnical Commission (IEC), for data and model exchanges amongst transmission and electrical distribution management systems.22 CIM represents a universal information model for electrical utility management. It offers a common standard not only for the physical aspects of a utility, such as lines or circuits, but also for internal and external communication.23

21 IEC 62357-1 TR Ed.1: Reference architecture for power system information exchange.
22 Core IEC Standards, and IEC 61970 Energy management system application program interface (EMS-API), Part 301.
23 An Introduction to IEC 61970-301 & 61968-11: The Common Information Model, by Alan McMorran. (Glasgow: Institute for Energy and Environment, Department of Electronic and Electrical Engineering, January 2007)


The most important series of standards are IEC 61970, IEC 61968, and IEC 62325:
 The IEC 61970-301 standard, which is part of the IEC 61970 series, is the CIM core describing energy management systems. This includes many key components such as power lines, circuits, transformers, and their interconnections.
 The IEC 61968-11 standard, part of the IEC 61968 series, describes the part of the model needed by the IT systems of distribution companies, e.g., distribution management systems and outage management systems. It is derived from IEC 61970-301 and refers to many of its components.
 The IEC 62325 standard provides a framework for energy market communications.

Figure 13: Overview of the CIM packages' highest level and the corresponding standards.24 [UML package diagram showing the version classes of the IEC 61970, IEC 61968, and IEC 62325 packages within TC57CIM (e.g., IEC61970CIM15v15, IEC61968CIM11v06, IEC62325CIM01v03).]

24 Figure 13 is from IEC 62357: TC57 Architecture. Part 1: Reference Architecture for Power System Information Exchange, Second Edition Draft, Revision 6, October 1, 2011, pg. 51.


In addition to the official working groups of the IEC that develop the formal standards, there is also the CIM Users Group (CIMug), whose aim is to support the development of the CIM. The CIM Users Group is a forum where utility managers, vendors, and integrators who use the CIM can meet and exchange views. The group meets regularly and hosts a website,25 which is a rich source for all sorts of documents related to the CIM. In addition, the CIMug also provides a liaison with the standards committee for suggestions to improve or extend the CIM.

Turning to the international diffusion of the CIM, it has been a legally mandated standard in the United States for several years. It has also been gaining wide acceptance in Asia and Europe. For instance, the European Network of Transmission System Operators for Electricity (ENTSO-E), which comprises 41 members in 34 countries, is currently migrating to the CIM with respect to power model exchanges and day-ahead forecasts of their members. More than 80 applications and more than 60 vendors support the CIM.

3.4.2.1 Models and Tools of the CIM

The CIM gives an abstract representation of objects, their attributes, the associations between them, and the methods they provide. It is a common model of three large domains (with interconnections): a technical one for energy management systems, one for modeling utility distribution systems, and one for energy markets. As a formal model, the CIM gives an abstract yet precise description of the problem domain, without committing itself to a concrete implementation. It makes a standardized RDF/XML syntax available (IEC 61970-552-4) for serializing the CIM or a subset thereof. As an alternative to the RDF format, it also provides XML Naming and Design Rules (NDR), which define a canonical way to derive an XML schema (XSD) from the model. The abstraction of the CIM from any particular format also enables the model driven transformation of data from and to the CIM standard. The transformations remain the same even when the concrete format changes, say from files to a database.

3.4.2.2 Extending CIM

Within an enterprise, there may be a wide variety of metadata (data about data) that needs to be managed. This metadata describes the information that is managed within databases and in the definition of information exchanges. The metadata can represent both logical data models (e.g., using UML) and physical design artifacts (e.g., XML Schemas, DDL, RDF Schemas, etc.). For integration purposes, it is important to be able to define and manage the mappings between models and artifacts. For example, when integrating systems, the source and target systems may use different but overlapping models, where it is necessary to map (transform/translate) the information from one system into a form needed by the target.

25 The CIM Users Group.


Metadata is typically derived from a variety of sources, including:
 The CIM logical model as defined in UML, which provides logical coverage for a variety of standards, including IEC 61970, 61968, and 61850.
 Other standard logical models, including those defined by MIMOSA and the Open Geospatial Consortium (OGC).
 Proprietary logical models, as might be either provided by vendors or reflected by their products.
 Design artifacts as provided by a variety of standards and specifications, including IEC 61968, MultiSpeak, IEC 61970, IEC 61850, and OpenADR.
 Design artifacts as provided by vendors that are reflective of their product interfaces.
 Design artifacts that are a consequence of locally developed applications.

A repository can be used to manage metadata. The metadata can take a variety of forms, including UML, XMI, XSDs, RDFS, OWL, etc. The primary use of the metadata is to support integration efforts. Interfaces and information exchanges are defined within the IEC standards efforts using a three-layer model:
1. Information models are the highest level. Within an enterprise and associated integration, the information models that form an overall enterprise information model may be derived from the IEC CIM, extensions to the CIM, and other models as may be sourced from other standards and vendor products.
2. Contextual profiles define the next level. A contextual profile is a formal subset derived from the information models that defines the content of an information exchange.
3. Design artifacts comprise the lowest level. Design artifacts consist of:
a. XML Schemas
b. RDF Schemas
c. Database schemas
These are used to define physical models in the form of interfaces, messages, and databases.


Figure 14: The contextual profile is a crucial consideration because each profile defines a logical model for an information exchange.

These profiles form the basis of many IEC standards, such as those used for network model exchanges and IEC 61968 messages.

3.4.2.3 Enterprise Data Models

The Common Information Model (CIM), IEC 61968/IEC 61970, provides an information model that works well for model exchange, information exchange, business process integration, and service definitions in an SOA architecture. The most complete realm for application of the model is the wholesale network grid model. Significant advances have also been made in the distribution grid equipment definition, as well as progress on other assets in the power system. However, there are a number of business areas where the CIM must be complemented with a holistic enterprise data model. For example, integration of operational data with financial system and customer information system data requires data definitions beyond the scope of the CIM. The relationship of industry data models to the CIM is an important point for leveraging an industry data model in utilities. The ADRM Software Utility Industry Data Model includes the equipment represented in CIM, but also includes all the other business area models.

CIM-based operations applications, and even an operational data store built by deriving its schema from a CIM information model, can be used to feed the enterprise data warehouse built using the industry data model. Operational data stores can also be built using the industry data models. Integration with the CIM-inspired operations solutions, as well as legacy applications, can be viewed as data aggregation, with the solutions acting as data providers in an overall star aggregation architecture. Industry data models are most often considered in the context of a data aggregation ETL scenario, typically as a foundation for advanced business analytics and BI reporting. However, they can add value to a number of roles in a SERA context:
 Advanced Analytics: Seeding Big Data and complex social analytics
 Business Intelligence: Reusable, extensible analytics/BI
 Business Process: ESB, business process integration, SOA
 Data Aggregation: Enterprise data warehouses, data marts, operational data stores, large SQL & PDW deployments
 Data Integration: ETL, data mapping, third party data integration
 Enterprise Information Architecture: Enterprise data models, business area models, data warehouse models
 Merger & Acquisition: Business level process and data integration
 Solutions: ERP, CRM, applications rationalization, application development

3.4.2.4 Industry Data Models

Industry data models provide enterprise-wide models for industry verticals at varying levels of completeness, depending upon the supplier. The following figure lists some business area models in the ADRM Software model set:


Figure 15: ADRM Software provides a number of Industry Data Models, including a suite of models for the gas and electric utility industry.

Utilities can derive a subset of each of these models to establish a starting point for an enterprise data model for an operational data store or enterprise data warehouse. Leveraging an existing industry data model can significantly reduce the risk and time associated with a utility warehouse project.

Figure 16: Every block on the diagram above explodes out to a detailed attribute/parameter list and linkages between equipment and business entities across all the Business Area blocks. Colors represent individual topics or Business Areas, so it is easy to see the Enterprise Model transcends a number of Business Area Models.

Model density and completeness are major considerations when selecting an industry data model, or for a utility contemplating developing its own. A very complete industry data model means less risk and less time to get to a well-connected and robust enterprise warehouse data model.

3.4.2.5 Data Models for Enterprise Data Warehouses

One area that is proving to be a big driver for industry data models is the creation of enterprise data warehouses to house all the data from smart meters and grid sensors. Utility companies have invested heavily in smart meter installations and smart grid upgrades. In many cases, they've passed those costs to customers with a promise to regulators that the new equipment will build a better, smarter grid enabling new customer services, better relationships, and more efficiency and reliability. Achieving the promise has been difficult, as several challenges have stood in the way. Utilities have purchased smart grid/meter systems with solution-specific data stores containing only small subsets of enterprise data, a situation that forces them to choose whether to grow the meter data management system or the asset management system to meet the enterprise data warehouse need. Adding to the complexity, none of the systems that utilities purchased were designed to aggregate data from the operations, financial, and customer information systems into one location, an accomplishment that would enable building holistic cubes for high performance analytics. Or, in many cases, utilities choose to incrementally build their own databases for smart grid/smart meter data in parallel with getting their operating processes up and running. Their consultants help them start with some basic enterprise or data warehouse (DW) models that get loaded, query-enabled, and running in 6-9 months, typically with warehouses containing 400-600 tables. But each subsequent phase or system add-on also contains only other small pieces of business area models, just enough to meet requirements. Rarely do all the add-ons create a holistic enterprise model that enables deeper analytics and understanding. And the final result may not be well formed and may lack referential integrity.

Other characteristics of these implementations tend to frustrate and slow the utilities. Typically, the DW or enterprise models that utilities develop for their smart grid/meter implementations require large-scale consultant engagements to learn the organization and map its in-place technology and processes. Or they take on the risk of building a database model from scratch: it may not work to enable the analytics that actually help improve line of business decision making, especially if the component parts of the data aren't organized well. To address these challenges, there needs to be a model, with clear ownership and mastership, at an enterprise-wide level. All too often, the process lacks definition for adding subsystem models into an enterprise model for want of a holistic vision. The effect is data that is random, inconsistent, unpredictable, and difficult to manage as a resource. Without a well-defined model for all the business areas involved, the investments in smart grid/smart meters don't coalesce to add value.

3.4.3 Master Data Management

Diverse sets of business processes within an electric utility or ISO are supported by a diverse set of applications. Consequently, a number of different types of master data must be managed. Information of interest to master data management includes:
 Network models, which may include electrical transmission and/or distribution
 Resources, which are primarily generation resources
 Customer data, as needed to identify customer accounts and service locations
 Geographic information, which may be used to derive network models, especially in distribution
 Assets
 Settlements
 Work orders
 Measurement history

The temporal nature of master data must be recognized as well:
 The structure of the electricity grid changes over time, typically through the process of construction, upgrades, and decommissioning.
 The connectivity of the network changes over time, as the positions of breakers and switches are changed.
 The state of the network also changes as load and generation change or tap positions are altered.
 Through the process of maintenance and repair, assets may be replaced.

Each master data manager will have an internal physical model, where data may be exposed or exported using either standard or proprietary interface mechanisms, ranging from files to APIs to messages to database table access.

3.4.3.1 CIM for Master Data Management

Master data are usually seen as the critical nouns of business operations involved with transactional data. Master data are a single source of basic business data used across multiple systems, applications, and/or processes. Typical master data items are customers, products, locations, employees, or assets. Note that master data are non-transactional in nature and rather support transactional processes and operations such as purchases, billings, or e-mail communication. Turning to the handling of master data, the technologies, tools, and processes required to create and maintain consistent and accurate lists of master data are referred to as master data management.26

The most critical nouns of current and prospective energy systems are the energy system devices, such as power stations, decentralized energy resources, grid infrastructures, or end-consumers (e.g., household appliances, electric vehicles, or electric heating systems). Only if the different devices interact effectively and efficiently can today's energy system be successfully transformed into a smart energy ecosystem. Thus, within this context, master data are rather seen as the various devices of the energy system. As a consequence, master data management in this context should be thought of as describing a model enabling effective and efficient interactions between the energy system's master data.

Combining CIM with an ESB enables the efficient and secure accessing of master data from various heterogeneous legacy systems. In the prospective smart energy ecosystem, processes that today involve a series of human activities can be organized less laboriously, since the ESB autonomously collects the respective data from all distributed data sources and routes them to the relevant recipients. This means using CIM-derived messages with an enterprise service bus to automate the information exchange. Because the message definitions are CIM-inspired, and because they can be used in conjunction with a master CIM repository (against which all information exchange is translated or mapped) also residing on the ESB, the manual, human-centric tasks of the information exchange can be performed programmatically.
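A minimal sketch of the pattern just described, assuming a CIM-inspired message envelope (verb, noun, and payload, in the spirit of IEC 61968 message conventions) routed over a publish-subscribe bus. The noun, handlers, and payload are illustrative; a production ESB would add validation against the CIM repository, security, and reliable delivery.

```python
# Minimal publish-subscribe sketch of ESB routing for CIM-style messages:
# each message carries a verb/noun envelope, and the bus delivers it to every
# system that subscribed to that noun.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(noun, handler):
    """Register a system's handler for one CIM-style message noun."""
    subscribers[noun].append(handler)

def publish(message):
    """Route a message to all handlers subscribed to its noun."""
    for handler in subscribers[message["noun"]]:
        handler(message)

subscribe("EndDeviceEvent", lambda m: print("OMS received:", m["payload"]))
subscribe("EndDeviceEvent", lambda m: print("Historian archived:", m["payload"]))

publish({"verb": "created", "noun": "EndDeviceEvent",
         "payload": {"meter": "m-1042", "event": "power-off"}})
```

Because publishers and subscribers agree only on the envelope and the CIM-derived payload, a new consumer (say, an analytics store) can be added by subscribing, without touching the publishing system.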

26 The What, Why, and How of Master Data Management, by R. Wolter and K. Haselden, Microsoft Developer Network, 2006.


Figure 17: The Interface Reference Model of IEC 61968-1 illustrates different sets of business processes and the communication between diverse applications via an ESB.

Given the projected rapid diffusion of decentralized energy resources in the smart energy ecosystem, the idea of combining CIM with an ESB is especially attractive within two business processes: network operation and network extension planning.

3.4.3.2 Master Data Management for Network Operations

Distribution companies continuously have to control the real-time stability of their distribution networks, which might be adversely affected by sudden fluctuations in the energy production of decentralized energy resources or by the corresponding consumption loads. These variations often lead to voltage fluctuations in low and medium voltage grids. That is, instabilities of local networks are transmitted to the distribution network and have to be dealt with either locally at the substation level or globally at the control center level.

Network operation will be organized in a much more flexible and decentralized way in the smart energy ecosystem. Instabilities of local networks will be forecasted, detected, and managed at the local level by intelligent control systems aware of network topologies, future weather conditions, future consumption loads, and the technical conditions of the connected decentralized energy resources, end-consumers, and energy storage devices. Owing to the continuous communication between the distributed data sources and the service buses used by distribution companies, the network control system will be able to effectively avoid and manage network instabilities by changing the topology of meshed networks or by means of local voltage and power-factor correction services.


3.4.3.3 Master Data Management for Network Extension Planning

Distribution companies have to assess to what extent each projected distributed generation unit, like photovoltaic systems, wind farms, or district heating central plants, impacts the stability of local and regional distribution networks. To this end, network planners set up the respective sub-grids and simulate network functionality to analyze the need for network expansions. For this purpose, they use information from various master data management systems, such as asset management systems, network control systems, or geographical information systems. As mentioned, due to the nonexistent or only rudimentary integration of these master data systems, data extraction is laborious and involves a series of manual tasks and procedures. In the prospective smart energy ecosystem, network extension planning can be organized less laboriously than today, since network planners only need to communicate with an ESB that autonomously collects the respective data from all distributed data sources and routes them to the planners. Such facilitated communication will lead to significant risk and cost reductions in network extension planning.

3.4.4 CIM for Topology Data Exchange

Exchanging network topology data is an important task for both Transmission System Operators (TSOs) and Distribution System Operators (DSOs). In Europe, for instance, TSOs exchange topology data with their respective geographical neighbors for their day-ahead congestion forecasts. This data exchange is based on the ENTSO-E (European Network of Transmission System Operators for Electricity) profile, which itself is based on CIM. The first edition of the ENTSO-E CIM Model Exchange Profile is intended to meet the requirements for accurate modelling of the ENTSO-E interconnection for power flow and short circuit applications. Detailed requirements may be found in various publicly available ENTSO-E documents. Data content included in the first edition is a superset of the UCTE Data Exchange Format (UCTE-DEF) that was developed to serve the day-ahead congestion forecast process in the former UCTE (now ENTSO-E Regional Group Continental Europe), although the structure is changed to align with IEC CIM modelling.27

The IEC 61970-301 standard defines the CIM base set of packages, which provide a logical view of the functional aspects of an energy management system, including SCADA. TSOs and DSOs also have to internally exchange topology data that are often held in different applications, such as GIS, ERP systems, or SCADA systems. The CIM now offers a standardized format for this internal topology data exchange.

27. Common Information Model (CIM) Model Exchange Profile, by ENTSO-E, May 10, 2009.


The topology data exchange file format for the TSO is RDF. This is described in the first edition of the ENTSO-E Profile. In order to be considered a valid model, a given combined set of XML must adhere to the following criteria:
• The file must be well-formed as defined by the Extensible Markup Language (XML) 1.0 (Second Edition) (http://www.w3.org/TR/REC-xml).
• The file must adhere to the rules set forth in the Simplified RDF Syntax for Power System Model Exchange.
• The file must contain CIM entities which are valid according to the CIM RDF Schema file.
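These checks lend themselves to partial automation. The following is a minimal sketch, in Python using the open-source rdflib library, of how a recipient might screen an incoming CIM RDF/XML file against the first and third criteria; the namespace URI and the class subset are illustrative assumptions, not the normative ENTSO-E rules.

    # Screen an incoming CIM RDF/XML file: parsing fails if the file is
    # not well-formed, and every rdf:type is checked against a (sample)
    # set of known CIM classes. Illustrative only.
    from rdflib import Graph, RDF

    CIM_NS = "http://iec.ch/TC57/2009/CIM-schema-cim14#"   # assumed CIM namespace
    KNOWN_CLASSES = {"ACLineSegment", "Breaker", "Substation", "VoltageLevel"}

    def screen_cim_file(path):
        g = Graph()
        g.parse(path, format="xml")   # raises an error on malformed XML/RDF
        unknown = set()
        for _subject, rdf_class in g.subject_objects(RDF.type):
            name = str(rdf_class).replace(CIM_NS, "")
            if name not in KNOWN_CLASSES:
                unknown.add(name)
        return unknown   # an empty set means every entity matched a known class

    print(screen_cim_file("entsoe_model.xml"))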

3.4.5 Measurement Data
Collecting and using data to adhere to industry standards, drive technical advances, and improve business processes requires the following set of capabilities:
• Collection from a variety of sources
• Validation
• Error correction
• Compliance with standards
• Standardization and centralization
• Publishing
• Establishing single systems of record.28

The following sections describe utility applications requiring measurement systems.

3.4.5.1 Historians
Historians provide the means to capture, store, and retrieve measurement history. Such histories are important records for grid, transmission, and distribution operators because each measurement is related to an object in one of the master data managers. Operational historians are important records of operational history and have a wide variety of uses. Two types of historians are in common use, each with subtle but significant differences:
• Measurement historians (e.g. OSIsoft PI), which collect real-time measurements obtained from real-time telemetry.
• Meter data managers (e.g. Itron MDM, Ferranti MECOMS), which collect readings from meters with a focus on electricity usage.
Historians may offer multiple ways of accessing information:
• Through the use of a Microsoft SQL Server-compliant interface

28. Measurement Data Management business process, by Quorum Business Solutions.


• Through an industry interface standard, e.g. OPC Data Access or OPC UA
• Using an application programming interface (API) or Web service
• Through a product-specific user interface
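As an illustration of the first access path, a client application might retrieve a window of measurement history through a historian's SQL Server-compliant interface. The sketch below assumes a hypothetical DSN, table, and point naming scheme; actual names vary by product.

    # Hypothetical example: read one day of archived analog values for a
    # single data point through a historian's SQL-compliant interface.
    import pyodbc

    conn = pyodbc.connect("DSN=HistorianSQL;UID=reader;PWD=secret")  # assumed DSN
    cursor = conn.cursor()
    cursor.execute(
        "SELECT ts, value, quality "
        "FROM measurement_history "          # table and columns are illustrative
        "WHERE point_id = ? AND ts >= ? AND ts < ? "
        "ORDER BY ts",
        ("FEEDER_1234.MW", "2013-06-01", "2013-06-02"))
    for ts, value, quality in cursor.fetchall():
        print(ts, value, quality)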

As a consequence of the high volumes of data that are managed, the structure of information within the historian is often proprietary. In the case of a measurement history, analog values obtained from thousands of data points may be collected and stored every two seconds. In the case of a meter data manager, data may be collected from millions of meters every 15 minutes.

3.4.5.2 Operations Databases
Operations databases are typically part of enterprise or operations applications. These databases focus on online transaction processing (OLTP) and are typically normalized data stores with proprietary models. While the industry has trended toward the use of relational databases, or minimally databases with SQL Server-compliant interfaces, this is not always the case. Different operational databases face synchronization issues, especially from the perspective of models, where updates are often continually applied to a network; sometimes those updates are only reflected on a daily, weekly, or even monthly basis in the model. While the industry has been moving toward adoption of the IEC CIM as a common logical data model, it is important to note that there is no IEC standard that defines a CIM-compliant relational database structure. Instead, it can be best said that databases may be consistent with, or inspired by, the CIM.

3.4.5.3 Data Warehouses
Data warehouses are typically de-normalized, dimensional data stores that provide information related to a given set of subjects used for online analytical processing (OLAP). These databases focus on decision support and are either part of a vendor product suite or custom in nature. While data warehouses typically are implemented using SQL Server-compliant relational databases, there are meaningful opportunities, if not requirements, to leverage proprietary database features, primarily for indexing. Another option is an enterprise-wide distributed data warehouse, an example of which is in use at a large integrated utility. This distributed data warehouse was established to create the capabilities needed for smart energy and smart grid business needs, addressing core issues of evolving integration requirements, the reduction of costs for solution development, enhancement, and maintenance, and flexibility for business processes and their supporting systems.


Data warehouses are often organized using a star schema structure that is characterized by tables for dimensions and facts, where a fact table identifies a set of quantities that are related to a number of dimensions.

Figure 18 is a generic example of a Star Schema.

A data warehouse design for problems in the electric utility operations domain would typically leverage the IEC CIM, or an industry data model which encompasses the IEC CIM. Because the IEC CIM is a logical data model, there is a level of design necessary to leverage it for the realization of a data warehouse. Such a data warehouse would be said to be CIM-inspired and would include such dimensions as:
• Assets
• Asset type hierarchy
• Customers
• Equipment hierarchy (e.g. describing the hierarchical relationships between regions, substations, voltage levels, bays, lines, and equipment)
• Functional type (i.e. inheritance) hierarchy for equipment (e.g. conducting equipment, switch, breaker)
• Geographical location hierarchy
• Organizational hierarchy
• Time, where data will typically be aggregated within time intervals (depending upon the data, the finest level of granularity within the data warehouse may typically be 2-15 minutes)
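To make the star-schema idea concrete, the following minimal sketch builds a tiny CIM-inspired fact table with two of the dimensions listed above and runs a slice-and-dice rollup; the table and column names are invented for illustration.

    # Tiny CIM-inspired star schema: a measurement fact table joined to
    # equipment and time dimensions, then rolled up by substation and day.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE dim_equipment (equipment_id INTEGER PRIMARY KEY,
                                name TEXT, substation TEXT, voltage_level TEXT);
    CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY,
                                day TEXT, interval_start TEXT);
    CREATE TABLE fact_measurement (equipment_id INTEGER, time_id INTEGER,
                                   avg_mw REAL, max_mw REAL);
    """)
    db.execute("INSERT INTO dim_equipment VALUES (1, 'XFMR-7', 'Sub-A', '115kV')")
    db.execute("INSERT INTO dim_time VALUES (1, '2013-06-01', '2013-06-01 10:00')")
    db.execute("INSERT INTO fact_measurement VALUES (1, 1, 42.5, 48.1)")

    # Slice and dice: aggregate facts along the equipment and time hierarchies.
    for row in db.execute("""
        SELECT e.substation, t.day, AVG(f.avg_mw), MAX(f.max_mw)
        FROM fact_measurement f
        JOIN dim_equipment e ON e.equipment_id = f.equipment_id
        JOIN dim_time t      ON t.time_id      = f.time_id
        GROUP BY e.substation, t.day"""):
        print(row)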

Figure 19 represents a portion of a data warehouse that uses the CIM model, where a fact table leverages several dimensions. Note how the dimensions are typically hierarchical, providing the means to slice and dice information in many different ways.

Figure 19 is an example of a CIM-inspired Star Schema.


Figure 20 shows how data can be integrated within the enterprise. Note the population of data into the warehouse and its use for visualization, reporting, and analysis.

Figure 20 shows how the data warehouse is typically populated using ETL and processes that load data from staging tables, although there may be cases where the data warehouse may be updated using the ESB.

The staging tables in Figure 20 are used to collect data from various sources prior to aggregation and insertion into the data warehouse. The sources of data and methods of movement can include:
• Databases used by other enterprise applications, using ETL
• ESB processes, which may involve event-driven or periodic processing to place data in staging tables or sometimes directly into the data warehouse
• Databases replicated from other databases, where ETL is then used to transform and populate data in staging as appropriate
• RDF parsers, where a CIM/XML model in RDF format is used to populate structures in staging tables

• External data feeds, where adapters in the ESB can be used to populate staging tables or the data warehouse directly
• Applications through ESB interfaces, where the application database is private or of a proprietary nature
• Directly from an application such as a historian, where it can be configured to periodically aggregate, summarize, and export desired data
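As a concrete illustration of the last path, the sketch below aggregates raw two-second historian samples into five-minute average/minimum/maximum rows of the kind a staging table would receive; the data layout is assumed, not product-specific.

    # Aggregate raw historian samples (timestamp, value) into 5-minute
    # avg/min/max rows suitable for insertion into a staging table.
    from datetime import datetime, timedelta

    def to_bucket(ts, minutes=5):
        return ts - timedelta(minutes=ts.minute % minutes,
                              seconds=ts.second, microseconds=ts.microsecond)

    def aggregate(samples, minutes=5):
        buckets = {}
        for ts, value in samples:
            buckets.setdefault(to_bucket(ts, minutes), []).append(value)
        return [(start, sum(v) / len(v), min(v), max(v))
                for start, v in sorted(buckets.items())]

    # Example: three 2-second samples collapse into one 5-minute row
    # (avg ~41.97, min 41.8, max 42.1).
    raw = [(datetime(2013, 6, 1, 10, 0, 2), 41.8),
           (datetime(2013, 6, 1, 10, 0, 4), 42.1),
           (datetime(2013, 6, 1, 10, 0, 6), 42.0)]
    print(aggregate(raw))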

The staging tables are used only to prepare data for the warehouse and are not used for any transactional or visualization purposes. The data warehouse itself is not a transactional database, and is a read-only data store from the perspectives of visualization, reporting, and analysis.
Figure 20 also shows integration of the data warehouse with a geographic information system (GIS). This is important, as more visualization and reporting will allow information to be presented spatially. CIM XML exports may also provide geo-coded locations for network objects.
Also in Figure 20, the integration of the historian reflects the fact that it might be more efficient to aggregate and/or summarize information from a historian than to always retrieve and summarize data from it on the fly when needed for analysis. Whereas the historian may have measurements for a given data point every two seconds, the data warehouse would store the average, minimum, and maximum values for a measurement over a much wider time interval, such as 5 to 15 minutes. This simplifies many types of analysis, while it remains possible to retrieve the detailed measurement history from the historian if needed.
The structure of some staging tables may be derived from the CIM and extended as needed. Some staging tables may leverage other models. Whatever the case, the staging table design must allow for efficient aggregation and population of data to be inserted into the data warehouse.

3.4.5.4 Spatial Data Considerations
Spatial data represents information about the physical location and shape of geometric objects. These objects can be point locations or more complex objects such as countries, roads, or lakes. Utilities have historically embraced temporal solutions such as time-series-oriented databases or time-tagged event processing. In consideration of asset lifecycles and condition-based maintenance, it's increasingly evident there's a new requirement to address: the notion of adding spatial data to the aggregation and historical record, so that, for example, all operations on a specific piece of equipment follow the equipment everywhere it is deployed.

For example, if a transformer is deployed in one location, and then moved because it is replaced by a larger one, the operational records for the original transformer should not be set back to zero simply because it is in a new location. That would be like resetting the odometer on a used vehicle. Asset health should include temporal and spatial data records as well.
SQL Server supports two spatial data types, both implemented as Microsoft .NET common language runtime (CLR) data types:
• The geometry type represents data in a Euclidean (flat) coordinate system.
• The geography type represents data in a round-earth coordinate system.
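As a hedged illustration of the geography type in use, a client application might search for assets near a point as follows; the table and column names are hypothetical, while geography::Point and STDistance are standard SQL Server constructs.

    # Hypothetical query: find assets within 500 meters of a location,
    # using SQL Server's geography type from a Python client.
    import pyodbc

    conn = pyodbc.connect("DSN=AssetDB;Trusted_Connection=yes")  # assumed DSN
    cursor = conn.cursor()
    cursor.execute("""
        DECLARE @site geography = geography::Point(47.6062, -122.3321, 4326);
        SELECT asset_id, name, location.STDistance(@site) AS meters
        FROM asset                       -- table and columns are illustrative
        WHERE location.STDistance(@site) <= 500
        ORDER BY meters;
    """)
    for asset_id, name, meters in cursor.fetchall():
        print(asset_id, name, round(meters, 1))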

3.4.6 Interoperability
Standards promote interoperability through definition of messages and interfaces. However, standards can vary with respect to the degree of interoperability they provide. For example, standards can be:
• Plug and play, with automatic discovery or minimal configuration
• Interoperable with some pre-configuration
• Interoperable with mapping and/or some modest level of integration effort


Figure 21 demonstrates how an integration layer is often used to connect information flows between applications by performing mappings that may be needed in cases where an interface standard is less than plug and play.

When the number of integration points is high, such as the case with device integration, plug and play is mandatory, either through the use of standards or proprietary interfaces. However, there are also cases where some aspects of integration may not be covered by widely adopted plug and play standards. As examples, the integration of distribution, market and work management systems are commonly dependent upon some level of integration effort in order for them to exchange information with other enterprise systems. This is especially true of domain-focused applications that are implemented, sold and deployed by vendors in quantities of 1, 10 or even 100; as opposed to much more broadly deployed software applications such as Microsoft Office or Microsoft Exchange Server. The use of an integration layer as provided by an ESB is the recommended approach for impedance matching. The need to impedance match or perform custom integration is typically a consequence of the diversity of business processes, immaturity of standards, and consequences upon related applications, as well as evolving needs of the business and industry.


3.4.6.1 Harmonizing CIM and IEC 61850
Smart grid data management standards such as CIM must be semantically integrated with the corresponding automation and control models. With respect to the integration of IEC 61850 and CIM, different efforts have been made since 2000. Other standards and models that should be harmonized with CIM are DLMS/COSEM, MultiSpeak, UN/CEFACT CCTS, and Electronic Data Interchange (EDI) messages.29

3.4.7 Messages and Interfaces
While a variety of standards can be leveraged for integration, the primary concern is to choose an approach that:
• minimizes integration costs for the initial implementation,
• provides opportunities for reuse,
• is supportable longer term, and
• enables the flexibility needed for evolution over the long term.

For integration purposes, the following choices are ordered by preference:
1. Use a standard interface supported by vendor products.
2. Use a productized, but potentially proprietary, interface supported by a vendor product and map as needed within the integration layer (i.e. ESB).
3. Select an appropriate interface standard that can be readily adapted to interface to products or applications of interest.
4. Define new interfaces leveraging appropriate integration standards.
As one example, the IEC 61968 standard focuses on the integration of electrical distribution systems. However, IEC 61968 has also been applied to the integration of applications related to transmission, generation, and energy markets. These are subject areas where the IEC CIM is often leveraged. Within IEC 61968, messages are defined using a message envelope that has three primary parts:
1. A verb, to identify an action such as CREATE, CHANGE, or DELETE
2. A noun, to identify the contents of the message payload
3. A payload, which is an XML document derived from some subset of the classes, attributes, and relationships typically identified by the IEC CIM, although other domain models can also be leveraged
Using the verb/noun scheme, a given application can be characterized in terms of the messages it produces and consumes. Figure 22 provides an example of system characterization.
29. The Common Information Model CIM: IEC 61968/61970 and 62325 - A Practical Introduction to the CIM, by Mathias Uslar. (Springer: Berlin, 2012)


Figure 22: System Characterization Worksheet

One realization of an IEC 61968 message is through an envelope defined by an XML schema that has a header to contain the noun and verb. The header may contain other parameters, such as timestamps and user IDs. Other structures are added to the message envelope structure to convey information such as request parameters and error strings.
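The verb/noun/payload pattern can be sketched in a few lines. The element names below follow the spirit of IEC 61968 but are simplified illustrations, not the normative schema.

    # Build a simplified IEC 61968-style message: a header carrying the
    # verb, noun, and timestamp, plus an XML payload. Names are illustrative.
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    def build_message(verb, noun, payload_xml):
        msg = ET.Element("Message")
        header = ET.SubElement(msg, "Header")
        ET.SubElement(header, "Verb").text = verb
        ET.SubElement(header, "Noun").text = noun
        ET.SubElement(header, "Timestamp").text = datetime.now(timezone.utc).isoformat()
        payload = ET.SubElement(msg, "Payload")
        payload.append(ET.fromstring(payload_xml))
        return ET.tostring(msg, encoding="unicode")

    # A CREATED EndDeviceEvents message, e.g. a meter power-off notification.
    print(build_message("CREATED", "EndDeviceEvents",
                        "<EndDeviceEvents><EndDeviceEvent mRID='m-123'/></EndDeviceEvents>"))

In a real exchange, the payload would be validated against the appropriate IEC 61968 profile schema before being placed on the bus.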


Figure 23 illustrates the IEC 61968 Message Envelope and the associated headers and strings. The relevant information is conveyed using the payload structure within the message envelope.

Figure 23: IEC 61968 Message Envelope


Figures 24 and 25 provide example payload structures from IEC 61968-9 that are used to convey end-device controls and events. The message structures are derived from a contextual profile of the IEC CIM.

Figure 24: End device controls payload structure.

Figure 25: End device events payload structure

It's important to note that IEC 61968 is transport independent, so that it can be implemented using technologies such as Web services, Microsoft BizTalk Server messages, and even technologies yet unknown. For use within Web services, message definitions are simply referenced within Web Service Description Languages (WSDLs).30 The use of XML also permits

30. The Web Services Description Language (WSDL, pronounced 'wiz-dl' or spelled out, 'W-S-D-L') is an XML-based language that provides a model for describing Web services. The meaning of the acronym has changed from version 1.1, where the D stood for Definition. (Wikipedia.org)


these messages to be managed by a collaboration infrastructure, where references to the messages can be conveyed using links in ATOM feeds.31
MultiSpeak is an industry specification developed by the National Rural Electric Cooperative Association (NRECA) that deals with the exchange of information between distribution-related applications. Within MultiSpeak, interfaces are defined as Web services. Where there are differences between the models used by MultiSpeak and the IEC CIM, they are not significant and can be accommodated through mapping. It is better to have an application interface that can be mapped and leveraged for integration than to have no interface at all.
Another integration standard often used for process control integration is Object Linking and Embedding (OLE)32 for Process Control (OPC), as defined by the OPC Foundation. OPC and the newer OPC Unified Architecture33 are technologies used across process control-related domains for information exchanges. Many interfaces are defined by OPC, such as those for conveying measurement data. OPC UA is now an IEC standard known as IEC 62541.
An important trend of note is that of shallow integration, where standard interfaces are defined in a manner that minimizes the depth of understanding a client must have of the models and processes internal to a service. In this way, it is possible to support integration of a diverse set of systems and allow for innovation.

3.4.8 Big Data
The industry is currently experiencing a hype cycle around the many forms of Big Data. Big Data is an umbrella term covering, among other things, the data being generated by smart meters and the new deployments of distribution grid sensors made possible by the advent of AMI infrastructure to transport the sensor data. The new data volumes could be considered disruptive to utilities because of the new requirements for the data and analytics. Consider these examples:

31. The name Atom applies to a pair of related standards. The Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol (AtomPub or APP) is a simple HTTP-based protocol for creating and updating web resources. (Wikipedia.org)
32. Object Linking and Embedding (OLE) is a technology that allows embedding and linking to documents and other objects, developed by Microsoft. For developers, it brought OLE Control eXtension (OCX), a way to develop and use custom user interface elements. On a technical level, an OLE object is any object that implements the IOleObject interface, possibly along with a wide range of other interfaces, depending on the object's needs. (Wikipedia.org)
33. OPC Unified Architecture is the most recent OPC specification from the OPC Foundation and differs significantly from its predecessors. After 3 years of specification work and another year of prototyping, the first version of Unified Architecture is now being released. The OPC Foundation's main goals with this project were to provide a path forward from the original OPC communications model (namely COM/DCOM) to a current communications model (namely SOA) and to introduce a cross-platform architecture for process control, while enhancing security and providing an information model. (Wikipedia.org)


• 15-minute meter reads for 18 months for a one-million-customer utility result in a database with approximately 52 BILLION rows in one table.
• A utility with 5 million or 35 million customers, and multiple years of data, represents a truly big volume of data.
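The first figure is straightforward to verify: 96 reads per day per meter over roughly 548 days yields on the order of 52 billion rows for one million meters.

    # Back-of-the-envelope check on the row-count claim above.
    reads_per_day = 24 * 4            # one read every 15 minutes
    days = 18 * 30.4                  # approximately 18 months
    meters = 1_000_000
    print(meters * reads_per_day * days)   # ~5.25e10, i.e. ~52 billion rows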

These are but examples of the 4 Vs of Big Data. Here's the complete list:
1. Volume: the overall quantity of data (for example, smart meter data)
2. Velocity: the speed with which data arrives (for example, 60 Hz phasor data)
3. Variability: the amount of change in the data (e.g., wind speed in a renewables wind farm)
4. Variety: the number of different types of data (e.g., all forms of social media)

Big Data is available, but how can it be utilized?

3.4.8.1 Advanced Analytics for Big Data
As utilities look to maximize the value of their smart meter investments and the resulting data, the field of advanced analytics is evolving rapidly, with views toward developing the following capabilities, all in an effort to evolve the utility's relationship with its customers:
• Mining the information received from grid sensors
• Leveraging social networks
• Integrating operational activities with external data sources, like those for demand, weather, business activity, etc.

3.4.8.2 Analyzing Customer Behavior Using Big Data
By using analytics in enterprise data warehouses, utilities can now ask, and then answer, questions like: If I look at a customer's energy usage patterns for the past year, combine them with all possible tariff plans for retailers, and then examine actual market prices, what would have been the best tariff plan, and what would the customer savings have been under that plan? (A sketch of this calculation follows below.) Of course, the complexity of this analysis can be increased significantly by analyzing the sensitivity of the customer savings to changes in temperature and other global energy events, as a way to help predict whether the results might be the same in future years. The significant increase in the level and complexity of utility aspirations for examination of customer behavior is driving the need for a whole new architecture and new technologies.
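The sketch below illustrates the re-pricing question just posed: given interval usage and a set of candidate tariffs, compute what the bill would have been under each plan and report the cheapest. The tariff structures are invented for illustration and are not real retailer offers.

    # Re-price a customer's historical interval usage under candidate
    # tariff plans and find the plan with the lowest total cost.
    def price(usage_kwh_by_hour, rate_for_hour):
        return sum(kwh * rate_for_hour(hour)
                   for hour, kwh in enumerate(usage_kwh_by_hour))

    tariffs = {  # illustrative plans only
        "flat":        lambda hour: 0.12,
        "time_of_use": lambda hour: 0.20 if 16 <= hour % 24 < 21 else 0.09,
    }

    usage = [1.2] * (24 * 365)   # stand-in for a year of hourly smart meter reads
    costs = {name: price(usage, rate) for name, rate in tariffs.items()}
    best = min(costs, key=costs.get)
    print(best, {name: round(cost, 2) for name, cost in costs.items()})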


3.4.8.3 Unstructured Data in Analytics
That utilities are focusing on how to capture, manage, and mine smart meter data to help gain those new customer usage insights is only part of the story. A whole new dimension is the advent of unstructured data. IDC forecasts that 80 percent of all information takes the form of unstructured data, and they expect this figure to grow. Examples of unstructured data include:
• Twitter feeds
• Facebook posts
• Blogs
• Customers' click streams
• Customers' contributions to other social media, like video
The Big Data challenge to utilities is how to aggregate utility enterprise data, including combinations with operations data and unstructured consumer data, to derive new insights into customer behavior.

3.4.8.4 Additional Types of Big Data Analytics
Gaining insight into consumer behavior is but one Big Data analytics scenario. Big Data analytics can be considered for many additional utility problems, including a host of distribution analytics for improving distribution-level power quality, transformer condition-based maintenance, and predictive analytics forecasting transformer failures. Some common areas for analysis are:
• Prediction: root cause determination, fault prediction, equipment-based maintenance, equipment failure prediction
• Social analytics: customer sentiment, customer behavior, propensity for tariff or market responses
• Pattern matching: Web scraping, fraud detection

New technologies such as Hadoop, the associated MapReduce model, and the full complement of Apache-compatible Hadoop distribution services enable a whole new way to approach unstructured data:
• MapReduce incorporates a map step to break an analysis into parallelized subproblems and distribute them to a myriad of compute nodes, followed by a reduce step to collect and combine the solutions into a synthesized answer. Map problems enable massive scale for distributed processing.
• Hadoop Distributed File System (HDFS) can be used to store very large amounts of data (hundreds of terabytes to petabytes). HDFS is better suited to long sequential reads than a general random-access network file system. However, HDFS provides a very efficient data store with a minimum of forced structure.
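A minimal sketch of the map/reduce split, in the style of a Hadoop Streaming job over meter reads: the mapper emits (meter, kWh) pairs from raw read lines and the reducer sums usage per meter. On a real cluster the two functions would run as separate processes over HDFS input; the record format here is assumed.

    # Hadoop-Streaming-style sketch: map raw "meter_id,timestamp,kwh"
    # lines to (meter_id, kwh) pairs, then reduce by summing per meter.
    from itertools import groupby

    def mapper(lines):
        for line in lines:
            meter_id, _ts, kwh = line.strip().split(",")
            yield meter_id, float(kwh)

    def reducer(pairs):
        for meter_id, group in groupby(sorted(pairs), key=lambda p: p[0]):
            yield meter_id, sum(kwh for _, kwh in group)

    raw = ["m1,2013-06-01T10:00,0.31", "m2,2013-06-01T10:00,0.12",
           "m1,2013-06-01T10:15,0.28"]
    print(dict(reducer(mapper(raw))))   # {'m1': ~0.59, 'm2': 0.12}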

3.4.9 Event Clouds and Complex Event Processing
An event cloud is a logical construct where event streams that were generated by a wide variety of sources can be accessed, filtered, correlated, and analyzed. An event cloud is the result of many event-generating activities occurring in different parts of an IT system. To analyze event clouds, utilities need a special type of analysis, called complex event processing (CEP). CEP can be applied to many types of problems, including those related to business activity monitoring (BAM). A partial list of events that may be useful for CEP includes:
• Circuit overloads
• Demand response events
• Device status changes
• Large differences between scheduled and measured values
• Market submissions
• Measurement limit violations
• Meter outage reports
• Meter power quality events
• Phasor measurement snapshots
• Pricing signals
• Processing errors
• Resource availability changes and forecasting
• Trouble calls
• Virtual execution environment changes


Figure 26 illustrates the relationships between the Complex Event Processing (CEP) engine and the ESB.

Events generated by various event sources are forwarded to the CEP engine, where:
• A first stage filters out events of no interest.
• Events of potential interest are then added to a cache, and an attempt is made, using correlation rules and models, to identify conditions of interest.
• If such a condition is detected, rules are applied to determine the necessary next actions. Typically, when a condition of interest is encountered, a notification service will issue an appropriate message to potentially interested persons (e.g., via e-mail) or components (e.g., via invocation of a Web service). Typically, the automated detection of a condition of interest will trigger a business process.
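The pipeline just described can be reduced to a short sketch: filter, cache, correlate, and notify. The event shape and the correlation rule (three limit violations on one feeder within five minutes) are invented for illustration.

    # Toy CEP pipeline: drop uninteresting events, cache the rest, and
    # raise a notification when a correlation rule fires.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300
    cache = defaultdict(deque)   # feeder -> recent violation timestamps

    def on_event(event):
        if event["type"] != "limit_violation":      # filter stage
            return
        history = cache[event["feeder"]]
        history.append(event["ts"])
        while history and event["ts"] - history[0] > WINDOW_SECONDS:
            history.popleft()                       # slide the time window
        if len(history) >= 3:                       # correlation rule
            notify(event["feeder"])                 # e.g. e-mail or Web service call

    def notify(feeder):
        print("Condition of interest: repeated limit violations on", feeder)

    for t in (0, 60, 120):
        on_event({"type": "limit_violation", "feeder": "F-17", "ts": t})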

Event messages are often candidates for use in complex event processing. Some measurements, such as status changes and analog measurements (especially those that might identify a limit violation), can be obtained using a variety of standards and are useful inputs to CEP. In many cases the CEP engine and rules must be able to use a model, such as a network topology model, in order to analyze events. This aspect of ESB integration demonstrates the usefulness of a common message envelope, where events from different sources can be conveyed to the CEP engine in a common way, avoiding the need for additional translations.

3.4.10 Moving from Raw Data
The process of moving from raw data to information is another important element in taking advantage of new technologies like Hadoop:
• Step 1: Data acquisition. This can be everything from field sensor data to web scraping and social analytics feeds.
• Step 2: Structuring, mining, and insight discovery. This step may well lead to new understanding. However, Hadoop is not typically used in the direct operation of the power system.
• Step 3: Committing to production. Typically, this step will involve reflecting the discovered insights in the BI, reporting, and production systems.
• Step 4: Visualization and situational awareness. This capability provides intuitive, at-a-glance understanding.

3.5 Application Architecture
The key requirements for application architecture include:
• Efficient construction of high-quality software components
• Location-independent user interface through a browser
• Interoperability through Web services
• Ability to construct composite applications
• Libraries of reusable components
• Development and testing tools

Application architectures implemented using modern application frameworks, such as the Microsoft .NET Framework, also provide for the use of managed code to improve application portability and security. Application frameworks can be either all-encompassing or specialized for the needs of certain types of software development; examples include:
• Graphical user interfaces (GUI)
• Web services
• Rich Internet Applications (RIA)
• Mobile applications
• Embedded applications

Even though applications may be developed using different frameworks, interoperability is provided through various standards, such as the ubiquitous use of Web services. However, within an organization there can be significant advantages to the selection of a single application framework that is encompassing and well supported by development tools.


3.6 Deployment Strategies
Utilities have a number of different models available for deploying applications and systems:
• On-premise. The on-premise option is the most conventional, especially for energy management systems and other core mission-critical operations systems.
• Private clouds. The practices of virtualization, resource pooling, and resource consolidation have created the notion of on-premise private clouds, or hosted private clouds at cloud service providers.
• Hybrid cloud. A hybrid cloud combines elements of the other cloud models but keeps sensitive data or mission-critical processing behind the utility's on-premise firewall.
• Public cloud. The public cloud completes the continuum of deployment strategies. Public clouds can be either dedicated (servicing only one customer) or multi-tenant.

Transition to the cloud has significant potential benefit for utilities and may materially transform their IT operations and service delivery. Moving down the path toward utilizing the cloud must come with these considerations:
• Utilities should have a very clear understanding of their intended use for cloud services, as this will aid their selection of a technology and services provider.
• Utilities should realize that an ideal cloud solution, delivered via an environment provided by a cloud services company, will be characterized by a consistent approach to its underlying on-premise, private cloud, hybrid cloud, and public cloud offerings. This consistency of approach will prevent utilities from incurring significant costs should they need to move to a different deployment model as their requirements and conditions change.
• Utilities should seek a cloud environment wherein the following capabilities are consistent across all deployment paradigms:
  o Automation
  o Common configuration management
  o Common development tools and environment
  o Common network, application, and system virtualization
  o Common security and governance tools and strategies
  o Provisioning
  o Reporting

Utilities should ensure that the cloud service technology provider selected offers consistent connectivity and networking as a core capability.


In addition to the consistency of capabilities that should exist in all the cloud deployment models, utilities should ensure that the migration of applications and solutions from one environment to another is predictable and easy, including the ease of moving away from the cloud should the utility wish to consider other options.

3.6.1 Deployment Migration
For a growing number of enterprises, the journey from on-premise computing to public cloud computing begins with a private cloud implementation that successfully transforms the way the utility business delivers and consumes IT services. This occurs because the private cloud creates a layer of abstraction over pooled resources, enabling true service capabilities as well as optimally managed application services. Utilities are transitioning to private cloud deployment models for systems with importance ranging all the way up to providing reliability coordination services for transmission system energy management. A private cloud solution offers new levels of agility, focus, and cost savings. Whether a utility builds a private cloud in an on-premise datacenter or opts for a hosted offering, it quickly realizes transformative benefits.

3.6.2 Deployment Considerations
Each utility must build an individual cloud strategy business case and associated deployment roadmap based upon its particular business drivers and desired benefits. Investment models must consider the best interests of the utility's constituency. Here are the typical considerations for migration to private and/or public clouds:
• Capital investment. When considering new investment, utilities typically start by focusing on capital investment due to the practice of achieving margin on the capital equipment investment. Cloud solutions and services are typically treated as operational expenses. However, other considerations, such as the benefits of refocusing utility staff on activities that are core to the business, can offset the margin consideration.34
• Cloud comparisons to current systems. Utilities also consider reliability, overall cost, availability, flexibility, agility, and the best use of utility resources to properly value the cloud.
• Focus on innovation. Lowering or refocusing core IT resources on business innovation rather than legacy application and hardware administration.

34. Some utilities have said it might be possible to treat cloud services which bring brand-new functionality to the utility as a capital expense. Each utility must consider its individual accounting practices for how cloud services must be treated.


• Focus on core competencies. Taking advantage of applications and services managed and maintained by service providers, where the applications are core to the service provider's business and absolutely NOT core to the utility.
• Modernizing. Keeping the utility aligned with the future of computing!
• Operating expenses. Paying for what you use, with monthly payments reducing the impact of large capital outlays.
• One-time or occasional uses. Using the cloud for occasional scenarios that do not warrant permanent on-premise resource investment, such as a test environment or a Big Data analytics environment.
• Special attributes. Taking advantage of typical cloud attributes like self-service, dynamic provisioning, and massive scale.
• Speed to value. Minimizing short-term hardware purchase and deployment.
• Upgrade concerns and investments. Benefiting from speed to new functionality, because cloud services typically have a 3- to 6-month upgrade cadence compared to an 18-month to 3-year product refresh cycle.

3.6.3 Deployment Roadmaps
Once the decision is taken to consider cloud scenarios, utilities need to establish their own roadmap for the types of payloads and applications that are to be moved to the cloud. These strategies can guide the application migration plan:
• Typically, the utility should start by identifying applications and services that are not core or mission-critical. This strategy helps the utility gain experience working with the cloud.
• Identifying where data and processes fall along the spectrum leading up to sensitive and business-critical.

Microsoft partner solutions can serve as big building blocks for migration to the cloud, taking into consideration when they will run in the cloud.

3.7 Integration

As previously discussed, tomorrows energy environment will feature a wide variety of participants who will take on one or more roles in effecting the smart energy ecosystem.


Vertically integrated electric utilities will likely take on the task of performing many functions among their business units, while other types of electric utilities may be content with more limited functions because of deregulation or the possibilities for outsourcing to service providers. SERA also recognizes the status quo: there might be hundreds or even thousands of different software solutions in circulation at the IT department of a typical utility or distribution network. These diverse software solutions were originally developed independently of each other and are used and maintained by different departments to this day. Therefore, Microsoft SERA focuses on reducing integration costs for either situation through pragmatic, product-based approaches that avoid the costs and time sinks of custom integration when and where possible. Primary among these approaches will be the cloud, a solution capable of helping utilities achieve better business agility, economics, and user experience while also transcending the need and costs for on-premise maintenance.
Setting specific approaches aside, examples where integration may occur include integration between:
• Enterprise applications
• External enterprise applications (e.g., load aggregators)
• Enterprise and the network operations center
• Enterprise and mobile users
• Network operations center and devices in the field
• Network operations center applications
• Orchestrated business processes and applications
• Orchestrated business processes and users
• Portals and enterprise applications
• Users and devices
• Users and portals
Figure 27 depicts a template that accommodates four different aspects of integration:
• Process-centric application integration, where enterprise or control center applications are integrated through a service-oriented architecture (SOA). This may involve services, messaging between processes, short-running workflows, or the more complex interactions between services and resources through orchestration.
• Database-centric application integration, where information exchanges between databases are driven through the use of extract-transform-load (ETL) mechanisms.
• Grid integration, where the enterprise interacts with devices using standards defined by the IEC, IEEE, and ANSI, and where a variety of private and public networks can be leveraged as transports.

• Web integration, where users and organizations connect via Web-based technologies over the internet, intranet, or virtual private network.

Figure 27 - Integration overview

In addition, utilities may increasingly look for possible integration scenarios between on-premise and cloud-based applications:
• Browser access only: usually, SaaS applications are full-service standalone applications that are accessible via a Web browser.
• Cloud functionality that is exposed as a service for consumption by on-premise or other cloud applications.
• Existing on-premise business functionality or services that are exposed to the cloud.
• Data integration between on-premise and cloud data stores.

Such emerging hybrid models of on-premise and cloud applications result in a new class of distributed applications. SOA-based integration with SOAP or REST Web services seems to be the easy answer to this problem; however, there are various other issues that should be handled for it to work across organizational boundaries and firewalls. The next section discusses the following integration concerns:

• Integration Patterns
• Service-Oriented Architecture
• Enterprise Service Bus in SOAs
• Applications
• Network Operations Centers
• Business-to-business Integration
• Customer Integration
• Power System Grid
• Common Services
• Cloud Services
• Application to Application

3.7.1 Integration Patterns
Integration patterns are used to design and build an integration architecture. Within the reference architecture there are three strategies for integration layers:
• Entity aggregation, providing unified data access across systems
• Process integration, which focuses on the orchestration of interactions between systems
• Portal integration, providing a unified view to the user
Several integration topologies may exist within each integration layer, including:
• Point-to-point connections
• Brokers
• Message buses
Identifying patterns is useful for integration. These patterns may be implemented using off-the-shelf components, locally defined components, or templates that provide a starting point for implementation. Integration patterns offer many benefits, as they fundamentally promote the reuse of designs and components.

3.7.2 Service-Oriented Architecture
Service-oriented architecture (SOA) is a design philosophy that is increasing in use and in popularity within the utility industry, as evidenced by its recognition in utility industry standards like IEC 61968. SOA has many different definitions, but the World Wide Web Consortium (W3C) says SOA is a set of components which can be invoked, and whose interface descriptions can be published and discovered. While Web services specifications provide an open standard on which SOAs are commonly built, a service-oriented architecture infrastructure still leaves many integration challenges.


Figure 28 portrays the SOA reference architecture for the utility:

Figure 28: SOA Reference Architecture for the utility. (Modified from the SOA Reference Model 101 diagram produced by the Drexel University Software Engineering Group.)

The SOA reference architecture illustrates both atomic and composite services, where workflow and orchestration may be used to coordinate services for consumers. A simpler view is shown in Figure 29 from the perspective of service consumers and providers within an SOA:

Figure 29 - Consumers and providers in an SOA.


While many SOA implementations focus on synchronous request messaging patterns, there is also the notion of an event-driven SOA, where messaging patterns are expanded to include support for asynchronous messages, including events.

3.7.3 Enterprise Service Bus in SOAs
An Enterprise Service Bus (ESB) is one pattern of messaging infrastructure and is widely used to form the backbone of the infrastructure of a service-oriented architecture. The characteristics common to ESB products include:
• Brokered communication. The basic function of an ESB is to send data between processes on the same or different computers. Like message-oriented middleware, the ESB uses a software intermediary between the sender and the receiver, providing brokered communication between them.
• Address indirection and intelligent routing. ESBs typically include some type of repository used to resolve service addresses at run time. They also typically are capable of routing messages based on a predefined set of criteria.
• Basic Web services support. A growing number of ESBs support basic Web services standards, including SOAP and WSDL, as well as foundational standards such as TCP/IP and XML.
• Endpoint metadata. ESBs typically maintain metadata that documents service interfaces and message schemas.

Figure 30 shows a set of service consumers and providers integrated via an ESB. Aside from providing for reliable messaging, process orchestration, common services (sometimes called utility services), and other functionality, the ESB makes the integration much more manageable, especially when the integration grows to dozens or more components.

Figure 30: SOA using an ESB


An ESB is one of many building blocks that comprise a comprehensive SOA. The messaging capabilities required in an SOA extend the functions of traditional enterprise application integration (EAI) and message-oriented middleware (MOM) to include first-class support for Web service standards and integration with other infrastructure components such as policy management, metadata registry, and operational and business monitoring frameworks. It is important that an ESB enhance the ability to leverage existing assets in a service-oriented world. Many enterprise applications were not designed with SOA in mind. This is especially true given the diverse and evolving nature of the many recommended standards for use within the smart grid.

3.7.4 Applications
An enterprise may need to integrate many varied applications in order to support business processes. For example, some applications are highly productized and have well-defined interfaces, while others may be custom developed or highly customized to meet the needs of a specific company or a market. Others may be deployed within an enterprise or use cloud-based services. For these reasons, the integration infrastructure should readily accommodate the impedance mismatches between applications and prevent the need for further application customization. As such, there is a real need to leverage mechanisms to easily automate various business processes. In contrast to a user needing to interact directly with individual applications (i.e., working with code), it is now possible to construct composite applications where users are shielded from the underlying details of the related applications.


Figure 31: Composite applications allow users to be shielded from the underlying details of the related applications.

Instead, the user's view is often constructed as a rich internet application (RIA). Information can then be collected from many systems, and underlying transaction functionality may be an orchestration, via a set of Web service interactions.

3.7.5 Power Systems Operations Control Center (PSOCC)
Because of its emerging nature, the smart energy ecosystem is introducing the need to segregate applications into two types:
1) Those used to support business processes within the power systems operations control center (PSOCC). The PSOCC can be used for any combination of management for the grid, transmission networks, or distribution networks. The technical infrastructure supporting the operations center's business processes is typically responsible for management of the electricity grid, which is considered a critical infrastructure. In the United States, the control center and associated applications are considered part of the utility's critical infrastructure. This is the reason that the NERC CIPs seek to set rigorous protocols for protection from the perspectives of security, availability, and performance.
2) Those used elsewhere in the enterprise. While the rest of the enterprise still requires a high level of security, requirements for availability and performance may be relaxed. In response, the reference architecture provides for use of multiple buses, where bridges used to pass messages between buses can serve as insulation, thus isolating the impacts of servicing information to the enterprise from the operational systems.

In Figure 32, a variety of standards are used to integrate the databases and applications within the network operations center with components outside of the PSOCC. The IEC and IEEE provide many of these standards, especially those to other operations systems or the field.

Figure 32: The use of standards for integration.

From an integration perspective, these interfaces are implemented using productized adapters and gateways. The following are examples of standards supported by such adapters and gateways:
• Inter-control center data links using IEC 60870-6, also known as ICCP and TASE.2
• Substation communications using IEC 61850 and DNP3
• Process control communications using OLE for Process Control (OPC), where the new OPC Unified Architecture (UA) is standardized as IEC 62541
• Model exchanges using IEC 61970 standards
• Meter integration using IEC 61968-9 and ANSI C12 standards

• Demand response integration using OpenADR and ZigBee specifications
• Communications to and between smart devices on home area networks using IPSO, ZigBee, and/or HomePlug
• Application integration based upon IEC 61968, IEC 61970, and MultiSpeak specifications

In situations where the network operations center is required to operate flawlessly in a 24/7 environment, it is common for PSOCC functionality to be supported using redundant networks and hardware. Consequently, support for inter-site failover of PSOCC functionality is often a requirement. See Figure 33.

Figure 33: PSOCCs may be deployed at two different geographically distinct sites in order to protect against a variety of physical threats, including earthquakes, fires, and acts of terrorism, among others.

When there are multiple PSOCCs, at any point in time one PSOCC is typically designated as the on-line PSOCC, where the PSOCC applications are running on local hardware in an on-line mode. The databases at the backup site are kept up to date to mirror the contents of the on-line databases, so as to be concurrent within a small time interval. Backup sites can be tightly coupled with the on-line control center and in this mode offer control center redundancy. When circumstances or operational procedures dictate, on-line functionality can be transferred from the primary to the backup site. Users can be physically located at either site, and their ability to access the functionality provided by the on-line PSOCC is unimpaired except for obvious cases where their physical location is not functional or cannot access the on-line site. A PSOCC also has provisions for development, training, and testing environments. These environments are often built upon a network, with hardware and application software that enables operations staff training and application software testing, particularly for new software versions. See Figure 34.


Figure 34 shows the configuration of training and testing environments.

The underlying databases for training or testing environments can be initialized using either current or recent data from the NOC as well as potentially some enterprise systems. However, information or operational requests from training or testing systems should never be propagated back to a PSOCC or other enterprise system, as this would effectively contaminate those systems with invalid data.

3.7.6 Business-to-business Integration
The smart energy ecosystem contains many examples of the need for business-to-business integration, including integration between ISOs, utilities, and service providers. Such integration is becoming increasingly dependent upon the Internet, as opposed to private data links, which creates several notable requirements for message transport:
• Mutual authentication, typically using X.509 certificates
• Encryption, typically leveraging TLS
• Signatures for non-repudiation of transactions, typically leveraging WS-Security

There are three primary patterns for one business to obtain information from another: 1. Request/reply integration via secure Web services (noting that Web services may not need to be secured if only public information is conveyed).


2. Publish/subscribe integration, where an organization may publish information to registered partners.
3. Portal integration, where a registered user for an organization can log into a partner portal to interactively submit transactions or view reports.
Information within messages is now typically conveyed using XML documents, replacing the variety of proprietary formats that were common in the past. Aside from the fact that many standards now specify information exchanges in the form of XML documents, XML documents have the advantages of being structured and self-descriptive, and they can be leveraged by a wide variety of tools.
All integrations require observance of information protection and privacy concerns. For example, where an ISO maintains demand response program registrations, different organizations may be approved to see only certain details of those registrations; other details must be hidden from organizations that do not share the need to know. Indeed, each piece of information maintained by an organization may have a different level of confidentiality. These levels would include:
• Public, where the information has no confidentiality restrictions
• Shared by a set of designated organizations, such as market participants
• Specific accessibility by one or more identified organizations
• Private, where the information is not shared externally

Additionally, there is typically the need for role-based access control (RBAC), where:
• Each organization has a set of users
• Each user has one or more roles within the organization
• Each role identifies privileges, allowing access to resources, such as the ability to read certain sets of information and/or submit certain types of transactions
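A minimal sketch of this organization/user/role/privilege chain follows; the roles and privileges are invented for illustration, and the final check anticipates the trader-separation example discussed next.

    # Minimal role-based access control check: users hold roles within an
    # organization, and each role grants a set of privileges.
    ROLE_PRIVILEGES = {  # illustrative roles and privileges
        "market_participant": {"submit_bid", "read_public_prices"},
        "grid_operator":      {"read_grid_status", "issue_switch_order"},
    }

    USERS = {  # user -> (organization, roles)
        "alice": ("Acme Energy Trading", {"market_participant"}),
        "bob":   ("Metro Utility",       {"grid_operator"}),
    }

    def is_authorized(user, privilege):
        _org, roles = USERS[user]
        return any(privilege in ROLE_PRIVILEGES[r] for r in roles)

    print(is_authorized("alice", "submit_bid"))        # True
    print(is_authorized("alice", "read_grid_status"))  # False: traders must not
                                                       # see grid operations data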

A notable example of RBAC is the separation of the unregulated side of the utility business (generation and traders) from internal transmission network operations information, such as grid status, which could give them an undue competitive advantage. Traders could game the market knowing when and where congestion might occur due to transmission network operations.

3.7.7 Customer Integration
Within the smart energy ecosystem, a wide variety of customers will collaborate with utilities, service providers, and devices. Those customer types include:
• Residential
• Commercial
• Industrial
• Wholesale

For the purpose of the following discussion, it's important to note that some aspects of the reference architecture may be more appropriate for one group than another. As previously discussed, customers are changing, becoming active participants on the electric grid by engaging with new technologies. Customers are now more than energy consumers. Regardless of whether they are large or small, they may now export energy to a distribution network, or potentially the transmission network in the case of very large customers.

Figure 35 illustrates how a residential customer may interact with the smart energy ecosystem through collaboration portals and home area networks.


Customer-focused portals provide access to billing, usage history, and energy programs. Customers may be able to participate in demand response programs or offer distributed energy resources to the energy market. One example of a common energy resource will be the storage opportunity that batteries in hybrid or electric vehicles offer. Batteries permit a vehicle to be recharged at those times of the day or night when prices are low. Or the grid operator could draw from the batteries as an energy source at times when prices are higher and somehow compensate the battery owner (as long as this is properly coordinated with the vehicle owner's planned driving schedule).
The communication framework used by advanced meter infrastructures (AMI) may come into use as a gateway into the home area network (HAN), where a specific HAN technology may be used within the home or business for device integration. Currently, a ZigBee smart energy profile is used to define devices and their interactions within the HAN. In this way, local devices can respond to market price schedules, pricing signals, and demand response events. From a standards perspective, there is currently a standardization gap between the HAN and the utility (or service provider). When this gap is closed, utilities or service providers will be able to interact with the HAN gateway, which, in terms of the ZigBee Smart Energy profile, will be called an energy services portal (ESP). A more general movement toward embracing the use of IP communications to HAN devices is clearly seen through the efforts of the IPSO alliance.
Customers may also employ service providers that act as aggregators for purposes of participating in distributed generation or demand response programs, where the aggregator can then competitively participate in an energy market using the integration options provided in a smart energy ecosystem. In the cases of commercial and industrial customers, BACnet, LonWorks, and OpenADR are the current common standards being used for demand response:
• BACnet is a standard used for building automation control systems.
• LonWorks is a standard for building automation networking.
• OpenADR is starting to come into use by utilities and service providers to issue pricing signals and load control events.

Integration solutions at the customer level must be inexpensive and easy to install and maintain. Many factors demand plug-and-play integration and significant standardization at this level. Customers will also need capabilities for more local preferences and control.

3.7.8 Power System Grid
Integrating with the grid involves meshing those devices that are used to monitor and control equipment and resources on the grid via a hierarchy of interconnected networks.


A number of industry standards are being widely used for such purposes, including those provided by organizations such as the IEC and IEEE. Integration with these standards is typically accomplished using capabilities provided by a vendor system, or through the use of a third-party adapter. These adapters then typically use field networks for communication, which are often private, highly isolated, or highly protected from use by parties other than the electric utility that owns the equipment being controlled. Data acquisition from, and control commands to, substations, intelligent end devices (IEDs), and metering devices often use a variety of networking technologies in the field. Many different types of devices are used in the exchange of measurements, controls, meter readings, signals, events, etc., and many different standards are used to support these exchanges. Some of these standards are very stable and some are still evolving.

Figure 36 depicts how integration occurs over many networks.35

35. Impact of Secure, Scalable Performance on Demand Response Communication Architecture, by Dave Hardin, EnerNOC, and Scott Neumann, UISOL; Grid-Interop 2012.


The following paragraphs provide a brief introduction to some of the most commonly used standards and how they apply in the overall system architecture. These include:
• Intelligent End Devices
• Distributed Network Protocol
• Inter-Control Center Protocol
• IEEE C37.118
• IEEE 1547.3
• ANSI C12
• ZigBee and IPSO
• Open Automated Demand Response
• BACnet
• LonWorks

3.7.8.1 Intelligent End Devices
Clearly, the various networks will allow more autonomous decision making and control within substations, on feeders, and locally by resources. To further enable this trend, intelligent end devices (IEDs) need to integrate via plug-and-play standards, especially those IEDs at the end-user level. Devices in substations and on feeders are configured; there is typically no custom software or integration at this level.
The IEC 61850 series of standards defines the communication for substation and feeder automation, providing for interoperability between IEDs. It is primarily used for control and protection within substations and on feeders. Within IEC 61850, a dialect of XML called substation configuration language (SCL) is used to describe automation systems and IEDs. A hierarchical model is used within IEC 61850, where a server can have:
• many logical devices,
• each with many logical nodes,
• each with many data objects,
• each having many attributes.
IEC 61850 uses a high-speed station and process bus, allowing an IED to publish events to other IEDs packaged as generic object oriented substation event (GOOSE) messages. There are many vendors providing a wide variety of IEC 61850-compliant devices and applications. There are also gateway products and toolkits that can be leveraged for integration purposes.
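The server/logical-device/logical-node/data-object hierarchy can be sketched as a simple data structure. The names below (e.g. MMXU for a measurement unit) follow IEC 61850 naming conventions, but the model itself is a simplified illustration, not a conformant implementation.

    # Simplified model of the IEC 61850 hierarchy: a server contains
    # logical devices, which contain logical nodes, which contain data
    # objects, each carrying attributes.
    from dataclasses import dataclass, field

    @dataclass
    class DataObject:
        name: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class LogicalNode:
        name: str                      # e.g. "MMXU1", a measurement unit
        data_objects: list = field(default_factory=list)

    @dataclass
    class LogicalDevice:
        name: str
        logical_nodes: list = field(default_factory=list)

    # One feeder bay device exposing a three-phase current measurement.
    server = [LogicalDevice("Feeder1", [
        LogicalNode("MMXU1", [DataObject("A", {"phsA": 120.4, "phsB": 119.8})]),
    ])]

    for device in server:
        for node in device.logical_nodes:
            for obj in node.data_objects:
                print(f"{device.name}/{node.name}.{obj.name}", obj.attributes)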

IEC 61850 uses a high-speed station and process bus, allowing an IED to publish events to other IEDs packaged as generic object oriented substation event (GOOSE) messages. There are many vendors providing a wide variety of IEC 61850-compliant devices and applications. There are also gateway products and toolkits that can be leveraged for integration purposes.

3.7.8.2 Distributed Network Protocol

Distributed Network Protocol 3 (DNP3) is another protocol commonly used for communications between SCADA master stations, remote terminal units (RTUs), and IEDs. Within DNP3, data is organized into data types that include:
- Binary inputs
- Binary outputs (e.g., controls)
- Analog inputs
- Analog outputs
- Counters
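As a rough illustration of how these data types become addressable points, the sketch below keeps a point database keyed by type and index and reports analog changes only when they exceed a deadband. It is not a DNP3 implementation; the function names and deadband value are invented for the example.

    # Sketch only: DNP3-style points keyed by (data type, index), with
    # report-by-exception for analog values.
    def update_point(points, ptype, index, value, deadband=0.5, publish=print):
        key = (ptype, index)
        old = points.get(key)
        if ptype == "analog_input" and old is not None and abs(value - old) <= deadband:
            points[key] = value
            return  # change too small to report
        if old is None or value != old:
            points[key] = value
            publish((key, value))  # stands in for an unsolicited response

    points = {}
    update_point(points, "binary_input", 0, True)    # breaker status -> reported
    update_point(points, "analog_input", 3, 118.2)   # bus voltage -> reported
    update_point(points, "analog_input", 3, 118.4)   # within deadband -> suppressed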

Data points are defined from these data types to manage a single value. Measured values can be reported periodically or by exception. The DNP3 protocol is supported by products from many SCADA vendors and has been widely deployed.

3.7.8.3 Inter-Control Center Protocol

IEC 60870-6 is a standard used to provide data links between control centers for the exchange of measurements and controls. IEC 60870-6 is also commonly known as TASE.2 and the Inter-Control Center Protocol (ICCP). This protocol has been in use for many years and is supported by a variety of SCADA and energy management systems, as well as other vendor products. There are also gateway products and toolkits that can be used for IEC 60870-6 data links.

3.7.8.4 IEEE C37.118

The IEEE C37.118 standard defines synchronized phasor measurements used in power system applications. Given that an IED can leverage GPS for accurate time synchronization, it is now possible to obtain accurate, synchronized measurements of the waveforms for points on the electricity grid. This standard has been submitted to the IEC for inclusion within IEC 61850.

3.7.8.5 IEEE 1547.3

The IEEE 1547.3 standard is a guide for monitoring and exchanging information and controlling distributed energy resources. This document is at the level of use cases and examples, leaving some gaps for future standardization.

3.7.8.6 ANSI C12

ANSI C12 is a suite of standards used to standardize many aspects of meters. These standards range from device codes to optical ports to protocol specifications, and they recognize that meters are used for water and gas, as well as electricity.

3.7.8.7 ZigBee and IPSO

Two technologies are being proposed for home area networks (HAN): ZigBee and IP for Smart Objects (IPSO). ZigBee allows for secure wireless communication between devices, where specific devices may provide specific capabilities useful for energy control and conservation. IPSO is focused on the use of IP for connections and communication between Smart Objects.


HAN technologies such as ZigBee and IPSO can be applied in commercial and industrial settings as well as residential settings. Examples of HAN integration include:
- Meters
- In-home displays
- Smart appliances
- Thermostats
- HVAC
- Occupancy sensors
- Lighting
- Pool pumps
- Charging of PHEVs
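The sketch below shows the kind of local behavior this integration enables: a thermostat raising its cooling setpoint when a high price signal arrives over the HAN. The threshold and offset stand for consumer preferences and are hypothetical, not part of the ZigBee Smart Energy profile.

    # Illustrative HAN device logic: shed load when a price signal is high.
    def thermostat_setpoint(price_per_kwh, normal_setpoint_f=72.0,
                            high_price_threshold=0.20, offset_f=4.0):
        """Return the cooling setpoint given the current price signal."""
        if price_per_kwh >= high_price_threshold:
            return normal_setpoint_f + offset_f  # tolerate warmth, reduce load
        return normal_setpoint_f

    print(thermostat_setpoint(0.12))  # normal pricing -> 72.0
    print(thermostat_setpoint(0.35))  # peak pricing   -> 76.0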

While the specifications for communication within the HAN are well defined, the gateway between the HAN and the utility is not, and this represents a notable gap. Currently, the gap is often bridged by closed mechanisms, such as the proprietary communications provided by some metering infrastructures. However, this creates challenges for security, reliability, and scalability. One very challenging security issue relates to making all the devices integrated on the HAN accessible and visible from the internet. Scalability is largely constrained by whether a multicast capability can be leveraged to send signals. An open gateway specification is a likely area for standardization, which would then also likely provide for HAN networks based on a variety of technologies such as IPSO, HomePlug, or ZigBee.

3.7.8.8 Open Automated Demand Response

Another specification coming into use for demand response is Open Automated Demand Response (OpenADR), which allows for the management and execution of demand response programs, where pricing signals and load control requests can be issued to clients. OpenADR supports the communications between client devices and a server for events, pricing signals, and bidding (see the sketch at the end of this section).

3.7.8.9 BACnet

BACnet is a standard used for building automation and control networks. Specifications such as ZigBee and OpenADR recognize the need for BACnet integration. Within industrial settings, OPC is another technology that might be used for controlling and monitoring equipment.

3.7.8.10 LonWorks

LonWorks is a protocol standard (codified under ANSI/CEA-709.1-B and ISO/IEC 14908-1 through -4) that is used for communications over twisted pair, power line, fiber optic, and RF media for data acquisition and control functions in buildings, for equipment such as lighting, HVAC, and other building controls.
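To illustrate the OpenADR interaction pattern referenced above, the sketch below shows the general shape of a demand response event a server might issue and a client's opt-in reply. The field names and values are invented for illustration; they do not follow the normative OpenADR schema.

    # Illustrative only: a demand response event and a client response.
    dr_event = {
        "event_id": "DR-2013-07-15-001",
        "signal": "LOAD_CONTROL",          # could also be a PRICE signal
        "start": "2013-07-15T14:00:00Z",
        "duration_minutes": 120,
        "requested_reduction_kw": 500,
    }

    def respond(event, available_shed_kw):
        """A client opts in when it can meet the requested reduction."""
        return {
            "event_id": event["event_id"],
            "opt_in": available_shed_kw >= event["requested_reduction_kw"],
        }

    print(respond(dr_event, available_shed_kw=650))  # opt_in: True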


3.7.9 Common Services

Common services provide functionality that can be leveraged by many applications and generally fall within two categories:
1. Generic in nature, e.g., for logging and notifications
2. Domain focused, e.g., for topology processing and power flow

Common services are sometimes the realization of SOA as a reusable component. They can also be provided as functionality embedded within an integration infrastructure. Some common services may also be obtained as third-party products.

3.7.10 Cloud Services

The topic of cloud continues to come up in virtually every discussion of deployment strategies, and Microsoft is at the forefront of cloud services. IDC recently quantified the major trends in the cloud marketplace. Three key statistics are:
1. Cloud spending will reach $100B in 2016
2. The public cloud is growing at four times the rate of the IT market
3. Software as a service currently represents 77 percent of cloud revenue

Unfortunately, a number of impressions and misperceptions surround the term cloud and seem to limit open dialogue about its potential benefits to utilities. Indeed, the term cloud, when presented without a preceding qualifier like hybrid or private, seems to be synonymous with the public cloud, a linkage that gives rise to a number of security and privacy concerns and stifles conversation among utility industry participants.

From Microsoft's perspective, however, the cloud in all its forms differs from on-premise data management capabilities because of the ease of secure connectivity for the scale-out of billions of devices. As an example, think of the challenges of managing data coming from millions of electric vehicles, and of providing services to customers, especially the locations of charging stations and the grid capacities for those stations. The resulting information tsunami would be overwhelming, daunting, and possibly prohibitively expensive for individual utilities, especially if they were required to build the on-premise infrastructure on their own.

As one part of the solution to those issues, we at Microsoft are working today with utilities to assess how they can use the cloud as a way to increase their agility, control their costs, and provide elasticity across their internal and external systems. The following sections (and the technology section describing the Microsoft approach to cloud deployments) seek to clarify cloud approaches, as well as speak to some common cloud blockers.

3.7.10.1 Attributes of Cloud Deployments

Starting with the attributes of cloud-oriented deployments lets us distinguish them from the simple notion of virtualization. The following list describes common attributes of cloud-oriented deployments regardless of the location of the cloud datacenter:
- On-demand self-service: Applications and resources are delivered as services. Customers and users can request resources, as well as configure and manage those resources, via automated provisioning using an interactive portal.
- Broadly accessible via network: Resources are made available via network connectivity and are broadly accessible, typically via a number of form factors and via any internet connection.
- Pooled resources: Key hardware resources are pooled. Computing, storage, and networking resources are pooled and abstracted into consolidated units, enabling dynamic resource provisioning and scaling.
- Rapid elasticity: IT services and cloud resources can be scaled up or down instantly to meet evolving business demands via automation or resource workflows. The ability to quickly expand or contract resources removes the need to provision for the ultimate performance need, which is sometimes unknowable at initial deployment time.
- Measured and metered: All resources are instrumented so resource usage can be measured and metered, enabling customers to pay for only the resources that are actually consumed.
- Pay as you go: Customers pay regularly for the services consumed in the last billing period. Typically, this is a monthly service subscription.
- Committed service level agreement (SLA): A well-understood level of service availability is included in pay-as-you-go agreements, giving customers a clear understanding of expected service downtime.

3.7.10.2 Levels of Cloud Deployments

From an ontological standpoint, these are the three basic levels of cloud deployment:
- Infrastructure as a Service (IaaS) consists of all computing resources, memory, storage, and networking necessary to provide an on-demand datacenter. IaaS provides a cloud-based datacenter without requiring the installation of the computing resources. The resources are typically provided pay as you go, based upon resource consumption. In this model, the IaaS consumer is responsible for all patching, monitoring, and management of the infrastructure.
- Platform as a Service (PaaS) consists of all underlying computing infrastructure, as well as a cloud operating system, resource management, and development and test tools and environments, all focused on developing, deploying, testing, hosting, managing, and maintaining applications. Applications are developed to run in the cloud environment, so no on-premise computing resources are necessary to deliver PaaS-oriented applications. PaaS enables applications to grow to Web scale with delivery via the Internet. Multi-tenant versions of applications running in PaaS enable access by many users simultaneously. PaaS can also be used to architect hybrid deployments where compute or connectivity may be via the cloud, but data resides on-site.
- Software as a Service (SaaS) consists of software which is deployed as a service via the Internet. SaaS architecture is typically centrally hosted, single-instance, multi-tenant applications delivered and paid for via a subscription model. Upgrades and patches to both the underlying platform and the application are done transparently to the service consumer. As a result, SaaS applications do not require deployment of large on-premise infrastructure, thus saving significant time and reducing resource commitment.

3.7.10.3 Movement among Levels of Cloud Deployments

As a utility moves its deployment model from IaaS to PaaS to SaaS, datacenter and cloud providers take on larger and larger levels of support. In IaaS, the infrastructure provider is responsible for the basic computing, storage, and networking resources, but the service provider is responsible for all operating system and application updates and patching, monitoring, and management. In PaaS, the cloud provider is responsible for all infrastructure and operating system management, maintenance, updates, and patching, freeing the service provider to focus on their application and application updates. In SaaS, the cloud provider is responsible for the entire service delivery, including all hardware, platform, operating system, and application management, as well as maintenance, updates, and patching.
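The division of responsibility described above can be summarized in a small mapping, sketched below. The layer names are informal shorthand, not an official taxonomy.

    # Sketch: which layers the cloud provider manages under each model; the
    # remainder is left to the service provider (per the description above).
    LAYERS = ["networking", "storage", "compute", "operating_system",
              "middleware", "application"]

    CLOUD_PROVIDER_MANAGED = {
        "IaaS": {"networking", "storage", "compute"},
        "PaaS": {"networking", "storage", "compute", "operating_system", "middleware"},
        "SaaS": set(LAYERS),
    }

    def service_provider_managed(model):
        return [layer for layer in LAYERS if layer not in CLOUD_PROVIDER_MANAGED[model]]

    print(service_provider_managed("IaaS"))  # ['operating_system', 'middleware', 'application']
    print(service_provider_managed("SaaS"))  # []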


Figure 37 illustrates conventional on-premise deployment as well as the three service delivery models and how they are managed.

3.7.10.4 Challenges to Cloud Adoption

Utilities are adopting the various clouds at various rates depending upon their own needs, levels of comfort or conservatism, and the levels of development offered by solution providers. At this stage in the cloud's evolution, the primary tension is the tradeoff between control and cost, and those considerations fall into political, technical, financial, and legal categories. Resistance to cloud scenarios in order to maintain control of application functionality inside the organization may well override all other considerations. Understanding the technical, financial, and legal implications of cloud is important to help establish which cloud model best fits a utility's needs.

Some of the typical blockers cited for not taking advantage of cloud models are described below, along with the considerations, best practices, and ways to mitigate these blockers in practical cloud deployments. The implications of any cloud deployment strategy should be assessed before undertaking the transition. The approaches below will help with the decision, as well as help identify acceptable cloud service providers based upon how completely they address the key considerations.

3.7.10.4.1 Security

Security is always a major consideration for utilities. Cloud service providers are making great progress in defining and establishing cloud security best practices.


Security best practices include physical security of datacenters and associated assets, cybersecurity for computing assets and networking, and personnel security policies and practices. Datacenters should comply with ISO/IEC 27001:2005 and should incorporate a program for regularly conducting intrusion testing. Also important are security practices for data at rest, data in transit, and solution abstraction models. It is imperative that cloud service providers, integrators, and application service providers make provision for encryption of any sensitive data that transitions to, or is stored within, clouds, in addition to abstraction models that further obfuscate important data. An example of this might be using a GUID for all smart meter data, so that even if a meter data store is compromised, linkage of the data to a customer is not possible because the GUID resolution to a customer ID is held on premise behind the utility firewall.

Utilities will become more comfortable with cloud security when they fully understand how security is a holistic process that is transparent to them. The utility can then make decisions about the security being offered to them by cloud service providers. Utilities should be encouraged to learn more about the Cloud Security Alliance, an organization that provides a matrix guide to the security processes and controls available to them. The operating and management software within a cloud service provider needs to be developed with a clear security process, and must be understood as capable of mitigating unknown threats.

3.7.10.4.2 Privacy and Data Sovereignty

Privacy is also a key consideration for utilities, thinking for example about the fiduciary responsibility of protecting customer usage information, or about utility operating practices and results. Cloud service providers should have a clearly stated commitment to privacy and privacy practices. The policies regarding data sovereignty and the movement of data across country and geographical regions must be clearly stated. These policies must clearly state how and when data might be transferred across these geographical boundaries, and the legal instances when data may be provided under regulatory requirements like the US Patriot Act. In the European Union, the EU Data Protection Directive (95/46/EC) controls the handling of personal data. Cloud service providers must articulate how they comply with this EU directive, or with the Safe Harbor Framework.

3.7.10.4.3 Regulatory Compliance

EU and US NERC CIPs critical infrastructure compliance, as well as any other regulatory compliance such as ISO/IEC 27001:2005, must be identified, and at a minimum evidence of compliance or certification should be available. Understanding the NERC CIPs Electronic Security Perimeter and its application to control of Bulk Electric System assets is critical to understanding what processes and computations can be used in which type of cloud deployment model. See section 3.8.5 for a discussion of NERC CIPs, security, and applicability to power system applications and systems.

3.7.10.4.4 Performance

Concern over performance compared to high-performance on-premise deployments is common. Performance must be considered holistically from the edge of the utility network. A datacenter or cloud service provider that has stellar datacenter performance, but no consideration for networking to the datacenter, precludes mission-critical or important systems deployments. Networking is every bit as important as the compute and storage capabilities of the provider.

3.7.10.4.5 Availability

Availability is also a fundamental measure of cloud service performance. Cloud service providers should be able to provide a contractual level of availability for the various deployment models, and failure to meet the service level agreement (SLA) should result in a direct financial impact to the service provider, so that motivation to take all reasonable steps to ensure availability is central to the provider's business.

3.7.10.4.6 Interoperability

A number of interoperability standards are in the works for cloud deployments, but because the capabilities of cloud offerings are changing so fast, and because this is an area where cloud providers differentiate their offerings, true cloud interoperability may be some ways off. Moving data (including encryption), application management, networking, security, and building applications using the development environment and cloud services infrastructure all affect cloud interoperability. Leveraging standards such as the Distributed Management Task Force cloud infrastructure management work, OASIS OData, IETF OAuth, and OASIS AMQP (Advanced Message Queuing Protocol) helps with functional interoperability. Use of Internet standards such as HTTP, XML, SOAP, REST, JSON, and TCP/IP helps enable application-level interoperability.

3.7.10.4.7 Integration

Web services and SOA integration solve many of the issues for cloud-oriented integration. However, consuming cloud services may inherently result in service delivery dependencies the instant the services are consumed. The underlying connectivity, service bus integration, security model, and mechanisms for network connectivity are all fundamental to the integration strategy, and consistency of these mechanisms across the cloud deployment models is important to minimize the service dependencies. Virtual networking, in addition to machine virtualization, is changing the approaches and options for cloud integration. A good cloud service provider will enable application-level, network-level, and machine-level integration between on-premise and cloud solutions in a hybrid deployment configuration.
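As a sketch of the hybrid pattern just described, the snippet below publishes an on-premise event to a cloud-hosted message broker over AMQP using the open source pika client (which speaks AMQP 0-9-1, as used by RabbitMQ; the OASIS AMQP standard mentioned above is version 1.0). The broker host, credentials, and queue name are hypothetical.

    # Sketch: push an on-premise event to a cloud broker over AMQP.
    import json
    import pika  # third-party AMQP 0-9-1 client

    credentials = pika.PlainCredentials("utility-app", "example-password")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="broker.cloud.example.com",
                                  credentials=credentials))
    channel = connection.channel()
    channel.queue_declare(queue="grid-events", durable=True)

    event = {"source": "feeder-12", "type": "overload-warning", "value_amps": 612}
    channel.basic_publish(exchange="", routing_key="grid-events",
                          body=json.dumps(event))
    connection.close()

In production, such a connection would of course use TLS along with the encryption and pseudonymization practices discussed under security above.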


3.7.10.4.8 Configurability and Customizability

The level of customizability is reduced as deployment models move closer and closer to software as a service. To achieve the benefits of efficiency and scale, applications need to be configurable, but multi-tenancy tends to limit or eliminate customization. Solution providers with a rapid development and deployment cycle may incorporate new functionality into the service offering to respond to key customer needs. Feature-rich multi-tenant cloud solutions provide the best economics. If significant customization is required, a tailored on-premise deployment may be the only alternative. However, based upon investment and cloud trends, the number of these types of offerings and deployments appears to be on the decline.

3.7.10.4.9 Hard to Move Back

Utility business and IT departments are reluctant to adopt products, solutions, or services which may be hard to move away from should the decision turn out to be a bad strategy. Ultimately the solution offerings will prove themselves in the marketplace, so this is less of a concern. However, the best way to mitigate this consideration is to adopt offerings built upon underlying technology that is consistent across the various deployment models. Strategic selection of infrastructure and a platform that is consistent on-premise, in private clouds, in hybrid clouds, and in public clouds is the best way to mitigate impacts from a cloud decision and to maintain options going forward as utility needs change.

3.7.10.4.10 Expense versus Capital Investment

Cloud infrastructure and cloud-oriented services are typically treated as operating expenses rather than capital investments. Although in some cases the addition of new functionality via cloud services can be capitalized, the financial model is a consideration. The solution ROI should weigh the operating expense against the forgone return on capital investment. In many cases, the efficiencies and other cloud benefits far outweigh the financial impact, but the utility must consider the investment tradeoffs. Unregulated utilities such as retailers often will not have this constraint and can more easily justify a cloud strategy.

3.7.11 Application-to-Application Integration Using CIM

Not only can the CIM model seem overly large at times, but it is also given in an abstract, platform- and implementation-agnostic format. It might be helpful, therefore, to provide some guidelines for how the CIM can be used in a concrete case, as a message model. In the following, a process model for integration with the CIM is described, as recommended by the IEC 61968 standard.


Figure 38 CIM procedure model (Source: BTC)

We detail the activities occurring in Figure 38 in the following sections.

3.7.11.1 Specifying the Context Model

So-called use cases, which are part of the UML standard, have become quite popular in recent years. These modeling aids are now considered best practice in software design. A use case is meant to depict and determine the interactions between the major components of the system. In other words, a use case asks who does what with whom and when. Specifying use cases is meant to ensure from the outset that no essential software requirements will be missed at the later implementation stage. A use case first fixes the so-called actors. These are by no means always real persons; usually they are the main components of the software. A use case typically describes one scenario (possibly with variants), which is a sequence of interactions between the actors and the software. The term actor refers to the fact that actors have the leading roles in the scenario. An example of a scenario in this context is a power outage for a customer or the reading and recording of a meter.


Use cases are written down as an ordered list and are often supplemented by a diagram (cf. Figure 39). As is often the case with modeling, what matters is choosing the right level of detail. This is another reason why guidelines might prove useful. Some guidelines can be found in the IEC 61968-1-1 standard series. These guidelines do not form part of the official standard but should rather be seen as templates and examples for the user. Hence, the first step of CIM integration is to take a suitable use case from the standard if possible. If an appropriate use case is contained in the standard, one can adapt and fine-tune it for the purposes of the respective company.

Figure 39: Use case for meter reading based on IEC 61968-9 (Source: BTC)

An additional advantage of specifying use cases is that they make the drawing up of sequence diagrams simple. A sequence diagram normally captures the behavior of a single use case. More precisely, it captures which messages are exchanged between the actors, together with their chronological order. Sequence diagrams are very helpful for understanding complex software interactions. This is particularly true if there are multiple threads and/or asynchronous messages.

According to the CIM model, a sequence diagram should always indicate the descriptive verb/noun pair of any message exchanged. Among other things, the verb specifies which design pattern is used (publish/subscribe vs. request/reply). For its part, the noun indicates which load the message carries: the chosen noun lays down which schema has to be used for the message payload. Both noun and verb are meta-information of the relayed message and are contained in the message header. The header can be seen as an envelope for the real content of the message, viz. the payload.
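A minimal sketch of this header/payload separation follows. The verb "created" and the noun "MeterReadings" follow the IEC 61968 naming pattern, but the XML below is illustrative and is not generated from a normative schema.

    # Sketch: a CIM-style message with verb/noun meta-information in the header.
    import xml.etree.ElementTree as ET

    message = ET.Element("Message")
    header = ET.SubElement(message, "Header")
    ET.SubElement(header, "Verb").text = "created"        # implies publish/subscribe
    ET.SubElement(header, "Noun").text = "MeterReadings"  # selects the payload schema
    payload = ET.SubElement(message, "Payload")
    reading = ET.SubElement(payload, "MeterReading")
    ET.SubElement(reading, "value").text = "1234.5"

    print(ET.tostring(message, encoding="unicode"))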

Figure 40: Sequence diagram for meter reading (Source: BTC)

The specified use cases (cf. Figure 39) together with the sequence diagrams (cf. Figure 40) form the so-called context model.

3.7.11.2 Specifying Message Models

Despite its generality, it can and does occasionally happen that the CIM does not reflect company-specific information. This might be because the CIM has a gap, or because the needs of a particular company are too idiosyncratic to be included in a general model. In that case the standard explicitly allows the extension of the CIM. Such an extension should be incorporated into the model itself, e.g., with Enterprise Architect.36 Missing information can be added to the class diagram using the ground rules of object-oriented design without changing the standardized areas. For example, suppose that for a concrete application we need an additional attribute X which is not part of the

36 Enterprise Architect, Sparx Systems.


standard. In that case we can define a new class, NewMeter, which extends the Meter class and contains the new attribute X. In this fashion our extension is conjoined to the standard while leaving the general model intact. Another advantage of this approach is that extensions can be kept even when the underlying model is updated or changed. CIM-conforming message schemas can then be generated, including any model extensions. This is the next step of our procedure model. (A code sketch following section 3.7.11.6 below illustrates the extension approach.)

3.7.11.3 Specifying Message Schemas

The header of CIM messages is defined in the context model. Then, payloads need to be defined. For this, the (possibly extended) CIM model needs to be loaded and, if desired, restricted to a subset by using a profile. From this model, one can extract the objects relevant to the message. The payload of a CIM message stays the same when the header is exchanged, e.g., if an electrical line is changed or newly inserted.

3.7.11.4 Specifying Service Interfaces

After the messages or message schemas have been generated, the corresponding interfaces need to be designed and written. For every external system that does not already have an interface matching the schema, but which is supposed to be integrated with the middleware layer, such an adapter needs to be written. Although each of these adapters depends on the particular software used, so the process cannot be completely automated, it should be made considerably easier by the extensive use cases and sequence diagrams provided. The service stub of such an adapter can be generated with standard tools such as Microsoft Visual Studio. For the logic behind it, however, the professional users generally need to be consulted in order to work out the particular mapping.

3.7.11.5 Specifying Service Orchestration

In order to map the processes specified by the sequence diagrams to so-called orchestrations or workflows, a fitting middleware should be used. Usually an enterprise service bus (ESB) such as Microsoft BizTalk Server is employed for this purpose. Besides relaying messages, sending them point-to-point, and carrying out calculations, the bus can also be configured in such a fashion that format adapters can be generated easily. For example, such an adapter could be used for converting a CIM-compliant message and its attributes to a proprietary format used by one of the systems involved, and vice versa.

3.7.11.6 Repository for Artifacts

In order to ease the communication between consultants and developers, a central CIM repository can be operated. This repository is a file server providing several CIM model versions, CIM schema versions, extension models, CIM tools, and used CIM message


schemata in combination with versioning. That is, all projects that share CIM models have a common place to refer to.37
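Returning to the extension example from section 3.7.11.2, the sketch below shows the subclassing approach in code form: the standard class is left untouched, and the company-specific attribute lives only in the extension. Apart from mRID, which is a genuine CIM attribute, the class and attribute names are illustrative.

    # Sketch: extending a CIM class without modifying the standardized model.
    class Meter:                      # stands in for the standardized CIM Meter class
        def __init__(self, mrid, serial_number):
            self.mRID = mrid          # CIM master resource identifier
            self.serialNumber = serial_number

    class NewMeter(Meter):            # company-specific extension
        def __init__(self, mrid, serial_number, x):
            super().__init__(mrid, serial_number)
            self.x = x                # the additional, non-standard attribute X

    meter = NewMeter("meter-0001", "SN-001", x=42)
    print(meter.mRID, meter.serialNumber, meter.x)

Because the extension is a subclass, it survives updates to the underlying standard model, as noted above.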

3.8 Security

Security considerations are of utmost concern for the transformation of the current utility network to a smart energy ecosystem, because of the mission-critical nature of the grid and generation infrastructure. Regulatory authorities and the industries themselves have intensified their focus on security due to the increased complexity of computing environments and the growing sophistication of attacks, which seem to occur almost daily. Concerns have been heightened by several high-profile events demonstrating system breach and, in some cases, improper and unauthorized operation of control systems.

As might be expected, given the importance of utilities to functioning economies and even to the very social fabric enabling human life, national governments consider their country's energy assets critical infrastructure that must be guarded like their very lands and resources. As such, they are taking bold, significant steps to ensure that utilities amplify their attention to security. National regulatory initiatives require concerted attention to the following three types of security:

3.8.1 Physical Security

Physical security includes controlling access to all critical infrastructure, assets, IEDs, and networks. Utilities are expanding their surveillance programs to monitor and ensure the physical security of the power system.

3.8.2 Operational Security

Operational security consists of ensuring that assets are operated within their designed limits for safety and performance. Proper training, qualification, and operation should be included in the utility operational security program, and the architecture must enable execution and reporting on these programs. As utilities run assets closer and closer to design limits, accurate and timely updates of operational limits must be maintained. Dynamic feeder ratings offer a good example of a set of operational limits that must be maintained dynamically and consistently: running with limits set on a cold, wind-free winter day could be a real problem if those limits are left unchanged for hot, windy summer days.
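The dynamic feeder rating example can be made concrete with a deliberately simplified calculation, sketched below. This is not IEEE 738 or any standard conductor thermal model; the coefficients are invented purely to show how ambient temperature and wind move the allowable current, so that a limit computed for one day's conditions cannot simply be reused on another.

    # Deliberately simplified sketch (NOT a standard thermal model): a rating
    # that rises with wind cooling and falls with ambient temperature.
    def dynamic_rating_amps(static_rating_amps, ambient_c, wind_ms,
                            ref_ambient_c=40.0, ref_wind_ms=0.6):
        temp_factor = 1.0 + 0.005 * (ref_ambient_c - ambient_c)  # cooler air helps
        wind_factor = (max(wind_ms, 0.1) / ref_wind_ms) ** 0.26  # wind cooling helps
        return static_rating_amps * temp_factor * min(wind_factor, 1.5)

    print(round(dynamic_rating_amps(600, ambient_c=0, wind_ms=5.0)))   # cold, windy day
    print(round(dynamic_rating_amps(600, ambient_c=40, wind_ms=0.2)))  # hot, still day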

3.8.3 Cybersecurity

The greatest concern to utilities and regulators has been the relatively porous, inconsistent, and inadequate application of cybersecurity.
37 Using CIM for Smart Grid ICT integration, by Matthias Rohr, Andre Osterloh, Michael Gründler, Till Luhmann, Michael Stadler, Nils Vogel. IBIS Journal, Issue 1 (6) 2011, International Journal of Interoperability in Business Information Systems, September 2011, pp. 45-61.


This has occurred because the software vendors selling point solutions over the last two decades have not had to consider their role in the supply chain of an overall security approach. The point solutions unwittingly opened the door to potential attackers, who have preyed upon single vulnerabilities to access entire systems. This has been compounded by the lack of an integrated and holistic architectural approach to overall system design to help mitigate risk.

Exacerbating the matter, an even greater threat has presented itself with the advent of the two-way devices that enable the smart energy ecosystem. In the past, the utility's control of field equipment was channeled through closed, proprietary communication infrastructures. While secure, those closed systems may have served to shut out innovative opportunities that would improve operating efficiencies. The smart grid movement could be described as working to change that. Now, utilities are using open and standards-based infrastructures to control their devices. While openness and interconnection offer great benefit to businesses and users alike, they also give those who would seek to disrupt operations, for whatever reason, even greater opportunity to access networks and devices. Open systems enable a greater data-driven understanding of an energy infrastructure operation.

Some new innovations have come with bolt-on, add-on types of security measures. It's like building a house of glass: the views are great from the inside, but everyone on the outside might also endeavor to take a look. The only option is to mount blinds, a superficial barrier detracting from the original design and deflecting only glancing attacks. Defenses need to be built into the system everywhere at the architectural level, e.g., like the use of special glass, strong frames, and other measures that work together to secure the entire structure.

In our view, threats to vulnerabilities are often related to the age of deployed assets. Outdated systems on outdated hardware, operating systems designed to address the security threats common 15 years ago, and dated applications cannot be expected to thwart sophisticated, modern threats. Legacy, custom, and even poorly designed COTS (commercial off-the-shelf) applications can represent significant risks. Anti-malware and anti-virus software that is misconfigured, unused, or running outdated virus definition files may offer a false sense of security, or as little protection as none at all. The industry's core perception of security therefore must change.


The implementation of security measures must not be viewed as the implementation of single-point chokeholds, like passwords; security must instead be considered holistically across the entire system, all contributors in the system's supply chain, and the system's entire lifecycle, from inception through adaptation/implementation and even to disablement. Baking security into the design and development process itself is the only answer: build a fortress beginning with the first stone. The real key here is to assume that you will have vulnerabilities that you may not even understand right now, and to consider how you would make exploitation of those vulnerabilities difficult for an attacker.

3.8.3.1 SERA First Principles for Cybersecurity

SERA recommendations are based on tried and proven principles that can be applied regardless of implementation technology or application scenario.

3.8.3.2 Effective Security and the Microsoft Secure Development Lifecycle

Keeping those principles in mind, the Smart Energy Reference Architecture sets as its goal effective security: an integrated program that consistently limits the possibility of a security breach and, should a breach occur, helps minimize the impacts on an organization's operations, assets, and people.

Figure 41 summarizes the major security principles described throughout this reference architecture.

It is possible for vendors to implement effective, holistic security with the Microsoft Secure Development Lifecycle (SDL), a process that the industry now broadly recognizes as a best practice, and an approach that has received continuous revision and upgrade over the last eight years as the threat landscape and the sophistication of attacks have evolved. This introduction of the Secure Development Lifecycle will be expanded later in this section, but for now it is important to place SDL in the context of the industry's overall move toward an integrated approach for greater security standards and regulatory control.

3.8.4 Industry Guidance for Security

A number of organizations offer their views on security standards and approaches. The following examples provide several that the utility should consider in developing its own approach, but this is not an exhaustive list for every geographic region. Utilities should consider these among the options available to them for establishing their own security framework.

3.8.4.1 The Microsoft Secure Development Lifecycle

The Microsoft Secure Development Lifecycle (SDL), which will be discussed in detail in the following sections of this document, is also closely connected to utility industry standards governing bodies. Worldwide, a number of organizations have released new guidelines and standards that demonstrate how key governmental agencies, trade associations, and utilities are approaching security and security best practices using SDL. Many of the following guidelines from other industry groups integrate Microsoft SDL concepts into their structures.

3.8.4.1.1 NIST IR 7628

The SDL's Security Engineering Principles have been integrated into NIST IR 7628, a three-volume document created by the Cybersecurity Working Group (CSWG) of the Smart Grid Interoperability Panel (SGIP) as a guideline for organizations to develop smart grid security strategies. NIST IR 7628 is the NIST guideline for addressing security throughout all NIST SGIP-chartered smart grid standards, regardless of which standards body eventually authors and formalizes the standards. The comprehensive nature of the document has resulted in other geographies and regulatory organizations leveraging the document for their own smart ecosystem security requirements.

3.8.4.1.2 IEC 62443

IEC 62443 complements NIST IR 7628 by defining the elements necessary to establish a cybersecurity management system (CSMS) for industrial automation and control systems (IACS) and provides guidance on how to develop those elements.

3.8.4.1.3 ISO/IEC 27034: Application Security

ISO/IEC 27034 is a six-part standard providing frameworks and processes to help organizations integrate security throughout the lifecycle of their applications.


ISO says that the aim of ISO/IEC 27034 is to ensure that computer applications deliver the desired/necessary level of security in support of the organization's information security management system. ISO/IEC 27034 offers guidance on information security to those specifying, designing/programming or procuring, implementing, and using application systems; in other words, business and IT managers, developers, auditors, and ultimately the end users of application systems.

Part one of ISO/IEC 27034, Information technology - Security techniques - Application security, was published in November 2011. Annex A of ISO/IEC 27034-1 is a case study showing how a generalized development process based on the Microsoft SDL aligns with ISO 27034. The part of 27034 published in late 2012 serves as a framework for developing secure software applications; its appendix uses the Microsoft SDL as a case study to illustrate points made in the guideline.

3.8.4.1.4 EC M/490

The European Commission Directorate-General for Energy issued the M/490 smart grid mandate directing the CEN, CENELEC, and ETSI organizations to create similar standards, including a reference architecture, security, and sustainable processes for the EU. The working group includes EU TSOs like ENTSO-E, DSOs, and energy and telecom companies, and is focused on rationalizing coverage of ISO/IEC 27001 and 27002, NIST IR 7628, and IEC 62351-7 and -8. EC M/490 aligns well with the stated program approach for SERA security. A very good treatment of electric utility threats is included in the M/490 Working Group Smart Grid Information Security report on SGIS requirement standards, and it should be a reference for anyone beginning an undertaking to define and model utility threats. To demonstrate the point made in the introduction to this section, that the standards are very interconnected across geographies, EC M/490 references NIST IR 7628, a standard which incorporates Microsoft Secure Development Engineering Principles.

3.8.4.2 Industry Guidance for Risk Management

Effective security also requires good risk management, and good risk management is a mindset that accepts the tenet that constant diligence is required to stay one step ahead of continuous threats. Risks must be understood in order to prioritize resources. Good risk management also accepts the notion that security solutions need to be built into every possible type of software and process across the IT and OT solution supply chain.

Utilities are now requiring that providers build security into their solutions to address various risks. Utilities should realize that their responsibility for security does not lessen once they make these requirements of vendors: they will still need to have their own security risk management strategy in place while the vendors bring their products up to specification, including mitigating legacy solutions until new vendor software is available. In addition to the above states of vigilance, utilities should consider the following guidelines for risk management, as offered by the following organizations and included in the Appendix to this document:
- U.S. Department of Energy Risk Management Process Guideline
- Carnegie Mellon Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2)

3.8.5 Regulatory Guidance for Cybersecurity

Understanding the scope and applicability of the North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection standards (CIPs) to cybersecurity is a key task for any North American interconnected transmission utility, and for any worldwide utility whose regulators follow or directly adopt the standards. The CIP standards 2-9 below provide a framework for the identification and protection of assets that fall into the category of critical cyber assets supporting reliable operation of the US Bulk Electric System. The version 4 CIPs are in force until FERC ratifies CIPs version 5, which would take effect in 2015. The references herein are to the CIPs version 5 standards. The NERC CIPs website can be consulted for the version 4 standards.

Figure 42 - NERC CIPs version 5 are expected to take effect in 2015.

The following section seeks to describe where these resources reside and how the ICT systems that operate the equipment are affected. This is not a legal interpretation of the CIPs standards and cannot replace proper due diligence by individual utilities for their specific cases.

3.8.5.1 NERC CIPs ICT Systems Considerations

New versions of the NERC CIPs standards for utilities continue to evolve as the nature of new threats becomes known, as best practices develop, and as utilities gain greater understanding of the regulations. The following sections describe what areas are included under NERC CIPs and discuss the gaps and grey areas as well.

3.8.5.1.1 NERC CIPs for Functional Security Zones

For example, utilities are now creating functional security zones (FSZs) with progressively tighter controls and reporting to comply with NERC CIPs. The following diagram shows three FSZs, ranging from corporate IT to the most tightly controlled environment of the control center network and applications.

3.8.5.1.2 NERC CIPs for Bulk Power Systems

The NERC CIP standards focus directly on operation of the Bulk Power System. As such, these control systems are directly affected:
- Energy Management Systems (EMS) at transmission voltage levels
- Protection and control systems for transmission protective relaying
- SCADA systems which supply real-time information to the EMS
- Reliability coordination systems across major control areas

3.8.5.1.3 NERC CIPs for Distribution Level Systems

The NERC CIPs currently apply to distribution voltage level systems and equipment only if the systems contribute to 300MW of automated load shedding, or if compromise of the systems would adversely affect operation of the Bulk Electric System. Some systems typically fall outside of the current standards:
- Distribution Management Systems
- Distribution-level Outage Management Systems
- Meter Data Management

3.8.5.1.4 NERC CIPs for Device Management

The functionality of traditional substation devices, coupled with the introduction of NERC CIP-007, is leaving many utilities uncertain about how to effectively approach the problem of total device management. Total device management includes removing the separation between IT and OT equipment, as well as understanding the cybersecurity practices that are requiring utilities to rethink their approach to remote engineering access, device password management, configuration management, and active device monitoring. This problem is worsened by the variety of vendor equipment installed within substations. These functions cannot be considered separately; a unified architecture, including the integration of legacy device configuration management with modern IEC 61850 practices, is required to effectively solve the problem. Utilities must address:
- The complexities of managing substation device passwords
- Risks incurred when manually managing device passwords
- Factors to consider when integrating automated password management
- The need to approach this problem in a vendor-agnostic manner
- How configuration management is a natural extension of remote engineering access and password management
- Integrating legacy device configurations with IEC 61850 SCL
- Enhancing substation device situational awareness and configuration management with active device monitoring

3.8.5.1.5 NERC CIPs Grey Areas

While the preceding sections offer clear-cut examples of what is included and excluded, the following systems operate in NERC CIP grey areas:
- Electricity Markets: The reliability coordination function and real-time dispatch fall under NERC CIPs, as do any ICT systems running in the same network environment as these physical market-clearing functions. However, the rest of the electricity markets' functionality, such as financial market clearing, bidding, monitoring, settlement, and reporting, does not need to conform to NERC CIPs, as long as it is not executing on the reliability coordination system LAN.
- DR, DER, Load Aggregation: These fall under NERC CIPs at certain load levels. For instance:
o There is a well-defined limit of 300MW above which these systems fall under the CIP regulations.
o If a DR system or a load aggregator dispatches over 300MW of load reduction which might be used for grid stability or under-frequency/under-voltage mitigation in the Bulk Power System, it falls under NERC CIPs. Likewise, systems directly controlling the output (or sending pricing signals which control the output) of distributed energy resources used for grid stability or ancillary services above the 300MW limit fall under the CIP regulations.
- Communication Links to Auxiliary Systems: At present, the NERC CIPs do not cover these communication links, but future releases are expected to incorporate them into the applicable standards.
- Other Systems on the Control Center LAN: Other systems that are physically implemented on the EMS control center LAN do fall under CIPs guidelines, whether or not they have anything to do with dispatch of the Bulk Electric System. The NERC CIPs guidelines provide good guidance on best practices for maintaining and protecting these ICT systems, but due to the effort involved in tracking and maintaining records for these non-critical systems, it is clear they should be removed from the control center environment.

Cloud computing datacenters provide great environments for many compute-, storage-, or communication-intensive ICT deployments. The NERC CIPs have implications as to what can and cannot be considered for cloud deployment:
- Those systems described in the previous section as not being covered by NERC CIPs can be considered for cloud deployment.
- Mobility and load aggregator/DR systems that connect to a broad array of devices may be ideal candidates for cloud deployment.

3.8.6 Preparing with a Holistic Security Strategy

In smart energy environments for bulk transmission in North America, the NERC CIPs represent non-negotiable minimal compliance practices. It is incumbent upon those utilities to adjust their security strategies with the awareness that the security landscape is evolving and that the NERC CIPs represent a minimum standard; by themselves, they are not adequate for proper security management on a permanent basis. The following strategy, in its holistic scope, is therefore appropriate to all North American utilities as well as worldwide utilities.

3.8.6.1 Setting Priorities for Security Strategy

Utilities do not have unlimited financial resources to implement full-scale security upgrades. The Center for Strategic and International Studies offers its 20 Critical Security Controls Version 4.0 as a consensus audit guideline for effective cyber defense, and it is well worth taking into account. For our purposes within the smart energy ecosystem, organizations must identify and prioritize security threats according to these principles:
- Initial focus should center on rank-ordering the most important people, systems, applications, and data.
- Secondary focus should be on prioritizing the upgrade or replacement of remaining systems based upon risk.
o This secondary focus must also include people, processes, and technology, and should include ALL systems (Microsoft, its partners, in-house applications, and COTS).
- Utilities should also examine the security they have in place for all of the following processes, to ensure they are fully addressed in the prioritization effort. If the utility does not have all of the following processes, it might also put them on its list for improvement, as resources allow (SERA recognizes that all utilities are not alike, so some of the following may be appropriate, or not at all):
o Application and Site White-listing
o Information Classification
o Patch and Vulnerability Management
o Privileged Identity Management
o Role-Based Access Controls
o Security Configuration
o Security Event and Incident Management Program
o Security Risk Assessments
o Software Configuration Management

3.8.6.2 Applying SDL

As introduced in section 3.8.3.2, the Microsoft Secure Development Lifecycle serves as a core component of many industry guidelines and standards and may be applied in a holistic manner by providing:
- Security Education for all Development and IT Staff
- Threat Modeling
- Risk Prioritization
- Incident Response Planning
- Incident Remediation and Recovery Planning
- Disaster Recovery Plans
- Cross-Industry Incident Notification Services and Programs

3.8.6.3 Independent Solution Vendor Application of SDL

Pushing the concern about security outside the organization requires third parties to acknowledge its importance to the utility. Independent software providers should have as their goal the implementation of repeatable processes that reliably deliver measurably improved security. As such, software vendors must transition to a more stringent software development process that focuses, to a greater extent, on security by:
- minimizing the number of security vulnerabilities extant in the design, coding, and documentation
- detecting and removing vulnerabilities as early in the development lifecycle as possible
- mitigating the effects of an exploit of an unknown vulnerability


The need for this process is greatest for enterprise and consumer software solutions that process inputs received from the Internet, control critical systems likely to be attacked, or process personally identifiable information.

3.9 European Union Security and Smart Ecosystem Standards

The European Commission has mandated that the European Standards Organisations issue standards for the smart grid in Europe. The EC set up the Smart Grids Task Force in 2009 to develop policy and regulatory directions for smart grid deployment under the Third Energy Package, a gas and electricity market liberalization for the European Union. The Commission has issued recommendations for the major components of the smart grid/smart energy ecosystem, including:
- Preparations for the roll-out of smart metering systems
- Sets of common functional requirements for smart meters
- Standards for smart grids
- Mandates for electric vehicles
- The adoption of communication standards for smart grids

The following sections briefly describe some characteristics of the above mandates and standards.

3.9.1 EU Smart Metering Roll-out Preparations

The following recommendations apply to utility companies from 2012/148/EU: Commission Recommendation of 9 March 2012 on preparations for the roll-out of smart metering systems.

For the metering operator:

(c) Allow remote reading of meters by the operator. This functionality relates to the supply side (metering operators). There is a broad consensus that this is a key functionality.

(d) Provide two-way communication between the smart metering system and external networks for maintenance and control of the metering system. This functionality relates to metering. There is a broad consensus that this is a key functionality.

(e) Allow readings to be taken frequently enough for the information to be used for network planning. This functionality relates to both the demand side and the supply side.

For commercial aspects of energy supply:

(f) Support advanced tariff systems. This functionality relates to both the demand side and the supply side. Smart metering systems should include advanced tariff structures, time-of-use registers, and remote tariff control. This should help consumers and network operators to achieve energy efficiencies and save costs by reducing the peaks in energy demand. This functionality, together with the functionalities referred to in points (a) and (b), is a key driving force for empowering the consumer and for improving the energy efficiency of the supply system. It is strongly recommended that the smart metering system allow automatic transfer of information about advanced tariff options to the final customers, e.g., via the standardised interface mentioned under (a).

(g) Allow remote on/off control of the supply and/or flow or power limitation. This functionality relates to both the demand side and the supply side. It provides additional protection for the consumer by allowing grading in the limitations. It speeds up processes such as moving home: the old supply can be disconnected and the new supply connected quickly and simply. It is needed for handling technical grid emergencies. It may, however, introduce additional security risks, which need to be minimised.

For security and data protection:

(h) Provide secure data communications. This functionality relates to both the demand side and the supply side. High levels of security are essential for all communications between the meter and the operator. This applies both to direct communications with the meter and to any messages passed via the meter to or from any appliances or controls on the consumer's premises. For local communications within the consumer's premises, both privacy and data protection are required.

(i) Fraud prevention and detection. This functionality relates to the supply side: security and safety in the case of access. The strong consensus shows the importance attached to this functionality. It is necessary to protect the consumer, for example from hacking access, and not just for fraud prevention.

For distributed generation:

(j) Provide import/export and reactive metering. This functionality relates to both the demand side and the supply side. Most countries are providing the functionalities necessary to allow renewable and local micro-generation, thus future-proofing meter installation. It is recommended that this function be installed by default and activated/disabled in accordance with the wishes and needs of the consumer.

3.9.2 Common Functional Requirements for Smart Meters

Based on analysis of 11 cost-benefit assessments, the European Commission established the set of common functional requirements of the smart meter, with the intent of achieving cost-efficiencies for member states, the metering industry, utilities, and regulators for their investments, roll-outs, and reference definitions. The following chart is a summary of findings of DG ENER and DG INFSO towards the Digital Agenda, Action 73.


Figure 43 shows the European Union's expectations for how smart meters should function.

3.9.3 Standards for Smart Grids

The CEN/CENELEC/ETSI Joint Working Group (JWG) on standards for smart grids produced a report addressing standards for smart grids, including the following:
- Final Report of the CEN/CENELEC/ETSI Joint Working Group on standards for smart grids
- Recommendations for smart grid standardization in Europe

On 1 March 2011, the European Commission issued Mandate M/490, requesting the three European Standards Organisations (ESOs), CEN, CENELEC and ETSI, to "develop a framework to enable European Standardisation Organisations to perform continuous standard enhancement and development in the field of smart grids, while maintaining transverse consistency and promote continuous innovation."


3.9.4 Mandates for Electric Vehicles

CEN and CENELEC, as European Standards Organisations, have sought to standardize all aspects of electro-mobility. Their task has involved finding common ground for different standards communities with different perspectives (those dealing with vehicles, those responsible for the electrical system and its components, even the ICT community); the international work already done necessarily has to accommodate different rules and requirements in different regions of the globe; at the European level there are (still) national regulations that make the application of single solutions difficult; some of the requirements are still evolving; and some technical solutions are still not fully mature. They created a short-term focus group to prepare an advisory report subject to approval by the CEN and CENELEC Technical Boards, which will decide on the implementation of the various recommendations. The top-level recommendations from their report, Standardization for road vehicles and associated infrastructure, follow:

Connectors and charging systems. In order to facilitate the adoption of electro-mobility, charging should be as convenient and as low cost as possible while providing an acceptable level of safety. This inherently implies the implementation of simple, uniform charging systems both for the home and for publicly accessible charging places.

Smart charging. Smart charging is seen as a necessity to optimise the use of the electrical grid for efficient EV charging, while maximizing the use of renewable energy. It is considered that the customer should be encouraged to charge at the best possible moment in terms of available energy, by providing a smart charging mechanism based on information supplied by the electric grid and on the physical environment (Energy Management System, EMS). This would help to avoid the need for extensive new grid investment in Europe. The active interaction of the vehicle with the grid could even be beneficial for global energy usage. New standardization activities are recommended, especially in relation to the requirements for storage, labeling, and battery switching stations. The possibility of standard battery designs and the re-use of batteries in alternative applications may be an interesting possibility that could be explored; this requires additional standardization work.

Electromagnetic compatibility (EMC). EMC standards are needed to ensure that electrical and radio apparatus does not cause interference to other such equipment, and that it is adequately protected against such interference. This is a heavily regulated area, the relevant EU Directives being "new approach" Directives where European Standards are agreed to meet the essential legal requirements. However, the various standards that are available in this domain were not usually designed with mass-market use of equipment for electric vehicle charging in mind, or for the high concentrations of EVs in one place that may result from such a deployment. This may imply the need for a number of detailed amendments to the standards portfolio. The present Directive 72/245/EEC defines provisions for the whole vehicle and for the electrical and electronic sub-assemblies, but does not cover the connection of an EV to the grid. UN-ECE Regulation 10 is presently under revision and will cover these same requirements and take account of all the EMC aspects for the EV.

Regulation. Our report contains some complementary considerations regarding the interface between standardization and regulation at the European level, for instance the Low Voltage Directive, as well as European vehicle type approval Directives and the UN-ECE regulations relating to vehicles.

3.9.5 Requirements for Communications Standards in Smart Metering Systems

The Smart Metering Coordination Group issued six key functionalities for communications among smart metering systems in their May 2011 technical report, Functional Reference Architecture for Communications in Smart Metering Systems:

- Remote reading of metrological register(s) and provision to designated market organisations
- Two-way communication between the metering system and designated market organisation(s)
- To support advanced tariffing and payment systems
- To allow remote disablement and enablement of supply and flow/power limitation
- To provide secure communication enabling the smart meter to export metrological data for display and potential analysis to the end consumer or a third party designated by the end consumer
- To provide information via web portal/gateway to an in-home/building display or auxiliary equipment


4.0 SERA Alignment Guidelines and Capability Maturity Model

This chapter is new to SERA for its second edition, in response to the SERA Advisory Council's input on the need to identify and offer guidelines for utility alignment to the Microsoft Smart Energy Reference Architecture (SERA). Whereas the primary perspective of the document was for the utility enterprise, we recognized that partner solutions could also benefit from a review of such guidelines, to understand how to become a first-order citizen under the SERA enterprise architecture guidelines. As such, the intent of this chapter is to provide a set of recommendations for building an architecture that is consistent with SERA utilizing the Microsoft .NET platform, servers, services, system integrators, and third-party independent software vendors (ISVs), and guidance for assessing the alignment with SERA of an entire IT/OT environment or any single component. Our goals are:

- to provide utilities with a starting point for establishing the criteria needed to validate solutions and to baseline the utility architecture
- to provide technical and information architecture guidance to partners for improved speed, security, cost, and flexibility of delivery of Microsoft SERA-based solutions
- to demonstrate that the value of the sum is greater than the individual parts

Discussion of SERA alignment must first entail setting the context for the assessment. SERA seeks to orchestrate the complete Smart Energy value chain. As such, SERA alignment must be considered from the perspective of:

- Independent Software Vendor (ISV): The SERA alignment for an ISV revolves largely around the notion that the ISV solution(s) must be good first-order citizens in the utility enterprise architecture.
- Systems Integrator (SI) partner: The SI alignment considers the key tenets of the information and technical architecture so that resultant deployments and future work provide the SERA benefits to the utility.
- Utility stakeholder: From the utility perspective, SERA provides the roadmap and strategy for internal as well as purchased technology and integration projects.

The following assessment tools can be valuable, especially when baselining the initial state of a utility enterprise information and technical architecture. Combined, the assessment tools enable the prioritization of the development, integration, organizational, training, governance, or management actions necessary for a utility to advance on its smart energy journey.

The remainder of this chapter is broken into five sections with the following focus:

- Section 4.1, SERA Alignment Considerations for the Utility, addresses the technical and information architecture alignment and the associated assessment criteria from the perspective of a utility. These assessment criteria are also reflected in the SERA customer capability maturity section and the maturity assessment sliders.
- Section 4.2, Microsoft SERA Capability Maturity Model, lays out the Microsoft SERA capability maturity model for utility assessments against SERA. The process and the sliders used for assessment are described, as well as the process for turning the slider assessment results into actionable priorities with real business and technology capability improvements.
- Section 4.3, SERA ISV and SI Partner Alignment, addresses the technical and information architecture alignment criteria for ISV and SI partners. The key perspective is how well partner solutions behave as good first-order citizens in the case of ISVs, and how well the core integration precepts are reflected in the case of SIs.
- Section 4.5, Microsoft Infrastructure Optimization Models, describes a process Microsoft teams can use to take a detailed view of a customer's maturity from the perspective of core infrastructure optimization and business productivity infrastructure optimization. These two models can complement or act as workshop input to a SERA assessment.
- Section 4.6, Microsoft Consulting Services Enterprise Architecture and Strategy, describes the Microsoft Enterprise Strategy Program, which focuses on business impact and value by optimizing the use of technology to accelerate customers toward business goals, under the direction of a Microsoft enterprise architect.

4.1 SERA Alignment Considerations for the Utility

The following sections describe SERA architecture alignment considerations from a utility company perspective. Application of the assessment criteria in the sections that follow may require some judgment. The sections below are also considered to be building blocks of efficient and effective application of SERA guidelines. The sliders that appear in section 4.2 take the perspective of a utility business context, but it is important that every entity across the utility supply chain or value chain also align with SERA guidelines. Nowhere is this more important than in the SERA security capability maturity model, where every entity in the supply chain must be factored in to an overall SERA security alignment assessment. Overall utility security can only be considered as good as the security level of the weakest component or solution. The following sections make recommendations based on SERA alignment considerations, followed by an assessment of the current state of technical capabilities across the key tenets as appropriate. The SERA alignment considerations include:

- User Experience and Information Composition. Assess user experiences that drive productivity, customer loyalty, and business growth.
- Service-Oriented Architecture and Business Process Management. Guidance on how to establish and manage flexible, repeatable, and connected business and IT processes within a service-oriented architecture.
- Business Intelligence. Infrastructure that ties information together across the utility enterprise to remove barriers to finding and using data, which helps people throughout the organization to collaborate and make informed decisions.
- Data Integration and Enterprise-wide Data Mapping/Flow. Describes infrastructure strategies to effectively expose ever-increasing amounts of data from disparate sources.
- Master Data and Enterprise-wide Modeling. Adoption of best practices for unified modeling across the corporation, resulting in streamlined storage and single-entry, multiple-use data leverage. Adoption of an enterprise-wide data model.
- Enterprise-wide Eventing and Complex Event Processing (CEP). Ability to ingest high volumes of data and efficiently segregate important events, such as power quality or outage notifications, from steady-state data streams to trigger actions and workflows.
- Security. Guidance on security and on governance, risk and compliance, including the full software Secure Development Lifecycle (SDL) and leveraging the SDL for all deployed software.
- Governance, Risk and Compliance. Well-documented and socialized policies for standards and practices, based on risk assessment techniques.
- SERA Future Guidelines and Software + Services. Adherence to first principles and standards in implementation to enable agile response to changing or expanding requirements, and a disciplined approach to making new developments location agnostic.

4.1.1 User Experience and Information Composition

The SERA-aligned utility has a strategy for composition of information from a variety of sources across the enterprise in an easy and secure fashion, with the following characteristics:

- The composition needs to align with the role-based security described below.
- The most straightforward SERA-compliant approach is to use Microsoft SharePoint as a container and implement Web Parts from information sources that can be pulled together on all primary form factors, including PC, browser, and phone (a minimal web part sketch follows this section).
- In the SharePoint container approach, solution providers and subsystem software solutions are able to presence-enable solution clients, such as identifying subject matter experts and establishing easy social interaction.
- Solution providers are actively involved with implementing the Microsoft UX tools, including Windows Presentation Foundation, Microsoft Silverlight, Microsoft client security, and SharePoint. Windows Presentation Foundation is more applicable to control center application UX, whereas SharePoint can be customer-facing or enterprise UX.

The key element of this UX SERA requirement is that information composition is enabled by partner solutions and that a utility UX strategy is able to quickly compose information from a variety of sources for an intuitive synthetic perspective.

Assessment: User experience will be critical to success in terms of providing familiar yet rich interfaces across roles and composing information to the correct level of fidelity. A focus group should be used to leverage a customer utility's own business strategists or innovation team. Vision demonstrators and Microsoft Expression Blend + SketchFlow can illustrate realistic scenarios and explore key business issues and ideas for new services.
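To make the container approach concrete, the following is a minimal sketch of an ASP.NET web part of the kind a SharePoint page can host; the outage-summary scenario and placeholder text are hypothetical, and a real part would call a role-trimmed service:

```csharp
// A minimal sketch of an ASP.NET web part that a SharePoint page can host.
// The outage-summary scenario and the placeholder text are hypothetical.
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class OutageSummaryWebPart : WebPart
{
    protected override void CreateChildControls()
    {
        // Compose one piece of the page; other web parts surface other
        // subsystems alongside this one in the same SharePoint container.
        var summary = new Label
        {
            Text = "Open outage events: (populated from an operational service)"
        };
        Controls.Add(summary);
    }
}
```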


Figure 44 shows the building blocks of SERA.

4.1.2 Service-Oriented Architecture and Business Process Management

SERA recommends that utility business processes are captured and documented explicitly and that the appropriate business processes are integrated and automated in a pragmatic way. An event-driven service-oriented architecture is established and the enterprise service bus pattern is enabled, consistent with Microsoft ESB 2.1 guidance for BizTalk, for processes that involve information transformation, workflows, or human-centric steps, or that are long running. Business process implementations are able to integrate workflows and processes with actions in virtually any part of the enterprise. The processes are composable on a rapid and graphical basis.
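As a minimal illustration of service-enabling a legacy function, the sketch below defines a WCF service contract so that callers are isolated from the underlying system; the contract, namespace, and operation are hypothetical examples, not prescribed by SERA:

```csharp
// A hedged sketch of service-enabling a legacy function behind a WCF
// contract so that callers are isolated from the underlying system.
using System.ServiceModel;

[ServiceContract(Namespace = "http://example.org/utility/metering")]
public interface IMeterReadingService
{
    [OperationContract]
    double GetLatestReadingKwh(string meterId);
}

public class MeterReadingService : IMeterReadingService
{
    public double GetLatestReadingKwh(string meterId)
    {
        // A real implementation would delegate to the legacy system here;
        // a constant stands in so the sketch stays self-contained.
        return 0.0;
    }
}
```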


Assessments: Implementing a SERA-ready SOA architecture across all aspects of the organization will require a concerted effort and roadmap to move from the standard to the dynamic state. It is recommended that customer utilities take a staged approach to reach maturity and start with the following subsets of critical path objectives:

Service enablement of key legacy functions (ERP and other LOB functions):
- Business and IT identify and prioritize legacy functions
- Isolate applications from service system changes
- Promote easier migration, replacement, or retirement
- Adopt a master data management strategy
- Microsoft SQL Server and service-oriented integration

Establishment of a security model for services:
- Encryption service with key management
- Establish a demilitarized zone (DMZ) environment
- Securing services access, including checking for valid messaging and passing account information
- Data-at-rest protections within the SOA
- Policy enforcement, tracking and logging

Figure 45: Basic SOA Capability and Roadmap


4.1.3 Business Intelligence

SERA requires that an approach be established to make Business Intelligence reports available to the average user as well as to the power user who can compose self-service BI. The goal becomes achieving the availability of Business Intelligence broadly across the entire organization. The average user prefers standardized reports with key performance indicators, while the power-user business analyst seeks the data sources (using Microsoft PowerPivot) for more in-depth analysis capabilities. The information access and exposure must be consistent with the secure role-based information access that is discussed in the enterprise-wide security section below. The power users are sophisticated utility personnel who know where key data resides in the enterprise. For these users, Microsoft PowerPivot is an ideal technology enabling self-service information access. Likewise, surfacing data from a variety of sources via SharePoint and leveraging the capabilities of the familiar Microsoft Office productivity tools is a powerful way to make BI available to all people in the enterprise who need it.
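For the standardized-report side of this spectrum, a report host typically queries a SQL Server Analysis Services cube directly. The following is a minimal sketch using ADOMD.NET; the server, catalog, cube, and measure names are hypothetical:

```csharp
// Illustrative only: querying a SQL Server Analysis Services cube with
// ADOMD.NET, the kind of access a standardized KPI report might use.
using System;
using Microsoft.AnalysisServices.AdomdClient;

class PeakDemandReport
{
    static void Main()
    {
        using (var connection = new AdomdConnection(
            "Data Source=localhost;Catalog=UtilityBI"))
        {
            connection.Open();
            using (var command = new AdomdCommand(
                "SELECT [Measures].[Peak Demand] ON COLUMNS FROM [GridOps]",
                connection))
            using (var reader = command.ExecuteReader())
            {
                // Print each returned cell; a report host would bind instead.
                while (reader.Read())
                    Console.WriteLine(reader.GetValue(0));
            }
        }
    }
}
```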

Assessments: Future smart grid solutions will require new tools with familiar interfaces to expose line-of-business and dynamic real-time systems. For example, future consideration should be given to evaluating integration of Microsoft SharePoint and Microsoft BizTalk with customer ERP to expose reports consistently, enlisting rich Silverlight, Office, and SharePoint BI tools to include data while providing line of sight to workflows across systems to improve overall performance management. Utilities will move from basic/standardized to rationalized (or dynamic), enabling performance management, reporting and analysis, and data warehousing to establish a common overview of the smart grid business processes. The enterprise also provides qualitative and quantitative benchmarks to measure and track smart grid solution performance.


Figure 46 - SERA Architectural Building Blocks: Business Intelligence and Semantics

4.1.4 Data Integration and Enterprise-wide Data Mapping/Flow

SERA recommends full enterprise-wide data mapping showing data sources, data mastership, data provider sources to consumers, and the data provider approach. The following characteristics should be evident:

- The data mapping should reveal only one source of the data and should not reflect recycling of the data (which would call the accuracy into question).
- Consumers should be identified and data restrictions based clearly upon roles. The enforcement should be called out as well.
- The utility data integration philosophy should be defined.
- The enterprise-wide data aggregation strategy needs to be defined. This should include the data store for time series data and for aggregation into a data store with other operational data. This integration will typically include a security consideration, and access rights to operational data should be clearly identified.
- Data aggregation to support business intelligence and utility analytics should be defined. This will typically include an enterprise-wide utility industry data model. The CIM is the most applicable to operational systems (a data transfer sketch follows this list).
- Data aggregation from all data sources should be addressed, not just operational systems. Copies of key operational data, the timeliness requirements, and access to other enterprise data such as ERP/OPEX and CAPEX cost figures should be defined.
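As a small illustration of what a CIM-aligned feed into an operational data store might carry, the data transfer type below loosely follows IEC 61970 naming (mRID, analog value); it is a hypothetical shape, not an official CIM profile:

```csharp
// A hypothetical data transfer type for feeding an operational data store.
// Property names loosely follow IEC 61970 CIM naming; this is an
// illustrative shape, not an official CIM profile.
using System;
using System.Runtime.Serialization;

[DataContract(Namespace = "http://example.org/cim/measurement")]
public class AnalogValueDto
{
    [DataMember]
    public string MRid { get; set; }        // CIM master resource identifier

    [DataMember]
    public double Value { get; set; }       // the measured quantity

    [DataMember]
    public DateTime TimeStamp { get; set; } // acquisition time, UTC
}
```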

Assessments: Initial assessment for data integration includes review of the operational system framework, whether CIM or MultiSpeak oriented, as well as the strategy for the rest of the utility business areas. An assessment of CIM includes determining whether a utility has experience with SPARX Enterprise Architect for CIM model management and CIMTool, and assessment of any initial CIM-based BizTalk cases. Establishment of normalized and de-normalized CIM-based information stores for operational data to service the enterprise, and the enterprise-wide PI time series data architecture, should be considered. The information integration with this temporal strategy needs to be established. Beyond a CIM operational data store is the utility approach for enterprise-wide data aggregation and the underlying data model to support the data aggregation. An application inventory should be compiled and information flows graphically identified. The strategy for marrying operational and IT system information should be explored, as smart grid scenarios drive not only transcending the OT/IT boundary, but also additional aggregation, analytics, and reporting requirements.

4.1.5 Master Data & Enterprise-wide Modeling

SERA recommends that utility master data is:
- defined and consolidated.
- federated, with a single source established.
- removed from all but the consolidated store, as appropriate.

SERA recommends that the utility has:
- moved to a model-driven architecture.
- federated the models and integrated them across the enterprise.
- established model management and model federation, so that mapping and versioning are all in place enterprise-wide.
- aligned operations models to IEC 61968 and IEC 61970. Utility schemas are defined and federated, just as data flows, mapping, and mastership are defined for data sources.
- established an enterprise-wide data model for aggregation of operational, financial, and customer data.
- considered the federation of models with other participants in the utility supply chain (e.g., federation of transmission systems models with markets systems, EMS providers, and other trusted partners).


The following data types and master data are modeled for integration, transformation and management:

- Operational Data: Data representing electrical behavior of power system equipment and the grid. This includes voltage and current phasors, real and reactive power flows, demand response capacity, DER capacity, power flows, and forecasts for any of the above.
- Non-Operational Data: Data representing condition or behavior of assets. This includes power quality and reliability data, asset stress level information, and telemetry from instrumentation not directly associated with grid power delivery.
- Meter Data: Usage data from meters, including total usage, average demand, peak demand and time-of-day or peak demand values. This does not include voltages, power flows, power factor, or PQ data.
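The consolidation recommendation above can be pictured with a toy example: collapsing duplicate meter records from several source systems into a single golden record keyed by mRID. The MeterRecord type and the "latest wins" rule are hypothetical simplifications of a real master data management policy:

```csharp
// A toy illustration of master data consolidation keyed by mRID. The
// MeterRecord type and the "latest wins" rule are hypothetical.
using System;
using System.Collections.Generic;
using System.Linq;

public class MeterRecord
{
    public string MRid;         // CIM-style master resource identifier
    public string SourceSystem; // originating system (e.g., CIS, GIS, AMI)
    public DateTime UpdatedAt;  // last modification time
}

public static class MasterDataMerge
{
    public static List<MeterRecord> Consolidate(IEnumerable<MeterRecord> all)
    {
        // Keep the most recently updated record per mRID as the master copy.
        return all.GroupBy(r => r.MRid)
                  .Select(g => g.OrderByDescending(r => r.UpdatedAt).First())
                  .ToList();
    }
}
```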

Assessments: Opportunities should be explored for a CIM master data management solution, enabling smart grid software vendors across the customer utility landscape. A SERA-aligned partner team should be assembled to define a working use case and further articulate an operationally sound solution for enterprise-wide CIM.

4.1.6 Enterprise-wide Eventing and Complex Event Processing (CEP)

SERA recommends that the utility has made the transition from collecting and passing large blocks of data around the network, with each subsequent consumer drilling on the data, to the paradigm where:

- events are detected as close to the edge as possible.
- events are passed around the enterprise to trigger workflows, business processes, and alerts as appropriate.

Complex event processing is used to analyze streams of utility temporal data in conjunction with system state to determine relevant complex events, and the enterprise-wide event processing strategy provides for propagating the events to workflows and to trigger business processes throughout the enterprise. The complex event processing and the workflows and business processes are able to transcend the historical OT/IT boundary.
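The pattern can be sketched with Reactive Extensions (Rx), which offers a LINQ surface similar in spirit to Microsoft StreamInsight (the StreamInsight API itself is not used here). The 115 V threshold and the simulated feed are hypothetical:

```csharp
// A minimal sketch of edge event detection in the CEP style, written with
// Reactive Extensions (Rx) rather than the StreamInsight API.
using System;
using System.Linq;
using System.Reactive.Linq;

class VoltageSagDetector
{
    static void Main()
    {
        var random = new Random();

        // Simulated stream of voltage readings, one every 100 ms.
        IObservable<double> readings =
            Observable.Interval(TimeSpan.FromMilliseconds(100))
                      .Select(_ => 120.0 + (random.NextDouble() - 0.5) * 20.0);

        // Surface a complex event when the one-second average sags below
        // 115 V, rather than shipping every raw reading downstream.
        readings.Buffer(TimeSpan.FromSeconds(1))
                .Where(window => window.Count > 0 && window.Average() < 115.0)
                .Subscribe(window => Console.WriteLine(
                    "Voltage sag event: average {0:F1} V", window.Average()));

        Console.ReadLine(); // keep the subscription alive
    }
}
```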

Assessments: The extent to which current information is characterized and aggregated across systems, including any progress at creation of an operational data store or warehouse, needs to be assessed. The opportunity to aggregate and expose in-flight event data at low latency alongside operational and LOB information systems will drive smart grid value for the information worker, the analyst, and loss model optimization. Use cases should be explicitly considered to represent, characterize and integrate siloed information such as:

- Asset-based monitoring and aggregation of machine-born data.
- Sensor-based observation of power system equipment activities and output.
- Event and alert generation the moment something goes wrong, close to the edge.
- Proactive, condition-based maintenance for key equipment.
- Low-latency analysis of aggregated data alongside near-real-time operational data to develop new models for assessing outages or losses with real-time KPIs.

Figure 47: SERA Architectural Building Blocks: business process execution.

4.1.7 Security

SERA recommends that a clear enterprise-wide security strategy is in place and is consistently applied in all software in all parts of the company. For example, operations technology (OT) inherits security policy from IT, just as any other corporate computing asset does. Security is implemented in a manner consistent with the Microsoft SDL for the entire lifecycle of all software deployed. This includes:

- Identification of assets;
- Management controls;
- Personnel training;
- Detection and prevention measures; and
- A response plan.

There is a direct mapping to the Microsoft SDL and the overall Microsoft security strategy. The SDL specifies that training, threat modeling, secure development practices, source control, and response are a part of every software solution deployed. Whether or not the utility is subject to and complies with NERC CIP criteria and the new NIST cybersecurity guidelines, the key elements of the security guides are reflected at the utility, and because the enterprise is only as secure as the unmitigated weakest link, an enterprise-wide security audit has been conducted to identify threats and potential holes, and these issues are on a mitigation plan roadmap. The mitigation plan roadmap can move the enterprise to a more secure state over time and should be fully aligned with the risk-oriented security management program prioritization. The detailed criteria for these security guidelines, per the NISTIR 7628 Guidelines for Smart Grid Cyber Security v1.0, August 2010, are as follows:

- Ongoing secure development education requirements for all developers involved in the smart grid information system;
- Specification of a minimum standard for security;
- Specification of a minimum standard for privacy;
- Creation of a threat model for a smart grid information system;
- Updating of product specifications to include mitigations for threats discovered during threat modeling;
- Use of secure coding practices to reduce common security errors;
- Testing to validate the effectiveness of secure coding practices;
- Performance of a final security audit prior to authorization to operate to confirm adherence to security requirements;
- Creation of a documented and tested security response plan in the event a vulnerability is discovered;
- Creation of a documented and tested privacy response plan in the event a vulnerability is discovered; and
- Performance of a root cause analysis to understand the cause of identified vulnerabilities.

It is worth noting that supply chain security has recently come into focus as a critical aspect of the IT/OT ecosystem. Utilities procure components for systems from a wide variety of vendors and sources, and even partners and ISVs that are SERA-aligned may obtain components from external sources. Consistent application of security principles to supply chain processes can help to ensure the holistic security of the environment.

Assessments: A complete security assessment against the SDL, and the security considerations SERA suggests beyond the SDL, is required to respond to these criteria. Utilities may wish to consider a detailed review against the SDL given the heightened focus on smart grid security, the addition of new scenarios resulting from the smart grid use cases, and the issuance of the NIST cybersecurity guidelines. There are three stages of the software lifecycle that need to be considered in the security assessment: development, deployment, and support. The Microsoft SDL focuses on the development, validation, verification, and maintenance of software aligned with the goal of secure by design, secure by development, and secure by deployment. Utilities should ensure solution providers have a policy to follow the Microsoft SDL process. In implementation, utilities, solution providers, and system integrators should work together, aligned around the security best practices discussed in the security sections of SERA, to achieve a well-integrated and holistic cybersecurity result. To the extent possible, solution providers and system integrators should inherit the policies and practices of the utility for a complementary and effective cybersecurity program. Software support also necessitates a consistent software update and patch management process that is automated and aligned with the utility program. This support program should include not only the actual process of software upgrade, but also the testing to ensure that solution provider software is compatible with system patches, and the process transparency to enable a holistic utility-wide tracking and reporting process.

4.1.8 Governance, Risk and Compliance

SERA recommends that the utility articulates and demonstrates a clear approach for the management of computing assets. This strategy needs to be consistent with the tenets of the enterprise-wide security criteria above. In addition:

- Regardless of whether the utility is bound by and conforms to NERC CIP, the architecture must support tracking, loading, and managing all elements of enterprise-wide software and associated hardware assets.
- Software updates, security patches, ISV solution updates, and all associated configuration details are tracked and reportable.
- Automation tools are employed to manage the process of updating all boxes so that the potential for out-of-rev packages is limited.


The appropriate Microsoft products here include Microsoft System Center, Microsoft Operations Manager, and the Microsoft Optimized Desktop solution set. ISV solutions are selected where the ISV has an evergreen strategy so that the software runs on the current version of the platform, and so that the supplier continuously tests and supports any updates and patches issued by the underlying platform provider consistent with the Microsoft SDL.

Assessments: Investigation should be made into whether the utility has automation capabilities in place for software configuration management, patch management, and deployment. Additional tracking, reporting, and controls are necessary to fully align with NERC CIP if this is to be used as a key voluntary target for non-US utilities, and an enterprise-wide patch, deployment, upgrade, tracking and reporting solution may not yet be in place. Serious consideration should be given to Microsoft Opalis for enabling these capabilities.

4.1.9 SERA Futures Guidelines and Software + Services

SERA enables an environment where expanding and changing requirements can be met with minimal disruption. New compliant applications should be deployable in a straightforward way, provided that they adhere to first principles. Customizations to ISV code and highly custom systems integration should be avoided whenever possible. Wrapping core components with tailored service implementations allows utilities to incorporate solution updates, new versions, new services, and patches with minimal disruption. The configuration management requirements addressed in the section on governance should embed the basic concepts of future-proofing. Although not necessarily central to an initial implementation, utilities should give consideration to new initiatives for the potential delivery of capabilities via cloud services, whether on-premise, private cloud, or public cloud. Transition to the cloud will be simplified by writing software in .NET 4.0 or later. While this is not explicitly a customer utility requirement, location-agnostic development is a key tenet of SERA. It will likely be a core consideration for most utilities to provide flexibility in deployment and provisioning of resources, to minimize cost if deployment plans evolve, and to capitalize on industry trends bringing TCO improvements through cloud-sourced models. Ensuring the protection of investments and enabling the future agility of solutions will depend upon minimizing the difficulty of transitioning to the cloud. SERA recommends that transition to leveraging cloud-delivered services will be an ever-increasing part of the next-generation agile utility. To this end, location-agnostic development should be implemented to the maximum extent possible by:


- Employing the service bus for communications and security functionality, providing a consistent developer experience regardless of the deployment target.
- Implementing storage models compatible both with Microsoft SQL Server on premise and Windows Azure SQL Database (see the sketch after this list).
- Leveraging mainstream web technologies such as SharePoint and Silverlight where possible, so UX can be exposed on a variety of devices and form factors, as well as to users with PCs and to desk-less workers.
- Implementing Microsoft claims-based security and the Windows Azure Access Control service based upon Active Directory Federation Services to ensure secure implementations and easy migration from on premise to cloud as appropriate.
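As a minimal sketch of the storage-model point, the same ADO.NET code can run against on-premise SQL Server or Windows Azure SQL Database, with only the configured connection string differing; the "UtilityDb" name and Meter table are hypothetical:

```csharp
// Location-agnostic data access: the code path is identical whether the
// connection string points at a local server or at Azure SQL Database.
using System.Configuration;
using System.Data.SqlClient;

public static class LocationAgnosticStore
{
    public static int CountMeters()
    {
        // "UtilityDb" may name an on-premise server or a
        // *.database.windows.net instance; the code does not change.
        string connectionString = ConfigurationManager
            .ConnectionStrings["UtilityDb"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Meter", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```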

The SERA guidelines should be leveraged to protect utilities' investments through alignment with Windows Azure services, whether eventually deployed via an on-premise Azure Appliance, a private cloud, or Azure services in a public cloud.

Assessments: Assessing a particular environment for its future-proof capabilities is difficult for the obvious reason: achieving future benefit can only be proven in the future. However, periodic review of practices and the architecture itself should be an opportunity to spot inconsistencies in the current implementation that could lead to bottlenecks in the future. Additionally, migration to the cloud is a new consideration for all utilities. Most utility companies will be at the very start of the cloud software-plus-services transition journey.


4.2 Microsoft SERA Capability Maturity Model

The following sections provide an assessment approach for utilities which encompasses the underlying SERA alignment criteria, while recognizing varying business models and making provision for tailoring the assessment based upon business function. The technique we suggest utilizes sliders, a set of graphics that present a sliding-scale indicator depicting the extent to which a factor in a particular system or component maps to SERA and meets the guidelines and intent of SERA. Ideally, each trait should be assessed for alignment and presented in slider form. The overall alignment factors and the most relevant corresponding sliders depend upon the type of utility. For example, customer equity may apply to a retailer, but generally is less relevant to a transmission company. Although the sliding scale may be adapted to each implementation or environment, the scales shown in the examples follow this general convention:

- Basic: Generally applies to utility environments or components with little or no recognition of SERA principles.
- Integrating: Describes a utility IT/OT environment or solution component that recognizes SERA principles but lacks concrete progress towards implementation.
- Advanced: Indicates progress in implementation for multiple aspects of SERA guidelines and requirements.
- Dynamic: Describes the system or component where adherence to SERA principles is complete and consistent, and supports an agile response to changing or expanding requirements.
- Pioneering: Refers to utility IT/OT ecosystems or solutions that have furthered the experience and benefits of SERA and driven new uses of technologies in original and beneficial ways.

Progress towards SERA alignment is described by movement along the path from basic to pioneering, with the objective of being, generally, in the dynamic range of each slider. In each case, sample characteristics of capabilities are offered along each continuum and describe roughly the defining characteristics. Overall SERA maturity should be viewed as the aggregate position of the maturity levels in each aspect of alignment. This section presents the utility SERA assessment process and associated sliders that would be used to baseline SERA capability maturity. The first set of sliders (SERA maturity, solution, security, and building block maturity) would apply to all entities. Smart grid domains and SERA business capabilities would apply to utilities that have implemented aspects of smart grid technologies and to vendors supplying products in that domain.

Consumer equity might only apply to a retail utility entity. The grid operations slider might only apply to a distribution or transmission entity.

The SERA Advisory Council, Microsoft Consulting Services, and the Worldwide Power and Utilities team collaborated to identify the SERA CMM approach, its primary assessment sliders, and the characteristics associated with steps along the sliding scales. It is envisaged that utilities will benchmark against the sliders in actual practice, and as new sliders are identified, the set will be updated accordingly.

4.2.1 SERA Maturity Assessment Sliders, Solution Assessment Sliders and SERA Building Blocks

The following assessment sliders are common to any smart energy entity and are used to derive business objectives, technical change, and SERA-enabling building blocks. The sliders assess capabilities in the following areas: user experience and devices; cross-enterprise integration; security; SERA IO maturity; smart grid domains and SERA business capabilities; consumer equity; and generation.

4.2.1.1 User Experience and Devices

Figure 48 - User Experience and Devices Slider


4.2.1.2 Cross-Enterprise Integration

Figure 49 - Cross-Enterprise Integration Slider

4.2.1.3 Security

Figure 50 - Security Slider

4.2.1.4 SERA IO

Figure 51 - SERA IO Maturity Slider


4.2.1.5 Smart Grid Domains and SERA Business Capabilities

Figure 52 - SERA Grid Operations Slider

4.2.1.6 Consumer Equity SERA Assessment

Figure 53 - Consumer Equity SERA Assessment Slider

4.2.1.7 Generation SERA Assessment

The following slider is applicable to Energy Generator, Independent Power Producer, or Distributed Generation entities:

Figure 54 - Generation Assessment Slider


4.2.2 Translating SERA Assessment Slider Results into an Actionable Program

The results that emerge from a SERA assessment of a utility company can be combined with any other assessment results (e.g., CM SGMM or IO Model Assessment) to establish a solid baseline of the utility's current state. From that point, the utility may conduct an analysis that contrasts the current state with the desired state. This gap analysis provides the utility with a set of goals that it can translate into the specific and measurable technical capabilities that it desires. In terms of setting budgets and priorities, the utility may use that gap analysis to establish a roadmap for SERA implementation. Several avenues exist to develop and implement the SERA roadmap, including Microsoft Consulting Services and the Enterprise Strategy and Architecture Services Group, as well as system integrator partners.

4.3 SERA ISV and SI Partner Alignment

While section 4.1 focused on utility alignment and section 4.2 introduced slider models for utilities, this section describes how Independent Software Vendor and System Integrator partner alignment is based upon how well partner solutions enable the key SERA technology and information architecture building blocks. The ISV perspective considers how well an ISV solution behaves as a first-order citizen in a SERA enterprise architecture. The SI perspective considers how well the SI approach takes into consideration the intended ISV building blocks to bring the best short- and long-term value to utility companies.

The building blocks below describe the important technology and information architecture considerations from a partner perspective for SERA alignment. Note that the same categories are applied as in Section 4.1, SERA Alignment Considerations for the Utility; however, in this section the focus is on how a partner and partner software can contribute to or enable the utility alignment. This section is intended to serve as a guide and assessment perspective for partners. Partners may wish to establish their own set of assessment sliders which reflect the maturity of their offering against these criteria, as well as how well their solutions move customers to higher levels of maturity from the perspective of the partner's products and services.

4.3.1 User Experience (UX) and Information Composition (Web 2.0)

Do partner solutions expose information for aggregation external to their systems? This may include operational information that is to be combined with financial information to get a holistic perspective of distribution operations, for example.


Do the partner solutions use declarative programming techniques so that the context or target role for the information presentation establishes what is displayed? Is it easy to aggregate the information via standard display tooling, such as using Microsoft SharePoint as a container into which partner web parts are collected?

4.3.2 Business Intelligence

Partner solutions must expose key performance indicators as well as raw data storage to enable various Business Intelligence strategies. Typically, utilities will endeavor to have a holistic BI strategy, and partner solutions must be able to contribute to and enable the holistic strategy. This implies easy access to operational and IT data that may contribute to BI KPIs or calculations, or even be used in discovery by Big Data tools such as Hadoop.

4.3.3 Business Processes

Partner solutions must be able to participate in, and potentially even initiate, integrated business processes. This implies a service-oriented architecture and application-level or partner-solution-level Web services that can participate in integrated utility business processes. This alignment can be further enabled by easy integration with Microsoft technologies such as BizTalk, the Microsoft BizTalk Enterprise Service Bus, Windows Workflow Foundation in .NET, and UX solutions such as SharePoint.

4.3.4 Data Integration and Enterprise-wide Data Mapping/Flow

Partner solutions expose data so that aggregation into operational data warehouses can be achieved. Data usage and data mapping are enabled within partner solutions to aid the utility company in determining and tracking sources of data and, to the extent possible, uses of data and data flows. Data aggregation is straightforward based upon exposed partner interfaces and associated data models.

4.3.5 Enterprise-wide Eventing & Complex Event Processing (CEP)

Partner solutions can easily initiate triggers/events, as well as consume them, in an enterprise-wide eventing scheme. Events can be published beyond the boundaries of the partner solution for consumption by other subsystems.

4.3.6 Master Data & Enterprise-wide Modeling

Partner solutions recognize the existence of other subsystems which depend upon, and can contribute to, enterprise-wide modeling approaches. The partner solution can exchange models, such as CIM-based operations applications, and can coordinate models to enable information integration and ESB integration. Partner solutions can inherit master data where appropriate, and if the utility has a holistic enterprise-wide modeling strategy, the partner solutions are able to participate in and contribute to the master data modeling strategy.

4.3.7 Security

Partner solutions must recognize a security paradigm that extends well beyond the boundaries of their solution. This includes inheriting security policy from IT, the ability to deprecate system access based upon a single identity update, and the ability to participate in role-based security information access and workflows. Partner solutions enable automated distribution of security updates. Partners can describe their secure software development strategy. SERA asserts that this strategy should be fully aligned with the Microsoft Secure Development Lifecycle and associated standards like the NISTIR 7628 guidelines for smart grid cybersecurity. Partner solutions also can contribute to enterprise-wide security strategies as appropriate, for verification of secure operation of assets within design guidelines and for physical security monitoring, in addition to cybersecurity. Utilities should also conduct independent security assessments of partner solutions, sometimes through third-party security assessment organizations. To the extent possible and appropriate, the results of the testing should be shared with other utilities, either directly, via the SERA Advisory Council, or via the solution partner, to minimize duplication of effort and expense.

4.3.8 Governance, Risk and Compliance

Partner solutions can provide health and monitoring information to enterprise-wide configuration management and reporting. This includes, but is not limited to, compliance monitoring such as NERC CIP compliance and reporting. Ideally, partner solutions can be managed by a holistic enterprise-wide management and automation solution such as Microsoft System Center. A number of capabilities can be enabled, including updates, automation of distribution of software to devices, device health, tracking, and configuration management, and participation in a holistic security patch management strategy. The reporting may be viewed both in the context of System Center and using Microsoft SharePoint dashboard capabilities.

4.4 SERA Futures Guidelines

It is extremely important that partner solutions recognize that technology evolution is a reality in the SERA-enabled utility and that solutions are designed for upgrade and evolution. One of the most critical considerations is the notion of location-agnostic design and development.

4.4.1 Software + Services and Making New Developments Location Agnostic

Communication techniques such as development with the Service Bus in .NET, and the use of resource abstraction, multi-tenancy, and stateless application instances, will enable easy transition to and from the cloud. Further, using a security model that is supported both on premise and in the cloud, such as Active Directory Federation Services, and a configuration management tool such as System Center, which manages resources across on-premise, private cloud, Windows Azure, and hybrid cloud deployments, maximizes the value to the utility.

Assessments: Each trait in the partner or ISV solution should be rigorously assessed using the criteria outlined in the previous section, but from the point of view of the solution provider. Only by determining the alignment of each component or solution, as well as the overall implementation, can complete SERA alignment be assured.

4.5 Microsoft Infrastructure Optimization Models

The Microsoft infrastructure optimization process consists of a structured walkthrough of a number of questions which, taken in aggregate, provide a personalized view of the utility infrastructure baseline:

Figure 55 - There are four identified levels of Infrastructure Optimization Maturity ranging from Basic to Dynamic.

Fully implemented, the Smart Energy Reference Architecture provides guidance to enable achievement of the highest level of optimization: dynamic. However, the reference architecture recognizes that a building block approach must be established to make efficient and meaningful progress toward the stated vision. The Microsoft IO model prescribes that fundamental capabilities be put in place as a foundation before attempting to ascend to the higher levels of dynamic infrastructure. There are two areas of focus: core infrastructure and business productivity infrastructure. The detailed capabilities identified for each of these categories state what needs to be achieved for a utility to progress toward the dynamic optimized state. The reference architecture articulates how solutions and infrastructure should be architected and deployed to enable achievement of the vision.

4.5.1 Core Infrastructure Optimization

Core infrastructure optimization helps organizations better understand and move toward a more secure, well-managed, and dynamic core IT infrastructure that helps reduce overall IT costs, make better use of IT resources, and make IT a strategic asset for the business. This model supports IT professionals in the management of servers, desktops, mobile devices, and applications. Through this model, you can achieve more efficient resource usage to help you eliminate unnecessary cost and complexity, ensure that your business is always up and running, and establish a responsive infrastructure. It can help your organization advance from a costly and inefficient IT environment to an optimal IT infrastructure. Each optimization model includes specific technical capabilities that provide a comprehensive set of solutions to help advance your infrastructure optimization levels. The core infrastructure optimization model defines four capabilities that are necessary to build a more agile IT infrastructure:

Figure 56 - Core infrastructure optimization model

4.5.2 Business Productivity Infrastructure Optimization

Business productivity infrastructure optimization helps streamline the management and control of content, data, and processes across all areas of your business. It provides more transparency with greater accountability and increased security and privacy within and across organizations. It helps simplify how people communicate and share expertise, helps make processes and content management more efficient, and helps improve the quality of business insight, while enabling your IT department to increase responsiveness, have a strategic impact on the business, and amplify the impact of your people. Each infrastructure optimization model includes specific technical capabilities that provide a comprehensive set of solutions to help advance organizations' infrastructure optimization levels. The business productivity infrastructure optimization model includes six capabilities that are necessary to build a more agile, cost-effective IT infrastructure:


Figure 57 - Business productivity infrastructure optimization model

4.6 Microsoft Consulting Services Enterprise Architecture and Strategy


The Microsoft Services Enterprise Strategy Program focuses on business impact and value by optimizing the use of technology to accelerate customers toward business goals. It provides a programmatic approach that enables business transformation, advances technology thought leadership, fosters innovation, and maximizes the value of Microsoft products and services. The program provides customers with an opportunity to focus on key initiatives under the direction of a Microsoft enterprise architect who is an expert in both delivering business-centric architecture services and providing the leadership necessary to deliver business value. The Enterprise Strategy Program can scale up or down to accommodate the needs of the customer and the local market conditions.


5.0 Microsoft Technology Stack

The smart energy ecosystem could be implemented end-to-end using Microsoft technologies in conjunction with products provided by third-party vendors, many of which also build on the Microsoft technology stack.38 Realistically, a homogeneous Microsoft technology deployment is seldom the case, but the use of interoperability standards can effectively overcome any potential challenges. This section seeks to describe how the Microsoft technology stack can be leveraged and discusses its advantages over other alternatives. The section includes:
- Stack Integration Overview
- Capability-based Information Architecture
- Microsoft Cloud Technologies and Services
- Collaboration Services
- Business Software
- Process Integration
- Databases and Data Warehouses
- Business Intelligence
- Complex Event Processing
- Mobility
- Management and Security
- System Center
- End to End Trust
- Platform Virtualization

5.1 Stack Integration Overview

As an overview, Figure 58 provides an integrated view of the overall architecture in an electric utility. The diagram shows:
- The technologies in use within an electric utility
- Applications that are typically provided by third-party vendors
- The integration architecture leveraging a variety of Microsoft products to establish an enterprise-wide strategy

38 A technology stack comprises the layers of components or services that are used to provide a software solution or application (definition from Wikipedia.org).


Figure 58 - Integrated view of overall utility infrastructure

The overall reference architecture establishes a messaging and process integration approach, as well as a data warehousing and storage approach, consistent with the integration overview of Figure 58. The overall architecture shown in Figure 58 reflects several key points:
- A variety of communications approaches will exist for field connections, but transitioning to consistent core communications channels as early as possible limits the impact on the rest of the architecture.
- Data is captured and reduced to information and events as close to the source as possible.
- Consistent integration is enabled via core technologies at the data integration level, at the messaging/process integration level, and at the user experience level.
- Microsoft StreamInsight CEP is used to service events, both for filtering and analysis of complex composite events, and for stream management across the whole architecture.


- The Windows Communication Foundation (WCF) Service Bus is used to transport data both on-premise and for supply to Windows Azure services (a minimal contract sketch follows this list).39
- Shallow integration is leveraged to hide complexity and implementation details at interfaces, enabling rapid development/deployment and agile response to market forces.
- The CIM is leveraged both as the basis of message definitions and as a direct contributor to the schema for data marts or data warehouses.
- Microsoft SQL Server Analysis Services for business intelligence is enabled by Extract, Transform & Load (ETL) integration across the architecture. The approach to data replication is described below.
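To make the messaging layer concrete, the following is a minimal sketch of a WCF service that could carry meter data on the integration bus. The contract and type names (IMeterDataService, MeterReadingMessage) are illustrative assumptions, not part of SERA or the CIM; a real deployment would generate message types from the CIM XML schemas and expose the contract through Service Bus bindings.

```csharp
using System;
using System.ServiceModel;

// Hypothetical CIM-style message; a real deployment would generate message
// types from the IEC CIM XML schemas rather than hand-coding them.
[MessageContract]
public class MeterReadingMessage
{
    [MessageBodyMember] public string MeterId;
    [MessageBodyMember] public DateTime Timestamp;
    [MessageBodyMember] public double KWhDelivered;
}

[ServiceContract]
public interface IMeterDataService
{
    [OperationContract]
    void Submit(MeterReadingMessage reading);
}

public class MeterDataService : IMeterDataService
{
    public void Submit(MeterReadingMessage reading)
    {
        // Stand-in for publication onto the integration bus.
        Console.WriteLine("{0} @ {1:o}: {2} kWh",
            reading.MeterId, reading.Timestamp, reading.KWhDelivered);
    }
}

class HostProgram
{
    static void Main()
    {
        // Self-host over HTTP for the sketch; swapping the binding for a
        // Service Bus relay binding would expose the same contract to Azure.
        using (var host = new ServiceHost(typeof(MeterDataService),
            new Uri("http://localhost:8080/meterdata")))
        {
            host.AddServiceEndpoint(typeof(IMeterDataService),
                new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Because WCF separates contract from binding, the same service could later be projected to Windows Azure without changing the contract itself.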

5.2 Capability-based Information Architecture

Business capabilities for the Microsoft reference architecture scenarios drive the decisions for information architecture and are represented by role, tools, and most importantly, data timeliness requirements. This concept, pictured in Figure 59, is coupled with our notion of service-oriented business intelligence and guides design preferences for integration.

Figure 59 - Capability-based information architecture (Source: Architecture Journal)

39 Juval Lowy of iDesign provides a good discussion of implementing Service Bus connectivity to the Internet in his article Working With The .NET Service Bus.


We will explore when to use the following approaches:
- Event-driven Enterprise Services Bus (and Microsoft Manufacturing Toolkit)
- Data Integration Architecture (EAI and ETL)

5.2.1 Event-driven Enterprise Services Bus and BizTalk Server

While Microsoft .NET and Web services standards can be employed to move application events and time series data, such integration becomes more difficult to manage as the number of points and types of integration grows, spanning both machine/application and people-oriented workflow. Our approach is to leverage a services bus for the publishing and subscription of data and applications in a managed integration. Process events, like out-of-range limit alarms, take many forms that will drive different activities. A functional imperative for many of our solutions is to enable cross-boundary workflow between applications, and people-oriented workflow, depending on the severity or trend of an event. Events and workflows are managed and monitored through business activity monitoring.

In most near real-time event processing scenarios, the Microsoft BizTalk Server ESB solution is sufficiently scalable to process hundreds of small documents each second. For higher performance requirements, the Microsoft StreamInsight complex event processor can be used; StreamInsight has demonstrated performance exceeding 100,000 events per second (a conceptual sketch of the filter-query shape appears below).

The Microsoft Manufacturing Toolkit demonstrates the use of the Microsoft platform, including BizTalk Server, to build a publish/subscribe services-based architecture. The toolkit consists of guidance documentation and demonstrative code samples. A design principle is that time series data will be hosted in a real-time historian database. Calculated data from this source will be made available to an events data warehouse via a Web service through BizTalk Server. Options for event processing include:
- Instantiate an operational workflow
- Notify a subscribing application
- Notify an engineer for immediate attention

The overall reference architecture diagram (Figure 58) shows two CIM integration message buses: one for operations and one for the enterprise. The buses could be implemented as a single message bus so that there is a single version of the truth. However, for security and performance reasons, operations typically implement an isolated messaging architecture. As a result, only messages that specifically pertain to operations are passed (such as asset model updates when equipment is added or moved). These could be reflected as events, passed as model update events, or passed as messages.
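The query shape of an out-of-range limit alarm is easy to illustrate. The sketch below uses plain LINQ over an in-memory sequence rather than the actual StreamInsight API, and all names and thresholds are hypothetical; it shows only the filter-and-project pattern that a CEP query would apply continuously to a live stream.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Conceptual sketch only: plain LINQ over an in-memory list, standing in for
// a continuous StreamInsight query over a live stream. All names and the
// 0.95-1.05 per-unit voltage band are hypothetical.
class SubstationEvent
{
    public string DeviceId;
    public DateTime Timestamp;
    public double VoltagePu; // per-unit voltage
}

class AlarmFilterDemo
{
    static void Main()
    {
        var events = new List<SubstationEvent>
        {
            new SubstationEvent { DeviceId = "XFMR-12", Timestamp = DateTime.UtcNow, VoltagePu = 1.02 },
            new SubstationEvent { DeviceId = "XFMR-07", Timestamp = DateTime.UtcNow, VoltagePu = 1.09 }
        };

        // Out-of-range limit alarm: keep only events outside the allowed band.
        var alarms = from e in events
                     where e.VoltagePu < 0.95 || e.VoltagePu > 1.05
                     select e;

        foreach (var a in alarms)
            Console.WriteLine("ALARM {0}: {1:F2} pu at {2:o}",
                a.DeviceId, a.VoltagePu, a.Timestamp);
    }
}
```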


The rest of the messaging traffic is typically outbound one-way traffic from operations to the rest of the enterprise. This traffic could be handled via events, via the bus connector, or via SQL Server Integration Services (SSIS) Extract, Transform and Load (ETL) integration, depending upon the form and volume of the data. In any of these cases, it is important to establish an information architecture with well-defined data mastership so that a single version of the truth is maintained. Please refer to section 3.7.5 on the need to isolate the network operation center bus from the enterprise bus for security/performance reasons.

The reference architecture diagram reflects a single event bus enterprise wide. This bus is intended to be the single point for event subscribers, precluding the need for subscribers to connect to multiple buses. In practice, implicit in the reference architecture is filtering and message distribution, both at the StreamInsight CEP stream management level and at the operations-to-enterprise CIM message bus interconnect, to ensure appropriate distribution of events. The event handling strategy should be established enterprise wide to ensure subscribers need only connect to one bus and that duplicate events are not distributed. When events are driven from special purpose distributed control system (DCS) applications, they may be passed along to a StreamInsight complex event processing engine.

CIM provides an information model that should be the basis for utility message definitions and for master data. In practice, extensions to the CIM are necessary in most deployments. A key to successful integration will be the master data strategy. Microsoft M modeling, the M model repository, and the new Master Data Services (based on the Stratature product Microsoft acquired, released with SQL Server 2008 R2) can be leveraged to establish an enterprise-wide master data architecture.

5.2.2 Data Integration Architecture

A data integration architecture should be developed in concert with the master data strategy. Data integration is fundamental to enabling deep analysis to drive business intelligence. Realistically, all the potential uses for integrated data in the smart energy ecosystem cannot be foreseen today. For example, analysis of customer behavior in response to variable pricing programs may result in the need for data from the financial system, the meter data management system, and distribution operations. Therefore, the new scenarios necessitate a data integration architecture that will enable these data consolidations, with these considerations:
- Performance, ease of access, and security considerations should drive the tradeoff between replication and an OLAP integration architecture.


- De-normalized cube design can ease reporting and analysis.
- Serious consideration should be given to building a CIM-based unified data model using OLAP access, rather than constructing an all-encompassing data warehouse (DW). The latter poses serious maintenance and update issues.

The reference architecture diagram, Figure 58, shows an ETL pipe from operations data stores to enterprise data stores. In practice, this is typically implemented as one-way data integration for supplying operational data to the rest of the enterprise. The following considerations should be taken into account in the design of the data integration architecture:
- The primary objective is establishing data mastership, whether in the operations system or in other enterprise systems. A good example of such data mastership is the decision that the GIS/asset management system should be the data source for all distribution field equipment model updates.
- Meter data to support billing.
- Operations data, such as breaker switching, to drive condition-based maintenance.
- Non-time series data will be hosted in the native operational data warehouse or accessed through the SQL Server Unified Dimensional Model (UDM). Data replication and creation of additional DWs is not advised; data should instead be accessed from a unified SSAS cube.
- Non-critical event data will be passed along to an events DW for trending, or for modeling with special purpose modeling applications.
- Higher priority events will be passed through Windows Workflow Foundation to Microsoft SharePoint for notification, leveraging dashboard services such as Microsoft Excel, real-time OSIsoft WebParts, or rich client UX such as Silverlight or Windows Presentation Foundation.


Figure 60: Microsoft Smart Energy Ecosystem Reference Architecture

5.3 Microsoft Cloud Technologies and Services

Datacenters and clouds belong in the same discussion as technology offerings available to the utility considering its smart energy ecosystem evolution. Microsoft is a leading innovator in datacenter design and deployment as a strategic competitive advantage for the hosting and delivery of extreme scale services. Our next generation datacenters are already taking the modular datacenter to the utility generation facility, whether that is at a conventional thermal, DER, renewables, or biomass plant, or even a water treatment or waste power plant. To fully appreciate available Microsoft cloud technology services, there must also be a recognition that Microsoft has been at the forefront of the datacenter evolution that is leading to cloud computing.

Datacenters have evolved from their initial staging of collocated equipment racks, to shrinking unit sizes and maximizing the density of servers, storage, and networking within the racks, to the advent of large shipping-containerized racks of pre-installed, very high-density compute, storage, and networking. The large shipping containers were completely pre-wired with equipment installed.

Network connections, power, cooling, and back-up power were supplied with piggy-back containers. The new configuration brought efficiency and scalability as well as air and water economization. However, Microsoft and others realized that additional efficiencies were possible. For example, datacenters had traditionally been cooled to 70 degrees Fahrenheit largely to keep datacenter staff comfortable, not because the equipment required it. This realization became a watershed event in datacenter innovation, ushering in a whole new approach to datacenters: modular in nature, self-contained, operating at wider temperature ranges, and using adiabatic cooling to eliminate the need for a building and expensive air conditioning. Indeed, these IT Pre-Assembled Components (ITPACs) have been deployed into some of the most inhospitable computing environments.

Design improvements for fully enclosed hot aisles, improved adiabatic cooling, server and storage density, and equipment rated for wider operating temperatures have brought power usage effectiveness (PUE, the ratio of total incoming datacenter power to the power that goes into compute cycles) down from 2.0 in the early colocation datacenter designs to 1.05 in the new modular datacenters (given ideal airside conditions).

The fully self-contained modular ITPAC enabled a completely new approach to full datacenter design. While an ITPAC can be considered a datacenter in a box, it is ideally suited for individual business private cloud deployments or distributed datacenters in an overall datacenter network for key functions like content delivery of media close to its consumers. To achieve the compute cluster density of tens of thousands of servers found in today's modern datacenter, multiple modular ITPACs are necessary. The new datacenter approach is to align dozens of ITPACs on both sides of a covered raceway in a secure open-air parking lot. The raceway provides all power, networking, and a small cooling water supply for the self-contained ITPACs.

The very rapid deployment of an ITPAC, combined with the self-contained datacenter packaging, enables a whole new era of rotating the compute load to the location of the least-cost, greenest, most plentiful energy, rather than transmitting and distributing the power to the datacenter. Microsoft is working closely with visionary utilities and its partners to realize this new vision. The Microsoft and partner cloud services comprise:

- Microsoft datacenters and associated networking, known as Microsoft Global Foundation Services
- Microsoft partner hosted datacenters
- Microsoft cloud operating systems, including Windows Server 2012 and Windows Azure
- Microsoft and partner applications delivered as services via a SaaS paradigm


Figure 61: The evolution of Microsoft data centers since 1989.

5.3.1 Microsoft Global Foundation Services (GFS)

Microsoft GFS is responsible for all Microsoft datacenters, including design, construction, and operation, as well as the global datacenter network, all datacenter operations and maintenance, and datacenter security and certifications. Certifications include ISO/IEC 27001:2005, SAS 70, and FISMA. Details of these certifications, as well as SSAE 16/ISAE 3402 and HIPAA, are covered in the Windows Azure Trust Center compliance documentation. GFS is a driver of Microsoft cloud success. In addition to hosting Microsoft operations, the GFS datacenters host more than 200 applications served to more than 20 million businesses and more than 1 billion customers in 76+ markets worldwide. Bing, Xbox Live, Microsoft Office 365, Outlook.com, Hotmail, Messenger, Skype, SkyDrive, SharePoint, and a host of other services are all pushing to rapidly expand the number of markets served.

5.3.2 Microsoft Cloud Operating Systems

The overall direction for Microsoft cloud offerings is to provide a consistent platform across all cloud models that enables the next generation of modern applications, systems, analytics, and IT infrastructure. The combination of Windows Server and Windows Azure comprises the cloud OS foundation that drives consistency across private, hosted, and public cloud deployments. This consistency removes the typical boundaries between these deployment models and enables the agility and flexibility to leverage the environments best suited to utility requirements, while supporting seamless migration across environments as business requirements and needs change.

Figure 62: The diagram above reflects a high level view of the convergence in operating system functionality, development tools and environment, security, management tools, and networking to achieve this consistent view.

Regardless of whether deploying on-premise, in a private cloud, in a public cloud, or with a hybrid architecture, Microsoft Windows Server 2012 and Windows Azure provide a consistent foundation for this next generation of computing. The following sections discuss these key consistent functionalities, technologies, and tools. The new capabilities of Windows Server built with cloud deployment in mind are addressed, as well as the capabilities of Windows Azure to run a whole spectrum of payloads and support all important service modes and development environments.

5.3.2.1 Windows Server 2012

The lessons learned from the development, deployment, and early adopters of Windows Azure helped Microsoft establish the services, functionality, and architecture for the next-generation Windows Server. Windows Server was initially designed to be a single-box server operating system, but the advent of private clouds completely changed the operating paradigm and forced a rethink of Windows Server design. As a result, Windows Server 2012 was rebuilt from the ground up to be a cloud-oriented operating system for servicing a number of pooled resources and efficiently managing payloads across those resources. Key core capabilities were pulled from Windows Azure back into Windows Server, and new functionality necessary to support private cloud deployments was added. The net result is a wholesale upgrade to Windows Server capabilities, including:

5.3.2.1.1 Windows Server Virtualization

Most utilities have initiated a server consolidation program using server virtualization technology, resulting in cost savings and operational efficiencies. Server virtualization technology has evolved, and the capabilities in Windows Server support creation of comprehensive platforms for private clouds. Microsoft's virtualization technology, Hyper-V, is a platform that can help fundamentally transform a utility's datacenter and cloud computing. With its Hyper-V virtualization role, Windows Server 2012 can help increase your server scalability and performance and provide better connectivity to cloud services.

As virtualized datacenters become more realizable, utility IT organizations and hosting providers are interested in implementing IaaS to provide server instances on demand. However, these server instances must offer security and isolation from one another. Hyper-V multi-tenancy provides new security and isolation capabilities to keep VMs isolated even when they share the same physical network. The Windows Server 2012 Hyper-V Extensible Virtual Switch further enhances networking and security capabilities by including provision for third-party extensions.

Another significant new capability in Windows Server 2012 is resource metering. Utilities and utility IT departments are under increasing pressure to reduce costs. Resource metering enables transparency for both internal accounting and for chargeback of external customers. This capability can completely redefine the conversation with operations and business consumers of IT services by providing historical records of exactly which resources were consumed, when, and how. This allows the IT organization to work with business departments to optimize resource utilization and efficiency. Improved quality of service capability to enforce minimum network bandwidth requirements also helps improve service consumer experiences.

5.3.2.1.2 Windows Server Networking

Windows Server 2012 networking capabilities help reduce networking complexities to lower costs and simplify management tasks:
- Tools can automate and consolidate networking processes and resources.
- IT administrators can more easily connect private clouds with public cloud services.
- Administrators can also grant users access across both physical boundaries and private and public cloud environments.

Windows Server 2012 can significantly reduce utility resource loading by enabling an entire network to be managed as a single server, providing the reliability and scalability of multiple servers without the costs. With automatic rerouting around storage, server, and network failures, it keeps file services online with minimal noticeable downtime.

It also provides for highly available servers and network storage to compensate for high latency, low bandwidth, and network congestion. Windows Server 2012 includes a number of features to help improve performance and reliability of utility datacenter deployments. One example is NIC teaming, where network interface cards work together to enhance connection throughput until a failure occurs, at which point one of the NICs picks up the entire load.

5.3.2.1.3 Windows Server Network Virtualization

Network virtualization is another significant new capability in Windows Server 2012. Server virtualization enables multiple server instances to run concurrently on a single physical host, yet server instances are isolated from each other; each virtual machine essentially operates as if it is the only server running on the physical computer. Network virtualization provides a similar capability, in which multiple virtual network infrastructures run on the same physical network (potentially with overlapping IP addresses), and each virtual network infrastructure operates as if it is the only virtual network running on the shared network infrastructure. Windows Server virtual networking uses generic routing encapsulation and IP rewrite to enable customer virtual machines to send data packets in the customer address space, which traverse the physical network infrastructure through their own virtual networks, or tunnels. This enables secure virtual networking over the same physical network infrastructure.

5.3.2.1.4 Windows Server Identity and Access

Security, regulatory compliance, confidentiality, data privacy, and protection of competitive utility strategic IP are all growing concerns in the utility industry. The utility's fiduciary responsibility for protection of customer energy usage information, and recent security intrusions, have served to heighten these concerns. Utility IT departments need to provide efficient and secure access to information both for internal users and business partners. The information may reside on servers, laptops, removable and mobile devices, in emails, and in the cloud. Users need broad role-based access to the information from virtually anywhere, and they need to access diverse systems and resources on the corporate network from different types of devices. These evolving needs and the advent of cloud computing have fundamentally changed the identity and security landscape. Maintaining a cohesive identity framework across the private and public cloud is critical to maintaining the security of enterprise applications in the cloud. As a result, Windows Server 2012 has completely transformed the approach to identity and access management, with significant improvements to Active Directory and the introduction of Dynamic Access Control. Flexibility for cloud computing and modern work styles is supported, regardless of how data is accessed.


Active Directory Rights Management Services can help utilities create a comprehensive information protection framework. Dynamic Access Control provides a data classification and protection system by applying tags to classify information, which can be integrated into central access policies and enable a rich audit facility. For Windows Server 2012, Dynamic Access Control has been integrated with Rights Management Services, so the central access policy components of Dynamic Access Control automatically apply Rights Management Services templates to specific documents.

5.3.2.1.5 Windows Server Storage

Storage management issues and failures can significantly affect utility IT infrastructure and availability. Windows Server 2012 has been designed to address the causes of unplanned downtime by monitoring and preventing server failures, as well as by isolating users from disruption during scheduled maintenance periods. A number of significant improvements to storage, performance, networking, and server capabilities also help utilities increase efficiency and lower IT costs while supporting single-server, multi-server, and multi-site environments. The capabilities in Windows Server 2012 enabling these goals are:
- Cluster-Aware Updating, which reduces server downtime by removing a node from a cluster, updating it, and then returning the updated node to the cluster.
- High-performance, highly available storage using the updated Server Message Block (SMB) protocol for enhanced reliability, availability, manageability, and performance, including new modes like Active-Active file sharing for simultaneous access to shares from any node on the cluster.
- Hyper-V over SMB, which eliminates the expensive Fibre Channel SAN requirement for Hyper-V guest storage provisioning.
- Online backup to the cloud via secure, compressed, and encrypted data transfer.
- Online corruption scanning to significantly reduce CHKDSK scan and repair times.
- Resilient File System, which uses copy-on-write, check-summing, and proactive error identification with a data scrubber to ensure high levels of data integrity.
- Storage Pools, which are virtualized units of storage administration for aggregations of physical disks, and storage spaces based upon the storage pools that can have definable resiliency through either mirroring or parity.

5.3.2.1.6 Windows Server Manageability and Automation

To meet comprehensive multi-server manageability needs, Windows Server 2012 provides new management capabilities for multiple machines, robust automation, improved compliance with industry management technology standards, and unified experiences across physical and virtual platforms. Server Manager facilitates multi-server tasks like role and feature deployment to both physical and virtual machines. Windows PowerShell 3.0 is a powerful platform for managing server roles and automating management tasks such as session connectivity, job scheduling, workflows, and scripting.

5.3.2.1.7 Windows Server Web and Application Platform

A scalable web and application platform is key to protecting utilities' existing investments in on-premise applications. Windows Server 2012 helps utilities bridge hybrid IT environments that span on-premise, private cloud, and public cloud environments. Windows Server 2012 also gives enterprises and cloud hosting providers the scalability and compatibility for creating and managing private clouds. Windows Server 2012 supports a number of connectivity levels with Windows Azure, such as Windows Azure Service Bus and messaging, Windows Azure Connect, and virtual machine portability, which enable hybrid scenarios. Windows Server 2012 can also improve website density and efficiency via new features for building, provisioning, and managing multi-tenant environments, as well as centralized SSL certificate support to increase SSL scalability and manageability.

5.3.2.1.8 Windows Server Virtual Desktop Infrastructure (VDI)

Powered by Windows Server 2012, Microsoft VDI allows users to seamlessly access their rich, full-fidelity Windows environment running in the datacenter from any device. With Remote Desktop Services (RDS) and Hyper-V in Windows Server 2012, organizations get the following benefits:
- Windows Server 2012 provides a single platform to deliver any type of hosted desktop, making it simple to deploy and easy to manage.
- RemoteFX provides a consistently rich user experience regardless of the type of virtual desktop being accessed or the location from which users access their desktop.
- Remote Desktop Services can host session-based desktops, pooled VMs, or personal VMs, giving utilities the flexibility to deploy the right type of VDI desktop for their users, all from a single platform.

5.3.2.1.9 Windows Server Active Directory

Active Directory is added as a core capability in Windows Azure, so security and identity are managed consistently across all three cloud models. Virtualization with Hyper-V is delivered the same way in Windows Server 2012 and in Windows Azure. The Visual Studio development environment, the .NET Framework, and deployment via Git are the same for both target environments. Application-level connectivity and messaging are the same in both environments, with Service Bus the core messaging capability within Windows Communication Foundation. Finally, System Center provides a single place to monitor, manage, update, and automate IT activities across all three cloud models. The vision for consistency across all cloud models is unique to Microsoft, and the ability to scale from micro to Internet scale, together with a robust and rich set of new services, provides a unique and powerful foundation for the new, modern, SERA-compliant enterprise architecture.

5.3.2.2 Windows Azure

The current Windows Azure release represents the culmination of many years of development, providing the following capabilities:
- Development and deployment in a number of popular development tools and environments;
- Consistency between a number of on-premise and cloud services, such as security and identity;
- A number of hybrid connectivity scenarios;
- Web site hosting; and
- IaaS hosting, enabling both Windows and leading Linux VMs to run in the Azure environment.


Figure 63: Windows Azure's full release covers the full spectrum of cloud platforms

5.3.2.2.1 Windows Azure Virtual Machines

Windows Azure enables a number of different images and VM sizes to run. Operating systems include Windows Server 2008 R2 Enterprise and Windows Server 2012, as well as the CentOS, openSUSE, and Ubuntu versions of Linux. Five sizes of VM are configurable, as well as storage and networking. The IaaS model for running VMs enables utilities not only to migrate existing apps to the cloud, create test environments, or build new cloud-targeted apps; it also enables scenarios like a VM in the cloud as a backup for the on-premise instance, or hybrid architectures where cloud resources complement or extend the processing and functionality of the on-premise app.

5.3.2.2.2 Windows Azure Data Management

Windows Azure Data Management includes Tables, Blobs, and SQL Database for applications that require full-featured database capability. Database handling for Azure includes SQL Database (formerly known as SQL Azure Database), available as a relational database as a service; in addition, Windows Azure can also run SQL Server in a VM in IaaS.

- SQL Database running as an Azure service provides utilities a high degree of interoperability and the ability to use existing SQL skills and expertise to accelerate robust database application creation, as well as to extend applications from on-premise to the cloud.
- SQL Data Sync is a powerful web service that allows utilities to bi-directionally synchronize data across multiple SQL Databases, as well as synchronizing Windows Azure SQL Databases with SQL Server databases. This removes the utility effort for data alignment and allows the utility to focus on application logic.
- Windows Azure Tables provide NoSQL capabilities for utility applications that require storage of very large amounts of unstructured data. Tables offer manual, key-based access to un-schematized data at a low cost for applications with simple data access needs (see the sketch following this list). Table Storage is Microsoft's implementation of NoSQL using a key-value storage pattern. Tables are an ISO 27001 certified managed service that can auto-scale to meet massive volumes of up to 100 terabytes of data with incredibly high throughput performance, and they are accessible from virtually anywhere via REST and managed APIs.
- Blobs provide a simple and inexpensive way to store large amounts of unstructured text or binary data such as video, audio, and images, with fast read performance. Windows Azure Drive allows applications to mount a Blob formatted as a single-volume NTFS virtual hard drive (VHD). Utilities can also securely lock down permissions to Blobs.
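As an illustration of the key-based access pattern described above, the following sketch uses the Windows Azure storage client library to insert and retrieve an interval meter read from Table storage. The entity layout (meter ID as partition key, timestamp as row key) and all names are assumptions for the example, not a prescribed SERA schema.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity layout: meter ID as partition key, reading timestamp
// as row key. This is an illustration, not a prescribed SERA schema.
public class MeterRead : TableEntity
{
    public MeterRead() { }
    public MeterRead(string meterId, DateTime ts)
    {
        PartitionKey = meterId;
        RowKey = ts.ToString("yyyyMMddHHmmss");
    }
    public double KWh { get; set; }
}

class TableDemo
{
    static void Main()
    {
        // Placeholder account; a real deployment parses the utility's own
        // storage connection string.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var table = account.CreateCloudTableClient().GetTableReference("meterreads");
        table.CreateIfNotExists();

        var read = new MeterRead("METER-0042", DateTime.UtcNow) { KWh = 1.25 };
        table.Execute(TableOperation.Insert(read));

        // Key-based point lookup, the access pattern Tables are optimized for.
        var result = table.Execute(
            TableOperation.Retrieve<MeterRead>(read.PartitionKey, read.RowKey));
        Console.WriteLine(((MeterRead)result.Result).KWh);
    }
}
```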

5.3.2.2.3 Windows Azure Business Analytics

Microsoft's business analytics solutions for Windows Azure embrace the new scope, scale, and diversity of data today. These solutions provide enterprise-class data management and emerging technologies such as Hadoop for Big Data. They enable easy discovery and data enrichment with third-party data sets and information services from the Windows Azure Marketplace. Additionally, they deliver insight to users by surfacing and building reporting capabilities into Windows Azure applications with Windows Azure SQL Reporting.

5.3.2.2.4 Windows Azure Active Directory Identity and Security

Windows Azure Active Directory is a modern REST-based service that provides identity management and access control for cloud applications, similar to the familiar on-premise implementation.

Utilities can quickly extend on-premise Active Directory to apply policies and to control and authenticate users with their existing corporate credentials to Windows Azure and other cloud services. This means utilities can have one identity service across Windows Azure, Microsoft Office 365, Microsoft Dynamics CRM Online, Windows Intune, and other third-party cloud services. Windows Azure Active Directory provides a cloud-based identity provider that easily integrates with on-premises AD deployments and offers full support for third-party identity providers. Utilities can provide users with single sign-on across Microsoft Online Services, third-party cloud services, and applications built on Windows Azure using popular web identity providers like Microsoft Account, Google, Yahoo!, and Facebook.

Figure 64: Azure Active Directory helps utilities manage access control applications based upon centralized policies and rules.

This provides an easy way to manage access control to applications based upon centralized policies and rules, and it ensures consistent and appropriate access to utility applications to meet security and compliance requirements. It also provides utility application developers with consumer identity provider-based authentication and authorization for Windows Azure-based applications. Security could also extend to instant messaging services for customers using utility services.

5.3.2.2.5 Windows Azure Networking

Windows Azure Networking provides a number of cross-platform connectivity capabilities to connect on-premise infrastructure to the public cloud. The following diagram shows connectivity at a number of different levels.

Figure 65: Windows Azure Networking provides a number of cross-platform connectivity capabilities to connect on-premise infrastructure to the public cloud.

Data synchronization for SQL Database is covered under the data management section above, and these three capabilities are available in Windows Azure for networking (a brokered messaging sketch follows this list):
- Virtual Networks provide network administrators the ability to provision and manage VPN subnets in the cloud and to securely link them to the utility's on-premise IT infrastructure. Administrators can extend their datacenters into the cloud via traditional site-to-site VPNs to securely scale out datacenter capacity. The industry-standard IPsec protocol is used to establish the secure connection between the corporate gateway and Windows Azure. Administrators have control of the network topology, including configuration of DNS and IP address ranges for virtual machines on the subnets.
- Windows Azure Connect enables a machine-to-machine connection between Windows Azure services and on-premise resources like database servers and domain controllers. For example, with Windows Azure Connect, a utility developer can create a direct connection between their local development machine and apps hosted in Windows Azure to test and debug the apps using the same tools as for on-premise apps. Utility developers can also build cloud apps hosted in Windows Azure that securely access on-premise SQL Server databases or authenticate users against an on-premise Active Directory service.
- Service Bus provides application-level connectivity and messaging between Windows Azure applications and between on-premise and Windows Azure applications. Service Bus Queues support guaranteed FIFO message delivery using standard REST and AMQP protocols. Service Bus Topics deliver messages to multiple subscribers. Service Bus Relay solves the thorny challenge of communicating between on-premise applications and the outside world by allowing web services running on-premise to project public endpoints serviced by Windows Azure, which allows the on-premise web services to be accessed from anywhere.
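A minimal sketch of Service Bus brokered messaging follows, sending and receiving a single message through a queue. The connection string, queue name, and message content are placeholders; a production design would add retry logic and dead-letter handling.

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

// Connection string and queue name are placeholders for the utility's
// Service Bus namespace; the queue is assumed to already exist.
class QueueDemo
{
    static void Main()
    {
        const string connection =
            "Endpoint=sb://contoso-utility.servicebus.windows.net/;" +
            "SharedSecretIssuer=owner;SharedSecretValue=PLACEHOLDER";
        var client = QueueClient.CreateFromConnectionString(connection, "outage-events");

        client.Send(new BrokeredMessage("Feeder F-17 breaker open"));

        BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(30));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete(); // settle the message so it leaves the queue
        }
    }
}
```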

5.3.2.2.6 Windows Azure HPC Scheduler

Some utilities have created high-performance computing applications for tasks such as Black-Scholes price and risk estimation for energy trading. Windows Azure can now support both SOA and MPI jobs for compute-intensive parallel applications. The scheduler and runtimes are supported in Windows Azure and are accessed via a REST call from a web application, a portal, or a rich client on-premise. This service can be used to massively scale out on-premise MPI HPC jobs to meet evolving utility needs without on-premise hardware expansion.

5.3.2.2.7 Windows Azure Standards-Based Interoperability Surfaces

A number of commonly accepted standards, including Internet standards, exist for cloud platforms, and new standards for the broad array of new cloud scenarios are being created. Cloud platforms should support the commonly accepted standards to enable interoperability. Windows Azure provides a standards-based environment that enables interoperability in cloud computing. Microsoft is actively working with Standards Development Organizations (SDOs) and technology leaders at high levels to develop appropriate new standards for cloud computing in many areas:


Figure 66: Microsoft is working with standards development organizations for cloud computing.

In addition, the use of low-level Internet plumbing protocols (such as HTTP, XML, SOAP, REST, JSON, and TCP/IP) provides the foundation for Windows Azure's approach to interoperability, ensuring that popular tools, frameworks, and languages can interoperate with Windows Azure services. Windows Azure services are exposed via REST APIs to enable their use from many different languages, platforms, frameworks, and tools. Specialized standards and protocols such as SQL (relational database access), memcache (caching), and AMQP (messaging) are supported where appropriate. The following diagram shows the cloud and Windows Azure services and the associated standards-based access to those services (a minimal REST sketch follows the figure):

Figure 67: Cloud and Windows Azure Services and the associated standards-based access to those services.
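As a small illustration of this REST surface, the sketch below reads a publicly readable blob with nothing more than an HTTP GET; any platform with an HTTP client can do the same. The storage account, container, and blob names are placeholders.

```csharp
using System;
using System.IO;
using System.Net;

// Account, container, and blob names are placeholders; the blob is assumed
// to be publicly readable, so no authentication header is needed.
class RestDemo
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://contosoutility.blob.core.windows.net/public/notice.txt");
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```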


5.3.2.2.8 Windows Azure Service Bus Notification Hubs

Mobility is becoming ever more important. As utility workers become increasingly mobile, they will rely on having access to information in an always-connected environment for significantly enhanced efficiency. A key element in support of the mobile worker is the transition of solutions to native mobile platform notification systems as the primary mechanism for alerting users via mobile devices. SERA recommends leveraging the new Azure Service Bus Notification Hubs capability to provide organizations with a cloud-based alerting mechanism that can interface with mobile platform notification systems in a vendor-neutral manner. Windows Azure Service Bus Notification Hubs can provide notifications via the Windows Push Notification Services (WNS) for Windows 8, the Apple Push Notification service (APNs), and the Microsoft Push Notification Service (MPNS) for Windows Phone 8. As additional Azure services are added, such as support for the Android platform via GCM, Notification Hubs become a standard platform way to scale notifications from a single user to millions of concurrent users.
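The following is a hedged sketch of sending a Windows toast notification through a Notification Hub using the .NET client of this era; the hub name, connection string, and payload are placeholders, and analogous Send* methods target the other platform notification services registered with the same hub.

```csharp
using System;
using Microsoft.ServiceBus.Notifications;

// Hub name, connection string, and payload are placeholders.
class NotifyDemo
{
    static void Main()
    {
        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "Endpoint=sb://contoso-utility.servicebus.windows.net/;" +
            "SharedAccessKeyName=DefaultFullSharedAccessSignature;SharedAccessKey=PLACEHOLDER",
            "crew-alerts");

        string toast =
            "<toast><visual><binding template=\"ToastText01\">" +
            "<text id=\"1\">Outage assigned: Feeder F-17</text>" +
            "</binding></visual></toast>";

        // WNS delivery for Windows 8 clients; APNs and MPNS devices use the
        // corresponding native payload formats.
        hub.SendWindowsNativeNotificationAsync(toast).Wait();
    }
}
```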

5.4 Collaboration Services

As previously described, the smart energy ecosystem will thrive if collaboration is adopted as a key tenet of its operation. Customers are becoming active participants in this new energy environment, driving the emergence of a wide range of new service providers. Many new services are already being provided using the Software as a Service (SaaS) model. To further enable this dynamic, Microsoft offers key technologies that provide a collaboration infrastructure, featuring:
- Windows Azure, which focuses on supporting collaboration using cloud-based services, and
- Windows SharePoint Server, which focuses on user collaboration.

5.4.1 Windows Azure Services Platform

While Azure is technically an operating system in the cloud, and the Azure Services Platform is technically a cloud computing environment, the reference architecture advises leveraging the new Microsoft Azure services as a key enabler for cross-enterprise process integration, data exchange, and cloud-based collaboration.


The Azure Services Platform is an Internet-scale cloud services platform hosted in Microsoft data centers. The platform is central to the Microsoft software-plus-services (S+S) approach to providing rich computing environments, whether hosted on-premise or in a Microsoft data center. The S+S approach provides choice and flexibility in the development, operation, migration, and management of applications, whether they exist on the Internet or are hosted locally. Figure 68 illustrates the Azure Services Platform capabilities:

Figure 68: The Azure Services Platform can be used to implement new applications or enhance existing applications, where applications can run in the cloud or on local systems.

Components within the Azure Services Platform include:
- The Windows Azure operating system that runs in the cloud using Microsoft datacenters.
- Applications and services that run in the cloud on Windows Azure.
- A service bus, part of .NET Services, that provides a secure, standards-based messaging infrastructure.
- Other cloud services, including .NET Services beyond the Service Bus, SQL Services, and Live Services.
- Local (on-premise) systems that may use operating systems such as Windows Server, Windows 8, Windows XP, Windows Mobile, and other non-Microsoft operating systems.


- Applications that run on local systems and interact with Windows Azure and the other cloud services.

Azure applications and services are developed using the .NET Framework, supporting a variety of managed code programming languages. Azure provides for standards-based interoperability using HTTP, SOAP, REST, PHP and XML.

Figure 69: Azure Services platform (Source: David Chappell & Associates)

5.4.2 Microsoft Office SharePoint Server

Microsoft also enables collaboration through Microsoft SharePoint Server 2013, where the focus is collaboration between users and access to enterprise information resources. SharePoint Server 2013 is an integrated suite of capabilities that can help improve operational effectiveness. SharePoint Server 2013 features a clean user interface that can be used with touch-based devices.


Figure 70: SharePoint 2013 Platform service applications

Microsoft SharePoint Server 2013 provides the following key capabilities:
- Team collaboration, where teams can work together, collaborate on and publish documents, maintain task lists, implement workflows, and share information through the use of wikis and blogs.
- Portals, where users can be connected to business-critical resources.
- Enterprise content management (ECM).
- Enterprise search, which can leverage a variety of repositories and formats.
- Business process and forms, where workflow templates can be used to automate approval, review, and archiving processes.
- Business intelligence, where dashboards and reports can provide up-to-date information, improving insight and helping to improve decisions.


- Web content management, where the goal is to put Web publishing into the hands of business users.
- Social computing, where the goal is often to improve communication within a user community.

SharePoint provides an ideal solution to the problem of content management in support of compliance and compliance reporting. There are also many examples where SharePoint is currently used to support collaboration by teams, especially in situations where an organization may have many subject areas of interest, each with corresponding interest groups, and given users may participate in a subset of those groups. One example would be an enterprise architecture, where architects might collaborate to define different aspects of the architecture. The specifications often take the form of Microsoft Office documents, UML, and design artifacts. Developers might then have read-only access to the architecture, and thus be able to raise concerns. The developers may also have different subject areas related to software designs and the posting of design artifacts for sharing with other groups. One notable user of SharePoint is the UCA International Users Group, which is the parent users group for many smart grid related efforts, including the IEC TC57 standards working groups.

The reference architecture recommends Microsoft SharePoint Server as a core enabler for information distribution both internal and external to an organization. Security, RBAC, IRM, workflows, and a rich set of tools for construction and distribution of enterprise content make SharePoint a core element in the reference architecture.
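For programmatic access to such collaboration sites, the SharePoint client object model (CSOM) can enumerate site content from outside the farm. The sketch below reads a site's title and list names; the site URL is a placeholder.

```csharp
using System;
using Microsoft.SharePoint.Client;

// The site URL is a placeholder for a utility collaboration site.
class SharePointDemo
{
    static void Main()
    {
        using (var ctx = new ClientContext(
            "https://intranet.contoso-utility.com/sites/architecture"))
        {
            Web web = ctx.Web;
            ctx.Load(web, w => w.Title);
            ctx.Load(web.Lists, lists => lists.Include(l => l.Title));
            ctx.ExecuteQuery(); // single round trip for both requests

            Console.WriteLine("Site: {0}", web.Title);
            foreach (List list in web.Lists)
                Console.WriteLine("  List: {0}", list.Title);
        }
    }
}
```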


5.5 Business Software

Microsoft Dynamics is a powerful set of on-premise and cloud solutions that can be applied to the utility industry for the purpose of creating a dynamic business that is connected, forward-looking, and driven by the empowerment of people to predict potential issues and opportunities, enabling organizations to expand the possibilities for competitive advantage.

Figure 71: Microsoft Dynamics is a line of integrated, adaptable business management solutions that enable a utility and its people to make business decisions with greater confidence.

Microsoft Dynamics works like and with familiar Microsoft software, automating and streamlining financial, customer relationship, and supply chain processes in a way that can help utilities drive business success. Most notable for utilities are:
- Dynamics CRM
- Dynamics xRM
- Dynamics AX
- Microsoft Dynamics partners

5.5.1 Dynamics CRM

Dynamics CRM is focused on the core customer relationship management workloads of:
- Sales
- Marketing
- Customer Care


Dynamics CRM delivers a holistic view of each customer that enables client-facing employees to make expedited and educated decisions about strategic efforts in the sales, marketing, and customer service fields. Utilities are deriving value with Dynamics CRM in a number of key use cases, including B2B for market participants in RTOs/ISOs, demand response programs, and renewable energy DER relationships, as well as customer-facing programs for tariff workflows and customer services. Utilities are also leveraging the Dynamics CRM Call Center Agent (CCA) capability to drastically improve call center agent responsiveness and the quality of the agent work experience. The CCA capability to easily interface with and aggregate information from a number of utility systems, including sometimes very difficult-to-customize legacy systems, is very powerful.

5.5.2 Dynamics xRM

In addition, a number of utilities are applying the Microsoft Dynamics xRM Development Framework, upon which Dynamics CRM is built, to create Extended CRM applications. The Dynamics xRM Development Framework is very general and can be used to implement functionality for virtually any entity/relationship business process.

Figure 72 - All Dynamics CRM workloads and applications can be consumed from a variety of devices, with modern form factors such as tablets and smartphones, as well as browsers, the Outlook add-in, and more.

Dynamics CRM core applications and Extended CRM applications both take advantage of the Dynamics xRM Development Framework, which is a declarative rapid application development framework providing the underlying relationships, interactions, processes, and insights.
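The following sketch illustrates the xRM pattern with the Dynamics SDK's IOrganizationService: any business relationship can be persisted as a record of a custom entity. The entity and attribute names (new_drprogram, new_name, new_enrollmentcap) and the organization URL are hypothetical, standing in for whatever a utility defines in its own xRM solution.

```csharp
using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

// The custom entity "new_drprogram", its attributes, and the organization
// service URL are hypothetical placeholders for a utility's own solution.
class XrmDemo
{
    static void Main()
    {
        var credentials = new ClientCredentials();
        credentials.Windows.ClientCredential =
            System.Net.CredentialCache.DefaultNetworkCredentials;

        using (var service = new OrganizationServiceProxy(
            new Uri("https://crm.contoso-utility.com/XRMServices/2011/Organization.svc"),
            null, credentials, null))
        {
            var program = new Entity("new_drprogram");
            program["new_name"] = "Summer Peak Demand Response";
            program["new_enrollmentcap"] = 5000;

            Guid id = service.Create(program); // persisted like any CRM record
            Console.WriteLine("Created program {0}", id);
        }
    }
}
```

The declarative framework then supplies forms, workflows, and security for the new entity without custom plumbing, which is the main attraction of the Extended CRM approach.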


5.5.3 Microsoft Dynamics AX

Microsoft Dynamics offers a line of simple-to-learn and easy-to-use enterprise resource planning (ERP) business solutions that connect with your existing technology and scale as you grow to deliver long-term value. Microsoft Dynamics AX is the full-featured, highly flexible, scalable ERP solution. These solutions work the way your people and organization work, enabling everyone to make more informed decisions and adapt to new opportunities and rapid change. Microsoft Dynamics AX 2012 includes capabilities like:
- Financial Management
- Human Capital Management
- Manufacturing
- Supply Chain Management
- Procurement and Sourcing
- Project Management and Accounting
- Sales, Service, and Marketing
- Retail
- Business Intelligence and Reporting

Microsoft Dynamics AX is built upon the familiar Microsoft technology platform and interacts with business and productivity applications and communications, whether running on-premise or in the cloud. For example, the Dynamics AX database is SQL Server, and Dynamics can view historical data using at least 16 pre-configured SQL cubes, or create views of data using SQL Server 2012 Reporting Services. The solution has a model-driven layered architecture enabling rapid enhancement and creation of new functionality, allowing business processes to be implemented in days rather than the weeks or months required by historical industry solutions.

The Microsoft Dynamics AX Application Integration Framework (AIF) enables companies to integrate and communicate with external business processes and partners through the exchange of XML over various transport media. AIF enables both business-to-business and application-to-application integration scenarios. Microsoft Dynamics AX also implements individual security for authentication and establishes role-based security where users are assigned to security roles based upon responsibilities. Further, data security policies for data tables can be established and maintained using a Table Permissions Framework enforced by the Application Object Server.

5.5.4 Microsoft Dynamics Partners

Microsoft partner Ferranti Computer Systems provides its MECOMS solution, which is tailored to the utility industry and runs on top of Dynamics AX and the overall Microsoft stack. Ferranti's MECOMS solution is an integrated meter data management (MDM), enterprise asset management (EAM), and customer information system (CIS) solution, and it offers a scalable, high-performance alternative to historical ERP solutions in the utility industry value chain. Some of the most demanding new utility challenges include the ever-changing and highly complex billing associated with today's time-oriented and multi-tier dynamic pricing/tariff structures. As new tariffs are envisaged and approved by regulators, the ability to rapidly deploy the tariffs can be a direct function of the agility of the ERP system. Further, the ability of utilities to move customers from one tariff program to another will be an important element of utility customer relationships going forward. Ferranti's MECOMS solution supports smart metering as well as more classically metered regions, and provides functionality for captive as well as liberalized markets.
Figure 73 provides a functional overview of the Ferranti Computer Systems MECOMS solution.


5.6 Process Integration

Process integration refers to the integration of processes within an organization as well as integration of processes that may span multiple organizations. For example, a business function such as processing an incoming order requires the coordinated participation of the customer management system, the inventory system, the shipping system, and one or more financial systems. The business could operate more efficiently if all the systems are integrated so that the business function of processing the incoming order is completed automatically. In a utility context, Microsoft SERA recognizes that there are many possible scenarios for how process integration might occur:
- Process integration may occur entirely within an organization.
- Process integration may involve B2B data links to other organizations.
- Process integration may involve many devices inside and outside any enterprise.
- Process integration may involve the use of cloud services.
- Process integration may be multi-organization, where cloud-based messaging and orchestration are needed.

One way to address these integration scenarios is to consider the time horizon and technologies per the following information architecture playbook:

Figure 74 depicts an information architecture playbook.


The varied process integration scenarios above translate into a need for many different types of process integration capabilities within an organization concerned about improving the coordination and effectiveness of business operations. These types of process integration capabilities are provided through:
- A service-oriented architecture, which is a system designed to direct how information and communications resources are integrated with each other, so it can manage which services are available and when they are available so as to align with the business goals.
- An enterprise service bus (ESB), for on-premise integration, where the intra-enterprise role of the ESB can be served by BizTalk Server, especially with BizTalk's ability to connect with Windows Azure so as to include the Azure virtual network as part of the corporate network.
- An IP-based network that could and should conceivably supersede conventional ESB deployments to meet the changing utility environment in terms of process scope, device and sensor interaction, and scale.
- An inter-service bus, for B2B or multi-organization integration, where the needs of inter-enterprise integration can be served by the Internet-based Azure .NET Service Bus.
- An extra-service bus, for integration that may involve devices or customers, where the needs of extra-enterprise integration may be similarly served by the Internet-based Azure .NET Service Bus.

Utilities should work to select the correct technologies and architectural strategies according to the time horizon of the integration, as noted in the playbook chart above. Figure 75 illustrates such an architecture, where on-premise ESBs, clients, and other services can be integrated using the service bus.


Figure 75: On-premise ESBs, clients, and other services can be integrated using the Service Bus

5.6.1 Service Oriented Architecture

A service-oriented architecture (SOA) is a standards-based design approach for creating an integrated technology infrastructure capable of rapidly responding to the utility business's changing needs. SOA enables data, logic, and infrastructure resources to be organized as services and accessed by routing messages between network interfaces. SOA helps IT become an accelerator for business agility and innovation.40 These attributes of SOA lend themselves to process integration:
- Interoperable. Components can be interoperable across platform and technology boundaries.
- Componentized. Services are exposed as autonomous components that can be versioned and managed independently.
- Composable. Services can be composed by an application to perform more complex operations or to enact a business process.
- Message-based interfaces. Interfaces are defined by message contracts and schemas. Operation calls and parameters are passed in XML message envelopes.

40 Microsoft in Education: Service-Oriented Architecture; Microsoft BizTalk Server: Fuel Innovation, Boost Business Agility and Get Higher Return on Investments.


- Distributable. Service components can be consumed from the same machine or distributed to remote machines. The service interface and logic are independent of the transport and protocol used to access the service.
- Discoverable. Services publish their metadata as WSDL so that client applications can discover the interfaces and schemas and generate a client-side proxy to consume the service (see the sketch following this list).41
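The discoverability attribute is straightforward to demonstrate in WCF: enabling metadata publishing causes the host to emit WSDL from which clients can generate proxies. The service below is a hypothetical example, not a SERA-defined interface.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// Hypothetical service; the point is the metadata behavior, which makes the
// WSDL available at http://localhost:8081/switching?wsdl for proxy generation.
[ServiceContract]
public interface ISwitchingOrderService
{
    [OperationContract]
    string GetStatus(string orderId);
}

public class SwitchingOrderService : ISwitchingOrderService
{
    public string GetStatus(string orderId) { return "Scheduled"; }
}

class MetadataHost
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(SwitchingOrderService),
            new Uri("http://localhost:8081/switching")))
        {
            host.Description.Behaviors.Add(
                new ServiceMetadataBehavior { HttpGetEnabled = true });
            host.AddServiceEndpoint(typeof(ISwitchingOrderService),
                new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("WSDL published; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```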

5.6.2 Enterprise Service Bus using BizTalk Server
In an enterprise requiring strict centralized governance, the enterprise service bus is an architectural pattern and a key enabler for implementing the infrastructure of a service-oriented enterprise. The ESB provides support for interaction between heterogeneous services and interfaces that might be mismatched, or that might change over time. The ESB addresses integration problems in a way that maximizes the re-use of services and maintains flexibility.42 BizTalk Server is the Microsoft offering for process integration, as it provides the means to seamlessly connect systems within and across organizations. BizTalk Server offers the BizTalk ESB Toolkit, a collection of tools and libraries that:
 Extend BizTalk Server capabilities for supporting a loosely coupled and dynamic messaging architecture.
 Function as middleware that provides tools for rapid mediation between services and their consumers.
 Enable maximum flexibility at run time by simplifying loosely coupled composition of service endpoints and management of service interactions.43

Said another way, BizTalk Server serves as a gateway technology.
5.6.2.1 Business Activity Monitoring Using BizTalk
For cases where data is pulled from backend systems and traffic monitoring is important, the Business Activity Monitor in BizTalk offers a solution. Business Activity Monitoring (BAM) in BizTalk Server allows business analysts, developers, and information workers to monitor and analyze data from business process information sources in real time. By using BAM, users can get information about business state, trends, and critical conditions.
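The BAM API discussed below can also be driven directly from custom .NET code. The following C# sketch is hedged: it assumes the DirectEventStream class from the BizTalk BAM event-observation library, and the activity name, fields, and connection string are hypothetical placeholders that would need to match a BAM activity actually deployed.

using System;
using Microsoft.BizTalk.Bam.EventObservation;

class BamExample
{
    static void Main()
    {
        // Connection string to the BAM primary import database (assumed name).
        string conn = "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport";

        // DirectEventStream writes events synchronously; a flush
        // threshold of 1 flushes after every event.
        var stream = new DirectEventStream(conn, 1);

        string activity = "MeterReadProcessing";   // hypothetical BAM activity
        string instanceId = Guid.NewGuid().ToString();

        stream.BeginActivity(activity, instanceId);
        stream.UpdateActivity(activity, instanceId,
            "MeterId", "MTR-001",                  // name/value pairs map to
            "ReceivedAt", DateTime.UtcNow);        // items defined in the activity
        stream.EndActivity(activity, instanceId);
    }
}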

41 Security Fundamentals for Web Services.
42 Microsoft patterns and practices, SOA definition; What is an Enterprise Service Bus?, Microsoft Enterprise Service Bus.
43 Ibid.


Additionally, the BAM application programming interface (API) enables developers to expose visibility into data that is external to BizTalk processes, such as archival data or other non-BizTalk processes and systems. Developers, administrators, business analysts, and end users can use the BAM Portal to view, aggregate, search, or create alerts based on the data collected by BAM. A major goal for SOA is to provide total process visibility. In some cases, there are dozens or hundreds of different services, each of which plays its own part in a larger business process. Each of these services is likely to have its own tracking or monitoring mechanism. By using BAM, a developer can create a single activity that spans all of these processes and gathers the relevant information about each one. This kind of implementation provides a single view of the process as a whole, rather than using different tools to look at disparate data stores for information about a single service.
5.6.2.2 Multi-Platform Adapters
For long-running processes, BizTalk orchestration may be appropriate. BizTalk includes more than 25 multi-platform adapters and a robust messaging infrastructure. In addition to integration functionality, BizTalk provides:
 Strong, durable messaging designed to never lose messages
 A rules engine (BRE)
 EDI connectivity
 Business activity monitoring (BAM)
 RFID device management and event-based communication
 ESB guidance, integrated with the base BizTalk configuration (in BizTalk Server 2013)

5.6.2.3 The BTC CIM Accelerator for Microsoft BizTalk
Power grid optimization, smart grids, and regulatory demands for better grid efficiency require the exchange of huge amounts of information between applications like SAP, power grid control systems, and GIS. Microsoft BizTalk Server, a proven standard product for interconnecting and orchestrating all involved applications, stands ready for seamless integration of existing workflows so as to implement complex integration projects within energy companies. By using Microsoft BizTalk Server, it is possible to integrate internal applications and external market partners by using a variety of adapters. The Common Information Model (CIM), as created by the IEC (International Electrotechnical Commission), standardizes data communication for energy companies, facilitates the integration of IT systems, and ensures technically correct exchange of information. Use of CIM (IEC 61970/61968) in the energy sector is officially endorsed or even mandatory in several U.S. states. It is also commonly used throughout Asia, and increasingly in Europe as well. A long-standing Microsoft partner, BTC has developed the BTC CIM Accelerator as a powerful toolset that enables the application of the CIM standard with Microsoft BizTalk Server in a precisely adjusted and easy-to-use way. With the BTC CIM Accelerator, utilities can benefit from Microsoft BizTalk Server with:
 Automated, reliable, and CIM-compliant exchange of information.
 Fast and flexible response to constantly changing legal and market requirements.
 Consistent realization of savings potential due to low one-off and maintenance costs.
 Consistent compliance with legal requirements and standards.
 Significant reduction of CIM complexity through specific tool support.
 Documentation and tutorials that help you get started with the application of CIM.

5.6.2.4 Additional Microsoft BizTalk Capabilities
BizTalk is a mature product in its seventh version, BizTalk Server 2010, enabling organizations44 to:
 More easily connect disparate systems
 Connect their core systems both inside and outside their organization
 Integrate functionality
 Provide strong, durable messaging, a rules engine, EDI connectivity, Business Activity Monitoring (BAM), RFID capabilities, and IBM host/mainframe connectivity
 Simplify and automate interoperability to reduce costs and errors
 Gain critical insights on business processes and performance
 Shield processes from change impacts
 Promote agility and manageability
 Integrate to eliminate redundancy
 Automate business interactions with partners

44 A whitepaper is available that provides an introduction to BizTalk Server 2009.


Figure 76 - BizTalk basic message flow (Source: Chappell & Associates)

Figure 76 illustrates that once a message from a source is received by an adapter of a receive port, a message may be validated and converted into an internal format for delivery to the message box. Once delivered to the message box, the message may be read by an orchestration. As the result of an orchestration, another message may be produced and be delivered to the message box. Using subscriptions, messages from the message box are delivered to a send port, where they may be transformed and delivered to a target. An example of this would be for meter readings, where a legacy metering system may be able to provide data using MultiSpeak or MV90 formats, but the target system may need the data to be transformed to an IEC 61968-9 format.
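As a hedged illustration of that transformation step, the C# sketch below applies an XSLT map with the standard .NET XslCompiledTransform class; the map file and document names are hypothetical stand-ins for a real MultiSpeak-to-IEC 61968-9 mapping, which in BizTalk itself would normally be authored as a map executed on a receive or send port.

using System.Xml;
using System.Xml.Xsl;

class TransformExample
{
    static void Main()
    {
        // Load a (hypothetical) map that converts a MultiSpeak reading
        // document into an IEC 61968-9 MeterReadings message.
        var map = new XslCompiledTransform();
        map.Load("MultiSpeakToIec61968-9.xslt");

        // Apply the map to a source payload and write the target format.
        using (var reader = XmlReader.Create("multispeak-reading.xml"))
        using (var writer = XmlWriter.Create("iec61968-9-reading.xml"))
        {
            map.Transform(reader, null, writer);
        }
    }
}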


Figure 77 provides a broader view of business process management:

Figure 77: Business process management (source: Chappell & Associates)

Figure 77 demonstrates how the BizTalk orchestrations can leverage business rule engine (BRE) and business activity monitoring (BAM) services. Information posted to BAM services can be accessed by users through tools such as Microsoft Excel and Microsoft SharePoint Server BI, as well as other application components. Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) can be used to develop applications that can access BAM data.


In an even broader context, BizTalk Server can be used for business-to-business (B2B) integration, as shown in Figure 78:

Figure 78 - B2B Integration using BizTalk Server (Source: Chappell & Associates)

This shows that, when a business process spans multiple organizations, orchestration can manage communication between multiple systems, both internal and external to the organization. That orchestration manages the flow of tasks, creating a seamless experience for highly distributed processes. These capabilities are commonly needed in energy markets and procurement, as well as for integration between utilities and service providers. The registration and management of demand response programs is one example, where service providers identify, register, and manage resource participation in demand response programs that are managed by a utility or ISO.
5.6.3 The IP Network as Bus for Process Integration
There are new discussions about improvements that may be made to the concept of the enterprise service bus for process integration using IP networks and HTTP, particularly for their ability to achieve utility scale. That's because the idea that the control of services can be both centralized and scaled out at the same time does not hold up when a central, controlling committee of ESB machines governs everything. That is a limiting factor in this age of mobile devices, smart phones, and tablet computing, when powerful computers initiate ad hoc transactions anytime and from anywhere. Central control and policy-driven governance of services negates the ability to increase agility, and otherwise reduces utilities' ability to adapt and integrate services according to changing needs and situations. While many may automatically consider HTTP-based services as part of the IP network, it is useful to consider the many IP network protocols that go beyond HTTP. Yes, the web is the best example of a successful implementation of services on the IP network without any form of centralized services (other than redundant domain name systems). But the IP network also serves as a bare network for a variety of protocols, such as the Microsoft Service Bus implementation and AMQP. Far more familiar to most, however, HTTP-based services are capable of serving as the bus because they are built on the four tenets of a service-oriented architecture (SOA), where:
 Boundaries are explicit. Operations are called over well-defined boundaries, passing explicitly defined messages.
 Services are autonomous. Each service is maintained, developed, deployed, and versioned autonomously.
 Services share schemas and contracts, not class. Services share contracts and schemas to communicate.
 Compatibility is based upon policy. Policy in this case means the definition of transport, protocol, security, etc.
By contrast, the idea of the IP network as bus embraces HTTP-based services on the Web for passing messages explicitly, mostly over the baseline application contracts and negotiated payloads specified by HTTP. Services are then understood as units of management, deployment, and versioning, an understanding that is then codified as platform-as-a-service offerings. As utilities consider extending messaging to the plethora of devices, sensors, meters, and so on at scale, the Windows Azure Service Bus can provide the scalable application gateway that establishes communication sessions and enforces security and authorization. Because Windows Azure is a federation of autonomous services encompassing the respective management, deployment, versioning, and communication protocol semantics, it does not use (or need) an ESB to achieve true internet scale.45
5.6.4 Service Bus Process Integration
As shown in the figures below, the Microsoft .NET Service Bus is a part of the Azure Services Platform. It provides a secure, standards-based messaging fabric to securely connect applications across the Internet.

45 Utopia ESB by Clemens Vasters, MSDN Blogs, January 15, 2013.


The reference architecture recognizes that the emergence of new players in the smart energy ecosystem, new devices and new business models will lead to implementations that employ Internet-based application message exchange as an effective integration solution. The Microsoft .NET Service Bus relies on the .NET Access Control Service for controlling access to solutions through a claims-based security model. The naming system on the service bus allows endpoints to be uniquely identified using host independent, hierarchical URIs. The service bus registry is used for publication and discovery of service endpoint references within a solution.

Figure 79: Microsoft .NET Service Bus Naming System

The primary programming model on the service bus is the Windows Communication Foundation (WCF). Connections are created using TCP or HTTP. The messaging fabric within Service Bus supports standard protocols including those defined by Web service specifications and REST, as well as access from non-.NET platforms.
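A hedged sketch of that programming model follows, using the WCF relay support in the Microsoft.ServiceBus library of this era; the service namespace, issuer credentials, and echo contract are hypothetical placeholders, and exact types should be verified against the installed SDK.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelayHost
{
    static void Main()
    {
        // Host-independent, hierarchical URI naming the endpoint on the bus.
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "contoso-utility", "echo");      // hypothetical namespace/path

        var host = new ServiceHost(typeof(EchoService));

        // NetTcpRelayBinding relays TCP connections through the service bus.
        var endpoint = host.AddServiceEndpoint(
            typeof(IEchoContract), new NetTcpRelayBinding(), address);

        // Credentials are checked by the Access Control Service.
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                "owner", "<issuer-key>")           // placeholder credentials
        });

        host.Open();
        Console.WriteLine("Listening on {0}. Press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}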


Figure 80 - Secure publish/subscribe messaging provided by the Service Bus, using both unicast and multicast protocols

Figure 81 shows that connections between processes can be either relayed by the service bus, or direct between the processes. A hybrid connection model lets this be negotiated once the initial connection is made through the service bus.


The Microsoft .NET Workflow Service is another one of the core service offerings found within the Azure .NET Services. The .NET Workflow Service supports cloud-based workflows that model service interactions through the .NET Service Bus and HTTP messaging. This service extends Windows Workflow Foundation (WF) to the cloud.

5.7 Databases, Data Warehouses and Data Management

Achieving the promise of smart grid, smart meter, and smart ecosystem data generation has been difficult for utilities because:
 The systems have only solution-specific data stores containing subsets of enterprise data.
 None of the systems they use were designed to aggregate data from the operations, financial, and customer information systems into one location, an accomplishment that would enable building holistic cubes for high-performance analytics.
 In many cases, utilities choose to incrementally build their own databases for smart grid/smart meter data in parallel with getting their operating processes up and running.

Microsoft is addressing these challenges with SQL Server, the SQL Server Parallel Data Warehouse, the new GeoFlow add-in to Excel, and partner offerings using SQL Server.
5.7.1 Microsoft SQL Server
Microsoft SQL Server is a database management and analysis system that is well suited to the database needs within the smart energy ecosystem, which may be implemented as data warehouses, data marts, production databases, and operational data stores. Utilities benefit from the stability of SQL Server's symmetric multi-processor architecture, the most common architecture for data warehouses under 50 terabytes. Those utilities that have reached a more mature level of data warehouse capability will appreciate the benefits of a massively parallel processing (MPP) data warehouse. Parallel data warehousing (PDW) takes data warehousing to new levels by introducing an MPP capability to Microsoft SQL Server. In simple terms, PDW extends the data capacity of SQL Server by distributing the data and database interactions across multiple "nodes," which are separate and distinct instances of SQL Server. To the end user, however, the MPP system appears as one database. All of these capabilities are notable because the information management needs within the smart energy ecosystem will continue to grow in breadth, depth, granularity, and usage. Beyond those basic uses, Microsoft SQL Server also provides the foundation for the Microsoft business intelligence platform and serves as a key integration component.


Figure 82 - New services in SQL Server 2012

Within SQL Server 2012, the key components include:
 Relational database engine
 Management Studio
 Analysis Services
 xVelocity in-memory technology
 Integration Services
 Replication
 High availability
 Reporting Services
 Service Broker
 BI Semantic Model
 Master Data Services
 Data Quality Services

5.7.1.1 SQL Server Database Engine
SQL Server is built around a core SQL-compliant database engine that addresses the persistence, access, security, processing, and transactional needs of the wide range of applications that exist within the smart energy ecosystem. SQL Server is optimized for both online transactional processing (OLTP) and online analytical processing (OLAP).
5.7.1.2 SQL Server Management Studio
The SQL Server Management Studio provides the capabilities needed to create, edit, and manage database objects in a graphical environment.
5.7.1.3 SQL Analysis Services
The analysis services within SQL Server directly address the specific needs of multidimensional data and data mining. These services provide the ability to design, create, and manage multidimensional structures that contain information aggregated from multiple data sources, as needed primarily for the implementation of data warehouses and data marts. Also provided are the features and tools needed for data mining, such as industry-standard data mining algorithms, support for data mining models, and support for complex prediction queries.
5.7.1.4 SQL xVelocity
Performance has been radically improved in SQL Server 2012 with the introduction of the xVelocity in-memory technologies. The xVelocity columnstore index optimizes data warehouse queries, and the xVelocity in-memory analytics engine within SQL Server Analysis Services significantly accelerates time to insight. Upcoming releases will further improve productivity via the Hekaton OLTP engine built directly into SQL Server, where improving performance will consist of simply identifying, via configuration, which key tables to make memory resident.
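As a concrete illustration of the xVelocity columnstore index described in 5.7.1.4, the hedged C# sketch below issues the SQL Server 2012 DDL against a hypothetical meter-readings fact table; the table, columns, and connection string are illustrative only.

using System.Data.SqlClient;

class ColumnstoreIndexExample
{
    static void Main()
    {
        // SQL Server 2012 DDL for an xVelocity (nonclustered columnstore)
        // index over a hypothetical meter-readings fact table.
        const string ddl =
            "CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactMeterReadings_CS " +
            "ON dbo.FactMeterReadings (MeterId, ReadingTimeUtc, KwhDelivered);";

        // Hypothetical connection string to a utility data warehouse.
        using (var conn = new SqlConnection(
            "Server=.;Database=UtilityDW;Integrated Security=true"))
        using (var cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Note that in SQL Server 2012 a table is read-only while a nonclustered columnstore index exists on it, which fits the load-and-rebuild cadence of data warehouse fact tables.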


5.7.1.5 SQL Server Integration Services
SQL Server Integration Services (SSIS) is a platform for building enterprise-level data integration and data transformation solutions. These services provide a rich and robust set of capabilities needed for the extract, transform, and load (ETL) integration patterns described in SERA as used to populate a data warehouse.
5.7.1.6 SQL Server Replication
The replication mechanisms provided by SQL Server address key needs related to availability and scalability. Within the smart energy ecosystem, replication is typically employed for purposes of availability, disaster recovery, and scalability. For availability, data can be replicated from a master instance to backup instances located remotely at sites serving as backups for network operations centers. In the case of scalability, data from a SQL Server instance (e.g., a data warehouse or data mart) can be replicated to a number of instances so that query workload can be load balanced across multiple instances.
5.7.1.7 SQL Server High Availability
SQL Server 2012 enhances the high availability and disaster recovery options with AlwaysOn Failover Clusters and AlwaysOn Availability Groups. SQL Server 2012 also enhances availability via SQL Data Sync, which enables synchronization of enterprise databases to the cloud, as well as cloud-to-cloud database synchronization.
5.7.1.8 SQL Reporting Services
The reporting services in SQL Server provide a wide range of tools and services for the creation, deployment, and management of reports.
5.7.1.9 SQL Service Broker
SQL Server Service Broker provides native support for messaging and queuing applications in the SQL Server Database Engine. This makes it easier for developers to create sophisticated applications that use the Database Engine components to communicate between disparate databases. Developers can use Service Broker to easily build distributed and reliable applications.
5.7.1.10 SQL BI Semantic Model
The BI Semantic Model is one model for all end-user experiences: reporting, analytics, scorecards, dashboards, and custom applications. All client tools in the Microsoft BI stack, including Excel, PowerPivot, SharePoint Insights, and Reporting Services (including Crescent), operate on this model. BI professionals can create the model in Visual Studio and deploy it to an Analysis Services server.
5.7.1.11 Master Data Services
SQL Server Master Data Services provides a central data hub that ensures the integrity of information and the consistency of data across different applications.
5.7.1.12 Data Quality Services
SQL Server Data Quality Services (DQS) is a knowledge-driven data quality product. DQS enables you to build a knowledge base and use it to perform a variety of critical data quality tasks, including correction, enrichment, standardization, and de-duplication of your data. DQS enables you to perform data cleansing by using cloud-based reference data services provided by reference data providers. DQS also provides profiling that is integrated into its data quality tasks, enabling you to analyze the integrity of your data.
5.7.2 Parallel Data Warehouse
The SQL Server 2012 Parallel Data Warehouse (PDW) is an enterprise-class storage appliance built upon the massively parallel processing architecture of SQL Server 2012. As a result, multiple processors and storage units work in parallel via multiple SQL Servers working together to resolve queries. MPP appliances provide a scalability option beyond SMP deployments, which at some point run out of headroom on a single large box. The Parallel Data Warehouse is a core component of the Microsoft utilities Big Data strategy, as can be seen in the following diagram:


Figure 83: The PDW represents the aggregate store for data warehousing and operational data stores.

The PDW represents the aggregate store for data warehousing and the operational data stores that drive significant scale, such as smart meter data or trading data. The PDW can scale from a few terabytes up to 5 petabytes of utility company and customer data, in rack-based increments from a single rack to sets of racks, depending upon the utility's scale-out requirements:


Figure 84: The PDW configuration

In its role as a core building block of the SERA data aggregation strategy, the SQL Server 2012 Parallel Data Warehouse brings several key capabilities to enable the important BI analytics, visualization, and reporting. PolyBase provides a solution complementary to the HDP and HDInsight platforms for the unification of structured and unstructured data. PolyBase allows SQL Server PDW users to execute queries against data stored in the Hadoop Distributed File System (HDFS) as well as external tables, and to federate that data with structured SQL Server data. This allows direct and fully parallelized access to HDFS data to support joins with other PDW data on the fly. As utilities learn more about the types of analytics that can be created by leveraging social and Hadoop data, PolyBase will be a significant capability.


5.7.3 Microsoft GeoFlow Add-in to Excel
Microsoft GeoFlow is an Excel add-in for geospatial data overlay, visualization, and analysis that is closely integrated with PowerPivot. GeoFlow enables the exploration of geospatial data and can record the exploration steps for video replay. While GeoFlow is still in beta, there are immediate applications to smart meter customer usage data represented in Microsoft technology.

Figure 85 shows GeoFlow graphical depictions

5.7.4 ADRM Software with SQL Server
To help overcome the data management challenges and create holistic enterprise and industry data models, Microsoft has worked with our partner ADRM Software in their development of a formalized, structured, and comprehensive utility data model. The result is a large-scale, utility-industry-specific information roadmap that consists of integrated sets of templates and models that combine industry best practices and business-friendly definitions.


Microsoft partner ADRM Software has developed this full-fledged utility industry data model as a holistic, enterprise-wide information model that informs the enterprise mapping process for the utility's third-party systems, like those for meter data management, distribution management, and asset management. It brings them into ADRM's enterprise-wide utility data model so that analysis can be performed without any debilitating questions about where the analysis will be done, the integrity of the data, hindrances from data aggregation, or what the target aggregation storage platform should be. The ADRM Software Utility Data Model was built using an industry modeling tool, but it uses the SQL Server database for its scalable, stable, mature, and highly flexible data warehousing, with advanced high availability, high performance, maximum virtualization, and rapid data discovery with scalable corporate reporting and analytics. The ADRM Software Utility Industry Data Model seeks to logically structure data in a standard, consistent, predictable manner to enable its management for uses among specific utility business departments working to achieve their operating goals. The ADRM model provides utility-specific subset models to organize and structure data created in different business areas, like accounting and financial reporting, geography, network, service locations, and tariffs. (See the more complete list in Figure 86, next page.) The goal, of course, is to connect the data throughout those business departments using various analytics and derive operational understanding in less time, and with more efficiency, than the utility could on its own or by using other vendor solutions. The key benefit becomes the holistic integrity of the entire system, designed as it was around the utility data model. From the utility CIO's perspective, the ADRM Software Utility Data Model replaces the technologists' penchant for building from scratch, the most expensive and time-consuming option available. The CIO understands how the model reduces both operational risk and the risk of scale where the utility needs to grow beyond a model built without a holistic mindset. CIOs like the fact that they can use ERwin to define which subsets of the ADRM models they need now, while knowing they can progressively grow their use of the model as their needs grow without adding risk. Moreover, they can sleep well at night knowing that the overall utility data model was holistically vetted by others in the industry; no one rests well knowing that they are the guinea pig for big database experiments. As previously mentioned, the ADRM Software Utility Data Model is supported by the stable, mature, and highly flexible SQL Server database, with the same symmetric multi-processor (SMP) and massively parallel processing (MPP/PDW) scalability options described in section 5.7.1. What could be better for handling all the databases classified under the ADRM Software Utility Data Model?

Figure 86 depicts a small sample of the business area models available through the ADRM Enterprise Warehouse Utility Industry Data Model

By using a comprehensive data model for each business area, the utility can manage the high volumes of new data coming from its grid and meters to enable new services, gain new efficiencies, and meet regulators' demands.

5.8 Business Intelligence

Business intelligence (BI) is the use of technologies to help organizations make better decisions. These benefits are usually realized through consolidating, aggregating, and analyzing data. The goals of BI and SOA are sometimes seen as being in conflict, because the data needed for BI is often scattered between services and hidden behind contracts. As a result, the Microsoft BI solution takes a pragmatic approach to data access, leveraging many different alternatives for aggregating data, as well as easy solutions for getting data from a variety of applications and products, so that access to key performance data does not compromise the SOA.


Figure 87 depicts SQL Server and business intelligence.

Business intelligence is supported using the following Microsoft products:
 Microsoft Excel Services
 Microsoft SharePoint Server Business Intelligence, which provides functionality needed for performance management including scorecards, dashboards, management reporting, analytics, planning, budgeting, forecasting, and consolidation
 Microsoft SQL Server DBMS
 Microsoft SQL Server Reporting Services
 Microsoft SQL Server Analysis Services (SSAS), with data mining and data warehousing capabilities
 Microsoft SQL Server Integration Services (SSIS), which provides ETL, aggregation, and calculation capabilities
 Microsoft BizTalk Server


5.8.1 Balancing Enterprise Data Warehouses and LOB Needs
Utilizing the phases of the BI evolutionary process as a backdrop, Microsoft Information Technology (MSIT) defined a set of three solution patterns which provide a balance between the need to collect and integrate data for the enterprise data warehouse (EDW) and the need for the lines of business (LOBs) to have immediate, tactical solutions. A pattern offers well-established solutions to software engineering problems by capturing essential elements of architecture and depicting those elements in a way that enables categorization of the components. MSIT uses the three patterns, which align to the phases of the BI evolutionary process (with Pattern 3 being the first phase), as a framework to classify how and where a solution fits within the architecture based on the business intelligence (BI) maturity level of the business. These patterns are predicated on an overall EDW architecture which utilizes a common infrastructure, an integrated central data store, and a business-driven distribution (data mart) model:
 Pattern 3: Point Solutions. These are stand-alone, point solutions, either inherited as legacies or developed for business areas in which there may be limited or no strategic value for integrating within the EDW. They are characterized by uncommon data models and data sources supported by IT, and may utilize EDW infrastructure such as hardware, storage, and process (ETL engine). Pattern 3 provides the bridge between a legitimate independent data warehouse and the EDW.
 Pattern 2: On Infrastructure. These solutions benefit from fully utilizing the EDW infrastructure, common data sources, and the integrated data within the central data store. A key assertion for Pattern 2 solutions is that they may utilize data directly from the staging area (acquisition) or the central data store. This is an intermediate benefit in that it allows for the quick acquisition and use of data without waiting for the eventual integration or collocation of that data in the central data store. In addition, it provides the ability to define and implement LOB-specific data models. Although such collocated data may be redundant and in conflict with the enterprise-level model, it provides enormous advantages in flexibility to the business and in acquiring and reusing data that other solutions and businesses may find useful. The sourcing and common storage of data from across the enterprise is a critical first step in ensuring that LOBs get the data that they need, thus providing tactical advantage while providing better understanding of existing data assets, including associated problems with sources, definitions, gaps, and data management quality and control.
 Pattern 1: On Strategy. These are fully integrated solutions that utilize a common, subject-oriented data model and conformed domains and dimensions. The data is tightly governed and adheres to the data management policies of the enterprise.


5.9 Microsoft Big Data Strategy

Microsoft's comprehensive Big Data strategy delivers:
 A modern data management layer that supports all data types: structured, semi-structured, and unstructured, at rest or in motion.
 An enrichment layer that enhances your data through discovery, by combining it with the world's data, and by refining it with advanced analytics.
 An insights layer that provides insights to all users through familiar tools like Microsoft Office.

The Microsoft Big Data strategy is comprised of:
 Storage, including:
o SQL Server 2012 Symmetric Multi-Processing (SMP)
o Massively Parallel Processing (MPP)
o SQL Database running as an Azure service
 Analysis, comprised of:
o StreamInsight
o HDInsight
o The complete Microsoft BI stack
 Presentation via SharePoint and Office

Microsoft provides an enterprise-class Big Data platform for on-premise deployment: HDInsight is Microsoft's new Hadoop distribution built on the Hortonworks Data Platform (HDP). This Hadoop-based distribution on Windows Server can complement and extend an enterprise data warehouse with Hadoop connectors for SQL Server. Microsoft also provides an elastic Big Data platform in the cloud with Hadoop-based services on the Windows Azure platform. A number of consistent capabilities are reflected in the Hadoop implementation. For example:
o Active Directory can be used to secure the Hadoop cluster.
o A Hive add-in for Excel allows analysts to gain insight from Hadoop data.
o SQL Server Analysis Services, PowerPivot, and Power View can use a Hive ODBC connector to analyze unstructured Hadoop data.
o A Sqoop-based SQL Server Connector for Apache Hadoop enables data transfer between SQL Server and Hadoop. This connector enables import of data from SQL Server into HDFS, as well as import of the results of SQL Server executed queries into HDFS. It also supports transfer of HDFS files and Hive tables to SQL Server.


The following diagram reflects Microsofts holistic Big Data analytics and BI Reporting architecture:

Figure 88 depicts Microsoft's holistic Big Data analytics and BI reporting infrastructure.

5.9.1 Complex Event Processing
Microsoft StreamInsight is a complex event processing solution. StreamInsight provides on-premise analysis of real-time streaming data, such as analyzing a real-time stream of meter outage and event data to detect key grid events such as voltage droop or actual customer outage. StreamInsight can also be used to analyze streaming data through the cloud, with event processing in the cloud using Windows Azure SQL StreamInsight. This complex event processing (CEP) solution can be implemented and configured using a variety of Microsoft and third-party products. The Microsoft CEP platform is built on SQL Server 2012 and allows software developers to create complex and innovative CEP solutions along two scenarios:
1. Building packaged event-driven applications for low-latency processing
2. Developing custom event-driven applications for business and the Web with high-throughput, low-latency needs
SERA recognizes the need for a high-performance event-handling service at the edge of the smart energy ecosystem computing environment.


Figure 89 offers a diagram of the Microsoft StreamInsight CEP engine:

Figure 89 illustrates the Microsoft CEP Platform.

StreamInsight has demonstrated performance exceeding 100K events/second. This enables the following important capabilities for the reference architecture:
 Unpack message streams and identify state changes over time
 Process real-time events
 Analyze event streams and create composite events
 Filter event streams and route events
 Invoke Web services and trigger business processes
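As a hedged illustration of the kind of standing query StreamInsight supports, the C# sketch below counts low-voltage meter events in five-second tumbling windows, following the LINQ style of the StreamInsight samples of this era; the event type, voltage threshold, and window size are hypothetical, and signatures should be checked against the installed StreamInsight version.

using System;
using System.Linq;
using Microsoft.ComplexEventProcessing;
using Microsoft.ComplexEventProcessing.Linq;

// Hypothetical payload for a meter voltage event.
public class MeterEvent
{
    public string MeterId { get; set; }
    public double Voltage { get; set; }
}

class VoltageDroopQuery
{
    // Given an input stream of meter events, count events below a
    // (hypothetical) droop threshold in 5-second tumbling windows.
    public static CepStream<long> DetectDroop(CepStream<MeterEvent> input)
    {
        return from win in input
                   .Where(e => e.Voltage < 114.0)
                   .TumblingWindow(TimeSpan.FromSeconds(5),
                                   HoppingWindowOutputPolicy.ClipToWindowEnd)
               select win.Count();
    }
}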


5.10 Mobility

Mobile devices are becoming increasingly important to utilities, service providers, and customers as they are being used for an increasingly wide range of purposes:
 Mobile technologies help reduce the time it takes to perform time-intensive and repetitive tasks, like equipment installation and repair, crew management and redeployment, meter reading, outage identification, and repair.
 Mobile solutions enable collaboration through the sharing of data that has historically been unavailable in the field, such as drawings, schematics, blueprints, and equipment repair records.
 The availability of solutions to help field-worker collaboration with improved information transfer will ease the impacts of the aging workforce phenomenon and also help to significantly reduce time to repair, thereby reducing customer downtime and improving customer satisfaction.

Microsoft provides technologies that significantly accelerate deployment and management, whether the target platform is embedded, mobile, or PC:
 Microsoft provides the Windows Mobile platform for the development of mobile devices.
 Visual Studio and the Windows Mobile Software Development Kit (SDK) make it possible to develop both native C++ applications and C# or VB managed code applications for mobile devices.46 The managed languages in particular have extensive support for database access and for the .NET Compact Framework.
 The .NET Compact Framework is a subset of the .NET Framework, but it does afford a developer the ability to work in essentially the same development environment and tools.
 Applications that have been designed to use a relational data store can leverage the Microsoft SQL Server CE compact relational database that runs on smart devices, as sketched below.

46 A note on terminology: Microsoft now calls devices with integrated phone and touch screen Windows Mobile Professional devices, and devices without a touch screen Windows Mobile Standard devices. Developers should ensure they have the correct SDK for their target mobility application.
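A hedged sketch of that local storage pattern follows, using the System.Data.SqlServerCe provider; the database path, table, and columns are hypothetical.

using System;
using System.Data.SqlServerCe;

class CompactStore
{
    static void Main()
    {
        // Local .sdf database on the device (hypothetical path).
        const string connStr = @"Data Source=\Storage\readings.sdf";

        using (var conn = new SqlCeConnection(connStr))
        {
            conn.Open();

            // Insert a captured meter reading into a local table
            // for later synchronization with the back office.
            using (var cmd = new SqlCeCommand(
                "INSERT INTO Readings (MeterId, ReadAtUtc, Kwh) VALUES (@m, @t, @k)",
                conn))
            {
                cmd.Parameters.AddWithValue("@m", "MTR-001");
                cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
                cmd.Parameters.AddWithValue("@k", 12.7);
                cmd.ExecuteNonQuery();
            }
        }
    }
}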


5.11 Microsoft Security Stack: Management and Security

This section describes:
 Microsoft products that support the management of Microsoft and third-party and partner products and solutions
 Microsoft products, services, and tools that can be used to establish a holistic security program

There are many layers to solution product management and holistic security, so each must be properly addressed using Microsoft products for a comprehensive security and management program. The combined products work together toward achieving a vision that Microsoft terms End-to-End Trust, which enables a comprehensive approach to the entire scope of security for the smart energy ecosystem when coupled with intrusion remediation and recovery. The following chart shows how SERA proposes a holistic security program approach including a combination of comprehensive tools that enable the set-up and management of End-to-End Trust, as well as tools and strategies for breach remediation and mitigation in the event that an advanced persistent threat is successful. Each component in the chart below is discussed in the sections that follow.

Figure 90 shows how SERA proposes a holistic security program approach including a combination of comprehensive tools that enable the set-up and management of End-to-End Trust, as well as tools and strategies for breach remediation and mitigation in the event that an advanced persistent threat is successful.

For those utilities that are moving toward a cloud-based computing paradigm, note in the chart above the Windows Azure cloud management and security products included in various layers as appropriate. These tools complement, and in most cases are integrated into, a consistent offering across on-premise, private cloud, hybrid architecture, and Azure cloud. From overall systems management, like Microsoft System Center monitoring and managing computers, applications, and automation in all these environments, to Active Directory, which runs in all these environments, the Microsoft tooling is consistent.

5.11.1 Systems Management

As businesses grow, so do their IT requirements. In many companies, it's tough to find a facet of the business that doesn't depend on IT. As dependence on IT to run the business grows, it becomes vitally important to efficiently manage and safeguard IT and data assets. Systems management solutions, such as service desk management, single sign-on authentication, and patch management, can help keep systems up and running, and maximize IT and employee productivity. They can also help your IT team efficiently roll out new software solutions or upgrade existing ones. In a nutshell, effective systems management solutions help IT organizations move beyond fire-drill mode to provide the business with proactive guidance and support. The following systems management solutions also help companies protect against the fallout from downtime and threats, whether caused by system malfunctions, lost or stolen mobile devices, network sabotage, power outages, security breaches, identity theft, human error, or natural and man-made disasters. Should any of these events occur, they can result in lasting financial loss, brand damage, legal liabilities, and other extremely unpleasant consequences.
5.11.1.1 System Center
Microsoft System Center is a comprehensive tool and building block for enterprise security and management. System Center:
 Provides a single management environment for all Microsoft technology, partner solutions, and competitor products as well.
 Complements Windows Server and Hyper-V as the core of the Microsoft private cloud infrastructure offering.
 Captures and aggregates knowledge about the infrastructure, policies, processes, and best practices so that IT professionals can optimize IT structures to reduce costs, improve application availability, and enhance service delivery.
 Manages everything from images, which are built and downloaded to secure substation automation servers, to VMs in private and public clouds.
 Provides a clear view of on-premise resources, as well as cloud resources assigned to a utility.
 Automates a number of IT tasks, significantly reducing the level of effort required to manage a smart energy computing environment.


5.11.1.2 System Center for the Smart Energy Ecosystem
In the smart energy ecosystem, Microsoft System Center can take on many utility-specific tasks:
 As devices scale out, System Center can address the demands for more active configuration management.
o For example, implementing MOM (Microsoft Operations Manager) packs or equivalent device status services can enable a comprehensive view of all the utility assets in a single tool. The captured asset information provides a foundation for insights, driving condition-based monitoring, and for dynamic asset configuration management.
 With System Center 2012, you get the most cost-effective and flexible platform for managing traditional datacenters, private and public clouds, and client computers and devices.
 System Center 2012 is the only unified management platform on which utilities can manage multiple hypervisors, physical resources, and applications in a single offering, versus multiple fragmented point solutions delivered by competitors.
 System Center 2012 enables integration of a wide range of technologies into a coherent private cloud by managing different hypervisors centrally from a single pane of glass, with support for Windows Server Hyper-V, VMware vSphere, and Citrix XenServer; monitoring Windows Server, Sun Solaris, and various Linux and UNIX distributions; and integrating toolsets from HP, CA, BMC, EMC, and VMware into automated workflows.

5.11.1.3 General System Center Capabilities and Components
Microsoft System Center 2012 provides an integrated management platform with a robust set of capabilities. Microsoft System Center 2012 offers unique application management capabilities that can empower you to deliver agile, predictable application services to your business counterparts. System Center 2012 helps you simplify and standardize your datacenter with flexible service delivery and automation. Using the Service Manager and Orchestrator components of System Center 2012, you can automate core organizational process workflows like incident management, problem management, change management, and release management. Microsoft System Center 2012 provides a common management toolset to help you configure, provision, monitor, and operate your IT infrastructure.


Figure 91 - If your infrastructure is like that of most organizations, you have physical and virtual resources running heterogeneous operating systems. The integrated physical, virtual, private, and public cloud management capabilities in System Center 2012 can help you ensure efficient IT management and optimized ROI of those resources.

5.11.1.3.1 App Controller
App Controller provides a unified console that helps manage public clouds and private clouds, as well as cloud-based virtual machines and services. App Controller can deploy, monitor, and delete services, as well as grant access, upgrade, and scale out deployed services.
5.11.1.3.2 Data Protection Manager
Data Protection Manager (DPM) is a backup and recovery solution for Microsoft workloads. DPM provides out-of-the-box protection for files and folders, Exchange Server, SQL Server, Virtual Machine Manager, SharePoint, Hyper-V, and client computers. For large-scale deployments, DPM can also monitor backups through a central console or remotely.
5.11.1.3.3 Operations Manager
Operations Manager provides infrastructure monitoring that is flexible and cost-effective, helps ensure the predictable performance and availability of vital applications, and offers comprehensive monitoring for the datacenter and cloud, both private and public. Using Operations Manager agents and management packs, System Center can monitor virtually any application, process, device, gateway, or service in the smart energy ecosystem, enabling a comprehensive view in one location.
5.11.1.3.4 Configuration Manager
Configuration Manager provides a comprehensive solution for change and configuration management for the Microsoft platform. It allows utilities to deploy operating systems, software applications, and software updates, and to monitor and remediate computers for compliance settings. With Configuration Manager, utilities can also monitor hardware and software inventory, and remotely administer computers. Related documentation includes:
 Site Administration for System Center 2012 Configuration Manager
 Migrating from Configuration Manager 2007 to System Center 2012 Configuration Manager
 Deploying Clients for System Center 2012 Configuration Manager
 Deploying Software and Operating Systems in System Center 2012 Configuration Manager
 Assets and Compliance in System Center 2012 Configuration Manager
 Security and Privacy for System Center 2012 Configuration Manager

5.11.1.3.5 Endpoint Protection
System Center integrates System Center Endpoint Protection to provide a security and anti-malware solution for the Microsoft platform. The Endpoint Protection module of System Center allows utilities to centrally configure and deploy Endpoint Protection clients, configure antimalware policies, and set up alerts in the event of malware installation.

5.11.1.3.6 Orchestrator
Orchestrator provides orchestration, integration, and automation of IT processes, enabling utilities to define and standardize best practices and improve operational efficiency. Orchestrator can be used for automation and orchestration of virtually any process, from updating server images to reloading the content of a substation automation server. Orchestrator integrates not only with all the other services of System Center, but also with third-party products and technologies via the integration packs below, as well as with other software via custom System Center runbooks.
 Active Directory Integration Pack for System Center 2012 - Orchestrator
 HP iLO and OA Integration Pack for System Center 2012 - Orchestrator
 HP Operations Manager Integration Pack for System Center 2012 - Orchestrator
 HP Service Manager Integration Pack for System Center 2012 - Orchestrator
 IBM Tivoli Netcool/OMNIbus Integration Pack for System Center 2012 - Orchestrator

 VMware vSphere Integration Pack for System Center 2012 - Orchestrator
 Integration Packs for System Center

5.11.1.3.7 Virtual Machine Manager
Virtual Machine Manager (VMM) is a management solution for the virtualized datacenter, enabling utilities to configure and manage virtualization hosts, networking, and storage resources in order to create and deploy virtual machines and services to private clouds.
5.11.1.4 System Center Regulatory Compliance Capabilities and Components
In addition to the System Center capabilities described above, Microsoft also provides an IT Compliance Management Guide that shows how to shift governance, risk, and compliance (GRC) efforts from people to technology. Its configuration guidance can be used to help efficiently address an organization's GRC objectives. This accelerator helps the utility better understand how an IT process framework can help implement GRC controls in Microsoft infrastructure. Microsoft also provides a case study on streamlining regulatory compliance, offering Microsoft Corporation itself as the example. Microsoft information technology (Microsoft IT, which manages all IT for the company) uses a holistic approach to address the ever-increasing complexity of regulatory compliance. This continually evolving system combines IT support for different regulatory frameworks into a single overarching process, and uses standardized tools to test similar controls. By combining tools and using a clearly defined, role-based accountability model, Microsoft IT streamlines business processes, reduces duplication of effort, and makes IT professionals more operationally efficient.
5.11.2 Windows Intune
Windows Intune is a cloud-based service that helps administrators monitor the status of device endpoints, updates requiring attention, and agents, policies, and software. In the utilities context, Windows Intune:
 Provides important capabilities for utilities that do not have large IT and security staffs to manage their devices and associated security.
 Provides the same enterprise-grade security, management, and anti-malware experience found in large utilities, via a cloud service.
 Provides an easy and effective way to reach a far broader set of devices in the expanded smart energy ecosystem.
 Provides malware protection and uses the same malware endpoint protection engine as Forefront Endpoint Protection.
 Can be used to monitor, manage, and deploy both Microsoft and third-party software, including software detection and inventory cataloging, creation of packages, compression and deployment of software services, as well as management and reporting for licensing.
 Provides alerts and manages policies.
 Enables end-user remote assistance.

Using either Forefront Endpoint Protection or Windows Intune is an important element of protection and monitoring for all network-connected computing devices. SERA supports an End-to-End Trust model and rapid recovery should an intrusion occur. These tools not only help protect against intrusion, they enable scalable deployment of updated software and patches.
5.11.3 Microsoft Consulting Services
Microsoft Consulting Services (MCS) distills the expertise and experience of Microsoft's product development and IT organizations, and offers them as the Microsoft Solutions Framework (MSF), a series of principles, models, and best practices that provide a highly disciplined, team-based approach to planning and executing technology projects. Through MCS, Microsoft has developed strategic relationships with 15 of the world's largest global technology solution providers, including Arthur Andersen, Cambridge Technology Partners, Compaq, Ernst & Young, Hewlett Packard, ICL, Unisys, Wang Global, and others. Taken together, these companies form an army of consultants and service representatives nearly 160,000 strong. In addition, Microsoft has created a network of more than 18,500 Microsoft Certified Solution Providers: service companies of all sizes that meet exacting criteria for product knowledge and technical expertise. In terms of providing security, MCS provides remediation services for recovery from an intrusion or event.
5.11.4 Other Microsoft Security Tools
Microsoft Security Essentials is a free, comprehensive malware protection solution for PCs and devices running a genuine Windows OS. In the utilities context, SERA suggests that each utility should seek to secure every possible device comprising the smart energy ecosystem so as to reduce possible attack surfaces. For example, the devices that workers use at home to look at utility conditions or to respond to utility alerts need to be secured just like the desktops on the corporate network. Typically, utility enterprise devices will be protected by System Center and Forefront.


However, with portals, CoIT devices, and personal devices in the chain from the utility to consumer demand response or electric vehicle charging stations, Microsoft Security Essentials can complement a comprehensive program. Microsoft Security Essentials provides protection against viruses, trojans, worms, and spyware. Microsoft has also established a relationship with Facebook to provide Microsoft Security Essentials to Facebook Windows users. This is an example of how a utility using social media with its consumers might feel more confident about security status.

5.12 End-to-End Trust

End-to-End Trust is Microsoft's vision for a safer, more trusted computing environment, a vision that should extend to the network and all assets at the heart of the smart energy ecosystem. Three primary elements create greater trust:
1. Creation of a trusted stack where security is rooted in hardware and where each element in the stack (hardware, software, data, and people) can be authorized and authenticated in appropriate circumstances.
2. Managing claims relating to identity attributes, with the creation of a system that allows people to pass identity claims (sometimes a full name, perhaps, but at other times just an attribute such as proof of age or citizenship, or a secure token representing their claim). This system must also address the issues of authentication, authorization, access, and audit.
3. Good alignment of technological, social, political, and economic forces so that there is real progress towards a safer, more trusted computing environment.
The goal is to put users in control of their computing environments, increasing security and privacy, and preserving other cherished values such as anonymity and freedom of speech.
5.12.1 End-to-End Trust in SERA
SERA recommends the following practices for utility security implementation:
 Identify and prioritize security initiatives and activities based upon risk (see 3.8.6, Preparing with a Holistic Security Strategy)
 Encrypt data at rest
 Encrypt and sign communications (see the sketch following this list)
 Sign all service implementations
 Encrypt and wrap data in motion with transport layer security (TLS)
 Abstract where possible, and minimize storage and mapping of, personally identifiable information (PII)
 Authenticate and authorize at all major interfaces
 Leverage the SDL for all software and services
 Use application whitelisting and sign all executables so only known applications can run
 Decommission legacy systems which are not securable
 Fix all known vulnerabilities

 Eliminate excessive privilege
 Patch all systems, networking communications, mobile, and infrastructure devices
 Establish a comprehensive, risk-prioritized security protection program including remediation and recovery
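As a hedged illustration of the signing practices above, the C# sketch below signs a payload with the private key of an X.509 certificate (for example, one issued through the AD CS infrastructure described in 5.12.3) and verifies it with the public key; the certificate file, password, and payload are hypothetical, and SHA-1 is shown only for brevity.

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class SigningExample
{
    static void Main()
    {
        // Load a certificate with a private key (hypothetical file/password).
        var cert = new X509Certificate2("device-cert.pfx", "p@ssw0rd");

        byte[] payload = Encoding.UTF8.GetBytes("OpenBreaker:FDR-1041");

        // Sign with the private key. Stronger digests than SHA-1 are
        // preferred where the platform supports them.
        var rsaPrivate = (RSACryptoServiceProvider)cert.PrivateKey;
        byte[] signature = rsaPrivate.SignData(payload, "SHA1");

        // Verify with the public key, as a receiving system would.
        var rsaPublic = (RSACryptoServiceProvider)cert.PublicKey.Key;
        bool valid = rsaPublic.VerifyData(payload, "SHA1", signature);

        Console.WriteLine("Signature valid: {0}", valid);
    }
}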

5.12.2 Rights Management Services
Microsoft Active Directory Rights Management Services (AD RMS) in Windows Server 2008 and the update to AD RMS in Windows Server 2012 helps safeguard digital information from unauthorized use, both online and offline, inside and outside of the firewall. In conjunction with AD RMS-enabled applications, AD RMS augments an organization's security strategy by protecting information through persistent usage policies. These policies remain with the information, whether documents, spreadsheets, presentations, or e-mail messages, no matter where it goes or how it is stored, to:
 Eliminate unauthorized viewing and distribution of sensitive corporate data.
 Improve compliance with internal and external regulations by lowering the risk of data leaks.
 Reduce the risk of intellectual property loss, which can result in a compromised ability to compete.

5.12.3 Certificate Services
Microsoft Active Directory Certificate Services (AD CS) represents enhanced capability in Windows Server 2012. AD CS is the server role that enables utilities to build a public key infrastructure (PKI) and provide public key cryptography, digital certificates, and digital signature capabilities for the organization. AD CS provides customizable services for issuing and managing digital certificates used in software security systems that employ public key technologies. The digital certificates that AD CS provides can be used to encrypt and digitally sign electronic documents and messages. These digital certificates can be used for authentication of computer, user, or device accounts on a network. Digital certificates are used to provide:
 Confidentiality through encryption
 Integrity through digital signatures
 Authentication by associating certificate keys with computer, user, or device accounts on a computer network
Utilities can use AD CS to enhance security by binding the identity of a person, device, or service to a corresponding private key.

Utilities can use AD CS to enhance security by binding the identity of a person, device, or service to a corresponding private key.

5.12.4 Domain Services

Microsoft Active Directory Domain Services (AD DS) in Windows Server 2008 and the update to AD DS in Windows Server 2012 enables utilities to create a scalable, secure, and manageable infrastructure for user and resource management, and provides support for directory-enabled applications such as Microsoft Exchange Server.

AD DS provides a distributed database that stores and manages information about network resources and application-specific data from directory-enabled applications. A server that is running AD DS is called a domain controller, and administrators can use AD DS to organize elements of a network, such as users, computers, and other devices, into a hierarchical containment structure. The hierarchical containment structure includes the Active Directory forest, domains in the forest, and organizational units (OUs) in each domain. Security is integrated with AD DS through logon authentication and access control to resources in the directory. With a single network logon, administrators can manage directory data and organization throughout their network, and authorized network users can use a single network logon to access resources anywhere in the network. Policy-based administration eases the management of even the most complex network.

5.12.5 Federation Services

Microsoft Active Directory Federation Services (AD FS) in Windows Server 2008 and the update to AD FS in Windows Server 2012 enables utilities to create a scalable, secure, and manageable infrastructure for user and resource management. The AD FS server role provides simplified, secured identity federation and Web single sign-on (SSO) capabilities. AD FS includes a Federation Service role service that enables browser-based Web SSO, a Federation Service Proxy role service to customize the client access experience and protect internal resources, and Web Agent role services used to provide federated users with access to internally hosted applications.

AD FS simplifies end-user access to systems and applications by using a claims-based access authorization mechanism to maintain application security. Utilities can deploy AD FS to:

- Provide employees or customers with seamless access to Web-based resources in any federation partner organization on the Internet without requiring them to log on more than once.
- Retain complete control over employee or customer identities without using other sign-on providers (Windows Live ID, Liberty Alliance, and others).
- Provide employees or customers with a Web-based SSO experience when they need remote access to internally hosted Web sites or services.
- Provide employees or customers with a Web-based SSO experience when they access cross-organizational Web sites or services from within the firewalls of the utility network.


5.12.6 Claims-Based Applications

Claims-based authentication provides an industry-standard security protocol to authenticate a user on a host computer. Claims-based authentication is a set of WS-* standards describing the use of a Security Assertion Markup Language (SAML) token in either passive mode (e.g., when WS-Federation is used with an application such as the Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online web applications) or active mode (e.g., when WS-Trust is used with Windows Communication Foundation (WCF) clients). This authentication works together with WCF to provide secure user authentication and a communication channel with a Microsoft application server such as Dynamics CRM. All Microsoft Dynamics CRM editions support claims-based authentication.

Claims-based authentication requires the availability of a security token service (STS) running on a server. An STS server can be based on Active Directory Federation Services (AD FS) V2, or on any platform that implements the STS protocol. Applications or services that rely on claims are considered claims-aware or claims-based applications. In the context of Windows Azure Access Control Service (ACS), a relying application is a web site, an application, or a service for which you want to implement federated authentication by using ACS. Claims-based (relying party) applications can be configured either manually, using the ACS Management Portal, or programmatically, using the ACS Management Service.


Figure 92: Federated identity leveraging AD FS claims-based security.

Cloud providers implement the claims service using enterprise protocols, and OpenID is provided for the consumer arena. This service allows a single federation agreement to span many services and reduces lock-in to any one cloud provider.

5.12.7 Security Token Service

The ACS Security Token Service (STS) is the set of endpoints that issue tokens to your relying party applications. In other words, the STS is the service that ACS uses to provide federated authentication to your web applications and services.
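Production federation exchanges SAML or SWT tokens over protocols such as WS-Trust and WS-Federation. The sketch below is a deliberately simplified, hypothetical claims token signed with a shared HMAC key; it shows only the division of labor between an STS that issues tokens and a relying party that validates them, not any real ACS or AD FS wire format:

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"relying-party-key"  # provisioned out of band in a real federation

def issue_token(subject: str, claims: dict, lifetime_s: int = 300) -> bytes:
    """Toy STS: serialize claims and sign them so a relying party can trust them."""
    body = json.dumps({"sub": subject, "exp": time.time() + lifetime_s, **claims}).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return base64.b64encode(body) + b"." + base64.b64encode(sig)

def validate_token(token: bytes) -> dict:
    """Relying party: check signature and expiry before honoring any claim."""
    body_b64, sig_b64 = token.split(b".")
    body = base64.b64decode(body_b64)
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("token signature invalid")
    claims = json.loads(body)
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("operator@utility.example", {"role": "dispatcher"})
print(validate_token(token)["role"])  # -> dispatcher
```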


Figure 93: The Access Control Service (ACS) Security Token Service is extremely powerful, especially for utilities considering how they will manage authentication of millions of customers and devices.

ACS supports a variety of protocols that allow it to be accessed from any web platform, including the .NET Framework, WCF, Silverlight, ASP.NET, Java, Python, Ruby, PHP, and Flash. ACS supports the following protocols:

- OAuth WRAP
- OAuth 2.0
- WS-Trust
- WS-Federation
- OpenID

ACS supports the following security token formats:

- SAML 1.1
- SAML 2.0
- Simple Web Token (SWT)

5.12.8 Lightweight Directory Services

Microsoft Active Directory Lightweight Directory Services (AD LDS) in Windows Server 2008 and the update to AD LDS in Windows Server 2012 is a Lightweight Directory Access Protocol (LDAP) directory service that provides flexible support for directory-enabled applications, without the dependencies and domain-related restrictions of Active Directory Domain Services (AD DS). You can run AD LDS on member servers or stand-alone servers. You can also run multiple instances of AD LDS, each with its own independently managed schema, on one server.

By using the Windows Server 2012 Active Directory Lightweight Directory Services (AD LDS) role, formerly known as Active Directory Application Mode (ADAM), utility administrators can provide directory services for directory-enabled applications without incurring the overhead of domains and forests and the requirement of a single schema throughout a forest.

5.12.9 Trusted Boot

The Microsoft Trusted Boot process consists of several components. Secure Boot in Windows 8 takes advantage of the Unified Extensible Firmware Interface (UEFI), the modern replacement for the PC BIOS. Windows 8 devices incorporate a certificate in the UEFI that ensures the correct, unmodified, properly signed version of the OS is booted. Secure Boot ensures the OS has not been modified by malware and that threats like Stuxnet or Flame have not hijacked the boot process via a rootkit. If for any reason the OS appears corrupted, the device is prevented from booting so that the network and other devices are not put at risk.

The second part of the Trusted Boot functionality shipping with Windows 8 is critical system file tamper recovery. The key system files, the OS kernel, drivers, and security software such as anti-malware are all inspected at boot time. If a modified version of one of these files is detected, the unmodified version is restored and started at boot time.

5.12.10 BitLocker

Windows BitLocker Drive Encryption (BitLocker) is a security feature in Windows 8, Windows Server 2012, Windows 7, and Windows Server 2008 R2 that can provide protection for the operating system on your computer and data stored on the operating system volume. In Windows Server 2012, BitLocker protection can be extended to volumes used for data storage as well. BitLocker performs two functions:

- Encrypts all data stored on the Windows operating system volume (and configured data volumes). This includes the Windows operating system, hibernation and paging files, applications, and data used by applications.
- Is configured by default to use a trusted platform module (TPM) to help ensure the integrity of early startup components (those used in the earlier stages of the startup process), and locks any BitLocker-protected volumes so that they remain protected even if the computer is tampered with when the operating system is not running.

5.12.11 Trusted Platform Module Management

Trusted Platform Module (TPM) Management in Windows Vista, Microsoft Windows Server 2008, and Windows 8 is used to administer computer TPM security hardware. A TPM is a microchip designed to provide basic security-related functions, primarily involving encryption keys. The architecture provides an infrastructure that allows Windows-based applications to use and share the TPM. Computers that incorporate a TPM can create cryptographic keys and encrypt them so that they can only be decrypted by the TPM, and software like Windows BitLocker Drive Encryption can use the keys to lock data until specific hardware or software conditions are met.

Assurances about the state of a system, which define its "trustworthiness," can be made before the keys are released for use. Because the TPM uses its own internal firmware and logic circuits for processing instructions, it does not rely upon the operating system and is not exposed to external software vulnerabilities.

5.12.12 AppLocker

Complementing BitLocker, which controls access to data on computer drives, Microsoft AppLocker restricts which applications can run on the device. AppLocker is an application control feature in Windows Server 2012, Windows Server 2008 R2, and Windows 8. In many organizations, information is the most valuable asset, and ensuring that only approved users have access to that information is imperative. Access control technologies such as Active Directory Rights Management Services (AD RMS) and access control lists (ACLs) help control what users are allowed to access. However, when a user runs a process, that process has the same level of access to data as the user. As a result, sensitive information can easily be deleted or transmitted out of the organization because a user knowingly or unknowingly ran malicious software. AppLocker can help mitigate these types of attacks by restricting the files that users or groups are allowed to run.

Software publishers are beginning to create more applications that can be installed by standard users (non-administrators). This type of software deployment can violate an organization's written security policy and circumvent traditional application deployment solutions that allow software to be installed only in controlled locations. By allowing administrators to create rules that allow or deny files to run, AppLocker helps administrators prevent such per-user applications from running. AppLocker can also use signed applications from manufacturers so that the enterprise can trust applications down to specific versions of each product. End users can also sign applications themselves and choose to trust only those they have approved, providing yet another layer of configuration management.

AppLocker is ideal for organizations that currently use Group Policy to manage their Windows-based computers. Because AppLocker is an additional Group Policy mechanism, administrators should have experience with Group Policy creation and deployment. Organizations that want control over which ActiveX controls are installed, or over per-user application installations, also find AppLocker useful. A hypothetical sketch of the underlying allow-list idea follows.
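AppLocker rules are authored and deployed through Group Policy rather than code; the following hypothetical sketch illustrates only the underlying allow-listing idea, checking a file's hash against approved digests before permitting execution. The file name and digest are invented for illustration:

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list; AppLocker itself supports publisher, path, and
# file-hash rules managed through Group Policy rather than a script like this.
APPROVED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest
}

def is_approved(executable: Path) -> bool:
    """Compare the file's SHA-256 digest against the approved set."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_SHA256

candidate = Path("field_tool.exe")  # hypothetical file name
if candidate.exists() and is_approved(candidate):
    print("hash rule matched; execution would be allowed")
else:
    print("not on the allow-list; execution would be denied")
```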

5.12.13 Comprehensive Access Security

Microsoft Forefront delivers a comprehensive family of end-to-end security solutions, both on-premises and in the cloud, to help protect users and enable secure access virtually anywhere. A key challenge in utilities is securing the environment and managing access across data, users, and systems.

Microsoft Forefront provides an integrated portfolio of protection, identity, and access products to address this challenge. New point solutions for security are multiplying both in number and complexity, and business security issues continue to grow:

- Integrating security products so they work well together and leverage each other
- Integrating security products into pre-existing IT infrastructure
- Managing and deploying security simply, pervasively, and without mistakes
- Managing security as a single solution instead of a collection of disparate products

Forefront provides a comprehensive family of highly effective security products, focused on the integration and management aspects of security, which helps to prevent misconfiguration, enables organizations to deploy security products more pervasively, and provides a unified view of the security state of utility networks. Microsoft is working to achieve the goal of business-ready security based on three fundamental tenets:

1. Integrate and extend across the enterprise
- Deep integration with the identity infrastructure and across the stack
- Support for heterogeneous environments
- On-premises and hosted solutions for seamless connectivity
- An identity and security platform based on open standards and protocols
2. Help protect everywhere, access anywhere
- Defense in depth across multiple layers to help protect endpoints, servers, and networks
- Secure identity-based access products that help connect the mobile workforce virtually anywhere
- Identity-aware protection that helps organizations secure information and enable policy-based access
3. Simplify the experience, manage compliance
- Centralized management of the environment and critical visibility into the state of the infrastructure
- Improved security and compliance through identity tracking and enforcement throughout the enterprise
- Policy management features and reporting to enable auditing and compliance

5.12.14 Identity Lifecycle Manager

Common identity is an important tool for ensuring that users have appropriate access to corporate information. Significant challenges can arise without an efficient method of establishing and maintaining a common identity across complex heterogeneous systems, not the least of which could be NERC CIP compliance. Issues might include high help-desk costs for password resets and smart card deployment, loss of productivity as users struggle to access the resources they need, and serious risk to the business due to noncompliance with internal and external regulations.

A clear element of utility regulations is the removal of departing users from all utility systems within a defined period. Microsoft Forefront Identity Manager (FIM) 2010 R2 helps utilities address these issues by providing self-service identity management for users, automated lifecycle management across heterogeneous platforms for administrators, and a rich policy framework for enforcing corporate security policies, including detailed audit capabilities.

FIM 2010 R2 integrates new functionality through the Microsoft BHOLD suite to provide role-based access control and to allow administrators to review access rights continually across the organization. The FIM 2010 R2 release also adds an improved self-service password reset experience, along with performance, diagnostic, and reporting improvements:

- Simplified identity lifecycle management through automated workflows and business rules, and easy integration with heterogeneous platforms.
- Consolidated, cross-platform identity through automated identity and group provisioning and management based on business policy.
- Built-in smart card management to centrally manage smart card provisioning, dramatically simplifying multi-factor authentication deployment.
- Easy extensibility via integration with the familiar Microsoft Visual Studio and .NET development environment, so administrators can extend capabilities when business needs change.
- Improved security and compliance through automated identity management for auditing, role-based access control, and deep role discovery.
- Role-based access administration to discover and map permissions to individual, assignable roles across multiple systems.
- Centrally enforced identity policy to automatically maintain consistency of identity information and application of user roles across enterprise identity systems.
- In-depth auditing and reporting on the activities and historical states of each event and each stage of a workflow, including when it took place and any associated approvals.

5.12.15 Network Policy and Access Services

Microsoft Network Policy and Access Services (NPAS) in Windows Server 2008 and the update to NPAS in Windows Server 2012 includes the specific role services of Network Policy Server (NPS), Health Registration Authority (HRA), and Host Credential Authorization Protocol (HCAP). Utilities can use the Network Policy and Access Services server role to deploy and configure Network Access Protection (NAP), secure wired and wireless access points, and RADIUS servers and proxies.

NAP builds on Internet Protocol security (IPsec)47 and is intended primarily for use with mobile computing devices; it helps administrators more effectively protect network assets by helping to enforce compliance with system health requirements. IT administrators can create customized health policies with NAP to validate computer security posture before allowing access or communication, automatically update compliant computers to enable ongoing compliance, and optionally confine noncompliant computers to a restricted network until they become compliant. In terms of the smart energy ecosystem, this capability can be applied to help ensure new grid-connected devices conform to the appropriate policies, or to restrict their access, thus adding to the overall ecosystem security.

NAP includes an application programming interface (API) set for developers and vendors to create complete solutions for health policy validation, network access limitation, and ongoing health compliance. To validate access to a network based on system health, NAP provides the following areas of functionality (a toy policy evaluation illustrating the pattern follows this list):

- Health policy validation determines whether computers are compliant with health policy requirements.
- Network access limitation limits access for noncompliant computers.
- Automatic remediation provides necessary updates to allow a noncompliant computer to become compliant.
- Ongoing compliance automatically updates compliant computers so that they adhere to ongoing changes in health policy requirements.
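NAP exposes its own API set for health validation; the sketch below is a hypothetical stand-in that shows only the policy pattern described in the list: evaluate a statement of health, then grant access, remediate, or quarantine. The field names and policy values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class StatementOfHealth:
    """Hypothetical health report a connecting device submits;
    real NAP clients send structured SoH data, not this class."""
    patch_level: int
    antimalware_running: bool
    firewall_enabled: bool

REQUIRED_PATCH_LEVEL = 42  # assumed policy value, for illustration only

def evaluate(soh: StatementOfHealth) -> str:
    """Map a statement of health onto the NAP outcomes described above."""
    if not (soh.antimalware_running and soh.firewall_enabled):
        return "quarantine"                  # confine to a restricted network
    if soh.patch_level < REQUIRED_PATCH_LEVEL:
        return "remediate"                   # push updates, then re-check
    return "grant"                           # compliant: full access

print(evaluate(StatementOfHealth(patch_level=41,
                                 antimalware_running=True,
                                 firewall_enabled=True)))  # -> remediate
```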

5.12.16 IPsec

Internet Protocol security (IPsec) is a framework of open standards for protecting communications over Internet Protocol (IP) networks through the use of cryptographic security services. IPsec is an overlay to an existing network, so it can be integrated into any network while allowing legacy traffic to flow, preserving investments in network infrastructure. IPsec supports:

- Network-level peer authentication
- Data origin authentication
- Data integrity
- Data confidentiality (encryption)
- Replay protection
47 Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a data stream. IPsec also includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiating the cryptographic keys to be used during the session. IPsec can be used to protect data flows between a pair of hosts (e.g., computer users or servers), between a pair of security gateways (e.g., routers or firewalls), or between a security gateway and a host. (Wikipedia.org)



The Microsoft implementation of IPsec is based on standards developed by the Internet Engineering Task Force (IETF) IPsec working group. IPsec is supported by:

- Microsoft Windows 8
- Microsoft Windows 7
- Windows Vista
- Windows Server 2012
- Windows Server 2008
- Windows Server 2003
- Windows XP
- Windows 2000

IPsec is integrated with Active Directory Domain Services (AD DS). IPsec policies can be assigned through Group Policy, which allows IPsec settings to be configured at the domain, site, organizational unit, or security group level.

5.12.17 Direct Access Connections

The proliferation of devices in the enterprise has led to the need for better organization and management of device network access, especially in utilities with device deployments ranging from corporate desktops to field service crews. VPNs have historically played a large role in connecting to the organization from outside the corporate network. Microsoft DirectAccess overcomes the limitations of VPNs by automatically establishing a bi-directional connection from client computers to the corporate network. DirectAccess is built on a foundation of proven, standards-based technologies: Internet Protocol security (IPsec) and Internet Protocol version 6 (IPv6).

DirectAccess uses IPsec to authenticate both the computer and the user, allowing IT to manage the computer before the user logs on. Optionally, IT can require a smart card for user access to the intranet. DirectAccess also leverages IPsec to provide encryption for communications across the Internet, using IPsec encryption methods such as the Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES). DirectAccess clients establish an IPsec tunnel for IPv6 traffic to the DirectAccess server, which acts as a gateway to the intranet. A DirectAccess client connects to the DirectAccess server across the public IPv4 Internet, and clients can connect even if they are behind a firewall.


5.12.18 Azure Cloud Security

Moving applications and data to the cloud can be powerful, but the transition involves a number of considerations. Movement to the cloud should be contemplated only if the risk can be properly assessed and classified. Mature enterprise identity management capability is crucial to success, and proper authentication and authorization are fundamental to cloud security. Whitelisting that permits only known access is important, rather than blacklisting, which might limit some but not all attackers. Segmentation, compartmentalization, and implementation of functional security zones are also key to success. Microsoft has implemented the Windows Azure Trust Center as the single location where all these issues and best practices are addressed.

5.12.18.1 Windows Azure Trust Center Security

Windows Azure runs in data centers managed and operated by Microsoft Global Foundation Services (GFS). These geographically dispersed data centers comply with key industry standards, such as ISO/IEC 27001:2005, for security and reliability. They are managed, monitored, and administered by Microsoft operations staff who have years of experience in delivering the world's largest online services with 24/7 continuity. In addition to data center, network, and personnel security practices, Windows Azure incorporates security practices at the application and platform layers to enhance security for application developers and service administrators. The Microsoft Windows Azure Trust Center has a number of valuable security resources, including:

1. Technical Overview of the Security Features in the Windows Azure Platform: a summary of some of the technical and organizational security measures for Windows Azure.
2. Windows Azure Security Overview: an in-depth paper providing a detailed discussion of security features and controls implemented in Windows Azure.
3. Security Best Practices for Developing Windows Azure Applications: a paper focused on recommended approaches for designing and developing secure applications for Windows Azure.

5.12.19 Secure Development Lifecycle

Software vendors, Microsoft included, must endeavor to address security threats or risk becoming the weak link targeted for attack. Microsoft works closely with its partners to identify and develop domain-specific solutions, a close working relationship that represents great value for customers by leveraging the Microsoft platform.

All software, including partner solutions, should embrace a Secure Development Lifecycle to meet the security and reliability expectations for the smart energy ecosystem. Security is a core requirement for software vendors, driven by market forces including the need to protect critical infrastructures and to build and preserve widespread trust in computing. All software vendors face the major challenge of creating more secure software that requires less updating through patches and less burdensome security management.

For the software industry, the key to meeting today's demand for improved security is to implement repeatable processes that reliably deliver measurably improved security. Software vendors must therefore transition to a more stringent software development process that focuses, to a greater extent, on security. Such a process is intended to minimize the number of security vulnerabilities in the design, coding, and documentation, and to detect and remove those vulnerabilities as early in the development lifecycle as possible. The need for such a process is greatest for enterprise and consumer software that is likely to be used:

- To process inputs received from the Internet
- To control critical systems likely to be attacked
- To process personally identifiable information

Microsoft's experience with making real-world software secure has led to a set of high-level principles for this process. The overall security perspective reflects Microsoft's end-to-end trust vision for a safer Internet. Microsoft refers to these high-level, real-world principles in the Microsoft Security Development Lifecycle as SD3+C, which signifies:

Secure by Design: The software should be architected, designed, and implemented so as to protect itself and the information it processes, and to resist attacks.

Secure by Default: In the real world, software will not achieve perfect security, so designers should assume that security flaws will be present. To minimize the harm that occurs when attackers target these remaining flaws, software's default state should promote security. For example, software should run with the least necessary privilege, and services and features that are not widely needed should be disabled by default or accessible only to a small population of users. (A hypothetical configuration sketch after this list illustrates the idea.)

Secure in Deployment: Tools and guidance should accompany software to help end users and/or administrators use it securely. Additionally, updates should be easy to deploy.


Communications: Software developers should be prepared for the discovery of product vulnerabilities and should communicate openly and responsibly with end users and/or administrators to help them take protective action (such as patching or deploying workarounds).
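As one way to make the Secure by Default principle concrete, the hypothetical service configuration below ships with optional features disabled and the narrowest useful defaults, so that an unconfigured deployment fails safe. All names and values here are illustrative assumptions, not part of the SDL:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ServiceConfig:
    """Hypothetical defaults chosen to fail safe, per Secure by Default."""
    listen_interface: str = "127.0.0.1"   # not network-exposed unless opted in
    require_tls: bool = True              # encrypted transport on by default
    admin_api_enabled: bool = False       # widely unneeded features ship disabled
    allowed_roles: frozenset = field(
        default_factory=lambda: frozenset({"operator"})  # least privilege
    )

# An operator must explicitly widen the attack surface:
hardened = ServiceConfig()
exposed = ServiceConfig(listen_interface="0.0.0.0", admin_api_enabled=True)
```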

5.12.20 Secure Development Lifecycle Optimization Model

The Secure Development Lifecycle (SDL) Optimization Model has been designed to facilitate gradual, consistent, and cost-effective implementation of the SDL and to reduce customer risk.

Figure 94 depicts the SDL Optimization Model, which is structured around five capability areas and four maturity levels.

Figure 95 shows the secure development process model at Microsoft.


Security engineering principles include:

- Ongoing secure development education requirements for all developers involved in the smart grid information system;
- Specification of a minimum standard for security;
- Specification of a minimum standard for privacy;
- Creation of a threat model for a smart grid information system;
- Updating of product specifications to include mitigations for threats discovered during threat modeling;
- Use of secure coding practices to reduce common security errors;
- Testing to validate the effectiveness of secure coding practices;
- Performance of a final security audit prior to authorization to operate, to confirm adherence to security requirements;
- Creation of a documented and tested security response plan in the event a vulnerability is discovered;
- Creation of a documented and tested privacy response plan in the event a vulnerability is discovered; and
- Performance of a root cause analysis to understand the cause of identified vulnerabilities.

5.12.21 Cybersecurity Maturity Model

The Cybersecurity Maturity Model is a guide by which the various security measures may be applied by an organization, given its existing reference architecture and the types of investments that can help optimize its infrastructure. In this model, an organization with a basic level of cybersecurity maturity might use rudimentary desktop and data protection solutions, such as anti-malware clients, standard desktop images, and full-volume encryption. By contrast, a rationalized or dynamic organization would support automated solutions for desktop and server security, with a lifecycle approach to managing enterprise identities and policies.


Figure 96: As an organization matures according to this model, it becomes more efficient in its risk-management tactics. A culture of security develops where a risk-based management approach factors in people, processes, and technologies.

Over time, organizations can improve services and lower costs while reaching for their cybersecurity goals.

- Basic: An organization has a rudimentary approach to risk management characterized by manual processes. Policies, standards, and controls are typically improvised and reactive.
- Standardized: A threat-aware organization develops structured, centralized policies, standards, and controls. It recognizes the need for consistent patch management, even if security-management procedures are not fully automated.
- Rationalized: When an organization takes a holistic view of cyber risk, it implements advanced policies, standards, and controls and tightly integrates them with highly automated IT operations. The organization plans or has already deployed advanced cybersecurity capabilities as part of a risk-based approach to management.
- Dynamic: By optimizing a strategic, risk-based approach, an organization can adapt to a variety of cyber threats in a highly agile manner. Governance, risk management, and compliance emerge naturally from an organizational culture of security.


5.12.22 Secure Operations

Secure operation is every bit as critical as the security technologies and the SDL. If proper secure operational guidelines are not followed, the secure design, development, and deployment efforts may be severely or completely compromised. At a minimum, secure operations consist of operating all equipment within its safe operational limits. The NERC CIP standards (CIP-002 through CIP-009) provide guidance for the high-voltage transmission system. Because of the inherent integration and interoperation driven by the new business processes of the smart energy ecosystem, it is paramount that consistent operational security best practices be followed. Proper training, qualification, and operation should be included in the utility operational security program, and the architecture must enable execution of and reporting on these programs. Microsoft supports the evolving security landscape for system operations across the entire smart energy ecosystem.

5.13 Privacy

Privacy is one of the foundations of Microsoft Trustworthy Computing. Microsoft has a longstanding commitment to privacy, which is an integral part of our product and service lifecycle and is incorporated into the SDL. We work to be transparent in our privacy practices, offer customers meaningful privacy choices, and responsibly manage the data we store. The Microsoft Privacy Principles, our specific privacy statements, and our internal privacy standards guide how we collect, use, and protect customer data.

General information about cloud privacy is available from the Microsoft Privacy Web site. We also published a white paper, Privacy in the Cloud, to explain how Microsoft is addressing privacy in the realm of cloud computing. The Windows Azure Privacy Statement describes the specific privacy policy and practices that govern customers' use of Windows Azure.

5.13.1 Location of Customer Data

Microsoft currently operates Windows Azure in data centers around the world. In this section, we address common customer inquiries about access to and location of customer data.

- Customer data is all the data, including all text, sound, software, or image files, that is provided to Microsoft by, or on behalf of, a customer through its use of Windows Azure. For example, this includes data a customer uploads for storage or processing in Windows Azure and applications that a customer uploads for hosting in Windows Azure.
- Customers may specify the geographic region(s) of the Microsoft datacenters in which customer data will be stored. At present, the available major regions are Asia, Europe, and the United States. Sub-regions are available in each major region, as follows:

  - Asia: East (Hong Kong) and Southeast (Singapore)
  - Europe: North (Ireland) and West (Netherlands)
  - United States: North Central (Illinois), South Central (Texas), East (Virginia), and West (California)

- Microsoft may transfer customer data within a major geographic region (e.g., within Europe) for data redundancy or other purposes. For example, the Windows Azure Storage geo-replication feature replicates Windows Azure Blob and Table data, at no additional cost, between two sub-regions within the same major region for enhanced data durability in case of a major data center disaster. Customers can choose to disable this feature.
- Microsoft will not transfer customer data outside the major geographic region(s) the customer specifies (for example, from Europe to the United States, or from the United States to Asia) except:
  - where the customer configures the account to enable this (for example, through use of the Content Delivery Network (CDN) feature, which enables worldwide caching of content in a network of dozens of global nodes, or use of a prerelease feature that does not allow data center region selection);
  - where necessary for Microsoft to provide customer support, to troubleshoot the service, or to comply with legal requirements; or
  - for software deployments in Web and Worker Roles (but not VM Role), where backup copies of the software deployment package may be stored in the United States regardless of the specified geographic region.
- Microsoft does not control or limit the regions from which customers or their end users may access customer data.

5.13.2 Data Protection

Microsoft transfers data under the E.U. Data Protection Directive. The E.U. Data Protection Directive (95/46/EC) sets a baseline for handling personal data in the European Union, and the EU has stricter privacy rules than the U.S. and most other countries. To allow for the continuous flow of information required by international business (including cross-border transfer of personal data), the European Commission reached an agreement with the U.S. Department of Commerce whereby U.S. organizations can self-certify as complying with the Safe Harbor Framework. Microsoft (including, for this purpose, all of our U.S. subsidiaries) is Safe Harbor certified with the U.S. Department of Commerce. In addition to the E.U. Member States, members of the European Economic Area (Iceland, Liechtenstein, and Norway) also recognize organizations certified under the Safe Harbor program as providing adequate privacy protection to justify trans-border transfers from their countries to the U.S.


Switzerland has a nearly identical agreement (Swiss-U.S. Safe Harbor) with the U.S. Department of Commerce to legitimize transfers from Switzerland to the U.S., to which Microsoft has also certified. The Safe Harbor certification allows for the legal transfer of E.U. personal data outside the E.U. to Microsoft for processing. Under the E.U. Data Protection Directive and our contractual agreement, Microsoft acts as the data processor, whereas the customer is the data controller, with final ownership of the data and responsibility under the law for making sure that data can be legally transferred to Microsoft.

It is important to note that Microsoft will transfer E.U. customer data outside the E.U. only under very limited circumstances; see the Location of Customer Data section for details. Microsoft also offers additional contractual commitments to its volume licensing customers:

- A Data Processing Agreement that details our compliance with the E.U. Data Protection Directive and related security requirements for Windows Azure core features within ISO/IEC 27001:2005 scope.
- E.U. Model Contractual Clauses that provide additional contractual guarantees around transfers of personal data for Windows Azure core features within ISO/IEC 27001:2005 scope.

5.13.3 Privacy-by-Design (PbD) and Privacy Enhancing Technologies

Privacy-by-design ensures privacy concerns are considered at every stage of the engineering and deployment process. Appropriate privacy enhancing technologies complement and support privacy policies to ensure robust compliance. Microsoft is a technology leader in privacy-by-design and privacy technologies for smart grids and smart metering; Microsoft Research has made a substantial investment in new-generation privacy technologies. By using them, important functions of smart grids such as statistics, aggregation, analytics, fraud detection, and billing can be supported while guaranteeing that the core data protection principle of data minimization is respected. These technologies unlock the potential of data-intensive applications while alleviating regulatory concerns and ensuring PII is protected from internal or external risks.48

Radboud University and Microsoft Research have developed protocols to privately compute aggregate meter measurements over defined sets of meters, allowing for fraud and leakage detection as well as network management and statistical processing, without revealing any additional information about the individual meter readings and without revealing customer usage information.

48 Microsoft Research privacy technologies have been used twice as case studies of privacy-by-design in smart grids by the Ontario Information and Privacy Commissioner (IPC) in their PbD advice for smart meter deployment (here and here). They are currently seeing pilot deployment and are available to partners and customers for preview and licensing.


Analytics may still be performed, but the sensitive customer usage information remains hidden.49
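The published protocols are more sophisticated, but a minimal additive-masking sketch conveys the core idea: per-meter masks cancel in the sum, so an aggregator learns the total consumption without seeing any individual reading. This is a hypothetical illustration of the principle, not the Radboud/Microsoft Research protocol itself, and the trusted mask dealer here is a simplification a real protocol removes:

```python
import secrets

MODULUS = 2**61 - 1  # large prime field for the toy scheme

def mask_readings(readings):
    """Each meter adds a random mask; the masks are constructed to sum to
    zero, so only the aggregate survives. A real protocol derives the masks
    cryptographically between meters rather than at a trusted dealer."""
    masks = [secrets.randbelow(MODULUS) for _ in readings[:-1]]
    masks.append((-sum(masks)) % MODULUS)  # balancing mask
    return [(r + m) % MODULUS for r, m in zip(readings, masks)]

readings = [12, 7, 30, 5]            # kWh per meter (hidden individually)
masked = mask_readings(readings)     # individually indistinguishable from noise
aggregate = sum(masked) % MODULUS    # masks cancel: only the sum is revealed
assert aggregate == sum(readings)
print(aggregate)                     # -> 54
```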

5.14 Compliance

Microsoft complies with the data protection and privacy laws generally applicable to Microsoft's provision of a cloud services platform. Customers are responsible for determining whether Windows Azure, and the particular applications they intend to run in Windows Azure, comply with the specific laws and regulations applicable to the customer's industry and use scenario. To help our customers comply with their own specific requirements, we have put in place a comprehensive compliance framework through which we will be advancing all Windows Azure features.

Microsoft is committed to providing Windows Azure customers with detailed information about our security compliance programs to help customers make their own regulatory assessments. However, it is ultimately up to our customers to evaluate Windows Azure compliance programs against their own requirements to determine whether our services satisfy their regulatory needs.

5.14.1 Windows Azure Trust Center Compliance

The Windows Azure Trust Center is the central location for all security, privacy, and compliance information. It shows how security is handled in Azure, how to secure Azure cloud applications, the strategy for data privacy, and compliance certifications to date. Sections 5.12 and 5.13 above focus on security and data privacy, while this section focuses on compliance in the Windows Azure Trust Center.

Windows Azure seeks to be relentlessly stringent on security and compliance. To truly understand the scope of Microsoft's commitment in this regard, readers can review the overall security program at the Windows Azure Trust Center. Windows Azure is also regularly receiving new certifications, which can be viewed on the Windows Azure Trust Center compliance pages. Additional detail for some Microsoft cloud services can be found with the individual services, such as the Office 365 Trust Center.

5.14.2 ISO/IEC 27001:2005 Audit and Certification

On November 29, 2011, Windows Azure obtained ISO/IEC 27001:2005 certification for its core features following a successful audit by the British Standards Institution (BSI). The ISO certificate is available here, and it lists the scope as: "The Information Security Management System for Microsoft Windows Azure including development, operations and support for the compute, storage (XStore), virtual network and virtual machine services, in accordance with Windows Azure ISMS statement of applicability dated September 28, 2011. The ISMS meets the criteria of ISO/IEC 27001:2005 ISMS requirements Standard."

49 Smart Meters in Europe: Privacy by Design at its Best, by Ann Cavoukian, Ph.D., April 2012.


The ISO certification covers the policies, controls, and processes applicable to the following Windows Azure core features:

- Cloud Services (includes Web and Worker roles, formerly under Compute)
- Storage (includes Blobs, Queues, and Tables)
- Networking (includes Traffic Manager, Windows Azure Connect, and Virtual Network)
- Virtual Machines

Included in the above are Windows Azure service management features and the Windows Azure Management Portal, as well as the information management systems used to monitor, operate, and update these services. In our next phase, we will pursue certification for the remaining features of Windows Azure, including SQL Database, Web Sites, Service Bus, Windows Azure Active Directory, and Caching. Microsoft's Global Foundation Services division has a separate ISO/IEC 27001:2005 certification for the data centers in which Windows Azure is hosted.

ISO/IEC 27001:2005 is a broad international information security standard. The ISO/IEC 27001:2005 certificate validates that Microsoft has implemented the internationally recognized information security controls defined in this standard, including guidelines and general principles for initiating, implementing, maintaining, and improving information security management within an organization. The Windows Azure ISO/IEC 27001:2005 Statement of Applicability (SOA) is available from Microsoft, and it can be shared with customers under a non-disclosure agreement. It includes over 130 security controls, and it maps Windows Azure controls to control objectives contained in Annex A of ISO/IEC 27001:2005. Please contact Windows Azure customer support or your local Microsoft representative for more information.

5.14.3 SSAE 16/ISAE 3402 Attestation

A detailed Service Organization Controls 1 (SOC 1) Type 2 report is available to customers under a non-disclosure agreement. Please contact Windows Azure customer support or your local Microsoft representative to get a copy of the report. The audit was conducted in accordance with Statement on Standards for Attestation Engagements (SSAE) No. 16, put forth by the Auditing Standards Board (ASB) of the American Institute of Certified Public Accountants (AICPA), and International Standard on Assurance Engagements (ISAE) 3402, put forth by the International Auditing and Assurance Standards Board (IAASB), a standard-setting board within the International Federation of Accountants (IFAC). The examination was conducted in 2012, and it covers the following Windows Azure core features:

- Cloud Services (includes Web and Worker roles, formerly under Compute)
- Storage (includes Blobs, Queues, and Tables)
- Networking (includes Traffic Manager and Windows Azure Connect)


The following additional features were launched after the examination review period but are subject to the same controls and processes that were tested in the audit:

- Virtual Network
- Virtual Machines

The SOC 1 Type 2 audit report attests to the fairness of the presentation of the Windows Azure service description, and to the suitability of the design and operating effectiveness of the controls to achieve the related control objectives. In our next phase, we will broaden the scope of the SSAE 16/ISAE 3402 attestation to cover the remaining features of Windows Azure, including SQL Database, Web Sites, Service Bus, Active Directory, and Caching.


A.1 APPENDIX: Non-Microsoft Guidelines and Models

Earlier in this document, we noted that there are several non-Microsoft guidelines and capability models for risk management processes and capability maturity models. We offer those here, with reference links to their home resources. Please note that these guidelines and models are included for utilities' reference only, as they may help utilities develop a robust view of their overall maturity when compared against several benchmarking standards. In most cases, the models are complementary to Microsoft guidelines and modeling, especially the Microsoft Capability Maturity Model. We encourage readers to learn more directly from the sources themselves, but these introductory comments should provide a flavor of their contents. This section describes five such guidelines and models that are not affiliated with Microsoft models:

- The U.S. Department of Energy Risk Management Process Guideline
- The Carnegie Mellon Smart Grid Maturity Model
- The Carnegie Mellon Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2) and Risk Management Guidelines
- The Open Group Architecture Framework (TOGAF)
- The Innovation Value Institute Capability Maturity Framework

A1.1 U.S. Department of Energy Risk Management Process Guideline

The U.S. Department of Energy, in collaboration with NIST and NERC, developed the electricity subsector cybersecurity Risk Management Process (RMP) guideline to provide a consistent and repeatable approach to managing cybersecurity risk across transmission, distribution, generation, and marketing organizations in the electricity subsector. The processes can be tailored to a specific organization to establish a new program or complement an existing set of cybersecurity policies, standards, guidelines, and procedures by:

- Establishing and implementing a structure for risk management and governance;
- Identifying and prioritizing mission and business processes with respect to strategic goals and objectives;
- Establishing the recovery order for critical mission and business processes;
- Establishing the organization's risk tolerance;
- Defining techniques and methodologies for assessing cybersecurity risk;
- Defining risk management constraints and requirements; and
- Establishing the organization's cybersecurity Risk Management Strategy.

The RMP also provides an important definition of risk:

Risk = f(Threat, Vulnerability, Impact, Likelihood)

Where:

- Threats include undesirable events from people, processes, technology, external forces, or system incidents causing adverse impacts to assets, individuals, or other electricity subsector organizations.
- Vulnerabilities describe the vectors that a threat can exploit to impact IT and ICS.
- Impact may include disruptions or outages of power system operations, financial loss, loss of image or reputation, or damage to assets or individuals.
- Likelihood may include a quantitative assessment based upon historical or current credible threat data using empirical or statistical analysis.
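The guideline leaves f unspecified. As a purely hypothetical instantiation, the sketch below scores risk multiplicatively over normalized inputs; the form and scales are illustrative assumptions, not part of the RMP:

```python
def risk_score(threat: float, vulnerability: float, impact: float,
               likelihood: float) -> float:
    """One possible f for Risk = f(Threat, Vulnerability, Impact, Likelihood).
    Inputs on a 0-1 scale; the multiplicative form is an assumption made for
    illustration, not a prescription of the DOE RMP guideline."""
    return threat * vulnerability * impact * likelihood

# Example: credible threat, known exploitable vector, severe outage impact,
# moderate likelihood -> a high-priority item for the risk register.
print(round(risk_score(threat=0.8, vulnerability=0.9,
                       impact=1.0, likelihood=0.5), 2))  # -> 0.36
```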

A 1.2 The Carnegie Mellon Smart Grid Maturity Model

The Carnegie Mellon Smart Grid Maturity Model (SGMM) helps utilities assess their smart grid preparedness against a set of smart grid maturity indicators. Included below are very brief descriptions of the eight CM SGMM categories and the assessment criteria they represent. Follow the links provided for detailed CM SGMM descriptions and assessment criteria. These eight SGMM categories can be used individually, in combination, or in total, depending upon the relevance and focus of the utility, to inform development and integration initiatives that will bring the best value to the utility.

A 1.2.1 Strategy, Management, Regulatory

The Strategy, Management, and Regulatory (SMR) category includes establishing a smart grid strategy, including the associated governance and management to execute the strategy. Also included is the proactive leadership to establish and follow new business models as necessary to thrive in the evolving smart grid environment.

A 1.2.2 Organization & Structure

The Organization and Structure category includes the organizational alignment and maturity to achieve the overall smart grid goals. This includes OT and IT cross-functional planning and execution, as well as the workforce training and capabilities to achieve the stated smart grid vision.

A 1.2.3 Grid Operations

The Grid Operations category includes improving the reliable, safe, and secure operation of the electric grid through the use of dispatch optimization, situational awareness displays, and increased automation. Advanced grid operations enable the use of multiple generation types and sources, enhanced visibility of grid performance, and improved communications and control, all leading to operational efficiency, reduced costs, optimized use of grid assets, reliable and safe grid operations, and high-quality power.


A 1.2.4 Work and Asset Management

The Work and Asset Management category covers optimized management of resources, assets, and workforce to achieve smart grid goals. Included is the transition to condition-based maintenance and the efficient use of resources for asset management, optimal operations, proactive planning, and minimized downtime, leading to improved operational efficiency.

A 1.2.5 Technology

The Technology category includes technology planning, selection, and integration of the technologies necessary to achieve smart grid operations and innovation goals.

A 1.2.6 Customer Management and Experience

The Customer category covers enabling customers to make informed decisions about their own energy usage while either passively or actively participating in energy balancing that enhances grid reliability, safety, energy choice, and conservation to meet their personal comfort and usage preferences.

A 1.2.7 Value Chain Integration

The Value Chain Integration category covers integration of all participants in the power system, enabling optimization of the generation and delivery of power. This includes automation of processes and the networking of IT systems for information exchange, enabling data sharing.

A 1.2.8 Societal & Environmental

The Societal and Environmental category includes how an organization contributes to society via the reliability, safety, and security of the power system, as well as by enabling the intelligent use of energy and the incorporation of new sources of generation to minimize the impact on the environment.

A1.3 The Carnegie Mellon Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2) and Risk Management Guidelines
The Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2) is another industry guideline, developed by the U.S. Department of Energy, the U.S. Department of Homeland Security, and Carnegie Mellon. The model was developed collaboratively with an industry advisory group of key utilities and industry associations and is intended to complement other cybersecurity programs. The goal of the model is to support ongoing development and measurement of cybersecurity capabilities within the electricity subsector through the following four objectives:

- Strengthen cybersecurity capabilities in the electricity subsector.
- Enable utilities to effectively and consistently evaluate and benchmark cybersecurity capabilities.
- Share knowledge, best practices, and relevant references within the subsector as a means to improve cybersecurity capabilities.
- Enable utilities to prioritize actions and investments to improve cybersecurity.


The model was developed to apply to all electric utilities, regardless of ownership structure, size, or function. Broad use of the model is expected to support benchmarking of the subsector's cybersecurity capabilities. The ES-C2M2 can be used for a security assessment of a utility's cyber program, providing a security complement to a CM SGMM or other utility capability assessment to give a more complete perspective of a utility's capability maturity baseline. ES-C2M2 recommends that a risk management-based approach be put in place for ten important utility activities:

1. Risk Management (RISK)
2. Asset, Change, and Configuration Management (ASSET)
3. Identity and Access Management (ACCESS)
4. Threat and Vulnerability Management (THREAT)
5. Situational Awareness (SITUATION)
6. Information Sharing and Communications (SHARING)
7. Event and Incident Response, Continuity of Operations (RESPONSE)
8. Supply Chain and External Dependencies Management (DEPENDENCIES)
9. Workforce Management (WORKFORCE)
10. Cybersecurity Program Management (CYBER)

A1.4 The Open Group Architecture Framework (TOGAF)

The Open Group bills its architecture framework as "a proven enterprise architecture methodology and framework used by the world's leading organizations to improve business efficiency." It describes TOGAF as the most prominent and reliable enterprise architecture standard, ensuring consistent standards, methods, and communication among enterprise architecture professionals. Enterprise architecture professionals fluent in TOGAF standards enjoy greater industry credibility, job effectiveness, and career opportunities. TOGAF helps practitioners avoid being locked into proprietary methods, utilize resources more efficiently and effectively, and realize a greater return on investment.

First published in 1995, TOGAF was based on the U.S. Department of Defense Technical Architecture Framework for Information Management (TAFIM). From this foundation, The Open Group Architecture Forum has developed successive versions of TOGAF at regular intervals and published them on The Open Group public web site. Details of the Forum, and its plans for evolving TOGAF in the current year, are given on the Architecture Forum website. TOGAF Version 9.1 is a detailed method and set of supporting resources for developing an Enterprise Architecture. TOGAF 9.1 represents an industry consensus framework and method for Enterprise Architecture that is available for use internally by any organization around the world, members and non-members of The Open Group alike, subject to license conditions.


A1.5 The Innovation Value Institute Capability Maturity Framework

The Innovation Value Institute (IVI) has developed its Capability Maturity Framework (CMF). Microsoft is a member of IVI and has contributed to the CMF. The CMF can be used as a framework to prioritize and manage the implementation of the Business and Technical Capabilities identified for improvement to achieve a utility's overall SERA goals.

