
Wonderware FactorySuite A2

Deployment Guide

Revision D.2 Last Revision: 1/27/06

Invensys Systems, Inc.

All rights reserved. No part of this documentation shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of Invensys Systems, Inc. No copyright or patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this documentation, the publisher and the author assume no responsibility for errors or omissions. Neither is any liability assumed for damages resulting from the use of the information contained herein. The information in this documentation is subject to change without notice and does not represent a commitment on the part of Invensys Systems, Inc. The software described in this documentation is furnished under a license or nondisclosure agreement. This software may be used or copied only in accordance with the terms of these agreements.

© 2006 Invensys Systems, Inc. All Rights Reserved.

Trademarks

All terms mentioned in this documentation that are known to be trademarks or service marks have been appropriately capitalized. Invensys Systems, Inc. cannot attest to the accuracy of this information. Use of a term in this documentation should not be regarded as affecting the validity of any trademark or service mark.

Alarm Logger, ActiveFactory, ArchestrA, Avantis, DBDump, DBLoad, DT Analyst, FactoryFocus, FactoryOffice, FactorySuite, FactorySuite A2, InBatch, InControl, IndustrialRAD, IndustrialSQL Server, InTouch, InTrack, MaintenanceSuite, MuniSuite, QI Analyst, SCADAlarm, SCADASuite, SuiteLink, SuiteVoyager, WindowMaker, WindowViewer, Wonderware, and Wonderware Logger are trademarks of Invensys plc, its subsidiaries and affiliates. All other brands may be trademarks of their respective owners.

Contents

Before You Begin
    About This Document
    Assumptions
    FactorySuite A2 Application Versions
    FactorySuite A2 Terminology
    Document Conventions
    Where to Find Additional Information
    ArchestrA Community Website
    Technical Support

CHAPTER 1: Planning the Integration Project
    FactorySuite A2 Project Workflow
    FactorySuite A2 Workflow Summary
    Identify Field Devices and Functional Requirements
    Define Object Naming Conventions
    Define the Area Model
    Plan Templates
    Define the Security Model
    Define the Deployment Model
    Document the Planning Results

CHAPTER 2: Identifying Topology Requirements
    Topology Component Distribution
    Galaxy Repository (Configuration Database)
    AutomationObject Server Node
    Visualization Node
    I/O Server Node
    Engineering Station Node
    Historian Node
    SuiteVoyager Portal
    Topology Categories
    Distributed Local Network
    Client/Server
    General Topology Planning Considerations
    Legacy InTouch Software Applications
    Terminal Services
    Widely-Distributed Network
    Best Practices for Topology Configuration
    Network Configuration
    Software Configuration
    I/O Server Connectivity
    I/O and DAServers: Best Practice
    DIObject Advantages
    Extending the IAS Environment

CHAPTER 3: Implementing Redundancy
    Redundant System Requirements
    AppEngine Redundancy States
    NIC Configuration: Redundant Message Channel (RMC)
    Primary Network Connection
    RMC Network Connection
    Redundant DIObjects
    Configuration
    Redundant Configuration Combinations
    Dedicated Standby Server - No Redundant I/O Server
    Load Sharing Configurations
    Run-Time Considerations
    Deployment Considerations
    Scripting Considerations
    History
    Alarms in a Redundant Configuration
    Redundant Configuration with Dedicated Visualization Nodes
    Redundant Pair with No Dedicated Visualization Client Nodes
    Failover Causes in Redundant AppEngines
    Forcing Failover
    Communication Failure in the Supervisory (Primary) Network
    RMC Communication Failure
    PC Failures
    Redundant System Checklist
    Tuning Recommendations for Redundancy in Large Systems
    Tuning Redundant Engine Attributes
    Engine Monitoring

CHAPTER 4: Integrating FactorySuite Applications
    IndustrialSQL Server Historian
    ActiveFactory Software
    ActiveFactory Reporting Website
    InTouch HMI Software
    WindowMaker, WindowViewer, and View
    InTouch SmartSymbols
    Network Utilization
    InTouch Software Add-Ins
    Tablet and Panel PCs
    SCADAlarm Event Notification Software
    Alarm DB Manager
    SuiteVoyager Software
    QI Analyst Software
    DT Analyst Software
    InTrack Software
    Using InTrack Software with an Application Server
    InTrack Software Integration with Other FactorySuite Software
    InBatch Software
    InBatch Production System Requirements
    InBatch Production System Topologies
    Third Party Application Integration
    FactorySuite Gateway
    Other Connectivity/Integration Tools

CHAPTER 5: Working with Templates
    Before Creating Templates
    Creating a Template Model
    Containment vs. UDAs
    Base Template Functional Summary
    Template Modeling Examples
    Using UDAs and Extensions
    Deriving Templates and Instances
    Re-Using Templates in Different Galaxies
    Export/Import Templates and Instances
    Export Automation Objects
    Galaxy Dump
    Scripting at the Template Level
    Script Execution Types
    Scoping Variables
    Scripting for OLE and/or COM Objects
    Using Aliases
    Determining Object and Script Execution Order
    Asynchronous Scripts
    Scripting I/O References
    Writing to a Database

CHAPTER 6: Implementing QuickScript .NET
    IAS Scripting Architecture
    ApplicationObject Script Interactions
    Script Access to IAS Object Attributes
    Referencing Object Attribute and Property Values
    Other Script Function Categories
    Using UDAs
    Inter- and Intra-Object Scripting Considerations
    Evolving from InTouch to ArchestrA
    Inter-Object Scripting Interactions
    Intra-Object Scripting Interactions
    Remote References to Live Data
    Indirect Referencing
    IAS Context
    Accessing the .NET Framework
    .NET Overview
    System.Data Classes (Connection to MSSQL via ADO.NET)
    Scripting Practices
    Using .NET for Database Access
    Defining Project Scope and Requirements
    Using "Shape" Template Objects
    Using the ObjectCacheExt (Function Library)
    Posting the Data
    Object Functional Encapsulations
    Scripting Database Access
    ObjectCacheExt.DLL Overview
    Purpose of ObjectCacheExt.DLL
    Template Object Examples
    Example Object Interactions via UDAs
    $SqlConnCacheMgr Script Examples
    $PostToDBaORb Template Object
    $PostTOaORb Derived From $PostToDBaORb
    Time-Based Calculations and Retentive Values
    Documenting Your Classes
    Summary

CHAPTER 7: Architecting Security
    Wonderware Security Perspective
    Common Control System Security Considerations
    Common Security Evaluation Topics
    About Security Infrastructure Components
    Securing FactorySuite A2 Systems
    Security Considerations
    Corporate Network Infrastructure Layer
    Process Control Network (PCN) Layer
    Securing Visualization
    OS Group Based Security Mode Notes
    Securing the Configuration Environment
    Distributed COM (DCOM)
    Limiting the DCOM Port Range
    Security Recommendations Summary

CHAPTER 8: Historizing Data
    General Considerations
    Data Point Volumes
    Data Storage Rate
    Data Loss Prevention
    Area and Data Storage Relocation
    Non-Historian Data Storage Considerations

CHAPTER 9: Implementing Alarms and Events
    General Considerations
    Configuring Alarm Queries
    Determining the Alarm Topology
    Alarming in a Distributed Local Network Topology
    Alarming in a Client/Server Topology
    Logging Historical Alarms

CHAPTER 10: Assessing System Size and Performance
    System Disk Space and RAM Use
    Galaxy Repository Performance Considerations
    Predicting Disk Space and RAM Requirements at Configuration Time
    Predicting Disk Space and RAM Requirements at Run-Time
    Predicting System Performance
    Unit Application Definition
    Hardware Specifications
    Baseline Server CPU Values
    Performance Data
    Single Node System Implementation
    Large Distributed FactorySuite A2 System Topology
    Very Large Distributed FactorySuite A2 System Topology
    Failover Performance
    Dedicated Standby Configuration
    Load Shared with Remote I/O Data Source
    DIObject Performance Notes
    OPC Client Performance

CHAPTER 11: Working in Wide-Area Networks and SCADA Systems
    Wide-Area Networks Overview
    Network Terminology
    Network and Operating System Configuration
    Minimum Bandwidth Requirements
    Subnets
    DCOM
    Domain Controller
    Remote Access Services (RAS)
    Terminal Services
    Security
    Domain-Level Security
    Workgroup-Level Security
    Application Configuration Overview
    Acquire and Store Timestamps for Event Data
    Acquire and Store RTU Event Information
    Disaster Recovery
    Industrial Application Server (Distributed IDE)
    Platform and Engine Tuning
    Tuning the Historian Primitive/MDAS (in Platforms and Engines)
    Alarms
    Inter-Node Communications
    Distributing InTouch HMI Nodes
    Using IndustrialSQL Server Historian
    Diagnostics
    SCADA Benchmarks
    MX Subscriptions Network Utilization
    Alarm Network Utilization
    History Network Utilization
    Summary Network Utilization
    Network Utilization at Deployment

CHAPTER 12: Maintaining the System
    FactorySuite A2 System Diagnostic/Maintenance Tools
    Object Viewer
    System Management Console (SMC)
    Add-on Diagnostic/Maintenance Tools
    Galaxy Diagnostic Tools
    Galaxy Maintenance Tools
    OS Diagnostic Tools
    Performance Monitor
    Event Viewer

APPENDIX A: System Integrator Checklist
    General
    Use Time Synchronization
    Disable Hyper-Threading
    Communication
    Configure IP Addressing
    Configure Dual NICs
    Security
    Confirm User Name and Password
    Configure Anti-Virus Software
    Administration (Local and Remote)
    Install Correct IAS Components
    Connection Requirements for Remote IDE (from a Client Machine to a Galaxy)
    Redundancy Configuration
    Redundant AppEngines
    Multiple NICs
    Migration
    Verify Version and Patches
    Upgrade Correctly
    Compatibility
    FactorySuite Component Version Compatibility

APPENDIX B: .NET Example Source Code
    SqlConnCacheMgr Object
    SqlConnCacheMgr Overview
    SqlConnCacheMgr Configuration
    PostTOaORb Object
    PostTOaORb Overview
    Template Object: $PostTOaORb
    RunTime Object
    RunTime Overview
    RunTime Run-Time Behavior
    RunTime Configuration
    Template Object: $RunTime
    $RunTime - Start
    ObjectCache.dll Visual Studio .NET C# Solution
    Debugging the Project
    ObjectCacheExt.DLL Source Code

Index


Before You Begin

About This Document


The FactorySuite A2 Deployment Guide provides recommendations and "best practice" information to help define, design, and implement integration projects within the Wonderware FactorySuite A2 System environment. The recommendations in this guide are based on experience gained from multiple projects using the ArchestrA infrastructure for FactorySuite A2; they are not intended to preclude other methods and procedures that work effectively.

Assumptions
This deployment guide is intended for:

•  Engineers and other technical personnel who will be developing and implementing FactorySuite A2 System solutions.
•  Sales personnel or Sales Engineers who need to define system topologies in order to submit FactorySuite A2 System project proposals.

It is assumed that you are familiar with the working environment of the Microsoft Windows 2000 Server, Windows Server 2003, and Windows XP Professional operating systems, as well as with a scripting, programming, or macro language. An understanding of concepts such as variables, statements, functions, and methods will also help you achieve the best results.

It is assumed that you are familiar with the individual components that constitute the FactorySuite A2 environment. For additional information about a component, see the associated user documentation.

All topologies referenced in this Deployment Guide assume a "Bus" topology, which comprises a single main communications line (trunk) to which nodes are attached; this is also known as a Linear Bus. Exceptions are noted. For more information on standard topology schemas, see http://www.microsoft.com/technet/prodtechnol/visio/visio2002/plan/glossary.mspx or http://en.wikipedia.org/wiki/Category:Network_topologies.


FactorySuite A2 Application Versions


This Deployment Guide has been updated to include current Wonderware software product versions (current Service Packs are assumed for all software). However, the information is designed to apply to previous versions except where noted. The current versions are:

•  Wonderware Industrial Application Server 2.1
•  IndustrialSQL Server Historian 9.0
•  InTouch HMI 9.5
•  ActiveFactory Software 9.1
•  SCADAlarm Event Notification Software 6.0 SP1
•  SuiteVoyager Software 2.5
•  QI Analyst Software 4.2
•  DT Analyst Software 2.2
•  InTrack Software 7.1
•  InBatch Software 8.0 (FlexFormula and Premier 8.1)
•  Production Event Module (PEM) 1.0
•  InControl Software 7.11
•  Third Party and Wonderware SDK Applications as noted.

The figure in the following section shows terms used throughout this document. Definitions of specific terms are included after the figure.


FactorySuite A2 Terminology
The following figure shows basic object classifications and their relationships within the IAS System. This document focuses on the Application/Device Integration/Engine/Platform/Area Objects level except where otherwise noted:

[Figure: Galaxy object classifications. Galaxy Objects comprise AutomationObjects, which are divided into Domain Objects and System Objects. Domain Objects include Application Objects (for example, AnalogDevice, DiscreteDevice, Field Reference, Switch, User-Defined, and so on) and Device Integration Objects (DIObjects; for example, FS Gateway, AB TCP Network, and AB PLC5). System Objects include EngineObjects (AppEngine), PlatformObjects (WinPlatform), and AreaObjects (Area).]

The following terms are used throughout this document:

•  Area: The AutomationObject that represents an Area of a plant within a Galaxy. The Area object acts as an alarm concentrator and is used to put other AutomationObjects into the context of the actual physical automation layout.
•  ApplicationEngine (AppEngine): A real-time Engine that hosts and executes AutomationObjects.
•  ApplicationObject: An AutomationObject that represents some element of a user application. This may include elements such as (but not limited to) an automation process component (e.g. thermocouple, pump, motor, valve, reactor, tank, etc.) or associated application component (e.g. function block, PID loop, Sequential Function Chart, Ladder Logic program, batch phase, SPC data sheet, etc.).
•  AutomationObject: An object that represents hardware, software, or Engines as objects with a user-defined, unique name within the Galaxy. It provides a standard way to create, name, download, execute, or monitor the represented component.
•  Device Integration Objects (DIObjects): AutomationObjects that represent the communication with external devices; they all run on the Application Engine. For example:


•  DINetwork Object: The object that represents the network interface port to the device via the Data Access Server, providing diagnostics and configuration for that specific card.
•  DIDevice Object: The object that represents the actual external device (e.g. PLC, RTU) that is associated with the DINetwork Object. It provides the ability to diagnose the device, browse its data registers, and access DAGroups for that device.

•  Platform Object: A representation of the physical hardware on which the ArchestrA software is running. Platform Objects host Engine Objects (see WinPlatform).
•  WinPlatform: A single computer in a Galaxy, consisting of Network Message Exchange, a set of basic services, the operating system, and the physical hardware. This object hosts Engines and is a type of Platform Object.

Document Conventions
This documentation uses the following conventions:

Convention     Used for
Bold           Menus, commands, buttons, icons, dialog boxes, and dialog box options.
Monospace      Start menu selections, text you must type, and programming code.
Italic         Options in text or programming code you must type.

Where to Find Additional Information


Wonderware offers a variety of support options to answer questions on Wonderware products and their implementation.

ArchestrA Community Website


For timely information about products and real-world scenarios, refer to the ArchestrA Community website: http://www.archestra.biz. The ArchestrA Community website is an information center where users, systems integrators (SIs) and OEMs can share information and application stories, obtain products and learn about training opportunities. A key component of this website is the Application Object Warehouse, a constantly growing resource that provides downloadable ArchestrA objects, including a range of shareware products. In the future, objects from the Invensys-driven object library will be available for purchase. Third parties are also encouraged to submit their own ArchestrA objects for inclusion.


Technical Support
Before contacting Technical Support, please refer to the appropriate chapter(s) of this manual and to the User's Guide and Online Help for the relevant FactorySuite A2 System component(s). For local support in your language, please contact a Wonderware-certified support provider in your area or country. For a list of certified support providers, see http://www.wonderware.com/about_us/contact_sales.

•  E-mail: Receive technical support by sending an e-mail to your local distributor or to support@wonderware.com.
•  Online: You can access Wonderware Technical Support online at http://www.wonderware.com/support/mmi.
•  Telephone: Call Wonderware Technical Support at the following numbers:

•  U.S. and Canada (toll-free): 800-WONDER1 (800-966-3371), 7 a.m. to 5 p.m. (Pacific Time)
•  Outside the U.S. and Canada: (949) 639-8500

If you need to contact technical support for assistance, please have the following information available:

•  The type and version of the operating system you are using (for example, Microsoft Windows XP Professional).
•  The exact wording of the error messages encountered.
•  Any relevant output listing from the Log Viewer or any other diagnostic applications.
•  Details of the attempts you made to solve the problem(s) and your results.
•  Details of how to reproduce the problem.
•  If known, the Wonderware Technical Support case number assigned to your problem (if this is an ongoing problem).

When requesting technical support, please include your first, last and company names, as well as the telephone number or e-mail address where you can be reached.


CHAPTER 1

Planning the Integration Project

The FactorySuite A2 System project begins with a thorough planning phase. This chapter explains the FactorySuite A2 project workflow. The workflow is designed to make engineering efforts more efficient by completing specific tasks in a logical and consistent (repeatable) sequence.

Contents
    FactorySuite A2 Project Workflow


FactorySuite A2 Project Workflow


The project information resulting from the planning phase becomes a roadmap (project template) when creating the FactorySuite A2 System using the Integrated Development Environment (IDE). The more detailed the project plan, the less time it takes to create and implement the application, with fewer mistakes and rework.

FactorySuite A2 Workflow Summary


The following figure (workflow diagram) summarizes the sequential tasks necessary to successfully complete a FactorySuite A2 project:
1.  Identify Field Devices and Functional Requirements
2.  Define Naming Conventions
3.  Define the Area Model
4.  Plan Templates
5.  Define the Security Model
6.  Define the Deployment Model

Each task is detailed on the following pages, and includes a checklist (summary) where applicable.

Identify Field Devices and Functional Requirements


The first project workflow task identifies field devices that are included in the system. Field devices include components such as valves, agitators, rakes, pumps, Proportional-Integral-Derivative (PID) controllers, totalizers, and so on. Some devices are made up of more base-level devices. For example, a motor is a device that may be part of an agitator or a pump. After identifying all field devices, determine the functionality for each.


Field Devices Checklist


A. To identify field devices, refer to a Piping and Instrumentation Diagram (P&ID). This diagram shows all field devices and illustrates the flow between them. A good P&ID ensures the application planning process is faster and more efficient. Verify that the P&ID is correct and up-to-date before beginning the planning process. The following figure shows a simple P&ID:
[Figure: simple P&ID showing instruments and valves, including FIC 301, FIC 402, LIC 401, PT 301, PT 401, TT 301, FT 302, FT 401, CT 301, CT 401, LT 402, FV103, FV401, FV402, FV403, DRIVE 3, and DRIVE 4.]

The key for this P&ID is as follows:

    FIC = Flow Controller
    PT = Pressure Transmitter
    TT = Temperature Transmitter
    FT = Flow Transmitter
    CT = Concentration Transmitter
    LT = Level Transmitter
    LIC = Level Controller
    FV = Flow Valve

B. Examine each component in the P&ID and identify each basic device. For example, a simple valve can be a basic device. A motor, however, may be composed of multiple basic devices.


C. Once a complete list is created, group the devices according to type, such as by Valves, Pumps, and so on. Consolidate any duplicate devices into common types so that only a list of unique basic devices remains, and then document them in the project planning worksheet. Each basic device is represented in the IDE as an ApplicationObject. An instance of an object must be derived from a defined template. The number of device types in the final list will help determine how many object templates are necessary for your application. Group multiple basic objects to create more complex objects (containment). For more information on objects, templates, and containment, see the IDE documentation for the Industrial Application Server.

Functional Requirements Checklist


D. Define the functional requirements for each unique device. The functional requirements list includes:

•  User Defined Attributes: Determine the types of attributes the object will have. Attributes are parameters of the object that can access data from other objects as well as provide access to their own data to other objects (inputs and outputs).
•  Scripting: What scripts will be associated with the device? Specify scripts both for self-configuring the object as well as for run-time operation (see the script sketch after this list).
•  Historization: Are there process values associated with this device that you want to historize? How often do you want to store the values? Do you want to add change limits for historization?
•  Alarms and Events: Which attributes require alarms? What values do you want to be logged as events?

Note The Industrial Application Server IDE's alarms and events provide similar functionality to what is provided within InTouch Software.

•  Security: Which users will have access to the device? What type of access is appropriate? For example, you may grant a group of operators read-only access to a device, but allow read-write access for an administrator. You can set up different security for each attribute of a device.

All the above functional requirement areas are discussed in detail in this Deployment Guide.
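For the Scripting item above, the following QuickScript .NET fragment is a minimal sketch of the kind of run-time script that might be planned for a basic device; it is illustrative only. OLS (an open limit switch input) and RunSeconds are hypothetical UDAs, and the 1000 ms periodic trigger is an assumed configuration, not a requirement.

    ' Periodic script (assumed 1000 ms trigger) that accumulates run time for a valve.
    ' OLS and RunSeconds are hypothetical UDAs defined on the device template.
    IF Me.OLS THEN
        Me.RunSeconds = Me.RunSeconds + 1;
    ENDIF;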


Define Object Naming Conventions


The second workflow task defines naming conventions for templates, objects, and object attributes. Naming conventions should adhere to:

•  Conventions in use by the company.
•  ArchestrA IDE naming restrictions. For information on allowed names and characters, see the IDE documentation.

The following (instance) tagname is used as an example: YY123XV456. It has the following attributes: OLS, CLS, Out, Auto, Man.

The following figure shows the differences between the HMI and ArchestrA naming conventions:

HMI (individual tags):
    YY123XV456\OLS
    YY123XV456\CLS
    YY123XV456\Out
    YY123XV456\Auto
    YY123XV456\Man

ArchestrA (one object with attributes):
    Object: YY123XV456
    Attributes: .OLS, .CLS, .Out, .Auto, .Man

Create references using the following IDE naming convention:

    <objectname>.<attributename>

For example: YY123XV456.OLS
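As a minimal, hedged illustration of this convention in a QuickScript .NET script, the fragment below reads the OLS attribute of the example object; ValveOpen is a hypothetical UDA on the executing object, introduced only for this sketch.

    ' Reference another object's attribute using <objectname>.<attributename>.
    ' ValveOpen is a hypothetical UDA on the object that runs this script.
    Me.ValveOpen = YY123XV456.OLS;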


Define the Area Model


The third workflow task defines the Area model. An Area is a logical object group that represents a portion of the physical plant layout. For example, Receiving Area, Process Area, Packaging Area and Dispatch Areas are all logical representations of a physical plant area. Define and document all necessary Plant Areas to ensure each AutomationObject is assigned to its relevant Area. Note The default installation creates the Unassigned Area. All object instances will be assigned to this Area unless new areas are created.

Area Model Checklist


A. Create all Areas first. An object instance can then be easily assigned to the correct Area; otherwise, you will have to move them out of the Unassigned Area later.

B. Create a System Area. Assign instances of WinPlatform and AppEngine objects to the System Area. WinPlatform and AppEngine objects are used to support communications for the application, and are not necessarily relevant to a plant-related Area.

C. Group Alarms according to Areas.

D. Areas can be nested. Sub-areas can be assigned to a different AppEngine on a different platform.

E. When building an Area hierarchy, remember that the base Area (assigned to a Platform) determines how underlying objects are deployed.

F. If a plant area (physical location) contains two computers running AutomationObject Server platforms, two logical Areas must be created for the single physical plant area. It may be practical to create object instances for one Area at a time. If using this development approach, mark the Area as Default, so that each object instance is automatically assigned to the Default Area. Before creating instances in another Area, change the default setting to the new Area.

G. Equate various Areas to Alarm Groups. Alarm displays can easily be filtered at the Area level (see the query sketch after this list).

For more information on Areas, see the IDE documentation.
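As a hedged example of item G, an InTouch alarm display can typically be pointed at a single Area through the Galaxy alarm provider by using an alarm query string of the following form; Line1_Area is a hypothetical area name, and the exact provider syntax should be confirmed against the alarm client documentation.

    \Galaxy!Line1_Area

Multiple such entries, separated by spaces, can typically be combined in one query to cover several areas.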


Plan Templates
The fourth workflow task determines the necessary object "shape" templates. A Shape template is an object that contains common configuration parameters for derived objects (objects used multiple times within a project). The Shape Template is derived from the $BaseTemplate object and is designed to represent baseline or "generic" physical objects, or to encapsulate specific, baseline functionality within the production environment. Both the Shape Templates and child Template instances are called ApplicationObjects.

For example, multiple instances of a certain valve type may exist within the production environment. Create a Shape valve template that includes the required basic valve properties. The Shape Template can now be reused multiple times, either as another template or as an object instance. If changes are necessary, they are propagated to the derived object instances. Use the drag-and-drop operation within the IDE to create object instances.

The following figure shows multiple instances (Valve001, -002, etc.) derived from a single object template ($Valve):

Note For a practical example of a Shape Template, see "Using "Shape" Template Objects" on page 148.

Industrial Application Server is shipped with a number of pre-defined base templates to help you create your application quickly and easily. The base templates provided with Industrial Application Server are summarized in "Base Template Functional Summary" on page 112. Determine whether any of them functionally match the requirements of the devices on your list. If the base templates do not satisfy the design requirements, create (derive) new shape templates or object instances from the $UserDefined base template.


[Figure: Template Toolbox view of Application Objects, showing the base templates $AnalogDevice, $DiscreteDevice, $FieldReference, and $UserDefined, and the derived template $Valve.]

A child template derived from a base (parent) template can be highly customized. Implement user-defined attributes (UDAs), scripting, and alarm and history extensions.

Note Use the Galaxy Dump and Load Utility to create a .csv file, which can then be modified using a text editor. Then load the .csv file back into the Galaxy Repository. This enables quick and easy bulk configuration edits.

Template Derivation
Since templates can be derived from other templates, and child templates can inherit properties of the parents, establish a template hierarchy that defines what is needed before creating other object templates or instances. Always begin with the most basic template for a type of object, then derive more complicated objects. If applicable, lock object attributes at the template level, so that changes cannot be made to those same attributes for any derived objects.

A production facility typically uses many different device models from different manufacturers. For example, a process production environment has a number of flow meters in a facility. A base flow meter template would contain those fields, settings, and so on, that are common to all models used within the facility. Derive a new template from the base flow meter template for each manufacturer. The derived template for the specific manufacturer includes additional attributes specific to the manufacturer. A new set of templates would then be derived from the manufacturer-specific template to define specific models of flow meters. Finally, instances would be created from the model-specific template.

Note For detailed examples of template derivation, see Chapter 5, "Working with Templates." For more information on templates, template derivation, and locking, see the IDE documentation.

Template Containment
Template containment allows more advanced structures to be modeled as a single object. For example, a new template called "Tank" is derived from the $UserDefined base or shape template. Use the instance to contain other ApplicationObjects that represent aspects of the tank, such as pumps, valves, and levels.


Then, derive two $DiscreteDevice template instances called "Inlet" and "Outlet," and configure them as valves. Derive an AnalogDevice template instance called "Level," and contain them within the Tank template. The containment hierarchy is as follows:

$Tank
    $V101 [Inlet]
    $V102 [Outlet]
    $LT102 [Level]
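As a hedged sketch of how contained objects are referenced once instances exist, assume Tank001 is an instance created from $Tank; contained objects are then addressed by their hierarchical (contained) names. TankLevel is a hypothetical UDA used only for this illustration.

    ' QuickScript .NET fragment: read the contained Level object's process value.
    ' "Level" is the contained name from the hierarchy above; Tank001 is an assumed instance.
    Me.TankLevel = Tank001.Level.PV;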
Note Deeply nested template/container structures can slow the check-in of changes in IDE development and propagation.

Two options are available when defining object properties: template containment and user-defined attributes (UDAs).

Use template containment to create a higher-level object with lower-level objects. This practice works best when the lower-level object also has many components and may contain even lower-level objects. However, when adding the lowest-level object to a template, it is possible to use either template containment or user-defined attributes. Both allow for an external I/O point link and historization. If required attributes (such as complex alarms, setpoints, or other features) are readily available in a template, use template containment. If the lower-level object is very basic, use a UDA (user-defined attribute). It is always valid to use a contained object, even if it is a simple property.

Always use a contained object for I/O points and use a user-defined attribute for memory or calculated values. How this is accomplished is up to the application designer, and should be decided in advance for project consistency.

Object Template Checklist


A. Document which existing templates can be used for which objects, and which templates are created from scratch. For information on a particular object template, see the Help file for that object.

B. Design your containment model at the template level before generating large object instance quantities.

C. Create instances from the top "container" template of a hierarchical set of contained templates. Such template hierarchies should be tested with one or two instances before proceeding to the generation of numerous instances. Any change by insertion or removal of a contained template in the hierarchy does not result in propagation of new insertions or removals in the instance hierarchies. For instances of the containment hierarchy, insertions and removals must be managed individually. However, changes within already included contained templates can be automatically propagated by locking.


Note For detailed information on working with templates, see Chapter 5, "Working with Templates."

Define the Security Model


The fifth workflow task defines the security model. The following basic concepts are reviewed in order to reinforce understanding of the ArchestrA security model:

•  Users: A user is each individual person who will be using the system; for example, John Smith and Peter Perez.
•  Roles: Roles define groups of users within the security system. Roles usually reflect the type of work performed by different groups within the factory environment; for example, Operators and Technicians.
•  Permissions: Permissions determine what users are allowed to do within the system; for example, Operate, Tune, and Configure.
•  Security Groups: A security group is a group of objects with the same security characteristics. The purpose of a security group is to simplify object security management by avoiding the need to assign security permission for each role to each individual object. Security groups typically map to Areas and reflect a physical location of your plant, but an area will have more than one security group if multiple levels of security are required within the area. For example, it may be necessary to assign specific Technicians to the Line_1 security group, but not to the Line_2 security group.

Security Model Checklist


A. Define the Users, Roles, Permissions, and Security Groups necessary to implement security for the factory environment. Select Users and Roles previously defined within the Operating System security model, or define them within the IDE. A combination of both security types is also possible. Using Operating System users and Roles facilitates object deployment and makes future maintenance easier.

B. Determine security settings for writable attributes of objects. The security options for writable attributes consist of: Read Only, Operate, Tune, Secured Write, Verified Write, and Configure. Review the functional worksheet that lists the objects (and their attributes). Security permissions reflect the users' rights to change the attribute value. An Operate permission requirement does not mean that the user must be an Operator. A QA inspector might have Operate permission to change a value on an object that collects QA data, while an operator on the same production line does not have this permission.


To set up Security using the IDE

1. Configure the attribute security for Template objects (at the Template level).
2. Create security groups.
3. Create roles and assign them to security groups.
4. Select permissions and grant them to roles.
5. Define users and assign them to roles.

For more information on security configuration, see the IDE documentation. Note For detailed information about FactorySuite A2 Security, see Chapter 7, "Architecting Security."

Define the Deployment Model


The last workflow task defines the deployment model that specifies where objects are deployed. In other words, the deployment model defines which nodes will host the various AutomationObjects. Each computer in the FactorySuite A2 System network must have a Platform object, AppEngine object, and Area object deployed to it. For example, KT101, LT 101 and MK101 are all areas in the following figure:
[Figure: IDE Deployment view of MyGalaxy, showing the Unassigned Host, platforms MMark08 and MMark13, AppEngine001, the areas KT101, LT101, and MK101, and the objects LT112HT239, LT112P200, LT112PT221, and LT112PT223.]

The objects deployed on particular platforms and engines define the objects' "load" on the platform. The load is based on the number of I/O points, the number of user-defined attributes (UDAs), etc. The more complex the object, the higher the load required to run it. Note For object types and target deployment node recommendations (such as DIObjects), see Chapter 10, "Assessing System Size and Performance." After deployment, use the Object Viewer to check communications between nodes and determine if the system is running optimally. For example, a node may be executing more objects than it can easily handle, and it will be necessary to deploy one or more objects to another computer.


Note For more information on deployment, see IDE documentation.

Document the Planning Results


Determine how to document the project planning results before beginning the planning phase. Use Microsoft Excel (or other spreadsheet application) to document the list of devices, the functionality of each device, process areas to which the devices belong, etc. For example:

[Figure: example project planning worksheet.]


CHAPTER 2

Identifying Topology Requirements

System and information requirements are unique to each manufacturing domain. To control equipment, computers must provide real-time response to interrupts. To plan production, scheduling systems must consider sales commitments, routing costs, equipment downtime, and numerous other variables. Enterprise system and information requirements are satisfied by architecting effective network topologies and implementing software to leverage the topology.

This chapter describes common FactorySuite A2 System topologies. They include application components such as Industrial Application Server, IndustrialSQL Server Historian, InTouch HMI software, Wonderware I/O Servers, OLE for Process Control (OPC) Servers, and Data Access Servers (DAServers). The topology configurations include descriptions and "best practice" recommendations for specific components and functionality.

Note For updated information on system requirements, see the user guides or readme files in the installation directory of the appropriate product CD. Pay particular attention to the requirements regarding the version and Service Pack level of the operating system and other application components.

Contents
    Topology Component Distribution
    Topology Categories
    General Topology Planning Considerations
    Best Practices for Topology Configuration
    I/O Server Connectivity
    Extending the IAS Environment


Topology Component Distribution


A Galaxy encompasses the whole supervisory control system, which is represented by a single logical namespace and a collection of Platforms, AppEngines, and objects. The Galaxy defines the namespace in which all components and objects reside.

Because of its distributed nature and common services, Industrial Application Server does not require expensive server-class or fault-tolerant computers to enable a robust industrial application. The Industrial Application Server distributes objects throughout a distributed (networked) environment, allowing a single application to be split into a number of different component objects, each of which can run on a different computer.

Before exploring the following FactorySuite A2 System topologies, review the main components and how they will be distributed based on requirements and functionality. The main topology components are:

•  Galaxy Repository (Configuration Database)
•  AutomationObject Server Node (AOS)
•  Visualization Node
•  I/O Server Node
•  Engineering Station Node
•  Historian Node
•  SuiteVoyager Portal

The following figure identifies each topology component:

(Figure: three Visualization Nodes, the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server connected to the Supervisory Network through a network device (switch or router); the I/O Server also connects to the PLC Network.)


Galaxy Repository (Configuration Database)


The Galaxy Repository (also called Configuration Database or "GR") can be installed on a dedicated node or on the Engineering Station.

Note The following topology figures show the Galaxy Repository as the Configuration Database on the Engineering Node.

The Galaxy Repository manages the configuration data associated with one or more Galaxies. This data is stored in individual databases, one for each Galaxy in the system. Microsoft SQL Server 2000 (Standard Edition) is the relational database used to store the data.

Note Install the Galaxy Repository on the same AOS node only when using a single node. For information on installing the GR on the same node with any other components, see "Single Node System Implementation" on page 234.

During run-time, the GR communicates with all nodes in the Galaxy to keep them updated on global changes, such as security model modifications. Even though it is possible to disconnect the GR from the Galaxy and still keep the remaining nodes in production, it is recommended to maintain the GR connection to the Galaxy so that all global changes are transferred when they occur.

The Galaxy Repository is accessed when the objects in the database are viewed, created, modified, deleted, deployed, or uploaded. The Galaxy Repository is also accessed when a running object attempts to access another object that has not been previously referenced.

Best Practice
If working in a distributed network context (wide-area networks, distributed SCADA systems characterized by slow connections), install the Galaxy Repository on a laptop computer and carry it to remote sites as a portable resource for application maintenance. Connect the portable configuration database node to the local node via a dedicated local area network (LAN) connection to expedite the process of deploying/undeploying objects and creating and configuring templates. Remember that if the Galaxy Repository is not available on the network, existing objects cannot be deployed or undeployed to/from any Platform. However, any deployed objects will continue to operate normally. Create a ghost image of the portable GR Node, and perform frequent backups in case the laptop is damaged or lost.


Configuration and run-time components for the Configuration Database node are described in the following table:

Configuration Components: Integrated Development Environment (IDE), Galaxy Repository*, Microsoft SQL Server 2000*
Run-Time Components: Bootstrap*, Platform*

* Required component

AutomationObject Server Node


The AutomationObject Server node provides the processing resources for AppEngines, Areas, Application Objects, and DeviceIntegration (DI) Objects. The AutomationObject Server node requires a Platform to be deployed.

The AutomationObject Server node functionality can be combined with a visualization node, depending on the process requirements and system capabilities. A distributed local network topology takes advantage of this type of configuration to provide flexibility to the system.

Configuration and run-time components for the AutomationObject Server node are described in the following table:

Configuration Components: Integrated Development Environment (IDE)
Run-Time Components: Bootstrap*, Platform*, AppEngine*, Areas, ApplicationObjects, DIObjects

* Required component


Visualization Node
A Visualization node is a computer running InTouch Software on top of a Platform. The Platform provides for communication with any other Galaxy component via the Message Exchange (MX) protocol.

Configuration and run-time components for the Visualization node are described in the following table:

Configuration Components: (none)
Run-Time Components: Bootstrap*, Platform*, InTouch Software 8.0 or higher OR InTouch View 8.0 or higher*

* Required component

I/O Server Node


An I/O Server node functions as the data source for the Industrial Application Server system. I/O Servers communicate with external devices. Supported communication protocols: Dynamic Data Exchange (DDE), SuiteLink, and OPC.

Note An I/O Server node can also be an AutomationObject Server, depending on the available memory and CPU resources.

DIObjects on the Industrial Application Server side manage the communication between AutomationObjects and the I/O Servers. DIObjects provide connectivity to I/O data sources, and represent the corresponding server(s) as part of the plant model in the AutomationObject Server environment. In the case of DAServers, the corresponding DIObjects are so closely related that redeploying a DIObject to a different node uninstalls the DAServer locally and reinstalls it on the target machine.

DIObjects require a Platform and an AppEngine. These objects can reside either on the I/O Server node or on any AutomationObject Server node in the Galaxy.

Configuration and run-time components for the I/O Server node are described in the following table:

Configuration Components: I/O Server*
Run-Time Components: Bootstrap, Platform, InTouch Software 8.0 or higher or InTouch View 8.0 or higher, I/O Server*

* Required component


Best Practice
Observe the following guidelines to optimize I/O data transmission:

Always deploy the DIObject to the same node where the I/O data source is located, regardless of the protocol used by the I/O data source (DDE, SuiteLink, OPC).

If the I/O data source is on a node that is remote to the AutomationObject Server, the communication between the nodes is highly optimized by the MX (Message Exchange) protocol. A particular benefit is gained with this configuration when using OPC servers on remote nodes, since the system uses the MX protocol instead of DCOM communication between nodes.

If the I/O data source is located on the same node as the AutomationObject Server, the communication is local, minimizing the travel time of data through the system.

Note I/O Server installation sequence is important: always install the oldest software first and the most recent software last. For example, install I/O Servers first, then DAServers, then the Bootstrap, and so on.

Engineering Station Node


All of the recommended topologies include a dedicated Engineering Station node for maintenance purposes. The Engineering Station node contains a FactorySuite Development package, which consists of components such as the Integrated Development Environment (IDE) and InTouch WindowMaker.

The GR (Galaxy Repository) could also reside on this node. Thus, you could implement any required changes in the system from the Engineering Station node using the IDE, which would then access the local configuration database. This configuration provides flexibility in maintaining remote sites, since you can use a laptop computer as the Engineering Station containing the configuration database node and gain access to the AutomationObject Server node via a LAN connection. With the IDE and the Galaxy Repository on the same node, deploying or undeploying large applications does not add network traffic in the Galaxy from the IDE accessing configuration data on a remote node.

Configuration and run-time components for the Engineering Station node are described in the following table:

Configuration Components: Integrated Development Environment (IDE), InTouch WindowMaker 8.0 or higher
Run-Time Components: Bootstrap*, Platform

* Required component

The IDE does not require a Platform. However, if Object Viewer is used on an Engineering Station node, a Platform is required. For more information about Object Viewer, see the Object Viewer documentation.


Best Practice
When remote off-site access to the Galaxy Repository is required by means of the IDE, use a Terminal Services session or Remote Desktop connection to the Galaxy Repository node where the IDE has also been installed.

Important! In order to launch an IDE session, the session's user account must be a member of the Administrators group on the Terminal Server node.

For more information about integrating InTouch Software for Terminal Services with other FactorySuite A2 System components, see the Terminal Services for InTouch Deployment Guide.

The Engineering Station node also hosts the development tools to modify InTouch Software applications using WindowMaker.

Historian Node
The Historian node is used to run IndustrialSQL Server Historian software. IndustrialSQL Server Historian stores all historical process data and provides real-time data to FactorySuite client applications such as ActiveFactory and SuiteVoyager Software. The Historian Node does not require a Platform.

The AutomationObject Server pushes data (configured for historization) to the Historian node using the Manual Data Acquisition Service (MDAS) packaged with Industrial Application Server and IndustrialSQL Server Historian.

Important! MDAS uses DCOM to send data to IndustrialSQL Server Historian. Ensure that DCOM is enabled (not blocked) and that TCP/UDP port 135 is accessible on both the AppServer and IndustrialSQL Server Historian nodes. The port may not be accessible if DCOM has been disabled on either of the computers, or if a router between the two computers blocks the port.

Configuration and run-time components for the Historian Node are included in the following table:

Configuration Components: Microsoft SQL Server 2000*
Run-Time Components: IndustrialSQL Server Historian*

* Required component

Best Practice
Most system topologies combine the Historical and Alarm databases on the Historian Node. Configure the alarm system using the Alarm Logger utility, which creates the appropriate database and tables in Microsoft SQL Server. For requirements and recommendations for alarm configuration, see Chapter 9, "Implementing Alarms and Events." For information about historization, see Chapter 8, "Historizing Data."


SuiteVoyager Portal
A server running a SuiteVoyager Software portal can be incorporated into any Galaxy. Use the Win-XML Exporter to convert InTouch windows to XML format, so SuiteVoyager Software clients can access real-time data from the Galaxy. A Platform must be deployed on the SuiteVoyager Portal for SuiteVoyager Software to access the Galaxy.

Configuration and run-time components for the SuiteVoyager Portal are described in the following table:

Configuration Components: SuiteVoyager Software 2.0 SP1 or higher*
Run-Time Components: Bootstrap*, Platform*, SuiteVoyager Software 2.0 SP1 or higher*

* Required component

For information on deployment options for SuiteVoyager Software, see the SuiteVoyager Software documentation.


Topology Categories
The following information describes high-level topology categories using FactorySuite A2 System components.

Distributed Local Network


This topology is designed for medium-sized systems where the processing requirements of each software component can be easily handled by the nodes while still providing the projected performance. The primary characteristic of this topology type is that Visualization and AutomationObject Server functionalities coexist in the same node, called a Workstation. Several Workstation nodes (defined below) share data and run multiple functional components on each node. Different Workstation nodes coexist on a locally distributed network. The following Workstation characteristics are assumed unless otherwise noted:

Workstation Node
The Visualization and AutomationObject Server components are combined on the same node. Both components share the Platform, which handles communication with other nodes in the Galaxy. The Platform also allows for deployment/undeployment of ApplicationObjects. If you plan to combine the Visualization and AutomationObject Server components on the same node, evaluate the resource requirements for the following:

Active tags-per-window.
ActiveX controls displayed.
Alarm displays.
Trending.

These values will impact AutomationObject Server performance.

Note For details on Alarm System configuration, see Chapter 9, "Implementing Alarms and Events."

The relationship between "standalone" Workstations on a distributed local network is also known as "Peer-to-Peer."


The following figure illustrates the software components and their distribution:
(Figure: Workstation nodes, the Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server connected to the Supervisory Network through a network device (switch or router); the I/O Server also connects to the PLC Network.)

Client operating systems such as Windows XP Professional can manage up to 10 simultaneous active connections with other nodes. If the system is larger than 10 nodes, Windows Server 2003 must be used for all nodes. For additional information, see the General Topology Planning Considerations section later in this chapter.

I/O Server Node(s)


Different I/O data sources have different requirements. Two main groups are identified:

Legacy I/O Server applications (SuiteLink, DDE, and OPC Servers) do not require a Platform on the node on which they run. They can reside on either a standalone or a Workstation node. However, the DIObjects used to communicate with those data sources, such as the DDESuiteLinkClient, OPCClient, and InTouchProxy objects, must be deployed to an AppEngine on a Platform. Although it is not required that these DIObjects be deployed on the same node as the data server(s) they communicate with, it is highly recommended in order to optimize communication throughput.

For Device Integration objects like ABCIP and ABTCP DINetwork objects, both the DAServer and the corresponding DIObjects must reside on the same computer hosting an AppEngine.

Best Practice
I/O Servers can run on Workstations, provided the requirements for visualization processing, data processing, and I/O read-writes can be easily handled by the computer. Run the I/O Server and the corresponding DIObject on the same node where most or all of the object instances (that obtain data from that DIObject) are deployed. This implementation expedites the data transfer between the two components (the I/O Server and the object instance), since they both reside on the same node. This implementation also minimizes network traffic and increases reliability.


However, it is good practice to evaluate the overhead necessary to run each component. The following figure illustrates implementing I/O Servers on the same node:
(Figure: combined Workstation/I/O Server nodes on the Supervisory Network, together with the Historian (Data and Alarms), Engineering Station with the Configuration Database, and SuiteVoyager Portal; the combined nodes connect to the PLC Network.)

Historian Node
The IndustrialSQL Server Historian software must run on a designated node.

Engineering Station and GR (Configuration Database)


The Engineering Station node hosts the IDE and InTouch WindowMaker to facilitate Industrial Application Server and InTouch Software application maintenance. As the GR node, it hosts the SQL Server database that stores the Galaxy's Configuration Data.

SuiteVoyager Portal
The SuiteVoyager Portal supplies real-time and historical data to web clients.


Client/Server
This topology configuration includes dedicated nodes running AutomationObject Servers, while visualization tasks are performed on separate nodes. The benefits of this topology include usability, flexibility, scalability, system reliability, and ease of system maintenance, since all configuration data resides on dedicated servers.

The client components (represented by the visualization nodes) provide the means to operate the process using applications that provide data updates to process graphics. The clients have a very light data processing load. The AutomationObject Server nodes share the load of data processing, alarm management, communication to external devices, security management, etc.

For details on the implementation of AppEngine and I/O Server redundancy in a client/server configuration, see the Widely-Distributed Network section later in this chapter. The following figure illustrates a client/server topology:

(Figure: three Visualization Nodes, the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server on the Supervisory Network; the I/O Server connects to the PLC Network.)

This topology is scalable to include a greater number of servers. Including more servers distributes data processing loads and enables a higher load of I/O reads/writes. Client nodes can be added when additional operator stations are needed.


General Topology Planning Considerations


When deciding on the topology to implement with Industrial Application Server, it is important to consider different topology variations. For example, it is critical to evaluate variations of the Client/Server topology, keeping in mind process requirements such as the number of I/O points, the update rate of variables at the server and client level, etc. The peer-to-peer nature of the communication between the Platforms requires adequate support for network connections.

Windows XP Operating Station Notes


Client operating systems such as Windows XP Professional can manage up to 10 simultaneous active connections with other nodes. In a pure client/server architecture, it is possible to have more than 10 client nodes running Windows XP Professional. The following requirements must be met:

Client nodes are only running a Platform and InTouch Software. The IDE is not installed on the client nodes.

The SMC (System Management Console) is installed with a Bootstrap and InTouch Software. However, the Platform Manager snap-in to the SMC should not be launched, as it connects to all Platforms in the Galaxy.

Object Viewer is not run either from the SMC (Platform Manager) or from the executable file installed in the application directory.

The client nodes are not running other applications, ActiveX objects, or functions that request data from remote sources (for example, ActiveFactory). This could cause more open connections on the client node. Also, consider any network shares on the client nodes as possible open connections.

None of the client platforms are configured as InTouch Alarm Providers. To report alarms from client Platforms, place all Platforms in an Area hosted by any of the servers.

A single client node does not require data from more than 10 server nodes.

Note This topology was tested and the above requirements validated on a system that included 16 InTouch Software client nodes and five AutomationObject Server nodes executing ApplicationObjects. The Galaxy Repository was installed on a dedicated server. For more information on defining system size, see Chapter 10, "Assessing System Size and Performance."

Finally, consider different options when deploying I/O Servers in a Galaxy, such as whether to run them on AutomationObject Servers or on dedicated computers, and redundancy strategies.


I/O Servers on AutomationObject Server Nodes


Both components (I/O Servers and AutomationObject Servers) can reside on the same node if I/O and data processing loads are not a constraint. In a topology that includes multiple I/O Servers, the load is balanced so that each AutomationObject Server executes a set of I/O Servers, DIObjects, and ApplicationObjects. The goal of this topology is to optimize network performance by reducing the network traffic between Galaxy components. Associated DIObjects and ApplicationObjects should run on the same Platform in order to minimize network traffic. The following figure illustrates this concept:

(Figure: three Visualization Nodes and combined AutomationObject Server/I/O Server nodes on the Supervisory Network, together with the Historian (Data and Alarms), Engineering Station with the Configuration Database, and SuiteVoyager Portal; the combined nodes connect to the PLC Network.)

In this example, a single I/O Server runs on each AutomationObject Server node. The DIObjects required to communicate with a particular I/O Server are deployed to the same node, as well as all of the ApplicationObjects that will be obtaining data from the I/O Server. Communication between the DIObjects and the ApplicationObjects is handled via Message Exchange.


Dedicated I/O Server Nodes


It may be beneficial to separate the I/O and data processing tasks so that they run on different servers. The following figure shows a dedicated node running all I/O Servers:

(Figure: three Visualization Nodes, the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and a dedicated I/O Server node on the Supervisory Network; the I/O Server node connects to the PLC Network.)

To optimize the system performance and data throughput, deploy a Platform to the dedicated I/O Server node. This Platform hosts the AppEngine and DIObjects that need to communicate with the I/O Servers. In this scenario, the communication between the I/O Server and the AutomationObject Server is handled by the Message Exchange (MX) protocol, which avoids the poor performance and complex security settings associated with DCOM.

Note For details on security considerations in the FactorySuite A2 environment, see "Securing FactorySuite A2 Systems" on page 200.


Legacy InTouch Software Applications


This section describes coexistence of legacy InTouch Software applications within new FactorySuite A2 systems.

Sharing an InTouch Software 7.11 application's data is possible without any additional work or software required on the InTouch Software side: at the AutomationObject Server level, configure an InTouchProxy object to access data from the InTouch 7.11 application using the SuiteLink protocol. To facilitate the system's configuration, this object's configuration editor enables browsing the InTouch tagname dictionary.

The following figure shows legacy InTouch 7.11 applications sharing data with the new Industrial Application Server system. This topology also contains InTouch View or InTouch run-time 9.0 clients that leverage data provided by the Galaxy.

(Figure: a legacy InTouch 7.11 node and two Visualization Nodes on the Supervisory Network with the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server; the I/O Server connects to the PLC Network.)


Terminal Services
Terminal Services provides the capability to run several sessions of the same InTouch Software application, or of different applications, in Terminal Server sessions. Terminal Services technology enables thin-client computer communication to a Terminal Server node, where multiple instances of InTouch Software applications run simultaneously. The software and hardware requirements for the client node are minimized, since no programs are required at the client station accessing the application.

A dedicated Terminal Server node is recommended for this topology. This node requires the following software components:

Configuration Components: (none)
Run-Time Components: Bootstrap*, Platform*, Terminal Services for InTouch Software 9.0 or higher*

* Required component

Only one Platform runs on a Terminal Server node, regardless of the number of sessions executed. However, the number of sessions affects the size of the license for the system.

Note For information on FactorySuite A2 Licensing, see the License Utility Guide.

Consider different configuration options when using Terminal Services, such as whether to run Terminal Services on the same node as the AutomationObject Server, or on a dedicated server node. Running Terminal Services on the same node as the AutomationObject Server presents a loading concern: AppEngine execution may be degraded by the client sessions' resource consumption.

Windows 2000 Server


Terminal Services should be installed on the designated Terminal Services node using the Remote Administration mode selection, which is the default selection at installation. This option consumes significantly fewer computer resources than Application Server mode when creating a Galaxy, or when importing or deploying an object. For detailed information on setting up Terminal Servers for InTouch Software on Windows 2000 Server, refer to the Terminal Services for InTouch Deployment Guide.


Windows Server 2003


The default installation mode is Remote Desktop for Administration (formerly known as Remote Administration mode). The mode selection is not available at installation. The following information contains topology descriptions of Terminal Services in the FactorySuite A2 environment.

Dedicated Terminal Server Node


A Terminal Server node is dedicated to running multiple InTouch Software application instances. A Platform deployed on that node enables the server to communicate with any AutomationObject Server in the Galaxy.

(Figure: three Thin Clients on the Corporate Network connect to a dedicated InTouch Terminal Server, which connects through a network device (switch or router) to the Supervisory Network hosting the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server; the I/O Server connects to the PLC Network.)

This configuration prevents any failure at the Terminal Server from impacting the AutomationObject Servers. A problem at the Terminal Server node only affects the visualization nodes; the AutomationObject Server continues operating.

Note Configurations for the I/O-DA-OPC Server presented in the Client/Server topology can be included in a Terminal Server topology.


Server-Based Load Balancing for Terminal Server Clients


Windows Network Load Balancing or Citrix MetaFrame Load Balancing can be leveraged effectively within the FactorySuite A2 environment. Multiple dedicated Terminal Server nodes share the network load from all the clients that need access to the application sessions. In the event of a failure of any of the servers, the load from the clients originally connected to that server is handled by the remaining servers. For detailed information about implementing load balancing, see the Terminal Services for InTouch Deployment Guide.

(Figure: three Thin Clients on the Corporate Network connect to a load-balanced farm of InTouch Terminal Servers, which connects through a network device (switch or router) to the Supervisory Network hosting the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server; the I/O Server connects to the PLC Network.)


Terminal Services on the AutomationObject Server Node


The following figure shows Terminal Services and the AutomationObject Server residing on a common node. All of the AutomationObject Server functionality runs in the console, while clients run multiple sessions of InTouch Software on that same node. I/O Servers could also run in the console of the Terminal Server node or on any other computer.

(Figure: three Thin Clients connect to a combined AutomationObject Server/Terminal Server node on the Supervisory Network, together with the Historian (Data and Alarms), Engineering Station with the Configuration Database, SuiteVoyager Portal, and I/O Server; the I/O Server connects to the PLC Network.)

When considering this deployment, keep in mind that any problem in the Terminal Server/AutomationObject Server node impacts not only the visualization nodes, but also the I/O data collection and the processing of objects on the AutomationObject Server. For details about Terminal Services for InTouch Software, see the Terminal Services for InTouch Deployment Guide. For additional considerations for this configuration, see Chapter 10, "Assessing System Size and Performance."


Using the IDE with Terminal Services


Application components are configured and deployed from the Integrated Development Environment (IDE) to target workstations and servers. The IDE can be used to configure and deploy an application to the Terminal Server, where multiple instances of InTouch Software applications can run simultaneously. Use a Remote Desktop connection to the Galaxy Repository node for remote IDE access, rather than installing another local IDE.

Important! In order to launch an IDE session, the session's user account must be a member of the Administrators group on the Terminal Server node.

(Figure: Terminal Services Thin Clients connect to an InTouch Terminal Server that bridges to the Ethernet/Supervisory Network, which hosts the AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, and I/O Server; a network device (switch or router) connects to the PLC Network.)


Widely-Distributed Network
A widely-distributed network spans a large geographical area. This type of network incorporates a variety of communication components and must account for network delays, intermittent traffic, and network outages. Widely-Distributed networks are also known as "Intermittent" networks: because of low bandwidth and/or high traffic, they tend to experience delays or breaks in communication. The following topology diagram includes three connection types:

Dial-up
Wireless/Radio
WAN


(Figure: Terminal Services thin clients and a visualization node (data, alarms, history) connect over a dial-up 28.8 kbps line and Ethernet to the Terminal Server, SuiteVoyager Portal, AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SCADAlarm, and I/O Server on the Supervisory Network; the Control Network reaches PLCs, RTUs with radio modems (RS-232), and level transmitters on an RS-485 network.)

Detailed information regarding Distributed Network configuration is included in Chapter 11, "Working in Wide-Area Networks and SCADA Systems."


Best Practices for Topology Configuration


The following Best Practices focus on configuring your network architecture and software components within a FactorySuite A2 system environment.

Note The following information focuses on non-redundant topologies where a single Network Interface Card (NIC) is used on each node. For detailed information about Redundancy and dual NIC configuration, see Chapter 3, "Implementing Redundancy."

Network Configuration
Verify Required Connection Settings
If your network administrator or Internet Service Provider (ISP) requires static settings, one or more of the following NIC settings is necessary:

A specific IP address.
DNS addresses.
DNS domain name.
Default gateway address.
WINS addresses.

Use Fixed IP Addresses on Industrial Application Server Nodes


Fixed IP addresses, while not required, are recommended in cases where using DHCP interrupts specific topology component functionality. For example, when the Historian node uses DHCP and its connection is broken and restored (planned maintenance, connection failure, etc.), the restoration of data via store/forward is delayed due to the DNS name resolution interval (no data is lost). This interval is generally between 12 and 15 minutes.

Note A regular disconnection of the Historian should not cause delay. The stored data is forwarded when the DNS name is automatically resolved, or when the Historian is successfully "pinged." The Ping operation invokes the name resolution operation.

Another example is when using InTouch Software and Terminal Services. Using DHCP in this case will disrupt some specific component functionality (wwLogger on version 7.1/7.11, SuiteLink, and Terminal Services load balancing configurations).

Note For details on InTouch Software and Terminal Services in this scenario, see Tech Note 358: InTouch, Terminal Services, and DHCP.


If you must use DHCP, keep in mind that it is possible to reserve IP addresses for specific computers; in the previous example, when the Historian node has a reserved address, store/forward operates without delay. Consult with IT staff for complete network configuration details.

Note Industrial Application Server 1.5 and later supports DHCP-assigned IP addresses.

Fine-Tune Protocol Settings

Specify Protocol Access Order


Network performance can often be improved by changing the access order of the protocols bound to the network providers. For example, suppose the LAN connection is enabled to access both NetWare and Microsoft Windows networks (which use IPX and TCP/IP, respectively), and your primary connection is to a Microsoft Windows network that uses TCP/IP. In that case:

Move Microsoft Windows Network to the top of the Network providers list on the Provider Order tab.

Move Internet Protocol (TCP/IP) to the top of the File and Printer Sharing for Microsoft Networks binding on the Adapters and Bindings tab.

Install and enable only the necessary network protocols. Limiting the number of protocols on your computer enhances its performance and reduces network traffic. If the computer encounters a problem with a network or dial-up connection, it attempts to establish connectivity by using every network protocol that is installed and enabled. By installing and enabling only the protocols used by the system, status information is returned efficiently.

Determine Available Network Bandwidth


Bandwidth information is typically managed by the IT group because network resources may be shared with other departments and operations. The ideal condition is to have continuous access to network status, to prevent and/or correct any network-related application issues. Some network devices, such as switches and routers, contain self-diagnostic tools that provide information on network performance using the OPC protocol. If you have access to such devices, you can configure instances of the OPC Proxy to monitor the network bandwidth.

Verify Basic Network Communication


Before attempting any deployment of applications or network operation, verify that each node in the system can be pinged by its node name. You may need to check the DNS configuration or configure the Hosts files. For example, from the Galaxy Repository node, you should try to ping (by name) the remote node to which a platform is going to be deployed.


Review Firewall Usage


If you have a corporate firewall, a hardened device (appliance), or a firewall software application on a dedicated, hardened platform, implementing a personal firewall on each PC is unnecessary.

Software Configuration
FactorySuite A2 Application Coexistence
When installing multiple FactorySuite A2 applications on one node, always install the oldest components first. For example, Wonderware I/O Server technology pre-dates ArchestrA-based technology. Common Components for DAServers are newer than those that come with most I/O Servers. Thus, install I/O Servers first, then DAServers.

Anti-Virus Software
Although it is desirable to run anti-virus software on each node in the system, it may impact the performance of some tasks. To reduce the effect of the anti-virus program, disable the auto-update feature of any virus scan software and manage the updates either manually or by a centrally-administered scheduler. The following locations should be excluded from scanning on the AutomationObject Server nodes:

C:\Program Files\ArchestrA\Framework\Bin\CheckPointer
C:\Program Files\ArchestrA\Framework\Bin\GalaxyData
C:\Program Files\ArchestrA\Framework\Bin\GlobalDataCache
C:\Program Files\ArchestrA\Framework\Bin\Cache

The virus scan should include the logger files (*.aeh), typically found in:

C:\Program Files\Common Files\ArchestrA

User Account Requirements


Take note of the username and password of the account that is created during installation of the FactorySuite A2 System components. As you expand a FactorySuite A2 system, you must use this same account on all computers that are part of the same Galaxy.

Use Time Synchronization


Configure the computers in a Galaxy to synchronize time at regular intervals. This is particularly important for alarm and data history. The Historian Node is a good candidate for the computer against which all other computers synchronize time.


Licensing Requirements
Be sure that the appropriate licenses are installed where required, for example, on the Galaxy Repository, Visualization nodes, etc. Note that an I/O Server license is required on every node that runs an I/O Server.

Note For detailed information on licensing requirements, see the License Utility Guide.

I/O Server Connectivity


The following section provides a high-level discussion of I/O Server implementation in the FactorySuite A2 environment.

I/O and DAServers: Best Practice


The server you use and the topology you construct depend upon your existing system and the needs of your process. See the Compatibility Matrix on the Wonderware eSupport website (http://www.wonderware.com/support/) for information about compatibility between FactorySuite components, Microsoft operating systems, and SQL Server releases. For example, an I/O Server might better meet the needs of a system with a number of legacy components, but would not itself be suitable for a system requiring OPC-based communication.

I/O Servers
I/O Servers provide connectivity for devices using DDE, FastDDE and SuiteLink protocols. Wonderware's I/O Servers can connect every FactorySuite 2000 and FactorySuite A2 System component. I/O Servers also connect various popular PLC, RTU, DCS and ESD systems. Wonderware's Rapid Protocol Modeler (RPM) Kit enables I/O Server customization to suit your needs. The RPM Kit can configure connections between FactorySuite applications and devices with standard and non-standard protocols. It handles serial and TCP/IP communications with either ASCII or binary protocols. The RPM Kit can be used to profile and save a protocol.

DAServers
The Data Access Server (DAServer) provides simultaneous connectivity between plant floor devices and SuiteLink, OPC and DDE/SL-based client applications that run under Microsoft's Windows 2000/2003/XP Professional operating systems. DAServers can operate with many current FactorySuite 2000 client components, as well as with FactorySuite A2 product offerings, when these are used with their associated DIObjects. The DAServer supports run-time configuration, device additions and device/server-specific parameter modifications.


Note If the DAServer has to use specific hardware, for example, a specialized card, it is quarantined until the physical hardware is installed.

The DAServer is like a driver: it can receive data from different controllers simultaneously. For example, a DAServer might use OPC to access data remotely on one machine, and use InTouch Software to communicate with another machine. When a DAServer transfers data, it also transfers a timestamp and quality codes.

The DAServer is flexible enough to be used in a variety of topologies, but some topologies are more efficient than others. For example, the DAServer can connect to the OPC Server directly across the network, or FactorySuite Gateway can be placed on the same machine as the OPC DAServer and SuiteLink can be used to link the server to devices. Of the two topologies, using FactorySuite Gateway is more efficient than connecting the DAServer directly to the OPC Server.

DIObject Advantages
Device Integration Objects (DIObjects) represent communication with external devices. DIObjects may be DINetwork Objects (for example, the Redundant DIObject) or DI Device Objects. DIObjects (and their associated AppEngine) can reside on any I/O, DA, or Automation Object Server node in the Galaxy. DIObjects allow connectivity to data sources such as DDE servers, SuiteLink servers, OPC servers, and existing InTouch Software applications. The advantages of using DIObjects are as follows:

When IDE software is installed on an Industrial Application Server, DIObjects allow you to configure, deploy, and monitor DAServers from one centralized location.

DIObjects can be used to represent all devices and PLCs in a network, enabling representation of an entire plant, including a hierarchical view of network connectivity.

DIObjects are so closely tied to the DAServer that when an object is deployed across the network, it remotely installs the DAServer. This means that you can install the DAServer without going to the actual machine, and that the DAServer connects immediately.

DIObjects are very closely tied to the DAServer they are assigned to, so that when an object is deployed, it brings with it all code, including registry, scripting, attributes, and parent. Note that in a large project this process may take some time. However, centralized deployment achieves tremendous savings compared with separately installing and configuring the servers on each node.


Extending the IAS Environment


This section provides a high-level summary of protocols used in the IAS environment, and common extensibility scenarios with the related communication protocol support. The following figure represents a widely-distributed (SCADA) network:
(Figure: a widely-distributed SCADA network: Terminal Services thin clients and a visualization node (data, alarms, history) connect over a dial-up 28.8 kbps line and Ethernet to the Terminal Server, SuiteVoyager Portal, AutomationObject Server, Historian (Data and Alarms), Engineering Station with the Configuration Database, SCADAlarm, and I/O Server on the Supervisory Network; the Control Network reaches PLCs, RTUs with radio modems (RS-232), and level transmitters on an RS-485 network.)


The following figure shows the same network represented using the communication protocols supported in a FactorySuite A2 system environment:
(Figure: the same widely-distributed network annotated with the supported protocols: MX on the Supervisory Network, OPC/DCOM and SuiteLink between server and I/O components, and SuiteLink (Alarms) connections toward the visualization node and SCADAlarm; the Control Network reaches RTUs with radio modems (RS-232), PLCs, and level transmitters on an RS-485 network.)

The following information summarizes the protocols shown in the previous figure:

MX: Message Exchange consists of two major components, Local Message Exchange (LMX) and Network Message Exchange (NMX).

LMX is that part of message exchange that provides direct services to a client program for accessing AutomationObjects. NMX is the component that supports communications between processes and between computers.

Note MX is not used for communications with data servers. Thus, it is not a replacement for DDE, SuiteLink or OPC.

OPC: Supports the OLE (Object Linking and Embedding) for Process Control specification. OPC uses the client-server model and Microsoft COM/DCOM (Distributed Component Object Model) protocols for vendor-independent data transfer.

SuiteLink: Wonderware's high-speed communications protocol. Based on TCP/IP, SuiteLink offers high-throughput, reliable communication between FactorySuite components in a Windows NT and Windows Server 2000-2003 environment.

RDP (Remote Desktop Protocol): Designed for communication between a server and a remote Windows desktop. RDP is not shown in the above figure.


SQL Connection: Specialized connection services for client (ActiveFactory, SuiteVoyager Software, etc.) and MDAS components. The SQL connections are not shown in the previous figure.

Note For more information on SQL Connections, see the IndustrialSQL Server Historian Online documents.

Communication Requirement: Communication between Objects, Engines, and Platforms (local or remote node)
Protocol: Message Exchange (MX).
Local Message Exchange (LMX): Provides direct services to a client component. This includes infrastructure services that support communication between AutomationObjects on the same Engine. LMX routes messages to NMX when it needs to communicate with another process (or engine). LMX is packaged as an internal component of all Engines.
Network Message Exchange (NMX): Includes infrastructure services that support communication between objects on different Engines that are either on the same Platform or on different ones connected by the network. NMX provides queueing capabilities for interprocess communications. NMX is packaged as part of the Platform.

Communication Requirement: Communicate with 3rd-party I/O Servers
Protocol: Open Process Control (OPC). Each object has a name, value, time stamp, and quality. DCOM uses the RPC (Remote Procedure Call) mechanism to send and receive information between clients and servers on the same network. DCOM is implemented only for Windows-based operating systems.

Communication Requirement: Galaxy administration using Terminal Server
Protocol: Remote Desktop Protocol (RDP). RDP is installed by default with Windows Server 2003 Terminal Services.


Communication Requirement: Communication between FactorySuite applications
Protocol: SuiteLink. SuiteLink's support for timestamps, value, and quality is especially important in alarming, archiving, and SCADA applications.

Communication Requirement: Communication with legacy I/O Servers
Protocol: Dynamic Data Exchange (DDE). DDE is an interprocess communication (IPC) system built into the Macintosh, Windows, and OS/2 operating systems. DDE enables two running applications to share the same data. DDE is an event-driven communication mechanism that allows data to be transmitted from an actual device to the final application.



CHAPTER 3: Implementing Redundancy

IAS Redundancy is achieved by deploying combinations of AppEngines and DIObjects on two nodes. Each node has dual, dedicated NICs. Implementing redundancy ensures continuous operation by providing an AppEngine that remains active in the event of a single system component failure. This configuration operates on the premise that one engine is in an Active State while the other is in a Standby State waiting to take control. The following information describes Redundancy in the context of Industrial Application Server.

Contents
Redundant System Requirements
NIC Configuration: Redundant Message Channel (RMC)
Redundant DIObjects
Redundant Configuration Combinations
Alarms in a Redundant Configuration
Failover Causes in Redundant AppEngines
Redundant System Checklist
Tuning Recommendations for Redundancy in Large Systems


Redundant System Requirements


A system configured with redundant AppEngines consists of a Primary and a Backup AppEngine. The redundant pair is configured in the (Primary) AppEngine editor. The AppEngine enabled for Redundancy is considered the Primary Engine of the redundant pair. When the Primary engine is configured for Redundancy, an additional (Backup) engine is automatically created.

Use the SMC, Object Viewer, or the IDE to work with the Redundant Engine pair. The IDE provides visualization of the redundant pair in the Deployment view pane; note the Engine icons and Engine names, where the name (Backup) is appended to the Redundant AppEngine created from the Primary AppEngine.

Note When enabling redundancy, do not select the Restart the engine when it fails option (in the Primary engine's General editor tab).

Redundant Engine configuration requires the following:

Redundant pair AppEngines must be deployed to different platforms.

Both nodes hosting the redundant AppEngine pair should run the same version and service pack levels of supported Operating Systems.

The Redundant Message Channel of each platform must be configured by assigning the corresponding IP address in the Platform editor.

Best Practice
Platforms hosting Primary and Backup AppEngines must have identical configurations for the following elements:

Software providing or getting data from/to the AutomationObject Server (for example, SuiteLink, DDE, or OPC Servers).
Store and Forward directories.
Common user-defined attributes (UDAs).
Common scripting.


Note For more information on UDAs and scripting, see Chapter 5, "Working with Templates."

Changing the default Platform and Engine settings depends on the size of the system, the number of I/O points, and other variables. Detailed information on tuning the Platform and Engine settings is included in Tech Note 410: Fine-Tuning AppEngine Redundancy Settings.

AppEngine Redundancy States


The deployment sequence (Cascade, Primary first, or Backup First) of the AppEngine pair determines which AppEngine takes the Active State. When AppEngines are deployed individually, the first engine deployed takes the Active state while the second engine deployed takes the Standby state. The engines maintain their states until a failure occurs. If either engine is deployed by itself, it assumes the Active Engine state. In a cascade deploy from the Galaxy Object, when the primary AppEngine is available it becomes Active while the Backup AppEngine goes to Standby. If a network communication problem or a failure (such as computer hardware loss or failure) occurs, the Standby AppEngine assumes the Active state and the engine that was in the Active state may assume the Standby state. When the cause of the failure has been remedied, this engine assumes the Standby Ready state. For more information on redundancy, see the IDE User's Guide.

NIC Configuration: Redundant Message Channel (RMC)


Redundant AppEngine functionality requires two computers, each with two Network Interface Cards (NICs). The first network card is for the Supervisory network; the second card is for the Redundancy Message Channel (RMC). The RMC is a dedicated Ethernet connection between the platforms hosting redundant engines. The RMC is vital to keep both engines synchronized with alarms, history, and checkpoint items from the Active engine. Each engine also uses this Message Channel to provide its health and status information to the other.

Note Access Network Connections properties from the Windows Control Panel.


Primary Network Connection


The NIC cards require the following configuration on both nodes:

To configure the Primary network connection
1. In the Network Connections window, right-click Primary Network and select Properties.
2. Select TCP/IP and configure the Properties to obtain either a dynamic or a static IP address.
3. Configure the remaining parameters as appropriate (DNS, WINS, and so on).

RMC Network Connection


To configure the RMC connection
1. In the Network Connections window, right-click RMC Network and select Properties.
2. Select TCP/IP and click the Properties button.
3. Select Use the following IP address. See your network administrator for the IP address and subnet mask. The IP address must be fixed and unique.
4. In the TCP/IP Properties dialog box, click the Advanced button, then select the DNS tab. Be sure the Register this connection's address in DNS checkbox is not checked.

Best Practice
Assign a descriptive name to each network connection to easily identify its functionality. From the Network Connections window, rename the Local Area Connections, for example, "Primary Network" and "RMC Network."

To assign Network Services primary connections
1. In the Network Connections window, select Advanced/Advanced Settings from the main menu.
2. Select the Adapters and Bindings tab.
3. Set the Primary Network as the first connection to be accessed by network services. Use the Up/Down Arrow buttons to re-order the list.
4. Verify that the normal connection between the redundant pair uses the primary network. This is done using the PING command (from the DOS Command Prompt) with the redundant partner's node name. Verify that the node name resolves to the IP address of the partner's primary network card.


Redundant DIObjects
The following section explains implementing redundant DIObjects.

Configuration
AppEngines can host redundant Device Integration Objects (DIObjects). The Redundant DIObject is a DINetwork Object used to enable continuity of I/O information from field devices. The redundant DIObject provides the ability to configure a single object with connections to two different data sources. If the primary data source fails, the redundant DIObject automatically switches to the backup data source for its information. There is a one-to-two relationship between an instance of the redundant DIObject and the running instances of the source DIObjects; that is, for each redundant DIObject, a pair of source DIObjects is deployed.

(Figure: a Redundant DIObject on AppEngine1/Platform 1 of a combined AutomationObject Server/I/O Server node switches between source objects DI_1 and DI_2, which communicate with DAServer_1 and DAServer_2; the DAServers reach the PLC over ABTCP and DH+ on the PLC Network, and Visualization Nodes connect over the Supervisory Network.)

The following naming practices are recommended for implementing redundant DIObjects:

If the I/O or DAServer resides on the same node as the AppEngine hosting the DIObject, configure the server node name in the General tab as <Blank> or <Localhost>.

If the I/O or DAServer resides on a remote node, any node name is acceptable, as it refers to the same remote node regardless of where the DIObject is located.


In the previous example, the PLC sent data using two unique protocols. It is also common for the PLC to send data through two Ethernet ports using the same protocol, via different IP addresses. In this case, the following redundant DIObject configuration is recommended:

(Figure: the same configuration, with DAServer_1 and DAServer_2 each using Topic1 but pointing to different IP addresses (I.P.1 and I.P.2); both DAServers communicate with the PLC over ABTCP on the PLC Network.)

The figure shows two unique DAServer instances, each using the same topic and pointing to a unique IP address. The DAServers in this scenario can also be deployed on different machines.

Common Configuration Requirements


Redundant DIObject configuration requires the following:

Source DIObjects do not have to be of the same type, but must support the same type of Scan Group and have the same item address space.

The Scan Groups configured in the Redundant DIObject must also be configured in both the Primary and Backup DIObjects. The configuration must include at least one Scan Group.

The names of the Primary and Backup DIObject must be different.

The Primary DIObject attribute refers to the name of the DIObject that will be used as the primary source of I/O attributes.

The Redundant DIObject's Editor supports creating/configuring Scan Groups, BlockReads, and BlockWrites. The Redundant DIObject can have configurable I/O points (Tag dictionary), which are periodically scanned for their value. The redundant DIObject supports Subscription, Read Transaction, and Write Transaction on I/O points.


Redundant DIObject Behavior at Run-Time


After the redundant DIObject is initialized, its state changes to Startup. The object opens MX communications and registers a reference to ScanState to track whether the DIObject is deployed. If the DIObject is off scan, the redundant DIObject treats it as a bad data source. The ProtocolFailureCode and ConnectionStatus attributes provide the status of the source device.

During run-time, the redundant DIObject performs the following tasks:
1. Adds newly activated attributes to the Active DI source.
2. Updates attributes with new values from the Active DI source.
3. Monitors the connection with the Active and Standby DI sources. If the connection to the Active DI source is lost, the object switches to the Standby DI source.

If both DI sources are in a bad state, the object raises the Connection Alarm.

Redundant Configuration Combinations


Multiple redundant configuration combinations are possible. The combinations include redundant AppEngines and Redundant DIObjects. It is important to select the configuration that provides the best performance and robustness. The following examples present recommended configurations for the Dedicated Standby and Load Shared scenarios.

Dedicated Standby Server - No Redundant I/O Server


This configuration includes a dedicated Standby node ready to take control of the system when the Active Engine is off-line (refer to the following figure). AppEngine1 hosts all AppObjects as well as DIObjects. The I/O Server is installed on both nodes but the DIObject collects data from the node where the Active engine resides. To provide a higher degree of reliability to the system, you can implement a script in the DIObject to set the Redundancy.ForceFailoverCmd attribute in the AppEngine object to True when the connection with the PLC fails.
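A minimal QuickScript sketch of that idea follows. It assumes a Condition script on the DIObject whose expression watches the DIObject's ConnectionStatus attribute; the trigger expression, the status strings, and the use of the MyEngine relative reference are assumptions to verify against the DIObject and AppEngine help files.

' Condition script on the DIObject (suggested expression: Me.ConnectionStatus <> "Connected",
' trigger type OnTrue). Force the hosting AppEngine to fail over to its partner,
' but only when the Standby engine is ready to take control.
IF MyEngine.Redundancy.PartnerStatus == "Standby Ready" THEN
    MyEngine.Redundancy.ForceFailoverCmd = true;
ENDIF;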


The following figure includes Alarm configuration. For details, see "Alarms in a Redundant Configuration" on page 78.

[Figure: Dedicated Standby configuration. AOS (PRIMARY) runs AppEngine 1 with the DIObject and DAServer and is linked by the RMC to AOS (BACKUP). Visualization nodes (one configured as InTouch Alarm Provider), the Historian (data and alarms) with the Alarm Logger, and the PLC network connect through a network device (switch or router) on the supervisory network.]

Load Sharing Configurations


Load Sharing distributes the system's processing load between two nodes. The two nodes can also be configured to back up each other. The primary benefit of Load Sharing is to reduce failover time: only half the objects must fail over. A system with redundant AppEngines can host Active and Standby Engines of different redundant pairs on the same node. This configuration enables efficient use of system resources while providing high availability for both nodes. However, configuring redundant Engines on the same node requires thorough evaluation of the following critical Performance Monitor counters: CPU and memory use on each node. Ensure each AOS node runs at a maximum of 25-30% CPU load. This CPU load represents the resources used by each AOS in steady condition, considering the Active and Standby engines running on each node as well as any other applications. When a failover occurs, one of the AOS nodes hosts all the Active engines in the pair, which increases the CPU load to a total of 50-60%. It is important to note that although the computers hosting the redundant AppEngines do not have to be identical, the total load of the application must not exceed 60% of the CPU capacity of the smallest computer.


Load Shared - Non Redundant I/O Data Source Using DIObjects


Both AOS nodes host Active and Backup engines for each other. If one node fails, the remaining node hosts all Active Engines for both nodes. The following figure shows standard DAServers with OPC Client or DDE/SuiteLink client objects (DIObjects).

When both AOS nodes use the same DAServer to communicate with the PLC, the DIObject in the new Active engine refers to the server running in that node when the failover occurs. The DIObjects on each node are configured to point to the local DAServer by leaving the Node Name blank in the DIObject editor. When a failover occurs, the DIObject in the new Active Engine refers to the local instance of the DAServer that is currently running in that node.

When the AOS nodes use different DAServers, both servers must be installed on each node. One of the DAServers provides data to the local Active DIObject (for example, DI1), while the other server feeds data to the other DIObject (for example, DI2) after the failover. Use the Redundancy.ForceFailoverCmd AppEngine attribute in a script (in the corresponding DIObject) to trigger the failover in the event of a communication failure with the PLC network. The following figure includes Alarm configuration. For details, see "Alarms in a Redundant Configuration" on page 78.

[Figure: Load Shared configuration with DIObjects. AOS (PRIMARY) hosts AppEngine1 (with DI 1 and DAServer A) and AppEngine2 (Backup); AOS (BACKUP) hosts AppEngine 2 (with DI 2 and DAServer A) and AppEngine 1 (Backup). The nodes are linked by the RMC, with Visualization nodes (InTouch Alarm Provider), the Historian (data and alarms), the Alarm Logger, and the PLC network connected through a network device (switch or router) on the supervisory network.]


Load Shared - No Redundant I/O Data Source Using DI Network Objects


This variation of the Load Shared configuration uses DI Network objects (for example, DI ABTCP, DI Modbus). When considering a Load Share configuration, remember that some DI Network objects may allow only one instance of the DAServer to be deployed on a single computer. In a Load Shared scenario, when one of the AppEngines fails, the system would try to run two instances of the same DI Network object on a single node, which may cause conflicts. DI Network object configurations in the Load Share context are presented in Scenario 1 and Scenario 2 (following pages). For details on the DI Network object requirements and functionality, see the specific DI Network Object Help provided with the Object.

Scenario 1
A set of AppObjects is hosted by AppEngine1 and another set by AppEngine2. These two engines are hosted by different platforms. The backup engines for AppEngine 1 and AppEngine 2 reside on the opposite nodes. One instance of a DI Network object resides on one of the nodes, providing data to AppObjects on both nodes. Use the Redundancy.ForceFailoverCmd AppEngine attribute in a script (in the DI Network Object) to trigger the failover in the event of a communication failure with the PLC network. The following figure includes Alarm configuration. For details, see "Alarms in a Redundant Configuration" on page 78.

[Figure: Scenario 1. AOS (PRIMARY) hosts AppEngine1 (with the DINetwork object and DAServer) and AppEngine2 (BACKUP); AOS (BACKUP) hosts AppEngine 2 and AppEngine 1 (BACKUP). The nodes are linked by the RMC, with Visualization nodes (InTouch Alarm Provider), the Historian (data and alarms), the Alarm Logger, and the PLC network connected through a network device (switch or router) on the supervisory network.]


Scenario 2
This scenario provides a very powerful and reliable solution. To avoid the conflict with multiple instances of the same DI Network object on a node after a failover occurs, you can arrange the system as shown in the following figure. The configuration in AOS1 is mirrored in AOS2.

AOS1 hosts AppEngine 1, which is part of a redundant pair with AppEngine 1 (BACKUP), hosted by AOS2. Additionally, AppEngine 1 hosts RDI1, which provides high availability at the I/O level. RDI1 is configured to use DI1 Network as the Primary DI source and DI2 Network as the Backup DI source. This configuration provides high availability at the data execution level and I/O data acquisition in a very efficient approach.

In the event of a failure in AOS1, AppEngine 1 fails over to AppEngine 1 (BACKUP). Once AppEngine 1 fails over to the Standby engine, RDI1 also switches to DI2 Network in the new node. On the other hand, if there is a communication problem at the DI1 object level, the RDI1 object automatically switches to DI2 Network while AppEngine 1 still runs on AOS 1.

AppEngine 3, which is not a redundant AppEngine, hosts DI1 Network. By keeping the DI1 and DI2 Network objects on separate non-redundant engines, you avoid the conflict created by having two instances of the same DI Network object on the same node. The following figure includes Alarm configuration. For details, see "Alarms in a Redundant Configuration" on page 78.
[Figure: Scenario 2 logical arrangement. AOS 1 hosts AppEngine 1 (with RDI1), AppEngine 2' (backup), and AppEngine 3 (with DINetwork1); AOS 2 hosts AppEngine 2 (with RDI2), AppEngine 1' (backup), and AppEngine 4 (with DINetwork2).]


The topology supporting this configuration is shown below:

[Figure: Scenario 2 topology. AOS (PRIMARY) hosts AppEngine 1 (with RDI 1), AppEngine 2 (BACKUP), and AppEngine 3 (with DINetwork 1 and a DAServer); AOS (BACKUP) hosts RDI 2, AppEngine 1 (BACKUP), and AppEngine 4 (with DINetwork 2 and a DAServer). The nodes are linked by the RMC, with Visualization nodes (InTouch Alarm Provider), the Historian (data and alarms), the Alarm Logger, and the PLC network connected through a network device (switch or router).]

Load Shared - Redundant I/O Data Source


This variation of the Load Shared configuration includes a set of redundant nodes running the I/O Servers. AOS1 and AOS2 are configured as redundant pairs in a Load Shared scenario. In the event of a failure, one of the nodes hosts all AppObjects from both servers. This setup provides high availability for the execution of AppObjects. Additionally, the I/O data level is protected by implementing a pair of remote I/O Server nodes. Each server hosts the corresponding DIObject and I/O Server (DAS, DDE/SuiteLink, or OPC servers) or DI Network objects and DAServers. AppObjects on each engine reference I/O points in the local RDIObject. The RDIObjects switch between I/O Server nodes if a failure occurs in either of those nodes. Using redundant I/O Data Sources provides the following benefits:

The communication protocol between the I/O Server node and the AOS is MX. This protocol is optimized for data transfer over the network, with special emphasis on slow and intermittent networks.
Because the I/O Server node uses the MX protocol to transfer data, it simplifies configuring OPC communication over the network (DCOM security settings) and overcomes the deficiencies of DCOM communications.
Using a platform and an AppEngine on those nodes provides additional diagnostics of conditions associated with the system. These can be historized and alarmed in the same way as with AppObjects.


The following figure includes Alarm configuration. For details, see "Alarms in a Redundant Configuration" on page 78.

[Figure: Load Shared configuration with a redundant I/O data source. AOS (PRIMARY) hosts AppEngine 1 (AppObjects and RDI) and AppEngine 4 (BACKUP); AOS (BACKUP) hosts AppEngine 4 (AppObjects and RDI) and AppEngine 1 (BACKUP), linked by the RMC. I/O Server 1 (AppEngine2, DI 1, DAServer 1) and I/O Server 2 (AppEngine 3, DI 2, DAServer 2) provide the redundant I/O data, with the Historian (data and alarms), the Alarm Logger, Visualization nodes (InTouch Alarm Provider), and the PLC network connected through a network device (switch or router).]

Run-Time Considerations
The following information summarizes run-time behaviors between Redundant Engines.

Establishing RMC Communication


The Active and Standby Engines communicate with each other during run-time and use the RMC to monitor each other's status. The redundant engines use the Remote Partner Address (RPA) attribute to locate each other and communicate. The RPA attribute contains the IP address or host name of the platform hosting the partner engine. At startup, each redundant AppEngine establishes communication with its partner. When the failover service receives a connection across the RMC, it updates the RPA attribute of the receiving engine if it is different than the current configured value. Note The value of the RPA may be different if a partner engine has been relocated to a different platform.


Checkpointing
AppEngines store specific attributes in memory, then write them to disk, in both single- and redundant-Engine configurations. The frequency of the write operation is determined by the Checkpoint period setting in the AppEngine editor. Note Checkpoint period configuration details are included in "Tuning Redundant Engine Attributes" on page 87. The checkpointed attribute types include:

Scan Rate
Checkpoint Directory location (default is blank but can be modified)
StartUp Attributes
StartUp Type (Automatic, Semi-Automatic, Manual)
StartUp Reason

Redundant AppEngines maintain data synchronization through the RMC. Data synchronization is accomplished by reading checkpointed attribute values written to disk on each node, at each scan. The checkpoint operations occur at a pre-defined rate on the local node (Scheduler.ScanPeriod). The same operations (write to memory, then to disk) occur on the Backup AppEngine at every scan (via the RMC). When the Standby AppEngine becomes active, it reads the checkpointed values from the designated file on the local (Backup) node. The system updates the Standby Engine with the values that were sent and written to disk in the last scan before the Active Engine failed. The following attribute types are checkpointed:

Attributes with the category User Writable or Object Writable that are not extended with Input/Output or Input extensions. Attributes with the category Calculated Retentive are always checkpointed.

Complete a thorough evaluation and selection of checkpointed attributes. Unnecessary checkpointing may degrade the performance of the system by writing extra values to the Local/Backup disks and by increasing data traffic over the RMC.

Deployment Considerations
Automation Objects are always deployed to the Active Engine:

If the Primary Engine is the Active Engine of a redundant pair, objects are deployed to the Primary Engine. If the Backup Engine is the Active Engine, the objects are deployed to the Backup Engine.

When an Active Engine becomes the Standby, the Engine sets all objects off scan, shuts down all features that make up the object and stops executing all deployed objects. All objects are unregistered on the previously active engine.


When a Standby Engine becomes Active, the Engine calls Startup on all features that make up the objects. The startup call indicates that the objects are starting up as part of a failover. The newly active engine calls SetScanState on all features and begins executing all objects that are on scan.

Best Practice
To deploy objects in a Load Shared configuration
1. Deploy the platforms individually rather than in a cascade.
2. Cascade deploy the primary Engines.
3. Finally, cascade deploy the backup Engines.
Always deploy the primary Engine first.

Scripting Considerations

When failover occurs, attribute values keep their initial states. Scripts and SQL connections to databases that were interrupted by the failover must be restarted. OffScan scripts are executed in the event of a forced failover. Any state, such as local variables or calculated attributes, that is not kept in checkpointed attributes is not passed to the objects started on the newly active Engine. If an attribute value is being passed to the database when failover occurs, the attribute returns to its initial value when the object goes On Scan in the new Active Engine. Before the attributes can be updated, the database connection must be restored.

Script Behavior When the Standby Engine Becomes Active


When a Standby Engine becomes active, it sets the Engine.StartupReason attribute to indicate the startup cause. The attribute string can be accessed in a script to determine the startup reason. The following reasons are possible:

Starting_AfterDeploy: Engine starts from a standard deploy.
Starting_FromStandby: Engine starts from a failover.
Starting_FromCheckpoint: Engine starts from a reboot or AutoStart configuration.

This attribute can also be used to execute scripts that re-initialize variables and COM objects. After a failover occurs, scripts in the new Active engine are executed based on their trigger type: Startup, OnScan, Periodic, or Data Change.
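As an illustration only, a Startup or OnScan script might branch on this attribute as shown below; the BatchCounter UDA is hypothetical, and the exact reason strings should be confirmed in the AppEngine help.

' Startup/OnScan script: initialize working values only for a non-failover start.
' After a failover, the checkpointed values are kept instead of being reset.
IF MyEngine.StartupReason <> "Starting_FromStandby" THEN
    Me.BatchCounter = 0;    ' BatchCounter is a hypothetical UDA on this object
ENDIF;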


Use Startup and OnScan scripts to initialize conditions used later in the script. In many cases the initialization is required only when the object is deployed or redeployed, or when the AppEngine and/or platforms are restarted. In the case of a failover, the requirement may be to continue operating using values from the checkpoint rather than re-initializing the conditions. The Redundancy.FailoverOccurred attribute is set to "True" for the first scan right after the failover occurs; after the first scan the attribute is automatically reset to "False." Using this attribute as a condition around the initialization logic prevents the script from re-initializing values when the system recovers from a failover.

Similarly, Data Change scripts execute when the object is deployed, when the engine is restarted, and when the Standby engine becomes active after a failover. Using Redundancy.FailoverOccurred in an If-Then-Else statement prevents the script from executing after the failover.

Any script that is set with an Execution type of Execute and a trigger type of Periodic has the following behaviors after an AppEngine failover. The situation is described using a period of 60 minutes as an example time period:

The script executes the first time when the engine is deployed (T0).
The next execution time will be 60 minutes later (T60).
The redundant engine fails over and the Standby engine is fully running in the Active state at T30.
The Periodic script(s) restart with the period reset to T0.

The execution period of the Periodic script(s) will therefore be shorter than planned, or possibly longer if an engine failover occurs shortly before the time period elapses. Some applications generate critical data from a Periodic script. Do not use Periodic scripts where the time period could be shorter or longer than planned and critical information is being managed by the script. Instead, set up the script to run using a Condition trigger and use a UDA for the trigger. Then, for a time base, use the system time and calculate the time period from it to set the condition. The UDA's current value is maintained through the failover, and the system time is real time rather than an expiring timer. Shutdown and OffScan scripts execute after an orderly completed failover (for example, one initiated with ForceFailoverCmd) or in the event of a Primary Network failure.
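Referring back to the initialization guard described above, a minimal sketch might look like the following; the RunningTotal UDA is hypothetical.

' Startup/OnScan initialization guarded against failover recovery.
' Redundancy.FailoverOccurred is true only for the first scan after a failover.
IF MyEngine.Redundancy.FailoverOccurred == false THEN
    Me.RunningTotal = 0.0;    ' RunningTotal is a hypothetical checkpointed UDA
ENDIF;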

Asynchronous Scripts
QuickScripts must be evaluated to anticipate likely delays (SQL query completion, calling COM or .NET objects, and so on) due to network transport or intensive database processing. When a delay in script completion is likely, set the QuickScript to run asynchronously. If it is not set to run asynchronously, the QuickScript could cause the Engine to miss the following scan while it waits for the script to finish executing. Note The Runs asynchronously option must be manually selected within the Scripts tab page of the Object Editor; it is not set by default.


Once set to run asynchronously, the QuickScript will not be cut off when the scan is completed. When a problem occurs, the script could "hang" if the process never completes, as in the case of a SQL query that never returns a rowset or even an error message. When the QuickScript's ExecuteTimeout.Limit value is reached, the ExecuteError.Alarmed and ExecuteError.Condition attributes are set. In this context it is useful to monitor these attributes and log a message when the maximum timeout threshold is exceeded.
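A sketch of such a watchdog is shown below; the script name QueryDB is hypothetical, and the attribute path and the LogMessage() call should be verified against the object help and the scripting documentation.

' Watchdog logic (for example, in a Periodic script on the same object) that reports
' when the asynchronous script named QueryDB has exceeded its ExecuteTimeout.Limit.
IF Me.QueryDB.ExecuteError.Condition == true THEN
    LogMessage("Asynchronous script QueryDB timed out on " + Me.Tagname);
ENDIF;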

History
Historical data is sent to the Historian only from the Active Engine. The Active Engine processes historical data and sends it to the Historian when the Historian is available. If the Historian becomes unavailable, the Active Engine stores the data locally (in Store Forward History Blocks) and forwards it when the Historian becomes available. In the meantime, local Store Forward data is transferred to the Standby Engine via the RMC. When an engine enters Store Forward mode, it synchronizes its data with its partner engine. Store Forward data is transferred (and synchronized) every 30 seconds, so no more than 30 seconds of data can be lost in the event of an Engine failure. Note Attributes and tags which were not configured in the Historian before failover are not stored.

History: Redundancy Diagnostics


In order to facilitate management of Store Forward data collected across multiple failures and to improve diagnostics, the Active Engine has attributes which show the status of Store Forward. The following information is available:

Store Forward data has been collected for the engine: (Engine.Historian.InStoreForward_Standby)
Store Forward data lost: (Engine.Historian.StoreForwardDataLost_Standby)
Store Forward data cannot be stored on the active engine in store forward mode: (Engine.Historian.StoreForwardProblem_Standby)
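As an illustration, a script on an ApplicationObject hosted by the engine could surface these attributes, for example by latching a Boolean UDA that is extended as an alarm; the SFDataLostFlag UDA is hypothetical.

' Periodic diagnostic check: latch a flag when Store Forward data has been lost.
IF MyEngine.Historian.StoreForwardDataLost_Standby == true THEN
    Me.SFDataLostFlag = true;    ' hypothetical Boolean UDA extended as an alarm
ENDIF;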


Alarms in a Redundant Configuration


The following figure shows the distribution of the Alarm subsystem components in a redundant configuration. The description in this section applies to both the Dedicated Standby and the Load Shared configurations.

The platforms that host the Visualization clients are configured as InTouch Alarm providers. To optimize network traffic, each platform filters alarms for the areas that are of particular interest to that node. If alarms from other nodes need to be monitored from a visualization node, this can be accomplished by running a specific query for the new node and the areas to be supervised.

The Alarm Database resides on the InSQL node, using the MS SQL Server already installed on that node. On this node a platform is deployed and configured as an InTouch Alarm provider. This node does not filter alarms from any area, so that it can provide high availability in case any of the visualization clients are not available in the system. The Alarm DB Logger is installed on the same node as the Alarm Database and is configured to query all the alarms in the local platform.

[Figure: Alarm component distribution in a redundant configuration. AOS (PRIMARY) hosts AppEngine 1 with the DINetwork object and DAServer and is linked by the RMC to AOS (BACKUP) hosting AppEngine1 (BACKUP). Visualization nodes configured as InTouch Alarm Providers, the Historian (data and alarms) with the Alarm Logger, and the PLC network connect through a network device (switch or router) on the supervisory network.]


Redundant Configuration with Dedicated Visualization Nodes


The following figure shows alarm configuration with dedicated visualization nodes:

[Figure: Alarm configuration with dedicated visualization nodes. Redundant AOS nodes (AppEngine 1 with the DINetwork object and DAServer on the primary, AppEngine1 (BACKUP) on the backup) are linked by the RMC; dedicated Visualization nodes act as InTouch Alarm Providers, and the Historian (data and alarms) with the Alarm Logger and the PLC network connect through a network device (switch or router).]

Redundant Pair with No Dedicated Visualization Client Nodes


If the redundant architecture consists of two AOS nodes that also run InTouch Software as the visualization client for the system, the Alarm configuration should be implemented as shown in the following figure:
[Figure: Redundant pair with no dedicated visualization client nodes. AOS (PRIMARY) hosts AppEngine1, the DINetwork object, the DAServer, and InTouch Software configured as an InTouch Alarm Provider; AOS (BACKUP) hosts AppEngine 1 (BACKUP) and is also an InTouch Alarm Provider. The nodes are linked by the RMC, with the Historian (data and alarms), the Alarm Logger, and the PLC network connected through a network device (switch or router).]


Failover Causes in Redundant AppEngines


This section describes failover triggers in a redundant AppEngine pair.

Forcing Failover
It is possible to force a failover in a pair of redundant AppEngines by simply setting the ForceFailoverCmd attribute in the Active engine to "true". This can be accomplished using the ObjectViewer, InTouch Software, an object's script, or any other application that has access to this attribute. Use this attribute in a script (with any set of conditions) to trigger a failover. For example, you can monitor the status of other applications on the same machine, hardware devices, and so on, and based on that status trigger a failover to the Standby engine. When a failover occurs, the Standby engine becomes Active and stays in that status unless the system is forced to fail back when the new Standby engine becomes available. In this case, the ForceFailoverCmd can be used to take the Active engine back to the original node. For details on the attributes associated with a Redundant AppEngine, refer to the AppEngine Help files.
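For example, a script could perform a supervised fail back once the partner is healthy again. The sketch below is illustrative only: the FailBackCmd UDA is hypothetical and the Standby Ready status string is an assumption based on the matrices later in this section.

' Executed when an operator (or other logic) sets the hypothetical FailBackCmd UDA to true.
IF Me.FailBackCmd == true AND MyEngine.Redundancy.PartnerStatus == "Standby Ready" THEN
    MyEngine.Redundancy.ForceFailoverCmd = true;
    Me.FailBackCmd = false;
ENDIF;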

Communication Failure in the Supervisory (Primary) Network


To understand the effects of a communication failure at the Supervisory Network level, refer to the following matrix. It presents the original status of the redundant pair before the failure, the cause of the problem, and the final condition. The matrix shows the values of the two AppEngine attributes used to monitor the status of the redundant pair: Redundancy.PartnerStatus and Redundancy.Status.


Considerations
The failover scenarios described in this section refer to topologies where there is at least one more platform besides the two hosting the redundant pair, that is, a Client/Server configuration. If the topology consists of just two platforms hosting the redundant pair (a Peer-to-Peer configuration), a failover does not occur in the event of a communication failure in the supervisory network. Instead, the Redundancy.PartnerStatus attribute is set to Missed heartbeats while both partners synchronize data through the RMC. In this case, the user can execute the failover either manually or via scripting, if required.

[Figure: Client/Server topology used for the failure scenarios. Redundant AOS nodes (AppEngine 1 with the DINetwork object and DAServer on the primary, AppEngine1 (BACKUP) on the backup) are linked by the RMC, with Visualization nodes (InTouch Alarm Providers), the Historian (data and alarms), the Alarm Logger, and the PLC network connected through a network device (switch or router).]


[Table: Redundant pair status during a Supervisory (Primary) Network communication failure. For scenarios 1a, 1b, 2, 2b, and 3, the matrix lists the Primary Network connection state (Connected/Disconnected), Redundancy.PartnerStatus (Standby Ready, Missed heartbeats), and Redundancy.Status (Active, Standby Ready) of the Primary and Backup engines across the Initial Condition, Transition, and Final Condition columns.]


RMC Communication Failure


Even though an RMC communication failure does not trigger failover, it is important to know the system behavior in this event:
[Table: Redundant pair status during an RMC communication failure. For Scenarios 1 and 2, the matrix lists the RMC connection state (Connected/Disconnected), Redundancy.PartnerStatus (Standby Ready, Unknown), and Redundancy.Status (Active, Active - Standby not available) of the Primary and Backup engines before and after the failure.]

PC Failures
If a power failure occurs on the Active Engine node, the Standby node takes control of the system. The following matrix shows the corresponding status and AppEngine attribute values under different conditions:
[Table: Redundant pair status after a PC failure. For each scenario, the matrix lists PC availability (PC Available/PC Not Available), Redundancy.PartnerStatus (Standby Ready, Unknown), and Redundancy.Status (Active, Active - Standby not available) of the Primary and Backup nodes across the initial, transition, and final conditions.]

Undeploying AppEngines
Undeploying AppEngines in a redundant pair may trigger failover. The following description refers to non-cascade Undeploy operation of the AppEngine. Executing a cascade Undeploy operation of the Primary AppEngine undeploys all objects from both engines. The table below describes the expected behavior under the non-cascade condition:
[Table: Redundant pair status for non-cascade Undeploy operations. For Scenario 1 (undeploy the Backup Engine), Scenario 2 (undeploy the Primary Engine), and Scenario 3 (deploy the Primary Engine), the matrix lists the deployment state (Deployed/Undeployed), Redundancy.PartnerStatus (Standby Ready, Unknown), and Redundancy.Status (Active, Active - Standby not available) of the Primary and Backup engines across the Initial Condition, Transition, and Final Condition columns.]

Note It may become necessary to relocate a Primary or Backup AppEngine. Ensure relocation is performed at a non-critical time (scheduled maintenance, plant shutdown, etc.), and that Store Forward is not in operation during the relocation process (undeploy, relocate, redeploy). Data loss may occur if Store Forward is operational during the redeploy operation.


Dual Communications Channel Failure Consideration


A redundant AppEngine system configuration provides high availability in the event of a single communication channel failure. This means that the system gracefully handles situations where only one communication channel fails at a given time; for example, a PC failure, a Primary network failure, and so on. The RMC's role as a dedicated link to synchronize data and monitor the status of the redundant pair minimizes the chances of failures within the system, since it is a dedicated cross-over cable between two computers and does not carry external data traffic. If the Active AppEngine simultaneously loses the connection to the Primary network and to the RMC, the system reverts to an ACTIVE-ACTIVE state. To recover from an ACTIVE-ACTIVE condition, both connections must be reestablished. The system arbitrates assigning the Active state, so that the Primary server becomes Active regardless of its state before the ACTIVE-ACTIVE condition. It is possible to design the application to handle the ACTIVE-ACTIVE condition. The solution includes scripting and a third component to monitor the status of both engines. For example, a condition script could be implemented in the PLC to monitor the status of both engines and arbitrate which one stays active if a simultaneous failure occurs in the RMC and Primary network.

Redundant System Checklist


Refer to the following checklist when planning your redundant system:

Determine the Redundant System Configuration


Evaluate what type of redundant configuration best fits your system. Both the Dedicated Standby and Load Shared configurations provide a reliable and robust solution, but depending on the process requirements and system architecture, one of the configurations may be more efficient than the other. Complete details on Dedicated Standby Server - No Redundant I/O Server and Load Sharing Configurations are included earlier in this chapter.

Check the System's Alarm Configuration


The Alarms in a Redundant Configuration section in this chapter provides detailed recommendations on how to set up the alarm system, including locating Alarm providers, Alarm DB Logger, Alarm Database, etc.


Analyze the Expected System Behavior After Failover


It is very important to understand how scripts are executed after a failover. The Scripting Considerations section in this chapter explains how scripts behave in a failover condition. Identify which attributes need to be retentive after a failover. Refer to the Checkpointing section for details on how attributes can be defined as persistent after a failover.

Distribute Data Traffic


To make the best use of network bandwidth, distribute the traffic coming in and out of the AutomationObject Servers over different networks. For example, the need for data from the control network may be more critical than the data requirements of the supervisory system. Distributing data traffic through separate networks requires the use of multiple network cards in a server. Note Although distributing data traffic through separate networks is not required, multiple NICs are used for optimal communication in the network figures that appear in this chapter.

Rename Each Local Area Connection When Using Multiple Network Adapters
The operating system detects network adapters and automatically creates a local area connection in the Network Connections folder for each network adapter. Renaming each local area connection to reflect its network eliminates confusion. Add or enable the network clients, services, and protocols required for each connection. When doing so, the client, service, or protocol is added or enabled in all other network and dial-up connections.

Tuning Recommendations for Redundancy in Large Systems


To ensure seamless failover performance in large systems (high I/O, high CPU utilization, SCADA systems), modify several default Platform attribute values. Note The following information applies to a large, locally distributed topology. SCADA-specific settings are described in "Inter-Node Communications" on page 263.


Tuning Redundant Engine Attributes


Multiple variables (I/O points, number of Objects, number of historized attributes, DIObject distribution, etc.) are involved in the detection and execution of a Redundant AppEngine failover. The following table describes some key Engine attribute values that should achieve better failover performance in small, medium-large, and very large system environments. The list corresponds to AppEngine and Platform Editor options and describes some behaviors resulting from incorrect settings. The I/O numbers listed are totals per Galaxy. Note The Forced failover timeout values 45,000-240,000 ms (below) represent the low and high values of a range.

AppEngine Editor settings:

Forced failover timeout
Small System (Default): 30,000 ms (up to ~3,000 I/O)
Medium-Large System: 45,000 ms - 240,000 ms (approx. 3,000 - 40,000 I/O)
Very Large System: 300,000 ms (approx. 40,000 I/O)
Remarks: If the setting is too small, forced failover will not succeed. If the setting is too large, failure will not be detected in a timely manner.

Max. checkpoint deltas buffered
Small System (Default): 0
Medium-Large System: N/A
Very Large System: N/A
Remarks: N/A

Max. alarm state changes buffered
Small System (Default): 0
Medium-Large System: N/A
Very Large System: N/A
Remarks: N/A

Standby/Active engine heartbeat periods
Small System (Default): 1,000 ms
Medium-Large System: 1,000 ms
Very Large System: 1,000 ms
Remarks: May be increased to avoid false failovers.

Maximum consecutive heartbeats missed from Active engine
Small System (Default): 5
Medium-Large System: 10 - 30
Very Large System: ~60
Remarks: Setting this value too low produces false failovers. Setting this value too large results in slow detection of a required failover.

Maximum consecutive heartbeats missed from Standby engine
Small System (Default): 5
Medium-Large System: 10 - 30
Very Large System: ~60
Remarks: Setting this value too low produces false failovers. Setting this value too large results in slow detection of a required failover.

Maximum time to maintain good quality after failure
Small System (Default): 15,000 ms
Medium-Large System: 120,000 ms
Very Large System: 150,000 ms
Remarks: Assuming remote I/O, setting the value too low causes all I/O references to unsubscribe, then resubscribe on failover. The optimum setting ensures that remote I/O references are preserved for failover. This behavior also applies in the RDI Object context.

Max. Time to Discover Partner
Small System (Default): 15,000 ms
Medium-Large System: N/A
Very Large System: N/A
Remarks: N/A


Restart Engine when it fails
Small System (Default): N/A
Medium-Large System: N/A
Very Large System: N/A
Remarks: N/A

Checkpoint period
Small System (Default): 0 (every scan)
Medium-Large System: 20,000 ms (20K I/O)
Very Large System: 60,000 ms (40K I/O)
Remarks: Setting this value too low results in high resource usage. Setting this value too high means that if both partners fail, checkpointed data may not be current.

Platform Editor settings (Platform editor General tab):

Consec. number Missed NMX Heartbeats
Small System (Default): 3
Medium-Large System: 6
Very Large System: 6

NMX Heartbeat period
Small System (Default): 2,000 ms
Medium-Large System: N/A
Very Large System: N/A

Failover Services talk between themselves using the RMC and determine the communication status between the two nodes. The status is provided by monitoring Heartbeat attributes. Message Channel Heartbeat settings control the heartbeat intervals; i.e., how often the redundant platforms send each heartbeat through the RMC.

Remarks
Modifying the Active/Standby Heartbeat Period values makes the Engines more sensitive to network failure. Missed Consecutive heartbeats determines the number of missed heartbeats that will trigger the redundant engine to act. Setting the values smaller makes the engines more sensitive to network failure. Setting the values larger makes the Engines more tolerant of high CPU loads that can cause missed heartbeats. The values can all be set using the IDE or the Object Viewer.

Engine Monitoring
The following information describes how the failover service monitors the redundant engines. In general, an Engine has the following states:

Start Up: Measured as the time required for all Engine objects to be created, initialized, and started.
Execution: Measured as the time required for all Engine objects to be executed in one scan cycle.
Shut Down: Measured as the time required for all Engine objects to be stopped.


The following parameters determine how much time the Engine can be unresponsive during each of the above states.

Start Up and Shut Down


If you experience improper Platform shutdown (from the SMC), two solutions are possible:

Increase RAM: Increase the RAM to 2 GB. Tests have shown that increasing RAM can help provide proper shutdown.
Modify the registry settings, as described in the following procedure.

To add the registry settings
The default registry settings are not accessible. However, they can be added or changed at the following location:
[HKEY_LOCAL_MACHINE\SOFTWARE\ArchestrA\Framework\Platform]

1. Enter the following values:

"WatchdogStartupTimeout"=dword:000493e0
"WatchdogShutdownTimeout"=dword:000493e0

The default view for the time values is Hexadecimal.

2. Change the format to Decimal and ensure the setting is 300,000 ms (5 minutes in this example) or larger.

This should be sufficient for a large system. Setting the values too high could lead to delays in discovering that the Engine has hung or crashed during startup or shutdown, since the Bootstrap considers the Engine healthy until the timeout expires. Note If the WatchdogStartupTimeout and WatchdogShutdownTimeout values are modified, they must be reset after the Platform is undeployed and redeployed.

Execution
The EngineFailureTimeout attribute determines how long the Engine has to inform the Bootstrap that it is executing. If the Engine does not signal the attribute for 3 consecutive timeouts, the Engine is determined to be "in trouble," and the redundant partner takes action. Setting this attribute value too low causes the redundant partner to overreact when CPU usage is high. Setting the value too high can delay notification that the Engine is in trouble, since the Bootstrap considers the Engine "healthy" until the timeout expires.

Attribute Name: EngineFailureTimeout
Default Value: 10,000 ms
Recommended Value: 20,000 ms


C H A P T E R  4

Integrating FactorySuite Applications

This chapter describes integrating FactorySuite software applications within the Industrial Application Server environment. Specific application requirements drive the integration strategy. The following information provides Wonderware application integration guidelines, and is based on product integration testing. Important! Check the FactorySuite A2 Compatibility Matrix on the FactorySuite support website http://www.wonderware.com/support/ before starting a project that integrates multiple Wonderware software applications.

Contents
IndustrialSQL Server Historian
ActiveFactory Software
InTouch HMI Software
SCADAlarm Event Notification Software
Alarm DB Manager
SuiteVoyager Software
QI Analyst Software
DT Analyst Software
InTrack Software
InBatch Software
Third Party Application Integration


IndustrialSQL Server Historian


IndustrialSQL Server Historian integrates with Industrial Application Server using the Manual Data Acquisition Service (MDAS). Historized attributes are easily configured in the IDE. During run-time, their values are "pushed" to the Historian node and stored there using the MDAS service. Industrial Application Server provides extensive diagnostics to monitor the operation of the historian functionality, for example, store forward status and historian connection. The Historian node does not require a Platform. Note Industrial Application Server 2.1 requires an IndustrialSQL Server Historian upgrade to version 9 or later. For information on how to configure a platform/engine for history or to configure objects to store history, see the IDE documentation for Industrial Application Server. For information on using IndustrialSQL Server Historian with Industrial Application Server, see Chapter 8, "Historizing Data." For information on configuring the IndustrialSQL Server Historian node, see the IndustrialSQL Server Historian documentation.

ActiveFactory Software
ActiveFactory Software is Wonderware's IndustrialSQL Server Historian Client Application Suite.

ActiveFactory ActiveX controls are "rich" clients and require certification permissions by IT Admin or wwAdmin-level permissions on the client machine. It is necessary to manually enable the TCP/IP and Named Pipes protocols in the Windows Firewall when using Windows Server 2003 SP1 or XP Professional SP2 as web clients. This configuration is not an issue when using the HTTP protocol.

ActiveFactory Reporting Website


Two installation options are available for the Reporting Web Site client:
A. Run an MSI setup on the browser client (requires local administrator privilege).
B. Have a domain administrator authorize .NET controls signed by Invensys via Active Directory policy settings (version 9.1).


InTouch HMI Software


InTouch HMI Software is the visualization component of the FactorySuite A2 System and provides a completely integrated graphical interface for Industrial Application Server. InTouch Software 8.0 or later acquires Galaxy data via the built-in "Galaxy" access name, which uses Message Exchange (MX) and the local Platform. InTouch Software enables easy and powerful browsing of the Industrial Application Server namespace for simplified use of object attributes in InTouch Remote Tag References. Traditionally, InTouch Software developers use windows and scripts to provide functionality such as alarms, history, security, and I/O. When InTouch Software is used as the graphical user interface (GUI) for Industrial Application Server, these features are instead provided by the AutomationObject Server node(s).

WindowMaker, WindowViewer, and View


WindowMaker is the InTouch Software windows and script development environment. The Bootstrap and the IDE must be installed on the InTouch Software node running WindowMaker in order to browse the Galaxy namespace. Although WindowMaker is provided with a FactorySuite A2 System development package that also includes the IDE, they are totally independent. The IDE cannot be used to develop InTouch Software applications, and WindowMaker cannot be used to create Domain objects.

WindowViewer is the run-time environment that displays Galaxy data in the windows created with WindowMaker. InTouch View is a version of WindowMaker designed to connect InTouch Software to the Industrial Application Server only. For an InTouch View application, no access names other than "Galaxy" can be used. No historical logging or alarming is performed within InTouch View, but it can be used to display alarms generated by objects in an AppEngine. InTouch View nodes use Industrial Application Server security through the ArchestrA IDE security model.

For information on using InTouch Software with Industrial Application Server, see the white paper InTouch/IAS Migration and Coexistence Planning Guide on the FactorySuite support web site: http://www.wonderware.com/support.

InTouch SmartSymbols
The InTouch SmartSymbol Manager allows you to create, edit and manage libraries of reusable graphical templates (InTouch SmartSymbols). SmartSymbols can be used to connect to ArchestrA objects and their attributes, and also to local InTouch Software tags or to any InTouch Software remote references. SmartSymbol templates can be associated to Application Object templates and instances, providing a very powerful combination.


A common practice is to create a SmartSymbol associated to the attributes of an Application Object template. Once this SmartSymbol template has been created and saved into the Symbol Library, instances of the symbols can be used by dropping them into an InTouch window. These instances can then be configured to point to specific Application Object instances by selecting them using the ArchestrA Object Browser (SmartSymbol Properties panel). Furthermore, the SmartSymbol Property panel allows you to create new instances of a selected Application Object template and associate a SmartSymbol with the new instance. SmartSymbols can easily be redirected to point to other object instances at runtime by using the IOSetRemoteReferences() script function.

SmartSymbol Notes
The Integrated Development Environment must be installed on the InTouch Development node to use the following SmartSymbol features:

ArchestrA Object Browser and Attribute Browser.
Create new object instances of a selected template within InTouch WindowMaker.

Note Other SmartSymbol features are also enabled when the IDE is present on the InTouch Development node. Creating new instances of an object template using the SmartSymbols property dialog requires the user to have the correct permissions assigned in the Galaxy security configuration. Note For more details on SmartSymbols, the SmartSymbol Manager, and the IOSetRemoteReferences() script function, refer to the InTouch Reference Guide. Changes to any SmartSymbol cause every window in the application to be recompiled and all windows to be redeployed. For more information, see "NAD" on page 266.

Network Utilization
Galaxy references (e.g., references of the form Galaxy:MyObject.MyAttribute) that resolve to remote nodes affect bandwidth utilization, which increases as the data-change rate increases. If your requirement is to minimize network-bandwidth utilization between an InTouch Software node and remote Industrial Application Server nodes, you need to account for all active subscriptions between your InTouch Software node and the remote Industrial Application Server nodes that provide your data.


When any of the following items are configured with a galaxy reference, that item activates a subscription between InTouch View and Industrial Application Server when the following events occur:

Launch WindowViewer (with NO windows opened):
A. Application "While Running" script.
B. Condition Scripts.
C. Data-change Scripts.
D. Real-Time trend with Only update when in memory unchecked (this is the default).

When any window is open; that is, a currently-open window that has:
A. Animation links.
B. Window "While Showing" scripts.
C. AlarmViewerCtrl that is subscribed to the "Galaxy" alarm provider.
D. Real-Time trend (regardless of the "Only update when in memory" check box state).

For example, assume your application has one window which contains a Real-Time trend with "Only update when in memory" unchecked. Assume this trend is configured to gather data from the Galaxy reference "Galaxy:MyObj.MyAttr". Even though that window is not open, InTouch Software has an active subscription and is receiving updates for Galaxy:MyObj.MyAttr.

InTouch Software Add-Ins


SQL Access Manager, Recipe Manager
InTouch Software includes a number of add-ins (sub-systems) that extend its functionality, such as SQL Access Manager and Recipe Manager. It is possible to implement similar functionality to these sub-systems directly in ApplicationObjects. For example, Application Objects that execute database queries using ADO.NET can be used to mimic some of the functionality in SQL Access Manager. You can also create an object that stores data into flat files or tables in Microsoft SQL Server and implement logic similar to the InTouch Recipe Manager. InTouch View does not provide access to the SQL Access or Recipe Manager add-ins, since this functionality can now be implemented in Application Objects using .NET.
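As a rough illustration of the ADO.NET approach, a QuickScript sketch is shown below. It has not been validated against a specific Application Server version: the connection string, table, and Setpoint UDA are placeholders, the dim/new syntax for .NET types should be checked against the scripting documentation, and the script should run asynchronously as discussed in the Asynchronous Scripts section of Chapter 3.

' Asynchronous Execute script: read a single setpoint from SQL Server using ADO.NET.
dim conn as System.Data.SqlClient.SqlConnection;
dim cmd as System.Data.SqlClient.SqlCommand;
conn = new System.Data.SqlClient.SqlConnection("Server=SQLNODE;Database=Recipes;Integrated Security=SSPI;");
conn.Open();
cmd = new System.Data.SqlClient.SqlCommand("SELECT Setpoint FROM RecipeTable WHERE RecipeID = 1", conn);
Me.Setpoint = cmd.ExecuteScalar();    ' a type conversion may be required here
conn.Close();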


SPCPro
Using the SPCPro OLE Automation Library to directly call SPC functions from Industrial Application Server is not recommended. This method requires a very good understanding of the SPCPro database schema, as well as OLE automation. This scenario has not been tested and will not be supported in future releases. Use QI Analyst Software or custom-made objects for SPC analysis within the Industrial Application Server environment.

Tablet and Panel PCs


Using InTouch Software with Industrial Tablet PCs and Touch Panel computers enables immediate access to process information anywhere in the plant.

Tablet PCs
Industrial Tablet PCs are furnished with Windows XP Tablet PC Edition and have InTouch Software pre-installed at the factory. A number of wireless options are available. Industrial Tablet PC users can leverage a number of features in both the Tablet Edition of Windows XP and InTouch Software, such as using digital ink (i.e.: to write values into data links or annotate graphical displays), enabling more efficient communication and troubleshooting of problems on the factory floor. Industrial Tablet PCs support a number of options to secure wireless communications, including the use of VPN over wireless. For more information on securing your wireless networks, refer to the white paper "Tablet PCs in Industrial Applications", accessible through the Wonderware FactorySuite support website.

Panel PCs
Touch Panel computers are shipped with either Microsoft Windows XP Professional or Windows XP Embedded and, in addition to InTouch Software, have selected Wonderware DAServers pre-installed. A pre-installed Industrial Application Server Bootstrap in Windows XP Professional Touch Panel Computers enables fast integration with Industrial Application Server. Both Tablet PCs and Touch Panel computers include the Microsoft Remote Desktop Connection (Terminal Services Client). Using Industrial Tablet PCs or Touch Panel PCs in combination with Terminal Services allows you to install the Industrial Application Server platform and InTouch Software once on a central server and then open sessions from multiple terminals. For information about using InTouch Software with Terminal Services, refer to the Terminal Services for InTouch Deployment Guide on the Wonderware FactorySuite support website http://www.wonderware.com/support.


SCADAlarm Event Notification Software


SCADAlarm Event Notification Software provides an alarm notification service via pagers, phone, and e-mail. While an InTouch Software node could query alarms generated by objects in the Industrial Application Server and act as a gateway to SCADAlarm via remote tag references, the recommended approach is to use SCADAlarm to map directly to an object attribute. To accomplish this, add a server definition in the SCADAlarm configuration that points to the "Object.Attribute" from which you want alarm notification. SCADAlarm Event Notification Software 6.0 SP1 allows browsing the Galaxy namespace. SCADAlarm Event Notification Software typically reads a string value of 1 in order to determine an alarm condition. This value may be received from the Industrial Application Server as "1.00000000," so you should use this string in the tag definition. Acknowledgements can be sent from SCADAlarm Event Notification Software back to the Industrial Application Server via another "Object.Attribute" reference to the .AckMsg attribute. In this case, a string value of 1 is sent to acknowledge the alarm. Note This information applies to SCADAlarm Event Notification Software v. 6 SP1 and later. For previous versions, FSGateway is used to achieve integration between SCADAlarm and Industrial Application Server.

Alarm DB Manager
Alarms generated by ApplicationObjects are stored within the alarm database. The alarm database can be hosted by the Galaxy Repository node (if available all the time), the IndustrialSQL Server Historian node, or any other Microsoft SQL Server node that has an Industrial Application Server Platform deployed to it and is configured as an Alarm Provider. The Alarm Groups (Areas in the Industrial Application Server) should be configured as "Galaxy!Area", where "Area" is the globally unique name of any configured Area ApplicationObject of the Galaxy.

SuiteVoyager Software
SuiteVoyager is designed as the web portal to any FactorySuite A2 System data. InTouch Software windows that access Industrial Application Server data can be converted to web pages for display in SuiteVoyager. History and alarms generated by an Industrial Application Server can also be made available via SuiteVoyager. The SuiteVoyager portal node requires a local Platform to be part of a Galaxy.


Note SuiteVoyager and MSDE are supported on Windows Server 2003 if default settings are modified: enable Network Access and SQL Server authentication, both of which are non-default settings for the install. For details of overriding the default installation, see the SuiteVoyager User's Guide.

QI Analyst Software
QI Analyst Software provides statistical analysis of FactorySuite A2 System data. The data includes real-time InTouch Software data, real-time data from Application Server through FactorySuite Gateway, InSQL (historical) data, or any other ODBC or OLE DB-supported data source. Quality information is collected, analyzed, and displayed through various QI Analyst Software components. The collecting and analysis engines of QI Analyst Software can be accessed and leveraged by Industrial Application Server, thus allowing data to be sent to and read from QI Analyst Software. Process analysis charts are provided through stand-alone QI workstations or through ActiveX controls deployed within InTouch Software or another ActiveX container.

Scripting
QI Analyst Software calls can be made from within an Application Object. The QI Analyst Software OLE Automation library is called with the CreateObject() function from ApplicationObject scripting. New data rows and values can be stored in the QI Analyst Software database through the object interface. The main difficulty is in the persistence of objects from one script to the next, or from object to object. Object persistence must be managed with the .NET global data cache (that is, System.AppDomain.CurrentDomain). For more information, see "Scoping Variables" in Chapter 5, "Working with Templates." Setting objects such as a QI database or data table into the global memory allows other scripts to continue working with the object in the same state as the previous script stored it.
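The following sketch illustrates that pattern using the SetData() and GetData() methods of the current AppDomain; the ProgID, cache key, and variable declaration are illustrative assumptions only.

' Reuse one QI Analyst automation object across script executions via the
' .NET global data cache. "QIConn" is an arbitrary cache key.
dim qi as object;    ' the exact variable type for holding a COM reference may differ
qi = System.AppDomain.CurrentDomain.GetData("QIConn");
IF qi == null THEN
    qi = CreateObject("QIAnalyst.Application");    ' ProgID is illustrative only
    System.AppDomain.CurrentDomain.SetData("QIConn", qi);
ENDIF;
' ...call QI Analyst methods on qi here; it remains in the state the previous script left it...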

DT Analyst Software
DT Analyst Software collects, analyzes and displays downtime and efficiency data using the following components:

Configuration Manager: Configure the Downtime Database.
Logic Manager: Monitor changing tag values.
Event Monitor ActiveX Control: Display Downtime Events.

DT Analyst Software requires its own list of tags. The tags can be configured to read from I/O Servers, DAServers/OPC Servers, InTouch Software nodes, or the FactorySuite Gateway. Virtual tags (those with no I/O access name) can also be configured. These tags cannot read Application Objects directly from Industrial Application Server.


FactorySuite Gateway
DT Analyst Software is integrated with Industrial Application Server environment by using FactorySuite Gateway. The DT Analyst Logic Manager is configured to read from FactorySuite Gateway, which acts as a SuiteLink Server or OPC Server to provide Galaxy data to any SuiteLink or OPC client. Refer to the FS Gateway documentation for further details. The DT Analyst Logic Manager does not require a Platform because it relies on SuiteLink and OPC to receive value changes from I/O Servers. Important! The DT Analyst Logic Manager and Event Monitor require a connection to the Downtime Database (DTDB) in order to be functional.

Best Practice
Install the DT Analyst Configuration Manager on an Engineering Station node, along with the Integrated Development Environment (IDE) and InTouch WindowMaker. The configuration, downtime, and efficiency data is stored in a SQL Server database called DTDB (default). DTDB is independent of the Galaxy Repository database and the objects that it contains. Aspects of the DT Analyst Software process model (such as Systems, Areas, or Groups) may be similar to some objects in Industrial Application Server, but they are not directly related. However, it is possible to query the DTDB database from an object designed for that purpose.

InTrack Software
InTrack Software is the Work-In-Process (WIP) and material tracking component of FactorySuite. It delivers configuration tools, OLE COM objects, and database structures that manage the movement, creation, and consumption of materials as they move through the manufacturing process. InTrack Software requires either a Microsoft SQL Server or Oracle database to function. For specific database requirements and important planning guidelines, refer to the InTrack Deployment Guide.

Using InTrack Software with an Application Server


Industrial Application Server provides an opportunity to enhance and extend the core capabilities of InTrack Software. Industrial Application Server can serve as an event source to trigger InTrack transactions, and can perform conditional alarming or execute pre-conditioning logic or qualification for InTrack Software calls.

All IAS attribute capabilities - alarming, historization, and I/O source definition - can be applied to the objects used for InTrack Software interaction. The two components, Industrial Application Server and InTrack Software, must be closely coordinated through naming conventions and structure to accomplish this coexistence. The following section provides guidelines for successful implementation.

IAS and InTrack Software Implementation Guidelines


Use the following strategies when implementing InTrack Software within the IAS environment:

- Execute InTrack method calls from Industrial Application Server objects, either via scripting or embedded into objects built with the Application Object Toolkit. The TrackOBJ.dll can be imported into the Industrial Application Server Galaxy, and any object that references the library will have the required methods deployed to its platform. Only one connection to the InTrack database can exist on any platform, so an engine-level object must be created to manage this connection (see the sketch following this list).
- Error handling must occur within the calling scripts. The scripts may execute synchronously or asynchronously but, because database response can be unpredictable, care must be taken to accommodate execution of the Industrial Application Server scan. As a rule, scripts that execute any database transaction should not run synchronously in a scan that includes time-sensitive control objects; doing so exposes the application to scan overruns and/or lockups. While it is conceivable to segregate all InTrack scripting functions to a dedicated engine and simply ignore scan overruns, it is rarely possible to construct an Industrial Application Server object model that lends itself to this architecture.
- Executing the scripts asynchronously from within IAS spawns a new thread that executes independently and does not interfere with scan execution by waiting for a response. The developer must include a message identifier to ensure that responses are directed back to the appropriate client.
- Another method involves creating a transaction server that abstracts the Industrial Application Server objects from the InTrack COM libraries. This transaction server is essentially a piece of code that creates and processes InTrack calls. An object executing on an AutomationObject Server platform collects the attributes that are required input parameters for InTrack methods (for example, TransID, LotID, Quan, and so forth), packages them, and, on a defined trigger, sends them to the transaction server. Delivery of the packaged parameters can be accomplished via Web services; the Web service handles message quality and integrity. The transaction server unpacks the parameters and assembles them into the appropriate COM method calls. These calls are executed against one, or a pool of several, TrackOBJ libraries on a central server. This strategy effectively moves the creation and execution of the COM calls to the backend server.
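The following QuickScript .NET fragment is a minimal sketch of the engine-level connection manager described above. The ProgID "InTrack.Server" and the cache key "InTrackConnection" are placeholders only; substitute the ProgID actually registered by TrackOBJ.dll and your own naming convention.

{ Startup script on an engine-level object: create the InTrack automation object once per platform }
{ "InTrack.Server" is a placeholder ProgID; use the ProgID registered by TrackOBJ.dll }
DIM TrackConn AS Object;
TrackConn = CreateObject("InTrack.Server");
System.AppDomain.CurrentDomain.SetData("InTrackConnection", TrackConn);

{ Calling scripts on the same platform retrieve the shared object instead of creating their own }
DIM TrackConn AS Object;
TrackConn = System.AppDomain.CurrentDomain.GetData("InTrackConnection");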

The transaction server can run on the same server as the database or on a separate machine. For larger installations or high-traffic networks, you might use three separate servers: the application server, the transaction server, and the database server. A Web server on each side handles taking attributes from the object, sending them across the network, and delivering them to the transaction server.

InTrack Software Integration with Other FactorySuite Software


InTrack Software can be combined with other FactorySuite software applications. InTouch Software is used to develop run-time applications, and to host the visualization and logic for InTrack Software. InTouch Software resides on the client side, either through direct connection or through a terminal.

For applications with many interface graphics and straightforward InTrack code, using InTouch Software as the OLE Automation container enables faster development. If the InTrack scripting is complex and the User Interface simple, Visual Basic is faster and offers more debug options.

InTrack Software may also be combined with additional FactorySuite A2 System components, like InSQL, SPCPro, and InControl Software.

- Applications using InSQL should use SQL Server for database management. The InSQL and InTrack databases can reside on the same or separate servers. If either database will have a high transaction rate, the use of separate servers is recommended.
- SPCPro is Wonderware's statistical process control product. When using SPCPro with InTrack Software, use SQL Server for database management.
- One InControl engine can manage multiple InTrack Software transaction engines. The InControl engine accepts commands from multiple client applications and determines which transaction engine is available for executing the command. All results, processing time, and errors are returned to the calling client application.
- Although InTrack Software is not integrated with SQL, use Industrial Application Server attributes or inputs from machine-level devices to InTouch Software tags as triggers for InTrack Software events. For example, a Start button can trigger attribute collection and send a call across the Web service to the InTrack Software function.

InBatch Software
InBatch Software is flexible batch management software designed to power production of any batch process. InBatch Software includes several SQL databases; the Material database and the Recipe database are the most relevant in the context of IAS. For example, it may be necessary to modify the amount of one ingredient added based on the concentration of another ingredient.

InBatch data is accessed using an OLE Automation server. The OLE Automation server is called and passed a list of "fill-in-the-blank" values indicating what data you want. The object then exposes the data in a group of fields. Having only Discretes, Analogs, and Strings, InTouch Software lacked the proper data types to support access to these databases through its QuickScripts; users were forced to write their own Visual Basic or C code to access these files. With the release of Industrial Application Server, the data types supported by the User-Defined Attributes of an IAS object are sufficient to access the Batch data. This means that an object can be built with UDAs that correspond to every attribute of the OLE Automation object. Include the proper values in each UDA to define what transaction to make with the OLE server. The script is used to "fill in the blanks" through the UDAs and then trigger a different UDA linked to a method of the OLE server. The requested data goes into a different set of UDAs to be read or fed into other objects. InBatch objects thus enable interfacing with data retrieval objects that previously required knowledge of higher-level programming languages such as Visual Basic or a C variant.

Industrial Application Server can be used to extend InBatch Software's functionality by performing the following tasks:

- Monitor and handle the use of Batch Units.
- Monitor and handle material assignment and reception of material relative to a Batch Unit.
- Perform status and material reconciliation. Industrial Application Server Application Objects add scripting capabilities to InBatch Software; the scripting engine and other custom functions make it easy to receive materials and reconcile inventory with I/O values.
- Synchronize the InBatch Material database with an external database, such as PreWeigh and Material Management systems. The Industrial Application Server scripting engine can interact with third-party databases and enable customization.
- Generate real-time, batch-specific alarms.
- Operate as an external phase engine.
- Apply run-time formula adjustments.

- Use material characteristics to adjust the target amount of a material. Application Objects make it possible to include material information that the batch system alone does not use.

The following Application Objects can be used to extend InBatch Software functionality.

- BatchProxy object: Uses the DDE/SuiteLinkClient object as the basis for a communications channel between Industrial Application Server and InBatch Software; InBatch Software acts as the server.
- BatchUnit object: Monitors Unit status tags.
- BatchInventory object: Connects to the InBatch Material database by using InBatch's MaterialSrv.exe COM server interface; can be used as a contained object for any BatchUnit object instance to monitor materials and for other actions such as material reconciliation.
- BatchPhase object: Enables Industrial Application Server to run external phases.

Note FS Gateway must be installed first; InBatch Software acts as a client.

Information on installing and using Batch Server objects can be found at http://www.archestra.biz. For additional InBatch Software information, refer to InBatchCOM.pdf.

InBatch Production System Requirements


InBatch Software is supported on the Windows 2000 Server and Windows Server 2003 operating systems.

Note Windows XP Professional CAN be used on an InTouch Software node that is serving as an InBatch Software client or an IAS visualization node.

The InBatch Info Server option (the historian part of InBatch Software) and an IAS GR node require the installation of MS SQL Server. Installing MS SQL Server on non-Server operating systems is not supported by Microsoft or Wonderware.

An InBatch production system requires two or more machines: an InBatch Software server on one computer, and Industrial Application Server on another. Each InBatch system also contains at least one Batch client. More complex systems may include a Redundant Server for automatic backup, InTouch Software clients, AutomationObject Servers, and a Terminal Server with associated Batch clients. Batch clients can be used for development, runtime production, report development, or web-based reporting.

InBatch Software and Industrial Application Server with Windows Server 2003
Windows Server 2003 (Standard Edition) comes with many security settings which, unlike former Windows Server operating systems, are disabled by default. In order to access a Windows Server 2003 machine remotely, you must enable the appropriate security settings.

Note For details on FactorySuite A2 security, see "Securing FactorySuite A2 Systems" on page 200.

Using DCOM in a Workgroup environment requires a common administrator account on both the server and the client machine. Furthermore, to use DCOM from Industrial Application Server scripting, this account must be the same account that was used when installing the server and requesting an account. Refer to the FactorySuite A2 support site http://www.wonderware.com/support/mmi for information on DCOM settings for InBatch Software.

InBatch Production System Topologies


InBatch Software/Industrial Application Server combinations are deployed using one of two topologies. The following descriptions explain what kind of information is exchanged.

InBatchServer
The first topology, BatchServerSuiteLinkClient/IBServ, uses the DDE/SuiteLinkClient object. InBatch is the server; Industrial Application Server is the client. This topology is recommended for batch engine information, batch information, and equipment allocation. The BatchServerSuiteLinkClient/IBServ topology provides the following advantages:

- InBatch Unit system tags can be used by Industrial Application Server through the BatchServerSuiteLinkClient object.
- New unit attributes can be created for InBatch data (also available through COM interfaces).
- InBatch COM servers can be used within Industrial Application Server scripting.
- Industrial Application Server scripting can be used to write one object, then instantiate many.
- Industrial Application Server AppObjects can be used to generate alarms. For example, "Phase Execution Time Exceeded on a Specific Unit."

The following topology figure illustrates the InBatchServer topology:

[Figure: InBatchServer topology - Batch Server (IBServ) communicating over SuiteLink with Industrial Application Server (SuiteLink Client / FS Gateway)]

InBatchClient
The second topology, FS Gateway/IBCli, a unit or phase topology, uses the SuiteLink protocol. Industrial Application Server is the server; InBatch is the client. This topology is recommended for Industrial Application Server scripting and field I/O data (such as phase information and equipment status). The FS Gateway/IBCli topology:

- Exposes phase logic attributes in Industrial Application Server to InBatch Software through FS Gateway. Phase logic in Industrial Application Server is more flexible than PLC database/file access, allows interfaces to third-party systems, and supports more intelligent alarming.
- Can be combined with PLC logic for added intelligence and parameter adjustment.

[Figure: InBatchClient topology - Batch Server (IBClient) communicating over SuiteLink with Industrial Application Server / FS Gateway]


Third Party Application Integration


FactorySuite Gateway
FactorySuite Gateway (FS Gateway) is a universal translator, capable of translating all major protocols (MX, OPC, DDE, SuiteLink) to any protocol required by a network component. Use FS Gateway to translate between the ArchestrA MX protocol and other protocols. For example, an Industrial Application Server, which uses MX, could not (previously) be accessed by third-party clients except through InTouch Software. Excel, Visual Basic, and other third-party applications were unable to receive data from FactorySuite products without using InTouch Software tags. Using FS Gateway as a protocol translator enables direct connection to an Industrial Application Server. FS Gateway can replace OPCLink, which translates OPC to DDE or SuiteLink. FS Gateway is also useful for legacy servers, controllers, and operating systems. FS Gateway can translate older DDE to the newer SuiteLink protocol, enabling legacy products to connect to newer systems.

Other Connectivity/Integration Tools


Client Application (Device Integration) Objects
Industrial Application Server accesses device data using Device Integration (DI) objects. These DIObjects ($DDESuiteLinkClient, $InTouchProxy, $OPCClient) enable the server to interface with I/O devices that use DDE, SuiteLink, InTouch Software tags, or the OPC protocol. Other protocol connectivity tools include third-party OPC servers, the ArchestrA DAS Toolkit, the Rapid Protocol Modeler Kit, the ActiveX SECSII/GEM kits, and the InTouch Tag Creator.

I/O Servers
I/O servers provide connectivity for devices using DDE, FastDDE, and SuiteLink protocols. I/O servers can connect every FactorySuite 2000 and FactorySuite A2 System component, as well as PLC, RTU, DCS, and ESD systems. Build I/O servers using Wonderware's Rapid Protocol Modeler (RPM) Kit. The RPM Kit can handle both serial and TCP/IP communications with either ASCII or binary protocols to connect FactorySuite client applications to devices with non-standard protocols.

DAServers
DAServers (Data Access Servers) provide simultaneous connectivity between plant floor devices and DDE, SuiteLink, and/or OPC-based client applications running under Microsoft Windows. The DAS Toolkit allows you to create a DAServer specific to your needs. DAServer architecture is modular, allowing plug-ins for DDE/SL, OPC, and other protocols.

Production Events Module (PEM)


The Production Events Module (PEM) is an ArchestrA-based Functional Module designed to provide traceability of production events. The module consists of a set of ArchestrA application object templates and the associated components and services required to capture, store, and report production event data. Data storage is handled by a SQL Server relational database designed following the S95 standards for naming, format, data, and the transactions that occur between production and business systems. Basic built-in reporting features are made available via a PEM Content Unit for SuiteVoyager, and a number of Production Database views are made available for end-user custom reports. PEM integrates with IndustrialSQL Server Historian and SuiteVoyager.

To provide store-and-forward capability when PEM objects are configured "without response," the PEM Functional Module leverages Microsoft Message Queuing to handle event messages between the AOS nodes where the PEM objects reside and the production database node.

The basic recommendation when integrating PEM into an application is to develop your ArchestrA model first. Then identify the data that needs to be collected - the process segments, the triggers associated with production events, the identifiers that link these data (lot, production request, and so on), and the points in your model where the PEM objects need to be placed - before adding the corresponding PEM objects.

Like any Application Object, PEM objects can be extended. PEM objects provide for the configuration of "Extended Production Attributes" to capture additional custom attributes when an event is triggered. The modular nature of PEM enables system scalability, allowing growth from small pilot or limited solutions to much larger systems.

Note For more detailed information on the PEM Functional Module, see the PEM User's Guide. For recommendations on how to properly integrate PEM objects in your ArchestrA application, best practices, tips, and troubleshooting recommendations, see the Production Events Module v. 1.0 (PEM) Deployment Guide.

InControl Software
InControl Software enables connectivity to third-party OPC servers and clients by creating an OPC Server and/or an OPC client. Use the OPC interface set to collect and transfer data between software packages from any vendor. InControl Software's OPC Server consists of these three primary types of objects: server, group(s), and item(s). Each OPC item object represents a single data element in the data source and has a name, value, time stamp, and quality. Items also have attributes and properties. OPC groups manage the attributes of each item contained in them. The OPC server maintains the properties of all OPC items. The InControl OPC server uses SuiteLink for communications. Note OPC uses the client-server model and the Microsoft COM/DCOM protocols for vendor-independent data transfer.

SQL Server
Use SQL Server Linked Server functionality as a gateway to systems using Oracle, Sybase, Ingres and other databases. Once linked to SQL Server, external databases operate as if they were part of the native SQL Server database. Data from multiple databases can then be combined in reports and queries. See the FactorySuite eSupport website, especially the Compatibility Matrix, for more information on specific component combinations.


CHAPTER 5: Working with Templates

A "template" object represents common functional requirements of a field device (valves, pumps), a group of field devices (skids, stations), or a user function (algorithms). These requirements reflect information such as number of Inputs and Outputs, alarm conditions, history needs, and security. One template object performs the equivalent functions of multiple InTouch tags and scripts. A template-centric development practice enables re-use of existing engineering. This chapter includes recommendations for using templates in the IAS environment.

Contents

Before Creating Templates
Creating a Template Model
Using UDAs and Extensions
Deriving Templates and Instances
Re-Using Templates in Different Galaxies
Export/Import Templates and Instances
Scripting at the Template Level

Before Creating Templates


Before building templates, identify and document the functional requirements of the target field device. To accomplish this, the following tasks must be completed:

- Identify all required field device properties (attributes). They include names, data types, and interaction requirements (that is, none, input, output, or input/output). For each attribute, determine if:
  - The attribute requires scaling or uses raw values.
  - The attribute requires alarms, and the alarming model to be used by each. This model can include where the alarms will be generated (locally or in the control system), any alarm priority assignment, and alarm messaging needs.
  - The attribute requires security and the security control.
  - The attribute requires historical logging. For example, is forced data storage required? For a variable data type, do you need to define the trend limits and a deadband?
- Identify any required scripting, such as algorithms, interaction between devices, and so on.
- Determine if field devices are grouped either into a common template or into a containment template model.

It is not necessary to know all requirements before building a template model. Extra functionality can be easily implemented in the template when new requirements are determined.

Creating a Template Model


After generating and documenting field device requirements, decide on a template model that fits those requirements. Begin by reviewing the field devices and their requirements, while looking for commonality across similar field device types. Determining this commonality is the basis for developing the template model. Select a Base Template that provides a logical foundation for the device type. For example, valves, pumps, and motors that have multiple states based on discrete limit switches use the $DiscreteDevice base template. Pressure and level switches use the $Switch base template. Process variable transmitters and controllers use the $AnalogDevice base template. Finally, basic process data receivers use the $FieldReference base template. A template is created either from a base template or from another derived template. Base templates are the objects provided with the Industrial Application Server. Base templates cannot be modified. Never create instances directly from base templates, since you will not be able to take advantage of advanced configuration and maintenance capabilities.

Four Object Editor tabs are used for configuring the template. The Object Information tab contains basic configuration information, the object execution order, and a Help file link. The Script, UDA, and Extensions tabs are discussed in more detail in the following content. After creating a set of derived templates, more complex templates can be created that contain instances of other templates; the derived templates become the basis for all other instances. This practice of embedding instances within a template is called containment.

Containment vs. UDAs


Whether containment or UDAs is the best configuration approach depends on actual field device requirements. For example, template containment works best when the lower-level object itself has many components and may contain even lower-level objects. Similar functionality can be achieved using User-Defined Attributes (UDAs). When the lowest-level object is added to a template, it may be done using either template containment or UDAs; both support an external I/O point link and history. Use the following guidelines to expedite an appropriate development strategy:

- Use template containment when more functionality is required, such as complex alarms, setpoints, I/O points, or other features readily available in a template.
- Use UDAs when the lower-level object is very basic. Use a UDA for memory or calculated values.
- It is also valid to always use a contained object for consistency, even when the property is very simple.

Decide the appropriate approach in advance of implementation.

Best Practice
Ensure the container incorporates functionality; otherwise, place it as an attribute in another object. Do not use excessive empty containers simply as placeholders to host objects. Empty containers impact engine scan resources and time.

There is no practical limit to the number of attributes per object. However, strive to keep the actual object count down. Deeply-nested template/container structures slow down change check-in and propagation. Complex objects should be built using UDAs, scripts, and containment. If additional performance is required, first try compiling the object scripts into .dlls to streamline scripting. The form supported by Industrial Application Server is a .NET library, which can be created using Visual Studio .NET with either Visual Basic .NET or Visual C# .NET.

Note Information specific to the .NET environment is included in Chapter 6, "Implementing QuickScript .NET." If further performance is needed, use the Application Object Toolkit to create custom templates.

Base Template Functional Summary


The following information describes each base template object and recommendations for use.

$AnalogDevice Template
The $AnalogDevice Template object provides supervisory control capabilities for instruments or equipment that have a key continuous variable. It contains numerous features to model more complex analog inputs and control loops. Any analog value that requires I/O scaling or alarming must be derived from this base template. The General tab of the Object Editor includes a field for setting the type of analog device. The Analog option type enables configuring a Process Variable (PV) input source and (optionally) a different output destination. The PV can be scaled, multiple alarm points defined, and history collected for the PV. The Analog regulator option type allows for a PV input (no separate output), a setpoint, an optional different setpoint feedback address, setpoint high and low limits, and optional control tracking. It also supports scaling, alarms, and history. When the analog device is configured as an Analog regulator, many aspects of a PID loop can be defined. It is necessary to add User-Defined Attributes to access specific loop control parameters, such as controller gain, integral time constant, etc. A fully-functional loop simulation object (based on the $AnalogDevice template) and other $AnalogDevice Objects are available for download at http://www.archestra.biz.

$DiscreteDevice Template
The $DiscreteDevice Template object provides supervisory control capabilities for instruments or equipment that have two or more discrete states. Use the $DiscreteDevice template for creating objects that monitor multiple discrete inputs and map them to a state table. A simple example is two discrete values representing an Open limit switch and a Close limit switch. Four combinations of the open and close inputs exist, and they could be represented by the states Open, Closed, In Transition, and Fault. The Process Variable (PV) attribute of the discrete device is a string representing the state; it can also be read as an enumerated integer value. The object supports up to five distinct states based on one to four inputs. Up to six discrete outputs are available from the template.

The Passive state is provided to represent the state when the field device is not energized. For example, a valve that fails to the closed state when it loses power would have a passive state of Closed. A valve that requires power to command it to open and to close may only use the two active states and not have a passive state. Alarms can be generated for either of the two active states or the fault state. The object also includes the option to record statistics, such as the duration of the various states, and to alarm on the duration.

$FieldReference Template
The $FieldReference Template object provides simple I/O capabilities for an external data source, including field instruments, for a variety of datatypes. The $FieldReference template is the parent for $Boolean, $Double, $Float, $Integer, and $String templates. These templates provide a mechanism to read and/or write to single I/O points in the field and to collect history on the process variable. The $FieldReference template does not provide scaling or alarm limits. Use the $AnalogDevice template for scaling I/O and setting alarm limits. The field reference templates are basically the same as adding a user-defined attribute and using the extensions to add input, output, or history.

$Switch Template
The $Switch Template object provides simple I/O capabilities for a single, two-state discrete signal. The $Switch template provides slightly more functionality than the $Boolean template, but less than the $DiscreteDevice template. The $Switch template provides two text states for a single I/O point (with an optional different output address). The value can be stored in history and either state (On or Off) can be alarmed whereas an alarm extension only alarms on the TRUE state.

$UserDefined Template
The $UserDefined object provides an empty starting point for creating custom-built objects that include UDAs, scripts, extensions, or contained objects. The $UserDefined ApplicationObject enables the user to define the following analog and discrete inputs:

Analog Inputs:

- Scale raw values
- Define deadband
- Define alarm levels
- Define deviation alarms
- Define Bad PV alarms
- Define rate-of-change alarms
- Define primary and secondary engineering units

- Generate events on value change
- Provide statistics

Discrete Inputs:
- Assign On/Off messages as labels
- Define alarm states as On or Off condition
- Invert input value
- Provide statistics

Template Modeling Examples


The following information describes two template modeling examples:

Example 1
This example describes the model of a process that contains four discrete valve types. The valve types are determined from device requirements. To optimize engineering development, a common derived template called $DValve is developed (from the $DiscreteDevice base template) that contains all the shared discrete valve requirements. A template for each discrete valve type is derived from that common template. The following figure illustrates how these templates are developed:

[Figure: template derivation hierarchy - from the base template, $DValve (Discrete Valve) is derived; from $DValve are derived $SDValve (Single Actuator Valve without Feedback), $SDValveSF (Single Actuator Valve with Feedback), $SDValveDF (Single Actuator Valve with Dual Feedback), and $DDValveDF (Dual Actuator Valve with Dual Feedback)]

Use this practice to model valves from different manufacturers and their models. A base valve template contains the fields and settings common to all valves within the facility. A new template is then derived from the base valve template for each manufacturer and contains vendor-specific settings. Finally, a new template set is derived from the vendor-specific template for each valve model used in the facility. Once a model-specific template is available, instances are derived from the template to represent the actual valves.

Example 2
This example describes a common, complex relationship called Reactor. Reactor is based upon an interaction of five field devices, and multiple instances of this relationship are used within the plant model. The relationship can easily be developed using containment. Create a derived template called $Reactor from the $UserDefined base template. Then, create contained template instances representing each of the five field devices. The complex relationship can now be developed (using scripting) in the container object ($Reactor) using the hierarchical names given to each field device. When instances of $Reactor are created as field devices, each contained device has two names: the containment name (hierarchical name) and the physical name. The following figure illustrates this practice:
[Figure: derived template with containment - $Reactor (derived from the $UserDefined template) contains Inlet and Outlet ($DDValveDF, Dual Actuator Valve with Dual Feedback), Temperature and Level ($Analog, Standard Analog), and InletPump ($Pump, Standard Pump)]
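The following fragment is a minimal QuickScript .NET sketch of container scripting against the contained names shown in the figure. The attribute names (PV, Cmd) and the interlock values are illustrative assumptions only; use the attributes actually exposed by the contained templates.

{ Script on the $Reactor container: reference contained objects by their contained names }
{ PV and Cmd are example attribute names, not the documented attributes of the contained templates }
IF Me.Level.PV > 95.0 THEN
    Me.InletPump.Cmd = "Stop";
    Me.Inlet.Cmd = "Close";
ENDIF;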

Using UDAs and Extensions


A User-Defined Attribute (UDA) enables data type additions to the template or instances. UDAs can be further enhanced by extensions. Using extensions, a UDA can take on input, output, history, and alarm characteristics. UDAs are categorized as follows:

- Calculated: The UDA is only modifiable by the instance. It will have no initial value until the object writes to it. Calculated UDAs are typically used for totals, averages, and so on.
- Object Writable: The UDA is writable only by instances within the Galaxy.
- User Writable: At run time, the UDA is writable by a user (subject to security restrictions), other instances, and the configuration program.

Note Locking a UDA makes it a Constant and therefore, it is not writable at run-time.

Best Practice
- When defining UDAs with input or output extensions, never lock their source or destination within a template, since it will be unique to each instance and defined later. Use three dashes ( --- ) to represent an unknown reference. This prevents the "could not resolve reference" warning when instances are created.
- User-defined attributes may have a one-dimensional array. Arrayed UDAs cannot be extended.
- Only a Boolean UDA can take on alarm extensions. Make the proper selections of priority, category, and whether the template description or a unique message for this alarm is to be used. Alarms on analog values require use of an analog device template and object containment.
- For a Boolean or Analog UDA alarm that comes from the field device control logic and requires a corresponding Acknowledge, the "Acked" attribute should take on an output extension with the destination being the "Ack" point in the control logic.
- If an underscore ( _ ) is the first character of an attribute name, that attribute is hidden from users at run time. Use this hidden attribute when you need variables to support certain functionality, but want to hide them from users in order to prevent confusion. A hidden attribute cannot be extended.

Any UDA and its extension created within a template is inherited by all derived objects. However, attributes and extensions are only propagated when the instance is created or when that particular attribute is locked.

Most UDAs are checkpointed. This means that all data necessary to support automatic restart of a running Application Object is saved periodically. The restarted object has the same configuration, state, and associated data as the last checkpointed value. Unlike other UDAs, Calculated UDAs are not checkpointed. However, a "Retentive Enabled" Calculated UDA is available and can be configured to be checkpointed with all the other attributes in that object.

Note For details on checkpointing, see "Checkpointing" on page 74.

Deriving Templates and Instances


The following information describes effective implementation practices for template and instance derivation:

Best Practice
Derive from the Base Template First: Derive a template from the base templates before deriving any instances. An object instance that is derived from one of the base templates ($AnalogDevice, $DiscreteDevice, $FieldReference, $Switch, $UserDefined, $WinPlatform, $AppEngine, and $Area) cannot be modified at the template level in the future to add additional scripting, UDAs, and so on. Therefore, always derive a new template from the base templates, even if it is identical to the base template. When a template is derived from another template, the derived template inherits all of the characteristics of the parent template. If the parent template is modified, only the attributes and extensions that are locked will propagate to child templates.

Propagate Security: Changes made to the security control of any attribute or extension will not propagate. If new attributes, scripts, and extensions are added, they will always propagate; the derived template can then take on additional functionality. When an instance is derived from a template, changes made to the security control of an attribute will propagate, but changes made to the security for an extension will not. If you deploy instances of a template and then modify the template, you must re-deploy the instances. Before deploying changes, you may want to perform an upload of runtime changes; the upload overwrites the initial attribute values with current run-time data.

Lock Attributes: Use Locking to propagate functionality when modifying a template. You can then unlock the attributes and extensions when the propagation is complete.

Name the Object: When an instance of an object is first created, it is given a default name based on the parent template name and an incremental number (_XXX). The name should be changed to meet the naming convention established for the project. If the instance is contained by another object instance, it will also have a hierarchical name.

Note The object's instance name is used by IndustrialSQL Server Historian to store historical data (not the hierarchical name). It is important to properly name the object before it is deployed. Each name can contain up to 32 characters. The object hierarchy can be up to 10 levels deep for a maximum hierarchical name of 329 characters. In the case of creating large numbers of instances from a single template, see the following section regarding Galaxy dumps and loads for a potentially faster renaming mechanism for a complex compound object.

Re-Using Templates in Different Galaxies


The following recommendations apply in an environment where separate, unique galaxies exist at different production sites, and where standardization is required across the sites. Templates can be reused in different galaxies (just as they can be reused in the same galaxy) to create multiple instances. The important difference in this scenario is that change propagation in a single galaxy is a simple process where locked features are automatically propagated to instances, whereas a multi-galaxy environment requires formal procedures be defined and implemented to manually update and maintain templates. The following recommendations describe performing updates/synchronization in a structured and repeatable manner:

Best Practice
Create a Master Galaxy for development purposes. This Galaxy contains the master template library with the latest revisions for all templates. When new galaxies must use any of these templates, ensure they include the latest revisions.

Create a "staging" engine for testing purposes. Whenever possible, deploy this engine on a machine that has no impact on a real production system. Once an object design has been tested and its functionality verified, it can be distributed to other sites. Export .aaPkg packages (cab files) that contain the necessary templates out of the Master Galaxy, then import them into the new production Galaxies.

Create local templates that derive from the master at each production galaxy. Any changes or specialization required should be implemented in the local template. Make use of toolsets to separate the master templates from the local templates; it is even recommended to hide the master template toolset in the production galaxies and treat them as if they were Read Only templates. Packages with exported templates do not include any logs documenting changes to the base templates, so all changes must be done in the master galaxy and properly documented upon check-in after editing objects. The Import Preferences dialog box (opened when importing a package with templates) includes options to handle version mismatches and name conflicts. In a multi-management environment where a package from the Master Galaxy is updating templates that already exist in production galaxies, select the Overwrite option to handle version mismatch. The Overwrite option will only work if the version of the object being imported is higher than the current version; the object version is stored in the ConfigVersion attribute that is present in all templates and instances.

Changes made on the local templates should be verified and validated before they are merged into the master template on the development Galaxy. If it is determined that the features in local templates should be implemented in the master templates, any new functionality (i.e. attributes or scripts) must be manually implemented on the master templates and properly documented. After manual implementation and documenting, a new package can be exported in order to update all production galaxies with the new version of the templates.

Export/Import Templates and Instances


Use the Export/Import functionality of Industrial Application Server to transfer templates and instances between Galaxies. Two export processes are available:

- Automation Objects: Best for transferring templates, and optionally instances, between Galaxies.
- Galaxy Dump: Best when working within the same Galaxy to create new instances of a template.

Export Automation Objects


When you perform an export of Automation Objects, the following are saved:

- The selected instances and templates.
- The templates from which the instances and templates were derived.
- The toolset used to display the templates.

An import simply imports the contents of the export file.

Galaxy Dump
A Galaxy Dump exports template instances to a .csv file for editing or for adding an instance of a template. Modifications can be loaded back to the Galaxy. When a dump is performed, any script, attribute, or attribute extension that is not locked at the template level will be dumped, each in its own column. A reference to the parent template is also contained in the file, in order to bring in all of the locked scripts, attributes, and extensions. Attributes that are calculated or writable at run time are not dumped. The dump and load functions are useful for quickly creating multiple instances of a template, instead of using the IDE.

To prepare a file for a Galaxy Load operation
1. Create one instance of the required template and dump this into a .csv file.
2. Open the .csv file using a spreadsheet editor (Excel).

Note Notepad or WordPad are viable options but contain limited functionality. When editing an existing Galaxy dump, perform a search and replace to change all occurrences of object instance names. The search/replace operation can also be performed within Excel. Excel does not recognize the file as a .csv-formatted file, and all the data is displayed in the first column.
3. Select the first column in Excel and convert the text to columns, using delimited and comma as parameters. This places each column from the .csv file into a different Excel column.

WARNING! NEVER click the Save button in the toolbar or the Save option in the File menu. Excel will save the spreadsheet as a .xls file and destroy the formatting conversions. See the following step for saving to a .csv format.
4. After the modifications are done to the file (using Excel), select Save As from the File menu and make sure that the File Type is .csv. The file now has the valid format to successfully apply a Galaxy Load operation from within the IDE.
In the following example, five additional instances were created from the $Boolean template.

For three of the derived instances, the Area is not known, and for the other three instances, the Area is HomeArea:

;Created on: 1/10/2003 2:01:12 PM from Galaxy:Test
:TEMPLATE=$Boolean
:Tagname        Area
Boolean1
Boolean2
Boolean3
Boolean4        HomeArea
Boolean5        HomeArea
Boolean6        HomeArea

5. Save the changes to the .csv file.
6. Load the .csv file into the Galaxy.

For this example, the following events occur when the load is performed:

- The first three instances are created with all template functionality. If no Area is set as the default, the instances appear in the Unassigned folder in the IDE.
- The next three instances are placed in the HomeArea Area, if this Area currently exists. Otherwise, the objects use the Area settings of the first three instances.

The advantages of using the Galaxy Dump and Load over creating instances within the IDE are evident when conforming to a naming strategy. For a contained object with three levels and hundreds of instances, it is much easier to rename all the instances with a search and replace of 101 to 102 instead of naming each instance one-by-one within the IDE.

Best Practice
When backing up the Galaxy database, use the Backup functionality available within the Galaxy Database Manager of the System Management Console. The backup contains all Galaxy information (including security configuration), whereas the simple export of automation objects only includes the object structure and template toolsets.

Scripting at the Template Level


Add additional functionality to a template by creating scripts. Scripts are written using the QuickScript .NET language.

Best Practice
The following best practice recommendations are cross-referenced to practical examples described in the following chapter.

- Segregate functionality by creating unique scripts for each segment required. For an example of modular scripting, see "ObjectCacheExt.DLL Overview" on page 151.
- When functionality within the script will require an extended amount of time to execute, set the script to run asynchronously with a timeout limit. Such functionality is sometimes required by COM objects and .NET objects (for example, file operations, SQL queries, and so on). For an example of asynchronous scripting, see "$SqlConnCacheMgr Script Examples" on page 158.
- When creating scripts at the template level, you may want to lock them. You can then make changes to the template script, and the changes will propagate to the next level. When a script is locked and there are no declarations or aliases, these sections should also be locked for improved propagation and deployment performance.
- When the field devices have a structured containment and addressing convention, create a script that will populate all attributes with an I/O extension at the time of deployment (see the sketch following this list). If instances have been configured using this method, you can use the "Upload Runtime Changes" functionality to synchronize the changes back to the Galaxy Repository.
- When adding scripts to templates, use relative names (Me, MyContainer, MyEngine, MyArea, MyPlatform, and MyHost). Using relative names saves you from having to edit absolute references in every instance.
- Lock all inherited scripts; otherwise, a copy of each assembly/script will be created for each object derived from the template, and multiple copies of the same script will be running on the AppEngine object where they are deployed. Locking the scripts generates one single assembly to be deployed to the AppEngine. The assembly is then shared by all instances derived from that template.
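The following fragment is a minimal sketch of such a deployment-time script. It assumes a DI object named DDESuiteLinkClient_001 with a topic named PLC1, an attribute PV carrying an input extension, and an addressing convention built from the object's Tagname; verify the exact name of the extension's input source attribute against the extension documentation before relying on this pattern.

{ Script run once (for example, OnScan) to build the I/O reference from a structured addressing convention }
{ The DI object, topic, and item naming below are assumptions for illustration only }
DIM IORef AS String;
IORef = "DDESuiteLinkClient_001.PLC1." + Me.Tagname + ".PV";
Me.PV.InputSource = IORef;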

Note For more information on scripting and practical examples, see Chapter 6, "Implementing QuickScript .NET."

Script Execution Types


The script execution types and when they should be used are as follows:

- Startup: Called when the object is loaded into memory. Primarily used to instantiate COM objects and .NET objects.
- OnScan: Called the first time an AppEngine calls this object to execute after the object scan state is changed to onscan. Primarily used to initialize attribute values.
- Execute: Called every time the AppEngine performs a scan and the object is onscan. Supports conditional trigger types of On True, On False, While True, While False, Periodic, and Data Change. Most run-time functionality is added here.
- OffScan: Called when the object is taken offscan. Primarily used to clean up the object and account for any needs that should be addressed as a result of the object no longer executing.
- Shutdown: Called when the object is about to be taken out of memory, usually as a result of the AppEngine stopping. Primarily used to destroy COM objects and .NET objects and clean up memory.

Scoping Variables
Declared variables can be of any data type within the development environment plus object, .NET type libraries, and imported types. Declared variables can be up to a three-dimensional array. The issue becomes how to create a .NET data type that is persisted from scan to scan and available to other scripts within the object or available to other objects. Since UDAs only support the base data types, a declared variable or script variable must be used. Once the variable has been created and values set, it must be added to the .NET global data cache through the following call:
DIM Connection AS System.Data.OLEDB.OLEDBConnection;
{Set required values}
System.AppDomain.CurrentDomain.SetData("MyConnection", Connection);

When this variable is required in another script, the variable must be dimensioned within the script and then read from the .NET global data cache through the following:
DIM DBConnect AS System.Data.OLEDB.OLEDBConnection;
DBConnect = System.AppDomain.CurrentDomain.GetData("MyConnection");

Script variables have the same options as declared variables.

Note User-defined attributes can only have data types that are available within the development environment and only support one-dimensional arrays.

Variables within a Galaxy can have different scopes:

- Global: Global variables can be accessed from anywhere within the Galaxy. Object attributes and user-defined attributes (UDAs) are used to create global variables. The value is held constant when the object is taken offscan.
- Script: Variables that have been dimensioned in the declarations section of the script can be accessed from any execution type of the script. These variables will persist from scan to scan, but are cleared when the object goes offscan.
- Script Execution Type: Variables that are dimensioned within the execution type of the script are only accessible from within that execution type. These variables will not persist from scan to scan.

Do not use a UDA if a DIM (dimensioned variable) will suffice. Most UDAs are checkpointed; DIMs are not. Checkpoints consume scan time. The exception to this rule is a UDA set as "calculated", which by default is not checkpointed. Calculated UDAs have the least "weight".

Note For more information on DIM statements, see "Using DIM Statements" on page 146.

Scripting for OLE and/or COM Objects


The following script change must be made from InTouch QuickScript to Industrial Application Server QuickScript .NET when using OLE and/or COM objects. Within InTouch, the function call
OLE_CreateObject(%Name, "ProgramID")

is used to create an instance of an OLE automation object with a starting character for the name of either % or #. Within Industrial Application Server, a standard variable name without the leading % or # is used with a "DIM Name as Object" statement. The variable is then assigned to the OLE/COM object by using the CreateObject("ProgramID") function. For example, the following code creates a QI Analyst data component object.

InTouch
OLE_CreateObject(%QIDC, "QI.DataComponent");

Industrial Application Server


DIM QIDC AS Object;
QIDC = CreateObject("QI.DataComponent");

Using Aliases
Aliases provide a single place for changing the variable, instead of multiple places in the script. This practice makes script maintenance easier when references contained in that script must be changed. Create an Alias:

- When an external attribute must be referenced in the script.
- When the attribute name is not known or will be unique for each instance of the template.

Use three dashes (---) to represent an unknown reference. The dash characters prevent the "could not resolve reference" warning when instances are created. Alias naming can also make the script easier to read and create. When the same relative reference is used multiple times, create an alias for it.
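As an illustration only, the fragment below assumes a script alias named LoopSP that is left as --- in the template and mapped in each instance to the actual setpoint reference (for example, TIC101.SP), and a hypothetical SPHighLimit UDA on the object.

{ The alias LoopSP is defined on the script's Aliases tab; only the alias mapping changes per instance }
IF LoopSP > Me.SPHighLimit THEN
    LoopSP = Me.SPHighLimit;
ENDIF;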

Determining Object and Script Execution Order


Industrial Application Server enables control of the object scan order within an Engine, and control of the script execution order within an object. Use this functionality instead of "data handshake bits" to ensure the delivery order of data from script to script and from object to object. Before considering script execution order, it is necessary to review how objects execute. The following information reviews object execution events at a basic level. Note For details on object execution, see the object's help files.

AppEngine Execution
The AppEngine is the only engine that hosts more than one object. Object execution is handled by the scheduler primitive, which is single-threaded. It executes objects registered on the host engine repeatedly, and in a sequential order, during the scan interval. The scan interval is the desired rate of execution of each Automation Object the AppEngine hosts. The following tasks execute engine-to-engine in the following order during the scan interval:

- Execution Phase: Individual onscan objects execute their functionality according to their configuration (defined at configuration time).
- Output Processing Phase: All pending output requests (SetAttributes, subscription packets, publish notifications) must be sent. Pending requests intended for the same engine should be sent as one block request so that the receiving engine can process them atomically (and in order).
- Checkpoint Snapshot Phase: This task is configured separately and may not occur during every scan interval. If a checkpoint occurs within a given scan interval, it occurs immediately after the Output Processing Phase. The checkpointer status is checked to see if it is still busy from a previous checkpoint. If not, a new asynchronous checkpoint is initiated with a checkpoint snapshot.
- Input Processing Phase: The goal of this phase is to process all input requests (SetAttributes, subscription packets, publish notifications). Input requests are retrieved one at a time. If any input requests are left in the queue, they are processed during the idle period before the next scan interval. At least one queued input request is processed following the Output Processing Phase, and before the Execution Phase.

A Scan Overrun is a Boolean condition that becomes true when the Execution Phase crosses from one scan interval to the next. When a Scan Overrun occurs, a new Execution Phase is delayed until the next scan interval begins. Any of the phases in the above list can cause a Scan Overrun when they extend beyond the scan interval. All objects deployed on the AppEngine are processed in the following order: DIObjects (multiple DIObjects are processed alphabetically by tagname), hosted ApplicationObjects, then their Areas (numerically).

Common Object Execution Order


The execution order of object instances running on an engine is configured on the Object Information tab of the object editor. As an engine executes its scan, it processes the objects in the order specified. Each named script within an object can be specified to run either just after inputs or just before outputs. The order in which the scripts are listed in either category is the order in which the scripts are executed. Each object executes its functionality in the following order:
1. Read inputs.
2. Execute "just after inputs" scripts.
3. Execute object native functionality (the UserDefined object has none).
4. Execute "just before outputs" scripts.
5. Write outputs.
6. Test alarms.

Each script is executed in its entirety before the next script is executed. This behavior is different from InTouch, where a script can trigger another script, such as a data change script. Within InTouch, the calling script halts while the data change script is run. Within Industrial Application Server, each script completes before the next script is run.

If a user-defined attribute of a second object instance is set during the execution of the script, and that UDA triggers a script on the second object, the script of the second object may or may not run during the same scan of the engine. If the second object is configured to run after the first object, the script on the second object will run during the same scan. If the second object has already been serviced during the scan, the script on the second object will run during the following scan.

Since the Engine manages each object, a script runs only as fast as the engine's scan period or some multiple of that period. If the engine's scan period is one second, and an object script is set to periodic for every 1.5 seconds, the script will run every other scan (that is, every two seconds).

Data requested from, or sent to, objects residing on another engine/platform are updated on the next scan. This is also true for Application Objects on the same AppEngine if an Application Object needs data from another object but executes before it in the scan. For example, when Object A executes, it needs the output values from Object B. The values received are from the previous scan, because Object B has not executed yet in the current scan. You must wait one scan if you want to verify a write of this type. Alternatively, you can change the execution order in the Object Editor so that Object B executes first.


Asynchronous Scripts
An asynchronous script runs in a separate thread and is not directly tied into the engine's scan process. Therefore, reading and writing to any object attributes (including those of the calling object) is a slow process. An asynchronous script should not Read or Write to a UDA or other external source within a long FOR-NEXT loop. Because the asynchronous script runs in a separate thread, it must wait until the next scan of the engine for all the Read or Write transactions to occur; if the transactions have not all completed by the next scan, the system must wait for another scan. A single-system test with an engine scan period of one second achieved approximately 70 UDA writes per second and 35 UDA reads per second.

Asynchronous Timeout Limit Settings


The asynchronous timeout limit must also be set appropriately. If the script times out, the script is halted in an indeterminate state, and there is no mechanism for determining what line the script was executing when it was halted. Therefore, another script should check for asynchronous script timeouts and clean up any remaining inconsistencies.

Scripting I/O References


An Onscan script can automatically assign all the input source and output destination attributes of the template to their target reference string. Several different strategies to accomplish this are possible. In general, the suggestions included in this document will only work if a strict naming convention is used throughout the system.

Scripting I/O References Example 1


Create an Initialization script that runs when the object first goes OnScan. This script directs all the I/O sources or destinations used by the object to the appropriate device integration object (DIObject), topic, and address. If at all possible, do not hard-code any of the address components (DIObject name, topic, and I/O address). The one exception might be the I/O address, if it is identical in all PLCs addressed by the object. The DIObject should be named in a manner that can be determined from some other component in the system, such as the engine or platform name plus "PLC." The topic should also be determined from some other component in the system, such as the Area. If there is only one topic, use a fixed name. For PLCs that allow textual address names, such as ControlLogix, using the same object or hierarchical name for the instance within AutomationObject Server is the best approach.

The following example assumes that one DIObject with one topic called "Topic" is available on the same engine as the object referencing it. The object is based on the $DiscreteDevice template and has two inputs called OLS and CLS representing the open and close limit switches of a valve. The object name partially defines the address name within the PLC.


The onscan script is similar to the following example:


Me.OLS.InputSource = MyEngine.Tagname + "PLC.Topic." + Me.Tagname + "OLS";
Me.CLS.InputSource = MyEngine.Tagname + "PLC.Topic." + Me.Tagname + "CLS";

Scripting I/O References Example 2


If the PLC does not have textual names, or different brands of PLCs are in use within the facility, the previous strategy can be abstracted one more level by using the attribute list within the DIObject. The Topic tab contains a list of attributes for each tag with a textual name and an item reference. The same code as shown above could be used for a PLC with an address convention of B3:12. In the Topic tab of the device integration object, add a couple of entries for the object. Suppose the object instance name was Valve220; the following entries would have to be in the attribute list:

Valve220OLS    B3:12
Valve220CLS    B3:13

The Onscan script would still be the same. The added advantage of using the attribute list is that all the I/O definitions are located in one central spot. A second DIObject with essentially the same set of I/O points can easily be created and, if necessary, modified by dumping and loading the existing DIObject. Another slight modification is to store the DIObject name and topic name within UDAs on the area object hosting the automation object; the DIObject could then be referenced by MyArea.DIName, where DIName is the name of the string UDA on the area object.
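For illustration only, the same OnScan assignment written against such area-level UDAs might look like the following minimal sketch; the DIName and TopicName UDA names are assumptions introduced for this example:

Me.OLS.InputSource = MyArea.DIName + "." + MyArea.TopicName + "." + Me.Tagname + "OLS";
Me.CLS.InputSource = MyArea.DIName + "." + MyArea.TopicName + "." + Me.Tagname + "CLS";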

Writing to a Database
To write to a database

1. Create an AppDomain object that handles a message queue.
2. At the object level, fill the message queue with the values to be written.
3. At the engine level, create an object that moves the entries from MSMQ to the database, using Stored Procedures. The scripting may run asynchronously as no collecting is required.

Note The Message Queue service may use a large amount of memory, depending on the amount of data written.


Best Practice
The following are best practices to keep in mind regarding scripting:

Use stored procedures (SPs) when reading/writing to the database to decrease database access time (use a set-up object to create the SPs and then undeploy this object). As an extension to this concept, try to create "services" at a higher level (area, engine, or platform) to support common data tasks.

Do not use OnScan scripts to access external data from multiple instances simultaneously. Managing object references by accessing external files from all instances at the same time (onscan) is not recommended. Either some type of handshake mechanism between the Platform and object is required to make this a viable approach, or you might use other methods to achieve this functionality.

Avoid extensive use of scripting within objects when possible. Extensive use of scripting influences the overall performance of the application, so look for opportunities to reduce the amount of scripting. Bear in mind that each script condition must be evaluated even when the script itself is not executed; with thousands of such scripts implemented, evaluating the conditions can use more CPU time than executing the logic.

Avoid reading a database from within an object script, because the engine stops scanning until the connection is made or the query is returned. Using asynchronous scripts does not help in this instance, because the asynchronous script must collect the data and transfer it back into the object for use; if it cannot finish all the 'writes,' it starts over on the next scan, and this retrieval can take many scans. Instead, use a common database access object on a separate engine to perform the connection/read and then expose the returned data in a UDA array or queue for the other objects to read. This practice allows the database object to stop the scan of its engine without affecting the engine used by the requesting objects. If very robust scripts are needed, asynchronous scripts perform more efficiently because they do not stop executing; however, be aware of the time needed to transfer the data back into the database.

Handle database connections using the .NET AppDomain object at the AppEngine level. This connection can be used by the Application Objects running on that AppEngine to execute the actual database transactions through stored procedures. An effective implementation is to create a template for DB-connector AppEngines. If every AppEngine uses the same name for the .NET AppDomain object, Application Objects can be configured regardless of the AppEngine on which they run. Do NOT create a database connection for every object.


Use asynchronous scripting, when appropriate. For example, use asynchronous scripts to execute functions that tend to "hang" or slow down the processing of other scripts (scripts that verify SQL connections, etc.).

Optimize dynamic referencing within scripts. In certain cases it may be necessary to dynamically configure Input source and Output destination references by writing scripts that execute at run-time. It is important to write these scripts so that they execute only one time: when the object is initially deployed. One method of optimizing reference setting is to check within the script to see if the default " --- " is present; if so, the script sets the dynamic reference string (see the sketch following this list). Once the string is set, it is checkpointed to disk, so that subsequent Engine or failover restarts are already configured. This practice is especially important in large systems (large I/O counts), where it avoids reconfiguring thousands of I/O references on Startup and the excessive CPU use that entails.

Use caution when including Data Change scripts. Recall that Data Change scripts are executed in three instances: Value Change, Quality Change, and a Set OnScan event (when the value is set from "nothing" to its default, and the quality changes). When large numbers of Data Change scripts are included, a failover event could cause all the Data Change scripts to execute at the same time, causing unwanted consequences.
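The following is a minimal sketch of the run-once pattern described in the dynamic-referencing item above, reusing the valve example from "Scripting I/O References"; the attribute names are carried over from that example, and the placeholder string is assumed to be the default three dashes:

{ Execute script; does work only while the default placeholder is still present }
IF Me.OLS.InputSource == "---" THEN
    Me.OLS.InputSource = MyEngine.Tagname + "PLC.Topic." + Me.Tagname + "OLS";
    Me.CLS.InputSource = MyEngine.Tagname + "PLC.Topic." + Me.Tagname + "CLS";
ENDIF;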



CHAPTER 6: Implementing QuickScript .NET

QuickScript .NET is the IAS scripting language and is tightly integrated with .NET. The greatest scripting capabilities are leveraged by the application of .NET Classes, which are made available through the Function Browser in the Types section. Importing .NET Function DLLs into the Types section is the best method for customizing the functionality within QuickScript .NET. Desired behaviors can be compiled using source code in any of the .NET languages, and existing COM DLLs can be wrapped as .NET Function DLLs and incorporated into QuickScript .NET. Import the desired .NET function DLL into the Galaxy; the Classes and their methods are then exposed for use in the Types section of the Function Browser. The Function Browser is a very useful tool for selecting script functions.

This chapter provides practical examples of .NET script use within the IAS/ArchestrA environment.

Note This chapter assumes basic familiarity with .NET concepts and structures. Graphics and script samples are provided from within the IDE Environment. Developing in Visual Studio is outside the scope of this material.

Contents

IAS Scripting Architecture
Script Access to IAS Object Attributes
Evolving from InTouch to ArchestrA
Accessing the .NET Framework
Using .NET for Database Access
Scripting Database Access
Template Object Examples


IAS Scripting Architecture


Scripting relationships in the Industrial Application Server environment are best considered as a series of "tiers" or layers:

QuickScript .NET (Static): Supported by calls to a base .NET object and its capabilities. When scripting, a "Dimension" statement is not necessary: the object is accessible via direct or relative reference. This layer contains "out-of-the-box" functionality; arguments of the function calls must support only MX Data types. Also includes basic error handling capability in the form of return value parameters.

Basic .NET Types: Supported by all platforms that support .NET. The Type functions are accessed through the IAS Script Function Browser:

The Types list exposes .NET Framework functions, QuickScript .NET functions, and Imported Functions:

Microsoft.Example.Name: Supplied with all .NET Framework operating systems.
System.Example.Name: QuickScript .NET Functions (following figure).
Imported Functions: Function Libraries supplied by a developer that include a global namespace identifier (in the above figure, A5.LakeForest.ObjectCacheExt).


.NET Classes: Provides .NET object capabilities defined in specific function .dlls, specific calls to .NET classes/functionality, and Try-Catch-Finally error handling. The following figure shows the System.Array.Copy function:

Note Highlighting a function displays the argument list at the bottom of the window.


ApplicationObject Script Interactions


The following figure shows the script interactions possible within the Industrial Application Server environment. The IAS interactions are grouped to the left; the .NET interactions are grouped to the right:
[Figure: ApplicationObject script interactions. IAS interactions (left): aaEngine, Application Object Domain, Input (MX), Output (MX), and Subscriptions (InputSource/OutputDest). .NET interactions (right): the .NET ObjectCache and referenced .NET Objects, the .NET Framework, an ADO.NET database, and XML documents. Legend: IAS Scripting Interactions cover modular script execution, UDA references, and Input/Output (Subscriptions); .NET Interactions cover Referenced Objects (in the .NET ObjectCache) and the .NET Framework.]

Application Object Domain


Note "Application Object Domain" is a .NET concept/implementation and should not be confused with ArchestrA ApplicationObjects. An Application Object Domain (often referred to as "AppDomain") is a virtual process that serves to isolate an application. All objects created within the same application scope (in other words, anywhere along the sequence of object activations beginning with the application entry point) are created within the same application domain. Multiple application domains can exist in a single operating system process. In the context of IAS, the Application Object Domain enables multiple scripting access to a common .NET object such as the ObjectCacheExt object described later in this chapter.


Script Access to IAS Object Attributes


The following section summarizes "out-of-the-box" functionality that leverages .NET capabilities.

Referencing Object Attribute and Property Values


Several methods are available within IAS Object QuickScripts to reference Attribute and Property values from other IAS Objects. Attribute and Property values of the current IAS Object may also be read from, and written to, through the use of the self-referencing "Me." keyword prefix.

Input/Output
The principal method of referencing Attribute and Property values of other IAS Objects is through the use of I/O Attributes. The IAS AnalogDevice Object includes the PV Attribute and properties identifying the PV.Input.InputSource. The IAS DiscreteDevice Object supports configuration of one or more discrete Input and/or Output Attributes, which receive designations such as "Input1," "Input2," "Output1," and "Output2"; these support I/O referencing using the Input1.InputSource and Output1.OutputDest Properties. An Attribute may be configured with any of the MxValue datatypes, and it can be extended as an Input, an Output, or an InputOutput. This is accomplished using UDAs and Extensions. For example, a UDA named Pressure can be added to a UserDefined Object, and the Pressure Attribute can be extended by assigning Pressure.InputSource. Assignment of the I/O Source can be deferred until run-time by leaving the triple-dash string " --- " in the configuration text field. A run-time QuickScript, typically named IOInitialize and typically configured as an Execute script with a WhileTrue trigger, is implemented so that the QuickScript runs once at the first Execution scan. The typical QuickScript I/O assignment command is similar to the following:
Me.Pressure.InputSource = MyContainer.DIDeviceName + "." + MyContainer.PLCScanGroup + ".F30[146]";

This example binds at run-time to a DIDevice Object name encoded in a Container Object, to a ScanGroup also encoded in that Container Object and to a hard-coded PLC I/O register. See "Scripting I/O References" on page 128 for more information. It is common practice to embed self-references to Attributes of the current IAS Object within QuickScript code using "Me.", for example:
Me.PressureSample = Me.Pressure;


Remote references to Attributes of other IAS Objects or to DIDevice Objects may also be encoded directly in QuickScript; however, this practice should only be implemented at the lowest level of Object derivation or in Object Instances. WARNING! Direct Remote References embedded in QuickScripts of top-level templates may be Locked and thus will bind all Instance Objects to that exact hard-coded Remote Reference, disallowing any Instance or Run-time change of the Reference.

.NET Framework
The .NET Framework within QuickScript keeps the code syntax straightforward, since QuickScript enables many different approaches to the acquisition and processing of I/O data. For example, QuickScripts are used to detect conditions, apply transformation algorithms, and post results as I/O or as database inserts/updates. Or the QuickScript may simply capture calculated values in UDAs for viewing by InTouch Software or historization by IndustrialSQL Server Historian. Industrial Application Server includes a number of pre-built .NET functions for Math and for String manipulation. These are exposed under specifically named tree branches in the Script Function Browser.

Other Script Function Categories


Other categories of functions are exposed in the Script Function Browser. The System group within the Script Function Browser tree holds two functions: CreateObject and Now. The CreateObject function is used to instantiate .NET Object Classes. Equivalent code may be implemented by first dimensioning a .NET Object of the desired type in Declarations and then invoking the new keyword in an assignment statement (see examples in Appendix B). The Now function retrieves the computer's system clock timestamp and passes it back as an MX-compatible Time data type. The Miscellaneous group recreates some of the Operating System calls that are popular in InTouch Script: ActivateApp, SendKeys, and WWControl. It also includes the ever-popular LogMessage function needed for writing diagnostic messages into the ArchestrA Logger. Additional functions of this group are explained in the following sections: IsBad, IsGood, IsInitializing, IsUncertain, IsUsable, SetBad, SetInitializing, SetUncertain.
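For example, a minimal LogMessage call might look like the following sketch (the message text is purely illustrative):

LogMessage( "Valve220: open limit switch not reached within the expected time" );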

Handling Data Quality


Good scripting practice requires that scripts are aware of the notion of Data Quality. Data Quality provides an indication of the "goodness" of any attribute in ArchestrA, and can be one of four main states: Good, Bad, Uncertain or Initializing.


A knowledge of Quality handling rules is necessary. In some cases, the quality will be handled automatically (see DQP below), but in others the script writer must explicitly check data quality. For example, if the script writer wants to check if a numerical attribute is exceeding some limit, the script should be written to check the quality of the attribute before performing the limit check.

Data Quality Propagation (DQP)


ArchestrA scripting incorporates a Data Quality Propagation (DQP) approach to handling data quality. DQP propagates the quality of attributes read by the script through the script's data as execution occurs. DQP allows for the execution of the script even though some of the referenced attributes might have unacceptable quality. Note The script developer must evaluate the quality of the attributes and allow the script to branch into a safe handling condition. Every attribute has an associated quality property. The quality property can be one of the following four primary states: Bad, Good, Initializing, Uncertain. In general, attributes with Bad and Initializing qualities are not in a state to be used in a script (other than setting some condition that something is wrong). Initializing quality is a transition state, usually seen when connections are being established or systems are starting up. Attributes with Good quality are normal and can be used. Attributes with Uncertain quality should be used with caution (they may indicate a manual override of a value, for example).


Successful script development requires an understanding of the following rules:

Only referenced attributes have a quality value. Local variables do not carry a quality value when the data value is assigned. For example:
Z = MyObject.PV

Z has the value of the object but the quality value of the object attribute is lost. However, it is valid to carry the object quality value by direct assignment to a local integer variable:
Q = MyObject.PV.Quality

If the referenced attribute quality value is Bad, the data value is set to the default value associated with the data type of the respective attribute; for a float attribute with Bad quality, the data value is set to the default NaN (Not a Number). The script system is capable of dealing with the NaN data value in such expressions as:
Me.PressureSetPoint = Me.Pressure + 12.0;           {assigned NaN} {NaN} {Constant}
Me.PressureMetric = Me.Pressure * Me.KPAperPSI;      {assigned NaN} {NaN} {Calculated Constant}

Script functions do not carry quality information. A function such as Float.Sqrt (float attribute value) has the quality information removed. If the attribute quality value is Bad, the data value is set to the default for the associated attribute data type. It is best practice to check the data quality explicitly using one of the convenience functions (for example, IsGood()) before calling a function to which attribute values are passed as input parameters.

Note The function is executed regardless of the quality of the input variable(s). Be sure that the function is called with valid input values.
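A minimal sketch of that practice, using the IsGood() convenience function and the Float.Sqrt function mentioned above; the Pressure and PressureRoot UDAs are assumed for illustration:

IF IsGood( Me.Pressure ) THEN
    Me.PressureRoot = Float.Sqrt( Me.Pressure );
ENDIF;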

Local variables and constants on the right side of an expression result in Good quality being assigned to the left-side attribute. For example, the expression
Me.PressureHigh = highPressureLimit;
Me.PressureHighHigh = highPressureLimit * highLimitFactor;
Me.PressureMinimum = 15.0;   {Good}

results in the quality value of the attribute being set to good regardless of the quality value of the attribute prior to the assignment.


Condition statements are the only instance where the Data Quality Propagation approach takes quality explicitly into account. In all other cases, script execution ignores the quality and the script developer must test the quality explicitly. A standard coding pattern should be followed to allow quality to be taken into account without the explicit testing of the quality, for example:
IF ((MyContainer.ValveA == "CLOSED") AND (MyContainer.ValveB == "OPENED")) THEN
    Me.Valve.Cmd = "CLOSE";
ELSE
    Me.Valve.Cmd = "OPEN";
ENDIF;

If the quality of MyContainer.ValveA and MyContainer.ValveB is acceptable, then the "IF" or "ELSE" branch is executed based on the values of the two attributes. However, if at least one of the values has an unacceptable quality (Initializing or Bad), then the "ELSE" branch is executed. This pattern assumes that the "ELSE" branch is always the fail-safe mode.

Data Quality Controlled Execution


In contrast to constants, local variables and script functions, script expressions adhere to the Data Quality Controlled Execution (DQCE) approach which allows execution of an expression only when the attributes referenced in the expression have acceptable quality. Script expressions are evaluated as a whole or not at all.

If any of the input variables to the expression change to an initializing or bad quality state, the result value reported will have a quality of Bad. If a situation exists such that one value is Initializing and another is Bad, Bad quality takes precedence and the expression's result quality value is reported as Bad. The result value itself is set to the default value for the given data type (for example, a result value for a type FLOAT is set to NaN - not a number) when the quality goes unacceptable. If one value is Uncertain and all others are Good, the resulting value is calculated but the resulting quality is set to Uncertain.

See Chapter 12, "Maintaining the System," for information about testing the quality of an attribute.


Using UDAs
UDAs provide read-write interactions between domain objects and facilitate MX subscriptions between object instances. This is handled at the Input/Output level. Use remote references to attributes of other objects which are NOT bound directly to UDAs via InputSource or OutputDest properties. The reference in a QuickScript looks like the following:
TankLevel101.PV

Or using hierarchical reference:


Unit101.Tank.Level.PV

Such remote references get subscriptions at run-time but their values are held in a 'memory cache' associated with the Object that contains all such remote references. Even the InputSource and OutputDest extended Attributes have entries listed in the actual memory cache.

Inter- and Intra-Object Scripting Considerations


Objects can access attribute data within the same object. Objects can also access data in other objects, wherever those objects may exist in the Galaxy. Those interactions are handled by MX. The script simply has to use references, and MX will route the data access to the appropriate location.

Evolving from InTouch to ArchestrA


InTouch Software and the ArchestrA scripting environments require vastly different implementation strategies. The following information provides some comparison and contrast between the two environments.

Inter-Object Scripting Interactions


InTouch Software scripting interacts internally using local Memory tags. InTouch Software scripting interacts externally using remote references and Input/Output tags. ArchestrA scripting interacts at the local level using local variables while external interactions are realized by exchanging values via MX.

Intra-Object Scripting Interactions


InTouch Software scripts interact with a global InTouch Software database. Within the ArchestrA environment, scripts are not exposed to each other except locally (within the object). When using IAS QuickScripts, communicating with other ApplicationObjects is accomplished by using the concept of a reference. The reference can be to any attribute, including UDAs and custom attributes.


.NET script functions reference objects in a common Application Domain. The .NET ObjectCacheExt runs on the same Application Engine as the objects in the same thread.

Remote References to Live Data


InTouch Software requires an Access Name for each external Datasource, and supports only DDE, NetDDE, and SuiteLink protocols. The local computer must have a definition of the item. The Galaxy Access Name links using MX. IAS provides remote references to any data from within the Galaxy (within a Global Galaxy Namespace).

Indirect Referencing
IAS version 2.0 introduced the "Indirect" variable type for use within QuickScript. In IAS, the behavior of an Indirect variable type is different from the same variable type used in InTouch Software. InTouch Software designates three types of Indirect, the designated InTouch Indirect variable is global to the InTouch VIEW application, and Indirect Referencing requires the creation of Indirect Tagnames in the InTouch Tag Dictionary that are global within that InTouch Software application. In IAS, the Indirect is generic for all MX datatypes, and a dimensioned Indirect variable is local to the QuickScript that contains it. The BindTo property is used to redirect the data source or data destination of the local Indirect variable immediately, during the same scan. However, data can only be read immediately during the same scan if the reference points to "Me" or to an Attribute of an Object that is running on the same Engine as the current Object, for example by UDA extension. IAS Indirect Referencing provides dynamic reassignment of InputSource/OutputDest attributes to an object with an existing subscription; when a new reference is created (that is, a reference to a different object), the reassignment takes effect at the next scan.
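A minimal sketch of the Indirect usage described above (the variable name, target reference, and LevelCopy UDA are illustrative assumptions; the exact declaration syntax may vary slightly by IAS version):

dim level as indirect;
level.BindTo( "Unit101.Tank.Level.PV" );  { redirect the local Indirect to a new source }
Me.LevelCopy = level;  { read is immediate this scan only if the source runs on the same Engine }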

IAS Context
Note The following material applies to .NET functions, not to native QuickScripts. The QuickScript .NET language is not intended to fully replace a developer tool such as Visual Studio. Complex scripting with .NET object error trapping and user-generated data types still requires a developer tool to write the code and contain it within a .NET or COM component file, such as a .dll, .tlb, or .olb file. These components can then be imported into the Galaxy Repository and be made available within the scripting environment. Any object instance that makes use of an imported library will be deployed with the library file to ensure correct object execution.


The main .NET editing tool in the IDE is the Script Function Browser (see previous figure). This utility provides access to .NET functions registered on the local node. All platforms that support .NET expose these function types. Deployed objects automatically install the QuickScript .NET .dll function library (required by the Application Object) on the remote platform. Many types are already installed on the computer. Others are installed by Visual Studio or by installer programs. Note For a list of included .NET functions and Sample Scripts, please refer to the IDE Online Help.

Accessing the .NET Framework


The following section provides a high-level overview of the .NET scripting environment.

.NET Overview
Microsoft .NET is based on open standards. It augments the presentation capabilities of HTML with the metadata capabilities of XML to provide a programmable, message-based infrastructure. Like HTML, XML is a widely supported industry standard, defined by the World Wide Web Consortium. XML provides a means of separating data from views, which offers a means to unlock information so that it can be organized, programmed, edited, and distributed in ways that are useful to digital devices and applications. XML and message-based technologies do not replace existing interface-based technologies like COM, whose tightly coupled, synchronous nature is required in many situations, particularly equipment control. However, the tight coupling afforded by interface-based technologies also makes them difficult to implement over the Internet; it is often difficult to even establish connections due to firewall constraints. In contrast, an XML instance document, wrapped in SOAP (Simple Object Access Protocol) and bound to HTTP, can pass easily through an enterprise's firewall on port 80, which is typically open for Web traffic. Much like HTML, XML is independent of operating systems and middleware; as long as an application can parse XML, it can exchange information with other applications. As such, XML provides developers with a powerful new tool for integrating systems in a highly distributed environment. .NET provides an open, standards-based framework that enables the following functionality:

Within the Application Domain: Use an ObjectCache that is referenced by multiple other objects. The objects use its functionality for common tasks. A common example is to enable multiple objects to access multiple database connections via the ObjectCache. ObjectCacheExt is used for the examples provided in this chapter.

Outside the Application Domain: Interact directly with databases (using ADO.NET functions) and .xml instances (using XML.NET functions).


Advanced Error Handling: Try-Catch-Finally functionality must be implemented inside of any imported QuickScript Function DLLs, with error results exposed as return results through Integer error condition codes or String error messages.

Web Services: .NET for Manufacturing introduces the concept of Web services. The Web services model is an Internet-based approach, leveraging the broadest reach and the least expensive communication infrastructure in the world. It uses standard Internet protocols, starting with TCP and UDP to establish connections, HTTP and other protocols to transport data, and XML to describe data.

.NET encapsulates functionality used to interact with databases, .xml documents, and Web Services.

System.Data Classes (Connection to MSSQL via ADO.NET)


It is not possible to pass dimensioned .NET objects as parameters of some .NET function calls. The work-around is to use Visual Studio .NET and create wrapper function .dlls as a script function library that only take scalar type arguments of the datatypes directly supported in IAS. This is demonstrated in the examples shown in this chapter.

Scripting Practices
The following information summarizes scripting practices that facilitate effective implementation and script management.

Modularize Scripts
Breaking the script into smaller pieces enables greater control, management, and reuse of content/functionality.

TryConnect: See the "TryConnectNow QuickScript" on page 172.
Comments: Include connection attempts, retries, and error message handling.
PostData: See the "PostDataNow QuickScript" on page 177.

While True
Using a While True script condition ensures that the system is not burdened by unnecessary script processing, because once the Expression goes false the script is skipped.
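A minimal sketch of this pattern for a run-once initialization (the Initialized UDA and the I/O assignment are illustrative assumptions):

{ WhileTrue expression:  Me.Initialized == false }
Me.OLS.InputSource = MyEngine.Tagname + "PLC.Topic." + Me.Tagname + "OLS";
Me.Initialized = true;  { the expression now evaluates false, so the script is skipped on later scans }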


Exporting Objects or Templates that Reference .NET Script Functions


When exporting objects or templates, DO NOT assume that all functions used by the object exist in other Galaxies. Ensure the Engineer imports the identical script function libraries into the target Galaxy before importing the templates or objects.

Syntax Differences
Code from VB.NET is not the same as QuickScript; ensure that syntax differences are understood before copying and pasting Visual Basic script into the QuickScript editor.

Using DIM Statements


A DIM statement does not instantiate the .NET object; a "new" statement must also be included in the script body. This concept is especially important in the case of a dimensioned Connection State object: for the connection to be successful, the object must first be instantiated. Tip Dimensioning statements for variables are supported within any script body; however, the variables are destroyed and recreated each time the script executes. Create dimensioning statements in the Declarations pane instead. The variable is then created one time at Startup and retained for the life of the object.
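A minimal sketch of the Dim-plus-new pattern (the variable name is an illustrative assumption):

{ Declarations pane -- establishes the type-safe variable once, at object Startup }
dim sqlConn as System.Data.SqlClient.SqlConnection;

{ Script body -- Dim alone does not create the .NET object; new instantiates it }
sqlConn = new System.Data.SqlClient.SqlConnection();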

Handling Exceptions
QuickScript does not directly support exception handling. Encapsulate .NET Exception Handling inside of a QuickScript function .dll.

Using .NET for Database Access


The following section describes object examples that connect to a database (using an ObjectCacheExt.dll) and write generated values to a specific SQL table. Note The following section includes some of the functionality of the OLEDBDataAccess or SQLDataObject2.1 Objects; however, the concepts demonstrated in this example are useful in a broader context. This example illustrates the following key concepts:

Defining Project Scope and Requirements.
Using "Shape" Object Templates for broad application and reuse.
Using ObjectCacheExt objects (Script Function library).


Posting the Data.
System.Data classes (connection to MSSQL via ADO.NET).
Scripting practices for effective implementation and script management.
Handling exceptions.

Defining Project Scope and Requirements


The general approach for exchange of data between IAS Objects and a database was introduced in the last section of the previous chapter ("Writing to a Database" on page 129.) The example project in this chapter implements posting of data to Recipe data tables located in a database on MSSQL Server. The project manages the MSSQL database connections, allowing a single connection to each Recipe data table. Multiple Objects posting data are permitted, with the Object determining which data table it wants to post to. Furthermore, the Object must determine if a connection to that data table is available before it posts the data. For run-time testing, Objects will implement a "ChainReaction" that triggers posting of data in a round-robin fashion. Attributes are included that allow run-time control of the "ChainReaction" as well as manipulation of the data to be posted. See Appendix B for information regarding the Chain Reaction technique. This example is predicated on the following information requirements, which are used to define project scope parameters:

Determine the required number of Databases: In this example, multiple objects connect to a single (SQL) Database.
Determine the number of Target Tables: The SQL Database contains 2 target tables.
Determine the Table structure/schema: The Table structure is described in the following sections.
Determine the number of required Database connections: Multiple connections to the Database are supported.
Determine the number of "clients" connecting to the database: The "client" is an IAS Runtime object (not an external client application).
Determine whether the data is Posted or Retrieved: This example describes Posting data.

Note Using Database Triggers and Stored Procedures is outside the scope of this example.


Using "Shape" Template Objects


This example includes a "shape" template object that includes basic functionality required for any derived object to operate successfully. The shape object provides broad application and reuse capability. The shape object methodology provides two benefits:

Calls .NET functions implemented in C++, C#, or VB.NET.
Mimics the Object-Oriented structure of the .NET Framework as an inheritance model.
Provides the minimum functionality (in this example, Database connection requirements) used by all objects requiring a database connection.

.NET Inheritance Model


The following table illustrates how the "shape" object mimics a .NET inheritance model:

Automation Object           .NET Implementation
"Shape" Template            Virtual Class
Template Implementation     Class Implementation
Object Implementation       Class Instances

Minimum Database Connection Requirements

Must know the Network Node name or the Server Name.
Must know the Security Type: Integrated or SQL-type (sa). If SQL, know the authorized account name and password.
Must know the connection type: SQL or OLEDB.
Must know the Database name.

When a derived object template is created from the "shape" object, the derived UDAs appear in the IDE as Inherited and are called by other UDAs created at the template level. When the object instance is created, it should require only slight customization to be effective. For example, when posting data, a different UDA is required for each data type. If posting Multiple records, use the UDA Array.

Using the ObjectCacheExt (Function Library)


The ObjectCacheExt Script Function Library DLL enables multiple objects to connect to a database. The instance Object then manages the object connections: who connects, connection status, how many connections are supported, and so on. The three contained Classes are ObjectCache, SqlConnCache, and OleDBConnCache. Note For an example of a basic ObjectCache object, see the IDE Help files.


ObjectCacheExt Classes
The sample QuickScript Function DLL is located in a file called ObjectCacheExt.dll and includes three .NET Classes. The first Class is simply the base ObjectCache functionality described in the IAS Help documentation covering scripting. The other two Classes implement a variety of useful methods for managing two database connection types; they are named SqlConnCache and OleDBConnCache. These two Classes are "type-safe," meaning that each Class only manages .NET database connections of the corresponding database protocol: SQL or OLEDB.

Connection Types
Direct ADO.NET calls (direct from QuickScript) are discouraged for the following reasons:

It is difficult to manage connections to multiple objects.
It is difficult to recover from Connection Time-outs and other failures.
Code can easily be manipulated, resulting in "broken" code.

The two database connection types represented in the ObjectCacheExt classes encapsulate methods that achieve the following:

Communication between multiple objects and one or more databases supporting the designated protocol.
Connection pool management.
Error handling with descriptive error messages.
Prevention of scripting errors by referencing a common .dll library.

Posting the Data


Examples of QuickScript code are listed in sections of this chapter to illustrate the following:

Demonstrating various techniques applying .NET Framework functions.
Calling functions of the (example) QuickScript Function DLL.

Note For the complete QuickScript and source code for ObjectCacheExt.dll, see Appendix B.

Object Functional Encapsulations


The object-oriented nature of IAS lends itself to dividing run-time tasks such as "data acquisition," "data transformation," and "post to database" into a variety of IAS Objects, each encapsulating one task. Template Objects for these run-time tasks are generally created as base data management templates derived directly from the base $UserDefined Object.


For the examples described in this chapter there should be one instance Object that manages the database connection pool. The connection pool doesn't reside inside the IAS Object, however; it must be instantiated as a .NET Class Instance. The Instance is attached to the Application Domain of the Engine hosting the Instance Object.

Scripting Database Access


The following objects and script examples are created from the $UserDefined object template. Note The methods described in this section can also be implemented for OLEDB connections. The following figure shows the Derivation view of the objects provided in this section:

Because the list is alphabetized, the Derivation View does not show the objects' relationships. In this example, the SqlConnCacheMgr provides the connection maintenance, and the PostToDBaORb objects provide posting for the data and use the SqlConnCacheMgr object for the DB connection.


ObjectCacheExt.DLL Overview
This section describes the ObjectCacheExt.dll Classes, focusing on the SqlConnCache class functions.

SqlConnCache Function Arguments


The following figure shows the function arguments as viewed from the Function Browser:

The function browser does not indicate the datatypes of the arguments, nor does it indicate the data type of the return value. Function argument datatypes may be guessed from the placeholder names that are given. The sure way to determine the datatypes is to look them up in the documentation for the function. For Microsoft .NET functions, the information can be found in the MSDN documentation or using the Visual Studio .NET Object Browser. The ObjectCacheExt.dll contains the following classes:

ObjectCache(): Designates the constructor for the generic .NET cache as copied from the example code found in the original Industrial Application Server IDE Help documentation.
SqlConnCache(): Designates the constructor for the TypeSafe cache for .NET SqlConnection objects. This class is used in the examples described in this section.
OleDBConnCache(): Designates the constructor for the TypeSafe cache for .NET OleDBConnection objects.


SqlConnCache Function Calls


The function calls generally return a String indicating the result of calling the function, although some return an Integer. The following functions return MXValue Datatypes. The argument Datatypes are given following this list.

Functions:

Add(sqlConnName, sqlObj): Does not return any value (this function internally calls the second Add( ) method setting its third parameter to true).
Add(sqlConnName, sqlObj, withExceptions): Returns a String with result or Exception.
ContainsKey(sqlConnName): Returns a Boolean indicating whether name is in cache.
ExecuteNonQuery(sqlConnName, SqlCommandText, enableMonitor): Returns a String giving number of Database rows affected or the Exception if an error occurs.
Get(sqlConnName): Returns the SqlConnection object found in the cache.
Get(sqlConnIndex): Returns the SqlConnection object found in the cache.
GetConnectionString(sqlConnName): Returns the String giving the connection string.
GetConnState(sqlConnName): Returns the String describing the connection state.
GetDatabaseName(sqlConnName): Returns the String giving the Database name.
GetServerName(sqlConnName): Returns the String giving the Server name.
GetExecuteNonQueryDuration(sqlConnName): Returns a TimeSpan giving duration.
GetExecuteNonQueryCount(sqlConnName): Returns an Integer count of queries.
Initialize( ): Returns a String describing success or failure to initialize.
Initialized( ): Returns a String indicating SqlConnCache object state.
Remove(sqlConnName): Does not return any value.


Arguments:

sqlConnName: String giving the name of the SqlConnection for the cache.
sqlObj: SqlConnection .NET object reference.
withExceptions: Boolean; when set to true, requests handling of Exceptions.
sqlConnIndex: Integer index identifying the SqlConnection object in the cache.
SqlCommandText: String giving a syntactically correct SQL Command.
enableMonitor: Boolean; when set to true, invokes .NET Monitor protection.
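For illustration, a call from an object QuickScript might look like the following minimal sketch; the namespace prefix, connection name, table, and result UDA are assumptions for this example, and the library must already be imported into the Galaxy:

{ Post one row; the returned String holds the affected row count or an Exception message }
Me.DB.PostData.Result = ObjectCacheExt.SqlConnCache.ExecuteNonQuery( Me.SqlConn.Name, "INSERT INTO RecipeData (BatchID, SetPoint) VALUES (101, 42.5)", true );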

Purpose of ObjectCacheExt.DLL
Although it is possible to encode ADO.NET function calls in QuickScript .NET, the lack of support for handling Exceptions inline does not allow the Objects to recover gracefully from run-time errors (interruptions) without the help of Function DLLs. Possible interruptions when trying to implement database interactions include:

Wrong node name or server name in the connection string.
Invalid login, either by proxied user account or by supplied user name and password.
Wrong database name or table name.
Invalid or incomplete query command text.
Query takes too long and times out.
Network cable gets unplugged.
Too many SqlConnections trying to talk to MS SQL Server.

The ObjectCacheExt.DLL encapsulates many of the desired function calls for gaining access to any particular database and table. It does not restrict which node or which database or table. The SqlConnCache Class handles only SqlConnection types. The OleDBConnCache handles connections to any database via OleDB. These examples focus on .NET SqlConnection techniques leveraging ADO.NET internally (within the DLL). The function calls and their return Datatypes are listed above. They represent a reasonably comprehensive set of functions for managing a cache of SqlConnections stored in memory in the Application Domain owned by the IAS Engine. The Engine contains a "manager" Object which calls these functions.


The SqlConnCache Class contains only "static" member variables. This means that only one instance of the Class will be loaded. The Class is loaded into memory in the Application Domain of the Engine when an IAS Object running under that Engine first calls any function of the Class.


Template Object Examples


Two example IAS template Objects are described, although not in complete detail.

The $SqlConnCacheMgr deals with creating and managing .NET SqlConnection objects in a pool, also called a cache. $PostToDBaORb deals with acquiring the SqlConnection object from the pool and applying ExecuteNonQuery methods against a Microsoft SQL Server database. For complete details of these template objects refer to Appendix B.

Example Object Interactions via UDAs


The following section describes the object interactions provided by Base Template Object UDAs. The objects are described in functional order.

SqlConnCacheMgr
The following figure describes the $SqlConnCacheMgr template Object's UDAs. Most of the UDAs are arrays with indexes 1 and 2. This example accommodates up to two independent SqlConnections. The UDAs of interest at this point in the discussion are:

SqlConn.Name[1] holds the String to be given to the cache as the name of the SqlConnection.

Connection.AcquiredBy[1] is a placeholder for the name of the IAS Object Instance that "reserves" the named SqlConnection for its own use.
[Figure: $SqlConnCacheMgr UDAs, including LogMessages.Enabled, SqlConn.GetStatistics.Now, SqlConn.Name[1..2], SqlConn.Initialized[1..2], SqlConn.Result[1..2], Query.Count, Query.Duration, Connection.Acquire[1..2], Connection.Release[1..2], Connection.Connect[1..2], Connection.Disconnect[1..2], Connection.AcquiredBy[1..2], Connection.Status[1..2], Connection.Database.Name[1..2], Connection.NodeName[1..2], Connection.NodeName.Validated[1..2], Connection.IntegratedSecurity[1..2], Connection.User.Name[1..2], and Connection.User.Password[1..2].]


PostToDBaORb
The $PostToDBaORb template Object is shown in the following figure. Instances derived from the template take care of acquisition of data that will be posted as a new row to a database table. Derived Object instances do not create their own .NET SqlConnection. Instead they reserve a connection owned by the $SqlConnCacheMgr Object instance. A $PostToDBaORb Object instance gets a .NET SqlConnection Object reference from the SqlConnCache Class, then it invokes several methods of the Class to Open the SqlConnection and post a new row of data. The Object releases the named SqlConnection when it is finished with it.
[Figure: $PostToDBaORb UDAs, including LogMessages.Enabled, Connection.TryConnect, Connection.Acquired.ByA, Connection.Acquired.ByB, Connection.Connect.ToAorB, Connection.Disconnect, Connection.Result, Connection.State, Connection.Attempts, SqlConn.Name, SqlConn.Name.Prefix, DB.Name.A, DB.Name.B, Connection.NodeName.A, Connection.NodeName.B, Connection.IntegratedSec.A, Connection.IntegratedSec.B, DB.TableName, DB.PostData, DB.PostData.Result, DB.PostData.ResultEnum[1..4], DB.Column.Name[1..10], DB.Column.DataType[1..10], and DB.Column.ValueString[1..10].]


Application Domain Interactions


The following figure illustrates where the SqlConnCache's cache resides and how SqlConnection object references get stored. A hash table sits in memory external to the $SqlConnCacheMgr Object instance. This remains within the memory context of the Engine that runs this Object instance. This memory context is known as the Application Domain and is associated with the specific aaEngine process under which the Engine runs. The cache itself is organized as a list of SqlConnection names (String Datatype) designated by the UDA SqlConn.Name[n]. The name in the hash table links to a reference pointer in memory linking to the .NET SqlConnection Object.
[Figure: Application Domain interactions. The SqlConnCache hash table (holding ConnectionDBa and ConnectionDBb, each with its Database Name, Node Name, and Security type) resides in the Engine's Application Domain. The SqlConnCacheMgr instance creates the connections and reads/writes the cache; a PostToDBaORb instance chooses connection "a" or "b", and the chosen connection is used for ExecuteNonQuery table inserts.]


$SqlConnCacheMgr Script Examples


$SqlConnCacheMgr is used as a "shape" template object. Note An IAS Instance Object needs to be instantiated that may be deployed to implement the run-time support for SQL Database interactions. The sole purpose of $SqlConnCacheMgr is to initialize and manage one or more .NET SqlConnection Objects. Other Object instances are then used to do all of the hard work of manipulating the database table rows. Pooled SQL connections are acquired and applied. They are managed by the instance Object derived from the $SqlConnCacheMgr template Object.

Initialize QuickScript
The $SqlConnCacheMgr includes many UDAs used to keep track of its own state and the state of a cache of .NET SqlConnection objects. (See Appendix B for a complete listing of the UDAs). Several QuickScripts operate within the Object. The Initialize script must execute before any other script, thus its execution order is stipulated to be first. Use Configure Execution Order in the QuickScript editor window to modify the execution order of scripts:

The purpose for declaring local variables in the Declarations section (of any Object's QuickScript) is to establish the variables in memory upon Object Startup. Such variables are local to the named QuickScript but their lifetime extends until Shutdown of the Object. Local means that the variable can be read and written to from within any of the script types (Startup, OnScan, Execute, OffScan, and ShutDown) but only within the context of the named QuickScript. Note The following examples are not complete and shown from the script editor as graphic images. For the complete, copy-able source code, see Appendix B, .NET Example Source Code.


Dim (Dimension) Code Sample


In this example the ConnectTo QuickScript cannot interact with the local variables of the Initialize QuickScript. Declaration of local variables is not restricted to the standard IAS Datatypes. In the Initialize example .NET SqlConnection objects are declared:
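The declarations themselves appear in the IDE as a graphic; a minimal reconstruction for illustration (the variable names match those used later in this section) might be:

dim sqlConnectionDBa as System.Data.SqlClient.SqlConnection;
dim sqlConnectionDBb as System.Data.SqlClient.SqlConnection;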

It is good practice to fill in the complete .NET Type designation, although the short form SqlConnection will work. A full designation ensures there will be no ambiguity when imported Function DLLs exist that may have the same named function under a different .NET namespace. It is also important to understand that the use of the Dim statement does not by itself create the variable in memory. Dim is used to establish a Type Safe Datatype for the named variable. The QuickScript compiler will give warning messages for code that attempts to use the variable if it happened to be a different Datatype than the one declared using Dim.

Load the Library into Memory Code Sample


The following QuickScript code from the Initialize script invokes the SqlConnCache Class of ObjectCacheExt.DLL. The first line of code looks at the Initialized state. Within the IF ENDIF block the Initialize( ) function is applied. The very first call to any function of the SqlConnCache Class loads the ObjectCacheExt.DLL into memory. Don't be confused by the fact that the Function DLL has a method with the same name as the QuickScript itself:
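A rough sketch of that block follows; the variable holding the cache state ("cacheState") and the return value of Initialize( ) are assumptions made for illustration, and the SqlConnCache methods are assumed to be exposed to the script as static functions of the imported Class:

IF cacheState == "" THEN
   cacheState = SqlConnCache.Initialize();   ' the very first call loads ObjectCacheExt.DLL into memory
ENDIF;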

The IF ENDIF block ensures that the Initialize function call will be invoked only when the initial state of the SqlConnCache is double-quotes (""), meaning Blank.

Cache Look Up Code Sample


The next QuickScript line of code from the example applies the Class's ContainsKey( ) look-up function. It retrieves the Boolean answer to the question: "Is there a .NET SqlConnection Object held in the cache by the name contained in Me.SqlConn.Name[1]?"
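A sketch of that look-up, assuming the result lands in a local Boolean variable:

existsConnectionDBa = SqlConnCache.ContainsKey( Me.SqlConn.Name[1] );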


Me.SqlConn.Name[1] is an Attribute of the $SqlConnCacheMgr Object Instance. It indicates the first indexed String in an array of Strings. The instance Object will contain the actual name to be used for identifying the .NET SqlConnection Object stored in the cache.

Create Connection Code Sample


The code within the following IF ENDIF block is only processed if the particular named .NET SqlConnection is not yet present in the cache. If not yet present, a connection is created:

In order to establish any .NET Object in memory the new operator is used in an assignment statement. The following statement binds the new Object to the local variable sqlConnectionDBa which had been previously dimensioned in the Declarations section:
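A minimal sketch of that assignment:

sqlConnectionDBa = new System.Data.SqlClient.SqlConnection;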

This particular function call invokes the constructor for a new .NET SqlConnection object and leaves it essentially empty as a default, ready to make a connection to an MSSQL database. The local variable (sqlConnectionDBa) had been previously declared as a local .NET Object variable of the desired type in the Declarations section. Upon assignment as new it goes into memory in the scope of the Initialize QuickScript at Object instance Startup. It then stays in scope until the $SqlConnCacheMgr Object instance goes through Shutdown. Note however that the new reference to a .NET SqlConnection is not yet accessible for use by other IAS Objects, nor is it accessible immediately by other QuickScripts of the same $SqlConnCacheMgr Object instance. As described in the previous section, the $SqlConnCacheMgr Object instance must place a reference to the new .NET SqlConnection Object into the cache. This is achieved using the following line of code in the example:
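A sketch of that line; the exact Add( ) signature and the UDA that receives the result String are assumptions:

resultConnectionDBa = SqlConnCache.Add( Me.SqlConn.Name[1], sqlConnectionDBa, true );
Me.SqlConn.AddResult[1] = resultConnectionDBa;   ' assumed UDA name, for inspection in Object Viewer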


The Add method places the String name for the .NET SqlConnection Object into the hash table in the Application Domain. It uses the value from UDA Me.SqlConn.Name[1] of the $SqlConnCacheMgr Object Instance. That value is inserted into the hash table as a SqlConnection name, and a reference to the new .NET SqlConnection Object is passed along to the hash table as well. The .NET SqlConnection object reference has been attached to the locally declared variable (sqlConnectionDBa). The third argument set to true means that the function call should investigate Exceptions and return a String identifying them if and when they occur. Note that the SqlConnCache Class's Add method inspects the Type of the .NET object that is passed in by reference. If the .NET object is not of Type System.Data.SqlClient.SqlConnection, it is not added to the cache and an error message String is returned. Thus the Add function is Type Safe. A local variable (resultConnectionDBa) captures the return value String, which may be inspected using Object Viewer because it gets copied into a UDA:

It is good practice to use locally dimensioned variables to capture return values coming from Function calls. Such local variables may then be inspected using IF THEN ELSE logic to detect error conditions. Finally, an interpreted form of the result or a truncated error message can then be copied into a full UDA of the Object instance.

OffScan Transition Code Sample


The proper coding for the $SqlConnCacheMgr Initialize QuickScript also considers what must happen when the Object instance goes OffScan. The following code deals with this transition:
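A sketch of that OffScan cleanup, using the same assumed static function calls shown above:

IF SqlConnCache.ContainsKey( Me.SqlConn.Name[1] ) THEN
   SqlConnCache.Remove( Me.SqlConn.Name[1] );
ENDIF;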

This code checks the cache again to see whether the .NET SqlConnection Object is present. If so it is gracefully removed by applying the Remove function call. The balance of code in the OffScan QuickScript cleans up the state of the cache in memory, providing a guarantee that the cache will be ready for new .NET SqlConnection Objects once the $SqlConnCacheMgr Object Instance is brought back OnScan.


Boolean UDAs in QuickScript Execute Triggers


The $SqlConnCacheMgr Object contains a simple QuickScript called Statistics, which periodically retrieves SqlConnection query count and duration information. The SqlConnCache Class is responsible for calculating the information. However, the statistics would just sit there if the Object instance didn't ask for them. The Expression that triggers the Statistics QuickScript is an illustration of a programming technique that allows for granular control of run-time execution. The following figure shows the Expression with the OR operator:
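The Expression is simply the OR of the two UDAs listed below, roughly:

Me.SqlConn.GetStatistics.Periodic OR Me.SqlConn.GetStatistics.Now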

Two UDAs are used to determine when the Statistics QuickScript runs:
Me.SqlConn.GetStatistics.Periodic
Me.SqlConn.GetStatistics.Now

The first of these is set to true by default. The second defaults to false. Either UDA being true results in periodic execution repeated every Trigger period. If the Periodic attribute is set to false, the Statistics QuickScript execution depends entirely upon the Now UDA. Upon setting the Now UDA to true the QuickScript executes once and stops. This is generally called a "one shot" among control system engineers and technicians. An alternate method utilizes a single Integer UDA as the trigger. The Expressions look to see if the Integer UDA is greater than zero. Non-zero values trigger the QuickScript. Inside the QuickScript would be IF ELSE ENDIF blocks that vary the actions according to the actual value of the Integer UDA.


ManageConnections QuickScript
The ManageConnections QuickScript of the $SqlConnCacheMgr template Object handles requests from other IAS Objects to acquire .NET SqlConnection objects from the pool of connections. Following are excerpts from the QuickScript:

Template Object: $SqlConnCacheMgr
QuickScript Name: ManageConnections

Declarations:

OnScan Script (Sample):

Execute Expression: Me.Connection.Acquire[1] AND Me.SqlConn.Initialized[1]
Execute Trigger Type: WhileTrue
Execute Trigger Period: 00:00:00.0000000


Execute Script (Sample):

First, in the OnScan script a local variable (afterInitialized1) is set to false. This is used to cause a single pass through the Initialization section at the beginning of the ManageConnections Execute script. Upon completion of one pass through this section this local variable is set to true.


Although it is tempting to place such initialization code inside the OnScan section, this would not work because the script trigger, even for initialization, depends upon a Boolean UDA value collected from another IAS Object instance; the script must wait until that Boolean value transitions to true. The OnScan script runs only once, and code within it would execute long before any other IAS Object is ready to invoke the initialization steps. The following script excerpt from the Execute section ensures a single pass is executed only when the UDA Me.SqlConn.Initialized[1] goes true:
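A sketch of that single-pass guard:

IF NOT afterInitialized1 THEN
   ' one-time cache checks and (re)creation go here
   afterInitialized1 = true;
ENDIF;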

Within that initialization code a call to the SqlConnCache Class's ContainsKey function looks to see if the named SqlConnection happens to already be in the cache. The result of the function call is copied into the local Boolean variable (foundConnectionDBa):
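Roughly:

foundConnectionDBa = SqlConnCache.ContainsKey( Me.SqlConn.Name[1] );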

If it does exist in the cache, the Class's Get function is called within the code block: IF foundConnectionDBa THEN ENDIF. Note that the local variable (sqlConnectionDBa) has been dimensioned in the Declarations section as an ADO.NET connection object, i.e. System.Data.SqlClient.SqlConnection. Once again it is important to understand that the Declarations section does not place such variables into memory, it only defines the Types of variables for compiler checks. Furthermore it is important to understand that the Declarations section in the ManageConnections QuickScript is essentially independent of the Declarations section in the Initialize QuickScript; thus the Dimension statements are repeated. It is perfectly acceptable to use different local variable names in differently named QuickScripts, even for the same Type of variable. Keeping the same local variable names is actually a convenience of "cut and paste" editing. Once the distinction is made that the two QuickScripts (Initialize and ManageConnections) have independent local variable scope, there should be no confusion in reuse of the local variable names. Just remember to provide the appropriate Dim statements in the Declarations sections.


Review the line of code that retrieves a .NET SqlConnection object from the cache:
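A sketch of that line:

sqlConnectionDBa = SqlConnCache.Get( Me.SqlConn.Name[1] );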

Remember that the UDA Me.SqlConn.Name[1] contains the String value of a name given to an available cached .NET SqlConnection object. It happens to be the first indexed element of an array of Strings. The Get function invokes the SqlConnCache Class method called Get passing along the name to look up in the hash table. If the name happens to be found in the hash table (i.e. it is in the cache), a reference to the associated .NET SqlConnection object is returned and attached to the local variable (sqlConnectionDBa) which by design has been defined (Dim) to be a System.Data.SqlClient.SqlConnection. Note that this script does not utilize the new keyword. It is not necessary in this case because the .NET SqlConnection object already exists in memory and has a reference stored in the cache:

What happens if the named .NET SqlConnection object is not found in the cache when another IAS object requests it? Even though the Initialize QuickScript was supposed to have created the required .NET SqlConnection object, another QuickScript called DisconnectFrom exists. If this QuickScript had been invoked it is possible that the desired named SqlConnection will have been removed from the cache. Under this scenario it is important that the ManageConnections QuickScript be able to recreate the SqlConnection and reinstate it in the cache:
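A sketch of the recreate-and-reinstate sequence described in the next paragraph:

sqlConnectionDBa = new System.Data.SqlClient.SqlConnection;
Me.Connection.Status[1] = sqlConnectionDBa.State.ToString();
resultConnectionDBa = SqlConnCache.Add( Me.SqlConn.Name[1], sqlConnectionDBa, true );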

Thus the new keyword does get applied in this scenario. The UDA Me.Connection.Status[1] gets a copy of the State of the .NET SqlConnection. The ToString( ) function is a universal .NET method that translates any object parameter into a readable String. Having just created a new .NET SqlConnection object, the State should in fact be Closed. Once created it is added to the hash table (the cache) by applying the Add function and the result is placed into the local variable (resultConnectionDBa).


Eventually the local variable representing the presence of the SqlConnection (foundConnectionDBa) achieves the true Boolean condition. The UDA Me.Connection.Acquire[1] will still be true because another IAS Object is trying to get a valid .NET SqlConnection from the pool. The following code is then executed, configuring the SqlConnection parameters. Note that these parameters are for the SqlConnection and do not in this QuickScript deal with executing any queries. That will be the job of the other IAS Object (see the $PostToDBaORb Template Object later in this chapter):

Per the logic of acquiring a valid .NET SqlConnection from the pool, it is always necessary to ensure that a reference is retrieved from the cache, so these lines of code are repeated. The Status is then inspected. If the SqlConnection is Closed, the following code generates a connection string and applies it in an attempt to establish a live connection with Microsoft SQL Server. The node name of the Microsoft SQL Server is supplied in the string from the UDA Me.Connection.NodeName[1]. Note that the format of the connection string varies depending upon whether the UDA Me.Connection.IntegratedSecurity[1] is set to true. If the security UDA is false, the user name and password are passed along from the UDAs Me.Connection.User.Name[1] and Me.Connection.User.Password[1]. Note that this technique for passing along a password is not entirely secure because the UDA can be read in plain text using Object Viewer. This code sample also illustrates a common technique in QuickScript whereby a String local variable (connectStringa) repeatedly has additional String values appended to it, finishing the connect string by appending the database name:
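A sketch of the connect-string logic; the UDA holding the database name is an assumed name, and the connection string keys shown are standard ADO.NET SqlConnection keywords:

IF Me.Connection.IntegratedSecurity[1] THEN
   connectStringa = "Server=" + Me.Connection.NodeName[1] + ";Integrated Security=SSPI";
ELSE
   connectStringa = "Server=" + Me.Connection.NodeName[1] + ";User ID=" + Me.Connection.User.Name[1] + ";Password=" + Me.Connection.User.Password[1];
ENDIF;
connectStringa = connectStringa + ";Database=" + Me.Connection.DatabaseName[1];   ' database name UDA is an assumed name
sqlConnectionDBa.ConnectionString = connectStringa;
sqlConnectionDBa.Open();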


However, if the .NET SqlConnection object is found not to be Closed, a message is logged giving the actual current status:

Finally the trigger Boolean UDA Me.Connection.Acquire[1] is set back to false upon completion of the task of making a live connection to a Microsoft SQL database.

ConnectTo QuickScript
The ConnectTo QuickScript of the $SqlConnCacheMgr template Object handles requests from other IAS Objects to find existing, or create new, .NET SqlConnection objects in the connection pool without acquiring them for immediate use. The code is not reproduced here because it is essentially the same as that of the ManageConnections QuickScript. If one script performs essentially the same function as another, why have such a script? The answer lies in the fact that there are many scenarios where other IAS Objects need to initialize a SqlConnection in the cache but do not need to acquire it immediately for their own use.


Certainly a single QuickScript function could handle each scenario internally using IF ELSE ENDIF logic. This would simply require that the trigger expression include several UDAs or a single Integer UDA that encodes the desired choice of action. Alternatively the UDA could be a String that is encoded with a command word such as INITIALIZE or ACQUIRE. It comes down to design choices. For these examples, the engineer chose to use separate UDAs triggering separate QuickScripts.

DisconnectFrom QuickScript
The DisconnectFrom QuickScript handles requests from other IAS Objects to remove existing .NET SqlConnection objects from the connection pool. The Execute trigger expression for this QuickScript includes only the UDA Me.Connection.Disconnect[1]. Most of the lines of code from this QuickScript are not reproduced here. The following lines, however, merit consideration:
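A sketch of those lines, using the local variable name mentioned in the text:

connection1 = SqlConnCache.Get( Me.SqlConn.Name[1] );
IF connection1.State.ToString() == "Open" THEN
   connection1.Close();
ENDIF;
Me.Connection.Status[1] = connection1.State.ToString();
IF Me.Connection.Status[1] == "Closed" THEN
   SqlConnCache.Remove( Me.SqlConn.Name[1] );
ENDIF;
IF NOT SqlConnCache.ContainsKey( Me.SqlConn.Name[1] ) THEN
   Me.Connection.AcquiredBy[1] = "";
ENDIF;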

This code fragment illustrates the technique for cleaning up the hash table (the cache) owned by the SqlConnCache Class. When it is determined that the .NET SqlConnection is still Open the .NET SqlConnection's Close method is invoked. Note that the local variable (connection1) had been defined as a System.Data.SqlClient.SqlConnection in the QuickScript Declarations section using the Dim statement. Also not shown above is the mandatory Get function call, which binds a .NET SqlConnection reference from the cache to the local variable (connection1). The state of the SqlConnection is then copied into the UDA Me.Connection.Status[1] and if the SqlConnection's State property has achieved Closed status, the Remove function is called to expunge the SqlConnection from the cache. Finally as cleanup, the ContainsKey function is called looking for false. This allows the QuickScript to then clear the UDA Me.Connection.AcquiredBy[1] to a null String (""), which signifies that the SqlConnection is not acquired any longer.


.NET Error Handling and Garbage Collection


QuickScript .NET does not support invocation of .NET Try Catch Finally syntax in IAS Objects. Error handling is encapsulated in the SqlConnCache Class inside of the ObjectCacheExt.DLL. The complexities of handling exceptions are hidden away from the Industrial Application Server programmer. In fact the following kinds of .NET Exception are investigated within the DLL, returning a simple String error when encountered. The bodies of the following "catch" clauses are not shown here:
catch (TypeLoadException tldException)
catch (NullReferenceException nRefException)
catch (InvalidOperationException invalidOpException)
catch (Exception e)

The programmer creating the IAS Object must still create logic that reacts to the error message Strings returned by the function calls. Programming languages prior to Java and .NET required careful management of memory by explicitly allocating and de-allocating it. .NET performs automatic Garbage Collection. This means that objects that are no longer referenced are available for elimination, thereby recovering memory. For example, when a SqlConnection is eventually removed from the cache, the .NET Framework automatically returns that memory to the .NET managed heap.

$PostToDBaORb Template Object


Previous examples addressed the creation of .NET SqlConnection objects in a pool (the cache) and initializing them by "Opening" the database, given a SQL connection string identifying the nodename (i.e. server name), security information, and the database name. The IAS $PostToDBaORb template Object provides functionality for posting data into a table of the database by way of the "Opened" connection. (For the full documentation of the example QuickScripts and Object configuration refer to Appendix B).

UDAs
Following are selected UDAs defined in the $PostToDBaORb template Object. Datatypes and array designations are omitted:

Connection.Acquired.ByA
Connection.Attempts
Connection.Connect.ToA
Connection.Connect.ToAorB
Connection.Disconnect
Connection.IntegratedSec.A
Connection.NodeName.A


Connection.TryConnect
DB.Column.DataType
DB.Column.Name
DB.Column.ValueString
DB.Name.A
DB.PostData
DB.TableName
SqlConn.Name

Note that the UDAs are segregated into groups using the convenient dot separator. This technique mimics the hierarchical naming structure of IAS within a single Object. Also note that the prefix for each is a functional name - Connection, DB and SqlConn. This convention provides a strong hint as to the purpose of the UDA.

UDAs Extended in $PostToDBaORb


In the block diagrams shown on a previous page, commands are linked between an instance of the $PostToDBaORb Object and the single instance of the $SqlConnCacheMgr Object. The method of linking is quite simple. The $PostToDBaORb instance Object contains several UDAs that are extended as InputOutput with a remote reference to the $SqlConnCacheMgr instance Object's corresponding UDA. For example the UDA Connection.Acquired.ByA has its InputOutput linked to SqlConnCacheMgr.Connection.AcquiredBy[1]:

Note that the text window cuts off the beginning of the text field in this figure. Also note that the UDA of the $SqlConnCacheMgr instance Object is indexed to the first element of an array of type String. This particular UDA array serves as a reservation system. An Object links to this UDA and places a String value giving its own Tagname (using Me.Tagname in scripting) into the UDA, thus acquiring exclusive rights to the .NET SqlConnection object owned by the $SqlConnCacheMgr instance Object. When finished with the SqlConnection, the $PostToDBaORb instance Object clears the String value to a null String (assigning double-quotes "") and the reservation is withdrawn.


In fact, nothing prevents a greedy $PostToDBaORb instance Object from latching onto the reservation permanently. It can just leave its Tagname in the UDA forever if it so chooses.

TryConnectNow QuickScript
Previous examples addressed the creation of .NET SqlConnection objects in a connection pool using the SqlConnCacheMgr Object.

Template Object: $PostToDBaORb
QuickScript Name: TryConnectNow

Declarations:

OnScan Script (Sample):

OffScan Script (Sample):

Execute Expression: Me.Connection.TryConnect
Execute Trigger Type: WhileTrue
Execute Trigger Period: 00:00:05.0000000


Execute Script (Sample):


Determining Connection Choice


One interesting method for determining a choice of .NET SqlConnection is implemented in the $PostToDBaORb Object. The method is illustrated in the combination of the OnScan script and the first few lines of the Execute WhileTrue script: OnScan Script (Sample):

Execute While True Script (Sample):

Actually the choice for a .NET SqlConnection is pre-encoded: A suffix letter taken from the Tagname of the $PostToDBaORb instance Object is used to designate the choice. The OnScan script simply picks off the last letter of Me.Tagname and copies it into a local variable (connectLetter).
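A minimal sketch of that OnScan logic, assuming the InTouch-style StringRight( ) function is available in QuickScript .NET:

connectLetter = StringRight( Me.Tagname, 1 );
Me.Connection.Connect.ToAorB = connectLetter;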


The UDA Me.Connection.Connect.ToAorB also gets a copy of the "letter" so that it may be used in other scripts. This is done in the OnScan script because it doesn't need to be collected repeatedly. Two examples of Object instance names are PostTOa and PostTOb which fill in the UDA with a and b respectively. These designate the first and second indexed SqlConnection objects. The Execute WhileTrue script is invoked by setting a UDA of the $PostToDBaORb instance Object (Me.Connection.TryConnect) to true. The script then tries every 5 seconds to get a .NET SqlConnection from the $SqlConnCacheMgr instance Object. The following line of code should be familiar by now:
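Roughly:

hasConnection = SqlConnCache.ContainsKey( Me.SqlConn.Name );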

The local Boolean variable (hasConnection) gets set to true only if the desired .NET SqlConnection object already resides within the hash table (the cache). The name of the .NET SqlConnection object is given by the String value encoded in the UDA Me.SqlConn.Name. Once it is determined that the SqlConnection is really in the cache, QuickScript code acquires it for use:
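A sketch of the acquisition logic for the "a" case (the "b" case writes the .ByB UDA instead):

connectAcquiredBy = Me.Connection.Acquired.ByA;
IF connectAcquiredBy <> Me.Tagname THEN
   Me.Connection.Acquired.ByA = Me.Tagname;   ' poke this Object's Tagname into the reservation UDA
ENDIF;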

Recall that the local variable connectAcquiredBy has gotten a copy of the value from the InputOutput extended UDA Me.Connection.Acquired.ByA or alternately from the .ByB UDA. The IF ELSE ENDIF block checks to see whether that UDA value happens to be the very Tagname of the acquiring object. If it matches, the SqlConnection is already correctly reserved by the instance of $PostToDBaORb. If not, an attempt is made to acquire by poking the instance Object's name, i.e. Me.Tagname, into the InputOutput extended UDA, which happens to be bound to the reservation UDA Me.Connection.AcquiredBy[1] of the $SqlConnCacheMgr instance Object.


For the "b" case the second array element [2] gets the reservation Tagname String value. Then the repeating cycle of the QuickScript execution is halted by setting the trigger false. If a SqlConnection reservation has not been acquired successfully the following IF THEN test increments a local variable to count attempts, comparing it against a configured maximum number of attempts encoded in a UDA. If the engineer should forget to fill in a number during configuration, after 10 attempts the QuickScript clears the trigger UDA:

TryConnectOnFalse QuickScript
Template Object: $PostToDBaORb
QuickScript Name: TryConnectOnFalse

Declarations:

OnScan Script (Sample):

Execute Expression: Me.Connection.TryConnect
Execute Trigger Type: OnFalse
Execute Runs Asynchronously: Checked

Execute Script (Sample):


The TryConnectOnFalse QuickScript serves a simple purpose: it cleans up connection states. Note that the local variable connectLetter is used again to indicate which of two connections the instance Object is dealing with. Alternatively, a reference to Me.Connection.Connect.ToAorB could have been used for this purpose.

PostDataNow QuickScript
Template Object: $PostToDBaORb
QuickScript Name: PostDataNow

Declarations:

Execute Expression: Me.DB.PostData
Execute Trigger Type: OnTrue


Execute Script (Sample):


This appears to be a complex QuickScript. It is called PostDataNow because that is what the instance Object is supposed to try to do, leveraging the acquired SqlConnection from the cache. The first part of the QuickScript contains the often-used connectLetter technique for determining which SqlConnection shall be targeted. Note that the OnScan QuickScript of PostDataNow is omitted because it is the same as in previous QuickScripts. The following excerpt from the above script copies configured UDA values into local variables and concatenates the connectLetter to a prefix to identify the desired SqlConnection name from the cache. Then the cache is checked, copying the Boolean result into the local variable hasConnection:
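Roughly, with the prefix handling assumed:

sqlConnName = Me.SqlConn.Name + connectLetter;
hasConnection = SqlConnCache.ContainsKey( sqlConnName );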

If the SqlConnection really is in the cache and it has been acquired by this instance Object the status of the connection is retrieved by a call to the Function DLL and copied into a local variable.


If the SqlConnection is Open, a CommandText String is generated. The use of FOR NEXT loops is a convenient method to build the String, adding indexed elements of UDA arrays. The first FOR NEXT loop fills in the list of Column names for an INSERT Query:
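A sketch of the first loop; the UDA giving the number of columns is an assumed name:

commandText = "INSERT INTO " + Me.DB.TableName + " (";
FOR i = 1 TO Me.DB.Column.Count
   commandText = commandText + Me.DB.Column.Name[i];
   IF i < Me.DB.Column.Count THEN
      commandText = commandText + ", ";
   ENDIF;
NEXT;
commandText = commandText + ") VALUES (";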

The second FOR NEXT loop fills in data values of the INSERT query. Note that there are IF ENDIF blocks which test the column DataType and insert single quotes into the query where they are needed to accommodate the particular DataType. DataTypes are encoded in elements of a UDA array:
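A sketch of the second loop, quoting only String-type columns (the DataType test shown is simplified):

FOR i = 1 TO Me.DB.Column.Count
   IF Me.DB.Column.DataType[i] == "String" THEN
      commandText = commandText + "'" + Me.DB.Column.ValueString[i] + "'";
   ELSE
      commandText = commandText + Me.DB.Column.ValueString[i];
   ENDIF;
   IF i < Me.DB.Column.Count THEN
      commandText = commandText + ", ";
   ENDIF;
NEXT;
commandText = commandText + ")";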


The INSERT query is wrapped up, and the moment of truth arrives where the ExecuteNonQuery method of the SqlConnCache class gets executed. This query could take several seconds. Hence this QuickScript's Execute operation has been marked as Runs asynchronously.
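A sketch of that call; the argument order of ExecuteNonQuery( ) is an assumption:

result = SqlConnCache.ExecuteNonQuery( sqlConnName, commandText, true );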

The FOR NEXT loop that follows illustrates a technique for taking a result String value, looking it up in a UDA String array (an enumeration) to determine if the result matched a known Exception type. Note the use of the EXIT FOR keywords. They are used to jump out of the loop when a match is found:
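A sketch of that loop; the enumeration UDA array and its size are assumed names:

exceptionMatch = false;
FOR i = 1 TO Me.DB.Exception.Count
   IF result == Me.DB.Exception.Enum[i] THEN
      exceptionMatch = true;
      EXIT FOR;
   ENDIF;
NEXT;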

If there was not an Exception the QuickScript inspects the result for useful information such as the number of rows affected by the INSERT query. This number is extracted and concatenated into a result UDA:

Finally the trigger UDA is cleared:


The ExecuteNonQuery method is not restricted to INSERT CommandText strings. Any valid query statement that returns a simple result string will suffice. The SqlConnCache Class (as well as the OleDBConnCache Class) was compiled including the ExecuteNonQuery method because it is quite basic and doesn't need a lot of other methods to look at the query results.

$PostTOaORb Derived From $PostToDBaORb


The QuickScripts within the $PostToDBaORb template Object do not deal directly with the acquisition of data needed to populate the INSERT query. The assumption for this template is that the data comes from somewhere and is copied into the array elements of the UDAs. To test the performance of the INSERT query in a live scenario, another template Object - $PostTOaORb - is derived from the original template $PostToDBaORb. The inherited QuickScripts handle acquiring a SqlConnection and posting the data. Specific QuickScripts take care of generating data and pushing the values into the UDA array elements. These QuickScripts are not reproduced here; however some examples of other interesting QuickScript techniques are illustrated. (See Appendix B for the fully documented template).

RandomizeNow QuickScript
Template Object: $PostTOaORb
QuickScript Name: RandomizeNow

Declarations:

Execute Expression: Me.Product.Randomize
Execute Trigger Type: OnTrue

The .NET System.Random object must be initialized in the standard way for any .NET object. The OnScan QuickScript applies the new operator. Each time the trigger UDA goes true a newly randomized value is generated and copied into a UDA.
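Roughly, with the local variable name assumed:

randomGen = new System.Random;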


This example creates two values in units of percent (%) from the single System.Random object. Several method calls generate different random result Datatypes depending upon the call. The NextDouble method of the object gives floating-point values between 0.0 and 1.0. The QuickScript multiplies by 100.0 and copies the result to a UDA, then subtracts that result from 100.0 for the other UDA. Another example of a technique is the use of a lookup table stored in a UDA array:
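A consolidated sketch of the Execute logic just described; the UDA names for the two percent values, the arguments to Round( ), and the mapping into a 1..5 index are assumptions:

randomPerCent = 100.0 * randomGen.NextDouble();
Me.Product.PercentGood = randomPerCent;           ' assumed UDA name
Me.Product.PercentBad = 100.0 - randomPerCent;    ' assumed UDA name
enumIndex = Round( randomPerCent / 20.0 );
IF enumIndex < 1 THEN
   enumIndex = 1;
ENDIF;
IF enumIndex > 5 THEN
   enumIndex = 5;
ENDIF;
Me.Product.Status = Me.Product.Status.Enum[enumIndex];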

Starting with the randomly generated randomPerCent local variable, an index integer (enumIndex) is extracted using the mathematical function Round. Then an IF ENDIF block ensures that the index doesn't stray out of the range 1 to 5. This number is then used as the index of the UDA array Product.Status.Enum[enumIndex], and that String value is copied to a status String UDA Me.Product.Status. This UDA is one of the String values posted with the data in the INSERT query.

Time-Based Calculations and Retentive Values


The following section presents the example $RunTime template Object. The Object functions as a multi-function Stopwatch that captures elapsed times for "Running" and "Held," and includes automatic calculations of percent time in either state.

Timer QuickScript in $RunTime


The $RunTime template Object illustrates time-based calculations leveraging fundamental .NET calculations supported directly in QuickScript .NET without the need for a custom Function DLL. $RunTime UDAs illustrate another important concept - retentive data. In support of Engine failover, calculated data within an Object must be preserved and reinstated when the Object restarts as a backup Engine picks up as Active. The failover scenario leverages the Checkpoint files that are copied from the Primary to the Backup Platform via the RMC. UDAs of an Object that are marked as Category Calculated retentive are captured periodically and copied over to the Backup within the Checkpoint files.


MX DataTypes are not comprehensive with respect to what may be needed to perform all of the necessary calculations within an Object. This example illustrates the use of the base .NET System.Int64 as a high-precision DataType. It happens that the MX Datatypes Time and ElapsedTime are directly compatible in precision with .NET System.Int64, i.e. they all carry 64 bits of data. In order to properly perform time-based calculations it is imperative to do them using this high-precision base .NET type.

Template Object: $RunTime
UDAs (partial list):

UDA Name                  Data type      Category
Clock.EndTime             Time           Calculated retentive
Clock.StartTime           Time           Calculated retentive
Clock.ElapsedTime         ElapsedTime    Calculated retentive
Running.ElapsedTime       ElapsedTime    Calculated retentive
Running.PercentOfClock    Float          Object writeable

QuickScript Name: Timer

Declarations:


Startup Script:


OnScan Script:


Name: Execute script
Trigger Type: Periodic
Trigger Period: 00:00:00.0000000


The Startup script for "Timer" includes all of the code that initializes the dimensioned .NET variables e.g.:
calcPeriodT = new System.TimeSpan;

and
lastScan = new System.DateTime;

An IF THEN block checks whether the Engine is not in the recovery state, for example following a failover.
IF NOT MyEngine.Engine.StartingFromCheckPoint THEN

For the case that there has not been a failover, it is assumed that the Object instance is starting up for the first time. Therefore the variables need to be properly initialized, i.e. not reinstated from Checkpoint data. Several of the UDAs of type Time are initialized by applying the QuickScript .NET function call Now( ), which grabs the time stamp from the computer's system clock.
Me.Clock.StartTime = Now();

Also note that dimensioned local variables (defined in the Declarations section) are used to make time calculations and they are initialized in the Startup script.
cstrtT = Me.Clock.StartTime;

A pair of statements appear to copy the same initialization data into two different local variables:
calcPeriodT = Me.CalcPeriod; calcPeriodL = calcPeriodT;

Actually the two local variables are of two different .NET data types: the first is System.TimeSpan and the second is System.Int64. Calculations comparing time inside of IAS QuickScript .NET must be performed using 64-bit arithmetic, so when differences in time must be evaluated, the System.Int64 local variables must be used. The example script code lines first capture the "retentive" Me.CalcPeriod of MX ElapsedTime and convert it into a System.TimeSpan. The System.TimeSpan is implicitly converted into a System.Int64 for use in calculations. As stated above for Startup, in the event that there has actually been a failover the Startup script will run, but the local variables will not be initialized, having verified StartingFromCheckpoint and thus skipping that section of code. Instead the OnScan script recovers the correct values from the Checkpoint files, for example:
cstrtT = Me.Clock.StartTime;

The real work of the Object is performed in the Execute script.


An IF THEN block determines when the amount of elapsed time exceeds the desired "calculation" period. Again, this determination is made by comparing two System.Int64 local variables:
IF calcDelayL >= calcPeriodL THEN

The following local variable is of data type System.DateTime. System.DateTime is similar to System.TimeSpan except that it represents an absolute point in time (UTC).
thisScan

The local variable alcET (representing elapsed time) is determined using a .NET technique that applies a method of the System.DateTime variable thisScan "Subtract," which back-calculates the amount of time elapsed between a previous time held in the variable lastScan and time of thisScan:
alcET = thisScan.Subtract(lastScan);

The following line illustrates the use of another method of the .NET System.TimeSpan object TotalSeconds, which facilitates calculation of percentages in this case. TotalSeconds actually returns 64 bit values and division of two 64 bit values keeps precision of 64 bits. The UDA is of type MX Float but .NET implicitly converts from 64 bit to a proper float number.
me.Running.PercentOfClock = 100.0 * rET.TotalSeconds() / cET.TotalSeconds();

Having performed the calculation using local variables, it is important to get the values back into a UDA of category Calculated retentive to preserve them for another scan and for the Checkpoint files. The following example illustrates this, copying a local variable's System.TimeSpan value into an MX ElapsedTime UDA:
me.Clock.ElapsedTime = cET;

The calculations in next scans are dependent upon remembering the exact time stamp so the local variable gets a time stamp using the Now( ) function:
lastScan = Now();

Note that this local variable lastScan is not copied to a UDA of category Calculated retentive. This is not necessary because lastScan is dimensioned as System.DateTime in the Declarations section. This means that it is active for the life of the $RunTime instance Object and its value is preserved from scan to scan. In a failover scenario it is not necessary to preserve its value because calculations must start over fresh with the Startup and OnScan scripts anyway.


Documenting Your Classes


Document all Classes used in the project

Summary
Interactions between scripts can be complex. This chapter has described some of the important concepts, including UDA scope, local variable scope, script execution order, and interactions with other IAS Objects.

Interaction with a database is the example selected for illustration in this chapter. However, the complexity of ADO.NET and the propensity for database interactions to thwart simple timing logic point toward custom Function DLLs as the most effective script implementation. An IAS Object called SQLDataObject is available and can be used for many database interactions, so it is not absolutely necessary to resort to custom Function DLLs to communicate with a database. ObjectCacheExt.dll was chosen as the example for this chapter because it also illustrates techniques for leveraging storage in memory using the Application Domain associated with the operation of the AppEngine hosting the IAS Object.

QuickScript .NET offers a variety of script operational types - OnStartup, OnScan, Execute, OffScan, and Shutdown - and a variety of trigger types are possible for the Execute operation using a versatile trigger Expression. The timing of repeat cycles can be configured, and the Run asynchronously check box option allows an Execute script to overcome the confining limits of an Engine scan. This becomes quite important when a script is attempting to perform programmatic interactions with off-Engine processes such as transactions with a database.

Incorporation of programmatic access to the Microsoft .NET Common Language Runtime (CLR) is the most powerful aspect of the IAS scripting environment. The syntax of QuickScript .NET has been restricted to basic constructs given the lessons learned with the InTouch Software script language. QuickScript .NET adds some fundamental functions, mostly with definitions and behaviors taken from InTouch Software.

The QuickScript .NET code examples in this chapter have been presented in a logical order and are reasonably complete; however, they do not constitute a complete, functional set of Objects for the test case of communicating with a database. See Appendix B, .NET Example Source Code, for the complete definitions. This chapter also described the concept of time calculations and provided a brief example.

This chapter has provided a glimpse into the QuickScript .NET environment using practical examples based on a real-world application. The complete listings for the Object templates and the C#.NET source code are found in Appendix B, .NET Example Source Code.


C H A P T E R 7

Architecting Security

Wonderware works closely with Microsoft and industry standards organizations like the OPC Foundation to involve multiple vendors in an industry-wide approach to solving security problems. The success of a Control System Security Solution is enhanced by pooling the expertise of IT and Control System Operations groups during the implementation and integration phases of a FactorySuite A2 System project. This chapter provides a high-level Security perspective, and specific recommendations within the FactorySuite A2 System environment.

Contents

Wonderware Security Perspective
Securing FactorySuite A2 Systems
Securing Visualization
Securing the Configuration Environment
Distributed COM (DCOM)
Security Recommendations
Summary


Wonderware Security Perspective


Information Systems in manufacturing facilities are evolving rapidly. The evolution of these information systems is driven by manufacturers' need to integrate with business/ERP and production systems, provide access to production data across the enterprise (both from inside and outside the environment), and reduce system maintenance costs. Security risks also evolve as new vulnerabilities and targets are discovered in the various systems. Numerous incentives exist to protect a control system:

The technical knowledge, skills and tools required to penetrate your IT and plant systems are widely available.
Regulatory mandates and guidelines issued by the US Government (National Strategy to Secure Cyberspace - US Government, page 32).
Guidelines and best practices for securing plant control systems from advisory groups, such as ISA SP99 committee, NIST (Process Control Security Requirements Forum - PCSRF), NERC, etc.

Wonderware's approach to site network(s) and control system security is driven by the following principles:

View security from both Management and Technical perspectives.
Ensure security is addressed from both IT/IS and Control System perspectives.
Design and develop multiple network, system, and application security layers.
Ensure industry, regulatory and international standards are taken into account.
Prevention is critical in plant control systems, supported by detection.

Realizing these principles is accomplished by implementing the following Wonderware Security recommendations:

Maintain a prevention philosophy to support security policies and procedures using the following security components:

Firewalls
Network based Intrusion Prevention/Detection
Host-based Intrusion Prevention/Detection

Include a clearly defined and clearly communicated change management policy (for example: firewall configuration changes).
Do not converge IT and Plant networks.
Do not maintain secure and insecure protocols on the same network.


Enforce monitoring, alerting and diagnostics of plant network control systems and their integration with the corporate network.
Move to an off-platform data collector in a DMZ.
Retain forensic information to support investigation/legal litigation.
Enable/Ensure secure connectivity to wireless devices.

Common Control System Security Considerations


Many process control networks have been implemented in pieces. Most of these systems lack a consistent security design and many were not designed to include security configuration. When securing a control system, the number one criterion is defining and understanding the information/data that needs to be secured. In doing so, potential vulnerabilities are identified. The vulnerabilities may be the result of practices adopted primarily for convenience. Once identified, vulnerabilities may be removed or altered to increase security with no impact on production operation performance. Areas of focus include:

Multiple remote access points.
Information queries that can be deferred or accessed through a DMZ/off control network, etc.

Common Security Evaluation Topics


The following security topics must be evaluated and should be critical parts of an effective security strategy:

Policies and Procedures


Security policies and procedures are the foundation of a solid security strategy. Many automation, control, and access areas must have well-defined security policies and procedures. The security policies and procedures (and their enforcement) will have a profound effect on enhancing automation and control system security.

Accounts
Types and uses of security accounts need to be defined by strong security policies that comprise useful account creation and maintenance procedures. The policies that govern system accounts should be fully developed, documented and communicated by IT, Automation engineering, and Management in a collaborative environment.


The following items must be considered when developing or reviewing account policies:

Only validated users have accounts.
Users' IDs must have unique names with strong passwords.
Individuals are accountable for the use of their User ID.
User access should be restricted as much as possible.
Make sure that account lockout duration is well defined.
Groups should be defined by user access needs and roles.
Guest accounts and default vendor accounts should be removed or reset as applicable.
Process Operator station accounts should be limited and defined by operational area.
Service accounts should exist on the local Domain or local machine and should not be used to logon to a server.

Passwords
Passwords are one of the most vulnerable security components. This security vulnerability is largely eliminated by defining a solid password policy and configuring your system to enforce the policy. Using complex passwords (and changing them regularly) minimizes the likelihood of unauthorized access to the control system. The following list provides guidelines for effective password management:

Enforce password history to limit the reuse of old passwords.
Enforce password aging to force frequent changing of passwords.
Enforce minimum password length to reduce password guessing.
Enforce password complexity requirements to reduce password guessing.
Ensure passwords are not stored using reversible encryption.

Remote Access
The need for access to process information, configuration information and system information from outside of the system's domain is common. Well-defined policies and procedures to manage remote access to the system by other company business units and/or suppliers and vendors greatly reduce the possibility of security threats penetrating the system. The following list contains guidelines for remote access:

Limit access as much as possible by defining different access levels based on need (job function).
Enforce mandatory PC checkups of any equipment that is brought onsite.
Configure a separate role-based user group for temporary accounts and review this user list often.
Define and document all outside system access routes and accounts.


Physical Access
Most production facilities have physical security plans in place. These plans should be an integral part of an overall security program. By not allowing unchecked computers and unauthorized users to have access to critical infrastructure components, many security threats can be eliminated. Critical process control components such as servers, routers, switches, PLCs, and controllers should be protected under lock and key and have personnel assigned who are directly responsible for the components.

Backup and Recovery Plan


The Backup and Recovery plan is a critical security component. Recovery from any level of failure due to either a security or natural interruption of the system must be included in the security policy. The following items must be considered when defining a backup and recovery plan:

Define and document how each part of the system will or can be backed up.
Ensure backups are included in the routine system maintenance plan and when improvements or other changes to the system occur.
Document backup procedures for all system configurations and assign administrative responsibility to appropriate personnel.
Document and keep current all versioning of system software and hardware.
Provide a protected off-site repository for copies of all system backups.
Provide a documented escalation plan for recovery and documented processes assigned to qualified personnel for implementing a recovery.

Virus Protection
Add an additional security level at each access point of the system by defining where and what virus protection is to be implemented. Document the proper configurations for the virus protection software. Mandatory virus definition file updates are essential. Note For a list of ArchestrA-related file exclusions, see "Configure AntiVirus Software" on page 283.

Security Patch Implementation


Security patch management is a critical evaluation topic that has the largest impact on Microsoft Operating System-based Supervisory and Control Systems.


Careful planning and attention to detail is required when developing and documenting your procedures and policies for implementing security patches. Request a detailed support plan from each automation vendor and security software vendor, and review them with the goal of inclusion as part of any security patch management procedure or policy.

About Security Infrastructure Components


The security infrastructure comprises many components that support the supervisory and control system. Each component needs to be reviewed and defined by critical value and attack vulnerability. Policies and procedures must then be defined that provide for auditing and maintaining the security levels of each component. The components are listed below. Note Each component should also be reviewed for possible redundant configuration to improve availability and protect against the system becoming unavailable due to a single failure. For detailed information on Industrial Application Server redundancy configuration, see Chapter 3, "Implementing Redundancy."

Authenticators
System users could be actual operators and engineers, or other systems or services that run internally or externally to the Supervisory Control System. All known users must be accounted for and defined authentication methods and procedures should be developed to reduce the risk of unauthorized access to critical systems or protected information.

Security Policy Enforcement Components


Each device or software package that is deployed for security policy enforcement must be defined by enforcement type and its impact to the system on failure. These components include, but are not limited to, Firewalls, Routers, Switches, and Operating System Services. Any enforcement component that is defined as critical should be deployed in a redundant configuration if possible.

Firewalls, Routers, Switches


Firewalls, Routers, and Switches are an integral part of Supervisory and Control systems. Firewalls provide a way to isolate and control communication between segments of a network and between operational units. A detailed understanding of communication ports, IP addresses, and protocols needed for the Supervisory and Control system to function properly is critical for the success of the Security policy. By defining solid policies and procedures for firewall configuration, operation, and auditing, you can limit your communication to specific ports and IP addresses that allow only authorized communication between systems.


Additionally, defining solid policies and procedures for Router and Switch configuration ensures management of where information and access is permitted along with control over bandwidth. Optimal network utilization can then be achieved. Although firewalls, routers, and switches have overlapping capabilities, each device should be used for its base functionality: firewalls should be used to control communication types, routers should be used to forward communication by routing protocols along a proper route, and Switches should be used to manage bandwidth by controlling communication flow between ports and avoiding packet collisions.

Domain Controllers
The use of services such as Microsoft Active Directory provides management and enforcement of access security for users, groups, and organizational units. Not all software supports domain-level security. For example, some automation software will require local PC or even package- or AutomationObject-level security to be defined and implemented. Check the product documentation carefully before deployment.

Physical Networks
The basic building block of a supervisory and control system is the physical network itself. Special attention should be given to the design, selection of media, and installation of the network. A careful review of any installed network segment should be undertaken before extending or adding components. By making sure redundant paths and proper distances are observed, slow and unreliable communication can be avoided. All networks should be reviewed for live unsecured ports and exposed segments that could be tapped. With the complete network layout documented, recovery plans can be defined to improve system availability in the case of an accident that takes down part of the network.

Remote Access Devices


Policies and procedures should be developed to control the installation and use of modems for remote access. A very good alternative to allowing modem access is to implement Virtual Private Network (VPN) access. If a modem has to be used for remote access, a good rule is to require dial-back connections.


Wireless Access
Wireless technologies are quickly becoming part of Supervisory and Control Systems. Wireless security has a lot of underlying topics that should be discussed. The following topics should be taken into consideration when defining a wireless implementation:

Access from unwanted areas can be limited through the use of directional antennas.
Utilize more than the industry-standard WEP ("Wired Equivalent Privacy") protocol.
Use a solution based on 802.1X, Extensible Authentication Protocol (EAP), and Wi-Fi encryption.
Review implementation guides from your Wireless device vendor and from your operating system vendor.

Software
The software components of a supervisory and control system can have a large impact on the security of the overall system. When reviewing the security features of the software that will be deployed within a production facility, each component should be evaluated as an integrated part of the complete system. All software components should leverage the capabilities of the infrastructure and support configurations that meet the policies and procedures defined as needed to secure the system. By reviewing all software from a security standpoint, policies and procedures can be established to audit the system and maintain high levels of security.

Virus and Malicious Software Protection


With the many Host-based protection system options available on the market today, ensure when selecting this protection component that all Supervisory and Control system software is compatible and that the vendor provides timely updates. Host-based protection software should also provide protection against other malicious software such as Spyware, Malware, and Adware.

Intrusion Protection and Prevention


Intrusion protection and prevention has become a viable way of raising the security level within a TCP-IP LAN or WAN infrastructure. Intrusion Detection Systems monitor network traffic and generate alerts when malicious traffic or repeated password guessing is detected. These tools have been employed by IT departments for many years. Intrusion Prevention technology has become the preferred method to detect and alert when hacking or virus/worm attacks are present, as well as to block such attempts by managing firewall policy, Switch ports, and Router paths, and by trapping emails before damage can be done.


The implementation of an Intrusion detection or Prevention system on a Supervisory and Control network does include risk. The following list explains some considerations when evaluating their use in a particular environment:

The system should provide centralized reporting and management.
The system should provide multiple ways to deliver alerts.
Evaluate the supported level of signature-based identification of malicious or anomalous traffic.
Connection Flood (denial of service) controls should be included.
The system should support Alert-only mode for tuning.
The system should support the software and applications that you have installed or are going to deploy.
The system should enable creation of your own policies.
Evaluate supported bandwidth and connections.

Because Intrusion detection and prevention systems can present a risk to the functionality and operation of a Supervisory and Control System, a well-developed design with strong policies and procedures should accompany any implementation plan.

Operating Systems
Review the base operating system that hosts all of your Supervisory and Control applications for proper deployment, configuration, and security patches. The initial focus should be reviewing installed components and configured users. Microsoft provides detailed guidance for locking down your operating systems so that security threats can be managed and eliminated. By defining what Supervisory and Control software is to be deployed to a system, you can define the level of lock-down, and at the same time ensure full functionality of manufacturing applications.

Databases
Database applications such as Microsoft SQL Server have become a common component of all manufacturing systems. Because of the need to allow access to database information, and the need to update and append the information, you must be very deliberate in the approach to locking down a database. Provide a detailed mapping of users (people and services) which require access and define usable database security policies.
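As an illustration of this approach, the following T-SQL sketch (for the SQL Server versions current with this guide) maps a hypothetical Windows group to a read-only database role. The group, database, and table names are placeholders and must be replaced with the results of your own user mapping:

EXEC sp_grantlogin 'PLANT\LineOperators'        -- Windows group becomes a SQL Server login
USE ProcessDB                                   -- hypothetical production database
EXEC sp_grantdbaccess 'PLANT\LineOperators'     -- login becomes a database user
EXEC sp_addrole 'ReadOnlyProcess'               -- role that holds the limited permissions
EXEC sp_addrolemember 'ReadOnlyProcess', 'PLANT\LineOperators'
GRANT SELECT ON dbo.BatchSummary TO ReadOnlyProcess   -- read-only access to one table

Service accounts that must update or append data would be mapped to a separate role that is granted only the INSERT or UPDATE permissions they require.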


Securing FactorySuite A2 Systems


The ability to secure a FactorySuite A2 System is directly related to the infrastructure (Servers, Workstations, Ethernet Cables, Fiber Optics, Switches, Routers, Firewalls, etc.) as well as the hosting software (Operating systems, Virus protection, Intrusion protection, etc.). ArchestrA-enabled nodes use DCOM for their communication "backbone." This section is designed primarily for Process Control Engineers and IT Professionals who are familiar with standard IT practices, Windows Domain Engineering and Administration, and Process Control or SCADA Environment requirements. Content is derived from Wonderware testing documentation.

Security Considerations
The following section summarizes the security considerations within a production environment, and describes recommendations as applied to a Process Control Network (PCN) or SCADA System (WAN).

Secure Layers
Divide the system into secure "layers." In the security context, a layer can be defined as a division of a network model into multiple discrete layers, or levels, through which messages pass as they are prepared for transmission. All layers are separated by a router or smart switch device. A secure layer is further defined by the need to allow or restrict access and the criticality of the sub-system. An Intrusion detection system is deployed in higher-risk layers. The following figure is designed to show a representative topology; it is not intended to depict an actual Plant System topology. It includes the following named layers:

Corporate Network Infrastructure.
Process Control Network (PCN).
Remote Domain Network (adjunct to PCN).
Remote Legacy Workgroup (adjunct to PCN).


Note that all layers (represented by the main backbone) are separated by firewalls and routers:

[Figure: Representative secure-layer topology. The Corporate Network Infrastructure (CorpNet) hosts corporate IT functions and clients such as ActiveFactory (HTTP), SuiteVoyager (IE Browser), and Thin Client HMI/Development (Terminal Services) nodes. A firewall separates CorpNet from the Process Control Network (PCN), which contains the Domain Controller, Application Object Server, Historian Node, SuiteVoyager, Terminal Server, Galaxy Repository, Engineering Station, Visualization nodes, SCADAlarm, and PLC1. A Remote Domain Network (reached through a router, modem, and slow connection) and a Remote Legacy Workgroup (with failover Application Object Servers AOS1Site2/AOS2Site2, legacy visualization, and PLC2/PLC3) attach to the PCN through additional firewalls. All IP addresses (10.2.72.x) are assigned by DHCP reservation or at the Domain Controller, with per-layer gateways.]

FactorySuite Software Applications


Wonderware software applications have been tested in a wide variety of security-related implementations similar to the previous figure. The figure represents the widest range of product combinations, which were tested in scenarios involving limited or no DCOM connectivity, limited port ranges, narrow firewall settings, and highly routed environments. Certain Wonderware software applications, such as InTrack, InBatch, and InControl, utilize a high degree of connectivity to the Corporate ERP System and Process Control Enterprise, along with the associated distributed computational and remote services requirements. These applications could be adversely affected if unlimited DCOM connectivity were not present. The layers and port listings are detailed below.

Note For application-specific communication requirements, see the online or specific Deployment Guide documentation for that application.


Corporate Network Infrastructure Layer


The majority of communication between computers on a corporate (business) layer is accomplished using viewers, proxies, or interfaces like Microsoft Internet Explorer. These engines use HTTP or HTTPS (secure HTTP) protocols to transmit and receive data. This data can be secured, filtered, and carefully monitored. For the most part, only traffic with proper credentials or limited functionality is allowed to pass. RPC (Remote Procedure Call) traffic, required by DCOM, is rare between business nodes. Closing DCOM ports for added security at this level can be effective, since most desktop applications use few, if any, DCOM objects, and therefore do not require these ports to transport information.

Corporate Network Firewall Ports


The following table describes the firewall ports necessary for successful communication between business nodes. The following table references the previous figure:

Function                     Port
HTTP                         TCP 80
HTTPS                        TCP 443
Remote Desktop Connection    TCP 3389

Process Control Network (PCN) Layer


The Process Control Network (PCN) Layer contains the production nodes (Data Servers, AppServers, etc.) that process and store all production data. This layer requires that all nodes have unrestricted access to each other, in order to process the data in real-time. Data sources are represented in the previous figure by the Legacy and Remote Domain "sub-layers." These layers may also represent geographically distant sites which may be leased to other enterprises, and whose data is required by the parent company.


Process Control Network Firewall Ports


The following table describes the firewall ports necessary for successful communication between ArchestrA-enabled and production nodes:

Function                 Primary Connection        Secondary Connection   Remarks
CIFS                     TCP 445 Outbound          --                     File serving, deploying. From IDE to IAS.
NETBIOS Datagram         UDP 138 Send              --                     Name Service/Browsing. From IAS to Browse Master or from Browse Master to Domain Master Browser.
NETBIOS Name Service     UDP 137 Send/Receive      --                     Name Service/Browsing. From IAS to WINS Server or Browse Master or Domain Master Browser.
NETBIOS Session          TCP 139 Outbound          --                     Server Message Block (SMB). Used to implement Windows networking from IAS to the Domain Controller, if applicable.
NMXSVC                   TCP 5026 Outbound         --                     ArchestrA Communication Channel. Peer-to-peer, bi-directional between all ArchestrA-enabled nodes.
RPC DCE                  TCP 135 Outbound          --                     DCOM. Peer-to-peer, bi-directional between all ArchestrA-enabled nodes.
RPC Dynamic Port Range   TCP 6000-6050* Outbound   --                     *Custom range. Peer-to-peer, bi-directional between all ArchestrA-enabled nodes.
SQL Server               TCP 1433 Inbound          --                     SQL Server. From SQL Server to Client.
SQL Client               TCP 1433 Outbound         --                     SQL Client. From Client to SQL Server.
SQL Browser              UDP 1434 Send/Receive     --                     Only if implementing SQL Server instances.
SUITELINK                TCP 5413                  TCP 1024-65000         SuiteLink: InTouch, I/O Server communication. Between all ArchestrA-enabled nodes (see note below).
PING                     ICMP Protocol Type 8      --                     --
NTP                      UDP 123                   --                     Time Synchronization. From Client to Domain Controller(s) or time master.
DNS                      UDP 53, TCP 53            --                     Domain Name Service. From client to DNS Server.
LDAP                     TCP 389                   --                     Active Directory Domain, from client to Domain Controller(s).
KERBEROS                 TCP 88                    --                     Authentication.

Note SuiteLink establishes a secondary connection in the port range shown above. Stateful packet inspection firewalls handle this operation automatically; if you must configure the ports manually, be aware of the secondary connection port range. For more information on configuring port ranges, see Microsoft articles Q154596, Q300083, and Q250367.
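As an illustration only, on nodes that run the built-in Windows Firewall (Windows XP SP2 or Windows Server 2003 SP1), individual ports from the preceding tables could be opened with commands similar to the following. The port numbers come from the table, the rule names are arbitrary, and hardware firewalls and routers between layers use their own vendor-specific configuration, so treat this as a sketch rather than a tested security profile:

netsh firewall add portopening protocol=TCP port=5026 name="ArchestrA NMX" mode=ENABLE scope=SUBNET
netsh firewall add portopening protocol=TCP port=5413 name="SuiteLink" mode=ENABLE scope=SUBNET
netsh firewall add portopening protocol=TCP port=135 name="RPC DCE" mode=ENABLE scope=SUBNET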

Securing Visualization
Users with different roles require different user interface experiences. Typical interface experiences include window-to-window navigation, data visibility on a specific window, and restrictions on visible actions. Each of these is easily achieved with animation links in InTouch Software. The animation links (typically) test the InTouch Software system tag $AccessLevel. While this implementation works, it provides a very linear security model.

Industrial Application Server roles offer more flexibility and can be leveraged from InTouch Software by using the IsAssignedRole("RoleName") script function. When executed, this function determines whether the currently logged-in user is assigned to the role that was entered into the script call. This function allows the InTouch Software application to access the role-based security of the ArchestrA IDE. To implement this, add a Data Change script to InTouch Software that executes any time the InTouch system tag $Operator changes. For example, the following script could be called when the $Operator tag changes:
AdministrativeAccess = IsAssignedRole("Administrator");
SetpointAccess = IsAssignedRole("Engineer");
ManualAccess = IsAssignedRole("Operator");

In this example, AdministrativeAccess, SetpointAccess, and ManualAccess are Discrete InTouch Software memory tags. Users can possess multiple roles and more than one of these discrete tags could be set. You can animate the InTouch Software application by using these tags in the expression statements of the animation links.
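For example, the expressions of individual animation links can then test these memory tags directly. A minimal sketch, assuming a hypothetical manual-control pushbutton and setpoint entry field (the tag names are the ones created by the script above):

Disable animation link (manual-control pushbutton):    NOT ManualAccess
Visibility animation link (setpoint entry field):      SetpointAccess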


This implementation has the following advantages:

First, the scripts execute only when the user changes. Instead of running the same script for every animation, it only runs as needed, which improves overall application performance. Screen draw times are also improved, since it is not necessary to evaluate the user rights for each associated animation.

The second advantage is in maintenance. Because the script appears in one location, there is only one place to go to make required changes. If, in the previous example, Manual Control were no longer allowed for Operators, and this permission were instead only given to Engineers, this could be achieved with a simple change. You would change the third line of the script to read:
ManualAccess = IsAssignedRole("Engineer");

This change would only be made in the $Operator Data Change script, instead of every Manual Control animation.

OS Group Based Security Mode Notes


The OS Group Based security mode enables user authorization based on OS Groups; in other words, this mode leverages the operating system's user authentication system on a Group basis. This means that the user is a member of a particular group and has certain permissions within the context of that group. Two settings are available in this mode: Login Time and Role Update. The default value for the Login Time setting is 1000 ms; the user will experience a 1-second delay while the system validates the login permissions. The default value for the Role Update setting is 0 ms, which means the system does not pause between validating user membership and groups. This setting is independent of the Login Time.

System Considerations
The first time a user logs on to a system (using InTouch Software, for example), and the OS Group Based mode is set, the login is validated at a Domain Controller. After the login is validated, a cache is created on the local machine and propagated to other nodes in the system. The user then has specific permissions to interact with the system (operator, administrator, etc.) on any node. This scenario has several implications:

The first time a user logs on to the system, they may experience delays while the system validates their permissions and creates the cache. This is especially relevant if the system includes a large number of Groups, and/or network nodes. This delay may be exacerbated by widely-distributed networks (see the last bullet).


Subsequent logins in the system use the (local) cache created at the previous login. This means that if login permissions are modified, the user can still log on, but uses the "old" cache until the update occurs. This update operation takes place "under the hood" and does not prevent the user from logging in with the old permissions.
If the Login Time is set to 0, the system validates permissions and creates a new cache at each login.
When the security mode has a large number of groups, and the system is widely-distributed (SCADA) with slow or intermittent network components, lengthy login delays may occur.

To mitigate login time delays

Provide additional Domain Controllers on "this" side of potential network bottlenecks.
Ensure the Login Time and Role Update settings are set correctly for the local environment. For example, setting the Login Time to 10,000 ms means that the user cannot interact with the system for 10 seconds, regardless of the use of the validation cache. In this case, 1000 ms (default) is usually acceptable.

Securing the Configuration Environment


The ArchestrA IDE extends the security models of traditional industrial automation applications to incorporate the configuration actions as well as the run-time actions. When a Galaxy is created, the administrator can authorize only those who need configuration permissions. Additionally, the administrator can limit an IDE user's permissions.

This capability can be used in an efficient way to manage a team of engineers who will be building an application together. For example, if a senior engineer is the person responsible for creating the project standards, a role could be created that has the capability of editing the templates. Another role could be created that does not have the permission to edit the templates, but can use the templates to build the application. A junior engineer would be given the latter role.

Each object maintains an audit trail of user actions performed against that object. This helps to track the versions being used and the progress of building the application, as well as provides a record of changes. This audit trail can be viewed in the IDE by accessing an object's properties.


Distributed COM (DCOM)


DCOM enables communication between objects on different computers on a LAN, a WAN, or the Internet. It uses most transport protocols (TCP/IP, UDP, etc.).

Limiting the DCOM Port Range


Closing or limiting DCOM port ranges is a common IT recommendation and practice at the Process Control Network layer. The intent in closing or limiting port ranges is to make them "obscure" to attackers and to various worms and malware by hiding which ports actually are open.

This simple solution is a perfectly appropriate and highly effective security practice in some select cases, where a specific process control environment requires a specific number and type of highly-screened agents to interact with the system. For example, public utility production plants (such as water and sewage treatment plants) often have relatively small staffs, who have limited interaction with a relatively slow-changing process control environment. The environment may also include out-of-spec process alarming and historization of data and interaction. The result is a secured environment that is extremely long-lived, relatively static, and highly effective.

Note Such systems are usually designed to meet exacting specifications at the beginning of the project, and are reviewed/updated to current standards when major engineering changes are integrated into the system.

However, in most production environments, closing DCOM ports may limit or stop necessary communication and telemetry needed for the parallel computing environment within a Process Control Network or SCADA System. Such limited communication has the potential to create conditions within the plant that can cause intermittent or permanent loss of process data, unexpected operation, or loss of control of the environment, resulting in dangerous or deadly conditions.

Further, the application of certain registry settings and the creation of specific ActiveDirectory rules and secure zones can create conditions within specific operating systems which WILL require reinstallation of the Operating System in order to successfully roll back a previously undefined key, policy, or setting to its previous state of non-definition. Establishing definitions for undefined registry values may render an Operating System unusable in a Process Control Environment. The only correction is to reinstall the operating system and reset the software benchmark for the affected machine(s).

The majority of port numbers between 1024 and 49151 are assigned to various individuals and organizations. A certain percentage of these ports will not be encountered on a Windows Operating System. However, certain third-party vendor applications (in addition to Wonderware software) may be affected by limiting access to those ports using dcomcnfg.exe or specific registry entries in an untested and non-QA'd security profile.


In most production environments, blocking ports will completely break the functionality, intelligence, and operational ability of the distributed computing environment.

Security Recommendations Summary


Modern industrial automation requires direct interaction between machines performing specific functionality within an intelligent enterprise. Limiting the communications between production nodes across an enterprise WILL ALSO limit the functionality and performance of the enterprise and, in some cases, make the enterprise unsupportable. Extreme care should be exercised when modifying any communication protocols, communication channels and ports, or operating system processes and services within a Process Control Environment.

Proposed changes to the Process Control Environment should be tested and modeled on a Shadow System or plant model system before being implemented within the operating PCN or SCADA System. For more in-depth information regarding designing secure Process Control environments, refer to specific Wonderware product deployment guides, Wonderware Training Seminars, and Wonderware Enterprise Security Guidance documentation.

Note For more information regarding DCOM, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndcom/html/msdn_dcomtec.asp. For information on assigning a specific/dynamic port range, and securing the RPC ports using IPsec, see http://support.microsoft.com/?kbid=908472&SD=tech.


C H A P T E R

8

Historizing Data

Data storage in the FactorySuite A2 System environment is handled by IndustrialSQL Server Historian. The individual historized data points are defined as attributes of ArchestrA ApplicationObjects. This chapter describes how the Historization component can be applied in the FactorySuite A2 System environment.

Note IAS and IndustrialSQL Server have specific version compatibility requirements. Check the FactorySuite A2 Compatibility Matrix on the FactorySuite support website http://www.wonderware.com/support/ for detailed compatibility information.

Contents

General Considerations
Area and Data Storage Relocation
Non-Historian Data Storage Considerations


General Considerations
When designing a FactorySuite A2 System-based industrial automation application, consider the following data storage concerns:

Data Point Volumes: The volume is the number of points in the application that will be historized. In IAS, data points are object attributes that have history enabled.
Data Storage Rate: At what rate will the data be changing and how quickly will that data need to be stored?
Data Loss Prevention: What are the possible scenarios that would result in a loss of data?
Storing System Data: In a system with multiple historians, how does changing system topology affect the location of stored data?
Client Locations: As you plan the network topology for your application, you should consider the location of history clients.
User Account: The user account under which services run must be the same for all applications. Also, if you specify a local computer user, then the Historian Node must be in the same network domain or workgroup as the AutomationObject Server node.

Data Point Volumes


The data storage rate of a single historized attribute (i.e., point) is a function of the scan period of its hosting AppEngine and its rate of change. The attribute can be stored no faster than the AppEngine scan rate, and will only be sent to the historian if it changes. For analog attributes, a deadband can also be configured to prevent storage of insignificantly small changes. Finally, attributes can be configured via the force storage period to ensure that slow-changing or non-changing attributes are always stored at a minimum interval (such as once per hour).

The volume of data points that must be stored is a determining factor in deciding how many historians will be required in an application. Most applications will only require one historian; very large applications may require multiple historians. IndustrialSQL Server Historian has been rigorously tested and can handle the historization of up to 150,000 individual points. If your requirements exceed this number of points, you should plan to incorporate multiple historians in your application topology.


Data Storage Rate


When configuring a data point for storage, it is possible to set a rate at which that point is stored. The limiting factor on this storage rate is the scan rate of the AppEngine: the data can be stored no faster than the AppEngine scan rate, and data storage cannot occur between AppEngine scans. The data storage rate should always be a whole multiple of the scan rate of the AppEngine that hosts the associated object. For example, with a 1000 ms AppEngine scan rate, valid storage rates are 1000 ms, 2000 ms, 3000 ms, and so on.

Data Loss Prevention


The ArchestrA framework protects historized data from loss. This is done by storing the data locally to disk when connection is lost to the Historian. This operation is called "Store Forward." Each AppEngine and Platform is capable of storing all of the associated acquired historical information to disk when a connection to the historian cannot be established. Once the connection is reestablished, the data will be sent to the historian. It is important to ensure that the AppEngine responsible for sending the data to the historian is not interrupted. The following is a list of precautions to prevent data loss:

Separate the Galaxy Repository from running AppEngines. As applications grow large, the configuration changes to the application become CPU-intensive. It is possible for the configuration services on a computer to starve the AppEngine of the CPU cycles required to perform data storage. To prevent this, place the Galaxy Repository on a different computer from running AppEngines.
Do not deploy to running AppEngines. If an AppEngine is running and gathering information for the historian, deploying additional objects to the AppEngine will cause a momentary interruption of the AppEngine execution. During that time, incoming data changes may be missed. This effect can be prevented by only deploying new objects during non-critical data storage periods. Deploying large numbers of objects can be particularly impactful.

Be sure that DCOM is enabled for both the MDAS and IndustrialSQL Server Historian computers and that TCP/UDP port 135 is accessible. The port may not be accessible if DCOM has been disabled on either of the computers or if there is a router between the two computers that is blocking the port. For more information, see "Process Control Network Firewall Ports" on page 203.

Area and Data Storage Relocation


The System topology will evolve throughout the application's life cycle. As this evolution takes place, you must determine the data storage implications. The ArchestrA framework was built with flexibility as a target goal. It is very easy to move an entire Area of objects from one AppEngine to another. But it is important to remember that the AppEngine defines where the historical data for hosted objects will be sent. If using multiple Historians, and the topology is modified, there may be an impact on the location of stored data.


When designing the system topology, note that the locations of the historical data clients may impact the end design. This is particularly true when portions of the application are separated by low bandwidth or intermittent network connectivity. Client applications should not be required to access the historian over these poor connections. One solution to this is to have local historians that service the computers that are locally situated with good network connections.

Non-Historian Data Storage Considerations


As a system is put into service, it is normal to maintain the Historian node to ensure enough space is available for continued data storage. This is a requirement for any historian. However, the IndustrialSQL Server Historian is not the only storage mechanism that is used in a FactorySuite A2 System. Nodes other than the Historian Node are capable of storing large amounts of information, so it is important to assess the impact of the following settings on data storage:

Alarm Buffer Size: If the network connection to the alarm database is lost, the alarms will begin to be stored in a local buffer. This buffer is a direct reflection of the page file size. An average alarm record is 1400 bytes of data. If the buffer fills up, storage will stop. However, a 10 MB page file can store over 3500 alarms, so using the proper precaution can easily prevent an issue.
Log Viewer Event Storage: By default, the Log Viewer event storage mechanism (which is installed on all computers) is set to use a maximum of 5 GB of storage. You can adjust this value. The Log Viewer event storage must be considered in the total disk space requirements.
Store-and-Forward Deletion Threshold: If the network connection to the historian (IndustrialSQL Server Historian) is lost, the historical data will begin to be stored in a local directory. The default circular deletion threshold is 100 MB. You should consider your requirements for this setting, and adjust it, if necessary, in the WinPlatform or the AppEngine object configuration.

When evaluating system configuration, it is worth spending a little time up front to consider disk space availability.


C H A P T E R

9

Implementing Alarms and Events

The Alarm and Event subsystem consists of both Alarm Consumers and Alarm Providers. When determining the topology for your application, be aware of how alarm and event messages are processed within the system and how different configurations can affect system performance.

Note The event messages produced by Alarm Providers are not the same as events generated by the IndustrialSQL Server Historian system.

Contents

General Considerations
Configuring Alarm Queries
Determining the Alarm Topology
Logging Historical Alarms


General Considerations
The Platform object serves as an Alarm Provider for all IDE objects and is the primary Alarm Provider in a FactorySuite A2 System. The Platform is capable of providing any alarm in the Galaxy; that is, the Platform is not limited to the alarms generated by the objects it is hosting.

The network load can be affected by which Platforms are set as the Alarm Providers. By default, when a Platform is configured as an Alarm Provider, it will automatically subscribe to all alarms in the Galaxy. This means that any time a new alarm occurs, it will be sent to all of the Platforms that have been configured as an Alarm Provider. You can override this by configuring the Platform to be only an Alarm Provider for a set of Areas that you designate.

The alarm consumers provided with FactorySuite A2 System are the InTouch Alarm Viewer ActiveX Control and the InTouch Alarm DB Logger Manager utility. These consumers can be configured to query alarms from a local Platform or from a remote Platform. By leveraging this flexibility, you can minimize the network load imposed by alarm distribution.

Note The InTouch Software alarm clients used to show summary alarms only query for alarm information when they are visible on the screen.

Configuring Alarm Queries


For an Alarm Consumer to obtain the alarms from an Alarm Provider, it must query the Alarm provider. A typical alarm query is configured as follows:
\\ProviderNodeName\Provider!AlarmGroup

For a FactorySuite A2 System-based application that uses the Industrial Application Server, these translate as follows:

ProviderNodeName - This is the host name of the node where the Alarm Provider resides.
Provider - This is the word "Galaxy." There can only be one Platform per computer, and this keyword represents the Platform Alarm Provider.
AlarmGroup - The Area objects in the IDE serve as the alarm groups. When building the application in the Model View of the IDE, you can place the Areas within each other. If an Area named "Tanks" hosts another Area named "Clearwell," then subscribing to the alarms in "Tanks" will automatically include the alarms in "Clearwell."
Multiple Queries - An Alarm Consumer query can be used to query multiple Alarm Providers by adding a space between the individual query strings (see the example following this list).
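For example, the following query string (the node names AOS1 and AOS2 and the Area name Intake are hypothetical) retrieves alarms for the Tanks Area, and therefore also its Clearwell sub-Area, from the Platform on node AOS1, and alarms for the Intake Area from the Platform on node AOS2:

\\AOS1\Galaxy!Tanks \\AOS2\Galaxy!Intake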


Determining the Alarm Topology


When you are determining the alarm topology, it is important to take into consideration the overall topology of the system. Alarming can be implemented using Distributed Local Network or Client/Server topologies.

Best Practice
Be sure that parent alarm areas are on the same node as their subareas. For more information on Distributed Local Network or Client/Server topologies, see Topology Categories in Chapter 2, "Identifying Topology Requirements."

Alarming in a Distributed Local Network Topology


In a distributed local network topology, the nodes that serve data (AutomationObject Servers) are not separated from the clients that consume data (Visualization nodes). Each node hosts both components locally. A Platform is deployed to each Workstation node (which is a combination of AutomationObject Server and Visualization node). It is more likely that every Platform in this topology is configured as an Alarm Provider, and each of the Alarm Consumers queries the local Platforms for alarms. Consider the scope of interest of each Platform. When configured as an Alarm Provider, the Platform requests all alarms in the Galaxy by default. If a workstation does not need to view all of the alarms in the Galaxy, the Platform on that computer should be configured to only subscribe to alarms that are within the scope of interest.

Best Practice
The following list summarizes the key points in setting up an optimized alarm distribution system in a Distributed Local Network topology:

All of the Platforms on workstations will be Alarm Providers.
If operators at a workstation will need to view all alarms in an application, use the default scope for the Platform Alarm Provider on that node, which is to subscribe to all alarms in the Galaxy.
If operators at a workstation do not need to view all alarms in the Galaxy, configure the Platform Alarm Provider scope of that node to subscribe only to alarms that are of interest to the operators at that node.


Alarming in a Client/Server Topology


The client/server topology separates nodes that serve data (AutomationObject Server nodes) from the clients that consume data (Visualization nodes). There are more clients than servers. A Platform object must be deployed on each client and each server. One or more of the Platforms on the AutomationObject Servers should be set as an Alarm Provider, and each Alarm Consumer should query one of the AutomationObject Server Platforms directly. This deployment minimizes network traffic by channeling the alarm traffic to specific Alarm Providers.

If the Platforms on the Visualization node(s) are set as Alarm Providers, each of those Platforms requests all alarms continuously, loading the network with unnecessary traffic. While a Platform can be configured to subscribe only to alarms of particular Areas, the Platform still requests the alarms for the configured Areas on a continual basis. By configuring the AutomationObject Server Platforms as Alarm Providers, only one node requests alarm updates. The Visualization node(s) only request alarms when a window containing the alarm display is active. Alarm consumers only request the alarms that are required to satisfy the alarm query.

As stated previously, each Platform is capable of providing all alarms in the Galaxy. However, if all of the consumers are using a single Platform as the sole Alarm Provider in the Galaxy, there is a single point of failure for all Alarm Consumers. Also, the single Platform would constantly be receiving the alarms from all of the other AutomationObject Servers, which would cause a heavy traffic load on the network. If your client/server architecture consists of more than one AutomationObject Server node, take the following measures to ensure the highest availability of alarm information to the Alarm Consumers:

1. Configure the Alarm Consumer queries to query each AutomationObject Server Platform for the Areas that are hosted on that Platform.
2. Configure the AutomationObject Server Platform Alarm Providers to provide only alarms for the Areas hosted by that Platform.

These two configurations lower network traffic between AutomationObject Servers due to alarm distribution and ensure that no one AutomationObject Server is a single point of failure for alarm delivery to the consumers on the Visualization nodes.


Best Practice
The following list summarizes the key points for setting up an optimized alarm distribution system in a client/server architecture. The list also applies to a Widely-distributed SCADA system environment:

The Platforms on the Visualization nodes should not be Alarm Providers.
The Alarm Consumers on the Visualization nodes should query each AutomationObject Server individually for the Areas hosted by that Platform.
The AutomationObject Server Platform Alarm Providers should be configured to only be providers for the Areas that are hosted by that Platform.

Note For information on Alarm configuration in a widely-distributed SCADA environment, see "Alarms" on page 262.

Logging Historical Alarms


The InTouch Alarm Logger is an Alarm Consumer, and the Industrial Application Server Platform component is an Alarm Provider. The InTouch Alarm Logger is a component that stores the alarms it receives in either a Microsoft SQL Server database or into the Microsoft Data Engine (MSDE). The Alarm Logger can either be local or remote to the Platform that is the Alarm Provider.

Best Practice
In a typical FactorySuite A2 System, IndustrialSQL Server Historian will be used to store all time-series data. It is recommended that you install IndustrialSQL Server Historian on a dedicated node. For consolidation purposes, the best practice is to store the alarm history in another database on the same node as IndustrialSQL Server Historian. The location of the InTouch Alarm DB Logger utility may greatly affect data loss prevention. The Alarm Logger utility automatically buffers the alarms it receives until they are successfully stored in the target database. Installing the Alarm Logger on the AutomationObject Server node ensures that the alarms are not lost if the connection to the Historian Node is lost. To eliminate a single point of failure, install the Alarm DB Logger Manager utility on all AutomationObject Server nodes on which alarms are generated. The alarm query configured in the Alarm DB Logger Manager utility will then retrieve alarms that are generated only by objects hosted by the local platform. This practice ensures that no network connection will be required to deliver the alarm to the Alarm Logger and prevents the loss of alarm records due to network instability.


C H A P T E R

1 0

Assessing System Size and Performance

This chapter provides sizing and performance data for FactorySuite A2 Systems. The data is based on extensive product testing and includes a predefined set of computer and application specifications. The specifications and results are intended as references. This chapter also describes basic testing topologies, including single-node implementation.

Note The data contained in this chapter is provided as general guidelines for planning and sizing your system implementation and is not intended to be an exact predictor or specific guarantee of FactorySuite A2 System performance.

Contents

System Disk Space and RAM Use
Predicting System Performance
Performance Data
Failover Performance
Load Shared with Remote I/O Data Source
DIObject Performance Notes
OPC Client Performance


System Disk Space and RAM Use


The following information describes the effects of typical configuration and deployment tasks on disk space and RAM. Some of these tasks include object creation, deployment, and template derivation. The data provided in this section are examples; your system will produce variations from these numbers. Use these numbers for evaluation purposes and always allow for adequate spare disk and RAM capacity.

Galaxy Repository Performance Considerations


The ArchestrA database (Galaxy Repository) leverages Microsoft SQL Server functionality. SQL Server enables considerable discretion for configuring database growth and memory use thresholds.

Database Growth
Database growth can be set to either a fixed size, or to Auto Increment. The default setting (Auto Increment) is preferred unless:

Specific IT requirements and dedicated GR management resources are in place in a specific production environment.
FactorySuite A2 System size is known with a high degree of confidence.

The Auto Increment setting is effective for the GR Node because of the dynamic nature of the GR; in other words, the GR is not a "static" database that stores raw production values. An example of a "static" database is the ProdDB used by the PEM objects. The ProdDB is effectively managed using a fixed-size strategy because its size can be more easily quantified and administered. The ProdDB database is also installed on the Historian node, distributing SQL Server processing needs. SQL Server provides effective growth administration engines and should not degrade system performance when dynamically growing the database, even when the AutomationObject Server is deployed on the same node as the GR.


RAM Allocation
In practice, MS SQL Server tends to monopolize and consume any RAM that is available at any point in time. SQL Server enables setting a limit (or cap) on RAM allocation so that other system resources can function without limitations. This cap can be configured either as a dynamic or fixed setting. When the RAM allocation is known, select the Use a fixed memory size option to limit RAM use to approximately half of the machine's available RAM. For example, if a machine has 1 GB of RAM, set the SQL Server cap to 500 MB. Setting a fixed memory size ensures the memory is always available.

Note For recommendations on Dynamic SQL Server memory allocation, see "Bulk Operations Considerations" on page 232.

To set a fixed memory size
1. Start SQL Enterprise Manager.
2. Right-click the Server icon and select Properties from the submenu.
3. Select the Memory tab.
4. Select the Use a fixed memory size (MB) option.
5. Use the slider control to set the memory size.
6. Click OK to close the Properties dialog box.
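If you prefer to script this setting from Query Analyzer instead of using Enterprise Manager, an approximate equivalent is to set the advanced min server memory and max server memory options to the same value. The sketch below assumes the 500 MB cap from the example above; adjust the value for your hardware:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory', 500   -- value in MB
EXEC sp_configure 'max server memory', 500   -- value in MB
RECONFIGURE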


Auto Increment Data


The following table illustrates the increased disk space used by the default Microsoft SQL Server Auto Increment setting, based on the starting size of a typical Galaxy Repository database:

Auto Increment Catalyst                         Sizing                                     MB         Percent Increase
Empty Galaxy Repository database size           --                                         6.6 MB     --
Derive "X" number of templates or instances.    Auto-incremented size at reporting time.   7.1 MB     +7.5%
Derive "X" more templates or instances.         Auto-incremented size at reporting time.   7.7 MB     +7.8%
Incremental Growth Events                       ....                                       ....       ....
Derive "X" more templates or instances.         Auto-incremented size at reporting time.   11.44 MB   +8.3%
Derive "X" more templates or instances.         Auto-incremented size at reporting time.   12.44 MB   +8.7%
Summary Increment Range                         ....                                       ....       +7 to 11%

The default Auto Increment value is 10%. This means that the occupied disk space does not grow with each individual ArchestrA object template and/or object instance added to the Galaxy Repository. Instead, the disk space will grow by the specified percentage when SQL Server determines that adding space is necessary. In practice, the growth increments for occupied disk space vary from 7% to 11% of the Galaxy database size.

Note The size of the increase does not correlate to a single object instance.

It is prudent to leave enough disk space available for the planned capacity of the ArchestrA Galaxy Repository. Always allow for 25-50% of projected/future capacity (unless the system is quite well-defined), then add 10% to allow for Microsoft SQL Server to automatically increment beyond that threshold, if and when the threshold is reached.

Predicting Disk Space and RAM Requirements at Configuration Time


The following information describes disk space and RAM usage in various configuration environment scenarios.

Initial Application Installation Data


Disk space and RAM consumption are somewhat predictable for a fresh installation of Microsoft SQL Server, the ArchestrA Bootstrap, the Galaxy Repository, the IDE, and the SMC.


All disk space and RAM capacity in the following table are approximate:

Component                                              Disk Space      RAM
Microsoft SQL Server SP3                               55 MB on disk   22 MB RAM
.NET 1.0 Framework                                     36 MB on disk   0.6 MB RAM *
ArchestrA Bootstrap (including the .NET 1.1 update)    19 MB on disk   7 MB RAM
ArchestrA Galaxy Repository, IDE, and SMC              32 MB on disk   2 MB RAM
Creating the empty Galaxy and running the IDE          70 MB on disk   63 MB RAM
Logs for SMC, starting up SMC                          3 MB on disk    7 MB RAM

* Version 1.0 of ArchestrA installed .NET 1.0 Framework; newer versions update to 1.1 Framework or later.

Deployment of the Galaxy's Platform on the Galaxy Repository node, plus the AppEngine object and an Area object, occupies disk space and consumes RAM. All disk space and RAM capacity values in the following table are approximate:

Task                    Disk Space          RAM
Create GR Platform      3 MB on disk        28 MB RAM
Create GR AppEngine     (small increment)   (small increment)
Create GR first Area    (small increment)   (small increment)
Deploy GR Platform      16 MB on disk       31 MB RAM
Deploy GR AppEngine     (small increment)   (small increment)
Deploy GR first Area    (small increment)   6 MB RAM

Creating Simple Object Templates


RAM utilization on the ArchestrA Galaxy Repository IDE node is negligibly affected by the creation of simple object templates. However, certain aggregated, hierarchical Application Objects will immediately consume RAM. Application Objects that contain multiple child objects (such as the AnalogDevice object, DiscreteDevice object, and FieldReference object) consume RAM in the IDE node upon creation of a containing template object.


Creating Derived Templates


Creating individual derived template objects from base template objects has negligible immediate effect upon disk space and RAM consumption. However, large numbers of derived template objects increase the Galaxy Repository size. The database occasionally auto-increments by a 7% to 11% growth factor, as opposed to incrementing when individual objects are created. You can estimate the expected disk space utilization for each type of derived template object by first determining a size factor for each. Then, multiply the size factor by the planned number of objects of that type. To determine the size factor, create 100 template objects of the particular type and observe the database size impact for the Galaxy Repository. To analyze the impact of this action, it is not sufficient for you to simply look at the disk size property for the disk drive that contains the Microsoft SQL Server database. A good approach for determining the incremental size of the Galaxy Repository database is to use Microsoft Query Analyzer and run the sp_spaceused stored procedure against the Galaxy Repository database. The GR database is visible with the name that you gave the Galaxy when you created it. The sp_spaceused stored procedure returns three columns: database_name, database_size, and unallocated space:
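For example, from Query Analyzer (the database name MyGalaxy below is a placeholder for the name you gave your Galaxy):

USE MyGalaxy
EXEC sp_spaceused

Run the procedure before and after creating the 100 template objects and compare the reported database_size and unallocated space values.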

The following table contains an example of calculating the increase in Galaxy Repository database size after creating or importing a specific number of object templates:

Galaxy State                                                                    Database Size   Unallocated Space   Actual Space Occupied
Before addition of derived object templates                                     11.44 MB        0.28 MB             11.14 MB
After addition of 100 derived object templates of type XXX                      12.44 MB        0.36 MB             12.08 MB
Database size increase by addition of 100 derived object templates of type XXX  --              --                  0.94 MB
Calculation of approximate size factor for derived object template of type XXX  --              --                  0.94 MB / 100 = 0.0094 MB


The size factor for different derived object templates varies due to differing selections of UDA attributes and differing attribute extensions.

Derived Object Templates and QuickScripts


If you include a QuickScript as an extension of any derived object template, this will require additional Galaxy Repository database space. The following is an example of a calculation of a size factor for the inclusion of a QuickScript. As the QuickScript in any particular base derived template will be unique, the resulting size factor may need to be calculated for each case.

Galaxy State                                                                                                Database Size   Unallocated Space   Actual Space Occupied
Before addition of a QuickScript                                                                            12.44 MB        0.36 MB             12.08 MB
After addition of a QuickScript function with 100 lines of code to a derived object template of type XXX   12.44 MB        0.06 MB             12.28 MB
Database size increase for the QuickScript with 100 lines of code                                          --              --                  0.20 MB

Creating Object Instances


Creating instances of base and derived Application Objects (instantiation) has a negligible individual effect on the Galaxy Repository database size. However, large numbers of instances will occupy additional space. To determine the size factors for object instances, use the method described in Deploying Object Instances (following section). Instances that contain inherited QuickScripts will not use additional disk space in the database; however, the addition of QuickScripts to individual instances of objects will occupy additional disk space.

Predicting Disk Space and RAM Requirements at Run-Time


The following section contains guidelines for predicting run-time disk space and RAM requirements.

Deploying Object Instances


Deploying an object instance does not affect the size of the Galaxy Repository database. However, deployment impacts disk space and RAM use on the target Platform. Deploying object instances to an AppEngine impacts RAM usage. The initial deployment has the largest impact on RAM, since deployment involves loading the associated run-time software on the Platform node (local or remote). Subsequent object instance deployments (based on common templates) use much less RAM.


The following table describes disk space and RAM usage when creating and deploying an aggregated Application Object instance containing multiple child objects. The objects are deployed on the local node:

Task                                                                                                                            Disk Space            RAM
Create master object containing 10 objects, each with 500 UDAs (total of 5,000 UDAs) and each containing 100 lines of inherited QuickScript   ~ 4.5 MB disk space   ~ 7.7 MB RAM
Deploy master object containing 10 objects, each with 500 UDAs (total of 5,000 UDAs) and each containing 100 lines of inherited QuickScript   ~ 17 MB disk space    ~ 2.5 MB RAM

Some RAM on the local (target) node is used by the IDE; the remaining RAM is used by the Platform and AppEngine.

Deploying to an XP Platform
In a production environment, AutomationObjects may be deployed to an AppEngine on the XP platform. Estimating disk space on XP is problematic because of the System Restore feature. System Restore provides protection from inadvertent deletion of required system files. The end result is similar to a Backup/Restore operation on a Server operating system: the user can restore the machine to a certain point in time. However, the System Restore feature maintains extra copies of files on the local machine's disk drive, and provides limited administrative options. File copies are made at random times determined by the operating system. The deployment operation triggers a System Restore operation, but the restore point may not include all necessary executable files. Default System Restore disk space settings are as follows:

For drives greater than 4 GB, System Restore uses up to 12% of the disk space.
For drives less than 4 GB, System Restore by default only uses up to 400 MB of disk space.

The data store size is not a reserved space on the disk, and the maximum size (up to the maximum values defined above) is limited at any time by the amount of free space available on disk. Therefore, if disk space use encroaches on the data store size, System Restore always yields its data store space to the system. For example, if the data store size is configured to 500 MB, of which 200 MB is already used, and the current free hard-disk space is only 150 MB, the effective size of the data store is 350 MB (200 + 150), not 500 MB. Note that disk space usage can be adjusted at any time.

Note System Restore is accessed via the System utility in Control Panel.


System Restore is enabled by default. DO NOT turn off the System Restore feature on an operational ArchestrA Platform computer. Instead, assess disk space usage using an isolated test computer on an isolated Galaxy where System Restore is disabled.

Predicting System Performance


This section contains specifications and data that can be used to assess and quantify system characteristics. The data is derived from various test case results and is intended to provide a loose template for general system performance predictions. Compare this data to existing system performance data, then factor in the different FactorySuite A2 System applications' guidelines to make the best decisions for your deployment/integration project. The key items used to assess FactorySuite A2 System performance are:

Unit Application Definition: Defines the size of the system in terms of object types, I/O points, alarm attributes, change rates, etc.
Hardware Specifications: Baseline computer specifications.
Baseline Server CPU Values: Describes Server performance (CPU Usage %) with specific numbers of instances and I/O points.
Performance Data: Describes the network of PCs and PLCs.

The bulleted items are explained in the following sections.

Unit Application Definition


A Unit Application is a unit of measure used to describe the size of actual customer application scenarios. Each Unit Application contains a total of 500 objects according to object type and other characteristics. The Unit Application concept is useful because it can be used to quantify system elements. For example, "Application Topology A contains 2 Unit Applications running on each of PC1 and PC2 and 1 Unit Application running on PC2."


The Unit Application is defined by the objects it contains (see the following table):

Object Type                                      Object Count   I/O Pts.   Alarm Attrs.   Hist. Attrs.   I/O Change Rate   Monitor Rate   Avg. Alrm. Rate
Discrete Device                                  200            3          2              1              10/min            0.5 sec        10/hr
Analog Reg. w/Input                              200            1          4              2              1/sec             0.5 sec        30/hr
Analog Reg. w/Output                             50             1          1              1              10/min            1 sec          10/hr
Calculation (synchronous script)                 48             0          2              1              1/sec             1 sec          30/hr
Calculation (asynchronous script) once/10 secs   2              0          2              1              6/min             1 sec          30/hr
Areas/SubAreas                                   1/5            0          0              0              0                 0              0
I/O Networks                                     1              0          0              0              0                 0              0
I/O Devices                                      1              -          0              0              -                 -              -

A Unit Application alone does not specify performance: it is a measurement of scale (size), not of rate, and should be used with other data to extrapolate and predict system performance.

Object Type describes the type of object Template, such as Discrete Device, that was built using the Application Object Toolkit.
Object Count describes the number of object instances for each template.
I/O Pts. describes the average number of configured I/O points per object.
Alarm Attrs. describes the average number of configured alarms per object.
History Attrs. describes the average number of configured history attributes per object.
I/O Change Rate describes the average rate of process data value changes per object. For example, the average number of discrete device transitions for PV.
Monitor Rate describes the average rate of monitoring for data value changes per object by the system. This should always be at least as fast as the expected I/O change rate per object.
Avg. Alarm Rate describes the average rate of new alarms detected per object by the system.

Note The AppEngine scan period is 500 msec for the Unit Application, except when there are six or more Unit Applications on the AppEngine.


Hardware Specifications
A computer with the following specifications was used in testing the FactorySuite A2 System products in order to determine the sizing and performance guidelines:

Operating System      Windows 2003 Server (+ all current Service Packs)
CPU Speed             2.4 GHz
Physical RAM          1-2 GB
Free Disk Space       15 GB
Network Throughput    100 Mbps
AppEngine Scan Rate   1000 ms (1 second)

Baseline Server CPU Values


The following table summarizes average base CPU usage for an AutomationObject Server without clients. The data represents the CPU used by an AutomationObject Server in a steady state (with no active client connections). These numbers would increase based on other tasks executed in the server, such as script execution, active client connections, etc.

Object Instances   I/O Points   CPU Usage (%)
500                500          4%
3000               3000         15%
5000               5000         22%

In order to optimize the use of RAM and CPU, it is very important to define what the scripting requirements are and to plan ahead for how the scripts are implemented. In many cases, it may be more efficient to create global objects to execute scripts that update data in multiple objects, instead of running the same script in all objects. For recommendations on applied scripting techniques, see Chapter 5, "Working with Templates."


System-Wide Performance Baselines


Configuration environment performance parameters were also captured using the development node of each test scenario:

AppEngine startup (boot)        1 minute for each 1000 objects
Galaxy Load for 500 objects     103 sec
Galaxy Dump for 500 objects     60 sec
Deployment of 500 objects       39 sec*

* Deployment time is measured after the first instance of the base template has been deployed, since this deploys the code modules.

Communicating with Field Devices


When implementing systems with a large number of I/O points, try to determine what the possible I/O throughput levels will be. The I/O points hosted by an AppEngine will be updated every time the associated scan group is read. Scan groups are set up on the DIObject used to connect to Data Access Servers. These scan groups are configured with an update interval value. I/O points that do not need to be updated quickly should be placed into slower scan groups. This will minimize the impact to the control networks due to I/O updates.

Performance Data
A FactorySuite A2 System enables a great degree of freedom when configuring and deploying objects. While it is not practical to test all possible configuration and implementation combinations, three representative system topologies are detailed. The loading parameter details for each of those representative systems are also described. To determine the performance guidelines, each test system configuration was measured against a standard of health. For testing purposes, a healthy system is defined as follows:

- Absence of scan overruns.
- Absence of communication timeouts.
- Absence of excessive page faults.
- Acceptable average CPU utilization.


Common Topology Performance Notes


Each of the topologies discussed in this guide includes an Engineering Station node. It is assumed for this example that the Galaxy Repository is also located on this node. However, the Galaxy Repository is not required to be running once the objects have been deployed; examples include a portable GR, or a typical engineering station that must be re-imaged multiple times and is therefore only intermittently available. SQL Server can be shut down in these scenarios with minimal impact to the running system. Installing the GR on a dedicated node has the following benefits:

- It separates the engineering station(s) from the running system.
- It prevents bulk operations (for example, a Galaxy load, mass import, change propagation, etc.) performed on the Galaxy Repository from impacting the running system. If the Galaxy Repository were running on a node where an AppEngine hosting other objects is also running, and a bulk operation was performed, they would compete for CPU resources. The competition could potentially result in scan over-runs or missed scans on the AppEngine. Separating these components prevents unnecessary disruptions in the running application.

Improving Communication with Remote Nodes


The $WinPlatform object has a set of attributes associated with the Message Exchange (MX) protocol that allow fine-tuning of communication between Galaxy components. When the network bandwidth is limited, the connection is intermittent, or the number of objects is very large (thousands of instances), it is recommended to adjust these parameters to obtain better deployment and communication performance. Keep in mind that the values may need to be changed based on the specific operating conditions; the values shown here are intended as a starting point for fine-tuning the system. The following list shows the default and recommended values for the MX parameters of the WinPlatform object:

Message time-out: default 30,000 ms; recommended 300,000 ms. Increase to avoid timeouts when deploying a large number of instances.
NMX heartbeat period: default 2000 ms; recommended 2000 ms.
Consecutive number of missed NMX heartbeats allowed: default 3; recommended 4 (AutomationObject Server), 4 (Galaxy Repository), 4 (Visualization node). Increase to avoid false failovers in large systems.

For details on the attributes in the WinPlatform object, please refer to the $WinPlatform object Help.


Checkpointing Attributes
Every AppEngine in a Galaxy saves changes to the values of specific attributes (checkpointed attributes) to disk at a pre-defined interval. The more attributes an AppEngine defines as checkpointed, the more data is written to disk, so it is critical to set this attribute appropriately to achieve the best performance for the system and the process. In the event of an AppEngine restart or a failover, the system reads the checkpoint file to update the attribute values and starts from there. The default value of the CheckpointPeriod attribute in the AppEngine is 0, which checkpoints attributes every scan as a safe behavior for users who are not aware of the functionality. However, it is highly recommended to set this parameter to a higher value so that you can optimize CPU utilization while still maintaining data integrity in the event of an AppEngine failure.
Note For more details on Checkpointing, see "Checkpointing" on page 74. For information on checkpoint attributes, see "Tuning Redundant Engine Attributes" on page 87.
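As an illustration, with the 1000 ms AppEngine scan rate used throughout these tests, leaving CheckpointPeriod at 0 writes checkpoint data every second. The failover test configurations later in this chapter use a checkpoint period of 10,000 ms, which reduces checkpoint disk writes to once every 10 seconds in exchange for up to 10 seconds of attribute changes being at risk if the engine fails between checkpoints.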

Bulk Operations Considerations


Observe the following considerations for bulk operations:

- Importing a large Galaxy with multiple levels of template containment may take a long time. The actual time depends on the size of the Galaxy and the number of template containment associations that must be handled during the import process.
- For large numbers of Platforms and Engines residing on computers that are situated over slow network connections, it is not recommended to initiate a Cascade Deployment of the entire Galaxy. For such networks it is important to deploy the Platforms separately. Also, Platforms that have complex combinations of Areas and Objects, or large numbers of Objects, should be selected in smaller groups and deployed as groups.
- For better performance and scalability, increase the SQL Server maximum memory usage to a larger portion of actual physical memory. In SQL Server Enterprise Manager, right-click the SQL Server, select Properties, and on the Memory tab select "Dynamically configure SQL memory" and increase the Maximum Memory.

Note Configuring Dynamic Memory allocation for SQL Server is most appropriate for a FactorySuite A2 System that is predicted to undergo significant changes and where disk space/memory use is difficult to estimate. In the case where disk space and memory use are known more precisely, see "RAM Allocation" on page 221.


Handling Multiple Engines


It may be efficient in a system to distribute the load of AutomationObject instances across multiple engines. This is a particular advantage when running the application on a multi-processor PC: because each engine runs in its own thread, the operating system distributes the engines across the processors. In Industrial Application Server 2.0, a platform can host a maximum of 20 engines. The platform itself counts as one engine, leaving 19 available slots. However, it is not recommended to use more than 16 engines on a single platform; using more may degrade system performance.
To determine whether to place multiple engines on a platform, analyze Scheduler.ExecutionTimeAvg in Object Viewer and strive not to exceed 30% - 40% of the scan cycle execution time. Also, if the speed you require cannot be accommodated by one engine, you may need to split engines. You may need multiple engines (1) when there are differing scan rates or scan overruns, (2) when there is high execution time, and/or (3) when you need to bring data in faster. When you have sets of objects that do not require fast execution, you can group them on one engine. In a system in which you can group application objects based on scan rates, use a platform for each object group.
Important! Be sure to monitor CPU usage when adding engines. Also, remember that with multiple engines you are moving Areas; if you have only one Area, you must create sub-areas.
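As a worked example of the 30% - 40% guideline: for an AppEngine with a 1000 ms scan period, Scheduler.ExecutionTimeAvg should stay below roughly 300 - 400 ms. If it averages, say, 600 ms, that engine is a candidate for splitting its objects across two engines; the 600 ms figure is hypothetical and used only to illustrate the threshold.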

Best Practice To keep multiple engines with the same configuration from firing
simultaneously, set their scan rates to prime numbers. For example, the scan rates of the various engines might be set at 997 ms, 1009 ms, and 1021 ms.

- Disable Hyper-Threading on computers that support this hardware function. Hyper-Threading gives a false impression of lowering processor load while actually increasing the load on the actual (rather than virtual) processor.
- Platforms that are routinely placed off scan may cause a certain amount of increased network traffic coming from other platforms, if the other platforms have Application Objects still on scan that subscribe for data from Application Objects belonging to the off scan Platform. If the resulting increase in network traffic becomes excessive, consider placing the "subscriber" Application Objects off scan as well during the time that the particular platform is scheduled to be off scan.


Single Node System Implementation


Although intended for use in a distributed topology/distributed system environment, Industrial Application Server (version 2.1 and later) and FactorySuite A2 components can be implemented effectively on a single node. Single Node configuration is supported on IAS 2.1 (and later), and InSQL 9.0 (and later). A new install of these two products (to a "clean" machine) is recommended. Upgrades of previous versions (IAS 2.0 or InSQL 8.0) when migrating to a single node configuration will cause problems. However, future version upgrades from IAS 2.1 and InSQL 9.0 on a single node are supported. Please contact Wonderware Technical Support for further assistance and details. Note In a Single Node implementation, performing bulk operations (Import/Export, Deploy, etc.) on the GR may impact the runtime system. Single Node System implementation differs from the "Standalone Workstation" topology, since all components/functions are combined on a single machine. The following components have been tested on a single node:

- Configuration Database (GR)
- Historian (IndustrialSQL Server 9.0 and later)
- Visualization (InTouch Software)
- Development (IDE, SMC)
- Analysis Client (ActiveFactory)

Note For detailed information on topology component roles, see "Topology Component Distribution" on page 30.
Integration testing was performed successfully (i.e., the components installed and ran correctly) on both Windows Server 2003 (recommended implementation) and XP Professional operating systems.
Note Installation and implementation on a single XP Professional node is recommended ONLY for use within a lab or demonstration environment, or within the boundaries of the Application Specification defined in this section. Windows Server 2003 is the standard operating system for use in any production environment.
The following information is derived from the test results and provides detailed information for installation and operation on a single machine.

Hardware/Software System Requirements


The following values represent minimum hardware size requirements:

Hardware Component    Minimum
Processor             P4, 2 GHz
RAM                   1 GB
Hard Disk             10 GB


The following operating systems, SQL Server editions, and auxiliary software were tested:

Operating System               SQL Server
Windows 2003 Server            SQL Server 2000 SP3a Standard Edition
Windows XP Professional SP2    SQL Server 2000 SP3a Personal Edition

Antivirus software and other software such as Ad Aware should also be running on this machine, along with Microsoft Office applications. The tests were performed on a single, standalone computer. Data was provided through different sources, either by direct connection or a network connection.

Application Specification
The following specification was implemented for single-node tests:

- 1000 I/O points with a 1-second update rate. Make sure these I/O points are distributed among all the different data types and between the DAServers and legacy I/O Servers, and be sure to use all three communication protocols (SuiteLink, OPC, and DDE).
- Data type mix: 50% floats, 25% bools, 10% strings, 15% integers.
- 10% changing I/O points: 100 items were updated every second. However, the server scanned for 1000 items per second.
- 10% of I/O points historized, maintaining the same ratio across the various data types.
- (Maximum) 3 AppEngines, approximately 70 objects per engine.
- (Maximum) 200 AppObjects, 5 I/O points per object.
- Alarm rate: 2 alarms per second.

System Components
The single node includes the following components:

- Galaxy Repository
- IDE
- BootStrap
- InTouch Software HMI
- IndustrialSQL Server Historian
- ActiveFactory 9.0
- DASABCIP
- ABTCP Legacy I/O Server
- DDE Server
- DASSIDirect


The following data was gathered over a period of approximately 2.5 days from a system running Windows Server 2003. Performance on the XP Pro node was similar enough that reporting it separately would be redundant for this document:

Galaxy Specification         Component
AppEngines                   3
AutomationObjects            1100 (each with 5 I/O points)
DAServer                     DASABCIP
IOServer                     ABTCP
DIObject                     DASDIDirect
Historian                    InSQL Historian
IDAS Tags                    75
MDAS Tags from IAS           2108 (10% Historized)
Page Fault Usage (Memory)    1.01 GB
Network Utilization          0.14-0.34%

Single-Node Process Performance Data


CPU % Total aaBootstrap aaEngine aaEngine1 aaEngine2 aaEngine3 aaGR aahIDASSvc aahManStSvc abtcp DASABCIP DASSIDirect lsass aaTrend SQL Server 5 10 1 5 2 2 25 PrivateBytes Avg. (Bytes) 934223422 5886859 50091771 51537157 81326269 50945422 75170986 13881344 6864896 3470549 14304104 5608829 10071102 43463130 104809085 PrivateBytes Last Memory Usage (Bytes) (KBytes) 931662894 5885952 50331286 51402770 81223239 50895837 78395992 13881344 6864896 3473408 14576587 5603328 10903473 45399870 64583768 944376 9292 44172 31340 59400 31524 70608 17676 11600 1924 20016 11332 11856 27724 67872


Large Distributed FactorySuite A2 System Topology


Distributed local systems include a peer-to-peer topology and offer the advantage of autonomy; that is, each node is self-contained and can run independently of its peers. Each node includes a visualization client and an AutomationObject Server.
[Topology diagram: Supervisory Network with Workstation, Historian (Data and Alarms), Engineering Station, Configuration Database, SuiteVoyager Portal, and I/O Server connected through a network device (switch or router); the I/O Server connects to the PLC Network.]

Note For more details about this topology type, see the "Distributed Local Network" section in Chapter 2, "Identifying Topology Requirements."


Large System Topology Elements


The following elements and data are derived from extensive product testing in lab environments. The I/O data is based on simulated I/O and on single Unit Applications (see the Unit Application table):

Platforms - Peer-to-Peer AutomationObject Server/Visualization node    40
Platforms - Galaxy Repository Server                                   1
IndustrialSQL Server Historian Server Node                             1
Object Instances per Galaxy           20,000 (40 nodes x 500 total object instances)
Object Instances per Platform         509 (includes platform, areas, engines, etc.)
Total I/O per Galaxy                  34,224 (~ 36,000, or 900 Unit Application I/O points x 40)
I/O Points per Server Platform        900
Inputs                                300 changes/sec
Outputs                               10 changes/sec
History stores                        300 changes/sec
Steady state new alarm arrival        3/sec
Alarm bursts of limited duration      2000/sec

Note Attribute tuning for Redundant AppEngines is described on "Tuning Redundant Engine Attributes" on page 87. In steady-state condition the system performed as described in the following table:

Node                                          CPU                                     Memory (MB)     Network Usage @ 100 Mbps
GR                                            5%                                      700             0.3%
Historian                                     20%                                     563             0.3%
AutomationObject Server/Visualization Node    60% (1.9 GHz PCs); 40% (2.4 GHz PCs)    460 (average)   0.7% (Windows 2003); 0.68% (Windows XP Pro)


Very Large Distributed FactorySuite A2 System Topology


This topology consists of dedicated AutomationObject Servers and Visualization clients. The visualization clients are represented by nodes running platforms and InTouch Software, as well as Terminal Services sessions.
Note For performance data on SCADA systems on slow/intermittent networks, see Chapter 11, "Working in Wide-Area Networks and SCADA Systems."
Specific Object attribute settings can be configured to make the object perform more efficiently. It is very important to understand how to use those values, because changing them without a good understanding of their function, and of how they relate to other attributes and conditions, may degrade system performance.
[Topology diagram: Terminal Services clients (thin-client visualization of data, alarms, and history) connect over Ethernet and a dial-up 28.8 kbps line to a Terminal Server. The Supervisory Network also includes a SuiteVoyager Portal, AutomationObject Server, Historian (Data and Alarms), Engineering Station, Configuration Database, SCADAlarm, and I/O Server behind a network device (switch or router). The Control Network connects PLCs and RTU radio modems (RS-232), with an RS-485 network of level transmitters.]

The following elements and data are derived from the Unit Application table that appears in a previous section.


I/O Baselines
Note This topology type was tested for integration purposes in a lab environment and uses simulated I/O for data generation. "Real" I/O results (from PLCs) are included in the following section as a subset of this test configuration.

Platforms - Visualization Nodes         10
Platforms - AutomationObject Servers    20
Platforms - Galaxy Repository Server    1
IndustrialSQL Server Historian          5
Object Instances per Galaxy             50,000
Object Instances per Server Platform    2,500
I/O per Galaxy                          1,100,000
I/O Points per Server Platform          55,000
I/O Change Rate                         3% of I/O points change every scan
Alarm Rate                              1% of I/O points per scan
History Stores                          7,000 changes/sec
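To put the I/O change rate in concrete terms: assuming the 1000 ms AppEngine scan rate used in the baseline hardware specification (the scan rate is not restated for this topology), a 3% change rate on 55,000 I/O points per server platform corresponds to roughly 1,650 value changes per second per platform.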

The following results table contains simulated and real I/O values:

Configuration                       PF Usage Avg. (MB)   %CPU Use   Physical Memory Use (MB)   %Network (RMC)   %Network (Primary)
Normal Engine (simulated I/O)       652                  21-29      871                        N/A              0.01
Redundant Engine (simulated I/O)    791                  22-30      908                        0.12             0.01
IT Platform (TSE)                   707                  23         759                        N/A              0.01
Load Sharing (simulated I/O)        930                  27-30      770                        0.24             0.01
Redundant Engine (Real I/O)         782                  55         913                        0.15             0.03

Primary Network line speed: 1 Gbps. RMC Network line speed: 1 Mbps.

Note Attribute tuning for Redundant AppEngines is described on "Tuning Redundant Engine Attributes" on page 87.


Deploying Objects in a Large or Very Large System


The ArchestrA system deploys objects in a "cascade" manner: that is, objects are deployed in hierarchical order. However, in the context of large or very large systems, a cascade deployment, though reliable, may consume a large amount of system resources, or time, to complete the deploy operation.
Note For details about different deployment scenarios, see the IDE Online Help (search "cascade" to filter results).
In this context, a large system is defined as a system in the 20,000- to 50,000-I/O point range, and a very large system is one that has at least 50,000 I/O points. These values are approximate and must be considered along with other performance variables listed in the previous tables. When working in a large or very large system, it is often more efficient to deploy object groups in a specific order, using the multi-select function in combination with, or instead of, cascade deployment. Deploying objects in this scenario can save overall deployment time.
To deploy in a large system
For a large system, perform Cascade deployment from the Platform level. Make sure that Platforms hosting Primary Engines are deployed first, then Platforms hosting Backup Engines. This strategy avoids issues in load-balancing environments.
To deploy in a very large system
When working in a very large system (high I/O, high numbers of Areas), use multi-select (Ctrl + click) to deploy in the following sequence:
1. Galaxy Repository Platform
2. All other Platforms
3. Primary AppEngines
4. Backup AppEngines
5. Primary Engine for I/O (include redundant partner); skip already deployed objects.


Failover Performance
Much testing has been completed to measure the performance of a system during a failover. Multiple variables were considered to obtain representative data reflecting the average failover time, CPU, and memory utilization in different configurations. Note that failover performance may vary depending on the number of I/O points referenced in the system, the number of scripts per instance, the tasks executed by scripts, historized data, alarms, and checkpointed data. Script quantity and complexity will increase the failover time, as the system requires more time and resources to process them. This section presents two architectures with the corresponding application configuration and the results of the performance tests.
Note Attribute tuning for Redundant AppEngines is described in "Tuning Redundant Engine Attributes" on page 87.

Dedicated Standby Configuration


[Architecture diagram: Dedicated Standby configuration. Two Visualization nodes and IT Alarm Providers sit on the Supervisory Network. AOS (PRIMARY) runs AE1, a DIObject, and a DAServer, and is linked over the RMC to AOS (BACKUP) running AE1 (Backup). The Historian, Alarm DB, and AlarmLogger are also on the Supervisory Network, and a network device (switch or router) connects to the PLC Network.]

The following table presents the hardware and software used in the architecture shown above. It also presents the configuration of the application tested.


Some of the variables in the table are dependent on the number of I/O points referred to in the application, so the value is shown as a percentage of the total I/O count for the particular scenario.

Resources:
Hardware                 Dell Precision 360, 1.0 GB RAM, Pentium 4 CPU 2.8 GHz
Operating System         Windows XP Pro SP2 for Redundant AppEngine nodes
Configuration            1 Redundant AppEngine and 1 local DASABTCP DIObject
DIObject                 ABTCPPLC
Instances                1:8 ratio (instances:I/O points)
I/O Change Rate          3% of I/O points changing every scan
History                  3% of I/O points historized every scan
Alarms                   Total 10 alarms/sec
Scripts per Instance     3 (1 On Scan, 2 execute every scan)
AppEngine Scan Rate      1000 msec
DIObject Scan Group      1000, 1200, 1500, 1700, 2600, 3300, 4400, and 4600 msec
Modified Attributes      Checkpoint period: 10000; Force failover timeout: 240000; Maximum time to maintain good quality after failure: 120000; Maximum time to discover partner: 30000

The following table presents the results of the test executed to measure the failover time as well as the CPU and Memory utilization. The Failover values are generated by two events:

- Network communication failure.
- Using the Redundancy.ForceFailoverCmd attribute in Object Viewer.

Also note that another key component tracked in the test is the CPU consumed by the DIObject:

I/O Counts   Failover Time [sec]                    CPULoadAvg [%]                            Total Memory Usage [MB]
             Network Failure   ForceFailoverCmd     Active Node   Standby Node   DASABTCP     Active Node   Standby Node
2500         23                18                   16            5              6            368           306
5000         30                25                   22            5              10           371           311
10000        44                43                   32            7              11           419           319
20000        59                58                   60            16             25           550           399
30000        95                75                   88            11             39           663           353

Note During the transition of the engine from Standby to Active state, the CPU % value can spike to high levels.


Load Shared with Remote I/O Data Source


The analysis for the Load Shared configuration is very similar to the tests completed for the Dedicated Standby configuration. The following figure shows the configuration used in this validation:

[Architecture diagram: Load Shared configuration with remote I/O data sources. Visualization nodes and IT Alarm Providers sit on the Supervisory Network. AutomationObject Server 1 (Primary) hosts AE1 AppObjects, an RDI, and AE4'; AutomationObject Server 2 (Backup) hosts AE4 AppObjects, an RDI, and AE1'; the pair is linked by the RMC. I/O Data Server 1 (AE_2, DI 1, DAServer1) and I/O Data Server 2 (AE_3, DI 2, DAServer2) connect to the PLC Network, and InSQL, AlarmDB, and AlarmLogger sit behind a network device (switch or router).]

The resources and configuration details are described in the following table:

Resources:
Hardware                 Dell Precision 360, 1.0 GB RAM, Pentium 4 CPU 2.8 GHz
Operating System         Windows XP Pro SP2 for Redundant AppEngine nodes
Configuration            5 AppEngines and 2 RDIObjects with load sharing between redundancy pair platforms
DIObject                 ABTCPPLC
Instances                1:8 ratio (instances:I/O points)
I/O Change Rate          3% of I/O points changing every scan
History                  3% of I/O points historized every scan
Alarms                   Total 10 alarms
Scripts per Instance     3 (1 On Scan, 2 execute every scan)
AppEngine Scan Rate      1000, 1100, 1200, 1300, and 1400 msec
DIObject Scan Group      1000, 1200, 1500, 1700, 2600, 3300, 4400, and 4600 msec
Modified Attributes      Checkpoint period: 10000; Force failover timeout: 240000; Maximum time to maintain good quality after failure: 120000


The results shown in the table below present the Failover time, CPU utilization and Memory Usage in different scenarios. This table also contains the CPU load on the node that hosts the I/O Servers.

I/O Counts   Failover Time [sec]                    CPULoadAvg [%]                                                      Total Memory Usage [MB]
             Network Failure   ForceFailoverCmd     Active Node (post-failover)   Standby Node   I/O Server Nodes 1, 2   Active Node   Standby Node
2500         20                16                   7                             4              7, 10                   430           330
5000         22                18                   13                            4              7, 10                   461           333
10000        29                23                   19                            5              7, 10                   555           362
20000        47                31                   40                            2              11, 15                  605           385
30000        67                44                   49                            6              21, 22                  797           443
40000        88                75                   55                            7              25, 30                  814           552

Note During the transition of the engine from Standby to Active state, the CPU values can spike to high levels.
After failover, it will be necessary to restore the system to the original (load-sharing) configuration. The following script example would be used on both nodes, configured as While True with a trigger period of 10 minutes:
if me.redundancy.status == "Active" and me.redundancy.Partnerstatus == "standby - ready" THEN
    me.redundancy.forceFailOverCmd = true;
Endif;

DIObject Performance Notes


Each DAGroup (Scan Group, Block Read, Block Write) has a maximum capacity of 5000 items. For optimal performance, the group should not have more than 4000 items. Note that for each DAGroup added, an additional connection to the server is created. This extra connection increases the memory requirement on the server. If a decision needs to be made between either increasing the number of DAGroups or increasing the number of items in each DAGroup, the recommendation is to put more items in each DAGroup. As each DAServer's performance is subject to the requirements of the connecting devices and the protocols being used, it is also recommended to monitor the DAServer performance levels and adjust the maximum number of items that subscribe to the DAServer. For optimal performance, the CPU consumed by DAServers on each machine should be no more than 55% when items are constantly subscribed.
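As a hypothetical sizing example of this guidance: 12,000 items on one DAServer are better organized as three DAGroups of 4,000 items each (three server connections) than as twelve groups of 1,000 items (twelve connections and correspondingly higher memory use on the server). The item counts are illustrative only.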


OPC Client Performance


This section describes the relationship between CPU usage and Scan Groups, versus Object usage. Industrial Application Server (IAS) was configured with OPCClient Objects accessing a DAServer on a remote node. The DAServer was configured to retrieve changing data from a PLC via an Ethernet connection. The items within the OPC Client Scan Groups were accessed via Object Viewer.
Note The test results described in this section do not take into consideration implementation of Object Scripts, which will affect the execution of the AppEngine; nor do the results account for limitations in the number of items which can be accessed through a single object.
The performance test was conducted with the following basic topology:

IAS node: Dell Precision 340, 2.0 GHz, 512 MB RAM, Windows Server 2003 SP1, IAS 2.0 SP1
DAServer node: Dell Precision 370, 2.8 GHz, 1024 MB RAM, Windows Server 2003 SP1, DASABCIP 3.0 SP1
PLC: Allen-Bradley ControlLogix Model 5555

The test was conducted using the following strategies:
1. Establish "Base Line" CPU consumption.
   A. An OPC Client Object with 2,500 items in 5 Scan Groups was measured.
   B. Items were then increased to 5,000.
   C. Items were then increased to 10,000.
2. Measure CPU consumption when increased scan groups are used.
   A. An OPC Client Object with 10,000 items in 5 Scan Groups was measured.
   B. The 5 Scan Groups were evenly split into 10 Scan Groups.
   C. The 10 Scan Groups were evenly split into 20 Scan Groups.
3. Measure CPU consumption when increased OPC Client Objects are used.
   A. 5 OPC Clients with varied numbers of items, totaling 10,000, were deployed.
   B. The 5 Objects were split into 10 objects with the item total remaining at 10,000.
   C. The 10 Objects were split into 20 objects with the item total remaining at 10,000.


After the Objects were deployed, the system was "stabilized" for 5 minutes before CPU usage was measured. The following results are represented as a percentage of total system utilization:

Test Strategy Reference   Objects   Scan Groups per Object   Items    CPU %
1A                        1         5                        2500     13.2
1B                        1         5                        5000     28.0
1C                        1         5                        10000    54.0
2A                        1         5                        10000    54.0
2B                        1         10                       10000    59.2
2C                        1         20                       10000    60.5
3A                        5         1                        10000    57.0
3B                        10        1                        10000    57.4
3C                        20        1                        10000    58.9

[Chart: OPC Client Performance - CPU % (0-100) plotted for increasing item counts, multiple scan groups, and multiple objects.]

The use of multiple Scan Groups appears to have more impact on system performance than does the use of multiple items or multiple objects, although under these test conditions each effect is negligible.


CHAPTER 11: Working in Wide-Area Networks and SCADA Systems

This chapter discusses application of FactorySuite A2 products in Wide-Area Network (WAN) and Supervisory Control and Data Acquisition (SCADA) systems environments. The WAN network environment exhibits the following characteristics:

- Low bandwidth.
- High latency.
- Intermittent communication.

Note This chapter contains information and terms specific to the SCADA Industry.

Contents

Wide-Area Networks Overview
Network and Operating System Configuration
Security
Application Configuration Overview
Platform and Engine Tuning
Diagnostics
SCADA Benchmarks


Wide-Area Networks Overview


Wide-Area Networks (WANs) are comprised of computers located across large geographical distances. Communication between the computers is typically handled by modems, T1 lines, or satellite links. Data transmitted in this environment must travel through a large number of network components (routers, satellites, modems), which increases latency (the delay from when the data is sent to when it is received). Further, the underlying technologies used for communication are limited to low bandwidth. As a result, these "distributed" networks often experience delays or breaks in communication due to relatively high amounts of network traffic, or interference by external conditions such as severe weather.
WANs are used in industries such as Water and Waste Control, Telecommunications, Natural Gas production, and Oil production/distribution, where they are implemented as part of a SCADA system. The SCADA system is usually a central computer that communicates over the WAN to remote PLCs or RTUs.
Note In this context, a Remote Terminal Unit (RTU) is defined as an industrial data collection device typically located at a remote location, which communicates data to a host system by using telemetry (such as radio, dial-up telephone, or leased lines).
A SCADA system gathers real-time data, transfers the data to a central site, performs the necessary analysis and control, and displays the information visually in an appropriately organized fashion. SCADA topologies can easily be expanded to handle additional remote sites and I/O points (see Distributed SCADA Topologies later in this chapter). The SCADA system collects and records data events and alarms. A SCADA host performs centralized alarm management, data trending, and operator display and control. Current status and commands are handled by remote controllers (RTUs and PLCs). SCADA systems employ RTU or PLC protocols including Modbus, AB-DF1, and DNP3.0. SCADA communications can use a range of wired media (leased line, dial-up line, fiber, ADSL, cable) and wireless media (licensed radio, spread spectrum, cellular, CDPD, satellite). I/O servers and DAServers collect data from remote units, then send this data to the Industrial Application Server using OPC or SuiteLink protocols.


Network Terminology
When metric prefixes (k for kilo, M for Mega) are used in a network context, they retain their original definitions. That is, k = 1,000 and M = 1,000,000. This usage differs from disk-storage terminology, where KB = 1024 Bytes and MB = 1,048,576 Bytes. The following table summarizes the conventions used in this chapter:

k      1,000
M      1,000,000
B      Byte
b      bit
bps    bits-per-second
Bps    Bytes-per-second

Network and Operating System Configuration


The following information refers to both network component configuration and the operating system configuration necessary to function successfully. Some information is included for contextual purposes only.

Minimum Bandwidth Requirements


The slowest network connection speed supported with Industrial Application Server 2.1 for platform, history, and alarm communications is 128 kbps of dedicated bandwidth. This is totally independent of communications between OPC/IO/DAServer and field devices (RTUs, etc.), which is typically done over a different device network using RTU and/or other industrial protocols, often at slower data rates (i.e.: 56 kbps and lower). Check ping times for remote nodes and consider improving the available bandwidth or reducing I/O polling frequencies for those nodes that exhibit very slow ping times. For information about using Ping for checking and diagnostics, see "Diagnostics" on page 266.
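As a quick check of what that minimum means in Bytes: using the conventions above, a dedicated 128 kbps link carries at most 128,000 / 8 = 16,000 Bytes per second before protocol overhead, so the sustained platform, history, and alarm traffic for a remote node has to fit within roughly 16 kBps.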

Subnets
Set up subnets and sub-areas in the IP network. Most SCADA systems use routers and switches to isolate traffic within a particular site so as not to burden the network. Be sure routers are configured to isolate and route information correctly.


DCOM
When assessing and setting up the network, be careful in setting blocked ports. Some DCOM ports need to be open to communicate with OPC. Leave open the ports that interact between FactorySuite components. For a list of ports used by FactorySuite components, refer to "Process Control Network Firewall Ports" on page 203.

Domain Controller
The Domain Controller is a Windows server computer that stores user account information, authenticates users, and enforces security policy for a Windows domain. Domain controllers detect changes to user accounts and synchronize changes made in Directory Server user entries. Distributed SCADA networks typically employ multiple Domain Controllers at strategic network locations. The Domain Controller node includes the DNS service and Time Synchronization to manage network communication requests.

Domain Name Server (DNS)


An Internet service that translates domain names and host names into IP addresses.

Time Synchronization
The Windows 2000 environment uses W32Time services to synchronize time settings across a network. For a detailed description of the Windows Time service, how it operates on a Windows 2000 network, and configuration details, see http://www.microsoft.com/windows2000/techinfo/howitworks/security/wintimeserv.asp. The end result is that all computers within the network running W32Time reliably synchronize to a common time.

Best Practice
Implement Universal Time Synchronization (UTS) at each site. Doing so provides absolute certainty that data is properly time-stamped and forwarded to the historian under all circumstances. Current technology uses GPS or radio broadcast software along with dedicated hardware devices that may be linked to one or more computers at the site using serial ports, Ethernet, or USB. The following considerations apply when implementing UTS:

If the time master provider is not in the same geographical area, there is a risk of losing the connection to the time server through time "drift." In other words, nodes in the same geographical area should have access to a master in the same area, or use a GPS.


Synchronize all Domain Controllers with a common time provider. The Windows service uses time stamps as part of the Kerberos security implementation. Kerberos is a system that provides a central authentication mechanism for a variety of client/server applications, using passwords, secret keys, and time-sensitivity. If the time between clients and Domain Controllers, or between Domain Controllers (in other geographical areas), drifts too far, operating system authentication may fail.

GPS Time Provider software is commonly available to ensure reliable and standardized time synchronization.


Remote Access Services (RAS)


The following information focuses on Remote Access Server node topologies and is specific to the dial-up connection mode. The following figure represents a possible SCADA topology that utilizes modem communications. Note that the Domain Controller and Router reside on dedicated nodes:
[Topology diagram: Subnet 1/Control Room (Visualization Node/Alarms, Domain Controller/DNS Server, Historian with Data and Alarms, Engineering Station, Configuration Database) on a 10/100 Mbps Supervisory Network behind a network device (switch or router), with a RAS Server. Subnet 2 links the RAS Server to a RAS Client (which also performs routing) over 56 kbps modems connecting at 33.6 kbps. Subnet 3/Remote Site has a 10/100 Mbps network with a switch or router, two Visualization Nodes, and a Control Network with two PLCs.]

Overview
Remote Access Server (RAS) is often used to connect remote computers with a central computer. Telephone modem technology is commonly used when the application requires communication over very long distances and does not require a robust connection, high throughput, and low latency. In addition, using modems allows the user to configure the connection from the RAS client to the RAS server as call-back, dial-on-demand, and hang-up-on-idle or persist-connection.
Note Even though modems have low nominal throughput, they can achieve high effective throughput by using software and hardware compression.


Terminal Services
Terminal Services (TS) is optimized to use minimal network bandwidth. As such, TS works fairly well across modems. When managing a remote AppServer node using the IDE, connect with a dedicated Terminal Services node to manage the connection with the remote node. Ensure that the remote node can be pinged successfully; to do so, the DNS server must be able to resolve all remote node names.

Industrial Application Server


The IDE does not support a slow network connection between the GR and the IDE. A Terminal Services session (from the remote client) is the preferred method of configuring the Galaxy.

InTouch HMI
When NAD is used in a Terminal Services environment on a Windows 2003 machine, a share must be created on the Master application even if the Master is on the Terminal Server Node. InTouch for Terminal Services Environment (TSE) may be installed on a central computer and client sessions may be run at remote sites. Please refer to the InTouch TSE Deployment Guide for more information.

IndustrialSQL Server Historian


No special configuration is required in this context.


Security
This section contains security information specific to a SCADA environment. For general security-related information, see Chapter 7, "Architecting Security."

Domain-Level Security
The ArchestrA Network account must be configured on the Domain Controller and used by all local and remote component installations. In order to maintain a central point of security administration, the following configurations are recommended:

- For Industrial Application Server, configure OS Group-based security.
- For InTouch Software, configure ArchestrA security from WindowMaker.

These settings facilitate a centrally-administered security model within the distributed Microsoft domain. Note The OS Group Security mode has potential network-related implications with SCADA systems. See "OS Group Based Security Mode Notes" on page 205. Distributed SCADA systems will likely use more than one Domain Controller. Such distributed topologies join computer nodes at different sites to different Domain Controllers; the operators, engineers and technicians log into those Domains. For systems with remote sites that are prone to long disconnected periods (from the central network), it is important to distribute additional DNS servers and Backup Domain Controllers at strategic points in the network. A single ArchestrA network account is accessible from any domain. A Galaxy can span multiple domains, but a single ArchestrA network account must be used for all nodes. This account can be a local account on each node, each account having the identical name and password. Domain control and authenticated token cache expiration time are features of Microsoft security. Before changing domain parameters, refer to Microsoft documentation. It is critical to properly configure the DNS settings for the NIC adapter to ensure that multiple Domains are visible to the computer. This configuration is performed when installing the Bootstrap on each node. Microsoft Security with Active Directory and DNS supports invoking such "cross-domain" accounts at installation time. Tune expiration times relating to domain control and security. For example, in a scenario where Industrial Application Server security is enabled as OS User or OS Group for the Galaxy and the node is temporarily disconnected from a domain controller, logins to SMC, Object Viewer, and InTouch Software still succeed; but for a limited time.


If a domain is out of communication for a period of time, tokens are locally cached until the configured timeout. If the operating system's default expiration time is too short for your operation, modify/extend the expiration timeout setting for cache security.

Industrial Application Server


Galaxy security (run-time) is configured via the Security dialog of the IDE. Users/Groups are assigned, roles are created and mapped to Galaxy privileges, and individual Application Objects are allocated to Security Groups. As long as their membership is then authenticated against Galaxy-authorized groups, users will have access to the capabilities of the system.
All Industrial Application Server installations must use a common Domain, User Name, and Password for authentication, even in the case where there are multiple Domain Controllers in the system. The same-named domain account must exist as a member of the local administrator's group on each node; i.e., it must be one Domain, one user name, and its associated password. This ensures a contiguous Galaxy.
Ensure the Login Time is at its default value of 1000 ms, and not 0 (which disables the login).
Note For IAS 2.0 and earlier, 0 is the default value.
This setting limits the role-validation part of the login to 1 second and improves login time on an application in a SCADA system using "OS Group based security." Role-validation on a large system might otherwise take many seconds.
To change the default login time
1. Launch the IDE on the GR node.
2. Select Galaxy > Configure > Security from the main menu.
3. Change the default login time to 1000 ms.

InTouch HMI
Use the ArchestrA security model selection within InTouch WindowMaker.

IndustrialSQL Server Historian


When historizing data, the ArchestrA Network account used in IAS must also exist on the IndustrialSQL Server Historian node as a Local Administrator account.


Workgroup-Level Security
Configure the following security permissions on each node as applicable:

Industrial Application Server


All IAS installations must use an identical local account. The same-named local account must exist on each node as a member of the local Administrator's group on each node. This ensures a contiguous Galaxy.

InTouch HMI
Use the ArchestrA security model selection within InTouch WindowMaker.

IndustrialSQL Server Historian


When historizing data, the ArchestrA Network account used in IAS must also exist on the IndustrialSQL Server Historian node as a Local Administrator account.

Application Configuration Overview


The following material includes system-wide recommendations in a FactorySuite A2 System environment.

Acquire and Store Timestamps for Event Data


For RTU protocols where data timestamps may be retrieved from the remote site, it is necessary that a DAServer or I/O Server acquire this timestamp and make it available as a parameter of the data. Given the data's value and its timestamp, it is possible to transfer the data's value and timestamp into the historical database or process the data with event analysis algorithms. Such data transfer and processing operations require the development of specific Application Server objects designed for this task.

Acquire and Store RTU Event Information


In cases where RTU protocols and event information may be retrieved, particularly as structured blocks with point ID, value, timestamp, and description, a DAServer or I/O Server must acquire the structured information and make it available for processing. Typically this requires special programming for the server that transfers the data to a database. In particular the data may be transferred directly into the IndustrialSQL Server Historian node.


Disaster Recovery
It is important to establish a site, physically separated from the central one, that has replication capability. Doing so ensures the integrity of an operational system where the central site is at risk from fire, tornado, hurricane or other catastrophe. The replication capability includes having duplicated hardware, and requires that software configuration and key state information is periodically propagated from the central site to the recovery site. Each disaster recovery scenario will be unique, thus it is important to consult with system integration experts regarding the design of communications equipment, hardware and the configuration of the software.

Industrial Application Server (Distributed IDE)


Install the IDE on a Windows Server 2003 Terminal Services node. Remote computers log on as a Remote Terminal Service session. Important! Security accounts for remote computers on which the IDE is installed must be given Administrator privileges at the central Terminal Services node. Do not develop from a remote IDE that accesses the Galaxy Repository over a slow network connection. Initiate a Remote Desktop session and run the IDE on the GR node, or initiate a Terminal Services session. Caution! Installing IDEs at remote sites and/or remote computers consumes significant network bandwidth.

Distributed SCADA Topologies


The following figure shows 3 common topologies found in a distributed SCADA environment. The topologies represent the test metrics included in the SCADA Benchmarks section of this chapter. Each remote area contains 3 nodes: 1 Visualization node and 2 AOS nodes. The AOS nodes include Visualization capabilities. All AOS nodes connect to PLCs and generate alarms, send data changes to a Historian, and update subscriptions. Domain controllers and security are used to demonstrate cross-domain communication. The Galaxy and applications use OS Group-based security:


[Topology diagram: Distributed SCADA benchmark topology. A Control Room (Guest Node, Central Visualization, IAS Galaxy Repository, Historian for data and alarms, Engineering Station with IAS Multi IDE and Visualization Master NAD, and Terminal Services) on a 10/100 Mbps network connects through a gateway router and VPN to the corporate network, and through Cisco routers and WAN links (9600 bps - 4 Mbps) to Areas 1, 2, and 3. The areas use 56 kbps modems, RAS clients, radio towers, a bandwidth controller, and their own Domain Controllers, as described below. Each site contains a Visualization node and primary/backup AOS nodes (with Visualization) joined by a Redundant Message Channel, connected to PLCs (PLC1, PLC2, PLC3).]

Area 1 Communication
Uses a 56kbps modem connection through a domain controller. The modem is not modified because of excellent compression capabilities.

Area 2 Communication
Radio modem link at 128 kbps full duplex. The Cisco router clock rate is set to 125,000 bps.

Area 3 Communication
Bandwidth-controller set to 16,000bps. The information on the following pages references the above topology.

ApplicationObject Deployment
Do not initiate a Cascade Deployment of the entire Galaxy when large numbers of Platforms and Engines reside on remote nodes over a distributed network. Rather, deploy the Platforms separately. Separate Platform deployment prevents overloading the network.


Platforms that have complex combinations of Areas and Objects or large numbers of Objects should be selected in smaller groups and deployed as groups. Perform the initial deployment at the central control location, then ferry the node to the remote site.

Platform and Engine Tuning


Decreasing the engine scan rate decreases the event data that must be transmitted. Note For more information on tuning for redundancy in large and very large systems, see "Tuning Redundant Engine Attributes" on page 87.

Tuning the Historian Primitive/MDAS (in Platforms and Engines)



- Set deadbands for the parameters of data to historize; increasing the deadband decreases the network traffic.
- Do not enable the Historian component in WinPlatforms and AppEngines unless they will actually be hosting historized objects.
- Do not enable the Historian feature (a check box in the Object's IDE Editor form) in the highest-level template for any object, because this forces historization of every instance. Selectively apply the Historian feature to some templates and to specific instances of objects.
- Modify the Historian tuning constants, which are attributes of the Engine component found in the WinPlatform and AppEngine objects.

Note The following attributes' default values are designed for a non-intermittent network environment and are especially important in a widely distributed, redundant system. The attributes are listed with their default values.

Engine.Historian.StoreForwardMinDuration: 0 s (seconds)
Engine.Historian.ForwardingChunkSize: 1024 Bytes
Engine.Historian.ForwardingDelay: 250 ms (milliseconds)

Modify the default values after careful observation of network bandwidth utilization:

- Increasing the StoreForwardMinDuration value forces the machine to function in "Store" mode for a longer time, and prevents the machine from trying to re-connect to the Historian prematurely. The longer duration allows time for the network problem to resolve itself.
- Decreasing the chunk size should accommodate slower network connections.


- Increasing the Forwarding Delay value should ensure data packets arrive and are stored before the following packet arrives.
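For example, with the default ForwardingChunkSize of 1024 Bytes and ForwardingDelay of 250 ms, and assuming one chunk is forwarded per delay interval, stored data is forwarded at roughly 1024 / 0.25 = 4,096 Bytes per second (about 33 kbps), which fits well within the 128 kbps minimum bandwidth discussed earlier; halving the chunk size or doubling the delay would roughly halve that forwarding rate.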

Note The previous 3 Historian-tuning attributes are available in IAS 2.0 and later. They are accessible using the AppEngine and Platform Editors (Engine tab). For detailed attribute information, see the AppEngine 'General Configuration' help file. For the occasions that communications with the Industrial SQL Server Historian are interrupted, local storage and recovery of historical data is provided. It is important to configure the Historization parameters of the Engine Object using the IDE to accommodate the number of packets that will be transmitted over the network when data is being restored.

Be sure enough disk space is on the local node to temporarily store data until it can be transferred to the historian.

Note For detailed information on changing the default Platform Start Up and Shut Down settings, see "Tuning Recommendations for Redundancy in Large Systems" on page 86.

Alarms

When the (default) InTouch Alarm Provider option box (in the Platform Editor) is checked, and the Area text is blank when you configure the Platform Object, a subscription is initiated to all Alarms throughout the Galaxy, regardless of the originating node. It is important to configure specific Area names for the Platform Alarm subscriptions in order to reduce the number of alarms this platform subscribes to. Do not leave the text entry field blank. Specifying Area names limits the alarm-related network traffic to the areas specified.

Note Alarms from sub-areas (of those specified) create subscriptions because of the hierarchical design of the Alarm subsystem.

Be sure that parent alarm areas are on the same node as their sub-areas. Alarm history is stored in an MSSQL Database. The default name for the database is AlarmDB which may be given a different name (using the configuration utility launched from the WWAlarmDBLogger application).

WWAlarmDBLogger
WWAlarmDBLogger is installed from the InTouch CD-ROM. It is a client application that subscribes to Alarm messages in the same fashion as the InTouch Alarm Clients do, including one or more Query expressions. The WWAlarmDBLogger application delivers alarm message content to the AlarmDB (or otherwise named) database using standard MSSQL network technology. It receives alarm messages from subscribed Areas via the InTouch Distributed Alarm protocol, which is verbose.


For remote computers that connect to a central supervisory system over a slow network connection, the traditional deployment of Alarm components will result in slow response times over that connection and can interfere with delivery of live data. The recommended topology for slow network connections is to have individual MSSQL Alarm history databases located at the remote sites. This deployment ensures that all Distributed Alarm protocol messages are confined to the remote site's LAN. All database transactions posting Alarm log messages to the AlarmDB (or otherwise named) database are likewise confined to the LAN. Microsoft MSSQL DTS (Data Transformation Services) technology may be used to consolidate AlarmDB messages to a central database periodically for central analysis and reporting. Note For information on implementing Alarms, see "Determining the Alarm Topology" on page 215.

Inter-Node Communications
The following section considers platform communication when deployed across a widely-distributed and/or intermittent network (SCADA). A brief summary is included for context and is not intended as a recommendation, but as a "pointer" for the developer to begin tuning the communications to accommodate the needs, and mitigate the effects, of a SCADA system. The information assumes multiple platforms are deployed on multiple nodes in a SCADA topology.

Communication Summary
Communication between distributed platforms occurs at two levels: Heartbeats and messages (data change requests and replies, subscriptions, status updates/replies, etc.). Messages are handled by Message Exchange (MX) services. IAS is designed to monitor heartbeats and messages (sends/receives) on a regular, configurable basis. Several attributes can be used to monitor and tune the system to avoid problems in a SCADA environment; for example, heartbeats missed because of an intermittent network may cause all subscriptions to be dropped and re-initiated, saturating the network and preventing successful reconnection with remote nodes. The actual settings depend on the particular network environment. Tune the following attributes when implementing Redundant Platforms/Engines within a SCADA environment:


NMXMsgMxTimeout (WinPlatform) - Default: 30,000 ms (30 seconds). Can set at config-time, and at run-time if the Platform is Off Scan. Specifies how long an Engine waits for a response from another Engine before declaring a timeout.
NetNMXHeartbeatPeriod (WinPlatform) - Default: 2000 ms (2 seconds). Can set at config-time and run-time. Specifies how frequently the NmxSvc sends heartbeats to remote Nmx services connected to it.
NetNMXHeartbeatsMissedConsecMax (WinPlatform) - Default: 3. Can set at config-time and run-time. Specifies how many heartbeats are allowed to be missed before the remote NmxSvc declares the connection broken.
DataNotifyFailureConsecMax (Engine) - Determines the number of consecutive Data Change Notification failures (with MX_E_PlatformCommunicationError or MX_E_RequestTimedOut) that will be allowed before the subscription is torn down by the publisher engine.

These attributes can be set to balance correct and timely error notification against stable system performance. For example, a DataNotifyFailureConsecMax value of 0 means that the system will begin tearing down subscriptions (and rebuilding them) if a Data Change Notification failure occurs at any time. Initiating this action means that the network is then flooded with subscription messages, both when tearing the subscriptions down and when rebuilding them. This behavior may not be acceptable in an environment in which certain connections are sporadically intermittent. Using NetNMXHeartbeatsMissedConsecMax and NetNMXHeartbeatPeriod together provides the total time elapsed since the last heartbeat before the connection is declared broken. The formula is:
(NetNMXHeartbeatsMissedConsecMax + 1) * NetNMXHeartbeatPeriod
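For example, with the default values shown in the table above (3 missed heartbeats and a 2000 ms heartbeat period), a broken connection is declared (3 + 1) * 2000 ms = 8000 ms, or roughly 8 seconds, after the last heartbeat was received.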

Setting the values to smaller numbers should discover broken connections faster, but may also produce "false" broken connections because the NmxSvc doesn't get enough CPU time to process incoming messages.

Note These attributes do not directly affect failover. They specify when Message Exchange will declare communication errors. For information on Heartbeat settings within a locally-distributed environment, see "Tuning Redundant Engine Attributes" on page 87. Note that recovery time on a Distributed Network or from an outside disaster is longer on a redundant system.


Note The redundant pair must be at the same physical location; they cannot be geographically separate. Redundancy for AutomationObject Server Engines may be applied as needed at remote sites. The Primary and Backup nodes must include individual NICs for their RMC channels and must use a simple crossover cable between them. The only impact upon network traffic will be some amount of additional packets during deployment from the central GR node to both the Primary and Backup nodes.

Load Balancing
Load balancing is relevant only in the central supervisory setting. This is because "load balancing" implies moving traffic to another CPU at the same location (SCADA systems have physically distributed architecture). In a central location, use a cluster of Application servers to distribute processing activities.

Distributing InTouch HMI Nodes


InTouch Software NAD (Network Application Development) can be used to design and configure systems, and to maintain graphical interfaces in a physically distributed system. The server maintains a master copy and each client maintains a copy of the master, so that if the master is unavailable, the client keeps working. Note that notifying remote InTouch Software client applications over a slow network is limited by the number and size of changes that can be effectively transmitted within a reasonable amount of time.

Minimize Bandwidth Consumption on Initial Startup


The InTouch Software NAD Master and Client nodes should be configured to minimize bandwidth consumption on initial start-up. Tech Note 380, InTouch HMI NAD and Slow Networks provides detailed steps for reducing startup time and network-bandwidth usage and describes:

- The NAD initial startup copy process and an overview of the workaround.
- Detailed instructions for how to prevent a NAD client from copying the full application on initial startup.
- Other helpful NAD documentation references.

Create a NAD Proxy


Tech Note 390, InTouch NAD and Slow Networks outlines how to reduce network utilization in a NAD environment by configuring an existing InTouch Software NAD-Client to behave as a NAD-Proxy. The NAD-Proxy serves as a NAD-Client that accepts the application changes from the NAD-Master and then notifies the NAD-Clients on its local network of the application changes.


NAD
Do not include SmartSymbols in the InTouch Software NAD source application for distribution to HMI nodes over a slow connection (a change to any SmartSymbol causes recompiling of every window in the application and subsequent re-deployment of all windows). Instead, develop a master application using SmartSymbols and import only the modified windows and any modified scripts to the NAD source application. Configure the NAD clients' poll intervals so that they do not consume network bandwidth unnecessarily.

Alarms
- Instead of applying a single global query, configure displays to dynamically query different provider nodes. Alarms query an area directly using SuiteLink (and not MX) by specifying the node name (\\NODENAME\Galaxy!Area), as shown in the example below.
- WW Alarm DB Logger should log only to a local database or over a fast network.
- Allocate alarm groups between different stations and adjust visibility for appropriate areas. Some alarm groups may be at physically different sites.
- Select only the alarms that need wide-area visibility, to minimize network traffic.
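For example, an alarm client query for an area named Pumping served by a remote node named SCADA01 (both names are illustrative, not taken from this guide) would reference the provider directly:

\\SCADA01\Galaxy!Pumping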

Using IndustrialSQL Server Historian


MDAS Time Synchronization
MDAS may fail to forward some live data (over a Distributed SCADA network) when it recovers from a long Store Forward period. The data loss is caused by the Engine host node and the Historian node drifting out of time synchronization while the Engine is disconnected from the network. If the times are in sync when the Engine reconnects, no data loss occurs.

Diagnostics
The following information is applicable within the SCADA environment:

Ping
Ping is a basic command that helps you check the basics of your network. When pinging another machine, a sequence of special ICMP (Internet Control Message Protocol) Echo Request packets is sent. The receiving machine responds with an "echo reply." The Ping program reports a number of items including: the number of milliseconds it took to get a reply to each Echo Request packet; the maximum, minimum, and average round-trip times; the number of dropped packets; and a TTL (Time To Live) value.
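For example, to check the link from a central node to a remote platform node (the node name is illustrative):

ping NodeB

The reported reply times and any lost packets can then be interpreted as described below.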


The average round trip time provides an indication of the speed of your network. In general, it's best if round-trip times are under 200 milliseconds. The maximum and minimum round-trip times give you an idea of the variance ('jitter'). When large variance is present, you may experience poor response in communications. The number of dropped packets may be an indication of network problems. The TTL value helps you find out how many routers (or "hops") the packet goes through in order to get to its destination. Every packet sent has a TTL field set to an initial number (for example 128). As the packet traverses the network, the TTL field is decremented by one, by each router. If the TTL field in successive pings is different, it could indicate that the reply packets are traveling through different routes.

Tracert
Tracert traces the path followed by a packet from one machine to another. The results of this command also provide the IP address of each router the information goes through and how long each hop took. Reviewing the time between hops enables identification of slow or heavily loaded segments. If tracert is unsuccessful, you can use the command output to help determine at which intermediate router forwarding failed or was slowed. Looking for hops with excessive times or dropped packets in the report from a tracert command can reveal potential trouble spots between two machines.
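For example (node name illustrative):

tracert NodeB

Each line of output lists one router (hop) along with its address and the round-trip times measured for that hop, which can be compared to locate the slow segment.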

Time Synchronization
When the network cable is reconnected, the system event viewer may contain a message that the time provider NtpClient is currently receiving valid time data. This message does NOT necessarily mean that the computer clock has been synchronized. It means the internal clock has been adjusted and will act as described in the bullets above. In other words, this message is sent every time the computer is reconnected, but only in certain cases is the actual computer clock also updated to the current Server time. In other cases, only the internal clock is adjusted and the computer time is gradually synced with the Server time according to the algorithm.


SCADA Benchmarks
The following information is derived from Wonderware QA testing.

MX Subscriptions Network Utilization


MX Subscription change rate [points/sec]   Traffic [kbps]   Increase [factor]
0 [heartbeat traffic]                      1.2              --
1                                          3.4              2.83
10                                         4.1              1.2
100                                        11.7             2.85

Note Tests performed with subscription to a 4-byte integer.


Alarm Network Utilization


Change rate [points/sec]   Traffic [kbps]   Increase [factor]
0 [heartbeat traffic]      1.2              --
1                          3.4              2.83
10                         4.1              1.2
100                        11.7             2.85


History Network Utilization


History change rate [points/sec]   Traffic [kbps]   Increase [factor]
0 [MDAS heartbeat]                 3.5              --
1                                  6.5              1.857
10                                 7.7              1.184
100                                21.1             2.74

Summary Network Utilization


Subsystem          Change Rate        Traffic
MX Subscription    100 [point/sec]    11.7 kbps
History            100 [point/sec]    21.1 kbps
Alarms             100 [point/sec]    26.1 kbps

Note Tests performed with subscription to a 4-byte integer.

[Figure: Proportion of bandwidth used by MX, History, and Alarm traffic, each at 100 changes/sec.]


Network Utilization at Deployment


A maximum Network Utilization of 30% (of available bandwidth) is assumed. 250 client nodes are assumed with the following tag breakdown:

- 1,100 tags-per-second throughout the clients = 2.2 MX changes/second (rounded to 3 changes/sec).
- 10,500 History tags among the 250 nodes = 42 history tags per node.
- 1,000,000 tags between 250 nodes = 4,000 tags-per-node. Assume 10% are changing (400).
- 400/3 = a tag change every 133 seconds. Round 133 to 120 seconds for a change "every two minutes." Among the 400, there are 3.3 changes/sec.
- Each history tag changes every two minutes, for a tag change every 3 seconds.

Each of the 250 nodes generates the following changes:

- MX: 3.3 changes/second.
- Alarms: 0.5 alarms/second (arbitrary number).
- History: 0.35 changes/second.

Calculate Network Bandwidth Utilization (per node)


MX: 3.3 changes/second [peak at 10/second] = 4 kbps [from the MX Subscription Network Utilization table]
Alarm: 0.5 alarms/second [no peak] = 2 kbps [from the Alarm Network Utilization table]
History: 0.35 changes/second [peak at 1/second] = 7 kbps [from the History Network Utilization table]
Total: 13 kbps [divided by a 64 kbps link] = 20%



CHAPTER 12: Maintaining the System

The FactorySuite A2 System allows users to develop applications that have built-in diagnostics and maintenance functionality. For example, a Platform in the Industrial Application Server can provide information about system resources such as CPU load, memory, network traffic, or disk usage. System operators and supervisors can access both process data and system health information from the alarm and event database, InSQL Server, or InTouch Software windows with links to various attributes in galaxy objects.

The FactorySuite A2 System also accommodates system administrators, who require the ability to back up system files periodically, and to perform more in-depth diagnostics if problems occur. This section presents the diagnostic and maintenance tools available to FactorySuite A2 System users. For information on other resources, refer to the Technical Support section in the preface to this manual.

Contents
- FactorySuite A2 System Diagnostic/Maintenance Tools
- Add-on Diagnostic/Maintenance Tools
- OS Diagnostic Tools


FactorySuite A2 System Diagnostic/Maintenance Tools


The following material describes diagnostic tools within the Industrial Application Server context.

Object Viewer
The Object Viewer monitors the status of objects and their attributes and can be used to modify an attribute value for testing purposes. To add an object to the Object Viewer Watch list, manually type the object and attribute names into the Attribute Reference box in the menu bar and select Go. When prompted to enter the Attribute Type, click OK.

You can save a list of items being monitored. Once you have a list of attributes in the Watch Window, you can select all or some of them and save them to an XML file. Right-click on the Watch window to save the selection or load an existing one. You can also add a second Watch window that shows as a separate tab at the bottom of the Viewer.

Refer to the Platform and Engine documentation for information about attributes that may indicate system health. These attributes provide alarms and statistics on how much load a platform or engine may have when executing application objects or communicating with I/O servers and other platforms.

Testing the Quality Value of Attributes


When you use Attribute Data Quality for diagnostic purposes, observe the following tips:

Best Practice
To test for the Attribute Quality Value

The actual values of Bad and Good quality are 0 and 192, respectively. Past methods for testing the Quality value have resulted in code such as:
If MyObject.PV.Quality == 192 then

A more appropriate way to code such tests is to call one of the quality test functions available within the QuickScript language. The previous example for testing for a GOOD quality condition would be coded as:
If IsGood(MyObject.PV.Quality) then

The available functions for testing the Quality value of an attribute are as follows. The functions return a Boolean (True) value for success and a Boolean (False) for failure of the test.


Test Condition:

- IsBad
- IsGood
- IsInitializing
- IsUncertain
- IsUsable

As in the above example, the coding syntax requires the desired function, with the specific attribute to test given in parentheses. Note that the parentheses around the quality attribute are required.
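A minimal sketch of how these tests might be combined in an object script follows; Me.PV is assumed to be an attribute of the object running the script, and Me.DisplayValue is a hypothetical UDA used only for illustration:

{Illustrative only: act on the attribute value only when its quality is usable.}
IF IsUsable(Me.PV.Quality) THEN
    Me.DisplayValue = Me.PV;
ELSE
    IF IsBad(Me.PV.Quality) THEN
        {Quality is Bad; log it and hold the last value.}
        LogMessage(Me.Tagname + " PV quality is Bad; holding last value.");
    ENDIF;
ENDIF;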

Best Practice
To use the Set Condition Functions

The available functions for setting the Quality value of an attribute are as follows. The functions return a Boolean (True) value for success and a Boolean (False) for failure of the operation. Set Condition:

- SetBad
- SetGood
- SetInitializing
- SetUncertain

The syntax for the Set Condition functions is the same as the Test Condition functions except that the attribute to be SET must be an attribute within the object that the script is attached to. Example:
SetInitializing(me.PV)

For more information on Attribute Data Quality, see Chapter 5, "Working with Templates."

System Management Console (SMC)


Galaxy Database Manager
Selecting the Galaxy Database Manager on the SMC Menu allows you to view all the galaxies in the Galaxy Repository, as well as the nodes they reside on. You must have Administrative privileges to use the Galaxy Database Manager.

DAServer Manager
The DAServer Manager allows local or remote configuration of the DAServer and its device groups and items, and can monitor and perform diagnostics on DAServer communication with PLCs and other devices.


Important! Like the LogViewer and Platform Manager, the DAServer Manager is a Microsoft Management Console (MMC) snap-in. Many high-level functions and user-interface elements of the DAServer Manager are universal to all DAServers; to understand the DAServer Manager, it is critical to read the documentation for both the MMC and the DAServer Manager. To read the documentation about the MMC and DAServer Manager, click the Help topics on the SMC Help menu. Both the MMC Help and the DAServer Manager Help are displayed.

Log Viewer
An important troubleshooting tool, the Log Viewer records messages from machine execution. The Log Viewer can:

- Monitor messages on any machine in the system
- Send a portion of the log to Notepad or e-mail
- Filter messages on a flag

Security for the log viewer is set at the galaxy level.

Platform Manager
Using Platform Manager, you can access platforms and engines deployed to any PC node in the galaxy. After highlighting a platform, you can use the Action menu to start or stop a platform, or set it OnScan/OffScan. If the platform has security implemented, you must be logged on as a user configured with the proper SMC permissions to start SMC, Start/Stop engines and platforms, or write from the Object Viewer.

Add-on Diagnostic/Maintenance Tools


The following information describes diagnostic tools available for download from Wonderware eSupport.

Galaxy Diagnostic Tools


The Wonderware website http://www.wonderware.com/support/mmi lists galaxy diagnostic tools available for download.

The A2 AppServer Script Counter utility is a simple tool created by Wonderware Technical Support to provide an easy way to get a count of the number of Scripts that are in Instances in a Galaxy. This tool is intended to aid in evaluating Application Server performance during development and deployment.

The A2 Template Counter utility is a simple tool created by Wonderware Technical Support to provide an easy way to get a count of the number of Templates and Instances derived from a given Template.


Note These utilities are provided "as is." They are not supported by Wonderware Technical Support. You can also refer to the ArchestrA.biz web site http://archestra.biz for the latest information about Archestra and third-party utilities.

Galaxy Maintenance Tools


Backup and Restore
The Galaxy Database Manager allows you to:

Back up a Galaxy: This function archives, in a single .cab file, all the files and configuration data associated with the specified galaxy. The backup saves everything created in the galaxy: security, platforms, objects, scripts, and attributes.

Important! Unless galaxy objects are already deployed and in production, be sure that everything in the galaxy is undeployed before backing it up.

Restore a Galaxy: The Restore function uses the backup file to overwrite an existing corrupt Galaxy or reproduce the Galaxy in another Galaxy Repository. This function restores the entire galaxy configuration. Note that when the galaxy is restored, the objects will show the deployment status they had at the point the original galaxy was backed up.

Galaxy Backup With Secured Administrator Password


As a standard procedure, to preserve the integrity of a Galaxy, it is important to generate backup copies of the database. The Galaxy backup files (CAB) need to be archived in safe disk storage and they may be transferred to CD-ROM or tape if desired. Be sure that there aren't any Objects or Templates checked out at the time of the Backup. In particular the IDE editor for Galaxy Security must not be open for edits when Backups are performed.

In addition to having the Galaxy backup, it is important for engineers to export any changes to $Templates and instances of Objects using the IDE export feature. This generates package files that contain one or a group of Objects. These exports result in files with an "aaPKG" extension.

For large facilities, a backup should be made after the major development cycle is complete. The Galaxy should already include assignment of OS Groups into Roles and the creation of Security Groups with Objects assigned to the Security Groups. Before making this backup it is important that the Galaxy's Administrator User be given a valid Password. By default a Galaxy starts with a blank Administrator Password; it should be changed according to the IT password policy.


After the backup is generated, it should be transferred immediately to CD-ROM or tape, and removed from direct network file browsing. Alternatively, it should be protected with directory access privileges. Also, the Administrator Password should be recorded and kept in a safe place. With this backup, even if IT changes the OS Groups assigned IDE and SMC privileges, or the Galaxy Administrator Password is changed per IT policy and then forgotten, the Galaxy can be retrieved and restored by anyone with access to the CD-ROM or tape and the saved Administrator Password. Afterwards, any progressive changes can be restored from the periodic Object export files (aaPKGs).

Best Practice
Back up a galaxy to another machine each time you make changes to it. Refer to the Galaxy Database Manager User's Guide for directions for backing up and restoring a galaxy.

Note When restoring a Galaxy, use the IDE first to create a new Galaxy in the target Galaxy Repository. The names of the new Galaxy you create in the IDE and the backup Galaxy from the restore process must be the same.

Backup vs. Export


The IDE Export function is used to share objects or recreate them in another galaxy. Exporting generates a package file (.aaPKG) from selected data in the Galaxy Database. The resulting package file can be imported into another galaxy through the IDE import function. In contrast to Backup, Export (from the IDE editor) is selective: you highlight the objects or templates to export. Exporting applies to objects, their associated templates, and configuration state. Security is not exported. The advantage of exporting is that it is an easy way to move objects from one platform to another, and to E-mail portions of the galaxy to another user.


OS Diagnostic Tools
The following information describes tools packaged with the Microsoft operating system.

Performance Monitor
The Windows Performance Monitor or System Monitor is located in Administrative Tools in the Windows Control Panel. The Performance tool is composed of two parts: System Monitor, and Performance Logs and Alerts. System Monitor allows you to collect and view real-time data about memory, disk, processor, network, and other activity. Performance Logs and Alerts allows you to configure logs to record performance data and set system alarms. For information about using Performance tools, click the Action menu in Performance, and then click Help.

Event Viewer
The Event Viewer, located in Administrative Tools in the Windows Control Panel, maintains logs about program, security, and system events. You can use the Event Viewer to view and manage the event logs, gather information about hardware and software problems, and monitor Windows operating system security events.



APPENDIX A: System Integrator Checklist

This appendix includes tasks that may be overlooked or omitted from a scope document or a bid. The list items are compiled from integrator comments, Tech Support sources, and documentation. The items include internal and external links to supporting information. The checklist is organized by the following areas:

- General
- Communication
- Security
- Administration (Local and Remote)
- Redundancy Configuration
- Migration
- Compatibility


General
Use Time Synchronization
Configure the computers in a Galaxy to synchronize time at regular intervals. This is particularly important for alarm and data historization. The Historian node is a good candidate for the computer against which all other computers synchronize time.

Note Details about Time Synchronization are available throughout this document by searching (Ctrl + F keys) for "time sync."
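As one possible approach (this specific command is not prescribed by this guide), each node could periodically run a scheduled command that sets its clock from the Historian node; the node name below is illustrative:

net time \\HistorianNode /set /yes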

Disable Hyper-Threading
Disable Hyper-Threading on computers that support this hardware function. Hyper-Threading gives a false impression of lowering processor load while actually increasing the load on the actual (rather than virtual) processor. Some computer vendors (for instance, Dell) may enable hyper-threading by default.

To disable hyper-threading
1. Reboot the computer.
2. Enter BIOS setup.
3. Disable hyper-threading.
4. Save and exit setup.

Note For more information on hyper-threading, see "Best Practice" on page 233.

Communication
Configure IP Addressing
All nodes in your Galaxy must be able to communicate with each other by using both IP address and Node Name in the Network Address option of the WinPlatforms editor. If PCs in the Galaxy are using fixed IP addresses, then create a hosts file with the host name to IP Address mapping. WinPlatform connection problems may result if computers cannot be accessed by both Hostname and IP address. This is true no matter which type of Network Address you choose to use. For example, assume two nodes in your Galaxy (host name: NodeA, IP address: 10.2.69.1; host name: NodeB, IP address: 10.2.69.2). NodeA must be able to ping NodeB with both "NodeB" and "10.2.69.2".


The reverse must also be true for NodeB pinging NodeA. Failure in either case may result in the following: you may not be able to connect to a remote Galaxy Repository node from the IDE, or deployment operations may fail.
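Using the example nodes above, the corresponding hosts file entries (one per line, IP address followed by host name) would be:

10.2.69.1    NodeA
10.2.69.2    NodeB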

Configure Dual NICs


Use two Network Cards on a computer that hosts I/O DAServers. Doing so will increase throughput, and improve load balancing. It is also good practice to place PLC communication on a dedicated (Control) network. Note More information on dual NIC configuration is contained in "NIC Configuration: Redundant Message Channel (RMC)" on page 63.

Security
Confirm User Name and Password
All nodes in the Galaxy need to have the same user name/password for the Archestra Admin User (aaAdminUser.exe).

Configure Anti-Virus Software


Anti-Virus software should not process files in the ArchestrA folders listed below. Note The following paths assume C: as the drive. The paths could also be on D: or E: drives.
C:\Program Files\ArchestrA\Framework\Bin\CheckPointer C:\Program Files\ArchestrA\Framework\Bin\GalaxyData C:\Program Files\ArchestrA\Framework\Bin\GlobalDataCache C:\Program Files\ArchestrA\Framework\Bin\Cache C:\Documents and Settings\All Users\Application Data\ArchestrA (default setting, specified on WinPlatform editors General page, History store forward directory option)

If the Anti-Virus software processes these folders, slow performance may result as deploys are performed.

Note More information on Virus protection is found in "Virus Protection" on page 195.


Administration (Local and Remote)


Install Correct IAS Components

- Install the IDE and Bootstrap on any PC that will browse the Galaxy. This includes WindowMaker and SCADAlarm Event Notification Software nodes.
- Install the Bootstrap and deploy a platform to any PC that will be an AOS (Application Object Server) or will be doing I/O with a Galaxy (includes WindowViewer and SCADAlarm Event Notification Software nodes).

Note See "Integrating FactorySuite Applications" on page 91.

Connection Requirements for Remote IDE (from a Client Machine to a Galaxy)


(Tested with Windows XP Pro SP1 on the client machine and Windows 2000 Server SP3 on the GR node.)

A. The Archestra Admin User (accessed via the Change Network Account utility) needs to have the same Username and Password on both the client machine and the GR Node.
B. The logged-in user on the client machine needs to be part of the local Administrators group.
C. The GR Node needs to have a user account with the same username and password as the logged-in user on the client machine. This GR Node user does not have to be logged in and does not have to be part of the Power Users or Administrators group.

Redundancy Configuration
Redundant AppEngines
For redundancy to function properly, WinPlatforms hosting redundancy-enabled AppEngines must be deployed to computers running the same operating system.

Multiple NICs
In general, multiple NIC configuration is recommended only for Redundancy purposes. Using 1GB network cards in combination with managed switches should be sufficient for most process network throughput needs.


If any nodes in your ArchestrA environment (outside of redundant nodes) have multiple NICs, be aware that proper configuration of those computers is essential to successful communication between ArchestrA nodes. In other words, if a PC has two network cards and will have a platform deployed to it, then the Network Binding Order must have the Archestra network as the first network even if one of the Network cards is disabled. Information about configuring multiple NIC computers is included in the Introduction and ArchestrA Redundancy chapters of the IDE documentation (IDE.pdf). See the "NIC Configuration: Redundant Message Channel (RMC)" on page 63. The setting for the Network Binding Order can be found in Tech Note 368, Network Setup for AppEngine Redundancy. Note See "Implementing Redundancy" on page 61.

Migration
Verify Version and Patches
Verify that all nodes in the Galaxy have the same version and patch level of Industrial Application Server.

Upgrade Correctly
When upgrading from IAS 1.x to IAS 2.x be sure to uninstall IAS 1.x first before installing IAS 2.x.

Compatibility
FactorySuite Component Version Compatibility
Ensure that there is no conflict between FactorySuite A2 System Components and FactorySuite 2000 Components by following Tech Note 313, Installing FactorySuite A2 Components Alongside FactorySuite 2000 Components. IAS and IndustrialSQL Server have specific version compatibility requirements. Check the FactorySuite A2 Compatibility Matrix on the FactorySuite support website http://www.wonderware.com/support/ for detailed compatibility information.



APPENDIX B: .NET Example Source Code

This appendix contains the entire code base for the example objects described in Chapter 6, "Implementing QuickScript .NET." The code is commented and the user can copy/paste from the code content. Note Hard line breaks are included for readability in this context. Other formatting issues may arise when copying/pasting from this document into the Script Editor or Visual Studio.

Contents
- SqlConnCacheMgr Object
- PostTOaORb Object
- RunTime Object
- ObjectCache.dll Visual Studio .NET C# Solution


SqlConnCacheMgr Object
The SqlConnCacheMgr Object provides hash table cache management of .NET System.Data.SqlClient.SqlConnection objects in a Type Safe manner. The following sections are excerpted from the Help.htm file of the $SqlConnCacheMgr template Object.

SqlConnCacheMgr Overview
The SqlConnCacheMgr Object is a "manager" Object. It leverages the SqlConnCache Class of the ObjectCacheExt.DLL. Other Objects "acquire" a SqlConnection Object that belongs to the SqlConnCacheMgr Object for use in performing ExecuteNonQuery transactions with an MSSQL database. ObjectCacheExt.DLL is a .NET Function DLL created by the A5 Application Consultant group explicitly to demonstrate the capabilities of function DLLs leveraging ADO.NET database access.

To use the SqlConnCacheMgr Object
1. Create an instance and configure the UDAs.
2. Create one or more instances from the companion Object template $PostToDBaORb or its derived template $PostTOaORb. See the help file specific to these Object templates for information regarding their configuration.

Note For general information on objects, including relationships, deployment, and alarm distribution, see the Integrated Development Environment (IDE) documentation. For information on configuration options for object information, scripts, user-defined attributes (UDAs), or attribute extensions, click Extensions Help in the Help file header.

SqlConnCacheMgr Run-Time Behavior


The SqlConnCacheMgr Object keeps track of the hash table held in the Application Domain memory associated with the IAS Engine that contains the instance of this Object template. Database transaction Objects derived from either $PostToDBaORb or $PostTOaORb must be used in conjunction with this 'manager' Object in order to process ExecuteNonQuery transactions against a database. See the help files for these template Objects for more information about transaction behavior.

Following Startup, the Initialize QuickScript creates the hash table cache in memory, creates new .NET System.Data.SqlClient.SqlConnection objects, and adds them to the cache using the provided names.

The Statistics QuickScript periodically retrieves transaction counts for each .NET SqlConnection and the duration of the most recent transaction. Statistics can also be triggered with the SqlConn.GetStatistics.Now UDA. Upon completion of the QuickScript the "Now" UDA value is cleared to 'false'; however, the "Periodic" UDA is not automatically cleared.


The ManageConnections QuickScript is triggered by setting the Connection.Acquire[n] indexed UDA to 'true' while the SqlConn.Initialized[n] UDA (with the corresponding index) is simultaneously 'true'. This QuickScript first checks whether the requested named .NET SqlConnection object is in the cache and, if not, creates it and adds it to the cache. Then it checks the State of the .NET SqlConnection, setting it to 'Open' only if it is 'Closed' - i.e., ready for use for a new transaction. When executing the 'Open' command the QuickScript utilizes the database connection settings as configured in UDAs. Upon completion of the QuickScript the corresponding Connection.Acquire[n] UDA is cleared to 'false'.

The ConnectTo QuickScript is triggered by setting the Connection.Connect[n] indexed UDA to 'true'. It looks at the .NET SqlConnection object held in the cache under the designated name to see if it is already there. If not, it creates it and adds it to the cache. Once it is in the cache, the QuickScript 'Opens' it utilizing the database connection settings as configured in UDAs. Upon completion of the QuickScript the Connection.Connect[n] UDA is cleared to 'false'.

The DisconnectFrom QuickScript is triggered by setting the Connection.Disconnect[n] indexed UDA to 'true'. It looks at the .NET SqlConnection object held in the cache under the designated name to see if it is there and whether it is 'Open'. If 'Open', it closes the connection. In all cases, having found the named .NET SqlConnection in the cache, the QuickScript removes it once it is 'Closed'. Upon completion of the QuickScript the Connection.Disconnect[n] UDA is cleared to 'false'.

The ReleaseAcquired QuickScript is triggered by setting the Connection.Release[n] indexed UDA to 'true'. It looks at the .NET SqlConnection object held in the cache under the designated name to see if it is there and whether it is already 'Closed'. Only if it is already 'Closed' does it then remove it from the cache and simultaneously clear the Connection.AcquiredBy[n] UDA to a null String. Upon completion of the QuickScript the corresponding Connection.Release[n] UDA is cleared to 'false'.

SqlConnCacheMgr Configuration
The SqlConnCacheMgr object is configured by filling in specific UDAs with connection information for the desired SQL database connection. See SqlConnCacheMgr Run-Time Object Attributes for details. This template provides for two indexed .NET SqlConnection objects in the cache. To expand the function of this cache manager (to handle more .NET SqlConnection objects), change the array sizes of all of the array indexed UDAs. When expanding the arrays be sure to also augment the associated lines of QuickScript code in support of the number of array elements.


SqlConnCacheMgr Run-Time Object Attributes


Following are the UDA attributes of the SqlConnCacheMgr Object. Note that the UDAs incorporate 'dot' separators to enhance the grouping:

UDA                              DataType     Category          Array Size  Value
Connection.Acquire               Boolean      Object writeable  [2]         false, false
Connection.AcquiredBy            String       User writeable    [2]
Connection.Connect               Boolean      User writeable    [2]         false, false
Connection.Database.Name         String       User writeable    [2]         Recipes, Recipes
Connection.Disconnect            Boolean      User writeable    [2]         false, false
Connection.IntegratedSecurity    Boolean      User writeable    [2]         true, true
Connection.NodeName              String       User writeable    [2]         MyInSQL, MyInSQL
Connection.NodeName.Validated    Boolean      User writeable    [2]         false, false
Connection.Release               Boolean      User writeable    [2]         false, false
Connection.Status                String       Object writeable  [2]
Connection.User.Name             String       User writeable    [2]
Connection.User.Password         String       User writeable    [2]
LogMessages.Enabled              Boolean      User writeable                true
Query.Count                      Integer      Object writeable  [2]         0, 0
Query.Duration                   ElapsedTime  Object writeable  [2]         00:00:00.0000000, 00:00:00.0000000
SqlConn.GetStatistics.Now        Boolean      User writeable                false
SqlConn.GetStatistics.Periodic   Boolean      User writeable                true
SqlConn.Initialized              Boolean      Object writeable  [2]         false, false
SqlConn.Name                     String       User writeable    [2]         ConnectionDBa, ConnectionDBb
SqlConn.Result                   String       Object writeable  [2]

Each of the UDAs (with three exceptions) has two array elements. Each indexed element relates to a single .NET System.Data.SqlClient.SqlConnection object that is held in a hash table cache in the Application Domain associated with the IAS Engine running the instance of the SqlConnCacheMgr Object.

Connection.Acquire[n] indexed UDA is linked from an InputOutput extension of the corresponding UDA in a $PostToDBaORb or $PostTOaORb Object. This UDA represents the trigger boolean for the ManageConnections QuickScript.


Connection.AcquiredBy[n] indexed UDA is linked from an InputOutput extension of the corresponding UDA in a $PostToDBaORb or $PostTOaORb Object. This UDA contains the Tagname of the Object instance which has 'acquired' a reservation for use of the indexed .NET SqlConnection.

Connection.Connect[n] indexed UDA allows an end user or other Object to instruct this Object to make a connection to an MSSQL database, inserting the .NET SqlConnection object into the cache managed by this Object. When set to 'true' the ConnectTo QuickScript is initiated, performing these actions.

Connection.Database.Name[n] indexed UDA provides the String name for the database intended to be connected via the indexed cached .NET SqlConnection object.

Connection.Disconnect[n] indexed UDA allows an end user or other Object to command this Object to disconnect from the MSSQL database, also removing its .NET SqlConnection object from the cache managed by this Object. When set to 'true' the DisconnectFrom QuickScript is initiated, performing these actions.

Connection.IntegratedSecurity[n] indexed UDA designates whether the .NET SqlConnection 'Connection' String will be concatenated with "Integrated Security=SSPI" as the security designator.

Connection.NodeName[n] indexed UDA provides the String name for the node, or SQL Server, intended to be connected via the indexed cached .NET SqlConnection object. Note that the SQL Server must exist on the network as a valid node, or this name must be configured as an alias for the target node using the SQL Client Utility of the node running the IAS Engine that contains the $SqlConnCacheMgr instance Object.

Connection.NodeName.Validated[n] indexed UDA indicates whether the provided Connection.NodeName[n] of the same index number has been validated. Running the Initialize OnScan QuickScript sets this UDA to 'true' upon successful inspection of the provided indexed node name.

Connection.Release[n] indexed UDA allows an end user or other Object to command this Object to release the MSSQL database .NET SqlConnection object, removing it from the cache managed by this Object. When set to 'true' the ReleaseAcquired QuickScript is executed, performing these actions.

Connection.Status[n] indexed UDA indicates the State of the indexed .NET SqlConnection as determined by a function call made from the Initialize, ManageConnections, ConnectTo and DisconnectFrom QuickScripts.

Connection.User.Name[n] and Connection.User.Password[n] indexed UDAs provide the String user name and password respectively for the Connection string of the MSSQL database .NET SqlConnection object.

LogMessages.Enabled UDA allows an end user to control whether or not messages are logged into the aaLogger. The default value is 'true'.

Query.Count[n] and Query.Duration[n] indexed UDAs contain statistical information collected by the static SqlConnCache Class in ObjectCacheExt.dll. The "Count" UDA keeps track of the number of times the ExecuteNonQuery method has been executed.


The "Duration" UDA provides the last measured amount of time taken for an ExecuteNonQuery method from start to completion. See the SqlConn.GetStatistics.Now and SqlConn.GetStatistics.Periodic UDAs below.

SqlConn.GetStatistics.Now UDA is an instantaneous trigger for collection of statistics regarding the performance of ExecuteNonQuery transactions managed by this Object. The UDA must be set to 'true' to initiate the Statistics QuickScript.

SqlConn.GetStatistics.Periodic UDA is a state trigger for collection of statistics regarding the performance of ExecuteNonQuery transactions managed by this Object. The UDA defaults to 'true', allowing the Statistics QuickScript to run periodically. When set to 'false', the SqlConn.GetStatistics.Now UDA is used to trigger instantaneous updates of statistics information.

SqlConn.Initialized[n] indexed UDA indicates whether the identified .NET SqlConnection object has been created and placed in the hash table cache.

SqlConn.Name[n] indexed UDA contains the String name given to the .NET SqlConnection object held in the hash table cache. This name is used to identify the object in the cache for retrieval by other Objects.

SqlConn.Result[n] indexed UDA contains the String result information returned by executing a method of the SqlConnCache Class related to the indexed .NET SqlConnection object.

The QuickScripts for the $SqlConnCacheMgr template Object are extracted from the Industrial Application Server Galaxy IDE editor. For the Object UDAs see the previous tables or the Object Help.htm file.

Template Object:$SqlConnCacheMgr
UDA Extensions $SqlConnCacheMgr: n/a
Scripts contained in $SqlConnCacheMgr: $SqlConnCacheMgr - Initialize
Aliases $SqlConnCacheMgr - Initialize: n/a

Declarations $SqlConnCacheMgr - Initialize:


Dim sqlConnCacheInitialized As String;
Dim sqlConnectionDBa As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionCount As Integer;
Dim resultConnectionDBa As String;
Dim resultConnectionDBb As String;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim firstChar1 As String;
Dim firstChar2 As String;


Startup $SqlConnCacheMgr - Initialize script:

n/a

OnScan $SqlConnCacheMgr - Initialize Script


IF NOT MyEngine.Engine.StartingFromCheckPoint THEN {If there has NOT been a failover initialization of initial UDA information goes here...} Me.SqlConn.Result[1] = "INITIALIZE"; Me.SqlConn.Result[2] = "INITIALIZE"; ENDIF; {Regardless of reason for going OnScan create and add the first SqlConnection to the SqlConnCache} {...but only if it is actually in the SqlConnCache...} IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " beginning initialization of SqlConnCache."); ENDIF; sqlConnCacheInitialized = A5.LakeForest.SqlConnCache.Initialized(); IF (sqlConnCacheInitialized == "") THEN sqlConnCacheInitialized = A5.LakeForest.SqlConnCache.Initialize(); ENDIF; foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name [1]); IF NOT foundConnectionDBa THEN sqlConnectionDBa = new System.Data.SqlClient.SqlConnection(); Me.Connection.Status[1] = sqlConnectionDBa.State.ToString(); resultConnectionDBa = A5.LakeForest.SqlConnCache.Add (Me.SqlConn.Name[1], sqlConnectionDBa, true); Me.SqlConn.Result[1] = resultConnectionDBa; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + resultConnectionDBa + " adding " + Me.SqlConn.Name[1]); ENDIF; ENDIF; IF (Me.SqlConn.Result[1] <> "INITIALIZE") THEN Me.SqlConn.Initialized[1] = true; ELSE Me.SqlConn.Initialized[1] = false; ENDIF; {Validate that Me.Connection.NodeName[1] is a valid NodeName string...} firstChar1 = StringLeft(Me.Connection.NodeName[1], 1); IF (Me.Connection.NodeName[1] <> "")


AND (firstChar1 <> " ") AND (StringASCII(firstChar1) >= 65) AND (StringASCII(firstChar1) < 123) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Me.Connection.NodeName[1] is validated."); ENDIF; Me.Connection.NodeName.Validated[1] = true; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Me.Connection.NodeName[1] is invalid"); ENDIF; Me.Connection.NodeName.Validated[1] = false; ENDIF; {Now create and add the second SqlConnection to the SqlConnCache} {...but only if it is actually in the SqlConnCache...} foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]); IF NOT foundConnectionDBb THEN sqlConnectionDBb = new System.Data.SqlClient.SqlConnection(); Me.Connection.Status[2] = sqlConnectionDBa.State.ToString(); resultConnectionDBb = A5.LakeForest.SqlConnCache.Add( Me.SqlConn.Name[2], sqlConnectionDBb , true); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + resultConnectionDBb + " adding " + Me.SqlConn.Name[2]); ENDIF; Me.SqlConn.Result[2] = resultConnectionDBb; ENDIF; IF (Me.SqlConn.Result[2] <> "INITIALIZE") THEN Me.SqlConn.Initialized[2] = true; ELSE Me.SqlConn.Initialized[2] = false; ENDIF; {...if there are more SqlConnections in the SqlConnName array add them here....} {Validate that Me.Connection.NodeName[1] is a valid NodeName string...} firstChar2 = StringLeft(Me.Connection.NodeName[2], 1); IF (Me.Connection.NodeName[2] <> "") AND (firstChar2 <> " ") AND (StringASCII(firstChar2) >= 65) AND (StringASCII(firstChar2) < 123) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Me.Connection.NodeName[2] is validated.");


ENDIF; Me.Connection.NodeName.Validated[2] = true; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Me.Connection.NodeName[2] is invalid"); ENDIF; Me.Connection.NodeName.Validated[2] = false; ENDIF; {Go ahead and count the SqlConnCache objects and report out to the logger...} sqlConnectionCount = A5.LakeForest.SqlConnCache.Count(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " has added " + StringFromIntg( sqlConnectionCount , 10) + " to SQLConnCache."); ENDIF;

OffScan $SqlConnCacheMgr - Initialize Script


{Look for the first SqlConnection in the cache...} foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name [1]); IF foundConnectionDBa THEN {Remove the first one from the SqlConnCache before going OffScan...} A5.LakeForest.SqlConnCache.Remove( Me.SqlConn.Name[1]); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " removed " + Me.SqlConn.Name[1] + " from SqlConnCache."); ENDIF; ENDIF; Me.Connection.NodeName.Validated[1] = false; Me.SqlConn.Initialized[1] = false; {Look for the second SqlConnection in the cache...} foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name [2]); IF foundConnectionDBb THEN {Remove the second one from the SqlConnCache before going OffScan...} A5.LakeForest.SqlConnCache.Remove( Me.SqlConn.Name[2]); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " removed " + Me.SqlConn.Name[2] + " from SqlConnCache."); ENDIF; ENDIF; sqlConnCacheInitialized = A5.LakeForest.SqlConnCache.Initialized();


IF (sqlConnCacheInitialized <> "") THEN {Clear the cache using the Initialize method...} sqlConnCacheInitialized = A5.LakeForest.SqlConnCache.Initialize(); ENDIF; Me.Connection.NodeName.Validated[2] = false; Me.SqlConn.Initialized[2] = false;

End of $SqlConnCacheMgr - Initialize

$SqlConnCacheMgr - Statistics
Aliases $SqlConnCacheMgr - Statistics: n/a

Declarations $SqlConnCacheMgr - Statistics:
Dim sqlConnectionDBa As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionCount As Integer;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim zeroDuration As ElapsedTime;

Startup $SqlConnCacheMgr - Statistics Script


zeroDuration = 0;

OnScan $SqlConnCacheMgr - Statistics Script


IF NOT MyEngine.Engine.StartingFromCheckPoint THEN {If there has NOT been a failover initialization of initial UDA information goes here...} Me.Query.Count[1] = 0; Me.Query.Duration[1] = zeroDuration; Me.Query.Count[2] = 0; Me.Query.Duration[2] = zeroDuration; ENDIF; {When going OnScan the SqlConnCache connections need to be retrieved...} IF NOT foundConnectionDBa THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " OnScan transition checking SqlConn.Name[1] in SqlConnCache..."); ENDIF; foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]);


IF foundConnectionDBa THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); ENDIF; ENDIF; IF NOT foundConnectionDBb THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " OnScan transition checking SqlConn.Name[2] in SqlConnCache..."); ENDIF; foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]); IF foundConnectionDBb THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); ENDIF; ENDIF; {If there are more SqlConnections in the SqlConnName array add Get(...) for them here....}

Execute $SqlConnCacheMgr - Statistics Expression: Me.SqlConn.GetStatistics.Periodic OR Me.SqlConn.GetStatistics.Now
Execute $SqlConnCacheMgr - Statistics Trigger type: WhileTrue
Execute $SqlConnCacheMgr - Statistics Trigger period: 00:00:20.0000000

Execute $SqlConnCacheMgr - Statistics Script


{Find out if the SqlConnName's are in the SqlConnCache and set Boolean flags accordingly...} IF NOT foundConnectionDBa THEN foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]); IF foundConnectionDBa THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); ENDIF; ENDIF; IF NOT foundConnectionDBb THEN foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]); IF foundConnectionDBb THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); ENDIF; ENDIF;


{If there are more SqlConnections in the SqlConnName array add checks for them here....} {...don't forget to "Dim" the local variables...}

{Make the function calls for each connection to get the statistics...} IF foundConnectionDBa THEN Me.Query.Count[1] = A5.LakeForest.SqlConnCache.GetExecuteNonQueryCount (Me.SqlConn.Name[1]); Me.Query.Duration[1] = A5.LakeForest.SqlConnCache.GetExecuteNonQuery Duration(Me.SqlConn.Name[1]); ELSE {Reset the statistics because there is no connection} Me.Query.Count[1] = 0; Me.Query.Duration[1] = zeroDuration; ENDIF; IF foundConnectionDBb THEN Me.Query.Count[2] = A5.LakeForest.SqlConnCache.GetExecuteNonQueryCount (Me.SqlConn.Name[2]); Me.Query.Duration[2] = A5.LakeForest.SqlConnCache.GetExecuteNonQuery Duration(Me.SqlConn.Name[2]); ELSE {Reset the statistics because there is no connection} Me.Query.Count[2] = 0; Me.Query.Duration[2] = zeroDuration; ENDIF; {Clear the trigger...} Me.SqlConn.GetStatistics.Now = false;

OffScan $SqlConnCacheMgr - Statistics script: n/a
Shutdown $SqlConnCacheMgr - Statistics script: n/a

End of $SqlConnCacheMgr - Statistics


$SqlConnCacheMgr - ManageConnections
Aliases $SqlConnCacheMgr - ManageConnections: n/a

Declarations $SqlConnCacheMgr - ManageConnections:
Dim sqlConnectionDBa As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As System.Data.SqlClient.SqlConnection;
Dim sqlConnectionCount As Integer;
Dim resultConnectionDBa As String;
Dim resultConnectionDBb As String;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim connectStringa As String;
Dim connectStringb As String;
Dim afterInitialized1 As Boolean;
Dim afterInitialized2 As Boolean;

Startup $SqlConnCacheMgr - ManageConnections script: n/a

OnScan $SqlConnCacheMgr - ManageConnections Script


afterInitialized1 = false; afterInitialized2 = false;

Execute $SqlConnCacheMgr - ManageConnections Expression: (Me.Connection.Acquire[1] AND Me.SqlConn.Initialized[1]) OR (Me.Connection.Acquire[2] AND Me.SqlConn.Initialized[2])
Execute $SqlConnCacheMgr - ManageConnections Trigger type: WhileTrue
Execute $SqlConnCacheMgr - ManageConnections Trigger period: 00:00:00.0000000


Execute $SqlConnCacheMgr - ManageConnections Script


{Following Initialized the SqlConnCache connections need to be retrieved...} IF (Me.SqlConn.Initialized[1]) AND NOT afterInitialized1 THEN IF NOT foundConnectionDBa THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " OnScan transition checking SqlConn.Name[1] in SqlConnCache..."); ENDIF; foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]); IF foundConnectionDBa THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); ENDIF; ENDIF; afterInitialized1 = true; ENDIF; IF (Me.SqlConn.Initialized[2]) AND NOT afterInitialized2 THEN IF NOT foundConnectionDBb THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " OnScan transition checking SqlConn.Name[2] in SqlConnCache..."); ENDIF; foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]); IF foundConnectionDBb THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); ENDIF; ENDIF; afterInitialized2 = true; ENDIF; {If there are more SqlConnections in the SqlConnName array add Get(...) for them here....} {... and don't forget to include triggers in the Execute Expression trigger.} IF ((NOT foundConnectionDBa) AND Me.Connection.Acquire[1]) THEN {Is sqlConnectionDBa in SqlConnCache?} foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]); IF NOT foundConnectionDBa THEN {No it isn't. Go ahead - create it and add it.} sqlConnectionDBa = new System.Data.SqlClient.SqlConnection();


Me.Connection.Status[1] = sqlConnectionDBa.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempting to Add " + Me.SqlConn.Name[1] + " to SqlConnCache..."); ENDIF; resultConnectionDBa = A5.LakeForest.SqlConnCache.Add (Me.SqlConn.Name[1],sqlConnectionDBa , true); ENDIF; Me.SqlConn.Result[1] = resultConnectionDBa; ELSE {Since sqlConnectionDBa exists. Go ahead and grab it from SqlConnCache...} sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get(Me.SqlConn.Name[1]); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ConnectionDBa exists; connecting as sqlConnectionDBa."); ENDIF; Me.Connection.Status[1] = sqlConnectionDBa.State.ToString(); {Check the status of sqlConnectionDBa. If "Closed" fill in the connection string and open it...} IF Me.Connection.Status[1] == "Closed" THEN IF Me.Connection.NodeName.Validated[1] THEN connectStringa = "server=" + Me.Connection.NodeName[1] + ";"; IF Me.Connection.IntegratedSecurity[1] THEN connectStringa = connectStringa + "Integrated Security=SSPI;"; ELSE connectStringa = connectStringa + "username=" + Me.Connection.User.Name[1] + ";"; connectStringa = connectStringa + "password=" + Me.Connection.User.Password[1] + ";"; ENDIF; connectStringa = connectStringa + "database=" + Me.Connection.Database.Name[1]; sqlConnectionDBa.ConnectionString = connectStringa; sqlConnectionDBa.Open(); Me.Connection.Status[1] = sqlConnectionDBa.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + "sqlConnectionDBa .ConnectionString is " + connectStringa); LogMessage(Me.Tagname + " sqlConnectionDBa state is " + Me.Connection.Status[1]); ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " there is no NodeName designating an MSSQL Server node."); ENDIF; ENDIF;


ELSE {The sqlConnectionDBa isn't "Closed". Just log the information...} IF (Me.Connection.Status[1] == "Open") OR (Me.Connection.Status[1] == "Connecting") OR (Me.Connection.Status[1] == "Executing") OR (Me.Connection.Status[1] == "Fetching") THEN LogMessage(Me.Tagname + " sqlConnectionDBa is already " + Me.Connection.Status[1]); ENDIF; ENDIF; {This script pass is now complete so clear the Boolean flag...} Me.Connection.Acquire[1] = false; ENDIF;

{For the second SqlConnection....}
IF ((NOT foundConnectionDBb) AND Me.Connection.Acquire[2]) THEN
    {Is sqlConnectionDBb in SqlConnCache?}
    foundConnectionDBb = A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name[2]);
    IF NOT foundConnectionDBb THEN
        {No it isn't. Go ahead - create it and add it.}
        sqlConnectionDBb = new System.Data.SqlClient.SqlConnection();
        Me.Connection.Status[2] = sqlConnectionDBb.State.ToString();
        IF Me.LogMessages.Enabled THEN
            LogMessage(Me.Tagname + " attempting to Add " + Me.SqlConn.Name[2] + " to SqlConnCache...");
        ENDIF;
        resultConnectionDBb = A5.LakeForest.SqlConnCache.Add(Me.SqlConn.Name[2], sqlConnectionDBb, true);
    ENDIF;
    Me.SqlConn.Result[2] = resultConnectionDBb;
ELSE
    {Since sqlConnectionDBb exists, go ahead and grab it from SqlConnCache...}
    sqlConnectionDBb = A5.LakeForest.SqlConnCache.Get(Me.SqlConn.Name[2]);
    IF Me.LogMessages.Enabled THEN
        LogMessage(Me.Tagname + " ConnectionDBb exists; connecting as sqlConnectionDBb.");
    ENDIF;
    Me.Connection.Status[2] = sqlConnectionDBb.State.ToString();
    {Check the status of sqlConnectionDBb. If "Closed" fill in the connection string and open it...}
    IF Me.Connection.Status[2] == "Closed" THEN
        IF Me.Connection.NodeName.Validated[2] THEN
            connectStringb = "server=" + Me.Connection.NodeName[2] + ";";


IF Me.Connection.IntegratedSecurity[2] THEN connectStringb = connectStringb + "Integrated Security=SSPI;"; ELSE connectStringb = connectStringb + "username=" + Me.Connection.User.Name[2] + ";"; connectStringb = connectStringb + "password=" + Me.Connection.User.Password[2] + ";"; ENDIF; connectStringb = connectStringb + "database=" + Me.Connection.Database.Name[2]; sqlConnectionDBb.ConnectionString = connectStringb; sqlConnectionDBb.Open(); Me.Connection.Status[2] = sqlConnectionDBb.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + "sqlConnectionDBb.ConnectionString is " + connectStringb); LogMessage(Me.Tagname + " sqlConnectionDBb state is " + Me.Connection.Status[2]); ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " there is no NodeName designating an MSSQL Server node."); ENDIF; ENDIF; ELSE {The sqlConnectionDBb isn't "Closed". Just log the information...} IF (Me.Connection.Status[2] == "Open") OR (Me.Connection.Status[2] == "Connecting") OR (Me.Connection.Status[2] == "Executing") OR (Me.Connection.Status[2] == "Fetching") THEN LogMessage(Me.Tagname + " sqlConnectionDBb is already " + Me.Connection.Status[2]); ENDIF; ENDIF; {This script pass is now complete so clear the Boolean flag...} Me.Connection.Acquire[2] = false; ENDIF;

OffScan $SqlConnCacheMgr - ManageConnections script: n/a Shutdown $SqlConnCacheMgr - ManageConnections script: n/a

End of $SqlConnCacheMgr - ManageConnections


$SqlConnCacheMgr - ConnectTo
Aliases $SqlConnCacheMgr - ConnectTo: n/a

Declarations $SqlConnCacheMgr - ConnectTo:


Dim connection1 As System.Data.SqlClient.SqlConnection; Dim connection2 As System.Data.SqlClient.SqlConnection; Dim connectStr1 As String; Dim connectStr2 As String; Dim nodeName1 As String; Dim nodeName2 As String; Dim status1 As String; Dim status2 As String; Dim resultStr As String;

Startup $SqlConnCacheMgr ConnectTo script: n/a
OnScan $SqlConnCacheMgr ConnectTo script: n/a
Execute $SqlConnCacheMgr ConnectTo - Expression: Me.Connection.Connect[1] OR Me.Connection.Connect[2]
Execute $SqlConnCacheMgr ConnectTo Trigger type: WhileTrue
Execute $SqlConnCacheMgr ConnectTo Trigger period: 00:00:00.0000000

Execute $SqlConnCacheMgr - ConnectTo Script


IF Me.Connection.Connect[1] THEN nodeName1 = Me.Connection.NodeName[1]; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Attempting to open connection on " + nodeName1); ENDIF; IF NOT A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]) THEN Me.Connection.Status[1] = "Closed"; connection1 = new System.Data.SqlClient.SqlConnection(); ELSE connection1 = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); status1 = connection1.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ConnectionDBa exists; connecting as connection1."); LogMessage(Me.Tagname + " connection1.State.ToString() is " + status1); ENDIF;


Me.Connection.Status[1] = status1; ENDIF; IF Me.Connection.Status[1] == "Closed" THEN IF Me.Connection.NodeName.Validated[1] THEN connectStr1 = "server=" + nodeName1 + ";"; IF Me.Connection.IntegratedSecurity[1] THEN connectStr1 = connectStr1 + "Integrated Security=SSPI;"; ELSE connectStr1 = connectStr1 + "username=" + Me.Connection.User.Name[1] + ";"; connectStr1 = connectStr1 + "password=" + Me.Connection.User.Password[1] + ";"; ENDIF; connectStr1 = connectStr1 + "database=" + Me.Connection.Database.Name[1]; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection1.ConnectionString is " + connectStr1); ENDIF; connection1.ConnectionString = connectStr1; connection1.Open(); Me.Connection.Status[1] = connection1.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection1 state is " + Me.Connection.Status[1]); ENDIF; IF NOT A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempting to Add " + Me.SqlConn.Name[1] + " to SqlConnCache..."); ENDIF; resultStr = A5.LakeForest.SqlConnCache.Add (Me.SqlConn.Name[1],connection1, true); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " added ConnectionDBa to sqlConnCache."); LogMessage(Me.Tagname + " w/result: " + resultStr); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " there is no NodeName designating an MSSQL Server node."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempting to add " + Me.SqlConn.Name[1] + ", but it's already in SqlConnCache!"); ENDIF;


IF (Me.Connection.Status[1] == "Open") OR (Me.Connection.Status[1] == "Connecting") OR (Me.Connection.Status[1] == "Executing") OR (Me.Connection.Status[1] == "Fetching") THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection1 is already" + Me.Connection.Status[1]); ENDIF; ENDIF; ENDIF; Me.Connection.Connect[1] = False; ENDIF; {Do the same for the seccond SqlConnection if it is selected...} IF Me.Connection.Connect[2] THEN nodeName2 = Me.Connection.NodeName[2]; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Attempting to open connection on " + nodeName2); ENDIF; IF NOT A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]) THEN Me.Connection.Status[2] = "Closed"; connection2 = new System.Data.SqlClient.SqlConnection(); ELSE connection2 = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); status2 = connection2.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ConnectionDBb exists; connecting as connection2."); LogMessage(Me.Tagname + " connection2.State.ToString() is " + status2); ENDIF; Me.Connection.Status[2] = status2; ENDIF; IF Me.Connection.Status[2] == "Closed" THEN IF Me.Connection.NodeName.Validated[2] THEN connectStr2 = "server=" + nodeName2 + ";"; IF Me.Connection.IntegratedSecurity[2] THEN connectStr2 = connectStr2 + "Integrated Security=SSPI;"; ELSE connectStr2 = connectStr2 + "username=" + Me.Connection.User.Name[2] + ";"; connectStr2 = connectStr2 + "password=" + Me.Connection.User.Password[2] + ";"; ENDIF; connectStr2 = connectStr2 + "database=" + Me.Connection.Database.Name[2]; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection2.ConnectionString is " + connectStr2);


ENDIF; connection2.ConnectionString = connectStr2; connection2.Open(); Me.Connection.Status[2] = connection2.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection2 state is " + Me.Connection.Status[2]); ENDIF; IF NOT A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempting to Add " + Me.SqlConn.Name[2] + " to SqlConnCache..."); ENDIF; resultStr = A5.LakeForest.SqlConnCache.Add (Me.SqlConn.Name[2],connection2, true); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " added ConnectionDBb to sqlConnCache."); LogMessage(Me.Tagname + " w/result: " + resultStr); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " there is no NodeName designating an MSSQL Server node."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempting to add " + Me.SqlConn.Name[2] + ", but it's already in SqlConnCache!"); ENDIF; IF (Me.Connection.Status[2] == "Open") OR (Me.Connection.Status[2] == "Connecting") OR (Me.Connection.Status[2] == "Executing") OR (Me.Connection.Status[2] == "Fetching") THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection2 is already" + Me.Connection.Status[2]); ENDIF; ENDIF; ENDIF; Me.Connection.Connect[2] = False; ENDIF;

OffScan $SqlConnCacheMgr - ConnectTo script: n/a Shutdown $SqlConnCacheMgr - ConnectTo script: n/a


End of $SqlConnCacheMgr - ConnectTo

$SqlConnCacheMgr - DisconnectFrom
Aliases $SqlConnCacheMgr - DisconnectFrom: n/a

Declarations $SqlConnCacheMgr - DisconnectFrom:


Dim connection1 As System.Data.SqlClient.SqlConnection; Dim connection2 As System.Data.SqlClient.SqlConnection;

Startup $SqlConnCacheMgr DisconnectFrom script: n/a
OnScan $SqlConnCacheMgr DisconnectFrom script: n/a
Execute $SqlConnCacheMgr DisconnectFrom - Expression: Me.Connection.Disconnect[1] OR Me.Connection.Disconnect[2]
Execute $SqlConnCacheMgr DisconnectFrom Trigger type: WhileTrue
Execute $SqlConnCacheMgr DisconnectFrom Trigger period: 00:00:00.0000000

Execute $SqlConnCacheMgr - DisconnectFrom script:


IF Me.Connection.Disconnect[1] THEN IF A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]) THEN connection1 = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); Me.Connection.Status[1] = connection1.State(); IF Me.Connection.Status[1] <> "Closed" THEN IF Me.Connection.Status[1] == "Open" THEN connection1.Close(); Me.Connection.Status[1] = connection1.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection1 has been " + Me.Connection.Status[1] + "."); ENDIF; IF Me.Connection.Status[1] == "Closed" THEN A5.LakeForest.SqlConnCache.Remove (Me.SqlConn.Name[1]); IF NOT A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[1] + " has been removed from SqlConnCache."); ENDIF; Me.Connection.AcquiredBy[1] = ""; ELSE IF Me.LogMessages.Enabled THEN


LogMessage(Me.Tagname + " " + Me.SqlConn.Name[1] + " not removed; still in the SqlConnCache."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " failed to disconnect connection1 - " + Me.SqlConn.Name[1] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[1] + " is still" + Me.Connection.Status[1] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[1] + " is " + Me.Connection.Status[1] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempt to disconnect " + Me.SqlConn.Name[1] + "; it is not in SqlConnCache!"); ENDIF; ENDIF; Me.Connection.Disconnect[1] = False; ENDIF; {Do the same for the second SqlConnection...}

IF Me.Connection.Disconnect[2] THEN IF A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[2]) THEN connection2 = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); Me.Connection.Status[2] = connection2.State(); IF Me.Connection.Status[2] <> "Closed" THEN IF Me.Connection.Status[2] == "Open" THEN connection2.Close(); Me.Connection.Status[2] = connection2.State.ToString(); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " connection2 has been " + Me.Connection.Status[2] + "."); ENDIF; IF Me.Connection.Status[2] == "Closed" THEN A5.LakeForest.SqlConnCache.Remove (Me.SqlConn.Name[2]); IF NOT


A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name [2]) THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[2] + " been removed from SqlConnCache."); ENDIF; Me.Connection.AcquiredBy[2] = ""; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[2] + " not removed; still in the SqlConnCache."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " failed to disconnect connection2 - " + Me.SqlConn.Name[2] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[2] + " is still " + Me.Connection.Status[2] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name[2] + " is " + Me.Connection.Status[2] + "."); ENDIF; ENDIF; ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " attempt to disconnect " + Me.SqlConn.Name[2] + "; it is not in SqlConnCache!"); ENDIF; ENDIF; Me.Connection.Disconnect[2] = False; ENDIF;

OffScan $SqlConnCacheMgr - DisconnectFrom script: n/a Shutdown $SqlConnCacheMgr - DisconnectFrom script: n/a

End of $SqlConnCacheMgr - DisconnectFrom


$SqlConnCacheMgr - ReleaseAcquired
Aliases $SqlConnCacheMgr - ReleaseAcquired: n/a Declarations $SqlConnCacheMgr - ReleaseAcquired:
Dim sqlConnectionDBa As System.Data.SqlClient.SqlConnection; Dim sqlConnectionDBb As System.Data.SqlClient.SqlConnection; Dim foundConnectionDBa As Boolean; Dim foundConnectionDBb As Boolean; Dim sqlConnectionStatusa As String; Dim sqlConnectionStatusb As String;

Startup $SqlConnCacheMgr ReleaseAcquired script: n/a
OnScan $SqlConnCacheMgr ReleaseAcquired script: n/a
Execute $SqlConnCacheMgr ReleaseAcquired - Expression: Me.Connection.Release[1] OR Me.Connection.Release[2]
Execute $SqlConnCacheMgr ReleaseAcquired Trigger type: WhileTrue
Execute $SqlConnCacheMgr ReleaseAcquired Trigger period: 00:00:00.0000000

Execute $SqlConnCacheMgr - ReleaseAcquired script:


{Check to see if the first connection has been Closed so it can be released...} IF Me.Connection.Release[1] THEN IF NOT foundConnectionDBa THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Release command; checking SqlConn.Name[1] in SqlConnCache..."); ENDIF; foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name[1]); IF foundConnectionDBa THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[1]); ENDIF; ENDIF; sqlConnectionStatusa = sqlConnectionDBa.State.ToString(); {Check the status of sqlConnectionDBa. If "Closed" go ahead and release it...} IF sqlConnectionStatusa == "Closed" THEN Me.Connection.AcquiredBy[1] = ""; IF Me.LogMessages.Enabled THEN


LogMessage(Me.Tagname + " Released SqlConn.Name[1]."); ENDIF; ENDIF; ENDIF; {Check to see if the second connection has been Closed so it can be released...} IF Me.Connection.Release[2] THEN IF NOT foundConnectionDBb THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Release command; checking SqlConn.Name[2] in SqlConnCache..."); ENDIF; foundConnectionDBa = A5.LakeForest.SqlConnCache.ContainsKey( Me.SqlConn.Name[2]); IF foundConnectionDBb THEN sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get (Me.SqlConn.Name[2]); ENDIF; ENDIF; sqlConnectionStatusb = sqlConnectionDBb.State.ToString(); {Check the status of sqlConnectionDBb. If "Closed" go ahead and release it...} IF sqlConnectionStatusb == "Closed" THEN Me.Connection.AcquiredBy[2] = ""; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Released SqlConn.Name[2]."); ENDIF; ENDIF; ENDIF;

OffScan $SqlConnCacheMgr - ReleaseAcquired script: n/a Shutdown $SqlConnCacheMgr - ReleaseAcquired script: n/a

End of $SqlConnCacheMgr - ReleaseAcquired


PostTOaORb Object
The PostTOaORb Object is derived from the $PostToDBaORb template. It acquires a .NET SqlConnection object from a cache, prepares an INSERT Query string using simulated randomized data and implements the ExecuteNonQuery method against a database table. All of the inherited parent UDAs and QuickScripts are documented.

PostTOaORb Overview
The PostTOaORb Object is a "worker" Object. It simulates live data using .NET System.Random function calls and posts the data into a database table. It leverages the .NET SqlConnCache Class of the ObjectCacheExt Function DLL. It "acquires" a SqlConnection object that belongs to a SqlConnCacheMgr Object instance. It implements an ExecuteNonQuery transaction against an MSSQL database table. ObjectCacheExt.DLL is a .NET Function DLL created by the A5 Application Consultant group explicitly for demonstrating the capabilities of function DLLs leveraging ADO.NET database access. To use the PostTOaORb Object

Create an instance and configure the UDAs. An instance of the $SqlConnCacheMgr template Object must exist before creating the instance.

Note See the specific help file for that Object template for information regarding its configuration. For general information on objects, including relationships, deployment, and alarm distribution, see the Integrated Development Environment (IDE) documentation. For information on configuration options for object information, scripts, userdefined attributes (UDAs), or attribute extensions, click Extensions Help in the Help file header.

PostTOaORb Run-Time Behavior


The PostTOaORb Object is responsible for acquiring a .NET SqlConnection object from the hash table cache owned by the SqlConnCacheMgr Object instance. Using scripts inherited from the $PostToDBaORb template, the Object determines which SqlConnection name to use by extracting the connection letter - "a" or "b" - from the last character of the Object instance's own Tagname and concatenating that letter to the end of the SqlConn.Name.Prefix. This work is performed in the TryConnectNow QuickScript. The TryConnectNow script is triggered by the Connection.TryConnect Boolean UDA. When triggered, it attempts to "acquire" the .NET SqlConnection object owned by the SqlConnCacheMgr Object by:

Verifying that it exists in the cache.


If the connection is found in the cache, this Object posts its own Tagname into the corresponding UDA of the SqlConnCacheMgr Object instance. If the connection is not available, repeated attempts are made up to the maximum number configured in the Connection.Attempts Integer UDA.
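The following is a condensed sketch of that acquire handshake, assuming the UDA names used by the templates in this appendix. It is illustrative only and is not a replacement for the full TryConnectNow QuickScript listed later in this section; the "a" index is hard-coded here for brevity.

{Sketch only: acquire the "a" connection if no other object owns it.}
Me.SqlConn.Name = Me.SqlConn.Name.Prefix + "a";
IF A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name) THEN
    IF Me.Connection.Acquired.ByA == "" THEN
        {Claim the connection by writing this object's Tagname through the extended InputOutput UDA.}
        Me.Connection.Acquired.ByA = Me.Tagname;
        Me.Connection.State = "ACQUIRED: " + Me.SqlConn.Name;
    ELSE
        IF Me.Connection.Acquired.ByA <> Me.Tagname THEN
            Me.Connection.State = "NOT AVAILABLE: " + Me.SqlConn.Name;
        ENDIF;
    ENDIF;
ELSE
    Me.Connection.Result = "Failed to acquire " + Me.SqlConn.Name;
ENDIF;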

The inherited TryConnectOnFalse QuickScript performs cleanup following completion of the TryConnectNow script.

Having a .NET SqlConnection object allows data to be posted to a database table. This template Object does not directly acquire or manipulate the data that is posted. The indexed arrays (DB.Column.Name[n] and DB.Column.ValueString[n]) must be filled in before posting data to the database table. The data transfer is initiated by a user or another Object setting the DB.PostData Boolean UDA to 'true'. The PostDataNow QuickScript then executes. It verifies that this Object instance has "acquired" the .NET SqlConnection and that the connection is "Open". Once the connection is open, it builds an INSERT Query using the DB.Column.Name[n] and DB.Column.ValueString[n] array elements. Based upon the DB.Column.DataType array elements, apostrophes are concatenated into the INSERT Query where needed. FOR ... NEXT loops controlled by the DB.Column.LastIndex Integer value ensure that only the desired number of database columns is included. By changing this value, a minimum of one and a maximum of ten columns can be handled.

Note When changing the "LastIndex" Integer value, be sure to fill in at least that number of configured array elements in the "Name" and "DataType" UDAs.

The PostDataNow QuickScript then executes the INSERT Query by applying the ExecuteNonQuery method of the .NET SqlConnection via the corresponding method of the .NET SqlConnCache Class from ObjectCacheExt.dll. The (inherited) DisconnectNow QuickScript tells the SqlConnCacheMgr Object instance to disconnect from the database and then removes the Object's Tagname from the SqlConnCacheMgr's corresponding "AcquiredBy" UDA.

The RandomizeNow QuickScript uses .NET System.Random objects to generate random numbers used to build data values for insertion into the target database table. The PrepValuesNow QuickScript takes the randomized values and formats them according to the data types required in the INSERT Query, placing the values into the appropriate UDA array elements.

The ChainReactionEvent QuickScript is triggered by ChainReaction.Latch, a latching Boolean set on the OnTrue transition of either ChainReaction or ChainReaction.Trigger.


This QuickScript repeats and increments a scan step counter. On successive scan steps it drives the cycle of connection acquisition, database connection and open, and the posting of data via the INSERT Query. Depending upon the other ChainReaction UDAs, it also dispatches triggers to companion PostTOaORb instance Objects, causing a chain reaction of database events. The ChainReactionTrigger QuickScript simply captures the OnTrue transition of either ChainReaction or ChainReaction.Trigger and sets ChainReaction.Latch to 'true' to begin execution of the ChainReactionEvent QuickScript. The ChainReactionCleanUp QuickScript clears the ChainReaction.ChainPrev and ChainReaction.ChainNext UDAs after they have been used to propagate a chain reaction of events. It also clears the ChainReaction.BreakPrev and ChainReaction.BreakNext UDAs.
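As an illustration of how two instances might be chained, consider hypothetical instances named PostTOa and PostTOb (example names only). PostTOa's ChainReaction.ChainNext UDA would be extended as an OutputDest referencing PostTOb.ChainReaction, so that a short script such as the following, run by an operator or another object, starts the chain:

{Sketch only: kick off a chain reaction. PostTOa and PostTOb are hypothetical instance Tagnames.}
PostTOa.ChainReaction.Trigger = true;
{PostTOa then acquires its connection, randomizes data, and posts it. On completion, its}
{ChainReaction.ChainNext OutputDest extension writes 'true' to PostTOb.ChainReaction,}
{causing PostTOb to repeat the same cycle against its own database table.}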

PostTOaORb Configuration
The PostTOaORb object is configured by filling in specific UDAs (including the UDAs inherited from the $PostToDBaORb template) with the table name, column names, column datatypes, prefixes, and enumeration strings for the desired SQL database table. These values are automatically applied to an INSERT query. See PostTOaORb Run-Time Object Attributes for details.

PostTOaORb Run-Time Object Attributes
Following are the UDA attributes of the PostTOaORb Object. Note that the UDAs incorporate 'dot' separators to enhance the grouping. See the next part of this section for the UDAs inherited from the $PostToDBaORb Object.

UDA ChainReaction ChainReaction.BreakNext ChainReaction.BreakPrev ChainReaction.ChainNext ChainReaction.ChainPrev ChainReaction.Latch ChainReaction.Trigger Product.CreationDate Product.Name Product.Name.Prefix Product.Parameter1 Product.Parameter2 Product.Randomize Product.Status

DataType Boolean Boolean Boolean Boolean Boolean Boolean Boolean Time String String Float Float Boolean String

Category User Writeable User Writeable User Writeable User Writeable User Writeable User Writeable User Writeable User Writeable User Writeable

Array Size Value false false false false false false true false 8/31/2005 12:00:00.000 PM

ChainReaction.NoAutoDisconnect Boolean

User Writeable [2] User writeable [2] User writeable User writeable User Writeable User Writeable 0.0 0.0 false


UDA Product.Status.Enum

DataType String

Category

Array Size Value OK, HOLD, QUARANTINE, WASTE, SPECIAL false

User writeable [5]

Product.Values.PrepNow

String

User writeable

ChainReaction - This UDA is set to 'true' by a user but more commonly it is set by a companion PostTOaORb Object. It is one of the two UDAs in the trigger Expression for the ChainReactionTrigger QuickScript.
ChainReaction.BreakNext - When 'false', this UDA allows the ChainReactionEvent QuickScript to generate a ChainReaction.ChainNext 'true' event, which is propagated to a linked companion PostTOaORb Object instance via an OutputDest extension; when set to 'true' by a user or another Object, generation of the ChainReaction.ChainNext event is suppressed.
ChainReaction.BreakPrev - When 'false', this UDA allows the ChainReactionEvent QuickScript to generate a ChainReaction.ChainPrev 'true' event, which is propagated to a linked companion PostTOaORb Object instance via an OutputDest extension; when set to 'true' by a user or another Object, generation of the ChainReaction.ChainPrev event is suppressed.
ChainReaction.ChainNext - This UDA is set to 'true' upon completion of the scan steps in the ChainReactionEvent QuickScript for the case where ChainReaction.BreakNext remains 'false'. Instance Objects extend this UDA as an OutputDest with a reference to a companion PostTOaORb instance Object's ChainReaction UDA.
ChainReaction.ChainPrev - This UDA is set to 'true' upon completion of the scan steps in the ChainReactionEvent QuickScript for the case where ChainReaction.BreakPrev remains 'false'. Instance Objects extend this UDA as an OutputDest with a reference to a companion PostTOaORb instance Object's ChainReaction UDA.
ChainReaction.Latch - This UDA is the trigger Expression for the ChainReactionEvent QuickScript. It is set to 'true' inside the ChainReactionTrigger QuickScript. It remains 'true' during the several scans that run ChainReactionEvent. Upon completion of the scan steps it is set to 'false', clearing the trigger.
ChainReaction.NoAutoDisconnect - This UDA defaults to 'true'. When set to 'false' by a user or another Object, it allows the ChainReactionEvent QuickScript to set the Object's inherited Connection.Disconnect UDA to 'true', which in turn causes the .NET SqlConnection to be disconnected from the database.
ChainReaction.Trigger - This UDA is set to 'true', generally by a user. It is one of the two UDAs in the trigger Expression for the ChainReactionTrigger QuickScript.
Product.CreationDate - This UDA is automatically filled in by the RandomizeNow QuickScript using the Now( ) function. In the PrepValuesNow QuickScript its value is then copied for use in the INSERT Query.
Product.Name - This UDA is automatically filled in by the RandomizeNow QuickScript by concatenating a randomized product index string to the Product.Name.Prefix UDA value. In the PrepValuesNow QuickScript its value is then copied for use in the INSERT Query.


Product.Name.Prefix - This UDA is used in the RandomizeNow QuickScript for generation of the Product.Name string value.
Product.Parameter1 - This UDA is automatically filled in by the RandomizeNow QuickScript as a float percent calculation using the NextDouble( ) method of a .NET random number object. In the PrepValuesNow QuickScript its value is then copied for use in the INSERT Query.
Product.Parameter2 - This UDA is automatically filled in by the RandomizeNow QuickScript as a float percent calculation using the NextDouble( ) method of a .NET random number object. In the PrepValuesNow QuickScript its value is then copied for use in the INSERT Query.
Product.Randomize - This UDA is set to 'true' by a user or another Object. It is the trigger Expression for the RandomizeNow QuickScript. It is cleared to 'false' upon completion of the QuickScript.
Product.Status - This UDA is filled in with a String value randomly picked from the Product.Status.Enum array when the RandomizeNow QuickScript runs. In the PrepValuesNow QuickScript its value is then copied for use in the INSERT Query.
Product.Status.Enum - This UDA is an array that contains the possible Status String values, one of which is picked at random to fill in the Product.Status value when the RandomizeNow QuickScript runs.
Product.Values.PrepNow - This UDA is set to 'true' by a user or another Object. It is the trigger Expression for the PrepValuesNow QuickScript. It is cleared to 'false' upon completion of the QuickScript.
The following UDA attributes are inherited, derived from the $PostToDBaORb template:

UDA Connection.Acquired.ByA Connection.Acquired.ByB Connection.Attempts Connection.Connect.ToA Connection.Connect.ToAorB Connection.Connect.ToB Connection.Disconn.FromA Connection.Disconn.FromB Connection.Disconnect Connection.IntegratedSec.A

DataType String String Integer Boolean Boolean Boolean Boolean Boolean Boolean Boolean

Category Object writeable Object writeable User Writeable Object writeable User Writeable Object writeable Object writeable Object writeable User Writeable User Writeable

Array Size Value

10 false false false false false false true


UDA Connection.IntegratedSec.B Connection.NodeName.A Connection.NodeName.B Connection.Result Connection.State Connection.TryConnect DB.Column.DataType DB.Column.LastIndex DB.Column.Name

DataType Boolean String String String String String String Integer String

Category User writeable User writeable User writeable Object writeable Object writeable User writeable

Array Size Value true

User writeable [10] User writeable User writeable [10] 5 ProductName, Param1, Param2, CreationDate, Status

DB.Column.ValueString DB.Name.A DB.Name.B DB.PostData DB.PostData.Result DB.PostData.ResultEnum

String String String Boolean String String

User writeable [10] User writeable User writeable User writeable Object writeable Object writeable [4] OBJECT IS NULL, INVALID OPERATION, EXCEPTION, NO SQLCONN Recipe,Recipe true ConnectionDB

DB.TableName DB.TableName.Suffix LogMessages.Enabled SqlConn.Name SqlConn.Name.Prefix

String String Boolean String String

User writeable Object writeable User writeable User writeable Object writeable [2]

Three of the UDAs include ten (10) array elements. Each indexed element of these arrays relates to a column of a database table, giving the Column DataType, Column Name, and (in Run-Time) a Column ValueString. For one of the UDAs there is an array of four elements representing a String enumeration holding an indexed list of possible Exception error messages.
Connection.Acquired.ByA - This UDA is extended with an InputOutput source String to the Connection.AcquiredBy[1] UDA of the associated SqlConnCacheMgr Object instance.
Connection.Acquired.ByB - This UDA is extended with an InputOutput source String to the Connection.AcquiredBy[2] UDA of the associated SqlConnCacheMgr Object instance.


Connection.Attempts - This UDA is configured with the maximum number of repeat attempts for the PostTOaORb Object instance to try to get a valid .NET SqlConnection object from the SqlConnCacheMgr Object instance. It is applied within the TryConnectNow QuickScript.
Connection.Connect.ToA - This UDA indicates that the "a" .NET SqlConnection object, index [1], will be acquired by the TryConnectNow QuickScript.
Connection.Connect.ToAorB - This UDA stores the "connectLetter" retrieved from the last character position of the PostTOaORb Object instance's Tagname. It is copied during the TryConnectNow OnScan QuickScript.
Connection.Connect.ToB - This UDA indicates that the "b" .NET SqlConnection object, index [2], will be acquired by the TryConnectNow QuickScript.
Connection.Disconn.FromA - This UDA is referenced in the DisconnectNow QuickScript to determine that the "a", index [1], .NET SqlConnection Object should be disconnected.
Connection.Disconn.FromB - This UDA is referenced in the DisconnectNow QuickScript to determine that the "b", index [2], .NET SqlConnection Object should be disconnected.
Connection.Disconnect - This UDA, when set to 'true' by a user or another Object, triggers the execution of the DisconnectNow QuickScript. It is cleared to 'false' upon completion of the QuickScript.
Connection.IntegratedSec.A - This UDA is set to 'true' to indicate that the database connection of the .NET SqlConnection object should use Integrated Security in the connection String. The UDA is extended with an InputOutput source string to the Connection.IntegratedSecurity[1] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection to an MSSQL database.
Connection.IntegratedSec.B - This UDA is set to 'true' to indicate that the database connection of the .NET SqlConnection object should use Integrated Security in the connection String. The UDA is extended with an InputOutput source string to the Connection.IntegratedSecurity[2] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection to an MSSQL database.
Connection.NodeName.A - This UDA indicates the node name or alias where the desired database resides for the SqlConnection. It is extended with an InputOutput source string to the Connection.NodeName[1] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection to an MSSQL database.
Connection.NodeName.B - This UDA indicates the node name or alias where the desired database resides for the SqlConnection. It is extended with an InputOutput source string to the Connection.NodeName[2] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection to an MSSQL database.


Connection.Result - This UDA captures the result message string following the attempt to make a connection with the .NET SqlConnection object. Its string value is filled in during the execution of the TryConnectNow QuickScript.
Connection.State - This UDA captures the State information from the bound .NET SqlConnection object. Its string value is filled in during the execution of the TryConnectNow QuickScript.
Connection.TryConnect - This UDA, when set to 'true' by a user or another Object, triggers the execution of the TryConnectNow QuickScript. It is cleared to 'false' upon completion of the QuickScript.
DB.Column.DataType[n], DB.Column.Name[n], and DB.Column.ValueString[n] indexed UDAs - These are utilized in building the INSERT Query statement for the ExecuteNonQuery method of the .NET SqlConnCache Class. "DataType" array elements are configured with the SQL column datatype. "Name" array elements are configured with the SQL column names. "ValueString" array elements are filled in during Run-Time with string-converted values as the data for posting within the INSERT Query. Note that array elements are only processed up to the index number configured in the DB.Column.LastIndex UDA. These array elements are concatenated into the INSERT Query using FOR ... NEXT loops within the PostDataNow QuickScript (a condensed sketch of this query assembly appears after this list of UDA descriptions).
DB.Column.LastIndex - This UDA is the configured index number representing the last index that will be processed from the column "Name" and column "ValueString" arrays, thus forming the database INSERT Query. It is applied in the PostDataNow QuickScript.
DB.Name.A - This UDA is configured with the name of the desired MSSQL database. It is extended with the InputOutput source string of the Connection.Database.Name[1] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection.
DB.Name.B - This UDA is configured with the name of the desired MSSQL database. It is extended with the InputOutput source string of the Connection.Database.Name[2] UDA of the SqlConnCacheMgr Object instance, which in turn implements the actual connection.
DB.PostData - This UDA, when set to 'true' by a user or another Object, triggers the execution of the PostDataNow QuickScript. It is cleared to 'false' upon completion of the QuickScript.
DB.PostData.Result - This UDA captures the result message string following the attempt to post data using the ExecuteNonQuery method of the .NET SqlConnection object. Its string value is filled in during the execution of the PostDataNow QuickScript.
DB.PostData.ResultEnum[n] - This indexed UDA is configured with the possible Exception result messages that may be returned from the execution of the ExecuteNonQuery method call of the .NET SqlConnCache Class. The enumeration values are compared to the result message to determine that a known Exception occurred. It is used in the PostDataNow QuickScript.


DB.TableName - This UDA is configured with the table name to be used in the INSERT Query which is executed using the ExecuteNonQuery method of the .NET SqlConnCache Class. The table name is actually the concatenation of the DB.TableName.Suffix plus the "connectLetter" determined from the last letter of the Object instance Tagname. The INSERT Query is applied in the PostDataNow QuickScript. DB.TableName.Suffix - This UDA is used to generate the DB.TableName. It gets the last letter of the Object instance Tagname concatenated to it during the execution of the PostDataNow QuickScript. LogMessages.Enabled - This UDA allows an end user to control whether or not messages are logged into the aaLogger. The default value is 'true'. SqlConn.Name - This UDA is generated using the SqlConn.Name.Prefix and concatenating the "connectLetter" to it during the execution of the TryConnectNow QuickScript. This String value is used to reference the .NET SqlConnection object found in the hash table cache owned by the SqlConnCacheMgr Object instance. SqlConn.Name.Prefix - This UDA is configured with the prefix string that is used to generate the name of a .NET SqlConnection object from the cache owned by the SqlConnCacheMgr Object instance. It gets the "connectLetter" concatenated to it in the TryConnectNow QuickScript. The QuickScripts for the $PostTOaORb template Object are extracted from the Industrial Application Server Galaxy IDE editor. This template is derived from the $PostToDBaORb template. For the UDAs see the previous tables or the Help.htm file.
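The following condensed sketch shows the pattern the PostDataNow QuickScript uses to assemble and execute the INSERT Query from these column UDAs. It is a shortened illustration, not a substitute for the full inherited script listed later; it assumes that sqlCommandText, quoteStr, qResult, and i are declared in the script's Declarations area, and that Me.SqlConn.Name and Me.DB.TableName have already been built from the connection letter as described above.

{Sketch only: build "INSERT INTO <table> (col1,col2,...) VALUES (v1,v2,...)" from the column UDAs.}
sqlCommandText = "INSERT INTO " + Me.DB.TableName + " (";
FOR i = 1 TO Me.DB.Column.LastIndex
    sqlCommandText = sqlCommandText + Me.DB.Column.Name[i];
    IF i < Me.DB.Column.LastIndex THEN sqlCommandText = sqlCommandText + ","; ENDIF;
NEXT;
sqlCommandText = sqlCommandText + ") VALUES (";
FOR i = 1 TO Me.DB.Column.LastIndex
    {char and datetime columns are wrapped in apostrophes; numeric columns are not.}
    IF (Me.DB.Column.DataType[i] == "char") OR (Me.DB.Column.DataType[i] == "datetime") THEN
        quoteStr = "'";
    ELSE
        quoteStr = "";
    ENDIF;
    sqlCommandText = sqlCommandText + quoteStr + Me.DB.Column.ValueString[i] + quoteStr;
    IF i < Me.DB.Column.LastIndex THEN sqlCommandText = sqlCommandText + ","; ENDIF;
NEXT;
sqlCommandText = sqlCommandText + ")";
{Execute the query through the cached connection owned by the SqlConnCacheMgr instance.}
qResult = A5.LakeForest.SqlConnCache.ExecuteNonQuery(Me.SqlConn.Name, sqlCommandText, false);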

Template Object:$PostTOaORb
UDA Extensions $PostTOaORb:
Connection.AcquiredBy.A Source Output destination differs from input source Destination InputOutput extension SqlConnCacheMgr.Connection.AcquiredBy[2] not checked ---

Connection.AcquiredBy.B Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.AcquiredBy[1] not checked ---


Connection.IntegratedSec.A Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.IntegratedSecurity[1] not checked ---

Connection.IntegratedSec.B Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.IntegratedSecurity[2] not checked ---

Connection.NodeName.A Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.NodeName[1] not checked ---

Connection.NodeName.B Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.NodeName[2] not checked ---

DB.Name.A Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.Database.Name[1] not checked ---

DB.Name.B Source Output destination differs from input source Destination

InputOutput extension SqlConnCacheMgr.Connection.Database.Name[2] not checked ---


$PostTOaORb Scripts
$PostTOaORb - PrepValuesNow
Aliases $PostTOaORb - PrepValuesNow: n/a Declarations $PostTOaORb - PrepValuesNow:
Dim cDateTime As System.DateTime; Dim connectLetter As String; Dim indexOfLetter As String;

Startup $PostTOaORb - PrepValuesNow Script: n/a

OnScan $PostTOaORb - PrepValuesNow script:


connectLetter = Me.Connection.Connect.ToAorB; IF connectLetter == "a" THEN indexOfLetter = 1; ELSE IF connectLetter == "b" THEN indexOfLetter = 2; ELSE indexOfLetter = 1; ENDIF; ENDIF; Me.DB.TableName = StringUpper(connectLetter) + Me.DB.TableName.Suffix[indexOfLetter];

Execute $PostTOaORb PrepValuesNow - Expression: Me.Product.Values.PrepNow
Execute $PostTOaORb PrepValuesNow Trigger type: OnTrue
Execute $PostTOaORb PrepValuesNow Trigger period: 00:00:00.0000000


Execute $PostTOaORb - PrepValuesNow Script


Me.DB.Column.ValueString[1] = Me.Product.Name; Me.DB.Column.ValueString[2] =StringFromReal(Me.Product.Parameter1,4,"f"); Me.DB.Column.ValueString[3] = StringFromReal(Me.Product.Parameter2,4,"f"); cDateTime = Me.Product.CreationDate; Me.DB.Column.ValueString[4] = cDateTime.ToString(); Me.DB.Column.ValueString[5] = Me.Product.Status; { .... additional data prep script lines go here for up to ten parameters. } Me.Product.Values.PrepNow = false;

OffScan $ PostTOaORb - PrepValuesNow script: n/a Shutdown $ PostTOaORb - PrepValuesNow script: n/a

End of $ PostTOaORb - PrepValuesNow

$PostTOaORb - RandomizeNow
Aliases $PostTOaORb - RandomizeNow: n/a Declarations $PostTOaORb - RandomizeNow:
Dim randomValueBase As System.Random; Dim randomPerCent As Float; Dim randomIndex As System.Random; Dim productIndex As Integer; Dim enumIndex As Integer;

Startup $PostTOaORb - RandomizeNow script: n/a

OnScan $PostTOaORb - RandomizeNow script:


randomValueBase = new System.Random(); randomIndex = new System.Random();

Execute $PostTOaORb RandomizeNow - Expression: Me.Product.Randomize
Execute $PostTOaORb RandomizeNow Trigger type: OnTrue
Execute $PostTOaORb RandomizeNow Trigger period: 00:00:00.0000000


Execute $PostTOaORb - RandomizeNow Script


productIndex = randomIndex.Next(); Me.Product.Name = Me.Product.Name.Prefix + StringFromIntg(productIndex,10); randomPerCent = randomValueBase.NextDouble() * 100.0; Me.Product.Parameter1 = randomPerCent; Me.Product.Parameter2 = 100.0 - randomPerCent; Me.Product.CreationDate = Now(); enumIndex = Round(randomPerCent / 20.0,1); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " randomPerCent is " + StringFromReal(randomPerCent,2,"f")); LogMessage(Me.Tagname + " enumIndex is " +StringFromIntg(enumIndex,10)); ENDIF; IF enumIndex <= 0 THEN enumIndex = 1; ELSE IF enumIndex > 5 THEN enumIndex = 5; ENDIF; ENDIF; Me.Product.Status = Me.Product.Status.Enum[enumIndex]; Me.Product.Randomize = False;

OffScan $ PostTOaORb - RandomizeNow script: n/a Shutdown $ PostTOaORb - RandomizeNow script: n/a

End of $ PostTOaORb - RandomizeNow


$PostTOaORb - ChainReactionEvent
Aliases $PostTOaORb - ChainReactionEvent: n/a Declarations $PostTOaORb - ChainReactionEvent:
Dim scanStep As Integer; Dim waitCountDown As Integer; Dim checkConnState As String;

Startup $PostTOaORb - ChainReactionEvent script: n/a OnScan $PostTOaORb - ChainReactionEvent script: n/a

Execute $PostTOaORb ChainReactionEvent - Expression: Me.ChainReaction.Latch
Execute $PostTOaORb ChainReactionEvent Trigger type: WhileTrue
Execute $PostTOaORb ChainReactionEvent Trigger period: 00:00:00.0000000

Execute $PostTOaORb - ChainReactionEvent script:


IF scanStep == 0 THEN Me.Connection.TryConnect = True; Me.DB.PostData.Result = "Chain reaction started"; checkConnState = StringLeft(Me.Connection.State, 9); Me.Connection.State = checkConnState; IF checkConnState == "ACQUIRED:" THEN scanStep = 1; ELSE waitCountDown = waitCountDown - 1; IF waitCountDown <= 0 THEN scanStep = 0; waitCountDown = 10; Me.ChainReaction.Latch = false; Me.ChainReaction = false; Me.ChainReaction.Trigger = false; ENDIF; ENDIF; ELSE IF scanStep == 1 THEN Me.Product.Randomize = True; scanStep = 2; ELSE IF scanStep == 2 THEN Me.Product.Values.PrepNow = true; scanStep = 3; ELSE IF scanStep == 3 THEN checkConnState = StringLeft(Me.Connection.State, 9); IF checkConnState == "ACQUIRED:" THEN Me.DB.PostData = True; scanStep = 4;


ELSE IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ChainReaction aborted; not ACQUIRED:"); ENDIF; scanStep = 0; waitCountDown = 10; Me.ChainReaction.Latch = false; Me.ChainReaction = false; Me.ChainReaction.Trigger = false; ENDIF; ELSE IF scanStep >= 4 THEN IF NOT Me.ChainReaction.NoAutoDisconnect THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " disconnecting - NoAutoDisconnect is false."); ENDIF; Me.Connection.Disconnect = true; ENDIF; IF NOT Me.ChainReaction.BreakPrev THEN Me.ChainReaction.ChainPrev = true; ENDIF; IF NOT Me.ChainReaction.BreakNext THEN Me.ChainReaction.ChainNext = true; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ChainNext set true, passing to" + Me.ChainReaction.ChainNext.OutputDest + "."); ENDIF; ENDIF; scanStep = 0; waitCountDown = 10; Me.ChainReaction.Latch = false; Me.ChainReaction = false; Me.ChainReaction.Trigger = false; ENDIF; ENDIF; ENDIF; ENDIF; ENDIF;

OffScan $ PostTOaORb - ChainReactionEvent script: n/a Shutdown $ PostTOaORb - ChainReactionEvent script: n/a

End of $ PostTOaORb - ChainReactionEvent


$PostTOaORb - ChainReactionCleanUp
Aliases $PostTOaORb - ChainReactionCleanUp: n/a Declarations $PostTOaORb - ChainReactionCleanUp:
Dim chainPrevLatchCount As Integer; Dim chainNextLatchCount As Integer; Dim breakPrevLatchCount As Integer; Dim breakNextLatchCount As Integer;

Startup $PostTOaORb - ChainReactionCleanUp script: n/a

OnScan $PostTOaORb - ChainReactionCleanUp Script


chainPrevLatchCount = 0; chainNextLatchCount = 0; breakPrevLatchCount = 0; breakNextLatchCount = 0;

Execute $PostTOaORb ChainReactionCleanUp - Expression: Me.ChainReaction.Latch
Execute $PostTOaORb ChainReactionCleanUp Trigger type: WhileFalse
Execute $PostTOaORb ChainReactionCleanUp Trigger period: 00:00:00.0000000

Execute $PostTOaORb ChainReactionCleanUp Script


IF Me.ChainReaction.ChainPrev THEN chainPrevLatchCount = chainPrevLatchCount + 1; IF chainPrevLatchCount >= 3 THEN Me.ChainReaction.ChainPrev = false; chainPrevLatchCount = 0; ENDIF; ENDIF; IF Me.ChainReaction.ChainNext THEN chainNextLatchCount = chainNextLatchCount + 1; IF chainNextLatchCount >= 3 THEN Me.ChainReaction.ChainNext = false; chainNextLatchCount = 0; ENDIF; ENDIF; IF Me.ChainReaction.BreakPrev THEN breakPrevLatchCount = breakPrevLatchCount + 1; IF breakPrevLatchCount >= 3 THEN


Me.ChainReaction.BreakPrev = false; breakPrevLatchCount = 0; ENDIF; ENDIF; IF Me.ChainReaction.BreakNext THEN breakNextLatchCount = breakNextLatchCount + 1; IF breakNextLatchCount >= 3 THEN Me.ChainReaction.BreakNext = false; breakNextLatchCount = 0; ENDIF; ENDIF;

OffScan $ PostTOaORb - ChainReactionCleanUp script: n/a Shutdown $ PostTOaORb - ChainReactionCleanUp script: n/a

End of $ PostTOaORb - ChainReactionCleanUp

$PostTOaORb - ChainReactionTrigger
Aliases $PostTOaORb - ChainReactionTrigger: n/a Declarations $PostTOaORb - ChainReactionTrigger:

Dim chainPrevLatchCount As Integer; Dim chainNextLatchCount As Integer; Dim breakPrevLatchCount As Integer; Dim breakNextLatchCount As Integer;

Startup $PostTOaORb - ChainReactionTrigger script: n/a OnScan $PostTOaORb - ChainReactionTrigger script: n/a

Execute $PostTOaORb ChainReactionTrigger - Expression: Me.ChainReaction OR Me.ChainReaction.Trigger
Execute $PostTOaORb ChainReactionTrigger Trigger type: OnTrue
Execute $PostTOaORb ChainReactionTrigger Trigger period: 00:00:00.0000000


Execute $PostTOaORb - ChainReactionTrigger Script


Me.ChainReaction.Latch = true; IF Me.LogMessages.Enabled THEN IF Me.ChainReaction THEN LogMessage(Me.Tagname + " ChainReaction activated."); ELSE IF Me.ChainReaction.Trigger THEN LogMessage(Me.Tagname + " ChainReaction.Trigger activated."); ELSE LogMessage(Me.Tagname + " neither ChainReaction nor ChainReaction.Trigger was activated."); ENDIF; ENDIF; ENDIF;

OffScan $ PostTOaORb - ChainReactionTrigger script: n/a Shutdown $ PostTOaORb - ChainReactionTrigger script: n/a

End of $ PostTOaORb - ChainReactionTrigger

$PostTOaORb Scripts (Inherited from $PostToDBaORb)


$PostTOaORb Inherited - TryConnectNow
Aliases $PostTOaORb inherited - TryConnectNow: n/a Declarations $PostTOaORb inherited - TryConnectNow:
Dim connectLetter As String; Dim hasConnection As Boolean; Dim connStatus As Boolean; Dim connAttempts As Integer; Dim connAttemptsStr As String; Dim connectAcquiredBy As String;

Startup $PostTOaORb inherited - TryConnectNow script: n/a


OnScan $PostTOaORb Inherited TryConnectNow Script


connectLetter = StringRight(Me.Tagname, 1); Me.Connection.Connect.ToAorB = connectLetter;

Execute $PostTOaORb inherited TryConnectNow - Expression: Me.Connection.TryConnect
Execute $PostTOaORb inherited TryConnectNow Trigger type: WhileTrue
Execute $PostTOaORb inherited TryConnectNow Trigger period: 00:00:05.0000000

Execute $PostTOaORb Inherited TryConnectNow Script


{The connection name has been determined from the last letter of Me.Tagname in the OnScan script.} {Depending upon the letter "a" or "b" grab the connection identity from Me.Connection.Acquired.ByX } {which is read via Input/Output extented UDA linked to theSqlConnCacheMgr object's UDA.} IF connectLetter == "a" THEN connectAcquiredBy = Me.Connection.Acquired.ByA; Me.Connection.Connect.ToA = true; ELSE connectAcquiredBy = Me.Connection.Acquired.ByB; Me.Connection.Connect.ToB = true; ENDIF; {Concatenate the SQL Connection name from this object's prefix and Me.Tagname last letter...} Me.SqlConn.Name = Me.SqlConn.Name.Prefix + connectLetter; {Determine of this SQL Connection name is in the SqlConnCache connection owned by the SqlConnCacheMgr object...} hasConnection = A5.LakeForest.SqlConnCache.ContainsKey (Me.SqlConn.Name); IF hasConnection THEN {When the SQL Connection object is found in the SqlConnCache fill in the "Result" UDA...} Me.Connection.Result = Me.SqlConn.Name + " available."; {...then chect whether this object's Me.Tagname is the one that "acquired" the SqlConnCache connection.} IF connectAcquiredBy == Me.Tagname THEN IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name + " already acquired."); ENDIF;


Me.Connection.State = "ACQUIRED: " + Me.SqlConn.Name; ELSE {If no other object has "acquired" the SqlConnCache connection...} IF connectAcquiredBy == "" THEN {...go ahead and set the "acquired by" UDA of the SqlConnCacheMgr object by pushing this object's Me.Tagname } {via the Input/Output extended UDA...} IF connectLetter == "a" THEN Me.Connection.Acquired.ByA = Me.Tagname; ELSE Me.Connection.Acquired.ByB = Me.Tagname; ENDIF; Me.Connection.State = "ACQUIRED: " + Me.SqlConn.Name; ELSE {...but when it is discovered that some other object already has the SqlConnCache connection...} IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name + " acquired by a different object."); ENDIF; {...fill in the Connection state information as "NOT AVAILABLE".} Me.Connection.State = "NOT AVAILABLE: " + Me.SqlConn.Name ; ENDIF; ENDIF; Me.Connection.TryConnect = false; IF connectLetter == "a" THEN Me.Connection.Connect.ToA = false; ELSE Me.Connection.Connect.ToB = false; ENDIF; connAttempts = 0; ELSE Me.Connection.Result = "Failed to acquire " + Me.SqlConn.Name; Me.Connection.State = "NOT ACQUIRED"; connAttempts = connAttempts + 1; connAttemptsStr = StringFromIntg(connAttempts - 1,10); IF ((Me.Connection.Attempts > 0) AND (connAttempts > Me.Connection.Attempts)) OR ((Me.Connection.Attempts <= 0) AND (connAttempts > 10))THEN Me.Connection.TryConnect = false; IF connectLetter == "a" THEN Me.Connection.Connect.ToA = false; ELSE Me.Connection.Connect.ToB = false; ENDIF;


Me.Connection.Result = "Failed to acquire " + Me.SqlConn.Name + " after " + connAttemptsStr + " attempts."; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " failed to acquire " + Me.SqlConn.Name + " after " + connAttemptsStr + "attempts."); ENDIF; ENDIF; connAttempts = 0; ENDIF;

OffScan $ PostTOaORb inherited - TryConnectNow script: n/a Shutdown $ PostTOaORb inherited - TryConnectNow script: n/a

End of $ PostTOaORb Inherited - TryConnectNow

$PostTOaORb inherited - DisconnectNow


Aliases $PostTOaORb inherited - DisconnectNow: n/a Declarations $PostTOaORb inherited - DisconnectNow:
Dim connectLetter As String; Dim sqlConnStatus As String; Dim disconnectAttempts As Integer; Dim connectAcquiredBy As String;

Startup $PostTOaORb inherited - DisconnectNow script: n/a

OnScan $PostTOaORb Inherited - DisconnectNow Script


connectLetter = StringRight(Me.Tagname, 1);

Execute $PostTOaORb inherited DisconnectNow - Expression: Me.Connection.Disconnect
Execute $PostTOaORb inherited DisconnectNow Trigger type: WhileTrue
Execute $PostTOaORb inherited DisconnectNow Trigger period: 00:00:05.0000000


Execute $PostTOaORb Inherited DisconnectNow Script


{The connection name has been determined from the last letter of Me.Tagname in the OnScan script.} {Depending upon the letter "a" or "b" the connection identity from Me.Connection.Acquired.ByX } {which is read via Input/Output extented UDA linked to the SqlConnCacheMgr object's UDA.} IF connectLetter == "a" THEN connectAcquiredBy = Me.Connection.Acquired.ByA; ELSE connectAcquiredBy = Me.Connection.Acquired.ByB; ENDIF; {Concatenate the SQL Connection name from this object's prefix and Me.Tagname last letter...} Me.SqlConn.Name = Me.SqlConn.Name.Prefix + connectLetter; {If the SqlConnCacheMgr currently has this object's Tagname assigned to the SqlConnection...} IF connectAcquiredBy == Me.Tagname THEN {...find out the SqlConnection status...} sqlConnStatus = A5.LakeForest.SqlConnCache.GetConnState (Me.SqlConn.Name); IF sqlConnStatus == "Open" THEN {If that SqlConnection is "Open" then close it according to the appropriate letter "a" or "b".} {The disconnect command and "acquired by" are linked via the Input/Output extended UDAs.} IF connectLetter == "a" THEN Me.Connection.Disconn.FromA = true; Me.Connection.Acquired.ByA = ""; ELSE Me.Connection.Disconn.FromB = true; Me.Connection.Acquired.ByB = ""; ENDIF; {...also take care of counting the number of disconnect attempts and declare failure after a certain number of attempts...} disconnectAttempts = disconnectAttempts + 1; IF disconnectAttempts > 5 THEN Me.Connection.Disconnect = False; disconnectAttempts = 0; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Failed to disconnect from " + Me.SqlConn.Name); ENDIF; ENDIF; ELSE


{SqlConnection is not "Open" so it is OK to declare disconnected right away...} Me.Connection.Disconnect = False; disconnectAttempts = 0; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " Disconnected from " + Me.SqlConn.Name); ENDIF; ENDIF; ELSE {This object does not have the SqlConnection...} Me.Connection.Disconnect = False; disconnectAttempts = 0; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " " + Me.SqlConn.Name + " not acquired by this object."); ENDIF; ENDIF;

OffScan $ PostTOaORb inherited - DisconnectNow script: n/a Shutdown $ PostTOaORb inherited - DisconnectNow script: n/a

End of $ PostTOaORb Inherited - DisconnectNow


$PostTOaORb inherited - PostDataNow


Aliases $PostTOaORb inherited - PostDataNow: n/a Declarations $PostTOaORb inherited - PostDataNow:
Dim connectLetter As String; Dim sqlConnStatus As String; Dim sqlCommandText As String; Dim tblNm As String; Dim qResult As String; Dim qResultIndex As Integer; Dim qResultException As Boolean; Dim qResultLen As Integer; Dim qResultIterator As Integer; Dim qResultChar As String; Dim rowsChars As String; Dim rows As Integer; Dim columnIterator As Integer; Dim columnCount As Integer; Dim quoteStr As String; Dim hasConnection As Boolean; Dim hasAcquiredConnection As Boolean;

Startup $PostTOaORb inherited - PostDataNow script: n/a OnScan $PostTOaORb Inherited - PostDataNow Script:
connectLetter = StringRight(Me.Tagname, 1);

Execute $PostTOaORb inherited PostDataNow - Expression: Me.DB.PostData
Execute $PostTOaORb inherited PostDataNow Trigger type: OnTrue
Execute $PostTOaORb inherited PostDataNow Runs asynchronously: Checked
Execute $PostTOaORb inherited PostDataNow Timeout limit: 60000 ms
Execute $PostTOaORb inherited PostDataNow Trigger period: 00:00:00.0000000


Execute $PostTOaORb Inherited PostDataNow Script


IF connectLetter == "a" THEN IF (Me.Connection.Acquired.ByA == Me.Tagname) THEN hasAcquiredConnection = true; ELSE hasAcquiredConnection = false; ENDIF; ELSE IF (Me.Connection.Acquired.ByB == Me.Tagname) THEN hasAcquiredConnection = true; ELSE hasAcquiredConnection = false; ENDIF; ENDIF; tblNm = Me.DB.TableName; columnCount = Me.DB.Column.LastIndex; {Concatenate the SQL Connection name from this object's prefix and Me.Tagname last letter...} Me.SqlConn.Name = Me.SqlConn.Name.Prefix + connectLetter; {Check whether the SqlConnCacheMgr has the SqlConnection in the SqlConnCache...} hasConnection = A5.LakeForest.SqlConnCache.ContainsKey(Me.SqlConn.Name); IF (hasConnection AND hasAcquiredConnection) THEN {If it does have the SqlConnection get the status...} sqlConnStatus = A5.LakeForest.SqlConnCache.GetConnState (Me.SqlConn.Name); {Note that the connection manager object is responsible for getting the SQL Connection to the 'Open' state...} IF sqlConnStatus == "Open" THEN {If the SqlConnection status is "Open" go ahead and create an INSERT Query string...} sqlCommandText = "INSERT INTO " + tblNm + " ("; FOR columnIterator = 1 TO columnCount sqlCommandText = sqlCommandText + Me.DB.Column.Name[columnIterator]; IF columnIterator < columnCount THEN sqlCommandText = sqlCommandText + ","; ENDIF; NEXT; sqlCommandText = sqlCommandText + ") VALUES ("; FOR columnIterator = 1 To columnCount IF (Me.DB.Column.DataType[columnIterator] == "char") OR (Me.DB.Column.DataType[columnIterator] == "datetime") THEN quoteStr = "'"; ELSE quoteStr = ""; ENDIF;

sqlCommandText = sqlCommandText + quoteStr + Me.DB.Column.ValueString[columnIterator] + quoteStr;; IF columnIterator < columnCount THEN sqlCommandText = sqlCommandText + ","; ENDIF; NEXT; sqlCommandText = sqlCommandText + ")"; IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " CommandText: " + sqlCommandText); ENDIF; qResultException = false; {Finally! we are ready to run the query.......} qResult = A5.LakeForest.SqlConnCache.ExecuteNonQuery (Me.SqlConn.Name, sqlCommandText, false); IF Me.LogMessages.Enabled THEN LogMessage(Me.Tagname + " ExecuteNonQuery " + qResult); ENDIF; {For the query result look it up in the DB.PostData.ResultEnum string enumeration } {to find out if there was an exception...} FOR qResultIndex = 1 TO 4 IF qResult == Me.DB.PostData.ResultEnum[qResultIndex] THEN qResultException = True; Me.DB.PostData.Result = qResult; EXIT FOR; ENDIF; NEXT; {If there was no exception then determine the number of rows that were inserted into the database by } {the INSERT query....} IF NOT qResultException THEN qResultLen = StringLen(qResult); FOR qResultIterator = 1 TO qResultLen qResultChar = StringMid(qResult, qResultIterator, 1); IF qResultChar == " " THEN rowsChars = StringLeft(qResult, qResultIterator - 1); rows = StringToIntg(rowsChars); Me.DB.PostData.Result = rowsChars + " row inserted."; {Note: rows Integer variable not required. It is included as an example of conversion from String to Integer.} EXIT FOR; ENDIF; NEXT; ENDIF; ELSE Me.DB.PostData.Result = "Attempted PostData but ConnectionDB" + connectLetter + " was not Open."; ENDIF;

ELSE
    {Apparently the SqlConnection is not in the SqlConnCache...}
    IF Me.LogMessages.Enabled THEN
        LogMessage(Me.Tagname + " " + Me.SqlConn.Name + " is not in SqlConnCache.");
    ENDIF;
ENDIF;
{Clear the DB.PostData trigger...}
Me.DB.PostData = False;

OffScan $PostTOaORb inherited - PostDataNow script: n/a
Shutdown $PostTOaORb inherited - PostDataNow script: n/a

End of $PostTOaORb Inherited - PostDataNow
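For reference, the following stand-alone C# fragment (not part of the Galaxy example; the connection string, table, and column values are placeholders) sketches the work the PostDataNow script delegates to SqlConnCache.ExecuteNonQuery: an INSERT command string is concatenated from column names and quoted values and is then executed against an open SqlConnection.

// Illustrative only: mirrors the INSERT that the PostDataNow QuickScript assembles.
using System;
using System.Data.SqlClient;

class PostDataSketch
{
    static void Main()
    {
        // Column names and already-quoted value strings, as the QuickScript would supply them.
        string[] columns = { "BatchId", "StartTime", "Quantity" };
        string[] values  = { "'B1001'", "'2006-01-27 08:00:00'", "42" };

        // Build "INSERT INTO <table> (c1,c2,...) VALUES (v1,v2,...)".
        string commandText = "INSERT INTO BatchLog ("
            + string.Join(",", columns) + ") VALUES ("
            + string.Join(",", values) + ")";

        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Runtime;Integrated Security=SSPI"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(commandText, conn);
            int rows = cmd.ExecuteNonQuery();
            // The cache class returns a string such as "1 executed.", which the
            // QuickScript then parses for the leading row count.
            Console.WriteLine(rows + " executed.");
        }
    }
}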

$PostTOaORb Inherited - TryConnectOnFalse


Aliases $PostTOaORb inherited - TryConnectOnFalse: n/a
Declarations $PostTOaORb inherited - TryConnectOnFalse:
Dim connectLetter As String;

Startup $PostTOaORb inherited - TryConnectOnFalse script: n/a
OnScan $PostTOaORb inherited - TryConnectOnFalse Script:
connectLetter = StringRight(Me.Tagname, 1);

Execute $PostTOaORb inherited TryConnectOnFalse - Expression: Me.Connection.TryConnect
Execute $PostTOaORb inherited TryConnectOnFalse Trigger type: OnFalse

Execute $PostTOaORb Inherited TryConnectOnFalse Script


IF connectLetter == "a" THEN
    Me.Connection.Connect.ToA = false;
ELSE
    Me.Connection.Connect.ToB = false;
ENDIF;

OffScan $PostTOaORb inherited - TryConnectOnFalse script: n/a
Shutdown $PostTOaORb inherited - TryConnectOnFalse script: n/a

End of $PostTOaORb Inherited - TryConnectOnFalse

RunTime Object
The RunTime Object provides stop watch capabilities for capturing "running" and "held" times as well as percent of time in these states.

RunTime Overview
The RunTime Object is a stop watch. It contains a "clock" that keeps track of the elapsed time from a "start event" to an "end event". During the clock timing period two states, "running" and "held," are also tracked, with the accumulated elapsed time recorded for each. The Object also calculates the elapsed time in each of the two states as a percentage of the total "clock" time. UDAs of the Object function as the buttons of the stop watch, controlling the "clock start event", "clock end event", "clock reset", "running state", and "held state". A "calculation period" may be adjusted at any time to set the frequency of execution of the statistics calculations.
For general information on objects, including relationships, deployment, and alarm distribution, see the Integrated Development Environment (IDE) documentation. For information on configuration options for object information, scripts, user-defined attributes (UDAs), or attribute extensions, click Extensions Help in the Help file header.

RunTime Run-Time Behavior


The RunTime Object exhibits the behavior of a stop watch. The stop watch sequence involves the equivalent of a "start" button, implemented by the Clock.StartEvent Boolean UDA, and a "stop" button, implemented by the Clock.EndEvent Boolean UDA. The stop watch reads out elapsed time in the Clock.ElapsedTime ElapsedTime UDA. It is also possible to view the "start" and "stop" timestamps using the Clock.StartTime and Clock.EndTime Time UDAs. The equivalent of a "reset" button for the stop watch is implemented as the Clock.Reset Boolean UDA.
In addition to stop watch "clock" time, the Object keeps track of "running" and "held" states. The "running" elapsed time is controlled by the equivalent of a detent-style button implemented as the Running.State Boolean UDA. The "held" elapsed time is controlled by the equivalent of a detent-style button implemented as the Held.State Boolean UDA. For each of these states the accumulated elapsed time is available in Running.ElapsedTime and Held.ElapsedTime. Note that when the "held" button is pushed (that is, Held.State is set to 'true'), the "running" state is suspended internally even if the "running" button is still pushed.
For both the "running" and "held" states the percentage of total "clock" time is calculated. The generic algorithm is:
state percent of clock time = 100.0 * (accumulated elapsed time in state X) / (elapsed clock time)
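As an illustration only (the durations below are arbitrary), the same calculation expressed in C# with System.TimeSpan, which is the type the Timer QuickScript uses internally:

using System;

class PercentOfClockSketch
{
    static void Main()
    {
        TimeSpan clockElapsed   = TimeSpan.FromMinutes(30); // total "clock" time
        TimeSpan heldElapsed    = TimeSpan.FromMinutes(6);  // accumulated "held" time
        TimeSpan runningElapsed = TimeSpan.FromMinutes(21); // accumulated "running" time

        double heldPercent    = 100.0 * heldElapsed.TotalSeconds    / clockElapsed.TotalSeconds;
        double runningPercent = 100.0 * runningElapsed.TotalSeconds / clockElapsed.TotalSeconds;

        Console.WriteLine("Held.PercentOfClock    = " + heldPercent);    // 20
        Console.WriteLine("Running.PercentOfClock = " + runningPercent); // 70
    }
}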

The Running.PercentOfClock Float UDA and Held.PercentOfClock Float UDA carry the calculated percentages.
The calculations for this Object are performed in the Timer Execute QuickScript. The CalcPeriod ElapsedTime UDA is configured with the desired repeat rate for performing the accumulation and percentage calculations. Even though CalcPeriod may be set to a value that is not an integer multiple of the Engine scan period, the effective repeat rate will still be the number of seconds in CalcPeriod rounded up to the next higher integer multiple of the Engine scan period.
The Timer Startup QuickScript checks whether the Engine is starting from Checkpoint; if it is not, it initializes all of the internally dimensioned variables to their default values. If it is starting from Checkpoint (recovering from a redundancy failover or simply a reboot of the platform computer), the retentive values are restored for the stop watch. The Timer OnScan QuickScript simply initializes the variables that are not affected by Checkpoint state.
The Start Execute and End Execute QuickScripts are each triggered by a transition to 'true' of the Clock.StartEvent and Clock.EndEvent UDAs respectively. The Start QuickScript kicks off the clock timer sequence, initializing variables and UDAs. The End QuickScript stops the sequence, cleaning up states but preserving elapsed times, timestamps, and percentages.
The Timer Execute QuickScript does all of the real work. It uses internally dimensioned variables of the .NET types System.DateTime and System.TimeSpan to do the time calculations. It also uses System.Int64 for 64-bit arithmetic so that time calculations are exact.
All of the UDAs identified as Calculated retentive are designed so that the Object can recover upon failover in a redundancy scenario, or when the platform computer has been rebooted after some kind of unplanned event where the Object did not gracefully Shutdown.
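The following C# sketch (illustrative only; it is not the QuickScript itself, and the variable names are invented) condenses the accumulation step described above: each calculation period, the elapsed time since the previous calculation is credited to the "held" total when Held.State is 'true', otherwise to the "running" total when Running.State is 'true'.

using System;

class TimerAccumulationSketch
{
    static DateTime lastCalc = DateTime.Now;
    static TimeSpan heldElapsed = TimeSpan.Zero;
    static TimeSpan runningElapsed = TimeSpan.Zero;

    // One calculation-period tick, mirroring the precedence rule:
    // "held" suspends "running" accumulation.
    static void CalculationTick(bool heldState, bool runningState)
    {
        DateTime thisCalc = DateTime.Now;
        TimeSpan delta = thisCalc.Subtract(lastCalc);
        if (heldState)
        {
            heldElapsed = heldElapsed.Add(delta);
        }
        else if (runningState)
        {
            runningElapsed = runningElapsed.Add(delta);
        }
        lastCalc = thisCalc;
    }

    static void Main()
    {
        CalculationTick(false, true);   // time credited to "running"
        CalculationTick(true, true);    // "held" takes precedence
        Console.WriteLine("Running: " + runningElapsed + "  Held: " + heldElapsed);
    }
}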

RunTime Configuration
The RunTime object is configured by filling in specific UDAs with the following RunTime Run-Time Object Attributes. Note that the UDAs incorporate 'dot' separators to enhance the grouping:

UDA                              DataType     Category              Array Size  Value
AutoResetEvents                  Boolean      User writeable        -           false
CalcPeriod                       ElapsedTime  User writeable        -           00:00:10.0000000
Clock.ElapsedTime                ElapsedTime  Calculated retentive  -           00:00:00.0000000
Clock.ElapsedTimePrevious        ElapsedTime  Calculated retentive  -           00:00:00.0000000
Clock.EndEvent                   Boolean      User writeable        -           false
Clock.EndTime                    Time         Calculated retentive  -           1/31/2005 12:00:00.000 PM
Clock.EndTimePrevious            Time         Calculated retentive  -           1/31/2005 12:00:00.000 PM
Clock.Reset                      Boolean      User writeable        -           false
Clock.Started                    Boolean      Calculated retentive  -           false
Clock.StartEvent                 Boolean      User writeable        -           false
Clock.StartTime                  Time         Calculated retentive  -           1/31/2005 12:00:00.000 PM
Clock.StartTimePrevious          Time         Calculated retentive  -           1/31/2005 12:00:00.000 PM
Clock.Stopped                    Boolean      Calculated retentive  -           false
Held.ElapsedTime                 ElapsedTime  Calculated retentive  -           00:00:00.0000000
Held.ElapsedTimePrevious         ElapsedTime  Calculated retentive  -           00:00:00.0000000
Held.PercentOfClock              Float        Object writeable      -           0.0
Held.PercentOfClockPrevious      Float        Calculated retentive  -           0.0
Held.State                       Boolean      User writeable        -           false
Running.ElapsedTime              ElapsedTime  Calculated retentive  -           00:00:00.0000000
Running.ElapsedTimePrevious      ElapsedTime  Calculated retentive  -           00:00:00.0000000
Running.PercentOfClock           Float        Object writeable      -           0.0
Running.PercentOfClockPrevious   Float        Calculated retentive  -           0.0
Running.State                    Boolean      User writeable        -           false

AutoResetEvents - Used in the Start - Execute and End - Execute QuickScripts to control whether the Clock.StartEvent and Clock.EndEvent are cleared to 'false' automatically. The default value is 'false', which requires that the user or another Object take responsibility for clearing the Clock.StartEvent and Clock.EndEvent Booleans.
CalcPeriod - Configured with the number of seconds of delay between executions of the calculations for the stop watch. If the Engine scan period is longer than this value, the calculations are executed every scan. If the value is not configured as a multiple of the Engine scan period, the calculations actually run on the next scan cycle following the elapse of CalcPeriod. The default value is 10 seconds.
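For example (assuming a 10-second CalcPeriod and a 3-second Engine scan period, values chosen only for illustration), the effective repeat rate works out as follows:

using System;

class CalcPeriodSketch
{
    static void Main()
    {
        double calcPeriodSeconds = 10.0; // CalcPeriod UDA (assumed value)
        double scanPeriodSeconds = 3.0;  // Engine scan period (assumed value)

        // The calculations run on the first scan at or after CalcPeriod has
        // elapsed, i.e. CalcPeriod rounded up to the next scan-period multiple.
        double effectiveSeconds =
            Math.Ceiling(calcPeriodSeconds / scanPeriodSeconds) * scanPeriodSeconds;

        Console.WriteLine("Effective repeat rate: " + effectiveSeconds + " s"); // 12 s
    }
}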

Clock.ElapsedTime - Provides a calculation of the actual elapsed time between the Clock.StartEvent OnTrue transition and the Clock.EndEvent OnTrue transition. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.ElapsedTimePrevious - Captures the previous value of the actual elapsed time between the Clock.StartEvent OnTrue transition and the Clock.EndEvent OnTrue transition. It is used to calculate incremental clock elapsed time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.EndEvent - Set to 'true' by a user or another Object to indicate that the stop watch timing period has ended. If AutoResetEvents is 'true' it is cleared to 'false' automatically; otherwise the user or other Object must clear it.
Clock.EndTime - Provides a snapshot of the actual time of the Clock.EndEvent. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.EndTimePrevious - Provides a snapshot of the previous actual time of the Clock.EndEvent for use in calculations. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.Reset - Checked in the Timer - Execute QuickScript. When set to 'true' by a user or another Object, all of the Calculated retentive UDAs are cleared to their default values, thus resetting the stop watch. Upon completion of that cycle of the Timer - Execute QuickScript it is cleared to 'false'.
Clock.Started - Provides an indication of the state of the stop watch's clock. When the clock is timing the value is 'true'; when not timing it is 'false'. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.StartEvent - Set to 'true' by a user or another Object to indicate that the stop watch timing period has begun. If AutoResetEvents is 'true' it is cleared to 'false' automatically; otherwise the user or other Object must clear it.
Clock.StartTime - Provides a snapshot of the actual time of the Clock.StartEvent. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.StartTimePrevious - Provides a snapshot of the previous actual time of the Clock.StartEvent for use in calculations. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Clock.Stopped - Provides an indication of the state of the stop watch's clock. When the clock is timing the value is 'false'; when not timing it is 'true'. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Held.ElapsedTime - Provides a calculation of the accumulated "Held" elapsed time, summing the time periods during which the Held.State UDA remains 'true'. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.

Held.ElapsedTimePrevious - Captures the previous value of the accumulated "Held" time. It is used to calculate incremental accumulated Held elapsed time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Held.PercentOfClock - Calculation of the "Held" percentage, determined from the ratio of the accumulated "Held" elapsed time over the total "Clock" elapsed time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Held.PercentOfClockPrevious - Captures the previous value of the "Held" time percentage. It is used to calculate the incremental change in the Held percent of clock time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Held.State - Provides an indication of the stop watch's "Held" state. This value is Category: User writeable. A user or another Object sets this UDA to 'true' to indicate that the accumulation of time for the "Held" state shall be calculated. This state takes precedence over the "Running" state; when it is set to 'true' the "Running" state accumulation calculations are not performed.
Running.ElapsedTime - Provides a calculation of the accumulated "Running" elapsed time, summing the time periods during which the Running.State UDA remains 'true'. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Running.ElapsedTimePrevious - Captures the previous value of the accumulated "Running" time. It is used to calculate incremental accumulated Running elapsed time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Running.PercentOfClock - Calculation of the "Running" percentage, determined from the ratio of the accumulated "Running" elapsed time over the total "Clock" elapsed time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Running.PercentOfClockPrevious - Captures the previous value of the "Running" time percentage. It is used to calculate the incremental change in the Running percent of clock time. This value is Category: Calculated retentive. It is saved in the Checkpoint file and supports redundancy with failover.
Running.State - Provides an indication of the stop watch's "Running" state. This value is Category: User writeable. A user or another Object sets this UDA to 'true' to indicate that the accumulation of time for the "Running" state shall be calculated. This state is subordinate to the "Held" state; when the Held.State UDA is set to 'true' the "Running" state accumulation calculations are not performed.
The QuickScripts for the $RunTime template Object are extracted from the Industrial Application Server Galaxy IDE editor. For the UDAs see the previous table or the Help.htm file.

Template Object:$RunTime
$RunTime - Timer
Aliases $RunTime - Timer: n/a
Declarations $RunTime - Timer:
Dim calcDelayL As System.Int64;
Dim calcPeriodL As System.Int64;
Dim lastScan As System.DateTime;
Dim thisScan As System.DateTime;
Dim calcPeriodT As System.TimeSpan;
Dim zeroET As System.TimeSpan;
Dim calcET As System.TimeSpan;
Dim cstrtT As System.DateTime;
Dim cendT As System.DateTime;
Dim cET As System.TimeSpan;
Dim hET As System.TimeSpan;
Dim rET As System.TimeSpan;

Startup $RunTime - Timer script:


zeroET = new System.TimeSpan(0);
calcET = zeroET;
calcPeriodT = new System.TimeSpan;
lastScan = new System.DateTime;
thisScan = new System.DateTime;
cstrtT = new System.DateTime;
cendT = new System.DateTime;
cET = new System.TimeSpan;
hET = new System.TimeSpan;
rET = new System.TimeSpan;
IF NOT myEngine.Engine.StartingFromCheckPoint THEN
    cstrtT = Now();
    cendT = Now();
    cET = zeroET;
    hET = zeroET;
    rET = zeroET;
    me.Clock.StartTime = Now();
    me.Clock.StartTimePrevious = Now();
    me.Clock.Endtime = Now();
    me.Clock.EndTimePrevious = Now();
    me.Clock.ElapsedTime = zeroET;
    me.Clock.ElapsedTimePrevious = zeroET;
    me.Held.ElapsedTime = zeroET;
    me.Held.ElapsedTimePrevious = zeroET;
    me.Held.PercentOfClockPrevious = 0.0;

    me.Running.ElapsedTime = zeroET;
    me.Running.ElapsedTimePrevious = zeroET;
    me.Running.PercentOfClockPrevious = 0.0;
    me.Clock.Started = false;
    me.Clock.Stopped = true;
ELSE
    cstrtT = me.Clock.StartTime;
    cendT = me.Clock.EndTime;
    cET = me.Clock.ElapsedTime;
    hET = me.Held.ElapsedTime;
    rET = me.Running.ElapsedTime;
ENDIF;

OnScan $RunTime - Timer Script


calcDelayL = 0;
calcPeriodT = me.CalcPeriod;
calcET = zeroET;
calcPeriodL = calcPeriodT;
lastScan = Now();
thisScan = Now();
me.Clock.Reset = false;
me.Clock.EndEvent = false;
me.Clock.StartEvent = false;
cstrtT = me.Clock.StartTime;
cendT = me.Clock.EndTime;
cET = me.Clock.ElapsedTime;
hET = me.Held.ElapsedTime;
rET = me.Running.ElapsedTime;

Execute $RunTime - Timer Expression: [blank]
Execute $RunTime - Timer Trigger type: Periodic
Execute $RunTime - Timer Trigger period: 00:00:00.0000000

Execute $RunTime - Timer Script


{Compare this scan's delay time to the end of the last calculation period...}
thisScan = Now();
calcDelayL = thisScan - lastScan;
{If the delay time exceeds the current setpoint for the calculation period execute the code...}
IF calcDelayL >= calcPeriodL THEN
    cstrtT = me.Clock.StartTime;
    cendT = me.Clock.EndTime;
    cET = me.Clock.ElapsedTime;
    hET = me.Held.ElapsedTime;
    rET = me.Running.ElapsedTime;
    calcET = thisScan.Subtract(lastScan);
    IF NOT me.Clock.Reset THEN
        IF me.Clock.Started THEN
            cET = cendT.Subtract(cstrtT);
            IF me.Held.State THEN
                hET = hET.Add(calcET);
            ELSE
                IF me.Running.State THEN
                    rET = rET.Add(calcET);
                ENDIF;
            ENDIF;
            cendT = Now();
        ELSE
            cET = zeroET;
        ENDIF;
        IF cET.TotalSeconds > 0 THEN
            me.Held.PercentOfClock = 100.0 * hET.TotalSeconds() / cET.TotalSeconds();
            me.Running.PercentOfClock = 100.0 * rET.TotalSeconds() / cET.TotalSeconds();
        ELSE
            me.Held.PercentOfClock = 0.0;
            me.Running.PercentOfClock = 0.0;
        ENDIF;
    ELSE
        hET = zeroET;
        rET = zeroET;
        cET = zeroET;
        cstrtT = Now();
        me.Running.PercentOfClock = 0.0;
        me.Held.PercentOfClock = 0.0;
        me.Clock.Reset = false;
        me.Clock.Started = false;
        me.Clock.Stopped = true;
    ENDIF;
    me.Clock.StartTime = cstrtT;
    me.Clock.EndTime = cendT;
    me.Clock.ElapsedTime = cET;
    me.Held.ElapsedTime = hET;

    me.Running.ElapsedTime = rET;
    {Following execution of the code reset the calculation delay and stamp the last scan time as Now()...}
    calcDelayL = 0;
    lastScan = Now();
ENDIF;

OffScan $RunTime - Timer script: n/a
Shutdown $RunTime - Timer script: n/a

End of $RunTime - Timer

$RunTime - Start
Aliases $RunTime - Start: n/a
Declarations $RunTime - Start:
Dim zeroET As System.TimeSpan;

Startup $RunTime - Start Script:


zeroET = new System.TimeSpan(0);

OnScan $RunTime - Start script: n/a

Execute $RunTime - Start Expression: Me.Clock.StartEvent
Execute $RunTime - Start Trigger type: OnTrue
Execute $RunTime - Start Trigger period: 00:00:00.0000000

Execute $RunTime - Start Script


me.Clock.Started = true;
me.Clock.Stopped = false;
me.Clock.StartTime = Now();
me.Clock.EndTime = Now();
me.Clock.ElapsedTime = zeroET;
me.Held.ElapsedTime = zeroET;
me.Running.ElapsedTime = zeroET;
IF me.AutoResetEvents THEN
    me.Clock.StartEvent = false;
ENDIF;

OffScan $RunTime - Start script: n/a
Shutdown $RunTime - Start script: n/a

End of $RunTime - Start


$RunTime - End
Aliases $RunTime - End: n/a

Declarations $RunTime - End:


Dim zeroET As System.TimeSpan;

Startup $RunTime - End script: n/a
OnScan $RunTime - End script: n/a

Execute $RunTime - End Expression: Me.Clock.EndEvent
Execute $RunTime - End Trigger type: OnTrue
Execute $RunTime - End Trigger period: 00:00:00.0000000

Execute $RunTime - End Script


me.Clock.Started = false;
me.Clock.Stopped = true;
me.Clock.EndTime = Now();
me.Clock.StartTimePrevious = me.Clock.StartTime;
me.Clock.EndTimePrevious = me.Clock.EndTime;
me.Held.ElapsedTimePrevious = me.Held.ElapsedTime;
me.Held.PercentOfClockPrevious = me.Held.PercentOfClock;
me.Running.ElapsedTimePrevious = me.Running.ElapsedTime;
me.Running.PercentOfClockPrevious = me.Running.PercentOfClock;
IF me.AutoResetEvents THEN
    me.Clock.EndEvent = false;
ENDIF;

OffScan $RunTime - End script: n/a
Shutdown $RunTime - End script: n/a

End of $RunTime - End

ObjectCache.dll Visual Studio.NET C# Solution


ObjectCacheExt.dll is a .NET Function DLL that includes three Classes: ObjectCacheExt, SqlConnCache, and OleDbConnCache. The source code is written in C#.NET and is provided at the end of this appendix. To compile this DLL as a Visual Studio .NET 2003 Solution perform these steps:
1. Create a new "Blank Solution" as a C# Class Library in Visual Studio.NET 2003 in a new project directory (e.g. D:\VisualStudioProjects\ObjectCacheExt). Name the Solution - ObjectCacheExt.
2. Several additional files and subdirectories will then have been created in the identified project directory.
3. Rename the single Class file (Class1.cs) that was automatically created by Visual Studio.NET as - ObjectCacheExt.cs.
4. Copy the complete source code text found at the end of this appendix into the body of the ObjectCacheExt.cs file in the Visual Studio editor window.
5. Save the solution (select 'File/Save All' or use the Ctrl+Shift+S hotkey).
Note The currentDomain.Load function cannot contain carriage returns. Before compiling the source code, remove any carriage returns from the SqlConnCache Initialize() and OleDbConnCache Initialize() methods.
6. Verify that 'Build\Configuration Manager' indicates that the Debug version of the DLL will be compiled first.
7. Compile this version of the DLL using 'Build\Build solution' (or the Ctrl+Shift+B hotkeys). Observe whether there are any errors in the Output window. Check the Task List for further recommended steps.
8. When the Debug build is clean with no errors, proceed to the testing phase. Use the copy of the ObjectCacheExt.dll file found in the solution directory \bin\Debug, as well as the ObjectCacheExt.pdb file, for testing.
9. Following successful testing of the Debug version, build the Release version of the DLL. To do this select Build\Configuration Manager and choose Release. Then repeat the 'Build\Build solution' step. Copy the ObjectCacheExt.dll file from the \bin\Release directory of the Solution to a defined "released" DLLs directory and use that copy in the final testing steps.
The DLL (whether it is the Debug version or the Release version) must be imported into a Galaxy for testing. Place a copy of the desired Debug or Release version of the DLL in a known ArchestrA directory, for example C:\Program Files\ArchestrA\Framework\bin. If the Debug DLL is being used, also place a copy of the PDB file from the 'bin\Debug' directory of the Visual Studio.NET solution.

10. Open the desired test Galaxy using the Industrial Application Server Galaxy IDE application. Select Galaxy\Import\Script Function Library. If this is the second or subsequent import of a recompiled version of the DLL, select the radio button designating Overwrite existing DLL.
11. To test the Debug version of this DLL using Visual Studio .NET 2003, first import it into a test Galaxy, create a derived template Object, and add QuickScript code that calls functions of the DLL. Provide one or more UDAs with category 'User writeable' that serve to exercise the Object's QuickScripts.
12. Create an instance of the object and place it under an Area on an Engine.
All tests described in this example are performed on a single node that has Visual Studio.NET and Industrial Application Server installed.
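As an optional aid, the following stand-alone C# driver (it is not part of the documented procedure; the connection string and table are placeholders for a local test database) exercises the same SqlConnCache calls that the Galaxy QuickScripts make. It can be useful for bench-testing the compiled DLL outside of a Galaxy before importing it; compile it against a reference to ObjectCacheExt.dll.

using System;
using System.Data.SqlClient;
using A5.LakeForest;

class SqlConnCacheDriver
{
    static void Main()
    {
        // Build the cache and its statistics DataSet.
        Console.WriteLine(SqlConnCache.Initialize());                   // "INITIALIZED"

        // The connection manager object normally opens the connection;
        // here it is opened directly for the bench test.
        SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=TestDB;Integrated Security=SSPI");
        conn.Open();

        SqlConnCache.Add("ConnectionDBa", conn);
        Console.WriteLine(SqlConnCache.ContainsKey("ConnectionDBa"));   // True
        Console.WriteLine(SqlConnCache.GetConnState("ConnectionDBa"));  // "Open"

        string result = SqlConnCache.ExecuteNonQuery(
            "ConnectionDBa",
            "INSERT INTO TestTable (Col1) VALUES ('abc')",
            false);
        Console.WriteLine(result);                                      // e.g. "1 executed."
        Console.WriteLine(SqlConnCache.GetExecuteNonQueryCount("ConnectionDBa"));

        SqlConnCache.Remove("ConnectionDBa");
        conn.Close();
    }
}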

Debugging the Project


The following steps acquire debugging information revealing the internal functionality of the compiled DLL, either with breakpoints or step-by-step:
1. Be sure that the test Platform is deployed and OnScan on the node that is being tested, and that the test Engine is undeployed or Shutdown. You can 'Shutdown' a deployed, running engine from the SMC using Platform Manager.
2. Open the ObjectCacheExt Solution in Visual Studio.NET 2003 (e.g. from the D:\VisualStudioProjects\ObjectCacheExt directory).
3. Open the ObjectCacheExt.cs source file.
4. Open Task Manager and inspect the list of aaEngine processes along with their PID (Process ID). If the PID column is not visible use 'View\Select Columns' and check the box to make it visible. There should be at least one aaEngine process running at this time. If there is only one it represents the Platform.
5. Now deploy the test Engine containing the test Object that references the DLL. If it is already deployed and 'Shutdown', then 'Start' the test Engine using the SMC Platform Manager and make sure it is running 'OnScan'.
6. Inspect the Task Manager again and observe that one additional 'aaEngine' process appears in the list. Take note of this PID (Process ID).
7. In Visual Studio.NET make sure that ObjectCacheExt has been designated as the Startup Project. (Select 'Project\Set as Startup Project' from the menu while the ObjectCacheExt project is visible in the Visual Studio.NET editor.)
8. Open up the ArchestrA Object Browser with the test Object instance and place selected Attributes and UDAs into the watch window.
Tip Save the watch window configuration to a file so that it can be reloaded for later testing.
9. Launch Object Viewer from the IDE or from the Platform Manager in the SMC.

10. In the SMC select the Local LogViewer from the Default Group and inspect the Log for errors, warnings, etc. Periodically inspect the log during the test.
11. In Visual Studio.NET select 'Debug\Processes' from the menu. Inspect the list of 'Available Processes', looking for the ID of the aaEngine process.
A. Click on that line in the list, then select the 'Attach' button.
B. In the dialog check 'Common Language Runtime' and uncheck the other boxes.
C. Then click the 'OK' button. The aaEngine.exe with its ID will appear in the 'Debugged Processes' list. Click the 'Close' button.
12. Add one or more Breakpoints in the ObjectCacheExt.cs source code in the Visual Studio.NET editor.
13. Utilizing Object Viewer, exercise the Object instance by modifying UDA values that trigger QuickScripts and cause the QuickScript code to make calls to the ObjectCacheExt Class functions.
14. Return immediately to the Visual Studio.NET application and observe where the Breakpoints hit during execution of the function call. Use 'Debug\Continue' or the F5 function key to proceed to the next Breakpoint.
15. When the debugger stops at a Breakpoint it is possible to inspect the values of the variables within the .NET CLR that are currently in scope. Place variables and objects of interest in the Visual Studio .NET Watch Window. Remember that not all variables of the ObjectCacheExt DLL will be in scope at the same time, depending upon the code that is executing when the Breakpoint is reached.
16. When finished with this phase of testing select 'Debug\Stop Debugging'. In rare cases this may cause the previously Attached aaEngine Process to stop. If this happens, redeploy the Engine to continue, or use the SMC to Start it and set it back to OnScan.
17. If certain Breakpoints are visited repeatedly by Function Calls in Periodic or other frequently executing QuickScripts, remove those Breakpoints so that other methods in the Function DLL may be tested and debugged.
18. If it is necessary to make changes to the source code, recompile both the Debug and Release versions and copy the Debug version to the working directory. Reimport the Function DLL into the Galaxy and validate the Objects that reference it.
19. When completely finished with Debug testing, replace the copy of the Debug version of the DLL in the working directory with the compiled Release version and remove the Debug version's PDB file. Reimport the DLL into the Galaxy and perform a final validation of the Objects that reference it.

ObjectCacheExt.DLL Source Code


// BEGINNING OF OBJECTCACHEEXT.DLL SOURCE CODE
using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.Reflection;

/// <summary> /// Namespace A5.LakeForest identifies contained /// classes developed by the Invensys Wonderware /// ArchestrA A5 group in Lake Forest, CA. /// </summary> namespace A5.LakeForest { /// <summary> /// ObjectCacheExt handles pooling of any kind /// of .NET object. There is no type safe management /// in this class. /// </summary> public class ObjectCacheExt { // a private static member variables is used // as singleton fields of this class. // Hashtable.Synchronized keeps the named list // of objects and ensures thread safe operation. private static Hashtable objects = Hashtable.Synchronized(new Hashtable()); public ObjectCacheExt() { // Constructor logic would normally go here // but it is not needed for this class as there // is no need to explicitly construct an // instance. } /// /// /// /// /// /// /// /// /// /// /// <summary> ObjectCacheExt's Add method takes two arguments, adding the supplied object argument 'o' to the Hashtable cache under the name given as the 'objectName' argument; it provides no return value. </summary> <param name="objectName"> String key identifier for the object 'o' to be added to the object cache. </param>

/// <param name="o"> /// Reference to the object 'o' to be added /// to the cache. /// </param> public static void Add(string objectName, object o) { objects[objectName] = o; } /// <summary> /// ObjectCacheExt's ContainsKey method takes /// one argument, returning a boolean true value /// if the name supplied as 'objectName' /// contained in the Hashtable cache, /// is returning a boolean false value /// if it is not contained in the cache. /// </summary> /// <param name="objectName"> /// String key identifier used the check /// if object 'o' is contained in object cache. /// </param> /// <returns> /// True if the 'objectName' identifier /// is in the object cache; /// False it it is not in the object cache. /// </returns> public static bool ContainsKey(string objectName) { return objects.ContainsKey(objectName); } /// <summary> /// ObjectCacheExt's Remove method takes /// one argument, deleting a Hashtable entry /// both key and value - if the 'objectName' /// entry is found in the Hashtable cache; /// it provides no return value. /// </summary> /// <param name="objectName"> /// String key identifier for the object 'o' /// that is to be removed from the object cache. /// </param> public static void Remove(string objectName) { objects.Remove(objectName); } /// <summary> /// ObjectCacheExt's first Get method takes /// one argument, /// returning a .NET object reference for the object /// that is contained in the Hashtable cache /// under the key value identified by 'objectName'; /// if that 'objectName' is not found in the cache /// the method /// returns 'Null'. /// </summary> /// <param name="objectName"> /// String key identifier for the object 'o' that is to be /// acquired for use from the object cache.

/// </param> /// <returns> /// A reference to the object that is in the /// object cache. /// </returns> public static object Get(string objectName) { return objects[objectName]; } /// <summary> /// ObjectCacheExt's second Get method takes /// one argument, /// returning a .NET object reference for the object /// that is contained in the Hashtable cache under /// the index value identified by 'objectIndex'; /// if that 'objectIndex' is not found /// in the cache the method returns 'Null'. /// </summary> /// <param name="objectIndex"> /// An integer identifying an object held in /// the object cache. /// </param> /// <returns> /// A reference to the object held at /// 'objectIndex' in the object cache; /// Null if 'objectIndex' is not found in /// the object cache. /// </returns> public static object Get(int objectIndex) { int objCount = 0; int indexer = 0; string objectName = ""; objCount = Count(); if ((objectIndex > 0) & (objectIndex <= objCount)) { foreach (string objectNameCheck in objects.Keys) { indexer = indexer + 1; if (indexer == objectIndex) { objectName = objectNameCheck; break; } } } return objects[objectName]; } /// <summary> /// ObjectCacheExt's Count method does not /// have any arguments; /// it returns the integer count of the /// object contained in the Hashtable cache. /// </summary> /// <returns> /// An integer representing the number of objects

/// in the object cache. /// </returns> public static int Count() { return objects.Count; } } /// <summary> /// SqlConnCache handles pooling of /// System.Data.SqlClient.SqlConnection objects. /// This class insures type safe management ///in the SqlConnections added to the /// Hashtable cache. It also implements /// try-catch-finally exception handling /// with success/fail return messages. /// </summary> public class SqlConnCache { // private static member variables are used // as singleton members of this class. // Hashtable.Synchronized keeps the named list // of SqlConnection objects and ensures // thread safe operation. // sqlDataAssembly is needed to retrieve a valid // SqlConnectinon Type object used in comparing // the Type of objects submitted for inclusion // in the cache. // sqlConnType and its sql...Str capture the // submitted object's Type info. private static Hashtable sqlConns; private static DataSet dsSQLConnInfo; private static DataTable dtSQLConnStats; private static DataColumn dcSQLConnName; private static DataColumn dcSQLConnTransactStart; private static DataColumn dcSQLConnTransactFinish; private static DataColumn dcSQLConnTransactDuration; private static DataColumn dcSQLConnTransactCount; private static DataColumn[] dcArrayKey; private static AppDomain currentDomain; private static Assembly sqlDataAssembly; private static System.Type sqlConnType; private static string sqlConnTypeStr = @""; private static string initializedStr = @""; // /// <summary> /// SqlConnCache's constructor takes no arguments; /// it doesn't do anything - use the Initialize() /// method after instantiation; /// </summary> public SqlConnCache() { // Constructor logic would go here. // This Class does not need construction of any // variables as it has private static members. } /// <summary> /// SqlConnCache's Initialize method takes

/// no arguments; /// it provides a string return value describing /// success or failure; /// internally this function builds the cache object /// and statistics DataSet; /// static designation ensures initialization /// or reinitialization of the common Cache ///and DataSet. /// </summary> public static string Initialize() { // Attach to the CurrentDomain of the // AOS Engine ... currentDomain = AppDomain.CurrentDomain; // Get the System.Data DLL loaded in order // to compare types... // Inside the currentDomain.Load function // cannot contain carriage return. sqlDataAssembly = currentDomain.Load("System.Data, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"); // Set the connection type according to // the loaded DLL and copy the name // to a string ... sqlConnType = sqlDataAssembly.GetType("System.Data. SqlClient.SqlConnection",true); sqlConnTypeStr = sqlConnType.ToString(); // Create the Hashtable... sqlConns = Hashtable.Synchronized(new Hashtable()); // Initialize the SQLConnInfo DataSet... dsSQLConnInfo = new DataSet("SQLConnectionInfo"); // Initialize and add the SQLConnectionStats // DataTable ... dtSQLConnStats = new DataTable("SQLConnectionStats"); dsSQLConnInfo.Tables.Add(dtSQLConnStats); // Initialize and add the Columns for // SQLConnecitonStats... dcSQLConnName = new DataColumn("SQLConnName", typeof(string)); dtSQLConnStats.Columns.Add(dcSQLConnName); dcSQLConnTransactStart = new DataColumn("TransactStart",typeof(DateTime)); dtSQLConnStats.Columns.Add (dcSQLConnTransactStart); dcSQLConnTransactFinish = new DataColumn("TransactFinish", typeof(DateTime)); dtSQLConnStats.Columns.Add (dcSQLConnTransactFinish);

dcSQLConnTransactDuration = new DataColumn("TransactDuration", typeof(TimeSpan)); dtSQLConnStats.Columns.Add (dcSQLConnTransactDuration); dcSQLConnTransactCount = new DataColumn("TransactCount",typeof(int)); dtSQLConnStats.Columns.Add (dcSQLConnTransactCount); // Create the Column key and add to the DataTable dcArrayKey = new DataColumn[1]; dcArrayKey[0] = dcSQLConnName; dtSQLConnStats.PrimaryKey = dcArrayKey; initializedStr = @"INITIALIZED"; return initializedStr; } /// <summary> /// SqlConnCache's Initialized method takes /// no arguments; /// it provides a string return value /// giving the cache initialization state; /// static designation ensures getting the /// state of the singleton cache object. /// </summary> public static string Initialized() { return initializedStr; } /// <summary> /// SqlConnCache's Add method takes two arguments, /// adding the supplied object argument 'sqlObj' /// to the Hashtable cache under the name given /// as the 'sqlConnName' argument; /// it provides no return value; internally this /// function calls the Add method that has a /// three argument signature automatically passing /// true as the third argument (see the /// alternate Add method below); /// static designation ensures adding to the /// singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <param name="sqlObj"> /// Reference to the object 'sqlObj' to be added to /// the SqlConnCache. /// </param> public static void Add(string sqlConnName, object sqlObj) { string addResult2 = ""; addResult2 = Add(sqlConnName, sqlObj, true); }

/// <summary> /// SqlConnCache's Add method takes three arguments, /// adding the supplied object argument 'sqlObj' /// to the Hashtable cache under the name given /// as the 'sqlConnName' argument; /// it provides a return value giving /// success/failure info; /// it is a type safe method that checks for /// the Type of the object submitted for /// inclusion in the cache, refusing to /// Add objects that are not of Type /// System.Data.SqlClient.SqlConnection; /// try-catch-finally exception handling is /// implemented and in the case of errors the return /// value is a string that describes the /// type of error; static designation ensures /// adding to the singleton instance of the cache. /// </summary> /// <param name="sqlConnName"></param> /// <param name="sqlObj"></param> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <param name="sqlObj"> /// Reference to the object 'sqlObj' to be added /// to the SqlConnCache. /// </param> /// <param name="withExceptions"> /// Set to true if for the Add method to return /// Exception information. /// </param> /// <returns> /// A result string; if 'withExceptions' argument /// set to true the result string /// may contain Exeption information. /// </returns> public static string Add(string sqlConnName, object sqlObj,bool withExceptions) { string sqlObjTypeStr; string eStr = ""; string addResult = ""; bool exceptionOccured = false; System.Type sqlObjType; System.Data.DataRow rowStats; try { sqlObjType = sqlObj.GetType(); sqlObjTypeStr = sqlObjType.ToString(); if (sqlObjType.Equals( sqlConnType)) { System.DateTime datetimeNow = System.DateTime.Now; sqlConns[sqlConnName] = sqlObj; rowStats = dtSQLConnStats.NewRow(); rowStats["SQLConnName"] = sqlConnName; rowStats["TransactStart"] = datetimeNow; rowStats["TransactFinish"] = datetimeNow;

rowStats["TransactDuration"] = new TimeSpan(0); rowStats["TransactCount"] = 0; dtSQLConnStats.Rows.Add(rowStats); addResult = @"ADDED"; } else { sqlObjTypeStr = sqlObjType.ToString(); addResult = @"INCORRECT TYPE"; } eStr = @"NO ERROR"; } catch (TypeLoadException tldException) { addResult = @"TypeLoad failed"; eStr = tldException.ToString(); exceptionOccured = true; } catch (NullReferenceException nRefException) { addResult = @"Null Reference"; eStr = nRefException.ToString(); exceptionOccured = true; } catch (InvalidOperationException invalidOpException) { addResult = @"Invalid Operation."; eStr = invalidOpException.ToString(); exceptionOccured = true; } catch (Exception e) { addResult = @"Other."; eStr = e.ToString(); exceptionOccured = true; } finally { if (exceptionOccured) { // Prefix 'Exception - ' to exception text... addResult = @"Exception - " + addResult + @"."; } return addResult; } /// /// /// /// /// /// /// /// /// <summary> SqlConnCache's ContainsKey method takes one argument, returning a boolean true value if the name supplied as 'sqlConnName' is contained in the Hashtable cache, returning a boolean false value if it is not contained in the cache; static designation insures reference to the singleton instance of the cache.

/// </summary> /// <param name="sqlConnName"></param> /// String key identifier used the check if object /// 'sqlObj' is contained in SqlConnCache /// </param> /// <returns> /// True if the 'sqlConnName' identifier is in /// the SqlConnCache; /// False it it is not in the SqlConnCache. /// </returns> public static bool ContainsKey(string sqlConnName) { return sqlConns.ContainsKey(sqlConnName); } /// <summary> /// SqlConnCache's Remove method takes one argument, /// deleting a Hashtable entry /// - both key and value - if the 'sqlConnName' /// entry is found in the Hashtable cache; /// it provides no return value; /// static designation insures removing from the /// singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// that is to be removed from the SqlConnCache. /// </param> public static void Remove(string sqlConnName) { string [] sqlConnNameFind = new string [1] {sqlConnName}; System.Data.DataRow rowToRemove; rowToRemove = dtSQLConnStats.Rows.Find((string[]) sqlConnNameFind); if (rowToRemove != null) { dtSQLConnStats.Rows.Remove(rowToRemove); } sqlConns.Remove(sqlConnName); } /// /// /// /// /// /// /// /// /// /// /// /// /// /// /// /// <summary> SqlConnCache's first Get method takes one argument,returning a .NET object reference for the object that is contained in the Hashtable cache under the key value identified by 'sqlConnName'; if that 'sqlConnName' is not found in the cache the method returns 'Null'; static designation ensures getting the singleton instance of the cache. </summary> <param name="sqlConnName"></param> String key identifier for the object 'sqlObj' that is to be acquired for use from the SqlConnCache. </param> <returns>

/// A reference to the object that is in /// the SqlConnCache. /// </returns> public static object Get(string sqlConnName) { return sqlConns[sqlConnName]; } /// <summary> /// SqlConnCache's second Get method takes /// one argument, returning a .NET object /// reference for the object that is contained /// in the Hashtable cache under the index value /// identified by 'sqlConnIndex'; if that /// 'sqlConnIndex' is not found in the cache the /// method returns 'Null'; /// static designation ensures getting the singleton /// instance of the cache. /// </summary> /// <param name="sqlConnIndex"> /// Integer index identifier for the object 'sqlObj' /// that is to be acquired for use from /// the SqlConnCache. /// </param> /// <returns> /// A reference to the object that is in /// the SqlConnCache. /// </returns> public static object Get(int sqlConnIndex) { int sqlConnCount = 0; int sqlConnIndexer = 0; string sqlConnName = ""; sqlConnCount = Count(); if ((sqlConnIndex > 0) & (sqlConnIndex <= sqlConnCount)) { foreach (string sqlConnNameCheck in sqlConns.Keys) { sqlConnIndexer = sqlConnIndexer + 1; if (sqlConnIndexer == sqlConnIndex) { sqlConnName = sqlConnNameCheck; break; } } } return sqlConns[sqlConnName]; } /// /// /// /// /// /// /// <summary> SqlConnCache's Count method does not have any arguments; it returns the integer count of the object contained in the Hashtable cache; static designation insures counting the singleton instance of the cache.

/// </summary> /// <returns> /// An integer representing the number of objects /// in the object cache. /// </returns> public static int Count() { return sqlConns.Count; } /// <summary> /// SqlConnCache's GetConnState method takes /// one argument. It returns a string giving /// the specific state of the SqlConnection object /// named by 'sqlConnName'; /// try-catch checks for valid connection state /// information, returning error information /// if the object gives Null Reference or invokes /// an Invalid Operation or other exception; /// static designation insures getting /// the state from the singleton instance /// of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <returns> /// The string describing 'sqlObj' State; /// for an Exeption the string describing /// the Exception. /// </returns> public static string GetConnState(string sqlConnName) { SqlConnection sqlConnSelected; if (ContainsKey(sqlConnName)) { sqlConnSelected = (SqlConnection)Get(sqlConnName); try { return sqlConnSelected.State.ToString(); } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; }

catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; } } else { return @"NOT IN CACHE!"; } } /// <summary> /// SqlConnCache's GetServerName method has /// one argument. It returns the DataSource name /// of the server for the SqlConnection named /// by 'sqlConnName' if it is in the cache; /// try-catch exception handling checks for /// problems and returns error information; /// static designation ensures getting the name /// from the singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <returns> /// The string describing sqlObj's DataSource; /// for an Exception the string describing /// the Exception. /// </returns> public static string GetServerName(string sqlConnName) { SqlConnection sqlConnSelected; if (ContainsKey(sqlConnName)) { try { sqlConnSelected = (SqlConnection)Get(sqlConnName); return sqlConnSelected.DataSource; } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; }

catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; } } else { return @"NOT IN CACHE!"; } } /// <summary> /// SqlConnCache's GetDatabaseName has one argument. /// It returns the database name for the /// SqlConnection named by 'sqlConnName' /// if it is in the cache; /// try-catch exception handling checks for /// problems and returns error information; /// static designation ensures getting the name /// from the singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <returns> /// The string describing sqlObj's Database; /// for and Execption the string describing /// the Exception. /// </returns> public static string GetDatabaseName(string sqlConnName) { SqlConnection sqlConnSelected; if (ContainsKey(sqlConnName)) { try { sqlConnSelected = (SqlConnection)Get(sqlConnName); return sqlConnSelected.Database; } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; } catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; }

} else { return @"NOT IN CACHE!"; } } /// <summary> /// SqlConnCache's GetConnectionString has /// one argument. /// It returns the connection string for /// the SqlConnection named by 'sqlConnName' /// if it is in the cache; /// try-catch exception handling checks for /// problems and returns error information; /// static designation ensures getting the /// string from the singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object 'sqlObj' /// to be added to the SqlConnCache. /// </param> /// <returns> /// The string representing sqlObj's /// ConnectionString; for and Exception ///the string describing the Exception. /// </returns> public static string GetConnectionString(string sqlConnName) { SqlConnection sqlConnSelected; if (ContainsKey(sqlConnName)) { try { sqlConnSelected = (SqlConnection)Get(sqlConnName); return sqlConnSelected.ConnectionString; } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; } catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; } } else

{ return @"NOT IN CACHE!"; } } /// <summary> /// SqlConnCache's ExecuteNonQuery method takes /// two arguments. /// It takes the supplied 'SqlCommandText' string /// and applies it as an ExecuteNonQuery call /// to the database of the named 'sqlConnName' /// SqlConnection, if it is in the cache. /// Success returns the number of rows /// affected by the query. /// try-catch exception handling checks for /// problems and returns error information; /// static designation insures executing /// the query on the singleton instance of /// the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier for the object /// 'sqlConnName' found in the SqlConnCache. /// </param> /// <param name="SqlCommandText"> /// String representing the SqlCommand's Text /// to be executed by 'sqlConnName'. /// </param> /// <param name="enableMonitor"> /// Boolean true indicates that /// System.Threading.Monitor synchronization /// should be invoked when executing this method. /// </param> /// <returns> /// A string representing the result of the /// execution of the method /// a string representation of the number of /// datatable rows affected or a string /// describing any Exception that occurred. /// </returns> public static string ExecuteNonQuery(string sqlConnName, string SqlCommandText, bool enableMonitor) { SqlConnection sqlConnForQuery; SqlCommand sqlCommandForQuery; int rows; int prevTransactCount = 0; string [] sqlConnNameFind = new string [1] {sqlConnName}; System.Data.DataRow rowStatsForUpdate; System.DateTime dateTimeQueryStart = DateTime.Now; System.DateTime dateTimeQueryFinish; System.TimeSpan timespanQuery; string qResult;

if (sqlConns.ContainsKey(sqlConnName)) { rowStatsForUpdate = dtSQLConnStats.Rows.Find((string[]) sqlConnNameFind); try { sqlConnForQuery = (SqlConnection)sqlConns[sqlConnName]; sqlCommandForQuery = new SqlCommand(SqlCommandText, sqlConnForQuery); if (rowStatsForUpdate != null) { rowStatsForUpdate["TransactStart"] = dateTimeQueryStart; prevTransactCount = (int)rowStatsForUpdate["TransactCount"]; } if (enableMonitor) { System.Threading.Monitor.TryEnter(sqlConnForQuery, 10000); rows = sqlCommandForQuery.ExecuteNonQuery(); System.Threading.Monitor.Exit(sqlConnForQuery); } else { rows = sqlCommandForQuery.ExecuteNonQuery(); } dateTimeQueryFinish = DateTime.Now; if (rowStatsForUpdate != null) { rowStatsForUpdate["TransactFinish"] = dateTimeQueryFinish; timespanQuery = dateTimeQueryFinish - dateTimeQueryStart; rowStatsForUpdate["TransactDuration"] = timespanQuery; rowStatsForUpdate["TransactCount"] = prevTransactCount + 1; } qResult = rows.ToString() + " executed."; } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); qResult = @"OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); qResult = @"INVALID OPERATION"; }

catch (Exception e) { string eStr = e.ToString(); qResult = @"EXCEPTION"; } return qResult; } else { qResult = @"NO SQLCONN"; return qResult; } } /// <summary> /// SqlConnCache's GetExecuteNonQueryDuration /// has one argument. /// It returns the duration TimeSpan for the /// SQLConnection named by 'sqlConnName' /// if it is in the cache. /// (It can be readily converted to ElapsedTime.) /// try-catch exception handling checks for /// problems and returns error information; /// static designation insures getting the duration /// from the singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier (sqlConnName) for the /// object to be inspected /// for TimeSpan duration info. /// </param> /// <returns> /// The TimeSpan representing sqlConnName's /// transaction duration. /// TimeSpan of zero indicates /// no transaction occurred. /// </returns> public static TimeSpan GetExecuteNonQueryDuration(string sqlConnName) { System.TimeSpan timespanQueryDuration = new TimeSpan(0); string [] sqlConnNameFind = new string [1] {sqlConnName}; System.Data.DataRow rowWithDuration; rowWithDuration = dtSQLConnStats.Rows.Find((string[]) sqlConnNameFind); if (rowWithDuration != null) { timespanQueryDuration = (TimeSpan)rowWithDuration ["TransactDuration"];; } return timespanQueryDuration; }

/// <summary> /// SqlConnCache's GetExecuteNonQueryCount /// has one argument. /// It returns the execution count for /// ExecuteNonQuery calls using the SQLConnection /// named by 'sqlConnName' if it is in the cache; /// try-catch exception handling checks for /// problems and returns error information; /// static designation insures adding to the /// singleton instance of the cache. /// </summary> /// <param name="sqlConnName"> /// String key identifier (sqlConnName) for the /// object to be inspected for count info. /// </param> /// <returns> /// The integer representing sqlConnName's /// transaction count. /// Count of zero indicates /// no transactions have occurred. /// </returns> public static int GetExecuteNonQueryCount(string sqlConnName) { int transactCount = 0; string [] sqlConnNameFind = new string [1] {sqlConnName}; System.Data.DataRow rowWithCount; rowWithCount = dtSQLConnStats.Rows.Find((string[]) sqlConnNameFind); if (rowWithCount != null) { transactCount = (int)rowWithCount["TransactCount"];; } return transactCount; } } public class OleDbConnCache { // private static member variables are used // as singleton fields of this class. // Hashtable.Synchronized keeps the named list // of OleDbConnection objects and ensures // thread safe operation. // oledbDataAssembly is needed to retrieve // a valid OleDbConnection Type object used // in comparing the Type of objects // submitted for inclusion in the cache. // oledbConnType and its oledb...Str capture // the submitted object's Type info. private private private private private static static static static static Hashtable oledbConns; DataSet dsOLEDBConnInfo; DataTable dtOLEDBConnStats; DataColumn dcOLEDBConnName; DataColumn dcOLEDBConnTransactStart;

    private static DataColumn dcOLEDBConnTransactFinish;
    private static DataColumn dcOLEDBConnTransactDuration;
    private static DataColumn dcOLEDBConnTransactCount;
    private static DataColumn[] dcArrayKey;
    private static AppDomain currentDomain;
    private static Assembly oledbDataAssembly;
    private static System.Type oledbConnType;
    private static string oledbConnTypeStr = @"";
    private static string initializedStr = @"";

    public OleDbConnCache()
    {
        // Constructor logic would go here.
        // This Class does not need construction of any
        // variables as it has private static members.
    }

    /// <summary>
    /// OleDbConnCache's Initialize method takes
    /// no arguments;
    /// it provides a string return value describing
    /// success or failure;
    /// internally this function builds the
    /// cache object and statistics DataSet;
    /// static designation insures initialization
    /// or reinitialization of the common Cache
    /// and DataSet.
    /// </summary>
    public static string Initialize()
    {
        // Attach to the CurrentDomain of the
        // AOS Engine ...
        currentDomain = AppDomain.CurrentDomain;

        // Get the System.Data DLL loaded in order to
        // compare types; the assembly name string passed
        // to currentDomain.Load cannot contain a carriage return.
        oledbDataAssembly = currentDomain.Load("System.Data, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");

        // Set the connection type according to the loaded DLL
        // and copy the name to a string ...
        oledbConnType = oledbDataAssembly.GetType("System.Data.OleDb.OleDbConnection", true);
        oledbConnTypeStr = oledbConnType.ToString();

        // Create the Hashtable...
        oledbConns = Hashtable.Synchronized(new Hashtable());

        // Initialize the OLEDBConnectionInfo DataSet...
        dsOLEDBConnInfo = new DataSet("OLEDBConnectionInfo");

        // Initialize and add the OLEDBConnectionStats DataTable...
        dtOLEDBConnStats = new DataTable("OLEDBConnectionStats");
        dsOLEDBConnInfo.Tables.Add(dtOLEDBConnStats);

        // Initialize and add the Columns for OLEDBConnectionStats...
        dcOLEDBConnName = new DataColumn("OLEDBConnName", typeof(string));
        dtOLEDBConnStats.Columns.Add(dcOLEDBConnName);
        dcOLEDBConnTransactStart = new DataColumn("TransactStart", typeof(DateTime));
        dtOLEDBConnStats.Columns.Add(dcOLEDBConnTransactStart);
        dcOLEDBConnTransactFinish = new DataColumn("TransactFinish", typeof(DateTime));
        dtOLEDBConnStats.Columns.Add(dcOLEDBConnTransactFinish);
        dcOLEDBConnTransactDuration = new DataColumn("TransactDuration", typeof(TimeSpan));
        dtOLEDBConnStats.Columns.Add(dcOLEDBConnTransactDuration);
        dcOLEDBConnTransactCount = new DataColumn("TransactCount", typeof(int));
        dtOLEDBConnStats.Columns.Add(dcOLEDBConnTransactCount);

        // Create the Column key and add
        // to the DataTable ...
        dcArrayKey = new DataColumn[1];
        dcArrayKey[0] = dcOLEDBConnName;
        dtOLEDBConnStats.PrimaryKey = dcArrayKey;

        initializedStr = @"INITIALIZED";
        return initializedStr;
    }

    /// <summary>
    /// OleDbConnCache's Initialized method takes
    /// no arguments;
    /// it provides a string return value giving
    /// the cache initialization state;
    /// static designation insures getting
    /// the state of the singleton cache object.
    /// </summary>
    public static string Initialized()
    {
        return initializedStr;
    }

/// <summary> /// OleDbConnCache's Add method takes /// two arguments, adding the supplied /// object argument 'oledbObj' to the Hashtable /// cache under the name given as the /// 'oledbConnName' argument; /// it provides no return value; /// internally this function calls the Add method /// that has a three argument signature /// automatically passing true as the /// third argument (see the alternate /// Add method below); /// static designation ensures adding /// to the singleton instance of the cache. /// </summary> /// <param name="oledbConnName"> /// String key identifier for the object /// 'oledbObj' to be added to the OleDbConnCache. /// </param> /// <param name="oledbObj"> /// Reference to the object 'oledbObj' /// to be added to the OleDbConnCache. /// </param> public static void Add(string oledbConnName, object oledbObj) { string addResult2 = ""; addResult2 = Add(oledbConnName, oledbObj, true); } /// <summary> /// OleDbConnCache's Add method takes /// three arguments, adding the supplied /// object argument 'oledbObj' to the /// Hashtable cache under the name given as the /// 'oledbConnName' argument; /// it provides a return value giving /// success/failure info; /// it is a type safe method that checks for /// the Type of the object submitted for /// inclusion in the cache, /// refusing to Add objects that are not of Type /// System.Data.OleDB.OleDbConnection; /// try-catch-finally exception handling is /// implemented and in the case of errors /// the return value is a string /// that describes the type of error; /// static designation insures adding to the /// singleton instance of the cache. /// </summary> /// <param name="oledbConnName"></param> /// <param name="oledbObj"></param> /// String key identifier for the object /// 'oledbObj' to be added to the OleDbConnCache. /// </param> /// <param name="oledbObj"> /// Reference to the object 'oledbObj' /// to be added to the OleDbConnCache.

/// </param> /// <param name="withExceptions"> /// Set to true if for the Add method to /// return Exception information. /// </param> /// <returns> /// A result string; if 'withExceptions' /// argument set to true the result string /// may contain Exeption information. /// </returns> public static string Add(string oledbConnName, object oledbObj, bool withExceptions) { string oledbObjTypeStr; string eStr = ""; string addResult = ""; bool exceptionOccured = false; System.Type oledbObjType; System.Data.DataRow rowStats; try { oledbObjType = oledbObj.GetType(); oledbObjTypeStr = oledbObjType.ToString(); if (oledbObjType.Equals( oledbConnType)) { System.DateTime datetimeNow = System.DateTime.Now; oledbConns[oledbConnName] = oledbObj; rowStats = dtOLEDBConnStats.NewRow(); rowStats["OLEDBConnName"] = oledbConnName; rowStats["TransactStart"] = datetimeNow; rowStats["TransactFinish"] = datetimeNow; rowStats["TransactDuration"] = new TimeSpan(0); rowStats["TransactCount"] = 0; dtOLEDBConnStats.Rows.Add(rowStats); addResult = @"ADDED"; } else { oledbObjTypeStr = oledbObjType.ToString(); addResult = @"INCORRECT TYPE"; } eStr = @"NO ERROR"; } catch (TypeLoadException tldException) { addResult = @"TypeLoad failed"; eStr = tldException.ToString(); exceptionOccured = true; } catch (NullReferenceException nRefException) { addResult = @"Null Reference"; eStr = nRefException.ToString(); exceptionOccured = true; }

catch (InvalidOperationException invalidOpException) { addResult = @"Invalid Operation."; eStr = invalidOpException.ToString(); exceptionOccured = true; } catch (Exception e) { addResult = @"Other."; eStr = e.ToString(); exceptionOccured = true; } finally { if (exceptionOccured) { // Prefix 'Exception - ' // to exception text... addResult = @"Exception - " + addResult + @"."; } } return addResult; } /// <summary> /// OleDbConnCache's ContainsKey method takes /// one argument, returning a boolean true value /// if the name supplied as 'oledbConnName' /// is contained in the Hashtable cache, /// returning a boolean false value /// if it is not contained in the cache; /// static designation ensures checking /// for the singleton instance of the cache. /// </summary> /// <param name="oledbConnName"></param> /// String key identifier used the check if object /// 'oledbObj' is contained in OleDbConnCache. /// </param> /// <returns> /// True if the 'oledbConnName' identifier /// is in the OleDbConnCache; /// False it it is not in the OleDbConnCache. /// </returns> public static bool ContainsKey(string oledbConnName) { return oledbConns.ContainsKey(oledbConnName); } /// /// /// /// /// /// /// /// <summary> OleDbConnCache's Remove method takes one argument, deleting a Hashtable entry both key and value if the 'oledbConnName' entry is found in the Hashtable cache; it provides no return value. Static designation ensures adding

/// to the singleton instance of the cache. /// </summary> /// <param name="oledbConnName"> /// String key identifier for the object 'oledbObj' /// that is to be removed from the OleDbConnCache. /// </param> public static void Remove(string oledbConnName) { string [] oledbConnNameFind = new string [1] {oledbConnName}; System.Data.DataRow rowToRemove; rowToRemove = dtOLEDBConnStats.Rows.Find((string[]) oledbConnNameFind); if (rowToRemove != null) { dtOLEDBConnStats.Rows.Remove(rowToRemove); } oledbConns.Remove(oledbConnName); } /// <summary> /// OleDbConnCache's first Get method takes /// one argument,returning a .NET object reference /// for the object that is contained in /// the Hashtable cache under the key value /// identified by 'oledbConnName'; /// if that 'oledbConnName' is not found /// in the cache the method returns 'Null'; /// Static designation ensures getting /// the singleton instance of the cache. /// </summary> /// <param name="oledbConnName"></param> /// String key identifier for the object 'oledbObj' /// that is to be acquired for use /// from the OleDbConnCache. /// </param> /// <returns> /// A reference to the object that is in /// the OleDbConnCache. /// </returns> public static object Get(string oledbConnName) { return oledbConns[oledbConnName]; } /// <summary> /// OleDbConnCache's second Get method takes one argument, /// returning a .NET object reference for the object /// that is contained in the Hashtable cache /// under the index value identified /// by 'oledbConnIndex'; /// if that 'oledbConnIndex' is not found /// in the cache the method returns 'Null'; /// Static designation ensures getting the singleton /// instance of the cache.

/// </summary> /// <param name="oledbConnIndex"> /// Integer index identifier for the object /// 'oledbObj' that is to be acquired for /// use from the OleDbConnCache. /// </param> /// <returns> /// A reference to the object that is /// in the OleDbConnCache. /// </returns> public static object Get(int oledbConnIndex) { int oledbConnCount = 0; int oledbConnIndexer = 0; string oledbConnName = ""; oledbConnCount = Count(); if ((oledbConnIndex > 0) & (oledbConnIndex <= oledbConnCount)) { foreach (string oledbConnNameCheck in oledbConns.Keys) { oledbConnIndexer = oledbConnIndexer + 1; if (oledbConnIndexer == oledbConnIndex) { oledbConnName = oledbConnNameCheck; break; } } } return oledbConns[oledbConnName]; } /// <summary> /// OleDbConnCache's Count method /// does not have any arguments; /// it returns the integer count of /// the object contained in the Hashtable cache; /// Static designation insures counting the /// singleton instance of the cache. /// </summary> /// <returns> /// An integer representing the number /// of objects in the object cache. /// </returns> public static int Count() { return oledbConns.Count; } /// /// /// /// /// /// /// /// <summary> OleDbConnCache's GetConnState method takes one argument. It returns a string giving the specific state of the OleDbConnection object named by 'oledbConnName'; try-catch checks for valid connection state information, returning error information

/// if the object gives Null Reference or /// invokes an Invalid Operation or other exception. /// Static designation insures adding to the /// singleton instance of the cache. /// </summary> /// <param name="oledbConnName"> /// String key identifier for the object /// 'oledbObj' to be added to the OleDbConnCache. /// </param> /// <returns> /// The string describing 'oledbObj' State; /// for an Exeption the string describing /// the Exception. /// </returns> public static string GetConnState(string oledbConnName) { OleDbConnection oledbConnSelected; if (ContainsKey(oledbConnName)) { oledbConnSelected = (OleDbConnection)Get(oledbConnName); try { return oledbConnSelected.State.ToString(); } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; } catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; } } else { return @"NOT IN CACHE!"; } } /// /// /// /// /// /// /// /// /// <summary> OleDbConnCache's GetServerName method has one argument. It returns the DataSource name of the server for the OleDbConnection named by 'oledbConnName' if it is in the cache; try-catch exception handling checks for problems and returns error information; static designation insures getting the name of

    /// the singleton instance of the cache.
    /// </summary>
    /// <param name="oledbConnName">
    /// String key identifier for the object
    /// 'oledbObj' to be added to the OleDbConnCache.
    /// </param>
    /// <returns>
    /// The string describing oledbObj's DataSource;
    /// for an Exception the string
    /// describing the Exception.
    /// </returns>
    public static string GetServerName(string oledbConnName)
    {
        OleDbConnection oledbConnSelected;
        if (ContainsKey(oledbConnName))
        {
            try
            {
                oledbConnSelected = (OleDbConnection)Get(oledbConnName);
                return oledbConnSelected.DataSource;
            }
            catch (NullReferenceException eNull)
            {
                string eNullStr = eNull.ToString();
                return "OBJECT IS NULL";
            }
            catch (InvalidOperationException eInvalidOp)
            {
                string eInvalidOpStr = eInvalidOp.ToString();
                return "INVALID OPERATION";
            }
            catch (Exception e)
            {
                string eStr = e.ToString();
                return "EXCEPTION";
            }
        }
        else
        {
            return @"NOT IN CACHE!";
        }
    }

    /// <summary>
    /// OleDbConnCache's GetDatabaseName has one argument.
    /// It returns the database name for the OleDbConnection named
    /// by 'oledbConnName' if it is in the cache;
    /// try-catch exception handling checks for
    /// problems and returns error information;
    /// static designation insures getting the name
    /// of the singleton instance of the cache.

/// </summary> /// <param name="oledbConnName"> /// String key identifier for the object 'oledbObj' /// to be added to the OleDbConnCache. /// </param> /// <returns> /// The string describing oledbObj's Database; /// for and Execption the string /// describing the exception. /// </returns> public static string GetDatabaseName(string oledbConnName) { OleDbConnection oledbConnSelected; if (ContainsKey(oledbConnName)) { try { oledbConnSelected = (OleDbConnection)Get(oledbConnName); return oledbConnSelected.Database; } catch (NullReferenceException eNull) { string eNullStr = eNull.ToString(); return "OBJECT IS NULL"; } catch (InvalidOperationException eInvalidOp) { string eInvalidOpStr = eInvalidOp.ToString(); return "INVALID OPERATION"; } catch (Exception e) { string eStr = e.ToString(); return "EXCEPTION"; } } else { return @"NOT IN CACHE!"; } } /// /// /// /// /// /// /// /// /// /// /// /// /// <summary> OleDbConnCache's GetConnectionString has one argument. It returns the connection string for the OleDbConnection named by 'oledbConnName' if it is in the cache; try-catch exception handling checks for problems and returns error information; static designation insures getting the string from the singleton instance of the cache. </summary> <param name="oledbConnName"> String key identifier for the object

    /// 'oledbObj' to be added to the OleDbConnCache.
    /// </param>
    /// <returns>
    /// The string representing oledbObj's ConnectionString;
    /// for an Exception the string describing the Exception.
    /// </returns>
    public static string GetConnectionString(string oledbConnName)
    {
        OleDbConnection oledbConnSelected;
        if (ContainsKey(oledbConnName))
        {
            try
            {
                oledbConnSelected = (OleDbConnection)Get(oledbConnName);
                return oledbConnSelected.ConnectionString;
            }
            catch (NullReferenceException eNull)
            {
                string eNullStr = eNull.ToString();
                return "OBJECT IS NULL";
            }
            catch (InvalidOperationException eInvalidOp)
            {
                string eInvalidOpStr = eInvalidOp.ToString();
                return "INVALID OPERATION";
            }
            catch (Exception e)
            {
                string eStr = e.ToString();
                return "EXCEPTION";
            }
        }
        else
        {
            return @"NOT IN CACHE!";
        }
    }

    /// <summary>
    /// OleDbConnCache's ExecuteNonQuery method takes three arguments.
    /// It takes the supplied 'OleDbCommandText' string and applies it
    /// as an ExecuteNonQuery call to the database of the named
    /// 'oledbConnName' OleDbConnection, if it is in the cache.
    /// Success returns the number of rows affected by the query.
    /// try-catch exception handling checks for
    /// problems and returns error information;
    /// static designation insures executing the query
    /// of the singleton instance of the cache.

/// </summary> /// <param name="oledbConnName"> /// String key identifier for the object /// 'oledbConnName' found in the OleDbConnCache. /// </param> /// <param name="OleDbCommandText"> /// String representing the OleDbCommand's Text /// to be executed by 'oledbConnName'. /// </param> /// <param name="enableMonitor"> /// Boolean true indicates that /// System.Threading.Monitor synchronization /// should be invoked when executing this method. /// </param> /// <returns> /// A string representing the result of the /// execution of the method /// a string representation of the number of /// datatable rows affected or a string describing /// any Exception that occurred. /// </returns> public static string ExecuteNonQuery(string oledbConnName,string OleDbCommandText, bool enableMonitor) { OleDbConnection oledbConnForQuery; OleDbCommand oledbCommandForQuery; int rows; int prevTransactCount = 0; string [] oledbConnNameFind = new string [1] {oledbConnName}; System.Data.DataRow rowStatsForUpdate; System.DateTime dateTimeQueryStart = DateTime.Now; System.DateTime dateTimeQueryFinish; System.TimeSpan timespanQuery; string qResult; if (oledbConns.ContainsKey(oledbConnName)) { rowStatsForUpdate = dtOLEDBConnStats.Rows.Find((string[]) oledbConnNameFind); try { oledbConnForQuery = (OleDbConnection)oledbConns [oledbConnName]; oledbCommandForQuery = new OleDbCommand(OleDbCommandText, oledbConnForQuery); if (rowStatsForUpdate != null) { rowStatsForUpdate["TransactStart"] = dateTimeQueryStart; prevTransactCount = (int)rowStatsForUpdate ["TransactCount"]; }

                if (enableMonitor)
                {
                    System.Threading.Monitor.TryEnter(oledbConnForQuery, 10000);
                    rows = oledbCommandForQuery.ExecuteNonQuery();
                    System.Threading.Monitor.Exit(oledbConnForQuery);
                }
                else
                {
                    rows = oledbCommandForQuery.ExecuteNonQuery();
                }
                dateTimeQueryFinish = DateTime.Now;
                if (rowStatsForUpdate != null)
                {
                    rowStatsForUpdate["TransactFinish"] = dateTimeQueryFinish;
                    timespanQuery = dateTimeQueryFinish - dateTimeQueryStart;
                    rowStatsForUpdate["TransactDuration"] = timespanQuery;
                    rowStatsForUpdate["TransactCount"] = prevTransactCount + 1;
                }
                qResult = rows.ToString() + " executed.";
            }
            catch (NullReferenceException eNull)
            {
                string eNullStr = eNull.ToString();
                qResult = @"OBJECT IS NULL";
            }
            catch (InvalidOperationException eInvalidOp)
            {
                string eInvalidOpStr = eInvalidOp.ToString();
                qResult = @"INVALID OPERATION";
            }
            catch (Exception e)
            {
                string eStr = e.ToString();
                qResult = @"EXCEPTION";
            }
            return qResult;
        }
        else
        {
            qResult = @"NO OLEDBCONN";
            return qResult;
        }
    }

    /// <summary>
    /// OleDbConnCache's GetExecuteNonQueryDuration
    /// has one argument.
    /// It returns the duration TimeSpan for
    /// the OleDbConnection named by 'oledbConnName'
    /// if it is in the cache.
    /// (It can be readily converted to ElapsedTime.)
    /// try-catch exception handling checks for
    /// problems and returns error information;

    /// static designation insures getting the
    /// duration of the singleton instance of the cache.
    /// </summary>
    /// <param name="oledbConnName">
    /// String key identifier (oledbConnName)
    /// for the object to be inspected for
    /// TimeSpan duration info.
    /// </param>
    /// <returns>
    /// The TimeSpan representing oledbConnName's
    /// transaction duration.
    /// TimeSpan of zero indicates no
    /// transaction occurred.
    /// </returns>
    public static TimeSpan GetExecuteNonQueryDuration(string oledbConnName)
    {
        System.TimeSpan timespanQueryDuration = new TimeSpan(0);
        string[] oledbConnNameFind = new string[1] {oledbConnName};
        System.Data.DataRow rowWithDuration;
        rowWithDuration = dtOLEDBConnStats.Rows.Find((string[])oledbConnNameFind);
        if (rowWithDuration != null)
        {
            timespanQueryDuration = (TimeSpan)rowWithDuration["TransactDuration"];
        }
        return timespanQueryDuration;
    }

    /// <summary>
    /// OleDbConnCache's GetExecuteNonQueryCount
    /// has one argument.
    /// It returns the execution count for
    /// ExecuteNonQuery calls using the OLEDBConnection
    /// named by 'oledbConnName' if it is in the cache;
    /// try-catch exception handling checks for
    /// problems and returns error information;
    /// static designation insures getting the query
    /// count of the singleton instance of the cache.
    /// </summary>
    /// <param name="oledbConnName">
    /// String key identifier (oledbConnName) for the
    /// object to be inspected for count info.
    /// </param>
    /// <returns>
    /// The integer representing oledbConnName's
    /// transaction count.
    /// Count of zero indicates
    /// no transactions have occurred.
    /// </returns>

    public static int GetExecuteNonQueryCount(string oledbConnName)
    {
        int transactCount = 0;
        string[] oledbConnNameFind = new string[1] {oledbConnName};
        System.Data.DataRow rowWithCount;
        rowWithCount = dtOLEDBConnStats.Rows.Find((string[])oledbConnNameFind);
        if (rowWithCount != null)
        {
            transactCount = (int)rowWithCount["TransactCount"];
        }
        return transactCount;
    }
}
}
// END OF OBJECTCACHEEXT.DLL SOURCE CODE
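
The following fragment is not part of the original ObjectCacheExt.dll listing. It is a minimal, hypothetical C# sketch of how a hosting script or application might exercise the OleDbConnCache class defined above; the provider string, server, database, table, and cache key names are placeholders only, and the sketch assumes the cache class is referenced from its containing namespace.

// Hypothetical usage sketch; connection details and the
// "PlantDB" key name are placeholders, not part of the listing.
using System;
using System.Data.OleDb;

public class OleDbConnCacheUsageSketch
{
    public static void Main()
    {
        // Build the cache Hashtable and its statistics DataSet once.
        OleDbConnCache.Initialize();

        // Create and open an OleDbConnection, then add it to the
        // cache under a key name. Add() rejects any other object type.
        OleDbConnection conn = new OleDbConnection(
            "Provider=SQLOLEDB;Data Source=MyServer;" +
            "Initial Catalog=MyDatabase;Integrated Security=SSPI;");
        conn.Open();
        string addResult = OleDbConnCache.Add("PlantDB", conn, true);

        // Execute a non-query against the cached connection with
        // Monitor synchronization enabled, then read the statistics.
        string queryResult = OleDbConnCache.ExecuteNonQuery(
            "PlantDB",
            "UPDATE MyTable SET Flag = 1 WHERE Flag = 0",
            true);
        TimeSpan duration = OleDbConnCache.GetExecuteNonQueryDuration("PlantDB");
        int count = OleDbConnCache.GetExecuteNonQueryCount("PlantDB");
        Console.WriteLine(addResult + " / " + queryResult + " / " +
            duration.ToString() + " / " + count.ToString());

        // Remove the connection from the cache when finished.
        OleDbConnCache.Remove("PlantDB");
        conn.Close();
    }
}

In a deployed galaxy the equivalent calls would typically be made from QuickScript .NET scripts, as in the $SqlConnCacheMgr examples referenced earlier in this guide, rather than from a standalone program.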


Index
A
ActiveFactory, FactorySuite A2 System integration 92 Alarm DB Manager, FactorySuite A2 System integration 97 Alarm Logger as alarm consumer 217 installation 217 alarms 97 and SCADAlarm 97 client/server network topology 216 consumers 213 distributed local network topology 215 general considerations 214 InTouch Alarm Logger 217 providers 213 queries 214 redundant configuration 78 topology categories 215 Windows XP 41 WWAlarmDBLogger in WAN 263 $AnalogDevice template 112 Anti-Virus software FactorySuite A2 Systems 53 files exclusion list 283 AppEngine checkpoint attributes 232 relocation 84 scan interval execution order 126 store forward attributes 77 tuning redundant engine attributes 87 undeploy scenarios 84 ApplicationObject example interactions 155 areas, defining area model 22 AutomationObject Server node client/server 40 configuration and run-time component requirements 32 DIObjects 42 function 32 Terminal Services on same node 48 workstation 37 workstation node 37 containment vs. UDAs 111 deriving templates and instances 117 determining alarm topology 215 determining available bandwidth 52 distributing network traffic 86 DT Analyst and FactorySuite Gateway 99 exporting/deploying objects and script functions 146 fine-tuning protocol settings 52 firewall use 53 fixed IP address 51 for each template type 112 galaxy time synchronization 53 load balancing in SCADA system 265 multiple engines 233 platform deployment in a Load Shared configuration 75 redundancy configuration 62 redundant connection configuration 64 renaming each local connection 86 scripting at the template level 122, 130 set condition function 275 template re-use in different galaxies 118 testing quality value of an attribute 274 topology configuration 51 Universal Time Synchonization across SCADA systems 252 using UDAs and extensions 116 verifying basic network communication 52 verifying connection settings 51 bulk operations, sizing and performance 232

C
checkpoint after failover 76 AppEngine execution phase 126 attributes 232 redundancy synchronization 63 scripting considerations 75 .Starting_FromCheckpoint attribute 75 UDAs 117, 124 client/server topology AutomationObject Server node 40 dedicated I/O Server nodes 42 I/O Server node 42 components, installation order 53 connectivity tools 106

B
backup and restore, secured administrator password 277 backup vs. export 278 best practice alarm and historical databases 35 alarm buffering for loss prevention 217 alarm system in client/server network topology 217 alarm system in distributed local network topology 215 backing up the galaxy 121 backup and restore 278

D
DAServer Manager 275 Data Access Server (DAServer) 54 database, configuring growth options 220 DCOM ports 207 ports listing 202, 203 security 200 dedicated standby configuration performance 242 redundancy 67

deploying objects, large and very large system 241 Device Integration Object See DIObject diagnostic tools 273 A AppServer Script Counter 276 A Template Counter 276 System Management Console (SMC) 275 testing attribute data quality 274 Windows Event Viewer 279 Windows Performance Monitor 279 DIObject advantages 55 configuration requirements 66 interfaces with I/O 106 performance notes 245 redundant DIObjects 65 redundant server name recommendations 65 run-time behavior with redundancy 67 $DiscreteDevice template 112 disk space create objects 223 data for initial installation 222 deploying object instances 225 estimating AppEngine XP node 226 importing objects 224 object instantiation 225 QuickScripts 225 sp_spaceused stored procedure 224 templates 223 Distributed COM See DCOM DT Analyst, FactorySuite A2 System integration 98

SPCPro 96 SQL Access Manager 95 SuiteVoyager Portal 36, 97 WindowMaker 93 WindowViewer 93 failover causes 80 sizing and performance data 242 using Startup and OnScan scripts 76 field devices, sizing and performance issues 230 $FieldReference template 113

G
galaxy backup and restore 277 definition 30 re-using templates in different galaxies 118 Galaxy Database Manager 275 Galaxy Repository configuration and run-time component requirements 32 configuring database growth options 220 function 31 installation options 31 installing in a distributed network topology 31 GR See Galaxy Repository

H
Historian node configuration and run-time component requirements 35 definition 35 fine-tune primitive for SCADA system 261 standalone workstation topology 39 historical data, redundant processing behavior 77

E
Engineering Station node configuration and run-time component requirements 34 definition 34 function 34 standalone workstation topology 39 errors, .NET error handing 170 events deployment issues 213 See also alarms

I
I/O Server node best practice 34 client/server topology 42 configuration and run-time component requirements 33 dedicated node 43 function 33 standalone workstation topology 38 I/O Servers connectivity 54 running on AutomationObject Server nodes 42 IAS as alarm provider 217 Engineering Station node for application maintenance 39 fixed IP address 51 I/O Server node data source 33 integration with DT Analyst 99 integration with InBatch 102 integration with IndustrialSQL Server Historian 92 integration with InTouch HMI 93 integration with InTrack 99 integration with QI Analyst 98

F
FactorySuite A2 System integration ActiveFactory 92 Alarm DB Manager 97 DA Servers 107 DT Analyst 98 FactorySuite Gateway (FS Gateway) 99 FS Gateway 106 I/O Servers 106 InBatch 102 InControl 108 IndustrialSQL Server Historian 92 InTouch HMI 93 InTrack 99 Microsoft SQL Server 108 other system components 101 Production Event Module (PEM) Objects 107 QI Analyst 98 SCADAlarm 97


large systems 237 licensing requirements 54 Panel PCs 96 topology considerations 41 IDE AutomationObject Server node 32 distributed environment configuration components 32 Engineering Station node 34 Terminal Services 49 InBatch FactorySuite A2 System integration 102 production system requirements 103 WindowsXP client 103 Industrial Application Server See IAS IndustrialSQL Server Historian FactorySuite A2 System integration 92 integration with IAS 92 installation, order for components 53 Integrated Development Environment See IDE integration FactorySuite A2 System application integration 91 network utilization 94 tablet and panel PCs 96 InTouch HMI Alarm Logger 217 dedicated Visualization nodes 79 FactorySuite A2 System integration 93 SmartSymbols 93 InTrack, FactorySuite A2 System integration 99

check communications between nodes 27 communication time-outs 256 diagnostic and maintenance tools 274 Windows XP 41 ObjectCache.dll, implementation example 146

P
performance bulk operations 232 checkpointing 232 multiple engines 233 permissions 26 Platform Manager 276 Production Event Module (PEM) Objects 107 project planning define functional requirements 20 define naming conventions 21 define object shape templates 23 define the deployment model 27 define the security model 26 defining area model 22 identify field devices 18 .NET project planning 147 protocols Dynamic Data Exchange (DDE) 33, 59 FS Gateway translator 106 Message Exchange (MX) 57, 67 Open Process Control (OPC) 33, 57, 58 Remote Desktop Protocol (RDP) 57 SuiteLink 33, 57, 59 summary list 57

L
load balancing, Terminal Services 47 load sharing CPU levels 68 redundant configuration 68 restore after failover script example 245 sizing and performance 244 Log Viewer 276

Q
QI Analyst, FactorySuite A2 System integration 98 QuickScript .NET ApplicationObject interactions 155 define project scope and requirements 147 definition 144 error handling 170 handling data quality 138 introduction 133 ObjectCache.dll 146 script example "ConnectTo" 168 script example "DisconnectFrom" 169 script example "Initialize" 158 script example "ManageConnections" 163 script example "PostDataNow" 177 script example "RandomizeNow" 182 script example "TryConnectNow" 172 script example "TryConnectOnFalse" 176 scripting database access 150 scripting practices 145 shape template objects 148 SqlConnCache function calls 152 syntax differences 146 template object examples 155

M
maintenance tools, Galaxy Database Manager 277 Microsoft SQL Server configuring database growth options 220 FactorySuite A2 System integration 108

N
.NET See QuickScript .NET network utilization 94 and galaxy reference to remote node 94 subscriptions 95 networks large 237 topology categories 37

O
object templates, derivation 24 Object Viewer assess execution time for multiple engines 233 assess failover performance 243

R
RAM create objects 223 data for initial installation 222 sizing guidelines 220


templates 223 Recipe Manager, integration with Industrial Application Server 95 redundancy alarms 78 AppEngine states 63 checkpoint item synchronization 63 configuration combinations 67 CPU levels for load sharing 68 dedicated standby configuration 67 dedicated Visualization nodes 79 definition 62 DIObject configuration requirements 66 DIObject run-time behavior 67 failover causes 80 historical data processing 77 load shared configuration 68 NIC configuration 63 object deployment considerations 74 redundant DIObject 65 redundant engine configuration 61 Remote Partner Address (RPA) 73 run-time considerations 73 script behaviors 75 scripting considerations 75 server name recommendations 65 store forward attributes 77 system checklist 85 system requirements 62 timeout parameters in large systems 86 tuning redundant engine attributes 87 Redundant Message Channel (RMC) configuring the connection 64 establishing RMC communication on engine start 73 remote nodes, communication 231 roles 26, 204

S
SCADA distributed IDE 259 distributed InTouch HMIs 265 distributed topologies 259 load balancing 265 security 206, 256 system test benchmarks 268 tuning Historian primitive 261 tuning timeout and heartbeat attributes 264 Universal Time Synchronization 252 wide-area networks overview 250 WWAlarmDBLogger 263 SCADAlarm, FactorySuite A2 System integration 97 script behaviors active engine after failover 76 optimizing dynamic referencing and Data Change scripts 131 re-initiate asynchronous scripts after failover 76 scripting asynchronous 76 common Startup and OnScan use for failover 76 data quality controlled execution 141 data quality propagation 139 dimensioning statements 146

handling data quality 138 .NET practices 145 QuickScript .NET 133, 144 SqlConnCache function calls 152 syntax differences in .NET 146 security Anti-Virus software 53, 195, 283 configuration 206 DCOM considerations 200 DCOM ports 207 DCOM recommendations 208 define the model 26 OS Group Based considerations 205 user account requirements 53 user interface 204 security groups 26, 205 shape template objects 148 defined 23 modeling .NET implementation 148 sizing and performance baseline CPU utilization 229 baseline hardware specifications 229 bulk operations 232 checkpoint attributes 232 communicating with remote nodes 231 communication to field devices 230 configuration environments 230 dedicated standby configuration 242 failover data 242 Galaxy Repository on Engineering Station 231 large distributed topology 237 load shared configuration 244 multiple engines 233 single node system 234 topologies 230 Unit Application 227 very large system topology 239 SMC, installing on Windows XP 41 SPCPro, FactorySuite A2 System integration 96 SQL Access Manager, FactorySuite A2 System integration 95 $SqlConnCacheMgr, shape object 151 standalone workstation topology Engineering node 39 Historian node 39 I/O Server node 38 SuiteVoyager Portal configuration and run-time component requirements 36 FactorySuite A2 System integration 36, 97 $Switch template 113 System Management Console (SMC) See SMC System Restore, disk space estimation 226

T
tablet and panel PCs 96 template modeling complex reactor 115 discrete valve example 114 templates $AnalogDevice 112 containment 24 derivation 24


$Discrete 112 disk space and RAM 223 $FieldReference 113 .NET examples 155 re-use in different galaxies 118 shape 23 $Switch 113 $UserDefined 113, 150 Terminal Server 48 Terminal Services AutomationObject Server node 48 dedicated node 46 server-based load balancing 47 topology categories 45 Using the IDE 49 Windows Server 2003 46 Windows2000 Server 45 time synchronization galaxy 53 SCADA systems 252 topologies deploying objects in a large or very large system 241 Galaxy Repository installation 231 sizing and performance data 230 topology categories about 37 alarms 215 client/server 40 components list 30 distributed local network 37 general considerations 41 introduction 29 standalone workstation 37 Terminal Services 45 widely-distributed networks 50 tuning Platforms and Engines Historian in SCADA environments 261 redundancy attributes 87 redundancy in large systems 86 timeout and heartbeat attributes in SCADA system 264

U
UDA best practice 116 checkpointing 117 function 116 Input/Output 142 locking 116 using the OR operator 162 Unit Application, definition 227 user interface, security 204 $UserDefined template 113, 150

V
Visualization node configuration and run-time component requirements 33 function 33 redundant configuration 79 workstation 37

W
widely-distributed networks See SCADA WindowMaker, FactorySuite A2 System integration 93 Windows Server 2003 distributed networks 38 Terminal Services 46 Windows XP connections 38 connections in client/server topology 41 InBatch 103 Object Viewer 41 System Restore 226 Windows2000 Server, Terminal Services 45 WindowViewer, FactorySuite A2 System integration 93 $WinPlatform, communicating with remote nodes 231 workstation node, definition and context 37 WWAlarmDBLogger, deploying in WAN environment 263

