
Distributed Control Systems (DCS)

Lecture 1

CERN School of Computing, September

Wayne Salter, IT/CO
Contents of the Lectures

◆ Lecture 1: DCS systems, their use at CERN and the technologies employed
◆ Lecture 2: SCADA systems, and in particular the one chosen by the LHC experiments; the differences between an LHC experiment and industry; the use of FSM and the Framework
◆ Lecture 3: Specific features of PVSS as required for the exercises
◆ Tutorial/practicals

W. Salter, CERN School of Computing
Contents of this Lecture

◆ What is a Control System?
◆ What is a Distributed Control System (DCS)?
◆ How are DCSs used at CERN?
◆ What is an Experiment Control System (ECS)?
◆ What technologies were used in the LEP era?
◆ What technologies will be used in the LHC era?

The term ECS is used in preference to Detector Control System (DCS) throughout
these lectures to avoid confusion with Distributed Control System (DCS). Wherever
DCS appears, it therefore means Distributed Control System.

What is a Control System?
What is a Control System? (I)

◆ A set of hardware and software components
◆ Monitoring and control (supervision) of one or more devices (the process)
◆ Control either automated or via user intervention
◆ Monitoring and control of multiple devices (the process) in an integrated manner

A control system is a set of hardware and software components used to monitor and control
one or more devices. When multiple devices are controlled in a co-ordinated fashion,
this collection of devices is often referred to as the "process". A control system may be
passive; that is, it provides information on the status of the process to an operator,
who then decides whether control actions are required and, if so, submits the necessary
commands to the system. The control system then passes these commands to the appropriate
device(s) in the form understood by each device. For a simple device this may be an
electrical signal with well-defined characteristics, whereas for a more complex
device it may be a complex formatted message. Control systems are often not purely
passive, however, and in many cases may be programmed to perform actions in an automated fashion.

What is a Control System? (II)

◆ Field Management
  ✦ Input/Output (I/O): sensors and actuators
  ✦ Signal processing, e.g. Analogue/Digital Conversion (ADC)
  ✦ Signal concentration
◆ Process Management
  ✦ Closed-loop processing, e.g. a Proportional-Integral-Derivative (PID) controller
  ✦ Data concentration and reduction
◆ Supervision
  ✦ Human-Machine Interface (HMI)
  ✦ Alarm handling
  ✦ Logging/archiving
  ✦ Higher-level processing (analysis, automation)
  ✦ Interfacing to other systems

Warning: the above distribution of functionality is a guide only!

The functionality of a control system may be broken down into three logical levels. At the
lowest level, the Field Management level is responsible for the physical interaction with the
devices connected to the control system. As such, the field level acquires input signals and
distributes output signals. The Input/Output (I/O) signals are processed at this level to
convert them to a form understood by the higher levels of the control system; a typical
example would be an analogue-to-digital conversion (ADC). Typically the signals are
acquired through individual connections. Another role of the Field Management level is to
reduce the large number of I/O connections to a smaller number of connections to the next
level in the control system (for example via a field bus). In this way multiple I/O parameter
values can be transported over a single cable to the higher levels in the control system. At the
Process Management level any closed-loop control processing (often called regulation) that
is required is performed. As an example, if the measured room temperature were lower than
the desired value, the output of a heating device might be increased; when the room
temperature returned to the desired value, the output would once again be reduced. Also at this
level some data reduction might be performed. One example would be that I/O parameters
required for the closed-loop processing but not required by the operator of the system may
not be passed to the next level in the control system. Another form of data reduction might
be that only parameter values that have changed by more than a certain defined amount are
passed upwards. Again, parameters received from multiple devices at the field level may be
communicated to the Supervision level by a reduced number of connections. The Supervision
level provides the interface to the operator of the system, as well as providing functions such
as alarm handling, logging and interfacing to other systems. At this level some form of
higher-level processing, which combines information from many of the lower-level elements, is
performed. An example of this might be Finite State Machine (FSM) modelling of the
system for automated control. This topic is covered in more detail in Lecture 2.
It should be noted, however, that this is a guide only and that in real-world control systems
the distribution of functionality is much less obvious.
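The closed-loop regulation and dead-band data reduction described above can be sketched in a few lines. This is an illustrative sketch only, not CERN code; the names (`regulate`, `DeadbandFilter`) are invented, and a real Process Management node would run such logic cyclically against live I/O.

```python
# Sketch of two Process Management functions: proportional closed-loop
# regulation and dead-band filtering before data is passed to Supervision.

def regulate(measured, setpoint, gain=0.5):
    """Proportional control: the correction grows with the error."""
    error = setpoint - measured
    return gain * error          # positive -> increase heater output

class DeadbandFilter:
    """Forward a value upwards only if it moved by more than `band`."""
    def __init__(self, band):
        self.band = band
        self.last = None

    def report(self, value):
        if self.last is None or abs(value - self.last) > self.band:
            self.last = value
            return value          # significant change: pass to Supervision
        return None               # suppressed: reduces upstream traffic

f = DeadbandFilter(band=0.5)
print(regulate(measured=19.0, setpoint=21.0))     # 1.0 -> heat more
print(f.report(19.0), f.report(19.2), f.report(20.0))   # 19.0 None 20.0
```

The dead band trades reporting fidelity for bandwidth: small sensor jitter never leaves the field level, while any genuine change is propagated at once.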

What is a Distributed Control System?
What is a Distributed Control System?

◆ In DCS the term "distributed" is important
◆ But what distribution?
  ✦ Geographical/physical
  ✦ Functional
  ✦ Risk
  ✦ Load

Note: commercial DCS systems are not covered here!

The definition of distributed control is a bit muddy, as the term is used in different ways in
different application domains and by different vendors of commercial DCS systems. In
industry the term DCS is often applied to a particular type of control system sold as a
proprietary system by a number of vendors, in which all hardware and software components
are provided by that vendor. In these lectures we will not cover such systems but look more
into the concept of distributed control in general, and in particular how it is used at CERN.

Distributed control is basically the distribution of control system elements/functions within
an overall system. One can consider four different aspects of distributed control: physical
(geographical) distribution, functional distribution, risk distribution and load distribution.

Physical/Geographical Distribution

[Diagram: a control station in the Control Room connected over a long distance to an I/O device and the process in the Experimental Cavern. I/O = Input/Output.]

Physical, or geographical, distribution is where the control elements/functions are located in
different physical locations. In its purest form, the control of a system is physically
distributed as close as possible to the equipment under control. At CERN, however, the
physical distribution may mean having some elements of the control system as close as
possible to the system being controlled, for instance to reduce sensor cable lengths, whilst
keeping other elements further away from the system to avoid difficult environments
(magnetic fields and/or radiation). Furthermore, the systems being controlled are often so large that
some form of physical distribution is inevitable.

Functional Distribution

[Diagram: in the Control Room, separate Archive Manager, Alarm Handler, Event Manager and HMI Station machines on a LAN, connected down to an I/O device and the process in the Experimental Cavern.]

Functional distribution is where the various control functions are concentrated in different
locations. This is where, for instance, the process control function is located in one computer,
the HMI function in another, the archiving function in another, the expert system in another,
and so on.

Risk Distribution

[Diagram: multiple control stations in the Control Room, with duplicated LAN connections down to I/O devices and the process in the Experimental Cavern.]

Risk distribution is where the risk of losing the complete control system is reduced by
distributing the system over different computers. Here one can also consider the concept of
redundancy, where one or more elements of the system may be duplicated to avoid losing that
part of the system due to a single failure. However, when considering redundancy it is
important to consider all aspects of it. There was a case where, although a power station
used redundant systems, all cables passed through the same cable ducts, and a fire in the
ducts therefore impacted both systems.

Load Distribution

[Diagram: several HMI stations in the Control Room sharing the load over a LAN, connected to I/O devices and the process in the Experimental Cavern.]

Load distribution is where different elements of the overall control system are spread over
multiple computers in order to reduce the load on individual machines and therefore to
ensure satisfactory performance. At CERN, where many of the control systems become
extremely large, such an approach is essential. It is not possible to run control systems of the
size required by an LHC experiment, for instance, on a single machine. In fact, for the LHC
experiments the control systems will comprise many tens of computers.

Within a typical DCS more than one type of distribution is often incorporated. For larger
systems some form of load/risk distribution is inevitable, and for systems used at CERN there
is almost always some form of physical/geographical distribution. Due to the three-tiered
architecture typically employed, there is also functional distribution.

How are Distributed Control Systems used at CERN?
Accelerator Control

There are two major control systems, and associated control rooms, at CERN involved in
accelerator control: one responsible for the PS complex and the other for the SPS/LEP (LHC).
Given the size of the accelerators and the associated distribution of equipment, there is a
large amount of geographical distribution in the control systems. Much of the equipment
being controlled is specialised custom-made hardware, including beam monitoring and
diagnostics, radio frequency and magnet control.
A very important aspect of the control of the accelerators is the synchronisation of the
various actions of the system in order to control the beam with great precision. For the LHC
the synchronisation will be even more important, as bunch crossings will occur at 40 MHz,
i.e. only 25 ns between successive bunch crossings.
The current control systems are based on custom-developed front-ends, built on VME with
the processor running a real-time operating system (LynxOS) and connecting to the
hardware, together with Unix operator workstations running specialised accelerator physics
programs and providing the user interface.
In the development of the LHC systems there is an increased move to use commercial
equipment, notably PLCs and WorldFip (a radiation-resistant field bus and associated I/O
modules), where possible. However, this will not remove the need for a large amount of
custom equipment.

Technical Services

• Electrical distribution
• Cryogenics
• Cooling and ventilation
• Vacuum
• Radiation monitoring
• Access control
• Central alarm system
• Gas distribution

•Electrical distribution - the control system has to monitor and control the distribution of
power at different voltages across a site spread over an area in excess of 60 km²
•Cryogenics - 27 km of superconducting structure (the largest in the world)
•Cooling and ventilation - for the accelerators, experimental areas and the computer centre
•Vacuum - 27 km of vacuum structure
•Radiation monitoring - high-reliability system
•Access control - high-reliability system
•Central alarm system - high-reliability system
•Gas distribution - at several points around the accelerator, as well as for some 24 individual
sub-detector gas systems

Again there is a lot of geographical/physical distribution involved, as these services are
provided across the whole of CERN and the underground areas.
These systems are in many ways very similar to industrial systems, and as a result they use
mostly commercial components. Furthermore, as there is a lot of industrial experience in
these areas, these systems are mostly turnkey solutions based on standard
commercial technologies.

Experiment Control: ATLAS

The figure shows an artist's impression of ATLAS, one of the four new experiments being
built to operate with the LHC; the others are ALICE, CMS and LHCb. These experiments
are built and operated by large collaborations which are financially and organisationally
independent of CERN. As such, CERN is just one of many collaborating institutes in such
a collaboration.
These new-generation experiments will have up to many millions of physics data channels,
which are read out at a frequency of 40 MHz, producing raw data rates as high as 1 petabyte
per second (see Livio Mappeli's lectures). This is not the responsibility of the ECS.
However, these experiments will nonetheless have many hundreds of thousands of controls
channels which are the responsibility of the ECS. In addition, the detectors will be in a harsh
magnetic and radiation environment, which leads to the need to move as much of the ECS
equipment out of the cavern as possible. Nonetheless, it will still be necessary to have I/O
readout units in the cavern close to the detectors, and these must of course be able to
withstand the harsh environment there.

What is an Experiment Control System?
ECS Context Diagram

[Context diagram: the Control System at the centre exchanges commands, settings, display information and status with the Experimenter; configuration data with the Config Database; accelerator status and commands with the Accelerator; safety alarms with the Experiment Safety System; event-related controls data with the Data Store; and commands, settings and status with the Infrastructure Systems and the Detector.]

The figure shows an overview of the ECS and its relationship to other systems. It can be
seen that the ECS, although not responsible for the handling of the physics data, still plays an
important role in the overall running of an experiment and must interface to a large number
of other systems.

Experiment Control (ECS)

◆ Data Acquisition and Trigger (Run Control)
  ✦ Configuration, partitioning and monitoring of Data Acquisition (DAQ)
◆ Detector Control (Slow Control)
  ✦ Gas, High Voltage, Low Voltage, temperatures, ...
◆ Experimental Infrastructures
  ✦ Cooling, ventilation, electricity distribution
◆ Interaction with the outside world
  ✦ Accelerator system, magnet, data storage, offline analysis, safety system

As seen in the previous slide, the ECS has many roles to play, and these various roles may be
assigned to different systems.

Differing ECS Philosophies

[Diagram of four architectures: independent DAQ and SC systems (OPAL); DAQ and SC under a common Run Control layer (CMS, ATLAS); a Run Control/ECS layer built on one of the two technologies (ALICE?); and a fully integrated ECS covering Run Control, DAQ and SC (ALEPH, DELPHI, LHCb).]

There are differing philosophies with respect to the ECS in the various experiments, and the
figure above shows some of these. Firstly, one can imagine the Slow Control (SC) system being
independent of the Data Acquisition (DAQ) system, since the SC is required to run
permanently whereas the DAQ system is typically only required to run for physics data
taking or calibration. In this case there is an interface between the two systems to allow an
exchange of data. This is the approach that was adopted by OPAL and may be adopted by
ATLAS. An alternative approach is to have separate DAQ and SC systems but to build a
layer on top, Run Control, from which both are controlled. Then there is the question of
which technology this layer is based on, DAQ or SC; ALICE and CMS each seem to be adopting one
of these approaches. A fourth approach is to base the DAQ, SC and Run Control systems on
the same technology, in which case the ECS is a close integration of the three systems.

What Technologies were used in the LEP Era?
Controls Technologies (LEP Era)

[Diagram, courtesy of DELPHI: central and sub-detector workstations connected over Ethernet (TCP/IP) to processors and event builders, with sub-detector slow-control front-ends housed in Fastbus crates.]

In the LEP era each experiment (ALEPH, DELPHI, L3 and OPAL) built its own ECS. Even
within an experiment it was difficult to reach agreement on a common approach, and
hence at the start-up of LEP the various sub-detectors of many of the experiments had to be
run independently with their own control systems. During the operation of LEP this
improved, and by the end most experiments were running with a common control system for
all sub-detectors. Nonetheless, there were some exceptions right up until the end of LEP.
The front-end systems made extensive use of readout modules in Fastbus, CAMAC, G64 and VME
crates. These were usually built in some sort of hierarchical fashion, as shown in the figure
above. Above the front-end systems a number of different approaches were adopted: ALEPH
and DELPHI adopted VMS-based systems, whereas the other experiments used a mixture of
UNIX- and Mac-based systems.
All the software, both in the front-end systems and in the supervisory-level
machines, was custom-developed. Furthermore, although some commercial readout
modules were used, many custom modules were also developed.

Introduction of a New Topic: What is the approach for LHC?
LHC Era: Joint Controls Project (JCOP)

◆ Collaboration between the LHC experiments and the relevant CERN support groups (a major achievement!)
◆ Aim of providing common controls solutions
◆ Many sub-projects
◆ Commercial solutions where possible
◆ Custom solutions where needed
◆ Frameworks

Due to reducing resources, and to improve on the situation seen in the LEP era, a project was
set up in 1998 to develop a common Framework and components for detector control in the
LHC experiments. This so-called Joint Controls Project (JCOP) is a collaboration between
the four LHC experiments and the relevant CERN support groups. JCOP aims to provide
standard controls solutions where the experiments have common needs, and has
already established a number of sub-projects to look at various controls aspects. JCOP's aim
is to use commercial solutions wherever possible to reduce long-term maintenance
needs. However, this is not possible in all cases, and some custom solutions will need to be
developed, integrated with the commercial solutions, and then maintained by JCOP and
the relevant CERN support groups. To further ease the development of ECS systems, a
number of Frameworks providing standard features and methods to develop specific
components are being provided. These will be covered in more detail in the second lecture.

What Technologies will be used in the LHC Era?
Controls Technologies (LHC Era)

Layer structure, with commercial and custom technologies at each level:
- Storage: configuration DB, archives, log files, etc.
- Supervision (WAN/LAN): SCADA (commercial), FSM (custom)
- Process Management (LAN; controller/PLC and VME front-ends; links to other systems such as the LHC and safety): middleware OPC (commercial) and DIM (custom); front-ends PLC/UNICOS (commercial) and VME/SLiC (custom)
- Field Management: field buses and nodes
- Experimental equipment: sensors/devices

Based on an original idea from LHCb.

The above figure shows an overview of an example control system hardware architecture
with the standard technologies being proposed by JCOP on the right-hand side.

Technology: Common Solutions

◆ Experimental Equipment
  ✦ High Voltage, Low Voltage, Temperature, Pressure, ...
◆ Field bus nodes
  ✦ Beckhoff, WAGO, AnyBus, Fieldpoint, ELMB, Credit-Card PC
◆ Field buses
  ✦ CANBus, ProfiBus, WorldFip, Ethernet?
◆ Front-end computers
  ✦ Programmable Logic Controllers (PLCs), VME, PCs
◆ Middleware
  ✦ OLE for Process Control (OPC), Distributed Information Manager (DIM)
◆ Databases
  ✦ Oracle, Objectivity
◆ Supervision
  ✦ PVSS (Supervisory Control And Data Acquisition, SCADA)
  ✦ SMI++ (Finite State Machine, FSM)

Examples of the various technologies being employed by JCOP are shown above. It is
interesting to note that in most areas both commercial and custom solutions are foreseen.
Examples will now be given for each of these areas.

(The LHC-era controls technologies overview figure is shown again as a roadmap.)

Example: WAGO modules

◆ Analogue and digital I/O and serial lines
◆ Supports multiple field buses
◆ Provides standard electrical interfaces

[Photograph: module, ~10 cm across.]

This is an example of an industrial bus controller. This Beckhoff module supports Profibus,
CANopen and Interbus-S. I/O modules are attached via an internal, proprietary bus to
the bus controller. Standard I/O interfaces are provided, such as analogue and digital I/O as
well as serial lines, and these are based on standard electrical interfaces. The I/Os handled by
the modules are made accessible by the bus controller over the selected field bus. These are
simple and small devices with no local processing capability. A rather large set of I/O
interfaces is supported, but the number of channels per module is limited (up to 4).

Example: HMS AnyBus

◆ To interface custom equipment to field buses without being an expert on field buses
◆ Integrated directly onto the electronics board

The HMS AnyBus solution is an example of a card which implements various field bus
protocols and which can be integrated by sub-detector physicists directly onto their
electronics boards. These cards are small (Credit Card size) and their interface to the device
is common for all the supported field buses. A large number of field buses are supported:
Profibus-DP, InterBus, DeviceNet, ControlNet, CANopen, CAN Kingdom, LonWorks,
Modbus Plus, FIPIO. This provides a certain, but limited, amount of local processing and
allows a non-field bus expert to easily integrate his equipment with a field bus.

Example: Credit-Card PC

◆ Basically a PC in credit-card format
◆ Has an Ethernet interface
◆ Runs any software
◆ Installs directly on the board
◆ Connects to peripheral equipment via a standard bus

[Diagram: several front-end (FE) boards, each carrying an on-board PC connected to Ethernet and driving I2C and JTAG buses.]

Although not strictly a field bus node, the Credit-Card PC (CCPC) fulfils a similar role to the
AnyBus module. The CCPC can be integrated into a proprietary board design in a similar
way to the AnyBus module. However, it is a far more flexible solution, as it has a standard
Ethernet connection, eliminating the need for specialised knowledge of any particular field
bus, and the full power of a PC (up to 500 MHz). This allows advanced local processing to be
performed directly on the board, reducing the load on the higher levels of the control system.
The CCPC provides a number of standard interfaces to custom electronics, in particular
I2C and JTAG, which are foreseen for use in many of the LHC experiments.

Field bus nodes (Custom)

◆ Embedded Local Monitor Box (ELMB)
  ✦ General-purpose CANopen module
  ✦ Developed for ATLAS
  ✦ Will be used by other experiments
  ✦ Many channels (analogue and digital)
  ✦ Radiation resistant
  ✦ Supports high magnetic field

The Embedded Local Monitor Box (ELMB) is a custom CANopen node being developed by
ATLAS. The advantage of this node over standard commercial nodes, such as the Beckhoff
module previously shown, is its much higher channel density (64 AI, 16 DO and 8 DI in a
single ELMB module, cf. 4 for a Beckhoff module) and hence the reduced cost per
channel, and space, for applications with very large numbers of channels. In
addition, it has been designed to operate in the adverse magnetic and radiation
environment foreseen for much of the ATLAS detector. Nonetheless, it is still not possible to
use this module in some highly radioactive regions of the detector.

(The LHC-era controls technologies overview figure is shown again as a roadmap.)

Field bus (Commercial)

◆ Standardisation on three field buses at CERN
◆ Chosen for complementary features
◆ Felt to cover all CERN needs at that time
  ✦ CANBus
    - Used in the automotive industry; cheap chips
  ✦ Profibus
    - Equipment availability
  ✦ WorldFip
    - Deterministic, redundant medium
    - Radiation resistant
◆ Consideration of Ethernet as a field bus
  ✦ Emerging as a field network
  ✦ Programmable Logic Controllers (PLCs) going to TCP/IP
  ✦ Standard components (however more costly)
  ✦ Bandwidth

A field bus is a data transport medium typically used in the domain of process control for
connecting field devices to PLCs or SCADA systems. A field bus can also be used for
connecting PLCs together or for connecting PLCs to SCADA systems. A Field Bus Working
Group was set up at CERN in 1995 and came up with a recommendation in 1996 for the use
of three standard field buses at CERN, namely CANBus, Profibus and WorldFip. These
were chosen for their complementary capabilities and were felt to cover the full needs of
CERN and the experiments at that time. The Working Group has recently started to review
the current situation; it has identified the emergence of Ethernet as a field bus and is now
evaluating it as a possible fourth recommended field bus.

(The LHC-era controls technologies overview figure is shown again as a roadmap.)

Front-ends (Commercial)

◆ Programmable Logic Controllers (PLCs)
  ✦ Diskless computers with a set of standard hardware interfaces
  ✦ CERN has chosen Schneider and Siemens
  ✦ Special worldwide agreement and discounts
  ✦ Standardised set of programming languages (IEC 1131-3)
  ✦ But each manufacturer has its own development tools
◆ Unified Industrial Control System (UNICOS)
  ✦ PLC object-oriented library
  ✦ Developed for the cryogenics control system

PLCs are diskless compact computers which include a set of hardware interfaces to connect
to standard process control devices (I/O modules and field bus interfaces). They are
generally used for automatic control applications (closed-loop control, etc.), either stand-alone
or networked through a field bus or, more recently, also via Ethernet. There is a
standardised set of PLC programming languages, referred to as IEC 1131-3, which enables
standardised design across different PLCs. However, each manufacturer has its own
development tool, which is generally incompatible with the corresponding tools from other
manufacturers. Therefore, porting software between the PLCs of different manufacturers
is generally not possible. CERN recommends the use of PLCs from two
manufacturers: Siemens and Schneider.
In the scope of the design and development of the cryogenics control system for the LHC, an
object-oriented software library for PLCs has been developed. This is known as UNICOS.
It provides a standard Framework for the development of PLC-based applications
employing a hierarchical modelling of the system. In addition, the UNICOS Framework
includes models of a number of standard field devices such as valves, pumps and flow
meters.
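The idea of an object-oriented device library with hierarchical modelling can be sketched as follows. This is an illustrative sketch only, not the real UNICOS library (which runs on PLCs, not in Python); all class and device names are invented.

```python
# Sketch of a hierarchical device model: standard field devices share a
# common interface, and a parent aggregates the status of its children.

class Device:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def read(self):
        return "OK"               # default status for a composite device

    def status(self):
        """A parent's status report includes all of its children's."""
        report = {self.name: self.read()}
        for child in self.children:
            report.update(child.status())
        return report

class Valve(Device):
    """One standard device type, modelled once and reused everywhere."""
    def __init__(self, name, open_=False):
        super().__init__(name)
        self.open_ = open_

    def read(self):
        return "OPEN" if self.open_ else "CLOSED"

plant = Device("cryo_plant")
plant.add(Valve("inlet_valve", open_=True))
print(plant.status())   # {'cryo_plant': 'OK', 'inlet_valve': 'OPEN'}
```

The benefit mirrors the one described above: a valve or pump is modelled once in the library, and applications compose instances into a hierarchy rather than re-implementing each device.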

Front-ends (Custom)

◆ VME
  ✦ Housing for Input/Output modules
  ✦ Connection via an internal bus
  ✦ May have a processor module
  ✦ Widely used in High Energy Physics (HEP)
◆ Slow Control (SLiC) Framework
  ✦ VME processor running Linux (also PCs)
  ✦ Object-oriented software framework
  ✦ Support for a variety of boards (I/O, fan trays, CAEN HV supplies, ...)
  ✦ Used by COMPASS, HARP and two NA experiments
  ✦ Potential solution for the LHC experiments

Although widely used in industry, PLCs cannot cover all the front-end needs of the LHC
experiments. In addition, PC- and VME-based front-ends will be employed. Since PCs are
well understood, PC-based front-ends are not covered further here. A VME crate allows I/O
modules to be housed and connected via an internal bus. These can then be read out via an
interface card or via a processor also housed in the crate. There are many commercial I/O
boards, e.g. ADCs, but custom boards are also developed to support specialised
equipment.
To ease the development of front-end software to support such VME-housed I/O modules, as
well as those in PCs, an object-oriented software framework, called SLiC, has been
developed. This provides a set of standard facilities for acquiring data from VME modules
and for making this data available to a supervisory system. To integrate a new module, a user
is only required to develop the specific device driver for that module. This software is
already being used by a number of experiments and is a potential solution for the LHC
experiments.
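The driver/framework split described above can be sketched as a plugin interface: the framework owns polling and publication, and integrating a new module means implementing one driver class. This is an illustrative sketch with invented names, not the real SLiC API.

```python
# Sketch of a framework/driver split: the framework polls any registered
# driver and publishes readings for a supervisory layer to pick up.

class ModuleDriver:
    """The only part a user writes to integrate a new I/O module."""
    def read_channels(self):
        raise NotImplementedError

class FakeADCDriver(ModuleDriver):
    def read_channels(self):
        # Stand-in for reading registers of a VME ADC board.
        return {"ch0": 1.23, "ch1": 4.56}

class Framework:
    def __init__(self):
        self.drivers = {}
        self.published = {}       # what the supervisory level would see

    def register(self, name, driver):
        self.drivers[name] = driver

    def poll_once(self):
        for name, drv in self.drivers.items():
            for ch, val in drv.read_channels().items():
                self.published[f"{name}/{ch}"] = val

fw = Framework()
fw.register("adc1", FakeADCDriver())
fw.poll_once()
print(fw.published)   # {'adc1/ch0': 1.23, 'adc1/ch1': 4.56}
```

The design choice is the same as SLiC's: acquisition, naming and publication are solved once in the framework, so per-module effort is limited to the hardware-specific read.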

(The LHC-era controls technologies overview figure is shown again as a roadmap.)

Middleware (Commercial)

◆ OPC (OLE for Process Control)
  ✦ Industrial solution based on Microsoft technology
  ✦ Reduces diversity of drivers
  ✦ Many off-the-shelf products
  ✦ Supported by a large number of companies
  ✦ Support for field buses, PLCs and SCADA
◆ Three kinds of access
  ✦ Read/Write (synchronous)
  ✦ Subscription (asynchronous)
  ✦ Refresh (forced read, asynchronous)

In the past, manufacturers of SCADA systems were required to build specific drivers to
connect to a whole host of different front-end equipment, as each manufacturer of such
equipment developed its own proprietary communications mechanism. In order to reduce
this diversity and standardise the interface between the supervisory level and the lower levels
of the control system, the OLE for Process Control (OPC) Foundation was set up in 1995 as
a collaboration between a number of leading automation hardware and software suppliers
and Microsoft. This initiative has led to the production of an open and flexible interface
standard which is now supported by the majority of the manufacturers in this domain. This
allows a SCADA system to connect to a multitude of different front-end systems via a
common interface. The OPC Data Access (DA) mechanism supports three kinds of access,
namely:
•A synchronous read or write
•A subscription, whereby data is only sent on change to interested clients
•A refresh, whereby the data is read but in an asynchronous manner
All OPC data items include a time stamp and a quality flag.

OPC Data Model

[Diagram: two OPC clients at the top; below them, an OPC server managing three OPC groups, each containing several OPC items; the items map onto the process data at the bottom. The first client accesses its items through two groups, the second through one.]

OPC servers make process data available by means of OPC items. An OPC server creates
OPC items on behalf of an OPC client. The client’s OPC items are organised in OPC groups.
OPC clients can only access their OPC items through their respective OPC groups. The
figure shows two OPC clients accessing process data managed by an OPC server. The first
OPC client accesses its OPC items through two groups whereas the second one uses only one
group.
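The client/group/item organisation just described can be made concrete with a small data-model sketch. This is pure illustration, not a real OPC stack (OPC DA is a COM/DCOM interface); class names and the synchronous-read method are invented to mirror the concepts above.

```python
# Sketch of the OPC DA organisation: a server exposes process data,
# a client-owned group holds items, and access goes via the group.

class OPCServer:
    def __init__(self, process_data):
        self.process_data = process_data     # e.g. {"tank1.level": 0.7}

    def read(self, tag):
        return self.process_data[tag]

class OPCGroup:
    """Items are organised in groups; clients access items only via groups."""
    def __init__(self, server):
        self.server = server
        self.items = []

    def add_item(self, tag):
        self.items.append(tag)

    def sync_read(self):
        """The synchronous access kind: read all of this group's items now."""
        return {tag: self.server.read(tag) for tag in self.items}

server = OPCServer({"tank1.level": 0.7, "tank1.temp": 4.2})
group = OPCGroup(server)                     # one group owned by one client
group.add_item("tank1.level")
group.add_item("tank1.temp")
print(group.sync_read())    # {'tank1.level': 0.7, 'tank1.temp': 4.2}
```

A second client would create its own groups against the same server, matching the figure, where each client sees the shared process data only through its own groups and items.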

Middleware (Custom)

◆ DIM (Distributed Information Manager)
  ✦ Developed by DELPHI; also used by BaBar and others
  ✦ Client/server with publish/subscribe mechanisms on top of TCP/IP (also supports polling)
  ✦ Multi-platform support
  ✦ C/C++ libraries
  ✦ Selected as an intermediate solution to avoid migration from multiple communication protocols

As we have seen, it is necessary to integrate custom front-ends into the control system.
Ideally, a single standard mechanism should be provided for this, and this mechanism should
be straightforward to use, have a small footprint and be able to run on a variety of platforms,
e.g. Linux, LynxOS, Windows NT/2000, etc. The Distributed Information Manager (DIM)
was developed by the DELPHI experiment for this reason. It supports a client/server
paradigm with a publish/subscribe mechanism running on top of TCP/IP. It is effectively a
library with a small number of function calls, which is available in C/C++ and runs on a large
variety of platforms including, but not restricted to, those mentioned above. This
middleware has been used not only in DELPHI but also in other experiments, notably the
BaBar experiment at SLAC (Stanford Linear Accelerator Center). DIM has been chosen by
JCOP as an intermediate solution to provide a common interface to custom front-end systems.
This choice will be reviewed at a later date.
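The publish/subscribe pattern that DIM provides can be sketched with an in-process stand-in. This is illustrative only; real DIM is a C/C++ library running over TCP/IP with a name server, and all names below are invented.

```python
# Sketch of named-service publish/subscribe, in the spirit of DIM:
# servers publish updates to a named service, subscribed clients are
# notified, and a new subscriber receives the current value at once.

class InfoManager:
    def __init__(self):
        self.subscribers = {}     # service name -> list of callbacks
        self.values = {}          # last published value per service

    def subscribe(self, service, callback):
        self.subscribers.setdefault(service, []).append(callback)
        if service in self.values:
            callback(self.values[service])   # deliver current value now

    def publish(self, service, value):
        """Server side: update a service; all subscribers are notified."""
        self.values[service] = value
        for cb in self.subscribers.get(service, []):
            cb(value)

dim = InfoManager()
received = []
dim.subscribe("hv/channel0/voltage", received.append)
dim.publish("hv/channel0/voltage", 1500.0)   # front-end publishes an update
print(received)   # [1500.0]
```

Because clients react to changes rather than polling, this style suits permanently running slow-control data with low update rates, which is the use case described above.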

Conclusions

◆ Overview of Control Systems
◆ Overview of Distributed Control Systems
◆ Shown that DCS systems are used extensively at CERN in all domains
◆ Overview of Experiment Control Systems
◆ Reviewed some of the different technologies of the LEP and LHC eras
◆ Shown that for the LHC experiments there will be an increasing use of commercial components, but still a mix with custom components