
SERVER MANUAL

macs®

management administration communication system

System Manual Vol. 1


System environment and processes

Innovation. Experience. Flexibility. Quality.


Server Manual Vol. 1 - System environment & processes

Table of Contents
1. THE SYSTEM ENVIRONMENT....................................................................................................................... 3
1.1. System start......................................................................................................................................................... 3
1.2. System stop......................................................................................................................................................... 3
1.3. The file system..................................................................................................................................................... 4
1.4. System start......................................................................................................................................................... 7
1.4.1. System stop............................................................................................................................................ 8
1.4.2. The config.sys file.................................................................................................................................... 8
2. PROCESSES.................................................................................................................................................. 11
2.1. EXTERNAL PROCESSES...................................................................................................................................... 11
2.1.1. System start (SYSTRT)............................................................................................................................. 11
2.1.2. System stop (SYSDWN).......................................................................................................................... 12
2.2. THE KEY PROCESS (bigmama)............................................................................................................................ 12
2.3. DAEMON PROCESSES....................................................................................................................................... 13
2.3.1. Time scheduler (timsch)......................................................................................................................... 13
2.3.2. Action scheduler, Flights (actsch, actschf)................................................................................................. 14
2.3.3. Output handler printout (ohdprt)............................................................................................................ 15
2.3.4. History logging (hislog).......................................................................................................................... 15
2.3.5. Network distributor (ticput)..................................................................................................................... 15
2.3.6. Computer synchronization (icget)............................................................................................................ 16
2.3.7. Computer initialization (icinis)................................................................................................................ 17
2.3.8. Computer monitoring (wdog)................................................................................................................. 18
2.4. Output handler.................................................................................................................................................. 19
2.5. Process concepts................................................................................................................................................ 20
2.5.1. Inter-process communication................................................................................................................. 20
2.5.2. Basic process functionality..................................................................................................................... 21

Page 2 ServerManual-Vol1_content_1.0060_rev00

1. THE SYSTEM ENVIRONMENT


1.1. System start
Once the boot process has been successfully completed, Linux invites you to log into the
system. In order to start the FIDS (Flight Information Display System) at a later point, you must
log in with the login ID ‘fids’.

1.2. System stop


To switch off the Linux computer, you first have to power down the Linux system in a "controlled" manner. It is advisable to first shut down the FIDS with the command 'sysdown'. Once you have done this, log in as 'root' and enter one of the following commands:
init 0 (or halt)
This command shuts down the Linux system.
init 6
This command forces the Linux system to reboot.


1.3. The file system


The files required for FIDS (programs, data, etc.) are subdivided into directories, each located in the directory /app/sint2.
"sint2" is the name of the current installation.
/app
└── sint2
    ├── DisplayManager
    ├── ETI
    ├── RssFeed
    ├── WIDS
    ├── dbases
    ├── exe
    ├── fdi
    ├── gt
    ├── gifs
    ├── Intranet
    ├── lib
    ├── logs
    ├── mail
    ├── mon
    ├── save
    ├── tab
    ├── tools
    ├── userDavis
    └── userUC


Here is a listing of some of the directories:

/app/sint2/DisplayManager
This directory contains the configuration files for the program Display Manager.

/app/sint2/ETI
This directory contains various commands to control the FIDS from the Linux console:
ftctrl: program to edit the actual flight table
lutctrl: program to edit lookup tables
tabctrl: program to edit a few of the FIDS configuration tables
devstat: program to control the configured devices
sdpc: program to check the data shown by the displays

/app/sint2/RssFeed
This directory is used by the RssFeed module as a temp folder.

/app/sint2/WIDS
The module WIDSIES uses the subfolder “log” of this directory to save all data telegrams from
the IES server.

/app/sint2/dbases
This directory contains files defined in the file table (fi.tab), including user input and message
logging files. The FIDS system accesses these files using a path name reference from the file
table (fi.tab).
The file name format for user input files is: ilogg.YYMMDD
The file name format for message logging files is: mlogg.YYMMDD
(YY: year, MM: month, DD: day)
Old user input and message logging files are deleted automatically by the FIDS system after a
configurable number of days defined in the table “at.tab”.
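The retention rule described above can be sketched in a few lines. This is an illustrative sketch, not the actual FIDS code; the function name and the single-status check are my own:

```python
# Illustrative sketch (not the real FIDS code): deciding whether a dated
# logging file such as ilogg.YYMMDD or mlogg.YYMMDD is past its retention.
from datetime import datetime, timedelta

def is_expired(filename: str, keep_days: int, today: datetime) -> bool:
    """True if an ilogg.YYMMDD / mlogg.YYMMDD file is older than keep_days."""
    prefix, _, stamp = filename.partition(".")
    if prefix not in ("ilogg", "mlogg") or len(stamp) != 6:
        return False                      # not a dated logging file
    file_date = datetime.strptime(stamp, "%y%m%d")
    return today - file_date > timedelta(days=keep_days)

# Example: with a 30-day retention, a file from 2024-01-20 is expired
# on 2024-03-01, while one from 2024-02-25 is kept.
today = datetime(2024, 3, 1)
print(is_expired("ilogg.240120", 30, today))
print(is_expired("mlogg.240225", 30, today))
```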

/app/sint2/exe
This directory contains all executable program files. When FIDS is powered up, the name of the
executable program contained in this directory for each process is read from the process table
(pn.tab).

/app/sint2/fdi
This directory contains protocol files of the communication between the FIDS system and FCS.
The names of the files are entered in the configuration table "/app/sint2/tab/FDI/fdiSin.tab". This table also contains the maximum size of the log files and the number of days to keep them on the
hard disc.


/app/sint2/gt
A separate directory for the configuration files for CONRAC monitors. It contains the following
subfolders:
• api
It contains manually configured files using api commands.
• bin
The "gthdl5" process saves the compiled pages and carousels made with PageEditor in this folder.
• dat
The manually configured api files in the folder "api" must be compiled using the "api2dat" command. The output must be copied to this folder.
• fonts
All fonts used by the monitors must be copied to this folder. The server distributes a font automatically to the monitors if it is included in a page used by a monitor.
• images
All images must be copied to this folder. The server distributes the images automatically to the monitors as needed.
The client software ConVis and GT2 supports JPG, PNG and GIF (including animated GIF) formats.
• media
Flash files ("*.swf") must be copied to this folder.
• mpegs
The clients ConVis and GT2 are able to show videos in MPEG-1 and MPEG-2 format.
ConVis is also able to play other formats if the needed codecs are installed.
The server distributes the videos automatically to the monitors as needed.
• pages
The pages and carousels created with PageEditor and sent to the server by PageManager are saved in this folder.
Page files must have the extension ".gtp"; carousel files must have the extension ".gtc".
• pgedit
After a page is sent to the server with PageManager, a configuration file is created. Its name is identical to the name of the page, with the extension ".cfg". Another file with the extension ".cfg.fonts" is also created; it lists the fonts used in the page.

/app/sint2/lib
This directory contains all the libraries required by the FID-System.

/app/sint2/logs
If the trace mode of a process is activated, the process writes its output log file to this directory.
The DMU process uses this folder to save screenshots requested from the monitors in JPG format.

/app/sint2/mon
This directory contains tools to upgrade and control monitors using the GT2 client.
The subfolder "convis" stores tools and script files to upgrade monitors running ConVis.

/app/sint2/save
This directory contains backup copies of the actual database and the system tables. These
backup files are called up during a recovery start of the system.
The following files are saved in this directory:
• arr.sav: backup of the actual arrival flight table
• dep.sav: backup of the actual departure flight table
• txxx.sav: backup of a table configured in "/app/sint2/tab/sys.tab" with table index xxx


/app/sint2/tab
A large proportion of the data required to operate FIDS is loaded into computer memory from text tables at each system start, and a permanent record is kept on the hard disc. Each of these text tables is saved in a separate file. All these files are located in the directory /app/sint2/tab, which also contains subfolders to simplify the organization of the tables.

/app/sint2/tools
Different help procedures and scripts can be found in this directory.

1.4. System start


Before FIDS can be started, you first have to log in to the system with the user ID 'fids'. It is only
possible to start FIDS/Linux with this user identification.
The command for starting the FIDS is 'fids'. There is an option of using two parameters which
represent cold-start options for the system tables and the actual database.

fids [-sys] [-adb] [-MASTER]

If you only enter 'fids', a recovery start is run, i.e. the system tables and the actual database
(arrivals and departures) are loaded from backup files (refer to System Manual Volume 3,
chapter "Save/Recovery"). If '-sys' is entered as a parameter, the system tables are loaded
"cold". The parameter '-adb' produces the same effect for the actual database.
Owing to the fact that, for reasons of performance, virtually all relevant data on the system are
constantly maintained in computer RAM, it is necessary for safety reasons to take snapshots of
RAM memory at cyclical intervals on a permanent data carrier (usually the hard disc), thereby
maintaining backup copies which can be accessed in the event of a recovery start.
Once you have entered the start command for FIDS, the SYSTRT process is initiated, which
creates the entire system environment. Key items of system information and system settings are
defined in the file config.sys, which SYSTRT reads out. The memory requirements of each
individual system area are reported to the console. Once SYSTRT has completed its work, it
turns into the process BIGMAMA. This process is the (grand)mother of all processes involved
in the FIDS. BIGMAMA reads the process table (pn.tab) and generates all processes indicated
there, provided that they have been defined for the local CPU. Once the BIGMAMA process has
completed its work, it issues the following message:

[BIGMAMA]: System Loading completed successfully

From this time on, the BIGMAMA process only monitors its sub-processes but does not otherwise
make an active appearance.
The TIMSCH process, which is responsible for all time-dependent actions in the FIDS, shifts the
system into the so-called READY status. The following message is then issued:

[TIMSCH]: System Ready for Input

Once the system is in ready status, all output handler processes (display peripherals) start to
generate displays showing the information assigned to them by the display table (dp.tab).
The 'fids' command calls up a shell procedure contained in the directory /app/sint2/tools.
The program /app/sint2/exe/systrt is called up with the optional arguments '-sys' and '-adb'
together with the path name of the config.sys file (/app/sint2/config.sys).
If you try to start the FIDS once this has already taken place, the following message appears:

[SYSTRT]: System is already up !!

This is controlled by the file "/app/.fids", in which the system status is entered. This file is also
used for controlling the direct start option.


As soon as the 'fids' command has been executed, the application is powered up
asynchronously, i.e. it is immediately possible to enter UNIX commands at the console
once again. However, to prevent misuse of the system console, the system administrator
should log off with the 'exit' command after starting the FIDS.

1.4.1. System stop

To shut down the FIDS, in a similar manner to the system start, you have to log on to
the system console with the user ID 'fids'.
The command is 'sysdown'; it calls up the shell procedure /app/sint2/tools/sysdown,
which in turn calls the program /app/sint2/exe/sysdwn. When the FIDS is shut
down, this program uses the time parameters ATT_SHUTWAIT (number 15) and
ATT_KILLWAIT (number 16), which can be entered as options among the time
parameters in the table at.tab.
The time parameter ATT_SHUTWAIT determines the period of time to wait until
the FIDS has completely shut down. A corresponding message is issued to all user
terminals connected to the system. Users working on the system then have an
opportunity to save their work and quit the system during the period of time
indicated. The default time is 1 second.
The time parameter ATT_KILLWAIT indicates the length of time left to processes in
the FIDS after the shutdown has been initiated, i.e. how long they have to close their
work in a "controlled" manner. If this period of time is exceeded, the SYSDWN process
stops the affected processes using the unstoppable system signal SIGKILL. The default
time for ATT_KILLWAIT is 30 seconds.
Once the system administrator has initiated a system stop and the specified wait
time (ATT_SHUTWAIT) has elapsed, the following message appears:

[SYSDWN]: The system is being shutdown now !!

The FIDS is placed in DOWN status. After this, the SYSDWN process identifies all
processes that are defined in the process table (pn.tab) and running on the local
CPU, and informs them of the pending system termination. The grandmother process
BIGMAMA is also informed of the termination of its child processes. Since BIGMAMA
knows how many processes it started, it can issue the following message once all of
them have terminated:

[BIGMAMA]: All processes stopped - system down!

The UNIX/FIDS system is now shut down.
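The two wait times described in this section can be illustrated with a small state function. This is a toy model under the stated defaults, not the real sysdwn implementation, which works with UNIX signals:

```python
# Toy model of the shutdown phases governed by ATT_SHUTWAIT and ATT_KILLWAIT
# (illustrative only; the real sysdwn sends signals such as SIGUSR1/SIGKILL).
def shutdown_phase(elapsed_s: int, shutwait_s: int, killwait_s: int) -> str:
    """Phase of the FIDS shutdown after 'elapsed_s' seconds."""
    if elapsed_s < shutwait_s:
        return "warn-users"        # terminals are told the system will stop
    if elapsed_s < shutwait_s + killwait_s:
        return "graceful-stop"     # processes may still finish their work
    return "sigkill"               # stragglers are stopped with SIGKILL

# With the defaults from the text (1 s and 30 s):
print(shutdown_phase(0, 1, 30))    # warn-users
print(shutdown_phase(10, 1, 30))   # graceful-stop
print(shutdown_phase(40, 1, 30))   # sigkill
```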

1.4.2. The config.sys file

The SYSTRT process, which generates the system environment for FIDS during system
start, first of all accesses the config.sys file, which contains elementary data and
definitions of the system. This file is located directly in the project directory
/app/sint2. It is a simple text file which can be amended easily using any editor. The file
contains a range of parameters.
Comments are introduced by the leading character combination '/*'. One or more
sub-parameters can be appended to the parameter name of each parameter. If there
are several sub-parameters, they should be separated by commas. The config.sys
file contains the following parameters:
PROJDIR Path name
Name of the project folder. This folder contains all application files.

USERID
The ID of the user with the permission to start FIDS.


SYSNAM Name of installation


A name up to 11 characters in length for the installation. This name is issued when the
system is started.

READY Process name


Process which moves FIDS into READY status when the system is started. This normally
involves the process known as TIMSCH.

PRCSIZ Number of bytes


Every process which produces user terminal output allocates the size specified here
for its output buffer, representing the maximum size (length) for terminal output.

ARR_MAX MAXADB
When this parameter is specified, an actual database (ADB) area is generated. Its
size is MAXADB arrivals.
The size of an ADB set is obtained from the record description table for the actual
database (af.dsc), in which the fields comprising an ADB set are defined.

ARR_FREE FREEADB
The INSFLT process fills the arrival table with at most this maximum number of data
records (MAXADB - FREEADB), so that FREEADB entries remain free.

DEP_MAX MAXADB
DEP_FREE FREEADB
Same functionality as ARR_MAX and ARR_FREE, for departures.

TCHAINXX up to 20 field numbers


The actual database is capable of internally managing up to 20 time chains. This
function is implemented by appropriate secondary indices. In practical applications,
this function usually deals with scheduled, estimated and actual times. Time chains
can be addressed from 1 to 20.

INSFLT Time chain number


Time chain sequence with which the INSFLT process inserts data records from the
scheduled database into the actual database. Usually
(Time Chain 1 = Scheduled Time).

TIMSCH Time chain number


Time chain sequence used by the time schedule when performing its cyclical
(periodic) search of the actual database. Usually
(Timechain 1 = Scheduled Time).

KEYXX Up to 10 key field numbers


Key fields, which combine to form the actual database keys. The higher-level key
fields are indicated at the top of each list.

BOOT 1
If this parameter is inserted in the config.sys file, FIDS is started automatically as soon
as UNIX has completed its boot process.
The script file for the auto start is "/etc/init.d/startfids".
This facility is particularly useful after power failures. In that case FIDS will be started
on the master server only if it was in 'UP' status at the time of the power failure.
Internally, this is defined by a status entry in the file "/app/.fids".


On slave CPUs, when the BOOT option is present, FIDS will be started independently
of the last status.

CPU Physical CPU number


In a multiple CPU configuration of the FIDS, every computer involved is assigned a
unique physical CPU number which it can use to determine its logical task inside the
CPU table (cpu.tab). With the exception of this parameter, all computers involved are
absolutely identical in terms of FIDS/UNIX configuration data.

DATE Date format


With this parameter, the date format used within the system can be specified. Three
formats are supported:
DDMMYY,
MMDDYY,
YYMMDD.
Here 'YY' indicates the position of the year, 'MM' indicates the month and 'DD'
indicates the position of the day in the date setting.
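As an illustration, the three supported settings map onto strftime-style patterns. This is a hedged sketch; the PATTERNS mapping is mine, not from the manual:

```python
# Illustrative mapping of the three supported DATE settings to
# strptime patterns (the PATTERNS name is hypothetical).
from datetime import datetime

PATTERNS = {"DDMMYY": "%d%m%y", "MMDDYY": "%m%d%y", "YYMMDD": "%y%m%d"}

# The same six digits mean different dates under different settings:
print(datetime.strptime("010324", PATTERNS["DDMMYY"]).date())  # 2024-03-01
print(datetime.strptime("010324", PATTERNS["YYMMDD"]).date())  # 2001-03-24
```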

Actual CONFIG.SYS file on MASTER-CPU at SIN TERMINAL 2


[_SYSTEM] /*/////////////////////////////////////////////////////////
CPU = 1 /* physical CPU number
RELEASE = 7.2.3.0 /* system release
SYSNAM = SIN-T2 /* name of installation
PRCSIZ = 8000 /* process output buffer
/* size in bytes for terminal
/* and display device output
READY = TIMSCH /* program which
/* makes system ready
USERID = fids /* fids user id
DATE = DDMMYY /* date format
BOOT = 1 /* fids booting after Linux becomes ready
[_FILES] /*/////////////////////////////////////////////////////////
PROJDIR = /app/sint2 /* project directory
[_FLIGHTS] /*////////////////////////////////////////////////////////
ARR_MAX = 6200 /* Max number of arr flights
ARR_FREE = 50 /* Free number of arr flights
DEP_MAX = 6200 /* Max number of dep flights
DEP_FREE = 50 /* Free number of dep flights
TCHAIN01 = 25 /* up to 20 time fields
TCHAIN02 = 26
TCHAIN03 = 27
/**** !!! last KEY FIELD must be flight type !!!
/**** !!! last TIME CHAIN number will be used by timsch and hislog !!!
KEY01 = 25 /* up to 10 key fields
KEY02 = 20
KEY03 = 21
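The parameter format shown above (name, '=', value, optional '/*' comment, bracketed section headers) can be parsed in a few lines. This is a hedged sketch, not the real SYSTRT parser:

```python
# Minimal sketch of a parser for the config.sys format described above
# (illustrative only; the real parser lives inside SYSTRT).
def parse_config(text: str) -> dict:
    params = {}
    for raw in text.splitlines():
        line = raw.split("/*", 1)[0].strip()   # strip trailing comment
        if not line or line.startswith("["):   # skip blanks and section headers
            continue
        name, sep, value = line.partition("=")
        if not sep:                            # comment-only or stray line
            continue
        params[name.strip()] = value.strip()
    return params

sample = """[_SYSTEM] /*////
CPU = 1 /* physical CPU number
SYSNAM = SIN-T2 /* name of installation
BOOT = 1 /* autostart
"""
cfg = parse_config(sample)
print(cfg["CPU"], cfg["SYSNAM"])  # 1 SIN-T2
```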


2. PROCESSES
The FIDS contains a range of programs responsible for a wide variety of duties in the system.
With just a few exceptions, all programs are started as processes when the system is initialized.
This function is handled by the mother process BIGMAMA, which reads out the process table
(pn.tab) and generates the processes listed in it. It is possible for the same program to run as
several process copies. This is a sensible approach for output handler programs if a large
number of controllers are connected to the system, all of which should be operated in parallel.
All programs are located in the directory /app/sint2/exe.
We differentiate between several types of processes, grouped in the following
sections.

2.1. EXTERNAL PROCESSES


External processes are all processes which are not, either directly or indirectly, generated and
controlled by the mother process BIGMAMA.

2.1.1. System start (SYSTRT)

The SYSTRT process is the first process in the FIDS triggered during the system start. It initializes
the entire system environment and then starts the mother process BIGMAMA. There is then
nothing more for SYSTRT to do, and it terminates.
The key tasks performed by SYSTRT are as follows:
SYSTRT first checks whether FIDS/UNIX has already been powered up, or is being powered up
at this time. This is checked by the status contents of the “/app/.fids” file. If DOWN status is not
detected, further execution of the program is terminated with an appropriate message.
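The start guard just described can be sketched as follows. The exact format of "/app/.fids" is not documented here, so a single status word is assumed as a hypothetical illustration:

```python
# Hypothetical sketch of the SYSTRT start guard: refuse to continue unless
# the status recorded in /app/.fids is DOWN. The single-word file format is
# an assumption for illustration only.
def may_start(status_text: str) -> bool:
    """True if a system start may proceed."""
    return status_text.strip() == "DOWN"

print(may_start("DOWN"))   # system is down, start proceeds
print(may_start("UP"))     # refused: "[SYSTRT]: System is already up !!"
```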
As its next step SYSTRT reads the config.sys file, which contains important basic information
for system configuration. If the syntax and logic of this file are OK, so-called 'shareable memory
segments' are set up. These are global memory areas to which every process in the FIDS has
access. Here is a detailed list of the segments:

SYSTEM STATUS AREA (SEG_SYS).


Important central system information, including information about system status, is stored here.

PROCESS QUEUES (SEG_PRC).


Every process in the FIDS which receives orders from other processes needs an order or data
entry queue from which it can take its incoming orders in a sequential manner.

SYSTEM TABLES (SEG_STH).


All system tables are loaded into segment SEG_STH. Access is handled centrally by the system
table handler (sth).

ARRIVAL TABLE (SEG_ARR).


In cases where an arrival area is defined in the config.sys file with the parameter ARR_MAX,
segment SEG_ARR is created and all arrivals from the actual database are stored here. Access
is handled centrally by the flight table handler (fth).

DEPARTURE TABLE (SEG_DEP).


In cases where a departure area has been defined in the config.sys file with the parameter
DEP_MAX, segment SEG_DEP is created and all departures in the actual database are stored here.
Once all segments have been created and initialized by SYSTRT, a message is sent to all other
computers defined in the CPU table (cpu.tab) in order to establish the local computer's logical
CPU number. Within the network, the physical CPU number, which is set by the parameter CPU in
the config.sys file, is transferred. If SYSTRT cannot contact any other computers in the


network, the logical CPU number determined from the physical CPU number in the local CPU
table (cpu.tab) is set.
The purpose of the procedure described is to enable computers affected by temporary failure to
be re-integrated in the network with an updated task listing. For example, if a computer which
was out of service for a few days due to a hardware defect was reconnected to the network
during normal operation, it would normally, in the event of a recovery start, be allocated the
last task it was assigned before the failure. However, if an assigned hot-standby computer had
already stepped in when the failure occurred to perform this function, the network would then
have two computers assigned to the same task, something which would inevitably lead to fatal
errors. By sending a query to the network, the computer is able, in every case, to obtain its
current logical CPU number (cpu.tab) from which it can learn of its new task in the network.
This is because all computers in the network always have access to current data records at any
given time.
This is of course only relevant if at least one other computer has been defined in the CPU table
(cpu.tab), in other words it is only relevant to a multi-computer configuration.
Once SYSTRT has finally generated the FIDS/UNIX data environment, it then, as a last step,
starts the mother process BIGMAMA (process number 0), and SYSTRT terminates.

2.1.2. System stop (SYSDWN)

The SYSDWN process is started by the 'sysdown' command. This process is only created
when this command is issued, and it assumes the task of stopping all processes listed in the
process table, provided they are running on the local computer. The system status stored in the
system status area is set to DOWN.
When shutting down the FIDS, SYSDWN uses the time parameters ATT_SHUTWAIT
and ATT_KILLWAIT configured in the time parameter table (at.tab). ATT_SHUTWAIT defines
the time until the final shutdown. This time is reported to all user terminals and indicates the length
of time available to terminate all work on the system in a controlled manner.
The time parameter ATT_KILLWAIT represents a specified period of time for processes to
terminate their work. If this is not done within the allotted time, SYSDWN stops the affected
processes with the non-interruptible UNIX signal SIGKILL. The TIMSCH process constitutes an
exception: the system always waits for this process.
Internally, the process SYSDWN obtains the process ID (pid) of every process from entries in the
process table and sends each process the signal SIGUSR1. The mother process BIGMAMA, which
is waiting for all its “children”, is informed of the type and manner of termination of all its child
processes and submits an appropriate report to the system console /dev/console.
If the computer on which SYSDWN was initiated is located in one of the computer networks
defined in the CPU table, and if it is a MASTER computer, SYSDWN also sends a RESET
command shortly before shutdown to all SLAVE computers listed in the network i.e. to all
NON-MASTER computers prompting them to go into a STANDBY mode and to terminate their
activities. The SLAVE computers remain in this mode until they are either powered down or until
the MASTER computer is returned to READY status when the FIDS is restarted.

2.2. THE KEY PROCESS (bigmama)


The BIGMAMA process has a special status in the FIDS. This is the original or mother process
for all processes entered in the process table. It also features in the process table itself and is
assigned the process number 0. This information is used by SYSTRT to start up BIGMAMA. In
accordance with the hierarchical UNIX concept, all processes generated by BIGMAMA are child
processes and are generated by a splitting mechanism. If one of the child processes terminates,
BIGMAMA is informed immediately and produces a report to this effect on the system console.
If the RESPAWN option for this process is activated in the process table, it is regenerated
(“respawned”) in the event of abnormal termination.
Processes designated in the process table (pn.tab) as "non-swapping" are the first ones to be
generated by BIGMAMA, since this keeps them resident in memory.
Once BIGMAMA has generated all its processes, it goes into wait mode. Under normal
conditions, there is nothing else for it to do until the system is powered down with the ‚sysdown’
command.
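The respawn rule described above can be sketched as a small decision function. This is a toy model with illustrative names, not the real bigmama code:

```python
# Toy sketch of BIGMAMA's reaction to a child process exiting, per the
# RESPAWN rule described above (names are illustrative, not the real code).
def on_child_exit(respawn_enabled: bool, abnormal_exit: bool) -> str:
    """Decide BIGMAMA's reaction when a child process terminates."""
    if abnormal_exit and respawn_enabled:
        return "respawn"      # regenerate the process
    return "report"           # only report the termination on the console

print(on_child_exit(True, True))    # respawn
print(on_child_exit(False, True))   # report
```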


Within a FIDS network, BIGMAMA is one of the processes generated on every computer. In
the process table (pn.tab), this is indicated by entering a zero in the component ‚target CPU
number‘.
On a STANDBY computer, BIGMAMA is "woken up" by the WDOG process in the event of a
failure or a serious fault on another computer. BIGMAMA then performs a RESTART. Since WDOG
has shortly beforehand logically switched the local computer into the role of the computer to be
taken over, by swapping the logical CPU numbers, the RESTART initiated by BIGMAMA generates
all missing processes and enables work in the network to continue.

2.3. DAEMON PROCESSES


The term daemon process is the collective term for processes which run in the background,
are invisible to the user and do not respond directly to user enquiries.
The output handler processes which control display information are not covered in this section:
they are grouped separately.

2.3.1. Time scheduler (timsch)

The time scheduler, or TIMSCH for short, is a process which initiates all time-related actions
of the FIDS in the background. This process re-schedules itself continuously, basing its
rescheduling interval on a time parameter defined in the time parameter table (at.tab). This
time parameter is ATT_TIMSCH, assigned number 1 in the time parameter table. It can be
set with a precision of one second and constitutes the smallest time interval at which
time-controlled activities in the system can be initiated, because it determines how often the
TIMSCH process checks for, and initiates, activities of this kind. If the time parameter
ATT_TIMSCH is not provided in the time parameter table, a default value of five minutes
applies. Generally speaking, the time parameter is set to 1 minute and, in practical situations,
this is perfectly adequate. The time needed by TIMSCH for its work is subtracted from the
waiting time up to the next re-schedule, ensuring that TIMSCH becomes active at 'real' one
minute intervals.
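The drift-free rescheduling described above amounts to subtracting the work time from the next wait; a minimal sketch (illustrative, not the real timsch):

```python
# Drift-free rescheduling as described above: the time TIMSCH spent working
# is subtracted from the next wait, so activations stay on a fixed grid.
def next_wait(interval_s: float, work_s: float) -> float:
    """Seconds to sleep before the next activation (never negative)."""
    return max(0.0, interval_s - work_s)

# A 1-minute grid with 2.5 s of work leaves 57.5 s to wait:
print(next_wait(60.0, 2.5))
```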
If the FIDS is operating in a computer network, that is to say not in a single CPU configuration,
the TIMSCH process must run on all computers. This can be set in the process table (pn.tab).
Depending on the type of CPU (MASTER, STANDBY, etc.), TIMSCH can respond differently on
each computer.
Every time the time scheduler reactivates itself, it completes a range of activities in the system.
The following section describes these in detail:
• Update of the actual database
TIMSCH reads all arrival and departure information in the actual database and performs
an update function on it. TIMSCH does not make any direct changes itself, but changes
to a data record can be made by an internal mechanism via the connection tables
(cn.tab, cnArr.tab and cnDep.tab); these changes are then transferred automatically by the
action scheduler (ACTSCH) to all output handler processes for peripheral display units. If the
UPDATE marked these data with DELETE, the data are deleted immediately afterwards by
TIMSCH.

TIMSCH does not look at the entire actual database every time it is active. A lead time
can be set here using the time parameter ATT_ADVANCE (number 9) in the time parameter table
(e.g. 18 hours). The time chain which specifies how often the time scheduler reads through
the actual database is determined by the parameter TIMSCH in the config.sys file. It normally
reads the database by scheduled time (time chain number 1).

TIMSCH only performs updates on the actual database if the database is running on the
MASTER CPU.
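The lead-time selection can be pictured as a simple filter (a sketch only; the field name `sched` and the record layout are assumptions, not the real actual-database format):

```python
from datetime import datetime, timedelta

def within_lead_time(records, now, att_advance_hours=18):
    """Return only those actual-database records whose scheduled time falls
    inside the ATT_ADVANCE window; later records are skipped on this pass."""
    horizon = now + timedelta(hours=att_advance_hours)
    return [r for r in records if r["sched"] <= horizon]
```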

• Backup copies of system tables


All system tables defined in the system are listed in the system table (sys.tab). Here it is
possible to set whether the data in a table may be changed (important for synchronization of
access) and whether a backup copy should be made on a permanent data carrier (usually the
hard disc) for use in the event of a recovery start. TIMSCH then produces a backup copy of
every table designated here as a recovery table, doing so whenever there have been changes
since the last backup was made. This is indicated by a flag in the
ServerManual-Vol1_1.0060_rev00 Page 13
Server Manual Vol. 1 - System environment & processes

internal table header. If the flag is set, it is reset after the backup. The backup files are stored
in the directory "/app/sint2/save" and are called t9999.sav, where '9999' represents the
four-digit unique number of that table, e.g. t0045.sav for the backup copy of table 45. The
number of every table can be obtained from the index in the system table (sys.tab).

In a multi CPU configuration, this action is performed on all computers involved.
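The naming convention for the backup files can be expressed directly (a sketch, using the directory and scheme stated above):

```python
def backup_filename(table_number: int, save_dir: str = "/app/sint2/save") -> str:
    """Backup file name for a system table: 't' plus the four-digit table
    number plus '.sav', e.g. table 45 -> /app/sint2/save/t0045.sav."""
    return f"{save_dir}/t{table_number:04d}.sav"
```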

• Backup copies of the actual database


In a similar manner to the backup mechanism for the system tables, backups are also made of
the arrival and departure tables. The backup files are named arr.sav for arrivals and dep.sav
for departures, and they use the same directory.

Backup copies of the actual database are only ever made by the MASTER and STANDBY
computers. This is not necessary on other SLAVE computers.

• Deleting message-logging files/message-logging records

All system messages with their appropriate parameters are entered in message-logging files,
which are generated on a daily basis. To avoid the risk of space running out on the hard disc at
some point, these files or data records are deleted once they have reached a certain age.
The age can be specified using the time parameter ATT_MLOGG (number 5).

In a multi CPU configuration, this action is taken on all computers involved.
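The age-based deletion can be sketched as follows (illustrative only; the real file layout, naming and record-level deletion are system-specific):

```python
import os
import time

def purge_old_logs(directory: str, max_age_days: float):
    """Delete files older than the age given by ATT_MLOGG and return the
    names of the files removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```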

• Cycle requests for output handler displays

The majority of output handler processes responsible for peripheral display units have regular
duties to perform, such as synchronizing the time of controllers, status queries etc. In strict
accordance with the time parameter ATT_OHCYCL (number 8), the time scheduler TIMSCH
schedules the action scheduler (ACTSCH) with the function CMD_CYCLE. The action scheduler
then decides for which output handler processes this function code is applicable.

In a multi CPU configuration, this action is taken on all computers involved. However, it is
only effective on computers on which the output handler processes actually run, which is
normally not the case on STANDBY computers.

2.3.2. Action scheduler, Flights (actsch, actschf)

The action scheduler, known simply as ACTSCH, distributes all work and is the central, pivotal
feature of the entire FIDS. It decides which changes in the system are important and relevant
to which processes, and schedules these items. On the whole, it is guided in this work by the
function code of any queries sent to it, e.g. 'U' for UPDATE of a data record in a file in the
system. The number of the record description table is also significant, provided the function is
not one for which no record description is required (e.g. time synchronization). The action
scheduler uses this information to search the action scheduler table (as.tab) for entries
appropriate to this function: this table contains the numbers of the processes for which the
function is important.
To take a specific example, with function code 'I' for INSERT and record description number 5
(actual database record description), the action scheduler searches this table for all output
handler processes for which insertion of this data record is significant. With the UPDATE
function in particular, it is also possible to differentiate by field numbers.
In a multi CPU configuration, i.e. when several computers are linked together in a network,
the action scheduler runs on every computer. However, it only initiates processes which are
running on the local computer. In other words, information distribution is decentralized.
To prevent deadlock situations inside the system and to improve load distribution, the action
scheduler runs as three process copies: ACTSCHFA, which handles the distribution of changes
to data in the actual arrival database; ACTSCHFD, for the actual departure database; and
ACTSCH, for all other changes to data and orders in the system. In all later sections of this
Manual, reference is made only to the action scheduler ACTSCH, because this physical division
is irrelevant to understanding the logical structure of the system.
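The table lookup performed by the action scheduler can be modelled in a few lines (the entries and field names below are invented for illustration; the real as.tab format differs):

```python
# Hypothetical in-memory model of the action scheduler table (as.tab): each
# entry names a function code, a record description number and the process
# number to be scheduled when such a change occurs.
AS_TAB = [
    {"func": "I", "rec": 5, "process": 41},  # e.g. an output handler
    {"func": "I", "rec": 5, "process": 42},
    {"func": "U", "rec": 5, "process": 41},
]

def processes_for(func_code: str, rec_no: int):
    """Return the process numbers for which this function/record-description
    combination is significant, as ACTSCH would determine them."""
    return [e["process"] for e in AS_TAB
            if e["func"] == func_code and e["rec"] == rec_no]
```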


2.3.3. Output handler printout (ohdprt)

The output handler printout, known as OHDPRT, is responsible for all system logging files and
console messages.
• Message printouts
Message printouts for the console are performed by OHDPRT itself. OHDPRT receives
queue entries for messages from the message handler (msh), a runtime-library routine
which can, in theory, be called from every process in the system and which then distributes a
message defined in the message table (msg.tab) to all specified locations.

• Message logging
In addition to print orders, the OHDPRT process handles the entries for system messages which
correspond to the printer output to the console, these entries being recorded in the message-
logging files. The appropriate orders are received from the software module msh().

2.3.4. History logging (hislog)

All data records deleted from the actual database are transferred to the HISLOG process by
the action scheduler (ACTSCH) due to a corresponding entry in the action scheduler table
(as.tab). This process is responsible for history logging, i.e. all completed data records,
separated into arrivals and departures, are entered in the history tables. The structure of the
history data record is described in the record description table hf.dsc and applies to both
arrivals and departures.

2.3.5. Network distributor (icput)

All transfer of data and orders to other computers in a multi CPU configuration of the FIDS is
handled by the ICPUT process. If no specific computer is addressed, the same data are sent to
all computers in the network configured in the CPU table (cpu.tab). Beyond transferring
information, ICPUT also monitors the other computers. Since ICPUT runs on all computers, the
type of the local computer (e.g. MASTER, STANDBY) determines which computers this involves
and which actions are initiated when their status changes.
The commands in detail:
• UP command
Another CPU is put into service. This function can only be initiated by the MASTER computer.
Firstly, the process which sets the MASTER computer to READY status sends this command
to all configured SLAVE computers to force them to make an initialization request. Another
process which uses the UP command is ICINIS: if a computer initialization process is
terminated for any reason, re-initialization is attempted with the UP command.

• DOWN command
The WDOG process, which is responsible on STANDBY computers for monitoring the
NON-STANDBY computers assigned to them, sends this command via ICPUT to any computer
whose condition justifies takeover by a standby unit (e.g. a hard disc error). WDOG thereby
attempts to power down the defective computer in a "controlled" fashion.

• RESTART command
This command is used under the same conditions as the DOWN command. However, if the
restart option is configured in the CPU table (cpu.tab) for the defective computer, the FIDS
tries to power back up once the system has been powered down, and the computer affected
by the fault is then assigned the role of STANDBY computer.

• RESET command
If the MASTER computer is powered down, it sends the RESET command to all SLAVE
computers configured in the CPU table (cpu.tab). The affected computers then go into
STANDBY mode, i.e. they wait, either until they are also powered down, or until they re-
establish contact with the MASTER and are initialized.

The RESET command can also be sent by the ICINIS process if it detects a crash during
initialization of a computer. In this case, this command is sent before the UP command to
reset the status in advance on the computer which is to be initialized.

• UPDATE command
• DELETE command
• INSERT command
These commands are received from the action scheduler ACTSCH and cover all changes to data
in the system (actual database, system tables and sequentially-indexed files). The data are
sent, together with the commands, to the ICGET processes on the other configured computers
to ensure they all hold the same data, enabling them to initiate appropriate action indirectly
on a remote basis.

• TSYNC command
With this command, the time on the other computers is synchronized. The command is issued
after initialization of a SLAVE computer has been completed by the ICINIS process, or by the
SERVICE process once the system time has been changed by an external clock or manually by
user entry.

• INIT command
This command is used by the ICINIS process to initialize SLAVE computers.

• PRC command
This command theoretically allows any process in the FIDS to send an order to another
process which is not running on the local computer.

• CYCLE command
Parallel to the output handler processes, this command is transferred via the action scheduler
table (as.tab) to ICPUT. ICPUT then checks the status of the other configured computers in
the network. If there are any changes in status, the response depends on the task assigned
to the computer (MASTER, SLAVE).

The MASTER computer only checks computers which have UP status or have just been initialized
(status changing to UP). If contact is lost here, the computer affected is set to DOWN status
and a message is issued.
SLAVE computers only check the status of the MASTER computer in order to allow themselves to
be initialized by it once contact has been successfully established. If contact with the MASTER is
lost, SLAVE computers go into STANDBY mode (wait position).

2.3.6. Computer synchronization (icget)

The ICGET process is virtually the counterpart of the ICPUT process. ICGET also runs on
all computers in the network, receives all commands from the ICPUT processes of other
computers and executes them on the local computer.
In detailed terms, the following commands are defined:
• UP command
If ICGET receives an UP command on a SLAVE computer, it sends the CYCLE command
to the local ICPUT process, which then checks the status of the MASTER computer and,
because the MASTER has UP status, calls for initialization of the local computer.

• DOWN command
When this command is received, it means that the local computer has been taken over by
the STANDBY computer and now has to be powered down in a "controlled" manner. ICGET
executes this command by calling the SYSDWN process.

• RESTART command
This command is used under the same conditions as the DOWN command. However, if the
restart option is configured in the CPU table (cpu.tab) for the defective computer, the FIDS
tries to power back up once the system has been powered down, and the computer affected
by the fault is then assigned the role of STANDBY computer.

• RESET command
The local computer is placed in STANDBY mode by ICGET.

• UPDATE command
• DELETE command
• INSERT command
Changes to data from other computers in the network are received together with these
commands. Depending on which files are involved (actual database, system or lookup tables,
sequentially-indexed hard disc files), the appropriate runtime module is called: the flight table
handler (fth), the system table handler (sth) or the file handler (fhd). Since these in turn simply
activate the local action scheduler, local distribution of the information is ensured using the
standard mechanism derived from the action scheduler table (as.tab).

• TSYNC command
The local system time is changed and the local action scheduler is prompted. This consults
the action scheduler table (as.tab) and informs all processes on the local computer for which
the change in time is relevant.

• CYCLE command
With this command, ICGET receives status queries from other computers in the network. The
computer sending the query is informed of local computer status and some other items of
information.

• INIT command
All data involved in the initialization of the local SLAVE computer are transferred using the INIT
command. The indirect sender is ICINIS, which transfers individual blocks of data to the
target computer via ICPUT. ICGET receives these blocks of data and integrates them at the
appropriate locations on the local computer, filing them by their header information. The
maximum block size for transmission is set by the component MAX-QUEUE-ENTRY-SIZE in the
process table entry (pn.tab) for ICPUT. For technical reasons, the block size is restricted to
1024 bytes.
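The block-wise transfer can be illustrated as follows (a sketch; real blocks additionally carry the header information ICGET uses for reassembly):

```python
MAX_QUEUE_ENTRY_SIZE = 1024  # upper limit per block, as stated for ICPUT

def split_into_blocks(data: bytes, block_size: int = MAX_QUEUE_ENTRY_SIZE):
    """Split an initialization payload into queue-sized blocks, as ICINIS
    must do when shipping data to a SLAVE computer via ICPUT."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]
```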

If an initialization is started on the MASTER computer, the sizes of the individual 'sharable
segments' are first of all determined and compared via ICGET. If there are any discrepancies
here, initialization is terminated. Configuration changes which alter the size of the existing
configuration must therefore be performed on every configured computer and activated with
a cold boot on all computers.

• PRC command
Orders for local processes coming from other computers reach ICGET with the PRC command.
The data are then simply transferred to the desired process.

2.3.7. Computer initialization (icinis)

The ICINIS process has the job of initializing other computers on request, i.e. synchronizing
their data with the MASTER computer. ICINIS only runs on the MASTER computer. All SLAVE
computers in the network configured in the CPU table (cpu.tab) check the status of the MASTER
computer as soon as they have created their process environment. If this status is UP, or if
the status changes from DOWN to UP, the relevant SLAVE computer prompts the ICINIS process
and is then initialized by it.


Initialization of a SLAVE computer involves the following steps:


1. Setting the system status bit SYS_ICINIS for synchronizing data on the MASTER computer.
2. Transmission of the sizes of 'sharable segments' for comparison with the SLAVE computer.
3. Actual database transmission - Arrivals.
4. Actual database transmission - Departures.
5. System tables transmission.
6. Time synchronization (TSYNC command).
7. Resetting the SYS_ICINIS status bit.
8. Processing the next initialization order...
Once ICINIS has completed initialization of a SLAVE computer, it waits until this computer
changes to UP status. The wait period for this status is limited by the time parameter
ATT_CPURDY, contained in the time parameter table (at.tab). If the SLAVE computer fails to
reach UP status within this period of time, initialization is deemed to have been aborted and
the system proceeds accordingly.
If initialization of a SLAVE computer is aborted (time-out, transmission error etc.), ICINIS issues
an appropriate message and first of all attempts to reset the affected SLAVE computer with
the RESET command and to prompt a new initialization with the UP command. If other
computers have already requested initialization, these computers receive priority treatment.
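The initialization sequence and the bounded wait for UP status can be sketched as follows (`send` and `wait_for_up` are hypothetical stand-ins for the real transfer and status-polling mechanisms):

```python
def initialize_slave(send, wait_for_up, att_cpurdy: float = 60.0) -> bool:
    """Run the ICINIS steps for one SLAVE computer; initialization only
    counts as successful if the SLAVE reaches UP within ATT_CPURDY."""
    steps = [
        "SET_SYS_ICINIS",    # 1. set status bit on the MASTER
        "SEGMENT_SIZES",     # 2. compare 'sharable segment' sizes
        "ADB_ARRIVALS",      # 3. actual database - arrivals
        "ADB_DEPARTURES",    # 4. actual database - departures
        "SYSTEM_TABLES",     # 5. system tables
        "TSYNC",             # 6. time synchronization
        "RESET_SYS_ICINIS",  # 7. reset status bit
    ]
    for step in steps:
        send(step)
    return wait_for_up(att_cpurdy)
```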

2.3.8. Computer monitoring (wdog)

The WDOG process runs on all computers configured in the CPU table (cpu.tab). It must therefore
be entered in the process table (pn.tab) with target CPU 0. Depending on whether WDOG runs
on a STANDBY or NON-STANDBY computer, it is assigned different tasks.
When WDOG is running on a STANDBY computer, it monitors all computers in the network
assigned to it for monitoring in the CPU table (cpu.tab). If a monitored computer fails to
respond to queries, or if some other fatal error occurs, the STANDBY computer steps in to
replace the failed unit. This "CPU switch", i.e. the logical transfer of tasks, is controlled by the
WDOG process.
This "CPU switch" occurs in response to one of the following events:
1. TIME-OUT. The computer addressed fails to respond.
2. DISCERR. An access error on the disc, or on one of the hard drives.
3. ABORT. A process crash.
4. SYSERR. A system error in the operating system (system call) or a configuration error.
Computers are monitored at regular, timed intervals, controlled by the time parameter
ATT_WDOG (number 13) in the time parameter table (at.tab). If there is no entry here, the
default value is 3 seconds.
At these intervals, WDOG checks whether the computers assigned to it respond to its queries.
In other words, WDOG only actively detects the error situation TIME-OUT. The other errors are
reported by the computers being monitored to the STANDBY units responsible for them;
WDOG receives these messages from the local ICGET process.
If the conditions for a computer takeover are satisfied, WDOG first creates a new process queue
segment, depending on which processes are running on the computer which is to be taken
over. Any existing queue entries are taken over, provided that these belong to processes that
run on all computers. WDOG then swaps the logical CPU configuration in the CPU table
(cpu.tab) by exchanging the physical CPU references of its own computer and of the computer
being taken over.
Since this UPDATE to the CPU table (cpu.tab), just like any other change, is transferred
immediately to all other computers on the network, the takeover is logically complete from
this moment. The RESTART of the BIGMAMA process that is then initiated, and the related
start-up of the newly arrived processes, takes a few seconds but is no longer relevant to task
synchronization within the FIDS network, because any orders for the various processes are
already entering the relevant process queues and can be processed as soon as the relevant
process is available.
If the cause of the error was not a TIME-OUT, the computer being taken over is still able to
communicate. In this case, the last activity performed by WDOG is to send a DOWN or
RESTART command to the computer affected, depending on whether the RESTART option was
configured for this computer in the CPU table (cpu.tab). An attempt is therefore made to power
down the FIDS on the failed computer and, in the case of a RESTART, to power it back up.
In contrast to the STANDBY computer, the WDOG process on a NON-STANDBY computer only
has the task of checking the functional capability of the disc(s). This check is performed using
one or more type 'W' files in the file table (fi.tab). In each listed directory, either a specified or a
machine-generated file is opened, written and then deleted. If a system error occurs during this,
the error DISCERR is reported to the responsible STANDBY computer, which can be identified
easily from the CPU table. Disc drives are checked at regular intervals which can be specified
with the time parameter ATT_HDCHECK (number 14) in the time parameter table (at.tab). The
default value is 30 seconds.
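The disc check on a NON-STANDBY computer amounts to a write/delete probe, roughly as follows (a sketch; the file name and the error reporting path are assumptions):

```python
import os

def check_disc(directory: str, filename: str = "wdog.chk"):
    """WDOG-style disc check: open, write and delete a test file in the
    given directory. Returns None on success, or 'DISCERR' if any OS error
    occurs (which the real WDOG reports to the responsible STANDBY)."""
    path = os.path.join(directory, filename)
    try:
        with open(path, "w") as f:
            f.write("wdog disc check")
        os.remove(path)
        return None
    except OSError:
        return "DISCERR"
```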

2.4. Output handler


An output handler program is produced for every type of display peripheral in the FIDS; it
handles all changes to data and status entries on the controllers and displays assigned to
it. In cases where the same kind of peripheral has to be controlled using different firmware
and/or products from different manufacturers, separate handler programs should be
incorporated in parallel.
Individual handler processes receive orders via their data entry queue, most of them coming
from the action scheduler (ACTSCH). The action scheduler (ACTSCH) transfers data records
from the actual database which have given rise to a change on the peripherals. In addition,
time synchronization, periodic status queries to the connected display peripherals etc.
are transferred by the time scheduler (TIMSCH) to the output handler processes via the action
scheduler (ACTSCH).
Every output handler finds the logical displays in the display table (dp.tab) by a process number
reference contained in the table. A logical display in this case is any given number of data
records from the actual database which can physically occupy one or more lines of data on
a display medium. The logical display is connected to the physical device (controller) by a
reference in the device table (dv.tab). The format of an item of information on a logical display
is determined by a line format which is defined in the line format table (lf.tab) and it takes the
form of a sequence of field formats and/or reference numbers in the field format table (ff.tab).
The output handler processes derive selection conditions for individual displays from the
selection table (se.tab). In this table individual selection criteria can be defined for each of the
displays.
All data in every configured display are stored in the memory of the transport computer in the
form of a snapshot. Information is stored in what are known as display content tables and these
are recorded in the system table (sys.tab), just like all other tables.
All output handlers of one type can run in the system as several physical process copies, i.e.
they can be defined multiple times in the process table. Since every output handler process
recognizes the displays assigned to it from its own process number reference in the display
table (dp.tab), a division of work of this kind is entirely possible. This is a valuable approach
if a great number of displays have to be controlled. Every output handler then tackles the
displays assigned to it in sequential order.
In this context, it is worth noting that it is not possible for several output handler processes of
the same type to share one controller (refer to device table dv.tab), i.e. all displays on one
controller must be processed by one and the same output handler.
In functional terms, all output handler processes have the following tasks:
• UP function
With this function a controller or a logical display is taken into service and/or set to UP status.
If this involves a display unit, all current display data are recreated and printed out, provided
that the relevant controller also has UP status.

If the controller is in DOWN status, the display status is still changed to UP but no I/O takes
place. If a controller is set to UP, it is common practice for all assigned displays possessing
the UP status to have their data restructured.

• DOWN function
With this function, the output handler sets a logical display to DOWN status, i.e. the data in
the display are canceled and no account is taken of the display in the subsequent procedure.
It is also possible to set a controller to DOWN. This is not usually worthwhile, because all
output handlers are normally designed to poll the status of the controller peripherals assigned
to them at the frequent, regular intervals defined in the time parameter table (at.tab) and,
once contact is established, to put the controller into service and set it to UP.

• ADB functions
All changes in the actual database (ADB) are transferred to the output handlers by the action
scheduler (ACTSCH) provided that this instruction is entered in the action scheduler table
(as.tab). The functions involved are UPDATE, INSERT and DELETE. Since all displays in the
system are maintained in the form of memory snapshots in the display contents tables, it is
possible to minimize I/O because it is possible to determine precisely whether any data area
in a display has changed, and if so which one and how.
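Minimizing I/O against the in-memory snapshot can be pictured like this (a sketch; the real display content tables store formatted lines, not plain strings):

```python
def changed_lines(snapshot: dict, new_content: dict) -> dict:
    """Compare the stored display content (snapshot) with the freshly
    computed content and return only the lines that require physical
    output; unchanged lines cause no I/O."""
    return {line_no: text for line_no, text in new_content.items()
            if snapshot.get(line_no) != text}
```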

• CYCLE function
Provided that the time parameter ATT_OHCYCL (number 8) is defined in the time parameter
table (at.tab), the time scheduler (TIMSCH) places a CMD_CYCLE order in the data entry
queue of the action scheduler (ACTSCH) at the defined intervals. The action scheduler then
transfers the command to all output handler processes (type PNT_OUTHAN), provided that
these have a reference in the action scheduler table (as.tab). Every output handler process
then deals with the repeating tasks affecting its specific peripherals (time synchronization etc.).

2.5. Process concepts

2.5.1. Inter-process communication

Every process in the FIDS has a unique number defined in the process table (pn.tab). The
mother process BIGMAMA has number 0 and generates all other processes featured in the
process table. Communication between these processes takes place using process queues
which, combined, form the process queue segment of the FIDS. Every process locates its
queue by calculating its address offset from its process number defined in the process table
(pn.tab). The size of the process queue for individual processes can be adjusted in the process
table (pn.tab). Every process that is ready to receive waits in a message queue for the message
type carrying the value of its process number, and then processes entries until there are no
more in its data queue.
If at any time there is no further space in a process queue for an additional entry, the sending
process polls the queue at one-second intervals to check whether space has become available.
Using the time parameter ATT_DEADLOCK (number 12), a maximum period of time can be set
for which a sending process waits in such a situation before it gives up. If this situation occurs,
abnormal termination of the target process or some other DEADLOCK situation may be
imminent. The sending process then terminates the target process with the signal SIGKILL so
that it can be restarted by BIGMAMA using the 'RESPAWN' option in the process table (pn.tab),
provided this has been configured.
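The queue-full behaviour described above can be sketched as follows (`try_put` and `kill_target` are hypothetical stand-ins for the real queue operation and the SIGKILL signal):

```python
import time

def send_with_timeout(try_put, kill_target,
                      att_deadlock: float = 30.0, poll: float = 1.0) -> bool:
    """Poll a full process queue once per second; if no space appears
    within ATT_DEADLOCK, give up and kill the target process (which
    BIGMAMA may then restart via the RESPAWN option)."""
    waited = 0.0
    while not try_put():
        if waited >= att_deadlock:
            kill_target()  # SIGKILL in the real system
            return False
        time.sleep(poll)
        waited += poll
    return True
```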
Using the command 'qstat', a summary of the existing process queues can be called up on
the UNIX console, provided that the system is running. The format and information content of
the command output are explained in detail in another section of this Manual.
If data are sent to a process which is not running on the local computer, this is handled by the
PRC command of the local ICPUT process. ICPUT transfers the order through the network to
the ICGET process on the computer on which the desired process is running; the process table
(pn.tab) is consulted to discover which computer is involved. The ICGET process on the target
computer then transfers the data locally to the desired process.


2.5.2. Basic process functionality

All processes featured in the process table (pn.tab) operate during their initialization and
termination phases in accordance with a defined function procedure which incorporates them in
the FIDS/UNIX environment.
• Initialization

Once a process has been generated by its mother process, it first performs a 'sharable
memory mapping' of the six 'sharable memory segments' in which the entire memory-resident
information of the FIDS is stored: this information must be accessible to all processes. After
this, each process enters its own process ID (pid), generated by the UNIX system, in its own
data record of the process table (pn.tab) and, where appropriate, adapts its own priority.

As a final step, once a buffer memory area has been allocated in the program for entry data
from its own entry queue (size specified in pn.tab), the process increases the process ready
counter in the system status area, and the BIGMAMA mother process generates the next
process. Every process waits until the FIDS has been set to READY status by a defined process
(usually TIMSCH). Then each process waits synchronously for queue entries, assuming data
entries are expected (process blocked).

• Termination

When the FIDS is powered down, all processes in the process table (with the exception of
BIGMAMA) are informed of this event by an interrupt signal from the SYSDWN process.

At this point, every process sets its process ID (pid) in the process table (pn.tab) to zero and
releases the memory areas previously allocated to it. Finally, each performs an 'unmap' on
the 'sharable memory segments'.

Head office & production:
CONRAC GmbH · Developed / Designed / Made in Germany
Lindenstrasse 8 · D-97990 Weikersheim · Germany
Tel.: +49-7934-101-0 · Fax: +49-7934-101-101
Specification subject to change without prior notice.
info@conrac.de · www.conrac.de
ServerManual-Vol1_1.0060_rev00
DATA MODUL GROUP

Subsidiaries & Sales Offices:

CONRAC France - Paris: info@conracfrance.fr · www.conrac.fr · Tel.: +33-3-44 54 96 99
CONRAC MENA FZE - Dubai: info@conrac.ae · www.conrac.ae · Tel.: +971-4-29 94 009
CONRAC Latin America - Bogota: info@conrac.co · www.conrac.co · Tel./Fax: +57-1-34 65 338
CONRAC Sales Office Northern Europe - Sweden: info@conrac.se · www.conrac.se · Tel.: +46-42-21 29 39
CONRAC Asia - Singapore: sales@conrac-asia.com · www.conrac-asia.com · Tel.: +65-67 42 79 88
CONRAC South Africa - Johannesburg: info@conrac.co.za · www.conrac.co.za · Tel.: +27-83-63 50 369
CONRAC Sales Office Southern Europe - Rome: info@conrac.it · www.conrac.it · Tel.: +39-06-45 43 92 02
CONRAC Sales Office Northern Europe - Norway: info@conrac.no · www.conrac.no · Tel.: +47-52-77 63 85
