
Training Manual

Integrated Modular Avionics


ABBREVIATIONS
Throughout this document/training course the following abbreviations may be used:
ac Alternating Current
ACMF Aircraft Condition Monitoring Function
ACS ARINC 664 Cabinet Switch (CCR LRM)
AFD Adaptive Flight Display
AGU Audio Gateway Unit
AIM Aircraft Interface Module
APB Auxiliary Power Breaker
APCU Auxiliary Power Unit Controller
APU Auxiliary Power Unit
ARINC Aeronautical Radio Incorporated
ARS ARINC 664 Remote Switch
ASIC Application Specific Integrated Circuit
ASM Application Specific Module
ATRU Auto Transformer Rectifier Unit
ATUC Auto Transformer Unit Controller
BAG Bandwidth Allocation Gap
BC Battery Charger
BIT Built In Test
BITE Built-In Test Equipment
BOSS Broadband Offboard Satellite System
BPCU Bus Power Control Unit
BTB Bus Tie Breaker
CACTCS Cabin Air Conditioning and Temperature Control System
CAN Controller Area Network
CBB Connexion by Boeing
CBIT Continuous BIT
CCD Cursor Control Device
CCR Common Computing Resource
CCS Common Core System
CDN Common Data Network
CFPS Cargo Fire Protection System
CFG Configuration
CGO Cargo
CIB Captain’s Instrument Bus
CM Configuration Manager
CMCF Central Maintenance Computing Function
CMM Component Maintenance Manual
CMMF Configuration Management Manifest Function
CMRF Configuration Management Reporting Function
CPU Central Processing Unit
CRC Cyclic Redundancy Check
CTR Common Time Reference
CVR Cockpit Voice Recorder
dc Direct Current
DCA Display and Crew Alerting
DCP Display Control Panel
DDG Dispatch Deviation Guide
DDR Double Data Rate
EAFR Enhanced Airborne Flight Recorder
ECS Environmental Control System
EDC Error Detection and Correction
EDE Error Detection Encoding
EE Electronics Equipment
EEC Electronic Engine Controller
EED Enhanced Error Detection
EICAS Engine Indicating and Crew Alerting System
EMI Electro-Magnetic Interference
EPAS Emergency Power Assist System
ES End System(s)
ETI Elapsed Time Indicator
FCAC Forward Cargo Air Conditioning
FCE Flight Control Electronics
FCS Frame Check Sequence
FCZ Fault Containment Zone
FDE Flight Deck Effect
F/D Flight Deck
FIDO Flight Interactive Data Output
FIFO First In First Out
FIS Flight Information Set
FIZ Fault Isolation Zone
FMF Flight Management Function
FO Fibre Optic
FOIB First Officer’s Instrument Bus
F/O First Officer
FOX Fibre Optic Translator Module
FSS Functional Status Set
FWD Forward
GCU Generator Control Unit
GEN Generator
GG Graphics Generator Module
GND Ground
GPM General Processor Module
HA Hosted Application
HBB Hot Battery Bus
HF Hosted Function
HM Health Monitor / Health Manager
HPU HUD Projector Unit
HW Hardware
Hz Hertz
I/O Input / Output
I/U Inhibited/Uninhibited
I2C Inter-Integrated Circuit Bus
IBIT Initiated BIT
ICSCRU Integrated Cooling System Cargo Refrigeration Unit
IFG Inter-Frame Gap
IFZ Independent Fault Zone
IMA Integrated Modular Avionics
INBD Inboard
IP Internet Protocol
LAN Local Area Network
LBC Local Bus Controller
LG Landing Gear
LGS Landing Gear System
LME Line Maintenance Engineer
LRM Line Replaceable Module
LRU Line Replaceable Unit
LSAP Loadable Software Aircraft Part
MAC Media Access Control
Mb/s Megabits per second
MBR Main Battery Relay
MDIO Management Data Input/Output (IEEE 802.3 management interface)
MI Management Interface
MIB Management Information Base
MII (Ethernet) Media Independent Interface
MII Message Integrity Information
MKP Multi-Select Keypad
MLG Main Landing Gear
MMEL Master Minimum Equipment List
MUX Multiplexer
ms millisecond
NG Nose Gear
NLG Nose Landing Gear
NVM Non-Volatile Memory
NWSS Nose Wheel Steering System
OCMF On-board Configuration Management Function
ODLF On-board Data-Load Function
OFP Operational Flight Program
OMS On-board Maintenance System
OPS Operational Program Software
OS Operating System
PBIT Power-Up BIT
PCM Power Conditioning Module
PDHM Power Distribution Health Manager
PDOM Power Distribution Operations Manager
PECS Power Electronics Cooling System
PFPS Propulsion Fire Protection System
PIC Peripheral Interface Controller
PHY Ethernet Physical Layer Transceiver
PLD Programmable Logic Device
PLM Partition Load Map
RAM Random Access Memory
RAT Ram Air Turbine
RCB RAT Circuit Breaker
RDC Remote Data Concentrator
REU Remote Electronics Unit
RHS Right Hand Side
RM Redundancy Management
RPDU Remote Power Distribution Unit
RTB Right Transfer Bus
RTC Real Time Clock
Rx Receive
SATCOM Satellite Communications
SDRAM Synchronous Dynamic RAM
SFD Start Frame Delimiter
SNMP Simple Network Management Protocol
SPI Serial Peripheral Interface
SSPC Solid State Power Controller
SW Software
SWPM Standard Wiring Practices Manual
TCB Thermal Circuit Breaker
TM Time Manager/Management
TP Twisted Pair
TRU Transformer Rectifier Unit
Tx Transmit
UART Universal Asynchronous Receiver Transmitter
UDP User Datagram Protocol
UTP Unshielded Twisted Pair
VL Virtual Link
XCVR Transceiver
XFMR Transformer
DEFINITIONS
Throughout this document/training course the following terms may be used:

ARINC 664 Frame: An ARINC 664 Frame describes the data packet that is
transmitted across the network, inclusive of the protocol layers as well as the payload.

ARINC 664 Message: An ARINC 664 Message is a data item that is packed into the
payloads of one or more ARINC 664 frames. If a message is larger than the max payload
size for a frame, then the message data is split between multiple frames before
transmittal, and then re-joined into a single message upon receipt of all frames for that
message.
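
The split/re-join behaviour described above can be pictured with a minimal sketch (Python, illustrative only – not CCS software); the maximum payload size used here is an assumed example value, not the actual CDN configuration:

# Illustrative only: splitting a message into frame payloads and re-joining
# them on receipt. The payload limit is an assumed example value.
MAX_PAYLOAD = 1471  # assumed maximum payload bytes per frame (example only)

def fragment(message: bytes) -> list[bytes]:
    """Split a message into one or more frame payloads."""
    return [message[i:i + MAX_PAYLOAD] for i in range(0, len(message), MAX_PAYLOAD)]

def reassemble(payloads: list[bytes]) -> bytes:
    """Re-join the payloads into the original message once all frames have arrived."""
    return b"".join(payloads)

msg = bytes(3000)                    # a 3000-byte message needs three frames here
frames = fragment(msg)
assert len(frames) == 3 and reassemble(frames) == msg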

Application Specific Module: A component (physical element) of the system that is
installed in the CCR but is not logically part of the CCS.

Bandwidth Allocation Gap: A mechanism for controlling the amount of information that
an LRM/LRU can transmit.
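
As a hedged illustration (not the actual End System implementation), the sketch below treats the BAG as the minimum gap between successive frames on one virtual link; the 8 ms value is an assumed example. With an 8 ms BAG and, say, 200-byte frames, that virtual link is limited to roughly 200 / 0.008 = 25,000 bytes per second.

import time

class BagRegulator:
    """Illustrative traffic shaper: allow at most one frame per BAG interval."""
    def __init__(self, bag_seconds: float = 0.008):   # assumed 8 ms BAG (example only)
        self.bag = bag_seconds
        self.last_tx = None

    def try_send(self, frame: bytes, send) -> bool:
        """Send the frame only if the BAG has elapsed since the previous transmission."""
        now = time.monotonic()
        if self.last_tx is not None and (now - self.last_tx) < self.bag:
            return False               # too soon: the frame is held back
        send(frame)
        self.last_tx = now
        return True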

CCS LRU/LRM: The elements within the system boundary of the CCS. This includes the
CCS LRMs in the CCR, CDN Switches and the RDCs. It does not include Hosted
Functions or LRU/LRMs connected to CDN Switches or RDCs.

Compatibility Checking: LRU/LRM initiated check of hardware part numbers vs
software part numbers.

Configuration Checking: LRU/LRM level check based upon integrator defined
configuration tables (i.e. maintenance manual load of CCS manifest).

Consistency Checking: Initiated check of consistency (sameness) of software part
number among multiple instances of all load types installed on CCS components.

Hosted Application (HA): A Hosted Application is defined as a software application
that utilises the computing resources of the CCS. A hosted application can consist of one
or more software partitions. HAs include the CCS level applications that are standard in
the CCR such as Health Management, Time Management and Configuration
Management.

Hosted Function (HF): A Hosted Function is defined as a system that directly
interfaces with the CCS at one or more of its communication and/or I/O interfaces.
HF software need not be written by GE and is not standard on a CCR. A HF is similar to
a ‘federated system’ LRU.

Management Information Base (MIB): Error Register Data indicating ES health.


Multicast: The simultaneous delivery of information to a group of destinations.

Packet: A formatted block of data.


Partition: The partition is the virtual resource on the GPM inside which an application
runs (within the context of the ARINC 653 Operating System).

Primitive: Basic H/W or S/W generic function. Usually a single bit discrete, or a 16-bit
analogue value representing a voltage or frequency. Primitives can be combined to create
larger I/O functions.

Publisher: Any CCS user who passes data into the CDN.

Robust Partitioning: In a resource sharing unit, the resource allocated to a function is not
affected when changes are made to other functions sharing the same unit’s resources. In
the CCS, the resource-sharing units are the GPM software partitions, where throughput
and memory are the resources.

Subscriber: Any CCS user who requires data from the CDN.

System Subscriber: Any CCS user who passes data via the CDN.

Unicast: The delivery of information to a single destination.
Common Core System (CCS) Introduction
Moving information between avionics systems on board an aircraft has never been more
crucial, and it is here that electronic data transfer is playing a greater role than ever before.
Since the late 1980s, the all-electronic ‘fly-by-wire’ system has gained such popularity
that it has become the only control system used on new aircraft.
But there are a host of other systems on an aircraft – inertial platforms, communication
systems, and the like – that also demand high reliability and high-speed communications.
Control systems, and avionics in particular, rely on having complete and up-to-date
data delivered from data source to system receiver in a timely fashion. For safety critical
systems, reliable real-time communication links are essential.

This is where the Common Core System (CCS) comes into its own. Consisting of a
Common Data Network (CDN) and ARINC 664 protocol communications, the CCS is
characterised by the following features:
• An integrated high-integrity avionics platform, providing computing, communication
and Input/Output (I/O) services
• A network centralised communications environment
• Real-time deterministic system
• Configurable and extensible architecture
• Robust partitioning
• Fault containment
• Fail-passive design
• Asynchronous component clocking
• Compatibility with legacy LRUs
• Single LRU/LRM Part Numbers for basic platform components
• Open system environment

The utilisation of this type of architecture by the CCS has supported the three major
design goals of the aircraft:

Lower Operating Costs


The CCS architecture offers great flexibility for the aircraft. This flexibility is rooted in
the fact that the CCS is configurable and extensible. It is also a scalable system that is
built with a basic set of building blocks (computing, network, and I/O) that provide
flexibility for the system’s physical topology. The CCS can be re-configured or scaled as
appropriate to meet the needs for a modified system or a newly added system. This allows
the aircraft operator to make CCS-related aircraft changes at lower costs.
Existing, unused CCS resources provide great opportunities for adding low cost
functionality to the aircraft due to the system configurability properties. In addition, new
building blocks can be connected to the system to make further system resources
available due to the system scalability properties.

Reduced Fuel Consumption


The CCS architecture reduces the overall weight for the set of hosted functions. Reduced
weight translates into reduced fuel consumption for the aircraft. Instead of running
dedicated copper wiring for each I/O instance of a function, the CDN consolidates data
traffic for many functions onto a minimal number of bi-directional fibre optic lines.
Utilising a networked star topology, remote CDN switches are located in central locations
in order to minimise copper/fibre runs to the connected LRUs/sensors/effectors. Likewise
the RDCs are located throughout the aircraft sections in order to significantly minimise
interconnect wiring.
The CCS architecture reduces the overall power consumption for the set of hosted
avionics functions. Reduced power consumption also translates into reduced fuel
consumption. The architecture consolidates the numerous individual federated
computing resources into a minimal set, requiring less overall power than dedicating a
separate processor system to each avionics function.

Reduced Maintenance Costs


The maintenance costs are reduced for the CCS due to a reduced set of LRU/LRM part
numbers and equipment costs. The CCS provides a set of ‘generic’ resources (computing,
communication, and I/O) that are utilised by the entire set of hosted avionics functions.
This means a reduced parts list, thus reducing the number of spare units that must be
stocked for maintenance purposes.

Contrast between the CCS and ‘Federated’ architecture


The architecture utilised by the CCS is provided in contrast to the traditional architecture
characterised by ‘Federated Systems’.
Federated systems are designed to provide the following services in each LRU system:
• Separate signal processing
• Separate infrastructure
• Separate I/O
• Internal system bus
• Individual function fault processing and reporting
In addition, any I/O is routed point-to-point between any sensors, effectors and/or LRUs,
as shown below in Figure 1.

Figure 1 – Federated System Architecture


In contrast to federated systems, the architecture utilised by the CCS provides the
following
services for an integrated set of LRU systems:
• Common processing with robustly partitioned application software
• Common infrastructure
• Specific I/O via shared Remote Data Concentrators (RDCs)
• Distributed Systems Bus (CDN)

Figure 2 – CCS ‘Virtual System’ Architecture

The CCS architecture presents a ‘Virtual LRU’ concept to replace the systems packaged
as physical LRUs in a federated architecture system. Figure 2 portrays four (4) ‘Virtual
Systems’ that are equivalent to the four ‘physical’ systems shown in Figure 1. As
shown, the Virtual System consists of the same logical groupings of components as
contained by a physical system:
• Application software
• Infrastructure / Operating System (OS)
• Processor
• System bus
• I/O
Therefore, a key difference between the CCS architecture and the federated architecture is
the definition of the logical system. In a federated architecture the logical system is the
physical system. In the CCS architecture, the logical system is different from the physical
system and is thus referred to as a ‘virtual system’.
In a federated architecture system the target computer and the software application are
typically packaged in a ‘physical’ system embodied by an LRU. The application is
typically linked with the OS and other support software and hardware, the resulting
executable software being verified as a single software configuration item. Multiple
‘physical’ systems are then integrated in order to perform a specific set of aircraft
functions.

The architecture utilised by the CCS hosts the software application on a General
Processor Module (GPM) which is a computing resource shared between several software
applications.
The GPM hardware and platform software, along with configuration data developed by
the system integrator, forms the equivalent of a target computer. When a software
application is integrated with the target computer, it forms a ‘Virtual System’. Multiple
‘Virtual Systems’ are provided by a single GPM (see Figure 2). The distinction between
the application ‘Virtual System’
in the GPM and an application LRU (physical system) in the federated environment is
that the application ‘Virtual System’ in the GPM is a software configuration item (no
hardware).
To provide all the ‘Virtual Systems’ that are required to be part of the CCS, a number of
GPMs are necessary and these are all housed in a single unit called a ‘Common
Computing Resource’ (CCR). To ensure a system integrity of 10⁻⁹, there are two (2) CCR
cabinets to allow for system redundancy.
The ‘Virtual System’ concept extends to the Common Data Network (CDN). Many
‘Virtual Systems’ share the CDN as a data transport medium, with Virtual Link (VL)
addressing providing network transport partitioning for the application data messages.
Each VL address is allocated network bandwidth (data size and rate), and a maximum
network delivery latency (i.e. delay) and jitter - parameters that are all guaranteed.
The CDN consists of switches and a CDN harness. The switches are electronic devices
that manage the data traffic on the network between the connected Line Replaceable
Modules (LRMs), CCRs, and other system ‘subscribers’. The switches receive data from
any CDN subscriber, or from other switches, analyse and route it to one, or several,
appropriate recipients through the CDN harness.
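
The routing behaviour can be sketched as a static forwarding table keyed by virtual link; the VL numbers and port names below are hypothetical and for illustration only (real switch tables are defined by the system integrator):

# Hypothetical static forwarding table: each virtual link (VL) identifier maps to
# the fixed set of output ports the switch copies the frame to (multicast).
FORWARDING_TABLE = {
    101: {"port_3", "port_7"},         # example VL delivered to two subscribers
    205: {"port_1"},                   # example VL delivered to one subscriber
}

def route(vl_id: int, frame: bytes) -> set[str]:
    """Return the output ports for this frame; frames on unknown VLs are dropped."""
    return FORWARDING_TABLE.get(vl_id, set())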
The CDN harness is a ‘Full Duplex’ physical link between a CDN subscriber and a CDN
switch, and between two (2) CDN switches. The term ‘Full Duplex’ means that the CDN
subscriber can simultaneously transmit and receive on the same link.
For availability reasons, the CCS implements a redundant network. All CDN subscribers
have a connection to both networks A and B thanks to the redundant switches. Moreover,
at the systems level the CCS supports the Side 1/Side 2 segregation principle.
Conventional type LRUs and systems that cannot communicate directly with the CCS are
connected to an RDC. These devices convert the digital, analogue or discrete data into the
correct format for connection to the CDN.
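
As a rough sketch of the gateway idea (not the RDC’s actual software), the fragment below wraps a 32-bit ARINC 429 word into a UDP datagram for the network; the destination address, port and packing layout are assumptions for illustration only:

import socket
import struct

# Assumed example addressing; a real RDC uses integrator-defined configuration
# tables rather than hard-coded values.
VL_DEST = ("10.0.0.42", 5010)          # hypothetical destination for one virtual link

def publish_a429_word(word: int) -> None:
    """Wrap a 32-bit ARINC 429 word in a UDP datagram and send it onto the network."""
    payload = struct.pack(">I", word & 0xFFFFFFFF)   # 4-byte big-endian packing (assumed)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, VL_DEST)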
The ‘Virtual System’ concept also extends to the RDC, which is configured to provide I/O
services for multiple ‘Virtual Systems’. Through scheduled read/write operations, the
RDC employs temporal partitioning mechanisms. The actual partitions vary depending
upon specific ‘Virtual System’ usage, providing output signals to effectors, or reading
input signals from sensors, for a specific ‘Virtual System’ at a specific point in time.
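
The scheduled read/write idea can be pictured as a fixed table of time slots, each owned by one ‘Virtual System’; the slot times and system names below are purely illustrative:

# Illustrative fixed I/O schedule: each minor-frame slot is dedicated to one
# 'Virtual System', so one system's I/O cannot delay another's (temporal
# partitioning). Slot durations and owners are invented for illustration.
IO_SCHEDULE = [
    {"slot_ms": (0, 5),   "virtual_system": "Fuel Quantity", "action": "read sensors"},
    {"slot_ms": (5, 10),  "virtual_system": "Window Heat",   "action": "write effectors"},
    {"slot_ms": (10, 15), "virtual_system": "Landing Gear",  "action": "read sensors"},
]

def owner_at(t_ms: float) -> str:
    """Return which virtual system owns the I/O hardware at time t within the frame."""
    for slot in IO_SCHEDULE:
        start, end = slot["slot_ms"]
        if start <= t_ms < end:
            return slot["virtual_system"]
    return "idle"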
To aid system integrity, the RDC allows for physical separation between I/O signals
contained within multiple Independent Fault Zones (IFZs) in order to segregate functional
signals. These IFZ boundaries ensure that RDC faults do not affect I/O interfaces outside
of the faulted IFZ.
Each CCR and RDC is interconnected using the CDN, which allows the CCS and/or
conventional avionics to exchange data using the ARINC 664 data protocol. This protocol
is based on technology developed from the commercial Ethernet standard and adapted to
aviation constraints.

CCS Architecture
The CCS is an IMA solution to provide common computing, communications and
interfacing capabilities to support multiple aircraft functions.

The CCS consists of the following three major components:


• Common Computing Resources (CCRs)
These contain the General Processor Modules (GPMs), which support the system
functional computer processing needs.
• Remote Data Concentrators (RDCs)
These support the system analogue, discrete and serial digital interfaces for both
sensors (Inputs) and effectors (Outputs).
• Common Data Network (CDN)
This is the data highway between all components of the CCS and follows the
ARINC 664 protocol for communication between the system elements.
All the above elements are packaged to form the specific implementation for the aircraft.
Elements are packaged as either an LRU, a module or in card form. Modules and cards
are grouped within cabinets that share common resources, notably power supplies and
cooling.

The CDN switches and RDCs are distributed throughout locations within the aircraft to
facilitate separation and minimise wiring to subsystems, sensors and effectors.

An ‘open system’ environment is used within the CCS to enable independent suppliers to
design and implement their systems on the CCS by complying with industry standard
interfaces at all levels within the system.

The CCS is an asynchronous system, ensuring that each component’s operation schedule
is independent of the other components. Each unit internally controls when data is
produced; there is no attempt to order operations between units at the platform level. This
helps to prevent individual unit behaviour from propagating through the system and
affecting the operation of other units. Also, this unit level independence emulates the
federated system environment, producing the same system level characteristics.
The CCS is a configurable resource system. Functions are allocated the resources they
require to perform their task, in the form of sufficient processing time, memory, network
I/O communication and interface resources for both analogue signals and other digital bus
types.
These resource allocations are implemented within the CCS through specific
configuration tables loaded into each CCS unit. The configuration tables represent the
resource allocations that are guaranteed to each function to perform its task. These
resource guarantees, along with the system partitioning characteristics, form the
cornerstone of the hosted system independence and, therefore, change containment within the
system. These properties allow individual functions to change without collateral impact to
other functions.
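
As a rough sketch of the idea (not the actual configuration table format), guaranteed allocations per hosted function could be checked against the capacity of the hosting module; the function names echo the hosted functions listed elsewhere in this document, but every number here is invented for illustration:

# Assumed, illustrative resource allocation table for one GPM. A real CCS
# configuration table is produced by the system integrator and loaded onto
# each unit; the entries below are invented.
ALLOCATIONS = {
    "Flight Management": {"cpu_pct": 20, "memory_kb": 4096},
    "Thrust Management": {"cpu_pct": 10, "memory_kb": 1024},
    "Crew Alerting":     {"cpu_pct": 15, "memory_kb": 2048},
}

def check_budget(cpu_capacity_pct: int = 100, memory_capacity_kb: int = 65536) -> bool:
    """Verify that the guaranteed allocations fit inside the module's capacity."""
    total_cpu = sum(a["cpu_pct"] for a in ALLOCATIONS.values())
    total_mem = sum(a["memory_kb"] for a in ALLOCATIONS.values())
    return total_cpu <= cpu_capacity_pct and total_mem <= memory_capacity_kb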

Hosted Function/Application Description


The CCS is a hardware/software system that provides computing, communications and
I/O services for implementing real-time embedded systems, known as Hosted Functions
(HFs).
HFs are allocated system resources to form a ‘functional’ architecture specific to each
system, meeting the availability, safety, and configuration requirements of each
function.
When the term is used in its general form, as in the prior paragraph, a HF can describe
either a software application (Hosted Application) that uses the platform computing
resources, or a hardware system that utilises the CCS communication and I/O services.
The HFs may include sensors and effectors that utilise the RDC I/O resources for
interfacing with the CDN. When referencing a more formal definition of these terms,
‘Hosted Function’ can
be distinguished from a ‘Hosted Application’ as described below.

Hosted Function
A Hosted Function (HF) is defined as a system that directly interfaces with the CCS at
one or more of the following CCS communication and/or I/O interfaces:
• CDN
• ARINC 429
• CAN
• Analogue/Digital I/O
The HF is similar to a ‘federated system’ LRU.
The HF may include LRUs or Application Specific Modules (ASMs) that can utilise the
CDN, and/or LRUs resident on the ARINC 429 buses or Controller Area Network (CAN)
subnets that utilise the RDC gateway function for interfacing with the CDN.
Partitioning services are provided for both the CDN and RDC. The VLs configured on the
network provide partitioning services for data communicated between networked devices.
The RDC provides partitioning services for its gateway operations.
Hosted Application
A Hosted Application (HA) is defined as a software application that utilises the
computing resources of the platform and can consist of one or more partitions. The HA is
an Operational Flight Program (OFP) which resides within one target computer.
The target computer for a HA is defined as the processor and resources that execute a
computer program in its intended target hardware. Configuration data and platform
software is included as part of the target computer to enable computer programs to
execute on the intended target hardware.

ATA System Computing CDN RDC


21 Cabin Air Conditioning & Temperature Control √ √
21 ECS Low Pressure System √ √
21 Integrated Cooling System/Forward Cargo AC √ √
21 Power Electronics Cooling System √ √
22 AFDS Autopilot √
22 Autothrottle Servo Motor √
22 Thrust Management √
23 Communications Management √ √ √
23 Flight Deck Control Panels √ √
23 Flight Interphone System √
23 SATCOM √ √
24 Batteries √
24 Electric Power Distribution/RPDU √ √
25 Cargo Handling √ √
25 Lavatories √ √
26 Cargo Fire Protection √ √
26 Fire/Overheat Detection √ √
27 Primary Flight Controls √ √
28 Fuel Quantity Indicating System √ √
29 Hydraulic System √ √
30 Wing Icing Protection System √ √ √
31 Aircraft Condition Monitoring Function (ACMF) √ √ √
31 Crew Alerting √ √
31 Display System √ √
31 Recording System √
32 Brake System √
32 Landing Gear √ √
32 Proximity Sensor/Tail Strike √ √
33 Cargo Compartment Light Control √
33 Dimming √
33 Emergency Lighting √
33 Exterior Lights √ √
33 Flight Deck Lighting √ √
33 General Area Lighting Control √ √
33 Master Dim and Test √
34 Air Data Reference System √ √
34 Communication Radios √
34 Earth Reference System √
34 Flight Management √
34 Integrated Navigation Radios √
34 Integrated Surveillance System √ √
34 Navigation √
35 Oxygen System √ √
38 Waste Flush Control √ √
38 Waste Drain Valve Control & Indication √ √
44 Broadband Offboard Satellite System (BOSS) √
44 Cabin Pressure Control System √
44 Cabin Service System √ √
45 Central Maintenance Computing Function (CMCF) √
46 Core Network √
47 Nitrogen Generation System (FS) √ √
49 APU Controller √ √
51 Structural Health Management √
52 Emergency Power Assist System (EPAS) √
52 Flight Deck Security √ √
56 Window Heat √ √
73 Electronic Engine Control √
76 Engine Control √ √
77 Airborne Vibration Monitor √
78 Thrust Reverser Control √ √
80 Nose Wheel Steering √ √
Integrated Modular Avionics
Common Core System Overview
Purpose
The Common Core System (CCS) provides a
common hardware / software platform allowing
computing, communication, and I/O services for
the implementation of real-time embedded systems
(Also known as “Hosted Functions”)
Benefits of Common Core Avionics
Architecture
• Reduction in weight (approx 2000 lbs)
• Common part numbers across avionic applications reduce spares inventory and ease
maintainability (interchangeability)
• e.g. 16 GPMs, 21 RDCs
• Flexibility and multiple levels of redundancy
• Open architecture – lower cost of future enhancements and capabilities
Common Core System
Highlight Walk Around
Common Computing Resource (CCR)
 Hosts multi-supplier aircraft system applications in a robustly partitioned computing
environment
 CCR enclosure, multiple General Processing Modules (GPM), dual redundant Power
Conditioning Modules, provides cooling
 ARINC 653-1 partitioned operating environment
 High integrity design (<10⁻⁹ undetected failures per flight hour)

Common Data Network (CDN)
 Unified network
 High bandwidth (10/100 Mbps) & growth capability
 Interconnects all CCS components and Hosted Function LRUs
 Dual redundant
 Deterministic Ethernet (ARINC 664 Part 7/AFDX)
 Switched star topology
 Bidirectional fiber optic media for high speed users
Remote Data Concentrators (RDC)
 Analog, discrete, and serial digital (ARINC
429 & CAN) interfaces
 Interface gateway with CDN
 Single part number - location configurable
 Independent Fault Zones
 Accessible for maintenance
Basic CCS Architecture
Function
Provide Common Computing Resource
• General Processing Modules (GPMs) are the hardware base of the Common
Computing Resource
• The GPM, resident operating system, and configuration files (unique to each
GPM cabinet position) define a “Virtual System”
• Multiple Hosted Functions operate within each GPM
 Avionics
Flight Deck Displays and Crew Alerting, Flight Management,
Thrust Management, Communications Management,
Health Management (ACMF, CMCF), Configuration Management
 Environmental Control Systems
Air Conditioning, Pressurization, E/E Cooling, Power Electronics Cooling System,
Integrated Cooling System, Humidification & Dehumidification, Fire/Overheat Detection,
Engine & APU Fire Extinguishing, Cargo Fire Extinguishing, Engine Anti-Ice
 Electrical Systems
Electrical Power Distribution, Proximity Sensing, Window Heat
 Fuel Systems
Fuel Management, Fuel Quantity Indication, Nitrogen Generation System
 Hydraulic Systems
Hydraulic System Control and Indication
 Landing Gear Systems
Landing Gear Actuation, Steering
 Payloads / Interior
Potable & Waste Water, Passenger Oxygen, Flight Deck Door Security, Exterior Lighting,
Flight Deck Lighting
 Propulsion
Thrust Reversers
CCS “Virtual System”
Function
Provide Communication Services
• Avionics Full Duplex Switched Ethernet switches (AFDX) form the backbone
of the Common Data network (CDN)
• CDN utilizes 2 types of switches – ACS (AFDX Cabinet Switch) and ARS
(AFDX Remote Switch)
• AFDX switches route and police the data packets transmitted across the
Common Data Network
– ACS
– 2 reside in each CCS Cabinet (A/B channel redundancy)
– Supports 4 ports of 10Mbps copper Ethernet
– Supports 20 ports of 100Mbps Copper Ethernet (via CCS Cabinet
backplane)
– ARS
– Paired throughout aircraft (A/B channel redundancy)
– Supports 4 ports of 100Mbps bi-directional Fiber
– Supports 20 ports of 10Mbps Copper Ethernet
• Fiber Optic Translator (FOX) modules serve as transparent, bi-directional
links between ARS fiber and the CDN 100Mbps copper bus (physically located in the
CCR Cabinet)
• All devices connected to the CDN have “End System” (ES) ASICs installed.
These ASICs (Application-Specific Integrated Circuits) serve to encode, decode,
regulate, and validate data sent to / received from the CDN.
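
One part of that receive-side validation can be sketched as follows (a simplified illustration, not the ES ASIC design): with the dual A/B networks, the first valid copy of each frame is accepted and the later duplicate is discarded. Sequence-number windowing is simplified here.

class DuplicateFilter:
    """Illustrative redundancy management: accept the first copy of each frame."""
    def __init__(self) -> None:
        self.seen: set[int] = set()

    def accept(self, sequence_number: int) -> bool:
        """Return True for the first copy of a frame, False for its A/B duplicate."""
        if sequence_number in self.seen:
            return False
        self.seen.add(sequence_number)
        return True

rx = DuplicateFilter()
assert rx.accept(7) is True    # copy arriving on network A delivered to the application
assert rx.accept(7) is False   # duplicate copy arriving on network B discarded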
CCS Communication Map
Function
Provide I/O Services
RDC Overview
The design incorporates an AFDX End System (based upon the Rockwell Collins End
System Packet Engine) to attach to the aircraft communications network.
The RDC provides:
• Digital gateway
• Analogue inputs and outputs
• Discrete inputs and outputs
• Frequency inputs and outputs
Additional Functions:
• Independent Fault Zones (IFZs) embodied to ensure segregation of essential Hosted
Systems
• Dual Serial Peripheral Interfaces (SPIs) bridge the AFDX Gateway to digital, analog,
and ARINC 429 interfaces
• Management Information Base (MIB) stores RDC health, status, and configuration data
RDC Multi-System Architecture – Digital I/O Scheme
CAN Transceiver
• 10 Provided
• Implemented as IFZ
– Faults limited to LRUs / Hosted Functions on associated CAN Bus
– Safety Analysis required for associated CAN Bus
• Fault Detection via system-level wrap checks with CAN LRU
– CAN Bus is bi-directional
ARINC 429 Transmitter
• 6 Provided
• Implemented as High Integrity Master /Checker
– Transmit Master mapped to one CAN IFZ
– Transmit wrap check mapped to second CAN IFZ
– Errors detected by the wrap check disconnect the Transmit Output via a switch
ARINC 429 Receiver
• 10 Provided
• Implemented as High Integrity Dual Receivers
– Each receiver mapped to separate CAN IFZ
– One receiver accessed via Gateway A; second via Gateway B
– ARINC 429 messages checked by high-integrity AFDX End System
– An RDC fault in either receiver signal path results in the affected ARINC 429 messages
not being transmitted on AFDX, i.e. fail-passive (see the sketch below)
– RDC fault identified by RDC from End System MIB data
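
A minimal sketch of the fail-passive behaviour described above (illustrative only, assuming a simple word-for-word comparison between the two receiver paths):

from typing import Optional

def cross_check(word_path_a: Optional[int], word_path_b: Optional[int]) -> Optional[int]:
    """Forward the word only when both independent receiver paths agree;
    any mismatch or missing copy suppresses the output (fail-passive)."""
    if word_path_a is not None and word_path_a == word_path_b:
        return word_path_a
    return None                 # nothing is transmitted on the network

assert cross_check(0x1234, 0x1234) == 0x1234
assert cross_check(0x1234, 0x1235) is None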
CCS System Component Summary
• Qty 2 Common Computing Resources (CCRs) per aircraft. Each
CCR comprises the following:
• CCR Cabinet / Fan Assembly
• 8 GPMs (General Processing Modules)
• 2 PCMs (Power Conditioning Modules)
• 2 FOXs (Fiber Optic Translators)
• 2 ACS switches (AFDX Cabinet Switches)
• 2 Application-Specific Graphics Generator (GG) Modules
(Manufactured by Rockwell Collins – not part of Common Core
System)

• Remote Components (external to CCR Cabinets)


• Qty 21 Remote Data Concentrators – 20 RDCs in essential positions,
one non-essential
• Qty 6 AFDX Remote Switches (ARS) distributed throughout
aircraft
CCR Cabinet Hardware Structure
CCS Equipment Locations

(Equipment location diagram: forward and aft E/E bays shown, with the 21 RDCs and 6
remote switches distributed throughout the aircraft.)
Component Features
CCR Cabinet
Shown Partially populated with GPM, GG, and ACS Switch
Forced Air Cooled with Backup Fans (case mounted)
Component Features
Remote Data Concentrator
• The RDC (Remote Data Concentrator)
acts as a remote interface unit to provide
input / output consolidation across an
AFDX network.
• Provides a high speed interface that
reduces the amount of aircraft wiring,
thereby reducing aircraft weight, cost, and
recurring maintenance costs.
• Acts as an interface unit between a
multitude of sensor types and the AFDX
bus. While collecting data from the sensor
suites and encoding it to AFDX message
format, it also decodes AFDX messages
and writes them onto associated outputs.
• Physically located throughout the aircraft
as necessary to provide local interface
points for aircraft systems.
• Completely interchangeable
Component Features
AFDX Switches

ACS (CCR Switch)
• Qty 4 per Ship Set
• Fully ARINC 664 Compliant Layer 2 Switch
• Supports 24 ports of Copper Ethernet: 4 ports of 10baseT, 20 ports of 100baseTX
• 17W Max Power, 3.3 lbs Max Weight
• “Single wide” LRM designed for cabinet installation
• Operates off Dual 12VDC

ARS (Remote Switch)
• Qty 6 per Ship Set
• Fully ARINC 664 Compliant Layer 2 Switch
• Supports 20 ports of Copper Ethernet (10baseT)
• Supports 4 Ports of Bidirectional Fiber (100baseBX)
• 24 external ports plus one Internal “Uplink” port for Mgmt Processor
• 27W Max Power, 8.3 lbs Max Weight
• Standalone LRU designed for Installation anywhere inside the pressure vessel
• Operates off Dual 28VDC
Component Features
General Processor Module
Arinc 653 Operating System
Hosts Functional Software (eg Flight Management, Nose Wheel
Steering)
Component Features
Power Conditioning Module

2 PCMs per Cabinet


Dual Redundant Power Conditioning
Component Features
FOX (Fiber Optic Translator)

• Provides 2-way media translation from fiber optic to copper AFDX network
connections.
• Functions as a transparent communications link within the CCR/CCS system.
• Has dedicated power conversion circuits, BIT monitoring functions, and data storage
capability.
• Memory is non-volatile, so data is not lost when power is removed.
Functional Overview – Sample Signal Flow
Distance Measuring Equipment (DME)

The following sequence traces a DME auto-tune command from the Flight Management
Function out to the DME interrogator, and the DME distance data back to the flight deck
displays:

1. The Flight Management Function (FMF), a Hosted Function within the CCS, generates
DME auto-tune commands within the GPMs.
2. The GPM sends the DME tune commands to the ACS switches via 2 redundant data
channels (ARINC 664 data, electrical).
3. The ACS switches send the data to the FOX modules.
4. The FOX modules convert the data from electrical to optical signals and send it to the
ARS switches via fiber optic cable (ARINC 664 data, optical).
5. The ARS switches convert the optical data back to electrical and forward it to the RDC.
6. The RDC converts the auto-tune data to ARINC 429 format and sends both channels
(A/B auto-tune commands) to the DME interrogator.
7. The DME interrogator provides DME distance data to the RDC (ARINC 429 data).
8. The RDC converts the ARINC 429 signal to ARINC 664 data and sends it to the ARS
switches via 2 redundant data channels.
9. The ARS switches convert the data from electrical to optical and send it to the FOX
modules.
10. The FOX modules perform the optical-to-electrical data conversion and send the data
to the ACS modules.
11. The ACS modules send the DME distance data back to the GPM(s) in which the
Flight Management Function software is resident.
12. The Flight Management Function (FMF) uses the DME distance data to calculate the
aircraft position.
13. The calculated aircraft position data is sent to the display function in the GPM(s).
14. The GPM sends the aircraft position display data to the Graphics Generator(s) via 2
redundant data channels (ARINC 664 data, electrical).
15. The Graphics Generator modules (hosted within the CCR cabinet) convert the display
data from electrical to optical and send it to the displays (Primary Flight Display and
Head-Up Display).
