
Sampling is the periodic measurement of the value of the analogue signal.

A sampled signal contains all the information of the original signal if the sampling frequency is at least twice the highest frequency of the signal to be sampled.
As the analogue signals in telephony are band-limited to 300-3400 Hz, and to allow for the actual slopes of the filters used, a sampling rate of 8 kHz is required. After band limiting with a low-pass filter the analogue signal is sampled and the samples obtained are digitally encoded. The sampling rate recommended by ITU-T G.711 is 8000 samples per second, with 8 bits (one byte) per sample. The time taken for one sample is 125 µs; this sample period is called one frame.

Conversion of voice into digital signal:


1. Voice frequency: 4 kHz
2. Sampling: 4 kHz x 2 = 8 kHz
3. Quantizing: each sample amplitude is given a discrete value.
4. Encoding: 8 kHz x 8 bits = 64 kbit/s
5. Line coding
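As a quick plausibility check, the sketch below (plain Python, values taken from the steps above) reproduces the per-channel rate arithmetic.

    # Sketch of the digitization arithmetic above.
    voice_bandwidth_hz = 4_000                      # band-limited voice channel
    sampling_rate_hz = 2 * voice_bandwidth_hz       # Nyquist rate: 8 000 samples/s
    bits_per_sample = 8
    channel_rate_bps = sampling_rate_hz * bits_per_sample
    print(channel_rate_bps)                         # 64000 -> 64 kbit/s per voice channel
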
Digital data and voice transmission is based on a 2.048 Mbit/s signal consisting of 30 time-division-multiplexed (TDM) voice channels, each running at 64 kbit/s (known as E1 and described by the CCITT G.702 and G.703 specifications), plus two additional channels carrying control information. At the E1 level, timing is controlled to an accuracy of 1 in 10^11 by synchronising to a master caesium clock. Increasing traffic has demanded that more of these basic E1s be multiplexed together to provide increased capacity, so bit rates have increased through 8, 34 and 140 Mbit/s. The highest capacity encountered on PDH fibre-optic links is 565 Mbit/s, with each link carrying 7,680 base channels.

A PCM-30 MUX consists of 30 time-division-multiplexed (TDM) voice channels, each running at 64 kbit/s (known as E1 and described by the CCITT G.702 and G.703 specifications), and two additional channels carrying control information: time slot 0 carries the frame alignment signal and time slot 16 carries the signalling used by the switch. There are 32 channels/time slots in each 125 µs frame. Each time slot carries 8 bits and has an overall period of about 3.9 µs, so each bit lasts about 488 ns.
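A minimal sketch of this timing and rate arithmetic (plain Python, figures from the text above):

    # E1 (PCM-30) frame timing and aggregate rate.
    frame_period_s = 125e-6                         # one frame per 8 kHz sampling period
    timeslots = 32
    slot_period_s = frame_period_s / timeslots      # ~3.906 us per 8-bit time slot
    bit_period_s = slot_period_s / 8                # ~488 ns per bit
    e1_rate_bps = timeslots * 64_000                # 32 x 64 kbit/s = 2.048 Mbit/s
    print(round(slot_period_s * 1e6, 3), round(bit_period_s * 1e9), e1_rate_bps)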

In the figure above, two extra bytes are placed in time slot 0 and time slot 16 respectively. The byte inserted in time slot 0 is used for frame alignment; when the PCM-30 MUX signal is demultiplexed, this byte is used again for frame alignment. The other byte, in time slot 16, is used by the switch.
The PCM-30 MUX signal is called E1 when it carries switching data and is output from digital switches or transmission equipment within the same station. When the signal is provided to transmission equipment, it is connected to the (PCM-30) interface of the SDH hierarchy.

In a PDH multiplexer

the individual bits must be running at the same speed, otherwise the bits cannot be interleaved. The speed of the higher-order side is generated by an internal oscillator in the multiplexer and is not derived from the primary reference clock.
The possible plesiochronous difference is catered for by using a technique known as justification: extra bits are added (stuffed) into the digital tributaries, which effectively increases the speed of each tributary until they are all identical.
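The sketch below is only an illustration of the idea (the rates are hypothetical, not taken from the text): justification makes up the difference between a tributary clock and the faster internal clock with stuff bits.

    # How many stuff bits per second the multiplexer must insert, on average.
    def stuff_bits_per_second(tributary_bps: float, internal_bps: float) -> float:
        return internal_bps - tributary_bps

    print(stuff_bits_per_second(2_048_000, 2_048_100))   # 100.0 bits/s for this example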

Jitter Effects:
To measure jitter, the incoming signal is regenerated to produce a virtually jitter-free signal, which is used for comparison purposes.
No external reference clock source is required for jitter measurement. The maximum measurable jitter frequency is a function of the bit rate and ranges up to 20 MHz at 2.488 Gbit/s (STM-16/OC-48).
The unit of jitter amplitude is the unit interval (UI), where 1 UI corresponds to an error of the width of one bit. Test times on the order of minutes are necessary to measure jitter accurately.

Jitter:
Periodic or random changes in the phase of the transmission clock referred to the master or
reference clock. In other words, the edges of a digital signal are advanced or retarded in time
when compared with the reference clock or an absolutely regular time framework. Jitter generally refers to deviations of more than 10 Hz.
Wander:
Slow changes in phase (below 10Hz); a special type of jitter.

Interference signals
Impulsive noise or crosstalk may cause phase variations (non-systematic jitter); normally high-frequency jitter.
Pattern-dependent jitter
Distortion of the signal leads to so-called inter-symbol interference, i.e. pulse crosstalk that varies with time (pattern-dependent jitter).
Phase noise
The clock regenerators in SDH systems are generally synchronized to a reference clock. Some phase variations remain, due to thermal noise or drift in the oscillator used.
Delay variation
Changes in the signal delay times in the transmission path lead to corresponding phase variations. These variations are generally slow (wander), e.g. due to temperature changes in optical fibres.
Stuffing and wait-time jitter
When stuffing bits are removed, the resulting gaps have to be compensated by a smoothed clock.
Mapping jitter
See above.
Pointer jitter
Occurs when the pointer value is incremented or decremented; this shifts the payload by 8 or 24 bits, corresponding to a phase hit of 8 or 24 UI.


Jitter amplitude is measured in unit intervals (UI).

1 UI corresponds to an amplitude of one bit clock period. The unit interval is independent of bit rate and signal coding, as it is referred to the length of a clock period. The peak-to-peak value is expressed in UIpp.
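A minimal sketch (plain Python) of the UI-to-time conversion implied above; the two example bit rates are illustrative.

    # 1 UI is one clock (bit) period of the measured signal.
    def ui_to_seconds(ui: float, bit_rate_bps: float) -> float:
        return ui / bit_rate_bps

    print(ui_to_seconds(1, 2_048_000))     # ~4.88e-07 s (488 ns) at 2.048 Mbit/s
    print(ui_to_seconds(1, 155_520_000))   # ~6.4e-09 s  (6.4 ns) at STM-1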

International standards define upper limits for MTIE and TDEV.


A rough assessment of the tributary wander that may occur can be made by observing the pointer in an SDH system. If no pointer jumps are seen, no wander occurred during the period of observation. If pointer jumps occur, the wander can be assessed as follows: for example, a pointer jump at the AU level of STM-1 corresponds to 3 x 8 bits at 155 Mbit/s. This means that the drift is about 156 ns referred to the payload of 140 Mbit/s.

Summary of the PDH/SDH multiplex procedure:

Container C-n: (n = 1-4)
Basic information structure which forms the synchronous payload. The input data rate is adapted by fixed stuffing bits. Clock deviations are compensated by a stuffing procedure similar to PDH.
Virtual Container VC-n: (n = 1-4)
The virtual container is the information structure with facilities for maintenance and supervision. It comprises the information (payload) and the POH. Maintenance signals are path related and span end-to-end through the SDH system.
Tributary Unit TU-n: (n = 1-3); only for VC-1/2/3
The tributary unit is formed of the virtual container and a pointer to indicate the start of the VC. The pointer position is fixed.
Tributary Unit Group TUG-n: (n = 3, 4); only if TUs are available
This is formed by a group of identical TUs for further processing.

Administrative Unit AU-n: (n = 3, 4)

This element comprises a VC and an AU pointer. The pointer position is fixed within the STM-1 frame.


An STM-1 signal has a byte-oriented structure with 9 rows and 270 columns. A distinction is made between three areas:
the payload area, which uses 261 columns
the pointer area
the section overhead, which is split into two parts, the Regenerator Section Overhead and the Multiplex Section Overhead.
Each byte corresponds to a 64 kbit/s channel. The overall bit rate of the STM-1 frame is 155.520 Mbit/s. The frame repetition time is 125 µs.
The STM-N frame structure is best represented as a rectangle of 9 rows by 270 x N columns.

The first 9 x N columns are the frame header and the rest of the frame is the inner structure, i.e. the payload (including the data, indication bits, stuff bits, pointers and management).
The STM-N frame is usually transmitted over an optical fibre, row by row (first the first row, then the second, and so on). At the beginning of each frame the synchronization bytes A1, A2 are transmitted.
Four STM-1 streams are multiplexed into an STM-4 by byte-interleaving the STM-1 streams.
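A minimal sketch (plain Python) of the rate arithmetic that follows from this frame geometry:

    # 9 rows x 270N columns x 8 bits, repeated every 125 us (8000 frames/s).
    def stm_rate_bps(n: int) -> int:
        rows, columns, bits_per_byte, frames_per_second = 9, 270 * n, 8, 8_000
        return rows * columns * bits_per_byte * frames_per_second

    print(stm_rate_bps(1), stm_rate_bps(4))   # 155520000 622080000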


The PDH signals are first adapted into containers; this process is defined as frame alignment. That is, PCM-30 is adapted into C-12, 34M into C-3 and 140M into C-4. In this process several stuffing bytes are added.
The process from containers to virtual containers is defined as mapping. In mapping, path overhead (POH) bytes are added to the containers: V5, J2, N2 and K4 are added to C-12 to make VC-12, and J1, B3, C2, G1, F2, H4, F3, K3 and N1 are added to make VC-3/VC-4.
Virtual containers consist of POH and payload. The containers located in the third column from the right (of the multiplexing structure) are defined as Lower Order Virtual Containers (LOVC).
The higher-layer tributary unit accommodates a lower-order virtual container regardless of the location of its starting byte. The location of the starting byte is written in the pointer bytes; this process is defined as pointer processing.
When 3 parallel TU-12s are bound up into one TUG-2, this process is defined as multiplexing.
7 TUG-2s are bound up into one TUG-3.
3 TUG-3s are bound up into one VC-4.
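A minimal sketch (plain Python) of the multiplexing factors just listed:

    # 3 TU-12 -> 1 TUG-2, 7 TUG-2 -> 1 TUG-3, 3 TUG-3 -> 1 VC-4.
    tu12_per_tug2, tug2_per_tug3, tug3_per_vc4 = 3, 7, 3
    print(tu12_per_tug2 * tug2_per_tug3 * tug3_per_vc4)   # 63 VC-12 (2 Mbit/s) channels per VC-4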


When the 32 bytes x 4 frames of the PCM-30 signal (128 bytes in total) are adapted into a C-12, the result is 136 bytes in total, with 2 stuffing bytes added to each frame. The values of the justification control bits illustrated in the figure below are 000 for C2 and 111 for C1. When a justification opportunity bit is flagged as a justification bit, the corresponding bit is filled with a stuffing value, not with a datum; therefore the actual value of S1 is not defined and the receiver must ignore its content. S2 carries a datum.
However, if within 500 µs (i.e. within 4 consecutive frames) the PCM-30 signal for some reason becomes slower and fails to deliver 1024 bits (128 bytes), delivering only 1023 bits, the justification opportunity bit S2 is filled with a stuffing value and C2 is set to 111. The receiver must then ignore the content of S2.
When the PCM-30 signal becomes faster and delivers 1025 bits (128 bytes plus 1 bit), the justification opportunity bit S1 carries a datum and C1 is set to 000.
The possible combinations of the values of C1 and C2 are summarised below.

Bits delivered within 500 µs        C1     C2
1023                                111    111
1024 (normal)                       111    000
1025                                000    000

A majority vote is used to make the justification decision, to protect against single bit errors in the C bits when the signal is demultiplexed. The bits indicated as overhead bits are reserved for future use.
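The sketch below (plain Python, an illustration rather than the exact decoder) shows how a receiver could apply the table above, with the majority vote protecting each 3-bit control group against a single bit error.

    def majority(bits: str) -> str:
        return "1" if bits.count("1") >= 2 else "0"

    def bits_delivered(c1: str, c2: str) -> int:
        decoded = (majority(c1), majority(c2))
        return {("1", "1"): 1023, ("1", "0"): 1024, ("0", "0"): 1025}[decoded]

    print(bits_delivered("111", "010"))   # one errored bit in C2 still decodes to the normal case, 1024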


The four consecutive frames described in the above process are defined as a multiframe structure. The next step, from C-12 to VC-12, is defined as mapping. In mapping, 4 different POH bytes are inserted at the beginning of the 4 consecutive frames respectively, namely V5, J2, N2 and K4, which makes 35 bytes of VC-12 per frame. This multiframe is repeated every 500 µs. The VC-12 thus consists of POH and information payload.

When the VC-12s are adapted into TU-12s, the method defined for use is alignment. Alignment absorbs the offset between the frame start of the VC-12 and that of the TU-12.
A set of four consecutive frames of a VC-12 is always adapted into 4 consecutive TU-12s. The start byte of the 4 consecutive frames of the VC-12 is V5, whose location is written in the V1 and V2 bytes as a binary pointer value. The formats of V1 and V2 are described in the figure: the first 4 bits are allotted to the New Data Flag (NDF), the following 2 bits to the SS bits, and the remaining 10 bits to the pointer value.

A pointer is an indicator whose value defines the frame offset of a virtual container with respect to the frame reference of the transport entity on which it is supported.
In the case of TU-12, the pointer value indicates the frame offset of the beginning of the multiframed VC-12, that is, the location of V5.

The starting position of the pointer (offset 0) is just after the V2 byte, and the position of the maximum pointer value is the last byte of the first frame in the multiframe structure. The actual pointer value fluctuates between 0 and the specified maximum value and indicates the corresponding position in the multiframe.


Objectives:

Given a knowledge of SDH Basics, be able to:

Describe signal names and definitions of the multiplexing steps in SDH


Describe the SDH frame structure
Describe the pointer function
Describe the justification principle using the pointer

Terms of multiplexing structure:


1. Container (C-n)
1.1 A bandwidth packet to store PDH signals, B-ISDN signals or other services not yet known.
2. Virtual container (VC-n)
2.1 A bandwidth packet to store signals with path overhead (POH), which carries information for path-layer administration and is added to the container.
2.2 VC-4 is named the High Order VC (HOVC) and the other VCs are called Low Order VCs (LOVC), because a LOVC is always carried by a HOVC.
2.3 For SONET, VC-3 is the HOVC.
3. Concatenated virtual container (VC-4-nc)
3.1 To carry high-bandwidth information larger than the VC-4 size, multiple VC-4s are combined into one unit.
4. Tributary unit (TU-n)
4.1 A TU pointer is added to a LOVC to indicate its phase relation to the HOVC into which it is mapped.

5. Tributary unit group (TUG-n)

5.1 Multiple tributary units of the same type and speed/size are byte-interleave multiplexed into a TUG-n.
6. A TUG-2 may contain TU-11s, TU-12s or TU-2s.
7. A TUG-3 may contain multiple TUG-2s or a single TU-3.


1. Administrative unit (AU-n)

1.1 Phase information between a HOVC and the STM-N frame (the AU pointer) is added to a HOVC.
AU-4 is VC-4 plus AU pointer (for SDH).
AU-3 is VC-3 plus AU pointer (for SONET).
2. Administrative unit group (AUG-n)
2.1 A group of multiple AU-3s, multiple AU-4s or an AU-4-nc.
3. Synchronous transport module (STM-N)
3.1 Line or NNI (Network Node Interface) signal.
3.2 The AUG-ns are housed in the payload, to which Regenerator and Multiplex section overhead data are added.
4. Mapping
4.1 The process of putting a service into a VC as a first step. Putting LOVCs into a HOVC via a TUG is not mapping but multiplexing.
5. Aligning
5.1 Pointer processing.
6. Multiplexing
6.1 The process of combining multiple signals into a higher-capacity unit using simple byte-interleaved multiplexing.
7. ETSI and SONET standards
7.1 The multiplexing route via AU-4 is the ETSI standard and is used by most countries.
7.2 The route via AU-3 is the SONET standard used in North America; Japan also adopts this route.


1. The multiplexing process is described here from a different point of view.

2. Except for special cases, a PDH signal is usually not synchronous to an SDH signal. Frequency adjustment, i.e. justification between PDH and SDH, is necessary and is carried out by stuffing bits (S) in a C-12.
Stuffing bits or bytes other than those for justification are also inserted in order to harmonize sizes neatly with other-level packets during mapping and multiplexing. This kind of stuffing byte is used in every step.
3. The drawing shows the multiplex concept. The actual process is not packet-level multiplexing but byte-interleaved multiplexing, i.e. neighbouring bytes belong to different packets.
4. In this process, AUG-1 is the same as the AU-4.

1. An STM-1 frame cycle is 125 µs and contains 2,430 bytes.

1.1 In SDH, a matrix of rows and columns represents the frame structure. There are always nine rows.
1.2 The number of columns depends on the bit rate (from STM-1 to STM-N).
1.3 Transmission order is row by row and left to right.
2. For STM-1, the matrix is nine rows by 270 columns. (The segments represent the conventional frame structure.)
3. In this matrix representation, the overheads are arranged at the left end.
3.1 The overhead area is nine rows by nine columns.
3.2 The first to third rows are the Regenerator section overhead.
3.3 The fourth row is the AU pointer.
3.4 The fifth to ninth rows are the Multiplex section overhead.
4. The remaining portion carries the traffic and is referred to as the payload.

1. Byte-Interleaved Multiplexing

1.1 The SOHs of all STM-1s are terminated (as a result they turn into AU-4s).
1.2 The phases of the AU-4s are aligned and all AU pointers are renewed (details will be explained later).
1.3 The 1st byte of the 1st AU-4 (A) is put into the 1st byte of the AUG-n, the 1st byte of the 2nd AU-4 (B) into the 2nd byte of the AUG-n, ..., and the 1st byte of the (n)th AU-4 (N) into the (n)th byte of the AUG-n.
1.4 The 2nd byte of the 1st AU-4 (A) is put into the (n+1)th byte of the AUG-n, the 2nd byte of the 2nd AU-4 (B) into the (n+2)th byte of the AUG-n, ..., and the 2nd byte of the (n)th AU-4 (N) into the (2n)th byte of the AUG-n.
1.5 ...
1.6 The pointers of the AU-4s are also byte-interleave multiplexed into the 4th row of the pointer area, indicated as AU PTRs.
1.7 A new STM-N SOH is inserted.
2. STM-N Frame
2.1 The number of rows remains 9; the nine-row rule is common to all packets.
2.2 The number of columns for the SOH and AU pointers is 9 x n, and the payload occupies 261 x n columns.
2.3 The frame period is 125 microseconds.


1. Multiplexing of LOVCs into an STM-N takes two steps:

1.1 LOVCs into a HOVC, and HOVCs into an STM-N. Pointers are used for the multiplexing.
1.2 Unlike the PDH case, no frame alignment signal (FAS) is used, i.e. there is no FAS in the VC. An example is VC-12 to STM-4, i.e. multiplexing 63 VC-12s into a VC-4 and then the VC-4s into an STM-4.
2. Multiplexing of VC-12s into a VC-4
2.1 The phase relation between a TU-12 and the VC-4 is fixed, but between the TU-12 and the VC-12 it is not. As a result, the phase relation between a VC-12 and the VC-4 is not fixed. The first byte of a VC-12 can be placed at any byte of the VC-4, although it must belong to the TU-12 to which the VC-12 is aligned.
2.2 Also, the phase relations between the 63 VC-12s are not fixed. The first bytes of the 63 VC-12s are scattered in the VC-4 payload at random.
2.3 This floating phase relation is indicated by the TU-12 pointer, which carries the offset number of the byte where the first byte of the VC-12 is placed. The TU-12 pointers appear at a fixed area of the VC-4.
3. Multiplexing of VC-3s into a VC-4
3.1 Using TU-3s and in a similar way as above, a maximum of 3 VC-3s can be multiplexed.
3.2 The TU-3 pointers appear at a fixed area that is different from the TU-12 pointers.
4. By reading the TU pointer value, a desired VC-12 or VC-3 can be accessed quickly and easily.
5. VC-12s and VC-3(s) can be mixed into a VC-4.
6. Multiplexing of VC-4s into an STM-4
6.1 By replacing TU-12 with AU-4, VC-12 with VC-4, VC-4 with STM-4, 63 with 4 and TU pointer with AU pointer, the above explanation can be applied.


1. Pointer offset numbers of the AU-4

1.1 Three successive bytes of the AU-4 payload share the same pointer offset number, and offsets 0-782 are assigned (3 x 783 = 2349 bytes = AU-4 payload size).
1.2 The three bytes just after the AU pointer have offset number 0.
1.3 Only the first byte of the three bytes sharing an offset is allowed to be the first byte of a VC-4.
1.4 A VC-4 frame spreads over two AU-4 frames.
2. Pointer offset numbers of the AU-3
2.1 The repeated numbering is not applied to AU-3 (offsets 0-782 = AU-3 payload size).
3. This method results in the numbering of bytes in the STM-N payload being identical regardless of its AU composition (i.e. either the ETSI or the SONET standard).
4. Pointer Structure
4.1 Two bytes (H1 and H2) are combined to make a 16-bit word.
4.2 The last 10 bits give, in binary, the pointer offset number where a VC-4 frame starts.
4.3 The same 10 bits change to Increment (I) or Decrement (D) bits when a justification between VC-4 and AU-4 is applied to that frame (SDH-to-SDH justification). The justification is carried out using the H3 bytes or the offset-0 bytes.

4.4 The New Data Flag (NNNN) indicates an arbitrary pointer value change due to a payload change.
4.5 The SS bits (10) show whether the signal type is AU-4, AU-4-Xc, AU-3 or TU-3. (Currently the SS bits of the AU pointer are not used for any purpose.)
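A minimal sketch (plain Python) of splitting the 16-bit H1/H2 word into the fields just listed; the field layout (4 NDF bits, 2 SS bits, 10 pointer bits) follows the description above, and the example values are only illustrative.

    def decode_au_pointer(h1: int, h2: int) -> dict:
        word = (h1 << 8) | h2
        return {
            "ndf": (word >> 12) & 0xF,      # New Data Flag (NNNN)
            "ss": (word >> 10) & 0x3,       # SS bits (signal type)
            "pointer": word & 0x3FF,        # 10-bit offset value (0..782 for AU-4)
        }

    print(decode_au_pointer(0b01101000, 0b00000000))   # {'ndf': 6, 'ss': 2, 'pointer': 0}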


1. A complete set of VC-12 and TU-12 requires a multiframe of 4 frames (125 µs x 4 = 500 µs) in order to accommodate the POH and the pointer bytes.
2. V1, V2 and V3 are used in the same way as H1, H2 and H3 of the AU-4 pointer. V4 is not defined.
3. The offset numbers are 0-139 and start right after V2. Unlike the AU-4, each succeeding byte has a different number.
4. Justification between VC-12 and TU-12 is done using V3 and the byte after V3.

1. When AU-4s (VC-4s) from different STM-N lines are transferred to a new STM-N at an add-drop or cross-connect station, their arrival phases are different.
2. For the transfer, the phases of the AU-4s must be aligned, giving them a delay of 0.5 frame period on average.
3. If the same delay were also applied to the VC-4, then through the repeated mux/demux that is quite common in an SDH network, the accumulated delay would be enormous. This must be avoided to maintain good transmission quality.
4. Without delaying the VC-4, its new phase relation to the new AU-4 is written into the AU-4 pointer.
5. The position of the gaps in the VC-4 for insertion of the SOH must be changed; this requires buffering, and some delay is induced, but it is very small compared to 0.5 frame time.
6. Thus, minimization of the accumulated delay is realized by pointer renewal.

1. Since SDH is a synchronous system, justification (synchronization) between SDH signals is generally not required.
2. Cases in which justification is required within SDH:
2.1 Interconnection of networks where independent Primary Reference Clocks (PRC) are used. This occurs for a connection between different countries or different network operators.
2.2 Some stations become asynchronous, i.e. fall back to an internal clock, through loss of the reference signal, e.g. a line failure.
3. In those cases an STM-N (or HOVC) frame must carry VC-4(s) (or LOVCs) that were generated using a different (asynchronous) clock. Then justification between SDH signals is necessary.
4. STM-N (AU-4) ~ VC-4 (for ETSI)
4.1 Justification is done by using the H3 bytes (3 bytes) for expanded payload capacity (negative justification) and by excluding the offset-0 bytes (3 bytes) for reduced payload capacity (positive justification) every several frames. The justification interval and the positive/negative selection are determined by the clock frequency difference.
5. HOVC = VC-4 (TU-3) ~ LOVC (VC-3) (for ETSI)
5.1 Justification is carried out using the H3 byte (1 byte) and the offset-0 byte (1 byte) of the TU-3 in the same way.
6. HOVC = VC-4 (TU-n) ~ LOVC (VC-n), n = 11, 12 or 2 (for ETSI)
6.1 Justification is carried out using the V3 byte (1 byte) and the offset-0 byte (1 byte) of the TU-n in the same way.
7. Justification information is carried by using the pointer's Increment (I) or Decrement (D) bits, depending on whether the justification is positive or negative.


1. This figure depicts positive justification of the AU-4.

2. When the bit rate of the VC-4 is lower than that of the STM-N (to be precise, [VC-4] x 270/261 < [STM-N]/N):
2.1 Positive justification is applied, i.e. when an underflow of data is detected, the payload size of the corresponding frame is reduced by excluding the offset-0 bytes from part of the payload.
2.2 The receiving station is instructed to neglect those bytes by the polarity change of the I bits of that frame.
2.3 The following frames go back to normal payload size and the pointer value is increased by one, until the next underflow detection.
3. Up to 2 bit errors in the I bits can be corrected by majority rule at the receiving side.

1. This drawing shows negative justification.

2. When the bit rate of the VC-4 is higher than that of the STM-N (to be precise, [VC-4] x 270/261 > [STM-N]/N):
2.1 Negative justification is applied, i.e. when an overflow of data is detected, the payload size of the corresponding frame is expanded by using the H3 bytes as part of the payload.
2.2 The receiving station is instructed to take the H3 bytes as information bytes by the polarity change of the D bits of that frame.
2.3 The following frames go back to normal payload size and the pointer value is decreased by one, until the next overflow detection.
3. Up to 2 bit errors in the D bits can be corrected by majority rule at the receiving side.
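The sketch below (plain Python) illustrates how a receiver could tell the two cases apart from the I and D bits; the bit numbering (the ten pointer bits alternating I, D, I, D, ... from the most significant bit) and the example values are assumptions for illustration only.

    def detect_justification(prev_ptr: int, recv_ptr: int) -> str:
        diff = prev_ptr ^ recv_ptr                                  # which of the 10 bits inverted
        i_flips = sum((diff >> b) & 1 for b in (9, 7, 5, 3, 1))     # I bits
        d_flips = sum((diff >> b) & 1 for b in (8, 6, 4, 2, 0))     # D bits
        if i_flips >= 3:                                            # 3-of-5 majority tolerates 2 errors
            return "positive justification: skip the offset-0 bytes, pointer + 1"
        if d_flips >= 3:
            return "negative justification: H3 bytes carry payload, pointer - 1"
        return "no justification"

    print(detect_justification(0b0000000101, 0b1010101111))   # positive justification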

Objectives:

Given a knowledge of SDH Basics, be able to:

Layout and describe the purposes of RSOH, MSOH, and POH


Describe mapping of tributary signals to NNI

1. The section overheads (SOHs) of STM-1 are depicted by the shaded parts in the above drawing.
1.1 The Regenerator SOH (RSOH), 3 rows by 9 columns, can be accessed at the terminating points of a regenerator section (RS), i.e. at both regenerators and multiplexers.
1.2 The Multiplex SOH (MSOH), 5 rows by 9 columns, can be accessed at multiplex section (MS) terminating points, i.e. at multiplexers only. It passes through regenerators transparently.
1.3 The information carried by the RSOH and MSOH is mainly used for administration of the RS and MS layers respectively.
2. Marked bytes are already defined. Unmarked bytes are reserved for future use.
3. Some bytes are defined as media-dependent and are used if necessary. (For SDH radio they are defined in ITU-R F.750.)
4. Bytes marked for national use can be defined differently by each country, but the definition is valid only within that country, not internationally.
5. The number of columns of an STM-N is 9 x N and the same byte assignment is applied, but the numbers of A1, A2 and B2 bytes are increased accordingly.

6. For STM-16, 64 and 256, FEC (Forward Error Correction) is implemented using some of the unmarked bytes.


1. A1, A2: Frame Alignment Signal (FAS)

1.1 A1: 11110110, A2: 00101000
1.2 A receiver finds the STM-N frame by detecting the fixed A1...A2... pattern, which appears periodically at 125 µs intervals. (Remember that a VC-n does not have any FAS; its frame is found by using the pointer.)
2. J0: Regenerator Section Trace
2.1 Used for verification of the Regenerator Section connection.
2.2 A transmitter can set an identifier (name) of up to 15 characters for the STM-N signal using J0, and the receiver compares the received ID (J0 value) to the expected J0 value, which is preset in the receiver, to verify the connection.
2.3 To carry the 15 characters, a 16-frame multiframe is formed: the first J0 is used for frame alignment and error detection and the remaining 15 J0s carry the ID.
2.4 (An early recommendation defined this byte as C1 (STM identifier), which shows the unique order number of each STM-1 in an STM-N to assist the demultiplexing process.)
3. B1, B2: Error Monitoring

3.1 BIP-X (Bit Interleaved Parity-X) detects error occurrence.

3.2 B1: Regenerator Section error detection by BIP-8.
B2: Multiplex Section error detection by BIP-24 x N (N = STM level). Details of BIP-X will be explained later.


1. E1, E2: Engineering Orderwire

One byte in the STM-N frame makes a 64 kbit/s data channel. The digitized (PCM) voice of an engineering orderwire is carried by E1 and/or E2.
1.1 E1 can be accessed from both multiplexers and regenerators.
1.2 E2 can be accessed only from multiplexers.
2. F1: User Channel
A network operator can use the F1 user channel (a 64 kbit/s clear channel) for its own purposes.
3. D1-D3, D4-D12: Data Communication Channels (DCC)
Both D1-D3 (192 kbit/s) of the RSOH and D4-D12 (576 kbit/s) of the MSOH are used as data communication channels.
3.1 They are often called DCCr and DCCm, where r and m mean RS and MS. DCCr is recommended for OAMP information transmission (NMS connection).
3.2 DCCm is a kind of user-accessible channel and its purpose is not limited to NMS connection.
4. K1, K2: Automatic Protection Switching (APS) Signalling
4.1 Used to exchange control information among nodes in an MS-SP Ring (BLSR) and in line-protected linear systems.
4.2 Some bits in K2 are used as the Multiplex Section Remote Defect Indication (MS-RDI), which indicates detection of a defect in the receive direction or reception of MS-AIS.
5. S1: Synchronization Status
5.1 S1 shows the quality level of the clock source that generated the STM-N frame.
5.2 It is used to control network synchronization, i.e. for selection of a reference clock source.
6. M1: Multiplex Section Remote Error Indication (MS-REI)

6.1 M1 is used to report the result of error detection by B2 (the number of BIP violations = error count) back to the transmission source.
6.2 For STM-64 and STM-256 two bytes (M0 and M1) are assigned.
7. Z1, Z2: Spare Bytes. STM-N (N > 4) has additional spare bytes (Z0).


1. Section Trace Method

1.1 In the figure above, the J0 byte is assigned as the Section Trace byte.
1.2 This parameter applies to a section carrying a section trace (Section Access Point Identifier, SAPI).
1.3 The SAPI sent on the TX side is compared to the expected SAPI on the RX side. For example, at a node the expected section trace value is ABCDEGF while the received value is ABCDEFG, so the section traces do not match.
1.4 In that case AIS and RDI are generated: AIS is transmitted downstream, while RDI is transmitted to the opposite NE.

2. Path Trace Method

2.1 In the figure above, the J1 or J2 byte is assigned as the Path Trace byte.
2.2 This parameter applies to VC-4, VC-3 and VC-12 signals, each carrying a path trace (Access Point Identifier, API).
2.3 The API sent on the TX side is compared to the expected API on the RX side.
2.4 For example, at Node A the expected path trace value is ABCDEGF while the received value is ABCDEFG, so the path traces do not match.
2.5 In that case AIS and RDI are generated: AIS is transmitted downstream, while RDI is transmitted to the opposite NE.


1. Section Trace (J0)

1.1 The J0 Section Trace is used for checking the optical fibre or cable connection between the nodes that terminate a regenerator section.
1.2 At the RST block of each station, the J0 send value on the TX side and the J0 expected value on the RX side must be set.
1.3 Set the send value to a=abc at Node A and the expected value to b=abc at Node B.
1.4 At Node B, the received value a=abc and the expected value b=abc are checked to see whether they match. If they match (a=b), all received data are output downstream.
1.5 If not (e.g. a=abc, b=acb), a J0 TIM (Trace Identifier Mismatch) alarm is reported and all-ones data (AIS) are output downstream.
1.6 In the line-protection configuration in the figure above, set different values for the normal line and the protection line. If a misconnection of the optical fibre occurs, a J0 TIM alarm will be reported, and the optical fibre connection can be checked.

1. BIP-X (Bit Interleaved Parity-X); X = 8 in the example above, i.e. BIP-8.

2. An error detection block is divided into small sub-blocks of X bits. (For SDH, the block corresponds to an STM-N or VC-n frame.)
3. The Kth bits of all sub-blocks in the block are checked in sequence (K1, K2, ..., Ki, ..., Kn) and the number of ones is counted. When the count is even, the Kth bit of the designated sub-block in the following block is set to 0; when the count is odd, it is set to 1. This process is called even parity. (For SDH, the designated sub-block is B1, the B2 bytes or B3.)
4. The same procedure is applied in parallel to all X bits of the sub-block.
5. At the receiver, the same check is performed on the received signal and the result is compared to the indication in the designated sub-block received. Any inconsistency means an error has been detected.
6. Multiple errors in a sequence (K1, K2, ..., Kn) result in no detected error or only one detected error, for even and odd numbers of errors respectively. But the sequence is short enough to avoid such inconvenience at practical error rates, and the result is almost always correct.
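A minimal sketch (plain Python) of even bit-interleaved parity over a block of bytes, as described above; the example block is arbitrary.

    def bip8(block: bytes) -> int:
        # XOR of all bytes sets each of the 8 parity bits so that the count of ones
        # per bit position, including the parity byte, is even.
        parity = 0
        for byte in block:
            parity ^= byte
        return parity

    block = bytes([0b10110010, 0b01100001, 0b11110000])
    sent_bip = bip8(block)                                   # carried in B1/B2/B3 of the next block
    errored = bytes([block[0] ^ 0b00000100]) + block[1:]     # one bit flipped in transit
    print(f"{sent_bip:08b}", bin(bip8(errored) ^ sent_bip))  # non-zero comparison -> error detected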

1. The computing area for the B1 BIP is the entire STM-N frame.

2. The B2 BIP excludes the RSOH area, because bytes in the RSOH may be accessed by regenerators and changed. If the RSOH were included, those changes would be recognized as errors by B2.
3. Relationship with the scrambling function:
3.1 The parity for B1 is calculated after scrambling.
3.2 The parity for B2 is calculated before scrambling.
(Since the scrambling function belongs to the Multiplex section, not to the Regenerator section, it is included in the B2 monitoring but excluded from B1.)

Path Overhead (POH) Functions

1. J1: Path Trace
An ID can be set for a VC, and the same function as J0 of the RSOH is realized for a path. It is used for verification of a path connection.
2. B3: Error Monitoring (BIP-8)
End-to-end path error monitoring.
3. C2: Signal Label
Shows the type of service in the VC payload.
4. G1: Path Status
Used to report the receiving status of a path back to the sending side (REI and RDI).
5. F2, F3: Path User Channels
Along with the service transmission, two 64 kbit/s clear channels are provided to the path. The path user or network operator can use them for their own purposes.
6. H4: Position Indicator
This byte provides:
6.1 A multiframe and sequence indicator for VC-3/4 concatenation.
6.2 A generalized position indicator (e.g. a multiframe indicator for VC-11, VC-2).
6.3 Different purposes for other payload types.
7. K3: Automatic Protection Switching (APS) Channel
For automatic switching control of a VC-3 or VC-4 path.
8. N1: Network Operator Byte
For tandem connection maintenance (see next page).

H4 Position Indicator
1. The H4 byte provides a generalized multiframe indicator for the payload.
2. Four consecutive frames (125 µs x 4 = 500 µs) are required to make a complete TU-12 (VC-12), and it is necessary to identify the first frame for correct recovery of the information. The H4 byte in the VC-4 is used for this purpose.
3. The 7th and 8th bits of the H4 byte are used to indicate the position in the TU-12 multiframe (V1, V2, V3 and V4).
4. The other bits of the H4 byte are set to 1 in the interim.
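A minimal sketch (plain Python) of the H4 usage just described; this is an illustration of the bit layout above, not the complete G.707 coding.

    def h4_byte(position_in_multiframe: int) -> int:
        # Bits 7-8 count 0..3 through V1..V4; the remaining bits are set to 1.
        return 0b11111100 | (position_in_multiframe & 0b11)

    print([f"{h4_byte(n):08b}" for n in range(4)])   # ['11111100', '11111101', '11111110', '11111111']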

Path Trace (J1, J2)

1. The J1/J2 Path Trace is used for detecting misconnections between the nodes that terminate those paths.
2. To establish the path from the LPT block of Node A to that of Node C, set the J1 send value of the LPT block of Node A to a=abc and the J1 expected value of the LPT block of Node C to b=abc (setting value a=b=abc, an original value different from other paths).
3. At Node C, the received J1 value and the expected value b=abc are checked to see whether they match.
4. If not (e.g. a=abc, b=acb), a J1 TIM alarm is reported and all-ones data are output downstream of Node C.
5. Therefore, when the J1 send value (a=abc) and the expected value (b=abc) are set correctly, the path at each node is set correctly and no J1 TIM alarm is reported.
6. If there is a wrong path (a=abc, b=acb), a J1 TIM alarm will be reported, and the path can be checked.

1. The Tandem Connection (TC) is one of the network administration layers (Path, TC, MS and RS). It is an optional layer, and whether to use it or not is left to the operator's discretion.
2. TCs can be defined for any part of a path by a network operator. It is possible to set multiple TCs on a path, provided they do not overlap.
3. The network operator can monitor the error performance of the TC, i.e. a defined portion of a path, by using the Network Operator byte (N1) of the POH.
4. It is possible to detect errors occurring between the path starting point and the TC starting point by monitoring B3 (or V5 for a LOVC) at the TC starting point. In the same way, detection of errors between the path starting point and the TC ending point is possible. Their difference gives the errors that occurred within the TC.
5. The B3 monitoring result at the TC starting point is reported to the TC end by using half of the N1 byte. The TC end calculates the errors of the TC by subtracting this value from its own B3 monitoring result.
6. The second half of the N1 byte is used as a data communication channel between the two ends of the TC.

1. The LO-POH consists of four bytes (V5, J2, N2 and K4), and a four-frame multiframe (500 µs) is used to accommodate them in order to avoid an unnecessary bit-rate increase.
2. The V5 byte is divided into 5 functions.
3. The functions of the LO-POH are identical to those of the HO-POH, except that it does not have a path user channel.
4. RFI is only used by a VC-11 byte-synchronous path. It is used when a failure is declared. It is undefined for VC-12 and VC-2.

1. The table of the (Section) Access Point Identifier [(S)API] is shown above.
2. The (S)API uses a 16-byte frame in the E.164 format shown in the table above.
3. 15 bytes are used for the transport of the 15 ASCII characters required by the E.164 numbering format.
4. ASCII: American Standard Code for Information Interchange.

1. A remote station alarm is a report of the signal receiving status back to the transmission source.
2. The maintenance signals of an SDH system consist of AIS, RDI and REI.
3. AIS (Alarm Indication Signal)
3.1 Reports a signal failure upstream in the signal flow to the downstream direction, indicating "you are receiving a defective signal but it is not your fault". It is generated when LOF (Loss of Frame), LOS (Loss of Signal), LOP (Loss of Pointer), AIS, EBER (Excessive Bit Error Rate, optional), etc. are detected.
4. RDI (Remote Defect Indication)
4.1 Reports detection of a received-signal failure back to the transmitting side, under the same conditions as AIS.
5. REI (Remote Error Indication)
5.1 Reports the error count detected by BIP-X to the transmitting side (for a LOVC, only error presence).
6. RFI (Remote Failure Indication)

6.1 Reports a declaration of failure (persistence of a failure beyond a threshold) to the transmitting side. This is defined only for VC-11.


1. Asynchronous 2,048 kbit/s signal

1.1 Frequency justification between PDH and SDH is necessary. In most VC-12 frames the normal payload space (32 bytes/125 µs) is used. For a faster 2M signal, additional space (the S1 bit) is used in a frame where a data overflow is detected (32 bytes + 1 bit); for a slower 2M signal, the S2 bit is removed from the payload space of a frame where an underflow is detected (32 bytes - 1 bit). The usage of S1 and S2 is indicated by the C1 and C2 bits respectively.
1.2 The position of the 2M frame in the VC-12 is not fixed; it is found by using the 2M FAS. There is no visibility of the 64 kbit/s channels within the 2M.
2. Bit-synchronous 2,048 kbit/s signal
2.1 No frequency justification is required. The S2 bit is always used and the S1 bit is never used. To indicate this status, C1 and C2 are always set to 1 and 0 respectively, automatically.
2.2 The same equipment can handle both asynchronous and bit-synchronous 2M without any modification, so the latest G.707 considers this mapping part of the asynchronous 2M mapping.
3. Byte-synchronous 2,048 kbit/s signal

3.1 The location of the 64 kbit/s channels of the 2M within the VC-12 is allocated, so visibility of the channels in the SDH signal is provided.


1. Mapping of the asynchronous 34,368 kbit/s signal into VC-3.

2. The VC-3 frame is divided into three subframes, and the same mapping structure is applied to each of them.
3. Justification bits: S1, S2.
4. Each justification control bit (C1, C2) consists of five bits, and error correction by majority rule is implemented.
5. As the size of the VC-3 is common to 34M and 45M, many stuffing bytes are inserted in the 34M mapping to fill the data size gap.

1. Mapping of the asynchronous 139,264 kbit/s signal into VC-4.

2. The VC-4 frame is divided into nine subframes, which correspond to the nine rows. The same mapping structure is applied to each of them.
3. Negative justification is enabled for each subframe.
4. Justification bit: S.
5. The justification control bit (C) has five parallel bits.
6. No positive justification is employed.

1. An ATM cell has a fixed length of 53 bytes (5 header bytes + 48 information bytes).
2. ATM cells are mapped into the payload of the VC with their byte boundaries aligned to the VC-4 byte boundaries.
3. Since the capacity (2,340 bytes) of the VC-4 payload is not an integer multiple of the cell length, the last cell mapped into the payload may cross a VC-4 frame boundary.
4. ATM cell mapping into other VC sizes is defined in the same way.
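A minimal sketch (plain Python) of the capacity arithmetic behind point 3:

    cell_bytes, vc4_payload_bytes = 53, 2_340
    full_cells, leftover = divmod(vc4_payload_bytes, cell_bytes)
    print(full_cells, leftover)   # 44 complete cells fit; 8 bytes of the next cell cross the frame boundary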

1. This figure shows the seven multiplexing steps from VC-12 to STM-1.

2. Byte-interleaved multiplexing is performed in each step.
3. When a TUG-3 carries TUG-2s, i.e. VC-12s, the bytes in the first to third rows of its first column become NPI (Null Pointer Indication).
When it carries a TU-3, i.e. a VC-3, they are the TU-3 pointer location. By checking whether they hold a valid TU-3 pointer value or not, a receiver can decide whether the TUG-3 is carrying a TU-3 (VC-3) or TU-12s (VC-12s).
4. The drawing also shows that all TU-12 pointers are gathered at a fixed location of the VC-4 (different from the TU-3 pointers). This means that tributary signals can be accessed directly and easily, without going through a step-by-step procedure.

Note: the term NPI was used in early recommendations but has recently disappeared.


1. The figure above shows how VC-12 virtual containers are byte-interleave multiplexed into a VC-4.
2. Transmission of a complete VC-12 requires four consecutive frames, called a multiframe structure (125 µs x 4 = 500 µs). The VC-12 consists of four 35-byte subunits (35 x 4 = 140 bytes per 500 µs); a one-byte gap is introduced before each subunit to permit insertion of the TU-12 pointer bytes.
3. The phase relationship between the VC-12 and the TU-12 is not fixed, and the first byte of the VC-12 (V5) is indicated by the pointer value in V1 and V2.
4. The first VC-4 frame of the 500 µs multiframe, which carries the first TU-12 subunit (the V1 byte), is indicated by the H4 byte of the VC-4 POH.

1. The drawing shows the multiplexing procedure of VC-3 (34M) into STM-1.

2. All TU-3 pointers are gathered at a fixed location of the VC-4 (different from the TU-12 pointers).
3. Note that the TU-3 pointer location corresponds to the NPI position in the previous drawing.
4. When the receiver detects a valid pointer value, it can decide that the corresponding TU-3 is carrying a VC-3 (34 Mbit/s). Otherwise (i.e. NPI), it is carrying VC-12s (2 Mbit/s).

1. For stable timing extraction at a receiver, line signals must have sufficient and uniform data transitions, avoiding long sequences of ones and zeros. To meet this requirement, scrambling is applied to the STM-N line signal, and statistically long sequences of ones and zeros are suppressed.
2. The scrambler is a frame-synchronous type with a sequence length of 127 (2^7 - 1), and the PN generator restarts at every STM-N frame. Scrambling is not applied to the first row of the RSOH.
3. The scrambler also keeps the mark density of the line signal near 50%. This ensures stability of the laser diode output level.
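A minimal sketch (plain Python) of a frame-synchronous PRBS of period 127, assuming the generator polynomial 1 + x^6 + x^7 and an all-ones reset at each frame start; the exact reset point and output phase of the standard are not reproduced here.

    def scrambler_prbs(n_bits: int) -> list:
        state = 0x7F                                       # 7-bit shift register, all ones at frame start
        out = []
        for _ in range(n_bits):
            feedback = ((state >> 6) ^ (state >> 5)) & 1   # taps x^7 and x^6
            out.append(feedback)                           # in practice this bit is XORed with the data
            state = ((state << 1) | feedback) & 0x7F
        return out

    seq = scrambler_prbs(254)
    print(seq[:127] == seq[127:])                          # True: the sequence repeats every 127 bits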

1. This block diagram shows the major steps of demapping/demultiplexing an STM-1 signal into 63 x C-12, i.e. STM-1 -> AUG -> AU-4 -> VC-4 -> TUG-3 -> TUG-2 -> TU-12 -> VC-12 -> C-12 -> 2.048 Mbit/s.
1.1 At the receiving side, the A1, A2 synchronization byte pattern is detected in the line signal to identify the start of the STM-1 frame.
1.2 The RSOH and MSOH in the STM-1 frame are terminated and the VC-4 is extracted using the location address in the AU-4 pointer.
1.3 From the VC-4 frame, 3 TUG-3s are demultiplexed and demapped. From each TUG-3 frame, 7 TUG-2s are demultiplexed and demapped. From each TUG-2 frame, 3 TU-12s are demultiplexed and demapped.
1.4 For each TU-12, the pointer values in V1 and V2 specify the starting point of V5, the first byte of the VC-12.
1.5 From the VC-12, the C-12 is extracted once the V5, J2, N2 and K4 bytes are terminated.
1.6 From the C-12, the 2.048 Mbit/s PDH information is extracted after removing the stuffing bits.

1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The difference is in the transport between the path terminations.
3. Contiguous concatenation maintains the contiguous bandwidth throughout the whole transport and requires concatenation functionality at each network element.
4. Virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the individual VCs and recombines these VCs into a contiguous bandwidth at the end point of the transmission. Virtual concatenation requires concatenation functionality only at the path termination equipment.
5. In the figure above, an AU-4-4c is passed through Node B over lower-order STM-1s. In this case the AU-4-4c is split up into four VC-4s, and each VC-4 is switched independently using virtual concatenation.

If the input signal is larger than the specified container, VC concatenation is used.
Virtual container    VC bytes    VC bit rate (Mbit/s)    Container bytes    Container bit rate (Mbit/s)    PDH bit rate (Mbit/s)
VC-12                35          2.240                   34                 2.176                          2.048
VC-3                 765         48.960                  756                48.384                         34.368
VC-4                 2349        150.336                 2340               149.760                        139.264

1. There are two types of SDH concatenated signals: contiguous concatenation and virtual concatenation.
2. An AU-4 is designed to carry a C-4 container, which has a capacity of 149.76 Mbit/s. If there are services that require a capacity greater than 149.76 Mbit/s, a means of transporting the payload of these services is needed; the AU-4-Xc is designed for this purpose.

1. An AU-4 has a nine-byte pointer; these nine bytes are shown in the figure above.
2. An AU-4-Xc signal has 9X bytes of AU-4 pointer, i.e. X sets of the nine-byte pointer (H1, H2).
3. The first nine-byte AU-4 pointer has its normal function. The second, third, ..., Xth AU-4 pointers are used as concatenation indications, as shown in Figure c.

Virtual concatenation of X VC-3/4s (VC-3/4-Xv, X = 1 ... 256)

1. A VC-3/4-Xv provides a contiguous payload area of X Container-3/4s (VC-3/4-Xc) with a payload capacity given by:
1.1 VC-3-Xv capacity = X x 48.384 Mbit/s (84 bytes x 9 rows x 64 kbit/s)
1.2 VC-4-Xv capacity = X x 149.760 Mbit/s (260 bytes x 9 rows x 64 kbit/s)
2. The container is mapped into X individual VC-3/4s which form the VC-3/4-Xv. The H4 POH byte is used for the virtual-concatenation-specific sequence and multiframe indication.
3. Each VC-3/4 of the VC-3/4-Xv is transported individually through the network. Due to the different propagation delays of the VC-3/4s, a differential delay will occur between the individual VC-3/4s. This differential delay has to be compensated and the individual VC-3/4s have to be realigned for access to the contiguous payload area. The realignment process has to cover a differential delay of at least 125 µs.
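A minimal sketch (plain Python) of the capacity formulas in 1.1 and 1.2:

    def vc3_xv_mbps(x: int) -> float:
        return x * 84 * 9 * 64e3 / 1e6     # 48.384 Mbit/s per member

    def vc4_xv_mbps(x: int) -> float:
        return x * 260 * 9 * 64e3 / 1e6    # 149.760 Mbit/s per member

    print(vc3_xv_mbps(3), vc4_xv_mbps(4))  # 145.152 599.04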

1. A two-stage 512 ms multiframe is introduced to cover differential delays of 125 µs and above (up to 256 ms). The first stage uses H4 bits 5-8 as the 4-bit multiframe indicator (MFI1). MFI1 is incremented every basic frame and counts from 0 to 15. For the 8-bit multiframe indicator of the second stage (MFI2), H4 bits 1-4 of frame 0 (MFI2 bits 1-4) and frame 1 (MFI2 bits 5-8) of the first multiframe are used.
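A minimal sketch (plain Python) of how the two indicator stages combine into one frame count:

    def total_mfi(mfi1: int, mfi2: int) -> int:
        return mfi2 * 16 + mfi1            # 0..4095, one count per 125 us basic frame

    frames = 16 * 256
    print(frames, frames * 125e-6)         # 4096 frames -> 0.512 s, i.e. the 512 ms multiframe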

1. The sequence indicator SQ identifies the sequence/order in which the individual VC-3/4s of the VC-3/4-Xv are combined to form the contiguous container VC-3/4-Xc, as shown in the figure above.
2. Each VC-3/4 of a VC-3/4-Xv has a fixed, unique sequence number in the range 0 to (X-1). The VC-3/4 transporting the first time slot of the VC-3/4-Xc has sequence number 0, the VC-3/4 transporting the second time slot has sequence number 1, and so on up to the VC-3/4 transporting time slot X of the VC-3/4-Xc with sequence number (X-1).
3. For applications requiring fixed bandwidth the sequence number is fixed at assignment and is not configurable. This allows the constitution of the VC-3/4-Xv to be checked without using the trace.
4. The 8-bit sequence number (which supports values of X up to 256) is transported in bits 1 to 4 of the H4 byte, using frame 14 (SQ bits 1-4) and frame 15 (SQ bits 5-8) of the first multiframe stage, as shown in Table 11-1.

Concatenation of X VC-12s
1. A VC-12-Xv provides a payload area of X Container-12s, as shown in the figures above. The container is mapped into X individual VC-12s which form the VC-12-Xv. Each VC-12 has its own POH.
2. Each VC-12 of the VC-12-Xv is transported individually through the network. Because of this, a differential delay will occur between the individual VC-12s, and therefore the order and alignment of the VC-12s will change. At the termination the individual VC-12s have to be rearranged and realigned in order to re-establish the contiguous concatenated container. The realignment process has to cover a differential delay of at least 125 µs.

1. Payload capacities for VC-12-Xv are shown in the table above.

Automatic Protection Switching

These bits are allocated for APS signalling for protection at the lower-order path level.
Bits 5 to 7 of K4 are reserved for optional use. Under this option these bits shall be set to 000 or 111; a receiver is required to be able to ignore the contents of these bits. The coding is:

000   no remote defect
001   no remote defect
011   no remote defect
010   remote payload defect
100   remote defect


1. The VC-12 virtual concatenation frame count is contained in bits 1 to 5. The VC-12 virtual concatenation sequence indicator is contained in bits 6 to 11. The remaining 21 bits are reserved for future standardization; they should be set to all zeros and should be ignored by the receiver.
2. The VC-12 virtual concatenation frame count provides a measure of the differential delay up to 512 ms, in 32 steps of 16 ms, which is the length of the multiframe (32 x 16 ms = 512 ms).
3. The VC-12 virtual concatenation sequence indicator identifies the sequence/order in which the individual VC-12s of the VC-12-Xv are combined to form the contiguous container VC-12-Xc, as shown in the figures.
4. Each VC-12 of a VC-12-Xv has a fixed, unique sequence number in the range 0 to (X-1). The VC-12 transporting the first time slot of the VC-12-Xc has sequence number 0, the VC-12 transporting the second time slot has sequence number 1, and so on up to the VC-12 transporting time slot X of the VC-12-Xc with sequence number (X-1). For applications requiring fixed bandwidth the sequence number is fixed at assignment and is not configurable. This allows the constitution of the VC-12-Xv to be checked without using the trace.


A number of functions are defined in the overhead channels to ensure proper transport of the payload.
The Section Overhead (SOH)
The overall capacity of the SOH is 4.608 Mbit/s (9 x 8 x 64 kbit/s), of which 30 bytes (1.920 Mbit/s) have fixed definitions. The remaining 64 kbit/s channels are not specified: six are reserved for national use, and six bytes are reserved for medium-dependent functions (e.g. radio-link systems). Columns 1, 4 and 7 also correspond to the STS-1 frame.
Functions of the SOH:
Contains maintenance, monitoring and operational functions
Each byte refers to a 64 kbit/s channel
Split into RSOH and MSOH
Protects the connection from the point of STM-1 assembly to the point of disassembly.
The Path Overhead (POH)
The POH of VC-4/VC-3 consists of 9 bytes; the POH of VC-11/VC-12 and VC-2 consists of 4 bytes.

The RSOH is reformed (terminated) by each regenerator. The MSOH is passed transparently through each regenerator section.


The MSOH is reformed (terminated) by each multiplexer and cross-connect.


The Path Overhead is evaluated at the end point of the transmission system where the unpacking takes
place.

The SDH system monitors transmission quality using a method called Bit Interleaved Parity (BIP).
A number of BIP types are used in SDH:
BIP-24 for the B2 bytes, formed over every STM-1 frame excluding the RSOH
BIP-8 for the B1 byte, over the STM-N frame after scrambling, and for the B3 byte, over the VC-3 and VC-4
BIP-2 for the V5 byte, for VC-11, VC-12 and VC-2.

The pointer indicates the phase offset of the first VC byte (J1 or V5) within the payload.
For the mapping of 2 Mbit/s signals into SDH, two pointer levels are used. The first level, the AU-4 pointer, identifies the start of the VC-4 relative to the basic STM-1 frame. The second level, the TU-12 pointers, identifies the start of the VC-12 relative to the VC-4 for each of the 63 VC-12s.
The use of pointers decouples the information channels (VCs) from the transport medium (the STM signal). The fixed phase relationships of older systems are avoided in this manner.
It is also possible to multiplex and demultiplex signals in a single device across all levels. The byte position of a subsignal is easy to compute.

1. A layered network administration structure is adopted in SDH; the layers are:

1.1 Path 1.2 Tandem Connection (TC) (optional layer) 1.3 Multiplex Section (MS) 1.4 Regenerator Section (RS)
2. For administration of each layer, dedicated overhead bytes are assigned:
2.1 Path overhead (POH), MUX section overhead (MSOH) and REG section overhead (RSOH). The TC uses part of the POH.
3. Regenerator section
3.1 Section between adjacent regenerators or between a multiplexer and an adjacent regenerator.
3.2 Or: section between the points where the RSOH is generated and terminated.
4. Multiplex section
4.1 Section between adjacent multiplexers; may contain multiple RSs.
4.2 Or: section between the points where the MSOH is generated and terminated.
5. When there is no regenerator, one physical section is both MS and RS at the same time, but they remain independent layers.
6. Path
6.1 Connection between the service transmission input/output points to/from the SDH network.
6.2 Or: connection between the points where a VC is assembled and disassembled.

6.3 Or: connection between the points where a POH is generated and terminated.


1. A path has nothing to do with a particular connection route in the SDH network.
2. A Tandem Connection is a part of a path. The TC is optional and its length is determined by the network operator. Its administration is carried out using part of the POH.
3. RS and MS are physical connections; Path and TC are logical connections.


A regenerator regenerates the clock and amplitude of the incoming distorted and attenuated signal. It derives the clock signal from the incoming data stream.

The terminal multiplexer is used to multiplex local tributaries (low rate) into the STM-N (high rate) aggregate. The terminal is used in the chain topology as an end element. The regenerator is used to regenerate the (high-rate) STM-N in case the distance between two sites is longer than the transmitter can cover.

An ADM makes possible:

Extraction from, and insertion into, high-speed SDH bit streams of plesiochronous and lower-bit-rate synchronous signals.
A ring network structure, which provides the advantage of automatic back-up path switching in the event of a fault.

The synchronous digital cross-connect receives several (high-rate) STM-N signals and switches any of their (low-rate) tributaries between them. It is used to connect several topologies.

SDH Cross connect: Involves cross connection of electrical signals


DWDM Cross connect:Involves cross connection of optical signals(channels)
Optical cross connects are of two types:
All Optical
Optical/Electrical/Optical(O/E/O)

An optical cross-connect is an element used for making interconnections between different channels,
either temporarily or permanently.
It contains a space switch which allows any wavelength on any input fiber to be routed to any
wavelength on any output fiber, provided that wavelength is free to be used.
It contains a mux/demux and/or switching arrangement.
Aggregate-to-aggregate, tributary-to-aggregate and tributary-to-tributary connections are also
possible in the case of a Digital Cross Connect.

The linear bus (chain) topology is used when there is no need for protection and the geographical layout of the
sites is linear.
The ring topology is the most common and well-known topology of SDH, which allows great network flexibility
and protection.
DWDM networks can be in a linear or a ring configuration. Typically in metro networks, due to
the short link lengths, DCMs and regenerators may not be used. In some cases amplifiers are also not
used.
ILAs provide 1R regeneration, as against the 3R regeneration of SONET regenerators.
Due to ASE (Amplified Spontaneous Emission), the signal becomes increasingly noisy as it is amplified by
many EDFAs.
As per ITU-T G.692, DWDM systems should be capable of transmitting signals without regeneration over
8 spans of 22 dB.
If regeneration is required for all the signals, two back-to-back terminals may be used, or else an OADM
may be used.

The ring topology is the most common and known topology of SDH, which allows great network flexibility and protection.
1. Ring systems are classified into five types based on the combination of switching method employed (unidirectional or bidirectional switch
and path or line switch), routing of two-way traffic (unidirectional or bidirectional ring) and fiber number (2 or 4 fibers).
2. Theoretically, any combination of switching and routing direction is possible, but in practice the uni-uni and bi-bi
combinations listed below are used.
3. SNCP ring (Unidirectional)
3.1 Also called 2-fiber Unidirectional Path protection Switch Ring (2F-UPSR).
3.2 No switching protocol is required. This is a simple and fast switching scheme.
4. MS Dedicated Protection Ring
4.1 2-fiber Unidirectional Line protection Switch Ring (2F-ULSR).
4.2 A protocol using K1 and K2 in the MSOH is required; it is not yet standardized by ITU-T.
5. SNCP ring (Bidirectional)
5.1 2-fiber Bidirectional Path protection Switch Ring (2F-BPSR).
5.2 The ring must be controlled at path level and a protocol using K3 or K4 in the POH is required. It is not yet standardized by ITU-T.
6. 4F MS-SP ring
6.1 4-fiber Bidirectional Line protection Switch Ring (4F-BLSR).
6.2 A protocol using K1 and K2 in the MSOH is required.
7. 2F MS-SP ring
7.1 2-fiber Bidirectional Line protection Switch Ring (2F-BLSR).
7.2 A protocol using K1 and K2 in the MSOH is required.
1. There are two different algorithms in the 4F/2F MS-SP ring: the terrestrial application and the transoceanic application.


SDH gives the ability to create topologies with protection for the data being
transmitted.
The following are some examples of protected ring topologies.
In this figure we can see a dual unidirectional ring. The normal data flow is
on ring A. Ring B (black) carries unprotected data, which is lost in
the case of a breakdown, or it carries no data at all.


Error performance monitoring


Remote failure indications (RFI)
Remote Defect Indications ( RDI )
Signal Loss
New Data flag indication
Synchronization source information
Pointer adjustment information
Path status
Path trace
Remote error indications (REI)

A special synchronization network is set up to ensure that all of the elements in the communications
network are synchronous. The network is hierarchically distributed. A primary reference clock source
(PRS) controls the secondary clocks of stratum levels 2 to 4 (SSU or ST2 to 4). This type of
synchronization signal distribution is also referred to as master/slave synchronization. The actual
synchronization may take place via a separate, exclusive sub-network, or the communications signals
themselves may be utilized. Ring structures are also possible.

1. Highest-level clock in office used as SSU.


2. Basically, SSU works in a slave mode. (On the other hand PRC is an independent clock source with
high accuracy.)
3. SSU reference taken from another SSU or primary reference PRC.
4. SSU reference from source of equal or higher accuracy.
5. When the reference is lost, it works in a holdover mode. Holdover mode uses stored frequency and
phase data just before the reference loss.
SSU: Synchronization Supply Unit
PRC: Primary Reference Clock

1. PRC: Primary Reference Clock


1.1 1 x 10^-11 accuracy (atomic clock, GPS, LORAN-C). G.811
1.2 Usually, multiple PRCs are installed and one of them (the primary) supplies the network clock. While the
primary is normal, the secondary is on stand-by and slaves to the primary to avoid frequency and phase
jumps at switching.
2. SSU: Synchronization Supply Unit
2.1 1 x 10^-9 /day maximum drift (under holdover), G.812-Transit
2.2 2 x 10^-8 /day maximum drift (under holdover), G.812-Local
2.3 The SSU has redundant synchronization routes to both primary and secondary.
3. SEC or SETS: SDH Equipment Clock or
Synchronous Equipment Timing Source
3.1 4.6 x 10^-6 accuracy (under free run). G.813
3.2 This layer means the clock circuit of each SDH equipment (multiplexer, regenerator etc.)

4. The latest G.812 does not distinguish Transit and Local, and in G.707 they are referred to as SSU-A and SSU-B.
5. Reference distribution between layers usually uses the ordinary transmission line, extracting the clock component of the line signal.
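To put these accuracy figures into perspective, here is a small illustrative Python calculation (not part of the course material) of the worst-case frequency offset these accuracies imply for a 2.048 MHz reference; the variable names are arbitrary.

```python
# Worst-case frequency offset of a 2.048 MHz reference implied by the
# accuracies listed above (illustrative arithmetic only).
NOMINAL_HZ = 2.048e6

accuracies = {
    "PRC, G.811 (1e-11)": 1e-11,
    "SEC free-run, G.813 (4.6e-6)": 4.6e-6,
}

for name, acc in accuracies.items():
    print(f"{name}: up to {NOMINAL_HZ * acc:.2e} Hz offset")
# PRC: ~2.05e-05 Hz; SEC free-run: ~9.42 Hz
```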


Preferred Timing Method


1. The clock signal of a network element (NE) is supplied by office clock equipment (PRC or SSU)
using the NE's external clock input port. The SSUs slave to the PRC and the whole network is
synchronized to the PRC.
2. The PRC and SSU output one of the following clock signal formats:
2.1 Framed 2 Mb/s (HDB3, Table 6 / G.703)
2.2 When the PRC or SSU is installed at a remote location, this format is used to check the
transmission quality using CRC or HDB3 violations.
2.3 2 MHz (continuous signal, Table 10 / G.703)
3. The external reference controls all outgoing STM-Ns (both aggregate and tributary lines).
4. The above shows the most desirable configuration, but it is not realistic; most NEs in a network
use the line clock method (next slide).
5. Use of GPS (Global Positioning System) receivers might make this quite feasible today

by providing an inexpensive SSU.


Useful when SSU availability is limited


1. The line clock method uses an extracted clock component of a line signal as the reference source.
It is possible to use either aggregate lines or tributary lines. 2 Mb/s tributary is also possible if it is
a synchronous signal.
2. In this configuration, one node must be a master using the external clock method (PRC or SSU)
or internal clock (next slide) and the other nodes become slave nodes using line clock.
3. All aggregate and tributary outgoing STM-N are timed to the incoming line chosen for
synchronization.
4. Provisioning must be done carefully to avoid improper timing under fiber failures. Especially in a
ring system, timing loops must be avoided.

Least Desirable Method of Timing


1. In internal clock mode, the clock circuit of each NE becomes independent. It slaves neither to lines
nor to an external source (PRC or SSU). In free-run, the oscillator works without references; in
holdover, it uses data stored before the loss of references.
2. Normally, this method is the last option in a failure scenario.
3. Against a failure, the NE can be provisioned to enter holdover, which attempts to maintain the
quality of the original reference.
4. If a system is timed this way, one node using its internal free-running clock becomes the master.
All other nodes in the line must be slaved to the master node using line clock.
5. This configuration is used for an SDH island, an area isolated from the synchronous network.

Used Only at Regenerators


1. Each direction timed differently.
2. Incoming STM-N sets the timing for outgoing signal in the same direction.
3. Opposite direction also derives timing from its incoming STM-N.

1. In the actual SDH network, External, Line, Internal and Through clock methods are combined to
construct the best clock distribution network and also counter measures against failures (redundancy,
reference source switching, etc.) are included.
2. Repeated use of line clocking accumulates impairments (jitter, wander, etc.) in the clock component of the line
signal (STM-N). When this reaches the limit, the clock signal must be refined. The NE supplies the degraded
extracted clock component to the SSU, and the SSU suppresses the impairment using a high
performance phase locked loop (PLL). It then gives the clean clock signal back to the NE. The SSU
supplies the clock signal to other NEs in the office.
3. The maximum number of intermediate line clock nodes is determined by the performance of the SSU and
the NEs.
4. In an SDH system in a remote area where no synchronization reference is available (an SDH island), one
SDH NE becomes the master node using its internal clock (free run) and the other NEs slave to it using the line
clock configuration.

1. The clock circuit of NE is a PLL (Phase Locked Loop) and it can select a reference from line signals
(STM-N, aggregate and tributary sides), external signals connected to PRC or SSU and PDH 2M
(synchronous) tributary signals.
When the PLL loses a reference, it goes into hold-over (internal clock), keeping the same operation
condition (frequency and phase) as before the loss.
When it does not have a reference from the beginning, it operates as a free-running oscillator
(completely independent internal clock).
2. Line Clock: Clock component of STM-N is extracted and divided to appropriate low frequency. Its
quality (origin) level is indicated by SSM (Synchronization Status Message), i.e. S1 of STM-N.
External Clock: 2.048 MHz or 2.048 Mbit/s framed bipolar from PRC or SSU. In case of 2 Mbit/s, it
carries SSM when the PRC or SSU supports the function. When the SSM is not supported by SSU or
NE, the quality level is provisioned manually.
PDH 2M: When it is generated by synchronous equipment, e.g. an EES, its 2.048 MHz clock component
can be used as a reference. Since it does not have SSM, its quality level must be set
manually at each NE.
Internal: There are two modes. Hold-over uses memorized frequency and phase data of the last
reference. Free-run operates without any reference. Quality level must be assigned and it is lower than
any other references.
3. Each NE has a different set of available reference sources. Some available sources might not be
used, according to the synchronization distribution network design. A priority order is assigned to all
selected reference candidates.


4. Selection is carried out using the quality level and the priority order.


1. An SDH NE can output a reference signal to an SSU or other synchronous equipment to give them
synchronization information from the PRC or a higher level SSU.
2. The possible output is one of the Line Clocks or PDH 2M signals. The extracted clock component is output
directly, without going through the PLL, after proper conversion to 2.048 MHz. This is generally called
Line Clock, as opposed to Equipment Clock (below).
3. The output interface is 2.048 MHz (clause 10/G.703) or 2.048 Mbit/s (clause 6/G.703). The 2.048
Mbit/s output carries SSM if the NE is so designed.
4. The PLL output can also be selected as this output. This case is called Equipment Clock.
5. The NE should not supply Equipment Clock to an SSU. SSU installation at the node means that the clock
component of the line signal is degraded and refining is required. Supplying Equipment Clock would
mean its STM-N is generated by that same clock, not from the SSU, and the clock degradation
would propagate to the next section.
6. Equipment Clock is used for synchronization supply to other NEs at stations where an SSU is not
available.

S1 Synchronization Status Byte


1. Provided in MSOH (Multiplex Section Overhead).
2. Indicates quality of clock used to generate the STM-N

1. The recommendation does not define the Quality Level indication (1~6). Here, it is used for
simplicity of explanation.
2. The latest G.812 does not distinguish Transit and Local, and the latest G.707 refers to them as SSU-A
and SSU-B respectively.
3. When the equipment generates the STM-N from its internal clock (holdover or free-run), S1 should be set to SEC.
4. DNU (Do not use for synchronization) is used when the equipment fails. In the line clock state,
DNU is set in the S1 of the backward direction line (see the following explanation).
5. Quality Unknown is used when the network uses an existing clock source that might not follow
G.811 or G.812.
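For reference, here is a hedged sketch of the SSM code points carried in bits 5-8 of S1 as defined in G.707, mapped to the simplified Q levels this text uses in its examples (the Q numbers are explanatory only, as noted above, and the helper name is arbitrary).

```python
# SSM codes in S1 bits 5-8 (per G.707). The Q numbers in parentheses follow
# the simplified levels used in this text (PRC=1, SSU-A=3, SEC=5, DNU=6);
# the recommendation itself defines no such numbering.
S1_SSM = {
    0b0000: "Quality unknown",
    0b0010: "G.811 PRC (Q=1 in this text)",
    0b0100: "G.812 Transit / SSU-A (Q=3)",
    0b1000: "G.812 Local / SSU-B",
    0b1011: "SEC, G.813 (Q=5)",
    0b1111: "Do not use for synchronization, DNU (Q=6)",
}

def s1_quality(s1_byte: int) -> str:
    """Decode the SSM carried in the low nibble of the S1 byte."""
    return S1_SSM.get(s1_byte & 0x0F, "Reserved")

print(s1_quality(0x02))   # -> G.811 PRC
```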

1. The drawing above is a simple model of an SDH line. Node A and Node C are clocked by a PRC and an SSU respectively.
Node B, Node D and the other nodes between B and C use the line clock configuration.
2. The SSM (S1) of line A1 is set to Q=1 automatically because its clock component is traceable to G.811.
When the PRC and/or NE-A does not support SSM, the external-in port of the NE must be provisioned to Q=1.
3. NE-B is not directly connected to the PRC, but since the clock in line B1 is traceable to the PRC, the S1 of line B1 is set
to Q=1. This is done by reflecting the S1 in A1 into B1. The same procedure is applied to all line clock nodes
before Node C.
4. The clock in line B2 is also traceable to the PRC, but it is set to Q=6 (Do not use ...) intentionally and
automatically. B2 is clocked by A1 and, from the viewpoint of the direction of clock signal flow, it is
backward. The S1 of a backward signal is always set to Q=6, regardless of its actual quality, in order to avoid
clock loops. If B2 were set to anything other than Q=6, a clock loop (A-A1-B-B2-A) could be formed when the
PRC is lost.
5. At Node C, the clock extracted from B1 is supplied to the SSU to be refined, and the clean clock is given to NE-C.
In this case, the clock in line C1 is traceable to the SSU and its S1 becomes Q=3.
6. The direction of line C2 is the same as B2. But it is not backward but forward, because it is not clocked by B1
but by the SSU. Therefore, its S1 should be Q=3 (SSU).

7. It is not shown here, but when NE-A and -B have STM-N tributary lines their S1s are Q=1, and in the NE-C and -D case Q=3.


Highest Quality, Highest Priority


1. First, checks for highest quality.
2. If two or more references are the same highest quality, then checks priority.
3. Finally, result of above checks is chosen.
4. Quality read from SSM, if supported, or as provisioned.
5. Priority must be provisioned by operator at system setup.
6. The quality of a failed line signal must be treated as Do not use for synchronization, even if its SSM is
readable.
7. The SSM of a backward signal must be set to Do not use for synchronization.
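The selection rule above can be summarized in a small Python sketch (illustrative only; the candidate names and data structure are hypothetical): references are ranked first by quality level and then by provisioned priority, with failed lines forced to Do not use.

```python
# Minimal sketch of the reference selection rule described above.
DNU = 6   # quality level for "Do not use for synchronization" in this text's numbering

def select_reference(candidates):
    """candidates: list of dicts with keys 'name', 'quality', 'priority', 'failed'."""
    usable = []
    for c in candidates:
        q = DNU if c.get("failed") else c["quality"]   # rule 6: a failed line counts as DNU
        if q < DNU:
            usable.append((q, c["priority"], c["name"]))
    if not usable:
        return None                                    # fall back to the internal clock
    usable.sort()                                      # best quality first, then highest priority
    return usable[0][2]

refs = [
    {"name": "EXT 1", "quality": 1, "priority": 1, "failed": False},
    {"name": "EXT 2", "quality": 1, "priority": 2, "failed": False},
    {"name": "West STM-N", "quality": 3, "priority": 3, "failed": False},
]
print(select_reference(refs))   # -> 'EXT 1'
```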

1. The Node A is a master node and has a G.811 PRC.


2. At the Node C, synchronous 2M tributary signals are possible reference sources. Against a failure in the PRC, Node C
will take over the master function, using one of 2Ms. B and D are simple slave nodes.
3. The settings are arranged to distribute the clock in the clockwise direction (A -> B -> C -> D).
At Node A :

Between two highest quality sources (Q=1, EXT 1 and 2), EXT 1 with higher priority order is selected as the
reference.

It generates STM-N signals, to the east and west directions, setting their S1 bytes to Q=1.

To avoid a timing loop, the signal received from the west direction is not used as a reference.
At Node B :

From three available references (west, east and internal), the source of highest quality (west) is selected.
This node sets the S1 to the east (forward) to Q=1 and the S1 to the west (backward) to Q=6.
At Node C :

Although this node has tributary 2M signals as reference sources, since their quality level defined by the
NE (Q=3, TRIB 1 and 2) is lower than the S1 indication of the receiving west, it selects the west as the
reference.
At Node D :
1. S1 indications of line signals (west and east) are the same (Q=1) and higher than the internal. The node
selects the west that has higher priority order.

It is assumed that a line signal failure has occurred on the clockwise signal between the Node A and
B. Then, clock reference changes will take place in the ring network with the following procedure.
At Node B :
1.

Having detected the signal failure in the receiving west, which has been the
reference source, the node treats the quality level of that reference as Q=6, regardless
of its S1 indication.

2.

The Node B stops using the west signal and switches to the internal. It cannot use the
east, as its S1 indication is Q=6 at this moment. The NE sends out S1=Q=5, which is the
quality level of new reference, to east and west directions, automatically changing from
previous Q=1 and Q=6 respectively.

At Node C :
Detecting the change in the S1 of the receiving west signal, Q=1 to Q=5, the node starts
comparison between its available sources. They are west, east, TRIB 1, TRIB2 and internal.
Then, it selects the TRIB 1 with the highest Q, since the TRIB 2 has the same level Q but lower
priority order. The NE sends out S1=Q=3 to east and west directions, changing from previous
Q=1 and Q=6 respectively.

At Node B :
Now the receiving east has higher quality (Q=3) than the current reference, internal (Q=5).
As a result, the node selects the east sending out S1=Q=3 to the west direction and S1=Q=6 to
the east direction.
At Node D :
After comparing qualities of the receiving east (Q=1), the receiving west (Q=3) and the internal
(Q=5), it selects the east having the highest Q. This node sets the S1 to the west (forward) to
Q=1 and the S1 to the east (backward) to Q=6.

At Node C :
As the receiving east has higher Q (=1) than the present reference TRIB 1 (Q=3), its source
is changed to the east from the TRIB 1. It sends Q=1 to the west and Q=6 to the east.
At Node B :
There is no change in the reference selection. But the S1 indication to the west direction is
altered to Q=1 because, owing to the change at station C, this node's clock source is now
traceable back to the G.811 PRC at station A. This change is made by using the S1
value in the receiving east signal.
1. The final clock distribution has changed to the counterclockwise direction (A -> D -> C -> B).
2. The final result is not important in this explanation. It is important to understand how selection rules
are applied and how SSM (S1) is controlled.

1. Another example of reference source switching is a failure of both EXT 1 and 2 at Node A. Failure
of the PRC itself is highly improbable, but here let us assume a cable cut.
2. Although the detailed switching process will not be explained, by a process similar to that shown above, the
clock source of the network will be changed to the sub-master (Node C). In this case, the final clock
distribution will become D -> C -> B -> A. It is recommended that you trace the switching process
yourself, referring to the previous explanations.


1. To maintain the synchronization of the NE, a PLL circuit with holdover is provided in the NE.
2. The graph shows the holdover and free-run characteristic curves of the PLL circuit.
3. While the clock circuit is operating in slave mode, frequency and phase are memorized in the
holdover circuit. When the circuit loses the reference source, for example through a line failure, the
stored data are used for continuous and seamless operation. But due to the thermal noise of the
PLL loop circuit, the accuracy of the VCXO degrades with time, as shown in No. 2. The holdover
function is effective for up to 24 hours. After more than 24 hours, the VCXO enters free-run
status, as shown in No. 2. The transmission disturbances caused by an abrupt change of frequency and
phase are avoided by the holdover function.
4. If there is no reference source supply, the VCXO is in the free-run state without holdover, as shown in No. 1.
5. If the reference clock recovers during the holdover state, the NE returns to the slave state, as
shown in No. 3.
PH COMP: Phase comparator, A/D: Analogue/Digital converter

D/A: Digital/Analogue converter


VCXO: Voltage-Controlled Crystal Oscillator


1. The original three network node interface (NNI) recommendations (G.707, G.708, and G.709) were
integrated into G.707 in 1996.
2. Distinctive features of the SDH recommendations that cannot be found in the PDH recommendations are:
2.1 Recommendations for OAM&P (Operation, Administration, Maintenance and Provisioning), i.e.
the NMS (Network Management System), are defined.
2.2 Protocol suites for NMS connection are defined.
2.3 The functional configuration of equipment is described in detail.
2.4 The optical interface is standardized.
3. These are essential for realizing a multi-vendor environment.

Enterprise Systems Connection: an IBM standardized protocol for the interconnection of IT
equipment, with a bit rate of 200 Mbps. It is the marketing name for a set of IBM and vendor products that interconnect
S/390 computers with each other and with attached storage, locally attached workstations and other devices
using optical fiber technology.

GFP Provides an elegant framing procedure with low overhead and support for both packet services and storage
services
Virtual Concatenation Improves on current models of contiguous concatenation by supporting much finer
granularity of circuit provisioning and management from the edge of the network. Right-sized pipes for packet
services (Ethernet, in particular). Both higher order (STS1 granularity) and low order (VT1.5 level) are available,
supporting a range of high- and low-speed service assignments.
LCAS (Link Capacity Adjustment Scheme) tool to provide operators with greater flexibility in provisioning virtual
concatenation groups (VCGs), adjusting their bandwidth in service and providing flexible end-to-end protection
options

From the figure above, if we want to transmit 10 Mb/s data (Ethernet) through SDH, the available containers are either
VC-12 or VC-3.
The capacity of a VC-12 is 2.176 Mbit/s.
The capacity of a VC-3 is 48.384 Mbit/s.
Comparing these capacities, a VC-12 is too small to carry the Ethernet traffic, because a VC-12 can carry only up to 2.176
Mbit/s. A VC-3 is too large: if we send this data through a VC-3, the remaining bandwidth is wasted.
For this reason we have to use concatenation.

In the slide above we are concatenating VC-12s: we are combining 5 VC-12s
to increase the bandwidth so that the 10 Mb/s data can be sent. By concatenation the VC-12
bandwidth can be increased.
The bandwidth of one VC-12 is 2.176 Mbit/s.
If we concatenate 5 VC-12s, the bandwidth is 10.88 Mbit/s.
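A quick check of this arithmetic (illustrative Python only; the constant names are arbitrary):

```python
# How many VC-12s are needed for a 10 Mbit/s Ethernet signal, and the
# resulting VC-12-Xv bandwidth (using the 2.176 Mbit/s VC-12 capacity above).
import math

VC12_CAPACITY = 2.176     # Mbit/s
ETHERNET = 10.0           # Mbit/s

x = math.ceil(ETHERNET / VC12_CAPACITY)
print(x, "x VC-12 =", round(x * VC12_CAPACITY, 3), "Mbit/s")   # 5 x VC-12 = 10.88 Mbit/s
```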


By this method (concatenation) the bandwidth is not wasted.

Level  | Virtual Container: total bytes / bit rate (Mbit/s) | Container: total bytes / bit rate (Mbit/s) | PDH bit rate (Mbit/s)
VC-12  | 35 / 2.240                                         | 34 / 2.176                                 | 2.048
VC-3   | 765 / 48.960                                       | 756 / 48.384                               | 34.368
VC-4   | 2349 / 150.336                                     | 2340 / 149.760                             | 139.264

For the transport of payloads that do not fit efficiently into the standard set of virtual containers
(VC-3/4/12), VC concatenation can be used. VC concatenation is defined for:
VC-3/4 - to provide transport for payloads requiring greater capacity than one Container-3/4;
VC-12 - to provide transport for payloads that require capacity greater than one Container-12.

1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The
difference is the transport between the path terminations.
3. Contiguous concatenation maintains the contiguous bandwidth throughout the whole transport, and
requires concatenation functionality at each network element.
4. Virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the
individual VCs and recombines these VCs into a contiguous bandwidth at the end point of the
transmission. Virtual concatenation requires concatenation functionality only at the path
termination equipment.
5. In the figure above, an AU-4-4c is passed through Node B over lower order STM-1s. In this case, the AU-4-4c is
split up into four VC-4s, and each VC-4 is switched independently using virtual concatenation.

There are two types of virtual concatenation:


1. High order virtual container level,
i.e. VC-4 and VC-3
2. Low order virtual container level,
i.e. VC-2, VC-12 and VC-11

Virtual concatenation of X VC-3/4s (VC-3/4-Xv, X = 1 ... 256)


1. A VC-3/4-Xv provides a contiguous payload area of X Container-3/4 (VC-3/4-Xc) with a payload
capacity given by the following equations:
1.1 VC-3-Xv capacity = X x 48.384 Mbit/s (84 bytes x 9 rows x 64 kbit/s)
1.2 VC-4-Xv capacity = X x 149.760 Mbit/s (260 bytes x 9 rows x 64 kbit/s)
2. The container is mapped into X individual VC-3/4s which form the VC-3/4-Xv. The H4 POH byte is
used for the virtual concatenation specific sequence and multiframe indication.
3. Each VC-3/4 of the VC-3/4-Xv is transported individually through the network. Due to the different
propagation delays of the VC-3/4s, a differential delay will occur between the individual VC-3/4s. This
differential delay has to be compensated and the individual VC-3/4s have to be realigned for access
to the contiguous payload area. The realignment process has to cover at least a differential delay of
125 µs.
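These capacities follow directly from the container geometry; a small illustrative Python check (the function names are arbitrary):

```python
# Payload capacity of VC-3-Xv / VC-4-Xv in Mbit/s, from
# payload columns x 9 rows x 64 kbit/s per byte position.
def vc3_xv_capacity(x: int) -> float:
    return x * 84 * 9 * 64e-3        # 84 payload columns per VC-3

def vc4_xv_capacity(x: int) -> float:
    return x * 260 * 9 * 64e-3       # 260 payload columns per VC-4

print(round(vc3_xv_capacity(1), 3))   # 48.384
print(round(vc4_xv_capacity(1), 3))   # 149.76
print(round(vc4_xv_capacity(4), 3))   # 599.04
```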

Concatenation of X VC-12s
1. A VC-12-Xv provides a payload area of X Container-12s, as shown in the figures above. The container is
mapped into X individual VC-12s which form the VC-12-Xv. Each VC-12 has its own POH.
2. Each VC-12 of the VC-12-Xv is transported individually through the network. Because of this, a differential
delay will occur between the individual VC-12s, and therefore the order and the alignment of the VC-12s
will change. At the termination, the individual VC-12s have to be rearranged and realigned in
order to re-establish the contiguous concatenated container. The realignment process has to cover at
least a differential delay of 125 µs.



The method of AU-4 concatenation is devised to provide greater payload capacity. The process of byte
interleaving is not required, and it provides a greater capacity of the transport medium. The normal standard
container in the SDH system is the VC-4, regardless of the signal type (STM-1, STM-4, STM-16, STM-64).
The first column of the VC-4-Xc frame carries the POH; columns 2 to X of the VC-4-Xc are filled with fixed stuff. The
capacity of the payload available for mapping is X times the capacity of a Container-4, which can be expressed as
149.76 x X Mbit/s in terms of the signal rate.
The first AU-4 of an AU-4-Xc shall have the normal range of pointer values, that is, 0 to 782. All subsequent
AU-4s within the same AU-4-Xc have their pointers set to the concatenation indication.

1. There are two types of SDH concatenated signal: contiguous concatenation and virtual
concatenation.
2. An AU-4 is designed to carry a C-4 container that has a capacity of 149.76 Mbit/s. If
there are services that require a capacity greater than 149.76 Mbit/s, a larger structure is needed to transport
the payload of these services. The AU-4-Xc is designed for this purpose.
The position of the POH bytes of the VC-4-Xc/VC-4/VC-3 is, in general, the first column of the 9 rows of the
respective virtual container.
The VC-4-Xc POH is located in the first column of the 9-row by X x 261-column VC-4-Xc structure.

1. Contiguous concatenation maintains the contiguous bandwidth throughout the whole transport, and
requires concatenation functionality at each network element.
2. Virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the
individual VCs and recombines these VCs into a contiguous bandwidth at the end point of the
transmission. Virtual concatenation requires concatenation functionality only at the path
termination equipment.

This table shows the improvements in bandwidth efficiency that can be made by using virtual
concatenation instead of contiguous concatenation.
From the Fast-Ethernet example, we can see that the efficiency improvement is from 67% to 100%.
Even larger efficiency improvements can be made with some other data services. This highlights the
benefits and importance of Virtual Concatenation.

Virtual Concatenation for High-Order VC-3/4


Frame counter (MFI):
A combination of the 1st multiframe and the 2nd multiframe; MFI = 0-4095
Sequence indicator (SQ):
Number identifying each member in the VCG
SQ = 0-255
Virtual Concatenation for Low-Order VC-11/12
Frame counter (MFI):
Multiframe counter; MFI = 0-31
Sequence indicator (SQ):
Number identifying each member in the VCG
SQ = 0-63
Link Capacity Adjustment Scheme, High-Order VC-3/4
Member Status (MST):

The status of all members (256) is transferred in 64 ms

MST = 0: Link OK
MST = 1: Link Fail
Link Capacity Adjustment Scheme, Low-Order VC-11/12
Member Status (MST):
The status of all members (64) is transferred in 128 ms


1. The VC-12 virtual concatenation frame count is contained in bits 1 to 5. The VC-12 virtual
concatenation sequence indicator is contained in bits 6 to 11. The remaining 21 bits are reserved for
future standardization; they should be set to all "0"s and should be ignored by the receiver.
2. The VC-12 virtual concatenation frame count provides a measure of the differential delay of up to 512
ms in 32 steps of 16 ms, which is the length of the multiframe (32 x 16 ms = 512 ms).
3. The VC-12 virtual concatenation sequence indicator identifies the sequence/order in which the
individual VC-12s of the VC-12-Xv are combined to form the contiguous container VC-12-Xc, as
shown in the figures.
4. Each VC-12 of a VC-12-Xv has a fixed, unique sequence number in the range 0 to (X-1). The VC-12
transporting the first time slot of the VC-12-Xc has sequence number 0, the VC-12
transporting the second time slot has sequence number 1, and so on up to the VC-12 transporting
time slot X of the VC-12-Xc with sequence number (X-1). For applications requiring fixed
bandwidth, the sequence number is assigned as fixed and is not configurable. This allows the constitution
of the VC-12-Xv to be checked without using the trace.
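To make the bit layout described above concrete, here is a small Python sketch (names are illustrative, and bit 1 is taken as the most significant bit of the 32-bit string) that packs and unpacks the low-order virtual concatenation control string: bits 1-5 carry the frame count (MFI), bits 6-11 the sequence indicator (SQ), and bits 12-32 are reserved zeros.

```python
# Pack/unpack the 32-bit low-order VCAT control string described above,
# with bit 1 taken as the most significant bit of the 32-bit word.
def pack_lo_vcat(mfi: int, sq: int) -> int:
    assert 0 <= mfi < 32 and 0 <= sq < 64
    return (mfi << 27) | (sq << 21)          # remaining 21 bits stay zero (reserved)

def unpack_lo_vcat(word: int):
    return (word >> 27) & 0x1F, (word >> 21) & 0x3F

w = pack_lo_vcat(mfi=17, sq=3)
print(f"{w:032b}")          # 10001 000011 followed by 21 reserved zeros
print(unpack_lo_vcat(w))    # (17, 3)
```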

Another interesting technology associated with Next-Generation SONET/SDH is LCAS.


LCAS, or Link Capacity Adjustment Scheme, allows the addition and removal of containers to/from virtual
concatenation groups under the control of a management system, thereby changing the transport bandwidth.
Additionally, LCAS can temporarily remove any failed links.
LCAS uses control packets transported over the H4 and K4 SONET/SDH overhead bytes between
the source and destination points.
For example, LCAS can change a group from having two VC-3 containers to three VC-3 containers,
thereby changing the bandwidth from 100 Megabits per second to 155 Megabits per second.




Requirement
Allows containers to be added/removed from group as the data bandwidth requirement changes
Also provides ability to remove links that have failed
Addition and removal of containers must be hitless
Operation
A control packet (transmitted in the H4 byte for high order and the K4 byte for low order) is used to configure
the path between source and destination.
Each control packet describes the state of the link during the next control packet.
Changes are sent in advance so the receiver can switch as soon as the new configuration arrives.

The Group Identification Bit is


an additional verification mechanism to ensure that all incoming VCG members belong to one group.
Re-sequence acknowledgement is
a mechanism by which the sink reports to the source the detection of any additions/removals to/from
the VCG.
The Member Status Field is
a mechanism by which the sink reports to the source which VCG members are currently and correctly
received.
The Cyclic Redundancy Check is a
protection mechanism to detect bit errors in the Control Packet.

First, let's look at GFP. We will then look at concatenation.


There are currently two modes of GFP encapsulation defined:
1. Frame-mapped GFP, or GFP-F
2. Transparent-mapped GFP, or GFP-T
Frame-mapped GFP maps a client frame (for example, Ethernet) in its entirety into one GFP frame.
Transparent GFP operates on a client data stream as it arrives and requires fixed-length GFP frames.
Any protocol (for example, Fibre Channel, ESCON, FICON, Ethernet, etc.) can be mapped into
Transparent GFP.

The core header contains the length of the payload area and frame delineation information, protected by a CRC-16
for error detection and correction. Its length is 4 bytes.
The GFP payload area transports higher-layer-specific information; its length is 4 to 65535 octets.
Core header: 1. PDU length indicator (PLI) field
2. Core HEC (cHEC) field
PDU length indicator field:
A binary number representing the number of octets in the GFP payload area. The minimum value of the PLI
field in a GFP client frame is 4.
The core header thus carries a 16-bit payload length indicator and a 16-bit header error check.
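As an illustration, here is a hedged Python sketch of building a GFP core header: a 16-bit PLI followed by a 16-bit cHEC. It assumes the CRC-16 generator x^16 + x^12 + x^5 + 1 with a zero initial value; consult ITU-T G.7041 for the normative procedure (which additionally scrambles the core header with a fixed pattern). The function names are arbitrary.

```python
# Sketch of a GFP core header: 2-byte PLI + 2-byte cHEC (CRC-16).
def crc16(data: bytes, init: int = 0x0000) -> int:
    """Bitwise CRC-16 with generator polynomial x^16 + x^12 + x^5 + 1 (0x1021)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_len: int) -> bytes:
    """payload_len is the PLI value: the number of octets in the GFP payload area."""
    assert 4 <= payload_len <= 65535        # PLI values 0-3 are reserved for control frames
    pli = payload_len.to_bytes(2, "big")
    chec = crc16(pli).to_bytes(2, "big")
    return pli + chec

print(gfp_core_header(1500).hex())
```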


The compact XDM-100 offers scalable STM-1/4/16 aggregation of access traffic in multi-ring and point-to-point
topologies. The platform adds/drops various PDH, SDH, Gigabit Ethernet (GbE) and Fast
Ethernet services at local Points of Presence
(POPs). The XDM-100 also provides a service layer, which terminates WAN links and consolidates
Ethernet traffic arriving from the local access network. Traffic can be carried to local GbE interfaces or
routed to the metro-core network.

When placed in hub RAN sites,


The XDM-100 supports:
. Aggregation of low-order traffic generated at BTSs (Base Transceiver
Stations)/Node Bs into higher link rates
. Consolidation of data traffic with TDM traffic for shared infrastructure
. Point-to-point rings and mesh topologies
. Closure of multiple STM-1 rings (BTS/Node B collector rings)
. Closure of higher bitrate rings towards remote BSC or RNC centralized sites
(STM-4/16 rings)
. Efficient handling of advanced data services (WLAN and IP migration) and additional data services
provided by cellular operators.

Acting as both an ADM and a small cross-connect, the XDM-100 can easily convert from an aggregation component to a
multi-ADM. This is done once cellular operators decide to migrate from leased lines at the RAN to building their own
infrastructure.


The lower part of the shelf consists of the card cage and accommodates the MXC cards (main and protection) and the
ECU (External Connection Unit). The upper part of the shelf consists of the module cage and houses up to eight I/O modules. Various
types of interface modules supporting PDH, SDH and data services are available. To support system redundancy, each
MXC card contains an integrated xINF (XDM Input Filter) unit with connectors for two input power sources. The xFCU-100
fan control unit at the right side of the shelf provides cooling air to the system. It contains nine separate fans for
added system redundancy.
The basic XDM-100 cage contains slots for I/O interface modules, and dedicated slots for the MXC cards and the ECU.
The cage's design and mechanical practice conform to international mechanical standards and specifications.
The modules and cards are distributed as follows:
. Eight (8) slots, I1 to I8, optimally allocated for I/O interface modules.
. Two (2) slots, A and B, allocated for the MXC cards (main and protection). Each MXC card has two slots
(A1/A2 and B1/B2) to accommodate SDH aggregate modules.
. One (1) slot allocated for the ECU card.
The ECU is located beneath the MXC cards. Its front panel features several interface connectors for management,
external timing, alarms, orderwire and overhead (future release). It also includes alarm-severity colored LED indicators and
selectors, plus a display for selecting specific modules and ports for monitoring purposes.


1. High-order transmission paths for high-order and low-order subnetworks and for IP networks (for
example LAN-to-LAN connectivity: GbE to GbE)
2. Leased lines at various bitrates, from 2 Mbps up to 2.5 Gbps
3. Data and other digital services.

The XDM-100 also supports a wide range of I/O interfaces and Ethernet Layer 2 services, enabling its
deployment in transmission networks, including:

E1 (2 Mbps asynchronous mapping)

DS-3 (45 Mbps)

STM-1 (155 Mbps) electrical interface; STM-1 optical interface(155 Mbps)

STM-4 (622 Mbps); STM-4c (ATM/IP 622 Mbps)

STM-16 (2.5 Gbps); STM-16c (ATM/IP 2.5 Gbps)

10/100BaseT

Gigabit Ethernet (GbE)


8x I/O modules
MXC100 Main X-connect Card
A1/A2 & B1/B2 aggregate slots containing:
SAM1_4/O/E/OE SDH Aggregate Module, SAM4_2, SAM16_1
I/O modules:
SIM1_4/O/E/OE SDH Interface Module, SIM4_2, PIM2_21 PDH Interface Module, PIM345_3,
EISM Ethernet Interface & Switching
The ECU is located beneath the MXC cards. Its front panel features several interface connectors for
management, external timing, alarms, orderwire and overhead (future release). It also includes alarm
severity colored LED indicators and selectors plus a

display for selecting specific modules and ports for monitoring purposes.


TCF TPU Control & Fan


TPU Modules:
TPM1_1 Tributary Protection Module (1:1 protection)
TPM1_3 Tributary Protection Module (3:1 protection)
The XDM-100 provides hardware protection for the electrical tributary modules with the Tributary Protection
Unit (TPU). The TPU is an expansion shelf plugged onto the top of the XDM-100. Connected to the control
and power of the XDM-100 shelf, it becomes an integral part of the platform. The TPU accommodates tributary protection modules (TPMs)
supporting 1:1 and 1:3 protection schemes for each type of module. Up to four groups of 1:1 protected electrical modules can
be defined. Each TPM module is connected to a pair of PIM2_21, PIM345_3, SIM1_4/E (or SIM1_4/OE in
the future) modules by traffic cables. Up to two groups of 1:3 protected electrical modules can also be provided. Users may define any
combination of 1:1 and 1:3 protection schemes. (For additional details, see Chapter 7, Physical
Description.) The TPMs include protection-switching relays, which are activated by the active MXC upon
detection of a faulty module. One module is always in standby mode to
protect the active modules. In case of a failure in an active module, the standby module becomes active and replaces the
faulty one in less than two seconds, without having to disconnect any cable.



1. Simple addition or replacement of plug-in modules. This can be performed while the system is in operation,
without affecting traffic in any way.
2. Optimization of aggregate module assignment. Two aggregate modules are associated with each
Main Cross-connect Control (MXC) card. Each module supports a bandwidth of up to 2.5 Gbps.
3. Optimization of tributary I/O slot assignment. Eight slots can accommodate different I/O modules
(PIM, SIM and EIS-M).
4. In-service scalability of SDH links. An optical connection operating at a specific STM rate can be
upgraded from STM-1 to STM-4 or STM-16.
5. The XDM-100 supports mesh, ring, star and linear topologies. All system
configurations are controlled by a single network management system with end-to-end service
provisioning, from E1 to STM-16 and DS-3.



XDM-1000 Multiservice Metro Optical Platform: designed for high-capacity central exchange
applications, the XDM-1000 is optimized for the metro core and features unprecedented port densities.
As a digital cross-connect, it builds a fully protected mesh core. As a multi-ADM, it simultaneously closes
STM-64/OC-192 core MS-SPRing/BLSR rings and multiple edge 1+1/UPSR rings. The XDM-1000 provides connectivity between
central office legacy switches over E1, E3/DS-3 and STM-1/OC-3 trunks, and between POPs over native Gigabit Ethernet or POS, while
efficiently grooming traffic from edge rings. As a DWDM platform, it enables migration from SDH/SONET to DWDM, providing
high capacity, sub-lambda grooming and reliability.

XDM-2000 Multifunctional Intelligent Optical Switch: designed for the metro and metro-regional
core, the XDM-2000 is optimized for pure DWDM and hybrid optical applications. This is a highly dense
DWDM platform that provides intelligent sub-lambda grooming and optimum wavelength
utilization. The XDM-2000 integrates the most advanced optical units with a variety of interfaces and a
sophisticated, high-capacity matrix in one small, low-cost package.

XDM-500: designed for medium interface capacities and street-cabinet installations, the XDM-500 is a compact optical
platform optimized for the metro edge. It provides traditional broadband
services and highly advanced data services, such as adaptive-rate Gigabit Ethernet, POS and lambda services.

XDM-400 Versatile Platform for Optical Metro-Access and Cellular Networks: this is an innovative product optimized
for optical metro-access and cellular applications. It features an advanced architecture, high capacity,
and reliability for data-optimized next-generation optical networks.
System Features
The XDM-400 offers operators a myriad of features and benefits, including:
High scalability: migration from low-capacity to high-capacity SDH/SONET networks, with high switching capabilities
and flexible configurations (STM-1/4/16 and OC-3/12/48 ADMs, multi-ADM and local cross-connect).
High modularity: initial configuration with a small number of interfaces and a gradual increase to the required capacity
as demands grow.
High versatility: a mix of different services tailored to each customer's needs.
Data-aware service interfaces and functionality for migration to data-centric networks.
Common cards shared with the XDM platform, for example xMCP, MECP, PIO, SIO, XIO (for a detailed description of
these cards, see Chapter 7).
Internal mechanisms and high-level Built-In Test (BIT) that ensure correct connectivity within the switch core, and
between the I/Os and the switch. Since the matrix is non-blocking and every I/O card has direct independent access
to it, the XDM-400 supports star, mesh, ring and mixed topologies.
Small size: the XDM-400 is only 500 mm high (three units can be accommodated in an ETSI rack).
amplifier/booster for terminal sites.


Data is the main bandwidth growth driver; this section elaborates on ECI data solutions.

Virtual Concatenation
Improves on current models of contiguous concatenation by supporting much finer
granularity of circuit provisioning and management from the edge of the network. Right-sized pipes for packet services
(Ethernet, in particular). Both higher order (STS-1 granularity) and low order (VT1.5 level) are available, supporting a
range of high- and low-speed service assignments.

1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The
difference is the transport between the path terminations.
3. Virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the
individual VCs and recombines these VCs into a contiguous bandwidth at the end point of the
transmission. Virtual concatenation requires concatenation functionality only at the path
termination equipment.
4. In the figure above, an AU-4-4c is passed through Node B over lower order STM-1s. In this case, the AU-4-4c is
split up into four VC-4s, and each VC-4 is switched independently using virtual concatenation.

The LAN delivers carrier-class Ethernet services over SDH/SONET, including: Transparent LAN services,
Shared LAN services, TDM services
The LAN is a unique data-aware SDH/SONET multiplexer with Layer 2 switching intelligence. It functions
as an STM-1/OC-3 ADM, TM or dual homing protected TM, that provides both TDM services and
10/100BaseT Ethernet services. Ethernet signals are mapped into n*VC-12 containers, starting from 2 Mbps
up to the full 100 Mbps, with a capacity of up to 2 x 155 Mbps over SDH/SONET. The LAN's scalable
architecture effectively expands optical SDH/SONET networks, helping operators provide new services and
tailor solutions to the needs of medium and large enterprises.
Configurable and manageable bandwidth, from 2 Mbps to 100 Mbps, with a wide range of interfaces:
2/6 x Fast Ethernet ports (10/100BaseT), 16/21 x E1 tributary ports, up to 2 x STM-1/OC-3 optical
aggregates.
High performance and availability:
Ethernet switch functionality
SDH and Ethernet protection and restoration mechanisms for continuous traffic transmission High QoS for
mission-critical data, VoIP and videoconferencing.
Support for diverse configurations and topologies:

ADM, Terminal Multiplexer (TM), dual homing terminal, regenerator, repeater; ring, point-to-consecutive-point (chain) and
point-to-point topologies
Protected and unprotected configurations.


The 1642 Edge Multiplexer is a full ADM operating at the STM-1 bit rate of 155 Mbit/s. It can be
configured as a Multi Line Terminal Multiplexer or as an Add/Drop Multiplexer for point-to-point and ring
applications.

The SETS accepts synchronization inputs from a number of sources:


STM-n lines
2Mb/s traffic ports
2Mb/s external input
internal oscillator
Automatic selection of one of these sources is achieved by selector B using quality (SSM, Synchronization
Status Message, algorithm) or priority criteria. Manual selection is also possible. The SETS function
produces two outputs. The NE clock reference is used as the internal timing source and to time the outgoing
SDH STM-n signals. One external 2 Mb/s output is generated as a possible source for external devices. Up
to 4 candidate references may be selected among all STM-n and 2 Mb/s traffic ports in the system. 2 Mb/s
external input and output are available. When configured as 2 Mb/s, the external I/Os can carry the SSM
timing marker information.
The SETG function has three modes of operation:
locked, holdover and free-running. In holdover mode, the SETG holds the frequency of the last valid

reference with a maximum drift of 0.37 ppm per day. The accuracy of the local oscillator is 4.6 ppm.


8x2Mbit/s Retiming unit (Coaxial Interface)


The unit provides the physical interface for the asynchronous mapping of G.703 2 Mb/s signals into SDH VC-12s. Each unit supports 8 interfaces on
the front panel. The unit implements the 2 Mb/s retiming function, which allows the internal clock reference to be applied to outgoing 2 Mb/s frames.
Retiming can be enabled/disabled at the 2 Mb/s port level. Another 2 Mb/s coaxial interface is provided for DCC/EOW channel transparent
transmission. The 1642 Edge Multiplexer can house up to 4 8x2Mb/s units in the shelf.

8x2Mbit/s unit (2mm Interface)


The unit provides the physical interface for the asynchronous mapping of G.703 2 Mb/s signals into SDH VC-12s. Each unit supports 8 interfaces on
the front panel. There are different units for 75 ohm and 120 ohm applications. The 1642 Edge Multiplexer can house up to 4 8x2Mb/s units in the shelf.

28x2Mbit/s unit (2mm Interface)


The unit provides the physical interface for the asynchronous mapping of G.703 2 Mb/s signals into SDH VC-12s. Each unit supports 28 interfaces on
the front panel. There are different units for 75 ohm and 120 ohm applications. The 1642 Edge Multiplexer can house up to 4 28x2Mb/s units in the shelf.

1x34Mbit/s unit (Coaxial Interface)


The unit provides the physical interface for the asynchronous mapping of G.703 34 Mb/s signals into SDH VC-3s. Each unit supports 1 interface (75 ohm)
on the front panel. The 1642 Edge Multiplexer can house up to 4 1x34M units.

1x45Mbit/s unit (Coaxial Interface)


The unit provides the physical interface for the asynchronous mapping of G.703 45 Mb/s signals into SDH VC-3s. Each unit supports 1 interface (75 ohm)
on the front panel. The 1642 Edge Multiplexer can house up to 4 1x45M units.

1xSTM1 Optical Unit


The 1xSTM-1 optical unit provides one optical STM-1 interface on the front panel. Several short-haul and long-haul types are available. The 1642 Edge
Multiplexer can house up to 4 1xSTM-1 units. The optical STM-1 interfaces can participate in any combination of SNCP schemes that can be flexibly
established by the craft terminal or management system.


The 1642 Edge Multiplexer can be used for transmission over G.652, G.653 and G.654 fibers. The main
applications can be identified in the following areas:
Delivery of SDH/Ethernet services to customer premises
Local and metropolitan rings
Point-to-point links with intermediate drop/insert and/or regeneration stations.
A comprehensive range of network elements is available for all transmission needs, from customer premises
applications to metropolitan networks, to long-haul and ultra long-haul terrestrial and trans-oceanic
applications, everything under the control of the same management system.
In addition to traditional transmission applications, OMSN delivers integrated ATM switching and Ethernet
switching capabilities.

OMSN guarantees full inter-working with the existing installed base in broad terms: from optical compatibility, to DCC (Data
Communications Channels) compatibility, to network management. In other words, existing networks can be upgraded with
new equipment without constraints.


The 1642 Edge Multiplexer offers Sub-Network Connection Protection (SNCP). This allows the 1642
Edge Multiplexer to be inserted directly into access rings, or to be connected to access rings via point-to-point
unprotected or protected links. SNCP also allows end-to-end protection of SDH paths. Dual
hubbing to two distinct central office ADMs is also possible. The 1642 Edge Multiplexer can also offer 48 V
power redundancy for equipment protection.

A Compact ADM-1 card concentrates 2xSTM-1 interfaces, the SDH matrix, the clock reference and the equipment
control functions. 4 slots are dedicated to traffic ports, for 2Mb/s services and above. The system can be
configured, for example, as an STM-1 Terminal Multiplexer with 8x2Mb/s plus 6xEthernet/Fast Ethernet for
office interconnect services, or as an STM-1 ADM with 112x2Mb/s.
It can be used to deliver 2Mb/s and Ethernet/Fast Ethernet leased lines as well as higher-end services
such as 34Mb/s, 45Mb/s and STM-1, with a versatile and compact chassis ideal for enterprise locations. It
empowers the carrier's leased-line service offering with strong and extensive SDH end-to-end path
control and monitoring capabilities.

The traffic units can be plesiochronous (PDH) or synchronous (SDH). The plug-in cards can be of the following types:
63x2Mbit/s unit
3x34/45Mbit/s switchable unit
4x140Mbit/s-STM-1 switchable electrical unit
4xSTM-1 electrical unit
4xSTM-1 electrical/optical unit
1xSTM-4 optical unit
1xSTM-16 optical unit
ISA-ATM switch unit (4x4 VC-4 or 8x8 VC-4 unit)
ISA-Ethernet rate adaptive unit
ISA-Gigabit Ethernet rate adaptive unit
Packet Ring Edge Aggregator 4xE/FE
Packet Ring Edge Aggregator 1xGbE
2xSTM-1 or STM-4 optical + central functions unit (named Compact ADM-4 unit)
1xSTM-16 optical SFP + central functions unit (named Compact ADM-16 unit)
The Compact ADM-4 and ADM-16 units provide the following functionality:
2xSTM-1 optical/electrical or 2xSTM-4 or 1xSTM-1 + 1xSTM-4 or 1xSTM-16 (SFP)
Matrix function (32x32 for Compact ADM-4 at Low Order and High Order level, or 64x64 STM-1 for Compact ADM-16 at Low Order VC level)
CRU function
Equipment Controller function
All electrical traffic ports can be optionally protected in N+1 configuration.


The matrix HW is also open to support VC-2 cross-connection capabilities in future software
releases.
AU4-4C and AU4-16C concatenated signals can also be cross-connected between any STM-4 and
STM-16 ports. All AUGs (STM-N Administrative Unit Groups) managed by the SDH matrix are
structured according to the standard ETSI mapping (1xAU4). The SDH VC matrix implements
single-ended SNCP/N (non-intrusive) and SNCP/I (inherent) protections. The protection mode may
be revertive or non-revertive. The SDH VC matrix can also provide up to 2 x 2-fibre MS-SPRing
protection at STM-16 with 16 STM-1 tributaries.

The 1642 Edge Multiplexer consists of the following parts:
Chassis (four tributary slots)
Compact ADM-1 unit (providing 2xSTM-1 optical + central functions)
Four slots for any traffic ports, which can be of the following types:
- 8x2Mbit/s unit
- 28x2Mbit/s unit
- 1x34Mbit/s unit
- 1x45Mbit/s unit
- 1xSTM-1 optical unit
- 6xE/FE unit
[Figure labels: Metro Core/Backbone, Metro Edge, Metro Access, Metro CPE]
The Compact ADM-1 unit provides the following functionalities:
Up to 2 optical STM-1 modules
SDH Matrix function
CRU function
Equipment controller function
Auxiliary and order-wire function
The Compact ADM-1 unit is used to terminate the network ring (or line).


ATM switching modules application


The main application of the ATM ISA plug-in modules is to consolidate ATM traffic collected from
different sources onto shared SDH VCs (virtual containers) in STM-n rings and to switch the ATM
traffic, as needed at VP and /or VC level, into the network.
Typical applications of the ISA concept are ADSL, UMTS and LMDS metropolitan networks.
In all those scenarios the Provider can take great advantage of the distributed switching
functionality for optimizing the transmission resources avoiding wasting capacity not effectively
used by the paying traffic.The switch can also be used in FTTB scenarios and as CPE where a mix
of TDM and ATM services in the same box results in benefits for both the Provider and the
Customers.

In a given node, SDH VCs or PDH flows carrying ATM cells that do not require switching in the ATM layer, at virtual path (VP) / virtual channel (VC) level, can
be cross-connected transparently by the SDH matrix directly, without unnecessarily loading the ATM switch (pass-through functionality).

OADMs
Compact shelf, e.g. for a customer-premises optimized node
Per-channel selectable 1+1 optical channel protection (detection + switching time < 50 ms)
Tunable lasers
32 x 32 built-in optical cross-connection matrix
64 bidirectional transponders in one rack (ETSI and NEBS2000)
Q3 and TL1 management interface
Managed by Alcatel 1353 SH and 1354 RM

1696 Metro Span supports a wide range of network topologies:


Point-to-point
Point-to-point with linear add-drop
2-fiber ring with single hub-node
Fully meshed 2-fiber ring
Inter-connected rings with drop and continue
The 1696 Metro Span supports 1 ch., 2 ch., 4 ch. and 8 ch. optical add and drop multiplexers. These
OADMs are designed to provide a solution for small-size add/drop nodes, minimizing the optical loss for the
optical channels in transit. The maximum loss of a 4 ch. OADM for the pass-through channels is 4 dB
(including optical supervision). OADMs of different types can be cascaded in a modular way to achieve
cost-optimized solutions on day one and future flexible upgrades.
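As a rough illustration of what cascading means for the optical budget, the minimal sketch below simply sums per-stage pass-through losses and fibre loss. The 4 dB per-OADM figure is the one quoted above; the span lengths and the 0.25 dB/km fibre loss are hypothetical example values, not 1696 Metro Span data.

```python
# Minimal loss-budget sketch for a chain of OADM pass-through channels.
# Per-OADM loss of 4 dB is quoted in the text above; the fibre loss and
# span lengths below are hypothetical example values.

def passthrough_loss_db(num_oadms, per_oadm_db=4.0, span_km=(), fibre_db_per_km=0.25):
    """Total loss seen by an express (pass-through) channel."""
    return num_oadms * per_oadm_db + sum(span_km) * fibre_db_per_km

if __name__ == "__main__":
    # Example: three cascaded 4-channel OADMs with four 20 km spans.
    total = passthrough_loss_db(3, per_oadm_db=4.0, span_km=(20, 20, 20, 20))
    print(f"Estimated pass-through loss: {total:.1f} dB")
```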

A Hub node is used as:
a) an 8 ch., 16 ch., 24 ch. or 32 ch. terminal for point-to-point configurations
b) a scalable OADM capable of adding/dropping from 1 to 32 channels, providing an optical pass-through for the bands or channels that are not added/dropped locally

This feature brings flexibility at the network level, making it possible to balance traffic
from one part of the network to another, e.g. from business areas during the day to residential
areas during the night. For the mere cost of a standard 1696 Metro Span transponder, the
operator can benefit from a truly flexible OADM feature.

The 1696 Metro Span amplifiers support a full monitoring and measurement capability: external ports
are available to monitor both the input and output of the pre-amplifier and of the booster, and remote
measurements are available for the input and output powers of both the pre-amplifier and the booster.
The 1696 Metro Span amplifiers support an Automatic Gain Control (AGC) scheme to ensure hitless in-service
insertion of new channels or removal of channels. Amplifiers automatically self-adjust their points
of operation when the number of incoming channels changes, whether due to a network upgrade or to a network failure.

1. It is important to understand the above listed items related to protection switching before proceeding to
the detailed explanation of ring protection systems.
2. They are basic and common not only to ring systems but also to linear protection systems.
3. Various combinations of the above items are possible, for example:
(PPS)-(1+1)-(Uni-)-(Non-revertive): this combination is called single-ended path switch.
(PPS)-(1:1)-(Bi-)-(Revertive)-(SLA): this combination is called dual-ended path switch.
(LPS)-(1+1)-(Bi-)-(Non-revertive), etc.

1. In the above drawing, a thin line shows a path (VC-n or channel in STM-N) and a pipe means a line
(STM-N signal or multiplex section).
2. Path Protection Switch
2.1 Protection switches are set up at the ends of each path. Against a line failure, protection
switching takes place at the ends of the affected paths. Against a failure in a single path, switching
is applied only to that path. Usually path performance monitoring initiates the switching: a line
failure results in a path failure, and it is the path failure detection that causes switching, not the
line failure detection.
3. Line Protection Switch
3.1 Switches are set individually for all channels of the STM-N at both ends of a section. Against a
line failure, all channels are switched to the corresponding channels of the protection line.
Although the switching is at a channel (path) level, it is equivalent to switching the line (fibre).
The switching is triggered by the section failure detection. Failure detection in a path
does not cause the line switching.

1. 1+1 Protection Switch
1.1 At the transmitting side the normal signal (traffic) is permanently branched onto both the working
and protection lines. Against a failure on the working line, switching takes place at the receiving side
only. This switching scheme does not require a switching protocol.
2. 1:1 Protection Switch
2.1 Switches are installed at both the transmitting and the receiving side. This structure requires a switching
protocol. Under normal conditions, the protection line carries an idle signal or low-priority extra
traffic, which is removed when the working line fails.
3. N:1 Protection Switch
3.1 In this scheme one protection line is provided for N working lines. A switching protocol is
necessary, and the protection line can carry extra traffic while it is not used by one of the normal
traffics.
4. Usually 1+1 uses a non-revertive switch and 1:1 and N:1 use a revertive switch, but of course the other way
round is possible.

1. In 1:1 or N:1 protection, when the protection line is not occupied by normal traffic, it can carry low-priority
extra traffic. When a failure occurs on a working line, the extra traffic is removed from the
protection line and the line is used by the affected normal traffic.
2. This arrangement is called Stand-by Line Access (SLA).
3. A bidirectional switch is required.
4. Usually a revertive switch is applied for SLA, but for the 1:1 non-revertive type SLA is also possible by
using a slightly more complicated configuration.

1. Unidirectional Protection Switch
When the failure is only in one direction, only the affected direction of the traffic is transferred to the
protection line. For a 1+1 system this is a simple method because no protocol is necessary and the
switching can be decided at the receiving side alone. The disadvantage is that under protection status
the two directions use different hardware and lines, so they may have different propagation times.
2. Bidirectional Protection Switch
Even if a failure is only in one direction, both the affected and the unaffected direction are switched to
the protection line. This switching scheme requires a protocol, because the unaffected-direction
receiver has no way to detect the failure until it is informed by the remote side. Both directions of
transmission maintain equal delays, which may be important where there is a significant length
imbalance between the working and protection lines.

1. Unidirectional Ring
1.1 Normal routing of the traffic is that one direction of a two-way connection uses the right (or left)
half of the CW (or CCW) line and the other direction uses the left (or right) half of the same CW
(or CCW) line. Thus both directions travel around the ring in the same direction and use
capacity (a channel in the STM-N) along the entire circumference of the ring.
1.2 The total number of connections in the ring cannot exceed the capacity of each section.
1.3 The unused line (in the above case CCW) provides the protection capacity.
2. Bidirectional Ring
2.1 Normal routing of the traffic is that both directions of a two-way connection are routed over the
same section(s) and node(s). The corresponding channel of the STM-N in the unused half (in the above case
the left half) can be assigned to other connections that have no overlapping sections with each other.
2.2 The total number of connections in the ring can exceed the capacity of each section.
2.3 Protection capacity is provided by dividing the section capacity in half (for a 2-fibre ring) or by using an
additional fibre pair (4-fibre ring).


1. Revertive Switch
1.1 When the failure in the working system is repaired, the traffic that has been carried by the
protection system is switched back to the working system. When SLA is employed this
switching scheme is usually used, and also in an N:1 system, to vacate the protection system for the
next failure.

2. Non-revertive Switch
2.1 After the recovery of the failed working system, the traffic is not switched back to the working
system. When the protection system fails later, the traffic is transferred to the working system. Therefore it
is not appropriate to name them working and protection, and often they are called the 0 and 1
systems.
2.2 Whenever switching is applied, unless a complicated hitless switching is employed it
inevitably causes a data hit, a short period of errors. To avoid unnecessary errors this scheme is
important, especially in a 1+1 protection system.

3. Non-revertive 1:1
3.1 SLA is also possible by a different design method.


1. For the protection switching system, the manual operations listed above are possible. They
are used for maintenance purposes.
2. The traffic on a selected system is transferred to the protection system by the MSW (manual switch). When a failure is
detected on another system (SF, signal failure), the MSW is released and the failed system takes
over the protection system.
3. The FSW (forced switch) does the same switching as the MSW, but it does not release the protection system
on another system's failure.
4. When the LKOW (lockout of working) is applied to a working system, it is not switched to the protection system even if it
fails. Under this status neither MSW nor FSW can be applied.
5. When the LKOP (lockout of protection) is applied to the protection system, any switching to it is refused, either automatic (SF)
or manual (MSW and FSW).
6. The priority order among the operations is the above order: the larger the number, the higher the priority. There is no
priority difference between LKOP and LKOW.

1. In the revertive switching mode, to prevent frequent back-and-forth switching between the working
and protection because of an intermittent fault, the repaired working line must be shown to be error-free. To ensure this, a
wait-to-restore (WTR) time is set. It is on the order of 5-12 minutes, settable in 1-second increments.
2. To prevent chattering of the protection switch due to an intermittent failure, the working channel must
be fault-free for a fixed period of time before the switch is operated to transfer the traffic back from protection.
3. If a fault is detected on the waiting working system during the WTR period, the WTR timer is reset.
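The interplay of the fault-free requirement and the WTR reset described in items 1-3 can be captured in a small state sketch. The class and method names below are invented for this illustration; only the behaviour (start WTR when the working clears, reset it on a new fault, revert when the timer expires) follows the text above.

```python
# Illustrative wait-to-restore (WTR) model for a revertive protection switch.
# Not a real equipment API; only the described behaviour is modelled.

class WaitToRestore:
    def __init__(self, wtr_seconds=5 * 60):       # WTR is on the order of 5-12 min
        self.wtr_seconds = wtr_seconds
        self.remaining = None                      # None = timer not running
        self.on_protection = True                  # traffic currently on protection

    def tick(self, seconds, working_faulty):
        """Advance time; working_faulty is the current state of the working line."""
        if not self.on_protection:
            return
        if working_faulty:
            self.remaining = None                  # intermittent fault: reset WTR
        elif self.remaining is None:
            self.remaining = self.wtr_seconds      # working clear: start WTR
        else:
            self.remaining -= seconds
            if self.remaining <= 0:                # fault-free for the full WTR period
                self.on_protection = False         # revert traffic to working

if __name__ == "__main__":
    sw = WaitToRestore(wtr_seconds=300)
    for t in range(0, 400, 10):
        sw.tick(10, working_faulty=(t == 100))     # one intermittent hit at t = 100 s
    print("still on protection:", sw.on_protection)
```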

1. A ring configuration provides reliability to the network's traffic.
2. Traffic is protected because it can flow between any two points in either of two directions in the ring:
clockwise or counterclockwise.
3. Rings can be Two Fiber (2F) or Four Fiber (4F). The ring shown above is a 2F ring (one pair of fibers
between adjacent nodes of the ring). Additional reliability can be obtained by the deployment of 4F
rings.
4. In a 4F ring, nodes are interconnected by two pairs of fibers. The main pair is known as the Working
Fibers (W), and the second pair is known as the Protection Fibers (P).
5. There are two basic protection schemes for the ring architecture, which will be explained throughout
this lesson:

1. Ring systems are classified into five types based on the employed switching method combination (unidirectional or bidirectional
switch and path or line switch), the routing of two-way traffic (unidirectional or bidirectional ring) and the fiber number (2 or 4 fibers).
2. Theoretically any combination between switching and routing direction is possible, but in ring systems only the uni-uni and bi-bi
combinations shown above are used.
3. SNCP ring (Unidirectional)
3.1 Also called 2-fiber Unidirectional Path protection Switch Ring (2F-UPSR).
3.2 No switching protocol is required. This is a simple and fast switching scheme.
4. MS Dedicated Protection Ring
4.1 2-fiber Unidirectional Line protection Switch Ring (2F-ULSR).
4.2 A protocol using K1 and K2 in the MSOH is required; it is not yet standardized by ITU-T.
5. SNCP ring (Bidirectional)
5.1 2-fiber Bidirectional Path protection Switch Ring (2F-BPSR).
5.2 The ring must be controlled at path level and a protocol using K3 or K4 in the POH is required. It is not yet standardized by ITU-T.
6. 4F MS-SP ring
6.1 4-fiber Bidirectional Line protection Switch Ring (4F-BLSR).
6.2 A protocol using K1 and K2 in the MSOH is required.


7. 2F MS-SP ring
7.1 2-fiber Bidirectional Line protection Switch Ring (2F-BLSR).
7.2 A protocol using K1 and K2 in the MSOH is required.
8. There are two different algorithms in the 4F/2F MS-SP ring: the terrestrial application and the transoceanic application.


1. The SNCP unidirectional ring (2F-UPSR) uses
a 2-fiber unidirectional ring architecture and
a unidirectional 1+1 path switch scheme.
2. The drawing shows a 2-way connection (A-D, as an example) which uses a channel (time slot) in the STM-N signal. The
solid line indicates the channel in the clockwise (CW) STM-N (fiber) and the broken line the corresponding channel in the
counter-clockwise (CCW) STM-N. Other connections between any other nodes are made by using different channels of the
STM-N.
3. At the transmitting node (A) the traffic is branched onto both the CW and the CCW STM-N, and at the receiving node (D) one of
the two copies carried by CW and CCW is selected by a path protection switch (PPS). The opposite direction (D to A) is arranged in the
same way using the same channel on the unused semicircle.
4. Under normal status, the PPSs at A and D select the CW signals, making a unidirectional ring. The signals on CCW are stand-by signals.
5. It is possible to set one of the PPSs to CCW. In this case both directions of the traffic travel on the same route, which makes the
transmission delay of both directions equal. This is a bidirectional-like setting, but the system is still an SNCP unidirectional
ring.
6. An important point of the system is that one path occupies one channel of the STM-N (both CW and CCW) along the entire
circumference of the ring. As a result, the total number of paths in a ring cannot exceed the STM-N's capacity (a worked count follows this list).

7. There is no limitation on the number of nodes in an SNCP ring from the point of view of the ring control; the STM-N capacity determines the limit.
8. Operators can selectively provision paths as protected or unprotected. An unprotected path will only use capacity on the selected route
(semicircle); the other route can then carry additional unprotected path(s).
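The count referred to in item 6 can be made concrete with standard SDH multiplexing figures (63 VC-12 per STM-1): in a 2F-UPSR every protected 2 Mbit/s path occupies one VC-12 around the whole ring, so the ring capacity, not the node count, bounds the number of paths. A minimal sketch:

```python
# Quick capacity count for an SNCP unidirectional ring (2F-UPSR).
# Standard SDH figure: one STM-N carries N x 63 VC-12 (2 Mbit/s) channels.

VC12_PER_STM1 = 63

def max_protected_2m_paths(stm_n):
    """Each protected path occupies one VC-12 around the entire ring,
    so the limit equals the STM-N VC-12 count, independent of node count."""
    return stm_n * VC12_PER_STM1

if __name__ == "__main__":
    for n in (1, 4, 16):
        print(f"STM-{n} SNCP ring: at most {max_protected_2m_paths(n)} protected 2 Mbit/s paths")
```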


1. Under the (line and path) failure status:
1.1 When a failure or degradation is detected on the currently selected signal, the switch position of
the PPS is changed (in the above case, CW to CCW at node D). The unaffected direction of the path
does not activate its PPS (in the above case, at A).
1.2 The activation of the PPS does not require a protocol between nodes. Failure detection and
PPS activation are done by the receiving node alone. This makes the system control simple and,
as a result, the switching time short.
1.3 When the failure is at the line (STM-N) level, all (or many) paths in the ring are affected and
PPS switching occurs independently at every concerned node.
2. For the bidirectional-like setting, when the failure is at the line level, both directions are switched at
the same time, but they are considered as independent unidirectional switchings.
2.1 This causes data hits in both directions, whereas for the normal unidirectional ring arrangement the
data hits are only in one direction.

1. General SNCP has a wider definition than the previously explained SNCP ring. The SNCP ring is one
type of SNCP.
2. The SNCP can have any type of physical network structure (i.e. meshed, ring or mixed) between the
PPSs. The SNCP ring (2F-UPSR) is limited to the ring configuration.
3. A path can be subdivided into subnetwork connections (SNC) as described above. The CP is a
connection point of SNCs. The TCP is a path termination point.
4. The SNCP can be used to protect a portion of a path (i.e. an SNC), by setting PPSs at two CPs (Case
3) or at a CP and a TCP (Case 2), or the full end-to-end path by putting PPSs at two TCPs (Case 1).
5. To make the SNCP flexible, three types of PPS should be available: Aggregate-Aggregate
PPS, Aggregate-Tributary PPS and Tributary-Tributary PPS. Implementation of all or some of them in
equipment depends on its design.
6. The SNCP ring (2F-UPSR) uses only the Aggregate-Aggregate PPS.

1. The 4-fiber MS-SP ring (4F-BLSR) uses
a 4-fiber bidirectional ring architecture and
a bidirectional 1:1 line switch scheme.
One pair of fibers is for the working line and the other for the protection line.
2. The solid and broken lines show the same channels (time slots) in the working and protection STM-Ns respectively.
3. Different from the SNCP ring, both directions of a 2-way path are routed over the same route and they
are not branched onto the protection channel. The corresponding channel on the second semicircle
(left, in the drawing) is vacant, so it is possible to arrange other paths, like another A-D or D-E and E-A,
etc., using the same channel. Of course those paths must not overlap; overlapping
paths must use a different channel of the STM-N. As a result, the total number of paths in the ring can
exceed the STM-N capacity.
4. SLA (Stand-by Line Access) is possible.
5. It requires the switching protocol carried by K1 and K2 of the MSOH.

6. The maximum number of nodes is limited to sixteen (16). This is determined by the control protocol (standard): four bits
in K1 are assigned to indicate the message destination, resulting in a maximum of 16 nodes.
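The 16-node limit in item 6 follows directly from a 4-bit node ID field. The sketch below packs K1/K2-style APS bytes to show that arithmetic; the bit layout follows the usual G.841 arrangement (request code plus destination ID in K1, source ID plus path/status in K2) but should be checked against the standard, and the request code value used is purely illustrative.

```python
# Simplified packing of MS-SPRing APS bytes, to show why a 4-bit node ID
# field limits the ring to 16 nodes. Bit positions follow the usual G.841
# layout; verify against the standard before relying on them.

def pack_k1(request_code, dest_node):
    assert 0 <= dest_node <= 15, "4-bit destination field -> at most 16 nodes"
    return ((request_code & 0x0F) << 4) | (dest_node & 0x0F)

def pack_k2(src_node, long_path, status):
    assert 0 <= src_node <= 15
    return ((src_node & 0x0F) << 4) | ((1 if long_path else 0) << 3) | (status & 0x07)

if __name__ == "__main__":
    k1 = pack_k1(request_code=0b1011, dest_node=12)   # request code value is illustrative
    k2 = pack_k2(src_node=3, long_path=True, status=0)
    print(f"K1=0x{k1:02X} K2=0x{k2:02X}")
```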


1. Against a total section failure (both working and protection), the traffic on the working fiber is looped
to the corresponding channel of the opposite-direction protection fiber at both ends of the failed
section (B and C).
2. The looped traffic passes through both the originating and destination nodes and is looped back to the
working fiber at the other end of the failed section (C and B).
3. The same applies to all other channels used by other paths, i.e. the loop at B and C and the
pass-through at the other nodes. The result is just as if the traffic were looped at the STM-N (fiber) level. That
is why this system is categorized as a line switch.
4. The line switch takes place at the failed-section side of the node. That is, at node C it is not at the
side facing node D but at the side facing node B.
5. Now the long route (C-D-E-F-A-B) is used as a protection line for the short route (B-C). In this way, the
protection ring is shared by all sections (MS) in the ring.
6. This switching against a total section failure is named Ring Switch.
7. Unaffected paths, e.g. paths between C-D, E-A etc., remain on the working fiber.

8. Against multiple section failures, not all paths can survive; some of them are lost.
9. When a node failure occurs, e.g. at node C, ring switches take place at node B and node D, at the sides facing
node C.


1. Against a section failure on the working line only, all traffic carried by the failed section is
transferred to the protection fiber of the same section. This switching is named Span Switch.
2. When multiple section failures occur, and if they are all working-line-only failures, span
switches are applied to all of them and all normal traffic is protected. The 4F MS-SP ring can therefore
protect against multiple failures of a certain type.
3. This means network operators can expect higher survivability from 4F MS-SP than from 2F MS-SP, in addition
to higher capacity (cf. the 2F MS-SP explanations).
4. Extra traffic carried via SLA that passes through the failed section is removed.
5. The system automatically selects the ring switch or the span switch depending on the failure mode.
Maintenance crews do not have to intervene.
6. Both in the ring and in the span switching mode, all traffic is switched to the protection. Unlike the SNCP ring,
it is impossible to set protected paths selectively: all paths in the MS-SP ring are always protected.

1. The 2-fiber MS-SP ring (2F-BLSR) uses
a 2-fiber bidirectional ring architecture and
a bidirectional 1:1 line switch scheme.
2. To provide protection channels, the STM-N capacity is divided into two parts. The first half is
assigned to the working channels and the second half to the protection.
3. The path arrangement is the same as in the 4-fiber MS-SP ring, including reuse of channels.
4. The total maximum number of paths in the ring is half that of the 4-fiber MS-SP ring, under the same
conditions.
5. SLA is possible.
6. The same protocol as for the 4-fiber ring is used and the maximum node number is also sixteen (16).

1. Against a section failure, the loop switching takes place at both ends of the failed section (B and C).
The traffic carried by a channel of the working half of the STM-N is transferred to the corresponding
channel of the protection half of the opposite-direction STM-N. It travels the long route passing the originating,
destination and other nodes (A, F, E and D). At the remote end it is switched back to the working half
of the opposite direction (the original fiber).
2. There is no Span Switch for the 2-fiber ring. Unlike 4F MS-SP, there is no survivability against multiple failures.
3. Other points are the same as for the 4-fiber ring.

1. Each node on the MS-SP ring must be assigned an ID, a number between 0 and 15. The ID
assignment is not necessarily in order. The SNCP ring does not require node IDs.
2. In the protocol exchange, the node ID is used to indicate a message destination.
3. The cross-connection table of each node must be provided with information that shows the originating
and destination node of each path, indicated by the node ID.
4. Each node must have knowledge of the IDs assigned to all other nodes in the ring and their sequence.
The node ID map, which is provisioned to each node, provides this information.
5. When nodes are added to the ring, the node ID map must be revised.

1. The 4F/2F MS-SP ring has the possibility of making a misconnection when a node fails.
2. Misconnection
2.1 Two paths, A-C and C-F, are terminated at the failed node (C) and they use the same channel of
the STM-N. In this case, their stand-by channel in the protection capacity is the same one.
2.2 When node C fails, ring switches take place at nodes B and D. Node B does it to save the A-C path
and node D to save the C-F path, and they are switched to the same protection channel.
2.3 The result is that the two previously independent paths are changed into a single A-F connection.
2.4 This is inevitable, and the system sends out an AIS (Alarm Indication Signal) on the misconnected
paths in order to avoid the inconvenience (see the sketch below).
3. No misconnection
3.1 When two paths that use the same channel of the STM-N are not terminated at the failed node
(through connection), e.g. A-E and E-F, the misconnection does not occur. No AIS is sent out on those
paths.
4. This control is named Squelch Control. Its procedure is shown at the right of the drawing.

5. For the squelch control, the squelch table (next slide) must be provisioned to each node.
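The squelch decision in items 2-4 reduces to: insert AIS into a protected channel only when a path using that channel terminates at the node isolated by the ring switches. The sketch below models that rule; the data structures are invented for the illustration and the real squelch table format is equipment specific.

```python
# Minimal model of squelch control in an MS-SP ring. AIS is inserted only for
# channels whose path terminates at an isolated (failed) node, so two paths
# that happen to share a channel cannot be misconnected.

def squelch(paths, isolated_nodes):
    """paths: iterable of (channel, a_end, z_end). Returns channels to squelch (AIS)."""
    return {ch for ch, a, z in paths if a in isolated_nodes or z in isolated_nodes}

if __name__ == "__main__":
    # Channel 5 is reused by A-C and C-F (the misconnection case in the text);
    # channel 7 is reused by A-E and E-F, which only pass through node C.
    paths = [(5, "A", "C"), (5, "C", "F"), (7, "A", "E"), (7, "E", "F")]
    print(squelch(paths, isolated_nodes={"C"}))   # -> {5}
```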


1. There are two kinds of AU-4 (VC-4). One type carries low order VCs (LOVC: VC-3, VC-12 etc.) and is
called a VC-organized AU. The other is a pure VC-4 that carries, for example, a 140 Mb/s signal.
2. When the cross-connect level of a VC-organized AU-4 is set to LOVC level, i.e. other than VC-4, the
NE must have information (the squelch table) about where its cross-connect level is set to LOVC level again, in
both the east and west directions. It does not matter whether the LOVC cross-connection is add/drop or
through or mixed, both at the own node and at the other side of the AU-4.
3. When the cross-connect level is VC-4 level, regardless of whether the AU is VC-organized or not, the squelch table is not
required, because its terminating node is clear from the cross-connect map.

1. If the MS-SP ring has sections of very great length, e.g. a transoceanic submarine cable, and the
ring switch is applied as in a terrestrial system, the traffic must cross the ocean back and forth under the
failure condition, resulting in a very long delay. To solve this problem a different algorithm, called the
transoceanic application, is standardized (also in G.841).
2. In this algorithm, instead of the ring switch, the affected path is switched to the corresponding
channel of the protection capacity of the opposite direction at the path terminating node. The same
occurs at all nodes that have affected paths. Actually, this is not a line switch but a path switch. The
result is the same as the SNCP bidirectional ring, but the protocol does not use K3 or K4 of the POH and the
algorithm is different.
3. When a section failure occurs, B and C exchange the protocol via the protection fiber of the long route
(B-A-F-D-E-C) using K1 and K2. Nodes A, F, E and D can monitor the exchange, decide which of
their paths are in trouble and take the above action at the path level. For this protocol the DCCr is
also used, to exchange the path mapping information.
4. Against a working-line-only failure on a 4F MS-SP ring, span switch(es) are applied in the same
way as in the terrestrial system.

1. When a path is laid over two rings, it can survive a failure within a ring thanks to the self-healing function of
the ring. But if the failure is on the link that connects the two rings, the path will be lost.
2. By connecting the two rings with two links and diverting the traffic to the second link when the first link
fails, the survivability of the path can be improved. This method is named Inter Locked Ring (ILR).
3. The ILR is possible for any combination of two rings: MS-SP~MS-SP, SNCP~SNCP or MS-SP~SNCP.
4. The two nodes where the links are connected are named the primary node and the secondary node. The
two nodes are not necessarily neighbouring nodes; it is possible to put nodes between the primary and
the secondary.
5. The ILR setting is on a per-path basis. It is not necessary to set ILR for all of the paths that pass over the link.

1. There are two types of MS-SP ~ MS-SP ILR, depending on whether the working capacity or the
protection capacity connects the primary and the secondary node. This drawing shows the on-working
case.
2. MS-SP ~ MS-SP ILR (P-S on-working)
2.1 At the transmitting-side primary node the traffic is branched onto the first and second link
routes. At the receiving-side primary node a service selector (SS) is installed and it chooses the
normal signal carried by the first or the second link. A failure on either the first or the second link
can be protected against. The path connections between the four nodes (two primary and two secondary
nodes) are the same as in SNCP. The SS works as the PPS.
2.2 The right-side drawing explains the protection against a section failure on the ring. The usual ring
switch or span switch (not shown) takes place. Thus a double failure on the link and the ring
can be protected against.
3. The connection between the primary and the secondary nodes uses the working capacity. A demerit
of this configuration is that it consumes a part of the working capacity for protection purposes and
reduces the efficiency of the ring.

1. This drawing shows the protection switching against a primary node failure. This is also the same as a
usual ring failure.

1. MS-SP ~ MS-SP ILR (P-S on-protection)
The connection between the primary and the secondary uses protection capacity. The switching against
a link failure is the same as in the previous case. The switching against a ring failure is completely different and
more complicated.
Against a ring failure, the same ring switches take place at the failed section. The traffic is looped onto
the protection capacity, but when it reaches the primary node it has no way to go further, because a
part of the protection capacity is occupied by the ILR.
To vacate the protection capacity for the ring-switched traffic, an automatic reconfiguration of the ILR is
carried out at the primary and the secondary. The SS and the branching for the opposite direction at
the primary are transferred to the secondary, as shown in the drawing, and the traffic on the first link is
connected directly to the working capacity of the ring. Now the ILR is again ready for a failure on the links.
This is named SS transfer.
The merit of this configuration, with its complicated ring control, is that it does not consume any of the working
capacity of the MS-SP ring. The same total ring capacity as an independent MS-SP ring can be realized.

The drawing shows the SS transfer and the switching against a primary node failure. This is equivalent to
the link and ring failures occurring at the same time.

1. SNCP ring ~ SNCP ring ILR
The ILR arrangement of SNCP rings is shown above; the PPS setting at the ILR node is slightly
different from the normal one.
For the transmitting direction (connection from the tributary to the aggregates), branching onto the CW and
CCW fibers is not applied; the traffic is sent in one direction only.
For the traffic received from the aggregate lines, drop-and-continue towards the other ILR node is applied.

1. On the SNCP ring, the input traffic from a link (tributary) line is transmitted in only one direction: the
first link to CCW and the second link to CW.
Therefore, depending on the mode of a double failure on the ring and the link, there are cases where the
path cannot be protected. The above drawing explains the protected and unprotected situations.
2. The MS-SP ring does not have this inconvenience.

1. The ILR between an SNCP ring and an MS-SP ring is also possible. As shown above, each ring
sets its own ILR connection.
2. The drawing shows only the on-working type ILR for the MS-SP ring, but the on-protection type ILR can be used
without affecting the setting on the SNCP ring side.

Objective: To enable the trainee to enumerate the INC-100MS management functions:
Configuration Management
- MO Registration, Modification.
- NE Detail Setup.
- NE SC Data Upload/Download.
- Data Base Verification.
- NE Firmware Download.
Fault Management.
Maintenance Procedures.
Performance Management.
Security Management.
System Management, through:
- Starting and Stopping the Server and Client.
- Server Redundancy Switching.
- MIB (Server Database) Backup, Control.
- Other Management Operations.


In this section, the use of the INC-100MS Network Node Manager (NNM) Graphic User Interface on the
Client will be explained, for the operator's ease of use and time-effective management. Included in this
section are:
NNM Screen Layout.
Layout of the initial NNM screen elements and their functions.
Window Navigation.
Explanation of the various elements common to graphic windows, such as menu bars, tool bars,
adjusting their size, etc.
Window Tool Bar and MO Symbols.
Explanation of the unique tool bar buttons in the Network Diagram and the various MO symbols
used for graphic system access.
Symbol Location Setup.
Setting user preferences specifying the location of MO graphic symbols in the multi-layered
hierarchy of the various graphic windows.


Before putting the INC-100MS in service, all objects in the customer's SDH network must be registered as
Managed Objects (MO) in the Server's Management Information Base (MIB). Providing all of this data to
the MIB is done through the various Configuration Management functions. Procedures include:
MO Registration.
MO Modification and Deletion.
Network Map Update.
Path Creation.
Path Information Retrieval.
Registration of the MOs creates data entries in the MIB database, with each item linked to a graphic
symbol on the NNM Client interface for ease of operator control and access using mouse point-and-click
functions.
The MOs are arranged in a hierarchical order, from the largest MO down to the smallest. The lower level MOs are nested
within the management area of the higher order MOs. The order of their registration will be presented on the following
pages.


Event Journal: Displays up to 1,000 current events in real time. The list is discarded upon closing,
and a new one reflecting the latest events is created each time the window is reopened.
Alarm Status: Displays the status of events, including the time of occurrence and subsequent clearance (even
after deleting the alarm occurrence line). If an alarm is terminated, the recovery notification deletes the
occurring alarm.
Current Alarm Copy: Alarm registries can be transferred/updated from an NE or Server to the Client.
This is only necessary when a mismatch between NE/Server or Server/Client is found.
Alarm Inhibit: Alarm notification can be inhibited for MOs such as Domains, Offices, and NEs. Alarms will
still be sent from the inhibited object, but the Server will discard them upon receipt. This is useful for
items under testing prior to commissioning.
Event Log: Displays the most recent 10,000 items in the event log (Alarm/Event history) of a Server. This log can only be
deleted by the Administrator (from version 1.8).


Current PM information retrieves the performance monitoring information stored in the NE for display.
The NE monitors all traffic interfaces continuously and stores the results in memory. Information is monitored
in each frame (8,000/second) and averaged and stored every 15 minutes.
The information recorded every 15 minutes is then averaged and stored in a separate memory
every 24 hours.
The capacity of these 2 memories is the current period plus the previous 32 periods for the 15-minute registers (= 8
hours) and the current and previous 7 days (one week) for the 24-hour memory.
When accessing the Current PM information from the NE, the operator can select to view the information in
either 15-minute or 24-hour increments, referred to as granularity (both memories are retrieved
simultaneously; the selection only zooms the display in or out, hence the term granularity).
Whenever a problem needs to be monitored for a period longer than one week, the operator can create a
Long-Term PM schedule, which can be set up to monitor only a specific object. Information is then
automatically retrieved from the NE and stored in the Server for later examination by the operator, again at
either 15-minute or 24-hour granularity.
As many as 50 objects can be scheduled for Long-Term PM monitoring at the same time, and once the
situation is resolved, the schedule can be deleted. A schedule can be set to run for a period as long as 3 years.
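The register scheme described above (a current 15-minute bin plus the previous 32, i.e. 8 hours of history, and a current 24-hour bin plus the previous 7 days) can be pictured with a small sketch. The class below is illustrative only and is not the INC-100MS or NE API.

```python
# Illustrative model of the NE performance-monitoring registers described
# above. Not a real API; only the bin counts follow the text.

from collections import deque

class PmRegisters:
    def __init__(self):
        self.current_15m = 0
        self.history_15m = deque(maxlen=32)   # previous 32 quarter-hours = 8 h
        self.current_24h = 0
        self.history_24h = deque(maxlen=7)    # previous 7 days = one week

    def count_errors(self, errors=1):
        self.current_15m += errors
        self.current_24h += errors

    def close_15m_period(self):
        self.history_15m.append(self.current_15m)
        self.current_15m = 0

    def close_24h_period(self):
        self.history_24h.append(self.current_24h)
        self.current_24h = 0

if __name__ == "__main__":
    pm = PmRegisters()
    pm.count_errors(3)
    pm.close_15m_period()
    print(list(pm.history_15m), pm.current_24h)   # [3] 3
```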


Assigning User IDs and Privileges.
Client management and Server restrictions can be specified for the operators using the security
management function.
Restriction setting is performed by creating different restriction groups or classes, then assigning
individual operators to a group.
The same user can be assigned different restrictions/access privileges for different Domains (from
Version 1.8).
Default Security Access Levels:
All Applications - For unrestricted (manufacturer) maintenance.
Administrator - All operations.
System - All operations except operator security management and stopping INC-100MS.
Maintenance - Reading and maintenance operations.
Casual - Reading information only.
Customized Security Access Level:
The Administrator can create a specialized access set, with specific permissions/denials as needed.


As well as using the INC-100MS for managing the SDH networks, certain operations must be carried out
on the INC-100MS system itself to guarantee its availability and the integrity of the data that it manages and
gathers from the SDH network. Among these operations are such actions as:
Checking the Server (pair) status and availability.
Assigning regular/standby identities in redundant systems.
Manual redundancy switching:
- For software upgrades.
- For maintenance procedures.
Database (MIB) administration, including:
- Database backup (manual and scheduled).
- Database copying.
- Database restoration.
- Status views.
- NE connectivity.
System Time Set.
Map Tool.
Symbol Location Setup.
Running/Stopping the Server.


From the Main Menu, other operation functions are supported as shown on the following pages (select
from the View, System, or Utility menus).

DCC- Data Communication Channel

Here we must consider the telecommunications network as a whole. In the area of the subscriber
network nodes, the users are connected to the exchanges (DSC) via the user network interface (UNI).
Instead of these central switching points, local cross-connects (DXC) can be used. In a PDH network a
fixed network is formed by point-to-point links, and the channels are switched via these links. Signals
from other networks use this transmission technology via flexible multiplexers up to 2 Mbit/s.
The growth in data traffic is much higher than in voice communication. The greatest demand lies in the
area of high bit rate access from the subscriber area. Such transmission capacity should be available at
a reasonable cost and on short notice. Terminal multiplexers (TM) with diverse interfaces feed this
traffic into the SDH network directly or via Add and Drop Multiplexers (ADM), which are configured in a
ring network (backbone). This ring is formed by two optical fiber cables with various back-up switching
possibilities. The network management (TMN) sets up the necessary connections.

The SDH transmission network is split up into different sections:
The Regenerator Section, between regenerators.
The Multiplex Section, between multiplexers or cross-connects.
The Path, between the termination points.

Basics of optical DWDM systems


At the termination, we feed four 2.5 Gbit/s signals to four optical transmission modules. The optical output signals are
converted if necessary to defined wavelengths in the 1550 nm window using wavelength transponders. This makes it possible
to use existing standard transmission modules with wavelengths in the 1310 nm or 1550 nm band. Using an optical WDM
coupler, the four optical signals are bunched together and forwarded to an optical fiber amplifier (OFA). Depending on the path
length, one or more fiber amplifiers boost the optical signal, which is attenuated due to the fiber loss. In many cases, a booster
is also used after the WDM coupler. At the termination on the receiving end, it is common to preamplify the optical signals,
then separate them using optical filters and convert them to electrical signals in the receiver modules. This entire arrangement
must be duplicated in the opposite direction to carry the signals in that direction.


The XDM family includes a number of shelves to suit particular applications:
XDM-100: MADM-1, ADM-4, MADM-4, ADM-16, MADM-16
XDM-200: CWDM 8 ch 2.5G
XDM-400: MADM-4, ADM-16, MADM-16, LH in-line
XDM-500: ADM-16, MADM-16, limited ADM-64, limited DWDM
XDM-1000: ADM-16, MADM-16, ADM-64, MADM-64, DWDM
XDM-2000: HO with no electrical I/Fs, ADM-16/64, MADM-16/64, DWDM

1. Simple: add or replace plug-in modules. This can be performed while the system is in operation,
without affecting traffic in any way.
2. Optimization of aggregate module assignment. Two aggregate modules are associated with each
Main Cross-connect Control (MXC) card. Each module supports a bandwidth of up to 2.5 Gbps.
3. Optimization of tributary I/O slot assignment. Eight slots can accommodate different I/O modules
(PIM, SIM and EIS-M).
4. In-service scalability of SDH links. An optical connection operating at a specific STM rate can be
upgraded from STM-1 to STM-4 or STM-16.
5. The XDM-100 supports mesh, ring, star, and linear topologies. All system configurations are controlled by
a single network management system with end-to-end service provisioning, from E1 to STM-16 to
DS-3.
6. The XDM-100 can be configured to operate as:
Single ADM/TM
Multi-ADM/TM

SDM-1 Main Features
1. Automatic path protection switching
2. Automatic performance monitoring
3. NVM (Non-Volatile Memory) card for software and configuration back-up
4. Dual input power
5. Variety of timing sources
6. Alarm in/out connectors

XDM-100 Traffic I/O Interface Modules
1. High-order transmission paths for high-order and low-order sub-networks and for IP networks (for
example LAN-to-LAN connectivity: GbE to GbE)
2. Leased lines at various bit rates, from 2 Mbps up to 2.5 Gbps
3. Data and other digital services.
The XDM-100 also supports a wide range of I/O interfaces and Ethernet Layer 2
services, enabling its deployment in transmission networks, including:
E1 (2 Mbps asynchronous mapping)
DS-3 (45 Mbps)
STM-1 (155 Mbps) electrical interface; STM-1 (155 Mbps) optical interface
STM-4 (622 Mbps); STM-4c (ATM/IP 622 Mbps)
STM-16 (2.5 Gbps); STM-16c (ATM/IP 2.5 Gbps)
10/100BaseT
Gigabit Ethernet (GbE)


FOR OUTPUT POWER LEVEL
OPTICAL METER, LAPTOP WITH LCS SOFTWARE FOR ADM, HARDWARE KEY.

FOR RECEIVER SENSITIVITY
OPTICAL METER, OPTICAL ATTENUATOR, LAPTOP WITH LCS SOFTWARE FOR ADM, HARDWARE KEY, SDH ANALYZER.

FOR 2Mb CLOCK
LAPTOP WITH LCS SOFTWARE FOR ADM, HARDWARE KEY, FREQUENCY COUNTER, 15-PIN D CONNECTOR WITH CABLE FOR TESTING AT ADM.

FOR 10 MIN. BER WITH FREQ. OFFSETS, O/P JITTER, JITTER TOLERANCE, JITTER MAPPING, POINTER JITTER, BER TEST (48 HRS.)
LAPTOP WITH LCS SOFTWARE FOR ADM, HARDWARE KEY, SDH ANALYZER.

FOR RETURN LOSS MEASUREMENTS
NETWORK ANALYZER (E5100) WITH CAL KIT, 75-120 BALUN, DDF CABLES.

FOR PULSE MASK, INSERTION LOSS MEASUREMENTS
SUNSET E-8 TESTER.


OPTICAL Tx POWER
CONNECT THE LAPTOP TO THE ADM AND LOG ON TO THE ADM IN ADMINISTRATION MODE.
CONNECT THE POWER METER TO THE Tx PORT OF THE ADM (AS SHOWN IN SLIDE NO: 1).
ENTER THE HARDWARE MENU.
IN THE HARDWARE MENU ENTER TEST & MAINTENANCE (AS SHOWN IN OPERATE1, OPERATE2).
SEE THE LASER STATE OF EACH STM-1 CARD AND SET FORCED LASER ON.
CONNECT THE Tx PORT OF THE ADM TO THE OPTICAL METER AND NOTE THE OPTICAL POWER READING.

OPTICAL Rx SENSITIVITY
CONNECT THE DTA AND THE OPTICAL ATTENUATOR.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN SETTINGS.
INCREASE THE ATTENUATION UNTIL ERRORS OCCUR IN THE DTA; NOTE THE ATTENUATION ON THE ATTENUATOR AS (AT1).
INCREASE THE ATTENUATION FURTHER UNTIL THE ERROR RATE REACHES 1E-6 (AS SHOWN IN OPTSEN2); NOTE THE ATTENUATION ON THE ATTENUATOR AS (AT2).
CONNECT THE ATTENUATOR OUTPUT TO THE OPTICAL METER AND NOTE THE OPTICAL POWER READING (OPT1).

TOTAL OPTICAL SENSITIVITY = OPT1 - (AT2 - AT1)
Ex: 41 - (39 - 37.5) = 39.5
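The sensitivity arithmetic can be wrapped in a one-line helper. The function below implements the formula exactly as written in the procedure (OPT1 minus the extra attenuation AT2 - AT1), with the worked numbers from the example; all values are in dB/dBm as read from the instruments.

```python
# Helper implementing the receiver-sensitivity formula as given in the
# procedure above: sensitivity = OPT1 - (AT2 - AT1).

def rx_sensitivity(opt1, at1, at2):
    """opt1: optical power reading; at1/at2: attenuator settings in dB."""
    return opt1 - (at2 - at1)

if __name__ == "__main__":
    print(rx_sensitivity(opt1=41, at1=37.5, at2=39))   # 39.5, as in the worked example
```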

CLOCK FREQUENCY
THE ADM 2Mb CLOCK OUTPUT IS AVAILABLE ON THE ADM VIA A 15-PIN CONNECTOR.
ON THAT CONNECTOR THE 1ST PIN IS GROUND AND THE 13TH PIN IS THE OUTPUT.
CONNECT THE ADM TO THE FREQUENCY COUNTER.
TO GET THE CLOCK, THE OPTICAL FIBRE BETWEEN THE TWO ADMs SHOULD BE CONNECTED.
DEFINING THE SYNCHRONOUS CLOCK IS SHOWN IN OPERATE, CLOCK1, CLOCK2, CLOCK3.
ON THE FREQUENCY COUNTER WE CAN READ THE CLOCK OUTPUT OF 2048 kHz ±0.005.

PULSE MASK MEASUREMENT
CONNECT A BNC PATCH CORD TO THE DDF POI Rx SIDE.
CONNECT THE BNC CONNECTORS TO THE L1 Rx PORTS ON THE E-8 TESTER (MARKED AS A & B ON EACH PORT).

PARAMETERS TO BE CHECKED
PULSE WIDTH (ns)
RISE TIME (ns)
FALL TIME (ns)
OVERSHOOT (%)
UNDERSHOOT (%)
LEVEL (dB)

Here we see the ITU-T definition.
After you have read this several times you will come to the conclusion that this is best explained
graphically.

The graph first shows an ideal pulse train, i.e. periodic transitions equally spaced in time.
Jitter is caused when these pulses deviate from these ideal positions in time.
This phase deviation is plotted as a function of time, and this is the jitter amplitude of the signal. Phase lead is
assigned a positive amplitude and phase lag a negative amplitude.
It is worth noting that when we talk about jitter we are talking of phase variations of 10 Hz and above.
Phase variations below 10 Hz are called wander. We will talk some more about wander later in the
presentation.

The plotted jitter amplitude is measured in Unit Intervals (UI).
One UI can be considered as a phase deviation of one clock cycle in the signal.
Consider the above example, which illustrates 0.5 UI of jitter. At first it might seem that the phase
deviation is only 1/4 of a clock cycle, but consider the total phase variation, both positive and negative,
and you will see the phase variation equals 1/2 of a clock cycle, i.e. 0.5 UI.
Demonstration of UI on the oscilloscope.

Jitter can be generated by circuit components. For example, oscillators are widely deployed in telecom
circuit design, and these circuits all have a phase noise characteristic depending on the quality of the
design and the components used.
Also, where logic circuits interface, jitter can be produced due to transition thresholds. These introduce
small phase variations between logic circuits.

This slide shows a typical transmission path starting in the PDH world, possibly carrying voice traffic,
possibly data, maybe digitised video traffic.
The symbols represent bi-directional devices, since we normally need transmission in both directions.
The Add/Drop muxes at the ends of the path are therefore performing both a multiplex and a demultiplex
function between PDH and the SDH or SONET world. The digital cross-connect mux allows the
tributaries to be routed onto other similar paths so that the network can be electronically configured
rather than hard-wired to a fixed configuration.
The boxes with R in them represent regenerators in the optical transmission paths, which are used to
increase the signal-to-noise ratio of the signal when transported over long distances.
In general we don't see much jitter on the signals transmitted in SDH or SONET, but jitter is imparted to
the PDH traffic by indirect effects.

To test all these types of jitter different classes of jitter tests have evolved.
Here is a summary of the different types of jitter tests used in SONET/SDH network.

Let's now look at how each test would be performed, starting with jitter tolerance.
Jitter tolerance is a measure of how well an interface can tolerate data with applied jitter before the
interface can no longer accept the data error-free.
Two types of test templates (or jitter masks) are used for jitter generation.
The first is the standard Bellcore/ITU-T mask, which is applied to the interface. The interface must be
able to accept this jitter mask and remain error free.
The second mask applies the maximum amount of jitter possible to the interface before errors occur.
This is called the maximum permissible jitter and is now the preferred jitter tolerance test.

The next important jitter test is jitter transfer. This is a measure of the jitter gain across a network
element, typically a regenerator or repeater.
Jitter transfer is the ratio of the measured jitter at the output to the jitter applied at the input, usually
expressed in dB as 20 log10(Jout/Jin) at each jitter modulation frequency.

Jitter transfer is most used to characterise a network element and ensure it does not contribute
significant amounts of jitter to the network.
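A minimal helper for that ratio is shown below; the pass/fail mask against which the result is judged comes from the relevant Telcordia/ITU-T recommendation and is not reproduced here.

```python
# Jitter transfer expressed as gain in dB: 20*log10(output jitter / input jitter),
# evaluated at each jitter modulation frequency.

import math

def jitter_transfer_db(jitter_in_ui, jitter_out_ui):
    return 20.0 * math.log10(jitter_out_ui / jitter_in_ui)

if __name__ == "__main__":
    # Example: 1.0 UI applied, 0.9 UI measured -> about -0.9 dB of jitter gain.
    print(f"{jitter_transfer_db(1.0, 0.9):+.2f} dB")
```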

Output jitter is perhaps the simplest jitter measurement, but it is most important that the correct
measurement filters etc. are used. It is the measurement of jitter at the output of the network. No
stimulus is applied by the test equipment. Tests of output jitter from individual network elements or
components (often termed jitter generation) are also performed in development or production.

CONNECT THE DTA.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN FIGURE SETTINGS.
THE TESTING DURATION WILL BE ONE MINUTE; FOR EACH E1, TWO FILTERS (LP + HP1, LP + HP2) ARE TO BE USED, AND THE JITTER VALUE IN TERMS OF
PEAK-TO-PEAK IS TO BE NOTED (AS SHOWN IN MPJTR1, MPJTR2).

JITTER TOLERANCE TESTING

CONNECT THE DTA.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN SETTINGS.
START TESTING BY PUTTING THE DTA IN THE TRANSMITTER MENU (AS SHOWN IN JRTL).
AFTER THE GRAPH IS FINISHED, ENSURE THAT THE TEST GRAPH IS ABOVE THE REFERENCE LINE.

OUTPUT JITTER TESTING

CONNECT THE DTA.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN SETTINGS.
START TESTING BY PUTTING THE DTA RESULT MENU IN JITTER MODE (AS SHOWN IN OPJTR 1, OPJTR 2).

Here we simulate pointer movements generated from NE1 in the last test. We use them to check
that a Line Terminal or ADM can extract a tributary signal with acceptable jitter in the presence of pointer
movements.
A pointer movement at STM-1 indicates a 3-byte, or 24-bit, movement of the payload. The resulting spike
of 24 UI of jitter would send any PDH network into spasm. NEs have buffers which, hopefully, help smooth out this
spike to acceptable levels.
There are specific pointer movement sequences which simulate worst-case conditions on the network; they can be
found in ITU-T G.783.

For wander measurement we are not able to create a jitter-free reference using the narrowband PLL that
we looked at in the receiver, because the loop time constants would be impossibly slow. However, we
can perform measurements relative to an externally supplied reference clock signal.
We could use the same phase detector as is used for jitter measurement, but the amplitudes of wander
are many orders of magnitude greater than those encountered as jitter. So, rather than prescaling the
PSD with larger dividers in order to achieve the amplitude range, it is more convenient to measure wander
in the time domain, in a similar way to HP's time interval analyzer. This is done by sampling the
wandered clock signal and assessing, piecewise, what phase step is implied by the small offsets in
frequency in each measurement sample. By adding the little delta-phase steps together we can
reconstruct the wander signal.
The slide shows a half sine wave being reconstructed piecewise.
A further complication to note is that we need to exclude the small frequency changes caused by phase
jitter, as these would cause aliased jitter waveforms to be reproduced on top of our reconstructed wander.
To filter out the jitter we use a similar narrowband PLL to that used in the jitter receiver.
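The piecewise reconstruction described above amounts to summing the small phase increments implied by the per-sample frequency offsets. The sketch below shows that idea with purely synthetic input data; one cycle of clock offset corresponds to one unit interval (UI) of phase.

```python
# Piecewise reconstruction of a wander waveform from per-sample frequency
# offsets: each sample's offset implies a small delta-phase step, and summing
# the steps rebuilds the phase (wander) signal. Input data are synthetic.

import math

def reconstruct_wander(freq_offsets_hz, sample_period_s):
    """Cumulative sum of delta-phase steps; returns phase in unit intervals (UI)."""
    phase_ui, out = 0.0, []
    for df in freq_offsets_hz:
        phase_ui += df * sample_period_s      # delta-phase = offset * measurement interval
        out.append(phase_ui)
    return out

if __name__ == "__main__":
    # Synthetic half-sine frequency offset, sampled every 0.1 s.
    offsets = [0.5 * math.sin(math.pi * i / 50) for i in range(50)]
    wander = reconstruct_wander(offsets, sample_period_s=0.1)
    print(f"peak wander ~ {max(wander):.3f} UI")
```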

BER is measured as a function of the signal-to-noise ratio. This is a means of comparing different modulation schemes;
the dominant noise contributions may not be in the receiver (source noise or channel noise). Usually, SNR is
proportional to Q. In optical systems, we measure BER as a function of mean received optical power (ROP),
the quantity measured by a power meter, if the dominant noise source is thermal noise contributed by the
receiver and the extinction ratio is high (i.e. P1 >> P0).
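For the common case of Gaussian receiver noise, the BER-versus-Q relationship is BER = 0.5·erfc(Q/√2); the snippet below evaluates it for a few Q values (Q of about 6 corresponds to a BER of roughly 1e-9).

```python
# BER as a function of the Q factor for Gaussian receiver noise:
# BER = 0.5 * erfc(Q / sqrt(2)).  Q ~ 6 gives roughly 1e-9.

import math

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2.0))

if __name__ == "__main__":
    for q in (4, 5, 6, 7):
        print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
```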

Identifying Causes of Signal Degradation

A BER measurement cannot tell you what physical mechanism resulted in a particular degradation of the received signal. In
general this depends upon what is in the system under test. However, BER measurements can be very helpful in determining
what type of degradation is occurring. Other measurements (e.g. eye patterns) should also be performed for corroboration.
A general approach to identifying sources of degradation: if you suspect that you know the cause, and you can isolate or eliminate
it, then do so and repeat the BER measurement.
E.g. if you suspect crosstalk in a WDM transmission system, switch off all the channels except the one you are measuring, and
interpret the BER curves.

Extinction Ratio Degradation

The extinction ratio (ER) of an optical OOK signal is the ratio of the average power in a 1 to the average power in a 0.
Ideally ER is infinite, i.e. P0 = 0.
ER degradation by itself causes a shift in the BER curve.
You can think of the power in the 0s as being wasted; the resulting optical power penalty depends on r = P0/P1.
e.g. r = 0.2, PP = 1.76 dB
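With r = P0/P1 (= 1/ER), the standard extinction-ratio power penalty for a thermal-noise-limited receiver is PP = 10·log10((1+r)/(1-r)); the snippet below reproduces the 1.76 dB figure quoted for r = 0.2.

```python
# Extinction-ratio power penalty for a thermal-noise-limited receiver:
# r = P0/P1 (= 1/ER), penalty = 10*log10((1 + r) / (1 - r)).

import math

def er_power_penalty_db(r):
    return 10.0 * math.log10((1.0 + r) / (1.0 - r))

if __name__ == "__main__":
    print(f"r = 0.2 -> penalty = {er_power_penalty_db(0.2):.2f} dB")   # ~1.76 dB
```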

OFFSET BER TEST SETTINGS

CONNECT THE DTA.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN SETTINGS.
START TESTING BY PUTTING THE DTA RESULTS IN PDH RESULTS CUMULATIVE (AS SHOWN IN OFSTBER 1, OFSTBER 2, OFSTBER 3).
EACH TEST IS TO BE RUN FOR UP TO 10 MINUTES, WITH +50 PPM, 0 PPM AND -50 PPM OFFSET RESPECTIVELY.

BER TEST SETTINGS (48 Hrs.)

CONNECT THE DTA.
PREPARE THE SETTINGS IN THE DTA AS SHOWN IN SETTINGS.
START TESTING BY PUTTING THE DTA RESULTS IN PDH RESULTS CUMULATIVE (AS SHOWN IN BER).
THE TOTAL TESTING TIME IS 48 Hrs.

Telcordia and ITU-T define a hierarchy of alarm conditions. Full compliance testing of an NE's alarm
handling entails verifying the alarm detection and de-activation thresholds, plus the appropriate
responses. This must be done for each alarm supported by the NE.
To aid the rigorous test of an alarm detection/de-activation threshold, the OmniBER OTN generates a
precise 3-stage alarm on/off sequence. The sequence consists of:
A user-programmable starting state (alarm on/off).
The test condition, which is a single burst of the alarm on or off (opposite to the starting state). The duration
of this pulse is either equal to or just below the threshold.
A repeating on/off sequence designed to hold the NE in the alarm state it enters as a result of the test
condition. The on/off sequences are programmed to be below the expected alarm detection and de-activation thresholds.
For example, let us look at Line-AIS. An NE must enter the Line-AIS state on receiving 5 consecutive
frames containing the Line-AIS signal and exit on receiving 5 consecutive frames without it.
First check that the NE does not enter AIS when only 4 consecutive frames are received. Then send a
holding pattern of 4 on and 1 off to check that the state is not entered.


On threshold test sequence.
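As an illustration of this on-threshold test, here is a small model (assumed behaviour, not vendor code) of the consecutive-frame rule together with a generator for the below-threshold sequence just described. With a correctly implemented 5-frame threshold the modelled NE never enters the alarm state:

```python
# Illustrative model of the consecutive-frame alarm rule and the 3-stage
# on/off test sequence: starting state, one burst just below the detection
# threshold, then a 4-on / 1-off holding pattern that should keep the NE
# out of the alarm state if its thresholds are implemented correctly.

def ne_alarm_states(frames, set_threshold=5, clear_threshold=5):
    """frames: iterable of booleans (True = alarm signal present in that frame).
    Yields the NE alarm state after each frame, using consecutive-frame counts."""
    in_alarm = False
    consecutive_on = consecutive_off = 0
    for present in frames:
        if present:
            consecutive_on += 1
            consecutive_off = 0
        else:
            consecutive_off += 1
            consecutive_on = 0
        if not in_alarm and consecutive_on >= set_threshold:
            in_alarm = True
        elif in_alarm and consecutive_off >= clear_threshold:
            in_alarm = False
        yield in_alarm

def below_threshold_sequence(burst_len=4, hold_on=4, hold_off=1, hold_cycles=10):
    """Starting state 'alarm off', a single burst just below the detection
    threshold, then a repeating hold pattern whose on-runs and off-runs both
    stay below their thresholds (written off-first so the burst run of 'on'
    frames is not extended by the hold pattern)."""
    seq = [False] * 5                     # programmable starting state: alarm off
    seq += [True] * burst_len             # test condition: 4 frames, just below 5
    seq += ([False] * hold_off + [True] * hold_on) * hold_cycles
    return seq

if __name__ == "__main__":
    states = list(ne_alarm_states(below_threshold_sequence()))
    # With a correct 5-frame threshold the NE never declares the alarm.
    print("alarm ever declared:", any(states))
```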

Moving on from these tests, SDH elements have built-in performance monitors for the regenerator (B1), multiplexer (B2) and path (B3) sections. They allow the network management to carry out preventative maintenance and must be checked at installation. They are also used by maintenance technicians and management to sectionalise faults in the network.

To test the performance monitors, we generate errors in the B1/B2/B3 parity bytes and check that the NE counts them and sends a valid REI in the return path.
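The B1/B2/B3 bytes are bit-interleaved parity (BIP-8) values, so the check the NE performs reduces to an XOR over the monitored block. The sketch below shows the parity calculation and the violation count that would be returned as REI; scrambling and the exact monitoring scope of each byte are omitted:

```python
# Sketch of the BIP-8 calculation behind B1/B2/B3: each bit of the parity
# byte gives even parity over the same bit position of every byte in the
# monitored block, which reduces to XOR-ing the bytes together.
from functools import reduce

def bip8(block: bytes) -> int:
    """Return the BIP-8 value for a block of bytes."""
    return reduce(lambda acc, b: acc ^ b, block, 0)

def count_bip_errors(expected: int, received: int) -> int:
    """Number of BIP violations = number of differing bits; this is the
    count an NE reports back as REI (Remote Error Indication)."""
    return bin(expected ^ received).count("1")

if __name__ == "__main__":
    frame = bytes(range(256)) * 4                 # arbitrary stand-in for a frame
    b = bip8(frame)
    corrupted = b ^ 0b00010010                    # inject two parity-bit errors
    print(f"BIP-8 = {b:08b}, errors detected = {count_bip_errors(b, corrupted)}")
```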

There are also a number of alarms that can be monitored. Again, please note that there are sets of alarms for each section of the network, that is, regenerator, multiplexer and path.
The various NEs each respond to the alarms in a different way. Regenerators terminate the regenerator section overhead only, so they respond to RSOH alarms and alarm conditions.
DXCs terminate and regenerate the regenerator and multiplexer section overheads, so they respond to RSOH and MSOH alarms and conditions.
Line terminals and ADMs act as path and line terminals. They terminate and regenerate all areas of the overhead, so they respond to alarms and alarm conditions in all areas of the overhead.

Typical tests might be as shown here. Both upstream and downstream signals are tested.

Most of the tests already mentioned would be carried out during installation and commissioning. Similar tests can be carried out for preventative maintenance by monitoring the network in-service.

An example of Multiplexer Section Protection (MSP) is shown here.


The terminal on the left transmits a standby signal on a separate fiber. At the far end, the receiving NE monitors the primary signal using the B2 byte. When the number of errors reaches a specified threshold, the NE switches to the standby line.
This is known as 1+1 protection switching. A variant on this is 1:n protection, where a number of working lines are protected by a single standby line.

We can check MSP by monitoring the B2 for errors or changing the messages in the K1/K2 bytes.
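A minimal sketch of the B2-threshold switching decision described above is shown below. The signal-degrade threshold used here is an assumption for illustration; the real threshold is provisioned on the NE:

```python
# Minimal sketch (assumed threshold, not from any standard) of the 1+1 MSP
# decision: count B2 errored blocks on the working line and switch to the
# standby line when the errored-block ratio crosses a degrade threshold.
def select_line(b2_errored_blocks, total_blocks, degrade_ratio=1e-6):
    """Return 'working' or 'standby' based on the B2 errored-block ratio."""
    ratio = b2_errored_blocks / total_blocks if total_blocks else 0.0
    return "standby" if ratio > degrade_ratio else "working"

if __name__ == "__main__":
    print(select_line(0, 8000))       # clean line  -> working
    print(select_line(50, 8000))      # degraded    -> standby
```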

Previously, when you were using the OmniBER for passive automatic protection switching testing, a generated fault caused the network element to register a fault condition.
The network element then sent out a request to switch along the protection channel, which basically says that it has registered a fault and is looking for verification of an alternative channel it could use.
The OmniBER did not respond. The lack of response sent the network element into an oscillatory mode where it would send request-to-switch signals down each channel in turn, asking if they could be used, without ever receiving a response.
Now, with active automatic protection switching, the network element registers a fault in the same way and sends out the same request to switch. The difference is that the OmniBER can now respond intelligently by sending a confirmation signal and transferring the traffic signal along the fault-free protection channel. The network element is then in no doubt about which channel can be used, thus avoiding the undesirable oscillatory mode. That is for uni-directional networks.
For bi-directional networks, the same request-to-switch signal is sent to the OmniBER from the network element. The OmniBER again responds intelligently by sending the confirmation signal and the traffic signal, but in this type of network the OmniBER also sends a request to switch the traffic signal from the other, fault-free service channel onto the other protection channel.


It's critical for your customers to be able to test how quickly their designs respond to error or alarm conditions.
The OmniBER OTN can be used to apply such conditions. The response of any device under test is fed back to the OmniBER OTN. This in turn triggers a timing device (such as an oscilloscope). The timing device will determine just how quickly the device responds.
The OmniBER OTN has a very long list of triggers for SONET, SDH and OTN. These can be found in the technical specification. I'll show you how to find the technical specification and other useful information later.

The key defined overhead bytes carry important information. A new Label page provides fast access to the S1 synchronisation status byte carrying synchronisation messages, to the C2 byte carrying the high-order path payload information and to the V5 signal label carrying the low-order path payload structure. A textual setup and decode of these bytes is provided, meaning technicians do not have to carry pages of byte decodes from ITU standards when out installing networks.
In the same manner, APS (Automatic Protection Switching) messages are decoded and displayed in a single page. This complements the existing transmit function, which provides message-based setup of linear (G.783) and ring (G.841) topology APS messages.
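As an example of the kind of textual decode the Label page provides, here is a sketch for the S1 synchronisation status byte using the standard SDH SSM codes (carried in the lower four bits of S1). A decode of C2 or V5 would be a similar table lookup:

```python
# Sketch of a textual decode of the S1 synchronisation status byte.
# The SSM codes listed are the standard SDH values; anything else is
# reported as reserved/unknown.
SDH_SSM = {
    0b0000: "Quality unknown",
    0b0010: "G.811 PRC",
    0b0100: "SSU-A (G.812 transit)",
    0b1000: "SSU-B (G.812 local)",
    0b1011: "SEC (G.813)",
    0b1111: "Do not use for synchronisation (DNU)",
}

def decode_s1(s1_byte: int) -> str:
    ssm = s1_byte & 0x0F                    # SSM lives in the low nibble of S1
    return SDH_SSM.get(ssm, f"Reserved/unknown SSM code {ssm:04b}")

if __name__ == "__main__":
    for value in (0x02, 0x0B, 0x0F, 0x07):
        print(f"S1 = 0x{value:02X}: {decode_s1(value)}")
```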

FEC testing is a key part of the OmniBER OTN. It can generate a structured OTN frame and, before transmitting it, add errors after the FEC has been calculated, which accurately simulates a real-world networking environment.

The FEC-compliant device under test should be able to recognise and correct these errors until it starts passing errors through as the OmniBER OTN's error rate is increased. This test gives the user the ability to identify error conditions that cause the DUT to pass uncorrectable FEC errors, and can be useful in validating the FEC functionality in new designs.
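The standard G.709 FEC is RS(255, 239), which corrects up to 8 symbol (byte) errors per codeword, so a toy model of this test looks like the sketch below. It is a simplification: it ignores how the injected errors are distributed across the 16 interleaved codewords.

```python
# Toy model of the error-injection test, assuming G.709 RS(255, 239) FEC:
# injected errors are corrected until the per-codeword error count exceeds
# t = 8 symbols, at which point the DUT starts passing errors through.
T_CORRECTABLE = 8   # (255 - 239) / 2 symbol errors per RS(255,239) codeword

def dut_output_errors(injected_errors_per_codeword: int) -> int:
    """Errors expected at the DUT output for a given injection level."""
    if injected_errors_per_codeword <= T_CORRECTABLE:
        return 0                                   # fully corrected
    return injected_errors_per_codeword            # uncorrectable: passed through

if __name__ == "__main__":
    for n in (1, 8, 9, 16):
        print(f"injected {n:2d} symbol errors/codeword -> output errors {dut_output_errors(n)}")
```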

Earlier in the presentation we observed that FEC:


Improves the BER performance of an existing link.
It increases the maximum span of a link, optimizing span engineering parameters.
And it improves the overall quality of the link by diagnosing link problems earlier

FEC code generation is a 5-step process. I'll describe all of the steps on this slide and then we'll take one step at a time.
First let me say that the FEC code is generated one OTN frame row at a time.

In Step 1: Each individual frame row is de-muxed into 16 individual sub-rows before FEC is generated.

In Step 2: Blank FEC bytes are added to each sub-row.

In Step 3: The 16 sub-rows are independently connected to 16 FEC encoders; FEC is calculated and populated into the blank FEC bytes.

In Step 4: The 16 sub-rows are then re-multiplexed to the original row with the addition of the newly generated FEC values.

In Step 5: 4 frame rows are FECed to comprise an OTN frame.


In Step 1, each OTN Frame Row (including overhead + payload) is demuxed into 16 individual sub-rows
before FEC is generated.

As you can see in the diagram, every 16th byte in the G.709 Frame Row gets its own parallel sub-row
channel.

In Step 2, 16 blank FEC bytes are added to each of the 16 sub row channels. As you can see in the diagram,
lovely, transparent, empty bytes are tacked on to the end of the sub-row.

In Step 3, the 16 sub-row channels, shown in colours on the left, are independently connected to 16 FEC encoders, where the FEC values (indicated in checks) are calculated and populated into the previously blank FEC byte locations.

In Step 4, the 16 sub-rows are re-multiplexed to reconstitute the original serial OTN frame row, now with the addition of all the newly generated FEC values, as shown in checks.
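A structural sketch of Steps 1 to 4 for a single row follows. The dimensions fall out of the slide (16 sub-rows of 239 data bytes plus 16 FEC bytes each, i.e. a 4080-byte row); the encoder here is only a placeholder standing in for the real RS(255, 239) arithmetic, so the parity values it produces are not valid G.709 FEC:

```python
# Structural sketch of Steps 1-4 for one OTN frame row (4080 bytes =
# 16 interleaved RS(255, 239) codewords). The point is the byte interleaving:
# every 16th byte of the row belongs to the same sub-row.
from typing import Callable, List

DATA_PER_SUBROW = 239      # OH + payload bytes per sub-row
PARITY_PER_SUBROW = 16     # FEC bytes appended per sub-row
SUBROWS = 16

def placeholder_rs_encode(data: bytes) -> bytes:
    """Stand-in for RS(255,239) parity generation (NOT a real encoder)."""
    parity = bytearray(PARITY_PER_SUBROW)
    for i, b in enumerate(data):
        parity[i % PARITY_PER_SUBROW] ^= b
    return bytes(parity)

def fec_encode_row(row: bytes, encoder: Callable[[bytes], bytes] = placeholder_rs_encode) -> bytes:
    assert len(row) == SUBROWS * DATA_PER_SUBROW          # 3824 OH + payload bytes
    # Step 1: de-mux the row into 16 sub-rows (every 16th byte).
    subrows: List[bytes] = [row[i::SUBROWS] for i in range(SUBROWS)]
    # Steps 2-3: append the (initially blank) FEC bytes, filled by the encoder.
    encoded = [data + encoder(data) for data in subrows]
    # Step 4: re-multiplex the 16 sub-rows back into one serial 4080-byte row.
    out = bytearray(SUBROWS * (DATA_PER_SUBROW + PARITY_PER_SUBROW))
    for i, sub in enumerate(encoded):
        out[i::SUBROWS] = sub
    return bytes(out)

if __name__ == "__main__":
    row = bytes((i * 7) % 256 for i in range(SUBROWS * DATA_PER_SUBROW))
    print(len(fec_encode_row(row)))        # 4080 bytes, i.e. one full OTU row
```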

OmniBER 718 Features

- Real-time Wander Analysis
No post-analysis is required; you don't have to wait until the end of the test period before the problem is identified.

- Jitter Intrinsic
Provides more accurate jitter output measurements. Improves throughput.

- Differential Electrical Interfaces
Tests both differential lines simultaneously, which saves reconfiguring and overall test time; halves test time.

- SONET O/H Manipulation (Sequences, Thru-mode)
Ability to inject alarms, errors, jitter etc. into a standard or proprietary test signal.

- Best-in-class Remote Control
The user interface tracks SCPI commands, making it easier and quicker to debug.

- Single Optical Connector
All rates are available from the same optical Tx/Rx ports; no reconfiguration is required to change rates.

- Distributed Network Analyzer

Drawbacks
- It does not support STM-64.
- Next-generation SDH functions are not available.
- For testing DWDM parameters a separate OSA is needed.


NRZ Electrical interfaces:

725mV +/-125mV (fixed)


AC-coupled
50 ohm to ground
SMA connectors
<120ps rise/fall time
<15% overshoot

Test modes:

SONET
SDH
Unframed (raw PRBS)

Jitter testing (to ITU-T & Bellcore):

Jitter tolerance
Jitter generation
Automatic jitter transfer
Automatic output jitter measurement

Ethernet payload analysis


GFP (G.7041)
LAPS (X.86)
Contiguous concatenation at
AU-4-2c/AU-4-3c/AU-4-8c
Mixed mapping generation
SONET/SDH up to 2.5Gb/s
Signal Wizard
Overhead sequence generation and capture
Overhead access to undefined bytes
External trigger outputs
Framed/Unframed operation
Transparent thru mode
Service disruption (Concatenated
and Channelized payloads)
Rear connector version
Alarm stress test
Intrusive thru mode
VT/TU Mappings

SONET/SDH electrical: STM-0e, STM-1e, STS-1 & STS-3


Active APS (K1, K2 emulation)
T-carrier/PDH: E1, E2, E3, E4 & DS-1, DS-3
