A sampled signal contains all the information of the original signal if the sampling frequency is at least twice the highest frequency of the signal to be sampled.
As the analogue signals in telephony are band-limited from 300 to 3400 Hz, and to allow for the actual slopes of the filters used, a sampling rate of 8 kHz is required. After band limiting with a low-pass filter, the analogue signal is sampled and the samples obtained are digitally encoded. The sampling rate recommended by ITU-T G.711 is 8000 samples per second, with 8 bits (one byte) per sample. The time taken for one sample is 125 µs; this period is called one frame.
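The figures above follow directly from the sampling arithmetic; a quick sketch in Python using the values from the text:

```python
# Sampling arithmetic for G.711 voice, as described above.
f_max = 3400                 # highest voice frequency, Hz
f_s = 8000                   # sampling rate, samples/s
assert f_s >= 2 * f_max      # Nyquist criterion, with margin for filter slopes

bits_per_sample = 8
channel_rate = f_s * bits_per_sample   # bit/s per voice channel
frame_time_us = 1e6 / f_s              # frame (sample) period in microseconds
print(channel_rate, frame_time_us)     # 64000 bit/s, 125.0 us
```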
The top rate on fibre optic links is 565 Mbit/s, with each link carrying 7,680 base channels.
A PCM-30 MUX consists of 30 time-division-multiplexed (TDM) voice channels, each running at 64 kbit/s (known as E1 and described by the CCITT G.702/G.703 specifications), plus two additional channels carrying control information. Channels 0 and 16 carry the control information: channel 0 carries the frame alignment signal and channel 16 the non-frame-alignment (signalling) information. The frame comprises 32 channels (time slots) within each 125 µs frame.
Each time slot carries 8 bits (one byte) and occupies 3.9 µs; each bit occupies 488 ns.
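These timeslot and bit durations can be checked with a few lines, using only the numbers above:

```python
# E1 frame timing, as described above.
timeslots = 32
bits_per_slot = 8
frame_us = 125.0

slot_us = frame_us / timeslots                # duration of one timeslot
bit_ns = slot_us * 1000 / bits_per_slot       # duration of one bit
e1_rate = timeslots * bits_per_slot * 8000    # aggregate E1 bit rate, bit/s

print(round(slot_us, 2), round(bit_ns), e1_rate)  # 3.91 us, 488 ns, 2048000 bit/s
```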
In the figure above, two extra bytes are placed in time slot 0 and time slot 16 respectively. The byte inserted in time slot 0 is used for frame alignment; when the PCM-30 MUX is demultiplexed, this byte is used again for frame alignment. The byte in time slot 16 is used in the switch.
The signal of the PCM-30 MUX is called E1 when it carries switching data output from digital switches or transmission equipment within the same station. When the signal is provided to transmission equipment, it is connected to the PCM-30 interface of the SDH hierarchy.
In a PDH multiplexer the individual bit streams must run at the same speed, otherwise the bits cannot be interleaved. The speed of the higher-order side is generated by an internal oscillator in the multiplexer and is not derived from the primary reference clock.
The possible plesiochronous differences are catered for by a technique known as justification: extra bits are added (stuffed) into the digital tributaries, which effectively increases the speed of each tributary until they are all identical.
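A toy illustration of this idea (not a real PDH framer; the bit patterns and window size are invented for the example): each tributary is padded with stuff bits until all reach the multiplexer's common capacity.

```python
def justify(trib_bits, capacity):
    """Pad a tributary with stuff bits ('S') up to the mux capacity."""
    assert len(trib_bits) <= capacity
    return list(trib_bits) + ["S"] * (capacity - len(trib_bits))

capacity = 8                                  # output bits per window (example)
tribs = [[1, 0, 1, 1, 0, 1],                  # slower tributary: 6 data bits
         [0, 1, 1, 0, 1, 1, 1]]               # faster tributary: 7 data bits
aligned = [justify(t, capacity) for t in tribs]
assert all(len(t) == capacity for t in aligned)   # now identical speeds
```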
Jitter Effects:
To measure jitter effects, the incoming signal is regenerated to produce a virtually jitter-free signal, which is used for comparison purposes. No external reference clock source is required for jitter measurement. The maximum measurable jitter frequency is a function of the bit rate and ranges up to 20 MHz at 2.488 Gbit/s (STM-16/OC-48).
The unit of jitter amplitude is the unit interval (UI), where 1 UI corresponds to an error of the width of one bit. Test times on the order of minutes are necessary to accurately measure jitter.
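Since 1 UI equals one bit period, its absolute width shrinks as the bit rate grows; a small sketch using rates from the PDH/SDH hierarchy:

```python
def ui_seconds(bit_rate_bps):
    """Width of one unit interval (one bit period) in seconds."""
    return 1.0 / bit_rate_bps

# 1 UI at E1 is about 488 ns; at STM-16 it is only about 402 ps.
for name, rate in [("E1", 2.048e6), ("STM-1", 155.52e6), ("STM-16", 2.48832e9)]:
    print(f"{name}: 1 UI = {ui_seconds(rate) * 1e12:.0f} ps")
```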
Jitter:
Periodic or random changes in the phase of the transmission clock, referred to the master or reference clock. In other words, the edges of a digital signal are advanced or retarded in time when compared with the reference clock or an absolutely regular time framework. Jitter generally refers to phase deviations faster than 10 Hz.
Wander:
Slow changes in phase (below 10 Hz); a special type of jitter.
Interference signals
Impulsive noise or crosstalk may cause phase variations (non-systematic jitter), normally high-frequency jitter.
Pattern dependent jitter
Distortion of the signal leads to so-called inter-symbol interference, i.e. pulse crosstalk that varies with time (pattern-dependent jitter).
Phase noise
The clock regenerators in SDH systems are generally synchronized to a reference clock. Some phase
variations remain, due to thermal noise or drift in the oscillator used.
Delay variation
Changes in the signal delay times in the transmission path lead to corresponding phase variations. These
variations are generally slow (Wander). (e.g. Temperature changes in optical fibers).
Stuffing and wait time jitter
When stuffing bits are removed, the resulting gaps have to be smoothed out by a regenerated (smoothed) clock.
Mapping jitter
see above
Pointer jitter
Occurs when the pointer value is incremented or decremented. This shifts the payload by 8 or 24 bits, corresponding to a phase hit of 8 or 24 UI.
An STM-1 signal has a byte-oriented structure with 9 rows and 270 columns. A distinction is made between three areas:
the payload area, which uses 261 columns
the pointer area
the section overhead, which is split into two parts, the Regenerator and the Multiplex Section Overhead.
Each byte corresponds to a 64 kbit/s channel. The overall bit rate of the STM-1 frame corresponds to 155.520 Mbit/s. The frame repetition time is 125 µs.
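The STM-1 rate follows directly from the frame dimensions; a one-line check:

```python
rows, cols, frames_per_s = 9, 270, 8000       # STM-1 frame, 125 us repetition
stm1_rate = rows * cols * 8 * frames_per_s    # bits per frame x frames per second
assert stm1_rate == 155_520_000               # 155.520 Mbit/s

# Each byte position recurs 8000 times per second, i.e. one 64 kbit/s channel.
assert 8 * frames_per_s == 64_000
```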
The STM-n frame structure is best represented as a rectangle of 9 rows by 270 × n columns. The first 9 × n columns are the frame header and the rest of the frame is the inner structure, i.e. payload (including the data, indication bits, stuff bits, pointers and management).
The STM-n frame is usually transmitted over an optical fibre, row by row (the first row is transmitted first, then the second, and so on). At the beginning of each frame, the synchronisation bytes A1 and A2 are transmitted.
Four STM-1 streams are multiplexed into an STM-4 by byte-interleaving the STM-1 streams.
The PDH signals are first adapted to containers; this process is defined as frame alignment. That is, PCM-30 maps to C-12, 34 Mbit/s to C-3 and 140 Mbit/s to C-4. In this process several stuffing bytes are added.
The process from containers to virtual containers is defined as mapping. In mapping, path overhead (POH) bytes are added to the containers: to C-12, the bytes V5, J2, N2 and K4 are added to make a VC-12; to VC-3/VC-4, the bytes J1, B3, C2, G1, F2, H4, F3, K3 and N1 are added.
Virtual containers thus consist of POH and payload. The containers located in the third column from the right of the figure are defined as Lower Order Virtual Containers (LOVC).
The higher-layer Tributary Unit accommodates lower-order virtual containers regardless of the location of the starting byte. The location of the starting byte is written in the pointer bytes; this process is defined as pointer processing.
When 3 parallel TU-12s are bound up into one TUG-2, this process is defined as multiplexing.
7 TUG-2s are bound up into one TUG-3.
3 TUG-3s are bound up into one VC-4.
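The multiplexing factors above give the familiar count of 63 lower-order tributaries per VC-4:

```python
tu12_per_tug2 = 3
tug2_per_tug3 = 7
tug3_per_vc4 = 3

tu12_per_vc4 = tu12_per_tug2 * tug2_per_tug3 * tug3_per_vc4
assert tu12_per_vc4 == 63      # 63 x 2 Mbit/s tributaries in one VC-4
```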
When the 32 bytes × 4 frames of the PCM-30 signal (128 bytes in total) are adapted into C-12s, the total becomes 136 bytes, with 2 stuffing bytes added to each frame. The values of the justification control bits illustrated in the figure below are 000 for C2 and 111 for C1. When a justification opportunity bit is indicated as a justification bit, the corresponding bit is filled with a stuffing value, not with data. Therefore the current actual value of S1 is not defined and the receiver must ignore its content, while S2 carries data.
However, if within 500 µs, i.e. within 4 consecutive frames, the PCM-30 signal speed becomes slower for some reason and delivers only 1023 bits instead of 1024 bits (128 bytes), the justification opportunity bit S2 is filled with a stuffing value and C2 is set to 111. The receiver then ignores the content of S2.
When the PCM-30 signal speed becomes faster and delivers 1025 bits (128 bytes plus 1 bit), the justification opportunity bit S1 carries a data bit and C1 is set to 000.
The summary of the possible combinations of the values C1 and C2 is as follows.
C1    C2    Interpretation
111   111   S1 stuffed, S2 stuffed (tributary slower than nominal)
111   000   S1 stuffed, S2 data (nominal rate)
000   000   S1 data, S2 data (tributary faster than nominal)
A majority vote over the C bits is used to make the justification decision when the signal is demultiplexed, protecting against single bit errors in the C bits. The bits indicated as overhead bits are reserved for future use.
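The majority vote over the three C bits can be sketched as follows (a minimal illustration, not the full demultiplexer logic):

```python
def c_bit_decision(c_bits):
    """Majority vote over the 3 received C bits (e.g. C1 C1 C1).

    Returns 1 (stuffing indicated) if at least 2 of the 3 bits are 1,
    so a single bit error cannot change the decision."""
    return 1 if sum(c_bits) >= 2 else 0

assert c_bit_decision([1, 1, 1]) == 1
assert c_bit_decision([1, 0, 1]) == 1   # one corrupted bit, still correct
assert c_bit_decision([0, 0, 1]) == 0
```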
The four consecutive frames described in the above process are defined as a multiframe structure. The next process, from C-12 to VC-12, is defined as mapping. In mapping, 4 different POH bytes, namely V5, J2, N2 and K4, are inserted at the beginning of the 4 consecutive frames respectively, which makes up the 35 bytes (per frame) of the VC-12. This multiframe is repeated every 500 µs. The structure of the VC-12 thus consists of POH and information payload.
When VC-12s are adapted into TU-12s, the method defined is alignment. Alignment absorbs the offset between the frame start of the VC-12 and that of the TU-12.
A set of four consecutive frames of a VC-12 is always adapted into 4 consecutive TU-12s. The start byte of the 4 consecutive frames of the VC-12 is V5, whose location is written into the V1 and V2 bytes in binary as the pointer value. The formats of V1 and V2 are described in the figure: the first 4 bits are allotted to the New Data Flag (NDF), the following 2 bits to the SS bits, and the remaining 10 bits to the pointer value.
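The V1/V2 layout (4 NDF bits, 2 SS bits, 10 pointer bits) can be packed and unpacked as one 16-bit word; a sketch (the field values used here are illustrative):

```python
def pack_v1v2(ndf, ss, pointer):
    """Assemble V1 and V2 into one 16-bit word: NDF(4) | SS(2) | pointer(10)."""
    return (ndf << 12) | (ss << 10) | pointer

def unpack_v1v2(word):
    """Split the 16-bit V1V2 word back into (NDF, SS, pointer)."""
    return (word >> 12) & 0xF, (word >> 10) & 0x3, word & 0x3FF

word = pack_v1v2(0b0110, 0b10, 78)       # example pointer value 78
assert unpack_v1v2(word) == (0b0110, 0b10, 78)
```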
A pointer is an indicator whose value defines the frame offset of a virtual container with respect to the frame reference of the transport entity on which it is supported. In the case of a TU-12, the pointer value indicates the frame offset of the beginning of the VC-12 (the V5 byte).
6. Multiplexing
6.1 Process to combine multiple signals into a higher-capacity unit using simple byte-interleaved multiplexing.
7. ETSI and SONET standards
7.1 The multiplexing route via AU-4 is the ETSI standard, used by most countries.
7.2 The route via AU-3 is the SONET standard used in North America; Japan also adopts this route.
1.6 Pointers of the AU-4s are also byte-interleave multiplexed into the 4th row, 1st to 9th columns, indicated as AU PTRs.
1.7 Insert a new STM-N SOH.
2. STM-n Frame
2.1 The number of rows remains 9; the nine-row rule is common to all frames.
2.2 The number of columns for SOH and AU pointers is 9 × n and the columns for
4.4 The New Data Flag (NDF, bits NNNN) indicates an arbitrary pointer value change due to a payload change.
4.5 The SS bits (10) show whether the signal type is AU-4, AU-4-Xc, AU-3 or TU-3. (Currently the SS bits of the AU pointer are not used for any purpose.)
1. A complete set of VC-12 and TU-12 requires a multiframe of 4 frames, 125 × 4 = 500 µs, in order to accommodate the POH and pointer bytes.
2. V1, V2 and V3 are used in the same way as H1, H2 and H3 of the AU-4 pointer. V4 is not defined.
3. The offset numbers are 0 to 139 and start right after V2. Different from the AU-4, each succeeding byte has a different number.
4. Justification between VC-12 and TU-12 is done using V3 and the byte after V3.
1. When AU-4s (VC-4s) from different STM-N lines are transferred to a new STM-N at an add-drop or cross-connect station, their arriving phases are different.
2. For the transfer, the phases of the AU-4s must be aligned, giving them a delay of 0.5 frame period on average.
3. If the same delay were also applied to the VC-4, then through the repetition of mux/demux, which is quite common in the SDH network, the accumulated delay would be enormous. This must be avoided to maintain good transmission quality.
4. Without delaying the VC-4, its new phase relation to the new AU-4 is set in the AU-4 pointer.
5. The position of the gaps in the VC-4 for insertion of the SOH must be changed, and this requires a buffer and induces some delay. But it is very small compared to 0.5 frame time.
6. Thus, minimisation of accumulated delay is realised by the pointer renewal.
1. Since SDH is a synchronous system, justification (synchronization) between SDH signals is generally not required.
2. Cases in which justification is required for the SDH:
2.1 Interconnection of networks where independent Primary Reference Clocks (PRC) are used. This occurs
for a connection between different countries or different network operators.
2.2 Some stations become asynchronous, i.e. run on an internal clock, due to a loss of the reference signal, e.g. a line failure.
3. In those cases, an STM-n (or HOVC) frame must carry VC-4(s) (or LOVCs) that are generated using a different (asynchronous) clock; then justification between SDH signals is necessary.
4. STM-N (AU-4) ~ VC-4 (for ETSI)
4.1 Justification is done by using the H3 bytes (3 bytes) for expanded payload capacity (negative justification) and by excluding the 0-address bytes (3 bytes) for reduced payload capacity (positive justification), every several frames. The justification interval and the positive/negative selection are determined by the clock frequency difference.
5. HOVC=VC-4 (TU-3) ~ LOVC(VC-3) (for ETSI)
5.1 Justification is carried out using H3 byte (1 byte) and 0 address byte (1 byte) of the TU-3 in the same way.
6. HOVC=VC-4 (TU-n) ~ LOVC (VC-n), n = 11, 12 or 2 (for ETSI)
6.1 Justification is carried out using V3 byte (1 byte) and 0 address byte (1 byte) of the TU-n in the same way.
7. Justification information is carried by using a pointer as the increment (I) or decrement (D) bits depending on whether it
2.1 Positive justification is applied when an underflow of data is detected: the payload size of the corresponding frame is reduced by excluding the 0-address bytes from the payload.
2.2 The receiving station is instructed to neglect those bytes by the polarity change (inversion) of the I bits of the frame.
2.3 The following frames go back to the normal payload size, and the pointer value is increased by one, until the next underflow is detected.
3. Up to 2 bit errors in the I bits can be corrected by the majority rule at the receiving side.
2.1 Negative justification is applied when an overflow of data is detected: the payload size of the corresponding frame is expanded by using the H3 bytes as part of the payload.
2.2 The receiving station is instructed to accept the H3 bytes as information bytes by the polarity change (inversion) of the D bits of the frame.
2.3 The following frames go back to the normal payload size, and the pointer value is decreased by one, until the next overflow is detected.
3. Up to 2 bit errors in the D bits can be corrected by the majority rule at the receiving side.
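The majority rule for the five I (or D) bits can be sketched like this (a simplified model of the receiver's decision, not the full pointer interpreter):

```python
def justification_signalled(received, expected):
    """Return True if at least 3 of the 5 I (or D) bits arrive inverted.

    With this 3-of-5 majority, up to 2 bit errors cannot corrupt the
    increment/decrement decision."""
    inverted = sum(r != e for r, e in zip(received, expected))
    return inverted >= 3

expected = [0, 1, 0, 1, 0]
assert justification_signalled([1, 0, 1, 0, 1], expected)      # all 5 inverted
assert justification_signalled([1, 0, 1, 1, 0], expected)      # 2 bit errors, still detected
assert not justification_signalled([0, 1, 0, 1, 1], expected)  # 1 stray error, no event
```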
1. The section overheads (SOHs) of STM-1 are depicted by the shaded parts in the above drawing.
1.1 The Regenerator SOH (RSOH), 3 rows by 9 columns, can be accessed at terminating points of a regenerator section (RS), i.e. at both regenerators and multiplexers.
1.2 The Multiplex SOH (MSOH), 5 rows by 9 columns, can be accessed at multiplex section (MS) terminating points, i.e. at multiplexers only. It passes through regenerators transparently.
1.3 Information carried by the RSOH and MSOH is mainly used for administration of the RS and MS layers, respectively.
2. Marked bytes are already defined. Unmarked bytes are reserved for future use.
3. Some bytes are defined as media-dependent, to be used if necessary. (For SDH radio they are defined in ITU-R F.750.)
4. Some bytes are assigned for national use. Each country can define their function differently, but the definition is valid only within the country, not internationally.
5. The column number of an STM-N is 9 × n and the same byte assignment is applied, but the number of A1, A2 and B2 bytes is increased accordingly.
6. For STM-16, 64 and 256, FEC (Forward Error Correction) is implemented using some of the unmarked bytes.
1. A1, A2: Frame Alignment Signal (FAS)
1.1 A1: 11110110
A2: 00101000
1.2 A receiver finds the STM-N frame by detecting the fixed A1...A2... pattern, which appears periodically at 125 µs intervals. (Remember that a VC-n does not have any FAS; its frame is found by using the pointer.)
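Frame alignment can be sketched as a search for the fixed A1 A1 A1 A2 A2 A2 pattern (a simplified model; a real receiver also confirms alignment over several consecutive frames):

```python
A1, A2 = 0xF6, 0x28          # 11110110, 00101000

def find_frame_start(stream):
    """Return the offset of the first A1 A1 A1 A2 A2 A2 sequence, or -1."""
    return bytes(stream).find(bytes([A1] * 3 + [A2] * 3))

# Example line signal: two stray bytes, then the FAS, then payload.
line = bytes([0x00, 0x5A]) + bytes([A1] * 3 + [A2] * 3) + bytes(10)
assert find_frame_start(line) == 2
```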
2. J0: Regenerator Section Trace
2.1 For verification of the regenerator section connection.
2.2 A transmitter can set an identifier (name) of up to 15 characters for the STM-N signal using J0, and a receiver compares the received ID (J0 value) to the expected J0 value, preset in the receiver, to verify the connection.
2.3 To carry the 15 characters a 16-byte multiframe is formed: the first J0 is used for FAS and error detection, and the remaining 15 J0s carry the ID.
2.4 (An early recommendation defined this byte as C1 (STM identifier), showing the unique order number of the STM-1s within an STM-N to assist the demultiplexing process.)
3. B1, B2 : Error Monitoring
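B1 and B2 carry bit-interleaved parity (BIP) values; a minimal BIP-8 sketch (even parity over each of the 8 bit positions reduces to an XOR over all bytes):

```python
from functools import reduce
from operator import xor

def bip8(data):
    """BIP-8: even parity over each bit column == XOR of all bytes."""
    return reduce(xor, data, 0)

# Example frame bytes (arbitrary values for illustration).
frame = [0x12, 0xF6, 0x28, 0x80]
check = bip8(frame)
assert bip8(frame + [check]) == 0    # parity closes to zero with the check byte
```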
6.1 M1 is used to report the result of error detection by B2 (the number of BIP violations, i.e. the error count) back to the transmission source.
6.2 For STM-64 and 256, two bytes (M0 and M1) are assigned.
2. Z1, Z2: Spare bytes. STM-N (N > 4) has an additional spare byte (Z0).
1.4 AIS and RDI will be generated and AIS is transmitted downstream, while RDI is transmitted to
opposite NE.
2.5 AIS and RDI will be generated and AIS is transmitted downstream, while RDI is transmitted to opposite NE.
6.2 A generalized position indicator (e.g. multiframe indicator for VC-11, VC-2).
6.3 Different purposes for other payload types.
7. K3 : Automatic Protection Switching (APS) Channel
For automatic switching control of VC-3 or VC-4 path
8. N1 : Network Operator Byte
For tandem connection maintenance (see next page)
H4: Position Indicator
1. The H4 byte provides a generalized multiframe indicator for the payload.
2. Four consecutive frames (125 µs × 4 = 500 µs) are required to make a complete TU-12 (VC-12), and it is necessary to identify the first frame for correct recovery of the information. The H4 byte is used for this purpose in the VC-4 train.
3. The 7th and 8th bits of the H4 byte are used to indicate the position in the TU-12 multiframe (V1, V2, V3 and V4).
4. The other bits of the H4 byte are set to 1 in the interim.
1. The Tandem Connection (TC) is one of the network administration layers (Path, TC, MS and RS). It is an optional layer, and whether to use it is left to the operator's discretion.
2. TCs can be defined over any part of a path by a network operator. It is possible to set multiple TCs on a path, provided they do not overlap.
3. The network operator can monitor the error performance of the TC, i.e. a defined portion of a path, by using the Network Operator byte (N1) of the POH.
4. It is possible to detect errors occurring between the path starting point and the TC starting point by monitoring B3 (or V5 for a LOVC) at the TC starting point. In the same way, detection of errors between the path starting point and the TC ending point is possible, and their difference shows the errors that occurred within the TC.
5. The B3 monitoring result at the TC starting point is reported to the TC end by using half of the N1 byte. The TC end calculates the errors of the TC by subtracting this value from its own B3 monitoring result.
6. The second half of the N1 byte is used for a data communication channel between the two ends of the TC.
1. The LOPOH consists of four bytes (V5, J2, N2 and K4), and a four-frame multiframe (500 µs) is used to accommodate them in order to avoid an unnecessary bit rate increase.
2. The V5 byte is divided into 5 functions.
3. The functions of the LOPOH are identical to those of the HOPOH, except that it does not have a path user channel.
4. RFI is only used by a VC-11 byte-synchronous path. It is used when a failure is declared. It is undefined for VC-12 and VC-2.
1. The table of the (Section) Path Access Point Identifier [(S)PAI] is shown above.
2. The (S)PAI uses a 16-byte frame in the E.164 format, as shown in the table above.
3. 15 bytes are used for the transport of the 15 ASCII characters required by the E.164 numbering format.
4. ASCII: American Standard Code for Information Interchange.
1. A remote station alarm is a report of signal receiving status back to the transmission source.
2. Maintenance signals of SDH system consist of AIS, RDI and REI.
3. AIS (Alarm Indication Signal)
3.1 Reports a signal failure in the upstream of the signal flow to the downstream, indicating "you are receiving a defective signal but it is not your fault". It is generated when LOF (Loss of Frame), LOS (Loss of Signal), LOP (Loss of Pointer), AIS, EBER (Excessive Bit Error Ratio, optional), etc. are detected.
4. RDI (Remote Defect Indication)
4.1 Reports detection of received signal failure to the transmitting side, under the same condition
as AIS.
5. REI (Remote Error Indication)
5.1 Reports error count detected by BIP-X to the transmitting side (for LOVC, only error presence).
6. RFI (Remote Failure Indication)
6.1 Reports the declaration of a failure (persistence of a failure beyond a threshold) to the transmitting side. This is defined only for VC-11.
3.1 The location of the 64 kbit/s channels of the 2 Mbit/s signal within the VC-12 is fixed; thus visibility of individual channels in the SDH signal is implemented.
1. An ATM cell has a fixed length of 53 bytes (5 header bytes + 48 information bytes).
2. ATM cells are mapped into the payload of VC with its byte boundaries aligned to the VC-4 byte
boundaries.
3. Since the capacity (2,340 bytes) of the VC-4 payload is not an integer multiple of a cell length, the
last cell mapped to the payload may cross a VC-4 frame boundary.
4. ATM cell mapping to other size VCs is also defined in the same way.
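The cell/frame arithmetic behind point 3 is simple to verify:

```python
vc4_payload_bytes = 2340     # 260 columns x 9 rows
atm_cell_bytes = 53

full_cells, leftover = divmod(vc4_payload_bytes, atm_cell_bytes)
assert (full_cells, leftover) == (44, 8)
# 44 whole cells fit; the next cell starts in the last 8 bytes and
# therefore crosses the VC-4 frame boundary.
```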
Note: The term NPI was used in early recommendations but has recently been removed.
1. The figure above shows how VC-12 virtual containers are byte-interleave multiplexed into a VC-4.
2. Transmission of a complete VC-12 requires four consecutive frames, called a multiframe structure (125 µs × 4 = 500 µs). The VC-12 is carried as four 35-byte subunits (35 × 4 = 140 bytes per 500 µs); a one-byte gap is introduced before each subunit to permit insertion of the TU-12 pointer bytes.
3. The phase relationship between the VC-12 and the TU-12 is not fixed, and the first byte of the VC-12 (V5) is indicated by the (V1, V2) pointer value.
4. The first VC-4 frame of the 500 µs multiframe, which carries the first TU-12 subunit (V1 byte), is shown by the H4 byte of the VC-4 POH.
1. For stable timing extraction at a receiver, line signals must have sufficient and uniform data transitions, avoiding long sequences of 1s and 0s. To meet this requirement, scrambling is applied to the STM-N line signal, and statistically long sequences of 1s and 0s are suppressed.
2. The scrambler is a frame-synchronous type with a sequence length of 127 (2^7 − 1) bits, and the PN generator restarts at every STM-N frame. Scrambling is not applied to the first row of the RSOH.
3. The scrambler also keeps the mark density of the line signal near 50%. This ensures stability of the laser diode output level.
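A sketch of the PN generator behind the frame-synchronous scrambler (generator polynomial 1 + x^6 + x^7, register reset to all ones at each frame start; a simplified model that produces only the scrambling sequence, which would then be XORed with the line data):

```python
def scrambler_bits(n):
    """Generate n bits of the length-127 PRBS (taps at stages 6 and 7)."""
    state = [1] * 7                      # reset to all ones at frame start
    out = []
    for _ in range(n):
        bit = state[5] ^ state[6]        # feedback: b(t) = b(t-6) XOR b(t-7)
        out.append(bit)
        state = [bit] + state[:-1]       # shift the feedback bit in
    return out

seq = scrambler_bits(254)
assert seq[:127] == seq[127:]            # sequence repeats with period 127
assert sum(seq[:127]) == 64              # m-sequence balance: 64 ones, 63 zeros
```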
1. This block diagram shows the major steps of demapping/demultiplexing an STM-1 signal into 63 × C-12, i.e. STM-1 → AUG → AU-4 → VC-4 → TUG-3 → TUG-2 → TU-12 → VC-12 → C-12 → 2.048 Mbit/s.
1.1 At the receiving side, the A1, A2 synchronisation bytes (pattern) are detected in the line signal to identify the start of the STM-1 frame.
1.2 The RSOH and MSOH of the STM-1 frame are terminated, and the VC-4 is extracted using the AU-4 pointer's location address.
1.3 From the VC-4 frame, 3 TUG-3s are demultiplexed; from each TUG-3 frame, 7 TUG-2s are demultiplexed; and from each TUG-2 frame, 3 TU-12s are demultiplexed.
1.4 For each TU-12, the pointer values of V1 and V2 specify the starting point of V5, the first byte of the VC-12.
1.5 From the VC-12, the C-12 is extracted after the V5, J2, N2 and K4 bytes are terminated.
1.6 From the C-12, the 2.048 Mbit/s PDH information is extracted after removing the stuffing bits.
1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The difference is the transport between the path terminations.
3. Contiguous concatenation maintains the contiguous bandwidth throughout the whole transport and requires concatenation functionality at each network element.
4. Virtual concatenation, in contrast, breaks the contiguous bandwidth into individual VCs, transports the individual VCs, and recombines them into a contiguous bandwidth at the end point of the transmission. Virtual concatenation requires concatenation functionality only at the path termination equipment.
5. In the figure above, an AU-4-4c is passed through a node B that supports only lower-order STM-1s. In this case the AU-4-4c is split up into four VC-4s, and each VC-4 is switched independently using virtual concatenation.
If the input signal is greater than the specified container, VC concatenation is used.
Virtual container and container capacities (bytes per 125 µs frame and the corresponding bit rates):

Level   VC bytes   VC rate (Mbit/s)   C bytes   C rate (Mbit/s)   PDH payload (Mbit/s)
VC-12       35          2.240             34         2.176              2.048
VC-3       765         48.960            756        48.384             34.368
VC-4      2349        150.336           2340       149.760            139.264
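The bit rates follow from the byte counts and the 125 µs frame period; a quick check:

```python
def rate_mbit_per_s(bytes_per_frame, frame_us=125.0):
    """Bytes per 125 us frame -> Mbit/s (bits per microsecond)."""
    return bytes_per_frame * 8 / frame_us

assert rate_mbit_per_s(35) == 2.240      # VC-12
assert rate_mbit_per_s(765) == 48.960    # VC-3
assert rate_mbit_per_s(2349) == 150.336  # VC-4
assert rate_mbit_per_s(2340) == 149.760  # C-4
```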
1. There are two types of SDH concatenated signals: contiguous concatenation and virtual concatenation.
2. An AU-4 is designed to carry a C-4 container, which has a capacity of 149.76 Mbit/s. If there are services that require a capacity greater than 149.76 Mbit/s, one needs a way to transport the payload of these services. AU-4-Xc is designed for this purpose.
1. An AU-4 has a nine-byte pointer; these nine bytes are shown in the figure above.
2. An AU-4-Xc signal has 9X bytes of AU-4 pointer, i.e. X sets of nine-byte pointers (H1, H2, H3).
3. The first AU-4 pointer has its normal function. The second, third, ..., Xth AU-4 pointers are used as concatenation indications, as shown in figure (c).
1. A two-stage 512 ms multiframe is introduced to cover differential delays of 125 µs and above (up to 256 ms). The first stage uses H4 bits 5-8 as a 4-bit multiframe indicator (MFI1). MFI1 is incremented every basic frame and counts from 0 to 15. For the 8-bit multiframe indicator of the second stage (MFI2), H4 bits 1-4 in frame 0 (MFI2 bits 1-4) and frame 1 (MFI2 bits 5-8) of the first multiframe are used.
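MFI1 and MFI2 combine into a single frame number; a sketch of the arithmetic:

```python
def multiframe_number(mfi1, mfi2):
    """Combine 4-bit MFI1 (0-15) and 8-bit MFI2 (0-255) into 0-4095."""
    return mfi2 * 16 + mfi1

total_frames = 16 * 256                            # 4096 basic frames
assert multiframe_number(15, 255) == total_frames - 1
assert abs(total_frames * 125e-6 - 0.512) < 1e-9   # 512 ms overall multiframe
```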
1. The sequence indicator SQ identifies the sequence/order in which the individual VC-3/4s of the VC-3/4-Xv are combined to form the contiguous container VC-3/4-Xc, as shown in the figure above.
2. Each VC-3/4 of a VC-3/4-Xv has a fixed unique sequence number in the range 0 to (X−1). The VC-3/4 transporting the first time slot of the VC-3/4-Xc has sequence number 0, the VC-3/4 transporting the second time slot has sequence number 1, and so on up to the VC-3/4 transporting time slot X of the VC-3/4-Xc with sequence number (X−1).
3. For applications requiring fixed bandwidth, the sequence number is assigned fixed and is not configurable. This allows the constitution of the VC-3/4-Xv to be checked without using the trace.
4. The 8-bit sequence number (which supports values of X up to 256) is transported in bits 1 to 4 of the H4 bytes, using frame 14 (SQ bits 1-4) and frame 15 (SQ bits 5-8) of the first multiframe stage, as shown in Table 11-1.
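Reassembly by sequence number can be sketched as a simple sort (the payload strings here are placeholders):

```python
def reassemble(members):
    """members: (sq, payload) pairs arriving in arbitrary order.

    Sorting by the sequence indicator restores the original time-slot
    order of the contiguous container."""
    return b"".join(payload for _, payload in sorted(members))

arrived = [(2, b"CC"), (0, b"AA"), (1, b"BB")]   # out of order after transport
assert reassemble(arrived) == b"AABBCC"
```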
Concatenation of X VC-12s
1. A VC-12-Xv provides a payload area of X Container-12s, as shown in the figures above. The container is mapped into X individual VC-12s, which form the VC-12-Xv. Each VC-12 has its own POH.
2. Each VC-12 of the VC-12-Xv is transported individually through the network. Because of this, a differential delay occurs between the individual VC-12s, and the order and alignment of the VC-12s will change. At the termination, the individual VC-12s have to be rearranged and realigned in order to re-establish the contiguous concatenated container. The realignment process has to cover a differential delay of at least 125 µs.
1. The VC-12 virtual concatenation frame count is contained in bits 1 to 5. The VC-12 virtual concatenation sequence indicator is contained in bits 6 to 11. The remaining 21 bits are reserved for future standardisation; they should be set to all "0"s and should be ignored by the receiver.
2. The VC-12 virtual concatenation frame count provides a measure of the differential delay up to 512 ms, in 32 steps of 16 ms, which is the length of the multiframe (32 × 16 ms = 512 ms).
3. The VC-12 virtual concatenation sequence indicator identifies the sequence/order in which the individual VC-12s of the VC-12-Xv are combined to form the contiguous container VC-12-Xc, as shown in the figures.
4. Each VC-12 of a VC-12-Xv has a fixed unique sequence number in the range 0 to (X−1). The VC-12 transporting the first time slot of the VC-12-Xc has sequence number 0, the VC-12 transporting the second time slot has sequence number 1, and so on up to the VC-12 transporting time slot X of the VC-12-Xc with sequence number (X−1). For applications requiring fixed bandwidth, the sequence number is assigned fixed and is not configurable. This allows the constitution of the VC-12-Xv to be checked without using the trace.
The overall capacity of the SOH is 4.608 Mbit/s (9 × 8 × 64 kbit/s), of which 30 bytes (1.920 Mbit/s) have fixed definitions. The remaining 64 kbit/s channels are not specified: six are reserved for national use, and six bytes are reserved for medium-dependent functions (e.g. radio-link systems). Columns 1, 4 and 7 also correspond to the STS-1 frames.
A number of functions are defined in the overhead channels to ensure proper transport of the payload.
The Section Overhead (SOH)
Functions of the SOH:
Contains maintenance, monitoring and operational functions
Each byte refers to a 64 kbit/s channel
Split into RSOH and MSOH
Protects the connection from the point of STM-1 assembly to the point of disassembly.
The Path Overhead (POH)
The POH of VC-4/VC-3 consists of 9 bytes and the POH of the VC-11/VC-12 and VC-2 consists
of 4 bytes.
The RSOH is reformed (terminated) by each regenerator; the MSOH passes through regenerators transparently.
The MSOH is reformed (terminated) by each multiplexer and cross-connect.
The path overhead is evaluated at the end point of the transmission system, where the unpacking takes place.
The pointer indicates the phase shift of the first VC byte (J1 or V5) within the payload of the container. For the mapping of 2 Mbit/s signals into SDH, two pointer levels are used: the first level, the AU-4 pointer, identifies the start of the VC-4 relative to the basic STM-1 frame; the second level, the TU-12 pointers, identifies the start of each VC-12 relative to the VC-4 for each of the 63 VC-12s.
The use of pointers decouples the information channels (VCs) from the transport medium (the STM signal). The fixed phase relationships of older systems are avoided in this manner.
It is also possible to multiplex and demultiplex signals in a single device across all levels. The byte position of a subsignal is easy to compute.
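As a simplified illustration of that last point (ignoring POH, fixed stuff and pointer columns, and assuming plain byte interleaving of 63 tributaries), the payload bytes belonging to one TU-12 can be located by simple arithmetic:

```python
def tu12_byte_positions(k, payload_len, n_tribs=63):
    """Positions of tributary k's bytes in a byte-interleaved payload.

    Simplified model: with 63 TU-12s interleaved, tributary k owns
    every 63rd payload byte, starting at offset k."""
    return list(range(k, payload_len, n_tribs))

assert tu12_byte_positions(0, 189)[:3] == [0, 63, 126]
assert tu12_byte_positions(5, 189) == [5, 68, 131]
```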
A regenerator regenerates the clock and amplitude of the incoming distorted and attenuated signal. It derives the clock signal from the incoming data stream.
The terminal multiplexer is used to multiplex local tributaries (low rate) into the STM-n (high rate) aggregate. The terminal multiplexer is used as the end element in a chain topology. The regenerator is used to regenerate the (high-rate) STM-n signal when the distance between two sites is longer than the transmitter can cover.
Extraction from, and insertion into, high-speed SDH bit streams of plesiochronous and lower-bit-rate synchronous signals.
A ring network structure provides the advantage of automatic back-up path switching in the event of a fault.
An optical cross-connect is an element used for making interconnections between different channels,
either temporarily or permanently.
It contains a space switch which allows any wavelength on any input fiber to be routed to any
wavelength on any output fiber, provided that wavelength is free to be used.
It contains mux/demux and/or switching arrangements.
An aggregate-to-aggregate connection, a tributary-to-aggregate connection and a tributary-to-tributary
connection are also possible in the case of a Digital Cross-Connect.
The linear bus (chain) topology is used when there is no need for protection and the demography of the
sites is linear.
The ring topology is the most common and best-known topology of SDH, allowing great network
flexibility and protection.
DWDM networks can be deployed in linear or ring configurations. In metro networks, due to the short
link lengths, DCMs and regenerators are typically not used; in some cases amplifiers are not used
either.
ILAs provide 1R regeneration, as against the 3R regeneration of SONET regenerators.
Due to ASE (Amplified Spontaneous Emission), the signal becomes increasingly noisy as it is amplified
by many EDFAs.
As per ITU-T G.692, DWDM systems should be capable of transmitting signals without regeneration over
8 spans of 22 dB each.
If regeneration is required for all the signals, two back-to-back terminals may be used; otherwise an
OADM may be used.
1. Ring systems are classified into types based on the employed switching method combination (unidirectional or bidirectional switching,
and path or line switching), the routing of two-way traffic (unidirectional or bidirectional ring) and the fiber count (2 or 4 fibers).
2. Theoretically, any combination of switching and routing direction is possible, but in practical ring systems only the uni-uni and
bi-bi combinations above are used.
3. SNCP ring (unidirectional)
3.1 Also called 2-fiber Unidirectional Path protection Switched Ring (2F-UPSR).
3.2 No switching protocol is required. This is a simple and fast switching scheme.
4. SNCP ring (bidirectional)
4.1 2-fiber Bidirectional Path protection Switched Ring (2F-BPSR).
4.2 The ring must be controlled at path level, and a protocol using K3 or K4 in the POH is required. It is not yet standardized by ITU-T.
5. 4F MS-SP ring
5.1 4-fiber Bidirectional Line protection Switched Ring (4F-BLSR).
5.2 A protocol using K1 and K2 in the MSOH is required.
6. 2F MS-SP ring
SDH gives the ability to create topologies with protection for the data being transmitted.
Following are some examples of protected ring topologies.
In this picture we can see a Dual Unidirectional Ring. The normal data flow is along ring A.
Ring B (black) carries unprotected data, which is lost in case of a breakdown, or it carries
no data at all.
A special synchronization network is set up to ensure that all of the elements in the communications
network are synchronous. The network is hierarchically distributed: a primary reference clock source
(PRS) controls the secondary clocks of stratum levels 2 to 4 (SSU, or ST2 to ST4). This type of
synchronization signal distribution is also referred to as master/slave synchronization. The actual
synchronization may take place via a separate, exclusive sub-network, or the communications signals
themselves may be utilized. Ring structures are also possible.
4. The latest G.812 does not distinguish Transit and Local, and in G.707 they are mentioned as SSU-A and SSU-B.
5. Reference distribution between layers usually uses an ordinary transmission line, extracting the clock component of the line signal.
1. In an actual SDH network, the External, Line, Internal and Through clock methods are combined to
construct the best clock distribution network, and countermeasures against failures (redundancy,
reference source switching, etc.) are also included.
2. Repetition of the line clock accumulates impairment (jitter, wander, etc.) in the clock component of
the line signal (STM-N). When it reaches the limit, the clock signal must be refined. The NE supplies
the degraded extracted clock component to the SSU, and the SSU suppresses the impairment using a
high-performance phase-locked loop (PLL). It then gives the clean clock signal back to the NE. The
SSU supplies the clock signal to the other NEs in the office.
3. The maximum number of intermediate line clock nodes is determined by the performance of the SSU
and the NEs.
4. In an SDH system in a remote area where no synchronization reference is available (an SDH island),
one SDH NE becomes the master node using its internal clock (free-run), and the other NEs slave to
it using the line clock configuration.
1. The clock circuit of an NE is a PLL (Phase Locked Loop), and it can select a reference from line signals
(STM-N, aggregate and tributary sides), external signals connected to a PRC or SSU, and PDH 2M
(synchronous) tributary signals.
When the PLL loses a reference, it goes into hold-over (internal clock), keeping the same operating
condition (frequency and phase) as before the loss.
When it does not have a reference from the beginning, it operates as a free-running oscillator
(a completely independent internal clock).
2. Line Clock: the clock component of the STM-N is extracted and divided down to an appropriate low
frequency. Its quality (origin) level is indicated by the SSM (Synchronization Status Message), i.e. S1 of the STM-N.
External Clock: 2.048 MHz or 2.048 Mbit/s framed bipolar from a PRC or SSU. In the case of 2 Mbit/s, it
carries the SSM when the PRC or SSU supports the function. When the SSM is not supported by the SSU or
NE, the quality level is provisioned manually.
PDH 2M: when it is generated by synchronous equipment, e.g. an EES, its 2.048 MHz clock component
can be used as a reference. Since it does not carry an SSM, its quality level must be set
manually at each NE.
Internal: there are two modes. Hold-over uses the memorized frequency and phase data of the last
reference. Free-run operates without any reference. A quality level must be assigned, and it is lower than
that of any other reference.
3. Each NE has a different availability of the above reference sources. Some of the available sources might not be
used, according to the synchronization distribution network design. To all selected reference candidates
1. An SDH NE can output a reference signal to an SSU or other synchronous equipment to give them
synchronization information from the PRC or a higher level SSU.
2. The possible outputs are one of the Line Clocks and the PDH 2Ms. The extracted clock component is output
directly, without going through the PLL, after proper conversion to 2.048 MHz. This is generally called the
Line Clock, as against the Equipment Clock (below).
3. The output interface is 2.048 MHz (clause 10/G.703) or 2.048 Mbit/s (clause 6/G.703). The 2.048
Mbit/s signal carries the SSM if the NE is so designed.
4. The PLL output can also be selected as this output. This case is called the Equipment Clock.
5. An NE should not supply the Equipment Clock to an SSU. SSU installation at a node means that the clock
component of the line signal is degraded and refining is required. Supplying the Equipment Clock
would mean that the STM-N is generated by that same clock, not from the SSU, and the clock degradation
would propagate to the next section.
6. The Equipment Clock is used for synchronization supply to other NEs at stations where no SSU is
available.
1. The recommendation does not define the Quality Level indication (1~6). Here, it is used for
simplicity of explanation.
2. The latest G.812 does not distinguish Transit and Local. And the latest G.707 says SSU-A
and SSU-B respectively.
3. When the equipment generates the STM-N from its internal clock (holdover or free-run), S1 should be SEC.
4. DNU (Do Not Use for synchronization) is used when the equipment fails. In the line clock state,
DNU is set in S1 of the backward direction line (see the following explanation).
5. Quality Unknown is used when the network uses an existing clock source that might not follow
G.811 or G.812.
1. The drawing above is a simple model of an SDH line. Node A and Node C are clocked by the PRC and an SSU.
Node B, Node D and the other nodes between B and C use the line clock configuration.
2. The SSM (S1) of line A1 is set to Q=1 automatically because its clock component is traceable to G.811.
When the PRC and/or NE-A does not support SSM, the external-in port of the NE must be provisioned to Q=1.
3. NE-B is not directly connected to the PRC, but since the clock in line B1 is traceable to the PRC, S1 of line B1 is set
to Q=1. This is done by reflecting the S1 in A1 into B1. The same procedure is applied to all line clock nodes
before Node C.
4. The clock in line B2 is also traceable to the PRC, but it is set to Q=6 (Do not use ...) intentionally and
automatically. B2 is clocked by A1, and from the viewpoint of the direction of clock signal flow it is
backward. S1 of a backward signal is always set to Q=6, regardless of its actual quality, in order to avoid a
clock loop. If B2 were set to anything other than Q=6, there is a possibility of a clock loop (A-A1-B-B2-A) when
the PRC is lost.
5. At Node C, the clock extracted from B1 is supplied to the SSU to be refined, and the clean clock is given to NE-C. In this case, the clock in line C1 is traceable to the SSU and its S1 becomes Q=3.
6. The direction of line C2 is the same as B2. But it is not backward but forward, because it is clocked not by B1
but by the SSU. Therefore, its S1 should be Q=3 (SSU).
7. It is not shown here, but when NE-A and NE-B have STM-N tributary lines, their S1s are Q=1; in the NE-C and NE-D case, Q=3.
Between the two highest quality sources (Q=1, EXT 1 and 2), EXT 1, which has the higher priority order, is selected
as the reference.
It generates STM-N signals in the east and west directions, setting their S1 bytes to Q=1.
To avoid a timing loop, the signal arriving from the west direction is marked as not to be used.
At Node B :
From the three available references (west, east and internal), the source of highest quality (west) is selected.
This node sets the S1 to the east (forward) to Q=1 and the S1 to the west (backward) to Q=6.
At Node C :
Although this node has tributary 2M signals as reference sources, since their quality levels defined by the
NE (Q=3, TRIB 1 and 2) are lower than the S1 indication of the receiving west (Q=1), it selects the west as the
reference.
At Node D :
The S1 indications of the line signals (west and east) are the same (Q=1) and higher than the internal. The node
selects the west, which has the higher priority order.
It is assumed that a line signal failure has occurred on the clockwise signal between Node A and Node
B. Then, clock reference changes take place in the ring network with the following procedure.
At Node B :
1. Because of the detection of the signal failure on the receiving west, which has been the
reference source, the node sets the quality level of that reference to Q=6, regardless of
its S1 indication.
2. Node B stops using the west signal and switches to the internal clock. It cannot use the
east, as its S1 indication is Q=6 at this moment. The NE sends out S1=Q=5, the quality
level of the new reference, in the east and west directions, automatically changing from
the previous Q=1 and Q=6 respectively.
At Node C :
Detecting the change in the S1 of the receiving west signal, from Q=1 to Q=5, the node starts a
comparison between its available sources: west, east, TRIB 1, TRIB 2 and internal.
It then selects TRIB 1, which now has the best quality level; TRIB 2 has the same level but a lower
priority order. The NE sends out S1=Q=3 in the east and west directions, changing from the previous
Q=1 and Q=6 respectively.
At Node B :
Now the receiving east has higher quality (Q=3) than the current reference, the internal clock (Q=5).
As a result, the node selects the east, sending out S1=Q=3 in the west direction and S1=Q=6 in
the east direction.
At Node D :
After comparing the qualities of the receiving east (Q=1), the receiving west (Q=3) and the internal
(Q=5), it selects the east, which has the best quality. This node sets the S1 to the west (forward) to
Q=1 and the S1 to the east (backward) to Q=6.
At Node C :
As the receiving east now has better quality (Q=1) than the present reference, TRIB 1 (Q=3), the
source is changed from TRIB 1 to the east. It sends Q=1 to the west and Q=6 to the east.
At Node B :
There is no change in the reference selection. But the S1 indication in the west direction is
altered to Q=1 because, through the change at station C, this node's clock source is now
traceable back to the G.811 PRC at station A. This change is made by using the S1
value in the receiving east signal.
1. The final settlement of the clock distribution is changed to the counter-clockwise direction (A -> D -> C -> B).
2. The final result is not important in this explanation. What is important is to understand how the selection
rules are applied and how the SSM (S1) is controlled.
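The selection rules applied above (exclude failed or Do-Not-Use sources, take the best available quality level, break ties by priority order, fall back to the internal clock) can be sketched as follows. This is an illustrative model only, using the document's 1..6 quality convention where a lower number means better quality and Q=6 means Do Not Use:

```python
def select_reference(candidates):
    """Pick a synchronization reference from a list of
    (name, quality, priority) tuples. Q=6 (DNU) and failed sources
    are excluded; the best (lowest) quality wins, with the lowest
    priority number as tie-breaker. Returns None when no usable
    source remains, meaning the NE falls back to its internal clock
    (holdover or free-run)."""
    usable = [c for c in candidates if c[1] < 6]
    if not usable:
        return None
    return min(usable, key=lambda c: (c[1], c[2]))[0]
```

For example, Node D before the failure sees west (Q=1, priority 1), east (Q=1, priority 2) and internal (Q=5, priority 3), and selects the west; Node B after the failure sees both line inputs at Q=6 and falls back to internal.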
1. Another example of reference source switching is a failure of both EXT 1 and 2 at Node A. Failure
of the PRC itself is highly improbable, but here let us assume a cable cut.
2. Although no detailed switching process will be explained, following a process similar to that shown above,
the clock source of the network changes to the sub-master (Node C). In this case, the final clock
distribution becomes D -> C -> B -> A. It is recommended to trace the switching process by
yourself, referring to the previous explanations.
1. To maintain the synchronization of the NE, a PLL circuit with holdover is provided in the NE.
2. The graph shows the holdover and free-run characteristic curves of the PLL circuit.
3. While the clock circuit is operating in slave mode, frequency and phase are memorized in the
holdover circuit. When the circuit loses the reference source, for example due to a line failure, the
stored data are used for continuous, seamless operation. However, due to the thermal noise of the
PLL loop circuit, the accuracy of the VCXO degrades with time, as shown in curve No. 2. The holdover
function is effective for up to 24 hours; once more than 24 hours have passed, the VCXO enters the
free-run state, as shown in No. 2. The transmission disturbances caused by an abrupt change of
frequency and phase are avoided by the holdover function.
4. If there is no reference source supply at all, the VCXO is in the free-run state without holdover, as shown in No. 1.
5. If the reference clock recovers during the holdover state, the NE becomes slave again, as
shown in No. 3.
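The behaviour described above can be modelled as a small state machine. This is a toy sketch for illustration only, with the 24-hour holdover limit taken from the text:

```python
class NEClock:
    """Toy model of the NE clock states: locked to a reference,
    holdover (memorized frequency/phase, assumed valid for 24 h),
    and free-run. Times are in hours."""
    HOLDOVER_LIMIT_H = 24

    def __init__(self, has_reference: bool):
        # No reference from the beginning -> free-run (curve No. 1).
        self.state = "locked" if has_reference else "free-run"
        self.hours_since_loss = 0.0

    def lose_reference(self):
        # Reference lost while locked -> holdover (curve No. 2).
        if self.state == "locked":
            self.state = "holdover"
            self.hours_since_loss = 0.0

    def tick(self, hours: float):
        # Holdover degrades; past the limit the VCXO free-runs.
        if self.state == "holdover":
            self.hours_since_loss += hours
            if self.hours_since_loss > self.HOLDOVER_LIMIT_H:
                self.state = "free-run"

    def recover_reference(self):
        # Reference recovered -> slave again (curve No. 3).
        self.state = "locked"
```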
PH COMP: Phase comparator, A/D: Analogue/Digital converter
1. The original three network node interface (NNI) recommendations (G.707, G.708 and G.709) were
integrated into G.707 in 1996.
2. Distinctive features of the SDH recommendations that cannot be found in the PDH recommendations are:
2.1 Recommendations for OAM&P (Operation, Administration, Maintenance and Provisioning), i.e.
NMS (Network Management System), are defined.
2.2 Protocol suites for NMS connection are defined.
2.3 The functional configuration of equipment is described in detail.
2.4 The optical interface is standardized.
3. These are essential for the realization of a multi-vendor environment.
Enterprise Systems Connection (ESCON): an IBM-standardized protocol for the interconnection of IT
equipment, with a bit rate of 200 Mbps. It is the marketing name for a set of IBM and vendor products that
interconnect S/390 computers with each other and with attached storage, locally attached workstations
and other devices using optical fiber technology.
GFP provides an elegant framing procedure with low overhead and support for both packet services and storage
services.
Virtual Concatenation improves on current models of contiguous concatenation by supporting much finer
granularity of circuit provisioning and management from the edge of the network: right-sized pipes for packet
services (Ethernet, in particular). Both higher order (STS-1 granularity) and low order (VT1.5 level) are available,
supporting a range of high- and low-speed service assignments.
LCAS (Link Capacity Adjustment Scheme) is a tool that provides operators with greater flexibility in provisioning virtual
concatenation groups (VCGs), adjusting their bandwidth in service and providing flexible end-to-end protection
options.
From the figure above, if we want to transmit 10 Mbit/s data (Ethernet) through SDH, the available containers are either
VC-12 or VC-3.
The capacity of a VC-12 is 2.176 Mbit/s.
The capacity of a VC-3 is 48.384 Mbit/s.
Comparing these capacities, the VC-12 is too small to carry the Ethernet traffic, because a VC-12 can carry only up to
2.176 Mbit/s. The VC-3 is too large: if we send this data through a VC-3, the remaining bandwidth is wasted.
For this we have to use concatenation.
In the slide above we concatenate VC-12s: five VC-12s are added together, increasing the bandwidth
so that the 10 Mbit/s data can be sent. By concatenation the VC-12 bandwidth can be increased in
steps of one VC-12 (2.176 Mbit/s), so bandwidth is not wasted.

Container and VC capacities (bytes per 125 µs frame and the resulting bit rates):

Level         Total bytes (VC)  Bit rate (Mbit/s)  Total bytes (C)  Bit rate (Mbit/s)  PDH signal (Mbit/s)
VC-12/C-12           35               2.240               34              2.176               2.048
VC-3/C-3            765              48.960              756             48.384              34.368
VC-4/C-4           2349             150.336             2340            149.760             139.264
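The byte counts and bit rates are tied together by the 125 µs SDH frame period: the rate in Mbit/s is simply bytes x 8 / 125. A quick sketch:

```python
FRAME_PERIOD_US = 125  # one SDH frame every 125 microseconds

def rate_mbits(total_bytes: int) -> float:
    """Bit rate carried by `total_bytes` in each 125 us frame.
    Bits per microsecond equals Mbit/s."""
    return total_bytes * 8 / FRAME_PERIOD_US
```

For example, a C-12 of 34 bytes gives 2.176 Mbit/s and a C-4 of 2340 bytes gives 149.76 Mbit/s, matching the table.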
For the transport of payloads that do not fit efficiently into the standard set of virtual containers
(VC-3/4/12), VC concatenation can be used. VC concatenation is defined for:
VC-3/4 - to provide transport for payloads requiring more capacity than one C-3/4;
VC-12 - to provide transport for payloads that require more capacity than one C-12.
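Choosing the number X of containers for a group is a simple ceiling division. A sketch using the container capacities given earlier in this section:

```python
import math

# Container payload capacities from this section (Mbit/s).
VC_CAPACITY = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcg_size(payload_mbits: float, vc: str) -> int:
    """Number X of containers needed in a VC-n-Xv group to carry
    the payload, e.g. 10 Mbit/s Ethernet over VC-12 needs X = 5."""
    return math.ceil(payload_mbits / VC_CAPACITY[vc])
```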
1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The
difference is the transport between the path terminations.
3. Contiguous concatenation maintains the contiguous bandwidth throughout the whole transport, and
requires concatenation functionality at each network element.
4. Virtual concatenation, in contrast, breaks the contiguous bandwidth into individual VCs, transports the
individual VCs, and recombines these VCs into a contiguous bandwidth at the end point of the
transmission. Virtual concatenation requires concatenation functionality only at the path
termination equipment.
5. In the figure above, an AU-4-4c is passed through Node B over lower order STM-1s. In this case, the AU-4-4c is
split up into four VC-4s, and each VC-4 is switched independently using virtual concatenation.
Concatenation of X VC-12s
1. A VC-12-Xv provides a payload area of X Container-12s, as shown in the figures above. The container is
mapped into X individual VC-12s, which form the VC-12-Xv. Each VC-12 has its own POH.
2. Each VC-12 of the VC-12-Xv is transported individually through the network. Because of this, a differential
delay occurs between the individual VC-12s, and therefore the order and the alignment of the VC-12s
change. At the termination, the individual VC-12s have to be rearranged and realigned in
order to re-establish the contiguous concatenated container. The realignment process has to cover a
differential delay of at least 125 µs.
The method of AU-4 concatenation is devised to provide a greater payload capacity. The process of byte
interleaving is not required, and it provides a greater transport capacity. The normal standard container
in the SDH system is the VC-4, regardless of the signal type (STM-1, STM-4, STM-16 or STM-64).
The first column of the VC-4-Xc frame is filled with POH; columns 2 to X of the VC-4-Xc are filled with fixed stuff.
The capacity of the payload available for mapping is X times the capacity of the Container-4, which can be
expressed as 149.76 x X Mbit/s in terms of signal rate.
The first AU-4 of an AU-4-Xc has the normal range of pointer values, i.e. 0 to 782. All subsequent
AU-4s within the same AU-4-Xc have their pointers set to concatenation indication.
1. There are two types of SDH concatenated signal: contiguous concatenation and virtual
concatenation.
2. An AU-4 is designed to carry a C-4 container, which has a capacity of 149.76 Mbit/s. If
there are services that require a capacity greater than 149.76 Mbit/s, a means is needed to transport
the payload of these services. The AU-4-Xc is designed for this purpose.
The position of the POH bytes of the VC-4-Xc/VC-4/VC-3 is generalized as the first column of the 9-row
structure of the respective virtual container.
The VC-4-Xc POH is located in the first column of the 9-row by X x 261 column VC-4-Xc structure.
This table shows the improvements in bandwidth efficiency that can be made by using virtual
concatenation instead of contiguous concatenation.
From the Fast Ethernet example, we can see that the efficiency improves from 67% to 100%.
Even larger efficiency improvements can be made with some other data services. This highlights the
benefits and importance of virtual concatenation.
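The Fast Ethernet figures can be reproduced directly. The VC-12-46v transport size below is derived from the VC-12 capacity of 2.176 Mbit/s (46 x 2.176 = 100.096 Mbit/s), the contiguous case uses one whole VC-4:

```python
def efficiency(payload_mbits: float, transport_mbits: float) -> int:
    """Bandwidth efficiency as a rounded percentage."""
    return round(100 * payload_mbits / transport_mbits)

# Fast Ethernet (100 Mbit/s):
contiguous = efficiency(100, 149.76)       # one whole VC-4
virtual = efficiency(100, 46 * 2.176)      # VC-12-46v = 100.096 Mbit/s
```

Here `contiguous` works out to 67 and `virtual` to 100, matching the table.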
1. The VC-12 virtual concatenation frame count is contained in bits 1 to 5. The VC-12 virtual
concatenation sequence indicator is contained in bits 6 to 11. The remaining 21 bits are reserved for
future standardization; they should be set to all "0"s and should be ignored by the receiver.
2. The VC-12 virtual concatenation frame count provides a measure of the differential delay of up to 512
ms in 32 steps of 16 ms, which is the length of the multiframe (32 x 16 ms = 512 ms).
3. The VC-12 virtual concatenation sequence indicator identifies the sequence/order in which the
individual VC-12s of the VC-12-Xv are combined to form the contiguous container VC-12-Xc, as
shown in the figures.
4. Each VC-12 of a VC-12-Xv has a fixed, unique sequence number in the range 0 to (X-1). The VC-12
transporting the first time slot of the VC-12-Xc has the sequence number 0, the VC-12
transporting the second time slot the sequence number 1, and so on up to the VC-12 transporting
time slot X of the VC-12-Xc with the sequence number (X-1). For applications requiring fixed
bandwidth, the sequence number is fixed and not configurable. This allows the constitution
of the VC-12-Xv to be checked without using the trace.
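The bit layout in point 1 (frame count in bits 1 to 5, sequence indicator in bits 6 to 11, the remainder reserved zeros) can be sketched as a pack/unpack pair. Bit 1 is taken here to be the most significant bit of the 32-bit string, which is an assumption for illustration:

```python
def pack_vc12_vcat(frame_count: int, sequence: int) -> int:
    """Pack the 32-bit low-order VCAT string: bits 1-5 carry the
    frame count (0..31, one step per 16 ms multiframe position),
    bits 6-11 the sequence indicator (0..63), bits 12-32 reserved
    and set to all zeros."""
    assert 0 <= frame_count < 32 and 0 <= sequence < 64
    return (frame_count << 27) | (sequence << 21)

def unpack_vc12_vcat(word: int):
    """Recover (frame_count, sequence) from the 32-bit string."""
    return (word >> 27) & 0x1F, (word >> 21) & 0x3F
```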
Requirement
Allows containers to be added to or removed from the group as the data bandwidth requirement changes.
Also provides the ability to remove links that have failed.
Addition and removal of containers must be hitless.
Operation
A control packet (transmitted in the H4 byte for high order and the K4 byte for low order) is used to configure
the path between source and destination.
Each control packet describes the link status during the next control packet period.
Changes are sent in advance so the receiver can switch as soon as the new configuration arrives.
The core header contains the length of the payload area, the start-of-frame information and CRC-16
error detection and correction. Its length is 4 bytes.
The GFP payload area transports higher-layer-specific information. Its length is 4 to 65,535 octets.
Core header: 1. PDU length indicator (PLI) field
2. Core HEC (cHEC) field
PDU length indicator field:
a binary number representing the number of octets in the GFP payload area. The minimum value of the
PLI field in a GFP client frame is 4.
The PLI is a 16-bit field, and the cHEC is a 16-bit check covering the core header.
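The 4-byte core header (2-byte PLI plus 2-byte cHEC) can be sketched as below. The CRC-16 generator x^16+x^12+x^5+1 with zero initial value is assumed here from G.7041; treat this as an illustrative sketch, not a conformance implementation:

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    """Bitwise CRC-16 with generator x^16+x^12+x^5+1 (0x1021)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_len: int) -> bytes:
    """Build the 4-byte GFP core header: 2-byte PLI followed by the
    2-byte cHEC computed over the PLI."""
    assert 4 <= payload_len <= 65535, "PLI of a client frame is 4..65535"
    pli = payload_len.to_bytes(2, "big")
    return pli + crc16(pli).to_bytes(2, "big")
```

With this CRC definition (zero init, no final XOR), running the CRC over the full 4-byte header yields zero, which is how HEC-based frame delineation can validate a candidate header.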
There are currently two modes of GFP encapsulation defined:
1. Frame-mapped GFP (GFP-F)
2. Transparent-mapped GFP (GFP-T)
Frame-mapped GFP maps a client frame (for example, Ethernet) in its entirety into one GFP frame.
Transparent GFP operates on a client data stream as it arrives and uses fixed-length GFP frames.
Any protocol (for example, Fibre Channel, ESCON, FICON, Ethernet, etc.) can be mapped into
transparent GFP.
The compact XDM-100 offers scalable STM-1/4/16 aggregation of access traffic in multi-ring and
point-to-point topologies. The platform adds/drops various PDH, SDH, Gigabit Ethernet (GbE) and Fast
Ethernet services at local Points of Presence (POPs). The XDM-100 also provides a service layer, which
terminates WAN links and consolidates Ethernet traffic arriving from the local access network. Traffic can
be carried to local GbE interfaces or routed to the metro-core network.
Acting as both an ADM and a small cross-connect, the XDM-100 can easily convert from an aggregation
component to a Multi-ADM. This is performed once cellular operators decide to migrate from leased lines
at the RAN to building their own infrastructure.
The lower part of the shelf consists of the card cage and accommodates the MXC cards (main and protection) and the
ECU (External Connection Unit). The upper part of the shelf consists of the module cage and houses up to eight I/O
modules. Various types of interface modules supporting PDH, SDH and data services are available. To support system
redundancy, each MXC card contains an integrated xINF (XDM Input Filter) unit with connectors for two input power
sources. The xFCU-100 fan control unit at the right side of the shelf provides cooling air to the system. It contains nine
separate fans for added system redundancy.
The basic XDM-100 cage contains slots for I/O interface modules, and dedicated slots for the MXC cards and the ECU.
The cage's design and mechanical practice conform to international mechanical standards and specifications.
The modules and cards are distributed as follows:
. Eight (8) slots, I1 to I8, optimally allocated for I/O interface modules.
. Two (2) slots, A and B respectively, allocated for the MXC cards (main and protection). Each MXC card has two slots
(A1/A2 and B1/B2) to accommodate SDH aggregate modules.
. One (1) slot allocated for the ECU card.
The ECU is located beneath the MXC cards. Its front panel features several interface connectors for management,
external timing, alarms, orderwire and overhead (future release). It also includes alarm-severity colored LED indicators
and selectors, plus a display for selecting specific modules and ports for monitoring purposes.
8x I/O modules
1. High-order transmission paths for high-order and low-order subnetworks and for IP networks (for
example, LAN-to-LAN connectivity: GbE to GbE)
2. Leased lines at various bit rates, from 2 Mbps up to 2.5 Gbps
3. The XDM-100 also supports a wide range of I/O interfaces and Ethernet Layer 2 services, enabling its
deployment in transmission networks, including:
10/100BaseT
8x I/O modules
MXC100 Main Cross-connect Card
A1/A2 & B1/B2 aggregate slots containing:
SAM1_4/O/E/OE SDH Aggregate Module, SAM4_2, SAM16_1
I/O modules:
SIM1_4/O/E/OE SDH Interface Module, SIM4_2, PIM2_21 PDH Interface Module, PIM345_3
EISM Ethernet Interface & Switching Module
8x I/O modules
1. Simplicity: add or replace plug-in modules. This can be performed while the system is in operation,
without affecting traffic in any way.
2. Optimization of aggregate module assignment. Two aggregate modules are associated with each
Main Cross-connect Control (MXC) card. Each module supports a bandwidth of up to 2.5 Gbps.
3. Optimization of tributary I/O slot assignment. Eight slots can accommodate different I/O modules
(PIM, SIM and EIS-M).
4. In-service scalability of SDH links. An optical connection operating at a specific STM rate can be
upgraded from STM-1 to STM-4 or STM-16.
5. The XDM-100 supports mesh, ring, star and linear topologies. All system configurations are
controlled by a single network management system with end-to-end service provisioning, from E1 to
STM-16 to DS-3.
XDM-1000 Multiservice Metro Optical Platform: designed for high-capacity central exchange
applications, the XDM-1000 is optimized for the metro core and features unprecedented port densities.
As a digital cross-connect, it builds a fully protected mesh core. As a multi-ADM, it simultaneously closes
STM-64/OC-192 core MS-SPRing/BLSR and multiple edge 1+1/UPSR rings. The XDM-1000 provides
connectivity between central office legacy switches over E1, E3/DS-3 and STM-1/OC-3 trunks, and
between POPs over native Gigabit Ethernet or POS, while efficiently grooming traffic from edge rings. As
a DWDM platform, it enables migration from SDH/SONET to DWDM, providing high capacity, sub-lambda
grooming and reliability.
XDM-2000 Multifunctional Intelligent Optical Switch: designed for the metro and metro-regional
core, the XDM-2000 is optimized for pure DWDM and hybrid optical applications. This is a highly dense
DWDM platform that provides intelligent sub-lambda grooming and optimum wavelength
utilization. The XDM-2000 integrates the most advanced optical units with a variety of interfaces and a
sophisticated, high-capacity matrix in one small,
low-cost package.
correct connectivity within the switch core, and between the I/Os and the switch. Since there is no blocking of the matrix and
every I/O card has direct independent access to it, the XDM-400 supports star, mesh, ring, and multiple topologies. Small in
size, the XDM-400 is only 500 mm high (three units can be accommodated in an ETSI rack). An amplifier/booster is available
for terminal sites.
Data is the main bandwidth growth driver; this section elaborates on ECI data solutions.
Virtual Concatenation
Improves on current models of contiguous concatenation by supporting much finer
granularity of circuit provisioning and management from the edge of the network. Right-sized pipes for packet services
(Ethernet in particular). Both higher order (STS-1
granularity) and lower order (VT1.5 level) are available, supporting a range of high- and low-speed service assignments.
1. Two methods of concatenation are defined: contiguous and virtual concatenation.
2. Both methods provide a concatenated bandwidth of X times Container-N at the path termination. The
difference is in the transport between the path terminations.
3. Virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the
individual VCs, and recombines them into a contiguous bandwidth at the end point of the
transmission. Virtual concatenation therefore requires concatenation functionality only at the path
termination equipment.
4. In the figure above, an AU-4-4c is passed through node B over lower order STM-1s. In this case, the AU-4-4c is
split into four VC-4s, and each VC-4 is switched independently using virtual concatenation.
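The split-and-recombine behavior in items 3 and 4 can be sketched as a toy model. The framing here is deliberately simplified (real VCAT carries multiframe and sequence indicators in path overhead per G.707); the point is that members travel independently, may arrive out of order, and are realigned by sequence number at the path termination.

```python
# Toy VCAT model (simplified framing, names are mine): split a wide
# payload into members, transport them independently, recombine in
# sequence-number order at the path termination.
def vcat_split(payload: bytes, members: int):
    """Byte-interleave the payload across `members` VCs, tagging each with SQ."""
    return [(sq, payload[sq::members]) for sq in range(members)]

def vcat_recombine(received):
    """Reassemble members that may arrive out of order (differential delay)."""
    members = sorted(received)            # realign by sequence number
    out = bytearray()
    for i in range(len(members[0][1])):
        for _, data in members:
            if i < len(data):
                out.append(data[i])
    return bytes(out)

payload = bytes(range(16))
members = vcat_split(payload, 4)          # e.g. an AU-4-4c carried as 4 VC-4s
members.reverse()                         # simulate out-of-order arrival
assert vcat_recombine(members) == payload
```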
The LAN delivers carrier-class Ethernet services over SDH/SONET, including: Transparent LAN services,
Shared LAN services, TDM services
The LAN is a unique data-aware SDH/SONET multiplexer with Layer 2 switching intelligence. It functions
as an STM-1/OC-3 ADM, TM or dual-homing protected TM that provides both TDM services and
10/100BaseT Ethernet services. Ethernet signals are mapped into n x VC-12 containers, from 2 Mbps
up to the full 100 Mbps, with a capacity of up to 2 x 155 Mbps over SDH/SONET. The LAN's scalable
architecture effectively expands optical SDH/SONET networks, helping operators provide new services and
tailor solutions for the needs of medium and large enterprises.
Configurable and manageable bandwidth, from 2 Mbps to 100 Mbps, with a wide range of interfaces:
2/6 x Fast Ethernet ports (10/100BaseT), 16/21 x E1 tributary ports, up to 2 x STM-1/OC-3 optical
aggregates.
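The n x VC-12 mapping can be sized with a simple calculation. The 2.176 Mbit/s figure below is the C-12 container payload rate; the exact usable rate depends on the mapping, so treat this as a rough sizing sketch:

```python
# Rough sizing sketch: how many VC-12 members a VC-12-nv pipe needs
# for a given Ethernet rate (assumes ~2.176 Mbit/s payload per VC-12).
import math

VC12_PAYLOAD_MBPS = 2.176

def vc12_members(ethernet_mbps: float) -> int:
    """Smallest n such that a VC-12-nv pipe carries the Ethernet rate."""
    return math.ceil(ethernet_mbps / VC12_PAYLOAD_MBPS)

for rate in (2, 10, 100):
    n = vc12_members(rate)
    print(f"{rate} Mbit/s Ethernet -> VC-12-{n}v "
          f"({n * VC12_PAYLOAD_MBPS:.3f} Mbit/s provisioned)")
```

This is why VCAT is described as right-sizing: a 10 Mbps service takes 5 VC-12s instead of a whole VC-3 or VC-4.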
High performance and availability:
Ethernet switch functionality
SDH and Ethernet protection and restoration mechanisms for continuous traffic transmission
High QoS for mission-critical data, VoIP and videoconferencing.
Support for diverse configurations and topologies:
ADM, Terminal Multiplexer (TM), dual homing terminal, regenerator, repeater
Ring, point-to-consecutive-point (chain) and point-to-point topologies
Protected and unprotected configurations.
The 1642 Edge Multiplexer is a full ADM operating at the 155 Mbit/s (STM-1) bit rate. It can be
configured as a Multi Line Terminal Multiplexer or as an Add/Drop Multiplexer for point-to-point and ring
applications.
reference with a maximum drift of 0.37 ppm per day. The accuracy of the local oscillator is 4.6 ppm.
The 1642 Edge Multiplexer can be used for transmission over G.652, G.653 and G.654 fibers. The main
applications can be identified in the following
areas:
Delivery of SDH/Ethernet services to customer premises
Local and metropolitan rings
point to point links with intermediate drop/insert and/or regeneration stations;
A comprehensive range of network elements for all transmission needs, from customer premises
applications to metropolitan networks, to long-haul and ultra long-haul terrestrial and trans-oceanic
applications. Everything under control of the same management system.
In addition to traditional transmission applications, OMSN delivers integrated ATM switching and Ethernet
switching capabilities.
OMSN guarantees full inter-working with the existing installed base in broad terms: from optical compatibility, to DCC (Data
Communications Channels) compatibility, to network management. In other words, existing networks can be upgraded with
new equipment without constraints.
The 1642 Edge Multiplexer offers Sub-Network Connection Protection (SNCP). This allows the 1642
Edge Multiplexer to be inserted directly into access
rings, or to be connected to access rings via point-to-point unprotected or protected links. SNCP also allows
end-to-end protection of SDH paths. Dual
hubbing to two distinct central office ADMs is also possible. The 1642 Edge Multiplexer can also offer 48V
power redundancy for equipment protection.
A Compact ADM-1 card concentrates 2xSTM-1 interfaces, SDH matrix, clock reference and equipment
control functions. 4 slots are dedicated to traffic ports, for 2Mb/s services and above. The system can be
configured for example as STM-1 Terminal Multiplexer with 8x2Mb/s plus 6xEthernet/Fast Ethernet for
office interconnect services or as STM-1 ADM with 112x2Mb/s.
It can be used to deliver 2 Mb/s and Ethernet/Fast Ethernet leased lines as well as higher-end services
such as 34 Mb/s, 45 Mb/s and STM-1, with a versatile and compact chassis ideal for enterprise locations. It
empowers the carrier's leased line service offering with strong and extensive SDH end-to-end path
control and monitoring capabilities.
The traffic units can be plesiochronous (PDH) or synchronous (SDH). The plug-in cards can be of the following types:
63x2Mbit/s unit
3x34/45Mbit/s switchable unit
4x140Mbit/s-STM-1 switchable electrical unit
4xSTM-1 electrical unit
4xSTM-1 electrical/optical unit
1xSTM-4 optical unit
1xSTM-16 optical unit
ISA-ATM switch unit (4x4 VC-4 or 8x8 VC-4 unit)
ISA-Ethernet rate adaptive unit
ISA-Gigabit Ethernet rate adaptive unit
Packet Ring Edge Aggregator 4xE/FE
Packet Ring Edge Aggregator 1xGbE
2xSTM-1 or STM-4 optical + central functions unit (named Compact ADM-4 unit)
1xSTM-16 optical SFP + central functions unit (named Compact ADM-16 unit)
The Compact ADM-4 and ADM-16 units provide the following functionality:
2xSTM-1 optical/electrical, or 2xSTM-4, or 1xSTM-1 + 1xSTM-4, or 1xSTM-16 (SFP)
Matrix function (32x32 for Compact ADM-4 at Low Order and High Order level, or 64x64 STM-1 for Compact ADM-16 at Low Order VC level)
CRU function
Equipment Controller function
All electrical traffic ports can be optionally protected in an N+1 configuration.
The matrix HW is also open to support VC2 cross connection capabilities in future software
releases.
AU4-4c and AU4-16c concatenated signals can also be cross-connected between any STM-4 and
STM-16 ports. All AUGs (STM-N Administrative Unit Groups) managed by the SDH matrix are
structured according to the standard ETSI mapping (1xAU4). The SDH VC matrix implements
single-ended SNCP/N (non-intrusive) and SNCP/I (inherent) protections. The protection mode may
be revertive or non-revertive. The SDH VC matrix can also provide up to 2 x 2-fibre MS-SPRing
protection at STM-16 with 16 STM-1 tributaries.
This feature brings flexibility at the network level, making it possible to balance traffic
from one part of the network to another, e.g. from business areas during the day to residential
areas during the night. For the mere cost of a standard 1696 Metro Span transponder, the
operator can benefit from a truly flexible OADM feature.
The 1696 Metro Span amplifiers support a full monitoring and measurement capability: external ports
are available to monitor both the input and output of the pre-amplifier and of the booster, and remote
measurements are available for the input and output powers of both the pre-amplifier and the booster.
The 1696 Metro Span amplifiers support an Automatic Gain Control (AGC) scheme to ensure hitless
in-service insertion or removal of channels. Amplifiers automatically self-adjust their points
of operation when the number of incoming channels changes, whether
due to a network upgrade or to a network failure.
1. It is important to understand the items listed above, which relate to protection switching, before proceeding to
the detailed explanation of ring protection systems.
2. They are basic and common not only to ring systems but also to linear protection systems.
3. Various combinations of the above items are possible, e.g.:
(PPS)-(1+1)-(Uni-)-(Non-revertive) this combination is called single
(LPS)-(1+1)-(Bi-)-(Non-revertive), etc.
1. In the drawing above, a thin line shows a path (VC-n or a channel in STM-N) and a pipe means a line
(STM-N signal or multiplex section).
2. Path Protection Switch
2.1 Protection switches are set up at the ends of each path. Against a line failure, protection
switching takes place at the ends of the affected paths. Against a failure in a single path, switching
is applied only to that path. Usually path performance monitoring initiates the switching: a line
failure results in a path failure, and it is the path failure detection that causes switching, not the line
failure detection.
3. Line Protection Switch
3.1 Switches are set individually for all channels of the STM-N at both ends of a section. Against a
line failure, all channels are switched to the corresponding channels of the protection line.
Although the switching is at a channel (path) level, it is equivalent to switching the line (fiber)
itself. The switching is triggered by section failure detection. Failure detection in a path
does not cause the line switching.
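The difference between the two switch types can be modeled with a toy sketch (illustrative only, not vendor behavior; function names are mine). PPS reacts per path to path-level failure detection, while LPS moves every channel of the section when the section itself fails:

```python
# Toy model: path protection switch (PPS) vs line protection switch (LPS).
def pps_actions(failed_paths):
    """PPS: switch only the paths whose own monitoring reports failure."""
    return {p: "switch to protection path" for p in failed_paths}

def lps_actions(section_failed: bool, channels):
    """LPS: on section failure, switch all channels to the protection line."""
    if not section_failed:
        return {}
    return {ch: "switch to protection line channel" for ch in channels}

# A single path failure triggers PPS on that path only ...
assert set(pps_actions(["path-7"])) == {"path-7"}
# ... while a section failure triggers LPS on every channel of the line.
assert len(lps_actions(True, range(16))) == 16
```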
1. In 1:1 or N:1 protection, when the protection line is not occupied by normal traffic, it can carry low
priority extra traffic. When a failure occurs in a working line, the extra traffic is removed from the
protection line and the line is used by the affected normal traffic.
2. This arrangement is called Stand-by Line Access (SLA).
3. A bidirectional switch is required.
4. Usually a revertive switch is applied for SLA, but for the 1:1 non-revertive type, SLA is also possible by
using a slightly more complicated configuration.
1. Unidirectional Ring
1.1 The normal routing of traffic is that one direction of a two-way connection uses the right (or left)
half of the CW (or CCW) line and the other direction uses the left (or right) half of the same CW
(or CCW) line. Thus both directions travel around the ring in the same direction and use
capacity (a channel in the STM-N) along the entire circumference of the ring.
1.2 The total number of connections in the ring cannot exceed the capacity of each section.
1.3 The unused line (in the above case, CCW) provides the protection capacity.
2. Bidirectional Ring
2.1 The normal routing of traffic is that both directions of a two-way connection are routed using the
same section(s) and node(s). The corresponding channel of the STM-N in the unused half (in the above case,
the left half) can be assigned to other connections which have no overlapping sections with each other.
2.2 The total number of connections in the ring can exceed the capacity of each section.
2.3 Protection capacity is provided by dividing the section capacity in half (for a 2-fiber ring) or by using
dedicated protection fibers (for a 4-fiber ring).
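The capacity difference between the two ring types can be checked with a small sketch (a hypothetical model; the section numbering and function names are mine). In a unidirectional ring every connection consumes a channel around the whole circumference, so the connection count cannot exceed the per-section capacity; a bidirectional ring reuses channels on non-overlapping sections:

```python
# Capacity sketch: per-section channel load in a unidirectional vs
# bidirectional ring. Sections are numbered 0..ring_size-1, section i
# sitting between node i and node i+1.
def sections_used(ring_size, a, b, bidirectional):
    """Sections occupied by a two-way connection between nodes a and b."""
    if not bidirectional:
        return set(range(ring_size))      # whole circumference is consumed
    span = [i % ring_size for i in range(a, a + (b - a) % ring_size)]
    return set(span)                      # only the route a -> b is consumed

def fits(ring_size, conns, capacity, bidirectional):
    """Check that no section carries more channels than its capacity."""
    load = {s: 0 for s in range(ring_size)}
    for a, b in conns:
        for s in sections_used(ring_size, a, b, bidirectional):
            load[s] += 1
    return max(load.values()) <= capacity

conns = [(0, 1), (2, 3)]                  # two non-overlapping connections
assert not fits(4, conns, 1, bidirectional=False)  # uni: both use every section
assert fits(4, conns, 1, bidirectional=True)       # bi: the channel is reused
```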
1. Revertive Switch
1.1 When the failure in the working system is repaired, the traffic that has been carried by the
protection system is switched back to the working system. This switching scheme is usually
used when SLA is employed, and also in an N:1 system, to vacate the protection system for the
next failure.
2. Non-revertive Switch
2.1 After the recovery of the failed working system, the traffic is not switched back to the working
system. When the protection system fails later, traffic is transferred to the working system. Therefore it
is not appropriate to name them working and protection, and often they are called the 0 and 1
systems.
2.2 Whenever switching is applied, unless a complicated hitless switching scheme is employed, a
data hit (a short period of errors) is inevitable. To avoid unnecessary errors this scheme is
important, especially in a 1+1 protection system.
3. Non-revertive 1:1
1. For the protection switching system, the manual operations listed above are possible. They
are used for maintenance purposes.
2. The traffic on a selected system is transferred to the protection system by the MSW (manual switch). When a failure is
detected on another system (SF, signal failure), the MSW is released and the failed system takes
over the protection system.
3. The FSW (forced switch) performs the same switching as the MSW, but it does not release
the protection system on another system's failure.
4. When the LKOW (lockout of working) is applied to a working system, it is not switched to the protection system even if it
fails. Under this status neither MSW nor FSW can be applied.
5. When the LKOP (lockout of protection) is applied to the protection system, any switching to it is refused, whether automatic (SF)
or manual (MSW and FSW).
6. The priority order among operations is the order above: the larger the number, the higher the priority. There is no
priority difference between LKOP and LKOW.
1. In the revertive switching mode, to prevent frequent back-and-forth switching between the working
and protection systems because of an intermittent fault, the repaired working system must be error-free
before traffic is returned. To ensure this, the wait-to-restore (WTR) time is set. It is on the order of
5-12 minutes with 1-second increments.
2. To prevent chattering of the protection switch due to an intermittent failure, the recovered working channel must
be fault-free for a fixed period of time before the traffic is transferred back from protection.
3. If a fault is detected on the waiting working system during the WTR period, the WTR timer is reset.
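The WTR behavior above can be sketched as a minimal timer model (the 300-second value is simply the low end of the 5-12 minute range; event format and names are mine):

```python
# Minimal WTR sketch: in revertive mode the recovered working line must
# stay fault-free for the full WTR period before traffic reverts; any
# fault during WTR resets the timer.
def revert_time(events, wtr=300):
    """events: time-ordered (t, 'clear'|'fault') on the working line.
    Returns the time at which traffic reverts, or None if still waiting."""
    deadline = None
    for t, kind in events:
        if kind == "clear":
            deadline = t + wtr            # start (or restart) the WTR timer
        elif kind == "fault":
            deadline = None               # fault during WTR: timer is reset

    return deadline

# Fault clears at t=0, recurs at t=100, clears again at t=150:
# the WTR restarts, so traffic reverts at 150 + 300 = 450.
assert revert_time([(0, "clear"), (100, "fault"), (150, "clear")]) == 450
```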
1. Ring systems are classified into five types based on the employed switching method (unidirectional or bidirectional
switch, and path or line switch), the routing of two-way traffic (unidirectional or bidirectional ring) and the fiber number (2 or 4 fibers).
2. Theoretically any combination of switching and routing direction is possible, but in ring systems the uni-uni and bi-bi
combinations above are used.
3. SNCP-ring (Unidirectional)
3.1 Also called 2-fiber Unidirectional Path protection Switch Ring (2F-UPSR)
3.2 No switching protocol is required. This is a simple and fast switching scheme.
4. MS Dedicated Protection Ring
4.1 2-fiber Unidirectional Line protection Switch Ring (2F-ULSR)
4.2 A protocol using K1 and K2 in the MSOH is required; it is not yet standardized by ITU-T.
5. SNCP-ring (Bidirectional)
5.1 2-fiber Bidirectional Path protection Switch Ring (2F-BPSR)
5.2 The ring must be controlled at a path level and a protocol using K3 or K4 in the POH is required. It is not yet standardized by ITU-T.
6. 4F MS-SP ring
6.1 4-fiber Bidirectional Line protection Switch Ring (4F-BLSR)
7. There is no limitation on the number of nodes in an SNCP-ring from the point of view of the ring control, but the STM-N capacity determines the limit.
8. Operators can selectively provision paths to be protected or unprotected. An unprotected path will only use capacity in the selected route
(semicircle). The other route can provide additional unprotected path(s).
1. The general SNCP has a wider definition than the previously explained SNCP-ring. The SNCP-ring is one
type of SNCP.
2. The SNCP can have any type of physical network structure (i.e. meshed, ring or mixed) between
PPSs. The SNCP-ring (2F-UPSR) is limited to the ring configuration.
3. A path can be subdivided into subnetwork connections (SNC) as described above. The CP is a
connection point of SNCs. The TCP is a path termination point.
4. The SNCP can be used to protect a portion of a path (i.e. an SNC) by setting PPSs at two CPs (Case
3) or at a CP and a TCP (Case 2), or the full end-to-end path by putting PPSs at two TCPs (Case 1).
5. To make the SNCP flexible, three types of PPS should be available: Aggregate-Aggregate
PPS, Aggregate-Tributary PPS and Tributary-Tributary PPS. Implementation of all or some of them in
equipment depends on its design.
6. The SNCP-ring (2F-UPSR) uses only the Aggregate-Aggregate PPS.
6. The maximum number of nodes is limited to sixteen (16). This is determined by the control protocol (standard): four bits
in K1 are assigned to indicate the message destination, resulting in a maximum of 16 nodes.
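The 4-bit destination field explains the 16-node ceiling directly. A byte-packing sketch of the idea follows; the exact bit layout here (request code in the upper nibble, destination node ID in the lower) is an assumption for illustration, and G.841 defines the authoritative K1/K2 coding:

```python
# Byte-packing sketch (assumed layout): request code in the upper four
# bits of K1, destination node ID in the lower four.
def pack_k1(request: int, dest_node: int) -> int:
    if not 0 <= dest_node <= 15:
        raise ValueError("ring node IDs are limited to 0..15")
    return ((request & 0x0F) << 4) | (dest_node & 0x0F)

def unpack_k1(k1: int):
    return (k1 >> 4) & 0x0F, k1 & 0x0F

k1 = pack_k1(request=0b1011, dest_node=9)
assert unpack_k1(k1) == (0b1011, 9)
assert 2 ** 4 == 16        # 4 destination bits -> at most 16 ring nodes
```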
1. Against a total section failure (both working and protection), the traffic on the working fiber is looped
to the corresponding channel of the opposite direction protection fiber at both ends of the failed
section (B and C).
2. The looped traffic passes through both the originating and destination nodes and is looped back to the
working fiber at the other end of the failed section (C and B).
3. The same applies to all other channels which are used by other paths, i.e. the loop at B and C and the
pass-through at other nodes. The result is just as if the traffic were looped at the STM-N level, or the fiber. That
is why this system is categorized as a line switch.
4. The line switch takes place at the failed-section side of the node. That is, at node C it is not at the
side facing node D but at the side facing node B.
5. Now the long route (C-D-E-F-A-B) is used as a protection line for the short route (B-C). In this way, the
protection ring is shared by all sections (MS) in the ring.
6. This switching against a total section failure is named the Ring Switch.
7. Unaffected paths, e.g. paths between C-D, E-A etc., remain on the working fiber.
8. Against multiple section failures, not all paths can survive; some of them are lost.
9. When a node failure occurs, e.g. at node C, ring switches take place at node B and node D, at the sides facing
node C.
1. Against a section failure on the working line only, all traffic carried by the failed section is
transferred to the protection fiber of the same section. This switching is named the Span Switch.
2. When multiple section failures occur, and if they are all working-line-only failures, span
switches are applied to all of them and all normal traffic is protected. The 4F MS-SP ring can thus
protect against multiple failures of a certain type.
3. This means network operators can expect higher survivability from 4F MS-SP than from 2F MS-SP, in addition
to higher capacity. (cf. the 2F MS-SP explanations)
4. Extra traffic carried by SLA that passes through the failed section is removed.
5. The system automatically selects the ring switch or the span switch depending on the failure mode.
Maintenance crews do not have to intervene.
6. In both the ring and span switching modes, all traffic is switched to protection. Unlike the SNCP ring,
it is impossible to set protected paths selectively; all paths in the MS-SP ring are always protected.
1. Against a section failure, loop switching takes place at both ends of the failed section (B and C).
The traffic carried by a channel of the working half of the STM-N is transferred to the corresponding
channel of the protection half of the opposite-direction STM-N. It travels the long route passing the originating,
destination and other nodes (A, F, E and D). At the remote end it is switched back to the working half
of the opposite direction (the original fiber).
2. There is no Span Switch for a 2-fiber ring. Unlike 4F MS-SP, there is no survivability against multiple failures.
3. Other points are the same as for the 4-fiber ring.
1. Each node on the MS-SP ring must be assigned an ID, a number between 0 and 15. The ID
assignment is not necessarily in order. The SNCP ring does not require IDs.
2. In the protocol exchange, the node ID is used to indicate a message destination.
3. The cross-connection table of each node must be provided with information that shows the originating
and destination node of each path, indicated by the node ID.
4. Each node must have knowledge of the IDs assigned to all other nodes in the ring and their sequence.
The node ID map, which is provisioned to each node, provides this information.
5. When nodes are added to the ring, the node ID map must be revised.
1. The 4F/2F MS-SP ring has the possibility of making a misconnection when a node fails.
2. Misconnection
2.1 Two paths, A-C and C-F, are terminated at the failed node (C) and they use the same channel of
the STM-N. In this case, their stand-by channel in the protection capacity is the same one.
2.2 When node C fails, ring switches take place at nodes B and D. Node B does it to save the A-C path
and node D to save the C-F path. And they are switched to the same protection channel.
2.3 The result is that the two previously independent paths are changed into a single A-F connection.
2.4 This is inevitable, and the system sends out an AIS (Alarm Indication Signal) on the misconnected
paths in order to avoid inconvenience.
3. No misconnection
3.1 When two paths that use the same channel of the STM-N are not terminated at the failed node
(through connection), e.g. A-E and E-F, the misconnection does not occur. No AIS is sent out on those
paths.
4. This control is named Squelch Control. Its procedure is shown at the right of the drawing.
5. For the squelch control, the squelch table (next slide) must be provisioned to each node.
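The squelch decision can be sketched as follows (an illustrative data model, not the actual table format): if a path on a shared protection channel terminates at the failed node, looping it back would misconnect it with the other terminating path, so the channel is squelched with AIS instead.

```python
# Squelch control sketch: decide, per channel, whether a ring switch may
# loop the traffic back or whether AIS must be inserted.
def squelch_decision(squelch_table, failed_node):
    """squelch_table: {channel: (src_node, dst_node)} per the node ID map."""
    actions = {}
    for channel, (src, dst) in squelch_table.items():
        if failed_node in (src, dst):
            actions[channel] = "insert AIS"   # path dies at the failed node
        else:
            actions[channel] = "loop back"    # through path: safe to restore
    return actions

table = {1: ("A", "C"), 2: ("A", "E")}        # two paths on shared channels
actions = squelch_decision(table, failed_node="C")
assert actions[1] == "insert AIS"             # A-C terminates at failed C
assert actions[2] == "loop back"              # A-E only passes through C
```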
1. There are two kinds of AU-4 (VC-4). One type carries low order VCs (LOVC: VC-3, VC-12 etc.) and is
called a VC-organized AU. The other is a pure VC-4 that carries, for example, a 140 Mb/s signal.
2. When the cross-connect level of a VC-organized AU-4 is set to the LOVC level, i.e. other than VC-4, the
NE must have information (the squelch table) indicating where its cross-connect level is set to the LOVC level again, in
both the east and west directions. It does not matter whether the LOVC cross-connection is add/drop,
through or mixed, both at the node itself and at the other side of the AU-4.
3. When the cross-connect level is the VC-4 level, regardless of whether it is VC-organized or not, the squelch table is not
required, because its terminating node is clear from the cross-connect map.
1. If the MS-SP ring has sections with very long lengths, e.g. a transoceanic submarine cable, and the
ring switch is applied as in a terrestrial system, the traffic must cross an ocean back and forth under the
failure condition, resulting in very long delay. To solve this problem a different algorithm, called the
transoceanic application, is standardized (also in G.841).
2. In this algorithm, instead of the ring switch, the affected path is switched to the corresponding
channel of the protection capacity of the opposite direction at the path terminating node. The same
occurs at all nodes that have affected paths. Actually, this is not a line switch but a path switch. The
result is the same as the SNCP bidirectional ring, but the protocol does not use K3 or K4 of the POH and the
algorithm is different.
3. When a section failure occurs, B and C exchange the protocol via the protection fiber of the long route
(B-A-F-D-E-C) using K1 and K2. Nodes A, F, E and D can monitor the exchange, decide which of
their paths are in trouble and take the above action at the path level. For this protocol the DCCr is
also used to exchange the path mapping information.
4. Against a working-line-only failure on the 4F MS-SP, span switch(es) are applied in the same
way as in the terrestrial system.
1. When a path is laid over two rings, it can survive a ring failure thanks to the self-healing function of
the ring. But if the failure is on the link that connects the two rings, the path will be lost.
2. By connecting two rings with two links and diverting the traffic to the second link against a first-link
failure, the survivability of the path can be improved. This method is named Inter Locked Ring (ILR).
3. The ILR is possible for any combination of two rings: MS-SP~MS-SP, SNCP~SNCP or MS-SP~SNCP.
4. The two nodes where the links are connected are named the primary node and the secondary node. The
two nodes are not necessarily neighboring nodes. It is possible to put nodes between the primary and
the secondary.
5. The ILR setting is on a per-path basis. It is not necessary to set ILR for all paths that pass over the link.
1. There are two types of MS-SP ~ MS-SP ILR, depending on whether the working capacity or the
protection capacity connects the primary and the secondary node. This drawing shows the on-working
case.
2. MS-SP ~ MS-SP ILR (P-S on-working)
2.1 At the transmitting-side primary node the traffic is branched onto the first and second link
routes. At the receiving-side primary node a service selector (SS) is installed, and it chooses a
normal signal carried by the first or the second link. Either a failure on the first or on the second link
can be protected. The path connections between the four nodes (two primary and two secondary
nodes) are the same as in the SNCP. The SS works as the PPS.
2.2 The right-side drawing explains the protection against a section failure on the ring. The usual ring
switch or span switch (not shown) takes place. Thus a double failure on the link and the ring
can be protected.
3. The connection between the primary and the secondary nodes uses the working capacity. A demerit
of this configuration is that it consumes a part of the working capacity for protection purposes and
reduces the efficiency of the ring.
1. This drawing shows the protection switching against a primary node failure. This is also the same as a
usual ring failure.
The drawing shows the SS transfer and switching against a primary node failure. This is equivalent to
the link and ring failures occurring at the same time.
1. On the SNCP ring, the input traffic from a link (tributary) line is transmitted in only one direction, the
first link to CCW and the second link to CW.
Therefore, depending on the mode of a double failure on the ring and the link, there are cases where the
path cannot be protected. The drawing above explains the protected and unprotected situations.
2. The MS-SP ring does not have this inconvenience.
1. The ILR between the SNCP ring and the MS-SP ring is also possible. As shown above, each ring
sets its own ILR connection.
2. The drawing shows only the on-working type ILR for the MS-SP, but the on-protection type ILR can be used
without affecting the setting of the SNCP ring side.
functions through:
In this section, the use of the INC-100MS Network Node Manager (NNM) Graphic User Interface on the
Client will be explained, for operators' ease of use and time-effective management. Included in this
section will be:
NNM Screen Layout.
Layout of the initial NNM screen elements and their functions.
Window Navigation.
Explanation of the various elements common to graphic windows, such as menu bars, tool bars,
adjusting their size, etc.
Window Tool Bar and MO Symbols.
Explanation of the unique tool bar buttons in the Network Diagram and the various MO symbols
used for graphic system access.
Symbol Location Setup.
Setting user preferences specifying the location of MO graphic symbols in the multi-layered
Before putting the INC-100MS in service, all objects in the customer's SDH network must be registered as
Managed Objects (MO) in the Server's Management Information Base (MIB). All of this data is provided to
the MIB through the various Configuration Management functions. Procedures include:
MO Registration.
MO Modification and Deletion.
Network Map Update.
Path Creation.
Path Information Retrieval.
Registration of the MOs creates data entries in the MIB database, with each item linked to a graphic
symbol on the NNM Client interface for ease of operator control and access using mouse point-and-click
functions.
The MOs are arranged in a hierarchical order, from the largest MO down to the smallest. The lower level MOs are nested
within the management areas of the higher order MOs. The order of their registration will be presented on the following
pages.
Event Journal: Displays up to 1,000 current events in real time. The list will be discarded upon closing
and a new one reflecting the latest events will be created each time the window is reopened.
Alarm Status: Displays the status of events including the time of occurrence and subsequent clearance (even
after deleting the alarm occurrence line). If an alarm is terminated, the recovery notification deletes the
occurring alarm.
Current Alarm Copy: Alarm registries can be transferred/updated from an NE or Server to the Client.
This is only necessary when a mismatch between NE/Server, Server/Client is found.
Alarm Inhibit: Alarm notification can be inhibited for MOs such as Domains, Offices, and NEs. Alarms will
still be sent from the inhibited object, but the Server will discard them upon receipt. This is useful for
items under testing prior to commissioning.
Event Log: Displays the most recent 10,000 items in the event log (Alarm/Events history) of a Server. This log can only be
deleted by the Administrator (from version 1.8).
Current PM information will retrieve the Performance monitoring information stored in the NE for display.
The NE monitors all traffic interfaces continuously and stores the results in memory. Information is monitored
in each frame (8,000/second) and averaged and stored each 15 minutes.
The recorded information every 15 minutes is then averaged each hour and stored in a separate memory
each 24 hours.
The capacity of these 2 memories is the current period plus the previous 32 periods for 15 minutes (= 8
hours) and the current and previous 7 days (one week) for the 24-hour memory.
When accessing the Current PM information from the NE, the operator can select to view the information in
either 15-minute or 24-hour increments, referred to as granularity. (both memories are retrieved
simultaneously, selection only zooms the display in or out, hence the term granularity)
Whenever a problem needs to be monitored for a period longer than one week, the operator can create a
Long-Term PM schedule, which can be set up to monitor only a specific object. Information is then
automatically retrieved from the NE and stored in the Server for later examination by the operator, again at
either 15-minute or 24-hour granularity.
As many as 50 objects can be scheduled for Long Term PM monitoring at the same time, and once the
situation is resolved, the schedule can be deleted. A schedule can be set to run for a period as long as 3 years.
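The retention figures above are easy to verify arithmetically (a trivial sketch; the constant names are mine):

```python
# Retention arithmetic from the text: 15-minute bins cover the current
# bin plus 32 previous ones, and 24-hour bins the current day plus 7.
BIN_15MIN, BIN_24H = 15, 24 * 60          # bin sizes in minutes

def coverage_hours(bins: int, bin_minutes: int) -> float:
    """Total history, in hours, covered by `bins` bins of `bin_minutes`."""
    return bins * bin_minutes / 60

# The previous 32 quarter-hour bins span 8 hours of history ...
assert coverage_hours(32, BIN_15MIN) == 8
# ... and the previous 7 daily bins span one week.
assert coverage_hours(7, BIN_24H) == 7 * 24
```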
As well as using the INC-100MS for managing the SDH networks, certain operations must be carried out
on the INC-100MS system itself to guarantee its availability and the integrity of the data that it manages and
gathers from the SDH network. Among these operations are such actions as:
Checking the Server (pair) Status and availability.
Assigning regular/standby identities in redundant systems.
Manual redundancy switching.
For software upgrades.
For maintenance procedures.
Database (MIB) Administration including:
Database backup (manual and scheduled)
Database copying.
Database restoration.
Status views.
NE connectivity.
From the Main Menu, other operation functions are supported as shown on the following pages (select
from the View, System, or Utility menus).
Here we must consider the telecommunications network as a whole. In the area of the subscriber
network nodes, the users are connected to the exchanges (DSC) via the user network interface (UNI).
Instead of these central switching points, local cross-connects (DXC) should be used. In a PDH network a
fixed network is formed by point-to-point links. The channels are switched via these links. Signals
from other networks use this transmission technology via flexible multiplexers up to 2 Mbit/s.
The growth in data traffic is much higher than in voice communication. The greatest demand lies in the
area of high bit rate access from the subscriber area. Such transmission capacity should be available at
a reasonable cost and on short notice. The Terminal Multiplexer (TM), with diverse interfaces, feeds this
traffic into the SDH network directly or via Add and Drop Multiplexers (ADM), which are configured in a
ring network (backbone). This ring is formed by two fiber optic cables with various back-up switching
possibilities. The Network Management (TMN) sets up the necessary connections.
1. Simple: add or replace plug-in modules. This can be performed while the system is in operation,
without affecting traffic in any way.
2. Optimization of aggregate module assignment. Two aggregate modules are associated with each
Main Cross-connect Control (MXC) card. Each module supports a bandwidth of up to 2.5 Gbps.
3. Optimization of tributary I/O slot assignment. Eight slots can accommodate different I/O modules
(PIM, SIM and EIS-M).
4. In-service scalability of SDH links. An optical connection operating at a specific STM rate can be
upgraded from STM-1 to STM-4 or STM-16.
5. The XDM-100 supports mesh, ring, star, and linear topologies. All system configurations are controlled
by a single network management system with end-to-end service provisioning, from E1 to STM-16 to
DS-3.
6. The XDM-100 can be configured to operate as:
Single ADM/TM
Multi-ADM/TM
STM-1 optical interface (155 Mbps)
STM-4 (622 Mbps); STM-4c (ATM/IP 622 Mbps)
STM-16 (2.5 Gbps); STM-16c (ATM/IP 2.5 Gbps)
10/100BaseT
Gigabit Ethernet (GbE)
High-order transmission paths for high-order and low-order sub-networks and for IP networks (for
example, LAN-to-LAN connectivity: GbE to GbE)
10/100BaseT
75/120 Ω BALUN
DDF CABLES.
OPTICAL Tx POWER
CONNECT LAPTOP TO ADM AND LOG ON TO THE ADM IN ADMINISTRATION MODE.
CONNECT POWER METER TO THE Tx PORT OF ADM (AS SHOWN IN SLIDE NO: 1)
ENTER THE HARDWARE MENU
IN THE HARDWARE MENU, ENTER TEST & MAINTENANCE (AS SHOWN IN OPERATE1, OPERATE2)
SEE THE LASER STATE OF EACH STM-1 CARD
AND MAKE FORCED LASER ON
CONNECT THE Tx PORT OF ADM TO THE OPTICAL METER AND NOTE THE OPTICAL POWER READING.
OPTICAL Rx SENSITIVITY
CONNECT DTA AND OPTICAL ATTENUATOR
PREPARE SETTINGS IN DTA AS SHOWN IN SETTINGS
INCREASE ATTENUATION UNTIL ERRORS OCCUR IN DTA; NOTE THE ATTENUATION IN THE ATTENUATOR AS (AT1)
FURTHER INCREASE IN ATTENUATION CAUSES ERRORS UP TO 1E-6 (AS SHOWN IN OPTSEN2); NOTE THE ATTENUATION
IN THE ATTENUATOR AS (AT2)
CONNECT THE ATTENUATOR OUTPUT TO THE OPTICAL METER AND NOTE THE OPTICAL POWER READING (OPT1)
CLOCK FREQUENCY
ADM 2Mb CLOCK OUTPUT IS AVAILABLE IN ADM WITH A 15-PIN CONNECTOR.
IN THAT, THE 1ST PIN IS GROUND AND THE 13TH PIN IS OUTPUT.
CONNECT ADM TO FREQUENCY COUNTER
TO GET THE CLOCK, THE OPTICAL FIBRE BETWEEN TWO ADMs SHOULD BE CONNECTED
DEFINING THE SYNCHRONOUS CLOCK IS SHOWN IN OPERATE, CLOCK1, CLOCK2, CLOCK3
IN THE FREQUENCY COUNTER WE CAN SEE THE CLOCK OUTPUT OF 2048 kHz ±0.005
PARAMETERS TO BE CHECKED
PULSE WIDTH (ns)
RISE TIME (ns)
FALL TIME (ns)
OVERSHOOT (%)
UNDERSHOOT (%)
LEVEL (dB)
The graph first shows an ideal pulse train, i.e. periodic transitions equally spaced in time.
Jitter is caused when these pulses deviate from these ideal positions in time.
This phase deviation is plotted as a function of time, and this is the jitter amplitude of the signal. Phase
lead is assigned a positive amplitude and phase lag a negative amplitude.
It is worth noting that when we talk about jitter we are talking about phase variations of 10 Hz and
above. Phase variations below 10 Hz are called wander. We will say some more about wander later in
the presentation.
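The phase-deviation idea above can be sketched numerically. The following is a minimal illustration with made-up edge deviations, assuming an E1 (2048 kbit/s) bit rate for the unit interval; it is not tied to any particular instrument.

```python
# Sketch: jitter as the phase deviation of pulse edges from their
# ideal positions, expressed in unit intervals (UI).
# The bit rate and deviation values are illustrative only.

UI = 1.0 / 2_048_000        # one unit interval of an E1 (2048 kbit/s) clock

ideal_edges = [n * UI for n in range(8)]          # equally spaced transitions
deviations_ui = [0.0, 0.05, -0.03, 0.10, -0.08, 0.02, 0.0, -0.01]
actual_edges = [t + d * UI for t, d in zip(ideal_edges, deviations_ui)]

# Phase deviation of each edge in UI: positive = phase lead, negative = lag.
phase_ui = [(a - i) / UI for a, i in zip(actual_edges, ideal_edges)]

# Peak-to-peak jitter amplitude.
jitter_pp = max(phase_ui) - min(phase_ui)
print(f"peak-to-peak jitter: {jitter_pp:.2f} UI")   # 0.10 - (-0.08) = 0.18 UI
```

Peak-to-peak UI is exactly the quantity noted in the jitter measurements later in this section.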
Jitter can be generated by circuit components. For example, oscillators are widely deployed in telecom
circuit design and these circuits all have a phase noise characteristic depending on the quality of the
design and components used.
Also where logic circuits interface, jitter can be produced due to transition thresholds. These introduce
small phase variations between logic circuits.
This slide shows a typical transmission path starting in the PDH world possibly carrying voice traffic,
possibly data, maybe digitised video traffic.
The symbols represent bi-directional devices since we normally need transmission in both directions.
The Add/Drop muxes at the ends of the path are therefore performing both a multiplex and a demultiplex
function between PDH and the SDH or SONET world. The digital cross-connect mux allows the
tributaries to be routed onto other similar paths so that the network can be electronically configured
rather than hard-wired to a fixed configuration.
The boxes with R in them represent regenerators in the optical transmission paths, which are used to
increase the signal-to-noise ratio of the signal when it is transported over long distances.
In general we don't see much jitter on the signals transmitted in SDH or SONET, but jitter is imparted to
the PDH traffic by indirect effects.
To test all these types of jitter different classes of jitter tests have evolved.
Here is a summary of the different types of jitter tests used in SONET/SDH network.
Let's now look at how each test would be performed. Starting with jitter tolerance.
Jitter tolerance is a measure of how well an interface can tolerate data with applied jitter before the
interface can no longer accept the data error-free.
Two types of test templates (or jitter masks) are used for jitter generation.
The first is the standard Bellcore/ITU-T mask, which is applied to the interface. The interface must be
able to accept this jitter mask and remain error-free.
The second mask applies the maximum amount of jitter possible to the interface before errors occur.
This is called the maximum permissible jitter and is now the preferred jitter tolerance test.
The next important jitter test is jitter transfer. This is a measure of the jitter gain across a network
element, typically a regenerator or repeater.
Jitter transfer is the ratio of the measured jitter out to the applied jitter in, as a function of frequency.
Jitter transfer is mostly used to characterise a network element and ensure it does not contribute
significant amounts of jitter to the network.
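The jitter gain just described is normally expressed in decibels. A minimal sketch (the function name is mine, not an instrument API):

```python
import math

def jitter_transfer_db(jitter_in_ui: float, jitter_out_ui: float) -> float:
    """Jitter gain across a network element in dB:
    20 * log10(Jout / Jin). Negative values mean attenuation."""
    return 20.0 * math.log10(jitter_out_ui / jitter_in_ui)

# A well-behaved regenerator should attenuate jitter above its loop bandwidth:
print(jitter_transfer_db(1.0, 0.5))    # about -6.02 dB (attenuation)
print(jitter_transfer_db(1.0, 1.05))   # about +0.42 dB (slight gain)
```

In a real jitter transfer test this ratio would be evaluated at each frequency point of the applied jitter sweep.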
Output jitter is perhaps the simplest jitter measurement, but it is most important that the correct
measurement filters etc. are used. It is the measurement of jitter at the output of the network; no
stimulus is applied by the test equipment. Tests of output jitter from individual network elements or
components (often termed jitter generation) are also performed in development or production.
CONNECT DTA
PREPARE SETTINGS IN DTA AS SHOWN IN FIGURE SETTINGS
THE TESTING DURATION WILL BE ONE MINUTE, FOR EACH E1 TWO FILTERS (LP + HP1, LP + HP2 ) TO BE USED, JITTER VALUE IN TERMS OF
PEAK TO PEAK TO BE NOTED (AS SHOWN IN MPJTR1,MPJTR2)
Here we simulate pointer movements generated from the NE1 in the last test. We use them to check
that a Line Terminal or ADM can extract a tributary signal with acceptable jitter in the presence of pointer
movements.
A pointer movement at STM-1 indicates a 3-byte or 24-bit movement of the payload. The resulting spike
of 24 UI of jitter would send any PDH network into spasm. NEs have buffers which, hopefully, smooth
out this spike to acceptable levels.
There are specific pointer movements which simulate worst case conditions on the network and can be
found in ITU-T G.783.
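One of the G.783 worst-case patterns is commonly referred to as the "87-3" sequence: 87 consecutive pointer adjustments followed by 3 omitted adjustment opportunities. The sketch below generates such a sequence; treat the pattern details here as an assumption and take the normative sequences from G.783 itself.

```python
# Hypothetical generator for an "87-3" style pointer test sequence:
# 87 pointer adjustments in one direction, then 3 opportunities with
# no movement, repeated. Consult ITU-T G.783 for the normative patterns.

def pointer_sequence_87_3(periods: int, direction: int = +1):
    """Yield +1/-1 for a pointer adjustment, 0 for no movement."""
    for _ in range(periods):
        for _ in range(87):
            yield direction
        for _ in range(3):
            yield 0

seq = list(pointer_sequence_87_3(periods=2))
print(len(seq), sum(1 for s in seq if s))   # 180 opportunities, 174 movements
```

A test set replays such a sequence at the SDH side while measuring the jitter on the extracted tributary.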
For wander measurement we are not able to create a jitter-free reference using the narrowband PLL that
we looked at in the receiver, because the loop time constants would be impossibly slow. However, we
can perform measurements relative to an externally supplied reference clock signal.
We could use the same phase detector as is used for jitter measurement, but the amplitudes of wander
are many orders of magnitude greater than those encountered as jitter. So, rather than prescaling the
PSD with larger dividers in order to achieve the amplitude range, it is more convenient to measure wander
in the time domain (in a similar way to HP's time interval analyzer). This is done by sampling the
wandered clock signal and assessing, piecewise, what phase step is implied by the small offsets in
frequency in each measurement sample. By adding the little delta phase steps together we can
reconstruct the wander signal.
The slide shows a half sine wave being reconstructed piecewise.
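The piecewise reconstruction can be sketched as a numerical integration: each sample contributes a phase step of frequency offset times sample period. All values below are illustrative (a 0.05 Hz sinusoidal wander, well below the 10 Hz jitter/wander boundary).

```python
import math

# Sketch of piecewise wander reconstruction: each measurement sample
# implies a phase step delta_phi = frequency_offset * sample_period;
# summing the little steps rebuilds the wander waveform.

SAMPLE_PERIOD = 0.1    # s, one sample every 100 ms
WANDER_FREQ = 0.05     # Hz, well below the 10 Hz jitter/wander split
AMPLITUDE = 0.5        # peak wander amplitude (arbitrary phase units)

# Simulated per-sample frequency offsets of a sinusoidal wander
# (the derivative of a sine is a cosine):
offsets = [AMPLITUDE * 2 * math.pi * WANDER_FREQ
           * math.cos(2 * math.pi * WANDER_FREQ * k * SAMPLE_PERIOD)
           for k in range(200)]

# Integrate the delta-phase steps to reconstruct the wander signal.
phase, reconstructed = 0.0, []
for f in offsets:
    phase += f * SAMPLE_PERIOD
    reconstructed.append(phase)

# The reconstruction peaks near the original 0.5 amplitude.
print(f"reconstructed peak: {max(reconstructed):.2f}")
```

The small discrepancy from 0.5 is the discretisation error of summing finite steps, which shrinks as the sample period shortens.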
A further complication to note is that we need to exclude the small frequency changes caused by phase
jitter as these would cause aliased jitter waveforms to be reproduced on top of our reconstructed wander.
To filter out the jitter we use a similar narrowband PLL to that used in the jitter receiver.
BER is measured as a function of the signal-to-noise ratio; this gives a means of comparing different
modulation schemes. The dominant noise contributions may not be in the receiver (source noise or
channel noise). Usually, SNR is proportional to Q.
In optical systems, we measure BER as a function of mean received optical power (ROP), the quantity
measured by a power meter, if the dominant noise source is thermal noise contributed by the receiver
and the extinction ratio is high (i.e. P1 >> P0).
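The SNR/Q relationship has a standard Gaussian-noise closed form, BER = ½ erfc(Q/√2). A minimal sketch:

```python
import math

def ber_from_q(q: float) -> float:
    """Gaussian-noise approximation relating Q-factor to bit error ratio:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 corresponds to a BER of roughly 1e-9, a common acceptance target.
for q in (3.0, 6.0, 7.0):
    print(f"Q = {q:.0f}: BER ~ {ber_from_q(q):.1e}")
```

Because BER falls so steeply with Q, small improvements in received optical power translate into orders of magnitude in error performance.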
CONNECT DTA
PREPARE SETTINGS IN DTA AS SHOWN IN SETTINGS
START TESTING BY PUTTING THE DTA RESULTS AS PDH RESULTS CUMULATIVE (AS SHOWN IN BER)
TOTAL TESTING TIME IS 48 Hrs.
Telcordia and ITU-T define a hierarchy of alarm conditions. Full compliance testing of an NE's alarm
handling entails verifying the alarm detection and de-activation thresholds, plus the appropriate
responses. This must be done for each alarm supported by the NE.
To aid the rigorous test of an alarm detection/de-activation threshold OmniBER OTN generates a
precise 3-stage alarm on/off sequence:
The sequence consists of:
A user-programmable starting state (alarm on/off).
The test condition, which is a single burst of the alarm on or off (opposite to the starting state). The
duration of this pulse is either equal to or just below the threshold.
A repeating on/off sequence designed to hold the NE in the alarm state it enters as a result of the test
condition. The on/off sequences are programmed to be below the expected alarm detection and
de-activation thresholds.
For example, let us look at Line-AIS. An NE must enter the Line-AIS state on receiving 5 consecutive
frames containing the Line-AIS signal and exit on receiving 5 consecutive frames without it.
First check that the NE does not enter AIS when only 4 consecutive frames are received. Then send a
holding pattern of 4 on and 1 off to check that the state is not entered.
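The 3-stage sequence and the Line-AIS example can be modelled in a few lines. This is an illustrative model of the stimulus only (frame-level booleans, hypothetical function name), not the OmniBER's actual sequencing engine.

```python
# Model of the 3-stage alarm on/off stimulus: starting state, a single
# test burst (opposite state), then a repeating holding pattern kept
# below both the detection and de-activation thresholds.
# Illustrative only; True = alarm signal present in a frame.

def alarm_test_sequence(start_state: bool, burst_len: int,
                        hold_on: int, hold_off: int, hold_reps: int):
    seq = [start_state] * 10                 # stage 1: starting state
    seq += [not start_state] * burst_len     # stage 2: single test burst
    for _ in range(hold_reps):               # stage 3: holding pattern
        seq += [start_state] * hold_off
        seq += [not start_state] * hold_on
    return seq

# Line-AIS example: threshold is 5 consecutive frames, so a 4-frame
# burst held with a 4-on / 1-off pattern must NOT trigger the alarm.
seq = alarm_test_sequence(start_state=False, burst_len=4,
                          hold_on=4, hold_off=1, hold_reps=5)

# No run of alarm frames ever reaches the 5-frame detection threshold.
longest = run = 0
for frame in seq:
    run = run + 1 if frame else 0
    longest = max(longest, run)
print(longest)   # 4, below the 5-frame threshold
```

Swapping burst_len to 5 would give the complementary test: the NE must enter AIS, and the holding pattern then keeps it there.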
Moving on from these tests, SDH elements have built in performance monitors for regenerator (B1),
multiplexer (B2) and path (B3) sections. They allow the network management to carry out preventative
maintenance and must be checked at installation. They are also used by maintenance technicians and
management to sectionalise faults in the network.
To test the performance monitors, we generate errors in the B1/B2/B3 parity bytes and check that the NE
counts them and sends a valid REI in the return path.
There are also a number of Alarms that can be monitored. Again please note that there are sets of
alarms for each section of the network, that is, regenerator, multiplexer and path.
The various NE's each respond to the alarms in a different way. Regenerators terminate the
Regenerator section overhead only, so will respond to RSOH alarms and alarm conditions.
DXCs terminate and regenerate the regenerator and multiplexer section overheads, so will respond
to the RSOH and MSOH alarms and conditions.
Line Terminals and ADM's act as Path and Line terminals. They terminate and regenerate all areas of
the overhead, so respond to alarms and alarm conditions in all areas of the overhead.
Typical tests might be as shown here. Both upstream and downstream signals are tested.
Most of the tests already mentioned would be carried out during installation and commissioning. Similar
test can be carried out for preventative maintenance by monitoring the network in-service.
We can check MSP by monitoring the B2 for errors or changing the messages in the K1/K2 bytes.
Previously, when using the OmniBER for passive Automatic Protection Switching testing, a generated
fault caused the network element to register a fault condition.
The network element then sent out a request to switch along the protection channel, which basically
says it has registered a fault and is looking for verification of an alternative channel it could use.
The OmniBER did not respond. The lack of response sent the network element into an oscillatory
mode where it would send request-to-switch signals down each channel in turn, asking if they could be
used, without receiving a response.
Now, with Active Automatic Protection Switching, the network element registers a fault in the same way
and sends out the same request to switch. The difference is that the OmniBER can now respond
intelligently by sending a confirmation signal and transferring the traffic signal along the fault-free
protection channel. The network element is then in no doubt about which channel can be used, thus
avoiding the undesirable oscillatory mode. That's for uni-directional networks.
For bi-directional networks, the same request-to-switch signal is sent to the OmniBER from the network
element. The OmniBER again responds intelligently by sending the confirmation signal and the traffic
signal, but in this type of network the OmniBER also sends a request to switch the traffic signal from the
other, fault-free direction.
It's critical for your customers to be able to test how quickly their designs respond to error or alarm
conditions.
The OmniBER OTN can be used to apply such conditions. The response of any device under test is fed
back to the OmniBER OTN. This in turn triggers a timing device (such as an oscilloscope). The timing
device will determine just how quickly the device responds.
The OmniBER OTN has a very long list of triggers for SONET, SDH and OTN. These can be found in
the technical specification. I'll show you how to find the technical specification and other useful
information later.
The key defined overhead bytes carry important information. A new Label page
provides fast access to the S1 synch status byte carrying synchronisation messages, to
the C2 byte carrying the high-order path payload information, and to the V5 signal label
carrying the low-order path payload structure. A textual setup and decode of these bytes
is provided, meaning technicians do not have to carry pages of byte decodes from ITU
standards when out installing networks.
In the same manner APS (Automatic Protection Switching) Messages are decoded and
displayed in a single page. This complements the existing transmit function which
provides message based set up of Linear (G.783) and Ring (G.841) topology APS
messages.
FEC testing is a key part of the OmniBER OTN. It can generate a structured OTN frame and, before
transmitting it, add errors after the calculated FEC, which accurately simulates a real-world
networking environment.
The FEC-compliant device under test should be able to recognize and correct these errors, until it starts
passing errors through as the OmniBER OTN's error rate is increased. This test gives the user the ability
to identify error conditions that cause the DUT to pass uncorrectable FEC errors, and can be useful in
validating the FEC functionality in new designs.
FEC code generation is a 5-step process. I'll describe all of the steps on this slide and then we'll take one step at a time.
First, let me say that the FEC code is generated one OTN frame row at a time.
In Step 1: each frame row is demultiplexed into 16 individual sub-rows before FEC is generated.
In Step 2: blank FEC bytes are appended to each sub-row.
In Step 3: FEC is generated independently for each sub-row.
In Step 4: the 16 sub-rows are then re-multiplexed to the original row with the addition of the FEC bytes.
In Step 5:
In Step 1, each OTN Frame Row (including overhead + payload) is demuxed into 16 individual sub-rows
before FEC is generated.
As you can see in the diagram, every 16th byte in the G.709 Frame Row gets its own parallel sub-row
channel.
In Step 2, 16 blank FEC bytes are added to each of the 16 sub row channels. As you can see in the diagram,
lovely, transparent, empty bytes are tacked on to the end of the sub-row.
In Step 3, the 16 sub-row channels as shown in colors on left, are independently connected to 16 FEC
encoders, where FEC values (indicated in checks) are calculated and populated into the previous blank
FEC byte locations.
In Step 4, the 16 sub-rows are re-multiplexed to reconstitute the original serial OTN Frame Row,
now with the addition of all the newly generated FEC values, as shown in checks.
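The four detailed steps can be tied together in a short sketch. The real G.709 FEC is Reed-Solomon RS(255,239) per sub-row; here a placeholder XOR "parity" stands in for the RS encoder, so only the demultiplex/remultiplex structure is meaningful.

```python
# Sketch of steps 1-4: demux an OTN frame row into 16 sub-rows, append
# 16 FEC bytes to each, "encode" each sub-row independently, and remux.
# The XOR parity is a placeholder for the real RS(255,239) encoder.

SUBROWS = 16
DATA_PER_SUBROW = 239    # bytes per sub-row before FEC (16 * 239 = 3824)
FEC_PER_SUBROW = 16      # FEC bytes appended per sub-row (total row: 4080)

def encode_row(row: bytes) -> bytes:
    assert len(row) == SUBROWS * DATA_PER_SUBROW
    # Step 1: every 16th byte of the row goes to the same sub-row.
    subrows = [bytearray(row[i::SUBROWS]) for i in range(SUBROWS)]
    for sr in subrows:
        # Steps 2-3: append FEC bytes (placeholder parity, not real RS).
        parity = 0
        for b in sr:
            parity ^= b
        sr.extend([parity] * FEC_PER_SUBROW)
    # Step 4: remux the sub-rows back into one serial 4080-byte row.
    out = bytearray()
    for k in range(DATA_PER_SUBROW + FEC_PER_SUBROW):
        for sr in subrows:
            out.append(sr[k])
    return bytes(out)

row = bytes(i % 256 for i in range(SUBROWS * DATA_PER_SUBROW))
encoded = encode_row(row)
print(len(encoded))   # 4080 bytes: 3824 data + 256 FEC
```

Note that the data bytes keep their original serial positions; only the 256 FEC bytes are appended at the end of the row, which is why the interleaving is transparent to the payload.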
No post-analysis is required; you don't have to wait until the end of the test period before the problem is identified.
Drawbacks
It does not support STM-64.
Next-generation SDH functions are not available.
For testing of DWDM parameters we need a separate OSA.
Test modes:
SONET
SDH
Unframed (raw PRBS)
Jitter testing:
Jitter tolerance
Jitter generation
Automatic