
Long Haul DWDM SFP: Dense Wavelength Division Multiplexing (DWDM) Small Form-factor Pluggable (SFP) transceivers

Optical Long Haul DWDM solutions comprise leading photonic network components, flexible optical switches, and network management and control functions. This makes networks more intelligent, simple and reliable than ever before. Typical Long Haul DWDM applications are found in connections between regional and international networks. Ranges start at 50 km and go up to several thousand kilometers, with purely optical amplifiers used instead of regenerators to reduce network costs. Network infrastructures and the applications they carry, such as VoIP and IPTV, continue to drive increased bandwidth. DWDM is an attractive technology for increasing capacity: it enables system developers to provide increased bandwidth serving multiple applications in the long haul and metro at lower cost.

Small form-factor pluggable transceiver
The small form-factor pluggable (SFP) or Mini-GBIC is a compact, hot-pluggable transceiver used for both telecommunication and data communications applications. It interfaces a network device motherboard (for a switch, router, media converter or similar device) to a fiber optic or copper networking cable. SFP transceivers are available with a variety of different transmitter and receiver types, allowing users to select the appropriate transceiver for each link to provide the required optical reach over the available optical fiber type (e.g. multi-mode fiber or single-mode fiber). Optical SFP modules are commonly available in several categories:
- 850 nm, 550 m, multi-mode fiber (SX)
- 1310 nm, 10 km, single-mode fiber (LX)
- 1490 nm, 10 km, single-mode fiber (BS-D)
- 1550 nm: 40 km (XD), 80 km (ZX), 120 km (EX or EZX)
- 1490 nm/1310 nm (BX), single-fiber bi-directional
- DWDM Gigabit SFP transceivers

Bare modules with no markings are easy to confuse, so manufacturers generally use the color of the pull ring to distinguish them. For example: a black pull ring indicates multi-mode at 850 nm; blue indicates a 1310 nm module; yellow indicates a 1550 nm module; purple indicates a 1490 nm module; and so on.
SFP transceivers are also available with a copper cable interface, allowing a host device designed primarily for optical fiber communications to also communicate over unshielded twisted-pair networking cable, or to transport an SDI video signal over coaxial cable. There are also CWDM and single-fiber "bi-directional" (1310/1490 nm upstream/downstream) SFPs. SFP transceivers are commercially available with capability for data rates up to 4.25 Gbit/s. An enhanced standard called SFP+ supports data rates up to 10.0 Gbit/s.

Standardization
The SFP transceiver is specified by a multi-source agreement (MSA) between competing manufacturers. The SFP was designed after the GBIC interface and allows greater port density (number of transceivers per cm along the edge of a motherboard) than the GBIC, which is why the SFP is also known as the mini-GBIC. The related Small Form Factor transceiver is similar in size to the SFP, but is soldered to the host board as a pin through-hole device rather than plugged into an edge-card socket. As a practical matter, however, some networking equipment manufacturers engage in vendor lock-in practices whereby they deliberately break compatibility with "generic" SFPs by adding a check in the device's firmware that will only enable the vendor's own modules.
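The pull-ring color convention above is easy to capture in a small lookup table. The following Python sketch encodes the colors mentioned here; actual vendor coding varies, so the mapping is illustrative rather than authoritative.

    # Conventional (non-standardized) SFP pull-ring colors and the module
    # types they usually indicate, per the color-coding convention above.
    # Vendors vary, so treat this as a heuristic, not a guarantee.
    PULL_RING_COLORS = {
        "black":  "850 nm multi-mode",
        "blue":   "1310 nm single-mode",
        "yellow": "1550 nm single-mode",
        "purple": "1490 nm single-mode",
    }

    def identify_sfp(pull_ring_color: str) -> str:
        """Best-guess module type from the pull ring of an unmarked SFP."""
        return PULL_RING_COLORS.get(pull_ring_color.lower(),
                                    "unknown - check vendor coding")

    print(identify_sfp("Blue"))   # 1310 nm single-mode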

Ethernet: Ethernet is a type of network cabling and signaling specification developed by Xerox in the late 1970s. While the Internet is a global network, Ethernet is a local area network (LAN). No computer should be an island: with Ethernet, file sharing and printer sharing among machines became possible. The term "ether" goes back to the Greek philosopher Aristotle, who used it to describe the "divine element" of the heavens. In short, "ether" was said to be a substance that exists everywhere. Although this is a misconception, network developers still adopted the term, and so "Ethernet" means "a network of everywhere."

How does it work?
Ethernet uses a communication concept called datagrams to get messages across the network. The Ethernet datagrams take the form of self-contained packets of information. These packets have fields containing information about the data, their origin, their destination and the type of data. The data field in each packet can contain up to 1500 bytes. Take mailing as a metaphor: an Ethernet packet is not just a letter; it also carries the sender address, the receiver address, and a stamp indicating what the package's contents are.

What do the number and the words "Base" and "T" mean?
There are several standards of Ethernet, such as 1000BaseT, 10GBaseT, etc. The number stands for signaling speed: "1000" means 1000 megabits per second. However, it is important to point out that this number indicates the "ideal" situation; the actual speed might be slower. "Base" means baseband, which uses a single carrier frequency so that all devices connected to the network can hear all transmissions. "T" stands for twisted-pair cable.

Half-duplex vs. full-duplex
Ethernet suffers from collisions when it is running in half-duplex mode. What is half-duplex? CB radio (Citizens' Band radio) is a typical example: when using a CB radio, you can either send a message or receive a message at one time. When two or more computers attempt to send data at the same time, a collision occurs. Switches, however, make it possible to run Ethernet in full-duplex mode. In this mode, two computers establish a point-to-point connection in full duplex, and collisions are thus avoided.

Ethernet is the most widely installed local area network (LAN) technology. Specified in the IEEE 802.3 standard, Ethernet was originally developed at the Xerox Palo Alto Research Center (PARC) from an earlier specification called ALOHAnet, and then developed further by Xerox, DEC, and Intel. An Ethernet LAN typically uses coaxial cable or special grades of twisted-pair wires. Ethernet is also used in wireless LANs. The most commonly installed Ethernet systems are called 10BASE-T and provide transmission speeds up to 10 Mbps. Devices are connected to the cable and compete for access using a Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Fast Ethernet, or 100BASE-T, provides transmission speeds up to 100 megabits per second and is typically used for LAN backbone systems, supporting workstations with 10BASE-T cards. Gigabit Ethernet provides an even higher level of backbone support at 1000 megabits per second (1 gigabit, or 1 billion bits, per second). 10-Gigabit Ethernet provides up to 10 billion bits per second. Ethernet was named by Robert Metcalfe, one of its developers, for the passive substance called "luminiferous (light-transmitting) ether" that was once thought to pervade the universe, carrying light throughout. Ethernet was so named to describe the way that cabling, also a passive medium, could similarly carry data everywhere throughout the network.
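As a concrete illustration of the frame fields described above (destination, source, type, and a data field of up to 1500 bytes), here is a minimal Python sketch of an Ethernet II header using the standard struct module. The MAC addresses and EtherType are example values, and the frame check sequence appended by real hardware is omitted.

    import struct

    # Ethernet II layout: 6-byte destination MAC, 6-byte source MAC,
    # 2-byte EtherType, then up to 1500 bytes of payload.

    def build_frame(dst_mac: bytes, src_mac: bytes,
                    ethertype: int, payload: bytes) -> bytes:
        assert len(dst_mac) == 6 and len(src_mac) == 6
        assert len(payload) <= 1500, "Ethernet data field is limited to 1500 bytes"
        return struct.pack("!6s6sH", dst_mac, src_mac, ethertype) + payload

    def parse_frame(frame: bytes):
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        return dst, src, ethertype, frame[14:]

    # Broadcast destination, illustrative source MAC, EtherType 0x0800 (IPv4).
    frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
    print(parse_frame(frame))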
What Is a DS3? DS3 Internet and Voice Pricing
DS is an acronym for Digital Signal, while the 3 indicates the level at which the digital signal is accessed and transferred, as defined by the digital hierarchy.

What is a DS3 within the digital hierarchy?
DS3 access to telecommunication services is a digital line capable of transmitting 44.736 megabits per second (Mbps). This is defined through the grouping levels of channels. Each channel is capable of transmitting 64 kilobits per second (kbps).

A DS3 connection consists of 672 channels. Compared with a lower level of the digital hierarchy, such as a DS1, which utilizes 24 channels, this means that a DS3 operates at 28 times the transmission rate, or bandwidth, of a DS1.

SDH Overview: Synchronous Digital Hierarchy (SDH) is the international standard for optical digital transmission at hierarchical rates from 155.520 Mbps (STM-1) to 2.5 Gbps (STM-16) and greater. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) defines a series of SDH transmission rates beginning at 155.520 Mbps, as follows:

SDH Transmission Rates
STM-1: 155.520 Mbps
STM-4: 622.080 Mbps
STM-16: 2,488.320 Mbps
STM-64: 9,953.280 Mbps

The PA-MC-STM-1 currently allows transmission over single-mode and multimode optical fiber only. Transmission rates are integral multiples of 51.840 Mbps, which can be used to carry E3 bit-synchronous signals.

STM-1 Multiplexing Hierarchy
Figure 1-3 illustrates the SDH multiplexing structure supported on the PA-MC-STM-1. The PA-MC-STM-1 multiplexing structure is a subset of that defined in ITU-T G.707. At the lowest level, containers (Cs) are input into virtual containers (VCs) with stuffing bits to create a uniform VC payload with a common bit rate, ready for synchronous multiplexing. The VCs are then aligned into tributary units (TUs), where pointer processing operations are implemented, allowing the TUs to be multiplexed into TU groups (TUGs). Three TU-12s can be multiplexed into one TUG-2. The TUGs are then multiplexed into higher-level VCs, which in turn are multiplexed into administrative units (AUs). The AUs are then multiplexed into an AU group (AUG), and the final payload from the AUG is multiplexed into the Synchronous Transport Module (STM).

Figure 1-3 PA-MC-STM-1 Multiplexing Structure
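The digital hierarchy and SDH rates quoted above reduce to simple arithmetic, as the following Python sketch shows.

    # Illustrative arithmetic: a DS0 channel is 64 kbit/s, a DS1 groups
    # 24 channels, and a DS3 carries 672 channels.
    DS0_KBPS = 64
    ds1_channels, ds3_channels = 24, 672
    print(ds3_channels // ds1_channels)     # 28 -> a DS3 is 28 DS1s
    print(ds3_channels * DS0_KBPS / 1000)   # 43.008 Mbit/s of channel payload;
                                            # the 44.736 Mbit/s line rate adds framing overhead

    # SDH line rates are integral multiples of the 155.520 Mbit/s STM-1 rate.
    STM1_MBPS = 155.520
    for n in (1, 4, 16, 64):
        print(f"STM-{n}: {n * STM1_MBPS:.3f} Mbit/s")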

PAYLOAD: the payload is the cargo of a data transmission. It is the part of the transmitted data that is the fundamental purpose of the transmission, to the exclusion of information sent with it solely to facilitate delivery (such as headers or metadata, sometimes referred to as overhead data). In other words, the payload is the essential data carried within a packet or other transmission unit; it does not include the "overhead" data required to get the packet to its destination.
STM-1 Optical Fiber Specifications: The PA-MC-STM-1 specification for optical fiber transmission defines two types of fiber: single-mode and multimode. Within the single-mode category, two types of transmission are defined: intermediate reach and long reach. Within the multimode category, only short reach is available. (See Table 1-4 for specifications.) Modes can be thought of as bundles of light rays entering the fiber at a particular angle. Single-mode fiber allows only one mode of light to propagate through the fiber at one wavelength and polarization, while multimode fiber allows multiple modes of light to propagate through the fiber for each wavelength and polarization. Multiple modes of light propagating through the fiber travel different distances depending on their entry angles, which causes them to arrive at the destination at different times (a phenomenon called modal dispersion). Modal dispersion limits propagation distance in multimode fiber before attenuation does. Therefore, single-mode fiber is capable of higher bandwidth and greater cable run distances than multimode fiber.

Table 1-4 OC-3 Optical Parameters

Transceiver Type | Transmit Power | Maximum Power to Receiver | Receiver Sensitivity | Loss Budget | Nominal Distance Between Stations
Single-mode intermediate reach | -15 dBm min. to -8 dBm max. at 1280-1335 nm | -8 dBm | -28 dBm | 0-12 dB | up to 5 km
Multimode short reach | -20 dBm min. to -14 dBm max. at 1280-1335 nm | -8 dBm | -23 dBm | 0-7 dB | up to 2 km

SDH (Synchronous Digital Hierarchy) is a standard technology for synchronous data transmission on optical media. It is the international equivalent of the Synchronous Optical Network (SONET). Both technologies provide faster and less expensive network interconnection than traditional PDH (Plesiochronous Digital Hierarchy) equipment. In digital telephone transmission, "synchronous" means the bits from one call are carried within one transmission frame. "Plesiochronous" means "almost (but not) synchronous", i.e. a call that must be extracted from more than one transmission frame. SDH uses the following Synchronous Transport Modules (STM) and rates: STM-1 (155 megabits per second), STM-4 (622 Mbps), STM-16 (2.5 gigabits per second), and STM-64 (10 Gbps).

The STM-1 (Synchronous Transport Module level 1) is the SDH ITU-T fiber optic network transmission standard. It has a bit rate of 155.52 Mbit/s. The other levels are STM-4, STM-16 and STM-64. Beyond this there is wavelength-division multiplexing (WDM), commonly used in submarine cabling.

Frame structure
The STM-1 frame is the basic transmission format for SDH (Synchronous Digital Hierarchy). An STM-1 frame has a byte-oriented structure with 9 rows and 270 columns of bytes, for a total of 2,430 bytes (9 rows x 270 columns = 2,430 bytes). Each byte corresponds to a 64 kbit/s channel.[3]

TOH: Transport Overhead (RSOH + AU4P + MSOH)
MSOH: Multiplex Section Overhead
RSOH: Regeneration Section Overhead
AU4P: AU-4 Pointers
VC4: Virtual Container-4 payload (POH + VC-4 data)
POH: Path Overhead

Frame characteristics
The STM-1 base frame is structured with the following characteristics:
Length: 270 columns x 9 rows = 2430 bytes
Duration (frame repetition time): 125 µs, i.e. 8000 frames/s
Rate (frame capacity): 2430 x 8 x 8000 = 155.520 Mbit/s

Payload = 2349 bytes x 8 bits x 8000 frames/s = 150.336 Mbit/s
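The frame arithmetic above can be checked directly; this short Python sketch reproduces both the 155.520 Mbit/s line rate and the 150.336 Mbit/s payload rate.

    # Checking the STM-1 frame arithmetic given above.
    rows, cols = 9, 270
    frame_bytes = rows * cols                  # 2430 bytes per frame
    frames_per_second = 8000                   # one frame every 125 microseconds

    print(frame_bytes * 8 * frames_per_second / 1e6)   # 155.52 Mbit/s line rate

    # The 9-column section overhead leaves 261 columns (2349 bytes) of payload.
    payload_bytes = rows * (cols - 9)
    print(payload_bytes * 8 * frames_per_second / 1e6)  # 150.336 Mbit/s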

RSOH (regenerator section overhead)

1st row = unscrambled bytes; their contents should therefore be monitored. X = bytes reserved for national use. D = bytes depending on the medium (satellite, radio relay system, ...). The Regenerator Section Overhead uses the first three rows and nine columns of the STM-1 frame.

A1, A2: Frame Alignment Word, used to recognize the beginning of an STM-N frame. A1 = 1111 0110 = F6 (hex); A2 = 0010 1000 = 28 (hex).
J0: Path Trace. Used to give a path through an SDH network a "name". This message (name) enables the receiver to check the continuity of its connection with the desired transmitter.
B1: Bit error monitoring. The B1 byte contains the result of the parity check of the previous STM frame, computed before scrambling of the current STM frame. This check is carried out with a Bit Interleaved Parity check (BIP-8).
E1: Engineering Orderwire (EOW). Can be used to transmit speech signals beyond a regenerator section for operating and maintenance purposes.
F1: User channel. Used to transmit data and speech for service and maintenance.
D1 to D3: Data Communication Channel at 192 kbit/s (DCCR). This channel is used to transmit management information via the STM-N frames.
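The B1 parity check lends itself to a compact illustration. Bit i of the BIP-8 byte provides even parity over bit i of every byte in the frame, which reduces to XOR-ing all bytes together; the following Python sketch shows the idea on toy data.

    from functools import reduce

    # BIP-8 sketch: even parity per bit position over all bytes of the
    # previous (scrambled) STM frame is just the XOR of those bytes.
    def bip8(frame: bytes) -> int:
        return reduce(lambda acc, b: acc ^ b, frame, 0)

    previous_frame = bytes([0xF6, 0x28, 0x01, 0x55])  # toy data, not a real frame
    print(f"B1 = {bip8(previous_frame):08b}")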

MSOH (multiplex section overhead)

X = bytes reserved for national use. The Multiplex Section Overhead uses rows 5 through 9 of the first 9 columns of the STM-1 frame.

B2: Bit error monitoring. The B2 bytes contain the result of the parity check of the previous STM frame (excluding the RSOH), computed before scrambling of the current STM frame. This check is carried out with a Bit Interleaved Parity check (BIP-24).
K1, K2: Automatic Protection Switching (APS). In case of a failure, STM frames can be rerouted through the SDH network with the help of the K1 and K2 bytes. Assigned to the multiplex section protection (MSP) protocol.
K2 (bits 6-8): MS-RDI, Multiplex Section Remote Defect Indication (formerly MS-FERF, Multiplex Section Far End Receive Failure).
D4 to D12: Data Communication Channel at 576 kbit/s (DCCM). (See also D1-D3 in the RSOH above.)
S1 (bits 5-8): Synchronization quality level:
0000 Quality unknown
0010 G.811, 10^-11/day frequency drift
0100 G.812T transit, 10^-9/day frequency drift
1000 G.812L local, 2x10^-8/day frequency drift
1011 G.813, 5x10^-7/day frequency drift
1111 Not to be used for synchronization
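A decoder for the S1 quality nibble is a straightforward table lookup. The Python sketch below uses the code points listed above; the labels are abbreviated.

    # Decoding the synchronization-quality nibble carried in bits 5-8
    # of the S1 byte, per the code points listed above.
    S1_QUALITY = {
        0b0000: "Quality unknown",
        0b0010: "G.811",
        0b0100: "G.812T transit",
        0b1000: "G.812L local",
        0b1011: "G.813",
        0b1111: "Do not use for synchronization",
    }

    def decode_s1(s1: int) -> str:
        return S1_QUALITY.get(s1 & 0x0F, "Reserved")

    print(decode_s1(0x08))   # G.812L local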

E2: Engineering Orderwire (EOW). Same function as E1 in the RSOH.
M1: MS-REI, Multiplex Section Remote Error Indicator: the number of interleaved bits detected as erroneous in the received B2 bytes (formerly MS-FEBE, Multiplex Section Far End Block Error).
Z1, Z2: Spare bytes.

Dense wavelength division multiplexing (DWDM) is a technology that puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light wavelength. Using DWDM, up to 80 (and theoretically more) separate wavelengths or channels of data can be multiplexed into a lightstream transmitted on a single optical fiber. Each channel carries a time division multiplexed (TDM) signal. In a system with each channel carrying 2.5 Gbps (billion bits per second), up to 200 billion bits can be delivered per second by the optical fiber. DWDM is also sometimes called wave division multiplexing (WDM). Since each channel is demultiplexed at the end of the transmission back into the original source, different data formats can be transmitted together at different data rates. Specifically, Internet Protocol (IP) data, Synchronous Optical Network (SONET) data, and Asynchronous Transfer Mode (ATM) data can all travel at the same time within the optical fiber. DWDM promises to solve the "fiber exhaust" problem and is expected to be the central technology in the all-optical networks of the future.

Video on demand (VoD)
Video on demand (VoD) is an interactive TV technology that allows subscribers to view programming in real time or download programs and view them later. A VoD system at the consumer level can consist of a standard TV receiver along with a set-top box. Alternatively, the service can be delivered over the Internet to home computers, portable computers, high-end cellular telephone sets and advanced digital media devices. VoD has historically suffered from a lack of available network bandwidth.

Generic Framing Procedure
Generic Framing Procedure (GFP) is a multiplexing technique defined by ITU-T G.7041. It allows mapping of variable-length, higher-layer client signals over a circuit-switched transport network like OTN, SDH/SONET or PDH. The client signals can be protocol data unit (PDU) oriented (like IP/PPP or Ethernet Media Access Control) or block-code oriented (like Fibre Channel). There are two modes of GFP: Generic Framing Procedure - Framed (GFP-F) and Generic Framing Procedure - Transparent (GFP-T). GFP-F maps each client frame into a single GFP frame and is used where the client signal is framed or packetized by the client protocol. GFP-T, on the other hand, allows mapping of multiple 8B/10B block-coded client data streams into an efficient 64B/65B block code for transport within a GFP frame. GFP utilizes a length/HEC-based frame delineation mechanism that is more robust than that used by High-Level Data Link Control (HDLC), which is based on a single-octet flag. There are two types of GFP frames: a GFP client frame and a GFP control frame. A GFP client frame can be further classified as either a client data frame or a client management frame. The former is used to transport client data, while the latter is used to transport point-to-point management information such as loss of signal. Client management frames can be differentiated from client data frames based on the payload type indicator.
The GFP control frame currently consists only of a core header field with no payload area. This frame is used to compensate for the gaps between client signals where the transport medium has a higher capacity than the client signal, and is better known as an idle frame.

Frame format
A GFP frame consists of: a core header, a payload header, an optional extension header, the GFP payload, and an optional payload frame check sequence (FCS).

Modes
Framed GFP (GFP-F) is optimized for bandwidth efficiency at the expense of latency. It encapsulates complete Ethernet (or other types of) frames with a GFP header.
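The length/HEC-based delineation mentioned above rests on the core header: a 2-byte Payload Length Indicator (PLI) protected by a 2-byte CRC-16 (cHEC) with generator polynomial x^16 + x^12 + x^5 + 1. The Python sketch below computes such a core header; the all-zeros CRC initialization is an assumption here, and the core-header scrambling applied on the line is omitted, so consult ITU-T G.7041 for the normative details.

    # Sketch of a GFP core header: 16-bit PLI followed by a CRC-16 cHEC
    # (generator x^16 + x^12 + x^5 + 1, i.e. polynomial 0x1021, MSB-first).
    def crc16(data: bytes, crc: int = 0x0000) -> int:
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 \
                      else (crc << 1) & 0xFFFF
        return crc

    payload_length = 1500                      # PLI for a 1500-byte payload area
    pli = payload_length.to_bytes(2, "big")
    core_header = pli + crc16(pli).to_bytes(2, "big")
    print(core_header.hex())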

Transparent GFP (GFP-T) is used for low-latency transport of block-coded client signals such as Gigabit Ethernet, Fibre Channel, ESCON, FICON, and Digital Video Broadcast (DVB). In this mode, small groups of 8B/10B symbols are transmitted rather than waiting for a complete frame of data.

Link Capacity Adjustment Scheme
Link Capacity Adjustment Scheme, or LCAS, is a method to dynamically increase or decrease the bandwidth of virtually concatenated containers. The LCAS protocol is specified in ITU-T G.7042. It allows on-demand increase or decrease of the bandwidth of the virtual concatenation group in a hitless manner. This brings bandwidth-on-demand capability for data clients like Ethernet when mapped into TDM containers. LCAS is also able to temporarily remove failed members from the virtual concatenation group. A failed member will automatically cause a decrease of the bandwidth, and after repair the bandwidth will increase again in a hitless fashion. Together with diverse routing, this provides survivability of data traffic without requiring excess protection bandwidth allocation.

IGMP snooping
IGMP snooping is the process of listening to Internet Group Management Protocol (IGMP) network traffic. IGMP snooping, as implied by the name, is a feature that allows a network switch to listen in on the IGMP conversation between hosts and routers. By listening to these conversations, the switch maintains a map of which links need which IP multicast streams. Multicasts may then be filtered from the links which do not need them.

Purpose
A switch will, by default, flood multicast traffic to all the ports in a broadcast domain (or the VLAN equivalent). Multicast can cause unnecessary load on host devices by requiring them to process packets they have not solicited. When purposefully exploited, this is known as one variation of a denial-of-service attack. IGMP snooping is designed to prevent hosts on a local network from receiving traffic for a multicast group they have not explicitly joined. It provides switches with a mechanism to prune multicast traffic from links that do not contain a multicast listener (an IGMP client). IGMP snooping allows a switch to forward multicast traffic only to the links that have solicited it. Essentially, IGMP snooping is a layer-2 optimization for the layer-3 IGMP. IGMP snooping takes place internally on switches and is not a protocol feature. Snooping is therefore especially useful for bandwidth-intensive IP multicast applications such as IPTV.

Standard status
IGMP snooping, although an important technique, overlaps two standards organizations: the IEEE, which standardizes Ethernet switches, and the IETF, which standardizes IP multicast. This means that even today there is no clear owner of this technique. This is why RFC 4541 on IGMP snooping has only Informational status,[1] despite actually being referenced as normative in other standards work such as RFC 4903.

Implementation options
Proxy reporting
IGMP snooping with proxy reporting or report suppression actively filters IGMP packets in order to reduce load on the multicast router. Joins and leaves heading upstream to the router are filtered so that only the minimal quantity of information is sent. The switch tries to ensure that the router has only a single entry for the group, regardless of how many active listeners there are.
If there are two active listeners in a group and the first one leaves, the switch determines that the router does not need this information, since it does not affect the status of the group from the router's point of view. However, the next time there is a routine query from the router, the switch will forward the reply from the remaining host, to prevent the router from believing there are no active listeners. It follows that with active IGMP snooping, the router will generally only know about the most recently joined member of the group.

IGMP querier
In order for IGMP, and thus IGMP snooping, to function, a multicast router must exist on the network and generate IGMP queries. The tables created for snooping (holding the member ports for each multicast group) are associated with the querier. Without a querier the tables are not created, and snooping will not work. Furthermore, IGMP general queries must be unconditionally forwarded by all switches involved in IGMP snooping.[1] Some IGMP snooping implementations include full querier capability. Others are able to proxy and retransmit queries from the multicast router.
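The group-to-port map that a snooping switch maintains can be modeled in a few lines. The Python sketch below is a toy model; the class name, group addresses and port numbers are illustrative, not any vendor's API.

    from collections import defaultdict

    # Toy model of an IGMP-snooping switch: which ports have solicited
    # which multicast group.
    class SnoopingSwitch:
        def __init__(self):
            self.members = defaultdict(set)    # group address -> set of ports

        def on_igmp_report(self, group: str, port: int):
            self.members[group].add(port)      # host joined: forward group here

        def on_igmp_leave(self, group: str, port: int):
            self.members[group].discard(port)  # host left: prune the link

        def forward_ports(self, group: str):
            # Multicast traffic goes only to ports with a known listener.
            return sorted(self.members.get(group, set()))

    sw = SnoopingSwitch()
    sw.on_igmp_report("239.1.1.1", port=3)
    sw.on_igmp_report("239.1.1.1", port=7)
    sw.on_igmp_leave("239.1.1.1", port=3)
    print(sw.forward_ports("239.1.1.1"))       # [7]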

Port Mirroring
Port mirroring is used on a network switch to send a copy of the network packets seen on one switch port (or an entire VLAN) to a network monitoring connection on another switch port. This is commonly used for network appliances that require monitoring of network traffic, such as an intrusion-detection system. Port mirroring on a Cisco Systems switch is generally referred to as Switched Port Analyzer (SPAN); some other vendors have other names for it, such as Roving Analysis Port (RAP) on 3Com switches. An example of a SPAN configuration on a Cisco 2950 switch is below:

    monitor session 1 source interface fastethernet 0/1 , 0/2 , 0/3
    monitor session 1 destination interface fastethernet 0/4 encap ingress vlan 1

The above example mirrors data from ports 0/1, 0/2 and 0/3 to the destination port 0/4, using VLAN 1 for VLAN tagging. To show the status of a SPAN monitor session, use the following command, where 1 is the session number from the configuration above:

    show monitor session 1

Port mirroring, also known as a roving analysis port, is a method of monitoring network traffic that forwards a copy of each incoming and outgoing packet from one port of a network switch to another port, where the packet can be studied. A network administrator uses port mirroring as a diagnostic tool or debugging feature, especially when fending off an attack. It enables the administrator to keep close track of switch performance and alter it if necessary. Port mirroring can be managed locally or remotely. An administrator configures port mirroring by assigning a port from which to copy all packets and another port to which those packets will be sent. A packet bound for, or heading away from, the first port will be forwarded to the second port as well. The administrator places a protocol analyzer on the port receiving the mirrored data to monitor each segment separately. The analyzer captures and evaluates the data without affecting the client on the original port. The monitor port may be a port on the same SwitchModule with an attached RMON probe, a port on a different SwitchModule in the same hub, or the SwitchModule processor. Port mirroring can consume significant CPU resources while active. Better choices for long-term monitoring may include a passive tap such as an optical probe or an Ethernet repeater.

Access control list
An access control list (ACL), with respect to a computer file system, is a list of permissions attached to an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Each entry in a typical ACL specifies a subject and an operation. For instance, if a file has an ACL that contains (Alice, delete), this would give Alice permission to delete the file.

ACL-based security models
When a subject requests an operation on an object in an ACL-based security model, the operating system first checks the ACL for an applicable entry to decide whether the requested operation is authorized. A key issue in the definition of any ACL-based security model is determining how access control lists are edited, namely which users and processes are granted ACL-modification access. ACL models may be applied to collections of objects as well as to individual entities within the system's hierarchy.

Filesystem ACLs
A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files.
These entries are known as access control entries (ACEs) in the Microsoft Windows NT, OpenVMS, Unix-like, and Mac OS X operating systems. Each accessible object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as whether a user can read from, write to, or execute an object. In some implementations an ACE can control whether or not a user, or group of users, may alter the ACL on an object. Most of the Unix and Unix-like operating systems (e.g. Linux,[1] BSD, or Solaris) support so-called POSIX.1e ACLs, based on an early POSIX draft that was abandoned. Many of them, for example AIX, FreeBSD,[2] Mac OS X beginning with version 10.4 ("Tiger"), or Solaris with the ZFS filesystem,[3] support NFSv4 ACLs, which are part of the NFSv4 standard. There are two (experimental) implementations of NFSv4 ACLs for Linux: NFSv4 ACL support for the Ext3 filesystem[4] and the more recent Richacls,[5] which bring NFSv4 ACL support to the Ext4 filesystem.

Networking ACLs
On some types of proprietary computer hardware, an access control list refers to rules that are applied to port numbers or network daemon names available on a host or other layer-3 device, each with a list of hosts

and/or networks permitted to use the service. Both individual servers and routers can have network ACLs. Access control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls.

29 Protection
29.1 The equipment shall be provided with automatic self-healing protection facilities for system reliability and availability, as per the following protection mechanisms:

The offered system must support revertive and non-revertive switching modes for any redundant common cards, with automatic protection.

Subnetwork connection protection
In telecommunications, subnetwork connection protection, or SNCP, is a type of protection mechanism associated with synchronous optical networks such as the synchronous digital hierarchy. SNCP is a dedicated (1+1) protection mechanism for SDH network spans which may be deployed in ring, point-to-point or mesh topologies. It is complementary to Multiplex Section Protection (MSP), applied to physical handover interfaces, which offers 1+1 protection of the handover. An alternative to SNCP is Multiplex Section Shared Protection Rings, or MS-SPRings, which offer a shared protection mode. SNCP's functional equivalent in SONET is called UPSR.[1]

Subnetwork connection protection is a per-path protection. It follows the principle of "congruent sending, selective receive": the signal is sent on both paths but received only from the path where the signal quality is best. When the working path is cut, the receiver detects signal degradation (SD) and the receiver of the other path becomes active. SNCP is a network protection mechanism for SDH networks providing path protection (end-to-end protection). The data signal is transmitted via two different paths and can be implemented in line or ring structures. The changeover criteria are specified individually when configuring a network element. A protection protocol is not required. The switchover to the protection path occurs in non-revertive mode, i.e. if traffic was switched to the protection path due to a transmission fault, there is no automatic switch-back to the original path once the fault is rectified, but only if there is a fault on the new path (the one labeled as protecting and currently serving traffic). SNCP is a 1+1 protection scheme (one working and one protection transport entity). Input traffic is broadcast on two routes (one being the normal working route and the second the protection route). Assume a failure-free state for a path from node B to node A. Node B bridges the signal destined for A from other nodes on the ring onto both the working and protecting routes. At node A, the signals from these two routes are continuously monitored for path-layer defects and the better-quality signal is selected. Now consider a failure state where the fiber between node A and node B is cut. The selector switches traffic to the standby route when the active route between node A and node B fails. In order to prevent unnecessary or spurious protection switching in the presence of bit errors on both paths, a switch will typically occur only when the quality of the alternate path exceeds that of the current working path by some threshold (e.g., an order of magnitude better BER). Consequently, every failure case feeds into SNCP's decision mechanism.
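The selector behavior described above (receive on both paths, switch only when the alternate path is better by a threshold such as an order of magnitude in BER) can be sketched as follows. The function and threshold are illustrative assumptions, not a normative algorithm.

    # Sketch of an SNCP selector with hysteresis: switch paths only when
    # the alternate path's BER is better by `threshold` (e.g. 10x).
    def select_path(current: str, ber_working: float, ber_protection: float,
                    threshold: float = 10.0) -> str:
        ber = {"working": ber_working, "protection": ber_protection}
        other = "protection" if current == "working" else "working"
        if ber[current] > threshold * ber[other]:
            return other           # alternate path is markedly better: switch
        return current             # otherwise stay put to avoid spurious switching

    print(select_path("working", ber_working=1e-6, ber_protection=1e-9))  # protection
    print(select_path("working", ber_working=1e-9, ber_protection=1e-9))  # working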

Name | Medium | Specified distance
1000BASE-CX | Twinaxial cabling | 25 meters
1000BASE-SX | Multi-mode fiber | 220 to 550 meters, dependent on fiber diameter and bandwidth
1000BASE-LX | Multi-mode fiber | 550 meters[3]
1000BASE-LX | Single-mode fiber | 5 km[3]
1000BASE-LX10 | Single-mode fiber, 1,310 nm wavelength | 10 km
1000BASE-ZX | Single-mode fiber, 1,550 nm wavelength | ~70 km
1000BASE-BX10 | Single-mode fiber over single-strand fiber: 1,490 nm downstream, 1,310 nm upstream | 10 km
1000BASE-T | Twisted-pair cabling (Cat-5, Cat-5e, Cat-6, or Cat-7) | 100 meters
1000BASE-TX | Twisted-pair cabling (Cat-6, Cat-7) | 100 meters

What is carrier class?
In telecommunication, "carrier grade" or "carrier class" refers to a system, or a hardware or software component, that is extremely reliable and well tested, with proven capabilities. Carrier-grade systems are tested and engineered to meet or exceed "five nines" high-availability standards, and provide very fast fault recovery through redundancy (normally less than 50 milliseconds).

An add-drop multiplexer (ADM) is an important element of an optical fiber network. A multiplexer combines, or multiplexes, several lower-bandwidth streams of data into a single beam of light. An add-drop multiplexer also has the capability to add one or more lower-bandwidth signals to an existing high-bandwidth data stream, and at the same time can extract or drop other low-bandwidth signals, removing them from the stream and redirecting them to some other network path. This is used as a local "on-ramp" and "off-ramp" to the high-speed network. ADMs can be used both in long-haul core networks and in shorter-distance "metro" networks, although the former are much more expensive due to the difficulty of scaling the technology to the high data rates and dense wavelength division multiplexing (DWDM) used for long-haul communications. The main optical filtering technology used in add-drop multiplexers is the Fabry-Pérot etalon. A recent shift in ADM technology has introduced so-called "multi-service SONET/SDH" (also known as multi-service provisioning platform, or MSPP) equipment, which has all the capabilities of legacy ADMs but can also include cross-connect functionality to manage multiple fiber rings in a single chassis. These new devices can replace multiple legacy ADMs and also allow connections directly from Ethernet LANs to a service provider's optical backbone. At the end of 2003, sales of multiservice ADMs exceeded those of legacy ADMs for the first time, as the change to next-generation SONET/SDH networks accelerated. An emerging variety of ADM that is becoming popular as carriers continue to invest in metro optical networks is the reconfigurable optical add-drop multiplexer (ROADM).

What is a digital cross-connect system?
A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams and newer SONET/SDH bit streams. DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels.
It is important to realize that while DCS devices "switch" traffic, they are not packet switches: they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently and operate on micro- or millisecond time spans.
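For reference, the "five nines" availability figure quoted above for carrier-class systems translates to roughly five minutes of downtime per year:

    # "Five nines" availability allows only about five minutes of downtime per year.
    availability = 0.99999
    minutes_per_year = 365.25 * 24 * 60
    print((1 - availability) * minutes_per_year)   # ~5.26 minutes per year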

Layers model of SDH: the following scheme describes the different layers of SDH, according to the OSI model

What is the SDH frame structure? The STM-N frame structure is best represented as a rectangle of 9 rows by 270xN columns. The first 9xN columns are the frame header, and the rest of the frame is the inner structure data (including the data, indication bits, stuff bits, pointers and management). The STM-N frame is usually transmitted over an optical fiber. The frame is transmitted row by row (the first row is transmitted first, then the second, and so on). At the beginning of each frame the synchronization bytes A1, A2 are transmitted. The multiplexing method for combining 4 STM-N streams into an STM-4xN is byte interleaving of the STM-N streams. The method is shown in the next picture for producing STM-4 from 4 STM-1 streams.

After interleaving we get a higher-order stream in whose rectangular form all the lower-order STM streams are placed as columns, which makes it easier to find each of them in the bigger frame.
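The byte interleaving described above is easy to demonstrate: stream k of the four STM-1 inputs ends up occupying every fourth byte of the STM-4 output, starting at offset k. A minimal Python sketch on toy byte streams:

    # One-byte interleaving of four equal-length STM-1 byte streams into
    # one STM-4 stream, column by column.
    def interleave(streams):
        assert len(streams) == 4 and len({len(s) for s in streams}) == 1
        out = bytearray()
        for column in zip(*streams):
            out.extend(column)
        return bytes(out)

    stm1s = [bytes([k]) * 6 for k in range(4)]   # four toy STM-1 byte streams
    stm4 = interleave(stm1s)
    print(stm4.hex())                            # 000102030001020300...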

What is multiplex section protection (line)?
This protection mechanism is described in ITU-T recommendation G.783. A service line is protected using another line, called a protection line. If an error occurs, for instance a loss of signal (LOS), the protection mechanism switches over to the protection line. There are two different architectures for protecting the multiplex section (line): 1+1 and 1:N (N>=1). In the first case (1+1), the traffic is transmitted simultaneously along the two lines, both the service line and the protection line. At the far end, the line that delivers the signal in the best condition is the one that is chosen. In the second case (1:N), there are N service lines with traffic and one providing protection. No traffic is transmitted on the protection line, or at most low-priority traffic as a way of making use of its additional capacity, to be interrupted if one of the N protected lines should fail. When N>1 this method is especially economical, as N service lines share a single protection line.

What is multiplex section shared protection (ring - 2 fibres)?
Shared protection is characterized by the division into equal parts of the service channels and protection channels transported by each multiplex section. The principle of sharing is based on the idea that several service circuits use (share) the same protection circuits, in contrast to dedicated protection. Any section can have access to the protection channels when faced with the failure of a section or network element in the ring. This way the protection capacity is shared between different sections of the ring. Shared protection in rings is also known by the acronym MS-SPRING. When the ring has two fibres with transmission in both directions and shared protection, it is said to be a two-way ring. In this case, each fibre has working channels and protection channels permanently assigned to it. To take advantage of the additional capacity of the protection channels under normal operating conditions, these can be used to carry low-priority traffic. This traffic can then be interrupted if there is a failure, and the protection channels can then switch over to carrying the main traffic.

What is an Automatically Switched Optical Network? ASON (Automatically Switched Optical Network) is a concept for the evolution of transport networks which allows for dynamic, policy-driven control of an optical or SDH network based on signaling between a user and

components of the network.[1] Its aim is to automate resource and connection management within the network. The IETF defines ASON as an alternative/supplement to NMS-based connection management.

What is Ethernet Ring Protection Switching, or ERPS?
ERPS is an effort at the ITU-T, under the G.8032 Recommendation, to provide sub-50 ms protection and recovery switching for Ethernet traffic in a ring topology while ensuring that no loops are formed at the Ethernet layer. G.8032v1 supported a single-ring topology; G.8032v2 supports multiple-ring/ladder topologies. ERPS specifies protection switching mechanisms and a protocol for Ethernet-layer network (ETH) rings. Ethernet rings can provide wide-area multipoint connectivity more economically due to their reduced number of links. The mechanisms and protocol defined in this Recommendation achieve highly reliable and stable protection and never form loops, which would fatally affect network operation and service availability.

Each Ethernet Ring Node is connected to adjacent Ethernet Ring Nodes participating in the same Ethernet Ring, using two independent links. A ring link is bounded by two adjacent Ethernet Ring Nodes, and a port for a ring link is called a ring port. The minimum number of Ethernet Ring Nodes in an Ethernet Ring is two. The fundamentals of this ring protection switching architecture are: a) the principle of loop avoidance; and b) the utilization of the learning, forwarding, and Filtering Database (FDB) mechanisms defined in the Ethernet flow forwarding function (ETH_FF). Loop avoidance in an Ethernet Ring is achieved by guaranteeing that, at any time, traffic may flow on all but one of the ring links. This particular link is called the Ring Protection Link (RPL), and under normal conditions this ring link is blocked, i.e. not used for service traffic. One designated Ethernet Ring Node, the RPL Owner Node, is responsible for blocking traffic at one end of the RPL. Under an Ethernet ring failure condition, the RPL Owner Node is responsible for unblocking its end of the RPL (unless the RPL has failed), allowing the RPL to be used for traffic. The other Ethernet Ring Node adjacent to the RPL, the RPL Neighbour Node, may also participate in blocking or unblocking its end of the RPL. An Ethernet ring failure results in protection switching of the traffic. This is achieved under the control of the ETH_FF functions on all Ethernet Ring Nodes. An APS protocol is used to coordinate the protection actions over the ring.

What is in-band signalling?
In the public switched telephone network (PSTN), in-band signaling is the exchange of signaling (call-control) information on the same channel that the telephone call itself is using. Today, most long-distance communication uses out-of-band signaling as specified in the various Signaling System 7 (SS7) standards.

What is in-band and out-of-band control?
In-band control is a characteristic of network protocols with which data control is regulated. In-band control passes control data on the same connection as main data. Protocols that use in-band control include HTTP and SMTP. This is as opposed to out-of-band control, used by protocols such as FTP. SMTP is in-band because the control messages, such as "HELO" and "MAIL FROM", are sent in the same stream as the actual message content. Out-of-band control is a characteristic of network protocols with which data control is regulated. Out-of-band control passes control data on a separate connection from main data.
Protocols such as FTP use out-of-band control. FTP sends its control information, which includes user identification, password, and put/get commands, on one connection, and sends data files on a separate parallel connection. Because it uses a separate connection for the control information, FTP uses out-of-band control.

What is SMTP?
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks. SMTP was first defined by RFC 821 (1982, eventually declared STD 10)[1] and last updated by RFC 5321 (2008),[2] which includes the extended SMTP (ESMTP) additions and is the protocol in widespread use today. SMTP is specified for outgoing mail transport and uses TCP port 25. The protocol for new submissions is effectively the same as SMTP, but it uses port 587 instead. SMTP connections secured by SSL are known by the shorthand SMTPS, though SMTPS is not a protocol in its own right. While electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For receiving messages, client applications usually use either the Post Office Protocol (POP) or the Internet Message

Access Protocol (IMAP), or a proprietary system (such as Microsoft Exchange or Lotus Notes/Domino), to access their mailbox accounts on a mail server.

What is Synchronization Status Messaging?
Synchronization status messaging (SSM) is an SDH protocol that communicates information about the quality of the timing source. SSM messages are carried in the S1 byte of the SDH section overhead. They enable SDH devices to automatically select the highest-quality timing reference and to avoid timing loops. SSM messages are either Generation 1 or Generation 2. Generation 1 is the first and most widely deployed SSM message set; Generation 2 is a newer version. If you enable SSM for the ONS 15454 SDH, consult your timing reference documentation to determine which message set to use. Table 10-1 shows the SDH messages.

What is ITU-T Recommendation G.813?
This Recommendation outlines requirements for timing devices used in synchronizing network equipment that operates according to the principles governed by the Synchronous Digital Hierarchy (SDH). These requirements apply under the normal environmental conditions specified for the SDH equipment. In normal operation, SDH equipment contains a slave clock traceable to a primary reference clock. In general, the SDH equipment clock (SEC) will have multiple reference inputs. In the event that all links between the master and the slave clock fail, the equipment should be capable of maintaining operation (holdover) within prescribed performance limits. The SEC is part of the SDH equipment, the functions of which are specified in ITU-T Rec. G.783 as the Synchronous Equipment Timing Source (SETS). Slave clocks used in SDH equipment must meet specific requirements in order to comply with network jitter requirements for plesiochronous tributaries. This Recommendation contains two options for the SEC. The first option, referred to as "Option 1", applies to SDH networks optimized for the 2048 kbit/s hierarchy. These networks allow the worst-case synchronization reference chain as specified in Figure 8-5/G.803. The second option, referred to as "Option 2", applies to SDH networks optimized for the 1544 kbit/s hierarchy, which includes the rates 1544 kbit/s, 6312 kbit/s, and 44 736 kbit/s. An SDH equipment slave clock should comply with all of the requirements specific to one option and should not mix requirements between Options 1 and 2. In the clauses where a single requirement is specified, the requirement is common to both options. It is the intention that Options 1 and 2 should be harmonized in the future. Careful consideration should be taken when interworking between networks with SECs based on Option 1 and networks with SECs based on Option 2. This Recommendation defines the minimum requirements for clocks in SDH NEs; however, some SDH NEs may have a higher-quality clock. This Recommendation allows for proper network operation when a SEC (Option 1 or 2) is timed from another SEC (of like option) or a higher-quality clock. Hierarchical timing distribution is recommended for SDH networks. Timing should not be passed from a SEC in free-run/holdover mode to a higher-quality clock, since the higher-quality clock should not follow the SEC signal during fault conditions.

What is CFM-OAM?
Ethernet Operations, Administration, and Maintenance (OAM) is a protocol for installing, monitoring, and troubleshooting Ethernet networks to increase management capability within the context of the overall Ethernet infrastructure.
The Cisco ME 3400 switch supports IEEE 802.1ag Connectivity Fault Management (CFM), Ethernet Local Management Interface (E-LMI), and IEEE 802.3ah Ethernet OAM discovery, link monitoring, remote fault detection, and remote loopback. Cisco IOS Release 12.2(40)SE adds support for IP Service Level Agreements (SLAs) for CFM. The Ethernet OAM manager controls the interworking between any two of the protocols (CFM, E-LMI, and OAM).

What is FCAPS?
FCAPS is the ISO Telecommunications Management Network model and framework for network management. FCAPS is an acronym for Fault, Configuration, Accounting, Performance, Security: the management categories into which the ISO model divides network management tasks. In non-billing organizations, Accounting is sometimes replaced with Administration. The comprehensive management of an organization's information technology (IT) infrastructure is a fundamental requirement. Employees and customers rely on IT services whose availability and performance are mandated, and problems must be quickly identified and resolved. Mean time to repair (MTTR) must be as short as possible to avoid system downtime where a loss of revenue or lives is possible.

What are RSTP & MSTP?
This chapter describes how to configure the Cisco implementation of the IEEE 802.1w Rapid Spanning Tree Protocol (RSTP) and the IEEE 802.1s Multiple STP (MSTP) on your Catalyst 3550 switch. It also describes how to configure per-VLAN rapid spanning tree (PVRST). RSTP provides rapid convergence of the spanning

tree. MSTP, which uses RSTP to provide rapid convergence, enables VLANs to be grouped into a spanning-tree instance, provides for multiple forwarding paths for data traffic, and enables load balancing. It improves the fault tolerance of the network because a failure in one instance (forwarding path) does not affect other instances (forwarding paths). The most common initial deployment of MSTP and RSTP is in the backbone and distribution layers of a Layer 2 switched network; this deployment provides the highly available network required in a service-provider environment. Both RSTP and MSTP improve the operation of the spanning tree while maintaining backward compatibility with equipment based on the (original) 802.1D spanning tree, with the existing Cisco per-VLAN spanning tree (PVST), and with the existing Cisco-proprietary Multiple Instance STP (MISTP). PVRST uses RSTP to provide rapid convergence of spanning-tree instances. PVRST also maintains backward compatibility with equipment based on the 802.1D spanning tree, with PVST, and with MISTP. You can use this feature on a switch running MSTP.

What are ingress & egress filtering?
In computer networking, ingress filtering is a technique used to make sure that incoming packets are actually from the networks they claim to be from. Networks receive packets from other networks. Normally a packet will contain the IP address of the computer that originally sent it. This allows devices in the receiving network to know where it came from, allowing a reply to be routed back (among other things). However, a sender IP address can be faked ("spoofed"), characterizing a spoofing attack. This disguises the origin of the packets sent, e.g., in a denial-of-service attack. Network ingress filtering is a packet-filtering technique used by many Internet service providers to try to prevent source-address spoofing of Internet traffic, and thus indirectly combat various types of net abuse by making Internet traffic traceable to its source. Network ingress filtering is a "good neighbor" policy which relies on cooperation between ISPs for their mutual benefit. The best current practice for network ingress filtering is documented by the Internet Engineering Task Force in BCP 38, which is currently defined by RFC 2827. BCP 38 recommends that upstream providers of IP connectivity filter packets entering their networks from downstream customers and discard any packets which have a source address not allocated to that customer.

In computer networking, egress filtering is the practice of monitoring and potentially restricting the flow of information outbound from one network to another. Typically it is information from a private TCP/IP computer network to the Internet that is controlled. TCP/IP packets being sent out of the internal network are examined via a router or firewall. Packets that do not meet security policies are not allowed to leave; they are denied "egress". Egress filtering helps ensure that unauthorized or malicious traffic never leaves the internal network. In a corporate network, typically all traffic except that emerging from a select set of servers would be denied egress. Restrictions can further be made so that only select protocols such as HTTP, email, and DNS are allowed. User workstations would then need to be set to use one of the allowed servers as a proxy. Direct access to external networks by internal user workstations would not be allowed.
Egress filtering may require policy changes and administrative work whenever a new application requires external network access. For this reason egress filtering is an uncommon feature on consumer and very small business networks. The recent appearance of botnets within private networks has increased the use of egress filtering by security-conscious organizations. Egress filtering is also becoming required for those who must comply with the PCI DSS, as it requires egress filtering from any server in the cardholder environment. This is seen in PCI-DSS v1.2, sections 1.2.1 and 1.3.5.
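A BCP 38-style ingress check reduces to a membership test against the prefixes allocated to the downstream customer. The Python sketch below uses the standard ipaddress module; the customer prefixes are illustrative documentation ranges.

    import ipaddress

    # BCP 38 sketch: a provider edge drops packets arriving from a customer
    # link whose source address is not in that customer's allocated prefixes.
    CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24"),
                         ipaddress.ip_network("198.51.100.0/25")]

    def permit_ingress(src_ip: str) -> bool:
        src = ipaddress.ip_address(src_ip)
        return any(src in prefix for prefix in CUSTOMER_PREFIXES)

    print(permit_ingress("203.0.113.7"))   # True  - legitimately sourced
    print(permit_ingress("10.0.0.1"))      # False - spoofed/unallocated, drop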

Ethernet Card Overview
The Cisco CE 8-port Ethernet Card is a single-slot line card offering eight 10/100-Mbps Ethernet ports via standard RJ-45 interfaces. Traffic from the eight interfaces is encapsulated into a SONET/SDH payload using either GFP or framing based on High-Level Data Link Control (HDLC). The resulting packet-over-SONET/SDH (POS) traffic is then mapped into a SONET circuit for transport across the network. These circuits form a one-to-one relationship with the eight front-panel ports and are referred to as virtual concatenated groups (VCGs). Each VCG uses low-order or high-order contiguous and/or virtual concatenation mechanisms to determine circuit sizing. The card also supports LCAS, which allows hitless dynamic adjustment of SONET/SDH link bandwidth. Additionally, each card supports packet processing, classification, queuing based on quality of service (QoS), and traffic-scheduling features, all required for supporting advanced services delivery.

The Cisco ONS 15454 MSPP CE 10/100Base-T Ethernet Card includes these features:
- 8-port 10/100Base-T, RJ-45 connectors
- 4 x 150 Mbps (4 x STS-3/VC4) SONET/SDH transport bandwidth per card
- Each 10/100Base-T port mapped to SONET/SDH POS using GFP-F (ITU-T G.7041) or LAN Extension (LEX) (HDLC) encapsulation
- Each POS port can consist of high-order contiguous concatenation (CCAT) (SONET: STS-1, STS-3c; SDH: VC4) or VCAT (STS-1-1v, STS-1-2v, STS-1-3v) circuits
- Each POS port can consist of low-order contiguous concatenation (CCAT) (SDH: VC3) or VCAT (SONET: VT1.5-Xv where X=1-64; SDH: VC12-Xv where X=1-63, VC3-1v, VC3-2v, VC3-3v) circuits
- In-service capacity increment/decrement (ITU-T G.7042 LCAS)
- Sub-50-millisecond (ms) SONET/SDH protection/restoration of transport circuits

- Transparent to Layer 2 bridging, switching, Ethernet MAC control protocols (Cisco EtherChannel technology, 802.1x, Cisco Discovery Protocol, VLAN Trunking Protocol [VTP], Spanning Tree Protocol), and VLAN tagging (802.1Q and QinQ)

We supply LC-FC multimode simplex fiber optic patch cables; they are used to link equipment in fiber optic communications. The FC-LC simplex multimode fiber optic patch cables feature reliable quality and compatibility with international and industrial standards. These fiber optic jumper cables have low insertion loss and are available with LSZH, RoHS, Riser and Plenum jacket options. We make custom LC-FC multimode simplex fiber optic patch cables at customer request, and delivery is fast.
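The VCG circuit sizing described in the Ethernet card overview above is easy to reason about numerically. Below is a minimal sketch using nominal VCAT member payload capacities; the rates are approximate payload figures (not line rates) and the helper function is purely illustrative:

```python
# Approximate payload capacity of one VCAT member, in Mbit/s (nominal
# figures; exact usable payload depends on overhead accounting).
MEMBER_PAYLOAD_MBPS = {
    "VT1.5": 1.6,      # SONET low-order member
    "VC-12": 2.176,    # SDH low-order member
    "STS-1": 48.384,   # SONET high-order member (VC-3 carries the same payload)
    "STS-3c": 149.76,  # SONET high-order member (VC-4 equivalent)
}

def vcg_capacity(member: str, x: int) -> float:
    """Capacity of a virtually concatenated group <member>-Xv."""
    return MEMBER_PAYLOAD_MBPS[member] * x

# Sizing a VCG to carry a 100 Mbit/s Fast Ethernet port:
print(vcg_capacity("STS-1", 2))   # ~96.8  Mbit/s - slightly undersized
print(vcg_capacity("STS-1", 3))   # ~145.2 Mbit/s - fits, with headroom
print(vcg_capacity("VT1.5", 64))  # ~102.4 Mbit/s - VT1.5-64v fits with finer granularity
```

LCAS, as noted above, lets a group like STS-1-3v add or drop members in service, so the group can be resized without taking the circuit down.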

LC is a fiber optic connector invented by Lucent; it is a small form factor connector suited to dense installations. LC is available in single mode and multimode versions, simplex or duplex, with PC, UPC or APC polish. LC features a compact, pull-proof design and an RJ-45 style interface. Our LC fiber optic patch cables are compliant with IEC and EIA/TIA standards and are compatible with equipment from other companies. We supply LC fiber optic connectors, jumper cables, adapters, LC attenuators and related LC fiber optic components.

FC fiber optic cables are widely used in fiber optic systems; FC single mode products in particular are very popular. FC standard fiber optic connectors are used for telecommunications, CATV, LAN, MAN, WAN, test & measurement, industrial, medical and sensor applications. FC fiber optic connectors comply with the IEC 61754-13 and CECC 86115-801 standards and are well known for general, average applications. We supply single mode and multimode FC simplex fiber jumper cables, LC-FC patch cables and FC fiber optic components.

LC-FC simplex multimode patch cable applications:
- CATV and video
- Active device termination
- Premise installations
- Telecommunication networks
- Local Area Networks (LANs)
- Data processing networks
- Wide Area Networks (WANs)
- Industrial, medical & military networks

LC-FC simplex multimode patch cable features:
- High precision alignment
- Low insertion loss and back reflection
- Optional colors and cable lengths
- 100% tested during production and before delivery
- Good price and prompt delivery

An E1 link operates over two separate sets of wires, usually twisted pair cable. A nominal 3 volt peak signal is encoded with pulses using a method that avoids long periods without polarity changes. The line data rate is 2.048 Mbit/s (full duplex, i.e. 2.048 Mbit/s downstream and 2.048 Mbit/s upstream), split into 32 timeslots, each allocated 8 bits in turn. Thus each timeslot sends and receives an 8-bit PCM sample, usually encoded according to the A-law algorithm, 8,000 times per second (8 x 8000 x 32 = 2,048,000). This is ideal for voice telephone calls, where the voice is sampled at that data rate and reconstructed at the other end. The timeslots are numbered from 0 to 31.

40G XC : Ethernet Services Plus Extended Combination (ES Plus XC) line cards are designed for interface-flexible Ethernet services. They allow service prioritization for voice, video, data, and wireless mobility services and can connect to LAN, WAN, and Optical Transport Network Physical Layer (OTN PHY) interfaces as well as Gigabit Ethernet ports on the same physical line card. This unique form factor allows configurations with redundant network-to-network 10 Gigabit Ethernet interfaces residing on separate line card slots for resiliency, while offering user-to-network Gigabit Ethernet interfaces on the same slots for efficiency. Service providers and enterprises benefit from the efficiency gains in power consumption, optimized service scale, and feature capability, as well as the flexibility in interface speeds for Ethernet solutions.

G.711 is the default pulse code modulation (PCM) standard for Internet Protocol (IP) private branch exchange (PBX) vendors, as well as for the public switched telephone network (PSTN). G.711 digitizes analog voice signals, producing output at 64 kilobits per second (Kbps). In G.711, the mu-law codec is used in North America and Japan, while the A-law codec is more common in the rest of the world. Both algorithms create a 64-Kbps digital output from an input sample rate of 8 kilohertz (kHz). G.711 employs a technology called packet loss concealment (PLC) that can minimize the practical effect of dropped packets, and the effective signal bandwidth is reduced during silent periods by means of a process known as voice activity detection (VAD). The G.711 algorithm is not new: it was originally introduced by Bell Systems in the 1970s and was formally standardized by the International Telecommunication Union (ITU) in 1988. Today, G.711 is commonly used in Voice over Internet Protocol (VoIP), also known as Internet telephony.

G.729 is a standard for Internet Protocol (IP) private branch exchange (PBX) vendors, as well as for the public switched telephone network (PSTN). G.729 digitizes analog voice signals, producing output at 8 kilobits per second (Kbps) with 8:1 compression, and employs an algorithm called conjugate-structure algebraic code-excited linear prediction (CS-ACELP). G.729 has several extensions or annexes; the two most common are G.729A and G.729AB. In G.729A, input frames are 10 milliseconds (ms) in duration and generated frames contain 80 bits. The input and output contain 16-bit pulse-code modulation (PCM) samples converted from or to 8-Kbps compressed data, and the total algorithmic delay is 15 ms. In G.729AB, the parameters are the same as in G.729A, with the addition of voice activity detection (VAD), in which the effective signal bandwidth is reduced when there is no audio input.
A technology known as comfort noise generation (CNG) produces a small amount of background pink noise during pauses in speech to avoid the user distraction that can be caused by intervals of absolute silence.

ISDN is based on a number of fundamental building blocks. First, there are two types of ISDN "channels" or communication paths:

B-channel : The Bearer ("B") channel is a 64 kbps channel which can be used for voice, video, data, or multimedia calls. B-channels can be aggregated together for even higher bandwidth applications.

D-channel : The Delta ("D") channel can be either a 16 kbps or 64 kbps channel used primarily for communications (or "signaling") between switching equipment in the ISDN network and the ISDN equipment at your site.

These ISDN channels are delivered to the user in one of two pre-defined configurations:

Basic Rate Interface (BRI) : BRI is the ISDN service most people use to connect to the Internet. An ISDN BRI connection supports two 64 kbps B-channels and one 16 kbps D-channel over a standard phone line; BRI is therefore often called "2B+D", referring to its two B-channels and one D-channel. The D-channel on a BRI line can even support low-speed (9.6 kbps) X.25 data; however, this is not a very popular application in the United States.

Primary Rate Interface (PRI) : ISDN PRI service is used primarily by large organizations with intensive communications needs. An ISDN PRI connection supports twenty-three 64 kbps B-channels and one 64 kbps D-channel (23B+D) over a high-speed DS1 (T-1) circuit. The European PRI configuration is slightly different, supporting 30B+D.

BRI is the most common ISDN service for Internet access. A single BRI line can support up to three calls at the same time because it comprises three channels (2B+D): two voice, fax or data "conversations" and one packet-switched data "conversation" can take place at the same time. Multiple channels, or even multiple BRI lines, can be combined into a single faster connection depending on the ISDN equipment you have. Channels can be combined as needed for a specific application (a large multimedia file transfer, for example), then broken down and reassembled into individual channels for different applications (normal voice or data transmissions).

In network management, the SouthBound Interface usually refers to the interface exposed to lower layers, and the NorthBound Interface refers to the interface exposed to higher layers. For an EMS, the lower layer is the NE and the upper layer is the NMS; similarly, for an NMS the lower layer is the EMS and the upper layer is service management or the OSS. According to the TMN layer model, the hierarchy of network management layers from bottom to top is: Network Element -> Element Management -> Network Management -> Service Management -> Business Management. Let's explain it using an example: the NMS interface used to communicate with Network Elements (via the EMS) is considered the SouthBound Interface, whereas the NMS interface exposed to Service Level Management or the Operations Support System (OSS) is considered the NorthBound Interface.
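All of the rate figures quoted above for G.711, E1, BRI and PRI build on the same 64 kbps channel. A quick, purely illustrative arithmetic sketch of how they relate:

```python
# One digital voice channel: 8,000 samples/s x 8 bits = 64 kbit/s (G.711).
DS0 = 8_000 * 8                 # 64,000 bit/s

# E1: 32 timeslots of 64 kbit/s (timeslots 0 and 16 carry framing/signaling).
e1 = 32 * DS0                   # 2,048,000 bit/s = 2.048 Mbit/s

# ISDN BRI: two bearer channels plus one 16 kbit/s D-channel ("2B+D").
bri = 2 * DS0 + 16_000          # 144,000 bit/s

# ISDN PRI: 23B+D on a North American T-1, 30B+D on a European E1.
pri_t1 = 23 * DS0 + DS0         # 1,536,000 bit/s of channels on a DS1
pri_e1 = 30 * DS0 + DS0         # 1,984,000 bit/s of channels on an E1

# G.729 compresses the same 64 kbit/s voice stream 8:1.
g729 = DS0 // 8                 # 8,000 bit/s

print(e1, bri, pri_t1, pri_e1, g729)
```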

Voice Technology Basics : TDM versus packet-based networks. Traditional Separate Networks

Many organizations operate multiple separate networks because, when those networks were created, that was the best way to provide the various types of communication services at a cost and a level of quality acceptable to the user community.

For example, many organizations currently operate at least three wide-area networks: one for voice, one for SNA, and another for LAN-to-LAN data communications. LAN-to-LAN traffic can be very bursty. The traditional model for voice transport has been time-division multiplexing (TDM), which employs dedicated circuits. Dedicated TDM circuits are inefficient for the transport of bursty traffic such as LAN-to-LAN data. Let's look at TDM in more detail to understand why. Traditional TDM Networking

TDM relies on the allocation of bandwidth on an end-to-end basis. For example, a pulse code modulated (PCM) voice channel requires 64 kbps to be allocated from end to end. TDM wastes bandwidth, because bandwidth is allocated regardless of whether an actual phone conversation is taking place. So again, dedicated TDM circuits are inefficient for the transport of bursty traffic, because:
- LAN traffic can typically be supported by TDM in the WAN only by allocating enough bandwidth to support the peak requirement of each connection or traffic type. The trade-off is between poor application response time and expensive bandwidth.
- Regardless of whether single or multiple networks are involved, bandwidth is wasted. TDM traffic is transmitted across time slots, and each traffic type, mainly voice and data, takes dedicated bandwidth regardless of whether its time slot is idle or active. Bandwidth is not shared.
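A back-of-the-envelope sketch of that waste; the 35% voice activity factor below is an assumed figure chosen purely for illustration:

```python
# 24 PCM voice channels, each allocated a dedicated 64 kbit/s end to end.
channels = 24
allocated = channels * 64_000          # 1,536,000 bit/s reserved

# If on average only ~35% of channels carry active speech at any instant
# (assumed activity factor), the rest of the reservation sits idle.
activity = 0.35
used = allocated * activity

print(f"reserved:  {allocated / 1e6:.3f} Mbit/s")
print(f"carrying speech: {used / 1e6:.3f} Mbit/s")
print(f"idle: {(allocated - used) / 1e6:.3f} Mbit/s")  # roughly 1 Mbit/s wasted
```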

After: Integrated Multiservice Networks (Data/Voice/Video)

With a multiservice network, all traffic runs over the same infrastructure. We no longer have three or four separate networks, some TDM, some packet; one packet-based network carries all the traffic. How does this work? Let's look at packet-based networking. Packet-Based Networking

As we have just seen, TDM networking allocates time slots through the network. In contrast, packet-based networking is statistical, in that it relies on the laws of probability when servicing inbound traffic. A common trait of this type of networking is that the sum of the inbound bandwidth often exceeds the capacity of the trunk. Data traffic is by nature very bursty: at any instant, the amount of offered traffic may be well below the peak rate. Designing the network to more closely match the average offered traffic ensures that the trunk is used more efficiently. However, this efficiency is not without cost: in the effort to increase efficiency, we run the risk that a surge in offered traffic exceeds the trunk capacity. In that case, there are two options: discard the traffic or buffer it. Buffering reduces the potential for discarded data traffic but increases the delay of the data; large amounts of oversubscription combined with large amounts of buffering can result in long, variable delays.
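A toy simulation of that trade-off, using an assumed bursty arrival trace: the trunk drains a fixed number of packets per tick, the buffer absorbs bursts up to its depth, and anything beyond is discarded. Deepening the buffer trades loss for queuing delay, which is exactly the tension described above:

```python
from collections import deque

TRUNK_RATE = 3    # packets the trunk can send per tick
BUFFER_SIZE = 5   # packets the queue can hold (compare 0 vs. 20)

# Bursty offered load: the average is ~3 packets/tick, but peaks hit 10
# (an assumed trace for illustration).
arrivals = [1, 2, 10, 0, 1, 9, 0, 2, 8, 1]

queue = deque()
sent = dropped = total_wait = 0

for tick, burst in enumerate(arrivals):
    for _ in range(burst):
        if len(queue) < BUFFER_SIZE:
            queue.append(tick)            # remember arrival time
        else:
            dropped += 1                  # buffer full: discard the packet
    for _ in range(min(TRUNK_RATE, len(queue))):
        arrived = queue.popleft()
        total_wait += tick - arrived      # queuing delay in ticks
        sent += 1

print(f"sent={sent} dropped={dropped} avg_delay={total_wait / sent:.2f} ticks")
```

Running it with BUFFER_SIZE = 0 discards every packet the trunk cannot serve immediately, while BUFFER_SIZE = 20 saves most packets at the cost of a noticeably higher average delay.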
