
1. Write about different network structures in use. Ans:- NETWORK STRUCTURES

1. Network Operating Systems


Users are aware of the multiplicity of machines. Access to resources of various machines is done explicitly by:
Remote logging into the appropriate remote machine (ssh, browser)
Transferring data from remote machines to local machines (browser, ssh)

2. Distributed Operating Systems
Users are not aware of the multiplicity of machines. Access to remote resources is similar to access to local resources.
Data Migration: transfer data by transferring the entire file, or by transferring only those portions of the file necessary for the immediate task.

Advantages of distributed systems:


Resource Sharing: items such as printers, specialized processors, disk farms, and files can be shared among various sites.

Computation Speedup: making use of parallelism.

Load balancing - dividing up all the work evenly between sites.

Reliability: redundancy. With proper configuration, when one site goes down, the others can continue. But this doesn't happen automatically.

Advantages of distributed systems: Process Migration


Execute an entire process, or parts of it, at different sites

Load balancing: distribute processes across the network to even the workload.
Computation speedup: subprocesses can run concurrently on different sites.
Hardware preference: process execution may require a specialized processor.

Software preference: the required software may be available at only a particular site.
Data access: run the process remotely, rather than transfer all the data locally.


Topology
Methods of connecting sites together can be evaluated as follows:

Basic cost: This is the price of wiring, which is proportional to the number of connections (see the link-count sketch after the topology list below).

Communication cost: The time required to send a message. This is proportional to the amount of wire and the number of nodes traversed.

Reliability: If one site fails, can the others continue to communicate?

Let's look at a number of connection mechanisms using these criteria:

PARTIALLY CONNECTED

Direct links exist between some, but not all, of the sites.

Cheaper; slower; an error can partition the system.

HIERARCHICAL
Links are formed in a tree structure. Cheaper than a partially connected network; slower; the children of a failed node can no longer communicate with the rest of the network.

STAR
All sites are connected through a central site. Basic cost is low; speed and reliability are limited by the hub, which is both a bottleneck and a single point of failure.

RING
Links may be unidirectional or bidirectional, with single or double links. Cost is linear with the number of sites; communication cost is high; the failure of a single site can partition the ring.

MULTIACCESS BUS
Nodes hang off a single shared link (a line or a ring) rather than being part of it.

Cost is linear; communication cost is low; the failure of a site does not partition the network.
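
As a rough illustration of the basic-cost criterion, the sketch below (my own helper, not part of the original notes) counts the links needed to wire n sites in the star, hierarchical (tree), and ring topologies.

```python
def link_count(topology: str, n: int) -> int:
    """Number of point-to-point links needed to connect n sites."""
    if topology == "star":
        return n - 1        # every site wired to the central hub
    if topology == "tree":
        return n - 1        # one link from each non-root node to its parent
    if topology == "ring":
        return n            # each site linked to its successor around the ring
    raise ValueError(f"unknown topology: {topology}")

for t in ("star", "tree", "ring"):
    print(f"{t:5s} with 10 sites needs {link_count(t, 10)} links")
```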

2. Describe the architecture and usage of ISDN. Ans:- Integrated Services Digital Network (ISDN) is a set of communication
standards for simultaneous digital transmission of voice, video, data, and other network services over the traditional circuits of the public switched telephone network. It was first defined in 1988 in the CCITT red book. Prior to ISDN, the telephone system was viewed as a way to transport voice, with some special services available for data. The key feature of ISDN is that it integrates speech and data on the same lines, adding features that were not available in the classic telephone system. There are several kinds of access interfaces to ISDN, defined as Basic Rate Interface (BRI), Primary Rate Interface (PRI), and Broadband ISDN (B-ISDN).

ISDN was developed by ITU-T in 1976. It is a set of protocols that combines digital telephony and data transport services. The whole idea is to digitize the telephone network to permit the transmission of audio, video, and text over existing telephone lines. ISDN is an effort to standardize subscriber services, provide user/network interfaces, and facilitate the internetworking capabilities of existing voice and data networks.

The goal of ISDN is to form a wide area network that provides universal end-to-end connectivity over digital media. This can be done by integrating all of the separate transmission services into one without adding new links or subscriber lines. The purpose of ISDN is to provide fully integrated digital services to users. These services fall into three categories:-

1. Bearer Services: These services provide the means to transfer information (voice, data, and video) between users without the network manipulating the content of that information. The network does not need to process the information and therefore does not change the content. Bearer services belong to the first three layers of the OSI model and are well defined in the ISDN standard.

2. Tele Services: In this service, the network may change or process the contents of the data. These services correspond to layers 4-7 of the OSI model. Teleservices rely on the facilities of the bearer services and are designed to accommodate complex user needs without the user having to be aware of the details of the process. Teleservices include telephony, teletex, videotex, telex, and teleconferencing.

3. Supplementary Services: These services provide additional functionality to the bearer and teleservices. Examples: reverse charging, call waiting, and message handling.

Principles of ISDN: Standards for ISDN have been defined by ITU-T (formerly CCITT). The ISDN-related standards state the principles of ISDN from the point of view of CCITT.

3. Explain the concept of framing in the Data Link Layer and its importance in data communication. Ans:- Data transmission in the physical layer means moving bits in the form of a signal from the source to the destination. The physical layer provides bit synchronization to ensure that the sender and receiver use the same bit durations and timing. The data link layer, on the other hand, needs to pack bits into frames, so that each frame is distinguishable from another. Framing in the link layer separates a message from one source to a destination, or from other messages to other destinations, by adding a sender address and a destination address. The destination address defines where the packet is to go; the sender address helps the recipient acknowledge the receipt.

Although the whole message could be packed into one frame, this is not normally done. When a message is carried in one very large frame, even a single bit error would require the retransmission of the whole message. When a message is divided into smaller frames, a single bit error affects only that small frame.

Fixed-Size Framing: In this, there is no need for defining the boundaries of the frame; the size itself can be used as a delimiter. Example: the ATM WAN, which uses frames of fixed size called cells.

Variable-Size Framing: This type of framing is prevalent in LANs. In this, we need a way to define the end of one frame and the beginning of the next. Two approaches are used for this purpose: a character-oriented approach and a bit-oriented approach.

Character-Oriented Protocols: In these protocols, the data to be carried are 8-bit characters from a coding system such as ASCII.

The DLL translates the physical layer's raw bit stream into discrete units (messages) called frames. This is because the physical layer just accepts a raw bit stream and attempts to deliver it to the destination. This bit stream is not guaranteed to be error free: the number of bits received may be less than, equal to, or more than the number of bits transmitted, and they may also have different values. It is up to the DLL to detect and, if necessary, correct errors; if it cannot correct them, it should at least detect them and take proper action, such as asking for retransmission. The usual approach of the DLL is to break the bit stream up into discrete frames and compute a checksum for each frame for this purpose.
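
As a concrete illustration of the character-oriented approach, here is a minimal byte-stuffing sketch (the FLAG/ESC values and function names are my own, not from the notes): a FLAG byte marks the start and end of each frame, and an ESC byte is stuffed before any FLAG or ESC that happens to occur inside the data itself.

```python
FLAG = 0x7E  # frame delimiter
ESC = 0x7D   # escape byte

def frame(payload: bytes) -> bytes:
    """Wrap the payload in FLAG bytes, escaping FLAG/ESC inside the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # stuff an escape before the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def deframe(data: bytes) -> bytes:
    """Strip the delimiters and undo the stuffing."""
    payload = bytearray()
    escaped = False
    for b in data[1:-1]:             # drop the leading and trailing FLAG
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)

message = bytes([0x41, FLAG, 0x42])  # payload that contains a FLAG byte
assert deframe(frame(message)) == message
```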

4. Differentiate Noisy and Noiseless channels in Data Communication Ans:-Noiseless Channels


Assume that we have an ideal channel in which no frames are lost, duplicated, or corrupted. We introduce two protocols for this type of channel. The first is a protocol that does not use flow control; the second is one that does. Of course, neither has error control, because we have assumed that the channel is a perfect, noiseless channel.

1. Simplest Protocol
This protocol has no flow or error control. It is a unidirectional protocol in which data frames travel in only one direction, from the sender to the receiver. We assume that the receiver can handle any frame it receives, with a processing time small enough to be negligible. The data link layer of the receiver immediately removes the header from the frame and hands the data packet to its network layer, which can also accept the packet immediately. In other words, the receiver can never be overwhelmed with incoming frames.

2. Stop-and-Wait Protocol

If the data frames arrive at the receiver site faster than they can be processed, the frames must be stored until they are used. Normally, the receiver does not have enough storage space, especially if it is receiving data from many sources. This may result in either the discarding of frames or a denial of service. To prevent the receiver from becoming overwhelmed with frames, we somehow need to tell the sender to slow down; there must be feedback from the receiver to the sender. This protocol is so called because the sender sends one frame, stops until it receives confirmation from the receiver, and then sends the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames travel in the other direction.
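
A minimal sketch of this behaviour (my own toy code, still assuming a noiseless channel): the sender transmits one frame and blocks until the ACK arrives before sending the next, so the receiver is never flooded.

```python
import queue
import threading

forward = queue.Queue()   # data frames: sender -> receiver
backward = queue.Queue()  # ACK frames: receiver -> sender

def sender(frames):
    for f in frames:
        forward.put(f)    # send one frame
        backward.get()    # stop and wait for the ACK before the next frame

def receiver(count):
    for _ in range(count):
        f = forward.get()             # receive and hand the data upward
        print("delivered:", f)
        backward.put("ACK")           # feedback that paces the sender

data = ["frame 0", "frame 1", "frame 2"]
t = threading.Thread(target=receiver, args=(len(data),))
t.start()
sender(data)
t.join()
```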

Noisy Channels
In reality, noiseless channels do not exist. We can either ignore errors or add error control to our protocols. We discuss three protocols in this section that use error control.

1. Stop-and-Wait Automatic Repeat Request


This protocol adds a simple error control mechanism to the Stop-and-Wait protocol. To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the frame arrives at the receiver site, it is checked; if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the receiver.
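
The sketch below (names and the CRC choice are my own; the channel object is hypothetical) shows the error-control part: the sender appends redundancy bits to each frame, the receiver silently discards a corrupted frame, and the sender's timeout on the missing ACK triggers retransmission of the same frame.

```python
import zlib

def make_frame(seq: int, data: bytes) -> bytes:
    body = bytes([seq]) + data
    crc = zlib.crc32(body).to_bytes(4, "big")     # the redundancy bits
    return body + crc

def check_frame(frame: bytes):
    """Return (seq, data) if the frame is intact, or None if corrupted."""
    body, crc = frame[:-4], frame[-4:]
    if zlib.crc32(body) != int.from_bytes(crc, "big"):
        return None                               # corrupted: silently discard
    return body[0], body[1:]

def send_with_arq(channel, seq: int, data: bytes, timeout: float = 1.0) -> None:
    """channel is a hypothetical object with send() and wait_for_ack()."""
    frame = make_frame(seq, data)
    while True:
        channel.send(frame)
        ack = channel.wait_for_ack(timeout)       # returns None on timeout
        if ack == seq:
            return                                # acknowledged, move on
        # timeout or wrong ACK: retransmit the same frame
```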

5. Explain any two dynamic routing algorithms. Ans:-Routing Algorithms


The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the best routes to various network destinations. Thus constructing the routing tables, which are held in the router's memory, becomes very important for efficient routing. Adaptive algorithms differ in where they get their information: they may get it locally, from adjacent routers, or from all routers. They get information periodically, or whenever there is a change in routes, the load changes, or the topology changes. The algorithms also differ in terms of the metric used for optimization; the metric may be distance, number of hops, estimated transit time, etc. The study of a number of routing algorithms with examples is the content of this section.

Distance Vector Routing


It is a dynamic routing algorithm. The distance vector routing algorithm consists of a data structure called a routing table, and each router maintains such a table. It is basically a vector that keeps track of the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with the neighbours. Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number, the cost, to each of the links between the nodes in the network. Nodes then send information from point A to point B via the path that results in the lowest total cost (i.e. the sum of the costs of the links between the nodes used).
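
A minimal sketch (the topology and names are my own example) of one distance-vector update using the Bellman-Ford rule: the new cost to each destination is the minimum over all neighbours of the link cost to the neighbour plus the distance the neighbour advertises to that destination.

```python
INF = float("inf")

def dv_update(link_cost, neighbour_vectors, destinations):
    """Recompute this router's routing table from its neighbours' vectors.

    link_cost:          {neighbour: cost of the direct link}
    neighbour_vectors:  {neighbour: {destination: advertised distance}}
    returns             {destination: (best cost, next hop)}
    """
    table = {}
    for dest in destinations:
        best, via = INF, None
        for nbr, cost in link_cost.items():
            total = cost + neighbour_vectors[nbr].get(dest, INF)
            if total < best:
                best, via = total, nbr
        table[dest] = (best, via)
    return table

# Example: router A has neighbours B (link cost 1) and C (link cost 4).
print(dv_update(
    {"B": 1, "C": 4},
    {"B": {"B": 0, "C": 1, "D": 2}, "C": {"B": 1, "C": 0, "D": 5}},
    ["B", "C", "D"],
))
# -> B via B at cost 1, C via B at cost 2, D via B at cost 3
```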

Hierarchical Routing
When hierarchical routing is used, the routers are divided into regions. Each router knows all the details about how to route packets to destinations within its own region, but it does not have any idea about the internal structure of other regions. For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on. As an example, consider a two-level hierarchy with five regions. The full routing table for router 1A has 17 entries; when routing is done hierarchically, there are only 7 entries. The saving in table space increases as the network grows. Unfortunately, this reduction in table space comes at the cost of increased path length. For example, the best path from 1A to 5C is via region 2, but with hierarchical routing all traffic to region 5 goes via region 3.
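
A small sketch of the table-size saving (the per-region split is one possible split consistent with the example's totals, not stated in the notes): router 1A keeps one entry per router in its own region plus one entry per other region, instead of one entry per router in the whole network.

```python
def table_sizes(routers_per_region):
    """routers_per_region[0] is the size of the router's own region."""
    flat = sum(routers_per_region)                          # one entry per router
    hierarchical = routers_per_region[0] + len(routers_per_region) - 1
    return flat, hierarchical

# Five regions, 17 routers in total, router 1A living in the first region.
print(table_sizes([3, 4, 2, 3, 5]))   # -> (17, 7)
```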

6. Discuss Congestion Avoidance in Transport Layer. Ans:-Congestion avoidance


The assumption of the algorithm is that packet loss caused by damage is very small (much less than 1%). Therefore, the loss of a packet signals congestion somewhere in the network between the source and the destination. There are two indications of packet loss:
1. A timeout occurs.
2. Duplicate ACKs are received.

Congestion avoidance and slow start are independent algorithms with different objectives. But when congestion occurs, TCP must slow down its transmission rate of packets into the network and then invoke slow start to get things going again. In practice, they are implemented together. Congestion avoidance and slow start require that two variables be maintained for each connection:
1. A congestion window, cwnd
2. A slow start threshold size, ssthresh

The combined algorithm operates as follows:
1. Initialization for a given connection sets cwnd to one segment and ssthresh to 65535 bytes.
2. The TCP output routine never sends more than the lower value of cwnd or the receiver's advertised window.
3. When congestion occurs, one-half of the current window size is saved in ssthresh. Additionally, if the congestion is indicated by a timeout, cwnd is set to one segment.
4. When new data is acknowledged by the other end, cwnd is increased, but the way it increases depends on whether TCP is performing slow start or congestion avoidance. If cwnd is less than or equal to ssthresh, TCP is in slow start; otherwise, TCP is performing congestion avoidance.

Slow start continues until TCP is halfway to where it was when congestion occurred (since ssthresh was set to half of the window size that caused the problem), and then congestion avoidance takes over. In slow start, cwnd begins at one segment and is incremented by one segment every time an ACK is received. As mentioned earlier, this opens the window exponentially: send one segment, then two, then four, and so on. Congestion avoidance dictates that cwnd be incremented by segsize*segsize/cwnd each time an ACK is received, where segsize is the segment size and cwnd is maintained in bytes. This is a linear growth of cwnd, compared to slow start's exponential growth. The increase in cwnd should be at most one segment each round-trip time, while slow start increments cwnd by the number of ACKs received in a round-trip time. Many implementations incorrectly add a small fraction of the segment size during congestion avoidance; this is wrong and should not be emulated in future releases.
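
A minimal sketch (class and variable names are my own) of the combined rules above: cwnd starts at one segment, grows by one segment per ACK while cwnd <= ssthresh (slow start), grows by segsize*segsize/cwnd per ACK afterwards (congestion avoidance), and on congestion ssthresh is set to half the current window, with cwnd dropping back to one segment if the loss was signalled by a timeout.

```python
SEGSIZE = 1460                         # illustrative segment size in bytes

class CongestionControl:
    def __init__(self):
        self.cwnd = SEGSIZE            # 1. start with one segment
        self.ssthresh = 65535          #    and the initial threshold from the text

    def send_window(self, advertised: int) -> int:
        # 2. never send more than min(cwnd, receiver's advertised window)
        return min(self.cwnd, advertised)

    def on_congestion(self, timeout: bool) -> None:
        # 3. save half of the current window in ssthresh
        self.ssthresh = max(self.cwnd // 2, 2 * SEGSIZE)
        if timeout:
            self.cwnd = SEGSIZE        #    a timeout also resets cwnd to one segment

    def on_ack(self) -> None:
        # 4. grow cwnd: exponentially in slow start, linearly afterwards
        if self.cwnd <= self.ssthresh:
            self.cwnd += SEGSIZE                          # slow start
        else:
            self.cwnd += SEGSIZE * SEGSIZE // self.cwnd   # congestion avoidance
```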
