
INDEX

1. PRELIMINARIES: Study and use of common TCP/IP protocols:
   (i) viz. telnet, rlogin, ftp, ping, finger, socket, port, etc.
2. DATA STRUCTURES USED IN NETWORK PROGRAMMING:
   (i) Representation of undirected, directed, weighted and unweighted graphs.
3. ALGORITHMS IN NETWORKS:
   (i) Computation of the shortest path for one source-one destination and one source-all destinations.
4. SIMULATION OF NETWORK PROTOCOLS: M/M/1 and M/M/1/N queues.
5. CASE STUDY: On LAN training kit
   (i) Observe the behavior & measure the throughput of reliable data transfer protocols under various bit error rates for the following DLL layer protocols:
       a. Stop & Wait
       b. Sliding Window: Go-Back-N and Selective Repeat
   (ii) Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols:
       a. ALOHA
       b. CSMA, CSMA/CD & CSMA/CA
       c. Token Bus & Token Ring
6. DEVELOPMENT OF CLIENT SERVER APPLICATION:
   (i) Develop a telnet client and server which use a port other than 23.
   (ii) Write a finger application which prints all available information for the five users currently logged on who have been using the network for the longest duration. Print the information in ascending order of time.

2. DATA STRUCTURES USED IN NETWORK PROGRAMMING

AIM: Representation of undirected, directed, weighted and unweighted graphs.

Definitions:
- A graph is a collection (nonempty set) of vertices and edges.
- A path from vertex x to vertex y: a list of vertices in which successive vertices are connected by edges.
- Connected graph: there is a path between every two vertices.
- Simple path: no vertex is repeated.
- Cycle: a simple path except that the first vertex is equal to the last.
- Loop: an edge that connects a vertex with itself.
- Tree: a connected graph with no cycles.
- Spanning tree of a graph: a subgraph that contains all the vertices and no cycles.
- Complete graph: a graph with all edges present; each vertex is connected to all other vertices.
- Weighted graph: weights are assigned to each edge (e.g. a road map with distances).
- Directed graph: the edges are oriented; they have a beginning and an end. A directed graph may also be acyclic (a DAG).
- Outdegree of a node U: the number of outgoing edges (U, V).
- Indegree of a node U: the number of incoming edges (V, U).

Algorithm (topological sort of a directed acyclic graph):
1. Initialize the sorted list to be empty, and a counter to 0.
2. Compute the indegrees of all nodes.
3. Store all nodes with indegree 0 in a queue.
4. While the queue is not empty:
   a. Remove a node U from the queue and put it in the sorted list. Increment the counter.
   b. For all edges (U, V), decrement the indegree of V, and put V in the queue if the updated indegree is 0.
5. If the counter is not equal to the number of nodes, there is a cycle.

Complexity: the number of operations is O(|E| + |V|), where |V| is the number of vertices and |E| the number of edges. How many operations are needed to compute the indegrees? It depends on the representation:

Adjacency lists: O(|E|). Adjacency matrix: O(|V|²).

Representation of Graphs
There are two common ways of representing graphs: a two-dimensional array (adjacency matrix) for dense graphs, and a linked-list structure (adjacency lists) for sparse graphs. These will now be discussed in detail, along with the structure of a graph class that could be implemented.
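As a sketch of the two representations and the indegree-based algorithm above, here is a short Python version (the example graph, vertex labels, and function name are illustrative choices, not part of the original material):

```python
from collections import deque

def topological_sort(n, adj):
    """Kahn's indegree-based algorithm on a digraph with vertices 0..n-1,
    given as adjacency lists. Returns the sorted order, or None if the
    graph contains a cycle (the final counter check in step 5)."""
    indegree = [0] * n
    for u in range(n):                    # compute indegrees: O(|E|) here
        for v in adj[u]:
            indegree[v] += 1
    queue = deque(u for u in range(n) if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)                   # step 4a: append and count
        for v in adj[u]:                  # step 4b: for all edges (U, V)
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order if len(order) == n else None

# The same weighted digraph 0 -> 1 (weight 5), 1 -> 2 (weight 3) in both forms:
adj_matrix = [[0, 5, 0],                  # dense: |V| x |V| array of weights
              [0, 0, 3],
              [0, 0, 0]]
adj_list = [[v for v in range(3) if adj_matrix[u][v]] for u in range(3)]
print(topological_sort(3, adj_list))      # [0, 1, 2]
```

Computing the indegrees walks every adjacency list once with the list representation, but must scan all |V|² matrix entries with the dense representation, which is the complexity difference noted above.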

4. SIMULATION OF NETWORK PROTOCOLS: M/M/1 and M/M/1/N queues.

The M/M/1 is a single-server queue model that can be used to approximate simple systems. Following Kendall's notation, it indicates a system where:

- arrivals are a Poisson process;
- service time is exponentially distributed;
- there is one server;
- the length of the queue in which arriving users wait before being served is infinite;
- the population of users (i.e. the pool of users) available to join the system is infinite.

Analysis
Such a system can be modelled by a birth-death process, where each state represents the number of users in the system. As the system has an infinite queue and the population is unlimited, the number of states the system can occupy is infinite: state 0 (no users in the system), state 1 (one user), state 2 (two users), etc. As the queue will never be full and the population size is infinite, the birth rate (arrival rate), λ, is constant for every state. The death rate (service rate), μ, is also constant for all states (apart from state 0, where no user can leave). In fact, regardless of the state, we can have only two events:

- A new user arrives: if the system is in state k, it goes to state k + 1 with rate λ.
- A user leaves the system: if the system is in state k, it goes to state k − 1 (or stays in state k if k = 0) with rate μ.

It is easy now to see that the system is stable only if λ < μ. In fact, if the death rate is less than the birth rate, the average number of users in the queue will become infinite; that is, the system will not have an equilibrium. The model can reveal interesting performance measures of the system being modelled, for example:

- the mean time a user spends in the system;
- the mean time a user spends waiting in the queue;
- the expected number of users in the system;
- the expected number of users in the queue;
- the throughput (number of users served per unit time).

Stationary solution

We can define ρ = λ/μ, the utilization of the system. The probability that the system is in state i can then be easily calculated:

P_i = (1 − ρ) ρ^i

With this information, the performance measures of interest can be found; for example:

The expected number of users in the system is N = ρ/(1 − ρ), and its variance is ρ/(1 − ρ)^2.

The expected number of requests in the server is ρ.

The expected number of requests in the queue is ρ^2/(1 − ρ).

The total expected waiting time (queue + service) is T = 1/(μ − λ).

The expected waiting time in the queue is W = ρ/(μ − λ) = T − 1/μ.

Example
There are many situations in which an M/M/1 model could be applied. One example is a post office with only one employee, and therefore one queue. The customers arrive, enter the queue, do business with the postal worker, and leave the system. If the arrival process is Poisson and the service time is exponential, an M/M/1 model can be used. Hence, the expected number of people in the queue can be easily calculated, along with the probabilities that they will have to wait for a particular length of time, and so forth.
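Since the aim of this section is simulation, the formulas above can be cross-checked with a small discrete-event simulation. The Python sketch below (the rates, job count, and names are illustrative assumptions) estimates the mean time in the system and compares it with T = 1/(μ − λ):

```python
import random

def mm1_sim(lam, mu, n_jobs=200_000, seed=1):
    """Simulate an M/M/1 queue and return the observed mean time a user
    spends in the system (waiting + service)."""
    rng = random.Random(seed)
    t_arrival = 0.0      # arrival time of the current user
    t_free = 0.0         # time at which the server next becomes free
    total = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(lam)     # Poisson arrivals, rate lambda
        start = max(t_arrival, t_free)        # wait if the server is busy
        t_free = start + rng.expovariate(mu)  # exponential service, rate mu
        total += t_free - t_arrival           # time in system for this user
    return total / n_jobs

lam, mu = 2.0, 3.0                 # example rates with lambda < mu (stable)
analytic = 1.0 / (mu - lam)        # expected time in system, T = 1/(mu - lambda)
print(analytic, mm1_sim(lam, mu))  # the two values should agree closely
```

With λ = 2 and μ = 3 the analytic mean time in the system is 1.0, and a long simulation run should land close to that value.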

For a P-priority system (class P being the highest priority):
- Independent Poisson arrival processes for each class, with λi as the average arrival rate for class i.
- Service times for each class are independent of each other and of the arrival processes, and are exponentially distributed with mean 1/μi for class i.
- Both non-preemptive and preemptive priority service disciplines are considered.

Solution approach:
- Define the system state appropriately.
- Draw the corresponding state transition diagram with the appropriate flows between the states.
- Write and solve the balance equations to obtain the system state probabilities.

M/M/-/- Queue with Preemptive Priority
For a P-priority queue of this type, define the system state as the P-tuple (n1, n2, ..., nP), where ni is the number of jobs of priority class i in the queue, i = 1, ..., P. Note that the server will always be engaged by a job of the highest priority class present in the system, i.e. by a job of class j with service rate μj if nj ≥ 1 and nj+1 = ... = nP = 0. We illustrate the approach first for a 2-priority M/M/1/∞ queue.
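The balance-equation analysis can be sanity-checked by simulation. Below is a hedged Python sketch of a 2-class non-preemptive priority M/M/1 queue (rates, job counts, and names are illustrative; unlike the text's convention, class 0 here denotes the higher-priority class, and the preemptive discipline is not implemented):

```python
import heapq
import random

def priority_mm1_sim(lam, mu, n_jobs=100_000, seed=1):
    """Simulate a 2-class non-preemptive priority M/M/1 queue.
    lam, mu: per-class arrival and service rates (class 0 = high priority).
    Returns the mean waiting time in the queue for each class."""
    rng = random.Random(seed)
    jobs = []                              # (arrival_time, class) for all jobs
    for cls in range(2):
        t = 0.0
        for _ in range(n_jobs):
            t += rng.expovariate(lam[cls])
            jobs.append((t, cls))
    jobs.sort()
    queue = []                             # heap ordered by (class, arrival)
    t_free = 0.0
    wait_sum = [0.0, 0.0]
    count = [0, 0]
    i = 0
    while i < len(jobs) or queue:
        # admit every arrival that occurs before the server next frees up
        while i < len(jobs) and (not queue or jobs[i][0] <= t_free):
            heapq.heappush(queue, (jobs[i][1], jobs[i][0]))
            i += 1
        cls, t_arr = heapq.heappop(queue)  # highest class among those waiting
        start = max(t_arr, t_free)
        wait_sum[cls] += start - t_arr
        count[cls] += 1
        t_free = start + rng.expovariate(mu[cls])
    return [wait_sum[c] / count[c] for c in range(2)]

w_hi, w_lo = priority_mm1_sim([1.0, 1.0], [4.0, 4.0])
print(w_hi, w_lo)   # the high-priority class should wait less
```

Because the server picks the highest-priority waiting job only at service completions, jobs in service are never interrupted, which is exactly the non-preemptive discipline described above.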

5. CASE STUDY: On LAN training kit
(i) Observe the behavior & measure the throughput of reliable data transfer protocols under various bit error rates for the following DLL layer protocols:
    a. Stop & Wait
    b. Sliding Window: Go-Back-N and Selective Repeat
(ii) Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols:
    a. ALOHA
    b. CSMA, CSMA/CD & CSMA/CA
    c. Token Bus & Token Ring

Sliding Window Protocol
A sliding window protocol is a feature of packet-based data transmission protocols. Sliding window protocols are used where reliable, in-order delivery of packets is required, such as in the Data Link Layer (OSI model) as well as in the Transmission Control Protocol (TCP). Conceptually, each portion of the transmission (packets in most data link layers, but bytes in TCP) is assigned a unique consecutive sequence number, and the receiver uses the numbers to place received packets in the correct order, discarding duplicate packets and identifying missing ones. The problem with this is that there is no limit on the size of the sequence numbers that may be required. By placing limits on the number of packets that can be transmitted or received at any given time, a sliding window protocol allows an unlimited number of packets to be communicated using fixed-size sequence numbers.

A transmitter that does not hear an acknowledgment cannot know whether the receiver actually received the packet; it may be that the packet was lost in transmission (or damaged; if error detection finds an error, the packet is ignored), or it may be that an acknowledgment was sent but was lost. In the latter case, the receiver must acknowledge the retransmission, but must otherwise ignore it. Likewise, the receiver is usually uncertain about whether its acknowledgments are being received.

Stop-and-Wait
Stop-and-wait ARQ is a method used in telecommunications to send information between two connected devices.
It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest kind of automatic repeat-request (ARQ) method. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with both transmit and receive window sizes equal to 1. After sending each frame, the sender does not send any further frames until it receives an acknowledgement (ACK) signal. After receiving a good frame, the receiver

sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again.

The behavior above is the simplest stop-and-wait implementation. However, in a real-life implementation there are problems to be addressed. Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, it discards the frame and does not send an ACK, treating the frame as if it had been completely lost rather than merely damaged.

One problem arises when the ACK sent by the receiver is damaged or lost. In this case, the sender does not receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and does not know whether the second one is a duplicate frame or the next frame of the sequence carrying identical data.

Another problem arises when the transmission medium has such a long latency that the sender's timeout expires before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence.

To avoid these problems, the most common solution is to define a 1-bit sequence number in the header of the frame. This sequence number alternates (between 0 and 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next frame it expects. This way, the receiver can detect duplicated frames by checking whether the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded.
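The alternating-bit behaviour described above can be sketched in a few lines of Python (a minimal sketch: the loss rate and names are illustrative, and a timeout is modelled simply as "lost means the sender resends"):

```python
import random

def stop_and_wait(data, loss_rate=0.3, seed=42):
    """Stop-and-wait ARQ with a 1-bit alternating sequence number over a
    channel that randomly loses both frames and ACKs."""
    rng = random.Random(seed)
    delivered = []
    expected = 0                  # receiver: sequence number expected next
    seq = 0                       # sender: sequence number of current frame
    for item in data:
        while True:               # resend on timeout until the frame is ACKed
            if rng.random() < loss_rate:
                continue          # frame lost: sender times out and resends
            # frame arrived: receiver keeps it only if it is not a duplicate
            if seq == expected:
                delivered.append(item)
                expected ^= 1
            if rng.random() < loss_rate:
                continue          # ACK lost: sender times out and resends
            break                 # ACK received by the sender
        seq ^= 1                  # alternate the 1-bit sequence number
    return delivered

print(stop_and_wait([10, 20, 30]))   # [10, 20, 30] despite losses
```

Note how the `seq == expected` check is what discards the duplicate created by a lost ACK: the retransmitted frame carries the old sequence bit, so the receiver re-ACKs it without delivering it twice.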
Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame.

Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time, with a larger sequence number space, and use one ACK for a set. This is what is done in Go-Back-N ARQ and Selective Repeat ARQ.

Go-Back-N
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which the sending process continues to send a number of frames specified by a window size even without receiving an acknowledgement (ACK) packet from the receiver. It is a special

case of the general sliding window protocol with a transmit window size of N and a receive window size of 1. The receiver process keeps track of the sequence number of the next frame it expects to receive, and sends that number with every ACK it sends. The receiver will ignore any frame that does not have the exact sequence number it expects, whether that frame is a "past" duplicate of a frame it has already ACKed [1] or a "future" frame past the last packet it is waiting for. Once the sender has sent all of the frames in its window, it will detect that all of the frames since the first lost frame are outstanding, go back to the sequence number of the last ACK it received from the receiver process, refill its window starting with that frame, and continue the process over again. Go-Back-N ARQ makes more efficient use of a connection than stop-and-wait ARQ, since, rather than waiting for an acknowledgement for each packet, the connection is still being utilized as packets are being sent.

CSMA
Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus or a band of the electromagnetic spectrum. "Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that detects a carrier wave before trying to send. That is, it tries to detect the presence of an encoded signal from another station before attempting to transmit. If a carrier is sensed, the station waits for the transmission in progress to finish before initiating its own transmission. "Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions by one node are generally received by all other stations using the medium.

1-persistent
When the sender (station) is ready to transmit data, it checks if the physical medium is busy.
If so, it senses the medium continually until it becomes idle, and then it transmits a piece of data (a frame). In case of a collision, the sender waits for a random period of time and attempts to transmit again. 1-persistent CSMA is used in CSMA/CD systems, including Ethernet.

P-persistent
When the sender is ready to send data, it checks continually if the medium is busy. If the medium becomes idle, the sender transmits a frame with probability p. If the station chooses not to transmit (the probability of this event is 1 − p), the sender waits until the next available time slot and again transmits with the same probability p. This process repeats until the frame is sent or some other sender starts transmitting. In the latter case, the sender monitors the channel, and when it is idle again, transmits with a

probability p, and so on. p-persistent CSMA is used in CSMA/CA systems, including Wi-Fi and other packet radio systems.

O-persistent
Each station is assigned a transmission order by a supervisory station. When the medium goes idle, stations wait for their time slot in accordance with their assigned transmission order. The station assigned to transmit first transmits immediately. The station assigned to transmit second waits one time slot (but by that time the first station has already started transmitting). Stations monitor the medium for transmissions from other stations and update their assigned order with each detected transmission (i.e. they move one position closer to the front of the queue). [1] O-persistent CSMA is used by CobraNet, LonWorks and the Controller Area Network.

CSMA/CD
Carrier sense multiple access with collision detection (CSMA/CD) is a computer networking access method in which:

- a carrier-sensing scheme is used;
- a transmitting data station that detects another signal while transmitting a frame stops transmitting that frame, transmits a jam signal, and then waits for a random time interval before trying to send that frame again.

CSMA/CD is a modification of pure carrier sense multiple access (CSMA). It is used to improve CSMA performance by terminating transmission as soon as a collision is detected, thus reducing the probability of a second collision on retry. CSMA/CD is a layer 2 access method, not a protocol of the OSI model.

When a station wants to send some information, it uses the following algorithm:

Main procedure
1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes ready.
3. Start transmitting.
4. Did a collision occur? If so, go to the collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure
1. Continue transmission until the minimum packet time is reached, to ensure that all receivers detect the collision.
2. Increment the retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait a random backoff period based on the number of collisions.
5. Re-enter the main procedure at stage 1.
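The random backoff of step 4 is, in Ethernet, the truncated binary exponential backoff; a small Python sketch (the cap of 10 doublings follows common Ethernet practice and is an assumption here, not stated in the text above):

```python
import random

def backoff_slots(collisions, rng=random.Random(0)):
    """Truncated binary exponential backoff as used by Ethernet: after n
    collisions, wait a number of slot times drawn uniformly from
    0 .. 2**min(n, 10) - 1 (the exponent is capped at 10)."""
    return rng.randrange(2 ** min(collisions, 10))

# The upper bound on the wait doubles per collision, capped at 1023 slots:
print([2 ** min(n, 10) - 1 for n in (1, 2, 3, 10, 16)])  # [1, 3, 7, 1023, 1023]
```

Doubling the range after each collision is what makes repeated collisions between the same pair of stations increasingly unlikely.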

This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision.

Methods for collision detection are media dependent. On an electrical bus such as 10BASE5 or 10BASE2, collisions can be detected by comparing transmitted data with received data, or by recognizing a higher-than-normal signal amplitude on the bus.

Applications
CSMA/CD was used in bus-topology Ethernet variants and in early versions of twisted-pair Ethernet. Modern Ethernet networks, built with switches and/or full-duplex connections, no longer utilize CSMA/CD. IEEE Std 802.3, which defines all Ethernet variants, for historical reasons still bears the title "Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications". Variations of the concept are used in radio frequency systems that rely on frequency sharing, including the Automatic Packet Reporting System.

The ALOHA protocol

Pure ALOHA
The first version of the protocol (now called "Pure ALOHA", and the one implemented in ALOHAnet) was quite simple:

- If you have data to send, send the data.
- If the message collides with another transmission, try resending "later".

Note that the first step implies that Pure ALOHA does not check whether the channel is busy before transmitting. The critical aspect is the "later" concept: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior. To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission of frames. First, let's make a few simplifying assumptions:

All frames have the same length.

Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a station keeps trying to send a frame, it cannot be allowed to generate more frames to send.) The population of stations attempts to transmit (both new frames and old frames that collided) according to a Poisson distribution.

Let "T" refer to the time needed to transmit one frame on the channel, and let's define "frame-time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson distribution over transmission-attempt amounts: that is, on average, there are G transmission-attempts per frame-time.

[Figure: overlapping frames in the pure ALOHA protocol; frame-time is equal to 1 for all frames.]

Consider what needs to happen for a frame to be transmitted successfully. Let "t" refer to the time at which we want to send a frame. We want to use the channel for one frame-time beginning at t, and so we need all other stations to refrain from transmitting during this time. Moreover, we need the other stations to refrain from transmitting between t − T and t as well, because a frame sent during this interval would overlap with our frame. For any frame-time, the probability of there being k transmission-attempts during that frame-time is:

Prob(k) = (G^k e^(−G)) / k!

[Figure: comparison of Pure ALOHA and Slotted ALOHA on a throughput vs. traffic load plot.]

The average number of transmission-attempts over 2 consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission-attempts during those two frame-times is:

Prob(k) = ((2G)^k e^(−2G)) / k!

Therefore, the probability (Prob_pure) of there being zero transmission-attempts between t − T and t + T (and thus of a successful transmission for us) is:

Prob_pure = e^(−2G)
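As an illustrative check (not part of the original text), this zero-overlap probability can be estimated by simulation under the Poisson-attempts assumption:

```python
import math
import random

def pure_aloha_success(G, n_frames=200_000, seed=7):
    """Estimate the probability that a tagged frame sees no other
    transmission start during its 2-frame-time vulnerable window, when
    starts form a Poisson process with G attempts per frame-time."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_frames):
        # count the Poisson(2G) events in the window via exponential gaps
        others = 0
        t = rng.expovariate(2 * G)
        while t < 1.0:
            others += 1
            t += rng.expovariate(2 * G)
        if others == 0:
            ok += 1
    return ok / n_frames

G = 0.5
print(pure_aloha_success(G), math.exp(-2 * G))  # both near e**-1, about 0.368
```

The simulated fraction of collision-free windows should match e^(−2G) closely, which is the term the throughput formula below multiplies by G.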

The throughput can be calculated as the rate of transmission-attempts multiplied by the probability of success, and so we can conclude that the throughput (S_pure) is:

S_pure = G e^(−2G)

Slotted ALOHA

[Figure: the Slotted ALOHA protocol; boxes indicate frames, and shaded boxes indicate frames which are in the same slots.]

An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete timeslots and increased the maximum throughput. A station can send only at the beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry about the transmission-attempts within 1 frame-time and not 2 consecutive frame-times, since collisions can only occur during each timeslot. Thus, the probability of there being zero transmission-attempts in a single timeslot is:

Prob_slotted = e^(−G)

The probability of k packets is:

Prob_slotted(k) = e^(−G) (1 − e^(−G))^(k−1)

The throughput is:

S_slotted = G e^(−G)

The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%. Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, in mobile telephony call setup, and in contactless RFID technologies.

Token Bus and Token Ring

Token Bus
Token Bus was a 4 Mbps local area networking technology created by IBM to connect their terminals to IBM mainframes. Token Bus utilized a copper coaxial cable to connect multiple end stations (terminals, workstations, shared printers, etc.) to the mainframe. The coaxial cable served as a common communication bus, and a token was created by the Token Bus protocol to manage or 'arbitrate' access to the bus. Any station that holds the token packet has permission to transmit data. The station releases the token when it is done communicating or when a higher-priority device needs to transmit (such as the mainframe).
This keeps two or more devices from transmitting information on the bus at the same time and accidentally destroying the transmitted data. Token Bus suffered from two limitations. First, any failure in the bus caused all the devices beyond the failure to be unable to communicate with the rest of the network. Second, adding more stations to the bus was somewhat difficult. Any new station that was

improperly attached was unlikely to be able to communicate, and all devices beyond it were also affected. Thus, token bus networks were seen as somewhat unreliable and difficult to expand and upgrade.

Token Ring
Token Ring was created by IBM to compete with what became known as the DIX standard of Ethernet (DEC/Intel/Xerox) and to improve upon their previous Token Bus technology. Up until that time, IBM had produced solutions that started from the mainframe and ran all the way to the desktop (or dumb terminal), allowing them to extend their SNA protocol from the AS/400s all the way down to the end user. Mainframes were so expensive that many large corporations that purchased a mainframe as far back as 30-40 years ago are still using these mainframe devices, so Token Ring is still out there and you will encounter it. Token Ring is also still in use where high reliability and redundancy are important, such as in large military craft.

Token Ring comes in standard 4 and 16 Mbps varieties, plus high-speed Token Ring at 100 Mbps (IEEE 802.5t) and 1 Gbps (IEEE 802.5v). Many mainframes (and until recently, ALL IBM mainframes) used a Front End Processor (FEP) with either a Line Interface Coupler (LIC) at 56 kbps, or a Token-Ring Interface Coupler (TIC) at 16 Mbps. Cisco still produces FEP cards for their routers (as of 2004).

Token Ring uses a ring-based topology and passes a token around the network to control access to the network wiring. This token-passing scheme makes conflicts in accessing the wire unlikely, and therefore total throughput is as high as typical Ethernet and Fast Ethernet networks. The Token Ring protocol also provides features for allowing delay-sensitive traffic to share the network with other data, which is key to a mainframe's operation. This feature is not available in any other LAN protocol except Asynchronous Transfer Mode (ATM). Token Ring does come with a higher price tag, because Token Ring hardware is more complex and more expensive to manufacture. As a network technology, Token Ring is passing out of use because its maximum speed of 16 Mbps is slow by today's gigabit Ethernet standards.

Key terms: Token Ring, token passing, Media Access Unit, Line Interface Coupler (LIC), Token-Ring Interface Coupler (TIC).

6. DEVELOPMENT OF CLIENT SERVER APPLICATION:
(i) Develop a telnet client and server which use a port other than 23.
(ii) Write a finger application which prints all available information for the five users currently logged on who have been using the network for the longest duration. Print the information in ascending order of time.

Telnet

- Telnet is one of the earliest protocols developed.
- Telnet provides reliable communication via TCP.
- Telnet is an application (it operates at the OSI model's Application Layer).
- Telnet provides access to the command prompt remotely.
- Telnet utilizes TCP/IP to support communication.
- Information is communicated as ASCII text.
- Telnet is carried inside the payload of TCP (encapsulated in TCP).
- Commands: open, close, quit.

Telnet was one of the first protocols developed for use over TCP/IP. Telnet is an application designed for reliable communication via a virtual terminal. It was intended to be a bidirectional, byte-oriented communications protocol utilizing 7-bit ASCII, for use in creating communication between terminals (Internet end points) or processes across the Internet. Telnet is one of the oldest IP protocols, and from it several other protocols were developed.

A telnet server listens for connections on TCP port 23. When a connection is opened from a telnet client to a server, the client attempts to connect to the server machine using TCP on port 23. The client uses a local port above 1023. The client and server negotiate supported telnet options, and the connection is established. The remote server then provides services over that TCP connection. The client sends ASCII text data, and the server responds according to its design. Telnet is the most basic of all TCP-based protocols.

When the client receives input from the user, it forwards that information to the telnet server. The client normally sends the user data one ASCII character at a time unless the Nagle algorithm for TCP is in use. The Nagle algorithm changes the way TCP handles segments and can alter how data gets buffered before transmission to the other end.

Commands:

Microsoft Telnet (Windows)
Commands may be abbreviated. Supported commands are:
  c   - close                 close current connection
  d   - display               display operating parameters
  o   - open hostname [port]  connect to hostname (default port 23)
  q   - quit                  exit telnet
  set - set                   set options (type 'set ?' for a list)
  sen - send                  send strings to server
  st  - status                print status information
  u   - unset                 unset options (type 'unset ?' for a list)
  ?/h - help                  print help information

Options for the set command (Microsoft Telnet> set ?):
  bsasdel    Backspace will be sent as delete
  crlf       New line mode - causes the return key to send CR & LF
  delasbs    Delete will be sent as backspace
  escape x   x is an escape character used to enter the telnet client prompt
  localecho  Turn on local echo
  logfile x  x is the current client log file
  logging    Turn on logging
  mode x     x is console or stream
  ntlm       Turn on NTLM authentication
  term x     x is ansi, vt100, vt52, or vtnt

Default operating parameters:
  Escape character is 'CTRL+]'
  Will auth (NTLM authentication)
  Local echo off
  New line mode - causes the return key to send CR & LF
  Current mode: console
  Will term type
  Preferred term type is ANSI

NAGLE ALGORITHM
The Nagle algorithm makes telnet more efficient. Rather than wrapping every single character in a complete IP datagram, the whole input buffer of the keyboard or computer is sent at once, or stored and sent as a group of characters once the return key is pressed on the keyboard (an end of line is detected on standard input by the telnet client).
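As a sketch of aim 6(i), here is a minimal telnet-style client and server bound to a port other than 23 (Python sockets; this is an illustrative sketch that shows only the TCP exchange and omits real telnet option negotiation):

```python
import socket
import threading

# Any free port other than the standard telnet port 23 satisfies the aim;
# binding to port 0 asks the OS for a free one (a fixed 2323 also works).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
PORT = srv.getsockname()[1]
srv.listen(1)                   # server is ready before the client connects

def handle():
    """Minimal telnet-style server: read one ASCII line, send a reply.
    A real telnet server would also negotiate options (IAC sequences)."""
    conn, _ = srv.accept()
    with conn:
        line = conn.recv(1024).decode("ascii").strip()
        conn.sendall(("echo: " + line + "\r\n").encode("ascii"))

threading.Thread(target=handle, daemon=True).start()

# Client side: connect to the non-standard port, send a command, read reply.
with socket.create_connection(("127.0.0.1", PORT), timeout=5) as cli:
    cli.sendall(b"hello\r\n")
    reply = cli.recv(1024).decode("ascii").strip()

print(reply)   # echo: hello
srv.close()
```

The same structure extends to a full telnet server by replacing the one-line echo with command processing and adding option negotiation on connection setup.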

Finger Application
In computer networking, the Name/Finger protocol and the Finger user information protocol are simple network protocols for the exchange of human-oriented status and user information.

Name/Finger protocol
The Name/Finger protocol, written by David Zimmerman, is based on Request for Comments document RFC 742 (December 1977), as an interface to the name and finger programs that provide status reports on a particular computer system or a particular person at network sites. The finger program was written in 1971 by Les Earnest, who created the program to solve the need of users who wanted information on other users of the network. Information on who is logged in was useful for checking the availability of a person to meet. This was probably the earliest form of presence-information technology that worked for remote users over a network.

Prior to the finger program, the only way to get this information was with a who program that showed IDs and terminal line numbers for logged-in users. Earnest named his program after the idea that people would run their fingers down the who list to find what they were looking for.

Finger user information protocol
Finger is based on the Transmission Control Protocol, using TCP port 79 (decimal). The local host opens a TCP connection to a remote host on the Finger port. An RUIP (Remote User Information Program) becomes available on the remote end of the connection to process the request. The local host sends the RUIP a one-line query based upon the Finger query specification, and waits for the RUIP to respond. The RUIP receives and processes the query, returns an answer, then initiates the close of the connection. The local host receives the answer and the close signal, then proceeds to close its end of the connection. The Finger user information protocol is based on RFC 1288 (The Finger User Information Protocol, December 1991).
Typically the server side of the protocol is implemented by a program fingerd (for finger daemon), while the client side is implemented by the name and finger programs which are supposed to return a friendly, human-oriented status report on either the system at the moment or a particular person in depth. There is no required format, and the protocol consists mostly of specifying a single command line. The program would supply information such as whether a user is currently logged-on, email address, full name etc. As well as standard user information, finger displays the contents of the .project and .plan files in the user's home directory. Often this file (maintained by the user) contains either useful information about the user's current activities, similar to micro-blogging, or alternatively all manner of humor.
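The query exchange described above can be sketched as a minimal finger client (Python; the function name and the example host are illustrative assumptions, and no option handling or /W query formatting is included):

```python
import socket

def finger(user, host, port=79):
    """Minimal Finger client in the spirit of RFC 1288: open a TCP
    connection to the finger port, send one CRLF-terminated query line,
    and read the reply until the server closes the connection."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((user + "\r\n").encode("ascii"))
        while True:
            data = s.recv(4096)
            if not data:          # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

# e.g. finger("alice", "finger.example.org") -- assumes a reachable fingerd
```

To satisfy aim 6(ii), the reply text would then be parsed for login times and the five longest-connected users sorted and printed in ascending order of time.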

Security concerns
Supplying such detailed information as e-mail addresses and full names was considered acceptable and convenient in the early days of internetworking, but later was considered questionable for privacy and security reasons. Finger information has frequently been used by hackers as a way to initiate a social engineering attack on a company's computer security system. By using a finger client to get a list of a company's employee names, email addresses, phone numbers, and so on, a cracker can telephone or email someone at a company requesting information while posing as another employee. The finger daemon has also had several exploitable security holes which crackers have used to break into systems; the Morris worm exploited an overflow vulnerability in fingerd (among others) to spread. The finger protocol is also incompatible with Network Address Translation (NAT) from the private network address ranges (e.g. 192.168.0.0/16) that are used by the majority of home and office workstations that connect to the Internet through routers or firewalls. For these reasons, while finger was widely used during the early days of the Internet, by the late 1990s the vast majority of sites on the Internet no longer offered the service.
