
Error Detection and Correction

All networks must be able to transfer data from one device to another with complete
accuracy. A system that cannot guarantee that the data received by one device is
identical to the data transmitted by another is essentially useless. Yet any time data is
transmitted from source to destination, it can become corrupted in passage. In fact, it is
more likely that some part of a message will be altered in transit than that the entire
message will be. Many factors, including line noise, can alter or wipe out one or more
bits of a given data unit. A reliable system must therefore have a mechanism for
detecting and correcting such errors.
Types of errors
Whenever an electromagnetic signal flows from one point to another, it is subject to
unpredictable interference from heat, magnetism, and other forms of electricity. This
interference can change the shape or timing of the signal. If the signal is carrying
encoded binary data, such changes can alter the meaning of the data.
There are basically two types of errors: the single-bit error and the burst error.
In a single-bit error, only one bit of a given data unit is changed from 1 to 0 or from 0
to 1. Single-bit errors are the least likely kind of error in serial transmission. Suppose
the sender transmits data at 1 Mbps; each bit then lasts only 1 microsecond on the
transmission medium, so for a single-bit error to occur the noise must last only
1 microsecond, which is a very rare phenomenon. A single-bit error can, however,
occur in parallel transmission. For example, if eight wires are used to send the 8 bits of
a byte and one wire is noisy, one bit can be corrupted in each byte.
The second type of error is the burst error, in which two or more bits in the data unit
are changed from 1 to 0 or from 0 to 1. This type of error is most likely to happen in
serial transmission. The duration of the noise is normally longer than the duration of a
single bit, which means that when noise affects the data it affects a set of bits. The
number of bits affected depends on the data rate and the duration of the noise. For
example, if we are sending data at 1 kbps, a noise burst of 1/100 of a second can affect
10 bits.
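As a quick check of the arithmetic above, the number of bits a burst can corrupt is simply the data rate multiplied by the duration of the noise (the helper name below is illustrative):

```python
def bits_affected(data_rate_bps, noise_duration_s):
    """Bits corrupted by a noise burst: data rate times burst duration."""
    return round(data_rate_bps * noise_duration_s)

print(bits_affected(1000, 0.01))  # 1 kbps, 1/100 s of noise -> 10
```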
Now the question arises: even if we know what types of errors can occur, will we
recognize one when we see it? It is a simple task if we have a copy of the intended
transmission for comparison. But what if we do not have the original copy of the
transmission? Then we have no way to check for errors until we have decoded the
transmission and failed to make sense of it.

One early error detection technique was to send every data unit twice, since the odds of
errors being introduced onto the same bits both times are very small. But this system
was slow: not only was the transmission time doubled, but extra time was needed to
compare the two copies bit by bit. The concept was good, but instead of repeating the
entire data stream, a short group of bits can be appended to the end of each unit. This
technique is called redundancy.
In this technique, once the data has been generated, it is passed through a device that
analyzes it and adds an appropriately coded redundancy check. The data unit, enlarged
by several bits, travels over the link to the receiver. The receiver puts the entire stream
through a checking function. If the received bit stream passes the checking criteria, the
data portion of the unit is accepted and the redundant bits are discarded.
More elaborately, there are four types of redundancy checks used in data communications:
1) Vertical redundancy check (VRC), also called parity check
2) Longitudinal redundancy check (LRC)
(both implemented in the physical layer)
3) Cyclic redundancy check (CRC), for use in the data link layer
4) Checksum, used in the upper layers

VRC
It is the most common and least expensive mechanism for error detection and is often
called parity check.
In this technique, a redundant bit, also called a parity bit, is appended to every data
unit so that the total number of 1s in the unit becomes even.
Suppose we want to transmit the binary data unit 1100001. We pass the data unit
through a parity generator, which counts the 1s; if the count is odd, as it is in this case,
a parity bit of 1 is added. The total number of 1s then becomes 4, an even number. The
system transmits the entire expanded unit across the network link. When it reaches its
destination, the receiver puts all eight bits through an even-parity checking function. If
the receiver counts an even number of 1s (here, 4), the data unit passes; if the parity
checker counts 5 ones, the receiver knows that an error has been introduced and rejects
the whole unit.
VRC can detect all single-bit errors. It can also detect burst errors, as long as the total
number of bits changed is odd.
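The even-parity scheme described above can be sketched as follows (function names are illustrative):

```python
def add_even_parity(bits):
    """Sender: append a parity bit so the total number of 1s becomes even."""
    parity = bits.count('1') % 2
    return bits + str(parity)

def check_even_parity(unit):
    """Receiver: the expanded unit must contain an even number of 1s."""
    return unit.count('1') % 2 == 0

sent = add_even_parity('1100001')     # '11000011': four 1s, even
print(check_even_parity(sent))        # True -> data unit accepted
print(check_even_parity('11000111'))  # False -> five 1s, unit rejected
```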

LRC
In this technique, a block of bits is organized in a table. For instance, instead of
sending a block of 32 bits as one long string, we organize it in a table of 4 rows and
8 columns. We then calculate a parity bit for each column and create a new row of
8 bits, which are the parity bits for the whole block. We attach these eight parity bits to
the original data and send them to the receiver.
LRC increases the likelihood of detecting burst errors. Still, one pattern of error
remains elusive: if two bits in one data unit are damaged and two bits in exactly the
same positions in another data unit are also damaged, the LRC checker will not detect
the error.
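A sketch of the column-parity idea, with a 32-bit block arranged as 4 rows of 8 columns (even parity assumed, as in VRC; the block contents are illustrative):

```python
def lrc(block, width=8):
    """Even parity per column over rows of `width` bits."""
    rows = [block[i:i + width] for i in range(0, len(block), width)]
    parity_row = ''
    for col in range(width):
        ones = sum(row[col] == '1' for row in rows)  # count 1s in column
        parity_row += str(ones % 2)                  # even parity bit
    return parity_row

block = '11100111' + '11011101' + '00111001' + '10101001'
print(lrc(block))          # parity row for the whole block: '10101010'
print(block + lrc(block))  # what is actually transmitted
```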
CRC
This is the most powerful of the redundancy checking techniques. It is based on binary
division. The redundancy bits in CRC, called the CRC remainder, are appended to the
end of the data unit so that the resulting data unit becomes exactly divisible by a
second, predetermined binary number. At the destination, the incoming data unit is
divided by the same number; if there is no remainder, the data unit is assumed to be
intact and is accepted, otherwise it is rejected.
A valid CRC must have two qualities: it must be exactly one bit shorter than the
divisor, and appending it to the end of the data unit must make the resulting bit
sequence exactly divisible by the divisor.
First, a string of n 0s is appended to the data unit, where n is one less than the number
of bits in the predetermined divisor (which is n + 1 bits long). Second, the newly
elongated data unit is divided by the divisor using a process called binary (modulo-2)
division. The remainder resulting from this division is the CRC. Third, the CRC of
n bits derived in step 2 replaces the appended 0s at the end of the data unit. Note that a
CRC may consist of all 0s.
The data unit then arrives at the receiver with the CRC appended. The receiver treats
the whole string as a single unit and divides it by the same predetermined divisor. If
the remainder is 0, the data is intact and accepted; otherwise it is rejected.
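The three sender steps and the receiver check can be sketched with modulo-2 (XOR) long division; the data unit and divisor below are illustrative:

```python
def mod2_div(dividend, divisor):
    """Modulo-2 long division; returns the remainder as a bit string."""
    n = len(divisor) - 1
    bits = list(dividend)
    for i in range(len(bits) - n):
        if bits[i] == '1':                   # only divide when the lead bit is 1
            for j, d in enumerate(divisor):  # XOR (no-carry subtraction)
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])

data, divisor = '100100', '1101'
crc = mod2_div(data + '0' * (len(divisor) - 1), divisor)  # steps 1 and 2
sent = data + crc                                         # step 3
print(crc)                      # '001'
print(mod2_div(sent, divisor))  # '000' -> no remainder, data accepted
```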
POLYNOMIALS
The CRC generator (divisor) is most often represented not as a string of 1s and 0s but
as an algebraic polynomial. A polynomial is selected to have at least the following
properties: it should not be divisible by x, but it should be divisible by x + 1.
Some of the standard polynomials are shown in this slide.
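For reference, a divisor bit string maps to a polynomial by reading each bit as the coefficient of a power of x; for example, the divisor 1101 corresponds to x^3 + x^2 + 1 (the helper below is illustrative):

```python
def poly_from_bits(bits):
    """Write a CRC divisor bit string as its generator polynomial."""
    deg = len(bits) - 1
    terms = []
    for i, b in enumerate(bits):
        if b == '1':                      # each 1 bit contributes a term
            power = deg - i
            terms.append('1' if power == 0 else
                         'x' if power == 1 else f'x^{power}')
    return ' + '.join(terms)

print(poly_from_bits('1101'))   # x^3 + x^2 + 1
```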

The error detection method used by the higher-layer protocols is called checksum. It is
also based on the concept of redundancy.
At the sender, the checksum generator subdivides the data unit into equal segments of
n bits. These segments are added together using one's complement arithmetic in such a
way that the total is also n bits long. The total is then complemented and appended to
the end of the original data unit as redundancy bits, called the checksum field, and the
whole unit is transmitted across the network to the receiver.
At the receiver, the data unit is again divided into equal segments of n bits. All the
segments are added together using one's complement arithmetic, and the sum is then
complemented. If the result is 0, the data is intact and accepted; otherwise it is
rejected.
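A minimal sketch of the one's complement checksum, using two illustrative 8-bit segments:

```python
def ones_complement_sum(segments, n):
    """Add n-bit segments, folding any carry back in (end-around carry)."""
    mask = (1 << n) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)  # keep the result n bits long
    return total

def checksum(segments, n):
    """Sender: complement of the one's complement sum (the checksum field)."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

segs = [0b10101001, 0b00111001]          # two 8-bit data segments
cs = checksum(segs, 8)
print(format(cs, '08b'))                 # '00011101'
# Receiver: add all segments plus the checksum, then complement.
print(ones_complement_sum(segs + [cs], 8) ^ 0xFF)  # 0 -> data accepted
```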
The mechanisms we have covered up to this point detect errors but do not correct
them. Error correction can be handled in two ways. The first way is for the receiver,
on discovering that the data is corrupted, to have the sender retransmit the entire data
unit. The second way is for the receiver to use an error-correcting code, which
automatically corrects certain errors.
However, the number of bits required to correct a multiple-bit or burst error is so high
that in some cases it is inefficient to do so; for this reason, error correction is usually
limited to one-, two-, or three-bit errors.
Two states, 0 and 1, are enough to detect a single-bit error but are not enough to
correct it as well.
The secret of error correction is to locate the invalid bit or bits.
The secret of error correction is to locate the invalid bits or bit
To calculate the number of redundancy bits r required to correct a given number of
data bits m, we must find a relationship between m and r. The figure shows m bits of
data with r bits of redundancy added to them; the length of the resulting code is m + r.
If the total number of bits in a transmittable unit is m + r, then r must be able to
indicate at least m + r + 1 different states: one of these states means no error, and the
remaining m + r states indicate the location of an error in each of the m + r positions.
So m + r + 1 states must be discoverable by the r bits, and r bits can indicate
2^r different states. Therefore 2^r must be equal to or greater than m + r + 1.
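The smallest r satisfying 2^r >= m + r + 1 can be found by simple trial (the helper name is illustrative):

```python
def redundancy_bits(m):
    """Smallest r with 2**r >= m + r + 1, for m data bits."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundancy_bits(7))   # 4: a 7-bit ASCII character needs 4 r bits
```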
Hamming Code
It is a technique developed by R. W. Hamming that uses the minimum number of
redundancy bits required to cover all possible single-bit errors and to discover which
error state has occurred.
It can be applied to data units of any length, and it uses the relationship between the
data and redundancy bits described above. For example, a seven-bit ASCII code
requires 4 redundancy bits, which can be added to the end of the data or interspersed
with the original data bits.
In this figure, the redundancy bits are placed in positions 1, 2, 4, and 8, the powers
of 2. In the Hamming code, each r bit is the VRC (parity) bit for one combination of
data bits.
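A sketch of this arrangement for a 7-bit data unit: even-parity r bits sit in positions 1, 2, 4, and 8, and each r bit covers exactly the positions whose binary representation includes its own position value. Function names and the sample data bits are illustrative:

```python
def hamming11_encode(data7):
    """Place 7 data bits in positions 3,5,6,7,9,10,11 (1-based) and
    compute even-parity r bits for positions 1, 2, 4 and 8."""
    code = [0] * 12                           # index 0 unused
    for pos, bit in zip((3, 5, 6, 7, 9, 10, 11), data7):
        code[pos] = int(bit)
    for p in (1, 2, 4, 8):                    # r bit p covers positions i
        for i in range(1, 12):                # whose binary form includes p
            if i != p and (i & p):
                code[p] ^= code[i]
    return ''.join(map(str, code[1:]))

def syndrome(codeword):
    """0 if the word is clean, else the 1-based position of the bad bit."""
    c = [0] + [int(b) for b in codeword]
    s = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for i in range(1, len(c)):
            if i & p:
                parity ^= c[i]
        s += p if parity else 0
    return s

word = hamming11_encode('1001101')
print(syndrome(word))                         # 0 -> no error
bad = word[:6] + ('0' if word[6] == '1' else '1') + word[7:]
print(syndrome(bad))                          # 7 -> invert bit 7 to correct
```

Because the syndrome equals the position of the corrupted bit, the receiver corrects a single-bit error by simply inverting the bit at that position.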
Burst Error Correction
A Hamming code can be designed to correct burst errors of certain lengths, but the
number of redundancy bits required to make these corrections is dramatically higher
than that required for single-bit errors. To correct double-bit errors, we must take into
account that the two erroneous bits can be any combination of two bits in the entire
sequence; three-bit correction means any combination of three bits in the entire
sequence, and so on. So the simple strategy used by the Hamming code to correct
single-bit errors must be redesigned to be applicable to multiple-bit correction.
