
Concept of Error-Free Communication

In a communication system, the probability of error Pe for the detected signal at the receiver decreases as the transmitted signal power increases. The error probability Pe and the bit energy Eb are related as follows (for example, for coherent BPSK over an AWGN channel):

Pe = Q(√(2Eb/N0))

Where
Pe = probability of error
Eb = energy of a bit
N0 = noise power spectral density
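As a quick numerical illustration (a sketch, not part of the source text), the Python snippet below evaluates the BPSK relation Pe = Q(√(2Eb/N0)) at a few Eb/N0 values; the function names are illustrative only.

```python
# Illustrative sketch: the Eb-vs-Pe trend for coherent BPSK over AWGN,
# where Pe = Q(sqrt(2*Eb/N0)). The Q-function is expressed through the
# complementary error function: Q(x) = 0.5 * erfc(x / sqrt(2)).
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_bit_error_probability(eb_over_n0_db: float) -> float:
    """Pe for coherent BPSK at a given Eb/N0 in dB."""
    eb_over_n0 = 10.0 ** (eb_over_n0_db / 10.0)
    return q_function(math.sqrt(2.0 * eb_over_n0))

for snr_db in (0, 2, 4, 6, 8, 10):
    print(f"Eb/N0 = {snr_db:2d} dB  ->  Pe = {bpsk_bit_error_probability(snr_db):.2e}")
```

Running the loop shows Pe falling steeply as Eb/N0 grows, which is exactly the trend described next.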

If Eb increases, the probability of error Pe decreases. Since signal power can be expressed as S = Eb·Rb, Eb can be increased either by increasing the signal power S or by decreasing the bit transmission rate Rb. So, in communication systems, the error rate can be reduced by reducing the transmission rate.

If we transmit data with low or no redundancy, noise corrupts the data and information may be lost; that is, it becomes difficult to detect the transmitted symbol correctly. For example, optimum codes have no redundancy and compact codes have little redundancy. When these low-redundancy codes are transmitted, noise causes some information loss, and it may be impossible to detect the transmitted symbol correctly. If the redundancy of the data increases, the immunity of the signal against noise increases.

The simplest way to increase redundancy is to repeat a digit a number of times. The receiver uses a majority rule to decipher the message: even if any one of those digits is in error, the receiver can still detect the transmitted symbol correctly. Noise may, however, corrupt two or more digits; in that case the majority decision fails, and the redundancy should be increased further, for example by repeating each digit five times.

Figure 4.13 shows all eight possible sequences that can be received when a single bit is repeated three times and transmitted. The sequences are represented as the vertices of a three-dimensional cube, with 000 and 111 taken as the two origin vertices among the eight. The majority decision rule takes a decision in favor of the message whose Hamming distance to the received sequence is smallest.
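To make the majority rule concrete, the short Python sketch below (an illustration, not from the source) enumerates all eight 3-bit received sequences and decodes each one by choosing whichever codeword, 000 or 111, is closer in Hamming distance.

```python
# Illustrative sketch: majority-rule (minimum Hamming distance) decoding
# of a single bit protected by a 3-repetition code.
from itertools import product

def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

CODEWORDS = ("000", "111")  # the two valid transmitted sequences

for received in ("".join(bits) for bits in product("01", repeat=3)):
    # Decide in favor of the codeword closest to the received sequence.
    decision = min(CODEWORDS, key=lambda cw: hamming_distance(received, cw))
    print(f"received {received} -> decoded bit {decision[0]}")
```

Exactly as the cube picture suggests, the four sequences within one unit of 000 decode to 0 and the four within one unit of 111 decode to 1.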

Sequences 000, 001, 010 and 100 are within one unit of Hamming distance from 000 but at least two units away from 111; so if any of these is received, the decision is 0. Sequences 111, 110, 011 and 101 are within one unit of Hamming distance from 111 but at least two units away from 000; so if any of these is received, the decision is 1.

For five repetition bits, the sequences are represented by a hypercube of five dimensions. For n repetitions, we get 2^n sequences; so for 5 repetitions, we get 2^5 = 32 sequences. Among these 32 sequences, 00000 and 11111 are taken as the origin vertices. These two origin vertices are separated by 5 units, so up to 2 errors can be corrected with 5 repetitions. The smaller the fraction of vertices that lead to a wrong decision, the smaller the probability of error Pe.

The process of reducing the error probability in a communication system by introducing redundant bits is called channel coding. Keep in mind that the transmission rate Rb is also reduced by adding repetitions; this is the main drawback. It can be mitigated: instead of simply repeating each digit, we can incorporate redundancy for a whole block of information digits.

Mutual Information

Mutual information can be defined as the amount of information transferred when the signal xi is transmitted and yi is received, and is given by I(xi ; yi) = log2 [ p(xi | yi) / p(xi) ] bits.
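To connect the definition to numbers, here is a small hedged Python sketch (not from the source) that computes the average mutual information I(X;Y) = Σ p(x,y) log2[ p(x,y) / (p(x) p(y)) ] for a binary symmetric channel; the crossover probability and input distribution are assumed values.

```python
# Illustrative sketch: average mutual information I(X;Y) for a binary
# symmetric channel with crossover probability p and input distribution
# P(X = 0) = q.
import math

def mutual_information_bsc(p: float, q: float) -> float:
    """I(X;Y) in bits for a BSC(p) driven with P(X=0) = q."""
    trans = {(0, 0): 1 - p, (0, 1): p, (1, 0): p, (1, 1): 1 - p}  # p(y|x)
    px = {0: q, 1: 1 - q}
    # Joint distribution p(x, y) = p(x) * p(y|x) and output marginal p(y).
    pxy = {(x, y): px[x] * trans[(x, y)] for x in (0, 1) for y in (0, 1)}
    py = {y: pxy[(0, y)] + pxy[(1, y)] for y in (0, 1)}
    info = 0.0
    for (x, y), pj in pxy.items():
        if pj > 0:
            info += pj * math.log2(pj / (px[x] * py[y]))
    return info

# With equiprobable inputs, I(X;Y) equals the BSC capacity 1 - H(p).
print(mutual_information_bsc(p=0.1, q=0.5))  # ~0.531 bits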

Discrete Memoryless Channel

In a discrete-time channel, if the values that the input and output variables can take are finite, or countably infinite, the channel is called a discrete channel. If the detector output in a given interval depends only on the signal transmitted in that interval, and not on any previous transmission, then the waveform channel is said to be a discrete memoryless channel. Simply put, there is no memory to store the previous transmission values. In general, a discrete channel is defined by X, the input alphabet, Y, the output alphabet, and p(y | x), the conditional PMF of the output sequence given the input sequence. For a memoryless channel this PMF factors symbol by symbol:

p(y | x) = p(y1 | x1) p(y2 | x2) ... p(yn | xn)          (4.1)

Where
p(yi | xi) = probability of receiving symbol yi, given that symbol xi was sent
xi = modulator input (transmitted) symbol
yi = demodulator output (received) symbol

In general, especially in a channel with ISI, the output yi depends not only on the input xi at the same time but also on previous inputs, or even on previous and future inputs (in storage channels); therefore, a channel can have memory. A schematic representation of a discrete channel is given in figure 4.3.

[Figure 4.3: Discrete memoryless channel]
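The hedged Python sketch below (an illustration, not from the source) represents a discrete memoryless channel as a transition matrix and applies it to each input symbol independently, which is precisely the memoryless factorization in equation (4.1); the particular matrix chosen is a BSC with assumed crossover probability 0.1.

```python
# Illustrative sketch: a discrete memoryless channel as a transition
# matrix, applied to each symbol independently of all others.
import random

# Rows: input symbols 0 and 1; columns: output symbols 0 and 1.
TRANSITION = {
    0: {0: 0.9, 1: 0.1},
    1: {0: 0.1, 1: 0.9},
}

def dmc_transmit(symbols, transition=TRANSITION, rng=random):
    """Pass each symbol through the channel independently (no memory)."""
    outputs = []
    for x in symbols:
        pmf = transition[x]
        # Draw the output symbol according to p(y|x).
        y = rng.choices(list(pmf.keys()), weights=list(pmf.values()))[0]
        outputs.append(y)
    return outputs

random.seed(1)
sent = [0, 1, 1, 0, 1, 0, 0, 1]
print("sent     :", sent)
print("received :", dmc_transmit(sent))
```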

The modulator has only the binary symbols 0 and 1 as inputs when binary coding is used. If binary quantization of the demodulator output is used, the decoder also has only binary inputs. Two types of decision are used at the demodulator: 1) hard decision, 2) soft decision.

1) Hard Decision

A hard decision is made on the demodulator output as to which symbol was actually transmitted. In this situation, we have a binary symmetric channel (BSC) with the transition probability diagram shown in figure 4.4. The binary symmetric channel, assuming channel noise modeled as additive white Gaussian noise (AWGN), is completely described by the transition probability p. The majority of coded digital communication systems employ binary coding with hard-decision decoding, due to the simplicity of implementation offered by such an approach. Hard-decision decoders, or algebraic decoders, take advantage of the special algebraic structure built into the design of channel codes to make the decoding relatively easy to perform.

2) Soft Decision

There is an irreversible loss of information in the receiver when a hard decision is used. To reduce this loss, soft-decision decoding is used: a multilevel quantizer is included at the demodulator output, as shown in figure 4.4.a. The input-output characteristic of the quantizer is shown in figure 4.4.b. The modulator still has only the binary symbols 0 and 1 as inputs, but the demodulator output now has an alphabet of Q symbols. Assuming the quantizer described in figure 4.4.b, we have Q = 8. Such a channel is called a binary-input Q-ary-output discrete memoryless channel. The corresponding channel transition probability diagram is shown in figure 4.4.c.

[Figure 4.4.a: Binary-input Q-ary-output discrete memoryless channel]

[Figure 4.4.b: Receiver for BPSK; input-output characteristic of the multilevel quantizer (output levels b1 ... b8)]
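To contrast the two decision styles, the Python sketch below (an illustration under assumed values, not the book's receiver) applies a hard decision and an 8-level soft decision to the same noisy BPSK samples; the quantizer thresholds are hypothetical.

```python
# Illustrative sketch: hard vs. soft decision on a noisy BPSK sample.
# Hard decision keeps only the sign (Q = 2); soft decision keeps an
# 8-level quantized value (Q = 8), preserving reliability information
# that a hard decision throws away.
import random

def hard_decision(sample: float) -> int:
    """Map the matched-filter output to a single bit."""
    return 1 if sample >= 0.0 else 0

def soft_decision(sample: float) -> int:
    """Map the matched-filter output to one of Q = 8 levels (0..7)."""
    thresholds = (-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5)  # 7 thresholds -> 8 bins
    level = 0
    for t in thresholds:
        if sample >= t:
            level += 1
    return level

random.seed(7)
for bit in (0, 1, 1, 0):
    tx = 1.0 if bit == 1 else -1.0    # BPSK mapping: 0 -> -1, 1 -> +1
    rx = tx + random.gauss(0.0, 0.5)  # AWGN with assumed sigma = 0.5
    print(f"bit {bit}: rx={rx:+.2f}  hard={hard_decision(rx)}  soft level={soft_decision(rx)}")
```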

[Figure 4.4.c: Channel transition probability diagram]

Channel capacity C is given by

C = max I(X; Y) bits/symbol

where the maximum is taken over all input distributions p(x). The transmission efficiency or channel efficiency (η) can be represented as follows:

η = actual transinformation / maximum transinformation = I(X; Y) / max I(X; Y)

Since C = max I(X; Y), we have η = I(X; Y) / C. If k symbols are transmitted per second, then the maximum rate of transmission of information per second is kC. Thus the channel capacity per second, denoted Cs, is

Cs = kC binits/sec

Channel Coding Theorem

It states that if a discrete memoryless channel has capacity C and a source generates information at a rate less than C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily small probability of error. The maximum rate at which one can communicate over a discrete memoryless channel and still make the error probability approach 0 as the code block length increases is called the channel capacity, denoted by C. The most famous formula from Shannon's work is arguably the channel capacity of an ideal band-limited Gaussian channel, which is given by

C = W log2 (1 + S/N) bits/second          (4.2)

where C is the channel capacity, that is, the maximum number of bits which can be transmitted through this channel per unit time (second), W is the bandwidth of the channel, and S/N is the signal-to-noise power ratio at the receiver. Shannon's main theorem asserts that error probabilities as small as desired can be achieved as long as the transmission rate R through the channel (in bits/second) is smaller than the channel capacity C. This can be achieved by using an appropriate encoding and decoding operation. However, Shannon's theory is silent about the structure of these encoders and decoders.
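As a final worked example (a sketch with assumed numbers, not from the source), the Python snippet below evaluates equation (4.2) for a telephone-like channel and checks the efficiency η = R/C for a chosen transmission rate R.

```python
# Illustrative sketch: evaluating C = W * log2(1 + S/N) for an ideal
# band-limited Gaussian channel, plus the efficiency eta = R / C.
# The bandwidth, SNR, and rate values below are assumptions.
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity in bits/second for a band-limited AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

W = 3_000.0                      # bandwidth in Hz (telephone-like channel)
snr_db = 30.0                    # signal-to-noise ratio in dB
snr = 10.0 ** (snr_db / 10.0)    # convert dB to a linear power ratio

C = shannon_capacity(W, snr)
R = 20_000.0                     # an assumed transmission rate in bits/second
print(f"C = {C:,.0f} bits/second")            # ~29,902 bits/second
print(f"efficiency eta = R/C = {R / C:.2f}")  # reliable communication needs R < C
```

Since R < C here, the theorem guarantees that some encoder and decoder can drive the error probability as low as desired, even though it does not say how to build them.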
