Page 83
[Figure: block diagram of an M-ary communication system — source, modulator, AWGN channel adding w(t), and the demodulator (receiver): correlators integrate r(t) against the basis functions over ((k-1)Ts, kTs), sample at t = kTs, and feed the observation vector (r1, r2, ..., rN) of the N-dimensional observation space to a decision device that outputs the estimate of mi.]

There are M messages. For example, each message may carry λ = log2(M) bits. If the bit duration is Tb, then the message (or symbol) duration is Ts = λTb. The messages in different time slots are statistically independent. The M signals si(t), i = 1, 2, ..., M, can be arbitrary but of finite energies Ei. The a priori message probabilities are Pi. Unless stated otherwise, it is assumed that the messages are equally probable. The noise w(t) is modeled as a stationary, zero-mean, white Gaussian process with power spectral density N0/2. The receiver observes the received signal r(t) = si(t) + w(t) over one symbol duration and makes a decision as to which message was transmitted. The criterion for the optimum receiver is to minimize the probability of making an error.

Optimum receivers and their error probabilities shall be considered for various signal constellations.
University of Saskatchewan
Binary Signalling
Received signal:

           On-Off                  Orthogonal               Antipodal
H1 (0):    r(t) = w(t)             r(t) = s1(t) + w(t)      r(t) = -s2(t) + w(t)
H2 (1):    r(t) = s2(t) + w(t)     r(t) = s2(t) + w(t)      r(t) = s2(t) + w(t)

where, for the orthogonal scheme, ∫_0^{Tb} s1(t)s2(t) dt = 0. Decision space and receiver implementation:
[Figure: decision spaces and receiver implementations.
(a) On-Off: a single correlator with φ1(t) = s2(t)/√E over (0, Tb); decide 1 if r1 ≥ √E/2 and 0 if r1 < √E/2 (threshold midway between 0 and √E).
(b) Orthogonal: two correlators with φ1(t) = s1(t)/√E and φ2(t) = s2(t)/√E; decide 1 if r2 ≥ r1 and 0 if r2 < r1.
(c) Antipodal: a single correlator with φ1(t) = s2(t)/√E; decide 1 if r1 ≥ 0 and 0 if r1 < 0.]
The resulting error probabilities are:

                Orthogonal              Antipodal
Pr[error]:      Q(√(E/N0))              Q(√(2E/N0))
                = Q(√(Eb/N0))           = Q(√(2Eb/N0))
The above shows that, with the same average energy per bit Eb = (E1 + E2)/2, antipodal signalling is 3 dB more efficient than orthogonal signalling, which has the same performance as that of on-off signalling. This is graphically shown below.
[Figure: Pr[error] versus Eb/N0 (dB) for antipodal and orthogonal/on-off signalling; the antipodal curve lies 3 dB to the left.]
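As a numerical check, the two error-probability expressions above can be evaluated directly. The following sketch (function names are our own) uses only the standard library:

```python
from math import erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function: Q(x) = P(N(0,1) > x) = erfc(x/sqrt(2))/2."""
    return 0.5 * erfc(x / sqrt(2.0))

def pe_antipodal(ebn0: float) -> float:
    """Antipodal signalling: Pr[error] = Q(sqrt(2*Eb/N0))."""
    return qfunc(sqrt(2.0 * ebn0))

def pe_orthogonal(ebn0: float) -> float:
    """Orthogonal (and on-off) signalling: Pr[error] = Q(sqrt(Eb/N0))."""
    return qfunc(sqrt(ebn0))
```

Doubling Eb/N0 (i.e., adding 3 dB) for the orthogonal scheme reproduces the antipodal curve exactly, which is the content of the 3 dB remark above.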
Example 1: In passband transmission, the digital information is encoded as a variation of the amplitude, frequency or phase (or their combinations) of a sinusoidal carrier signal. The carrier frequency is much higher than the highest frequency of the modulating signals (messages). Binary amplitude-shift keying (BASK), binary phase-shift keying (BPSK) and binary frequency-shift keying (BFSK) are examples of on-off, antipodal and orthogonal signalling, respectively.
[Figure: (a) binary data 1 0 1 1 0 ... over 0 to 9Tb, (b) the modulating signal, and the corresponding (c) ASK, (d) PSK and (e) FSK signals, each with amplitude V.]
Example 2: Various binary baseband signalling schemes are shown below. The optimum receiver and its error performance follow easily once the two signals s1(t) and s2(t) used in each scheme are identified.
[Figure: binary baseband signalling formats for a binary data sequence and its clock — (b) NRZ-L, (c) RZ code, (d) RZ-L, (e) Bi-Phase, (f) Bi-Phase-L, each switching between ±V (or 0 and V).]
EE810: Communication Theory I
The M-ary ASK signal set is

si(t) = (i - 1)Δφ1(t), 0 ≤ t ≤ Ts; i = 1, 2, ..., M    (1)

[Figure: the M-ASK constellation along φ1(t) — signal points at 0, Δ, 2Δ, ..., (M - 1)Δ, the conditional pdfs f(r1|sk(t)), and the decision regions "Choose s1(t)", ..., "Choose sM(t)".]
Decision rule:
choose sk(t) if (k - 3/2)Δ < r1 < (k - 1/2)Δ, k = 2, 3, ..., M - 1;
choose s1(t) if r1 < Δ/2, and choose sM(t) if r1 > (M - 3/2)Δ
[Figure: correlation receiver for M-ASK — r(t) is correlated with φ1(t) over ((k-1)Ts, kTs), sampled at t = kTs, and the resulting r1 is compared against the thresholds Δ/2, 3Δ/2, ..., (M - 3/2)Δ by the decision device.]
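The threshold rule above can be sketched in code. Assuming the 0, Δ, ..., (M-1)Δ constellation, rounding r1 to the nearest level and clipping to the end levels is equivalent to comparing against the thresholds Δ/2, 3Δ/2, ..., (M - 3/2)Δ (names are illustrative):

```python
def ask_decide(r1: float, M: int, delta: float) -> int:
    """Minimum-distance decision for M-ASK with levels 0, delta, ..., (M-1)*delta.
    Returns the 1-based index k of the chosen signal s_k(t)."""
    k = round(r1 / delta) + 1      # nearest level; thresholds sit at half-integer multiples
    return min(max(k, 1), M)       # the two end decision regions extend to -inf and +inf
```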
Probability of error:
[Figure: conditional pdfs f(r1|sk(t)) for an inner signal and for the two end signals, with the corresponding decision regions "Choose s1(t)", "Choose sk(t)", "Choose sM(t)".]
For the M - 2 inner signals: Pr[error|si(t)] = 2Q(Δ/√(2N0)).
For the two end signals: Pr[error|si(t)] = Q(Δ/√(2N0)).
Since Pr[si(t)] = 1/M, then

Pr[error] = (1/M) Σ_{i=1}^{M} Pr[error|si(t)] = (2(M - 1)/M) Q(Δ/√(2N0))
Since each message carries λ = log2(M) bits, the bit error probability depends on the mapping and is often tedious to compute exactly. If the mapping is a Gray mapping (i.e., the mappings of the nearest signals differ in only one bit), a good approximation is Pr[bit error] ≈ Pr[error]/λ.
If φ1(t) is a sinusoidal carrier, i.e., φ1(t) = √(2/Ts) cos(2πfc·t), where 0 ≤ t ≤ Ts, fc = k/Ts and k is an integer, the scheme is also known as M-ary amplitude shift keying (M-ASK).
To express the probability in terms of Eb/N0, compute the average transmitted energy per message (or symbol). For the minimum-energy (origin-centered) version of the constellation, with amplitudes (2i - 1 - M)Δ/2, the error probability is unchanged and

Es = (1/M) Σ_{i=1}^{M} Ei = (Δ²/(4M)) Σ_{i=1}^{M} (2i - 1 - M)² = (Δ²/(4M)) · M(M² - 1)/3 = (M² - 1)Δ²/12    (2)

Thus the average transmitted energy per bit is

Eb = Es/log2(M) = (M² - 1)Δ²/(12 log2(M))    (3)

so that

Pr[error] = (2(M - 1)/M) Q(√((6 log2(M)/(M² - 1)) · (Eb/N0)))    (4)
[Figure: symbol error probability of M-ASK versus Eb/N0 (dB) for M = 2, 4, 8, 16.]
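Equation (4) is straightforward to evaluate; the following sketch (our own naming) reproduces the curves in the figure:

```python
from math import erfc, sqrt, log2

def qfunc(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def ask_symbol_error(M: int, ebn0: float) -> float:
    """Equation (4): Pr[error] = (2(M-1)/M) * Q(sqrt(6*log2(M)/(M^2-1) * Eb/N0))."""
    lam = log2(M)
    return (2.0 * (M - 1) / M) * qfunc(sqrt(6.0 * lam / (M * M - 1) * ebn0))
```

For M = 2 the expression collapses to Q(√(2Eb/N0)), the antipodal result, as expected; larger M costs energy at the same Eb/N0.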
The M-PSK signal set is

si(t) = V cos(2πfc·t + 2π(i - 1)/M), 0 ≤ t ≤ Ts; i = 1, 2, ..., M    (5)

where E = V²Ts/2 and the two orthonormal basis functions are:

φ1(t) = V cos(2πfc·t)/√E;  φ2(t) = V sin(2πfc·t)/√E    (6)

M = 8:
[Figure: 8-PSK signal constellation with Gray mapping — s1(t) ↔ 000, s2(t) ↔ 001, s3(t) ↔ 011, s4(t) ↔ 010, s5(t) ↔ 110, s6(t) ↔ 111, s7(t) ↔ 101, s8(t) ↔ 100, equally spaced on a circle of radius √E in the (φ1(t), φ2(t)) plane.]
[Figure: decision regions Z_i of the minimum-distance receiver for M-PSK, and its implementation — two correlators compute r1 = ∫r(t)φ1(t)dt and r2 = ∫r(t)φ2(t)dt over (0, Ts); the decision device chooses the signal closest to (r1, r2).]
Changing the variables to V = √(r1² + r2²) and θ = tan⁻¹(r2/r1) (polar coordinate system), the joint pdf of V and θ is

p(V, θ) = (V/(πN0)) exp(-(V² + E - 2√E·V cos θ)/N0)    (8)
The pdf of θ alone follows by integrating over V:

p(θ) = (1/(2π)) e^{-(E/N0) sin²θ} ∫_0^∞ V e^{-(V - √(2E/N0) cos θ)²/2} dV    (9)

With the above pdf of θ, the error probability can be computed as:

Pr[error] = 1 - Pr[-π/M ≤ θ ≤ π/M | s1(t)]
          = 1 - ∫_{-π/M}^{π/M} p(θ) dθ    (10)
In general, the integral of p() as above does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4. An approximation to the error probability for large values of M and for large symbol signal-to-noise ratio (SNR) s = E/N0 can be obtained as follows. First the error probability is lower bounded by Pr[error] = Pr[error|s1 (t)] > Pr[(r1 , r2 ) is closer to s2 (t) than s1 (t)|s1 (t)] E/N0 = Q sin M The upper bound is obtained by the following union bound:
(11)
Pr[error] = Pr[error|s1 (t)] < Pr[(r1 , r2 ) is closer to s2 (t) OR sM 1 (t) than s1 (t)|s1 (t)] E/N0 (12) < 2Q sin M Since the lower and upper bounds only dier by a factor of two, they are very tight for high SNR. Thus a good approximation to the error probability of M -PSK is: E/N0 Pr[error] 2Q sin M (2Eb /N0 ) sin = 2Q (13) M
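The bounds (11)-(12) and the approximation (13) all share the same Q-function argument; a short sketch (names are ours):

```python
from math import erfc, sqrt, sin, pi

def qfunc(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def psk_error_bounds(M: int, esn0: float):
    """Lower bound (11) and union upper bound (12) on the M-PSK symbol error,
    with esn0 = E/N0 (symbol SNR). The approximation (13) equals the upper bound."""
    arg = sqrt(2.0 * esn0) * sin(pi / M)
    return qfunc(arg), 2.0 * qfunc(arg)
```

For M = 4 the upper bound equals 2Q(√(E/N0)), which differs from the exact QPSK result only by a small Q² term.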
[Figure: QPSK signal constellation with Gray mapping (m1 = 00, m2 = 01, m3 = 11, m4 = 10) and the four quadrant decision regions "Choose m1", ..., "Choose m4".]

For QPSK the probability of a correct decision is

Pr[correct] = [1 - Q(√(E/N0))]²    (14)

so that

Pr[error] = 1 - Pr[correct] = 1 - [1 - Q(√(E/N0))]² = 2Q(√(E/N0)) - Q²(√(E/N0))    (15)
The bit error probability of QPSK with Gray mapping can be obtained by considering the different conditional message error probabilities:

Pr[m2|m1] = Q(√(E/N0))[1 - Q(√(E/N0))]    (16)
Pr[m3|m1] = Q²(√(E/N0))    (17)
Pr[m4|m1] = Q(√(E/N0))[1 - Q(√(E/N0))]    (18)

The bit error probability is therefore

Pr[bit error] = 0.5 Pr[m2|m1] + 0.5 Pr[m4|m1] + 1.0 Pr[m3|m1] = Q(√(E/N0)) = Q(√(2Eb/N0))    (19)
where the viewpoint is taken that one of the two bits is chosen at random, i.e., with a probability of 0.5. The above shows that QPSK with Gray mapping has exactly the same bit-error-rate (BER) performance as BPSK, while its bit rate can be doubled for the same bandwidth. In general, the bit error probability of M-PSK is difficult to obtain for an arbitrary mapping. For Gray mapping, again a good approximation is Pr[bit error] ≈ Pr[symbol error]/λ. The exact calculation of the bit error probability can be found in the following paper: J. Lassing, E. G. Ström, E. Agrell and T. Ottosson, "Computation of the Exact Bit-Error Rate of Coherent M-ary PSK With Gray Code Bit Mapping," IEEE Trans. on Communications, vol. 51, Nov. 2003, pp. 1758-1760.
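The algebra in (16)-(19) can be checked numerically: combining the conditional probabilities with the 0.5/0.5/1.0 bit weights collapses to a single Q-function (sketch, our naming):

```python
from math import erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def qpsk_bit_error(esn0: float) -> float:
    """Combine (16)-(18) with the weights of (19); esn0 = E/N0, E = 2*Eb."""
    q = qfunc(sqrt(esn0))
    p21 = q * (1.0 - q)   # Pr[m2|m1]: adjacent point, 1 of the 2 bits differs
    p31 = q * q           # Pr[m3|m1]: diagonal point, both bits differ
    p41 = q * (1.0 - q)   # Pr[m4|m1]: adjacent point, 1 of the 2 bits differs
    return 0.5 * p21 + 0.5 * p41 + 1.0 * p31
```

The q(1 - q) cross-terms cancel against q², leaving exactly q = Q(√(E/N0)) = Q(√(2Eb/N0)), the BPSK bit error probability.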
Comparison of M-PSK and BPSK:

Pr[error]_{M-ary} ≈ 2Q(√(2λEb/N0) sin(π/M)),  (M > 4)
Pr[error]_{QPSK} = 2Q(√(2Eb/N0)) - Q²(√(2Eb/N0))
Pr[error]_{BPSK} = Q(√(2Eb/N0))
[Figure: symbol error probability of M-PSK versus Eb/N0 (dB); the curves shift right as M increases, with the M = 2 curve leftmost.]
λ    M     M-ary BW/binary BW    λ sin²(π/M)    M-ary energy/binary energy
3    8            1/3               0.44                 3.6 dB
4    16           1/4               0.15                 8.2 dB
5    32           1/5               0.05                13.0 dB
6    64           1/6               0.014               17.0 dB
Examples of QAM constellations are shown on the next page. Rectangular QAM constellations, shown below, are the most popular. For rectangular QAM, the in-phase and quadrature amplitudes take the values Vc,i, Vs,i ∈ {(2i - 1 - √M)Δ/2; i = 1, 2, ..., √M}.
[Figure: examples of QAM constellations, including the rectangular constellations referred to above.]
Another example: The 16-QAM signal constellation shown below is an international standard for telephone-line modems (called V.29). The decision regions of the minimum distance receiver are also drawn.
[Figure: V.29 16-QAM constellation — signal points at amplitudes ±1, ±3, ±5 with 4-bit labels, and the decision regions of the minimum-distance receiver; the receiver computes r1 and r2 with two correlators against φ1(t) and φ2(t) over (0, Ts) and chooses the closest signal point.]
Error performance of rectangular M-QAM: For M = 2^λ, where λ is even, QAM consists of two ASK signals on quadrature carriers, each having √M = 2^{λ/2} signal points. Thus the probability of symbol error can be computed as

Pr[error] = 1 - Pr[correct] = 1 - (1 - Pr_{√M}[error])²    (22)

where Pr_{√M}[error] is the probability of error of a √M-ary ASK:

Pr_{√M}[error] = 2(1 - 1/√M) Q(√(3Es/((M - 1)N0)))    (23)

Note that in the above equation, Es is the average energy per symbol of the QAM constellation.
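Equations (22)-(23) combine into a one-line evaluation (sketch, our naming):

```python
from math import erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def qam_symbol_error(M: int, esn0: float) -> float:
    """Equations (22)-(23): rectangular M-QAM viewed as two independent
    sqrt(M)-ary ASK decisions on quadrature carriers; esn0 = Es/N0."""
    root_m = round(sqrt(M))
    p_ask = 2.0 * (1.0 - 1.0 / root_m) * qfunc(sqrt(3.0 * esn0 / (M - 1)))
    return 1.0 - (1.0 - p_ask) ** 2
```

For M = 4, p_ask reduces to Q(√(Es/N0)) and the result matches the exact QPSK expression (15).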
[Figure: symbol error probability of rectangular M-QAM versus Eb/N0 (dB) for M = 4 and M = 16.]
Since the error probability is dominated by the argument of the Q-function, one may simply compare the arguments of Q for the two modulation schemes. The ratio of the arguments is

R_M = (3/(M - 1)) / (2 sin²(π/M))    (26)
M     10·log10(R_M)
8       1.65 dB
16      4.20 dB
32      7.02 dB
64      9.95 dB

Thus M-ary QAM yields a better performance than M-ary PSK for the same bit rate (i.e., same M) and the same transmitted energy (Eb/N0).
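The table entries follow directly from (26) (sketch, our naming):

```python
from math import sin, pi, log10

def qam_over_psk_gain_db(M: int) -> float:
    """10*log10(R_M) with R_M = (3/(M-1)) / (2*sin(pi/M)**2), equation (26)."""
    r_m = (3.0 / (M - 1)) / (2.0 * sin(pi / M) ** 2)
    return 10.0 * log10(r_m)
```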
[Figure: orthogonal signal constellations for (a) M = 2 and (b) M = 3 — each signal si(t) = √E φi(t) lies at distance √E along its own orthonormal axis.]
The decision rule of the minimum-distance receiver follows easily:

Choose si(t) if ri > rj, j = 1, 2, ..., M; j ≠ i    (28)
[Figure: correlation receiver for M-ary orthogonal signalling — a bank of M correlators with φi(t) = si(t)/√E, i = 1, ..., M, integrating over (0, Ts) and sampling at t = Ts; the decision device chooses the largest ri.]
When the M orthonormal functions are chosen to be orthogonal sinusoidal carriers, the signal set is known as M-ary frequency shift keying (M-FSK):

si(t) = V cos(2πfi·t), 0 ≤ t ≤ Ts; i = 1, 2, ..., M    (29)

where the frequencies are chosen so that the signals are orthogonal over the interval [0, Ts]:

fi = (k + i - 1)/(2Ts), i = 1, 2, ..., M (k an integer)    (30)
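Orthogonality of tones spaced by 1/(2Ts) can be verified by numerical integration. The following sketch (illustrative values k = 6, Ts = 1 ms) approximates (2/Ts)·∫cos(2πfi·t)cos(2πfj·t)dt with a midpoint rule:

```python
import math

def tone_correlation(fi: float, fj: float, Ts: float, n: int = 20000) -> float:
    """Midpoint-rule approximation of (2/Ts) * integral_0^Ts cos(2*pi*fi*t)*cos(2*pi*fj*t) dt."""
    dt = Ts / n
    acc = 0.0
    for m in range(n):
        t = (m + 0.5) * dt
        acc += math.cos(2.0 * math.pi * fi * t) * math.cos(2.0 * math.pi * fj * t)
    return 2.0 * acc * dt / Ts

Ts = 1e-3
freqs = [(6 + i - 1) / (2.0 * Ts) for i in (1, 2, 3)]   # f_i = (k + i - 1)/(2Ts), k = 6
```

Equal tones give correlation 1 (each signal has energy V²Ts/2); distinct tones on the 1/(2Ts) grid give 0.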
Error performance of M orthogonal signals: To determine the message error probability, consider that message s1(t) was transmitted. Due to the symmetry of the signal space, and because the messages are equally likely,

Pr[error] = Pr[error|s1(t)] = 1 - Pr[correct|s1(t)] = 1 - Pr[all rj < r1, j ≠ 1 | s1(t)]
          = 1 - ∫_{R1=-∞}^{∞} (πN0)^{-1/2} exp(-(R1 - √E)²/N0) [∫_{R2=-∞}^{R1} (πN0)^{-1/2} exp(-R2²/N0) dR2]^{M-1} dR1    (31)
The above integral can only be evaluated numerically. It can be normalized so that only two parameters, namely M (the number of messages) and E/N0 (the signal-to-noise ratio), enter into the numerical integration, as given below:

Pr[error] = 1 - ∫_{-∞}^{∞} (1/√(2π)) exp(-(y - √(2E/N0))²/2) [(1/√(2π)) ∫_{-∞}^{y} e^{-x²/2} dx]^{M-1} dy    (32)
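Equation (32) is straightforward to evaluate with a midpoint rule; a sketch (our naming), truncating the integral to ±10 standard deviations around the mean of the outer Gaussian:

```python
from math import erf, exp, sqrt, pi

def orthogonal_symbol_error(M: int, esn0: float, n: int = 20000, span: float = 10.0) -> float:
    """Numerical evaluation of (32) for M orthogonal signals with E/N0 = esn0."""
    a = sqrt(2.0 * esn0)                         # mean of the outer Gaussian
    lo = a - span
    h = 2.0 * span / n
    total = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        cdf = 0.5 * (1.0 + erf(y / sqrt(2.0)))   # inner integral: standard normal CDF
        total += exp(-0.5 * (y - a) ** 2) / sqrt(2.0 * pi) * cdf ** (M - 1) * h
    return 1.0 - total
```

For M = 2 this reduces to Q(√(E/N0)), matching the binary orthogonal result.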
The relationship between probability of bit error and probability of symbol error for an M-ary orthogonal signal set can be found as follows. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

Pr[symbol error]/(M - 1) = Pr[symbol error]/(2^λ - 1)

There are C(λ, k) ways in which k bits out of λ may be in error. Hence the average number of bit errors per λ-bit symbol is

Σ_{k=1}^{λ} k·C(λ, k)·Pr[symbol error]/(2^λ - 1) = λ·2^{λ-1}/(2^λ - 1) · Pr[symbol error]    (33)

The probability of bit error is the above result divided by λ. Thus

Pr[bit error] = 2^{λ-1}/(2^λ - 1) · Pr[symbol error]    (34)
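The combinatorial identity behind (33) can be verified by direct enumeration (sketch, our naming):

```python
from math import comb

def avg_bit_errors_per_symbol(lam: int) -> float:
    """Average number of bit errors per lam-bit symbol, given a symbol error,
    when all M-1 = 2^lam - 1 wrong symbols are equally likely."""
    M = 2 ** lam
    return sum(k * comb(lam, k) for k in range(1, lam + 1)) / (M - 1)
```

Dividing by λ gives the factor 2^{λ-1}/(2^λ - 1) of (34), which approaches 1/2 for large λ.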
Union Bound: To provide insight into the behaviour of Pr[error], consider upper bounding (called the union bound) the error probability as follows.

Pr[error] = Pr[(r1 < r2) or (r1 < r3) or ... or (r1 < rM)|s1(t)]
          < Pr[(r1 < r2)|s1(t)] + ... + Pr[(r1 < rM)|s1(t)]
          = (M - 1)Q(√(E/N0)) < M·Q(√(E/N0)) < M·e^{-E/(2N0)}    (36)
The above equation shows that there is a definite threshold effect with orthogonal signalling. As M → ∞, the probability of error approaches zero exponentially, provided that

Eb/N0 > 2 ln 2 = 1.39 = 1.42 dB    (37)

A different interpretation of the upper bound in (36) can be obtained as follows. Since E = λEb = Ps·Ts (Ps is the transmitted power) and the bit rate rb = λ/Ts, (36) can be rewritten as:

Pr[error] < e^{λ ln 2} e^{-Ps·Ts/(2N0)} = e^{-Ts[-rb ln 2 + Ps/(2N0)]}    (38)

The above implies that if -rb ln 2 + Ps/(2N0) > 0, or rb < Ps/(2N0 ln 2), the probability of error tends to zero as Ts (or M) becomes larger and larger. This behaviour of the error probability is quite surprising, since it shows that, provided the bit rate is small enough, the error probability can be made arbitrarily small even though the transmitter power is finite. The obvious disadvantage of the approach, however, is that the bandwidth requirement increases with M. As M → ∞, the transmitted bandwidth goes to infinity. The union bound is not very tight at sufficiently low SNR because the upper bound Q(x) < exp(-x²/2) for the Q-function is loose. Using more elaborate bounding techniques it can be shown (see the texts by Proakis, and by Wozencraft and Jacobs) that Pr[error] → 0 as λ → ∞, provided that Eb/N0 > ln 2 = 0.693 (-1.6 dB).
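The threshold effect in (36)-(37) shows up immediately when the loose bound M·exp(-E/(2N0)) is tabulated against λ (sketch, our naming):

```python
from math import exp

def union_bound_loose(lam: int, ebn0: float) -> float:
    """The bound M*exp(-E/(2*N0)) of (36), with M = 2^lam and E = lam*Eb."""
    return 2.0 ** lam * exp(-lam * ebn0 / 2.0)
```

Above the threshold Eb/N0 = 2 ln 2 the bound decays exponentially in λ; below it, the bound diverges, consistent with (37).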
Biorthogonal Signals
[Figure: biorthogonal signal constellations for (a) M = 2 and (b) M = 3, showing the signal pairs ±si(t) = ±√E φi(t) along each orthonormal axis.]
A biorthogonal signal set can be obtained from an original orthogonal set of N signals by augmenting it with the negative of each signal. Obviously, for the biorthogonal set M = 2N. Denote the additional signals by -si(t), i = 1, 2, ..., N and assume that each signal has energy E. The received signal r(t) is closer to si(t) than to -si(t) if and only if ∫_0^{Ts}[r(t) - √E φi(t)]² dt < ∫_0^{Ts}[r(t) + √E φi(t)]² dt. This happens iff ri = ∫_0^{Ts} r(t)φi(t) dt > 0. Similarly, r(t) is closer to si(t) than to sj(t) iff ri > rj, j ≠ i, and r(t) is closer to si(t) than to -sj(t) iff ri > -rj, j ≠ i. It follows that the decision rule of the minimum-distance receiver for biorthogonal signalling can be implemented as:

Choose si(t) if ri > |rj|, j ≠ i; choose -si(t) if -ri > |rj|, j ≠ i    (39)

The conditional probability of a correct decision for equally likely messages, given that s1(t) is transmitted and that

r1 = √E + w1 = R1 > 0    (40)

is just

Pr[correct|s1(t), R1 > 0] = Pr[-R1 < all rj < R1, j ≠ 1 | s1(t), R1 > 0]
  = (Pr[-R1 < rj < R1 | s1(t), R1 > 0])^{N-1}
  = [∫_{R2=-R1}^{R1} (πN0)^{-1/2} exp(-R2²/N0) dR2]^{N-1}    (41)
Removing the conditioning on R1 gives

Pr[correct|s1(t)] = ∫_{R1=0}^{∞} (πN0)^{-1/2} exp(-(R1 - √E)²/N0) [∫_{R2=-R1}^{R1} (πN0)^{-1/2} exp(-R2²/N0) dR2]^{N-1} dR1    (42)

Again, by virtue of symmetry and the equal a priori probability of the messages, the above equation is also the Pr[correct]. Finally, by noting that N = M/2, we obtain

Pr[error] = 1 - ∫_{R1=0}^{∞} (πN0)^{-1/2} exp(-(R1 - √E)²/N0) [∫_{R2=-R1}^{R1} (πN0)^{-1/2} exp(-R2²/N0) dR2]^{M/2-1} dR1    (43)
The difference in error performance for M biorthogonal and M orthogonal signals is negligible when M and E/N0 are large, but the number of dimensions (i.e., bandwidth) required is reduced by one half in the biorthogonal case.
[Figure: hypercube signal constellations for (a) M = 2, (b) M = 2² = 4 and (c) M = 2³ = 8 — the signal points sit at the vertices (±√Eb, ..., ±√Eb), spaced 2√Eb apart along each axis.]
Here the M = 2^λ signals are located on the vertices of a λ-dimensional hypercube centered at the origin. This configuration is shown geometrically above for λ = 1, 2, 3. The hypercube signals can be formed as follows:
si(t) = Σ_{j=1}^{λ} sij φj(t), i = 1, 2, ..., M    (44)

where sij = ±√Eb. Thus the components of the signal vector si = [si1, si2, ..., siλ] are ±√Eb. To evaluate the error probability, assume that the signal

s1 = [-√Eb, -√Eb, ..., -√Eb]    (45)

is transmitted. First claim that no error is made if the noise components along the φj(t) satisfy

wj < √Eb, for all j = 1, 2, ..., λ    (46)

The proof is immediate. When r = x = s1 + w, the jth component of x - si is

xj - sij = wj,          if sij = -√Eb    (47)
xj - sij = wj - 2√Eb,   if sij = +√Eb    (48)
it follows that

|x - si|² = Σ_{j=1}^{λ} (xj - sij)² > Σ_{j=1}^{λ} wj² = |x - s1|²    (49)
for all si ≠ s1 whenever (46) is satisfied. Next claim that an error is made if, for at least one j,

wj > √Eb    (50)

This follows from the fact that x is closer to sj than to s1 whenever (50) is satisfied, where sj denotes the signal with component +√Eb along the jth direction and -√Eb in all other directions. (Of course, x may still be closer to some signal other than sj, but it cannot be closest to s1.) Equations (49) and (50) together imply that a correct decision is made if and only if (46) is satisfied. The probability of this event, given that m = m1, is therefore:

Pr[correct|m1] = Pr[all wj < √Eb; j = 1, 2, ..., λ]
  = Π_{j=1}^{λ} Pr[wj < √Eb] = Π_{j=1}^{λ} (1 - Pr[wj ≥ √Eb]) = (1 - p)^λ    (51)

in which

p = Pr[wj ≥ √Eb] = Q(√(2Eb/N0))    (52)

is again the probability of error for two equally likely signals separated by distance 2√Eb. Finally, from symmetry, Pr[correct|mi] = Pr[correct|m1] for all i; hence

Pr[correct] = (1 - p)^λ = [1 - Q(√(2Eb/N0))]^λ    (53)
In order to express this result in terms of signal energy, we again recognize that the distance squared from the origin to each signal si is the same. The transmitted energy is therefore independent of i, hence can be designated by E. Clearly

|si|² = Σ_{j=1}^{λ} sij² = λEb = E    (54)
The simple form of the result Pr[correct] = (1 - p)^λ suggests that a more immediate derivation may exist. Indeed one does. Note that the jth coordinate of the random signal s is a priori equally likely to be +√Eb or -√Eb, independent of all other coordinates. Moreover, the noise wj disturbing the jth coordinate is independent of the noise in all other coordinates. Hence, by the theory of sufficient statistics, a decision may be made on the jth coordinate without examining any other coordinate. This single-coordinate decision corresponds to the problem of binary signals separated by distance 2√Eb, for which the probability of a correct decision is 1 - p. Since in the original hypercube problem a correct decision is made if and only if a correct decision is made on every coordinate, and since these decisions are independent, it follows immediately that Pr[correct] = (1 - p)^λ.
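The per-coordinate argument can be checked by simulation. This Monte Carlo sketch (our naming, Eb normalized to 1) transmits s1 = (-√Eb, ..., -√Eb) and declares an error whenever any noise component exceeds √Eb:

```python
import math
import random

def hypercube_error_sim(lam: int, ebn0: float, trials: int = 200000, seed: int = 1) -> float:
    """Monte Carlo symbol error rate for 2^lam hypercube (vertex) signals."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # per-dimension noise std: variance N0/2 with Eb = 1
    errors = 0
    for _ in range(trials):
        # a symbol error occurs iff at least one coordinate decision is wrong, (46)/(50)
        if any(rng.gauss(0.0, sigma) >= 1.0 for _ in range(lam)):
            errors += 1
    return errors / trials
```

The estimate agrees with 1 - (1 - p)^λ, p = Q(√(2Eb/N0)), from (51)-(53).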