DETECTION AND ESTIMATION
DIGITAL COMMUNICATION
Module I
Introduction to Digital communication: Random variables and random processes – Detection and estimation – Gram-Schmidt procedure – Geometric interpretation of signals – Response of a bank of correlators to noisy input – Detection of known signals in noise – Probability of error – Correlation and matched filter receiver – Detection of signals with unknown phase in noise.
Estimation concepts and criteria: MLE – Estimator quality measures – Cramer-Rao bound – Wiener filter for waveform estimation – Linear prediction.
Compiled by MKP for CEC S6-EC, DC, Dec 2008
Orthogonal functions
Consider a set of functions φ_1(t), φ_2(t), φ_3(t), ..., φ_n(t), ... defined over the interval t_1 ≤ t ≤ t_2.
Let these functions satisfy the condition

∫_{t1}^{t2} φ_i(t) φ_j(t) dt = 0,  i ≠ j

A set of functions which has this property is said to be orthogonal over the interval from t_1 to t_2.
Suppose we have an arbitrary function s(t), and we are interested in s(t) only in the interval t_1 to t_2, where the set of functions φ_n(t) is orthogonal.
Now we can express s(t) as a linear sum of the functions φ_n(t):

s(t) = Σ_n s_n φ_n(t)

To find a particular coefficient s_n, multiply both sides by φ_n(t) and integrate over the interval. Because of the orthogonality, all of the terms on the RHS become zero except a single term:

∫_{t1}^{t2} s(t) φ_n(t) dt = s_n ∫_{t1}^{t2} φ_n²(t) dt

Then,

s_n = ∫_{t1}^{t2} s(t) φ_n(t) dt / ∫_{t1}^{t2} φ_n²(t) dt
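As a hypothetical numerical check of this coefficient formula, take the orthogonal set φ_n(t) = sin(nt) on [0, 2π] (an assumed example basis; the signal and its coefficients below are chosen for illustration):

```python
import numpy as np

# Check s_n = ∫ s(t) φ_n(t) dt / ∫ φ_n²(t) dt for φ_n(t) = sin(nt) on [0, 2π].
t = np.linspace(0.0, 2.0 * np.pi, 100_001)
dt = t[1] - t[0]

def inner(f, g):
    return np.sum(f * g) * dt        # rectangle-rule approximation of ∫ f g dt

# An arbitrary signal built from the first three basis functions.
s = 2.0 * np.sin(t) - 0.5 * np.sin(2 * t) + 3.0 * np.sin(3 * t)

coeffs = [inner(s, np.sin(n * t)) / inner(np.sin(n * t), np.sin(n * t))
          for n in range(1, 4)]
print(np.round(coeffs, 4))           # recovers the coefficients 2.0, -0.5, 3.0
```

The formula picks each coefficient out of the sum exactly because every cross-term integral vanishes.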
s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),  0 ≤ t ≤ T,  i = 1, 2, ..., M    --- (1)

The coefficients of the expansion s_ij are defined by

s_ij = ∫_0^T s_i(t) φ_j(t) dt,  i = 1, 2, ..., M;  j = 1, 2, ..., N
The functions φ_1(t), φ_2(t), φ_3(t), ..., φ_N(t) are orthonormal, which means

∫_0^T φ_i(t) φ_j(t) dt = 0 for i ≠ j, and 1 for i = j
Gram-Schmidt Orthogonalization Procedure
The first condition states that the basis functions φ1(t ),φ2 (t ),.....,φN (t )
are orthogonal with respect to each other over the interval 0 to T.
The second condition states that the basis functions are normalized to
have unit energy.
Given the set of coefficients {s_ij}, j = 1, 2, ..., N, we can generate the signal s_i(t), i = 1, 2, ..., M using the scheme shown in figure (1).
It consists of a bank of N multipliers, with each multiplier supplied with its own basis function, followed by a summer.
[Figure (1): Signal synthesizer — the coefficients s_i1, s_i2, ..., s_iN each multiply the corresponding basis function φ_1(t), φ_2(t), ..., φ_N(t), and the products are summed to form s_i(t).]

s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),  0 ≤ t ≤ T,  i = 1, 2, ..., M

Conversely, the coefficients can be recovered with a bank of N correlators, each multiplying s_i(t) by one basis function and integrating over 0 to T, as shown in figure (2).

[Figure (2): Signal analyzer — a bank of N correlators (multiplier followed by an integrator ∫_0^T dt), producing the coefficients s_i1, s_i2, ..., s_iN.]

s_ij = ∫_0^T s_i(t) φ_j(t) dt,  i = 1, 2, ..., M;  j = 1, 2, ..., N
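A discrete-time sketch of the synthesizer of figure (1) and the analyzer of figure (2), assuming the orthonormal basis φ_j(t) = √(2/T) sin(2πjt/T) on [0, T] (an illustrative choice, not one fixed by the notes):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_001)
dt = t[1] - t[0]
N = 3
# Orthonormal basis φ_j(t) = sqrt(2/T) sin(2π j t / T), j = 1..N.
basis = np.array([np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * (j + 1) * t / T)
                  for j in range(N)])

def synthesize(coeffs):
    # Figure (1): a bank of N multipliers followed by a summer.
    return coeffs @ basis

def analyze(signal):
    # Figure (2): a bank of N correlators, s_ij = ∫_0^T s_i(t) φ_j(t) dt.
    return np.array([np.sum(signal * basis[j]) * dt for j in range(N)])

coeffs = np.array([1.0, -2.0, 0.5])      # a chosen coefficient set {s_ij}
recovered = analyze(synthesize(coeffs))
print(np.round(recovered, 4))            # recovers the chosen coefficients
```

Passing a signal through the synthesizer and then the analyzer returns the original coefficients, which is exactly the orthonormality property at work.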
If the given set of signals s_1(t), s_2(t), s_3(t), ..., s_M(t) is linearly dependent, then there exists a set of coefficients a_1, a_2, ..., a_M, not all zero, such that we may write

a_1 s_1(t) + a_2 s_2(t) + ··· + a_M s_M(t) = 0
If a_M ≠ 0, then we can express the signal s_M(t) as

s_M(t) = −[ (a_1/a_M) s_1(t) + (a_2/a_M) s_2(t) + ··· + (a_{M−1}/a_M) s_{M−1}(t) ]

This implies that s_M(t) may be expressed in terms of the remaining (M−1) signals.
Next consider the set of signals s_1(t), s_2(t), s_3(t), ..., s_{M−1}(t). If this set is linearly dependent, there exists a set of numbers b_1, b_2, ..., b_{M−1}, not all equal to zero, such that

b_1 s_1(t) + b_2 s_2(t) + ··· + b_{M−1} s_{M−1}(t) = 0

Repeating this argument, we eventually arrive at a linearly independent subset s_1(t), s_2(t), ..., s_N(t), N ≤ M, in terms of which all M signals can be expressed.
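A numerical way to test this linear-dependence condition is to sample the signals and check the rank of the resulting matrix (an illustrative sketch with made-up signals, not part of the notes):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
s1 = np.sin(2.0 * np.pi * t)
s2 = np.cos(2.0 * np.pi * t)
s3 = 2.0 * s1 - 3.0 * s2          # dependent: 2·s1 − 3·s2 − 1·s3 = 0

S = np.stack([s1, s2, s3])        # one sampled signal per row
rank = np.linalg.matrix_rank(S)
print(rank)                       # 2: only two linearly independent signals
```

The rank of the sample matrix equals the number N of linearly independent signals, i.e. the dimension of the signal space.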
s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),  0 ≤ t ≤ T,  i = 1, 2, ..., N
We begin by setting

s_1(t) = s_11 φ_1(t)

Since φ_1(t) is to be a normalized function,

s_11 = √( ∫_0^T s_1²(t) dt ) = √E_1

φ_1(t) = s_1(t)/s_11 = s_1(t)/√E_1

where E_1 is the energy of s_1(t).
In the next step we set to zero all the coefficients except the first two, s_21 and s_22, in equation (1):

s_2(t) = s_21 φ_1(t) + s_22 φ_2(t)

The coefficient s_21 is obtained by correlating s_2(t) with φ_1(t):

s_21 = ∫_0^T s_2(t) φ_1(t) dt

Define the intermediate function

g_2(t) = s_2(t) − s_21 φ_1(t)

which is orthogonal to φ_1(t). Normalizing g_2(t) gives the second basis function:

φ_2(t) = g_2(t) / √( ∫_0^T g_2²(t) dt )

so that

s_2(t) = s_21 φ_1(t) + s_22 φ_2(t),  where s_22 = √( ∫_0^T g_2²(t) dt )

Continuing in this fashion, for the i-th signal we compute the coefficients

s_ij = ∫_0^T s_i(t) φ_j(t) dt,  j = 1, 2, ..., i−1

form the intermediate function

g_i(t) = s_i(t) − Σ_{j=1}^{i−1} s_ij φ_j(t)

and normalize it:

φ_i(t) = g_i(t) / √( ∫_0^T g_i²(t) dt )

For example, for i = 3:

s_31 = ∫_0^T s_3(t) φ_1(t) dt,  s_32 = ∫_0^T s_3(t) φ_2(t) dt

g_3(t) = s_3(t) − s_31 φ_1(t) − s_32 φ_2(t),  s_33 = √( ∫_0^T g_3²(t) dt )
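A minimal sketch of this recursion applied to sampled waveforms, with a rectangle rule standing in for the integrals (the example signals are illustrative):

```python
import numpy as np

def gram_schmidt(signals, dt):
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi in basis:
            g -= (np.sum(g * phi) * dt) * phi   # subtract the s_ij φ_j(t) component
        energy = np.sum(g * g) * dt             # ∫ g_i²(t) dt
        if energy > 1e-12:                      # skip linearly dependent signals
            basis.append(g / np.sqrt(energy))   # φ_i(t) = g_i(t) / sqrt(energy)
    return basis

t = np.linspace(0.0, 1.0, 10_001)
dt = t[1] - t[0]
s1, s2, s3 = np.ones_like(t), t, 3.0 * t - 1.0  # s3 depends on s1 and s2
basis = gram_schmidt([s1, s2, s3], dt)
print(len(basis))                               # 2 basis functions survive

# Verify orthonormality: ∫ φ_i(t) φ_j(t) dt = δ_ij
gram = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
print(np.round(gram, 4))
```

Note that the linearly dependent signal s_3(t) produces a residual g_3(t) of (numerically) zero energy, so it contributes no new basis function, consistent with N ≤ M.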
[Figure: signal-space diagram showing the signal vectors s_1, s_2 and s_3 plotted against the basis-function axes.]
The angle θ between two signal vectors s_i and s_j is given by

cos θ = (s_i · s_j) / ( |s_i| |s_j| )
The two vectors are orthogonal if their dot product is zero.
From equations (4) and (5) we can see that the energy of a signal si(t)
is equal to the squared length of the signal vector si representing it.
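An illustrative check of both facts, using an assumed orthonormal basis φ_j(t) = √2 sin(jπt) on [0, 1] (the vectors are chosen for the example):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_001)
dt = t[1] - t[0]
# Orthonormal basis φ_j(t) = sqrt(2) sin(j π t) on [0, 1].
basis = np.array([np.sqrt(2.0) * np.sin((j + 1) * np.pi * t) for j in range(2)])

si = np.array([3.0, 4.0])                 # signal vector s_i
energy = np.sum((si @ basis) ** 2) * dt   # waveform energy ∫ s_i²(t) dt
print(round(energy, 3), float(si @ si))   # energy equals squared length, 25.0

# Orthogonality test: cos θ = (s_i · s_j) / (|s_i| |s_j|)
sj = np.array([-4.0, 3.0])
cos_theta = (si @ sj) / (np.linalg.norm(si) * np.linalg.norm(sj))
print(cos_theta)                          # 0: the vectors are orthogonal
```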