
Convolutional codes

A convolutional encoder:
Encoding is done as a continuous process.
The data stream may be shifted into the encoder in fixed-size blocks, but it is not block coded.
The encoder is a machine with memory.

Block codes are based on algebraic/combinatorial
techniques.
Convolutional codes are based on construction
techniques.
Convolutional codes
A convolutional code is specified by three parameters (n, k, K) or (k/n, K).

Rc = k/n is the coding rate, determining the number of data bits per coded word.

K is the constraint length of the encoder, where the encoder has K-1 memory elements, each of size k.
A Rate 1/2 Convolutional encoder (representations)

k = 1 bit, n = 2 bits, Rate = k/n = 1/2
L = no. of memory elements = 2
K = L + 1 = 2 + 1 = 3, or (L + 1) x n = 6

k = 2 bits, n = 4 bits, Rate = k/n = 1/2
L = no. of memory elements = 2
K = L + 1 = 2 + 1 = 3, or (L + 1) x n = 12
A Rate 1/2 Convolutional encoder

[Encoder diagram: input data bits m enter a shift register; two modulo-2 adders produce the first coded bit u1 and the second coded bit u2, output together as the branch word (u1, u2).]
A Rate 1/2 Convolutional encoder

Message sequence: m = (101)

Time   Register contents   Branch word (u1 u2)
t1     1 0 0               1 1
t2     0 1 0               1 0
t3     1 0 1               0 0
t4     0 1 0               1 0
t5     0 0 1               1 1
t6     0 0 0               0 0

m = (101)  →  Encoder  →  U = (11 10 00 10 11)
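A minimal Python sketch of this shift-register encoder (assuming the generator vectors g1 = (111) and g2 = (101) given under the vector representation below) reproduces the branch words in the table:

```python
# Rate 1/2, K = 3 convolutional encoder sketch.
# Generators assumed to be g1 = (1,1,1) and g2 = (1,0,1).
G = [(1, 1, 1), (1, 0, 1)]

def conv_encode(msg, generators=G):
    """Encode a list of message bits; K-1 zero tail bits flush the register."""
    K = len(generators[0])
    reg = [0] * K                      # shift register, newest bit first
    branch_words = []
    for bit in msg + [0] * (K - 1):    # append the zero tail
        reg = [bit] + reg[:-1]         # shift the new bit in
        u = [sum(g[i] * reg[i] for i in range(K)) % 2 for g in generators]
        branch_words.append("".join(map(str, u)))
    return branch_words

print(conv_encode([1, 0, 1]))          # -> ['11', '10', '00', '10', '11']
```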
Effective code rate
Initialize the memory before encoding the first bit (all-zero).
Clear out the memory after encoding the last bit (all-zero): a tail of zero bits is appended to the data bits.

Effective code rate:
M is the number of data bits and k = 1 is assumed (data bits + tail → encoder → codeword):

R_eff = M / (n (M + K - 1)) < Rc
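As a quick check with the numbers used later in these notes (M = 3, n = 2, K = 3): R_eff = 3 / (2 x (3 + 3 - 1)) = 3/10 = 0.3, noticeably below Rc = 1/2.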
Encoder representation
Vector representation:
g1 = (111)
g2 = (101)

(g1 gives the taps for the first coded bit u1, g2 the taps for the second coded bit u2.)
Encoder representation
Impulse response representation:
Response of the encoder to a single 1:

Register contents   Branch word u1 u2
1 0 0               1 1
0 1 0               1 0
0 0 1               1 1

Input sequence:  1 0 0
Output sequence: 11 10 11

The output for m = (101) is the modulo-2 sum of shifted impulse responses:

Input m   Output
1         11 10 11
0            00 00 00
1               11 10 11
Modulo-2 sum: 11 10 00 10 11
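A small sketch of this superposition view (the impulse response 11 10 11 and the rate n = 2 are taken from the tables above; the function name is just illustrative):

```python
# Impulse-response view: the codeword is the XOR (modulo-2 sum) of the
# encoder's impulse response shifted to every position where the message is 1.
impulse = [1, 1, 1, 0, 1, 1]    # branch words 11 10 11 as a flat bit list
n = 2                           # coded bits per input bit

def encode_by_superposition(msg):
    out = [0] * (n * len(msg) + len(impulse) - n)
    for i, bit in enumerate(msg):
        if bit:
            for j, b in enumerate(impulse):
                out[i * n + j] ^= b    # modulo-2 superposition
    return out

print(encode_by_superposition([1, 0, 1]))   # -> [1,1, 1,0, 0,0, 1,0, 1,1]
```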
Encoder representation
Polynomial representation:

The generator polynomials are

g1(X) = g0^(1) + g1^(1) X + g2^(1) X^2 = 1 + X + X^2
g2(X) = g0^(2) + g1^(2) X + g2^(2) X^2 = 1 + X^2

The output sequence is found as follows:

U(X) = m(X) g1(X) interlaced with m(X) g2(X)
Encoder representation
In more detail:
m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^2 + X^2 + X^3 + X^4 = 1 + X + X^3 + X^4
m(X) g2(X) = (1 + X^2)(1 + X^2) = 1 + X^2 + X^2 + X^4 = 1 + X^4

m(X) g1(X) = 1 + X + 0.X^2 + X^3 + X^4
m(X) g2(X) = 1 + 0.X + 0.X^2 + 0.X^3 + X^4

U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4
U = 11 10 00 10 11
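The same multiplication can be sketched in a few lines of Python (coefficient lists run from X^0 upward; gf2_polymul is a hypothetical helper, not a library routine):

```python
def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (X^0 first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 1]            # m(X)  = 1 + X^2
g1 = [1, 1, 1]            # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]            # g2(X) = 1 + X^2

u1 = gf2_polymul(m, g1)   # -> [1, 1, 0, 1, 1]
u2 = gf2_polymul(m, g2)   # -> [1, 0, 0, 0, 1]

# Interlace the coefficient pairs to form the branch words of U
print(["%d%d" % pair for pair in zip(u1, u2)])   # -> ['11', '10', '00', '10', '11']
```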
State diagram
A finite-state machine only encounters a finite
number of states.
State of a machine: the smallest amount of
information that, together with a current input
to the machine, can predict the output of the
machine.
In a convolutional encoder, the state is represented by the content of the memory.
Hence, there are 2^(K-1) states, where K is the constraint length (for K = 3 there are 4 states).
State diagram
A state diagram is a way to represent the
encoder.

A state diagram contains all the states and all
possible transitions between them.

Only two transitions originate from each state

Only two transitions end up in each state
A Rate 1/2 Convolutional encoder

[Encoder diagram repeated: input data bits m, coded bits u1 and u2 forming the branch word (u1, u2).]
State diagram
Current State   Input   Next State   Output (Branch word)
S0 = 00         0       S0           00
S0 = 00         1       S2           11
S1 = 01         0       S0           11
S1 = 01         1       S2           00
S2 = 10         0       S1           10
S2 = 10         1       S3           01
S3 = 11         0       S1           01
S3 = 11         1       S3           10

[State diagram: nodes S0 = 00, S1 = 01, S2 = 10, S3 = 11, with each transition labelled input/output (branch word): 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
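The transition table can be generated mechanically from the generators; the sketch below assumes g1 = (111), g2 = (101) and represents a state by its 2-bit register content (most recent bit first):

```python
# Enumerate the state-transition table of the rate-1/2, K = 3 encoder.
G = [(1, 1, 1), (1, 0, 1)]

def step(state, bit):
    """Return (next_state, branch_word) for a 2-bit state and an input bit."""
    reg = (bit,) + state                                  # shift the input in
    out = "".join(str(sum(g[i] * reg[i] for i in range(3)) % 2) for g in G)
    return reg[:2], out

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for b in (0, 1):
        nxt, out = step(s, b)
        print(f"state {s}  input {b}  ->  next {nxt}  output {out}")
```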
Trellis Representation
The trellis diagram is an extension of the state diagram that shows the passage of time.
[One trellis section from time t_i to t_(i+1): the four states 00 = S0, 01 = S1, 10 = S2, 11 = S3 are drawn as nodes at each time, and every branch is labelled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Trellis

[Trellis for the encoding of m = (101) plus two tail bits, spanning t1 ... t6, with every branch labelled input/output as above.]

Input bits:  1 0 1 0 0   (the last two are tail bits)
Output bits: 11 10 00 10 11
Trellis

[The same trellis, drawn with only the branches that are reachable from the all-zero start state and that merge back into it by t6.]

Input bits:  1 0 1 0 0   (the last two are tail bits)
Output bits: 11 10 00 10 11
Block diagram of the DCS

Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Rate 1/n Conv. decoder → Information sink

Input sequence:    m = (m1, m2, ..., mi, ...)
Codeword sequence: U = G(m) = (U1, U2, U3, ..., Ui, ...)
Received sequence: Z = (Z1, Z2, Z3, ..., Zi, ...)
Decoded sequence:  m̂ = (m̂1, m̂2, ..., m̂i, ...)
Soft and hard decision
decoding
In hard decision:
The demodulator makes a firm or hard decision whether a 1 or a 0 was transmitted and provides no other information for the decoder, such as the reliability of the decision.

In soft decision:
The demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision.
Soft and hard decision
decoding
ML hard-decision decoding rule:
Choose the path in the trellis with the minimum Hamming distance from the received sequence.

ML soft-decision decoding rule:
Choose the path in the trellis with the minimum Euclidean distance from the received sequence.

(ML: Maximum Likelihood)
Decoding of Convolutional Codes
Maximum likelihood decoding of convolutional codes:
Finding the code branch in the trellis that was most likely transmitted.
Based on calculating the Hamming distance for each branch forming the encoded word.
Assume that the information symbols applied to the AWGN channel are equally likely and independent.

Non-erroneous (transmitted) code: x = x0 x1 x2 ... xj ...
Received code: y = y0 y1 ... yj ...

[Decoder (= distance calculation & comparison): takes y and outputs the estimated code sequence x.]
Decoding of Convolutional Codes
The probability of receiving y given the code sequence x is then

p(y, x) = ∏_j p(y_j | x_j)

The most likely path through the trellis will maximize this metric.
Since probabilities are often small numbers, ln( ) is often taken of both sides, yielding the path metric

ln p(y, x) = Σ_j ln p(y_j | x_j)
Example of Exhaustive Maximum Likelihood Detection

Assume a three-bit message is transmitted, encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message. Thus 5 bits are encoded, resulting in 10 code bits. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is received (including some errors). What comes out of the decoder, i.e. what was most likely the transmitted code and what were the corresponding message bits?
Example of Exhaustive Maximum Likelihood Detection

[Trellis with states a, b, c, d; each path is labelled with the decoder output that would result if that path were selected.]

Received sequence:  10 01 10 11 00
All-zero sequence:  00 00 00 00 00
Hamming distance: 5
Path metric: 5 (-2.3) + 5 (-0.11) = -12.05

ln[p(0|0)] = ln[p(1|1)] = ln(0.9) = -0.11
ln[p(1|0)] = ln[p(0|1)] = ln(0.1) = -2.30
Example of Exhaustive Maximum Likelihood Detection

[Trellis with the accumulated path metrics of the candidate paths; for the selected path:]

correct bits: 1 + 1 + 2 + 2 + 2 = 8;  8 (-0.11) = -0.88
false bits:   1 + 1 + 0 + 0 + 0 = 2;  2 (-2.30) = -4.60
total path metric: -5.48
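A tiny sketch of this path-metric arithmetic (exact logarithms are used here, so the values differ slightly from the slide's rounded -0.11 and -2.30):

```python
from math import log

def path_metric(agree, differ, p=0.1):
    """Log-likelihood path metric from the numbers of agreeing and differing bits."""
    return agree * log(1 - p) + differ * log(p)

print(path_metric(5, 5))   # all-zero path:  about -12.04 (slide: -12.05)
print(path_metric(8, 2))   # selected path:  about -5.45  (slide: -5.48)
```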
Example of Exhaustive Maximum Likelihood Detection
The Viterbi Algorithm
The Viterbi algorithm performs ML decoding:

It finds a path through the trellis with the largest metric.
It processes the demodulator outputs in an iterative manner.
At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the largest metric, called the survivor, together with its metric.
It proceeds through the trellis by eliminating the least likely paths.
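As a sketch of these steps, here is a minimal hard-decision Viterbi decoder for the rate-1/2, K = 3 code of these notes (generators assumed g1 = (111), g2 = (101); the branch metric is the Hamming distance, so the smallest cumulative metric survives):

```python
# Hard-decision Viterbi decoding over the 4-state trellis of the K = 3 code.
G = [(1, 1, 1), (1, 0, 1)]

def branch(state, bit):
    """Next state and output branch word for a 2-bit state (newest bit first)."""
    reg = (bit,) + state
    out = tuple(sum(g[i] * reg[i] for i in range(3)) % 2 for g in G)
    return reg[:2], out

def viterbi_decode(z, n_msg_bits):
    """z is a list of received 2-bit branch words; returns (message bits, metric)."""
    paths = {(0, 0): (0, [])}                  # start in the all-zero state
    for t, r in enumerate(z):
        tail = t >= n_msg_bits                 # tail stages: input forced to 0
        survivors = {}
        for state, (metric, bits) in paths.items():
            for b in ((0,) if tail else (0, 1)):
                nxt, out = branch(state, b)
                m = metric + sum(o != ri for o, ri in zip(out, r))
                if nxt not in survivors or m < survivors[nxt][0]:
                    survivors[nxt] = (m, bits + [b])   # keep the best path per state
        paths = survivors
    metric, bits = paths[(0, 0)]               # the path terminating in the all-zero state
    return bits[:n_msg_bits], metric

# Received sequence of the first example below: Z = (11 00 00 10 11)
Z = [(1, 1), (0, 0), (0, 0), (1, 0), (1, 1)]
print(viterbi_decode(Z, 3))                    # -> ([1, 0, 1], 1)
```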
Example of Hard decision Viterbi decoding

m = (101), U = (11 10 00 10 11), received Z = (11 00 00 10 11)   (one channel error)

[Trellis as before. Label every branch with its branch metric, the Hamming distance between the received branch word and the branch output; then move through the trellis one stage at a time, keeping at each state only the survivor path with the smallest cumulative (path) metric and marking the eliminated paths with X.]

Surviving path metrics after each stage:
i = 2:  S0 = 2, S2 = 0
i = 3:  S0 = 2, S1 = 1, S2 = 4, S3 = 1
i = 4:  S0 = 2, S1 = 2, S2 = 1, S3 = 2
i = 5:  S0 = 3, S1 = 1   (tail stages: only the input-0 branches remain)
i = 6:  S0 = 1

The survivor that terminates in the all-zero state has metric 1; tracing it back gives m̂ = (101) = m, so the single channel error has been corrected.
Example of Hard decision Viterbi decoding

m = (101), U = (11 10 00 10 11), received Z = (11 10 11 10 01)   (three bit errors)

[Same procedure: label every branch with its Hamming-distance branch metric and keep only the minimum-metric survivor into each state.]

Surviving path metrics after each stage:
i = 2:  S0 = 2, S2 = 0
i = 3:  S0 = 3, S1 = 0, S2 = 3, S3 = 2
i = 4:  S0 = 0, S1 = 3, S2 = 2, S3 = 3
i = 5:  S0 = 1, S1 = 2   (tail stages)
i = 6:  S0 = 2

The minimum-metric survivor now corresponds to m̂ = (100) ≠ m = (101): the channel errors exceed the error-correcting capability of the code, and a decoding error results.
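Running the Viterbi sketch given earlier on this received sequence (under the same generator assumption) likewise returns m̂ = (100) with a cumulative Hamming metric of 2, compared with a distance of 3 between Z and the transmitted U.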
Free distance of Convolutional
Codes
Since the code is linear, the minimum distance of the
code is the minimum distance between each of the
codewords and the all-zero codeword.
This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
It is called the minimum free distance, or simply the free distance of the code, denoted by d_free or d_f.
Free distance
[Trellis with every branch labelled by its Hamming weight. The all-zero path and the minimum-weight path that diverges from it and remerges with it are highlighted.]

d_f = 5
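For this code the minimum-weight detour is S0 → S2 → S1 → S0, whose branch outputs are 11, 10, 11 with Hamming weights 2 + 1 + 2 = 5, which is where d_f = 5 comes from.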
Performance bounds
The error correction capability of convolutional codes is given by

t = ⌊(d_f - 1) / 2⌋

The coding gain is upper bounded by

coding gain ≤ 10 log10(Rc d_f)   [dB]
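For the code in these notes, d_f = 5 and Rc = 1/2, so t = ⌊(5 - 1)/2⌋ = 2 and the coding gain is bounded by 10 log10(0.5 x 5) ≈ 4 dB.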
How to end up decoding?

In the previous example it was assumed that the register was finally filled with zeros, so that the minimum-distance path could be found.
In practice, with long codewords, zeroing requires feeding a long sequence of zeros after the message bits: this wastes channel capacity and introduces delay.
To avoid this, path memory truncation is applied:
It has been experimentally verified that a negligible increase in error rate occurs when truncating at about 5L.
Note that this also introduces a decoding delay of 5L.