
History of channel codes: Shannon's channel coding theorem

Kinds of channel codes: block codes, convolutional codes, concatenated codes, turbo codes (a kind of concatenated code that uses an interleaver)

History of channel codes:

Types of codes

Block codes

Figure 2: A channel encoder that generates an (n,k) block code.

Code rate: $r = k/n$. Channel data rate: $R_c = (n/k) R_s$, where $R_s$ denotes the bit rate of the information source.

Convolutional codes

Figure 3: A convolutional encoder with memory M that encodes the incoming bits serially.

Modulo-2 arithmetic: taking the remainder when dividing by 2, so that $0+0=0$, $0+1=1$, $1+0=1$, and $1+1=0$ (i.e., addition is the XOR operation).
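As a concrete illustration of Figure 3, the sketch below implements a small feedforward convolutional encoder in Python. The octal generator polynomials 13 and 15 are borrowed from the figure captions later in the text; the memory M = 3 and the input sequence are assumptions chosen only for illustration.

```python
# A minimal sketch of a rate-1/2 feedforward convolutional encoder,
# assuming octal generator polynomials 13 and 15 (constraint length 4,
# i.e. memory M = 3). Two parity bits are produced per input bit.

def conv_encode(bits, g1=0o13, g2=0o15, memory=3):
    state = 0                                # shift-register contents (M bits)
    out = []
    for b in bits:
        reg = (b << memory) | state          # current input followed by the memory
        p1 = bin(reg & g1).count("1") % 2    # modulo-2 inner product with g1
        p2 = bin(reg & g2).count("1") % 2    # modulo-2 inner product with g2
        out.extend([p1, p2])
        state = reg >> 1                     # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1, 0, 0, 0]))
```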

Block codes

A binary block code C of block length n is a subset of the set of all binary n-tuples $\{0,1\}^n$. An n-tuple $\mathbf{c} = (c_1, c_2, \ldots, c_n)$ belonging to the code is called a codeword or code vector of the code.

If the subset is a vector space over {0,1}, the binary block code is then a linear block code.

Minimum Hamming distance

The Hamming weight of an n-tuple $\mathbf{x}$, denoted $w(\mathbf{x})$, is the number of nonzero components of the n-tuple.

The Hamming distance between any two n-tuples $\mathbf{x}$ and $\mathbf{y}$, denoted $d(\mathbf{x}, \mathbf{y})$, is the number of positions in which their components differ. It is clear that $d(\mathbf{x}, \mathbf{y}) = w(\mathbf{x} - \mathbf{y})$, where $\mathbf{x} - \mathbf{y}$ denotes component-by-component subtraction (modulo 2).

The minimum Hamming distance, $d_{\min}$, of the block code C is the smallest Hamming distance between pairs of distinct codewords.

Example
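As a small worked illustration of these definitions, here is a Python sketch that computes Hamming weight, Hamming distance, and the minimum distance of a toy code; the code used (the length-3 repetition code) is an assumption chosen only for illustration.

```python
# Hamming weight, Hamming distance and the minimum distance of a block code.
from itertools import combinations

def hamming_weight(x):
    """Number of nonzero components of an n-tuple."""
    return sum(1 for xi in x if xi != 0)

def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def minimum_distance(code):
    """Smallest Hamming distance between pairs of distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Toy code: the (3,1) repetition code {000, 111}
code = [(0, 0, 0), (1, 1, 1)]
print(hamming_weight((1, 0, 1)))                 # 2
print(hamming_distance((1, 0, 1), (0, 0, 1)))    # 1
print(minimum_distance(code))                    # 3
```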

Correction and detection ability of a block code

When $\mathbf{x}$ is the actual codeword and $\mathbf{y}$ its possibly corrupted received version, the error pattern is the n-tuple defined by $\mathbf{e} = \mathbf{y} + \mathbf{x}$ (component-by-component addition modulo 2, which is the same as subtraction).

The number of errors is just $w(\mathbf{e})$, the Hamming weight of the error pattern.

Error detection

We say that a code can detect all patterns of t or fewer errors if the decoder chosen never incorrectly decodes whenever the number of errors is less than or equal to t.

Error correction

We say that a code can correct all patterns of t or fewer errors if the decoder chosen correctly decodes whenever the number of errors is less than or equal to t. The implicit decoding rule: decode to the nearest codeword in terms of Hamming distance.
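A minimal sketch of the implicit decoding rule just stated (brute-force nearest-codeword decoding in Hamming distance); the toy code and received word are assumptions chosen only for illustration.

```python
# Decode the received word to the nearest codeword in Hamming distance.

def hamming_distance(x, y):
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def nearest_codeword_decode(received, code):
    """Return the codeword closest to the received n-tuple."""
    return min(code, key=lambda c: hamming_distance(received, c))

code = [(0, 0, 0), (1, 1, 1)]                      # (3,1) repetition code, d_min = 3
print(nearest_codeword_decode((1, 0, 1), code))    # -> (1, 1, 1): one error corrected
```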

Linear codes

A binary block code C is linear if, for $\mathbf{x}$ and $\mathbf{y}$ in C, $\mathbf{x} + \mathbf{y}$ is also in C.

The minimum Hamming distance of a linear block code is equal to the smallest weight of the nonzero codewords in the code.

Example: {000, 011, 101, 110} is a linear code because, e.g., 011 + 101 = 110, and so on.

If we take k n-tuples $\mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_k$ that are linearly independent and form a matrix

$G = \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{g}_2 \\ \vdots \\ \mathbf{g}_k \end{bmatrix}$

an (n,k) block code may then be conveniently represented as

$\mathbf{c} = \mathbf{m} G$

where $\mathbf{c}$ is the codeword corresponding to the message block $\mathbf{m} = (m_1, m_2, \ldots, m_k)$.

G is called the generator matrix of the code.

A code is systematic if the generator matrix G is of the form

$G = [\, I_k \;|\; P \,]$

i.e., where $I_k$ is a $k \times k$ identity matrix and P is a $k \times (n-k)$ matrix. The generator matrix in this form is also said to be systematic.

In a systematic code, the first k bits of the codeword are the same as the corresponding message bits.

For the generator matrix G of a linear code, there exists an $(n-k) \times n$ matrix H, whose $n-k$ rows are linearly independent, such that

$G H^T = \mathbf{0}$
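To make the encoding rule $\mathbf{c} = \mathbf{m} G$ concrete, here is a short Python sketch of encoding over GF(2) with a systematic generator matrix; the particular (6,3) matrix is an assumption chosen only for illustration.

```python
# Encoding with a generator matrix over GF(2): codeword = message . G (mod 2).

G = [
    [1, 0, 0, 1, 1, 0],   # [ I_k | P ] : the first k columns form the identity
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def encode(message, G):
    """Multiply the k-bit message by G modulo 2 to get the n-bit codeword."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(message)) % 2 for j in range(n)]

m = [1, 0, 1]
print(encode(m, G))   # first 3 bits equal the message (systematic code)
```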

The matrix H is called the parity check matrix of the code. For any codeword $\mathbf{c}$ in the code,

$\mathbf{c} H^T = \mathbf{0}$

If the code is systematic, $H = [\, P^T \;|\; I_{n-k} \,]$, where $I_{n-k}$ is an $(n-k) \times (n-k)$ identity matrix.

Clearly, the $n-k$ rows of H are linearly independent and $G H^T = \mathbf{0}$.

Syndrome decoding

Given a received vector $\mathbf{y} = \mathbf{c} + \mathbf{e}$, the receiver has the task of decoding the codeword from $\mathbf{y}$.

Syndrome: $\mathbf{s} = \mathbf{y} H^T$

The syndrome depends only on the error pattern, and not on the transmitted codeword, since $\mathbf{s} = (\mathbf{c} + \mathbf{e}) H^T = \mathbf{c} H^T + \mathbf{e} H^T = \mathbf{e} H^T$. All error patterns that differ by a codeword have the same syndrome.

Standard array for an (n,k) linear code:

1. The $2^k$ codewords are placed in a row, with the all-zero codeword $\mathbf{c}_1 = \mathbf{0}$ as the left-most one.
2. An error pattern $\mathbf{e}_2$ is picked and placed under $\mathbf{c}_1$, and a second row is formed by adding $\mathbf{e}_2$ to each of the remaining codewords in the first row. ($\mathbf{e}_2$ has not appeared previously and has the least Hamming weight.)
3. Step 2 is repeated until all the possible error patterns have been accounted for.

Each row in the standard array is called a coset. The left-most element of a row is called the coset leader of the coset. Note that each coset has a unique syndrome.

Syndrome decoding for an (n,k) linear block code:

1. For the received vector $\mathbf{y}$, compute the syndrome $\mathbf{s} = \mathbf{y} H^T$.
2. Within the coset characterized by the syndrome $\mathbf{s}$, identify the coset leader; call it $\mathbf{e}_0$.
3. Compute the codeword $\mathbf{c} = \mathbf{y} + \mathbf{e}_0$ as the decoded version of the received vector $\mathbf{y}$.

Example

Consider the (6,3) linear code generated by the following matrix

We then have the parity check matrix

Standard array for the (6,3) linear code:
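Since the generator matrix and the standard array themselves appear only as figures, the following Python sketch of syndrome decoding uses an assumed systematic (6,3) generator (the same one as in the earlier encoding sketch), chosen only for illustration.

```python
# Syndrome decoding for an assumed (6,3) linear code.
from itertools import product

P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
G = [[1 if j == i else 0 for j in range(3)] + P[i] for i in range(3)]   # [ I_3 | P ]
H = [[P[i][j] for i in range(3)] + [1 if r == j else 0 for r in range(3)]
     for j in range(3)]                                                  # [ P^T | I_3 ]

def encode(m):
    return tuple(sum(mi * G[i][j] for i, mi in enumerate(m)) % 2 for j in range(6))

def syndrome(y):
    """s = y H^T (modulo 2): one bit per row of H."""
    return tuple(sum(yi * hi for yi, hi in zip(y, row)) % 2 for row in H)

# Coset leaders: for each syndrome, keep the lowest-weight error pattern,
# mirroring how the standard array is built.
leaders = {}
for e in sorted(product([0, 1], repeat=6), key=sum):
    leaders.setdefault(syndrome(e), e)

def decode(y):
    """Add (modulo 2) the coset leader of y's syndrome to recover the codeword."""
    e0 = leaders[syndrome(y)]
    return tuple((yi + ei) % 2 for yi, ei in zip(y, e0))

c = encode((1, 0, 1))        # -> (1, 0, 1, 0, 1, 1)
y = (1, 0, 1, 0, 0, 1)       # c with one bit flipped
print(decode(y) == c)        # -> True
```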

Example (cont.)

If the code is used for error correction on a BSC with transition probability p, the decoder decodes correctly exactly when the error pattern is a coset leader, so the probability that the decoder decodes correctly is

$P(\text{correct}) = \sum_{\text{coset leaders } \mathbf{e}} p^{w(\mathbf{e})} (1-p)^{n - w(\mathbf{e})}$

The probability that the decoder commits an erroneous decoding is then $P(\text{error}) = 1 - P(\text{correct})$. To minimize $P(\text{error})$, the error patterns that are most likely to occur for a given channel should be chosen as the coset leaders.

Example

The generator matrix for the repetition code of length 3 ({000, 111}) is $G = [\,1 \; 1 \; 1\,]$. Therefore, the parity check matrix of the code is

$H = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$

Hamming codes

For any positive integer $m \ge 3$, there exists a Hamming code (a $(2^m - 1,\; 2^m - 1 - m)$ linear code) with the following parameters: code length $n = 2^m - 1$, number of message bits $k = 2^m - 1 - m$, number of parity bits $n - k = m$, and minimum distance $d_{\min} = 3$.

The parity check matrix H of this code consists of all the nonzero m-tuples as its columns. In systematic form, the columns of H are arranged in the following form:

$H = [\, Q \;|\; I_m \,]$

where $I_m$ is an $m \times m$ identity matrix and the submatrix Q consists of $2^m - 1 - m$ columns, which are the m-tuples of weight 2 or more.
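The construction just described can be sketched directly in Python: enumerate all nonzero m-tuples and place the weight-1 columns last to obtain the systematic form $H = [\,Q \,|\, I_m\,]$. The function name is an assumption for illustration.

```python
# Build a Hamming parity-check matrix in systematic form H = [ Q | I_m ].

def hamming_parity_check(m):
    # All nonzero m-tuples, most significant bit first.
    cols = [[(i >> b) & 1 for b in range(m - 1, -1, -1)] for i in range(1, 2 ** m)]
    q = [c for c in cols if sum(c) >= 2]                    # weight-2-or-more columns
    identity = [[1 if r == j else 0 for r in range(m)] for j in range(m)]
    ordered = q + identity                                   # columns of [ Q | I_m ]
    # Transpose the column list into an m x (2^m - 1) matrix.
    return [[col[r] for col in ordered] for r in range(m)]

for row in hamming_parity_check(3):                          # the (7,4) Hamming code
    print(row)
```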

Concatenated codes

Figure 1: Original concatenated coding system.

Concatenated codes are error-correcting codes that are constructed from two or more simpler codes in order to achieve good performance with reasonable complexity. Originally introduced by Forney in 1965 to address a theoretical issue, they became widely used in space communications in the 1970s. Turbo codes and other modern capacity-approaching codes may be regarded as elaborations of this approach.

Capacity-approaching codes
The field of channel coding was revolutionized by the invention of turbo codes by Berrou et al. in 1993 (Berrou et al. 1993). Turbo codes use multiple carefully chosen codes, a pseudo-random interleaver, and iterative decoding to approach the Shannon limit to within 1 dB.

Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit [SHA]. These codes were invented at ENST Bretagne (now TELECOM Bretagne), France, at the beginning of the 1990s [BER]. The encoder is formed by the parallel concatenation of two convolutional codes separated by an interleaver or permuter. An iterative process through the two corresponding decoders is used to decode the data received from the channel. Each elementary decoder passes to the other soft (probabilistic) information about each bit of the sequence to decode. This soft information, called extrinsic information, is updated at each iteration.

Figure 1: Concatenated encoder and decoder



Precursor
In the 1960s, Forney [FOR] introduced the concept of concatenation to obtain coding and decoding schemes with high error-correction capacity. Typically, the inner encoder is a convolutional code and the inner decoder, using the Viterbi algorithm, is able to process soft information, that is, probabilities or logarithms of probabilities in practice. The outer encoder is a block encoder, typically a Reed-Solomon encoder, and its associated decoder works with the binary decisions supplied by the inner decoder, as shown in Figure 1. As the inner decoder may deliver errors occurring in packets (bursts), the role of the deinterleaver is to spread these errors so as to make the outer decoding more efficient.

Though the minimum Hamming distance is very large, the performance of such concatenated schemes is not optimal, for two reasons. First, some amount of information is lost due to the inability of the inner decoder to provide the outer decoder with soft information. Second, while the outer decoder takes benefit from the work of the inner one, the converse is not true. The decoder operation is clearly dissymmetric. To allow the inner decoder to produce soft decisions instead of binary decisions, modified versions of the Viterbi algorithm (SOVA: Soft-Output Viterbi Algorithm) were proposed by Battail [BAT] and Hagenauer & Hoeher [HAG]. But soft inputs are not easy to handle in a Reed-Solomon decoder.

The genesis of turbo codes


The invention of turbo codes finds its origin in the will to compensate for the dissymmetry of the concatenated decoder of Figure 1. To do this, the concept of feedback, a well-known technique in electronics, is implemented between the two component decoders (Figure 2).

Figure 2: Decoding the concatenated code with feedback

The use of feedback requires the existence of Soft-In/Soft-Out (SISO) decoding algorithms for both component codes. As the SOVA algorithm was already available at the time of the invention, the adoption of convolutional codes appeared natural for both codes. For reasons of bandwidth efficiency, serial concatenation is replaced with parallel concatenation. Actually, parallel concatenation combining two codes with rates $R_1$ and $R_2$ gives a global rate equal to:

$R_p = \dfrac{R_1 R_2}{R_1 + R_2 - R_1 R_2}$

This rate is higher than that of a serially concatenated code, which is:

$R_s = R_1 R_2$

for the same values of $R_1$ and $R_2$, and the lower these rates, the larger the difference. Thus, with the same performance of component codes, parallel concatenation offers a better global rate, but this advantage is lost when the rates come close to unity. Furthermore, in order to ensure a sufficiently large $d_{\min}$ for the concatenated code, classical non-systematic non-recursive convolutional codes (Figure 3.a) have to be replaced with recursive systematic convolutional (RSC) codes (Figure 3.b).
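As a quick worked check of the two rate formulas above, take two rate-1/2 component codes:

$R_p = \dfrac{\tfrac{1}{2}\cdot\tfrac{1}{2}}{\tfrac{1}{2}+\tfrac{1}{2}-\tfrac{1}{2}\cdot\tfrac{1}{2}} = \dfrac{1/4}{3/4} = \dfrac{1}{3}, \qquad R_s = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}$

For $R_1 = R_2 = 0.9$, the two rates are about 0.82 and 0.81: the advantage of parallel concatenation fades as the component rates approach unity.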

Figure 3: (A) Non-systematic non-recursive convolutional code with polynomials 13, 15. (B) Recursive systematic convolutional (RSC) code with polynomials 13 (recursivity), 15 (parity).

Figure 4: A turbo code with component codes 13, 15

What distinguishes the two codes is the minimum input weight $w_{\min}$. The input weight $w$ is the number of "1"s in an input sequence. Suppose that the encoder of Figure 3.a is initialized in state 0 and then fed with an all-zero sequence, except in one place (that is, $w = 1$). The encoder will retrieve state 0 as soon as the fourth 0 following the 1 appears at the input. We have then $w_{\min} = 1$. In the same conditions, the encoder of Figure 3.b needs a second 1 to retrieve state 0, so $w_{\min} = 2$. Without this second 1, this encoder acts as a pseudo-random generator with respect to its parity output, and this property is very favourable regarding $d_{\min}$ when parallel concatenation is implemented.

A typical turbo code is depicted in Figure 4. The data $d$ are encoded both in the natural order and in a permuted order by two RSC codes $C_1$ and $C_2$ that issue parity bits $y_1$ and $y_2$. In order to encode finite-length blocks of data, RSC encoding is terminated by tail bits or has tail-biting termination. The permutation has to be devised carefully because it has a strong impact on $d_{\min}$. The natural coding rate of a turbo code is $R = 1/3$ (three output bits for one input bit). To deal with higher coding rates, the parity bits are punctured. For instance, transmitting $y_1$ and $y_2$ alternately leads to $R = 1/2$.
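The puncturing idea can be sketched in a few lines of Python; the bit values below are placeholders chosen only for illustration, and the pattern (keep $y_1$ on even positions, $y_2$ on odd positions) is one simple way to realize the alternate transmission described above.

```python
# Puncturing a rate-1/3 turbo-encoded stream down to rate 1/2 by transmitting
# y1 and y2 alternately (the systematic bits d are always sent).

d  = [1, 0, 1, 1, 0, 0]      # systematic bits
y1 = [1, 1, 0, 1, 0, 1]      # parity bits from encoder C1 (natural order)
y2 = [0, 1, 1, 0, 0, 1]      # parity bits from encoder C2 (permuted order)

# Rate 1/3: send d, y1, y2 for every input bit (3 output bits per input bit).
full = [b for triple in zip(d, y1, y2) for b in triple]

# Rate 1/2: keep y1 on even positions and y2 on odd positions (2 bits per input).
punctured = [b for i, (di, p1, p2) in enumerate(zip(d, y1, y2))
             for b in (di, p1 if i % 2 == 0 else p2)]

print(len(full) / len(d), len(punctured) / len(d))   # 3.0 and 2.0 output bits per input bit
```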

The original turbo code [BER] uses a parallel concatenation of convolutional codes. But other schemes like serial concatenation of convolutional codes [BEN] or algebraic turbo codes [PYN] have since been studied. More recently, non-binary turbo codes have also been proposed [DOU].

Turbo-decoding
Decoding the code of Figure 4 by a global approach is not possible, because of the astronomical number of states to consider. A joint probabilistic process by the decoders of $C_1$ and $C_2$ has to be elaborated. Because of latency constraints, this joint process is worked out in an iterative manner in a digital circuit.

Turbo decoding relies on the following fundamental criterion: when several probabilistic machines work together on the estimation of a common set of symbols, all the machines have to give the same decision, with the same probability, about each symbol, as a single (global) decoder would.

To make the composite decoder satisfy this criterion, the structure of Figure 5 is adopted. The double loop enables both component decoders to benefit from the whole redundancy. The term turbo was given to this feedback construction with reference to the principle of the turbo-charged engine.

Figure 5: A turbo decoder

The components are SISO decoders, permutation ($\Pi$) and inverse permutation ($\Pi^{-1}$) memories. The node variables of the decoder are Logarithms of Likelihood Ratios (LLR). An LLR related to a particular binary datum d is defined as:

$\mathrm{LLR}(d) = \ln \dfrac{\Pr(d = 1)}{\Pr(d = 0)}$

The role of a SISO decoder is to process an input LLR and, thanks to local redundancy (i.e. $y_1$ for DEC1, $y_2$ for DEC2), to try to improve it. The output LLR of a SISO decoder may be simply written as

$\mathrm{LLR}_{\mathrm{out}}(d) = \mathrm{LLR}_{\mathrm{in}}(d) + z(d)$

where $z(d)$ is the extrinsic information about $d$ provided by the decoder. If this works properly, $z(d)$ is most of the time negative if $d = 0$, and positive if $d = 1$. The composite decoder is constructed in such a way that only extrinsic terms are passed by one component decoder to the other. The input LLR to a particular decoder is formed by the sum of two terms: the information symbols stemming from the channel and the extrinsic term provided by the other decoder, which serves as a priori information. The information symbols are common inputs to both decoders, which is why the extrinsic information must not contain them. In addition, the outgoing extrinsic information does not include the incoming extrinsic information, in order to cut down correlation effects in the loop.

There are two families of SISO algorithms: those based on the SOVA [BAT][HAG], and those based on the MAP (also called BCJR or APP) algorithm [BAH] or its simplified versions.

Turbo decoding is not optimal. This is because an iterative process has obviously to begin, during the first half-iteration, with only a part of the redundant information available (either $y_1$ or $y_2$). Fortunately, the loss due to this sub-optimality is small.

Applications of turbo codes

Figure 6: Applications of turbo codes

Table 1 summarizes standardized or proprietary applications of turbo codes known to date. Most of these applications are detailed and commented on in [GRA].
