By: Dan Dechene and Kevin Peets
Supervised by: Dr. Julian Cheng
TABLE OF CONTENTS
1.0 Introduction
  1.1 Digital Communication
2.0 Channel Coding
  2.1 Shannon Theorem for Channel Coding
  2.2 Hamming Code
  2.3 Tanner Graph Representation
3.0 LDPC
  3.1 Introduction to LDPC
    3.1.1 Parity Check Matrix
      3.1.1.1 Classifying the Matrix
      3.1.1.2 Methods of Generation
    3.1.2 Minimum Distance of LDPC Codes
    3.1.3 Cycle Length of LDPC Codes
    3.1.4 Linear Independence
  3.2 LDPC System Overview
  3.3 Generation for Simulation
  3.4 Encoding
    3.4.1 Linear Independence Problem
  3.5 Decoding
    3.5.1 Hard Decision vs. Soft Decision Decoding
    3.5.2 SPA Algorithm
      3.5.2.1 Computing Messages
      3.5.2.2 Initialization
      3.5.2.3 Soft-Decision
      3.5.2.4 Simulation Computation
4.0 Results
5.0 Problems Encountered
6.0 Future Work
  6.1 Increase Efficiency of Simulation Algorithms
  6.2 Lower Memory Requirements of Parity-Check Matrix
  6.3 VLSI Implementation
7.0 Conclusion
References
Appendix A: Code
Appendix B: Simulink Model
TABLE OF FIGURES
Figure 1: Communication System Block Diagram
Figure 2: Graphical Representation of Hamming (7,4) Code
Figure 3: All Possible Codewords for Hamming (7,4) Code
Figure 4: Bipartite Tanner Graph
Figure 5: Length 4 Cycle
Figure 6: Length 6 Cycle
Figure 7: LDPC System Overview
Figure 8: Flowchart to Create Parity-Check Matrix, H
Figure 9: Likelihood Functions for BPSK Modulation over an AWGN Channel
Figure 10: Representation of Nodes
Figure 11: Representation of Nodes
Figure 12: Flowchart for Decoding
Figure 13: MacKay's Results
Figure 14: Simulated Results
Figure 15: Performance of Simulations vs. Hamming with Shannon's Limit
1.0 INTRODUCTION
In the early nineties, turbo codes and their new iterative decoding technique were introduced. Employing this coding scheme and its decoding algorithm, it was possible to achieve performance within a few tenths of a dB of the Shannon limit at a bit error rate of $10^{-5}$ [1]. This discovery not only had a major impact on the telecommunications industry, it also sparked major research into capacity-approaching coding schemes using iterative decoding, now that such performance was known to be achievable.

Robert Gallager had originally proposed Low-Density Parity-Check (LDPC) codes as a class of channel codes in 1962 [2], but implementing them required a large amount of computing power due to the high complexity and memory requirements of the encoding/decoding operations, and so they were largely forgotten. A few years after turbo codes made their appearance, David MacKay rediscovered LDPC codes [3] and showed that they, too, were capable of approaching the Shannon limit using iterative decoding techniques.

An LDPC code is a linear block code characterised by a very sparse parity-check matrix, that is, a parity-check matrix with a very low concentration of 1s, hence the name low-density parity-check code. This sparseness is what has interested researchers, as it can lead to excellent performance in terms of bit error rate.

The purpose of this paper is to gain an understanding of LDPC codes and to use that knowledge to construct and test a series of algorithms that simulate their performance. The paper begins with a basic background of digital communications and channel coding theory, and then carries these principles forward and applies them to LDPC codes.
Figure 1: Communication System Block Diagram

Figure 1 shows a model of a communication system. A digital message originates from the source (it could have been obtained from an analog signal via an analog-to-digital converter). The digital signal is first passed through a source encoder, which removes redundancy from the data in much the same way that computer file compression does. Following source encoding, the signal is passed through the channel encoder, which adds controlled redundancy; the signal is then modulated and transmitted over the channel. The reverse process occurs in the receiver. This paper focuses on the channel encoder and decoder blocks, that is, on channel coding. The purpose of channel coding is to add controlled redundancy to the transmitted signal in order to increase the reliability of transmission and to lower transmission power requirements.
Shannon's channel coding theorem states that, as long as the transmission rate is kept below the channel capacity, there exists a coding scheme that can achieve an arbitrarily small probability of error. While Shannon proposed this theorem, he provided no insight into how to achieve this capacity. Evidence of the search for such a coding scheme can be seen in the rapid development of capacity-improving schemes. When Shannon announced his theory in the July and October 1948 issues of the Bell System Technical Journal, the largest communications cable in operation was capable of carrying 1,800 voice conversations. Twenty-five years later, the highest-capacity cable could carry 230,000 simultaneous conversations [5]. Researchers are continuously looking for ways to improve capacity. Currently, the primary measure of a code's performance is its proximity to Shannon's limit, which can be expressed in a number of different ways. For a band-limited channel, Shannon's limit is:
$$C = B \log_2\left(1 + \frac{P_S}{P_N}\right)$$

where C is the channel capacity, B is the channel bandwidth, and $P_S/P_N$ is the signal-to-noise power ratio.
The SNR for a coded system also depends on the code rate R = (bits in the original message)/(bits sent on the channel), and the corresponding limit can be found from:
$$R\left(1 + p\log_2(p) + (1-p)\log_2(1-p)\right) = C_R$$
where p is the BER for a given SNR and R is the code rate. Combining the above equations gives an expression that contains only p and the SNR ($E_b/N_0$); solving this equation requires numerical computation:
$$R\left(1 + p\log_2(p) + (1-p)\log_2(1-p)\right) = \frac{1}{2}\log_2\!\left(1 + 2R\,\frac{E_b}{N_0}\right)$$
From the above, Shannon's limit for a code rate of ½ can be shown to be 0.188 dB [6].
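To illustrate the numerical computation mentioned above, the following is a minimal sketch (in Python with SciPy; the function names are illustrative and this is not the simulation code from Appendix A) that solves the combined expression for the limiting $E_b/N_0$ at a given code rate and target BER. Note that this unconstrained-input expression gives a value near 0 dB for a rate-½ code; the 0.188 dB figure quoted from [6] corresponds to a binary (BPSK) channel input.

```python
import numpy as np
from scipy.optimize import brentq

def binary_entropy(p):
    """H2(p); the bracketed term in the text equals 1 - H2(p)."""
    if p == 0.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def shannon_limit_db(rate, ber):
    """Smallest Eb/N0 (in dB) satisfying
    R*(1 - H2(p)) = 0.5*log2(1 + 2*R*Eb/N0)."""
    lhs = rate * (1.0 - binary_entropy(ber))
    f = lambda x: 0.5 * np.log2(1.0 + 2.0 * rate * x) - lhs   # x = Eb/N0 (linear)
    return 10.0 * np.log10(brentq(f, 1e-9, 1e9))

# Example: rate-1/2 code at a target BER of 1e-5.
print(shannon_limit_db(0.5, 1e-5))
```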
Figure 3: All Possible Codewords for the Hamming (7,4) Code [3]

Figure 3 shows the codewords constructed from the code given in Figure 2 above. Another important property of any channel coding scheme is its minimum distance. For Hamming (7,4), the minimum distance is 3: starting from any codeword, 3 bits must be flipped to produce another valid codeword. In terms of decoding, this means that only a finite number of errors can be corrected. The Hamming (7,4) code is able to detect single and double bit errors, but it can correct only single bit errors. It is important to note that if 3 or more bit errors occur, the decoder will be unable to correct them, and may even be unable to detect that any error occurred. The following equations describe the characteristics of a Hamming code in terms of minimum distance and the numbers of detectable and correctable errors:
$$MD = 2n + 1$$

$$p = n + 1$$
where MD is the minimum distance, n is the number of errors the code can correct, and p is the number of errors the code can detect. Again, if sufficient noise is present, the codeword may be corrupted in a manner that the Hamming code is unable to detect or correct. Minimum distance therefore plays an important role as a characteristic of a given code. In general, the minimum distance is defined as the fewest number of bits that must be flipped in any given codeword for it to become another valid codeword. A large minimum distance makes for a good coding scheme, as it increases the noise immunity of the system. It is often very difficult to determine the minimum distance of a given code, because there exist $2^k$ possible codewords in any given coding scheme; therefore
computing the minimum distance requires that $2^{k-1}(2^k - 1)$ pairwise comparisons be performed. Clearly, as the blocklength increases, measuring the minimum distance requires a large amount of computational power. Methods have been proposed to estimate the minimum distance of such codes, but they will not be discussed here. Although the Hamming (7,4) code does not provide a large gain in error rate performance over an uncoded system, it provides an excellent first step in studying coding theory.
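For a code as small as Hamming (7,4), the exhaustive comparison is trivial. The sketch below is illustrative only (it is not the project code from Appendix A) and assumes the systematic generator matrix that corresponds to the parity-check matrix shown in Section 2.3; it enumerates all $2^4$ codewords and measures the minimum distance directly.

```python
import itertools
import numpy as np

# Generator matrix G = [I_4 | A^T] matching the H = [A | I_3] shown in Section 2.3.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1]], dtype=int)

# All 2^k codewords: c = m*G (mod 2) for every 4-bit message m.
messages = np.array(list(itertools.product([0, 1], repeat=4)), dtype=int)
codewords = messages @ G % 2

# Exhaustive pairwise comparison: 2^(k-1) * (2^k - 1) = 120 comparisons.
d_min = min(np.sum(a != b) for a, b in itertools.combinations(codewords, 2))
print("Minimum distance:", d_min)   # prints 3 for Hamming (7,4)
```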
2.3 TANNER GRAPH REPRESENTATION

Figure 4: Bipartite Tanner Graph (check nodes $f_0$, $f_1$, $f_2$ connected to variable nodes $c_0$ through $c_6$)

The parity-check matrix corresponding to this Tanner graph, with rows labelled by the check nodes $f_0$, $f_1$, $f_2$ and columns by the variable nodes $c_0$ through $c_6$, is:

$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}$$
3.0 LDPC
3.1 INTRODUCTION TO LDPC
Robert E. Gallager originally discovered Low-Density Parity-Check (LDPC) codes in 1962 [2]. They are a class of linear block codes that approach Shannon's channel capacity limit (see Section 2.1). LDPC codes are characterized by the sparseness of ones in the parity-check matrix; this low density of ones allows for a large minimum distance, resulting in improved performance. Although proposed in the early 1960s, it is only recently that these codes have emerged as a promising avenue of research for achieving channel capacity, in part because of the large amount of processing power required to simulate them. As with any coding scheme, larger blocklengths provide better performance but require more computing power. The performance of a code is measured through its bit error rate (BER) versus signal-to-noise ratio ($E_b/N_0$) in dB. The curve of a good code shows a dramatic drop in BER as the SNR improves; the best codes exhibit a cliff-like drop at an SNR only slightly higher than Shannon's limit.
3.1.1 Parity Check Matrix

3.1.1.1 Classifying the Matrix

LDPC codes are classified into two classes: regular and irregular codes. Regular codes are those in which every column of the parity-check matrix contains a constant number $w_C$ of 1s and every row contains a constant number $w_R$ of 1s. For a chosen column weight $w_C$, the row weight follows as $w_R = N w_C / (N - k)$, where N is the blocklength of the code and k is the message length. Irregular codes are those that do not belong to this set (they do not maintain a consistent row weight).
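As a simple illustration of this classification (a sketch only; the matrix and function names are hypothetical and not taken from Appendix A), the column and row weights of a candidate H can be checked directly:

```python
import numpy as np

def classify_ldpc(H):
    """Return 'regular' if every column has the same weight wC and every
    row has the same weight wR = N*wC/(N-k); otherwise return 'irregular'."""
    col_weights = H.sum(axis=0)
    row_weights = H.sum(axis=1)
    is_regular = len(set(col_weights)) == 1 and len(set(row_weights)) == 1
    return "regular" if is_regular else "irregular"

# Toy example: a tiny (wC = 2, wR = 4) regular matrix with N = 8 and N - k = 4.
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1, 0, 1]], dtype=int)
print(classify_ldpc(H))   # "regular": every column weight 2, every row weight 4
```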
3.1.1.2 Methods of Generation

In the 1960s, Gallager established the existence of the class of LDPC codes, but he provided no insight into how to generate the parity-check matrix (also known as the H matrix). Many methods of generation have since been proposed by various researchers [3][6][7].
When generating the parity-check matrix, there are several key concerns to examine, such as minimum distance, cycle length, and linear independence.
3.1.2 Minimum Distance of LDPC Codes

As discussed in Section 2.2, minimum distance is a property of any coding scheme. Ideally the minimum distance should be as large as possible, although there is a practical limit on how large it can be. LDPC codes pose a significant problem when calculating the minimum distance efficiently, since an effective LDPC code requires a rather large blocklength. With random generation it is very difficult to specify the minimum distance as a design parameter; rather, the minimum distance becomes a property of the resulting code.
3.1.3 Cycle Length of LDPC Codes

Using a Tanner graph, it is possible to visualize the definition of the minimum cycle length of a code: it is the minimum number of edges that must be traversed from one check node in order to return to that same check node. Length-4 and length-6 cycles, with the corresponding parity-check matrix configurations, are shown in Figures 5 and 6 respectively.
Figure 5: Length 4 Cycle (two check nodes and two variable nodes connected in a loop; in H, the four 1s involved form the corners of a rectangle, i.e., two columns share 1s in the same two rows)

Figure 6: Length 6 Cycle (three check nodes and three variable nodes connected in a loop; in H, the six 1s involved are spread across three rows and three columns)
It has been shown that the existence of these cycles degrades performance during the iterative decoding process [7]. Therefore, when generating the parity-check matrix, the minimum permitted cycle length must be decided upon. It is possible to control the minimum cycle length during generation; however, the computational complexity and time increase exponentially with each increase in the minimum cycle length.
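One common check during generation is to reject candidate matrices that contain length-4 cycles. The sketch below illustrates the idea (it is not the method used in the Appendix A code): a length-4 cycle exists exactly when two columns of H share 1s in two or more of the same rows, i.e., when four 1s form the corners of a rectangle as in Figure 5.

```python
import numpy as np

def has_length4_cycle(H):
    """A length-4 cycle exists iff some pair of columns of H overlaps in
    two or more rows (the rectangle pattern of Figure 5)."""
    overlap = H.T @ H            # entry (i, j): number of rows where columns i and j are both 1
    np.fill_diagonal(overlap, 0) # ignore each column's overlap with itself
    return bool((overlap >= 2).any())

# Columns 0 and 1 both have 1s in rows 0 and 1, so a length-4 cycle exists.
H_bad = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 1]], dtype=int)
print(has_length4_cycle(H_bad))   # True
```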
3.1.4 Linear Independence

Encoding is performed with a generator matrix G as:

$$\mathbf{c} = G^{T}\mathbf{m}$$

where:

$\mathbf{c} = [c_1, c_2, \dots, c_N]^T$ is the codeword,
$\mathbf{m} = [m_1, m_2, \dots, m_k]^T$ is the message word, and
G is the k-by-N generator matrix.

In order to guarantee the existence of such a matrix G, the linear independence of all rows of the parity-check matrix must be assured. In practical random generation, this becomes very difficult. The method used to approach this problem is studied in further depth in Section 3.3, Generation for Simulation.
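As a minimal sketch of why row independence matters (Python with NumPy; the function names are illustrative and this is not the project's generation procedure from Section 3.3), a generator matrix can be obtained from H by Gaussian elimination over GF(2), and the procedure fails exactly when the rows of H are linearly dependent:

```python
import numpy as np

def generator_from_H(H):
    """Row-reduce H over GF(2) to the form [I_(n-k) | A], swapping columns when
    needed, and return (G, perm): G = [A^T | I_k] generates the column-permuted
    code and perm records the permutation. Raises ValueError when the rows of H
    are linearly dependent."""
    H = H.copy() % 2
    m, n = H.shape                      # m = n - k parity checks
    perm = np.arange(n)
    for r in range(m):
        rows, cols = np.nonzero(H[r:, r:])
        if rows.size == 0:              # no pivot available: dependent rows
            raise ValueError("rows of H are linearly dependent")
        pr, pc = rows[0] + r, cols[0] + r
        H[[r, pr]] = H[[pr, r]]         # move the pivot row into place
        H[:, [r, pc]] = H[:, [pc, r]]   # move the pivot column into place
        perm[[r, pc]] = perm[[pc, r]]
        for i in range(m):              # clear every other 1 in this column
            if i != r and H[i, r]:
                H[i] ^= H[r]
    A = H[:, m:]                        # H is now [I_m | A]
    return np.concatenate([A.T, np.eye(n - m, dtype=int)], axis=1), perm

# Toy usage with the Hamming (7,4) parity-check matrix from Section 2.3,
# following the c = G^T m convention used in the text.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]], dtype=int)
G, perm = generator_from_H(H)
m_word = np.array([1, 0, 1, 1])
c = G.T @ m_word % 2                    # H @ c % 2 gives the all-zero syndrome here
```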
3.2 LDPC SYSTEM OVERVIEW

Figure 7: LDPC System Overview (the message m is passed through the LDPC encoder, the resulting codeword c through the BPSK modulator, noise is added in the channel, and the received vector y is passed to the decoder and on to the message destination)
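The modulator and channel blocks of Figure 7 can be summarized with a short sketch (illustrative only, and separate from the project's own code and Simulink model in the appendices; the noise-variance convention used here is an assumption): codeword bits are mapped to BPSK symbols and white Gaussian noise is added at a specified $E_b/N_0$, producing the received vector y.

```python
import numpy as np

def bpsk_awgn(codeword, ebn0_db, rate):
    """Map bits {0,1} -> BPSK symbols {+1,-1}, then add white Gaussian noise
    whose variance follows from Eb/N0 and the code rate R."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebn0))   # noise standard deviation per dimension
    symbols = 1.0 - 2.0 * np.asarray(codeword)   # 0 -> +1, 1 -> -1
    return symbols + sigma * np.random.randn(len(symbols))   # received vector y

# Example: transmit an all-zero length-8 codeword at 2 dB with a rate-1/2 code.
y = bpsk_awgn(np.zeros(8, dtype=int), ebn0_db=2.0, rate=0.5)
```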