The Art of Error Correcting Coding: Iterative decoding and capacity
Iteratively decodable codes
This page presents pointers to C/C++ programs for simulating iterative decoding algorithms, including programs to compute the constellation-constrained capacity. The design of interleavers is an important issue, particularly for short frame lengths, and computer programs to construct several types of interleavers are given. These and other issues are discussed in Chapter 8 of the book (http://www.amazon.com/gp/product/0470015586/ref=sr_11_1/102-3025380-7157754?ie=UTF8).
TURBO CODES
In the design and implementation of a turbo code in software, there are two main issues:
1. Construction of an interleaver
2. Simulation of iterative decoding

1. Interleavers
RANDOM interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/ranint.c
HELICAL interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/helint.c
PRIMITIVE interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/primint.c
CYCLIC interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/cyclint.c
DIAGONAL interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/diagint.c
BLOCK interleaver: http://the-art-of-ecc.com/8_Iterative/Interleaver/blockint.c
The programs output the interleaver array as one integer per row (i.e.,
separated by CR characters). Other types of interleavers exist, but the above
classes should always yield competitive alternatives. The programs are well
documented.
2. Iterative decoding
MAP decoding of a parallel concatenated (turbo) convolutional code: rate 1/3
turbo.cpp: http://the-art-of-ecc.com/8_Iterative/Turbo/turbo.cpp
random.cpp: http://the-art-of-ecc.com/8_Iterative/Turbo/random.cpp
random.h: http://the-art-of-ecc.com/8_Iterative/Turbo/random.h
Simulation of a binary rate-1/3 turbo code with two identical rate-1/2 component recursive convolutional encoders. The memory of the encoders can be selected between 2, 3 and 4, corresponding to 4-, 8- and 16-state trellises, respectively. The files turbo_MAP.cpp and random.cpp must be compiled together, with the "gcc" command in a Unix-like environment (or its equivalent in other operating systems), as
gcc -O2 turbo_MAP.cpp random.cpp -lm
NOTE: This version of the program does not add tail bits to terminate the trellis. As a result, performance will be worse than that of turbo codes with tail bits, especially for short interleaver lengths. The original program was written by Mathys Walma in 1998; please refer to the original README file (http://the-art-of-ecc.com/8_Iterative/Turbo/README_1998.txt). Many changes to the program were necessary to make it more consistent with the style of the other programs on this site.
MAP decoding of a parallel concatenated (turbo) convolutional code: puncturing and rate 1/2
turbo_punc.cpp: http://the-art-of-ecc.com/8_Iterative/Turbo/turbo_punc.cpp
random.cpp: http://the-art-of-ecc.com/8_Iterative/Turbo/random.cpp
random.h: http://the-art-of-ecc.com/8_Iterative/Turbo/random.h
These programs simulate a rate-1/2 punctured turbo code. A puncturing rule is applied in the branch metric (gamma) computation stage, in much the same way as in the convolutional code case. In this version the puncturing rule is hard-coded in the program, but it should be easy to specify it in a file, just as for binary punctured convolutional codes (http://the-art-of-ecc.com/5_Convolutional/index.html).
All other comments made for the rate-1/3 turbo code above are pertinent to
the punctured rate-1/2 turbo code.
LDPC CODES
Below are iterative soft-decision (belief propagation) decoding and hard-decision decoding algorithms for the important family of low-density parity-check codes. Since these algorithms can be applied to any linear code (with a low-density parity-check matrix), a simplification is made: the all-zero codeword is always transmitted. This simplifies programming enormously.
The iterative decoding algorithms need the parity-check matrix as an input. The format of the files specifying the structure of these matrices (or the Tanner graphs) is the same as that used by David MacKay, i.e., the "alist" (adjacency list) format. Please refer to his web site (http://www.inference.phy.cam.ac.uk/mackay/CodesFiles.html) for more information, some C source code, and matrices for your simulations. Below are two examples of parity-check matrix files, for Gallager's (20,5) code and a finite (projective) geometry cyclic (273,191,17) code, respectively:
gallager.20.4.3: http://the-art-of-ecc.com/8_Iterative/BPdec/gallager.20.4.3.dat
DSC.273.17.17: http://the-art-of-ecc.com/8_Iterative/BPdec/DSC.273.17.17
NOTE: The three numbers in the suffix of a file name above are N.J.K, where N = length, J = maximum degree of the bit nodes and K = maximum degree of the check nodes.
Belief-propagation decoding algorithm: Probabilistic decoding
pearl.c: http://the-art-of-ecc.com/8_Iterative/BPdec/pearl.c
The algorithm works directly with probabilities. In terms of numerical precision it is the most stable BP decoder, although it is very intensive in terms of exp() computations.
Belief-propagation decoding algorithm: Logarithm domain
log_pearl.c: http://the-art-of-ecc.com/8_Iterative/BPdec/log_pearl.c
This version of the BP algorithm is obtained from the probabilistic one by a straight log-domain translation. No approximation (table) is used. This results in log(exp(a)±exp(b)) computations that are even more intensive than those in pearl.c.
Belief-propagation decoding algorithm: Log-likelihood domain
llr_pearl.c: http://the-art-of-ecc.com/8_Iterative/BPdec/llr_pearl.c
This version uses log-likelihood ratios, which yields a large speed improvement over log_pearl.c with practically the same numerical precision as pearl.c. It can be improved further by using a look-up table to avoid exp() computations, as mentioned in the book (http://www.amazon.com/exec/obidos/ASIN/0471495816/qid=1016858656/sr=8-1/ref=sr_8_5_1/104-2054788-7703131).
Bit-flipping hard-decision decoding algorithm
bitf.c: http://the-art-of-ecc.com/8_Iterative/BPdec/bitf.c
Gallager's iterative bit-flipping decoding of linear block codes. The user
can specify a threshold on the number of unsatisfied parity checks needed in
order to flip (or complement the value of) a bit.
The Tanner graph of a binary cyclic code:
tannercyclic.m: http://the-art-of-ecc.com/8_Iterative/tannercyclic.m
Capacity
Both turbo codes and LDPC codes are so-called capacity-achieving codes. Therefore, there is interest in knowing the theoretical limits of transmission when a particular modulation scheme is used. This is called the constellation-constrained capacity. Below are some programs that are useful in computing this capacity, compared with the ultimate Shannon capacity of an AWGN channel.
Compute the constellation-constrained capacity of an AWGN channel:
capacity.c: http://the-art-of-ecc.com/8_Iterative/Capacity/capacity.c
Compute the Shannon capacity (versus SNR per symbol, or Es/No):
shannon.c: http://the-art-of-ecc.com/8_Iterative/Capacity/shannon.c
Compute the Shannon capacity (versus SNR per bit, or Eb/No):
shannon2.c: http://the-art-of-ecc.com/8_Iterative/Capacity/shannon2.c
This page was last updated on August 6, 2008, by Robert H. Morelos-Zaragoza.