
Error Probability and Application Areas


Block error probability for RSC and BDD


For the error probability calculation we start from the same block diagram as in the chapter  "Error Correction According to Reed-Solomon Coding",  but here we choose  Bounded Distance Decoding  (BDD)  as the code word estimator  (y → z).  For  "Maximum Likelihood Decoding"  the results are slightly better.

System model with Reed–Solomon coding,  m-BSC  and  Bounded Distance Decoding

Let the block error probability be defined as follows:

$${\rm Pr(block\ error)} = {\rm Pr}(\underline{v} \ne \underline{u}) = {\rm Pr}(\underline{z} \ne \underline{c}) = {\rm Pr}(f > t).$$

Due to the BDD assumption,  the same simple result is obtained as for the binary block codes,  namely the probability that

  • the number  f  of errors in the block  (received word)
  • is greater than the correctability  t  of the code.


Since the random variable  f  (number of errors in the block)  follows a  binomial distribution  in the range   0 ≤ f ≤ n,   we obtain:

$${\rm Pr(block\ error)} = \sum_{f=t+1}^{n} {n \choose f} \cdot \varepsilon_{\rm S}^{f} \cdot (1-\varepsilon_{\rm S})^{n-f}.$$
  • But while in the first main chapter   c_i ∈ GF(2)   has always applied and thus the  f  transmission errors were in each case  »bit errors«,
  • under Reed–Solomon coding a transmission error   (y_i ≠ c_i)   is always understood to be a  »symbol error«  because of   c_i ∈ GF(2^m)   as well as   y_i ∈ GF(2^m).


This results in the following differences:

  1.   The discrete channel model used to describe the binary block codes is the  Binary Symmetric Channel  (BSC).  Each bit  c_i  of a code word is falsified  (y_i ≠ c_i)  with probability  ε  and correctly transmitted  (y_i = c_i)  with probability  1 − ε.
  2.  For Reed–Solomon coding,  one must replace the  "1–BSC"  model with the  "m–BSC"  model.  A symbol  c_i  is falsified with probability  ε_S  into another symbol  y_i   (no matter which one)   and arrives with probability  1 − ε_S  unfalsified at the receiver.
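As a numerical cross-check, the BDD sum above can be evaluated directly;  a minimal Python sketch with  n,  t  and  ε_S  as inputs:

    from math import comb

    def bdd_block_error_probability(n, t, eps_s):
        """Pr(block error) under Bounded Distance Decoding: probability that
        more than t of the n received symbols are falsified, with the number
        of symbol errors f binomially distributed with parameter eps_s."""
        return sum(comb(n, f) * eps_s**f * (1 - eps_s)**(n - f)
                   for f in range(t + 1, n + 1))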

Example 1:  We start from the BSC parameter  ε = 0.1  and consider Reed–Solomon code symbols  c_i ∈ GF(2^8)   ⇒   m = 8,   q = 256,   n = 255.

  • For a symbol falsification   (y_i ≠ c_i),   a single wrong bit is already sufficient.
  • Or expressed differently:   If  y_i = c_i  is to hold,  then all  m = 8 bits  of the code symbol must be transmitted correctly:
$$1-\varepsilon_{\rm S} = (1-\varepsilon)^m = 0.9^8 \approx 0.43.$$
  • Thus,  for the  "8–BSC"  model,  the falsification probability is  ε_S ≈ 0.57.
  • With the further assumption that the falsification of  c_i = β  into any other symbol  y_i = γ ≠ β  is equally likely,  we obtain:
$${\rm Pr}(y_i = \gamma \,|\, c_i = \beta) = 0.57/255 \approx 0.223\%.$$
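The  "8–BSC"  calculation of Example 1 follows the same pattern;  a short Python sketch,  assuming the uniform-falsification model described above:

    def m_bsc_parameter(eps, m):
        """Symbol falsification probability of the "m-BSC" model:
        a symbol is correct only if all m bits are transmitted correctly."""
        return 1 - (1 - eps)**m

    eps_s = m_bsc_parameter(eps=0.1, m=8)   # ~0.57 for the "8-BSC" model
    print(eps_s)
    # Under the equally-likely assumption, each of the 255 wrong symbols
    # is received with probability eps_s / 255, i.e. about 0.223 %.
    print(eps_s / 255)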


Application of Reed–Solomon coding for binary channels


The prerequisites for the following calculation of the block error probability of a Reed–Solomon coding system,  including the conversion to binary symbols,  are summarized in the diagram:

Application of Reed-Solomon coding for binary channels.  At point
(1)  k m-bit symbols;   (2)  n m-bit symbols;   (3)  n m-bit symbols;   (4)  (n·m)-bit blocks;
  1. Assume  (n, k) Reed–Solomon coding with symbols  c_i ∈ GF(2^m).  The smaller the code rate  R = k/n,  the less information can be transmitted at a fixed data rate.
  2. Each symbol is represented by  m  bits   ⇒   "binary mapping".  A block  (code word  c)  thus consists of  n  symbols or of  n·m  binary characters  (bits)  combined in  c_bin.
  3. In addition,  the  AWGN channel,  characterized by the parameter  E_B/N_0,  is assumed.  According to this channel model the transmission is bipolar:   "0"   ↔   +1  and  "1"   ↔   −1.
  4. The receiver makes hard decisions at the bit level.  Before the decoding including error correction,  the binary symbols are converted back to  GF(2^m)  symbols.


The  "Bounded Distance Decoding"  equation from the last section is based on the  "m–BSC"  model:

$${\rm Pr(block\ error)} = \sum_{f=t+1}^{n} {n \choose f} \cdot \varepsilon_{\rm S}^{f} \cdot (1-\varepsilon_{\rm S})^{n-f}.$$

Starting from the AWGN channel  ("Additive White Gaussian Noise"),  one comes

  • first to the  »bit error probability«     ⇒     "1–BSC"  parameter:
$$\varepsilon = {\rm Q}\left(\sqrt{2 \cdot E_{\rm S}/N_0}\right) = {\rm Q}\left(\sqrt{2 \cdot R \cdot E_{\rm B}/N_0}\right),$$
  • and from this to the  »symbol error probability«     ⇒     "m–BSC"  parameter:
$$\varepsilon_{\rm S} = 1-(1-\varepsilon)^m.$$

Example 2:  With

  1.   the code rate   R = k/n = 239/255 = 0.9373,
  2.   the AWGN parameter   10 · lg E_B/N_0 = 7 dB   ⇒   E_B/N_0 ≈ 5,
  3.   eight bits per symbol   ⇒   m = 8,  and
  4.   the code length   n = 2^8 − 1 = 255


one obtains the following numerical values for

  • the parameter  ε  of the  "1–BSC"  model   ⇒   "bit error probability":
$$\varepsilon = {\rm Q}\left(\sqrt{2 \cdot 0.9373 \cdot 5}\right) = {\rm Q}(3.06) \approx 1.1 \cdot 10^{-3} = 0.11\%,$$
  • the parameter  ε_S  of the  "8–BSC"  model   ⇒   "symbol error probability"  or  "block error probability":
$$\varepsilon_{\rm S} = 1-(1-1.1 \cdot 10^{-3})^8 = 1-0.9989^8 = 1-0.9912 \approx 0.88\% \ (\approx 8 \cdot \varepsilon).$$

⇒   So every single symbol is transmitted without errors with a probability of more than  99  percent.
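The chain  "AWGN parameter  →  ε  →  ε_S"  of Example 2 can be reproduced in a few lines;  a Python sketch,  expressing the Q function via the complementary error function  erfc:

    from math import erfc, sqrt

    def Q(x):
        """Complementary Gaussian error integral Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * erfc(x / sqrt(2))

    R, m = 239 / 255, 8
    EbN0 = 10 ** (7 / 10)              # 10 * lg(E_B/N_0) = 7 dB  =>  approx. 5

    eps = Q(sqrt(2 * R * EbN0))        # "1-BSC" bit error probability, ~0.11 %
    eps_s = 1 - (1 - eps)**m           # "8-BSC" symbol error probability, ~0.88 %
    print(eps, eps_s)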


BER of Reed–Solomon coding for binary channels


The following graph shows the block error probabilities given in  [Liv10][1]  as a function of the AWGN quotient  10 · lg E_B/N_0.

Shown are the calculated curves   Pr(v ≠ u)   for two different Reed–Solomon codes corresponding to the  "Deep Space Standards"  according to  CCSDS  ("Consultative Committee for Space Data Systems"),  namely

Block error probabilities of two Reed-Solomon codes of length  n=255
Note:   In the  "Exercise 2.15"  you will carry out this calculation in full for the  RSC (7, 3, 5)_8  – thus for somewhat clearer parameters.
  1. the  RSC (255, 239, 17)_256  with code rate  R = 0.9373,  which can correct up to  t = 8  errors,  and
  2. the  RSC (255, 223, 33)_256  with higher correction capability  (t = 16)  due to its smaller code rate.


We analyze the point highlighted in yellow in the graph,  valid for

  • the  RSC (255, 239, 17)_256,  and
  • 10 · lg E_B/N_0 = 7.1 dB   ⇒   ε_S = 0.007.


According to the graph,  the corresponding block error probability is

$${\rm Pr(block\ error)} = \sum_{f=9}^{255} {255 \choose f} \cdot \varepsilon_{\rm S}^{f} \cdot (1-\varepsilon_{\rm S})^{255-f} \approx 10^{-4}.$$

Dominant here is the first term  (for  f = 9),  which already accounts for about  80%:

$${255 \choose 9} \approx 1.1 \cdot 10^{16}, \quad \varepsilon_{\rm S}^{9} \approx 4 \cdot 10^{-20}, \quad (1-\varepsilon_{\rm S})^{246} \approx 0.18$$
$$\Rightarrow \quad {\rm Pr}(f = 9) \approx 8 \cdot 10^{-5}.$$

This may be taken as evidence that the summation can be terminated after just a few terms.
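This is easy to verify numerically;  a Python sketch with the values  n = 255,  t = 8  and  ε_S = 0.007  used above:

    from math import comb

    n, t, eps_s = 255, 8, 0.007

    # Individual terms of the BDD sum: the term for f = 9 already gives ~8e-5
    for f in range(t + 1, t + 4):
        print(f, comb(n, f) * eps_s**f * (1 - eps_s)**(n - f))

    # Complete sum for comparison: ~1e-4, i.e. the first term contributes ~80 %
    print(sum(comb(n, f) * eps_s**f * (1 - eps_s)**(n - f)
              for f in range(t + 1, n + 1)))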

The results shown in the graph can be summarized as follows:

  1.   For small  EB/N0  (of the AWGN channel),  the Reed–Solomon codes are completely unsuitable.
  2.   For  10 · lg E_B/N_0 < 6 dB,  both codes lie above the  (black)  comparison curve for uncoded transmission.
  3.   The calculation for the  RSC (255, 223, 33)_256  differs only in the lower summation limit  (f_min = 17)  and in a slightly larger  ε_S  (because  R = 0.8745).
  4.   At  BER = 10^{-6},  this  "red code"  (with  t = 16)  is only about  1 dB  better than the  "green code"  (with  t = 8).
  5.   So the results of both codes are rather disappointing.


Conclusion: 

  • Reed–Solomon codes are not very good on the memoryless AWGN channel.  Both codes are more than  4 dB  away from the information-theoretic  Shannon bound.
  • Reed–Solomon codes are not perfect.  The consequences of this are covered in  "Exercise 2.16".


Typical applications with Reed–Solomon coding


Reed–Solomon coding is often applied according to the diagram together with an  "inner code"  in cascaded form.

Code scheme with cascading
  • The inner code is almost always a binary code and,  in satellite and mobile communications,  often a convolutional code.
  • If one wishes to study only the Reed–Solomon encoding/decoding,  one replaces the grayed components in a simulation with a single block called a  "super channel".
  • Concatenated coding systems are very efficient if an interleaver is connected between the two encoders to further break up burst errors.


Example 3:  The diagram shows an example of such an interleaver,  where we restrict ourselves to a  20×4  matrix.  In practice,  these matrices are significantly larger.

Interleaver matrix for  20×4  symbols

The interleaver is written row–by–row and read out column–by–column  (blue label).  With the de–interleaver  (green label)  the sequence is exactly the other way round.

  • The Reed–Solomon symbols are therefore not forwarded to the inner encoder consecutively,  but according to the specified sequence as symbols  1, 21, 41, 61, 2, 22, etc.
  • A contiguous area  (here the symbols  42, 62, 3, 23, 43, 63, 4, 24   ⇒   around the rectangle outlined in red below)  is destroyed on the channel,  for example by a scratch when the channel is a storage medium.
  • After de–interleaving,  the symbol sequence is again  1, 2, 3, ...
    You can see from the circles outlined in red that this burst error has now largely been  "broken up".
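A possible Python sketch of this row-wise write / column-wise read rule;  here the  20×4  matrix is interpreted as  4  rows of  20  symbols each,  which reproduces the readout sequence  1, 21, 41, 61, 2, 22, ...  given above:

    def interleave(symbols, rows=4, cols=20):
        """Write row by row, read out column by column."""
        assert len(symbols) == rows * cols
        matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
        return [matrix[r][c] for c in range(cols) for r in range(rows)]

    def deinterleave(symbols, rows=4, cols=20):
        """Inverse operation: write column by column, read out row by row."""
        assert len(symbols) == rows * cols
        matrix = [[None] * cols for _ in range(rows)]
        it = iter(symbols)
        for c in range(cols):
            for r in range(rows):
                matrix[r][c] = next(it)
        return [x for row in matrix for x in row]

    stream = interleave(list(range(1, 81)))
    print(stream[:8])                  # [1, 21, 41, 61, 2, 22, 42, 62]
    print(deinterleave(stream)[:8])    # back to [1, 2, 3, 4, 5, 6, 7, 8]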


Example 4:   A very widely used application of Reed–Solomon coding – and also the most commercially successful – is the »Compact Disc«  (CD),  whose error correction mechanism has already been described in the  "introduction chapter"  (Example 7)  of this book.  Here the inner code is also a Reed–Solomon code,  and the concatenated coding system can be described as follows:

  1. Both channels of the stereo audio signal are sampled at  44.1 kHz  each,  and every sample is digitally represented with  32  bits  (four bytes).
  2. The grouping of six samples results in a frame  (192 bits)  and thus  24  code symbols from the Galois field  GF(2^8).  Each code symbol thus corresponds to exactly one byte.
  3. The first Reed–Solomon code with rate  R_1 = 24/28  returns  28  bytes,  which are fed to an interleaver of size  28×109  bytes.  The readout is done diagonally  (in a rather involved manner).
  4. The interleaver distributes contiguous bytes over a large area of the entire disc.  This resolves so-called  "bursts",  e.g. caused by scratches on the  CD.
  5. Together with the second Reed–Solomon code  (rate  R_2 = 28/32),  this gives a total code rate of  R = (24/28) · (28/32) = 3/4.  Both codes can each correct  t = 2  symbol errors.
  6. The two component codes  RSC (28, 24, 5)  and  RSC (32, 28, 5)  are each based on the Galois field  GF(2^8),  which would actually imply a code length of  n = 255.
  7. The shorter component codes needed here result from the  RSC (255, 251, 5)_256  by  "shortening"   ⇒   see  "Exercise 1.9Z".  However,  this measure does not change the minimum distance  d_min = 5.
  8. With the subsequent  Eight-to-Fourteen Modulation  and further control symbols,  one finally arrives at the total code rate  R = 192/588 ≈ 0.326  of the CD.
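The rate bookkeeping of this chain can be verified with the numbers given above;  a brief Python sketch:

    frame_bits = 6 * 32            # six 32-bit samples per frame -> 192 bits = 24 bytes
    R1, R2 = 24 / 28, 28 / 32      # the two Reed-Solomon component codes
    print(frame_bits, R1 * R2)     # 192 bits, concatenated code rate 3/4
    print(192 / 588)               # total rate ~0.326 after EFM and control symbols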

In the section  "Slit CD"  you can see for yourself the correction capabilities of this  "cross–interleaved Reed–Solomon coding"  with the help of an audio example,  but you will also see its limitations.

Exercises for the chapter


Exercise 2.15: Block Error Probability with AWGN

Exercise 2.15Z: Block Error Probability once more

Exercise 2.16: Bounded Distance Decoding: Decision Regions

References

  1. Liva, G.:  Channel Coding.  Lecture Notes, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.