Channel Coding/The Basics of Product Codes

Revision as of 16:29, 5 December 2022
Basic structure of a product code
[Figure: Basic structure of a product code]

The graphic shows the general structure of »product codes«, which were introduced as early as 1954 by Peter Elias.
- The two-dimensional product code C=C1×C2 shown here is based on the two linear and binary block codes with parameters (n1, k1) and (n2, k2) respectively.
- The code word length is n=n1⋅n2.
The n encoded bits can be grouped as follows:
- The k=k1⋅k2 information bits are arranged in the k2×k1 matrix U.
- The code rate is equal to the product of the code rates of the base codes:
- R=k/n=(k1/n1)⋅(k2/n2)=R1⋅R2.
- The upper right matrix P(1) with dimension k2×m1 contains the parity bits with respect to the code C1.
- In each of the k2 rows, m1=n1−k1 check bits are added to the k1 information bits, as described in an earlier chapter using the example of "Hamming codes".
- The lower left matrix P(2) of dimension m2×k1 contains the check bits for the second component code C2. Here the encoding (and also the decoding) is done column by column: in each of the k1 columns, the k2 information bits are supplemented by m2=n2−k2 check bits.
- The m2×m1 matrix P(12) on the bottom right is called »checks-on-checks«. Here the two previously generated parity matrices P(1) and P(2) are linked according to the parity-check equations.
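The structure described above can be sketched in a few lines of code. The following is a minimal illustration, assuming (3, 2) single parity-check codes for both component codes (a choice made here purely for brevity; the article's worked example uses Hamming codes instead):

```python
import numpy as np

# Systematic generator matrices of the two (assumed) component codes.
G1 = np.array([[1, 0, 1],
               [0, 1, 1]])   # C1: (n1, k1) = (3, 2) single parity-check code
G2 = np.array([[1, 0, 1],
               [0, 1, 1]])   # C2: (n2, k2) = (3, 2) single parity-check code

U = np.array([[1, 0],
              [0, 1]])       # k2 x k1 information matrix

# Encode every row with C1 (appends P(1)), then every column with C2
# (appends P(2) and the checks-on-checks matrix P(12)); arithmetic is mod 2.
X = (G2.T @ (U @ G1 % 2)) % 2
print(X)                     # n2 x n1 code matrix; each row is a C1 code
                             # word, each column a C2 code word

# Encoding columns first gives the same result - for linear component
# codes the order of the two encoding steps does not matter.
X_alt = ((G2.T @ U % 2) @ G1) % 2
assert (X == X_alt).all()
```

Because both component encoders are linear, X = G2ᵀ · U · G1 (mod 2), which is why the row-first and column-first encoding orders coincide.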
Conclusion: All product codes according to the above graphic have the following properties:
- For linear component codes C1 and C2 the product code C=C1×C2 is also linear.
- Each row of C yields a code word of C1, and each column yields a code word of C2.
- The sum of two rows again gives a code word of C1 due to linearity.
- Also, the sum of two columns gives a valid code word of C2.
- Each product code also includes the zero word 0_ (a vector of n zeros).
- The minimum distance of C is dmin=d1⋅d2, where di denotes the minimum distance of Ci (i = 1, 2).
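These properties can be checked numerically for a toy case. The sketch below again assumes (3, 2) single parity-check codes (d1 = d2 = 2) as both component codes; it enumerates all 2^k = 16 code words of the product code and confirms dmin = d1 · d2 = 4:

```python
import itertools
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])    # (3, 2) single parity-check code, d = 2

def encode(U):
    """Product-code encoding: rows with C1, then columns with C2 (mod 2)."""
    return (G.T @ (U @ G % 2)) % 2

weights = []
for bits in itertools.product([0, 1], repeat=4):   # all 2^(k1*k2) inputs
    X = encode(np.array(bits).reshape(2, 2))
    weights.append(int(X.sum()))

d_min = min(w for w in weights if w > 0)   # smallest non-zero Hamming weight
print(d_min)                               # expected: d1 * d2 = 2 * 2 = 4
assert weights[0] == 0                     # all-zero input gives the zero word
```

The same enumeration also confirms linearity indirectly: the zero word is contained in the code, and the minimum weight equals the minimum distance.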
Iterative syndrome decoding of product codes
We now consider the case where a product code with matrix X is transmitted over a binary channel. Let the received matrix be Y=X+E, where E denotes the error matrix and the addition is modulo 2. All elements of the matrices X, E and Y are binary, that is, 0 or 1.
For the decoding of the two component codes the syndrome decoding according to the chapter "Decoding linear block codes" is suitable.
In the two-dimensional case this means:
- One first decodes the n2 rows of the receive matrix Y, based on the parity-check matrix H1 of the component code C1. Syndrome decoding is one way to do this.
- For this one forms in each case the so-called syndrome s_ = y_ ⋅ H1^T, where the vector y_ of length n1 denotes the current row of Y and "T" stands for "transposed". Corresponding to the calculated s_μ (with 0 ≤ μ < 2^(n1−k1)) one finds in a prepared syndrome table the associated probable error pattern e_ = e_μ.
- If there are only a few errors within the row, then y_+e_ matches the sent row vector x_ . However, if too many errors have occurred, the following incorrect corrections will occur.
- Then one syndrome-decodes the n1 columns of the (corrected) received matrix Y′, this time based on the (transposed) parity-check matrix H2^T of the component code C2. For this, one forms the syndrome s_ = y_′ ⋅ H2^T, where the vector y_′ of length n2 denotes the considered column of Y′.
- From a second syndrome table (valid for the code C2) one finds for the computed s_μ (with 0 ≤ μ < 2^(n2−k2)) the probable error pattern e_ = e_μ of the processed column. After correcting all columns, the matrix Y″ is obtained. Now one can perform another row decoding and then another column decoding ⇒ second iteration, and so on.
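As an illustration of a single decoding step, the sketch below builds the syndrome table for the (7, 4, 3) Hamming code (with the parity-check matrix that also appears in the example further down) and decodes one row; the second call shows the miscorrection that occurs when a row contains two errors:

```python
import numpy as np

H1 = np.array([[1, 1, 0, 1, 1, 0, 0],
               [0, 1, 1, 1, 0, 1, 0],
               [1, 0, 1, 1, 0, 0, 1]])    # 3 x 7, so H1.T is 7 x 3

# Syndrome table: map each syndrome to its most likely error pattern
# (here: the zero pattern and all seven single-error patterns).
table = {tuple(np.zeros(3, dtype=int)): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple(e @ H1.T % 2)] = e

def decode_row(y):
    """Correct the row y via its syndrome s = y * H1^T."""
    s = tuple(y @ H1.T % 2)
    return (y + table[s]) % 2

y1 = np.array([0, 0, 1, 0, 0, 0, 0])      # single error in bit 3
print(decode_row(y1))                      # expected: the all-zero row

y2 = np.array([1, 0, 0, 0, 0, 0, 1])      # two errors (bits 1 and 7):
print(decode_row(y2))                      # miscorrection adds a third error
                                           # at bit 5 -> (1, 0, 0, 0, 1, 0, 1)
```

The second call reproduces the behavior described above: with too many errors in a row, the syndrome points to the wrong coset leader and the "correction" makes things worse.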
Example 1: To illustrate the decoding algorithm, we again consider the (42,12) product code, based on
- the Hamming code HC (7, 4, 3) ⇒ code C1,
- the shortened Hamming code HC (6, 3, 3) ⇒ code C2.
The left graph shows the received matrix Y. For display reasons, the code matrix X was chosen to be the 6×7 zero matrix, so that the nine ones in Y simultaneously represent the transmission errors.
The row-by-row syndrome decoding is done via the syndrome s_ = y_ ⋅ H1^T with
- \boldsymbol{\rm H}_1^{\rm T} = \begin{pmatrix} 1 &0 &1 \\ 1 &1 &0 \\ 0 &1 &1 \\ 1 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \hspace{0.05cm}.
In particular:
- Row 1 ⇒ Single error correction is successful (also in rows 3, 4 and 6):
- \underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3
- \Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.
- Row 2 (contains two errors) ⇒ Error correction concerning bit 5:
- \underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4
- \Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{0.05cm}.
- Row 5 (also contains two errors) ⇒ Error correction concerning bit 3:
- \underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3
- \Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.
The column-by-column syndrome decoding removes all single errors in columns 1, 3, 4 and 7.
- Column 5 (contains two errors) ⇒ Error correction concerning bit 4:
- \underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm} \begin{pmatrix} 1 &0 &1 \\ 1 &1 &0 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} = \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4
- \Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.
The remaining three errors are corrected by the row-by-row decoding of the second iteration.
Whether all errors of a block are correctable depends on the error pattern. For this we refer to "Exercise 4.7".
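The iteration of Example 1 can be retraced numerically. In the sketch below, rows 1, 2 and 5 of the received matrix are taken from the worked calculations above, while the single-error positions assumed for rows 3, 4 and 6 are hypothetical (the figure with the receive matrix is not reproduced here); since those rows are corrected in the first row pass anyway, the later steps do not depend on this choice. The all-zero code matrix X is assumed, as in the example.

```python
import numpy as np

H1 = np.array([[1, 1, 0, 1, 1, 0, 0],    # (7, 4, 3) Hamming code C1
               [0, 1, 1, 1, 0, 1, 0],
               [1, 0, 1, 1, 0, 0, 1]])
H2 = np.array([[1, 1, 0, 1, 0, 0],       # shortened (6, 3, 3) Hamming code C2
               [0, 1, 1, 0, 1, 0],
               [1, 0, 1, 0, 0, 1]])

def syndrome_table(H):
    """Map each syndrome to the zero pattern or a single-error pattern."""
    m, n = H.shape
    table = {tuple(np.zeros(m, dtype=int)): np.zeros(n, dtype=int)}
    for i in range(n):
        e = np.zeros(n, dtype=int)
        e[i] = 1
        table[tuple(e @ H.T % 2)] = e
    return table

T1, T2 = syndrome_table(H1), syndrome_table(H2)

def decode(M, H, table):
    """Syndrome-decode every row of M; unknown syndromes leave the row as is."""
    return np.array([(y + table.get(tuple(y @ H.T % 2), 0)) % 2 for y in M])

Y = np.zeros((6, 7), dtype=int)          # all-zero code matrix X assumed
Y[0, 2] = 1                              # row 1: single error (bit 3)
Y[1, 0] = Y[1, 6] = 1                    # row 2: two errors (bits 1 and 7)
Y[2, 1] = 1                              # row 3: hypothetical single error
Y[3, 5] = 1                              # row 4: hypothetical single error
Y[4, 3] = Y[4, 4] = 1                    # row 5: two errors (bits 4 and 5)
Y[5, 0] = 1                              # row 6: hypothetical single error

Y1 = decode(Y, H1, T1)                   # row-by-row pass
Y2 = decode(Y1.T, H2, T2).T              # column-by-column pass
print(int(Y2.sum()))                     # expected: 3 residual errors
Y3 = decode(Y2, H1, T1)                  # second iteration, row-by-row
print(int(Y3.sum()))                     # expected: 0 - all errors corrected
```

After the first iteration, exactly three ones remain (all in column 5, as in the worked example); the second row pass removes them.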
Performance of product codes
The product codes introduced in 1954 were the first codes based on recursive construction rules and thus in principle suitable for iterative decoding. Their inventor Peter Elias did not comment on this, but in the last twenty years this aspect, together with the availability of fast processors, has contributed to the fact that product codes are now also used in real communication systems, e.g.
- in error protection of storage media, and
- in very high data rate fiber optic systems.
Usually one uses very long product codes (large n = n_1 \cdot n_2) with the following consequence:
- For complexity reasons, neither "maximum likelihood decoding at block level" nor "syndrome decoding" (which is after all one realization of ML decoding) is applicable to the component codes \mathcal{C}_1 and \mathcal{C}_2.
- Applicable even with large n, on the other hand, is "iterative symbol-wise MAP decoding". The exchange of extrinsic and a-priori information here takes place between the two component codes. More details on this can be found in [Liv15][1].
The graph shows, for a (1024, 676) product code based on the extended Hamming code {\rm eHC} \ (32, 26) as component code,
- on the left, the bit error probability for the AWGN channel as a function of the number of iterations (I),
- on the right, the error probability of the blocks (or code words).
Here are some additional remarks:
- The code rate is R = R_1 \cdot R_2 \approx 0.66, from which the Shannon bound 10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB results.
- In the left graph one can see the influence of the iterations: at the transition from I = 1 to I=2 one gains approx. 2 \ \rm dB (at bit error rate 10^{-5}), and with I = 10 another 1 \ \rm dB. Further iterations are not worthwhile.
- All bounds mentioned in the chapter "Bounds for Block Error Probability" can be applied here as well, thus also the "truncated union bound" shown dashed in the right graph:
- {\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right ) \hspace{0.05cm}.
- The minimum distance is d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16. With the weight function of the {\rm eHC} \ (32, 26),
- W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4} + 27776 \cdot X^{6}+ 330460 \cdot X^{8} + ...\hspace{0.05cm} + X^{32},
- one obtains for the product code W_{d_{\rm min}} = 1240^2 = 1\hspace{0.05cm}537\hspace{0.05cm}600. This gives the error probability shown in the right graph.
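The truncated union bound above can be evaluated directly. The sketch below uses d_min = 16, W = 1240² and R = 676/1024 from the text; the Eb/N0 values chosen for the printout are arbitrary illustrative points, not values taken from the graph:

```python
import math

def Q(x):
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

R = 676 / 1024                 # code rate of the (1024, 676) product code
d_min = 16                     # minimum distance d1 * d2 = 4 * 4
W_dmin = 1240 ** 2             # number of minimum-weight code words

bounds = []
for ebn0_db in (2.0, 3.0, 4.0):          # illustrative Eb/N0 values in dB
    ebn0 = 10 ** (ebn0_db / 10)
    bound = W_dmin * Q(math.sqrt(d_min * 2 * R * ebn0))
    bounds.append(bound)
    print(f"10*lg(EB/N0) = {ebn0_db} dB:  Pr(Truncated Union Bound) <= {bound:.2e}")
```

As expected, the bound falls off steeply with increasing Eb/N0, since the large multiplicity W is outweighed by the Q-function argument growing with sqrt(d_min).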
Exercises for the chapter
"Exercise 4.6: Product Code Generation"
"Exercise 4.6Z: Basics of Product Codes"
"Exercise 4.7: Product Code Decoding"
"Exercise 4.7Z: Principle of Syndrome Decoding"
References
- ↑ Liva, G.: Channel Codes for Iterative Decoding. Lecture manuscript, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.