
Channel Coding/The Basics of Product Codes

From LNTwww
 
 
== Basic structure of a product code ==
 
<br>
The graphic shows the basic structure of&nbsp; &raquo;product codes&laquo;,&nbsp; which were introduced as early as 1954 by&nbsp; [https://en.wikipedia.org/wiki/Peter_Elias $\text{Peter Elias}$].

*The&nbsp; &raquo;'''two-dimensional product code'''&laquo;&nbsp; $\mathcal{C} = \mathcal{C}_1 \times \mathcal{C}_2$&nbsp; shown here is based on the two linear binary block codes with parameters&nbsp; $(n_1, \hspace{0.05cm} k_1)$&nbsp; and&nbsp; $(n_2, \hspace{0.05cm} k_2)$,&nbsp; respectively.
  
*The code word length is&nbsp; $n = n_1 \cdot n_2$.
  
The&nbsp; $n$&nbsp; encoded bits can be grouped as follows:
[[File:EN_KC_T_4_2_S1_v1.png|right|frame|Basic structure of a product code|class=fit]]
*The&nbsp; $k = k_1 \cdot k_2$&nbsp; information bits are arranged in the&nbsp; $k_2 \times k_1$&nbsp; matrix&nbsp; $\mathbf{U}$.

*The code rate is equal to the product of the code rates of the two base codes:

::<math>R = k/n = (k_1/n_1) \cdot (k_2/n_2) = R_1 \cdot R_2 \hspace{0.05cm}.</math>
  
*The upper right matrix&nbsp; $\mathbf{P}^{(1)}$&nbsp; with dimension&nbsp; $k_2 \times m_1$&nbsp; contains the parity bits with respect to the code&nbsp; $\mathcal{C}_1$.

*In each of the&nbsp; $k_2$&nbsp; rows,&nbsp; $m_1 = n_1 - k_1$&nbsp; check bits are added to the&nbsp; $k_1$&nbsp; information bits,&nbsp; as described in an earlier chapter using the example of&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes|$\text{Hamming codes}$]].<br>
  
*The lower left matrix&nbsp; $\mathbf{P}^{(2)}$&nbsp; of dimension&nbsp; $m_2 \times k_1$&nbsp; contains the check bits for the second component code&nbsp; $\mathcal{C}_2$.&nbsp; Here the encoding&nbsp; $($and also the decoding$)$&nbsp; is done column by column: &nbsp; in each of the&nbsp; $k_1$&nbsp; columns,&nbsp; the&nbsp; $k_2$&nbsp; information bits are supplemented by&nbsp; $m_2 = n_2 - k_2$&nbsp; check bits.<br>
  
*The&nbsp; $m_2 \times m_1$&nbsp; matrix&nbsp; $\mathbf{P}^{(12)}$&nbsp; on the bottom right is called&nbsp; "checks-on-checks".&nbsp; Here the two previously generated parity matrices&nbsp; $\mathbf{P}^{(1)}$&nbsp; and&nbsp; $\mathbf{P}^{(2)}$&nbsp; are linked according to the parity-check equations.<br><br>
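The four-block arrangement described above can be sketched in a few lines of Python&nbsp; (not part of the original text;&nbsp; a minimal sketch assuming systematic component codes&nbsp; $G_i = [\mathbf{I} \; | \; \mathbf{A}_i]$,&nbsp; with the parity submatrices read off the transposed parity-check matrices of the&nbsp; $(42, 12)$&nbsp; example later in this section):

```python
import numpy as np

# Parity parts of the two systematic component codes (assumption:
# HC (7, 4, 3) for the rows, shortened HC (6, 3, 3) for the columns).
A1 = np.array([[1, 0, 1],    # (7, 4) row code:  k1 = 4, m1 = 3
               [1, 1, 0],
               [0, 1, 1],
               [1, 1, 1]])
A2 = np.array([[1, 1, 0],    # (6, 3) column code:  k2 = 3, m2 = 3
               [1, 0, 1],
               [0, 1, 1]])

def product_encode(U, A1, A2):
    """Arrange the k2 x k1 info matrix U and attach all three parity blocks."""
    P1  = U @ A1 % 2            # row parities          (k2 x m1)
    P2  = A2.T @ U % 2          # column parities       (m2 x k1)
    P12 = A2.T @ P1 % 2         # checks-on-checks      (m2 x m1)
    return np.block([[U, P1], [P2, P12]])

U = np.eye(3, 4, dtype=int)     # an arbitrary 3 x 4 information matrix
X = product_encode(U, A1, A2)   # 6 x 7 code matrix  =>  n = 42 encoded bits
```

By construction every row of&nbsp; $\mathbf{X}$&nbsp; is a code word of the row code and every column a code word of the column code,&nbsp; matching the properties listed in the conclusion box below.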
  
 
{{BlaueBox|TEXT=   
 
$\text{Conclusions:}$&nbsp; &raquo;'''All product codes'''&laquo;&nbsp; according to the above graphic have the following properties:
 
*For linear component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$,&nbsp; the product code&nbsp; $\mathcal{C} = \mathcal{C}_1 \times \mathcal{C}_2$&nbsp; is also linear.<br>

*Each row of&nbsp; $\mathcal{C}$&nbsp; returns a code word of&nbsp; $\mathcal{C}_1$&nbsp; and each column returns a code word of&nbsp; $\mathcal{C}_2$.<br>
  
*The sum of two rows again gives a code word of&nbsp; $\mathcal{C}_1$&nbsp; due to linearity.<br>

*Also,&nbsp; the sum of two columns gives a valid code word of&nbsp; $\mathcal{C}_2$.<br>

*Each product code also includes the&nbsp; "zero word"&nbsp; $\underline{0}$&nbsp; $($a vector of&nbsp; $n$&nbsp; "zeros"$)$.<br>

*The minimum distance of&nbsp; $\mathcal{C}$&nbsp; is&nbsp; $d_{\rm min} = d_1 \cdot d_2$,&nbsp; where&nbsp; $d_i$&nbsp; indicates the minimum distance of&nbsp; $\mathcal{C}_i$.}}
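The last property,&nbsp; $d_{\rm min} = d_1 \cdot d_2$,&nbsp; can be checked by brute force for a very small example&nbsp; (a sketch,&nbsp; not part of the original text;&nbsp; both component codes are here assumed to be the&nbsp; $(3, 2)$&nbsp; single parity-check code with&nbsp; $d_1 = d_2 = 2$,&nbsp; giving a&nbsp; $(9, 4)$&nbsp; product code):

```python
import itertools
import numpy as np

A = np.array([[1], [1]])          # parity part of the (3, 2, 2) SPC code

def encode(U):
    """2 x 2 information matrix -> 3 x 3 product code matrix."""
    P1  = U @ A % 2               # row parities
    P2  = A.T @ U % 2             # column parities
    P12 = A.T @ P1 % 2            # checks-on-checks
    return np.block([[U, P1], [P2, P12]])

# Enumerate all 2^4 information patterns and record nonzero weights.
weights = []
for bits in itertools.product([0, 1], repeat=4):
    X = encode(np.array(bits).reshape(2, 2))
    if X.any():
        weights.append(int(X.sum()))

d_min = min(weights)              # expected: d1 * d2 = 2 * 2 = 4
```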
  
 
== Iterative syndrome decoding of product codes ==
 
<br>
We now consider the case where a product code with matrix&nbsp; $\mathbf{X}$&nbsp; is transmitted over a binary channel.

*Let the received matrix be&nbsp; $\mathbf{Y} = \mathbf{X} + \mathbf{E}$,&nbsp; where&nbsp; $\mathbf{E}$&nbsp; denotes the&nbsp; "error matrix".

*Let all elements of the matrices&nbsp; $\mathbf{X}$,&nbsp; $\mathbf{E}$&nbsp; and&nbsp; $\mathbf{Y}$&nbsp; be binary,&nbsp; that is&nbsp; $0$&nbsp; or&nbsp; $1$.<br>
  
For the decoding of the two component codes,&nbsp; the syndrome decoding according to the chapter&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Block_diagram_and_requirements|"Decoding linear block codes"]]&nbsp; is suitable.
  
 
In the two-dimensional case this means:
 
#One first decodes the&nbsp; $n_2$&nbsp; rows of the received matrix&nbsp; $\mathbf{Y}$,&nbsp; based on the parity-check matrix&nbsp; $\mathbf{H}_1$&nbsp; of the component code&nbsp; $\mathcal{C}_1$.&nbsp; Syndrome decoding is one way to do this.
#For this one forms in each case the so-called&nbsp; "syndrome" &nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$,&nbsp; where the vector&nbsp; $\underline{y}$&nbsp; of length&nbsp; $n_1$&nbsp; indicates the current row of&nbsp; $\mathbf{Y}$&nbsp; and&nbsp; "T"&nbsp; stands for&nbsp; "transposed".
#For the calculated&nbsp; $\underline{s} = \underline{s}_{\mu}$ &nbsp; $($with&nbsp; $0 \le \mu < 2^{n_1 -k_1})$,&nbsp; one finds in a prepared syndrome table the corresponding probable error pattern&nbsp; $\underline{e} = \underline{e}_{\mu}$.
#If there are only a few errors within the row,&nbsp; then &nbsp; $\underline{y} + \underline{e}$ &nbsp; matches the sent row vector&nbsp; $\underline{x}$.
#However,&nbsp; if too many errors have occurred,&nbsp; incorrect corrections will result.
#Afterwards one decodes the&nbsp; $n_1$&nbsp; columns of the&nbsp; $($corrected$)$&nbsp; received matrix&nbsp; $\mathbf{Y}$,&nbsp; this time based on the&nbsp; $($transposed$)$&nbsp; parity-check matrix&nbsp; $\mathbf{H}_2^{\rm T}$&nbsp; of the component code&nbsp; $\mathcal{C}_2$.
#For this,&nbsp; one forms the syndrome &nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_2^{\rm T}$,&nbsp; where the vector&nbsp; $\underline{y}$&nbsp; of length&nbsp; $n_2$&nbsp; denotes the considered column of&nbsp; $\mathbf{Y}$.
#From a second syndrome table&nbsp; $($valid for code&nbsp; $\mathcal{C}_2)$,&nbsp; we find for the computed&nbsp; $\underline{s}_{\mu}$&nbsp; $($with&nbsp; $0 \le \mu < 2^{n_2 -k_2})$&nbsp; the probable error pattern&nbsp; $\underline{e} = \underline{e}_{\mu}$&nbsp; of the considered column.
#After correcting all columns,&nbsp; the corrected matrix&nbsp; $\mathbf{Y}$&nbsp; is available.&nbsp; Now one can do another row and then a column decoding &nbsp; &#8658; &nbsp; second iteration,&nbsp; and so on.<br><br>
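The table-based single-error correction of steps 2 to 4 can be sketched as follows&nbsp; (a minimal Python sketch,&nbsp; not part of the original text;&nbsp; it assumes the Hamming&nbsp; $\text{(7, 4, 3)}$&nbsp; parity-check matrix used in the example below,&nbsp; and the helper names are hypothetical):

```python
import numpy as np

# H1^T of the HC (7, 4, 3) in the transposed form used in this section.
H1T = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1],
                [1, 0, 0], [0, 1, 0], [0, 0, 1]])

# "Prepared syndrome table": map each syndrome to its most probable
# error pattern (here the all-zero pattern and all single-bit errors).
n = H1T.shape[0]
table = {(0, 0, 0): np.zeros(n, dtype=int)}
for pos in range(n):
    e = np.zeros(n, dtype=int)
    e[pos] = 1
    table[tuple(e @ H1T % 2)] = e      # syndrome of a single error at 'pos'

def decode_row(y):
    """Form the syndrome and add the tabulated error pattern (steps 2-4)."""
    s = tuple(y @ H1T % 2)
    return (y + table[s]) % 2

y_hat = decode_row(np.array([0, 0, 1, 0, 0, 0, 0]))   # single error corrected
```

Column decoding works identically with the second syndrome table built from&nbsp; $\mathbf{H}_2^{\rm T}$.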
  
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; To illustrate the decoding algorithm,&nbsp; we again consider the&nbsp; $(42, 12)$&nbsp; product code,&nbsp; based on
*the Hamming code&nbsp; $\text{HC (7, 4, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_1$,<br>

*the shortened Hamming code&nbsp; $\text{HC (6, 3, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_2$.<br>
  
[[File:EN_KC_T_4_2_S2a_v1.png|right|frame|Syndrome decoding of the&nbsp; $(42, 12)$&nbsp; product code|class=fit]]
[[File:EN KC T 4 2 S2b v2 neu.png|right|frame|Syndrome table for the code&nbsp; $\mathcal{C}_1$]]
[[File:EN_KC_T_4_2_S2c_v2.png|right|frame|Syndrome table for the code&nbsp; $\mathcal{C}_2$]]
  
The left graph shows the received matrix&nbsp; $\mathbf{Y}$.

<u>Note:</u> &nbsp; For display reasons,
*the code matrix&nbsp; $\mathbf{X}$&nbsp; was chosen to be a&nbsp; $6 \times 7$&nbsp; zero matrix,

*so the eight&nbsp; "ones"&nbsp; in&nbsp; $\mathbf{Y}$&nbsp; represent transmission errors.<br>
 
  
&rArr; &nbsp; The&nbsp; &raquo;<b>row-by-row syndrome decoding</b>&laquo;&nbsp; is done via the syndrome&nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$&nbsp; with
 
:$$\boldsymbol{\rm H}_1^{\rm T} =
\begin{pmatrix}
1 &0 &1 \\
1 &1 &0 \\
0 &1 &1 \\
1 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{pmatrix}  \hspace{0.05cm}. $$
<br clear=all>
In particular:
*<b>Row 1</b> &nbsp;&#8658;&nbsp; Single error correction is successful&nbsp; $($also in rows 3,&nbsp; 4 and 6$)$:
 
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}
\hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>

::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right )
\hspace{0.05cm}.</math>
  
*<b>Row 2</b> &nbsp; $($contains two errors$)$ &nbsp; &#8658; &nbsp; Error correction concerning bit&nbsp; '''5''':
  
 
::<math>\underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}
\hspace{-0.05cm}= \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4</math>

::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right )
\hspace{0.05cm}.</math>
  
*<b>Row 5</b> &nbsp;$($also contains two errors$)$ &nbsp;&#8658;&nbsp; Error correction concerning bit&nbsp; '''3''':
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}
\hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>

::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right )
\hspace{0.05cm}.</math>
  
&rArr; &nbsp; The&nbsp; &raquo;<b>column-by-column syndrome decoding</b>&laquo;&nbsp; removes all single errors in columns&nbsp; 1,&nbsp; 2,&nbsp; 3,&nbsp; 4&nbsp; and&nbsp; 7.
*<b>Column 5</b> &nbsp;$($contains two errors$)$ &nbsp; &#8658; &nbsp; Error correction concerning bit&nbsp; '''4''':
 
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T}
\hspace{-0.05cm}= \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}
\begin{pmatrix}
1 &1 &0 \\
1 &0 &1 \\
0 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{pmatrix} = \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4</math>

::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0 \right )
\hspace{0.05cm}.</math>
  
&rArr; &nbsp; The remaining three errors are corrected by the&nbsp; &raquo;<b>second iteration loop</b>&laquo;,&nbsp; decoding again row by row.

Whether all errors of a block are correctable depends on the error pattern.&nbsp; Here we refer to&nbsp; [[Aufgaben:Aufgabe_4.7:_Decodierung_von_Produktcodes|"Exercise 4.7"]].}}<br>
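The row-by-row pass of Example 1 can be reproduced numerically&nbsp; (a sketch,&nbsp; not part of the original text;&nbsp; $\mathbf{H}_1^{\rm T}$&nbsp; is copied from the example,&nbsp; the helper function is hypothetical):

```python
import numpy as np

# H1^T of the HC (7, 4, 3) as given in the example above.
H1T = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1],
                [1, 0, 0], [0, 1, 0], [0, 0, 1]])

def correct_single_error(y):
    """Flip the bit whose row of H1^T matches the syndrome (if any)."""
    s = y @ H1T % 2
    y = y.copy()
    for pos, col in enumerate(H1T):
        if np.array_equal(col, s):
            y[pos] ^= 1            # assumed single error at position 'pos'
            break
    return y

row1 = correct_single_error(np.array([0, 0, 1, 0, 0, 0, 0]))  # -> all zeros
row2 = correct_single_error(np.array([1, 0, 0, 0, 0, 0, 1]))  # flips bit 5
row5 = correct_single_error(np.array([0, 0, 0, 1, 1, 0, 0]))  # flips bit 3
```

Rows 2 and 5 each contain two errors,&nbsp; so the single-error assumption produces the miscorrections described in the text; these are then removed by the column pass and the second iteration.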
  
 
== Performance of product codes ==
 
<br>
The product codes introduced in 1954 were the first codes based on recursive construction rules and thus,&nbsp; in principle,&nbsp; suitable for iterative decoding.&nbsp; The inventor Peter Elias did not comment on this,&nbsp; but in the last twenty years this aspect,&nbsp; together with the availability of fast processors,&nbsp; has contributed to the fact that product codes are now also used in real communication systems,&nbsp; e.g.
*in error protection of storage media,&nbsp; and

*in fiber optic systems with very high data rates.<br>
  
  
Usually one uses very long product codes &nbsp; $($large&nbsp; $n = n_1 \cdot n_2)$ &nbsp; with the following consequence:
*For complexity reasons,&nbsp; neither the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|$\text{maximum likelihood decoding at block level}$]]&nbsp; is applicable for the component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$,&nbsp; nor the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|$\text{syndrome decoding}$]],&nbsp; which is after all a realization form of maximum likelihood decoding.

*Applicable,&nbsp; on the other hand,&nbsp; even with large&nbsp; $n$,&nbsp; is the&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Bit-wise_soft-in_soft-out_decoding|$\text{iterative symbol-wise MAP decoding}$]].&nbsp; The exchange of extrinsic and a priori information happens here between the two component codes.&nbsp; More details on this can be found in&nbsp; [Liv15]<ref name='Liv15'>Liva, G.:&nbsp; Channels Codes for Iterative Decoding.&nbsp; Lecture manuscript, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.</ref>.<br>
  
[[File:EN_KC_T_4_2_S3_v3.png|right|frame|Bit and block error probability of a&nbsp; (1024,676)&nbsp; product code at AWGN|class=fit]]
The graph shows for a&nbsp; $(1024, 676)$&nbsp; product code,&nbsp; based on the&nbsp; extended Hamming code&nbsp; ${\rm eHC} \ (32, 26)$&nbsp; as component code,
*on the left,&nbsp; the bit error probability as a function of the AWGN parameter&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$&nbsp; and the number of iterations&nbsp; $(I)$,

*on the right,&nbsp; the error probability of the blocks&nbsp; $($or code words$)$.
 
  
 
Here are some additional remarks:
 
*The code rate is &nbsp; $R = R_1 \cdot R_2 = 0.66$;&nbsp; this results in the Shannon bound&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB$.<br>

*The left graph shows the influence of the iterations.&nbsp; At the transition from&nbsp; $I = 1$&nbsp; to&nbsp; $I=2$&nbsp; one gains&nbsp; $\approx 2 \ \rm dB$&nbsp; $($at bit error rate&nbsp; $10^{-5})$&nbsp; and with&nbsp; $I = 10$&nbsp; another&nbsp; $\rm dB$.&nbsp; Further iterations are not worthwhile.<br>
  
*All bounds mentioned in the chapter&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|"Bounds for the Block Error Probability"]]&nbsp; can be applied here as well,&nbsp; e.g. the&nbsp; "truncated union bound"&nbsp; $($dashed curve in the right graph$)$:

::<math>{\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right )
  \hspace{0.05cm}.</math>
  
*The minimum distance is&nbsp; $d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16$.&nbsp; With the weight function of the&nbsp; ${\rm eHC} \ (32, 26)$,

::<math>W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4} + 27776 \cdot X^{6}+
330460 \cdot X^{8} + \text{...}\hspace{0.05cm} + X^{32},</math>

:we obtain for the product code:&nbsp; $W_{d,\hspace{0.05cm}\rm min} = 1240^2 = 15\hspace{0.05cm}376\hspace{0.05cm}000$.

*This gives the block error probability bound shown in the graph on the right.<br>
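The truncated union bound can be evaluated directly&nbsp; (a sketch,&nbsp; not part of the original text;&nbsp; the numbers are taken from the remarks above,&nbsp; and the Q function is expressed through the complementary error function):

```python
import math

# Truncated union bound for the (1024, 676) product code:
#   Pr = W_dmin * Q( sqrt(d_min * 2 * R * Eb/N0) )
W_dmin = 1240 ** 2          # = 15 376 000 minimum-weight code words
d_min  = 16
R      = 676 / 1024         # ~ 0.66

def Q(x):
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def truncated_union_bound(ebn0_db):
    """Bound on the block error probability at 10*lg(Eb/N0) = ebn0_db."""
    ebn0 = 10 ** (ebn0_db / 10)
    return W_dmin * Q(math.sqrt(d_min * 2 * R * ebn0))

# The bound falls off rapidly with increasing AWGN parameter:
values = [truncated_union_bound(x) for x in (2.0, 3.0, 4.0)]
```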
  
 
== Exercises for the chapter==
 
<br>
[[Aufgaben:Exercise_4.6:_Product_Code_Generation|Exercise 4.6: Product Code Generation]]

[[Aufgaben:Exercise_4.6Z:_Basics_of_Product_Codes|Exercise 4.6Z: Basics of Product Codes]]

[[Aufgaben:Exercise_4.7:_Product_Code_Decoding|Exercise 4.7: Product Code Decoding]]

[[Aufgaben:Exercise_4.7Z:_Principle_of_Syndrome_Decoding|Exercise 4.7Z: Principle of Syndrome Decoding]]
  
 
==References==
 
<references/>

Latest revision as of 18:26, 16 January 2023
