Channel Coding/The Basics of Product Codes

From LNTwww

{{Header
|Untermenü=Iterative Decoding Methods
|Vorherige Seite=Soft-in Soft-Out Decoder
|Nächste Seite=The Basics of Turbo Codes
}}
  
== Basic structure of a product code ==
<br>
The graphic shows the basic structure of&nbsp; &raquo;product codes&laquo;,&nbsp; which were introduced as early as 1954 by&nbsp; [https://en.wikipedia.org/wiki/Peter_Elias $\text{Peter Elias}$].
*The&nbsp; &raquo;'''two-dimensional product code'''&laquo;&nbsp; $\mathcal{C} = \mathcal{C}_1 \times \mathcal{C}_2$&nbsp; shown here is based on the two linear binary block codes with the parameters&nbsp; $(n_1, \ k_1)$&nbsp; and&nbsp; $(n_2, \ k_2)$,&nbsp; respectively.
  
*The code word length is&nbsp; $n = n_1 \cdot n_2$.
 
The&nbsp; $n$&nbsp; encoded bits can be grouped as follows:
[[File:EN_KC_T_4_2_S1_v1.png|right|frame|Basic structure of a product code|class=fit]]
*The&nbsp; $k = k_1 \cdot k_2$&nbsp; information bits are arranged in the&nbsp; $k_2 \times k_1$&nbsp; matrix&nbsp; $\mathbf{U}$.
  
*The code rate is equal to the product of the code rates of the two base codes:
:$$R = k/n = (k_1/n_1) \cdot (k_2/n_2) = R_1 \cdot R_2.$$
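For example,&nbsp; with the two component codes used later in this chapter &nbsp; &rArr; &nbsp; Hamming code&nbsp; $\text{HC (7, 4, 3)}$&nbsp; and truncated Hamming code&nbsp; $\text{HC (6, 3, 3)}$,&nbsp; one obtains the&nbsp; $(42, 12)$&nbsp; product code with
:$$R = R_1 \cdot R_2 = {4}/{7} \cdot {3}/{6} = {12}/{42} \approx 0.286.$$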
  
*The upper right matrix&nbsp; $\mathbf{P}^{(1)}$&nbsp; with dimension&nbsp; $k_2 \times m_1$&nbsp; contains the parity bits with respect to the code&nbsp; $\mathcal{C}_1$.
*In each of the&nbsp; $k_2$&nbsp; rows,&nbsp; $m_1 = n_1 - k_1$&nbsp; check bits are added to the&nbsp; $k_1$&nbsp; information bits,&nbsp; as described in an earlier chapter using the example of&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes|$\text{Hamming codes}$]].<br>
*The lower left matrix&nbsp; $\mathbf{P}^{(2)}$&nbsp; of dimension&nbsp; $m_2 \times k_1$&nbsp; contains the check bits with respect to the second component code&nbsp; $\mathcal{C}_2$.&nbsp; Here the encoding&nbsp; $($and also the decoding$)$&nbsp; is done column by column: &nbsp; In each of the&nbsp; $k_1$&nbsp; columns,&nbsp; the&nbsp; $k_2$&nbsp; information bits are supplemented by&nbsp; $m_2 = n_2 -k_2$&nbsp; check bits.<br>
*The&nbsp; $m_2 \times m_1$&nbsp; matrix&nbsp; $\mathbf{P}^{(12)}$&nbsp; on the bottom right is called&nbsp; "checks&ndash;on&ndash;checks".&nbsp; Here the two previously generated parity matrices&nbsp; $\mathbf{P}^{(1)}$&nbsp; and&nbsp; $\mathbf{P}^{(2)}$&nbsp; are linked according to the parity-check equations.<br><br>
  
{{BlaueBox|TEXT= 
$\text{Conclusions:}$&nbsp; &raquo;'''All product codes'''&laquo;&nbsp; according to the above graphic have the following properties:
*For linear component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$,&nbsp; the product code&nbsp; $\mathcal{C} = \mathcal{C}_1 \times \mathcal{C}_2$&nbsp; is also linear.<br>
*Each row of&nbsp; $\mathcal{C}$&nbsp; represents a code word of&nbsp; $\mathcal{C}_1$&nbsp; and each column a code word of&nbsp; $\mathcal{C}_2$.<br>
*Due to linearity,&nbsp; the sum of two rows again gives a code word of&nbsp; $\mathcal{C}_1$.<br>
  
*Likewise,&nbsp; the sum of two columns gives a valid code word of&nbsp; $\mathcal{C}_2$.<br>
*Each product code also includes the&nbsp; "zero word"&nbsp; $\underline{0}$&nbsp; $($a vector of&nbsp; $n$&nbsp; "zeros"$)$.<br>
*The minimum distance of&nbsp; $\mathcal{C}$&nbsp; is&nbsp; $d_{\rm min} = d_1 \cdot d_2$,&nbsp; where&nbsp; $d_i$&nbsp; denotes the minimum distance of&nbsp; $\mathcal{C}_i$.}}
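The two-dimensional construction can be reproduced in a few lines.&nbsp; The following sketch is purely illustrative and not part of the article:&nbsp; as an assumption it uses two small&nbsp; $(3, 2)$&nbsp; single parity-check codes with systematic generator matrices as component codes,&nbsp; and it confirms that encoding the rows first or the columns first yields the same code matrix,&nbsp; which is why the&nbsp; "checks-on-checks"&nbsp; block is well defined:

```python
import numpy as np

# Two small systematic component codes, chosen here only for illustration:
# C1 = (n1 = 3, k1 = 2) and C2 = (n2 = 3, k2 = 2) single parity-check codes.
G1 = np.array([[1, 0, 1],
               [0, 1, 1]])     # generator matrix of C1 (encodes the rows)
G2 = np.array([[1, 0, 1],
               [0, 1, 1]])     # generator matrix of C2 (encodes the columns)

U = np.array([[1, 0],
              [0, 1]])         # k2 x k1 information matrix

# Rows first (C1), then columns (C2); all arithmetic is modulo 2:
X_rows = U @ G1 % 2            # k2 x n1
X = (X_rows.T @ G2 % 2).T      # n2 x n1 = complete code matrix

# Columns first (C2), then rows (C1):
X_cols = (U.T @ G2 % 2).T      # n2 x k1
X_alt = X_cols @ G1 % 2        # n2 x n1

assert np.array_equal(X, X_alt)   # checks-on-checks: the order is irrelevant
print(X)   # every row is a C1 code word, every column a C2 code word
```

The assertion holds for any linear component codes; for nonlinear ones the two encoding orders generally differ.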
== Iterative syndrome decoding of product codes ==
<br>
We now consider the case where a product code with matrix&nbsp; $\mathbf{X}$&nbsp; is transmitted over a binary channel.
*Let the received matrix be&nbsp; $\mathbf{Y} = \mathbf{X} + \mathbf{E}$,&nbsp; where&nbsp; $\mathbf{E}$&nbsp; denotes the&nbsp; "error matrix".
*Let all elements of the matrices&nbsp; $\mathbf{X}, \ \mathbf{E}$&nbsp; and&nbsp; $\mathbf{Y}$&nbsp; be binary,&nbsp; that is&nbsp; $0$&nbsp; or&nbsp; $1$.<br>
For the decoding of the two component codes,&nbsp; the syndrome decoding according to the chapter&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Block_diagram_and_requirements| "Decoding linear block codes"]]&nbsp; is suitable.

In the two-dimensional case this means:
#One first decodes the&nbsp; $n_2$&nbsp; rows of the received matrix&nbsp; $\mathbf{Y}$,&nbsp; based on the parity-check matrix&nbsp; $\mathbf{H}_1$&nbsp; of the component code&nbsp; $\mathcal{C}_1$.&nbsp; Syndrome decoding is one way to do this.
#For each row one forms the so-called&nbsp; "syndrome"&nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$,&nbsp; where the vector&nbsp; $\underline{y}$&nbsp; of length&nbsp; $n_1$&nbsp; denotes the current row of&nbsp; $\mathbf{Y}$&nbsp; and&nbsp; "T"&nbsp; stands for&nbsp; "transposed".
#For the calculated&nbsp; $\underline{s}_{\mu}$&nbsp; $($with&nbsp; $0 \le \mu < 2^{n_1 -k_1})$,&nbsp; one then finds the probable error pattern&nbsp; $\underline{e} = \underline{e}_{\mu}$&nbsp; in a prepared syndrome table.
#If there are only a few errors within the row,&nbsp; then&nbsp; $\underline{y} + \underline{e}$&nbsp; matches the transmitted row vector&nbsp; $\underline{x}$.
#However,&nbsp; if too many errors have occurred,&nbsp; incorrect corrections will result.
#Afterwards one decodes the&nbsp; $n_1$&nbsp; columns of the&nbsp; $($corrected$)$&nbsp; received matrix&nbsp; $\mathbf{Y}\hspace{0.03cm}'$,&nbsp; this time based on the&nbsp; $($transposed$)$&nbsp; parity-check matrix&nbsp; $\mathbf{H}_2^{\rm T}$&nbsp; of the component code&nbsp; $\mathcal{C}_2$.
#For this,&nbsp; one forms the syndrome&nbsp; $\underline{s} = \underline{y}\hspace{0.03cm}' \cdot \mathbf{H}_2^{\rm T}$,&nbsp; where the vector&nbsp; $\underline{y}\hspace{0.03cm}'$&nbsp; of length&nbsp; $n_2$&nbsp; denotes the considered column of&nbsp; $\mathbf{Y}\hspace{0.03cm}'$.
#From a second syndrome table&nbsp; $($valid for code&nbsp; $\mathcal{C}_2)$,&nbsp; one finds for the computed&nbsp; $\underline{s}_{\mu}$&nbsp; $($with&nbsp; $0 \le \mu < 2^{n_2 -k_2})$&nbsp; the probable error pattern&nbsp; $\underline{e} = \underline{e}_{\mu}$&nbsp; of the processed column.
#After correcting all columns,&nbsp; the matrix&nbsp; $\mathbf{Y}\hspace{0.03cm}''$&nbsp; is present.&nbsp; Now one can perform another row decoding and then a column decoding &nbsp; &#8658; &nbsp; second iteration,&nbsp; and so on.<br><br>
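The numbered procedure can be sketched compactly.&nbsp; The following is a simplified illustration,&nbsp; not the full algorithm:&nbsp; instead of a complete syndrome table it only corrects single-error patterns&nbsp; $($syndrome equal to a row of&nbsp; $\mathbf{H}^{\rm T})$.&nbsp; The two transposed parity-check matrices are those of the&nbsp; $(42, 12)$&nbsp; product code of Example 1;&nbsp; the error positions of the test matrix are an assumption chosen for illustration:

```python
import numpy as np

# Transposed parity-check matrices of HC (7, 4, 3) and of the
# truncated HC (6, 3, 3), as given in this section's example:
H1T = np.array([[1,0,1], [1,1,0], [0,1,1], [1,1,1], [1,0,0], [0,1,0], [0,0,1]])
H2T = np.array([[1,1,0], [1,0,1], [0,1,1], [1,0,0], [0,1,0], [0,0,1]])

def correct_single(y, HT):
    """Syndrome decoding reduced to single-error patterns:
    if s = y * H^T equals the i-th row of H^T, bit i is flipped."""
    s = y @ HT % 2
    if s.any():
        for i, row in enumerate(HT):
            if np.array_equal(row, s):
                y = y.copy()
                y[i] ^= 1
                break
    return y

def decode(Y, iterations=1):
    """One iteration = row-by-row decoding followed by column-by-column."""
    Y = Y.copy()
    for _ in range(iterations):
        Y = np.array([correct_single(y, H1T) for y in Y])       # rows
        Y = np.array([correct_single(y, H2T) for y in Y.T]).T   # columns
    return Y

# All-zero code matrix with three transmission errors, two of them in
# the second row (error positions chosen for illustration):
Y = np.zeros((6, 7), dtype=int)
Y[0, 2] = Y[1, 0] = Y[1, 6] = 1

print(decode(Y))   # the second row is first miscorrected (bit 5),
                   # the column decoding then removes the residual errors
```

A real decoder would also hold coset leaders of weight&nbsp; $2$&nbsp; and higher in its syndrome tables;&nbsp; the single-error restriction merely keeps the sketch short.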
  
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; To illustrate the decoding algorithm,&nbsp; we again consider the&nbsp; $(42, 12)$&nbsp; product code,&nbsp; based on
*the Hamming code&nbsp; $\text{HC (7, 4, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_1$,<br>
*the truncated Hamming code&nbsp; $\text{HC (6, 3, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_2$.<br>
  
[[File:EN_KC_T_4_2_S2a_v1.png|right|frame|Syndrome decoding of the&nbsp; $(42, 12)$&nbsp; product code|class=fit]]
[[File:EN KC T 4 2 S2b v2 neu.png|right|frame|Syndrome table for the code $\mathcal{C}_1$]]
[[File:EN_KC_T_4_2_S2c_v2.png|right|frame|Syndrome table for the code $\mathcal{C}_2$]]
  
The left graph shows the received matrix&nbsp; $\mathbf{Y}$.

<u>Note:</u> &nbsp; For display reasons,
*the code matrix&nbsp; $\mathbf{X}$&nbsp; was chosen to be a&nbsp; $6 \times 7$&nbsp; zero matrix,
*so the eight&nbsp; "ones"&nbsp; in&nbsp; $\mathbf{Y}$&nbsp; represent transmission errors.<br>
  
&rArr; &nbsp; The&nbsp; &raquo;<b>row-by-row syndrome decoding</b>&laquo;&nbsp; is done via the syndrome&nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$&nbsp; with
:$$\boldsymbol{\rm H}_1^{\rm T} = 
\begin{pmatrix}
1 &0 &1 \\
1 &1 &0 \\
0 &1 &1 \\
1 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{pmatrix}  \hspace{0.05cm}. $$

In particular:
*<b>Row 1</b> &nbsp;&#8658;&nbsp; Single error correction is successful&nbsp; $($also in rows 3,&nbsp; 4 and 6$)$:
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}  
   \hspace{-0.05cm}=
\left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.</math>
  
*<b>Row 2</b> &nbsp; $($contains two errors$)$ &nbsp; &#8658; &nbsp; Miscorrection concerning bit&nbsp; '''5''':
::<math>\underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}  
   \hspace{-0.05cm}=
\left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{0.05cm}.</math>
  
*<b>Row 5</b> &nbsp;$($also contains two errors$)$ &nbsp;&#8658;&nbsp; Miscorrection concerning bit&nbsp; '''3''':
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T}  
   \hspace{-0.05cm}=
\left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.</math>
  
&rArr; &nbsp; The&nbsp; &raquo;<b>column-by-column syndrome decoding</b>&laquo;&nbsp; removes all remaining single errors in columns&nbsp; 1,&nbsp; 3,&nbsp; 4&nbsp; and&nbsp; 7.
*<b>Column 5</b> &nbsp;$($contains two errors$)$ &nbsp; &#8658; &nbsp; Miscorrection concerning bit&nbsp; '''4''':
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T}  
   \hspace{-0.05cm}=  \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm} \begin{pmatrix}
1 &1 &0 \\
1 &0 &1 \\
0 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{pmatrix} =
\left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.</math>
  
&rArr; &nbsp; The remaining three errors are corrected by the row-by-row decoding in the&nbsp; &raquo;<b>second iteration</b>&laquo;.<br>

Whether all errors of a block are correctable depends on the error pattern.&nbsp; For this we refer to&nbsp; [[Aufgaben:Aufgabe_4.7:_Decodierung_von_Produktcodes|"Exercise 4.7"]].}}<br>
  
== Performance of product codes ==
<br>
The product codes introduced in 1954 were the first codes based on recursive construction rules and thus,&nbsp; in principle,&nbsp; suitable for iterative decoding.&nbsp; Their inventor Peter Elias did not comment on this aspect,&nbsp; but in the last twenty years it has,&nbsp; together with the availability of fast processors,&nbsp; contributed to product codes now also being used in real communication systems,&nbsp; e.g.
*for error protection of storage media,&nbsp; and
*in fiber optic systems with very high data rates.<br>

Usually very long product codes are used &nbsp; $($large&nbsp; $n = n_1 \cdot n_2)$ &nbsp; with the following consequence:
*For complexity reasons,&nbsp; neither the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|$\text{maximum likelihood decoding at block level}$]]&nbsp; nor the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|$\text{syndrome decoding}$]]&nbsp; $($which is after all a realization form of maximum likelihood decoding$)$&nbsp; is applicable to the component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$.
*Applicable,&nbsp; on the other hand,&nbsp; even with large&nbsp; $n$,&nbsp; is the&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Bit-wise_soft-in_soft-out_decoding|$\text{iterative symbol-wise MAP decoding}$]].&nbsp; The exchange of extrinsic and a-priori information here takes place between the two component codes.&nbsp; More details on this can be found in&nbsp; [Liv15]<ref name='Liv15'>Liva, G.:&nbsp; Channel Codes for Iterative Decoding.&nbsp; Lecture manuscript, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.</ref>.<br>
  
[[File:EN_KC_T_4_2_S3_v3.png|right|frame|Bit and block error probability of a&nbsp; $(1024, 676)$&nbsp; product code at AWGN |class=fit]]
The graph shows for a&nbsp; $(1024, 676)$&nbsp; product code,&nbsp; based on the extended Hamming code&nbsp; ${\rm eHC} \ (32, 26)$&nbsp; for both component codes,
*on the left,&nbsp; the bit error probability as a function of the AWGN parameter&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$&nbsp; and of the number of iterations&nbsp; $(I)$,
*on the right,&nbsp; the error probability of the blocks&nbsp; $($or code words$)$.


Here are some additional remarks:
*The code rate is&nbsp; $R = R_1 \cdot R_2 = 0.66$,&nbsp; which results in the Shannon bound&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB$.<br>
*The left graph shows the influence of the iterations.&nbsp; At the transition from&nbsp; $I = 1$&nbsp; to&nbsp; $I=2$&nbsp; one gains&nbsp; $\approx 2 \ \rm dB$&nbsp; $($at&nbsp; $\rm BER =10^{-5})$&nbsp; and with&nbsp; $I = 10$&nbsp; another&nbsp; $\rm dB$.&nbsp; Further iterations are not worthwhile.<br>
*All bounds mentioned in the chapter&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability| "Bounds for the Block Error Probability"]]&nbsp; can be applied here as well,&nbsp; e.g. the&nbsp; "truncated union bound"&nbsp; $($dashed curve in the right graph$)$:
::<math>{\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right )
\hspace{0.05cm}.</math>
*The minimum distance is&nbsp; $d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16$.&nbsp; With the weight function of the&nbsp; ${\rm eHC} \ (32, 26)$,
::<math>W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4}
+ 27776 \cdot X^{6}+
330460 \cdot X^{8} + \text{...}\hspace{0.05cm} + X^{32},</math>
:we obtain for the product code:&nbsp; $W_{d_{\rm min} } = 1240^2 = 1\hspace{0.08cm}537\hspace{0.08cm}600$.
*This gives the block error probability bound shown in the graph on the right.<br>
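The truncated union bound is easy to evaluate numerically.&nbsp; A minimal sketch with the parameters just derived for the&nbsp; $(1024, 676)$&nbsp; product code&nbsp; $($the sampled&nbsp; $E_{\rm B}/N_0$&nbsp; values are chosen arbitrarily$)$:

```python
import math

# Parameters of the (1024, 676) product code from this section:
W_dmin = 1240**2        # number of minimum-weight code words: 1240^2
d_min = 16              # d_min = d_1 * d_2 = 4 * 4
R = 676 / 1024          # code rate, approximately 0.66

def Q(x):
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def truncated_union_bound(ebn0_db):
    """Pr(Truncated Union Bound) = W_dmin * Q(sqrt(d_min * 2R * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)        # dB -> linear
    return W_dmin * Q(math.sqrt(d_min * 2 * R * ebn0))

for ebn0_db in (2.0, 3.0, 4.0):
    print(f"10 lg (EB/N0) = {ebn0_db:.0f} dB: bound = "
          f"{truncated_union_bound(ebn0_db):.2e}")
```

Since only the minimum-weight term of the union bound is kept,&nbsp; the result is a good approximation of the block error probability only at sufficiently high&nbsp; $E_{\rm B}/N_0$.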
== Exercises for the chapter ==
<br>
[[Aufgaben:Exercise_4.6:_Product_Code_Generation|Exercise 4.6: Product Code Generation]]

[[Aufgaben:Exercise_4.6Z:_Basics_of_Product_Codes|Exercise 4.6Z: Basics of Product Codes]]

[[Aufgaben:Exercise_4.7:_Product_Code_Decoding|Exercise 4.7: Product Code Decoding]]

[[Aufgaben:Exercise_4.7Z:_Principle_of_Syndrome_Decoding|Exercise 4.7Z: Principle of Syndrome Decoding]]

==References==
<references/>
  
 
{{Display}}

Latest revision as of 17:26, 16 January 2023

Basic structure of a product code


The graphic shows the principle structure of  »product codes«,  which were already introduced in 1954 by  $\text{Peter Elias}$

  • The  »two-dimensional product code«  $\mathcal{C} = \mathcal{C}_1 × \mathcal{C}_2$  shown here is based on the two linear and binary block codes with parameters  $(n_1, \ k_1)$  and  $(n_2, \ k_2)$ respectively.
  • The code word length is  $n = n_1 \cdot n_2$.


The  $n$  encoded bits can be grouped as follows:

Basic structure of a product code
  • The  $k = k_1 \cdot k_2$  information bits are arranged in the  $k_2 × k_1$ matrix  $\mathbf{U}$.
  • The code rate is equal to the product of the code rates of the base codes:
$$R = k/n = (k_1/n_1) \cdot (k_2/n_2) = R_1 \cdot R_2.$$
  • The upper right matrix  $\mathbf{P}^{(1)}$  with dimension  $k_2 × m_1$  contains the parity bits with respect to the code  $\mathcal{C}_1$.
  • In each of the  $k_2$  rows,  $m_1 = n_1 - k_1$  check bits are added to the  $k_1$  information bits as described in an earlier chapter using the example of  $\text{Hamming codes}$ .
  • The lower left matrix  $\mathbf{P}^{(2)}$  of dimension  $m_2 × k_1$  contains the check bits for the second component code  $\mathcal{C}_2$.  Here the encoding  $($and also the decoding$)$  is done line by line:   In each of the  $k_1$  columns,  the  $k_2$  information bits are still supplemented by  $m_2 = n_2 -k_2$  check bits.
  • The  $m_2 × m_1$–matrix  $\mathbf{P}^{(12)}$  on the bottom right is called  "checks–on–checks".  Here the two previously generated parity matrices  $\mathbf{P}^{(1)}$  and  $\mathbf{P}^{(2)}$  are linked according to the parity-check equations.

$\text{Conclusions:}$  »All product codes«  according to the above graph have the following properties:

  • For linear component codes  $\mathcal{C}_1$  and  $\mathcal{C}_2$  the product code  $\mathcal{C} = \mathcal{C}_1 × \mathcal{C}_2$  is also linear.
  • Each row of  $\mathcal{C}$  returns a code word of  $\mathcal{C}_1$  and each column returns a code word of  $\mathcal{C}_2$.
  • The sum of two rows again gives a code word of  $\mathcal{C}_1$ due to linearity.
  • Also,  the sum of two columns gives a valid code word of  $\mathcal{C}_2$.
  • Each product code also includes the  "zero word"  $\underline{0}$  $($a vector of  $n$  "zeros"$)$.
  • The minimum distance of  $C$  is  $d_{\rm min} = d_1 \cdot d_2$,  where  $d_i$  indicates the minimum distance of  $\mathcal{C}_i$ 

Iterative syndrome decoding of product codes


We now consider the case where a product code with matrix  $\mathbf{X}$  is transmitted over a binary channel. 

  • Let the received matrix  $\mathbf{Y} = \mathbf{X} + \mathbf{E}$, where  $\mathbf{E}$  denotes the  "error matrix".
  • Let all elements of the matrices  $\mathbf{X}, \ \mathbf{E}$  and  $\mathbf{Y}$  be binary,  that is  $0$  or  $1$.


For the decoding of the two component codes the syndrome decoding according to the chapter  "Decoding linear block codes"  is suitable. 

In the two-dimensional case this means:

  1. One first decodes the  $n_2$  rows of the received matrix  $\mathbf{Y}$,  based on the parity-check matrix  $\mathbf{H}_1$  of the component code  $\mathcal{C}_1$. 
    Syndrome decoding is one way to do this.
  2. For this one forms in each case the so-called  "syndrome"   $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$,  where the vector   $\underline{y}$   of length  $n_1$  indicates the current row of  $\mathbf{Y}$  and 
    "T"  stands for "transposed".
  3. Correspondingly to the calculated  $\underline{s}_{\mu}$   $($with  $0 ≤ \mu < 2^{n_1 -k_1})$   one finds in a prepared syndrome table the corresponding probable error pattern  $\underline{e} = \underline{e}_{\mu}$.
  4. If there are only a few errors within the row,  then   $\underline{y} + \underline{e}$   matches the sent row vector  $\underline{x}$. 
  5. However,  if too many errors have occurred,  then incorrect corrections will occur.
  6. Afterwards one decodes the  $n_1$  columns of the  $($corrected$)$  received matrix  $\mathbf{Y}\hspace{0.03cm}'$,  this time based on the  $($transposed$)$  parity-check matrix  $\mathbf{H}_2^{\rm T}$  of the component code  $\mathcal{C}_2$.
  7. For this,  one forms the syndrome   $\underline{s} = \underline{y}\hspace{0.03cm}' \cdot \mathbf{H}_2^{\rm T}$,  where the vector  $\underline{y}\hspace{0.03cm}'$  of length  $n_2$  denotes the considered column of  $\mathbf{Y}\hspace{0.03cm}'$  .
  8. From a second syndrome table  $($valid for code  $\mathcal{C}_2)$  we find for the computed  $\underline{s}_{\mu}$  $($with  $0 ≤ \mu < 2^{n_2 -k_2})$  the probable error pattern $\underline{e} = \underline{e}_{\mu}$ of the edited column.
  9. After correcting all columns,  the matrix  $\mathbf{Y}$  is present.  Now one can do another row and then a column decoding   ⇒   second iteration,  and so on,  and so forth.

$\text{Example 1:}$  To illustrate the decoding algorithm,  we again consider the  $(42, 12)$  product code,  based on

  • the Hamming code  $\text{HC (7, 4, 3)}$   ⇒   code  $\mathcal{C}_1$,
  • the truncated Hamming code  $\text{HC (6, 3, 3)}$   ⇒   code  $\mathcal{C}_2$.


Syndrome decoding of the  $(42, 12)$  product code
Syndrome table for code $\mathcal{C}_1$
Syndrome table for the code $\mathcal{C}_2$

The left graph shows the received matrix  $\mathbf{Y}$. 

Note:   For display reasons, 

  • the code matrix  $\mathbf{X}$  was chosen to be a  $6 × 7$ zero matrix, 
  • so the eight  "ones"  in  $\mathbf{Y}$  represent transmission errors.


⇒   The  »row-by-row syndrome decoding«  is done via the syndrome  $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$  with

$$\boldsymbol{\rm H}_1^{\rm T} = \begin{pmatrix} 1 &0 &1 \\ 1 &1 &0 \\ 0 &1 &1 \\ 1 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \hspace{0.05cm}. $$

In particular:

  • Row 1  ⇒  Single error correction is successful  $($also in rows 3,  4 and 6$)$:
\[\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]
  • Row 2   $($contains two errors$)$   ⇒   Error correction concerning bit  5:
\[\underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{0.05cm}.\]
* Row 5  $($also contains two errors$)$  ⇒  Error correction concerning bit  3:
\[\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]
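The row-by-row decoding above can be sketched in a few lines of Python.  This is a minimal illustration (the 0/1 tuples and helper names are ours, not part of the text):

```python
# Transposed parity-check matrix H1^T of the Hamming code HC (7, 4, 3) from the text.
H1T = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1),
       (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def syndrome(y, HT):
    """Compute s = y . H^T over GF(2)."""
    return tuple(sum(y[i] * row[j] for i, row in enumerate(HT)) % 2
                 for j in range(len(HT[0])))

def decode_single_error(y, HT):
    """Hard-decision syndrome decoding: flip the bit whose
    parity-check column matches the (non-zero) syndrome."""
    s = syndrome(y, HT)
    y = list(y)
    if any(s) and s in HT:
        y[HT.index(s)] ^= 1      # assume a single error at that position
    return tuple(y)

# Row 1: the single error in bit 3 is corrected.
print(decode_single_error((0, 0, 1, 0, 0, 0, 0), H1T))  # (0, 0, 0, 0, 0, 0, 0)
# Row 2 contains two errors (bits 1 and 7); the decoder wrongly flips bit 5.
print(decode_single_error((1, 0, 0, 0, 0, 0, 1), H1T))  # (1, 0, 0, 0, 1, 0, 1)
# Row 5 (errors in bits 4 and 5): the decoder wrongly flips bit 3.
print(decode_single_error((0, 0, 0, 1, 1, 0, 0), H1T))  # (0, 0, 1, 1, 1, 0, 0)
```

As the last two calls show, a row with two errors is "corrected" to a wrong code word; these residual errors are only resolved by the subsequent column decoding and the second iteration.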

⇒   The  »column-by-column syndrome decoding«  removes all single errors in columns  1,  2,  3,  4  and  7.

* Column 5  $($contains two errors$)$   ⇒   Error correction concerning bit  4:
\[\underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm} \begin{pmatrix} 1 &1 &0 \\ 1 &0 &1 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} = \left ( 1, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_7\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_7 = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]

⇒   The remaining three errors are corrected in the second iteration by renewed  »row-by-row decoding«.
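The complete decoding run of Example 1 can be reproduced in a short Python sketch.  Two details are not given in the text and are purely hypothetical here:  the column positions of the single errors in rows 3, 4 and 6, and the entry of the $\mathcal{C}_2$ syndrome table that resolves the two-error syndrome $(1, 1, 1)$ by flipping bit 4.

```python
# Iterative hard-decision (row/column) syndrome decoding of the (42, 12) product code.
H1T = [(1,0,1), (1,1,0), (0,1,1), (1,1,1), (1,0,0), (0,1,0), (0,0,1)]  # HC (7, 4, 3)
H2T = [(1,1,0), (1,0,1), (0,1,1), (1,0,0), (0,1,0), (0,0,1)]           # HC (6, 3, 3)

def syndrome(y, HT):
    """s = y . H^T over GF(2)."""
    return tuple(sum(y[i] * row[j] for i, row in enumerate(HT)) % 2
                 for j in range(len(HT[0])))

def decode(y, HT, table):
    """Flip the bit that the syndrome table assigns to the computed syndrome."""
    s = syndrome(y, HT)
    y = list(y)
    if s in table:
        y[table[s]] ^= 1
    return y

# Syndrome tables:  syndrome -> bit position to flip.
T1 = {row: i for i, row in enumerate(H1T)}
T2 = {row: i for i, row in enumerate(H2T)}
T2[(1, 1, 1)] = 3            # two-error coset resolved by flipping bit 4 (assumption)

# Received 6x7 matrix Y:  X is the zero matrix, so the eight "ones" are the errors.
Y = [[0,0,1,0,0,0,0],
     [1,0,0,0,0,0,1],
     [0,1,0,0,0,0,0],        # hypothetical position of the row-3 error
     [0,0,0,0,0,1,0],        # hypothetical position of the row-4 error
     [0,0,0,1,1,0,0],
     [0,1,0,0,0,0,0]]        # hypothetical position of the row-6 error

for _ in range(2):                                    # two iterations suffice here
    Y = [decode(row, H1T, T1) for row in Y]           # row-by-row decoding
    cols = [decode(list(col), H2T, T2) for col in zip(*Y)]
    Y = [list(row) for row in zip(*cols)]             # column-by-column decoding

print(sum(map(sum, Y)))      # 0  ->  all eight errors have been corrected
```

After the second row decoding pass the matrix is error-free, matching the outcome described above;  the final column pass then finds only zero syndromes.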

Whether all errors of a block are correctable depends on the error pattern.  Here we refer to  "Exercise 4.7".


== Performance of product codes ==


Product codes,  introduced in 1954,  were the first codes based on recursive construction rules and thus,  in principle,  suitable for iterative decoding.  Their inventor Peter Elias did not comment on this aspect,  but in the last twenty years it has,  together with the availability of fast processors,  contributed to product codes now also being used in real communication systems,  e.g.

* in the error protection of storage media,  and
* in fiber optic systems with very high data rates.


Usually one uses very long product codes   $($large  $n = n_1 \cdot n_2)$   with the following consequences:

* For complexity reasons,  maximum likelihood decoding of the entire code is not applicable for large  $n$.

* Applicable,  on the other hand,  even with large  $n$,  is the  $\text{iterative symbol-wise MAP decoding}$.  The exchange of extrinsic and a-priori information takes place here between the two component codes.  More details on this can be found in  [Liv15][1].


[Figure:  Bit and block error probability of a  $(1024, 676)$  product code for the AWGN channel]

The graph shows,  for a  $(1024, 676)$  product code based on the extended Hamming code  ${\rm eHC} \ (32, 26)$  as component code in both dimensions,

* on the left,  the bit error probability as a function of the AWGN parameter  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$  and of the number of iterations  $(I)$,
* on the right,  the block  $($code word$)$  error probability.


Here are some additional remarks:

* The code rate is   $R = R_1 \cdot R_2 = (26/32)^2 \approx 0.66$;  for this rate,  the Shannon bound lies at  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB$.
* The left graph shows the influence of the iterations.  At the transition from  $I = 1$  to  $I=2$  one gains  $\approx 2 \ \rm dB$  $($at  $\rm BER =10^{-5})$,  and with  $I = 10$  another  $\rm dB$.  Further iterations are hardly worthwhile.
* The minimum distance is  $d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16$.  With the weight function of the  ${\rm eHC} \ (32, 26)$,
\[W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4} + 27776 \cdot X^{6}+ 330460 \cdot X^{8} + \text{...}\hspace{0.05cm} + X^{32},\]
we obtain for the product code:  $W_{d_{\rm min}} = 1240^2 = 1\hspace{0.05cm}537\hspace{0.05cm}600$.
* With this,  the block error probability can be approximated by the truncated union bound
\[{\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right ) \hspace{0.05cm},\]
which is shown as the bound in the right graph.
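The truncated union bound can be evaluated numerically.  A small sketch,  with rate,  distance and multiplicity taken from the values given in this section:

```python
from math import erfc, sqrt

def Q(x):
    """Complementary Gaussian error function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

# Parameters of the (1024, 676) product code built from two eHC (32, 26) codes.
R     = 676 / 1024      # code rate, approximately 0.66
d_min = 16              # minimum distance 4 * 4
W     = 1240 ** 2       # minimum-weight words: 1240 per eHC (32, 26) component

def block_error_bound(ebn0_db):
    """Truncated union bound on the block error probability (AWGN channel)."""
    ebn0 = 10 ** (ebn0_db / 10)
    return W * Q(sqrt(d_min * 2 * R * ebn0))

for db in (2, 3, 4):
    print(f"10*lg(Eb/N0) = {db} dB:  Pr(block error) <~ {block_error_bound(db):.2e}")
```

The bound falls off steeply with increasing $E_{\rm B}/N_0$,  which is the typical behavior of the right-hand curve in the graph.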

== Exercises for the chapter ==


Exercise 4.6: Product Code Generation

Exercise 4.6Z: Basics of Product Codes

Exercise 4.7: Product Code Decoding

Exercise 4.7Z: Principle of Syndrome Decoding

== References ==

  1. Liva, G.:  Channel Codes for Iterative Decoding.  Lecture manuscript,  Department of Communications Engineering,  TU Munich and DLR Oberpfaffenhofen,  2015.