Difference between revisions of "Channel Coding/The Basics of Product Codes"

From LNTwww
m (Text replacement - "”" to """)
{{Header
|Untermenü=Iterative Decoding Methods
|Vorherige Seite=Soft-in Soft-Out Decoder
|Nächste Seite=The Basics of Turbo Codes
}}
== Basic structure of a product code ==
 
<br>
The graphic shows the basic structure of product codes, which were introduced as early as 1954 by&nbsp; [https://en.wikipedia.org/wiki/Peter_Elias Peter Elias]. The&nbsp; '''two-dimensional product code'''&nbsp; $\mathcal{C} = \mathcal{C}_1 &times; \mathcal{C}_2$&nbsp; shown here is based on the two linear binary block codes with parameters&nbsp; $(n_1, \ k_1)$&nbsp; and&nbsp; $(n_2, \ k_2)$, respectively. The codeword length is&nbsp; $n = n_1 \cdot n_2$.
  
[[File:P ID3000 KC T 4 2 S1 v1.png|center|frame|Basic structure of a product code|class=fit]]
  
These&nbsp; $n$&nbsp; code bits can be grouped as follows:
*The&nbsp; $k = k_1 \cdot k_2$&nbsp; information bits are arranged in the&nbsp; $k_2 &times; k_1$&nbsp;matrix&nbsp; $\mathbf{U}$. The code rate is equal to the product of the code rates of the two base codes:
 
:$$R = k/n = (k_1/n_1) \cdot (k_2/n_2) = R_1 \cdot R_2.$$
  
*The upper right matrix&nbsp; $\mathbf{P}^{(1)}$&nbsp; with dimension&nbsp; $k_2 &times; m_1$&nbsp; contains the parity bits with respect to the code&nbsp; $\mathcal{C}_1$. In each of the&nbsp; $k_2$&nbsp; rows,&nbsp; $m_1 = n_1 - k_1$&nbsp; parity bits are added to the&nbsp; $k_1$&nbsp; information bits, as described in an earlier chapter using the example of the&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes|Hamming codes]].<br>
  
*The lower left matrix&nbsp; $\mathbf{P}^{(2)}$&nbsp; of dimension&nbsp; $m_2 &times; k_1$&nbsp; contains the parity bits for the second component code&nbsp; $\mathcal{C}_2$. Here the encoding (and also the decoding) is done column by column: &nbsp; in each of the&nbsp; $k_1$&nbsp; columns, the&nbsp; $k_2$&nbsp; information bits are extended by&nbsp; $m_2 = n_2 -k_2$&nbsp; parity bits.<br>
  
*The&nbsp; $m_2 &times; m_1$&nbsp;matrix&nbsp; $\mathbf{P}^{(12)}$&nbsp; at the bottom right is called&nbsp; <i>checks&ndash;on&ndash;checks</i>. Here the two previously generated parity matrices&nbsp; $\mathbf{P}^{(1)}$&nbsp; and&nbsp; $\mathbf{P}^{(2)}$&nbsp; are combined according to the parity-check equations.<br><br>
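The row-then-column encoding just described can be sketched in a few lines. The $(3, 2)$ single parity-check component codes and all helper names below are illustrative assumptions, not part of the text; the sketch only demonstrates how $\mathbf{P}^{(1)}$, $\mathbf{P}^{(2)}$ and the checks-on-checks block arise:

```python
def spc_encode(info):
    """Systematic (k+1, k) single parity-check encoding over GF(2)."""
    return info + [sum(info) % 2]

def product_encode(U):
    """Encode the information matrix U row-wise with C1, then column-wise with C2."""
    rows = [spc_encode(list(r)) for r in U]                           # adds P1
    n1 = len(rows[0])
    cols = [spc_encode([row[j] for row in rows]) for j in range(n1)]  # adds P2 and P12
    n2 = len(cols[0])
    return [[cols[j][i] for j in range(n1)] for i in range(n2)]

# Illustrative example: C1 = C2 = (3, 2) single parity-check code
X = product_encode([[1, 0],
                    [0, 1]])

# Every row is a C1 codeword and every column a C2 codeword
assert all(sum(row) % 2 == 0 for row in X)
assert all(sum(col) % 2 == 0 for col in zip(*X))
print(X)   # [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
```

Note that the checks-on-checks bit (bottom right) satisfies both the row parity of $\mathbf{P}^{(2)}$ and the column parity of $\mathbf{P}^{(1)}$ simultaneously, which is exactly the consistency property the text describes.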
  
 
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;  All product codes according to the above graphic have the following properties:
*For linear component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$,&nbsp; the product code&nbsp; $\mathcal{C} = \mathcal{C}_1 &times; \mathcal{C}_2$&nbsp; is also linear.<br>

*Each row of&nbsp; $\mathcal{C}$&nbsp; is a codeword of&nbsp; $\mathcal{C}_1$,&nbsp; and each column is a codeword of&nbsp; $\mathcal{C}_2$.<br>

*Due to linearity, the sum of two rows again gives a codeword of&nbsp; $\mathcal{C}_1$.<br>

*Likewise, the sum of two columns gives a valid codeword of&nbsp; $\mathcal{C}_2$.<br>

*Each product code also includes the all-zero word&nbsp; $\underline{0}$&nbsp; (a vector of&nbsp; $n$&nbsp; zeros).<br>

*The minimum distance of&nbsp; $\mathcal{C}$&nbsp; is&nbsp; $d_{\rm min} = d_1 \cdot d_2$, where&nbsp; $d_i$&nbsp; denotes the minimum distance of&nbsp; $\mathcal{C}_i$.}}
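These properties, in particular $d_{\rm min} = d_1 \cdot d_2$, can be checked exhaustively for a toy product code built from two $(3, 2)$ single parity-check codes with $d_1 = d_2 = 2$. This brute-force sketch is illustrative only and not part of the original text:

```python
import itertools

def spc_encode(info):
    """(3, 2) single parity-check code: minimum distance d = 2."""
    return info + [sum(info) % 2]

def product_encode(u):
    """(9, 4) product code of two (3, 2) SPC codes; u holds the 4 info bits."""
    rows = [spc_encode(list(u[0:2])), spc_encode(list(u[2:4]))]
    cols = [spc_encode([row[j] for row in rows]) for j in range(3)]
    return tuple(cols[j][i] for i in range(3) for j in range(3))

codewords = {product_encode(u) for u in itertools.product([0, 1], repeat=4)}

# Linearity: the mod-2 sum of any two codewords is again a codeword
for a, b in itertools.combinations(codewords, 2):
    assert tuple((x + y) % 2 for x, y in zip(a, b)) in codewords

# Minimum distance of the product code: d_min = d1 * d2 = 2 * 2 = 4
d_min = min(sum(c) for c in codewords if any(c))
print("d_min =", d_min)   # d_min = 4
```

Since the code is linear, the minimum distance equals the minimum weight of a nonzero codeword, which is what the last two lines compute.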
  
== Iterative syndrome decoding of product codes ==
 
<br>
We now consider the case where a product code with matrix&nbsp; $\mathbf{X}$&nbsp; is transmitted over a binary channel. Let the received matrix be&nbsp; $\mathbf{Y} = \mathbf{X} + \mathbf{E}$, where&nbsp; $\mathbf{E}$&nbsp; denotes the error matrix. Let all elements of the matrices&nbsp; $\mathbf{X}, \ \mathbf{E}$&nbsp; and&nbsp; $\mathbf{Y}$&nbsp; be binary, that is&nbsp; $0$&nbsp; or&nbsp; $1$.<br>
  
For decoding the two component codes, syndrome decoding according to the chapter&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Block_diagram_and_requirements|Decoding of linear block codes]]&nbsp; is suitable.
  
In the two-dimensional case this means:
*One first decodes the&nbsp; $n_2$&nbsp; rows of the received matrix&nbsp; $\mathbf{Y}$, based on the parity-check matrix&nbsp; $\mathbf{H}_1$&nbsp; of the component code&nbsp; $\mathcal{C}_1$. Syndrome decoding is one way to do this.<br>

*For each row, one forms the so-called syndrome&nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$, where the vector&nbsp; $\underline{y}$&nbsp; of length&nbsp; $n_1$&nbsp; denotes the current row of&nbsp; $\mathbf{Y}$&nbsp; and "T" stands for "transposed". For the calculated&nbsp; $\underline{s}_{\mu}$&nbsp; $($with&nbsp; $0 &#8804; \mu < 2^{n_1 -k_1})$,&nbsp; one finds the associated most likely error pattern&nbsp; $\underline{e} = \underline{e}_{\mu}$&nbsp; in a prepared syndrome table.<br>

*If there are only a few errors within the row,&nbsp; $\underline{y} + \underline{e}$&nbsp; then matches the transmitted row vector&nbsp; $\underline{x}$. However, if too many errors have occurred, miscorrections will result.<br>

*Subsequently, one "syndrome-decodes" the&nbsp; $n_1$&nbsp; columns of the (corrected) received matrix&nbsp; $\mathbf{Y}\hspace{0.03cm}'$, this time based on the (transposed) parity-check matrix&nbsp; $\mathbf{H}_2^{\rm T}$&nbsp; of the component code&nbsp; $\mathcal{C}_2$. For this, one forms the syndrome&nbsp; $\underline{s} = \underline{y}\hspace{0.03cm}' \cdot \mathbf{H}_2^{\rm T}$, where the vector&nbsp; $\underline{y}\hspace{0.03cm}'$&nbsp; of length&nbsp; $n_2$&nbsp; denotes the considered column of&nbsp; $\mathbf{Y}\hspace{0.03cm}'$.<br>

*From a second syndrome table $($valid for the code&nbsp; $\mathcal{C}_2)$,&nbsp; one finds for the computed&nbsp; $\underline{s}_{\mu}$&nbsp; $($with&nbsp; $0 &#8804; \mu < 2^{n_2 -k_2})$&nbsp; the most likely error pattern $\underline{e} = \underline{e}_{\mu}$ of the processed column. After correcting all columns, an updated matrix&nbsp; $\mathbf{Y}$&nbsp; is available. Now one can again perform a row decoding followed by a column decoding &nbsp; &#8658; &nbsp; second iteration, and so on.<br><br>
  
 
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp;  To illustrate the decoding algorithm, we again consider the&nbsp; $(42, 12)$ product code, based on
*the Hamming code&nbsp; $\text{HC (7, 4, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_1$,<br>

*the shortened Hamming code&nbsp; $\text{HC (6, 3, 3)}$ &nbsp; &#8658; &nbsp; code&nbsp; $\mathcal{C}_2$.<br>
  
  
The left graphic shows the received matrix&nbsp; $\mathbf{Y}$. For presentation reasons, the encoder matrix&nbsp; $\mathbf{X}$&nbsp; was chosen to be a&nbsp; $6 &times; 7$&nbsp;zero matrix, so that the nine ones in&nbsp; $\mathbf{Y}$&nbsp; at the same time represent transmission errors.<br>
[[File:P ID3014 KC T 4 2 S2a v1.png|right|frame|Syndrome decoding of the&nbsp; $(42, 12)$ product code|class=fit]]
  
The&nbsp; <b>row-by-row syndrome decoding</b>&nbsp; is done via the syndrome&nbsp; $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$&nbsp; with
 
:$$\boldsymbol{\rm H}_1^{\rm T} = 
   \begin{pmatrix}
1 &0 &1 \\
1 &1 &0 \\
0 &1 &1 \\
1 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{pmatrix}  \hspace{0.05cm}. $$
 
<br clear=all>
In detail:
[[File:P ID3015 KC T 4 2 S2b v1.png|right|frame|Syndrome table for the code $\mathcal{C}_1$]] 
*<b>Row 1</b> &nbsp;&#8658;&nbsp; Single error correction is successful (likewise in rows 3,&nbsp; 4 and 6):
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} 
\hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right )
\hspace{0.05cm}.</math>
  
*<b>Row 2</b> &nbsp;(contains two errors) &nbsp; &#8658; &nbsp; miscorrection with respect to bit 5:
  
 
::<math>\underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} 
\hspace{-0.05cm}= \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right )
\hspace{0.05cm}.</math>
  
*<b>Row 5</b> &nbsp;(also contains two errors) &nbsp;&#8658;&nbsp; miscorrection with respect to bit 3:
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} 
\hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right )
\hspace{0.05cm}.</math>
  
The <b>column-by-column syndrome decoding</b> removes all single errors in columns&nbsp; 1,&nbsp; 2,&nbsp; 3,&nbsp; 4&nbsp; and&nbsp; 7. 
[[File:P ID3019 KC T 4 2 S2c v1.png|right|frame|Syndrome table for the code $\mathcal{C}_2$]]
*<b>Column 5</b> &nbsp;(contains two errors) &nbsp; &#8658; &nbsp; miscorrection with respect to bit 4:
  
 
::<math>\underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T} 
\hspace{0.05cm}.</math>
  
The remaining three errors are corrected by the row-by-row decoding of the&nbsp; <b>second iteration loop</b>.<br>
  
Whether all errors of a block are correctable depends on the error pattern. Here we refer to&nbsp; [[Aufgaben:Aufgabe_4.7:_Decodierung_von_Produktcodes|Exercise 4.7]].}}<br>
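The row syndromes of Example 1 can be reproduced directly from the matrix $\mathbf{H}_1^{\rm T}$ given above. The short check below recomputes the three explicitly shown rows of $\mathbf{Y}$; the variable names are illustrative:

```python
# Rows of H1^T of the HC (7, 4, 3), as given in the example (7 x 3)
H1T = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1),
       (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def syndrome(y):
    """s = y * H1^T over GF(2)."""
    return tuple(sum(y[i] * H1T[i][k] for i in range(7)) % 2 for k in range(3))

print(syndrome((0, 0, 1, 0, 0, 0, 0)))  # row 1: (0, 1, 1) -> s3, error corrected
print(syndrome((1, 0, 0, 0, 0, 0, 1)))  # row 2: (1, 0, 0) -> s4, miscorrection at bit 5
print(syndrome((0, 0, 0, 1, 1, 0, 0)))  # row 5: (0, 1, 1) -> s3, miscorrection at bit 3
```

The two-error rows illustrate why miscorrections occur: their syndromes coincide with single-error syndromes, so the table lookup flips a third bit instead of undoing the two actual errors.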
  
== Performance of product codes ==
 
<br>
The&nbsp; <i>product codes</i>&nbsp; introduced in 1954 were the first codes based on recursive construction rules and thus suitable in principle for iterative decoding. Their inventor Peter Elias did not comment on this aspect, but over the last twenty years this property, together with the availability of fast processors, has contributed to product codes now also being used in real communication systems, for example
*for error protection of storage media, and
*for fiber optic systems with very high data rates.<br>
  
  
Usually one uses very long product codes&nbsp; $($large&nbsp; $n = n_1 \cdot n_2)$&nbsp; with the following consequence:
*For complexity reasons,&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|maximum likelihood decoding at block level]]&nbsp; is not applicable to the component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$, nor is&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|syndrome decoding]], which is a realization form of ML decoding.<br>
  
*Applicable, on the other hand, even for large&nbsp; $n$&nbsp; is&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Symbol-wise_soft-in_soft-out_decoding|iterative symbol-wise MAP decoding]]. The exchange of extrinsic and a-priori information takes place here between the two component codes. More details on this can be found in&nbsp; [Liv15]<ref name='Liv15'>Liva, G.: ''Channel Codes for Iterative Decoding.'' Lecture notes, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.</ref>.<br>
  
  
For a&nbsp; $(1024, 676)$ product code, based on the&nbsp; <i>extended Hamming code</i>&nbsp; ${\rm eHC} \ (32, 26)$&nbsp; as component codes, the graphic shows 
*on the left, the AWGN bit error probability as a function of the iterations&nbsp; $(I)$, 
*on the right, the error probability of the blocks (or codewords). 
  
  
[[File:P ID3020 KC T 4 2 S3 v4.png|center|frame|Bit and block error probability of a&nbsp; $(1024, 676)$ product code for AWGN|class=fit]]
  
Here are some additional remarks:
*The code rate is&nbsp; $R = R_1 \cdot R_2 = 0.66$, from which the Shannon bound follows as&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB$.<br>

*In the left graph one can see the influence of the iterations. At the transition from&nbsp; $I = 1$&nbsp; to&nbsp; $I=2$&nbsp; one gains approx.&nbsp; $2 \ \rm dB$ $($at bit error rate&nbsp; $10^{-5})$&nbsp; and with&nbsp; $I = 10$&nbsp; another $\rm dB$. Further iterations are not worthwhile.<br>
  
*All bounds mentioned in the chapter&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|Bounds for block error probability]]&nbsp; can be applied here as well, including the&nbsp; <i>truncated union bound</i>&nbsp; shown dashed in the right graph:
  
 
::<math>{\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right ) 
  \hspace{0.05cm}.</math>
  
*The minimum distance is&nbsp; $d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16$. With the weight function of the&nbsp; ${\rm eHC} \ (32, 26)$,
  
 
::<math>W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4} + \text{...}\hspace{0.05cm} + 
330460 \cdot X^{8} + \text{...}\hspace{0.05cm} + X^{32},</math>
  
:one obtains for the product code&nbsp; $W_{d, \ \rm min} = 1240^2 = 1\hspace{0.05cm}537\hspace{0.05cm}600$. This gives the error probability shown in the graph on the right.<br>
  
== Exercises for the chapter ==
 
<br>
[[Aufgaben:Aufgabe_4.6:_Generierung_von_Produktcodes|Exercise 4.6: Generation of Product Codes]]

[[Aufgaben:Aufgabe_4.6Z:_Grundlagen_der_Produktcodes|Exercise 4.6Z: Basics of Product Codes]]

[[Aufgaben:Aufgabe_4.7:_Decodierung_von_Produktcodes|Exercise 4.7: Decoding of Product Codes]]

[[Aufgaben:Aufgabe_4.7Z:_Zum_Prinzip_der_Syndromdecodierung|Exercise 4.7Z: On the Principle of Syndrome Decoding]]
  
==References==
 
<references/>

{{Display}}

Revision as of 01:45, 31 October 2022

Basic structure of a product code


The graphic shows the principle structure of product codes, which were already introduced in 1954 by  "Peter Elias" . The  two-dimensional product code  $\mathcal{C} = \mathcal{C}_1 × \mathcal{C}_2$  shown here is based on the two linear and binary block codes with parameters  $(n_1, \ k_1)$  and  $(n_2, \ k_2)$ respectively. The code word length is  $n = n_1 \cdot n_2$.

Basic structure of a product code

These  $n$  code bits can be grouped as follows:

  • The  $k = k_1 \cdot k_2$  information bits are arranged in the  $k_2 × k_1$ matrix  $\mathbf{U}$ . The code rate is equal to the product of the code rates of the two base codes:
$$R = k/n = (k_1/n_1) \cdot (k_2/n_2) = R_1 \cdot R_2.$$
  • The upper right matrix  $\mathbf{P}^{(1)}$  with dimension  $k_2 × m_1$  contains the parity bits with respect to the code  $\mathcal{C}_1$. In each of the  $k_2$  rows, check bits are added to the  $k_1$ information bits  $m_1 = n_1 - k_1$  as described in an earlier chapter using the example of  "Hamming–Codes" .
  • The lower left matrix  $\mathbf{P}^{(2)}$  of dimension  $m_2 × k_1$  contains the check bits for the second component code  $\mathcal{C}_2$. Here the encoding (and also the decoding) is done line by line:   In each of the  $k_1$  columns the  $k_2$  information bits are still supplemented by  $m_2 = n_2 -k_2$  check bits.
  • The  $m_2 × m_1$–matrix  $\mathbf{P}^{(12)}$  on the bottom right is called  checks–on–checks. Here the two previously generated parity matrices  $\mathbf{P}^{(1)}$  and  $\mathbf{P}^{(2)}$  are linked according to the parity-check equations.

$\text{Conclusion:}$  All product codes according to the above graphic have the following properties:

  • For linear component codes  $\mathcal{C}_1$  and  $\mathcal{C}_2$  the product code  $\mathcal{C} = \mathcal{C}_1 × \mathcal{C}_2$  is also linear.
  • Each row of  $\mathcal{C}$  returns a codeword of  $\mathcal{C}_1$  and each column returns a codeword of  $\mathcal{C}_2$.
  • The sum of two rows again gives a codeword of  $\mathcal{C}_1$ due to linearity.
  • Also, the sum of two columns gives a valid codeword of  $\mathcal{C}_2$.
  • Each product code also includes the null word  $\underline{0}$  (a vector of  $n$  zeros).
  • The minimum distance of  $C$  is  $d_{\rm min} = d_1 \cdot d_2$, where  $d_i$  indicates the minimum distance of  $\mathcal{C}_i$ 

Iterative syndrome decoding of product codes


We now consider the case where a product code with matrix  $\mathbf{X}$  is transmitted over a binary channel. Let the receive matrix  $\mathbf{Y} = \mathbf{X} + \mathbf{E}$, where  $\mathbf{E}$  denotes the error matrix. Let all elements of the matrices  $\mathbf{X}, \ \mathbf{E}$  and  $\mathbf{Y}$ be binary, that is  $0$  or  $1$.

For the decoding of the two component codes the syndrome decoding according to the chapter  "Decoding linear block codes"  is suitable.

In the two-dimensional case this means:

  • One first decodes the  $n_2$  rows of the receive matrix  $\mathbf{Y}$, based on the parity-check matrix  $\mathbf{H}_1$  of the component code  $\mathcal{C}_1$. Syndrome decoding is one way to do this.
  • For this one forms in each case the so-called syndrome  $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$, where the vector  $\underline{y}$  of length  $n_1$  indicates the current row of  $\mathbf{Y}$  and "T" stands for "transposed". Corresponding to the calculated  $\underline{s}_{\mu}$  $($with  $0 ≤ \mu < 2^{n_1 -k_1})$  one finds in a prepared syndrome table the corresponding probable error pattern  $\underline{e} = \underline{e}_{\mu}$.
  • If there are only a few errors within the row, then  $\underline{y} + \underline{e}$  matches the sent row vector  $\underline{x}$ . However, if too many errors have occurred, the following incorrect corrections will occur.
  • Then one "syndromedecodes" the  $n_1$  columns of the (corrected) received matrix  $\mathbf{Y}\hspace{0.03cm}'$, this time based on the (transposed) parity-check matrix  $\mathbf{H}_2^{\rm T}$  of the component code  $\mathcal{C}_2$. For this, one forms the syndrome  $\underline{s} = \underline{y}\hspace{0.03cm}' \cdot \mathbf{H}_2^{\rm T}$, where the vector  $\underline{y}\hspace{0.03cm}'$  of length  $n_2$  denotes the considered column of  $\mathbf{Y}\hspace{0.03cm}'$  .
  • From a second syndrome table $($valid for the code  $\mathcal{C}_2)$  we find for the computed  $\underline{s}_{\mu}$  $($with  $0 ≤ \mu < 2^{n_2 -k_2})$  the probable error pattern $\underline{e} = \underline{e}_{\mu}$ of the edited column. After correcting all columns, the Marix  $\mathbf{Y}$  is present. Now one can do another row and then a column decoding   ⇒   second iteration, and so on, and so forth.

$\text{Example 1:}$  To illustrate the decoding algorithm, we again consider the  $(42, 12)$ product code, based on

  • the Hamming code  $\text{HC (7, 4, 3)}$   ⇒   code  $\mathcal{C}_1$,
  • the shortened Hamming code  $\text{HC (6, 3, 3)}$   ⇒   code  $\mathcal{C}_2$.


The left graph shows the received matrix  $\mathbf{Y}$. For presentation reasons, the code matrix  $\mathbf{X}$  was chosen to be a  $6 × 7$  zero matrix, so that the nine ones in  $\mathbf{Y}$  simultaneously represent the transmission errors.

Syndrome decoding of the  $(42, 12)$ product code

The  row-by-row syndrome decoding  is done via the syndrome  $\underline{s} = \underline{y} \cdot \mathbf{H}_1^{\rm T}$  with

$$\boldsymbol{\rm H}_1^{\rm T} = \begin{pmatrix} 1 &0 &1 \\ 1 &1 &0 \\ 0 &1 &1 \\ 1 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \hspace{0.05cm}. $$


In particular:

Syndrome table for code $\mathcal{C}_1$
  • Row 1  ⇒  Single error correction is successful (also in rows 3,  4 and 6):
\[\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]
  • Row 2  (contains two errors)   ⇒   Error correction concerning bit 5:
\[\underline{s} = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 1, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}1 \right ) \hspace{0.05cm}.\]
  • Row 5  (also contains two errors)  ⇒  Error correction concerning bit 3:
\[\underline{s} = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_1^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.03cm} 1, \hspace{0.03cm}1 \right ) = \underline{s}_3\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_3 = \left ( 0, \hspace{0.02cm} 0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]

The column-by-column syndrome decoding removes all single errors in columns  1,  2,  3,  4  and  7.

Syndrome table for the code $\mathcal{C}_2$
  • Column 5  (contains two errors)   ⇒   Error correction concerning bit 4:
\[\underline{s} = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm}{ \boldsymbol{\rm H} }_2^{\rm T} \hspace{-0.05cm}= \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{-0.03cm}\cdot \hspace{-0.03cm} \begin{pmatrix} 1 &0 &1 \\ 1 &1 &0 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} = \left ( 1, \hspace{0.03cm} 0, \hspace{0.03cm}0 \right ) = \underline{s}_4\]
\[\Rightarrow \hspace{0.3cm} \underline{y} + \underline{e}_4 = \left ( 0, \hspace{0.02cm} 1, \hspace{0.02cm}0, \hspace{0.02cm}1, \hspace{0.02cm}1, \hspace{0.02cm}0 \right ) \hspace{0.05cm}.\]
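The syndrome computations of this example can be cross-checked with a few lines of plain mod-2 arithmetic. Here  $\mathbf{H}_1^{\rm T}$  is taken from the text;  $\mathbf{H}_2^{\rm T}$  follows from shortening the  $\text{HC (7, 4, 3)}$, i.e. its fourth row  $(1, 1, 1)$  is removed.

```python
# Cross-check of the syndromes in Example 1 (mod-2 arithmetic, no libraries).

def syndrome(y, H_T):
    """s = y · H^T over GF(2)."""
    return tuple(sum(yi * H_T[i][j] for i, yi in enumerate(y)) % 2
                 for j in range(len(H_T[0])))

H1_T = [[1,0,1], [1,1,0], [0,1,1], [1,1,1], [1,0,0], [0,1,0], [0,0,1]]  # HC (7,4,3)
H2_T = [[1,0,1], [1,1,0], [0,1,1], [1,0,0], [0,1,0], [0,0,1]]           # HC (6,3,3)

print(syndrome((0,0,1,0,0,0,0), H1_T))  # row 1:    (0, 1, 1) = s3
print(syndrome((1,0,0,0,0,0,1), H1_T))  # row 2:    (1, 0, 0) = s4
print(syndrome((0,0,0,1,1,0,0), H1_T))  # row 5:    (0, 1, 1) = s3
print(syndrome((0,1,0,0,1,0),   H2_T))  # column 5: (1, 0, 0) = s4
```

Note that the corrected column  $(0, 1, 0, 1, 1, 0)$  is indeed a code word of  $\mathcal{C}_2$: its rows of  $\mathbf{H}_2^{\rm T}$  sum to the zero syndrome.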

The remaining three errors are corrected by the row-by-row decoding of the  second iteration loop.

Whether all errors of a block are correctable depends on the error pattern. Here we refer to  "Exercise 4.7".


== Performance of product codes ==


The product codes introduced in 1954 were the first codes based on recursive construction rules and thus in principle suitable for iterative decoding. The inventor Peter Elias did not comment on this, but in the last twenty years this aspect, together with the availability of fast processors, has contributed to the fact that product codes are now also used in real communication systems, e.g.

  • in error protection of storage media, and
  • in very high data rate fiber optic systems.



Usually one uses very long product codes  $($large  $n = n_1 \cdot n_2)$  with the following consequence:

  • Syndrome decoding is no longer practical for large  $n$, since the syndrome table would have to hold  $2^{n-k}$  entries.
  • Applicable, on the other hand, even with large  $n$  is the  "iterative symbol-wise MAP decoding". The exchange of extrinsic and a-priori information takes place here between the two component codes. More details on this can be found in  [Liv15][1].


The graph shows, for a  $(1024, 676)$  product code based on the  extended Hamming code  ${\rm eHC} \ (32, 26)$  as component code,

  • on the left, the bit error probability for the AWGN channel as a function of the number of iterations  $(I)$,
  • on the right, the error probability of the blocks (or code words).


Bit and block error probability of a  $(1024, 676)$ product code at AWGN

Here are some additional remarks:

  • The code rate is  $R = R_1 \cdot R_2 \approx 0.66$, from which the Shannon bound  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 1 \ \rm dB$  results.
  • In the left graph one can see the influence of the iterations. At the transition from  $I = 1$  to  $I=2$  one gains approx.  $2 \ \rm dB$  $($at the bit error rate  $10^{-5})$  and with  $I = 10$  a further  $\rm dB$. Further iterations are not worthwhile.
  • In the right graph, the so-called "truncated union bound" is drawn as a comparison curve:
\[{\rm Pr(Truncated\hspace{0.15cm}Union\hspace{0.15cm} Bound)}= W_{d_{\rm min}} \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B}}/{N_0}} \right ) \hspace{0.05cm}.\]
  • The minimum distance is  $d_{\rm min} = d_1 \cdot d_2 = 4 \cdot 4 = 16$. With the weight enumerator function of the  ${\rm eHC} \ (32, 26)$,
\[W_{\rm eHC(32,\hspace{0.08cm}26)}(X) = 1 + 1240 \cdot X^{4} + 27776 \cdot X^{6}+ 330460 \cdot X^{8} + \text{...}\hspace{0.05cm} + X^{32},\]
one obtains for the product code  $W_{d_{\rm min}} = 1240^2 = 1\hspace{0.05cm}537\hspace{0.05cm}600$. This yields the error probability shown in the right graph.
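The truncated union bound quoted above can be evaluated numerically. The following sketch uses only the values given in the text  $(R = 676/1024$,  $d_{\rm min} = 16$,  $W_{d_{\rm min}} = 1240^2)$  and expresses the Q-function via the complementary error function; the specific  $E_{\rm B}/N_0$  values printed are illustrative.

```python
# Numerical evaluation of the truncated union bound for the (1024, 676)
# product code; Q(x) is computed from the complementary error function.
from math import erfc, sqrt

def Q(x):
    """Complementary Gaussian error integral Q(x) = 0.5 · erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

R = 676 / 1024          # code rate R1 · R2 ≈ 0.66
d_min = 16              # minimum distance d1 · d2 = 4 · 4
W_dmin = 1240 ** 2      # number of minimum-weight code words = 1 537 600

def union_bound(ebn0_db):
    """Pr(Truncated Union Bound) for 10·lg(EB/N0) given in dB."""
    ebn0 = 10 ** (ebn0_db / 10)
    return W_dmin * Q(sqrt(d_min * 2 * R * ebn0))

for db in (2.0, 3.0, 4.0):
    print(f"10·lg(EB/N0) = {db} dB:  Pr <= {union_bound(db):.2e}")
```

The bound falls off steeply with increasing  $E_{\rm B}/N_0$, which matches the waterfall behavior of the block error probability in the right graph.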

== Exercises for the chapter ==


"Aufgabe 4.6: Generierung von Produktcodes"

"Aufgabe 4.6Z: Grundlagen der Produktcodes"

"Aufgabe 4.7: Decodierung von Produktcodes"

"Aufgabe 4.7Z: Zum Prinzip der Syndromdecodierung"

== References ==

  1. Liva, G.: Channel Codes for Iterative Decoding. Lecture manuscript, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.