Difference between revisions of "Channel Coding/Decoding of Linear Block Codes"

From LNTwww

Revision as of 20:30, 11 July 2022

Block diagram and requirements


We start from the block diagram already shown in the chapter  Channel Models and Decision Structures  where the channel model used is mostly the  Binary Symmetric Channel  (BSC). For codeword estimation, we use the  Maximum Likelihood Decider  (ML), which for binary codes   ⇒   $\underline{x} \in {\rm GF}(2^n)$  at the block level gives the same result as the  MAP Receiver.

Block diagram for decoding block codes

The task of the channel decoder can be described as follows:

  • The vector  $\underline{v}$  after decoding (at the sink) should match the information word  $\underline{u}$  as well as possible.
    That is:   The  block error probability  should be as small as possible:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • Because of the deterministic assignments  $\underline{x} = {\rm enc}(\underline{u})$  and  $\underline{v} = {\rm enc}^{-1}(\underline{z})$, it equally holds that:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{z} \ne \underline{x}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • What is sought is thus the most likely sent codeword  $\underline{x}_i$  for the given received word  $\underline{y} = \underline{x} +\underline{e}$, which is passed on as the result  $\underline{z}$ :
\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}|\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.\]
  • For the BSC channel, both  $\underline{x}_i \in {\rm GF}(2^n)$  and  $\underline{y} \in {\rm GF}(2^n)$, so the maximum likelihood rule can also be written using the  Hamming distance  $d_{\rm H}( \underline{y}, \, \underline{x}_i)$ :
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
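The minimum-distance rule above can be sketched in a few lines of Python (an illustrative sketch, not part of the original article; the function names are chosen freely, and the codebook is the (5, 2, 3) code used later in this article):

```python
# Minimal sketch of ML decoding over a BSC: choose the codeword with the
# smallest Hamming distance to the received word.

def hamming_distance(a, b):
    """Number of positions in which two equal-length tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def ml_decode(y, codebook):
    """Return the codeword z = arg min d_H(y, x_i) over all x_i in the code."""
    return min(codebook, key=lambda x: hamming_distance(y, x))

# Example: the systematic (5, 2, 3) block code from this article.
codebook = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (1, 1, 1, 0, 1)]
y = (0, 1, 0, 0, 1)        # received word with one bit error
z = ml_decode(y, codebook)  # -> (0, 1, 0, 1, 1)
```

The brute-force search over all codewords is exponential in $k$; the syndrome decoding described next is one way to organize this search more efficiently.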

Principle of syndrome decoding


Assumed here is a  $(n, \, k)$–block code with the parity-check matrix  $\boldsymbol{\rm H}$  and the systematic code words

\[\underline{x}\hspace{0.05cm} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n) = (u_1, u_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_k, p_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, p_{n-k})\hspace{0.05cm}. \]

The received word  $\underline{y}$  is generally composed of the codeword  $\underline{x}$  and an error vector  $\underline{e}$:

\[\underline{y} = \underline{x} + \underline{e} \hspace{0.05cm}, \hspace{0.4cm} \underline{y} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{x} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{e} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}.\]

A bit error at position  $i$, that is  $y_i ≠ x_i$, is expressed by the error coefficient  $e_i = 1$.

$\text{Definition:}$  The  syndrome  $\underline{s} = (s_0, s_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, s_{m-1})$  is calculated (as a row or column vector, respectively) from the received word  $\underline{y}$  and the parity-check matrix  $\boldsymbol{\rm H}$  as follows:

\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T}\hspace{0.3cm}{\rm resp.}\hspace{0.3cm} \underline{s}^{\rm T} = { \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}\hspace{0.05cm}.\]

The vector length of  $\underline{s}$  is equal to  $m = n-k$  $($row number of  $\boldsymbol{\rm H})$.


The syndrome  $\underline{s}$  shows the following characteristics:

  • Because of the equation  $\underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{0}$  the syndrome  $\underline{s}$  does not depend on the codeword  $\underline{x}$  but solely on the error vector  $\underline{e}$:
\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} + \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}.\]
  • For sufficiently few bit errors, $\underline{s}$  provides a clear indication of the error locations, allowing full error correction.

$\text{Example 1:}$  Starting from the systematic  $\text{(7, 4, 3)}$ Hamming code, the following result is obtained for the received vector  $\underline{y} = (0, 1, 1, 1, 0, 0, 1)$ :

\[{ \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T} = \begin{pmatrix} 1 &1 &1 &0 &1 &0 &0\\ 0 &1 &1 &1 &0 &1 &0\\ 1 &1 &0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \underline{s}^{\rm T} \hspace{0.05cm}.\]

Comparing the syndrome with the  parity-check equations  of the Hamming code, we see that

  • most likely the fourth symbol  $(x_4 = u_4)$  of the codeword has been corrupted,
  • the codeword estimator will thus yield the result  $\underline{z} = (0, 1, 1, 0, 0, 0, 1)$ ,
  • the decision is only correct if no more than one bit was corrupted during transmission.

Below are the required corrections for the  $\text{(7, 4, 3)}$ Hamming code resulting from the calculated syndrome  $\underline{s}$  corresponding to the columns of the parity-check matrix:

\[\underline{s} = (0, 0, 0) \hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm no\hspace{0.15cm} correction}\hspace{0.05cm};\hspace{0.8cm}\underline{s} = (1, 0, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm invert\hspace{0.15cm} }p_1\hspace{0.05cm};\]
\[\underline{s} =(0, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }p_3\hspace{0.05cm};\hspace{1.22cm}\underline{s} = (1, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }u_1\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }p_2\hspace{0.05cm};\hspace{1.22cm}\underline{s} = (1, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }u_3\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }u_4\hspace{0.05cm};\hspace{1.22cm}\underline{s} = (1, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert\hspace{0.15cm} }u_2\hspace{0.05cm}. \]
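The syndrome computation of Example 1 can be reproduced with a short sketch (Python chosen purely for illustration; the matrix is the parity-check matrix $\boldsymbol{\rm H}$ given above, and the helper's name is an assumption):

```python
# Sketch: computing the syndrome s with s^T = H · y^T over GF(2) for the
# systematic (7, 4, 3) Hamming code. Arithmetic mod 2 realizes GF(2) addition.

H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(y, H):
    """Each syndrome component is the GF(2) inner product of y with one row of H."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

y = (0, 1, 1, 1, 0, 0, 1)  # received word from Example 1
s = syndrome(y, H)          # -> (0, 1, 1), i.e. column 4 of H: invert u_4
```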


Generalization of syndrome decoding


We continue to assume the BSC channel model. This means:

  • The receive vector  $\underline{y}$  and the error vector  $\underline{e}$  are elements of  ${\rm GF}(2^n)$.
  • The possible codewords  $\underline{x}_i$  belong to the code  $\mathcal{C}$, which spans a  $k$ dimensional subspace of  ${\rm GF}(2^n)$ .


Under this assumption, we briefly summarize again the results of the last pages:

  • Syndrome decoding is one way of realizing maximum likelihood detection of block codes. One decides on the codeword  $\underline{x}_i$  with the smallest Hamming distance to the received word  $\underline{y}$ :
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
  • But syndrome decoding is also the search for the most probable error vector  $\underline{e}$  satisfying the condition  $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$ . The  syndrome  is thereby determined by the equation  $\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} $ .
  • With the  Hamming weight  $w_{\rm H}(\underline{e})$  this second interpretation can also be formulated mathematically, where the minimization runs over all error vectors consistent with the syndrome:
\[\underline{z} = \underline{y} + {\rm arg} \min_{\underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} {\rm GF}(2^n)\hspace{0.05cm}:\hspace{0.1cm} \underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm}\cdot \hspace{0.05cm}{ \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}=\hspace{0.05cm} \underline{s}} \hspace{0.1cm} w_{\rm H}(\underline{e}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]

$\text{Conclusion:}$  Note that the error vector  $\underline{e}$, like the received vector  $\underline{y}$, is an element of  ${\rm GF}(2^n)$, unlike the syndrome  $\underline{s} \in {\rm GF}(2^m)$, where  $m = n-k$  denotes the number of parity-check equations. This means

  • that the association between the syndrome  $\underline{s}$  and the error vector  $\underline{e}$  is not unique, but
  • that exactly  $2^k$  error vectors lead to the same syndrome  $\underline{s}$; these are grouped together into a  coset.

$\text{Example 2:}$  These facts shall now be illustrated by the example  $n = 5, \ k = 2$   ⇒   $m = n-k = 3$ .

Splitting the  $2^n$  error vectors into  "cosets".

You can see from this graph:

  • The  $2^n = 32$  possible error vectors  $\underline{e}$  are split into  $2^m = 8$  cosets  ${\it \Psi}_0$,  ...  , ${\it \Psi}_7$.
  • Explicitly drawn here are only the cosets  ${\it \Psi}_0$  and  ${\it \Psi}_5$.
  • All  $2^k = 4$  error vectors of the coset  ${\it \Psi}_\mu$  lead to the same syndrome  $\underline{s}_\mu$.
  • Each coset  ${\it \Psi}_\mu$  has a leader  $\underline{e}_\mu$, namely its member with the minimum Hamming weight.


$\text{Example 3:}$  Starting from the systematic  $\text{(5, 2, 3)}$ code  $\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}$  the syndrome decoding procedure is now described in detail.

Example  $\text{(5, 2, 3)}$ syndrome table with cosets

The generator matrix and the parity-check matrix are:

\[{ \boldsymbol{\rm G} } = \begin{pmatrix} 1 &0 &1 &1 &0 \\ 0 &1 &0 &1 &1 \end{pmatrix} \hspace{0.05cm},\]
\[{ \boldsymbol{\rm H} } = \begin{pmatrix} 1 &0 &1 &0 &0 \\ 1 &1 &0 &1 &0 \\ 0 &1 &0 &0 &1 \end{pmatrix} \hspace{0.05cm}.\]

The table summarizes the final result. Note:

  • The index  $\mu$  is not identical to the binary value of  $\underline{s}_\mu$.
  • The order is rather given by the number of ones in the coset leader  $\underline{e}_\mu$.
  • For example, this gives the syndrome  $\underline{s}_5 = (1, 1, 0)$  and the syndrome  $\underline{s}_6 = (1, 0, 1)$.


To derive this table, note:

  • Row 1 refers to the syndrome  $\underline{s}_0 = (0, 0, 0)$  and the associated coset  ${\it \Psi}_0$. The most likely error sequence here is  $(0, 0, 0, 0, 0)$   ⇒   no bit error, which we call the coset leader  $\underline{e}_0$ .
  • The other entries in the first row, namely  $(1, 0, 1, 1, 0 )$,  $(0, 1, 0, 1, 1)$  and  $(1, 1, 1, 0, 1 )$, also yield the syndrome  $\underline{s}_0 = (0, 0, 0)$, but require at least three bit errors and are correspondingly unlikely.
  • In rows 2 to 6, the respective coset leader  $\underline{e}_\mu$  contains exactly one single "one"  $(\mu = 1$, ... , $5)$. Here  $\underline{e}_\mu$  is always the most likely error pattern of its class  ${\it \Psi}_\mu$. The other group members require at least two bit errors.
  • The syndrome  $\underline{s}_6 = (1, 0, 1)$  is not possible with only one bit error. In creating the table, we therefore considered all  $\binom{5}{2} = 10$  error patterns  $\underline{e}$  with weight  $w_{\rm H}(\underline{e}) = 2$ .
  • The first sequence found with syndrome  $\underline{s}_6 = (1, 0, 1)$  was chosen as coset leader  $\underline{e}_6 = (1, 1, 0, 0, 0)$ . With a different search order, the sequence  $(0, 0, 1, 0, 1)$  from  ${\it \Psi}_6$  could also have resulted.
  • The same procedure was followed in determining the leader  $\underline{e}_7 = (0, 1, 1, 0, 0)$  of the coset  ${\it \Psi}_7$ , characterized by the common syndrome  $\underline{s}_7 = (1, 1, 1)$ . In the class  ${\it \Psi}_7$, too, there is another sequence with Hamming weight  $w_{\rm H}(\underline{e}) = 2$, namely  $(1, 0, 0, 0, 1)$.

The above table only needs to be created once and can then be used as often as desired. First, the syndrome must be determined. For example, for the received vector  $\underline{y} = (0, 1, 0, 0, 1)$, one obtains:

\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T} = \begin{pmatrix} 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 1 &1 &0 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \\ \end{pmatrix} = \begin{pmatrix} 0 &1 &0 \end{pmatrix}= \underline{s}_2 \hspace{0.05cm}.\]

Using the coset leader  $\underline{e}_2 = (0, 0, 0, 1, 0)$  from the above table  $($red entry for  $\mu =2)$, one finally arrives at the decoding result:

\[\underline{z} = \underline{y} + \underline{e}_2 = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1) + (0,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0) = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1) \hspace{0.05cm}.\]
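The table-based procedure of Example 3 can be sketched as follows (an illustrative Python sketch, not from the article): the coset-leader table is built once by enumerating error patterns in order of increasing weight, keeping the first pattern found per syndrome, with ties broken by enumeration order exactly as noted in the text.

```python
# Sketch of syndrome decoding for the (5, 2, 3) code: build the coset-leader
# table once, then decode any received word as z = y + e_leader in GF(2).
from itertools import chain, combinations

H = [[1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]
n = 5

def syndrome(v):
    """s^T = H · v^T over GF(2)."""
    return tuple(sum(a * b for a, b in zip(row, v)) % 2 for row in H)

# Coset-leader table: the lightest error pattern for each of the 2^m syndromes.
# Enumerating by increasing weight guarantees the leader has minimum weight.
leaders = {}
for positions in chain.from_iterable(combinations(range(n), w) for w in range(n + 1)):
    e = tuple(1 if i in positions else 0 for i in range(n))
    leaders.setdefault(syndrome(e), e)   # keep only the first (lightest) pattern

def decode(y):
    """Syndrome decoding: correct y with the coset leader of its syndrome."""
    e = leaders[syndrome(y)]
    return tuple((a + b) % 2 for a, b in zip(y, e))

z = decode((0, 1, 0, 0, 1))   # -> (0, 1, 0, 1, 1), as derived in the text
```

Note that the table has $2^m = 8$ entries here but $2^{n-k}$ in general, which is exactly the scaling problem discussed in the conclusion below.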


$\text{Conclusion:}$  These examples already make clear that  syndrome decoding involves considerable effort  if one cannot exploit certain code properties, as with cyclic codes:

  • For large block code lengths, this method fails completely. Thus, to decode a  BCH code  – the abbreviation stands for its inventors Bose, Chaudhuri and Hocquenghem – with code parameters  $n = 511$,  $k = 259$  and  $d_{\rm min} = 61$, one would have to evaluate and store exactly  $2^{511-259} \approx 10^{76}$  error patterns of length  $511$.
  • Fortunately, however, there are special decoding algorithms for these and also for other codes of large block length, which lead to success with less effort.

Coding gain - bit error rate with AWGN


We now consider the  Bit error rate  for the following constellation:

Bit error rate for the  $\text{(7, 4, 3)}$ Hamming code
  • Hamming code  $\text{HC (7, 4, 3)}$,
  • AWGN–channel, characterized by the quotient  $E_{\rm B}/N_0$ (in dB),
  • Maximum Likelihood Detection (ML) with  Hard Decision  and  Soft Decision respectively.


It should be noted with regard to this graph:

  • The black comparison curve applies, for example, to binary phase modulation (BPSK) without coding. To achieve the bit error rate  $10^{-5}$, this requires about  $10 \cdot \lg \, E_{\rm B}/N_0 = 9.6 \, \rm dB$.
  • The red circles apply to the  $\text{(7, 4, 3)}$ code  and hard decisions of the maximum likelihood decoder  $\text{(ML–HD)}$. Syndrome decoding is a possible realization form for this.
  • This configuration yields an improvement over the comparison system only for  $10 \cdot \lg \, E_{\rm B}/N_0 >6 \, \rm dB$. For  $\rm BER =10^{-5}$  one needs only  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.2 \, \rm dB$.
  • The green crosses for the Hamming–Code and  Soft–Decision  $\text{(ML–SD)}$  are below the comparison curve throughout the range. For  $\rm BER =10^{-5}$  this gives  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 7.8 \, \rm dB$.

$\text{Definition:}$  The  coding gain  of a system configuration (characterized by its code and the way it is decoded) is the reduction in  $10 \cdot \lg \, E_{\rm B}/N_0$  required for a given bit error rate  $\rm (BER)$  compared to the comparison system without coding:

\[G_{\rm Code} (\hspace{0.05cm}{\rm System}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) =10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}{\rm without\hspace{0.1cm} coding}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})- 10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}{\rm System}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) \hspace{0.05cm}. \]


Applied to the above graph, one obtains:

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-HD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 0.4\ {\rm dB}\hspace{0.05cm},\]

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-SD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 1.8\ {\rm dB}\hspace{0.05cm}.\]
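The two gains above follow directly from the definition with the dB values read off the graph (a trivial sketch; the function name is an assumption):

```python
# Sketch: evaluating the coding-gain definition with the values from the
# graph above (9.6 dB uncoded, 9.2 dB ML-HD, 7.8 dB ML-SD, all at BER = 1e-5).

def coding_gain_db(uncoded_db, system_db):
    """G_Code = 10·lg Eb/N0 (uncoded | BER) - 10·lg Eb/N0 (system | BER)."""
    return round(uncoded_db - system_db, 2)

gain_hd = coding_gain_db(9.6, 9.2)   # Hamming (7, 4, 3), ML-HD: 0.4 dB
gain_sd = coding_gain_db(9.6, 7.8)   # Hamming (7, 4, 3), ML-SD: 1.8 dB
```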

Decoding at the Binary Erasure Channel


Finally, it will be shown to what extent the decoder has to be modified if, instead of the BSC model (Binary Symmetric Channel), the BEC channel model (Binary Erasure Channel) is used, which does not produce errors but marks uncertain bits as erasures.

$\text{Example 4:}$  Without loss of generality, as in  $\text{Example 3}$, we again consider the systematic  $\text{(5, 2, 3)}$ block code 

$$\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}.$$
Error correction with BSC and BEC

The graphic shows the system model and gives example values for the individual vectors.

  • The left part of the figure (yellow background) applies to the BSC, with one bit error  $0 → 1$  at the third bit.
  • The right part of the figure (green background) applies to the BEC and shows two  erasures  $\rm 1 → E$  at bit 2 and bit 4.


One recognizes:

  • With the BSC, only one bit error can be corrected due to  $d_{\rm min} = 3$  ($t = 1$, marked in red). If one restricts oneself to error detection, it works for up to  $e= d_{\rm min} -1 = 2$  bit errors.
  • With the BEC, error detection makes no sense, because the channel itself already localizes an uncertain bit as an erasure  $\rm E$. The zeros and ones in the BEC received word  $\underline{y}$  are certain. Therefore, error correction here works with certainty for up to  $e = 2$  erasures.
  • Even  $e = 3$  erasures are sometimes still correctable. For example,  $\underline{y} \rm = (E, E, E, 1, 1)$  can be corrected to  $\underline{z} \rm = (0, 1, 0, 1, 1)$, since no other codeword ends in two ones. However,  $\underline{y} \rm = (0, E, 0, E, E)$  is not correctable, because the all-zero word is allowed in the code.
  • If it is ensured that no received word contains more than two erasures, the BEC block error probability is  ${\rm Pr}(\underline{z} \ne \underline{x}) = {\rm Pr}(\underline{v} \ne \underline{u}) \equiv 0$. In contrast, the corresponding block error probability with the BSC model is always greater than  $0$.


Since with the BEC each received word is either decoded correctly or not at all, we will from now on call the block  $\underline{y} → \underline{z}$  the "codeword finder". An "estimation" takes place only with the BSC model.


But how does the decoding of a received word  $\underline{y}$  with erasures work algorithmically?

$\text{Example 5:}$  Starting from the received word  $\underline{y} \rm = (0, E, 0, E, 1)$  of  $\text{Example 4}$, we formally set the output of the codeword finder to  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$, where the symbols  $z_2 \in \{0, \, 1\}$  and  $z_4 \in \{0, \, 1\}$  are to be determined according to the following equation:

\[\underline{z} \cdot { \boldsymbol{\rm H} }^{\rm T}= \underline{0} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}= \underline{0}^{\rm T} \hspace{0.05cm}.\]

The task now is to implement this determination equation as efficiently as possible. The following calculation steps result:

  • With the parity-check matrix  $\boldsymbol{\rm H}$  of the  $\text{(5, 2, 3)}$ block code and the vector  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$, the above determination equation reads:
\[{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T} = \begin{pmatrix} 1 &0 &1 &0 &0\\ 1 &1 &0 &1 &0\\ 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ z_2 \\ 0 \\ z_4 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • We combine the certain (correct) bits into the vector  $\underline{z}_{\rm K}$  and the erased bits into the vector  $\underline{z}_{\rm E}$. Then we split the parity-check matrix  $\boldsymbol{\rm H}$  into the corresponding submatrices  $\boldsymbol{\rm H}_{\rm K}$  and  $\boldsymbol{\rm H}_{\rm E}$ :
\[\underline{z}_{\rm K} =(0, 0, 1)\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm K}= \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \hspace{0.3cm}\Rightarrow\hspace{0.3cm} {\rm Columns\hspace{0.15cm} 1,\hspace{0.15cm}3 \hspace{0.15cm}and \hspace{0.15cm}5 \hspace{0.15cm}of \hspace{0.15cm}the \hspace{0.15cm}parity-check \hspace{0.15cm}matrix} \hspace{0.05cm},\]
\[\underline{z}_{\rm E} = (z_2, z_4)\hspace{0.05cm},\hspace{0.35cm} { \boldsymbol{\rm H} }_{\rm E}= \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \hspace{0.9cm}\Rightarrow\hspace{0.3cm} {\rm Columns\hspace{0.15cm} 2 \hspace{0.15cm}and \hspace{0.15cm}4 \hspace{0.15cm}of \hspace{0.15cm}the \hspace{0.15cm}parity-check \hspace{0.15cm}matrix} \hspace{0.05cm}.\]
  • Considering that in  $\rm GF(2)$  subtraction is identical to addition, the above equation can be written as follows:
\[{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} + { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= \underline{0}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= { \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \cdot \begin{pmatrix} z_2 \\ z_4 \end{pmatrix} = \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \hspace{0.05cm}.\]
  • This leads to a linear system of two equations for the unknowns  $z_2$  and  $z_4$  $($each  $0$  or  $1)$.
  • From the last row it follows that  $z_2 = 1$, and from the second row  $z_2 + z_4 = 0$   ⇒   $z_4 = 1$.
  • This yields the valid codeword  $\underline{z} \rm = (0, 1, 0, 1, 1)$.
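Instead of solving the linear system by hand as above, the codeword finder can also be sketched by brute force over the erased positions, which is sufficient for short codes (an illustrative Python sketch, not from the article; function names are assumptions):

```python
# Sketch of a BEC codeword finder: try all fillings of the erased positions
# ('E') and keep those that satisfy H · z^T = 0. Exactly one surviving
# candidate means the erasures are uniquely correctable.
from itertools import product

H = [[1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]

def is_codeword(z):
    """True if every parity-check equation is satisfied over GF(2)."""
    return all(sum(a * b for a, b in zip(row, z)) % 2 == 0 for row in H)

def find_codewords(y):
    """All codewords consistent with the non-erased positions of y."""
    erased = [i for i, bit in enumerate(y) if bit == 'E']
    candidates = []
    for fill in product((0, 1), repeat=len(erased)):
        z = list(y)
        for i, bit in zip(erased, fill):
            z[i] = bit
        if is_codeword(z):
            candidates.append(tuple(z))
    return candidates

find_codewords((0, 'E', 0, 'E', 1))    # unique: [(0, 1, 0, 1, 1)]
find_codewords((0, 'E', 0, 'E', 'E'))  # two candidates: not correctable
```

The second call reproduces the non-correctable case from Example 4: both the all-zero word and $(0, 1, 0, 1, 1)$ are consistent with $(0, \rm E, 0, E, E)$.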


Exercises for the chapter


Exercise 1.11: Syndrome Decoding

Exercise 1.11Z: Syndrome Decoding again

Exercise 1.12: Hard Decision vs. Soft Decision

Exercise 1.12Z: Comparison of HC (7, 4, 3) and HC (8, 4, 4)

Exercise 1.13: Binary Erasure Channel Decoding

Exercise 1.13Z: Binary Erasure Channel Decoding again