{{Header
|Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=General Description of Linear Block Codes
|Nächste Seite=Limits for Block Error Probability
}}
  
== Block diagram and requirements ==
<br>
We start from the block diagram already shown in the chapter&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures|$\text{Channel Models and Decision Structures}$]],&nbsp; where the digital channel model used is mostly the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{Binary Symmetric Channel}$]]&nbsp;$\rm (BSC)$.
  
For code word estimation,&nbsp; we use the&nbsp; "Maximum Likelihood Decision"&nbsp; $\rm (ML)$,&nbsp; which for binary codes &nbsp; &#8658; &nbsp; $\underline{x}  \in {\rm GF}(2^n)$&nbsp; at the block level gives the same result as the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Definitions_of_the_different_optimal_receivers|$\text{MAP Receiver}$]].<br>
  
[[File:EN_KC_T_1_5_S1.png|right|frame|Block diagram for decoding block codes|class=fit]]
The task of the channel decoder can be described as follows:
*The vector&nbsp; $\underline{v}$&nbsp; after decoding&nbsp; (at the sink)&nbsp; should match the information word&nbsp; $\underline{u}$&nbsp; as well as possible.  
  
*That is: &nbsp; The&nbsp; &raquo;'''block error probability'''&laquo;&nbsp; should be as small as possible:
::<math>{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.</math>
  
*Because of the deterministic assignments&nbsp; $\underline{x} = {\rm enc}(\underline{u})$&nbsp; and&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{z})$,&nbsp; the following also holds:
  
::<math>{ \rm Pr(block\:error)} = { \rm Pr}( \underline{z} \ne \underline{x}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.</math>
  
*Sought is the most likely transmitted code word&nbsp; $\underline{x}_i$&nbsp; for the given received word&nbsp; $\underline{y} = \underline{x} +\underline{e}$,&nbsp; which is passed on as the result&nbsp; $\underline{z}$:
 
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}|\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.</math>
  
*For the BSC model,&nbsp; both&nbsp; $\underline{x}_i \in {\rm GF}(2^n)$&nbsp; and&nbsp; $\underline{y}  \in {\rm GF}(2^n)$&nbsp; hold,&nbsp; so the maximum likelihood decision rule can also be written using the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding |$\text{Hamming distance}$]]&nbsp; $d_{\rm H}( \underline{y}, \, \underline{x}_i)$:
  
 
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
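The minimum-distance rule can be made concrete with a few lines of code.&nbsp; The following Python sketch&nbsp; (an illustration added here,&nbsp; not part of the original text)&nbsp; decodes by exhaustive search over all code words;&nbsp; as codebook it assumes the small&nbsp; $\text{(5, 2, 3)}$&nbsp; block code used in&nbsp; $\text{Example 3}$&nbsp; later in this chapter:
<syntaxhighlight lang="python">
# Minimal sketch of block-wise ML decoding over the BSC: choose the code word
# with the smallest Hamming distance to the received word. Exhaustive search
# is only feasible for small codes such as this (5, 2, 3) example code.
import numpy as np

def ml_decode_bsc(y, codebook):
    """Return the code word x_i that minimizes d_H(y, x_i)."""
    dists = [np.sum(y != x) for x in codebook]   # Hamming distances d_H(y, x_i)
    return codebook[int(np.argmin(dists))]       # argmin over the code C

codebook = np.array([[0, 0, 0, 0, 0], [0, 1, 0, 1, 1],
                     [1, 0, 1, 1, 0], [1, 1, 1, 0, 1]])
y = np.array([0, 1, 0, 0, 1])                    # received word with one bit error
print(ml_decode_bsc(y, codebook))                # -> [0 1 0 1 1]
</syntaxhighlight>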
  
== Principle of syndrome decoding ==
<br>
Assumed here is an&nbsp; $(n, \, k)$&nbsp; block code with the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; and the systematic code words
  
::<math>\underline{x}\hspace{0.05cm} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_i, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, x_n)
  = (u_1, u_2, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, u_k, p_1, \hspace{0.05cm}\text{...}\hspace{0.05cm}, p_{n-k})\hspace{0.05cm}. </math>
  
With the error vector&nbsp; $\underline{e}$,&nbsp; the following applies to the received word:
  
::<math>\underline{y} = \underline{x} + \underline{e} \hspace{0.05cm}, \hspace{0.4cm} \underline{y} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{x} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{e} \in \hspace{0.1cm} {\rm GF}(2^n)
\hspace{0.05cm}.</math>
  
A bit error at position&nbsp; $i$ &nbsp; &rArr; &nbsp; $y_i &ne; x_i$&nbsp; is expressed by the&nbsp; &raquo;'''error coefficient'''&laquo;&nbsp; $e_i = 1$.<br>
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''syndrome'''&laquo;&nbsp; $\underline{s} = (s_0, s_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, s_{m-1})$&nbsp; is calculated&nbsp; (as a row or column vector)&nbsp; from the received word&nbsp; $\underline{y}$&nbsp; and the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; as follows:
  
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T}\hspace{0.3cm}{\rm or}\hspace{0.3cm}  
\underline{s}^{\rm T} = { \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}\hspace{0.05cm}.</math>
  
*The vector length of&nbsp; $\underline{s}$&nbsp; is equal to&nbsp; $m = n-k$&nbsp; $($the number of rows of&nbsp; $\boldsymbol{\rm H})$.}}<br>
  
The syndrome&nbsp; $\underline{s}$&nbsp; shows the following characteristics:
*Because of the equation&nbsp; $\underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{0}$ &nbsp; the syndrome&nbsp; $\underline{s}$&nbsp; does not depend on the code word&nbsp; $\underline{x}$&nbsp; but solely on the error vector&nbsp; $\underline{e}$:
  
 
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T}
= \hspace{0.05cm} \underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} + \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T}
= \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T}
\hspace{0.05cm}.</math>
  
*For sufficiently few bit errors,&nbsp; $\underline{s}$&nbsp; provides a clear indication of the error locations,&nbsp; allowing full error correction.<br><br>
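This independence can also be checked numerically.&nbsp; The following short Python sketch&nbsp; (an illustration only)&nbsp; uses the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; of the systematic&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code from&nbsp; $\text{Example 1}$&nbsp; below:
<syntaxhighlight lang="python">
# Numerical check that the syndrome depends only on the error vector:
# s = (x + e)·H^T = e·H^T over GF(2), since x·H^T = 0 for every code word x.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])   # parity-check matrix of the (7, 4, 3) code

x = np.array([0, 1, 1, 0, 0, 0, 1])     # a valid code word:  x·H^T = 0
e = np.array([0, 0, 0, 1, 0, 0, 0])     # single bit error at position 4
y = (x + e) % 2                         # received word

print((y @ H.T) % 2)                    # syndrome of y:        [0 1 1]
print((e @ H.T) % 2)                    # syndrome of e alone:  [0 1 1]
</syntaxhighlight>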
  
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; Starting from the systematic&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{(7, 4, 3)}$ Hamming code]],&nbsp; the following result is obtained for the received vector&nbsp; $\underline{y} = (0, 1, 1, 1, 0, 0, 1)$:
  
::<math>{ \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}
=  \begin{pmatrix}
1 &1 &1 &0 &1 &0 &0\\
0 &1 &1 &1 &0 &1 &0\\
1 &1 &0 &1 &0 &0 &1
\end{pmatrix} \cdot \begin{pmatrix}
0 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1
\end{pmatrix} = \begin{pmatrix}
0 \\ 1 \\ 1
\end{pmatrix} = \underline{s}^{\rm T} \hspace{0.05cm}.</math>
  
Comparing the syndrome with the&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Code_definition_by_the_parity-check_matrix|$\text{parity-check equations}$]]&nbsp; of the Hamming code,&nbsp; we see that
*most likely the fourth symbol&nbsp; $(x_4 = u_4)$&nbsp; of the code word has been falsified,<br>
  
*the code word estimator will thus yield the result&nbsp; $\underline{z} = (0, 1, 1, 0, 0, 0, 1)$,<br>
  
*the decision is correct only if only one bit was falsified during transmission.<br><br>
  
Below are the required corrections for the&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code resulting from the calculated syndrome&nbsp; $\underline{s}$&nbsp; corresponding to the columns of the parity-check matrix:
  
::<math>\underline{s} = (0, 0, 0) \hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm no\hspace{0.15cm} correction}\hspace{0.05cm};\hspace{0.8cm}\underline{s} = (1, 0, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm invert}\hspace{0.15cm}p_1\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_3\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_1\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_2\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_3\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_4\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_2\hspace{0.05cm}. </math>}}<br>
  
== Generalization of syndrome decoding ==
<br>
We continue to assume the BSC channel model.&nbsp; This means:
*The received vector&nbsp; $\underline{y}$&nbsp; and the error vector&nbsp; $\underline{e}$&nbsp; are elements of&nbsp; ${\rm GF}(2^n)$.

*The possible code words&nbsp; $\underline{x}_i$&nbsp; belong to the code&nbsp; $\mathcal{C}$,&nbsp; which spans a&nbsp; $k$-dimensional subspace of&nbsp; ${\rm GF}(2^n)$.
  

Under this assumption,&nbsp; we briefly summarize the results of the last sections:
*Syndrome decoding is one possible realization of maximum likelihood decoding of block codes.&nbsp; One decides on the code word&nbsp; $\underline{x}_i$&nbsp; with the least Hamming distance to the received word&nbsp; $\underline{y}$:
  
 
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
  
*But syndrome decoding is also the search for the most probable error vector&nbsp; $\underline{e}$&nbsp; that satisfies the condition&nbsp; $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$.&nbsp; The&nbsp; "syndrome"&nbsp; is thereby determined by the equation
:$$\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} .$$
  
*With the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming weight}$]]&nbsp; $w_{\rm H}(\underline{e})$&nbsp; the second interpretation can also be mathematically formulated as follows:
  
 
::<math>\underline{z} = \underline{y} + {\rm arg} \min_{\underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} {\rm GF}(2^n)} \hspace{0.1cm}
w_{\rm H}(\underline{e}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
  
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; Note that the error vector&nbsp; $\underline{e}$,&nbsp; like the received vector&nbsp; $\underline{y}$,&nbsp; is an element of&nbsp; ${\rm GF}(2^n)$,&nbsp; unlike the syndrome&nbsp; $\underline{s} \in {\rm GF}(2^m)$,&nbsp; where&nbsp; $m = n-k$&nbsp; is the number of parity-check equations.&nbsp; This means that
*the association between the syndrome&nbsp; $\underline{s}$&nbsp; and the error vector&nbsp; $\underline{e}$&nbsp; is not unique,&nbsp; but
  
*in each case&nbsp; $2^k$&nbsp; error vectors lead to the same syndrome&nbsp; $\underline{s}$,&nbsp; which are grouped together into a so-called&nbsp; "coset".}}<br>
  
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; The facts shall be illustrated by an example with the parameters&nbsp; $n = 5, \ k = 2$ &nbsp; &#8658; &nbsp; $m = n-k = 3$.
  
[[File:EN_KC_T_1_5_S3a.png|right|frame|Splitting the&nbsp; $2^n$&nbsp; error vectors into&nbsp; "cosets"|class=fit]]

You can see from this graph:
 
 
#The&nbsp; $2^n = 32$&nbsp; possible error vectors&nbsp; $\underline{e}$&nbsp; are divided into&nbsp; $2^m = 8$&nbsp; cosets&nbsp; ${\it \Psi}_0$,&nbsp; ... &nbsp;, ${\it \Psi}_7$.&nbsp; Explicitly drawn here are only the cosets&nbsp; ${\it \Psi}_0$&nbsp; and&nbsp; ${\it \Psi}_5$.<br><br>
#All&nbsp; $2^k = 4$&nbsp; error vectors of the coset&nbsp; ${\it \Psi}_\mu$&nbsp; lead to the same syndrome&nbsp; $\underline{s}_\mu$.<br><br>
#Each coset&nbsp; ${\it \Psi}_\mu$&nbsp; has a&nbsp; "leader"&nbsp; $\underline{e}_\mu$,&nbsp; namely the member with the minimum Hamming weight.}}<br>
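The coset structure can be reproduced by brute force.&nbsp; The following Python sketch&nbsp; (illustration only)&nbsp; groups all&nbsp; $2^n = 32$&nbsp; error vectors by their syndrome,&nbsp; using the parity-check matrix of the&nbsp; $\text{(5, 2, 3)}$&nbsp; code from&nbsp; $\text{Example 3}$&nbsp; below:
<syntaxhighlight lang="python">
# Partition all 2^n error vectors into cosets: vectors with the same syndrome
# e·H^T form one coset; its minimum-weight member is the coset leader.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])        # parity-check matrix of the (5, 2, 3) code
n = 5

cosets = {}
for bits in itertools.product([0, 1], repeat=n):
    e = np.array(bits)
    s = tuple((e @ H.T) % 2)           # syndrome as dictionary key
    cosets.setdefault(s, []).append(e)

for s, members in cosets.items():      # 2^m = 8 cosets with 2^k = 4 members each
    leader = min(members, key=lambda e: int(e.sum()))
    print(s, "-> leader", leader, f"({len(members)} members)")
</syntaxhighlight>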
  
{{GraueBox|TEXT= 
$\text{Example 3:}$&nbsp; Starting from the systematic&nbsp; $\text{(5,&nbsp;2,&nbsp;3)}$&nbsp; code&nbsp; $\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0)  \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}$,&nbsp; the syndrome decoding procedure is now described in detail.
[[File:EN_KC_T_1_5_S3b_neu2.png|right|frame|Example&nbsp; $\text{(5,&nbsp;2,&nbsp;3)}$ syndrome table with cosets|class=fit]]
*The generator matrix and the parity-check matrix are:
  
::<math>{ \boldsymbol{\rm G} }  
=  \begin{pmatrix}
1 &0 &1 &1 &0 \\
0 &1 &0 &1 &1
\end{pmatrix} \hspace{0.05cm},</math>
::<math>{ \boldsymbol{\rm H} }  
=  \begin{pmatrix}
1 &0 &1 &0 &0 \\
1 &1 &0 &1 &0 \\
0 &1 &0 &0 &1
\end{pmatrix} \hspace{0.05cm}.</math>
  
*The table summarizes the final result.&nbsp; Note:   
#The index&nbsp; $\mu$&nbsp; is not identical to the binary value of&nbsp; $\underline{s}_\mu$.  
#The order is rather given by the number of ones in the coset leader&nbsp; $\underline{e}_\mu$.  
#For example,&nbsp; the syndrome&nbsp; $\underline{s}_5 = (1, 1, 0)$&nbsp; while the syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$.<br>
*To derive this table,&nbsp; note:
#Row 1 refers to the syndrome&nbsp; $\underline{s}_0 = (0, 0, 0)$&nbsp; and the associated coset&nbsp; ${\it \Psi}_0$.&nbsp; The most likely error sequence here is&nbsp; $(0, 0, 0, 0, 0)$ &nbsp; &#8658; &nbsp; no bit error,&nbsp; which we call the&nbsp; "coset leader"&nbsp; $\underline{e}_0$.
#The other entries in the first row,&nbsp; namely&nbsp; $(1, 0, 1, 1, 0 )$,&nbsp; $(0, 1, 0, 1, 1)$&nbsp; and&nbsp; $(1, 1, 1, 0, 1 )$,&nbsp; also each yield the syndrome&nbsp; $\underline{s}_0 = (0, 0, 0)$,&nbsp; but they require at least three bit errors and are correspondingly unlikely.<br>
#In rows 2 to 6,&nbsp; the respective coset leader&nbsp; $\underline{e}_\mu$&nbsp; contains exactly a single&nbsp; "$1$"&nbsp; $(\mu = 1$, ... , $5)$.&nbsp; Here&nbsp; $\underline{e}_\mu$&nbsp; is always the most likely error pattern of the class&nbsp; ${\it \Psi}_\mu$.&nbsp; The other group members require at least two bit errors.<br>
#The syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$&nbsp; is not possible with only one bit error.&nbsp; In creating the table,&nbsp; all&nbsp; $\binom{5}{2} = 10$&nbsp; error patterns&nbsp; $\underline{e}$&nbsp; with weight&nbsp; $w_{\rm H}(\underline{e}) = 2$&nbsp; were then considered.
#The first sequence found with syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$&nbsp; was chosen as coset leader&nbsp; $\underline{e}_6 = (1, 1, 0, 0, 0)$.&nbsp; With a different probing order,&nbsp; the sequence&nbsp; $(0, 0, 1, 0, 1)$&nbsp; from&nbsp; ${\it \Psi}_6$&nbsp; could also have resulted.<br>
#A similar procedure was followed in determining the leader&nbsp; $\underline{e}_7 = (0, 1, 1, 0, 0)$&nbsp; of the coset&nbsp; ${\it \Psi}_7$,&nbsp; characterized by the syndrome&nbsp; $\underline{s}_7 = (1, 1, 1)$.&nbsp; In the class&nbsp; ${\it \Psi}_7$,&nbsp; too,&nbsp; there is another sequence with Hamming weight&nbsp; $w_{\rm H}(\underline{e}) = 2$,&nbsp; namely&nbsp; $(1, 0, 0, 0, 1)$.<br><br>
  
*The table only needs to be created once.&nbsp; First,&nbsp; the syndrome must be determined,&nbsp; e.g. for the received vector&nbsp; $\underline{y} = (0, 1, 0, 0, 1)$:
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T} = \begin{pmatrix}
0 &1 &0 &0 &1
\end{pmatrix} \cdot \begin{pmatrix}
1 &1 &0\\
0 &1 &1\\
1 &0 &0\\
0 &1 &0\\
0 &0 &1
\end{pmatrix} = \begin{pmatrix}
0 &1 &0
\end{pmatrix} = \underline{s}_{2}
\hspace{0.05cm}.</math>
  
*Using the coset leader&nbsp; $\underline{e}_2 = (0, 0, 0, 1, 0)$&nbsp; from the above table&nbsp; $($red entry for&nbsp; $\mu =2)$,&nbsp; one finally arrives at the decoding result:
  
::<math>\underline{z} = \underline{y}  + \underline{e}_2  = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1) + (0,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0) = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1)
\hspace{0.05cm}.</math>}}
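The whole procedure of this example fits into a few lines of code.&nbsp; In the following Python sketch&nbsp; (illustration only,&nbsp; hard-coded for this small code)&nbsp; the coset-leader table is built once by testing error patterns in order of increasing Hamming weight;&nbsp; each received word then costs one syndrome computation and one table lookup:
<syntaxhighlight lang="python">
# Syndrome decoding with a precomputed coset-leader table for the (5, 2, 3) code.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])

# Build the table once: syndrome -> minimum-weight error pattern (coset leader).
leaders = {}
for w in range(6):                                  # search by increasing weight
    for pos in itertools.combinations(range(5), w):
        e = np.zeros(5, dtype=int)
        e[list(pos)] = 1
        s = tuple((e @ H.T) % 2)
        leaders.setdefault(s, e)                    # keep the first (lowest-weight) hit

def syndrome_decode(y):
    s = tuple((y @ H.T) % 2)
    return (y + leaders[s]) % 2                     # z = y + e_mu

y = np.array([0, 1, 0, 0, 1])
print(syndrome_decode(y))                           # -> [0 1 0 1 1]
</syntaxhighlight>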
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; From these examples it is already clear that&nbsp; &raquo;'''syndrome decoding involves considerable effort'''&laquo;&nbsp; if one cannot exploit certain code properties,&nbsp; as is possible e.g. for cyclic codes.
  
*For large block code lengths,&nbsp; this method fails completely.&nbsp; Thus,&nbsp; to decode a&nbsp; [https://en.wikipedia.org/wiki/BCH_code $\text{BCH code}$]&nbsp; $($the abbreviation stands for its inventors&nbsp; '''B'''ose,&nbsp; '''C'''haudhuri,&nbsp; '''H'''ocquenghem$)$&nbsp; with code parameters&nbsp; $n = 511$,&nbsp; $k = 259$&nbsp; and&nbsp; $d_{\rm min} = 61$,&nbsp; one would have to evaluate and store&nbsp; $2^{511-259} \approx 10^{76}$&nbsp; error patterns of length&nbsp; $511$.
*Happily,&nbsp; however,&nbsp; there are special decoding algorithms for these and also for other codes of large block length which lead to success with less effort.}}
  
== Coding gain - bit error rate with AWGN ==
<br>
We now consider the&nbsp; [[Digital_Signal_Transmission/Error_Probability_for_Baseband_Transmission#Definition_of_the_bit_error_rate|$\text{bit error rate}$]]&nbsp; $\rm (BER)$&nbsp; for the following constellations:
[[File:EN_KC_T_1_5_S4.png|right|frame|Bit error rate for the&nbsp; $\text{(7, 4, 3)}$ Hamming code|class=fit]]
#Hamming code&nbsp; $\text{HC (7, 4, 3)}$,<br>
#AWGN channel,&nbsp; characterized by the quotient&nbsp; $E_{\rm B}/N_0$&nbsp; (in dB),<br>
#maximum likelihood decoder&nbsp; $\rm (ML)$&nbsp; with&nbsp; "hard decision"&nbsp; $\rm (HD)$&nbsp; or&nbsp; "soft decision"&nbsp; $\rm (SD)$.
  
It should be noted with regard to this graph:
 +
*The black comparison curve applies e.g. to binary phase modulation&nbsp; $\rm (BPSK)$&nbsp; without coding.&nbsp; For a bit error rate of&nbsp; $10^{-5}$,&nbsp; this requires about&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.6 \, \rm dB$.<br>
  
*The red circles apply to the&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{HC (7, 4, 3)}$]]&nbsp; and hard decisions of the maximum likelihood decoder&nbsp; $\text{(ML&ndash;HD)}$.&nbsp; Syndrome decoding is one possible realization of this.<br>
*This configuration brings an improvement over the comparison system only for&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 >6 \, \rm dB$.&nbsp; For&nbsp; $\rm BER =10^{-5}$&nbsp; one only needs&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.2 \, \rm dB$.<br>
  
*The green crosses for&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{HC (7, 4, 3)}$]]&nbsp; and&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder| $\text{soft decision}$]]&nbsp; $\text{(ML&ndash;SD)}$&nbsp; are below the red curve throughout the range.&nbsp; For&nbsp; $\rm BER =10^{-5}$&nbsp; this gives&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 \approx 7.8 \, \rm dB$.<br><br>
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; As the&nbsp; &raquo;'''coding gain'''&laquo;&nbsp; of a considered system configuration,
*characterized by its code and
*the way it is decoded,


we denote the reduction in the&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0$&nbsp; required for a given bit error rate&nbsp; $\rm (BER)$&nbsp; relative to the comparison system&nbsp; $($without coding$)$:
  
::<math>G_{\rm Code} (\hspace{0.05cm}\text{considered system} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) =10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0
\hspace{0.15cm}(\hspace{0.05cm}\text{comparison system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})-
10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0
\hspace{0.15cm}(\hspace{0.05cm}\text{considered system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})   
\hspace{0.05cm}. </math>}}<br>
  
Applied to the above graph,&nbsp; one obtains:
  
:<math>G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-HD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 0.4\ {\rm dB}\hspace{0.05cm},</math>
:<math>G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-SD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 1.8\ {\rm dB}\hspace{0.05cm}.</math><br>
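Such a point on the&nbsp; $\text{ML&ndash;HD}$&nbsp; curve can be reproduced by simulation.&nbsp; The following Python sketch is only an illustration,&nbsp; not the program behind the graph above;&nbsp; the systematic generator matrix&nbsp; $\boldsymbol{\rm G}$&nbsp; is an assumption chosen to be consistent with the parity-check matrix of&nbsp; $\text{Example 1}$,&nbsp; and hard-decision single-error correction coincides with&nbsp; $\text{ML&ndash;HD}$&nbsp; for this perfect code:
<syntaxhighlight lang="python">
# Monte-Carlo estimate of the bit error rate: (7, 4, 3) Hamming code, BPSK over
# AWGN, hard decision plus single-error correction. With code rate R = 4/7 the
# energy per channel bit is R*E_B, so sigma^2 = 1/(2*R*E_B/N_0).
import numpy as np

rng = np.random.default_rng(1)
G = np.array([[1, 0, 0, 0, 1, 0, 1],      # assumed systematic G with G*H^T = 0
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
R = 4 / 7

def ber_hamming_hd(ebn0_db, n_blocks=200_000):
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * R * ebn0))    # noise std for unit-energy BPSK symbols
    u = rng.integers(0, 2, size=(n_blocks, 4))
    x = u @ G % 2                           # encode
    r = (1 - 2 * x) + sigma * rng.standard_normal(x.shape)  # BPSK + AWGN
    y = (r < 0).astype(int)                 # hard decision per bit
    s = y @ H.T % 2                         # syndrome of every block
    flip = (s[:, None, :] == H.T[None, :, :]).all(axis=2)   # column of H equal to s
    y ^= flip.astype(int)                   # invert the suspected bit
    return np.mean(y[:, :4] != u)           # info bits occupy the first 4 positions

print(ber_hamming_hd(8.0))   # on the order of 1e-4 in this E_B/N_0 region
</syntaxhighlight>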
  
== Decoding at the Binary Erasure Channel ==
<br>
Finally,&nbsp; it will be shown to what extent the decoder has to be modified if,&nbsp; instead of the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC model}$]]&nbsp; ("Binary Symmetric Channel"),&nbsp; the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC model}$]]&nbsp; ("Binary Erasure Channel")&nbsp; is used,&nbsp; which does not produce errors but marks uncertain bits as&nbsp; "erasures".<br>
  
{{GraueBox|TEXT= 
$\text{Example 4:}$&nbsp; We consider again,&nbsp; as in&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Generalization_of_syndrome_decoding|$\text{Example 3}$]],&nbsp; the systematic&nbsp; $\text{(5, 2, 3)}$&nbsp; block code&nbsp;
:$$\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.3cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.3cm}(1, 0, 1, 1, 0)  \hspace{0.05cm},\hspace{0.3cm}(1, 1, 1, 0, 1) \big \}.$$

The graphic shows the system model and gives exemplary values for the individual vectors.
*The left part of the picture&nbsp; (yellow background)&nbsp; applies to the&nbsp; "BSC"&nbsp; with one bit error&nbsp; $(0 &#8594; 1)$&nbsp; at the third bit.

*The right part of the picture&nbsp; (green background)&nbsp; applies to the&nbsp; "BEC"&nbsp; and shows two&nbsp; erasures&nbsp; $(\rm 1 &#8594; E)$&nbsp; at the second and the fourth bit.

[[File:EN_KC_T_1_5_S5.png|right|frame|Error correction with BSC and BEC]]
  
One recognizes:
*With the BSC,&nbsp; only&nbsp; $t = 1$&nbsp; bit error&nbsp; (marked in red in the left part)&nbsp; can be corrected,&nbsp; due to&nbsp; $d_{\rm min} = 3$.&nbsp; If one restricts oneself to error detection,&nbsp; this works for up to&nbsp; $e= d_{\rm min} -1 = 2$&nbsp; bit errors.<br>
  
*For the BEC,&nbsp; error detection makes no sense,&nbsp; because the channel already locates an uncertain bit as an&nbsp; "erasure"&nbsp; $\rm E$.&nbsp; The zeros and ones in the BEC received word&nbsp; $\underline{y}$&nbsp; are certain.&nbsp; Therefore the error correction works here with certainty for up to&nbsp; $e = 2$&nbsp; erasures&nbsp; (marked in red in the right part).<br>
  
*Even&nbsp; $e = 3$&nbsp; erasures are sometimes still correctable.&nbsp; Thus&nbsp; $\underline{y} \rm = (E, E, E, 1, 1)$&nbsp; can be corrected to&nbsp; $\underline{z} \rm = (0, 1, 0, 1, 1)$,&nbsp; since no second code word ends with two ones.&nbsp; But&nbsp; $\underline{y} \rm = (0, E, 0, E, E)$&nbsp; is not correctable because of the all-zero word allowed in the code.<br>
  
*If it is ensured that no received word contains more than two erasures,&nbsp; the BEC block error probability is&nbsp; ${\rm Pr}(\underline{z} \ne \underline{x}) = {\rm Pr}(\underline{v} \ne \underline{u}) \equiv 0$.&nbsp; In contrast,&nbsp; the corresponding block error probability in the BSC model is always greater than&nbsp; $0$.
  

Since after the BEC each received word is either decoded correctly or not at all,&nbsp; we will henceforth call the block&nbsp; $\underline{y} &#8594; \underline{z}$&nbsp; the&nbsp; "code word finder".&nbsp; An&nbsp; "estimation"&nbsp; takes place only in the BSC model.<br>}}<br>
But how does the decoding of a received word&nbsp; $\underline{y}$&nbsp; with erasures work algorithmically?

{{GraueBox|TEXT= 
$\text{Example 5:}$&nbsp; Starting from the received word&nbsp; $\underline{y} \rm = (0, E, 0, E, 1)$&nbsp; of&nbsp; $\text{Example 4}$,&nbsp; we formally set the output of the code word finder to&nbsp; $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,&nbsp; where the symbols&nbsp; $z_2 \in \{0, \, 1\}$&nbsp; and&nbsp; $z_4 \in \{0, \, 1\}$&nbsp; are to be determined according to the following equation:
::<math>\underline{z} \cdot { \boldsymbol{\rm H} }^{\rm T}= \underline{0}
\hspace{0.3cm} \Rightarrow \hspace{0.3cm}
{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}= \underline{0}^{\rm T} \hspace{0.05cm}.</math>
  
The task is now to implement this determination equation as efficiently as possible.&nbsp; The following calculation steps result:
*With the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; of the&nbsp; $\text{(5, 2, 3)}$&nbsp; block code and the vector&nbsp; $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,&nbsp; the above determination equation reads:
  
::<math>{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}
=  \begin{pmatrix}
1 &0 &1 &0 &0\\
1 &1 &0 &1 &0\\
0 &1 &0 &0 &1
\end{pmatrix} \cdot \begin{pmatrix}
0 \\ z_2 \\ 0 \\ z_4 \\ 1
\end{pmatrix} \stackrel{!}{=} \begin{pmatrix}
0 \\ 0 \\ 0
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*We combine the&nbsp; "correct bits"&nbsp; (German:&nbsp; "korrekt" &nbsp; &rArr; &nbsp; subscript&nbsp; "K")&nbsp; into the vector&nbsp; $\underline{z}_{\rm K}$&nbsp; and the&nbsp; "erased bits"&nbsp; into the vector&nbsp; $\underline{z}_{\rm E}$.&nbsp; Then we split the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; into the corresponding submatrices&nbsp; $\boldsymbol{\rm H}_{\rm K}$&nbsp; and&nbsp; $\boldsymbol{\rm H}_{\rm E}$:
  
::<math>\underline{z}_{\rm K} =(0, 0, 1)\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm K}=   
\begin{pmatrix}
1 &1 &0\\
1 &0 &0\\
0 &0 &1
\end{pmatrix}
\hspace{0.3cm}\Rightarrow\hspace{0.3cm}
\text{columns 1, 3 and 5 of the parity-check matrix} \hspace{0.05cm},</math>
::<math>\underline{z}_{\rm E} = (z_2, z_4)\hspace{0.05cm},\hspace{0.35cm} { \boldsymbol{\rm H} }_{\rm E}=   
\begin{pmatrix}
0 &0\\
1 &1\\
1 &0
\end{pmatrix}
  \hspace{1.05cm}\Rightarrow\hspace{0.3cm}
\text{columns 2 and 4 of the parity-check matrix} \hspace{0.05cm}.</math>
  
*Remembering that in&nbsp; $\rm GF(2)$&nbsp; subtraction equals addition,&nbsp;  the above equation can be represented as follows:
  
::<math>{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T}
+  
{ \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= \underline{0}^{\rm T}  
\hspace{0.3cm} \Rightarrow \hspace{0.3cm}
{ \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}=
{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} \hspace{0.3cm}
\Rightarrow \hspace{0.3cm}
\begin{pmatrix}
0 &0\\
1 &1\\
1 &0
\end{pmatrix} \cdot \begin{pmatrix}
z_2 \\ z_4
\end{pmatrix} = \begin{pmatrix}
0 \\ 0 \\ 1
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*This leads to a linear system with two equations for the two unknowns&nbsp; $z_2$&nbsp; and&nbsp; $z_4$&nbsp; $($each&nbsp; $0$&nbsp; or&nbsp; $1)$.

*From the last row follows&nbsp; $z_2 = 1$,&nbsp; and from the second row&nbsp; $z_2 + z_4 = 0$ &nbsp; &#8658; &nbsp; $z_4 = 1$.&nbsp; This gives the valid code word&nbsp; $\underline{z} \rm = (0, 1, 0, 1, 1)$.}}<br>
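The same calculation steps can be carried out programmatically.&nbsp; The following Python sketch&nbsp; (illustration only,&nbsp; hard-coded for the&nbsp; $\text{(5, 2, 3)}$&nbsp; code)&nbsp; splits&nbsp; $\boldsymbol{\rm H}$&nbsp; by known and erased positions and solves the determination equation by exhaustive search over the erased bits;&nbsp; for larger codes one would use Gaussian elimination over&nbsp; $\rm GF(2)$&nbsp; instead:
<syntaxhighlight lang="python">
# BEC "code word finder": solve H_E · z_E^T = H_K · z_K^T over GF(2).
# Assumes the erasure pattern is correctable, i.e. the solution is unique.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])

def bec_decode(y):                        # y contains the symbols 0, 1 or 'E'
    erased = [i for i, b in enumerate(y) if b == 'E']
    known = [i for i, b in enumerate(y) if b != 'E']
    z_K = np.array([y[i] for i in known], dtype=int)
    rhs = (H[:, known] @ z_K) % 2         # right-hand side H_K · z_K^T
    for bits in itertools.product([0, 1], repeat=len(erased)):
        if (((H[:, erased] @ np.array(bits)) % 2) == rhs).all():
            z = np.zeros(len(y), dtype=int)
            z[known], z[erased] = z_K, bits
            return z                      # the (unique) fitting code word
    return None                           # more erasures than correctable

y = [0, 'E', 0, 'E', 1]
print(bec_decode(y))                      # -> [0 1 0 1 1], as derived above
</syntaxhighlight>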
  
== Exercises for the chapter ==
<br>
[[Aufgaben:Exercise_1.11:_Syndrome_Decoding|Exercise 1.11: Syndrome Decoding]]
  
[[Aufgaben:Exercise_1.11Z:_Syndrome_Decoding_again|Exercise 1.11Z: Syndrome Decoding again]]
  
[[Aufgaben:Exercise_1.12:_Hard_Decision_vs._Soft_Decision|Exercise 1.12: Hard Decision vs. Soft Decision]]
  
[[Aufgaben:Exercise_1.12Z:_Comparison_of_HC_(7,_4,_3)_and_HC_(8,_4,_4)|Exercise 1.12Z: Comparison of HC (7, 4, 3) and HC (8, 4, 4)]]
  
[[Aufgaben:Exercise_1.13:_Binary_Erasure_Channel_Decoding|Exercise 1.13: Binary Erasure Channel Decoding]]
  
[[Aufgaben:Exercise_1.13Z:_Binary_Erasure_Channel_Decoding_again|Exercise 1.13Z: Binary Erasure Channel Decoding again]]
  
 
{{Display}}
 
{{Display}}

Latest revision as of 17:17, 30 November 2022

Block diagram and requirements


We start from the block diagram already shown in the chapter  $\text{Channel Models and Decision Structures}$  where the digital channel model used is mostly the  $\text{Binary Symmetric Channel}$ $\rm (BSC)$. 

For code word estimation,  we use the  "Maximum Likelihood Decision"  $\rm (ML)$,  which for binary codes   ⇒   $\underline{x} \in {\rm GF}(2^n)$  at the block level gives the same result as the  $\text{MAP Receiver}$.

Block diagram for decoding block codes

The task of the channel decoder can be described as follows:

  • The vector  $\underline{v}$  after decoding  (at the sink)  should match the information word  $\underline{u}$  as well as possible.
  • That is:   The  »block error probability«  should be as small as possible:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • Because of assignments  $\underline{x} = {\rm enc}(\underline{u})$  resp.  $\underline{v} = {\rm enc}^{-1}(\underline{z})$  also holds:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{z} \ne \underline{x}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • Sought is the most likely sent code word  $\underline{y} = \underline{x} +\underline{e}$  for the given received word  $\underline{x}_i$,  which is passed on as result  $\underline{z}$:
\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}|\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.\]
  • For the BSC model,  both   $\underline{x}_i \in {\rm GF}(2^n)$   and   $\underline{y} \in {\rm GF}(2^n)$,  so the maximum likelihood decision rule can also be written using the  $\text{Hamming distance}$  $d_{\rm H}( \underline{y}, \, \underline{x}_i)$:
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]

Principle of syndrome decoding


Assumed here is a  $(n, \, k)$  block code with the parity-check matrix  $\boldsymbol{\rm H}$  and the systematic code words

\[\underline{x}\hspace{0.05cm} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n) = (u_1, u_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_k, p_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, p_{n-k})\hspace{0.05cm}. \]

With the error vector  $\underline{e}$  then applies to the received word:

\[\underline{y} = \underline{x} + \underline{e} \hspace{0.05cm}, \hspace{0.4cm} \underline{y} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{x} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{e} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}.\]

A bit error at position  $i$   ⇒   $y_i ≠ x_i$  is expressed by the  »error coefficient«  $e_i = 1$.

$\text{Definition:}$  The  »syndrome«  $\underline{s} = (s_0, s_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, s_{m-1})$  is calculated  (as row resp. column vector)  from the received word  $\underline{y}$  and the parity-check matrix  $\boldsymbol{\rm H}$  as follows:

\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T}\hspace{0.3cm}{\rm bzw.}\hspace{0.3cm} \underline{s}^{\rm T} = { \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}\hspace{0.05cm}.\]
  • The vector length of  $\underline{s}$  is equal to  $m = n-k$  $($row number of  $\boldsymbol{\rm H})$.


The syndrome  $\underline{s}$  shows the following characteristics:

  • Because of the equation  $\underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{0}$   the syndrome  $\underline{s}$  does not depend on the code word  $\underline{x}$  but solely on the error vector  $\underline{e}$:
\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} + \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}.\]
  • For sufficiently few bit errors,  $\underline{s}$  provides unambiguous information about the error locations,  allowing full error correction.

$\text{Example 1:}$  Starting from the systematic  $\text{(7, 4, 3)}$ Hamming code,  the following result is obtained for the received vector  $\underline{y} = (0, 1, 1, 1, 0, 0, 1)$:

\[{ \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T} = \begin{pmatrix} 1 &1 &1 &0 &1 &0 &0\\ 0 &1 &1 &1 &0 &1 &0\\ 1 &1 &0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \underline{s}^{\rm T} \hspace{0.05cm}.\]

Comparing the syndrome with the  $\text{parity-check equations}$  of the Hamming code,  we see that

  • most likely the fourth symbol  $(x_4 = u_4)$  of the code word has been falsified,
  • the code word estimator will thus yield the result  $\underline{z} = (0, 1, 1, 0, 0, 0, 1)$,
  • the decision is correct only if exactly one bit was falsified during transmission.

Below are the required corrections for the  $\text{(7, 4, 3)}$  Hamming code resulting from the calculated syndrome  $\underline{s}$,  corresponding to the columns of the parity-check matrix  (a short verification sketch follows these equations):

\[\underline{s} = (0, 0, 0) \hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm no\hspace{0.15cm} correction}\hspace{0.05cm};\hspace{0.8cm}\underline{s} = (1, 0, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm invert}\hspace{0.15cm}p_1\hspace{0.05cm};\]
\[\underline{s} =(0, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_3\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_1\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_2\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_3\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_4\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_2\hspace{0.05cm}. \]
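The syndrome calculation of Example 1 and the correction table above can be verified with a few lines (a sketch; $\boldsymbol{\rm H}$  and  $\underline{y}$  are taken from Example 1):

```python
import numpy as np

# Parity-check matrix of the systematic (7, 4, 3) Hamming code (Example 1)
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

y = np.array([0, 1, 1, 1, 0, 0, 1])   # received word from Example 1
s = H @ y % 2                          # syndrome: s^T = H · y^T over GF(2)
print(s)                               # -> [0 1 1]  =>  invert u4

# Since s equals the column of H at the error position, a single bit error
# can be located by comparing s with all columns:
pos = [i for i in range(7) if np.array_equal(H[:, i], s)]
print(pos)                             # -> [3], i.e. the fourth code word symbol
```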


Generalization of syndrome decoding


We continue to assume the BSC channel model. This means:

  • The received vector  $\underline{y}$   and the error vector  $\underline{e}$   are elements of  ${\rm GF}(2^n)$.
  • The possible code words  $\underline{x}_i$  belong to the code  $\mathcal{C}$,  which spans a  $k$-dimensional subspace of  ${\rm GF}(2^n)$.


Under this assumption,  we briefly summarize the results of the last sections:

  • Syndrome decoding is one possible realization of maximum likelihood decoding of block codes.  One decides on the code word  $\underline{x}_i$   with the least Hamming distance to the received word  $\underline{y}$:
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
  • But syndrome decoding can also be understood as the search for the most probable error vector  $\underline{e}$  that satisfies the condition  $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$.  The  "syndrome"  is thereby determined by the equation
$$\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} .$$
  • With the  $\text{Hamming weight}$  $w_{\rm H}(\underline{e})$,  this second interpretation can also be formulated mathematically;  the side condition  $\underline{e}_i \cdot \boldsymbol{\rm H}^{\rm T} = \underline{s}$  restricts the minimization  (a brute-force sketch follows this list):
\[\underline{z} = \underline{y} + {\rm arg} \min_{\underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} {\rm GF}(2^n)\hspace{0.05cm}:\hspace{0.1cm} \underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm}\cdot \hspace{0.05cm}{ \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}=\hspace{0.05cm} \underline{s}} \hspace{0.1cm} w_{\rm H}(\underline{e}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
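The constrained minimization can be sketched directly: among all error vectors producing the observed syndrome, one of minimum Hamming weight is selected (brute force over  ${\rm GF}(2^n)$,  hence only illustrative; the function name is hypothetical):

```python
import itertools
import numpy as np

def syndrome_decode(y, H):
    """Sketch: find a minimum-weight error vector e with e·H^T = s
       and return the corresponding code word estimate z = y + e."""
    n = H.shape[1]
    s = H @ y % 2
    for w in range(n + 1):                        # probe by increasing weight
        for pos in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(pos)] = 1
            if np.array_equal(H @ e % 2, s):
                return (y + e) % 2
```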

$\text{Conclusion:}$  Note that the error vector  $\underline{e}$,  like the received vector  $\underline{y}$,  is an element of  ${\rm GF}(2^n)$,  unlike the syndrome  $\underline{s} \in {\rm GF}(2^m)$,  where  $m = n-k$  is the number of parity-check equations.  This means that

  • the association between the syndrome  $\underline{s}$  and the error vector  $\underline{e}$  is not unique,  but
  • in each case  $2^k$  error vectors lead to the same syndrome  $\underline{s}$;  these are grouped together into a so-called   "coset".


$\text{Example 2:}$  These facts are illustrated by an example with parameters  $n = 5, \ k = 2$   ⇒   $m = n-k = 3$.

Splitting the  $2^n$  error vectors into  "cosets"

You can see from this graph:

  1. The  $2^n = 32$  possible error vectors  $\underline{e}$  are divided into  $2^m = 8$  cosets  ${\it \Psi}_0$,  ...  , ${\it \Psi}_7$.  Explicitly drawn here are only the cosets  ${\it \Psi}_0$  and  ${\it \Psi}_5$.

  2. All  $2^k = 4$  error vectors of the coset  ${\it \Psi}_\mu$  lead to the syndrome  $\underline{s}_\mu$.

  3. Each coset  ${\it \Psi}_\mu$  has a  "leader"  $\underline{e}_\mu$,  namely the error vector with the minimum Hamming weight.


$\text{Example 3:}$  Starting from the systematic  $\text{(5, 2, 3)}$  code  $\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}$  the syndrome decoding procedure is now described in detail.

Example  $\text{(5, 2, 3)}$ syndrome table with cosets
  • The generator matrix and the parity-check matrix are:
\[{ \boldsymbol{\rm G} } = \begin{pmatrix} 1 &0 &1 &1 &0 \\ 0 &1 &0 &1 &1 \end{pmatrix} \hspace{0.05cm},\]
\[{ \boldsymbol{\rm H} } = \begin{pmatrix} 1 &0 &1 &0 &0 \\ 1 &1 &0 &1 &0 \\ 0 &1 &0 &0 &1 \end{pmatrix} \hspace{0.05cm}.\]
  • The table summarizes the final result. Note:
  1. The index  $\mu$  is not identical to the binary value of  $\underline{s}_\mu$.
  2. The order is rather given by the number of ones in the coset leader  $\underline{e}_\mu$.
  3. For example, the syndrome  $\underline{s}_5 = (1, 1, 0)$  and the syndrome  $\underline{s}_6 = (1, 0, 1)$.


  • To derive this table,  note:
  1. Row 1 refers to the syndrome  $\underline{s}_0 = (0, 0, 0)$  and the associated coset  ${\it \Psi}_0$.  The most likely error sequence here is  $(0, 0, 0, 0, 0)$   ⇒   no bit error,  which we call the  "coset leader"  $\underline{e}_0$.
  2. The other entries in the first row,  namely  $(1, 0, 1, 1, 0 )$,  $(0, 1, 0, 1, 1)$  and  $(1, 1, 1, 0, 1 )$,  also each yield the syndrome  $\underline{s}_0 = (0, 0, 0)$,  but occur only with at least three bit errors and are correspondingly unlikely.
  3. In rows 2 to 6,  the respective coset leader  $\underline{e}_\mu$  contains exactly one  "$1$"  $(\mu = 1$, ... , $5)$.  Here  $\underline{e}_\mu$  is always the most likely error pattern of the class  ${\it \Psi}_\mu$.  The other group members arise only with at least two bit errors.
  4. The syndrome  $\underline{s}_6 = (1, 0, 1)$  is not possible with only one bit error.  In creating the table,  we then considered all  $\binom{5}{2} = 10$  error patterns  $\underline{e}$  with weight  $w_{\rm H}(\underline{e}) = 2$.
  5. The first found sequence with syndrome  $\underline{s}_6 = (1, 0, 1)$  was chosen as coset leader  $\underline{e}_6 = (1, 1, 0, 0, 0)$.  With a different probing order,  the sequence  $(0, 0, 1, 0, 1)$  from  ${\it \Psi}_6$  could also have resulted.
  6. The same procedure was followed in determining the leader  $\underline{e}_7 = (0, 1, 1, 0, 0)$  of the coset  ${\it \Psi}_7$,  characterized by the common syndrome  $\underline{s}_7 = (1, 1, 1)$.  In the class  ${\it \Psi}_7$,  too,  there is another sequence with Hamming weight  $w_{\rm H}(\underline{e}) = 2$,  namely  $(1, 0, 0, 0, 1)$.

  • The table only needs to be created once.  For the actual decoding,  first the syndrome must be determined,  e.g. for the received vector  $\underline{y} = (0, 1, 0, 0, 1)$:
\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T} = \begin{pmatrix} 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 1 &1 &0 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \\ \end{pmatrix} = \begin{pmatrix} 0 &1 &0 \end{pmatrix}= \underline{s}_2 \hspace{0.05cm}.\]
  • Using the coset leader  $\underline{e}_2 = (0, 0, 0, 1, 0)$  from the above table  $($red entry for  $\mu =2)$,  one finally arrives at the decoding result  (a short script reproducing this procedure follows below):
\[\underline{z} = \underline{y} + \underline{e}_2 = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1) + (0,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0) = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1) \hspace{0.05cm}.\]
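The table-based procedure of this example can be reproduced by a short script (a sketch based on the matrices given above; as in the text, the leader of each coset is simply the first pattern found while probing in order of increasing weight):

```python
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])       # parity-check matrix of the (5, 2, 3) code
n = 5

# Build the syndrome table once: map each syndrome to its coset leader.
leader = {}
for w in range(n + 1):
    for pos in itertools.combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(pos)] = 1
        s = tuple(H @ e % 2)
        leader.setdefault(s, e)       # keep only the first (lowest-weight) pattern

# Decoding then reduces to one syndrome computation and one table lookup:
y = np.array([0, 1, 0, 0, 1])         # received word from the example
s = tuple(H @ y % 2)                  # -> (0, 1, 0) = s_2
z = (y + leader[s]) % 2               # coset leader e_2 = (0, 0, 0, 1, 0)
print(z)                              # -> [0 1 0 1 1]
```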


$\text{Conclusion:}$  From these examples it is already clear that  »syndrome decoding involves considerable effort«  if one cannot exploit special code properties,  e.g. those of cyclic codes.

  • For large block code lengths,  this method fails completely.  Thus,  to decode a  $\text{BCH code}$  $($the abbreviation stands for its inventors Bose, Chaudhuri and Hocquenghem$)$  with code parameters  $n = 511$,  $k = 259$  and  $d_{\rm min} = 61$,  one would have to evaluate and store exactly  $2^{511-259} \approx 10^{76}$  error patterns of length  $511$.
  • Fortunately,  however,  there are special decoding algorithms for these and other codes of large block length,  which achieve the goal with considerably less effort.

Coding gain - bit error rate with AWGN


We now consider the  $\text{bit error rate}$  for the following constellation:

Bit error rate for the  $\text{(7, 4, 3)}$ Hamming code
  1. Hamming code  $\text{HC (7, 4, 3)}$,
  2. AWGN channel,  characterized by the quotient  $E_{\rm B}/N_0$  (in dB),
  3. Maximum Likelihood Decoder  $\rm (ML)$  with  "Hard Decision"  $\rm (HD)$  or  "Soft Decision"  $\rm (SD)$,  respectively.


It should be noted with regard to this graph:

  • The black comparison curve applies e.g. to binary phase modulation  $\rm (BPSK)$  without coding.  For a bit error rate of  $10^{-5}$,  this requires about  $10 \cdot \lg \, E_{\rm B}/N_0 = 9.6 \, \rm dB$.
  • The red circles apply to the  $\text{HC (7, 4, 3)}$  and hard decisions of the maximum likelihood decoder  $\text{(ML–HD)}$.  Syndrome decoding is one possible realization of this.
  • This configuration brings an improvement over the comparison system only for  $10 \cdot \lg \, E_{\rm B}/N_0 >6 \, \rm dB$.  For  $\rm BER =10^{-5}$,  one needs only about  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.2 \, \rm dB$.
  • The green crosses for  $\text{HC (7, 4, 3)}$  and  $\text{soft decision}$  $\text{(ML–SD)}$  lie below the red curve throughout the entire range.  For  $\rm BER =10^{-5}$,  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 7.8 \, \rm dB$  is sufficient.  (A simulation sketch for the hard decision curve follows this list.)
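The red circles (ML–HD) can be approximately reproduced by simulation. Below is a minimal Monte Carlo sketch, assuming BPSK mapping  $0 \to +1$,  $1 \to -1$,  unit symbol energy, and a systematic generator matrix consistent with the parity-check matrix from Example 1; function name and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Systematic (7, 4, 3) Hamming code, consistent with H from Example 1
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
R = 4 / 7                                                # code rate

def ber_ml_hd(ebn0_db, n_blocks=200_000):
    """Monte Carlo estimate of the bit error rate for HC(7,4,3) with ML-HD."""
    sigma = np.sqrt(1 / (2 * R * 10 ** (ebn0_db / 10)))  # E_s = 1, E_b = E_s/R
    u = rng.integers(0, 2, size=(n_blocks, 4))           # information words
    x = u @ G % 2                                        # encoding
    r = 1 - 2 * x + sigma * rng.standard_normal(x.shape) # BPSK + AWGN
    y = (r < 0).astype(int)                              # hard decision per bit
    s = y @ H.T % 2                                      # syndromes
    for i in range(7):                                   # single error correction:
        hit = (s == H[:, i]).all(axis=1)                 # syndrome = i-th column
        y[hit, i] ^= 1                                   # -> invert bit i
    return np.mean(y[:, :4] != u)                        # info bits are bits 1..4

for ebn0_db in (4, 6, 8):
    print(ebn0_db, "dB:", ber_ml_hd(ebn0_db))
```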

$\text{Definition:}$  As the  »coding gain«  of a considered system configuration,

  • characterized by its code and
  • the way it is decoded,


we denote the reduction in  $10 \cdot \lg \, E_{\rm B}/N_0$  required for a given bit error rate  $\rm (BER)$  relative to the comparison system  $($without coding$)$:

\[G_{\rm Code} (\hspace{0.05cm}\text{considered system} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) =10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}\text{comparison system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})- 10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}\text{considered system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) \hspace{0.05cm}. \]


Applied to the above graph, one obtains:

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-HD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 0.4\ {\rm dB}\hspace{0.05cm},\]

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-SD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 1.8\ {\rm dB}\hspace{0.05cm}.\]

Decoding at the Binary Erasure Channel


Finally,  it will be shown to what extent the decoder has to be modified if,  instead of the BSC model,  the  $\text{Binary Erasure Channel}$  $\rm (BEC)$  is used,  which does not falsify bits but marks uncertain bits as  "erasures".

$\text{Example 4:}$  We consider again as in  $\text{Example 3}$  the systematic  $\text{(5, 2, 3)}$  block code 

$$\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.3cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.3cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.3cm}(1, 1, 1, 0, 1) \big \}.$$

The graphic shows the system model and gives exemplary values for the individual vectors.

  • The left part of the picture  (yellow background)  is valid for  "BSC"  with one bit error  $(0 → 1)$  at the third bit.
  • The right part of the picture  (green background)  is for  "BEC"  and shows two  erasures  $(\rm 1 → E)$  at the second and the fourth bit.
Error correction with BSC and BEC


One recognizes:

  • With BSC only  $t = 1$  bit error  (marked in red in the left part)  can be corrected due to  $d_{\rm min} = 3$.  If one restricts oneself to error detection, this works up to  $e= d_{\rm min} -1 = 2$  bit errors.
  • For the BEC,  error detection makes no sense,  because the channel itself already marks an uncertain bit as an  "erasure"  $\rm E$.  The zeros and ones in the BEC received word  $\underline{y}$  are reliable.  Therefore,  error correction works here with certainty up to  $e = 2$  erasures  (marked in red in the right part).
  • Even  $e = 3$  erasures are sometimes still correctable.  For instance,  $\underline{y} \rm = (E, E, E, 1, 1)$  can be corrected to  $\underline{z} \rm = (0, 1, 0, 1, 1)$,  since no second code word ends with two ones.  But  $\underline{y} \rm = (0, E, 0, E, E)$  is not correctable,  because besides  $(0, 1, 0, 1, 1)$  the all-zero word allowed in the code also matches the two non-erased bits.
  • If it is ensured that there are no more than two erasures in any received word,  the BEC block error probability  ${\rm Pr}(\underline{z} \ne \underline{x}) = {\rm Pr}(\underline{v} \ne \underline{u}) \equiv 0$.  In contrast,  the corresponding block error probability in the BSC model always has a value greater than  $0$.


Since with the BEC each received word is either decoded correctly or not at all,  we will from now on call the block  $\underline{y} → \underline{z}$  the  "code word finder".  An  "estimation"  only takes place in the BSC model.


But how does the decoding of a received word   $\underline{y}$   with erasures work algorithmically?

$\text{Example 5:}$  Starting from the received word  $\underline{y} \rm = (0, E, 0, E, 1)$  in  $\text{Example 4}$  we formally set the output of the code word finder to  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,  where the symbols  $z_2 \in \{0, \, 1\}$  and  $z_4 \in \{0, \, 1\}$  are to be determined according to the following equation:

\[\underline{z} \cdot { \boldsymbol{\rm H} }^{\rm T}= \underline{0} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}= \underline{0}^{\rm T} \hspace{0.05cm}.\]

The task is now to implement this determination equation as efficiently as possible.  The following calculation steps result:

  • With the parity-check matrix  $\boldsymbol{\rm H}$  of the  $\text{(5, 2, 3)}$  block code and the vector  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,  the above determination equation reads:
\[{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T} = \begin{pmatrix} 1 &0 &1 &0 &0\\ 1 &1 &0 &1 &0\\ 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ z_2 \\ 0 \\ z_4 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • We sum up the  "correct bits"  (German:  "korrekt"   ⇒   subscript "K")  to the vector  $\underline{z}_{\rm K}$  and the  "erased bits"  to the vector  $\underline{z}_{\rm E}$.  Then we split the parity-check matrix  $\boldsymbol{\rm H}$  into the corresponding submatrices  $\boldsymbol{\rm H}_{\rm K}$  and  $\boldsymbol{\rm H}_{\rm E}$:
\[\underline{z}_{\rm K} =(0, 0, 1)\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm K}= \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \hspace{0.3cm}\Rightarrow\hspace{0.3cm} \text{columns 1, 3 and 5 of the parity-check matrix} \hspace{0.05cm},\]
\[\underline{z}_{\rm E} = (z_2, z_4)\hspace{0.05cm},\hspace{0.35cm} { \boldsymbol{\rm H} }_{\rm E}= \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \hspace{1.05cm}\Rightarrow\hspace{0.3cm} \text{columns 2 and 4 of the parity-check matrix} \hspace{0.05cm}.\]
  • Remembering that in  $\rm GF(2)$  subtraction equals addition,  the above equation can be represented as follows:
\[{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} + { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= \underline{0}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= { \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \cdot \begin{pmatrix} z_2 \\ z_4 \end{pmatrix} = \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \hspace{0.05cm}.\]
  • This leads to a linear system with two equations for the two unknowns  $z_2$  and  $z_4$  $($each  $0$  or  $1)$.
  • From the last row follows  $z_2 = 1$,  and from the second row  $z_2 + z_4 = 0$   ⇒   $z_4 = 1$.  This gives the valid code word  $\underline{z} \rm = (0, 1, 0, 1, 1)$.  (A general sketch of this procedure follows below.)
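Algorithmically, the split into  $\boldsymbol{\rm H}_{\rm K}$  and  $\boldsymbol{\rm H}_{\rm E}$  plus the solution of the resulting GF(2) system can be sketched as follows (erasures are marked as  $-1$;  the sketch assumes the erasure pattern is correctable, i.e.  $\boldsymbol{\rm H}_{\rm E}$  has full column rank):

```python
import numpy as np

def bec_decode(y, H):
    """Sketch: fill erasures (marked -1) by solving H_E·z_E^T = H_K·z_K^T in GF(2)."""
    y = np.asarray(y)
    E = np.flatnonzero(y == -1)        # erased bit positions
    K = np.flatnonzero(y != -1)        # reliably received bit positions
    A = H[:, E].copy()                 # H_E: columns of H at the erased positions
    b = H[:, K] @ y[K] % 2             # right-hand side  H_K · z_K^T
    # Gauss-Jordan elimination over GF(2) (addition = subtraction = XOR)
    for col in range(len(E)):
        piv = col + next(r for r in range(len(A) - col) if A[col + r, col])
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        for r in range(len(A)):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    z = y.copy()
    z[E] = b[:len(E)]                  # after reduction, A[:len(E)] is the identity
    return z

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])
print(bec_decode([0, -1, 0, -1, 1], H))   # -> [0 1 0 1 1], as in Example 5
```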


Exercises for the chapter


Exercise 1.11: Syndrome Decoding

Exercise 1.11Z: Syndrome Decoding again

Exercise 1.12: Hard Decision vs. Soft Decision

Exercise 1.12Z: Comparison of HC (7, 4, 3) and HC (8, 4, 4)

Exercise 1.13: Binary Erasure Channel Decoding

Exercise 1.13Z: Binary Erasure Channel Decoding again