Channel Coding/Examples of Binary Block Codes

{{Header
|Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=Channel Models and Decision Structures
|Nächste Seite=General Description of Linear Block Codes
}}
  
== Single Parity-check Codes ==
<br>
The&nbsp; &raquo;'''Single parity-check code'''&laquo;&nbsp; $\rm (SPC)$&nbsp; adds to the information block&nbsp; $\underline{u}= (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$&nbsp; a parity bit&nbsp; $p$:

[[File:P ID2346 KC T 1 3 S1 v2.png|right|frame|Various single parity-check codes&nbsp; $(n = k + 1)$|class=fit]]

:$$\underline{u} = (u_1, u_2,\hspace{0.05cm} \text{...}  \hspace{0.05cm} , u_k)  \hspace{0.3cm}
\Rightarrow \hspace{0.3cm}
\underline{x} =  (x_1, x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm} , x_n) = (u_1, u_2,\hspace{0.05cm} \text{...}\hspace{0.05cm} , u_k, p)
\hspace{0.05cm}.$$

The graphic shows three coding examples:
*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 4 \hspace{0.15cm} (k = 2)$,
*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 8 \hspace{0.15cm} (k = 3)$,
*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 16 \hspace{0.15cm} (k = 4)$.
  
This very simple code can be characterized as follows:
*From&nbsp; $n = k + 1$&nbsp; follows for the&nbsp; &raquo;'''code rate'''&laquo;&nbsp; $R = k/n = (n-1)/n$&nbsp; and for the&nbsp; &raquo;'''redundancy'''&laquo;&nbsp; $1-R = 1/n$.&nbsp; For&nbsp; $k = 2$,&nbsp; for example,&nbsp; the code rate is&nbsp; $2/3$&nbsp; and the relative redundancy is&nbsp; $33.3\%$.

*The parity bit is obtained by&nbsp; &raquo;'''modulo&ndash;2'''&laquo;&nbsp; addition.&nbsp; This is the addition in the&nbsp; [[Channel_Coding/Some_Basics_of_Algebra#Definition_of_a_Galois_field|$\text{Galois field}$]]&nbsp; to the base &nbsp;$2$ &nbsp; &#8658; &nbsp; $\rm GF(2)$,&nbsp; so that&nbsp; $1 \oplus 1 = 0$&nbsp; results&nbsp; (a short encoder sketch in Python follows after this list):
  
::<math>p = u_1 \oplus u_2 \oplus \text{...} \hspace{0.05cm} \oplus u_k
\hspace{0.05cm}.</math>
  
*Thus every valid code word&nbsp; $\underline{x}$&nbsp; contains an even number of ones.&nbsp; Expressed with&nbsp; $\oplus$,&nbsp; or in simplified notation according to the second equation,&nbsp; this condition reads:
  
::<math> x_1 \oplus x_2 \oplus \text{...} \hspace{0.05cm} \oplus x_n = 0
\hspace{0.05cm}, \hspace{0.5cm}{\rm or:}\hspace{0.5cm}
\sum_{i=1}^{n} \hspace{0.2cm} x_i = 0\hspace{0.05cm}
, \hspace{0.3cm} {\rm addition\hspace{0.15cm} in \hspace{0.15cm}  GF(2)}
\hspace{0.05cm}. </math>
 
*For&nbsp; $k = 2$ &nbsp; &#8658; &nbsp; $n = 3$,&nbsp; the following four code words result,&nbsp; where the parity bit&nbsp; $p$&nbsp; is marked by a small arrow in each case:
 
  
 
::<math>\underline{x}_0 = (0, 0_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_1 = (0, 1_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm}
\underline{x}_2 = (1, 0 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_3 = (1, 1 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}.</math>

*This code&nbsp; $\mathcal{C} = \big \{ (0, 0, 0), \ (0, 1, 1), \ (1, 0, 1), \ (1, 1, 0) \big \}$&nbsp; is&nbsp; &raquo;'''linear'''&laquo;,&nbsp; since the sum of any two code words again gives a valid code word,&nbsp; for example:
:$$\underline{x}_1 \oplus \underline{x}_2 = \underline{x}_3.$$

*For any&nbsp; $k$ &nbsp; &#8658; &nbsp; $n = k+1$,&nbsp; each code word differs from all others in an even number of positions.&nbsp; Thus the minimum distance of the code is
:$$d_{\rm min} = 2.$$
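
The encoding rule and the parity condition above can be illustrated in a few lines of Python.&nbsp; This is only a minimal sketch for arbitrary&nbsp; $k$;&nbsp; the function names&nbsp; "spc_encode"&nbsp; and&nbsp; "spc_check"&nbsp; are our own choice and not part of the lecture material:

<syntaxhighlight lang="python">
# Minimal sketch of single parity-check encoding over GF(2).
def spc_encode(u):
    """Append the parity bit p = u_1 XOR ... XOR u_k to the information word."""
    p = 0
    for bit in u:
        p ^= bit              # modulo-2 addition in GF(2)
    return u + [p]            # code word x = (u_1, ..., u_k, p)

def spc_check(x):
    """A valid code word contains an even number of ones."""
    parity = 0
    for bit in x:
        parity ^= bit
    return parity == 0

# The four code words of the SPC (3, 2, 2):
for u in ([0, 0], [0, 1], [1, 0], [1, 1]):
    x = spc_encode(u)
    assert spc_check(x)
    print(u, "->", x)
</syntaxhighlight>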
  
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; Each&nbsp; $\text{single parity-check code (SPC)}$&nbsp; can be formally described as follows:

::<math>\mathcal{C} = \{ \underline{x} \in {\rm GF}(2^n)\hspace{-0.15cm}: \hspace{0.15cm}{\rm with \hspace{0.15cm}even\hspace{0.15cm} number\hspace{0.15cm} of\hspace{0.15cm} ones\hspace{0.15cm} in \hspace{0.15cm} } \underline{x} \}\hspace{0.05cm}.</math>

*With the general code designation&nbsp; $(n, \ k, \ d_{\rm min})$,&nbsp; any single parity-check code can also be denoted as&nbsp; $\text{SPC }(n, \ n-1, \ 2)$.

*The top graph thus shows the&nbsp; $\text{SPC (3, 2, 2)}$,&nbsp; the&nbsp; $\text{SPC (4, 3, 2)}$,&nbsp; and the&nbsp; $\text{SPC (5, 4, 2)}$.}}<br>
The digital channel may change the code word&nbsp; $\underline{x}= (x_1, x_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, x_n)$&nbsp; into the received word&nbsp; $\underline{y}= (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$.&nbsp; With the error vector&nbsp; $\underline{e}= (e_1, e_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, e_n)$,&nbsp; the following holds:

:$$\underline{y}= \underline{x} \oplus \underline{e}.$$
  
For&nbsp; $\text{decoding the single parity-check code}$,&nbsp; one forms the so-called&nbsp; &raquo;'''syndrome'''&laquo;:
 
  
::<math>s = y_1 \oplus y_2 \oplus \hspace{0.05cm}\text{...} \hspace{0.05cm} \oplus y_n = \sum_{i=1}^{n} \hspace{0.2cm} y_i \hspace{0.1cm} \in \hspace{0.2cm} \{0, 1 \}
\hspace{0.05cm}.</math>
The result&nbsp; $s=1$&nbsp; then indicates&nbsp; (at least)&nbsp; one bit error within the code word,&nbsp; while&nbsp; $s=0$&nbsp; should be interpreted as follows:
*The transmission was error-free, or:<br>
*the number of bit errors is even.<br><br>
  
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; We consider the&nbsp; $\text{SPC (4, 3, 2)}$&nbsp; and assume that the all-zero word was sent.&nbsp; The table shows all possibilities that&nbsp; $f$&nbsp; bits are falsified and gives the respective syndrome&nbsp; $s \in \{0, 1\}$.

[[File:P ID2382 KC T 1 3 S1c.png|right|frame|Possible received values at the&nbsp; $\text{SPC (4, 3, 2)}$ |class=fit]]

For the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC model}$]]&nbsp; with the crossover probability&nbsp; $\varepsilon = 1\%$,&nbsp; the following probabilities then result:
*The information word is correctly decoded&nbsp; (blue background):
 
::<math>{\rm Pr}(\underline{v} = \underline{u}) = {\rm Pr}(\underline{y} = \underline{x}) = (1 - \varepsilon)^n = 0.99^4 \approx 96\,\%\hspace{0.05cm}.</math>
  
*The decoder detects that transmission errors have occurred&nbsp; (green background):
  
:$${\rm Pr}(s=1) =    \sum_{f=1 \atop f \hspace{0.1cm}{\rm odd} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f}$$
:$$\Rightarrow \hspace{0.3cm} {\rm Pr}(s=1) =    {4 \choose 1} \cdot 0.01 \cdot 0.99^3 + {4 \choose 3} \cdot 0.01^3 \cdot 0.99 \approx 3.9\,\%\hspace{0.05cm}.$$
  
*The information word is decoded incorrectly&nbsp; (red background):
  
::<math>{\rm Pr}(\underline{v} \ne \underline{u})  =  \sum_{f=2 \atop f \hspace{0.1cm}{\rm even} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} =  1 - {\rm Pr}(\underline{v} = \underline{u}) - {\rm Pr}(s=1)\approx 0.1\,\%\hspace{0.05cm}.</math>
 
  
We refer here to the HTML5/JavaScript applet&nbsp; [[Applets:Binomial_and_Poisson_Distribution_(Applet)|$\text{Binomial and Poisson Distribution}$]].&nbsp; The results obtained here are also discussed in&nbsp; [[Aufgaben:Exercise_1.5:_SPC_(5,_4)_and_BEC_Model|$\text{Exercise 1.5}$]]. }}<br>
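
The three probabilities of&nbsp; $\text{Example 1}$&nbsp; can be reproduced numerically.&nbsp; A small sketch,&nbsp; assuming the binomial model of the BSC with&nbsp; $n=4$&nbsp; and&nbsp; $\varepsilon = 0.01$:

<syntaxhighlight lang="python">
from math import comb

n, eps = 4, 0.01

def pf(f):
    """Probability of exactly f bit errors in a block of n bits (binomial distribution)."""
    return comb(n, f) * eps**f * (1 - eps)**(n - f)

p_correct    = pf(0)                                   # y = x            -> approx. 96 %
p_detected   = sum(pf(f) for f in range(1, n + 1, 2))  # odd f  => s = 1  -> approx. 3.9 %
p_undetected = sum(pf(f) for f in range(2, n + 1, 2))  # even f >= 2      -> approx. 0.1 %

print(f"Pr(v = u)  = {p_correct:.4f}")
print(f"Pr(s = 1)  = {p_detected:.4f}")
print(f"Pr(v != u) = {p_undetected:.4f}")
</syntaxhighlight>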
+
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; For the BSC model,&nbsp; error correction of the single parity-check code is not possible,&nbsp; in contrast to the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC model}$]]&nbsp; ("Binary Erasure Channel"),&nbsp; which excludes bit errors by definition.

If only one bit is erased&nbsp; $($"erasure", &nbsp;$\rm E)$,&nbsp; then,&nbsp; due to the fact that&nbsp; "the number of ones in the code word is even",&nbsp; error correction is possible,&nbsp; for example for the&nbsp; $\text{SPC (5, 4, 2)}$:
 
  
:<math>\underline{y} = (1, 0, {\rm E}, 1, 1)  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} =  (1, 0, 1, 1, 1)
  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}
\underline{v} =  (1, 0,  1, 1) =  \underline{u}\hspace{0.05cm},</math>
:<math>\underline{y}=(0, 1, 1, {\rm E}, 0)  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} =  (0, 1, 1, 0, 0)
  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}
\underline{v} =  (0,  1, 1, 0) =  \underline{u}\hspace{0.05cm},</math>
:<math>\underline{y} = (0, 1, 0, 1, {\rm E})  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} =  (0, 1, 0, 1, 0)
  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}
\underline{v} =  (0,  1, 0, 1) =  \underline{u}\hspace{0.05cm}.</math>}}
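
A single erasure can be recovered directly from the condition&nbsp; "the number of ones in the code word is even".&nbsp; A minimal sketch;&nbsp; the symbol&nbsp; "E"&nbsp; for an erasure and the function name are our own conventions:

<syntaxhighlight lang="python">
def spc_fill_erasure(y):
    """Replace one erased bit 'E' such that the number of ones becomes even."""
    parity = 0
    for bit in y:
        if bit != "E":
            parity ^= bit     # XOR of all non-erased bits
    # The erased bit must equal the parity of the remaining bits:
    return [parity if bit == "E" else bit for bit in y]

z = spc_fill_erasure([1, 0, "E", 1, 1])   # -> [1, 0, 1, 1, 1]
v = z[:-1]                                # discard the parity bit
print(z, v)                               # v = u = [1, 0, 1, 1]
</syntaxhighlight>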
  
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; Also with the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN model}$]],&nbsp; error correction is possible when applying&nbsp; "soft decision".&nbsp; For the following we assume bipolar signaling:

[[File:P ID2387 KC T 1 3 S1d v2.png|right|frame|To clarify&nbsp; "soft decision"&nbsp; at AWGN|class=fit]]

*$x=0$ &nbsp; &rArr; &nbsp; $\tilde{x}= +1$,&nbsp; as well as
*$x=1$ &nbsp; &rArr; &nbsp; $\tilde{x}= -1$.<br>
 
  
The graphic illustrates the facts presented here:
*For example,&nbsp; the received vector is&nbsp; (red dots):
  
 
::<math>\underline{y} =  (+0.8, -1.2, -0.1, +0.5, -0.6)  \hspace{0.05cm}.</math>
  
*With a hard decision&nbsp; $($threshold&nbsp; $G = 0$,&nbsp; only the signs are evaluated$)$,&nbsp; one would arrive at the following binary result&nbsp; $($green squares&nbsp; $Y_i = y_i/ \vert y_i \vert)$:
  
 
::<math>\underline{Y} =  (+1, -1, -1, +1, -1)  \hspace{0.05cm}.</math>
  
*In symbol notation,&nbsp; this gives&nbsp; $(0, 1, 1, 0, 1)$,&nbsp; which is not a valid code word of&nbsp; $\text{SPC (5, 4, 2)}$&nbsp; &#8658; &nbsp; syndrome&nbsp; $s = 1$.&nbsp; So one,&nbsp; three or five bits must have been falsified.<br>
  
*The probability for three or five bit errors,&nbsp; however,&nbsp; is orders of magnitude smaller than that for a single error.&nbsp; The assumption of&nbsp; "one bit error"&nbsp; is therefore not unreasonable.<br>
  
*Since the received value&nbsp; $y_3$&nbsp; is very close to the threshold&nbsp; $G = 0$,&nbsp; it is assumed that exactly this bit has been falsified.&nbsp; Thus,&nbsp; with "soft decision",&nbsp; the decision is for&nbsp; $\underline{z} = (0, 1, 0, 0, 1)$ &nbsp; &#8658; &nbsp; $\underline{v} = (0, 1, 0, 0)$.&nbsp; The block error probability&nbsp; ${\rm Pr}(\underline{v} \ne \underline{u})$&nbsp; is thus lowest.}}<br><br>
  
== Repetition Codes ==
<br>
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; A&nbsp; $\text{repetition code}$&nbsp; $\rm (RC)$&nbsp; is a linear binary&nbsp; $(n, \, k)$&nbsp; block code of the form

::<math>\mathcal{C} = \big \{ \underline{x} \in {\rm GF}(2^n)\text{:} \ \ x_i = x_j \hspace{0.25cm}{\rm for \hspace{0.25cm}all\hspace{0.35cm} } i, j = 1, \hspace{0.05cm} \text{...} \hspace{0.05cm}, n \big \}.</math>

*The code parameter&nbsp; $n$&nbsp; denotes the code length.&nbsp; Independent of&nbsp; $n$,&nbsp; always&nbsp; $k = 1$&nbsp; holds.

*Accordingly,&nbsp; there exist only the two code words&nbsp; $(0, 0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , 0)$&nbsp; and&nbsp; $(1, 1, \hspace{0.05cm}\text{...}\hspace{0.05cm} , 1)$,&nbsp; which differ in&nbsp; $n$&nbsp; binary places.

*From this follows for the minimum distance:&nbsp;  $d_{\rm min} = n$.}}<br>

The graphic shows repetition codes for&nbsp; $n=3$,&nbsp;  $n=4$&nbsp; and&nbsp; $n=5$.&nbsp; Such a repetition code has the following properties:
[[File:P ID2347 KC T 1 3 S2 v2.png|right|frame|Various repetition codes|class=fit]]
  
*This&nbsp; $(n, \, 1, \, n)$&nbsp; block code has the very small code rate&nbsp; $R = 1/n$.

*So,&nbsp; such a code is only suitable for transferring or storing small files.

*On the other hand,&nbsp; the repetition code is very robust.

*In particular,&nbsp; in the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC channel}$]]&nbsp; ("Binary Erasure Channel"),&nbsp; a single correctly transmitted bit at any position&nbsp; (all other bits may be erased)&nbsp; is sufficient to correctly decode the information word.<br>
  
{{GraueBox|TEXT=
$\text{Example 4: Decoding and error probabilities of the repetition code at the BSC channel}$
<br>

The&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC channel}$]]&nbsp; with&nbsp; $\varepsilon = 10\%$&nbsp; applies.&nbsp; The decoding is based on the majority principle.
*For odd&nbsp; $n$ &nbsp; &rArr; &nbsp; $e=n-1$&nbsp; bit errors can be detected and&nbsp; $t=(n-1)/2$&nbsp; bit errors can be corrected.
  
*This gives for the probability of correct decoding of the information bit&nbsp; $u$:

::<math>{\rm Pr}(v = u) =  \sum_{f=0  }^{(n-1)/2} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} \hspace{0.05cm}.</math>

*The following numerical values are valid for&nbsp; $n = 5$.&nbsp; That means: &nbsp; up to&nbsp; $t = 2$&nbsp; bit errors can be corrected:

::<math>{\rm Pr}(v = u) = (1 - \varepsilon)^5 + 5 \cdot \varepsilon \cdot (1 - \varepsilon)^4 + 10 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^3 \approx 99.15\,\%</math>
::<math>\Rightarrow\hspace{0.3cm}{\rm Pr}(v \ne u) = 1-  {\rm Pr}(v = u) \approx 0.85\,\%\hspace{0.05cm}.</math>
  
*On the other hand,&nbsp; with even&nbsp; $n$,&nbsp; only&nbsp; $t=n/2-1$&nbsp; errors can be corrected.&nbsp; If&nbsp; $n$&nbsp; is increased from&nbsp; $5$&nbsp; to&nbsp; $6$,&nbsp; then still only two bit errors within a code word can be corrected.&nbsp; A third bit error cannot be corrected,&nbsp; but at least it can be detected:

::<math>{\rm Pr}({\rm not\hspace{0.15cm} correctable\hspace{0.15cm} error})
= {6 \choose 3} \cdot \varepsilon^{3} \cdot (1 - \varepsilon)^{3}= 20 \cdot 0.1^{3} \cdot 0.9^{3}\approx
1.46\,\%\hspace{0.05cm}. </math>
  
*An&nbsp; (undetected)&nbsp; decoding error&nbsp; $(v \ne u)$&nbsp; results only when four or more bits have been falsified within the six-bit word.&nbsp; As an approximation,&nbsp; assuming that five or six bit errors are much less likely than four:

::<math>{\rm Pr}(v \ne u)  \approx {6 \choose 4} \cdot \varepsilon^{4} \cdot (1 - \varepsilon)^{2}=
0.122\,\%\hspace{0.05cm}.</math>
  
*It is interesting to note that for the&nbsp; $\text{RC (6, 1, 6)}$,&nbsp; the probability&nbsp; ${\rm Pr}(v = u)$&nbsp; of a possible and correct decoding is,&nbsp; at&nbsp; $98.42\%$,&nbsp; smaller than for the&nbsp; $\text{RC (5, 1, 5)}$. <br>For the latter: &nbsp; ${\rm Pr}(v = u)  \approx 99.15\%.$}}<br>
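The numerical values of&nbsp; $\text{Example 4}$&nbsp; can be verified with the binomial formula.&nbsp; A short sketch,&nbsp; assuming majority decoding that corrects up to&nbsp; $t = \lceil n/2 \rceil - 1$&nbsp; bit errors:

<syntaxhighlight lang="python">
from math import comb

def pr_correct(n, eps):
    """Majority decoding of the (n, 1, n) repetition code over the BSC:
    decoding is correct for at most t = ceil(n/2) - 1 bit errors."""
    t = (n - 1) // 2 if n % 2 else n // 2 - 1
    return sum(comb(n, f) * eps**f * (1 - eps)**(n - f) for f in range(t + 1))

for n in (5, 6):
    p = pr_correct(n, 0.1)
    print(f"n = {n}: Pr(v = u) = {p:.4f}")   # n = 5: 0.9914,  n = 6: 0.9842
</syntaxhighlight>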
  
{{GraueBox|TEXT=
$\text{Example 5: Performance of the repetition code at the AWGN channel}$
<br>

We now consider the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN channel}$]].&nbsp; For uncoded transmission&nbsp; $($or the repetition code with&nbsp; $n=1)$,&nbsp; the received value is&nbsp; $y = \tilde{x}+\eta$,&nbsp; where&nbsp; $\tilde{x} \in \{+1, -1\}$&nbsp; denotes the information bit in bipolar signaling and&nbsp; $\eta$&nbsp; the noise term.&nbsp; To avoid confusion with the code parameter&nbsp; $n$,&nbsp; we have renamed the noise: &nbsp; $n \rightarrow \eta$.<br>
 
For the error probability,&nbsp; with the&nbsp; [[Theory_of_Stochastic_Signals/Gaussian_Distributed_Random_Variables#Exceedance_probability|$\text{complementary Gaussian error integral}$]]&nbsp; ${\rm Q}(x)$,&nbsp; holds:

::<math>{\rm Pr}(v \ne u)  = {\rm Q}(\sqrt{\rho})
\hspace{0.05cm},</math>
  
where the following physical quantities are to be used:
*the signal-to-noise ratio&nbsp; $\rm (SNR)$&nbsp; $\rho= 1/\sigma^2 = 2 \cdot E_{\rm S}/N_0$,<br>

*the energy&nbsp; $E_{\rm S}$&nbsp; per code symbol &nbsp; &#8658; &nbsp; "symbol energy",<br>

*the normalized standard deviation&nbsp; $\sigma$&nbsp; of the noise,&nbsp; valid for the bipolar information bit&nbsp; $\tilde{x} \in \{+1, -1\}$,&nbsp; and<br>

*the constant&nbsp; (one-sided)&nbsp; noise power density&nbsp; $N_0$&nbsp; of the AWGN noise.<br><br>
  
[[File:EN_KC_T_1_3_S2b.png|right|frame|Error probability of the repetition code at the AWGN channel|class=fit]]
In contrast,&nbsp; for an&nbsp; $(n,\ 1,\ n)$&nbsp; repetition code,&nbsp; the input value of the maximum likelihood decoder is&nbsp; $y \hspace{0.04cm}' = \tilde{x} \hspace{0.04cm}'+\eta \hspace{0.04cm}'$,&nbsp; with the following properties:
::<math>\tilde{x} \hspace{0.04cm}'  =\sum_{i=1  }^{n} \tilde{x}_i \in \{ +n, -n \}\hspace{0.2cm} \Rightarrow\hspace{0.2cm}
n{\rm -fold \hspace{0.15cm}amplitude}</math>
::<math>\hspace{4.8cm} \Rightarrow\hspace{0.2cm}n^2{\rm -fold \hspace{0.15cm}power}\hspace{0.05cm},</math>
::<math>\eta\hspace{0.04cm}'  = \sum_{i=1  }^{n} \eta_i\hspace{0.2cm} \Rightarrow\hspace{0.2cm}
n{\rm -fold \hspace{0.15cm}variance:\hspace{0.15cm} } \sigma^2 \rightarrow n \cdot \sigma^2\hspace{0.05cm},</math>
::<math>\rho\hspace{0.04cm}'  = \frac{n^2}{n \cdot \sigma^2} = n \cdot \rho
\hspace{0.2cm} \Rightarrow\hspace{0.2cm}{\rm Pr}(v \ne u)  = {\rm Q}(\sqrt{n \cdot \frac{2E_{\rm S} }{N_0} } )\hspace{0.05cm}.</math>
  
The error probability in double logarithmic representation is shown in the left graph.
#The abscissa is&nbsp; $10 \cdot \lg \, (E_{\rm S}/N_0)$.
#The energy per bit&nbsp; $(E_{\rm B})$&nbsp; is &nbsp;$n$&nbsp; times larger than the symbol energy&nbsp; $E_{\rm S}$,&nbsp; as illustrated in the graph for &nbsp;$n=3$.
<br clear=all>
This set of curves can be interpreted as follows:
*If one plots the error probability over the abscissa&nbsp; $10 \cdot \lg \, (E_{\rm S}/N_0)$,&nbsp; then&nbsp; $n$&ndash;fold repetition results in a significant improvement over uncoded transmission&nbsp; $(n=1)$.<br>
 
  
*The curve for the repetition factor&nbsp; $n$&nbsp; is obtained by a left shift of&nbsp; $10 \cdot \lg \, n$&nbsp; $($in&nbsp; $\rm dB)$&nbsp; with respect to the comparison curve.&nbsp; <br>The gain is&nbsp; $4.77 \ {\rm dB} \ (n = 3)$&nbsp; or&nbsp; $\approx 7 \ {\rm dB} \ (n = 5)$.<br>

*However,&nbsp; a comparison at constant&nbsp; $E_{\rm S}$&nbsp; is not fair,&nbsp; since with the repetition code&nbsp; $\text{RC (5, 1, 5)}$&nbsp; one spends a factor of&nbsp; $n$&nbsp; more energy for the transmission of one information bit than with uncoded transmission: &nbsp; $E_{\rm B} = E_{\rm S}/{R} = n \cdot E_{\rm S}\hspace{0.05cm}.$
  
From the graph on the right,&nbsp; we can see that all the curves lie exactly on top of each other when plotted over the abscissa&nbsp; $10 \cdot \lg \, (E_{\rm B}/N_0)$.}}<br>
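The fair comparison of&nbsp; $\text{Example 5}$&nbsp; can be checked numerically:&nbsp; at constant&nbsp; $E_{\rm S}$&nbsp; the repetition code gains&nbsp; $10 \cdot \lg \, n$&nbsp; decibels,&nbsp; at constant&nbsp; $E_{\rm B}$&nbsp; nothing.&nbsp; A sketch expressing&nbsp; ${\rm Q}(x)$&nbsp; via the library function&nbsp; "erfc":

<syntaxhighlight lang="python">
from math import sqrt, erfc

def Q(x):
    """Complementary Gaussian error integral: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

EbN0_dB = 6.0
EbN0 = 10 ** (EbN0_dB / 10)

for n in (1, 3, 5):
    EsN0 = EbN0 / n                 # fair comparison: E_S = E_B / n
    p = Q(sqrt(n * 2 * EsN0))       # Pr(v != u) = Q(sqrt(n * 2 E_S / N_0))
    print(f"n = {n}: Pr(v != u) = {p:.3e}")   # identical for all n
</syntaxhighlight>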
  
{{BlaueBox|TEXT=
$\text{Conclusion regarding repetition codes on the AWGN channel:}$
*The error probability is independent of the repetition factor&nbsp; $n$&nbsp; for a fair comparison: &nbsp; &nbsp; ${\rm Pr}(v \ne u) = {\rm Q}\left (\sqrt{2E_{\rm B} /{N_0} } \right )
\hspace{0.05cm}.$

*For the AWGN channel,&nbsp; no&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Coding_gain_-_bit_error_rate_with_AWGN|$\text{coding gain}$]]&nbsp; can be achieved by a repetition code.}}<br>
  
== Hamming Codes ==
 
 
<br>
In 1962,&nbsp; [https://en.wikipedia.org/wiki/Richard_Hamming $\text{Richard Wesley Hamming}$]&nbsp; specified a class of binary block codes that differ in the number&nbsp; $m = 2, 3, \text{...} $&nbsp; of added&nbsp; "parity bits".&nbsp; For this code class:
*The code length always results in&nbsp; $n = 2^m -1$.&nbsp; Consequently,&nbsp; only the lengths&nbsp; $n = 3$,&nbsp; $n = 7$,&nbsp; $n = 15$,&nbsp; $n = 31$,&nbsp; $n = 63$,&nbsp; $n = 127$,&nbsp; $n = 255$,&nbsp; etc. are possible.<br>
  
*An information word consists of&nbsp; $k = n-m$&nbsp; bits.&nbsp; The code rate is therefore equal to
  
 
::<math>R = \frac{k}{n} = \frac{2^m - 1 - m}{2^m - 1} \in \{1/3, \hspace{0.1cm}4/7,\hspace{0.1cm}11/15,\hspace{0.1cm}26/31,\hspace{0.1cm}57/63,
\hspace{0.1cm}120/127,\hspace{0.1cm}247/255, \hspace{0.05cm} \text{...} \hspace{0.05cm}
\}\hspace{0.05cm}.</math>
  
*All Hamming codes have the minimum distance&nbsp; $d_{\rm min} = 3$.&nbsp; With larger code length&nbsp; $n$,&nbsp; one reaches&nbsp; $d_{\rm min} = 3$&nbsp; already with less redundancy,&nbsp; i.e. with a larger code rate&nbsp; $R$.<br>
  
*It further follows from&nbsp; $d_{\rm min} = 3$&nbsp; that&nbsp; $e = d_{\rm min} -1 =2$&nbsp; errors can be detected and&nbsp; $t = (d_{\rm min} -1)/2 = 1$&nbsp; error can be corrected.<br>
  
*The Hamming code&nbsp; $\text{HC (3, 1, 3)}$&nbsp; is identical to the repetition code&nbsp; $\text{RC (3, 1, 3)}$:

::<math>\mathcal{C} = \big \{ (0, 0, 0) \hspace{0.25cm}, (1, 1, 1)  \big \}\hspace{0.05cm}. </math>
  
*In systematic coding,&nbsp; the first&nbsp; $k$&nbsp; digits of each Hamming code word&nbsp; $\underline{x}$&nbsp; are identical to the information word&nbsp; $\underline{u}$.&nbsp; This is then followed by the&nbsp; $m = n-k$&nbsp; parity bits:

::<math>\underline{x} = ( x_1,\ x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm},\ x_n) = ( u_1,\ u_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ u_k,\ p_1,\ p_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ p_{n-k})
   \hspace{0.05cm}.</math>
  
{{GraueBox|TEXT=
$\text{Example 6:  Parity equations of the (7, 4, 3) Hamming code}$
[[File:P ID2353 KC T 1 3 S3 v2.png|right|frame|Chart of the&nbsp; $\text{HC (7, 4, 3)}$]]

The&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code is illustrated by the diagram shown.&nbsp; From it one can derive the three conditions:

::<math>x_1 \oplus x_2 \oplus x_3 \oplus x_5    = 0 \hspace{0.05cm},</math>
::<math>x_2 \oplus x_3 \oplus x_4 \oplus x_6    = 0 \hspace{0.05cm},</math>
::<math>x_1 \oplus x_2 \oplus x_4 \oplus x_7    = 0 \hspace{0.05cm}. </math>
 
  
*In the diagram,&nbsp; the red circle indicates the first parity equation,&nbsp; the green circle the second,&nbsp; and the blue circle the last.

*In each circle,&nbsp; the number of ones must be even.
  
[[File:P ID2351 KC T 1 3 S3c v2.png|right|frame|Assignment&nbsp; $\underline{u} → \underline{x}$&nbsp; of the systematic $\text{(7, 4, 3)}$ Hamming code|class=fit]]
In systematic coding of the&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code,&nbsp; with
  
::<math>x_1 = u_1 ,\hspace{0.2cm}
x_2 = u_2 ,\hspace{0.2cm}
x_3 = u_3 ,\hspace{0.2cm}
x_4 = u_4 ,\hspace{0.2cm}
x_5 = p_1 ,\hspace{0.2cm}
x_6 = p_2 ,\hspace{0.2cm}
x_7 = p_3 \hspace{0.05cm},</math>
  
the three parity bits are determined by the following equations,&nbsp; as can be seen from the diagram:
 +
 
 +
::<math>p_1 =u_1 \oplus u_2 \oplus u_3  \hspace{0.05cm},</math>
 +
::<math>p_2 = u_2 \oplus u_3 \oplus u_4  \hspace{0.05cm},</math>
 +
::<math>p_3 = u_1 \oplus u_2 \oplus u_4 \hspace{0.05cm}.</math>
 +
 
 +
The table shows the&nbsp; $2^k = 16$&nbsp; allowed code words of the systematic&nbsp; $\text{HC (7, 4, 3)}$:
 +
:$$\underline{x} = ( x_1,\ x_2,\ x_3,\ x_4,\ x_5,\ x_6,\ x_7) =  ( u_1,\ u_2,\ u_3,\ u_4,\ p_1,\ p_2,\ p_3).$$ 
 +
*The information word&nbsp; $\underline{u} =( u_1,\ u_2,\ u_3,\ u_4)$&nbsp; is shown in black and the check bits&nbsp; $p_1$,&nbsp; $p_2$&nbsp; and&nbsp; $p_3$&nbsp; in red.
 +
 +
*It can be seen from this table that each two of the&nbsp; $16$&nbsp; possible code words differ in at least&nbsp; $d_{\rm min} = 3$&nbsp; binary values.}}<br>
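The parity equations of&nbsp; $\text{Example 6}$&nbsp; directly yield all&nbsp; $2^4 = 16$&nbsp; code words.&nbsp; A small sketch which also verifies&nbsp; $d_{\rm min} = 3$&nbsp; by exhaustive pairwise comparison:

<syntaxhighlight lang="python">
from itertools import product

def hamming_encode(u):
    """Systematic (7, 4, 3) Hamming code: x = (u1, u2, u3, u4, p1, p2, p3)."""
    u1, u2, u3, u4 = u
    p1 = u1 ^ u2 ^ u3
    p2 = u2 ^ u3 ^ u4
    p3 = u1 ^ u2 ^ u4
    return (u1, u2, u3, u4, p1, p2, p3)

code = [hamming_encode(u) for u in product((0, 1), repeat=4)]

# Minimum Hamming distance over all pairs of different code words:
d_min = min(sum(a != b for a, b in zip(x, y))
            for x in code for y in code if x != y)
print(len(code), "code words, d_min =", d_min)   # 16 code words, d_min = 3
</syntaxhighlight>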
  
Later,&nbsp; the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes|$\text{Decoding of linear block codes}$]]&nbsp; will be covered in more detail.&nbsp; The following example is intended to explain the decoding of the Hamming code rather intuitively.<br>
+
{{GraueBox|TEXT=
$\text{Example 7:  Decoding of the systematic (7, 4, 3) Hamming code}$
<br>

We further assume the systematic&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code and consider the received word&nbsp; $\underline{y} = ( y_1,\ y_2,\ y_3,\ y_4,\ y_5,\ y_6,\ y_7)$.
  
For decoding,&nbsp; we form the three parity equations

::<math> y_1 \oplus y_2 \oplus y_3 \oplus y_5    = 0 \hspace{0.05cm},\hspace{0.5cm}{\rm (I)} </math>
::<math>y_2 \oplus y_3 \oplus y_4 \oplus y_6  = 0 \hspace{0.05cm},\hspace{0.5cm}{\rm (II)}  </math>
::<math>y_1 \oplus y_2 \oplus y_4 \oplus y_7    = 0\hspace{0.05cm}.  \hspace{0.5cm}{\rm (III)}</math>
  
In the following,&nbsp; $\underline{v}$&nbsp; denotes the decoding result;&nbsp; this should always match&nbsp; $\underline{u} = (1,\ 0,\ 1,\ 0)$.
Provided that at most one bit is falsified in each code word,&nbsp; the following statements are then valid:
*The received word&nbsp; $\underline{y} = (1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 1)$&nbsp; satisfies all three parity equations.&nbsp; This means that not a single transmission error has occurred &nbsp; &#8658; &nbsp; $\underline{y} = \underline{x}$ &nbsp; &#8658; &nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.<br>
  
*If two of the three parity equations are satisfied,&nbsp; as for the received word&nbsp; $\underline{y} =(1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 0)$,&nbsp; then one parity bit has been falsified,&nbsp; and here,&nbsp; too,&nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$&nbsp; applies.<br>
  
*With&nbsp; $\underline{y} = (1,\ 0,\ 1,\ 1,\ 0,\ 1,\ 1)$,&nbsp; only equation&nbsp; $\rm (I)$&nbsp; is satisfied and the other two are not.&nbsp; Thus the falsification of the fourth binary symbol can be corrected,&nbsp; and here,&nbsp; too,&nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$&nbsp; applies.<br>
  
*A transmission error of the second bit &nbsp; &#8658; &nbsp; $\underline{y} = (1,\ 1,\ 1,\ 0,\ 0,\ 1,\ 1)$&nbsp; leads to all three parity equations being violated.&nbsp; This error can also be corrected unambiguously,&nbsp; since only&nbsp; $u_2$&nbsp; occurs in all three equations.}}<br>
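The decoding steps of&nbsp; $\text{Example 7}$&nbsp; can be automated.&nbsp; The lookup table mapping each pattern of violated equations to the faulty bit position is our own construction,&nbsp; derived from the equations&nbsp; $\rm (I)$&nbsp; to&nbsp; $\rm (III)$:

<syntaxhighlight lang="python">
# Sketch of syndrome decoding for the systematic (7, 4, 3) Hamming code.
# Zero-based positions participating in the parity equations (I), (II), (III):
CHECKS = [(0, 1, 2, 4),    # (I):   y1 + y2 + y3 + y5
          (1, 2, 3, 5),    # (II):  y2 + y3 + y4 + y6
          (0, 1, 3, 6)]    # (III): y1 + y2 + y4 + y7

# Which single-bit error produces which syndrome (s_I, s_II, s_III)?
SYNDROME_TO_POS = {(1, 0, 1): 0, (1, 1, 1): 1, (1, 1, 0): 2, (0, 1, 1): 3,
                   (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}

def hamming_decode(y):
    """Correct at most one bit error, then return the information word v."""
    s = tuple(sum(y[i] for i in chk) % 2 for chk in CHECKS)
    z = list(y)
    if s != (0, 0, 0):
        z[SYNDROME_TO_POS[s]] ^= 1     # flip the single corrupted bit
    return z[:4]

print(hamming_decode([1, 0, 1, 1, 0, 1, 1]))   # error in bit 4 -> [1, 0, 1, 0]
print(hamming_decode([1, 1, 1, 0, 0, 1, 1]))   # error in bit 2 -> [1, 0, 1, 0]
</syntaxhighlight>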
  
== Exercises for the chapter ==
<br>
[[Aufgaben:Exercise_1.5:_SPC_(5,_4)_and_BEC_Model|Exercise 1.5: SPC (5, 4) and BEC Model]]
  
[[Aufgaben:Exercise_1.5Z:_SPC_(5,_4)_vs._RC_(5,_1)|Exercise 1.5Z: SPC (5, 4) vs. RC (5, 1)]]
  
[[Aufgaben:Exercise_1.6:_(7,_4)_Hamming_Code|Exercise 1.6: (7, 4) Hamming Code]]
  
 
{{Display}}

Latest revision as of 15:42, 23 January 2023

Single Parity-check Codes


The  »Single parity-check code«  $\rm (SPC)$  adds to the information block  $\underline{u}= (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$  a parity bit  $p$:

Various single parity-check codes  $(n = k + 1)$
$$\underline{u} = (u_1, u_2,\hspace{0.05cm} \text{...} \hspace{0.05cm} , u_k) \hspace{0.3cm}$$
$$\Rightarrow \hspace{0.3cm} \underline{x} = (x_1, x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm} , x_n) = (u_1, u_2,\hspace{0.05cm} \text{...}\hspace{0.05cm} , u_k, p) \hspace{0.05cm}.$$

The graphic shows three coding examples:

  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 4 \hspace{0.15cm} (k = 2)$,
  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 8 \hspace{0.15cm} (k = 3)$,
  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 16 \hspace{0.15cm} (k = 4)$.


This very simple code can be characterized as follows:

  • From  $n = k + 1$  follows for the  »code rate«  $R = k/n = (n-1)/n$  and for the  »redundancy«  $1-R = 1/n$.  For  $k = 2$,  for example,  the code rate is  $2/3$  and the relative redundancy is  $33.3\%$.
  • The parity bit is obtained by  »modulo–2«  addition.  This is the addition in the  $\text{Galois field}$  to the base  $2$   ⇒   $\rm GF(2)$,  so that  $1 \oplus 1 = 0$  results:
\[p = u_1 \oplus u_2 \oplus \text{...} \hspace{0.05cm} \oplus u_k \hspace{0.05cm}.\]
  • Thus every valid code word  $\underline{x}$  contains an even number of ones.  Expressed as  $\oplus$  or in simplified notation according to the second equation,  this condition reads:
\[ x_1 \oplus x_2 \oplus \text{...} \hspace{0.05cm} \oplus x_n = 0 \hspace{0.05cm}, \hspace{0.5cm}{\rm or:}\hspace{0.5cm} \sum_{i=1}^{n} \hspace{0.2cm} x_i = 0\hspace{0.05cm} , \hspace{0.3cm} {\rm addition\hspace{0.15cm} in \hspace{0.15cm} GF(2)} \hspace{0.05cm}. \]
  • For  $k = 2$   ⇒   $n = 3$  the following four code words result,  where the parity bit  $p$  is marked by a small arrow in each case:
\[\underline{x}_0 = (0, 0_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_1 = (0, 1_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_2 = (1, 0 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_3 = (1, 1 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}.\]
  • This code  $\mathcal{C} = \big \{ (0, 0, 0), \ (0, 1, 1), \ (1, 0, 1), \ (1, 1, 0) \big \}$  is  »linear« since the sum of any two code words again gives a valid code word,  for example:
$$\underline{x}_1 \oplus \underline{x}_2 = \underline{x}_3.$$
  • For any  $k$   ⇒   $n = k+1$  each code word differs from all others at an even number of positions.  Thus,  the minimum distance of the code is 
$$d_{\rm min} = 2.$$

$\text{Definition:}$  Each  $\text{single parity-check code (SPC)}$  can be formally described as follows:

\[\mathcal{C} = \{ \underline{x} \in {\rm GF}(2^n)\hspace{-0.15cm}: \hspace{0.15cm}{\rm with \hspace{0.15cm}even\hspace{0.15cm} number\hspace{0.15cm} of\hspace{0.15cm} ones\hspace{0.15cm} in \hspace{0.15cm} } \underline{x} \}\hspace{0.05cm}.\]
  • With the general code name  $(n, \ k, \ d_{\rm min})$  any single parity–check code can also be named  $\text{SPC }(n, \ n-1, \ 2)$ .
  • The top graph thus shows the  $\text{SPC (3, 2, 2)}$,  the  $\text{SPC (4, 3, 2)}$,  and the  $\text{SPC (5, 4, 2)}$.


The digital channel may change the code word  $\underline{x}= (x_1, x_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, x_n)$  to the received word  $\underline{y}= (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$. With the error vector  $\underline{e}= (e_1, e_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, e_n)$  holds:

$$\underline{y}= \underline{x} \oplus \underline{e}.$$

For  $\text{decoding the single parity-check code}$  one forms the so-called  »syndrome«:

\[s = y_1 \oplus y_2 \oplus \hspace{0.05cm}\text{...} \hspace{0.05cm} \oplus y_n = \sum_{i=1}^{n} \hspace{0.2cm} y_i \hspace{0.1cm} \in \hspace{0.2cm} \{0, 1 \} \hspace{0.05cm}.\]

The result  $s=1$  then indicates  (at least)  one bit error within the code word,  while  $s=0$  should be interpreted as follows:

  • The transmission was error-free, or:
  • the number of bit errors is even.

$\text{Example 1:}$  We consider the  $\text{SPC (4, 3, 2)}$  and assume that the all-zero word was sent.  The table shows all possibilities that  $f$  bits are falsified and gives the respective syndrome  $s \in \{0, 1\}$. 

Possible received values at the  $\text{SPC (4, 3, 2)}$

For the  $\text{BSC model}$  with the crossover probability  $\varepsilon = 1\%$  the following probabilities then result:

  • The information word is correctly decoded  (blue background):
\[{\rm Pr}(\underline{v} = \underline{u}) = {\rm Pr}(\underline{y} = \underline{x}) = (1 - \varepsilon)^n = 0.99^4 \approx 96\,\%\hspace{0.05cm}.\]
  • The decoder detects that transmission errors have occurred  (green background):
$${\rm Pr}(s=1) \hspace{-0.1cm} = \hspace{-0.1cm} \sum_{f=1 \atop f \hspace{0.1cm}{\rm odd} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f}$$
$$\Rightarrow \hspace{0.3cm} {\rm Pr}(s=1) \hspace{-0.1cm} = {4 \choose 1} \cdot 0.01 \cdot 0.99^3 + {4 \choose 3} \cdot 0.01^3 \cdot 0.99 \approx 3.9\,\%\hspace{0.05cm}.$$
  • The information word is decoded incorrectly  (red background):
\[{\rm Pr}(\underline{v} \ne \underline{u}) \hspace{-0.1cm} = \hspace{-0.1cm} \sum_{f=2 \atop f \hspace{0.1cm}{\rm gerade} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} = 1 - {\rm Pr}(\underline{v} = \underline{u}) - {\rm Pr}(s=1)\approx 0.1\,\%\hspace{0.05cm}.\]

We refer here to the HTML5/JavaScript applet  $\text{Binomial and Poisson Distribution}$.  The results obtained here are also discussed in  $\text{Exercise 1.5}$.


$\text{Example 2:}$  Error correction of the single parity–check code is not possible for the BSC model unlike the  $\text{BEC model}$  ("Binary Erasure Channel").

Bit errors are excluded with this one.  If only one bit is erased  $($"erasure",  $\rm E)$,  then due to the fact  "the number of ones in the code word is even",  error correction is also possible,  for example for the  $\text{SPC (5, 4, 2)}$:

\[\underline{y} = (1, 0, {\rm E}, 1, 1) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (1, 0, 1, 1, 1) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (1, 0, 1, 1) = \underline{u}\hspace{0.05cm},\] \[\underline{y}=(0, 1, 1, {\rm E}, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (0, 1, 1, 0, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (0, 1, 1, 0) = \underline{u}\hspace{0.05cm},\] \[\underline{y} = (0, 1, 0, 1, {\rm E}) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (0, 1, 0, 1, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (0, 1, 0, 1) = \underline{u}\hspace{0.05cm}.\]


$\text{Example 3:}$  Error correction is also possible with the  $\text{AWGN model}$  when  "soft decision"  is applied.  For the following we assume bipolar signaling:

  • $x=0$   ⇒   $\tilde{x}= +1$,  as well as
  • $x=1$   ⇒   $\tilde{x}= -1$.

Figure: Illustration of  "soft decision"  at AWGN


The graphic illustrates the facts presented here:

  • For example,  the received vector is  (red dots):
\[\underline{y} = (+0.8, -1.2, -0.1, +0.5, -0.6) \hspace{0.05cm}.\]
  • With a hard decision  $($threshold  $G = 0$,  only the signs are evaluated$)$  one would arrive at the following binary result  $($green squares  $Y_i = y_i/ \vert y_i \vert)$:
\[\underline{Y} = (+1, -1, -1, +1, -1) \hspace{0.05cm}.\]
  • In symbol notation,  this gives  $(0, 1, 1, 0, 1)$,  which is not a valid code word of  $\text{SPC (5, 4, 2)}$  ⇒   syndrome  $s = 1$.  So one,  three or five bits must have been falsified.
  • The probability for three or five bit errors,  however,  is orders of magnitude smaller than that for a single error.  The assumption of  "one bit error"  is therefore not unreasonable.
  • Since the received value  $y_3$  is very close to the threshold  $G = 0$  it is assumed that exactly this bit has been falsified.  Thus,  with "soft decision",  the decision is for  $\underline{z} = (0, 1, 0, 0, 1)$   ⇒   $\underline{v} = (0, 1, 0, 0)$.  The block error probability  ${\rm Pr}(\underline{v} \ne \underline{u})$  is thus lowest.
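The procedure described here, first a hard decision and then flipping the least reliable bit if  $s = 1$,  can be sketched in Python as follows (a simplified illustration of  "soft decision",  not a full maximum likelihood decoder):

 # Sketch: "soft decision" of the SPC (5, 4, 2) after the AWGN channel
 y = [+0.8, -1.2, -0.1, +0.5, -0.6]        # received values (red dots)
 
 z = [0 if yi > 0 else 1 for yi in y]      # hard decision: +1 -> 0, -1 -> 1
 s = 0
 for bit in z:
     s ^= bit                              # syndrome of the hard decision
 
 if s == 1:                                # not a valid code word ...
     i = min(range(len(y)), key=lambda k: abs(y[k]))
     z[i] ^= 1                             # ... flip the least reliable bit
 
 print(z)                                  # [0, 1, 0, 0, 1] -> v = (0, 1, 0, 0)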



== Repetition Codes ==


$\text{Definition:}$  A  $\text{repetition code}$  ($\rm RC)$  is a linear binary  $(n, \, k)$  block code of the form

\[\mathcal{C} = \big \{ \underline{x} \in {\rm GF}(2^n)\text{:} \ \ x_i = x_j \hspace{0.25cm}{\rm for \hspace{0.25cm}all\hspace{0.35cm} } i, j = 1, \hspace{0.05cm} \text{...} \hspace{0.05cm}, n \big \}.\]
  • The code parameter  $n$  denotes the code length.  Independently of  $n$,  $k = 1$  always holds.
  • Accordingly,  there exist only the two code words  $(0, 0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , 0)$  and  $(1, 1, \hspace{0.05cm}\text{...}\hspace{0.05cm} , 1)$, which differ in  $n$  binary places.
  • From this follows for the minimum distance  $d_{\rm min} = n$.


Figure: Various repetition codes

The graphic shows repetition codes for  $n=3$,  $n=4$  and  $n=5$.  Such a repetition code has the following properties:
  • This  $(n, \, 1, \, n)$  block code has the very small code rate  $R = 1/n$.
  • Such a code is therefore only suitable for transferring or storing small files.
  • On the other hand,  the repetition code is very robust.
  • In particular,  in the  $\text{BEC channel}$  ("Binary Erasure Channel"),  a single correctly transmitted bit at any position  (all other bits may be erased)  is sufficient to correctly decode the information word.
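This BEC robustness can be illustrated with a few lines of Python (again  'E'  stands for an erasure; a hypothetical sketch):

 # Sketch: repetition code at the BEC - one surviving bit suffices
 def rc_decode_bec(y):
     """Return the information bit u, or None if all n bits were erased."""
     for bit in y:
         if bit != 'E':
             return bit
     return None
 
 print(rc_decode_bec(['E', 'E', 1, 'E', 'E']))   # 1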


$\text{Example 4: Decoding and error probabilities of the repetition code at the BSC channel}$

The  $\text{BSC channel}$  with  $\varepsilon = 10\%$  applies.  The decoding is based on the majority principle.

  • For odd  $n$,   $e=n-1$  bit errors can be detected and  $t=(n-1)/2$  bit errors can be corrected.
  • This gives for the probability of correct decoding of the information bit  $u$:
\[{\rm Pr}(v = u) = \sum_{f=0 }^{(n-1)/2} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} \hspace{0.05cm}.\]
  • The following numerical values apply for  $n = 5$;  that is,  $t = 2$  bit errors are correctable:
\[{\rm Pr}(v = u) = (1 - \varepsilon)^5 + 5 \cdot \varepsilon \cdot (1 - \varepsilon)^4 + 10 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^3 \approx 99.15\,\%\]
\[\Rightarrow\hspace{0.3cm}{\rm Pr}(v \ne u) = 1- {\rm Pr}(v = u) \approx 0.85\,\%\hspace{0.05cm}.\]
  • On the other hand,  with even  $n$  only  $t=n/2-1$  errors can be corrected.  If  $n$  is increased from  $5$  to  $6$,  then only two bit errors within a code word can be corrected.  A third bit error cannot be corrected,  but at least it can be recognized:
\[{\rm Pr}({\rm not\hspace{0.15cm} correctable\hspace{0.15cm} error}) = {6 \choose 3} \cdot \varepsilon^{3} \cdot (1 - \varepsilon)^{3}= 20 \cdot 0.1^{3} \cdot 0.9^{3}\approx 1.46\,\%\hspace{0.05cm}. \]
  • An  (undetected)  decoding error  $(v \ne u)$  results only when four or more bits have been falsified within the six bit word.  As an approximation,  assuming that five or six bit errors are much less likely than four:
\[{\rm Pr}(v \ne u) \approx {6 \choose 4} \cdot \varepsilon^{4} \cdot (1 - \varepsilon)^{2}= 0.122\,\%\hspace{0.05cm}.\]
  • It is interesting to note that for the  $\text{RC (6, 1, 6)}$  the probability of correct decoding,  ${\rm Pr}(v = u) \approx 98.42\%$,  is smaller than for the  $\text{RC (5, 1, 5)}$  with  ${\rm Pr}(v = u) \approx 99.15\%$.
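The numerical values of Example 4 follow directly from the binomial distribution; a short sketch with the parameters used above:

 # Sketch: Pr(v = u) with majority decoding at the BSC (eps = 10%)
 from math import comb
 
 def p_correct(n, t, eps):
     """Probability that at most t of n bits are falsified."""
     return sum(comb(n, f) * eps**f * (1 - eps)**(n - f)
                for f in range(t + 1))
 
 eps = 0.1
 print(f"{p_correct(5, 2, eps):.4f}")   # 0.9914 -> RC (5, 1, 5)
 print(f"{p_correct(6, 2, eps):.4f}")   # 0.9842 -> RC (6, 1, 6)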


$\text{Example 5: Performance of the repetition code at the AWGN channel}$

We now consider the  $\text{AWGN channel}$.  For uncoded transmission  $($or the repetition code with  $n=1)$  the received value is  $y = \tilde{x}+\eta$,  where  $\tilde{x} \in \{+1, -1\}$  denotes the information bit in bipolar signaling and  $\eta$  denotes the noise term.  To avoid confusion with the code parameter  $n$,  we have renamed the noise:   $n → \eta$.

For the error probability,  the following holds with the  $\text{complementary Gaussian error integral}$  ${\rm Q}(x)$:

\[{\rm Pr}(v \ne u) = {\rm Q}(\sqrt{\rho}) \hspace{0.05cm},\]

where the following physical quantities are to be used:

  • the signal-to-noise ratio  $\rm (SNR)$  $\rho= 1/\sigma^2 = 2 \cdot E_{\rm S}/N_0$,
  • the energy  $E_{\rm S}$  per code symbol   ⇒   "symbol energy",
  • the normalized standard deviation  $\sigma$  of the noise,  valid for the bipolar information bit  $\tilde{x} \in \{+1, -1\}$,  and
  • the constant  (one-sided)  noise power density  $N_0$  of the AWGN noise.

Figure: Error probability of the repetition code at the AWGN channel

In contrast,  for an  $(n,\ 1,\ n)$  repetition code,  the input value of the maximum likelihood decoder is  $y \hspace{0.04cm}' = \tilde{x} \hspace{0.04cm}'+\eta \hspace{0.04cm}'$,  with the following properties:

\[\tilde{x} \hspace{0.04cm}' =\sum_{i=1 }^{n} \tilde{x}_i \in \{ +n, -n \}\hspace{0.2cm} \Rightarrow\hspace{0.2cm} n{\rm -fold \hspace{0.15cm}amplitude}\]
\[\hspace{4.8cm} \Rightarrow\hspace{0.2cm}n^2{\rm -fold \hspace{0.15cm}power}\hspace{0.05cm},\]
\[\eta\hspace{0.04cm}' = \sum_{i=1 }^{n} \eta_i\hspace{0.2cm} \Rightarrow\hspace{0.2cm} n{\rm -fold \hspace{0.15cm}variance:\hspace{0.15cm} } \sigma^2 \rightarrow n \cdot \sigma^2\hspace{0.05cm},\]
\[\rho\hspace{0.04cm}' = \frac{n^2}{n \cdot \sigma^2} = n \cdot \rho \hspace{0.2cm} \Rightarrow\hspace{0.2cm}{\rm Pr}(v \ne u) = {\rm Q}(\sqrt{n \cdot \frac{2E_{\rm S} }{N_0} } )\hspace{0.05cm}.\]
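These error probabilities can be evaluated, for example, with the following Python sketch; the function  ${\rm Q}(x)$  is implemented via the complementary error function, and the value of  $E_{\rm S}/N_0$  is freely chosen for illustration:

 # Sketch: Pr(v != u) of the (n, 1, n) repetition code at the AWGN channel
 from math import erfc, sqrt
 
 def Q(x):
     """Complementary Gaussian error integral: Q(x) = 0.5 * erfc(x / sqrt(2))."""
     return 0.5 * erfc(x / sqrt(2))
 
 es_n0 = 1.0                        # E_S/N_0 (linear; freely chosen)
 for n in (1, 3, 5):
     rho = n * 2 * es_n0            # rho' = n * rho  with  rho = 2 * E_S/N_0
     print(n, f"{Q(sqrt(rho)):.4e}")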

The error probability in double logarithmic representation is shown in the left graph.

  1. The abscissa shows  $10 \cdot \lg \, (E_{\rm S}/N_0)$.
  2. The energy per bit  $(E_{\rm B})$  is  $n$  times larger than the symbol energy  $E_{\rm S}$,  as illustrated in the graph for  $n=3$.


This set of curves can be interpreted as follows:

  • If one plots the error probability over the abscissa  $10 \cdot \lg \, (E_{\rm S}/N_0)$,  then  $n$-fold repetition results in a significant improvement over uncoded transmission  $(n=1)$.
  • The curve for the repetition factor  $n$  is obtained by a left shift of  $10 \cdot \lg \, n$  $($in  $\rm dB)$  with respect to the comparison curve.
    The gain is  $4.77 \ {\rm dB} \ (n = 3)$  or  $\approx 7 \ {\rm dB} \ (n = 5)$.
  • However,  a comparison at constant  $E_{\rm S}$  is not fair,  since with the repetition code  $\text{RC (5, 1, 5)}$  one spends a factor of  $n$  more energy for the transmission of an information bit than with uncoded transmission:   $E_{\rm B} = E_{\rm S}/{R} = n \cdot E_{\rm S}\hspace{0.05cm}.$


From the graph on the right,  we can see that all the curves lie exactly on top of each other when plotted on the abscissa  $10 \cdot \lg \, (E_{\rm B}/N_0)$.


$\text{Conclusion regarding repetition codes on the AWGN channel:}$

  • The error probability is independent of the repetition factor  $n$  for a fair comparison:     ${\rm Pr}(v \ne u) = {\rm Q}\left (\sqrt{2E_{\rm B} /{N_0} } \right ) \hspace{0.05cm}.$


== Hamming Codes ==


In 1950  $\text{Richard Wesley Hamming}$  specified a class of binary block codes that differ in the number  $m = 2, 3, \text{...} $  of added  "parity bits".  For this code class:

  • The code length always results in  $n = 2^m -1$.  Consequently,  only the lengths  $n = 3$,  $n = 7$,  $n = 15$,  $n = 31$,  $n = 63$,  $n = 127$,  $n = 255$, etc. are possible.
  • An information word consists of  $k = n-m$  bits.  The code rate is therefore equal to
\[R = \frac{k}{n} = \frac{2^m - 1 - m}{2^m - 1} \in \{1/3, \hspace{0.1cm}4/7,\hspace{0.1cm}11/15,\hspace{0.1cm}26/31,\hspace{0.1cm}57/63, \hspace{0.1cm}120/127,\hspace{0.1cm}247/255, \hspace{0.05cm} \text{...} \hspace{0.05cm} \}\hspace{0.05cm}.\]
  • All Hamming codes have the minimum distance  $d_{\rm min} = 3$.  With increasing code length  $n$,  one reaches  $d_{\rm min} = 3$  with less and less redundancy,  i.e. with a larger code rate  $R$.
  • It further follows from  $d_{\rm min} = 3$  that here only  $e = d_{\rm min} -1 =2$  errors can be detected and only  $t = (d_{\rm min} -1)/2 = 1$  error can be corrected.
  • The Hamming code  $\text{HC (3, 1, 3)}$  is identical to the repetition code  $\text{RC (3, 1, 3)}$  and reads:
\[\mathcal{C} = \big \{ (0, 0, 0), \hspace{0.25cm} (1, 1, 1) \big \}\hspace{0.05cm}. \]
  • In systematic coding,  the first  $k$  digits of each Hamming code word  $\underline{x}$  are identical to the information word  $\underline{u}$.  This is then followed by  $m = n-k$  parity bits:
\[\underline{x} = ( x_1,\ x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm},\ x_n) = ( u_1,\ u_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ u_k,\ p_1,\ p_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ p_{n-k}) \hspace{0.05cm}.\]

$\text{Example 6: Parity equations of the (7, 4, 3) Hamming code}$

Figure: Chart of the  $\text{HC (7, 4, 3)}$

The  $\text{(7, 4, 3)}$  Hamming code is illustrated by the diagram shown.  From it one can derive the three conditions:

\[x_1 \oplus x_2 \oplus x_3 \oplus x_5 = 0 \hspace{0.05cm},\]
\[x_2 \oplus x_3 \oplus x_4 \oplus x_6 = 0 \hspace{0.05cm},\]
\[x_1 \oplus x_2 \oplus x_4 \oplus x_7 = 0 \hspace{0.05cm}. \]
  • In the diagram,  the red circle indicates the first parity equation,  the green one the second,  and the blue one the last.
  • In each circle,  the number of ones must be even.


Table: Assignment  $\underline{u} → \underline{x}$  of the systematic  $\text{(7, 4, 3)}$  Hamming code

In systematic coding of the  $\text{(7, 4, 3)}$  Hamming code,  with

\[x_1 = u_1 ,\hspace{0.2cm} x_2 = u_2 ,\hspace{0.2cm} x_3 = u_3 ,\hspace{0.2cm} x_4 = u_4 ,\hspace{0.2cm} x_5 = p_1 ,\hspace{0.2cm} x_6 = p_2 ,\hspace{0.2cm} x_7 = p_3 \hspace{0.05cm},\]

the determination equations for the three parity bits read,  as shown in the diagram:

\[p_1 =u_1 \oplus u_2 \oplus u_3 \hspace{0.05cm},\]
\[p_2 = u_2 \oplus u_3 \oplus u_4 \hspace{0.05cm},\]
\[p_3 = u_1 \oplus u_2 \oplus u_4 \hspace{0.05cm}.\]

The table shows the  $2^k = 16$  allowed code words of the systematic  $\text{HC (7, 4, 3)}$:

$$\underline{x} = ( x_1,\ x_2,\ x_3,\ x_4,\ x_5,\ x_6,\ x_7) = ( u_1,\ u_2,\ u_3,\ u_4,\ p_1,\ p_2,\ p_3).$$
  • The information word  $\underline{u} =( u_1,\ u_2,\ u_3,\ u_4)$  is shown in black and the parity bits  $p_1$,  $p_2$  and  $p_3$  in red.
  • It can be seen from this table that any two of the  $16$  possible code words differ in at least  $d_{\rm min} = 3$  binary positions.
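Both the number of code words and the minimum distance can be verified with a few lines of Python; the encoder below simply implements the parity equations of Example 6 (all names are chosen freely here):

 # Sketch: all code words of the systematic HC (7, 4, 3)
 from itertools import product
 
 def encode(u):
     u1, u2, u3, u4 = u
     p1 = u1 ^ u2 ^ u3              # parity equations of Example 6
     p2 = u2 ^ u3 ^ u4
     p3 = u1 ^ u2 ^ u4
     return (u1, u2, u3, u4, p1, p2, p3)
 
 code = [encode(u) for u in product((0, 1), repeat=4)]
 d_min = min(sum(a != b for a, b in zip(x1, x2))
             for x1 in code for x2 in code if x1 != x2)
 print(len(code), d_min)            # 16 3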


The  $\text{Decoding of linear block codes}$  will be covered in more detail later.  The following example is intended to explain the decoding of the Hamming code in a rather intuitive way.

$\text{Example 7: Decoding of the systematic HC (7, 4, 3)}$

We further assume the systematic  $\text{(7, 4, 3)}$ Hamming code and consider the received word  $\underline{y} = ( y_1,\ y_2,\ y_3,\ y_4,\ y_5,\ y_6,\ y_7)$.

For decoding,  we form the three parity equations

\[ y_1 \oplus y_2 \oplus y_3 \oplus y_5 \hspace{-0.1cm}= \hspace{-0.1cm} 0 \hspace{0.05cm},\hspace{0.5cm}{\rm (I)} \]
\[y_2 \oplus y_3 \oplus y_4 \oplus y_6 \hspace{-0.1cm}= \hspace{-0.1cm}0 \hspace{0.05cm},\hspace{0.5cm}{\rm (II)} \]
\[y_1 \oplus y_2 \oplus y_4 \oplus y_7 \hspace{-0.1cm}= \hspace{-0.1cm} 0\hspace{0.05cm}. \hspace{0.5cm}{\rm (III)}\]

In the following  $\underline{v}$  denotes the decoding result; this should always match  $\underline{u} = (1,\ 0,\ 1,\ 0)$. 

Provided that at most one bit is falsified in each code word,  the following statements are then valid:

  • The received word  $\underline{y} = (1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 1)$  satisfies all three parity equations. This means that not a single transmission error has occurred   ⇒   $\underline{y} = \underline{x}$   ⇒   $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.
  • If two of the three parity equations are satisfied,  such as for the received word  $\underline{y} =(1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 0)$,  then one parity bit has been falsified,  and here too  $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$  applies.
  • With  $\underline{y} = (1,\ 0,\ 1,\ 1,\ 0,\ 1,\ 1)$  only the equation  $\rm (I)$  is satisfied and the other two are not.  Thus the falsification of the fourth binary symbol can be corrected,  and here too  $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$  applies.
  • A transmission error of the second bit   ⇒   $\underline{y} = (1,\ 1,\ 1,\ 0,\ 0,\ 1,\ 1)$  causes all three parity equations to be violated.  This error can also be corrected unambiguously,  since only  $y_2$  occurs in all three equations  (see the sketch after this list).
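All four cases can be reproduced by evaluating the parity equations  $\rm (I)$  to  $\rm (III)$  systematically; the following Python sketch corrects at most one bit error (single-error assumption; function and constant names are chosen freely):

 # Sketch: syndrome decoding of the systematic HC (7, 4, 3)
 CHECKS = [(0, 1, 2, 4),            # (I)  : y1 + y2 + y3 + y5
           (1, 2, 3, 5),            # (II) : y2 + y3 + y4 + y6
           (0, 1, 3, 6)]            # (III): y1 + y2 + y4 + y7
 
 def decode(y):
     y = list(y)
     s = tuple(sum(y[i] for i in chk) % 2 for chk in CHECKS)
     if any(s):                     # find the single position whose
         for pos in range(7):       # falsification yields this syndrome
             if s == tuple(1 if pos in chk else 0 for chk in CHECKS):
                 y[pos] ^= 1
                 break
     return tuple(y[:4])            # v = first k = 4 bits
 
 print(decode((1, 1, 1, 0, 0, 1, 1)))   # (1, 0, 1, 0): second bit corrected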


== Exercises for the chapter ==


Exercise 1.5: SPC (5, 4) and BEC Model

Exercise 1.5Z: SPC (5, 4) vs. RC (5, 1)

Exercise 1.6: (7, 4) Hamming Code