Channel Coding/Examples of Binary Block Codes

From LNTwww
{{Header
|Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=Channel Models and Decision Structures
|Nächste Seite=General Description of Linear Block Codes
}}
  
== Single Parity-check Codes==
<br>
The&nbsp; &raquo;'''Single parity-check code'''&laquo;&nbsp; $\rm (SPC)$&nbsp; adds to the information block&nbsp; $\underline{u}= (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$&nbsp; a parity bit&nbsp; $p$:

[[File:P ID2346 KC T 1 3 S1 v2.png|right|frame|Various single parity-check codes&nbsp; $(n = k  + 1)$|class=fit]]

:$$\underline{u} = (u_1, u_2,\hspace{0.05cm} \text{...}  \hspace{0.05cm} , u_k)  \hspace{0.3cm}$$
:$$\Rightarrow \hspace{0.3cm}
\underline{x} =  (x_1, x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm} , x_n) = (u_1, u_2,\hspace{0.05cm} \text{...}\hspace{0.05cm} , u_k, p)
\hspace{0.05cm}.$$

The graphic shows three coding examples:
*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 4 \hspace{0.15cm} (k = 2)$,

*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 8 \hspace{0.15cm} (k = 3)$,

*$|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 16 \hspace{0.15cm} (k = 4)$.
  
This very simple code can be characterized as follows:
*From&nbsp; $n = k + 1$&nbsp; follows for the&nbsp; &raquo;'''code rate'''&laquo;&nbsp; $R = k/n = (n-1)/n$&nbsp; and for the&nbsp; &raquo;'''redundancy'''&laquo;&nbsp; $1-R = 1/n$.&nbsp; For&nbsp; $k = 2$,&nbsp; for example,&nbsp; the code rate is&nbsp; $2/3$&nbsp; and the relative redundancy is&nbsp; $33.3\%$.
 
  
*The parity bit is obtained by&nbsp; &raquo;'''modulo-2'''&laquo;&nbsp; addition.&nbsp; This is the addition in the&nbsp; [[Channel_Coding/Some_Basics_of_Algebra#Definition_of_a_Galois_field|$\text{Galois field}$]]&nbsp; to the base&nbsp; $2$ &nbsp; &#8658; &nbsp; $\rm GF(2)$,&nbsp; so that&nbsp; $1 \oplus 1 = 0$&nbsp; results:

::<math>p = u_1 \oplus u_2 \oplus \text{...} \hspace{0.05cm} \oplus u_k  
\hspace{0.05cm}.</math>
  
*Thus every valid code word&nbsp; $\underline{x}$&nbsp; contains an even number of ones.&nbsp; Expressed with&nbsp; $\oplus$&nbsp; or in simplified notation according to the second equation,&nbsp; this condition reads:

::<math> x_1 \oplus x_2 \oplus \text{...} \hspace{0.05cm} \oplus x_n = 0  
\hspace{0.05cm}, \hspace{0.5cm}{\rm or:}\hspace{0.5cm}
\sum_{i=1}^{n} \hspace{0.2cm} x_i = 0\hspace{0.05cm}
, \hspace{0.3cm} {\rm addition\hspace{0.15cm} in \hspace{0.15cm}  GF(2)}
\hspace{0.05cm}. </math>
 
*For&nbsp; $k = 2$ &nbsp; &#8658; &nbsp; $n = 3$&nbsp; the following four code words result,&nbsp; where the parity bit&nbsp; $p$&nbsp; is marked by a small arrow in each case:

::<math>\underline{x}_0 = (0, 0_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_1 = (0, 1_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm}
\underline{x}_2 = (1, 0 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_3 = (1, 1 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}.</math>
 
*This code&nbsp; $\mathcal{C} = \big \{ (0, 0, 0), \ (0, 1, 1), \ (1, 0, 1), \ (1, 1, 0) \big \}$&nbsp; is&nbsp; &raquo;'''linear'''&laquo;,&nbsp; since the sum of any two code words again gives a valid code word,&nbsp; for example:

:$$\underline{x}_1 \oplus \underline{x}_2 = \underline{x}_3.$$

*For any&nbsp; $k$ &nbsp; &#8658; &nbsp; $n = k+1$,&nbsp; each code word differs from all others in an even number of positions.&nbsp; Thus the minimum distance of the code is
:$$d_{\rm min} = 2.$$
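The SPC properties listed above&nbsp; (even parity, linearity, minimum distance $2$)&nbsp; can be verified with a few lines of Python.&nbsp; This is a minimal sketch; the function name&nbsp; <code>spc_encode</code>&nbsp; is ours, not part of LNTwww:

```python
# Minimal sketch of the single parity-check code (function name is ours).
from itertools import product

def spc_encode(u):
    """Append the modulo-2 parity bit p = u_1 XOR ... XOR u_k."""
    p = 0
    for bit in u:
        p ^= bit
    return tuple(u) + (p,)

# All code words for k = 2  =>  n = 3
code = [spc_encode(u) for u in product([0, 1], repeat=2)]
# code == [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]

# Every valid code word contains an even number of ones:
assert all(sum(x) % 2 == 0 for x in code)

# Linearity: the bitwise XOR of two code words is again a code word,
# e.g. x_1 XOR x_2 = x_3:
x1, x2, x3 = code[1], code[2], code[3]
assert tuple(a ^ b for a, b in zip(x1, x2)) == x3

# Minimum distance d_min = 2:
d_min = min(sum(a != b for a, b in zip(x, y))
            for x in code for y in code if x != y)
assert d_min == 2
```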
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; Each&nbsp; $\text{single parity-check code (SPC)}$&nbsp; can be formally described as follows:

::<math>\mathcal{C} = \{ \underline{x} \in {\rm GF}(2^n)\hspace{-0.15cm}: \hspace{0.15cm}{\rm with \hspace{0.15cm}even\hspace{0.15cm} number\hspace{0.15cm} of\hspace{0.15cm} ones\hspace{0.15cm} in \hspace{0.15cm} } \underline{x} \}\hspace{0.05cm}.</math>

*With the general code designation&nbsp; $(n, \ k, \ d_{\rm min})$,&nbsp; any single parity-check code can also be denoted as&nbsp; $\text{SPC }(n, \ n-1, \ 2)$.  

*The top graphic thus shows the&nbsp; $\text{SPC (3, 2, 2)}$,&nbsp; the&nbsp; $\text{SPC (4, 3, 2)}$,&nbsp; and the&nbsp; $\text{SPC (5, 4, 2)}$.}}<br>
 
  
The digital channel may change the code word&nbsp; $\underline{x}= (x_1, x_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, x_n)$&nbsp; into the received word&nbsp; $\underline{y}= (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$,&nbsp; where with the error vector&nbsp; $\underline{e}= (e_1, e_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, e_n)$:

:$$\underline{y}=  \underline{x} \oplus \underline{e}.$$
  
For&nbsp; $\text{decoding the single parity-check code}$&nbsp; one forms the so-called&nbsp; &raquo;'''syndrome'''&laquo;:

::<math>s = y_1 \oplus y_2 \oplus \hspace{0.05cm}\text{...} \hspace{0.05cm} \oplus y_n = \sum_{i=1}^{n} \hspace{0.2cm} y_i \hspace{0.1cm} \in \hspace{0.2cm} \{0, 1 \}  
\hspace{0.05cm}.</math>

The result&nbsp; $s=1$&nbsp; then indicates&nbsp; (at least)&nbsp; one bit error within the code word,&nbsp; while&nbsp; $s=0$&nbsp; should be interpreted as follows:
*The transmission was error-free, or:<br>
*the number of bit errors is even.<br><br>
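The syndrome computation can be sketched as follows;&nbsp; <code>syndrome</code>&nbsp; is a hypothetical helper name, not LNTwww code:

```python
# Sketch of syndrome computation for the single parity-check code.
def syndrome(y):
    """s = y_1 XOR y_2 XOR ... XOR y_n  (addition in GF(2))."""
    s = 0
    for bit in y:
        s ^= bit
    return s

x = (0, 1, 1)            # valid SPC (3, 2, 2) code word
e = (0, 0, 1)            # single bit error
y = tuple(a ^ b for a, b in zip(x, e))

assert syndrome(x) == 0   # error-free (or an even number of errors)
assert syndrome(y) == 1   # odd number of errors detected
```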
 
 
{{GraueBox|TEXT=   
$\text{Example 1:}$&nbsp; We consider the&nbsp; $\text{SPC (4, 3, 2)}$&nbsp; and assume that the all-zero word was sent.&nbsp; The table shows all possibilities that&nbsp; $f$&nbsp; bits are falsified and gives the respective syndrome&nbsp; $s \in \{0, 1\}$.

[[File:P ID2382 KC T 1 3 S1c.png|right|frame|Possible received values at the&nbsp; $\text{SPC (4, 3, 2)}$ |class=fit]]

For the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC model}$]]&nbsp; with the crossover probability&nbsp; $\varepsilon = 1\%$,&nbsp; the following probabilities then result:
*The information word is correctly decoded&nbsp; (blue background):

::<math>{\rm Pr}(\underline{v} = \underline{u}) = {\rm Pr}(\underline{y} = \underline{x}) = (1 - \varepsilon)^n = 0.99^4 \approx 96\,\%\hspace{0.05cm}.</math>

*The decoder detects that transmission errors have occurred&nbsp; (green background):

:$${\rm Pr}(s=1) \hspace{-0.1cm} =  \hspace{-0.1cm}  \sum_{f=1 \atop f \hspace{0.1cm}{\rm odd} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f}$$
:$$\Rightarrow \hspace{0.3cm} {\rm Pr}(s=1) \hspace{-0.1cm} =   {4 \choose 1} \cdot 0.01 \cdot 0.99^3 + {4 \choose 3} \cdot 0.01^3 \cdot 0.99 \approx 3.9\,\%\hspace{0.05cm}.$$

*The information word is decoded incorrectly&nbsp; (red background):

::<math>{\rm Pr}(\underline{v} \ne \underline{u})  \hspace{-0.1cm} =  \hspace{-0.1cm}  \sum_{f=2 \atop f \hspace{0.1cm}{\rm even} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} =  1 - {\rm Pr}(\underline{v} = \underline{u}) - {\rm Pr}(s=1)\approx 0.1\,\%\hspace{0.05cm}.</math>

We refer here to the HTML5/JavaScript applet&nbsp; [[Applets:Binomial_and_Poisson_Distribution_(Applet)|$\text{Binomial and Poisson Distribution}$]].&nbsp; The results obtained here are also discussed in&nbsp; [[Aufgaben:Exercise_1.5:_SPC_(5,_4)_and_BEC_Model|$\text{Exercise 1.5}$]]. }}<br>
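The three probabilities of Example 1 can be reproduced numerically; this is a sketch, not LNTwww code:

```python
# Reproducing the three probabilities of Example 1 for the SPC (4, 3, 2)
# with BSC crossover probability eps = 1%.
from math import comb

n, eps = 4, 0.01

p_correct = (1 - eps) ** n                              # no bit error
p_detect  = sum(comb(n, f) * eps**f * (1 - eps)**(n - f)
                for f in range(1, n + 1) if f % 2 == 1)  # odd error counts
p_wrong   = 1 - p_correct - p_detect                     # even, nonzero counts

print(f"Pr(v = u) = {p_correct:.2%}")   # about 96%
print(f"Pr(s = 1) = {p_detect:.2%}")    # about 3.9%
print(f"Pr(v != u) = {p_wrong:.2%}")    # about 0.1% (more precisely 0.06%)
```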
  
 
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp; Error correction of the single parity-check code is not possible for the BSC model,&nbsp; in contrast to the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC model}$]]&nbsp; ("Binary Erasure Channel"),&nbsp; which excludes bit errors.&nbsp; If only one bit is erased&nbsp; $($"erasure",&nbsp; $\rm E)$,&nbsp; then,&nbsp; due to the fact that&nbsp; "the number of ones in the code word is even",&nbsp; error correction is also possible,&nbsp; for example for the&nbsp; $\text{SPC (5, 4, 2)}$:

:<math>\underline{y} = (1, 0, {\rm E}, 1, 1)  \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} =  (1, 0, 1, 1, 1)
\hspace{0.05cm}.</math>}}<br>
  
 
{{GraueBox|TEXT=   
$\text{Example 3:}$&nbsp; Also with the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN model}$]],&nbsp; error correction is possible when applying&nbsp; "soft decision".&nbsp; For the following we assume bipolar signaling:

[[File:P ID2387 KC T 1 3 S1d v2.png|right|frame|To clarify&nbsp; "soft decision"&nbsp; at AWGN|class=fit]]  
*$x=0$  &nbsp; &rArr; &nbsp; $\tilde{x}= +1$,&nbsp; as well as
*$x=1$  &nbsp; &rArr; &nbsp; $\tilde{x}= -1$.<br>


The graphic illustrates the facts presented here:
*For example,&nbsp; the received vector is&nbsp; (red dots):

::<math>\underline{y} =  (+0.8, -1.2, -0.1, +0.5, -0.6)  \hspace{0.05cm}.</math>

*With a hard decision&nbsp; $($threshold&nbsp; $G = 0$,&nbsp; only the signs are evaluated$)$,&nbsp; one would arrive at the following binary result&nbsp; $($green squares&nbsp; $Y_i = y_i/ \vert y_i \vert)$:

::<math>\underline{Y} =  (+1, -1, -1, +1, -1)  \hspace{0.05cm}.</math>
<br clear=all>
 
*In symbol notation,&nbsp; this gives&nbsp; $(0, 1, 1, 0, 1)$,&nbsp; which is not a valid code word of&nbsp; $\text{SPC (5, 4, 2)}$&nbsp; &#8658; &nbsp; syndrome&nbsp; $s = 1$.&nbsp; So one,&nbsp; three or five bits must have been falsified.<br>

*The probability for three or five bit errors,&nbsp; however,&nbsp; is orders of magnitude smaller than that for a single error.&nbsp; The assumption of&nbsp; "one bit error"&nbsp; is therefore not unreasonable.<br>

*Since the received value&nbsp; $y_3$&nbsp; is very close to the threshold&nbsp; $G = 0$,&nbsp; it is assumed that exactly this bit has been falsified.&nbsp; Thus,&nbsp; with&nbsp; "soft decision",&nbsp; the decision is for&nbsp; $\underline{z} = (0, 1, 0, 0, 1)$ &nbsp; &#8658; &nbsp; $\underline{v} = (0, 1, 0, 0)$.&nbsp; The block error probability&nbsp; ${\rm Pr}(\underline{v} \ne \underline{u})$&nbsp; is thus lowest.}}<br><br>
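The soft-decision rule of Example 3&nbsp; (if the syndrome equals $1$, flip the bit closest to the threshold)&nbsp; can be sketched as follows; variable names are ours:

```python
# Soft-decision decoding sketch for the SPC (5, 4, 2):
# if the hard-decision word has syndrome s = 1, flip the least reliable bit.
y = (+0.8, -1.2, -0.1, +0.5, -0.6)

hard = [0 if yi > 0 else 1 for yi in y]       # bipolar mapping: +1 -> 0, -1 -> 1
s = 0
for bit in hard:
    s ^= bit                                   # syndrome in GF(2)

if s == 1:                                     # not a valid code word:
    weakest = min(range(len(y)), key=lambda i: abs(y[i]))
    hard[weakest] ^= 1                         # flip bit closest to G = 0

z = tuple(hard)
assert z == (0, 1, 0, 0, 1)                    # decided code word
v = z[:-1]                                     # strip the parity bit
assert v == (0, 1, 0, 0)
```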
 
  
== Repetition Codes==
<br>
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; A&nbsp; $\text{repetition code}$&nbsp; $(\rm RC)$&nbsp; is a linear binary&nbsp; $(n, \, k)$&nbsp; block code of the form

::<math>\mathcal{C} = \big \{ \underline{x} \in {\rm GF}(2^n)\text{:} \ \ x_i = x_j \hspace{0.25cm}{\rm for \hspace{0.25cm}all\hspace{0.35cm} } i, j = 1, \hspace{0.05cm} \text{...} \hspace{0.05cm}, n \big \}.</math>

*The code parameter&nbsp; $n$&nbsp; denotes the code length.&nbsp; Independent of&nbsp; $n$,&nbsp; always&nbsp; $k = 1$&nbsp; holds.

*Accordingly,&nbsp; there exist only the two code words&nbsp; $(0, 0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , 0)$&nbsp; and&nbsp; $(1, 1, \hspace{0.05cm}\text{...}\hspace{0.05cm} , 1)$,&nbsp; which differ in&nbsp; $n$&nbsp; binary places.

*From this follows for the minimum distance:&nbsp; $d_{\rm min} = n$.}}<br>
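A minimal sketch of this definition;&nbsp; <code>rc_encode</code>&nbsp; is an assumed helper name, not LNTwww code:

```python
# Sketch of the (n, 1, n) repetition code: two code words, d_min = n.
def rc_encode(u, n):
    """Repeat the single information bit u in {0, 1} n times."""
    return (u,) * n

n = 5
c0, c1 = rc_encode(0, n), rc_encode(1, n)
assert c0 == (0, 0, 0, 0, 0) and c1 == (1, 1, 1, 1, 1)

# The two code words differ in all n positions:
d_min = sum(a != b for a, b in zip(c0, c1))
assert d_min == n
```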
  
[[File:P ID2347 KC T 1 3 S2 v2.png|right|frame|Various repetition codes|class=fit]]
The graphic shows repetition codes for&nbsp; $n=3$,&nbsp;  $n=4$&nbsp; and&nbsp; $n=5$.&nbsp; Such a repetition code has the following properties:

*This&nbsp; $(n, \, 1, \, n)$&nbsp; block code has the very small code rate&nbsp; $R = 1/n$.

*So,&nbsp; such a code is only suitable for transferring or storing small files.

*On the other hand,&nbsp; the repetition code is very robust.

*In particular,&nbsp; in the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC channel}$]]&nbsp; ("Binary Erasure Channel"),&nbsp; a single correctly transmitted bit at any position&nbsp; (all other bits may be erased)&nbsp; is sufficient to correctly decode the information word.<br>
  
  
 
{{GraueBox|TEXT=   
$\text{Example 4: Decoding and error probabilities of the repetition code at the BSC channel}$
<br>
The&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC channel}$]]&nbsp; with&nbsp; $\varepsilon = 10\%$&nbsp; applies.&nbsp; The decoding is based on the majority principle.  

*For odd&nbsp; $n$,&nbsp; $e=n-1$&nbsp; bit errors can be detected and&nbsp; $t=(n-1)/2$&nbsp; bit errors can be corrected.  

*This gives for the probability of correct decoding of the information bit&nbsp;  $u$:

::<math>{\rm Pr}(v = u) =  \sum_{f=0  }^{(n-1)/2} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} \hspace{0.05cm}.</math>
  
*The following numerical values are valid for&nbsp; $n = 5$.&nbsp; That means: &nbsp; $t = 2$&nbsp; bit errors are correctable:

::<math>{\rm Pr}(v = u) = (1 - \varepsilon)^5 + 5 \cdot \varepsilon \cdot (1 - \varepsilon)^4 + 10 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^3 \approx 99.15\,\%</math>
::<math>\Rightarrow\hspace{0.3cm}{\rm Pr}(v \ne u) = 1-  {\rm Pr}(v = u) \approx 0.85\,\%\hspace{0.05cm}.</math>
  
*On the other hand,&nbsp; with even&nbsp; $n$,&nbsp; only&nbsp; $t=n/2-1$&nbsp; errors can be corrected.&nbsp; If&nbsp; $n$&nbsp; is increased from&nbsp; $5$&nbsp; to&nbsp; $6$,&nbsp; then still only two bit errors within a code word can be corrected.&nbsp; A third bit error cannot be corrected,&nbsp; but at least it can be recognized:

::<math>{\rm Pr}({\rm not\hspace{0.15cm} correctable\hspace{0.15cm} error})  
= {6 \choose 3} \cdot \varepsilon^{3} \cdot (1 - \varepsilon)^{3}= 20 \cdot 0.1^{3} \cdot 0.9^{3}\approx  
1.46\,\%\hspace{0.05cm}. </math>
  
*An&nbsp; (undetected)&nbsp; decoding error&nbsp; $(v \ne u)$&nbsp; results only when four or more bits have been falsified within the six-bit word.&nbsp; As an approximation,&nbsp; assuming that five or six bit errors are much less likely than four:

::<math>{\rm Pr}(v \ne u)  \approx {6 \choose 4} \cdot \varepsilon^{4} \cdot (1 - \varepsilon)^{2}=
0.122\,\%\hspace{0.05cm}.</math>

*It is interesting to note that for the&nbsp; $\text{RC (6, 1, 6)}$&nbsp; the probability&nbsp; ${\rm Pr}(v = u)$&nbsp; of a possible and correct decoding,&nbsp; at&nbsp; $98.42\%$,&nbsp; is smaller than for the&nbsp; $\text{RC (5, 1, 5)}$. <br>For the latter: &nbsp; ${\rm Pr}(v = u)  \approx 99.15\%.$}}<br>
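The numerical values of Example 4 can be checked with a short script;&nbsp; <code>pr_correct_odd</code>&nbsp; is our naming, not LNTwww code:

```python
# Reproducing the numerical values of Example 4 (BSC, eps = 10%, majority decoding).
from math import comb

eps = 0.1

def pr_correct_odd(n):
    """Pr(v = u) for odd n: up to t = (n-1)//2 bit errors are corrected."""
    t = (n - 1) // 2
    return sum(comb(n, f) * eps**f * (1 - eps)**(n - f) for f in range(t + 1))

# n = 5: t = 2 correctable bit errors
assert abs(pr_correct_odd(5) - 0.9915) < 1e-4        # about 99.15%

# n = 6: three bit errors are detectable but not correctable
pr_detect_only = comb(6, 3) * eps**3 * (1 - eps)**3
assert abs(pr_detect_only - 0.01458) < 1e-6          # about 1.46%

# Undetected decoding error: four or more falsified bits, approximated by f = 4
pr_err = comb(6, 4) * eps**4 * (1 - eps)**2
assert abs(pr_err - 0.001215) < 1e-6                 # about 0.122%
```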
  
 
{{GraueBox|TEXT=   
$\text{Example 5: Performance of the repetition code at the AWGN channel}$
<br>
We now consider the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN channel}$]].&nbsp; For uncoded transmission&nbsp; $($or the repetition code with&nbsp; $n=1)$,&nbsp; the received value is&nbsp; $y = \tilde{x}+\eta$,&nbsp; where&nbsp; $\tilde{x} \in \{+1, -1\}$&nbsp; denotes the information bit in bipolar signaling and&nbsp; $\eta$&nbsp; the noise term.&nbsp; To avoid confusion with the code parameter&nbsp; $n$,&nbsp; we have renamed the noise: &nbsp; $n &#8594; \eta$.<br>

For the error probability,&nbsp; with the&nbsp; [[Theory_of_Stochastic_Signals/Gaussian_Distributed_Random_Variables#Exceedance_probability|$\text{complementary Gaussian error integral}$]]&nbsp; ${\rm Q}(x)$,&nbsp; holds:
  
 
::<math>{\rm Pr}(v \ne u)  = {\rm Q}(\sqrt{\rho})  
\hspace{0.05cm},</math>

where the following physical quantities are to be used:
*the signal-to-noise ratio&nbsp; $\rm (SNR)$&nbsp; $\rho= 1/\sigma^2 =  2 \cdot E_{\rm S}/N_0$,<br>

*the energy&nbsp; $E_{\rm S}$&nbsp; per code symbol &nbsp; &#8658; &nbsp; "symbol energy",<br>

*the normalized standard deviation&nbsp; $\sigma$&nbsp; of the noise,&nbsp; valid for the bipolar information bit&nbsp; $\tilde{x} \in \{+1, -1\}$,&nbsp; and<br>

*the constant&nbsp; (one-sided)&nbsp; noise power density&nbsp; $N_0$&nbsp; of the AWGN noise.<br><br>
 
 
[[File:EN_KC_T_1_3_S2b.png|right|frame|Error probability of the repetition code at the AWGN channel|class=fit]]
In contrast,&nbsp; for an&nbsp; $(n,\ 1,\ n)$&nbsp; repetition code,&nbsp; the input value of the maximum likelihood decoder results as&nbsp; $y \hspace{0.04cm}' = \tilde{x} \hspace{0.04cm}'+\eta \hspace{0.04cm}'$&nbsp; with the following properties:

::<math>\tilde{x} \hspace{0.04cm}'  =\sum_{i=1  }^{n} \tilde{x}_i \in \{ +n, -n \}\hspace{0.2cm} \Rightarrow\hspace{0.2cm}
n{\rm -fold \hspace{0.15cm}amplitude}</math>
::<math>\hspace{4.8cm} \Rightarrow\hspace{0.2cm}n^2{\rm -fold \hspace{0.15cm}power}\hspace{0.05cm},</math>

::<math>\eta\hspace{0.04cm}'  = \sum_{i=1  }^{n} \eta_i\hspace{0.2cm} \Rightarrow\hspace{0.2cm}
n{\rm -fold \hspace{0.15cm}variance:\hspace{0.15cm} } \sigma^2 \rightarrow n \cdot \sigma^2\hspace{0.05cm},</math>

::<math>\rho\hspace{0.04cm}'  = \frac{n^2}{n \cdot \sigma^2} = n \cdot \rho
\hspace{0.2cm} \Rightarrow\hspace{0.2cm}{\rm Pr}(v \ne u)  = {\rm Q}(\sqrt{n \cdot \frac{2E_{\rm S} }{N_0} } )\hspace{0.05cm}.</math>
  
[[File:P ID2348 KC T 1 3 S2b v2.png|center|frame|Fehlerwahrscheinlichkeit des Wiederholungscodes beim AWGN–Kanal|class=fit]]
+
The error probability in double logarithmic representation is shown in the left graph.
 +
#As abscissa is&nbsp; $10 \cdot \lg \, (E_{\rm S}/N_0)$&nbsp; plotted.
 +
#The energy per bit&nbsp; $(E_{\rm B})$&nbsp; is &nbsp;$n$&nbsp; times larger than the symbol energy&nbsp; $E_{\rm S}$,&nbsp; as illustrated in the graph for &nbsp;$n=3$&nbsp;.
 +
<br clear=all>
 +
This set of curves can be interpreted as follows:
 +
*If one plots the error probability over the abscissa&nbsp; $10 \cdot \lg \, (E_{\rm S}/N_0)$&nbsp; then&nbsp; $n$&ndash;times repetition over uncoded transmission&nbsp; $(n=1)$&nbsp; results in a significant improvement.<br>
  
Die linke Grafik zeigt die Fehlerwahrscheinlichkeit in doppelt logarithmischer Darstellung. Als Abszisse ist $10 \cdot \lg \, (E_{\rm S}/N_0)$ aufgetragen. Diese Kurvenschar  kann wie folgt interpretiert werden:
+
*The curve for the repetition factor&nbsp; $n$&nbsp; is obtained by left shifting by&nbsp; $10 \cdot \lg \, n$&nbsp; $($in&nbsp; $\rm dB)$&nbsp; with respect to the comparison curve.&nbsp; <br>The gain is&nbsp; $4.77 \ {\rm dB} \ (n = 3)$&nbsp; or&nbsp; $\approx 5 \ {\rm dB} \ (n = 5)$.<br>
*Trägt man die Fehlerwahrscheinlichkeit über der Abszisse $10 \cdot \lg \, (E_{\rm S}/N_0)$ auf, so ergibt sich durch $n$&ndash;fache Wiederholung  gegenüber uncodierter Übertragung $(n=1)$ eine signifikante Verbesserung.<br>
 
  
*Die Kurve für den Wiederholungsfaktor $n$ erhält man durch Linksverschiebung um $10 \cdot \lg \, n$ (in dB) gegenüber der Vergleichskurve. Der Gewinn beträgt $4.77 \ {\rm dB} \ (n = 3)$ bzw. $\approx 5 \ {\rm dB} \ (n = 5)$.<br>
+
*However,&nbsp; a comparison at constant&nbsp; $E_{\rm S}$&nbsp; is not fair,&nbsp; since with the repetition code&nbsp; $\text{RC (5, 1, 5)}$&nbsp; one spends a factor&nbsp; $n$&nbsp; larger energy for the transmission of an information bit than with uncoded transmission: &nbsp; $E_{\rm B} = E_{\rm S}/{R} = n \cdot E_{\rm S}\hspace{0.05cm}.$
  
*Allerdings ist ein Vergleich bei konstantem $E_{\rm S}$ nicht fair, da man mit einem (5, 1, 5) Repetition Code für die Übertragung eines Informationsbits eine um den Faktor $n$ größere Energie aufwendet: &nbsp; $E_{\rm B}  = E_{\rm S}/{R} = n \cdot E_{\rm S}\hspace{0.05cm}.$
 
  
 
+
From the graph on the right,&nbsp; we can see that all the curves lie exactly on top of each other when plotted on the abscissa&nbsp; $10 \cdot \lg \, (E_{\rm B}/N_0)$.}}<br>
Aus der rechten Grafik erkennt man, dass alle Kurven genau übereinander liegen, wenn auf der Abszisse $10 \cdot \lg \, (E_{\rm S}/N_0)$ aufgetragen wird.}}<br>
 
  
 
{{BlaueBox|TEXT=   
 
{{BlaueBox|TEXT=   
$\text{Fazit bezüglich Wiederholungscodes beim AWGN&ndash;Kanal:}$
+
$\text{Conclusion regarding repetition codes on the AWGN channel:}$
*Die Fehlerwahrscheinlichkeit ist bei fairem Vergleich unabhängig vom Wiederholungsfaktor $n$: &nbsp; ${\rm Pr}(v \ne u)  = {\rm Q}\left (\sqrt{2E_{\rm B} /{N_0} } \right )
+
*The error probability is independent of the repetition factor&nbsp; $n$&nbsp; for a fair comparison: &nbsp; &nbsp; ${\rm Pr}(v \ne u)  = {\rm Q}\left (\sqrt{2E_{\rm B} /{N_0} } \right )
 
\hspace{0.05cm}.$
 
\hspace{0.05cm}.$
  
*Beim AWGN&ndash;Kanal ist durch einen Wiederholungscode kein [[Kanalcodierung/Decodierung_linearer_Blockcodes#Codiergewinn_.E2.80.93_Bitfehlerrate_bei_AWGN| Codiergewinn]] zu erzielen.}}<br>
+
*For the AWGN channel,&nbsp; no&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Coding_gain_-_bit_error_rate_with_AWGN|$\text{coding gain}$]]&nbsp; can be achieved by a repetition code.}}<br>
  
== Hamming–Codes ==
+
== Hamming Codes ==
 
<br>
 
<br>
[https://de.wikipedia.org/wiki/Richard_Hamming Richard Wesley Hamming] hat 1962 eine Klasse binärer Blockcodes angegeben, die sich durch die Anzahl $m = 2, 3, \text{...} $ der zugesetzten <i>Parity Bits</i> unterscheiden. Für diese Codeklasse gilt:
+
In 1962&nbsp; [https://en.wikipedia.org/wiki/Richard_Hamming $\text{Richard Wesley Hamming}$]&nbsp; specified a class of binary block codes that differ in the number&nbsp; $m = 2, 3, \text{...} $&nbsp; of added&nbsp; "parity bits".&nbsp; For this code class:
*Die Codelänge ergibt sich zu $n = 2^m -1$. Möglich sind demzufolge beim Hamming&ndash;Code nur die Längen $n = 3$, $n = 7$, $n = 15$, $n = 31$, $n = 63$,$n = 127$, $n = 255$, usw.<br>
+
*The code length always results in&nbsp; $n = 2^m -1$.&nbsp; Consequently,&nbsp; only the lengths&nbsp; $n = 3$,&nbsp; $n = 7$,&nbsp; $n = 15$,&nbsp; $n = 31$,&nbsp; $n = 63$,&nbsp; $n = 127$,&nbsp; $n = 255$, etc. are possible.<br>
  
*Ein Informationswort besteht aus $k = n-m$ Bit. Die Coderate ist somit gleich
+
*An information word consists of&nbsp; $k = n-m$&nbsp; bits.&nbsp; The code rate is therefore equal to
  
 
::<math>R = \frac{k}{n} = \frac{2^m - 1 - m}{2^m - 1} \in \{1/3, \hspace{0.1cm}4/7,\hspace{0.1cm}11/15,\hspace{0.1cm}26/31,\hspace{0.1cm}57/63,
 
::<math>R = \frac{k}{n} = \frac{2^m - 1 - m}{2^m - 1} \in \{1/3, \hspace{0.1cm}4/7,\hspace{0.1cm}11/15,\hspace{0.1cm}26/31,\hspace{0.1cm}57/63,
Line 228: Line 240:
 
\}\hspace{0.05cm}.</math>
 
\}\hspace{0.05cm}.</math>
  
*Alle Hamming&ndash;Codes weisen die minimale Distanz $d_{\rm min} = 3$ auf. Bei größerer Codelänge $n$ erreicht man $d_{\rm min} = 3$ schon mit weniger Redundanz, also bei größerer Coderate $R$.<br>
+
*All Hamming codes have the minimum distance&nbsp; $d_{\rm min} = 3$.&nbsp; With larger code length&nbsp; $n$&nbsp; one reaches&nbsp; $d_{\rm min} = 3$&nbsp; already with less redundancy, i.e. with larger code rate&nbsp; $R$.<br>
  
*Aus der Angabe $d_{\rm min} = 3$ folgt weiter, dass hier $e = d_{\rm min} -1 =2$ Fehler erkannt werden können und $t = (d_{\rm min} -1)/2 = 1$ Fehler korrigiert werden kann.<br>
+
*It further follows from the statement&nbsp; $d_{\rm min} = 3$&nbsp; that here only&nbsp; $e = d_{\rm min} -1 =2$&nbsp; errors can be detected and only one error  can&nbsp; $t = (d_{\rm min} -1)/2 = 1$&nbsp; correct errors.<br>
  
*Der Hamming&ndash;Code (3, 1, 3) ist identisch mit dem Wiederholungscode (3, 1, 3) und lautet:
+
*The Hamming code&nbsp; $\text{HC (3, 1, 3)}$&nbsp; is identical to the repetition code&nbsp; $\text{RP (3, 1, 3)}$&nbsp; and is:
  
::<math>\mathcal{C} = \{ (0, 0, 0) \hspace{0.05cm}, (1, 1, 1)  \}\hspace{0.05cm}. </math>
+
::<math>\mathcal{C} = \big \{ (0, 0, 0) \hspace{0.25cm}, (1, 1, 1)  \big \}\hspace{0.05cm}. </math>
  
*Bei systematischer Codierung sind die ersten $k$ Stellen eines jeden Hamming&ndash;Codewortes $\underline{x}$ identisch mit dem Informationswort $\underline{u}$. Anschließend folgen dann  die $m = n-k$ Paritätsbit:
+
*In systematic coding,&nbsp;  the first&nbsp; $k$&nbsp; digits of each Hamming code word&nbsp; $\underline{x}$&nbsp; are identical to the information word&nbsp; $\underline{u}$.&nbsp; This is then followed by&nbsp; $m = n-k$&nbsp; parity bits:
  
::<math>\underline{x} = ( x_1, x_2, ... \hspace{0.05cm}, x_n) = ( u_1, u_2, ... \hspace{0.05cm}, u_k, p_1, p_2, ... \hspace{0.05cm}, p_{n-k})
+
::<math>\underline{x} = ( x_1,\ x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm},\ x_n) = ( u_1,\ u_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ u_k,\ p_1,\ p_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ p_{n-k})
 
   \hspace{0.05cm}.</math>
 
   \hspace{0.05cm}.</math>
  
[[File:P ID2353 KC T 1 3 S3 v2.png|right|frame|Verdeutlichung des (7, 4, 3)–Hamming–Codes ]]
 
 
{{GraueBox|TEXT=   
 
{{GraueBox|TEXT=   
$\text{Beispiel 6: Paritätsgleichungen des (7, 4, 3)-Hamming-Codes}$
+
$\text{Example 6:   Parity equations of the (7, 4, 3) Hamming code}$
<br>Im Folgenden betrachten wir stets den (7, 4, 3)&ndash;Hamming&ndash;Code, der durch das nebenstehende Schaubild verdeutlicht wird. Daraus lassen sich die drei Bedingungen ableiten:
+
[[File:P ID2353 KC T 1 3 S3 v2.png|right|frame|Chart of the&nbsp$\text{HC (7, 4, 3)}$]]
::<math>x_1 \oplus x_2 \oplus x_3 \oplus x_5    \hspace{-0.1cm} = \hspace{-0.1cm} 0 \hspace{0.05cm},</math>
 
::<math>x_2 \oplus x_3 \oplus x_4 \oplus x_6    \hspace{-0.1cm} =  \hspace{-0.1cm} 0 \hspace{0.05cm},</math>
 
::<math>x_1 \oplus x_2 \oplus x_4 \oplus x_7    \hspace{-0.1cm} =  \hspace{-0.1cm} 0 \hspace{0.05cm}. </math>
 
  
*Im Schaubild kennzeichnet der rote Kreis die erste Prüfgleichung, der grüne die zweite und der blaue die letzte.
+
The&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code is illustrated by the diagram shown.&nbsp; From it one can derive the three conditions:
*In jedem Kreis muss die Anzahl der Einsen geradzahlig sein.
 
  
 +
::<math>x_1 \oplus x_2 \oplus x_3 \oplus x_5    = 0 \hspace{0.05cm},</math>
 +
::<math>x_2 \oplus x_3 \oplus x_4 \oplus x_6    = 0 \hspace{0.05cm},</math>
 +
::<math>x_1 \oplus x_2 \oplus x_4 \oplus x_7    = 0 \hspace{0.05cm}. </math>
  
Bei systematischer Codierung des  (7, 4, 3)&ndash;Hamming&ndash;Codes
+
*In the diagram,&nbsp; the red circle indicates the first test equation,&nbsp; the green the second and the blue the last.
 +
 +
*In each circle,&nbsp; the number of ones must be even.
  
:<math>x_1 = u_1 ,\hspace{0.2cm}
+
 
 +
[[File:P ID2351 KC T 1 3 S3c v2.png|right|frame|Assignment&nbsp; $\underline{u} → \underline{x}$&nbsp; of the systematic $\text{(7, 4, 3)}$ Hamming code|class=fit]]
 +
In systematic coding of the&nbsp; $\text{(7, 4, 3)}$ Hamming code
 +
 
 +
::<math>x_1 = u_1 ,\hspace{0.2cm}
 
x_2 = u_2 ,\hspace{0.2cm}
 
x_2 = u_2 ,\hspace{0.2cm}
 
x_3 = u_3 ,\hspace{0.2cm}
 
x_3 = u_3 ,\hspace{0.2cm}
Line 263: Line 279:
 
x_7 = p_3 </math>
 
x_7 = p_3 </math>
  
lauten die Bestimmungsgleichungen der drei Prüfbit, wie aus dem Schaubild(oben) hervorgeht:
+
are the equations of determination of the three test bits, as shown in the diagram:
[[File:P ID2351 KC T 1 3 S3c v2.png|right|frame|Zuordnung <i><u>u</u></i> → <i><u>x</u></i> des systematischen (7, 4, 3)–Hamming–Codes|class=fit]]
 
  
:<math>p_1 =u_1 \oplus u_2 \oplus u_3  \hspace{0.05cm},</math>
+
::<math>p_1 =u_1 \oplus u_2 \oplus u_3  \hspace{0.05cm},</math>
:<math>p_2 = u_2 \oplus u_3 \oplus u_4  \hspace{0.05cm},</math>
+
::<math>p_2 = u_2 \oplus u_3 \oplus u_4  \hspace{0.05cm},</math>
:<math>p_3 = u_1 \oplus u_2 \oplus u_4 \hspace{0.05cm}.</math>
+
::<math>p_3 = u_1 \oplus u_2 \oplus u_4 \hspace{0.05cm}.</math>
<br clear=all>
+
 
Die Tabelle zeigt die $2^k = 16$ zulässigen Codeworte $\underline{x} = ( x_1, x_2, x_3, x_4, x_5, x_6, x_7) =  ( u_1, u_2, u_3, u_4, p_1, p_2, p_3)$ des systematischen (7, 4, 3)&ndash;Codes.
+
The table shows the&nbsp; $2^k = 16$&nbsp; allowed code words of the systematic&nbsp; $\text{HC (7, 4, 3)}$:
*Das Informationswort $\underline{u} =( u_1, u_2, u_3, u_4)$ ist schwarz dargestellt und die Prüfbits $p_1$, $p_2$ und $p_3$ rot.  
+
:$$\underline{x} = ( x_1,\ x_2,\ x_3,\ x_4,\ x_5,\ x_6,\ x_7) =  ( u_1,\ u_2,\ u_3,\ u_4,\ p_1,\ p_2,\ p_3).$
*Man erkennt anhand dieser Tabelle, dass sich jeweils zwei der $16$ möglichen Codeworte in mindestens $d_{\rm min} = 3$ Binärwerten unterscheiden.}}<br>
+
*The information word&nbsp; $\underline{u} =( u_1,\ u_2,\ u_3,\ u_4)$&nbsp; is shown in black and the check bits&nbsp; $p_1$,&nbsp; $p_2$&nbsp; and&nbsp; $p_3$&nbsp; in red.
 +
 +
*It can be seen from this table that each two of the&nbsp; $16$&nbsp; possible code words differ in at least&nbsp; $d_{\rm min} = 3$&nbsp; binary values.}}<br>
  
Später wird die  [[Kanalcodierung/Decodierung_linearer_Blockcodes|Decodierung linearer Blockcodes]] noch ausführlich behandelt. Das folgende Beispiel soll die Decodierung des Hamming&ndash;Codes eher intuitiv  erklären.<br>
+
Later the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes|$\text{Decoding of linear block codes}$]]&nbsp; will be covered in more detail.&nbsp; The following example is intended to explain the decoding of the Hamming code rather intuitively.<br>
  
 
{{GraueBox|TEXT=   
 
{{GraueBox|TEXT=   
$\text{Beispiel 7:  Paritätsgleichungen des (7, 4, 3)-Hamming-Codes}$
+
$\text{Example 7:  Parity equations of the HC (7, 4, 3)}$
<br>Wir gehen weiter vom systematischen (7, 4, 3)&ndash;Code aus und betrachten das Empfangswort $\underline{y} = ( y_1, y_2, y_3, y_4, y_5, y_6, y_7)$. Zur Decodierung bilden wir die drei Paritätsgleichungen
+
<br>
 +
 
 +
We further assume the systematic&nbsp; $\text{(7, 4, 3)}$ Hamming code and consider the received word&nbsp; $\underline{y} = ( y_1,\ y_2,\ y_3,\ y_4,\ y_5,\ y_6,\ y_7)$.  
 +
 
 +
For decoding,&nbsp; we form the three parity equations
  
 
::<math> y_1 \oplus y_2 \oplus y_3 \oplus y_5    \hspace{-0.1cm}=  \hspace{-0.1cm}  0 \hspace{0.05cm},\hspace{0.5cm}{\rm (I)} </math>
 
::<math> y_1 \oplus y_2 \oplus y_3 \oplus y_5    \hspace{-0.1cm}=  \hspace{-0.1cm}  0 \hspace{0.05cm},\hspace{0.5cm}{\rm (I)} </math>
Line 284: Line 305:
 
::<math>y_1 \oplus y_2 \oplus y_4 \oplus y_7    \hspace{-0.1cm}=  \hspace{-0.1cm} 0\hspace{0.05cm}.  \hspace{0.5cm}{\rm (III)}</math>
 
::<math>y_1 \oplus y_2 \oplus y_4 \oplus y_7    \hspace{-0.1cm}=  \hspace{-0.1cm} 0\hspace{0.05cm}.  \hspace{0.5cm}{\rm (III)}</math>
  
Unter der Voraussetzung, dass in jedem Codewort höchstens ein Bit verfälscht wird, gelten dann die folgenden Aussagen. $\underline{v}$ bezeichnet das Decodierergebnis und sollte mit  $\underline{u} = (1, 0, 1, 0)$ übereinstimmen:
+
In the following&nbsp; $\underline{v}$&nbsp; denotes the decoding result; this should always match&nbsp; $\underline{u} = (1,\ 0,\ 1,\ 0)$.&nbsp;
*Das Empfangswort $\underline{y} = (1, 0, 1, 0, 0, 1, 1)$ erfüllt alle drei Paritätsgleichungen. Das heißt, dass kein einziger Übertragungsfehler aufgetreten ist &nbsp; &#8658; &nbsp; $\underline{y} = \underline{x}$ &nbsp; &#8658; &nbsp; $\underline{v} = \underline{u} = (1, 0, 1, 0)$.<br>
+
 
 +
Provided that at most one bit is falsified in each code word,&nbsp; the following statements are then valid:
 +
*The received word&nbsp; $\underline{y} = (1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 1)$&nbsp; satisfies all three parity equations. This means that not a single transmission error has occurred &nbsp; &#8658; &nbsp; $\underline{y} = \underline{x}$ &nbsp; &#8658; &nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.<br>
  
*Werden zwei der drei Paritätsgleichungen erfüllt wie zum Beispiel für das empfangene Wort $\underline{y} =(1, 0, 1, 0, 0, 1, 0)$, so wurde ein Paritätsbit verfälscht und es gilt auch hier $\underline{v} = \underline{u} = (1, 0, 1, 0)$.<br>
+
*If two of the three parity equations are satisfied,&nbsp; such as for the received word&nbsp; $\underline{y} =(1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 0)$,&nbsp; then one parity bit has been falsified and the following also applies here&nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.<br>
  
*Mit $\underline{y} = (1, 0, 1, 1, 0, 1, 1)$ wird nur die Gleichung (I) erfüllt und die beiden anderen nicht. Somit kann man die Verfälschung des vierten Binärsymbols korrigieren, und es gilt auch hier $\underline{v} = \underline{u} = (1, 0, 1, 0)$.<br>
+
*With&nbsp; $\underline{y} = (1,\ 0,\ 1,\ 1,\ 0,\ 1,\ 1)$&nbsp; only the equation&nbsp; $\rm (I)$&nbsp; is satisfied and the other two are not.&nbsp; Thus,&nbsp; the falsification of the fourth binary symbol can be corrected,&nbsp; and it is also valid here&nbsp; $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.<br>
  
*Ein Übertragungsfehler des zweiten Bits &nbsp; &#8658; &nbsp; $\underline{y} = (1, 1, 1, 0, 0, 1, 1)$ führt dazu, dass alle drei Paritätsgleichungen nicht erfüllt werden. Auch dieser Fehler lässt sich eindeutig korrigieren.}}<br>
+
*A transmission error of the second bit &nbsp; &#8658; &nbsp; $\underline{y} = (1,\ 1,\ 1,\ 0,\ 0,\ 1,\ 1)$&nbsp; leads to the fact that all three parity equations are not fulfilled.&nbsp; This error can also be clearly corrected since only&nbsp; $u_2$&nbsp; occurs in all equations.}}<br>
  
== Aufgaben zum Kapitel ==
+
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:1.5 SPC (5, 4) und BEC–Modell|Aufgabe 1.5: SPC (5, 4) und BEC–Modell]]
+
[[Aufgaben:Exercise_1.5:_SPC_(5,_4)_and_BEC_Model|Exercise 1.5: SPC (5, 4) and BEC Model]]
  
[[Aufgaben:1.5Z SPC (5, 4) vs. RC (5, 1)|Aufgabe 1.5Z: SPC (5, 4) vs. RC (5, 1)]]
+
[[Aufgaben:Exercise_1.5Z:_SPC_(5,_4)_vs._RC_(5,_1)|Exercise 1.5Z: SPC (5, 4) vs. RC (5, 1)]]
  
[[Aufgaben:1.6 Zum (7, 4)–Hamming–Code|Aufgabe 1.6: Zum (7, 4)–Hamming–Code]]
+
[[Aufgaben:Exercise_1.6:_(7,_4)_Hamming_Code|Exercise 1.6: (7, 4) Hamming Code]]
  
 
{{Display}}
 
{{Display}}

Latest revision as of 15:42, 23 January 2023

Single Parity-check Codes


The  »Single parity-check code«  $\rm (SPC)$  adds to the information block  $\underline{u}= (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$  a parity bit  $p$:

Various single parity-check codes  $(n = k + 1)$
$$\underline{u} = (u_1, u_2,\hspace{0.05cm} \text{...} \hspace{0.05cm} , u_k) \hspace{0.3cm}$$
$$\Rightarrow \hspace{0.3cm} \underline{x} = (x_1, x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm} , x_n) = (u_1, u_2,\hspace{0.05cm} \text{...}\hspace{0.05cm} , u_k, p) \hspace{0.05cm}.$$

The graphic shows three coding examples:

  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 4 \hspace{0.15cm} (k = 2)$,
  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 8 \hspace{0.15cm} (k = 3)$,
  • $|\hspace{0.05cm}\mathcal{C}\hspace{0.05cm}| = 16 \hspace{0.15cm} (k = 4)$.


This very simple code can be characterized as follows:

  • From  $n = k + 1$  follows for the  »code rate«  $R = k/n = (n-1)/n$  and for the  »redundancy«  $1-R = 1/n$.  For  $k = 2$,  for example,  the code rate is  $2/3$  and the relative redundancy is  $33.3\%$.
  • The parity bit is obtained by  »modulo-2«  addition.  This is addition in the  $\text{Galois field}$  of order  $2$   ⇒   $\rm GF(2)$,  so that  $1 \oplus 1 = 0$  holds:
\[p = u_1 \oplus u_2 \oplus \text{...} \hspace{0.05cm} \oplus u_k \hspace{0.05cm}.\]
  • Thus every valid code word  $\underline{x}$  contains an even number of ones.  Expressed as  $\oplus$  or in simplified notation according to the second equation,  this condition reads:
\[ x_1 \oplus x_2 \oplus \text{...} \hspace{0.05cm} \oplus x_n = 0 \hspace{0.05cm}, \hspace{0.5cm}{\rm or:}\hspace{0.5cm} \sum_{i=1}^{n} \hspace{0.2cm} x_i = 0\hspace{0.05cm} , \hspace{0.3cm} {\rm addition\hspace{0.15cm} in \hspace{0.15cm} GF(2)} \hspace{0.05cm}. \]
  • For  $k = 2$   ⇒   $n = 3$  the following four code words result,  where the parity bit  $p$  is marked by a small arrow in each case:
\[\underline{x}_0 = (0, 0_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_1 = (0, 1_{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_2 = (1, 0 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 1)\hspace{0.05cm}, \hspace{0.2cm} \underline{x}_3 = (1, 1 _{\hspace{0.05cm} \rightarrow}\hspace{0.05cm} 0)\hspace{0.05cm}.\]
  • This code  $\mathcal{C} = \big \{ (0, 0, 0), \ (0, 1, 1), \ (1, 0, 1), \ (1, 1, 0) \big \}$  is  »linear« since the sum of any two code words again gives a valid code word,  for example:
$$\underline{x}_1 \oplus \underline{x}_2 = \underline{x}_3.$$
  • For any  $k$   ⇒   $n = k+1$  each code word differs from all others at an even number of positions.  Thus,  the minimum distance of the code is 
$$d_{\rm min} = 2.$$
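The encoding rule and the properties just listed (even parity, linearity, $d_{\rm min} = 2$) can be sketched in a few lines of Python; this is only an illustration, and the helper name `spc_encode` is our own choice, not part of the lesson:

```python
from itertools import product

def spc_encode(u):
    """Append the parity bit p = u_1 + u_2 + ... + u_k (addition in GF(2))."""
    p = 0
    for bit in u:
        p ^= bit                      # XOR realizes the modulo-2 addition
    return u + [p]

# All 2^k = 4 code words of the SPC (3, 2, 2), i.e. k = 2, n = 3:
code = [spc_encode(list(u)) for u in product([0, 1], repeat=2)]
print(code)                           # [[0,0,0], [0,1,1], [1,0,1], [1,1,0]]

# Linearity: the GF(2) sum of two code words is again a code word:
x1, x2 = code[1], code[2]
x3 = [a ^ b for a, b in zip(x1, x2)]
print(x3 in code)                     # True
```

Every code word produced this way contains an even number of ones, and any two of them differ in an even number of positions.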

$\text{Definition:}$  Each  $\text{single parity-check code (SPC)}$  can be formally described as follows:

\[\mathcal{C} = \{ \underline{x} \in {\rm GF}(2^n)\hspace{-0.15cm}: \hspace{0.15cm}{\rm with \hspace{0.15cm}even\hspace{0.15cm} number\hspace{0.15cm} of\hspace{0.15cm} ones\hspace{0.15cm} in \hspace{0.15cm} } \underline{x} \}\hspace{0.05cm}.\]
  • With the general code name  $(n, \ k, \ d_{\rm min})$,  any single parity-check code can also be named  $\text{SPC }(n, \ n-1, \ 2)$.
  • The top graph thus shows the  $\text{SPC (3, 2, 2)}$,  the  $\text{SPC (4, 3, 2)}$,  and the  $\text{SPC (5, 4, 2)}$.


The digital channel may change the code word  $\underline{x}= (x_1, x_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, x_n)$  to the received word  $\underline{y}= (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$. With the error vector  $\underline{e}= (e_1, e_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, e_n)$  holds:

$$\underline{y}= \underline{x} \oplus \underline{e}.$$

For  $\text{decoding the single parity-check code}$  one forms the so-called  »syndrome«:

\[s = y_1 \oplus y_2 \oplus \hspace{0.05cm}\text{...} \hspace{0.05cm} \oplus y_n = \sum_{i=1}^{n} \hspace{0.2cm} y_i \hspace{0.1cm} \in \hspace{0.2cm} \{0, 1 \} \hspace{0.05cm}.\]

The result  $s=1$  then indicates  (at least)  one bit error within the code word,  while  $s=0$  should be interpreted as follows:

  • The transmission was error-free, or:
  • the number of bit errors is even.
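The syndrome formation over $\underline{y} = \underline{x} \oplus \underline{e}$ and its interpretation can be sketched as follows (a minimal illustration; the function name `syndrome` is our choice):

```python
from functools import reduce

def syndrome(y):
    """GF(2) sum s = y_1 + ... + y_n; s = 1 flags an odd number of bit errors."""
    return reduce(lambda a, b: a ^ b, y)

x = [1, 0, 1, 0]                                     # valid SPC (4, 3, 2) code word
y1 = [xi ^ ei for xi, ei in zip(x, [0, 1, 0, 0])]    # one bit error
y2 = [xi ^ ei for xi, ei in zip(x, [0, 1, 1, 0])]    # two bit errors
print(syndrome(y1), syndrome(y2))                    # 1 0  -> only y1 is detected
```

As stated above, an even number of bit errors (here `y2`) leaves the syndrome at $s = 0$ and therefore goes undetected.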

$\text{Example 1:}$  We consider the  $\text{SPC (4, 3, 2)}$  and assume that the all-zero word was sent.  The table shows all constellations in which  $f$  bits are falsified,  together with the resulting syndrome  $s \in \{0, 1\}$. 

Possible received values at the  $\text{SPC (4, 3, 2)}$

For the  $\text{BSC model}$  with the crossover probability  $\varepsilon = 1\%$  the following probabilities then result:

  • The information word is correctly decoded  (blue background):
\[{\rm Pr}(\underline{v} = \underline{u}) = {\rm Pr}(\underline{y} = \underline{x}) = (1 - \varepsilon)^n = 0.99^4 \approx 96\,\%\hspace{0.05cm}.\]
  • The decoder detects that transmission errors have occurred  (green background):
$${\rm Pr}(s=1) \hspace{-0.1cm} = \hspace{-0.1cm} \sum_{f=1 \atop f \hspace{0.1cm}{\rm odd} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f}$$
$$\Rightarrow \hspace{0.3cm} {\rm Pr}(s=1) \hspace{-0.1cm} = {4 \choose 1} \cdot 0.01 \cdot 0.99^3 + {4 \choose 3} \cdot 0.01^3 \cdot 0.99 \approx 3.9\,\%\hspace{0.05cm}.$$
  • The information word is decoded incorrectly  (red background):
\[{\rm Pr}(\underline{v} \ne \underline{u}) \hspace{-0.1cm} = \hspace{-0.1cm} \sum_{f=2 \atop f \hspace{0.1cm}{\rm even} }^{n} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} = 1 - {\rm Pr}(\underline{v} = \underline{u}) - {\rm Pr}(s=1)\approx 0.1\,\%\hspace{0.05cm}.\]

We refer here to the HTML5/JavaScript applet  $\text{Binomial and Poisson Distribution}$.  The results obtained here are also discussed in  $\text{Exercise 1.5}$.
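These three probabilities follow directly from the binomial distribution and can be checked numerically (a small sketch; the variable names are ours):

```python
from math import comb

n, eps = 4, 0.01                 # SPC (4, 3, 2) over the BSC with epsilon = 1 %

def p_f(f):
    """Binomial probability of exactly f bit errors among n transmitted bits."""
    return comb(n, f) * eps**f * (1 - eps)**(n - f)

p_correct    = p_f(0)                                   # blue:  y = x
p_detected   = sum(p_f(f) for f in range(1, n + 1, 2))  # green: s = 1 (f odd)
p_undetected = sum(p_f(f) for f in range(2, n + 1, 2))  # red:   v != u (f even)
print(f"{p_correct:.4f} {p_detected:.4f} {p_undetected:.4f}")
# -> 0.9606 0.0388 0.0006
```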


$\text{Example 2:}$  With the BSC model,  error correction by the single parity-check code is not possible,  in contrast to the  $\text{BEC model}$  ("Binary Erasure Channel").

With the latter,  bit errors are excluded.  If only a single bit is erased  $($"erasure",  $\rm E)$,  then the condition  "the number of ones in the code word is even"  also makes error correction possible,  for example for the  $\text{SPC (5, 4, 2)}$:

\[\underline{y} = (1, 0, {\rm E}, 1, 1) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (1, 0, 1, 1, 1) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (1, 0, 1, 1) = \underline{u}\hspace{0.05cm},\] \[\underline{y}=(0, 1, 1, {\rm E}, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (0, 1, 1, 0, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (0, 1, 1, 0) = \underline{u}\hspace{0.05cm},\] \[\underline{y} = (0, 1, 0, 1, {\rm E}) \hspace{0.2cm}\Rightarrow\hspace{0.2cm}\underline{z} = (0, 1, 0, 1, 0) \hspace{0.2cm}\Rightarrow\hspace{0.2cm} \underline{v} = (0, 1, 0, 1) = \underline{u}\hspace{0.05cm}.\]
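Under the stated assumption of at most one erasure per code word, this correction can be sketched as follows for the $\text{SPC (5, 4, 2)}$; the helper name `spc_bec_decode` is our own:

```python
def spc_bec_decode(y):
    """Fill a single erasure 'E' such that the number of ones becomes even."""
    known = [b for b in y if b != 'E']
    if len(known) == len(y):                    # no erasure occurred
        return list(y)
    p = sum(known) % 2                          # GF(2) sum of the known bits
    return [p if b == 'E' else b for b in y]

# The three received words of the example above:
for y in ([1, 0, 'E', 1, 1], [0, 1, 1, 'E', 0], [0, 1, 0, 1, 'E']):
    z = spc_bec_decode(y)
    print(z, '-> v =', z[:4])
```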


$\text{Example 3:}$  Also with the  $\text{AWGN model}$  error correction is possible when applying  "soft decision".  For the following we assume bipolar signaling:

To clarify  "soft decision"  at AWGN
  • $x=0$   ⇒   $\tilde{x}= +1$,  as well as
  • $x=1$   ⇒   $\tilde{x}= -1$.


The graphic illustrates the facts presented here:

  • For example,  the received vector is  (red dots):
\[\underline{y} = (+0.8, -1.2, -0.1, +0.5, -0.6) \hspace{0.05cm}.\]
  • With a hard decision  $($threshold  $G = 0$,  only the signs are evaluated$)$  one would arrive at the following binary result  $($green squares  $Y_i = y_i/ \vert y_i \vert)$:
\[\underline{Y} = (+1, -1, -1, +1, -1) \hspace{0.05cm}.\]
  • In symbol notation,  this gives  $(0, 1, 1, 0, 1)$,  which is not a valid code word of  $\text{SPC (5, 4, 2)}$  ⇒   syndrome  $s = 1$.  So one,  three or five bits must have been falsified.
  • The probability for three or five bit errors,  however,  is orders of magnitude smaller than that for a single error.  The assumption of  "one bit error"  is therefore not unreasonable.
  • Since the received value  $y_3$  is very close to the threshold  $G = 0$  it is assumed that exactly this bit has been falsified.  Thus,  with "soft decision",  the decision is for  $\underline{z} = (0, 1, 0, 0, 1)$   ⇒   $\underline{v} = (0, 1, 0, 0)$.  The block error probability  ${\rm Pr}(\underline{v} \ne \underline{u})$  is thus lowest.
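The decision rule described in this example (hard decision per symbol, then flipping the bit closest to the threshold if the syndrome indicates an invalid code word) can be sketched as follows; the function name is our own choice:

```python
def spc_soft_decode(y):
    """Hard-decide each value; if the syndrome is 1, flip the least reliable bit."""
    z = [0 if yi > 0 else 1 for yi in y]        # bipolar mapping: +1 -> 0, -1 -> 1
    if sum(z) % 2 == 1:                         # syndrome s = 1: invalid code word
        i = min(range(len(y)), key=lambda j: abs(y[j]))  # value closest to G = 0
        z[i] ^= 1
    return z

z = spc_soft_decode([+0.8, -1.2, -0.1, +0.5, -0.6])    # red dots of the graphic
print(z, '-> v =', z[:4])       # [0, 1, 0, 0, 1] -> v = [0, 1, 0, 0]
```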



Repetition Codes


$\text{Definition:}$  A  $\text{repetition code}$  $\rm (RC)$  is a linear binary  $(n, \, k)$  block code of the form

\[\mathcal{C} = \big \{ \underline{x} \in {\rm GF}(2^n)\text{:} \ \ x_i = x_j \hspace{0.25cm}{\rm for \hspace{0.25cm}all\hspace{0.35cm} } i, j = 1, \hspace{0.05cm} \text{...} \hspace{0.05cm}, n \big \}.\]
  • The code parameter  $n$  denotes the code length.  Independently of  $n$,  $k = 1$  always holds.
  • Accordingly,  there exist only the two code words  $(0, 0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , 0)$  and  $(1, 1, \hspace{0.05cm}\text{...}\hspace{0.05cm} , 1)$, which differ in  $n$  binary places.
  • From this follows for the minimum distance  $d_{\rm min} = n$.


The graphic shows repetition codes for  $n=3$,  $n=4$  and  $n=5$.  Such a repetition code has the following properties:

Various repetition codes
  • This  $(n, \, 1, \, n)$  block code has the very small code rate  $R = 1/n$.
  • So,  such a code is only suitable for transferring or storing small files.
  • On the other hand,  the repetition code is very robust.
  • In particular,  in the  $\text{BEC channel}$  ("Binary Erasure Channel"),  a single correctly transmitted bit at any position  (all other bits may be erased)  is sufficient to correctly decode the information word.


$\text{Example 4: Decoding and error probabilities of the repetition code at the BSC channel}$

The  $\text{BSC channel}$  with  $\varepsilon = 10\%$  applies.  The decoding is based on the majority principle.

  • For odd  $n$   ⇒   $e=n-1$ bit errors can be detected and  $t=(n-1)/2$  bit errors can be corrected.
  • This gives for the probability of correct decoding of the information bit  $u$:
\[{\rm Pr}(v = u) = \sum_{f=0 }^{(n-1)/2} {n \choose f} \cdot \varepsilon^{f} \cdot (1 - \varepsilon)^{n-f} \hspace{0.05cm}.\]
  • For  $n = 5$,  i.e.  $t = 2$  correctable bit errors,  the following numerical values result:
\[{\rm Pr}(v = u) = (1 - \varepsilon)^5 + 5 \cdot \varepsilon \cdot (1 - \varepsilon)^4 + 10 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^3 \approx 99.15\,\%\]
\[\Rightarrow\hspace{0.3cm}{\rm Pr}(v \ne u) = 1- {\rm Pr}(v = u) \approx 0.85\,\%\hspace{0.05cm}.\]
  • On the other hand,  with even  $n$  only  $t=n/2-1$  errors can be corrected.  If  $n$  is increased from  $5$  to  $6$,  then only two bit errors within a code word can be corrected.  A third bit error cannot be corrected,  but at least it can be recognized:
\[{\rm Pr}({\rm not\hspace{0.15cm} correctable\hspace{0.15cm} error}) = {6 \choose 3} \cdot \varepsilon^{3} \cdot (1 - \varepsilon)^{3}= 20 \cdot 0.1^{3} \cdot 0.9^{3}\approx 1.46\,\%\hspace{0.05cm}. \]
  • An  (undetected)  decoding error  $(v \ne u)$  results only when four or more bits have been falsified within the six bit word.  As an approximation,  assuming that five or six bit errors are much less likely than four:
\[{\rm Pr}(v \ne u) \approx {6 \choose 4} \cdot \varepsilon^{4} \cdot (1 - \varepsilon)^{2}= 0.122\,\%\hspace{0.05cm}.\]
  • It is interesting to note that for the  $\text{RC (6, 1, 6)}$  the probability of correct decoding,  ${\rm Pr}(v = u) \approx 98.42\%$,  is smaller than for the  $\text{RC (5, 1, 5)}$,  for which  ${\rm Pr}(v = u) \approx 99.15\%$  holds.
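The two probabilities of correct decoding compared in this example can be reproduced with the binomial sum given above (a small numerical sketch; the names are ours):

```python
from math import comb

def p_majority_ok(n, eps, t):
    """Pr(v = u): at most t of the n repeated bits are falsified on the BSC."""
    return sum(comb(n, f) * eps**f * (1 - eps)**(n - f) for f in range(t + 1))

eps = 0.1
p5 = p_majority_ok(5, eps, t=2)    # RC (5, 1, 5) corrects up to t = 2 errors
p6 = p_majority_ok(6, eps, t=2)    # RC (6, 1, 6) also corrects only t = 2
print(p5, p6)                      # p5 is larger than p6, cf. the percentages above
```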


$\text{Example 5: Performance of the repetition code at the AWGN channel}$

We now consider the  $\text{AWGN channel}$.  For uncoded transmission  $($or the repetition code with  $n=1)$  the received value is  $y = \tilde{x}+\eta$ , where  $\tilde{x} \in \{+1, -1\}$  denotes the information bit in bipolar signaling and  $\eta$  denotes the noise term.  To avoid confusion with the code parameter  $n$  we have renamed the noise:   $n → \eta$.

For the error probability,  the following holds with the  $\text{complementary Gaussian error integral}$  ${\rm Q}(x)$:

\[{\rm Pr}(v \ne u) = {\rm Q}(\sqrt{\rho}) \hspace{0.05cm},\]

where the following physical quantities are to be used:

  • the signal-to-noise ratio  $\rm (SNR)$  $\rho= 1/\sigma^2 = 2 \cdot E_{\rm S}/N_0$,
  • the energy  $E_{\rm S}$  per code symbol   ⇒   "symbol energy",
  • the normalized standard deviation  $\sigma$  of the noise,  valid for the bipolar information bit  $\tilde{x} \in \{+1, -1\}$,  and
  • the constant  (one-sided)  noise power density  $N_0$  of the AWGN noise.

Error probability of the repetition code at the AWGN channel

In contrast,  for an  $(n,\ 1,\ n)$  repetition code,  the input value of the maximum likelihood decoder,  $y \hspace{0.04cm}' = \tilde{x} \hspace{0.04cm}'+\eta \hspace{0.04cm}'$,  has the following properties:

\[\tilde{x} \hspace{0.04cm}' =\sum_{i=1 }^{n} \tilde{x}_i \in \{ +n, -n \}\hspace{0.2cm} \Rightarrow\hspace{0.2cm} n{\rm -fold \hspace{0.15cm}amplitude}\]
\[\hspace{4.8cm} \Rightarrow\hspace{0.2cm}n^2{\rm -fold \hspace{0.15cm}power}\hspace{0.05cm},\]
\[\eta\hspace{0.04cm}' = \sum_{i=1 }^{n} \eta_i\hspace{0.2cm} \Rightarrow\hspace{0.2cm} n{\rm -fold \hspace{0.15cm}variance:\hspace{0.15cm} } \sigma^2 \rightarrow n \cdot \sigma^2\hspace{0.05cm},\]
\[\rho\hspace{0.04cm}' = \frac{n^2}{n \cdot \sigma^2} = n \cdot \rho \hspace{0.2cm} \Rightarrow\hspace{0.2cm}{\rm Pr}(v \ne u) = {\rm Q}(\sqrt{n \cdot \frac{2E_{\rm S} }{N_0} } )\hspace{0.05cm}.\]

The error probability in double logarithmic representation is shown in the left graph.

  1. The quantity  $10 \cdot \lg \, (E_{\rm S}/N_0)$  is plotted as the abscissa.
  2. The energy per bit  $(E_{\rm B})$  is  $n$  times larger than the symbol energy  $E_{\rm S}$,  as illustrated in the graph for  $n=3$.


This set of curves can be interpreted as follows:

  • If one plots the error probability over the abscissa  $10 \cdot \lg \, (E_{\rm S}/N_0)$  then  $n$–times repetition over uncoded transmission  $(n=1)$  results in a significant improvement.
  • The curve for the repetition factor  $n$  is obtained by a left shift of  $10 \cdot \lg \, n$  $($in  $\rm dB)$  with respect to the comparison curve. 
    The gain is  $10 \cdot \lg \, 3 \approx 4.77 \ {\rm dB}$  for  $n = 3$  and  $10 \cdot \lg \, 5 \approx 7 \ {\rm dB}$  for  $n = 5$.
  • However,  a comparison at constant  $E_{\rm S}$  is not fair,  since with the repetition code  $\text{RC (5, 1, 5)}$  one spends a factor  $n$  larger energy for the transmission of an information bit than with uncoded transmission:   $E_{\rm B} = E_{\rm S}/{R} = n \cdot E_{\rm S}\hspace{0.05cm}.$


From the graph on the right,  we can see that all the curves lie exactly on top of each other when plotted on the abscissa  $10 \cdot \lg \, (E_{\rm B}/N_0)$.
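This coincidence of the curves can be checked numerically: at fixed $E_{\rm B}/N_0$ the argument of the Q function does not depend on $n$ (a small sketch; the value $E_{\rm B}/N_0 = 2$ is an arbitrary example):

```python
from math import erfc, sqrt

def Q(x):
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

# Fair comparison at fixed E_B/N_0:
# with E_S = E_B / n, the decoder SNR n * 2E_S/N_0 equals 2E_B/N_0 for all n.
EbN0 = 2.0
pr = {n: Q(sqrt(n * 2 * (EbN0 / n))) for n in (1, 3, 5)}
print(pr)        # the same value Q(2) ≈ 0.0228 for every repetition factor n
```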


$\text{Conclusion regarding repetition codes on the AWGN channel:}$

  • The error probability is independent of the repetition factor  $n$  for a fair comparison:     ${\rm Pr}(v \ne u) = {\rm Q}\left (\sqrt{2E_{\rm B} /{N_0} } \right ) \hspace{0.05cm}.$
  • For the AWGN channel,  no coding gain can be achieved by a repetition code.


Hamming Codes


In 1950,  $\text{Richard Wesley Hamming}$  specified a class of binary block codes that differ in the number  $m = 2, 3, \text{...} $  of added  "parity bits".  The following holds for this code class:

  • The code length always results in  $n = 2^m -1$.  Consequently,  only the lengths  $n = 3$,  $n = 7$,  $n = 15$,  $n = 31$,  $n = 63$,  $n = 127$,  $n = 255$, etc. are possible.
  • An information word consists of  $k = n-m$  bits.  The code rate is therefore equal to
\[R = \frac{k}{n} = \frac{2^m - 1 - m}{2^m - 1} \in \{1/3, \hspace{0.1cm}4/7,\hspace{0.1cm}11/15,\hspace{0.1cm}26/31,\hspace{0.1cm}57/63, \hspace{0.1cm}120/127,\hspace{0.1cm}247/255, \hspace{0.05cm} \text{...} \hspace{0.05cm} \}\hspace{0.05cm}.\]
  • All Hamming codes have the minimum distance  $d_{\rm min} = 3$.  As the code length  $n$  increases,  this minimum distance is achieved with relatively less redundancy,  i.e. with a larger code rate  $R$.
  • It further follows from  $d_{\rm min} = 3$  that only  $e = d_{\rm min} -1 = 2$  errors can be detected and only  $t = (d_{\rm min} -1)/2 = 1$  error can be corrected.
  • The Hamming code  $\text{HC (3, 1, 3)}$  is identical to the repetition code  $\text{RC (3, 1, 3)}$  and reads:
\[\mathcal{C} = \big \{ (0, 0, 0) \hspace{0.25cm}, (1, 1, 1) \big \}\hspace{0.05cm}. \]
  • In systematic coding,  the first  $k$  digits of each Hamming code word  $\underline{x}$  are identical to the information word  $\underline{u}$.  This is then followed by  $m = n-k$  parity bits:
\[\underline{x} = ( x_1,\ x_2,\hspace{0.05cm}\text{...} \hspace{0.05cm},\ x_n) = ( u_1,\ u_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ u_k,\ p_1,\ p_2,\ \hspace{0.05cm}\text{...} \hspace{0.05cm},\ p_{n-k}) \hspace{0.05cm}.\]
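The parameter relations  $n = 2^m - 1$,  $k = n - m$  and  $R = k/n$  listed above can be tabulated directly; a minimal sketch:

```python
# Hamming code parameters for m = 2 ... 8 parity bits:
# code length n = 2^m - 1, information bits k = n - m, rate R = k/n.
for m in range(2, 9):
    n = 2**m - 1
    k = n - m
    print(f"m={m}: ({n}, {k}) Hamming code, R = {k}/{n}")
```

For  $m = 3$  this reproduces the  $(7, 4)$  code with  $R = 4/7$  discussed in the following examples.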

$\text{Example 6: Parity equations of the (7, 4, 3) Hamming code}$

Chart of the  $\text{HC (7, 4, 3)}$

The  $\text{(7, 4, 3)}$  Hamming code is illustrated by the diagram shown.  From it one can derive the three conditions:

\[x_1 \oplus x_2 \oplus x_3 \oplus x_5 = 0 \hspace{0.05cm},\]
\[x_2 \oplus x_3 \oplus x_4 \oplus x_6 = 0 \hspace{0.05cm},\]
\[x_1 \oplus x_2 \oplus x_4 \oplus x_7 = 0 \hspace{0.05cm}. \]
  • In the diagram,  the red circle indicates the first parity equation,  the green circle the second,  and the blue circle the third.
  • In each circle,  the number of ones must be even.


Assignment  $\underline{u} → \underline{x}$  of the systematic $\text{(7, 4, 3)}$ Hamming code

For systematic coding of the  $\text{(7, 4, 3)}$ Hamming code,

\[x_1 = u_1 ,\hspace{0.2cm} x_2 = u_2 ,\hspace{0.2cm} x_3 = u_3 ,\hspace{0.2cm} x_4 = u_4 ,\hspace{0.2cm} x_5 = p_1 ,\hspace{0.2cm} x_6 = p_2 ,\hspace{0.2cm} x_7 = p_3 ,\]

the three parity bits are determined by the following equations,  as shown in the diagram:

\[p_1 =u_1 \oplus u_2 \oplus u_3 \hspace{0.05cm},\]
\[p_2 = u_2 \oplus u_3 \oplus u_4 \hspace{0.05cm},\]
\[p_3 = u_1 \oplus u_2 \oplus u_4 \hspace{0.05cm}.\]

The table shows the  $2^k = 16$  allowed code words of the systematic  $\text{HC (7, 4, 3)}$:

$$\underline{x} = ( x_1,\ x_2,\ x_3,\ x_4,\ x_5,\ x_6,\ x_7) = ( u_1,\ u_2,\ u_3,\ u_4,\ p_1,\ p_2,\ p_3).$$
  • The information word  $\underline{u} =( u_1,\ u_2,\ u_3,\ u_4)$  is shown in black and the parity bits  $p_1$,  $p_2$  and  $p_3$  in red.
  • It can be seen from this table that any two of the  $16$  possible code words differ in at least  $d_{\rm min} = 3$  bit positions.
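The code table and the minimum distance can be reproduced from the parity equations  $p_1 = u_1 \oplus u_2 \oplus u_3$,  $p_2 = u_2 \oplus u_3 \oplus u_4$,  $p_3 = u_1 \oplus u_2 \oplus u_4$;  a minimal sketch (the function name `encode_hc743` is an illustrative choice):

```python
from itertools import product

def encode_hc743(u):
    """Systematic (7, 4, 3) Hamming encoder:
    p1 = u1+u2+u3, p2 = u2+u3+u4, p3 = u1+u2+u4 (all modulo 2)."""
    u1, u2, u3, u4 = u
    return (u1, u2, u3, u4, u1 ^ u2 ^ u3, u2 ^ u3 ^ u4, u1 ^ u2 ^ u4)

# All 2^k = 16 allowed code words:
codebook = [encode_hc743(u) for u in product((0, 1), repeat=4)]

# Minimum Hamming distance over all pairs of code words:
d_min = min(sum(a != b for a, b in zip(x, y))
            for i, x in enumerate(codebook) for y in codebook[:i])
print(d_min)  # -> 3
```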


Later the  $\text{Decoding of linear block codes}$  will be covered in more detail.  The following example is intended to explain the decoding of the Hamming code rather intuitively.

$\text{Example 7: Decoding of the HC (7, 4, 3)}$

We further assume the systematic  $\text{(7, 4, 3)}$ Hamming code and consider the received word  $\underline{y} = ( y_1,\ y_2,\ y_3,\ y_4,\ y_5,\ y_6,\ y_7)$.

For decoding,  we form the three parity equations

\[ y_1 \oplus y_2 \oplus y_3 \oplus y_5 \hspace{-0.1cm}= \hspace{-0.1cm} 0 \hspace{0.05cm},\hspace{0.5cm}{\rm (I)} \]
\[y_2 \oplus y_3 \oplus y_4 \oplus y_6 \hspace{-0.1cm}= \hspace{-0.1cm}0 \hspace{0.05cm},\hspace{0.5cm}{\rm (II)} \]
\[y_1 \oplus y_2 \oplus y_4 \oplus y_7 \hspace{-0.1cm}= \hspace{-0.1cm} 0\hspace{0.05cm}. \hspace{0.5cm}{\rm (III)}\]

In the following  $\underline{v}$  denotes the decoding result; this should always match  $\underline{u} = (1,\ 0,\ 1,\ 0)$. 

Provided that at most one bit is falsified in each code word,  the following statements are then valid:

  • The received word  $\underline{y} = (1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 1)$  satisfies all three parity equations.  This means that no transmission error has occurred   ⇒   $\underline{y} = \underline{x}$   ⇒   $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$.
  • If exactly two of the three parity equations are satisfied,  as for the received word  $\underline{y} =(1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 0)$,  then a single parity bit has been falsified  $($here:  $p_3)$,  and again  $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$  holds.
  • With  $\underline{y} = (1,\ 0,\ 1,\ 1,\ 0,\ 1,\ 1)$  only equation  $\rm (I)$  is satisfied,  while  $\rm (II)$  and  $\rm (III)$  are not.  Since  $y_4$  is the only bit that occurs in  $\rm (II)$  and  $\rm (III)$  but not in  $\rm (I)$,  the falsification of the fourth binary symbol can be corrected,  and again  $\underline{v} = \underline{u} = (1,\ 0,\ 1,\ 0)$  holds.
  • A transmission error of the second bit   ⇒   $\underline{y} = (1,\ 1,\ 1,\ 0,\ 0,\ 1,\ 1)$  causes all three parity equations to fail.  This error can also be corrected unambiguously,  since  $y_2$  is the only bit that occurs in all three equations.
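The single-error correction walked through in this example can be sketched as a small table-based decoder, assuming at most one falsified bit per code word (the function name `decode_hc743` is an illustrative choice):

```python
def decode_hc743(y):
    """Correct at most one bit error in a received (7, 4, 3) Hamming word
    by evaluating the three parity equations (I), (II), (III)."""
    y = list(y)
    s = (y[0] ^ y[1] ^ y[2] ^ y[4],   # (I):   y1 + y2 + y3 + y5
         y[1] ^ y[2] ^ y[3] ^ y[5],   # (II):  y2 + y3 + y4 + y6
         y[0] ^ y[1] ^ y[3] ^ y[6])   # (III): y1 + y2 + y4 + y7
    # Each single-bit error yields a unique pattern of failed equations:
    error_pos = {(1, 0, 1): 0, (1, 1, 1): 1, (1, 1, 0): 2, (0, 1, 1): 3,
                 (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}
    if s != (0, 0, 0):
        y[error_pos[s]] ^= 1          # flip the identified bit
    return tuple(y[:4])               # decoding result v

# The four received words from Example 7 all decode to u = (1, 0, 1, 0):
for y in [(1, 0, 1, 0, 0, 1, 1), (1, 0, 1, 0, 0, 1, 0),
          (1, 0, 1, 1, 0, 1, 1), (1, 1, 1, 0, 0, 1, 1)]:
    assert decode_hc743(y) == (1, 0, 1, 0)
```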


Exercises for the chapter


Exercise 1.5: SPC (5, 4) and BEC Model

Exercise 1.5Z: SPC (5, 4) vs. RC (5, 1)

Exercise 1.6: (7, 4) Hamming Code