Exercise 1.11: Syndrome Decoding

{{quiz-Header|Buchseite=Channel_Coding/Decoding_of_Linear_Block_Codes}}
  
[[File:EN_KC_A_1_11.png|right|frame|Syndrome and coset leaders <br>$($incomplete list$)$&nbsp; of &nbsp;$\rm HC \ (7, \, 4, \, 3)$ ]]
  
To decode a &nbsp;$(7, \, 4, \, 3)$&nbsp; Hamming code,&nbsp; which is defined by its parity-check matrix
  
:$${ \boldsymbol{\rm H}} = \begin{pmatrix} 1 &1 &0 &1 &1 &0 &0\\ 0 &1 &1 &1 &0 &1 &0\\ 1 &0 &1 &1 &0 &0 &1 \end{pmatrix} \hspace{0.05cm},$$
  
"syndrome decoding"&nbsp; is also suitable.&nbsp; Since all Hamming codes are perfect,&nbsp; the result is as good as with the&nbsp; $($in the general case$)$&nbsp; more complicated maximum likelihood decoding.
  
For&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|"syndrome decoding"]]&nbsp; one proceeds as follows&nbsp; $($a short code sketch follows this list$)$:
*From the received vector&nbsp; $\underline{y}$&nbsp; one forms the syndrome&nbsp; $($where&nbsp; $m = n - k)$:
 
:$$\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} \in {\rm GF}(2^m) \hspace{0.05cm}.$$
 
  
*In the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|"BSC channel"]]&nbsp; the received word&nbsp; $\underline{y} = \underline{x} \, ({\rm code\:word}) + \underline{e} \, ({\rm error\:vector})$&nbsp; is also an element of&nbsp; ${\rm GF}(2^n)$,&nbsp; and because of&nbsp; $\underline{x} \cdot \boldsymbol{\rm H}^{\rm T} = \underline{0}$&nbsp; it holds equally:
 
:$$\underline{s} = \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}.$$
 
  
*Many error patterns&nbsp; $\underline{e}$&nbsp; lead to the same syndrome&nbsp; $\underline{s}$.&nbsp; One now groups the error patterns with the same syndrome&nbsp; $\underline{s}_{\mu}$&nbsp; into the coset&nbsp; ${\it \Psi}_{\mu}$.
  
*The coset leader&nbsp; $\underline{e}_{\mu}$&nbsp; is the error vector that has the lowest Hamming weight within the class&nbsp; ${\it \Psi}_{\mu}$&nbsp; and is accordingly the most probable one.&nbsp; If there are several such vectors,&nbsp; one of them is chosen arbitrarily.
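The following Python sketch&nbsp; $($a minimal illustration added here,&nbsp; not part of the original exercise$)$&nbsp; carries out exactly these steps for the given code:&nbsp; it computes the syndrome over&nbsp; ${\rm GF}(2)$&nbsp; and corrects the bit indicated by the coset leader.&nbsp; The leader table is restricted to the all-zero pattern and the seven single-bit patterns:

<syntaxhighlight lang="python">
import numpy as np

# Parity-check matrix of the (7, 4, 3) Hamming code, as given above
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

# Coset leaders: the syndrome of a single-bit error in position i
# equals the i-th column of H
leaders = {(0, 0, 0): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    leaders[tuple(H[:, i])] = e

def decode(y):
    """Syndrome decoding: s = y . H^T over GF(2), then add the coset leader."""
    s = tuple(y @ H.T % 2)
    return (y + leaders[s]) % 2      # addition = subtraction in GF(2)

y = np.array([1, 0, 0, 0, 0, 0, 0])  # all-zero code word with the first bit flipped
print(decode(y))                     # -> [0 0 0 0 0 0 0]
</syntaxhighlight>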
  
  
The above graph shows the incomplete list of coset leaders&nbsp; $\underline{e}_{\mu}$&nbsp; for each&nbsp; $\underline{s}_{\mu}$.  
The most likely error vectors
*$\underline{e}_{3}$&nbsp; with syndrome&nbsp; $\underline{s}_{3} = (0, 1, 1)$,
*$\underline{e}_{5}$&nbsp; with syndrome&nbsp; $\underline{s}_{5} = (1, 0, 1)$,
*$\underline{e}_{6}$&nbsp; with syndrome&nbsp; $\underline{s}_{6} = (1, 1, 0)$,
*$\underline{e}_{7}$&nbsp; with syndrome&nbsp; $\underline{s}_{7} = (1, 1, 1)$
  
  
are to be determined in the subtasks&nbsp; '''(4)'''&nbsp; and&nbsp; '''(5)'''.
  
  
Hints:
* This exercise belongs to the chapter&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes|"Decoding of Linear Block Codes"]].
* The exercise is based on a Hamming code with the parameters&nbsp; $n = 7$&nbsp; and&nbsp; $k = 4$ &nbsp; &rArr; &nbsp; $m = 3$.&nbsp; All code words have the following format&nbsp; $($a small encoder sketch follows these hints$)$:
 
:$$\underline{x} = ( x_1, x_2, x_3, x_4, x_5, x_6, x_7) = ( u_1, u_2, u_3, u_4, p_1, p_2, p_{3}) \hspace{0.05cm}.$$
 
*The parity-check equations are illustrated on the information sheet for&nbsp; [[Aufgaben:Exercise_1.11Z:_Syndrome_Decoding_again|"Exercise 1.11Z"]],&nbsp; which considers the same constellation as the present exercise.
*In the last subtask&nbsp; '''(6)''',&nbsp; use the BSC parameter&nbsp; $ \varepsilon = 0.1$.
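Since the given matrix has the systematic form&nbsp; $\boldsymbol{\rm H} = [\boldsymbol{\rm P}^{\rm T} \; \boldsymbol{\rm I}_3]$,&nbsp; the matching generator matrix is&nbsp; $\boldsymbol{\rm G} = [\boldsymbol{\rm I}_4 \; \boldsymbol{\rm P}]$.&nbsp; The following sketch&nbsp; $($added for illustration;&nbsp; the information word is chosen arbitrarily$)$&nbsp; reproduces the code word format stated above:

<syntaxhighlight lang="python">
import numpy as np

# H = [P^T | I_3]  =>  systematic generator matrix G = [I_4 | P]
P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

u = np.array([1, 0, 1, 1])   # arbitrary information word (u1, u2, u3, u4)
x = u @ G % 2                # code word (u1, u2, u3, u4, p1, p2, p3)
print(x)                     # -> [1 0 1 1 0 0 1], and x . H^T = (0, 0, 0)
</syntaxhighlight>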
 
   
 
   
  
  
  
===Questions===
 
<quiz display=simple>
 
{How many received words&nbsp; $(N_{0})$&nbsp; lead to the syndrome&nbsp; $\underline{s} = \underline{s}_{0} = (0, 0, 0)$?
 
|type="{}"}
 
 
$N_{0} \ = \ $ { 16 3% }
 
  
{How many received words&nbsp; $(N_{7})$&nbsp; lead to the syndrome&nbsp; $\underline{s} = \underline{s}_{7} = (1, 1, 1)$?
 
|type="{}"}
 
 
$N_{7} \ = \ $ { 16 3% }
 
  
{What are the characteristics of all coset leaders&nbsp; $\underline{e}_{\mu}$?
 
|type="[]"}
 
- The last three bits of&nbsp; $\underline{e}_{\mu}$&nbsp; are identical to&nbsp; $\underline{s}_{\mu}$.
- All&nbsp; $\underline{e}_{\mu}$&nbsp; contain a single&nbsp; "$1$"&nbsp; each.
+ All&nbsp; $\underline{e}_{\mu}$&nbsp; include at most one&nbsp; "$1$".
  
{What syndrome&nbsp; $\underline{s}_{\mu}$&nbsp; does the error vector&nbsp; $(1, 0, 0, 0, 0, 0, 0)$&nbsp; lead to?
 
|type="{}"}
 
 
$\underline{e} = (1, 0, 0, 0, 0, 0, 0) \text{:} \hspace{0.4cm} {\rm Index} \ \mu \ = \ $ { 5 }
 
  
{Calculate the syndrome&nbsp; $\underline{s}_{\mu}$&nbsp; $($Input: &nbsp;Index $\mu)$&nbsp; for
 
|type="{}"}
 
 
$\ \underline{e} = (0, 1, 0, 0, 0, 0, 0) \text{:} \hspace{0.4cm} {\rm Index} \ \mu \ = \ $ { 6 }
 
$\ \underline{e} = (0, 0, 1, 0, 0, 0, 0) \text{:} \hspace{0.4cm} {\rm Index} \ \mu \ = \ $ { 3 }

$\ \underline{e} = (0, 0, 0, 1, 0, 0, 0) \text{:} \hspace{0.4cm} {\rm Index} \ \mu \ = \ $ { 7 }
  
  
{What is the block error probability for the BSC model with falsification probability&nbsp; $\varepsilon = 0.1$?
 
|type="{}"}
 
${\rm Pr(block\:error)} \ = \ $ { 15 3% } $\ \%$
 
</quiz>
 
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; There are a total of&nbsp; $2^7 = 128$&nbsp; different code words&nbsp; $\underline{x}$&nbsp; and,&nbsp; according to the BSC model,&nbsp; also&nbsp; $2^7$&nbsp; different received words&nbsp; $\underline{y}$&nbsp; and just as many error vectors&nbsp; $\underline{e}$.
  
 
*With&nbsp; $m = 3$&nbsp; parity bits,&nbsp; there are&nbsp; $2^3 = 8$&nbsp; distinct values for the syndrome,
 
  
 
:$$\underline{s} \hspace{0.05cm} \in \hspace{0.05cm} \{ \underline{s}_0, \underline{s}_1,\hspace{0.05cm} \text{...} \hspace{0.05cm} , \underline{s}_7\} = \{ \underline{s}_{\mu} \}\hspace{0.05cm}, \hspace{0.2cm} \mu = 0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , 7 \hspace{0.05cm},$$
 
  
:and just as many&nbsp; "cosets".
  
*Since the Hamming code is perfect,&nbsp; every error vector belongs to one of the eight cosets&nbsp; ${\it \Psi}_{\mu}$,&nbsp; and moreover all cosets contain the same number of vectors&nbsp; $($"Why should it be different?"&nbsp; Is that enough proof for you?$)$.&nbsp; Hence&nbsp; $($checked numerically in the sketch after this list$)$:
  
 
:$$ N_0 = \frac{2^n}{2^m} = 2^k \hspace{0.15cm} \underline{= 16} \hspace{0.05cm}.$$
 
  
*The coset&nbsp; ${\it \Psi}_{0}$&nbsp; contains,&nbsp; for example&nbsp; $($see the sample solution to&nbsp; [[Aufgaben:Exercise_1.11Z:_Syndrome_Decoding_again|"Exercise 1.11Z"]]$)$,&nbsp; the vectors
:$$\underline{e} = (1, 1, 0, 0, 0, 1, 1),$$
:$$\underline{e} = (1, 1, 1, 1, 1, 1, 1).$$
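This counting argument can be verified by brute force.&nbsp; The following sketch&nbsp; $($added here for illustration,&nbsp; not part of the original solution$)$&nbsp; enumerates all&nbsp; $2^7 = 128$&nbsp; vectors and confirms that each of the eight syndromes occurs exactly&nbsp; $16$&nbsp; times:

<syntaxhighlight lang="python">
from itertools import product
from collections import Counter
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

# Group all 2^7 = 128 vectors of GF(2^7) by their syndrome
counts = Counter(tuple(np.array(e) @ H.T % 2) for e in product((0, 1), repeat=7))
print(sorted(counts.items()))   # every one of the 8 syndromes occurs 16 times
</syntaxhighlight>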
'''(2)'''&nbsp; According to the reasoning for the last subtask,&nbsp; the same applies here:&nbsp; $N_{7} \ \underline{= 16}$.
  
 
  
  
'''(3)'''&nbsp; Correct is&nbsp; <u>answer 3</u>:  
*The coset leader&nbsp; $\underline{e}_{\mu}$&nbsp; is the error vector&nbsp; $\underline{e}$&nbsp; with the lowest [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|"Hamming weight"]]&nbsp; $w_{\rm H}(\underline{e})$&nbsp; that leads to the syndrome&nbsp; $\underline{s}_{\mu}$.
*The&nbsp; $(7, \, 4, \, 3)$&nbsp; Hamming code considered here is perfect.&nbsp; That is:&nbsp; All eight coset leaders therefore contain
:*either no&nbsp; "one"&nbsp; $(\underline{e}_{0}$ &nbsp; ⇒ &nbsp; no correction is required$)$,&nbsp; or
:*exactly one single&nbsp; "one"&nbsp; $(\underline{e}_{1}, \hspace{0.05cm} \text{...} \hspace{0.05cm} , \underline{e}_{7}$ &nbsp; ⇒ &nbsp; one information or parity bit must be corrected$)$.
  
  
'''(4)'''&nbsp;  It holds&nbsp; $\underline{s} = \underline{e} · \boldsymbol{\rm H}^{\rm T}$:
 
:$$\underline{s} = \begin{pmatrix} 1 &0 &0 &0 &0 &0 &0 \end{pmatrix} \cdot \begin{pmatrix} 1 &0 &1\\ 1 &1 &0\\ 0 &1 &1\\ 1 &1 &1\\ 1 &0 &0\\ 0 &1 &0\\ 0 &0 &1 \end{pmatrix} = \begin{pmatrix} 1 &0 &1 \end{pmatrix} = \underline{s}_5 \hspace{0.3cm} \Rightarrow\hspace{0.3cm} \hspace{0.15cm} \underline{ \mu= 5} \hspace{0.05cm}.$$
 
  
 
   
 
   
  [[File:EN_KC_A_1_11d_v2.png|right|frame|All coset leaders of the&nbsp; $\rm HC (7, \, 4, \, 3)$ ]]  
'''(5)'''&nbsp; A comparison with the solution to the last subtask shows that&nbsp; $(0, 1, 0, 0, 0, 0, 0) \cdot \boldsymbol{\rm H}^{\rm T}$&nbsp; as a syndrome gives the second row of the transposed matrix:
  
 
:$$\begin{pmatrix} 0 &1 &0 &0 &0 \hspace{0.2cm}\text{... }\end{pmatrix} \cdot { \boldsymbol{\rm H}}^{\rm T} = \begin{pmatrix} 1 &1 &0 \end{pmatrix} = \underline{s}_6$$
 
  
:$$\Rightarrow\hspace{0.45cm} \underline{ \mu= 6} \hspace{0.05cm}, \hspace{0.5cm} \underline{e}_6 = \begin{pmatrix} 0 &1 &0 &0 &0 &0 &0 \end{pmatrix}\hspace{0.05cm}.$$
  
 
:$$\begin{pmatrix} 0 &0 &1 &0 &0 \hspace{0.2cm}\text{... }\end{pmatrix} \cdot { \boldsymbol{\rm H}}^{\rm T} = \begin{pmatrix} 0 &1 &1 \end{pmatrix} = \underline{s}_3$$
 
  
:$$\Rightarrow\hspace{0.45cm} \underline{ \mu= 3} \hspace{0.05cm}, \hspace{0.5cm} \underline{e}_3 = \begin{pmatrix} 0 &0 &1 &0 &0 &0 &0 \end{pmatrix}\hspace{0.05cm}.$$
  
 
:$$\begin{pmatrix} 0 &0 &0 &1 &0 \hspace{0.2cm}\text{... } \end{pmatrix} \cdot { \boldsymbol{\rm H}}^{\rm T} = \begin{pmatrix} 1 &1 &1 \end{pmatrix} = \underline{s}_7$$
 
  
:$$\Rightarrow\hspace{0.45cm} \underline{ \mu= 7} \hspace{0.05cm}, \hspace{0.5cm} \underline{e}_7 = \begin{pmatrix} 0 &0 &0 &1 &0 &0 &0 \end{pmatrix}\hspace{0.05cm}.$$
The graph on the right summarizes once again the results of the subtasks&nbsp; '''(4)'''&nbsp; and&nbsp; '''(5)'''.
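The complete table of coset leaders can also be generated programmatically.&nbsp; A small sketch&nbsp; $($added for illustration$)$&nbsp; scans the error patterns in order of increasing Hamming weight and keeps the first pattern found for each syndrome;&nbsp; since the code is perfect,&nbsp; every leader it finds has weight&nbsp; $0$&nbsp; or&nbsp; $1$:

<syntaxhighlight lang="python">
from itertools import product
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

# Scan the error patterns by increasing Hamming weight; the first pattern
# that produces a given syndrome becomes the coset leader for it
leaders = {}
for e in sorted(product((0, 1), repeat=7), key=sum):
    s = tuple(np.array(e) @ H.T % 2)
    leaders.setdefault(s, e)

for s, e in sorted(leaders.items()):
    print(s, e)   # eight leaders, each containing at most a single "1"
</syntaxhighlight>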
  
 
  
  
'''(6)'''&nbsp; For the&nbsp; $(7, \, 4, \, 3)$&nbsp; Hamming code under consideration,&nbsp; the decoder decides on the correct information word whenever at most one bit within the code word is falsified during transmission.&nbsp; It follows&nbsp; $($a numerical check closes this solution$)$:
  
:$${ \rm Pr(block\:error)} = { \rm Pr(\ge 2\hspace{0.15cm}bit\hspace{0.15cm}errors)} =1 - { \rm Pr(0\hspace{0.15cm}bit\hspace{0.15cm}errors)} - { \rm Pr(1\hspace{0.15cm}bit\hspace{0.15cm}error)} =1 - 0.9^7 - 7 \cdot 0.1 \cdot 0.9^6 \hspace{0.15cm} \underline{ = 0.15} \hspace{0.05cm}.$$
  
*For uncoded transmission of a block with&nbsp; $n = k = 4$&nbsp; bits,&nbsp; the same BSC channel would give:
  
:$${ \rm Pr(block\:error)}= 1 - 0.9^4 \approx 0.344 \hspace{0.05cm}.$$
  
*The comparison is not entirely fair,&nbsp; however,&nbsp; since with&nbsp; $n = 4$&nbsp; a smaller falsification probability&nbsp; $\varepsilon$&nbsp; would have to be applied than with&nbsp; $n = 7$&nbsp; $($smaller symbol rate &nbsp; ⇒ &nbsp; smaller bandwidth &nbsp; ⇒ &nbsp; smaller noise power$)$.
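Both probabilities can be reproduced with a few lines using the binomial distribution&nbsp; $($a check added here for illustration$)$:

<syntaxhighlight lang="python">
from math import comb

eps = 0.1   # BSC falsification probability

def p_block(n, t):
    """Probability of more than t bit errors in n uses of the BSC."""
    return 1 - sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))

print(round(p_block(7, 1), 4))   # coded (7, 4, 3):    0.1497, i.e. about 0.15
print(round(p_block(4, 0), 4))   # uncoded, n = k = 4: 0.3439, i.e. about 0.344
</syntaxhighlight>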
 
{{ML-Fuß}}
 
  
  
  
[[Category:Channel Coding: Exercises|^1.5 Linear Block Code Decoding^]]
