Exercise 1.15: Distance Spectra of HC (7, 4, 3) and HC (8, 4, 4)

{{quiz-Header|Buchseite=Channel_Coding/Limits_for_Block_Error_Probability}}
  
[[File:EN_KC_A_1_9_neu.png|right|frame|Code tables of the  $(7, 4, 3)$ Hamming code and the  $(8, 4, 4)$ extension]]
  
As in  [[Aufgaben:Exercise_1.09:_Extended_Hamming_Code|"Exercise 1.9"]],  we consider
*the  $(7, 4, 3)$  Hamming code and
*the extended  $(8, 4, 4)$  Hamming code.
  
The graphic shows the corresponding code tables.  In  [[Aufgaben:Exercise_1.12:_Hard_Decision_vs._Soft_Decision|"Exercise 1.12"]],  the syndrome decoding of these two codes has already been covered.  In this exercise,  the differences with respect to the distance spectrum  $\{W_{i}\}$  are now to be worked out.  The index variable runs over  $i = 0, \ \text{...} \ , n$:
*The integer  $W_{i}$  specifies the number of code words  $\underline{x}$  with the  [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|"Hamming weight"]]  $\underline{w}_{\rm H}( \underline{x} ) = i$.
*For the linear codes considered here,  $W_{i}$  simultaneously describes the number of code words with the  [[Channel_Coding/Limits_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|"Hamming distance"]]  $i$  from the all-zero word.
*One often assigns to the set of numbers  $\{W_i\}$  a pseudo-function called the  [[Channel_Coding/Limits_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|"weight enumerator function"]]  (WEF):
 
:$$\left \{ \hspace{0.05cm} W_i \hspace{0.05cm} \right \} \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm} W(X) = \sum_{i=0 }^{n} W_i \cdot X^{i} = W_0 + W_1 \cdot X + W_2 \cdot X^{2} + ... \hspace{0.05cm} + W_n \cdot X^{n}\hspace{0.05cm}.$$
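For illustration,  here is a minimal Python sketch of this correspondence  (not part of the original exercise;  as a toy example it uses the  $(3, 1, 3)$  repetition code,  whose only code words are  $000$  and  $111$):

<pre>
# Toy illustration of the correspondence {W_i} <-> W(X),
# here for the (3, 1, 3) repetition code with code words 000 and 111.
W = [1, 0, 0, 1]                       # W_0 = 1, W_1 = W_2 = 0, W_3 = 1

def wef(W, X):
    """Evaluate the weight enumerator function W(X) = sum_i W_i * X^i."""
    return sum(Wi * X**i for i, Wi in enumerate(W))

print(wef(W, 0.5))                     # -> 1.125  =  1 + 0.5**3
</pre>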
 
Bhattacharyya has used the pseudo-function  $W(X)$  to specify a channel-independent  (upper)  bound on the block error probability:
 
   
 
   
:$${\rm Pr(block\:error)} \le{\rm Pr(Bhattacharyya)} = W(\beta) -1 \hspace{0.05cm}.$$
The so-called  "Bhattacharyya parameter"  is given as follows:
  
:$$\beta = \left\{ \begin{array}{c} \lambda \\ \\ 2 \cdot \sqrt{\varepsilon \cdot (1- \varepsilon)}\\ \\ {\rm e}^{- R \hspace{0.05cm}\cdot \hspace{0.05cm}E_{\rm B}/N_0} \end{array} \right.\quad \begin{array}{*{1}c} {\rm for\hspace{0.15cm} the \hspace{0.15cm}BEC\:model},\\ \\ {\rm for\hspace{0.15cm} the \hspace{0.15cm}BSC\:model}, \\ \\{\rm for\hspace{0.15cm} the \hspace{0.15cm}AWGN\:model}. \end{array}$$
  
It should be noted that the  "Bhattacharyya Bound"  is generally very pessimistic.  The actual block error probability is often significantly lower.
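The following lines sketch how the bound  $W(\beta) - 1$  and the three channel-dependent values of  $\beta$  can be evaluated numerically  (a minimal sketch added for illustration;  the function names are chosen freely and are not part of the original text):

<pre>
import math

def beta_bec(lam):                       # BEC model:  beta = lambda (erasure probability)
    return lam

def beta_bsc(eps):                       # BSC model:  beta = 2*sqrt(eps*(1 - eps))
    return 2 * math.sqrt(eps * (1 - eps))

def beta_awgn(R, ebn0_db):               # AWGN model:  beta = exp(-R * E_B/N_0)
    return math.exp(-R * 10 ** (ebn0_db / 10))

def bhattacharyya_bound(W, beta):
    """Upper bound  Pr(Bhattacharyya) = W(beta) - 1  for a weight distribution W = [W_0, ..., W_n]."""
    return sum(Wi * beta ** i for i, Wi in enumerate(W)) - 1
</pre>

For example,  <code>bhattacharyya_bound(W, beta_bsc(0.01))</code>  evaluates the BSC bound for a given weight distribution  <code>W</code>,  as needed in the questions below.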
  
Hints:
*This exercise refers to the chapter  [[Channel_Coding/Limits_for_Block_Error_Probability|"Bounds for Block Error Probability"]].
  
*A similar topic is covered in  [[Aufgaben:Exercise_1.14:_Bhattacharyya_Bound_for_BEC|"Exercise 1.14"]]  and in  [[Aufgaben:Exercise_1.16:_Block_Error_Probability_Bounds_for_AWGN|"Exercise 1.16"]].
 
   
 
   
* The channels to be considered are:
** the  [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|"BSC model"]]  ("Binary Symmetric Channel"),
** the  [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|"BEC model"]]  ("Binary Erasure Channel"),
** the  [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_Binary_Input|"AWGN channel model"]].
===Questions===
 
<quiz display=simple>
{Specify the distance spectrum of the&nbsp; $(7, 4, 3)$&nbsp; Hamming code.
|type="{}"}
$W_{0} \ = \ ${ 1 }
$W_{3} \ = \ ${ 7 }
$W_{4} \ = \ ${ 7 }
$W_{7} \ = \ ${ 1 }

{What is the Bhattacharyya bound for the&nbsp; $(7, 4, 3)$&nbsp; Hamming code and the BSC model with&nbsp; $\varepsilon = 0.01$?
|type="{}"}
${\rm Pr(Bhattacharyya)} \ = \ $ { 6.6 3% } $\ \%$

{Given the same channel,&nbsp; what is the bound for the extended&nbsp; $(8, 4, 4)$&nbsp; Hamming code?
|type="{}"}
${\rm Pr(Bhattacharyya)} \ = \ ${ 2.2 3% } $\ \%$

{For which BEC parameter&nbsp; $\lambda$&nbsp; does one obtain exactly the same bounds?
|type="{}"}
$\lambda \ = \ $ { 0.199 3% }

{We continue to consider the extended&nbsp; $(8, 4, 4)$&nbsp; Hamming code,&nbsp; but now the AWGN model.
<br>Determine&nbsp; $E_{\rm B} / N_{0}$&nbsp; (in dB)&nbsp; such that the same Bhattacharyya bound results.
|type="{}"}
$10 · \lg {E_{\rm B}/N_0} \ = \ $ { 5 3% }$ \ \rm dB$

{Now determine the AWGN parameter&nbsp; $(10 · \lg {E_{\rm B}/N_0})$&nbsp; for the&nbsp; $(7, 4, 3)$&nbsp; Hamming code.
|type="{}"}
$10 · \lg {E_{\rm B}/N_0} \ = \ $ { 4.417 3% }$ \ \rm dB$
</quiz>
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; By analyzing all code words of the&nbsp; $(7, 4, 3)$&nbsp; Hamming code,&nbsp; one recognizes that

*$W_{0} \ \underline{ = \ 1}$ &nbsp; &rArr; &nbsp; one code word contains no&nbsp; "one"&nbsp; (the all-zero word),
*$W_{3} \ \underline{ = \ 7}$ &nbsp; &rArr; &nbsp; seven code words contain three&nbsp; "ones",
*$W_{4} \ \underline{ = \ 7}$ &nbsp; &rArr; &nbsp; seven code words contain four&nbsp; "ones",
*$W_{7} \ \underline{ = \ 1}$ &nbsp; &rArr; &nbsp; one code word consists only of&nbsp; "ones".
  
  
$W_{i}$&nbsp; simultaneously specifies the number of code words that differ from the all-zero word in&nbsp; $i$&nbsp; bits.
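This count can also be reproduced programmatically.  The following sketch  (not part of the original solution)  enumerates all  $2^4 = 16$  code words of a systematic  $(7, 4, 3)$  Hamming code;  the generator matrix used here is one common choice and may differ from the code table in the graphic,  but every  $(7, 4, 3)$  Hamming code has the same distance spectrum:

<pre>
from itertools import product

# One systematic generator matrix of a (7, 4, 3) Hamming code (assumed form)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

W = [0] * 8                                   # W[i] counts code words of Hamming weight i
for u in product((0, 1), repeat=4):           # all 16 information words
    x = [sum(u[k] * G[k][j] for k in range(4)) % 2 for j in range(7)]
    W[sum(x)] += 1

print(W)                                      # -> [1, 0, 0, 7, 7, 0, 0, 1]
</pre>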
  
  
  
'''(2)'''&nbsp; The Bhattacharyya bound reads:
 
   
 
   
:$${\rm Pr(block\:error)} \le{\rm Pr(Bhattacharyya)} = W(\beta) -1 \hspace{0.05cm}.$$
  
*The weight enumerator function is determined by subtask&nbsp; '''(1)''':
 
    
 
    
 
:$$W(X) = 1+ 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr(Bhattacharyya)} = 7 \cdot \beta^{3} + 7 \cdot \beta^{4} + \beta^{7} \hspace{0.05cm}.$$
  
*For the Bhattacharyya parameter of the BSC model,&nbsp; the following holds:
  
 
:$$\beta = 2 \cdot \sqrt{\varepsilon \cdot (1- \varepsilon)} = 2 \cdot \sqrt{0.01 \cdot 0.99} = 0.199\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr(Bhattacharyya)} = 7 \cdot 0.199^{3} + 7 \cdot 0.199^{4} + 0.199^{7} \hspace{0.15cm} \underline{ \approx 6.6\%} \hspace{0.05cm}.$$
  
*A comparison with the actual block error probability as calculated in&nbsp; [[Aufgaben:Exercise_1.12:_Hard_Decision_vs._Soft_Decision|"Exercise 1.12"]],
  
:$${\rm Pr(block\:error)} \approx 21 \cdot \varepsilon^2 = 2.1 \cdot 10^{-3} \hspace{0.05cm},$$
  
:shows that Bhattacharyya provides only a rough bound.&nbsp; In the present case,&nbsp; this bound is more than a factor of&nbsp; $30$&nbsp; higher than the actual value.
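A short numerical check of these numbers  (a sketch under the stated values,  not part of the original solution):

<pre>
import math

eps   = 0.01
beta  = 2 * math.sqrt(eps * (1 - eps))           # Bhattacharyya parameter of the BSC
bound = 7 * beta**3 + 7 * beta**4 + beta**7      # W(beta) - 1 for the (7, 4, 3) Hamming code

print(round(beta, 3), round(bound, 4))           # -> 0.199  0.0662   (about 6.6 %)
print(round(bound / (21 * eps**2), 1))           # -> about 31.5  =>  "more than a factor of 30"
</pre>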
  
  
  
'''(3)'''&nbsp; From the code table of the extended&nbsp; $(8, 4, 4)$&nbsp; Hamming code,&nbsp; the following results are obtained:
  
 
:$$W(X) = 1+ 14 \cdot X^{4} + X^{8}\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr(Bhattacharyya)} = 14 \cdot \beta^{4} + \beta^{8} = 14 \cdot 0.199^{4} + 0.199^{8} \hspace{0.15cm} \underline{ \approx 2.2\%} \hspace{0.05cm}.$$
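The spectrum  $1 + 14 \cdot X^{4} + X^{8}$  can also be obtained by appending an overall parity bit to each  $(7, 4, 3)$  code word.  A small sketch  (same assumed generator matrix as in the sketch to subtask&nbsp; '''(1)''';  not part of the original solution):

<pre>
from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],        # assumed systematic generator matrix of the (7, 4, 3) code
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

W = [0] * 9                        # weight counts of the extended (8, 4, 4) code
for u in product((0, 1), repeat=4):
    x = [sum(u[k] * G[k][j] for k in range(4)) % 2 for j in range(7)]
    x.append(sum(x) % 2)           # overall parity bit  ->  all code word weights become even
    W[sum(x)] += 1

print(W)                           # -> [1, 0, 0, 0, 14, 0, 0, 0, 1]
</pre>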
 
    
 
    
  
'''(4)'''&nbsp; The equation for the Bhattacharyya parameter is:
  
:$$\beta = \left\{ \begin{array}{c} \lambda \\ \\ 2 \cdot \sqrt{ \varepsilon \cdot (1- \varepsilon)}\\ \\ {\rm e}^{- R \cdot E_{\rm B}/N_0} \end{array} \right.\quad \begin{array}{*{1}c} {\rm for\hspace{0.15cm} the \hspace{0.15cm}BEC model},\\ \\ {\rm for\hspace{0.15cm} the \hspace{0.15cm}BSC model}, \\ \\{\rm for\hspace{0.15cm} the \hspace{0.15cm}AWGN model}. \end{array}$$
 
With the BEC model,&nbsp; exactly the same bound is obtained when the erasure probability is&nbsp; $\lambda = \beta \ \underline{= 0.199}$.
  
  
  
'''(5)'''&nbsp; According to the above equation,&nbsp; the following must hold:
 
   
 
   
 
:$$\beta = {\rm e}^{- R \hspace{0.05cm}\cdot \hspace{0.05cm} E_{\rm B}/N_0} = 0.199 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} R \cdot E_{\rm B}/N_0 = -{\rm ln}\hspace{0.05cm} (0.199) \approx 1.61 \hspace{0.05cm}.$$
  
*The code rate of the extended&nbsp; $(8, 4, 4)$&nbsp; Hamming code is&nbsp; $R = 0.5$:
  
 
:$$E_{\rm B}/N_0 \approx 3.22 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} 10 \cdot {\rm lg} \hspace{0.1cm} E_{\rm B}/N_0 \hspace{0.15cm} \underline{\approx 5\,{\rm dB}} \hspace{0.05cm}.$$
  
  
'''(6)'''&nbsp; Using the code rate&nbsp; $R = 4/7$&nbsp; of the&nbsp; $(7, 4, 3)$&nbsp; Hamming code,&nbsp; we obtain:
  
 
:$$E_{\rm B}/N_0 = 7/4 \cdot 1.61 \approx 2.82 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} 10 \cdot {\rm lg} \hspace{0.1cm} E_{\rm B}/N_0 \hspace{0.15cm} \underline{\approx 4.5\,{\rm dB}} \hspace{0.05cm}.$$
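Both dB values can be checked with a few lines  (a sketch,  not part of the original solution):

<pre>
import math

beta = 0.199
for R in (0.5, 4/7):                              # code rates of the (8, 4, 4) and (7, 4, 3) codes
    ebn0 = -math.log(beta) / R                    # from  beta = exp(-R * E_B/N_0)
    print(R, round(10 * math.log10(ebn0), 2))     # -> about 5.1 dB  and  4.5 dB
</pre>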
  
  
[[Category:Channel Coding: Exercises|^1.6 Error Probability Bounds^]]
