{{Header
|Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=Decoding of Linear Block Codes
|Nächste Seite=Information Theoretical Limits of Channel Coding
}}
  
== Distance spectrum of a linear code ==
 
<br>
 
We further assume a linear and binary&nbsp; $(n, \hspace{0.05cm} k)$&nbsp; block code&nbsp; $\mathcal{C}$.&nbsp; A major goal of code design is to keep the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Block_diagram_and_requirements|$\text{block error probability}$]]&nbsp; ${\rm Pr}(\underline{u} \ne \underline{v}) = {\rm Pr}(\underline{z} \ne \underline{x})$&nbsp; as low as possible.&nbsp; This is achieved,&nbsp; among other things,&nbsp; by ensuring that
  
*the minimum distance&nbsp; $d_{\rm min}$&nbsp; between two code words&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.05cm}'$&nbsp; is as large as possible,&nbsp; so that one can correct up to&nbsp; $t = \lfloor (d_{\rm min}-1)/2 \rfloor$&nbsp; bit errors;<br>
  
*at the same time this minimum distance&nbsp; $d_{\rm min}$&nbsp; occurs as rarely as possible when all permissible code words are considered.<br><br>
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp;  We denote by&nbsp; $W_i(\underline{x})$&nbsp; the&nbsp; &raquo;'''number'''&laquo;&nbsp; of code words&nbsp; $\underline{x}\hspace{0.05cm}' \in \mathcal{C}$&nbsp; at Hamming distance&nbsp; $i$&nbsp; from the considered code word&nbsp; $\underline{x}$&nbsp; of the same code&nbsp; $\mathcal{C}$,&nbsp; where:
  
 
::<math>W_i(\underline{x}) = \big \vert \hspace{0.05cm} \left \{  \underline{x}\hspace{0.05cm}' \in \hspace{0.05cm} \mathcal{C} \hspace{0.1cm}\vert\hspace{0.1cm} d_{\rm H}(\underline{x}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}\hspace{0.05cm}') = i  \right \} \hspace{0.05cm} \big \vert\hspace{0.05cm}.</math>
  
*The vertical bars denote the number of code words&nbsp; $\underline{x}\hspace{0.05cm}'$&nbsp; that satisfy the condition&nbsp; $d_{\rm H}(\underline{x}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}\hspace{0.05cm}') = i$.

*This value is also called the&nbsp; "multiplicity".}}
  
  
 
 
{{GraueBox|TEXT=   
$\text{Example 1:}$&nbsp;  We consider the&nbsp; $(5, \, 2)$&nbsp; block code&nbsp; $\mathcal{C}$&nbsp; with generator matrix
[[File:P ID2365 KC T 1 6 S1 neu.png|right|frame|Hamming distances between all code words|class=fit]]
 
:<math>{ \boldsymbol{\rm G} } 
=  \begin{pmatrix}
1 &0 &1 &1 &0 \\
0 &1 &0 &1 &1
\end{pmatrix} \hspace{0.05cm}.</math>
  
The table shows the Hamming distances
*between all code words&nbsp; $\underline{x}_i$

*and the reference words&nbsp; $\underline{x}_0$, ... , $\underline{x}_3$.
  
  
One recognizes: &nbsp; Independently of the reference word&nbsp; $\underline{x}_i$,&nbsp; the following holds:
  
 
::<math>W_0 = 1 \hspace{0.05cm}, \hspace{0.5cm}W_1 = W_2 = 0 \hspace{0.05cm},\hspace{0.5cm}
W_3 = 2 \hspace{0.05cm},\hspace{0.5cm} W_4 = 1
\hspace{0.3cm}\Rightarrow\hspace{0.3cm} d_{\rm min} = 3\hspace{0.05cm}.</math>}}
<br clear=all>
  
Not only in this example,&nbsp; but in any linear code,&nbsp; the same multiplicities&nbsp; $W_i$&nbsp; result for each code word.&nbsp; Furthermore,&nbsp; since the all-zero word &nbsp; $\underline{0} = (0, 0,\text{ ...} \hspace{0.05cm}, 0)$ &nbsp; is part of every linear binary code,&nbsp; the above definition can also be formulated as follows:<br>
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp;  The&nbsp; &raquo;'''distance spectrum'''&laquo;&nbsp; of a linear binary&nbsp; $(n, \hspace{0.03cm} k)$&nbsp; block code is the set &nbsp; $\{W_i \}$&nbsp; with&nbsp; $i = 0,\ 1,$ ... ,&nbsp; $n$.&nbsp;  

*Here,&nbsp; $W_i$&nbsp; gives the number of code words&nbsp; $\underline{x} \in \mathcal{C}$&nbsp; with Hamming weight&nbsp; $w_{\rm H}(\underline{x}) = i$.

*The set&nbsp; $\hspace{0.05cm}\{W_i \}\hspace{0.05cm}$&nbsp; is often also described as a polynomial with a pseudo-variable&nbsp; $X$:
  
::<math>\left \{ \hspace{0.05cm} W_i \hspace{0.05cm} \right \} \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm}
W(X) = \sum_{i=0  }^{n}  W_i \cdot X^{i} = W_0 + W_1 \cdot X + W_2 \cdot X^{2} + ... \hspace{0.05cm} + W_n \cdot X^{n}\hspace{0.05cm}.</math>
  
*$W(X)$&nbsp; is also called the &nbsp;  &raquo;'''weight enumerator function'''&laquo;.}}<br>
  
*For example,&nbsp; the weight enumerator function of the&nbsp;  $(5, \hspace{0.02cm} 2)$&nbsp; code&nbsp;  $\mathcal{C} = \left  \{ \hspace{0.05cm}(0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \hspace{0.05cm} \right \}$&nbsp;  of&nbsp;  $\text{Example 1}$&nbsp; reads:
  
 
::<math>W(X) = 1 + 2 \cdot X^{3} + X^{4}\hspace{0.05cm}.</math>
 
  
*As can be seen from the&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code |$\text{table of its code words}$]],&nbsp; one obtains for the&nbsp; $(7, \hspace{0.02cm}4, \hspace{0.02cm}3)$&nbsp; Hamming code:
  
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.</math>
 
  
*The transformation of the distance spectrum&nbsp; $\hspace{0.01cm}\{W_i \}\hspace{0.01cm}$&nbsp; into the weight enumerator function&nbsp; $W(X)$&nbsp; also offers great numerical advantages in some exercises.&nbsp; For example,&nbsp; if the weight enumerator function&nbsp; $W(X)$&nbsp; of an&nbsp; $(n, \hspace{0.03cm} k)$&nbsp; block code&nbsp; $\mathcal{C}$&nbsp; is known,&nbsp; then the following holds for the corresponding&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Representation_of_SPC_and_RC_as_dual_codes|$\text{dual}$]]&nbsp; $(n, \hspace{0.03cm} n-k)$&nbsp; code&nbsp; $\mathcal{C}_{\rm dual}$:
  
::<math>W_{\rm dual}(X) = \frac{(1+X)^n}{2^k} \cdot W \left ( \frac{1-X}{1+X} \right )\hspace{0.05cm}.</math>
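
The weight enumerator functions quoted above can be reproduced by enumerating all&nbsp; $2^k$&nbsp; code words&nbsp; $\underline{x} = \underline{u} \cdot { \boldsymbol{\rm G} }$&nbsp; (modulo 2)&nbsp; and counting their Hamming weights.&nbsp; The following minimal Python sketch&nbsp; (function and variable names are chosen freely for illustration)&nbsp; does this for the&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; code of&nbsp; $\text{Example 1}$:

<pre>
from itertools import product

def weight_enumerator(G):
    """Distance spectrum [W_0, ..., W_n] of the linear binary block code
    generated by the k x n matrix G (arithmetic modulo 2)."""
    k, n = len(G), len(G[0])
    W = [0] * (n + 1)
    for u in product([0, 1], repeat=k):                # all 2^k information words
        x = [sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        W[sum(x)] += 1                                 # Hamming weight of the code word
    return W

# (5, 2) code of Example 1
G52 = [[1, 0, 1, 1, 0],
       [0, 1, 0, 1, 1]]
print(weight_enumerator(G52))   # -> [1, 0, 0, 2, 1]  =>  W(X) = 1 + 2*X^3 + X^4
</pre>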
  
 
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp;  The weight enumerator function&nbsp; $W(X)$&nbsp; of the&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Single_Parity-check_Codes|$\text{single parity&ndash;check code}$]]&nbsp; with &nbsp; $n = 6$,&nbsp; $k = 5$ &nbsp; &#8658; &nbsp; $\text{SPC (6, 5)}$&nbsp; is obtained by comparing all&nbsp; $2^5 = 32$&nbsp; code words with the all-zero word:  
 
 
::<math>W_{\rm SPC\hspace{0.03cm}(6,\hspace{0.08cm}5)}(X) = 1 + 15 \cdot X^{2} + 15 \cdot X^{4} + X^{6}\hspace{0.05cm}.</math>
  
Using the above equation,&nbsp; one arrives at the same result much faster:
*The dual code to&nbsp; $\text{SPC (6, 5)}$&nbsp;   is the&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Repetition_Codes|$\text{repetition code}$]]&nbsp;  $\text{RC (6, 1)}$&nbsp; with only two code words&nbsp; $(0, 0, 0, 0, 0, 0)$&nbsp; and&nbsp; $(1, 1, 1, 1, 1, 1)$:
  
 
::<math>W_{\rm RC\hspace{0.03cm}(6,\hspace{0.08cm}1)}(X) = 1 + X^{6}\hspace{0.05cm}.</math>
 
  
*From this it follows for the weight enumerator function of the&nbsp; $\text{SPC (6, 5)}$&nbsp; according to the above equation with&nbsp; $k = 1$:
  
 
::<math>W_{\rm SPC\hspace{0.03cm}(6,\hspace{0.08cm}5)}(X) = \frac{(1+X)^6}{2^1} \cdot \left [1 \hspace{-0.05cm}+\hspace{-0.05cm} \left ( \frac {1\hspace{-0.05cm}-\hspace{-0.05cm}X}{1\hspace{-0.05cm}+\hspace{-0.05cm}X}\right )^6 \right ] =  1/2 \cdot \big [( 1\hspace{-0.05cm}+\hspace{-0.05cm}X) ^6 \hspace{-0.05cm}+\hspace{-0.05cm} ( 1-X) ^6 \big ] = 1 \hspace{-0.05cm}+\hspace{-0.05cm} 15 \cdot X^{2} \hspace{-0.05cm}+\hspace{-0.05cm} 15 \cdot X^{4} \hspace{-0.05cm}+\hspace{-0.05cm} X^{6}\hspace{0.05cm}.</math>}}<br>
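
The dual-code relation used in&nbsp; $\text{Example 2}$&nbsp; can also be checked symbolically,&nbsp; for instance with the Python library&nbsp; "SymPy".&nbsp; The short sketch below&nbsp; (variable names chosen freely)&nbsp; transforms the weight enumerator function of the&nbsp; $\text{RC (6, 1)}$&nbsp; and recovers that of the&nbsp; $\text{SPC (6, 5)}$:

<pre>
from sympy import symbols, expand, cancel

X = symbols('X')

# Weight enumerator function of the repetition code RC (6, 1)
W_RC = 1 + X**6
n, k = 6, 1

# W_dual(X) = (1+X)^n / 2^k * W((1-X)/(1+X))
W_SPC = cancel((1 + X)**n / 2**k * W_RC.subs(X, (1 - X) / (1 + X)))

print(expand(W_SPC))   # -> X**6 + 15*X**4 + 15*X**2 + 1
</pre>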
  
== Union Bound of the block error probability ==
 
<br>
 
As in&nbsp; $\text{Example 1}$&nbsp; on the distance spectrum,&nbsp; we consider the&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; block code&nbsp; $\mathcal{C} = \{\underline{x}_0, \underline{x}_1, \underline{x}_2, \underline{x}_3 \}$&nbsp; and assume that the code word&nbsp; $\underline{x}_0$&nbsp; has been sent.&nbsp; The graphic illustrates the situation.
  
*In the error-free case,&nbsp; the code word estimator would then return&nbsp; $\underline{z} = \underline{x}_0$.

*Otherwise,&nbsp; a block error would occur&nbsp; (that is, &nbsp; $\underline{z} \ne \underline{x}_0$&nbsp; and accordingly&nbsp; $\underline{v} \ne \underline{u}_0)$&nbsp; with probability
[[File:EN_KC_T_1_6_S2b.png|right|frame|Derivation of the Union Bound |class=fit]]
  
:$${\rm Pr(block\:error)} = {\rm Pr}\left (\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm}\underline{x}_{\hspace{0.02cm}1}\big ] \hspace{-0.07cm}\cup\hspace{-0.07cm}\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm} \underline{x}_{\hspace{0.02cm}2}\big ] \hspace{-0.07cm}\cup\hspace{-0.07cm}\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm} \underline{x}_{\hspace{0.02cm}3}\big ] \right ).$$
 
  
&rArr; &nbsp;The event&nbsp; "falsification from &nbsp; $\underline{x}_0$ &nbsp; to&nbsp; $\underline{x}_1$"&nbsp; occurs,&nbsp; for a given received word &nbsp; $\underline{y}$,&nbsp; according to the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Definitions_of_the_different_optimal_receivers|"maximum likelihood block-wise decision rule"]]&nbsp; exactly when the following holds for the conditional probability density function:
  
 
::<math>\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm}
f(\underline{x}_{\hspace{0.02cm}0}\hspace{0.05cm} \vert \hspace{0.05cm} \underline{y}) \le f(\underline{x}_{\hspace{0.02cm}1}\hspace{0.05cm} \vert \hspace{0.05cm} \underline{y})
  \hspace{0.05cm}.</math>
  
&rArr; &nbsp; Since the events&nbsp;  $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] $,&nbsp; $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big ] $,&nbsp; $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big ] $&nbsp; are not necessarily&nbsp; "disjoint" &nbsp; (which would thus be mutually exclusive),&nbsp; the&nbsp; [[Theory_of_Stochastic_Signals/Set_Theory_Basics#Union_set|$\text{probability of the union set}$]]&nbsp; is less than or equal to the sum of the individual probabilities:
  
:<math>{\rm Pr(block\:error)} \le {\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] \hspace{0.05cm}+\hspace{0.05cm}{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big ] \hspace{0.05cm}+ \hspace{0.05cm}{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big ]  \hspace{0.05cm}.</math>
  
#This upper bound on the block error probability is called the&nbsp; &raquo;'''Union Bound'''&laquo;.  
#It was already used in the chapter&nbsp; [[Digital_Signal_Transmission/Approximation_of_the_Error_Probability#Union_Bound_-_Upper_bound_for_the_error_probability|"Approximation of the error probability"]]&nbsp; of the book&nbsp; "Digital Signal Transmission".<br>
  
We now generalize and formalize these results under the assumption that both&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.05cm}'$&nbsp; belong to the code&nbsp; $\mathcal{C}$.
  
 
{{BlaueBox|TEXT=   
$\text{Then the following calculation rules apply:}$&nbsp;   
*$\rm Block\:error\:probability$:
  
::$${\rm Pr(block\:error)} = {\rm Pr} \left ( \bigcup_{\underline{x}\hspace{0.05cm}' \ne \underline{x} } \hspace{0.15cm}\big [\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] \right )\hspace{0.05cm},$$
  
*Upper bound &nbsp; &rArr; &nbsp; $\text{Union Bound}$:
  
 
::$${\rm Pr(block\:error)} \le {\rm Pr(Union \hspace{0.15cm}Bound)} = \sum_{\underline{x}\hspace{0.05cm}' \ne \underline{x} } \hspace{0.15cm} {\rm Pr}\big[\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] \hspace{0.05cm},$$
  
* $\text{Pairwise error probability}$&nbsp; (according to the maximum a-posteriori or the maximum likelihood criterion):
  
 
::$${\rm Pr}\hspace{0.02cm}\big[\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] = {\rm Pr} \big [
f(\underline{x}\hspace{0.05cm}\vert \hspace{0.05cm}\underline{y}) \le f(\underline{x}\hspace{0.05cm}'\hspace{0.05cm} \vert \hspace{0.05cm}\underline{y}) \big ]
  \hspace{0.05cm}.$$}}
  
  
In the next sections,&nbsp; these results are applied to different channels.<br>
  
== Union Bound for the BSC model ==
 
<br>
 
We continue to consider the exemplary&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; code:&nbsp;  
[[File:EN_KC_T_1_6_S2.png|right|frame|BSC model and maximum likelihood detection|class=fit]]
:$$\mathcal{C} = \left  \{ \hspace{0.05cm}(0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \hspace{0.05cm} \right \}.$$
For the digital channel we use the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC| $\text{BSC model}$]]&nbsp; ("Binary Symmetric Channel"):
  
 
::<math>{\rm Pr}(y = 1 \hspace{0.05cm} | \hspace{0.05cm}x = 0 )  =  {\rm Pr}(y = 0 \hspace{0.05cm} | \hspace{0.05cm}x = 1 ) = {\rm Pr}(e = 1) = \varepsilon \hspace{0.05cm},</math>
 
::<math>{\rm Pr}(y = 0 \hspace{0.05cm} | \hspace{0.05cm}x = 0 )  =  {\rm Pr}(y = 1 \hspace{0.05cm} | \hspace{0.05cm}x = 1 ) = {\rm Pr}(e = 0) = 1 -\varepsilon \hspace{0.05cm}.</math>
  
Then the following holds:  
#The code words&nbsp; $\underline{x}_0 = (0, 0, 0, 0, 0)$&nbsp; and&nbsp; $\underline{x}_1 = (0, 1, 0, 1, 1)$&nbsp; differ in&nbsp; $d = 3$&nbsp; bits;&nbsp; $d$&nbsp; is the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming distance}$]] &nbsp; between&nbsp; $\underline{x}_0$&nbsp; and&nbsp; $\underline{x}_1$.  
#An incorrect decoding result &nbsp; $\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] $&nbsp; is obtained whenever at least two of the three bits at bit positions 2, 4, and 5 are falsified.  
#In contrast, the bit positions 1 and 3 do not matter here, since they are the same for&nbsp; $\underline{x}_0$&nbsp; and &nbsp;$\underline{x}_1$.
  
  
Since the considered code can correct&nbsp; $t = \lfloor (d-1)/2 \rfloor = 1$&nbsp; error,&nbsp; the following holds:

::<math>{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ]  \hspace{-0.1cm} =  \hspace{-0.1cm}    \sum_{i=t+1  }^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i} =  {3 \choose 2} \cdot \varepsilon^{2} \cdot (1 - \varepsilon) + 
{3 \choose 3} \cdot \varepsilon^{3} =3 \cdot \varepsilon^2 \cdot (1 - \varepsilon)  + \varepsilon^3  = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big]\hspace{0.05cm}.</math>
  
This takes into account that &nbsp; $\underline{x}_0 = (0, 0, 0, 0, 0)$ &nbsp; and &nbsp; $\underline{x}_2 = (1, 0, 1, 1, 0)$ &nbsp; also differ in three bit positions.
  
On the other hand,&nbsp; the code words &nbsp; $\underline{x}_0 = (0, 0, 0, 0, 0)$ &nbsp; and &nbsp; $\underline{x}_3 = (1, 1, 1, 0, 1)$ &nbsp; differ in four bit positions:  
*Wrong decoding of the block therefore certainly occurs if three or four bits are falsified.

*A falsification of two bits results in a block error with &nbsp;$50\%$&nbsp; probability,&nbsp; if one assumes a random decision for this&nbsp; (last term in the following equation):
  
 
::<math>{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big]  = \varepsilon^4 + 4 \cdot \varepsilon^3 \cdot (1 - \varepsilon)    +  {1}/{2} \cdot 6 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^2 \hspace{0.05cm}.</math>
 
  
*This results in the&nbsp; "Union Bound":
  
::<math>{\rm Pr(Union \hspace{0.15cm}Bound)} = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}\big] +{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}2}\big] +{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}3}\big] \ge {\rm Pr(block\hspace{0.15cm}error)}
.</math>
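
The pairwise probabilities derived above,&nbsp; and thus the Union Bound of the&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; code,&nbsp; can be evaluated for any BSC parameter&nbsp; $\varepsilon$;&nbsp; a minimal Python sketch&nbsp; (the&nbsp; $\varepsilon$&nbsp; values chosen here are only examples):

<pre>
def pr_x0_to_x1(eps):
    # d = 3:  at least two of the three differing bits falsified
    return 3 * eps**2 * (1 - eps) + eps**3

def pr_x0_to_x3(eps):
    # d = 4:  four or three bit errors, plus half of the two-bit falsifications
    return eps**4 + 4 * eps**3 * (1 - eps) + 0.5 * 6 * eps**2 * (1 - eps)**2

for eps in (0.1, 0.01, 0.001):                       # example values only
    ub = 2 * pr_x0_to_x1(eps) + pr_x0_to_x3(eps)     # Pr[x0->x1] = Pr[x0->x2]
    print(f"eps = {eps}:  Union Bound = {ub:.3e}")
</pre>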
  
{{GraueBox|TEXT=   
$\text{Example 3:}$&nbsp;  The table summarizes the results for different values of the BSC parameter&nbsp; $\varepsilon$.<br>
[[File:P ID2367 KC T 1 6 S3 neu.png|right|frame|Numeric Union Bound for the considered&nbsp; $\text{(5, 2)}$ code|class=fit]]

It is worth mentioning that the two probabilities,&nbsp; calculated in completely different ways,&nbsp;
*${\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}\big]=3 \cdot \varepsilon^2 \cdot (1 - \varepsilon)  + \varepsilon^3  = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big]\hspace{0.05cm},$&nbsp;

*${\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}3}\big]= \varepsilon^4 + 4 \cdot \varepsilon^3 \cdot (1 - \varepsilon)    +  {1}/{2} \cdot 6 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^2 \hspace{0.05cm},$&nbsp;
  
  
give exactly the same numerical result.}}<br>
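
That the actual block error probability indeed stays below the Union Bound can also be verified by simulation.&nbsp; The following Python sketch transmits the all-zero code word of the&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; code over a BSC and performs minimum-distance&nbsp; (maximum likelihood)&nbsp; decoding with a random decision in the case of a tie;&nbsp; the parameter values&nbsp; $(\varepsilon = 0.05$,&nbsp; number of trials$)$&nbsp; are merely illustrative:

<pre>
import random

CODE = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (1, 1, 1, 0, 1)]

def simulated_block_error_rate(eps, trials=200_000, seed=1):
    rng = random.Random(seed)
    x0 = CODE[0]                                    # all-zero word is transmitted
    errors = 0
    for _ in range(trials):
        y = tuple(b ^ (rng.random() < eps) for b in x0)       # BSC falsifications
        dists = [sum(a != b for a, b in zip(y, c)) for c in CODE]
        best = min(dists)
        z = rng.choice([c for c, d in zip(CODE, dists) if d == best])  # random tie-break
        errors += (z != x0)
    return errors / trials

eps = 0.05                                          # example value only
union_bound = (2 * (3 * eps**2 * (1 - eps) + eps**3)
               + eps**4 + 4 * eps**3 * (1 - eps) + 3 * eps**2 * (1 - eps)**2)
print("simulated  :", simulated_block_error_rate(eps))
print("Union Bound:", union_bound)
</pre>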
  
== The upper bound according to Bhattacharyya ==
 
<br>
 
Another upper bound on the block error probability was given by&nbsp; [https://en.wikipedia.org/wiki/Anil_Kumar_Bhattacharya $\text{Anil Kumar Bhattacharyya}$]:
  
::<math>{\rm Pr(block\:error)} \le W(X = \beta) -1 = {\rm Pr(Bhattacharyya)}
  \hspace{0.05cm}.</math>
  
The following should be noted:
*$W(X)$&nbsp; is the&nbsp; [[Channel_Coding/Limits_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|$\text{weight enumerator function}$]]&nbsp; defined above,&nbsp; which characterizes the channel code used.<br>
  
*The&nbsp; &raquo;$\text{Bhattacharyya parameter}$&laquo;&nbsp; $\beta$&nbsp; characterizes the digital channel.&nbsp; For example:
  
 
::<math>\beta = \left\{ \begin{array}{c} \lambda \\  \sqrt{4 \cdot \varepsilon \cdot (1- \varepsilon)}\\ 
  {\rm e}^{- R \hspace{0.05cm}\cdot \hspace{0.05cm}E_{\rm B}/N_0}  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm for \hspace{0.15cm}the\hspace{0.15cm}BEC\hspace{0.15cm}model},\\ 
   {\rm for \hspace{0.15cm}the\hspace{0.15cm}BSC\hspace{0.15cm}model}, \\  {\rm for\hspace{0.15cm}the\hspace{0.15cm}AWGN\hspace{0.15cm} model}. \end{array}</math>
  
*The&nbsp; "Bhattacharyya Bound"&nbsp; is always&nbsp; (and usually significantly)&nbsp; above the curve for the&nbsp; "Union Bound".&nbsp; With the goal of finding a uniform bound for all channels,&nbsp; much coarser estimates must be made here than for the&nbsp; "Union Bound".<br><br>
  
We restrict ourselves here to the&nbsp; &raquo;<b>Bhattacharyya bound for the BSC model</b>&laquo;.  
*For its pairwise falsification probability,&nbsp; it was derived above:
  
 
::<math>{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big]  =    \sum_{i= \left\lfloor (d-1)/2 \right\rfloor +1}^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i}  =    \sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i}\hspace{0.05cm}.</math>
  
:Here,&nbsp; $\varepsilon = {\rm Pr}(y = 1\hspace{0.04cm}|\hspace{0.04cm} x = 0) = {\rm Pr}(y = 0\hspace{0.04cm}|\hspace{0.04cm} x = 1)< 0.5$&nbsp; characterizes the BSC model,&nbsp; and&nbsp; $d = d_{\rm H}(\underline{x}_0,\, \underline{x}_1)$&nbsp; denotes the Hamming distance between the two code words considered.<br>
 
  
 
*To arrive at the Bhattacharyya bound,&nbsp; the following estimates must be made.&nbsp; For all terms of the above sum&nbsp; $(i \ge d/2)$&nbsp; one can use&nbsp; $\varepsilon^{i} \cdot  (1 - \varepsilon)^{d-i} \le  \big[\varepsilon \cdot (1 - \varepsilon)\big]^{d/2}$.&nbsp; This yields:
 
  
 
::<math>{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big]  \le  \big[\varepsilon \cdot (1 - \varepsilon)\big]^{d/2} \cdot \sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i}  \hspace{0.05cm}.</math>
 
  
*Extending the lower limit of the summation index to&nbsp; $i = 0$:
  
 
::<math>\sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i} \hspace{0.15cm} < \hspace{0.15cm} \sum_{i= 0 }^{d} {d \choose i}  = 2^d\hspace{0.05cm},
\hspace{0.3cm}\Rightarrow \hspace{0.3cm} {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] \le \big[ 4 \cdot \varepsilon \cdot (1 - \varepsilon)\big]^{d/2} = \beta^{d}  \hspace{0.3cm}{\rm with} \hspace{0.3cm} \beta =  \sqrt{4 \cdot \varepsilon \cdot (1 - \varepsilon)}
\hspace{0.05cm}.</math>
  
*Re-sorting the sum according to the Hamming weights&nbsp; $W_i$&nbsp; $($Hamming distance&nbsp; $d = i$&nbsp; occurs&nbsp; $W_i$&nbsp; times$)$:
  
::<math>{\rm Pr(block\:error)} \hspace{0.1cm} \le \hspace{0.1cm} \sum_{i= 1 }^{n} W_i \cdot \beta^{i} =  W_1 \cdot \beta + W_2 \cdot \beta^2 + \hspace{0.05cm}\text{ ...} \hspace{0.05cm}+ W_n \cdot \beta^n
  \hspace{0.05cm}.</math>
  
*With the weight enumerator function&nbsp; $W(X)= 1 + W_1 \cdot  X + W_2 \cdot  X^2 + \text{...} + W_n \cdot  X^n$:
  
::<math>{\rm Pr(block\:error)} \le W(X = \beta) -1= {\rm Pr(Bhattacharyya)} 
  \hspace{0.05cm}.</math>
  
{{GraueBox|TEXT=   
$\text{Example 4:}$&nbsp;  The table summarizes the Bhattacharyya results for different values of the BSC parameter&nbsp; $\varepsilon$,&nbsp; valid for the&nbsp; [[Channel_Coding/Limits_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|$\text{exemplary (5, 2) code}$]].
[[File:P ID2370 KC T 1 6 S4.png|right|frame|"Union Bound"&nbsp; vs.&nbsp; "Bhattacharyya Bound"&nbsp; valid for the BSC model|class=fit]]
 
   
 
   
*For this code the following applies:  
 
:$$W_0 = 1, \ \ W_1 = W_2 = 0,  \ \ W_3 = 2,  \ \ W_4 = 1$$
 
:$$\Rightarrow\hspace{0.3cm} W(X) = 1 +  2 \cdot X^3 +  X^4.$$
  
*Thus,&nbsp; the Bhattacharyya bound can be calculated:
 
::<math>  {\rm Pr(Bhattacharyya)} = W(\beta) -1 = 2 \cdot \beta^3 + \beta^4\hspace{0.05cm}.</math>
 
  
*This provides an&nbsp; (often coarse)&nbsp; upper bound on the block error probability:
::<math> {\rm Pr(block\:error)}   
  \le {\rm Pr(Bhattacharyya)}
\hspace{0.05cm}.</math>}}
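
For the considered&nbsp; $(5, \hspace{0.02cm} 2)$&nbsp; code and the BSC model,&nbsp; the Bhattacharyya bound&nbsp; $W(\beta) - 1 = 2 \cdot \beta^3 + \beta^4$&nbsp; with&nbsp; $\beta = \sqrt{4 \cdot \varepsilon \cdot (1- \varepsilon)}$&nbsp; can be compared directly with the Union Bound;&nbsp; a short Python sketch&nbsp; (the&nbsp; $\varepsilon$&nbsp; values are again arbitrary examples):

<pre>
from math import sqrt

def union_bound(eps):
    return (2 * (3 * eps**2 * (1 - eps) + eps**3)
            + eps**4 + 4 * eps**3 * (1 - eps) + 3 * eps**2 * (1 - eps)**2)

def bhattacharyya_bound(eps):
    beta = sqrt(4 * eps * (1 - eps))        # Bhattacharyya parameter of the BSC
    return 2 * beta**3 + beta**4            # W(beta) - 1 for the (5, 2) code

for eps in (0.1, 0.01, 0.001):              # example values only
    print(f"eps = {eps}:  Union Bound = {union_bound(eps):.3e},  "
          f"Bhattacharyya = {bhattacharyya_bound(eps):.3e}")
</pre>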
  
 
{{BlaueBox|TEXT=   
$\text{Conclusions:}$&nbsp;  Based on&nbsp; [[Channel_Coding/Limits_for_Block_Error_Probability#Union_Bound_for_the_BSC_model|$\text{Example 3}$]]&nbsp; and&nbsp; $\text{Example 4}$&nbsp; $($in this section$)$&nbsp; for the simple&nbsp; $\text{(5, 2)}$&nbsp; block code,&nbsp; which is of little practical relevance,&nbsp; and in anticipation of&nbsp; $\text{Example 5}$&nbsp; $($in the next section$)$&nbsp; for the&nbsp; $\text{(7,&nbsp;4,&nbsp;3)}$&nbsp; Hamming code,&nbsp; we summarize:
#The block error probability of a coding system often cannot be specified analytically and must be determined by simulation.&nbsp; The same applies to the bit error probability.<br>
#The&nbsp; &raquo;'''Union Bound'''&laquo;&nbsp; provides an upper bound for the block error probability.&nbsp; For many applications&nbsp; $($especially short codes$)$&nbsp; the Union Bound is only slightly above the actual error probability.<br>
#The &nbsp;&raquo;'''Bhattacharyya Bound'''&laquo;&nbsp; for the BEC channel is about a factor&nbsp; $2$&nbsp; above the&nbsp; Union Bound&nbsp; &ndash; see&nbsp; [[Aufgaben:Exercise_1.14:_Bhattacharyya_Bound_for_BEC|$\text{Exercise 1.14}$]].&nbsp; For the BSC and the AWGN channel,&nbsp; the gap is much larger.&nbsp; A factor of&nbsp; $10$&nbsp; (and more)&nbsp; is not uncommon.<br>
#The Bhattacharyya Bound &nbsp; &rArr; &nbsp; "$W(\beta) - 1$" &nbsp; looks very simple at first sight.&nbsp; Nevertheless,&nbsp; knowledge of the exact weight enumerator function&nbsp; $W(X)$&nbsp; of the code is also required here.<br>
#If the transmission channel&nbsp; $($BEC,&nbsp; BSC,&nbsp; AWGN&nbsp; or variations thereof$)$&nbsp; and its parameters are known,&nbsp; there is nothing to be said against using the&nbsp; (more accurate)&nbsp; "Union Bound"&nbsp; as an upper bound for the block error probability.}}<br>
  
== Bounds for the (7, 4, 3) Hamming code on the AWGN channel ==
<br>
Finally,&nbsp; we consider the block error probability and its bounds&nbsp; $($"Union Bound"&nbsp; and&nbsp; "Bhattacharyya Bound"$)$&nbsp; for the following configuration:
#AWGN channel,&nbsp; characterized by the quotient&nbsp; $E_{\rm B}/N_0$,<br>
#Hamming code&nbsp; $\text{HC(7, 4, 3)}$ &nbsp;&nbsp;&#8658;&nbsp;&nbsp; rate&nbsp; $R = 4/7$, &nbsp; weight enumerator function&nbsp; $W(X)-1 =  7 \cdot X^3 +  7 \cdot X^4 + X^7$,<br>
#"Soft Decision"&nbsp; according to the maximum likelihood criterion.<br><br>

{{GraueBox|TEXT= 
$\text{Example 5:}$&nbsp; The results are summarized in the graph.
[[File:EN_KC_T_1_6_S5.png|right|frame|Block error probability and bounds of the&nbsp; $\text{HC (7, 4, 3)}$|class=fit]]

*In contrast to the graph in the section&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Coding_gain_-_bit_error_rate_with_AWGN|"Coding gain &ndash; bit error rate with AWGN"]],&nbsp; the&nbsp; &raquo;'''block error rate'''&laquo;&nbsp; is given here and not the bit error rate.

*Approximately,&nbsp; the latter is smaller by the factor&nbsp; $d_{\rm min}/k$&nbsp; if,&nbsp; as here,&nbsp; $d_{\rm min}< k$.&nbsp; In the present example:&nbsp; $d_{\rm min}/k = 0.75$.<br>

*Only the points for integer&nbsp; $\rm dB$&nbsp; values were calculated.&nbsp; The dashed lines were interpolated.

*The&nbsp; (blue)&nbsp; numerical values given on the right are valid for&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 = 8 \, \rm dB$ &nbsp;&nbsp;&#8658;&nbsp;&nbsp; $E_{\rm B}/N_0 \approx 6.31$&nbsp; $($blue vertical line$)$.
  
The green crosses mark the&nbsp; "Union Bound".&nbsp; For this,&nbsp; the following applies:
::<math>{\rm Pr(block\:error)}  \le  \sum_{i= d_{\rm min} }^{n} W_i \cdot {\rm Q} \left ( \sqrt{i \cdot {2R \cdot E_{\rm B} }/{N_0} } \right )  </math>
::<math>\Rightarrow \hspace{0.3cm}{\rm Pr(block\:error)} \approx   
7 \cdot {\rm Q} (4.65) + 7 \cdot {\rm Q} (5.37) + {\rm Q} (7.10) = </math>
::<math>\hspace{2.0cm} \approx   
7 \cdot 1.66 \cdot 10^{-6} + 7 \cdot 3.93 \cdot 10^{-8}+ 10^{-9} = 1.2 \cdot 10^{-5}
  \hspace{0.05cm}.</math>
*The numerical values make it clear that the Union Bound is essentially determined by the first term:
 
::<math>{\rm Pr(Union\hspace{0.15cm} Bound)} \approx W_{d_{\rm min} } \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B} }/{N_0} } \right ) = 1.16 \cdot 10^{-5}
  \hspace{0.05cm}.</math>
*However,&nbsp; this so-called&nbsp; "Truncated Union Bound" &nbsp;is no longer a true bound for the block error probability in all applications,&nbsp; but is to be understood as an approximation.<br>
*The&nbsp; "Bhattacharyya Bound"&nbsp; is marked by red dots in the graph.&nbsp; It lies well above the Union Bound because of the highly simplified&nbsp; [https://en.wikipedia.org/wiki/Chernoff_bound $\text{Chernoff&ndash;Rubin Bound}$]&nbsp; 
:$${\rm Q}(x) \le {\rm e}^{-x^2/2}.$$
*For example,&nbsp; for&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 = 8 \, \rm dB$&nbsp; with&nbsp; $\beta = {\rm e}^{-R \hspace{0.05cm}\cdot \hspace{0.05cm}E_{\rm B}/N_0} \approx 0.027$&nbsp; one obtains a more than tenfold value compared to the Union Bound:
  
 
::<math>{\rm Pr(Bhattacharyya)} = W(\beta) -1 = 7 \cdot \beta^3 + 7 \cdot \beta^4 + \beta^7 \approx 1.44 \cdot 10^{-4}
  \hspace{0.05cm}.</math>}}
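
The numerical values of&nbsp; $\text{Example 5}$&nbsp; can be reproduced with a few lines of Python,&nbsp; using&nbsp; ${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$;&nbsp; the sketch below evaluates both bounds at the&nbsp; $8 \, \rm dB$&nbsp; operating point considered in the example:

<pre>
from math import erfc, sqrt, exp

def Q(x):                                   # complementary Gaussian error function
    return 0.5 * erfc(x / sqrt(2))

R = 4 / 7                                   # code rate of HC (7, 4, 3)
W = {3: 7, 4: 7, 7: 1}                      # distance spectrum (terms with i > 0)

ebn0_db = 8                                 # operating point from Example 5
ebn0 = 10 ** (ebn0_db / 10)                 # E_B/N_0 as a linear ratio (about 6.31)

union = sum(Wi * Q(sqrt(i * 2 * R * ebn0)) for i, Wi in W.items())
beta = exp(-R * ebn0)                       # Bhattacharyya parameter for AWGN
bhatta = sum(Wi * beta**i for i, Wi in W.items())

print(f"Union Bound   : {union:.2e}")       # approx. 1.2e-5
print(f"Bhattacharyya : {bhatta:.2e}")      # approx. 1.4e-4
</pre>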
  
== Exercises for the chapter ==
 
<br>
 
[[Aufgaben:Exercise_1.14:_Bhattacharyya_Bound_for_BEC|Exercise 1.14: Bhattacharyya Bound for BEC]]
  
[[Aufgaben:Exercise_1.15:_Distance_Spectra_of_HC_(7,_4,_3)_and_HC_(8,_4,_4)|Exercise 1.15: Distance Spectra of HC (7, 4, 3) and HC (8, 4, 4)]]
  
[[Aufgaben:Exercise_1.16:_Block_Error_Probability_Bounds_for_AWGN|Exercise 1.16: Block Error Probability Bounds for AWGN]]
  
[[Aufgaben:Exercise_1.16Z:_Bounds_for_the_Gaussian_Error_Function|Exercise 1.16Z: Bounds for the Gaussian Error Function]]
  
 
{{Display}}
 

Latest revision as of 17:34, 22 November 2022

Distance spectrum of a linear code


We further assume a linear and binary  $(n, \hspace{0.05cm} k)$ block code  $\mathcal{C}$.  A major goal of code design is to keep the  $\text{block error probability}$  ${\rm Pr}(\underline{u} \ne \underline{v}) = {\rm Pr}(\underline{z} \ne \underline{x})$  as low as possible.  This is achieved by,  among other things

  • the minimum distance  $d_{\rm min}$  between two code words  $\underline{x}$  and  $\underline{x}\hspace{0.05cm}'$  is as large as possible,  so that one can correct up to  $t = ⌊(d_{\rm min}-1)/2⌋$  bit errors;
  • at the same time this minimum distance  $d_{\rm min}$  occurs as rarely as possible,  given all the allowed code words.

$\text{Definition:}$  We denote the  »number«  of code words  $\underline{x}\hspace{0.05cm}' \in \mathcal{C}$  with Hamming distance  $i$  from the considered code word  $\underline{x}$  of the same code  $\mathcal{C}$  with  $W_i(\underline{x})$,  where holds:

\[W_i(\underline{x}) = \big \vert \hspace{0.05cm} \left \{ \underline{x} \hspace{0.05cm}, \underline{x}{\hspace{0.05cm}' \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm}\vert\hspace{0.1cm} d_{\rm H}(\underline{x} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}\hspace{0.05cm}' ) = i \right \} \hspace{0.05cm} \big \vert\hspace{0.05cm}.\]
  • The magnitude strokes of the absolute value here denote the number of code words  $\underline{x}\hspace{0.05cm}'$, that satisfy the condition  $d_{\rm H}(\underline{x} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}\hspace{0.05cm}' ) = i $. 
  • This value is also called  "multiplicity".


$\text{Example 1:}$  We consider the  $(5, \, 2)$ block code  $\mathcal{C}$  with generator matrix

Hamming distances between all code words

\[{ \boldsymbol{\rm G} } = \begin{pmatrix} 1 &0 &1 &1 &0 \\ 0 &1 &0 &1 &1 \end{pmatrix} \hspace{0.05cm}.\]

The table shows the Hamming distances

  • between all code words  $\underline{x}_i$
  • to the reference words  $\underline{x}_0$, ... , $\underline{x}_3$.


One recognizes:   Independently of the reference word  $\underline{x}_i$  holds:

\[W_0 = 1 \hspace{0.05cm}, \hspace{0.5cm}W_1 = W_2 = 0 \hspace{0.05cm},\hspace{0.5cm} W_3 = 2 \hspace{0.05cm},\hspace{0.5cm} W_4 = 1\]
\[ \Rightarrow\hspace{0.3cm} d_{\rm min} = 3\hspace{0.05cm}.\]


Not only in this example,  but in any linear code,  the same multiplicities  $W_i$  result for each code word.  Furthermore,  since the all-zero word   $\underline{0} = (0, 0,\text{ ...} \hspace{0.05cm}, 0)$   is part of every linear binary code,  the above definition can also be formulated as follows:

$\text{Definition:}$  The  »distance spectrum«  of a linear binary  $(n, \hspace{0.03cm} k)$  block code is the set   $\{W_i \}$  with  $i = 0,\ 1,$ ... ,  $n$. 

  • Here,  $W_i$  gives the number of code words  $\underline{x} \in \mathcal{C}$  with Hamming weight  $w_{\rm H}(\underline{x}) = i$.
  • Often one describes the set  $\hspace{0.05cm}\{W_i \}\hspace{0.05cm}$  also as a polynomial with a pseudo-variable  $X$:


\[\left \{ \hspace{0.05cm} W_i \hspace{0.05cm} \right \} \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm} W(X) = \sum_{i=0 }^{n} W_i \cdot X^{i} = W_0 + W_1 \cdot X + W_2 \cdot X^{2} + ... \hspace{0.05cm} + W_n \cdot X^{n}\hspace{0.05cm}.\]
  • $W(X)$  is also called the   »weight enumerator function«.


  • For example,  the weight enumerator function of the  $(5, \hspace{0.02cm} 2)$  code  $\mathcal{C} = \left \{ \hspace{0.05cm}(0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \hspace{0.05cm} \right \}$  of  $\text{Example 1}$:
\[W(X) = 1 + 2 \cdot X^{3} + X^{4}\hspace{0.05cm}.\]
  • Correspondingly,  the weight enumerator function of the  $(7, \hspace{0.02cm} 4, \hspace{0.02cm} 3)$  Hamming code,  which will be considered again at the end of this chapter,  reads:
\[W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.\]
  • The transformation of the distance spectrum  $\hspace{0.01cm}\{W_i \}\hspace{0.01cm}$  into the weight enumerator function  $W(X)$  also offers great numerical advantages in some exercises.  For example,  if the weight enumerator function  $W(X)$  of an  $(n, \hspace{0.03cm} k)$  block code  $\mathcal{C}$  is known,  then for the corresponding  $\text{dual}$  $(n, \hspace{0.03cm} n-k)$  code  $\mathcal{C}_{\rm dual}$  the following holds:
\[W_{\rm dual}(X) = \frac{(1+X)^n}{2^k} \cdot W \left ( \frac{1-X}{1+X} \right )\hspace{0.05cm}.\]
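
For short codes,  both the distance spectrum  $\{W_i\}$  and the weight enumerator function  $W(X)$  can be checked numerically by enumerating all  $2^k$  code words.  The following Python lines are a minimal sketch of our own  (function and variable names are not part of the lesson),  applied to the generator matrix of  $\text{Example 1}$:

<syntaxhighlight lang="python">
import itertools
import numpy as np

def weight_distribution(G):
    """Distance spectrum {W_0, ..., W_n} of a binary linear code,
    obtained by enumerating all 2^k code words u*G over GF(2)."""
    k, n = G.shape
    W = np.zeros(n + 1, dtype=int)
    for u in itertools.product([0, 1], repeat=k):
        x = (np.array(u) @ G) % 2        # code word belonging to the info word u
        W[int(x.sum())] += 1             # Hamming weight = number of ones
    return W

# Generator matrix of the (5, 2) code from Example 1
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])
print(weight_distribution(G))            # -> [1 0 0 2 1],  i.e.  W(X) = 1 + 2*X^3 + X^4
</syntaxhighlight>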

$\text{Example 2:}$  The weight enumerator function  $W(X)$  of the  $\text{single parity–check code}$  with   $n = 6$,  $k = 5$   ⇒   $\text{SPC (6, 5)}$  is obtained by comparing all  $2^5 = 32$  code words with the all-zero word:

\[W_{\rm SPC\hspace{0.03cm}(6,\hspace{0.08cm}5)}(X) = 1 + 15 \cdot X^{2} + 15 \cdot X^{4} + X^{6}\hspace{0.05cm}.\]

Considering the above equation, you get the same result much faster:

  • The dual code to  $\text{SPC (6, 5)}$  is the  $\text{repetition code}$  $\text{RC (6, 1)}$  with only two code words  $(0, 0, 0, 0, 0, 0)$  and  $(1, 1, 1, 1, 1, 1)$:
\[W_{\rm RC\hspace{0.03cm}(6,\hspace{0.08cm}1)}(X) = 1 + X^{6}\hspace{0.05cm}.\]
  • From this it follows for the weight enumerator function of the  $\text{SPC (6, 5)}$  according to the above equation with  $k = 1$:
\[W_{\rm SPC\hspace{0.03cm}(6,\hspace{0.08cm}5)}(X) = \frac{(1+X)^6}{2^1} \cdot \left [1 \hspace{-0.05cm}+\hspace{-0.05cm} \left ( \frac {1\hspace{-0.05cm}-\hspace{-0.05cm}X}{1\hspace{-0.05cm}+\hspace{-0.05cm}X}\right )^6 \right ] = 1/2 \cdot \big [( 1\hspace{-0.05cm}+\hspace{-0.05cm}X) ^6 \hspace{-0.05cm}+\hspace{-0.05cm} ( 1\hspace{-0.05cm}-\hspace{-0.05cm}X) ^6 \big ] = 1 \hspace{-0.05cm}+\hspace{-0.05cm} 15 \cdot X^{2} \hspace{-0.05cm}+\hspace{-0.05cm} 15 \cdot X^{4} \hspace{-0.05cm}+\hspace{-0.05cm} X^{6}\hspace{0.05cm}.\]
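
The dual-code relation can also be evaluated symbolically,  e.g. with SymPy.  A minimal sketch of our own  (the function name  macwilliams  is our choice),  which reproduces the result of  $\text{Example 2}$  starting from  $W_{\rm RC\hspace{0.03cm}(6,\hspace{0.08cm}1)}(X) = 1+X^6$:

<syntaxhighlight lang="python">
import sympy as sp

X = sp.symbols('X')

def macwilliams(W, n, k):
    """W_dual(X) = (1+X)^n / 2^k * W((1-X)/(1+X)),  with W given as a SymPy expression."""
    return sp.expand(sp.cancel((1 + X)**n / 2**k * W.subs(X, (1 - X) / (1 + X))))

W_RC = 1 + X**6                           # weight enumerator function of the RC (6, 1)
print(macwilliams(W_RC, n=6, k=1))        # -> X**6 + 15*X**4 + 15*X**2 + 1
</syntaxhighlight>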


Union Bound of the block error probability


We consider as in  $\text{Example 1}$  the distance spectrum of the  $(5, \hspace{0.02cm} 2)$  block code  $\mathcal{C} = \{\underline{x}_0, \underline{x}_1, \underline{x}_2, \underline{x}_3 \}$   and assume that the code word  $\underline{x}_0$  has been sent.  The graphic illustrates the situation.

  • In the error-free case,  the code word estimator would then return  $\underline{z} = \underline{x}_0$. 
  • Otherwise,  a block error would occur  (that is,   $\underline{z} \ne \underline{x}_0$  and accordingly  $\underline{v} \ne \underline{u}_0)$  with probability
$${\rm Pr(block\:error)} = {\rm Pr}\left (\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm}\underline{x}_{\hspace{0.02cm}1}\big ] \hspace{-0.07cm}\cup\hspace{-0.07cm}\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm} \underline{x}_{\hspace{0.02cm}2}\big ] \hspace{-0.07cm}\cup\hspace{-0.07cm}\big [\underline{x}_{\hspace{0.02cm}0} \hspace{-0.07cm}\mapsto \hspace{-0.07cm} \underline{x}_{\hspace{0.02cm}3}\big ] \right ).$$

Derivation of the Union Bound

⇒  The event  "falsification from   $\underline{x}_0$   to  $\underline{x}_1$"  occurs for a given received word   $\underline{y}$   according to the  "maximum likelihood block-wise decision rule"  exactly when the conditional probability density functions satisfy:

\[\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm} f(\underline{x}_{\hspace{0.02cm}0}\hspace{0.02cm} | \hspace{0.05cm}\underline{y}) < f(\underline{x}_{\hspace{0.02cm}1}\hspace{0.02cm} | \hspace{0.05cm}\underline{y}) \hspace{0.05cm}.\]

⇒   Since the events  $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] $,  $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big ] $,  $\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big ] $  are not necessarily  "disjoint"   $($i.e.,  mutually exclusive$)$,  the  $\text{probability of the union set}$  is less than or equal to the sum of the individual probabilities:

\[{\rm Pr(block\:error)} \le {\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] \hspace{0.05cm}+\hspace{0.05cm}{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big ] \hspace{0.05cm}+ \hspace{0.05cm}{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big ] \hspace{0.05cm}.\]

  1. One calls this upper bound for the block error probability the  »Union Bound«.
  2. This was already used in the chapter  "Approximation of the error probability"  of the book  "Digital Signal Transmission".


We now generalize and formalize these results,  assuming that both  $\underline{x}$  and  $\underline{x}\hspace{0.05cm}'$  belong to the code  $\mathcal{C}$.

$\text{Then the following calculation rules apply:}$ 

  • $\rm Block\:error\:probability$:
$${\rm Pr(block\:error)} = {\rm Pr} \left ( \bigcup_{\underline{x}\hspace{0.05cm}' \ne \underline{x} } \hspace{0.15cm}\big [\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] \right )\hspace{0.05cm},$$
  • Upper bound   ⇒   $\text{Union Bound}$:
$${\rm Pr(block\:error)} \le {\rm Pr(Union \hspace{0.15cm}Bound)} = \sum_{\underline{x}\hspace{0.05cm}' \ne \underline{x} } \hspace{0.15cm} {\rm Pr}\big[\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] \hspace{0.05cm},$$
  • $\text{Pairwise error probability}$  (according to the maximum a-posteriori or the maximum likelihood criterion):
$${\rm Pr}\hspace{0.02cm}\big[\underline{x} \mapsto \underline{x}\hspace{0.05cm}'\big] = {\rm Pr} \big [ p(\underline{x}\hspace{0.05cm}\vert \hspace{0.05cm}\underline{y}) \le p(\underline{x}\hspace{0.05cm}'\hspace{0.05cm} \vert \hspace{0.05cm}\underline{y}) \big ] \hspace{0.05cm}.$$


In the next sections,  these results are applied to different channels.

Union Bound for the BSC model


We further consider the exemplary  $(5, \hspace{0.02cm} 2)$ code:
$$\mathcal{C} = \left \{ \hspace{0.05cm}(0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \hspace{0.05cm} \right \}.$$

BSC model and maximum likelihood detection

For the digital channel we use the  $\text{BSC model}$  ("Binary Symmetric Channel"):

\[{\rm Pr}(y = 1 \hspace{0.05cm} | \hspace{0.05cm}x = 0 ) = {\rm Pr}(y = 0 \hspace{0.05cm} | \hspace{0.05cm}x = 1 ) = {\rm Pr}(e = 1) = \varepsilon \hspace{0.05cm},\]
\[{\rm Pr}(y = 0 \hspace{0.05cm} | \hspace{0.05cm}x = 0 ) = {\rm Pr}(y = 1 \hspace{0.05cm} | \hspace{0.05cm}x = 1 ) = {\rm Pr}(e = 0) = 1 -\varepsilon \hspace{0.05cm}.\]

Then holds:

  1. The code words  $\underline{x}_0 = (0, 0, 0, 0, 0)$  and  $\underline{x}_1 = (0, 1, 0, 1, 1)$  differ in  $d = 3$  bits;  $d$  is the  $\text{Hamming distance}$   between  $\underline{x}_0$  and  $\underline{x}_1$.
  2. An incorrect decoding result   $\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] $  is obtained whenever at least two of the three bits at bit positions 2, 4, and 5 are falsified.
  3. In contrast, the bit positions 1 and 3 do not matter here, since they are the same for  $\underline{x}_0$  and  $\underline{x}_1$.


Since the considered code can correct  $t = ⌊(d-1)/2⌋ = 1$  error,  a falsification   $\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big]$   requires at least  $t + 1 = 2$  falsified bits among these  $d = 3$  positions:

\[{\rm Pr}\big [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big ] \hspace{-0.1cm} = \hspace{-0.1cm} \sum_{i=t+1 }^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i} = {3 \choose 2} \cdot \varepsilon^{2} \cdot (1 - \varepsilon) + {3 \choose 3} \cdot \varepsilon^{3} =3 \cdot \varepsilon^2 \cdot (1 - \varepsilon) + \varepsilon^3 = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big]\hspace{0.05cm}.\]

This takes into account that   $\underline{x}_0 = (0, 0, 0, 0, 0)$   and   $\underline{x}_2 = (1, 0, 1, 1, 0)$   also differ in three bit positions.

On the other hand,  the code words   $\underline{x}_0 = (0, 0, 0, 0, 0)$   and   $\underline{x}_3 = (1, 1, 1, 0, 1)$   differ in four bit positions:

  • Wrong decoding of the block therefore certainly occurs if four or three bits are falsified.
  • A falsification of two bits results in a block error with  $50\%$  probability,  if one assumes a random decision in this case  $($last term of the following equation$)$:
\[{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}\big] = \varepsilon^4 + 4 \cdot \varepsilon^3 \cdot (1 - \varepsilon) + {1}/{2} \cdot 6 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^2 \hspace{0.05cm}.\]
  • This results in the "Union Bound":
\[{\rm Pr(Union \hspace{0.15cm}Bound)} = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}\big] +{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}2}\big] +{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}3}\big] \ge {\rm Pr(block\hspace{0.15cm}error)} .\]

$\text{Example 3:}$  The table summarizes the results for different values of the BSC parameter  $\varepsilon$ .

Numeric Union Bound for the considered  $\text{(5, 2)}$ code

It is worth mentioning that the two pairwise error probabilities to be calculated,  although given by completely different expressions,

  • ${\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}\big]=3 \cdot \varepsilon^2 \cdot (1 - \varepsilon) + \varepsilon^3 = {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}\big]\hspace{0.05cm},$ 
  • ${\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}3}\big]= \varepsilon^4 + 4 \cdot \varepsilon^3 \cdot (1 - \varepsilon) + {1}/{2} \cdot 6 \cdot \varepsilon^2 \cdot (1 - \varepsilon)^2 \hspace{0.05cm},$ 


give exactly the same numerical result:  expanding both expressions yields  $3 \cdot \varepsilon^2 - 2 \cdot \varepsilon^3$.
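
The table entries can be reproduced with a few lines of Python.  A sketch of our own  (the chosen values of  $\varepsilon$  are merely examples and need not coincide with the table columns):

<syntaxhighlight lang="python">
def pr_falsify_d3(eps):
    # x0 -> x1 resp. x0 -> x2:  Hamming distance d = 3,  at least two of three bits falsified
    return 3 * eps**2 * (1 - eps) + eps**3

def pr_falsify_d4(eps):
    # x0 -> x3:  Hamming distance d = 4,  exactly two falsified bits decided randomly (factor 1/2)
    return eps**4 + 4 * eps**3 * (1 - eps) + 0.5 * 6 * eps**2 * (1 - eps)**2

for eps in (0.10, 0.05, 0.01):
    union_bound = 2 * pr_falsify_d3(eps) + pr_falsify_d4(eps)
    print(f"eps = {eps:.2f}:  Pr[x0->x1] = {pr_falsify_d3(eps):.6f},  "
          f"Pr[x0->x3] = {pr_falsify_d4(eps):.6f},  Union Bound = {union_bound:.6f}")
</syntaxhighlight>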


The upper bound according to Bhattacharyya


Another upper bound on the block error probability was given by  $\text{Anil Kumar Bhattacharyya}$ :

\[{\rm Pr(block\:error)} \le W(X = \beta) -1 = {\rm Pr(Bhattacharyya)} \hspace{0.05cm}.\]

It should be noted that:

  • The  »$\text{Bhattacharyya parameter}$«  $\beta$  identifies the digital channel.  For example:
\[\beta = \left\{ \begin{array}{c} \lambda \\ \sqrt{4 \cdot \varepsilon \cdot (1- \varepsilon)}\\ {\rm e}^{- R \hspace{0.05cm}\cdot \hspace{0.05cm}E_{\rm B}/N_0} \end{array} \right.\quad \begin{array}{*{1}c} {\rm for \hspace{0.15cm}BEC\hspace{0.15cm}model},\\ {\rm for \hspace{0.15cm}BSC\hspace{0.15cm}model}, \\ {\rm for\hspace{0.15cm}AWGN\hspace{0.15cm} model}. \end{array}\]
  • The  "Bhattacharyya Bound"  is always  (and usually significantly)  above the curve for the "Union Bound". With the goal of finding a uniform bound for all channels,  much coarser estimates must be made here than for the  "Union Bound".

We restrict ourselves here to the  »Bhattacharyya bound for the BSC model«.

  • Its pairwise error probability for two code words at Hamming distance  $d$  was derived before:
\[{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] \le \sum_{i= \left\lfloor (d-1)/2 \right\rfloor + 1}^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i} = \sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1 - \varepsilon)^{d-i}\hspace{0.05cm}.\]
Here,  $\varepsilon = {\rm Pr}(y = 1\hspace{0.04cm}|\hspace{0.04cm} x = 0) = {\rm Pr}(y = 0\hspace{0.04cm}|\hspace{0.04cm} x = 1)< 0.5$  is the falsification probability of the BSC model,  and  $d = d_{\rm H}(\underline{x}_0,\, \underline{x}_1)$  denotes the Hamming distance of the two code words considered.
  • To arrive at the Bhattacharyya bound,  the following estimate is made:  because of  $\varepsilon < 0.5$,  for all  $i \ge d/2$  holds  $\varepsilon^{i} \cdot (1 - \varepsilon)^{d-i} \le \varepsilon^{d/2} \cdot (1 - \varepsilon)^{d/2} = \big[\varepsilon \cdot (1 - \varepsilon)\big]^{d/2}$:
\[{\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] \le \big[\varepsilon \cdot (1 - \varepsilon)\big]^{d/2} \cdot \sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i} \hspace{0.05cm}.\]
  • Extending the lower limit of the summation index from  $i = \left\lceil d/2 \right\rceil$  to  $i = 0$  gives a further  (coarser)  bound:
\[\sum_{i= \left\lceil d/2 \right\rceil }^{d} {d \choose i} \hspace{0.15cm} < \hspace{0.15cm} \sum_{i= 0 }^{d} {d \choose i} = 2^d\hspace{0.05cm}, \hspace{0.2cm}\beta = 2 \cdot \sqrt{\varepsilon \cdot (1 - \varepsilon)} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr}\big[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}\big] \le \beta^{d} \hspace{0.05cm}.\]
  • Sorting the code word pairs according to the Hamming weights  $W_i$  $($Hamming distance  $d = i$  occurs  $W_i$  times$)$  yields:
\[{\rm Pr(block\:error)} \hspace{0.1cm} \le \hspace{0.1cm} \sum_{i= 1 }^{n} W_i \cdot \beta^{i} = W_1 \cdot \beta + W_2 \cdot \beta^2 + \hspace{0.05cm}\text{ ...} \hspace{0.05cm}+ W_n \cdot \beta^n \hspace{0.05cm}.\]
  • With the weight enumerator function  $W(X)= 1 + W_1 \cdot X + W_2 \cdot X^2 + \text{...} + W_n \cdot X^n$:
\[{\rm Pr(block\:error)} \le W(X = \beta) -1= {\rm Pr(Bhattacharyya)} \hspace{0.05cm}.\]

$\text{Example 4:}$  The table summarizes the Bhattacharyya results for different values of the BSC parameter  $\varepsilon$,  valid for the  $\text{(5, 2)}$  example code.

"Union Bound"  vs.  "Bhattacharyya Bound"  valid for BSC model
  • For this code applies:
$$W_0 = 1, \ \ W_1 = W_2 = 0, \ \ W_3 = 2, \ \ W_4 = 1$$
$$\Rightarrow\hspace{0.3cm} W(X) = 1 + 2 \cdot X^3 + X^4.$$
  • Thus,  the Bhattacharyya bound can be calculated:
\[ {\rm Pr(Bhattacharyya)} = W(\beta) -1 = 2 \cdot \beta^3 + \beta^4\hspace{0.05cm}.\]
  • This provides a  (often rough)  bound on the block error probability:
\[ {\rm Pr(block\:error)} \le {\rm Pr(Bhattacharyya)} \hspace{0.05cm}.\]
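
The two bounds of the  $\text{(5, 2)}$  code can be compared directly with a short script  (again a sketch of our own;  the  $\varepsilon$  values are arbitrary examples):

<syntaxhighlight lang="python">
import math

W = [1, 0, 0, 2, 1, 0]                    # distance spectrum W_0, ..., W_5 of the (5, 2) code

def bhattacharyya_bound(eps, W):
    """Pr(Bhattacharyya) = W(beta) - 1 for the BSC with beta = 2*sqrt(eps*(1-eps))."""
    beta = 2 * math.sqrt(eps * (1 - eps))
    return sum(Wi * beta**i for i, Wi in enumerate(W)) - 1

for eps in (0.10, 0.05, 0.01):
    # Union Bound of this code:  each of the three pairwise terms equals 3*eps^2 - 2*eps^3
    union = 3 * (3 * eps**2 - 2 * eps**3)
    print(f"eps = {eps:.2f}:  Union Bound = {union:.5f},  "
          f"Bhattacharyya = {bhattacharyya_bound(eps, W):.5f}")
</syntaxhighlight>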


$\text{Conclusions:}$  Based on  $\text{Example 3}$  and  $\text{Example 4}$  (in this section)  for the simple  $\text{(5, 2)}$  block code,  though of little practical relevance,  and in anticipation of  $\text{Example 5}$  (in the next section)  for the  $\text{(7, 4, 3)}$  Hamming code,  we summarize:

  1. The block error probability of a coding system is often not analytically specifiable and must be determined by simulation  (a minimal simulation sketch follows after this list).  The same applies to the bit error probability.
  2. The  »Union Bound«  provides an upper bound for the block error probability.  For many applications  $($especially short codes$)$  the Union Bound is only slightly above the actual error probability.
  3. The  »Bhattacharyya Bound«  for the BEC channel is about a factor  $2$  above the  Union Bound  – see  $\text{Exercise 1.14}$.  For the BSC and the AWGN channel,  the gap is much larger.  The factor  $10$  (and more)  is not uncommon.
  4. The Bhattacharyya Bound   ⇒   "$W(\beta) - 1$"   looks very simple at first sight.  Nevertheless,  knowledge of the exact weight enumerator function  $W(X)$  of the code is also required here.
  5. If the transmission channel  $($BEC,  BSC,  AWGN  or variations thereof$)$  and its parameters are known,  there is nothing to be said against using the  (more accurate)  "Union Bound"  as an upper bound for the block error probability.


Bounds for the (7, 4, 3) Hamming code at the AWGN channel


Finally,  we consider the block error probability and its bounds  $($"Union Bound"  and  "Bhattacharyya Bound"$)$  for the following configuration:

  1. AWGN channel,  characterized by the quotient  $E_{\rm B}/N_0$,
  2. Hamming code  $\text{HC(7, 4, 3)}$   ⇒   rate  $R = 4/7$,   weight enumerator function  $W(X)-1 = 7 \cdot X^3 + 7 \cdot X^4 + X^7$,
  3. "Soft Decision"  according to the maximum likelihood criterion.

$\text{Example 5:}$  The results are summarized in the graph.

Block error probability and bounds of the  $\text{HC (7, 4, 3)}$
  • Approximately,  the latter is smaller by the factor  $d_{\rm min}/k$  if,  as here,  $d_{\rm min}< k$.  In the present example:  $d_{\rm min}/k = 0.75$.
  • Only the points for integer  $\rm dB$  values were calculated.  The dashed lines were interpolated.
  • The  (blue)  numerical values given on the right apply to the blue vertical line at  $10 \cdot \lg \, E_{\rm B}/N_0 = 8 \, \rm dB$   ⇒   $E_{\rm B}/N_0 \approx 6.31$.


The green crosses mark the  "Union Bound".  For this applies:

\[{\rm Pr(block\:error)} \le \sum_{i= d_{\rm min} }^{n} W_i \cdot {\rm Q} \left ( \sqrt{i \cdot {2R \cdot E_{\rm B} }/{N_0} } \right ) \]
\[\Rightarrow \hspace{0.3cm}{\rm Pr(block\:error)} \approx 7 \cdot {\rm Q} (4.65) + 7 \cdot {\rm Q} (5.37) + {\rm Q} (7.10) \approx 7 \cdot 1.66 \cdot 10^{-6} + 7 \cdot 3.93 \cdot 10^{-8}+ 10^{-9} \approx 1.2 \cdot 10^{-5} \hspace{0.05cm}.\]
  • The numerical values make it clear that the Union Bound is essentially determined by the first term:
\[{\rm Pr(Union\hspace{0.15cm} Bound)} \approx W_{d_{\rm min} } \cdot {\rm Q} \left ( \sqrt{d_{\rm min} \cdot {2R \cdot E_{\rm B} }/{N_0} } \right ) = 1.16 \cdot 10^{-5} \hspace{0.05cm}.\]
  • However,  this so-called  "Truncated Union Bound"  is no longer a true bound for the block error probability in all applications,  but is to be understood as an approximation.
  • The  "Bhattacharyya Bound"  is marked by red dots in the graph.  This is well above the Union Bound due to the highly simplified  $\text{Chernoff–Rubin Bound}$ 
$${\rm Q}(x) \le {\rm e}^{-x^2/2}.$$
  • For example,  for  $10 \cdot \lg \, E_{\rm B}/N_0 = 8 \, \rm dB$  with  $\beta = {\rm e}^{-R \hspace{0.05cm}\cdot \hspace{0.05cm}E_{\rm B}/N_0} \approx 0.027$,  the result is more than ten times the Union Bound value:
\[{\rm Pr(Bhattacharyya)} = W(\beta) -1 = 7 \cdot \beta^3 + 7 \cdot \beta^4 + \beta^7 \approx 1.44 \cdot 10^{-4} \hspace{0.05cm}.\]
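
The numerical values of this example can be reproduced with the sketch below  (our own;  the  $\rm Q$  function is realized via  scipy.stats.norm.sf):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

R = 4 / 7                                         # code rate of the HC (7, 4, 3)
W = {3: 7, 4: 7, 7: 1}                            # non-zero part of the distance spectrum
ebn0 = 10 ** (8 / 10)                             # 10 * lg(E_B/N_0) = 8 dB  ->  E_B/N_0 = 6.31

union = sum(Wi * norm.sf(np.sqrt(i * 2 * R * ebn0)) for i, Wi in W.items())
beta = np.exp(-R * ebn0)                          # Bhattacharyya parameter of the AWGN model
bhatt = sum(Wi * beta**i for i, Wi in W.items())

print(f"Union Bound   : {union:.2e}")             # approx. 1.2e-05
print(f"Bhattacharyya : {bhatt:.2e}")             # approx. 1.4e-04
</syntaxhighlight>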

Exercises for the chapter


Exercise 1.14: Bhattacharyya Bound for BEC

Exercise 1.15: Distance Spectra of HC (7, 4, 3) and HC (8, 4, 4)

Exercise 1.16: Block Error Probability Bounds for AWGN

Exercise 1.16Z: Bounds for the Gaussian Error Function