Exercise 3.14: Error Probability Bounds

From LNTwww
{{quiz-Header|Buchseite=Channel_Coding/Distance_Characteristics_and_Error_Probability_Barriers}}
  
[[File:P_ID2713__KC_A_3_14.png|right|frame|Bhattacharyya bound and the Viterbi bound for the BSC model (incomplete table)]]
For the frequently used convolutional code with
* the code rate  $R = 1/2$,
* the memory  $m = 2$, and
* the transfer function matrix
:$${\boldsymbol{\rm G}}(D) = \big ( 1 + D + D^2\hspace{0.05cm},\hspace{0.1cm} 1 + D^2 \hspace{0.05cm}\big ) $$
  
the  [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Barriers#Enhanced_path_weighting_enumerator_function|"extended path weighting enumerator function"]] is:
 
:$$T_{\rm enh}(X, U) =  \frac{UX^5}{1- 2 \hspace{0.05cm}U \hspace{-0.05cm}X}  \hspace{0.05cm}.$$
  
With the frequently used series expansion  $1/(1-x) = 1 + x + x^2 + \text{...} \ $,  this can also be written as:
 
:$$T_{\rm enh}(X, U) = U X^5 \cdot \left [ 1 + (2 \hspace{0.05cm}U \hspace{-0.05cm}X) + (2 \hspace{0.05cm}U\hspace{-0.05cm}X)^2 + (2 \hspace{0.05cm}U\hspace{-0.05cm}X)^3 +\text{...}  \hspace{0.10cm} \right ]  \hspace{0.05cm}.$$
  
The "simple" path weighting enumerator function  $T(X)$  is obtained by setting the second variable to  $U = 1$.
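The equivalence of the closed form and its geometric series expansion is easy to check numerically. The following Python sketch (not part of the original exercise; all names are chosen for illustration) compares the closed form with a truncated partial sum:

```python
# Closed form of the extended path weighting enumerator function
# T_enh(X, U) = U*X^5 / (1 - 2*U*X), valid for |2*U*X| < 1.
def T_enh(X, U):
    return U * X**5 / (1 - 2 * U * X)

# Truncated geometric series U*X^5 * [1 + (2UX) + (2UX)^2 + ...].
def T_enh_series(X, U, terms=50):
    return U * X**5 * sum((2 * U * X)**k for k in range(terms))

X, U = 0.1, 0.9   # arbitrary test point with |2*U*X| < 1
print(T_enh(X, U), T_enh_series(X, U))  # both values agree
```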
  
Using these two functions, error probability bounds can be specified:
* The&nbsp; <i>burst error probability</i>&nbsp; is bounded by the&nbsp; <b>Bhattacharyya bound</b>:
:$${\rm Pr(Burst\:error)} \le {\rm Pr(Bhattacharyya)} = T(X = \beta) \hspace{0.05cm}.$$
  
* In contrast, the&nbsp; <i>bit error probability</i>&nbsp; is always less than (or equal to) the&nbsp; <b>Viterbi bound</b>:
::<math>{\rm Pr(Bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} }  
\hspace{0.05cm}.</math>
  
  
  
''Hints:''
* The exercise belongs to the chapter&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Barriers| "Distance Characteristics and Error Probability Barriers"]].  
* The Bhattacharyya parameter for the BSC model is: &nbsp; $\beta = 2 \cdot \sqrt{\varepsilon \cdot (1- \varepsilon)}$.
* In the above table, the following quantities are given for some values of the BSC parameter&nbsp; $\varepsilon$:
:*&nbsp; the Bhattacharyya parameter&nbsp; $\beta$,
:*&nbsp; the Bhattacharyya bound&nbsp; ${\rm Pr}(\rm Bhattacharyya)$, and
:* &nbsp; the Viterbi bound&nbsp; $\rm Pr(Viterbi)$.
* In the course of this exercise, you are to compute the corresponding quantities for&nbsp; $\varepsilon = 10^{-2}$&nbsp; and&nbsp; $\varepsilon = 10^{-4}$.
* You can find the complete table in the sample solution.
 
   
 
   
  
  
  
===Questions===
 
<quiz display=simple>
{What is the Bhattacharyya parameter for the BSC model?
 
|type="{}"}
$\varepsilon = 10^{&ndash;2} \text{:} \hspace{0.4cm} \beta \ = \ ${ 0.199 3% }
$\varepsilon = 10^{&ndash;4} \text{:} \hspace{0.4cm} \beta \ = \ ${ 0.02 3% }
  
{What is the Bhattacharyya bound?
 
|type="{}"}
$\varepsilon = 10^{-2} \text{:} \hspace{0.4cm} {\rm Pr(Bhattacharyya)} \ = \ ${ 5.18 3% } $\ \cdot 10^{&ndash;4}$
$\varepsilon = 10^{-4} \text{:} \hspace{0.4cm} {\rm Pr(Bhattacharyya)} \ = \ ${ 3.33 3% } $\ \cdot 10^{&ndash;9}$
  
{What is the Viterbi bound?
 
|type="{}"}
$\varepsilon = 10^{-2} \text{:} \hspace{0.4cm} {\rm Pr(Viterbi)} \ = \ ${ 8.61 3% } $\ \cdot 10^{&ndash;4}$
$\varepsilon = 10^{-4} \text{:} \hspace{0.4cm} {\rm Pr(Viterbi)} \ = \ ${ 3.47 3% } $\ \cdot 10^{&ndash;9}$
  
{For which values&nbsp; $\varepsilon < \varepsilon_0$&nbsp; are both bounds not applicable?
 
|type="{}"}
$\varepsilon_0 \ = \ ${ 0.067 3% }
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; For the BSC model with $\varepsilon = 0.01$, the Bhattacharyya parameter is
 
:$$\beta = 2 \cdot \sqrt{\varepsilon \cdot (1- \varepsilon)} = 2 \cdot \sqrt{0.01 \cdot 0.99} \hspace{0.2cm}\underline {\approx 0.199}
\hspace{0.05cm}.$$
  
For even smaller corruption probabilities $\varepsilon$, one can write approximately:
 
:$$\beta \approx 2 \cdot \sqrt{\varepsilon } \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \varepsilon = 10^{-4}\hspace{-0.1cm}: \hspace{0.2cm} \beta \hspace{0.2cm}\underline {\approx 0.02}
\hspace{0.05cm}.$$
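Both values can be reproduced with a few lines of Python (an illustrative sketch, not part of the original exercise):

```python
from math import sqrt

# Bhattacharyya parameter of the BSC with crossover probability eps
def beta(eps):
    return 2 * sqrt(eps * (1 - eps))

print(round(beta(1e-2), 3))   # exact formula for eps = 0.01
print(round(beta(1e-4), 3))   # exact formula for eps = 0.0001
print(2 * sqrt(1e-4))         # approximation 2*sqrt(eps)
```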
  
  
'''(2)'''&nbsp; ${\rm Pr(Burst\:error)} \le {\rm Pr(Bhattacharyya)}$ holds, with ${\rm Pr(Bhattacharyya)} = T(X = \beta)$.  
*For the considered rate $1/2$ convolutional code with memory $m = 2$ and $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$, the path weighting enumerator function is:
 
:$$T(X) = \frac{X^5 }{1- 2X} \hspace{0.3cm} \Rightarrow \hspace{0.3cm}
{\rm Pr(Bhattacharyya)} = T(X = \beta) = \frac{\beta^5 }{1- 2\beta}$$
:$$\Rightarrow \hspace{0.3cm}\varepsilon = 10^{-2}\hspace{-0.1cm}: \hspace{0.1cm} {\rm Pr(Bhattacharyya)} \hspace{-0.15cm} \ = \ \hspace{-0.15cm} \frac{0.199^5 }{1- 2\cdot 0.199} \hspace{0.2cm}\underline {\approx 5.18 \cdot 10^{-4}}\hspace{0.05cm},$$
:$$\hspace{0.85cm} \varepsilon = 10^{-4}\hspace{-0.1cm}: \hspace{0.1cm} {\rm Pr(Bhattacharyya)} \hspace{-0.15cm} \ = \ \hspace{-0.15cm} \frac{0.02^5 }{1- 2\cdot 0.02} \hspace{0.38cm}\underline {\approx 3.33 \cdot 10^{-9}}\hspace{0.05cm}.$$
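For a numerical cross-check, the Bhattacharyya bound can be evaluated directly (illustrative Python sketch, not part of the original exercise):

```python
from math import sqrt

def pr_bhattacharyya(eps):
    # Bhattacharyya bound T(beta) = beta^5 / (1 - 2*beta) for the BSC
    b = 2 * sqrt(eps * (1 - eps))
    return b**5 / (1 - 2 * b)

print(pr_bhattacharyya(1e-2))   # about 5.18e-4
print(pr_bhattacharyya(1e-4))   # about 3.33e-9
```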
  
  
'''(3)'''&nbsp; To calculate the Viterbi bound, we start from the extended path weighting enumerator function:
 
:$$T_{\rm enh}(X, U) =  \frac{U  X^5}{1- 2UX}  \hspace{0.05cm}.$$
* The derivative of this function with respect to the input parameter $U$ is:
 
:$$\frac {\rm d}{{\rm d}U}\hspace{0.1cm}T_{\rm enh}(X, U) = \frac{(1- 2UX) \cdot X^5 - U  X^5 \cdot (-2X)}{(1- 2UX)^2}
 =  \frac{ X^5}{(1- 2UX)^2}
\hspace{0.05cm}.$$
* This equation provides the Viterbi bound for $U = 1$ and $X = \beta$:
 
:$${\rm Pr(Viterbi)} = \left [ \frac {\rm d}{{\rm d}U}\hspace{0.1cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} }
 =  \frac{ \beta^5}{(1- 2\beta)^2}
\hspace{0.05cm}.$$
:$$\Rightarrow \hspace{0.3cm}\varepsilon = 10^{-2}\hspace{-0.1cm}: \hspace{0.1cm} {\rm Pr(Viterbi)} \hspace{-0.15cm} \ = \ \hspace{-0.15cm} \frac{0.199^5 }{(1- 2\cdot 0.199)^2} \hspace{0.2cm}\underline {\approx 8.61 \cdot 10^{-4}}\hspace{0.05cm},$$
:$$\hspace{0.85cm} \varepsilon = 10^{-4}\hspace{-0.1cm}: \hspace{0.1cm} {\rm Pr(Viterbi)} \hspace{-0.15cm} \ = \ \hspace{-0.15cm} \frac{0.02^5 }{(1- 2\cdot 0.02)^2} \hspace{0.38cm}\underline {\approx 3.47 \cdot 10^{-9}}\hspace{0.05cm}.$$
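The closed-form evaluation can again be checked with a short Python sketch (illustration only, not part of the original exercise):

```python
from math import sqrt

def pr_viterbi(eps):
    # Viterbi bound: derivative of T_enh w.r.t. U, i.e. X^5/(1-2UX)^2,
    # evaluated at U = 1 and X = beta (BSC)
    b = 2 * sqrt(eps * (1 - eps))
    return b**5 / (1 - 2 * b)**2

print(pr_viterbi(1e-2))   # about 8.61e-4
print(pr_viterbi(1e-4))   # about 3.47e-9
```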
  
*We check the result using the following approximation:
 
:$$T_{\rm enh}(X, U) = U X^5 + 2\hspace{0.05cm}U^2 X^6 + 4\hspace{0.05cm}U^3 X^7 + 8\hspace{0.05cm}U^4 X^8 + \text{...} $$
:$$\Rightarrow \hspace{0.3cm}\frac {\rm d}{{\rm d}U}\hspace{0.1cm}T_{\rm enh}(X, U) = X^5 + 4\hspace{0.05cm}U X^6 + 12\hspace{0.05cm}U^2 X^7 + 32\hspace{0.05cm}U^3 X^8 + \text{...} $$
  
*Setting $U = 1$ and $X = \beta$, we again obtain the Viterbi bound:
 
:$${\rm Pr(Viterbi)}  \hspace{-0.15cm} \ = \ \hspace{-0.15cm} \beta^5 + 4\hspace{0.05cm}\beta^6 + 12\hspace{0.05cm}\beta^7 + 32\hspace{0.05cm}\beta^8 +\text{...}
= \beta^5 \cdot (1+ 4\hspace{0.05cm}\beta + 12\hspace{0.05cm}\beta^2 + 32\hspace{0.05cm}\beta^3 + \text{...} )\hspace{0.05cm}. $$
  
*For $\varepsilon = 10^{-2} \ \Rightarrow \ \beta = 0.199$, truncating the infinite sum after the $\beta^3$ term yields:
 
:$${\rm Pr(Viterbi)} \approx 3.12 \cdot 10^{-4} \cdot (1 + 0.796 + 0.475 + 0.252) = 7.87 \cdot 10^{-4}
\hspace{0.05cm}.$$
  
*The truncation error (relative to $8.61 \cdot 10^{-4}$) is about $8.6\%$ here. For $\varepsilon = 10^{-4} \ \Rightarrow \ \beta = 0.02$, the truncation error is even smaller:
 
:$${\rm Pr(Viterbi)} \approx 3.20 \cdot 10^{-9} \cdot (1 + 0.08 + 0.0048 + 0.0003) = 3.47 \cdot 10^{-9}
\hspace{0.05cm}.$$
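The truncated series can also be evaluated programmatically. In the sketch below (not part of the original exercise), the general term $(k+1) \cdot (2\beta)^k$ reproduces the coefficients $1,\ 4\beta,\ 12\beta^2,\ 32\beta^3$ of the derivative series above:

```python
from math import sqrt

def pr_viterbi_series(eps, terms=4):
    # Truncated series beta^5 * (1 + 4*beta + 12*beta^2 + 32*beta^3 + ...)
    # with general term (k+1) * (2*beta)^k
    b = 2 * sqrt(eps * (1 - eps))
    return b**5 * sum((k + 1) * (2 * b)**k for k in range(terms))

print(pr_viterbi_series(1e-2))  # about 7.87e-4 (truncation visible)
print(pr_viterbi_series(1e-4))  # about 3.47e-9 (truncation negligible)
```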
  
  
[[File:P_ID2714__KC_A_3_14c.png|right|frame|Bhattacharyya bound and the Viterbi bound for the BSC model (complete table)]]
'''(4)'''&nbsp; For $\beta = 0.5$, both bounds yield the value "infinity".  
  
*For even larger values of $\beta$, the Bhattacharyya bound becomes negative, and the Viterbi bound is then not applicable either. It follows that:
 
:$$\beta_0 = 2 \cdot \sqrt{\varepsilon_0 \cdot (1- \varepsilon_0)} = 0.5$$
:$$\Rightarrow \hspace{0.3cm} {\varepsilon_0 \cdot (1- \varepsilon_0)} = 0.25^2 = 0.0625$$

:$$\Rightarrow \hspace{0.3cm} \varepsilon_0^2 - \varepsilon_0 + 0.0625 = 0$$
:$$\Rightarrow \hspace{0.3cm} \varepsilon_0 = 0.5 \cdot (1 - \sqrt{0.75}) \hspace{0.15cm} \underline {\approx 0.067}\hspace{0.05cm}.$$
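The boundary value can be verified numerically (illustrative Python sketch, not part of the original exercise):

```python
from math import sqrt

# Smaller root of eps0^2 - eps0 + 0.0625 = 0,
# i.e. the eps0 with beta_0 = 2*sqrt(eps0*(1 - eps0)) = 0.5
eps0 = 0.5 * (1 - sqrt(0.75))
print(eps0)                          # about 0.067
print(2 * sqrt(eps0 * (1 - eps0)))   # check: gives beta_0 = 0.5
```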