Exercise 1.14: Bhattacharyya Bound for BEC

From LNTwww
{{quiz-Header|Buchseite=Channel_Coding/Limits_for_Block_Error_Probability}}
  
[[File:EN_KC_A_1_14.png|right|frame|Possible received vectors for the  $(5, 2)$ code and BEC ]]
  
In this exercise,  we consider the systematic  $(5, 2)$  code
*with the  $2×5$ generator matrix
 
   
 
   
 
:$${ \boldsymbol{\rm G}}_{(5, 2)} = \begin{pmatrix} 1 &0 &1 &1 &0\\ 0 &1 &0 &1 &1 \end{pmatrix} \hspace{0.05cm},$$
 
  
*the  $3 × 5$ parity-check matrix
 
   
 
   
 
:$${ \boldsymbol{\rm H}}_{(5, 2)} = \begin{pmatrix} 1 &0 &1 &0 &0\\ 1 &1 &0 &1 &0\\ 0 &1 &0 &0 &1 \end{pmatrix} \hspace{0.05cm},$$
 
  
*and the  $2^k = 4$  code words
 
   
 
   
 
:$$\underline{x}_0 \hspace{-0.15cm}\ = \ \hspace{-0.15cm} (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.2cm}\underline{x}_1 = (0, 1, 0, 1, 1)\hspace{0.05cm},\hspace{0.2cm}\underline{x}_2 \hspace{-0.15cm}\ = \ \hspace{-0.15cm} (1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.2cm}\underline{x}_3 = (1, 1, 1, 0, 1)\hspace{0.05cm}.$$
  
At the output of the digital channel defined by the  [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|"BEC model"]]  ("Binary Erasure Channel")  with the erasure probability  $\lambda = 0.1$,  the received vector
 
   
 
   
 
:$$\underline{y} = (y_1, \hspace{0.05cm}y_2, \hspace{0.05cm}y_3, \hspace{0.05cm}y_4, \hspace{0.05cm}y_5)$$
 
  
occurs,  where for  $i = 1, \ \text{...} \ , 5$  holds:    
:$$y_{i} \in \{0, 1, \rm E\}.$$
  
The BEC channel is characterized by the fact that
*falsifications  $(0 → 1,\ 1 → 0)$  are excluded,
*but erasures  $(0 → \rm E,\ 1 → E)$  may occur.
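This behavior can be modeled directly.  The following minimal Python sketch  (the helper name  `bec`  is illustrative, not part of the exercise)  erases each bit with probability  $\lambda$  and never flips it:

```python
import random

def bec(codeword, lam, rng=random):
    """Binary Erasure Channel: each bit is erased ('E') with probability
    lam and passed through unchanged otherwise; falsifications
    (0 -> 1 or 1 -> 0) never occur."""
    return tuple('E' if rng.random() < lam else bit for bit in codeword)

# Example: transmit the all-zero code word
y = bec((0, 0, 0, 0, 0), lam=0.1)
print(y)   # each y_i is either 0 or 'E', never 1
```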
  
The graph explicitly shows all possible received vectors   $\underline{y}$   with three or more erasures $\rm (E)$  assuming that the all-zero vector   $(0, 0, 0, 0, 0)$  was sent.
*For fewer than three erasures,  the  "code word finder"  of the considered  $(5, 2)$  code always returns the correct decision:   $\underline{z} = \underline{x}$.
 
*On the other hand,  if there are three or more erasures,  wrong decisions may occur.  In this case,  the following applies to the block error probability:
 
   
 
   
:$$ {\rm Pr(block\:error)}= {\rm Pr} (\underline{z} \ne \underline{x}) = {\rm Pr}\left \{ \hspace{0.1cm} [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}] \hspace{0.05cm}\cup\hspace{0.05cm}[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}] \hspace{0.05cm}\cup \hspace{0.05cm}[\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}] \hspace{0.1cm}\right \} \hspace{0.05cm}.$$
 
Please note:
  
*The event  $[\underline{x}_{0} → \underline{x}_{1}]$  does not necessarily mean that,  for the received vector  $\underline{y}$  under consideration,  the decision is actually made for the code word  $\underline{x}_{1}$,  but only that,  due to the statistics,  the decision for  $\underline{x}_{1}$  would be more reasonable than the decision for  $\underline{x}_{0}$.
*But the decision could also be for  $\underline{x}_{2}$  or  $\underline{x}_{3}$  if the  [[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|"maximum-likelihood criterion"]]  favors one of them.
  
 
  
Computing the block error probability is difficult because the events  $[\underline{x}_{0} → \underline{x}_{1}]$,  $[\underline{x}_{0} → \underline{x}_{2}]$  and  $[\underline{x}_{0} → \underline{x}_{3}]$  are not necessarily  [[Theory_of_Stochastic_Signals/Set_Theory_Basics#Disjoint_sets|"disjoint"]].  An upper bound is provided by the  [[Channel_Coding/Limits_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|"Union Bound"]]:
  
:$${\rm Pr(Union \hspace{0.15cm}Bound)} = {\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}] +{\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}2}] +{\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}3}] \ge {\rm Pr(block\hspace{0.15cm}error)} \hspace{0.05cm}.$$
 
Another bound was given by Bhattacharyya:
  
:$${\rm Pr(Bhattacharyya)} = W(\beta)-1 \ge {\rm Pr(Union \hspace{0.15cm}Bound)} \ge {\rm Pr(block\:error)} \hspace{0.05cm},$$
 
   
 
   
where,  for the binary erasure channel,  the Bhattacharyya parameter is  $\beta = \lambda$,  and  $W(X)$  denotes the  [[Channel_Coding/Limits_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code|"weight enumerator function"]],  in which the pseudo-variable  $X$  is to be replaced by the Bhattacharyya parameter  $\beta = \lambda$.
  
*Depending on the channel,  the  "Bhattacharyya Bound"  lies more or less far above the  "Union Bound".
 
   
 
   
*Its importance lies in the fact that the bound can be specified in the same way for different channels.
  
 
  
  
  
  
Hints:  The exercise belongs to the chapter  [[Channel_Coding/Limits_for_Block_Error_Probability|"Bounds on block error probability"]].
  
  
  
 
  
 
===Questions===
 
 
 
 
 
 
<quiz display=simple>
 
{What is the pairwise error probability between the code words &nbsp; $\underline{x}_{0} = (0, 0, 0, 0, 0)$ &nbsp; and &nbsp; $\underline{x}_{1} = (0, 1, 0, 1, 1)$?
 
|type="{}"}
 
 
${\rm Pr}[\underline{x}_{0} → \underline{x}_{1}] \ = \ $ { 0.5 3% }$\ \cdot 10^{-3} $
 
  
{Which statements are true regarding&nbsp; ${\rm Pr}[\underline{x}_{0} → \underline{x}_{i}]$&nbsp; with index&nbsp; $i = 1, \ \text{...} \ , 3$? &nbsp; $d_{{\rm H},\hspace{0.05cm}i}$&nbsp; denotes here the Hamming distance between&nbsp; $\underline{x}_{0}$&nbsp; and&nbsp; $\underline{x}_{i}$.
 
|type="()"}
 
  
- It holds&nbsp; ${\rm Pr}[\underline{x}_{0} → \underline{x}_{i}] \ = \ \lambda ^{d_{{\rm H},\hspace{0.05cm}i}} \  · \  (1 – \lambda)^{n \hspace{0.05cm}– \hspace{0.05cm}d_{{\rm H},\hspace{0.05cm}i}}$.
+ It holds&nbsp; ${\rm Pr}[\underline{x}_{0} → \underline{x}_{i}] \ = \ 1/2 · \lambda ^{d_{{\rm H},\hspace{0.05cm}i}}.$
- ${\rm Pr}[\underline{x}_{0} → \underline{x}_{i}]$&nbsp; is the falsification probability from&nbsp; $\underline{x}_{0}$&nbsp; to&nbsp; $\underline{x}_{i}$.
  
{What are the following probabilities?
 
|type="{}"}
 
 
$\ {\rm Pr}[\underline{x}_{0} → \underline{x}_{2}] \ = \ $ { 0.5 3% } $\ \cdot 10^{-3} $
 
 
$\ {\rm Pr}[\underline{x}_{0} → \underline{x}_{3}] \ = \ $ { 0.05 3% } $\ \cdot 10^{-3} $
 
  
{Specify the&nbsp; "Union Bound"&nbsp; for the block error probability.
 
|type="{}"}
 
 
$\ {\rm Pr(Union\  Bound)} \ = \ ${ 1.05 3% } $\ \cdot 10^{-3} $
 
  
{What is the&nbsp; "Bhattacharyya Bound"&nbsp; in the present case?
 
|type="{}"}
 
 
$\ {\rm Pr(Bhattacharyya)} \ = \ ${ 2.1 3% } $\ \cdot 10^{-3} $
 
 
</quiz>
 
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; The code words&nbsp; $\underline{x}_{0}$&nbsp; and&nbsp; $\underline{x}_{1}$&nbsp; differ in bits&nbsp; $2, \ 4$&nbsp; and&nbsp; $5$.&nbsp; If only one of these three binary values is transmitted correctly,&nbsp; the entire code word is uniquely determined.&nbsp; No information about the code word is obtained for the following received vectors&nbsp; (see table in the information section):
* $\underline{y} = (0, {\rm E}, 0, {\rm E}, {\rm E})$&nbsp; with probability&nbsp; $\lambda^3 \ · \ (1 – \lambda)^2$,
* $\underline{y} = (0, {\rm E}, {\rm E}, {\rm E}, {\rm E})$&nbsp; with probability&nbsp; $\lambda^4 \ · \ (1 – \lambda)$,
* $\underline{y} = ({\rm E}, {\rm E}, 0, {\rm E}, {\rm E})$&nbsp; with probability $\lambda^4 \ · \ (1 – \lambda)$,
* $\underline{y} = ({\rm E}, {\rm E}, {\rm E}, {\rm E}, {\rm E})$ with probability&nbsp; $\lambda^5$.
  
  
The probability that due to the specific received vector&nbsp;  $\underline{y}$,&nbsp; the code word&nbsp; $\underline{x}_{1}$&nbsp; is just as likely as&nbsp; $\underline{x}_{0}$,&nbsp; is given by
 
   
 
   
:$$\ {\rm Pr}\ [\underline{x}_0 \hspace{0.12cm}{\rm and}\hspace{0.12cm} \underline{x}_1 \hspace{0.15cm}{\rm are \hspace{0.15cm}equally \hspace{0.15cm}probable}] = \lambda^3 \cdot (1- \lambda)^2 + 2 \cdot \lambda^4 \cdot (1- \lambda) + \lambda^5 =\lambda^3 \cdot \left [ (1- \lambda)^2 + 2 \cdot \lambda \cdot (1- \lambda) + \lambda^2 \right ] = \lambda^3 \hspace{0.05cm}.$$
  
In this case,&nbsp; one decides at random for&nbsp; $\underline{x}_{0}$&nbsp; (would be correct)&nbsp; or for&nbsp; $\underline{x}_{1}$&nbsp; (unfortunately wrong),&nbsp; with equal probability.&nbsp; From this follows:
  
 
:$${\rm Pr} [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}1}] = 1/2 \cdot \lambda^3 \hspace{0.15cm} \underline{= 0.5 \cdot 10^{-3}} \hspace{0.05cm}.$$
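This result can also be cross-checked by brute force.  The following Python sketch  (with  $\lambda = 0.1$,  the value for which  $\lambda^3/2 = 0.5 \cdot 10^{-3}$  holds)  enumerates all erasure patterns given that  $\underline{x}_0$  was sent,  adds up the probabilities of those received vectors that are also consistent with  $\underline{x}_1$,  and halves the sum for the random tie-break:

```python
from itertools import product

lam = 0.1                      # erasure probability
x0 = (0, 0, 0, 0, 0)
x1 = (0, 1, 0, 1, 1)

# Given that x0 was sent, each position is either received correctly
# (probability 1 - lam) or erased (probability lam).
p_tie = 0.0
for erased in product([False, True], repeat=5):
    y = tuple('E' if e else b for b, e in zip(x0, erased))
    p = 1.0
    for e in erased:
        p *= lam if e else 1.0 - lam
    # y is consistent with x1 iff every non-erased position matches x1
    if all(y[i] == 'E' or y[i] == x1[i] for i in range(5)):
        p_tie += p             # x0 and x1 are then equally likely

pr_x0_to_x1 = p_tie / 2        # fair random decision on a tie
print(pr_x0_to_x1)             # approx. 0.0005 = 0.5 * 10^-3
```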
  
  
'''(2)'''&nbsp; According to subtask&nbsp; '''(1)''',&nbsp; <u>only answer 2</u>&nbsp; is correct:  
*${\rm Pr}[\underline{x}_{0} → \underline{x}_{1}]$&nbsp; does not state that with this probability the code word&nbsp; $\underline{x}_{0}$&nbsp; actually transitions to the incorrect code word&nbsp; $\underline{x}_{1}$,&nbsp; but only that it could transition to&nbsp; $\underline{x}_{1}$&nbsp; with this probability.
* ${\rm Pr}[\underline{x}_{0} → \underline{x}_{1}]$&nbsp; also includes constellations where the decision is actually for&nbsp; $\underline{x}_{2}$&nbsp; or&nbsp; $\underline{x}_{3}$.
  
  
  
  
'''(3)'''&nbsp; Because of&nbsp; $d_{\rm H}(\underline{x}_{0}, \underline{x}_{2}) = 3$&nbsp; and&nbsp; $d_{\rm H}(\underline{x}_{0}, \underline{x}_{3}) = 4$,&nbsp; it follows that
 
   
 
   
:$${\rm Pr} [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}2}] = 1/2 \cdot \lambda^3 \hspace{0.15cm} \underline{= 0.5 \cdot 10^{-3}} \hspace{0.05cm},$$
:$$ {\rm Pr} [\underline{x}_{\hspace{0.02cm}0} \mapsto \underline{x}_{\hspace{0.02cm}3}] = 1/2 \cdot \lambda^4 \hspace{0.15cm} \underline{= 0.05 \cdot 10^{-3}} \hspace{0.05cm}.$$
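With the relation  ${\rm Pr}[\underline{x}_{0} → \underline{x}_{i}] = 1/2 \cdot \lambda ^{d_{{\rm H},\hspace{0.05cm}i}}$  from subtask  '''(2)''',  all three pairwise error probabilities follow from the Hamming distances alone;  a minimal Python sketch  (again with  $\lambda = 0.1)$:

```python
lam = 0.1
codewords = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1),
             (1, 0, 1, 1, 0), (1, 1, 1, 0, 1)]

def hamming(a, b):
    """Number of positions in which a and b differ."""
    return sum(u != v for u, v in zip(a, b))

# Pairwise error probability Pr[x0 -> xi] = 1/2 * lam^d_H for the BEC
for i in (1, 2, 3):
    d = hamming(codewords[0], codewords[i])
    print(i, d, 0.5 * lam ** d)
```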
  
  
  
'''(4)'''&nbsp; The block error probability is never larger&nbsp; (with a certain probability rather smaller)&nbsp; than the so-called&nbsp; "Union Bound":
 
   
 
   
 
:$${\rm Pr(Union \hspace{0.15cm}Bound)} \hspace{-0.15cm}\ = \ \hspace{-0.15cm} {\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm}\underline{x}_{\hspace{0.02cm}1}] +{\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}2}] +{\rm Pr}[\underline{x}_{\hspace{0.02cm}0} \hspace{-0.02cm}\mapsto \hspace{-0.02cm} \underline{x}_{\hspace{0.02cm}3}] = 2 \cdot \lambda^3/2 + \lambda^4/2 = 0.001 + 0.00005 \hspace{0.15cm} \underline{= 1.05 \cdot 10^{-3}} \hspace{0.05cm}.$$
 
  
  
'''(5)'''&nbsp; Generally speaking:
:$${\rm Pr(block\:error)} \le {\rm Pr(Union \hspace{0.15cm}Bound)}  \le {\rm Pr(Bhattacharyya)} = W(\beta) - 1.$$
  
*For the distance spectrum or the weight enumerator function,&nbsp; we obtain in the present case:
 
   
 
   
 
:$$W_0 = 1 \hspace{0.05cm}, \hspace{0.2cm} W_3 = 2 \hspace{0.05cm}, \hspace{0.2cm}W_4 = 1 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} W(X) = 1+ 2 \cdot X^{3} +X^{4} \hspace{0.05cm}.$$
 
  
*For the BEC channel,&nbsp; $\beta = \lambda$&nbsp; also applies.&nbsp; From this follows the final result for&nbsp; $\lambda = 0.1$:
 
   
 
   
 
:$${\rm Pr(Bhattacharyya)} = 2 \cdot \lambda^3 + \lambda^4 \hspace{0.15cm} \underline{= 2.1 \cdot 10^{-3}} \hspace{0.05cm}.$$
  
*Note that with the BEC model,&nbsp; the&nbsp; "Bhattacharyya Bound"&nbsp; is always twice as large as the&nbsp; "Union Bound",&nbsp; which is itself an upper bound on the block error probability.
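Both bounds can be evaluated in a few lines.  The following sketch uses the weight enumerator function  $W(X) = 1+ 2 \cdot X^{3} +X^{4}$  derived above with  $\beta = \lambda = 0.1$:

```python
lam = 0.1                                  # beta = lambda for the BEC

def W(X):
    """Weight enumerator function of the (5, 2) code."""
    return 1 + 2 * X ** 3 + X ** 4

pr_bhattacharyya = W(lam) - 1              # = 2 * lam^3 + lam^4
pr_union_bound = lam ** 3 + lam ** 4 / 2   # from subtasks (1) and (3)

print(pr_bhattacharyya, pr_union_bound)
# For the BEC, the Bhattacharyya Bound is exactly twice the Union Bound.
```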
 
{{ML-Fuß}}
 
  
  
  
[[Category:Channel Coding: Exercises|^1.6 Error Probability Bounds^]]

Latest revision as of 18:03, 23 January 2023
