Exercise 3.10: Mutual Information at the BSC

{{quiz-Header|Buchseite=Information_Theory/Application_to_Digital_Signal_Transmission
}}
  
[[File:P_ID2787__Inf_A_3_9.png|right|frame|BSC model considered]]
We consider the&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Channel_.E2.80.93_BSC|Binary Symmetric Channel]]&nbsp; $\rm (BSC)$.&nbsp; The following parameter values apply throughout the exercise:
* Crossover probability: &nbsp; $\varepsilon = 0.1$,
* Probability for&nbsp; $0$: &nbsp; $p_0 = 0.2$,
* Probability for&nbsp; $1$: &nbsp; $p_1 = 0.8$.

Thus the probability mass function of the source is &nbsp; $P_X(X)= (0.2, \ 0.8)$, &nbsp; and the source entropy is:
:$$H(X) = p_0 \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{p_0} + p_1\cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{p_1} = H_{\rm bin}(0.2)={ 0.7219\,{\rm bit}} \hspace{0.05cm}.$$
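This value can be cross-checked in a few lines of Python. The following is a minimal sketch added for illustration only; the helper name <code>h_bin</code> is our own choice:
<syntaxhighlight lang="python">
import math

def h_bin(p):
    """Binary entropy function H_bin(p) in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

print(round(h_bin(0.2), 4))   # 0.7219  ->  H(X) in bit, as stated above
</syntaxhighlight>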
  
The task is to determine:
* the probability function of the sink:
:$$P_Y(Y) = (\hspace{0.05cm}P_Y(0)\hspace{0.05cm}, \ \hspace{0.05cm} P_Y(1)\hspace{0.05cm}) \hspace{0.05cm},$$
* the joint probability function:
:$$P_{XY}(X, Y) = \begin{pmatrix} p_{00}  & p_{01}\\ p_{10}  & p_{11} \end{pmatrix}  \hspace{0.05cm},$$
* the mutual information:
:$$I(X;Y) = {\rm E} \hspace{-0.08cm}\left [ \hspace{0.02cm}{\rm log}_2 \hspace{0.1cm} \frac{P_{XY}(X, Y)} {P_{X}(X) \cdot P_{Y}(Y) }\right ]  \hspace{0.05cm},$$
* the equivocation:
:$$H(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y) =  {\rm E} \hspace{0.02cm} \big [ \hspace{0.02cm} {\rm log}_2 \hspace{0.1cm} \frac{1}{P_{\hspace{0.03cm}X \mid \hspace{0.03cm} Y} (X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y)} \big ] \hspace{0.05cm},$$
* the irrelevance:
:$$H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) =  {\rm E} \hspace{0.02cm} \big [ \hspace{0.02cm} {\rm log}_2 \hspace{0.1cm} \frac{1}{P_{\hspace{0.03cm}Y \mid \hspace{0.03cm} X} (Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X)} \big ] \hspace{0.05cm}.$$
  
Hints:
*The exercise belongs to the chapter&nbsp; [[Information_Theory/Anwendung_auf_die_Digitalsignalübertragung|Application to Digital Signal Transmission]].
*Reference is made in particular to the page&nbsp; [[Information_Theory/Anwendung_auf_die_Digitalsignalübertragung#Calculation_of_the_mutual_information_for_the_binary_channel|Mutual information calculation for the binary channel]].
*In&nbsp; [[Aufgaben:Aufgabe_3.10Z:_BSC–Kanalkapazität|Exercise 3.10Z]]&nbsp; the channel capacity&nbsp; $C_{\rm BSC }$&nbsp; of the BSC model is calculated.
*It is obtained as the maximum of the mutual information&nbsp; $I(X;\ Y)$&nbsp; over the symbol probabilities&nbsp; $p_0$&nbsp; and&nbsp; $p_1 = 1 - p_0$, as sketched below.
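The maximization mentioned in the last hint can be illustrated numerically. The following sketch is our own addition (function names are hypothetical); it scans&nbsp; $p_0$&nbsp; on a grid and confirms that for&nbsp; $\varepsilon = 0.1$&nbsp; the maximum lies at&nbsp; $p_0 = 0.5$,&nbsp; i.e.&nbsp; $C_{\rm BSC} = 1 - H_{\rm bin}(0.1) \approx 0.531$&nbsp; bit:
<syntaxhighlight lang="python">
import math

def h_bin(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info_bsc(p0, eps):
    # For the BSC, H(Y|X) = H_bin(eps), so I(X;Y) = H(Y) - H_bin(eps)
    pr_y0 = p0 * (1 - eps) + (1 - p0) * eps   # Pr(Y = 0)
    return h_bin(pr_y0) - h_bin(eps)

eps = 0.1
i_max, p0_opt = max((mutual_info_bsc(k / 1000, eps), k / 1000)
                    for k in range(1, 1000))
print(round(i_max, 4), p0_opt)   # 0.531 0.5  ->  C_BSC = 1 - H_bin(0.1)
</syntaxhighlight>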
  
  
===Questions===

<quiz display=simple>

{Calculate the joint probabilities&nbsp; $P_{ XY }(X, Y)$.
|type="{}"}
$P_{ XY }(0, 0) \ = \ $ { 0.18 3% }
$P_{ XY }(0, 1) \ = \ $ { 0.02 3% }
$P_{ XY }(1, 0) \ = \ $ { 0.08 3% }
$P_{ XY }(1, 1) \ = \ $ { 0.72 3% }
  
{What is the probability mass function&nbsp; $P_Y(Y)$&nbsp; of the sink?
|type="{}"}
$P_Y(0)\ = \ $ { 0.26 3% }
$P_Y(1) \ = \ $ { 0.74 3% }
  
{What is the value of the mutual information&nbsp; $I(X;\ Y)$?
|type="{}"}
$I(X; Y)\ = \ $ { 0.3578 3% } $\ \rm bit$
  
{Which value results for the equivocation&nbsp; $H(X|Y)$?
|type="{}"}
$H(X|Y) \ = \ $ {  0.3642 3% } $\ \rm bit$
  
  
{Which statement is true for the sink entropy&nbsp; $H(Y)$?
|type="[]"}
- $H(Y)$&nbsp; is never greater than&nbsp; $H(X)$.
+ $H(Y)$&nbsp; is never smaller than&nbsp; $H(X)$.
  
{Which statement is true for the irrelevance&nbsp; $H(Y|X)$?
|type="[]"}
- $H(Y|X)$&nbsp; is never larger than the equivocation&nbsp; $H(X|Y)$.
+ $H(Y|X)$&nbsp; is never smaller than the equivocation&nbsp; $H(X|Y)$.
  
  
 
</quiz>
 
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; In general, and with the numerical values&nbsp; $p_0 = 0.2$&nbsp; and&nbsp; $\varepsilon = 0.1$, the quantities sought are:
:$$P_{XY}(0, 0) = p_0 \cdot (1 - \varepsilon)
\hspace{0.15cm} \underline {=0.18} \hspace{0.05cm}, \hspace{0.5cm}
P_{XY}(0, 1) = p_0 \cdot \varepsilon
\hspace{0.15cm} \underline {=0.02} \hspace{0.05cm},$$
:$$P_{XY}(1, 0) = p_1 \cdot \varepsilon
\hspace{0.15cm} \underline {=0.08} \hspace{0.05cm}, \hspace{1.55cm}
P_{XY}(1, 1) = p_1 \cdot (1 - \varepsilon)
\hspace{0.15cm} \underline {=0.72} \hspace{0.05cm}.$$
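Since&nbsp; $P_{XY}(x, y) = {\rm Pr}(X = x) \cdot {\rm Pr}(Y = y \hspace{0.05cm}|\hspace{0.05cm} X = x)$, the four values can be cross-checked with a short Python sketch (our own addition; the dictionary layout is chosen here):
<syntaxhighlight lang="python">
p0, p1, eps = 0.2, 0.8, 0.1

# Joint probabilities P_XY(x, y) = Pr(X = x) * Pr(Y = y | X = x) of the BSC
P_XY = {(0, 0): p0 * (1 - eps), (0, 1): p0 * eps,
        (1, 0): p1 * eps,       (1, 1): p1 * (1 - eps)}

for xy, p in sorted(P_XY.items()):
    print(xy, round(p, 2))   # 0.18, 0.02, 0.08, 0.72
</syntaxhighlight>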
  
'''(2)'''&nbsp; In general:
:$$P_Y(Y) = \big [ {\rm Pr}( Y = 0)\hspace{0.05cm}, {\rm Pr}( Y = 1) \big ] = \big ( p_0\hspace{0.05cm}, p_1 \big ) \cdot \begin{pmatrix} 1 - \varepsilon & \varepsilon\\ \varepsilon & 1 - \varepsilon \end{pmatrix}.$$
This gives the following numerical values:
:$$ {\rm Pr}( Y = 0)= p_0 \cdot (1 - \varepsilon) + p_1 \cdot \varepsilon = 0.2 \cdot 0.9 + 0.8 \cdot 0.1 \hspace{0.15cm} \underline {=0.26} \hspace{0.05cm},$$
:$${\rm Pr}( Y = 1)= p_0 \cdot \varepsilon + p_1 \cdot (1 - \varepsilon) = 0.2 \cdot 0.1 + 0.8 \cdot 0.9 \hspace{0.15cm} \underline {=0.74} \hspace{0.05cm}.$$
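The vector-matrix product translates directly into code; a minimal NumPy sketch (our own addition, array names chosen here):
<syntaxhighlight lang="python">
import numpy as np

p0, p1, eps = 0.2, 0.8, 0.1
P_X = np.array([p0, p1])                 # source PMF (p_0, p_1)
W = np.array([[1 - eps, eps],            # BSC transition matrix P(Y | X)
              [eps, 1 - eps]])

P_Y = P_X @ W                            # sink PMF
print(P_Y)                               # [0.26 0.74]
</syntaxhighlight>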
  
'''(3)'''&nbsp; For the mutual information, the definition with&nbsp; $p_0 = 0.2$,&nbsp; $p_1 = 0.8$&nbsp; and&nbsp; $\varepsilon = 0.1$&nbsp; yields:
:$$I(X;Y) = {\rm E} \hspace{-0.08cm}\left [ \hspace{0.02cm}{\rm log}_2 \hspace{0.08cm} \frac{P_{XY}(X, Y)} {P_{X}(X) \hspace{-0.05cm}\cdot \hspace{-0.05cm} P_{Y}(Y) }\right ] \hspace{0.3cm} \Rightarrow$$
:$$I(X;Y) = 0.18 \cdot {\rm log}_2 \hspace{0.1cm} \frac{0.18}{0.2 \hspace{-0.05cm}\cdot \hspace{-0.05cm} 0.26} + 0.02 \cdot {\rm log}_2 \hspace{0.08cm} \frac{0.02}{0.2 \hspace{-0.05cm}\cdot \hspace{-0.05cm} 0.74}
+ 0.08 \cdot {\rm log}_2 \hspace{0.08cm} \frac{0.08}{0.8 \hspace{-0.05cm}\cdot \hspace{-0.05cm} 0.26} + 0.72 \cdot {\rm log}_2 \hspace{0.08cm} \frac{0.72}{0.8 \hspace{-0.05cm}\cdot \hspace{-0.05cm} 0.74} \hspace{0.15cm} \underline {=0.3578\,{\rm bit}} \hspace{0.05cm}.$$
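The expectation is simply a sum over the four joint probabilities. The following sketch (our own illustration, reusing the values from subtasks&nbsp; (1)&nbsp; and&nbsp; (2)) reproduces the value:
<syntaxhighlight lang="python">
import math

P_XY = {(0, 0): 0.18, (0, 1): 0.02, (1, 0): 0.08, (1, 1): 0.72}   # from (1)
P_X  = {0: 0.20, 1: 0.80}                                         # source PMF
P_Y  = {0: 0.26, 1: 0.74}                                         # from (2)

# I(X;Y) = E[ log2( P_XY(x, y) / (P_X(x) * P_Y(y)) ) ]
I = sum(p * math.log2(p / (P_X[x] * P_Y[y])) for (x, y), p in P_XY.items())
print(round(I, 4))   # 0.3578 bit
</syntaxhighlight>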
  
'''(4)'''&nbsp; With the given source entropy&nbsp; $H(X)$,&nbsp; we obtain for the equivocation:
:$$H(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y) = H(X) - I(X;Y) = 0.7219 - 0.3578 \hspace{0.15cm} \underline {=0.3642\,{\rm bit}} \hspace{0.05cm}.$$
*However, one could also apply the general definition with the inference probabilities&nbsp; $P_{X|Y}(⋅)$:
:$$H(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y) = {\rm E} \hspace{0.02cm} \left [ \hspace{0.05cm} {\rm log}_2 \hspace{0.1cm} \frac{1}{P_{\hspace{0.03cm}X \mid \hspace{0.03cm} Y} (X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y)} \hspace{0.05cm}\right ] = {\rm E} \hspace{0.02cm} \left [ \hspace{0.05cm} {\rm log}_2 \hspace{0.1cm} \frac{P_Y(Y)}{P_{XY} (X, Y)} \hspace{0.05cm} \right ] \hspace{0.05cm}.$$
*In the example, this calculation rule yields the same result&nbsp; $H(X|Y) = 0.3642 \ \rm bit$:
:$$H(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y) = 0.18 \cdot {\rm log}_2 \hspace{0.1cm} \frac{0.26}{0.18} + 0.02 \cdot {\rm log}_2 \hspace{0.1cm} \frac{0.74}{0.02} + 0.08 \cdot {\rm log}_2 \hspace{0.1cm} \frac{0.26}{0.08} + 0.72 \cdot {\rm log}_2 \hspace{0.1cm} \frac{0.74}{0.72} \hspace{0.05cm}.$$
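Both calculation routes can be checked numerically; a small sketch of our own:
<syntaxhighlight lang="python">
import math

P_XY = {(0, 0): 0.18, (0, 1): 0.02, (1, 0): 0.08, (1, 1): 0.72}
P_Y = {0: 0.26, 1: 0.74}

# Route 1:  H(X|Y) = H(X) - I(X;Y), with the rounded values from above
print(round(0.7219 - 0.3578, 4))   # 0.3641 (rounded inputs; exact value 0.3642 bit)

# Route 2:  direct expectation  E[ log2( P_Y(y) / P_XY(x, y) ) ]
H_X_given_Y = sum(p * math.log2(P_Y[y] / p) for (x, y), p in P_XY.items())
print(round(H_X_given_Y, 4))       # 0.3642 bit
</syntaxhighlight>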
  
'''(5)'''&nbsp; Correct is&nbsp; <u>proposed solution 2</u>:
*In the case of disturbed transmission&nbsp; $(\varepsilon > 0)$,&nbsp; the uncertainty regarding the sink is always greater than the uncertainty regarding the source.&nbsp; Here one obtains the numerical value:
:$$H(Y) = H_{\rm bin}(0.26)={ 0.8268\,{\rm bit}} \hspace{0.05cm}.$$
*With error-free transmission&nbsp; $(\varepsilon = 0)$,&nbsp; on the other hand,&nbsp; $P_Y(⋅) = P_X(⋅)$&nbsp; and thus&nbsp; $H(Y) = H(X)$&nbsp; would apply.
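For this example the statement is easy to confirm numerically (a sketch added for illustration):
<syntaxhighlight lang="python">
import math

def h_bin(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(h_bin(0.26) > h_bin(0.20))   # True:  H(Y) > H(X) for eps > 0
print(round(h_bin(0.26), 3))       # 0.827 bit, i.e. H_bin(0.26) as above
</syntaxhighlight>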
  
'''(6)'''&nbsp; Here, too, the&nbsp; <u>second proposed solution</u>&nbsp; is correct:
*Because of&nbsp; $I(X;Y) = H(X) - H(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y) = H(Y) - H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X)$,&nbsp; $H(Y|X)$&nbsp; is greater than&nbsp; $H(X|Y)$&nbsp; by the same amount by which&nbsp; $H(Y)$&nbsp; is greater than&nbsp; $H(X)$:
:$$H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) = H(Y) -I(X;Y) = 0.8268 - 0.3578 ={ 0.4690\,{\rm bit}} \hspace{0.05cm}.$$
*Direct calculation gives the same result&nbsp; $H(Y|X) = 0.4690\ \rm  bit$:
:$$H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) = {\rm E} \hspace{0.02cm} \left [ \hspace{0.02cm} {\rm log}_2 \hspace{0.1cm} \frac{1}{P_{\hspace{0.03cm}Y \mid \hspace{0.03cm} X} (Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X)} \right ] = 0.18 \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{0.9} + 0.02 \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{0.1} + 0.08 \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{0.1} + 0.72 \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{0.9} \hspace{0.05cm}.$$
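Again both routes can be verified with a few lines (our own sketch); note that&nbsp; $P_{Y|X}(y|x)$&nbsp; equals&nbsp; $1 - \varepsilon$&nbsp; for&nbsp; $y = x$&nbsp; and&nbsp; $\varepsilon$&nbsp; otherwise:
<syntaxhighlight lang="python">
import math

p0, p1, eps = 0.2, 0.8, 0.1
P_XY = {(0, 0): p0 * (1 - eps), (0, 1): p0 * eps,
        (1, 0): p1 * eps,       (1, 1): p1 * (1 - eps)}

# Route 1:  H(Y|X) = H(Y) - I(X;Y)
print(round(0.8268 - 0.3578, 4))   # 0.469 bit

# Route 2:  direct expectation; P(Y=y | X=x) is 1-eps if y == x, else eps
H_Y_given_X = sum(p * math.log2(1 / ((1 - eps) if x == y else eps))
                  for (x, y), p in P_XY.items())
print(round(H_Y_given_X, 4))       # 0.469 bit
</syntaxhighlight>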
 
{{ML-Fuß}}
 
  
  
[[Category:Information Theory: Exercises|^3.3 Application to Digital Signal Transmission^]]
