Exercise 3.10Z: BSC Channel Capacity

From LNTwww
[[File:EN_Inf_Z_3_9_A.png|right|frame|Entropies of  "BC"  and  "BSC"]]
The channel capacity  $C$  was defined by  [https://en.wikipedia.org/wiki/Claude_Shannon Claude E. Shannon]  as the maximum mutual information, whereby the maximization refers solely to the source statistics:
 
:$$ C = \max_{P_X(X)} \hspace{0.15cm} I(X;Y) \hspace{0.05cm}.$$
In the binary channel with the probability function  $P_X(X) = \big [p_0, \ p_1 \big]$,  only one parameter can be optimized, for example  $p_0$.  The probability of a  $1$  is then also fixed:    $p_1 = 1 - p_0.$
  
The upper graph (reddish background) summarises the results for the  [[Information_Theory/Anwendung_auf_die_Digitalsignalübertragung#Channel_capacity_of_a_binary_channel|asymmetric binary channel]]  with  $ε_0 = 0.01$  and  $ε_1 = 0.2$,  which was also considered in the theory section.
  
The maximization leads to the result  $p_0 = 0.55$   ⇒   $p_1 = 0.45$,  and one obtains for the channel capacity:
 
:$$C_{\rm BC} = \hspace{-0.05cm} \max_{P_X(X)} \hspace{0.1cm} I(X;Y) \big |_{p_0 \hspace{0.05cm} = \hspace{0.05cm}0.55} \hspace{0.05cm}=\hspace{0.05cm} 0.5779\,{\rm bit} \hspace{0.05cm}.$$
 
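This maximization over the single free source parameter can be reproduced numerically. The following sketch is not part of the original exercise (the function and variable names are my own); it evaluates  $I(X;Y) = H(Y) - H(Y|X)$  on a grid of  $p_0$  values for the asymmetric binary channel with  $ε_0 = 0.01$  and  $ε_1 = 0.2$:

```python
import math

def h_bin(p):
    """Binary entropy function H_bin(p) in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(p0, eps0, eps1):
    """I(X;Y) = H(Y) - H(Y|X) for a binary channel whose symbol 0 is
    falsified with probability eps0 and whose symbol 1 with eps1."""
    p1 = 1 - p0
    py0 = p0 * (1 - eps0) + p1 * eps1                   # P(Y = 0)
    h_y = h_bin(py0)                                    # sink entropy H(Y)
    h_y_given_x = p0 * h_bin(eps0) + p1 * h_bin(eps1)   # irrelevance H(Y|X)
    return h_y - h_y_given_x

# Brute-force maximization over the only free parameter p0:
grid = [i / 10000 for i in range(1, 10000)]
c_bc, p0_opt = max((mutual_information(p0, 0.01, 0.2), p0) for p0 in grid)
print(round(c_bc, 4), round(p0_opt, 2))   # capacity ~ 0.578 bit near p0 ~ 0.55
```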
  
In the lower graph (blue background), the same information-theoretical quantities are given for the  [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Channel_.E2.80.93_BSC|Binary Symmetric Channel]]   $\rm (BSC)$  with the falsification probabilities  $ε_0 = ε_1 = ε = 0.1$,  which was also assumed for  [[Aufgaben:Exercise_3.10:_Mutual_Information_at_the_BSC| Exercise 3.10]].
  
In the present exercise you are to consider the BSC channel model  $($initially for  $ε = 0.1)$  and
:* to analyze the entropies  $H(X)$,  $H(Y)$,  $H(X|Y)$  and  $H(Y|X)$,
:* to optimize the source parameter  $p_0$  with respect to maximum mutual information  $I(X; Y)$,
:* to thus determine the channel capacity  $C(ε)$,  and
:* to give a closed-form equation for  $C(ε)$  by generalization.
  
  
  
  
 
Hints:
*The exercise belongs to the chapter  [[Information_Theory/Anwendung_auf_die_Digitalsignalübertragung|Application to digital signal transmission]].
*Reference is made in particular to the page  [[Information_Theory/Application_to_Digital_Signal_Transmission#Channel_capacity_of_a_binary_channel|Channel capacity of a binary channel]].
 
 
   
 
   
  
  
  
===Questions===
  
 
<quiz display=simple>
 
{ Which statements are true for the conditional entropies in the BSC model?
 
|type="[]"}
 
- The equivocation results in&nbsp; $H(X|Y) = H_{\rm bin}(ε)$.
+ The irrelevance results in&nbsp; $H(Y|X) = H_{\rm bin}(ε)$.
- The irrelevance results in&nbsp; $H(Y|X) = H_{\rm bin}(p_0)$.
  
{Which statements are true for the channel capacity&nbsp; $C_{\rm BSC}$&nbsp; of the BSC model?
 
|type="[]"}
 
+ The channel capacity is equal to the maximum mutual information.
+ For the BSC, maximization leads to the result &nbsp;$p_0 = p_1 = 0.5$.
+ For&nbsp; $p_0 = p_1 = 0.5$&nbsp; it holds that&nbsp; $H(X) = H(Y) = 1 \ \rm bit$.
  
{Which channel capacity&nbsp; $C_{\rm BSC}$&nbsp; results depending on the BSC parameter&nbsp; $ε$?
 
|type="{}"}
 
 
$ε = 0.0\text{:} \hspace{0.5cm}  C_{\rm BSC} \ = \ $ { 1 1% } $\ \rm bit$

$ε = 0.1\text{:} \hspace{0.5cm} C_{\rm BSC} \ = \ $ { 0.531 1% } $\ \rm bit$

$ε = 0.5\text{:} \hspace{0.5cm} C_{\rm BSC} \ = \ $ { 0. } $\ \rm bit$
  
{Which channel capacity&nbsp; $C_{\rm BSC}$&nbsp; results depending on the BSC parameter&nbsp; $ε$?
 
|type="{}"}
 
 
$ε = 1.0\text{:} \hspace{0.5cm} C_{\rm BSC} \ = \ $ { 1 1% } $\ \rm bit$

$ε = 0.9\text{:} \hspace{0.5cm} C_{\rm BSC} \ = \ $ { 0.531 1% } $\ \rm bit$
 
</quiz>
 
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; The&nbsp; <u>proposed solution 2</u> is correct, as the following calculation shows:
 
:$$H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) = p_0 \cdot (1 - \varepsilon) \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{1 - \varepsilon} + p_0 \cdot \varepsilon \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{\varepsilon} +p_1 \cdot \varepsilon \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{\varepsilon} + p_1 \cdot (1 - \varepsilon) \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{1 - \varepsilon} $$
 
 
:$$\Rightarrow \hspace{0.3cm} H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) =  (p_0 + p_1) \cdot \left [ \varepsilon \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{\varepsilon} + (1 - \varepsilon) \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{1 - \varepsilon} \right ] \hspace{0.05cm}.$$
 
*With&nbsp; $p_0 + p_1 = 1$&nbsp;  and the binary entropy function&nbsp; $H_{\rm bin}$,&nbsp; one obtains the proposed result: &nbsp; $H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) = H_{\rm bin}(\varepsilon)\hspace{0.05cm}.$
*For&nbsp; $ε = 0.1$&nbsp; we get&nbsp; $H(Y|X) = 0.4690 \ \rm bit$.&nbsp; The same value appears for&nbsp; $p_0=0.50$&nbsp; in the given table.
*From the table one can also see that for the BSC model (blue background) as well as for the more general (asymmetric) BC model (red background) <br>the equivocation&nbsp; $H(X|Y)$&nbsp; depends on the source symbol probabilities&nbsp; $p_0$&nbsp; and&nbsp; $p_1$.&nbsp; It follows that proposed solution 1 cannot be correct.
*The irrelevance&nbsp; $H(Y|X)$&nbsp; is independent of the source statistics, so that proposed solution 3 can also be excluded.
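This independence can also be checked numerically. A small sketch (the helper names are assumed, not from the exercise) evaluates the irrelevance  $H(Y|X)$  and the equivocation  $H(X|Y) = H(X) - I(X;Y)$  of the BSC with  $ε = 0.1$  for several values of  $p_0$:

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_entropies(p0, eps):
    """Return (H(Y|X), H(X|Y)) of a BSC with crossover probability eps."""
    p1 = 1 - p0
    py0 = p0 * (1 - eps) + p1 * eps                   # P(Y = 0)
    h_y_given_x = p0 * h_bin(eps) + p1 * h_bin(eps)   # = H_bin(eps), since p0 + p1 = 1
    i_xy = h_bin(py0) - h_y_given_x                   # I(X;Y) = H(Y) - H(Y|X)
    h_x_given_y = h_bin(p0) - i_xy                    # H(X|Y) = H(X) - I(X;Y)
    return h_y_given_x, h_x_given_y

for p0 in (0.3, 0.5, 0.7):
    hyx, hxy = bsc_entropies(p0, 0.1)
    print(p0, round(hyx, 4), round(hxy, 4))
# H(Y|X) = 0.4690 bit for every p0, while H(X|Y) changes with p0.
```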
  
  
  
  
'''(2)'''&nbsp;  <u>All of the proposed solutions</u> are correct:  
*The channel capacity is defined as the maximum mutual information, where the maximization has to be carried out with respect to&nbsp; $P_X = (p_0, p_1)$:
 
:$$C = \max_{P_X(X)} \hspace{0.15cm} I(X;Y) \hspace{0.05cm}.$$
 
*The equation is generally valid, i.e. also for the asymmetric binary channel highlighted in red.
*The mutual information can be calculated, for example, as&nbsp; $I(X;Y) = H(Y) - H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X)\hspace{0.05cm}$,&nbsp; <br>where according to subtask&nbsp; '''(1)'''&nbsp; the term&nbsp; $H(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X)$&nbsp; is independent of&nbsp; $p_0$&nbsp; and&nbsp; $p_1 = 1 - p_0$.
*The maximum mutual information is thus obtained exactly when the sink entropy&nbsp; $H(Y)$&nbsp; is maximized.&nbsp; This is the case for&nbsp; $p_0 = p_1 = 0.5$:  
 
:$$H(X) = H(Y) =  1 \ \rm bit.$$
 
  
  
  
[[File:P_ID2790__Inf_Z_3_9_B.png|frame|Binary entropy function and BSC channel capacity]]
'''(3)'''&nbsp;  According to subtasks&nbsp; '''(1)'''&nbsp; and&nbsp; '''(2)''',&nbsp; one obtains for the BSC channel capacity:

:$$C_{\rm BSC} = \max_{P_X(X)} \hspace{0.15cm}  I(X;Y) = 1 - H_{\rm bin}(\varepsilon) \hspace{0.05cm}.$$
  
The graph shows the binary entropy function on the left and the channel capacity on the right.&nbsp; One obtains:
* for&nbsp; $ε = 0.0$&nbsp; (error-free channel): <br> &nbsp; &nbsp; $C = 1\ \rm  (bit)$  &nbsp; &rArr; &nbsp; dot with yellow filling,
* for&nbsp; $ε = 0.1$&nbsp; (considered so far): <br> &nbsp; &nbsp; $C = 0.531\ \rm  (bit)$  &nbsp; &rArr; &nbsp;  dot with green filling,
* for&nbsp; $ε = 0.5$&nbsp; (completely disturbed): <br> &nbsp; &nbsp; $C = 0\ \rm  (bit)$  &nbsp; &rArr; &nbsp;  dot with grey filling.
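These values follow directly from the closed-form expression  $C_{\rm BSC}(ε) = 1 - H_{\rm bin}(ε)$,  since at the optimum  $H(Y) = 1 \ \rm bit$  and  $H(Y|X) = H_{\rm bin}(ε)$.  A short sketch (assumed helper names) reproduces the three points:

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def c_bsc(eps):
    """BSC channel capacity C = 1 - H_bin(eps) in bit (optimum at p0 = p1 = 0.5)."""
    return 1.0 - h_bin(eps)

for eps in (0.0, 0.1, 0.5):
    print(eps, round(c_bsc(eps), 3))
# 0.0 1.0   /   0.1 0.531   /   0.5 0.0
```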
  
  
'''(4)'''&nbsp;  From the graph one can see that from an information-theoretical point of view&nbsp; $ε = 1$&nbsp; is identical to&nbsp; $ε = 0$:
 
:$$C_{\rm BSC} \big |_{\hspace{0.05cm}\varepsilon \hspace{0.05cm} = \hspace{0.05cm}1} \hspace{0.05cm}= C_{\rm BSC} \big |_{\hspace{0.05cm}\varepsilon \hspace{0.05cm} = \hspace{0.05cm}0} \hspace{0.15cm} \underline {=1\,{\rm (bit)}} \hspace{0.05cm}.$$
 
*The channel only carries out a renaming here.&nbsp; This is called&nbsp; "mapping".
*Every&nbsp; $0$&nbsp; becomes a&nbsp; $1$&nbsp; and every&nbsp; $1$&nbsp; becomes a&nbsp; $0$.&nbsp; Accordingly:
 
:$$C_{\rm BSC} \big |_{\hspace{0.05cm}\varepsilon \hspace{0.05cm} = \hspace{0.05cm}0.9} \hspace{0.05cm}= C_{\rm BSC} \big |_{\hspace{0.05cm}\varepsilon \hspace{0.05cm} = \hspace{0.05cm}0.1} \hspace{0.15cm} \underline {=0.531\,{\rm (bit)}} \hspace{0.05cm}.$$
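Because  $H_{\rm bin}(ε) = H_{\rm bin}(1 - ε)$,  the capacity  $C_{\rm BSC}(ε) = 1 - H_{\rm bin}(ε)$  is symmetric about  $ε = 0.5$,  which is exactly this mapping argument. A brief numerical check (assumed helper names, not from the exercise):

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def c_bsc(eps):
    """BSC channel capacity C = 1 - H_bin(eps) in bit."""
    return 1.0 - h_bin(eps)

# The capacity curve is symmetric: C(eps) = C(1 - eps).
for eps in (0.0, 0.1, 0.4):
    assert abs(c_bsc(eps) - c_bsc(1 - eps)) < 1e-9

print(round(c_bsc(0.9), 3))   # 0.531, the same as for eps = 0.1
```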
  
  
  
[[Category:Information Theory: Exercises|^3.3 Application to Digital Signal Transmission^]]

Latest revision as of 16:07, 27 September 2021
