Exercise 3.15: Data Processing Theorem

{{quiz-Header|Buchseite=Information_Theory/Application_to_Digital_Signal_Transmission
}}
  
[[File:P_ID2818__Inf_A_3_14.png|right|frame|"Data Processing Theorem"]]
We consider the following data processing chain:
* Binary input data  $X$  is processed by processor  $1$  (top half of the graphic),  which can be described by conditional probabilities   ⇒   $P_{Y\hspace{0.03cm}|\hspace{0.03cm}X}(\cdot)$.  Its output variable is  $Y$.
* A second processor with the random variable  $Y$  at its input and the random variable  $Z$  at its output is given by  $P_{Z\hspace{0.03cm}|\hspace{0.03cm}Y}(\cdot)$   (lower half of the graphic).  $Z$  depends on  $Y$  alone  (deterministically or stochastically)  and is independent of  $X$:
 
:$$P_{Z\hspace{0.05cm}|\hspace{0.03cm} XY\hspace{-0.03cm}}(z\hspace{0.03cm}|\hspace{0.03cm} x, y) =P_{Z\hspace{0.05cm}|\hspace{0.03cm} Y\hspace{-0.03cm}}(z\hspace{0.03cm}|\hspace{0.03cm} y) \hspace{0.05cm}.$$
 
The following nomenclature is used for this description:
 
:$$x \in X = \{0, 1\}\hspace{0.02cm},\hspace{0.3cm} y \in Y = \{0,1\}\hspace{0.02cm},\hspace{0.3cm} z \in Z = \{0, 1\}\hspace{0.02cm}.$$
 
The joint probability mass function is:
 
:$$P_{XYZ}(x, y, z) = P_{X}(x) \cdot P_{Y\hspace{0.05cm}|\hspace{0.03cm} X\hspace{-0.03cm}}(y\hspace{0.03cm}|\hspace{0.03cm} x)\cdot P_{Z\hspace{0.05cm}|\hspace{0.03cm} Y\hspace{-0.03cm}}(z\hspace{0.03cm}|\hspace{0.03cm} y) \hspace{0.05cm}.$$
 
This also means:

$X → Y → Z$  form a  [[Theory_of_Stochastic_Signals/Markovketten|Markov chain]].  For such a chain, the  "Data Processing Theorem"  holds with the following consequence:
 
:$$I(X;Z)  \le  I(X;Y ) \hspace{0.05cm}, $$
 
 
:$$I(X;Z)  \le  I(Y;Z ) \hspace{0.05cm}.$$
 
The theorem thus states:
:* One cannot gain additional information about the input  $X$  by manipulating  ("processing")  the data  $Y$.
:* Data processing  (by processor  $1$)  serves only the purpose of making the information contained in  $X$  more visible.
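
The two inequalities can also be checked numerically. The following Python sketch (with two arbitrary example transition matrices, not the ones of the graphic) builds the joint PMF according to the factorization above and verifies the Data Processing Theorem:

<pre>
# Sketch: build P_XYZ(x,y,z) = P_X(x) * P_{Y|X}(y|x) * P_{Z|Y}(z|y) for an
# arbitrary example and check the Data Processing Theorem numerically.
from math import log2

P_X = {0: 0.5, 1: 0.5}                                    # example input distribution
P_Y_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}  # example processor 1
P_Z_given_Y = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}  # example processor 2

# Joint PMF of the Markov chain X -> Y -> Z
P_XYZ = {(x, y, z): P_X[x] * P_Y_given_X[x][y] * P_Z_given_Y[y][z]
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}

def marginal(keep):
    """Marginal PMF over the components of (x, y, z) listed in 'keep'."""
    out = {}
    for xyz, prob in P_XYZ.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def mutual_information(joint):
    """I(A;B) in bit from a joint PMF {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), prob in joint.items():
        pa[a] = pa.get(a, 0.0) + prob
        pb[b] = pb.get(b, 0.0) + prob
    return sum(prob * log2(prob / (pa[a] * pb[b]))
               for (a, b), prob in joint.items() if prob > 0)

I_XY = mutual_information(marginal((0, 1)))
I_YZ = mutual_information(marginal((1, 2)))
I_XZ = mutual_information(marginal((0, 2)))
print(I_XY, I_YZ, I_XZ)
assert I_XZ <= I_XY + 1e-12 and I_XZ <= I_YZ + 1e-12   # Data Processing Theorem
</pre>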
  
  
  
  
''Hints:''
*The exercise belongs to the chapter  [[Information_Theory/Anwendung_auf_die_Digitalsignalübertragung|Application to Digital Signal Transmission]].
*Reference is also made to the page  [[Information_Theory/Different_Entropy_Measures_of_Two-Dimensional_Random_Variables#Chain_rule_of_the_mutual_information|Chain rule of the mutual information]]  in the previous chapter.
 
   
 
   
  
  
  
===Questions===
  
 
<quiz display=simple>

{How can the result &nbsp;$I(X; Y) = 1 - H_{\rm bin}(p)$&nbsp; be interpreted?
|type="[]"}
+ The derivation is based on the properties of a strictly symmetric channel.
- It is exploited that &nbsp;$H_{\rm bin}(p)$&nbsp; is a concave function.
- The result is valid for any probability mass function &nbsp;$P_X(X)$.


{Which mutual information &nbsp;$I(X; Y)$&nbsp; results for the first processor with &nbsp;$p = 0.1$?
|type="{}"}
$ I(X; Y) \ = \ $ { 0.531 3% } $\ \rm bit$


{Which mutual information &nbsp;$I(Y; Z)$&nbsp; results for the second processor with &nbsp;$q = 0.2$?
|type="{}"}
$I(Y; Z)  \ = \ $ { 0.278 3% } $\ \rm bit$


{Which mutual information &nbsp;$I(X; Z)$&nbsp; results for the whole system with&nbsp; $p = 0.1$ &nbsp;and&nbsp; $q = 0.2$?
|type="{}"}
$I(X; Z) \ = \ $ { 0.173 3% } $\ \rm bit$


{Does this example satisfy the&nbsp; "Data Processing Theorem"?
|type="()"}
+ Yes,
- No.

</quiz>
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; Only&nbsp; <u>proposed solution 1</u>&nbsp; is correct:
* Both processors describe strictly symmetric channels &nbsp; &rArr; &nbsp;  both uniformly dispersive and uniformly focusing.
* For such a binary channel, the following holds with&nbsp; $Y = \{0, 1\} \ ⇒ \ |Y| = 2$:
 
:$$I(X;Y) = 1 + \sum_{y \hspace{0.05cm}\in\hspace{0.05cm} Y} \hspace{0.1cm} P_{Y\hspace{0.03cm}|\hspace{0.03cm} X}(y\hspace{0.03cm}|\hspace{0.03cm}x) \cdot {\rm log}_2 \hspace{0.1cm}P_{\hspace{0.01cm}Y \hspace{0.03cm}|\hspace{0.03cm} X}(y\hspace{0.03cm}|\hspace{0.03cm}x) \hspace{0.05cm}.$$
 
*Here it does not matter whether one starts from&nbsp; $X = 0$&nbsp; or from&nbsp; $X = 1$.
*For&nbsp; $X = 0$&nbsp; one obtains with&nbsp; $P_{Y\hspace{0.03cm}|\hspace{0.03cm}X}(Y = 1\hspace{0.03cm}|\hspace{0.03cm}X = 0) = p$  &nbsp;and&nbsp;  $P_{Y\hspace{0.03cm}|\hspace{0.03cm}X}(Y = 0\hspace{0.03cm}|\hspace{0.03cm}X = 0) = 1 - p\hspace{0.05cm}$:
 
:$$ I(X;Y) = 1 + p \cdot {\rm log}_2 \hspace{0.1cm} (p) + (1-p) \cdot {\rm log}_2 \hspace{0.1cm} (1-p) = 1 - H_{\rm bin}(p)\hspace{0.05cm},
\hspace{1.0cm}H_{\rm bin}(p)= p \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{p}+ (1-p) \cdot {\rm log}_2 \hspace{0.1cm} \frac{1}{1-p}\hspace{0.05cm}.$$
*However, the result holds only for&nbsp; $P_X(X) = (0.5, \ 0.5)$ &nbsp; &rArr; &nbsp; maximum mutual information &nbsp; &rArr; &nbsp; channel capacity.
*Otherwise&nbsp; $I(X; Y)$&nbsp; is smaller.&nbsp; For example, for&nbsp; $P_X(X) = (1, \ 0)$: &nbsp;  $H(X) = 0$  &nbsp; &rArr; &nbsp;  $I(X; Y) = 0.$
*The binary entropy function is indeed concave, but this property was not used in the derivation &nbsp; &rArr; &nbsp;  answer 2 is incorrect.
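
A short Python sketch (with an arbitrary example value&nbsp; $p = 0.25$;&nbsp; the helper name&nbsp; h_bin&nbsp; is a free choice) evaluates this sum for one row of the transition matrix and confirms that it equals&nbsp; $1 - H_{\rm bin}(p)$:

<pre>
# Sketch: evaluate I(X;Y) = 1 + sum_y P(y|x) * log2 P(y|x) for one row of the
# strictly symmetric channel and compare with the closed form 1 - H_bin(p).
from math import log2

def h_bin(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.25                              # arbitrary example crossover probability
row = [1 - p, p]                      # P(Y=. | X=0); the row for X=1 is [p, 1-p]
I_from_sum = 1 + sum(v * log2(v) for v in row)
assert abs(I_from_sum - (1 - h_bin(p))) < 1e-12
print(I_from_sum)                     # ≈ 0.189 bit
</pre>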
  
  
  
'''(2)'''&nbsp; For processor&nbsp; $1$&nbsp; one obtains with&nbsp; $p = 0.1\hspace{0.05cm}$:
 
:$$ I(X;Y) = 1 - H_{\rm bin}(0.1) = 1 - 0.469 \hspace{0.15cm} \underline {=0.531 \,{\rm (bit)}} \hspace{0.05cm}.$$
 
  
  
'''(3)'''&nbsp; Correspondingly, for the second processor with&nbsp; $q = 0.2\hspace{0.05cm}$:
 
:$$I(Y;Z) = 1 - H_{\rm bin}(0.2) = 1 - 0.722 \hspace{0.15cm} \underline {=0.278 \,{\rm (bit)}} \hspace{0.05cm}.$$
 
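Both values can be reproduced with a few lines of Python (the helper name&nbsp; h_bin&nbsp; is again a free choice):

<pre>
# Quick numerical check of (2) and (3): I = 1 - H_bin(p).
from math import log2

def h_bin(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(round(1 - h_bin(0.1), 3))   # 0.531 bit  ->  I(X;Y)
print(round(1 - h_bin(0.2), 3))   # 0.278 bit  ->  I(Y;Z)
</pre>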
  
  
'''(4)'''&nbsp; The probability of&nbsp; $Z = 0$&nbsp; under the condition&nbsp; $X = 0$&nbsp; is obtained via the two possible paths as
 
:$$P(\hspace{0.01cm}Z\hspace{-0.05cm} = 0\hspace{0.03cm} | \hspace{0.03cm} X\hspace{-0.05cm} = \hspace{-0.05cm}0) = (1-p) \cdot (1-q) + p \cdot q = 1 - p - q + 2pq \stackrel{!}{=} 1 - \varepsilon \hspace{0.05cm}.$$
 
*The overall system then has exactly the same BSC structure as processors&nbsp; $1$&nbsp; and&nbsp; $2$, but now with the falsification probability&nbsp; $\varepsilon = p + q - 2pq \hspace{0.05cm}.$
*For&nbsp; $p = 0.1$&nbsp; and&nbsp; $q = 0.2$&nbsp; we obtain:
 
:$$ \varepsilon = 0.1 + 0.2 - 2\cdot 0.1 \cdot 0.2 = 0.26 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Z) = 1 - H_{\rm bin}(0.26) = 1 - 0.827 \hspace{0.15cm} \underline {=0.173 \,{\rm (bit)}} \hspace{0.05cm}.$$
 
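The cascade argument can be verified by multiplying the two row-stochastic transition matrices. The following sketch (using NumPy;&nbsp; $p = 0.1$,&nbsp; $q = 0.2$) reproduces&nbsp; $\varepsilon = 0.26$&nbsp; and&nbsp; $I(X;Z) \approx 0.173$&nbsp; bit:

<pre>
# Sketch: the cascade of the two BSCs is again a BSC.  Multiplying the two
# row-stochastic transition matrices gives the overall channel X -> Z with
# crossover probability eps = p + q - 2pq.
import numpy as np

p, q = 0.1, 0.2
P_Y_given_X = np.array([[1 - p, p], [p, 1 - p]])   # processor 1, rows indexed by x
P_Z_given_Y = np.array([[1 - q, q], [q, 1 - q]])   # processor 2, rows indexed by y

P_Z_given_X = P_Y_given_X @ P_Z_given_Y            # overall channel X -> Z
eps = P_Z_given_X[0, 1]                            # P(Z=1 | X=0)
assert abs(eps - (p + q - 2 * p * q)) < 1e-12

def h_bin(e):
    return -e * np.log2(e) - (1 - e) * np.log2(1 - e)

print(eps)                 # 0.26
print(1 - h_bin(eps))      # ≈ 0.173 bit
</pre>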
  
  
'''(5)'''&nbsp; The answer is&nbsp; <u>YES</u>,&nbsp; because the&nbsp; "Data Processing Theorem" assumes exactly the conditions given here.
  
However, we also want to evaluate some numerical results:
* If&nbsp; $0 ≤ p < 0.5$ &nbsp;and&nbsp; $0 ≤ q < 0.5$&nbsp; hold, we obtain:
 
:$$\varepsilon = p + q \cdot (1-2p) > p \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Z) < I(X;Y) \hspace{0.05cm},$$
 
 
:$$\varepsilon = q + p \cdot (1-2q) > q \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Z) < I(Y;Z) \hspace{0.05cm}.$$
 
*For &nbsp;$p = 0.5$&nbsp; the following holds independently of &nbsp;$q$,&nbsp; since&nbsp; $I(X; Z)$&nbsp; cannot be greater than&nbsp; $I(X; Y)$:
 
:$$\varepsilon = 0.5 + q \cdot (1-1) = 0.5 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Y) = 0 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Z) = 0 \hspace{0.05cm}.$$
 
*Similarly, with &nbsp;$q = 0.5$&nbsp; we obtain independently of &nbsp;$p$:
 
:$$\varepsilon = 0.5 + p \cdot (1-1) = 0.5 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(Y;Z) = 0 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} I(X;Z) = 0 \hspace{0.05cm}.$$
 
*The Data Processing Theorem is not violated for &nbsp;$p < 0.5$&nbsp; and &nbsp;$q > 0.5$&nbsp; either, which is shown here by just one example:
::With &nbsp;$p = 0.1$&nbsp; and &nbsp;$q = 0.8$&nbsp; one obtains the same result as in subtask&nbsp; '''(4)''':
 
::$$\varepsilon = 0.1 + 0.8 - 2\cdot 0.1 \cdot 0.8 = 0.74  \hspace{0.3cm}
\Rightarrow \hspace{0.3cm} I(X;Z) = 1 - H_{\rm bin}(0.74) = 1 - H_{\rm bin}(0.26) =0.173 \,{\rm (bit)} \hspace{0.05cm}.$$
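
Finally, the following sketch scans a grid of&nbsp; $(p, q)$&nbsp; values (the step size $0.05$ is an arbitrary choice) and confirms&nbsp; $I(X;Z) \le \min\big(I(X;Y), I(Y;Z)\big)$&nbsp; for the cascaded BSCs with equiprobable input:

<pre>
# Sketch: scan a grid of (p, q) and confirm I(X;Z) <= min(I(X;Y), I(Y;Z))
# for the cascaded BSCs with equiprobable input.
from math import log2

def h_bin(e):
    return 0.0 if e in (0.0, 1.0) else -e * log2(e) - (1 - e) * log2(1 - e)

for p in [i / 20 for i in range(21)]:          # p = 0.00, 0.05, ..., 1.00
    for q in [j / 20 for j in range(21)]:
        eps = p + q - 2 * p * q                # crossover probability of the cascade
        I_XY, I_YZ, I_XZ = 1 - h_bin(p), 1 - h_bin(q), 1 - h_bin(eps)
        assert I_XZ <= min(I_XY, I_YZ) + 1e-12
print("Data Processing Theorem satisfied on the whole grid.")
</pre>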
  
  
[[Category:Information Theory: Exercises|^3.3 Application to Digital Signal Transmission^]]
