Exercise 4.6: AWGN Channel Capacity

  
{{quiz-Header|Buchseite=Information_Theory/AWGN_Channel_Capacity_for_Continuous_Input
}}
  
[[File:P_ID2899__Inf_A_4_6.png|right|frame|Flowchart of the information]]
We start from the  [[Information_Theory/AWGN–Kanalkapazität_bei_wertkontinuierlichem_Eingang#Channel_capacity_of_the_AWGN_channel|AWGN channel model]]:
* $X$  denotes the input (transmitter).
* $N$  stands for Gaussian distributed noise.
* $Y = X + N$  describes the output (receiver) in the case of additive noise.
  
  
For the probability density function  $\rm (PDF)$  of the noise,  the following holds:
 
:$$f_N(n) = \frac{1}{\sqrt{2\pi \hspace{0.03cm}\sigma_{\hspace{-0.05cm}N}^2}} \cdot {\rm e}^{- \hspace{0.05cm}{n^2}\hspace{-0.05cm}/{(2 \hspace{0.03cm} \sigma_{\hspace{-0.05cm}N}^2) }} \hspace{0.05cm}.$$
Since the random variable  $N$  has zero mean   ⇒   $m_{N} = 0$,  the variance  $\sigma_{\hspace{-0.05cm}N}^2$  can be equated with the power  $P_N$.  In this case,  the differential entropy of the random variable  $N$  can be given as follows  (with the pseudo–unit  "bit"):
 
:$$h(N) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot P_N \right )\hspace{0.05cm}.$$
In this exercise,  $P_N = 1 \hspace{0.15cm} \rm mW$  is given.  It should be noted:
  
* The power  $P_N$  in the above equation, like the variance  $\sigma_{\hspace{-0.05cm}N}^2$ , must be dimensionless.
  
* To work with this equation, the physical quantity  $P_N$  must be suitably normalized, for example corresponding to  $P_N = 1 \hspace{0.15cm} \rm mW$    ⇒    $P_N\hspace{0.01cm}' = 1$.
  
* With a different normalization,  for example  $P_N = 1 \hspace{0.15cm} \rm mW$    ⇒    $P_N\hspace{0.01cm}' = 0.001$,  a completely different numerical value would result for  $h(N)$   (see the short sketch below).
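
The following short Python sketch  (not part of the original exercise,  only an illustration of this normalization issue)  evaluates  $h(N) = 1/2 \cdot \log_2(2\pi{\rm e} \cdot P_N\hspace{0.01cm}')$  for the two normalizations mentioned above:

<syntaxhighlight lang="python">
import math

def h_gauss(p_norm):
    """Differential entropy (in bit) of a zero-mean Gaussian variable with dimensionless power p_norm."""
    return 0.5 * math.log2(2 * math.pi * math.e * p_norm)

print(h_gauss(1.0))    # P_N = 1 mW normalized to milliwatt:  P_N' = 1      ->  about +2.047 bit
print(h_gauss(0.001))  # P_N = 1 mW normalized to watt:       P_N' = 0.001  ->  about -2.936 bit
</syntaxhighlight>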
  
  
Furthermore,  you can take the following into account when solving this exercise:
  
* The channel capacity is defined as the maximum mutual information between input  $X$  and output  $Y$  with the best possible input distribution:
 
:$$C = \max_{\hspace{-0.15cm}f_X:\hspace{0.05cm} {\rm E}[X^2] \le P_X} \hspace{-0.2cm}  I(X;Y) \hspace{0.05cm}.$$
  
*The channel capacity of the AWGN channel is:
 
:$$C_{\rm AWGN} = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + \frac{P_X}{P_N} \right ) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + \frac{P_{\hspace{-0.05cm}X}\hspace{0.01cm}'}{P_{\hspace{-0.05cm}N}\hspace{0.01cm}'} \right )\hspace{0.05cm}.$$
:It can be seen:  The channel capacity  $C$  and also the mutual information  $I(X; Y)$  are independent of the above normalization,  in contrast to the differential entropies  (see the short sketch after this list).
  
* With a Gaussian noise PDF  $f_N(n)$,  a likewise Gaussian input PDF  $f_X(x)$  leads to the maximum mutual information and thus to the channel capacity.
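
A corresponding sketch  (again only an added illustration,  using the power values that appear later in this exercise)  confirms that the capacity depends only on the ratio  $P_X/P_N$  and is therefore unaffected by the chosen normalization:

<syntaxhighlight lang="python">
import math

def c_awgn(p_x, p_n):
    """AWGN channel capacity in bit (per channel use)."""
    return 0.5 * math.log2(1 + p_x / p_n)

print(c_awgn(15.0, 1.0))      # P_X = 15 mW, P_N = 1 mW, normalized to milliwatt  ->  2.0 bit
print(c_awgn(0.015, 0.001))   # same powers, normalized to watt                   ->  2.0 bit
</syntaxhighlight>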
  
  
  
  
''Hints:''
*The exercise belongs to the chapter  [[Information_Theory/AWGN–Kanalkapazität_bei_wertkontinuierlichem_Eingang|AWGN channel capacity with continuous input]].
*Since the results are to be given in&nbsp; "bit",&nbsp; "log" &nbsp;&#8658;&nbsp; "log<sub>2</sub>"&nbsp; is used in the equations.
 
   
 
   
  
  
===Questions===
  
 
<quiz display=simple>
  
{What transmission power is required for &nbsp;$C = 2 \ \rm bit$?
 
|type="{}"}
 
$P_X \ = \ $ { 15 3% } $\ \rm mW$
  
{Under which conditions is &nbsp;$I(X; Y) = 2 \ \rm bit$&nbsp; achievable at all?
 
|type="[]"}
+ $P_X$&nbsp; is at least as large as the value determined in&nbsp; '''(1)'''.
+ The random variable&nbsp; $X$&nbsp; is Gaussian distributed.
+ The random variable&nbsp; $X$&nbsp; is zero mean.
+ The random variables&nbsp; $X$&nbsp; and&nbsp; $N$&nbsp; are uncorrelated.
- The random variables&nbsp; $X$&nbsp; and&nbsp; $Y$&nbsp; are uncorrelated.
  
  
  
{Calculate the differential entropies of the random variables&nbsp; $N$,&nbsp; $X$&nbsp; and&nbsp; $Y$&nbsp; with appropriate normalization, <br>for example,&nbsp;  $P_N = 1 \hspace{0.15cm} \rm mW$ &nbsp;&nbsp; &#8658; &nbsp;&nbsp; $P_N\hspace{0.01cm}' = 1$.
 
|type="{}"}
 
$h(N) \ = \ $ { 2.047 3% } $\ \rm bit$
$h(X) \ = \ $ { 4 3% } $\ \rm bit$
$h(Y) \ = \ $ { 4.047 3% } $\ \rm bit$
  
  
{What are the other information-theoretic descriptive quantities?
 
|type="{}"}
 
$h(Y|X) \ = \ $ { 2.047 3% } $\ \rm bit$
$h(X|Y) \ = \ $ { 2 3% } $\ \rm bit$
$h(XY) \ = \ $ { 6.047 3% } $\ \rm bit$
  
  
{What quantities would result for the same&nbsp; $P_X$&nbsp; in the limiting case &nbsp; $P_N\hspace{0.01cm} ' \to 0$ ?
 
|type="{}"}
 
$h(X) \ = \ $ { 4 3% } $\ \rm bit$
$h(Y|X) \ = \ $ { 0 3% } $\ \rm bit$
$h(Y) \ = \ $ { 4 3% } $\ \rm bit$
$I(X;Y) \ = \ $ { 4 3% } $\ \rm bit$
$h(X|Y) \ = \ $ { 0 3% } $\ \rm bit$
 
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; The equation for the AWGN channel capacity in&nbsp; "bit"&nbsp; is:
 
:$$C_{\rm bit} = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + {P_X}/{P_N} \right )\hspace{0.05cm}.$$
:With&nbsp; $C_{\rm bit} = 2$&nbsp; this results in:
 
:$$4 \stackrel{!}{=} {\rm log}_2\hspace{0.05cm}\left ( 1 + {P_X}/{P_N} \right )
\hspace{0.3cm}\Rightarrow \hspace{0.3cm} 1 + {P_X}/{P_N} \stackrel {!}{=} 2^4 = 16
\hspace{0.3cm}\Rightarrow \hspace{0.3cm} P_X = 15 \cdot P_N \hspace{0.15cm}\underline{= 15\,{\rm mW}} \hspace{0.05cm}. $$
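*The result can also be checked by inverting the capacity formula,  $P_X = (2^{2C}-1) \cdot P_N$,  for example with the following short Python sketch  (added here only as a numerical cross-check):

<syntaxhighlight lang="python">
P_N_mW = 1.0    # given noise power
C_bit  = 2.0    # required channel capacity

# invert  C = 1/2 * log2(1 + P_X/P_N)   ->   P_X = (2**(2*C) - 1) * P_N
P_X_mW = (2 ** (2 * C_bit) - 1) * P_N_mW
print(P_X_mW)   # 15.0  ->  P_X = 15 mW
</syntaxhighlight>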
  
  
'''(2)'''&nbsp; Correct are&nbsp; <u>proposed solutions 1 through 4</u>.&nbsp; Justification:
* For &nbsp;$P_X < 15 \ \rm  mW$&nbsp; the mutual information &nbsp;$I(X; Y)$&nbsp; will always be less than&nbsp; $2$&nbsp; bit,&nbsp; regardless of all other conditions.
* With &nbsp;$P_X = 15 \ \rm mW$&nbsp; the maximum mutual information &nbsp;$I(X; Y) = 2$&nbsp; bit is only achievable if the input quantity&nbsp; $X$&nbsp; is Gaussian distributed.&nbsp; <br>The output quantity&nbsp; $Y$&nbsp; is then also Gaussian distributed.
* If the random variable&nbsp; $X$&nbsp; has a non-zero mean &nbsp;$m_X$&nbsp; (DC component),&nbsp; the variance &nbsp;$\sigma_X^2 = P_X - m_X^2 $&nbsp; is smaller for given&nbsp; $P_X$,&nbsp; and it holds: &nbsp; <br>$I(X; Y) = 1/2 \cdot \log_2 \ (1 + \sigma_X^2/P_N) < 2$&nbsp; bit.
* The precondition for the given channel capacity equation is that&nbsp; $X$ &nbsp;and&nbsp; $N$&nbsp; are uncorrelated.&nbsp; If,&nbsp; on the other hand,&nbsp; the random variables&nbsp; $X$ &nbsp;and&nbsp; $Y$&nbsp; were uncorrelated,&nbsp; then &nbsp;$I(X; Y) = 0$&nbsp; would result.
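*These statements can also be made plausible by simulation.&nbsp; For jointly Gaussian  $X$  and  $Y$,  the mutual information can be written as  $I(X;Y) = -1/2 \cdot \log_2(1-\rho^2)$  with the correlation coefficient  $\rho$;  the following NumPy sketch  (an added illustration,  not part of the original solution)  estimates  $\rho$  from simulated samples:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
num = 10**6
P_X, P_N = 15.0, 1.0                      # normalized powers

x = rng.normal(0.0, np.sqrt(P_X), num)    # Gaussian, zero-mean input
n = rng.normal(0.0, np.sqrt(P_N), num)    # Gaussian noise, independent of x
y = x + n

rho = np.corrcoef(x, y)[0, 1]             # correlation coefficient of (X, Y)
print(-0.5 * np.log2(1.0 - rho**2))       # close to 2 bit
</syntaxhighlight>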
  
 
  
 
'''(3)'''&nbsp; The given equation for the differential entropy is meaningful only for dimensionless powers.&nbsp; With the proposed normalization one obtains:

[[File: P_ID2901__Inf_A_4_6c.png |right|frame|Information-theoretical quantities for the AWGN channel]]
* For &nbsp;$P_N = 1 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; &nbsp;$P_N\hspace{0.05cm}' = 1$:
 
 
:$$h(N) \  =  \ {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot 1 \right ) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 17.08 \right ) \hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm},$$
* For &nbsp;$P_X = 15 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; &nbsp;$P_X\hspace{0.01cm}' = 15$:
 
:$$h(X) \  =  \ {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot 15 \right )  =  {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e}  \right ) + {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left (15 \right ) \hspace{0.15cm}\underline{= 4.000\,{\rm bit}}\hspace{0.05cm}, $$
* For &nbsp;$P_Y = P_X  + P_N = 16 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; $P_Y\hspace{0.01cm}' = 16$:
 
:$$h(Y) = 2.047\,{\rm bit} + 2.000\,{\rm bit} \hspace{0.15cm}\underline{= 4.047\,{\rm bit}}\hspace{0.05cm}.$$
  
  
'''(4)'''&nbsp; For the AWGN channel,  the differential irrelevance is given by:
 
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) = h(N) \hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm}.$$
*However,&nbsp; according to the adjacent graph,&nbsp; the following also holds:
 
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) = h(Y) - I(X;Y) = 4.047 \,{\rm bit} - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm}. $$
*From this,&nbsp; the differential equivocation can be calculated as follows:
 
:$$h(X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y) = h(X) - I(X;Y) = 4.000 \,{\rm bit} - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 2.000\,{\rm bit}}\hspace{0.05cm}.$$
 
[[File: P_ID2900__Inf_A_4_6e.png |right|frame|Information-theoretical quantities for the ideal channel]]
*Finally,&nbsp; the differential joint entropy is also given,&nbsp; which cannot be read directly from the above diagram:
 
:$$h(XY) = h(X) + h(Y) - I(X;Y) = 4.000 \,{\rm bit} + 4.047 \,{\rm bit}  - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 6.047\,{\rm bit}}\hspace{0.05cm}.$$
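*The quantities from subtasks  '''(3)'''  and  '''(4)'''  can be reproduced with a few lines of Python  (a sketch added for illustration;  all powers are normalized as proposed above):

<syntaxhighlight lang="python">
import math

def h_gauss(p):                       # differential entropy of a zero-mean Gaussian, in bit
    return 0.5 * math.log2(2 * math.pi * math.e * p)

P_X, P_N = 15.0, 1.0
P_Y = P_X + P_N

h_X, h_N, h_Y = h_gauss(P_X), h_gauss(P_N), h_gauss(P_Y)
I = 0.5 * math.log2(1 + P_X / P_N)    # mutual information = channel capacity here

print(h_N, h_X, h_Y)                  # approx. 2.047, 4.000, 4.047 bit
print(h_Y - I)                        # h(Y|X)  approx. 2.047 bit
print(h_X - I)                        # h(X|Y)  approx. 2.000 bit
print(h_X + h_Y - I)                  # h(XY)   approx. 6.047 bit
</syntaxhighlight>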
  
  
  
'''(5)'''&nbsp; For the ideal channel,  one obtains with&nbsp; $h(X)\hspace{0.15cm}\underline{=  4.000 \,{\rm bit}}$:
 
 
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) \  =  \ h(N) \hspace{0.15cm}\underline{= 0\,{\rm (bit)}}\hspace{0.05cm},$$
 
:$$h(Y) \  =  \ h(X) \hspace{0.15cm}\underline{= 4\,{\rm bit}}\hspace{0.05cm},$$
:$$I(X;Y) \  =  \ h(Y) - h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X)\hspace{0.15cm}\underline{= 4\,{\rm bit}}\hspace{0.05cm},$$
:$$h(X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y) \  =  \ h(X) - I(X;Y)\hspace{0.15cm}\underline{= 0\,{\rm (bit)}}\hspace{0.05cm}.$$
  
*The graph shows these quantities in a flowchart.&nbsp; The same diagram would result in the discrete-valued case with&nbsp; $M = 16$&nbsp; equally probable symbols &nbsp; &#8658; &nbsp; $H(X)=  4.000 \,{\rm bit}$.
*One would only have to replace each&nbsp; $h$&nbsp; by an&nbsp; $H$.
 
  
 
{{ML-Fuß}}
