Difference between revisions of "Aufgaben:Exercise 4.3: PDF Comparison with Regard to Differential Entropy"

From LNTwww
{{quiz-Header|Buchseite=Information_Theory/Differential_Entropy
}}
  
[[File:EN_Inf_A_4_3_v2.png|right|frame|$h(X)$  for four probability density functions]]
The adjacent table shows the comparison result with respect to the differential entropy  $h(X)$  for
* the  [[Theory_of_Stochastic_Signals/Gleichverteilte_Zufallsgrößen|uniform distribution]]   ⇒   $f_X(x) = f_1(x)$:  
:$$f_1(x) = \left\{ \begin{array}{c} 1/(2A)  \\  0 \\  \end{array} \right. \begin{array}{*{20}c}  {\rm{for}} \hspace{0.1cm} |x| \le A \\    {\rm else} \\ \end{array}
,$$
* the  [[Aufgaben:Exercise_3.1Z:_Triangular_PDF|triangular distribution]]   ⇒   $f_X(x) = f_2(x)$:
:$$f_2(x) = \left\{ \begin{array}{c} 1/A \cdot \big [1 - |x|/A \big ] \\  0 \\  \end{array} \right. \begin{array}{*{20}c}  {\rm{for}} \hspace{0.1cm} |x| \le A \\    {\rm else} \\ \end{array}
,$$
* the  [[Theory_of_Stochastic_Signals/Exponentially_Distributed_Random_Variables#Two-sided_exponential_distribution_-_Laplace_distribution|Laplace distribution]]   ⇒   $f_X(x) = f_3(x)$:
 
:$$f_3(x) =  \lambda/2 \cdot {\rm e}^{-\lambda \hspace{0.05cm} \cdot \hspace{0.05cm}|x|}\hspace{0.05cm}.$$
  
The values for the  [[Theory_of_Stochastic_Signals/Gaußverteilte_Zufallsgrößen|Gaussian distribution]]   ⇒   $f_X(x) = f_4(x)$  with
 
:$$f_4(x) = \frac{1}{\sqrt{2\pi  \sigma^2}} \cdot {\rm e}^{- \hspace{0.05cm}{x ^2}/{(2 \sigma^2)}}$$
are not yet entered here.  These are to be determined in subtasks  '''(1)'''  to  '''(3)'''.
  
Each probability density function  $\rm (PDF)$  considered here is
* symmetric about  $x = 0$    ⇒   $f_X(-x) = f_X(x)$
* and thus zero-mean   ⇒  $m_1 = 0$.
  
  
In all cases considered here, the differential entropy can be represented as follows:
*Under the constraint  $|X| ≤ A$   ⇒     [[Information_Theory/Differentielle_Entropie#Proof:_Maximum_differential_entropy_with_peak_constraint|peak constraint]]  $($German:  "Spitzenwertbegrenzung"  or  "Amplitudenbegrenzung"   ⇒   Identifier:   $\rm A)$:
 
:$$h(X) = {\rm log} \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.01cm}\rm A} \cdot A)  
\hspace{0.05cm},$$
*Under the constraint   ${\rm E}\big [|X – m_1|^2 \big ] ≤ σ^2$   ⇒   [[Information_Theory/Differentielle_Entropie#Proof:_Maximum_differential_entropy_with_power_constraint|power constraint]]  $($German:   "Leistungsbegrenzung"   ⇒    Identifier:   $\rm L)$:
 
:$$h(X) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.01cm}\rm L} \cdot \sigma^2)  
\hspace{0.05cm}.$$
The larger the respective parameter  ${\it \Gamma}_{\hspace{-0.01cm}\rm A}$  or  ${\it \Gamma}_{\hspace{-0.01cm}\rm L}$,  the more favorable the given PDF is with respect to differential entropy under the agreed constraint.
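The two representations are consistent with each other. As a quick numerical sketch (not part of the original exercise), one can evaluate both formulas for the uniform PDF  $f_1(x)$,  using the table values  ${\it \Gamma}_{\rm A} = 2$  and  ${\it \Gamma}_{\rm L} = 12$  together with the uniform distribution's variance  $\sigma^2 = A^2/3$:

```python
import math

# Illustrative consistency check for the uniform PDF f_1(x) on [-A, A]:
# the table lists Gamma_A = 2 and Gamma_L = 12, and its variance is sigma^2 = A^2/3.
# Both representations must give the same differential entropy h(X) in bit.
A = 1.0
sigma2 = A**2 / 3                        # variance of the uniform PDF on [-A, A]
h_peak  = math.log2(2 * A)               # h(X) = log2(Gamma_A * A)
h_power = 0.5 * math.log2(12 * sigma2)   # h(X) = 1/2 * log2(Gamma_L * sigma^2)
print(h_peak, h_power)                   # both equal 1.0 bit for A = 1
```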
  
  
  
  
Hints:
*The exercise belongs to the chapter  [[Information_Theory/Differentielle_Entropie|Differential Entropy]].
*Useful hints for solving this exercise can be found in particular on the pages
::[[Information_Theory/Differentielle_Entropie#Differential_entropy_of_some_peak-constrained_random_variables|Differential entropy of some peak-constrained random variables]],   
::[[Information_Theory/Differentielle_Entropie#Differential_entropy_of_some_power-constrained_random_variables|Differential entropy of some power-constrained random variables]].
 
   
 
   
  
  
  
===Questions===
  
 
<quiz display=simple>
{Which equation is valid for the logarithm of the Gaussian PDF?
 
|type="[]"}
+ It holds: &nbsp; $\ln \big[f_X(x) \big] = \ln (A) - x^2/(2 \sigma^2)$ &nbsp; with &nbsp; $A = f_X(x=0)$.
- It holds: &nbsp; $\ln \big [f_X(x) \big] = A - \ln \big(x^2/(2 \sigma^2)\big)$ &nbsp; with &nbsp; $A = f_X(x=0)$.
  
{Which equation holds for the differential entropy of the Gaussian PDF?
 
|type="[]"}
+ It holds: &nbsp; $h(X)= 1/2 \cdot \ln (2\pi\hspace{0.05cm}{\rm e}\hspace{0.01cm}\cdot\hspace{0.01cm}\sigma^2)$&nbsp; with the pseudo-unit&nbsp; "nat".
+ It holds: &nbsp; $h(X)= 1/2 \cdot \log_2 (2\pi\hspace{0.05cm}{\rm e}\hspace{0.01cm}\cdot\hspace{0.01cm}\sigma^2)$&nbsp; with the pseudo-unit&nbsp; "bit".
  
{Complete the missing entry for the Gaussian PDF in the above table.
 
|type="{}"}
 
${\it \Gamma}_{\rm L} \ = \ $ { 17.08 3% }
  
{What values are obtained for the Gaussian PDF with the DC component &nbsp;$m_1 = \sigma = 1$?
 
|type="{}"}
 
$P/\sigma^2 \ = \ $ { 2 3% }
 
$h(X) \ = \ $ { 2.047 3% } $\ \rm bit$
  
{Which of the statements are true for the differential entropy&nbsp;  $h(X)$&nbsp; considering the&nbsp; "power constraint"&nbsp; ${\rm E}\big[|X – m_1|^2\big] ≤ σ^2$?
 
|type="[]"}
+ The Gaussian PDF &nbsp; &rArr; &nbsp; $f_4(x)$&nbsp; leads to the maximum&nbsp; $h(X)$.
- The uniform PDF &nbsp; &rArr; &nbsp; $f_1(x)$&nbsp; leads to the maximum&nbsp; $h(X)$.
- The triangular PDF &nbsp; &rArr; &nbsp; $f_2(x)$&nbsp; is very unfavorable because it is peak-constrained.  
+ The triangular PDF &nbsp; &rArr; &nbsp; $f_2(x)$&nbsp; is more favorable than the Laplace PDF &nbsp; &rArr; &nbsp;  $f_3(x)$.  
  
{Which of the statements are true under the&nbsp; "peak constraint"&nbsp; $|X| ≤ A$?&nbsp; The maximum differential entropy&nbsp; $h(X)$&nbsp; is obtained for
 
|type="[]"}
- a Gaussian PDF &nbsp; &rArr; &nbsp; $f_4(x)$&nbsp; with subsequent limiting to &nbsp;$|X| ≤ A$,
+ the uniform PDF &nbsp; &rArr; &nbsp; $f_1(x)$,
- the triangular PDF &nbsp; &rArr; &nbsp; $f_2(x)$.
  
 
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; We assume the zero-mean Gaussian PDF:
 
:$$f_X(x) = f_4(x) =A \cdot {\rm exp} [  
- \hspace{0.05cm}\frac{x ^2}{2 \sigma^2}]
\hspace{0.5cm}{\rm with}\hspace{0.5cm}
A = \frac{1}{\sqrt{2\pi  \sigma^2}}\hspace{0.05cm}.$$
*Taking the logarithm of this function yields <u>proposed solution 1</u>:
 
:$${\rm ln}\hspace{0.1cm} \big [f_X(x) \big ] = {\rm ln}\hspace{0.1cm}(A) +
{\rm ln}\hspace{0.1cm}\left [{\rm exp} (  
- \hspace{0.05cm}\frac{x ^2}{2 \sigma^2}) \right ] = {\rm ln}\hspace{0.1cm}(A) - \frac{x ^2}{2 \sigma^2}\hspace{0.05cm}.$$
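This identity follows directly from the logarithm rules; a small numerical sanity check (an illustrative sketch with an arbitrarily chosen $\sigma$) confirms it pointwise:

```python
import math

# Check ln f_X(x) = ln(A) - x^2/(2 sigma^2) with A = f_X(0) = 1/sqrt(2 pi sigma^2).
# sigma and the sample points are chosen arbitrarily for this sketch.
sigma = 2.0
A = 1 / math.sqrt(2 * math.pi * sigma**2)
f = lambda x: A * math.exp(-x**2 / (2 * sigma**2))
dev = max(abs(math.log(f(x)) - (math.log(A) - x**2 / (2 * sigma**2)))
          for x in (-3.0, 0.0, 1.5))
print(dev < 1e-12)   # the two sides agree to machine precision
```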
  
  
'''(2)'''&nbsp; <u>Both proposed solutions</u> are correct:
*Using the result from&nbsp; '''(1)'''&nbsp; we obtain for the differential entropy in&nbsp;  "nat":
 
:$$h_{\rm nat}(X)=  -\hspace{-0.1cm}  \int_{-\infty}^{+\infty} \hspace{-0.15cm}  f_X(x) \cdot {\rm ln} \hspace{0.1cm} [f_X(x)] \hspace{0.1cm}{\rm d}x =
- {\rm ln}\hspace{0.1cm}(A) \cdot \int_{-\infty}^{+\infty} \hspace{-0.15cm}  f_X(x) \hspace{0.1cm}{\rm d}x
+ \frac{1}{2 \sigma^2} \cdot \int_{-\infty}^{+\infty} \hspace{-0.15cm}  x^2 \cdot f_X(x) \hspace{0.1cm}{\rm d}x = - {\rm ln}\hspace{0.1cm}(A)  + {1}/{2}
\hspace{0.05cm}.$$
*Here it is taken into account that the first integral is equal to&nbsp; $1$&nbsp; (PDF area).
*The second integral equals the variance&nbsp; $\sigma^2$&nbsp; (since, as here, the mean is&nbsp; $m_1 = 0$).  
*Replacing the auxiliary variable&nbsp; $A$,&nbsp; we obtain:
 
:$$h_{\rm nat}(X) =  - {\rm ln}\hspace{0.05cm}\left (\frac{1}{\sqrt{2\pi  \sigma^2}} \right )  + {1}/{2} = {1}/{2}\cdot {\rm ln}\hspace{0.05cm}\left ({2\pi  \sigma^2} \right ) + {1}/{2} \cdot {\rm ln}\hspace{0.05cm}\left ( {\rm e} \right ) = {1}/{2} \cdot {\rm ln}\hspace{0.05cm}\left ({{2\pi {\rm e} \cdot \sigma^2}} \right )
\hspace{0.05cm}.$$
*If the differential entropy&nbsp; $h(X)$&nbsp; is not to be given in&nbsp; "nat"&nbsp; but in&nbsp; "bit",&nbsp; choose base&nbsp; $2$&nbsp; for the logarithm:
 
:$$h_{\rm bit}(X) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ({{2\pi {\rm e} \cdot \sigma^2}} \right )
\hspace{0.05cm}.$$
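The two closed forms differ only by the change of logarithm base. Evaluating them for&nbsp; $\sigma = 1$&nbsp; (an illustrative sketch) gives the values used later in the table:

```python
import math

# h_nat = 1/2 * ln(2 pi e sigma^2) and the base change h_bit = h_nat / ln(2),
# evaluated for sigma = 1 (chosen for illustration).
sigma = 1.0
h_nat = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
h_bit = h_nat / math.log(2)   # log2(x) = ln(x) / ln(2)
print(round(h_nat, 4), round(h_bit, 4))   # approx. 1.4189 nat and 2.0471 bit
```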
  
  
'''(3)'''&nbsp; According to the implicit definition&nbsp; $h(X) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.01cm}\rm L} \cdot \sigma^2)$,&nbsp; the parameter is thus:
 
:$${\it \Gamma}_{\rm L} = 2\pi {\rm e} \hspace{0.15cm}\underline{\approx 17.08}
\hspace{0.05cm}.$$
  
  
'''(4)'''&nbsp; We now consider a Gaussian probability density function with mean&nbsp; $m_1$:
 
:$$f_X(x) = \frac{1}{\sqrt{2\pi  \sigma^2}} \cdot {\rm exp}\left [  
- \hspace{0.05cm}\frac{(x -m_1)^2}{2 \sigma^2} \right ]
\hspace{0.05cm}.$$
* The second moment&nbsp; $m_2 = {\rm E}\big [X ^2 \big ]$&nbsp; can also be called the power&nbsp; $P$,&nbsp; while for the variance holds&nbsp; (this is also the second central moment):
 
:$$\sigma^2 = {\rm E}\big [|X – m_1|^2 \big ] = \mu_2.$$   
*According to Steiner's theorem,&nbsp; $P =  m_2 = m_1^2 + \sigma^2$.&nbsp; Thus, assuming &nbsp;$m_1 = \sigma = 1$ &nbsp; &rArr; &nbsp; $\underline{P/\sigma^2 = 2}$.
  
*The DC component does double the power,&nbsp; but it does not change the differential entropy.&nbsp; Thus it still holds:
 
:$$h(X) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ({{2\pi {\rm e} \cdot \sigma^2}} \right )= {1}/{2} \cdot {\rm log}_2\hspace{0.05cm} (17.08)\hspace{0.15cm}\underline{\approx 2.047\,{\rm bit}}
\hspace{0.05cm}.$$
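Both results of this subtask can be reproduced in a few lines (an illustrative sketch of the two steps above, Steiner's theorem and the&nbsp; $m_1$&ndash;independence of&nbsp; $h(X)$):

```python
import math

# Subtask (4): power via Steiner's theorem, entropy independent of the mean m_1.
m1, sigma = 1.0, 1.0
P = m1**2 + sigma**2                                       # second moment = power
h_bit = 0.5 * math.log2(2 * math.pi * math.e * sigma**2)   # does not depend on m1
print(P / sigma**2, round(h_bit, 3))                       # 2.0 and approx. 2.047 bit
```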
  
  
[[File:EN_Inf_A_4_3_M_v2L.png|right|frame|Completed results table for&nbsp; $h(X)$]]
'''(5)'''&nbsp; Correct are the proposed solutions&nbsp; '''(1)'''&nbsp; and&nbsp; '''(4)'''.&nbsp; The numerical values of the characteristics&nbsp; ${\it \Gamma}_{\rm L}$&nbsp; and&nbsp; ${\it \Gamma}_{\rm A}$&nbsp; are also entered in the completed table on the right.
  
A probability density function&nbsp; $f_X(x)$&nbsp; is particularly favorable under the power constraint whenever the value&nbsp; ${\it \Gamma}_{\rm L}$&nbsp; (right column)&nbsp; is as large as possible.&nbsp; Then the differential entropy&nbsp; $h(X)$&nbsp; is also large.
  
The numerical results can be interpreted as follows:
* As proved in the theory section, the Gaussian PDF&nbsp; $f_4(x)$&nbsp; leads here to the largest possible&nbsp; ${\it \Gamma}_{\rm L} &asymp; 17.08$ &nbsp; &#8658; &nbsp; <u>proposed solution 1</u> is correct&nbsp; (the value in the last column is marked in red).
* For the uniform PDF&nbsp; $f_1(x)$&nbsp; the parameter&nbsp; ${\it \Gamma}_{\rm L} = 12$&nbsp; is the smallest in the whole table &nbsp; &#8658; &nbsp; proposed solution 2 is wrong.
* The triangular PDF&nbsp; $f_2(x)$&nbsp; with&nbsp; ${\it \Gamma}_{\rm L} = 16.31$&nbsp;  is more favorable than the uniform PDF &nbsp; &#8658; &nbsp; the proposed solution 3 is wrong.
*The triangular PDF&nbsp; $f_2(x)$&nbsp; is also better than the Laplace PDF&nbsp; $f_3(x) \ \ ({\it \Gamma}_{\rm L} = 14.78)$ &nbsp; &#8658; &nbsp; <u>proposed solution 4</u> is correct.  
  
  
  
  
'''(6)'''&nbsp; Correct is proposed solution&nbsp; '''(2)'''.&nbsp; A PDF&nbsp; $f_X(x)$&nbsp; is favorable in terms of the differential entropy&nbsp; $h(X)$&nbsp; under the peak constraint&nbsp; $|X| ≤ A$&nbsp; if the weighting factor&nbsp; ${\it \Gamma}_{\rm A}$&nbsp; (middle column)&nbsp; is as large as possible:
* As shown in the theory section, the uniform PDF&nbsp; $f_1(x)$&nbsp; leads here to the largest possible&nbsp; ${\it \Gamma}_{\rm A}= 2$ &nbsp; &#8658; &nbsp; <u>proposed solution 2</u> is correct&nbsp; (the value in the middle column is marked in red).
* The triangular PDF&nbsp; $f_2(x)$,&nbsp; which is also peak-constrained, is characterized by a somewhat smaller&nbsp; ${\it \Gamma}_{\rm A}=  1.649$ &nbsp; &#8658; &nbsp; proposed solution 3 is incorrect.
* The Gaussian PDF&nbsp; $f_4(x)$&nbsp; extends infinitely.&nbsp; A peak constraint to&nbsp; $|X| ≤ A$&nbsp; leads here to Dirac delta functions in the PDF &nbsp; &#8658; &nbsp; $h(X) \to - \infty$,&nbsp; see the sample solution to Exercise 4.2Z, subtask&nbsp; '''(4)'''.
* The same would be true for the Laplace PDF&nbsp; $f_3(x)$.
  
 
{{ML-Fuß}}
  
  
[[Category:Information Theory: Exercises|^4.1  Differential Entropy^]]

Latest revision as of 14:08, 28 September 2021
