Aufgaben:Exercise 4.3Z: Exponential and Laplace Distribution
 
}}
[[File:EN_Inf_Z_4_3.png|right|frame|Exponential PDF (above)<br>Laplace PDF (below)]]
We consider here the probability density functions&nbsp; $\rm (PDF)$&nbsp; of two continuous random variables:
*The random variable&nbsp; $X$&nbsp; is exponentially distributed&nbsp; (see top plot): &nbsp; For&nbsp; $x<0$ &nbsp; &rArr; &nbsp; $f_X(x) = 0$,&nbsp; and for positive&nbsp; $x$&ndash;values:
 
:$$f_X(x) = \lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}\hspace{0.05cm}.$$
* In contrast, for the Laplace distributed random variable&nbsp; $Y$,&nbsp; in the whole range&nbsp; $-\infty < y < +\infty$&nbsp; (lower sketch):
 
:$$f_Y(y) = {\lambda}/{2} \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}|y|}\hspace{0.05cm}.$$
  
The differential entropies&nbsp; $h(X)$&nbsp; and&nbsp; $h(Y)$&nbsp; are to be calculated as functions of the PDF parameter&nbsp; $\lambda$.&nbsp; For example:
 
:$$h(X) = -\hspace{-0.7cm}  \int\limits_{x \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}
\hspace{0.03cm}(\hspace{-0.03cm}f_X)} \hspace{-0.55cm}  f_X(x) \cdot {\rm log} \hspace{0.1cm} \big [f_X(x) \big ] \hspace{0.1cm}{\rm d}x
\hspace{0.05cm}.$$
If&nbsp; ${\rm log}_2$&nbsp; is used, the pseudo-unit&nbsp; "bit"&nbsp; must be added.
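As a quick plausibility check (a sketch, not part of the original exercise), the defining integral can be evaluated numerically; the script below uses a simple midpoint rule for the exponential PDF with&nbsp; $\lambda = 1$:

```python
import math

# Exponential PDF f_X(x) = lam * exp(-lam*x) for x >= 0, here with lam = 1
lam = 1.0
def f(x):
    return lam * math.exp(-lam * x)

# Midpoint rule for h(X) = -integral of f*log2(f) over [0, 50];
# the tail beyond x = 50 is negligible (e^-50 is about 2e-22).
N, a, b = 100_000, 0.0, 50.0
dx = (b - a) / N
h = -sum(f(a + (i + 0.5) * dx) * math.log2(f(a + (i + 0.5) * dx))
         for i in range(N)) * dx
print(round(h, 3))  # 1.443 bit, i.e. log2(e)
```

The grid width and integration range are arbitrary choices for this check; any sufficiently fine grid reproduces the analytic value.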
  
  
In subtasks&nbsp; '''(2)'''&nbsp; and&nbsp; '''(4)''',&nbsp; the differential entropy is to be given in the following form:
:$$h(X) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{{\hspace{-0.01cm} \rm L}}^{\hspace{0.08cm}(X)}  \cdot \sigma^2),
\hspace{0.8cm}h(Y) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{{\hspace{-0.05cm} \rm L}}^{\hspace{0.08cm}(Y)}  \cdot \sigma^2)
\hspace{0.05cm}.$$
Determine the factor&nbsp; ${\it \Gamma}_{\rm L}^{(X)}$&nbsp; that characterizes the exponential PDF and the factor&nbsp; ${\it \Gamma}_{\rm L}^{(Y)}$&nbsp; that results for the Laplace PDF.
  
  
  
  
Hints:
*The exercise belongs to the chapter&nbsp; [[Information_Theory/Differentielle_Entropie|Differential Entropy]].
*Useful hints for solving this exercise can be found in particular on the page&nbsp;  [[Information_Theory/Differentielle_Entropie#Differential_entropy_of_some_power-constrained_random_variables|Differential entropy of some power-constrained random variables]].
*For the variance of the exponentially distributed random variable&nbsp; $X$,&nbsp; as derived in&nbsp; [[Aufgaben:Exercise_4.1Z:_Calculation_of_Moments|Exercise 4.1Z]]: &nbsp; $\sigma^2 = 1/\lambda^2$.
*The variance of the Laplace distributed random variable&nbsp; $Y$&nbsp; is twice as large for the same&nbsp; $\lambda$: &nbsp; $\sigma^2 = 2/\lambda^2$.
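The two variance statements can be checked by simulation. The sketch below is an illustration only (the sampling approach is an assumption, not part of the exercise): exponential samples via inverse-transform sampling, Laplace samples as a two-sided exponential with a random sign.

```python
import math
import random

random.seed(42)
lam, N = 1.0, 200_000

# Exponential samples via inverse transform: X = -ln(U)/lam  ->  Var = 1/lam^2
xs = [-math.log(random.random()) / lam for _ in range(N)]

# A Laplace variable is a two-sided exponential: |Y| ~ Exp(lam), random sign
#   ->  Var = E[Y^2] = E[X^2] = 2/lam^2
ys = [x if random.random() < 0.5 else -x for x in xs]

def variance(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

print(round(variance(xs), 1), round(variance(ys), 1))  # close to 1.0 and 2.0
```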
 
   
 
   
  
  
===Questions===
  
 
<quiz display=simple>
  
{Calculate the differential entropy of the exponential distribution for&nbsp; $\lambda = 1$.
 
|type="{}"}
 
$h(X) \ = \ $ { 1.443 3% } $\ \rm bit$
  
{What is the characteristic&nbsp; ${\it \Gamma}_{\rm L}^{(X)}$&nbsp; for the exponential distribution corresponding to the form &nbsp;$h(X) = 1/2 \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\rm L}^{(X)} \cdot \sigma^2)$ ?
 
|type="{}"}
 
${\it \Gamma}_{\rm L}^{(X)} \ = \ $ { 7.39 3% }
  
  
{Calculate the differential entropy of the Laplace distribution for&nbsp; $\lambda = 1$.
 
|type="{}"}
 
$h(Y) \ = \ $ { 2.443 3% } $\ \rm bit$
  
{What is the characteristic&nbsp; ${\it \Gamma}_{\rm L}^{(Y)}$&nbsp; for the Laplace distribution corresponding to the form &nbsp;$h(Y) = 1/2 \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\rm L}^{(Y)} \cdot \sigma^2)$?
 
|type="{}"}
 
${\it \Gamma}_{\rm L}^{(Y)} \ = \ $ { 14.78 3% }
 
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; Although in this exercise the result is to be given in&nbsp; "bit",&nbsp; we use the natural logarithm for the derivation.
  
*Then the differential entropy is:
 
:$$h(X) = -\hspace{-0.7cm}  \int\limits_{x \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}
\hspace{0.03cm}(\hspace{-0.03cm}f_X)} \hspace{-0.35cm}  f_X(x) \cdot {\rm ln} \hspace{0.1cm} \big [f_X(x)\big] \hspace{0.1cm}{\rm d}x
\hspace{0.05cm}.$$
*For the exponential distribution,&nbsp; the integration limits are&nbsp; $0$&nbsp; and&nbsp; $+\infty$.&nbsp; In this range, the PDF&nbsp; $f_X(x)$&nbsp; given on the specification sheet is inserted:
 
:$$h(X) =-  \int_{0}^{\infty} \hspace{-0.15cm}
\lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x} \cdot
\big [ {\rm ln} \hspace{0.1cm} (\lambda) + {\rm ln} \hspace{0.1cm} ({\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}) \big ] \hspace{0.1cm}{\rm d}x =
- \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) \cdot \int_{0}^{\infty} \hspace{-0.15cm}
\lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}\hspace{0.1cm}{\rm d}x
+ \lambda \cdot \int_{0}^{\infty} \hspace{-0.15cm}
\lambda \cdot x \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}\hspace{0.1cm}{\rm d}x
\hspace{0.05cm}.$$
We can see:
* The first integrand is identical to the PDF&nbsp; $f_X(x)$&nbsp; considered here.&nbsp; Thus, the integral over the entire integration domain yields&nbsp; $1$.
* The second integral corresponds exactly to the definition of the mean&nbsp; $m_1$&nbsp; (first-order moment).&nbsp; For the exponential distribution,&nbsp; $m_1 = 1/\lambda$&nbsp; holds.&nbsp; From this follows:
 
:$$h(X) = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) + 1 =
- \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) + \hspace{0.05cm} {\rm ln} \hspace{0.1cm} ({\rm e}) = {\rm ln} \hspace{0.1cm} ({\rm e}/\lambda)
\hspace{0.05cm}.$$
*This result carries the pseudo-unit&nbsp; "nat".&nbsp; Using&nbsp; ${\rm log}_2$&nbsp; instead of&nbsp; ${\rm ln}$,&nbsp; we obtain the differential entropy in&nbsp; "bit":
 
:$$h(X) =  {\rm log}_2 \hspace{0.1cm} ({\rm e}/\lambda)
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda = 1{\rm :}
\hspace{0.3cm} h(X) = {\rm log}_2 \hspace{0.1cm} ({\rm e}) = \frac{{\rm ln}\hspace{0.1cm}({\rm e})}{{\rm ln}\hspace{0.1cm}(2)}\hspace{0.15cm}\underline{ = 1.443\,{\rm bit}}
\hspace{0.05cm}.$$
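The conversion from "nat" to "bit" used in this step can be illustrated with a few lines of Python (a sketch, not part of the original solution): dividing the natural-log result by&nbsp; $\ln 2$&nbsp; yields the value in bit.

```python
import math

lam = 1.0
h_nat = math.log(math.e / lam)   # h(X) = ln(e/lam) in "nat"
h_bit = h_nat / math.log(2)      # division by ln(2) converts nat -> bit
print(round(h_nat, 3), round(h_bit, 3))  # 1.0 nat, 1.443 bit
```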
  
  
'''(2)'''&nbsp; Considering the relation&nbsp; $\sigma^2 = 1/\lambda^2$&nbsp; valid for the exponential distribution, the result found in&nbsp;  '''(1)'''&nbsp; can be rewritten as follows:
 
:$$h(X) =  {\rm log}_2 \hspace{0.1cm} ({\rm e}/\lambda) =
{1}/{2}\cdot {\rm log}_2 \hspace{0.1cm} ({\rm e}^2/\lambda^2) =
{1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ({\rm e}^2 \cdot \sigma^2)
\hspace{0.05cm}.$$
*A comparison with the required basic form &nbsp;$h(X) = 1/2 \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\rm L}^{(X)} \cdot \sigma^2)$&nbsp; leads to the result:
 
:$${\it \Gamma}_{{\hspace{-0.05cm} \rm L}}^{\hspace{0.08cm}(X)}  = {\rm e}^2 \hspace{0.15cm}\underline{\approx 7.39}
\hspace{0.05cm}.$$
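The rewriting can be verified numerically for several values of&nbsp; $\lambda$&nbsp; (a small sanity check, not part of the original solution):

```python
import math

GAMMA_X = math.e ** 2  # Gamma_L^(X) = e^2, approximately 7.39
for lam in (0.5, 1.0, 2.0):
    sigma2 = 1 / lam ** 2                    # exponential: sigma^2 = 1/lam^2
    lhs = math.log2(math.e / lam)            # h(X) from subtask (1)
    rhs = 0.5 * math.log2(GAMMA_X * sigma2)  # 1/2 * log2(Gamma * sigma^2)
    assert abs(lhs - rhs) < 1e-12            # both forms agree for every lam
print(round(GAMMA_X, 2))  # 7.39
```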
  
  
'''(3)'''&nbsp; For the Laplace distribution, we divide the integration domain into two subdomains:
* $Y$&nbsp; negative &nbsp; &#8658; &nbsp; contribution&nbsp; $h_{\rm neg}(Y)$,
* $Y$&nbsp; positive &nbsp; &#8658; &nbsp; contribution&nbsp; $h_{\rm pos}(Y)$.
  
  
The total differential entropy, taking into account&nbsp; $h_{\rm neg}(Y) = h_{\rm pos}(Y)$,&nbsp; is given by
 
:$$h(Y) = h_{\rm neg}(Y) + h_{\rm pos}(Y) = 2 \cdot h_{\rm pos}(Y)$$
 
:$$\Rightarrow \hspace{0.3cm} h(Y) = -  2 \cdot \int_{0}^{\infty} \hspace{-0.15cm}
\lambda/2 \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y} \cdot
\big [ {\rm ln} \hspace{0.1cm} (\lambda/2) + {\rm ln} \hspace{0.1cm} ({\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y}) \big ] \hspace{0.1cm}{\rm d}y =
- \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) \cdot \int_{0}^{\infty} \hspace{-0.15cm}
\lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y}\hspace{0.1cm}{\rm d}y
+ \lambda \cdot \int_{0}^{\infty} \hspace{-0.15cm}
\lambda \cdot y \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y}\hspace{0.1cm}{\rm d}y
\hspace{0.05cm}.$$
  
If we again consider that the first integral yields the value&nbsp; $1$&nbsp; (PDF area) and the second integral gives the mean value&nbsp; $m_1 = 1/\lambda$,&nbsp; we obtain:
 
:$$h(Y) = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) + 1 =
- \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) + \hspace{0.05cm} {\rm ln} \hspace{0.1cm} ({\rm e}) = {\rm ln} \hspace{0.1cm} (2{\rm e}/\lambda)
\hspace{0.05cm}.$$
*Since the result is required in&nbsp; "bit",&nbsp; ${\rm ln}$&nbsp; must still be replaced by&nbsp; ${\rm log}_2$:
 
:$$h(Y) =  {\rm log}_2 \hspace{0.1cm} (2{\rm e}/\lambda)
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda = 1{\rm :}
\hspace{0.3cm} h(Y) = {\rm log}_2 \hspace{0.1cm} (2{\rm e}) \hspace{0.15cm}\underline{ = 2.443\,{\rm bit}}
\hspace{0.05cm}.$$
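This result can again be cross-checked numerically (a sketch under the same midpoint-rule assumptions as the check for subtask&nbsp; '''(1)'''):

```python
import math

lam = 1.0
def f(y):
    # Laplace PDF f_Y(y) = lam/2 * exp(-lam*|y|)
    return lam / 2 * math.exp(-lam * abs(y))

# h(Y) = -integral of f*log2(f) over the symmetric range [-50, 50];
# the tails beyond |y| = 50 are negligible.
N, a, b = 200_000, -50.0, 50.0
dy = (b - a) / N
h = -sum(f(a + (i + 0.5) * dy) * math.log2(f(a + (i + 0.5) * dy))
         for i in range(N)) * dy
print(round(h, 3))  # 2.443 bit = log2(2e)
```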
  
  
'''(4)'''&nbsp; For the Laplace distribution,&nbsp; the relation&nbsp; $\sigma^2 = 2/\lambda^2$&nbsp; holds.&nbsp; Thus, we obtain:
 
:$$h(Y) =  {\rm log}_2 \hspace{0.1cm} (\frac{2{\rm e}}{\lambda}) =
{1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} (\frac{4{\rm e}^2}{\lambda^2}) =
{1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} (2 {\rm e}^2 \cdot \sigma^2) \hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\it \Gamma}_{{\hspace{-0.05cm} \rm L}}^{\hspace{0.08cm}(Y)}  = 2 \cdot {\rm e}^2 \hspace{0.15cm}\underline{\approx 14.78}
\hspace{0.05cm}.$$
*Consequently, the&nbsp; ${\it \Gamma}_{\rm L}$&nbsp; value for the Laplace distribution is twice as large as for the exponential distribution.
*Thus, with regard to differential entropy, the Laplace PDF is superior to the exponential PDF when power-limited signals are assumed.
*Under the constraint of peak limitation,&nbsp; the exponential, Laplace, and Gaussian PDFs are all completely unsuitable, since they all extend to infinity.
  
 
{{ML-Fuß}}

Latest revision as of 10:27, 11 October 2021