
Exercise 4.3Z: Exponential and Laplace Distribution

[Figure: PDF of exponential distribution (top) and Laplace distribution (below)]

We consider here the probability density functions (PDF) of two continuous-valued random variables:

  • The random variable  X  is exponentially distributed (see top plot):  for  x < 0,  f_X(x) = 0  holds, and for positive  x-values:
f_X(x) = \lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm} x} \hspace{0.05cm}.
  • On the other hand, for the Laplace distributed random variable  Y,  the following holds in the whole range  -\infty < y < +\infty  (lower sketch):
f_Y(y) = \lambda/2 \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm} |y|} \hspace{0.05cm}.
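As a quick plausibility check (our own sketch, not part of the original exercise; SciPy assumed available), the unit area of both densities and the variances  \sigma^2 = 1/\lambda^2  (exponential) and  \sigma^2 = 2/\lambda^2  (Laplace), which are needed later in subtasks (2) and (4), can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad

lam = 1.5  # arbitrary PDF parameter lambda > 0
f_X = lambda x: lam * np.exp(-lam * x)           # exponential PDF (x >= 0)
f_Y = lambda y: lam / 2 * np.exp(-lam * abs(y))  # Laplace PDF (all y)

print(quad(f_X, 0, np.inf)[0])      # unit area: ~1.0
print(2 * quad(f_Y, 0, np.inf)[0])  # unit area, using the symmetry of f_Y: ~1.0

var_X = quad(lambda x: (x - 1 / lam)**2 * f_X(x), 0, np.inf)[0]  # mean is 1/lam
var_Y = 2 * quad(lambda y: y**2 * f_Y(y), 0, np.inf)[0]          # zero-mean PDF
print(var_X, 1 / lam**2)  # ~0.444 each
print(var_Y, 2 / lam**2)  # ~0.889 each
```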

To be calculated are the differential entropies  h(X)  and  h(Y)  depending on the PDF parameter  \lambda.  For example:

h(X) = -\hspace{0.1cm} \int_{x \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}(f_X)} \hspace{-0.55cm} f_X(x) \cdot {\rm log} \hspace{0.1cm} \big [f_X(x) \big ] \hspace{0.1cm}{\rm d}x \hspace{0.05cm}.

If  \log_2  is used, add the pseudo-unit  "bit".
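This defining integral can also be evaluated numerically. The following Python sketch (our own illustration; the helper name  h_bit  and the truncation of the integration range, where the tail is numerically negligible, are our choices) computes the differential entropy in "bit" for both densities:

```python
import numpy as np
from scipy.integrate import quad

def h_bit(pdf, lo, hi):
    """Differential entropy -∫ pdf·log2(pdf) dx over [lo, hi], in bit."""
    val, _ = quad(lambda x: -pdf(x) * np.log2(pdf(x)), lo, hi)
    return val

lam = 1.0  # our choice of the PDF parameter
h_X = h_bit(lambda x: lam * np.exp(-lam * x), 0, 50 / lam)  # exponential
# Laplace: integrate the positive half and double it (even PDF)
h_Y = 2 * h_bit(lambda y: lam / 2 * np.exp(-lam * y), 0, 50 / lam)
print(h_X, h_Y)  # in bit; compare with subtasks (1) and (3)
```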


In subtasks  (2)  and  (4)  specify the differential entropy in the following form:

h(X) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)} \cdot \sigma^2) \hspace{0.5cm}{\rm or}\hspace{0.5cm} h(Y) = {1}/{2} \cdot {\rm log} \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(Y)} \cdot \sigma^2) \hspace{0.05cm}.

Determine by which factor  {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)}  the exponential distribution is characterized, and which factor  {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(Y)}  results for the Laplace distribution.
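Solving this form for the factor (a rearrangement not spelled out in the original text; logarithm to base  2,  h  in "bit") gives:

{\it \Gamma}_{\rm L} = {2^{2 \hspace{0.05cm}\cdot \hspace{0.05cm} h}}/{\sigma^2} \hspace{0.05cm}.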





Hints:


Questions

(1)  Calculate the differential entropy of the exponential distribution for  \lambda = 1.
        h(X) = _____  bit

(2)  What is the characteristic  {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)}  for the exponential distribution, corresponding to the form  h(X) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)} \cdot \sigma^2)?
        {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)} = _____

(3)  Calculate the differential entropy of the Laplace distribution for  \lambda = 1.
        h(Y) = _____  bit

(4)  What is the characteristic  {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(Y)}  for the Laplace distribution, corresponding to the form  h(Y) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(Y)} \cdot \sigma^2)?
        {\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(Y)} = _____


Solution

(1)  Although in this exercise the result should be given in "bit", we use the natural logarithm for the derivation.

  • Then the differential entropy is:
h(X) = -\hspace{0.1cm} \int_{x \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}(f_X)} \hspace{-0.55cm} f_X(x) \cdot {\rm ln} \hspace{0.1cm} \big [f_X(x) \big ] \hspace{0.1cm}{\rm d}x \hspace{0.05cm}.
  • For the exponential distribution, the integration limits are  0  and  +\infty.  In this range, the PDF  f_X(x)  given in the problem statement is used:
h(X) = -\int_{0}^{\infty} \hspace{-0.15cm} \lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x} \cdot \left [ {\rm ln} \hspace{0.1cm} (\lambda) + {\rm ln} \hspace{0.1cm} ({\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x})\right ]\hspace{0.1cm}{\rm d}x = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) \cdot \int_{0}^{\infty} \hspace{-0.15cm} \lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}\hspace{0.1cm}{\rm d}x \hspace{0.1cm} + \hspace{0.1cm} \lambda \cdot \int_{0}^{\infty} \hspace{-0.15cm} \lambda \cdot x \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}x}\hspace{0.1cm}{\rm d}x \hspace{0.05cm}.

We can see:

  • The first integrand is identical to the PDF  f_X(x)  considered here.  Thus, the integral over the entire integration domain yields  1.
  • The second integral corresponds exactly to the definition of the mean  m_1  (first-order moment).  For the exponential distribution,  m_1 = 1/\lambda  holds.  From this follows:
h(X) = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) + 1 = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda) + \hspace{0.05cm} {\rm ln} \hspace{0.1cm} ({\rm e}) = {\rm ln} \hspace{0.1cm} ({\rm e}/\lambda) \hspace{0.05cm}.
  • This result is to be given the additional unit "nat".  Using  \log_2  instead of  \ln , we obtain the differential entropy in "bit":
h(X) = {\rm log}_2 \hspace{0.1cm} ({\rm e}/\lambda) \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda = 1{\rm :} \hspace{0.3cm} h(X) = {\rm log}_2 \hspace{0.1cm} ({\rm e}) = \frac{{\rm ln} \hspace{0.1cm} ({\rm e})}{{\rm ln} \hspace{0.1cm} (2)} \hspace{0.15cm}\underline{= 1.443\,{\rm bit}} \hspace{0.05cm}.
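This result can be cross-checked numerically (a sketch under the assumption that SciPy is available; the upper integration limit  50  truncates a numerically negligible tail):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)  # exponential PDF with lambda = 1
# -f·log2(f) integrated over the support of f
h, _ = quad(lambda x: -f(x) * np.log2(f(x)), 0, 50)
print(h, np.log2(np.e))   # both ≈ 1.4427 bit
```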


(2)  Considering the equation  \sigma^2 = 1/\lambda^2  valid for the exponential distribution, we can transform the result found in  (1)  as follows:

h(X) = {\rm log}_2 \hspace{0.1cm} ({\rm e}/\lambda) = {1}/{2}\cdot {\rm log}_2 \hspace{0.1cm} ({\rm e}^2/\lambda^2) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ({\rm e}^2 \cdot \sigma^2) \hspace{0.05cm}.
  • A comparison with the required basic form  h(X) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ({\it \Gamma}_{\hspace{-0.05cm} \rm L}^{\hspace{0.08cm}(X)} \cdot \sigma^2)  leads to the result:
{\it \Gamma}_{{\hspace{-0.05cm} \rm L}}^{\hspace{0.08cm}(X)} = {\rm e}^2 \hspace{0.15cm}\underline{\approx 7.39} \hspace{0.05cm}.


(3)  For the Laplace distribution, we divide the integration domain into two subdomains:

  • Y  negative   ⇒   contribution  h_{\rm neg}(Y),
  • Y  positive   ⇒   contribution  h_{\rm pos}(Y).


The total differential entropy, taking into account  h_{\rm neg}(Y) = h_{\rm pos}(Y),  is given by

h(Y) = h_{\rm neg}(Y) + h_{\rm pos}(Y) = 2 \cdot h_{\rm pos}(Y)
\Rightarrow \hspace{0.3cm} h(Y) = - 2 \cdot \int_{0}^{\infty} \hspace{-0.15cm} \lambda/2 \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y} \hspace{0.05cm} \cdot \hspace{0.05cm} \left [ {\rm ln} \hspace{0.1cm} (\lambda/2) + {\rm ln} \hspace{0.1cm} ({\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y})\right ]\hspace{0.1cm}{\rm d}y = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) \cdot \int_{0}^{\infty} \hspace{-0.15cm} \lambda \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y}\hspace{0.1cm}{\rm d}y \hspace{0.1cm} + \hspace{0.1cm} \lambda \cdot \int_{0}^{\infty} \hspace{-0.15cm} \lambda \cdot y \cdot {\rm e}^{-\lambda \hspace{0.05cm}\cdot \hspace{0.05cm}y}\hspace{0.1cm}{\rm d}y \hspace{0.05cm}.

If we again consider that the first integral yields the value  1  (PDF area) and the second integral yields the mean  m_1 = 1/\lambda,  we obtain:

h(Y) = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) + 1 = - \hspace{0.05cm} {\rm ln} \hspace{0.1cm} (\lambda/2) + \hspace{0.05cm} {\rm ln} \hspace{0.1cm} ({\rm e}) = {\rm ln} \hspace{0.1cm} (2{\rm e}/\lambda) \hspace{0.05cm}.
  • Since the result is required in "bit", we still need to replace  \ln  with  \log_2:
h(Y) = {\rm log}_2 \hspace{0.1cm} (2{\rm e}/\lambda) \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda = 1{\rm :} \hspace{0.3cm} h(Y) = {\rm log}_2 \hspace{0.1cm} (2{\rm e}) \hspace{0.15cm}\underline{= 2.443\,{\rm bit}} \hspace{0.05cm}.
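Again a numerical cross-check (our own sketch, SciPy assumed; the split at  y = 0  mirrors the derivation above):

```python
import numpy as np
from scipy.integrate import quad

f = lambda y: 0.5 * np.exp(-abs(y))  # Laplace PDF with lambda = 1
# positive half only; the negative half contributes the same amount
h_pos, _ = quad(lambda y: -f(y) * np.log2(f(y)), 0, 50)
print(2 * h_pos, np.log2(2 * np.e))  # both ≈ 2.4427 bit
```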


(4)  For the Laplace distribution, the relation  \sigma^2 = 2/\lambda^2 holds.  Thus, we obtain:

h(Y) = {\rm log}_2 \hspace{0.1cm} (\frac{2{\rm e}}{\lambda}) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} (\frac{4{\rm e}^2}{\lambda^2}) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} (2 {\rm e}^2 \cdot \sigma^2) \hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\it \Gamma}_{{\hspace{-0.05cm} \rm L}}^{\hspace{0.08cm}(Y)} = 2 \cdot {\rm e}^2 \hspace{0.15cm}\underline{\approx 14.78} \hspace{0.05cm}.
  • Consequently, the  {\it \Gamma}_{{\hspace{-0.05cm} \rm L}} value is twice as large for the Laplace distribution as for the exponential distribution.
  • Thus, the Laplace distribution is better than the exponential distribution in terms of differential entropy when power-limited signals are assumed.
  • Under the constraint of peak limiting, however, the exponential and Laplace distributions are completely unsuitable, as is the Gaussian distribution, since all of them extend to infinity.
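As a final consistency check (our own sketch; the value  \lambda = 0.7  is arbitrary), both characteristics can be recovered numerically from  {\it \Gamma}_{\rm L} = 2^{2 \cdot h}/\sigma^2,  independent of  \lambda:

```python
import numpy as np
from scipy.integrate import quad

lam = 0.7  # arbitrary; the Gamma_L factors do not depend on lambda
f_X = lambda x: lam * np.exp(-lam * x)
f_Y = lambda y: lam / 2 * np.exp(-lam * abs(y))

h_X, _ = quad(lambda x: -f_X(x) * np.log2(f_X(x)), 0, 50 / lam)
h_pos, _ = quad(lambda y: -f_Y(y) * np.log2(f_Y(y)), 0, 50 / lam)
h_Y = 2 * h_pos  # Laplace PDF is even, so both halves contribute equally

print(2**(2 * h_X) * lam**2)      # sigma^2 = 1/lam^2  ->  e^2 ≈ 7.39
print(2**(2 * h_Y) * lam**2 / 2)  # sigma^2 = 2/lam^2  ->  2·e^2 ≈ 14.78
```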