Applets:PDF, CDF and Moments of Special Distributions

From LNTwww
{{LntAppletLinkEnDe|wdf-vtf_en|wdf-vtf}}
  
==Applet Description==
 
<br>
 
The applet presents the description forms of two continuous-value random variables&nbsp; $X$&nbsp; and&nbsp; $Y\hspace{-0.1cm}$.&nbsp; For the red random variable&nbsp; $X$&nbsp; and the blue random variable&nbsp; $Y$,&nbsp; the following basic forms are available for selection:
* Gaussian distribution, uniform distribution, triangular distribution, exponential distribution, Laplace distribution, Rayleigh distribution, Rice distribution, Weibull distribution, Wigner semicircle distribution, Wigner parabolic distribution, Cauchy distribution.
The following data refer to the random variable&nbsp; $X$.&nbsp; Graphically represented are
 
* the probability density function&nbsp; $f_{X}(x)$&nbsp; (top)&nbsp; and
* the cumulative distribution function&nbsp; $F_{X}(x)$&nbsp; (bottom).
  
  
In addition, some integral parameters are output, namely
*the linear mean value&nbsp; $m_X = {\rm E}\big[X \big]$,
*the second order moment&nbsp; $P_X ={\rm E}\big[X^2  \big] $,
*the variance&nbsp; $\sigma_X^2 = P_X - m_X^2$,
*the standard deviation&nbsp; $\sigma_X$,
*the Charlier skewness&nbsp; $S_X$,
*the kurtosis&nbsp; $K_X$.
  
==Definition and Properties of the Presented Descriptive Variables==
<br>
In this applet we consider only ''(value&ndash;)continuous random variables'', i.e. those whose possible numerical values are not countable.
*The range of values of these random variables is thus in general that of the real numbers&nbsp; $(-\infty \le X \le +\infty)$.  
*However, it is possible that the range of values is limited to an interval:&nbsp; $x_{\rm min} \le X \le x_{\rm max}$.
 
<br><br>
 
  
===Probability density function (PDF)===
For a continuous random variable&nbsp; $X$,&nbsp; the probabilities that&nbsp; $X$&nbsp; takes on specific values&nbsp; $x$&nbsp; are identically zero:&nbsp; ${\rm Pr}(X= x) \equiv 0$.&nbsp; Therefore, to describe a continuous random variable, we must always refer to the&nbsp; ''probability density function''&nbsp; – in short&nbsp; $\rm PDF$.
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The value of the&nbsp; &raquo;'''probability density function'''&laquo;&nbsp; $f_{X}(x)$&nbsp; at location&nbsp; $x$&nbsp; is equal to the probability that the instantaneous value of the random variable&nbsp; $X$&nbsp; lies in an&nbsp; (infinitesimally small)&nbsp; interval of width&nbsp; $Δx$&nbsp; around&nbsp; $x$,&nbsp; divided by&nbsp; $Δx$:
  
 
:$$f_X(x) = \lim_{ {\rm \Delta} x \hspace{0.05cm}\to \hspace{0.05cm} 0} \frac{ {\rm Pr} \big [x - {\rm \Delta} x/2 \le X \le x +{\rm \Delta} x/2 \big ] }{ {\rm \Delta} x}.$$
 
  
}}
  
  
This extremely important descriptive variable has the following properties:  
  
*For the probability that the random variable&nbsp; $X$&nbsp; lies in the range between&nbsp; $x_{\rm u}$&nbsp; and&nbsp; $x_{\rm o} > x_{\rm u}$:&nbsp;  
 
:$${\rm Pr}(x_{\rm u} \le  X \le x_{\rm o}) = \int_{x_{\rm u}}^{x_{\rm o}} f_{X}(x) \ {\rm d}x.$$
 
*As an important normalization property,&nbsp; this yields for the area under the PDF with the boundary transitions&nbsp; $x_{\rm u} → \hspace{0.1cm} – \hspace{0.05cm} ∞$&nbsp; and&nbsp; $x_{\rm o} → +∞$:
 
:$$\int_{-\infty}^{+\infty} f_{X}(x) \ {\rm d}x = 1.$$
 
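The normalization property is easy to check numerically. The following Python sketch (the parameter values $m_X = 1$ and $σ_X = 2$ are arbitrary example choices) approximates the area under a Gaussian PDF with the trapezoidal rule:

```python
import math

# Gaussian PDF with (arbitrary) mean m_X = 1 and standard deviation sigma_X = 2
m_X, sigma_X = 1.0, 2.0

def f_X(x):
    return math.exp(-(x - m_X)**2 / (2 * sigma_X**2)) / (math.sqrt(2 * math.pi) * sigma_X)

# Trapezoidal approximation of the area under the PDF;
# the interval m_X +/- 10 sigma_X covers practically all probability mass
dx = 0.001
xs = [m_X - 10 * sigma_X + i * dx for i in range(int(20 * sigma_X / dx) + 1)]
area = sum(0.5 * (f_X(a) + f_X(b)) * dx for a, b in zip(xs, xs[1:]))

print(round(area, 6))  # close to 1.0
```

The same check works for every PDF in the applet, since all of them satisfy the normalization property.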
 
<br>
 
  
===Cumulative distribution function (CDF)===
  
The&nbsp; ''cumulative distribution function''&nbsp; – in short&nbsp; $\rm CDF$&nbsp; –  provides the same information about the random variable&nbsp; $X$&nbsp; as the probability density function.
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''cumulative distribution function'''&laquo;&nbsp;  $F_{X}(x)$&nbsp; corresponds to the probability that the random variable&nbsp; $X$&nbsp; is less than or equal to a real number&nbsp; $x$:&nbsp;  
:$$F_{X}(x)  = {\rm Pr}( X \le x).$$}}
  
 
  
The CDF has the following characteristics:
  
*The CDF is computable from the probability density function&nbsp; $f_{X}(x)$&nbsp; by integration.&nbsp; It holds:  
 
 
 
 
:$$F_{X}(x) = \int_{-\infty}^{x}f_X(\xi)\,{\rm d}\xi.$$
 
*Since the PDF is never negative,&nbsp; $F_{X}(x)$&nbsp; increases at least weakly monotonically,&nbsp; and always lies between the following limits:
 
:$$F_{X}(x → \hspace{0.1cm} – \hspace{0.05cm} ∞) = 0,  \hspace{0.5cm}F_{X}(x → +∞) = 1.$$  
 
*Inversely,&nbsp; the probability density function can be determined from the CDF by differentiation:  
 
:$$f_{X}(x)=\frac{{\rm d} F_{X}(\xi)}{{\rm d}\xi}\Bigg |_{\hspace{0.1cm}x=\xi}.$$
 
*For the probability that the random variable&nbsp; $X$&nbsp; lies in the range between&nbsp; $x_{\rm u}$&nbsp; and&nbsp; $x_{\rm o} > x_{\rm u}$,&nbsp; the following holds:
 
:$${\rm Pr}(x_{\rm u} \le  X \le x_{\rm o}) = F_{X}(x_{\rm o}) - F_{X}(x_{\rm u}).$$
 
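These relationships can be illustrated numerically. The following Python sketch (with assumed parameters $m_X = 0$, $σ_X = 1$) obtains the CDF of a Gaussian random variable by integrating its PDF and evaluates ${\rm Pr}(x_{\rm u} \le X \le x_{\rm o})$ as a difference of two CDF values:

```python
import math

def f_X(x, m_X=0.0, sigma_X=1.0):
    """Gaussian PDF (example parameters m_X = 0, sigma_X = 1)."""
    return math.exp(-(x - m_X)**2 / (2 * sigma_X**2)) / (math.sqrt(2 * math.pi) * sigma_X)

def F_X(x, dx=1e-3):
    """CDF by numerical integration of the PDF from -10 (practically -infinity) to x."""
    n = int((x + 10.0) / dx)
    return sum(f_X(-10.0 + (i + 0.5) * dx) for i in range(n)) * dx

# Pr(x_u <= X <= x_o) as a difference of CDF values
x_u, x_o = -1.0, 1.0
p = F_X(x_o) - F_X(x_u)
print(round(p, 3))   # about 0.683 for the interval m_X +/- sigma_X
```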
 
<br>
 
  
===Expected values and moments===
The probability density function,&nbsp; like the cumulative distribution function,&nbsp; provides very extensive information about the random variable under consideration.&nbsp; Less extensive,&nbsp; but more compact information is provided by the so-called&nbsp; "expected values"&nbsp; and&nbsp; "moments".
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''expected value'''&laquo;&nbsp; with respect to any weighting function&nbsp; $g(x)$&nbsp; can be calculated with the PDF&nbsp; $f_{\rm X}(x)$&nbsp; in the following way:
 
:$${\rm E}\big[g (X ) \big] = \int_{-\infty}^{+\infty} g(x)\cdot f_{X}(x) \,{\rm d}x.$$
 
Substituting&nbsp; $g(X) = X^k$&nbsp; into this equation,&nbsp; we obtain the&nbsp; &raquo;'''moment of $k$-th order'''&laquo;:
 
 
:$$m_k = {\rm E}\big[X^k  \big] = \int_{-\infty}^{+\infty} x^k\cdot f_{X} (x ) \, {\rm d}x.$$}}
  
  
From this equation we obtain
*with&nbsp; $k = 1$&nbsp; for the&nbsp; ''first order moment''&nbsp; or the&nbsp; ''(linear)&nbsp; mean'':
 
:$$m_1 = {\rm E}\big[X \big] = \int_{-\infty}^{ \rm +\infty} x\cdot f_{X} (x ) \,{\rm d}x,$$
 
*with&nbsp; $k = 2$&nbsp; for the&nbsp; ''second order moment''&nbsp; or the&nbsp; ''second moment'':
 
:$$m_2 = {\rm E}\big[X^{\rm 2} \big] = \int_{-\infty}^{ \rm +\infty} x^{ 2}\cdot f_{ X} (x) \,{\rm d}x.$$
 
  
In relation to signals,&nbsp; the following terms are also common:  
* $m_1$&nbsp; indicates the&nbsp; ''DC component'';&nbsp; &nbsp; with respect to the random quantity&nbsp; $X$&nbsp; in the following we also write&nbsp; $m_X$.
* $m_2$&nbsp; corresponds to the&nbsp; ''signal power''&nbsp; $P_X$&nbsp; (with reference to the unit resistance&nbsp; $1 \ Ω)$.
  
  
For example, if&nbsp; $X$&nbsp; denotes a voltage, then according to these equations&nbsp; $m_X$&nbsp; has the unit&nbsp; ${\rm V}$&nbsp; and the power&nbsp; $P_X$&nbsp; has the unit&nbsp; ${\rm V}^2.$ If the power is to be expressed in "watts"&nbsp; $\rm (W)$, then&nbsp; $P_X$&nbsp; must be divided by the resistance value&nbsp; $R$.&nbsp;  
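As a numerical cross-check of these definitions, the following Python sketch evaluates $m_1$ and $m_2$ for a uniform PDF between 1 V and 3 V (the voltage range is an arbitrary example choice):

```python
# First and second order moment of a uniform PDF between x_min and x_max,
# evaluated by the midpoint rule: m_k = integral of x^k * f_X(x) dx
x_min, x_max = 1.0, 3.0                  # example range, interpreted as volts
pdf = 1.0 / (x_max - x_min)              # constant PDF value inside the interval

dx = 1e-4
n = int((x_max - x_min) / dx)
xs = [x_min + (i + 0.5) * dx for i in range(n)]
m1 = sum(x * pdf * dx for x in xs)       # DC component, here 2 "volts"
m2 = sum(x**2 * pdf * dx for x in xs)    # signal power, here 13/3 "volts squared"

print(round(m1, 4), round(m2, 4))
```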
 
<br>
 
  
===Central moments===
  
Of particular importance in statistics are the so-called&nbsp; ''central moments'',&nbsp; from which many characteristic values are derived.
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''central moments'''&laquo;,&nbsp; in contrast to the conventional moments,&nbsp; are each referred to the mean value&nbsp; $m_1$.&nbsp; For these,&nbsp; the following applies with&nbsp; $k = 1, \ 2,$&nbsp;...:
  
 
 
:$$\mu_k = {\rm E}\big[(X-m_{\rm 1})^k\big] = \int_{-\infty}^{+\infty} (x-m_{\rm 1})^k\cdot f_{X}(x) \,{\rm d}x.$$}}
  
  
*For mean-free random variables, the central moments&nbsp; $\mu_k$&nbsp; coincide with the noncentral moments&nbsp; $m_k$.&nbsp;  
  
*The first order central moment is by definition equal to&nbsp; $\mu_1 = 0$.  
  
The noncentral moments&nbsp; $m_k$&nbsp; and the central moments&nbsp; $\mu_k$&nbsp; can be converted directly into each other.&nbsp; With&nbsp; $m_0 = 1$&nbsp; and&nbsp; $\mu_0 = 1$&nbsp; the following holds:
 
:$$\mu_k = \sum\limits_{\kappa= 0}^{k} \left( \begin{array}{*{2}{c}} k \\ \kappa \\ \end{array} \right)\cdot m_\kappa \cdot (-m_1)^{k-\kappa},$$
 
 
:$$m_k = \sum\limits_{\kappa= 0}^{k} \left( \begin{array}{*{2}{c}}  k \\ \kappa \\ \end{array} \right)\cdot \mu_\kappa \cdot {m_1}^{k-\kappa}.$$
 
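The two conversion formulas can be verified directly, for instance with the exponential distribution for $λ = 1$, whose noncentral moments $m_k = k!$ are known (a Python sketch):

```python
from math import comb, factorial

# Noncentral moments of the exponential distribution with lambda = 1: m_k = k!
m = [factorial(k) for k in range(5)]          # m_0 = 1, m_1 = 1, m_2 = 2, ...

# Central moments via  mu_k = sum_kappa C(k, kappa) * m_kappa * (-m_1)^(k - kappa)
mu = [sum(comb(k, kp) * m[kp] * (-m[1])**(k - kp) for kp in range(k + 1))
      for k in range(5)]

# Back-conversion:  m_k = sum_kappa C(k, kappa) * mu_kappa * m_1^(k - kappa)
m_back = [sum(comb(k, kp) * mu[kp] * m[1]**(k - kp) for kp in range(k + 1))
          for k in range(5)]

print(mu)      # [1, 0, 1, 2, 9]  ->  variance 1, mu_3 = 2, mu_4 = 9
print(m_back)  # [1, 1, 2, 6, 24] ->  matches the original moments
```

The resulting values μ₂ = 1, μ₃ = 2 and μ₄ = 9 also confirm the skewness $S_X = 2$ and kurtosis $K_X = 9$ of the exponential distribution stated later in this article.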
 
<br>
 
===Some frequently used central moments===
  
From the last definition the following additional characteristics can be derived:  
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''variance'''&laquo;&nbsp; of the considered random variable&nbsp; $X$&nbsp; is the second order central moment:
 
:$$\mu_2 = {\rm E}\big[(X-m_{\rm 1})^2\big] = \sigma_X^2.$$  
 
*The variance&nbsp; $σ_X^2$&nbsp; corresponds physically to the&nbsp; "AC power",&nbsp; and the&nbsp; &raquo;'''standard deviation'''&laquo;&nbsp; $σ_X$&nbsp; gives the&nbsp; "rms value".
*From the linear mean and the second moment,&nbsp; the variance can be calculated according to&nbsp; ''Steiner's theorem''&nbsp; in the following way:&nbsp; $\sigma_X^{2} = {\rm E}\big[X^2 \big] - {\rm E}^2\big[X \big].$}}
  
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''Charlier's skewness'''&laquo;&nbsp; $S_X$&nbsp; of the considered random variable&nbsp; $X$&nbsp; denotes the third central moment related to $σ_X^3$.
*For a symmetric probability density function,&nbsp; the parameter&nbsp; $S_X$&nbsp; is always zero.
*The larger&nbsp; $S_X = \mu_3/σ_X^3$&nbsp; is,&nbsp; the more asymmetric is the PDF around the mean&nbsp; $m_X$.  
*For example,&nbsp; the exponential distribution has the&nbsp; (positive)&nbsp; skewness&nbsp; $S_X =2$,&nbsp; independent of the distribution parameter&nbsp; $λ$.}}
  
  
 
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''kurtosis'''&laquo;&nbsp; of the considered random variable&nbsp; $X$&nbsp; is the quotient&nbsp; $K_X = \mu_4/σ_X^4$&nbsp; &nbsp; $(\mu_4:$&nbsp; fourth-order central moment$)$.  
*For a Gaussian distributed random variable this always yields the value&nbsp; $K_X = 3$.  
*This parameter can be used, for example, to check whether a given random variable is actually Gaussian or can at least be approximated by a Gaussian distribution. }}
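The meaning of these parameters can be checked by simulation. The following Python sketch estimates $σ_X$, $S_X$ and $K_X$ from random samples of a Gaussian and an exponentially distributed variable (sample size and parameter values are chosen only for illustration):

```python
import random, statistics

def estimate(samples):
    """Estimate standard deviation, Charlier skewness and kurtosis from samples."""
    m1 = statistics.fmean(samples)
    mu2 = statistics.fmean((x - m1)**2 for x in samples)
    mu3 = statistics.fmean((x - m1)**3 for x in samples)
    mu4 = statistics.fmean((x - m1)**4 for x in samples)
    sigma = mu2**0.5
    return sigma, mu3 / sigma**3, mu4 / sigma**4   # sigma_X, S_X, K_X

random.seed(1)
gauss = [random.gauss(0.0, 2.0) for _ in range(200_000)]
expo  = [random.expovariate(1.5) for _ in range(200_000)]

sigma_g, S_g, K_g = estimate(gauss)   # expected: close to 2, 0 and 3
sigma_e, S_e, K_e = estimate(expo)    # expected: close to 1/1.5, 2 and 9
print(round(S_g, 2), round(K_g, 2))
print(round(S_e, 2), round(K_e, 2))
```

The estimates reproduce the theoretical values $S_X = 0$, $K_X = 3$ (Gaussian) and $S_X = 2$, $K_X = 9$ (exponential) up to the usual statistical fluctuations.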
  
  
==Compilation of some Continuous&ndash;Value Random Variables==
 
<br>   
 
The applet considers the following distributions:&nbsp;
:Gaussian distribution, uniform distribution, triangular distribution, exponential distribution, Laplace distribution, Rayleigh distribution, <br>Rice distribution, Weibull distribution, Wigner semicircle distribution, Wigner parabolic distribution, Cauchy distribution.  
Some of these will be described in detail here.

===Gaussian distributed random variables===

[[File:EN_Sto_T_3_5_S2_v2.png |right|frame|Gaussian random variable:&nbsp; PDF and CDF]]
'''(1)'''&nbsp; &nbsp; &raquo;'''Probability density function'''&laquo; &nbsp; $($axisymmetric around&nbsp; $m_X)$
:$$f_X(x) = \frac{1}{\sqrt{2\pi}\cdot\sigma_X}\cdot {\rm e}^{-(x-m_X)^2 /(2\sigma_X^2) }.$$
PDF parameters:&nbsp;
*$m_X$&nbsp; (mean or DC component),
*$σ_X$&nbsp; (standard deviation or rms value).


'''(2)'''&nbsp; &nbsp; &raquo;'''Cumulative distribution function'''&laquo; &nbsp; $($point symmetric around&nbsp; $m_X)$
:$$F_X(x)= \phi\Big(\frac{x-m_X}{\sigma_X}\Big)\hspace{0.5cm}{\rm with}\hspace{0.5cm}\phi (x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} {\rm e}^{-u^{2}/2}\,\,{\rm d}u.$$

$ϕ(x)$: &nbsp; Gaussian error integral&nbsp; (cannot be calculated analytically, must be taken from tables).


'''(3)'''&nbsp; &nbsp; &raquo;'''Central moments'''&laquo;
:$$\mu_{k}=(k- 1)\cdot (k- 3) \ \cdots \  3\cdot 1\cdot\sigma_X^k\hspace{0.2cm}{\rm (if}\hspace{0.2cm} k\hspace{0.2cm}{\rm even)}.$$
*Charlier's skewness&nbsp; $S_X = 0$,&nbsp; since&nbsp; $\mu_3 = 0$&nbsp; $($PDF is symmetric about&nbsp; $m_X)$.
*Kurtosis&nbsp; $K_X = 3$,&nbsp; since&nbsp; $\mu_4 = 3 \cdot \sigma_X^4$&nbsp; &rArr; &nbsp; $K_X = 3$&nbsp; results only for the Gaussian PDF.


'''(4)'''&nbsp; &nbsp; &raquo;'''Further remarks'''&laquo;
*The distribution is named after the mathematician,&nbsp; physicist and astronomer Carl Friedrich Gauss.
*If&nbsp; $m_X = 0$&nbsp; and&nbsp; $σ_X = 1$,&nbsp; the distribution is often referred to as the&nbsp; ''standard normal distribution''.
*The standard deviation can also be determined graphically from the bell-shaped PDF&nbsp; $f_{X}(x)$&nbsp; (as the distance between the maximum value and the point of inflection).
*Random variables with Gaussian PDF are realistic models for many physical quantities and are also of great importance for communications engineering.
*The sum of many small and independent components asymptotically leads to the Gaussian PDF &nbsp; &rArr; &nbsp; central limit theorem of statistics &nbsp; &rArr; &nbsp; basis for noise processes.
*If a Gaussian distributed signal is applied to a linear filter for spectral shaping,&nbsp; the output signal is also Gaussian distributed.


[[File:Gauss_Signal.png|right|frame| Signal and PDF of a Gaussian noise signal]]
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; The graphic shows a section of a stochastic noise signal&nbsp; $x(t)$&nbsp; whose instantaneous value can be taken as a continuous random variable&nbsp; $X$.&nbsp; From the PDF shown on the right,&nbsp; it can be seen that:
* A Gaussian random variable is present.
*Instantaneous values around the mean&nbsp; $m_X$&nbsp; occur most frequently.
*If there are no statistical dependencies between the samples&nbsp; $x_ν$&nbsp; of the sequence,&nbsp; such a signal is also called&nbsp; ''"white noise".''}}
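The central limit theorem mentioned above can be illustrated numerically: summing only twelve independent uniform components already yields an almost Gaussian random variable (a Python sketch; the number of components and samples are arbitrary choices):

```python
import random, statistics

random.seed(42)

def clt_sample():
    """Sum of 12 independent uniform components, shifted to zero mean.
    Each component has variance 1/12, so the sum has sigma_X = 1."""
    return sum(random.random() for _ in range(12)) - 6.0

samples = [clt_sample() for _ in range(100_000)]
m1 = statistics.fmean(samples)
mu2 = statistics.fmean((x - m1)**2 for x in samples)
mu4 = statistics.fmean((x - m1)**4 for x in samples)

# Mean near 0, variance near 1; the kurtosis of this finite sum is exactly 2.9,
# i.e. already close to the Gaussian value K_X = 3
print(round(m1, 2), round(mu2, 2), round(mu4 / mu2**2, 2))
```

This construction (twelve uniform components) was in fact a classic way of generating approximately Gaussian noise in simulations.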
===Uniformly distributed random variables===

[[File:Rechteck_WDF_VTF.png|right|frame|Uniform distribution:&nbsp; PDF and CDF]]
'''(1)'''&nbsp; &nbsp; &raquo;'''Probability density function'''&laquo;

*The probability density function&nbsp; (PDF)&nbsp; $f_{X}(x)$&nbsp; is constant in the range from&nbsp; $x_{\rm min}$&nbsp; to&nbsp; $x_{\rm max}$,&nbsp; equal to&nbsp; $1/(x_{\rm max} - x_{\rm min})$,&nbsp; and zero outside this range.
*At the range limits,&nbsp; $f_{X}(x)$&nbsp; is set to only half this value&nbsp; (the mean between the left-hand and right-hand limit value).


'''(2)'''&nbsp; &nbsp; &raquo;'''Cumulative distribution function'''&laquo;

*The cumulative distribution function&nbsp; (CDF)&nbsp; increases linearly from zero to&nbsp; $1$&nbsp; in the range from&nbsp; $x_{\rm min}$&nbsp; to&nbsp; $x_{\rm max}$.


'''(3)'''&nbsp; &nbsp; &raquo;'''Moments and central moments'''&laquo;
*Mean and variance have the following values for the uniform distribution:
:$$m_X = \frac{x_{\rm max} + x_{\rm min}}{2},\hspace{0.5cm} \sigma_X^2 = \frac{(x_{\rm max} - x_{\rm min})^2}{12}.$$
*For a symmetric PDF &nbsp; &rArr; &nbsp; $x_{\rm min} = -x_{\rm max}$&nbsp; the mean value is&nbsp; $m_X = 0$&nbsp; and the variance&nbsp; $σ_X^2 = x_{\rm max}^2/3.$
*Because of the symmetry around the mean&nbsp; $m_X$,&nbsp; the Charlier skewness is&nbsp; $S_X = 0$.
*Since the PDF has no outliers,&nbsp; the kurtosis&nbsp; $K_X = 1.8$&nbsp; is significantly smaller than for the Gaussian distribution.


'''(4)'''&nbsp; &nbsp; &raquo;'''Further remarks'''&laquo;

*For modeling transmission systems,&nbsp; uniformly distributed random variables are the exception.&nbsp; An example of an actual&nbsp; (nearly)&nbsp; uniformly distributed random variable is the phase in circularly symmetric interference,&nbsp; such as occurs in&nbsp; ''quadrature amplitude modulation''&nbsp; (QAM)&nbsp; schemes.

*The importance of uniformly distributed random variables for information and communication technology lies rather in the fact that,&nbsp; from the point of view of information theory,&nbsp; this PDF form represents an optimum with respect to differential entropy under the constraint of&nbsp; "peak limitation".

*In&nbsp; ''image processing and coding'',&nbsp; the uniform distribution is often used instead of the actual distribution of the original image,&nbsp; which is usually much more complicated,&nbsp; because the difference in information content between a&nbsp; ''natural image''&nbsp; and the model based on the uniform distribution is relatively small.

*In the simulation of communication systems,&nbsp; one often uses&nbsp; "pseudo-random number generators"&nbsp; based on the uniform distribution&nbsp; (which are relatively easy to realize),&nbsp; from which other distributions&nbsp; (Gaussian distribution,&nbsp; exponential distribution,&nbsp; etc.)&nbsp; can easily be derived.
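The last remark about pseudo-random generators can be sketched concretely: from a variable $U$ uniformly distributed between 0 and 1, an exponentially distributed variable is obtained by inverting the CDF (Python; $λ_X = 2$ is an arbitrary example value):

```python
import math, random, statistics

random.seed(7)
lam = 2.0   # distribution parameter lambda_X (example value)

# Inverse transform method: if U is uniform on (0, 1), then X = -ln(1 - U) / lambda
# has the CDF F_X(x) = 1 - exp(-lambda * x), i.e. X is exponentially distributed
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]

m_X = statistics.fmean(samples)
var_X = statistics.fmean((x - m_X)**2 for x in samples)
print(round(m_X, 2), round(var_X, 2))  # near 1/lambda = 0.5 and 1/lambda^2 = 0.25
```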
===Exponentially distributed random variables===

[[File:Exponential_WDF_VTF.png|right|frame|Exponential distribution:&nbsp; PDF and CDF]]
'''(1)'''&nbsp; &nbsp; &raquo;'''Probability density function'''&laquo;

An exponentially distributed random variable&nbsp; $X$&nbsp; can only take on non&ndash;negative values.&nbsp; For&nbsp; $x>0$&nbsp; the PDF has the following shape:
:$$f_X(x)=\lambda_X\cdot{\rm e}^{-\lambda_X \hspace{0.05cm}\cdot \hspace{0.03cm} x}.$$
*The larger the distribution parameter&nbsp; $λ_X$,&nbsp; the steeper the drop.
*At&nbsp; $x = 0$&nbsp; one sets&nbsp; $f_{X}(0) = λ_X/2$,&nbsp; the mean of the left-hand limit&nbsp; $(0)$&nbsp; and the right-hand limit&nbsp; $(\lambda_X)$.


'''(2)'''&nbsp; &nbsp; &raquo;'''Cumulative distribution function'''&laquo;

By integrating the PDF,&nbsp; we obtain for&nbsp; $x > 0$:
:$$F_{X}(x)=1-{\rm e}^{-\lambda_X\hspace{0.05cm}\cdot \hspace{0.03cm} x}.$$


'''(3)'''&nbsp; &nbsp; &raquo;'''Moments and central moments'''&laquo;

*The&nbsp; ''moments''&nbsp; of the&nbsp; (one-sided)&nbsp; exponential distribution are generally equal to:
:$$m_k =  \int_{-\infty}^{+\infty} x^k \cdot f_{X}(x) \,\,{\rm d} x = \frac{k!}{\lambda_X^k}.$$
*From this and from Steiner's theorem we get for mean and variance:
:$$m_X = m_1=\frac{1}{\lambda_X},\hspace{0.6cm}\sigma_X^2={m_2-m_1^2}={\frac{2}{\lambda_X^2}-\frac{1}{\lambda_X^2}}=\frac{1}{\lambda_X^2}.$$
*The PDF is clearly asymmetric here;&nbsp; the Charlier skewness is&nbsp; $S_X = 2$.
*The kurtosis&nbsp; $K_X = 9$&nbsp; is clearly larger than for the Gaussian distribution,&nbsp; because the PDF tails extend much further.


'''(4)'''&nbsp; &nbsp; &raquo;'''Further remarks'''&laquo;

*The exponential distribution has great importance for reliability studies;&nbsp; in this context,&nbsp; the term&nbsp; "lifetime distribution"&nbsp; is also commonly used.
*In these applications,&nbsp; the random variable is often the time&nbsp; $t$&nbsp; that elapses before a component fails.
*Furthermore,&nbsp; it should be noted that the exponential distribution is closely related to the Laplace distribution.
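The moment formula $m_k = k!/λ_X^k$ can be cross-checked by numerical integration of the PDF (a Python sketch with an arbitrary example value of $λ_X$):

```python
import math

lam = 1.5   # example distribution parameter lambda_X (arbitrary choice)

def m_k(k, dx=1e-3, x_max=30.0):
    """k-th moment of the exponential PDF, midpoint rule on [0, x_max]."""
    n = int(x_max / dx)
    return sum(((i + 0.5) * dx)**k * lam * math.exp(-lam * (i + 0.5) * dx)
               for i in range(n)) * dx

# Compare the numerical moments with the closed form m_k = k!/lambda^k
for k in (1, 2, 3):
    print(k, round(m_k(k), 4), round(math.factorial(k) / lam**k, 4))
```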
===Laplace distributed random variables===
 +
 
 +
[[File:Laplace_WDF_VTF.png|right|frame|Laplace distribution:&nbsp; PDF and CDF]]
 +
'''(1)'''&nbsp; &nbsp; &raquo;'''Probability density function'''&laquo;
 +
 
 +
As can be seen from the graph, the Laplace distribution is a "two-sided exponential distribution":
 +
 
 +
:$$f_{X}(x)=\frac{\lambda_X} {2}\cdot{\rm e}^ { - \lambda_X \hspace{0.05cm} \cdot \hspace{0.05cm} \vert \hspace{0.05cm} x \hspace{0.05cm} \vert}.$$
 +
 
 +
* The maximum value here is&nbsp; $\lambda_X/2$.
 +
*The tangent at&nbsp; $x=0$&nbsp; intersects the abscissa at&nbsp; $1/\lambda_X$, as in the exponential distribution.
 +
 
  
 +
'''(2)'''&nbsp; &nbsp; &raquo;'''Cumulative distribution function'''&laquo;
 +
 +
:$$F_{X}(x) = {\rm Pr}\big [X \le x \big ] = \int_{-\infty}^{x} f_{X}(\xi) \,\,{\rm d}\xi $$
 +
:$$\Rightarrow \hspace{0.5cm}  F_{X}(x) =  0.5 + 0.5 \cdot {\rm sign}(x) \cdot \big [ 1 - {\rm e}^ { - \lambda_X \hspace{0.05cm} \cdot \hspace{0.05cm} \vert \hspace{0.05cm} x \hspace{0.05cm} \vert}\big ] $$
 +
:$$\Rightarrow \hspace{0.5cm} F_{X}(-\infty) = 0, \hspace{0.5cm}F_{X}(0) = 0.5, \hspace{0.5cm} F_{X}(+\infty) = 1.$$
  
 +
'''(3)'''&nbsp; &nbsp; &raquo;'''Moments and central moments'''&laquo;
  
Die englische Bezeichnung für die Wahrscheinlichkeitsdichtefunktion (WDF) ist ''Probability Density Function'' (PDF). }}
+
* For odd&nbsp; $k$,&nbsp; the Laplace distribution always gives&nbsp; $m_k= 0$ due to symmetry. Among others:&nbsp; Linear mean&nbsp; $m_X =m_1 = 0$.
Wir betrachten zwei wertkontinuierliche Zufallsgrößen&nbsp; $X$&nbsp; und&nbsp; $Y\hspace{-0.1cm}$, zwischen denen statistische Abhängigkeiten bestehen können. Zur Beschreibung der Wechselbeziehungen zwischen diesen Größen ist es zweckmäßig, die beiden Komponenten zu einer&nbsp; '''zweidimensionalen Zufallsgröße'''&nbsp; $XY =(X, Y)$&nbsp; zusammenzufassen. Dann gilt:
 
  
{{BlaueBox|TEXT= 
+
* For even&nbsp; $k$&nbsp; the moments of Laplace distribution and exponential distribution agree:&nbsp; $m_k = {k!}/{\lambda^k}$.
* For the variance&nbsp; $($the second order central moment, which here equals the second order moment, since&nbsp; $m_X = 0)$&nbsp; the following holds:&nbsp; $\sigma_X^2 = {2}/{\lambda_X^2}$ &nbsp; &rArr; &nbsp; twice as large as for the exponential distribution.
*For the Charlier skewness,&nbsp; $S_X = 0$ is obtained here due to the symmetric PDF.
  
*The kurtosis is&nbsp; $K_X = 6$,&nbsp; significantly larger than for the Gaussian distribution, but smaller than for the exponential distribution.
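These statements can be reproduced with a few lines of Python (a sketch, not part of the applet;&nbsp; $\lambda_X = 2$&nbsp; is an arbitrary example value, and the even-moment formula&nbsp; $m_k = k!/\lambda_X^k$&nbsp; is taken from above):

```python
import math

lam = 2.0  # lambda_X, arbitrary example value

m2 = math.factorial(2) / lam**2   # second moment, = 2/lam^2
m4 = math.factorial(4) / lam**4   # fourth moment, = 24/lam^4

variance = m2                 # mean is zero, so sigma^2 equals the second moment
kurtosis = m4 / variance**2   # K_X = m4 / sigma^4, independent of lam
```

The computed kurtosis is&nbsp; $6$&nbsp; for every&nbsp; $\lambda_X$,&nbsp; as stated above.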
'''(4)'''&nbsp; &nbsp; &raquo;'''Further remarks'''&laquo;
*The instantaneous values of speech and music signals are Laplace distributed to a good approximation. <br>See the&nbsp; (German)&nbsp; learning video&nbsp; [[Wahrscheinlichkeit_und_WDF_(Lernvideo)|"Wahrscheinlichkeit und Wahrscheinlichkeitsdichtefunktion"]],&nbsp; part 2.
*By adding a Dirac delta function at&nbsp; $x=0$,&nbsp; speech pauses can also be modeled.
===Brief description of other distributions===
<br>
$\text{(A)  Rayleigh distribution}$ &nbsp; &nbsp; [[Mobile_Communications/Probability_Density_of_Rayleigh_Fading|$\text{More detailed description}$]]
*Probability density function:
:$$f_X(x) =
\left\{ \begin{array}{c}  x/\lambda_X^2 \cdot {\rm e}^{- x^2/(2 \hspace{0.05cm}\cdot\hspace{0.05cm} \lambda_X^2)} \\
0  \end{array} \right.\hspace{0.15cm}
\begin{array}{*{1}c} {\rm for}\hspace{0.1cm} x\hspace{-0.05cm} \ge \hspace{-0.05cm}0,
\\  {\rm for}\hspace{0.1cm} x \hspace{-0.05cm}<\hspace{-0.05cm} 0. \\ \end{array}$$
*Application: &nbsp; &nbsp; Modeling of the cellular channel (non-frequency selective fading,  attenuation, diffraction, and refraction effects only, no line-of-sight).
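A Rayleigh variable arises as the magnitude of two independent zero-mean Gaussian components with standard deviation&nbsp; $\lambda_X$&nbsp; (in-phase and quadrature components of the channel model without line-of-sight). A small Monte Carlo sketch in Python (illustrative only, not taken from the applet):

```python
import math
import random

random.seed(1)
lam = 1.0      # lambda_X = standard deviation of each Gaussian component
n = 200000

# magnitude of two i.i.d. zero-mean Gaussian components -> Rayleigh distributed
samples = [math.hypot(random.gauss(0, lam), random.gauss(0, lam))
           for _ in range(n)]

mean_est = sum(samples) / n
mean_theory = lam * math.sqrt(math.pi / 2)   # Rayleigh mean, about 1.2533 for lam = 1
```

The empirical mean of the generated magnitudes matches the theoretical Rayleigh mean&nbsp; $\lambda_X \cdot \sqrt{\pi/2}$.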
$\text{(B)  Rice distribution}$ &nbsp; &nbsp; [[Mobile_Communications/Non-Frequency-Selective_Fading_With_Direct_Component|$\text{More detailed description}$]]
  
*Probability density function&nbsp; $(\rm I_0$&nbsp; denotes the modified zero-order Bessel function$)$:
:$$f_X(x) = \frac{x}{\lambda_X^2} \cdot {\rm exp} \big [ -\frac{x^2 + C_X^2}{2\cdot \lambda_X^2}\big ] \cdot {\rm I}_0 \left [ \frac{x \cdot C_X}{\lambda_X^2} \right ]\hspace{0.5cm}\text{with}\hspace{0.5cm}{\rm I }_0 (u) = {\rm J }_0 ({\rm j} \cdot u) =
\sum_{k = 0}^{\infty} \frac{ (u/2)^{2k}}{k! \cdot \Gamma (k+1)}
\hspace{0.05cm}.$$
*Application: &nbsp; &nbsp; Cellular channel modeling (non-frequency selective fading,  attenuation, diffraction, and refraction effects only, with line-of-sight).
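Analogously to the Rayleigh case, a Rice variable is the magnitude of a Gaussian pair in which one component carries a deterministic line-of-sight offset&nbsp; $C_X$. The following Python sketch (illustrative only; the values&nbsp; $\lambda_X = 1, \ C_X = 2$&nbsp; are arbitrary) checks the second moment&nbsp; ${\rm E}\big[X^2\big] = C_X^2 + 2\lambda_X^2$:

```python
import math
import random

random.seed(2)
lam, C = 1.0, 2.0   # lambda_X and line-of-sight amplitude C_X (example values)
n = 200000

# Rice variable: magnitude of a Gaussian pair with deterministic offset C
samples = [math.hypot(C + random.gauss(0, lam), random.gauss(0, lam))
           for _ in range(n)]

p2_est = sum(s * s for s in samples) / n
p2_theory = C**2 + 2 * lam**2   # second moment of the Rice distribution
```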
  
$\text{(C) Weibull distribution}$ &nbsp; &nbsp; [https://en.wikipedia.org/wiki/Weibull_distribution $\text{More detailed description}$]
*Probability density function:  
:$$f_X(x) = \lambda_X \cdot k_X \cdot (\lambda_X \cdot x)^{k_X-1} \cdot {\rm e}^{-(\lambda_X \cdot x)^{k_X}}
\hspace{0.05cm}.$$
*Application: &nbsp; &nbsp; PDF with adjustable skewness&nbsp;$S_X$; exponential distribution&nbsp; $(k_X = 1)$&nbsp; and Rayleigh distribution&nbsp; $(k_X = 2)$&nbsp; included as special cases.
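The two special cases can be illustrated by inverse transform sampling: the CDF&nbsp; $F_X(x) = 1 - {\rm e}^{-(\lambda_X x)^{k_X}}$&nbsp; is easily inverted. A hedged Python sketch (not from the applet; `weibull_inv` is a helper defined here):

```python
import math
import random

random.seed(3)
lam = 1.0
n = 200000
u = [random.random() for _ in range(n)]

def weibull_inv(v, k):
    # inverse of the CDF F(x) = 1 - exp(-(lam*x)^k)
    return (-math.log(1.0 - v)) ** (1.0 / k) / lam

exp_like = [weibull_inv(v, 1.0) for v in u]   # k_X = 1: exponential distribution
ray_like = [weibull_inv(v, 2.0) for v in u]   # k_X = 2: Rayleigh-shaped PDF

mean_exp = sum(exp_like) / n   # theory: 1/lam = 1
mean_ray = sum(ray_like) / n   # theory: Gamma(1.5)/lam, about 0.8862
```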
$\text{(D)  Wigner semicircle distribution}$ &nbsp; &nbsp; [https://en.wikipedia.org/wiki/Wigner_semicircle_distribution $\text{More detailed description}$]
  
*Probability density function:
:$$f_X(x) =
\left\{ \begin{array}{c}  2/(\pi \cdot {R_X}^2) \cdot \sqrt{{R_X}^2 - (x- m_X)^2} \\
0  \end{array} \right.\hspace{0.15cm}
\begin{array}{*{1}c} {\rm for}\hspace{0.1cm} |x- m_X|\hspace{-0.05cm} \le \hspace{-0.05cm}R_X,
\\  {\rm for}\hspace{0.1cm} |x- m_X| \hspace{-0.05cm} > \hspace{-0.05cm} R_X. \\ \end{array}$$
*Application: &nbsp; &nbsp; PDF of Chebyshev nodes &nbsp; &rArr; &nbsp; zeros of Chebyshev polynomials from numerics.
  
$\text{(E) Wigner parabolic distribution}$  
*Probability density function:
:$$f_X(x) =
\left\{ \begin{array}{c}  3/(4 \cdot {R_X}^3) \cdot \big ({R_X}^2 - (x- m_X)^2\big ) \\
0  \end{array} \right.\hspace{0.15cm}
\begin{array}{*{1}c} {\rm for}\hspace{0.1cm} |x- m_X|\hspace{-0.05cm} \le \hspace{-0.05cm}R_X,
\\  {\rm for}\hspace{0.1cm} |x- m_X| \hspace{-0.05cm} > \hspace{-0.05cm} R_X. \\ \end{array}$$
*Application: &nbsp; &nbsp; PDF of eigenvalues of symmetric random matrices whose dimension approaches infinity.
  
$\text{(F)  Cauchy distribution}$ &nbsp; &nbsp; [[Theory_of_Stochastic_Signals/Further_Distributions#Cauchy_PDF|$\text{More detailed description}$]]
  
*Probability density function and distribution function:  
:$$f_{X}(x)=\frac{1}{\pi}\cdot\frac{\lambda_X}{\lambda_X^2+x^2}, \hspace{2cm} F_{X}(x)={\rm 1}/{2}+{\rm 1}/{\pi} \cdot {\rm arctan}({x}/{\lambda_X}).$$
*In the Cauchy distribution, all moments&nbsp; $m_k$&nbsp; for even&nbsp; $k$&nbsp; have an infinitely large value, independent of the parameter&nbsp; $λ_X$.
*Thus, this distribution also has an infinitely large variance:&nbsp;  $\sigma_X^2 \to \infty$.  
*Due to symmetry, for odd&nbsp; $k$&nbsp; all moments&nbsp; $m_k = 0$, if one assumes the "Cauchy Principal Value" as in the program:&nbsp; $m_X = 0, \ S_X = 0$.
*Example: &nbsp; &nbsp; The quotient of two zero-mean Gaussian random variables is Cauchy distributed.&nbsp; The Cauchy distribution is of minor importance for practical applications.
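This example can be verified by simulation: the ratio of two independent standard Gaussians should follow the Cauchy CDF with&nbsp; $\lambda_X = 1$. A hedged Python sketch (illustrative only; `cauchy_cdf` is a helper defined here):

```python
import math
import random

random.seed(4)
n = 200000

# ratio of two independent zero-mean Gaussians -> Cauchy distributed (lam_X = 1)
ratios = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

def cauchy_cdf(x, lam=1.0):
    # F(x) = 1/2 + (1/pi) * arctan(x/lam)
    return 0.5 + math.atan(x / lam) / math.pi

# theory: F(1) = 0.5 + arctan(1)/pi = 0.75
frac_below_1 = sum(1 for r in ratios if r <= 1.0) / n
```

The empirical fraction of ratios below&nbsp; $1$&nbsp; matches the theoretical quartile&nbsp; $F_X(1) = 0.75$.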
==Exercises==
 
<br>
 
*First, select the number&nbsp; $(1,\ 2,  \text{...} \ )$&nbsp; of the task to be processed.&nbsp; The number&nbsp; "$0$"&nbsp; corresponds to a&nbsp; "Reset":&nbsp; Same setting as at program start.
*A task description is displayed.&nbsp; The parameter values are adjusted.&nbsp; The solution appears after pressing&nbsp; "Show Solution".
*In the following,&nbsp; $\text{Red}$&nbsp; stands for the random variable&nbsp; $X$&nbsp; and&nbsp; $\text{Blue}$&nbsp; for&nbsp; $Y$.


{{BlueBox|TEXT=
'''(1)'''&nbsp; Select &nbsp;$\text{red: Gaussian PDF}\ (m_X = 1, \ \sigma_X = 0.4)$&nbsp; and&nbsp; $\text{blue: Rectangular PDF}\ (y_{\rm min} = -2, \ y_{\rm max} = +3)$.&nbsp; Interpret the&nbsp; $\rm PDF$&nbsp; graph.}}

*&nbsp;$\text{Gaussian PDF}$:&nbsp; The &nbsp;$\rm PDF$&nbsp; maximum is&nbsp; $f_{X}(x = m_X) = \sqrt{1/(2\pi \cdot \sigma_X^2)} = 0.9974 \approx 1$.
*&nbsp;$\text{Rectangular PDF}$:&nbsp; All &nbsp;$\rm PDF$&nbsp; values are equal to&nbsp; $0.2$&nbsp; in the range&nbsp; $-2 < y < +3$.&nbsp; At the edges,&nbsp; $f_Y(-2) = f_Y(+3)= 0.1$&nbsp; $($half value$)$&nbsp; holds.

{{BlueBox|TEXT=
'''(2)'''&nbsp; Same setting as for &nbsp;$(1)$.&nbsp; What are the probabilities&nbsp; ${\rm Pr}(X = 0)$, &nbsp;  ${\rm Pr}(0.5 \le X \le 1.5)$, &nbsp;  ${\rm Pr}(Y = 0)$&nbsp; and&nbsp; ${\rm Pr}(0.5 \le Y \le 1.5)$?}}

*&nbsp;${\rm Pr}(X = 0)={\rm Pr}(Y = 0) \equiv 0$ &nbsp; &rArr; &nbsp; the probability that a continuous random variable takes on exactly one particular value is zero.
*&nbsp;The other two probabilities can be obtained by integrating the PDF over the range &nbsp;$+0.5\ \text{...} \ +\hspace{-0.1cm}1.5$.
*&nbsp;Or:&nbsp; ${\rm Pr}(0.5 \le X \le 1.5)= F_X(1.5) - F_X(0.5) = 0.8944-0.1056 = 0.7888$.&nbsp; Correspondingly:&nbsp; ${\rm Pr}(0.5 \le Y \le 1.5)= 0.7-0.5=0.2$.

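Both interval probabilities can be reproduced with elementary CDFs. A short Python sketch (not part of the applet; `gauss_cdf` and `uniform_cdf` are helpers defined here):

```python
import math

def gauss_cdf(x, m, sigma):
    # Gaussian CDF expressed through the error function
    return 0.5 * (1.0 + math.erf((x - m) / (sigma * math.sqrt(2.0))))

def uniform_cdf(y, y_min, y_max):
    return min(1.0, max(0.0, (y - y_min) / (y_max - y_min)))

p_x = gauss_cdf(1.5, 1.0, 0.4) - gauss_cdf(0.5, 1.0, 0.4)        # about 0.7887
p_y = uniform_cdf(1.5, -2.0, 3.0) - uniform_cdf(0.5, -2.0, 3.0)  # exactly 0.2
```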
{{BlueBox|TEXT=
'''(3)'''&nbsp; Same settings as before.&nbsp; How must the standard deviation&nbsp; $\sigma_X$&nbsp; be changed so that with the same mean&nbsp; $m_X$&nbsp; it holds for the second order moment:&nbsp; $P_X=2$&nbsp;?}}

*&nbsp;According to Steiner's theorem:&nbsp; $P_X=m_X^2 + \sigma_X^2$ &nbsp; &rArr; &nbsp; $\sigma_X^2 = P_X-m_X^2 = 2 - 1^2 = 1 $ &nbsp; &rArr; &nbsp; $\sigma_X = 1$.

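Steiner's theorem can be checked by simulation: with&nbsp; $m_X = 1$&nbsp; and&nbsp; $\sigma_X = 1$&nbsp; the empirical second moment should approach&nbsp; $2$. A minimal Python sketch (illustrative only):

```python
import random

random.seed(5)
m, sigma = 1.0, 1.0   # the values found above
n = 200000
x = [random.gauss(m, sigma) for _ in range(n)]

# second order moment; theory (Steiner): P_X = m^2 + sigma^2 = 2
p2 = sum(v * v for v in x) / n
```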
{{BlueBox|TEXT=
'''(4)'''&nbsp; Same settings as before:&nbsp; How must the parameters&nbsp; $y_{\rm min}$&nbsp; and&nbsp; $y_{\rm max}$&nbsp; of the rectangular PDF be changed to yield&nbsp; $m_Y = 0$&nbsp; and&nbsp; $\sigma_Y^2 = 0.75$?}}

*&nbsp;Starting from the previous setting&nbsp; $(y_{\rm min} = -2, \ y_{\rm max} = +3)$,&nbsp; we change&nbsp; $y_{\rm max}$&nbsp; until&nbsp; $\sigma_Y^2 = 0.75$&nbsp; occurs &nbsp; &rArr; &nbsp; $y_{\rm max} = 1$.
*&nbsp;The width of the rectangle is now&nbsp; $3$.&nbsp; The desired mean&nbsp; $m_Y = 0$&nbsp; is obtained by shifting:&nbsp; $y_{\rm min} = -1.5, \ y_{\rm max} = +1.5$.
*&nbsp;One could also use the fact that for a mean-free uniform random variable&nbsp; $(y_{\rm min} = -y_{\rm max})$&nbsp; the variance is&nbsp; $\sigma_Y^2 = y_{\rm max}^2/3$.

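Both formulas for the uniform distribution agree, as a two-line check shows (a sketch; the values are those found above):

```python
y_min, y_max = -1.5, 1.5   # the values found above

mean = (y_min + y_max) / 2.0            # m_Y = 0
variance = (y_max - y_min) ** 2 / 12.0  # sigma_Y^2 = width^2/12 = 0.75
sym_form = y_max ** 2 / 3.0             # same result via sigma_Y^2 = y_max^2/3
```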
{{BlueBox|TEXT=
'''(5)'''&nbsp; For which of the adjustable distributions is the Charlier skewness&nbsp; $S \ne 0$&nbsp;? }}
*&nbsp;Charlier's skewness denotes the third central moment normalized to&nbsp; $σ_X^3$ &nbsp; &rArr; &nbsp;$S_X = \mu_3/σ_X^3$&nbsp;  $($valid for the random variable&nbsp; $X)$.
*&nbsp;If the PDF&nbsp; $f_X(x)$&nbsp; is symmetric around the mean&nbsp; $m_X$&nbsp; then the parameter&nbsp; $S_X$&nbsp; is always zero.
*&nbsp;Exponential distribution:&nbsp; $S_X =2$;&nbsp; Rayleigh distribution:&nbsp; $S_X =0.631$ &nbsp; $($both independent of&nbsp; $λ_X)$; &nbsp; Rice distribution:&nbsp; $S_X >0$&nbsp; $($depending on &nbsp;$C_X, \ λ_X)$.
*&nbsp;With the Weibull distribution, the Charlier skewness&nbsp; $S_X$&nbsp; can be zero, positive or negative,&nbsp; depending on the PDF parameter&nbsp; $k_X$.
*&nbsp; Weibull distribution, &nbsp;$\lambda_X=0.4$:&nbsp; For&nbsp; $k_X = 1.5$&nbsp; the PDF is right-skewed&nbsp; $(S_X > 0)$;&nbsp; for&nbsp; $k_X = 7$&nbsp; it is left-skewed&nbsp; $(S_X < 0)$.
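The sign change of the Weibull skewness can be computed directly from the raw moments&nbsp; $m_n = \Gamma(1 + n/k_X)/\lambda_X^n$&nbsp; (the scale parameter cancels in the normalized skewness). A hedged Python sketch, not taken from the applet:

```python
import math

def weibull_skewness(k):
    # raw moments m_n = Gamma(1 + n/k)/lam^n; lam cancels in the skewness
    g1, g2, g3 = (math.gamma(1.0 + n / k) for n in (1, 2, 3))
    var = g2 - g1 ** 2
    mu3 = g3 - 3.0 * g1 * g2 + 2.0 * g1 ** 3   # third central moment
    return mu3 / var ** 1.5

s_low = weibull_skewness(1.5)   # positive skewness for small k
s_high = weibull_skewness(7.0)  # negative skewness for large k
```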
  
  
{{BlueBox|TEXT=
'''(6)'''&nbsp; Select &nbsp;$\text{Red: Gaussian PDF}\ (m_X = 1, \ \sigma_X = 0.4)$&nbsp; and&nbsp; $\text{Blue: Gaussian PDF}\ (m_X = 0, \ \sigma_X = 1)$.&nbsp; What is the kurtosis in each case?}}
  
*&nbsp;For each Gaussian distribution the kurtosis has the same value: &nbsp; $K_X = K_Y =3$.&nbsp; Therefore, &nbsp;$K-3$&nbsp; is called "excess".  
*This parameter can be used to check whether a given random variable can be approximated by a Gaussian distribution.
  
{{BlueBox|TEXT=
'''(7)'''&nbsp; For which distributions does a significantly smaller kurtosis value result than &nbsp;$K=3$?&nbsp; And for which distributions does a significantly larger one?}}
  
*&nbsp;$K<3$&nbsp; always results when the PDF values are more concentrated around the mean than in the Gaussian distribution.
*&nbsp;This is true, for example, for the uniform distribution &nbsp;$(K=1.8)$&nbsp; and for the triangular distribution &nbsp;$(K=2.4)$.
*&nbsp;$K>3$&nbsp; results if the PDF offshoots are more pronounced than for the Gaussian distribution.&nbsp; Example:&nbsp; Exponential PDF &nbsp;$(K=9)$.
  
{{BlueBox|TEXT=
'''(8)'''&nbsp; Select &nbsp;$\text{Red: Exponential PDF}\ (\lambda_X = 1)$&nbsp; and&nbsp; $\text{Blue: Laplace PDF}\ (\lambda_Y = 1)$.&nbsp; Interpret the differences.}}
  
*&nbsp;The Laplace distribution is symmetric around its mean &nbsp;$(S_Y=0, \ m_Y=0)$&nbsp; unlike the exponential distribution &nbsp;$(S_X=2, \ m_X=1)$.
*&nbsp;The even moments &nbsp;$m_2, \ m_4, \ \text{...}$&nbsp; are equal,&nbsp; for example:&nbsp; $P_X=P_Y=2$.&nbsp; But not the variances:&nbsp; $\sigma_X^2 =1, \ \sigma_Y^2 =2$.
*&nbsp;The probabilities &nbsp;${\rm Pr}(|X| < 2) = F_X(2) = 0.864$&nbsp; and&nbsp; ${\rm Pr}(|Y| < 2) = F_Y(2) - F_Y(-2)= 0.932 - 0.068 = 0.864$&nbsp; are equal.
*&nbsp;In the Laplace PDF, the values are more tightly concentrated around the mean than in the exponential PDF:&nbsp; $K_Y =6 < K_X = 9$.
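The equality of the two probabilities follows directly from the CDFs and can be confirmed in a few lines of Python (a sketch, not part of the applet; both distributions use&nbsp; $\lambda = 1$):

```python
import math

lam = 1.0

# exponential distribution: Pr(|X| < 2) = F_X(2), since X >= 0
p_exp = 1.0 - math.exp(-2.0 * lam)

def laplace_cdf(y):
    sign = (y > 0) - (y < 0)
    return 0.5 + 0.5 * sign * (1.0 - math.exp(-lam * abs(y)))

# Laplace distribution: Pr(|Y| < 2) = F_Y(2) - F_Y(-2)
p_lap = laplace_cdf(2.0) - laplace_cdf(-2.0)
```

Both probabilities equal&nbsp; $1 - {\rm e}^{-2} \approx 0.864$.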
  
{{BlueBox|TEXT=
'''(9)'''&nbsp; Select &nbsp;$\text{Red: Rice PDF}\ (\lambda_X = 1, \ C_X = 1)$&nbsp; and &nbsp;$\text{Blue: Rayleigh PDF}\ (\lambda_Y = 1)$.&nbsp; Interpret the differences.}}
*&nbsp; With&nbsp; $C_X = 0$&nbsp; the Rice PDF transitions to the Rayleigh PDF.&nbsp; A larger &nbsp;$C_X$&nbsp;  improves the performance, e.g., in mobile communications.
*&nbsp; In both the&nbsp; "Rayleigh"&nbsp; and the&nbsp; "Rice"&nbsp; case, the abscissa is the magnitude&nbsp; $A$&nbsp; of the received signal.&nbsp; The channel is favorable if&nbsp; ${\rm Pr}(A \le A_0)$&nbsp; is small for a given&nbsp; $A_0$.
*&nbsp; For&nbsp; $C_X \ne 0$&nbsp; and equal&nbsp; $\lambda$&nbsp; the Rice CDF is below the Rayleigh CDF &nbsp; &rArr; &nbsp; smaller&nbsp; ${\rm Pr}(A \le A_0)$&nbsp; for all&nbsp; $A_0$.
  
{{BlueBox|TEXT=
'''(10)'''&nbsp; Select &nbsp;$\text{Red: Rice PDF}\ (\lambda_X = 0.6, \ C_X = 2)$.&nbsp; By which distribution &nbsp;$F_Y(y)$&nbsp; can this Rice distribution be well approximated? }}
  
*&nbsp; The kurtosis &nbsp; $K_X = 2.9539 \approx 3$&nbsp; indicates the Gaussian distribution. &nbsp; Favorable parameters:&nbsp; $m_Y = 2.1 > C_X, \ \ \sigma_Y = \lambda_X = 0.6$.
*&nbsp; The larger the quotient&nbsp; $C_X/\lambda_X$&nbsp; is, the better the Rice PDF is approximated by a Gaussian PDF.
*&nbsp; For large&nbsp; $C_X/\lambda_X$,&nbsp; the Rice PDF no longer bears any resemblance to the Rayleigh PDF.
  
{{BlueBox|TEXT=
'''(11)'''&nbsp; Select &nbsp;$\text{Red: Weibull PDF}\ (\lambda_X = 1, \ k_X = 1)$&nbsp; and &nbsp;$\text{Blue: Weibull PDF}\ (\lambda_Y = 1, \ k_Y = 2)$. Interpret the results. }}
  
*&nbsp; The Weibull PDF&nbsp; $f_X(x)$&nbsp; is identical to the exponential PDF and&nbsp; $f_Y(y)$&nbsp; to the Rayleigh PDF.
*&nbsp; However,&nbsp; the best-fit parameters differ:&nbsp; $\lambda_{\rm Weibull} = 1$&nbsp; versus&nbsp; $\lambda_{\rm Rayleigh} \approx 0.7$.
*&nbsp; Moreover,&nbsp; $f_X(x \to 0) \to \infty$&nbsp; holds for &nbsp;$k_X < 1$.&nbsp;  However, this does not result in infinite moments.
  
{{BlueBox|TEXT=
'''(12)'''&nbsp; Select &nbsp;$\text{Red: Weibull PDF}\ (\lambda_X = 1, \ k_X = 1.6)$&nbsp; and &nbsp; $\text{Blue: Weibull PDF}\ (\lambda_Y = 1, \ k_Y = 5.6)$.&nbsp; Interpret the Charlier skewness. }}  
  
*&nbsp; One observes: &nbsp; For the PDF parameter &nbsp;$k < k_*$&nbsp; the Charlier skewness is positive and for &nbsp;$k > k_*$&nbsp; negative.&nbsp; It is approximately&nbsp; $k_* = 3.6$.
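The threshold&nbsp; $k_*$&nbsp; can be located by bisection on the skewness formula from the raw moments&nbsp; $m_n = \Gamma(1 + n/k)$&nbsp; (the scale parameter cancels). A hedged Python sketch, not taken from the applet:

```python
import math

def weibull_skewness(k):
    # raw moments Gamma(1 + n/k); the scale parameter cancels in the skewness
    g1, g2, g3 = (math.gamma(1.0 + n / k) for n in (1, 2, 3))
    var = g2 - g1 ** 2
    mu3 = g3 - 3.0 * g1 * g2 + 2.0 * g1 ** 3
    return mu3 / var ** 1.5

# bisection on the sign change of the skewness between k = 3 and k = 4
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if weibull_skewness(mid) > 0.0:
        lo = mid
    else:
        hi = mid
k_star = 0.5 * (lo + hi)   # close to 3.6
```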
{{BlueBox|TEXT=
'''(13)'''&nbsp; Select &nbsp;$\text{Red: Semicircle PDF}\ (m_X = 0, \ R_X = 1)$&nbsp; and &nbsp;$\text{Blue: Parabolic PDF}\ (m_Y = 0, \ R_Y = 1)$.&nbsp; Vary the parameter &nbsp;$R$&nbsp; in each case. }}
*&nbsp; Both PDFs are mean-free and symmetric&nbsp; $(S_X = S_Y =0)$,&nbsp; with&nbsp; $\sigma_X^2 = 0.25, \ K_X = 2$&nbsp; and&nbsp; $\sigma_Y^2 = 0.2, \ K_Y \approx 2.14$, respectively.
  
  
  
  
==Applet Manual==
 
<br>
 
[[File:Bildschirm_WDF_VTF_neu.png|right|600px|frame|Screenshot of the German version]]
&nbsp; &nbsp; '''(A)''' &nbsp; &nbsp; Selection of the distribution&nbsp; $f_X(x)$&nbsp; (red curves and output values)

&nbsp; &nbsp; '''(B)''' &nbsp; &nbsp; Parameter input for the "red distribution" via slider

&nbsp; &nbsp; '''(C)''' &nbsp; &nbsp; Selection of the distribution&nbsp; $f_Y(y)$&nbsp; (blue curves and output values)

&nbsp; &nbsp; '''(D)''' &nbsp; &nbsp; Parameter input for the "blue distribution" via slider

&nbsp; &nbsp; '''(E)''' &nbsp; &nbsp; Graphic area for the probability density function (PDF)

&nbsp; &nbsp; '''(F)''' &nbsp; &nbsp; Graphic area for the distribution function (CDF)

&nbsp; &nbsp; '''(G)''' &nbsp; &nbsp; Numerical output for the "red distribution"

&nbsp; &nbsp; '''(H)''' &nbsp; &nbsp; Numerical output for the "blue distribution"

&nbsp; &nbsp; '''( I )''' &nbsp; &nbsp; Input of the abscissa values&nbsp; $x_*$&nbsp; and&nbsp; $y_*$&nbsp; for the numerical output

&nbsp; &nbsp; '''(J)''' &nbsp; &nbsp; Experiment execution area: &nbsp; task selection

&nbsp; &nbsp; '''(K)''' &nbsp; &nbsp; Experiment execution area: &nbsp; task description

&nbsp; &nbsp; '''( L)''' &nbsp; &nbsp; Experiment execution area: &nbsp; sample solution
 +
<br>
 +
 
 +
 
 +
'''In all applets top right''':&nbsp; &nbsp; Changeable graphical interface design &nbsp; &rArr; &nbsp; '''Theme''':
* Dark: &nbsp; black background&nbsp; (recommended by the authors).
* Bright: &nbsp; white background&nbsp; (recommended for beamers and printouts).
* Deuteranopia: &nbsp; for users with pronounced green&ndash;vision deficiency.
* Protanopia: &nbsp; for users with pronounced red&ndash;vision deficiency.
 
<br clear=all>
==About the Authors==
<br>
This interactive calculation tool was designed and implemented at the&nbsp; [https://www.ei.tum.de/en/lnt/home/ $\text{Institute for Communications Engineering}$]&nbsp; of the&nbsp; [https://www.tum.de/en $\text{Technical University of Munich}$].
*The first version was created in 2005 by&nbsp; [[Biographies_and_Bibliographies/An_LNTwww_beteiligte_Studierende#Bettina_Hirner_.28Diplomarbeit_LB_2005.29|&raquo;Bettina Hirner&laquo;]]&nbsp; as part of her diploma thesis with “FlashMX – Actionscript”&nbsp; (Supervisors:&nbsp; [[Biographies_and_Bibliographies/LNTwww_members_from_LNT#Prof._Dr.-Ing._habil._G.C3.BCnter_S.C3.B6der_.28at_LNT_since_1974.29| &raquo;Günter Söder&laquo; ]]&nbsp; and&nbsp; [[Biographies_and_Bibliographies/LNTwww_members_from_LNT#Dr.-Ing._Klaus_Eichin_.28at_LNT_from_1972-2011.29| &raquo;Klaus Eichin&laquo; ]]).
*In 2019 the program was redesigned via HTML5/JavaScript by&nbsp; [[Biographies_and_Bibliographies/Students_involved_in_LNTwww#Matthias_Niller_.28Ingenieurspraxis_Math_2019.29|&raquo;Matthias Niller&laquo;]]&nbsp;  (Ingenieurspraxis Mathematik; supervisors:&nbsp; [[Biographies_and_Bibliographies/LNTwww_members_from_LÜT#Benedikt_Leible.2C_M.Sc._.28at_L.C3.9CT_since_2017.29| &raquo;Benedikt Leible&laquo; ]]&nbsp; and&nbsp; [[Biographies_and_Bibliographies/LNTwww_members_from_LÜT#Dr.-Ing._Tasn.C3.A1d_Kernetzky_.28at_L.C3.9CT_from_2014-2022.29| &raquo;Tasnád Kernetzky&laquo; ]]).
*The last revision and the English version were made in 2021 by&nbsp; [[Biographies_and_Bibliographies/Students_involved_in_LNTwww#Carolin_Mirschina_.28Ingenieurspraxis_Math_2019.2C_danach_Werkstudentin.29|&raquo;Carolin Mirschina&laquo;]]&nbsp; in the context of a working student activity.
*The conversion of this applet was financially supported by&nbsp; [https://www.ei.tum.de/studium/studienzuschuesse/ $\text{Studienzuschüsse}$]&nbsp; (TUM Department of Electrical and Computer Engineering).&nbsp; Many thanks.
==Once again: Open Applet in new Tab==

{{LntAppletLinkEnDe|wdf-vtf_en|wdf-vtf}}

Latest revision as of 12:17, 26 October 2023


Applet Description


The applet presents the description forms of two continuous value random variables  $X$  and  $Y\hspace{-0.1cm}$.  For the red random variable  $X$  and the blue random variable  $Y$,  the following basic forms are available for selection:

  • Gaussian distribution, uniform distribution, triangular distribution, exponential distribution, Laplace distribution, Rayleigh distribution, Rice distribution, Weibull distribution, Wigner semicircle distribution, Wigner parabolic distribution, Cauchy distribution.


The following data refer to the random variable  $X$. Graphically represented are

  • the probability density function  $f_{X}(x)$  (above) and
  • the cumulative distribution function  $F_{X}(x)$  (bottom).


In addition, some integral parameters are output, namely

  • the linear mean value  $m_X = {\rm E}\big[X \big]$,
  • the second order moment  $P_X ={\rm E}\big[X^2 \big] $,
  • the variance  $\sigma_X^2 = P_X - m_X^2$,
  • the standard deviation  $\sigma_X$,
  • the Charlier skewness  $S_X$,
  • the kurtosis  $K_X$.


Definition and Properties of the Presented Descriptive Variables


In this applet we consider only (value–)continuous random variables, i.e. those whose possible numerical values are not countable.

  • The range of values of these random variables is thus in general that of the real numbers  $(-\infty \le X \le +\infty)$.
  • However, it is possible that the range of values is limited to an interval:  $x_{\rm min} \le X \le x_{\rm max}$.



Probability density function (PDF)

For a continuous random variable  $X$,  the probabilities that  $X$  takes on specific values  $x$  are zero:  ${\rm Pr}(X= x) \equiv 0$.  Therefore, to describe a continuous random variable, we must always refer to the  probability density function  – in short  $\rm PDF$. 

$\text{Definition:}$  The value of the  »probability density function«  $f_{X}(x)$  at location  $x$  is equal to the probability that the instantaneous value of the random variable  $X$  lies in an  (infinitesimally small)  interval of width  $Δx$  around  $x$,  divided by  $Δx$:

$$f_X(x) = \lim_{ {\rm \Delta} x \hspace{0.05cm}\to \hspace{0.05cm} 0} \frac{ {\rm Pr} \big [x - {\rm \Delta} x/2 \le X \le x +{\rm \Delta} x/2 \big ] }{ {\rm \Delta} x}.$$


This extremely important descriptive variable has the following properties:

  • For the probability that the random variable  $X$  lies in the range between  $x_{\rm u}$  and  $x_{\rm o} > x_{\rm u}$,  the following holds: 
$${\rm Pr}(x_{\rm u} \le X \le x_{\rm o}) = \int_{x_{\rm u}}^{x_{\rm o}} f_{X}(x) \ {\rm d}x.$$
  • As an important normalization property,  this yields for the area under the PDF with the boundary transitions  $x_{\rm u} → \hspace{0.1cm} – \hspace{0.05cm} ∞$  and  $x_{\rm o} → +∞$:
$$\int_{-\infty}^{+\infty} f_{X}(x) \ {\rm d}x = 1.$$
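The normalization property can be checked numerically. The following sketch (not part of the applet) integrates a Laplace PDF by the midpoint rule; the parameter value $\lambda_X = 1$ is an assumed example.

```python
import math

# Sketch: numerically verify the normalization of a Laplace PDF
# f_X(x) = lam/2 * exp(-lam*|x|); lam = 1 is an assumed example value.
lam = 1.0

def f_X(x):
    return lam / 2 * math.exp(-lam * abs(x))

# midpoint-rule approximation of the integral over [-20, +20]
n = 40_000
dx = 40.0 / n
area = sum(f_X(-20.0 + (i + 0.5) * dx) * dx for i in range(n))
print(round(area, 4))  # close to 1
```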


Cumulative distribution function (CDF)

The  cumulative distribution function  – in short  $\rm CDF$  – provides the same information about the random variable  $X$  as the probability density function.

$\text{Definition:}$  The  »cumulative distribution function«  $F_{X}(x)$  corresponds to the probability that the random variable  $X$  is less than or equal to a real number  $x$: 

$$F_{X}(x) = {\rm Pr}( X \le x).$$


The CDF has the following characteristics:

  • The CDF is computable from the probability density function  $f_{X}(x)$  by integration.  It holds:
$$F_{X}(x) = \int_{-\infty}^{x}f_X(\xi)\,{\rm d}\xi.$$
  • Since the PDF is never negative,  $F_{X}(x)$  increases at least weakly monotonically,  and always lies between the following limits:
$$F_{X}(x → \hspace{0.1cm} – \hspace{0.05cm} ∞) = 0, \hspace{0.5cm}F_{X}(x → +∞) = 1.$$
  • Inversely,  the probability density function can be determined from the CDF by differentiation:
$$f_{X}(x)=\frac{{\rm d} F_{X}(\xi)}{{\rm d}\xi}\Bigg |_{\hspace{0.1cm}x=\xi}.$$
  • For the probability that the random variable  $X$  is in the range between  $x_{\rm u}$  and  $x_{\rm o} > x_{\rm u}$  holds:
$${\rm Pr}(x_{\rm u} \le X \le x_{\rm o}) = F_{X}(x_{\rm o}) - F_{X}(x_{\rm u}).$$
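The relation between PDF and CDF can be illustrated numerically. This sketch integrates an exponential PDF (assumed $\lambda_X = 1$) by the midpoint rule and compares the result with the closed form $1-{\rm e}^{-\lambda_X\hspace{0.05cm}\cdot\hspace{0.03cm} x}$ of the exponential CDF:

```python
import math

# Sketch: obtain the CDF by numerically integrating the PDF and compare
# with the closed form 1 - exp(-lam*x) of the exponential distribution;
# lam = 1 is an assumed example value.
lam = 1.0

def f_X(x):
    return lam * math.exp(-lam * x) if x > 0 else 0.0

def F_X_numeric(x, n=20_000):
    dx = x / n                      # midpoint rule on [0, x]
    return sum(f_X((i + 0.5) * dx) * dx for i in range(n))

print(round(F_X_numeric(2.0), 4), round(1.0 - math.exp(-2.0), 4))
```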


Expected values and moments

The probability density function provides very extensive information about the random variable under consideration.  Less,  but more compact information is provided by the so-called  "expected values"  and  "moments".

$\text{Definition:}$  The  »expected value«  with respect to any weighting function  $g(x)$  can be calculated with the PDF  $f_{\rm X}(x)$  in the following way:

$${\rm E}\big[g (X ) \big] = \int_{-\infty}^{+\infty} g(x)\cdot f_{X}(x) \,{\rm d}x.$$

Substituting into this equation for  $g(x) = x^k$  we get the  »moment of $k$-th order«:

$$m_k = {\rm E}\big[X^k \big] = \int_{-\infty}^{+\infty} x^k\cdot f_{X} (x ) \, {\rm d}x.$$


From this equation follows:

  • with  $k = 1$  for the  first order moment  or the  (linear)  mean:
$$m_1 = {\rm E}\big[X \big] = \int_{-\infty}^{ \rm +\infty} x\cdot f_{X} (x ) \,{\rm d}x,$$
  • with  $k = 2$  for the  second order moment  or the  second moment:
$$m_2 = {\rm E}\big[X^{\rm 2} \big] = \int_{-\infty}^{ \rm +\infty} x^{ 2}\cdot f_{ X} (x) \,{\rm d}x.$$

In relation to signals,  the following terms are also common:

  • $m_1$  indicates the  DC component;    with respect to the random quantity  $X$  in the following we also write  $m_X$.
  • $m_2$  corresponds to the signal power  $P_X$   (referred to the unit resistance  $1 \ Ω$ ) .


For example, if  $X$  denotes a voltage, then according to these equations  $m_X$  has the unit  ${\rm V}$  and the power  $P_X$  has the unit  ${\rm V}^2$.  If the power is to be expressed in "watts"  $\rm (W)$,  then  $P_X$  must be divided by the resistance value  $R$. 
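The moments $m_1$ and $m_2$ can be evaluated by numerical integration of $x^k \cdot f_X(x)$. The sketch below does this for a uniform PDF; the limits $x_{\rm min} = -2$ and $x_{\rm max} = +3$ are assumed example values.

```python
# Sketch: first and second order moments of a uniform PDF by midpoint-rule
# integration of x^k * f_X(x); the limits a = -2, b = +3 are assumed values.
a, b = -2.0, 3.0
f = 1.0 / (b - a)                       # constant PDF value
n = 50_000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]
m1 = sum(x * f * dx for x in xs)        # DC component m_X
m2 = sum(x * x * f * dx for x in xs)    # power P_X
print(round(m1, 4), round(m2, 4))  # 0.5 and 2.3333
```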

Central moments

Of particular importance in statistics are the so-called  central moments,  from which many characteristics are derived.

$\text{Definition:}$  The  »central moments«,  in contrast to the conventional moments,  are each related to the mean value  $m_1$.  For these, the following applies with  $k = 1, \ 2,$ ...:

$$\mu_k = {\rm E}\big[(X-m_{\rm 1})^k\big] = \int_{-\infty}^{+\infty} (x-m_{\rm 1})^k\cdot f_X(x) \,{\rm d}x.$$


  • For mean-free random variables, the central moments  $\mu_k$  coincide with the noncentral moments  $m_k$. 
  • The first order central moment is by definition equal to  $\mu_1 = 0$.
  • The noncentral moments  $m_k$  and the central moments  $\mu_k$  can be converted directly into each other.  With  $m_0 = 1$  and  $\mu_0 = 1$  it is valid:
$$\mu_k = \sum\limits_{\kappa= 0}^{k} \left( \begin{array}{*{2}{c}} k \\ \kappa \\ \end{array} \right)\cdot m_\kappa \cdot (-m_1)^{k-\kappa},$$
$$m_k = \sum\limits_{\kappa= 0}^{k} \left( \begin{array}{*{2}{c}} k \\ \kappa \\ \end{array} \right)\cdot \mu_\kappa \cdot {m_1}^{k-\kappa}.$$
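The first conversion formula can be transcribed directly into code. The sketch below checks it against the exponential distribution, whose noncentral moments $m_k = k!/\lambda_X^k$ are given further down in this article; $\lambda_X = 1$ is an assumed example value.

```python
from math import comb, factorial

# Sketch: central moments mu_k from noncentral moments m_k via the
# binomial conversion formula; checked with the exponential distribution
# (m_k = k!/lam^k, assumed lam = 1).
lam = 1.0
m = [factorial(k) / lam**k for k in range(5)]   # m_0 = 1, m_1, ..., m_4

def central_moment(k):
    return sum(comb(k, kap) * m[kap] * (-m[1])**(k - kap)
               for kap in range(k + 1))

# mu_1 = 0, mu_2 (variance) = 1, mu_3 = 2, mu_4 = 9
print([central_moment(k) for k in range(1, 5)])
```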


Some Frequently Used Central Moments

From the last definition the following additional characteristics can be derived:

$\text{Definition:}$  The  »variance«  of the considered random variable  $X$  is the second order central moment:

$$\mu_2 = {\rm E}\big[(X-m_{\rm 1})^2\big] = \sigma_X^2.$$
  • The variance  $σ_X^2$  corresponds physically to the  "AC power"  and the  »standard deviation«  $σ_X$  gives the  "rms value".
  • From the linear and the second moment,  the variance can be calculated according to  Steiner's theorem  in the following way:  $\sigma_X^{2} = {\rm E}\big[X^2 \big] - {\rm E}^2\big[X \big].$
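Steiner's theorem in one line of code, using the closed-form moments of a uniform distribution; the limits $-2$ and $+3$ are assumed example values.

```python
# Sketch: Steiner's theorem sigma^2 = E[X^2] - (E[X])^2 for a uniform
# distribution; the limits a = -2, b = +3 are assumed example values.
a, b = -2.0, 3.0
m1 = (a + b) / 2                        # E[X]
m2 = (b**3 - a**3) / (3 * (b - a))      # E[X^2]
var = m2 - m1**2
print(round(var, 4), round((b - a)**2 / 12, 4))  # both give 2.0833
```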


$\text{Definition:}$  The  »Charlier's skewness«  $S_X$  of the considered random variable  $X$  denotes the third central moment related to  $σ_X^3$.

  • For a symmetric probability density function,  this parameter  $S_X$  is always zero.
  • The larger  $S_X = \mu_3/σ_X^3$  is,  the more asymmetric is the PDF around the mean  $m_X$.
  • For example,  for the exponential distribution the  (positive)  skewness is  $S_X = 2$,  independent of the distribution parameter  $λ$.


$\text{Definition:}$  The  »kurtosis«  of the considered random variable  $X$  is the quotient  $K_X = \mu_4/σ_X^4$    $(\mu_4:$  fourth-order central moment$)$.

  • For a Gaussian distributed random variable this always yields the value  $K_X = 3$.
  • This parameter can be used, for example, to check whether a given random variable is actually Gaussian or can at least be approximated by a Gaussian distribution.
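Both parameters can be estimated by numerical integration. The following sketch reproduces the values $S_X = 2$ and $K_X = 9$ stated in this article for the exponential distribution; $\lambda_X = 1$ is an assumed example value.

```python
import math

# Sketch: Charlier's skewness S = mu_3/sigma^3 and kurtosis K = mu_4/sigma^4
# by midpoint-rule integration of the exponential PDF (assumed lam = 1);
# expected: S close to 2 and K close to 9.
lam = 1.0
n = 60_000
dx = 60.0 / n                       # integrate over [0, 60]
xs = [(i + 0.5) * dx for i in range(n)]
w = [lam * math.exp(-lam * x) * dx for x in xs]     # probability weights
m1 = sum(x * wi for x, wi in zip(xs, w))

def mu(k):                          # k-th central moment
    return sum((x - m1)**k * wi for x, wi in zip(xs, w))

sigma = math.sqrt(mu(2))
S, K = mu(3) / sigma**3, mu(4) / sigma**4
print(round(S, 2), round(K, 2))
```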


Compilation of some Continuous–Value Random Variables


The applet considers the following distributions: 

Gaussian distribution, uniform distribution, triangular distribution, exponential distribution, Laplace distribution, Rayleigh distribution,
Rice distribution, Weibull distribution, Wigner semicircle distribution, Wigner parabolic distribution, Cauchy distribution.

Some of these will be described in detail here.

Gaussian distributed random variables

Gaussian random variable:  PDF and CDF

(1)    »Probability density function«   $($axisymmetric around  $m_X)$

$$f_X(x) = \frac{1}{\sqrt{2\pi}\cdot\sigma_X}\cdot {\rm e}^{-(x-m_X)^2 /(2\sigma_X^2) }.$$

PDF parameters: 

  • $m_X$  (mean or DC component),
  • $σ_X$  (standard deviation or rms value).


(2)    »Cumulative distribution function«   $($point symmetric around  $m_X)$

$$F_X(x)= \phi\Big(\frac{x-m_X}{\sigma_X}\Big)\hspace{0.5cm}{\rm with}\hspace{0.5cm} \phi (x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} {\rm e}^{-u^{2}/2}\,\,{\rm d}u.$$

$ϕ(x)$:   Gaussian error integral (cannot be calculated analytically, must be taken from tables).
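Instead of a table lookup, $\phi(x)$ can be computed from the standard error function via $\phi(x) = 1/2 \cdot \big(1 + {\rm erf}(x/\sqrt{2})\big)$. A short sketch; the parameters $m_X = 1$, $\sigma_X = 0.4$ are assumed example values.

```python
import math

# Sketch: Gaussian error integral phi(x) via the error function,
# phi(x) = 0.5*(1 + erf(x/sqrt(2))); m_X = 1, sigma_X = 0.4 are
# assumed example parameters.
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

m_X, sigma_X = 1.0, 0.4
F_X = lambda x: phi((x - m_X) / sigma_X)
print(round(F_X(1.5), 4))  # phi(1.25), about 0.8944
```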


(3)    »Central moments«

$$\mu_{k}=(k- 1)\cdot (k- 3) \ \cdots \ 3\cdot 1\cdot\sigma_X^k\hspace{0.2cm}\rm (if\hspace{0.2cm}\it k\hspace{0.2cm}\rm even)\hspace{0.05cm};\hspace{0.4cm}\mu_{\it k}\rm = 0\hspace{0.2cm} (if\hspace{0.2cm}\it k\hspace{0.2cm}\rm odd).$$
  • Charlier's skewness  $S_X = 0$,  since  $\mu_3 = 0$  $($the PDF is symmetric about  $m_X)$.
  • Kurtosis  $K_X = 3$,  since  $\mu_4 = 3 \cdot \sigma_X^4$;  the value  $K_X = 3$  results only for the Gaussian PDF.


(4)    »Further remarks«

  • The naming is due to the important mathematician, physicist and astronomer Carl Friedrich Gauss.
  • If  $m_X = 0$  and  $σ_X = 1$, it is often referred to as the  normal distribution.
  • The standard deviation can also be determined graphically from the bell-shaped PDF $f_{X}(x)$   (as the distance between the maximum value and the point of inflection).
  • Random variables with Gaussian PDF are realistic models for many physical quantities and are also of great importance for communications engineering.
  • The sum of many small and independent components always leads to the Gaussian PDF   ⇒   Central Limit Theorem of Statistics   ⇒   Basis for noise processes.
  • If one applies a Gaussian distributed signal to a linear filter for spectral shaping, the output signal is also Gaussian distributed.


Signal and PDF of a Gaussian noise signal

$\text{Example 1:}$  The graphic shows a section of a stochastic noise signal  $x(t)$  whose instantaneous value can be taken as a continuous random variable  $X$. From the PDF shown on the right, it can be seen that:

  • A Gaussian random variable is present.
  • Instantaneous values around the mean  $m_X$  occur most frequently.
  • If there are no statistical ties between the samples  $x_ν$  of the sequence, such a signal is also called "white noise".


Uniformly distributed random variables

Uniform distribution:  PDF and CDF

(1)    »Probability density function«

  • The probability density function (PDF)  $f_{X}(x)$  is constant in the range from  $x_{\rm min}$  to  $x_{\rm max}$,  equal to  $1/(x_{\rm max} - x_{\rm min})$,  and zero outside.
  • At the range limits,  only half this value is to be set for  $f_{X}(x)$  (the mean of the left-hand and right-hand limits).


(2)    »Cumulative distribution function«

  • The cumulative distribution function (CDF) increases in the range from  $x_{\rm min}$  to  $x_{\rm max}$  linearly from zero to  $1$. 


(3)    »Moments and central moments«

  • Mean and standard deviation have the following values for the uniform distribution:
$$m_X = \frac{\it x_ {\rm max} \rm + \it x_{\rm min}}{2},\hspace{0.5cm} \sigma_X^2 = \frac{(\it x_{\rm max} - \it x_{\rm min}\rm )^2}{12}.$$
  • For symmetric PDF   ⇒   $x_{\rm min} = -x_{\rm max}$  the mean value  $m_X = 0$  and the variance  $σ_X^2 = x_{\rm max}^2/3.$
  • Because of the symmetry around the mean  $m_X$  the Charlier skewness  $S_X = 0$.
  • Because of the absence of PDF tails,  the kurtosis  $K_X = 1.8$  is significantly smaller than for the Gaussian distribution.


(4)    »Further remarks«

  • For modeling transmission systems, uniformly distributed random variables are the exception. An example of an actual (nearly) uniformly distributed random variable is the phase in circularly symmetric interference, such as occurs in  quadrature amplitude modulation  (QAM) schemes.
  • The importance of uniformly distributed random variables for information and communication technology lies rather in the fact that, from the point of view of information theory, this PDF form represents an optimum with respect to differential entropy under the constraint of "peak limitation".
  • In image processing & encoding, the uniform distribution is often used instead of the actual distribution of the original image, which is usually much more complicated, because the difference in information content between a natural image and the model based on the uniform distribution is relatively small.
  • In the simulation of communication systems, one often uses "pseudo-random generators" based on the uniform distribution (which are relatively easy to realize), from which other distributions  (Gaussian distribution, exponential distribution, etc.)  can easily be derived.
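The last point can be sketched with inverse transform sampling: a uniform random number $u \in (0,1)$ is mapped through the inverse CDF. For the exponential distribution this gives $x = -\ln(1-u)/\lambda_X$; the value $\lambda_X = 1$ is an assumed example.

```python
import math
import random

# Sketch: deriving exponentially distributed values from a uniform
# pseudo-random generator by inverse transform sampling,
# x = -ln(1 - u)/lam; lam = 1 is an assumed example value.
random.seed(1)                      # reproducible demo
lam, N = 1.0, 200_000
samples = [-math.log(1.0 - random.random()) / lam for _ in range(N)]
mean = sum(samples) / N
print(round(mean, 2))  # close to the theoretical mean 1/lam = 1
```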


Exponentially distributed random variables

(1)    »Probability density function«

Exponential distribution:  PDF and CDF

An exponentially distributed random variable  $X$  can only take on non–negative values. For  $x>0$  the PDF has the following shape:

$$f_X(x)=\lambda_X\cdot{\rm e}^{-\lambda_X \hspace{0.05cm}\cdot \hspace{0.03cm} x}.$$
  • The larger the distribution parameter  $λ_X$,  the steeper the drop.
  • By definition,  $f_{X}(0) = λ_X/2$, which is the average of the left-hand limit  $(0)$  and the right-hand limit  $(\lambda_X)$.


(2)    »Cumulative distribution function«

By integrating the PDF,  we obtain for  $x > 0$:

$$F_{X}(x)=1-\rm e^{\it -\lambda_X\hspace{0.05cm}\cdot \hspace{0.03cm} x}.$$

(3)    »Moments and central moments«

  • The  moments  of the (one-sided) exponential distribution are generally equal to:
$$m_k = \int_{-\infty}^{+\infty} x^k \cdot f_{X}(x) \,\,{\rm d} x = \frac{k!}{\lambda_X^k}.$$
  • From this and from Steiner's theorem we get for mean and standard deviation:
$$m_X = m_1=\frac{1}{\lambda_X},\hspace{0.6cm}\sigma_X^2={m_2-m_1^2}={\frac{2}{\lambda_X^2}-\frac{1}{\lambda_X^2}}=\frac{1}{\lambda_X^2}.$$
  • The PDF is clearly asymmetric here. For the Charlier skewness  $S_X = 2$.
  • The kurtosis with   $K_X = 9$  is clearly larger than for the Gaussian distribution, because the PDF foothills extend much further.


(4)    »Further remarks«

  • The exponential distribution has great importance for reliability studies; in this context, the term "lifetime distribution" is also commonly used.
  • In these applications, the random variable is often the time  $t$  that elapses before a component fails.
  • Furthermore, it should be noted that the exponential distribution is closely related to the Laplace distribution.


Laplace distributed random variables

Laplace distribution:  PDF and CDF

(1)    »Probability density function«

As can be seen from the graph, the Laplace distribution is a "two-sided exponential distribution":

$$f_{X}(x)=\frac{\lambda_X} {2}\cdot{\rm e}^ { - \lambda_X \hspace{0.05cm} \cdot \hspace{0.05cm} \vert \hspace{0.05cm} x \hspace{0.05cm} \vert}.$$
  • The maximum value here is  $\lambda_X/2$.
  • The tangent at  $x=0$  intersects the abscissa at  $1/\lambda_X$, as in the exponential distribution.


(2)    »Cumulative distribution function«

$$F_{X}(x) = {\rm Pr}\big [X \le x \big ] = \int_{-\infty}^{x} f_{X}(\xi) \,\,{\rm d}\xi $$
$$\Rightarrow \hspace{0.5cm} F_{X}(x) = 0.5 + 0.5 \cdot {\rm sign}(x) \cdot \big [ 1 - {\rm e}^ { - \lambda_X \hspace{0.05cm} \cdot \hspace{0.05cm} \vert \hspace{0.05cm} x \hspace{0.05cm} \vert}\big ] $$
$$\Rightarrow \hspace{0.5cm} F_{X}(-\infty) = 0, \hspace{0.5cm}F_{X}(0) = 0.5, \hspace{0.5cm} F_{X}(+\infty) = 1.$$
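This CDF can be transcribed directly; the sketch below (assumed $\lambda_X = 1$) evaluates $F_X(0) = 0.5$ and an interval probability as a CDF difference:

```python
import math

# Sketch: closed-form Laplace CDF
# F(x) = 0.5 + 0.5*sign(x)*(1 - exp(-lam*|x|)); lam = 1 assumed.
lam = 1.0

def F_X(x):
    return 0.5 + 0.5 * math.copysign(1.0, x) * (1.0 - math.exp(-lam * abs(x)))

p = F_X(2.0) - F_X(-2.0)            # Pr(-2 <= X <= 2)
print(round(F_X(0.0), 2), round(p, 3))
```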

(3)    »Moments and central moments«

  • For odd  $k$,  the Laplace distribution always gives  $m_k= 0$  due to symmetry;  in particular,  the linear mean  $m_X = m_1 = 0$.
  • For even  $k$  the moments of the Laplace distribution and the exponential distribution agree:  $m_k = {k!}/{\lambda_X^k}$.
  • For the variance  $(=$ second order central moment $=$ second order moment$)$  holds:  $\sigma_X^2 = {2}/{\lambda_X^2}$   ⇒   twice as large as for the exponential distribution.
  • For the Charlier skewness,  $S_X = 0$ is obtained here due to the symmetric PDF.
  • The kurtosis is  $K_X = 6$,  significantly larger than for the Gaussian distribution, but smaller than for the exponential distribution.


(4)    »Further remarks«



Brief description of other distributions


$\text{(A) Rayleigh distribution}$     $\text{More detailed description}$

  • Probability density function:
$$f_X(x) = \left\{ \begin{array}{c} x/\lambda_X^2 \cdot {\rm e}^{- x^2/(2 \hspace{0.05cm}\cdot\hspace{0.05cm} \lambda_X^2)} \\ 0 \end{array} \right.\hspace{0.15cm} \begin{array}{*{1}c} {\rm for}\hspace{0.1cm} x\hspace{-0.05cm} \ge \hspace{-0.05cm}0, \\ {\rm for}\hspace{0.1cm} x \hspace{-0.05cm}<\hspace{-0.05cm} 0. \\ \end{array}.$$
  • Application:     Modeling of the cellular channel (non-frequency selective fading, attenuation, diffraction, and refraction effects only, no line-of-sight).


$\text{(B) Rice distribution}$     $\text{More detailed description}$

  • Probability density function  $(\rm I_0$  denotes the modified zero-order Bessel function$)$:
$$f_X(x) = \frac{x}{\lambda_X^2} \cdot {\rm exp} \big [ -\frac{x^2 + C_X^2}{2\cdot \lambda_X^2}\big ] \cdot {\rm I}_0 \left [ \frac{x \cdot C_X}{\lambda_X^2} \right ]\hspace{0.5cm}\text{with}\hspace{0.5cm}{\rm I }_0 (u) = {\rm J }_0 ({\rm j} \cdot u) = \sum_{k = 0}^{\infty} \frac{ (u/2)^{2k}}{k! \cdot \Gamma (k+1)} \hspace{0.05cm}.$$
  • Application:     Cellular channel modeling (non-frequency selective fading, attenuation, diffraction, and refraction effects only, with line-of-sight).
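The series for ${\rm I}_0$ given above can be truncated for numerical evaluation; note that $\Gamma(k+1) = k!$, so the denominator is $(k!)^2$. The sketch below (assumed $\lambda_X = 1$, $C_X = 1$) also checks that the resulting PDF integrates to one:

```python
import math

# Sketch: Rice PDF using the truncated power series of the modified
# Bessel function I_0; lam = 1 and C = 1 are assumed example values.
lam, C = 1.0, 1.0
denom = [math.factorial(k) * math.gamma(k + 1) for k in range(50)]  # (k!)^2

def I0(u):
    return sum((u / 2.0)**(2 * k) / denom[k] for k in range(50))

def f_X(x):
    return x / lam**2 * math.exp(-(x**2 + C**2) / (2 * lam**2)) * I0(x * C / lam**2)

# midpoint-rule check of the normalization over [0, 12]
n = 12_000
dx = 12.0 / n
area = sum(f_X((i + 0.5) * dx) * dx for i in range(n))
print(round(area, 3))  # close to 1
```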


$\text{(C) Weibull distribution}$     $\text{More detailed description}$

  • Probability density function:
$$f_X(x) = \lambda_X \cdot k_X \cdot (\lambda_X \cdot x)^{k_X-1} \cdot {\rm e}^{-(\lambda_X \cdot x)^{k_X}} \hspace{0.05cm}.$$
  • Application:     PDF with adjustable skewness $S_X$; exponential distribution  $(k_X = 1)$  and Rayleigh distribution  $(k_X = 2)$  included as special cases.


$\text{(D) Wigner semicircle distribution}$     $\text{More detailed description}$

  • Probability density function:
$$f_X(x) = \left\{ \begin{array}{c} 2/(\pi \cdot {R_X}^2) \cdot \sqrt{{R_X}^2 - (x- m_X)^2} \\ 0 \end{array} \right.\hspace{0.15cm} \begin{array}{*{1}c} {\rm for}\hspace{0.1cm} |x- m_X|\hspace{-0.05cm} \le \hspace{-0.05cm}R_X, \\ {\rm for}\hspace{0.1cm} |x- m_X| \hspace{-0.05cm} > \hspace{-0.05cm} R_X \\ \end{array}.$$
  • Application:     PDF of Chebyshev nodes   ⇒   zeros of Chebyshev polynomials from numerics.


$\text{(E) Wigner parabolic distribution}$

  • Probability density function:
$$f_X(x) = \left\{ \begin{array}{c} 3/(4 \cdot {R_X}^3) \cdot \big ({R_X}^2 - (x- m_X)^2\big ) \\ 0 \end{array} \right.\hspace{0.15cm} \begin{array}{*{1}c} {\rm for}\hspace{0.1cm} |x- m_X|\hspace{-0.05cm} \le \hspace{-0.05cm}R_X, \\ {\rm for}\hspace{0.1cm} |x- m_X| \hspace{-0.05cm} > \hspace{-0.05cm} R_X \\ \end{array}.$$
  • Application:     PDF of eigenvalues of symmetric random matrices whose dimension approaches infinity.


$\text{(F) Cauchy distribution}$     $\text{More detailed description}$

  • Probability density function and distribution function:
$$f_{X}(x)=\frac{1}{\pi}\cdot\frac{\lambda_X}{\lambda_X^2+x^2}, \hspace{2cm} F_{X}(x)={1}/{2}+{1}/{\pi}\cdot{\rm arctan}({x}/{\lambda_X}).$$
  • In the Cauchy distribution, all moments  $m_k$  for even  $k$  have an infinitely large value, independent of the parameter  $λ_X$.
  • Thus, this distribution also has an infinitely large variance:  $\sigma_X^2 \to \infty$.
  • Due to symmetry, for odd  $k$  all moments  $m_k = 0$, if one assumes the "Cauchy Principal Value" as in the program:  $m_X = 0, \ S_X = 0$.
  • Example:     The quotient of two mean-free Gaussian random variables is Cauchy distributed.  For practical applications,  the Cauchy distribution is of minor importance.


Exercises


  • First, select the number  $(1,\ 2, \text{...} \ )$  of the task to be processed.  The number  "$0$"  corresponds to a  "Reset":  Same setting as at program start.
  • A task description is displayed.  The parameter values are adjusted.  Solution after pressing  "Show Solution".
  • In the following  $\text{Red}$  stands for the random variable  $X$  and  $\text{Blue}$  for  $Y$.


(1)  Select  $\text{red: Gaussian PDF}\ (m_X = 1, \ \sigma_X = 0.4)$  and  $\text{blue: Rectangular PDF}\ (y_{\rm min} = -2, \ y_{\rm max} = +3)$.  Interpret the  $\rm PDF$  graph.

  •  $\text{Gaussian PDF}$:  The  $\rm PDF$ maximum is equal to  $f_{X}(x = m_X) = \sqrt{1/(2\pi \cdot \sigma_X^2)} = 0.9974 \approx 1$.
  •  $\text{Rectangular PDF}$:  All  $\rm PDF$ values are equal  $0.2$  in the range  $-2 < y < +3$.  At the edges  $f_Y(-2) = f_Y(+3)= 0.1$  (half value) holds.


(2)  Same setting as for  $(1)$.  What are the probabilities  ${\rm Pr}(X = 0)$,   ${\rm Pr}(0.5 \le X \le 1.5)$,   ${\rm Pr}(Y = 0)$   and  ${\rm Pr}(0.5 \le Y \le 1.5)$?

  •  ${\rm Pr}(X = 0)={\rm Pr}(Y = 0) \equiv 0$   ⇒   the probability that a continuous random variable takes on exactly a specific value is zero.
  •  The other two probabilities can be obtained by integration over the PDF in the range  $+0.5\ \text{...} \ +\hspace{-0.1cm}1.5$.
  •  Or:  ${\rm Pr}(0.5 \le X \le 1.5)= F_X(1.5) - F_X(0.5) = 0.8944-0.1056 = 0.7888$. Correspondingly:  ${\rm Pr}(0.5 \le Y \le 1.5)= 0.7-0.5=0.2$.


(3)  Same settings as before.  How must the standard deviation  $\sigma_X$  be changed so that with the same mean  $m_X$  it holds for the second order moment:  $P_X=2$ ?

  •  According to Steiner's theorem:  $P_X=m_X^2 + \sigma_X^2$   ⇒   $\sigma_X^2 = P_X-m_X^2 = 2 - 1^2 = 1 $   ⇒   $\sigma_X = 1$.


(4)  Same settings as before:  How must the parameters  $y_{\rm min}$  and  $y_{\rm max}$  of the rectangular PDF be changed to yield  $m_Y = 0$  and  $\sigma_Y^2 = 0.75$?

  •  Starting from the previous setting  $(y_{\rm min} = -2, \ y_{\rm max} = +3)$  we change  $y_{\rm max}$ until  $\sigma_Y^2 = 0.75$  occurs   ⇒   $y_{\rm max} = 1$.
  •  The width of the rectangle is now  $3$.  The desired mean   $m_Y = 0$  is obtained by shifting:  $y_{\rm min} = -1.5, \ y_{\rm max} = +1.5$.
  •  You could also consider that for a mean-free random variable  $(y_{\rm min} = -y_{\rm max})$  the following equation holds:   $\sigma_Y^2 = y_{\rm max}^2/3$.


(5)  For which of the adjustable distributions is the Charlier skewness  $S \ne 0$ ?

  •  The Charlier's skewness denotes the third central moment related to  $σ_X^3$   ⇒  $S_X = \mu_3/σ_X^3$  $($valid for the random variable  $X)$.
  •  If the PDF  $f_X(x)$  is symmetric around the mean  $m_X$  then the parameter  $S_X$  is always zero.
  •  Exponential distribution:  $S_X =2$;  Rayleigh distribution:  $S_X =0.631$   $($both independent of  $λ_X)$;   Rice distribution:  $S_X >0$  $($dependent of  $C_X, \ λ_X)$.
  •  With the Weibull distribution, the Charlier skewness  $S_X$  can be zero, positive or negative,  depending on the PDF parameter  $k_X$.
  •   Weibull distribution,  $\lambda_X=0.4$:  With  $k_X = 1.5$  ⇒   PDF is curved to the left  $(S_X > 0)$;   $k_X = 7$  ⇒   PDF is curved to the right  $(S_X < 0)$.


(6)  Select  $\text{Red: Gaussian PDF}\ (m_X = 1, \ \sigma_X = 0.4)$  and  $\text{Blue: Gaussian PDF}\ (m_X = 0, \ \sigma_X = 1)$.  What is the kurtosis in each case?

  •  For each Gaussian distribution the kurtosis has the same value:   $K_X = K_Y =3$.  Therefore,  $K-3$  is called "excess".
  • This parameter can be used to check whether a given random variable can be approximated by a Gaussian distribution.


(7)  For which distributions does a significantly smaller kurtosis value result than  $K=3$?  And for which distributions does a significantly larger one?

  •  $K<3$  always results when the PDF values are more concentrated around the mean than in the Gaussian distribution.
  •  This is true, for example, for the uniform distribution  $(K=1.8)$  and for the triangular distribution  $(K=2.4)$.
  •  $K>3$,  if the PDF offshoots are more pronounced than for the Gaussian distribution.  Example:  Exponential PDF  $(K=9)$.


(8)  Select  $\text{Red: Exponential PDF}\ (\lambda_X = 1)$  and  $\text{Blue: Laplace PDF}\ (\lambda_Y = 1)$.  Interpret the differences.

  •  The Laplace distribution is symmetric around its mean  $(S_Y=0, \ m_Y=0)$  unlike the exponential distribution  $(S_X=2, \ m_X=1)$.
  •  The even moments  $m_2, \ m_4, \ \text{...}$  are equal,  for example:  $P_X=P_Y=2$.  But not the variances:  $\sigma_X^2 =1, \ \sigma_Y^2 =2$.
  •  The probabilities  ${\rm Pr}(|X| < 2) = F_X(2) = 0.864$  and  ${\rm Pr}(|Y| < 2) = F_Y(2) - F_Y(-2)= 0.932 - 0.068 = 0.864$  are equal.
  •  In the Laplace PDF, the values are more tightly concentrated around the mean than in the exponential PDF:  $K_Y =6 < K_X = 9$.


(9)  Select  $\text{Red: Rice PDF}\ (\lambda_X = 1, \ C_X = 1)$  and  $\text{Blue: Rayleigh PDF}\ (\lambda_Y = 1)$.  Interpret the differences.

  •   With  $C_X = 0$  the Rice PDF transitions to the Rayleigh PDF.  A larger  $C_X$  improves the performance, e.g., in mobile communications.
  •   In both  "Rayleigh"  and  "Rice"  the abscissa is the magnitude  $A$  of the received signal.  It is favorable if  ${\rm Pr}(A \le A_0)$  is small  $(A_0$  given$)$.
  •   For  $C_X \ne 0$  and equal  $\lambda$  the Rice CDF is below the Rayleigh CDF   ⇒   smaller  ${\rm Pr}(A \le A_0)$  for all  $A_0$.


(10)  Select  $\text{Red: Rice PDF}\ (\lambda_X = 0.6, \ C_X = 2)$.  By which distribution  $F_Y(y)$  can this Rice distribution be well approximated?

  •   The kurtosis   $K_X = 2.9539 \approx 3$  indicates the Gaussian distribution.   Favorable parameters:  $m_Y = 2.1 > C_X, \ \ \sigma_Y = \lambda_X = 0.6$.
  •   The larger the quotient  $C_X/\lambda_X$  is,  the better the Rice PDF is approximated by a Gaussian PDF.
  •   For large   $C_X/\lambda_X$  the Rice PDF has no more similarity with the Rayleigh PDF.


(11)  Select  $\text{Red: Weibull PDF}\ (\lambda_X = 1, \ k_X = 1)$  and  $\text{Blue: Weibull PDF}\ (\lambda_Y = 1, \ k_Y = 2)$. Interpret the results.

  •   The Weibull PDF  $f_X(x)$  is identical to the exponential PDF and  $f_Y(y)$  to the Rayleigh PDF.
  •   However, after the best fit, the parameters  $\lambda_{\rm Weibull} = 1$  and  $\lambda_{\rm Rayleigh} \approx 0.7$  differ.
  •   Moreover,  $f_X(x \to 0) \to \infty$  holds for  $k_X < 1$.  However, this does not result in infinite moments.
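The two special cases can be checked pointwise.  The sketch below assumes the Weibull parametrization  $f(x) = k\lambda\,(\lambda x)^{k-1}\,{\rm e}^{-(\lambda x)^k}$;  with it,  $k=2, \ \lambda = 1$  reproduces a Rayleigh PDF with parameter  $1/\sqrt{2} \approx 0.7$, matching the statement above:

```python
import math

# Assumed Weibull PDF with scale parameter lambda and shape parameter k
def weibull_pdf(x, lam, k):
    return k * lam * (lam * x)**(k - 1) * math.exp(-(lam * x)**k)

def exp_pdf(x, lam):       # exponential PDF
    return lam * math.exp(-lam * x)

def rayleigh_pdf(x, lam):  # Rayleigh PDF with parameter lambda
    return x / lam**2 * math.exp(-x**2 / (2 * lam**2))

# k = 1: exponential;  k = 2: Rayleigh with lambda = 1/sqrt(2)
for x in (0.1, 0.5, 1.0, 2.0):
    print(weibull_pdf(x, 1, 1), exp_pdf(x, 1))
    print(weibull_pdf(x, 1, 2), rayleigh_pdf(x, 1 / math.sqrt(2)))
```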


(12)  Select  $\text{Red: Weibull PDF}\ (\lambda_X = 1, \ k_X = 1.6)$  and   $\text{Blue: Weibull PDF}\ (\lambda_Y = 1, \ k_Y = 5.6)$.  Interpret the Charlier skewness.

  •   One observes:   For the PDF parameter  $k < k_*$  the Charlier skewness is positive and for  $k > k_*$  negative.  It is approximately  $k_* = 3.6$.
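The zero crossing  $k_*$  can also be computed directly from the raw Weibull moments  $m_n = \lambda^{-n}\,\Gamma(1 + n/k)$  (the scale parameter cancels in the skewness).  A sketch using a simple bisection:

```python
import math

# Charlier skewness of the Weibull distribution from its raw moments
# m_n = Gamma(1 + n/k)  (scale parameter set to 1; it cancels in S)
def weibull_skewness(k):
    g = lambda n: math.gamma(1.0 + n / k)
    m1, m2, m3 = g(1), g(2), g(3)
    var = m2 - m1**2
    # third central moment: m3 - 3*m1*var - m1^3
    return (m3 - 3.0 * m1 * var - m1**3) / var**1.5

# Bisection for the zero crossing k* of the skewness
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if weibull_skewness(mid) > 0 else (lo, mid)
k_star = 0.5 * (lo + hi)
print(k_star)   # close to 3.6
```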


(13)  Select  $\text{Red: Semicircle PDF}\ (m_X = 0, \ R_X = 1)$  and  $\text{Blue: Parabolic PDF}\ (m_Y = 0, \ R_Y = 1)$.  Vary the parameter  $R$  in each case.

  •   Both PDFs are mean-free and symmetric  $(S_X = S_Y = 0)$,  with  $\sigma_X^2 = 0.25, \ K_X = 2$  and  $\sigma_Y^2 = 0.2, \ K_Y = 15/7 \approx 2.14$,  respectively.
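These moments can be checked by numerical integration (a sketch for  $R = 1$;  the exact parabolic kurtosis is  $15/7 \approx 2.14$):

```python
import numpy as np

R = 1.0
x = np.linspace(-R, R, 200001)
dx = x[1] - x[0]

# Wigner semicircle PDF and Wigner parabolic PDF on [-R, R]
semi = 2 / (np.pi * R**2) * np.sqrt(R**2 - x**2)
para = 3 / (4 * R**3) * (R**2 - x**2)

# Variance and kurtosis (both PDFs are mean-free)
def moments(f):
    m2 = float(np.sum(x**2 * f) * dx)
    m4 = float(np.sum(x**4 * f) * dx)
    return m2, m4 / m2**2

var_s, K_s = moments(semi)   # semicircle: 0.25, 2
var_p, K_p = moments(para)   # parabolic:  0.2, 15/7
print(var_s, K_s, var_p, K_p)
```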



Applet Manual


Screenshot of the German version

    (A)     Selection of the distribution  $f_X(x)$  (red curves and output values)

    (B)     Parameter input for the "red distribution" via slider

    (C)     Selection of the distribution  $f_Y(y)$  (blue curves and output values)

    (D)     Parameter input for the "blue distribution" via slider

    (E)     Graphic area for the probability density function (PDF)

    (F)     Graphic area for the distribution function (CDF)

    (G)     Numerical output for the "red distribution"

    (H)     Numerical output for the "blue distribution"

    ( I )     Input of  $x_*$  and  $y_*$  abscissa values for the numerical outputs

    (J)     Experiment execution area:   task selection

    (K)     Experiment execution area:   task description

    ( L)     Experiment execution area:   sample solution


Selection options for  $\rm A$  and  $\rm C$:  

Gaussian distribution,   uniform distribution,   triangular distribution,   exponential distribution,   Laplace distribution,   Rayleigh distribution,  Rice distribution,   Weibull distribution,   Wigner semicircle distribution,   Wigner parabolic distribution,   Cauchy distribution.


The following »integral parameters« are output  $($with respect to $X)$:  

Linear mean value  $m_X = {\rm E}\big[X \big]$,   second order moment  $P_X ={\rm E}\big[X^2 \big] $,   variance  $\sigma_X^2 = P_X - m_X^2$,   standard deviation  $\sigma_X$,  Charlier's skewness  $S_X$,   kurtosis  $K_X$.
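These integral parameters can be reproduced numerically for any of the distributions above.  A sketch for a Gaussian with  $m_X = 1, \ \sigma_X = 2$  (so that  $P_X = m_X^2 + \sigma_X^2 = 5$,  $S_X = 0$,  $K_X = 3$):

```python
import numpy as np

m_true, s_true = 1.0, 2.0
x = np.linspace(m_true - 10 * s_true, m_true + 10 * s_true, 400001)
dx = x[1] - x[0]
f = np.exp(-(x - m_true)**2 / (2 * s_true**2)) / (s_true * np.sqrt(2 * np.pi))

mom = lambda n: float(np.sum(x**n * f) * dx)   # raw moment E[X^n]

m1 = mom(1)              # linear mean m_X
P = mom(2)               # second-order moment P_X
var = P - m1**2          # variance sigma_X^2
sigma = var**0.5         # standard deviation sigma_X
S = float(np.sum((x - m1)**3 * f) * dx) / sigma**3   # Charlier skewness S_X
K = float(np.sum((x - m1)**4 * f) * dx) / sigma**4   # kurtosis K_X
print(m1, P, var, S, K)
```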


In all applets top right:    Changeable graphical interface design   ⇒   Theme:

  • Dark:   black background  (recommended by the authors).
  • Bright:   white background  (recommended for beamers and printouts).
  • Deuteranopia:   for users with pronounced green visual impairment.
  • Protanopia:   for users with pronounced red visual impairment.


About the Authors


This interactive calculation tool was designed and implemented at the  $\text{Institute for Communications Engineering}$  at the  $\text{Technical University of Munich}$.

  • Last revision and English version 2021 by  »Carolin Mirschina«  in the context of a working student activity. 
  • The conversion of this applet was financially supported by  $\text{Studienzuschüsse}$  (TUM Department of Electrical and Computer Engineering).  Many thanks.


Once again: Open Applet in new Tab

Open Applet in new Tab   Open German Version