Theory of Stochastic Signals/Gaussian Distributed Random Variables

From LNTwww
 
 
   
 
   
 
{{Header
|Untermenü=Continuous Random Variables
|Vorherige Seite=Uniformly Distributed Random Variables
|Nächste Seite=Exponentially Distributed Random Variables
}}
==General description==
<br>
Random variables with Gaussian probability density function&nbsp; - the name goes back to the eminent mathematician,&nbsp; physicist and astronomer&nbsp; [https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss $\text{Carl Friedrich Gauss}$]&nbsp; -&nbsp; are realistic models for many physical quantities and are also of great importance for communications engineering.

{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;
To describe the&nbsp; &raquo;'''Gaussian distribution'''&laquo;,&nbsp; we consider a sum of&nbsp; $I$&nbsp; statistical variables:
:$$x=\sum\limits_{i=\rm 1}^{\it I}x_i .$$

*According to the&nbsp; [https://en.wikipedia.org/wiki/Central_limit_theorem $\text{central limit theorem of statistics}$]&nbsp; this sum has a Gaussian PDF in the limiting case&nbsp; $(I → ∞)$,&nbsp; as long as the individual components&nbsp; $x_i$&nbsp; have no statistical bindings.&nbsp; This holds for&nbsp; (almost)&nbsp; all density functions of the individual summands.
*Many&nbsp; "noise processes"&nbsp; fulfill exactly this condition,&nbsp; that is,&nbsp; they are additively composed of a very large number of independent individual contributions,&nbsp; so that their pattern functions&nbsp; ("noise signals")&nbsp; exhibit a Gaussian amplitude distribution.
*If a Gaussian distributed signal is applied to a linear filter for spectral shaping,&nbsp; the output signal is also Gaussian distributed.&nbsp; Only the distribution parameters such as mean and standard deviation change,&nbsp; as well as the internal statistical bindings of the samples.}}
  
[[File:P_ID68__Sto_T_3_5_S1_neu.png |right|frame|Gaussian distributed and uniformly distributed random signal]]
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp;
The graph shows in comparison
*on the left,&nbsp; a Gaussian random signal&nbsp; $x_1(t)$,&nbsp; and
*on the right,&nbsp; a uniformly distributed signal&nbsp; $x_2(t)$

with equal mean&nbsp; $m_1$&nbsp; and equal standard deviation&nbsp; $σ$.

It can be seen that with the Gaussian distribution,&nbsp; in contrast to the uniform distribution,
*arbitrarily large and arbitrarily small amplitude values can occur,
*even if they are improbable compared to the mean amplitude range.}}
  
==Probability density function &ndash; Cumulative distribution function==
<br>
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;
The&nbsp; &raquo;'''probability density function'''&laquo;&nbsp; $\rm (PDF)$&nbsp; of a Gaussian distributed random variable&nbsp; $x$&nbsp; is generally:
[[File:EN_Sto_T_3_5_S2.png |right|frame| PDF and CDF of a Gaussian distributed random variable]]
:$$f_x(x) = \frac{1}{\sqrt{2\pi}\cdot\sigma}\cdot {\rm e}^{-(x-m_1)^2 /(2\sigma^2) }.$$

The parameters of such a Gaussian PDF are
*$m_1$&nbsp; ("mean"&nbsp; or&nbsp; "DC component"),
*$σ$&nbsp; ("standard deviation").


If&nbsp; $m_1 = 0$&nbsp; and&nbsp; $σ = 1$,&nbsp; one often speaks of the&nbsp; "normal distribution".

From the left plot it can be seen that the standard deviation&nbsp; $σ$&nbsp; can also be determined graphically from the bell-shaped PDF&nbsp; $f_{x}(x)$,&nbsp; namely as the distance between the maximum and the inflection point.}}
On the right,&nbsp; the&nbsp; &raquo;'''cumulative distribution function'''&laquo;&nbsp; $\rm (CDF)$&nbsp; $F_{x}(r)$&nbsp; of the Gaussian distributed random variable is shown.&nbsp; It can be seen:
*The CDF is point symmetric about the mean&nbsp; $m_1$.
*By integration over the Gaussian PDF one obtains:
:$$F_x(r)= \phi(\frac{\it r-m_{\rm 1}}{\sigma})\hspace{0.5cm}\rm with\hspace{0.5cm}\rm \phi (\it x\rm ) = \frac{\rm 1}{\sqrt{\rm 2\it \pi}}\int_{-\rm\infty}^{\it x} \rm e^{\it -u^{\rm 2}/\rm 2}\,\, d \it u.$$
*One calls&nbsp; $ϕ(x)$&nbsp; the&nbsp; &raquo;'''Gaussian error integral'''&laquo;.&nbsp; Its course cannot be calculated in closed form and must therefore be taken from tables.
*$ϕ(x)$&nbsp; can be approximated by a Taylor series or computed from the function&nbsp; ${\rm erf}(x)$&nbsp; often available in program libraries.
  

The topic of this chapter is illustrated with examples in the&nbsp; (German language)&nbsp; learning video&nbsp; [[Der_AWGN-Kanal_(Lernvideo)|"Der AWGN-Kanal"]] &nbsp; $\Rightarrow$ &nbsp; "The AWGN channel",&nbsp; especially in the second part.

==Exceedance probability==
<br>
In the study of digital transmission systems,&nbsp; it is often necessary to determine the probability that a&nbsp; (zero mean)&nbsp; Gaussian distributed random variable&nbsp; $x$&nbsp; with variance&nbsp; $σ^2$&nbsp; exceeds a given value&nbsp; $x_0$.

{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; For this&nbsp; &raquo;'''exceedance probability'''&laquo;&nbsp; holds:
[[File:P_ID621__Sto_T_3_5_S3neu.png |right|frame| Complementary Gaussian error integral&nbsp; ${\rm Q}(x)$]]
:$${\rm Pr}(x > x_{\rm 0})={\rm Q}({x_{\rm 0} }/{\sigma}).$$
*Here,&nbsp; ${\rm Q}(x) = 1 - {\rm ϕ}(x)$&nbsp; denotes the complementary function to&nbsp; ${\rm ϕ}(x)$.&nbsp; It is called the&nbsp; &raquo;'''complementary Gaussian error integral'''&laquo;&nbsp; and the following calculation rule applies:
:$$\rm Q (\it x\rm ) = \rm 1- \phi (\it x)$$
:$$\Rightarrow \hspace{0.3cm}\rm Q (\it x\rm ) = \frac{\rm 1}{\sqrt{\rm 2\pi} }\int_{\it x}^{\rm +\infty}\hspace{-0.4cm}\rm e^{\it - u^{\rm 2}/\hspace{0.05cm} \rm 2}\,d \it u .$$
*Like&nbsp; ${\rm \phi}(x)$,&nbsp; ${\rm Q}(x)$&nbsp; cannot be calculated in closed form and must be taken from tables.
*In libraries one often finds the function&nbsp; ${\rm erfc}(x)$,&nbsp; which is related to&nbsp; ${\rm Q}(x)$&nbsp; as follows:
:$${\rm Q}(x)={\rm 1}/\hspace{0.05cm}{\rm 2}\cdot \rm erfc({\it x}/{\sqrt{\rm 2} }).$$}}

Especially for larger&nbsp; $x$-values&nbsp; (i.e.,&nbsp; for small error probabilities)&nbsp; the bounds given below provide useful estimates for the complementary Gaussian error integral:
*&raquo;'''Upper bound'''&laquo;&nbsp; (German:&nbsp; "obere Schranke" &nbsp; &rArr; &nbsp; subscript&nbsp; "o"):
:$${\rm Q_o}(x)=\frac{ 1}{\sqrt{2\pi}\cdot x}\cdot {\rm e}^{- x^{2}/\hspace{0.05cm}2} \ge {\rm Q}(x). $$
*&raquo;'''Lower bound'''&laquo;&nbsp; (German:&nbsp; "untere Schranke" &nbsp; &rArr; &nbsp; subscript&nbsp; "u"):
:$${\rm Q_u}(x)=\frac{ 1-{ 1}/{ x^{ 2}}}{\sqrt{ 2\pi}\cdot  x}\cdot {\rm e}^{- x^{ 2}/\hspace{0.05cm} 2} ={\rm Q_o}(x ) \cdot \left( 1-{ 1}/{ x^{ 2}}\right)  \le {\rm Q}(x) .$$

{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;
The graph above shows the function&nbsp; ${\rm Q}(x)$&nbsp; in logarithmic representation for linear&nbsp; (upper&nbsp; $x$&ndash;axis)&nbsp; and logarithmic abscissa values&nbsp; (lower axis).
*The upper bound&nbsp; ${\rm Q_o}(x)$&nbsp; (red circles)&nbsp; is useful from about&nbsp; $x = 1$,&nbsp; the lower bound&nbsp; ${\rm Q_u}(x)$&nbsp; (green diamonds)&nbsp; from&nbsp; $x ≈ 2$.
*For&nbsp; $x ≥ 4$&nbsp; both bounds are indistinguishable from the actual curve&nbsp; ${\rm Q}(x)$&nbsp; within drawing precision. }}


The interactive HTML5/JavaScript applet&nbsp; [[Applets:Complementary_Gaussian_Error_Functions|"Complementary Gaussian Error Functions"]]&nbsp; provides
*the numerical values of the functions&nbsp; ${\rm Q}(x)$&nbsp; and&nbsp; $1/2 \cdot {\rm erfc}(x)$,
*including the two bounds given here.

==Central moments and moments==
<br>
The characteristic quantities of the Gaussian distribution have the following properties:
*The central moments&nbsp; $\mu_k$&nbsp; $($identical to the moments&nbsp; $m_k$&nbsp; of the equivalent zero mean random variable&nbsp; $x - m_1)$&nbsp; are,&nbsp; for the Gaussian PDF as well as for the uniform distribution,&nbsp; identically zero for odd values of&nbsp; $k$&nbsp; due to the symmetry.
*The second central moment is by definition equal to the variance:&nbsp; $\mu_2 = σ^2$.
*All higher central moments with even values of&nbsp; $k$&nbsp; can be expressed by the variance&nbsp; $σ^2$&nbsp; for the Gaussian PDF&nbsp; &ndash; mind you:&nbsp; exclusively for this one:
:$$\mu_{k}=(k- 1)\cdot (k- 3) \ \cdots \ 3\cdot 1\cdot\sigma^k\hspace{0.2cm}\rm (if\hspace{0.1cm}\it k\hspace{0.1cm}\rm even).$$
*From this,&nbsp; the noncentered moments&nbsp; $m_k$&nbsp; can be determined as follows:
:$$m_k = \sum\limits_{\kappa= 0}^{k} \left(          \begin{array}{*{2}{c}}  k \\  \kappa \\  \end{array}    \right)\cdot \mu_\kappa \cdot {m_1}^{k-\kappa}.$$
:This last equation holds in general,&nbsp; i.e.,&nbsp; for arbitrary distributions.

{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;
*From the above equation it follows directly that&nbsp; $\mu_4 = 3 \cdot σ^4$,&nbsp; and from this the kurtosis value&nbsp; $K = 3$.
*For this reason,&nbsp; one often refers to&nbsp; $K-3$&nbsp; as the&nbsp; "Gaussian deviation"&nbsp; or the&nbsp; "excess".
*If the Gaussian deviation is negative,&nbsp; the PDF decays faster than for the Gaussian distribution.&nbsp; For example,&nbsp; for a uniform distribution the Gaussian deviation always has the numerical value&nbsp; $1.8 - 3 = -1.2$. }}

{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; The first central moments of a Gaussian random variable with standard deviation&nbsp; $σ = 1/2$ &nbsp; &rArr; &nbsp; variance&nbsp; $σ^2 = 1/4$&nbsp; are:
:$$\mu_2 = \frac{1}{4}, \hspace{0.4cm}\mu_4 = \frac{3}{16},\hspace{0.4cm}\mu_6 = \frac{15}{64}, \hspace{0.4cm}\mu_8 = \frac{105}{256}.$$
All central moments with odd index are identically zero.}}


The interactive HTML5/JavaScript applet&nbsp; [[Applets:PDF,_CDF_and_Moments_of_Special_Distributions|"PDF, CDF and moments of special distributions"]]&nbsp; gives,&nbsp; among other things,&nbsp; the characteristic quantities of the Gaussian distribution.

==Gaussian generation by the addition method==
<br>
This simple procedure for the computational generation of a Gaussian random variable,&nbsp; based on the&nbsp; [https://en.wikipedia.org/wiki/Central_limit_theorem $\text{central limit theorem of statistics}$],&nbsp; shall be outlined here only in bullet-point fashion:


'''(1)''' &nbsp; One starts from random variables&nbsp; $u_i$&nbsp; that are uniformly distributed&nbsp; $($between&nbsp; $0$&nbsp; and&nbsp; $1)$&nbsp; and statistically independent &nbsp; ⇒ &nbsp; mean&nbsp; $m_u = 1/2$,&nbsp; variance&nbsp; $\sigma_u^2 = 1/12.$

'''(2)''' &nbsp; One now forms the sum over&nbsp; $I$&nbsp; summands,&nbsp; where&nbsp; $I$&nbsp; must be chosen sufficiently large:
:$$s=\sum\limits_{i=1}^{I}u_i.$$

'''(3)''' &nbsp; According to the central limit theorem,&nbsp; the random variable&nbsp; $s$&nbsp; is Gaussian distributed to a good approximation if&nbsp; $I$&nbsp; is sufficiently large.&nbsp; In contrast,&nbsp; for&nbsp; $I =2$,&nbsp; only an amplitude-limited triangular PDF&nbsp; $($with values between&nbsp; $0$&nbsp; and&nbsp; $2)$&nbsp; results &nbsp; &rArr; &nbsp; convolution of two rectangles.

'''(4)''' &nbsp; The mean of the random variable&nbsp; $s$&nbsp; is&nbsp; $m_s = I/2$.&nbsp; Since the uniformly distributed random variables&nbsp; $u_i$&nbsp; were assumed to be statistically independent of each other,&nbsp; their variances add up,&nbsp; yielding for the variance of&nbsp; $s$&nbsp; the value&nbsp; $\sigma_s^2 = I/12$.

'''(5)''' &nbsp; If a Gaussian distributed random variable&nbsp; $x$&nbsp; with a different mean&nbsp; $m_x$&nbsp; and a different standard deviation&nbsp; $σ_x$&nbsp; is to be generated,&nbsp; the following linear transformation must also be performed:
:$$x=m_x+\frac{\sigma_x}{\sqrt{I/\rm 12}}\cdot \bigg[\big (\sum\limits_{\it i=\rm 1}^{\it I}u_i\big )-{I}/{\rm 2}\bigg].$$

'''(6)''' &nbsp; With the parameter&nbsp; $I =12$&nbsp; the generation rule simplifies,&nbsp; which can be exploited especially in computationally time-critical applications,&nbsp; e.g. in a real-time simulation:
:$$x=m_x+\sigma_x\cdot \left [\big(\sum\limits_{i=\rm 1}^{12}\it u_i \rm \big )-\rm 6 \right ].$$

{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; The Gaussian random variable approximated by the addition method&nbsp; $($with parameter&nbsp; $I)$&nbsp; yields values only in a limited range around the mean&nbsp; $m_x$.&nbsp; In general:
:$$m_x-\sqrt{3 I}\cdot \sigma_x\le x \le m_x+\sqrt{3 I}\cdot \sigma_x.$$

*The error with respect to the theoretical Gaussian distribution is largest at these limits and becomes smaller with increasing&nbsp; $I$.
*The topic of this chapter is illustrated with examples in the&nbsp; (German language)&nbsp; learning video&nbsp; <br> &nbsp; &nbsp; [[Prinzip_der_Additionsmethode_(Lernvideo)|"Prinzip der Additionsmethode"]] &nbsp; $\Rightarrow$ &nbsp; "Principle of the addition method". }}

==Gaussian generation with the Box/Muller method==
<br>
In this method,&nbsp; two statistically independent Gaussian distributed random variables&nbsp; $x$&nbsp; and&nbsp; $y$&nbsp; are generated&nbsp; (approximately)&nbsp; from two random variables&nbsp; $u$&nbsp; and&nbsp; $v$&nbsp; $($uniformly distributed between&nbsp; $0$&nbsp; and&nbsp; $1$&nbsp; and statistically independent$)$&nbsp; by&nbsp; [[Theory_of_Stochastic_Signals/Exponentially_Distributed_Random_Variables#Transformation_of_random_variables|$\text{nonlinear transformation}$]]:
:$$x=m_x+\sigma_{x}\cdot \cos(2 \pi u)\cdot\sqrt{-2\cdot \ln(v)},$$
:$$y=m_y+\sigma_{y}\cdot \sin(2 \pi u)\cdot\sqrt{-2\cdot \ln(v)}.$$

The Box and Muller method&nbsp; &ndash; hereafter abbreviated&nbsp; "BM"&nbsp; &ndash; can be characterized as follows:
*The theoretical background for the validity of the above generation rules is based on the laws for&nbsp; [[Theory_of_Stochastic_Signals/Two-Dimensional_Random_Variables|$\text{two-dimensional random variables}$]].
*The two equations successively yield two Gaussian values without statistical bindings.&nbsp; This fact can be used to reduce simulation time by generating a tuple&nbsp; $(x, \ y)$&nbsp; of Gaussian values at each function call.
*A comparison of the computation times shows that&nbsp; &ndash; with the best possible implementation in each case&nbsp; &ndash; the BM method is superior to the addition method with&nbsp; $I =12$&nbsp; by&nbsp; (approximately)&nbsp; a factor of&nbsp; $3$.
*The range of values is less restricted in the BM method than in the addition method,&nbsp; so that even small probabilities are simulated more accurately.&nbsp; But even with the BM method it is not possible to simulate arbitrarily small error probabilities.

{{GraueBox|TEXT= 
$\text{Example 3:}$&nbsp;
For the following estimation,&nbsp; we assume the parameters&nbsp; $m_x = m_y = 0$&nbsp; and&nbsp; $σ_x = σ_y = 1$.
*For a 32-bit computer,&nbsp; the smallest representable float number is&nbsp; $2^{-31} ≈ 0.466 \cdot 10^{-9}$.&nbsp; Thus the root expression in the generation rule of the BM method cannot become larger than approximately&nbsp; $6.55$,&nbsp; and even this value is extremely improbable.
*Since both the cosine and the sine function are limited in magnitude to&nbsp; $1$,&nbsp; this would also be the maximum possible value for&nbsp; $x$&nbsp; and&nbsp; $y$.


However,&nbsp; a simulation documented in&nbsp; [ES96]<ref name='ES96'>Eck, P.; Söder, G.:&nbsp; Tabulated Inversion, a Fast Method for White Gaussian Noise Simulation.&nbsp; In:&nbsp; AEÜ Int. J. Electron. Commun. 50 (1996), pp. 41-48.</ref>&nbsp; over&nbsp; $10^{9}$&nbsp; samples has shown that the BM method approximates the Q function very well only down to error probabilities of&nbsp; $10^{-5}$,&nbsp; after which the curve breaks off steeply.
*The maximum occurring value of the root expression was not&nbsp; $6.55$&nbsp; but,&nbsp; due to the actual random values&nbsp; $u$&nbsp; and&nbsp; $v$,&nbsp; only about&nbsp; $4.6$,&nbsp; which explains the abrupt drop from about&nbsp; $10^{-5}$&nbsp; on.
*Of course,&nbsp; this method works much better with 64-bit arithmetic operations.}}

==Gaussian generation with the "Tabulated Inversion" method==
<br>
In this method,&nbsp; developed by Peter Eck and Günter Söder&nbsp; [ES96]<ref name='ES96'/>,&nbsp; the following procedure is used:

'''(1)''' &nbsp; The Gaussian PDF is divided into&nbsp; $J$&nbsp; intervals with equal area contents&nbsp; &ndash; and correspondingly different widths &ndash;,&nbsp; where&nbsp; $J$&nbsp; is a power of two.

'''(2)''' &nbsp; A characteristic value&nbsp; $C_j$&nbsp; is assigned to the interval with index&nbsp; $j$.&nbsp; For each new function value it is then sufficient to call a single integer random generator,&nbsp; which generates the integer values&nbsp; $j = ±1, \hspace{0.1cm}\text{...} \hspace{0.1cm}, ±J/2$&nbsp; with equal probability and thus selects one of the&nbsp; $C_j$.

'''(3)''' &nbsp; If&nbsp; $J$&nbsp; is chosen sufficiently large,&nbsp; e.g.&nbsp; $J = 2^{15} = 32\hspace{0.03cm}768$,&nbsp; the&nbsp; $C_j$&nbsp; can for simplicity be set equal to the interval midpoints.&nbsp; These values need to be calculated only once and can be stored in a file before the actual simulation.

'''(4)''' &nbsp; The boundary regions,&nbsp; on the other hand,&nbsp; are problematic and must be treated separately:&nbsp; there,&nbsp; a float value is determined by&nbsp; [[Theory_of_Stochastic_Signals/Exponentially_Distributed_Random_Variables#Transformation_of_random_variables|$\text{nonlinear transformation}$]]&nbsp; according to the tails of the Gaussian PDF.

[[File:EN_Sto_T_3_5_S9.png |right|frame|To illustrate the "Tabulated Inversion" procedure]]
{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp;
The sketch shows the PDF splitting for&nbsp; $J = 16$&nbsp; by the boundaries&nbsp; $I_{-7}$, ... , $ I_7$.
*These interval boundaries were chosen such that each interval has the same area&nbsp; $p_j = 1/J = 1/16$.
*The characteristic value&nbsp; $C_j$&nbsp; of each interval lies exactly midway between&nbsp; $I_{j-1}$&nbsp; and&nbsp; $I_j$.


One now generates a uniformly distributed discrete random variable&nbsp; $k$&nbsp; $($with values between&nbsp; $1$&nbsp; and&nbsp; $8)$&nbsp; and additionally a sign bit.
*For example,&nbsp; if the sign bit is negative and&nbsp; $k =4$,&nbsp; the following value is output:
:$$C_{-4} = -C_4 =-(0.49+0.67)/2 =-0.58.$$
*For&nbsp; $k =8$&nbsp; the special case occurs that the random value&nbsp; $C_8$&nbsp; must be determined by nonlinear transformation corresponding to the tails of the Gaussian curve.}}

{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; The properties of&nbsp; "Tabulated Inversion"&nbsp; can be summarized as follows:
*With&nbsp; $J = 2^{15}$,&nbsp; this method is faster than the BM method by about a factor of&nbsp; $8$,&nbsp; with comparable simulation accuracy.
*A disadvantage is that the exceedance probability&nbsp; ${\rm Pr}(x > r)$&nbsp; is no longer continuous in the inner regions;&nbsp; <br>instead,&nbsp; a staircase curve results due to the discretization.&nbsp; This shortcoming can be mitigated by a larger&nbsp; $J$.
*The special treatment of the edges makes the method suitable for very small error probabilities.&nbsp; }}


==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_3.6:_Noisy_DC_Signal|Exercise 3.6: Noisy DC Signal]]

[[Aufgaben:Exercise_3.6Z:_Examination_Correction|Exercise 3.6Z: Examination Correction]]

[[Aufgaben:Exercise_3.7:_Bit_Error_Rate_(BER)|Exercise 3.7: Bit Error Rate (BER)]]

[[Aufgaben:Exercise_3.7Z:_Error_Performance|Exercise 3.7Z: Error Performance]]


==References==
<references/>

{{Display}}

Latest revision as of 10:00, 22 December 2022

General description


Random variables with Gaussian probability density function  - the name goes back to the important mathematician,  physicist and astronomer  $\text{Carl Friedrich Gauss}$  -  are realistic models for many physical variables and are also of great importance for communications engineering.

$\text{Definition:}$  To describe the  »Gaussian distribution«,  we consider a sum of  $I$  statistical variables:

$$x=\sum\limits_{i=\rm 1}^{\it I}x_i .$$
  • According to the  $\text{central limit theorem of statistics}$  this sum has a Gaussian PDF in the limiting case  $(I → ∞)$  as long as the individual components  $x_i$  have no statistical bindings.  This holds  (almost)  for all density functions of the individual summands.
  • Many  "noise processes"  fulfill exactly this condition,  that is,  they are additively composed of a large number of independent individual contributions,  so that their pattern functions  ("noise signals")  exhibit a Gaussian amplitude distribution.
  • If one applies a Gaussian distributed signal to a linear filter for spectral shaping,  the output signal is also Gaussian distributed.   Only the distribution parameters such as mean and standard deviation change,  as well as the internal statistical bindings of the samples.

.

Gaussian distributed and uniformly distributed random signal

$\text{Example 1:}$  The graph shows in comparison

  • on the left,  a Gaussian random signal  $x_1(t)$  and
  • on the right,  an uniformly distributed signal  $x_2(t)$ 


with equal mean  $m_1$  and equal standard deviation  $σ$.

It can be seen that with the Gaussian distribution,  in contrast to the uniform distribution

  • any large and any small amplitude values can occur,
  • even if they are improbable compared to the mean amplitude range.

.


Probability density function – Cumulative density function


$\text{Definition:}$  The  »probability density function«  $\rm (PDF)$  of a Gaussian distributed random variable  $x$  is generally:

PDF and CDF of a Gaussian distributed random variable

$$\hspace{0.4cm}f_x(x) = \frac{1}{\sqrt{2\pi}\cdot\sigma}\cdot {\rm e}^{-(x-m_1)^2 /(2\sigma^2) }.$$ The parameters of such a Gaussian PDF are

  • $m_1$  ("mean"  or  "DC component"),
  • $σ$  ("standard deviation").


If  $m_1 = 0$  and  $σ = 1$, it is often referred to as the  "normal distribution".

From the left plot,  it can be seen that the standard deviation  $σ$  can also be determined graphically as the distance from the maximum value and the inflection point from the bell-shaped PDF  $f_{x}(x)$.


On the right the  »cumulative distribution function«  $F_{x}(r)$  of the Gaussian distributed random variable is shown.  It can be seen:

  • The CDF is point symmetric about the mean  $m_1$.
  • By integration over the Gaussian PDF one obtains:
$$F_x(r)= \phi(\frac{\it r-m_{\rm 1}}{\sigma})\hspace{0.5cm}\rm with\hspace{0.5cm}\rm \phi (\it x\rm ) = \frac{\rm 1}{\sqrt{\rm 2\it \pi}}\int_{-\rm\infty}^{\it x} \rm e^{\it -u^{\rm 2}/\rm 2}\,\, d \it u.$$
  • One calls  $ϕ(x)$  the  »Gaussian error integral«.  This function cannot be calculated analytically and must therefore be taken from tables.
  • $ϕ(x)$  can be approximated by a Taylor series or calculated from the function  ${\rm erfc}(x)$  often available in program libraries.


The topic of this chapter is illustrated with examples in the  (German language)  learning video  "Der AWGN-Kanal"   $\Rightarrow$   "The AWGN channel",  especially in the second part.

Exceedance probability


In the study of digital transmission systems,  it is often necessary to determine the probability that a  (zero mean)  Gaussian distributed random variable  $x$  with variance  $σ^2$  exceeds a given value  $x_0$.

$\text{Definition:}$  For this  »exceedance probability«  holds:

Complementary Gaussian error integral  ${\rm Q}(x)$
$${\rm Pr}(x > x_{\rm 0})={\rm Q}({x_{\rm 0} }/{\sigma}).$$
  • Here,  ${\rm Q}(x) = 1 - {\rm ϕ}(x)$  denotes the complementary function to  $ {\rm ϕ}(x)$.  This function is called the  »complementary Gaussian error integral«  and the following calculation rule applies:
$$\rm Q (\it x\rm ) = \rm 1- \phi (\it x)$$
$$\Rightarrow \hspace{0.3cm}\rm Q (\it x\rm ) = \frac{\rm 1}{\sqrt{\rm 2\pi} }\int_{\it x}^{\rm +\infty}\hspace{-0.4cm}\rm e^{\it - u^{\rm 2}/\hspace{0.05cm} \rm 2}\,d \it u .$$
  • ${\rm Q}(x)$  like  ${\rm \phi}(x)$  is not analytically solvable and must be taken from tables.
  • In libraries one often finds the function  ${\rm erfc}(x)$  related to  ${\rm Q}(x)$  as follows:
$${\rm Q}(x)={\rm 1}/\hspace{0.05cm}{\rm 2}\cdot \rm erfc({\it x}/{\sqrt{\rm 2} }).$$


Especially for larger  $x$-values  (i.e.,  for small error probabilities)  the bounds given below provide useful estimates for the complementary Gaussian error integral:

  • »Upper bound«  (German:  "obere Schranke"   ⇒   subscript:  "o"):
$${\rm Q_o}(x ) =\frac{1}{\sqrt{2\pi}\cdot x}\cdot {\rm e}^{- x^{2}/2}\ \ge\ {\rm Q}(x). $$
  • »Lower bound«  (German:  "untere Schranke"   ⇒   subscript:  "u"):
$${\rm Q_u}(x )=\frac{1-{1}/{x^{2}}}{\sqrt{2\pi}\cdot x}\cdot {\rm e}^{-x^{2}/2} ={\rm Q_o}(x) \cdot \left(1-{1}/{x^{2}}\right)\ \le\ {\rm Q}(x) .$$
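As a quick numerical check,  the following Python sketch  (function names chosen here for illustration)  evaluates  ${\rm Q}(x)$  via the library routine  ${\rm erfc}$  together with the two bounds:

```python
import math

def Q(x: float) -> float:
    """Complementary Gaussian error integral, Q(x) = 1/2 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_upper(x: float) -> float:
    """Upper bound Q_o(x)."""
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

def Q_lower(x: float) -> float:
    """Lower bound Q_u(x) = Q_o(x) * (1 - 1/x^2)."""
    return Q_upper(x) * (1.0 - 1.0 / (x * x))

# the sandwich Q_u(x) <= Q(x) <= Q_o(x) tightens for growing x
for xv in (1.0, 2.0, 4.0):
    print(f"x={xv}:  {Q_lower(xv):.3e} <= {Q(xv):.3e} <= {Q_upper(xv):.3e}")
```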

$\text{Conclusion:}$  The graph above shows the function  ${\rm Q}(x)$  in logarithmic representation,  for linear abscissa values  (upper  $x$–axis)  and logarithmic abscissa values  (lower axis).

  • The upper bound  ${\rm Q_o}(x )$  (red circles)  is useful from about  $x = 1$  on,  the lower bound  ${\rm Q_u}(x )$  (green diamonds)  from  $x ≈ 2$  on.
  • For  $x ≥ 4$,  both bounds are indistinguishable from the actual curve  ${\rm Q}(x)$  within the drawing accuracy of the plot.


The interactive HTML5/JavaScript applet  "Complementary Gaussian Error Functions"  provides

  • the numerical values of the functions  ${\rm Q}(x)$  and  $1/2 \cdot {\rm erfc}(x)$ 
  • including the two bounds given here.


Central moments and moments


The characteristics of the Gaussian distribution have the following properties:

  • The central moments  $\mu_k$  $($identical to the moments  $m_k$  of the equivalent zero mean random variable  $x - m_1)$  are identically zero for odd values of  $k$,  due to the symmetry of the PDF;  this holds for the Gaussian distribution just as for the uniform distribution.
  • The second central moment is by definition equal to  $\mu_2 = σ^2$.
  • All higher central moments with even values of  $k$  can be expressed in terms of the variance  $σ^2$  for the Gaussian PDF  (note:  exclusively for this distribution):
$$\mu_{k}=(k- 1)\cdot (k- 3) \ \cdots \ 3\cdot 1\cdot\sigma^k\hspace{0.2cm}\rm (if\hspace{0.1cm}\it k\hspace{0.1cm}\rm even).$$
  • From this,  the noncentered moments  $m_k$  can be determined as follows:
$$m_k = \sum\limits_{\kappa= 0}^{k} \left( \begin{array}{*{2}{c}} k \\ \kappa \\ \end{array} \right)\cdot \mu_\kappa \cdot {m_1}^{k-\kappa}.$$
This last equation holds in general,  i.e.,  for arbitrary distributions.
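Both formulas can be checked numerically.  The following Python sketch  (with illustrative helper names  `mu`  and  `m`,  not from the article)  uses the equivalent closed form  $k!/(2^{k/2}\,(k/2)!)$  for the double factorial  $(k-1)(k-3)\cdots 3\cdot 1$:

```python
from math import comb, factorial

def mu(k: int, sigma: float) -> float:
    """Central moment mu_k of the Gaussian: (k-1)(k-3)...3*1 * sigma^k for even k."""
    if k % 2 == 1:
        return 0.0                      # odd central moments vanish by symmetry
    # double factorial (k-1)!! rewritten as k! / (2^(k/2) * (k/2)!); mu(0) = 1
    return factorial(k) / (2 ** (k // 2) * factorial(k // 2)) * sigma ** k

def m(k: int, m1: float, sigma: float) -> float:
    """Noncentered moment m_k from the binomial relation (holds for any distribution)."""
    return sum(comb(k, kap) * mu(kap, sigma) * m1 ** (k - kap)
               for kap in range(k + 1))

print(mu(4, 0.5))   # → 0.1875, i.e. 3/16
```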

$\text{Conclusion:}$ 

  • From the above equation it follows directly that  $\mu_4 = 3 \cdot σ^4$,  and from this the kurtosis  $K = \mu_4/σ^4 = 3$.
  • For this reason,  one often refers to  $K-3$  as the  "Gaussian deviation"  or as the  "excess".
  • If the Gaussian deviation is negative,  the PDF decays faster than the Gaussian distribution.  For example,  for the uniform distribution the Gaussian deviation always has the numerical value  $1.8 - 3 = -1.2$.


$\text{Example 2:}$  The first central moments of a Gaussian random variable with standard deviation  $σ = 1/2$   ⇒   variance  $σ^2 = 1/4$  are:

$$\mu_2 = \frac{1}{4}, \hspace{0.4cm}\mu_4 = \frac{3}{16},\hspace{0.4cm}\mu_6 = \frac{15}{64}, \hspace{0.4cm}\mu_8 = \frac{105}{256}.$$

All central moments with odd index are identically zero.


The interactive HTML5/JavaScript applet  "PDF, CDF and moments of special distributions"  gives,  among other things,  the characteristics of the Gaussian distribution.


Gaussian generation by addition method


This simple procedure for the computational generation of a Gaussian random variable,  based on the  $\text{central limit theorem of statistics}$,  is outlined here only briefly:


(1)   One assumes  $($between  $0$  and  $1)$  uniformly distributed and statistically independent random variables  $u_i$    ⇒   mean  $m_u = 1/2$,  variance  $\sigma_u^2 = 1/12.$

(2)   One now forms the sum over  $I$  summands,  where  $I$  must be chosen sufficiently large:

$$s=\sum\limits_{i=1}^{I}u_i.$$

(3)   According to the central limit theorem,  the random variable  $s$  is Gaussian distributed with good approximation if  $I$  is sufficiently large.   In contrast,  for  $I =2$  for example,  only an amplitude-limited triangle PDF  $($with values between  $0$  and  $2)$  results   ⇒   convolution of two rectangles.

(4)   Thus,  the mean of the random variable  $s$  is  $m_s = I/2$.  Since the uniformly distributed random variables  $u_i$  were assumed to be statistically independent of each other,  their variances can also be added,  yielding for the variance of  $s$  the value  $\sigma_s^2 = I/12$.

(5)   If a Gaussian distributed random variable  $x$  with different mean  $m_x$  and different standard deviation  $σ_x$  is to be generated,  the following linear transformation must still be performed:

$$x=m_x+\frac{\sigma_x}{\sqrt{I/12}}\cdot \bigg[\big (\sum\limits_{i=1}^{I}u_i\big )-{I}/{2}\bigg].$$

(6)   With the parameter  $I =12$  the generation rule simplifies,  which can be exploited especially in computationally time-critical applications,  e.g. in a real-time simulation:

$$x=m_x+\sigma_x\cdot \left [\big(\sum\limits_{i=1}^{12} u_i \big )-6 \right ].$$
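Steps  $(1)$  to  $(6)$  can be sketched in Python as follows;  the sample size and the seed are arbitrary choices for this demonstration:

```python
import random

def gauss_addition(m_x: float, sigma_x: float, I: int = 12) -> float:
    """One Gaussian sample via the addition method."""
    s = sum(random.random() for _ in range(I))            # step (2): mean I/2, variance I/12
    return m_x + sigma_x * (s - I / 2) / (I / 12) ** 0.5  # step (5): linear transformation

random.seed(1)
samples = [gauss_addition(0.0, 1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # close to 0 and 1, respectively
# all samples lie within +/- sqrt(3*I) * sigma_x = +/- 6, the stated value range
```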

$\text{Conclusion:}$  However,  the Gaussian random variable approximated by the addition method  $($with parameter  $I)$  yields values only in a limited range around the mean  $m_x$.  In general:

$$m_x-\sqrt{3 I}\cdot \sigma_x\le x \le m_x+\sqrt{3 I}\cdot \sigma_x.$$
  • The error with respect to the theoretical Gaussian distribution is largest at these limits and becomes smaller for increasing  $I$.
  • The topic of this chapter is illustrated with examples in the  (German language)  learning video 
        "Prinzip der Additionsmethode"   $\Rightarrow$   "Principle of the addition method".


Gaussian generation with the Box/Muller method


In this method,  two statistically independent Gaussian distributed random variables  $x$  and  $y$  are generated  (approximately)  from two random variables  $u$  and  $v$,  each uniformly distributed between  $0$  and  $1$  and statistically independent,  by  $\text{nonlinear transformation}$:

$$x=m_x+\sigma_{x}\cdot \cos(2 \pi u)\cdot\sqrt{-2\cdot \ln(v)},$$
$$y=m_y+\sigma_{y}\cdot \sin(2 \pi u)\cdot\sqrt{-2\cdot \ln(v)}.$$
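The two generation rules above can be sketched in Python as follows;  the guard against  $v = 0$  (where the logarithm is undefined)  is an addition of this sketch,  not part of the formulas:

```python
import math
import random

def box_muller(m_x=0.0, sigma_x=1.0, m_y=0.0, sigma_y=1.0):
    """One call returns a tuple (x, y) of two independent Gaussian samples."""
    u = random.random()
    v = random.random()
    while v == 0.0:                      # guard: ln(0) is undefined
        v = random.random()
    r = math.sqrt(-2.0 * math.log(v))    # common root expression of both rules
    x = m_x + sigma_x * math.cos(2.0 * math.pi * u) * r
    y = m_y + sigma_y * math.sin(2.0 * math.pi * u) * r
    return x, y

random.seed(2)
xs = [box_muller()[0] for _ in range(50_000)]
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
print(round(mean, 2), round(var, 2))     # close to 0 and 1
```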

The Box and Muller method  – hereafter abbreviated to  "BM"  – can be characterized as follows:

  • The theoretical background for the validity of above generation rules is based on the regularities for  $\text{two-dimensional random variables}$.
  • The above equations yield two Gaussian values without statistical bindings per evaluation.  This fact can be used to reduce simulation time by generating a tuple  $(x, \ y)$  of Gaussian values at each function call.
  • A comparison of the computation times shows that  – with the best possible implementation in each case  – the BM method is superior to the addition method with  $I =12$  by  (approximately)  a factor of  $3$.
  • The range of values is less limited in the BM method than in the addition method,  so that even small probabilities are simulated more accurately.  But even with the BM method,  it is not possible to simulate arbitrarily small error probabilities.


$\text{Example 3:}$  For the following estimation,  we assume the parameters  $m_x = m_y = 0$  and  $σ_x = σ_y = 1$.

  • For a 32-bit computer,  the smallest positive float number that can be represented is  $2^{-31} ≈ 0.466 \cdot 10^{-9}$.  Thus,  the root expression in the generation rule of the BM method cannot become larger than approximately  $6.55$,  and even this value is extremely improbable.
  • Since both the cosine and sine functions are limited in magnitude to  $1$,  this would also be the maximum possible value for  $x$  and  $y$.


However,  a simulation documented in  [ES96][1]  over  $10^{9}$  samples has shown that the BM method approximates the Q function very well only down to error probabilities of  $10^{-5}$;  below that the curve breaks off steeply.

  • The maximum occurring value of the root expression was not  $6.55$,  but,  due to the random values  $u$  and  $v$  actually drawn,  only about  $4.6$,  which explains the abrupt drop from about  $10^{-5}$  on.
  • Of course,  this method works much better with 64 bit arithmetic operations.

Gaussian generation with the "Tabulated Inversion" method


In this method developed by Peter Eck and Günter Söder  [ES96][1]  the following procedure is followed:

(1)   The Gaussian PDF is divided into  $J$  intervals with equal area contents  – and correspondingly different widths –  where  $J$  represents a power of two.

(2)   A characteristic value  $C_j$  is assigned to the interval with index  $j$.  Thus,  for each new function value,  it is sufficient to call only one integer number generator,  which will generate the integer values  $j = ±1, \hspace{0.1cm}\text{...} \hspace{0.1cm}, ±J/2$  with equal probability and thus selects one of the  $C_j$.

(3)   If  $J$  is chosen sufficiently large,  e.g.  $J = 2^{15} = 32\hspace{0.03cm}768$,  then the  $C_j$  can be set equal to the interval averages for simplicity.  These values need to be calculated only once and can be stored in a file before the actual simulation.

(4)   The boundary regions,  on the other hand,  are problematic and must be treated separately.  For these,  a float value is determined by  $\text{nonlinear transformation}$  according to the tails of the Gaussian PDF.
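Steps  $(1)$  to  $(3)$  can be sketched in Python with the  `statistics.NormalDist`  inverse CDF,  here with  $J = 16$.  Note:  the unbounded edge interval is represented crudely by its median in this sketch,  which is an assumption for illustration and not the separate edge treatment of  [ES96]:

```python
import random
from statistics import NormalDist

J = 16                        # number of equal-area intervals (a power of two)
nd = NormalDist()             # standard Gaussian, provides the inverse CDF

# Step (1): boundaries I_0 ... I_{J/2 - 1} of the positive half-axis; each
# interval carries the same probability 1/J.
I_b = [nd.inv_cdf(0.5 + j / J) for j in range(J // 2)]   # I_b[0] = 0

# Step (3): characteristic values C_j = interval midpoints; the unbounded
# outermost interval is represented here simply by its median (a crude
# stand-in for the edge treatment of step (4)).
C = [(I_b[j] + I_b[j + 1]) / 2 for j in range(J // 2 - 1)]
C.append(nd.inv_cdf(1 - 0.5 / J))

def tabulated_inversion() -> float:
    """Step (2): one integer draw plus a sign bit selects a value C_j."""
    k = random.randrange(J // 2)                 # 0 ... J/2 - 1, equiprobable
    sign = 1 if random.getrandbits(1) else -1
    return sign * C[k]

random.seed(3)
print(round(tabulated_inversion(), 2))
# C[3] reproduces C_4 = (0.49 + 0.67)/2 ≈ 0.58
```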


To illustrate the "Tabulated Inversion" procedure

$\text{Example 4:}$  The sketch shows the PDF splitting for  $J = 16$  by the boundaries  $I_{-7}$, ... , $ I_7$.

  • These interval boundaries were chosen so that each interval has the same area  $p_j = 1/J = 1/16$. 
  • The characteristic value  $C_j$  of each interval lies exactly midway between  $I_{j-1}$  and  $I_j$.


One now generates a uniformly distributed discrete random variable  $k$  $($with values between  $1$  and  $8)$  and additionally a sign bit.

  • For example, if the sign bit is negative and  $k =4$  the following value is output:
$$C_{-4} = -C_4 =-(0.49+0.67)/2 =-0.58.$$
  • For  $k =8$  the special case occurs that one must determine the random value  $C_8$  by nonlinear transformation corresponding to the tails of the Gaussian curve.


$\text{Conclusion:}$  The  "Tabulated Inversion"  properties can be summarized as follows:

  • With  $J = 2^{15}$,  this method is faster than the BM method by about a factor of  $8$,  with comparable simulation accuracy.
  • Disadvantageous is that now the exceedance probability  ${\rm Pr}(x > r)$  is no longer continuous in the inner regions, 
    but a staircase curve results due to the discretization.  This shortcoming can be compensated by a larger  $J$.
  • The special treatment of the edges makes the method suitable for very small error probabilities. 

Exercises for the chapter


Exercise 3.6: Noisy DC Signal

Exercise 3.6Z: Examination Correction

Exercise 3.7: Bit Error Rate (BER)

Exercise 3.7Z: Error Performance


References

  1. Eck, P.; Söder, G.:  Tabulated Inversion, a Fast Method for White Gaussian Noise Simulation.  In:  AEÜ Int. J. Electron. Commun. 50 (1996), pp. 41-48.