{{Header
|Untermenü=Random Variables with Statistical Dependence
|Vorherige Seite=Auto-Correlation Function
|Nächste Seite=Cross-Correlation Function and Cross Power Density
}}
==Wiener-Khintchine Theorem==
<br>
In the following we restrict ourselves to ergodic processes.&nbsp; As was shown in the&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Ergodic_random_processes|"last chapter"]],&nbsp; the following statements then hold:
*Each individual pattern function&nbsp; $x_i(t)$&nbsp; is representative of the entire random process&nbsp; $\{x_i(t)\}$.
*All time averages are thus identical to the corresponding ensemble averages.
*The auto-correlation function,&nbsp; which is generally affected by the two time parameters&nbsp; $t_1$&nbsp; and&nbsp; $t_2$,&nbsp; now depends only on the time difference&nbsp; $τ = t_2 - t_1$:
:$$\varphi_x(t_1,t_2)={\rm E}\big[x(t_{\rm 1})\cdot x(t_{\rm 2})\big] = \varphi_x(\tau)= \lim_{T_{\rm M}\to\infty}\frac{1}{T_{\rm M}}\cdot\int^{+T_{\rm M}/2}_{-T_{\rm M}/2}x(t)\cdot x(t+\tau)\,{\rm d}t.$$
  
The auto-correlation function provides quantitative information about the&nbsp; (linear)&nbsp; statistical bindings within the ergodic process&nbsp; $\{x_i(t)\}$&nbsp; in the time domain.&nbsp; The equivalent descriptor in the frequency domain is the&nbsp; "power-spectral density",&nbsp; often also referred to as the&nbsp; "power density spectrum".
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''power-spectral density'''&laquo;&nbsp; $\rm (PSD)$&nbsp; of an ergodic random process&nbsp; $\{x_i(t)\}$&nbsp; is the Fourier transform of the auto-correlation function&nbsp; $\rm (ACF)$:
:$${\it \Phi}_x(f)=\int^{+\infty}_{-\infty}\varphi_x(\tau) \cdot {\rm e}^{- {\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} 2\pi\hspace{0.05cm}\cdot \hspace{0.05cm} f \hspace{0.05cm}\cdot \hspace{0.05cm}\tau}\hspace{0.1cm} {\rm d} \tau. $$
This functional relationship is called the&nbsp; "Theorem of&nbsp; [https://en.wikipedia.org/wiki/Norbert_Wiener $\text{Wiener}$]&nbsp; and&nbsp; [https://en.wikipedia.org/wiki/Aleksandr_Khinchin $\text{Khinchin}$]". }}
  
  
Similarly,&nbsp; the auto-correlation function can be computed as the inverse Fourier transform of the power-spectral density&nbsp; (see section&nbsp; [[Signal_Representation/Fourier_Transform_and_its_Inverse#The_second_Fourier_integral|"Inverse Fourier transform"]]&nbsp; in the book&nbsp; "Signal Representation"):
:$$ \varphi_x(\tau)=\int^{+\infty}_{-\infty} {\it \Phi}_x(f) \cdot {\rm e}^{+ {\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} 2\pi\hspace{0.05cm}\cdot \hspace{0.05cm} f \hspace{0.05cm}\cdot \hspace{0.05cm}\tau}\hspace{0.1cm} {\rm d} f.$$
*The two equations are directly applicable only if the random process contains neither a DC component nor periodic components.
*Otherwise,&nbsp; one must proceed according to the specifications given in section&nbsp; [[Theory_of_Stochastic_Signals/Power-Spectral_Density#Power-spectral_density_with_DC_component|"Power-spectral density with DC component"]].
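
The Fourier pair of ACF and PSD can also be checked numerically.&nbsp; The following Python sketch&nbsp; (an illustration only, not part of LNTwww;&nbsp; the moving-average noise model,&nbsp; the sampling interval&nbsp; $T_{\rm A}$&nbsp; and all variable names are assumptions)&nbsp; estimates the time-averaged ACF of one DC-free pattern signal and takes its discrete Fourier transform as PSD estimate;&nbsp; integrating the PSD then reproduces the power&nbsp; $\varphi_x(0)$.
<syntaxhighlight lang="python">
# Minimal numerical sketch of the Wiener-Khintchine theorem (assumptions:
# discrete-time approximation with sampling interval T_A; the moving-average
# noise signal "x" and all variable names are illustrative, not from LNTwww).
import numpy as np

rng = np.random.default_rng(seed=1)
T_A = 1.0e-6                          # sampling interval in seconds
N   = 200_000                         # length of one pattern signal
x   = np.convolve(rng.standard_normal(N), np.ones(10) / 10, mode="same")
x  -= x.mean()                        # DC-free, as required by the equations above

# Time-averaged ACF  phi_x(k*T_A)  for lags 0 ... K-1  (biased estimator)
K   = 100
acf = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(K)])

# Even extension  phi_x(-(K-1)*T_A) ... phi_x(+(K-1)*T_A)  and Fourier transform
acf_even = np.concatenate([acf[:0:-1], acf])          # zero lag in the center
psd  = T_A * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(acf_even))).real
freq = np.fft.fftshift(np.fft.fftfreq(acf_even.size, d=T_A))

print(f"phi_x(0)  (signal power)         : {acf[0]:.5f}")
print(f"integral over the estimated PSD  : {np.sum(psd) * (freq[1] - freq[0]):.5f}")
print(f"most negative PSD value          : {psd.min():.2e}   (estimation noise only)")
</syntaxhighlight>
Small negative excursions of the estimate are caused by the finite number of samples and lags;&nbsp; the true PSD is non-negative,&nbsp; as emphasized in the next section.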
 
  
==Physical interpretation and measurement==
<br>
The following chart shows an arrangement for the&nbsp; (approximate)&nbsp; metrological determination of the power-spectral density&nbsp; ${\it \Phi}_x(f)$.&nbsp; The following should be noted in this regard:
*The random signal&nbsp; $x(t)$&nbsp; is applied to a&nbsp; (preferably)&nbsp; rectangular and&nbsp; (preferably)&nbsp; narrowband filter with center frequency&nbsp; $f$&nbsp; and bandwidth&nbsp; $Δf$,&nbsp; where&nbsp; $Δf$&nbsp; must be chosen sufficiently small according to the desired frequency resolution.
*The corresponding output signal&nbsp; $x_f(t)$&nbsp; is squared and then the mean value is formed over a sufficiently long measurement period&nbsp; $T_{\rm M}$.&nbsp; This gives the&nbsp; "power of&nbsp; $x_f(t)$"&nbsp; or the&nbsp; "power components of&nbsp; $x(t)$&nbsp; in the spectral range from&nbsp; $f - Δf/2$&nbsp; to&nbsp; $f + Δf/2$":
[[File: P_ID387__Sto_T_4_5_S2_neu.png |right|frame| To measure the power-spectral density]]
:$$P_{x_f} =\overline{x_f(t)^2}=\frac{1}{T_{\rm M}}\cdot\int^{T_{\rm M}}_{0}x_f^2(t) \hspace{0.1cm}\rm d \it t.$$
*Division by&nbsp; $Δf$&nbsp; leads to the power-spectral density&nbsp; $\rm (PSD)$:
:$${{\it \Phi}_{x \rm +}}(f)  =\frac{P_{x_f}}{{\rm \Delta} f} \hspace {0.5cm} \Rightarrow  \hspace {0.5cm} {\it \Phi}_{x}(f) = \frac{P_{x_f}}{{\rm 2 \cdot \Delta} f}.$$
*${\it \Phi}_{x+}(f) = 2 \cdot {\it \Phi}_x(f)$&nbsp; denotes the one-sided PSD defined only for positive frequencies. &nbsp; For&nbsp; $f<0$ &nbsp; &rArr; &nbsp; ${\it \Phi}_{x+}(f) = 0$.&nbsp; In contrast,&nbsp; for the commonly used two-sided power-spectral density:
:$${\it \Phi}_x(-f) = {\it \Phi}_x(f).$$
*While the power&nbsp; $P_{x_f}$&nbsp; tends to zero as the bandwidth&nbsp; $Δf$&nbsp; becomes smaller,&nbsp; the power-spectral density remains nearly constant once&nbsp; $Δf$&nbsp; is sufficiently small.&nbsp; For the exact determination of&nbsp; ${\it \Phi}_x(f)$&nbsp; two limiting processes are necessary:
:$${{\it \Phi}_x(f)} = \lim_{{\rm \Delta}f\to 0} \hspace{0.2cm} \lim_{T_{\rm M}\to\infty}\hspace{0.2cm} \frac{1}{{\rm 2 \cdot \Delta}f\cdot T_{\rm M}}\cdot\int^{T_{\rm M}}_{0}x_f^2(t) \hspace{0.1cm} \rm d \it t.$$
  
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;
*From this physical interpretation it further follows that the power-spectral density is always real and can never become negative. &nbsp;
*The total power of the random signal&nbsp; $x(t)$&nbsp; is then obtained by integration over all spectral components:
:$$P_x = \int^{\infty}_{0}{\it \Phi}_{x \rm +}(f) \hspace{0.1cm}{\rm d} f = \int^{+\infty}_{-\infty}{\it \Phi}_x(f)\hspace{0.1cm} {\rm d} f .$$}}
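
The measurement arrangement described above can be mimicked in a short simulation.&nbsp; The following sketch&nbsp; (illustrative assumptions:&nbsp; an ideal rectangular band-pass realized by FFT masking,&nbsp; and discrete-time white Gaussian noise with variance&nbsp; $\sigma_x^2$&nbsp; as test process,&nbsp; whose two-sided PSD within&nbsp; $\vert f \vert \le 1/(2T_{\rm A})$&nbsp; equals&nbsp; $\sigma_x^2 \cdot T_{\rm A}$)&nbsp; filters around the center frequency,&nbsp; squares,&nbsp; averages over&nbsp; $T_{\rm M}$&nbsp; and divides by&nbsp; $2 \cdot Δf$.
<syntaxhighlight lang="python">
# Simulation sketch of the PSD measurement described above (assumptions: ideal
# rectangular band-pass via FFT masking; white Gaussian noise with variance
# sigma2 as test process, so its two-sided PSD is sigma2 * T_A).
import numpy as np

rng    = np.random.default_rng(seed=2)
T_A    = 1.0e-6                       # sampling interval  ->  f in [-500 kHz, +500 kHz]
M      = 2**20                        # number of samples  ->  T_M = M * T_A
sigma2 = 2.0
x      = np.sqrt(sigma2) * rng.standard_normal(M)

f0, df = 100e3, 5e3                   # center frequency and bandwidth of the filter
freqs  = np.fft.fftfreq(M, d=T_A)
mask   = (np.abs(np.abs(freqs) - f0) <= df / 2)      # pass-band incl. mirror band
x_f    = np.fft.ifft(np.fft.fft(x) * mask).real      # narrowband output signal

P_xf    = np.mean(x_f**2)             # power of x_f(t), averaged over T_M
psd_est = P_xf / (2 * df)             # two-sided PSD estimate at f = f0
print(f"estimated  Phi_x(f0) : {psd_est:.3e}")
print(f"expected   Phi_x(f0) : {sigma2 * T_A:.3e}")
</syntaxhighlight>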
  
==Reciprocity law of ACF duration and PSD bandwidth==
<br>
All the&nbsp; [[Signal_Representation/Fourier_Transform_Laws|$\text{Fourier transform theorems}$]]&nbsp; derived in the book&nbsp; "Signal Representation"&nbsp; for deterministic signals can also be applied to
[[File:P_ID390__Sto_T_4_5_S3_Ganz_neu.png |frame| On the&nbsp; "Reciprocity Theorem"&nbsp; of ACF and PSD]]
*the&nbsp; auto-correlation function&nbsp; $\rm (ACF)$,&nbsp; and
*the&nbsp; power-spectral density&nbsp; $\rm (PSD)$.
<br>However,&nbsp; not all laws yield meaningful results due to the specific properties
*of the auto-correlation function&nbsp; (always real and even),
*and of the power-spectral density&nbsp; (always real,&nbsp; even,&nbsp; and non-negative).


We now consider,&nbsp; as in the section&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Interpretation_of_the_auto-correlation_function|"Interpretation of the auto-correlation function"]],&nbsp; two different ergodic random processes&nbsp; $\{x_i(t)\}$&nbsp; and&nbsp; $\{y_i(t)\}$&nbsp; based on
#two pattern signals&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$ &nbsp; ⇒ &nbsp; upper sketch,
#two auto-correlation functions&nbsp; $φ_x(τ)$&nbsp; and&nbsp; $φ_y(τ)$ &nbsp; ⇒ &nbsp; middle sketch,
#two power-spectral densities&nbsp; ${\it \Phi}_x(f)$&nbsp; and&nbsp; ${\it \Phi}_y(f)$ &nbsp; ⇒ &nbsp; bottom sketch.
  
  
Based on these exemplary graphs,&nbsp; the following statements can be made:
*The areas under the PSD curves are equal &nbsp; ⇒ &nbsp; the processes&nbsp; $\{x_i(t)\}$&nbsp; and&nbsp; $\{y_i(t)\}$&nbsp; have the same power:
:$${\varphi_x({\rm 0})}\hspace{0.05cm} =\hspace{0.05cm} \int^{+\infty}_{-\infty}{{\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f \hspace{0.2cm} = \hspace{0.2cm}{\varphi_y({\rm 0})} = \int^{+\infty}_{-\infty}{{\it \Phi}_y(f)} \hspace{0.1cm} {\rm d} f .$$
*The&nbsp; [[Signal_Representation/Fourier_Transform_Theorems#Reciprocity_Theorem_of_time_duration_and_bandwidth|$\text{Reciprocity Theorem of time duration and bandwidth}$]],&nbsp; well known from classical&nbsp; (deterministic)&nbsp; system theory,&nbsp; also applies here: &nbsp; '''A narrow ACF corresponds to a broad PSD and vice versa'''.
*As a descriptive quantity,&nbsp; we use here the&nbsp; &raquo;'''equivalent PSD bandwidth'''&laquo; &nbsp; $∇f$&nbsp; $($pronounced&nbsp; "nabla-f"$)$,&nbsp; <br>defined analogously to the equivalent ACF duration&nbsp;  $∇τ$&nbsp; in the chapter&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Interpretation_of_the_auto-correlation_function|"Interpretation of the auto-correlation function"]]:
:$${{\rm \nabla} f_x} = \frac {1}{{\it \Phi}_x(f = {\rm 0})} \cdot \int^{+\infty}_{-\infty}{{\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f, $$
:$${ {\rm \nabla} \tau_x} = \frac {\rm 1}{ \varphi_x(\tau = \rm 0)} \cdot \int^{+\infty}_{-\infty}{\varphi_x(\tau )} \hspace{0.1cm} {\rm d} \tau.$$
*With these definitions,&nbsp; the following basic relationship holds:
:$${{\rm \nabla} \tau_x} \cdot {{\rm \nabla} f_x} = 1\hspace{1cm}{\rm resp.}\hspace{1cm}
{{\rm \nabla} \tau_y} \cdot {{\rm \nabla} f_y} = 1.$$
  
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; We start from the graph at the top of this section:
*The characteristics of the higher-frequency signal&nbsp; $x(t)$&nbsp; are&nbsp; $∇τ_x = 0.33\hspace{0.08cm} \rm &micro;s$&nbsp; &nbsp;and&nbsp; $∇f_x = 3 \hspace{0.08cm} \rm MHz$.
*The equivalent ACF duration of the signal&nbsp; $y(t)$&nbsp; is three times as large: &nbsp; $∇τ_y = 1 \hspace{0.08cm} \rm &micro;s$.
*The equivalent PSD bandwidth of the signal&nbsp; $y(t)$&nbsp; is thus only&nbsp; $∇f_y = ∇f_x/3 = 1 \hspace{0.08cm} \rm MHz$. }}
  
{{BlaueBox|TEXT=  
$\text{General:}$&nbsp;
'''The product of equivalent ACF duration&nbsp; ${ {\rm \nabla} \tau_x}$&nbsp; and equivalent PSD bandwidth&nbsp; $ { {\rm \nabla} f_x}$&nbsp; is always "one"''':
:$${ {\rm \nabla} \tau_x} \cdot { {\rm \nabla} f_x} = 1.$$}}
  
  
{{BlaueBox|TEXT=
$\text{Proof:}$&nbsp; According to the above definitions:
:$${ {\rm \nabla} \tau_x} = \frac {\rm 1}{ \varphi_x(\tau = \rm 0)} \cdot \int^{+\infty}_{-\infty}{ \varphi_x(\tau )} \hspace{0.1cm} {\rm d} \tau = \frac { {\it \Phi}_x(f = {\rm 0)} }{ \varphi_x(\tau = \rm 0)},$$
:$${ {\rm \nabla} f_x} = \frac {1}{ {\it \Phi}_x(f = {\rm0})} \cdot \int^{+\infty}_{-\infty}{ {\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f = \frac {\varphi_x(\tau = {\rm 0)} }{ {\it \Phi}_x(f = \rm 0)}.$$

Thus,&nbsp; the product is equal to&nbsp; $1$.
<div align="right">'''q.e.d.'''</div> }}
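
For a concrete check of the reciprocity law,&nbsp; the following sketch&nbsp; (a numerical illustration under the assumption of a Gaussian ACF with&nbsp; $∇τ = 1 \hspace{0.08cm} \rm &micro;s$;&nbsp; the integration grids and limits are chosen for convenience)&nbsp; evaluates both definitions by numerical integration and confirms&nbsp; $∇τ_x \cdot ∇f_x = 1$.
<syntaxhighlight lang="python">
# Numerical check of the reciprocity law (assumption: Gaussian ACF with
# equivalent duration nabla_tau = 1 microsecond; grids are wide/fine enough).
import numpy as np

nabla_tau = 1.0e-6                                     # prescribed equivalent ACF duration
tau  = np.linspace(-10e-6, 10e-6, 4001)                # tau grid in seconds
dtau = tau[1] - tau[0]
acf  = np.exp(-np.pi * (tau / nabla_tau)**2)           # Gaussian ACF with phi_x(0) = 1

f  = np.linspace(-10e6, 10e6, 4001)                    # frequency grid in Hz
df = f[1] - f[0]
# PSD as Fourier transform of the (real, even) ACF, by direct numerical integration
psd = np.array([np.sum(acf * np.cos(2 * np.pi * fi * tau)) * dtau for fi in f])

i_tau0, i_f0  = tau.size // 2, f.size // 2             # indices of tau = 0 and f = 0
nabla_tau_num = np.sum(acf) * dtau / acf[i_tau0]       # integral over ACF / phi_x(0)
nabla_f_num   = np.sum(psd) * df   / psd[i_f0]         # integral over PSD / Phi_x(0)
print(f"equivalent ACF duration  : {nabla_tau_num * 1e6:.3f} microseconds")
print(f"equivalent PSD bandwidth : {nabla_f_num / 1e6:.3f} MHz")
print(f"product                  : {nabla_tau_num * nabla_f_num:.4f}")   # -> 1
</syntaxhighlight>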
  
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; 
The so-called&nbsp; "white noise"&nbsp; represents a limiting case of the reciprocity theorem:
*This includes all spectral components&nbsp; (up to infinity).
*The equivalent PSD bandwidth&nbsp; $∇f$&nbsp; is infinite.
  
The theorem given here states that the equivalent ACF duration must then be&nbsp; $∇τ = 0$ &nbsp; &rArr; &nbsp; &raquo;'''white noise has a Dirac-shaped ACF'''&laquo;.

For more on this topic, see the three-part&nbsp; (German language)&nbsp; learning video&nbsp; [[Der_AWGN-Kanal_(Lernvideo)|"The AWGN channel"]],&nbsp; especially the second part.}}
  
  
==Power-spectral density with DC component==
<br>
We assume a DC&ndash;free random process&nbsp; $\{x_i(t)\}$.&nbsp; Further,&nbsp; we assume that the process also contains no periodic components.&nbsp; The following then holds:
*The auto-correlation function&nbsp; $φ_x(τ)$&nbsp; vanishes for&nbsp; $τ → ∞$.
*The power-spectral density&nbsp; ${\it \Phi}_x(f)$ &nbsp;&ndash;&nbsp; computable as the Fourier transform of&nbsp; $φ_x(τ)$&nbsp; &ndash;&nbsp; is a continuous function,&nbsp; i.e.,&nbsp; it contains no discrete&nbsp; (Dirac-shaped)&nbsp; components.
  
  
We now consider a second random process&nbsp; $\{y_i(t)\}$,&nbsp; which differs from the process&nbsp; $\{x_i(t)\}$&nbsp; only by an additional DC component&nbsp; $m_y$:
:$$\left\{ y_i (t) \right\} = \left\{ x_i (t) + m_y \right\}.$$

The statistical descriptors of the random process&nbsp; $\{y_i(t)\}$&nbsp; containing a DC component then have the following properties:
*The limit of the ACF for&nbsp; $τ → ∞$&nbsp; is now no longer zero,&nbsp; but&nbsp; $m_y^2$. &nbsp; Throughout the&nbsp; $τ$&ndash;range from&nbsp; $-∞$&nbsp; to&nbsp; $+∞$&nbsp; the ACF&nbsp; $φ_y(τ)$&nbsp; is larger than&nbsp; $φ_x(τ)$&nbsp; by&nbsp; $m_y^2$:
:$${\varphi_y ( \tau)} = {\varphi_x ( \tau)} + m_y^2 . $$
*According to the elementary laws of the Fourier transform,&nbsp; the constant ACF contribution leads in the PSD to a Dirac delta function&nbsp; $δ(f)$&nbsp; with weight&nbsp; $m_y^2$:
:$${{\it \Phi}_y ( f)} = {{\it \Phi}_x ( f)} + m_y^2 \cdot \delta (f). $$
*More information about the&nbsp; $\delta$&ndash;function can be found in the chapter&nbsp; [[Signal_Representation/Direct_Current_Signal_-_Limit_Case_of_a_Periodic_Signal|"Direct current signal - Limit case of a periodic signal"]]&nbsp; of the book&nbsp; "Signal Representation".&nbsp; Furthermore,&nbsp; we would like to refer you here to the&nbsp; (German language)&nbsp; learning video&nbsp; [[Herleitung_und_Visualisierung_der_Diracfunktion_(Lernvideo)|"Herleitung und Visualisierung der Diracfunktion"]] &nbsp; &rArr; &nbsp; "Derivation and visualization of the Dirac delta function".
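
The effect of the DC component on the ACF can be made visible with a few lines of code.&nbsp; The sketch below&nbsp; (assuming zero-mean white noise for&nbsp; $\{x_i(t)\}$,&nbsp; so that&nbsp; $φ_x(τ)$&nbsp; vanishes for&nbsp; $τ ≠ 0$;&nbsp; all names are illustrative)&nbsp; shows that&nbsp; $φ_y(τ)$&nbsp; is raised by&nbsp; $m_y^2$&nbsp; at all lags.
<syntaxhighlight lang="python">
# Sketch of the effect of a DC component m_y on the ACF (assumption: x is
# zero-mean white noise with variance 1, so phi_x(tau) vanishes for tau != 0).
import numpy as np

rng = np.random.default_rng(seed=3)
N, m_y = 1_000_000, 0.5
x = rng.standard_normal(N)              # DC-free process with phi_x(0) = 1
y = x + m_y                             # additional DC component m_y

acf_y = np.array([np.dot(y[:N - k], y[k:]) / (N - k) for k in (0, 10, 100)])
print(f"phi_y(0)                : {acf_y[0]:.3f}   (expected 1 + m_y^2 = {1 + m_y**2})")
print(f"phi_y(k != 0), two lags : {acf_y[1]:.3f}, {acf_y[2]:.3f}   (expected m_y^2 = {m_y**2})")
</syntaxhighlight>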

==Numerical PSD determination==
<br>
Auto-correlation function and power-spectral density are strictly related via the&nbsp; [[Signal_Representation/Fourier_Transform_and_its_Inverse#Fourier_transform|$\text{Fourier transform}$]].&nbsp; This relationship also holds for the discrete-time ACF representation&nbsp; ${\rm A} \{ \varphi_x ( \tau ) \} $&nbsp; obtained by applying the sampling operator,&nbsp; i.e.,&nbsp; for
:$${\rm A} \{ \varphi_x ( \tau ) \} = \varphi_x ( \tau ) \cdot \sum_{k= - \infty}^{\infty} T_{\rm A} \cdot \delta ( \tau - k \cdot T_{\rm A}).$$

The transition from the time domain to the spectral domain can be derived with the following steps:
*The distance&nbsp; $T_{\rm A}$&nbsp; of two samples is determined by the absolute bandwidth&nbsp; $B_x$&nbsp; $($maximum occurring frequency within the process$)$&nbsp; via the sampling theorem:
:$$T_{\rm A}\le\frac{1}{2B_x}.$$
*The Fourier transform of the discrete-time&nbsp; (sampled)&nbsp; auto-correlation function yields a power-spectral density that is periodic with&nbsp; ${\rm 1}/T_{\rm A}$:
:$${\rm A} \{ \varphi_x ( \tau ) \}  \hspace{0.3cm} \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\, \hspace{0.3cm} {\rm P} \{{{\it \Phi}_x} ( f) \} = \sum_{\mu = - \infty}^{\infty} {{\it \Phi}_x} ( f - \frac {\mu}{T_{\rm A}}).$$

{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; Since both&nbsp; $φ_x(τ)$&nbsp; and&nbsp; ${\it \Phi}_x(f)$&nbsp; are even and real functions,&nbsp; the following relation holds:
:$${\rm P} \{ { {\it \Phi}_x} ( f) \} = T_{\rm A} \cdot \varphi_x ( k = 0) +2 T_{\rm A} \cdot \sum_{k = 1}^{\infty} \varphi_x ( k T_{\rm A}) \cdot {\rm cos}(2{\rm \pi} f k T_{\rm A}).$$
*The power-spectral density&nbsp; $\rm (PSD)$&nbsp; of the continuous-time process is obtained from&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$&nbsp; by bandlimiting to the range&nbsp; $\vert f \vert ≤ 1/(2T_{\rm A})$.
*In the time domain,&nbsp; this operation means interpolating the individual ACF samples with the&nbsp; ${\rm sinc}$ function, where&nbsp; ${\rm sinc}(x)$&nbsp; stands for&nbsp; $\sin(\pi x)/(\pi x)$.}}
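
The cosine sum in the box above can be evaluated directly.&nbsp; The following sketch&nbsp; (illustrative assumptions:&nbsp; a Gaussian ACF with equivalent duration&nbsp; $∇τ$,&nbsp; sampled finely enough that the sampling theorem is fulfilled)&nbsp; computes&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$&nbsp; on a frequency grid and confirms its period&nbsp; $1/T_{\rm A}$.
<syntaxhighlight lang="python">
# Direct evaluation of the cosine-sum relation above (assumptions: Gaussian ACF
# with equivalent duration nabla_tau, sampled such that the sampling theorem holds).
import numpy as np

T_A       = 0.1e-6                         # sampling interval of the ACF
nabla_tau = 1.0e-6                         # equivalent ACF duration
k         = np.arange(0, 200)              # lag indices k = 0, 1, 2, ...
acf_k     = np.exp(-np.pi * (k * T_A / nabla_tau)**2)   # phi_x(k * T_A), phi_x(0) = 1

def psd_periodic(f):
    """P{Phi_x(f)}: periodic PSD computed from the discrete-time ACF samples."""
    f = np.atleast_1d(f)[:, None]
    return T_A * acf_k[0] + 2 * T_A * np.sum(
        acf_k[1:] * np.cos(2 * np.pi * f * k[1:] * T_A), axis=1)

f = np.linspace(-2 / T_A, 2 / T_A, 2001)   # several periods of length 1/T_A
P = psd_periodic(f)
print(f"P{{Phi_x}}(0)      : {P[f.size // 2]:.3e}")
print(f"P{{Phi_x}}(1/T_A)  : {psd_periodic(1.0 / T_A)[0]:.3e}   (periodicity)")
print(f"analytic Phi_x(0) : {nabla_tau:.3e}             (= nabla_tau * phi_x(0))")
</syntaxhighlight>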

{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; A Gaussian ACF&nbsp; $φ_x(τ)$&nbsp; is sampled at distance&nbsp; $T_{\rm A}$&nbsp; where the sampling theorem is satisfied:
[[File:EN_Sto_T_4_5_S5.png |right|frame| Discrete-time auto-correlation function,&nbsp; periodically continued power-spectral density]]
*The Fourier transform of the discrete-time ACF &nbsp; &rArr; &nbsp; ${\rm A} \{φ_x(τ) \}$&nbsp; is the periodically continued PSD &nbsp; &rArr; &nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$.
*This function&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$,&nbsp; periodic with&nbsp; ${\rm 1}/T_{\rm A}$,&nbsp; is accordingly of infinite extent&nbsp; (red curve).
*The PSD&nbsp; ${\it \Phi}_x(f)$&nbsp; of the continuous-time process&nbsp; $\{x_i(t)\}$&nbsp; is obtained by band-limiting to the frequency range&nbsp; $\vert f \cdot T_{\rm A} \vert ≤ 0.5$,&nbsp; highlighted in blue in the figure. }}

==Accuracy of the numerical PSD calculation==
<br>
For the analysis below,&nbsp; we make the following assumptions:
#The discrete-time ACF&nbsp; $φ_x(k \cdot T_{\rm A})$&nbsp; was determined numerically from&nbsp; $N$&nbsp; samples.
#As already shown in section&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Accuracy_of_the_numerical_ACF_calculation|"Accuracy of the numerical ACF calculation"]],&nbsp; these values are erroneous and the errors are correlated if&nbsp; $N$&nbsp; was chosen too small.
#To calculate the periodic power-spectral density&nbsp; $\rm (PSD)$,&nbsp; we use only the ACF values&nbsp; $φ_x(0)$, ... , $φ_x(K \cdot T_{\rm A})$:
::$${\rm P} \{{{\it \Phi}_x} ( f) \} = T_{\rm A} \cdot \varphi_x ( k = 0) +2 T_{\rm A} \cdot  \sum_{k = 1}^{K} \varphi_x  ( k T_{\rm A})\cdot {\rm cos}(2{\rm \pi} f k T_{\rm A}).$$

{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;
The accuracy of the power-spectral density calculation is determined to a large extent by the parameter&nbsp; $K$:
*If&nbsp; $K$&nbsp; is chosen too small,&nbsp; the actually present ACF values&nbsp; $φ_x(k \cdot T_{\rm A})$&nbsp; with&nbsp; $k > K$&nbsp; will not be taken into account.
*If&nbsp; $K$&nbsp; is too large,&nbsp; ACF values are also considered that should actually be zero and are non-zero only because of the numerical ACF calculation.
*Such values are pure errors&nbsp; $($due to a too small&nbsp; $N$&nbsp; in the ACF calculation$)$&nbsp; and impair the PSD calculation more than they provide a useful contribution to the result. }}

{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp; We consider here a zero mean process with statistically independent samples.&nbsp; Thus,&nbsp; only the ACF value&nbsp; $φ_x(0) = σ_x^2$&nbsp; should be different from zero.
[[File:EN_Sto_T_4_5_S5_b_neu_v2.png |450px|right|frame| Accuracy of numerical PSD calculation ]]
*But if one determines the ACF numerically from only&nbsp; $N = 1000$&nbsp; samples,&nbsp; one obtains finite ACF values even for&nbsp; $k ≠ 0$.
*The upper figure shows that these erroneous ACF values can be up to&nbsp; $6\%$&nbsp; of the maximum value.
*The numerically determined PSD is shown below.&nbsp; The theoretical&nbsp; (yellow)&nbsp; curve should be constant for&nbsp; $\vert f \cdot T_{\rm A} \vert ≤ 0.5$.
*The green and purple curves illustrate how the result for&nbsp; $K = 3$&nbsp; and&nbsp; $K = 10$,&nbsp; respectively,&nbsp; is distorted compared to&nbsp; $K = 0$.
*In this case&nbsp; $($statistically independent random variables$)$&nbsp; the error grows monotonically with increasing&nbsp; $K$.


In contrast,&nbsp; for a random variable with statistical bindings,&nbsp; there is an optimal value for&nbsp; $K$&nbsp; in each case.
#If this is chosen too small,&nbsp; significant bindings are not considered.
#In contrast,&nbsp; too large a value leads to oscillations that can only be attributed to erroneous ACF values.}}
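
The qualitative behavior of&nbsp; $\text{Example 4}$&nbsp; can be reproduced with the short sketch below&nbsp; (assumptions:&nbsp; $N = 1000$&nbsp; statistically independent Gaussian samples;&nbsp; the error is measured as the maximum relative deviation from the constant theoretical PSD;&nbsp; not an LNTwww program).
<syntaxhighlight lang="python">
# Qualitative reproduction of the behavior described in Example 4 (assumptions:
# N = 1000 statistically independent, zero-mean Gaussian samples; the error is
# the maximum relative deviation from the constant theoretical PSD sigma_x^2*T_A).
import numpy as np

rng = np.random.default_rng(seed=4)
N, T_A = 1000, 1.0
x      = rng.standard_normal(N)                     # sigma_x^2 = 1
K_max  = 10
acf    = np.array([np.dot(x[:N - k], x[k:]) / (N - k) for k in range(K_max + 1)])
print(f"largest |phi_x(k != 0)| relative to phi_x(0): {np.max(np.abs(acf[1:])) / acf[0]:.1%}")

f = np.linspace(-0.5 / T_A, 0.5 / T_A, 1001)        # one period of P{Phi_x(f)}
for K in (0, 3, 10):
    psd = T_A * acf[0] + 2 * T_A * sum(
        acf[k] * np.cos(2 * np.pi * f * k * T_A) for k in range(1, K + 1))
    err = np.max(np.abs(psd - T_A * 1.0)) / (T_A * 1.0)
    print(f"K = {K:2d}:  max. relative PSD error = {err:.1%}")
</syntaxhighlight>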

==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_4.12:_Power-Spectral_Density_of_a_Binary_Signal|Exercise 4.12: Power-Spectral Density of a Binary Signal]]

[[Aufgaben:Exercise_4.12Z:_White_Gaussian_Noise|Exercise 4.12Z: White Gaussian Noise]]

[[Aufgaben:Exercise_4.13:_Gaussian_ACF_and_PSD|Exercise 4.13: Gaussian ACF and PSD]]

[[Aufgaben:Exercise_4.13Z:_AMI_Code|Exercise 4.13Z: AMI Code]]
  
  
 
{{Display}}