Theory of Stochastic Signals/Power-Spectral Density

 
==Wiener-Khintchine Theorem==
 
<br>
In the following we restrict ourselves to ergodic processes.&nbsp; As was shown in the&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Ergodic_random_processes|"last chapter"]],&nbsp; the following statements then hold:
*Each individual pattern function&nbsp; $x_i(t)$&nbsp; is representative of the entire random process&nbsp; $\{x_i(t)\}$.  
 
*All time averages are thus identical to the corresponding ensemble averages.
 
*The auto-correlation function,&nbsp; which is generally affected by the two time parameters&nbsp; $t_1$&nbsp; and&nbsp; $t_2$,&nbsp; now depends only on the time difference&nbsp; $τ = t_2 - t_1$:  
 
:$$\varphi_x(t_1,t_2)={\rm E}\big[x(t_{\rm 1})\cdot x(t_{\rm 2})\big] = \varphi_x(\tau)= \lim_{T_{\rm M}\to\infty}\,\frac{1}{T_{\rm M}}\cdot\int^{+T_{\rm M}/2}_{-T_{\rm M}/2}x(t)\cdot x(t+\tau)\,{\rm d}t.$$
 
  
The auto-correlation function provides quantitative information about the&nbsp; (linear)&nbsp; statistical bindings within the ergodic process&nbsp; $\{x_i(t)\}$&nbsp; in the time domain.&nbsp; The equivalent descriptor in the frequency domain is the&nbsp; "power-spectral density",&nbsp; often also referred to as the&nbsp; "power density spectrum".
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''power-spectral density'''&laquo;&nbsp; $\rm (PSD)$&nbsp; of an ergodic random process&nbsp; $\{x_i(t)\}$&nbsp; is the Fourier transform of the auto-correlation function&nbsp; $\rm (ACF)$:  
 
:$${\it \Phi}_x(f)=\int^{+\infty}_{-\infty}\varphi_x(\tau) \cdot {\rm e}^{- {\rm j\hspace{0.05cm}\cdot \hspace{0.05cm} 2\pi}\hspace{0.05cm}\cdot \hspace{0.05cm} f \hspace{0.05cm}\cdot \hspace{0.05cm}\tau} \hspace{0.1cm} {\rm d} \tau. $$
 
This functional relationship is called the&nbsp; "Theorem of&nbsp; [https://en.wikipedia.org/wiki/Norbert_Wiener $\text{Wiener}$]&nbsp; and&nbsp; [https://en.wikipedia.org/wiki/Aleksandr_Khinchin $\text{Khinchin}$]". }}
  
  
Similarly,&nbsp; the auto-correlation function can be computed as the inverse Fourier transform of the power-spectral density&nbsp; (see section&nbsp; [[Signal_Representation/Fourier_Transform_and_its_Inverse#The_second_Fourier_integral|"Inverse Fourier transform"]]&nbsp; in the book&nbsp; "Signal Representation"):  
 
:$$ \varphi_x(\tau)=\int^{+\infty}_{-\infty} {\it \Phi}_x(f) \cdot {\rm e}^{{\rm j\hspace{0.05cm}\cdot \hspace{0.05cm} 2\pi}\hspace{0.05cm}\cdot \hspace{0.05cm} f \hspace{0.05cm}\cdot \hspace{0.05cm}\tau} \hspace{0.1cm} {\rm d} f.$$
 
*The two equations are directly applicable only if the random process contains neither a DC component nor periodic components.  
 
*Otherwise,&nbsp; one must proceed according to the specifications given in section&nbsp; [[Theory_of_Stochastic_Signals/Power-Spectral_Density#Power-spectral_density_with_DC_component|"Power-spectral density with DC component"]].
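The theorem can also be checked numerically.&nbsp; The following minimal sketch&nbsp; (Python/NumPy; the moving-average filter, the record length and the lag range are arbitrary choices for this illustration, not part of the article)&nbsp; estimates the ACF of one sample sequence by time averaging, takes its discrete Fourier transform, and compares the result with a directly averaged periodogram.&nbsp; It also verifies that the total power&nbsp; $\varphi_x(0)$&nbsp; equals the mean of the periodic PSD samples.

<syntaxhighlight lang="python">
import numpy as np

# Sketch: numerical check of the Wiener-Khintchine theorem for one sample sequence.
# A correlated, zero-mean sequence is created by moving-average filtering white noise.
rng = np.random.default_rng(0)
N, L = 200_000, 64                                   # record length, maximum ACF lag
x = np.convolve(rng.standard_normal(N), np.ones(4) / 4, mode="same")

# time-averaged ACF  phi_x(k)  for k = 0 ... L-1  (ergodic process)
acf = np.array([np.mean(x[:N - k] * x[k:]) for k in range(L)])

# two-sided ACF (tau = -(L-1) ... +(L-1)) and its discrete Fourier transform -> PSD
acf2 = np.concatenate([acf[:0:-1], acf])
psd_wk = np.fft.fft(np.fft.ifftshift(acf2)).real     # Wiener-Khintchine: PSD = FT{ACF}

# reference: periodogram averaged over non-overlapping segments of the same length
segs = x[: (N // len(acf2)) * len(acf2)].reshape(-1, len(acf2))
psd_ref = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / len(acf2)

print("max |FT{ACF} - periodogram| :", np.max(np.abs(psd_wk - psd_ref)))
print("power  phi_x(0)             :", acf[0])
print("mean of the PSD samples     :", np.mean(psd_wk))   # equals phi_x(0), i.e. the total power
</syntaxhighlight>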
  
 
==Physical interpretation and measurement==
 
<br>
 
The figure in this section shows an arrangement for the&nbsp; (approximate)&nbsp; measurement of the power-spectral density&nbsp; ${\it \Phi}_x(f)$.&nbsp; The following should be noted in this regard:  
 
*The random signal&nbsp; $x(t)$&nbsp; is applied to a&nbsp; (preferably)&nbsp; rectangular and&nbsp; (preferably)&nbsp; narrowband filter with center frequency&nbsp; $f$&nbsp; and bandwidth&nbsp; $Δf$,&nbsp; where&nbsp; $Δf$&nbsp; must be chosen sufficiently small according to the desired frequency resolution.  
*The corresponding output signal&nbsp; $x_f(t)$&nbsp; is squared and then the mean value is formed over a sufficiently long measurement period&nbsp; $T_{\rm M}$.&nbsp; This gives the&nbsp; "power of&nbsp; $x_f(t)$"&nbsp; or the&nbsp; "power components of&nbsp; $x(t)$&nbsp; in the spectral range from&nbsp; $f - Δf/2$&nbsp; to&nbsp; $f + Δf/2$":
 
[[File: P_ID387__Sto_T_4_5_S2_neu.png |right|frame| To measure the power-spectral density]]
:$$P_{x_f} =\overline{x_f(t)^2}=\frac{1}{T_{\rm M}}\cdot\int^{T_{\rm M}}_{0}x_f^2(t) \hspace{0.1cm}\rm d \it t.$$
 
*Division by&nbsp; $Δf$&nbsp; leads to the power-spectral density&nbsp; $\rm (PSD)$:  
:$${{\it \Phi}_{x \rm +}}(f)  =\frac{P_{x_f}}{{\rm \Delta} f} \hspace {0.5cm} \Rightarrow \hspace {0.5cm} {\it \Phi}_{x}(f) = \frac{P_{x_f}}{{\rm 2 \cdot \Delta} f}.$$
*${\it \Phi}_{x+}(f) = 2 \cdot {\it \Phi}_x(f)$&nbsp; denotes&nbsp;the one-sided PSD defined only for positive frequencies. &nbsp; For&nbsp; $f<0$ &nbsp; &rArr; &nbsp; ${\it \Phi}_{x+}(f) = 0$.&nbsp; In contrast,&nbsp; for the commonly used two-sided power-spectral density:
:$${\it \Phi}_x(-f) = {\it \Phi}_x(f).$$
*While the power&nbsp; $P_{x_f}$&nbsp; tends to zero as the bandwidth&nbsp; $Δf$&nbsp; becomes smaller,&nbsp; the power-spectral density remains nearly constant once&nbsp; $Δf$&nbsp; is sufficiently small.&nbsp; For the exact determination of&nbsp; ${\it \Phi}_x(f)$&nbsp; two limiting processes are necessary:
 
:$${{\it \Phi}_x(f)} = \lim_{{\rm \Delta}f\to 0} \hspace{0.2cm} \lim_{T_{\rm M}\to\infty}\hspace{0.2cm} \frac{1}{{\rm 2 \cdot \Delta}f\cdot T_{\rm M}}\cdot\int^{T_{\rm M}}_{0}x_f^2(t) \hspace{0.1cm} \rm d \it t.$$
 
  
 
{{BlaueBox|TEXT=   
 
$\text{Conclusion:}$&nbsp;  
 
*From this physical interpretation it further follows that the power-spectral density is always real and can never become negative. &nbsp;  
*The total power of the random signal&nbsp; $x(t)$&nbsp; is then obtained by integration over all spectral components:  
 
:$$P_x = \int^{\infty}_{0}{\it \Phi}_{x \rm +}(f) \hspace{0.1cm}{\rm d} f = \int^{+\infty}_{-\infty}{\it \Phi}_x(f)\hspace{0.1cm} {\rm d} f .$$}}
 
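The measurement principle can be mimicked in discrete time.&nbsp; The following sketch&nbsp; (Python/NumPy; an ideal FFT-based band-pass stands in for the narrowband filter, frequencies are normalized to the sampling rate, and the test signal is an arbitrary choice)&nbsp; filters the signal around a center frequency&nbsp; $f$,&nbsp; squares and averages the output over the whole record, and divides by&nbsp; $2 \cdot Δf$&nbsp; to obtain the two-sided PSD value.

<syntaxhighlight lang="python">
import numpy as np

def psd_at(x, f0, delta_f):
    """Two-sided PSD of x at center frequency f0 (in cycles per sample), measured as in
    the arrangement above: ideal band-pass of width delta_f around +-f0, squaring,
    averaging over the whole record (T_M) and division by 2*delta_f."""
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(len(x))                       # normalized frequencies -0.5 ... 0.5
    band = np.abs(np.abs(freqs) - f0) <= delta_f / 2     # pass-band around +f0 and -f0
    x_f = np.fft.ifft(X * band).real                     # narrow-band output signal x_f(t)
    return np.mean(x_f ** 2) / (2 * delta_f)             # P_xf / (2*delta_f)

rng = np.random.default_rng(1)
x = np.convolve(rng.standard_normal(500_000), np.ones(4) / 4, mode="same")
for f0 in (0.05, 0.125, 0.25):
    print(f"PSD estimate at f = {f0:5.3f}:  {psd_at(x, f0, 0.01):.3f}")
</syntaxhighlight>

A smaller&nbsp; $Δf$&nbsp; improves the frequency resolution but requires a longer measurement period&nbsp; $T_{\rm M}$&nbsp; for the same accuracy, in line with the two limits above.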
  
 
==Reciprocity law of ACF duration and PSD bandwidth==
 
 
<br>
 
All the&nbsp; [[Signal_Representation/Fourier_Transform_Laws|$\text{Fourier transform theorems}$]]&nbsp; derived in the book&nbsp; "Signal Representation"&nbsp; for deterministic signals can also be applied to  
[[File:P_ID390__Sto_T_4_5_S3_Ganz_neu.png |frame| On the&nbsp; "Reciprocity Theorem"&nbsp; of ACF and PSD]]
  
*the&nbsp; auto-correlation function&nbsp; $\rm (ACF)$,&nbsp; and  
*the&nbsp; power-spectral density&nbsp; $\rm (PSD)$.&nbsp;
<br>However,&nbsp; not all laws yield meaningful results due to the specific properties  
*of the auto-correlation function&nbsp; (always real and even)
*and power-spectral density&nbsp; (always real, even, and non&ndash;negative).
  
We now consider,&nbsp; as in the section&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Interpretation_of_the_auto-correlation_function|"Interpretation of the auto-correlation function"]],&nbsp; two different ergodic random processes&nbsp; $\{x_i(t)\}$&nbsp; and&nbsp; $\{y_i(t)\}$&nbsp; based on
#two pattern signals&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$ &nbsp; ⇒ &nbsp; upper sketch,
#two auto-correlation functions&nbsp; $φ_x(τ)$&nbsp; and&nbsp; $φ_y(τ)$ &nbsp; ⇒ &nbsp; middle sketch,
#two power-spectral densities&nbsp; ${\it \Phi}_x(f)$&nbsp; and&nbsp; ${\it \Phi}_y(f)$ &nbsp; ⇒ &nbsp; bottom sketch.
  
Based on these exemplary graphs,&nbsp; the following statements can be made:  
*of the two pattern signals&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$ &nbsp; ⇒ &nbsp; upper sketch,  
+
*The areas under the PSD curves are equal &nbsp; ⇒ &nbsp; the processes&nbsp; $\{x_i(t)\}$&nbsp; and&nbsp; $\{y_i(t)\}$&nbsp; have the same power:  
:$${\varphi_x({\rm 0})}\hspace{0.05cm}  =\hspace{0.05cm} \int^{+\infty}_{-\infty}{{\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f \hspace{0.2cm} = \hspace{0.2cm}{\varphi_y({\rm 0})} = \int^{+\infty}_{-\infty}{{\it \Phi}_y(f)} \hspace{0.1cm} {\rm d} f .$$
 
*The&nbsp; [[Signal_Representation/Fourier_Transform_Theorems#Reciprocity_Theorem_of_time_duration_and_bandwidth|$\text{Reciprocity Theorem of time duration and bandwidth}$]],&nbsp; well known from classical&nbsp; (deterministic)&nbsp; system theory,&nbsp; also applies here: &nbsp; '''A narrow ACF corresponds to a broad PSD and vice versa'''.  
*As a descriptive quantity,&nbsp; we use here the&nbsp; &raquo;'''equivalent PSD bandwidth'''&laquo; &nbsp; $∇f$&nbsp; $($pronounced&nbsp; "Nabla-f"$)$,&nbsp; <br>defined similarly to the equivalent ACF duration&nbsp;  $∇τ$&nbsp; in the chapter&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Interpretation_of_the_auto-correlation_function|"Interpretation of the auto-correlation function"]]:  
:$${{\rm \nabla} f_x} = \frac {1}{{\it \Phi}_x(f = {\rm 0})} \cdot \int^{+\infty}_{-\infty}{{\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f, $$
:$${ {\rm \nabla} \tau_x} = \frac {\rm 1}{ \varphi_x(\tau = \rm 0)} \cdot \int^{+\infty}_{-\infty}{\varphi_x(\tau )} \hspace{0.1cm} {\rm d} \tau.$$
*With these definitions,&nbsp; the following basic relationship holds:  
 
:$${{\rm \nabla} \tau_x} \cdot {{\rm \nabla} f_x} = 1\hspace{1cm}{\rm resp.}\hspace{1cm}
{{\rm \nabla} \tau_y} \cdot {{\rm \nabla} f_y} = 1.$$
 
  
 
{{GraueBox|TEXT=   
$\text{Example 1:}$&nbsp; We start from the graph at the top of this section:  
 
*The characteristics of the higher frequency signal&nbsp; $x(t)$&nbsp; are&nbsp; $∇τ_x = 0.33\hspace{0.08cm} \rm &micro;s$&nbsp; &nbsp;and&nbsp; $∇f_x = 3 \hspace{0.08cm} \rm MHz$.  
 
 
*The equivalent ACF duration of the signal&nbsp; $y(t)$&nbsp; is three times as large: &nbsp; $∇τ_y = 1 \hspace{0.08cm} \rm &micro;s$.  
 
*The equivalent PSD bandwidth of the signal&nbsp; $y(t)$&nbsp; is thus only&nbsp; $∇f_y = ∇f_x/3 = 1 \hspace{0.08cm} \rm MHz$. }}
  
  
 
{{BlaueBox|TEXT=  
 
$\text{General:}$&nbsp;  
 
'''The product of equivalent ACF duration&nbsp; ${ {\rm \nabla} \tau_x}$&nbsp; and equivalent PSD bandwidth&nbsp; $ { {\rm \nabla} f_x}$&nbsp; is always "one"''':  
 
:$${ {\rm \nabla} \tau_x} \cdot { {\rm \nabla} f_x} = 1.$$}}
 
  
{{BlaueBox|TEXT= 
$\text{Proof:}$&nbsp; According to the above definitions:

:$${ {\rm \nabla} \tau_x} = \frac {\rm 1}{ \varphi_x(\tau = \rm 0)} \cdot \int^{+\infty}_{-\infty}{ \varphi_x(\tau )} \hspace{0.1cm} {\rm d} \tau = \frac { {\it \Phi}_x(f = {\rm 0)} }{ \varphi_x(\tau = \rm 0)},$$
 
:$${ {\rm \nabla} f_x} = \frac {1}{ {\it \Phi}_x(f = {\rm0})} \cdot \int^{+\infty}_{-\infty}{ {\it \Phi}_x(f)} \hspace{0.1cm} {\rm d} f = \frac {\varphi_x(\tau = {\rm 0)} }{ {\it \Phi}_x(f = \rm 0)}.$$
 
Thus,&nbsp; the product is equal to&nbsp; $1$.
 
<div align="right">'''q.e.d.'''</div> }}
 
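The proof can be reproduced numerically.&nbsp; In the following sketch&nbsp; (Python/NumPy; the Gaussian ACF, the grid spacing and the value&nbsp; $∇τ_x = 0.33\hspace{0.08cm} \rm &micro;s$&nbsp; from&nbsp; $\text{Example 1}$&nbsp; are illustrative choices)&nbsp; the PSD is obtained by a sampled Fourier transform of the ACF, and both equivalent widths are evaluated by numerical integration; their product comes out&nbsp; (up to discretization errors)&nbsp; as&nbsp; $1$.

<syntaxhighlight lang="python">
import numpy as np

# Sketch: numerical check of  nabla_tau * nabla_f = 1  for a Gaussian ACF.
dt  = 1e-3                                      # tau grid spacing in microseconds
tau = np.arange(-20, 20, dt)                    # tau axis in microseconds
phi = np.exp(-np.pi * (tau / 0.33) ** 2)        # Gaussian ACF with nabla_tau = 0.33 µs

# sampled Fourier transform of the ACF -> PSD over the f axis (in MHz)
f   = np.fft.fftshift(np.fft.fftfreq(len(tau), d=dt))
Phi = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(phi)))) * dt

# equivalent ACF duration and equivalent PSD bandwidth (rectangles of equal area)
nabla_tau = np.sum(phi) * dt / phi[np.argmin(np.abs(tau))]
nabla_f   = np.sum(Phi) * (f[1] - f[0]) / Phi[np.argmin(np.abs(f))]
print(nabla_tau, nabla_f, nabla_tau * nabla_f)  # approx. 0.33 µs, 3.03 MHz, 1.0
</syntaxhighlight>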
  
  
 
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp;   
The so-called&nbsp; "white noise"&nbsp; represents a limiting case of the reciprocity theorem:  
 
*This includes all spectral components&nbsp; (up to infinity).
 
 
*The equivalent PSD bandwidth&nbsp; $∇f$&nbsp; is infinite.  
 
The theorem given here states that the equivalent ACF duration must then be&nbsp; $∇τ = 0$ &nbsp; &rArr; &nbsp; &raquo;'''white noise has a Dirac-shaped ACF'''&laquo;.
  
For more on this topic, see the three-part&nbsp; (German language)&nbsp; learning video&nbsp; [[Der_AWGN-Kanal_(Lernvideo)|"The AWGN channel"]],&nbsp; especially the second part.}}
  
==Power-spectral density with DC component==
 
 
<br>
 
We assume a DC&ndash;free random process&nbsp; $\{x_i(t)\}$.&nbsp; Further,&nbsp; we assume that the process also contains no periodic components.&nbsp; Then holds:  
*The auto-correlation function&nbsp; $φ_x(τ)$ vanishes&nbsp; for&nbsp; $τ → ∞$.  
*The power-spectral density&nbsp; ${\it \Phi}_x(f)$ &nbsp;&ndash;&nbsp; computable as the Fourier transform of&nbsp; $φ_x(τ)$&nbsp; &ndash;&nbsp; is continuous both in value and in frequency,&nbsp; i.e.,&nbsp; without discrete components.  
  
  
We now consider a second random process&nbsp; $\{y_i(t)\}$,&nbsp; which differs from the process&nbsp; $\{x_i(t)\}$&nbsp; only by an additional DC component&nbsp; $m_y$:  
 
:$$\left\{ y_i (t) \right\} = \left\{ x_i (t) + m_y \right\}.$$
 
  
 
The statistical descriptors of the random process&nbsp; $\{y_i(t)\}$&nbsp; with non-zero mean then have the following properties:  
 
*The limit of the ACF for&nbsp; $τ → ∞$&nbsp; is now no longer zero,&nbsp; but&nbsp; $m_y^2$. &nbsp; Throughout the&nbsp; $τ$&ndash;range from&nbsp; $-∞$&nbsp; to&nbsp; $+∞$&nbsp; the ACF&nbsp; $φ_y(τ)$&nbsp; is larger than&nbsp; $φ_x(τ)$&nbsp; by&nbsp; $m_y^2$:
 
:$${\varphi_y ( \tau)} = {\varphi_x ( \tau)} + m_y^2 . $$
 
*According to the elementary laws of the Fourier transform,&nbsp; the constant ACF contribution in the PSD leads to a Dirac delta function&nbsp; $δ(f)$&nbsp; with weight&nbsp; $m_y^2$:
 
:$${{\it \Phi}_y ( f)} = {{\it \Phi}_x ( f)} + m_y^2 \cdot \delta (f). $$
 
*More information about the&nbsp; $\delta$&ndash;function can be found in the chapter&nbsp; [[Signal_Representation/Direct_Current_Signal_-_Limit_Case_of_a_Periodic_Signal|"Direct current signal - Limit case of a periodic signal"]]&nbsp; of the book "Signal Representation".&nbsp;  Furthermore,&nbsp; we would like to refer you here to the&nbsp; (German language)&nbsp;  learning video&nbsp; [[Herleitung_und_Visualisierung_der_Diracfunktion_(Lernvideo)|"Herleitung und Visualisierung der Diracfunktion"]] &nbsp; &rArr; &nbsp; "Derivation and visualization of the Dirac delta function".
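The effect of a DC component on ACF and PSD can be made visible with a few lines of code.&nbsp; The following sketch&nbsp; (Python/NumPy; white noise with&nbsp; $σ_x = 1$&nbsp; and the value&nbsp; $m_y = 0.5$&nbsp; are arbitrary choices)&nbsp; shows that the time-averaged ACF of the process with DC component approaches&nbsp; $m_y^2$&nbsp; for large&nbsp; $τ$,&nbsp; and that the&nbsp; $f=0$&nbsp; bin of the periodogram grows with the record length,&nbsp; the discrete counterpart of the Dirac delta line&nbsp; $m_y^2 \cdot δ(f)$.

<syntaxhighlight lang="python">
import numpy as np

# Sketch: ACF and PSD of a process with an additional DC component m_y.
rng = np.random.default_rng(2)
m_y = 0.5
x = rng.standard_normal(200_000)           # zero-mean process {x_i(t)}, sigma_x = 1
y = x + m_y                                # process {y_i(t)} with DC component m_y

def acf(z, k):                             # time-averaged ACF value phi_z(k)
    return np.mean(z[:len(z) - k] * z[k:])

print("phi_x(100) :", acf(x, 100))         # approx. 0:  DC-free, no bindings
print("phi_y(100) :", acf(y, 100))         # approx. m_y^2 = 0.25
print("phi_y(0)   :", acf(y, 0))           # approx. sigma_x^2 + m_y^2 = 1.25

Y = np.fft.fft(y)
print("periodogram at f = 0  :", np.abs(Y[0]) ** 2 / len(y))     # ~ N * m_y^2, the discrete "Dirac line"
print("periodogram at f != 0 :", np.abs(Y[1234]) ** 2 / len(y))  # O(1): continuous PSD part
</syntaxhighlight>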
 
==Numerical PSD determination==
 
 
<br>
 
Auto-correlation function and power-spectral density are strictly related via the&nbsp; [[Signal_Representation/Fourier_Transform_and_its_Inverse#Fourier_transform|$\text{Fourier transform}$]].&nbsp; This relationship also holds for the discrete-time representation of the ACF with the sampling operator&nbsp; ${\rm A} \{ \varphi_x ( \tau ) \} $,&nbsp; thus for
 
:$${\rm A} \{ \varphi_x ( \tau ) \} = \varphi_x ( \tau ) \cdot \sum_{k= - \infty}^{\infty} T_{\rm A} \cdot \delta ( \tau - k \cdot T_{\rm A}).$$
 
  
The transition from the time domain to the spectral domain can be derived with the following steps:  
*The distance&nbsp; $T_{\rm A}$&nbsp; of two samples is determined by the absolute bandwidth&nbsp; $B_x$&nbsp; $($maximum occurring frequency within the process$)$&nbsp; via the sampling theorem:  
 
:$$T_{\rm A}\le\frac{1}{2B_x}.$$
 
*The Fourier transform of the discrete-time&nbsp; (sampled)&nbsp; auto-correlation function yields a power-spectral density that is periodic with&nbsp; ${\rm 1}/T_{\rm A}$:  
 
:$${\rm A} \{ \varphi_x ( \tau ) \}  \hspace{0.3cm} \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\, \hspace{0.3cm} {\rm P} \{{{\it \Phi}_x} ( f) \} = \sum_{\mu = - \infty}^{\infty} {{\it \Phi}_x} ( f - \frac {\mu}{T_{\rm A}}).$$
 
  
 
{{BlaueBox|TEXT=  
$\text{Conclusion:}$&nbsp; Since both&nbsp; $φ_x(τ)$&nbsp; and&nbsp; ${\it \Phi}_x(f)$&nbsp; are even and real functions,&nbsp; the following relation holds:
 
:$${\rm P} \{ { {\it \Phi}_x} ( f) \} = T_{\rm A} \cdot \varphi_x ( k = 0) +2 T_{\rm A} \cdot \sum_{k = 1}^{\infty} \varphi_x ( k T_{\rm A}) \cdot {\rm cos}(2{\rm \pi} f k T_{\rm A}).$$  
 
*The power-spectral density&nbsp; $\rm (PSD)$&nbsp; of the continuous-time process is obtained from&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$&nbsp; by bandlimiting to the range&nbsp; $\vert f \vert ≤ 1/(2T_{\rm A})$.  
*In the time domain,&nbsp; this operation means interpolating the individual ACF samples with the&nbsp; ${\rm sinc}$ function, where&nbsp; ${\rm sinc}(x)$&nbsp; stands for&nbsp; $\sin(\pi x)/(\pi x)$.}}
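The cosine sum from the conclusion box can be implemented directly.&nbsp; The following sketch&nbsp; (Python/NumPy; the Gaussian ACF, the sampling distance&nbsp; $T_{\rm A}$&nbsp; and the number of lags are illustrative choices)&nbsp; evaluates&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$&nbsp; from sampled ACF values in the range&nbsp; $\vert f \vert ≤ 1/(2T_{\rm A})$.

<syntaxhighlight lang="python">
import numpy as np

def psd_from_acf(acf_samples, T_A, f):
    """Periodic PSD  P{Phi_x(f)}  from the ACF samples phi_x(0), phi_x(T_A), ...
    via the cosine sum given above (the ACF is assumed to be real and even)."""
    acf_samples = np.asarray(acf_samples, dtype=float)
    f = np.asarray(f, dtype=float)
    k = np.arange(1, len(acf_samples))
    cosines = np.cos(2 * np.pi * np.outer(k, f) * T_A)      # shape (K, len(f))
    return T_A * acf_samples[0] + 2 * T_A * np.sum(acf_samples[1:, None] * cosines, axis=0)

# example: Gaussian ACF phi_x(tau) = exp(-pi*tau^2), sampled with T_A = 0.1
T_A  = 0.1
acfs = np.exp(-np.pi * (np.arange(40) * T_A) ** 2)          # phi_x(k*T_A), k = 0 ... 39
f    = np.linspace(-0.5 / T_A, 0.5 / T_A, 201)              # |f| <= 1/(2*T_A)
Phi  = psd_from_acf(acfs, T_A, f)
print(Phi[100])    # value at f = 0; close to the Gaussian PSD value Phi_x(0) = 1
</syntaxhighlight>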
  
  
 
 
{{GraueBox|TEXT=  
 
 
$\text{Example 3:}$&nbsp; A Gaussian ACF&nbsp; $φ_x(τ)$&nbsp; is sampled at distance&nbsp; $T_{\rm A}$&nbsp; where the sampling theorem is satisfied:
 
[[File:EN_Sto_T_4_5_S5.png |right|frame| Discrete-time auto-correlation function,&nbsp; periodically continued power-spectral density]]
*The Fourier transform of the discrete-time ACF &nbsp; &rArr; &nbsp; ${\rm A} \{φ_x(τ) \}$&nbsp; is the periodically continued PSD &nbsp; &rArr; &nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$.
 
*This function&nbsp; ${\rm P} \{ { {\it \Phi}_x} ( f) \}$,&nbsp; periodic with&nbsp; ${\rm 1}/T_{\rm A}$,&nbsp; accordingly extends to infinity&nbsp; (red curve).
 
*The PSD&nbsp; ${\it \Phi}_x(f)$&nbsp; of the continuous-time process&nbsp; $\{x_i(t)\}$&nbsp; is obtained by band-limiting to the frequency range&nbsp; $\vert f \cdot T_{\rm A} \vert ≤ 0.5$,&nbsp; highlighted in blue in the figure. }}
  
 
==Accuracy of the numerical PSD calculation==
 
 
<br>
 
For the analysis below,&nbsp; we make the following assumptions:  
#The discrete-time ACF&nbsp; $φ_x(k \cdot T_{\rm A})$&nbsp; was determined numerically from&nbsp; $N$&nbsp; samples. &nbsp;  
#As already shown in section&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Accuracy_of_the_numerical_ACF_calculation|"Accuracy of the numerical ACF calculation"]],&nbsp; these values are in error and the errors are correlated if&nbsp; $N$&nbsp; was chosen too small.
#To calculate the periodic power-spectral density&nbsp; $\rm (PSD)$,&nbsp; we use only the ACF values&nbsp; $φ_x(0)$, ... , $φ_x(K \cdot T_{\rm A})$:  
::$${\rm P} \{{{\it \Phi}_x} ( f) \} = T_{\rm A} \cdot \varphi_x ( k = 0) +2 T_{\rm A} \cdot  \sum_{k = 1}^{K} \varphi_x  ( k T_{\rm A})\cdot {\rm cos}(2{\rm \pi} f k T_{\rm A}).$$
  
 
{{BlaueBox|TEXT=   
 
$\text{Conclusion:}$&nbsp;  
 
The accuracy of the power-spectral density calculation is determined to a large extent by the parameter&nbsp; $K$:   
*If&nbsp; $K$&nbsp; is chosen too small,&nbsp; the ACF values&nbsp; $φ_x(k \cdot T_{\rm A})$&nbsp; with&nbsp; $k > K$&nbsp; that are actually present will not be taken into account.  
*If&nbsp; $K$&nbsp; is too large,&nbsp; ACF values are also considered that should actually be zero and are non-zero only because of the numerical ACF calculation.  
*These values are pure errors&nbsp; $($due to a too small&nbsp; $N$&nbsp; in the ACF calculation$)$&nbsp; and impair the PSD calculation more than they provide a useful contribution to the result. }}
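The influence of&nbsp; $K$&nbsp; can be reproduced with the scenario of&nbsp; $\text{Example 4}$&nbsp; below.&nbsp; The following sketch&nbsp; (Python/NumPy; the seed, the record length&nbsp; $N = 1000$&nbsp; and the chosen&nbsp; $K$&nbsp; values are illustrative)&nbsp; estimates the ACF of a process with statistically independent samples and evaluates the truncated cosine sum for several&nbsp; $K$;&nbsp; any deviation from the ideal flat PSD is caused solely by the erroneous ACF values with&nbsp; $k \ne 0$&nbsp; and tends to grow with&nbsp; $K$.

<syntaxhighlight lang="python">
import numpy as np

# Sketch: influence of K on the numerically determined PSD (process without bindings).
rng = np.random.default_rng(3)
N, T_A = 1000, 1.0
x = rng.standard_normal(N)                       # sigma_x^2 = 1, statistically independent

# ACF estimated from only N samples: phi(k) != 0 also for k != 0 (estimation errors)
acf_est = np.array([np.mean(x[:N - k] * x[k:]) for k in range(11)])
f = np.linspace(-0.5, 0.5, 401)                  # normalized frequency f*T_A

for K in (0, 3, 10):
    k = np.arange(1, K + 1)
    Phi = T_A * acf_est[0] + 2 * T_A * np.sum(
        acf_est[1:K + 1, None] * np.cos(2 * np.pi * np.outer(k, f) * T_A), axis=0)
    ripple = np.max(np.abs(Phi - T_A * acf_est[0]))
    print(f"K = {K:2d}:  maximum deviation from the flat PSD = {ripple:.3f}")
</syntaxhighlight>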
  
  
 
 
{{GraueBox|TEXT=  
 
$\text{Example 4:}$&nbsp; We consider here a zero mean process with statistically independent samples.&nbsp; Thus,&nbsp; only the ACF value&nbsp; $φ_x(0) = σ_x^2$&nbsp; should be different from zero.
[[File:EN_Sto_T_4_5_S5_b_neu_v2.png |450px|right|frame| Accuracy of numerical PSD calculation ]]
*But if one determines the ACF numerically from only&nbsp; $N = 1000$&nbsp; samples,&nbsp; one obtains finite ACF values even for&nbsp; $k ≠ 0$.
*The upper figure shows that these erroneous ACF values can be up to&nbsp; $6\%$&nbsp; of the maximum value.
*The numerically determined PSD is shown below.&nbsp; The theoretical&nbsp; (yellow) curve should be constant for&nbsp; $\vert f \cdot T_{\rm A} \vert ≤ 0.5$.
*The green and purple curves illustrate how the result is distorted for&nbsp; $K = 3$ &nbsp;resp.&nbsp; $K = 10$&nbsp; compared to&nbsp; $K = 0$.
*In this case&nbsp; $($statistically independent random variables$)$&nbsp; the error grows monotonically with increasing $K$.&nbsp;
In contrast,&nbsp; for a random variable with statistical bindings,&nbsp; there is an optimal value for&nbsp; $K$&nbsp; in each case.  
#If this is chosen too small,&nbsp; significant bindings are not considered.  
#In contrast,&nbsp; a value that is too large leads to oscillations that can only be attributed to erroneous ACF values.}}  
 
==Exercises for the chapter==
 

*Exercise 4.12: Power-Spectral Density of a Binary Signal
*Exercise 4.12Z: White Gaussian Noise
*Exercise 4.13: Gaussian ACF and PSD
*Exercise 4.13Z: AMI Code