Difference between revisions of "Theory of Stochastic Signals/Moments of a Discrete Random Variable"

From LNTwww
Revision as of 17:31, 6 December 2021

Calculation as ensemble average or time average


The probabilities and the relative frequencies provide extensive information about a discrete random variable. 

Reduced information is obtained by the so-called moments  $m_k$,  where  $k$  represents a natural number.

$\text{Two alternative ways of calculation:}$ 

Under the condition  "Ergodicity"  implicitly assumed here,  there are two different calculation possibilities for the  $k$-th order moment:

  • the  ensemble averaging  or  "expected value formation"   ⇒  averaging over all possible values  $\{ z_\mu\}$  with the index  $\mu = 1 , \hspace{0.1cm}\text{ ...} \hspace{0.1cm} , M$:
$$m_k = {\rm E} \big[z^k \big] = \sum_{\mu = 1}^{M}p_\mu \cdot z_\mu^k \hspace{2cm} \rm with \hspace{0.1cm} {\rm E\big[\text{ ...} \big]\hspace{-0.1cm}:} \hspace{0.3cm} \rm expected\hspace{0.1cm}value ;$$
  • the  time averaging  over the random sequence  $\langle z_ν\rangle$  with the index  $ν = 1 , \hspace{0.1cm}\text{ ...} \hspace{0.1cm} , N$:
$$m_k=\overline{z_\nu^k}=\hspace{0.01cm}\lim_{N\to\infty}\frac{1}{N}\sum_{\nu=\rm 1}^{\it N}z_\nu^k\hspace{1.7cm}\rm with\hspace{0.1cm}horizontal\hspace{0.1cm}line\hspace{-0.1cm}:\hspace{0.1cm}time\hspace{0.1cm}average.$$


Note:

  • Both types of calculations lead to the same asymptotic result for sufficiently large values of  $N$.
  • For finite  $N$,  a comparable error results as when the probability is approximated by the relative frequency.
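The two calculation routes can be sketched in a few lines of Python. This is a minimal illustration, not part of the original text; the values and probabilities are taken from the later examples, and the function names are our own:

```python
import random

# Illustrative discrete random variable with M = 2 possible values
# (amplitudes in volts and probabilities as in the examples below)
values = [1.0, 3.0]   # z_mu
probs  = [0.2, 0.8]   # p_mu

def moment_ensemble(k):
    """k-th order moment via ensemble averaging: m_k = sum_mu p_mu * z_mu^k."""
    return sum(p * z**k for p, z in zip(probs, values))

def moment_time(k, N=100_000, seed=1):
    """k-th order moment via time averaging over a random sequence <z_nu>."""
    rng = random.Random(seed)
    seq = rng.choices(values, weights=probs, k=N)
    return sum(z**k for z in seq) / N

m1 = moment_ensemble(1)   # 0.2*1 + 0.8*3 = 2.6
m2 = moment_ensemble(2)   # 0.2*1 + 0.8*9 = 7.4
```

For large `N` the time average approaches the ensemble average, as stated above; for small `N` the deviation corresponds to the error made when approximating probabilities by relative frequencies.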

Linear mean - DC component


$\text{Definition:}$  With  $k = 1$  we obtain from the general equation for moments the  linear mean:

$$m_1 =\sum_{\mu=1}^{M}p_\mu\cdot z_\mu =\lim_{N\to\infty}\frac{1}{N}\sum_{\nu=1}^{N}z_\nu.$$
  • The left part of this equation describes the ensemble averaging  (over all possible values),
while the right part gives the calculation as a time average.
  • In the context of signals,  this quantity is also referred to as the  "direct current"  $\rm (DC)$  component.


DC component  $m_1$  of a binary signal

$\text{Example 1:}$  A binary signal  $x(t)$  with the two possible values

  • $1\hspace{0.03cm}\rm V$  $($for the symbol  $\rm L)$,
  • $3\hspace{0.03cm}\rm V$  $($for the symbol  $\rm H)$


as well as the occurrence probabilities  $p_{\rm L} = 0.2$  and  $p_{\rm H} = 0.8$  has the linear mean  ("DC component")

$$m_1 = 0.2 \cdot 1\,{\rm V}+ 0.8 \cdot 3\,{\rm V}= 2.6 \,{\rm V}. $$

This is drawn as a red line in the graph.

If we determine this parameter by time averaging over the displayed  $N = 12$  signal values,  we obtain a slightly smaller value:

$$m_1\hspace{0.01cm}' = 4/12 \cdot 1\,{\rm V}+ 8/12 \cdot 3\,{\rm V}= 2.33 \,{\rm V}. $$
  • Here,  the probabilities  $p_{\rm L} = 0.2$  and  $p_{\rm H} = 0.8$  were replaced by the corresponding relative frequencies  $h_{\rm L} = 4/12$  and  $h_{\rm H} = 8/12$,  respectively.
  • In this example the relative error due to insufficient sequence length  $N$  is greater than  $10\%$.
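The numbers of Example 1 can be checked directly. A short sketch (variable names are our own):

```python
# Ensemble average with the given probabilities
m1 = 0.2 * 1.0 + 0.8 * 3.0             # 2.6 V

# Time average over the N = 12 displayed signal values:
# 4 occurrences of 1 V (symbol L) and 8 occurrences of 3 V (symbol H)
samples = 4 * [1.0] + 8 * [3.0]
m1_hat = sum(samples) / len(samples)   # 7/3 V, i.e. about 2.33 V

rel_error = abs(m1_hat - m1) / m1      # about 0.103, i.e. greater than 10 %
```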


$\text{Note about our (admittedly somewhat unusual) nomenclature:}$

We denote binary symbols here as in circuit theory with  $\rm L$  ("Low")  and  $\rm H$  ("High")  to avoid confusion.

  • In coding theory,  it is useful to map  $\{ \text{L, H}\}$  to  $\{0, 1\}$  to take advantage of the possibilities of modulo algebra.
  • In contrast,  to describe modulation with bipolar  (antipodal)  signals,  one better chooses the mapping  $\{ \text{L, H}\}$ ⇔ $ \{-1, +1\}$.


Quadratic mean – variance – standard deviation


$\text{Definitions:}$ 

  • Analogous to the linear mean,  $k = 2$  yields the  quadratic mean  (the second-order moment):
$$m_2 =\sum_{\mu=\rm 1}^{\it M}p_\mu\cdot z_\mu^2 =\lim_{N\to\infty}\frac{\rm 1}{\it N}\sum_{\nu=\rm 1}^{\it N}z_\nu^2.$$
  • Together with the DC component  $m_1$,  the  variance  $σ^2$  can be determined from this as a further parameter ("Steiner's theorem"):
$$\sigma^2=m_2-m_1^2.$$
  • The square root  $σ$  of the variance is the  rms value  ("root mean square");  in statistics this quantity is also called the  "standard deviation":
$$\sigma=\sqrt{m_2-m_1^2}.$$


$\text{Notes on units:}$

  • For message signals,  $m_2$  indicates the  "(average) power"  of a random signal,  referenced to  $1 \hspace{0.03cm} Ω$ resistance.
  • If  $z$  describes a voltage,  $m_2$  accordingly has the unit  ${\rm V}^2$.
  • The variance  $σ^2$ of a random signal corresponds physically to the  "alternating power".
  • These definitions are based on the reference resistance  $1 \hspace{0.03cm} Ω$.


The following  (German language)  learning video illustrates the defined quantities using the example of a digital signal:
    Momentenberechnung bei diskreten Zufallsgrößen   ⇒   "Moment Calculation for Discrete Random Variables".

Standard deviation ("rms value") of a binary signal

$\text{Example 2:}$  A binary signal  $x(t)$  with the two possible values

  • $1\hspace{0.03cm}\rm V$  $($for the symbol  $\rm L)$,
  • $3\hspace{0.03cm}\rm V$  $($for the symbol  $\rm H)$


as well as the occurrence probabilities  $p_{\rm L} = 0.2$  and  $p_{\rm H} = 0.8$  has the total signal power

$$P_{\rm total} = 0.2 \cdot (1\,{\rm V})^2+ 0.8 \cdot (3\,{\rm V})^2 = 7.4 \hspace{0.05cm}{\rm V}^2,$$

if one assumes the reference resistance  $R = 1 \hspace{0.05cm} Ω$.

With the DC component  $m_1 = 2.6 \hspace{0.05cm}\rm V$  $($see  $\text{Example 1})$  it follows for

  • the variance  $σ^2 = 7.4 \hspace{0.05cm}{\rm V}^2 - \big [2.6 \hspace{0.05cm}\rm V\big ]^2 = 0.64\hspace{0.05cm} {\rm V}^2$,
  • the alternating power  $P_{\rm alter} = 0.64\hspace{0.05cm} {\rm W}$   ⇒   same numerical value as  $σ^2$,  but different unit,
  • the rms value  $s_{\rm eff} = σ = 0.8 \hspace{0.05cm} \rm V$.
Insertion:   With other reference resistance   ⇒   $R \ne 1 \hspace{0.1cm} Ω$,  not all these calculations apply.  For example,  with  $R = 50 \hspace{0.1cm} Ω$,  the power  $P_{\rm total} $,  the alternating power  $P_{\rm alter}$,  and the rms value  $s_{\rm eff}$  have the following physical values:
$$P_{\rm total} \hspace{-0.05cm}= \hspace{-0.05cm} \frac{m_2}{R} \hspace{-0.05cm}= \hspace{-0.05cm} \frac{7.4\,{\rm V}^2}{50\,{\rm \Omega} } \hspace{-0.05cm}= \hspace{-0.05cm}0.148\,{\rm W},\hspace{0.5cm} P_{\rm alter} \hspace{-0.05cm} = \hspace{-0.05cm} \frac{\sigma^2}{R} \hspace{-0.05cm}= \hspace{-0.05cm}12.8\,{\rm mW} \hspace{0.05cm},\hspace{0.5cm} s_{\rm eff} \hspace{-0.05cm} = \hspace{-0.05cm}\sqrt{R \cdot P_{\rm alter} } \hspace{-0.05cm}= \hspace{-0.05cm} \sigma \hspace{-0.05cm}= \hspace{-0.05cm} 0.8\,{\rm V}.$$

The same variance  $σ^2 = 0.64\hspace{0.05cm} {\rm V}^2$ and the same rms value  $s_{\rm eff}= 0.8 \hspace{0.05cm} \rm V$  are obtained for amplitudes  $0\hspace{0.05cm}\rm V$  $($for symbol  $\rm L)$  and $2\hspace{0.05cm}\rm V$  $($for symbol  $\rm H)$,  provided that the probabilities  $p_{\rm L} = 0.2$  and  $p_{\rm H} = 0.8$  remain the same.  Only the DC component and the total power change:

$$m_1 = 1.6 \hspace{0.05cm}{\rm V}, \hspace{0.5cm}P_{\rm total} = {m_1}^2 +\sigma^2 = 3.2 \hspace{0.05cm}{\rm V}^2.$$
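The whole of Example 2, including the $R = 50 \hspace{0.1cm} Ω$ case, can be reproduced with a few lines. This is an illustrative sketch; variable names are our own:

```python
from math import sqrt, isclose

probs, values = [0.2, 0.8], [1.0, 3.0]              # p_L, p_H and amplitudes in V

m1 = sum(p * z    for p, z in zip(probs, values))   # DC component: 2.6 V
m2 = sum(p * z**2 for p, z in zip(probs, values))   # quadratic mean: 7.4 V^2

var   = m2 - m1**2       # Steiner's theorem: 0.64 V^2
s_eff = sqrt(var)        # rms value: 0.8 V

# With a reference resistance R != 1 Ohm the physical powers change:
R = 50.0                                    # Ohm
P_total = m2 / R                            # 0.148 W
P_alter = var / R                           # 12.8 mW
assert isclose(sqrt(R * P_alter), s_eff)    # rms value stays 0.8 V

# Shifted amplitudes {0 V, 2 V}: same variance, new DC component / total power
m1_b = 0.2 * 0.0 + 0.8 * 2.0                # 1.6 V
P_total_b = m1_b**2 + var                   # 3.2 V^2
```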

Exercises for the chapter


Exercise 2.2: Multi-Level Signals

Exercise 2.2Z: Discrete Random Variables