Theory of Stochastic Signals/Binomial Distribution

General description of the binomial distribution


$\text{Definition:}$  The  binomial distribution  represents an important special case for the occurrence probabilities of a discrete random variable.


To derive the binomial distribution, we assume that  $I$  binary and statistically independent random variables  $b_i$  can each take on

  • the value  $1$  with probability  ${\rm Pr}(b_i = 1) = p$,  and
  • the value  $0$  with probability  ${\rm Pr}(b_i = 0) = 1-p$.


Then the sum  $z$  is also a discrete random variable with the symbol set  $\{0, \ 1, \ 2,\hspace{0.1cm}\text{ ...} \hspace{0.1cm}, \ I\}$,  which is called binomially distributed:

$$z=\sum_{i=1}^{I}b_i.$$

Thus, the symbol set size is  $M = I + 1$.
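
This derivation can be illustrated by simulation. The following is a minimal Python sketch (assuming NumPy is available; the sample size `num_trials` is chosen arbitrarily for illustration) that draws $I$ independent binary random variables $b_i$ per trial and forms their sum $z$:

```python
import numpy as np

rng = np.random.default_rng(42)

I = 6                  # number of binary random variables b_i
p = 0.4                # Pr(b_i = 1)
num_trials = 100_000   # sample size, chosen arbitrarily

# Each row is one realization of (b_1, ..., b_I); z is the row sum.
b = rng.random((num_trials, I)) < p
z = b.sum(axis=1)

# z takes values in {0, 1, ..., I}, i.e. M = I + 1 different symbols.
values, counts = np.unique(z, return_counts=True)
for v, c in zip(values, counts):
    print(f"Pr(z = {v}) ≈ {c / num_trials:.4f}")
```

The empirical frequencies approach the binomial probabilities derived in the next section as the sample size grows.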


$\text{Example 1:}$  The binomial distribution has a wide range of applications in communications engineering as well as in other disciplines:

  1.   It describes the distribution of rejects in statistical quality control.
  2.   It allows the calculation of the residual error probability in blockwise coding.
  3.   The bit error rate of a digital transmission system determined by simulation is also, strictly speaking, a binomially distributed random variable.


Probabilities of the binomial distribution


$\text{Calculation rule:}$  For the  probabilities of the binomial distribution  with  $μ = 0, \hspace{0.1cm}\text{...} \hspace{0.1cm}, \ I$,  the following holds:

$$p_\mu = {\rm Pr}(z=\mu)={I \choose \mu}\cdot p\hspace{0.05cm}^\mu\cdot ({\rm 1}-p)\hspace{0.05cm}^{I-\mu}.$$

The first term here indicates the number of combinations   $($read:  $I\ \text{ choose }\ μ)$:

$${I \choose \mu}=\frac{I !}{\mu !\cdot (I-\mu) !}=\frac{ {I\cdot (I- 1) \cdot \ \cdots \ \cdot (I-\mu+ 1)} }{ 1\cdot 2\cdot \ \cdots \ \cdot \mu}.$$
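
As a quick check of this formula, the following Python sketch (standard library only; the helper name `binomial_pmf` is chosen for illustration) evaluates $p_\mu$; for $I = 6$ and $p = 0.5$ it reproduces the probabilities listed in Example 2 below:

```python
from math import comb

def binomial_pmf(mu: int, I: int, p: float) -> float:
    """p_mu = Pr(z = mu) = (I choose mu) * p^mu * (1 - p)^(I - mu)."""
    return comb(I, mu) * p**mu * (1 - p)**(I - mu)

I, p = 6, 0.5
for mu in range(I + 1):
    print(f"Pr(z = {mu}) = {binomial_pmf(mu, I, p):.6f}")
# Expected: 1/64 = 0.015625, 6/64 = 0.09375, 15/64 = 0.234375, 20/64 = 0.3125, ...
```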


Additional notes:

  • For very large values of  $I$,  the binomial distribution can be approximated by the  Poisson distribution  described in the next section.
  • If at the same time the product  $I · p \gg 1$,  then according to the  de Moivre–Laplace theorem  (https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem),  the Poisson distribution  (and with it also the binomial distribution)  transitions into a discrete  Gaussian distribution.
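
These approximations are easy to check numerically. The following Python sketch (assuming SciPy is installed; the parameter values $I = 100$, $p = 0.1$ are chosen purely for illustration) compares the exact binomial probabilities with the Poisson approximation $(λ = I · p)$ and the Gaussian approximation with mean $I · p$ and variance $I · p · (1 - p)$:

```python
from scipy.stats import binom, norm, poisson

I, p = 100, 0.1                      # large I, so that I*p = 10 >> 1
lam = I * p                          # rate of the Poisson approximation
sigma = (I * p * (1 - p)) ** 0.5     # std. deviation of the Gaussian approximation

for mu in (5, 10, 15):
    p_bin = binom.pmf(mu, I, p)                  # exact binomial probability
    p_poi = poisson.pmf(mu, lam)                 # Poisson approximation
    p_gau = norm.pdf(mu, loc=lam, scale=sigma)   # Gaussian density at the integer mu
    print(f"mu = {mu:2d}:  binomial = {p_bin:.4f},  Poisson = {p_poi:.4f},  Gaussian = {p_gau:.4f}")
```

For these parameters all three values lie close together around the mean $I · p = 10$.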

[Figure P_ID203__Sto_T_2_3_S2_neu.png:  Probabilities of the binomial distribution]

$\text{Example 2:}$  The graph shows the probabilities of the binomial distribution for  $I =6$  and  $p =0.4$.  Thus  $M = I+1=7$  probabilities are different from zero.

In contrast, for  $I = 6$  and  $p = 0.5$, the binomial probabilities are as follows:

$$\begin{align*}{\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}0) & = {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}6)\hspace{-0.05cm} =\hspace{-0.05cm} 1/64\hspace{-0.05cm} = \hspace{-0.05cm}0.015625 ,\\ {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}1) & = {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}5) \hspace{-0.05cm}= \hspace{-0.05cm}6/64 \hspace{-0.05cm}=\hspace{-0.05cm} 0.09375,\\ {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}2) & = {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}4)\hspace{-0.05cm} = \hspace{-0.05cm}15/64 \hspace{-0.05cm}= \hspace{-0.05cm}0.234375 ,\\ {\rm Pr}(z\hspace{-0.05cm} =\hspace{-0.05cm}3) & = 20/64 \hspace{-0.05cm}= \hspace{-0.05cm} 0.3125 .\end{align*}$$

These are symmetrical with respect to the abscissa value  $\mu = I/2 = 3$.


Another application of the binomial distribution is the  calculation of the block error probability in digital transmission.

$\text{Example 3:}$  If one transmits blocks of  $I =10$  binary symbols each over a channel that

  • corrupts a symbol with probability  $p = 0.01$   ⇒   random variable  $e_i = 1$,  and
  • correspondingly transmits a symbol uncorrupted with probability  $1 - p = 0.99$   ⇒   random variable  $e_i = 0$,


then the following holds for the new random variable  $f$  ("errors per block"):

$$f=\sum_{i=1}^{I}e_i.$$

This random variable  $f$  can take on all integer values between  $0$  (no symbol corrupted)  and  $I$  (all symbols corrupted).  We denote the probability of  $\mu$  corruptions by  $p_μ$.

  • The case where all  $I$  symbols are transmitted correctly occurs with probability  $p_0 = 0.99^{10} ≈ 0.9044$.  This also follows from the binomial formula for  $μ = 0$,  taking into account the definition  $10\, \text{ choose }\, 0 = 1$.
  • A single symbol error  $(f = 1)$  occurs with the following probability:
$$p_1 = \rm 10\cdot 0.01\cdot 0.99^9\approx 0.0914.$$
The first factor accounts for the fact that there are exactly  $10\, \text{ choose }\, 1 = 10$  possibilities for the position of a single error.  The other two factors take into account that one symbol must be corrupted and nine must be transmitted correctly if  $f =1$  is to hold.
  • For  $f =2$  there are significantly more combinations, namely  $10\, \text{ choose }\, 2 = 45$,  and we get
$$p_2 = \rm 45\cdot 0.01^2\cdot 0.99^8\approx 0.0041.$$

If a block code can correct up to two errors, the residual error probability is

$$p_{\rm R} = p_3 + \hspace{0.1cm}\text{...} \hspace{0.1cm} + p_{10} \approx 10^{-4},$$

or

$$p_{\rm R} = 1 - p_0 - p_1 - p_2 \approx 10^{-4}.$$
  • One can see that for large values of  $I$,  the second calculation via the complement reaches the result faster.
  • However, as an approximation one could also exploit the fact that  $p_{\rm R} ≈ p_3$  holds for these numerical values.
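
Both ways of computing the residual error probability can be reproduced in a few lines of Python (standard library only; variable names are illustrative):

```python
from math import comb

I, p = 10, 0.01
p_mu = [comb(I, mu) * p**mu * (1 - p)**(I - mu) for mu in range(I + 1)]

p_R_direct = sum(p_mu[3:])                     # p_3 + ... + p_10
p_R_compl = 1 - p_mu[0] - p_mu[1] - p_mu[2]    # via the complement

print(f"p_0 = {p_mu[0]:.4f}, p_1 = {p_mu[1]:.4f}, p_2 = {p_mu[2]:.4f}")
print(f"p_R (direct sum)  = {p_R_direct:.2e}")  # ≈ 1e-4
print(f"p_R (complement)  = {p_R_compl:.2e}")   # same value
print(f"p_3 alone         = {p_mu[3]:.2e}")     # p_R ≈ p_3
```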


Use the interactive applet  "Binomial and Poisson Distribution"  to determine the binomial probabilities for any  $I$  and  $p$.


Moments of the binomial distribution


The moments can be calculated in general using the equations of the chapters  "Moments of a discrete random variable"  and  "Probabilities of the binomial distribution".

$\text{Calculation rules:}$  For the  $k$-th order moment  of a binomially distributed random variable, the following holds in general:

$$m_k={\rm E}\big[z^k\big]=\sum_{\mu={\rm 0} }^{I}\mu^k\cdot{I \choose \mu}\cdot p\hspace{0.05cm}^\mu\cdot ({\rm 1}-p)\hspace{0.05cm}^{I-\mu}.$$

From this, after some transformations, we obtain for

  • the linear mean value:
$$m_1 ={\rm E}\big[z\big]= I\cdot p,$$
  • the second moment (mean square value):
$$m_2 ={\rm E}\big[z^2\big]= (I^2-I)\cdot p^2+I\cdot p.$$

The variance and the standard deviation are obtained by applying "Steiner's theorem":

$$\sigma^2 = {m_2-m_1^2} = {I \cdot p\cdot (1-p)} \hspace{0.3cm}\Rightarrow \hspace{0.3cm} \sigma = \sqrt{I \cdot p\cdot (1-p)}.$$
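
These closed-form expressions can be checked against the general moment formula above; a minimal Python sketch (standard library only; the helper name `moment` is chosen for illustration):

```python
from math import comb, sqrt

def moment(k: int, I: int, p: float) -> float:
    """k-th order moment m_k = E[z^k] of a binomially distributed z."""
    return sum(mu**k * comb(I, mu) * p**mu * (1 - p)**(I - mu)
               for mu in range(I + 1))

I, p = 10, 0.3
m1, m2 = moment(1, I, p), moment(2, I, p)

print(m1, I * p)                                  # linear mean: 3.0
print(m2, (I**2 - I) * p**2 + I * p)              # second moment: 11.1
print(sqrt(m2 - m1**2), sqrt(I * p * (1 - p)))    # Steiner: sigma ≈ 1.449
```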


The maximum variance  $σ^2 = I/4$  is obtained for the "characteristic probability"  $p = 1/2$.  In this case, the probabilities are symmetric around the mean  $m_1 = I/2 \ ⇒ \ p_μ = p_{I–μ}$.

The more the characteristic probability  $p$  deviates from the value  $1/2$,

  • the smaller the standard deviation  $σ$, and
  • the more asymmetric the probabilities become around the mean  $m_1 = I · p$.


$\text{Example 4:}$  As in  $\text{Example 3}$,  we consider a block of  $I =10$  binary symbols, each of which is corrupted independently of the others with probability  $p = 0.01$.  Then the following holds:

  • The mean number of errors per block is equal to  $m_f = {\rm E}\big[ f\big] = I · p = 0.1$.
  • The standard deviation of the random variable  $f$  is  $σ_f = \sqrt{0.1 \cdot 0.99}≈ 0.315$.


In contrast, for the completely disturbed channel   ⇒   corruption probability  $p = 1/2$,  the resulting values are

  • $m_f = 5$   ⇒   on average, five of the ten bits within a block are wrong,
  • $σ_f = \sqrt{I}/2 ≈1.581$   ⇒   the maximum standard deviation for  $I = 10$.
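
A quick numerical check of both parameter sets in Example 4 (plain Python; merely evaluating the formulas from above):

```python
from math import sqrt

I = 10
for p in (0.01, 0.5):
    m_f = I * p                      # mean number of errors per block
    sigma_f = sqrt(I * p * (1 - p))  # standard deviation
    print(f"p = {p}:  m_f = {m_f},  sigma_f = {sigma_f:.3f}")
# p = 0.01:  m_f = 0.1,  sigma_f = 0.315
# p = 0.5:   m_f = 5.0,  sigma_f = 1.581
```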

Exercises for the chapter


Exercise 2.3: Sum of Binary Numbers

Exercise 2.4: Number Lottery (6 out of 49)