Difference between revisions of "Digital Signal Transmission/Error Probability for Baseband Transmission"

From LNTwww
{{Header
|Untermenü=Digital Signal Transmission under Idealized Conditions
|Vorherige Seite=Systemkomponenten eines Basisbandübertragungssystems
|Nächste Seite=Eigenschaften von Nyquistsystemen
}}
== Definition of the bit error probability ==
<br>
[[File:EN_Dig_T_1_2_S1.png|right|frame|For the definition of the bit error probability]]
The diagram shows a very simple,&nbsp; but generally valid model of a binary transmission system.

This can be characterized as follows:
*Source and sink are described by the two binary sequences &nbsp;$〈q_ν〉$&nbsp; and &nbsp;$〈v_ν〉$.&nbsp;
*The entire transmission system &ndash; consisting of
#the transmitter,&nbsp;
#the transmission channel including noise and
#the receiver,
is regarded as a&nbsp; "Black Box"&nbsp; with binary input and binary output.
*This&nbsp; "digital channel"&nbsp; is characterized solely by the error sequence $〈e_ν〉$.&nbsp;
*If the $\nu$&ndash;th bit is transmitted without errors &nbsp;$(v_ν = q_ν)$,&nbsp; &nbsp;$e_ν= 0$&nbsp; is valid,&nbsp; <br>otherwise &nbsp;$(v_ν \ne q_ν)$&nbsp; &nbsp;$e_ν= 1$&nbsp; is set.
<br clear=all>
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The&nbsp; (average)&nbsp; '''bit error probability''' for a binary system is given as follows:
  
:$$p_{\rm B} = {\rm E}\big[{\rm Pr}(v_{\nu} \ne q_{\nu})\big]= \overline{  {\rm Pr}(v_{\nu} \ne q_{\nu}) } =
\lim_{N \to\infty}\frac{1}{N}\cdot\sum\limits_{\nu=1}^{N}{\rm Pr}(v_{\nu} \ne q_{\nu})\hspace{0.05cm}.$$

This statistical quantity is the most important evaluation criterion of any digital system.}}<br>
*The calculation as expected value &nbsp;$\rm E[\text{...}]$&nbsp; according to the first part of the above equation corresponds to an ensemble averaging over the falsification probability &nbsp;${\rm Pr}(v_{\nu} \ne q_{\nu})$&nbsp; of the &nbsp;$\nu$&ndash;th symbol,&nbsp; while the overline in the right part of the equation marks a time averaging.

*Both types of calculation lead&nbsp; &ndash; under the justified assumption of ergodic processes &ndash;&nbsp; to the same result,&nbsp; as shown in the fourth main chapter&nbsp; "Random Variables with Statistical Dependence"&nbsp; of the book &nbsp;[[Theory_of_Stochastic_Signals|"Theory of Stochastic Signals"]].

*The bit error probability can also be determined as an expected value from the error sequence &nbsp;$〈e_ν〉$,&nbsp; taking into account that the error quantity &nbsp;$e_ν$&nbsp; can only take the values &nbsp;$0$&nbsp; and &nbsp;$1$:
:$$p_{\rm B} =  {\rm E}\big[{\rm Pr}(e_{\nu}= 1)\big]= {\rm E}\big[e_{\nu}\big]\hspace{0.05cm}.$$

*The above definition of the bit error probability applies whether or not there are statistical bindings within the error sequence &nbsp;$〈e_ν〉$.&nbsp; Depending on this,&nbsp; the effort required to calculate the bit error probability varies,&nbsp; and different digital channel models must be used in a system simulation.
<br>
In the fifth main chapter it will be shown that the so-called &nbsp;[[Digital_Signal_Transmission/Binary_Symmetric_Channel_(BSC)|"BSC model"]]&nbsp; ("Binary Symmetric Channel")&nbsp; provides statistically independent errors,&nbsp; while for the description of burst error channels one has to resort to the models of &nbsp;[[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_Gilbert-Elliott|"Gilbert&ndash;Elliott"]]&nbsp; [Gil60]<ref>Gilbert, E. N.:&nbsp; Capacity of a Burst–Noise Channel.&nbsp; In: Bell Syst. Techn. J. Vol. 39, 1960, pp. 1253–1266.</ref> and of &nbsp;[[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_McCullough|"McCullough"]]&nbsp; [McC68]<ref>McCullough, R. H.:&nbsp; The Binary Regenerative Channel.&nbsp; In: Bell Syst. Techn. J. Vol. 47, 1968.</ref>.
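The relation &nbsp;$p_{\rm B} = {\rm E}\big[e_{\nu}\big]$&nbsp; can be illustrated with a short simulation sketch.&nbsp; Assuming a hypothetical BSC falsification probability &nbsp;$p = 0.01$&nbsp; (not a value from this article),&nbsp; it draws an error sequence &nbsp;$〈e_ν〉$&nbsp; of statistically independent errors and time-averages it:

```python
import random

random.seed(42)                     # reproducible sketch
p = 0.01                            # assumed BSC falsification probability (hypothetical value)
N = 200_000                         # number of simulated symbols

# Error sequence <e_nu>:  e_nu = 1 with probability p, else e_nu = 0
e = [1 if random.random() < p else 0 for _ in range(N)]

# Time average of the error sequence approximates p_B = E[e_nu]
h_B = sum(e) / N
print(h_B)                          # close to 0.01
```

For finite &nbsp;$N$&nbsp; the time average only approximates &nbsp;$p_{\rm B}$;&nbsp; exactly this deviation is quantified in the following sections.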
  
== Definition of the bit error rate ==
<br>
The&nbsp; "bit error probability"&nbsp; is well suited for the design and optimization of digital systems.&nbsp; It is an&nbsp; "a&ndash;priori parameter",&nbsp; which allows the error behavior of a transmission system to be predicted before the system has actually been implemented.<br>

In contrast,&nbsp; to measure the quality of a realized system or in a system simulation,&nbsp; one must switch to the&nbsp; "bit error rate",&nbsp; which is determined by comparing the source symbol sequence &nbsp;$〈q_ν〉$&nbsp; and the sink symbol sequence &nbsp;$〈v_ν〉$.&nbsp; This is thus an&nbsp; "a&ndash;posteriori parameter"&nbsp; of the system.
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The&nbsp; '''bit error rate'''&nbsp; $\rm (BER)$&nbsp; is the ratio of the number &nbsp;$n_{\rm B}(N)$&nbsp; of bit errors &nbsp;$(v_ν \ne q_ν)$&nbsp; to the number &nbsp;$N$&nbsp; of transmitted symbols:
:$$h_{\rm B}(N) = \frac{n_{\rm B}(N)}{N}  \hspace{0.05cm}.$$
In terms of probability theory,&nbsp; the bit error rate is a &nbsp;[[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli.27s_law_of_large_numbers|"relative frequency"]]; &nbsp;therefore,&nbsp; it is also called the&nbsp; "bit error frequency".}}<br>
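This definition can be sketched in a few lines.&nbsp; The two short sequences below are hypothetical examples,&nbsp; not data from this article;&nbsp; the bit error rate follows from a symbol-by-symbol comparison:

```python
# Hypothetical source and sink symbol sequences <q_nu> and <v_nu>
q = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
v = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # differs from q in two positions

# n_B(N): number of bit errors (v_nu != q_nu)
n_B = sum(1 for q_nu, v_nu in zip(q, v) if q_nu != v_nu)

# h_B(N) = n_B(N) / N: the bit error rate
h_B = n_B / len(q)
print(n_B, h_B)                      # 2 0.2
```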
  
*The notation &nbsp;$h_{\rm B}(N)$&nbsp; is intended to make clear that the bit error rate determined by measurement or simulation depends significantly on the parameter &nbsp;$N$ &nbsp; &rArr; &nbsp; the total number of transmitted or simulated symbols.
*According to the elementary laws of probability theory,&nbsp; only in the limiting case &nbsp;$N \to \infty$&nbsp; does the a&ndash;posteriori parameter &nbsp;$h_{\rm B}(N)$&nbsp; coincide exactly with the a&ndash;priori parameter &nbsp;$p_{\rm B}$.<br><br>
The connection between&nbsp; "probability"&nbsp; and&nbsp; "relative frequency"&nbsp; is clarified in the&nbsp; (German language)&nbsp; learning video<br> &nbsp; &nbsp; &nbsp; [[Bernoullisches_Gesetz_der_großen_Zahlen_(Lernvideo)|"Bernoullisches Gesetz der großen Zahlen"]] &nbsp; &rArr; &nbsp; "Bernoulli's law of large numbers".
<br><br>
== Bit error probability and bit error rate in the BSC model ==
<br>
The following derivations are based on the BSC model&nbsp; ("Binary Symmetric Channel"),&nbsp; which is described in detail in &nbsp;[[Digitalsignal%C3%BCbertragung/Binary_Symmetric_Channel_(BSC)#Fehlerkorrelationsfunktion_des_BSC.E2.80.93Modells|"chapter 5.2"]].
*Each bit is falsified with probability &nbsp;$p = {\rm Pr}(v_{\nu} \ne q_{\nu}) = {\rm Pr}(e_{\nu} = 1)$,&nbsp; independent of the error probabilities of the neighboring symbols.
*Thus,&nbsp; the&nbsp; (average)&nbsp; bit error probability &nbsp;$p_{\rm B}$&nbsp; is also equal to &nbsp;$p$.

Now we estimate how accurately the bit error probability &nbsp;$p_{\rm B} = p$&nbsp; of the BSC model is approximated by the bit error rate &nbsp;$h_{\rm B}(N)$:
*The number of bit errors in the transmission of &nbsp;$N$&nbsp; symbols is a discrete random quantity:
:$$n_{\rm B}(N) = \sum\limits_{\nu=1}^{N} e_{\nu} \hspace{0.2cm} \in \hspace{0.2cm} \{0, 1, \hspace{0.05cm}\text{...} \hspace{0.05cm} , N \}\hspace{0.05cm}.$$
*In the case of statistically independent errors&nbsp; (BSC model),&nbsp; $n_{\rm B}(N)$&nbsp; is [[Theory_of_Stochastic_Signals/Binomial_Distribution#General_description_of_the_binomial_distribution|"binomially distributed"]].&nbsp; Consequently,&nbsp; mean and standard deviation of this random variable are:
:$$m_{n{\rm B}}=N \cdot p_{\rm B},\hspace{0.2cm}\sigma_{n{\rm B}}=\sqrt{N\cdot p_{\rm B}\cdot (1- p_{\rm B})}\hspace{0.05cm}.$$
*Therefore,&nbsp; for mean and standard deviation of the bit error rate &nbsp;$h_{\rm B}(N)= n_{\rm B}(N)/N$&nbsp; holds:
:$$m_{h{\rm B}}= \frac{m_{n{\rm B}}}{N} = p_{\rm B}\hspace{0.05cm},\hspace{0.2cm}\sigma_{h{\rm B}}= \frac{\sigma_{n{\rm B}}}{N}=
   \sqrt{\frac{ p_{\rm B}\cdot (1- p_{\rm B})}{N}}\hspace{0.05cm}.$$
*However,&nbsp; according to &nbsp;[https://en.wikipedia.org/wiki/Abraham_de_Moivre "Moivre"]&nbsp; and &nbsp;[https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Laplace"],&nbsp; the binomial distribution can be approximated by a Gaussian distribution:
:$$f_{h{\rm B}}({h_{\rm B}}) \approx \frac{1}{\sqrt{2\pi}\cdot\sigma_{h{\rm B}}}\cdot {\rm e}^{-(h_{\rm B}-p_{\rm B})^2/(2 \hspace{0.05cm}\cdot \hspace{0.05cm}\sigma_{h{\rm B}}^2)}.$$
*Using the &nbsp;[[Theory_of_Stochastic_Signals/Gaussian_Distributed_Random_Variables#Exceedance_probability|"Gaussian error integral"]]&nbsp; ${\rm Q}(x)$,&nbsp; one can calculate the probability &nbsp;$p_\varepsilon$&nbsp; that the bit error rate &nbsp;$h_{\rm B}(N)$&nbsp; determined by simulation/measurement over &nbsp;$N$&nbsp; symbols differs in magnitude by less than a value &nbsp;$\varepsilon$&nbsp; from the actual bit error probability &nbsp;$p_{\rm B}$:
:$$p_{\varepsilon}= {\rm Pr} \left( |h_{\rm B}(N) - p_{\rm B}| < \varepsilon \right)
   = 1 -2 \cdot {\rm Q} \left( \frac{\varepsilon}{\sigma_{h{\rm B}}} \right)=
   1 -2 \cdot {\rm Q} \left( \frac{\varepsilon \cdot \sqrt{N}}{\sqrt{p_{\rm B} \cdot (1-p_{\rm B})}} \right)\hspace{0.05cm}.$$
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; This result can be interpreted as follows:
#If one performs an infinite number of test series over &nbsp;$N$&nbsp; symbols each,&nbsp; the mean value &nbsp;$m_{h{\rm B} }$&nbsp; is actually equal to the sought error probability &nbsp;$p_{\rm B}$.  
#With a single test series,&nbsp; on the other hand,&nbsp; one will only obtain an approximation;&nbsp; over many test series,&nbsp; the respective deviations from the nominal value are Gaussian distributed.}}
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; The bit error probability &nbsp;$p_{\rm B}= 10^{-3}$&nbsp; is given and it is known that the bit errors are statistically independent.
*If we now make a large number of test series with &nbsp;$N= 10^{5}$&nbsp; symbols each,&nbsp; the respective results &nbsp;$h_{\rm B}(N)$&nbsp; will vary around the nominal value &nbsp;$10^{-3}$&nbsp; according to a Gaussian distribution.&nbsp; The standard deviation here is &nbsp;$\sigma_{h{\rm B} }=  \sqrt{ { p_{\rm B}\cdot ( 1- p_{\rm B})}/{N} }\approx 10^{-4}\hspace{0.05cm}.$
*Thus,&nbsp; the probability that the relative frequency will have a value between &nbsp;$0.9 \cdot 10^{-3}$&nbsp; and &nbsp;$1.1 \cdot 10^{-3}$&nbsp; $(\varepsilon=10^{-4})$&nbsp; is:
:$$p_{\varepsilon} = 1 - 2 \cdot  {\rm Q} \left({\varepsilon}/{\sigma_{h{\rm B} } } \right )= 1 - 2 \cdot {\rm Q} (1) \approx 68.4\%.$$
*If this probability&nbsp; (accuracy)&nbsp; is to be increased to &nbsp;$95\%$,&nbsp; &nbsp;$N = 400\hspace{0.05cm}000$&nbsp; symbols would be required.}}
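The numbers of Example 1 can be reproduced with a few lines.&nbsp; The only assumption in this sketch is the standard relation &nbsp;${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$,&nbsp; so that the stdlib function &nbsp;math.erfc&nbsp; can be used:

```python
from math import sqrt, erfc

def Q(x):
    """Complementary Gaussian error integral: Q(x) = 1/2 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

p_B, eps, N = 1e-3, 1e-4, 1e5

sigma_hB = sqrt(p_B * (1 - p_B) / N)       # standard deviation of h_B(N), ~ 1e-4
p_eps = 1 - 2 * Q(eps / sigma_hB)          # ~ 0.68, matching Example 1

# Accuracy 95 %:  1 - 2*Q(x) = 0.95  =>  Q(x) = 0.025  =>  x ~ 1.96
N_95 = p_B * (1 - p_B) * (1.96 / eps) ** 2  # ~ 3.84e5 symbols minimum
print(p_eps, N_95)
```

The minimum works out to roughly &nbsp;$3.84 \cdot 10^{5}$&nbsp; symbols;&nbsp; the round value &nbsp;$N = 400\hspace{0.05cm}000$&nbsp; quoted above comfortably meets it.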
  
== Error probability with Gaussian noise ==
<br>
According to the &nbsp;[[Digital_Signal_Transmission/System_Components_of_a_Baseband_Transmission_System#Block_diagram_and_prerequisites_for_the_first_main_chapter|"prerequisites to this chapter"]],&nbsp; we make the following assumptions:
[[File:P_ID1259__Dig_T_1_2_S3_v2.png|right|frame|Error probability with Gaussian noise|class=fit]]
*The detection signal at the detection times can be represented as follows:
:$$ d(\nu T) = d_{\rm S}(\nu T)+d_{\rm N}(\nu T)\hspace{0.05cm}. $$
*The signal component is described by the probability density function&nbsp; (PDF) &nbsp;$f_{d{\rm S}}(d_{\rm S}) $,&nbsp; where we assume different occurrence probabilities
:$$p_{\rm L} = {\rm Pr}(d_{\rm S} = -s_0),\hspace{0.5cm}p_{\rm H} = {\rm Pr}(d_{\rm S} = +s_0)= 1-p_{\rm L}.$$
*Let the probability density function &nbsp;$f_{d{\rm N}}(d_{\rm N})$&nbsp; of the noise component be Gaussian with standard deviation &nbsp;$\sigma_d$.
<br clear=all>
Assuming that &nbsp;$d_{\rm S}(\nu  T)$&nbsp; and &nbsp;$d_{\rm N}(\nu  T)$&nbsp; are statistically independent of each other &nbsp;("signal-independent noise"),&nbsp; the probability density function &nbsp;$f_d(d) $&nbsp; of the detection samples &nbsp;$d(\nu  T)$&nbsp; is obtained as the convolution product
:$$f_d(d) = f_{d{\rm S}}(d_{\rm S}) \star f_{d{\rm N}}(d_{\rm N})\hspace{0.05cm}.$$
The threshold decider with threshold &nbsp;$E = 0$&nbsp; makes a wrong decision whenever
*the symbol &nbsp;$\rm L$&nbsp; was sent &nbsp;$(d_{\rm S} = -s_0)$&nbsp; and &nbsp;$d > 0$&nbsp; $($red shaded area$)$,&nbsp; '''or'''
*the symbol &nbsp;$\rm H$&nbsp; was sent &nbsp;$(d_{\rm S} = +s_0)$&nbsp; and &nbsp;$d < 0$&nbsp; $($blue shaded area$)$.
<br>
Since the areas of the two Gaussian curves add up to &nbsp;$1$,&nbsp; the sum of the red and blue shaded areas gives the bit error probability &nbsp;$p_{\rm B}$.&nbsp; The two green shaded areas in the upper probability density function &nbsp;$f_{d{\rm N}}(d_{\rm N})$&nbsp; are &ndash; each separately &ndash; also equal to &nbsp;$p_{\rm B}$.

The results illustrated by the diagram are now derived as formulas.&nbsp; We start from the equation
:$$p_{\rm B} = p_{\rm L} \cdot {\rm Pr}( v_\nu = \mathbf{H}\hspace{0.1cm}|\hspace{0.1cm} q_\nu = \mathbf{L})+
  p_{\rm H} \cdot {\rm Pr}(v_\nu = \mathbf{L}\hspace{0.1cm}|\hspace{0.1cm} q_\nu = \mathbf{H})\hspace{0.05cm}.$$
*Here &nbsp;$p_{\rm L} $&nbsp; and &nbsp;$p_{\rm H} $&nbsp; are the source symbol probabilities,&nbsp; while the second&nbsp; (conditional)&nbsp; probabilities &nbsp;$ {\rm Pr}( v_\nu \hspace{0.05cm}|\hspace{0.05cm} q_\nu)$&nbsp; describe the falsifications caused by the AWGN channel.&nbsp; The decision rule of the threshold decider &nbsp;$($with threshold &nbsp;$E = 0)$&nbsp; also yields:
:$$p_{\rm B} = p_{\rm L} \cdot {\rm Pr}( d(\nu T)>0)+  p_{\rm H} \cdot {\rm Pr}( d(\nu  T)<0) =p_{\rm L} \cdot {\rm Pr}( d_{\rm N}(\nu T)>+s_0)+  p_{\rm H} \cdot {\rm Pr}( d_{\rm N}(\nu T)<-s_0) \hspace{0.05cm}.$$
*The two exceedance probabilities in the above equation are equal due to the symmetry of the Gaussian probability density function &nbsp;$f_{d{\rm N}}(d_{\rm N})$.&nbsp; It holds:
:$$p_{\rm B} = (p_{\rm L} + p_{\rm H}) \cdot {\rm Pr}( d_{\rm N}(\nu T)>s_0) = {\rm Pr}( d_{\rm N}(\nu T)>s_0)\hspace{0.05cm}.$$
:This means: &nbsp; For a binary system with threshold &nbsp;$E = 0$,&nbsp; the bit error probability&nbsp; $p_{\rm B}$&nbsp; does not depend on the symbol probabilities &nbsp;$p_{\rm L} $&nbsp; and &nbsp;$p_{\rm H} = 1- p_{\rm L}$.
*The probability that the AWGN noise term &nbsp;$d_{\rm N}$&nbsp; with standard deviation &nbsp;$\sigma_d$&nbsp; is larger than the amplitude&nbsp; $s_0$&nbsp; of the NRZ transmission pulse is thus given by:
:$$p_{\rm B} = \int_{s_0}^{+\infty}f_{d{\rm N}}(d_{\rm N})\,{\rm d} d_{\rm N} =
  \frac{1}{\sqrt{2\pi} \cdot \sigma_d}\int_{s_0}^{+\infty}{\rm e} ^{-d_{\rm N}^2/(2\sigma_d^2) }\,{\rm d} d_{\rm N}\hspace{0.05cm}.$$
*Using the complementary Gaussian error integral &nbsp;${\rm Q}(x)$,&nbsp; the result is:
:$$p_{\rm B} =  {\rm Q} \left( \frac{s_0}{\sigma_d}\right)\hspace{0.4cm}{\rm with}\hspace{0.4cm}{\rm Q} (x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{+\infty}{\rm e}^{-u^{2}/2}\,{\rm d}u \hspace{0.05cm}.$$
*Often&nbsp; &ndash; especially in the English-language literature &ndash;&nbsp; the comparable&nbsp; "complementary error function" &nbsp;${\rm erfc}(x)$&nbsp; is used instead of &nbsp;${\rm Q}(x)$.&nbsp; With it,&nbsp; the following applies:
:$$p_{\rm B} =  {1}/{2} \cdot {\rm erfc} \left( \frac{s_0}{\sqrt{2}\cdot \sigma_d}\right)\hspace{0.4cm}{\rm with}\hspace{0.4cm}
{\rm erfc} (x) = \frac{2}{\sqrt{\pi}}\int_{x}^{+\infty}{\rm e}^{-u^{2}}\,{\rm d}u \hspace{0.05cm}.$$
*Both functions can be found in formula collections in tabular form.&nbsp; However,&nbsp; you can also use our HTML 5/JavaScript applet &nbsp;[[Applets:Komplementäre_Gaußsche_Fehlerfunktionen|"Complementary Gaussian Error Functions"]]&nbsp; to calculate the function values of &nbsp;${\rm Q}(x)$&nbsp; and &nbsp;$1/2 \cdot {\rm erfc}(x)$.
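The chain of equations above can be checked numerically.&nbsp; The sketch below uses assumed values &nbsp;$s_0 = 1$&nbsp; and &nbsp;$\sigma_d = 0.25$&nbsp; (hypothetical, chosen so that &nbsp;$s_0/\sigma_d = 4)$:&nbsp; it integrates the red and blue error areas beyond the threshold &nbsp;$E = 0$&nbsp; with a simple trapezoidal rule and confirms that their sum equals &nbsp;${\rm Q}(s_0/\sigma_d)$&nbsp; independently of &nbsp;$p_{\rm L}$:

```python
from math import pi, sqrt, exp, erfc

def gauss(x, mean, sigma):
    """Gaussian PDF with the given mean and standard deviation."""
    return exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def trapezoid(f, a, b, n=20000):
    """Plain trapezoidal rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

s0, sigma_d = 1.0, 0.25            # assumed amplitude and noise rms value

for p_L in (0.1, 0.5, 0.9):
    p_H = 1 - p_L
    # red area: symbol L sent (d_S = -s0), but d > 0  (tail truncated at 4.0, negligible rest)
    red = p_L * trapezoid(lambda d: gauss(d, -s0, sigma_d), 0.0, 4.0)
    # blue area: symbol H sent (d_S = +s0), but d < 0
    blue = p_H * trapezoid(lambda d: gauss(d, +s0, sigma_d), -4.0, 0.0)
    p_B = red + blue
    # sum of the areas is Q(s0/sigma_d), independent of p_L
    assert abs(p_B - Q(s0 / sigma_d)) < 1e-7
```

This is only an illustration of the area argument, not a replacement for the closed-form result &nbsp;$p_{\rm B} = {\rm Q}(s_0/\sigma_d)$.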
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; For the following,&nbsp; we assume that tables are available which list the values of the Gaussian error functions for arguments spaced at intervals of &nbsp;$0.1$.

With &nbsp;$s_0/\sigma_d = 4$,&nbsp; we obtain for the bit error probability according to the Q&ndash;function:
:$$p_{\rm B} = {\rm Q} (4) = 0.317 \cdot 10^{-4}\hspace{0.05cm}.$$
According to the second equation we get:
:$$p_{\rm B} = {1}/{2} \cdot {\rm erfc} ( {4}/{\sqrt{2} })= {1}/{2} \cdot {\rm erfc} ( 2.828)\approx {1}/{2} \cdot {\rm erfc} ( 2.8)= 0.375 \cdot 10^{-4}\hspace{0.05cm}.$$
*The first value is correct.&nbsp; With the second calculation method,&nbsp; one has to round or &ndash; even better &ndash; interpolate,&nbsp; which is difficult due to the strong non-linearity of this function.<br>
*With the given numerical values,&nbsp; the Q&ndash;function is therefore more suitable.&nbsp; Outside of exercise examples,&nbsp; $s_0/\sigma_d$&nbsp; will usually not be a round number.&nbsp; In that case,&nbsp; of course,&nbsp; the Q&ndash;function offers no advantage over &nbsp;${\rm erfc}(x)$.}}
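Example 2 is easy to reproduce with stdlib functions,&nbsp; again assuming only &nbsp;${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$.&nbsp; Evaluating &nbsp;${\rm erfc}$&nbsp; at the rounded argument &nbsp;$2.8$&nbsp; shows the tabulation error discussed above:

```python
from math import sqrt, erfc

def Q(x):
    """Q(x) expressed via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

p_exact = Q(4.0)                        # ~ 0.317e-4  (correct value)
p_erfc  = 0.5 * erfc(4.0 / sqrt(2))     # identical by definition of Q(x)
p_round = 0.5 * erfc(2.8)               # ~ 0.375e-4: effect of rounding 2.828 -> 2.8
print(p_exact, p_erfc, p_round)
```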
== Optimal binary receiver &ndash; "Matched Filter" realization ==
<br>
We further assume the &nbsp;[[Digital_Signal_Transmission/System_Components_of_a_Baseband_Transmission_System#Block_diagram_and_prerequisites_for_the_first_main_chapter|"conditions defined in the previous section"]].

[[File:EN_Dig_T_1_2_S4_v23.png|right|frame|Optimal binary receiver (matched filter variant) ]]

*Then we can assume for the frequency response and the impulse response of the receiver filter:
:$$H_{\rm E}(f) =  {\rm sinc}(f T)\hspace{0.05cm},$$
:$$H_{\rm E}(f)  \hspace{0.4cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ \hspace{0.4cm} h_{\rm E}(t)  =  \left\{ \begin{array}{c} 1/T  \\
1/(2T) \\ 0 \\ \end{array} \right.\quad
\begin{array}{*{1}c} {\rm{for}}
\\  {\rm{for}} \\  {\rm{for}}  \\ \end{array}\begin{array}{*{20}c}
|\hspace{0.05cm}t\hspace{0.05cm}|< T/2 \hspace{0.05cm},\\
|\hspace{0.05cm}t\hspace{0.05cm}|= T/2 \hspace{0.05cm},\\
|\hspace{0.05cm}t\hspace{0.05cm}|>T/2 \hspace{0.05cm}. \\
\end{array}$$
*Because of linearity,&nbsp; the signal component of the detection signal&nbsp; $d(t)$&nbsp; can be written as:

:$$d_{\rm S}(t) =  \sum_{(\nu)} a_\nu \cdot g_d ( t - \nu \cdot T)\hspace{0.2cm}{\rm with}\hspace{0.2cm}g_d(t) = g_s(t) \star h_{\rm E}(t) \hspace{0.05cm}.$$

*Convolution of the two rectangles of equal width &nbsp;$T$&nbsp; $($heights &nbsp;$s_0$&nbsp; and &nbsp;$1/T)$&nbsp; yields a triangular detection pulse &nbsp;$g_d(t)$&nbsp; with &nbsp;$g_d(t = 0) = s_0$.

*Because of &nbsp;$g_d(|t| \ge T) = 0$,&nbsp; the system is free of intersymbol interference &nbsp; &rArr; &nbsp; $d_{\rm S}(\nu  T)= \pm s_0$.

*The variance of the noise component&nbsp; $d_{\rm N}(t)$&nbsp; of the detection signal  &nbsp; &rArr; &nbsp; "detection noise power":

:$$\sigma _d ^2  = \frac{N_0 }{2} \cdot \int_{ - \infty }^{+ \infty } {\left| {H_{\rm E}( f )} \right|^2 \hspace{0.1cm}{\rm{d}}f} =  \frac{N_0 }{2}  \cdot \int_{-\infty }^{+ \infty } {\rm sinc}^2(f T)\hspace{0.1cm}{\rm{d}}f = \frac{N_0 }{2T} \hspace{0.05cm}.$$

*This gives the two equivalent equations for the&nbsp; '''bit error probability'''&nbsp; corresponding to the last section:

:$$p_{\rm B}  =  {\rm Q} \left( \sqrt{\frac{2 \cdot s_0^2 \cdot T}{N_0}}\right)=  {\rm Q} \left(\sqrt{\rho_d}\right)\hspace{0.05cm},\hspace{0.5cm} p_{\rm B} = {1}/{2} \cdot {\rm erfc} \left( \sqrt{{s_0^2 \cdot T}/{N_0}}\right)=  {1}/{2}\cdot {\rm erfc}\left(\sqrt{{\rho_d}/{2}}\right)\hspace{0.05cm}.$$
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; Used in this equation is the instantaneous&nbsp; '''signal&ndash;to&ndash;noise power ratio&nbsp; $\rm  (SNR)$&nbsp; $\rho_d$&nbsp; of the detection signal &nbsp;$d(t)$&nbsp; at the times &nbsp;$\nu T$''':

:$$\rho_d = \frac{d_{\rm S}^2(\nu  T)}{ {\rm E}\big[d_{\rm N}^2(\nu  T)\big ]}= {s_0^2}/{\sigma _d ^2}\hspace{0.05cm}.$$

For brevity,&nbsp; we sometimes refer to&nbsp; $\rho_d$&nbsp; simply as the&nbsp; "detection SNR".}}
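The chain of equations above can be traced numerically.&nbsp; A short sketch with assumed example values&nbsp; $(s_0 = 1\ {\rm V}$,&nbsp; $T = 1\ {\rm µs}$,&nbsp; $N_0 = 1.25 \cdot 10^{-7}\ {\rm V^2/Hz})$,&nbsp; chosen so that&nbsp; $\rho_d = 16$&nbsp; holds,&nbsp; i.e. the same operating point as in Example 2:

```python
import math

def Q(x):
    """Q(x) = 1/2 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Assumed example values (not from the text): s0 in V, T in s, N0 in V²/Hz
s0, T, N0 = 1.0, 1.0e-6, 1.25e-7

sigma_d2 = N0 / (2 * T)                          # detection noise power sigma_d²
rho_d = s0**2 / sigma_d2                         # detection SNR = 2·s0²·T/N0
p_Q = Q(math.sqrt(rho_d))                        # Q-function form
p_erfc = 0.5 * math.erfc(math.sqrt(rho_d / 2))   # equivalent erfc form
```

Both forms yield the same value&nbsp; $p_{\rm B} \approx 0.317 \cdot 10^{-4}$,&nbsp; since&nbsp; $\rho_d = 16$&nbsp; corresponds to&nbsp; $s_0/\sigma_d = 4$.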
A comparison of this result with the section&nbsp; [[Theory_of_Stochastic_Signals/Matched_Filter#Optimization_criterion_of_the_matched_filter|"Optimization Criterion of the Matched Filter"]]&nbsp; in the book&nbsp; "Theory of Stochastic Signals"&nbsp; shows that the receiver filter &nbsp;$H_{\rm E}(f)$&nbsp; is a matched filter adapted to the basic transmitter pulse &nbsp;$g_s(t)$:

:$$H_{\rm E}(f) = H_{\rm MF}(f) = K_{\rm MF}\cdot G_s^*(f)\hspace{0.05cm}.$$

Compared to the &nbsp;[[Theory_of_Stochastic_Signals/Matched_Filter#Matched_filter_optimization| "matched filter optimization"]]&nbsp; section,&nbsp; the following modifications apply here:
*The matched filter constant is set here to &nbsp;$K_{\rm MF} = 1/(s_0 \cdot T)$.&nbsp; Thus the frequency response &nbsp;$H_{\rm MF}(f)$&nbsp; is dimensionless.

*The detection time,&nbsp; which is freely selectable in general,&nbsp; is chosen here as &nbsp;$T_{\rm D} = 0$.&nbsp; However,&nbsp; this results in an acausal filter.

*The detection SNR can be represented for any basic transmitter pulse &nbsp;$g_s(t)$&nbsp; with spectrum &nbsp;$G_s(f)$&nbsp; as follows,&nbsp; where the right identity follows from &nbsp;[https://en.wikipedia.org/wiki/Parseval%27s_theorem "Parseval's theorem"]:

:$$\rho_d = \frac{2 \cdot E_{\rm B}}{N_0}\hspace{0.4cm}{\rm with}\hspace{0.4cm} E_{\rm B} = \int^{+\infty} _{-\infty} g_s^2(t)\,{\rm d}t = \int^{+\infty} _{-\infty} |G_s(f)|^2\,{\rm d}f\hspace{0.05cm}.$$

*$E_{\rm B}$&nbsp; is often referred to as the&nbsp; "energy per bit",&nbsp; and &nbsp;$E_{\rm B}/N_0$ &ndash; incorrectly &ndash; as the &nbsp;$\rm SNR$.&nbsp; Indeed,&nbsp; as the last equation shows,&nbsp; for binary baseband transmission&nbsp; $E_{\rm B}/N_0$&nbsp; differs from the detection SNR &nbsp;$\rho_d$&nbsp; by a factor of &nbsp;$2$.
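Parseval's theorem can be checked numerically for the NRZ rectangular pulse,&nbsp; whose spectrum is&nbsp; $G_s(f) = s_0 \cdot T \cdot {\rm sinc}(fT)$.&nbsp; A sketch with the assumed normalization&nbsp; $s_0 = T = 1$;&nbsp; truncating the slowly decaying&nbsp; ${\rm sinc}^2$&nbsp; tails limits the accuracy,&nbsp; but to well below one percent:

```python
import math

s0, T = 1.0, 1.0                # assumed normalized amplitude and duration
E_time = s0**2 * T              # time-domain energy of the NRZ rectangle

def sinc(u):
    """Normalized sinc function: sin(pi*u)/(pi*u), with sinc(0) = 1."""
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

# Riemann sum of |G_s(f)|² = (s0*T)²·sinc²(fT) over the range |f| <= 200/T
df = 0.005 / T
E_freq = df * sum((s0 * T * sinc(k * df * T))**2
                  for k in range(-40_000, 40_001))
```

Both routes give the energy per bit&nbsp; $E_{\rm B} = s_0^2 \cdot T$&nbsp; for this pulse.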
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; The&nbsp; '''bit error probability of the optimal binary receiver with bipolar signaling'''&nbsp; derived here can thus also be written as follows:

:$$p_{\rm B} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)=  {1}/{2} \cdot{\rm erfc} \left( \sqrt{ {E_{\rm B} }/{N_0} }\right)\hspace{0.05cm}.$$

This equation is valid for the realization with a matched filter as well as for the realization as&nbsp; "Integrate & Dump"&nbsp; (see next section).}}


For clarification of the topic discussed here,&nbsp; we refer to our HTML 5/JavaScript applet &nbsp;[[Applets:Matched_Filter_Properties|"Matched Filter Properties"]].
== Optimal binary receiver &ndash; "Integrate & Dump" realization ==
<br>
For rectangular NRZ transmission pulses,&nbsp; the matched filter can also be implemented as an integrator&nbsp; $($in each case over one symbol duration &nbsp;$T)$.&nbsp; Thus,&nbsp; the following applies to the detection signal at the detection times:

[[File:EN_Dig_T_1_2_S6_alt_kontrast.png|right|frame|Signals at the receivers&nbsp;  "MF"&nbsp; and&nbsp; "I&D"]]

:$$d(\nu \cdot T + T/2) = \frac {1}{T} \cdot \int^{\nu \cdot T + T/2} _{\nu \cdot T - T/2} r(t)\,{\rm d}t \hspace{0.05cm}.$$
The diagram illustrates the differences in the realization of the optimal binary receiver
*with matched filter $\rm (MF)$ &nbsp; &#8658; &nbsp; middle figure,&nbsp; and
*as "Integrate & Dump" $\rm (I\&D)$ &nbsp; &#8658; &nbsp; bottom figure.


One can see from these signal waveforms:
*The signal component &nbsp;$d_{\rm S}(t)$&nbsp; of the detection signal at the detection times &nbsp; &rArr; &nbsp; yellow markers $\rm (MF$: &nbsp; at &nbsp;$\nu \cdot T$, &nbsp; $\rm I\&D$: &nbsp; at &nbsp;$\nu \cdot T +T/2)$&nbsp; is equal to&nbsp; $\pm s_0$&nbsp; in both cases.

*The different detection times are due to the fact that the matched filter was assumed to be acausal&nbsp; (see last section),&nbsp; in contrast to&nbsp; "Integrate & Dump".

*For the matched filter receiver,&nbsp; the variance of the detection noise component is the same at all times &nbsp;$t$: &nbsp; ${\rm E}\big[d_{\rm N}^2(t)\big]= {\sigma _d ^2} = {\rm const.}$&nbsp;  In contrast,&nbsp; for the I&D&nbsp; receiver,&nbsp; the variance increases from symbol start to symbol end.

*At the times marked in yellow,&nbsp; the detection noise power is the same in both cases,&nbsp; resulting in the same bit error probability.&nbsp; With &nbsp;$E_{\rm B} = s_0^2 \cdot T$,&nbsp; the following holds again:

:$$\sigma _d ^2  =  \frac{N_0}{2}  \cdot \int_{-\infty }^{ +\infty } {\rm sinc}^2(f T)\hspace{0.1cm}{\rm{d}}f = \frac{N_0}{2T}$$

:$$\Rightarrow \hspace{0.3cm} p_{\rm B} = {\rm Q} \left( \sqrt{ s_0^2 /\sigma _d ^2} \right)=  {\rm Q} \left( \sqrt{{2 \cdot E_{\rm B}}/{N_0}}\right)\hspace{0.05cm}.$$
<br clear=all>
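The agreement between theory and measurement can be illustrated by a short Monte Carlo simulation of the&nbsp; "Integrate & Dump"&nbsp; receiver.&nbsp; This is only a sketch with assumed parameters&nbsp; $(s_0 = T = 1$,&nbsp; $N_0 = 0.25$ &nbsp; &rArr; &nbsp; $\rho_d = 8)$;&nbsp; the continuous integration is approximated by&nbsp; $K$&nbsp; samples per symbol,&nbsp; each discrete noise sample having variance&nbsp; $N_0/(2 \cdot \Delta t)$:

```python
import math
import random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
s0, T, N0 = 1.0, 1.0, 0.25           # assumed values -> rho_d = 2*s0²*T/N0 = 8
K = 8                                # samples per symbol duration
dt = T / K
sigma = math.sqrt(N0 / (2 * dt))     # std dev of the discrete AWGN samples

n_sym, errors = 200_000, 0
for _ in range(n_sym):
    a = random.choice((-1.0, 1.0))   # bipolar amplitude coefficient
    # integrate & dump: d = (1/T) * integral of r(t) over one symbol
    d = sum(a * s0 + random.gauss(0.0, sigma) for _ in range(K)) * dt / T
    if (d > 0) != (a > 0):
        errors += 1

ber = errors / n_sym                      # measured bit error rate h_B(N)
p_B = Q(math.sqrt(2 * s0**2 * T / N0))    # theoretical bit error probability
```

With these values&nbsp; $p_{\rm B} = {\rm Q}(\sqrt{8}) \approx 2.34 \cdot 10^{-3}$;&nbsp; the measured rate&nbsp; $h_{\rm B}(N)$&nbsp; fluctuates around this value with standard deviation&nbsp; $\sqrt{p_{\rm B}(1-p_{\rm B})/N}$.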
== Interpretation of the optimal receiver ==
<br>
In this section,&nbsp;  it is shown that the smallest possible bit error probability can be achieved with a receiver consisting of a linear receiver filter and a nonlinear decision:

:$$ p_{\rm B, \hspace{0.05cm}min} = {\rm Q} \left( \sqrt{{2 \cdot E_{\rm B}}/{N_0}}\right)= {1}/{2} \cdot {\rm erfc} \left( \sqrt{{ E_{\rm B}}/{N_0}}\right) \hspace{0.05cm}.$$

The resulting configuration is a special case of the so-called&nbsp; '''maximum a&ndash;posteriori receiver'''&nbsp; $\rm (MAP)$,&nbsp; which is discussed in the section &nbsp;[[Digital_Signal_Transmission/Optimale_Empfängerstrategien|"Optimal Receiver Strategies"]]&nbsp;  in the third main chapter of this book.

However,&nbsp; for the above equation to be valid,&nbsp; a number of conditions must be met:
*The transmitted signal &nbsp;$s(t)$&nbsp; is binary and bipolar&nbsp; (antipodal)&nbsp; and has the&nbsp; (average)&nbsp; energy &nbsp;$E_{\rm B}$&nbsp; per bit.&nbsp; The&nbsp; (average)&nbsp; transmitted power is therefore&nbsp; $E_{\rm B}/T$.

*An AWGN channel&nbsp; ("Additive White Gaussian Noise")&nbsp; with constant&nbsp; (one-sided)&nbsp; noise power density &nbsp;$N_0$&nbsp; is present.

*The receiver filter &nbsp;$H_{\rm E}(f)$&nbsp; is matched to the spectrum &nbsp;$G_s(f)$&nbsp; of the basic transmitter pulse according to the&nbsp; "matched filter criterion".

*The decision&nbsp; (threshold,&nbsp; detection times)&nbsp; is optimal.&nbsp; A causal realization of the matched filter can be compensated for by shifting the detection times.

*The above equation is valid independently of the basic transmitter pulse &nbsp;$g_s(t)$.&nbsp; Besides the noise power density &nbsp;$N_0$,&nbsp; only the energy &nbsp;$E_{\rm B}$&nbsp; spent for the transmission of a binary symbol is decisive for the bit error probability &nbsp;$p_{\rm B}$.

*A prerequisite for the applicability of the above equation is that the detection of a symbol is not interfered with by other symbols.&nbsp; Such &nbsp;[[Digital_Signal_Transmission/Ursachen_und_Auswirkungen_von_Impulsinterferenzen|"intersymbol interferences"]]&nbsp; increase the bit error probability &nbsp;$p_{\rm B}$&nbsp; enormously.

*If the absolute duration &nbsp;$T_{\rm S}$&nbsp; of the basic transmitter pulse is less than or equal to the symbol spacing &nbsp;$T$,&nbsp;  the above equation is always applicable,&nbsp; provided the matched filter criterion is fulfilled.

*The equation is also valid for Nyquist systems,&nbsp; where &nbsp;$T_{\rm S} > T$&nbsp; holds but intersymbol interference does not occur due to the equidistant zero crossings of the basic detection pulse &nbsp;$g_d(t)$.&nbsp; We will deal with this in the next chapter.
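The statement that only&nbsp; $E_{\rm B}$&nbsp; and&nbsp; $N_0$&nbsp; matter can be illustrated by a discrete-time matched filter simulation with a non-rectangular pulse.&nbsp; The following sketch uses an assumed triangular basic transmitter pulse,&nbsp; scaled to&nbsp; $E_{\rm B} = 1$,&nbsp; and compares the simulated error rate with&nbsp; ${\rm Q}(\sqrt{2 E_{\rm B}/N_0})$:

```python
import math
import random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(2)
N0 = 0.25                            # assumed one-sided noise power density
K, T = 16, 1.0                       # samples per symbol, symbol duration
dt = T / K

# Triangular basic transmitter pulse, scaled so that E_B = sum(g²)·dt = 1
g = [1.0 - abs(2 * (k + 0.5) / K - 1.0) for k in range(K)]
scale = math.sqrt(1.0 / (sum(x * x for x in g) * dt))
g = [scale * x for x in g]
E_B = sum(x * x for x in g) * dt     # = 1 by construction

sigma = math.sqrt(N0 / (2 * dt))     # std dev of the discrete AWGN samples
n_sym, errors = 100_000, 0
for _ in range(n_sym):
    a = random.choice((-1.0, 1.0))
    # matched filter decision variable: correlate the received samples with g
    d = sum((a * gk + random.gauss(0.0, sigma)) * gk * dt for gk in g)
    if (d > 0) != (a > 0):
        errors += 1

ber = errors / n_sym
p_B = Q(math.sqrt(2 * E_B / N0))     # depends only on E_B/N0, not on the shape
```

Replacing the triangle by any other pulse of energy&nbsp; $E_{\rm B} = 1$&nbsp; leaves the theoretical value unchanged,&nbsp; as long as the filter remains matched to the pulse.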
== Exercises for the chapter ==
<br>
[[Aufgaben:Exercise_1.2:_Bit_Error_Rate|Exercise 1.2: Bit Error Rate]]

[[Aufgaben:Exercise_1.2Z:_Bit_Error_Measurement|Exercise 1.2Z: Bit Error Measurement]]

[[Aufgaben:Exercise_1.3:_Rectangular_Functions_for_Transmitter_and_Receiver|Exercise 1.3: Rectangular Functions for Transmitter and Receiver]]

[[Aufgaben:Exercise_1.3Z:_Threshold_Optimization|Exercise 1.3Z: Threshold Optimization]]

==References==
<references/>

{{Display}}

Latest revision as of 14:48, 23 January 2023
