Digital Signal Transmission/Parameters of Digital Channel Models

{{Header
|Untermenü=Digital Channel Models
|Vorherige Seite=Trägerfrequenzsysteme mit nichtkohärenter Demodulation
|Nächste Seite=Binary Symmetric Channel (BSC)
}}

== # OVERVIEW OF THE FIFTH MAIN CHAPTER # ==
<br>
At the end of this book,&nbsp; '''digital channel models'''&nbsp; are discussed
*which do not describe the transmission behavior of a digital transmission system in great detail according to the individual system components,
*but rather globally on the basis of typical error structures.


Such channel models are mainly used to describe the inner block of&nbsp; &raquo;cascaded transmission systems&laquo;,&nbsp; if the performance of the outer system components&nbsp; $($e.g. encoder and decoder$)$&nbsp; is to be determined by simulation.

The following are dealt with in detail:
#&nbsp; The descriptive quantities&nbsp; &raquo;error correlation function&laquo;&nbsp; and&nbsp; &raquo;error distance distribution&laquo;,
#&nbsp; the&nbsp; &raquo;BSC model&laquo;&nbsp; ("Binary Symmetric Channel")&nbsp; for the description of statistically independent errors,
#&nbsp; the&nbsp; &raquo;burst error channel models&laquo;&nbsp; according to Gilbert-Elliott and McCullough,
#&nbsp; the&nbsp; &raquo;Wilhelm channel model&laquo;&nbsp; for the formulaic approximation of measured error curves,
#&nbsp; some notes on the&nbsp; &raquo;generation of error sequences&laquo;,&nbsp; for example with respect to&nbsp; &raquo;error distance simulation&laquo;,
#&nbsp; the effects of different error structures on&nbsp; &raquo;BMP files&laquo;&nbsp; $($for images$)$&nbsp; and&nbsp; &raquo;WAV files&laquo;&nbsp; $($for audio$)$.


<u>Note</u>: &nbsp; All BMP images and WAV audios of this chapter were generated with
*the&nbsp; (German language)&nbsp; Windows program&nbsp; "Digital Channel Models & Multimedia"
*from the&nbsp; (former)&nbsp; practical course&nbsp; "Simulation of Digital Transmission Systems"&nbsp; at the Chair of Communications Engineering of the TU Munich.
== Definition digitaler Kanalmodelle (1) ==
+
 
 +
== Application of analog channel models ==
 
<br>
 
<br>
For investigations of message transmission systems,&nbsp; suitable channel models are of great importance,&nbsp; because they
*are a prerequisite for&nbsp; &raquo;system simulation and optimization&laquo;,&nbsp; and<br>
*create consistent and reproducible boundary conditions.<br><br>


For digital signal transmission, there are both analog and digital channel models:
*Although an analog channel model does not have to reproduce the transmission channel in all physical details,&nbsp; it should describe its transmission behavior,&nbsp; including the dominant noise variables,&nbsp; with sufficient functional accuracy.

*In most cases,&nbsp; a compromise must be found between&nbsp; "mathematical manageability"&nbsp; and the&nbsp; "relationship to reality".<br>
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; The graphic shows an&nbsp; "analog channel model"&nbsp; within a&nbsp; "digital transmission system".&nbsp; This contains
*the&nbsp; [[Linear_and_Time_Invariant_Systems/System_Description_in_Frequency_Domain#Frequency_response_.E2.80.93_Transfer_function|"channel frequency response"]] &nbsp; &rArr; &nbsp; $H_{\rm K}(f)$&nbsp; to describe the linear distortions,&nbsp; and
[[File:EN Dig T 5 1 S1 v2.png|right|frame|Analog channel model within a digital transmission system|class=fit]]
*an additive noise signal&nbsp; $n(t)$,&nbsp; characterized by
**the&nbsp; [[Theory_of_Stochastic_Signals/Probability_Density_Function_(PDF)|"probability density function"]]&nbsp; $\rm (PDF)$ &nbsp; &rArr; &nbsp; $f_n(n)$,&nbsp; and
**the&nbsp; [[Theory_of_Stochastic_Signals/Power-Spectral_Density#Wiener-Khintchine_Theorem|"power-spectral density"]]&nbsp; $\rm (PSD)$ &nbsp; &rArr; &nbsp; ${\it \Phi}_n(f)$.<br>


A special case of this model is the so-called&nbsp; [[Modulation_Methods/Quality_Criteria#Some_remarks_on_the_AWGN_channel_model|"AWGN channel"]]&nbsp; ("Additive White Gaussian Noise")&nbsp; with the system properties

:$$H_{\rm K}(f) = 1\hspace{0.05cm},$$
:$${f}_{n}(n) = \frac{1}{\sqrt{2 \pi} \cdot \sigma} \cdot {\rm e}^{-n^2\hspace{-0.05cm}/(2 \sigma^2)}\hspace{0.05cm},$$
:$${\it \Phi}_{n}(f) = {\rm const.}\hspace{0.05cm}.$$

This simple model is suitable,&nbsp; for example,&nbsp; for describing a radio channel with time-invariant behavior,&nbsp; where the model is abstracted such that
*the actual band-pass channel is described in the&nbsp; "equivalent low-pass range",&nbsp; and<br>

*the attenuation,&nbsp; which depends on the frequency band and the transmission path length,&nbsp; is offset against the variance&nbsp; $\sigma^2$&nbsp; of the noise signal&nbsp; $n(t)$.}}
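The AWGN abstraction can be illustrated with a few lines of code. The following Python sketch is only an illustration under assumed values; the sample count, the noise rms value&nbsp; $\sigma$&nbsp; and the all-zero transmit samples are arbitrary choices and not taken from the text:

<pre>
import numpy as np

rng = np.random.default_rng(seed=1)

n_samples = 10_000    # assumed number of discrete-time samples
sigma     = 0.2       # assumed noise rms value (standard deviation, variance sigma^2)

s = np.zeros(n_samples)                                # assumed transmit samples; H_K(f) = 1 leaves them unchanged
n = rng.normal(loc=0.0, scale=sigma, size=n_samples)   # white Gaussian noise with PDF f_n(n)
r = s + n                                              # received samples of the AWGN model

print(f"measured noise variance: {n.var():.4f}   (sigma^2 = {sigma**2:.4f})")
</pre>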
To take&nbsp; '''time-variant characteristics'''&nbsp; into account,&nbsp; one must use other models,&nbsp; which are described in the book&nbsp; "Mobile Communications",&nbsp; such as
*[[Mobile_Communications/Probability_Density_of_Rayleigh_Fading|"Rayleigh fading"]],
*[[Mobile_Communications/Non-Frequency_Selective_Fading_With_Direct_Component#Example_of_signal_behaviour_with_Rice_fading|"Rice fading"]],&nbsp; and
*[[Mobile_Communications/Distance_Dependent_Attenuation_and_Shading#Lognormal_channel_model|"Lognormal fading"]].<br>


For&nbsp; '''wired transmission systems''',&nbsp; the specific frequency response of the transmission medium according to the specifications for
*[[Linear_and_Time_Invariant_Systems/Properties_of_Coaxial_Cables#Complex_propagation_function_of_coaxial_cables|"coaxial cable"]]&nbsp; and
*[[Linear_and_Time_Invariant_Systems/Properties_of_Balanced_Copper_Pairs#Access_network_of_a_telecommunications_system|"two-wire line"]]

in the book&nbsp; "Linear Time-Invariant Systems"&nbsp; must be taken into account in particular,&nbsp; but also the fact that white noise can no longer be assumed due to&nbsp; [[Examples_of_Communication_Systems/Methods_to_Reduce_the_Bit_Error_Rate_in_DSL#Noise_during_transmission|"extraneous noise"]]&nbsp; (crosstalk, electromagnetic fields, etc.).<br>

In the case of&nbsp; '''optical systems''',&nbsp; the multiplicatively acting,&nbsp; i.e. signal-dependent&nbsp; [[Theory_of_Stochastic_Signals/Poisson_Distribution#Applications_of_the_Poisson_distribution|"shot noise"]]&nbsp; must also be suitably incorporated into the analog channel model.<br>

== Definition of digital channel models ==
<br>
An analog channel model is characterized by analog input and output variables.&nbsp; In contrast,&nbsp; in a&nbsp; "digital channel model"&nbsp; $($sometimes referred to as&nbsp; "discrete"$)$,&nbsp; both the input and the output are discrete in time and value.

[[File:EN_Dig_T_5_1_S2.png|right|frame|Digital channel model and exemplary sequences|class=fit]]
In the following,&nbsp; let these be
*the&nbsp; "source symbol sequence"&nbsp; $\langle q_\nu \rangle$&nbsp; with&nbsp; $ q_\nu \in \{\rm L, \ H\}$,&nbsp; and

*the&nbsp; "sink symbol sequence"&nbsp; $ \langle v_\nu \rangle$&nbsp; with&nbsp; $ v_\nu \in \{\rm L, \ H\}$.


The indexing variable&nbsp; $\nu$&nbsp; can take values between&nbsp; $1$&nbsp; and&nbsp; $N$.<br>
As a comparison with the block diagram in&nbsp; [[Digital_Signal_Transmission/Parameters_of_Digital_Channel_Models#Application_of_analog_channel_models|"$\text{Example 1}$"]]&nbsp; shows:
*The&nbsp; "digital channel"&nbsp; is a simplifying model of the analog transmission channel,&nbsp; including the technical transmission and reception units.

*Simplifying,&nbsp; because this model only refers to the occurring transmission errors,&nbsp; represented by the&nbsp; '''error sequence'''&nbsp; $ \langle e_\nu \rangle$&nbsp; with

::<math>e_{\nu} =
 \left\{ \begin{array}{c} 1 \\
 0 \end{array} \right.\quad
\begin{array}{*{1}c} {\rm if}\hspace{0.15cm}\upsilon_\nu \ne q_\nu \hspace{0.05cm},
\\  {\rm if}\hspace{0.15cm} \upsilon_\nu = q_\nu \hspace{0.05cm}.\\ \end{array}</math>

*While&nbsp; $\rm L$&nbsp; and&nbsp; $\rm H$&nbsp; denote the possible symbols,&nbsp; which here stand for&nbsp; "Low"&nbsp; and&nbsp; "High",&nbsp; $ e_\nu \in \{\rm 0, \ 1\}$&nbsp; is a real number value. <br> &nbsp; &nbsp; &nbsp; <u>Note:</u> &nbsp; Often the symbols are also defined as&nbsp; $ q_\nu \in \{\rm 0, \ 1\}$&nbsp; and &nbsp;$ v_\nu \in \{\rm 0, \ 1\}$.&nbsp; To avoid confusion,&nbsp; we have used the somewhat unusual nomenclature here.<br>

*The&nbsp; "error sequence"&nbsp; $ \langle e_\nu \rangle$&nbsp; given in the graph&nbsp; $($a short code sketch follows this list$)$
**is obtained by comparing the two binary sequences&nbsp; $ \langle q_\nu \rangle$&nbsp; and&nbsp; $ \langle v_\nu \rangle$,<br>
**contains only information about the sequence of transmission errors and thus less information than an analog channel model,<br>
**is conveniently approximated by a random process with only a few parameters.<br><br>
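The comparison that produces the error sequence can be sketched directly in code. This is only a minimal Python illustration; the two ten-symbol sequences are assumed examples, not the sequences shown in the graph:

<pre>
# Minimal sketch: derive the error sequence e_nu by comparing
# the source symbol sequence q_nu with the sink symbol sequence v_nu.
q = list("LHLLHHLHLL")     # assumed source symbol sequence
v = list("LHHLHHLLLL")     # assumed sink symbol sequence

# e_nu = 1 if v_nu != q_nu, otherwise e_nu = 0
e = [1 if v_nu != q_nu else 0 for q_nu, v_nu in zip(q, v)]

print(e)    # -> [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]  (two transmission errors)
</pre>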
  
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp; The&nbsp; '''error sequence'''&nbsp; $ \langle e_\nu \rangle$&nbsp; allows statements about the error statistics,&nbsp; for example whether the errors are so-called
#&nbsp;"statistically independent errors",&nbsp; or<br>
#&nbsp;"burst errors".<br><br>

The following example is intended to illustrate these two error types.}}<br>
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; In the graph,&nbsp; we see in the center the BMP image&nbsp; "White"&nbsp; with&nbsp; $300&nbsp;&times;&nbsp;200$&nbsp; pixels.
[[File:EN_Dig_T_5_1_S2b_neu.png|right|frame|BMP image&nbsp; "White"&nbsp; with independent errors&nbsp; $($left$)$&nbsp; and burst errors&nbsp; $($right$)$|class=fit]]
*The left image shows the falsification with statistically independent errors &nbsp; &rArr; &nbsp; [[Digital_Signal_Transmission/Binary_Symmetric_Channel_(BSC)|"BSC model"]].

*The right image illustrates a burst error channel &nbsp; &rArr; &nbsp; [[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_Gilbert-Elliott|"Gilbert-Elliott model"]].<br>


<u>Notes:</u>
#&nbsp;A&nbsp; [[Digital_Signal_Transmission/Applications_for_Multimedia_Files#Images_in_BMP_format|"BMP graphic"]]&nbsp; is always saved line by line,&nbsp; which can be seen in the error bursts in the right image.
#&nbsp;The mean error probability is&nbsp; $2.5\%$&nbsp; in both cases;&nbsp; on average,&nbsp; every&nbsp; $40$th&nbsp; pixel is falsified&nbsp; $($here: &nbsp; white &nbsp;&#8658;&nbsp; black$)$.}}<br>
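The two error structures of&nbsp; $\text{Example 2}$&nbsp; can be mimicked numerically. The sketch below is only an illustration: the independent errors use the error probability&nbsp; $p = 0.025$&nbsp; quoted above, while the two-state burst generator and its transition probabilities are assumed placeholders; the proper description by the Gilbert-Elliott model follows in a later chapter.

<pre>
import numpy as np

rng = np.random.default_rng(seed=5)
N = 300 * 200                                    # number of pixels of the BMP image "White"

# (1) Statistically independent errors with p = 0.025, as quoted in Example 2
e_indep = (rng.random(N) < 0.025).astype(int)

# (2) Very simplified two-state burst generator; the states and transition probabilities
#     are assumed placeholders, not the Gilbert-Elliott parameters of the later chapter.
#     Stationary BAD probability 0.005/(0.005+0.095) = 0.05, error probability 0.5 in BAD,
#     so the mean error probability is again about 0.025.
p_good_to_bad, p_bad_to_good = 0.005, 0.095
state = "GOOD"
e_burst = np.zeros(N, dtype=int)
for nu in range(N):
    if state == "BAD":
        e_burst[nu] = rng.random() < 0.5
    if rng.random() < (p_good_to_bad if state == "GOOD" else p_bad_to_good):
        state = "BAD" if state == "GOOD" else "GOOD"

print("mean error probability, independent errors:", e_indep.mean())
print("mean error probability, burst errors:      ", e_burst.mean())
</pre>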
 
== Example application of digital channel models ==
<br>
Digital channel models are preferably used for cascaded transmission,&nbsp; as shown in the graph.&nbsp; You can see from this diagram:
[[File:EN_Dig_T_5_1_S3_neu2.png|right|frame|Model of a transmission system with encoder/decoder|class=fit]]
*The inner transmission system &ndash; consisting of modulator, analog channel, noise, demodulator, receiver filter, decision, clock recovery &ndash; is summarized in the block&nbsp; "Digital channel"&nbsp; marked in blue.<br>

*This inner block is characterized exclusively by its error sequence&nbsp; $ \langle e\hspace{0.05cm}'_\nu \rangle$,&nbsp; which refers to its input sequence&nbsp; $ \langle c_\nu \rangle$&nbsp; and output sequence&nbsp; $ \langle w_\nu \rangle$.&nbsp; It is obvious that this channel model provides less information than a detailed analog model considering all components.

*In contrast,&nbsp; the&nbsp; "outer error sequence"&nbsp; $ \langle e_\nu \rangle$&nbsp; refers to the source symbol sequence&nbsp; $ \langle q_\nu \rangle$&nbsp; and the sink symbol sequence&nbsp; $ \langle v_\nu \rangle$&nbsp; and thus to the overall system including the specific encoding and the decoder on the receiver side.<br>

*The comparison of the two error sequences with and without consideration of encoder/decoder allows conclusions to be drawn about the efficiency of the underlying coding and decoding.&nbsp; These two components are appropriate if and only if the outer comparator indicates fewer errors on average than the inner comparator.<br>
== Error sequence and average error probability ==
<br>
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The transmission behavior of a binary system is completely described by the&nbsp; '''error sequence'''&nbsp; $ \langle e_\nu \rangle$:

::<math>e_{\nu} =
\left\{ \begin{array}{c} 1 \\
0 \end{array} \right.\quad
\begin{array}{*{1}c} {\rm if}\hspace{0.15cm}\upsilon_\nu \ne q_\nu \hspace{0.05cm},
\\  {\rm if}\hspace{0.15cm} \upsilon_\nu = q_\nu \hspace{0.05cm}.\\ \end{array}</math>

*From this,&nbsp; the&nbsp; (mean)&nbsp; '''bit error probability'''&nbsp; can be calculated as follows:

::<math>p_{\rm M} =  {\rm E}\big[e \big] = \lim_{N \rightarrow \infty} \frac{1}{N}
\sum_{\nu = 1}^{N}e_{\nu}\hspace{0.05cm}.</math>

*It is assumed here that the random process generating the errors is&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Stationary_random_processes|"stationary"]]&nbsp; and&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Ergodic_random_processes|"ergodic"]],&nbsp; so that the error sequence&nbsp; $ \langle e_\nu \rangle$&nbsp; can also be formally described completely by the random variable&nbsp; $e \in \{0, \ 1\}$.&nbsp; Thus,&nbsp; the transition from time averaging to ensemble averaging is allowed.}}<br>
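Because of the assumed ergodicity,&nbsp; $p_{\rm M}$&nbsp; can be estimated from a single sufficiently long error sequence by exactly this time average. A minimal Python sketch, in which the error probability and the sequence length are assumed values:

<pre>
import numpy as np

rng = np.random.default_rng(seed=2)

N = 100_000                                 # assumed sequence length
p = 0.01                                    # assumed "true" error probability
e = (rng.random(N) < p).astype(int)         # error sequence with statistically independent errors

p_M = e.sum() / N                           # time average (1/N) * sum of e_nu as estimate of E[e]
print(f"estimated mean error probability p_M = {p_M:.5f}")
</pre>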
  
<u>Note:</u> &nbsp;
#In all other&nbsp; $\rm LNTwww$&nbsp; books,&nbsp; the&nbsp; "mean bit error probability"&nbsp; is denoted by&nbsp; $p_{\rm B}$.
#To avoid confusion in connection with the&nbsp; [[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_Gilbert-Elliott|"Gilbert&ndash;Elliott model"]],&nbsp; this renaming is unavoidable here.
#In the following we will therefore no longer refer to the bit error probability,&nbsp; but only to the&nbsp; "mean error probability"&nbsp; $p_{\rm M}$.<br>
== Error correlation function ==
<br>
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; Another important descriptive quantity of the digital channel models is the&nbsp; '''error correlation function'''&nbsp; &ndash; abbreviated&nbsp; $\rm ECF$:

::<math>\varphi_{e}(k) =  {\rm E}\big [e_{\nu} \cdot e_{\nu + k}\big ] = \overline{e_{\nu} \cdot e_{\nu + k} }\hspace{0.05cm}.</math>}}
The error correlation function has the following properties&nbsp; $($a short numerical sketch follows this list$)$:
*$\varphi_{e}(k)$&nbsp; is the&nbsp; (discrete-time)&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function#Auto-correlation_function_for_stationary_and_ergodic_processes|"auto-correlation function"]]&nbsp; of the random variable&nbsp; $e$,&nbsp; which is also discrete in time.&nbsp; The overline in the right-hand equation denotes the time averaging.<br>

*The error correlation value&nbsp; $\varphi_{e}(k)$&nbsp; provides statistical information about two sequence elements that are&nbsp; $k$&nbsp; apart,&nbsp; e.g. about&nbsp; $e_{\nu}$&nbsp; and&nbsp; $e_{\nu+ k}$.&nbsp; The intervening elements&nbsp; $e_{\nu+ 1}$, ... , $e_{\nu+ k-1}$&nbsp; do not affect the&nbsp; $\varphi_{e}(k)$&nbsp; value.<br>

*For stationary sequences,&nbsp; regardless of the error statistics,&nbsp; the following always holds because of&nbsp; $e \in \{0, \ 1\}$:
::<math>\varphi_{e}(k = 0)  =  {\rm E}\big[e_{\nu} \cdot e_{\nu}\big] =  {\rm
E}\big[e^2\big]= {\rm E}\big[e\big]= {\rm Pr}(e = 1)= p_{\rm M}\hspace{0.05cm},</math>
::<math>\varphi_{e}(k \rightarrow \infty)  =  {\rm E}\big[e_{\nu}\big] \cdot
{\rm E}\big[e_{\nu + k}\big] = p_{\rm M}^2\hspace{0.05cm}.</math>

*The error correlation function is an&nbsp; "at least weakly decreasing function".&nbsp; The slower the decay of the&nbsp; $\rm ECF$&nbsp; values,&nbsp; the longer the memory of the channel and the further the statistical ties of the error sequence reach.<br><br>
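For a simulated or measured error sequence, the ECF is estimated by the time averaging indicated by the overline. A minimal Python sketch; the error sequence with statistically independent errors and its parameters are assumed for illustration, so the estimate should be close to&nbsp; $p_{\rm M}$&nbsp; at&nbsp; $k=0$&nbsp; and close to&nbsp; $p_{\rm M}^2$&nbsp; for&nbsp; $k \ge 1$:

<pre>
import numpy as np

rng = np.random.default_rng(seed=3)

N = 200_000                                  # assumed sequence length
p = 0.01                                     # assumed error probability
e = (rng.random(N) < p).astype(float)        # assumed error sequence with statistically independent errors

def ecf(e, k_max):
    """Estimate phi_e(k) = E[e_nu * e_(nu+k)] by time averaging over one long sequence."""
    n = len(e)
    return [float(np.mean(e[:n - k] * e[k:])) for k in range(k_max + 1)]

phi = ecf(e, k_max=5)
print("phi_e(0)   (close to p_M   = 1e-2):", phi[0])
print("phi_e(k>0) (close to p_M^2 = 1e-4):", phi[1:])
</pre>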
 
{{GraueBox|TEXT= 
$\text{Example 3:}$&nbsp; In a binary transmission,&nbsp; $100$&nbsp; of the total&nbsp; $N = 10^5$&nbsp; transmitted binary symbols are falsified,&nbsp; so that the error sequence&nbsp; $ \langle e_\nu \rangle$
*consists of&nbsp; $100$&nbsp; "ones"
*and&nbsp; $99900$&nbsp; "zeros".


Thus:
#&nbsp;The mean error probability is&nbsp; $p_{\rm M} =10^{-3}$.
#&nbsp;The error correlation function&nbsp; $\varphi_{e}(k)$&nbsp; starts at&nbsp; $p_{\rm M} =10^{-3}$&nbsp; $($for&nbsp; $k = 0)$&nbsp; and tends towards&nbsp; $p_{\rm M}^2 =10^{-6}$&nbsp; for very large&nbsp; $k$&nbsp; values&nbsp; $($for&nbsp; $k \to \infty)$.
#&nbsp;So far,&nbsp; no statement can be made about the actual course of&nbsp; $\varphi_{e}(k)$&nbsp; with the information given here.}}<br>
== Relationship between error sequence and error distance ==
<br>
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; The&nbsp; '''error distance'''&nbsp; $a$&nbsp; is the number of correctly transmitted symbols between two channel errors plus&nbsp; $1$.&nbsp; The sketch illustrates this definition.
[[File:P ID1825 Dig T 5 1 S5 version1.png|right|frame|For the definition of the&nbsp; "error distance"|class=fit]]

Any information about the transmission behavior of the digital channel that is contained in the error sequence&nbsp; $ \langle e_\nu \rangle$&nbsp; is also contained in the sequence&nbsp; $ \langle a_n \rangle$&nbsp; of error distances.


<u>Note:</u>&nbsp; Since the sequences&nbsp; $ \langle e_\nu \rangle$&nbsp; and&nbsp; $ \langle a_n \rangle$&nbsp; are not synchronous,&nbsp; we use different indices&nbsp; $(\nu$&nbsp; resp.&nbsp; $n)$.}}<br>
In particular,&nbsp; we can see from the graph above:
*Since the first symbol was transmitted correctly&nbsp; $(e_1 = 0)$&nbsp; and the second incorrectly&nbsp; $(e_2 = 1)$,&nbsp; the error distance is&nbsp; $a_1 = 2$.<br>

*$a_2 = 4$&nbsp; indicates that three symbols were transmitted correctly between the first two errors&nbsp; $(e_2 = 1, \ e_6 = 1)$.<br>

*If two errors follow each other directly,&nbsp; the error distance is equal to&nbsp; $1$: &nbsp; $e_6 = 1, \ e_7 = 1$ &nbsp; &rArr; &nbsp; $a_3=1$.<br>

*The event&nbsp; "$a = k$"&nbsp; means simultaneously&nbsp; "$k-1$&nbsp; error&ndash;free symbols between two errors" &nbsp; &rArr; &nbsp; If an error occurred at time&nbsp; $\nu$,&nbsp; the next error follows at time&nbsp; $\nu + k$.<br>

*The set of values of the random variable&nbsp; $a$&nbsp; is the set of natural numbers,&nbsp; in contrast to the binary random variable&nbsp; $e$:
::<math>a \in \{ 1, 2, 3, ... \}\hspace{0.05cm},  \hspace{0.5cm}e \in \{ 0, 1 \}\hspace{0.05cm}.</math>

*The mean error probability can be determined from both random variables&nbsp; $($a short code sketch follows Example 4$)$:
:$${\rm E}\big[e \big]  =  {\rm Pr}(e = 1) =p_{\rm M}\hspace{0.05cm},$$
:$${\rm E}\big[a \big]  =  \sum_{k = 1}^{\infty} k \cdot {\rm Pr}(a = k) = {1}/{p_{\rm M}}\hspace{0.05cm}.$$
{{GraueBox|TEXT= 
$\text{Example 4:}$&nbsp;
*In the above sketched sequence,&nbsp; $16$&nbsp; of the total&nbsp; $N = 40$&nbsp; symbols are falsified &nbsp; &#8658; &nbsp; $p_{\rm M} = 0.4$.

*Accordingly,&nbsp; the expected value of the error distances gives

::<math>{\rm E}\big[a \big] = 1 \cdot {4}/{16}+  2 \cdot {5}/{16}+ 3 \cdot {4}/{16}+4 \cdot {1}/{16}+5 \cdot {2}/{16}= 2.5 =
  {1}/{p_{\rm M} }\hspace{0.05cm}.</math>}}
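The conversion of an error sequence into its error distances, and the check&nbsp; ${\rm E}\big[a\big] = 1/p_{\rm M}$,&nbsp; can be sketched in a few Python lines. The sequence below is assumed for illustration; only its first three distances&nbsp; $(2, 4, 1)$&nbsp; match the sketch above, and the first distance is counted from the start of the sequence, as in that sketch:

<pre>
# Minimal sketch: error distances a_n from an error sequence e_nu.
# Assumed sequence; its first three distances (2, 4, 1) match the sketch above.
e = [0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1]

# 1-based positions of the errors, as in the text
positions = [nu + 1 for nu, e_nu in enumerate(e) if e_nu == 1]

# a_1 is counted from the start of the sequence, every further a_n from the previous error
a = [positions[0]] + [positions[n] - positions[n - 1] for n in range(1, len(positions))]

p_M = sum(e) / len(e)
print("error distances a_n:", a)                           # -> [2, 4, 1, 2, 3]
print("E[a] =", sum(a) / len(a), "  1/p_M =", len(e) / sum(e))   # both 2.4 here (the sequence ends with an error)
</pre>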
  
== Error distance distribution ==
<br>
The&nbsp; [[Theory_of_Stochastic_Signals/Probability_Density_Function_(PDF)|"probability density function"]]&nbsp; $\rm (PDF)$&nbsp; of the discrete random variable&nbsp; $a \in \{1, 2, 3, \text{...}\}$&nbsp; is composed of an&nbsp; (infinite)&nbsp; sum of Dirac delta functions according to the chapter&nbsp; [[Theory_of_Stochastic_Signals/Probability_Density_Function_(PDF)#PDF_definition_for_discrete_random_variables|"PDF definition for discrete random variables"]]&nbsp; in the book&nbsp; "Stochastic Signal Theory":

::<math>f_a(a) = \sum_{k = 1}^{\infty}  {\rm Pr}(a = k) \cdot \delta (a-k)\hspace{0.05cm}.</math>

*We refer to this particular PDF as the&nbsp; "error distance density function".&nbsp; Based on the error sequence&nbsp; $ \langle e_\nu \rangle$,&nbsp; the probability that the error distance&nbsp; $a$&nbsp; is exactly equal to&nbsp; $k$&nbsp; can be expressed by the following conditional probability:

::<math>{\rm Pr}(a = k) = {\rm Pr}(e_{\nu + 1} = 0 \hspace{0.15cm}\cap \hspace{0.15cm} \text{...} \hspace{0.15cm}\cap \hspace{0.15cm}\hspace{0.05cm}
e_{\nu + k -1} = 0 \hspace{0.15cm}\cap \hspace{0.15cm}e_{\nu + k} = 1 \hspace{0.1cm}| \hspace{0.1cm} e_{\nu } = 1)\hspace{0.05cm}.</math>

*In the book&nbsp; "Stochastic Signal Theory"&nbsp; you will also find the definition of the&nbsp; [[Theory_of_Stochastic_Signals/Cumulative_Distribution_Function#CDF_for_discrete-valued_random_variables|"cumulative distribution function"]]&nbsp; $\rm (CDF)$&nbsp; of the discrete random variable&nbsp; $a$:

::<math>F_a(k) =  {\rm Pr}(a \le k) \hspace{0.05cm}.</math>

*This function is obtained from the PDF&nbsp; $f_a(a)$&nbsp; by integration from&nbsp; $1$&nbsp; to&nbsp; $k$.&nbsp; The function&nbsp; $F_a(k)$&nbsp; can take values between&nbsp; $0$&nbsp; and&nbsp; $1$&nbsp; $($including these two limits$)$&nbsp; and is weakly monotonically increasing.<br>
In the context of digital channel models,&nbsp; the literature deviates from this usual definition.

{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; Rather,&nbsp; here the&nbsp; '''error distance distribution'''&nbsp; $\rm (EDD)$&nbsp; gives the probability that the error distance&nbsp; $a$&nbsp; is greater than or equal to&nbsp; $k$:

::<math>V_a(k) =  {\rm Pr}(a \ge k) = 1 - \sum_{\kappa = 1}^{k-1}  {\rm Pr}(a = \kappa)\hspace{0.05cm}.</math>

*In particular:
:$$V_a(k = 1) = 1 \hspace{0.05cm},\hspace{0.5cm} \lim_{k \rightarrow \infty}V_a(k ) =
0 \hspace{0.05cm}.$$}}


The following relationship holds between the monotonically increasing function&nbsp; $F_a(k)$&nbsp; and the monotonically decreasing function&nbsp; $V_a(k)$:

::<math>F_a(k ) = 1-V_a(k +1)  \hspace{0.05cm}.</math>
{{GraueBox|TEXT= 
$\text{Example 5:}$&nbsp; The graph shows in the left sketch an arbitrary discrete error distance density function&nbsp; $f_a(a)$&nbsp; and the resulting integrated functions:
[[File:P ID1826 Dig T 5 1 S5b version1.png|right|frame|Discrete probability density function&nbsp; $f_a(a)$&nbsp; and the functions&nbsp; $F_a(k )$&nbsp; and&nbsp; $V_a(k )$|class=fit]]
*$F_a(k ) = {\rm Pr}(a \le k)$ &nbsp; &rArr; &nbsp; middle sketch,&nbsp; as well as

*$V_a(k ) = {\rm Pr}(a \ge k)$ &nbsp; &rArr; &nbsp; right sketch.<br>


For example,&nbsp; for&nbsp; $k = 2$,&nbsp; we obtain:
::<math>F_a( k =2 )  = {\rm Pr}(a = 1) + {\rm Pr}(a = 2) \hspace{0.05cm}, </math>
::<math>\Rightarrow \hspace{0.3cm} F_a( k =2 )  = 1-V_a(k = 3)= 0.7\hspace{0.05cm}, </math>
::<math> V_a(k =2 )  = 1 - {\rm Pr}(a = 1)  \hspace{0.05cm},</math>
::<math>\Rightarrow \hspace{0.3cm} V_a(k =2 )  = 1-F_a(k = 1) = 0.6\hspace{0.05cm}.</math>

For&nbsp; $k = 4$,&nbsp; the following results are obtained:
::<math>F_a(k = 4 )  =  {\rm Pr}(a \le 4) = 1
\hspace{0.05cm},  \hspace{0.5cm} V_a(k = 4 )  =  {\rm Pr}(a \ge 4)= {\rm Pr}(a = 4) = 0.1 = 1-F_a(k = 3)
\hspace{0.05cm}.</math>}}<br>
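The numbers of&nbsp; $\text{Example 5}$&nbsp; can be reproduced from a discrete PDF whose four probabilities are chosen so that they match the values quoted there&nbsp; $($this choice is an assumption consistent with the example; the left sketch itself is not reproduced$)$. A minimal Python sketch:

<pre>
# Minimal sketch: EDD V_a(k) and CDF F_a(k) from a discrete error distance PDF.
# The probabilities are chosen to be consistent with the numbers quoted in Example 5.
Pr_a = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

def F_a(k):                     # F_a(k) = Pr(a <= k)
    return sum(p for a, p in Pr_a.items() if a <= k)

def V_a(k):                     # V_a(k) = Pr(a >= k) = 1 - sum_(kappa=1)^(k-1) Pr(a = kappa)
    return 1.0 - sum(p for a, p in Pr_a.items() if a < k)

for k in range(1, 5):
    # check of the relationship F_a(k) = 1 - V_a(k+1)
    print(f"k={k}:  F_a={F_a(k):.1f}  V_a={V_a(k):.1f}  1-V_a(k+1)={1 - V_a(k + 1):.1f}")
</pre>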
==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_5.1:_Error_Distance_Distribution|Exercise 5.1:&nbsp; Error Distance Distribution]]

[[Aufgaben:Exercise_5.2:_Error_Correlation_Function|Exercise 5.2:&nbsp; Error Correlation Function]]

{{Display}}