{{Header
|Untermenü=Intersymbol Interference and Equalization Methods
|Vorherige Seite=Entscheidungsrückkopplung
|Nächste Seite=Viterbi–Empfänger
}}
  
== Considered scenario and prerequisites==
<br>
All digital receivers described so far always make symbol-wise decisions.&nbsp; If,&nbsp; on the other hand,&nbsp; several symbols are decided simultaneously,&nbsp; statistical bindings between the received signal samples can be taken into account during detection,&nbsp; which results in a lower error probability &ndash; but at the cost of an additional delay time.<br>
  
In this chapter&nbsp; $($partly also in the next one$)$&nbsp; the following transmission model is assumed.&nbsp; Compared to the last two chapters,&nbsp; the following differences arise: <br>

[[File:EN_Dig_T_3_7_S1.png|right|frame|Transmission system with optimal receiver|class=fit]]
  
*$Q \in \{Q_i\}$&nbsp; with&nbsp; $i = 0$, ... , $M-1$&nbsp; denotes a time-limited source symbol sequence &nbsp;$\langle q_\nu \rangle$,&nbsp; whose symbols are to be decided jointly by the optimal receiver.<br>

*If the source &nbsp;$Q$&nbsp; describes a sequence of &nbsp;$N$&nbsp; redundancy-free binary symbols,&nbsp; set &nbsp;$M = 2^N$.&nbsp; On the other hand,&nbsp; if the decision is symbol-wise, &nbsp;$M$&nbsp; specifies the level number of the digital source.<br>

*In this model,&nbsp; any channel distortions are attributed to the transmitter and are thus already included in the basic transmission pulse &nbsp;$g_s(t)$&nbsp; and the signal &nbsp;$s(t)$.&nbsp; This measure only serves to simplify the presentation and is not a restriction.<br>

*Knowing the currently applied received signal &nbsp;$r(t)$,&nbsp; the optimal receiver searches the set &nbsp;$\{Q_0$, ... , $Q_{M-1}\}$&nbsp; of possible source symbol sequences for the most likely transmitted sequence &nbsp;$Q_j$&nbsp; and outputs it as the sink symbol sequence &nbsp;$V$.<br>

*Before the actual decision algorithm,&nbsp; a numerical value &nbsp;$W_i$&nbsp; must be derived from the received signal &nbsp;$r(t)$&nbsp; for each possible sequence &nbsp;$Q_i$&nbsp; by suitable signal preprocessing.&nbsp; The larger &nbsp;$W_i$&nbsp; is,&nbsp; the greater the inference probability that &nbsp;$Q_i$&nbsp; was transmitted.<br>

*Signal preprocessing must provide the necessary noise power limitation and &ndash; in the case of strong channel distortions &ndash; sufficient pre-equalization of the resulting intersymbol interference.&nbsp; In addition,&nbsp; preprocessing also includes sampling for time discretization.<br>
  
== Maximum-a-posteriori and maximum–likelihood decision rule==
<br>
The&nbsp; (unconstrained)&nbsp; optimal receiver is called the&nbsp; "MAP receiver",&nbsp; where&nbsp; "MAP"&nbsp; stands for&nbsp; "maximum&ndash;a&ndash;posteriori".<br>
  
 
{{BlaueBox|TEXT=  
$\text{Definition:}$&nbsp; The&nbsp; '''maximum&ndash;a&ndash;posteriori receiver'''&nbsp; $($abbreviated&nbsp; $\rm MAP)$&nbsp; determines the &nbsp;$M$&nbsp; inference probabilities &nbsp;${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm}r(t)\big]$,&nbsp; and sets the output sequence &nbsp;$V = Q_j$,&nbsp; if for all &nbsp;$i = 0$, ... , $M-1$&nbsp; with &nbsp;$i \ne j$&nbsp; holds:

:$${\rm Pr}\big[Q_j \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big] > {\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big]
  \hspace{0.05cm}.$$}}<br>
  
*The &nbsp;[[Theory_of_Stochastic_Signals/Statistical_Dependence_and_Independence#Inference_probability|"inference probability"]]&nbsp; ${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big]$&nbsp; indicates the probability with which the sequence &nbsp;$Q_i$&nbsp; was sent when the received signal &nbsp;$r(t)$&nbsp; is present at the decision device.&nbsp; Using &nbsp;[[Theory_of_Stochastic_Signals/Statistical_Dependence_and_Independence#Conditional_Probability|"Bayes' theorem"]],&nbsp; this probability can be calculated as follows:

:$${\rm Pr}\big[Q_i \hspace{0.05cm}|\hspace{0.05cm} r(t)\big] = \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm}
  Q_i \big] \cdot {\rm Pr}\big[Q_i\big]}{{\rm Pr}\big[r(t)\big]}
  \hspace{0.05cm}.$$
  
*The MAP decision rule can thus be reformulated or simplified as follows: &nbsp; Set the sink symbol sequence &nbsp;$V = Q_j$,&nbsp; if for all &nbsp;$i \ne j$&nbsp; holds:

:$$\frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm}
  Q_j \big] \cdot {\rm Pr}\big[Q_j\big]}{{\rm Pr}\big[r(t)\big]} > \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm}
  Q_i\big] \cdot {\rm Pr}\big[Q_i\big]}{{\rm Pr}\big[r(t)\big]}\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm}
  Q_j\big] \cdot {\rm Pr}\big[Q_j\big]> {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm}
  Q_i \big] \cdot {\rm Pr}\big[Q_i\big] \hspace{0.05cm}.$$
  
A further simplification of this MAP decision rule leads to the&nbsp; "ML receiver",&nbsp; where&nbsp; "ML"&nbsp; stands for&nbsp; "maximum likelihood".<br>

{{BlaueBox|TEXT=  
$\text{Definition:}$&nbsp; The &nbsp;'''maximum likelihood receiver'''&nbsp; $($abbreviated&nbsp; $\rm ML)$ &nbsp; decides according to the conditional forward probabilities &nbsp;${\rm Pr}\big[r(t)\hspace{0.05cm} \vert \hspace{0.05cm}Q_i \big]$,&nbsp; and sets the output sequence &nbsp;$V = Q_j$,&nbsp; if for all &nbsp;$i \ne j$&nbsp; holds:

:$${\rm Pr}\big[ r(t)\hspace{0.05cm} \vert\hspace{0.05cm}
  Q_j \big] > {\rm Pr}\big[ r(t)\hspace{0.05cm} \vert \hspace{0.05cm}
  Q_i\big]  \hspace{0.05cm}.$$}}<br>
  
A comparison of these two definitions shows:
* For equally probable source symbols,&nbsp; the&nbsp; "ML receiver"&nbsp; and the&nbsp; "MAP receiver"&nbsp; use the same decision rules.&nbsp; Thus,&nbsp; they are equivalent.

*For symbols that are not equally probable,&nbsp; the&nbsp; "ML receiver"&nbsp; is inferior to the&nbsp; "MAP receiver"&nbsp; because it does not use all the available information for detection.<br>
  
  
 
{{GraueBox|TEXT=  
$\text{Example 1:}$&nbsp; To illustrate the&nbsp; "ML"&nbsp; and the&nbsp; "MAP"&nbsp; decision rule,&nbsp; we now construct a very simple example with only two source symbols &nbsp;$(M = 2)$.
[[File:EN_Dig_T_3_7_S2.png|right|frame|For clarification of MAP and ML receiver|class=fit]]
<br><br>&rArr; &nbsp; The two possible symbols &nbsp;$Q_0$&nbsp; and &nbsp;$Q_1$&nbsp; are represented by the transmitted signals &nbsp;$s = 0$&nbsp; and &nbsp;$s = 1$.
<br><br>
&rArr; &nbsp; The received signal can &ndash; for whatever reason &ndash; take three different values,&nbsp; namely &nbsp;$r = 0$, &nbsp;$r = 1$&nbsp; and additionally &nbsp;$r = 0.5$.
<br><br>
<u>Note:</u>
*The received values &nbsp;$r = 0$&nbsp; and &nbsp;$r = 1$&nbsp; are assigned to the transmitted values &nbsp;$s = 0 \ (Q_0)$&nbsp; and &nbsp;$s = 1 \ (Q_1)$,&nbsp; respectively,&nbsp; by both the ML and the MAP decision.

*In contrast,&nbsp; the two decision rules give different results for the received value &nbsp;$r = 0.5$:

:*The maximum likelihood&nbsp; $\rm (ML)$&nbsp; decision rule leads to the source symbol &nbsp;$Q_0$,&nbsp; because of:
::$${\rm Pr}\big [ r= 0.5\hspace{0.05cm}\vert\hspace{0.05cm}
  Q_0\big ] = 0.4 > {\rm Pr}\big [ r= 0.5\hspace{0.05cm} \vert \hspace{0.05cm}
  Q_1\big ] = 0.2 \hspace{0.05cm}.$$

:*The maximum&ndash;a&ndash;posteriori&nbsp; $\rm (MAP)$&nbsp; decision rule leads to the source symbol &nbsp;$Q_1$,&nbsp; since according to the auxiliary calculation in the graph:
::$${\rm Pr}\big [Q_1 \hspace{0.05cm}\vert\hspace{0.05cm}
  r= 0.5\big ] = 0.6 > {\rm Pr}\big [Q_0 \hspace{0.05cm}\vert\hspace{0.05cm}
  r= 0.5\big ] = 0.4 \hspace{0.05cm}.$$}}<br>
  
== Maximum likelihood decision for Gaussian noise ==
<br>
We now assume that the received signal &nbsp;$r(t)$&nbsp; is additively composed of a useful component &nbsp;$s(t)$&nbsp; and a noise component &nbsp;$n(t)$,&nbsp; where the noise is assumed to be Gaussian distributed and white &nbsp; &rArr; &nbsp; &nbsp;[[Digital_Signal_Transmission/System_Components_of_a_Baseband_Transmission_System#Transmission_channel_and_interference|"AWGN noise"]]:

:$$r(t) = s(t) + n(t) \hspace{0.05cm}.$$

For simplicity,&nbsp; any channel distortions are already included in the signal &nbsp;$s(t)$.<br>

The necessary noise power limitation is realized by an integrator;&nbsp; this corresponds to an averaging of the noise values in the time domain.&nbsp; If the integration interval is limited to the range &nbsp;$t_1$&nbsp; to &nbsp;$t_2$,&nbsp; a quantity &nbsp;$W_i$&nbsp; can be derived for each source symbol sequence &nbsp;$Q_i$,&nbsp; which is a measure for the conditional probability &nbsp;${\rm Pr}\big [ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big ] $:
 
:$$W_i  =  \int_{t_1}^{t_2} r(t) \cdot s_i(t) \,{\rm d} t -
{1}/{2} \cdot \int_{t_1}^{t_2} s_i^2(t) \,{\rm d} t=
I_i - {E_i}/{2} \hspace{0.05cm}.$$

This decision variable &nbsp;$W_i$&nbsp; can be derived using the &nbsp;$k$&ndash;dimensional&nbsp; [[Theory_of_Stochastic_Signals/Two-Dimensional_Random_Variables#Joint_probability_density_function|"joint probability density"]]&nbsp; of the noise&nbsp; $($with &nbsp;$k \to \infty)$&nbsp; and some limiting processes.&nbsp; The result can be interpreted as follows:
*Integration is used for noise power reduction by averaging.&nbsp; If &nbsp;$N$&nbsp; binary symbols are decided simultaneously by the maximum likelihood detector,&nbsp; set &nbsp;$t_1 = 0 $&nbsp; and &nbsp;$t_2 = N \cdot T$&nbsp; for a distortion-free channel.

*The first term of the above decision variable &nbsp;$W_i$&nbsp; is equal to the&nbsp; [[Theory_of_Stochastic_Signals/Cross-Correlation_Function_and_Cross_Power-Spectral_Density#Definition_of_the_cross-correlation_function| "energy cross-correlation function"]]&nbsp; formed over the finite time interval &nbsp;$NT$&nbsp; between &nbsp;$r(t)$&nbsp; and &nbsp;$s_i(t)$&nbsp; at the point &nbsp;$\tau = 0$:

:$$I_i  = \varphi_{r, \hspace{0.08cm}s_i} (\tau = 0) =  \int_{0}^{N \cdot T}r(t) \cdot s_i(t) \,{\rm d} t
\hspace{0.05cm}.$$

*The second term gives half the energy of the considered useful signal &nbsp;$s_i(t)$,&nbsp; which has to be subtracted.&nbsp; The energy is equal to the auto-correlation function&nbsp; $\rm (ACF)$&nbsp; of &nbsp;$s_i(t)$&nbsp; at the point &nbsp;$\tau = 0$:

::<math>E_i  =  \varphi_{s_i} (\tau = 0) = \int_{0}^{N \cdot T}
s_i^2(t) \,{\rm d} t \hspace{0.05cm}.</math>

*In the case of a distorting channel,&nbsp; the channel impulse response &nbsp;$h_{\rm K}(t)$&nbsp; is not Dirac-shaped,&nbsp; but extended,&nbsp; for example,&nbsp; to the range &nbsp;$-T_{\rm K} \le t \le +T_{\rm K}$.&nbsp; In this case,&nbsp; $t_1 = -T_{\rm K}$&nbsp; and &nbsp;$t_2 = N \cdot T +T_{\rm K}$&nbsp; must be used as integration limits.<br>
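The decision variable &nbsp;$W_i = I_i - E_i/2$&nbsp; can be evaluated numerically by approximating the two integrals with sums over signal samples.&nbsp; The following minimal sketch illustrates this for two candidate signals;&nbsp; the rectangular signal shapes and the normalization are assumptions made only for this illustration:

<pre>
import numpy as np

# Minimal sketch: evaluate the decision variable W_i = I_i - E_i/2 in discrete
# time. The rectangular signals below are illustrative assumptions, not part
# of the original text.
T, K = 1.0, 100                    # symbol duration and samples per symbol
dt = T / K
N = 3                              # symbols decided jointly
t = np.arange(N * K) * dt

def rect_signal(amplitudes):
    """NRZ rectangular signal built from one amplitude per symbol."""
    return np.repeat(np.asarray(amplitudes, dtype=float), K)

# two candidate useful signals s_i(t) and a received signal (here noise-free)
s = [rect_signal(a) for a in ([+1, -1, +1], [+1, +1, +1])]
r = rect_signal([+1, -1, +1])      # assume the first candidate was transmitted

for i, s_i in enumerate(s):
    I_i = np.sum(r * s_i) * dt     # I_i = integral of r(t)*s_i(t)
    E_i = np.sum(s_i ** 2) * dt    # E_i = energy of s_i(t)
    W_i = I_i - E_i / 2            # corrected decision variable
    print(f"candidate {i}:  I = {I_i:.2f},  E = {E_i:.2f},  W = {W_i:.2f}")
</pre>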
  
== Matched filter receiver vs. correlation receiver ==
<br>
There are various circuit implementations of the maximum likelihood&nbsp; $\rm (ML)$&nbsp; receiver.

&rArr; &nbsp; For example,&nbsp; the required integrals can be obtained by linear filtering and subsequent sampling.&nbsp; This realization form is called&nbsp; '''matched filter receiver''',&nbsp; because here the impulse responses of the &nbsp;$M$&nbsp; parallel filters have the same shape as the useful signals &nbsp;$s_0(t)$, ... , $s_{M-1}(t)$.&nbsp; <br>
*The&nbsp; $M$&nbsp; decision variables &nbsp;$I_i$&nbsp; are then equal to the convolution products &nbsp;$r(t) \star s_i(t)$&nbsp; at time &nbsp;$t= 0$.

*For example,&nbsp; the&nbsp; "optimal binary receiver"&nbsp; described in detail in the chapter&nbsp; [[Digital_Signal_Transmission/Optimization_of_Baseband_Transmission_Systems#Prerequisites_and_optimization_criterion|"Optimization of Baseband Transmission Systems"]]&nbsp; allows a maximum likelihood&nbsp; $\rm (ML)$&nbsp; decision with the parameters &nbsp;$M = 2$&nbsp; and &nbsp;$N = 1$.<br>
  
&rArr; &nbsp; A second realization form is provided by the &nbsp;'''correlation receiver'''&nbsp; according to the following graph.&nbsp; One can see from this block diagram for the indicated parameters:
[[File:EN_Dig_T_3_7_S4.png|right|frame|Correlation receiver for &nbsp;$N = 3$, &nbsp;$t_1 = 0$, &nbsp;$t_2 = 3T$ &nbsp; and &nbsp; $M = 2^3 = 8$ |class=fit]]

*The drawn correlation receiver forms a total of &nbsp;$M = 8$&nbsp; cross-correlation functions between the received signal &nbsp;$r(t) = s_k(t) + n(t)$&nbsp; and the possible transmitted signals &nbsp;$s_i(t), \ i = 0$, ... , $M-1$.&nbsp; The following description assumes that the useful signal &nbsp;$s_k(t)$&nbsp; has been transmitted.<br>

*This receiver searches for the maximum value &nbsp;$W_j$&nbsp; of all correlation values and outputs the corresponding sequence &nbsp;$Q_j$&nbsp; as the sink symbol sequence &nbsp;$V$.&nbsp; Formally,&nbsp; the&nbsp; $\rm ML$&nbsp; decision rule can be expressed as follows:
:$$V = Q_j, \hspace{0.2cm}{\rm if}\hspace{0.2cm} W_i < W_j
\hspace{0.2cm}{\rm for}\hspace{0.2cm} {\rm
all}\hspace{0.2cm} i \ne j \hspace{0.05cm}.$$

*If we further assume that all transmitted signals &nbsp;$s_i(t)$&nbsp; have exactly the same energy,&nbsp; the subtraction of &nbsp;$E_i/2$&nbsp; in all branches can be omitted.&nbsp; In this case,&nbsp; the following correlation values are compared &nbsp;$(i = 0$, ... , $M-1)$:

::<math>I_i  =  \int_{0}^{NT} s_k(t) \cdot s_i(t) \,{\rm d} t +
\int_{0}^{NT} n(t) \cdot s_i(t) \,{\rm d} t
\hspace{0.05cm}.</math>

*With high probability, &nbsp;$I_j = I_k$&nbsp; is larger than all other comparison values&nbsp; $I_{j \ne k}$ &nbsp; &rArr; &nbsp;  correct decision.&nbsp; However,&nbsp; if the noise &nbsp;$n(t)$&nbsp; is too large,&nbsp; the correlation receiver will also make wrong decisions.<br>
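The complete correlation receiver described above can be sketched in a few lines.&nbsp; The following Python fragment assumes bipolar NRZ rectangular pulses with the mapping &nbsp;$\rm L \Rightarrow -1$, &nbsp;$\rm H \Rightarrow +1$,&nbsp; an arbitrarily chosen noise level and &nbsp;$N = 3$;&nbsp; the decision rule &nbsp;$V = Q_j$&nbsp; with maximum &nbsp;$W_j$&nbsp; follows the text:

<pre>
import numpy as np
from itertools import product

# Sketch of a correlation receiver for N=3 binary symbols (M=2^N=8 candidate
# sequences). Bipolar NRZ rectangular pulses and the noise level are
# illustrative assumptions; the decision V = Q_j with W_j maximal follows
# the ML rule given in the text.
rng = np.random.default_rng(1)
T, K, N = 1.0, 100, 3
dt = T / K

def tx_signal(bits):
    """Bipolar NRZ signal: 'H' -> +1, 'L' -> -1, each held for one symbol."""
    return np.repeat([+1.0 if b == "H" else -1.0 for b in bits], K)

candidates = ["".join(p) for p in product("LH", repeat=N)]   # Q_0 ... Q_7
signals = {q: tx_signal(q) for q in candidates}

transmitted = "HLH"                          # assume this sequence was sent
noise = 0.5 * rng.standard_normal(N * K)     # illustrative AWGN samples
r = signals[transmitted] + noise

# correlation values I_i and corrected decision variables W_i = I_i - E_i/2
W = {}
for q, s_i in signals.items():
    I_i = np.sum(r * s_i) * dt
    E_i = np.sum(s_i ** 2) * dt
    W[q] = I_i - E_i / 2

decision = max(W, key=W.get)                 # ML decision: largest W_i
print("transmitted:", transmitted, " decided:", decision)
</pre>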
  
== Representation of the correlation receiver in the tree diagram==
<br>
Let us illustrate the operation of the correlation receiver in the tree diagram,&nbsp; where the &nbsp;$2^3 = 8$&nbsp; possible source symbol sequences &nbsp;$Q_i$&nbsp; of length &nbsp;$N = 3$&nbsp; are represented by bipolar rectangular transmitted signals &nbsp;$s_i(t)$.

[[File:P ID1458 Dig T 3 7 S5a version1.png|right|frame|All&nbsp; $2^3=8$&nbsp; possible bipolar transmitted signals&nbsp; $s_i(t)$&nbsp; for &nbsp;$N = 3$|class=fit]]
The possible symbol sequences &nbsp;$Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$&nbsp; and the associated transmitted signals &nbsp;$s_0(t)$, ... , $s_7(t)$&nbsp; are shown on the right.

*Due to the bipolar amplitude coefficients and the rectangular shape,&nbsp; all signal energies are equal:&nbsp; $E_0 =  \text{...} = E_7 = N \cdot E_{\rm B}$,&nbsp; where &nbsp;$E_{\rm B}$&nbsp; denotes the energy of a single pulse of duration&nbsp; $T$.

*Therefore,&nbsp; the subtraction of the &nbsp;$E_i/2$&nbsp; term in all branches can be omitted &nbsp; &rArr; &nbsp; a decision based on the correlation values &nbsp;$I_i$&nbsp; gives results just as reliable as maximizing the corrected values &nbsp;$W_i$.
<br clear=all>
  
  
 
{{GraueBox|TEXT=  
$\text{Example 2:}$&nbsp; The graph shows the running integral values,&nbsp; assuming the actually transmitted signal &nbsp;$s_5(t)$&nbsp; and the noise-free case.&nbsp; For this case,&nbsp; the time-dependent integral values and the final integral values are:
[[File:EN_Dig_T_3_7_S5b.png|right|frame|Tree diagram of the correlation receiver in the noise-free case|class=fit]]

:$$i_i(t)  =  \int_{0}^{t} r(\tau) \cdot s_i(\tau) \,{\rm d}
\tau =  \int_{0}^{t} s_5(\tau) \cdot s_i(\tau) \,{\rm d}
\tau \hspace{0.3cm}
\Rightarrow \hspace{0.3cm}I_i = i_i(3T). $$

The graph can be interpreted as follows:
*Because of the rectangular shape of the signals &nbsp;$s_i(t)$,&nbsp; all function curves &nbsp;$i_i(t)$&nbsp; are piecewise linear.&nbsp; The final values normalized to &nbsp;$E_{\rm B}$&nbsp; are &nbsp;$+3$, &nbsp;$+1$, &nbsp;$-1$&nbsp; and &nbsp;$-3$.<br>

*The maximum final value is &nbsp;$I_5 = 3 \cdot E_{\rm B}$&nbsp; $($red curve$)$,&nbsp; since the signal &nbsp;$s_5(t)$&nbsp; was actually sent.&nbsp; Without noise,&nbsp; the correlation receiver thus naturally always makes the correct decision.<br>

*The blue curve &nbsp;$i_1(t)$&nbsp; leads to the final value &nbsp;$I_1 = -E_{\rm B} + E_{\rm B}+ E_{\rm B} = E_{\rm B}$,&nbsp; since &nbsp;$s_1(t)$&nbsp; differs from &nbsp;$s_5(t)$&nbsp; only in the first bit.&nbsp; The comparison values &nbsp;$I_4$&nbsp; and &nbsp;$I_7$&nbsp; are also equal to &nbsp;$E_{\rm B}$.<br>

*Since &nbsp;$s_0(t)$, &nbsp;$s_3(t)$&nbsp; and &nbsp;$s_6(t)$&nbsp; differ from the transmitted &nbsp;$s_5(t)$&nbsp; in two bits, &nbsp;$I_0 = I_3 = I_6 =-E_{\rm B}$.&nbsp; The green curve belongs to &nbsp;$s_6(t)$:&nbsp; it initially increases&nbsp; $($the first bit matches$)$&nbsp; and then decreases over two bits.

*The purple curve leads to the final value &nbsp;$I_2 = -3 \cdot E_{\rm B}$.&nbsp; The corresponding signal &nbsp;$s_2(t)$&nbsp; differs from &nbsp;$s_5(t)$&nbsp; in all three symbols,&nbsp; and &nbsp;$s_2(t) = -s_5(t)$&nbsp; holds.}}<br><br>
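The end values of this noise-free tree diagram can be checked numerically.&nbsp; The sketch below assumes unit amplitude and the natural binary indexing &nbsp;$Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$&nbsp; with &nbsp;$\rm L \Rightarrow -1$&nbsp; and &nbsp;$\rm H \Rightarrow +1$,&nbsp; so that the transmitted sequence &nbsp;$Q_5$&nbsp; corresponds to &nbsp;$\rm HLH$:

<pre>
import numpy as np
from itertools import product

# Sketch reproducing the noise-free tree diagram of Example 2: with s_5(t)
# transmitted, the final values I_i = i_i(3T) are +3, +1, -1, -3 (times E_B).
# The mapping L -> -1, H -> +1 and unit amplitude/duration are assumptions.
T, K, N = 1.0, 100, 3
dt = T / K
E_B = 1.0 * T                                   # energy of a single pulse

def bipolar(bits):
    return np.repeat([+1.0 if b == "H" else -1.0 for b in bits], K)

sequences = ["".join(p) for p in product("LH", repeat=N)]   # Q_0=LLL ... Q_7=HHH
r = bipolar(sequences[5])                       # noise-free, s_5(t) transmitted

for i, q in enumerate(sequences):
    I_i = np.sum(r * bipolar(q)) * dt
    print(f"I_{i} ({q}) = {I_i / E_B:+.0f} * E_B")
</pre>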
  
 
{{GraueBox|TEXT=  
$\text{Example 3:}$&nbsp; The graph describes the same situation as &nbsp;$\text{Example 2}$,&nbsp; but now the received signal &nbsp;$r(t) = s_5(t)+ n(t)$&nbsp; is assumed.&nbsp; The variance of the AWGN noise &nbsp;$n(t)$&nbsp; here is &nbsp;$\sigma_n^2 = 4 \cdot E_{\rm B}/T$.
[[File:EN_Dig_T_3_7_S5c_neu.png|right|frame|Tree diagram of the correlation receiver with noise &nbsp; $(\sigma_n^2 = 4 \cdot E_{\rm B}/T)$ |class=fit]]
<br><br><br>Compared to the noise-free case,&nbsp; one can see from this graph:
*Due to the noise component &nbsp;$n(t)$,&nbsp; the curves are no longer straight,&nbsp; and the final values differ slightly from those without noise.

*In the considered example,&nbsp; the correlation receiver nevertheless decides correctly with high probability,&nbsp; since the difference between &nbsp;$I_5$&nbsp; and the second largest value &nbsp;$I_7$&nbsp; is relatively large: &nbsp;$1.65\cdot E_{\rm B}$.&nbsp; <br>

*However,&nbsp; the error probability in this example is no better than that of the matched filter receiver with symbol-wise decision.&nbsp; In accordance with the chapter&nbsp;  [[Digital_Signal_Transmission/Optimization_of_Baseband_Transmission_Systems#Prerequisites_and_optimization_criterion|"Optimization of Baseband Transmission Systems"]],&nbsp; the following also applies here:

:$$p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)
  = {1}/{2} \cdot {\rm erfc} \left( \sqrt{ { E_{\rm B} }/{N_0} }\right) \hspace{0.05cm}.$$}}
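This error probability can be evaluated directly,&nbsp; for example via the relation &nbsp;${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$.&nbsp; The short sketch below confirms that both forms of the formula give the same value;&nbsp; the chosen &nbsp;$E_{\rm B}/N_0$&nbsp; values are purely illustrative:

<pre>
import numpy as np
from scipy.special import erfc

# Minimal sketch: evaluate the symbol error probability
#   p_S = Q(sqrt(2*E_B/N_0)) = 1/2 * erfc(sqrt(E_B/N_0))
# for a few example values of E_B/N_0 (illustrative, not from the text).
def Q(x):
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

for ebn0_db in (4.0, 8.0, 10.0):
    ebn0 = 10 ** (ebn0_db / 10)
    p_s_q    = Q(np.sqrt(2 * ebn0))
    p_s_erfc = 0.5 * erfc(np.sqrt(ebn0))
    print(f"E_B/N_0 = {ebn0_db:4.1f} dB:  p_S = {p_s_q:.3e}  (check: {p_s_erfc:.3e})")
</pre>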
  
 
{{BlaueBox|TEXT=  
$\text{Conclusions:}$&nbsp; 
#If the input signal does not have statistical bindings &nbsp;$\text{(Example 2)}$,&nbsp; the joint decision of &nbsp;$N$&nbsp; symbols brings no improvement over the symbol-wise decision &nbsp; <br>&rArr; &nbsp; $p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)$.
#In the presence of statistical bindings,&nbsp; the joint decision of &nbsp;$N$&nbsp; symbols noticeably reduces the error probability compared to the symbol-wise decision,&nbsp; since the maximum likelihood receiver takes the bindings into account.
#Such bindings can either be created deliberately by transmitter-side coding&nbsp; $($see the &nbsp;$\rm LNTwww$ book&nbsp; [[Channel_Coding|"Channel Coding"]]$)$&nbsp; or arise unintentionally from&nbsp; (linear)&nbsp; channel distortions.<br>
#In the presence of such&nbsp; "intersymbol interferences",&nbsp; the calculation of the error probability is much more difficult.&nbsp; However,&nbsp; comparable approximations as for the Viterbi receiver can be used,&nbsp; which are given at the &nbsp;[[Digital_Signal_Transmission/Viterbi_Receiver#Bit_error_probability_with_maximum_likelihood_decision|end of the next chapter]].&nbsp; }}<br>
  
== Correlation receiver with unipolar signaling ==
<br>
So far,&nbsp; we have always assumed binary&nbsp; '''bipolar'''&nbsp; signaling when describing the correlation receiver:

:$$a_\nu  =  \left\{ \begin{array}{c} +1  \\
  -1 \\  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm{for}}
\\  {\rm{for}}  \\ \end{array}\begin{array}{*{20}c}
q_\nu = \mathbf{H} \hspace{0.05cm}, \\
q_\nu = \mathbf{L} \hspace{0.05cm}.  \\
\end{array}$$
Now we consider the case of binary&nbsp; '''unipolar'''&nbsp; digital signaling:

:$$a_\nu  =  \left\{ \begin{array}{c} 1  \\
  0 \\  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm{for}}
\\  {\rm{for}}  \\ \end{array}\begin{array}{*{20}c}
q_\nu = \mathbf{H} \hspace{0.05cm}, \\
q_\nu = \mathbf{L} \hspace{0.05cm}.  \\
\end{array}$$
[[File:P ID1462 Dig T 3 7 S5c version1.png|right|frame|Possible unipolar transmitted signals for &nbsp;$N = 3$|class=fit]]
The &nbsp;$2^3 = 8$&nbsp; possible source symbol sequences &nbsp;$Q_i$&nbsp; of length &nbsp;$N = 3$&nbsp; are now represented by unipolar rectangular transmitted signals &nbsp;$s_i(t)$.&nbsp;

Listed on the right are the eight symbol sequences and the transmitted signals 

:$$Q_0 = \rm LLL, \text{ ... },\ Q_7 = \rm HHH,$$
:$$s_0(t), \text{ ... },\ s_7(t).$$  

By comparing with the &nbsp;[[Digital_Signal_Transmission/Optimal_Receiver_Strategies#Representation_of_the_correlation_receiver_in_the_tree_diagram|"corresponding table"]]&nbsp; for bipolar signaling,&nbsp; one can see:

*Due to the unipolar amplitude coefficients,&nbsp; the signal energies &nbsp;$E_i$&nbsp; are now different,&nbsp; e.g. &nbsp;$E_0 =  0$&nbsp; and &nbsp;$E_7 = 3 \cdot E_{\rm B}$.

*Here a decision based on the final integral values &nbsp;$I_i$&nbsp; alone does not lead to the correct result.&nbsp; Instead,&nbsp; the corrected comparison values &nbsp;$W_i = I_i- E_i/2$&nbsp; must now be used.<br>
 
  
  
 
{{GraueBox|TEXT=  
$\text{Example 4:}$&nbsp; The graph shows the running integral values,&nbsp; again assuming the actually transmitted signal &nbsp;$s_5(t)$&nbsp; and the noise-free case.&nbsp; The corresponding bipolar equivalent was considered in&nbsp; [[Digital_Signal_Transmission/Optimal_Receiver_Strategies#Representation_of_the_correlation_receiver_in_the_tree_diagram|Example 2]].

[[File:EN_Dig_T_3_7_S5d.png|right|frame|Tree diagram of the correlation receiver&nbsp; (unipolar signaling)|class=fit]]
For this example,&nbsp; the following comparison values result,&nbsp; each normalized to &nbsp;$E_{\rm B}$:

:$$I_5 = I_7 = 2, \hspace{0.2cm}I_1 = I_3 = I_4= I_6 = 1 \hspace{0.2cm},
  \hspace{0.2cm}I_0 = I_2 = 0
  \hspace{0.05cm}.$$

This means:
*When compared in terms of the maximum &nbsp;$I_i$&nbsp; values,&nbsp; the source symbol sequences &nbsp;$Q_5$&nbsp; and &nbsp;$Q_7$&nbsp; would be equivalent.

*On the other hand,&nbsp; if the different energies &nbsp;$(E_5 = 2, \ E_7 = 3)$&nbsp; are taken into account,&nbsp; the decision is clearly in favor of the sequence &nbsp;$Q_5$&nbsp; because of &nbsp;$W_5 > W_7$.&nbsp; 

*The correlation receiver according to &nbsp;$W_i = I_i- E_i/2$&nbsp; therefore decides correctly on&nbsp; $s(t) = s_5(t)$&nbsp; even with unipolar signaling. }}<br>
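The result of this example can be reproduced numerically.&nbsp; The following sketch assumes unipolar rectangular signals with the mapping &nbsp;$\rm L \Rightarrow 0$, &nbsp;$\rm H \Rightarrow 1$&nbsp; and normalized amplitude;&nbsp; it shows that the uncorrected values &nbsp;$I_i$&nbsp; lead to a tie between &nbsp;$Q_5$&nbsp; and &nbsp;$Q_7$,&nbsp; whereas the corrected values &nbsp;$W_i$&nbsp; single out &nbsp;$Q_5$:

<pre>
import numpy as np
from itertools import product

# Sketch reproducing Example 4: with unipolar signaling (L -> 0, H -> +1) and
# s_5(t) transmitted without noise, the values I_5 = I_7 tie, whereas the
# corrected values W_i = I_i - E_i/2 single out Q_5. Amplitude and symbol
# duration are normalized assumptions.
T, K, N = 1.0, 100, 3
dt = T / K

def unipolar(bits):
    return np.repeat([1.0 if b == "H" else 0.0 for b in bits], K)

sequences = ["".join(p) for p in product("LH", repeat=N)]   # Q_0=LLL ... Q_7=HHH
r = unipolar(sequences[5])                   # noise-free, s_5(t) = "HLH" sent

I = {q: np.sum(r * unipolar(q)) * dt for q in sequences}
W = {q: I[q] - np.sum(unipolar(q) ** 2) * dt / 2 for q in sequences}

best_I, best_W = max(I.values()), max(W.values())
print("argmax I:", [q for q in sequences if np.isclose(I[q], best_I)])  # HLH, HHH
print("argmax W:", [q for q in sequences if np.isclose(W[q], best_W)])  # HLH only
</pre>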
  
== Exercises for the chapter==
<br>
[[Aufgaben:Exercise_3.09:_Correlation_Receiver_for_Unipolar_Signaling|Exercise 3.09: Correlation Receiver for Unipolar Signaling]]

[[Aufgaben:Exercise_3.10:_Maximum_Likelihood_Tree_Diagram|Exercise 3.10: Maximum Likelihood Tree Diagram]]

{{Display}}
Latest revision as of 13:16, 11 July 2022

Considered scenario and prerequisites


All digital receivers described so far always make symbol-wise decisions.  If,  on the other hand,  several symbols are decided simultaneously,  statistical bindings between the received signal samples can be taken into account during detection,  which results in a lower error probability – but at the cost of an additional delay time.

In this  $($partly also in the next chapter$)$  the following transmission model is assumed.  Compared to the last two chapters,  the following differences arise:

Transmission system with optimal receiver
  • $Q \in \{Q_i\}$  with  $i = 0$, ... , $M-1$  denotes a time-constrained source symbol sequence  $\langle q_\nu \rangle$ whose symbols are to be jointly decided by the receiver.
  • If the source  $Q$  describes a sequence of  $N$  redundancy-free binary symbols, set  $M = 2^N$.  On the other hand,  if the decision is symbol-wise,  $M$  specifies the level number of the digital source.
  • In this model,  any channel distortions are added to the transmitter and are thus already included in the basic transmission pulse  $g_s(t)$  and the signal  $s(t)$.  This measure is only for a simpler representation and is not a restriction.
  • Knowing the currently applied received signal  $r(t)$,  the optimal receiver searches from the set  $\{Q_0$, ... , $Q_{M-1}\}$  of the possible source symbol sequences, the receiver searches for the most likely transmitted sequence  $Q_j$  and outputs this as a sink symbol sequence  $V$. 
  • Before the actual decision algorithm,  a numerical value  $W_i$  must be derived from the received signal  $r(t)$  for each possible sequence  $Q_i$  by suitable signal preprocessing.  The larger  $W_i$  is,  the greater the inference probability that  $Q_i$  was transmitted.
  • Signal preprocessing must provide for the necessary noise power limitation and – in the case of strong channel distortions – for sufficient pre-equalization of the resulting intersymbol interferences.  In addition,  preprocessing also includes sampling for time discretization.

Maximum-a-posteriori and maximum–likelihood decision rule


The  (unconstrained)  optimal receiver is called the  "MAP receiver",  where  "MAP"  stands for  "maximum–a–posteriori".

$\text{Definition:}$  The  maximum–a–posteriori receiver  $($abbreviated  $\rm MAP)$  determines the  $M$  inference probabilities  ${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm}r(t)\big]$,  and sets the output sequence  $V$  according to the decision rule,  where the index is   $i = 0$, ... , $M-1$  as well as  $i \ne j$:

$${\rm Pr}\big[Q_j \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big] > {\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big] \hspace{0.05cm}.$$


  • The  "inference probability"  ${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big]$  indicates the probability with which the sequence  $Q_i$  was sent when the received signal  $r(t)$  is present at the decision.  Using  "Bayes' theorem",  this probability can be calculated as follows:
$${\rm Pr}\big[Q_i \hspace{0.05cm}|\hspace{0.05cm} r(t)\big] = \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i \big] \cdot {\rm Pr}\big[Q_i]}{{\rm Pr}[r(t)\big]} \hspace{0.05cm}.$$
  • The MAP decision rule can thus be reformulated or simplified as follows:   Let the sink symbol sequence  $V = Q_j$,  if for all  $i \ne j$  holds:
$$\frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_j \big] \cdot {\rm Pr}\big[Q_j)}{{\rm Pr}\big[r(t)\big]} > \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i\big] \cdot {\rm Pr}\big[Q_i\big]}{{\rm Pr}\big[r(t)\big]}\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_j\big] \cdot {\rm Pr}\big[Q_j\big]> {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i \big] \cdot {\rm Pr}\big[Q_i\big] \hspace{0.05cm}.$$

A further simplification of this MAP decision rule leads to the  "ML receiver",  where  "ML"  stands for  "maximum likelihood".

$\text{Definition:}$  The  maximum likelihood receiver  $($abbreviated  $\rm ML)$   decides according to the conditional forward probabilities  ${\rm Pr}\big[r(t)\hspace{0.05cm} \vert \hspace{0.05cm}Q_i \big]$,  and sets the output sequence  $V = Q_j$,  if for all  $i \ne j$  holds:

$${\rm Pr}\big[ r(t)\hspace{0.05cm} \vert\hspace{0.05cm} Q_j \big] > {\rm Pr}\big[ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big] \hspace{0.05cm}.$$


A comparison of these two definitions shows:

  • For equally probable source symbols,  the  "ML receiver"  and the  "MAP receiver"  use the same decision rules.  Thus,  they are equivalent.
  • For symbols that are not equally probable,  the  "ML receiver"  is inferior to the  "MAP receiver"  because it does not use all the available information for detection.


$\text{Example 1:}$  To illustrate the  "ML"  and the  "MAP"  decision rule,  we now construct a very simple example with only two source symbols  $(M = 2)$.

For clarification of MAP and ML receiver



⇒   The two possible symbols  $Q_0$  and  $Q_1$  are represented by the transmitted signals  $s = 0$  and  $s = 1$.

⇒   The received signal can – for whatever reason – take three different values, namely  $r = 0$,  $r = 1$  and additionally  $r = 0.5$.

Note:

  • The received values  $r = 0$  and  $r = 1$  will be assigned to the transmitter values  $s = 0 \ (Q_0)$  resp.  $s = 1 \ (Q_1)$,  by both,  the ML and MAP decisions.
  • In contrast, the decisions will give a different result with respect to the received value  $r = 0.5$: 
  • The maximum likelihood  $\rm (ML)$  decision rule leads to the source symbol  $Q_0$,  because of:
$${\rm Pr}\big [ r= 0.5\hspace{0.05cm}\vert\hspace{0.05cm} Q_0\big ] = 0.4 > {\rm Pr}\big [ r= 0.5\hspace{0.05cm} \vert \hspace{0.05cm} Q_1\big ] = 0.2 \hspace{0.05cm}.$$
  • The maximum–a–posteriori  $\rm (MAP)$  decision rule leads to the source symbol  $Q_1$,  since according to the incidental calculation in the graph:
$${\rm Pr}\big [Q_1 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.6 > {\rm Pr}\big [Q_0 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.4 \hspace{0.05cm}.$$


Maximum likelihood decision for Gaussian noise


We now assume that the received signal  $r(t)$  is additively composed of a useful component  $s(t)$  and a noise component  $n(t)$,  where the noise is assumed to be Gaussian distributed and white   ⇒    "AWGN noise":

$$r(t) = s(t) + n(t) \hspace{0.05cm}.$$

Any channel distortions are already applied to the signal  $s(t)$  for simplicity.

The necessary noise power limitation is realized by an integrator;  this corresponds to an averaging of the noise values in the time domain.  If one limits the integration interval to the range  $t_1$  to  $t_2$,  one can derive a quantity  $W_i$  for each source symbol sequence  $Q_i$,  which is a measure for the conditional probability  ${\rm Pr}\big [ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big ] $: 

$$W_i = \int_{t_1}^{t_2} r(t) \cdot s_i(t) \,{\rm d} t - {1}/{2} \cdot \int_{t_1}^{t_2} s_i^2(t) \,{\rm d} t= I_i - {E_i}/{2} \hspace{0.05cm}.$$

This decision variable  $W_i$  can be derived using the  $k$–dimensionial  "joint probability density"  of the noise  $($with  $k \to \infty)$  and some boundary crossings.  The result can be interpreted as follows:

  • Integration is used for noise power reduction by averaging.  If  $N$  binary symbols are decided simultaneously by the maximum likelihood detector,  set  $t_1 = 0 $  and  $t_2 = N \cdot T$  for distortion-free channel.
  • The first term of the above decision variable  $W_i$  is equal to the  "energy cross-correlation function"  formed over the finite time interval  $NT$  between  $r(t)$  and  $s_i(t)$  at the time point  $\tau = 0$:
$$I_i = \varphi_{r, \hspace{0.08cm}s_i} (\tau = 0) = \int_{0}^{N \cdot T}r(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.$$
  • The second term gives half the energy of the considered useful signal  $s_i(t)$  to be subtracted.  The energy is equal to the auto-correlation function  $\rm (ACF)$  of  $s_i(t)$  at the time point  $\tau = 0$:
\[E_i = \varphi_{s_i} (\tau = 0) = \int_{0}^{N \cdot T} s_i^2(t) \,{\rm d} t \hspace{0.05cm}.\]
  • In the case of a distorting channel,  the channel impulse response  $h_{\rm K}(t)$  is not Dirac-shaped,  but for example extended to the range  $-T_{\rm K} \le t \le +T_{\rm K}$.  In this case,  $t_1 = -T_{\rm K}$  and  $t_2 = N \cdot T +T_{\rm K}$  must be used for the integration limits.

Matched filter receiver vs. correlation receiver


There are various circuit implementations of the maximum likelihood  $\rm (ML)$  receiver.

⇒   For example,  the required integrals can be obtained by linear filtering and subsequent sampling.  This realization form is called  matched filter receiver,  because here the impulse responses of the  $M$  parallel filters have the same shape as the useful signals  $s_0(t)$, ... , $s_{M-1}(t)$. 

  • The  $M$  decision variables  $I_i$  are then equal to the convolution products  $r(t) \star s_i(t)$  at time  $t= 0$.
  • For example,  the  "optimal binary receiver"  described in detail in the chapter  "Optimization of Baseband Transmission Systems"  allows a maximum likelihood  $\rm (ML)$  decision with parameters  $M = 2$  and  $N = 1$.


⇒   A second realization form is provided by the  correlation receiver  according to the following graph.  One recognizes from this block diagram for the indicated parameters:

Correlation receiver for  $N = 3$,  $t_1 = 0$,  $t_2 = 3T$   and   $M = 2^3 = 8$
*  The correlation receiver shown forms a total of  $M = 8$  cross-correlation functions between the received signal  $r(t) = s_k(t) + n(t)$  and the possible transmitted signals  $s_i(t), \ i = 0$, ... , $M-1$.  The following description assumes that the useful signal  $s_k(t)$  has been transmitted.
*  This receiver searches for the maximum value  $W_j$  of all correlation values and outputs the corresponding sequence  $Q_j$  as sink symbol sequence  $V$.  Formally,  the  $\rm ML$  decision rule can be expressed as follows:
$$V = Q_j, \hspace{0.2cm}{\rm if}\hspace{0.2cm} W_i < W_j \hspace{0.2cm}{\rm for}\hspace{0.2cm} {\rm all}\hspace{0.2cm} i \ne j \hspace{0.05cm}.$$
*  If we further assume that all transmitted signals  $s_i(t)$  have the same energy,  the subtraction of  $E_i/2$  can be dispensed with in all branches.  In this case,  the following correlation values are compared  $(i = 0$, ... , $M-1)$:
$$I_i = \int_{0}^{NT} s_k(t) \cdot s_i(t) \,{\rm d} t + \int_{0}^{NT} n(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.$$
*  With high probability,  $I_k$  is larger than all other comparison values  $I_{i \ne k}$   ⇒   correct decision  $(V = Q_k)$.  However,  if the noise  $n(t)$  is too large,  the correlation receiver will also make wrong decisions  (a compact sketch of this decision rule follows the list).
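As a complement to the block diagram,  the following Python sketch  (an assumption-based illustration,  not taken from the article)  implements the described decision rule:  it computes  $W_i = I_i - E_i/2$  for all  $M$  candidate signals and returns the index of the maximum.

<syntaxhighlight lang="python">
import numpy as np

def correlation_receiver(r, candidates, dt):
    """Return the index j of the candidate signal with the largest W_i.

    r          : sampled received signal r(t) = s_k(t) + n(t)
    candidates : list of M sampled candidate signals s_0(t), ..., s_{M-1}(t)
    dt         : sample spacing used to approximate the integrals
    """
    W = [np.sum(r * s_i) * dt - 0.5 * np.sum(s_i ** 2) * dt
         for s_i in candidates]       # W_i = I_i - E_i/2 for each branch
    return int(np.argmax(W))          # ML decision:  V = Q_j with maximum W_j
</syntaxhighlight>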

== Representation of the correlation receiver in the tree diagram ==
<br>


Let us illustrate the correlation receiver operation in the tree diagram,  where the  $2^3 = 8$  possible source symbol sequences  $Q_i$  of length  $N = 3$  are represented by bipolar rectangular transmitted signals  $s_i(t)$.

''Figure: All  $2^3=8$  possible bipolar transmitted signals  $s_i(t)$  for  $N = 3$''

The possible symbol sequences  $Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$  and the associated transmitted signals  $s_0(t)$, ... , $s_7(t)$  are listed below.

*  Due to the bipolar amplitude coefficients and the rectangular pulse shape,  all signal energies are equal:  $E_0 = \text{...} = E_7 = N \cdot E_{\rm B}$,  where  $E_{\rm B}$  denotes the energy of a single pulse of duration  $T$.
*  Therefore,  the subtraction of the  $E_i/2$  term can be omitted in all branches   ⇒   the decision based on the correlation values  $I_i$  is just as reliable as maximizing the corrected values  $W_i$.



$\text{Example 2:}$  The graph shows the continuous-valued integral values,  assuming the actually transmitted signal  $s_5(t)$  and the noise-free case.  For this case,  the time-dependent integral values and the integral end values are:

''Figure: Tree diagram of the correlation receiver in the noise-free case''
$$i_i(t) = \int_{0}^{t} r(\tau) \cdot s_i(\tau) \,{\rm d} \tau = \int_{0}^{t} s_5(\tau) \cdot s_i(\tau) \,{\rm d} \tau \hspace{0.3cm} \Rightarrow \hspace{0.3cm}I_i = i_i(3T). $$

The graph can be interpreted as follows:

*  Because of the rectangular shape of the signals  $s_i(t)$,  all function curves  $i_i(t)$  are piecewise linear.  The end values normalized to  $E_{\rm B}$  are  $+3$,  $+1$,  $-1$  and  $-3$.
*  The maximum final value is  $I_5 = 3 \cdot E_{\rm B}$  (red waveform),  since the signal  $s_5(t)$  was actually sent.  Without noise,  the correlation receiver thus naturally always makes the correct decision.
*  The blue curve  $i_1(t)$  leads to the final value  $I_1 = -E_{\rm B} + E_{\rm B}+ E_{\rm B} = E_{\rm B}$,  since  $s_1(t)$  differs from  $s_5(t)$  only in the first bit.  The comparison values  $I_4$  and  $I_7$  are also equal to  $E_{\rm B}$.
*  Since  $s_0(t)$,  $s_3(t)$  and  $s_6(t)$  differ from the transmitted  $s_5(t)$  in two bits,  $I_0 = I_3 = I_6 =-E_{\rm B}$.  The green curve  $i_6(t)$  initially increases  (the first bit matches)  and then decreases over the last two bits.
*  The purple curve leads to the final value  $I_2 = -3 \cdot E_{\rm B}$.  The corresponding signal  $s_2(t)$  differs from  $s_5(t)$  in all three symbols,  and  $s_2(t) = -s_5(t)$  holds  (the final values are reproduced by the short sketch after this example).
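The final values of  $\text{Example 2}$  can be checked with a few lines of Python.  The sketch below is only an illustration under stated assumptions,  not part of the article:  it maps  $\rm L \to -1$  and  $\rm H \to +1$,  reads the index  $i$  as a binary number  (so  $Q_5$  corresponds to  $\rm HLH$),  assumes noise-free reception of  $s_5(t)$,  and computes all  $I_i/E_{\rm B}$  as inner products of the amplitude coefficients.

<syntaxhighlight lang="python">
import numpy as np

M, N = 8, 3
# Q_i as bipolar amplitude coefficients: index i read as a binary number,
# MSB first, with L -> -1 and H -> +1  (so Q_5 = HLH -> [+1, -1, +1]).
seqs = [np.array([+1 if (i >> (N - 1 - b)) & 1 else -1 for b in range(N)])
        for i in range(M)]

sent = seqs[5]                               # noise-free case: r(t) = s_5(t)
I = [int(np.dot(sent, s)) for s in seqs]     # I_i normalized to E_B
print(I)    # [-1, 1, -3, -1, 1, 3, -1, 1]  ->  I_5 = 3 is the unique maximum
</syntaxhighlight>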



$\text{Example 3:}$  The graph describes the same situation as  $\text{Example 2}$,  but now the received signal  $r(t) = s_5(t)+ n(t)$  is assumed.  The variance of the AWGN noise  $n(t)$  here is  $\sigma_n^2 = 4 \cdot E_{\rm B}/T$.

''Figure: Tree diagram of the correlation receiver with noise   $(\sigma_n^2 = 4 \cdot E_{\rm B}/T)$''




Compared to the noise-free case,  one can see from this graph:

*  The curves are no longer straight due to the noise component  $n(t)$,  and the final values differ slightly from those without noise.
*  In the considered example,  the correlation receiver decides correctly with high probability,  since the difference between  $I_5$  and the next largest value  $I_7$  is relatively large:  $1.65\cdot E_{\rm B}$.
*  However,  the error probability in this example is no better than that of the matched filter receiver with symbol-wise decision.  In accordance with the chapter  "Optimization of Baseband Transmission Systems",  the following also applies here  (a numerical evaluation follows the formula):
$$p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right) = {1}/{2} \cdot {\rm erfc} \left( \sqrt{ { E_{\rm B} }/{N_0} }\right) \hspace{0.05cm}.$$
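This error probability formula can be evaluated with the Python standard library function  <code>math.erfc</code>.  The value  $E_{\rm B}/N_0 = 4$  used below is an arbitrary example,  not a value from the article.

<syntaxhighlight lang="python">
from math import sqrt, erfc

Eb_N0 = 4.0                      # example value E_B/N_0 (about 6 dB)
p_S = 0.5 * erfc(sqrt(Eb_N0))    # p_S = Q(sqrt(2 E_B/N_0)) = 1/2 * erfc(sqrt(E_B/N_0))
print(f"p_S = {p_S:.2e}")        # approx. 2.3e-3
</syntaxhighlight>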


$\text{Conclusions:}$ 

#  If the input signal does not have statistical bindings  $($as in Examples 2 and 3$)$,  there is no improvement by the joint decision of  $N$  symbols over symbol-wise decision   ⇒   $p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)$.
#  In the presence of statistical bindings,  the joint decision of  $N$  symbols noticeably reduces the error probability,  since the maximum likelihood receiver takes these bindings into account.
#  Such bindings can either be created deliberately by transmitter-side coding  $($see the  $\rm LNTwww$  book  "Channel Coding"$)$  or be caused unintentionally by  (linear)  channel distortions.
#  In the presence of such  "intersymbol interference",  the calculation of the error probability is much more difficult.  However,  comparable approximations as for the Viterbi receiver can be used,  which are given at the end of the next chapter.


== Correlation receiver with unipolar signaling ==
<br>


So far,  we have always assumed binary  '''bipolar'''  signaling when describing the correlation receiver:

$$a_\nu = \left\{ \begin{array}{c} +1 \\ -1 \\ \end{array} \right.\quad \begin{array}{*{1}c} {\rm{for}} \\ {\rm{for}} \\ \end{array}\begin{array}{*{20}c} q_\nu = \mathbf{H} \hspace{0.05cm}, \\ q_\nu = \mathbf{L} \hspace{0.05cm}. \\ \end{array}$$

Now we consider the case of binary  '''unipolar'''  digital signaling:

$$a_\nu = \left\{ \begin{array}{c} 1 \\ 0 \\ \end{array} \right.\quad \begin{array}{*{1}c} {\rm{for}} \\ {\rm{for}} \\ \end{array}\begin{array}{*{20}c} q_\nu = \mathbf{H} \hspace{0.05cm}, \\ q_\nu = \mathbf{L} \hspace{0.05cm}. \\ \end{array}$$
''Figure: Possible unipolar transmitted signals for  $N = 3$''

The  $2^3 = 8$  possible source symbol sequences  $Q_i$  of length  $N = 3$  are now represented by unipolar rectangular transmitted signals  $s_i(t)$. 

Listed on the right are the eight symbol sequences and the transmitted signals

$$Q_0 = \rm LLL, \text{ ... },\ Q_7 = \rm HHH,$$
$$s_0(t), \text{ ... },\ s_7(t).$$

By comparing with the  "corresponding table"  for bipolar signaling,  one can see:

*  Due to the unipolar amplitude coefficients,  the signal energies  $E_i$  are now different,  e.g.  $E_0 = 0$  and  $E_7 = 3 \cdot E_{\rm B}$.
*  Here the decision based on the integral values  $I_i$  alone does not necessarily lead to the correct result.  Instead,  the corrected comparison values  $W_i = I_i- E_i/2$  must now be used.


$\text{Example 4:}$  The graph shows the integral values  $I_i$,  again assuming the actually transmitted signal  $s_5(t)$  and the noise-free case.  The corresponding bipolar equivalent was considered in  $\text{Example 2}$.

''Figure: Tree diagram of the correlation receiver  (unipolar signaling)''

For this example,  the following comparison values result,  each normalized to  $E_{\rm B}$:

$$I_5 = I_7 = 2, \hspace{0.2cm}I_1 = I_3 = I_4= I_6 = 1 \hspace{0.2cm}, \hspace{0.2cm}I_0 = I_2 = 0 \hspace{0.05cm},$$
$$W_5 = 1, \hspace{0.2cm}W_1 = W_4 = W_7 = 0.5, \hspace{0.2cm} W_0 = W_3 =W_6 =0, \hspace{0.2cm}W_2 = -0.5 \hspace{0.05cm}.$$

This means:

*  If the comparison were based on the maximum  $I_i$  values,  the source symbol sequences  $Q_5$  and  $Q_7$  would be equivalent.
*  On the other hand,  if the different energies  $(E_5 = 2, \ E_7 = 3)$  are taken into account,  the decision is clearly in favor of the sequence  $Q_5$  because of  $W_5 > W_7$.
*  The correlation receiver according to  $W_i = I_i- E_i/2$  therefore decides correctly on  $s(t) = s_5(t)$  even with unipolar signaling,  as the numerical sketch below confirms.
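The comparison values of  $\text{Example 4}$  can likewise be reproduced with a short sketch  (illustrative code,  not from the article):  with unipolar coefficients,  $I_i/E_{\rm B}$  counts the bit positions in which both sequences carry a pulse,  and  $E_i/E_{\rm B}$  is the number of  $\rm H$  symbols in  $Q_i$.

<syntaxhighlight lang="python">
import numpy as np

M, N = 8, 3
# Q_i as unipolar amplitude coefficients: L -> 0, H -> 1  (so Q_5 = HLH -> [1, 0, 1]).
seqs = [np.array([(i >> (N - 1 - b)) & 1 for b in range(N)]) for i in range(M)]

sent = seqs[5]                                            # noise-free case: r(t) = s_5(t)
I = [int(np.dot(sent, s)) for s in seqs]                  # I_i / E_B
W = [I[i] - 0.5 * int(seqs[i].sum()) for i in range(M)]   # W_i = I_i - E_i/2
print(I)                    # [0, 1, 0, 1, 1, 2, 1, 2]    ->  I_5 = I_7 (ambiguous)
print(W)                    # [0.0, 0.5, -0.5, 0.0, 0.5, 1.0, 0.0, 0.5]
print(int(np.argmax(W)))    # 5  ->  unambiguous decision for Q_5
</syntaxhighlight>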


== Exercises for the chapter ==
<br>


Exercise 3.09: Correlation Receiver for Unipolar Signaling

Exercise 3.10: Maximum Likelihood Tree Diagram