Digital Signal Transmission/Optimal Receiver Strategies


Considered scenario and prerequisites


All digital receivers described so far always make symbolwise decisions. If, on the other hand, several symbols are decided simultaneously, statistical bindings between the received signal samples can be taken into account during detection, which results in a lower error probability, but at the cost of additional delay.

In this chapter – and partly also in the next one – the following transmission model is assumed:

Transmission system with optimal receiver

Compared to the last two chapters, the following differences arise:

  • $Q \in \{Q_i\}$  with  $i = 0$, ... , $M-1$  denotes a time-limited source symbol sequence  $\langle q_\nu \rangle$  whose symbols are to be decided jointly by the optimal receiver.
  • If  $Q$  describes a sequence of  $N$  redundancy-free binary symbols, set  $M = 2^N$.  On the other hand, if the decision is made symbolwise,  $M$  specifies the level number of the digital source.
  • In the above model, any channel distortions are assigned to the transmitter and are thus already included in the basic pulse  $g_s(t)$  and the signal  $s(t)$.  This measure only serves a simpler representation and is not a restriction.
  • Based on the currently received signal  $r(t)$,  the optimal receiver searches the set  $\{Q_0$, ... , $Q_{M-1}\}$  of possible source symbol sequences for the most likely transmitted sequence  $Q_j$  and outputs it as the sink symbol sequence  $V$.
  • Before the actual decision algorithm is applied, a numerical value  $W_i$  must be derived from the received signal  $r(t)$  for each possible sequence  $Q_i$  by suitable signal preprocessing. The larger  $W_i$  is, the greater the inference probability that  $Q_i$  was transmitted.
  • The signal preprocessing must provide the necessary noise power limitation and – in the case of strong channel distortions – a sufficient pre-equalization of the resulting intersymbol interference. In addition, the preprocessing includes sampling for time discretization.

MAP and Maximum–Likelihood decision rule


The (unconstrained) optimal receiver is called the MAP receiver, where "MAP" stands for "Maximum–a–posteriori".

$\text{Definition:}$  The  MAP receiver  determines the  $M$  inference probabilities  ${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm}r(t)\big]$  and sets the output sequence to  $V = Q_j$  according to the following decision rule, valid for all indices  $i = 0$, ... , $M-1$  with  $i \ne j$:

$${\rm Pr}\big[Q_j \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big] > {\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big] \hspace{0.05cm}.$$


The  "inference probability"  ${\rm Pr}\big[Q_i \hspace{0.05cm}\vert \hspace{0.05cm} r(t)\big]$  indicates the probability with which the sequence  $Q_i$  was sent when the received signal  $r(t)$  is present at the decision. Using  "Bayes' theorem",  this probability can be calculated as follows:

$${\rm Pr}\big[Q_i \hspace{0.05cm}|\hspace{0.05cm} r(t)\big] = \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i \big] \cdot {\rm Pr}\big[Q_i\big]}{{\rm Pr}\big[r(t)\big]} \hspace{0.05cm}.$$

The MAP decision rule can thus be reformulated or simplified as follows:   Set the sink symbol sequence to  $V = Q_j$  if the following holds for all  $i \ne j$:

$$\frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_j \big] \cdot {\rm Pr}\big[Q_j\big]}{{\rm Pr}\big[r(t)\big]} > \frac{ {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i\big] \cdot {\rm Pr}\big[Q_i\big]}{{\rm Pr}\big[r(t)\big]}\hspace{0.3cm} \Rightarrow \hspace{0.3cm} {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_j\big] \cdot {\rm Pr}\big[Q_j\big]> {\rm Pr}\big[ r(t)\hspace{0.05cm}|\hspace{0.05cm} Q_i \big] \cdot {\rm Pr}\big[Q_i\big] \hspace{0.05cm}.$$

A further simplification of this MAP decision rule leads to the ML receiver, where "ML" stands for "maximum likelihood".

$\text{Definition:}$  The  maximum likelihood receiver  – abbreviated ML – decides according to the conditional forward probabilities  ${\rm Pr}\big[r(t)\hspace{0.05cm} \vert \hspace{0.05cm}Q_i \big]$  and sets the output sequence to  $V = Q_j$  if the following holds for all  $i \ne j$:

$${\rm Pr}\big[ r(t)\hspace{0.05cm} \vert\hspace{0.05cm} Q_j \big] > {\rm Pr}\big[ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big] \hspace{0.05cm}.$$


A comparison of these two definitions shows:

  • For equally probable source symbols, the ML receiver and the MAP receiver use the same decision rules; thus, they are equivalent.
  • For symbols that are not equally probable, the ML receiver is inferior to the MAP receiver because it does not use all the available information for detection.


$\text{Example 1:}$  To illustrate the ML and MAP decision rules, we now construct a very simple example with only two source symbols  $(M = 2)$.

  • The two possible symbols  $Q_0$  and  $Q_1$  are represented by the transmitted signals  $s = 0$  and  $s = 1$. 
  • The received signal can – for whatever reason – take three different values, namely  $r = 0$,  $r = 1$  and additionally  $r = 0.5$.


Illustration of the MAP and ML receivers

The received values  $r = 0$  and  $r = 1$  are assigned to the transmitted values  $s = 0 \ (Q_0)$  and  $s = 1 \ (Q_1)$,  respectively, by both the ML and the MAP decision. For the received value  $r = 0.5$,  however, the two decision rules give different results:

  • The maximum likelihood decision rule leads to the source symbol  $Q_0$, because of
$${\rm Pr}\big [ r= 0.5\hspace{0.05cm}\vert\hspace{0.05cm} Q_0\big ] = 0.4 > {\rm Pr}\big [ r= 0.5\hspace{0.05cm} \vert \hspace{0.05cm} Q_1\big ] = 0.2 \hspace{0.05cm}.$$
  • The MAP decision, on the other hand, leads to the source symbol  $Q_1$, since, according to the auxiliary calculation in the graph:
$${\rm Pr}\big [Q_1 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.6 > {\rm Pr}\big [Q_0 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.4 \hspace{0.05cm}.$$
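
The two decision rules can be checked numerically. Below is a minimal Python sketch of Example 1; the a-priori probabilities  ${\rm Pr}[Q_0] = 0.25$  and  ${\rm Pr}[Q_1] = 0.75$  are an assumption (they appear only in the graph's auxiliary calculation, not in the text), chosen to be consistent with the posteriors  $0.4$  and  $0.6$  quoted above.

```python
# Minimal sketch of the ML and MAP decisions for Example 1.
# The a-priori probabilities Pr[Q0] = 0.25, Pr[Q1] = 0.75 are an assumption
# consistent with the posteriors 0.4 / 0.6 quoted above.
likelihood = {"Q0": 0.4, "Q1": 0.2}      # Pr[r = 0.5 | Q_i] from the example
prior = {"Q0": 0.25, "Q1": 0.75}         # assumed Pr[Q_i]

# ML: maximize the forward probability Pr[r | Q_i]
ml_decision = max(likelihood, key=likelihood.get)

# MAP: maximize Pr[Q_i | r], proportional to Pr[r | Q_i] * Pr[Q_i] (Bayes' theorem)
joint = {q: likelihood[q] * prior[q] for q in prior}
posterior = {q: p / sum(joint.values()) for q, p in joint.items()}
map_decision = max(posterior, key=posterior.get)

print("ML decision: ", ml_decision)      # -> Q0
print("MAP decision:", map_decision)     # -> Q1
print("posteriors:  ", posterior)        # -> {'Q0': 0.4, 'Q1': 0.6}
```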


Maximum likelihood decision for Gaussian noise


We now assume that the received signal  $r(t)$  is additively composed of a useful signal  $s(t)$  and a noise component  $n(t)$,  where the noise is assumed to be Gaussian distributed and white   ⇒    "AWGN noise":

$$r(t) = s(t) + n(t) \hspace{0.05cm}.$$

For simplicity, any channel distortions are already incorporated into the signal  $s(t)$.

The necessary noise power limitation is realized by an integrator; this corresponds to an averaging of the noise values in the time domain. If one limits the integration interval to the range  $t_1$  to  $t_2$, one can derive a quantity  $W_i$  for each source symbol sequence  $Q_i$,  which is a measure for the conditional probability  ${\rm Pr}\big [ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big ] $: 

$$W_i = \int_{t_1}^{t_2} r(t) \cdot s_i(t) \,{\rm d} t - {1}/{2} \cdot \int_{t_1}^{t_2} s_i^2(t) \,{\rm d} t= I_i - {E_i}/{2} \hspace{0.05cm}.$$

This decision variable  $W_i$  can be derived using the  $k$-dimensional "joint probability density"  of the noise $($with  $k \to \infty)$  and some limit processes. The result can be interpreted as follows:

  • The integration serves to reduce the noise power by averaging. If  $N$  binary symbols are decided simultaneously by the maximum likelihood detector, set  $t_1 = 0 $  and  $t_2 = N \cdot T$  for a distortion-free channel.
  • The first term of the above decision variable  $W_i$  is equal to the   "energy cross-correlation function"  formed over the finite time interval  $NT$  between  $r(t)$  and  $s_i(t)$  at the point  $\tau = 0$:
$$I_i = \varphi_{r, \hspace{0.08cm}s_i} (\tau = 0) = \int_{0}^{N \cdot T}r(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.$$
  • The second term is half the energy of the considered useful signal  $s_i(t)$,  which has to be subtracted. This energy is equal to the auto-correlation function (ACF) of the useful signal at the point  $\tau = 0$:
\[E_i = \varphi_{s_i} (\tau = 0) = \int_{0}^{N \cdot T} s_i^2(t) \,{\rm d} t \hspace{0.05cm}.\]
  • In the case of a distorting channel, the impulse response  $h_{\rm K}(t)$  is not Dirac-shaped but extends, for example, over the range  $-T_{\rm K} \le t \le +T_{\rm K}$.  In this case,  $t_1 = -T_{\rm K}$  and  $t_2 = N \cdot T +T_{\rm K}$  must be used as the two integration limits.
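
As a numerical illustration of the decision variable  $W_i = I_i - E_i/2$,  the following Python sketch decides  $N = 3$  binary symbols jointly for bipolar rectangular example signals and one AWGN realization; the signal shapes, the noise level and the sampling density are illustrative assumptions, not part of the original text.

```python
import numpy as np
from itertools import product

# Joint ML decision of N = 3 binary symbols via W_i = I_i - E_i/2.
# Rectangular example signals, sampling density and noise level are assumptions.
T, sps = 1.0, 100                          # symbol duration, samples per symbol
N = 3
dt = T / sps                               # integration runs from t1 = 0 to t2 = N*T

def s(bits):                               # bipolar rectangular signal for one bit tuple
    return np.repeat([1.0 if b else -1.0 for b in bits], sps)

signals = [s(b) for b in product([0, 1], repeat=N)]        # s_0(t), ..., s_7(t)

rng = np.random.default_rng(0)
r = signals[5] + 1.0 * rng.standard_normal(N * sps)        # r(t) = s_5(t) + n(t)

I = np.array([np.sum(r * si) * dt for si in signals])      # I_i = integral of r(t) s_i(t)
E = np.array([np.sum(si**2) * dt for si in signals])       # E_i = integral of s_i^2(t)
W = I - E / 2                                              # decision variables
print("ML decision: sequence Q_%d" % np.argmax(W))         # ideally Q_5
```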

Matched filter receiver vs. correlation receiver


There are various circuit implementations of the maximum likelihood receiver.

For example, the required integrals can be obtained by linear filtering and subsequent sampling. This form of realization is called the  matched filter receiver,  since the impulse responses of the  $M$  parallel filters have the same shape as the useful signals  $s_0(t)$, ... , $s_{M-1}(t)$.

  • The $M$ decision variables  $I_i$  are then equal to the convolution products  $r(t) \star s_i(t)$  at time  $t= 0$.
  • For example, the "optimal binary receiver" described in detail in the chapter  "Optimization of Baseband Transmission Systems"  allows a maximum likelihood decision with ML parameters  $M = 2$  and  $N = 1$.


A second form of realization is provided by the  correlation receiver  according to the following graph.

Correlation receiver for  $N = 3$,  $t_1 = 0$,  $t_2 = 3T$   and   $M = 2^3 = 8$

From this block diagram, one can see for the given parameters:

  • The drawn correlation receiver forms a total of  $M = 8$  cross-correlation functions between the received signal  $r(t) = s_k(t) + n(t)$  and the possible transmitted signals  $s_i(t), \ i = 0$, ... , $M-1$. The following description assumes that the useful signal  $s_k(t)$  has been transmitted.
  • The correlation receiver now searches for the maximum value  $W_j$  of all correlation values and outputs the corresponding sequence  $Q_j$  as a sink symbol sequence  $V$.  Formally, the ML decision rule can be expressed as follows:
$$V = Q_j, \hspace{0.2cm}{\rm if}\hspace{0.2cm} W_i < W_j \hspace{0.2cm}{\rm for}\hspace{0.2cm} {\rm all}\hspace{0.2cm} i \ne j \hspace{0.05cm}.$$
  • If we further assume that all transmitted signals  $s_i(t)$  have exactly the same energy, we can dispense with the subtraction of  $E_i/2$  in all branches. In this case, the following correlation values are compared  $(i = 0$, ... , $M-1)$:
\[I_i = \int_{0}^{NT} s_j(t) \cdot s_i(t) \,{\rm d} t + \int_{0}^{NT} n(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.\]
  • With high probability, the value  $I_{j = k}$,  belonging to the actually transmitted sequence, is larger than all other comparison values  $I_{j \ne k}$. However, if the noise  $n(t)$  is too large, the correlation receiver will also make a wrong decision.

Representation of the correlation receiver in the tree diagram


Let us illustrate the operation of the correlation receiver in the tree diagram, where the  $2^3 = 8$  possible source symbol sequences  $Q_i$  of length  $N = 3$  are represented by bipolar rectangular transmitted signals  $s_i(t)$: 

Possible bipolar transmitted signals for  $N = 3$

The possible symbol sequences  $Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$  and the associated transmitted signals  $s_0(t)$, ... , $s_7(t)$  are listed above.

  • Due to the bipolar amplitude coefficients and the rectangular shape all signal energies are equal:   $E_0 = \text{...} = E_7 = N \cdot E_{\rm B}$, where  $E_{\rm B}$  indicates the energy of a single pulse of duration $T$.
  • Therefore, the subtraction of the  $E_i/2$  term can be omitted in all branches   ⇒   a decision based on the correlation values  $I_i$  is just as reliable as maximizing the corrected values  $W_i$.


$\text{Example 2:}$  The graph shows the continuous integral values, assuming that the signal  $s_5(t)$  was actually transmitted and that there is no noise. In this case, the time-dependent integral values and their final values are:

Correlation receiver:   tree diagram in the noise-free case
$$i_i(t) = \int_{0}^{t} r(\tau) \cdot s_i(\tau) \,{\rm d} \tau = \int_{0}^{t} s_5(\tau) \cdot s_i(\tau) \,{\rm d} \tau \hspace{0.3cm} \Rightarrow \hspace{0.3cm}I_i = i_i(3T). $$

The graph can be interpreted as follows:

  • Because of the rectangular shape of the signals  $s_i(t)$,  all function curves  $i_i(t)$  are rectilinear. The end values normalized to  $E_{\rm B}$  are  $+3$,  $+1$,  $-1$  and  $-3$.
  • The maximum final value is  $I_5 = 3 \cdot E_{\rm B}$  (red waveform), since signal  $s_5(t)$  was actually sent. Without noise, the correlation receiver thus naturally always makes the correct decision.
  • The blue curve  $i_1(t)$  leads to the final value  $I_1 = -E_{\rm B} + E_{\rm B}+ E_{\rm B} = E_{\rm B}$, since  $s_1(t)$  differs from  $s_5(t)$  only in the first bit. The comparison values  $I_4$  and  $I_7$  are also equal to  $E_{\rm B}$.
  • Since  $s_0(t)$,  $s_3(t)$  and  $s_6(t)$  differ from the transmitted  $s_5(t)$  in two bits,  $I_0 = I_3 = I_6 =-E_{\rm B}$. The green curve belongs to  $s_6(t)$: it first rises (the first bit matches) and then falls over the following two bits.
  • The purple curve leads to the final value  $I_2 = -3 \cdot E_{\rm B}$. The corresponding signal  $s_2(t)$  differs from  $s_5(t)$  in all three symbols and  $s_2(t) = -s_5(t)$ holds.
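
The final values of Example 2 can be reproduced with a few lines of code. The sketch assumes unit-amplitude rectangular pulses with  $T = 1$  (so that  $E_{\rm B} = 1$)  and the index convention that  $i$  is read as a binary number with  $\rm H \to +1$  and  $\rm L \to -1$.

```python
import numpy as np
from itertools import product

T, sps = 1.0, 100
dt, E_B = T / sps, T                      # E_B = energy of one unit-amplitude pulse

def s(bits):                              # bipolar NRZ signal, H (=1) -> +1, L (=0) -> -1
    return np.repeat([1.0 if b else -1.0 for b in bits], sps)

signals = [s(b) for b in product([0, 1], repeat=3)]   # s_0(t), ..., s_7(t)
r = signals[5]                                        # noise-free case: r(t) = s_5(t)

I = np.array([np.sum(r * si) * dt for si in signals]) / E_B
print(np.round(I).astype(int))            # -> [-1  1 -3 -1  1  3 -1  1]
```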



$\text{Example 3:}$  The graph for this example describes the same situation as  $\text{Example 2}$, but now the received signal  $r(t) = s_5(t)+ n(t)$  is assumed. The variance of the AWGN noise  $n(t)$  here is  $\sigma_n^2 = 4 \cdot E_{\rm B}/T$.

Correlation receiver: tree diagram with noise

One can see from this graph compared to the noise-free case:

  • The function curves are now no longer straight due to the noise component  $n(t)$  and there are also slightly different final values than without noise.
  • In the considered example, however, the correlation receiver decides correctly with high probability, since the difference between  $I_5$  and the second largest value  $I_7$  is relatively large at  $1.65\cdot E_{\rm B}$.
  • However, the error probability in the example considered here is not better than that of the matched filter receiver with symbolwise decision.
  • In accordance with the chapter  "Optimization of Baseband Transmission Systems",  the following also applies here:
$$p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right) = {1}/{2} \cdot {\rm erfc} \left( \sqrt{ { E_{\rm B} }/{N_0} }\right) \hspace{0.05cm}.$$
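
This error probability can be evaluated directly, for example with the small helper function below; the chosen  $E_{\rm B}/N_0$  values are arbitrary examples.

```python
from math import sqrt, erfc

def p_S(EB_over_N0_dB):
    """Error probability p_S = Q(sqrt(2*E_B/N_0)) = 1/2 * erfc(sqrt(E_B/N_0))."""
    ratio = 10 ** (EB_over_N0_dB / 10)     # E_B/N_0 as a linear ratio
    return 0.5 * erfc(sqrt(ratio))

for dB in (0, 4, 8, 10):                   # arbitrary example values
    print(f"{dB:>2} dB  ->  p_S = {p_S(dB):.3e}")
```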


$\text{Conclusion:}$ 

  • If the input signal exhibits no statistical bindings, as in  $\text{Example 2}$, the joint decision of  $N$  symbols offers no improvement over the symbolwise decision.
  • In the presence of statistical bindings, the joint decision of  $N$  symbols noticeably reduces the error probability compared to  $p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)$  (valid for symbolwise decision), since the maximum likelihood receiver takes the bindings into account.
  • Such bindings can either be created deliberately by channel coding at the transmitter (see the  $\rm LNTwww$  book "Channel Coding") or arise unintentionally from (linear) channel distortions.
  • In the presence of such intersymbol interference, calculating the error probability is much more difficult. However, approximations comparable to those for the Viterbi receiver can be given; they are stated at the end of the next chapter.


Correlation receiver for unipolar signaling


So far, the description of the correlation receiver has always assumed binary bipolar signaling:

$$a_\nu = \left\{ \begin{array}{c} +1 \\ -1 \\ \end{array} \right.\quad \begin{array}{*{1}c} {\rm{for}} \\ {\rm{for}} \\ \end{array}\begin{array}{*{20}c} q_\nu = \mathbf{H} \hspace{0.05cm}, \\ q_\nu = \mathbf{L} \hspace{0.05cm}. \\ \end{array}$$

We now consider the case of binary unipolar digital signal transmission, where:

$$a_\nu = \left\{ \begin{array}{c} 1 \\ 0 \\ \end{array} \right.\quad \begin{array}{*{1}c} {\rm{for}} \\ {\rm{for}} \\ \end{array}\begin{array}{*{20}c} q_\nu = \mathbf{H} \hspace{0.05cm}, \\ q_\nu = \mathbf{L} \hspace{0.05cm}. \\ \end{array}$$

The  $2^3 = 8$  possible source symbol sequences  $Q_i$  of length  $N = 3$  are now represented by unipolar rectangular transmitted signals  $s_i(t)$. Listed below are the symbol sequences  $Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$  and the transmitted signals  $s_0(t)$, ... , $s_7(t)$.

Possible unipolar transmitted signals for  $N = 3$

Comparing this with the  corresponding table  for bipolar signaling, one can see:

  • Due to the unipolar amplitude coefficients, the signal energies  $E_i$  are now different; for example,  $E_0 = 0$  and  $E_7 = 3 \cdot E_{\rm B}$.
  • Here, the decision based on the integral end values  $I_i$  does not lead to the correct result.
  • Instead, the corrected comparison values  $W_i = I_i- E_i/2$  must now be used.


$\text{Example 4:}$  The graph shows the continuous integral values, again assuming that the signal  $s_5(t)$  was actually transmitted and that there is no noise. The corresponding bipolar equivalent was considered in  $\text{Example 2}$.

Tree diagram of the correlation receiver (unipolar)

For this example, the following comparison values result, each normalized to  $E_{\rm B}$:

$$I_5 = I_7 = 2, \hspace{0.2cm}I_1 = I_3 = I_4= I_6 = 1 \hspace{0.2cm}, \hspace{0.2cm}I_0 = I_2 = 0 \hspace{0.05cm},$$
$$W_5 = 1, \hspace{0.2cm}W_1 = W_4 = W_7 = 0.5, \hspace{0.2cm} W_0 = W_3 =W_6 =0, \hspace{0.2cm}W_2 = -0.5 \hspace{0.05cm}.$$

This means:

  • In a comparison based on the maximum  $I_i$  values, the source symbol sequences  $Q_5$  and  $Q_7$  would be equivalent.
  • If the different energies  $(E_5 = 2, \ E_7 = 3)$  are taken into account, however, the decision is clearly made in favor of the sequence  $Q_5$  because of  $W_5 > W_7$.
  • Thus, the correlation receiver based on  $W_i = I_i- E_i/2$  also decides correctly for  $s(t) = s_5(t)$  with unipolar signaling.
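
A short numerical check of this example (same illustrative conventions as in the bipolar sketches above,  $E_{\rm B} = 1$  assumed) reproduces the tie  $I_5 = I_7$  and shows that only the corrected values  $W_i$  identify the transmitted sequence uniquely.

```python
import numpy as np
from itertools import product

T, sps = 1.0, 100
dt, E_B = T / sps, T

def s_uni(bits):                          # unipolar NRZ signal, H (=1) -> 1, L (=0) -> 0
    return np.repeat([1.0 if b else 0.0 for b in bits], sps)

signals = [s_uni(b) for b in product([0, 1], repeat=3)]
r = signals[5]                            # noise-free case: r(t) = s_5(t)

I = np.array([np.sum(r * si) * dt for si in signals]) / E_B
E = np.array([np.sum(si * si) * dt for si in signals]) / E_B
W = I - E / 2
print("I:", np.round(I, 2))               # I_5 = I_7 = 2  ->  ambiguous
print("W:", np.round(W, 2))               # W_5 = 1 is the unique maximum
print("maximal I at:", np.flatnonzero(np.isclose(I, I.max())))   # -> [5 7]
print("maximal W at:", np.flatnonzero(np.isclose(W, W.max())))   # -> [5]
```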


Exercises for the chapter


Exercise 3.9: Correlation Receiver for Unipolar Signaling

Exercise 3.10: Maximum Likelihood Tree Diagram