Digital Signal Transmission/Optimal Receiver Strategies

From LNTwww
 
*In this model,&nbsp; any channel distortions are added to the transmitter and are thus already included in the basic transmission pulse &nbsp;$g_s(t)$&nbsp; and the signal &nbsp;$s(t)$.&nbsp; This measure is only for a simpler representation and is not a restriction.<br>

*Knowing the currently applied received signal &nbsp;$r(t)$,&nbsp; the optimal receiver searches the set &nbsp;$\{Q_0, \ \text{...} \ , Q_{M-1}\}$&nbsp; of possible source symbol sequences for the most likely transmitted sequence &nbsp;$Q_j$&nbsp; and outputs it as the sink symbol sequence &nbsp;$V$.&nbsp; <br>
 
*Before the actual decision algorithm,&nbsp; a numerical value &nbsp;$W_i$&nbsp; must be derived from the received signal &nbsp;$r(t)$&nbsp; for each possible sequence &nbsp;$Q_i$&nbsp; by suitable signal preprocessing.&nbsp; The larger &nbsp;$W_i$&nbsp; is,&nbsp; the greater the probability that &nbsp;$Q_i$&nbsp; was transmitted.<br>
 
Example 1:&nbsp; To illustrate the&nbsp; "ML"&nbsp; and the&nbsp; "MAP"&nbsp; decision rule,&nbsp; we now construct a very simple example with only two source symbols &nbsp;$(M=2)$.

[[File:EN_Dig_T_3_7_S2.png|right|frame|For clarification of MAP and ML receiver|class=fit]]
<br><br>&rArr; &nbsp; The two possible symbols &nbsp;$Q_0$&nbsp; and &nbsp;$Q_1$&nbsp; are represented by the transmitted signals &nbsp;$s=0$&nbsp; and &nbsp;$s=1$.
<br><br>
&rArr; &nbsp; The received signal can &ndash; for whatever reason &ndash; take three different values,&nbsp; namely &nbsp;$r=0$,&nbsp; $r=1$&nbsp; and additionally &nbsp;$r=0.5$.
<br><br>
<u>Note:</u>
*The received values &nbsp;$r=0$&nbsp; and &nbsp;$r=1$&nbsp; will be assigned by both the ML and the MAP decision to the transmitter values &nbsp;$s=0 \ (Q_0)$&nbsp; and &nbsp;$s=1 \ (Q_1)$,&nbsp; respectively.
  
 
*In contrast,&nbsp; the decisions give a different result with respect to the received value &nbsp;$r=0.5$:&nbsp;
:*The maximum likelihood&nbsp; $\rm (ML)$&nbsp; decision rule leads to the source symbol &nbsp;$Q_0$,&nbsp; because of:
::$${\rm Pr}\big [ r= 0.5\hspace{0.05cm}\vert\hspace{0.05cm} Q_0\big ] = 0.4 > {\rm Pr}\big [ r= 0.5\hspace{0.05cm} \vert \hspace{0.05cm} Q_1\big ] = 0.2 \hspace{0.05cm}.$$
  
:*The maximum&ndash;a&ndash;posteriori&nbsp; $\rm (MAP)$&nbsp; decision rule leads to the source symbol &nbsp;$Q_1$,&nbsp; since according to the auxiliary calculation in the graph:
::$${\rm Pr}\big [Q_1 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.6 > {\rm Pr}\big [Q_0 \hspace{0.05cm}\vert\hspace{0.05cm} r= 0.5\big ] = 0.4 \hspace{0.05cm}.$$}}<br>
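The two decision rules of this example can be reproduced numerically. A minimal sketch: the likelihoods are the values quoted above; the priors &nbsp;${\rm Pr}[Q_0] = 0.25$&nbsp; and &nbsp;${\rm Pr}[Q_1] = 0.75$&nbsp; are an assumption chosen so that the resulting posteriors match the values 0.4 and 0.6 from the graph.

```python
# ML vs. MAP decision for the received value r = 0.5 of Example 1.
# Likelihoods Pr[r=0.5 | Q_i] are taken from the text; the priors are an
# assumption chosen to reproduce the quoted posteriors 0.4 and 0.6.
likelihood = {"Q0": 0.4, "Q1": 0.2}     # Pr[r=0.5 | Q_i]  (from the text)
prior      = {"Q0": 0.25, "Q1": 0.75}   # Pr[Q_i]          (assumed)

# ML: maximize the likelihood Pr[r | Q_i]
ml_decision = max(likelihood, key=likelihood.get)

# MAP: maximize Pr[Q_i | r], proportional to Pr[r | Q_i] * Pr[Q_i]
joint = {q: likelihood[q] * prior[q] for q in likelihood}
map_decision = max(joint, key=joint.get)

# Posteriors via Bayes' theorem
total = sum(joint.values())
posterior = {q: joint[q] / total for q in joint}

print(ml_decision)                 # Q0  (0.4 > 0.2)
print(map_decision)                # Q1  (0.4*0.25 = 0.10 < 0.2*0.75 = 0.15)
print(round(posterior["Q1"], 2))   # 0.6
```

With these assumed priors the ML and MAP receivers disagree on &nbsp;$r=0.5$,&nbsp; exactly as described in the example.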
 
== Maximum likelihood decision for Gaussian noise ==
<br>
We now assume that the received signal &nbsp;$r(t)$&nbsp; is additively composed of a useful component &nbsp;$s(t)$&nbsp; and a noise component &nbsp;$n(t)$,&nbsp; where the noise is assumed to be Gaussian distributed and white &nbsp; &rArr; &nbsp; &nbsp;[[Digital_Signal_Transmission/System_Components_of_a_Baseband_Transmission_System#Transmission_channel_and_interference|"AWGN noise"]]:

:$$r(t) = s(t) + n(t) \hspace{0.05cm}.$$
  
 
Any channel distortions are already applied to the signal &nbsp;$s(t)$&nbsp; for simplicity.<br>
  
The necessary noise power limitation is realized by an integrator;&nbsp; this corresponds to an averaging of the noise values in the time domain.&nbsp; If one limits the integration interval to the range &nbsp;$t_1$&nbsp; to &nbsp;$t_2$,&nbsp; one can derive a quantity &nbsp;$W_i$&nbsp; for each source symbol sequence &nbsp;$Q_i$,&nbsp; which is a measure for the conditional probability &nbsp;${\rm Pr}\big [ r(t)\hspace{0.05cm} \vert \hspace{0.05cm} Q_i\big ]$:&nbsp;
 
:$$W_i  =  \int_{t_1}^{t_2} r(t) \cdot s_i(t) \,{\rm d} t - {E_i}/{2} = I_i - {E_i}/{2} \hspace{0.05cm}.$$
  
This decision variable &nbsp;$W_i$&nbsp; can be derived using the &nbsp;$k$&ndash;dimensional&nbsp; [[Theory_of_Stochastic_Signals/Two-Dimensional_Random_Variables#Joint_probability_density_function|"joint probability density"]]&nbsp; of the noise&nbsp; (with &nbsp;$k \to \infty$)&nbsp; and some limit transitions.&nbsp; The result can be interpreted as follows:

*Integration is used for noise power reduction by averaging.&nbsp; If &nbsp;$N$&nbsp; binary symbols are decided simultaneously by the maximum likelihood detector,&nbsp; set &nbsp;$t_1 = 0$&nbsp; and &nbsp;$t_2 = N \cdot T$&nbsp; for a distortion-free channel.
*The first term of the above decision variable &nbsp;$W_i$&nbsp; is equal to the&nbsp; [[Theory_of_Stochastic_Signals/Cross-Correlation_Function_and_Cross_Power-Spectral_Density#Definition_of_the_cross-correlation_function| "energy cross-correlation function"]]&nbsp; formed over the finite time interval &nbsp;$N \cdot T$&nbsp; between &nbsp;$r(t)$&nbsp; and &nbsp;$s_i(t)$&nbsp; at the time point &nbsp;$\tau = 0$:

:$$I_i  = \varphi_{r, \hspace{0.08cm}s_i} (\tau = 0) =  \int_{0}^{N \cdot T}r(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.$$
  
*The second term gives half the energy of the considered useful signal &nbsp;$s_i(t)$&nbsp; to be subtracted.&nbsp; The energy is equal to the auto-correlation function&nbsp; $\rm (ACF)$&nbsp; of &nbsp;$s_i(t)$&nbsp; at the time point &nbsp;$\tau = 0$:

::<math>E_i  =  \varphi_{s_i} (\tau = 0) = \int_{0}^{N \cdot T} s_i^2(t) \,{\rm d} t \hspace{0.05cm}.</math>
  
*In the case of a distorting channel,&nbsp; the channel impulse response &nbsp;$h_{\rm K}(t)$&nbsp; is not Dirac-shaped,&nbsp; but for example extended to the range &nbsp;$-T_{\rm K} \le t \le +T_{\rm K}$.&nbsp; In this case,&nbsp; $t_1 = -T_{\rm K}$&nbsp; and &nbsp;$t_2 = N \cdot T +T_{\rm K}$&nbsp; must be used for the integration limits.<br>
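The decision variable &nbsp;$W_i = I_i - E_i/2$&nbsp; can be approximated numerically by discretizing the integrals. A minimal sketch for a single rectangular binary symbol&nbsp; $(N=1$, $T=1)$&nbsp; in the noise-free case; the signal shapes and step size are illustrative assumptions, not from the text:

```python
import numpy as np

# Discretized sketch of W_i = I_i - E_i/2 for one rectangular binary
# symbol (N=1, T=1, noise-free).  Signal shapes are illustrative.
dt = 1e-3
t = np.arange(0, 1, dt)
s = [np.full_like(t, -1.0), np.full_like(t, +1.0)]   # s_0(t), s_1(t)

r = s[1]                                     # received signal: s_1 was sent

I = [np.sum(r * si) * dt for si in s]        # correlation integrals I_i
E = [np.sum(si ** 2) * dt for si in s]       # signal energies E_i
W = [Ii - Ei / 2 for Ii, Ei in zip(I, E)]    # corrected decision variables

print(int(np.argmax(W)))                     # 1 -> the transmitted signal wins
```

Here &nbsp;$W_1 = 1 - 1/2 = 0.5$&nbsp; exceeds &nbsp;$W_0 = -1 - 1/2 = -1.5$,&nbsp; so the maximization picks the transmitted signal.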
  
 
== Matched filter receiver vs. correlation receiver ==
<br>
There are various circuit implementations of the maximum likelihood&nbsp; $\rm (ML)$&nbsp; receiver.
  
&rArr; &nbsp; For example,&nbsp; the required integrals can be obtained by linear filtering and subsequent sampling.&nbsp; This realization form is called&nbsp; '''matched filter receiver''',&nbsp; because here the impulse responses of the &nbsp;$M$&nbsp; parallel filters have the same shape as the useful signals &nbsp;$s_0(t)$, ... , $s_{M-1}(t)$.&nbsp; <br>
*The&nbsp; $M$&nbsp; decision variables &nbsp;$I_i$&nbsp; are then equal to the convolution products &nbsp;$r(t) \star s_i(t)$&nbsp; at time &nbsp;$t=0$.
*For example,&nbsp; the&nbsp; "optimal binary receiver"&nbsp; described in detail in the chapter&nbsp; [[Digital_Signal_Transmission/Optimization_of_Baseband_Transmission_Systems#Prerequisites_and_optimization_criterion|"Optimization of Baseband Transmission Systems"]]&nbsp; allows a maximum likelihood&nbsp; $\rm (ML)$&nbsp; decision with parameters &nbsp;$M=2$&nbsp; and &nbsp;$N=1$.<br>
  
  
&rArr; &nbsp; A second realization form is provided by the &nbsp;'''correlation receiver'''&nbsp; according to the following graph.&nbsp; One recognizes from this block diagram for the indicated parameters:

[[File:EN_Dig_T_3_7_S4.png|right|frame|Correlation receiver for &nbsp;$N=3$,&nbsp; $t_1=0$,&nbsp; $t_2=3T$ &nbsp; and &nbsp; $M=2^3=8$ |class=fit]]
 
 
*The drawn correlation receiver forms a total of &nbsp;$M=8$&nbsp; cross-correlation functions between the received signal &nbsp;$r(t) = s_k(t) + n(t)$&nbsp; and the possible transmitted signals &nbsp;$s_i(t)$,&nbsp; $i = 0$, ... , $M-1$.&nbsp; The following description assumes that the useful signal &nbsp;$s_k(t)$&nbsp; has been transmitted.<br>
  
*This receiver searches for the maximum value &nbsp;$W_j$&nbsp; of all correlation values and outputs the corresponding sequence &nbsp;$Q_j$&nbsp; as sink symbol sequence &nbsp;$V$.&nbsp; Formally,&nbsp; the&nbsp; $\rm ML$&nbsp; decision rule can be expressed as follows:

:$$V = Q_j, \hspace{0.2cm}{\rm if}\hspace{0.2cm} W_i < W_j \hspace{0.2cm}{\rm for}\hspace{0.2cm} {\rm all}\hspace{0.2cm} i \ne j \hspace{0.05cm}.$$
  
*If we further assume that all transmitted signals &nbsp;$s_i(t)$&nbsp; have exactly the same energy,&nbsp; we can dispense with the subtraction of &nbsp;$E_i/2$&nbsp; in all branches.&nbsp; In this case,&nbsp; the following correlation values are compared &nbsp;$(i = 0$, ... , $M-1)$:

::<math>I_i  =  \int_{0}^{NT} s_k(t) \cdot s_i(t) \,{\rm d} t + \int_{0}^{NT} n(t) \cdot s_i(t) \,{\rm d} t \hspace{0.05cm}.</math>
  
*With high probability,&nbsp; $I_{j=k}$&nbsp; is larger than all other comparison values&nbsp; $I_{j \ne k}$ &nbsp; &rArr; &nbsp; correct decision.&nbsp; However,&nbsp; if the noise &nbsp;$n(t)$&nbsp; is too large,&nbsp; the correlation receiver will also make wrong decisions.<br>
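The correlation receiver described above can be sketched in a few lines for equal-energy bipolar signals, where comparing the &nbsp;$I_i$&nbsp; alone suffices. The sampling rate, noise level, random seed and transmitted index are illustrative assumptions:

```python
import numpy as np

# Sketch of a correlation receiver for N=3 bipolar rectangular symbols
# (M = 2^3 = 8 candidate signals).  Sampling rate, noise level and the
# transmitted index are illustrative assumptions, not from the text.
rng = np.random.default_rng(seed=1)
samples_per_T = 100
dt = 1.0 / samples_per_T

# All M=8 candidate signals s_i(t) from the bipolar coefficient triples
coeffs = [[((i >> b) & 1) * 2 - 1 for b in (2, 1, 0)] for i in range(8)]
s = np.array([np.repeat(c, samples_per_T) for c in coeffs], dtype=float)

k = 5                                           # transmitted sequence index
r = s[k] + rng.normal(0.0, 0.5, s.shape[1])     # AWGN channel

# Equal energies -> comparing the I_i alone is sufficient (no E_i/2 term)
I = s @ r * dt                                  # all M correlation integrals
j = int(np.argmax(I))                           # ML decision: largest value
print(j)                                        # 5 -> transmitted sequence
```

At this moderate noise level the signal part of &nbsp;$I_{j=k}$&nbsp; dominates the noise integrals, so the receiver recovers the transmitted index.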
  
 
== Representation of the correlation receiver in the tree diagram ==
<br>
Let us illustrate the correlation receiver operation in the tree diagram,&nbsp; where the &nbsp;$2^3=8$&nbsp; possible source symbol sequences &nbsp;$Q_i$&nbsp; of length &nbsp;$N=3$&nbsp; are represented by bipolar rectangular transmitted signals &nbsp;$s_i(t)$.
  
[[File:P ID1458 Dig T 3 7 S5a version1.png|right|frame|All&nbsp; $2^3=8$&nbsp; possible bipolar transmitted signals&nbsp; $s_i(t)$&nbsp; for &nbsp;$N=3$|class=fit]]
The possible symbol sequences &nbsp;$Q_0 = \rm LLL$, ... , $Q_7 = \rm HHH$&nbsp; and the associated transmitted signals &nbsp;$s_0(t)$, ... , $s_7(t)$&nbsp; are shown in the graph on the right.
+
*Due to bipolar amplitude coefficients and the rectangular shape &nbsp; &rArr; &nbsp; all signal energies are equal:&nbsp; $E_0 = \text{...} = E_7 = N \cdot E_{\rm B}$,&nbsp; where &nbsp;$E_{\rm B}$&nbsp; indicates the energy of a single pulse of duration &nbsp;$T$.

*Therefore,&nbsp; the subtraction of the &nbsp;$E_i/2$&nbsp; term in all branches can be omitted &nbsp; &rArr; &nbsp; the decision based on the correlation values &nbsp;$I_i$&nbsp; gives equally reliable results as maximizing the corrected values &nbsp;$W_i$.
<br clear=all>
  
  
 
{{GraueBox|TEXT=
Example 2:&nbsp; The graph shows the continuous-valued integral values &nbsp;$i_i(t)$,&nbsp; assuming the actually transmitted signal &nbsp;$s_5(t)$&nbsp; and the noise-free case.&nbsp; For this case,&nbsp; the time-dependent integral values and the integral end values are:
[[File:EN_Dig_T_3_7_S5b.png|right|frame|Tree diagram of the correlation receiver in the noise-free case|class=fit]]
 
:$$i_i(t)  =  \int_{0}^{t} r(\tau) \cdot s_i(\tau) \,{\rm d}\tau =  \int_{0}^{t} s_5(\tau) \cdot s_i(\tau) \,{\rm d}\tau \hspace{0.3cm}\Rightarrow \hspace{0.3cm}I_i = i_i(3T). $$
 
The graph can be interpreted as follows:
*Because of the rectangular shape of the signals &nbsp;$s_i(t)$,&nbsp; all function curves &nbsp;$i_i(t)$&nbsp; are rectilinear.&nbsp; The end values normalized to &nbsp;$E_{\rm B}$&nbsp; are &nbsp;$+3$,&nbsp; $+1$,&nbsp; $-1$&nbsp; and &nbsp;$-3$.<br>

*The maximum final value is &nbsp;$I_5 = 3 \cdot E_{\rm B}$&nbsp; (red waveform),&nbsp; since signal &nbsp;$s_5(t)$&nbsp; was actually sent.&nbsp; Without noise,&nbsp; the correlation receiver thus naturally always makes the correct decision.<br>

*The blue curve &nbsp;$i_1(t)$&nbsp; leads to the final value &nbsp;$I_1 = -E_{\rm B} + E_{\rm B} + E_{\rm B} = E_{\rm B}$,&nbsp; since &nbsp;$s_1(t)$&nbsp; differs from &nbsp;$s_5(t)$&nbsp; only in the first bit.&nbsp; The comparison values &nbsp;$I_4$&nbsp; and &nbsp;$I_7$&nbsp; are also equal to &nbsp;$E_{\rm B}$.<br>

*Since &nbsp;$s_0(t)$,&nbsp; $s_3(t)$&nbsp; and &nbsp;$s_6(t)$&nbsp; differ from the transmitted &nbsp;$s_5(t)$&nbsp; in two bits,&nbsp; $I_0 = I_3 = I_6 = -E_{\rm B}$.&nbsp; The green curve &nbsp;$i_6(t)$&nbsp; initially increases&nbsp; (first bit matches)&nbsp; and then decreases over two bits.

*The purple curve leads to the final value &nbsp;$I_2 = -3 \cdot E_{\rm B}$.&nbsp; The corresponding signal &nbsp;$s_2(t)$&nbsp; differs from &nbsp;$s_5(t)$&nbsp; in all three symbols and &nbsp;$s_2(t) = -s_5(t)$&nbsp; holds.}}<br><br>
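The end values of Example 2 can be checked with a few lines: for rectangular &nbsp;$\pm 1$&nbsp; signals with &nbsp;$T=1$&nbsp; and &nbsp;$E_{\rm B}=1$,&nbsp; the integrals reduce to inner products of the coefficient triples. A sketch under these normalizing assumptions:

```python
import numpy as np

# Noise-free end values I_i = i_i(3T) of Example 2, normalized to E_B = 1.
# Transmitted signal: s_5(t); coefficients are bipolar (+1/-1), T = 1, so
# each integral collapses to an inner product of coefficient triples.
coeffs = np.array([[((i >> b) & 1) * 2 - 1 for b in (2, 1, 0)]
                   for i in range(8)], dtype=float)

I = coeffs @ coeffs[5]      # inner products = integrals for rectangles

print(I.tolist())
# [-1.0, 1.0, -3.0, -1.0, 1.0, 3.0, -1.0, 1.0]
```

This reproduces the values discussed above: &nbsp;$I_5 = 3$,&nbsp; $I_1 = I_4 = I_7 = 1$,&nbsp; $I_0 = I_3 = I_6 = -1$&nbsp; and &nbsp;$I_2 = -3$&nbsp; (each times &nbsp;$E_{\rm B}$).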
  
 
{{GraueBox|TEXT=
Example 3:&nbsp; The graph describes the same situation as &nbsp;Example 2,&nbsp; but now the received signal &nbsp;$r(t) = s_5(t) + n(t)$&nbsp; is assumed.&nbsp; The variance of the AWGN noise &nbsp;$n(t)$&nbsp; here is &nbsp;$\sigma_n^2 = 4 \cdot E_{\rm B}/T$.
[[File:EN_Dig_T_3_7_S5c_neu.png|right|frame|Tree diagram of the correlation receiver with noise &nbsp; $(\sigma_n^2 = 4 \cdot E_{\rm B}/T)$ |class=fit]]
<br><br><br>One can see from this graph compared to the noise-free case:
*The curves are no longer straight due to the noise component &nbsp;$n(t)$,&nbsp; and the final values differ slightly from those without noise.

*In the considered example,&nbsp; the correlation receiver decides correctly with high probability,&nbsp; since the difference between &nbsp;$I_5$&nbsp; and the second largest value &nbsp;$I_7$&nbsp; is relatively large: &nbsp;$1.65 \cdot E_{\rm B}$.&nbsp; <br>

*The error probability in this example is not better than that of the matched filter receiver with symbol-wise decision.&nbsp; In accordance with the chapter&nbsp;  [[Digital_Signal_Transmission/Optimization_of_Baseband_Transmission_Systems#Prerequisites_and_optimization_criterion|"Optimization of Baseband Transmission Systems"]],&nbsp; the following also applies here:
 
:$$p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right) = {1}/{2} \cdot {\rm erfc} \left( \sqrt{ { E_{\rm B} }/{N_0} }\right) \hspace{0.05cm}.$$}}
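The error probability formula above is easy to evaluate numerically; the two forms are identical because &nbsp;${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$.&nbsp; A sketch for a few illustrative &nbsp;$E_{\rm B}/N_0$&nbsp; values (the chosen dB values are assumptions for demonstration):

```python
from math import sqrt, erfc

# Numerical sketch of the formula from the text:
# p_S = Q(sqrt(2*E_B/N_0)) = 1/2 * erfc(sqrt(E_B/N_0)).
def p_S(eb_n0: float) -> float:
    """Symbol error probability for bipolar binary transmission over AWGN."""
    return 0.5 * erfc(sqrt(eb_n0))

# The two forms agree because Q(x) = 1/2 * erfc(x / sqrt(2))
def Q(x: float) -> float:
    return 0.5 * erfc(x / sqrt(2.0))

for eb_n0_db in (0, 5, 10):                # illustrative E_B/N_0 values
    eb_n0 = 10 ** (eb_n0_db / 10)          # dB -> linear scale
    assert abs(p_S(eb_n0) - Q(sqrt(2 * eb_n0))) < 1e-15
    print(f"E_B/N_0 = {eb_n0_db:2d} dB:  p_S = {p_S(eb_n0):.3e}")
```

For &nbsp;$E_{\rm B}/N_0 = 0 \ \rm dB$&nbsp; this gives &nbsp;$p_{\rm S} \approx 7.9 \cdot 10^{-2}$,&nbsp; decreasing steeply with increasing &nbsp;$E_{\rm B}/N_0$.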
  
 
{{BlaueBox|TEXT=
$\text{Conclusions:}$&nbsp;
#If the input signal does not have statistical bindings,&nbsp; as in &nbsp;$\text{Example 2}$,&nbsp; there is no improvement by joint decision of &nbsp;$N$&nbsp; symbols over symbol-wise decision &nbsp; <br>&rArr; &nbsp; $p_{\rm S} = {\rm Q} \left( \sqrt{ {2 \cdot E_{\rm B} }/{N_0} }\right)$.
#In the presence of statistical bindings,&nbsp; the joint decision of &nbsp;$N$&nbsp; symbols noticeably reduces the error probability,&nbsp; since the maximum likelihood receiver takes the bindings into account.
#Such bindings can be either deliberately created by transmission-side coding&nbsp; $($see the &nbsp;LNTwww book&nbsp; [[Channel_Coding|"Channel Coding"]]$)$&nbsp; or unintentionally caused by&nbsp; (linear)&nbsp; channel distortions.<br>
#In the presence of such&nbsp; "intersymbol interference",&nbsp; the calculation of the error probability is much more difficult.&nbsp; However,&nbsp; comparable approximations as for the Viterbi receiver can be used,&nbsp; which are given at the &nbsp;[[Digital_Signal_Transmission/Viterbi_Receiver#Bit_error_probability_with_maximum_likelihood_decision|end of the next chapter]].&nbsp; }}<br>
  
 
== Correlation receiver with unipolar signaling ==
<br>
So far,&nbsp; we have always assumed binary&nbsp; '''bipolar'''&nbsp; signaling when describing the correlation receiver:
 
:$$a_\nu  =  \left\{ \begin{array}{c} +1  \\ -1 \\  \end{array} \right.\quad \begin{array}{c} {\rm for} \\ {\rm for} \\  \end{array} \hspace{0.2cm} \begin{array}{c} q_\nu = \mathbf{H} \hspace{0.05cm},  \\ q_\nu = \mathbf{L} \hspace{0.05cm}.  \\ \end{array}$$
Now we consider the case of binary&nbsp; '''unipolar'''&nbsp; digital signaling:
 
:$$a_\nu  =  \left\{ \begin{array}{c} 1  \\ 0 \\  \end{array} \right.\quad \begin{array}{c} {\rm for} \\ {\rm for} \\  \end{array} \hspace{0.2cm} \begin{array}{c} q_\nu = \mathbf{H} \hspace{0.05cm},  \\ q_\nu = \mathbf{L} \hspace{0.05cm}.  \\ \end{array}$$
[[File:P ID1462 Dig T 3 7 S5c version1.png|right|frame|Possible unipolar transmitted signals for &nbsp;$N = 3$|class=fit]]
The &nbsp;$2^3=8$&nbsp; possible source symbol sequences &nbsp;$Q_i$&nbsp; of length &nbsp;$N=3$&nbsp; are now represented by unipolar rectangular transmitted signals &nbsp;$s_i(t)$.&nbsp;

Listed on the right are the eight symbol sequences and the transmitted signals

:$$Q_0 = \rm LLL, \text{ ... },\ Q_7 = \rm HHH,$$
:$$s_0(t), \text{ ... },\ s_7(t).$$
  
By comparing with the &nbsp;[[Digital_Signal_Transmission/Optimal_Receiver_Strategies#Representation_of_the_correlation_receiver_in_the_tree_diagram|"corresponding table"]]&nbsp; for bipolar signaling,&nbsp; one can see:
*Due to the unipolar amplitude coefficients,&nbsp; the signal energies &nbsp;$E_i$&nbsp; are now different,&nbsp; e.g. &nbsp;$E_0 = 0$&nbsp; and &nbsp;$E_7 = 3 \cdot E_{\rm B}$.

*Here the decision based on the integral values &nbsp;$I_i$&nbsp; does not lead to the correct result.&nbsp; Instead,&nbsp; the corrected comparison values &nbsp;$W_i = I_i - E_i/2$&nbsp; must now be used.<br>
  
  
 
{{GraueBox|TEXT=
Example 4:&nbsp; The graph shows the integral values &nbsp;$I_i$,&nbsp; again assuming the actual transmitted signal &nbsp;$s_5(t)$&nbsp; and the noise-free case.&nbsp; The corresponding bipolar equivalent was considered in&nbsp; [[Digital_Signal_Transmission/Optimal_Receiver_Strategies#Representation_of_the_correlation_receiver_in_the_tree_diagram|Example 2]].

[[File:EN_Dig_T_3_7_S5d.png|right|frame|Tree diagram of the correlation receiver&nbsp; (unipolar signaling)|class=fit]]
For this example,&nbsp; the following comparison values result,&nbsp; each normalized to &nbsp;$E_{\rm B}$:

:$$I_5 = I_7 = 2, \hspace{0.2cm}I_1 = I_3 = I_4= I_6 = 1 \hspace{0.2cm}, \hspace{0.2cm}I_0 = I_2 = 0 \hspace{0.05cm}.$$
  
 
This means:
*When compared in terms of the maximum &nbsp;$I_i$&nbsp; values,&nbsp; the source symbol sequences &nbsp;$Q_5$&nbsp; and &nbsp;$Q_7$&nbsp; would be equivalent.

*On the other hand,&nbsp; if the different energies &nbsp;$(E_5 = 2 \cdot E_{\rm B}$,&nbsp; $E_7 = 3 \cdot E_{\rm B})$&nbsp; are taken into account,&nbsp; the decision is clearly in favor of the sequence &nbsp;$Q_5$&nbsp; because of &nbsp;$W_5 > W_7$.&nbsp;

*The correlation receiver according to &nbsp;$W_i = I_i - E_i/2$&nbsp; therefore decides correctly on&nbsp; $s(t) = s_5(t)$&nbsp; even with unipolar signaling. }}<br>
  
 
== Exercises for the chapter==
 
== Exercises for the chapter==

Latest revision as of 14:16, 11 July 2022

Considered scenario and prerequisites


All digital receivers described so far always make symbol-wise decisions.  If,  on the other hand,  several symbols are decided simultaneously,  statistical bindings between the received signal samples can be taken into account during detection,  which results in a lower error probability – but at the cost of an additional delay time.

In this chapter  (partly also in the next chapter),  the following transmission model is assumed.  Compared to the last two chapters,  the following differences arise:

Transmission system with optimal receiver
  • Q ∈ {Qi}  with  i = 0, ... , M−1  denotes a time-constrained source symbol sequence  ⟨qν⟩  whose symbols are to be jointly decided by the receiver.
  • If the source  Q  describes a sequence of  N  redundancy-free binary symbols,  set  M = 2^N.  On the other hand,  if the decision is symbol-wise,  M  specifies the level number of the digital source.
  • In this model,  any channel distortions are added to the transmitter and are thus already included in the basic transmission pulse  gs(t)  and the signal  s(t).  This measure is only for a simpler representation and is not a restriction.
  • Knowing the currently applied received signal  r(t),  the optimal receiver searches the set  {Q0, ... , QM−1}  of possible source symbol sequences for the most likely transmitted sequence  Qj  and outputs it as the sink symbol sequence  V.
  • Before the actual decision algorithm,  a numerical value  Wi  must be derived from the received signal  r(t)  for each possible sequence  Qi  by suitable signal preprocessing.  The larger  Wi  is,  the greater the inference probability that  Qi  was transmitted.
  • Signal preprocessing must provide for the necessary noise power limitation and – in the case of strong channel distortions – for sufficient pre-equalization of the resulting intersymbol interferences.  In addition,  preprocessing also includes sampling for time discretization.

Maximum-a-posteriori and maximum–likelihood decision rule


The  (unconstrained)  optimal receiver is called the  "MAP receiver",  where  "MAP"  stands for  "maximum–a–posteriori".

Definition:  The  maximum–a–posteriori receiver  (abbreviated  MAP)  determines the  M  inference probabilities  Pr[Qi|r(t)]  and sets the output sequence  V = Qj  according to the decision rule,  where  i = 0, ... , M−1  and  i ≠ j:

Pr[Qj|r(t)]>Pr[Qi|r(t)].


  • The  "inference probability"  Pr[Qi|r(t)]  indicates the probability with which the sequence  Qi  was sent when the received signal  r(t)  is present at the decision.  Using  "Bayes' theorem",  this probability can be calculated as follows:
Pr[Qi|r(t)] = Pr[r(t)|Qi] · Pr[Qi] / Pr[r(t)].
  • The MAP decision rule can thus be reformulated or simplified as follows:   Let the sink symbol sequence  V = Qj,  if for all  i ≠ j  holds:
Pr[r(t)|Qj] · Pr[Qj] / Pr[r(t)]  >  Pr[r(t)|Qi] · Pr[Qi] / Pr[r(t)]   ⇒   Pr[r(t)|Qj] · Pr[Qj]  >  Pr[r(t)|Qi] · Pr[Qi].

A further simplification of this MAP decision rule leads to the  "ML receiver",  where  "ML"  stands for  "maximum likelihood".

Definition:  The  maximum likelihood receiver  (abbreviated  ML)  decides according to the conditional forward probabilities  Pr[r(t)|Qi]  and sets the output sequence  V = Qj,  if for all  i ≠ j  holds:

Pr[r(t)|Qj]>Pr[r(t)|Qi].


A comparison of these two definitions shows:

  • For equally probable source symbols,  the  "ML receiver"  and the  "MAP receiver"  use the same decision rules.  Thus,  they are equivalent.
  • For symbols that are not equally probable,  the  "ML receiver"  is inferior to the  "MAP receiver"  because it does not use all the available information for detection.


Example 1:  To illustrate the  "ML"  and the  "MAP"  decision rule,  we now construct a very simple example with only two source symbols  (M=2).

For clarification of MAP and ML receiver



⇒   The two possible symbols  Q0  and  Q1  are represented by the transmitted signals  s=0  and  s=1.

⇒   The received signal can – for whatever reason – take three different values, namely  r=0,  r=1  and additionally  r=0.5.

Note:

  • The received values  r=0  and  r=1  will be assigned to the transmitted values  s=0 (Q0)  and  s=1 (Q1),  respectively,  by both the ML and the MAP decision.
  • In contrast,  the two decisions give different results for the received value  r=0.5:
  • The maximum likelihood  (ML)  decision rule leads to the source symbol  Q0,  because of:
Pr[r=0.5|Q0]=0.4>Pr[r=0.5|Q1]=0.2.
  • The maximum–a–posteriori  (MAP)  decision rule leads to the source symbol  Q1,  since according to the auxiliary calculation in the graph:
Pr[Q1|r=0.5]=0.6>Pr[Q0|r=0.5]=0.4.
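The two decision rules of this example can be reproduced in a few lines. The forward probabilities are taken from the text; the priors  Pr[Q0] = 0.25  and  Pr[Q1] = 0.75  are an assumption, chosen as the unique values that reproduce the posteriors 0.4 and 0.6 stated above (a sketch, not part of the original example):

```python
# MAP vs. ML decision for the r = 0.5 case of Example 1.
forward = {0: 0.4, 1: 0.2}      # Pr[r = 0.5 | Q_i], from the text
prior   = {0: 0.25, 1: 0.75}    # Pr[Q_i], assumed (see lead-in)

# ML: maximize the forward probability Pr[r | Q_i]
ml_decision = max(forward, key=lambda i: forward[i])

# MAP: maximize Pr[r | Q_i] * Pr[Q_i]; the common factor Pr[r] cancels
map_decision = max(prior, key=lambda i: forward[i] * prior[i])

# Posteriors via Bayes' theorem, for comparison with the graph
pr_r = sum(forward[i] * prior[i] for i in (0, 1))
posterior = {i: forward[i] * prior[i] / pr_r for i in (0, 1)}

print(ml_decision)    # 0  -> Q_0, as stated for the ML rule
print(map_decision)   # 1  -> Q_1, as stated for the MAP rule
print(posterior)      # {0: 0.4, 1: 0.6}
```

With equal priors the two `max` calls would pick the same index, which mirrors the equivalence statement above.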


Maximum likelihood decision for Gaussian noise


We now assume that the received signal  r(t)  is additively composed of a useful component  s(t)  and a noise component  n(t),  where the noise is assumed to be Gaussian distributed and white   ⇒    "AWGN noise":

r(t)=s(t)+n(t).

Any channel distortions are already applied to the signal  s(t)  for simplicity.

The necessary noise power limitation is realized by an integrator;  this corresponds to an averaging of the noise values in the time domain.  If one limits the integration interval to the range  t1  to  t2,  one can derive a quantity  Wi  for each source symbol sequence  Qi,  which is a measure for the conditional probability  Pr[r(t)|Qi]

Wi = ∫_{t1}^{t2} r(t) · si(t) dt  −  1/2 · ∫_{t1}^{t2} si²(t) dt  =  Ii − Ei/2.

This decision variable  Wi  can be derived using the  k-dimensional  joint probability density  of the noise  (in the limit  k → ∞)  and some limiting arguments.  The result can be interpreted as follows:

  • Integration is used for noise power reduction by averaging.  If  N  binary symbols are decided simultaneously by the maximum likelihood detector,  set  t1 = 0  and  t2 = N · T  for a distortion-free channel.
  • The first term of the above decision variable  Wi  is equal to the  "energy cross-correlation function"  formed over the finite time interval  NT  between  r(t)  and  si(t)  at the time point  τ=0:
Ii = φr,si(τ = 0) = ∫_0^{NT} r(t) · si(t) dt.
  • The second term describes half the energy of the considered useful signal  si(t),  which must be subtracted.  The energy is equal to the auto-correlation function  (ACF)  of  si(t)  at the time point  τ = 0:
Ei = φsi(τ = 0) = ∫_0^{NT} si²(t) dt.
  • In the case of a distorting channel,  the channel impulse response  hK(t)  is not Dirac-shaped,  but extends,  for example,  over the range  −TK ≤ t ≤ +TK.  In this case,  t1 = −TK  and  t2 = N · T + TK  must be used as integration limits.
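The decision variable  Wi = Ii − Ei/2  can be checked numerically. A minimal sketch for rectangular NRZ signaling; the parameters (N = 3 bits, bit duration T = 1, unit amplitude, 100 samples per bit) and all names are illustrative assumptions, not taken from the text:

```python
import numpy as np

N, samples_per_T = 3, 100
dt = 1.0 / samples_per_T                     # T = 1

def rect_signal(bits):
    """Bipolar rectangular signal: bit 1 -> +1, bit 0 -> -1."""
    return np.repeat([1.0 if b else -1.0 for b in bits], samples_per_T)

def decision_value(r, s_i):
    """W_i = integral(r * s_i) dt - 1/2 * integral(s_i^2) dt."""
    I_i = np.sum(r * s_i) * dt
    E_i = np.sum(s_i ** 2) * dt
    return I_i - E_i / 2

# Noise-free check: the transmitted sequence must maximize W_i.
tx_bits = (1, 0, 1)
r = rect_signal(tx_bits)
all_bits = [tuple((i >> k) & 1 for k in (2, 1, 0)) for i in range(8)]
W = [decision_value(r, rect_signal(b)) for b in all_bits]
best = all_bits[int(np.argmax(W))]
print(best)   # (1, 0, 1)
```

Replacing the sums by finer grids approximates the continuous integrals arbitrarily well for these piecewise constant signals.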

Matched filter receiver vs. correlation receiver


There are various circuit implementations of the maximum likelihood  (ML)  receiver.

⇒   For example,  the required integrals can be obtained by linear filtering and subsequent sampling.  This realization form is called the  matched filter receiver,  because here the impulse responses of the  M  parallel filters have the same shape as the useful signals  s0(t), ... , sM−1(t).

  • The  M  decision variables  Ii  are then equal to the convolution products  r(t) ∗ si(−t)  at time  t = 0.
  • For example,  the  "optimal binary receiver"  described in detail in the chapter  "Optimization of Baseband Transmission Systems"  allows a maximum likelihood  (ML)  decision with parameters  M=2  and  N=1.


⇒   A second realization form is provided by the  correlation receiver  according to the following graph.  One recognizes from this block diagram for the indicated parameters:

Correlation receiver for  N = 3,  t1 = 0,  t2 = 3T   and   M = 2³ = 8
  • The drawn correlation receiver forms a total of  M = 8  cross-correlation functions between the received signal  r(t) = sk(t) + n(t)  and the possible transmitted signals  si(t),  i = 0, ... , M−1.  The following description assumes that the useful signal  sk(t)  has been transmitted.
  • This receiver searches for the maximum value  Wj  of all correlation values and outputs the corresponding sequence  Qj  as sink symbol sequence  V.  Formally,  the  ML  decision rule can be expressed as follows:
V = Qj,   if   Wi < Wj   for all   i ≠ j.
  • If we further assume that all transmitted signals  si(t)  have the same energy,  the subtraction of  Ei/2  in all branches can be dispensed with.  In this case,  the following correlation values are compared  (i = 0, ... , M−1):
Ii = ∫_0^{NT} sk(t) · si(t) dt + ∫_0^{NT} n(t) · si(t) dt.
  • With high probability,  Ik  is larger than all other comparison values  Ii≠k   ⇒   correct decision.  However,  if the noise  n(t)  is too large,  the correlation receiver will also make wrong decisions.
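The branch structure just described can be sketched numerically for M = 8 equal-energy bipolar signals: since all  Ei  coincide, only the correlation values  Ii  need to be compared. The signal model, noise level and random seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, spT = 3, 200                        # 3 bits, 200 samples per bit
dt = 1.0 / spT                         # bit duration T = 1

signals = []                           # s_0(t), ..., s_7(t), bipolar
for i in range(8):
    bits = [(i >> k) & 1 for k in (2, 1, 0)]
    signals.append(np.repeat([1.0 if b else -1.0 for b in bits], spT))

k = 5                                  # index of the transmitted signal
r = signals[k] + 1.0 * rng.standard_normal(N * spT)   # AWGN, sigma = 1

# M parallel correlators: I_i = integral of r(t) * s_i(t) dt
I = [float(np.sum(r * s) * dt) for s in signals]
decided = int(np.argmax(I))
print(decided)   # 5 for this seed; large noise can cause wrong decisions
```

Increasing the per-sample noise standard deviation eventually makes `decided` differ from `k`, which is exactly the error mechanism described above.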

Representation of the correlation receiver in the tree diagram


Let us illustrate the correlation receiver operation in the tree diagram,  where the  2³ = 8  possible source symbol sequences  Qi  of length  N = 3  are represented by bipolar rectangular transmitted signals  si(t).

All  2³ = 8  possible bipolar transmitted signals  si(t)  for  N = 3

The possible symbol sequences  Q0=LLL, ... , Q7=HHH  and the associated transmitted signals  s0(t), ... , s7(t)  are listed below.

  • Due to the bipolar amplitude coefficients and the rectangular shape   ⇒   all signal energies are equal:  E0 = ... = E7 = N · EB,  where  EB  denotes the energy of a single pulse of duration  T.
  • Therefore,  the subtraction of the  Ei/2  term in all branches can be omitted   ⇒   the decision based on the correlation values  Ii  gives equally reliable results as maximizing the corrected values  Wi.



Example 2:  The graph shows the continuous-valued integral values  ii(t),  assuming the actually transmitted signal  s5(t)  and the noise-free case.  For this case,  the time-dependent integral values and their final values are:

Tree diagram of the correlation receiver in the noise-free case
ii(t) = ∫_0^t r(τ) · si(τ) dτ = ∫_0^t s5(τ) · si(τ) dτ   ⇒   Ii = ii(3T).

The graph can be interpreted as follows:

  • Because of the rectangular shape of the signals  si(t),  all function curves  ii(t)  are rectilinear.  The final values normalized to  EB  are  +3,  +1,  −1  and  −3.
  • The maximum final value is  I5=3EB  (red waveform),  since signal  s5(t)  was actually sent.  Without noise,  the correlation receiver thus naturally always makes the correct decision.
  • The blue curve  i1(t)  leads to the final value  I1 = −EB + EB + EB = EB,  since  s1(t)  differs from  s5(t)  only in the first bit.  The comparison values  I4  and  I7  are also equal to  EB.
  • Since  s0(t),  s3(t)  and  s6(t)  each differ from the transmitted  s5(t)  in two bits,  I0 = I3 = I6 = −EB.  The green curve  i6(t)  initially increases  (the first bit matches)  and then decreases over the last two bits.
  • The purple curve leads to the final value  I2 = −3 · EB.  The corresponding signal  s2(t)  differs from  s5(t)  in all three symbols,  and  s2(t) = −s5(t)  holds.
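The final values of this example can be verified with a short script. With the normalization  T = EB = 1  (an assumption for the sketch), each  Ii  is simply the number of matching bits minus the number of differing bits:

```python
# Reproducing the final values I_i of Example 2
# (bipolar signaling, noise-free, s_5(t) transmitted).

def bits(i):
    """Three amplitude coefficients of sequence Q_i (MSB first)."""
    return [(i >> k) & 1 for k in (2, 1, 0)]

tx = bits(5)                                  # transmitted sequence HLH
I = [sum(1 if b == c else -1 for b, c in zip(bits(i), tx))
     for i in range(8)]

print(I)   # [-1, 1, -3, -1, 1, 3, -1, 1], i.e. I_5 = 3 EB is maximal
```

The printed list matches the tree diagram: one value +3, three values +1, three values −1 and one value −3.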



Example 3:  The graph describes the same situation as  Example 2,  but now the received signal  r(t) = s5(t) + n(t)  is assumed.  The variance of the AWGN noise  n(t)  is here  σn² = 4 · EB/T.

Tree diagram of the correlation receiver with noise   (σn² = 4 · EB/T)




One can see from this graph compared to the noise-free case:

  • The curves are now no longer straight due to the noise component  n(t)  and there are also slightly different final values than without noise.
  • In the considered example,  the correlation receiver decides correctly with high probability,  since the difference between  I5  and the next value  I7  is relatively large:  1.65EB
  • The error probability in this example is not better than that of the matched filter receiver with symbol-wise decision.  In accordance with the chapter  "Optimization of Baseband Transmission Systems",  the following also applies here:
pS = Q(√(2 · EB/N0)) = 1/2 · erfc(√(EB/N0)).
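The equality of the two forms of  pS  follows from the standard identity  Q(x) = 1/2 · erfc(x/√2),  which can be cross-checked numerically (the sample values of  EB/N0  are arbitrary):

```python
import math

def Q(x):
    """Complementary Gaussian error integral Q(x) = 1/2 erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for eb_n0 in (0.5, 1.0, 2.0, 4.0, 8.0):
    lhs = Q(math.sqrt(2.0 * eb_n0))              # Q(sqrt(2 EB/N0))
    rhs = 0.5 * math.erfc(math.sqrt(eb_n0))      # 1/2 erfc(sqrt(EB/N0))
    assert abs(lhs - rhs) < 1e-15
    print(f"EB/N0 = {eb_n0}:  p_S = {lhs:.3e}")
```

For  EB/N0 = 1  this gives  pS ≈ 0.0786,  dropping rapidly as the signal-to-noise ratio grows.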


Conclusions: 

  1. If the input signal does not have statistical bindings  (Example 2),  there is no improvement by joint decision of  N  symbols over symbol-wise decision  
    ⇒   pS = Q(√(2 · EB/N0)).
  2. In the presence of statistical bindings  (Example 3),  the joint decision of  N  symbols noticeably reduces the error probability,  since the maximum likelihood receiver takes the bindings into account.
  3. Such bindings can be either deliberately created by transmission-side coding  (see the  LNTwww book  "Channel Coding")  or unintentionally caused by  (linear)  channel distortions.
  4. In the presence of such  "intersymbol interferences",  the calculation of the error probability is much more difficult.  However,  comparable approximations as for the Viterbi receiver can be used,  which are given at the end of the next chapter.


Correlation receiver with unipolar signaling


So far,  we have always assumed binary  bipolar  signaling when describing the correlation receiver:

aν = +1  for  qν = H,     aν = −1  for  qν = L.

Now we consider the case of binary  unipolar  digital signaling:

aν = 1  for  qν = H,     aν = 0  for  qν = L.
Possible unipolar transmitted signals for  N=3

The  2³ = 8  possible source symbol sequences  Qi  of length  N = 3  are now represented by unipolar rectangular transmitted signals  si(t).

Listed on the right are the eight symbol sequences and the transmitted signals

Q0=LLL, ... , Q7=HHH,
s0(t), ... , s7(t).

By comparing with the  "corresponding table"  for bipolar signaling,  one can see:

  • Due to the unipolar amplitude coefficients,  the signal energies  Ei  are now different,  e.g.  E0=0  and  E7=3EB.
  • Here the decision based on the integral values  Ii  alone does not lead to the correct result.  Instead,  the corrected comparison values  Wi = Ii − Ei/2  must now be used.


Example 4:  The graph shows the integral values  Ii,  again assuming the actual transmitted signal  s5(t)  and the noise-free case.  The corresponding bipolar equivalent was considered in  Example 2.

Tree diagram of the correlation receiver  (unipolar signaling)

For this example,  the following comparison values result,  each normalized to  EB:

I5 = I7 = 2,   I1 = I3 = I4 = I6 = 1,   I0 = I2 = 0,
W5 = 1,   W1 = W4 = W7 = 0.5,   W0 = W3 = W6 = 0,   W2 = −0.5.

This means:

  • When compared in terms of maximum  Ii values,  the source symbol sequences  Q5  and  Q7  would be equivalent.
  • On the other hand,  if the different energies  (E5 = 2 · EB,  E7 = 3 · EB)  are taken into account,  the decision is clearly in favor of the sequence  Q5  because of  W5 > W7.
  • The correlation receiver according to  Wi = Ii − Ei/2  therefore decides correctly on  s(t) = s5(t)  even with unipolar signaling.
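The unipolar comparison values of Example 4 can likewise be reproduced in a few lines, again with the normalization  T = EB = 1  assumed for the sketch:

```python
# Reproducing Example 4 (unipolar signaling, noise-free, s_5(t)
# transmitted): with unequal energies, the corrected values
# W_i = I_i - E_i/2 must be compared instead of the raw I_i.

def bits(i):
    """Unipolar amplitude coefficients of sequence Q_i (MSB first)."""
    return [(i >> k) & 1 for k in (2, 1, 0)]

tx = bits(5)                                          # HLH -> 1, 0, 1
I = [sum(b * c for b, c in zip(bits(i), tx)) for i in range(8)]
E = [sum(bits(i)) for i in range(8)]                  # signal energies
W = [Ii - Ei / 2 for Ii, Ei in zip(I, E)]

print(I)                 # [0, 1, 0, 1, 1, 2, 1, 2]: I_5 = I_7, ambiguous
print(W)                 # [0.0, 0.5, -0.5, 0.0, 0.5, 1.0, 0.0, 0.5]
print(W.index(max(W)))   # 5: unique maximum -> correct decision
```

The raw correlations leave  Q5  and  Q7  tied, while the energy-corrected values single out  Q5,  exactly as argued above.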


Exercises for the chapter


Exercise 3.09: Correlation Receiver for Unipolar Signaling

Exercise 3.10: Maximum Likelihood Tree Diagram