Exercise 4.4: Maximum–a–posteriori and Maximum–Likelihood

{{quiz-Header|Buchseite=Digital_Signal_Transmission/Structure_of_the_Optimal_Receiver}}
  
[[File:EN_Dig_A_4_4.png|right|frame|Channel transition probabilities]]
To illustrate  "maximum–a–posteriori"  $\rm (MAP)$  and  "maximum likelihood"  $\rm (ML)$  decisions,  we now construct a very simple example with only two possible messages  $m_0 = 0$  and  $m_1 = 1$,  represented by the signal values  $s_0$  and  $s_1$,  respectively:
 
:$$s \hspace{-0.15cm} \ = \ \hspace{-0.15cm}s_0 = +1 \hspace{0.2cm} \Longleftrightarrow \hspace{0.2cm}m = m_0 = 0\hspace{0.05cm},$$
 
 
:$$s \hspace{-0.15cm} \ = \ \hspace{-0.15cm}s_1 = -1 \hspace{0.2cm} \Longleftrightarrow \hspace{0.2cm}m = m_1 = 1\hspace{0.05cm}.$$
 
  
*Let the probabilities of occurrence be:
 
:$${\rm Pr}(s = s_0) = 0.75,\hspace{0.2cm}{\rm Pr}(s = s_1) = 0.25 \hspace{0.05cm}.$$
 
  
*The received signal can – for whatever reason – take on three different values, namely
 
:$$r = +1,\hspace{0.2cm}r = 0,\hspace{0.2cm}r = -1 \hspace{0.05cm}.$$
 
  
*The conditional channel probabilities can be taken from the graph.
  
After transmission, the message is to be estimated by an optimal receiver.&nbsp; Available are:

* the &nbsp;'''maximum likelihood receiver'''&nbsp; $\rm (ML$&nbsp; receiver$)$,&nbsp; which does not know the occurrence probabilities&nbsp; ${\rm Pr}(s = s_i)$,&nbsp; with the decision rule:

:$$\hat{m}_{\rm ML} = {\rm arg} \max_i \hspace{0.1cm} \big[ p_{r |s } \hspace{0.05cm} (\rho |s_i ) \big]\hspace{0.05cm},$$

* the &nbsp;'''maximum-a-posteriori receiver'''&nbsp; $\rm (MAP$&nbsp; receiver$)$;&nbsp; this receiver also considers the symbol probabilities of the source in its decision process:

:$$\hat{m}_{\rm MAP} = {\rm arg} \max_i \hspace{0.1cm} \big[ {\rm Pr}( s = s_i) \cdot p_{r |s } \hspace{0.05cm} (\rho |s_i ) \big ]\hspace{0.05cm}.$$
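For the discrete channel considered here, both rules reduce to simple arg-max operations over a likelihood table; a minimal Python sketch, with&nbsp; <code>priors</code>&nbsp; and&nbsp; <code>likelihood</code>&nbsp; as assumed containers for&nbsp; ${\rm Pr}(s = s_i)$&nbsp; and&nbsp; $p_{r|s}(\rho \hspace{0.05cm}|\hspace{0.05cm} s_i)$:

<pre>
# Sketch of the two decision rules for a discrete channel.
# Assumed data layout: priors[i] = Pr(s = s_i),
# likelihood[i][rho] = Pr(r = rho | s = s_i) for a received value rho.

def ml_decision(rho, likelihood):
    # Maximum likelihood: arg-max of Pr(r = rho | s_i); the priors are ignored.
    return max(range(len(likelihood)), key=lambda i: likelihood[i][rho])

def map_decision(rho, priors, likelihood):
    # Maximum a-posteriori: arg-max of Pr(s_i) * Pr(r = rho | s_i).
    return max(range(len(priors)), key=lambda i: priors[i] * likelihood[i][rho])
</pre>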
  
  
Notes:
*The exercise belongs to the chapter&nbsp;  [[Digital_Signal_Transmission/Optimal_Receiver_Strategies|"Optimal Receiver Strategies"]].

*Reference is also made to the chapter&nbsp; [[Digital_Signal_Transmission/Structure_of_the_Optimal_Receiver|"Structure of the Optimal Receiver"]].

* The necessary statistical principles can be found in the chapter&nbsp; [[Theory_of_Stochastic_Signals/Statistical_Dependence_and_Independence| "Statistical Dependence and Independence"]]&nbsp; of the book&nbsp; "Theory of Stochastic Signals".

===Questions===
 
<quiz display=simple>
 
{With which probabilities do the received values occur?
 
|type="{}"}
 
|type="{}"}
${\rm Pr}(r = +1) \ = \ $ { 0.6 3% }
${\rm Pr}(r = -1) \ = \ $ { 0.15 3% }
${\rm Pr}(r = 0) \hspace{0.45cm} = \ $ { 0.25 3% }
  
{Calculate all inference probabilities.
 
|type="{}"}
 
|type="{}"}
${\rm Pr}(s_0|r = +1) \ = \ $ { 1 3% }
${\rm Pr}(s_1|r = +1) \ = \ $ { 0. }
${\rm Pr}(s_0|r = -1) \ = \ $ { 0. }
${\rm Pr}(s_1|r = -1) \ = \ $ { 1 3% }
${\rm Pr}(s_0|r = 0) \hspace{0.45cm} = \ $ { 0.6 3% }
${\rm Pr}(s_1|r = 0) \hspace{0.45cm} = \ $ { 0.4 3% }
  
{Do MAP and ML receivers differ under the condition&nbsp; "$r = +1$"?
 
|type="()"}
 
|type="()"}
- yes,
+ no.
  
{Do MAP and ML receivers differ under the condition&nbsp; "$r = -1$"?
 
|type="()"}
 
|type="()"}
- yes,
+ no.
  
{Which statements are true under the condition&nbsp; "$r = 0$"?
 
|type="[]"}
 
|type="[]"}
+ The MAP receiver decides for&nbsp; $s_0$.
- The MAP receiver decides for&nbsp; $s_1$.
- The ML receiver decides for&nbsp; $s_0$.
+ The ML receiver decides for&nbsp; $s_1$.
  
{Calculate the symbol error probability of the &nbsp;'''ML receiver'''.
 
|type="{}"}
 
|type="{}"}
${\rm Pr(symbol\hspace{0.15cm} error)}\ = \ $ { 0.15 3% }
  
{Calculate the symbol error probability of the &nbsp;'''MAP receiver'''.
 
|type="{}"}
 
|type="{}"}
${\rm Pr(symbol\hspace{0.15cm}error)}\ = \ $ { 0.1 3% }
 
</quiz>
 
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; The receiver-side occurrence probabilities we are looking for are:
 
:$${\rm Pr} ( r = +1) \hspace{-0.1cm} \ = \ \hspace{-0.1cm} {\rm Pr} ( s_0) \cdot {\rm Pr} ( r = +1 \hspace{0.05cm}| \hspace{0.05cm}s = +1) = 0.75 \cdot 0.8 \hspace{0.05cm}\hspace{0.15cm}\underline { = 0.6}\hspace{0.05cm},$$
 
 
:$${\rm Pr} ( r = -1) \hspace{-0.1cm} \ = \ \hspace{-0.1cm} {\rm Pr} ( s_1) \cdot {\rm Pr} ( r = -1 \hspace{0.05cm}| \hspace{0.05cm}s = -1) = 0.25 \cdot 0.6  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.15}\hspace{0.05cm},$$
 
 
:$${\rm Pr} ( r = 0) \hspace{-0.1cm} \ = \ \hspace{-0.1cm} 1 - {\rm Pr} ( r = +1) - {\rm Pr} ( r = -1) = 1 - 0.6 - 0.15  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.25}\hspace{0.05cm}.$$
 
  
*For the last probability, the law of total probability also gives:
 
:$${\rm Pr} ( r = 0) = 0.75 \cdot 0.2 + 0.25 \cdot 0.4 = 0.25\hspace{0.05cm}.$$
 
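These values can also be checked with a short Python sketch based on the law of total probability; the priors are taken from the exercise and the transition probabilities from the graphic:

<pre>
# Priors Pr(s_0), Pr(s_1) from the exercise and transition probabilities
# Pr(r | s_i) as read off the graphic.
priors     = {"s0": 0.75, "s1": 0.25}
likelihood = {"s0": {+1: 0.8, 0: 0.2, -1: 0.0},
              "s1": {+1: 0.0, 0: 0.4, -1: 0.6}}

# Law of total probability:  Pr(r) = Pr(s_0)*Pr(r|s_0) + Pr(s_1)*Pr(r|s_1)
p_r = {rho: sum(priors[s] * likelihood[s][rho] for s in priors)
       for rho in (+1, 0, -1)}
print(p_r)   # {1: 0.6, 0: 0.25, -1: 0.15}  (up to floating-point rounding)
</pre>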
  
  
'''(2)'''&nbsp; For the first inference probability we are looking for, Bayes' theorem yields:
 
:$${\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = +1) = \frac{{\rm Pr} ( r = +1 \hspace{0.05cm}|\hspace{0.05cm}s_0 ) \cdot {\rm Pr} ( s_0)}{{\rm Pr} ( r = +1)} = \frac{0.8 \cdot 0.75}{0.6} \hspace{0.05cm}\hspace{0.15cm}\underline {= 1}\hspace{0.05cm}.$$
  
*Correspondingly,&nbsp; we obtain for the other probabilities:
 
:$${\rm Pr} (s_1 \hspace{0.05cm}| \hspace{0.05cm}r = +1) \hspace{-0.1cm} \ = \ 1 - {\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = +1)  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0}\hspace{0.05cm},$$
 
:$${\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = -1) \hspace{0.05cm}\hspace{0.15cm}\underline {= 0}\hspace{0.05cm},$$
:$${\rm Pr} (s_1 \hspace{0.05cm}| \hspace{0.05cm}r = -1)  \hspace{0.05cm}\hspace{0.15cm}\underline {= 1}\hspace{0.05cm},$$
 
:$${\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = 0) \hspace{-0.1cm} \ = \ \hspace{-0.1cm}\frac{{\rm Pr} ( r = 0 \hspace{0.05cm}|\hspace{0.05cm}s_0 ) \cdot {\rm Pr} ( s_0)}{{\rm Pr} ( r = 0 )}= \frac{0.2 \cdot 0.75}{0.25} \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.6}\hspace{0.05cm},$$
 
 
:$${\rm Pr} (s_1 \hspace{0.05cm}| \hspace{0.05cm}r = 0) \hspace{-0.1cm} \ = \ \hspace{-0.1cm} 1- {\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = 0)  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.4} \hspace{0.05cm}.$$
 
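The inference probabilities can be reproduced in the same way via Bayes' theorem; the following sketch repeats the data from part&nbsp; '''(1)'''&nbsp; so that it runs on its own:

<pre>
# Bayes' theorem:  Pr(s_i | r = rho) = Pr(r = rho | s_i) * Pr(s_i) / Pr(r = rho)
priors     = {"s0": 0.75, "s1": 0.25}
likelihood = {"s0": {+1: 0.8, 0: 0.2, -1: 0.0},
              "s1": {+1: 0.0, 0: 0.4, -1: 0.6}}
p_r        = {+1: 0.6, 0: 0.25, -1: 0.15}          # from part (1)

posterior = {(s, rho): likelihood[s][rho] * priors[s] / p_r[rho]
             for s in priors for rho in p_r}
print(posterior[("s0", +1)], posterior[("s0", 0)])  # 1.0 and 0.6 (up to rounding)
</pre>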
  
  
'''(3)'''&nbsp; Let&nbsp; $r = +1$.&nbsp; Then
* the MAP receiver decides for&nbsp; $s_0$,&nbsp; because&nbsp; ${\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = +1) = 1 > {\rm Pr} (s_1 \hspace{0.05cm}| \hspace{0.05cm}r = +1)= 0\hspace{0.05cm},$
* the ML receiver likewise decides for&nbsp; $s_0$,&nbsp; since&nbsp; ${\rm Pr} ( r = +1 \hspace{0.05cm}| \hspace{0.05cm}s_0) = 0.8 > {\rm Pr} ( r = +1 \hspace{0.05cm}| \hspace{0.05cm}s_1) = 0 \hspace{0.05cm}.$
  
So the correct answer is&nbsp; <u>NO</u>.
  
  
'''(4)'''&nbsp; <u>NO</u>&nbsp; is also true under the condition&nbsp; "$r = -1$",&nbsp; since there is no connection between&nbsp; $s_0$&nbsp; and&nbsp; "$r = -1$".
  
  
'''(5)'''&nbsp; <u>Solutions 1 and 4</u>&nbsp;  are correct:
*The MAP receiver will choose event&nbsp; $s_0$,&nbsp;  since ${\rm Pr} (s_0 \hspace{0.05cm}| \hspace{0.05cm}r = 0) = 0.6 > {\rm Pr} (s_1 \hspace{0.05cm}| \hspace{0.05cm}r = 0) = 0.4 \hspace{0.05cm}.$
*In contrast,&nbsp;  the ML receiver will choose&nbsp;  $s_1$,&nbsp;  since&nbsp;  ${\rm Pr} ( r = 0 \hspace{0.05cm}| \hspace{0.05cm}s_1) = 0.4 > {\rm Pr} ( r = 0 \hspace{0.05cm}| \hspace{0.05cm}s_0) = 0.2 \hspace{0.05cm}.$
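This comparison for&nbsp; $r = 0$&nbsp; can be verified with a short Python sketch (variable names assumed, values from the exercise and the graphic):

<pre>
# Decision for the received value r = 0.
priors       = {"s0": 0.75, "s1": 0.25}    # Pr(s_0), Pr(s_1)
likelihood_0 = {"s0": 0.2,  "s1": 0.4}     # Pr(r = 0 | s_i)

map_choice = max(priors, key=lambda s: priors[s] * likelihood_0[s])  # 's0'
ml_choice  = max(priors, key=lambda s: likelihood_0[s])              # 's1'
</pre>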
  
'''(6)'''&nbsp; The maximum likelihood receiver
* decides for&nbsp;  $s_0$&nbsp;  only if&nbsp;  $r = +1$,
  
* thus makes no error if&nbsp;  $s_1$&nbsp;  was sent,
  
* only makes an error when&nbsp; "$s_0$"&nbsp; and&nbsp; "$r = 0$"&nbsp; are combined:
:$${\rm Pr} ({\rm symbol\hspace{0.15cm}error} ) = {\rm Pr} ({\cal E } ) = 0.75 \cdot 0.2  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.15}  \hspace{0.05cm}.$$
 
  
  
'''(7)'''&nbsp; The MAP receiver,&nbsp; on the other hand,&nbsp; decides for&nbsp; $s_0$&nbsp; when&nbsp; "$r = 0$".&nbsp; So there is a symbol error only for the combination&nbsp; "$s_1$"&nbsp; and&nbsp; "$r = 0$".&nbsp; It follows that:
:$${\rm Pr} ({\rm symbol\hspace{0.15cm}error} ) = {\rm Pr} ({\cal E } ) = 0.25 \cdot 0.4  \hspace{0.05cm}\hspace{0.15cm}\underline {= 0.1}  \hspace{0.05cm}.$$
  
*The error probability here is lower than for the ML receiver,&nbsp; because the different a-priori probabilities&nbsp; ${\rm Pr}(s_0)$&nbsp; and&nbsp; ${\rm Pr}(s_1)$&nbsp; are now also taken into account.
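Both error probabilities can be verified numerically by summing&nbsp; ${\rm Pr}(s_i) \cdot {\rm Pr}(r \hspace{0.05cm}|\hspace{0.05cm} s_i)$&nbsp; over all combinations that lead to a wrong decision; a short Python sketch with assumed variable names:

<pre>
# Symbol error probabilities of the ML and the MAP receiver for this channel.
priors     = {"s0": 0.75, "s1": 0.25}
likelihood = {"s0": {+1: 0.8, 0: 0.2, -1: 0.0},
              "s1": {+1: 0.0, 0: 0.4, -1: 0.6}}

def error_prob(decide):
    # Sum Pr(s_i) * Pr(r = rho | s_i) over all (s_i, rho) with a wrong decision.
    return sum(priors[s] * likelihood[s][rho]
               for s in priors for rho in (+1, 0, -1) if decide(rho) != s)

p_ml  = error_prob(lambda rho: max(priors, key=lambda s: likelihood[s][rho]))
p_map = error_prob(lambda rho: max(priors, key=lambda s: priors[s] * likelihood[s][rho]))
print(p_ml, p_map)   # 0.15 and 0.1  (up to floating-point rounding)
</pre>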
 
{{ML-Fuß}}
 
  
  
  
[[Category:Digital Signal Transmission: Exercises|^4.2 Structure of the Optimal Receiver^]]
