Exercise 4.1: Log Likelihood Ratio

{{quiz-Header|Buchseite=Channel_Coding/Soft-in_Soft-Out_Decoder}}
  
[[File:EN_KC_A_4_1.png|right|frame|Considered channel models]]
To interpret the&nbsp; "log likelihood ratio"&nbsp; $\rm (LLR)$,&nbsp; we start from the&nbsp; "binary symmetric channel"&nbsp; $\rm (BSC)$,&nbsp; as in the&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Reliability_information_-_Log_Likelihood_Ratio|"theory section"]].
  
For the binary random variables at the channel input and output,&nbsp; the following holds:
 
:$$x \in \{0\hspace{0.05cm}, 1\} \hspace{0.05cm},\hspace{0.25cm}y \in \{0\hspace{0.05cm}, 1\}
\hspace{0.05cm}. $$
  
This model is shown in the upper graph.&nbsp; The following applies to the conditional probabilities in the forward direction:
:$${\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x = 0) = {\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 1) = \varepsilon \hspace{0.05cm},$$
:$${\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 0) = {\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x = 1) = 1-\varepsilon \hspace{0.05cm}.$$
  
The falsification probability&nbsp; $\varepsilon$&nbsp; is the crucial parameter of the BSC model.
  
Regarding the probability distribution at the input,&nbsp; it is convenient to consider the&nbsp; log likelihood ratio&nbsp; instead of the probabilities&nbsp; ${\rm Pr}(x = 0)$&nbsp; and&nbsp; ${\rm Pr}(x = 1)$.
  
For the unipolar approach used here,&nbsp; the following applies by definition:
 
:$$L_{\rm A}(x)={\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0)}{{\rm Pr}(x = 1)}\hspace{0.05cm},$$
  
where the subscript &nbsp;"$\rm A$"&nbsp; indicates the&nbsp; "a-priori log likelihood ratio"&nbsp; or the&nbsp; "a-priori L&ndash;value".
  
For example,&nbsp; for&nbsp; ${\rm Pr}(x = 0) = 0.2 \ \Rightarrow \ {\rm Pr}(x = 1) = 0.8$ &nbsp; &rArr; &nbsp; $L_{\rm A}(x) = {\rm ln} \hspace{0.15cm} (0.25) = -1.386$.
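
Such values are easy to reproduce numerically.&nbsp; The following Python lines are a minimal sketch for checking&nbsp; (the helper name&nbsp; <code>L_A</code>&nbsp; is our own choice,&nbsp; not part of the exercise):

<pre>
from math import log   # natural logarithm "ln"

def L_A(pr_x0):
    # a-priori L-value: ln[ Pr(x=0) / Pr(x=1) ]  with  Pr(x=1) = 1 - Pr(x=0)
    return log(pr_x0 / (1.0 - pr_x0))

print(L_A(0.2))   # -1.386...  (= ln 0.25)
</pre>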
  
From the BSC model,&nbsp; it is possible to determine the&nbsp; L&ndash;value of the conditional probabilities&nbsp; ${\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x)$&nbsp; in forward direction&nbsp; $($German:&nbsp; "Vorwärtsrichtung" &nbsp; &rArr; &nbsp; subscript "V"$)$,&nbsp; which is denoted by&nbsp; $L_{\rm V}(y)$&nbsp; in the present exercise:
 
:$$L_{\rm V}(y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x) =
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x = 0)}{{\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x = 1)} =
\left\{ \begin{array}{c} {\rm ln} \hspace{0.15cm} [(1 - \varepsilon)/\varepsilon]\\
  {\rm ln} \hspace{0.15cm} [\varepsilon/(1 - \varepsilon)]  \end{array} \right.\hspace{0.15cm}
\begin{array}{*{1}c} {\rm for} \hspace{0.15cm} y = 0,
\\  {\rm for} \hspace{0.15cm} y = 1. \\ \end{array}$$
  
For example,&nbsp; for&nbsp; $\varepsilon = 0.1$:
 
:$$L_{\rm V}(y = 0) = +2.197\hspace{0.05cm}, \hspace{0.3cm}L_{\rm V}(y = 1) = -2.197\hspace{0.05cm}.$$
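
These two values can be checked with a short sketch in the same style,&nbsp; again with a self-chosen helper name:

<pre>
from math import log

def L_V(y, eps):
    # forward L-value of the BSC: ln[ Pr(y|x=0) / Pr(y|x=1) ]
    if y == 0:
        return log((1.0 - eps) / eps)
    return log(eps / (1.0 - eps))

print(L_V(0, 0.1), L_V(1, 0.1))   # +2.197... and -2.197...
</pre>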
  
Of particular importance to coding theory are the inference probabilities&nbsp; ${\rm Pr}(x\hspace{0.05cm}|\hspace{0.05cm}y)$,&nbsp; which are related to the forward probabilities&nbsp; ${\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x)$&nbsp; and the input probabilities&nbsp; ${\rm Pr}(x = 0)$&nbsp; and&nbsp; ${\rm Pr}(x = 1)$&nbsp; via Bayes' theorem.
The corresponding&nbsp; &nbsp; L&ndash;value in backward direction&nbsp; $($German:&nbsp; "Rückwärtsrichtung" &nbsp; &rArr; &nbsp; subscript "R"$)$&nbsp; is denoted in this exercise&nbsp; by&nbsp; $L_{\rm R}(y)$:
 
:$$L_{\rm R}(y) = L(x\hspace{0.05cm}|\hspace{0.05cm}y) =
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm}y)}{{\rm Pr}(x = 1\hspace{0.05cm}|\hspace{0.05cm}y)} \hspace{0.05cm} .$$
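
Since&nbsp; ${\rm Pr}(y)$&nbsp; appears in numerator and denominator alike after applying Bayes' theorem,&nbsp; it cancels in this ratio.&nbsp; A minimal sketch of the resulting computation,&nbsp; assuming the BSC model above&nbsp; (the helper name&nbsp; <code>L_R</code>&nbsp; is again our own choice):

<pre>
from math import log

def L_R(y, eps, pr_x0):
    # inference L-value via Bayes:  Pr(x|y) = Pr(y|x) * Pr(x) / Pr(y);
    # the term Pr(y) cancels in the ratio, so only the joint terms remain
    pr_y_x0 = (1.0 - eps) if y == 0 else eps    # Pr(y | x = 0)
    pr_y_x1 = eps if y == 0 else (1.0 - eps)    # Pr(y | x = 1)
    return log((pr_y_x0 * pr_x0) / (pr_y_x1 * (1.0 - pr_x0)))
</pre>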
  
<u>Hints:</u>
* The exercise belongs to the chapter&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder| "Soft&ndash;in Soft&ndash;out Decoder"]].

* Reference is made in particular to the section&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Reliability_information_-_Log_Likelihood_Ratio| "Reliability Information &ndash; Log Likelihood Ratio"]].

* In the last subtasks it has to be clarified whether the relations found between&nbsp; $L_{\rm A}, \ L_{\rm V}$&nbsp; and&nbsp; $L_{\rm R}$&nbsp; can also be transferred to the&nbsp; "2-on-$M$&nbsp;channel"&nbsp; sketched below.

* For this purpose,&nbsp; we choose a bipolar approach for the input symbols: &nbsp; "$0$"&nbsp; &#8594; &nbsp;"$+1$"&nbsp; and&nbsp; "$1$" &nbsp; &#8594; &nbsp;"$&ndash;1$".


===Questions===
 
<quiz display=simple>
{How are the conditional probabilities of two random variables&nbsp; $A$&nbsp; and&nbsp; $B$&nbsp; related?
|type="[]"}
+
|type="()"}
- ${\rm Pr}(A\hspace{0.05cm}|\hspace{0.05cm} B) = {\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm} A)$,
- ${\rm Pr}(A\hspace{0.05cm}|\hspace{0.05cm}B) = {\rm Pr}(B\hspace{0.05cm}|\hspace{0.05cm} A) \cdot {\rm Pr}(B) / {\rm Pr}(A)$,
+ ${\rm Pr}(A\hspace{0.05cm}|\hspace{0.05cm} B) = {\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm}A) \cdot {\rm Pr}(A) / {\rm Pr}(B)$.
  
{Which equation holds for the binary channel with probabilities&nbsp; ${\rm Pr}(A) = {\rm Pr}(x = 0)$&nbsp; and&nbsp; ${\rm Pr}(B) = {\rm Pr}(y = 0)$?
|type="[]"}
+
|type="()"}
 
+ ${\rm Pr}(x = 0 \hspace{0.05cm}|\hspace{0.05cm} y = 0) = {\rm Pr}(y = 0 \hspace{0.05cm}|\hspace{0.05cm} x = 0) \cdot {\rm Pr}(x = 0) / {\rm Pr}(y = 0)$,
- ${\rm Pr}(x = 0 \hspace{0.05cm}|\hspace{0.05cm} y = 0) = {\rm Pr}(y = 0 \hspace{0.05cm}|\hspace{0.05cm} x = 0) \cdot {\rm Pr}(y = 0) / {\rm Pr}(x = 0)$.
  
{Under which conditions does&nbsp; $L(x\hspace{0.05cm}|\hspace{0.05cm}y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x)$ &nbsp; resp. &nbsp; $L_{\rm R}(y) = L_{\rm V}(y)$&nbsp; hold for all possible output values&nbsp; $y &#8712; \{0, \, 1\}$?
|type="[]"}
+
|type="()"}
- For any input distribution&nbsp; ${\rm Pr}(x = 0), \ {\rm Pr}(x = 1)$.
+ For the uniform distribution only:&nbsp; $\hspace{0.2cm} {\rm Pr}(x = 0) = {\rm Pr}(x = 1) = 1/2$.
  
{Let the output symbol be&nbsp; $y = 1$.&nbsp; Which inference LLR is obtained with falsification probability&nbsp; $\varepsilon = 0.1$&nbsp; for equally probable symbols?
|type="{}"
+
|type="{}"}
$L_{\rm R}(y = 1) = L(x | y = 1) \ = \ ${ -2.26291--2.13109 }
  
{Let the output symbol now be&nbsp; $y = 0$.&nbsp; Which inference log likelihood ratio is obtained for&nbsp; ${\rm Pr}(x = 0) = 0.2$&nbsp; and&nbsp; $\varepsilon = 0.1$?
 
|type="{}"}
 
|type="{}"}
$L_{\rm R}(y = 0) = L(x | y = 0) \ = \ ${ 0.811 3% }
  
{Can the result derived in subtask&nbsp; '''(3)''' &nbsp; &#8658; &nbsp; $L_{\rm R} = L_{\rm V} + L_{\rm A}$&nbsp; also be applied to the&nbsp; "2-on-$M$"&nbsp; channel?
|type="[]"}
+
|type="()"}
+ Yes.
- No.
  
{Can this relation also be applied to the AWGN channel?
|type="[]"}
+
|type="()"}
+ Yes.
- No.
 
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; For the conditional probabilities,&nbsp; according to&nbsp; [[Theory_of_Stochastic_Signals/Statistical_Dependence_and_Independence#Conditional_Probability| "Bayes' theorem"]]&nbsp; with the intersection&nbsp; $A &#8745; B$,&nbsp; the following holds:
:$${\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm}  A) = \frac{{\rm Pr}(A \cap B)}{{\rm Pr}(A)}\hspace{0.05cm},
\hspace{0.3cm} {\rm Pr}(A \hspace{0.05cm}|\hspace{0.05cm} B) = \frac{{\rm Pr}(A \cap B)}{{\rm Pr}(B)}\hspace{0.3cm}
\Rightarrow \hspace{0.3cm}{\rm Pr}(A \hspace{0.05cm}|\hspace{0.05cm} B) =
{\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm} A) \cdot \frac{{\rm Pr}(A)}{{\rm Pr}(B)}\hspace{0.05cm}.$$

*<u>Proposition 3</u>&nbsp; is correct.

*In the special case&nbsp; ${\rm Pr}(B) = {\rm Pr}(A)$,&nbsp; proposition 1 would also be correct.


'''(2)'''&nbsp; With&nbsp; $A$ &nbsp; &#8658; &nbsp; "$x = 0$"&nbsp; and&nbsp; $B$ &nbsp; &#8658; &nbsp; "$y = 0$",&nbsp; we immediately obtain the equation according to&nbsp; <u>proposition 1</u>:
:$${\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm} y = 0) =
{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 0)  \cdot \frac{{\rm Pr}(x = 0)}{{\rm Pr}(y = 0)}\hspace{0.05cm}.$$


'''(3)'''&nbsp; We compute the&nbsp; L&ndash;values of the inference probabilities.&nbsp; Assuming&nbsp; $y = 0$,&nbsp; the following holds:
:$$L_{\rm R}(y= 0) = L(x\hspace{0.05cm}|\hspace{0.05cm}y= 0)=
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm}y=0)}{{\rm Pr}(x = 1\hspace{0.05cm}|\hspace{0.05cm}y=0)} = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x=0) \cdot {\rm Pr}(x = 0) / {\rm Pr}(y = 0)}{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x = 1)\cdot {\rm Pr}(x = 1) / {\rm Pr}(y = 0)} $$
:$$\Rightarrow \hspace{0.3cm} L_{\rm R}(y= 0)=    {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x=0) }{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x = 1)} +
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x=0) }{{\rm Pr}(x = 1)}$$
:$$\Rightarrow \hspace{0.3cm} L_{\rm R}(y= 0) =  L(x\hspace{0.05cm}|\hspace{0.05cm}y= 0) = L_{\rm V}(y= 0) + L_{\rm A}(x)\hspace{0.05cm}.$$

*Similarly,&nbsp; assuming&nbsp; $y = 1$,&nbsp; the result is:
:$$L_{\rm R}(y= 1)  = L(x\hspace{0.05cm}|\hspace{0.05cm}y= 1) = L_{\rm V}(y= 1) + L_{\rm A}(x)\hspace{0.05cm}.$$

*The two results can be summarized using&nbsp; $y &#8712; \{0, \, 1\}$,&nbsp; the input log likelihood ratio
:$$L_{\rm A}(x) = {\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(x=0) }{ {\rm Pr}(x = 1)}\hspace{0.05cm},$$
:and the forward log likelihood ratio
:$$L_{\rm V}(y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x) =  {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x=0) }{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x = 1)}
\hspace{0.05cm},$$
:as follows:
:$$L_{\rm R}(y)  = L(x\hspace{0.05cm}|\hspace{0.05cm}y) = L_{\rm V}(y) + L_{\rm A}(x)\hspace{0.05cm}.$$

*The identity &nbsp; $L_{\rm R}(y) &equiv; L_{\rm V}(y)$ &nbsp; requires&nbsp; $L_{\rm A}(x) = 0$ &nbsp; &#8658; &nbsp; equally probable symbols &nbsp; &#8658; &nbsp; <u>proposition 2</u>.
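
This identity can also be confirmed numerically.&nbsp; A small sketch that checks&nbsp; $L_{\rm R}(y) = L_{\rm V}(y) + L_{\rm A}(x)$&nbsp; for many random BSC parameters:

<pre>
from math import log
import random

# check L_R(y) = L_V(y) + L_A(x) for random falsification probabilities
# and random input distributions
for _ in range(1000):
    eps = random.uniform(0.01, 0.49)
    p0  = random.uniform(0.01, 0.99)            # Pr(x = 0)
    L_A = log(p0 / (1.0 - p0))
    for y in (0, 1):
        pr0 = (1.0 - eps) if y == 0 else eps    # Pr(y | x = 0)
        pr1 = eps if y == 0 else (1.0 - eps)    # Pr(y | x = 1)
        L_V = log(pr0 / pr1)
        L_R = log(pr0 * p0 / (pr1 * (1.0 - p0)))   # Bayes; Pr(y) cancels
        assert abs(L_R - (L_V + L_A)) < 1e-12
print("identity L_R = L_V + L_A confirmed")
</pre>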


'''(4)'''&nbsp; From the exercise description you can see that with falsification probability&nbsp; $\varepsilon = 0.1$&nbsp; the output value&nbsp; $y = 1$&nbsp; leads to the forward log likelihood ratio&nbsp; $L_{\rm V}(y = 1) = -2.197$.

*Because of&nbsp; ${\rm Pr}(x = 0) = 1/2 \ \Rightarrow \ L_{\rm A}(x) = 0$,&nbsp; it follows:
:$$L_{\rm R}(y = 1)  = L_{\rm V}(y = 1)  \hspace{0.15cm}\underline{= -2.197}\hspace{0.05cm}.$$


'''(5)'''&nbsp; With the same falsification probability&nbsp; $\varepsilon = 0.1$,&nbsp; $L_{\rm V}(y = 0) = +2.197$&nbsp; differs from&nbsp; $L_{\rm V}(y = 1)$&nbsp; only in sign.

*With&nbsp; ${\rm Pr}(x = 0) = 0.2 \ \Rightarrow \ L_{\rm A}(x) = -1.386$&nbsp; we thus obtain:
:$$L_{\rm R}(y = 0)  = 2.197 - 1.386 \hspace{0.15cm}\underline{=+0.811}\hspace{0.05cm}.$$
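
Both underlined results follow directly from the sum formula of subtask&nbsp; '''(3)''';&nbsp; a short numerical check:

<pre>
from math import log

L_V0, L_V1 = log(0.9 / 0.1), log(0.1 / 0.9)   # eps = 0.1: +2.197 / -2.197

print(L_V1 + log(0.5 / 0.5))   # subtask (4): L_A = 0      ->  -2.197
print(L_V0 + log(0.2 / 0.8))   # subtask (5): L_A = -1.386 ->  +0.811
</pre>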


'''(6)'''&nbsp; The relation &nbsp; $L_{\rm R} = L_{\rm V} + L_{\rm A}$ &nbsp; also holds for the&nbsp; "2-on-$M$ channel",&nbsp; regardless of the size&nbsp; $M$&nbsp; of the output alphabet &nbsp; &#8658; &nbsp; <u>Answer Yes</u>.


'''(7)'''&nbsp; The AWGN channel can also be described by the outlined&nbsp; "2&ndash;on&ndash;$M$&nbsp;channel",&nbsp; with&nbsp; $M &#8594; &#8734;$ &nbsp; &#8658; &nbsp; <u>Answer Yes</u>.
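
For the AWGN channel&nbsp; $y = x + n$&nbsp; with the bipolar mapping&nbsp; "$0$" &#8594; "$+1$",&nbsp; "$1$" &#8594; "$&ndash;1$"&nbsp; and noise variance&nbsp; $\sigma^2$,&nbsp; the forward L&ndash;value becomes&nbsp; $L_{\rm V}(y) = 2y/\sigma^2$,&nbsp; and the additivity carries over unchanged.&nbsp; A minimal sketch under these assumptions&nbsp; (all parameter values are arbitrary examples):

<pre>
from math import log, exp, pi, sqrt

def gauss(y, mean, sigma):
    # Gaussian density of the AWGN output around the transmitted amplitude
    return exp(-(y - mean) ** 2 / (2.0 * sigma ** 2)) / (sqrt(2.0 * pi) * sigma)

sigma, p0, y = 0.8, 0.2, 0.35                  # arbitrary example values
L_A = log(p0 / (1.0 - p0))
L_V = log(gauss(y, +1.0, sigma) / gauss(y, -1.0, sigma))
L_R = log(gauss(y, +1.0, sigma) * p0 / (gauss(y, -1.0, sigma) * (1.0 - p0)))

print(L_V, 2.0 * y / sigma ** 2)   # both approx. 1.09375
print(L_R, L_V + L_A)              # equal up to rounding: L_R = L_V + L_A
</pre>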
 
{{ML-Fuß}}
  
  
[[Category:Channel Coding: Exercises|^4.1 Soft–in Soft–out Decoder^]]
