Exercise 4.1: Log Likelihood Ratio
{{quiz-Header|Buchseite=Channel_Coding/Soft-in_Soft-Out_Decoder}}
[[File:P_ID2979__KC_A_4_1_v2.png|right|frame|Considered channel models]]
To interpret <i>log likelihood ratios</i> (LLRs / $L$ values) we start from the <i>binary symmetric channel</i> $\rm (BSC)$ as described in the [[Channel_Coding/Soft-in_Soft-Out_Decoder#Reliability_information_-_Log_Likelihood_Ratio|"theory section"]].
For the binary random variables at the input and output, the following holds:
:$$x \in \{0\hspace{0.05cm}, 1\} \hspace{0.05cm},\hspace{0.25cm}y \in \{0\hspace{0.05cm}, 1\}
\hspace{0.05cm}. $$
This model is shown in the upper graph. The following applies to the conditional probabilities in the forward direction:
:$${\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x = 0) \hspace{-0.2cm} \ = \ \hspace{-0.2cm} {\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 1) = \varepsilon \hspace{0.05cm},$$
:$${\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 0) \hspace{-0.2cm} \ = \ \hspace{-0.2cm} {\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x = 1) = 1-\varepsilon \hspace{0.05cm}.$$
The corruption probability $\varepsilon$ is the crucial parameter of the BSC model.
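As an aside (not part of the original exercise), the BSC behavior is easy to reproduce numerically. The following is a minimal simulation sketch; the sample size $N$ and the seed are arbitrary choices:
<pre>
import random

def bsc(x, eps, rng):
    """Transmit one bit over a BSC with corruption probability eps."""
    return x ^ (rng.random() < eps)

rng = random.Random(1)
eps, N = 0.1, 100_000
errors = sum(bsc(0, eps, rng) for _ in range(N))
print(errors / N)   # empirical corruption rate, close to eps = 0.1
</pre>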
Regarding the probability distribution at the input, it is convenient to consider the <i>log likelihood ratio</i> (LLR) instead of the probabilities ${\rm Pr}(x = 0)$ and ${\rm Pr}(x = 1)$.
For the unipolar approach used here, the following applies by definition:
:$$L_{\rm A}(x)={\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0)}{{\rm Pr}(x = 1)}\hspace{0.05cm},$$
where the subscript $\rm A$ indicates the a priori probability.
For example, for ${\rm Pr}(x = 0) = 0.2 \ \Rightarrow \ {\rm Pr}(x = 1) = 0.8$, the a priori LLR is $L_{\rm A}(x) = {\rm ln}(0.2/0.8) = -1.386$.
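This value can be checked directly; a minimal sketch using the example probabilities above:
<pre>
from math import log

p0 = 0.2                   # Pr(x = 0)
L_A = log(p0 / (1 - p0))   # a priori LLR, natural logarithm
print(round(L_A, 3))       # -1.386
</pre>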
From the BSC model it is also possible to specify the $L$ value (LLR) of the conditional probabilities ${\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x)$ in the forward direction, which in this exercise is denoted by $L_{\rm V}(y)$ (the subscript $\rm V$ stands for "forward", German "vorwärts"):
:$$L_{\rm V}(y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x) =
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x = 0)}{{\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x = 1)} =
\left\{ \begin{array}{c} {\rm ln} \hspace{0.15cm} [(1 - \varepsilon)/\varepsilon]\\ {\rm ln} \hspace{0.15cm} [\varepsilon/(1 - \varepsilon)] \end{array} \right.\hspace{0.15cm}
\begin{array}{*{1}c} {\rm for} \hspace{0.15cm} y = 0, \\ {\rm for} \hspace{0.15cm} y = 1. \\ \end{array}$$
For example, for $\varepsilon = 0.1$:
:$$L_{\rm V}(y = 0) = +2.197\hspace{0.05cm}, \hspace{0.3cm}L_{\rm V}(y = 1) = -2.197\hspace{0.05cm}.$$
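Expressed as a small helper function (a sketch; the function name L_V is only chosen to match the notation of this exercise):
<pre>
from math import log

def L_V(y, eps):
    """Forward LLR L(y|x) of the BSC with corruption probability eps."""
    return log((1 - eps) / eps) if y == 0 else log(eps / (1 - eps))

print(round(L_V(0, 0.1), 3), round(L_V(1, 0.1), 3))   # 2.197 -2.197
</pre>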
Of particular importance to coding theory are the inference probabilities ${\rm Pr}(x\hspace{0.05cm}|\hspace{0.05cm}y)$, which are related to the forward probabilities ${\rm Pr}(y\hspace{0.05cm}|\hspace{0.05cm}x)$ and the input probabilities ${\rm Pr}(x = 0)$ and ${\rm Pr}(x = 1)$ via Bayes' theorem.
The corresponding $L$ value (LLR) is denoted in this exercise by $L_{\rm R}(y)$ (the subscript $\rm R$ stands for inference, German "Rückschluss"):
:$$L_{\rm R}(y) = L(x\hspace{0.05cm}|\hspace{0.05cm}y) =
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm}y)}{{\rm Pr}(x = 1\hspace{0.05cm}|\hspace{0.05cm}y)} \hspace{0.05cm} .$$
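To make the Bayes step concrete, the inference LLR of the BSC can be computed directly from the defining probabilities. A sketch follows; the values $\varepsilon = 0.2$ and ${\rm Pr}(x = 0) = 0.3$ are arbitrary illustration numbers, not taken from the subtasks:
<pre>
from math import log

def L_R(y, eps, p0):
    """Inference LLR L(x|y) of the BSC, computed via Bayes' theorem."""
    py_x0 = 1 - eps if y == 0 else eps   # Pr(y | x = 0)
    py_x1 = eps if y == 0 else 1 - eps   # Pr(y | x = 1)
    # Pr(y) cancels in the ratio, so only the numerators matter:
    return log((py_x0 * p0) / (py_x1 * (1 - p0)))

print(round(L_R(0, 0.2, 0.3), 3))   # 0.539
print(round(L_R(1, 0.2, 0.3), 3))   # -2.234
</pre>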
Hints:
* The exercise belongs to the chapter [[Channel_Coding/Soft-in_Soft-Out_Decoder| "Soft–in Soft–out Decoder"]].
* Reference is made in particular to the page [[Channel_Coding/Soft-in_Soft-Out_Decoder#Reliability_information_-_Log_Likelihood_Ratio| "Reliability Information – Log Likelihood Ratio"]].
* In the last subtasks we have to clarify whether the relations found between $L_{\rm A}, \ L_{\rm V}$ and $L_{\rm R}$ can also be transferred to the "2 on $M$ channel".
* For this purpose, we choose a bipolar approach for the input symbols: "$0$" → "$+1$" and "$1$" → "$-1$".
===Questions===
<quiz display=simple>
{How are the conditional probabilities of two random variables $A$ and $B$ related?
|type="()"} | |type="()"} | ||
- ${\rm Pr}(A\hspace{0.05cm}|\hspace{0.05cm} B) = {\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm} A)$,
+ ${\rm Pr}(A\hspace{0.05cm}|\hspace{0.05cm} B) = {\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm}A) \cdot {\rm Pr}(A) / {\rm Pr}(B)$.
{Which equation holds for the binary channel with the probabilities ${\rm Pr}(A) = {\rm Pr}(x = 0)$ and ${\rm Pr}(B) = {\rm Pr}(y = 0)$?
|type="()"} | |type="()"} | ||
+ ${\rm Pr}(x = 0 | y = 0) = {\rm Pr}(y = 0 | x = 0) \cdot {\rm Pr}(x = 0) / {\rm Pr}(y = 0)$,
- ${\rm Pr}(x = 0 | y = 0) = {\rm Pr}(y = 0 | x = 0) \cdot {\rm Pr}(y = 0) / {\rm Pr}(x = 0)$.
{Under what condition does the identity <br> $L(x\hspace{0.05cm}|\hspace{0.05cm}y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x)$, i.e. $L_{\rm R}(y) = L_{\rm V}(y)$, hold for all possible output values $y ∈ \{0, \, 1\}$?
|type="()"} | |type="()"} | ||
- For any input distribution ${\rm Pr}(x = 0), \ {\rm Pr}(x = 1)$.
+ For the uniform distribution only: $\hspace{0.2cm} {\rm Pr}(x = 0) = {\rm Pr}(x = 1) = 1/2$.
{Let the output symbol be $y = 1$. What inference LLR results for the corruption probability $\varepsilon = 0.1$ and equally probable symbols?
|type="{}"} | |type="{}"} | ||
$L_{\rm R}(y = 1) = L(x | y = 1) \ = \ ${ -2.26291--2.13109 }
{Let the output symbol now be $y = 0$. What inference LLR results for ${\rm Pr}(x = 0) = 0.2$ and $\varepsilon = 0.1$?
|type="{}"} | |type="{}"} | ||
$L_{\rm R}(y = 0) = L(x | y = 0) \ = \ ${ 0.811 3% }
{Can the result derived in '''(3)''' ⇒ $L_{\rm R} = L_{\rm V} + L_{\rm A}$ also be applied to the "2 on $M$ channel"?
|type="()"} | |type="()"} | ||
+ Yes.
- No.
{Can this relationship also be applied to the AWGN channel?
|type="()"} | |type="()"} | ||
+ Yes.
- No.
</quiz>
===Solution===
{{ML-Kopf}}
'''(1)''' According to [[Theory_of_Stochastic_Signals/Statistische_Abh%C3%A4ngigkeit_und_Unabh%C3%A4ngigkeit#Bedingte_Wahrscheinlichkeit| Bayes' theorem]], the following holds for the conditional probabilities, with the intersection $A ∩ B$:
:$${\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm} A) = \frac{{\rm Pr}(A \cap B)}{{\rm Pr}(A)}\hspace{0.05cm}, \hspace{0.3cm} {\rm Pr}(A \hspace{0.05cm}|\hspace{0.05cm} B) = \frac{{\rm Pr}(A \cap B)}{{\rm Pr}(B)}\hspace{0.3cm} \Rightarrow \hspace{0.3cm}{\rm Pr}(A \hspace{0.05cm}|\hspace{0.05cm} B) = {\rm Pr}(B \hspace{0.05cm}|\hspace{0.05cm} A) \cdot \frac{{\rm Pr}(A)}{{\rm Pr}(B)}\hspace{0.05cm}.$$
Solution proposal 3 is correct. In the special case ${\rm Pr}(B) = {\rm Pr}(A)$, proposal 1 would also be correct.
'''(2)''' With $A$ ⇒ "$x = 0$" and $B$ ⇒ "$y = 0$", the equation according to solution proposal 1 follows immediately:
:$${\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm} y = 0) = {\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x = 0) \cdot \frac{{\rm Pr}(x = 0)}{{\rm Pr}(y = 0)}\hspace{0.05cm}.$$
'''(3)''' We compute the $L$ value of the inference probabilities. Assuming $y = 0$, we have:
:$$L_{\rm R}(y= 0) \hspace{-0.15cm} \ = \ \hspace{-0.15cm} L(x\hspace{0.05cm}|\hspace{0.05cm}y= 0)= {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x = 0\hspace{0.05cm}|\hspace{0.05cm}y=0)}{{\rm Pr}(x = 1\hspace{0.05cm}|\hspace{0.05cm}y=0)} = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x=0) \cdot {\rm Pr}(x = 0) / {\rm Pr}(y = 0)}{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x = 1)\cdot {\rm Pr}(x = 1) / {\rm Pr}(y = 0)}$$
:$$\Rightarrow \hspace{0.3cm} L_{\rm R}(y= 0)= {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x=0) }{{\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm}x = 1)} + {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x=0) }{{\rm Pr}(x = 1)}$$
:$$\Rightarrow \hspace{0.3cm} L_{\rm R}(y= 0) = L(x\hspace{0.05cm}|\hspace{0.05cm}y= 0) = L_{\rm V}(y= 0) + L_{\rm A}(x)\hspace{0.05cm}.$$
In the same way, assuming $y = 1$, we obtain:
:$$L_{\rm R}(y= 1) = L(x\hspace{0.05cm}|\hspace{0.05cm}y= 1) = L_{\rm V}(y= 1) + L_{\rm A}(x)\hspace{0.05cm}.$$
With $y ∈ \{0, \, 1\}$, the two results can be summarized using
* the input LLR
:$$L_{\rm A}(x) = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x=0) }{{\rm Pr}(x = 1)}\hspace{0.05cm},$$
* and the forward LLR
:$$L_{\rm V}(y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x) = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x=0) }{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x = 1)} \hspace{0.05cm},$$
as follows:
:$$L_{\rm R}(y) = L(x\hspace{0.05cm}|\hspace{0.05cm}y) = L_{\rm V}(y) + L_{\rm A}(x)\hspace{0.05cm}.$$
The identity $L_{\rm R}(y) ≡ L_{\rm V}(y)$ requires $L_{\rm A}(x) = 0$ ⇒ equally probable symbols ⇒ proposal 2.
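The relation derived here can also be cross-checked numerically. A sketch using the values of subtask (5), $\varepsilon = 0.1$ and ${\rm Pr}(x = 0) = 0.2$:
<pre>
from math import log, isclose

eps, p0 = 0.1, 0.2
L_A = log(p0 / (1 - p0))
for y in (0, 1):
    py_x0 = 1 - eps if y == 0 else eps            # Pr(y | x = 0)
    py_x1 = eps if y == 0 else 1 - eps            # Pr(y | x = 1)
    L_V = log(py_x0 / py_x1)                      # forward LLR
    L_R = log(py_x0 * p0 / (py_x1 * (1 - p0)))    # Bayes directly
    assert isclose(L_R, L_V + L_A)                # L_R = L_V + L_A
    print(y, round(L_R, 3))   # y = 0: 0.811,  y = 1: -3.583
</pre>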
'''(4)''' From the exercise description you can see that with the corruption probability $\varepsilon = 0.1$, the output value $y = 1$ leads to the forward LLR $L_{\rm V}(y = 1) = -2.197$.

Because ${\rm Pr}(x = 0) = 1/2 \ \Rightarrow \ L_{\rm A}(x) = 0$, it also holds that:
:$$L_{\rm R}(y = 1) = L_{\rm V}(y = 1) \hspace{0.15cm}\underline{= -2.197}\hspace{0.05cm}.$$
'''(5)''' With the same corruption probability $\varepsilon = 0.1$, $L_{\rm V}(y = 0)$ differs from $L_{\rm V}(y = 1)$ only in its sign.

With ${\rm Pr}(x = 0) = 0.2 \ \Rightarrow \ L_{\rm A}(x) = -1.386$, one thus obtains:
:$$L_{\rm R}(y = 0) = (+)2.197 - 1.386 \hspace{0.15cm}\underline{=+0.811}\hspace{0.05cm}.$$
'''(6)''' As you can readily verify, the relation $L_{\rm R} = L_{\rm V} + L_{\rm A}$ also holds for the "2 on $M$ channel", independently of the size $M$ of the output alphabet ⇒ answer Yes.
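A numerical sketch supports this: for an arbitrarily chosen $2 \times M$ transition matrix (the entries below are invented for illustration and only need to sum to one per row), the additive relation holds for every output value:
<pre>
from math import log, isclose

# Pr(y = m | x); one row per input symbol, M = 4 output values (invented)
P = [[0.5, 0.3, 0.15, 0.05],   # x = 0
     [0.1, 0.2, 0.30, 0.40]]   # x = 1
p0 = 0.2                       # Pr(x = 0), also freely chosen
L_A = log(p0 / (1 - p0))

for m in range(4):
    L_V = log(P[0][m] / P[1][m])                    # forward LLR
    L_R = log(P[0][m] * p0 / (P[1][m] * (1 - p0)))  # inference LLR via Bayes
    assert isclose(L_R, L_V + L_A)                  # holds for every m
</pre>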
'''(7)''' The AWGN channel is likewise described by the sketched "2 on $M$ channel" in the limit $M → ∞$ ⇒ answer Yes.
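For the AWGN channel with the bipolar mapping "$0$" → "$+1$", "$1$" → "$-1$", the forward LLR takes the standard closed form $L_{\rm V}(y) = 2y/\sigma^2$. The following sketch (the noise variance $\sigma^2 = 0.5$ and the received values are arbitrary choices) confirms that the additive relation carries over:
<pre>
from math import log, exp, isclose, pi, sqrt

def gauss(y, mu, var):
    """Gaussian density with mean mu and variance var, evaluated at y."""
    return exp(-(y - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

var, p0 = 0.5, 0.2              # noise variance and Pr(x = 0)
L_A = log(p0 / (1 - p0))
for y in (-1.5, -0.2, 0.7):     # arbitrary received values
    L_V = log(gauss(y, +1, var) / gauss(y, -1, var))
    L_R = log(gauss(y, +1, var) * p0 / (gauss(y, -1, var) * (1 - p0)))
    assert isclose(L_V, 2 * y / var)   # closed form for the AWGN channel
    assert isclose(L_R, L_V + L_A)     # same additive relation as for the BSC
</pre>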