Exercise 4.2: Channel Log Likelihood Ratio at AWGN


[Figure: Conditional Gaussian functions]

We consider two channels  $\rm A$  and  $\rm B$,  each with

  • binary bipolar input  $x ∈ \{+1, \, -1\}$,  and
  • continuous-valued output  $y ∈ \mathbb{R}$  (real number).


The graph shows for both channels

  • as blue curve the probability density functions  $f_{y\hspace{0.05cm}|\hspace{0.05cm}x=+1}$,
  • as red curve the probability density functions  $f_{y\hspace{0.05cm}|\hspace{0.05cm}x=-1}$.


In the   "theory section"  the channel  $($German:  "Kanal"   ⇒   subscript:  "K"$)$  log likelihood ratio was derived for this AWGN constellation as follows:

$$L_{\rm K}(y) = L(y\hspace{0.05cm}|\hspace{0.05cm}x) = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x=+1) }{{\rm Pr}(y \hspace{0.05cm}|\hspace{0.05cm}x = -1)} \hspace{0.05cm}.$$

Evaluating this equation analytically,  we obtain with the proportionality constant  $K_{\rm L} = 2/\sigma^2$:

$$L_{\rm K}(y) = K_{\rm L} \cdot y \hspace{0.05cm}.$$
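For reference, the analytical evaluation is a short calculation:  inserting the two Gaussian densities with means  $+1$  and  $-1$  and equal standard deviation  $\sigma$,  the common prefactor cancels and

$$L_{\rm K}(y) = {\rm ln} \hspace{0.15cm} \frac{{\rm exp} \big[-(y-1)^2/(2\sigma^2)\big]}{{\rm exp} \big[-(y+1)^2/(2\sigma^2)\big]} = \frac{(y+1)^2-(y-1)^2}{2\sigma^2} = \frac{2}{\sigma^2} \cdot y \hspace{0.05cm}.$$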



Hints:

  • This exercise belongs to the chapter  "Soft-in Soft-out Decoder".
  • Reference is made in particular to the sections  "Reliability Information – Log Likelihood Ratio"  and  "AWGN Channel at Binary Input".



Questions

(1)  What are the characteristics of the channels shown in the diagram?

They describe the binary transmission under Gaussian noise.
The bit error probability without coding is  ${\rm Q}(1/\sigma)$.
The channel log likelihood ratio is given as  $L_{\rm K}(y) = K_{\rm L} \cdot y$.

(2)  Which constant  $K_{\rm L}$  characterizes the channel  $\rm A$?

$K_{\rm L} \ = \ $

(3)  For channel  $\rm A$,  what information do the received values  $y_1 = 1, \ y_2 = 0.5$,  $y_3 = \, -1.5$  provide about the transmitted binary symbols  $x_1, \ x_2$  and  $x_3$?

$y_1 = 1.0$  states that probably  $x_1 = +1$  was sent.
$y_2 = 0.5$  states that probably  $x_2 = +1$  was sent.
$y_3 = \, -1.5$  states that probably  $x_3 = \, -1$  was sent.
The decision  "$y_1 → x_1$"  is more reliable than  "$y_2 → x_2$".
The decision  "$y_1 → x_1$"  is more reliable than  "$y_3 → x_3$".

(4)  Which constant  $K_{\rm L}$  characterizes the channel  $\rm B$?

$K_{\rm L} \ = \ $

(5)  For channel  $\rm B$,  what information do the received values  $y_1 = 1, \ y_2 = 0.5$,  $y_3 = -1.5$  provide about the transmitted binary symbols  $x_1, \ x_2$  and  $x_3$?

For  $x_1, \ x_2, \ x_3$,  the decisions are the same as for channel  $\rm A$.
The estimate  "$x_2 = +1$"  is four times more certain than for channel  $\rm A$.
The estimate  "$x_3 = \, -1$"  at channel  $\rm A$  is more reliable than the estimate  "$x_2 = +1$"  at channel  $\rm B$.


Solution

(1)  All proposed solutions  are correct:

  • The transfer equation is always  $y = x + n$,  with  $x ∈ \{+1, \, -1\}$. 
  • The variable  $n$  is a Gaussian random variable with standard deviation  $\sigma$   ⇒   variance  $\sigma^2$   ⇒   "AWGN Channel".
  • With the standard deviation  $\sigma$,  the  "AWGN bit error probability"  without coding is  ${\rm Q}(1/\sigma)$,  where  ${\rm Q}(x)$  denotes the  "complementary Gaussian error function".
  • For each AWGN channel,  according to the  "theory section",  the channel log likelihood ratio always results in  $L_{\rm K}(y) = L(y|x) = K_{\rm L} \cdot y$.
  • The constant  $K_{\rm L}$  is different for the two channels.
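The statement  ${\rm Q}(1/\sigma)$  can also be checked numerically.  The following minimal Python sketch  $($illustrative only, not part of the original exercise$)$  simulates  $y = x + n$  for channel A  $(\sigma = 1)$  and compares the empirical error rate of a hard sign decision with  ${\rm Q}(1/\sigma)$:

import numpy as np
from math import erfc, sqrt

def q_func(x):
    # complementary Gaussian error function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(0)
sigma = 1.0                                   # channel A
x = rng.choice([+1.0, -1.0], size=1_000_000)  # bipolar input symbols
y = x + sigma * rng.standard_normal(x.size)   # AWGN: y = x + n
ber = np.mean(np.sign(y) != x)                # hard decision via sign(y)
print(f"simulated BER = {ber:.4f},  Q(1/sigma) = {q_func(1.0/sigma):.4f}")

Both numbers come out close to  ${\rm Q}(1) \approx 0.1587$.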


(2)  For the AWGN channel,  $L_{\rm K}(y) = K_{\rm L} \cdot y$  holds with the constant  $K_{\rm L} = 2/\sigma^2$.

  • The standard deviation  $\sigma$  can be read from the graph on the data page as the distance of the inflection points of the Gaussian curves from their respective midpoints  $($verified after this list$)$.  For channel A this yields  $\sigma = 1$.
  • The same result is obtained by evaluating the Gaussian function
$$\frac{f_{\rm G}( y = \sigma)}{f_{\rm G}( y = 0)} = {\rm e} ^{ - y^2/(2\sigma^2) } \Bigg |_{\hspace{0.05cm} y \hspace{0.05cm} = \hspace{0.05cm} \sigma} = {\rm e} ^{ -0.5} \approx 0.6065\hspace{0.05cm}.$$
  • This means:  At the abscissa value  $y = \sigma$  the zero-mean Gaussian function  $f_{\rm G}(y)$  has decayed to  $60.65\%$  of its maximum value. 
  • Thus,  for the constant at  channel A:   $K_{\rm L} = 2/\sigma^2 \ \underline{= 2}$.
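The inflection-point argument can be verified directly:  setting the second derivative of the zero-mean Gaussian shape to zero gives

$$f_{\rm G}(y) \propto {\rm e}^{-y^2/(2\sigma^2)} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} f_{\rm G}''(y) \propto \left( \frac{y^2}{\sigma^4} - \frac{1}{\sigma^2}\right) \cdot {\rm e}^{-y^2/(2\sigma^2)} = 0 \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm} y = \pm \sigma \hspace{0.05cm},$$

so the distance of each inflection point from the mean of the curve is exactly  $\sigma$.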


(3)  Correct are the  solutions 1 to 4:

  • We first give the respective log likelihood ratios of  Channel A  $($using  $L_{\rm K}(y) = 2 \cdot y)$:
$$L_{\rm K}(y_1 = +1.0) = +2\hspace{0.05cm},\hspace{0.3cm} L_{\rm K}(y_2 = +0.5) = +1\hspace{0.05cm},\hspace{0.3cm} L_{\rm K}(y_3 = -1.5) = -3\hspace{0.05cm}. $$
  • This results in the following consequences:
  1. The decision for the  $($most probable$)$  code bit  $x_i$  is based on the sign of  $L_{\rm K}(y_i)$:
        $x_1 = +1, \ x_2 = +1, \ x_3 = \, -1$   ⇒   the  proposed solutions 1, 2 and 3  are correct.
  2. The decision  "$x_1 = +1$"  is more reliable than the decision  "$x_2 = +1$"   ⇒   Proposition 4  is also correct.
  3. However,  the decision  "$x_1 = +1$"  is less reliable than the decision  "$x_3 = \, -1$"  because  $|L_{\rm K}(y_1)| < |L_{\rm K}(y_3)|$   ⇒   proposed solution 5 is incorrect.
  • This can also be interpreted as follows:  The quotient between the red and the blue PDF value at  $y_3 = \, -1.5$  is larger than the quotient between the blue and the red PDF value at  $y_1 = +1$.
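Numerically,  these quotients are the exponentials of the LLR magnitudes  $($for channel A with  $\sigma = 1)$:

$$\frac{f_{y\hspace{0.05cm}|\hspace{0.05cm}x=-1}(y_3 = -1.5)}{f_{y\hspace{0.05cm}|\hspace{0.05cm}x=+1}(y_3 = -1.5)} = {\rm e}^{3} \approx 20.1 \hspace{0.3cm} > \hspace{0.3cm} \frac{f_{y\hspace{0.05cm}|\hspace{0.05cm}x=+1}(y_1 = +1)}{f_{y\hspace{0.05cm}|\hspace{0.05cm}x=-1}(y_1 = +1)} = {\rm e}^{2} \approx 7.4 \hspace{0.05cm}.$$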


(4)  Following the same considerations as in subtask  (2),  the standard deviation of  channel B  is given by:  

$$\sigma = 1/2 \ \Rightarrow \ K_{\rm L} = 2/\sigma^2 \ \underline{= 8}.$$


(5)  For  channel B,  the following applies:  

$$L_{\rm K}(y_1 = +1.0) = +8, \ L_{\rm K}(y_2 = +0.5) = +4, \ L_{\rm K}(y_3 = \, -1.5) = \, -12.$$
  • It is obvious that  the first two proposed solutions  are true,  but not the third,  because
$$|L_{\rm K}(y_3 = -1.5, {\rm channel\hspace{0.15cm} A)}| = 3 \hspace{0.2cm} <\hspace{0.2cm} |L_{\rm K}(y_2 = 0.5, {\rm channel\hspace{0.15cm} B)}| = 4\hspace{0.05cm} . $$
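All LLR values of subtasks  (3)  and  (5)  follow from  $L_{\rm K}(y) = (2/\sigma^2) \cdot y$.  A few lines of Python reproduce them  $($an illustrative sketch;  the function name is our own$)$:

def channel_llr(y, sigma):
    # channel LLR of the bipolar AWGN channel: L_K(y) = (2/sigma^2) * y
    return 2.0 * y / sigma**2

for label, sigma in (("A", 1.0), ("B", 0.5)):
    llrs = [channel_llr(y, sigma) for y in (1.0, 0.5, -1.5)]
    print(f"channel {label}: {llrs}")

# expected output:
# channel A: [2.0, 1.0, -3.0]
# channel B: [8.0, 4.0, -12.0]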