Aufgaben:Exercise 4.5: On the Extrinsic L-values again

{{quiz-Header|Buchseite=Channel_Coding/Soft-in_Soft-Out_Decoder}}
[[File:P_ID3026__KC_A_4_5_v2.png|right|frame|Table for the first  $L_{\rm E}(i)$  approach]]
As in the&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Calculation_of_extrinsic_log_likelihood_ratios|"theory section"]],&nbsp; we assume the&nbsp; "single parity&ndash;check code" &nbsp; $\rm SPC \, (3, \, 2, \, 2)$.

The possible code words are&nbsp; $\underline{x} \hspace{-0.01cm}\in \hspace{-0.01cm}
\{ \underline{x}_0,\hspace{0.05cm}
\underline{x}_1,\hspace{0.05cm}
\underline{x}_2,\hspace{0.05cm}
\underline{x}_3\}$&nbsp; with
:$$\underline{x}_0 \hspace{-0.15cm} \ = \ \hspace{-0.15cm} (0\hspace{-0.03cm},\hspace{0.05cm}0\hspace{-0.03cm},\hspace{0.05cm}0)\hspace{0.35cm}{\rm resp. } \hspace{0.35cm}
\underline{x}_0 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm}+1\hspace{-0.03cm},\hspace{-0.05cm}+1)\hspace{0.05cm},$$
:$$\underline{x}_1 \hspace{-0.15cm} \ = \ \hspace{-0.15cm} (0\hspace{-0.03cm},\hspace{0.05cm}1\hspace{-0.03cm},\hspace{0.05cm}1)\hspace{0.35cm}{\rm resp. } \hspace{0.35cm}
\underline{x}_1 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm}-1\hspace{-0.03cm},\hspace{-0.05cm}-1)\hspace{0.05cm},$$
:$$\underline{x}_2 \hspace{-0.15cm} \ = \ \hspace{-0.15cm} (1\hspace{-0.03cm},\hspace{0.05cm}0\hspace{-0.03cm},\hspace{0.05cm}1)\hspace{0.35cm}{\rm resp. } \hspace{0.35cm}
\underline{x}_2 \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm}+1\hspace{-0.03cm},\hspace{-0.05cm}-1)\hspace{0.05cm},$$
:$$\underline{x}_3 \hspace{-0.15cm} \ = \ \hspace{-0.15cm} (1\hspace{-0.03cm},\hspace{0.05cm}1\hspace{-0.03cm},\hspace{0.05cm}0)\hspace{0.35cm}{\rm resp. } \hspace{0.35cm}
\underline{x}_3 \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm}-1\hspace{-0.03cm},\hspace{-0.05cm}+1)\hspace{0.05cm}.$$

In the exercise we mostly use the second&nbsp; (bipolar)&nbsp; representation of the code symbols:
:$$x_i \in \{+1, -1\}.$$
  
Note:
#The&nbsp; $\rm SPC \, (3, \, 2, \, 2)$&nbsp; is not of much practical interest,&nbsp; since with&nbsp; "hard decision",&nbsp; for example,&nbsp; only one error can be detected and none can be corrected because of&nbsp; $d_{\rm min} = 2$.
#However,&nbsp; the code is well suited for demonstration purposes because of the manageable effort involved.
#With&nbsp; "iterative symbol-wise decoding"&nbsp; one can also correct one error.
#In the present code,&nbsp; the extrinsic&nbsp; $L$&ndash;values&nbsp; $\underline{L}_{\rm E} = \big (L_{\rm E}(1), \ L_{\rm E}(2), \ L_{\rm E}(3)\big )$&nbsp; must be calculated according to the following equation:
:$$L_{\rm E}(i) = {\rm ln} \hspace{0.15cm}\frac{{\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} even} \hspace{0.05cm} \right ]}{{\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} odd} \hspace{0.05cm}\right ]}.$$
:Here&nbsp; $\underline{x}^{(-i)}$&nbsp; denotes all symbols except&nbsp; $x_i$&nbsp; and is thus a vector of length&nbsp; $n - 1 = 2$.
  
  
&rArr; &nbsp; As the&nbsp; &raquo;'''first $L_{\rm E}(i)$ approach'''&laquo;&nbsp; we refer to the approach corresponding to the equations
 
:$$L_{\rm E}(1) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}2 \cdot {\rm tanh}^{-1} \left [{\rm tanh}(L_2/2) \cdot {\rm tanh}(L_3/2) \right ] \hspace{0.05cm},$$
:$$L_{\rm E}(2) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}2 \cdot {\rm tanh}^{-1} \left [{\rm tanh}(L_1/2) \cdot {\rm tanh}(L_3/2) \right ] \hspace{0.05cm},$$
:$$L_{\rm E}(3) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}2 \cdot {\rm tanh}^{-1} \left [{\rm tanh}(L_1/2) \cdot {\rm tanh}(L_2/2) \right ] \hspace{0.05cm}.$$
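These equations can be motivated directly from the above definition&nbsp; $($short sketch,&nbsp; assuming statistically independent symbols$)$: &nbsp; since the two probabilities add up to one and since&nbsp; ${\rm Pr}(x_j = +1) - {\rm Pr}(x_j = -1) = {\rm tanh}(L_j/2)$&nbsp; holds for each symbol,&nbsp; one obtains for example for&nbsp; $i = 1$:
:$${\rm Pr} \big [w_{\rm H}(\underline{x}^{(-1)})\hspace{0.15cm}{\rm is \hspace{0.15cm} even} \big ] - {\rm Pr} \big [w_{\rm H}(\underline{x}^{(-1)})\hspace{0.15cm}{\rm is \hspace{0.15cm} odd} \big ] = {\rm tanh}(L_2/2) \cdot {\rm tanh}(L_3/2)$$
:$$\Rightarrow \hspace{0.3cm} L_{\rm E}(1) = {\rm ln} \hspace{0.15cm}\frac{1 + {\rm tanh}(L_2/2) \cdot {\rm tanh}(L_3/2)}{1 - {\rm tanh}(L_2/2) \cdot {\rm tanh}(L_3/2)} = 2 \cdot {\rm tanh}^{-1} \left [{\rm tanh}(L_2/2) \cdot {\rm tanh}(L_3/2) \right ]\hspace{0.05cm}.$$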
  
'''(1)'''&nbsp; This&nbsp; $L_{\rm E}(i)$&nbsp; approach underlies the results table above&nbsp; $($red entries$)$,&nbsp; assuming the following a-posteriori $L$&ndash;values:
:$$\underline {L}_{\rm APP} = (+1.0\hspace{0.05cm},\hspace{0.05cm}+0.4\hspace{0.05cm},\hspace{0.05cm}-1.0)  \hspace{0.5cm}\Rightarrow \hspace{0.5cm}
L_1 = +1.0\hspace{0.05cm},\hspace{0.15cm}
L_2 = +0.4\hspace{0.05cm},\hspace{0.15cm}
L_3 = -1.0\hspace{0.05cm}.$$
  
'''(2)'''&nbsp; The extrinsic&nbsp; $L$&ndash;values for the zeroth iteration result in&nbsp; $($derivation in&nbsp; [[Aufgaben:Exercise_4.5Z:_Tangent_Hyperbolic_and_Inverse|$\text{Exercise 4.5Z}$]]$)$:
:$$L_{\rm E}(1) = -0.1829, \ L_{\rm E}(2) = -0.4337, \  L_{\rm E}(3) = +0.1829.$$
  
'''(3)'''&nbsp; The a-posteriori&nbsp; $L$&ndash;values at the beginning of the first iteration are thus
:$$\underline{L_{\rm APP} }^{(I=1)} = \underline{L_{\rm APP} }^{(I=0)}  + \underline{L}_{\hspace{0.02cm}\rm E}^{(I=0)}  =  
(+0.8171\hspace{0.05cm},\hspace{0.05cm}-0.0337\hspace{0.05cm},\hspace{0.05cm}-0.8171)  
\hspace{0.05cm} .  $$
  
'''(4)'''&nbsp; From this,&nbsp; the new extrinsic&nbsp; $L$&ndash;values for the iteration loop&nbsp; $I = 1$&nbsp; are as follows:
:$$L_{\rm E}(1) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}2 \cdot {\rm tanh}^{-1} \big [{\rm tanh}(-0.0337/2) \cdot {\rm tanh}(-0.8171/2) \big ] = 0.0130 = -L_{\rm E}(3)\hspace{0.05cm},$$
:$$L_{\rm E}(2) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}2 \cdot {\rm tanh}^{-1} \big [{\rm tanh}(+0.8171/2) \cdot {\rm tanh}(-0.8171/2) \big ]  = - 0.3023\hspace{0.05cm}.$$
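The numbers of steps&nbsp; '''(1)'''&nbsp; to&nbsp; '''(4)'''&nbsp; can be reproduced with a few lines of Python&nbsp; $($a minimal sketch;&nbsp; all names are chosen here only for illustration$)$:
<pre>
import math

def extrinsic_first_approach(L):
    # first L_E(i) approach for the SPC (3, 2, 2):
    # "box-plus" combination of the two other a-posteriori L-values
    LE = []
    for i in range(3):
        La, Lb = [L[j] for j in range(3) if j != i]
        LE.append(2 * math.atanh(math.tanh(La / 2) * math.tanh(Lb / 2)))
    return LE

L0  = [+1.0, +0.4, -1.0]                    # a-posteriori L-values for I = 0
LE0 = extrinsic_first_approach(L0)          # step (2):  approx. (-0.183, -0.434, +0.183)
L1  = [l + le for l, le in zip(L0, LE0)]    # step (3):  approx. (+0.817, -0.034, -0.817)
LE1 = extrinsic_first_approach(L1)          # step (4):  approx. (+0.013, -0.302, -0.013)
print(LE0, L1, LE1)
</pre>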
  
Further,&nbsp; one can see from the above table:
* A hard decision according to the signs before the first iteration&nbsp; $(I = 0)$ fails,&nbsp; since&nbsp; $(+1, +1, -1)$&nbsp; is not a valid&nbsp; $\rm SPC \, (3, \, 2, \, 2)$&nbsp; code word.
* But already after&nbsp; $I = 1$&nbsp; iterations,&nbsp; a hard decision yields a valid code word,&nbsp; namely&nbsp; $\underline{x}_1 = (+1, -1, -1)$.
* Also in later graphs,&nbsp; the rows with correct hard decisions for the first time are highlighted in blue.
* Hard decisions after further iterations&nbsp; $(I \ge 2)$&nbsp; each lead to the same code word&nbsp; $\underline{x}_1$. This statement is not only valid for this example, but in general.
  
  
Besides,&nbsp; in this exercise we consider a&nbsp; &raquo;'''second $L_{\rm E}(i)$ approach'''&laquo;,&nbsp; which is given here for the example of the first symbol&nbsp; $(i = 1)$:
:$${\rm sign} \big[L_{\rm E}(1)\big] \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm sign} \big[L_{\rm E}(2)\big] \cdot {\rm sign} \big[L_{\rm E}(3)\big]\hspace{0.05cm},$$
:$$|L_{\rm E}(1)| \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Min} \left ( |L_{\rm E}(2)|\hspace{0.05cm}, \hspace{0.05cm}|L_{\rm E}(3)| \right )  \hspace{0.05cm}.$$

This second approach is based on the assumption that the reliability of&nbsp; $L_{\rm E}(i)$&nbsp; is essentially determined by the most unreliable neighbor symbol.&nbsp; The better&nbsp; $($larger$)$&nbsp; input log likelihood ratio is completely disregarded.

Let us consider two examples for this:
  
'''(1)'''&nbsp; For&nbsp; $L_2 = 1.0$&nbsp; and&nbsp; $L_3 = 5.0$&nbsp; we get
* according to the first approach: &nbsp; $L_{\rm E}(1) =2 \cdot {\rm tanh}^{-1} \big [{\rm tanh}(0.5) \cdot {\rm tanh}(2.5) \big ] =2 \cdot {\rm tanh}^{-1}(0.4559) = 0.984\hspace{0.05cm},$
* according to the second approach: &nbsp; $|L_{\rm E}(1)| \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Min} \big ( 1.0\hspace{0.05cm}, \hspace{0.05cm}5.0 \big )  = 1.000 \hspace{0.05cm}.$
  
 
'''(2)'''&nbsp; On the other hand one obtains for&nbsp; $L_2 = L_3 = 1.0$
* according to the first approach: &nbsp; $L_{\rm E}(1) =2 \cdot {\rm tanh}^{-1} \big [{\rm tanh}(0.5) \cdot {\rm tanh}(0.5) \big ] =2 \cdot {\rm tanh}^{-1}(0.2135) = 0.433\hspace{0.05cm},$
* according to the second approach: &nbsp; $|L_{\rm E}(1)| \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Min} \big ( 1.0\hspace{0.05cm}, \hspace{0.05cm}1.0 \big )  = 1.000 \hspace{0.05cm}.$
  
  
One can see the clear discrepancy between the two approaches.&nbsp; The second approach&nbsp; $($the approximation$)$&nbsp; judges the reliability more optimistically than the first&nbsp; $($correct$)$&nbsp; approach.&nbsp; However,&nbsp; what actually matters is only that the iterations lead to the desired decoding result.
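The two examples can also be checked with a short Python sketch&nbsp; $($again,&nbsp; the function names are chosen here only for illustration$)$:
<pre>
import math

def le_first(La, Lb):
    # first (exact) approach
    return 2 * math.atanh(math.tanh(La / 2) * math.tanh(Lb / 2))

def le_second(La, Lb):
    # second approach (approximation):  sign product and minimum magnitude
    sign = (1 if La >= 0 else -1) * (1 if Lb >= 0 else -1)
    return sign * min(abs(La), abs(Lb))

for L2, L3 in [(1.0, 5.0), (1.0, 1.0)]:
    print(L2, L3, le_first(L2, L3), le_second(L2, L3))
# (1.0, 5.0):  first approach approx. 0.98,  second approach 1.0
# (1.0, 1.0):  first approach approx. 0.43,  second approach 1.0
</pre>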
  
  
  
  
<u>Hints:</u>
*The exercise belongs to the chapter&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder|"Soft&ndash;in Soft&ndash;out Decoder"]].
*Reference is made in particular to the section&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Calculation_of_extrinsic_log_likelihood_ratios|"Calculation of extrinsic log likelihood ratios"]].
*Only the&nbsp; '''second solution approach'''&nbsp; is treated here.
*For the first solution approach we refer to&nbsp; [[Aufgaben:Exercise_4.5Z:_Tangent_Hyperbolic_and_Inverse|$\text{Exercise 4.5Z}$]].
 
   
 
   
  
 
===Questions===
 
 
<quiz display=simple>
 
{Let&nbsp; $\underline{L} = (+1.0, +0.4, -1.0)$.&nbsp; Determine the extrinsic&nbsp; $L$&ndash;values according to the&nbsp; '''second&nbsp; $L_{\rm E}(i)$&ndash;approach'''&nbsp; without previous iteration&nbsp; $\underline{(I = 0)}$.
|type="{}"}
$L_{\rm E}(1) \ = \ ${ -0.412--0.388 }
$L_{\rm E}(2) \ = \ ${ -1.03--0.97 }
$L_{\rm E}(3) \ = \ ${ 0.4 3% }
 
{What are the a-posteriori $L$&ndash;values&nbsp; $L_i = L_{\rm APP} (i)$&nbsp; for the first iteration&nbsp; $\underline{(I = 1)}$?
|type="{}"}
$L_1 \ = \ ${ 0.6 3% }
$L_2 \ = \ ${ -0.618--0.582 }
$L_3 \ = \ ${ -0.618--0.582 }
  
{Which of the following statements are true for&nbsp; $\underline{L} = (+1.0, +0.4, -1.0)$?
|type="[]"}
+ Hard decision&nbsp; after&nbsp; $I = 1$&nbsp; leads to the code word&nbsp; $\underline{x}_1 = (+1, -1, -1)$.
+ This does not change after further iterations.
- Further iterations do not increase the reliability for&nbsp; $\underline{x}_1$.
  
{Which of the following statements are true for&nbsp; $\underline{L} = (+0.6, +1.0, -0.4)$?
|type="[]"}
+ The iterative decoding leads to the result&nbsp; $\underline{x}_0 = (+1, +1, +1)$.
- The iterative decoding leads to the result&nbsp; $\underline{x}_2 = (-1, +1, -1)$.
+ Hard decision also returns this result for&nbsp; $I \ge 1$.
  
{Which of the following statements are true for&nbsp; $\underline{L} = (+0.6, +1.0, -0.8)$?
|type="[]"}
- The iterative decoding leads to the result&nbsp; $\underline{x}_0 = (+1, +1, +1)$.
+ The iterative decoding leads to the result&nbsp; $\underline{x}_2 = (-1, +1, -1)$.
+ Hard decision also returns this result for&nbsp; $I \ge 1$.
  
{Which of the following statements are true for&nbsp; $\underline{L} = (+0.6, +1.0, -0.6)$?
|type="[]"}
- The iterative decoding leads to the result&nbsp; $\underline{x}_0 = (+1, +1, +1)$.
- The iterative decoding leads to the result&nbsp; $\underline{x}_2 = (-1, +1, -1)$.
+ The iterative decoding does not lead to a valid decoding result here.
 
</quiz>
 
===Solution===
 
{{ML-Kopf}}
 
[[File:P_ID3027__KC_A_4_5a_v2.png|right|frame|Results for&nbsp; $\underline{L}=(+1.0, +0.4, –1.0)$]]
'''(1)'''&nbsp; According to the second&nbsp; $L_{\rm E}(i)$&nbsp; approach,&nbsp; the following holds:
:$${\rm sign} [L_{\rm E}(1)] \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm sign} [L_{\rm E}(2)] \cdot {\rm sign} [L_{\rm E}(3)] = -1 \hspace{0.05cm},$$
:$$|L_{\rm E}(1)| \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Min} \left ( |L_{\rm E}(2)|\hspace{0.05cm}, \hspace{0.05cm}|L_{\rm E}(3)| \right )  = {\rm Min} \left ( 0.4\hspace{0.05cm}, \hspace{0.05cm}1.0 \right ) = 0.4$$
:$$\Rightarrow \hspace{0.3cm}L_{\rm E}(1) \hspace{0.15cm} \underline{= -0.4}\hspace{0.05cm}.$$

*In the same way one gets:
:$$L_{\rm E}(2) \hspace{0.15cm} \underline{= -1.0}\hspace{0.05cm},$$
:$$L_{\rm E}(3) \hspace{0.15cm} \underline{= +0.4}\hspace{0.05cm}.$$
  
  
'''(2)'''&nbsp; The a-posteriori&nbsp; $L$&ndash;values at the beginning of the first iteration&nbsp; $(I = 1)$&nbsp; are the sum
*of the previous&nbsp; $L$&ndash;values&nbsp; $($for&nbsp; $I = 0)$,
*and the extrinsic values calculated in subtask&nbsp; '''(1)''':
:$$L_1 = L_{\rm APP}(1) \hspace{-0.15cm} \ = \ \hspace{-0.15cm}1.0 + (-0.4)\hspace{0.15cm} \underline{=+0.6}\hspace{0.05cm},$$
:$$L_2 = L_{\rm APP}(2) \hspace{-0.15cm} \ = \ \hspace{-0.15cm} 0.4 + (-1.0)\hspace{0.15cm} \underline{=-0.6}\hspace{0.05cm},$$
:$$L_3 = L_{\rm APP}(3) \hspace{-0.15cm} \ = \ \hspace{-0.15cm} (-1.0) + 0.4\hspace{0.15cm} \underline{=-0.6}\hspace{0.05cm}.$$
  
  
'''(3)'''&nbsp; As can be seen from the above table,&nbsp; the&nbsp; <u>solutions 1 and 2</u>&nbsp; are correct in contrast to answer 3:
*With each new iteration,&nbsp; the magnitudes of&nbsp; $L(1), \ L(2)$&nbsp; and&nbsp; $L(3)$&nbsp; become significantly larger&nbsp; $($see the sketch below$)$.
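The course of the iterations can be checked with a short Python sketch of the second approach&nbsp; $($all names chosen here only for illustration$)$:
<pre>
def iterate_second_approach(L, iterations):
    # iterative symbol-wise decoding of the SPC (3, 2, 2)
    # with the second (min-sum) L_E(i) approach
    L = list(L)
    for _ in range(iterations):
        LE = []
        for i in range(3):
            a, b = [L[j] for j in range(3) if j != i]
            sign = (1 if a >= 0 else -1) * (1 if b >= 0 else -1)
            LE.append(sign * min(abs(a), abs(b)))
        L = [l + le for l, le in zip(L, LE)]
    return L

print([round(x, 4) for x in iterate_second_approach([+1.0, +0.4, -1.0], 1)])
# -> [0.6, -0.6, -0.6],  i.e. hard decision (+1, -1, -1)
print([round(x, 4) for x in iterate_second_approach([+1.0, +0.4, -1.0], 5)])
# -> the magnitudes keep growing, the signs do not change
</pre>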
  
  
[[File:P_ID3030__KC_A_4_5d_v2.png|right|frame|Results for&nbsp; $\underline{L}=(+0.6, +1.0, –0.4)$]]
<br><br>
'''(4)'''&nbsp; As can be seen from the adjacent table,&nbsp; the&nbsp; <u>answers 1 and 3</u>&nbsp; are correct:
*So the decision is made for the code word&nbsp; $\underline{x}_0 = (+1, +1, +1)$.
*From&nbsp; $I = 1$&nbsp; on,&nbsp; this would also be the decision of&nbsp; "hard decision".
 
<br clear=all>
 
[[File:P_ID3028__KC_A_4_5e_v2.png|right|frame|Results for&nbsp; $\underline{L}=(+0.6, +1.0, –0.8)$]]
'''(5)'''&nbsp; Correct are the&nbsp; <u>answers 2 and 3</u>:
*Because of&nbsp; $|L(3)| > |L(1)|$,&nbsp; the following holds already for&nbsp; $I \ge 1$: &nbsp; $L_1 < 0 \hspace{0.05cm},\hspace{0.2cm} L_2 > 0 \hspace{0.05cm},\hspace{0.2cm} L_3 < 0 \hspace{0.05cm}.$
*From this iteration loop on,&nbsp; hard decision returns the code word&nbsp; $\underline{x}_2 = (-1, +1, -1)$.
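For illustration,&nbsp; the first iteration with the second approach&nbsp; $($values recomputed here$)$&nbsp; yields
:$$L_{\rm E}(1) = -0.8\hspace{0.05cm},\hspace{0.2cm}L_{\rm E}(2) = -0.6\hspace{0.05cm},\hspace{0.2cm}L_{\rm E}(3) = +0.6
\hspace{0.3cm}\Rightarrow\hspace{0.3cm}\underline{L}^{(I=1)} = (-0.2\hspace{0.05cm},\hspace{0.05cm}+0.4\hspace{0.05cm},\hspace{0.05cm}-0.2)\hspace{0.05cm}.$$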
 
<br clear=all>
 
[[File:P_ID3029__KC_A_4_5f_v1.png|right|frame|Results for&nbsp; $\underline{L}=(+0.6, +1.0, –0.6)$]]
'''(6)'''&nbsp; Correct is the&nbsp; <u>proposed solution 3</u>:
*The adjacent table shows that under the condition&nbsp; $|L(1)| = |L(3)|$,&nbsp; starting from the iteration loop&nbsp; $I = 1$,&nbsp; all extrinsic&nbsp; $L$&ndash;values are zero.
*Thus,&nbsp; the a-posteriori&nbsp; $L$&ndash;values remain constantly equal to&nbsp; $\underline{L} = (0, +0.4, 0)$&nbsp; even for&nbsp; $I > 1$,&nbsp; which cannot be assigned to any code word.
 
{{ML-Fuß}}
 