Exercise 4.3: Iterative Decoding at the BSC
  
 
[[File:P_ID2987__KC_A_4_3_v1.png|right|frame|BSC model and possible received values]]
 
We consider two codes in this exercise:
* the single parity-check code &nbsp; ⇒ &nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Single_Parity-check_Codes| "SPC (3, 2, 2)"]]:
 
:$$\underline{x} = \big (\hspace{0.05cm}(0, 0, 0), \hspace{0.1cm}
(0, 1, 1), \hspace{0.1cm}
(1, 0, 1), \hspace{0.1cm}
(1, 1, 0) \hspace{0.05cm} \big )
\hspace{0.05cm}, $$
* the repetition code &nbsp; ⇒ &nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Repetition_Codes| "RC (3, 1, 3)"]]:
 
:$$\underline{x} = \big (\hspace{0.05cm}(0, 0, 0), \hspace{0.1cm}
(1, 1, 1)  \hspace{0.05cm} \big )
\hspace{0.05cm}. $$
  
  
The channel is described at the bit level by the&nbsp; [[Digital_Signal_Transmission/Binary_Symmetric_Channel|"BSC model"]]. According to the graphic, the following applies:
 
:$${\rm Pr}(y_i \ne x_i) = \varepsilon = 0.269\hspace{0.05cm},$$
:$${\rm Pr}(y_i = x_i) = 1-\varepsilon = 0.731\hspace{0.05cm}.$$
  
Here,&nbsp; $\varepsilon$&nbsp; denotes the corruption probability of the BSC model.
  
Except for the last subtask, the following received value is always assumed:
 
:$$\underline{y} = (0, 1, 0) =\underline{y}_2
\hspace{0.05cm}. $$
  
The indexing of all possible received vectors chosen here can be found in the graphic (see also the short sketch after this list).
*The vector&nbsp; $\underline{y}_2$,&nbsp; which is considered most often, is highlighted in red.
*For subtask&nbsp; '''(6)''', the following applies:
 
:$$\underline{y} = (1, 1, 0) =\underline{y}_6
\hspace{0.05cm}. $$
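The indexing convention appears to be the binary value of the vector read from left to right (MSB first); this is an assumption, but it is consistent with $\underline{y}_2 = (0, 1, 0)$ and $\underline{y}_6 = (1, 1, 0)$ as used in this exercise. A minimal Python sketch:

<syntaxhighlight lang="python">
# Assumed indexing: the index of a received vector is its binary value,
# read MSB-first; consistent with y_2 = (0,1,0) and y_6 = (1,1,0) above.
from itertools import product

for idx, y in enumerate(product((0, 1), repeat=3)):
    print(f"y_{idx} = {y}")   # y_0 = (0, 0, 0), ..., y_7 = (1, 1, 1)
</syntaxhighlight>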
  
The exercise examines the following decoding approaches:
* the&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Generalization_of_syndrome_coding|"syndrome decoding"]], which for the codes under consideration follows the concept of&nbsp; <i>hard decision maximum likelihood detection</i>&nbsp; (HD&ndash;ML) <br>(soft values are not available with the BSC),
* the symbol-wise&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Symbol-wise_soft-in_soft-out_decoding|"Soft&ndash;in Soft&ndash;out Decoding"]]&nbsp; (SISO) according to this section.
  
  
  
  
''Hints:''
* This exercise refers to the chapter&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder| "Soft&ndash;in Soft&ndash;out Decoder"]].
* Reference is made in particular to the pages
::[[Channel_Coding/Soft-in_Soft-Out_Decoder#Symbol-wise_soft-in_soft-out_decoding|"Symbol-wise Soft&ndash;in Soft&ndash;out Decoding"]], as well as
::[[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|''Binary Symmetric Channel'']].
* The codeword selected by the decoder is denoted by&nbsp; $\underline{z}$&nbsp; in the questions.
 
   
 
   
  
  
  
===Questions===
 
<quiz display=simple>
 
{Which statements are valid for decoding the&nbsp; SPC (3, 2, 2)?
 
|type="()"}
 
|type="()"}
- The HD syndrome decoding yields the result&nbsp; $\underline{z} = (0, \, 1, \, 0)$.
- The HD syndrome decoding yields the result&nbsp; $\underline{z} = (0, \, 0, \, 0)$.
+ The HD syndrome decoding fails here.
  
{Which statements are valid for the&nbsp;  RC (3, 1, 3)?
 
|type="()"}
 
|type="()"}
- The HD syndrome decoding yields the result&nbsp; $\underline{z} = (0, \, 1, \, 0)$.
+ The HD syndrome decoding yields the result&nbsp; $\underline{z} = (0, \, 0, \, 0)$.
- The HD syndrome decoding fails here.
  
{How certain is this decision if the certainty&nbsp; $S$&nbsp; is defined as the ratio of the probabilities of a correct and an incorrect decision? <br>Set the corruption probability of the BSC model to&nbsp; $\varepsilon = 26.9\%$.
 
|type="{}"}
 
|type="{}"}
 
$S \ = \ ${ 2.717 3% }
$\ln {(S)} \ = \ ${ 1 3% }
  
{What are the intrinsic&nbsp; L&ndash;values for the iterative symbol-wise decoding of the&nbsp; RC (3, 1) received word&nbsp; $\underline{y}_2 = (0, \, 1, \, 0)$?
 
|type="{}"}
 
|type="{}"}
 
$L_{\rm K}(1) \ = \ ${ 1 3% }
$L_{\rm K}(2) \ = \ ${ -1 3% }
$L_{\rm K}(3) \ = \ ${ 1 3% }
  
{Which statements are true for decoding the received word&nbsp; $\underline{y}_2 = (0, \, 1, \, 0)$? Continue to assume the&nbsp; RC (3, 1, 3).
 
|type="[]"}
 
|type="[]"}
+ From the first iteration on, all signs of&nbsp; $L_{\rm APP}(i)$&nbsp; are positive.
+ Already after the second iteration,&nbsp; ${\rm Pr}(\underline{x}_0\hspace{0.05cm} |\hspace{0.05cm} \underline{y}_2)$&nbsp; is greater than&nbsp; 99%.
+ With each iteration, the magnitudes&nbsp; $|L_{\rm APP}(i)|$&nbsp; become larger.
  
{Which statements are true for decoding the received word&nbsp; $\underline{y}_6 = (1, \, 1, \, 0)$&nbsp; if&nbsp; $\underline{x}_0 = (0, \, 0, \, 0)$&nbsp; was sent?
 
|type="[]"}
 
|type="[]"}
- The iterative decoder decides correctly.
+ The iterative decoder decides incorrectly.
+ The "reliability" for "$\underline{y}_6 \Rightarrow \underline{x}_1$" increases with increasing&nbsp; $I$.
 
</quiz>
 
  
===Solution===
 
{{ML-Kopf}}
 
'''(1)'''&nbsp; Correct is <u>proposed solution 3</u>:
*The received word $\underline{y}_2 = (0, 1, 0)$ is not a valid codeword of the <i>single parity&ndash;check code</i> SPC (3, 2). Thus, the first statement is false.
*In addition, since the SPC (3, 2) has only the minimum distance $d_{\rm min} = 2$, no error can be corrected.
  
  
  
'''(2)'''&nbsp; Correct is <u>proposed solution 2</u>:
*The possible codewords of the RC (3, 1) are $\underline{x}_0 = (0, 0, 0)$ and $\underline{x}_1 = (1, 1, 1)$.
*The minimum distance of this code is $d_{\rm min} = 3$, so $t = (d_{\rm min} - 1)/2 = 1$ error can be corrected.
*Besides $\underline{y}_0 = (0, 0, 0)$, the received words $\underline{y}_1 = (0, 0, 1)$, $\underline{y}_2 = (0, 1, 0)$ and $\underline{y}_4 = (1, 0, 0)$ are also assigned to the decoding result $\underline{x}_0 = (0, 0, 0)$.
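A minimal Python sketch of this hard-decision rule (ML decoding as minimum Hamming distance over the codebook; the helper name <code>hd_ml</code> is chosen here for illustration, it is not from the course) reproduces the findings of subtasks '''(1)''' and '''(2)''':

<syntaxhighlight lang="python">
def hd_ml(y, codebook):
    """Return all codewords at minimum Hamming distance from y."""
    dist = {x: sum(a != b for a, b in zip(x, y)) for x in codebook}
    d_min = min(dist.values())
    return [x for x, d in dist.items() if d == d_min]

spc = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # SPC (3, 2, 2)
rc  = [(0, 0, 0), (1, 1, 1)]                         # RC (3, 1, 3)

print(hd_ml((0, 1, 0), spc))  # three codewords tie at distance 1 -> decoding fails
print(hd_ml((0, 1, 0), rc))   # unique result (0, 0, 0) -> one error corrected
</syntaxhighlight>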
  
  
  
'''(3)'''&nbsp; According to the BSC model, the conditional probability that $\underline{y}_2 = (0, 1, 0)$ is received, given that $\underline{x}_0 = (0, 0, 0)$ was sent, is:
 
:$${\rm Pr}(\underline{y} = \underline{y}_2 \hspace{0.1cm}| \hspace{0.1cm}\underline{x} =  \underline{x}_0 ) = (1-\varepsilon)^2 \cdot \varepsilon\hspace{0.05cm}.$$
  
*The first term $(1 - \varepsilon)^2$ gives the probability that the first and the third bit were transmitted correctly, and $\varepsilon$ accounts for the corruption of the second bit.
  
*Correspondingly, for the second possible codeword $\underline{x}_1 = (1, 1, 1)$:
 
:$${\rm Pr}(\underline{y} = \underline{y}_2 \hspace{0.1cm}| \hspace{0.1cm}\underline{x} =  \underline{x}_1 ) = \varepsilon^2 \cdot (1-\varepsilon) \hspace{0.05cm}.$$
  
*According to Bayes' theorem, the inference probabilities are then:
 
:$${\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y} = \underline{y}_2  ) \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Pr}(\underline{y} = \underline{y}_2 \hspace{0.1cm}| \hspace{0.1cm}\underline{x} =  \underline{x}_0 )  \cdot \frac{{\rm Pr}(\underline{x} =  \underline{x}_0)}
{{\rm Pr}(\underline{y} =  \underline{y}_2)} \hspace{0.05cm},$$

:$${\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_1 \hspace{0.1cm}| \hspace{0.1cm}\underline{y} = \underline{y}_2  ) \hspace{-0.15cm} \ = \ \hspace{-0.15cm} {\rm Pr}(\underline{y} = \underline{y}_2 \hspace{0.1cm}| \hspace{0.1cm}\underline{x} =  \underline{x}_1 )  \cdot \frac{{\rm Pr}(\underline{x} =  \underline{x}_1)}
{{\rm Pr}(\underline{y} =  \underline{y}_2)} \hspace{0.05cm}.$$

Since both codewords are equally likely, the a-priori probabilities cancel in the ratio:

:$$\Rightarrow \hspace{0.3cm} S =  \frac{{\rm Pr(correct \hspace{0.15cm}decision)}}
{{\rm Pr(wrong \hspace{0.15cm}decision) }} = \frac{(1-\varepsilon)^2 \cdot  \varepsilon}{\varepsilon^2 \cdot (1-\varepsilon)}= \frac{1-\varepsilon}{\varepsilon}\hspace{0.05cm}.$$
  
*With $\varepsilon = 0.269$, one obtains the following numerical values:
 
:$$S =  {0.731}/{0.269}\hspace{0.15cm}\underline {= 2.717}\hspace{0.3cm}\Rightarrow   
\hspace{0.3cm}{\rm ln}\hspace{0.15cm}(S)\hspace{0.15cm} \underline {= 1}\hspace{0.05cm}.$$
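A quick numerical check of these values (a sketch, assuming only the stated $\varepsilon = 0.269$):

<syntaxhighlight lang="python">
import math

eps = 0.269   # BSC corruption probability from the problem statement

# S = Pr(correct decision) / Pr(wrong decision) for y_2 = (0, 1, 0)
S = ((1 - eps)**2 * eps) / (eps**2 * (1 - eps))   # simplifies to (1-eps)/eps

print(round(S, 3))            # 2.717
print(round(math.log(S), 3))  # 1.0
</syntaxhighlight>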
  
  
'''(4)'''&nbsp; The sign of the channel L&ndash;value $L_{\rm K}(i)$ is positive if $y_i = 0$, and negative for $y_i = 1$.
*The absolute value indicates the reliability of $y_i$. In the BSC model, $|L_{\rm K}(i)| = \ln \big[ (1 - \varepsilon)/\varepsilon \big] = 1$ for all $i$. Thus:
[[File:P_ID2988__KC_A_4_3e_v1.png|right|frame|Iterative decoding of $(+1, -1, +1)$]]
 
:$$\underline {L_{\rm K}}(1)\hspace{0.15cm} \underline {= +1}\hspace{0.05cm},\hspace{0.5cm}
\underline {L_{\rm K}}(2)\hspace{0.15cm} \underline {= -1}\hspace{0.05cm},\hspace{0.5cm}
\underline {L_{\rm K}}(3)\hspace{0.15cm} \underline {= +1}\hspace{0.05cm}.$$
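As a sketch of this mapping (assuming only the BSC relation just stated; the helper name <code>channel_llr</code> is illustrative):

<syntaxhighlight lang="python">
import math

def channel_llr(y, eps=0.269):
    """Channel L-values of a BSC: y_i = 0 -> +|L_K|, y_i = 1 -> -|L_K|."""
    mag = math.log((1 - eps) / eps)   # |L_K| = ln[(1-eps)/eps], ~1 here
    return [mag if bit == 0 else -mag for bit in y]

print(channel_llr((0, 1, 0)))         # ~[+1.0, -1.0, +1.0]
</syntaxhighlight>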
  
  
'''(5)'''&nbsp; The adjacent table illustrates the iterative symbol-wise decoding starting from $\underline{y}_2 = (0, \, 1, \, 0)$.
 
<br clear=all>
 
These results can be interpreted as follows:
* The preassignment (iteration $I = 0$) is done according to $\underline{L}_{\rm APP} = \underline{L}_{\rm K}$. A hard decision &nbsp;&#8658;&nbsp; "${\rm sign} \hspace{0.05cm} L_{\rm APP}(i)$" would lead to the decoding result $(0, \, 1, \, 0)$. The reliability of this obviously wrong result is given as $|{\it \Sigma}| = 1$. This value agrees with the $\ln (S)$ calculated in subtask '''(3)'''.
* After the first iteration $(I = 1)$, all a-posteriori L&ndash;values are $L_{\rm APP}(i) = +1$. A hard decision here would yield the (presumably) correct result $\underline{x}_{\rm APP} = (0, \, 0, \, 0)$. The probability that this result is correct is quantified by $|{\it \Sigma}_{\rm APP}| = 3$:
 
:$${\rm ln}\hspace{0.25cm}\frac{{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)}{1-{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)} = 3
\hspace{0.3cm}\Rightarrow \hspace{0.3cm}
\frac{{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)}{1-{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)} = {\rm e}^3 \approx 20$$

:$$\hspace{0.3cm}\Rightarrow \hspace{0.3cm}{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2) = {20}/{21}
  {\approx 95.24\%}\hspace{0.05cm}.$$
* The second iteration confirms the decoding result of the first iteration. Here, the reliability is even quantified as "9". This value can be interpreted as follows:
 
:$$\frac{{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)}{1-{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2)} = {\rm e}^9
\hspace{0.3cm}\Rightarrow \hspace{0.3cm}
{\rm Pr}(\hspace{0.1cm}\underline{x} =  \underline{x}_0 \hspace{0.1cm}| \hspace{0.1cm}\underline{y}=\underline{y}_2) = {{\rm e}^9}/{({\rm e}^9+1)}
  \approx 99.99\% \hspace{0.05cm}.$$
* With each further iteration, the reliability value and thus the probability ${\rm Pr}(\underline{x}_0\hspace{0.05cm}|\hspace{0.05cm}\underline{y}_2)$ increase drastically &nbsp;&#8658;&nbsp; <u>All proposed solutions</u> are correct.
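The following Python sketch mimics one plausible update rule for the symbol-wise SISO decoding of the repetition code. The rule is an assumption, chosen because it reproduces the table values $|{\it \Sigma}| = 3$ after the first and $|{\it \Sigma}| = 9$ after the second iteration; the function name <code>siso_rc3</code> is illustrative:

<syntaxhighlight lang="python">
# Sketch of iterative symbol-wise (SISO) decoding of the RC (3, 1, 3).
# Assumed update rule (consistent with Sigma = 3 and 9 in the table):
#   L_E(i)   = sum of (L_K(j) + L_A(j)) over all j != i
#   L_APP(i) = L_K(i) + L_A(i) + L_E(i),   then   L_A <- L_E
def siso_rc3(L_K, iterations):
    L_A = [0.0] * len(L_K)                       # no a-priori knowledge at I = 0
    for it in range(1, iterations + 1):
        tot = [k + a for k, a in zip(L_K, L_A)]
        L_E = [sum(tot) - t for t in tot]        # leave-one-out sums
        L_APP = [k + a + e for k, a, e in zip(L_K, L_A, L_E)]
        print(f"I = {it}: L_APP = {L_APP}, Sigma = {sum(L_APP)}")
        L_A = L_E                                # extrinsic becomes a-priori
    return L_APP

siso_rc3([+1.0, -1.0, +1.0], 3)   # y_2: Sigma = 3, 9, 21, ... keeps growing
</syntaxhighlight>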
 
  
  
[[File:P_ID2991__KC_A_4_3f_v1.png|right|frame|Iterative decoding of $(-1, -1, +1)$]]
'''(6)'''&nbsp; Correct are <u>the proposed solutions 2 and 3</u>:
*For the received vector $\underline{y}_6 = (1, \, 1, \, 0)$, the second table applies.
*The decoder now decides for the sequence $\underline{x}_1 = (1, \, 1, \, 1)$.
*The case "$\underline{y}_6 = (1, \, 1, \, 0)$ received under the condition $\underline{x}_1 = (1, \, 1, \, 1)$ sent" would correspond exactly to the constellation "$\underline{y}_2 = (0, \, 1, \, 0)$ received and $\underline{x}_0 = (0, \, 0, \, 0)$ sent" considered in the last subtask: in both cases, the received word differs from the sent codeword in exactly one bit.
*But since $\underline{x}_0 = (0, \, 0, \, 0)$ was sent, there are now two bit errors, with the following consequence:
:* The iterative decoder decides incorrectly.
:* With each further iteration, the wrong decision is declared more and more reliable.
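Applying the same <code>siso_rc3</code> sketch from subtask '''(5)''' to $\underline{y}_6 = (1, \, 1, \, 0)$, i.e. $\underline{L}_{\rm K} = (-1, -1, +1)$, shows exactly this behavior:

<syntaxhighlight lang="python">
# y_6 = (1, 1, 0)  ->  L_K = (-1, -1, +1): all L_APP turn negative, so the
# decoder converges to x_1 = (1, 1, 1), and |Sigma| grows with every pass.
siso_rc3([-1.0, -1.0, +1.0], 3)   # Sigma = -3, -9, -21, ...
</syntaxhighlight>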
 
  
  
