{{Header
|Untermenü=Iterative Decoding Methods
|Vorherige Seite=Distance Characteristics and Error Probability Bounds
|Nächste Seite=The Basics of Product Codes
}}

== # OVERVIEW OF THE FOURTH MAIN CHAPTER # ==
<br>
The last main chapter of the channel coding book describes&nbsp; &raquo;'''iterative decoding techniques'''&laquo;&nbsp; as used in most of today's&nbsp; (2017)&nbsp; communication systems.&nbsp; This is due to the following reasons:

*To approach the channel capacity,&nbsp; one needs very long codes.

*But for long codes,&nbsp; "blockwise maximum likelihood decoding"&nbsp; is almost impossible.


The decoder complexity can be reduced significantly,&nbsp; at almost the same quality,&nbsp; if two&nbsp; $($or more$)$&nbsp; short channel codes are linked together and the newly acquired&nbsp; $($soft$)$&nbsp; information is exchanged between the decoders at the receiver in several steps &nbsp; &rArr; &nbsp; "iteratively".

The breakthrough in the field came in the early 1990s with the invention of the&nbsp; "turbo codes"&nbsp; by&nbsp; [https://en.wikipedia.org/wiki/Claude_Berrou $\text{Claude Berrou}$]&nbsp; and shortly thereafter with the rediscovery of the&nbsp; "low-density parity-check codes"&nbsp; by&nbsp; [https://en.wikipedia.org/wiki/David_J._C._MacKay $\text{David J. C. MacKay}$]&nbsp; and&nbsp; [https://en.wikipedia.org/wiki/Radford_M._Neal $\text{Radford M. Neal}$],&nbsp; after the LDPC codes developed as early as 1961 by&nbsp; [https://en.wikipedia.org/wiki/Robert_G._Gallager $\text{Robert G. Gallager}$]&nbsp; had been forgotten in the meantime.

Specifically,&nbsp; the fourth main chapter discusses:

*a comparison of&nbsp; &raquo;hard decision&laquo;&nbsp; and&nbsp; &raquo;soft decision&laquo;,

*the quantification of&nbsp; &raquo;reliability information&laquo;&nbsp; by&nbsp; &raquo;log likelihood ratios&laquo;,

*the principle of symbol-wise&nbsp; &raquo;soft-in soft-out&nbsp; (SISO)&laquo;&nbsp; decoding,

*the definition of the&nbsp; &raquo;a-priori&nbsp; L&ndash;value&laquo;,&nbsp; the&nbsp; &raquo;a-posteriori&nbsp; L&ndash;value&laquo;&nbsp; and the&nbsp; &raquo;extrinsic&nbsp; L&ndash;value&laquo;,

*the basic structure of&nbsp; &raquo;serially concatenated&laquo;&nbsp; and&nbsp; &raquo;parallel concatenated&laquo;&nbsp; coding systems,

*the characteristics of&nbsp; &raquo;product codes&laquo;&nbsp; and their&nbsp; &raquo;hard decision decoding&laquo;,

*the basic structure,&nbsp; decoding algorithm,&nbsp; and performance of&nbsp; &raquo;turbo codes&laquo;,

*basics of the&nbsp; &raquo;low-density parity-check codes&laquo;&nbsp; and their applications.
  
== Hard Decision vs. Soft Decision==
<br>
To introduce the topic discussed here,&nbsp; let us consider the following digital transmission system with coding.<br>

[[File:EN_KC_T_4_1_S1a.png|right|frame|Considered digital transmission system with coding|class=fit]]

*In the following,&nbsp; all symbols are given in bipolar representation:
::"$0$" &nbsp;&#8594;&nbsp; "$+1$",&nbsp; and&nbsp;
::"$1$" &nbsp;&#8594;&nbsp; "$-1$".

[[File:EN_KC_T_4_1_S1b.png|right|frame|Comparison of&nbsp; "hard decision" and&nbsp; "soft decision"; &nbsp; for all columns of this table it is assumed:<br>'''(1)''' &nbsp; the information block &nbsp; $\underline{u} = (0,\ 1)$,&nbsp; bipolar representable as&nbsp; $(+1, \, &ndash;1)$,<br>'''(2)''' &nbsp; the encoded block&nbsp; $\underline{x} = (0,\ 1,\ 1)$ &nbsp; &rArr; &nbsp; in bipolar representation:&nbsp; $(+1, \, -1, \, -1)$.<br>|class=fit]]

*The symbol sequence&nbsp; $\underline{u} = (u_1, \ u_2)$&nbsp; of the digital source is assigned to the encoded sequence
:$$\underline{x} = (x_1, \ x_2, \ x_3) = (u_1, \ u_2, \ p),$$
:where&nbsp; $p = u_1 &oplus; u_2$&nbsp; is the parity bit &nbsp; &#8658; &nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Single_Parity-check_Codes| $\text{SPC (3, 2, 2)}$]].<br>

*The&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input| $\text{AWGN channel}$]]&nbsp; changes the binary symbols&nbsp; $x_i &#8712; \{+1, \ &ndash;1\}$&nbsp; into real-valued output values&nbsp; $y_i$,&nbsp; for example according to&nbsp; $\text{channel 4}$&nbsp; in the table below:
**$x_1 = +1$ &nbsp; &#8658; &nbsp; $y_1 = +0.9,$
**$x_2 = -1$ &nbsp; &#8658; &nbsp; $y_2 = +0.1,$
**$x_3 = -1$ &nbsp; &#8658; &nbsp; $y_3 = +0.1.$

*Decoding is done according to the criterion of&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|$\text{&raquo;block-wise maximum likelihood&laquo;}$]]&nbsp; $\text{(ML)}$,&nbsp; distinguishing between
**"hard decision" &nbsp; $\rm {(ML&ndash;HD)}$,&nbsp; and
**"soft decision" &nbsp; $\rm {(ML&ndash;SD)}$.<br>
  
*The given block diagram corresponds to &nbsp;$\rm ML&ndash;HD$:&nbsp; only the signs of the AWGN output values &nbsp; &rArr; &nbsp; $y_{\rm HD, \ \it i} = {\rm sign}\left [y_{\rm SD, \ \it i}\right ]$&nbsp; are evaluated for decoding.

*With&nbsp; "soft decision"&nbsp; $\rm {(ML&ndash;SD)}$,&nbsp; one omits the shaded block and directly evaluates the continuous-valued input values&nbsp; $y_{\rm SD, \ \it i}$.<br>


Thus,&nbsp; the four columns in the table differ only by different AWGN realizations.<br>

{{BlaueBox|TEXT= 
$\text{Definitions:}$&nbsp; From the example table one can see:
*$\text{Hard decision:}$ &nbsp; The sink symbol sequence&nbsp; $\underline{v}_{\rm HD}$&nbsp; results from the&nbsp; "hard decided channel values"&nbsp; $\underline{y}_{\rm HD}$&nbsp; $($blue background$)$. <br> &nbsp; &nbsp; In our example,&nbsp; only the constellations according to&nbsp; $\text{channel 1}$&nbsp; and&nbsp; $\text{channel 2}$&nbsp; are decoded without errors.<br>

*$\text{Soft decision:}$ &nbsp; The sink symbol sequence&nbsp; $\underline{v}_{\rm SD}$&nbsp; results from the&nbsp; "soft channel output values"&nbsp; $\underline{y}_{\rm SD}$&nbsp; $($green background$)$. <br> &nbsp; &nbsp; In this example,&nbsp; the decision is now also correct for&nbsp; $\text{channel 3}$.}}<br>

The entries in the example table above are to be interpreted as follows:
# For the ideal channel &nbsp; &#8658; &nbsp; $\text{channel 1}$ &nbsp; &#8658; &nbsp; $\underline{x} = \underline{y}_{\rm SD} = \underline{y}_{\rm HD}$,&nbsp; there is no difference between the&nbsp; $($blue$)$&nbsp; conventional hard decision variant&nbsp; $\rm {(ML&ndash;HD)}$&nbsp; and the&nbsp; $($green$)$&nbsp; soft decision variant&nbsp; $\rm {(ML&ndash;SD)}$.<br>
# The setting according to&nbsp; $\text{channel 2}$&nbsp; demonstrates low signal distortions.&nbsp; Because of&nbsp; $\underline{y}_{\rm HD} = \underline{x}$&nbsp; $($which means that the channel does not falsify the signs$)$,&nbsp; $\rm ML&ndash;HD$&nbsp; also gives the correct result&nbsp; $\underline{v}_{\rm HD} = \underline{u}$.<br>
# For&nbsp; $\text{channel 3}$,&nbsp; we have&nbsp; $\underline{y}_{\rm HD} &ne; \underline{x}$&nbsp; and there is also no&nbsp; ${\rm SPC} \ (3, 2)$&nbsp; assignment&nbsp; $\underline{u}$ &nbsp; &#8658; &nbsp; $\underline{y}_{\rm HD}$.&nbsp; By outputting&nbsp; $\underline{v}_{\rm HD} = \rm (E, \ E)$,&nbsp; the maximum likelihood decoder reports that it has failed to decode this block.&nbsp; "$\rm E$"&nbsp; stands for an&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{erasure}$]].<br>
# The soft decision decoder also recognizes that decoding based on the signs alone does not work.&nbsp; Based on the&nbsp; $\underline{y}_{\rm SD}$&nbsp; values,&nbsp; however,&nbsp; it recognizes that the second bit has most likely been falsified and decides on the correct symbol sequence&nbsp; $\underline{v}_{\rm SD} = (+1, \, -1) = \underline{u}$.<br>
# For&nbsp; $\text{channel 4}$,&nbsp; the AWGN channel changes the signs of both bit 2 and bit 3,&nbsp; leading to the result &nbsp; $\underline{v}_{\rm HD} = (+1, +1) &ne; \underline{u} = (+1, \, -1)$ &nbsp; &#8658; &nbsp; a block error and at the same time a bit error.&nbsp; The soft decision decoder gives the same wrong result here.<br><br>

The decoding variant&nbsp; $\rm ML&ndash;SD$&nbsp; also offers the advantage over&nbsp; $\rm ML&ndash;HD$&nbsp; that it is relatively easy to assign a reliability value to each decoding result&nbsp; $($this value is not given in the above table$)$.&nbsp; This reliability value would
*have its maximum value for&nbsp; $\text{channel 1}$,<br>
*be significantly smaller for&nbsp; $\text{channel 2}$,<br>
*be close to zero for&nbsp; $\text{channel 3}$&nbsp; and&nbsp; $\text{channel 4}$.<br>
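
The following Python sketch illustrates the difference between the two variants for the&nbsp; ${\rm SPC} \ (3, 2, 2)$&nbsp; considered above.&nbsp; The received values for&nbsp; "channel 4"&nbsp; are taken from the bullet list;&nbsp; the values labeled&nbsp; "channel 3 (assumed)"&nbsp; are illustrative numbers in the spirit of that column and are not part of the original table.

<pre>
import itertools

# Bipolar code words of the SPC (3, 2, 2): the parity bit makes the product x1*x2*x3 = +1.
codewords = [c + (c[0] * c[1],) for c in itertools.product((+1, -1), repeat=2)]

def ml_decode(r):
    """Block-wise ML decision: choose the code word with maximum correlation <r, x>
    (equivalent to minimum Euclidean distance for the AWGN channel).
    A tie between several code words is reported as an erasure."""
    corr = {x: sum(ri * xi for ri, xi in zip(r, x)) for x in codewords}
    best = max(corr.values())
    winners = [x for x, c in corr.items() if c == best]
    return winners[0] if len(winners) == 1 else "(E, E, E)"

examples = {"channel 3 (assumed)": (+0.9, +0.1, -0.9),
            "channel 4 (table)":   (+0.9, +0.1, +0.1)}

for name, y_sd in examples.items():
    y_hd = tuple(+1 if v > 0 else -1 for v in y_sd)     # hard decision: signs only
    print(name, " ML-HD:", ml_decode(y_hd), " ML-SD:", ml_decode(y_sd))
</pre>

For the assumed&nbsp; "channel 3"&nbsp; values,&nbsp; hard decision ends in a tie&nbsp; $($erasure$)$&nbsp; while soft decision recovers the transmitted code word;&nbsp; for&nbsp; "channel 4"&nbsp; both variants decide on the same wrong code word,&nbsp; as stated above.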
  
== Reliability information - Log likelihood ratio==
<br>
Let&nbsp; $x &#8712; \{+1, \, -1\}$&nbsp; be a binary random variable with the probabilities&nbsp; ${\rm Pr}(x = +1)$&nbsp; and&nbsp; ${\rm Pr}(x = \, -1)$.&nbsp; For coding theory,&nbsp; it proves convenient in terms of computation time to use the natural logarithm of the quotient of these two probabilities instead of the probabilities&nbsp; ${\rm Pr}(x = &plusmn;1)$&nbsp; themselves.<br>

{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp;  The&nbsp; <b>log likelihood ratio</b>&nbsp; $\rm (LLR)$&nbsp; of the random variable&nbsp; $x &#8712; \{+1, \, -1\}$&nbsp; is:

::<math>L(x)={\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(x = +1)}{ {\rm Pr}(x = -1)}\hspace{0.05cm}.</math>

*For the unipolar representation&nbsp; $(+1&nbsp; &#8594; &nbsp;0$ &nbsp; and &nbsp; $-1&nbsp; &#8594; &nbsp;1)$,&nbsp; the following applies accordingly with&nbsp; $\xi &#8712; \{0, \, 1\}$:

::<math>L(\xi)={\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(\xi = 0)}{ {\rm Pr}(\xi = 1)}\hspace{0.05cm}.</math>}}<br>

[[File:P ID2975 KC T 4 1 S2a v1.png|right|frame|Probability and log likelihood ratio|class=fit]]
The table gives the nonlinear relationship between&nbsp; ${\rm Pr}(x = &plusmn;1)$&nbsp; and&nbsp; $L(x)$.&nbsp; Replacing&nbsp; ${\rm Pr}(x = +1)$&nbsp; with&nbsp; ${\rm Pr}(\xi = 0)$,&nbsp; the middle row gives the&nbsp; $L$&ndash;value of the unipolar random variable&nbsp; $\xi$.<br>

You can see:
#The more likely random value of&nbsp; $x &#8712; \{+1, \, -1\}$&nbsp; is given by the&nbsp; <b>sign</b> &nbsp; &#8658; &nbsp; ${\rm sign} \ L(x)$.<br>
#In contrast,&nbsp; the&nbsp; <b>absolute value</b> &nbsp; &#8658; &nbsp; $|L(x)|$&nbsp; indicates the reliability of the decision&nbsp; "${\rm sign}(L(x))$".<br><br>
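
The following short Python sketch&nbsp; $($with illustrative probability values$)$&nbsp; reproduces this relationship between probability and&nbsp; $L$&ndash;value in both directions.

<pre>
import math

def llr(p_plus):
    """L-value of the bipolar symbol x with Pr(x = +1) = p_plus."""
    return math.log(p_plus / (1.0 - p_plus))

def prob_plus(L):
    """Inverse mapping:  Pr(x = +1) = 1 / (1 + exp(-L))."""
    return 1.0 / (1.0 + math.exp(-L))

for p in (0.2, 0.5, 0.8, 0.9):
    L = llr(p)
    print(f"Pr(x=+1) = {p:4.2f}  ->  L(x) = {L:+.3f}  ->  back: {prob_plus(L):4.2f}")
</pre>

The value&nbsp; ${\rm Pr}(x = +1) = 0.9$,&nbsp; for instance,&nbsp; yields&nbsp; $L(x) = +2.197$,&nbsp; which reappears in the BSC example below.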
  
{{GraueBox|TEXT=
[[File:P ID2976 KC T 4 1 S2b v2.png|right|frame|Given BSC model]] 
$\text{Example 1:}$&nbsp;  We consider the outlined&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC| $\text{BSC model}$]]&nbsp; with bipolar representation.
*With the falsification probability&nbsp; $\varepsilon = 0.1$&nbsp; and the two random variables&nbsp; $x &#8712; \{+1, \, -1\}$&nbsp; and&nbsp; $y &#8712; \{+1, \, -1\}$&nbsp; at the input and output of the channel,&nbsp; the following holds:

::<math>L(y\hspace{0.05cm}\vert\hspace{0.05cm}x) =
{\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(y\hspace{0.05cm}\vert\hspace{0.05cm}x = +1) }{ {\rm Pr}(y\hspace{0.05cm}\vert\hspace{0.05cm}x = -1)}  =  
\left\{ \begin{array}{c} {\rm ln} \hspace{0.15cm} \big[(1 - \varepsilon)/\varepsilon \big]\\
  {\rm ln} \hspace{0.15cm}\big [\varepsilon/(1 - \varepsilon)\big]  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm for} \hspace{0.15cm} y = +1,
\\  {\rm for} \hspace{0.15cm} y = -1. \\ \end{array}</math>

*For example,&nbsp; $\varepsilon = 0.1$&nbsp; results in the following numerical values&nbsp; $($compare the upper table$)$:

::<math>L(y = +1\hspace{0.05cm}\vert\hspace{0.05cm}x) =
{\rm ln} \hspace{0.15cm} \frac{0.9}{0.1}  = +2.197\hspace{0.05cm}, \hspace{0.8cm}
  L(y = -1\hspace{0.05cm}\vert\hspace{0.05cm}x) = -2.197\hspace{0.05cm}.</math>

*This example shows that the so-called&nbsp; &raquo;$\text{LLR algebra}$&laquo;&nbsp; can also be applied to conditional probabilities.&nbsp; In&nbsp; [[Aufgaben:Exercise_4.1Z:_Log_Likelihood_Ratio_at_the_BEC_Model|"Exercise 4.1Z"]],&nbsp; the BEC model is described in a similar way.}}<br>
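
A minimal Python sketch of this conditional&nbsp; $L$&ndash;value&nbsp; $($for the BSC with the falsification probability&nbsp; $\varepsilon$&nbsp; as a parameter$)$:

<pre>
import math

def bsc_llr(y, eps):
    """Conditional LLR  L(y|x) = ln Pr(y|x=+1) / Pr(y|x=-1)  of the BSC."""
    return math.log((1 - eps) / eps) if y == +1 else math.log(eps / (1 - eps))

print(bsc_llr(+1, 0.1), bsc_llr(-1, 0.1))   # +2.197... and -2.197..., as in Example 1
</pre>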
  
{{GraueBox|TEXT=  
[[File:EN_KC_T_4_1_S2c.png|right|frame|Conditional AWGN probability density functions|class=fit]] 
$\text{Example 2:}$&nbsp;
In another example,&nbsp; we consider the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\rm AWGN$]]&nbsp; channel with the conditional probability density functions

::<math>f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=+1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=+1 ) =
\frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ -  {(y-1)^2}/(2\sigma^2) } \hspace{0.05cm},</math>
::<math>f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=-1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=-1 ) =
\frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ -  {(y+1)^2}/(2\sigma^2) } \hspace{0.05cm}.</math>

&rArr; &nbsp; In the graph,&nbsp; two exemplary Gaussian functions are shown as blue and red curves,&nbsp; respectively.
<br clear=all>
*The total probability density function&nbsp; $\rm (PDF)$&nbsp; of the output signal&nbsp; $y$&nbsp; is obtained from the&nbsp; $($equally$)$&nbsp; weighted sum:

::<math>f_{y  } \hspace{0.05cm} (y  ) = 1/2 \cdot \big [ f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=+1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=+1 ) \hspace{0.1cm} +  \hspace{0.1cm} f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=-1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=-1 ) \big ]
\hspace{0.05cm}.</math>

*We now calculate the probability that the received value&nbsp; $y$&nbsp; lies in a&nbsp; $($very$)$&nbsp; narrow interval of width&nbsp; $\it \Delta$&nbsp; around&nbsp; $y_0 = 0.25$.&nbsp; One obtains approximately

::<math>{\rm Pr} (\vert y - y_0\vert  \le{\it  \Delta}/2 \hspace{0.05cm} \big \vert \hspace{0.05cm} x=+1 )
\hspace{0.1cm}  \approx  \hspace{0.1cm}
\frac {\it  \Delta}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ -  {(y_0-1)^2}/(2\sigma^2) } \hspace{0.05cm},</math>
::<math>{\rm Pr} (\vert y - y_0\vert  \le{\it  \Delta}/2 \hspace{0.05cm} \big \vert \hspace{0.05cm} x=-1 )
\hspace{0.1cm}  \approx  \hspace{0.1cm}
\frac {\it  \Delta}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ -  {(y_0+1)^2}/(2\sigma^2) } \hspace{0.05cm}.</math>

:The slightly larger vertical lines denote the conditions,&nbsp; the smaller ones the absolute value.<br>

*The&nbsp; log likelihood ratio&nbsp; $\rm (LLR)$&nbsp; of the conditional probability in the forward direction&nbsp; $($meaning: &nbsp; output&nbsp; $y$&nbsp; for a given input&nbsp; $x)$&nbsp; is thus obtained as the logarithm of the quotient of both expressions:

::<math>L(y = y_0\hspace{0.05cm}\vert\hspace{0.05cm}x) =
{\rm ln} \left [ {\rm e} ^{  \left [{(y_0+1)^2}-{(y_0-1)^2}\right ]/(2\sigma^2)}  \right ] =  \frac{(y_0+1)^2-(y_0-1)^2}{2\cdot \sigma^2} = \frac{2 \cdot y_0}{\sigma^2}\hspace{0.05cm}. </math>

*If we now replace the auxiliary quantity&nbsp; $y_0$&nbsp; with the&nbsp; $($general$)$&nbsp; random variable&nbsp; $y$&nbsp; at the AWGN output,&nbsp; the final result is:

::<math>L(y \hspace{0.05cm}\vert\hspace{0.05cm}x) = {2  \cdot y}/{\sigma^2} =K_{\rm L} \cdot y\hspace{0.05cm}. </math>

:Here,&nbsp; $K_{\rm L} = 2/\sigma^2$&nbsp; is a constant that depends solely on the standard deviation of the Gaussian noise.}}<br>
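
The relation&nbsp; $L(y\hspace{0.05cm}\vert\hspace{0.05cm}x) = 2 \cdot y/\sigma^2$&nbsp; can be checked numerically with a few lines of Python;&nbsp; the noise standard deviation&nbsp; $\sigma = 0.8$&nbsp; used here is only an assumed example value.

<pre>
import math

sigma = 0.8                 # assumed noise standard deviation
K_L = 2.0 / sigma**2        # channel reliability constant

def llr_from_densities(y, sigma):
    """ln of the ratio of the two conditional Gaussian densities f(y|x=+1) / f(y|x=-1)."""
    num = math.exp(-(y - 1.0)**2 / (2 * sigma**2))
    den = math.exp(-(y + 1.0)**2 / (2 * sigma**2))
    return math.log(num / den)

for y in (-1.2, 0.25, 0.9):
    print(f"y = {y:+.2f}:   K_L*y = {K_L*y:+.4f}   density ratio = {llr_from_densities(y, sigma):+.4f}")
</pre>

Both columns of the printout agree,&nbsp; as expected from the derivation above.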
  
== Bit-wise soft-in soft-out decoding ==
<br>
We now assume an&nbsp; $(n, \ k)$&nbsp; block code whose code word&nbsp; $\underline{x} = (x_1, \ \text{...} \ , \ x_n)$&nbsp; is falsified by the channel into the received word&nbsp; $\underline{y} = (y_1, \ \text{...} \ , \ y_n)$.
[[File:EN_KC_T_4_1_S4.png|right|frame|Model of bit-wise soft-in soft-out decoding|class=fit]]

*For long codes,&nbsp; a&nbsp; "maximum a-posteriori decision at block level"&nbsp; $($for short: &nbsp;<b>[[Channel_Coding/Channel_Models_and_Decision_Structures#Criteria_.C2.BBMaximum-a-posteriori.C2.AB_and_.C2.BBMaximum-Likelihood.C2.AB|$\text{block-wise MAP}$]]</b>$)$&nbsp; is very elaborate.

*One would have to find,&nbsp; among the&nbsp; $2^k$&nbsp; allowable code words&nbsp; $\underline{x}_j &#8712; \mathcal{C}$,&nbsp; the one with the greatest a-posteriori probability&nbsp; $\rm (APP)$.

*The code word to be output,&nbsp; $\underline{z}$,&nbsp; would in this case be
:$$\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}j} \hspace{0.03cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}j} |\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.$$
<br clear=all>
A second possibility is decoding at the bit level.&nbsp; The depicted bit-wise&nbsp; $\text{soft-in soft-out decoder}$&nbsp; has the task of decoding all code word bits&nbsp; $x_i &#8712; \{0, \, 1\}$&nbsp; according to the maximum a-posteriori probability&nbsp; ${\rm Pr}(x_i | \underline{y})$.&nbsp; With the control variable&nbsp; $i = 1, \text{...} , \ n$,&nbsp; the following holds:

*The corresponding&nbsp; log likelihood ratio for the&nbsp; $i$-th code bit is:

::<math>L_{\rm APP} (i) = L(x_i\hspace{0.05cm}|\hspace{0.05cm}\underline{y}) =
{\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x_i = 0\hspace{0.05cm}|\hspace{0.05cm}\underline{y})}{{\rm Pr}(x_i = 1\hspace{0.05cm}|\hspace{0.05cm}\underline{y})}\hspace{0.05cm} . </math>

*The decoder works iteratively.&nbsp; At initialization&nbsp; $($indicated in the diagram by the parameter&nbsp; $I = 0)$,&nbsp; $L_{\rm APP}(i) = L_{\rm K}(i)$&nbsp; holds,&nbsp; where the channel&nbsp; $($German:&nbsp; "Kanal" &nbsp; &rArr; &nbsp; "K"$)$&nbsp; log likelihood ratio&nbsp; $L_{\rm K}(i)$&nbsp; is given by the received value&nbsp; $y_i$.<br>

*In addition,&nbsp; the extrinsic&nbsp; log likelihood ratio&nbsp; $L_{\rm E}(i)$&nbsp; is calculated,&nbsp; which quantifies the total information that all other bits&nbsp; $(j &ne; i)$&nbsp; provide about the considered&nbsp; $i$-th bit due to the code properties.<br>

*At the next iteration&nbsp; $($from&nbsp; $I = 1$&nbsp; on$)$,&nbsp; $L_{\rm E}(i)$&nbsp; is taken into account in the computation of&nbsp; $L_{\rm APP}(i)$&nbsp; as a-priori information&nbsp; $L_{\rm A}(i)$.&nbsp; Thus,&nbsp; for the new a-posteriori log likelihood ratio in iteration&nbsp; $I + 1$,&nbsp; the following holds:

::<math>L_{\hspace{0.1cm}\rm APP}^{(I+1)} (i) = L_{\hspace{0.1cm}\rm APP}^{(I)} (i) + L_{\hspace{0.1cm}\rm A}^{(I+1)} (i) = L_{\hspace{0.1cm}\rm APP}^{(I)} (i) + L_{\hspace{0.1cm}\rm E}^{(I)} (i)\hspace{0.05cm} .  </math>

*The iterations continue until all absolute values&nbsp; $|L_{\rm APP}(i)|$&nbsp; are greater than a specified threshold.&nbsp; The most likely code word&nbsp; $\underline{z}$&nbsp; is then obtained from the signs of all&nbsp; $L_{\rm APP}(i)$,&nbsp; with&nbsp; $i = 1, \ \text{...} , \ n$.<br>

*For a&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Systematic_Codes| $\text{systematic code}$]],&nbsp; the first&nbsp; $k$&nbsp; bits of&nbsp; $\underline{z}$&nbsp; indicate the information word being sought,&nbsp; which will most likely match the transmitted message&nbsp; $\underline{u}$.<br><br>

This description of the SISO decoder from&nbsp; [Bos99]<ref name='Bos99'>Bossert, M.:&nbsp; Channel Coding for Telecommunications.&nbsp; Chichester:&nbsp; Wiley,&nbsp; 1999.&nbsp; ISBN:&nbsp; 978-0-471-98277-7.</ref>&nbsp; is intended at this point primarily to clarify the different&nbsp; $L$&ndash;values.&nbsp; One recognizes the large potential of bit-wise decoding only in connection with&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Basic_structure_of_concatenated_coding_systems|$\text{concatenated coding systems}$]].<br>
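
The loop just described can be summarized in a short,&nbsp; generic Python sketch.&nbsp; The function name&nbsp; "extrinsic",&nbsp; the iteration count and the stopping threshold are illustrative assumptions;&nbsp; the code-specific computation of&nbsp; $L_{\rm E}$&nbsp; is deliberately left as a plug-in&nbsp; $($two concrete variants follow in the next section$)$.

<pre>
def siso_decode(L_channel, extrinsic, num_iterations=10, threshold=8.0):
    """Sketch of the bit-wise soft-in soft-out loop:
    start with the channel LLRs and repeatedly add the extrinsic values."""
    L_app = list(L_channel)                              # iteration I = 0:  L_APP(i) = L_K(i)
    for _ in range(num_iterations):
        L_e = extrinsic(L_app)                           # information of all other bits (code-specific)
        L_app = [a + e for a, e in zip(L_app, L_e)]      # L_APP^(I+1) = L_APP^(I) + L_E^(I)
        if all(abs(a) > threshold for a in L_app):       # stop once every bit is reliable enough
            break
    return [+1 if a > 0 else -1 for a in L_app]          # bipolar estimate from the signs
</pre>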
  
== Calculation of extrinsic log likelihood ratios ==
<br>
The difficulty in bit-wise&nbsp; $($or symbol-wise$)$&nbsp; iterative decoding is generally the provision of the extrinsic&nbsp; log likelihood ratios&nbsp; $L_{\rm E}(i)$.&nbsp; For a code of length&nbsp; $n$,&nbsp; the control variable is&nbsp; $i = 1, \ \text{...} , \ n$.<br>

{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp;  The&nbsp; <b>extrinsic log likelihood ratio</b>&nbsp; is a measure of the information that the other symbols&nbsp; $(j &ne; i)$&nbsp; of the code word provide about the&nbsp; $i$-th code symbol,&nbsp; expressed as a log likelihood ratio.&nbsp; We denote this characteristic value by&nbsp; $L_{\rm E}(i)$.}}


We now calculate the extrinsic log likelihood ratios&nbsp; $L_{\rm E}(i)$&nbsp; for two example codes.<br>

$\text{(A) &nbsp;Repetition code}$ &nbsp; &#8658; &nbsp; ${\rm RC} \ (n, 1, n)$<br>

A repetition code&nbsp; $\rm (RC)$&nbsp; is characterized by the fact that all&nbsp; $n$&nbsp; code symbols&nbsp; $x_i &#8712; \{0, \, 1\}$&nbsp; are identical.

*Here,&nbsp; the extrinsic log likelihood ratio for the&nbsp; $i$-th symbol is very easy to specify:

::<math>L_{\rm E}(i) = \hspace{0.05cm}\sum_{j \ne i} \hspace{0.1cm} L_j  
\hspace{0.3cm}{\rm with}\hspace{0.3cm}L_j = L_{\rm APP}(j)
\hspace{0.05cm}.</math>

*If the sum over all&nbsp; $L_{j &ne; i}$&nbsp; is positive,&nbsp; this implies,&nbsp; from the point of view of the other log likelihood ratios,&nbsp; a preference for the decision&nbsp; $x_i = 0$.

*If the sum is negative,&nbsp; on the other hand,&nbsp; $x_i = 1$&nbsp; is more likely.

*$L_{\rm E}(i) = 0$&nbsp; does not allow a prediction.<br>
  
  
 
{{GraueBox|TEXT=   
$\text{Example 5:}$&nbsp; We consider the decoding of the repetition code&nbsp; $\text{RC (4, 1, 4)}$&nbsp; and make three different assumptions for the log likelihood ratio&nbsp; $\underline{L}_{\rm APP}^{(I=0)}=\underline{L}_{\rm K}$.

[[File:EN_KC_T_4_1_S3a.png|right|frame|Decoding example&nbsp; $\rm (5a)$&nbsp; for the&nbsp; ${\rm RC} \ (4, 1, 4)$|class=fit]]
$\text{Decoding example (5a):}$
:$$\underline{L}_{\rm APP}^{(I=0)} = (+1,  -1, +3, -1)\text{:}$$   
::$$L_{\rm E}(1) = -1+3-1 = +1\hspace{0.05cm},$$
::$$L_{\rm E}(2) = +1+3-1 = +3\hspace{0.05cm},$$
::$$L_{\rm E}(3) = +1-1-1 = -1\hspace{0.05cm},$$
::$$L_{\rm E}(4) = +1-1 +3 = +3\hspace{0.05cm}$$
:$$\hspace{0.3cm}\Rightarrow\hspace{0.3cm}\underline{L}_{\rm E}^{(I=0)}= (+1, +3, -1, +3)
\hspace{0.3cm}\Rightarrow\hspace{0.3cm}\underline{L}_{\rm APP}^{(I=1)}=\underline{L}_{\rm APP}^{(I=0)}+ \underline{L}_{\rm E}^{(I=0)}= (+2, +2, +2, +2)$$
*At the beginning&nbsp; $(I=0)$,&nbsp; the positive&nbsp; $L_{\rm E}$&nbsp; values indicate&nbsp; $x_1 = x_2 = x_4 = 0$,&nbsp; while&nbsp; $x_3 =1$&nbsp; is more likely&nbsp; $($because of the negative&nbsp; $L_{\rm E})$.

*Already after one iteration&nbsp; $(I=1)$,&nbsp; all&nbsp; $L_{\rm APP}$&nbsp; values are positive &nbsp; &#8658; &nbsp; the information bit is decoded as&nbsp; $u = 0$.

*Further iterations yield nothing new.<br>


$\text{Decoding example (5b):}$
[[File:EN_KC_T_4_1_S3ab.png|right|frame|Decoding example&nbsp; $\rm (5b)$&nbsp; for the&nbsp; ${\rm RC} \ (4, 1, 4)$]]

:$$\underline{L}_{\rm APP}^{(I=0)} = (+1,  +1, -4, +1)\text{:}$$
:$$\underline{L}_{\rm E}^{(I=0)} = \ (-2,  -2, +3, -2)$$

*Although three signs were wrong at the beginning,&nbsp; after two iterations all&nbsp; $L_{\rm APP}$&nbsp; values are negative.

*The information bit is decoded as&nbsp; $u = 1$.


[[File:EN_KC_T_4_1_S3c.png|right|frame|Decoding example&nbsp; $\rm (5c)$&nbsp; for the&nbsp; ${\rm RC} \ (4, 1, 4)$]]<br>
$\text{Decoding example (5c):}$
:$$\underline{L}_{\rm APP}^{(I=0)} = (+1,  +1, -3, +1)\text{:}$$
:$$\underline{L}_{\rm E}^{(I=0)} = (-1,  -1, +3, -1)$$

*All&nbsp; $L_{\rm APP}$&nbsp; values are zero already after one iteration.

*The information bit cannot be decoded,&nbsp; although the initial situation was not much worse than in&nbsp; $\rm (5b)$.

*Further iterations bring nothing here.<br>}}
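
The three decoding examples can be reproduced by plugging the sum rule of the repetition code into the iteration loop described above.&nbsp; The following self-contained Python sketch prints the&nbsp; $\underline{L}_{\rm APP}$&nbsp; vectors of the first iterations;&nbsp; the number of iterations shown&nbsp; $(3)$&nbsp; is an arbitrary choice.

<pre>
def rc_extrinsic(L):
    """Extrinsic L-values of the repetition code:  L_E(i) = sum of all other L_APP(j)."""
    total = sum(L)
    return [total - Li for Li in L]

def decode_rc(L_channel, num_iterations=3):
    L_app = list(L_channel)                              # iteration I = 0
    for I in range(num_iterations):
        L_e = rc_extrinsic(L_app)
        L_app = [a + e for a, e in zip(L_app, L_e)]      # L_APP^(I+1) = L_APP^(I) + L_E^(I)
        print(f"I = {I+1}:  L_APP = {L_app}")

decode_rc([+1, -1, +3, -1])   # (5a): all values become positive  ->  u = 0
decode_rc([+1, +1, -4, +1])   # (5b): all values become negative  ->  u = 1
decode_rc([+1, +1, -3, +1])   # (5c): all values become zero      ->  no decision possible
</pre>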
  
  
  
$\text{(B) &nbsp; Single parity&ndash;check code}$ &nbsp; &#8658; &nbsp; ${\rm SPC} \ (n, \ n \, -1, \ 2)$
<br>

For each single parity&ndash;check code,&nbsp; the number of&nbsp; "ones"&nbsp; in each code word is even.&nbsp;  Or,&nbsp; put another way: &nbsp; For each code word&nbsp; $\underline{x}$,&nbsp; the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming weight}$]]&nbsp; $w_{\rm H}(\underline{x})$&nbsp; is even.<br>

{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; Let the code word&nbsp; $\underline{x}^{(&ndash;i)}$&nbsp; contain all symbols except&nbsp; $x_i$ &nbsp; &#8658; &nbsp; a vector of length&nbsp; $n-1$.&nbsp; Then the&nbsp; "extrinsic log likelihood ratio"&nbsp; with respect to the&nbsp; $i$-th symbol,&nbsp; when&nbsp; $\underline{y}$&nbsp; has been received,&nbsp; is:

::<math>L_{\rm E}(i) = {\rm ln} \hspace{0.15cm}\frac{ {\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} even} \hspace{0.05cm} \vert\hspace{0.05cm}\underline{y} \hspace{0.05cm}\right ]}{ {\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} odd} \hspace{0.05cm} \vert \hspace{0.05cm}\underline{y} \hspace{0.05cm}\right ]}
  \hspace{0.05cm}.</math>

*As will be shown in&nbsp; [[Aufgaben:Aufgabe_4.4:_Extrinsische_L–Werte_beim_SPC|"Exercise 4.4"]],&nbsp; this can also be written as:

::<math>L_{\rm E}(i) = 2 \cdot {\rm tanh}^{-1} \hspace{0.1cm} \left [  \prod\limits_{j \ne i}^{n} \hspace{0.15cm}{\rm tanh}(L_j/2) \right ]
\hspace{0.3cm}{\rm with}\hspace{0.3cm}L_j = L_{\rm APP}(j)
  \hspace{0.05cm}.</math>}}
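
A direct transcription of this formula into Python&nbsp; $($the input vector below is an assumed example for an&nbsp; ${\rm SPC} \ (4, 3, 2)$,&nbsp; not taken from the following table$)$:

<pre>
import math

def spc_extrinsic(L):
    """Extrinsic L-values of a single parity-check code:
    L_E(i) = 2 * artanh( product over j != i of tanh(L_j / 2) )."""
    t = [math.tanh(Lj / 2.0) for Lj in L]
    L_e = []
    for i in range(len(L)):
        prod = 1.0
        for j, tj in enumerate(t):
            if j != i:
                prod *= tj
        L_e.append(2.0 * math.atanh(prod))
    return L_e

print(spc_extrinsic([+1.5, -0.5, +1.0, -2.0]))   # assumed a-posteriori L-values
</pre>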
  
  
 
{{GraueBox|TEXT=   
$\text{Example 6:}$&nbsp; We assume a single parity&ndash;check code with&nbsp; $n = 3, \ k = 2$ &nbsp; &#8658; &nbsp; in short,&nbsp; the&nbsp; ${\rm SPC} \ (3, \ 2, \ 2)$.

*The&nbsp; $2^k = 4$&nbsp; valid code words&nbsp; $\underline{x} = \{x_1,\ x_2,\ x_3\}$&nbsp; are,&nbsp; in bipolar description &nbsp; &#8658; &nbsp; $x_i &#8712; \{&plusmn;1\}$:
[[File:EN_KC_T_4_1_S3e.png|right|frame|Decoding example for the&nbsp; ${\rm SPC} \ (3, 2, 2)$|class=fit]]

:$$ \underline{x}_0 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm}+1\hspace{-0.03cm},\hspace{-0.05cm}+1)\hspace{-0.05cm}, $$
:$$ \underline{x}_1 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm}-1\hspace{-0.03cm},\hspace{-0.05cm}-1)\hspace{-0.05cm}, $$
:$$ \underline{x}_2 \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm}+1\hspace{-0.03cm},\hspace{-0.05cm}-1)\hspace{-0.05cm}, $$
:$$\underline{x}_3  \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm} -1\hspace{-0.03cm},\hspace{-0.05cm} +1)\hspace{-0.05cm}. $$

:For this code,&nbsp; the product&nbsp; $x_1 \cdot x_2 \cdot x_3$&nbsp; is therefore always positive.<br>

*The table shows the decoding process for&nbsp; $\underline{L}_{\rm APP} = (+2.0, +0.4, \, &ndash;1.6)$.  

*The hard decision according to the signs of&nbsp; $L_{\rm APP}(i)$&nbsp; would yield here&nbsp; $(+1, +1, \, -1)$ &nbsp; &rArr; &nbsp; not a valid code word of the&nbsp; ${\rm SPC} \ (3, \ 2, \ 2)$.<br>

*On the right side of the table,&nbsp; the corresponding extrinsic log likelihood ratios are entered:<br>
  
 
::<math>L_{\rm E}(1) = 2 \cdot {\rm tanh}^{-1} \hspace{0.05cm}
  \left [  {\rm tanh} (0.2) \cdot {\rm tanh} (-0.8)\hspace{0.05cm}\right ] \hspace{0.05cm},</math>
::<math>L_{\rm E}(2) = 2 \cdot {\rm tanh}^{-1} \hspace{0.05cm}
  \left [  {\rm tanh} (1.0) \cdot {\rm tanh} (-0.8)\hspace{0.05cm}\right ] \hspace{0.05cm},</math>
::<math>L_{\rm E}(3) = 2 \cdot {\rm tanh}^{-1} \hspace{0.05cm}
  \left [  {\rm tanh} (1.0) \cdot {\rm tanh} (0.2)\hspace{0.05cm}\right ] = +0.151\hspace{0.05cm}.</math>
  
The second equation can be interpreted as follows:
# $L_{\rm APP}(1) = +2.0$&nbsp; and&nbsp; $L_{\rm APP}(3) = \, -1.6$&nbsp; state that the first bit is more likely to be&nbsp; "$+1$"&nbsp; than&nbsp; "$-1$",&nbsp; and that the third bit is more likely to be&nbsp; "$-1$"&nbsp; than&nbsp; "$+1$".
#The reliability&nbsp; $($the absolute value$)$&nbsp; is slightly greater for the first bit than for the third.<br>
#The extrinsic information&nbsp; $L_{\rm E}(2) = \, -0.518$&nbsp; considers only the information of bit 1 and bit 3 about bit 2.
#From their point of view,&nbsp; the second bit is&nbsp; "$-1$"&nbsp; with reliability&nbsp; $0.518$.<br>
#The log likelihood ratio&nbsp; $L_{\rm APP}^{(I=0)}(2) = +0.4$&nbsp; derived from the received value&nbsp; $y_2$&nbsp; had suggested&nbsp; "$+1$"&nbsp; for the second bit.
#But this discrepancy is already resolved in iteration&nbsp; $I = 1$.&nbsp; The decision here falls on the code word&nbsp; $\underline{x}_1  \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm} -1\hspace{-0.03cm},\hspace{-0.05cm} -1)$.
#For&nbsp; $0.518 < L_{\rm APP}(2) < 1.6$,&nbsp; the result&nbsp; $\underline{x}_1$&nbsp; would only be available after several iterations.&nbsp; For&nbsp; $L_{\rm APP}(2) > 1.6$,&nbsp; on the other hand,&nbsp; the decoder returns&nbsp; $\underline{x}_0$.}}<br>
  
*Die extrinsische Information&nbsp; $L_{\rm E}(2) = \, -0.518$ berücksichtigt nur die Informationen von Bit 1 und Bit 3 über Bit 2. Aus deren Sicht ist das zweite Bit eine &nbsp;$-1$&nbsp; mit der Zuverlässigkeit&nbsp; $0.518$.<br>
+
== BCJR decoding: Forward-backward algorithm ==
 +
<br>
 +
An example of iterative decoding of convolutional codes is the&nbsp; &raquo;<b>BCJR algorithm</b>&laquo;,&nbsp; named after its inventors L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv &nbsp; &#8658; &nbsp; [BCJR74]<ref name='BCJR74'>Bahl, L.R.; Cocke, J.; Jelinek, F.; Raviv, J.:&nbsp; Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate.&nbsp; In: IEEE Transactions on Information Theory, Vol. IT-20, pp. 284-287, 1974.</ref>.&nbsp; The algorithm has many parallels to the seven-year older Viterbi decoding,&nbsp; but also some significant differences:
 +
*While Viterbi estimates the total sequence &nbsp; &#8658; &nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Definitions_of_the_different_optimal_receivers|$\text{block&ndash;wise maximum likelihood}$]],&nbsp; the BCJR&ndash; Algorithm minimizes the bit error probability &nbsp; &#8658; &nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Definitions_of_the_different_optimal_receivers|$\text{bit&ndash;wise MAP}$]].<br>
  
*Der vom Empfangswert&nbsp; $y_2$&nbsp; abgeleitete&nbsp; $L$&ndash;Wert &nbsp; &#8658; &nbsp; $L_{\rm APP}(2) = +0.4$&nbsp; hat für das zweite Bit eine &nbsp;$+1$&nbsp; vermuten lassen. Die Diskrepanz wird hier bereits in der Iteration&nbsp; $I = 1$&nbsp; aufgelöst.<br>
+
*The Viterbi algorithm&nbsp; $($in its original form$)$&nbsp; cannot process soft information.&nbsp; In contrast,&nbsp; the BCJR algorithm specifies a reliability value for each individual bit at each iteration,&nbsp; which is taken into account in subsequent iterations.<br>
  
*Entschieden wird hier für das Codewort&nbsp; $\underline{x}_1$. Bei&nbsp; $0.518 < L_{\rm APP}(2) < 1.6$&nbsp; würde das Ergebnis&nbsp; $\underline{x}_1$&nbsp; erst nach mehreren Iterationen vorliegen. Für&nbsp; $L_{\rm APP}(2) > 1.6$&nbsp; liefert der Decoder dagegen&nbsp; $\underline{x}_0$.}}<br>
 
  
== BCJR decoding: Forward-backward algorithm ==
+
The figure is intended to illustrate&nbsp; $($in an almost inadmissibly simplified way$)$&nbsp; the different approaches of the Viterbi and the BCJR algorithm.&nbsp; This is based on a convolutional code with memory&nbsp; $m = 1$&nbsp; and length&nbsp; $L = 4$ &nbsp; &#8658; &nbsp; total length $($with termination$)$&nbsp; $L' = 5$.
<br>
+
[[File:EN_KC_T_4_1_S5.png|right|frame|Comparison of Viterbi and BCJR algorithm|class=fit]]
An example of the iterative decoding of convolutional codes is the&nbsp; <i>BCJR algorithm</i>, named after its inventors L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv &nbsp;&#8658;&nbsp; [BCJR74]<ref name='BCJR74'>Bahl, L.R.; Cocke, J.; Jelinek, F.; Raviv, J.: ''Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate.'' In: IEEE Transactions on Information Theory, Vol. IT-20, pp. 284-287, 1974.</ref>. The algorithm has many parallels to the Viterbi decoding, which is seven years older, but also some significant differences:
 
*While Viterbi estimates the entire sequence &nbsp; &#8658; &nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Definitionen_der_verschiedenen_Optimalempf.C3.A4nger| $\text{block&ndash;wise maximum likelihood}$]], the BCJR algorithm minimizes the bit error probability &nbsp; &#8658; &nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Definitionen_der_verschiedenen_Optimalempf.C3.A4nger|$\text{bit&ndash;wise MAP}$]].<br>
 
  
*The Viterbi algorithm (in its original form) cannot process soft information. In contrast, the BCJR algorithm specifies a reliability value for each individual symbol (bit) at each iteration, which is taken into account in later iterations.<br>
+
*Viterbi searches and finds the most likely path from&nbsp; ${\it \Gamma}_0(S_0)$&nbsp; to&nbsp; ${\it \Gamma}_5(S_0)$, namely&nbsp; $S_0 &#8594; S_1 &#8594; S_0 &#8594; S_0 &#8594; S_1&#8594; S_0 $. We refer to the sample solution to&nbsp; [[Aufgaben:Exercise_3.09Z:_Viterbi_Algorithm_again|"Exercise 3.9Z"]].<br>
  
 +
*The sketch for the BCJR algorithm illustrates the extraction of the extrinsic&nbsp; L&ndash;value for the third symbol &nbsp; &#8658; &nbsp; $L_{\rm E}(3)$.&nbsp; The relevant area in the trellis is shaded:
  
[[File:P ID3024 KC T 4 1 S5 v2.png|center|frame|Comparison of Viterbi and BCJR algorithms|class=fit]]
+
* Working through the trellis diagram in the forward direction,&nbsp; one obtains the metrics&nbsp; $\alpha_1, \ \alpha_2, \ \text{...}\hspace{0.05cm} , \ \alpha_5$&nbsp; in the same way as with Viterbi.&nbsp; To calculate&nbsp; $L_{\rm E}(3)$,&nbsp; one needs&nbsp; $\alpha_2$&nbsp; from these.<br>
  
The figure is intended &ndash; in an almost inadmissibly simplified way &ndash; to illustrate the different approaches of the Viterbi algorithm (left) and the BCJR algorithm (right). This is based on a convolutional code with memory&nbsp; $m = 1$&nbsp; and length&nbsp; $L = 4$ &nbsp; &#8658; &nbsp; total length (with termination)&nbsp; $L' = 5$.
+
* Then we traverse the trellis diagram backwards&nbsp; $($i.e. from right to left$)$&nbsp; and thus obtain the metrics&nbsp; $\beta_4, \ \beta_3, \ \text{...}\hspace{0.05cm} , \ \beta_0$&nbsp; corresponding to the sketch below.<br>
*The Viterbi algorithm searches for and finds the most likely path from&nbsp; ${\it \Gamma}_0(S_0)$&nbsp; to&nbsp; ${\it \Gamma}_5(S_0)$, namely&nbsp; $S_0 &#8594; S_1 &#8594; S_0 &#8594; S_0 &#8594; S_1 &#8594; S_0 $. We refer to the sample solution of&nbsp; [[Aufgaben:Aufgabe_3.09Z:_Nochmals_Viterbi–Algorithmus|Exercise 3.9Z]].<br><br>
 
  
The sketch for the BCJR algorithm illustrates the extraction of the extrinsic&nbsp; $L$&ndash;value for the third symbol &nbsp; &#8658; &nbsp; $L_{\rm E}(3)$. The relevant area in the trellis is shaded:
+
* The extrinsic log likelihood ratio&nbsp; $L_{\rm E}(3)$&nbsp; is obtained from the metric&nbsp; $\alpha_2$&nbsp; of the forward direction,&nbsp; the metric&nbsp; $\beta_3$&nbsp; of the backward direction,&nbsp; and the a-priori information&nbsp; $\gamma_3$&nbsp; about the symbol&nbsp; $i = 3$.<br>
* When working through the trellis diagram in the forward direction, one obtains &ndash; in the same way as with Viterbi &ndash; the metrics&nbsp; $\alpha_1, \ \alpha_2, \ \text{...}\hspace{0.05cm} , \ \alpha_5$. To calculate&nbsp; $L_{\rm E}(3)$,&nbsp; one needs&nbsp; $\alpha_2$&nbsp; from these.<br>
 
  
* Subsequently, one traverses the trellis diagram backwards (i.e. from right to left) and thus obtains the metrics&nbsp; $\beta_4, \ \beta_3, \ \text{...}\hspace{0.05cm} , \ \beta_0$&nbsp; corresponding to the lower sketch.<br>
+
== Basic structure of concatenated coding systems ==
 
 
* The desired extrinsic&nbsp; $L$&ndash;value&nbsp; $L_{\rm E}(3)$&nbsp; is obtained from the metrics&nbsp; $\alpha_2$&nbsp; (in the forward direction) and&nbsp; $\beta_3$&nbsp; (in the backward direction) as well as the a-priori information&nbsp; $\gamma_3$&nbsp; about the symbol&nbsp; $i = 3$.<br>
 
 
 
== Basic structure of concatenated coding systems ==
 
 
<br>
 
<br>
The most important communication systems of recent years use two different channel codes. One then speaks of&nbsp; '''concatenated coding systems'''&nbsp; ("concatenated codes"). Even with relatively short component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$,&nbsp; the concatenated code&nbsp; $\mathcal{C}$&nbsp; results in a sufficiently large code word length&nbsp; $n$, which is known to be necessary to approach the channel capacity.<br>
+
The most important communication systems of recent years use two different channel codes.&nbsp; One then speaks of&nbsp; &raquo;'''concatenated codes'''&laquo;.&nbsp; Even with relatively short component codes&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$&nbsp; the concatenated code&nbsp; $\mathcal{C}$&nbsp; results in a sufficiently large code word length&nbsp; $n$,&nbsp; which is known to be necessary to approach the channel capacity.<br>
 
 
First, some examples from mobile communications shall be mentioned:
 
*In&nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM#.23_.C3.9CBERBLICK_ZUM_DRITTEN_HAUPTKAPITEL_.23|GSM]]&nbsp; (<i>Global System for Mobile Communications</i>, second generation of mobile communications), the data bit rate is first increased from&nbsp; $9.6 \ \rm kbit/s$&nbsp; to&nbsp; $12 \ \rm kbit/s$&nbsp; in order to also enable error detection in circuit-switched networks. This is followed by a punctured convolutional code with the output bit rate&nbsp; $22.8 \ \rm kbit/s$. The total code rate is thus about&nbsp; $42.1\%$.
 
  
*In the 3G mobile communication system&nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS#.23_.C3.9CBERBLICK_ZUM_VIERTEN_HAUPTKAPITEL_.23|UMTS]]&nbsp; (<i>Universal Mobile Telecommunications System</i>), depending on the boundary conditions (good/bad channel, few/many subscribers in the cell), one uses a&nbsp; [[Channel_Coding/Grundlagen_der_Faltungscodierung#.23_.C3.9CBERBLICK_ZUM_DRITTEN_HAUPTKAPITEL_.23|convolutional code]]&nbsp; or a&nbsp; [[Channel_Coding/Grundlegendes_zu_den_Turbocodes#Grundstruktur_eines_Turbocodes|turbo code]]&nbsp; (by which one means per se the concatenation of two convolutional encoders). In the 4G mobile communication system&nbsp; [[Mobile_Communications/Allgemeines_zum_Mobilfunkstandard_LTE#.23_.C3.9CBERSICHT_ZUM_VIERTEN_HAUPTKAPITEL_.23|LTE]]&nbsp; (<i>Long Term Evolution</i>), a convolutional code is used for short control signals and a turbo code for the longer payload data.<br>
+
To begin with,&nbsp; here are a few examples from mobile communications:
 +
*In&nbsp; [[Examples_of_Communication_Systems/General_Description_of_GSM#.23_OVERVIEW_OF_THE_THIRD_MAIN_CHAPTER_.23|$\rm GSM$]]&nbsp; $($"Global System for Mobile Communications",&nbsp; the second generation of mobile communication systems$)$,&nbsp; the data bit rate is first increased from&nbsp; $9.6 \ \rm kbit/s$&nbsp; to&nbsp; $12 \ \rm kbit/s$&nbsp; in order to enable error detection in circuit-switched networks as well.&nbsp; This is followed by a punctured convolutional code with the output bit rate&nbsp; $22.8 \ \rm kbit/s$.&nbsp; The total code rate is thus&nbsp; $9.6/22.8 \approx 0.421$.
  
 +
*In the 3G cellular system&nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS#.23_OVERVIEW_OF_THE_FOURTH_MAIN_CHAPTER_.23|$\rm UMTS$]]&nbsp; $($"Universal Mobile Telecommunications System"$)$,&nbsp; depending on the boundary conditions&nbsp; $($good/bad channel,&nbsp; few/many subscribers in the cell$)$,&nbsp; one uses a&nbsp; [[Examples_of_Communication_Systems/General_Description_of_GSM#.23_OVERVIEW_OF_THE_THIRD_MAIN_CHAPTER_.23|$\text{convolutional code}$]]&nbsp; or a&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Basic_structure_of_a_turbo_code|$\text{turbo code}$]]&nbsp; $($by which one means per se the concatenation of two convolutional encoders$)$.
  
[[File:P ID2998 KC T 4 1 S6 v1.png|center|frame|Parallel concatenated coding system|class=fit]]
+
*In the 4G mobile communication system&nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS#.23_OVERVIEW_OF_THE_FOURTH_MAIN_CHAPTER_.23|$\rm LTE$]]&nbsp; $($"Long Term Evolution"$)$,&nbsp; a convolutional code is used for short control signals and a turbo code for the longer payload data.<br>
  
The graphic shows the basic structure of a parallel concatenated coding system. All vectors consist of&nbsp; $n$&nbsp; elements:&nbsp; $\underline{L} = (L(1), \ \text{...}\hspace{0.05cm} , \ L(n))$. The calculation of all&nbsp; $L$&ndash;values is thus done at the symbol level. Not shown here is the&nbsp; [[Channel_Coding/Grundlegendes_zu_den_Turbocodes#Zweite_Voraussetzung_f.C3.BCr_Turbocodes:_Interleaving|interleaver]], which is mandatory for turbo codes, for example.
 
*The code sequences&nbsp; $\underline{x}_1$&nbsp; and&nbsp; $\underline{x}_2$&nbsp; are combined by a multiplexer into the vector&nbsp; $\underline{x}$&nbsp; for joint transmission over the channel. At the receiver, the sequence&nbsp; $\underline{y}$&nbsp; is split up again into the parts&nbsp; $\underline{y}_1$&nbsp; and&nbsp; $\underline{y}_2$. From these, the channel&nbsp; $L$&ndash;values&nbsp; $\underline{L}_{\rm K,\hspace{0.05cm}1}$&nbsp; and&nbsp; $\underline{L}_{\rm K,\hspace{0.05cm}2}$&nbsp; are formed.<br>
 
  
*The symbol-wise decoder determines, according to the&nbsp; [[Channel_Coding/Soft%E2%80%93in_Soft%E2%80%93out_Decoder#Zur_Berechnung_der_extrinsischen_L.E2.80.93Werte|procedure]]&nbsp; described above, the extrinsic&nbsp; $L$&ndash;values&nbsp; $\underline{L}_{\rm E,\hspace{0.05cm} 1}$&nbsp; and&nbsp; $\underline{L}_{\rm E,\hspace{0.05cm} 2}$, which at the same time represent the a-priori information&nbsp; $\underline{L}_{\rm A,\hspace{0.05cm} 2}$&nbsp; and&nbsp; $\underline{L}_{\rm A,\hspace{0.05cm} 1}$&nbsp; for the respective other decoder.<br>
 
  
*After a sufficient number of iterations (i.e. when a termination criterion is met), the vector of a-posteriori values &nbsp; &#8658; &nbsp; $\underline{L}_{\rm APP}$&nbsp; is available at the decoder output. In the example, the value in the upper branch is taken arbitrarily. However, the lower&nbsp; $L$&ndash;value would also be possible.<br><br>
+
The graph shows the basic structure of a parallel concatenated coding system.&nbsp; All vectors consist of&nbsp; $n$&nbsp; elements:&nbsp;
 +
:$$\underline{L} = (L(1), \ \text{...}\hspace{0.05cm} , \ L(n)).$$
 +
[[File:EN_KC_T_4_1_S6.png|right|frame|Parallel concatenated codes|class=fit]]
 +
So the calculation of all&nbsp; L&ndash;values is done at the symbol level.&nbsp; Not shown here is the&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Second_requirement_for_turbo_codes:_Interleaving| $\rm interleaver$]],&nbsp; which is mandatory e.g. for turbo codes.
 +
# The sequences&nbsp; $\underline{x}_1$&nbsp; and&nbsp; $\underline{x}_2$&nbsp; are combined by a multiplexer to form the vector&nbsp; $\underline{x}$&nbsp; for joint transmission over the channel.
 +
# At the receiver, the sequence&nbsp; $\underline{y}$&nbsp; is again decomposed into the parts&nbsp; $\underline{y}_1$&nbsp; and&nbsp; $\underline{y}_2$.&nbsp;  From this the channel L&ndash;values&nbsp; $\underline{L}_{\rm K,\hspace{0.05cm}1}$&nbsp; and&nbsp; $\underline{L}_{\rm K,\hspace{0.05cm}2}$&nbsp; are formed.<br>
 +
# The symbol-wise decoder determines the extrinsic log likelihood ratios &nbsp; $\underline{L}_{\rm E,\hspace{0.05cm} 1}$&nbsp; and&nbsp; $\underline{L}_{\rm E,\hspace{0.05cm} 2}$, which are also the a-priori information&nbsp; $\underline{L}_{\rm A,\hspace{0.05cm} 2}$&nbsp; and&nbsp; $\underline{L}_{\rm A,\hspace{0.05cm} 1}$&nbsp; for the other decoder. <br>
 +
# After a sufficient number of iterations&nbsp; $($i.e. when a termination criterion is met$)$,&nbsp; the vector&nbsp; $\underline{L}_{\rm APP}$&nbsp; of a-posteriori values is available at the decoder output.
 +
# In the example,&nbsp;  the value in the upper branch is taken arbitrarily.&nbsp;
 +
# However,&nbsp; the lower log likelihood ratio would also be possible.<br><br>
  
The above model also applies in particular to the decoding of turbo codes according to the chapter&nbsp; [[Channel_Coding/Grundlegendes_zu_den_Turbocodes#Grundstruktur_eines_Turbocodes|Basics of the turbo codes]].<br>
+
The above model also applies in particular to the decoding of turbo codes according to the chapter&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Basic_structure_of_a_turbo_code|"Basic structure of a turbo code"]].<br>
  
== Exercises for the chapter ==
+
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:Aufgabe_4.1:_Zum_„Log_Likelihood_Ratio”|Exercise 4.1: On the "Log Likelihood Ratio"]]
+
[[Aufgaben:Exercise_4.1:_Log_Likelihood_Ratio|Exercise 4.1: Log Likelihood Ratio]]
  
[[Aufgaben:Aufgabe_4.1Z:_L–Werte_des_BEC–Modells|Exercise 4.1Z: L-values of the BEC Model]]
+
[[Aufgaben:Exercise_4.1Z:_Log_Likelihood_Ratio_at_the_BEC_Model|Exercise 4.1Z: Log Likelihood Ratio at the BEC Model]]
  
[[Aufgaben:Aufgabe_4.2:_Kanal–LLR_bei_AWGN|Exercise 4.2: Channel LLR at AWGN]]
+
[[Aufgaben:Exercise_4.2:_Channel_Log_Likelihood_Ratio_at_AWGN|Exercise 4.2: Channel Log Likelihood Ratio at AWGN]]
  
[[Aufgaben:Aufgabe_4.3:_Iterative_Decodierung_beim_BSC|Exercise 4.3: Iterative Decoding at the BSC]]
+
[[Aufgaben:Exercise_4.3:_Iterative_Decoding_at_the_BSC|Exercise 4.3: Iterative Decoding at the BSC]]
  
[[Aufgaben:Aufgabe_4.3Z:_Umrechnungen_von_L–Wert_und_S–Wert|Exercise 4.3Z: Conversions of L-value and S-value]]
+
[[Aufgaben:Exercise_4.3Z:_Conversions_of_L-value_and_S-value|Exercise 4.3Z: Conversions of L-value and S-value]]
  
[[Aufgaben:Aufgabe_4.4:_Extrinsische_L–Werte_beim_SPC|Exercise 4.4: Extrinsic L-values at SPC]]
+
[[Aufgaben:Exercise_4.4:_Extrinsic_L-values_at_SPC|Exercise 4.4: Extrinsic L-values at SPC]]
  
[[Aufgaben:Aufgabe_4.4Z:_Ergänzung_zur_Aufgabe_4.4|Exercise 4.4Z: Supplement to Exercise 4.4]]
+
[[Aufgaben:Exercise_4.4Z:_Supplement_to_Exercise_4.4|Exercise 4.4Z: Supplement to Exercise 4.4]]
  
[[Aufgaben:Aufgabe_4.5:_Nochmals_zu_den_extrinsischen_L–Werten|Exercise 4.5: On the Extrinsic L-values again]]
+
[[Aufgaben:Exercise_4.5:_On_the_Extrinsic_L-values_again|Exercise 4.5: On the Extrinsic L-values again]]
  
[[Aufgaben:Aufgabe_4.5Z:_Tangens_Hyperbolikus_und_Inverse|Exercise 4.5Z: Hyperbolic Tangent and Inverse]]
+
[[Aufgaben:Exercise_4.5Z:_Tangent_Hyperbolic_and_Inverse|Exercise 4.5Z: Tangent Hyperbolic and Inverse]]
  
==References==
+
==References==
 
<references/>
 
<references/>
  
 
{{Display}}
 
{{Display}}

Latest revision as of 17:21, 12 January 2023

# OVERVIEW OF THE FOURTH MAIN CHAPTER #


The last main chapter of the channel coding book describes  »iterative decoding techniques«  as used in most of today's  (2017)  communication systems.  This is due to the following reasons:

  • To approach the channel capacity,  one needs very long codes.
  • But for long codes,  "blockwise maximum likelihood decoding"  is almost impossible.


The decoder complexity can be significantly reduced with almost the same quality if two  $($or more$)$  short channel codes are linked together and the newly acquired  $($soft$)$  information is exchanged between the decoders at the receiver in several steps   ⇒   "iteratively".

The breakthrough in the field came in the early 1990s with the invention of the  "turbo codes"  by  $\text{Claude Berrou}$  and shortly thereafter with the rediscovery of  "low-density parity-check codes"  by  $\text{David J. C. MacKay}$  and  $\text{Radford M. Neal}$,  after the LDPC codes developed as early as 1961 by  $\text{Robert G. Gallager}$  had been forgotten in the meantime.

Specifically,  the fourth main chapter discusses:

  • a comparison of  »hard decision«  and  »soft decision«,
  • the quantification of  »reliability information«  by  »log likelihood ratios«,
  • the principle of symbol-wise  »soft-in soft-out  (SISO)«  decoding,
  • the definition of  »a-priori  L–value«,  »a-posteriori  L–value«  and  »extrinsic  L–value«,
  • the basic structure of  »serially concatenated«  resp.  »parallel concatenated«  coding systems,
  • the characteristics of  »product codes«  and their  »hard decision decoding«,
  • the basic structure,  decoding algorithm,  and performance of  »turbo codes«,
  • basic information on the  »low-density parity-check codes«  and their applications.



Hard Decision vs. Soft Decision


To introduce the topic discussed here, let us consider the following digital transmission system with coding.

  • In the following,  all symbols are given in bipolar representation:
        "$0$"  →  "$+1$",  and 
        "$1$"  →  "$-1$".

Considered digital transmission system with coding

Comparison of  "hard decision"  and  "soft decision";   for all columns of this table it is assumed:
(1)   the information block   $\underline{u} = (0,\ 1)$,  bipolar representable as  $(+1, \, -1)$,
(2)   the encoded block  $\underline{x} = (0,\ 1,\ 1)$   ⇒   in bipolar representation:  $(+1, \, -1, \, -1)$.
  • The symbol sequence  $\underline{u} = (u_1, \ u_2)$  of the digital source is assigned to the encoded sequence 
$$\underline{x} = (x_1, \ x_2, \ x_3) = (u_1, \ u_2, \ p)$$
where for the parity bit   $p = u_1 ⊕ u_2$   holds   ⇒   $\text{ SPC (3, 2, 2)}$.
  • The  $\text{AWGN channel}$  changes the symbols  $x_i ∈ \{+1, \ –1\}$  to real-valued output values  $y_i$,  for example according to  $\text{channel 4}$  in the table below:  
    • $x_1 = +1$   ⇒   $y_1 = +0.9,$
    • $x_2 = -1$   ⇒   $y_2 = +0.1,$
    • $x_3 = -1$   ⇒   $y_3 = +0.1.$
  • Decoding is done according to the criterion  $\text{»block-wise maximum-likelihood«}$  $\text{(ML)}$,  distinguishing between 
    • "hard decision"   $\rm {(ML–HD)}$,  and 
    • "soft decision"   $\rm {(ML–SD)}$ .
  • The given block diagram corresponds to  $\rm ML–HD$.  Here,  only the signs of the AWGN output values   ⇒   $y_{\rm HD, \ \it i} = {\rm sign}\left [y_{\rm SD, \ \it i}\right ]$  are evaluated for decoding.
  • With  "soft decision"  one omits the shaded block and directly evaluates the continuous-valued input variables  $y_{\rm SD, \ \it i}$.


Thus,  the four columns in the table differ only by different AWGN realizations.

$\text{Definitions:}$  From the example table you can see:

  • $\text{Hard Decision:}$   The sink symbol sequence  $\underline{v}_{\rm HD}$  results from the  "hard decided channel values"  $\underline{y}_{\rm HD}$  $($blue background$)$.
        In our example,  only the constellations according to  $\text{channel 1}$  and  $\text{channel 2}$  are decoded without errors.
  • $\text{Soft Decision:}$   The sink symbol sequence  $\underline{v}_{\rm SD}$  results from the  "soft channel output values"  $\underline{y}_{\rm SD}$  $($green background$)$.
        Now,  in this example,  it is also correctly decided at  $\text{channel 3}$. 


The entries in the example table above are to be interpreted as follows:

  1. For ideal channel   ⇒   $\text{channel 1}$   ⇒   $\underline{x} = \underline{y}_{\rm SD} = \underline{y}_{\rm HD}$  there is no difference between the (blue) conventional hard decision variant  $\rm {(ML–HD)}$  and the (green) soft decision variant  $\rm {(ML–SD)}$.
  2. The setting according to  $\text{channel 2}$  demonstrates low signal distortions.  Because of  $\underline{y}_{\rm HD} = \underline{x}$  $($which means that the channel does not distort the signs$)$  also  $\rm ML–HD$  gives the correct result  $\underline{v}_{\rm HD} = \underline{u}$.
  3. At  $\text{channel 3}$  there is  $\underline{y}_{\rm HD} ≠ \underline{x}$,  and there is also no  ${\rm SPC} \ (3, 2)$  assignment  $\underline{u}$   ⇒   $\underline{y}_{\rm HD}$,  i.e.  $\underline{y}_{\rm HD}$  is not a valid code word.  The maximum likelihood decoder reports here,  by outputting  $\underline{v}_{\rm HD} = \rm (E, \ E)$,  that it failed in decoding this block.  "$\rm E$"  stands for an  $\text{Erasure}$.
  4. Also the  soft decision decoder recognizes that decoding based on the signs does not work.  Based on the  $\underline{y}_{\rm SD}$  values,  however,  it recognizes that with high probability the second bit has been falsified and decides to use the correct symbol sequence  $\underline{v}_{\rm SD} = (+1, \, -1) = \underline{u}$.
  5. In  $\text{channel 4}$,  the signs of both bit 2 and bit 3 are changed by the AWGN channel,  leading to the result   $\underline{v}_{\rm HD} = (+1, +1) ≠ \underline{u} = (+1, \, -1)$   ⇒   a block error and a bit error at the same time.  The soft decision decoder also arrives at the same wrong result here,  as the short sketch below illustrates.
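
The channel 4 row can be reproduced with a minimal Python sketch.  It assumes only what is stated above:  the four bipolar code words of the  ${\rm SPC} \ (3, 2, 2)$,  the received values  $(+0.9, +0.1, +0.1)$,  and the usual fact that block-wise ML decoding on the AWGN channel amounts to a minimum Euclidean distance decision;  function and variable names are chosen freely for illustration.

from itertools import product

# All 2^k = 4 code words of the SPC (3, 2, 2) in bipolar representation (x3 = x1 * x2).
codewords = [(u1, u2, u1 * u2) for u1, u2 in product([+1, -1], repeat=2)]

def ml_decode(y):
    # Block-wise ML on the AWGN channel: code word with minimum Euclidean distance to y.
    return min(codewords, key=lambda x: sum((yi - xi) ** 2 for yi, xi in zip(y, x)))

y_sd = (+0.9, +0.1, +0.1)                             # "channel 4" of the example
y_hd = tuple(+1 if yi > 0 else -1 for yi in y_sd)     # hard decided channel values

print("ML-HD:", ml_decode(y_hd))                      # (+1, +1, +1)  ->  v_HD = (+1, +1)
print("ML-SD:", ml_decode(y_sd))                      # (+1, +1, +1)  ->  also wrong here

Both calls return the code word  $(+1, +1, +1)$,  i.e. the wrong information word  $(+1, +1)$,  exactly as stated in point 5.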

The decoding variant  $\rm ML–SD$  also offers the advantage over  $\rm ML–HD$  that it is relatively easy to assign a reliability value to each decoding result  $($but in the above table this is not specified$)$.  This reliability value would

  • have its maximum value for  $\text{channel 1}$ ,
  • be significantly smaller for  $\text{channel 2}$ ,
  • be close to zero for  $\text{channel 3}$  and  $\text{channel 4}$.

Reliability information - Log likelihood ratio


Let  $x ∈ \{+1, \, -1\}$  be a binary random variable with probabilities  ${\rm Pr}(x = +1)$  and  ${\rm Pr}(x = \, -1)$.  For coding theory,  it proves convenient in terms of computation time to use the natural logarithm of the quotient of these two probabilities instead of the probabilities  ${\rm Pr}(x = ±1)$  themselves.

$\text{Definition:}$  The  log likelihood ratio  $\rm (LLR)$  of the random variable  $x ∈ \{+1, \, -1\}$  is:

\[L(x)={\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(x = +1)}{ {\rm Pr}(x = -1)}\hspace{0.05cm}.\]
  • For the unipolar representation  $(+1  →  0$   and   $-1  →  1)$,  the following applies accordingly with  $\xi ∈ \{0, \, 1\}$:
\[L(\xi)={\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(\xi = 0)}{ {\rm Pr}(\xi = 1)}\hspace{0.05cm}.\]


Probability and log likelihood ratio

The table gives the nonlinear relationship between  ${\rm Pr}(x = ±1)$  and  $L(x)$.  Replacing  ${\rm Pr}(x = +1)$  with  ${\rm Pr}(\xi = 0)$,  the middle row gives the  $L$–value of the unipolar random variable  $\xi$.

You can see:

  1. The more likely random value of  $x ∈ \{+1, \, -1\}$  is given by the  sign   ⇒   ${\rm sign} \ L(x)$.
  2. In contrast,  the  absolute value    ⇒   $|L(x)|$  indicates the reliability for the result  "${\rm sign}(L(x))$".
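
The relationship between probability and  $L$–value can be checked with a few lines of Python  $($an illustrative sketch;  the sample probabilities are arbitrary and the function names are not part of the article$)$:

import math

def llr_from_prob(p_plus):
    # L(x) = ln[ Pr(x = +1) / Pr(x = -1) ] of a bipolar random variable x
    return math.log(p_plus / (1.0 - p_plus))

def prob_from_llr(L):
    # inverse relation:  Pr(x = +1) = 1 / (1 + exp(-L))
    return 1.0 / (1.0 + math.exp(-L))

for p in (0.5, 0.6, 0.8, 0.9):
    print(f"Pr(x=+1) = {p:.1f}  ->  L(x) = {llr_from_prob(p):+.3f}")
    # sign of L(x): the more likely value;  |L(x)|: reliability of that decision

assert abs(prob_from_llr(llr_from_prob(0.8)) - 0.8) < 1e-12

For  ${\rm Pr}(x = +1) = 0.9$  this yields  $L(x) = \ln(9) ≈ +2.197$,  the value that also appears in the BSC example below.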

Given BSC model

$\text{Example 1:}$  We consider the outlined  $\text{BSC model}$  with bipolar representation. 

  • With the falsification probability  $\varepsilon = 0.1$  and the two random variables  $x ∈ \{+1, \, -1\}$  and  $y ∈ \{+1, \, -1\}$  at the input and output of the channel:
\[L(y\hspace{0.05cm}\vert\hspace{0.05cm}x) = {\rm ln} \hspace{0.15cm} \frac{ {\rm Pr}(y\hspace{0.05cm}\vert\hspace{0.05cm}x = +1) }{ {\rm Pr}(y\hspace{0.05cm}\vert\hspace{0.05cm}x = -1)} = \left\{ \begin{array}{c} {\rm ln} \hspace{0.15cm} \big[(1 - \varepsilon)/\varepsilon \big]\\ {\rm ln} \hspace{0.15cm}\big [\varepsilon/(1 - \varepsilon)\big] \end{array} \right.\quad \begin{array}{*{1}c} {\rm for} \hspace{0.15cm} y = +1, \\ {\rm for} \hspace{0.15cm} y = -1. \\ \end{array}\]
  • For example,  $\varepsilon = 0.1$  results in the following numerical values  $($compare the upper table$)$:
\[L(y = +1\hspace{0.05cm}\vert\hspace{0.05cm}x) = {\rm ln} \hspace{0.15cm} \frac{0.9}{0.1} = +2.197\hspace{0.05cm}, \hspace{0.8cm} L(y = -1\hspace{0.05cm}\vert\hspace{0.05cm}x) = -2.197\hspace{0.05cm}.\]
  • This example shows that the so-called  »$\text{LLR algebra}$« can also be applied to conditional probabilities. In  "Exercise 4.1Z",  the BEC model is described in a similar way.
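
The two conditional  $L$–values of this example follow directly from the formula above;  a short Python check  $($sketch,  function name chosen freely$)$:

import math

def bsc_channel_llr(y, eps):
    # L(y|x) for the BSC with falsification probability eps
    return math.log((1 - eps) / eps) if y == +1 else math.log(eps / (1 - eps))

print(bsc_channel_llr(+1, 0.1), bsc_channel_llr(-1, 0.1))   # +2.197  and  -2.197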


Conditional AWGN probability density functions

$\text{Example 2:}$  In another example,  we consider the  $\rm AWGN$   channel with the conditional probability density functions

\[f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=+1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=+1 )\hspace{-0.1cm} = \hspace{-0.1cm} \frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ - {(y-1)^2}/(2\sigma^2) } \hspace{0.05cm},\]
\[f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=-1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=-1 )\hspace{-0.1cm} = \hspace{-0.1cm} \frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ - {(y+1)^2}/(2\sigma^2) } \hspace{0.05cm}.\]

⇒   In the graph,  two exemplary Gaussian functions are shown as blue and red curves, respectively. 

  • The total probability density function  $\rm (PDF)$  of the output signal $y$  is obtained from the  (equally)  weighted sum:
\[f_{y } \hspace{0.05cm} (y ) = 1/2 \cdot \big [ f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=+1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert\hspace{0.05cm}x=+1 ) \hspace{0.1cm} + \hspace{0.1cm} f_{y \hspace{0.03cm}\vert \hspace{0.03cm}x=-1 } \hspace{0.05cm} (y \hspace{0.05cm}\vert \hspace{0.05cm}x=-1 ) \big ] \hspace{0.05cm}.\]
  • We now calculate the probability that the received value  $y$  lies in a  (very)  narrow interval of width  $\it \Delta$  around  $y_0 = 0.25$.  One obtains approximately
\[{\rm Pr} (\vert y - y_0\vert \le{\it \Delta}/2 \hspace{0.05cm} \Big \vert \hspace{0.05cm}x=+1 )\hspace{-0.1cm} \approx \hspace{-0.1cm} \frac {\it \Delta}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ - {(y_0-1)^2}/(2\sigma^2) } \hspace{0.05cm},\]
\[{\rm Pr} (\vert y - y_0\vert \le {\it \Delta}/2 \hspace{0.05cm} \Big \vert \hspace{0.05cm}x=-1 )\hspace{-0.1cm} \approx \hspace{-0.1cm} \frac {\it \Delta}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e} ^{ - {(y_0+1)^2}/(2\sigma^2) } \hspace{0.05cm}.\]
The slightly larger vertical lines denote the conditions,  the smaller ones the absolute value.
  • The  log likelihood ratio  $\rm (LLR)$  of the conditional probability in the forward direction  $($meaning:   output  $y$  for a given input  $x)$  is thus obtained as the logarithm of the quotient of both expressions:
\[L(y = y_0\hspace{0.05cm}\vert\hspace{0.05cm}x) = {\rm ln} \hspace{0.15cm} \left [ \frac{{\rm e} ^{ - {(y_0-1)^2}/(2\sigma^2)}}{{\rm e} ^{ - {(y_0+1)^2}/(2\sigma^2)}} \right ] = {\rm ln} \left [ {\rm e} ^{ - \left [ {(y_0-1)^2}-{(y_0+1)^2}\right ]/(2\sigma^2)} \right ] = \frac{(y_0+1)^2-(y_0-1)^2}{2\cdot \sigma^2} = \frac{2 \cdot y_0}{\sigma^2}\hspace{0.05cm}. \]
  • If we now replace the auxiliary  $y_0$  with the general random  $y$  at the AWGN output,  the final result is:
\[L(y \hspace{0.05cm}\vert\hspace{0.05cm}x) = {2 \cdot y}/{\sigma^2} =K_{\rm L} \cdot y\hspace{0.05cm}. \]
Here  $K_{\rm L} = 2/\sigma^2$  is a constant that depends solely on the standard deviation of the Gaussian noise component.
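
The closed-form result  $L(y\hspace{0.05cm}|\hspace{0.05cm}x) = 2 \cdot y/\sigma^2$  can be verified numerically against the two conditional Gaussian densities.  A small Python sketch;  the values  $\sigma = 0.8$  and  $y_0 = 0.25$  are sample values  $(y_0$  as in the interval argument above$)$:

import math

def awgn_pdf(y, mean, sigma):
    # conditional AWGN density f(y | x = mean) with noise standard deviation sigma
    return math.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

sigma, y0 = 0.8, 0.25
llr_ratio  = math.log(awgn_pdf(y0, +1, sigma) / awgn_pdf(y0, -1, sigma))
llr_closed = 2 * y0 / sigma ** 2          # K_L * y  with  K_L = 2 / sigma^2

print(llr_ratio, llr_closed)              # both 0.78125
assert abs(llr_ratio - llr_closed) < 1e-9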


Bit-wise soft-in soft-out decoding


We now assume an  $(n, \ k)$ block code where the code word   $\underline{x} = (x_1, \ \text{...} \ , \ x_n)$   is falsified by the channel into the received word  $\underline{y} = (y_1, \ \text{...} \ , \ y_n)$.

Model of bit-wise soft-in soft-out decoding
  • For long codes,  a  "maximum a-posteriori block-level decision"  $($for short:  $\text{block-wise MAP})$  is very elaborate.
  • One would have to find among the  $2^k $ allowable code words  $\underline{x}_j ∈ \mathcal{C}$  the one with the greatest a-posteriori probability  $\rm (APP)$.
  • The code word to output  $\underline{z}$  in this case would be 
$$\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}j} \hspace{0.03cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}j} |\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.$$


A second possibility is decoding at the bit level.  The represented bit-wise  $\text{soft-in soft-out decoder}$  has the task of decoding all code word bits  $x_i ∈ \{0, \, 1\}$  according to the maximum a-posteriori probability  ${\rm Pr}(x_i | \underline{y})$.  With the control variable  $i = 1, \ \text{...} \ , \ n$,  the following holds in this case:

  • The corresponding  log likelihood ratio  for the  $i$th  code bit is:
\[L_{\rm APP} (i) = L(x_i\hspace{0.05cm}|\hspace{0.05cm}\underline{y}) = {\rm ln} \hspace{0.15cm} \frac{{\rm Pr}(x_i = 0\hspace{0.05cm}|\hspace{0.05cm}\underline{y})}{{\rm Pr}(x_i = 1\hspace{0.05cm}|\hspace{0.05cm}\underline{y})}\hspace{0.05cm} . \]
  • The decoder works iteratively.  At initialization  $($indicated in the diagram by the parameter  $I = 0)$,  $L_{\rm APP}(i) = L_{\rm K}(i)$  holds,  where the channel  $($German:  "Kanal"   ⇒   "K"$)$  log likelihood ratio  $L_{\rm K}(i)$  is given by the received value  $y_i$.
  • In addition,  the extrinsic  log likelihood ratio  $L_{\rm E}(i)$  is calculated,  which quantifies the total information that all other bits   $(j ≠ i)$   provide about the considered  $i$th bit due to the code properties.
  • At the next iteration  $(I = 1)$  $L_{\rm E}(i)$  is taken into account in the computation of  $L_{\rm APP}(i)$  as a-priori information  $L_{\rm A}(i)$.  Thus,  for the new a-posteriori log likelihood ratio in iteration  $I + 1$  holds:
\[L_{\hspace{0.1cm}\rm APP}^{(I+1)} (i) = L_{\hspace{0.1cm}\rm APP}^{(I)} (i) + L_{\hspace{0.1cm}\rm A}^{(I+1)} (i) = L_{\hspace{0.1cm}\rm APP}^{(I)} (i) + L_{\hspace{0.1cm}\rm E}^{(I)} (i)\hspace{0.05cm} . \]
  • The iterations continue until all absolute values  $|L_{\rm APP}(i)|$  are greater than a given value.  The most likely code word  $\underline{z}$  is then obtained from the signs of all  $L_{\rm APP}(i)$,  with  $i = 1, \ \text{...} , \ n$.
  • For a  $\text{systematic code}$  the first  $k$  bits of  $\underline{z}$  indicate the information word  $\underline{\upsilon}$  being searched for,  which will most likely match the message  $\underline{u}$  being sent.

This SISO decoder description from  [Bos99][1]  is intended at this point primarily to clarify the different  $L$–values.  The large potential of bit-wise decoding only becomes apparent in connection with  $\text{concatenated coding systems}$.

Calculation of extrinsic log likelihood ratios


The difficulty in bit-wise  $($or symbol-wise$)$  iterative decoding is generally the provision of the extrinsic  log likelihood ratios   $L_{\rm E}(i)$.  For a code of length  $n$,  the control variable is   $i = 1, \ \text{...} , \ n$.

$\text{Definition:}$  The   extrinsic log likelihood ratio   is a measure of the information that the other symbols  $(j ≠ i)$  of the code word provide about the  $i$–th encoded symbol,  expressed as a log likelihood ratio.  We denote this characteristic value by  $L_{\rm E}(i)$.


We now calculate the extrinsic log likelihood ratios  $L_{\rm E}(i)$  for two example codes.

$\text{(A)  Repetition Code}$   ⇒   ${\rm RC} \ (n, 1, n)$

A repetition code  $\rm (RC)$ is characterized by the fact that all  $n$  encoded symbols  $x_i ∈ \{0, \, 1\}$  are identical.

  • Here,  the extrinsic log likelihood ratio for the  $i$–th symbol is very easy to specify:
\[L_{\rm E}(i) = \hspace{0.05cm}\sum_{j \ne i} \hspace{0.1cm} L_j \hspace{0.3cm}{\rm with}\hspace{0.3cm}L_j = L_{\rm APP}(j) \hspace{0.05cm}.\]
  • If the sum over all  $L_{j ≠ i}$  is positive   ⇒   this implies from the point of view of the other log likelihood ratio values a preference for the decision   $x_i = 0$.
  • On the other hand,  if the sum is negative   ⇒   $x_i = 1$  is more likely.
  • $L_{\rm E}(i) = 0$  does not allow a prediction.


$\text{Example 5:}$  We consider the decoding of the repetition code  $\text{RC (4, 1, 4)}$.  In doing so,  we make three different assumptions for the log likelihood ratios  $ \underline{L}_{\rm APP}^{(I=0)}=\underline{L}_{\rm K}.$

Decoding example  $\rm (5a)$  for the  ${\rm RC} \ (4, 1, 4)$

$\text{Decoding example (5a):}$

$$\underline{L}_{\rm APP}^{(I=0)} = (+1, -1, +3, -1)\text{:}$$
$$L_{\rm E}(1) = -1+3-1 = +1\hspace{0.05cm},$$
$$L_{\rm E}(2) = +1+3-1 = +3\hspace{0.05cm},$$
$$L_{\rm E}(3) = +1-1 -1= -1\hspace{0.05cm},$$
$$L_{\rm E}(4) = +1-1 +3 = +3\hspace{0.05cm}$$
$$\hspace{0.3cm}\Rightarrow\hspace{0.3cm}\underline{L}_{\rm E}^{(I=0)}= (+1, +3, -1, +3) \hspace{0.3cm}\Rightarrow\hspace{0.3cm}\underline{L}_{\rm APP}^{(I=1)}=\underline{L}_{\rm APP}^{(I=0)}+ \underline{L}_{\rm E}^{(I=0)}= (+2, +2, +2, +2)$$
  • At the beginning  $(I=0)$  the positive  $L_{\rm E}$  values indicate  $x_1 = x_2 = x_4 = 0$  while  $x_3 =1$  is more likely   $($because of negative  $L_{\rm E})$ .
  • Already after one iteration  $(I=1)$  all  $L_{\rm APP}$  values are positive   ⇒   the information bit is decoded as  $u = 0$.
  • Further iterations yield nothing.


$\text{Decoding example (5b):}$

Decoding example  $\rm (5b)$  for the  ${\rm RC} \ (4, 1, 4)$
$$\underline{L}_{\rm APP}^{(I=0)} = (+1, +1, -4, +1)\text{:}$$
$$\underline{L}_{\rm E}^{(I=0)} = \ (-2, -2, +3, -2)$$
  • Although three signs were wrong at the beginning,  after two iterations all  $L_{\rm APP}$  values are negative.
  • The information bit is decoded as  $u = 1$.


Decoding example  $\rm (5c)$  for the  ${\rm RC} \ (4, 1, 4)$

$\text{Decoding example (5c):}$

$$\underline{L}_{\rm APP}^{(I=0)} = (+1, +1, -3, +1)\text{:}$$
$$\underline{L}_{\rm E}^{(I=0)} = (-1, -1, +3, -1)$$
  • All  $L_{\rm APP}$  values are zero already after one iteration.
  • The information bit cannot be decoded,  although the initial situation was not much worse than with  $\rm (5b)$.
  • Further iterations bring nothing here.


$\text{(B)   Single parity–check code}$   ⇒   ${\rm SPC} \ (n, \ n \, -1, \ 2)$

For each single parity–check code,  the number of  "ones"  in each code word is even.  Or,  put another way:   For each code word  $\underline{x}$  the  $\text{Hamming weight}$  $w_{\rm H}(\underline{x})$  is even.

$\text{Definition:}$  Let the code word  $\underline{x}^{(–i)}$  contain all symbols except  $x_i$   ⇒   vector of length  $n-1$.  Then the   "extrinsic log likelihood ratio"  with respect to the  $i$–th symbol,  when  $\underline{y}$  has been received,  is:

\[L_{\rm E}(i) = {\rm ln} \hspace{0.15cm}\frac{ {\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} even} \hspace{0.05cm} \vert\hspace{0.05cm}\underline{y} \hspace{0.05cm}\right ]}{ {\rm Pr} \left [w_{\rm H}(\underline{x}^{(-i)})\hspace{0.15cm}{\rm is \hspace{0.15cm} odd} \hspace{0.05cm} \vert \hspace{0.05cm}\underline{y} \hspace{0.05cm}\right ]} \hspace{0.05cm}.\]
  • As will be shown in the  "Exercise 4.4",  it is also possible to write for this:
\[L_{\rm E}(i) = 2 \cdot {\rm tanh}^{-1} \hspace{0.1cm} \left [ \prod\limits_{j \ne i}^{n} \hspace{0.15cm}{\rm tanh}(L_j/2) \right ] \hspace{0.3cm}{\rm with}\hspace{0.3cm}L_j = L_{\rm APP}(j) \hspace{0.05cm}.\]


$\text{Example 6:}$  We assume a  "Single parity–check code"  with  $n = 3, \ k = 2$   ⇒   briefly  ${\rm SPC} \ (3, \ 2, \ 2)$.

  • The   $2^k = 4$   valid code words  $\underline{x} = (x_1,\ x_2,\ x_3)$  are,  in bipolar description   ⇒   $x_i ∈ \{±1\}$:
Decoding example for the  ${\rm SPC} \ (3, 2, 2)$
$$ \underline{x}_0 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm}+1\hspace{-0.03cm},\hspace{-0.05cm}+1)\hspace{-0.05cm}, $$
$$\underline{x}_1 \hspace{-0.05cm}=\hspace{-0.05cm} (+1\hspace{-0.03cm},\hspace{-0.05cm} -1\hspace{-0.03cm},\hspace{-0.05cm} -1)\hspace{-0.05cm}, $$
$$\underline{x}_2 \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm} +1\hspace{-0.03cm},\hspace{-0.05cm} -1)\hspace{-0.05cm}, $$
$$\underline{x}_3 \hspace{-0.05cm}=\hspace{-0.05cm} (-1\hspace{-0.03cm},\hspace{-0.05cm} -1\hspace{-0.03cm},\hspace{-0.05cm} +1)\hspace{-0.05cm}. $$
So, the product   $x_1 \cdot x_2 \cdot x_3$   is always positive.
  • The table shows the decoding process for  $\underline{L}_{\rm APP} = (+2.0, +0.4, \, –1.6)$.
  • The hard decision according to the signs of  $L_{\rm APP}(i)$  would yield here  $(+1, +1, \, -1)$   ⇒   no valid code word of the  ${\rm SPC} \ (3, \ 2, \ 2)$.
  • On the right side of the table, the corresponding extrinsic log likelihood ratios are entered:
\[L_{\rm E}(1) = 2 \cdot {\rm tanh}^{-1} \hspace{0.05cm} \left [ {\rm tanh} (0.2) \cdot {\rm tanh} (-0.8)\hspace{0.05cm}\right ] = -0.131\hspace{0.05cm}, \]
\[L_{\rm E}(2) =2 \cdot {\rm tanh}^{-1} \hspace{0.05cm} \left [ {\rm tanh} (1.0) \cdot {\rm tanh} (-0.8)\hspace{0.05cm}\right ] = -0.518\hspace{0.05cm}, \]
\[L_{\rm E}(3) =2 \cdot {\rm tanh}^{-1} \hspace{0.05cm} \left [ {\rm tanh} (1.0) \cdot {\rm tanh} (0.2)\hspace{0.05cm}\right ] = +0.151\hspace{0.05cm}.\]

The second equation can be interpreted as follows:

  1. $L_{\rm APP}(1) = +2.0$  and  $L_{\rm APP}(3) = \, -1.6$  state that the first bit is more likely to be  "$+1$"  than  "$-1$"  and the third bit is more likely to be  "$-1$"  than  "$+1$".
  2. The reliability  $($the magnitude$)$  is slightly greater for the first bit than for the third.
  3. The extrinsic information   $L_{\rm E}(2) = \, -0.518$  takes into account only the information of bit 1 and bit 3 about bit 2.
  4. From their point of view,  the second bit is  "$-1$"  with reliability  $0.518$.
  5. The log likelihood ratio  $L_{\rm APP}^{(I=0)}(2) = +0.4$  derived from the received value  $y_2$  suggested  "$+1$"  for the second bit.
  6. But this discrepancy is already resolved in the iteration  $I = 1$.  The decision here is in favor of the code word  $\underline{x}_1 = (+1, -1, -1)$.
  7. For   $0.518 < L_{\rm APP}(2) < 1.6$   the result  $\underline{x}_1$  would only be available after several iterations.  For  $L_{\rm APP}(2) > 1.6$  on the other hand,  the decoder returns  $\underline{x}_0$.
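
The boxed  $\tanh$  relation can be put directly into code.  The following Python sketch applies it to the  $L$–values of Example 6;  only the signs of the results are asserted,  since they carry the decision-relevant information  $($function name chosen freely$)$:

import math

def extrinsic_spc(L):
    # extrinsic L-values of a single parity-check code (tanh rule)
    L_ext = []
    for i in range(len(L)):
        prod = 1.0
        for j, Lj in enumerate(L):
            if j != i:
                prod *= math.tanh(Lj / 2)
        L_ext.append(2 * math.atanh(prod))
    return L_ext

L_app = [+2.0, +0.4, -1.6]                 # a-posteriori L-values of Example 6
L_ext = extrinsic_spc(L_app)
print([round(Le, 3) for Le in L_ext])      # one extrinsic value per code bit
assert L_ext[0] < 0 and L_ext[1] < 0 and L_ext[2] > 0     # signs (-, -, +)

The sign of each  $L_{\rm E}(i)$  equals the product of the signs of the other two  $L$–values,  as required by the parity constraint.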


BCJR decoding: Forward-backward algorithm


An example of iterative decoding of convolutional codes is the  »BCJR algorithm«,  named after its inventors L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv   ⇒   [BCJR74][2].  The algorithm has many parallels to the Viterbi decoding,  which is seven years older,  but also some significant differences:

  • The Viterbi algorithm  $($in its original form$)$  cannot process soft information.  In contrast,  the BCJR algorithm specifies a reliability value for each individual bit at each iteration,  which is taken into account in subsequent iterations.


The figure is intended to illustrate  $($in an almost inadmissibly simplified way$)$  the different approaches of the Viterbi and the BCJR algorithm.  This is based on a convolutional code with memory  $m = 1$  and length  $L = 4$   ⇒   total length $($with termination$)$  $L' = 5$.

Comparison of Viterbi and BCJR algorithm
  • Viterbi searches and finds the most likely path from  ${\it \Gamma}_0(S_0)$  to  ${\it \Gamma}_5(S_0)$, namely  $S_0 → S_1 → S_0 → S_0 → S_1→ S_0 $. We refer to the sample solution to  "Exercise 3.9Z".
  • The sketch for the BCJR algorithm illustrates the extraction of the extrinsic  L–value for the third symbol   ⇒   $L_{\rm E}(3)$.  The relevant area in the trellis is shaded:
  • Working through the trellis diagram in the forward direction,  one obtains the metrics  $\alpha_1, \ \alpha_2, \ \text{...}\hspace{0.05cm} , \ \alpha_5$  in the same way as with Viterbi.  To calculate  $L_{\rm E}(3)$,  one needs  $\alpha_2$  from these.
  • Then we traverse the trellis diagram backwards  $($i.e. from right to left$)$  and thus obtain the metrics  $\beta_4, \ \beta_3, \ \text{...}\hspace{0.05cm} , \ \beta_0$  corresponding to the sketch below.
  • The extrinsic log likelihood ratio  $L_{\rm E}(3)$  is obtained from the metric  $\alpha_2$  of the forward direction,  the metric  $\beta_3$  of the backward direction,  and the a-priori information  $\gamma_3$  about the symbol  $i = 3$,  as sketched below.
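
A minimal forward-backward sketch in Python may help to see how the  $\alpha$,  $\beta$  and  $\gamma$  quantities interact.  Everything concrete in it is an assumption made only for illustration:  a memory-1,  rate-1/2 code with code bits  $(u_k, \ u_k ⊕ s)$  and next state equal to  $u_k$,  the bipolar mapping  "$0$" → $+1$,  an arbitrary noise level,  and termination in the zero state;  the figure does not fix these details.  For simplicity the sketch outputs a-posteriori  $L$–values;  the extrinsic value  $L_{\rm E}(i)$  would additionally remove the intrinsic and a-priori parts.

import math, random

SIGMA = 0.8                                   # assumed AWGN standard deviation

def encode(u_bits):                           # toy memory-1, rate-1/2 encoder (assumption)
    s, out = 0, []
    for u in u_bits:
        out += [u, u ^ s]                     # code bits: (u, u XOR state)
        s = u                                 # next state = current input bit
    return out

def bpsk(bits):                               # "0" -> +1,  "1" -> -1
    return [1.0 - 2.0 * b for b in bits]

def gamma(y_pair, s, u):                      # branch quantity: p(y | code bits) * Pr(u)
    c = bpsk([u, u ^ s])
    dist2 = (y_pair[0] - c[0]) ** 2 + (y_pair[1] - c[1]) ** 2
    return 0.5 * math.exp(-dist2 / (2 * SIGMA ** 2))

def bcjr_llrs(y):
    n = len(y) // 2
    Y = [y[2 * k: 2 * k + 2] for k in range(n)]
    alpha = [[0.0, 0.0] for _ in range(n + 1)]; alpha[0][0] = 1.0   # start in state 0
    beta  = [[0.0, 0.0] for _ in range(n + 1)]; beta[n][0] = 1.0    # terminated in state 0
    for k in range(n):                        # forward recursion:  alpha_1 ... alpha_n
        for s in (0, 1):
            for u in (0, 1):
                alpha[k + 1][u] += alpha[k][s] * gamma(Y[k], s, u)
    for k in range(n, 0, -1):                 # backward recursion: beta_{n-1} ... beta_0
        for s in (0, 1):
            for u in (0, 1):
                beta[k - 1][s] += beta[k][u] * gamma(Y[k - 1], s, u)
    L = []                                    # a-posteriori L-value of every trellis step
    for k in range(n):
        num = sum(alpha[k][s] * gamma(Y[k], s, 0) * beta[k + 1][0] for s in (0, 1))
        den = sum(alpha[k][s] * gamma(Y[k], s, 1) * beta[k + 1][1] for s in (0, 1))
        L.append(math.log(num / den))         # positive value favours u_k = "0"
    return L

u = [1, 0, 1, 1, 0]                           # L = 4 info bits plus one tail bit ("0")
y = [x + random.gauss(0, SIGMA) for x in bpsk(encode(u))]
print([round(Lk, 2) for Lk in bcjr_llrs(y)])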

Basic structure of concatenated coding systems


The most important communication systems of recent years use two different channel codes.  One then speaks of  »concatenated codes«.  Even with relatively short component codes  $\mathcal{C}_1$  and  $\mathcal{C}_2$,  the concatenated code  $\mathcal{C}$  results in a sufficiently large code word length  $n$,  which is known to be necessary to approach the channel capacity.

To begin with,  here are a few examples from mobile communications:

  • In  $\rm GSM$  $($"Global System for Mobile Communications",  the second generation of mobile communication systems$)$,  the data bit rate is first increased from  $9.6 \ \rm kbit/s$  to  $12 \ \rm kbit/s$  in order to enable error detection in circuit-switched networks as well.  This is followed by a punctured convolutional code with the output bit rate  $22.8 \ \rm kbit/s$.  The total code rate is thus  $9.6/22.8 ≈ 0.421$.
  • In the 3G cellular system  $\rm UMTS$  $($"Universal Mobile Telecommunications System"$)$,  depending on the boundary conditions  $($good/bad channel,  few/many subscribers in the cell$)$,  one uses a  $\text{convolutional code}$  or a  $\text{turbo code}$  $($by which one means per se the concatenation of two convolutional encoders$)$. 
  • In the 4G mobile communication system  $\rm LTE$  $($"Long Term Evolution"$)$,  a convolutional code is used for short control signals and a turbo code for the longer payload data.


The graph shows the basic structure of a parallel concatenated coding system.  All vectors consist of  $n$  elements: 

$$\underline{L} = (L(1), \ \text{...}\hspace{0.05cm} , \ L(n)).$$
Parallel concatenated codes

The calculation of all  L–values is thus done at the symbol level.  Not shown here is the  $\rm interleaver$,  which is mandatory e.g. for turbo codes.

  1. The sequences  $\underline{x}_1$  and  $\underline{x}_2$  are combined by a multiplexer to form the vector  $\underline{x}$  for joint transmission over the channel.
  2. At the receiver, the sequence  $\underline{y}$  is again decomposed into the parts  $\underline{y}_1$  and  $\underline{y}_2$.  From this the channel L–values  $\underline{L}_{\rm K,\hspace{0.05cm}1} $ and  $\underline{L}_{\rm K,\hspace{0.05cm}2}$  are formed.
  3. The symbol-wise decoder determines the extrinsic log likelihood ratios   $\underline{L}_{\rm E,\hspace{0.05cm} 1}$  and  $\underline{L}_{\rm E,\hspace{0.05cm} 2}$, which are also the a-priori information  $\underline{L}_{\rm A,\hspace{0.05cm} 2}$  and  $\underline{L}_{\rm A,\hspace{0.05cm} 1}$  for the other decoder.
  4. After a sufficient number of iterations  $($i.e. when a termination criterion is met$)$,  the vector  $\underline{L}_{\rm APP}$  of a-posteriori values is available at the decoder output.
  5. In the example,  the value in the upper branch is taken arbitrarily. 
  6. However,  the lower log likelihood ratio would also be possible.

The above model also applies in particular to the decoding of turbo codes according to the chapter  "Basic structure of a turbo code".

Exercises for the chapter


Exercise 4.1: Log Likelihood Ratio

Exercise 4.1Z: Log Likelihood Ratio at the BEC Model

Exercise 4.2: Channel Log Likelihood Ratio at AWGN

Exercise 4.3: Iterative Decoding at the BSC

Exercise 4.3Z: Conversions of L-value and S-value

Exercise 4.4: Extrinsic L-values at SPC

Exercise 4.4Z: Supplement to Exercise 4.4

Exercise 4.5: On the Extrinsic L-values again

Exercise 4.5Z: Tangent Hyperbolic and Inverse

References

  1. Bossert, M.:  Channel Coding for Telecommunications.  Chichester:  Wiley,  1999.  ISBN:  978-0-471-98277-7.
  2. Bahl, L.R.; Cocke, J.; Jelinek, F.; Raviv, J.:  Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate.  In: IEEE Transactions on Information Theory, Vol. IT-20, pp. 284-287, 1974.