Channel Coding/The Basics of Turbo Codes

From LNTwww

{{Header
|Untermenü=Iterative Decoding Methods
|Vorherige Seite=The Basics of Product Codes
|Nächste Seite=The Basics of Low-Density Parity Check Codes
}}
  
== Basic structure of a turbo code ==
<br>
All communications systems current today&nbsp; $($2017$)$,&nbsp; such as&nbsp;
* [[Examples_of_Communication_Systems/General_Description_of_UMTS#.23_OVERVIEW_OF_THE_FOURTH_MAIN_CHAPTER_.23|$\rm UMTS$]]&nbsp; $($"Universal Mobile Telecommunications System" &nbsp; &rArr; &nbsp; 3rd generation mobile communications$)$ and&nbsp;
*[[Mobile_Communications/General_Information_on_the_LTE_Mobile_Communications_Standard#.23_OVERVIEW_OF_THE_FOURTH_MAIN_CHAPTER_.23|$\rm LTE$]]&nbsp; $($"Long Term Evolution"&nbsp; &rArr; &nbsp; 4th generation mobile communications$)$
  
use the concept of&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Bit-wise_soft-in_soft-out_decoding|$\text{symbol-wise iterative decoding}$]].&nbsp; This  is directly related to the invention of&nbsp; &raquo;'''turbo codes'''&laquo;&nbsp; in 1993 by&nbsp; [https://en.wikipedia.org/wiki/Claude_Berrou $\text{Claude Berrou}$],&nbsp; [https://en.wikipedia.org/wiki/Alain_Glavieux $\text{Alain Glavieux}$]&nbsp; and&nbsp; [https://scholar.google.com/citations?user=-UZolIAAAAAJ $\text{Punya Thitimajshima}$]&nbsp; because it was only with these codes that the Shannon bound could be approached with reasonable decoding effort.<br>
  
Turbo codes result from the parallel or serial concatenation of convolutional codes.&nbsp;

[[File:EN_KC_T_4_3_S1a_v1.png|right|frame|Parallel concatenation of two rate&nbsp; $1/2$&nbsp; codes|class=fit]]
[[File:EN_KC_T_4_3_S1b_v1.png|right|frame|Rate&nbsp; $1/3$&nbsp; turbo encoder&nbsp; $($parallel concatenation of two rate&nbsp; $1/2$&nbsp; convolutional codes$)$ |class=fit]]
The graphic shows the parallel concatenation of two codes,&nbsp; each with the parameters&nbsp; $k = 1, \ n = 2$ &nbsp; &#8658; &nbsp; code rate $R = 1/2$.<br>
  
In this representation:
# $u$&nbsp; denotes the currently considered bit of the information sequence&nbsp; $\underline{u}$,<br>
# $x_{i,\hspace{0.03cm}j}$&nbsp; the currently considered bit at the output&nbsp; $j$&nbsp; of encoder&nbsp; $i$&nbsp; <br>$($with&nbsp; $1 &#8804; i &#8804; 2, \hspace{0.2cm} 1 &#8804; j &#8804; 2)$,<br>
# $\underline{X} = (x_{1,\hspace{0.03cm}1}, \ x_{1,\hspace{0.03cm}2}, \ x_{2,\hspace{0.03cm}1}, \ x_{2,\hspace{0.03cm}2})$&nbsp; the code word for the current information bit&nbsp; $u$.<br><br>
  
The resulting rate of the concatenated coding system is thus&nbsp; $R = 1/4$.&nbsp;

If systematic component codes are used,&nbsp; the second model shown results.&nbsp; The modifications from the upper graphic to the lower one can be justified as follows:
*For systematic codes&nbsp; $C_1$&nbsp; and&nbsp; $C_2$,&nbsp; both&nbsp; $x_{1,\hspace{0.03cm}1} = u$&nbsp; and&nbsp; $x_{2,\hspace{0.03cm}1} = u$.&nbsp; Therefore,&nbsp; one can dispense with the transmission of one redundant bit&nbsp; $($e.g.&nbsp; $x_{2,\hspace{0.03cm}2})$.

*With this reduction,&nbsp; the result is a rate&nbsp; $1/3$&nbsp; turbo code with&nbsp; $k = 1$&nbsp; and&nbsp; $n = 3$.&nbsp; With the parity bits&nbsp; $p_1$&nbsp; $($encoder 1$)$&nbsp; and&nbsp; $p_2$&nbsp; $($encoder 2$)$,&nbsp; the code word is:
:$$\underline{X} = \left (u, \ p_1, \ p_2 \right )\hspace{0.05cm}.$$
  
== Further modification of the basic structure of the turbo code ==
<br>
In the following we always assume a somewhat further modified turbo encoder model:
[[File:EN_KC_T_4_3_S1c_v2.png|right|frame|Rate&nbsp; $1/3$&nbsp; turbo encoder with interleaver|class=fit]]
  
#As required for the description of convolutional codes,&nbsp; the information sequence&nbsp; $\underline{u} = (u_1, \ u_2, \ \text{...}\hspace{0.05cm} , \ u_i , \ \text{...}\hspace{0.05cm} )$&nbsp; is now present at the input instead of the isolated information bit&nbsp; $u$.<br>
#The code word sequence&nbsp; $\underline{x} = (\underline{X}_1, \underline{X}_2, \ \text{...}\hspace{0.05cm}  , \ \underline{X}_i, \ \text{...}\hspace{0.05cm} )$&nbsp; is generated.&nbsp;  To avoid confusion,&nbsp; the code words&nbsp; $\underline{X}_i = (u, \ p_1, \ p_2)$&nbsp; were introduced with capital letters in the last section.<br>
#The encoders&nbsp; $\mathcal{C}_1$&nbsp; and&nbsp; $\mathcal{C}_2$&nbsp; are conceived&nbsp; $($at least in thought$)$&nbsp; as&nbsp; [[Theory_of_Stochastic_Signals/Digital_Filters| $\text{digital filters}$]]&nbsp; and are thus characterized by the&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Application_of_the_D.E2.80.93transform_to_rate_.7F.27.22.60UNIQ-MathJax158-QINU.60.22.27.7F_convolution_encoders| $\text{transfer functions}$]]&nbsp; $G_1(D)$&nbsp; and&nbsp; $G_2(D)$.<br>
#For various reasons &nbsp; &#8658; &nbsp; see&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Second_requirement_for_turbo_codes:_Interleaving| "two sections ahead"]]&nbsp; the input sequence of the second encoder &nbsp; &#8658; &nbsp; $\underline{u}_{\pi}$&nbsp; should be scrambled with respect to the sequence&nbsp; $\underline{u}$&nbsp; by an interleaver&nbsp; $(\Pi)$.<br>
#Thus there is nothing against choosing both encoders the same:&nbsp; $G_1(D) = G_2(D) = G(D)$.&nbsp; Without the interleaver,&nbsp; the correction capability would be extremely limited.<br><br>
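This modified encoder model can be summarized in a few lines of Python. This is a minimal sketch only: the component code $G_1(D) = G_2(D) = 1 + D^2$ and the $3×4$ interleaver follow $\text{Example 1}$ below, while the column-wise-write / row-wise-read orientation of the interleaver matrix is an assumption made here for illustration.

```python
def parity(u, g=(1, 0, 1)):              # impulse response of G(D) = 1 + D^2
    # modulo-2 convolution of the information bits with g
    return [sum(g[j] * u[i - j] for j in range(len(g)) if i - j >= 0) % 2
            for i in range(len(u))]

def interleave_3x4(u):                   # assumed: write column-wise, read row-wise
    return [u[c * 3 + r] for r in range(3) for c in range(4)]

def turbo_encode(u):                     # rate 1/3: X_i = (u_i, p1_i, p2_i)
    p1 = parity(u)                       # encoder 1 operates on u
    p2 = parity(interleave_3x4(u))       # encoder 2 operates on u_pi
    return [bit for X_i in zip(u, p1, p2) for bit in X_i]

u = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
x = turbo_encode(u)
print(len(u), len(x))                    # 12 36  =>  k = 1, n = 3, R = 1/3
```
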
  
{{GraueBox|TEXT=
[[File:EN_KC_T_4_3_S1d_v2.png|right|frame|Example sequences at the rate&nbsp; $1/3$&nbsp; turbo encoder|class=fit]]
$\text{Example 1:}$&nbsp; The graph shows the different sequences in matching colors.&nbsp; Please note:
#For&nbsp; $\underline{u}_{\pi}$&nbsp; a&nbsp; $3×4$&nbsp; interleaver matrix according to&nbsp; [[Aufgaben:Exercise_4.08Z:_Basics_about_Interleaving|"Exercise 4.8Z"]]&nbsp; is considered.<br><br>
#The parity sequences are obtained according to&nbsp; $G_1(D) = G_2(D) = 1 + D^2$ &nbsp; &#8658; &nbsp; see&nbsp; [[Aufgaben:Exercise_4.08:_Repetition_to_the_Convolutional_Codes|"Exercise 4.8"]].}}<br>
  
== First requirement for turbo codes: Recursive component codes ==
<br>
Non-recursive transfer functions for generating the parity sequences result in a turbo code whose minimum distance is too small.&nbsp; The reason for this inadequacy is the finite impulse response&nbsp; $\underline{g} = (1, \ g_2, \ \text{...}\hspace{0.05cm} , \ g_m, \ 0, \ 0, \ \text{...}\hspace{0.05cm} )$&nbsp; with&nbsp; $g_2, \ \text{...}\hspace{0.05cm} , \ g_m &#8712; \{0, 1\}$.&nbsp; Here&nbsp; $m$&nbsp; denotes the code memory.<br>
[[File:EN_KC_T_4_3_S2a_v1.png|right|frame|Non-recursive systematic turbo code and state transition diagram|class=fit]]
 
 
<br><br>The graph shows the state transition diagram for the example&nbsp; $\mathbf{G}(D) = \big [1, \ 1 + D^2 \big ]$.&nbsp; The transitions are labeled&nbsp; "$u_i\hspace{0.05cm}|\hspace{0.05cm}u_i p_i$".
*The sequence&nbsp; $S_0 &#8594; S_1 &#8594; S_2 &#8594; S_0 &#8594; S_0 &#8594; \ \text{...}\hspace{0.05cm} \ $&nbsp; leads,&nbsp; with respect to the input,&nbsp; to the information sequence&nbsp; $\underline{u} = (1, 0, 0, 0, 0, \ \text{...}\hspace{0.05cm})$,&nbsp; and
*with respect to the second encoded symbol,&nbsp; to the parity sequence &nbsp; $\underline{p} = (1, 0, 1, 0, 0, \ \text{...}\hspace{0.05cm})$&nbsp; &#8658; &nbsp; because of&nbsp; $\underline{u} = (1, 0, 0, 0, 0, \ \text{...}\hspace{0.05cm})$&nbsp; identical to the impulse response&nbsp; $\underline{g}$ &nbsp; &#8658; &nbsp; memory&nbsp; $m = 2$.<br>
<br clear=all>
The lower graph applies accordingly to a so-called&nbsp; &raquo;'''RSC code'''&laquo;&nbsp; $($"Recursive Systematic Convolutional"$)$&nbsp; with
[[File:EN_KC_T_4_3_S2b_v1.png|right|frame|RSC turbo code and state transition diagram|class=fit]]
:$$\mathbf{G}(D) = \big [1, \ (1+ D^2)/(1 + D + D^2)\big ].$$
*Here the sequence
:$$S_0 &#8594; S_1 &#8594; S_3 &#8594; S_2 &#8594; S_1 &#8594; S_3 &#8594; S_2 &#8594;  \text{...}$$
:leads to the impulse response
:$$\underline{g} = (1, 1, 1, 0, 1, 1, 0, 1, 1, \ \text{...}\hspace{0.05cm}).$$
*This impulse response continues to infinity due to the loop&nbsp; $S_1 &#8594; S_3 &#8594; S_2 &#8594; S_1$.&nbsp; This enables or facilitates the iterative decoding.<br>
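This infinite impulse response can be checked with a short sketch. From $P(D) \cdot (1 + D + D^2) = U(D) \cdot (1 + D^2)$ the parity recursion $p_i = u_i + u_{i-2} + p_{i-1} + p_{i-2}$ (modulo 2) follows; a zero initial state and zero-padding of the input are assumed here:

```python
def rsc_parity(u, n_out):
    """Parity of the RSC code G(D) = (1 + D^2)/(1 + D + D^2):
    p_i = u_i + u_{i-2} + p_{i-1} + p_{i-2} (mod 2), zero initial state."""
    u = list(u) + [0] * max(0, n_out - len(u))      # zero-pad the input
    p = []
    for i in range(n_out):
        s = u[i] + (p[i - 1] if i >= 1 else 0)
        if i >= 2:
            s += u[i - 2] + p[i - 2]
        p.append(s % 2)
    return p

# a single "1" at the input yields the periodically continuing impulse response
print(rsc_parity([1], 9))    # [1, 1, 1, 0, 1, 1, 0, 1, 1]
```
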
  
More details on the examples in this section can be found in&nbsp; [[Aufgaben:Exercise_4.08:_Repetition_to_the_Convolutional_Codes|"Exercise 4.8"]]&nbsp; and&nbsp; [[Aufgaben:Exercise_4.09:_Recursive_Systematic_Convolutional_Codes|"Exercise 4.9"]].<br>
  
== Second requirement for turbo codes: Interleaving ==
<br>
It is obvious that for&nbsp; $G_1(D) = G_2(D)$&nbsp; an interleaver&nbsp; $(\Pi)$&nbsp; is essential.
*Another reason is that the a-priori information is assumed to be independent.

*Thus,&nbsp; adjacent&nbsp; $($and therefore possibly strongly dependent$)$&nbsp; bits should be far apart for the respective other sub-code.<br>
  
Indeed,&nbsp; for any RSC code &nbsp; &#8658; &nbsp; infinite impulse response&nbsp; $\underline{g}$ &nbsp; &#8658; &nbsp; rational transfer function&nbsp; $G(D)$,&nbsp; there are certain input sequences&nbsp; $\underline{u}$&nbsp; that lead to very short parity sequences &nbsp; $\underline{p} = \underline{u} &#8727; \underline{g}$ &nbsp; with low Hamming weight&nbsp; $w_{\rm H}(\underline{p})$.
  
For example,&nbsp; such a sequence is given in the second graphic of the&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#First_requirement_for_turbo_codes:_Recursive_component_codes| "last section"]]: &nbsp;
:$$\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm}).$$
Then the output sequence is:
  
::<math>P(D) = U(D) \cdot G(D) = (1+D+D^2) \cdot \frac{1+D^2}{1+D+D^2}= 1+D^2\hspace{0.3cm}\Rightarrow\hspace{0.3cm}  \underline{p}= (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.05cm}\hspace{0.05cm})\hspace{0.05cm}. </math>
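The same parity recursion as in the last section (a sketch under the zero-initial-state assumption) confirms numerically that this input of Hamming weight&nbsp;3 produces a parity sequence of Hamming weight&nbsp;2 only:

```python
def rsc_parity(u):
    # p_i = u_i + u_{i-2} + p_{i-1} + p_{i-2} (mod 2) for G(D) = (1+D^2)/(1+D+D^2)
    p = []
    for i in range(len(u)):
        s = u[i] + (p[i - 1] if i >= 1 else 0)
        s += (u[i - 2] + p[i - 2]) if i >= 2 else 0
        p.append(s % 2)
    return p

u = [1, 1, 1, 0, 0, 0, 0, 0]     # "problem sequence"  U(D) = 1 + D + D^2
p = rsc_parity(u)
print(p, sum(p))                 # [1, 0, 1, 0, 0, 0, 0, 0]  =>  w_H(p) = 2
```
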
  
{{BlaueBox|TEXT=
$\text{Meaning and purpose:}$&nbsp;
By&nbsp; $\rm interleaving$,&nbsp; it is now ensured with high probability that this sequence&nbsp; $\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm})$&nbsp; is converted into a sequence&nbsp; $\underline{u}_{\pi}$,
*which also contains only three&nbsp; "ones",<br>
  
*but whose output sequence is characterized by a large Hamming weight&nbsp; $w_{\rm H}(\underline{p})$.<br>
  
Thus,&nbsp; the decoder succeeds in resolving such&nbsp; "problem sequences"&nbsp; iteratively.}}<br>
  
For the following description of the interleavers we use the assignment&nbsp; $I_{\rm In} &#8594; I_{\rm Out}$.&nbsp; These labels stand for the indices of the input and output sequences,&nbsp; respectively.&nbsp; The interleaver size is denoted by&nbsp; $I_{\rm max}$.<br>
[[File:P ID3048 KC T 4 3 S3a v5.png|right|frame|Clarification of block interleaving]]
  
There are several,&nbsp; fundamentally different interleaver concepts:<br>
  
&rArr; &nbsp; In a&nbsp; &raquo;<b>block interleaver</b>&laquo;&nbsp; one fills a matrix with&nbsp; $N_{\rm C}$&nbsp; columns and&nbsp; $N_{\rm R}$&nbsp; rows column-by-column and reads the matrix out row-by-row.&nbsp; Thus an information block of&nbsp; $I_{\rm max} = N_{\rm C} \cdot N_{\rm R}$&nbsp; bits is deterministically scrambled.<br>
  
The right graph illustrates the principle for&nbsp; $I_{\rm max} = 64$ &nbsp; &#8658; &nbsp; $1 &#8804; I_{\rm In} \le 64$&nbsp; and&nbsp; $1 &#8804; I_{\rm Out} \le 64$.&nbsp; The order of the output bits is then: &nbsp;  
:$$1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18, \ \text{...}\hspace{0.05cm} , 48, 56, 64.$$
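This write-column-wise / read-row-wise rule can be reproduced in a few lines; the $8 × 8$ matrix shape for $I_{\rm max} = 64$ is an assumption consistent with the order quoted above:

```python
def block_interleave(seq, n_rows, n_cols):
    # write the input column by column, then read the matrix row by row
    assert len(seq) == n_rows * n_cols
    return [seq[c * n_rows + r] for r in range(n_rows) for c in range(n_cols)]

out = block_interleave(list(range(1, 65)), 8, 8)
print(out[:11])   # [1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18]
print(out[-3:])   # [48, 56, 64]
```
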
  
More information on block interleaving is available in&nbsp; [[Aufgaben:Exercise_4.08Z:_Basics_about_Interleaving|"Exercise 4.8Z"]].<br>
  
[[File:P ID3050 KC T 4 3 S3b v5.png|left|frame|Clarification of $S$&ndash;random interleaving]]
<br><br><br><br>
&rArr; &nbsp; Turbo codes often use the&nbsp; &raquo;'''$S$&ndash;random interleaver'''&laquo;.&nbsp; This pseudo-random algorithm with the parameter&nbsp; "$S$"&nbsp; guarantees
*that two indices less than&nbsp; $S$&nbsp; apart at the input
*occur at least at the distance&nbsp; $S + 1$&nbsp; at the output.
  
The left graph shows the&nbsp; $S$&ndash;random characteristic&nbsp; $I_{\rm Out}(I_{\rm In})$&nbsp; for&nbsp; $I_{\rm max} = 640$.<br>
  
#This algorithm is also deterministic,&nbsp; and one can undo the scrambling in the decoder &nbsp; &#8658; &nbsp; "de&ndash;interleaving".
#The distribution nevertheless seems&nbsp; "more random"&nbsp; than with block interleaving.
<br clear=all>
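The construction of such a permutation is not specified on this page; a common rejection-sampling sketch that enforces the stated property is shown below (the restart logic and the parameters are illustrative assumptions, not the algorithm actually used for the graph):

```python
import random

def s_random_permutation(n, S, max_restarts=1000):
    """Place input indices in random order; accept a candidate only if it
    differs by more than S from every index in the previous S output slots."""
    for _ in range(max_restarts):
        remaining, perm = list(range(n)), []
        while remaining:
            candidates = [j for j in remaining
                          if all(abs(j - k) > S for k in perm[-S:])]
            if not candidates:
                break                        # dead end -> start over
            j = random.choice(candidates)
            perm.append(j)
            remaining.remove(j)
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found")

perm = s_random_permutation(64, 4)
# any S consecutive output positions carry input indices more than S apart
assert all(abs(perm[i] - perm[k]) > 4
           for i in range(64) for k in range(max(0, i - 4), i))
```
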
  
== Bit-wise iterative decoding of a turbo code ==
<br>
The decoding of a turbo code is basically done as described in the section&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Bit-wise_soft-in_soft-out_decoding|"Bit-wise Soft&ndash;in Soft&ndash;out Decoding"]].&nbsp; From the following graphic,&nbsp; however,&nbsp; you can also see some special features that apply only to the turbo decoder.<br>
  
We assume a rate&nbsp; $1/3$&nbsp; turbo code according to the description in the&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Basic_structure_of_a_turbo_code| "first section of this page"]].&nbsp; The color scheme for the information sequence&nbsp; $\underline{u}$&nbsp; and the two parity sequences&nbsp; $\underline{p}_1$&nbsp; and&nbsp; $\underline{p}_2$&nbsp; is also adapted from the earlier graphics.&nbsp; Further,&nbsp; it should be noted:
[[File:EN_KC_T_4_3_S4a_v2.png|right|frame|Iterative turbo decoder for rate&nbsp; $R = 1/3$ |class=fit]]
  
*The received vectors&nbsp; $\underline{y}_0,\hspace{0.15cm} \underline{y}_1,\hspace{0.15cm} \underline{y}_2$&nbsp; are real-valued and provide the respective soft information with respect to the information sequence&nbsp; $\underline{u}$&nbsp; and the sequences&nbsp; $\underline{p}_1$&nbsp; $($parity for encoder 1$)$&nbsp; and&nbsp; $\underline{p}_2$&nbsp; $($parity for encoder 2$)$.<br>
  
*The decoder 1 receives the required intrinsic information in the form of the&nbsp; $L$ values&nbsp; $L_{\rm K,\hspace{0.03cm} 0}$&nbsp; $($from&nbsp; $\underline{y}_0)$&nbsp; and&nbsp; $L_{\rm K,\hspace{0.03cm}1}$&nbsp; $($from&nbsp; $\underline{y}_1)$&nbsp; about each bit of the sequences&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{p}_1$.
 
  
*In the second decoder,&nbsp; the scrambling of the information sequence&nbsp; $\underline{u}$&nbsp; must be taken into account.&nbsp; Thus,&nbsp; the&nbsp; $L$ values to be processed are&nbsp; $\pi(L_{\rm K, \hspace{0.03cm}0})$&nbsp; and&nbsp; $L_{\rm K, \hspace{0.03cm}2}$.<br>
  
*In the general&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#Basic_structure_of_concatenated_coding_systems|$\text{SISO decoder}$]]&nbsp; the information exchange between the two component decoders was controlled with&nbsp; $\underline{L}_{\rm A, \hspace{0.03cm}2} = \underline{L}_{\rm E, \hspace{0.03cm}1}$&nbsp; and&nbsp; $\underline{L}_{\rm A, \hspace{0.03cm}1} = \underline{L}_{\rm E, \hspace{0.03cm}2}$.  
*Written out at the bit level,&nbsp; these equations read&nbsp; $($for&nbsp; $1 &#8804; i &#8804; n)$:
  
::<math>L_{\rm A, \hspace{0.03cm}2}(i) = L_{\rm E, \hspace{0.03cm}1}(i)
\hspace{0.5cm}{\rm resp.}\hspace{0.5cm}
L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}(i) \hspace{0.03cm}.</math>
  
*In the case of the turbo decoder,&nbsp; the interleaver must also be taken into account in this information exchange.&nbsp; Then for&nbsp; $i = 1, \ \text{...}\hspace{0.05cm} , \ n$:
  
::<math>L_{\rm A, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) = L_{\rm E, \hspace{0.03cm}1}(i)
\hspace{0.5cm}{\rm resp.}\hspace{0.5cm}
L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) \hspace{0.05cm}.</math>
  
*In the above model,&nbsp; the a-posteriori&nbsp; $L$ value is&nbsp; $($arbitrarily$)$&nbsp; output by decoder 1.&nbsp; This can be justified by the fact that one iteration stands for a twofold information exchange.<br>
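The index bookkeeping of the two equations above can be made concrete in a small sketch; `pi` is a toy permutation chosen for illustration, not the interleaver of the earlier sections:

```python
def exchange_extrinsic(L_E1, L_E2, pi):
    """One information exchange: L_A2(pi(i)) = L_E1(i), L_A1(i) = L_E2(pi(i))."""
    n = len(pi)
    L_A2 = [0.0] * n
    for i in range(n):
        L_A2[pi[i]] = L_E1[i]                 # interleave extrinsic values of decoder 1
    L_A1 = [L_E2[pi[i]] for i in range(n)]    # de-interleave those of decoder 2
    return L_A1, L_A2

pi = [2, 0, 3, 1]                             # toy interleaver
L_A1, L_A2 = exchange_extrinsic([0.5, -1.0, 2.0, 0.1], [1.5, -0.2, 0.8, 0.0], pi)
print(L_A2)   # [-1.0, 0.1, 0.5, 2.0]
```
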
  
== Performance of the turbo codes ==
<br>
[[File:EN_KC_T_4_3_S5b_v2.png|right|frame|Bit and block error probability of turbo codes at AWGN channel]]
We consider,&nbsp; as in the last sections,&nbsp; the rate&nbsp; $1/3$&nbsp; turbo code
  
*with equal filter functions &nbsp; $G_1(D) = G_2(D) = (1 + D^2)/(1 + D + D^2)$ &nbsp; &#8658; &nbsp; memory&nbsp; $m = 2$,<br>
  
*with the interleaver size&nbsp; $K$;&nbsp; initially&nbsp; $K = 10000$&nbsp; applies,&nbsp; and<br>
*a sufficiently large number of iterations&nbsp; $(I = 20)$,&nbsp; which is almost equivalent in result to&nbsp; "$I &#8594; &#8734;$".<br><br>
  
The two RSC component codes are each terminated after&nbsp; $K$&nbsp; bits.&nbsp; Therefore we group
  
* the information sequence&nbsp; $\underline{u}$&nbsp; into blocks of&nbsp; $K$&nbsp; information bits each,&nbsp; and<br>
*the encoded sequence&nbsp; $\underline{x}$&nbsp; into blocks of&nbsp; $N = 3 \cdot K$&nbsp; encoded bits each.<br><br>
  
All results apply to the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN channel}$]]. Data are taken from the lecture notes&nbsp; [Liv15]<ref>Liva, G.:&nbsp; Channels Codes for Iterative Decoding.&nbsp; Lecture notes, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.</ref>.
[[File:P ID3053 KC T 4 3 S5a v7.png|right|frame|Bit and block error probabilities of turbo codes]]
 
The graph shows as a green curve the&nbsp; &raquo;'''block error probability'''&laquo; &nbsp; &#8658; &nbsp; ${\rm Pr(block\:error)}$&nbsp; in double logarithmic representation depending on the AWGN characteristic&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$.&nbsp; It can be seen:
  
# The points marked with crosses resulted from the weight functions of the turbo code using the&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|$\text{Union Bound}$]].&nbsp; The simulation results &nbsp; &ndash;  in the graph marked by circles &ndash; &nbsp; are almost congruent with the analytically calculated values.<br>
# The&nbsp; "Union Bound"&nbsp; is only an upper bound based on maximum likelihood decoding&nbsp; $\rm (ML)$.&nbsp; The iterative decoder is suboptimal&nbsp; (i.e.,&nbsp; worse than "maximum likelihood").&nbsp; These two effects seem to almost cancel each other out.<br>
# At&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 1 \ \rm dB$&nbsp; there is a kink in the&nbsp; (green)&nbsp; curve,&nbsp; corresponding to the slope change of&nbsp; ${\rm Pr(bit\:error)}$ &nbsp; &#8658; &nbsp; blue curve. <br><br>
  
The blue crosses&nbsp; $($"calculation"$)$&nbsp; and the blue circles&nbsp; $($"simulation"$)$&nbsp; denote the&nbsp; &raquo;'''bit error probability'''&laquo;&nbsp; for the interleaver size&nbsp; $K = 10000$.&nbsp; The&nbsp; (dash-dotted)&nbsp; curve for uncoded transmission is drawn as a comparison curve.&nbsp;
  
&rArr; &nbsp; Regarding these blue curves,&nbsp; it should be noted:
*For small abscissa values,&nbsp; the curve slope in the selected plot is nearly linear and sufficiently steep.&nbsp; For example,&nbsp; for&nbsp; ${\rm Pr(bit\:error)} = 10^{-5}$&nbsp; one needs about&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0.8 \ \rm dB$.<br>
  
*Compared to the&nbsp; [[Channel_Coding/Information_Theoretical_Limits_of_Channel_Coding#Channel_capacity_of_the_AWGN_model|$\text{Shannon bound}$]],&nbsp; which for code rate&nbsp; $R = 1/3$&nbsp; lies at&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx -0.55 \ \rm dB$,&nbsp; our standard turbo code&nbsp; $($with memory&nbsp; $m = 2)$&nbsp; is only about&nbsp; $1.35 \ \rm dB$&nbsp; away.<br>
  
*From&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0.5 \ \rm dB$&nbsp; the curve runs flatter.&nbsp; From&nbsp; $\approx 1.5 \ \rm dB$&nbsp; the curve is again&nbsp; (nearly)&nbsp; linear with lower slope.&nbsp; For&nbsp; ${\rm Pr(bit\:error)} = 10^{-7}$&nbsp; one needs about&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 3 \ \rm dB$.<br><br>
  
&rArr; &nbsp; We now try to explain the flatter drop in bit error probability with larger&nbsp; $E_{\rm B}/N_0$.&nbsp; This is called an&nbsp; &raquo;$\text{error floor}$&laquo;:
  
#The reason for this asymptotically worse behavior with better channel&nbsp; $($in the example: &nbsp; from&nbsp; $10 \cdot {\rm lg} \, E_{\rm B}/N_0 \ge 2 \ \rm dB)$&nbsp; is the period&nbsp; $P$&nbsp; of the encoder impulse response&nbsp; $\underline{g}$,&nbsp; as demonstrated in the section&nbsp; [[Channel_Coding/The_Basics_of_Turbo_Codes#Second_requirement_for_turbo_codes:_Interleaving|$\rm Interleaving$]]&nbsp; and explained with examples in&nbsp; [[Aufgaben:Exercise_4.10:_Turbo_Encoder_for_UMTS_and_LTE|"Exercise 4.10"]]. <br>
#For&nbsp; $m = 2$,&nbsp; the period is&nbsp; $P = 2^m -1 = 3$.&nbsp; Thus,&nbsp; for&nbsp; $\underline{u} = (1, 1, 1)$ &nbsp; &#8658; &nbsp; $w_{\rm H}(\underline{u}) = 3$&nbsp; the parity sequence is bounded: &nbsp; $\underline{p} = (1, 0, 1)$ &nbsp; &#8658; &nbsp; $w_{\rm H}(\underline{p}) = 2$,&nbsp; despite the unbounded impulse response&nbsp; $\underline{g}$.<br>
#The sequence &nbsp; $\underline{u} = (0, \ \text{...}\hspace{0.05cm} , \ 0, \ 1, \ 0, \ 0, \ 1, \ 0, \ \text{...}\hspace{0.05cm})$ &nbsp; &#8658; &nbsp; $U(D) = D^x \cdot (1 + D^P)$&nbsp; also leads to a small Hamming weight&nbsp; $w_{\rm H}(\underline{p})$&nbsp; at the output,&nbsp; which complicates the iterative decoding process.<br>
#Some remedy is provided by the interleaver,&nbsp; which ensures that the two sequences&nbsp; $\underline{p}_1$&nbsp; and&nbsp; $\underline{p}_2$&nbsp; are not simultaneously burdened with very small Hamming weights&nbsp; $w_{\rm H}(\underline{p}_1)$&nbsp; and&nbsp; $w_{\rm H}(\underline{p}_2)$.<br>
#From the graph you can also see that the bit error probability is inversely proportional to the interleaver size&nbsp; $K$.&nbsp; That means: &nbsp; With large&nbsp; $K$&nbsp; the despreading of unfavorable input sequences works better.<br>
#However,&nbsp; the approximation &nbsp; $K \cdot {\rm Pr(bit\:error) = const.}$ &nbsp; is valid only for larger&nbsp; $E_{\rm B}/N_0$&nbsp; values &nbsp; &#8658; &nbsp; small bit error probabilities.&nbsp; The described effect also occurs for smaller&nbsp; $E_{\rm B}/N_0$&nbsp; values, but then the effects on the bit error probability are smaller.<br>
#The flatter shape of the block error probability&nbsp; $($green curve$)$&nbsp; holds largely independently of the interleaver size&nbsp; $K$,&nbsp; i.e. for&nbsp; $K = 1000$&nbsp; as well as for&nbsp; $K = 10000$.&nbsp; In the range&nbsp; $10 \cdot {\rm lg} \, E_{\rm B}/N_0 > 2 \ \rm dB$&nbsp; single errors dominate,&nbsp; so that here the approximation&nbsp; ${\rm Pr(block\:error)} \approx {\rm Pr(bit\:error)} \cdot K$&nbsp; is valid.<br>
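The finite-parity effect described in the points above can be reproduced with a few lines of Python.&nbsp; The following sketch&nbsp; $($an illustration added here;&nbsp; the function name is chosen freely$)$&nbsp; simulates the RSC encoder&nbsp; $G(D) = (1+D^2)/(1+D+D^2)$&nbsp; bit by bit:

```python
def rsc_parity(u, n_out):
    """Parity sequence of the RSC encoder G(D) = (1+D^2)/(1+D+D^2).

    Feedback taps 1 + D + D^2, feedforward taps 1 + D^2, memory m = 2.
    """
    s1 = s2 = 0                      # shift register: s1 = a delayed once, s2 = twice
    p = []
    for k in range(n_out):
        u_k = u[k] if k < len(u) else 0
        a = u_k ^ s1 ^ s2            # feedback sum (denominator 1 + D + D^2)
        p.append(a ^ s2)             # feedforward taps (numerator 1 + D^2)
        s1, s2 = a, s1               # shift the register
    return p

# Impulse response g = (1, 1, 1, 0, 1, 1, 0, 1, 1, ...): infinite, period P = 3
print(rsc_parity([1], 10))           # [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]

# u = (1, 1, 1)  <->  U(D) = 1 + D + D^2 cancels the denominator:
print(rsc_parity([1, 1, 1], 10))     # [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]  ->  w_H(p) = 2

# Two "ones" spaced by the period P = 3 also give a finite, low-weight parity:
print(rsc_parity([1, 0, 0, 1], 10))  # [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  ->  w_H(p) = 4
```

The last two calls show exactly the problem sequences discussed above:&nbsp; inputs of the form&nbsp; $U(D) = D^x \cdot (1 + D^P)$&nbsp; terminate the otherwise infinite parity sequence.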
  
{{BlaueBox|TEXT= 
$\text{Conclusions:}$&nbsp; The bit error probability and block error probability curves shown here as examples also apply qualitatively for&nbsp; $m > 2$,&nbsp; e.g.&nbsp; for the UMTS and LTE turbo codes&nbsp; $($each with&nbsp; $m = 3)$,&nbsp; which are analyzed in&nbsp; [[Aufgaben:Exercise_4.10:_Turbo_Encoder_for_UMTS_and_LTE|"Exercise 4.10"]].&nbsp; However,&nbsp; some quantitative differences emerge:
*The curve is steeper for small&nbsp; $E_{\rm B}/N_0$&nbsp; and the distance from the Shannon bound is slightly less than in the example shown here for&nbsp; $m = 2$.<br>
  
*Also for larger&nbsp; $m$&nbsp; there is an&nbsp; "error floor".&nbsp; However,&nbsp; the kink in the displayed curves then occurs later,&nbsp; i.e. at smaller error probabilities.}}<br>
  
== Serial concatenated turbo codes – SCCC ==
<br>
The turbo codes considered so far are sometimes referred to as&nbsp; "parallel concatenated convolutional codes"&nbsp; $\rm (PCCC)$.  
  
Some years after Berrou's invention,&nbsp; "serial concatenated convolutional codes"&nbsp; $\rm (SCCC)$&nbsp; were also proposed by other authors according to the following diagram.
[[File:EN_KC_T_4_3_S7a_v2.png|right|frame|Serial concatenated convolutional codes:&nbsp; Encoder and decoder<br><br><br><br><br> |class=fit]]
  
*The information sequence&nbsp; $\underline{u}$&nbsp; is applied to the outer convolutional encoder&nbsp; $\mathcal{C}_1$.&nbsp; Let its output sequence be&nbsp; $\underline{c}$. <br>
  
*After the interleaver&nbsp; $(\Pi)$&nbsp; follows the inner convolutional encoder&nbsp; $\mathcal{C}_2$.&nbsp; The encoded sequence is called&nbsp; $\underline{x}$.<br>
*The resulting code rate is&nbsp; $R = R_1 \cdot R_2$.&nbsp; For rate&nbsp; $1/2$&nbsp; component codes: &nbsp; $R = 1/4$.<br><br>
  
The bottom diagram shows the SCCC decoder and illustrates the processing of&nbsp; $L$&ndash;values and the exchange of extrinsic information between the two component decoders:
*The inner decoder&nbsp; $($code&nbsp; $\mathcal{C}_2)$&nbsp; receives the intrinsic information&nbsp; $\underline{L}_{\rm K}(\underline{x})$&nbsp; from the channel and a-priori information&nbsp; $\underline{L}_{\rm A}(\underline{w})$&nbsp; with&nbsp; $\underline{w} = \pi(\underline{c})$&nbsp; from the outer decoder&nbsp; $($after interleaving$)$&nbsp; and delivers the extrinsic information&nbsp; $\underline{L}_{\rm E}(\underline{w})$&nbsp; to the outer decoder.<br>
*The outer decoder&nbsp; $(\mathcal{C}_1)$&nbsp; processes the a-priori information&nbsp; $\underline{L}_{\rm A}(\underline{c})$, i.e. the extrinsic information&nbsp; $\underline{L}_{\rm E}(\underline{w})$&nbsp; after de&ndash;interleaving.&nbsp; It provides the extrinsic information&nbsp; $\underline{L}_{\rm E}(\underline{c})$.<br>
  
*After a sufficient number of iterations,&nbsp; the desired decoding result is obtained in the form of the a-posteriori&nbsp; $L$&ndash;values&nbsp; $\underline{L}_{\rm APP}(\underline{u})$&nbsp; of the information sequence&nbsp; $\underline{u}$.<br><br>
  
{{BlaueBox|TEXT= 
$\text{Conclusions:}$&nbsp; Important for serial concatenated convolutional codes&nbsp; $\rm (SCCC)$&nbsp; is that the inner code&nbsp; $\mathcal{C}_2$&nbsp; is recursive&nbsp; $($i.e. an RSC code$)$.&nbsp; The outer code&nbsp; $\mathcal{C}_1$&nbsp; may also be non-recursive.
  
Regarding the performance of such codes,&nbsp;  it should be noted:
*An SCCC is often better than a PCCC &nbsp; &#8658; &nbsp; lower error floor for large&nbsp; $E_{\rm B}/N_0$.&nbsp; This statement already holds for SCCC component codes with memory&nbsp; $m = 2$&nbsp; $($only four trellis states$)$,&nbsp; while for a PCCC the memory should be&nbsp; $m = 3$&nbsp; or&nbsp; $m = 4$&nbsp; $($eight or sixteen trellis states,&nbsp; respectively$)$.<br>
  
*In the lower range&nbsp;  $($small &nbsp;$E_{\rm B}/N_0)$&nbsp; on the other hand,&nbsp;  the best serial concatenated convolutional code is several tenths of a decibel worse than the comparable turbo code according to Berrou&nbsp; $\rm (PCCC)$.&nbsp; The distance from the Shannon bound is correspondingly larger.}}<br><br>
  
== Some application areas for turbo codes ==
<br>
[[File:EN_KC_T_4_3_S7b_v2.png|right|frame|Some standardized turbo codes compared to the Shannon bound]]
Turbo codes are used in almost all newer communication systems.&nbsp; The graph shows their performance with the AWGN channel compared to&nbsp; [[Channel_Coding/Information_Theoretical_Limits_of_Channel_Coding#Channel_capacity_of_the_AWGN_model|$\text{Shannon's channel capacity}$]]&nbsp; $($blue curve$)$.<br>
  
The green highlighted area&nbsp; "BPSK"&nbsp; indicates the Shannon bound for digital systems with binary input,&nbsp; with which according to the&nbsp; [[Channel_Coding/Information_Theoretical_Limits_of_Channel_Coding#Channel_coding_theorem_and_channel_capacity|$\text{channel coding theorem}$]]&nbsp; an error-free transmission is just possible.<br>
It should be noted that the channel codes of standardized systems drawn in here are based on the bit error rate&nbsp; $\rm BER= 10^{-5}$,&nbsp; while the information-theoretical capacity curves&nbsp; $($Shannon,&nbsp; BPSK$)$&nbsp; apply to the error probability&nbsp; $0$.
*The blue rectangles mark the turbo codes for UMTS.&nbsp; These are rate&nbsp; $1/3$&nbsp; codes with memory&nbsp; $m = 3$.&nbsp; The performance depends strongly on the interleaver size.&nbsp; With&nbsp; $K = 6144$&nbsp; this code is only about&nbsp; $1 \ \rm dB$&nbsp; to the right of the Shannon bound.&nbsp; LTE uses the same turbo codes.&nbsp; Minor differences occur due to the different interleaver.<br>
  
*The red crosses mark the turbo codes according to&nbsp;  $\rm CCSDS$&nbsp;  $($"Consultative Committee for Space Data Systems"$)$,&nbsp;  developed for use in space missions.&nbsp;  This class assumes the fixed interleaver size&nbsp; $K = 6920$&nbsp; and provides codes of rate&nbsp; $1/6$,&nbsp; $1/4$,&nbsp; $1/3$&nbsp; and&nbsp; $1/2$.&nbsp; The lowest code rates allow operation at&nbsp; $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0 \ \rm dB$.<br>
*The green circle represents a very simple&nbsp; "Repeat Accumulate"&nbsp; $\rm (RA)$&nbsp; code,&nbsp; a serial concatenated turbo code.&nbsp; The following is an outline of its structure: &nbsp; The outer encoder uses a&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Repetition_Codes|$\text{repetition code}$]],&nbsp; in the drawn example with rate&nbsp; $R = 1/3$.&nbsp; The interleaver is followed by an RSC code with&nbsp; $G(D) = 1/(1 + D)$ &nbsp; &#8658; &nbsp; memory&nbsp; $m = 1$.&nbsp; When executed systematically,&nbsp; the total code rate is&nbsp; $R = 1/4$&nbsp; $($three parity bits are added to each information bit$)$.
  
[[File:EN_KC_T_4_3_S7c_v2.png|left|frame|Repeat Accumulate&nbsp; $\rm (RA)$&nbsp; code with rate&nbsp; $1/4$|class=fit]]
  
*From the graph on the top right,&nbsp; one can see that this simple RA code is surprisingly good.  
  
*With the interleaver size&nbsp; $K = 300000$&nbsp; the distance from the Shannon bound is only about &nbsp;$1.5 \ \rm dB$&nbsp; $($green dot$)$.
  
*Similar repeat accumulate codes are provided for the&nbsp; "DVB Return Channel Terrestrial"&nbsp; $\rm (RCS)$&nbsp; standard and for the WiMax standard&nbsp; $($IEEE 802.16$)$.
<br clear=all>
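The repeat-accumulate structure outlined above can be sketched in a few lines of Python.&nbsp; This is only an illustration:&nbsp; the fixed pseudo-random interleaver and all function names are assumptions made here,&nbsp; not part of any standard:

```python
import random

def ra_encode(u, seed=42):
    """Sketch of a systematic rate-1/4 repeat-accumulate (RA) encoder:
    rate-1/3 repetition code -> interleaver -> accumulator G(D) = 1/(1+D)."""
    rep = [bit for bit in u for _ in range(3)]   # outer repetition code, R = 1/3
    perm = list(range(len(rep)))                 # assumed pseudo-random interleaver
    random.Random(seed).shuffle(perm)
    w = [rep[i] for i in perm]                   # interleaved sequence
    p, acc = [], 0
    for bit in w:                                # accumulator: p_k = p_{k-1} XOR w_k
        acc ^= bit
        p.append(acc)
    return list(u) + p                           # systematic: K info bits + 3K parity bits

x = ra_encode([1, 0, 1, 1])
print(len(x))                                    # 16 bits from 4 info bits  ->  R = 1/4
```

The accumulator line implements the rate-$1$&nbsp; inner RSC code&nbsp; $G(D) = 1/(1 + D)$:&nbsp; each output bit is the running modulo-2 sum of all interleaved input bits so far.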
== Exercises for the chapter ==
 
<br>
 
[[Aufgaben:Exercise_4.08:_Repetition_to_the_Convolutional_Codes|Exercise 4.08: Repetition to the Convolutional Codes]]
  
[[Aufgaben:Exercise_4.08Z:_Basics_about_Interleaving|Exercise 4.08Z: Basics about Interleaving]]
  
[[Aufgaben:Exercise_4.09:_Recursive_Systematic_Convolutional_Codes|Exercise 4.09: Recursive Systematic Convolutional Codes]]
  
[[Aufgaben:Exercise_4.10:_Turbo_Encoder_for_UMTS_and_LTE|Exercise 4.10: Turbo Encoder for UMTS and LTE]]
  
==References==
 
<references/>
 
  
 
{{Display}}
 

Latest revision as of 15:58, 23 January 2023

Basic structure of a turbo code


All communications systems current today  $($2017$)$,  such as 

  • $\rm UMTS$  $($"Universal Mobile Telecommunications System"   ⇒   3rd generation mobile communications$)$ and 
  • $\rm LTE$  $($"Long Term Evolution"  ⇒   4th generation mobile communications$)$


use the concept of  $\text{symbol-wise iterative decoding}$.  This is directly related to the invention of  »turbo codes«  in 1993 by  $\text{Claude Berrou}$,  $\text{Alain Glavieux}$  and  $\text{Punya Thitimajshima}$,  because it was only with these codes that the Shannon bound could be approached with reasonable decoding effort.

Turbo codes result from the parallel or serial concatenation of convolutional codes. 

Rate  $1/3$  turbo encoder  $($parallel concatenation of two rate  $1/2$  convolutional codes$)$

The graphic shows the parallel concatenation of two codes,  each with the parameters  $k = 1, \ n = 2$   ⇒   code rate $R = 1/2$.

In this representation:

  1. $u$  the currently considered bit of the information sequence  $\underline{u}$,
  2. $x_{i,\hspace{0.03cm}j}$  the currently considered bit at the output  $j$  of encoder  $i$ 
    $($with  $1 ≤ i ≤ 2, \hspace{0.2cm} 1 ≤ j ≤ 2)$,
  3. $\underline{X} = (x_{1,\hspace{0.03cm}1}, \ x_{1,\hspace{0.03cm}2}, \ x_{2,\hspace{0.03cm}1}, \ x_{2,\hspace{0.03cm}2})$  the code word for the current information bit  $u$.

The resulting rate of the concatenated coding system is thus  $R = 1/4$. 

If systematic component codes are used,  the second shown model results.  The modifications from the top graph to the graph below can be justified as follows:

  • For systematic codes  $C_1$  and  $C_2$,  both  $x_{1,\hspace{0.03cm}1} = u$  and  $x_{2,\hspace{0.03cm}1} = u$.  Therefore,  one can dispense with the transmission of a redundant bit  $($e.g.  $x_{2,\hspace{0.03cm}2})$.
  • With this reduction,  the result is a rate  $1/3$  turbo code with  $k = 1$  and  $n = 3$.  The code word with the parity bits  $p_1$  $($encoder 1$)$  and  $p_2$  $($encoder 2$)$  is:
$$\underline{X} = \left (u, \ p_1, \ p_2 \right )\hspace{0.05cm}.$$


Further modification of the basic structure of the turbo code


In the following we always assume a still somewhat further modified turbo encoder model:

Rate  $1/3$  turbo encoder with interleaver
  1. As required for the description of convolutional codes,  the input is the information sequence  $\underline{u} = (u_1, \ u_2, \ \text{...}\hspace{0.05cm} , \ u_i , \ \text{...}\hspace{0.05cm} )$  instead of the isolated information bit  $u$.
  2. The code word sequence  $\underline{x} = (\underline{X}_1, \underline{X}_2, \ \text{...}\hspace{0.05cm} , \ \underline{X}_i, \ \text{...}\hspace{0.05cm} )$  is generated.  To avoid confusion,  the code words  $\underline{X}_i = (u, \ p_1, \ p_2)$  with capital letters were introduced in the last section.
  3. The encoders  $\mathcal{C}_1$  and  $\mathcal{C}_2$  are conceived  $($at least conceptually$)$  as  $\text{digital filters}$  and are thus characterized by the  $\text{transfer functions}$  $G_1(D)$  and  $G_2(D)$.
  4. For various reasons  $($see  "two sections ahead"$)$,  the input sequence of the second encoder   ⇒   $\underline{u}_{\pi}$  should be scrambled with respect to the sequence  $\underline{u}$  by an interleaver  $(\Pi)$.
  5. There is then nothing to prevent choosing both encoders the same:  $G_1(D) = G_2(D) = G(D)$.  Without the interleaver,  the correction capability would be extremely limited.

Example sequences at the rate  $1/3$  turbo encoder

$\text{Example 1:}$  The graph shows the different sequences in matching colors.  Please note:

  1. For  $\underline{u}_{\Pi}$  a  $3×4$  interleaver matrix is considered according to  "Exercise 4.8Z".

  2. The parity sequences are obtained according to  $G_1(D) = G_2(D) = 1 + D^2$   ⇒   see  "Exercise 4.8".
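The non-recursive parity generation of this example is a plain modulo-2 convolution.  A short Python sketch  $($the example input sequence is chosen arbitrarily here for illustration$)$:

```python
def fir_parity(u, g=(1, 0, 1)):
    """Non-recursive parity p = u * g (mod 2); g = (1, 0, 1) encodes G(D) = 1 + D^2."""
    return [sum(g[j] * u[k - j] for j in range(len(g)) if 0 <= k - j) % 2
            for k in range(len(u))]

# With G(D) = 1 + D^2 every parity bit is simply p_k = u_k XOR u_{k-2}:
print(fir_parity([1, 0, 1, 1, 0]))   # [1, 0, 0, 1, 1]
```

Because the filter is non-recursive,  a single  "one"  at the input produces only the finite impulse response  $\underline{g} = (1, 0, 1, 0, \ \text{...})$.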


First requirement for turbo codes: Recursive component codes


Non-recursive transfer functions for generating the parity sequences result in a turbo code with an insufficiently large minimum distance.  The reason for this inadequacy is the finite impulse response  $\underline{g} = (1, \ g_2, \ \text{...}\hspace{0.05cm} , \ g_m, \ 0, \ 0, \ \text{...}\hspace{0.05cm} )$  with  $g_2, \ \text{...}\hspace{0.05cm} , \ g_m ∈ \{0, 1\}$.  Here  $m$  denotes the  "memory".

Non-recursive systematic turbo code and state transition diagram



The graph shows the state transition diagram for the example  $\mathbf{G}(D) = \big [1, \ 1 + D^2 \big ]$. The transitions are labeled "$u_i\hspace{0.05cm}|\hspace{0.05cm}u_i p_i$".

  • The sequence  $S_0 → S_1 → S_2 → S_0 → S_0 → \ \text{...}\hspace{0.05cm} \ $  leads with respect to the input to the information sequence  $\underline{u} = (1, 0, 0, 0, 0, \ \text{...}\hspace{0.05cm})$,  and
  • with respect to the second encoded symbol to the sequence   $\underline{p} = (1, 0, 1, 0, 0, \ \text{...}\hspace{0.05cm})$    ⇒   because of  $\underline{u} = (1, 0, 0, 0, 0, \ \text{...}\hspace{0.05cm})$  identical to the  "impulse response"  $\underline{g}$   ⇒   memory  $m = 2$.


The lower graph applies to a so-called  »RSC code«  $($"Recursive Systematic Convolutional"$)$  corresponding to

Recursive systematic turbo code and state transition diagram
$$\mathbf{G}(D) = \big [1, \ (1+ D^2)/(1 + D + D^2)\big ].$$
  • Here the sequence 
$$S_0 → S_1 → S_3 → S_2 → S_1 → S_3 → S_2 → \text{...}$$
leads to the impulse response 
$$\underline{g} = (1, 1, 1, 0, 1, 1, 0, 1, 1, \ \text{...}\hspace{0.05cm}).$$
  • This impulse response continues to infinity due to the loop  $S_1 → S_3 → S_2 → S_1$.  This enables or facilitates the iterative decoding.


More details on the examples in this section can be found in  "Exercise 4.8"  and  "Exercise 4.9".

Second requirement for turbo codes: Interleaving


It is obvious that for  $G_1(D) = G_2(D)$  an interleaver  $(\Pi)$  is essential.

  • Another reason is that the a-priori information is assumed to be independent.
  • Thus,  adjacent  $($and thus possibly strongly dependent$)$  bits for the other sub–code should be far apart.


Indeed,  for any RSC code   ⇒   infinite impulse response  $\underline{g}$   ⇒   rational transfer function  $G(D)$:  There exist certain input sequences  $\underline{u}$  which lead to very short parity sequences   $\underline{p} = \underline{u} ∗ \underline{g}$   with low Hamming weight  $w_{\rm H}(\underline{p})$.

For example,  such a sequence is given in the second graphic of the  "last section" :  

$$\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm}).$$

Then for the output sequence holds:

\[P(D) = U(D) \cdot G(D) = (1+D+D^2) \cdot \frac{1+D^2}{1+D+D^2}= 1+D^2\hspace{0.3cm}\Rightarrow\hspace{0.3cm} \underline{p}= (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.05cm}\hspace{0.05cm})\hspace{0.05cm}. \]
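This polynomial calculation can be verified with elementary GF(2) polynomial arithmetic  $($illustrative sketch;  polynomials are encoded as bit masks with bit  $k$  holding the coefficient of  $D^k$,  all names chosen freely$)$:

```python
# Sketch: check P(D) = U(D) · G(D) for U(D) = 1 + D + D^2 and
# G(D) = (1 + D^2)/(1 + D + D^2) over GF(2).

def gf2_mul(a, b):
    """Carry-less multiplication of GF(2) polynomials (bit masks)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    """Quotient and remainder of GF(2) polynomial division."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

U   = 0b111      # U(D) = 1 + D + D^2
num = 0b101      # numerator of G(D):   1 + D^2
den = 0b111      # denominator of G(D): 1 + D + D^2
q, r = gf2_divmod(gf2_mul(U, num), den)
print(bin(q), r)   # 0b101 0  =>  P(D) = 1 + D^2, remainder 0
```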

$\text{Meaning and purpose:}$  By  $\rm interleaving$,  it is now ensured with high probability that this sequence  $\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm})$  is converted into a sequence  $\underline{u}_{\pi}$ 

  • which also contains only three  "ones",
  • but whose output sequence is characterized by a large Hamming weight  $w_{\rm H}(\underline{p})$.


Thus,  the decoder succeeds in resolving such  "problem sequences"  iteratively.


For the following description of the interleavers we use the assignment  $I_{\rm In} → I_{\rm Out}$.  These labels stand for the indices of the input and output sequence,  respectively.  The interleaver size is denoted by  $I_{\rm max}$.

Clarification of block interleaving

There are several,  fundamentally different interleaver concepts:

⇒   In a  »block interleaver«  one fills a matrix with  $N_{\rm C}$  columns and  $N_{\rm R}$  rows column-by-column and reads it out row-by-row.  Thus an information block of  $I_{\rm max} = N_{\rm C} \cdot N_{\rm R}$  bits is deterministically scrambled.

The right graph illustrates the principle for  $I_{\rm max} = 64$   ⇒   $1 ≤ I_{\rm In} \le 64$  and  $1 ≤ I_{\rm Out} \le 64$.  The order of the output bits is then:  

$$1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18, \ \text{...}\hspace{0.05cm} , 48, 56, 64.$$
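The read-out order of this  $8×8$  block interleaver can be generated as follows  $($illustrative sketch,  function name chosen freely$)$:

```python
# Sketch: 8x8 block interleaver — fill column-by-column, read row-by-row
# (1-based input indices 1 ... 64 as in the graphic).

def block_interleave(n_cols, n_rows):
    # writing column-by-column puts input index  col*n_rows + row + 1
    # into cell (row, col); reading row-by-row outputs exactly that index
    return [col * n_rows + row + 1
            for row in range(n_rows)
            for col in range(n_cols)]

order = block_interleave(8, 8)
print(order[:11])   # [1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18]
```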

More information on block interleaving is available in the  "Exercise 4.8Z".

Clarification of $S$–random interleaving





⇒   Turbo codes often use the  »$S$–random interleaver«. This pseudo random algorithm with the parameter  "$S$"  guarantees

  • that two indices which are less than  $S$  apart at the input
  • occur at a distance of at least  $S + 1$  at the output.


The left graph shows the  $S$–random characteristic  $I_{\rm Out}(I_{\rm In})$  for  $I_{\rm max} = 640$.

  1. This algorithm is also deterministic,  and one can undo the scrambling in the decoder   ⇒   "De–interleaving".
  2. The distribution still seems  "more random"  than with block interleaving.
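The exact construction algorithm is not given here;  the following Python sketch uses a simple rejection-sampling variant of one common  $S$–random formulation  $($all names chosen freely;  the constraint is checked in the transposed form:  output indices at distance  $\le S$  must stem from input indices at least  $S$  apart$)$:

```python
import random

# Sketch: greedy S-random permutation with restarts (toy construction,
# not the exact algorithm used for the graphic).

def s_random_interleaver(i_max, s, seed=0, max_tries=1000):
    rng = random.Random(seed)
    for _ in range(max_tries):           # restart if the greedy pass gets stuck
        pool = list(range(i_max))
        rng.shuffle(pool)
        out = []
        ok = True
        for _ in range(i_max):
            # next candidate must differ by >= s from the last s outputs
            cand = next((c for c in pool
                         if all(abs(c - out[t]) >= s
                                for t in range(max(0, len(out) - s), len(out)))),
                        None)
            if cand is None:
                ok = False
                break
            out.append(cand)
            pool.remove(cand)
        if ok:
            return out
    raise RuntimeError("no valid permutation found")

pi = s_random_interleaver(64, s=4)
# output neighbors closer than S stem from inputs at least S apart
assert all(abs(pi[t] - pi[t - d]) >= 4
           for t in range(len(pi)) for d in range(1, 5) if t - d >= 0)
```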


Bit-wise iterative decoding of a turbo code


The decoding of a turbo code is basically done as described in section  "Bit-wise Soft–in Soft–out Decoding".  From the following graphic,  however,  you can also see some special features that apply only to the turbo decoder.

We assume a rate  $1/3$  turbo code according to the description in the  "first section of this page".  Also,  the color scheme for the information sequence  $\underline{u}$  and the two parity sequences  $\underline{p}_1$  and  $\underline{p}_2$  is adopted from the earlier graphics.  Further,  it should be noted:

Iterative turbo decoder for rate  $R = 1/3$
  • The received vectors  $\underline{y}_0,\hspace{0.15cm} \underline{y}_1,\hspace{0.15cm} \underline{y}_2$  are real-valued and provide the respective soft information with respect to the information sequence  $\underline{u}$  and the sequences  $\underline{p}_1$  $($parity for encoder 1$)$  and  $\underline{p}_2$  $($parity for encoder 2$)$.
  • The decoder 1 receives the required intrinsic information in the form of the  $L$ values  $L_{\rm K,\hspace{0.03cm} 0}$  $($from  $\underline{y}_0)$  and  $L_{\rm K,\hspace{0.03cm}1}$  $($from  $\underline{y}_1)$  for each bit of the sequences  $\underline{u}$  and  $\underline{p}_1$.
  • In the second decoder,  the scrambling of the information sequence  $\underline{u}$  must be taken into account.  Thus,  the  $L$ values to be processed are  $\pi(L_{\rm K, \hspace{0.03cm}0})$  and  $L_{\rm K, \hspace{0.03cm}2}$.
  • In the general  $\text{SISO decoder}$  the information exchange between the two component decoders was controlled with  $\underline{L}_{\rm A, \hspace{0.03cm}2} = \underline{L}_{\rm E, \hspace{0.03cm}1}$  and  $\underline{L}_{\rm A, \hspace{0.03cm}1} = \underline{L}_{\rm E, \hspace{0.03cm}2}$.
  • Written out at the bit level,  these equations read  $($for  $1 ≤ i ≤ n)$:
\[L_{\rm A, \hspace{0.03cm}2}(i) = L_{\rm E, \hspace{0.03cm}1}(i) \hspace{0.5cm}{\rm resp.}\hspace{0.5cm} L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}(i) \hspace{0.03cm}.\]
  • In the case of the turbo decoder,  the interleaver must also be taken into account in this information exchange.  Then for  $i = 1, \ \text{...}\hspace{0.05cm} , \ n$:
\[L_{\rm A, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) = L_{\rm E, \hspace{0.03cm}1}(i) \hspace{0.5cm}{\rm resp.}\hspace{0.5cm} L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) \hspace{0.05cm}.\]
  • The a-posteriori  $L$ value is  $($arbitrarily$)$  given by decoder 1 in the above model.  This can be justified by the fact that one iteration stands for a twofold information exchange.
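This information exchange including  (de-)interleaving  reduces to pure index bookkeeping,  which the following sketch illustrates  $($no actual BCJR component decoders,  all names chosen freely$)$:

```python
# Sketch: one exchange of extrinsic L-values between the two component
# decoders, L_A2(pi(i)) = L_E1(i) and L_A1(i) = L_E2(pi(i)).

def exchange(L_E1, L_E2, pi):
    """A-priori updates for both decoders from the extrinsic values."""
    n = len(pi)
    L_A2 = [0.0] * n
    L_A1 = [0.0] * n
    for i in range(n):
        L_A2[pi[i]] = L_E1[i]        # interleaving towards decoder 2
        L_A1[i] = L_E2[pi[i]]        # de-interleaving towards decoder 1
    return L_A1, L_A2

pi = [2, 0, 3, 1]                    # toy interleaver
L_A1, L_A2 = exchange([0.5, -1.0, 2.0, 0.0], [1.0, 2.0, 3.0, 4.0], pi)
print(L_A2)                          # [-1.0, 0.0, 0.5, 2.0]
```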

Performance of the turbo codes


Bit and block error probability of turbo codes at AWGN channel

We consider,  as in the last sections,  the rate  $1/3$  turbo code

  • with equal filter functions   $G_1(D) = G_2(D) = (1 + D^2)/(1 + D + D^2)$   ⇒   memory  $m = 2$,
  • with the interleaver size  $K$;  initially,  $K = 10000$  applies,  and
  • a sufficiently large number of iterations  $(I = 20)$,  in terms of results nearly equivalent to  "$I → ∞$".

The two RSC component codes are each terminated after  $K$  bits.  Therefore we group

  • the information sequence  $\underline{u}$  into blocks of  $K$  information bits each,  and
  • the encoded sequence  $\underline{x}$  into blocks of  $N = 3 \cdot K$  encoded bits each.

All results apply to the  $\text{AWGN channel}$. Data are taken from the lecture notes  [Liv15][1].

The graph shows as a green curve the  »block error probability«   ⇒   ${\rm Pr(block\:error)}$  in double logarithmic representation depending on the AWGN characteristic  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$.  It can be seen:

  1. The points marked with crosses resulted from the weight functions of the turbo code using the  $\text{Union Bound}$.  The simulation results   – in the graph marked by circles –   are almost congruent with the analytically calculated values.
  2. The  "Union Bound"  is only an upper bound based on maximum likelihood decoding  $\rm (ML)$.  The iterative decoder is suboptimal  (i.e.,  worse than "maximum likelihood").  These two effects seem to almost cancel each other out.
  3. At  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 1 \ \rm dB$  there is a kink in the  (green)  curve,  corresponding to the slope change of  ${\rm Pr(bit\:error)}$   ⇒   blue curve.

The blue crosses  $($"calculation"$)$  and the blue circles  $($"simulation"$)$  denote the  »bit error probability«  for the interleaver size  $K = 10000$.  The  (dash-dotted)  curve for uncoded transmission is drawn as a comparison curve. 

⇒   To these blue curves is to be noted:

  • For small abscissa values,  the curve slope in the selected plot is nearly linear and sufficiently steep.  For example,  for   ${\rm Pr(bit\, error)} = 10^{-5}$   one needs at least   $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx \, 0.8 \ \rm dB$.
  • Compared to the  $\text{Shannon bound}$,  which for code rate  $R = 1/3$  is  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx \, -0.55 \ \rm dB$,  our standard turbo code   $($with memory  $m = 2)$   is only about  $1.35 \ \rm dB$  away.
  • From  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0.5 \ \rm dB$  the curve runs flatter.  From  $\approx 1.5 \ \rm dB$  the curve is again  (nearly)  linear with lower slope.  For  ${\rm Pr(bit\:error)} = 10^{-7}$  one needs about  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 3 \ \rm dB$.

⇒   We now try to explain the flatter drop in bit error probability with larger  $E_{\rm B}/N_0$.  This is called an  »$\text{error floor}$«:

  1. The reason for this asymptotically worse behavior with better channel  $($in the example:   from  $10 \cdot {\rm lg} \, E_{\rm B}/N_0 \ge 2 \ \rm dB)$  is the period  $P$  of the encoder impulse response  $\underline{g}$,  as demonstrated in the section  $\rm Interleaving$,  and explained in the  "Exercise 4.10"  with examples.
  2. For  $m = 2$,  the period is  $P = 2^m -1 = 3$.  Thus,  for  $\underline{u} = (1, 1, 1)$   ⇒   $w_{\rm H}(\underline{u}) = 3$,  the parity sequence is finite:   $\underline{p} = (1, 0, 1)$   ⇒   $w_{\rm H}(\underline{p}) = 2$,   despite the infinite impulse response  $\underline{g}$.
  3. The sequence   $\underline{u} = (0, \ \text{...}\hspace{0.05cm} , \ 0, \ 1, \ 0, \ 0, \ 1, \ 0, \ \text{...}\hspace{0.05cm})$   ⇒   $U(D) = D^x \cdot (1 + D^P)$  also leads to a small Hamming weight  $w_{\rm H}(\underline{p})$  at the output,  which complicates the iterative decoding process.
  4. Some workaround is provided by the interleaver,  which ensures that both sequences  $\underline{p}_1$  and  $\underline{p}_2$  are not simultaneously loaded by very small Hamming weights  $w_{\rm H}(\underline{p}_1)$  and  $w_{\rm H}(\underline{p}_2)$.
  5. From the graph you can also see that the bit error probability is inversely proportional to the interleaver size  $K$.  That means:   With large  $K$,  the spreading out of unfavorable input sequences works better.
  6. However,  the approximation   $K \cdot {\rm Pr(bit\:error) = const.}$   is valid only for larger  $E_{\rm B}/N_0$  values   ⇒   small bit error probabilities.  The described effect also occurs for smaller  $E_{\rm B}/N_0$  values, but then the effects on the bit error probability are smaller.
  7. The flatter shape of the block error probability  $($green curve$)$  holds largely independent of the interleaver size  $K$,  i.e., for  $K = 1000$  as well as for  $K = 10000$.  In the range  $10 \cdot {\rm lg} \, E_{\rm B}/N_0 > 2 \ \rm dB$  namely single errors dominate,  so that here the approximation  ${\rm Pr(block\: error)} \approx {\rm Pr(bit\:error)} \cdot K$  is valid.
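That  $U(D) = D^x \cdot (1 + D^P)$  with  $P = 3$  produces a short parity sequence can be traced back to the divisibility of  $1 + D^3$  by the feedback polynomial  $1 + D + D^2$,  which a short GF(2) division confirms  $($illustrative sketch,  names chosen freely$)$:

```python
# Sketch: divisibility check (1 + D^3) / (1 + D + D^2) over GF(2);
# polynomials are bit masks, bit k = coefficient of D^k.

def gf2_divmod(a, b):
    """Quotient and remainder of GF(2) polynomial division."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

q, r = gf2_divmod(0b1001, 0b111)     # (1 + D^3) / (1 + D + D^2)
print(bin(q), r)                      # 0b11 0  =>  quotient 1 + D, remainder 0
```

Since the remainder is zero,  the parity transform is  $P(D) = (1 + D)(1 + D^2) = 1 + D + D^2 + D^3$,  i.e. the Hamming weight  $w_{\rm H}(\underline{p}) = 4$  remains small.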


$\text{Conclusions:}$  The bit error probability and block error probability curves shown here as examples also apply qualitatively for  $m > 2$,  e.g.  for the UMTS and LTE turbo codes  $($each with  $m = 3)$,  which are analyzed in  "Exercise 4.10".  However,  some quantitative differences emerge:

  • The curve is steeper for small  $E_{\rm B}/N_0$  and the distance from the Shannon bound is slightly less than in the example shown here for  $m = 2$.
  • Also for larger  $m$  there is an  "error floor".  However,  the kink in the displayed curves then occurs later,  i.e. at smaller error probabilities.


Serial concatenated turbo codes – SCCC


The turbo codes considered so far are sometimes referred to as  "parallel concatenated convolutional codes"  $\rm (PCCC)$.

Some years after Berrou's invention,  "serial concatenated convolutional codes"  $\rm (SCCC)$  were also proposed by other authors according to the following diagram.

Serial concatenated convolutional codes:  Encoder and decoder




  • The information sequence  $\underline{u}$  is applied to the outer convolutional encoder  $\mathcal{C}_1$.  Let its output sequence be  $\underline{c}$.
  • After the interleaver  $(\Pi)$  follows the inner convolutional encoder  $\mathcal{C}_2$.  The encoded sequence is called  $\underline{x}$ .
  • The resulting code rate is  $R = R_1 \cdot R_2$.  For rate  $1/2$  component codes:   $R = 1/4$.

The bottom diagram shows the SCCC decoder and illustrates the processing of  $L$–values and the exchange of extrinsic information between the two component decoders:

  • The inner decoder  $($code  $\mathcal{C}_2)$  receives the intrinsic information  $\underline{L}_{\rm K}(\underline{x})$  from the channel and a-priori information  $\underline{L}_{\rm A}(\underline{w})$  with  $\underline{w} = \pi(\underline{c})$  from the outer decoder  $($after interleaving$)$  and delivers the extrinsic information  $\underline{L}_{\rm E}(\underline{w})$  to the outer decoder.
  • The outer decoder  $(\mathcal{C}_1)$  processes the a-priori information  $\underline{L}_{\rm A}(\underline{c})$, i.e. the extrinsic information  $\underline{L}_{\rm E}(\underline{w})$  after de–interleaving.  It provides the extrinsic information  $\underline{L}_{\rm E}(\underline{c})$.
  • After a sufficient number of iterations,  the desired decoding result is obtained in the form of the a-posteriori  $L$–values  $\underline{L}_{\rm APP}(\underline{u})$  of the information sequence  $\underline{u}$.

$\text{Conclusions:}$  Important for serial concatenated convolutional codes  $\rm (SCCC)$  is that the inner code  $\mathcal{C}_2$  is recursive  $($i.e. an RSC code$)$.  The outer code  $\mathcal{C}_1$  may also be non-recursive.

Regarding the performance of such codes,  it should be noted:

  • An SCCC is often better than a PCCC   ⇒   lower error floor for large  $E_{\rm B}/N_0$.  This statement already holds for SCCC component codes with memory  $m = 2$  $($only four trellis states$)$,  whereas a PCCC requires memory  $m = 3$  or  $m = 4$  $($eight or sixteen trellis states,  respectively$)$.
  • In the lower range  $($small  $E_{\rm B}/N_0)$  on the other hand,  the best serial concatenated convolutional code is several tenths of a decibel worse than the comparable turbo code according to Berrou  $\rm (PCCC)$.  The distance from the Shannon bound is correspondingly larger.



Some application areas for turbo codes


Some standardized turbo codes compared to the Shannon bound

Turbo codes are used in almost all newer communication systems.  The graph shows their performance with the AWGN channel compared to  $\text{Shannon's channel capacity}$  $($blue curve$)$.

The green highlighted area  "BPSK"  indicates the Shannon bound for digital systems with binary input,  with which according to the  $\text{channel coding theorem}$  an error-free transmission is just possible.

It should be noted that the bit error rate  $\rm BER= 10^{-5}$  underlies the drawn-in channel codes of standardized systems,  while the information-theoretical capacity curves  $($Shannon,  BPSK$)$  apply to the error probability  $0$.

  • The blue rectangles mark the turbo codes for UMTS.  These are rate  $1/3$  codes with memory  $m = 3$.  The performance depends strongly on the interleaver size.  With  $K = 6144$  this code is only about  $1 \rm dB$  to the right of the Shannon bound.  LTE uses the same turbo codes.  Minor differences occur due to the different interleaver.
  • The red crosses mark the turbo codes according to  $\rm CCSDS$  $($"Consultative Committee for Space Data Systems"$)$,  developed for use in space missions.  This class assumes the fixed interleaver size  $K = 6920$  and provides codes of rate  $1/6$,  $1/4$,  $1/3$  and  $1/2$.  The lowest code rates allow operation at  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0 \ \rm dB$.
  • The green circle represents a very simple  "Repeat Accumulate"  $\rm (RA)$  code,  a serial concatenated turbo code.  The following is an outline of its structure:   The outer encoder uses a  $\text{repetition code}$,  in the drawn example with rate  $R = 1/3$.  The interleaver is followed by an RSC code with  $G(D) = 1/(1 + D)$   ⇒   memory  $m = 1$.  When executed systematically,  the total code rate is  $R = 1/4$  $($three parity bits are added to each information bit$)$.


Repeat Accumulate  $\rm (RA)$  code with rate  $1/4$


  • From the graph on the top right,  one can see that this simple RA code is surprisingly good.
  • With the interleaver size  $K = 300000$  the distance from the Shannon bound is only about  $1.5 \ \rm dB$  $($green dot$)$.
  • Similar repeat accumulate codes are provided for the  "DVB Return Channel Terrestrial"  $\rm (RCS)$  standard and for the WiMax standard  $\rm (IEEE 802.16)$.
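The RA structure described above can be sketched in a few lines  $($an assumed realization with a toy random interleaver;  all names chosen freely,  not the standardized DVB-RCS or WiMax construction$)$:

```python
import random

# Sketch: systematic rate-1/4 Repeat-Accumulate encoder — repeat each
# information bit three times, interleave, then accumulate with
# G(D) = 1/(1 + D), i.e. a running XOR.

def ra_encode(u, seed=0):
    rep = [b for b in u for _ in range(3)]       # outer repetition code, R = 1/3
    pi = list(range(len(rep)))
    random.Random(seed).shuffle(pi)              # toy random interleaver
    w = [rep[i] for i in pi]
    acc, p = 0, []
    for b in w:                                  # accumulator: memory m = 1
        acc ^= b
        p.append(acc)
    return u + p                                 # systematic: info + parity

x = ra_encode([1, 0, 1, 1])
print(len(x))   # 16 = 4 info bits + 12 parity bits  =>  R = 1/4
```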


Exercises for the chapter


Exercise 4.08: Repetition to the Convolutional Codes

Exercise 4.08Z: Basics about Interleaving

Exercise 4.09: Recursive Systematic Convolutional Codes

Exercise 4.10: Turbo Encoder for UMTS and LTE

References

  1. Liva, G.:  Channel Codes for Iterative Decoding.  Lecture notes, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.