Channel Coding/The Basics of Turbo Codes


Basic structure of a turbo code


All communications systems current today (2017), such as  "UMTS"  (Universal Mobile Telecommunications System   ⇒   3rd generation mobile communications) and  "LTE"  (Long Term Evolution   ⇒   4th generation mobile communications), use the concept of  "symbol-wise iterative decoding". This is directly related to the invention of  turbo codes  in 1993 by  "Claude Berrou",  "Alain Glavieux"  and  "Punya Thitimajshima",  because only with these codes could the Shannon bound be approached with reasonable decoding effort.

Turbo codes result from the parallel or serial concatenation of convolutional codes. The graphic shows the parallel concatenation of two codes, each with the parameters  $k = 1, \ n = 2$   ⇒   rate $R = 1/2$.

Parallel concatenation of two rate $1/2$ codes

In this representation:

  • $u$  denotes the currently considered bit of the information sequence  $\underline{u}$,
  • $x_{i,\hspace{0.03cm}j}$  denotes the currently considered bit at output  $j$  of encoder  $i$  $($with  $1 ≤ i ≤ 2, \ 1 ≤ j ≤ 2)$,
  • $\underline{X} = (x_{1,\hspace{0.03cm}1}, \ x_{1,\hspace{0.03cm}2}, \ x_{2,\hspace{0.03cm}1}, \ x_{2,\hspace{0.03cm}2})$  denotes the code word for the current information bit  $u$.

The resulting rate of the concatenated coding system is thus  $R = 1/4$.

If systematic component codes are used, the following model results:

Rate $1/3$ turbo encoder (parallel concatenation of two rate $1/2$ convolutional codes)

The modifications from the top graph can be justified as follows:

  • For systematic codes  $C_1$  and  $C_2$,  both  $x_{1,\hspace{0.03cm}1} = u$  and  $x_{2,\hspace{0.03cm}1} = u$  hold. Therefore, one can dispense with the transmission of one redundant bit  $($for example  $x_{2,\hspace{0.03cm}2})$.
  • With this reduction, the result is a rate $1/3$ turbo code with  $k = 1$  and  $n = 3$. The code word, with the parity bits  $p_1$  (encoder 1) and  $p_2$  (encoder 2), is:
$$\underline{X} = \left (u, \ p_1, \ p_2 \right )\hspace{0.05cm}.$$


Further modification of the basic structure of the turbo code


In the following we always assume a still somewhat further modified turbo encoder model:

  • As required for the description of convolutional codes, the input is now the information sequence  $\underline{u} = (u_1, \ u_2, \ \text{...}\hspace{0.05cm} , \ u_i , \ \text{...}\hspace{0.05cm} )$  instead of the isolated information bit  $u$.
  • The code word sequence is denoted by  $\underline{x} = (\underline{X}_1, \underline{X}_2, \ \text{...}\hspace{0.05cm} , \ \underline{X}_i, \ \text{...}\hspace{0.05cm} )$. To avoid confusion, the code words  $\underline{X}_i = (u, \ p_1, \ p_2)$  were introduced above with capital letters.
Rate $1/3$ turbo encoder with interleaver
  • The encoders  $\mathcal{C}_1$  and  $\mathcal{C}_2$  are conceived (at least conceptually) as  "digital filters"  and can thus be represented by the  "transfer functions"  $G_1(D)$  and  $G_2(D)$.
  • For various reasons   ⇒   see  "two pages ahead",  the input sequence of the second encoder   ⇒   $\underline{u}_{\pi}$  is scrambled relative to the sequence  $\underline{u}$  by an interleaver  $(\Pi)$.
  • There is thus nothing to prevent choosing both encoders to be identical: $G_1(D) = G_2(D) = G(D)$. Without the interleaver, the correction capability would be extremely limited.

$\text{Example 1:}$  The graph shows the different sequences in matching colors. Note:

  1.    For  $\underline{u}_{\pi}$,  a  $3×4$  interleaver matrix is considered according to  "Exercise 4.8Z".
  2.    The parity sequences are obtained according to  $G_1(D) = G_2(D) = 1 + D^2$   ⇒   see  "Exercise 4.8".
Example sequences at the rate $1/3$ turbo encoder
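
The parity generation in this example is an ordinary GF(2) convolution  $\underline{p} = \underline{u} * \underline{g}$  with  $\underline{g} = (1, 0, 1)$. A minimal Python sketch (the function name and the shift-register bookkeeping are our own, not part of the lecture material):

```python
def parity_nonrecursive(u, taps=(1, 0, 1)):
    """GF(2) convolution p = u * g for G(D) = 1 + D^2, i.e. g = (1, 0, 1):
    each output bit is p_i = u_i XOR u_{i-2}."""
    m = len(taps) - 1                 # code memory
    state = [0] * m                   # shift register: u_{i-1}, ..., u_{i-m}
    p = []
    for bit in u:
        window = [bit] + state
        p.append(sum(t * b for t, b in zip(taps, window)) % 2)
        state = [bit] + state[:-1]
    return p

# Impulse response: a single 1 reproduces g itself (finite response):
print(parity_nonrecursive([1, 0, 0, 0, 0]))   # -> [1, 0, 1, 0, 0]
# Per time step, the turbo code word is X_i = (u_i, p1_i, p2_i), where
# p2 is computed in the same way from the interleaved sequence u_pi.
```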

First requirement for turbo codes: Recursive component codes


Non-recursive transfer functions for generating the parity sequences lead to a turbo code with an insufficiently large minimum distance. The reason for this inadequacy is the finite impulse response  $\underline{g} = (1, \ g_2, \ \text{...}\hspace{0.05cm} , \ g_m, \ 0, \ 0, \ \text{...}\hspace{0.05cm} )$  with  $g_2, \ \text{...}\hspace{0.05cm} , \ g_m ∈ \{0, 1\}$. Here,  $m$  denotes the code memory.

The graph shows the state transition diagram for the example  $\mathbf{G}(D) = \big [1, \ 1 + D^2 \big ]$. The transitions are labeled "$u_i\hspace{0.05cm}|\hspace{0.05cm}u_i p_i$".

  • With respect to the input, the sequence  $S_0 → S_1 → S_2 → S_0 → S_0 → \ \text{...}\hspace{0.05cm} \ $  corresponds to the information sequence  $\underline{u} = (1, 0, 0, 0, 0, \ \text{...}\hspace{0.05cm})$ 
  • and, with respect to the respective second code symbol, to the parity sequence   $\underline{p} = (1, 0, 1, 0, 0, \ \text{...}\hspace{0.05cm})$    ⇒   identical to the impulse response  $\underline{g}$   ⇒   memory $m = 2$.


Non-recursive systematic turbo code and state transition diagram

The lower graph applies to a so-called  RSC code  (Recursive Systematic Convolutional) with  $\mathbf{G}(D) = \big [1, \ (1+ D^2)/(1 + D + D^2)\big ]$.

  • Here the sequence  $S_0 → S_1 → S_3 → S_2 → S_1 → S_3 → S_2 → \ \text{...}\hspace{0.05cm} \ $  leads to the impulse response  $\underline{g} = (1, 1, 1, 0, 1, 1, 0, 1, 1, \ \text{...}\hspace{0.05cm})$.
  • This impulse response continues to infinity due to the loop  $S_1 → S_3 → S_2 → S_1$. This enables or facilitates the iterative decoding.


Recursive systematic turbo code and state transition diagram

More details on the examples on this page can be found in  "Exercise 4.8"  and  "Exercise 4.9".
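
The infinite impulse response of the RSC code can be verified with a small feedback shift-register simulation. A sketch for the transfer function  $\mathbf{G}(D) = \big [1, \ (1+ D^2)/(1 + D + D^2)\big ]$  stated above; the function name and register layout are our own:

```python
def parity_rsc(u):
    """Parity of the RSC code G(D) = (1+D^2)/(1+D+D^2) over GF(2).
    Feedback (denominator):   a_i = u_i XOR a_{i-1} XOR a_{i-2}
    Feed-forward (numerator): p_i = a_i XOR a_{i-2}"""
    s1 = s2 = 0                   # register contents a_{i-1}, a_{i-2}
    p = []
    for bit in u:
        a = (bit + s1 + s2) % 2   # the recursion makes the response infinite
        p.append((a + s2) % 2)
        s1, s2 = a, s1
    return p

# Impulse response g = (1, 1, 1, 0, 1, 1, 0, 1, 1, ...), periodic with period 3:
print(parity_rsc([1] + [0] * 11))      # -> [1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
# The input (1, 1, 1) nevertheless yields a terminating parity sequence:
print(parity_rsc([1, 1, 1, 0, 0, 0]))  # -> [1, 0, 1, 0, 0, 0]
```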

Second requirement for turbo codes: Interleaving


It is obvious that for  $G_1(D) = G_2(D)$  an interleaver  $(\Pi)$  is essential. Another reason is that the a-priori information is assumed to be independent; hence, adjacent (and thus possibly strongly dependent) bits should be far apart for the other subcode.

Indeed, for every RSC code   ⇒   infinite impulse response  $\underline{g}$   ⇒   rational transfer function  $G(D)$,  there are certain input sequences  $\underline{u}$  which lead to very short parity sequences  $\underline{p} = \underline{u} ∗ \underline{g}$  with low Hamming weight  $w_{\rm H}(\underline{p})$.

For example, such a sequence is given in the graph on the  "last page":   $\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm})$. Then the output sequence is:

\[P(D) = U(D) \cdot G(D) = (1+D+D^2) \cdot \frac{1+D^2}{1+D+D^2}= 1+D^2\hspace{0.3cm}\Rightarrow\hspace{0.3cm} \underline{p}= (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.05cm}\hspace{0.05cm})\hspace{0.05cm}. \]

$\text{Meaning and purpose:}$  By  $\rm interleaving$,  it is now ensured with high probability that this sequence  $\underline{u} = (1, 1, 1, 0, 0, \ \text{...}\hspace{0.05cm})$  is converted into a sequence  $\underline{u}_{\pi}$,

  • which also contains only three ones,
  • but whose output sequence is characterized by a large Hamming weight  $w_{\rm H}(\underline{p})$.

Thus, the decoder succeeds in resolving such "problem sequences" iteratively.


Clarification of block interleaving

For the following description of the interleavers, we use the assignment  $I_{\rm In} → I_{\rm Out}$. These labels stand for the indices of the input and output sequence, respectively. The interleaver size is denoted  $I_{\rm max}$.

There are several fundamentally different interleaver concepts:

In a  block interleaver,  one fills a matrix with  $S$  columns and  $Z$  rows column by column and reads the matrix out row by row. Thus, an information block with  $I_{\rm max} = S \cdot Z$  bits is deterministically scrambled.

The right graph illustrates the principle for  $I_{\rm max} = 64$   ⇒   $1 ≤ I_{\rm In} \le 64$  and  $1 ≤ I_{\rm Out} \le 64$. The order of the output bits is then:   $1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18, \ \text{...}\hspace{0.05cm} , 48, 56, 64.$
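
A minimal sketch of this write-column/read-row rule; the function name is ours, and  $S = Z = 8$  is an assumption that is consistent with the output order quoted above:

```python
def block_interleave(block, S, Z):
    """Write S*Z symbols into a Z x S matrix column by column,
    then read the matrix out row by row (deterministic scrambling)."""
    assert len(block) == S * Z
    matrix = [[block[s * Z + z] for s in range(S)] for z in range(Z)]
    return [b for row in matrix for b in row]

# Reproduce the quoted index order (I_max = 64, assuming S = Z = 8):
print(block_interleave(list(range(1, 65)), S=8, Z=8)[:11])
# -> [1, 9, 17, 25, 33, 41, 49, 57, 2, 10, 18]
```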

More information on block interleaving is available in the  "Exercise 4.8Z".

Clarification of $S$–random interleaving





Turbo codes often use the  $S$-random interleaver. This pseudo-random algorithm with the parameter  "$S$"  guarantees that two indices that are less than  $S$  apart at the input occur at a distance of at least  $S + 1$  at the output. The left graph shows the  $S$-random characteristic  $I_{\rm Out}(I_{\rm In})$  for  $I_{\rm max} = 640$.

  • This algorithm is also deterministic, and the scrambling can be undone in the decoder ⇒ de-interleaving.
  • Nevertheless, the distribution appears more "random" than with block interleaving.
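
Such permutations are usually constructed by greedy rejection sampling. The sketch below is our own minimal version (function name, restart strategy); the guideline  $S < \sqrt{K/2}$  for finding a solution in reasonable time is the rule of thumb commonly cited in the literature:

```python
import random

def s_random_interleaver(K, S, max_attempts=100):
    """Draw a permutation of range(K) such that input positions less
    than S apart end up at least S+1 apart at the output."""
    for _ in range(max_attempts):
        available = set(range(K))
        perm = []
        for i in range(K):
            valid = [v for v in available
                     if all(abs(v - perm[i - d]) > S
                            for d in range(1, min(i, S) + 1))]
            if not valid:     # dead end: restart with a fresh random attempt
                break
            v = random.choice(valid)
            perm.append(v)
            available.discard(v)
        else:
            return perm       # de-interleaving uses the inverse: inv[perm[i]] = i
    raise RuntimeError("no S-random permutation found; try a smaller S")
```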


Symbol-wise iterative decoding of a turbo code


The decoding of a turbo code is basically done as described in the section  "Symbol-wise Soft-in Soft-out Decoding".  From the following graphic, however, you can also see some special features that apply only to the turbo decoder.

Iterative turbo decoder for rate  $R = 1/3$

Assumed is a rate $1/3$ turbo code according to the description on the  "first page of this section". The color scheme for the information sequence  $\underline{u}$  and the two parity sequences  $\underline{p}_1$  and  $\underline{p}_2$  is also adopted from the earlier graphics. Further, it should be noted:

  • The received vectors  $\underline{y}_0, \underline{y}_1$  and  $\underline{y}_2$  are real-valued and provide the respective soft information with respect to the information sequence  $\underline{u}$  and the sequences  $\underline{p}_1$  (parity for encoder 1) and  $\underline{p}_2$  (parity for encoder 2).
  • Decoder 1 receives the required intrinsic information in the form of the  $L$ values  $L_{\rm K,\hspace{0.03cm} 0}$  $($from  $\underline{y}_0)$  and  $L_{\rm K,\hspace{0.03cm}1}$  $($from  $\underline{y}_1)$  for each bit of the sequences  $\underline{u}$  and  $\underline{p}_1$. In the second decoder, the scrambling of the information sequence  $\underline{u}$  must be taken into account. Thus, the  $L$ values to be processed are  $\pi(L_{\rm K, \hspace{0.03cm}0})$  and  $L_{\rm K, \hspace{0.03cm}2}$.
  • In the general  "SISO decoder",  the information exchange between the two component decoders was controlled by  $\underline{L}_{\rm A, \hspace{0.03cm}2} = \underline{L}_{\rm E, \hspace{0.03cm}1}$  and  $\underline{L}_{\rm A, \hspace{0.03cm}1} = \underline{L}_{\rm E, \hspace{0.03cm}2}$. Written out at the bit level, these equations read, for  $1 ≤ i ≤ n$:
\[L_{\rm A, \hspace{0.03cm}2}(i) = L_{\rm E, \hspace{0.03cm}1}(i) \hspace{0.5cm}{\rm and}\hspace{0.5cm} L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}(i) \hspace{0.03cm}.\]
  • In the case of the turbo decoder, the interleaver must also be taken into account in this information exchange. Then for  $i = 1, \ \text{...}\hspace{0.05cm} , \ n$:
\[L_{\rm A, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) = L_{\rm E, \hspace{0.03cm}1}(i) \hspace{0.5cm}{\rm and}\hspace{0.5cm} L_{\rm A, \hspace{0.03cm}1}(i) = L_{\rm E, \hspace{0.03cm}2}\left ({\rm \pi}(i) \right ) \hspace{0.05cm}.\]
  • In the above model, the a posteriori $L$ value is (arbitrarily) delivered by decoder 1. This can be justified by the fact that one iteration stands for a twofold information exchange.
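
These exchange equations translate into a compact decoder skeleton. The following sketch is structural only: `bcjr_decode` stands for a symbol-wise BCJR/MAP component decoder that is not given here, the channel $L$ values are assumed to be precomputed (for BPSK on the AWGN channel, the standard result is $L_{\rm K} = 4 \cdot y \cdot E_{\rm S}/N_0$), and all names are our own:

```python
import numpy as np

def turbo_decode(L_K0, L_K1, L_K2, pi, bcjr_decode, n_iter=20):
    """Skeleton of the iterative rate-1/3 turbo decoder.
    Convention: the interleaved sequence is u_pi[i] = u[pi[i]].
    bcjr_decode(L_ch_u, L_ch_p, L_a) is a placeholder for a symbol-wise
    MAP decoder that returns extrinsic L-values for the information bits."""
    pi = np.asarray(pi)
    inv = np.argsort(pi)              # inverse permutation (de-interleaving)
    L_E2 = np.zeros(len(L_K0))        # no a-priori knowledge at the start
    for _ in range(n_iter):           # one iteration = twofold exchange
        L_A1 = L_E2[inv]              # decoder 2 extrinsics, de-interleaved to u-order
        L_E1 = bcjr_decode(L_K0, L_K1, L_A1)
        L_A2 = L_E1[pi]               # decoder 1 extrinsics, interleaved to u_pi-order
        L_E2 = bcjr_decode(L_K0[pi], L_K2, L_A2)   # decoder 2 processes pi(L_K,0)
    # decoder 1 delivers the a-posteriori values: channel + a-priori + extrinsic
    return L_K0 + L_E2[inv] + L_E1
```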

Performance of the turbo codes


Bit and block error probability of turbo codes for the AWGN channel

We consider, as on the previous pages, the rate $1/3$ turbo code

  • with identical filter functions  $G_1(D) = G_2(D) = (1 + D^2)/(1 + D + D^2)$   ⇒   memory  $m = 2$,
  • with the interleaver size  $K$;  initially, let  $K = 10000$,
  • and a sufficient number of iterations  $(I = 20)$,  whose result is almost equivalent to "$I → ∞$".

The two RSC component codes are each terminated after  $K$  bits. Therefore, we group

  • the information sequence  $\underline{u}$  into blocks of  $K$  information bits each, and
  • the code sequence  $\underline{x}$  into blocks of  $N = 3 \cdot K$  code bits each.

All results apply to the  "AWGN channel". Data taken from lecture notes  [Liv15][1].

The graph shows, as a green curve, the  block error probability   ⇒   ${\rm Pr(block\:error)}$  in double-logarithmic representation as a function of the AWGN parameter  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0)$. One can see:

  • The points marked with crosses resulted from the weight functions of the turbo code using the  "Union Bound". The simulation results – marked by circles in the graph – are almost congruent with the analytically calculated values.
  • The "Union Bound" is only an upper bound based on maximum likelihood decoding ("ML"). The iterative decoder is suboptimal (i.e., worse than "ML"). These two effects seem to almost cancel each other out.
  • At about  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 1 \ \rm dB$,  there is a kink in the (green) curve, corresponding to the slope change of  ${\rm Pr(bit\:error)}$   ⇒   blue curve.

The blue crosses (calculation) and the blue circles (simulation) denote the  bit error probability  for the interleaver size  $K = 10000$. The (dash-dotted) curve for uncoded transmission is drawn as a comparison. Regarding these blue curves, note:

  • For small abscissa values, the curve decay in the chosen representation is nearly linear and sufficiently steep. For example,  ${\rm Pr(bit\:error)} = 10^{-5}$  requires at least  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0.8 \ \rm dB$.
  • Compared to the  "Shannon bound",  which for code rate  $R = 1/3$  lies at  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx -0.55 \ \rm dB$,  our standard turbo code  $($with memory  $m = 2)$  is only about  $1.35 \ \rm dB$  away.
  • From  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0.5 \ \rm dB$  on, the curve runs flatter. From approx.  $1.5 \ \rm dB$  on, the curve is again (nearly) linear with a lower slope. For  ${\rm Pr(bit\:error)} = 10^{-7}$,  one needs about  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) = 3 \ \rm dB$.

We now try to explain the flatter decrease of the bit error probability at larger  $E_{\rm B}/N_0$.  This is called an  $\text{error floor}$:

  • The reason for this asymptotically worse behavior with a better channel  $($in the example:   from  $10 \cdot {\rm lg} \, E_{\rm B}/N_0 \approx 2 \ \rm dB$  on$)$  is the period  $P$  of the encoder impulse response  $\underline{g}$,  as proven in the section  "Interleaving"  and illustrated with examples in  "Exercise 4.10".
  • For  $m = 2$,  the period is  $P = 2^m -1 = 3$. Thus, for  $\underline{u} = (1, 1, 1) ⇒ w_{\rm H}(\underline{u}) = 3$,  the parity sequence is bounded despite the unbounded impulse response  $\underline{g}$:   $\underline{p} = (1, 0, 1)$   ⇒   $w_{\rm H}(\underline{p}) = 2$.
  • The sequence  $\underline{u} = (0, \ \text{...}\hspace{0.05cm} , \ 0, \ 1, \ 0, \ 0, \ 1, \ 0, \ \text{...}\hspace{0.05cm})$   ⇒   $U(D) = D^x \cdot (1 + D^P)$  also leads to a small Hamming weight  $w_{\rm H}(\underline{p})$  at the output, which complicates the iterative decoding process (see the short calculation after this list).
  • Some relief is provided by the interleaver, which ensures that the two sequences  $\underline{p}_1$  and  $\underline{p}_2$  are not simultaneously burdened with very small Hamming weights  $w_{\rm H}(\underline{p}_1)$  and  $w_{\rm H}(\underline{p}_2)$.
  • From the graph, one can also see that the bit error probability is inversely proportional to the interleaver size  $K$.  That means:   with a large  $K$,  the dispersal of unfavorable input sequences works better.
  • However, the approximation  $K \cdot {\rm Pr(bit\:error)} = \rm const.$  is valid only for larger  $E_{\rm B}/N_0$  values   ⇒   small bit error probabilities. The described effect also occurs at smaller  $E_{\rm B}/N_0$,  but there its impact on the bit error probability is smaller.
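
To make the third item above concrete: a short GF(2) side calculation for the filter function  $G(D) = (1+D^2)/(1+D+D^2)$  used here and period  $P = 3$,  based on the factorization  $1 + D^3 = (1+D)(1+D+D^2)$  over GF(2):

\[P(D) = U(D) \cdot G(D) = D^x \cdot (1+D^3) \cdot \frac{1+D^2}{1+D+D^2}= D^x \cdot (1+D)(1+D^2) = D^x \cdot (1+D+D^2+D^3)\hspace{0.05cm}.\]

Despite the infinite impulse response, this weight-$2$ input thus produces a parity sequence of weight only  $w_{\rm H}(\underline{p}) = 4$.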

In contrast, the flatter shape of the block error probability (green curve) holds largely independently of the interleaver size  $K$,  i.e., for  $K = 1000$  as well as for  $K = 10000$. In the range  $10 \cdot {\rm lg} \, E_{\rm B}/N_0 > 2 \ \rm dB$,  single errors dominate, so that here the approximation  ${\rm Pr(block\:error)} \approx {\rm Pr(bit\:error)} \cdot K$  is valid.

$\text{Conclusion:}$  The bit error probability and block error probability curves shown as examples also apply qualitatively for  $m > 2$,  for example for the turbo code of UMTS and LTE  $($each with  $m = 3)$,  which is analyzed in  "Exercise 4.10".  However, some quantitative differences emerge:

  • The curve is steeper for small  $E_{\rm B}/N_0$,  and the distance from the Shannon bound is slightly smaller than in the example shown here for  $m = 2$.
  • There is an error floor for larger  $m$  as well. However, the kink in the displayed curves then occurs later, i.e. at smaller error probabilities.


Serial concatenated turbo codes – SCCC


The turbo codes considered so far are sometimes referred to as  Parallel Concatenated Convolutional Codes  (PCCC).

Some years after Berrou's invention,  Serial Concatenated Convolutional Codes  (SCCC) were also proposed by other authors, structured as in the following diagram.

  • The information sequence  $\underline{u}$  is applied to the outer convolutional encoder  $\mathcal{C}_1$.  Its output sequence is denoted  $\underline{c}$.
  • After the interleaver  $(\Pi)$  follows the inner convolutional encoder  $\mathcal{C}_2$.  Its code sequence is denoted  $\underline{x}$.
  • The resulting code rate is  $R = R_1 \cdot R_2$. For rate $1/2$ component codes,  $R = 1/4$.

Serial Concatenated Convolutional Codes (SCCC): Encoder and decoder

The bottom diagram shows the SCCC decoder and illustrates the processing of the  $L$ values and the exchange of extrinsic information between the two component decoders:

  • The inner decoder  (for the code  $\mathcal{C}_2)$  receives from the channel the intrinsic information  $\underline{L}_{\rm K}(\underline{x})$  and from the outer decoder (after interleaving) the a-priori information  $\underline{L}_{\rm A}(\underline{w})$  with  $\underline{w} = \pi(\underline{c})$. It passes the extrinsic information  $\underline{L}_{\rm E}(\underline{w})$  to the outer decoder.
  • The outer decoder  $($for  $\mathcal{C}_1)$  processes the a-priori information  $\underline{L}_{\rm A}(\underline{c})$,  i.e. the extrinsic information  $\underline{L}_{\rm E}(\underline{w})$  after de-interleaving. It delivers the extrinsic information  $\underline{L}_{\rm E}(\underline{c})$.
  • After a sufficient number of iterations, the desired decoding result is obtained in the form of the a posteriori  $L$ values  $\underline{L}_{\rm APP}(\underline{u})$  of the information sequence  $\underline{u}$.
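
The serial exchange can be sketched analogously to the turbo decoder skeleton shown earlier; the essential difference is that only the inner decoder receives channel values. Here `inner_siso` and `outer_siso` are hypothetical placeholders for SISO component decoders, and all names are ours:

```python
import numpy as np

def sccc_decode(L_Kx, pi, inner_siso, outer_siso, n_iter=20):
    """Skeleton of the iterative SCCC decoder. Convention: w[i] = c[pi[i]].
    inner_siso(L_Kx, L_A_w) -> extrinsic L-values on w = pi(c);
    outer_siso(L_A_c)       -> (extrinsic L-values on c, L_APP(u));
    the outer decoder works without any channel values."""
    pi = np.asarray(pi)
    inv = np.argsort(pi)
    L_E_c = np.zeros(len(pi))     # outer extrinsic information, initially zero
    for _ in range(n_iter):
        L_A_w = L_E_c[pi]         # interleave: a-priori input of the inner decoder
        L_E_w = inner_siso(L_Kx, L_A_w)
        L_A_c = L_E_w[inv]        # de-interleave: a-priori input of the outer decoder
        L_E_c, L_APP_u = outer_siso(L_A_c)
    return L_APP_u
```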

$\text{Conclusion:}$  What is important for Serial Concatenated Convolutional Codes (SCCC) is that the inner code  $\mathcal{C}_2$  is recursive (i.e. an RSC code). The outer code  $\mathcal{C}_1$  may also be non-recursive.

Regarding the performance of such codes, it should be noted:

  • For large  $E_{\rm B}/N_0$,  an SCCC is often better than a PCCC   ⇒   lower  error floor.  This already holds for SCCC component codes with memory  $m = 2$  (only four trellis states), whereas for a PCCC the memory should be  $m = 3$  or  $m = 4$  (eight or sixteen trellis states, respectively).
  • In the lower range  $($small  $E_{\rm B}/N_0)$,  on the other hand, the best serial concatenated convolutional code (SCCC) is several tenths of a decibel worse than the comparable turbo code according to Berrou (PCCC). The distance from the Shannon bound is correspondingly larger.



Some application areas for turbo codes


Some standardized turbo codes compared to the Shannon bound

Turbo codes are used in almost all recent communication systems. The graph shows their performance over the AWGN channel compared to  "Shannon's channel capacity"  (blue curve).

The green highlighted area  "BPSK"  indicates the Shannon bound for communication systems with binary input, up to which, according to the  "channel coding theorem",  error-free transmission is still just possible.

Note that the channel codes of standardized systems drawn here are based on the error rate  $10^{-5}$,  while the information-theoretic capacity curves (Shannon, BPSK) apply to the error probability  $0$.

  • The blue rectangles mark the turbo codes for UMTS. These are rate $1/3$ codes with memory  $m = 3$. The performance depends strongly on the interleaver size. With  $K = 6144$,  this code lies only about  $1 \ \rm dB$  to the right of the Shannon bound. LTE uses the same turbo codes; minor differences arise from the different interleaver.
  • The red crosses mark the turbo codes according to CCSDS (Consultative Committee for Space Data Systems), developed for use in space missions. This class assumes the fixed interleaver size  $K = 6920$  and provides codes of rate  $1/6$,  $1/4$,  $1/3$  and  $1/2$ . The lowest code rates allow operation at  $10 \cdot {\rm lg} \, (E_{\rm B}/N_0) \approx 0 \ \rm dB$.
  • The green circle represents a very simple  Repeat-Accumulate  (RA) code, a serially concatenated turbo code. Its structure is outlined below:   the outer encoder uses a  "repetition code",  in the drawn example with rate  $R = 1/3$. The interleaver is followed by an RSC code with  $G(D) = 1/(1 + D)$   ⇒   memory  $m = 1$. In the systematic form, the overall code rate is  $R = 1/4$  (three parity bits are added for each information bit).
Repeat Accumulate (RA) code with rate $1/4$
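
A few lines suffice to sketch this structure. The systematic output format, the random default interleaver and all names below are our own illustration choices:

```python
import random

def ra_encode(u, q=3, pi=None):
    """Systematic Repeat-Accumulate encoder, overall rate 1/(q+1):
    outer repetition code (rate 1/q), interleaver, then the accumulator
    G(D) = 1/(1+D)  =>  p_i = w_i XOR p_{i-1}  (memory m = 1)."""
    rep = [bit for bit in u for _ in range(q)]         # repeat each info bit q times
    if pi is None:                                     # hypothetical random interleaver
        pi = random.sample(range(len(rep)), len(rep))
    w = [rep[i] for i in pi]
    p, acc = [], 0
    for bit in w:                                      # accumulator: running XOR sum
        acc ^= bit
        p.append(acc)
    return list(u) + p       # K info bits + q*K parity bits  =>  R = 1/(q+1)
```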

From the graph at the top right, one can see that this simple RA code is surprisingly good. With the interleaver size  $K = 300000$,  the distance from the Shannon bound is only about  $1.5 \ \rm dB$  (green dot).

Similar repeat-accumulate codes are specified for the  DVB Return Channel Terrestrial  (RCS) standard and for the WiMAX standard (IEEE 802.16).



Exercises for the chapter


Exercise 4.08: Repetition to the Convolutional Codes

Exercise 4.08Z: Basics about Interleaving

Exercise 4.09: Recursive Systematic Convolutional Codes

Exercise 4.10: Turbo Encoder for UMTS and LTE

References

  1. Liva, G.: Channel Codes for Iterative Decoding. Lecture notes, Department of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2015.