Channel Coding/Basics of Convolutional Coding

From LNTwww
{{Header
|Untermenü=Convolutional Codes and Their Decoding
|Vorherige Seite=Error Probability and Areas of Application
|Nächste Seite=Algebraic and Polynomial Description
}}
  
== # OVERVIEW OF THE THIRD MAIN CHAPTER # ==
 
<br>
This third main chapter discusses &nbsp; &raquo;'''convolutional codes'''&laquo;, &nbsp; first described in 1955 by&nbsp; [https://en.wikipedia.org/wiki/Peter_Elias $\text{Peter Elias}$]&nbsp; [Eli55]<ref name='Eli55'>Elias, P.:&nbsp; Coding for Noisy Channels.&nbsp; In:&nbsp; IRE Conv. Rec. Part 4, pp. 37-47, 1955.</ref>.&nbsp;  
  
*While for linear block codes&nbsp; $($see&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#.23_OVERVIEW_OF_THE_FIRST_MAIN_CHAPTER_.23|$\text{first main chapter}$]]$)$&nbsp; and Reed-Solomon codes&nbsp; $($see&nbsp; [[Channel_Coding/Some_Basics_of_Algebra#.23_OVERVIEW_OF_THE_SECOND_MAIN_CHAPTER_.23|$\text{second main chapter}$]]$)$&nbsp; the code word length is&nbsp; $n$&nbsp; in each case,&nbsp;
  
*the theory of convolutional codes is based on&nbsp; "semi-infinite"&nbsp; information and encoded sequences.&nbsp; Similarly,&nbsp; "maximum likelihood decoding"&nbsp; using the&nbsp; [[Channel_Coding/Decoding_of_Convolutional_Codes#Viterbi_algorithm_based_on_correlation_and_metrics|$\text{Viterbi algorithm}$]]&nbsp; per se yields the entire sequence.
Specifically,&nbsp; this chapter discusses:
#Important definitions for convolutional codes: &nbsp; &raquo;code rate&laquo;,&nbsp; &raquo;memory&laquo;,&nbsp; &raquo;constraint length&laquo;,&nbsp; &raquo;free distance&laquo;,
#&raquo;similarities&laquo;&nbsp; and&nbsp; &raquo;differences&laquo;&nbsp; to linear block codes,
#&raquo;generator matrix&laquo;&nbsp; and&nbsp; &raquo;transfer function matrix&laquo;&nbsp; of a convolutional code,
#&raquo;fractional-rational transfer functions&laquo;&nbsp; for systematic convolutional codes,
#description with&nbsp; &raquo;state transition diagram&laquo;&nbsp; and&nbsp; &raquo;trellis diagram&laquo;,
#&raquo;termination&laquo;&nbsp; and&nbsp; &raquo;puncturing&laquo;&nbsp; of convolutional codes,
#&raquo;decoding&laquo;&nbsp; of convolutional codes &nbsp; ⇒ &nbsp; &raquo;Viterbi algorithm&laquo;,
#&raquo;weight enumerator functions&laquo;&nbsp; and approximations for the&nbsp; &raquo;bit error probability&laquo;.
== Requirements and definitions ==
 
<br>
In this chapter we consider an infinitely long binary information sequence&nbsp; $\underline{u}$&nbsp; and divide it into information blocks&nbsp; $\underline{u}_i$&nbsp; of&nbsp; $k$&nbsp; bits each.&nbsp; This can be formalized as follows:
  
::<math>\underline{\it u} = \left ( \underline{\it u}_1, \underline{\it u}_2, \hspace{0.05cm} \text{...} \hspace{0.05cm}, \underline{\it u}_i , \hspace{0.05cm} \text{...} \hspace{0.05cm}\right ) \hspace{0.3cm}{\rm with}\hspace{0.3cm} \underline{\it u}_i = \left ( u_i^{(1)}, u_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, u_i^{(k)}\right )\hspace{0.05cm},</math>
  
::<math>u_i^{(j)}\in {\rm GF(2)}\hspace{0.3cm}{\rm for} \hspace{0.3cm}1 \le j \le k
\hspace{0.5cm}\Rightarrow \hspace{0.5cm} \underline{\it u}_i \in {\rm GF}(2^k)\hspace{0.05cm}.</math>
  
Such a sequence without negative indices is called&nbsp; &raquo;'''semi&ndash;infinite'''&laquo;.<br>
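The partitioning of the information sequence into&nbsp; $k$&ndash;bit blocks can be sketched in a few lines of Python&nbsp; $($our own illustration; the function name&nbsp; `to_blocks`&nbsp; is not part of the article$)$:

```python
def to_blocks(u, k):
    """Split a (finite prefix of a) binary information sequence u
    into consecutive information blocks u_i of k bits each."""
    # Any incomplete final block is dropped, since the sequence is
    # conceptually semi-infinite and always provides full blocks.
    return [u[i:i + k] for i in range(0, len(u) - len(u) % k, k)]
```

For example,&nbsp; `to_blocks([0, 1, 1, 0, 0, 0, 1, 1], 4)`&nbsp; returns the two blocks&nbsp; `[[0, 1, 1, 0], [0, 0, 1, 1]]`.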
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; In a&nbsp; &raquo;<b>binary convolutional code</b>&laquo;,&nbsp; a code word&nbsp; $\underline{x}_i$&nbsp; consisting of&nbsp; $n$&nbsp; code bits is output at time&nbsp; $i$:

::<math>\underline{\it x}_i = \left ( x_i^{(1)}, x_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, x_i^{(n)}\right )\in {\rm GF}(2^n)\hspace{0.05cm}.</math>

This code word results from
*the&nbsp; $k$&nbsp; bits of the current information block&nbsp; $\underline{u}_i$,&nbsp; and<br>

*the&nbsp; $m$&nbsp; previous information blocks&nbsp; $\underline{u}_{i-1}$, ... , $\underline{u}_{i-m}$.<br>}}<br>
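The definition can be turned into a small Python sketch&nbsp; $($our own illustration, not from the article$)$:&nbsp; each code bit is a modulo-2 sum of selected bits from the current block&nbsp; $\underline{u}_i$&nbsp; and the&nbsp; $m$&nbsp; previous blocks.&nbsp; The tap lists in the usage example below are a hypothetical choice for demonstration.

```python
def cc_encode(info_blocks, taps, k, n, m):
    """Generic (n, k, m) binary convolutional encoder sketch.

    taps[j] is a list of (delay, position) pairs: code bit x_i^(j+1) is the
    modulo-2 sum of u_{i-delay}^(position+1) over all listed pairs.
    All memory elements start at zero (initialization)."""
    history = [[0] * k for _ in range(m)]      # u_{i-1}, ..., u_{i-m}
    code = []
    for u in info_blocks:
        blocks = [u] + history                 # u_i, u_{i-1}, ..., u_{i-m}
        x = [0] * n
        for j in range(n):
            for delay, pos in taps[j]:
                x[j] ^= blocks[delay][pos]     # addition in GF(2)
        code.append(x)
        history = [u] + history[:-1]           # shift the register by one block
    return code

# Hypothetical taps for a k = 1, n = 2, m = 1 encoder with
# x_i^(1) = u_i and x_i^(2) = u_i + u_{i-1}:
taps = [[(0, 0)], [(0, 0), (1, 0)]]
```

Calling&nbsp; `cc_encode([[1], [0], [1], [1]], taps, 1, 2, 1)`&nbsp; yields the code blocks&nbsp; `[[1, 1], [0, 1], [1, 1], [1, 0]]`.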
[[File:EN_KC_T_3_1_S1_neu.png|right|frame|Dependencies for a convolutional encoder with&nbsp; $m = 2$|class=fit]]
  
The diagram opposite illustrates this fact for the parameters&nbsp;  
:$$k = 4, \ n = 7, \ m = 2,\ i = 4.$$  
  
The&nbsp; $n = 7$&nbsp; code bits&nbsp; $x_4^{(1)}$, ... , $x_4^{(7)}$&nbsp; generated at time&nbsp; $i = 4$&nbsp; may depend&nbsp; $($directly$)$&nbsp; on the&nbsp; $k \cdot (m+1) = 12$&nbsp; information bits marked in red and are generated by modulo-2 additions.<br>
  
Drawn in yellow is an&nbsp; $(n, k)$&nbsp; convolutional encoder.&nbsp; Note that the vector&nbsp; $\underline{u}_i$&nbsp; and the sequence&nbsp; $\underline{u}^{(i)}$&nbsp; are fundamentally different:
*While&nbsp; $\underline{u}_i = (u_i^{(1)}, u_i^{(2)}$, ... , $u_i^{(k)})$&nbsp; summarizes&nbsp; $k$&nbsp; parallel information bits at time&nbsp; $i$,
  
* $\underline{u}^{(i)} = (u_1^{(i)}$, $u_2^{(i)}, \ \text{...})$&nbsp; denotes the&nbsp; $($horizontal$)$&nbsp; sequence at the&nbsp; $i$&ndash;th input of the convolutional encoder.
{{BlaueBox|TEXT=
$\text{Some definitions related to convolutional codes:}$
*The&nbsp; &raquo;<b>code rate</b>&laquo;&nbsp; is,&nbsp; as for block codes,&nbsp; $R = k/n$.<br>

*One refers to&nbsp; $m$&nbsp; as the&nbsp; &raquo;<b>memory</b>&laquo;&nbsp; of the code and denotes the convolutional code itself by&nbsp; ${\rm CC} \ (n, k, m)$.<br>

*From this,&nbsp; the&nbsp; &raquo;<b>constraint length</b>&laquo;&nbsp; results as&nbsp; $\nu = m + 1$.<br>

*For&nbsp; $k > 1$&nbsp; these parameters are often also given in bits: &nbsp; $m_{\rm bit} = m \cdot k$&nbsp; resp.&nbsp; $\nu_{\rm bit} = (m + 1) \cdot k$.<br>}}
  
== Similarities and differences compared to block codes ==
 
<br>
From the&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Requirements_and_definitions| $\text{definition}$]]&nbsp; in the previous section,&nbsp; it is evident that a binary convolutional code with&nbsp; $m = 0$&nbsp; $($i.e., without memory$)$&nbsp; would be identical to a binary block code as described in the first main chapter.&nbsp; We exclude this limiting case and therefore assume always for the following:
*The memory&nbsp; $m$&nbsp; is greater than or equal to&nbsp; $1$.<br>
  
*The constraint length&nbsp; $\nu$&nbsp; is greater than or equal to&nbsp; $2$.<br><br>
  
 
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; For a&nbsp; $(7, 4)$&nbsp; block code,&nbsp; the code word&nbsp; $\underline {x}_4$&nbsp; depends only on the information word&nbsp; $\underline{u}_4$&nbsp; but not on&nbsp; $\underline{u}_2$&nbsp; and&nbsp; $\underline{u}_3$,&nbsp; as it does for the&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Requirements_and_definitions| $\text{example convolutional code}$]]&nbsp; $($with&nbsp; $m = 2)$&nbsp; in the last section.<br>

[[File:EN_KC_T_3_1_S2.png|right|frame|Dependencies of a&nbsp; $(7, 4)$&nbsp; block code at time&nbsp; $i = 4$|class=fit]]

For example, if

:$$x_4^{(1)} = u_4^{(1)}, \ x_4^{(2)} = u_4^{(2)},$$
:$$x_4^{(3)} = u_4^{(3)}, \ x_4^{(4)} = u_4^{(4)}$$

as well as

:$$x_4^{(5)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(3)}\hspace{0.05cm},$$
:$$x_4^{(6)} = u_4^{(2)}+ u_4^{(3)}+u_4^{(4)}\hspace{0.05cm},$$
:$$x_4^{(7)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(4)}\hspace{0.05cm},$$

then it is a so-called&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes| $\text{systematic Hamming code}$]]&nbsp; $\text{HS (7, 4, 3)}$.&nbsp; In the graph,&nbsp; these special dependencies for&nbsp; $x_4^{(1)}$&nbsp; and&nbsp; $x_4^{(7)}$&nbsp; are drawn in red.}}<br>
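The parity equations of this&nbsp; $\text{HS (7, 4, 3)}$&nbsp; code translate directly into a short Python sketch&nbsp; $($the function name is our own choice$)$,&nbsp; computing the code word from the information word:

```python
def hamming_7_4_encode(u):
    """Systematic HS (7, 4, 3) encoder: the first four code bits equal the
    information bits, the last three are the parity equations above."""
    u1, u2, u3, u4 = u
    x5 = u1 ^ u2 ^ u3          # x^(5) = u^(1) + u^(2) + u^(3)
    x6 = u2 ^ u3 ^ u4          # x^(6) = u^(2) + u^(3) + u^(4)
    x7 = u1 ^ u2 ^ u4          # x^(7) = u^(1) + u^(2) + u^(4)
    return [u1, u2, u3, u4, x5, x6, x7]
```

For instance,&nbsp; `hamming_7_4_encode([1, 0, 1, 1])`&nbsp; returns the code word&nbsp; `[1, 0, 1, 1, 0, 0, 0]`.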
 
 
In a sense,&nbsp; one could also interpret an&nbsp; $(n, k)$&nbsp; convolutional code with memory&nbsp; $m &#8805; 1$&nbsp; as a block code whose code parameters&nbsp; $n\hspace{0.05cm}' \gg n$&nbsp; and&nbsp; $k\hspace{0.05cm}' \gg k$&nbsp; would have to assume much larger values than those of the present convolutional code.&nbsp;
  
However,&nbsp; because of the large differences in description,&nbsp; properties,&nbsp; and especially decoding,&nbsp; we consider convolutional codes to be something completely new in this learning tutorial.&nbsp; The reasons for this are as follows:
# &nbsp; A block encoder converts,&nbsp; block by block,&nbsp; information words of length&nbsp; $k$&nbsp; bits into code words of length&nbsp; $n$&nbsp; bits each.&nbsp; The larger its code word length&nbsp; $n$,&nbsp; the more powerful the block code.&nbsp; For a given code rate&nbsp; $R = k/n$&nbsp; this also requires a large information word length&nbsp; $k$.<br>
# &nbsp; In contrast,&nbsp; the correction capability of a convolutional code is essentially determined by its memory&nbsp; $m$.&nbsp; The code parameters&nbsp; $k$&nbsp; and&nbsp; $n$&nbsp; are mostly chosen very small here&nbsp; $(1, \ 2, \ 3, \ \text{...})$.&nbsp; Thus,&nbsp; only very few and,&nbsp; moreover,&nbsp; very simple convolutional codes are of practical importance.<br>
# &nbsp; Even with small values for&nbsp; $k$&nbsp; and&nbsp; $n$,&nbsp; a convolutional encoder transforms a whole sequence of information bits&nbsp; $(k\hspace{0.05cm}' &#8594; &#8734;)$&nbsp; into a very long sequence of code bits&nbsp; $(n\hspace{0.05cm}' = k\hspace{0.05cm}'/R)$.&nbsp; Such a code thus often provides a large correction capability as well.<br>
# &nbsp; There are efficient convolutional decoders,&nbsp; for example the&nbsp; [[Channel_Coding/Decoding_of_Convolutional_Codes#Viterbi_algorithm_based_on_correlation_and_metrics|$\text{Viterbi algorithm}$]]&nbsp; and the&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder#BCJR_decoding:_Forward-backward_algorithm| $\text{BCJR algorithm}$]],&nbsp; which can process reliability information about the channel &nbsp; &rArr; &nbsp; "soft decision input"&nbsp; and provide reliability information about the decoding result &nbsp; &rArr; &nbsp; "soft decision output".<br>
# &nbsp; Please note:&nbsp; The terms&nbsp; "convolutional code"&nbsp; and&nbsp; "convolutional encoder"&nbsp; should not be confused:
::*The&nbsp; "convolutional code"&nbsp; ${\rm CC} \ (n, \ k, \ m)$ &nbsp; &#8658; &nbsp; $R = k/n$&nbsp; is understood as the set of all possible encoded sequences&nbsp; $\underline{x}$&nbsp; $($at the output$)$&nbsp; that can be generated with this code,&nbsp; considering all possible information sequences&nbsp; $\underline{u}$&nbsp; $($at the input$)$.

::*There are several&nbsp; "convolutional encoders"&nbsp; that realize the same&nbsp; "convolutional code".<br>
== Rate 1/2 convolutional encoder ==
 
 
<br>
[[File:EN_KC_T_3_1_S3ab.png|right|frame|Convolutional encoder&nbsp; $(n = 2, \hspace{0.05cm} k = 1)$&nbsp; for one bit&nbsp; $u_i$&nbsp; $($upper graphic$)$&nbsp; and for  the information sequence&nbsp; $\underline{u}$&nbsp; $($lower graphic$)$|class=fit]]
The upper graphic shows an&nbsp; $(n = 2, \hspace{0.05cm} k = 1)$&nbsp; convolutional encoder.
  
*At clock time&nbsp; $i$&nbsp; the information bit&nbsp; $u_i$&nbsp; is present at the encoder input and the&nbsp; $2$&ndash;bit code block&nbsp; $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$&nbsp; is output.&nbsp;
*To generate two code bits&nbsp; $x_i^{(1)}$&nbsp; and&nbsp; $x_i^{(2)}$&nbsp; from a single information bit,&nbsp; the convolutional encoder must include at least one memory element:
:$$k = 1\hspace{0.05cm}, \ n = 2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} m \ge 1
  \hspace{0.05cm}.$$
  
*Taking into account the&nbsp; $($semi&ndash;infinite$)$&nbsp; information sequence&nbsp; $\underline{u}$,&nbsp; the model corresponds to the lower graphic.
<br clear=all>
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; The graphic shows a convolutional encoder for the parameters&nbsp; $k = 1, \ n = 2$&nbsp; and&nbsp; $m = 1$.

[[File:EN_KC_T_3_1_S3c.png|right|frame|Convolutional encoder with&nbsp; $k = 1, \ n = 2, \ m = 1$&nbsp; and example sequences; &nbsp; the square labeled&nbsp; $D$&nbsp; $($"delay"$)$&nbsp; indicates a memory element.|class=fit]]

*This is a&nbsp; "systematic"&nbsp; convolutional encoder,&nbsp; characterized by
:$$x_i^{(1)} = u_i.$$

*The second output returns&nbsp; $x_i^{(2)} = u_i + u_{i-1}$.

*In the example output sequence&nbsp; $\underline{x}$&nbsp; after multiplexing,
:*all&nbsp; $x_i^{(1)}$&nbsp; are labeled red,
:*all&nbsp; $x_i^{(2)}$&nbsp; are labeled blue.}}<br>
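The encoder of this example,&nbsp; including the multiplexing of the two outputs,&nbsp; can be sketched as follows&nbsp; $($our own illustration; the function name is not from the article$)$:

```python
def encode_rate_half(u):
    """Systematic rate-1/2 encoder: x_i^(1) = u_i (systematic),
    x_i^(2) = u_i + u_{i-1} (mod 2), multiplexed into one output sequence."""
    prev = 0                    # memory element D, initialized to zero
    out = []
    for bit in u:
        out.append(bit)         # x_i^(1) = u_i
        out.append(bit ^ prev)  # x_i^(2) = u_i + u_{i-1}
        prev = bit
    return out
```

For example,&nbsp; `encode_rate_half([1, 0, 1, 1])`&nbsp; gives&nbsp; `[1, 1, 0, 1, 1, 1, 1, 0]`,&nbsp; where even positions carry&nbsp; $x_i^{(1)}$&nbsp; and odd positions&nbsp; $x_i^{(2)}$.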
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; The graphic shows a&nbsp; $(n = 2, \ k = 1)$&nbsp; convolutional encoder with&nbsp; $m = 2$&nbsp; memory elements:

[[File:EN_KC_T_3_1_S3d_neu.png|right|frame|Convolutional encoder&nbsp; $(k = 1, \ n = 2, \ m = 2)$&nbsp; in two different representations|class=fit]]

*On the left,&nbsp; the equivalent circuit is shown.

*On the right,&nbsp; a realization form of this encoder is given.

*The two code bits are:

::<math>x_i^{(1)} = u_{i} + u_{i-1}+ u_{i-2} \hspace{0.05cm},</math>
::<math>x_i^{(2)} = u_{i} + u_{i-2} \hspace{0.05cm}.</math>

*Because&nbsp; $x_i^{(1)} &ne; u_i$&nbsp; this is not a systematic code.
<br clear=all>
It can be seen:
# The information sequence&nbsp; $\underline{u}$&nbsp; is stored in a shift register of length&nbsp; $L = m + 1 = 3$.<br>
#At clock time&nbsp; $i$&nbsp; the left memory element contains the current information bit&nbsp; $u_i$,&nbsp; which is shifted one place to the right at each of the next clock times.<br>
#The number of yellow squares again gives the memory&nbsp; $m = 2$&nbsp; of the encoder.<br><br>

&rArr; &nbsp; From these two representations it is clear that&nbsp; $x_i^{(1)}$&nbsp; and&nbsp; $x_i^{(2)}$&nbsp; can each be interpreted as the output of a&nbsp; [[Theory_of_Stochastic_Signals/Digital_Filters|$\text{digital filter}$]]&nbsp; over the Galois field&nbsp; ${\rm GF(2)}$,&nbsp; where both filters operate in parallel on the same input sequence&nbsp; $\underline{u}$.

&rArr; &nbsp; Since,&nbsp; quite generally,&nbsp; the output signal of a filter results from the&nbsp; [[Signal_Representation/The_Convolution_Theorem_and_Operation#Convolution_in_the_time_domain|$\text{convolution}$]]&nbsp; of the input signal with the filter impulse response,&nbsp; this is referred to as&nbsp; "convolutional coding".<br>}}
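The filter interpretation can be checked numerically:&nbsp; over&nbsp; ${\rm GF(2)}$,&nbsp; the two output sequences are the convolutions of&nbsp; $\underline{u}$&nbsp; with the impulse responses&nbsp; $(1, 1, 1)$&nbsp; and&nbsp; $(1, 0, 1)$.&nbsp; A minimal sketch&nbsp; $($our own illustration, with an arbitrarily chosen input sequence$)$:

```python
def gf2_convolve(u, g):
    """Discrete convolution over GF(2): out[i] = sum of u[i-l] * g[l], mod 2."""
    out = []
    for i in range(len(u)):
        s = 0
        for l, g_l in enumerate(g):
            if i - l >= 0:
                s ^= u[i - l] & g_l   # multiplication and addition in GF(2)
        out.append(s)
    return out

u = [1, 0, 1, 1, 0]
x1 = gf2_convolve(u, [1, 1, 1])   # x_i^(1) = u_i + u_{i-1} + u_{i-2}
x2 = gf2_convolve(u, [1, 0, 1])   # x_i^(2) = u_i + u_{i-2}
```

For this&nbsp; $\underline{u}$&nbsp; one obtains&nbsp; $x^{(1)} = (1, 1, 0, 0, 0)$&nbsp; and&nbsp; $x^{(2)} = (1, 0, 0, 1, 1)$,&nbsp; matching the two encoder equations evaluated term by term.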
  
== Convolutional encoder with two inputs ==
 
<br>
[[File:P ID2598 KC T 3 1 S4 v1.png|right|frame|Convolutional encoder with&nbsp; $k = 2$&nbsp; and&nbsp; $n = 3$]]
Now consider a convolutional encoder that generates&nbsp; $n = 3$&nbsp; code bits from&nbsp; $k = 2$&nbsp; information bits.
*The information sequence&nbsp; $\underline{u}$&nbsp; is divided into blocks of two bits each.
*At clock time&nbsp; $i$,&nbsp; the bit&nbsp; $u_i^{(1)}$&nbsp; is present at the upper input and&nbsp; $u_i^{(2)}$&nbsp; at the lower input.
*For the&nbsp; $n = 3$&nbsp; code bits at time&nbsp; $i$,&nbsp; the following then holds:
 
::<math>x_i^{(1)} = u_{i}^{(1)} + u_{i-1}^{(1)}+ u_{i-1}^{(2)} \hspace{0.05cm},</math>
::<math>x_i^{(2)} = u_{i}^{(2)} + u_{i-1}^{(1)} \hspace{0.05cm},</math>
::<math>x_i^{(3)} = u_{i}^{(1)} + u_{i}^{(2)}+ u_{i-1}^{(1)} \hspace{0.05cm}.</math>
  
In the graphic,&nbsp; the information bits&nbsp; $u_i^{(1)}$&nbsp; and&nbsp; $u_i^{(2)}$&nbsp; are marked in red and blue,&nbsp; and the previous information bits&nbsp; $u_{i-1}^{(1)}$&nbsp; and&nbsp; $u_{i-1}^{(2)}$&nbsp; in green and brown,&nbsp; respectively.
  
To be noted:
# &nbsp; The&nbsp; "memory"&nbsp; $m$&nbsp; is equal to the maximum number of memory cells in a branch &nbsp;&nbsp;&#8658;&nbsp;&nbsp; here&nbsp; $m = 1$.
# &nbsp; The&nbsp; "constraint length"&nbsp; $\nu$&nbsp; is equal to the sum of all memory elements &nbsp;&nbsp;&#8658;&nbsp;&nbsp; here&nbsp; $\nu = 2$.<br>
# &nbsp; All memory elements are set to zero at the beginning of the encoding&nbsp; $($"initialization"$)$.
# &nbsp; The code defined herewith is the set of all possible encoded sequences&nbsp; $\underline{x}$&nbsp; that result when all possible information sequences&nbsp; $\underline{u}$&nbsp; are input.
# &nbsp; Both&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$&nbsp; are&nbsp; $($temporally$)$&nbsp; unbounded.<br>
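The two-input encoder defined by the three equations above can be sketched in Python&nbsp; $($our own illustration; the function name is not from the article$)$:

```python
def encode_k2_n3(u1, u2):
    """Encoder with k = 2 inputs and n = 3 outputs (memory m = 1):
    x1 = u_i^(1) + u_{i-1}^(1) + u_{i-1}^(2),
    x2 = u_i^(2) + u_{i-1}^(1),
    x3 = u_i^(1) + u_i^(2) + u_{i-1}^(1), all modulo 2."""
    p1 = p2 = 0                  # u_{i-1}^(1), u_{i-1}^(2), initialized to zero
    code = []
    for a, b in zip(u1, u2):     # a = u_i^(1), b = u_i^(2)
        code.append((a ^ p1 ^ p2, b ^ p1, a ^ b ^ p1))
        p1, p2 = a, b
    return code
```

With the subsequences used in the example that follows,&nbsp; `encode_k2_n3([0, 1, 0, 1], [1, 0, 0, 1])`&nbsp; returns&nbsp; `[(0, 1, 1), (0, 0, 1), (1, 1, 1), (1, 1, 0)]`.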
  
{{GraueBox|TEXT= 
 
+
$\text{Example 4:}$&nbsp; Let the information sequence be &nbsp;$\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1, \ \text{ ...})$.&nbsp; This gives the subsequences&nbsp;  
Der hiermit definierte Code ist die Menge aller möglichen Codesequenzen&nbsp; $\underline{x}$, die sich bei Eingabe aller möglichen Informationssequenzen&nbsp; $\underline{u}$&nbsp; ergeben. Sowohl&nbsp; $\underline{u}$&nbsp; als auch&nbsp; $\underline{x}$&nbsp; seien dabei (zeitlich) unbegrenzt.<br>
+
*$\underline{u}^{(1)} = (0, 1, 0, 1, \ \text{ ...})$,
 +
*$\underline{u}^{(2)} = (1, 0, 0, 1, \ \text{ ...})$.  
  
{{GraueBox|TEXT= 
 
$\text{Beispiel 4:}$&nbsp; Die Informationssequenz sei &nbsp;$\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1, \ \text{ ...})$. Daraus ergeben sich die Teilsequenzen&nbsp; $\underline{u}^{(1)} = (0, 1, 0, 1, \ \text{ ...})$ &nbsp;und&nbsp; $\underline{u}^{(2)} = (1, 0, 0, 1, \ \text{ ...})$.
 
  
Mit der Festlegung&nbsp; $u_0^{(1)} = u_0^{(2)} = 0$&nbsp; folgt aus den obigen Gleichungen für die&nbsp; $n = 3$&nbsp; Codebits
+
With the specification&nbsp; $u_0^{(1)} = u_0^{(2)} = 0$&nbsp; it follows from the above equations for the&nbsp; $n = 3$&nbsp; code bits.
*im ersten Codierschritt&nbsp; $(i = 1)$:
+
*at the first coding step&nbsp; $(i = 1)$:  
  
 
::<math>x_1^{(1)} = u_{1}^{(1)} = 0  \hspace{0.05cm},\hspace{0.4cm}
 
::<math>x_1^{(1)} = u_{1}^{(1)} = 0  \hspace{0.05cm},\hspace{0.4cm}
Line 182: Line 197:
 
x_1^{(3)} = u_{1}^{(1)} + u_{1}^{(2)} = 0+1 = 1 \hspace{0.05cm},</math>
 
x_1^{(3)} = u_{1}^{(1)} + u_{1}^{(2)} = 0+1 = 1 \hspace{0.05cm},</math>
  
*im zweiten Codierschritt&nbsp; $(i = 2)$:
+
*at the second coding step&nbsp; $(i = 2)$:
  
 
::<math>x_2^{(1)} =u_{2}^{(1)} + u_{1}^{(1)}+ u_{1}^{(2)} = 1 + 0 + 1 = 0\hspace{0.05cm},\hspace{0.4cm}
 
::<math>x_2^{(1)} =u_{2}^{(1)} + u_{1}^{(1)}+ u_{1}^{(2)} = 1 + 0 + 1 = 0\hspace{0.05cm},\hspace{0.4cm}
Line 188: Line 203:
 
x_2^{(3)} = u_{2}^{(1)} + u_{2}^{(2)}+ u_{1}^{(1)} = 1 + 0+0 =1\hspace{0.05cm},</math>
 
x_2^{(3)} = u_{2}^{(1)} + u_{2}^{(2)}+ u_{1}^{(1)} = 1 + 0+0 =1\hspace{0.05cm},</math>
  
*im dritten Codierschritt&nbsp; $(i = 3)$:
+
*at the third coding step&nbsp; $(i = 3)$:
  
 
::<math>x_3^{(1)} =u_{3}^{(1)} + u_{2}^{(1)}+ u_{2}^{(2)} = 0+1+0 = 1\hspace{0.05cm},\hspace{0.4cm}
 
::<math>x_3^{(1)} =u_{3}^{(1)} + u_{2}^{(1)}+ u_{2}^{(2)} = 0+1+0 = 1\hspace{0.05cm},\hspace{0.4cm}
Line 194: Line 209:
 
x_3^{(3)} =u_{3}^{(1)} + u_{3}^{(2)}+ u_{2}^{(1)} = 0+0+1 =1\hspace{0.05cm},</math>
 
x_3^{(3)} =u_{3}^{(1)} + u_{3}^{(2)}+ u_{2}^{(1)} = 0+0+1 =1\hspace{0.05cm},</math>
  
*und schließlich im vierten Codierschritt&nbsp; $(i = 4)$:
+
*and finally at the fourth coding step&nbsp; $(i = 4)$:
  
 
::<math>x_4^{(1)} = u_{4}^{(1)} + u_{3}^{(1)}+ u_{3}^{(2)} = 1+0+0 = 1\hspace{0.05cm},\hspace{0.4cm}
 
::<math>x_4^{(1)} = u_{4}^{(1)} + u_{3}^{(1)}+ u_{3}^{(2)} = 1+0+0 = 1\hspace{0.05cm},\hspace{0.4cm}
Line 200: Line 215:
 
x_4^{(3)}= u_{4}^{(1)} + u_{4}^{(2)}+ u_{3}^{(1)} = 1+1+0 =0\hspace{0.05cm}.</math>
 
x_4^{(3)}= u_{4}^{(1)} + u_{4}^{(2)}+ u_{3}^{(1)} = 1+1+0 =0\hspace{0.05cm}.</math>
  
Damit lautet die Codesequenz nach dem Multiplexer: &nbsp; &nbsp; $\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, \ \text{...})$.}}<br>
+
Thus the encoded sequence is after the multiplexer: &nbsp; &nbsp; $\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, \ \text{...})$.}}<br>
  
== Aufgaben zum Kapitel==
+
== Exercises for the chapter==
 
<br>
 
<br>
[[Aufgaben:Aufgabe_3.1:_Analyse_eines_Faltungscodierers|Aufgabe 3.1: Analyse eines Faltungscodierers]]
+
[[Aufgaben:Exercise_3.1:_Analysis_of_a_Convolutional_Encoder|Exercise 3.1: Analysis of a Convolutional Encoder]]
  
[[Aufgaben:Aufgabe_3.1Z:_Faltungscodes_der_Rate_1/2|Aufgabe 3.1Z: Faltungscodes der Rate 1/2]]
+
[[Aufgaben:Exercise_3.1Z:_Convolution_Codes_of_Rate_1/2|Exercise 3.1Z: Convolution Codes of Rate 1/2]]
  
==Quellenverzeichnis==
+
==References==
 
<references/>
 
<references/>
  
 
{{Display}}
 
{{Display}}

Latest revision as of 17:02, 7 December 2022

# OVERVIEW OF THE THIRD MAIN CHAPTER #


This third main chapter discusses   »convolutional codes«,   first described in 1955 by  $\text{Peter Elias}$  [Eli55][1].

  • While for the linear block codes  $($first main chapter$)$  and the Reed–Solomon codes  $($second main chapter$)$  the code word length is  $n$  in each case,  the theory of convolutional codes is based on  "semi-infinite"  information and encoded sequences.  Likewise,  "maximum likelihood decoding"  using the  $\text{Viterbi algorithm}$  per se yields the entire sequence.


Specifically,  this chapter discusses:

  1. Important definitions for convolutional codes:   »code rate«,  »memory«,  »influence length«,  »free distance«,
  2. »similarities«  and  »differences«  to linear block codes,
  3. »generator matrix«  and  »transfer function matrix«  of a convolutional code,
  4. »fractional-rational transfer functions«  for systematic convolutional codes,
  5. description with  »state transition diagram«  and  »trellis diagram«,
  6. »termination«  and  »puncturing«  of convolutional codes,
  7. »decoding«  of convolutional codes   ⇒   »Viterbi algorithm«,
  8. »weight enumerator functions«  and approximations for the  »bit error probability«.


Requirements and definitions


In this chapter,  we consider an infinite binary information sequence  $\underline{u}$  and divide it into information blocks  $\underline{u}_i$  of  $k$  bits each.  This fact can be formalized as follows:

\[\underline{\it u} = \left ( \underline{\it u}_1, \underline{\it u}_2, \hspace{0.05cm} \text{...} \hspace{0.05cm}, \underline{\it u}_i , \hspace{0.05cm} \text{...} \hspace{0.05cm}\right ) \hspace{0.3cm}{\rm with}\hspace{0.3cm} \underline{\it u}_i = \left ( u_i^{(1)}, u_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, u_i^{(k)}\right )\hspace{0.05cm},\]
\[u_i^{(j)}\in {\rm GF(2)}\hspace{0.3cm}{\rm for} \hspace{0.3cm}1 \le j \le k \hspace{0.5cm}\Rightarrow \hspace{0.5cm} \underline{\it u}_i \in {\rm GF}(2^k)\hspace{0.05cm}.\]

Such a sequence without negative indices is called  »semi–infinite«.

$\text{Definition:}$  In a  »binary convolutional code«,  a code word  $\underline{x}_i$  consisting of  $n$  code bits is output at time  $i$:

\[\underline{\it x}_i = \left ( x_i^{(1)}, x_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, x_i^{(n)}\right )\in {\rm GF}(2^n)\hspace{0.05cm}.\]

This code word results from

  • the  $k$  bits of the current information block  $\underline{u}_i$,  and
  • the  $m$  previous information blocks  $\underline{u}_{i-1}$, ... , $\underline{u}_{i-m}$.


Dependencies for a convolutional encoder with  $m = 2$

The diagram opposite illustrates this fact for the parameters 

$$k = 4, \ n = 7, \ m = 2,\ i = 4.$$

The  $n = 7$  code bits  $x_4^{(1)}$, ... , $x_4^{(7)}$  generated at time  $i = 4$  may depend  $($directly$)$  on the  $k \cdot (m+1) = 12$  information bits marked in red and are generated by modulo-2 additions.

Drawn in yellow is an  $(n, k)$  convolutional encoder.  Note that the vector  $\underline{u}_i$  and the sequence  $\underline{u}^{(i)}$  are fundamentally different:

  • While  $\underline{u}_i = (u_i^{(1)}, u_i^{(2)}$, ... , $u_i^{(k)})$  summarizes  $k$  parallel information bits at time  $i$,
  • $\underline{u}^{(i)} = (u_1^{(i)}$, $u_2^{(i)}, \ \text{...})$  denotes the  $($horizontal$)$  sequence at the  $i$–th input of the convolutional encoder.


$\text{Some definitions related to convolutional codes:}$ 

  • The  »code rate«  is,  as for block codes,  $R = k/n$.
  • One refers to  $m$  as the  »memory«  of the code and denotes the  "convolutional code"  itself by  ${\rm CC} \ (n, k, m)$.
  • From this follows the  »constraint length«  $\nu = m + 1$  of the convolutional code.
  • For  $k > 1$,  these parameters are often given in bits as well:   $m_{\rm bit} = m \cdot k$    resp.    $\nu_{\rm bit} = (m + 1) \cdot k$.
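For illustration, these parameters can be collected in a short sketch; the helper `cc_parameters` is hypothetical and introduced here only for this example:

```python
# Hypothetical helper illustrating the definitions above; not part of any library.
def cc_parameters(n, k, m):
    """Derived parameters of a convolutional code CC(n, k, m)."""
    return {
        "R": k / n,             # code rate, as for block codes
        "m": m,                 # memory
        "nu": m + 1,            # constraint length nu = m + 1
        "m_bit": m * k,         # memory in bits (relevant for k > 1)
        "nu_bit": (m + 1) * k,  # constraint length in bits
    }

print(cc_parameters(2, 1, 2))  # the CC(2, 1, 2) encoder of Example 3
print(cc_parameters(3, 2, 1))  # the two-input encoder with k = 2, n = 3, m = 1
```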

Similarities and differences compared to block codes


From the  $\text{definition}$  in the previous section,  it is evident that a binary convolutional code with  $m = 0$  $($i.e., without memory$)$  would be identical to a binary block code as described in the first main chapter.  We exclude this limiting case and therefore always assume in the following:

  • The memory  $m$  is greater than or equal to  $1$.
  • The influence length  $\nu$  is greater than or equal to  $2$.

$\text{Example 1:}$  For a  $(7, 4)$  block code,  the code word  $\underline {x}_4$  depends only on the information word  $\underline{u}_4$,  but not on  $\underline{u}_2$  and  $\underline{u}_3$,  as was the case for the  $\text{example convolutional encoder}$  $($with  $m = 2)$  in the last section.

Dependencies on a  $(7, 4)$  block code at time  $i = 4$

For example, if

$$x_4^{(1)} = u_4^{(1)}, \ x_4^{(2)} = u_4^{(2)},$$
$$x_4^{(3)} = u_4^{(3)}, \ x_4^{(4)} = u_4^{(4)}$$

as well as

$$x_4^{(5)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(3)}\hspace{0.05cm},$$
$$x_4^{(6)} = u_4^{(2)}+ u_4^{(3)}+u_4^{(4)}\hspace{0.05cm},$$
$$x_4^{(7)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(4)}\hspace{0.05cm},$$

holds,  then it is a so-called  $\text{systematic Hamming code}$  $\text{HS (7, 4, 3)}$.  In the graph,  these special dependencies for  $x_4^{(1)}$  and  $x_4^{(7)}$  are drawn in red.
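A minimal Python check of Example 1 (the function name is hypothetical; all additions are modulo 2) makes the memoryless nature explicit — each code word follows from the current information word alone:

```python
# Systematic HS(7, 4, 3) encoding, following the seven equations of Example 1.
def hs_7_4_encode(u):
    u1, u2, u3, u4 = u
    return [u1, u2, u3, u4,      # systematic part: x^(1) ... x^(4)
            (u1 + u2 + u3) % 2,  # x^(5)
            (u2 + u3 + u4) % 2,  # x^(6)
            (u1 + u2 + u4) % 2]  # x^(7)

print(hs_7_4_encode([1, 0, 1, 1]))  # → [1, 0, 1, 1, 0, 0, 0]
```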


In a sense,  one could also interpret an  $(n, k)$  convolutional code with memory  $m ≥ 1$  as a block code whose code parameters  $n\hspace{0.05cm}' \gg n$  and  $k\hspace{0.05cm}' \gg k$  would have to assume much larger values than those of the present convolutional code. 

However,  because of the large differences in description,  properties,  and especially decoding,  we consider convolutional codes to be something completely new in this learning tutorial.  The reasons for this are as follows:

  1.   A block encoder converts information words of length  $k$  bits into code words of length  $n$  bits each,  block by block.  The larger its code word length  $n$,  the more powerful the block code is.  For a given code rate  $R = k/n$,  this also requires a large information word length  $k$.
  2.   In contrast,  the correction capability of a convolutional code is essentially determined by its memory  $m$.  The code parameters  $k$  and  $n$  are mostly chosen very small here  $(1, \ 2, \ 3, \ \text{...})$.  Thus,  only very few and,  moreover,  very simple convolutional codes are of practical importance.
  3.   Even with small values for  $k$  and  $n$,  a convolutional encoder transfers a whole sequence of information bits  $(k\hspace{0.05cm}' → ∞)$  into a very long sequence of code bits  $(n\hspace{0.05cm}' = k\hspace{0.05cm}'/R)$.  Such a code thus often provides a large correction capability as well.
  4.   There are efficient convolutional decoders,  for example the  $\text{Viterbi algorithm}$  and the  $\text{BCJR algorithm}$,  which can process reliability information about the channel   ⇒   "soft decision input"  and provide reliability information about the decoding result   ⇒   "soft decision output".
  5.   Please note:  The terms  "convolutional code"  and  "convolutional encoder"  should not be confused:
  • The  "convolutional code"  ${\rm CC} \ (n, \ k, \ m)$   ⇒   $R = k/n$  is understood as  "the set of all possible encoded sequences  $\underline{x}$  $($at the output$)$  that can be generated with this code considering all possible information sequences  $\underline{u}$  $($at the input$)$".
  • There are several  "convolutional encoders"  that realize the same  "convolutional code".

Rate 1/2 convolutional encoder


Convolutional encoder  $(n = 2, \hspace{0.05cm} k = 1)$  for one bit  $u_i$  $($upper graphic$)$  and for the information sequence  $\underline{u}$  $($lower graphic$)$

The upper graph shows an  $(n = 2, \hspace{0.05cm} k = 1)$  convolutional encoder. 

  • At clock time  $i$  the information bit  $u_i$  is present at the encoder input and the  $2$–bit code block  $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$  is output. 
  • To generate two code bits  $x_i^{(1)}$  and  $x_i^{(2)}$  from a single information bit,  the convolutional encoder must include at least one memory element:
$$k = 1\hspace{0.05cm}, \ n = 2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} m \ge 1 \hspace{0.05cm}.$$
  • Taking into account the  $($semi-infinite$)$  information sequence  $\underline{u}$,  the model according to the lower graph results.


$\text{Example 2:}$  The graphic shows a convolutional encoder for the parameters  $k = 1, \ n = 2$  and  $m = 1$. 

Convolutional encoder with  $k = 1, \ n = 2, \ m = 1$  and example sequences;   the square with label  $D$  $($"delay"$)$  indicates a memory element.
  • This is a  "systematic"  convolutional encoder,  characterized by 
$$x_i^{(1)} = u_i.$$
  • The second output returns  $x_i^{(2)} = u_i + u_{i-1}$.


  • In the example output sequence  $\underline{x}$  after multiplexing,  all  $x_i^{(1)}$  are labeled red and all  $x_i^{(2)}$  are labeled blue.
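The systematic encoder of Example 2 can be sketched in a few lines (a Python sketch for illustration, assuming zero initialization of the memory element; the function name is hypothetical):

```python
# Systematic (n = 2, k = 1, m = 1) encoder of Example 2:
# x_i^(1) = u_i  and  x_i^(2) = u_i + u_{i-1}  (modulo 2).
def encode_sys(u_seq):
    x = []
    prev = 0                      # memory element D, initialized to zero
    for u in u_seq:
        x += [u, (u + prev) % 2]  # multiplexed output (x^(1), x^(2))
        prev = u                  # shift the current bit into the memory
    return x

print(encode_sys([1, 0, 1, 1]))  # → [1, 1, 0, 1, 1, 1, 1, 0]
```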


$\text{Example 3:}$  The graphic shows an  $(n = 2, \ k = 1)$  convolutional encoder with  $m = 2$  memory elements:

  • On the left is shown the equivalent circuit.
  • On the right you see a realization form of this encoder.
  • The two code bits are:
Convolutional encoder  $(k = 1, \ n = 2, \ m = 2)$  in two different representations
\[x_i^{(1)} = u_{i} + u_{i-1}+ u_{i-2} \hspace{0.05cm},\]
\[x_i^{(2)} = u_{i} + u_{i-2} \hspace{0.05cm}.\]
  • Because  $x_i^{(1)} ≠ u_i$  this is not a systematic code.


It can be seen:

  1. The input sequence  $\underline{u}$  is stored in a shift register of length  $L = m + 1 = 3$.
  2. At clock time  $i$  the left memory element contains the current information bit  $u_i$,  which is shifted one place to the right at each of the next clock times.
  3. The number of yellow squares gives the memory  $m = 2$  of the encoder.

⇒   From these plots it is clear that  $x_i^{(1)}$  and  $x_i^{(2)}$  can each be interpreted as the output of a  $\text{digital filter}$  over the Galois field  ${\rm GF(2)}$,  where both filters operate in parallel with the same input sequence  $\underline{u}$.

⇒   Since  $($in general terms$)$,  the output signal of a filter results from the  $\text{convolution}$  of the input signal with the filter impulse response,  this is referred to as  "convolutional coding".
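This filter interpretation can be made concrete for the Example 3 encoder: over  ${\rm GF(2)}$,  each output sequence is the ordinary convolution of the input with the impulse response of the respective branch, reduced modulo 2. A minimal sketch (NumPy assumed available; the input sequence is an arbitrary example):

```python
import numpy as np

u  = np.array([1, 0, 1, 1, 0])  # arbitrary example input sequence
g1 = np.array([1, 1, 1])        # impulse response of branch 1: x^(1) = u_i + u_{i-1} + u_{i-2}
g2 = np.array([1, 0, 1])        # impulse response of branch 2: x^(2) = u_i + u_{i-2}

# Convolution over GF(2) = integer convolution reduced modulo 2:
x1 = np.convolve(u, g1) % 2     # → [1 1 0 0 0 1 0]
x2 = np.convolve(u, g2) % 2     # → [1 0 0 1 1 1 0]

# The first five entries of x1 and x2 agree with the clocked encoder outputs;
# the tail corresponds to flushing the two memory elements.
print(x1, x2)
```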

Convolutional encoder with two inputs


Convolutional encoder with  $k = 2$  and  $n = 3$

Now consider a convolutional encoder that generates  $n = 3$  code bits from  $k = 2$  information bits.

  • The information sequence  $\underline{u}$  is divided into blocks of two bits each.
  • At the clock time  $i$:   The bit  $u_i^{(1)}$  is present at the upper input and  $u_i^{(2)}$ at the lower input.
  • Then holds for the  $n = 3$  code bits at time  $i$:
\[x_i^{(1)} = u_{i}^{(1)} + u_{i-1}^{(1)}+ u_{i-1}^{(2)} \hspace{0.05cm},\]
\[x_i^{(2)} = u_{i}^{(2)} + u_{i-1}^{(1)} \hspace{0.05cm},\]
\[x_i^{(3)} = u_{i}^{(1)} + u_{i}^{(2)}+ u_{i-1}^{(1)} \hspace{0.05cm}.\]

In the graph,  the information bits  $u_i^{(1)}$  and  $u_i^{(2)}$  are marked in red and blue,  respectively,  and the previous information bits  $u_{i-1}^{(1)}$  and  $u_{i-1}^{(2)}$  in green and brown.

To be noted:

  1.   The  "memory"  $m$  is equal to the maximum memory cell count in a branch   ⇒   here  $m = 1$.
  2.   The  "influence length"  $\nu$  is equal to the sum of all memory elements   ⇒   here  $\nu = 2$.
  3.  All memory elements are set to zero at the beginning of the coding  $($"initialization"$)$.
  4.  The code defined herewith is the set of all possible encoded sequences  $\underline{x}$,  which result when all possible information sequences  $\underline{u}$  are entered.
  5.  Both  $\underline{u}$  and  $\underline{x}$  are thereby (temporally) unbounded.


$\text{Example 4:}$  Let the information sequence be  $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1, \ \text{ ...})$.  This gives the subsequences 

  • $\underline{u}^{(1)} = (0, 1, 0, 1, \ \text{ ...})$,
  • $\underline{u}^{(2)} = (1, 0, 0, 1, \ \text{ ...})$.


With the specification  $u_0^{(1)} = u_0^{(2)} = 0$,  the above equations yield for the  $n = 3$  code bits:

  • at the first coding step  $(i = 1)$:
\[x_1^{(1)} = u_{1}^{(1)} = 0 \hspace{0.05cm},\hspace{0.4cm} x_1^{(2)} = u_{1}^{(2)} = 1 \hspace{0.05cm},\hspace{0.4cm} x_1^{(3)} = u_{1}^{(1)} + u_{1}^{(2)} = 0+1 = 1 \hspace{0.05cm},\]
  • at the second coding step  $(i = 2)$:
\[x_2^{(1)} =u_{2}^{(1)} + u_{1}^{(1)}+ u_{1}^{(2)} = 1 + 0 + 1 = 0\hspace{0.05cm},\hspace{0.4cm} x_2^{(2)} = u_{2}^{(2)} + u_{1}^{(1)} = 0+0 = 0\hspace{0.05cm},\hspace{0.4cm} x_2^{(3)} = u_{2}^{(1)} + u_{2}^{(2)}+ u_{1}^{(1)} = 1 + 0+0 =1\hspace{0.05cm},\]
  • at the third coding step  $(i = 3)$:
\[x_3^{(1)} =u_{3}^{(1)} + u_{2}^{(1)}+ u_{2}^{(2)} = 0+1+0 = 1\hspace{0.05cm},\hspace{0.4cm} x_3^{(2)} = u_{3}^{(2)} + u_{2}^{(1)} = 0+1=1\hspace{0.05cm},\hspace{0.4cm} x_3^{(3)} =u_{3}^{(1)} + u_{3}^{(2)}+ u_{2}^{(1)} = 0+0+1 =1\hspace{0.05cm},\]
  • and finally at the fourth coding step  $(i = 4)$:
\[x_4^{(1)} = u_{4}^{(1)} + u_{3}^{(1)}+ u_{3}^{(2)} = 1+0+0 = 1\hspace{0.05cm},\hspace{0.4cm} x_4^{(2)} = u_{4}^{(2)} + u_{3}^{(1)} = 1+0=1\hspace{0.05cm},\hspace{0.4cm} x_4^{(3)}= u_{4}^{(1)} + u_{4}^{(2)}+ u_{3}^{(1)} = 1+1+0 =0\hspace{0.05cm}.\]

Thus,  after the multiplexer,  the encoded sequence is:     $\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, \ \text{...})$.
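The hand calculation of Example 4 can be reproduced with a short sketch of the two-input encoder (Python; zero initialization as specified above, function name hypothetical):

```python
# (n = 3, k = 2, m = 1) encoder with the three equations given above (modulo 2).
def encode_k2(u1_seq, u2_seq):
    x = []
    p1 = p2 = 0                    # u_{i-1}^(1) and u_{i-1}^(2), initialized to zero
    for u1, u2 in zip(u1_seq, u2_seq):
        x += [(u1 + p1 + p2) % 2,  # x_i^(1) = u_i^(1) + u_{i-1}^(1) + u_{i-1}^(2)
              (u2 + p1) % 2,       # x_i^(2) = u_i^(2) + u_{i-1}^(1)
              (u1 + u2 + p1) % 2]  # x_i^(3) = u_i^(1) + u_i^(2) + u_{i-1}^(1)
        p1, p2 = u1, u2
    return x

# Subsequences from Example 4:
print(encode_k2([0, 1, 0, 1], [1, 0, 0, 1]))
# → [0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0]  (the sequence after the multiplexer)
```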


Exercises for the chapter


Exercise 3.1: Analysis of a Convolutional Encoder

Exercise 3.1Z: Convolution Codes of Rate 1/2

References

  1. Elias, P.:  Coding for Noisy Channels.  In:  IRE Conv. Rec. Part 4, pp. 37-47, 1955.