Channel Coding/Basics of Convolutional Coding

# OVERVIEW OF THE THIRD MAIN CHAPTER #


This third main chapter discusses convolutional codes, first described in 1955 by Peter Elias [Eli55][1]. While linear block codes (see the first main chapter) and Reed-Solomon codes (see the second main chapter) each have a fixed codeword length $n$, the theory of convolutional codes is based on "semi-infinite" information and code sequences. Accordingly, maximum likelihood decoding using the Viterbi algorithm per se yields the entire decoded sequence.

Specifically, this chapter discusses:

  • important definitions for convolutional codes: code rate, memory, influence length, free distance,
  • similarities and differences to linear block codes,
  • generator matrix and transfer function matrix of a convolutional code,
  • fractional-rational transfer functions for systematic convolutional codes,
  • description with state transition diagram and trellis diagram,
  • termination and puncturing of convolutional codes,
  • decoding of convolutional codes   ⇒   Viterbi algorithm,
  • weight enumerator functions and approximations for the bit error probability.


Requirements and definitions


In this chapter, we consider an infinite binary information sequence  $\underline{u}$  and divide it into information blocks  $\underline{u}_i$  of  $k$  bits each. This can be formalized as follows:

\[\underline{\it u} = \left ( \underline{\it u}_1, \underline{\it u}_2, \hspace{0.05cm} \text{...} \hspace{0.05cm}, \underline{\it u}_i , \hspace{0.05cm} \text{...} \hspace{0.05cm}\right ) \hspace{0.3cm}{\rm with}\hspace{0.3cm} \underline{\it u}_i = \left ( u_i^{(1)}, u_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, u_i^{(k)}\right )\hspace{0.05cm},\]
\[u_i^{(j)}\in {\rm GF(2)}\hspace{0.3cm}{\rm for} \hspace{0.3cm}1 \le j \le k \hspace{0.5cm}\Rightarrow \hspace{0.5cm} \underline{\it u}_i \in {\rm GF}(2^k)\hspace{0.05cm}.\]

Such a sequence without negative indices is called "semi-infinite".

$\text{Definition:}$  In a binary convolutional code, a codeword  $\underline{x}_i$  consisting of  $n$  code bits is output at time  $i$:

\[\underline{\it x}_i = \left ( x_i^{(1)}, x_i^{(2)}, \hspace{0.05cm} \text{...} \hspace{0.05cm}, x_i^{(n)}\right )\in {\rm GF}(2^n)\hspace{0.05cm}.\]

This codeword results from

  • the  $k$  bits of the current information block  $\underline{u}_i$, and
  • the  $m$  previous information blocks  $\underline{u}_{i-1}$, ... , $\underline{u}_{i-m}$.


Dependencies for a convolutional encoder with  $m = 2$

The diagram opposite illustrates this fact for the parameters  $k = 4, \ n = 7, \ m = 2$  and  $i = 4$.

The  $n = 7$  code bits  $x_4^{(1)}$, ... , $x_4^{(7)}$  generated at time  $i = 4$  may depend (directly) on the  $k \cdot (m+1) = 12$  information bits marked in red and are generated by modulo-2 additions.

Also drawn in yellow is an  $(n, k)$ convolutional encoder. Note that the vector  $\underline{u}_i$  and the sequence  $\underline{u}^{(i)}$  are fundamentally different (see the sketch below):

  • While  $\underline{u}_i = (u_i^{(1)}, u_i^{(2)}, \ \text{...} \hspace{0.05cm}, u_i^{(k)})$  summarizes the  $k$  parallel information bits at time  $i$,
  • $\underline{u}^{(i)} = (u_1^{(i)}, u_2^{(i)}, \ \text{...})$  denotes the (horizontal) sequence at the  $i$-th input of the convolutional encoder.
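To make this distinction concrete, the following minimal Python sketch (our own illustration; variable names are chosen freely and are not from the source) splits a serial information sequence into the blocks $\underline{u}_i$ and into the $k$ parallel input streams. The numbers are those of Example 4 further below:

    # Demultiplexing a serial information sequence for an encoder with k inputs.
    k = 2
    u = [0, 1, 1, 0, 0, 0, 1, 1]      # serial information sequence

    # Block at time i: u_i = (u_i^(1), ..., u_i^(k))
    blocks = [tuple(u[i:i + k]) for i in range(0, len(u), k)]
    print(blocks)                     # [(0, 1), (1, 0), (0, 0), (1, 1)]

    # Sequence at the j-th input: u^(j) = (u_1^(j), u_2^(j), ...)
    streams = [u[j::k] for j in range(k)]
    print(streams)                    # [[0, 1, 0, 1], [1, 0, 0, 1]]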


$\text{Definitions related to convolutional codes:}$ 

  • As with block codes, the code rate is  $R = k/n$.
  • $m$  denotes the memory of the code; the convolutional code itself is denoted by  ${\rm CC} \ (n, k, m)$.
  • From this, the constraint length (influence length) is  $\nu = m + 1$.
  • For  $k > 1$, these parameters are often also given in bits:   $m_{\rm Bit} = m \cdot k$  and  $\nu_{\rm Bit} = (m + 1) \cdot k$, respectively.
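Applied, for instance, to the $(n = 2, \ k = 1)$ encoder with $m = 2$ from Example 3 below, these definitions give:

\[k = 1\hspace{0.05cm},\ n = 2\hspace{0.05cm},\ m = 2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} R = 1/2\hspace{0.05cm},\hspace{0.5cm} \nu = m + 1 = 3\hspace{0.05cm},\hspace{0.5cm} m_{\rm Bit} = 2\hspace{0.05cm},\hspace{0.5cm} \nu_{\rm Bit} = 3\hspace{0.05cm}.\]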

Similarities and differences compared to block codes


From the definition above, it is evident that a binary convolutional code with  $m = 0$  (i.e., without memory) would be identical to a binary block code as described in the first main chapter. We exclude this limiting case and therefore always assume in the following:

  • The memory  $m$  is greater than or equal to  $1$.
  • The influence length  $\nu$  is greater than or equal to  $2$.

$\text{Example 1:}$  For a  $(7, 4)$ block code, the codeword  $\underline{x}_4$  depends only on the information word  $\underline{u}_4$, but not on  $\underline{u}_2$  and  $\underline{u}_3$, unlike the example convolutional code  $($with  $m = 2)$  considered above.

Dependencies for a  $(7, 4)$ block code at time  $i = 4$

For example, if

$$x_4^{(1)} = u_4^{(1)}, \ x_4^{(2)} = u_4^{(2)},$$
$$x_4^{(3)} = u_4^{(3)}, \ x_4^{(4)} = u_4^{(4)}$$

as well as

$$x_4^{(5)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(3)}\hspace{0.05cm},$$
$$x_4^{(6)} = u_4^{(2)}+ u_4^{(3)}+u_4^{(4)}\hspace{0.05cm},$$
$$x_4^{(7)} = u_4^{(1)}+ u_4^{(2)}+u_4^{(4)}\hspace{0.05cm},$$

holds, then this is the so-called systematic Hamming code  $\text{HS (7, 4, 3)}$. In the graph, these special dependencies for  $x_4^{(1)}$  and  $x_4^{(7)}$  are drawn in red.
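These parity equations can be checked directly in code. The following Python sketch (our own illustration; the function name is hypothetical) encodes one information word exactly according to the equations above; note that the codeword depends only on the current block, which is precisely the block-code property:

    # Systematic Hamming encoder HS(7, 4, 3) with the parity equations above.
    def hamming_7_4_encode(u):
        """Map an information word (u1, u2, u3, u4) to the codeword x."""
        u1, u2, u3, u4 = u
        x5 = (u1 + u2 + u3) % 2       # x^(5) = u^(1) + u^(2) + u^(3)
        x6 = (u2 + u3 + u4) % 2       # x^(6) = u^(2) + u^(3) + u^(4)
        x7 = (u1 + u2 + u4) % 2       # x^(7) = u^(1) + u^(2) + u^(4)
        return (u1, u2, u3, u4, x5, x6, x7)

    print(hamming_7_4_encode((1, 0, 1, 1)))   # (1, 0, 1, 1, 0, 0, 0)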


In a sense, one could also interpret an  $(n, k)$ convolutional code with memory  $m ≥ 1$  as a block code whose parameters would, however, have to assume much larger values than those of the present convolutional code:  $n\hspace{0.05cm}' \gg n$  and  $k\hspace{0.05cm}' \gg k$. Because of the large differences in description, properties, and especially decoding, we treat convolutional codes as something completely new in this learning tutorial. The reasons are as follows:

  • A block encoder converts information words of length  $k$  bits into codewords of length  $n$  bits, block by block. The larger its codeword length  $n$, the more powerful the block code. For a given code rate  $R = k/n$, this also requires a large information word length  $k$.
  • In contrast, the correction capability of a convolutional code is essentially determined by its memory  $m$. The code parameters  $k$  and  $n$  are mostly chosen very small here  $(1, \ 2, \ 3, \ \text{...})$. Thus, only very few, and moreover very simple, convolutional codes are of practical importance.
  • Even with small values for  $k$  and  $n$, a convolutional encoder transfers a whole sequence of information bits  $(k\hspace{0.05cm}' → ∞)$  into a very long sequence of code bits  $(n\hspace{0.05cm}' = k\hspace{0.05cm}'/R)$. Such a code thus often provides a large correction capability as well.
  • There are efficient convolutional decoders, for example the Viterbi algorithm and the BCJR algorithm, which can process reliability information about the channel   ⇒   soft-decision input, and provide reliability information about the decoding result   ⇒   soft-decision output.

Rate 1/2 convolutional encoder


Convolutional encoder  $(n = 2, \hspace{0.05cm} k = 1)$  for one information bit  $u_i$


However, the terms "convolutional encoder" and "convolutional code" should not be confused:

  • A convolutional code  ${\rm CC} \ (n, \ k, \ m)$   ⇒   $R = k/n$  is understood as the set of all possible code sequences  $\underline{x}$  that can be generated with this code, considering all possible information sequences  $\underline{u}$  (at the input).
  • There are several convolutional encoders that realize the same convolutional code.


Convolutional encoder  $(n = 2, \hspace{0.05cm} k = 1)$  for the information sequence  $\underline{u}$

The top graph shows an  $(n = 2, \hspace{0.05cm} k = 1)$ convolutional encoder.

  • At clock time  $i$, the information bit  $u_i$  is present at the encoder input, and the 2-bit code block  $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$  is output.
  • Taking into account the (semi-infinite) information sequence  $\underline{u}$, the model according to the second graph results.
  • To generate the two code bits  $x_i^{(1)}$  and  $x_i^{(2)}$  from a single information bit, the convolutional encoder must contain at least one memory element:
$$k = 1\hspace{0.05cm}, \ n = 2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} m \ge 1 \hspace{0.05cm}.$$

$\text{Example 2:}$  Below, a convolutional encoder for the parameters  $k = 1, \ n = 2$  and  $m = 1$  is shown; a minimal code sketch follows the example. The square with yellow background indicates a memory element. Its label  $D$  is derived from "delay".

Convolutional encoder with  $k = 1, \ n = 2, \ m = 1$  and example sequences
  • This is a systematic convolutional encoder, characterized by  $x_i^{(1)} = u_i$.
  • The second output delivers  $x_i^{(2)} = u_i + u_{i-1}$.
  • In the example output sequence  $\underline{x}$  after multiplexing, all  $x_i^{(1)}$  are labeled red and all  $x_i^{(2)}$  are labeled blue.
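The following minimal Python sketch (our own illustration; the function name is hypothetical) reproduces this encoder and makes the role of the single memory element explicit:

    # Systematic rate-1/2 convolutional encoder with memory m = 1 over GF(2):
    # x_i^(1) = u_i,  x_i^(2) = u_i + u_{i-1}.
    def encode_sys_r12(u):
        x, mem = [], 0                    # memory element D, initialized to zero
        for ui in u:
            x += [ui, (ui + mem) % 2]     # multiplex x^(1) and x^(2)
            mem = ui                      # shift u_i into the memory element
        return x

    print(encode_sys_r12([1, 1, 0, 1]))   # [1, 1, 1, 0, 0, 1, 1, 1]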


Convolutional encoder  $(k = 1, \ n = 2, \ m = 2)$  in two different representations

$\text{Example 3:}$  The diagram shows an  $(n = 2, \ k = 1)$ convolutional encoder with  $m = 2$  memory elements:

  • The equivalent circuit diagram is shown on the left.
  • A possible realization of this encoder is given on the right.
  • The two code bits are:
\[x_i^{(1)} = u_{i} + u_{i-1}+ u_{i-2} \hspace{0.05cm},\]
\[x_i^{(2)} = u_{i} + u_{i-2} \hspace{0.05cm}.\]
  • Because  $x_i^{(1)} ≠ u_i$, this is not a systematic code.


It can be seen:

  • The information sequence  $\underline{u}$  is stored in a shift register of length  $L = m + 1 = 3$.
  • At clock time  $i$, the left memory element contains the current information bit  $u_i$, which is shifted one place to the right at each subsequent clock time.
  • The number of yellow squares again indicates the memory  $m = 2$  of the encoder.

From the two representations it becomes clear that  $x_i^{(1)}$  and  $x_i^{(2)}$  can each be interpreted as the output of a digital filter over the Galois field  ${\rm GF(2)}$, where both filters operate in parallel on the same input sequence  $\underline{u}$.

Since (in general terms) the output signal of a filter results from the  "convolution"  of the input signal with the filter impulse response, this is referred to as  convolutional coding.
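This filter interpretation can be reproduced in a few lines. The following Python sketch (our own illustration, using NumPy) computes both output sequences of the Example 3 encoder as modulo-2 convolutions of the input sequence with the two impulse responses:

    # The (n = 2, k = 1, m = 2) encoder of Example 3 as two parallel
    # digital filters over GF(2): each code sequence is the convolution
    # of u with a filter impulse response, reduced modulo 2.
    import numpy as np

    u  = np.array([1, 0, 1, 1])       # an arbitrary input sequence
    g1 = np.array([1, 1, 1])          # impulse response: x_i^(1) = u_i + u_{i-1} + u_{i-2}
    g2 = np.array([1, 0, 1])          # impulse response: x_i^(2) = u_i + u_{i-2}

    x1 = np.convolve(u, g1) % 2       # filtering = convolution over GF(2)
    x2 = np.convolve(u, g2) % 2
    print(x1, x2)                     # [1 1 0 0 0 1] [1 0 0 1 1 1]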

Convolutional encoder with two inputs


Convolutional encoder with  $k = 2$  and  $n = 3$

Now consider a convolutional encoder that generates  $n = 3$  code bits from  $k = 2$  information bits.

  • The information sequence  $\underline{u}$  is divided into blocks of two bits each.
  • At clock time  $i$, bit  $u_i^{(1)}$  is present at the upper input and  $u_i^{(2)}$  at the lower input.
  • For the  $n = 3$  code bits at time  $i$, the following then holds:
\[x_i^{(1)} = u_{i}^{(1)} + u_{i-1}^{(1)}+ u_{i-1}^{(2)} \hspace{0.05cm},\]
\[x_i^{(2)} = u_{i}^{(2)} + u_{i-1}^{(1)} \hspace{0.05cm},\]
\[x_i^{(3)} = u_{i}^{(1)} + u_{i}^{(2)}+ u_{i-1}^{(1)} \hspace{0.05cm}.\]

In the graph, the information bits  $u_i^{(1)}$  and  $u_i^{(2)}$  are marked in red and blue, respectively, and the previous information bits  $u_{i-1}^{(1)}$  and  $u_{i-1}^{(2)}$  in green and brown, respectively.

Note:

  • The memory  $m$  is equal to the maximum number of memory cells in a branch   ⇒   here  $m = 1$.
  • The influence length  $\nu$  is equal to the sum of all memory elements   ⇒   here  $\nu = 2$.
  • All memory elements are set to zero at the beginning of the encoding process (initialization).

The code defined in this way is the set of all possible code sequences  $\underline{x}$  that result when all possible information sequences  $\underline{u}$  are applied at the input. Both  $\underline{u}$  and  $\underline{x}$  are (temporally) unbounded.

$\text{Example 4:}$  Let the information sequence be  $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1, \ \text{ ...})$. This gives the subsequences  $\underline{u}^{(1)} = (0, 1, 0, 1, \ \text{ ...})$  and  $\underline{u}^{(2)} = (1, 0, 0, 1, \ \text{ ...})$.

With the specification  $u_0^{(1)} = u_0^{(2)} = 0$, the above equations yield for the  $n = 3$  code bits:

  • in the first coding step  $(i = 1)$:


\[x_1^{(1)} = u_{1}^{(1)} = 0 \hspace{0.05cm},\hspace{0.4cm} x_1^{(2)} = u_{1}^{(2)} = 1 \hspace{0.05cm},\hspace{0.4cm} x_1^{(3)} = u_{1}^{(1)} + u_{1}^{(2)} = 0+1 = 1 \hspace{0.05cm},\]
  • in the second coding step  $(i = 2)$:
\[x_2^{(1)} =u_{2}^{(1)} + u_{1}^{(1)}+ u_{1}^{(2)} = 1 + 0 + 1 = 0\hspace{0.05cm},\hspace{0.4cm} x_2^{(2)} = u_{2}^{(2)} + u_{1}^{(1)} = 0+0 = 0\hspace{0.05cm},\hspace{0.4cm} x_2^{(3)} = u_{2}^{(1)} + u_{2}^{(2)}+ u_{1}^{(1)} = 1 + 0+0 =1\hspace{0.05cm},\]
  • in the third coding step  $(i = 3)$:
\[x_3^{(1)} =u_{3}^{(1)} + u_{2}^{(1)}+ u_{2}^{(2)} = 0+1+0 = 1\hspace{0.05cm},\hspace{0.4cm} x_3^{(2)} = u_{3}^{(2)} + u_{2}^{(1)} = 0+1=1\hspace{0.05cm},\hspace{0.4cm} x_3^{(3)} =u_{3}^{(1)} + u_{3}^{(2)}+ u_{2}^{(1)} = 0+0+1 =1\hspace{0.05cm},\]
  • and finally in the fourth coding step  $(i = 4)$:
\[x_4^{(1)} = u_{4}^{(1)} + u_{3}^{(1)}+ u_{3}^{(2)} = 1+0+0 = 1\hspace{0.05cm},\hspace{0.4cm} x_4^{(2)} = u_{4}^{(2)} + u_{3}^{(1)} = 1+0=1\hspace{0.05cm},\hspace{0.4cm} x_4^{(3)}= u_{4}^{(1)} + u_{4}^{(2)}+ u_{3}^{(1)} = 1+1+0 =0\hspace{0.05cm}.\]

Thus, after the multiplexer, the code sequence reads:     $\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, \ \text{...})$.
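The hand calculation of Example 4 can be verified with a minimal Python sketch (our own illustration; the function name is hypothetical) of the two-input encoder defined above:

    # Convolutional encoder with k = 2, n = 3, m = 1 as defined above.
    def encode_k2_n3(u1, u2):
        """Encode the parallel streams u^(1), u^(2) into the multiplexed sequence x."""
        x, p1, p2 = [], 0, 0                  # the two memory cells, initialized to zero
        for a, b in zip(u1, u2):              # a = u_i^(1), b = u_i^(2)
            x += [(a + p1 + p2) % 2,          # x_i^(1) = u_i^(1) + u_{i-1}^(1) + u_{i-1}^(2)
                  (b + p1) % 2,               # x_i^(2) = u_i^(2) + u_{i-1}^(1)
                  (a + b + p1) % 2]           # x_i^(3) = u_i^(1) + u_i^(2) + u_{i-1}^(1)
            p1, p2 = a, b                     # shift the current bits into the memory cells
        return x

    print(encode_k2_n3([0, 1, 0, 1], [1, 0, 0, 1]))
    # [0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0]  -> matches the sequence above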


Exercises for the chapter


Exercise 3.1: Analysis of a Convolutional Encoder

Exercise 3.1Z: Convolution Codes of Rate 1/2

Sources

  1. Elias, P.: Coding for Noisy Channels. In: IRE Conv. Rec. Part 4, pp. 37-47, 1955.