Channel Coding/Code Description with State and Trellis Diagram

State definition for a memory register


A convolutional encoder can also be understood as an automaton with a finite number of states.  The number of states follows from the number  $m$  of memory elements as  $2^m$.

In this chapter we mostly start from the drawn convolutional encoder,  which is characterized by the following parameters:

Convolutional encoder with  $k = 1, \ n = 2, \ m = 2$
  • $k = 1, \ n = 2$   ⇒   code rate $R = 1/2$,
  • Memory  $m = 2$   ⇒   influence length $\nu = m+1=3$,
  • Transfer function matrix in octal form  $(7, 5)$:
$$\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2).$$

The encoded sequence  $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$   at time  $i$  depends not only on the information bit  $u_i$  but also on the content  $(u_{i-1}, \ u_{i-2})$  of the memory.

  • There are   $2^m = 4$   possibilities for this,  namely the states  $S_0, \ S_1, \ S_2,\ S_3$.
  • Let the register state  $S_{\mu}$  be defined by the index
\[\mu = u_{i-1} + 2 \cdot u_{i-2}\hspace{0.05cm}, \hspace{0.5cm}{\rm general\hspace{-0.1cm}:}\hspace{0.15cm} \mu = \sum_{l = 1}^{m} \hspace{0.1cm}2\hspace{0.03cm}^{l-1} \cdot u_{i-l} \hspace{0.05cm}.\]
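As a quick illustration, the index formula can be evaluated with a few lines of Python.  This is a minimal sketch; the convention that the memory cells are listed as  $u_{i-1}, \ u_{i-2}, \ \text{...}$  is an assumption made only for this snippet:

```python
# Minimal sketch: state index mu from the register contents.
# Assumed convention: memory = [u_{i-1}, u_{i-2}, ..., u_{i-m}].
def state_index(memory):
    return sum(bit << (l - 1) for l, bit in enumerate(memory, start=1))

print(state_index([0, 1]))  # u_{i-1} = 0, u_{i-2} = 1  ->  mu = 2, i.e. state S_2
```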

To avoid confusion,  we distinguish in the following between upper and lower case letters:

  1.   the possible states  $S_{\mu}$  with the indices  $0 ≤ \mu ≤ 2^m \,-1$,
  2.   the current states  $s_i$  at the times  $i = 1, \ 2, \ 3, \ \text{...}$


$\text{Example 1:}$  The graph shows for the convolutional encoder sketched above

To illustrate the states  $S_{\mu}$


  • the information sequence  $\underline{u} = (u_1,u_2, \text{...} \ ) $
      ⇒   information bits  $u_i$,
  • the current states at times  $i$
      ⇒   $s_i ∈ \{S_0, \ S_1, \ S_2, \ S_3\}$,
  • the 2–bit sequences  $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$
      ⇒   encoding the information bit $u_i$.


For example,  you can see from this illustration  $($the colors are to facilitate the transition to later graphics$)$:

  1.   At time  $i = 5$   ⇒   $u_{i-1} = u_4 = 0$,  $u_{i-2} = u_3 = 1$  holds.  That is,  the automaton is in state  $s_5 = S_2$. 
  2.   With the information bit  $u_i = u_5 = 0$,  the encoded sequence  $\underline{x}_5 = (11)$  is obtained.
  3.   The state for time  $i = 6$  results from  $u_{i-1} = u_5 = 0$  and  $u_{i-2} = u_4 = 0$  to  $s_6 = S_0$.
  4.   Because of  $u_6 = 0$   ⇒   $\underline{x}_6 = (00)$  is output and the new state  $s_7$  is again  $S_0$.
  5.   At time  $i = 12$  the encoded sequence  $\underline{x}_{12} = (11)$  is output because of  $u_{12} = 0$  as at time  $i = 5$. 
  6.   Here,  one passes from state  $s_{12} = S_2$  to state  $s_{13} = S_0$.
  7.   In contrast,  at time  $i = 9$  the encoded sequence  $\underline{x}_{9} = (00)$  is output and state  $s_9 = S_2$  is followed by  $s_{10} = S_1$.
  8.   The same is true for  $i = 15$.  In both cases,  the current information bit is  $u_i = 1$.
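The walk-through of this example can be reproduced with a short Python sketch of the  $(7, 5)$  encoder.  The function name and the register representation are illustrative choices, not part of the original text:

```python
# Sketch of the (7,5) encoder:  x^(1) = u_i + u_{i-1} + u_{i-2},  x^(2) = u_i + u_{i-2}  (mod 2).
def encode_75(u):
    m1 = m2 = 0                          # shift register (u_{i-1}, u_{i-2}), preset with zeros
    for ui in u:
        x1 = ui ^ m1 ^ m2                # octal generator 7:  1 + D + D^2
        x2 = ui ^ m2                     # octal generator 5:  1 + D^2
        yield m1 + 2 * m2, (x1, x2)      # state index mu and the two code bits
        m1, m2 = ui, m1                  # shift in the new information bit

u = [1, 1, 1, 0, 0, 0, 1, 0, 1]
for i, (mu, x) in enumerate(encode_75(u), start=1):
    print(f"i = {i}:  s_i = S{mu},  x_i = ({x[0]}{x[1]})")   # e.g. i = 5:  s_5 = S2,  x_5 = (11)
```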


$\text{Conclusion:}$  From this example it can be seen that the encoded sequence  $\underline{x}_i$  at time  $i$  depends solely on

  1.   the current state  $s_i = S_{\mu} \ (0 ≤ \mu ≤ 2^m -1)$, as well as
  2.   the current information bit  $u_i$.

  • Similarly,  the successor state  $s_{i+1}$  is determined by  $s_i$  and  $u_i$  alone.
  • These properties are considered in the so-called  "state transition diagram"  in the next section.


Representation in the state transition diagram


The graph shows the state transition diagram for the  $(n = 2, \ k = 1, \ m = 2)$  convolutional encoder.

State transition diagram  $\rm A$  for the  $(n = 2, \ k = 1, \ m = 2)$  convolutional encoder;
      in the following:  "our standard convolutional encoder"
  1.  This diagram provides all information about this special convolutional encoder in compact form.
  2.  The color scheme is aligned with the  $\text{sequential representation}$  in the previous section.
  3.  For the sake of completeness,  the derivation table is also given again.
  4.  The  $2^{m+k}$  transitions are labeled  "$u_i \ | \ \underline{x}_i$".


For example,  it can be read:

  • The information bit   $u_i = 0$   $($indicated by a red label$)$  takes you from state   $s_i = S_1$   to state   $s_{i+1} = S_2$   and the two code bits are   $x_i^{(1)} = 1, \ x_i^{(2)} = 0$.
  • With the information bit   $u_i = 1$   $($blue label$)$  in the state   $s_i = S_1$   the code bits result in   $x_i^{(1)} = 0, \ x_i^{(2)} = 1$,  and one arrives at the new state  $s_{i+1} = S_3$.
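All  $2^{m+k} = 8$  transitions of diagram  $\rm A$  can also be generated directly from the encoder equations.  The following small sketch uses the same register convention as the encoder sketch above:

```python
# Sketch: enumerate all transitions "u_i | x_i" of state transition diagram A.
for mu in range(4):                          # states S_0 ... S_3
    m1, m2 = mu & 1, (mu >> 1) & 1           # mu = u_{i-1} + 2 * u_{i-2}
    for ui in (0, 1):
        x1, x2 = ui ^ m1 ^ m2, ui ^ m2       # code bits of the (7,5) encoder
        succ = ui + 2 * m1                   # index of the successor state
        print(f"S{mu} --{ui}|{x1}{x2}--> S{succ}")   # e.g.  S1 --0|10--> S2
```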


State transition diagram  $\rm B$  for another  $(n = 2, \ k = 1, \ m = 2)$  convolutional encoder;
      in the following:  "equivalent systematic representation"

We now consider another systematic encoder with the same parameters:

$$n = 2, \ k = 1, \ m = 2.$$

This is the  $\text{equivalent systematic representation}$  of our standard convolutional encoder,  also referred to as  "recursive systematic convolutional encoder".

Compared to the upper state transition diagram  $\rm A$,  one can see the following differences:

  • Since the earlier information bits  $u_{i-1}$  and  $u_{i-2}$  are not stored,  here the states  $s_i = S_{\mu}$  are related to the processed quantities  $w_{i-1}$  and  $w_{i-2}$,  where  $w_i = u_i + w_{i-1} + w_{i-2}$  holds  $($addition modulo $2)$.


$\text{Conclusion:}$  The comparison of the graphics shows that a similar state transition diagram is obtained for both encoders:

  • The structure of the state transition diagram is determined solely by the parameters  $m$  and  $k$.
  • The number of states is  $2^{m \hspace{0.05cm}\cdot \hspace{0.05cm}k}$  and  $2^k$  arrows go from each state.
  • You also get from each state  $s_i ∈ \{S_0, \ S_1, \ S_2, \ S_3\}$  to the same successor states  $s_{i+1}$.
  • A difference exists solely with respect to the output code words,  where in both cases  $\underline{x}_i ∈ \{00, \ 01, \ 10, \ 11\}$  applies.



Representation in the trellis diagram


One gets from the state transition diagram to the so-called  "trellis diagram"  $($short:  "trellis"$)$  by additionally considering a time component   ⇒   control variable  $i$.  The following graph contrasts both diagrams for  "our standard convolutional encoder".  The bottom illustration resembles a  "garden trellis",  assuming some imagination.

State transition diagram vs. trellis diagram for  "our standard convolutional encoder"

Furthermore,  the following can be recognized from this diagram:

  1.   Since all memory registers are preallocated with zeros,  the trellis always starts from the state  $s_1 = S_0$.  At the next time  $(i = 2)$  only the two states  $S_0$  and  $S_1$  are possible.
  2.   From time  $i = m + 1 = 3$  the trellis has exactly the same appearance for each transition from  $s_i$  to  $s_{i+1}$.  Each state  $S_{\mu}$  is connected to a successor state by a red arrow  $(u_i = 0)$  and a blue arrow  $(u_i = 1)$.
  3.   In contrast to a  "code tree",  which grows exponentially with each time step  $i$  – see for example the section  "Error sizes and total error sizes"  in the book  "Digital Signal Transmission" –  here the number of nodes  $($i.e. the possible states$)$  is limited to  $2^m$.
  4.   This pleasing property of any trellis diagram is also used by the  $\text{Viterbi algorithm}$  for efficient maximum likelihood decoding of convolutional codes.

The following example shows that there is a  "$1\text{:}1$"  mapping between the concatenation of the encoded sequences  $\underline{x}_i$  and the paths through the trellis diagram.

  • Also the information sequence  $\underline{u}$  can be read from the selected trellis path based on the colors of the individual branches.
  • A red branch stands for the information bit  $u_i = 0$,   a blue one for  $u_i = 1$.


$\text{Example 2:}$  In the former  $\text{Example 1}$  the encoded sequence  $\underline{x}$  was derived

  • for our rate  $1/2$  standard convolutional encoder with memory  $m = 2$  and
  • the information sequence  $\underline{u} = (1, 1, 1, 0, 0, 0, 1, 0, 1, \ \text{...}\hspace{0.03cm})$
Trellis diagram of a sample sequence


The table on the right shows the result for  $i ≤ 9$  once again.  The trellis diagram is drawn below.  You can see:

  1.   The selected path through the trellis is obtained by lining up red and blue arrows representing the possible information bits  $u_i \in \{ 0, \ 1\}$.
  2.   This statement is true for any rate  $1/n$  convolutional code.  For a code with  $k > 1$  there would be  $2^k$  differently colored arrows.
  3.   For a rate  $1/n$  convolutional code,  the arrows are labeled with the code words  $\underline{x}_i = (x_i^{(1)}, \ \text{...} \ , \ x_i^{(n)})$  which are derived from the information bit  $u_i$  and the current register states  $s_i$.
  4.   For  $n = 2$  there are only four possible code words,  namely  $00, \ 01, \ 10$  and  $11$.
  5.   In vertical direction,  one recognizes from the trellis the register states  $S_{\mu}$.  Given a rate $k/n$ convolutional code with memory order  $m$  there are  $2^{k \hspace{0.05cm}\cdot \hspace{0.05cm}m}$  different states.
  6.   In the present example  $(k = 1, \ m = 2)$  these are the states  $S_0, \ S_1, \ S_2$ and $S_3$.


Definition of the free distance


An important parameter of linear block codes is the  $\text{minimum Hamming distance}$  between any two code words  $\underline{x}$  and  $\underline{x}\hspace{0.05cm}'$:

\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.03cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')\hspace{0.05cm}.\]
  • Due to linearity,  the zero word   $\underline{0}$   also belongs to each block code.  Thus  $d_{\rm min}$  is identical to the minimum  $\text{Hamming weight}$  $w_{\rm H}(\underline{x})$  of a code word  $\underline{x} ≠ \underline{0}$.
  • For convolutional codes,  however,  the description of the distance properties proves to be much more complex,  since a convolutional code consists of infinitely long and infinitely many encoded sequences.  Therefore we define here instead of the minimum Hamming distance:


$\text{Definition:}$ 

  • The  »free distance«   $d_{\rm F}$  of a convolutional code is equal to the number of bits in which any two encoded sequences   $\underline{x}$   and   $\underline{x}\hspace{0.03cm}' $  differ  $($at least$)$.
  • In other words:   The free distance is equal to the minimum Hamming distance  between any two paths  through the trellis.


Since convolutional codes are linear,  we can also use the infinite zero sequence as a reference sequence:   $\underline{x}\hspace{0.03cm}' = \underline{0}$.  Thus,  the free distance  $d_{\rm F}$  is equal to the minimum Hamming weight  $($number of  "ones"$)$  of an encoded sequence  $\underline{x} ≠ \underline{0}$.
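Since  $d_{\rm F}$  equals the minimum weight of a path that leaves  $S_0$  and returns to it,  it can be found by a shortest-path search over the state diagram.  The sketch below does this for the  $(7, 5)$  standard code;  it assumes a non-catastrophic code  $($no zero-weight loop that never returns to  $S_0)$:

```python
# Sketch: free distance as the minimum-weight detour from S_0 back to S_0
# (Dijkstra search; branch weight = Hamming weight of the two code bits).
from heapq import heappush, heappop

def transitions(mu):                     # (7,5) standard code, as above
    m1, m2 = mu & 1, (mu >> 1) & 1
    for ui in (0, 1):
        x1, x2 = ui ^ m1 ^ m2, ui ^ m2
        yield ui + 2 * m1, x1 + x2       # successor state, branch weight

def free_distance():
    heap, best = [(2, 1)], {}            # leave S_0 with u_i = 1: weight 2, go to S_1
    while heap:
        w, mu = heappop(heap)
        if mu == 0:
            return w                     # first (i.e. cheapest) return to S_0
        if best.get(mu, float("inf")) <= w:
            continue
        best[mu] = w
        for nxt, dw in transitions(mu):
            heappush(heap, (w + dw, nxt))

print(free_distance())                   # -> 5 for the (7,5) code, see Example 3 below
```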

$\text{Example 3:}$  We consider the zero sequence  $\underline{0}$  (marked with white filled circles) as well as two other encoded sequences  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$  (with yellow and dark background, respectively) in our standard trellis and characterize these sequences by their state sequences:

On the definition of free distance
$$\underline{0} =\left ( S_0 \rightarrow S_0 \rightarrow S_0\rightarrow S_0\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)$$
$$\hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline{0} = \left ( 00, 00, 00, 00, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm},$$


$$\underline{x} =\left ( S_0 \rightarrow S_1 \rightarrow S_2\rightarrow S_0\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)$$
$$\hspace{0.3cm} \Rightarrow \hspace{0.3cm}\underline{x} = \left ( 11, 10, 11, 00, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm},$$


$$\underline{x}\hspace{0.03cm}' = \left ( S_0 \rightarrow S_1 \rightarrow S_3\rightarrow S_2\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)$$
$$\hspace{0.3cm} \Rightarrow \hspace{0.3cm}\underline{x}\hspace{0.03cm}' = \left ( 11, 01, 01, 11, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm}.$$


One can see from these plots:

  1.   The free distance  $d_{\rm F} = 5$  is equal to the Hamming weight  $w_{\rm H}(\underline{x})$.
  2.   No sequence other than the one highlighted in yellow differs from   $\underline{0}$   by less than five bits.  For example  $w_{\rm H}(\underline{x}') = 6$.
  3.   In this example,  the free distance   $d_{\rm F} = 5$   also results as the Hamming distance between the sequences   $\underline{x}$   and   $\underline{x}\hspace{0.03cm}'$,  which differ in exactly five bit positions.


The larger the free distance  $d_{\rm F}$,  the better is the  "asymptotic behavior"  of the convolutional encoder.


Optimal convolutional codes of rate  $1/2$

At the outset,  just a few remarks:

  1.   The free distance  $d_{\rm F}$  increases with increasing memory  $m$,  provided that one uses appropriate polynomials for the transfer function matrix  $\mathbf{G}(D)$.
  2.   In the table for rate  $1/2$  convolutional codes,  the  $n = 2$  polynomials are given together with the  $d_{\rm F}$  value.
  3.   Of greater practical importance is the  "industry standard code"  with  $m = 6$  ⇒   $64$  states and the free distance  $d_{\rm F} = 10$  $($in the table this is highlighted in red$)$.


The following example shows the effects of using unfavorable polynomials.

$\text{Example 4:}$  In  $\text{Example 3}$  it was shown that our standard convolutional encoder  $(n = 2, \ k = 1, \ m = 2)$  has the free distance  $d_{\rm F} = 5$.  This is based on the transfer function matrix   $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,  as shown in the model  $($without red deletion$)$.

State transition diagram  $\rm C$  for the  modified standard convolutional encoder

Using  $\mathbf{G}(D) = (1 + D, \ 1 + D^2)$   ⇒   red change,  the state transition diagram at first glance differs only little from  $\text{state transition diagram $\rm A$}$.  However,  the effects are serious:

  1.   The free distance is no longer  $d_{\rm F} = 5$,  but only  $d_{\rm F} = 3$:  the encoded sequence   $\underline{x} = (11, \ 01, \ 00, \ 00, \ \text{...})$   with only three  "ones"  belongs to the information sequence   $\underline{u} = \underline{1} = (1, \ 1, \ 1, \ 1, \ \text{...})$.
  2.   That is:  By only three transmission errors at positions  1,  2,  and  4,  one falsifies the  "one sequence"  $(\underline{1})$  into the  "zero sequence"  $(\underline{0})$  and thus produces infinitely many decoding errors in state  $S_3$.
  3.   A convolutional encoder with these properties is called  "catastrophic".  The reason for this extremely unfavorable behavior is that here the two polynomials   $1 + D$   and   $1 + D^2$   are not coprime:  over  $\rm GF(2)$  they share the common factor  $1 + D$,  since  $1 + D^2 = (1 + D)^2$.
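The coprimality criterion mentioned above can be checked mechanically with a polynomial GCD over  $\rm GF(2)$.  Below is a small sketch in which the polynomials are represented as bit masks  $($bit  $l$  = coefficient of  $D^l)$:

```python
# Sketch: GCD of generator polynomials over GF(2); a nontrivial GCD
# indicates a catastrophic encoder.
def gf2_mod(a, b):
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

print(bin(gf2_gcd(0b011, 0b101)))   # 1+D, 1+D^2      -> 0b11 = 1+D: catastrophic
print(bin(gf2_gcd(0b111, 0b101)))   # 1+D+D^2, 1+D^2  -> 0b1: coprime, non-catastrophic
```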


Terminated convolutional codes


In the theoretical description of convolutional codes,  one always assumes information sequences  $\underline{u}$  and encoded sequences  $\underline{x}$  that are by definition infinite in length.

  • On the other hand,  in practical applications,  e.g.  $\text{GSM}$  and  $\text{UMTS}$,  one always uses an information sequence of finite length  $L$.
  • For a rate $1/n$ convolutional code,  the encoded sequence then has at least length  $n \cdot L$.


The graph  $($without the gray background$)$  shows the trellis of our standard rate  $1/2$  convolutional code for a binary input sequence   $\underline{u}$  of finite length  $L = 128$. 

Terminated convolutional code of rate  $R = 128/260$
  • Thus the encoded sequence  $\underline{x}$  has length  $2 \cdot L = 256$.
  • However,  due to the undefined final state,  a complete maximum likelihood decoding of the transmitted sequence is not possible.
  • Since it is not known which of the states   $S_0, \ \text{...} \ , \ S_3$   would occur for  $i > L$,  the error probability will be  $($somewhat$)$  larger than in the limiting case  $L → ∞$.
  • To prevent this,  the convolutional code is terminated as shown in the diagram:  You add  $m = 2$  zeros to the  $L = 128$  information bits   ⇒   $L' = 130$.
  • This results,  for example,  for the color highlighted path through the trellis:
$$\underline{x}' = (11\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 00 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 10\hspace{0.05cm},\hspace{0.05cm}00\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 11 \hspace{0.05cm} ) $$
$$\Rightarrow \hspace{0.3cm}\underline{u}' = (1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0 \hspace{0.05cm} ) \hspace{0.05cm}.$$

The trellis now always ends  $($independent of the data$)$  in the defined final state  $S_0$;  one thus achieves the best possible error probability according to maximum likelihood.

  • However,  the improvement in error probability comes at the cost of a lower code rate.
  • For  $L \gg m$  this loss is only small.  In the considered example,  termination reduces the code rate from  $R = 0.5$  to
$$R\hspace{0.05cm}' = 0.5 \cdot L/(L + m) \approx 0.492.$$
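As a one-line check of this rate loss  $($a trivial sketch,  merely restating the formula above$)$:

```python
L, m, n = 128, 2, 2          # info bits, memory, code bits per info bit
print(L / (n * (L + m)))     # 128/260 = 0.4923...  instead of  R = 0.5
```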

Punctured convolutional codes


$\text{Definition:}$ 

  • We assume a convolutional code of rate  $R_0 = 1/n_0$  which we call  "mother code".
  • If one deletes some bits from its encoded sequence according to a given pattern,  it is called a  »punctured convolutional code«  with code rate  $R > R_0$.


The puncturing is done using the puncturing matrix  $\mathbf{P}$  with the following properties:

  1.   The row number is  $n_0$,  the column number is equal to the puncturing period  $p$,  which is determined by the desired code rate.
  2.   The  $n_0 \cdot p$  matrix elements  $P_{ij}$  are binary  $(0$  or  $1)$.
  3.   If  $P_{ij} = 1$,  the corresponding code bit  $x_{ij}$  is transmitted;  if  $P_{ij} = 0$,  it is punctured.
  4.   The rate of the punctured code is equal to the quotient of the puncturing period  $p$  and the number  $N_1$  of  ones  in the  $\mathbf{P}$ matrix.
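The rate formula from the last item can be spelled out in two lines of Python  $($a sketch;  the matrix below anticipates Example 5$)$:

```python
# Sketch: code rate of a punctured code, R = p / N_1.
P = [[1, 1, 0], [1, 0, 1]]            # puncturing matrix, rows = n_0 = 2
p, N1 = len(P[0]), sum(map(sum, P))   # period p = 3, number of ones N_1 = 4
print(f"R = {p}/{N1}")                # -> R = 3/4
```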


One can usually find favorably punctured convolutional codes only by means of computer-aided search.  Here,  a punctured convolutional code is said to be favorable if it has

  • the same memory order  $m$  as the mother code,  and also the same total influence length in both cases:   $\nu = m + 1$,
  • a free distance  $d_{\rm F}$  as large as possible,  which is of course smaller than that of the mother code.

$\text{Example 5:}$  Starting from our  $\text{rate 1/2 standard code}$  with parameters  $n_0 = 2$  and  $m = 2$,  we obtain with the puncturing matrix

\[{\boldsymbol{\rm P} } = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} p = 3\hspace{0.05cm}, \hspace{0.2cm}N_1 = 4\]

a punctured convolutional code of rate  $R = p/N_1 = 3/4$.

We consider the following constellation for this purpose:

  1.   Information sequence: $\hspace{2.5cm} \underline{u} = (1, 0, 0, 1, 1, 0, \ \text{...}\hspace{0.03cm})$,
  2.   encoded sequence without puncturing: $\hspace{0.35cm} \underline{x} = (11, 1 \color{gray}{0}, \color{gray}{1}1, 11, 0\color{gray}{1}, \color{gray}{0}1, \ \text{...}\hspace{0.03cm})$,
  3.   encoded sequence with puncturing: $\hspace{0.90cm} \underline{x}\hspace{0.05cm}' = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,
  4.   received sequence without errors: $\hspace{0.58cm} \underline{y} = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,
  5.   modified received sequence: $\hspace{1.4cm} \underline{y}\hspace{0.05cm}' = (11, 1\rm E, E1, 11, 0E, E1, \ \text{...}\hspace{0.03cm})$.
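The constellation above can be reproduced with the following sketch,  in which the puncturing matrix is stored column by column and punctured positions are marked as erasures  $\rm (E)$  $($function and variable names are illustrative$)$:

```python
# Sketch: puncture the code bit pairs with P and mark deleted bits as erasures 'E'.
P_cols = [(1, 1), (1, 0), (0, 1)]        # columns of P, puncturing period p = 3

def puncture(codewords):                 # codewords: list of (x1, x2) pairs
    out = []
    for i, (x1, x2) in enumerate(codewords):
        p1, p2 = P_cols[i % len(P_cols)]
        out.append((str(x1) if p1 else "E") + (str(x2) if p2 else "E"))
    return out

x = [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 1)]   # encoder output for u = (1, 0, 0, 1, 1, 0)
print(puncture(x))                       # ['11', '1E', 'E1', '11', '0E', 'E1']
```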

Note:

  • Each punctured bit in the received sequence  $\underline{y}$  $($marked by an underscore$)$   is replaced by an  "Erasure"  $\rm (E)$,  see  "Binary Erasure Channel".
  • Such an erasure created by the puncturing  is treated by the decoder in the same way as an erasure by the channel.
  • Of course,  the puncturing increases the error probability.  This can already be seen from the fact that the free distance decreases to  $d_{\rm F} = 3$  after the puncturing.
  • In contrast,  the mother code has the free distance  $d_{\rm F} = 5$.


For example,  puncturing is used in the so-called  $\text{RCPC codes}$  $($"rate compatible punctured convolutional codes"$)$.  More about this in  "Exercise 3.8".


Exercises for the chapter


Exercise 3.6: State Transition Diagram

Exercise 3.6Z: Transition Diagram at 3 States

Exercise 3.7: Comparison of Two Convolutional Encoders

Exercise 3.7Z: Which Code is Catastrophic?

Exercise 3.8: Rate Compatible Punctured Convolutional Codes