Channel Coding/Algebraic and Polynomial Description
 
{{Header
|Untermenü=Convolutional Codes and Their Decoding
|Vorherige Seite=Basics of Convolutional Coding
|Nächste Seite=Code Description with State and Trellis Diagram
}}
  
== Division of the generator matrix into partial matrices ==
<br>
Following the discussion in the earlier section&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Linear_codes_and_cyclic_codes| "Linear Codes and Cyclic Codes"]],&nbsp; the code word&nbsp; $\underline{x}$&nbsp; of a linear block code can be determined in a simple way from the information word&nbsp; $\underline{u}$&nbsp; and the generator matrix&nbsp; $\mathbf{G}$: &nbsp; $\underline{x} = \underline{u} \cdot { \boldsymbol{\rm G}}$. The following holds:
# &nbsp; The vectors&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$&nbsp; have lengths&nbsp; $k$ &nbsp; $($number of bits of an information word$)$&nbsp; and&nbsp; $n$ &nbsp; $($number of bits of a code word$)$,&nbsp; respectively,&nbsp; and&nbsp; $\mathbf{G}$&nbsp; has dimension&nbsp; $k &times; n$&nbsp; $(k$&nbsp; rows and&nbsp; $n$&nbsp; columns$)$.<br>
# &nbsp; In convolutional coding,&nbsp; on the other hand,&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$&nbsp; denote sequences with&nbsp; $k\hspace{0.05cm}' &#8594; &#8734;$ &nbsp; and &nbsp; $n\hspace{0.05cm}' &#8594; &#8734;$.
# &nbsp; Therefore,&nbsp; the generator matrix&nbsp; $\mathbf{G}$&nbsp; will also be infinitely extended in both directions.<br><br>
  
In preparation for the introduction of the generator matrix&nbsp; $\mathbf{G}$&nbsp; in the next section,
*we define&nbsp; $m + 1$&nbsp; "partial matrices",&nbsp; each with&nbsp; $k$&nbsp; rows and&nbsp; $n$&nbsp; columns,&nbsp; which we denote by&nbsp; $\mathbf{G}_l$,
*where&nbsp; $0 &#8804; l &#8804; m$&nbsp; holds.<br>
+
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;  The&nbsp; &raquo;'''partial matrix'''&laquo;&nbsp; $\mathbf{G}_l$&nbsp; describes the following fact:
*If the matrix element&nbsp; $\mathbf{G}_l(\kappa, j) = 1$,&nbsp; the code bit&nbsp; $x_i^{(j)}$&nbsp; is influenced by the information bit&nbsp; $u_{i-l}^{(\kappa)}$.
*Otherwise,&nbsp; this matrix element is&nbsp; $\mathbf{G}_l(\kappa, j) =0$.}}<br>

This definition will now be illustrated by an example.
  
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp; 
[[File:P ID2600 KC T 3 1 S4 v1.png|right|frame|Convolutional encoder with&nbsp; $k = 2, \ n = 3, \ m = 1$]]
We again consider the convolutional encoder shown in the diagram,&nbsp; with the following code bits:
  
::<math>x_i^{(1)} = u_{i}^{(1)} + u_{i-1}^{(1)}+ u_{i-1}^{(2)} \hspace{0.05cm},</math>
::<math>x_i^{(2)} = u_{i}^{(2)} + u_{i-1}^{(1)} \hspace{0.05cm},</math>
::<math>x_i^{(3)} = u_{i}^{(1)} + u_{i}^{(2)}+ u_{i-1}^{(1)} \hspace{0.05cm}.</math>
  
Because of the memory&nbsp; $m = 1$,&nbsp; this encoder is fully characterized by the partial matrices&nbsp; $\mathbf{G}_0$&nbsp; and&nbsp; $\mathbf{G}_1$:
  
::<math>{ \boldsymbol{\rm G} }_0 = 
\begin{pmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{pmatrix}  \hspace{0.05cm},  \hspace{0.5cm}
{ \boldsymbol{\rm G} }_1 = \begin{pmatrix}
1 & 1 & 1\\
1 & 0 & 0
\end{pmatrix}\hspace{0.05cm}.</math>
  
These matrices are to be interpreted as follows:
*First row of&nbsp; $\mathbf{G}_0$,&nbsp; red arrows:&nbsp; $\hspace{1.3cm}u_i^{(1)}$&nbsp; affects both&nbsp; $x_i^{(1)}$&nbsp; and&nbsp; $x_i^{(3)}$,&nbsp; but not&nbsp; $x_i^{(2)}$.<br>

*Second row of&nbsp; $\mathbf{G}_0$,&nbsp; blue arrows:&nbsp; $\hspace{0.6cm}u_i^{(2)}$&nbsp; affects&nbsp; $x_i^{(2)}$&nbsp; and&nbsp; $x_i^{(3)}$,&nbsp; but not&nbsp; $x_i^{(1)}$.<br>

*First row of&nbsp; $\mathbf{G}_1$,&nbsp; green arrows:&nbsp; $\hspace{0.9cm}u_{i-1}^{(1)}$&nbsp; affects all three encoder outputs.<br>

*Second row of&nbsp; $\mathbf{G}_1$,&nbsp; brown arrow:&nbsp; $\hspace{0.45cm}u_{i-1}^{(2)}$&nbsp; affects only&nbsp; $x_i^{(1)}$.}}<br>
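A minimal NumPy sketch&nbsp; $($the function name&nbsp; <code>encode_block</code>&nbsp; is our own,&nbsp; not from the text$)$&nbsp; can confirm this interpretation:&nbsp; one code word is obtained as&nbsp; $\underline{x}_i = \underline{u}_i \cdot \mathbf{G}_0 + \underline{u}_{i-1} \cdot \mathbf{G}_1$&nbsp; over&nbsp; ${\rm GF}(2)$,&nbsp; and agrees with the three scalar equations of&nbsp; $\text{Example 1}$:

```python
import numpy as np

# Partial matrices of the (k = 2, n = 3, m = 1) encoder from Example 1
G0 = np.array([[1, 0, 1],
               [0, 1, 1]])
G1 = np.array([[1, 1, 1],
               [1, 0, 0]])

def encode_block(u_i, u_prev):
    """One code word x_i = u_i*G0 + u_{i-1}*G1, all arithmetic in GF(2)."""
    return (np.asarray(u_i) @ G0 + np.asarray(u_prev) @ G1) % 2

# Cross-check against the scalar equations for x_i^(1), x_i^(2), x_i^(3)
u_i, u_prev = (1, 0), (1, 1)
x = encode_block(u_i, u_prev)
x_ref = [(u_i[0] + u_prev[0] + u_prev[1]) % 2,
         (u_i[1] + u_prev[0]) % 2,
         (u_i[0] + u_i[1] + u_prev[0]) % 2]
print(x.tolist(), x_ref)   # [1, 1, 0] [1, 1, 0]
```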
  
== Generator matrix of a convolutional encoder with memory $m$ ==
<br>
The&nbsp; $n$&nbsp; code bits at time&nbsp; $i$&nbsp; can be expressed with the partial matrices &nbsp; $\mathbf{G}_0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , \mathbf{G}_m$&nbsp; as follows:
  
::<math>\underline{x}_i = \sum_{l = 0}^{m} \hspace{0.15cm}\underline{u}_{i-l} \cdot { \boldsymbol{\rm G}}_l =
 \underline{u}_{i} \cdot { \boldsymbol{\rm G}}_0 + \underline{u}_{i-1} \cdot { \boldsymbol{\rm G}}_1 +\hspace{0.05cm} \text{...} \hspace{0.05cm} + \underline{u}_{i-m} \cdot { \boldsymbol{\rm G}}_m
  \hspace{0.05cm}.</math>
  
*The following vectorial quantities must be taken into account:
  
::<math>\underline{\it u}_i = \left ( u_i^{(1)}, u_i^{(2)}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, u_i^{(k)}\right )\hspace{0.05cm},\hspace{0.5cm}  
 \underline{\it x}_i = \left ( x_i^{(1)}, x_i^{(2)}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, x_i^{(n)}\right )\hspace{0.05cm}.</math>
  
*Considering the sequences
  
::<math>\underline{\it u} = \big( \underline{\it u}_1\hspace{0.05cm}, \underline{\it u}_2\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, \underline{\it u}_i\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm} \big)\hspace{0.05cm},\hspace{0.5cm}  
 \underline{\it x} = \big( \underline{\it x}_1\hspace{0.05cm}, \underline{\it x}_2\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, \underline{\it x}_i\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm} \big)\hspace{0.05cm},</math>
  
:starting at&nbsp; $i = 1$&nbsp; and extending in time to infinity,&nbsp; this relation can be expressed by the matrix equation &nbsp; $\underline{x} = \underline{u} \cdot \mathbf{G}$. &nbsp; The generator matrix is then:
  
::<math>{ \boldsymbol{\rm G}}=\begin{pmatrix}
{ \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots  & { \boldsymbol{\rm G}}_m & & & \\
& { \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots & { \boldsymbol{\rm G}}_m & &\\
& & { \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots & { \boldsymbol{\rm G}}_m &\\
& & & \ddots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}\hspace{0.05cm}.</math>
  
*From this equation one immediately recognizes the memory&nbsp; $m$&nbsp; of the convolutional code.

*The parameters&nbsp; $k$&nbsp; and&nbsp; $n$&nbsp; cannot be read off directly.

*However,&nbsp; they are determined by the number of rows and columns of the partial matrices&nbsp; $\mathbf{G}_l$.<br>
+
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; 
[[File:EN_KC_T_3_2_S2.png|right|frame|Generator matrix of a convolutional code]]
With the two matrices&nbsp; $\mathbf{G}_0$&nbsp; and&nbsp; $\mathbf{G}_1$&nbsp; &ndash; see&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Division_of_the_generator_matrix_into_partial_matrices| $\text{Example 1}$]]&nbsp; &ndash; the matrix&nbsp; $\mathbf{G}$&nbsp; sketched on the right is obtained.

It should be noted:
*The generator matrix&nbsp; $\mathbf{G}$&nbsp; actually extends downwards and to the right to infinity.&nbsp; Only eight rows and twelve columns are shown explicitly.

*For the time-limited information sequence &nbsp; $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1)$ &nbsp; the drawn matrix part is sufficient.&nbsp; The encoded sequence is then:
:$$\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0).$$

*On the basis of the label colors,&nbsp; the&nbsp; $n = 3$&nbsp; code word strings can be read off.

*We got the same result&nbsp; $($in a different way$)$&nbsp; in&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Convolutional_encoder_with_two_inputs| $\text{Example 4}$]]&nbsp; at the end of the last chapter:
:$$\underline{\it x}^{(1)} = (0\hspace{0.05cm}, 0\hspace{0.05cm}, 1\hspace{0.05cm}, 1) \hspace{0.05cm},$$
:$$\underline{\it x}^{(2)} = (1\hspace{0.05cm}, 0\hspace{0.05cm},1\hspace{0.05cm}, 1) \hspace{0.05cm},$$
:$$ \underline{\it x}^{(3)} = (1\hspace{0.05cm}, 1\hspace{0.05cm}, 1\hspace{0.05cm}, 0) \hspace{0.05cm}.$$}}<br>
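The band structure of&nbsp; $\mathbf{G}$&nbsp; can also be reproduced numerically.&nbsp; The following sketch&nbsp; $($the helper name&nbsp; <code>generator_matrix</code>&nbsp; is our own$)$&nbsp; assembles the truncated generator matrix for four information blocks and verifies the encoded sequence stated in&nbsp; $\text{Example 2}$:

```python
import numpy as np

# Partial matrices of the (k = 2, n = 3, m = 1) encoder, see Example 1
G_parts = [np.array([[1, 0, 1], [0, 1, 1]]),   # G_0
           np.array([[1, 1, 1], [1, 0, 0]])]   # G_1

def generator_matrix(G_parts, L):
    """Truncated generator matrix for L information blocks (band structure)."""
    k, n = G_parts[0].shape
    G = np.zeros((L * k, L * n), dtype=int)
    for row in range(L):
        for l, Gl in enumerate(G_parts):
            if row + l < L:
                G[row*k:(row + 1)*k, (row + l)*n:(row + l + 1)*n] = Gl
    return G

u = np.array([0, 1, 1, 0, 0, 0, 1, 1])   # four information blocks of k = 2
G = generator_matrix(G_parts, L=4)       # 8 rows, 12 columns
x = u @ G % 2
print(x.tolist())   # [0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0]
```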
== Generator matrix for convolutional encoder of rate&nbsp; $1/n$ ==
<br>
We now consider the special case&nbsp; $k = 1$,
*on the one hand for reasons of the simplest possible representation,

*but also because convolutional encoders of rate&nbsp; $1/n$&nbsp; are of great practical importance.<br><br>

[[File:KC_T_3_2_S3a_neuv3.png|right|frame|Convolutional encoder<br>$(k = 1, \ n = 2, \ m = 1)$]]
<b>Convolutional encoder with&nbsp; $k = 1, \ n = 2, \ m = 1$</b><br>
  
*From the adjacent sketch one can derive:
  
:$${ \boldsymbol{\rm G}}_0=\begin{pmatrix}
1 & 1
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_1=\begin{pmatrix}
0 & 1
\end{pmatrix}\hspace{0.05cm}.$$

*Thus,&nbsp; the resulting generator matrix is:

:$${ \boldsymbol{\rm G}}=\begin{pmatrix}
11 & 01 & 00 & 00  & 00 & \cdots & \\
00 & 11 & 01 & 00  & 00 & \cdots & \\
00 & 00 & 11 & 01  & 00 & \cdots & \\
00 & 00 & 00 & 11  & 01  & \cdots & \\
\cdots & \cdots  & \cdots & \cdots & \cdots &  \cdots
\end{pmatrix}\hspace{0.05cm}.$$
  
*For the input sequence &nbsp; $\underline{u} = (1, 0, 1, 1)$,&nbsp; the encoded sequence starts with &nbsp; $\underline{x} = (1, 1, 0, 1, 1, 1, 1, 0, \ \text{...})$.

*This result is equal to the sum of rows&nbsp; '''1''',&nbsp; '''3'''&nbsp; and&nbsp; '''4'''&nbsp; of the generator matrix.<br><br>
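Both statements can be verified with a short sketch:&nbsp; the matrix-vector product&nbsp; $\underline{u} \cdot \mathbf{G}$&nbsp; over&nbsp; ${\rm GF}(2)$&nbsp; and the modulo-2 sum of rows&nbsp; '''1''',&nbsp; '''3'''&nbsp; and&nbsp; '''4'''&nbsp; give the same sequence.&nbsp; The&nbsp; $4 &times; 8$&nbsp; array below is the truncated part of the matrix&nbsp; $\mathbf{G}$&nbsp; written out above:

```python
import numpy as np

# Truncated generator matrix of the (k = 1, n = 2, m = 1) encoder, 4 rows
G = np.array([[1, 1, 0, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 1, 0, 1],
              [0, 0, 0, 0, 0, 0, 1, 1]])

u = np.array([1, 0, 1, 1])
x = u @ G % 2
print(x.tolist())         # [1, 1, 0, 1, 1, 1, 1, 0]

# Same result as the modulo-2 sum of rows 1, 3 and 4 (rows with u-bit 1)
row_sum = (G[0] + G[2] + G[3]) % 2
print(row_sum.tolist())   # identical
```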
  
[[File:P ID2603 KC T 3 2 S3b.png|right|frame|Convolutional encoder&nbsp; $(k = 1, \ n = 2, \ m = 2)$]]
<b>Convolutional encoder with&nbsp; $k = 1, \ n = 2, \ m = 2$</b><br>
  
*Due to the memory order&nbsp; $m = 2$,&nbsp; there are three partial matrices here:
  
::<math>{ \boldsymbol{\rm G}}_0=\begin{pmatrix}
1 & 1
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_1=\begin{pmatrix}
1 & 0
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_2=\begin{pmatrix}
1 & 1
\end{pmatrix}\hspace{0.05cm}.</math>
  
*Thus,&nbsp; the resulting generator matrix is now:

::<math> { \boldsymbol{\rm G}}=\begin{pmatrix}
11 & 10 & 11 & 00  & 00 & 00  & \cdots & \\
00 & 11 & 10 & 11  & 00 & 00  & \cdots & \\
00 & 00 & 11 & 10  & 11 & 00  & \cdots & \\
00 & 00 & 00 & 11  & 10 & 11  & \cdots & \\
\cdots & \cdots  & \cdots & \cdots & \cdots &  \cdots & \cdots
\end{pmatrix}\hspace{0.05cm}.</math>
  
*Here the input sequence&nbsp; $\underline{u} = (1, 0, 1, 1)$&nbsp; leads to the encoded sequence&nbsp; $\underline{x} = (1, 1, 1, 0, 0, 0, 0, 1, \ \text{...})$.<br><br>
  
[[File:P ID2604 KC T 3 2 S3c.png|right|frame|Convolutional encoder&nbsp; $(k = 1, \ n = 3, \ m = 3)$]]
<b>Convolutional encoder with&nbsp; $k = 1, \ n = 3, \ m = 3$</b>

*Because of&nbsp; $m = 3$&nbsp; there are now four partial matrices,&nbsp; each of dimension&nbsp; $1 &times; 3$:
+
::<math>{ \boldsymbol{\rm G}}_0=\begin{pmatrix}
1 & 1 & 0
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_1=\begin{pmatrix}
0 & 0 & 1
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_2=\begin{pmatrix}
0 & 0 & 1
\end{pmatrix}\hspace{0.05cm},\hspace{0.3cm}
{ \boldsymbol{\rm G}}_3=\begin{pmatrix}
0 & 1 & 1
\end{pmatrix}\hspace{0.05cm}.</math>
  
*Thus,&nbsp; the resulting generator matrix is:

::<math>{ \boldsymbol{\rm G}}=\begin{pmatrix}
110 & 001 & 001 & 011 & 000 & 000 & 000 & \cdots & \\
000 & 110 & 001 & 001 & 011 & 000 & 000 & \cdots & \\
000 & 000 & 110 & 001 & 001 & 011 & 000 & \cdots & \\
000 & 000 & 000 & 110 & 001 & 001 & 011 & \cdots & \\
\cdots & \cdots  & \cdots & \cdots & \cdots &  \cdots & \cdots &  \cdots
\end{pmatrix}\hspace{0.05cm}.</math>
  
*For &nbsp; $\underline{u} = (1, 0, 1, 1)$ &nbsp; one obtains the encoded sequence&nbsp; $\underline{x} = (1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, \ \text{...})$.<br>
  
== GF(2) description forms of a digital filter ==
<br>
[[File:EN_KC_T_3_2_S4.png|right|frame|Digital filter in&nbsp; ${\rm GF}(2)$&nbsp; of order&nbsp; $m$|class=fit]]
In the chapter&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Rate_1.2F2_convolutional_encoder| "Basics of Convolutional Coding"]]&nbsp; it was already pointed out
# that a rate&nbsp; $1/n$&nbsp; convolutional encoder can be realized by several digital filters,
# where the filters operate in parallel on the same input sequence&nbsp; $\underline{u}$.


Before we elaborate on this statement,&nbsp; we first state the properties of a digital filter for the Galois field&nbsp; ${\rm GF}(2)$.

The graph is to be interpreted as follows:
*The filter has the impulse response&nbsp; $\underline{g} = (g_0,\ g_1,\ g_2, \ \text{...} \ ,\ g_m)$.
  
*For all filter coefficients&nbsp; $($with indices&nbsp; $0 &#8804; l &#8804; m)$&nbsp; holds: &nbsp; $g_l &#8712; {\rm GF}(2) = \{0, 1\}$.<br>

*The individual symbols&nbsp; $u_i$&nbsp; of the input sequence&nbsp; $\underline{u}$&nbsp; are also binary: &nbsp; $u_i &#8712; \{0, 1\}$.

*Thus,&nbsp; the output symbol at times&nbsp; $i &#8805; 1$,&nbsp; with addition and multiplication in&nbsp; ${\rm GF}(2)$,&nbsp; is:
  
 
::<math>x_i = \sum_{l = 0}^{m} g_l \cdot u_{i-l} \hspace{0.05cm}.</math>
 +
*This corresponds to the&nbsp; $($discrete-time$)$&nbsp; &raquo;[[Signal_Representation/The_Convolution_Theorem_and_Operation#Convolution_in_the_time_domain|$\rm convolution$]]&laquo;,&nbsp; denoted by an asterisk.&nbsp; The entire output sequence can thus be written as:

::<math>\underline{x} = \underline{u} * \underline{g}\hspace{0.05cm}.</math>

*The major difference compared to the chapter&nbsp; &raquo;[[Theory_of_Stochastic_Signals/Digital_Filters|"Digital Filters"]]&laquo;&nbsp; in the book&nbsp; "Theory of Stochastic Signals"&nbsp; is the modulo-2 addition&nbsp; $(1 + 1 = 0)$&nbsp; instead of the conventional addition&nbsp; $(1 + 1 = 2)$.<br><br>
+
{{GraueBox|TEXT= 
$\text{Example 3:}$&nbsp;
[[File:P ID2606 KC T 3 2 S4b.png|right|frame|Digital filter with impulse response&nbsp; $(1, 0, 1, 1)$]]
The impulse response of the shown third-order digital filter is: &nbsp; $\underline{g} = (1, 0, 1, 1)$.

*Let the input sequence of this filter be unlimited in time: &nbsp; $\underline{u} = (1, 1, 0, 0, 0, \ \text{ ...})$.<br>

*This gives the&nbsp; $($infinite$)$&nbsp; output sequence&nbsp; $\underline{x}$&nbsp; in the binary Galois field &nbsp; &#8658; &nbsp; ${\rm GF}(2)$:

::<math>\underline{x} = (\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 0, \hspace{0.05cm} \text{ ...} \hspace{0.05cm}) * (\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1\hspace{0.05cm})</math>
::<math>\Rightarrow \hspace{0.3cm} \underline{x} =(\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \hspace{0.05cm}0,\hspace{0.05cm} \text{ ...} \hspace{0.05cm})
\oplus  (\hspace{0.05cm}0,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1,\hspace{0.05cm}0, \hspace{0.05cm} \hspace{0.05cm} \text{ ...}\hspace{0.05cm})
= (\hspace{0.05cm}1,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \hspace{0.05cm} \text{ ...} \hspace{0.05cm}) \hspace{0.05cm}.</math>

*In the conventional convolution&nbsp; $($for real numbers$)$,&nbsp; on the other hand,&nbsp; the result would have been:

::<math>\underline{x}= (\hspace{0.05cm}1,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 2,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \text{ ...} \hspace{0.05cm}) \hspace{0.05cm}.</math>}}<br>
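As a sketch,&nbsp; the filter of&nbsp; $\text{Example 3}$&nbsp; can be simulated directly as a shift register of order&nbsp; $m = 3$&nbsp; $($plain Python;&nbsp; the helper name&nbsp; <code>gf2_filter</code>&nbsp; is our own$)$:

```python
def gf2_filter(u, g):
    """Digital filter in GF(2): shift register of order m = len(g) - 1."""
    m = len(g) - 1
    reg = [0] * m                  # register contents u_{i-1}, ..., u_{i-m}
    x = []
    for ui in u:
        xi = g[0] * ui
        for l in range(1, m + 1):
            xi ^= g[l] * reg[l - 1]   # modulo-2 addition of tapped values
        x.append(xi)
        reg = [ui] + reg[:-1]      # shift the register by one position
    return x

# Impulse response g = (1, 0, 1, 1), truncated input u = (1, 1, 0, 0, ...)
print(gf2_filter([1, 1, 0, 0, 0, 0, 0, 0], [1, 0, 1, 1]))
# [1, 1, 1, 0, 1, 0, 0, 0]
```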
  
Discrete-time signals can also be represented by polynomials with respect to a dummy variable.<br>
 
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;  The&nbsp; &raquo;<b>D&ndash;transform</b>&laquo;&nbsp; belonging to the discrete-time signal &nbsp; $\underline{x} = (x_0, x_1, x_2, \ \text{...}) $&nbsp; reads:

::<math>X(D) = x_0 + x_1 \cdot D + x_2 \cdot D^2 + \hspace{0.05cm}\text{...}\hspace{0.05cm}= \sum_{i = 0}^{\infty} x_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.</math>
  
*For this particular transformation into the image domain,&nbsp; we also use the following notation,&nbsp; where&nbsp; "D"&nbsp; stands for&nbsp; "delay operator":

::<math>\underline{x} = (x_0, x_1, x_2,\hspace{0.05cm}\text{...}\hspace{0.05cm}) \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
X(D) =  \sum_{i = 0}^{\infty} x_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.</math>}}<br>
  
<u>Note</u>: &nbsp; In the literature,&nbsp; $x(D)$&nbsp; is sometimes used instead of&nbsp; $X(D)$.&nbsp; However,&nbsp; in this tutorial we write all image-domain functions&nbsp; ("spectral-domain functions")&nbsp; with capital letters,&nbsp; for example for the Fourier transform,&nbsp; the Laplace transform,&nbsp; and the&nbsp; D&ndash;transform:
  
::<math>x(t) \hspace{0.15cm}
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}}\!\!\!-\!\!\bullet\hspace{0.15cm}
X(f)\hspace{0.05cm},\hspace{0.4cm}  x(t) \hspace{0.15cm}
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}}\!\!\!-\!\!\bullet\hspace{0.15cm}
X(p)\hspace{0.05cm},\hspace{0.4cm} \underline{x} \hspace{0.15cm}
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\hspace{0.15cm}
X(D)  \hspace{0.05cm}.</math>
  
We now apply the&nbsp; D&ndash;transform also
*to the information sequence&nbsp; $\underline{u}$,&nbsp; and

*to the impulse response&nbsp; $\underline{g}$.


Because&nbsp; $\underline{g}$&nbsp; is time-limited,&nbsp; the upper summation limit of&nbsp; $G(D)$&nbsp; is&nbsp; $i = m$:<br>

::<math>\underline{u} = (u_0, u_1, u_2,\hspace{0.05cm}\text{...}\hspace{0.05cm}) \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
U(D) =  \sum_{i = 0}^{\infty} u_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm},</math>

::<math>\underline{g} = (g_0, g_1, \hspace{0.05cm}\text{...}\hspace{0.05cm}, g_m) \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
G(D) =  \sum_{i = 0}^{m} g_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.</math>
  
{{BlaueBox|TEXT= 
$\text{Theorem:}$&nbsp; As with all spectral transformations,&nbsp; &raquo;<b>multiplication</b>&laquo;&nbsp; applies to the&nbsp; D&ndash;transform in the image domain,&nbsp; since the&nbsp; $($discrete$)$&nbsp; time functions&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{g}$&nbsp; are linked by&nbsp; &raquo;<b>convolution</b>&laquo;:

::<math>\underline{x} = \underline{u} * \underline{g} \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
X(D) = U(D) \cdot G(D) \hspace{0.05cm}.</math>

*The&nbsp; $($rather simple$)$&nbsp; $\rm proof$&nbsp; of this important result can be found in the specification for&nbsp; [[Aufgaben:Exercise_3.3Z:_Convolution_and_D-Transformation|"Exercise 3.3Z"]].

*As is common in&nbsp; &raquo;[[Linear_and_Time_Invariant_Systems/System_Description_in_Frequency_Domain#Frequency_response_.E2.80.93_Transfer_function|$\text{system theory}$]]&laquo;,&nbsp; the&nbsp; D&ndash;transform&nbsp; $G(D)$&nbsp; of the impulse response&nbsp; $\underline{g}$&nbsp; is also called the&nbsp; "transfer function".}}
  
{{GraueBox|TEXT=
[[File:P ID2607 KC T 3 2 S4b.png|right|frame|Impulse response&nbsp; $(1, 0, 1, 1)$&nbsp; of a digital filter]]

$\text{Example 4:}$&nbsp;  We consider again the discrete-time signals

::<math>\underline{u} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\text{...}\hspace{0.05cm}) \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
U(D) =  1+ D \hspace{0.05cm},</math>

::<math>\underline{g} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad
\circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
G(D) =  1+ D^2 + D^3 \hspace{0.05cm}.</math>

*As in&nbsp; $\text{Example 3}$&nbsp; $($in this section above$)$,&nbsp; this solution path also yields:

::<math>X(D) = U(D) \cdot G(D) =  (1+D) \cdot (1+ D^2 + D^3) </math>
::<math>\Rightarrow \hspace{0.3cm} X(D) = 1+ D^2 + D^3 +D + D^3 + D^4 = 1+ D + D^2 + D^4 \hspace{0.3cm}
\Rightarrow \hspace{0.3cm} \underline{x} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \text{...} \hspace{0.05cm}) \hspace{0.05cm}.</math>

*Multiplication by  the&nbsp; "delay operator"&nbsp; $D$&nbsp; in the image domain corresponds to a shift of one place to the right in the time domain:

::<math>W(D) = D \cdot X(D) \quad
\bullet\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\circ\quad
\underline{w} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \text{...} \hspace{0.05cm}) \hspace{0.05cm}.</math>}}<br>
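The convolution in the time domain and the corresponding polynomial multiplication in the image domain are easy to check numerically. The following Python sketch&nbsp; $($the helper name&nbsp; <code>gf2_convolve</code>&nbsp; is ours, not from the text$)$&nbsp; reproduces the numbers of&nbsp; $\text{Example 4}$:

```python
# Sketch: a polynomial in D is stored as a list of GF(2) coefficients,
# index i holding the coefficient of D^i.

def gf2_convolve(u, g):
    """Discrete convolution of two binary sequences, evaluated modulo 2.

    This is simultaneously the time-domain convolution u * g and the
    coefficient list of the product polynomial U(D) * G(D) over GF(2).
    """
    x = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            x[i + j] ^= ui & gj      # XOR realizes the modulo-2 addition
    return x

u = [1, 1]          # U(D) = 1 + D
g = [1, 0, 1, 1]    # G(D) = 1 + D^2 + D^3

x = gf2_convolve(u, g)
print(x)            # [1, 1, 1, 0, 1]  =>  X(D) = 1 + D + D^2 + D^4

# Multiplication by the delay operator D: one shift to the right
w = [0] + x
print(w)            # [0, 1, 1, 1, 0, 1]
```

The same routine serves as both "time-domain convolution" and "image-domain multiplication", which is exactly the statement of the theorem above.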
  
== Application of the&nbsp; D&ndash;transform&nbsp; to rate&nbsp; $1/n$&nbsp; convolutional encoders ==
<br>
We now apply the results of the last section to a convolutional encoder,&nbsp; restricting ourselves for the moment to the special case&nbsp; $k = 1$.
*Such an&nbsp; $(n, \ k = 1)$&nbsp; convolutional code can be realized with&nbsp; $n$&nbsp; digital filters operating in parallel on the same information sequence&nbsp; $\underline{u}$.

*The graph shows the arrangement for the code parameter&nbsp; $n = 2$ &nbsp; &#8658; &nbsp; code rate&nbsp; $R = 1/2$.<br>
  
[[File:EN_KC_T_3_2_S5.png|right|frame|Two filters working in parallel, each with order&nbsp; $m$|class=fit]]
  
The following equations apply equally to both filters,&nbsp; setting&nbsp; $j = 1$&nbsp; for the upper filter&nbsp; and&nbsp; $j = 2$&nbsp; for the lower filter:
*The&nbsp; &raquo;<b>impulse responses</b>&laquo;&nbsp; of the two filters are

::<math>\underline{g}^{(j)} = (g_0^{(j)}, g_1^{(j)}, \hspace{0.05cm}\text{...}\hspace{0.05cm}, g_m^{(j)}\hspace{0.01cm}) \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.</math>

*The two&nbsp; &raquo;<b>output sequences</b>&laquo;&nbsp; are as follows,&nbsp; considering that both filters operate on the same input sequence&nbsp; $\underline{u} = (u_0, u_1, u_2, \hspace{0.05cm} \text{...})$:

::<math>\underline{x}^{(j)} = (x_0^{(j)}, x_1^{(j)}, x_2^{(j)}, \hspace{0.05cm}\text{...}\hspace{0.05cm}) = \underline{u} * \underline{g}^{(j)} \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.</math>

*For the&nbsp; &raquo;<b>D&ndash;transforms</b>&laquo;&nbsp; of the output sequences:

::<math>X^{(j)}(D) = U(D) \cdot G^{(j)}(D) \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.</math>
  
In order to represent this fact more compactly,&nbsp; we now define the following vectorial quantities of a convolutional code of rate&nbsp; $1/n$:
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;  The&nbsp; &raquo;<b>D&ndash;transfer functions</b>&laquo;&nbsp; of the&nbsp; $n$&nbsp; digital filters arranged in parallel are combined in the vector&nbsp; $\underline{G}(D)$:

::<math>\underline{G}(D) = \left ( G^{(1)}(D), G^{(2)}(D), \hspace{0.05cm}\text{...}\hspace{0.1cm}, G^{(n)} (D) \right )\hspace{0.05cm}.</math>

*The vector&nbsp; $\underline{X}(D)$&nbsp; contains the&nbsp; D&ndash;transforms&nbsp; of the&nbsp; $n$&nbsp; encoded sequences&nbsp; $\underline{x}^{(1)}, \underline{x}^{(2)}, \ \text{...} \ , \underline{x}^{(n)}$:

::<math>\underline{X}(D) = \left ( X^{(1)}(D), X^{(2)}(D), \hspace{0.05cm}\text{...}\hspace{0.1cm}, X^{(n)} (D) \right )\hspace{0.05cm}.</math>

*This gives the following vector equation:

::<math>\underline{X}(D) = U(D) \cdot \underline{G}(D)\hspace{0.05cm}.</math>

*$U(D)$&nbsp; is not a vector quantity here because of the code parameter&nbsp; $k = 1$.}}<br>
  
{{GraueBox|TEXT= 
$\text{Example 5:}$&nbsp;
We consider the convolutional encoder with code parameters&nbsp; $n = 2, \ k = 1, \ m = 2$. &nbsp; For this one holds:
[[File:P ID2609 KC T 3 2 S5b.png|right|frame|Convolutional encoder&nbsp; $(n = 2, \ k = 1,\  m = 2)$]]

::<math>\underline{g}^{(1)} =(\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
G^{(1)}(D) =  1+ D + D^2 \hspace{0.05cm},</math>
::<math>\underline{g}^{(2)}= (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
G^{(2)}(D) =  1+  D^2 </math>
::<math>\Rightarrow \hspace{0.3cm} \underline{G}(D) = \big ( 1+ D + D^2 \hspace{0.05cm}, \hspace{0.1cm}1+  D^2 \big )\hspace{0.05cm}.</math>

*Let the information sequence be &nbsp; $\underline{u} = (1, 0, 1, 1)$ &nbsp; &rArr; &nbsp; D&ndash;transform&nbsp; $U(D) = 1 + D^2 + D^3$.&nbsp; This gives:

::<math>\underline{X}(D) = \left ( X^{(1)}(D),\hspace{0.1cm} X^{(2)}(D)  \right ) = U(D) \cdot \underline{G}(D) \hspace{0.05cm}, \hspace{0.2cm}</math>

:where

::<math>{X}^{(1)}(D) = (1+ D^2 + D^3) \cdot (1+ D + D^2)=1+ D + D^2 + D^2 + D^3 + D^4 + D^3 + D^4 + D^5 = 1+ D + D^5</math>

::<math>\Rightarrow \hspace{0.3cm} \underline{x}^{(1)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} \text{...} \hspace{0.05cm}  \hspace{0.05cm}) \hspace{0.05cm},</math>

::<math>{X}^{(2)}(D) = (1+ D^2 + D^3) \cdot (1+ D^2)=1+  D^2 + D^2  + D^4 + D^3  + D^5 = 1+ D^3 + D^4 + D^5</math>

::<math>\Rightarrow \hspace{0.3cm}  \underline{x}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} \text{...} \hspace{0.05cm}  \hspace{0.05cm}) \hspace{0.05cm}.</math>

*We obtained the same result in&nbsp; [[Aufgaben:Exercise_3.1Z:_Convolution_Codes_of_Rate_1/2|"Exercise 3.1Z"]]&nbsp; in a different way.&nbsp; After multiplexing the two strands,&nbsp; one again obtains: &nbsp;  
:$$\underline{x} = (11, 10, 00, 01, 01, 11, 00, 00, \hspace{0.05cm} \text{...} \hspace{0.05cm}).$$}}<br>
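The structure of&nbsp; $\text{Example 5}$&nbsp; — two parallel filters followed by a multiplexer — can be sketched in a few lines of Python&nbsp; $($the helper names&nbsp; <code>gf2_convolve</code>&nbsp; and&nbsp; <code>encode_rate_1_n</code>&nbsp; are ours, not from the text$)$:

```python
# Sketch: a rate-1/2 convolutional encoder realized as two digital filters
# working in parallel on the same information sequence, then multiplexed.

def gf2_convolve(u, g):
    """Binary convolution modulo 2 (= GF(2) polynomial multiplication)."""
    x = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            x[i + j] ^= ui & gj
    return x

def encode_rate_1_n(u, impulse_responses):
    """Feed u through n parallel filters and multiplex the outputs."""
    streams = [gf2_convolve(u, g) for g in impulse_responses]
    # multiplex: take one bit from each strand per time step
    return list(zip(*streams))

u = [1, 0, 1, 1]     # U(D) = 1 + D^2 + D^3
g1 = [1, 1, 1]       # g^(1)  =>  G^(1)(D) = 1 + D + D^2
g2 = [1, 0, 1]       # g^(2)  =>  G^(2)(D) = 1 + D^2

pairs = encode_rate_1_n(u, [g1, g2])
print(pairs)   # [(1, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 1)]
```

The printed pairs reproduce the multiplexed code sequence&nbsp; $\underline{x} = (11, 10, 00, 01, 01, 11, \text{...})$&nbsp; of the example.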
  
== Transfer Function Matrix ==
<br>
We have seen that a convolutional code of rate&nbsp; $1/n$&nbsp; can be most compactly described as a vector equation in the&nbsp; D&ndash;transformed domain:

[[File:EN_KC_T_3_2_S6.png|right|frame|General&nbsp; $(n, \ k)$&nbsp; convolutional encoder |class=fit]]

:$$\underline{X}(D) = U(D) \cdot \underline{G}(D).$$

Now we extend the result to convolutional encoders with more than one input &nbsp; &#8658; &nbsp; $k &#8805; 2$ &nbsp;$($see graph$)$.<br>

In order to map a convolutional code of rate&nbsp; $k/n$&nbsp; in the D&ndash;domain,&nbsp; the dimension of the above vector equation must be increased with respect to input and transfer function:

::<math>\underline{X}(D) = \underline{U}(D) \cdot { \boldsymbol{\rm G}}(D)\hspace{0.05cm}.</math>

This requires the following measures:
*From the scalar function&nbsp; $U(D)$&nbsp; we get the vector&nbsp;
:$$\underline{U}(D) = (U^{(1)}(D), \ U^{(2)}(D), \hspace{0.05cm} \text{...} \hspace{0.05cm} , \ U^{(k)}(D)).$$

*From the vector&nbsp; $\underline{G}(D)$&nbsp; we get the&nbsp; $k &times; n$&nbsp; transfer function matrix&nbsp; $($or&nbsp; "polynomial generator matrix"$)$ &nbsp; $\mathbf{G}(D)$:

::<math>{\boldsymbol{\rm G}}(D)=\begin{pmatrix}
G_1^{(1)}(D) & G_1^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_1^{(n)}(D)\\
G_2^{(1)}(D) & G_2^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_2^{(n)}(D)\\
\vdots & \vdots & & \vdots\\
G_k^{(1)}(D) & G_k^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_k^{(n)}(D)
\end{pmatrix}\hspace{0.05cm}.</math>
  
*Each of the&nbsp; $k \cdot n$&nbsp; matrix elements&nbsp; $G_i^{(j)}(D)$&nbsp; with&nbsp; $1 &#8804; i &#8804; k,\ 1 &#8804; j &#8804; n$&nbsp; is a polynomial over the dummy variable&nbsp; $D$&nbsp; in the Galois field&nbsp; ${\rm GF}(2)$,&nbsp; of maximal degree&nbsp; $m$,&nbsp; where&nbsp; $m$&nbsp; denotes the memory.<br>

*The above transfer function matrix can also be written using the&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Division_of_the_generator_matrix_into_partial_matrices| &raquo;$\text{partial matrices}$&laquo;]]&nbsp; $\mathbf{G}_0, \ \text{...} \ , \mathbf{G}_m$&nbsp; $($as index we again use&nbsp; $l)$:

::<math>{\boldsymbol{\rm G}}(D) =  \sum_{l = 0}^{m} {\boldsymbol{\rm G}}_l \cdot D\hspace{0.03cm}^l
= {\boldsymbol{\rm G}}_0 + {\boldsymbol{\rm G}}_1 \cdot D + {\boldsymbol{\rm G}}_2 \cdot D^2 + \hspace{0.05cm} \text{...} \hspace{0.05cm}+ {\boldsymbol{\rm G}}_m \cdot D\hspace{0.03cm}^m
  \hspace{0.05cm}.</math>
{{GraueBox|TEXT= 
$\text{Example 6:}$&nbsp;
We consider the&nbsp; $(n = 3, \ k = 2, \ m = 1)$&nbsp; convolutional encoder whose partial matrices have already been determined in&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Division_of_the_generator_matrix_into_partial_matrices| $\text{Example 1}$]]&nbsp; as follows:
[[File:P ID2617 KC T 3 1 S4 v1.png|right|frame|Convolutional encoder with&nbsp; $k = 2, \ n = 3, \ m = 1$]]

::<math>{ \boldsymbol{\rm G} }_0 =
\begin{pmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{pmatrix}  \hspace{0.05cm},  \hspace{0.5cm}
{ \boldsymbol{\rm G} }_1 = \begin{pmatrix}
1 & 1 & 1\\
1 & 0 & 0
\end{pmatrix}\hspace{0.05cm}.</math>

*Because of&nbsp; $m = 1$,&nbsp; no partial matrices exist for&nbsp; $l &#8805; 2$.&nbsp; Thus the transfer function matrix is:

::<math>{\boldsymbol{\rm G} }(D) = {\boldsymbol{\rm G} }_0 + {\boldsymbol{\rm G} }_1 \cdot D =
\begin{pmatrix}
1+D & D & 1+D\\
D & 1 & 1
\end{pmatrix}
  \hspace{0.05cm}.</math>
  
*Let the&nbsp; $($time-limited$)$&nbsp; information sequence be&nbsp; $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1)$,&nbsp; from which the two input sequences result as follows:

::<math>\underline{u}^{(1)} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
{U}^{(1)}(D) =  D + D^3 \hspace{0.05cm},</math>
::<math>\underline{u}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad
{U}^{(2)}(D) =  1 + D^3 \hspace{0.05cm}.</math>

*From this follows for the vector of the&nbsp; D&ndash;transforms at the encoder output:

::<math>\underline{X}(D) = \big (\hspace{0.05cm} {X}^{(1)}(D)\hspace{0.05cm}, \hspace{0.05cm} {X}^{(2)}(D)\hspace{0.05cm}, \hspace{0.05cm} {X}^{(3)}(D)\hspace{0.05cm}\big ) = \underline{U}(D) \cdot {\boldsymbol{\rm G} }(D)
= \begin{pmatrix}
D+D^3 & 1+D^3
\end{pmatrix} \cdot \begin{pmatrix}
1+D & D & 1+D\\
D & 1 & 1
\end{pmatrix}\hspace{0.05cm}.</math>
  
*This results in the following encoded sequences in the three strands:

::<math>{X}^{(1)}(D) = (D + D^3) \cdot (1+D) + (1 + D^3) \cdot  D =D + D^2 +  D^3 + D^4 + D  + D^4 = D^2 + D^3</math>

:::<math>\Rightarrow \hspace{0.3cm} \underline{x}^{(1)} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm},</math>

::<math>{X}^{(2)}(D)= (D + D^3) \cdot D + (1 + D^3) \cdot  1 = D^2 +  D^4 + 1 + D^3 = 1+D^2 + D^3 + D^4</math>

:::<math>\Rightarrow \hspace{0.3cm}\underline{x}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm},</math>

::<math>{X}^{(3)}(D)=(D + D^3) \cdot (1 + D) + (1 + D^3) \cdot  1 = D + D^2 + D^3+  D^4 + 1 + D^3 = 1+ D + D^2  + D^4</math>

:::<math>\Rightarrow \hspace{0.3cm}\underline{x}^{(3)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm}.</math>

We have already obtained the same results in other ways in previous examples:
* in&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Convolutional_encoder_with_two_inputs|$\text{Example 4}$]]&nbsp; of the chapter&nbsp; "Basics of Convolutional Coding",<br>

*in&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Generator_matrix_of_a_convolutional_encoder_with_memory_.7F.27.22.60UNIQ-MathJax64-QINU.60.22.27.7F| $\text{Example 2}$]]&nbsp; of the current chapter.}}<br>
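The vector-matrix product&nbsp; $\underline{X}(D) = \underline{U}(D) \cdot \mathbf{G}(D)$&nbsp; of&nbsp; $\text{Example 6}$&nbsp; can be checked with elementary polynomial arithmetic over&nbsp; ${\rm GF}(2)$. The following Python sketch&nbsp; $($the helper names&nbsp; <code>poly_mul</code>&nbsp; and&nbsp; <code>poly_add</code>&nbsp; are ours, not from the text$)$&nbsp; represents each matrix element as a coefficient list:

```python
# Sketch: evaluating X(D) = U(D) * G(D) for the (n = 3, k = 2, m = 1) encoder.
# A polynomial in D is a list of GF(2) coefficients, index i <-> D^i.

def poly_mul(a, b):
    """Product of two GF(2) polynomials (coefficient lists)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj
    return c

def poly_add(a, b):
    """Sum of two GF(2) polynomials, padded to equal length."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

# G(D) = G_0 + G_1 * D, written element-wise as coefficient lists
G = [[[1, 1], [0, 1], [1, 1]],     # row 1:  1+D,  D,  1+D
     [[0, 1], [1, 0], [1, 0]]]     # row 2:   D,   1,   1

U = [[0, 1, 0, 1],                 # U^(1)(D) = D + D^3
     [1, 0, 0, 1]]                 # U^(2)(D) = 1 + D^3

# vector-matrix product over GF(2)[D]:  X^(j) = sum_i U^(i) * G_i^(j)
X = []
for j in range(3):
    acc = [0]
    for i in range(2):
        acc = poly_add(acc, poly_mul(U[i], G[i][j]))
    X.append(acc)

print(X[0])   # [0, 0, 1, 1, 0]  =>  X^(1)(D) = D^2 + D^3
```

The three results&nbsp; $X^{(1)}(D) = D^2 + D^3$,&nbsp; $X^{(2)}(D) = 1 + D^2 + D^3 + D^4$&nbsp; and&nbsp; $X^{(3)}(D) = 1 + D + D^2 + D^4$&nbsp; agree with the strand-wise calculation above.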
  
== Systematische Faltungscodes (1) ==
+
== Systematic convolutional codes ==
 
<br>
 
<br>
Die Polynomrepräsentation anhand der Übertragungsfunktionsmtrix <b>G</b>(<i>D</i>) ermöglicht Einblicke in die Struktur eines Faltungscodes. Beispielsweise erkennt man anhand dieser <i>k</i>&times<i>n</i>&ndash;Matrix, ob es sich um einen systematischen Code handelt. Darunter versteht man einen Code, bei dem die Codesequenzen <u><i>x</i></u><sup>(1)</sup>, ... , <u><i>x</i></u><sup>(<i>k</i>)</sup> mit den Informationssequenzen <u><i>u</i></u><sup>(1)</sup>, ... , <u><i>u</i></u><sup>(<i>k</i>)</sup> identisch sind. Die Grafik zeigt beispielhaft einen systematischen (<i>n</i>&nbsp;=&nbsp;4,&nbsp;<i>k</i>&nbsp;=&nbsp;3)&ndash;Faltungscode.<br>
+
Polynomial representation using the transfer function matrix&nbsp; $\mathbf{G}(D)$&nbsp; provides insight into the structure of a convolutional code.  
 +
[[File:P ID2611 KC T 3 2 S7 v2.png|right|frame|Systematic convolutional code with&nbsp; $k = 3, \ n = 4$|class=fit]]
  
[[File:P ID2611 KC T 3 2 S7 v2.png|Systematischer Faltungscode mit <i>k</i> = 3 und <i>n</i> = 4|class=fit]]<br>
+
*From this&nbsp; $k &times; n$&nbsp; matrix one can recognize whether it is a&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Systematic_Codes| &raquo;$\text{systematic code}$&laquo;]].
 +
 +
*This refers to a code in which the encoded sequences&nbsp; $\underline{x}^{(1)}, \ \text{...} \ , \ \underline{x}^{(k)}$&nbsp; are identical to the information sequences&nbsp; $\underline{u}^{(1)}, \ \text{...} \ , \ \underline{u}^{(k)}$.
  
Ein systematischer (<i>n</i>, <i>k</i>)&ndash;Faltungscode liegt immer dann vor, wenn die Übertragungsfunktionsmatrix (mit <i>k</i> Zeilen und <i>n</i> Spalten) folgendes Aussehen hat:
+
*The graph shows an example of a systematic&nbsp; $(n = 4, \ k = 3)$&nbsp; convolutional code.<br>
  
:<math>{\boldsymbol{\rm G}}(D) = {\boldsymbol{\rm G}}_{\rm sys}(D) = \big [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\big ]  
+
 
 +
A systematic &nbsp; $(n, k)$&nbsp; convolutional code exists whenever the transfer function matrix&nbsp; $($with&nbsp; $k$&nbsp; rows and&nbsp; $n$&nbsp; columns$)$&nbsp; has the following appearance:
 +
 
 +
::<math>{\boldsymbol{\rm G}}(D) = {\boldsymbol{\rm G}}_{\rm sys}(D) = \left [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\right ]  
 
  \hspace{0.05cm}.</math>
 
  \hspace{0.05cm}.</math>
  
Hierbei ist folgende Nomenklatur verwendet:
+
The following nomenclature is used:
*<b>I</b><sub><i>k</i></sub> bezeichnet eine diagonale Einheitsmatrix der Dimension <i>k</i> &times; <i>k</i>.<br>
+
# &nbsp; $\mathbf{I}_k$&nbsp; denotes a diagonal unit matrix of dimension&nbsp; $k &times; k$.<br>
 +
# &nbsp; $\mathbf{P}(D)$&nbsp; is a&nbsp; $k &times; (n -k)$ matrix,&nbsp; where each matrix element describes a polynomial in&nbsp; $D$.<br><br>
  
*<b>P</b>(<i>D</i>) ist eine <i>k</i> &times; (<i>n</i> &ndash;<i>k</i>)&ndash;Matrix, wobei jedes Matrixelement ein Polynom in <i>D</i> beschreibt.<br><br>
+
{{GraueBox|TEXT= 
 +
$\text{Example 7:}$&nbsp; A systematic convolutional code with &nbsp; $n = 3, \ k = 2, \ m = 2$ &nbsp; might have the following transfer function matrix:
  
{{Beispiel}}''':''' Ein systematischer Faltungscode mit den Codeparametern <i>n</i>&nbsp;=&nbsp;3, <i>k</i>&nbsp;=&nbsp;2, <i>m</i>&nbsp;=&nbsp;2 könnte beispielsweise die folgende Übertragungsfunktionsmatrix aufweisen:
+
::<math>{\boldsymbol{\rm G} }_{\rm sys}(D) = \begin{pmatrix}
 
 
:<math>{\boldsymbol{\rm G}}_{\rm sys}(D) = \begin{pmatrix}
 
 
1 & 0 & 1+D^2\\
 
1 & 0 & 1+D^2\\
 
0 & 1 & 1+D
 
0 & 1 & 1+D
 
\end{pmatrix}\hspace{0.05cm}.</math>
 
\end{pmatrix}\hspace{0.05cm}.</math>
  
Andere systematische Faltungscodes mit gleichem <i>n</i> und gleichem <i>k</i> unterscheiden sich demgegenüber nur durch die beiden Matrixelemente in der letzten Spalte.{{end}}<br>
+
*In contrast,&nbsp; other systematic convolutional codes with the same&nbsp; $n$&nbsp; and the same&nbsp; $k$&nbsp; differ only in the two matrix elements of the last column.}}<br>
  
Zu jedem (<i>n</i>, <i>k</i>)&ndash;Faltungscode mit der Generatormatrix <b>G</b>(<i>D</i>) kann ein <span style="font-weight: bold;">äquivalenter systematischer Code</span>  gefunden werden, dessen <i>D</i>&ndash;Matrix wir mit <b>G</b><sub>sys</sub>(<i>D</i>)  benennen.<br>
 
  
[[File:P ID2622 KC T 3 2 S7 v1.png|Unterteilung von <b>G</b>(<i>D</i>) in <b>T</b>(<i>D</i>) und <b>Q</b>(<i>D</i>)|class=fit]]<br>
+
== Equivalent systematic convolutional code ==
 +
<br>
 +
For every&nbsp; $(n, \ k)$&nbsp; convolutional code with matrix &nbsp; $\mathbf{G}(D)$ &nbsp; there is an&nbsp; "equivalent systematic code"&nbsp; whose&nbsp; D&ndash;matrix we denote by&nbsp; $\mathbf{G}_{\rm sys}(D)$.<br>
  
Auf der nächsten Seite wird gezeigt, wie man von einer beliebigen Matrix <b>G</b>(<i>D</i>) durch Aufspalten in zwei Teilmatrizen  <b>T</b>(<i>D</i>) und  <b>Q</b>(<i>D</i>) und verschiedene Matrizenoperationen zur Matrix  <b>G</b><sub>sys</sub>(<i>D</i>) kommt.<br>
+
To get from the transfer function matrix &nbsp; $\mathbf{G}(D)$ &nbsp; to the matrix &nbsp; $\mathbf{G}_{\rm sys}(D)$ &nbsp; of the equivalent systematic convolutional code,&nbsp; proceed as follows according to the diagram:
 +
[[File:P ID2622 KC T 3 2 S7 v1.png|right|frame|Subdivision of&nbsp; $\mathbf{G}(D)$&nbsp; into&nbsp; $\mathbf{T}(D)$&nbsp; and&nbsp; $\mathbf{Q}(D)$|class=fit]]
 +
*Divide the&nbsp; $k &times; n$&nbsp; matrix&nbsp; $\mathbf{G}(D)$&nbsp; into a square matrix&nbsp; $\mathbf{T}(D)$&nbsp; with&nbsp; $k$&nbsp; rows and&nbsp; $k$&nbsp; columns and denote the remainder by&nbsp; $\mathbf{Q}(D)$.
  
== Systematische Faltungscodes (2) ==
+
*Then calculate the matrix &nbsp; $\mathbf{T}^{-1}(D)$ &nbsp; inverse to &nbsp; $\mathbf{T}(D)$&nbsp; and from it the matrix for the equivalent systematic code:
<br>
 
Um von einer Übertragungsfunktionsmatrix <b>G</b>(<i>D</i>)  zur Matrix  <b>G</b><sub>sys</sub>(<i>D</i>) eines äquivalenten systematischen Faltungscodes zu kommen, geht man entsprechend der Grafik auf der letzten Seite wie folgt vor:
 
*Man unterteilt die <i>k</i>&times;<i>n</i>&ndash;Matrix <b>G</b>(<i>D</i>) in eine quadratische Matrix <b>T</b>(<i>D</i>) mit <i>k</i> Zeilen und <i>k</i> Spalten und bezeichnet den Rest mit <b>Q</b>(<i>D</i>).
 
 
 
*Anschließend berechnet man die zu <b>T</b>(<i>D</i>) inverse Matrix <b>T</b><sup>&ndash;1</sup>(<i>D</i>) und daraus die Matrix für den äquivanten systematischen Code:
 
  
 
::<math>{\boldsymbol{\rm G}}_{\rm sys}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm G}}(D) \hspace{0.05cm}.</math>
 
::<math>{\boldsymbol{\rm G}}_{\rm sys}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm G}}(D) \hspace{0.05cm}.</math>
  
*Da <b>T</b><sup>&ndash;1</sup>(<i>D</i>) &middot; <b>T</b>(<i>D</i>) die <i>k</i>&times;<i>k</i>&ndash;Einheitsmatrix <b>I</b><sub><i>k</i></sub> ergibt, kann die Übertragungsfunktionsmatrix des äquivalenten systematischen Codes in der gewünschten Form geschrieben werden:
+
*Since the product &nbsp; $\mathbf{T}^{-1}(D) \cdot \mathbf{T}(D)$ &nbsp; yields the&nbsp; $k &times; k$&nbsp; unit matrix &nbsp; $\mathbf{I}_k$ &nbsp; <br>&rArr; &nbsp; the transfer function matrix of the equivalent systematic code can be written in the desired form:
  
::<math>{\boldsymbol{\rm G}}_{\rm sys}(D) = \big [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\big ]  
+
::<math>{\boldsymbol{\rm G}}_{\rm sys}(D) = \bigg [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\bigg ]  
\hspace{0.5cm}{\rm mit}\hspace{0.5cm} {\boldsymbol{\rm P}}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm Q}}(D) \hspace{0.05cm}.
+
\hspace{0.5cm}{\rm with}\hspace{0.5cm} {\boldsymbol{\rm P}}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm Q}}(D) \hspace{0.05cm}.
 
\hspace{0.05cm}</math>
 
\hspace{0.05cm}</math>
  
{{Beispiel}}''':'''
+
{{GraueBox|TEXT= 
[[File:P ID2613 KC T 3 2 S1 neu.png|rahmenlos|rechts|Faltungscodierer der Rate 2/3]] Der auf den letzten Seiten schon häufiger betrachtete Coder der Rate 2/3 ist nicht systematisch, weil zum Beispiel
+
$\text{Example 8:}$&nbsp;
<u><i>x</i></u><sup>(1)</sup> &ne; <u><i>u</i></u><sup>(1)</sup>, <u><i>x</i></u><sup>(2)</sup> &ne; <u><i>u</i></u><sup>(2)</sup> gilt (siehe nebenstehende Coderschaltung).<br>
+
The rate&nbsp; $2/3$&nbsp; encoder considered frequently in the previous sections is not systematic,&nbsp; since e.g.&nbsp; $\underline{x}^{(1)} &ne; \underline{u}^{(1)}, \ \underline{x}^{(2)} &ne; \underline{u}^{(2)}$ &nbsp; holds&nbsp; $($see adjacent graphic$)$.<br>
 +
[[File:P ID2613 KC T 3 2 S1 neu.png|right|frame|Convolutional encoder of rate&nbsp; $2/3$]]
  
Man erkennt dies aber auch anhand der Übertragungsfunktionsmatrix:
+
&rArr; &nbsp; However,&nbsp; this can also be seen from the transfer function matrix:
  
:<math>{\boldsymbol{\rm G}}(D) = \big [ \hspace{0.05cm} {\boldsymbol{\rm T}}(D)\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm Q}}(D) \hspace{0.05cm}\big ]</math>
+
::<math>{\boldsymbol{\rm G} }(D) = \big [ \hspace{0.05cm} {\boldsymbol{\rm T} }(D)\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm Q} }(D) \hspace{0.05cm}\big ]</math>
  
:<math>\Rightarrow \hspace{0.3cm}
+
::<math>\Rightarrow \hspace{0.3cm}
{\boldsymbol{\rm T}}(D) = \begin{pmatrix}
+
{\boldsymbol{\rm T} }(D) = \begin{pmatrix}
 
1+D & D\\
 
1+D & D\\
 
D & 1  
 
D & 1  
 
\end{pmatrix}\hspace{0.05cm},\hspace{0.2cm}
 
\end{pmatrix}\hspace{0.05cm},\hspace{0.2cm}
{\boldsymbol{\rm Q}}(D) = \begin{pmatrix}
+
{\boldsymbol{\rm Q} }(D) = \begin{pmatrix}
 
1+D \\
 
1+D \\
 
1  
 
1  
 
\end{pmatrix}\hspace{0.05cm}.</math>
 
\end{pmatrix}\hspace{0.05cm}.</math>
  
Die Determinante von <b>T</b>(<i>D</i>) ergibt sich zu (1 + <i>D</i>) &middot; 1 + <i>D</i> &middot; <i>D</i> = 1 + <i>D</i> + <i>D</i><sup>2</sup> und ist ungleich 0. Somit kann für die Inverse von <b>T</b>(<i>D</i>) geschrieben werden (Vertauschung der Diagonalelemente!):
+
*The determinant of&nbsp; $\mathbf{T}(D)$&nbsp; results in &nbsp; $(1 + D) \cdot 1 + D \cdot D = 1 + D + D^2$ &nbsp; and is nonzero.  
 +
 
 +
*Thus,&nbsp; the inverse of&nbsp; $\mathbf{T}(D)$&nbsp; can be written as&nbsp; $($note the swapped diagonal elements!$)$:
  
:<math>{\boldsymbol{\rm T}}^{-1}(D) = \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
+
::<math>{\boldsymbol{\rm T} }^{-1}(D) = \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
 
1 & D\\
 
1 & D\\
 
D & 1+D  
 
D & 1+D  
 
\end{pmatrix}\hspace{0.05cm}.</math>
 
\end{pmatrix}\hspace{0.05cm}.</math>
  
Das Produkt <b>T</b>(<i>D</i>) &middot; <b>T</b><sup>&ndash;1</sup>(<i>D</i>) ergibt die Einheitsmatrix <b>I</b><sub>2</sub>, und für die dritte Spalte von <b>G</b><sub>sys</sub>(<i>D</i>) gilt:
+
*The product&nbsp; $\mathbf{T}(D) \cdot \mathbf{T}^{&ndash;1}(D)$&nbsp; gives the unit matrix &nbsp; $\mathbf{I}_2$ &nbsp; &rArr; &nbsp; the following holds for the third column of&nbsp; $\mathbf{G}_{\rm sys}(D)$:
  
:<math>{\boldsymbol{\rm P}}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm Q}}(D)  
+
::<math>{\boldsymbol{\rm P} }(D)= {\boldsymbol{\rm T} }^{-1}(D) \cdot {\boldsymbol{\rm Q} }(D)  
 
= \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
 
= \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
 
1 & D\\
 
1 & D\\
 
\end{pmatrix} </math>
 
\end{pmatrix} </math>
  
:<math>\Rightarrow \hspace{0.3cm} {\boldsymbol{\rm P}}(D)
+
::<math>\Rightarrow \hspace{0.3cm} {\boldsymbol{\rm P} }(D)
 
=  \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
 
=  \frac{1}{1+D+D^2} \cdot \begin{pmatrix}
 
(1+D)  + D \\
 
(1+D)  + D \\
 
\end{pmatrix} </math>
 
\end{pmatrix} </math>
  
:<math>\Rightarrow \hspace{0.2cm}{\boldsymbol{\rm G}}_{\rm sys}(D) =
+
::<math>\Rightarrow \hspace{0.2cm}{\boldsymbol{\rm G} }_{\rm sys}(D) =
 
\begin{pmatrix}  
 
\begin{pmatrix}  
 
1 & 0 & \frac{1}{1+D+D^2}\\
 
1 & 0 & \frac{1}{1+D+D^2}\\
 
0 & 1 &\frac{1+D^2}{1+D+D^2}  
 
0 & 1 &\frac{1+D^2}{1+D+D^2}  
\end{pmatrix}\hspace{0.05cm}. </math>
+
\end{pmatrix}\hspace{0.05cm}. </math>}}
 +
 
  
Es ist noch zu klären, wie das Filter einer solchen gebrochen&ndash;rationalen Übertragungsfunktion aussieht.{{end}}<br>
+
It remains to be clarified what the filter of such a fractional&ndash;rational transfer function looks like.<br>
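The matrix inversion of Example 8 can be spot-checked numerically. The sketch below is not part of the original article (`poly_mul`, `poly_add`, `trim` are hypothetical helper names); it represents GF(2) polynomials as coefficient lists and verifies that $\mathbf{T}(D)$ times the matrix with swapped diagonal elements equals $\det \mathbf{T}(D) \cdot \mathbf{I}_2 = (1+D+D^2) \cdot \mathbf{I}_2$, which is exactly the inversion rule used above.

```python
# Spot check for Example 8 over GF(2)[D]; coefficient index l corresponds to D^l.
# (Hypothetical helper names, not from the article.)

def poly_mul(a, b):
    """Multiply two GF(2) polynomials (coefficient lists)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] ^= ai & bj
    return res

def poly_add(a, b):
    """Add (XOR) two GF(2) polynomials."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0) for i in range(n)]

def trim(p):
    """Strip trailing zero coefficients."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

T     = [[[1, 1], [0, 1]], [[0, 1], [1]]]   # T(D) = [[1+D, D], [D, 1]]
T_adj = [[[1], [0, 1]], [[0, 1], [1, 1]]]   # diagonal elements swapped
det   = [1, 1, 1]                           # det T(D) = 1 + D + D^2

# T(D) * T_adj(D) must equal det * I_2, i.e. T^{-1}(D) = T_adj(D) / det(D)
for i in range(2):
    for j in range(2):
        s = [0]
        for l in range(2):
            s = poly_add(s, poly_mul(T[i][l], T_adj[l][j]))
        assert trim(s) == (det if i == j else [0])
```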
  
== Filterstruktur bei gebrochen–rationaler Übertragungsfunktion ==
+
== Filter structure with fractional&ndash;rational transfer function ==
 
<br>
 
<br>
Hat eine Übertragungsfunktion die Form <i>G</i>(<i>D</i>) = <i>A</i>(<i>D</i>)/<i>B</i>(<i>D</i>), so bezeichnet man das zugehörige Filter  als <i>rekursiv</i>. Bei einem rekursiven Faltungscodierer mit dem Gedächtnis <i>m</i> kann für die beiden Polynome <i>A</i>(<i>D</i>) und <i>B</i>(<i>D</i>) allgemein geschrieben werden:
+
If a transfer function has the form &nbsp; $G(D) = A(D)/B(D)$,&nbsp; the associated filter is called&nbsp; &raquo;<b>recursive</b>&laquo;.&nbsp; Given a recursive convolutional encoder with memory&nbsp; $m$,&nbsp; the two polynomials &nbsp; $A(D)$ &nbsp; and &nbsp; $B(D)$ &nbsp; can be written in general terms:  
 
+
[[File:P ID2619 KC T 3 2 S8 v1.png|right|frame|Recursive filter for realization of&nbsp; $G(D) = A(D)/B(D)$|class=fit]]
:<math>A(D) \hspace{-0.15cm}  = \hspace{-0.15cm} \sum_{l = 0}^{m} a_l \cdot D^l = a_0 + a_1 \cdot D + a_2 \cdot D^2 + ... \hspace{0.05cm} + a_m \cdot D^m \hspace{0.05cm},</math>
+
::<math>A(D) =  \sum_{l = 0}^{m} a_l \cdot D\hspace{0.05cm}^l = a_0 + a_1 \cdot D + a_2 \cdot D^2 +\  \text{...} \ \hspace{0.05cm} + a_m \cdot D\hspace{0.05cm}^m \hspace{0.05cm},</math>
:<math>B(D) \hspace{-0.15cm}  = \hspace{-0.15cm} 1 + \sum_{l = 1}^{m} b_l \cdot D^l = 1 + b_1 \cdot D + b_2 \cdot D^2 + ... \hspace{0.05cm} + b_m \cdot D^m \hspace{0.05cm}.</math>
+
::<math>B(D) =  1 + \sum_{l = 1}^{m} b_l \cdot D\hspace{0.05cm}^l = 1 + b_1 \cdot D + b_2 \cdot D^2 + \  \text{...} \  \hspace{0.05cm} + b_m \cdot D\hspace{0.05cm}^m \hspace{0.05cm}.</math>
 
 
Die Grafik zeigt die entsprechende Filterstruktur in der so genannten <i>Controller Canonical Form</i>.<br>
 
 
 
[[File:P ID2619 KC T 3 2 S8 v1.png|Rekursives Filter zur Realisierung von <i>G</i>(<i>D</i>) = <i>A</i>(<i>D</i>)/<i>B</i>(<i>D</i>)|class=fit]]<br>
 
 
 
Die Koeffizienten <i>a</i><sub>0</sub>, ... , <i>a<sub>m</sub></i> beschreiben den Vorwärtszweig, während <i>b</i><sub>1</sub>, ... , <i>b<sub>m</sub></i> eine Rückkopplung bilden. Alle Koeffizienten sind binär, also 1 (durchgehende Verbindung) oder 0 (fehlende Verbindung).<br>
 
  
{{Beispiel}}''':''' Die rechts skizzierte Filterstruktur lässt sich durch folgende Gleichungen beschreiben:
+
The graphic shows the corresponding filter structure in the so&ndash;called&nbsp; "Controller Canonical Form":<br>
 +
*The coefficients &nbsp; $a_0, \ \text{...} \ , \ a_m$ &nbsp; describe the forward branch.
  
:<math>x_i  \hspace{-0.15cm}  =  \hspace{-0.15cm} w_i + w_{i-2} \hspace{0.05cm},</math>
+
* The coefficients &nbsp; $b_1, \ \text{...} \ , \ b_m$ &nbsp; form a feedback branch.
:<math>w_i \hspace{-0.15cm}  =  \hspace{-0.15cm} u_i + w_{i-1}+ w_{i-2}  \hspace{0.05cm}.</math>
+
   
 +
*All coefficients are binary,&nbsp;
 +
:*so&nbsp; $1$&nbsp; $($continuous connection$)$ &nbsp;
 +
:*or&nbsp; $0$&nbsp; $($missing connection$)$.
 +
<br clear=all>
 +
{{GraueBox|TEXT=   
 +
$\text{Example 9:}$&nbsp; The filter structure outlined on the right can be described as follows:
 +
[[File:P_ID2620__KC_T_3_2_S8b_neu.png|right|frame|Filter: &nbsp;$G(D) = (1+D^2)/(1+D +D^2)$|class=fit]]
  
Entsprechend gilt für die <i>D</i>&ndash;Transformierten:
+
::<math>x_i  =  w_i + w_{i-2} \hspace{0.05cm},</math>
 +
::<math>w_i =  u_i + w_{i-1}+ w_{i-2}  \hspace{0.05cm}.</math>
  
:<math>X(D) \hspace{0.15cm}  =  \hspace{0.15cm} W(D) + W(D) \cdot D^2 =</math>
+
*Accordingly,&nbsp; one obtains for the&nbsp; D&ndash;transforms:
:<math> \hspace{1.3cm} =  \hspace{0.15cm} W(D) \cdot \left ( 1+ D^2 \right ) \hspace{0.05cm},</math>
 
  
:<math>W(D) = \hspace{0.08cm} U(D) + W(D) \cdot D+ W(D) \cdot D^2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}
+
::<math>X(D) =W(D) + W(D) \cdot D^2 =W(D) \cdot \left ( 1+ D^2 \right ) \hspace{0.05cm},</math>
 +
::<math>W(D) = \hspace{0.08cm} U(D) + W(D) \cdot D+ W(D) \cdot D^2</math>
 +
::<math>\Rightarrow \hspace{0.3cm}
 
U(D) = W(D) \cdot \left ( 1+ D + D^2 \right ) \hspace{0.05cm}.</math>
 
U(D) = W(D) \cdot \left ( 1+ D + D^2 \right ) \hspace{0.05cm}.</math>
  
Somit erhält man für die Übertragungsfunktion dieses Filters:
+
*Thus,&nbsp; one obtains for the transfer function of this filter:
  
:<math>G(D) = \frac{X(D)}{U(D)} = \frac{1+D^2}{1+D+D^2} \hspace{0.05cm}. </math>
+
::<math>G(D) = \frac{X(D)}{U(D)} = \frac{1+D^2}{1+D+D^2} \hspace{0.05cm}. </math>
  
Im Beispiel zu den systematischen Faltungscodes hat sich genau ein solcher Ausdruck ergeben.{{end}}<br>
+
*In&nbsp; [[Channel_Coding/Algebraic_and_Polynomial_Description#Equivalent_systematic_convolutional_code| $\text{Example 8}$]]&nbsp; for the equivalent systematic convolutional code,&nbsp; exactly this expression resulted for the lower branch.}}<br>
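The fractional–rational transfer function of Example 9 can also be checked by directly simulating the two filter equations $w_i = u_i + w_{i-1} + w_{i-2}$ and $x_i = w_i + w_{i-2}$ over GF(2). A minimal sketch (the function name is my own, not from the article):

```python
# Direct GF(2) simulation of the recursive filter of Example 9:
#   w_i = u_i + w_{i-1} + w_{i-2},   x_i = w_i + w_{i-2}
# realizing G(D) = (1 + D^2) / (1 + D + D^2).

def recursive_filter(u):
    w1 = w2 = 0          # register contents w_{i-1} and w_{i-2}
    x = []
    for ui in u:
        wi = ui ^ w1 ^ w2
        x.append(wi ^ w2)
        w1, w2 = wi, w1
    return x

# Because of the feedback, the impulse response is infinitely long (periodic):
h = recursive_filter([1] + [0] * 11)
print(h)   # -> [1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
```

The infinite, periodic impulse response is the characteristic difference between such a recursive (IIR) filter and the non-recursive encoders considered earlier.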
  
== Aufgaben ==
+
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:3.2 G–Matrix eines Faltungscoders|A3.2 G–Matrix eines Faltungscoders]]
+
[[Aufgaben:Exercise_3.2:_G-matrix_of_a_Convolutional_Encoder|Exercise 3.2: G-matrix of a Convolutional Encoder]]
  
[[Zusatzaufgaben:3.2 (3, 1, 3)–Faltungscodierer]]
+
[[Aufgaben:Exercise_3.2Z:_(3,_1,_3)_Convolutional_Encoder|Exercise 3.2Z: (3, 1, 3) Convolutional Encoder]]
  
[[Aufgaben:3.3 x über U(D) und G(D)|A3.3 x über U(D) und G(D)]]
+
[[Aufgaben:Exercise_3.3:_Code_Sequence_Calculation_via_U(D)_and_G(D)|Exercise 3.3: Code Sequence Calculation via U(D) and G(D)]]
  
[[Zusatzaufgaben:3.3 Faltung und D–Transformation]]
+
[[Aufgaben:Exercise_3.3Z:_Convolution_and_D-Transformation|Exercise 3.3Z: Convolution and D-Transformation]]
  
[[Aufgaben:3.4 Systematische Faltungscodes|A3.4 Systematische Faltungscodes]]
+
[[Aufgaben:Exercise_3.4:_Systematic_Convolution_Codes|Exercise 3.4: Systematic Convolution Codes]]
  
[[Zusatzaufgaben:3.4 Äquivalente Faltungscodes?]]
+
[[Aufgaben:Exercise_3.4Z:_Equivalent_Convolution_Codes%3F|Exercise 3.4Z: Equivalent Convolution Codes?]]
  
[[Aufgaben:3.5 Rekursive Filter für GF(2)|A3.5 Rekursive Filter für GF(2)]]
+
[[Aufgaben:Exercise_3.5:_Recursive_Filters_for_GF(2)|Exercise 3.5: Recursive Filters for GF(2)]]
  
 
{{Display}}
 
{{Display}}

Latest revision as of 18:50, 2 December 2022

Division of the generator matrix into partial matrices


Following the discussion in the earlier section  "Linear Codes and Cyclic Codes"  the code word  $\underline{x}$  of a linear block code can be determined from the information word  $\underline{u}$  and the generator matrix  $\mathbf{G}$  in a simple way:   $\underline{x} = \underline{u} \cdot { \boldsymbol{\rm G}}$. The following holds:

  1.   The vectors  $\underline{u}$  and  $\underline{x}$  have lengths  $k$  $($bit count of an information word$)$  and  $n$  $($bit count of a code word$)$,  respectively,  and  $\mathbf{G}$  has dimension  $k × n$  $(k$  rows and  $n$  columns$)$.
  2.   In convolutional coding,  on the other hand,  $\underline{u}$  and  $\underline{x}$  denote sequences with  $k\hspace{0.05cm}' → ∞$   and   $n\hspace{0.05cm}' → ∞$.
  3.   Therefore,  the generator matrix  $\mathbf{G}$  will also be infinitely extended in both directions.

In preparation for the introduction of the generator matrix  $\mathbf{G}$  in the next section,  we define  $m + 1$  "partial matrices",  each with  $k$  rows and  $n$  columns,  which we denote by  $\mathbf{G}_l$  with  $0 ≤ l ≤ m$.


$\text{Definition:}$  The   »partial matrix«   $\mathbf{G}_l$  describes the following fact:  

  • If the matrix element  $\mathbf{G}_l(\kappa, j) = 1$,  this says that the code bit  $x_i^{(j)}$  is influenced by the information bit  $u_{i-l}^{(\kappa)}$. 
  • Otherwise,  this matrix element is  $\mathbf{G}_l(\kappa, j) =0$.


This definition will now be illustrated by an example.

$\text{Example 1:}$  We again consider the convolutional encoder according to the diagram with the following code bits:

Convolutional encoder with  $k = 2, \ n = 3, \ m = 1$
\[x_i^{(1)} = u_{i}^{(1)} + u_{i-1}^{(1)}+ u_{i-1}^{(2)} \hspace{0.05cm},\]
\[x_i^{(2)} = u_{i}^{(2)} + u_{i-1}^{(1)} \hspace{0.05cm},\]
\[x_i^{(3)} = u_{i}^{(1)} + u_{i}^{(2)}+ u_{i-1}^{(1)} \hspace{0.05cm}.\]

Because of the memory  $m = 1$  this encoder is fully characterized by the partial matrices  $\mathbf{G}_0$  and  $\mathbf{G}_1$ :

\[{ \boldsymbol{\rm G} }_0 = \begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix} \hspace{0.05cm}, \hspace{0.5cm} { \boldsymbol{\rm G} }_1 = \begin{pmatrix} 1 & 1 & 1\\ 1 & 0 & 0 \end{pmatrix}\hspace{0.05cm}.\]

These matrices are to be interpreted as follows:

  • First row of  $\mathbf{G}_0$,  red arrows:  $\hspace{1.3cm}u_i^{(1)}$  affects both  $x_i^{(1)}$  and  $x_i^{(3)}$,  but not  $x_i^{(2)}$.
  • Second row of  $\mathbf{G}_0$,  blue arrows:  $\hspace{0.6cm}u_i^{(2)}$  affects  $x_i^{(2)}$  and  $x_i^{(3)}$,  but not  $x_i^{(1)}$.
  • First row of  $\mathbf{G}_1$,  green arrows:  $\hspace{0.9cm}u_{i-1}^{(1)}$  affects all three encoder outputs.
  • Second row of  $\mathbf{G}_1$,  brown arrow:  $\hspace{0.45cm}u_{i-1}^{(2)}$  affects only  $x_i^{(1)}$.
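The partial-matrix definition can be cross-checked against the encoder equations of Example 1. The following sketch (the helpers `vec_mat` and `xor_vec` are my own names, not from the text) verifies for all input combinations that the explicit XOR equations agree with $\underline{x}_i = \underline{u}_i \cdot \mathbf{G}_0 + \underline{u}_{i-1} \cdot \mathbf{G}_1$:

```python
from itertools import product

# Example 1 encoder (k = 2, n = 3, m = 1): code bits via the explicit XOR
# equations and via x_i = u_i * G0 + u_{i-1} * G1 over GF(2).

G0 = [[1, 0, 1],
      [0, 1, 1]]
G1 = [[1, 1, 1],
      [1, 0, 0]]

def vec_mat(u, M):
    """Row vector times matrix, modulo 2."""
    return [sum(uk & M[k][j] for k, uk in enumerate(u)) % 2 for j in range(len(M[0]))]

def xor_vec(a, b):
    return [p ^ q for p, q in zip(a, b)]

for u_now, u_prev in product(product((0, 1), repeat=2), repeat=2):
    x1 = u_now[0] ^ u_prev[0] ^ u_prev[1]    # x_i^(1)
    x2 = u_now[1] ^ u_prev[0]                # x_i^(2)
    x3 = u_now[0] ^ u_now[1] ^ u_prev[0]     # x_i^(3)
    assert [x1, x2, x3] == xor_vec(vec_mat(u_now, G0), vec_mat(u_prev, G1))
```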


Generator matrix of a convolutional encoder with memory $m$


The  $n$  code bits at time  $i$  can be expressed with the partial matrices   $\mathbf{G}_0, \hspace{0.05cm} \text{...} \hspace{0.05cm} , \mathbf{G}_m$  as follows:

\[\underline{x}_i = \sum_{l = 0}^{m} \hspace{0.15cm}\underline{u}_{i-l} \cdot { \boldsymbol{\rm G}}_l = \underline{u}_{i} \cdot { \boldsymbol{\rm G}}_0 + \underline{u}_{i-1} \cdot { \boldsymbol{\rm G}}_1 +\hspace{0.05cm} \text{...} \hspace{0.05cm} + \underline{u}_{i-m} \cdot { \boldsymbol{\rm G}}_m \hspace{0.05cm}.\]
  • The following vector quantities must be taken into account:
\[\underline{\it u}_i = \left ( u_i^{(1)}, u_i^{(2)}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, u_i^{(k)}\right )\hspace{0.05cm},\hspace{0.5cm} \underline{\it x}_i = \left ( x_i^{(1)}, x_i^{(2)}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, x_i^{(n)}\right )\hspace{0.05cm}.\]
  • Considering the sequences
\[\underline{\it u} = \big( \underline{\it u}_1\hspace{0.05cm}, \underline{\it u}_2\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, \underline{\it u}_i\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm} \big)\hspace{0.05cm},\hspace{0.5cm} \underline{\it x} = \big( \underline{\it x}_1\hspace{0.05cm}, \underline{\it x}_2\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm}, \underline{\it x}_i\hspace{0.05cm}, \hspace{0.05cm}\text{...} \hspace{0.1cm} \big)\hspace{0.05cm},\]
starting at  $i = 1$  and extending in time to infinity,  this relation can be expressed by the matrix equation   $\underline{x} = \underline{u} \cdot \mathbf{G}$.   Here,  the following holds for the generator matrix:
\[{ \boldsymbol{\rm G}}=\begin{pmatrix} { \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots & { \boldsymbol{\rm G}}_m & & & \\ & { \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots & { \boldsymbol{\rm G}}_m & &\\ & & { \boldsymbol{\rm G}}_0 & { \boldsymbol{\rm G}}_1 & { \boldsymbol{\rm G}}_2 & \cdots & { \boldsymbol{\rm G}}_m &\\ & & & \cdots & \cdots & & & \cdots \end{pmatrix}\hspace{0.05cm}.\]
  • From this equation one immediately recognizes the memory  $m$  of the convolutional code.
  • The parameters  $k$  and  $n$  are not directly readable.
  • However,  they are determined by the number of rows and columns of the partial matrices  $\mathbf{G}_l$.


$\text{Example 2:}$  With the two matrices  $\mathbf{G}_0$  and  $\mathbf{G}_1$  – see  $\text{Example 1}$  –  the matrix  $\mathbf{G}$  sketched on the right is obtained.

Generator matrix of a convolutional code

It should be noted:

  • The generator matrix  $\mathbf{G}$  actually extends downwards and to the right to infinity.  Explicitly shown,  however,  are only eight rows and twelve columns.
  • For the time-limited information sequence   $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1)$   the drawn matrix part is sufficient.  The encoded sequence is then:
$$\underline{x} = (0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0).$$
  • Based on the label colors,  the  $n = 3$  encoded sequences can be read off.
  • We got the same result  $($in a different way$)$  in  $\text{Example 4}$  at the end of the last chapter:
$$\underline{\it x}^{(1)} = (0\hspace{0.05cm}, 0\hspace{0.05cm}, 1\hspace{0.05cm}, 1) \hspace{0.05cm},$$
$$\underline{\it x}^{(2)} = (1\hspace{0.05cm}, 0\hspace{0.05cm},1\hspace{0.05cm}, 1) \hspace{0.05cm},$$
$$ \underline{\it x}^{(3)} = (1\hspace{0.05cm}, 1\hspace{0.05cm}, 1\hspace{0.05cm}, 0) \hspace{0.05cm}.$$
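The relation $\underline{x}_i = \sum_l \underline{u}_{i-l} \cdot \mathbf{G}_l$ can be reproduced directly without building the full matrix. The following sketch (the function name `encode` is my own) re-derives the encoded sequence of Example 2 from the two partial matrices:

```python
# Partial matrices G_0 and G_1 of the (n = 3, k = 2, m = 1) encoder:
partials = [
    [[1, 0, 1], [0, 1, 1]],   # G_0
    [[1, 1, 1], [1, 0, 0]],   # G_1
]

def encode(u_blocks, partials):
    """Evaluate x_i = sum_l u_{i-l} * G_l over GF(2); returns the flat sequence."""
    k, n = len(partials[0]), len(partials[0][0])
    x = []
    for i in range(len(u_blocks)):
        xi = [0] * n
        for l, Gl in enumerate(partials):
            if i - l < 0:
                break
            for j in range(n):
                for kappa in range(k):
                    xi[j] ^= u_blocks[i - l][kappa] & Gl[kappa][j]
        x.extend(xi)
    return x

u = [(0, 1), (1, 0), (0, 0), (1, 1)]
print(encode(u, partials))   # -> [0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0]
```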


Generator matrix for convolutional encoder of rate  $1/n$


We now consider the special case  $k = 1$,

  • on the one hand for reasons of the simplest possible representation,
  • but also because convolutional encoders of rate  $1/n$  are of great practical importance.

Convolutional encoder  $(k = 1, \ n = 2, \ m = 1)$

Convolutional encoder with  $k = 1, \ n = 2, \ m = 1$

  • From the adjacent sketch one can derive:
$${ \boldsymbol{\rm G}}_0=\begin{pmatrix} 1 & 1 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_1=\begin{pmatrix} 0 & 1 \end{pmatrix}\hspace{0.05cm}.$$
  • Thus,  the resulting generator matrix is:
$${ \boldsymbol{\rm G}}=\begin{pmatrix} 11 & 01 & 00 & 00 & 00 & \cdots & \\ 00 & 11 & 01 & 00 & 00 & \cdots & \\ 00 & 00 & 11 & 01 & 00 & \cdots & \\ 00 & 00 & 00 & 11 & 01 & \cdots & \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{pmatrix}\hspace{0.05cm}.$$
  • For the input sequence   $\underline{u} = (1, 0, 1, 1)$,  the encoded sequence starts with   $\underline{x} = (1, 1, 0, 1, 1, 1, 1, 0, \ \text{...})$.
  • This result is equal to the sum of rows  1,  3  and  4  of the generator matrix.

Convolutional encoder  $(k = 1, \ n = 2, \ m = 2)$

Convolutional encoder with  $k = 1, \ n = 2, \ m = 2$

  • Due to the memory order  $m = 2$  there are three submatrices here:
\[{ \boldsymbol{\rm G}}_0=\begin{pmatrix} 1 & 1 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_1=\begin{pmatrix} 1 & 0 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_2=\begin{pmatrix} 1 & 1 \end{pmatrix}\hspace{0.05cm}.\]
  • Thus,  the resulting generator matrix is now:
\[ { \boldsymbol{\rm G}}=\begin{pmatrix} 11 & 10 & 11 & 00 & 00 & 00 & \cdots & \\ 00 & 11 & 10 & 11 & 00 & 00 & \cdots & \\ 00 & 00 & 11 & 10 & 11 & 00 & \cdots & \\ 00 & 00 & 00 & 11 & 10 & 11 & \cdots & \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{pmatrix}\hspace{0.05cm}.\]
  • Here the input sequence  $\underline{u} = (1, 0, 1, 1)$  leads to the encoded sequence  $\underline{x} = (1, 1, 1, 0, 0, 0, 0, 1, \ \text{...})$.

Convolutional encoder  $(k = 1, \ n = 3, \ m = 3)$

Convolutional encoder with $k = 1, \ n = 3, \ m = 3$

  • Because of  $m = 3$  there are now four partial matrices of the respective dimension  $1 × 3$:
\[{ \boldsymbol{\rm G}}_0=\begin{pmatrix} 1 & 1 & 0 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_1=\begin{pmatrix} 0 & 0 & 1 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_2=\begin{pmatrix} 0 & 0 & 1 \end{pmatrix}\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm G}}_3=\begin{pmatrix} 0 & 1 & 1 \end{pmatrix}\hspace{0.05cm}.\]
  • Thus,  the resulting generator matrix is:
\[{ \boldsymbol{\rm G}}=\begin{pmatrix} 110 & 001 & 001 & 011 & 000 & 000 & 000 & \cdots & \\ 000 & 110 & 001 & 001 & 011 & 000 & 000 & \cdots & \\ 000 & 000 & 110 & 001 & 001 & 011 & 000 & \cdots & \\ 000 & 000 & 000 & 110 & 001 & 001 & 011 & \cdots & \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{pmatrix}\hspace{0.05cm}.\]
  • One obtains for   $\underline{u} = (1, 0, 1, 1)$   the encoded sequence  $\underline{x} = (1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, \ \text{...})$.
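The band structure of these generator matrices can be built programmatically. The sketch below (the helper name `build_G` is an assumption, not from the text) stacks shifted copies of the partial matrices of the last example $(k = 1, \ n = 3, \ m = 3)$ and evaluates $\underline{x} = \underline{u} \cdot \mathbf{G}$ modulo 2:

```python
# Band-shaped generator matrix G from 1-row partial matrices (k = 1 case).

def build_G(partials, num_blocks):
    """Stack shifted copies of the partial matrices into num_blocks rows."""
    n = len(partials[0][0])
    cols = (num_blocks + len(partials) - 1) * n
    G = []
    for r in range(num_blocks):
        row = [0] * cols
        for l, Gl in enumerate(partials):
            row[(r + l) * n:(r + l + 1) * n] = Gl[0]
        G.append(row)
    return G

partials = [[[1, 1, 0]], [[0, 0, 1]], [[0, 0, 1]], [[0, 1, 1]]]  # G_0 ... G_3
G = build_G(partials, num_blocks=4)

u = [1, 0, 1, 1]
x = [sum(u[r] & G[r][c] for r in range(4)) % 2 for c in range(4 * 3)]
print(x)   # -> [1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
```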

GF(2) description forms of a digital filter


Digital filter in  ${\rm GF}(2)$  of order  $m$

In the chapter  "Basics of Convolutional Coding"  it was already pointed out,

  1. that a rate  $1/n$ convolutional encoder can be realized by several digital filters,
  2. where the filters operate in parallel on the same input sequence  $\underline{u}$.


Before we elaborate on this statement,  we shall first mention the properties of a digital filter for the Galois field  ${\rm GF(2)}$.

The graph is to be interpreted as follows:

  • The filter has impulse response  $\underline{g} = (g_0,\ g_1,\ g_2, \ \text{...} \ ,\ g_m)$.
  • For all filter coefficients  $($with indices  $0 ≤ l ≤ m)$   holds:   $g_l ∈ {\rm GF}(2) = \{0, 1\}$.
  • The individual symbols  $u_i$  of the input sequence  $\underline{u}$  are also binary:   $u_i ∈ \{0, 1\}$.
  • Thus,  the output symbol at times  $i ≥ 1$  is obtained with addition and multiplication in  ${\rm GF}(2)$  as:
\[x_i = \sum_{l = 0}^{m} g_l \cdot u_{i-l} \hspace{0.05cm}.\]
  • This corresponds to the  $($discrete time$)$   »$\rm convolution$«,  denoted by an asterisk.  This can be used to write for the entire output sequence:
\[\underline{x} = \underline{u} * \underline{g}\hspace{0.05cm}.\]
  • The major difference compared to the chapter  »"Digital Filters"«  in the book  "Theory of Stochastic Signals"  is the modulo-2 addition  $(1 + 1 = 0)$  instead of conventional addition  $(1 + 1 = 2)$.

$\text{Example 3:}$  The impulse response of the shown third order digital filter is:   $\underline{g} = (1, 0, 1, 1)$.

Digital filter with impulse response  $(1, 0, 1, 1)$
  • Let the input sequence of this filter be unlimited in time:   $\underline{u} = (1, 1, 0, 0, 0, \ \text{ ...})$.
  • This gives the  $($infinite$)$  output sequence  $\underline{x}$  in the binary Galois field   ⇒   ${\rm GF}(2)$:
\[\underline{x} = (\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 0, \hspace{0.05cm} \text{ ...} \hspace{0.05cm}) * (\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1\hspace{0.05cm})\]
\[\Rightarrow \hspace{0.3cm} \underline{x} =(\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \hspace{0.05cm}0,\hspace{0.05cm} \text{ ...} \hspace{0.05cm}) \oplus (\hspace{0.05cm}0,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1,\hspace{0.05cm}0, \hspace{0.05cm} \hspace{0.05cm} \text{ ...}\hspace{0.05cm}) = (\hspace{0.05cm}1,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \hspace{0.05cm} \text{ ...} \hspace{0.05cm}) \hspace{0.05cm}.\]
  • In the conventional convolution  $($for real numbers$)$,  on the other hand,  the result would have been:
\[\underline{x}= (\hspace{0.05cm}1,\hspace{0.05cm}\hspace{0.05cm}1,\hspace{0.05cm} 1,\hspace{0.05cm} 2,\hspace{0.05cm} 1,\hspace{0.05cm} 0, \text{ ...} \hspace{0.05cm}) \hspace{0.05cm}.\]
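The two results of Example 3 can be reproduced with one convolution routine that optionally reduces modulo 2. A small sketch (plain Python, no library assumed):

```python
# Discrete-time convolution, over GF(2) (modulo 2) or over the reals.

def convolve(u, g, mod2=True):
    res = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            res[i + j] += ui * gj
    return [r % 2 for r in res] if mod2 else res

u = [1, 1, 0, 0, 0, 0]   # input sequence (truncated)
g = [1, 0, 1, 1]         # impulse response of the third-order filter

print(convolve(u, g))              # -> [1, 1, 1, 0, 1, 0, 0, 0, 0]  (GF(2))
print(convolve(u, g, mod2=False))  # -> [1, 1, 1, 2, 1, 0, 0, 0, 0]  (reals)
```

Only the coefficient at position 3 differs: $1 + 1 = 0$ in GF(2) versus $1 + 1 = 2$ with conventional addition.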


However,  discrete time signals can also be represented by polynomials with respect to a dummy variable.

$\text{Definition:}$  The  »D–transform«  belonging to the discrete time signal   $\underline{x} = (x_0, x_1, x_2, \ \text{...}) $  reads:

\[X(D) = x_0 + x_1 \cdot D + x_2 \cdot D^2 + \hspace{0.05cm}\text{...}\hspace{0.05cm}= \sum_{i = 0}^{\infty} x_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.\]
  • For this particular transformation to an image area,  we also use the following notation,  where  "D"  stands  for  "delay operator":
\[\underline{x} = (x_0, x_1, x_2,\hspace{0.05cm}...\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad X(D) = \sum_{i = 0}^{\infty} x_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.\]


Note:   In the literature,  sometimes  $x(D)$  is used instead of  $X(D)$.  However,  we write in our learning tutorial all image domain functions   ⇒   "spectral domain functions"  with capital letters,   for example the Fourier transform, the Laplace transform and the D–transform:

\[x(t) \hspace{0.15cm} \circ\!\!-\!\!\!-^{\hspace{-0.25cm}}\!\!\!-\!\!\bullet\hspace{0.15cm} X(f)\hspace{0.05cm},\hspace{0.4cm} x(t) \hspace{0.15cm} \circ\!\!-\!\!\!-^{\hspace{-0.25cm}\rm L}\!\!\!-\!\!\bullet\hspace{0.15cm} X(p) \hspace{0.05cm},\hspace{0.4cm} \underline{x} \hspace{0.15cm} \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\hspace{0.15cm} X(D) \hspace{0.05cm}.\]

We now apply the  D–transform also

  • to the information sequence  $\underline{u}$,  and
  • the impulse response  $\underline{g}$. 


Since  $\underline{g}$  is limited in time,  the upper summation limit for  $G(D)$  is  $i = m$:

\[\underline{u} = (u_0, u_1, u_2,\hspace{0.05cm}\text{...}\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad U(D) = \sum_{i = 0}^{\infty} u_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm},\]
\[\underline{g} = (g_0, g_1, \hspace{0.05cm}\text{...}\hspace{0.05cm}, g_m) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad G(D) = \sum_{i = 0}^{m} g_i \cdot D\hspace{0.05cm}^i \hspace{0.05cm}.\]

$\text{Theorem:}$  As with all spectral transformations,  the  »convolution«  of the  $($discrete$)$  time functions  $\underline{u}$  and  $\underline{g}$  corresponds to the  »multiplication«  of their  D–transforms in the image domain:

\[\underline{x} = \underline{u} * \underline{g} \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad X(D) = U(D) \cdot G(D) \hspace{0.05cm}.\]
  • The  $($rather simple$)$  $\rm proof$  of this important result can be found in the specification for  "Exercise 3.3Z".
  • As is common in  »$\text{system theory}$«,  the  D–transform  $G(D)$  of the impulse response  $\underline{g}$  is also called the  "transfer function".
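The correspondence between modulo-2 convolution and GF(2) polynomial multiplication can be checked numerically. A minimal sketch in Python  $($the helper name  `gf2_convolve`  is our own choice$)$:

```python
def gf2_convolve(u, g):
    """Modulo-2 convolution of two binary sequences; since addition
    in GF(2) is XOR, this is also GF(2) polynomial multiplication."""
    x = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            x[i + j] ^= ui & gj      # XOR instead of real-valued addition
    return x

# Sequences used above: u = (1, 1), g = (1, 0, 1, 1)
x = gf2_convolve([1, 1], [1, 0, 1, 1])
# x -> [1, 1, 1, 0, 1], i.e. X(D) = 1 + D + D^2 + D^4
```

The real-valued convolution of the same sequences would give  $(1, 1, 1, 2, 1)$;  reducing each coefficient modulo 2 yields the GF(2) result.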


Impulse response  $(1, 0, 1, 1)$  of a digital filter

$\text{Example 4:}$  We consider again the discrete time signals

\[\underline{u} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\text{...}\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad U(D) = 1+ D \hspace{0.05cm},\]
\[\underline{g} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad G(D) = 1+ D^2 + D^3 \hspace{0.05cm}.\]
  • As in  $\text{Example 3}$  $($earlier in this section$)$,  this solution path also yields:
\[X(D) = U(D) \cdot G(D) = (1+D) \cdot (1+ D^2 + D^3) \]
\[\Rightarrow \hspace{0.3cm} X(D) = 1+ D^2 + D^3 +D + D^3 + D^4 = 1+ D + D^2 + D^4 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline{x} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \text{...} \hspace{0.05cm}) \hspace{0.05cm}.\]
  • Multiplication by the  "delay operator"  $D$  in the image domain corresponds to a shift of one place to the right in the time domain:
\[W(D) = D \cdot X(D) \quad \bullet\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\circ\quad \underline{w} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \text{...} \hspace{0.05cm}) \hspace{0.05cm}.\]
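The delay property is directly visible in code: multiplying by  $D$  in the image domain amounts to prepending a zero in the time domain. A tiny Python sketch:

```python
x = [1, 1, 1, 0, 1]    # X(D) = 1 + D + D^2 + D^4  (from Example 4)
w = [0] + x            # W(D) = D * X(D): shift one place to the right
# w -> [0, 1, 1, 1, 0, 1]
```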


Application of the  D–transform  to rate  $1/n$  convolutional encoders


We now apply the results of the last section to a convolutional encoder,  restricting ourselves for the moment to the special case  $k = 1$.

  • Such an  $(n, \ k = 1)$  convolutional code can be realized with  $n$  digital filters operating in parallel on the same information sequence  $\underline{u}$.
  • The graph shows the arrangement for the code parameter  $n = 2$   ⇒   code rate $R = 1/2$.
Two filters working in parallel, each with order  $m$


The following equations apply equally to both filters,  setting $j = 1$  for the upper filter  and $j = 2$  for the lower filter:

  • The  »impulse responses«  of the two filters result in
\[\underline{g}^{(j)} = (g_0^{(j)}, g_1^{(j)}, \hspace{0.05cm}\text{...}\hspace{0.05cm}, g_m^{(j)}\hspace{0.01cm}) \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.\]
  • The two  »output sequences«  are as follows,  considering that both filters operate on the same input sequence  $\underline{u} = (u_0, u_1, u_2, \hspace{0.05cm} \text{...})$ :
\[\underline{x}^{(j)} = (x_0^{(j)}, x_1^{(j)}, x_2^{(j)}, \hspace{0.05cm}\text{...}\hspace{0.05cm}) = \underline{u} * \underline{g}^{(j)} \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.\]
  • For the  »D–transform«  of the output sequences:
\[X^{(j)}(D) = U(D) \cdot G^{(j)}(D) \hspace{0.05cm},\hspace{0.2cm}{\rm with }\hspace{0.15cm} j \in \{1,2\}\hspace{0.05cm}.\]

In order to represent this fact more compactly,  we now define the following vectorial quantities of a convolutional code of rate  $1/n$:

$\text{Definition:}$  The  »D–transfer functions«  of the  $n$  parallel digital filters are combined in the vector  $\underline{G}(D)$:

\[\underline{G}(D) = \left ( G^{(1)}(D), G^{(2)}(D), \hspace{0.05cm}\text{...}\hspace{0.1cm}, G^{(n)} (D) \right )\hspace{0.05cm}.\]
  • The vector  $\underline{X}(D)$  contains the  D–transforms  of the  $n$  encoded sequences  $\underline{x}^{(1)}, \underline{x}^{(2)}, \ \text{...} \ , \underline{x}^{(n)}$:
\[\underline{X}(D) = \left ( X^{(1)}(D), X^{(2)}(D), \hspace{0.05cm}\text{...}\hspace{0.1cm}, X^{(n)} (D) \right )\hspace{0.05cm}.\]
  • This gives the following vector equation:
\[\underline{X}(D) = U(D) \cdot \underline{G}(D)\hspace{0.05cm}.\]
  • $U(D)$  is not a vector quantity here because of the code parameter  $k = 1$.


$\text{Example 5:}$  We consider the convolutional encoder with code parameters  $n = 2, \ k = 1, \ m = 2$.   For this encoder holds:

Convolutional encoder  $(n = 2, \ k = 1,\ m = 2)$
\[\underline{g}^{(1)} =(\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad G^{(1)}(D) = 1+ D + D^2 \hspace{0.05cm},\]
\[\underline{g}^{(2)}= (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad G^{(2)}(D) = 1+ D^2 \]
\[\Rightarrow \hspace{0.3cm} \underline{G}(D) = \big ( 1+ D + D^2 \hspace{0.05cm}, \hspace{0.1cm}1+ D^2 \big )\hspace{0.05cm}.\]
  • Let the information sequence be   $\underline{u} = (1, 0, 1, 1)$   ⇒   D–transform  $U(D) = 1 + D^2 + D^3$.  This gives:
\[\underline{X}(D) = \left ( X^{(1)}(D),\hspace{0.1cm} X^{(2)}(D) \right ) = U(D) \cdot \underline{G}(D) \hspace{0.05cm}, \hspace{0.2cm}\]
where
\[{X}^{(1)}(D) = (1+ D^2 + D^3) \cdot (1+ D + D^2)=1+ D + D^2 + D^2 + D^3 + D^4 + D^3 + D^4 + D^5 = 1+ D + D^5\]
\[\Rightarrow \hspace{0.3cm} \underline{x}^{(1)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} \text{...} \hspace{0.05cm} \hspace{0.05cm}) \hspace{0.05cm},\]
\[{X}^{(2)}(D) = (1+ D^2 + D^3) \cdot (1+ D^2)=1+ D^2 + D^2 + D^4 + D^3 + D^5 = 1+ D^3 + D^4 + D^5\]
\[\Rightarrow \underline{x}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} 0\hspace{0.05cm}, \hspace{0.05cm} \text{...} \hspace{0.05cm} \hspace{0.05cm}) \hspace{0.05cm}.\]
  • We obtained the same result in  "Exercise 3.1Z"  in a different way.  After multiplexing the two strands,  one again obtains:
$$\underline{x} = (11, 10, 00, 01, 01, 11, 00, 00, \hspace{0.05cm} \text{...} \hspace{0.05cm}).$$
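Example 5 can be reproduced with two parallel modulo-2 convolutions. A Python sketch  $($the helper name  `gf2_convolve`  is our own choice$)$:

```python
def gf2_convolve(u, g):
    """Modulo-2 convolution, i.e. GF(2) polynomial multiplication."""
    x = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            x[i + j] ^= ui & gj
    return x

u  = [1, 0, 1, 1]          # information sequence, U(D) = 1 + D^2 + D^3
g1 = [1, 1, 1]             # upper filter, G1(D) = 1 + D + D^2
g2 = [1, 0, 1]             # lower filter, G2(D) = 1 + D^2

x1 = gf2_convolve(u, g1)   # -> [1, 1, 0, 0, 0, 1]  =  1 + D + D^5
x2 = gf2_convolve(u, g2)   # -> [1, 0, 0, 1, 1, 1]  =  1 + D^3 + D^4 + D^5

# Multiplexing both strands reproduces x = (11, 10, 00, 01, 01, 11):
x = [b for pair in zip(x1, x2) for b in pair]
```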


Transfer function matrix


We have seen that a convolutional code of rate  $1/n$  can be most compactly described as a vector equation in the  D–transformed domain:

General  $(n, \ k)$ convolutional encoder
$$\underline{X}(D) = U(D) \cdot \underline{G}(D).$$

Now we extend the result to convolutional encoders with more than one input   ⇒   $k ≥ 2$  $($see graph$)$.

To describe a convolutional code of rate  $k/n$  in the  D–domain,  the dimensions of the above vector equation must be increased with respect to the input and the transfer function:

\[\underline{X}(D) = \underline{U}(D) \cdot { \boldsymbol{\rm G}}(D)\hspace{0.05cm}.\]

This requires the following measures:

  • From the scalar function  $U(D)$  we get the vector 
$$\underline{U}(D) = (U^{(1)}(D), \ U^{(2)}(D), \hspace{0.05cm} \text{...} \hspace{0.05cm} , \ U^{(k)}(D)).$$
  • From the vector  $\underline{G}(D)$  we get the  $k × n$  transfer function matrix  $($or  "polynomial generator matrix"$)$   $\mathbf{G}(D)$ :
\[{\boldsymbol{\rm G}}(D)=\begin{pmatrix} G_1^{(1)}(D) & G_1^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_1^{(n)}(D)\\ G_2^{(1)}(D) & G_2^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_2^{(n)}(D)\\ \vdots & \vdots & & \vdots\\ G_k^{(1)}(D) & G_k^{(2)}(D) & \hspace{0.05cm} \text{...} \hspace{0.05cm} & G_k^{(n)}(D) \end{pmatrix}\hspace{0.05cm}.\]
  • Each of the   $k \cdot n$   matrix elements   $G_i^{(j)}(D)$   with   $1 ≤ i ≤ k,\ 1 ≤ j ≤ n$  is a polynomial in the dummy variable  $D$  over the Galois field  ${\rm GF}(2)$,  of degree at most  $m$,  where  $m$  denotes the memory.
  • The above transfer function matrix can also be written using the  »$\text{partial matrices}$«  $\mathbf{G}_0, \ \text{...} \ , \mathbf{G}_m$  $($as index we again use  $l)$:
\[{\boldsymbol{\rm G}}(D) = \sum_{l = 0}^{m} {\boldsymbol{\rm G}}_l \cdot D\hspace{0.03cm}^l = {\boldsymbol{\rm G}}_0 + {\boldsymbol{\rm G}}_1 \cdot D + {\boldsymbol{\rm G}}_2 \cdot D^2 + \hspace{0.05cm} \text{...} \hspace{0.05cm}+ {\boldsymbol{\rm G}}_m \cdot D\hspace{0.03cm}^m \hspace{0.05cm}.\]

$\text{Example 6:}$  We consider the  $(n = 3, \ k = 2, \ m = 1)$ convolutional encoder whose partial matrices have already been determined in the  $\text{Example 1}$  as follows:

Convolutional encoder with  $k = 2, \ n = 3, \ m = 1$
\[{ \boldsymbol{\rm G} }_0 = \begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix} \hspace{0.05cm}, \hspace{0.5cm} { \boldsymbol{\rm G} }_1 = \begin{pmatrix} 1 & 1 & 1\\ 1 & 0 & 0 \end{pmatrix}\hspace{0.05cm}.\]
  • Because of  $m = 1$  no partial matrices exist for  $l ≥ 2$.  Thus the transfer function matrix is:
\[{\boldsymbol{\rm G} }(D) = {\boldsymbol{\rm G} }_0 + {\boldsymbol{\rm G} }_1 \cdot D = \begin{pmatrix} 1+D & D & 1+D\\ D & 1 & 1 \end{pmatrix} \hspace{0.05cm}.\]
  • Let the  $($time limited$)$  information sequence be  $\underline{u} = (0, 1, 1, 0, 0, 0, 1, 1)$,  from which the two input sequences are as follows:
\[\underline{u}^{(1)} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad {U}^{(1)}(D) = D + D^3 \hspace{0.05cm},\]
\[\underline{u}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm}) \quad \circ\!\!-\!\!\!-^{\hspace{-0.25cm}D}\!\!\!-\!\!\bullet\quad {U}^{(2)}(D) = 1 + D^3 \hspace{0.05cm}.\]
  • From this follows for the vector of the  D–transform at the encoder output:
\[\underline{X}(D) = \big (\hspace{0.05cm} {X}^{(1)}(D)\hspace{0.05cm}, \hspace{0.05cm} {X}^{(2)}(D)\hspace{0.05cm}, \hspace{0.05cm} {X}^{(3)}(D)\hspace{0.05cm}\big ) = \underline{U}(D) \cdot {\boldsymbol{\rm G} }(D) = \begin{pmatrix} D+D^3 & 1+D^3 \end{pmatrix} \cdot \begin{pmatrix} 1+D & D & 1+D\\ D & 1 & 1 \end{pmatrix}\hspace{0.05cm}.\]
  • This results in the following encoded sequences in the three strands:
\[{X}^{(1)}(D) = (D + D^3) \cdot (1+D) + (1 + D^3) \cdot D =D + D^2 + D^3 + D^4 + D + D^4 = D^2 + D^3\]
\[\Rightarrow \hspace{0.3cm} \underline{x}^{(1)} = (\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm},\]
\[{X}^{(2)}(D)= (D + D^3) \cdot D + (1 + D^3) \cdot 1 = D^2 + D^4 + 1 + D^3 = 1+D^2 + D^3 + D^4\]
\[\Rightarrow \hspace{0.3cm}\underline{x}^{(2)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm},\]
\[{X}^{(3)}(D)=(D + D^3) \cdot (1 + D) + (1 + D^3) \cdot 1 = D + D^2 + D^3+ D^4 + 1 + D^3 = 1+ D + D^2 + D^4\]
\[\Rightarrow \hspace{0.3cm}\underline{x}^{(3)} = (\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}\hspace{0.05cm} \text{...} \hspace{0.05cm}) \hspace{0.05cm}.\]

We have already obtained the same results in other ways in previous examples.
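The matrix computation of Example 6 can be reproduced with elementary polynomial arithmetic over GF(2). A Python sketch  $($the helper names  `pmul`  and  `padd`  are our own$)$:

```python
# Polynomials over GF(2) as coefficient lists [c0, c1, ...] in powers of D.
def pmul(a, b):
    """GF(2) polynomial multiplication (= modulo-2 convolution)."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] ^= ai & bj
    return r

def padd(a, b):
    """GF(2) polynomial addition (coefficient-wise XOR)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [p ^ q for p, q in zip(a, b)]

# Transfer function matrix of the (n = 3, k = 2, m = 1) encoder,
# G(D) = [[1+D, D, 1+D], [D, 1, 1]], element-wise as coefficient lists:
G = [[[1, 1], [0, 1], [1, 1]],
     [[0, 1], [1, 0], [1, 0]]]

U = [[0, 1, 0, 1],    # U1(D) = D + D^3
     [1, 0, 0, 1]]    # U2(D) = 1 + D^3

# X^(j)(D) = sum over i of U^(i)(D) * G_i^(j)(D)
X = []
for j in range(3):
    xj = [0]
    for i in range(2):
        xj = padd(xj, pmul(U[i], G[i][j]))
    X.append(xj)
# X[0] -> [0, 0, 1, 1, 0]  =  D^2 + D^3
# X[1] -> [1, 0, 1, 1, 1]  =  1 + D^2 + D^3 + D^4
# X[2] -> [1, 1, 1, 0, 1]  =  1 + D + D^2 + D^4
```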


Systematic convolutional codes


The polynomial representation using the transfer function matrix  $\mathbf{G}(D)$  provides insight into the structure of a convolutional code,  for example whether it is a  »systematic«  code.

Systematic convolutional code with  $k = 3, \ n = 4$
  • A convolutional code is called  »systematic«  if the encoded sequences  $\underline{x}^{(1)}, \ \text{...} \ , \ \underline{x}^{(k)}$  are identical to the information sequences  $\underline{u}^{(1)}, \ \text{...} \ , \ \underline{u}^{(k)}$.
  • The graph shows an example of a systematic  $(n = 4, \ k = 3)$  convolutional code.


A systematic   $(n, k)$  convolutional code exists whenever the transfer function matrix  $($with  $k$  rows and  $n$  columns$)$  has the following appearance:

\[{\boldsymbol{\rm G}}(D) = {\boldsymbol{\rm G}}_{\rm sys}(D) = \left [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\right ] \hspace{0.05cm}.\]

The following nomenclature is used:

  1.   $\mathbf{I}_k$  denotes the unit  $($identity$)$  matrix of dimension  $k × k$.
  2.   $\mathbf{P}(D)$  is a  $k × (n -k)$ matrix,  where each matrix element describes a polynomial in  $D$.

$\text{Example 7:}$  A systematic convolutional code with   $n = 3, \ k = 2, \ m = 2$   might have the following transfer function matrix:

\[{\boldsymbol{\rm G} }_{\rm sys}(D) = \begin{pmatrix} 1 & 0 & 1+D^2\\ 0 & 1 & 1+D \end{pmatrix}\hspace{0.05cm}.\]
  • Other systematic convolutional codes with the same  $n$  and the same  $k$  differ from this one only in the two matrix elements of the last column.



Equivalent systematic convolutional code


For every  $(n, \ k)$  convolutional code with matrix   $\mathbf{G}(D)$   there is an  "equivalent systematic code"  whose  D–matrix we denote by  $\mathbf{G}_{\rm sys}(D)$.

To get from the transfer function matrix   $\mathbf{G}(D)$   to the matrix   $\mathbf{G}_{\rm sys}(D)$   of the equivalent systematic convolutional code,  proceed as follows according to the diagram:

Subdivision of  $\mathbf{G}(D)$  into  $\mathbf{T}(D)$  and  $\mathbf{Q}(D)$
  • Divide the  $k × n$  matrix  $\mathbf{G}(D)$  into a square matrix  $\mathbf{T}(D)$  with  $k$  rows and  $k$  columns and denote the remainder by  $\mathbf{Q}(D)$.
  • Then calculate the inverse matrix   $\mathbf{T}^{-1}(D)$   to   $\mathbf{T}(D)$  and from this the matrix for the equivalent systematic code:
\[{\boldsymbol{\rm G}}_{\rm sys}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm G}}(D) \hspace{0.05cm}.\]
  • Since the product   $\mathbf{T}^{-1}(D) \cdot \mathbf{T}(D)$   yields the  $k × k$  unit matrix   $\mathbf{I}_k$  
    ⇒   the transfer function matrix of the equivalent systematic code can be written in the desired form:
\[{\boldsymbol{\rm G}}_{\rm sys}(D) = \bigg [ \hspace{0.05cm} {\boldsymbol{\rm I}}_k\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm P}}(D) \hspace{0.05cm}\bigg ] \hspace{0.5cm}{\rm with}\hspace{0.5cm} {\boldsymbol{\rm P}}(D)= {\boldsymbol{\rm T}}^{-1}(D) \cdot {\boldsymbol{\rm Q}}(D) \hspace{0.05cm}. \hspace{0.05cm}\]

$\text{Example 8:}$  The rate  $2/3$  encoder frequently considered in the last sections is not systematic,  because  e.g.  $\underline{x}^{(1)} ≠ \underline{u}^{(1)}$  and  $\underline{x}^{(2)} ≠ \underline{u}^{(2)}$   hold  $($see adjacent graphic$)$.

Convolutional encoder of rate  $2/3$

⇒   However,  this can also be seen from the transfer function matrix:

\[{\boldsymbol{\rm G} }(D) = \big [ \hspace{0.05cm} {\boldsymbol{\rm T} }(D)\hspace{0.05cm} ; \hspace{0.1cm} {\boldsymbol{\rm Q} }(D) \hspace{0.05cm}\big ]\]
\[\Rightarrow \hspace{0.3cm} {\boldsymbol{\rm T} }(D) = \begin{pmatrix} 1+D & D\\ D & 1 \end{pmatrix}\hspace{0.05cm},\hspace{0.2cm} {\boldsymbol{\rm Q} }(D) = \begin{pmatrix} 1+D \\ 1 \end{pmatrix}\hspace{0.05cm}.\]
  • The determinant of  $\mathbf{T}(D)$  is   $(1 + D) \cdot 1 + D \cdot D = 1 + D + D^2$   and is nonzero.
  • Thus,  the inverse of  $\mathbf{T}(D)$  can be written as  $($note the swapped diagonal elements!$)$:
\[{\boldsymbol{\rm T} }^{-1}(D) = \frac{1}{1+D+D^2} \cdot \begin{pmatrix} 1 & D\\ D & 1+D \end{pmatrix}\hspace{0.05cm}.\]
  • The product  $\mathbf{T}(D) \cdot \mathbf{T}^{–1}(D)$  gives the unit matrix   $\mathbf{I}_2$   ⇒   for the third column of  $\mathbf{G}_{\rm sys}(D)$  holds:
\[{\boldsymbol{\rm P} }(D)= {\boldsymbol{\rm T} }^{-1}(D) \cdot {\boldsymbol{\rm Q} }(D) = \frac{1}{1+D+D^2} \cdot \begin{pmatrix} 1 & D\\ D & 1+D \end{pmatrix}\cdot \begin{pmatrix} 1+D\\ 1 \end{pmatrix} \]
\[\Rightarrow \hspace{0.3cm} {\boldsymbol{\rm P} }(D) = \frac{1}{1+D+D^2} \cdot \begin{pmatrix} (1+D) + D \\ D \cdot (1+D) + (1+D) \end{pmatrix} = \frac{1}{1+D+D^2} \cdot \begin{pmatrix} 1 \\ 1+D^2 \end{pmatrix} \]
\[\Rightarrow \hspace{0.2cm}{\boldsymbol{\rm G} }_{\rm sys}(D) = \begin{pmatrix} 1 & 0 & \frac{1}{1+D+D^2}\\ 0 & 1 &\frac{1+D^2}{1+D+D^2} \end{pmatrix}\hspace{0.05cm}. \]
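The inversion in Example 8 can be checked without rational-function arithmetic: since  $\mathbf{T}^{-1}(D) = {\rm adj}(\mathbf{T})/\det(\mathbf{T})$  and all signs vanish in GF(2), it suffices to compute  $\det(\mathbf{T})$  and the numerators  ${\rm adj}(\mathbf{T}) \cdot \mathbf{Q}$.  A Python sketch  $($helper names are our own$)$:

```python
def pmul(a, b):
    """GF(2) polynomial multiplication."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] ^= ai & bj
    return r

def padd(a, b):
    """GF(2) polynomial addition (coefficient-wise XOR)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [p ^ q for p, q in zip(a, b)]

def trim(p):
    """Drop trailing zero coefficients for comparison."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# T(D) = [[1+D, D], [D, 1]] and Q(D) = (1+D, 1)^T as coefficient lists
T = [[[1, 1], [0, 1]],
     [[0, 1], [1]]]
Q = [[1, 1], [1]]

# det T = (1+D)*1 + D*D  (signs do not matter in GF(2))
det = padd(pmul(T[0][0], T[1][1]), pmul(T[0][1], T[1][0]))

# adj(T): swap the diagonal elements (off-diagonal signs vanish in GF(2))
adj = [[T[1][1], T[0][1]],
       [T[1][0], T[0][0]]]

# Numerators of P(D) = T^{-1}(D) * Q(D) = adj(T) * Q(D) / det(T):
num1 = padd(pmul(adj[0][0], Q[0]), pmul(adj[0][1], Q[1]))
num2 = padd(pmul(adj[1][0], Q[0]), pmul(adj[1][1], Q[1]))
# trim(det)  -> [1, 1, 1]  =  1 + D + D^2
# trim(num1) -> [1]        =>  P1(D) = 1 / (1 + D + D^2)
# trim(num2) -> [1, 0, 1]  =>  P2(D) = (1 + D^2) / (1 + D + D^2)
```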


It remains to be clarified what the filter of such a fractional–rational transfer function looks like.

Filter structure with fractional–rational transfer function


If a transfer function has the form   $G(D) = A(D)/B(D)$,  the associated filter is called  »recursive«.  Given a recursive convolutional encoder with memory  $m$,  the two polynomials   $A(D)$   and   $B(D)$   can be written in general terms:

Recursive filter for realization of  $G(D) = A(D)/B(D)$
\[A(D) = \sum_{l = 0}^{m} a_l \cdot D\hspace{0.05cm}^l = a_0 + a_1 \cdot D + a_2 \cdot D^2 +\ \text{...} \ \hspace{0.05cm} + a_m \cdot D\hspace{0.05cm}^m \hspace{0.05cm},\]
\[B(D) = 1 + \sum_{l = 1}^{m} b_l \cdot D\hspace{0.05cm}^l = 1 + b_1 \cdot D + b_2 \cdot D^2 + \ \text{...} \ \hspace{0.05cm} + b_m \cdot D\hspace{0.05cm}^m \hspace{0.05cm}.\]

The graphic shows the corresponding filter structure in the so–called  "Controller Canonical Form":

  • The coefficients   $a_0, \ \text{...} \ , \ a_m$   describe the forward branch.
  • The coefficients   $b_1, \ \text{...} \ , \ b_m$   form the feedback branch.
  • All coefficients are binary:  $1$  $($continuous connection$)$  or  $0$  $($missing connection$)$.


$\text{Example 9:}$  The filter structure outlined on the right can be described as follows:

Filter:  $G(D) = (1+D^2)/(1+D +D^2)$
\[x_i = w_i + w_{i-2} \hspace{0.05cm},\]
\[w_i = u_i + w_{i-1}+ w_{i-2} \hspace{0.05cm}.\]
  • Accordingly,  for the  D–transforms:
\[X(D) =W(D) + W(D) \cdot D^2 =W(D) \cdot \left ( 1+ D^2 \right ) \hspace{0.05cm},\]
\[W(D) = \hspace{0.08cm} U(D) + W(D) \cdot D+ W(D) \cdot D^2\]
\[\Rightarrow \hspace{0.3cm} U(D) = W(D) \cdot \left ( 1+ D + D^2 \right ) \hspace{0.05cm}.\]
  • Thus,  one obtains for the transfer function of this filter:
\[G(D) = \frac{X(D)}{U(D)} = \frac{1+D^2}{1+D+D^2} \hspace{0.05cm}. \]
  • In  $\text{Example 8}$  on the equivalent systematic convolutional code,  exactly this expression resulted in the lower branch.
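The recursion of Example 9 can be simulated directly. The following Python sketch computes the first values of the  $($infinite$)$  impulse response of  $G(D) = (1+D^2)/(1+D+D^2)$  in controller canonical form:

```python
def recursive_filter(u):
    """Controller canonical form of G(D) = (1+D^2)/(1+D+D^2) over GF(2):
       w_i = u_i + w_{i-1} + w_{i-2},   x_i = w_i + w_{i-2}."""
    w1 = w2 = 0                # register contents w_{i-1} and w_{i-2}
    x = []
    for ui in u:
        w = ui ^ w1 ^ w2       # feedback branch: b1 = b2 = 1
        x.append(w ^ w2)       # forward branch:  a0 = a2 = 1, a1 = 0
        w1, w2 = w, w1
    return x

# Impulse response: infinite (recursive filter), here the first 9 values
g = recursive_filter([1] + [0] * 8)
# g -> [1, 1, 1, 0, 1, 1, 0, 1, 1]
```

The nonterminating, eventually periodic output illustrates why a recursive filter has an infinite impulse response.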


Exercises for the chapter


Exercise 3.2: G-matrix of a Convolutional Encoder

Exercise 3.2Z: (3, 1, 3) Convolutional Encoder

Exercise 3.3: Code Sequence Calculation via U(D) and G(D)

Exercise 3.3Z: Convolution and D-Transformation

Exercise 3.4: Systematic Convolution Codes

Exercise 3.4Z: Equivalent Convolution Codes?

Exercise 3.5: Recursive Filters for GF(2)