Difference between revisions of "Channel Coding/Code Description with State and Trellis Diagram"

 
*A difference exists only with respect to the output code sequences; in both cases&nbsp; $\underline{x}_i &#8712; \{00, \ 01, \ 10, \ 11\}$&nbsp; applies.}}<br><br>
 
  
==  Representation in the trellis diagram ==
 
<br>
 
One arrives at the so-called&nbsp; <i>trellis diagram</i>&nbsp; (or short: &nbsp;'''trellis''') from the state transition diagram by additionally considering a time component &nbsp; &#8658; &nbsp; running variable&nbsp; $i$. The following figure contrasts the two diagrams for our "standard encoder"&nbsp; $(n = 2, \ k = 1, \ m = 2)$.<br>
  
[[File:P ID2635 KC T 3 3 S3 v2.png|center|frame|State transition diagram vs. trellis diagram&nbsp; $(n = 2, \ k = 1, \ m = 2)$|class=fit]]
  
The bottom illustration resembles a garden trellis &ndash; given some imagination &ndash; hence the name. Furthermore, this diagram shows:
*Since all memory registers are preset with zeros, the trellis always starts from the state&nbsp; $s_1 = S_0$. At the next time&nbsp; $(i = 2)$&nbsp; only the two (starting) states&nbsp; $S_0$&nbsp; and&nbsp; $S_1$&nbsp; are possible.<br>
  
*From time&nbsp; $i = m + 1 = 3$&nbsp; on, the trellis has exactly the same appearance for each transition from&nbsp; $s_i$&nbsp; to&nbsp; $s_{i+1}$. Each state&nbsp; $S_{\mu}$&nbsp; is connected to a successor state by a red arrow&nbsp; $(u_i = 0)$&nbsp; and a blue arrow&nbsp; $(u_i = 1)$.<br>
  
*In contrast to a&nbsp; <i>code tree</i>, which grows exponentially with each time step&nbsp; $i$&nbsp; &ndash; see for example the section&nbsp; [[Digital_Signal_Transmission/Viterbi_Receiver#Metric_and_accumulated_metric| "Metric and accumulated metric"]]&nbsp; in the book "Digital Signal Transmission" &ndash; the number of nodes (i.e. of possible states) is limited here to&nbsp; $2^m$.<br>
  
*This pleasing property of every trellis diagram is also exploited by the&nbsp; [[Channel_Coding/Decoding_of_Convolutional_Codes| "Viterbi algorithm"]]&nbsp; for efficient maximum likelihood decoding of convolutional codes.<br><br>
  
The following example shows that there is a&nbsp; $1\text{:}1$&nbsp; mapping between the concatenation of the code sequences&nbsp; $\underline{x}_i$&nbsp; and the paths through the trellis diagram.  
*Also, the information sequence&nbsp; $\underline{u}$&nbsp; can be read from the selected trellis path based on the colors of the individual branches.  
*A red branch stands for the information bit&nbsp; $u_i = 0$, a blue one for&nbsp; $u_i = 1$.<br>
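The one-to-one correspondence between information sequence, trellis path and code sequence can be traced with a short simulation. The following Python sketch assumes the standard encoder&nbsp; $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$; the register model&nbsp; $(s_1, s_2)$&nbsp; and the state index&nbsp; $\mu = s_1 + 2 s_2$&nbsp; are our own conventions, not taken from the lesson.

```python
# Sketch of the (n = 2, k = 1, m = 2) standard encoder with
# G(D) = (1 + D + D^2, 1 + D^2). State = (s1, s2), s1 being the newer bit.
def encode_step(u, state):
    """One trellis transition: code word (x1, x2) and successor state."""
    s1, s2 = state
    x1 = u ^ s1 ^ s2          # first generator polynomial 1 + D + D^2
    x2 = u ^ s2               # second generator polynomial 1 + D^2
    return (x1, x2), (u, s1)  # shift register advances by one bit

def encode(u_seq):
    state, x_seq, path = (0, 0), [], ["S0"]
    for u in u_seq:
        x, state = encode_step(u, state)
        x_seq.append(x)
        path.append("S%d" % (state[0] + 2 * state[1]))  # state index mu
    return x_seq, path

# Information sequence from Example 2; the resulting path can be compared
# with the highlighted path in the trellis figure.
x, path = encode([1, 1, 1, 0, 0, 0, 1, 0, 1])
print(x)
print(path)
```

A red branch corresponds to a call with&nbsp; $u_i = 0$, a blue one to&nbsp; $u_i = 1$; the returned state sequence is exactly the selected path through the trellis.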
  
  
 
{{GraueBox|TEXT=   
 
$\text{Example 2:}$&nbsp;  In&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#State_definition_for_a_memory_register|$\text{"Example 1"}$]]&nbsp; the code sequence&nbsp; $\underline{x}$&nbsp; was derived for our rate&ndash;$1/2$&nbsp; standard encoder with memory&nbsp; $m = 2$&nbsp; and the information sequence&nbsp; $\underline{u} = (1, 1, 1, 0, 0, 0, 1, 0, 1, \ \text{...})$; it is given again here for&nbsp; $i &#8804; 9$.<br>
  
[[File:P ID2636 KC T 3 3 S3b v1.png|center|frame|Trellis diagram of a sample sequence|class=fit]]
  
The trellis diagram is drawn below it. One can see:
*The selected path through the trellis results from the concatenation of red and blue arrows, which represent the possible information bits&nbsp; $u_i \in \{ 0, \ 1\}$. This statement holds for every rate&ndash;$1/n$&nbsp;convolutional code. For a code with&nbsp; $k > 1$&nbsp; there would be&nbsp; $2^k$&nbsp; arrows of different colors.<br>
  
*For a rate&ndash;$1/n$&nbsp;convolutional code, the arrows are labeled with the code words&nbsp; $\underline{x}_i = (x_i^{(1)}, \ \text{...} \ , \ x_i^{(n)})$, which are derived from the information bit&nbsp; $u_i$&nbsp; and the current register state&nbsp; $s_i$. For&nbsp; $n = 2$&nbsp; there are only four possible code words, namely&nbsp; $00, \ 01, \ 10$&nbsp; and&nbsp; $11$.<br>
  
*In the vertical direction, one recognizes from the trellis the register states&nbsp; $S_{\mu}$. Given a rate $k/n$ convolutional code with memory order&nbsp; $m$&nbsp; there are&nbsp; $2^{k \hspace{0.05cm}\cdot \hspace{0.05cm}m}$&nbsp; different states. In the present example&nbsp; $(k = 1, \ m = 2)$&nbsp; these are the states&nbsp; $S_0, \ S_1, \ S_2$ and $S_3$.}}<br>
  
== Definition of the free distance ==
 
<br>
 
An important parameter of linear block codes is the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|"minimum Hamming distance"]]&nbsp; '''between any two codewords'''&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.05cm}'$:
  
 
::<math>d_{\rm min}(\mathcal{C}) =
\min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.03cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')\hspace{0.05cm}.</math>
  
*Due to linearity, every block code also contains the zero word&nbsp; $\underline{0}$. Thus&nbsp; $d_{\rm min}$&nbsp; is identical to the minimum&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding| "Hamming weight"]]&nbsp; $w_{\rm H}(\underline{x})$&nbsp; of a codeword&nbsp; $\underline{x} &ne; \underline{0}$.<br>
  
*For convolutional codes, however, describing the distance properties proves to be much more complicated, since a convolutional code consists of infinitely many, infinitely long code sequences. Therefore, instead of the minimum Hamming distance, we define here:
  
  
 
{{BlaueBox|TEXT=   
 
 
$\text{Definition:}$&nbsp;   
 
*The&nbsp; <b>free distance</b>&nbsp; $d_{\rm F}$&nbsp; of a convolutional code is equal to the minimum number of bits in which any two code sequences&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.03cm}'$&nbsp; differ.  
*In other words: &nbsp; the free distance is equal to the ''minimum''&nbsp; Hamming distance&nbsp; '''between any two paths'''&nbsp; through the trellis.}}
  
  
Since convolutional codes are also linear, one can again use the infinitely long zero sequence as the reference sequence: &nbsp; $\underline{x}\hspace{0.03cm}' = \underline{0}$. Thus, the free distance&nbsp; $d_{\rm F}$&nbsp; is equal to the minimum Hamming weight (number of ones) of a code sequence&nbsp; $\underline{x} &ne; \underline{0}$.<br>
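Since&nbsp; $d_{\rm F}$&nbsp; equals the minimum Hamming weight of a code sequence&nbsp; $\underline{x} &ne; \underline{0}$, it can be computed by a shortest-path search over the trellis: one looks for the lightest detour that leaves the all-zero path in state&nbsp; $S_0$&nbsp; and re-merges with it. A minimal Python sketch for the standard encoder follows; the search structure and all names are our own choices.

```python
import heapq

# Weight of one trellis branch and successor state for the standard encoder
# G(D) = (1 + D + D^2, 1 + D^2); state = (s1, s2).
def branch(u, state):
    s1, s2 = state
    return (u ^ s1 ^ s2) + (u ^ s2), (u, s1)

def free_distance():
    # Dijkstra over the trellis states, starting with the diverging branch u = 1.
    w0, start = branch(1, (0, 0))
    heap, best = [(w0, start)], {}
    while heap:
        w, state = heapq.heappop(heap)
        if state == (0, 0):            # re-merged with the all-zero path
            return w
        if best.get(state, float("inf")) <= w:
            continue
        best[state] = w
        for u in (0, 1):
            dw, nxt = branch(u, state)
            heapq.heappush(heap, (w + dw, nxt))

print(free_distance())  # 5 for this encoder
```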
  
 
{{GraueBox|TEXT=   
 
$\text{Example 3:}$&nbsp; We consider the zero sequence&nbsp; $\underline{0}$&nbsp; (marked with white filled circles) as well as two further code sequences&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.03cm}'$&nbsp; (highlighted in yellow and dark, respectively) in our standard trellis, and characterize these sequences by their state sequences:
  
 
:<math>\underline{0} =\left ( S_0 \rightarrow S_0 \rightarrow S_0\rightarrow S_0\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)= \left ( 00, 00, 00, 00, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm},</math>
 
 
:<math>\underline{x}\hspace{0.03cm}' = \left ( S_0 \rightarrow S_1 \rightarrow S_3\rightarrow S_2\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)= \left ( 11, 01, 01, 11, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm}.</math>
 
  
[[File:P ID2638 KC T 3 3 S4b v2.png|center|frame|On the definition of free distance|class=fit]]
  
One can see from these representations:
*The free distance&nbsp; $d_{\rm F} = 5$&nbsp; is equal to the Hamming weight&nbsp; $w_{\rm H}(\underline{x})$. No sequence other than the one highlighted in yellow differs from&nbsp; $\underline{0}$&nbsp; in fewer than five bits. For example,&nbsp; $w_{\rm H}(\underline{x}') = 6$.<br>
  
*In this example, the free distance&nbsp; $d_{\rm F} = 5$&nbsp; also results as the Hamming distance between the sequences&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.03cm}'$, which differ in exactly five bit positions.}}
  
 
The larger the free distance&nbsp; $d_{\rm F}$&nbsp; is, the better is the "asymptotic behavior" of the convolutional encoder.  
*For the exact error probability calculation, however, one needs exact knowledge of the&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Distance_spectrum_of_a_linear_code| "weight enumerator function"]], just as for the linear block codes.
*For convolutional codes, this function will only be given in detail in the chapter&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Barriers|"Distance Characteristics and Error Probability Barriers"]].  
 
 
<br clear=all>
 
[[File:P ID2639 KC T 3 3 S4c.png|right|frame|Optimal convolutional codes of rate&nbsp; $1/2$|class=fit]]
At the outset, just a few remarks:
*The free distance&nbsp; $d_{\rm F}$&nbsp; increases with increasing memory&nbsp; $m$, provided that suitable polynomials are used for the transfer function matrix&nbsp; $\mathbf{G}(D)$.  
*In the table, for rate $1/2$ convolutional codes, the&nbsp; $n = 2$&nbsp; polynomials are given together with the&nbsp; $d_{\rm F}$ value.
* Of greater practical importance is the <i>industry standard code</i>&nbsp; with&nbsp; $m = 6$&nbsp; &#8658; &nbsp; $64$&nbsp; states and the free distance&nbsp; $d_{\rm F} = 10$&nbsp; (in the table this is highlighted in red).
  
  
The following example shows the effects of using unfavorable polynomials.
 
<br clear=all>
 
 
{{GraueBox|TEXT=   
 
$\text{Example 4:}$&nbsp; In&nbsp; $\text{Example 3}$&nbsp; it was shown that our standard convolutional encoder&nbsp; $(n = 2, \ k = 1, \ m = 2)$&nbsp; has the free distance&nbsp; $d_{\rm F} = 5$. It is based on the transfer function matrix&nbsp; $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$, as can be seen from the model shown (without the red cancellation).  
  
[[File:P ID2640 KC T 3 3 S4d v3.png|center|frame|State transition diagram&nbsp; $\rm C$&nbsp; for&nbsp; $n = 2, \ k = 1, \ m = 2$.|class=fit]]
  
Using&nbsp; $\mathbf{G}(D) = (1 + D, \ 1 + D^2)$ &nbsp; &rArr; &nbsp; red change, the state transition diagram differs only slightly at first glance from the&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#Representation_in_the_state_transition_diagram| "state transition diagram $\rm A$"]]. However, the effects are serious:
*The free distance is now no longer&nbsp; $d_{\rm F} = 5$, but only&nbsp; $d_{\rm F} = 3$: &nbsp; the code sequence&nbsp; $\underline{x} = (11, \ 01, \ 00, \ 00, \ \text{...})$&nbsp; with only three ones belongs to the information sequence&nbsp; $\underline{u} = \underline{1} = (1, \ 1, \ 1, \ 1, \ \text{...})$.<br>
  
*That means: &nbsp; with only three transmission errors at positions 1, 2, and 4, the all-one sequence&nbsp; $(\underline{1})$&nbsp; is corrupted into the all-zero sequence&nbsp; $(\underline{0})$, thus producing infinitely many decoding errors in state&nbsp; $S_3$.<br>
  
*A convolutional encoder with these properties is called "catastrophic". The reason for this extremely unfavorable behavior is that here the two polynomials&nbsp; $1 + D$&nbsp; and&nbsp; $1 + D^2$&nbsp; are not coprime.}}<br>
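Whether two generator polynomials are coprime can be tested with a greatest common divisor over&nbsp; ${\rm GF}(2)$. In the following sketch, polynomials are stored as bit masks (bit&nbsp; $k$&nbsp; represents&nbsp; $D^k$; this encoding is our own convention). Note that over&nbsp; ${\rm GF}(2)$&nbsp; one has&nbsp; $1 + D^2 = (1 + D)^2$, so&nbsp; $1 + D$&nbsp; is a common factor in the catastrophic case.

```python
# GCD of binary polynomials (carry-less arithmetic over GF(2)).
def gf2_mod(a, b):
    """Remainder of the polynomial division a mod b over GF(2)."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

g_good = (0b111, 0b101)  # 1 + D + D^2,  1 + D^2  ->  coprime
g_bad  = (0b011, 0b101)  # 1 + D,        1 + D^2  ->  common factor 1 + D

print(gf2_gcd(*g_good))  # 1: non-catastrophic encoder
print(gf2_gcd(*g_bad))   # 0b11, i.e. 1 + D: catastrophic encoder
```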
  
== Terminated convolutional codes ==
 
<br>
 
In the theoretical description of convolutional codes, one always assumes information sequences&nbsp; $\underline{u}$&nbsp; and code sequences&nbsp; $\underline{x}$&nbsp; that are infinite in length by definition.  
 
*In practical applications &ndash; for example&nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]]&nbsp; and&nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS|UMTS]]&nbsp; &ndash; one always uses an information sequence of finite length&nbsp; $L$.  

*For a rate&ndash;$1/n$&nbsp;convolutional code, the code sequence then has at least the length&nbsp; $n \cdot L$.<br>
 
[[File:P ID2641 KC T 3 3 S5 v2.png|center|frame|Terminated convolutional code of rate&nbsp; $R = 128/260$|class=fit]]
 
  
The graphic without the gray background on the right shows the trellis of our standard rate&ndash;$1/2$&nbsp;convolutional code for a binary input sequence&nbsp; $\underline{u}$&nbsp; of finite length&nbsp; $L = 128$. Thus the code sequence&nbsp; $\underline{x}$&nbsp; has length&nbsp; $2 \cdot L = 256$.  
  
However, due to the undefined final state, complete maximum likelihood decoding of the transmitted sequence is not possible. Since it is not known which of the states&nbsp; $S_0, \ \text{...} \ , \ S_3$&nbsp; would occur for&nbsp; $i > L$, the error probability will be (somewhat) larger than in the limiting case&nbsp; $L &#8594; &#8734;$.<br>
  
To prevent this, the convolutional code is terminated as shown in the diagram above:
*One appends&nbsp; $m = 2$&nbsp; zeros to the&nbsp; $L = 128$&nbsp; information bits &nbsp; &#8658; &nbsp; $L' = 130$.
  
*This yields, for example, for the path through the trellis highlighted in color:
  
 
::<math>\underline{x}' = (11\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 00 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 10\hspace{0.05cm},\hspace{0.05cm}00\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 11 \hspace{0.05cm} ) \hspace{0.3cm}
 
 
\Rightarrow \hspace{0.3cm}\underline{u}' = (1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0 \hspace{0.05cm} ) \hspace{0.05cm}.</math>
 
  
*The trellis now always ends (i.e. independent of the data) in the defined final state&nbsp; $S_0$&nbsp; and one thus achieves the best possible error probability according to maximum likelihood.<br>
  
*However, the improvement in error probability comes at the cost of a lower code rate.  
*For&nbsp; $L \gg m$, this loss is only small. In the example considered, termination yields, instead of&nbsp; $R = 0.5$, the code rate
 
:$$R\hspace{0.05cm}' = 0.5 \cdot L/(L + m) \approx 0.492.$$<br>
 
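That the&nbsp; $m$&nbsp; appended zeros flush the shift register regardless of the data, and how small the rate loss is, can be checked directly. The register model&nbsp; $(s_1, s_2)$&nbsp; in this Python sketch is our own convention.

```python
# Successor state of the (k = 1, m = 2) shift register for input bit u.
def next_state(u, state):
    s1, s2 = state
    return (u, s1)

# After m = 2 zero inputs, every start state ends in (0, 0), i.e. S_0:
for start in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    state = start
    for _ in range(2):               # m = 2 tail bits u = 0
        state = next_state(0, state)
    print(start, "->", state)        # always (0, 0)

# Rate of the terminated code for L = 128 information bits:
L, m = 128, 2
print(round(0.5 * L / (L + m), 3))   # 0.492
```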
  
== Punctured convolutional codes ==
 
<br>
 
 
{{BlaueBox|TEXT=   
 
 
$\text{Definition:}$&nbsp;   
 
*We assume a convolutional code of rate&nbsp; $R_0 = 1/n_0$, which we call the&nbsp; <i>mother code</i>.  
*If some bits of its code sequence are deleted according to a given pattern, one speaks of a&nbsp; <b>punctured convolutional code</b>&nbsp; with code rate&nbsp; $R > R_0$.}}<br>
  
The puncturing is done using the puncturing matrix&nbsp; $\mathbf{P}$&nbsp; with the following properties:
*The number of rows is&nbsp; $n_0$; the number of columns equals the puncturing period&nbsp; $p$, which is determined by the desired code rate.<br>
  
*The&nbsp; $n_0 \cdot p$&nbsp; matrix elements&nbsp; $P_{ij}$&nbsp; are binary&nbsp; $(0$&nbsp; or&nbsp; $1)$. For&nbsp; $P_{ij} = 1$&nbsp; the corresponding code bit is transmitted; for&nbsp; $P_{ij} = 0$&nbsp; it is punctured.<br>
  
*The rate of the punctured code is equal to the quotient of the puncturing period&nbsp; $p$&nbsp; and the number&nbsp; $N_1$&nbsp; of ones in the&nbsp; $\mathbf{P}$ matrix.<br>
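The puncturing rule itself can be sketched in a few lines of Python. The matrix&nbsp; $\mathbf{P}$&nbsp; used below, with&nbsp; $p = 3$&nbsp; columns and&nbsp; $N_1 = 4$&nbsp; ones &nbsp; &#8658; &nbsp; $R = 3/4$, is only an illustrative assumption and not taken from a code table.

```python
# Puncturing: P has n0 rows and p columns; P[j][i % p] decides whether
# bit j of code word i is transmitted (1) or deleted (0).
def puncture(code_words, P):
    p = len(P[0])
    out = []
    for i, word in enumerate(code_words):
        for j, bit in enumerate(word):
            if P[j][i % p] == 1:
                out.append(bit)
    return out

P = [[1, 1, 0],                    # hypothetical puncturing matrix, p = 3
     [1, 0, 1]]                    # N1 = 4 ones  ->  rate R = p/N1 = 3/4
x = [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 1)]  # six mother-code words

print(puncture(x, P))              # 8 of the 12 code bits are transmitted
print("rate:", len(P[0]) / sum(map(sum, P)))
```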
  
  
Favorably punctured convolutional codes are usually found only by means of computer-aided search.  
  
Here, a punctured convolutional code is called favorable if it
*has the same memory order&nbsp; $m$&nbsp; as the mother code $($the total influence length is also the same in both cases: &nbsp; $\nu = m + 1)$,<br>
  
*eine möglichst große freie Distanz&nbsp; $d_{\rm F}$&nbsp; besitzt, die natürlich kleiner ist als die des Muttercodes.<br><br>
+
*has as large a free distance as possible&nbsp; $d_{\rm F}$&nbsp; which is of course smaller than that of the mother code..<br><br>
  
 
{{GraueBox|TEXT=   
 
{{GraueBox|TEXT=   
$\text{Beispiel 5:}$&nbsp;  Ausgehend von unserem&nbsp; [[Channel_Coding/Codebeschreibung_mit_Zustands–_und_Trellisdiagramm#Zustandsdefinition_f.C3.BCr_ein_Speicherregister| Rate&ndash;1/2&ndash;Standardcode]]&nbsp; mit den Parametern&nbsp; $n_0 = 2$&nbsp; und&nbsp; $m = 2$&nbsp; erhält man mit der Punktierungsmatrix
+
$\text{Example 5:}$&nbsp;  Starting from our&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#State_definition_for_a_memory_register| "Rate&ndash;1/2&ndash;Standard code"]]&nbsp; with parameters&nbsp; $n_0 = 2$&nbsp; and&nbsp; $m = 2$&nbsp; we obtain with the puncturing matrix
  
 
::<math>{\boldsymbol{\rm P} } =  
 
::<math>{\boldsymbol{\rm P} } =  
Line 252: Line 251:
 
\end{pmatrix}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} p = 3\hspace{0.05cm}, \hspace{0.2cm}N_1 = 4</math>
 
\end{pmatrix}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} p = 3\hspace{0.05cm}, \hspace{0.2cm}N_1 = 4</math>
  
einen punktierten Faltungscode der Rate&nbsp; $R = p/N_1 = 3/4$. Wir betrachten hierzu folgende Konstellation:
+
a punctured convolutional code of rate&nbsp; $R = p/N_1 = 3/4$. We consider the following constellation for this purpose:
*Informationssequenz: $\hspace{2.5cm} \underline{u} = (1, 0, 0, 1, 1, 0, \ \text{...}\hspace{0.03cm})$,<br>
+
*Information sequence: $\hspace{2.5cm} \underline{u} = (1, 0, 0, 1, 1, 0, \ \text{...}\hspace{0.03cm})$,<br>
  
*Codesequenz ohne Punktierung: $\hspace{0.7cm} \underline{x} = (11, 1 \color{grey}{0}, \color{gray}{1}1, 11, 0\color{gray}{1}, \color{gray}{0}1, \ \text{...}\hspace{0.03cm})$,<br>
+
*Code sequence without puncturing: $\hspace{0.7cm} \underline{x} = (11, 1 \color{grey}{0}, \color{gray}{1}1, 11, 0\color{gray}{1}, \color{gray}{0}1, \ \text{...}\hspace{0.03cm})$,<br>
  
*Codesequenz mit Punktierung: $\hspace{0.90cm} \underline{x}\hspace{0.05cm}' = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,<br>
+
*Code sequence with puncturing: $\hspace{0.90cm} \underline{x}\hspace{0.05cm}' = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,<br>
  
*Empfangssequenz ohne Fehler: $\hspace{0.88cm} \underline{y} = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,<br>
+
*Receive sequence without errors: $\hspace{0.88cm} \underline{y} = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,<br>
  
*Modifizierte Empfangssequenz: $\hspace{0.8cm} \underline{y}\hspace{0.05cm}' = (11, 1\rm E, E1, 11, 0E, E1, \ \text{...}\hspace{0.03cm})$.<br>
+
*Modified receive sequence: $\hspace{0.8cm} \underline{y}\hspace{0.05cm}' = (11, 1\rm E, E1, 11, 0E, E1, \ \text{...}\hspace{0.03cm})$.<br>
  
  
Jedes punktierte Bit in der Empfangssequenz&nbsp; $\underline{y}$&nbsp; (markiert durch einen Unterstrich) wird also durch ein&nbsp; <i>Erasure</i>&nbsp; $\rm E$&nbsp; ersetzt &ndash; siehe&nbsp; [[Channel_Coding/Signal_classification#Binary_Erasure_Channel_.E2.80.93_BEC| Binary Erasure Channel]]. Ein solches durch die Punktierung entstandene&nbsp; <i>Erasure</i>&nbsp; wird vom Decoder genauso behandelt wie eine Auslöschung durch den Kanal.
+
So each punctured bit in the receive sequence&nbsp; $\underline{y}$&nbsp; (marked by an underscore) is replaced by an&nbsp; <i>Erasure</i>&nbsp; $\rm E$&nbsp; &ndash; see&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC| "Binary Erasure Channel"]]. Such an erasure created by the puncturing&nbsp; <i>Erasure</i>&nbsp; is treated by the decoder in the same way as an erasure by the channel.
  
Natürlich erhöht sich durch die Punktierung die Fehlerwahrscheinlichkeit. Dies kann man bereits daran erkennen, dass die freie Distanz nach der Punktierung auf&nbsp; $d_{\rm F} = 3$&nbsp; sinkt. Demgegenüber besitzt der Muttercode die freie Distanz&nbsp; $d_{\rm F} = 5$.}}<br>
+
Of course, the puncturing increases the error probability. This can already be seen from the fact that the free distance decreases to&nbsp; $d_{\rm F} = 3$&nbsp; after the puncturing. In contrast, the mother code has the free distance&nbsp; $d_{\rm F} = 5$.}}<br>
  
Eine Anwendung findet die Punktierung zum Beispiel bei den so genannten RCPC&ndash;Codes (<i>Rate Compatible Punctered Convolutional Codes</i>). Näheres hierzu in&nbsp; [[Aufgaben:Aufgabe_3.8:_Rate_Compatible_Punctured_Convolutional_Codes| Aufgabe 3.8]].<br>
+
Puncturing is used, for example, in the so-called RCPC codes (<i>Rate Compatible Punctured Convolutional Codes</i>). More about this in&nbsp; [[Aufgaben:Aufgabe_3.8:_Rate_Compatible_Punctured_Convolutional_Codes| Aufgabe 3.8]].<br>
  
  
== Aufgaben zum Kapitel==
+
== Exercises for the chapter==
 
<br>
 
<br>
 
[[Aufgaben:Aufgabe_3.6:_Zustandsübergangsdiagramm|Aufgabe 3.6: Zustandsübergangsdiagramm]]
 
[[Aufgaben:Aufgabe_3.6:_Zustandsübergangsdiagramm|Aufgabe 3.6: Zustandsübergangsdiagramm]]

Revision as of 20:46, 28 September 2022

State definition for a memory register


A convolutional encoder can also be understood as an automaton with a finite number of states. With  $m$  memory elements   ⇒   memory  $m$,  the number of states is  $2^m$.

Convolutional encoder with  $k = 1, \ n = 2, \ m = 2$

In this chapter we mostly start from the drawn convolutional encoder, which is characterized by the following parameters:

  • $k = 1, \ n = 2$   ⇒   code rate $R = 1/2$,
  • Memory  $m = 2$   ⇒   influence length $\nu = 3$,
  • Transfer function matrix in octal form  $(7, 5)$:
$$\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2).$$

The code sequence at time  $i$   ⇒   $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$  depends not only on the information bit  $u_i$  but also on the content  $(u_{i-1}, \ u_{i-2})$  of the memory.

  • There are  $2^m = 4$  possibilities for this, namely the states  $S_0, \ S_1, \ S_2$  and  $S_3$.
  • Let the register state  $S_{\mu}$  be defined by the index:
\[\mu = u_{i-1} + 2 \cdot u_{i-2}\hspace{0.05cm}, \hspace{0.5cm}{\rm in \hspace{0.15cm} general\hspace{-0.1cm}:}\hspace{0.15cm} \mu = \sum_{l = 1}^{m} \hspace{0.1cm}2\hspace{0.03cm}^{l-1} \cdot u_{i-l} \hspace{0.05cm}.\]
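The index formula can be sketched in a few lines of Python (an illustrative sketch; the function name is my own, not from the text):

```python
def state_index(memory):
    """State index mu from the register contents (u_{i-1}, ..., u_{i-m}).

    Implements mu = sum_{l=1}^{m} 2^(l-1) * u_{i-l}.
    """
    return sum(2 ** (l - 1) * u for l, u in enumerate(memory, start=1))

# m = 2 examples: memory (u_{i-1}, u_{i-2}) = (0, 1) is state S_2
print(state_index((0, 1)))  # 2
print(state_index((1, 1)))  # 3
```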

To avoid confusion, we distinguish in the following by upper or lower case letters:

  • the possible states  $S_{\mu}$  with the indices  $0 ≤ \mu ≤ 2^m \,-1$,
  • the current states  $s_i$  at the times  $i = 1, \ 2, \ 3, \ \text{...}$


$\text{Example 1:}$  The graph shows for the convolutional encoder sketched above

  • the information sequence  $\underline{u} = (u_1,u_2, \text{...} \ ) $   ⇒   information bits  $u_i$,
  • the current states at the times  $i$   ⇒   $s_i ∈ \{S_0, \ S_1, \ S_2, \ S_3\}$,
  • the 2 bit sequences  $\underline{x}_i = (x_i^{(1)}, \ x_i^{(2)})$   ⇒   encoding the information bit $u_i$.


To illustrate the states  $S_{\mu}$

For example, you can see from this illustration (the colors are to facilitate the transition to later graphics):

  • At time  $i = 5$,  $u_{i-1} = u_4 = 0$  and  $u_{i-2} = u_3 = 1$  hold. That is, the automaton is in state  $s_5 = S_2$. With the information bit  $u_i = u_5 = 0$,  the code sequence  $\underline{x}_5 = (11)$  is obtained.
  • The state for time  $i = 6$  results from  $u_{i-1} = u_5 = 0$  and  $u_{i-2} = u_4 = 0$  to  $s_6 = S_0$. Because of  $u_6 = 0$,  now  $\underline{x}_6 = (00)$  is output and the new state  $s_7$  is again  $S_0$.
  • At time  $i = 12$,  the code sequence  $\underline{x}_{12} = (11)$  is output because of  $u_{12} = 0$,  just as at time  $i = 5$,  and one passes from state  $s_{12} = S_2$  to state  $s_{13} = S_0$.
  • In contrast, at time  $i = 9$,  the code sequence  $\underline{x}_{9} = (00)$  is output, and state  $s_9 = S_2$  is followed by  $s_{10} = S_1$. The same is true for  $i = 15$. In both cases, the current information bit is  $u_i = 1$.
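The states and code words of this example can be reproduced with a short Python sketch of the encoder: from  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$  the parity equations  $x_i^{(1)} = u_i ⊕ u_{i-1} ⊕ u_{i-2}$  and  $x_i^{(2)} = u_i ⊕ u_{i-2}$  follow directly (function and variable names are illustrative):

```python
def encode_75(u_seq):
    """Shift-register encoder for G(D) = (1 + D + D^2, 1 + D^2)."""
    u1 = u2 = 0                              # memory u_{i-1}, u_{i-2}, preset to zero
    states, code = [], []
    for u in u_seq:
        states.append(u1 + 2 * u2)           # state index mu before this step
        code.append((u ^ u1 ^ u2, u ^ u2))   # code bits x^(1), x^(2)
        u1, u2 = u, u1                       # shift the register
    return states, code

# information sequence of Example 1
states, code = encode_75([1, 1, 1, 0, 0, 0, 1, 0, 1])
print(states[4], code[4])  # 2 (1, 1)  ->  s_5 = S_2, x_5 = (11)
print(states[8], code[8])  # 2 (0, 0)  ->  s_9 = S_2, x_9 = (00)
```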


$\text{Conclusion:}$  From this example it can be seen that the code sequence  $\underline{x}_i$  at time  $i$  depends solely on

  • the current state  $s_i = S_{\mu} \ (0 ≤ \mu ≤ 2^m -1)$,  and
  • the current information bit  $u_i$.

Similarly, the successor state  $s_{i+1}$  is determined by  $s_i$  and  $u_i$  alone.

These properties are exploited in the so-called  state transition diagram,  considered in the next section.

Representation in the state transition diagram


The graph shows the  state transition diagram for our "standard encoder".

  • This provides all information about the  $(n = 2, \ k = 1, \ m = 2)$–convolutional encoder in compact form.
  • The color scheme is aligned with the  "sequential representation"  on the previous page.
  • For the sake of completeness, the derivation table is also given again.


State transition diagram  $\rm A$  for  $n = 2, \ k = 1, \ m = 2$.

The  $2^{m+k}$  transitions are labeled  "$u_i \ | \ \underline{x}_i$".  For example, one can read off:

  • The information bit  $u_i = 0$  (indicated by a red label) takes one from state  $s_i = S_1$  to state  $s_{i+1} = S_2$;  the two code bits are  $x_i^{(1)} = 1, \ x_i^{(2)} = 0$.
  • With the information bit  $u_i = 1$  (blue label) in state  $s_i = S_1$,  on the other hand, the code bits are  $x_i^{(1)} = 0, \ x_i^{(2)} = 1$,  and one arrives at the new state  $s_{i+1} = S_3$.
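The complete set of  $2^{m+k} = 8$  labeled transitions can also be generated programmatically, using the same parity equations as before (a sketch; the names are illustrative, not from the text):

```python
def transitions(m=2):
    """Labeled transitions (state, u) -> (successor, code bits) of the (7,5) encoder."""
    table = {}
    for mu in range(2 ** m):
        u1, u2 = mu & 1, (mu >> 1) & 1        # memory (u_{i-1}, u_{i-2})
        for u in (0, 1):
            x = (u ^ u1 ^ u2, u ^ u2)         # code bits on this branch
            table[(mu, u)] = (u + 2 * u1, x)  # new memory is (u, u_{i-1})
    return table

t = transitions()
print(t[(1, 0)])  # (2, (1, 0))  ->  S_1 with u = 0 leads to S_2, code bits 10
print(t[(1, 1)])  # (3, (0, 1))  ->  S_1 with u = 1 leads to S_3, code bits 01
```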

Let us now consider another encoder, likewise with the parameters  $n = 2, \ k = 1, \ m = 2$. It is the  "equivalent systematic representation"  of the upper encoder, also referred to as a  Recursive Systematic Convolutional Encoder.

State transition diagram  $\rm B$  for  $n = 2, \ k = 1, \ m = 2$.

Compared to the upper state transition diagram, one can see the following differences:

  • Since the earlier information bits  $u_{i-1}$  and  $u_{i-2}$  are not stored here, the states  $s_i = S_{\mu}$  now refer to the processed quantities  $w_{i-1}$  and  $w_{i-2}$,  where  $w_i = u_i + w_{i-1} + w_{i-2}$  holds.

$\text{Conclusion:}$  Image comparison shows that a similar state transition diagram is obtained for both encoders:

  • The structure of the state transition diagram is determined solely by the parameters  $m$  and  $k$ .
  • The number of states is  $2^{m \hspace{0.05cm}\cdot \hspace{0.05cm}k}$.
  • From each state,  $2^k$  arrows depart.
  • From each state  $s_i ∈ \{S_0, \ S_1, \ S_2, \ S_3\}$,  one also reaches the same successor states  $s_{i+1}$.
  • A difference exists solely with respect to the code sequences output, where in both cases  $\underline{x}_i ∈ \{00, \ 01, \ 10, \ 11\}$  applies.



Representation in the trellis diagram


One arrives from the state transition diagram at the so-called  trellis diagram  (or short:  trellis)  by additionally taking into account a time component   ⇒   running variable  $i$. The following graph contrasts the two diagrams for our "standard encoder"  $(n = 2, \ k = 1, \ m = 2)$.

State transition diagram vs. trellis diagram  $(n = 2, \ k = 1, \ m = 2)$

The bottom illustration resembles a garden trellis – given some imagination. The diagram also reveals:

  • Since all memory registers are preset to zeros, the trellis always starts from state  $s_1 = S_0$. At the next time  $(i = 2)$,  only the two (start) states  $S_0$  and  $S_1$  are possible.
  • From time  $i = m + 1 = 3$  on, the trellis has exactly the same appearance for each transition from  $s_i$  to  $s_{i+1}$. Each state  $S_{\mu}$  is connected to a successor state by a red arrow  $(u_i = 0)$  and a blue arrow  $(u_i = 1)$.
  • In contrast to a  code tree,  which grows exponentially with each time step  $i$  – see for example section  "Error sizes and total error sizes"  in the book "Digital Signal Transmission" – here the number of nodes (i.e. possible states) is limited to  $2^m$.
  • This pleasing property of any trellis diagram is also exploited by the  "Viterbi algorithm"  for efficient maximum likelihood decoding of convolutional codes.

The following example shows that there is a  $1\text{:}1$  mapping between the sequence of code words  $\underline{x}_i$  and the paths through the trellis diagram.

  • Also, the information sequence  $\underline{u}$  can be read from the selected trellis path based on the colors of the individual branches.
  • A red branch stands for the information bit  $u_i = 0$, a blue one for  $u_i = 1$.


$\text{Example 2:}$  In  $\text{Example 1}$,  the code sequence  $\underline{x}$  was derived for our rate $1/2$ standard encoder with memory  $m = 2$  and the information sequence  $\underline{u} = (1, 1, 1, 0, 0, 0, 1, 0, 1, \ \text{...})$. It is given here again for  $i ≤ 9$.

Trellis diagram of a sample sequence

The trellis diagram is drawn below. You can see:

  • The selected path through the trellis is obtained by lining up red and blue arrows representing the possible information bits  $u_i \in \{ 0, \ 1\}$ . This statement is true for any rate $1/n$ convolutional code. For a code with  $k > 1$  there would be  $2^k$  different colored arrows.
  • For a rate–$1/n$–convolutional code, the arrows are labeled with the code words  $\underline{x}_i = (x_i^{(1)}, \ \text{...} \ , \ x_i^{(n)})$  which are derived from the information bit  $u_i$  and the current register states  $s_i$  . For  $n = 2$  there are only four possible code words, namely  $00, \ 01, \ 10$  and  $11$.
  • In the vertical direction, one recognizes from the trellis the register states  $S_{\mu}$. Given a rate $k/n$ convolutional code with memory order  $m$  there are  $2^{k \hspace{0.05cm}\cdot \hspace{0.05cm}m}$  different states. In the present example  $(k = 1, \ m = 2)$  these are the states  $S_0, \ S_1, \ S_2$ and $S_3$.


Definition of the free distance


An important parameter of linear block codes is the  "minimum Hamming distance"  between any two codewords  $\underline{x}$  and  $\underline{x}\hspace{0.05cm}'$:

\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.03cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')\hspace{0.05cm}.\]
  • Due to linearity, the zero word  $\underline{0}$ also belongs to each block code. Thus  $d_{\rm min}$  is identical to the minimum  "Hamming weight"  $w_{\rm H}(\underline{x})$  of a codeword  $\underline{x} ≠ \underline{0}$.
  • For convolutional codes, however, the description of the distance properties proves to be much more complex, since a convolutional code consists of infinitely long and infinitely many code sequences. Therefore, we define here, instead of the minimum Hamming distance:


$\text{Definition:}$ 

  • The  free distance  $d_{\rm F}$  of a convolutional code is equal to the number of bits in which any two code sequences  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$  differ (at least).
  • In other words:  the free distance is equal to the minimum  Hamming distance  between any two paths  through the trellis.


Since convolutional codes are also linear, we can also use the infinite zero sequence as a reference sequence:   $\underline{x}\hspace{0.03cm}' = \underline{0}$. Thus, the free distance  $d_{\rm F}$  is equal to the minimum Hamming weight (number of ones) of a code sequence  $\underline{x} ≠ \underline{0}$.
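Accordingly, the free distance can be determined by a shortest-path search over the trellis: one diverges from  $S_0$  with  $u_1 = 1$  and looks for the minimum accumulated Hamming weight until the path first returns to  $S_0$. A Python sketch for the standard code (the search routine is my own illustration, not from the text):

```python
import heapq

def free_distance():
    """Minimum Hamming weight of a nonzero path S_0 -> ... -> S_0 in the (7,5) trellis."""
    def step(mu, u):
        u1, u2 = mu & 1, (mu >> 1) & 1
        x1, x2 = u ^ u1 ^ u2, u ^ u2          # code bits of this branch
        return u + 2 * u1, x1 + x2            # successor state, branch weight
    start, w0 = step(0, 1)                    # diverging branch u = 1 out of S_0
    heap, best = [(w0, start)], {}
    while heap:
        w, mu = heapq.heappop(heap)           # Dijkstra: weights are non-negative
        if mu == 0:
            return w                          # first return to S_0 has minimal weight
        if best.get(mu, float("inf")) <= w:
            continue
        best[mu] = w
        for u in (0, 1):
            nxt, dw = step(mu, u)
            heapq.heappush(heap, (w + dw, nxt))

print(free_distance())  # 5
```

The minimal path found is  $S_0 → S_1 → S_2 → S_0$  with weight  $2 + 1 + 2 = 5$,  matching the code sequence  $\underline{x} = (11, 10, 11, 00, \text{...})$  of Example 3.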

$\text{Example 3:}$  We consider the zero sequence  $\underline{0}$  (marked with white filled circles) as well as two other code sequences  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$  (with yellow and dark background, respectively) in our standard trellis and characterize these sequences by their state sequences:

\[\underline{0} =\left ( S_0 \rightarrow S_0 \rightarrow S_0\rightarrow S_0\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)= \left ( 00, 00, 00, 00, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm},\] \[\underline{x} =\left ( S_0 \rightarrow S_1 \rightarrow S_2\rightarrow S_0\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)= \left ( 11, 10, 11, 00, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm},\] \[\underline{x}\hspace{0.03cm}' = \left ( S_0 \rightarrow S_1 \rightarrow S_3\rightarrow S_2\rightarrow S_0\rightarrow \hspace{0.05cm}\text{...} \hspace{0.05cm}\right)= \left ( 11, 01, 01, 11, 00,\hspace{0.05cm} \text{...} \hspace{0.05cm}\right) \hspace{0.05cm}.\]

On the definition of free distance

One can see from these plots:

  • The free distance  $d_{\rm F} = 5$  is equal to the Hamming weight  $w_{\rm H}(\underline{x})$  of the sequence highlighted in yellow. No other sequence differs from  $\underline{0}$  in fewer than five bits. For example,  $w_{\rm H}(\underline{x}') = 6$.
  • In this example, the free distance  $d_{\rm F} = 5$  also results as the Hamming distance between the sequences  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$, which differ in exactly five bit positions.

The larger the free distance  $d_{\rm F}$,  the better the "asymptotic behavior" of the convolutional encoder.


Optimal convolutional codes of rate  $1/2$

At the outset, just a few remarks:

  • The free distance  $d_{\rm F}$  increases with increasing memory  $m$  provided that one uses appropriate polynomials for the transfer function matrix  $\mathbf{G}(D)$ .
  • In the table, for rate $1/2$ convolutional codes, the  $n = 2$  polynomials are given together with the  $d_{\rm F}$ value.
  • Of greater practical importance is the industry standard code  with  $m = 6$  ⇒   $64$  states and the free distance  $d_{\rm F} = 10$  (in the table this is highlighted in red).


The following example shows the effects of using unfavorable polynomials.

$\text{Example 4:}$  In  $\text{Example 3}$,  it was shown that our standard convolutional encoder  $(n = 2, \ k = 1, \ m = 2)$  has the free distance  $d_{\rm F} = 5$. This is based on the transfer function matrix  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,  as in the model shown (without the red change).

State transition diagram  $\rm C$  for  $n = 2, \ k = 1, \ m = 2$.

Using  $\mathbf{G}(D) = (1 + D, \ 1 + D^2)$   ⇒   red change, the state transition diagram differs only slightly from  "state transition diagram $\rm A$"  at first glance. However, the effects are serious:

  • The free distance is now no longer  $d_{\rm F} = 5$,  but only  $d_{\rm F} = 3$:  the code sequence  $\underline{x} = (11, \ 01, \ 00, \ 00, \ \text{...})$  with only three ones belongs to the information sequence  $\underline{u} = \underline{1} = (1, \ 1, \ 1, \ 1, \ \text{...})$.
  • That is:  only three transmission errors at positions 1, 2 and 4 corrupt the one sequence  $(\underline{1})$  into the zero sequence  $(\underline{0})$  and thus produce infinitely many decoding errors in state  $S_3$.
  • A convolutional encoder with these properties is called "catastrophic". The reason for this extremely unfavorable behavior is that here the two polynomials  $1 + D$  and  $1 + D^2$  are not coprime  $($they share the factor  $1 + D)$.
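Whether two generator polynomials are coprime can be checked with a Euclidean algorithm over  $\rm GF(2)$,  polynomials being encoded as bitmasks (bit  $l$  = coefficient of  $D^l$); this check is my own sketch, not from the text:

```python
def gf2_gcd(a, b):
    """GCD of two GF(2) polynomials given as bitmasks (bit l = coefficient of D^l)."""
    while b:
        # reduce a modulo b by repeatedly cancelling a's leading term
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

# catastrophic pair: 1 + D and 1 + D^2 share the factor 1 + D
print(gf2_gcd(0b011, 0b101))  # 3, i.e. the polynomial 1 + D
# standard (7,5) pair: 1 + D + D^2 and 1 + D^2 are coprime
print(gf2_gcd(0b111, 0b101))  # 1
```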


Terminated convolutional codes


In the theoretical description of convolutional codes, one always assumes information sequences  $\underline{u}$  and code sequences  $\underline{x}$  that are infinite in length by definition.

  • In practical applications – for example  GSM  and  UMTS  – one always uses an information sequence of finite length  $L$  instead.
  • For a rate $1/n$ convolutional code, the code sequence then has at least length  $n \cdot L$.


Terminated convolutional code of rate  $R = 128/260$

The graph, without the gray highlighted part on the right, shows the trellis of our standard rate $1/2$ convolutional code for a binary input sequence  $\underline{u}$  of finite length  $L = 128$. Thus the code sequence  $\underline{x}$  has length  $2 \cdot L = 256$.

However, due to the undefined final state, a complete maximum likelihood decoding of the transmitted sequence is not possible. Since it is not known which of the states  $S_0, \ \text{...} \ , \ S_3$  would occur for  $i > L$ , the error probability will be (somewhat) larger than in the limiting case  $L → ∞$.

To prevent this, the convolutional code is terminated as shown in the diagram above:

  • One appends  $m = 2$  zeros to the  $L = 128$  information bits   ⇒   $L' = 130$.
  • This results, for example, for the color highlighted path through the trellis:
\[\underline{x}' = (11\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 00 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 10\hspace{0.05cm},\hspace{0.05cm}00\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 01\hspace{0.05cm},\hspace{0.05cm} 11 \hspace{0.05cm} ) \hspace{0.3cm} \Rightarrow \hspace{0.3cm}\underline{u}' = (1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 1 \hspace{0.05cm},\hspace{0.05cm} \text{...}\hspace{0.1cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm}1\hspace{0.05cm},\hspace{0.05cm} 1\hspace{0.05cm},\hspace{0.05cm} 0\hspace{0.05cm},\hspace{0.05cm} 0 \hspace{0.05cm} ) \hspace{0.05cm}.\]
  • The trellis now always ends (i.e. independent of the data) in the defined final state  $S_0$  and one thus achieves the best possible error probability according to maximum likelihood.
  • However, the improvement in error probability comes at the cost of a lower code rate.
  • For $L \gg m$  this loss is only small. In the considered example, instead of  $R = 0.5$  with termination the code rate results in
$$R\hspace{0.05cm}' = 0.5 \cdot L/(L + m) \approx 0.492.$$
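The rate loss due to termination can be sketched as follows; the check that appending  $m$  zeros always drives the register back to  $S_0$  uses the same shift-register model as before (names are illustrative assumptions):

```python
def final_state(u, m=2):
    """State index after encoding u plus m termination zeros; always 0 for m = 2."""
    u1 = u2 = 0
    for bit in list(u) + [0] * m:
        u1, u2 = bit, u1                 # only the register contents matter here
    return u1 + 2 * u2

L, m = 128, 2
R_term = 0.5 * L / (L + m)               # terminated rate, here 128/260
print(final_state([1, 0, 1, 1]), round(R_term, 4))  # 0 0.4923
```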

Punctured convolutional codes


$\text{Definition:}$ 

  • We assume a convolutional code of rate  $R_0 = 1/n_0$,  which we call the  mother code.
  • If one deletes some bits from its code sequence according to a given pattern, the result is called a  punctured convolutional code  with code rate  $R > R_0$.


The puncturing is done using the puncturing matrix  $\mathbf{P}$  with the following properties:

  • The number of rows is  $n_0$;  the number of columns equals the puncturing period  $p$,  which is determined by the desired code rate.
  • The  $n_0 \cdot p$  matrix elements  $P_{ij}$  are binary  $(0$  or  $1)$. For  $P_{ij} = 1$,  the corresponding code bit is transmitted; for  $P_{ij} = 0$,  it is punctured.
  • The rate of the punctured code is equal to the quotient of the puncturing period  $p$  and the number  $N_1$  of ones in the  $\mathbf{P}$ matrix.


One can usually find favorably punctured convolutional codes only by means of computer-aided search.

Here, a punctured convolutional code is said to be favorable if it

  • has the same memory order  $m$  as the mother code $($the total influence length is also the same in both cases:   $\nu = m + 1)$,
  • has as large a free distance  $d_{\rm F}$  as possible, which is naturally smaller than that of the mother code.

$\text{Example 5:}$  Starting from our  "rate $1/2$ standard code"  with parameters  $n_0 = 2$  and  $m = 2$,  we obtain with the puncturing matrix

\[{\boldsymbol{\rm P} } = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} p = 3\hspace{0.05cm}, \hspace{0.2cm}N_1 = 4\]

a punctured convolutional code of rate  $R = p/N_1 = 3/4$. We consider the following constellation for this purpose:

  • Information sequence: $\hspace{2.5cm} \underline{u} = (1, 0, 0, 1, 1, 0, \ \text{...}\hspace{0.03cm})$,
  • Code sequence without puncturing: $\hspace{0.7cm} \underline{x} = (11, 1 \color{grey}{0}, \color{gray}{1}1, 11, 0\color{gray}{1}, \color{gray}{0}1, \ \text{...}\hspace{0.03cm})$,
  • Code sequence with puncturing: $\hspace{0.90cm} \underline{x}\hspace{0.05cm}' = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,
  • Receive sequence without errors: $\hspace{0.88cm} \underline{y} = (11, 1\_, \_1, 11, 0\_, \_1, \ \text{...}\hspace{0.03cm})$,
  • Modified receive sequence: $\hspace{0.8cm} \underline{y}\hspace{0.05cm}' = (11, 1\rm E, E1, 11, 0E, E1, \ \text{...}\hspace{0.03cm})$.


So each punctured bit in the receive sequence  $\underline{y}$  (marked by an underscore) is replaced by an  erasure  $\rm E$  – see  "Binary Erasure Channel". Such an erasure created by the puncturing is treated by the decoder in the same way as an erasure caused by the channel.
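The puncturing of Example 5 can be sketched as follows, with  "_"  marking a punctured bit (a minimal illustration; the helper function is not from the text):

```python
# puncturing matrix P of Example 5: n0 = 2 rows, p = 3 columns
P = [[1, 1, 0],
     [1, 0, 1]]

def puncture(code_words, P):
    """Replace bits with P_ij = 0 by '_' (they are not transmitted)."""
    p = len(P[0])
    out = []
    for i, (x1, x2) in enumerate(code_words):
        col = i % p                        # column of P used in this step
        out.append((str(x1) if P[0][col] else "_",
                    str(x2) if P[1][col] else "_"))
    return out

x = [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 1)]   # mother-code output
xp = puncture(x, P)
print(xp[:3])  # [('1', '1'), ('1', '_'), ('_', '1')]
```

At the receiver, each  "_"  then becomes an erasure  $\rm E$;  the rate follows as  $p/N_1 = 3/4$.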

Of course, the puncturing increases the error probability. This can already be seen from the fact that the free distance decreases to  $d_{\rm F} = 3$  after the puncturing. In contrast, the mother code has the free distance  $d_{\rm F} = 5$.


Puncturing is used, for example, in the so-called RCPC codes (Rate Compatible Punctured Convolutional Codes). More about this in  Aufgabe 3.8.


Exercises for the chapter


Aufgabe 3.6: Zustandsübergangsdiagramm

Aufgabe 3.6Z: Übergangsdiagramm bei 3 Zuständen

Aufgabe 3.7: Vergleich zweier Faltungscodierer

Aufgabe 3.7Z: Welcher Code ist katastrophal?

Aufgabe 3.8: Rate Compatible Punctured Convolutional Codes