Channel Coding/Distance Characteristics and Error Probability Bounds

{{Header
|Untermenü=Convolutional Codes and Their Decoding
|Vorherige Seite=Decoding of Convolutional Codes
|Nächste Seite=Soft-in Soft-Out Decoder
}}
  
== Free distance vs. minimum distance ==
<br>
An important parameter regarding the error probability of linear block codes is the&nbsp; "minimum distance"&nbsp; between two code words &nbsp; $\underline{x}$ &nbsp; and &nbsp; $\underline{x}\hspace{0.05cm}'$:
  
 
::<math>d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}')= \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x})
\hspace{0.05cm}.</math>
  
*The second part of the equation arises from the fact that every linear code also includes the zero word &nbsp; &rArr; &nbsp; $(\underline{0})$.  
  
*It is therefore convenient to set&nbsp; $\underline{x}\hspace{0.05cm}' = \underline{0}$,&nbsp; so that the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding| $\text{Hamming  distance}$]] &nbsp; $d_{\rm H}(\underline{x}, \ \underline{0})$ &nbsp; gives the same result as the Hamming weight &nbsp; $w_{\rm H}(\underline{x})$.<br>
{{GraueBox|TEXT=  
$\text{Example 1:}$&nbsp; The table shows the&nbsp; $16$&nbsp; code words of an exemplary&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes|$\text{HC (7, 4, 3)}$]]&nbsp; $($see &nbsp;$\text{Examples 6 and 7})$.<br>
[[File:P ID2684 KC T 3 5 S1 neu.png|right|frame| Code words of the considered&nbsp; $(7, 4, 3)$ Hamming code|class=fit]]

You can see:
*All code words except the zero word&nbsp; $(\underline{0})$&nbsp; contain at least three ones &nbsp; &#8658; &nbsp; $d_{\rm min} = 3$.

*There are
:#seven code words with three&nbsp; "ones"&nbsp; $($highlighted in yellow$)$,
:#seven with four&nbsp; "ones"&nbsp; $($highlighted in green$)$,&nbsp; and
:#one each with no&nbsp; "ones" and seven&nbsp; "ones".}}
 
<br clear=all>
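The minimum distance and the weight distribution stated in Example 1 can be checked numerically. The following is a minimal Python sketch; the systematic generator matrix used here is an assumption, one of several equivalent choices for a $(7, 4, 3)$ Hamming code:

```python
import itertools

# One valid systematic generator matrix G = [I_4 | P] of a (7, 4, 3)
# Hamming code (an assumption -- several equivalent choices exist).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

def encode(u):
    """Code word x = u * G over GF(2)."""
    return [sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G)]

# Hamming weights of all 2^4 = 16 code words
weights = [sum(encode(u)) for u in itertools.product((0, 1), repeat=4)]

# Minimum distance = smallest Hamming weight of a nonzero code word
d_min = min(w for w in weights if w > 0)
print(d_min)                                  # 3
print([weights.count(w) for w in (3, 4, 7)])  # [7, 7, 1]
```

The counts reproduce the yellow/green highlighting of the table: seven code words of weight three, seven of weight four, and one of weight seven.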
 
  
In terms of its formula,&nbsp; the&nbsp; &raquo;<b>free distance</b>&laquo;&nbsp; $d_{\rm F}$&nbsp; of a convolutional code&nbsp; $(\mathcal{CC})$&nbsp; does not differ from the minimum distance of a linear block code:
  
 
::<math>d_{\rm F}(\mathcal{CC}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}')= \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x})
\hspace{0.05cm}.</math>
  
[[File:P ID2685 KC T 3 5 S1c v1.png|right|frame| Some paths with&nbsp; $w(\underline{x}) = d_{\rm F} = 5$|class=fit]]
In the literature,&nbsp; "$d_{\infty}$"&nbsp; is sometimes used instead of&nbsp; "$d_{\rm F}$".

The graph shows three of the infinitely many paths with the minimum Hamming weight&nbsp; $w_{\rm H, \ min}(\underline{x}) = d_{\rm F} = 5$.<br>

*The major difference from the minimum distance&nbsp; $d_{\rm min}$&nbsp; of other codes is
:#that for convolutional codes,&nbsp; one does not consider information words and code words,
:#but sequences with the property&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Requirements_and_definitions| &raquo;$\text{semi&ndash;infinite}$&laquo;]].
  
*Each encoded sequence&nbsp; $\underline{x}$&nbsp; describes a path through the trellis.
*The&nbsp; "free distance"&nbsp; is the smallest possible Hamming weight of such a path&nbsp; $($except for the zero path$)$.<br>
  
  
 
  
== Path weighting enumerator function ==
<br>
For any linear block code,&nbsp; a weight enumerator function can be given in a simple way because of the finite number of code words&nbsp; $\underline{x}$.

For the&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Free_distance_vs._minimum_distance|$\text{Example 1}$]]&nbsp; in the previous section this is:
  
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.</math>
 
[[File:P ID2686 KC T 3 5 S2a v1.png|right|frame|Some paths and their path weightings|class=fit]]
In the case of a&nbsp; $($non-terminated$)$&nbsp; convolutional code,&nbsp; no such weight function can be given,&nbsp; since there are infinitely many,&nbsp; infinitely long encoded sequences&nbsp; $\underline{x}$&nbsp; and thus also infinitely many trellis paths.
  
To get a grip on this problem,&nbsp; we now assume the following:
*As a reference for the trellis diagram,&nbsp; we always choose the path of the encoded sequence&nbsp; $\underline{x} = \underline{0}$&nbsp; and call this the&nbsp; "zero path"&nbsp; $\varphi_0$.<br>
  
*We now consider only paths &nbsp; $\varphi_j &#8712; {\it \Phi}$&nbsp; that all deviate from the zero path at a given time&nbsp; $t$&nbsp; and return to it at some point later.<br>
  
*Although only a fraction of all paths belongs to the set&nbsp; ${\it \Phi}$,&nbsp; the set&nbsp; ${\it \Phi} = \{\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} \}$&nbsp; still contains an unbounded number of paths.&nbsp; The zero path&nbsp; $\varphi_0$&nbsp; is not one of them.<br>
  
 
  
In the above trellis,&nbsp; some paths&nbsp; $\varphi_j \in {\it \Phi}$&nbsp; are drawn:
*The yellow path&nbsp; $\varphi_1$&nbsp; belongs to the sequence&nbsp; $\underline{x}_1 = (11, 10, 11)$&nbsp; with Hamming weight&nbsp; $w_{\rm H}(\underline{x}_1) = 5$.&nbsp; Thus,&nbsp; the path weighting is&nbsp; $w(\varphi_1) = 5$.&nbsp; Due to the definition of the branching time&nbsp; $t$,&nbsp; only this single path&nbsp; $\varphi_1$&nbsp; has the free distance&nbsp; $d_{\rm F} = 5$&nbsp; from the zero path &nbsp; &#8658; &nbsp; $A_5 = 1$.<br>
  
*For the two green paths with the corresponding sequences&nbsp; $\underline{x}_2 = (11, 01, 01, 11)$&nbsp; and&nbsp; $\underline{x}_3 = (11, 10, 00, 10, 11)$,&nbsp; one obtains&nbsp; $w(\varphi_2) = w(\varphi_3) = 6$.&nbsp; No other path exhibits the path weighting&nbsp; $6$.&nbsp; We take this fact into account by the coefficient&nbsp; $A_6 = 2$.<br>
  
*Also drawn is the gray path&nbsp; $\varphi_4$,&nbsp; associated with the sequence&nbsp; $\underline{x}_4 = (11, 01, 10, 01, 11)$ &nbsp; &#8658; &nbsp; $w(\varphi_4) = 7$.&nbsp; The sequences&nbsp; $\underline{x}_5 = (11, 01, 01, 00, 10, 11)$,&nbsp; $\underline{x}_6 = (11, 10, 00, 01, 01, 11)$&nbsp; and&nbsp; $\underline{x}_7 = (11, 10, 00, 10, 00, 10, 11)$&nbsp; also have the path weighting&nbsp; $7$ &nbsp; &#8658; &nbsp; $A_7 = 4$.<br><br>
  
Thus,&nbsp; for this example,&nbsp; the path weighting enumerator function is:
  
 
::<math>T(X) = A_5 \cdot X^5 + A_6 \cdot X^6  + A_7 \cdot X^7 + \text{...} \hspace{0.1cm}=  X^5 + 2 \cdot X^6  + 4 \cdot X^7+ \text{...}\hspace{0.1cm}
 
\hspace{0.05cm}.</math>
  
The definition of this function&nbsp; $T(X)$&nbsp; follows.<br>
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; For the&nbsp; &raquo;<b>path weighting enumerator function</b>&laquo;&nbsp; $\rm (PWEF)$&nbsp; of a convolutional code,&nbsp; the following holds:

::<math>T(X) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm}  X^{w(\varphi_j) } \hspace{0.1cm}=\hspace{0.1cm} \sum_{w\hspace{0.05cm}  =\hspace{0.05cm} d_{\rm F} }^{\infty}\hspace{0.1cm}  A_w \cdot X^w
\hspace{0.05cm}.</math>

# &nbsp; ${\it \Phi}$&nbsp; denotes the set of all paths that leave the zero path&nbsp; $\varphi_0$&nbsp; exactly at the specified time&nbsp; $t$&nbsp; and return to it&nbsp; $($sometime$)$&nbsp; later.<br>
# &nbsp; According to the second part of the equation,&nbsp; the summands are ordered by their path weightings&nbsp; $w$,&nbsp; where&nbsp; $A_w$&nbsp; denotes the number of paths with weighting&nbsp; $w$.
# &nbsp; The sum starts with&nbsp; $w = d_{\rm F}$.<br>
# &nbsp; The path weighting&nbsp; $w(\varphi_j)$&nbsp; is equal to the Hamming weight&nbsp; $($number of&nbsp; "ones"$)$&nbsp; of the encoded sequence&nbsp; $\underline{x}_j$&nbsp; associated with the path&nbsp; $\varphi_j$: &nbsp; $w({\varphi_j) = w_{\rm H}(\underline {x} }_j) \hspace{0.05cm}.$}}


<u>Note:</u>&nbsp; The weight enumerator function&nbsp; $W(X)$&nbsp; defined for linear block codes and the path weighting enumerator function&nbsp; $T(X)$&nbsp; of convolutional codes have many similarities;&nbsp; however,&nbsp; they are not identical. &nbsp; We consider again
*the weight enumerator function of the&nbsp; $(7, 4, 3)$ Hamming code:
 
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7},</math>
 
* and the path weighting enumerator function of our&nbsp; "standard convolutional encoder":
  
 
::<math>T(X) =  X^5 + 2 \cdot X^6  + 4 \cdot X^7+ 8 \cdot X^8+ \text{...} </math>
 
  
# &nbsp; Noticeable is the&nbsp; "$1$"&nbsp; in the first equation,&nbsp; which is missing in the second one.&nbsp; That is:
# &nbsp; For the linear block codes,&nbsp; the reference code word&nbsp; $\underline{x}_i = \underline{0}$&nbsp; is counted.
# &nbsp; But the zero encoded sequence&nbsp; $\underline{x}_i = \underline{0}$&nbsp; and the zero path&nbsp; $\varphi_0$&nbsp; are excluded by definition for convolutional codes.  
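The first coefficients of $T(X)$ can be cross-checked by enumerating the detour paths of the trellis. A minimal sketch, again under the assumption that the standard encoder has generators $1+D+D^2$ and $1+D^2$ $($octal $7, 5)$:

```python
from collections import Counter

def step(state, u):
    """Rate 1/2, memory 2 encoder, generators 1+D+D^2 and 1+D^2
    (octal 7, 5) -- assumed to match the standard encoder."""
    s1, s2 = state
    return (u, s1), (u ^ s1 ^ s2) + (u ^ s2)

def path_weight_counts(w_max):
    """Count the detour paths (leave the zero state once, return once)
    with path weighting w <= w_max; returns the coefficients A_w."""
    counts = Counter()
    start, w0 = step((0, 0), 1)        # first branch carries u = 1
    stack = [(start, w0)]
    while stack:
        state, w = stack.pop()
        if w > w_max:
            continue                   # prune: weight can only grow
        if state == (0, 0):
            counts[w] += 1             # first return to the zero path
            continue
        for u in (0, 1):
            nxt, dw = step(state, u)
            stack.append((nxt, w + dw))
    return counts

A = path_weight_counts(8)
print([A[w] for w in (5, 6, 7, 8)])   # [1, 2, 4, 8]
```

The pruning at `w_max` terminates because every cycle through nonzero states adds at least weight one; the result reproduces $T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + 8 \cdot X^8 + \text{...}$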
 
  
 
{{GraueBox|TEXT=   
$\text{Author's personal opinion:}$&nbsp;

One could have defined&nbsp; $W(X)$&nbsp; without the&nbsp; "$1$"&nbsp; as well.&nbsp; Among other things,&nbsp; this would have avoided that the Bhattacharyya bound for linear block codes and the one for convolutional codes differ by&nbsp; "$-1$",&nbsp; as can be seen from the following equations:<br>

*[[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya| "Bhattacharyya bound for&nbsp; $\text{linear block codes}$"]]: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ${\rm Pr(block\:error)} \le W(X = \beta) -1 \hspace{0.05cm},$

*[[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Burst_error_probability_and_Bhattacharyya_bound| "Bhattacharyya bound for&nbsp; $\text{convolutional codes}$"]]: &nbsp; &nbsp;${\rm Pr(burst\:error)} \le T(X = \beta) \hspace{0.05cm}.$}}
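The two bounds can be evaluated numerically. A minimal sketch; the Bhattacharyya parameter $\beta = 2\sqrt{\varepsilon \, (1-\varepsilon)}$ of a BSC is taken from the linked Bhattacharyya section, and the crossover probability $\varepsilon$ is an assumed example value:

```python
from math import sqrt

# Bhattacharyya parameter of a BSC: beta = 2*sqrt(eps*(1 - eps)),
# as given in the linked Bhattacharyya section; eps is assumed.
eps = 0.01
beta = 2 * sqrt(eps * (1 - eps))

# Pr(block error) <= W(beta) - 1 for the (7, 4, 3) Hamming code
def W(X):
    return 1 + 7 * X**3 + 7 * X**4 + X**7

# Pr(burst error) <= T(beta); series truncated after X^8 -- the neglected
# terms are insignificant here because 2*beta < 1
def T(X):
    return X**5 + 2 * X**6 + 4 * X**7 + 8 * X**8

print(W(beta) - 1)   # upper bound on the block error probability
print(T(beta))       # upper bound on the burst error probability
```

For $\varepsilon = 0.01$ the block error bound is on the order of a few percent, while the burst error bound is considerably smaller.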
  
  
== Enhanced path weighting enumerator function ==
<br>
The path weighting enumerator function&nbsp; $T(X)$&nbsp; only provides information regarding the weights of the encoded sequence&nbsp; $\underline{x}$.
*More information is obtained if the weights of the information sequence&nbsp; $\underline{u}$&nbsp; are also collected.

*One then needs two formal parameters&nbsp; $X$&nbsp; and&nbsp; $U$,&nbsp; as can be seen from the following definition.<br>
  
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp;  The&nbsp; &raquo;<b>enhanced path weighting enumerator function</b>&laquo;&nbsp; $\rm (EPWEF)$&nbsp; is:

::<math>T_{\rm enh}(X,\ U) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm}  X^{w(\varphi_j)} \cdot U^{ { u}(\varphi_j)}  \hspace{0.1cm}=\hspace{0.1cm} \sum_{w} \sum_{u}\hspace{0.1cm}  A_{w, \hspace{0.05cm}u} \cdot X^w  \cdot U^{u}
\hspace{0.05cm}.</math>

All specifications for the&nbsp; $T(X)$&nbsp; definition in the last section apply.&nbsp; In addition,&nbsp; note:
# &nbsp; The path input weight&nbsp; $u(\varphi_j)$&nbsp; is equal to the Hamming weight of the information sequence&nbsp; $\underline{u}_j$&nbsp; associated with the path.
# &nbsp; It is expressed as a power of the formal parameter&nbsp; $U$.<br>
# &nbsp; The coefficient&nbsp; $A_{w, \ u}$&nbsp; denotes the number of paths&nbsp; $\varphi_j$&nbsp; with path output weight&nbsp; $w(\varphi_j)$&nbsp; and path input weight&nbsp; $u(\varphi_j)$.&nbsp; The control variable for the second sum is&nbsp; $u$.<br>
# &nbsp; Setting the formal parameter&nbsp; $U = 1$&nbsp; in the enhanced path weighting enumerator function yields the original path weighting enumerator function&nbsp; $T(X)$.<br>

For many&nbsp; $($and all relevant$)$&nbsp; convolutional codes,&nbsp; the upper equation can still be simplified:

::<math>T_{\rm enh}(X,\ U) =\hspace{0.1cm} \sum_{w \ = \ d_{\rm F} }^{\infty}\hspace{0.1cm}  A_w \cdot X^w \cdot U^{u}
\hspace{0.05cm}.</math>}}<br>
  
 
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp; Thus,&nbsp; the enhanced path weighting enumerator function of our standard encoder is:

::<math>T_{\rm enh}(X,\ U) =  U \cdot X^5 + 2  \cdot U^2 \cdot X^6  + 4  \cdot  U^3 \cdot X^7+ \text{ ...} \hspace{0.1cm}
\hspace{0.05cm}.</math>

[[File:P ID2702 KC T 3 5 S2a v1.png|right|frame|Some paths and their path weightings|class=fit]]
Comparing this result with the trellis shown on the right,&nbsp; we can see:
*The yellow highlighted path&nbsp; &ndash; in&nbsp; $T(X)$&nbsp; marked by&nbsp; $X^5$&nbsp; &ndash; is composed of one blue arrow&nbsp; $(u_i = 1)$&nbsp; and two red arrows&nbsp; $(u_i = 0)$.&nbsp; Thus&nbsp; $X^5$&nbsp; becomes the extended term&nbsp; $U \cdot X^5$.<br>

*The sequences of the two green paths are
:$$\underline{u}_2 = (1, 1, 0, 0) \hspace{0.15cm} \Rightarrow \hspace{0.15cm} \underline{x}_2 = (11, 01, 01, 11),$$
:$$\underline{u}_3 = (1, 0, 1, 0, 0) \hspace{0.15cm} \Rightarrow \hspace{0.15cm} \underline{x}_3 = (11, 10, 00, 10, 11).$$
:This gives the second term&nbsp; $2 \cdot U^2 \cdot X^6$.<br>

*The gray path&nbsp; $($and the three undrawn paths$)$&nbsp; together make the contribution&nbsp; $4 \cdot U^3 \cdot X^7$.&nbsp; Each of these paths contains three blue arrows &nbsp; &#8658; &nbsp; three&nbsp; "ones"&nbsp; in the associated information sequence.<br>}}
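The coefficients $A_{w,\,u}$ of Example 2 can be cross-checked by enumerating the detour paths together with their input weights. A minimal sketch, assuming the standard encoder has generators $1+D+D^2$ and $1+D^2$ $($octal $7, 5)$:

```python
from collections import Counter

def step(state, u):
    """Rate 1/2, memory 2 encoder with generators 1+D+D^2 and 1+D^2
    (octal 7, 5) -- assumed standard encoder."""
    s1, s2 = state
    return (u, s1), (u ^ s1 ^ s2) + (u ^ s2)

def enhanced_counts(w_max):
    """Coefficients A_{w,u}: number of detour paths with path output
    weight w <= w_max and path input weight u."""
    counts = Counter()
    start, w0 = step((0, 0), 1)
    stack = [(start, w0, 1)]           # input weight starts at 1 (u = 1 branch)
    while stack:
        state, w, uw = stack.pop()
        if w > w_max:
            continue                   # prune on the output weight
        if state == (0, 0):
            counts[(w, uw)] += 1       # first return to the zero state
            continue
        for u in (0, 1):
            nxt, dw = step(state, u)
            stack.append((nxt, w + dw, uw + u))
    return counts

A = enhanced_counts(7)
print(A[(5, 1)], A[(6, 2)], A[(7, 3)])   # 1 2 4
```

Setting all $U$ exponents to one (i.e., summing over $u$) reproduces the coefficients of $T(X)$, as stated in the definition.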
 
  
 
== Path weighting enumerator function from state transition diagram ==
<br>
There is an elegant way to determine the path weighting enumerator function&nbsp; $T(X)$&nbsp; and its enhancement directly from the state transition diagram.&nbsp; This will be demonstrated here and in the following sections using our&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#State_definition_for_a_memory_register|$\text{standard convolutional encoder}$]]&nbsp; as an example.<br>
  
First,&nbsp; the state transition diagram must be redrawn for this purpose.&nbsp; The graphic shows this on the left in the previous form as diagram&nbsp; $\rm (A)$,&nbsp; while the new diagram&nbsp; $\rm (B)$&nbsp; is given on the right.&nbsp; It can be seen:
  
[[File:KC_T_3_5_Diagram_v3.png|right|frame|State transition diagram in two different variants|class=fit]]
  
# &nbsp; The state&nbsp; $S_0$&nbsp; is split into the start state&nbsp; $S_0$&nbsp; and the end state&nbsp; $S_0\hspace{0.01cm}'$.&nbsp; Thus,&nbsp; all paths of the trellis that start in state&nbsp; $S_0$&nbsp; and return to it at some point can also be traced in the right graph&nbsp; $\rm (B)$.&nbsp; Excluded are direct transitions from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$&nbsp; and thus also the zero path.<br>
*Der Zustand&nbsp; $S_0$&nbsp; wird aufgespalten in den Startzustand&nbsp; $S_0$&nbsp; und den Endzustand&nbsp; $S_0\hspace{0.01cm}'$. Damit lassen sich alle Pfade des Trellisdiagramms, die im Zustand&nbsp; $S_0$&nbsp; beginnen und irgendwann zu diesem zurückkehren, auch im rechten Graphen&nbsp; $\rm (B)$&nbsp; nachvollziehen. Ausgeschlossen sind dagegen direkte Übergänge von&nbsp; $S_0$&nbsp; nach&nbsp; $S_0\hspace{0.01cm}'$&nbsp; und damit auch der Nullpfad &nbsp;$($Dauer&ndash;$S_0)$.<br>
+
# &nbsp; In diagram&nbsp; $\rm (A)$&nbsp; the transitions are distinguishable by the colors red&nbsp; $($for&nbsp; $u_i = 0)$&nbsp; and blue&nbsp; $($for&nbsp; $u_i = 1)$.&nbsp;
 +
# &nbsp; The code words&nbsp; $\underline{x}_i &#8712; \{00, 01, 10, 11\}$&nbsp; are noted at the transitions.&nbsp; In diagram&nbsp; $\rm (B)$&nbsp; $(00)$&nbsp; is expressed&nbsp;  by&nbsp; $X^0 = 1$&nbsp; and&nbsp; $(11)$&nbsp; by&nbsp; $X^2$.
 +
# &nbsp; The code words&nbsp; $(01)$&nbsp; and&nbsp; $(10)$&nbsp; are now indistinguishable,&nbsp; but they are uniformly denoted by&nbsp; $X$&nbsp;.<br>
 +
# &nbsp; Phrased another way: &nbsp; The code word&nbsp; $\underline{x}_i$&nbsp; is now represented as&nbsp; $X^w$&nbsp; where&nbsp; $X$&nbsp; is a dummy variable associated with the output.
 +
# &nbsp; $w = w_{\rm H}(\underline{x}_i)$&nbsp; indicates the Hamming weight of the code word&nbsp; $\underline{x}_i$.&nbsp; For a rate $1/2$ code,&nbsp; the exponent&nbsp; $w$&nbsp; is either&nbsp; $0, \ 1$&nbsp; or&nbsp; $2$.<br>
 +
# &nbsp; Also diagram&nbsp; $\rm (B)$&nbsp; omits the color coding.&nbsp; The information bit&nbsp; $u_i = 1$&nbsp; is now denoted by&nbsp; $U^1 = U$&nbsp; and  bit&nbsp; $u_i = 0$&nbsp; by&nbsp; $U^0 = 1$.&nbsp; The dummy variable&nbsp; $U$&nbsp; is thus assigned to the input sequence&nbsp; $\underline{u}$&nbsp;.<br>
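The relabeling from diagram $\rm (A)$ to diagram $\rm (B)$ can be generated programmatically. A minimal sketch, again assuming the $(7, 5)$ standard encoder with states given by the register contents $(s_1, s_2)$; the function name `diagram_B_labels` is hypothetical:

```python
def diagram_B_labels():
    """Branch labels X^w * U^u of the redrawn diagram (B), assuming the
    (7, 5) standard encoder; a state is the register content (s1, s2)."""
    labels = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for u in (0, 1):
                x1, x2 = u ^ s1 ^ s2, u ^ s2     # code word (x1 x2)
                w = x1 + x2                      # Hamming weight -> exponent of X
                labels[((s1, s2), (u, s1))] = (w, u)
    return labels

# Print every transition with its label X^w U^u
for (old, new), (w, u) in sorted(diagram_B_labels().items()):
    print(f"{old} -> {new}:  X^{w} U^{u}")
```

For instance, the transition out of $S_0 = (0, 0)$ with $u_i = 1$ emits the code word $(11)$ and therefore carries the label $X^2 U$.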
  
*Im Diagramm&nbsp; $\rm (A)$&nbsp; sind die Übergänge anhand der Farben Rot&nbsp; $($für&nbsp; $u_i = 0)$&nbsp; und Blau&nbsp; $($für&nbsp; $u_i = 1)$&nbsp; unterscheidbar, und die Codeworte&nbsp; $\underline{x}_i &#8712; \{00, 01, 10, 11\}$&nbsp; sind an den Übergängen vermerkt. Im neuen Diagramm&nbsp; $\rm (B)$&nbsp; werden&nbsp; $(00)$&nbsp; durch&nbsp; $X^0 = 1$&nbsp; und&nbsp; $(11)$&nbsp; durch&nbsp; $X^2$&nbsp; ausgedrückt. Die Codeworte&nbsp; $(01)$&nbsp; und&nbsp; $(10)$&nbsp; sind nun nicht mehr unterscheidbar, sondern werden einheitlich mit&nbsp; $X$&nbsp; bezeichnet.<br>
+
== Rules for manipulating the state transition diagram ==
 
+
<br>
The goal of our calculations will be to characterize the&nbsp; $($arbitrarily complicated$)$&nbsp; path from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$&nbsp; by the enhanced path weighting enumerator function&nbsp; $T_{\rm enh}(X, \ U)$.&nbsp; For this we need rules to simplify the graph step by step.<br>
  
&raquo;'''Serial transitions'''&laquo;
 +
[[File:KC_T_3_5_Einzelteile_neu.png|right|frame|Reduction of serial transitions&nbsp; '''(1)''',&nbsp; parallel transitions&nbsp; '''(2)''',&nbsp; a ring&nbsp; '''(3)''',&nbsp; a feedback&nbsp; '''(4)''']]
  
Two serial connections&nbsp; &ndash;&nbsp; denoted by&nbsp; $A(X, \ U)$&nbsp; and&nbsp; $B(X, \ U)$&nbsp; &ndash;&nbsp; can be replaced by a single connection with the product of these two valuation functions.<br>
<br>
 
&raquo;'''Parallel transitions'''&laquo;
'''Serielle Übergänge'''
 
  
Zwei serielle Verbindungen &ndash; gekennzeichnet durch&nbsp; $A(X, \ U)$&nbsp; und&nbsp; $B(X, \ U)$&nbsp; &ndash; können durch eine einzige Verbindung mit dem Produkt dieser Bewertungen ersetzt werden.<br>
+
Two parallel connections &ndash; denoted by&nbsp; $A(X, \ U)$&nbsp; and&nbsp; $B(X, \ U)$&nbsp; &ndash; are combined by the sum of their valuation functions.<br>
<br clear=all>
 
[[File:P ID2690 KC T 3 5 S3b2.png|right|frame|Erfassung paralleler Übergänge]]
 
'''Parallele Übergänge'''
 
  
Zwei parallele Verbindungen &ndash; gekennzeichnet durch&nbsp; $A(X, \ U)$&nbsp; und&nbsp; $B(X, \ U)$&nbsp; &ndash; werden durch die Summe ihrer Bewertungsfunktionen zusammengefasst.<br>
+
&raquo;'''Ring'''&laquo;
<br clear=all>
 
[[File:P ID2691 KC T 3 5 S3b3.png|right|frame|Reduzierung eines Rings]]
 
'''Ring'''
 
  
Die nebenstehende Konstellation kann durch eine einzige Verbindung ersetzt werden, wobei für die Ersetzung gilt:
+
The adjacent constellation can be replaced by a single connection, and the following applies to the replacement:
 
::<math>E(X, U) =  \frac{A(X, U) \cdot B(X, U)}{1- C(X, U)}  
 
::<math>E(X, U) =  \frac{A(X, U) \cdot B(X, U)}{1- C(X, U)}  
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
<br clear=all>
 
[[File:P ID2692 KC T 3 5 S3b4.png|right|frame|Reduzierung einer Rückkopplung ]]
 
'''Rückkopplung'''
 
  
Durch die Rückkopplung können sich hier zwei Zustände beliebig oft abwechseln. Für diese Konstellation gilt:
+
&raquo;'''Feedback'''&laquo;
 +
 
 +
Due to the feedback, two states can alternate here as often as desired. For this constellation applies:
 
::<math>F(X, U) =  \frac{A(X, U) \cdot B(X, U)\cdot C(X, U)}{1- C(X, U)\cdot D(X, U)}  
 
::<math>F(X, U) =  \frac{A(X, U) \cdot B(X, U)\cdot C(X, U)}{1- C(X, U)\cdot D(X, U)}  
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
  
Die hier angegebenen Gleichungen für Ring und Rückkopplung sind in der&nbsp; [[Aufgaben:Aufgabe_3.12Z:_Ring_und_Rückkopplung|Aufgabe 3.12Z]]&nbsp; zu beweisen.
+
The equations for ring and feedback given here are to be proved in the&nbsp; [[Aufgaben:Exercise_3.12Z:_Ring_and_Feedback|"Exercise 3.12Z"]]&nbsp;.
 
<br clear=all>
 
<br clear=all>
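The ring and feedback formulas can be made plausible by summing over the number of loop traversals. The following minimal numeric sketch uses arbitrary sample ratings (assuming $|C| < 1$ and $|C \cdot D| < 1$, so that the geometric series converge); it is an illustration only, not part of the proof requested in Exercise 3.12Z:

```python
# Minimal numeric sketch (sample ratings are arbitrary): the ring and
# feedback formulas are geometric series over the number of loop passes.
a, b, c, d = 0.3, 0.5, 0.2, 0.4   # must satisfy |c| < 1 and |c*d| < 1

# Ring:  E = A*B/(1 - C)  <=>  sum over k = 0, 1, 2, ... passes of loop C
ring_closed = a * b / (1 - c)
ring_series = sum(a * c**k * b for k in range(200))
print(abs(ring_closed - ring_series) < 1e-12)          # True

# Feedback:  F = A*B*C/(1 - C*D)  <=>  k round trips through D and C
fb_closed = a * b * c / (1 - c * d)
fb_series = sum(a * c * (d * c)**k * b for k in range(200))
print(abs(fb_closed - fb_series) < 1e-12)              # True
```

Truncating the series after 200 terms is harmless here, since the neglected remainder is of the order $c^{200}$.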
 
{{GraueBox|TEXT= 
$\text{Example 3:}$&nbsp; 
The above rules are now applied to our standard example.&nbsp; In the graphic you can see the modified diagram&nbsp; $\rm (B)$&nbsp; on the left.

[[File:EN_KC_T_3_5_S48.png|right|frame|Reduction of the state transitions|class=fit]]

*First we replace the red highlighted detour from&nbsp; $S_1$&nbsp; to&nbsp; $S_2$&nbsp; via&nbsp; $S_3$&nbsp; in diagram&nbsp; $\rm (B)$&nbsp; by the red connection&nbsp; $T_1(X, \hspace{0.05cm} U)$&nbsp; drawn in diagram&nbsp; $\rm (C)$.

*According to the above classification,&nbsp; this is a&nbsp; "ring"&nbsp; with the labels&nbsp; $A = C = U \cdot X$&nbsp; and&nbsp; $B = X$,&nbsp; and we obtain for the first reduction function:

::<math>T_1(X, \hspace{0.05cm} U) =  \frac{U \cdot X^2}{1- U \cdot X} 
\hspace{0.05cm}.</math>

*Now we combine the parallel connections according to the blue background in diagram&nbsp; $\rm (C)$&nbsp; and replace them by the blue connection in diagram&nbsp; $\rm (D)$.&nbsp; The second reduction function is thus:

:$$T_2(X, \hspace{0.03cm}U) =  T_1(X, \hspace{0.03cm}U) + X
\hspace{0.3cm}\Rightarrow  \hspace{0.3cm} T_2(X, \hspace{0.05cm}U) = \frac{U X^2 + X \cdot (1-UX)}{1- U  X}=  \frac{X}{1- U  X}\hspace{0.05cm}.$$

*The entire graph&nbsp; $\rm (D)$&nbsp; can then be replaced by a single connection from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$.&nbsp; According to the feedback rule,&nbsp; one obtains for the&nbsp; "enhanced path weighting enumerator function":

::<math>T_{\rm enh}(X, \hspace{0.05cm}U) = \frac{(U X^2) \cdot X^2 \cdot \frac{X}{1- U  X} }{1- U  \cdot \frac{X}{1- U  X} } =  \frac{U X^5}{1- U  X- U  X} = \frac{U X^5}{1- 2 \cdot U  X} 
\hspace{0.05cm}.</math>

*With the series expansion&nbsp; $1/(1-x) = 1 + x + x^2 + x^3 + \ \text{...} \ $&nbsp; this can also be written as:

::<math>T_{\rm enh}(X, \hspace{0.05cm}U) = U X^5 \cdot \big [ 1 + 2 \hspace{0.05cm}UX + (2 \hspace{0.05cm}UX)^2  + (2 \hspace{0.05cm}UX)^3 + \text{...} \hspace{0.05cm} \big ] 
\hspace{0.05cm}.</math>

*Setting the formal input variable&nbsp; $U = 1$,&nbsp; we obtain the&nbsp; "simple"&nbsp; path weighting enumerator function,&nbsp; which allows statements about the weight distribution of the output sequence&nbsp; $\underline{x}$&nbsp; alone:

::<math>T(X) = X^5 \cdot \big [ 1 + 2 X + 4  X^2  + 8  X^3 +\text{...}\hspace{0.05cm} \big ] 
\hspace{0.05cm}.</math>

:We have already read the same result from the trellis diagram in the section&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function| "Path weighting enumerator function"]].&nbsp; There we found
:*one gray path with weight&nbsp; $5$,
:*two yellow paths with weight&nbsp; $6$,
:*four green paths with weight&nbsp; $7$,
:*and so on.}}<br>
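The reduction in Example 3 can also be retraced programmatically. The following is a hypothetical Python sketch (all function names are illustrative, not from the text): the four rules are encoded as operations on rating functions of the dummy variables $(X, U)$, and the result is checked numerically against the closed form $U X^5/(1-2UX)$:

```python
# Hypothetical sketch: the four reduction rules as operations on rating
# functions of the dummy variables (X, U), represented as plain callables.

def serial(a, b):            # rule (1): product of the two ratings
    return lambda X, U: a(X, U) * b(X, U)

def parallel(a, b):          # rule (2): sum of the two ratings
    return lambda X, U: a(X, U) + b(X, U)

def ring(a, b, c):           # rule (3): E = A*B / (1 - C)
    return lambda X, U: a(X, U) * b(X, U) / (1 - c(X, U))

def feedback(a, b, c, d):    # rule (4): F = A*B*C / (1 - C*D)
    return lambda X, U: (a(X, U) * b(X, U) * c(X, U)
                         / (1 - c(X, U) * d(X, U)))

# Transition ratings as used in Example 3:
UX    = lambda X, U: U * X       # ring labels A = C = U*X
X1    = lambda X, U: X           # ring label  B = X, and direct S1 -> S2
UX2   = lambda X, U: U * X**2    # S0 -> S1
X2    = lambda X, U: X**2        # S2 -> S0'
Uonly = lambda X, U: U           # feedback transition S2 -> S1

T1    = ring(UX, X1, UX)         # detour S1 -> S3 -> S2
T2    = parallel(T1, X1)         # plus the direct transition S1 -> S2
T_enh = feedback(UX2, X2, T2, Uonly)

# Numeric check against the closed form U*X^5 / (1 - 2*U*X):
x, u = 0.1, 0.7
print(abs(T_enh(x, u) - u * x**5 / (1 - 2 * u * x)) < 1e-12)   # True
```

A symbolic algebra system would give the closed form directly; the numeric spot check above is sufficient to confirm the chain of reductions.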
  
== Block error probability vs. burst error probability ==
<br>
The simple model according to the sketch is valid for linear block codes as well as for convolutional codes.<br>

[[File:EN_KC_T_3_5_S5.png|right|frame|Simple transmission model including encoding and decoding|class=fit]]
  
  
&raquo;'''Block codes'''&laquo;

For block codes,&nbsp; $\underline{u} = (u_1, \ \text{...} \hspace{0.05cm}, \ u_i, \ \text{...} \hspace{0.05cm}, \ u_k)$&nbsp; and&nbsp; $\underline{v} = (v_1, \ \text{...} \hspace{0.05cm}, v_i, \ \text{...} \hspace{0.05cm} \ , \ v_k)$&nbsp; denote the information blocks at the input and output of the system. 

Thus,&nbsp; the following descriptive variables can be specified:
*the &raquo;block error probability&laquo; &nbsp; ${\rm Pr(block\:error)} = {\rm Pr}(\underline{v} &ne; \underline{u}),$

*the &raquo;bit error probability&laquo; &nbsp; ${\rm Pr(bit\:error)} = {\rm Pr}(v_i &ne; u_i).$<br><br>
  
 
{{BlaueBox|TEXT= 
$\text{Please note:}$&nbsp; In real transmission systems,&nbsp; due to thermal noise the following always applies:
:$${\rm Pr(bit\:error)} > 0\hspace{0.05cm},\hspace{0.5cm}{\rm Pr(block\:error)} > {\rm Pr(bit\:error)}
\hspace{0.05cm}.$$

A simple attempt at an explanation: &nbsp; If the decoder decides exactly one bit wrong in each block of length&nbsp; $k$, 
*the average bit error probability is&nbsp; ${\rm Pr(bit\:error)}= 1/k$,

*while for the block error probability&nbsp; ${\rm Pr(block\:error)}\equiv 1$&nbsp; holds.}}<br>
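The inequality above can be illustrated with a few lines of code. This is a minimal numeric sketch; the block length and bit error probability are arbitrary sample values, not taken from the text:

```python
# Minimal numeric sketch (illustrative values): independent bit errors
# with probability p, blocks of length k, no error correction.
p, k = 0.01, 4

pr_bit   = p                      # Pr(bit error)
pr_block = 1 - (1 - p)**k         # at least one of the k bits is wrong

print(pr_block > pr_bit > 0)      # True

# Extreme case from the note: exactly one wrong bit in every block,
# so Pr(bit error) = 1/k while Pr(block error) = 1.
print(1 / k)                      # 0.25
```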
  
  
&raquo;'''Convolutional codes'''&laquo;

For convolutional codes,&nbsp; the block error probability cannot be specified,&nbsp; since here&nbsp; $\underline{u} = (u_1, \ u_2, \ \text{...} \hspace{0.05cm})$&nbsp; and&nbsp; $\underline{\upsilon} = (v_1, \ v_2, \ \text{...} \hspace{0.05cm})$&nbsp; represent sequences. 

[[File:P ID2706 KC T 3 5 S5b v1.png|right|frame|Zero path&nbsp; ${\it \varphi}_0$&nbsp; and deviation paths&nbsp; ${\it \varphi}_i$|class=fit]]

Even the smallest possible code parameter&nbsp; $k = 1$&nbsp; leads here to the sequence length&nbsp; $k \hspace{0.05cm}' &#8594; &#8734;$,&nbsp; and
*the block error probability would always result in&nbsp; ${\rm Pr(block\:error)}\equiv 1$,

*even if the bit error probability is extremely small&nbsp; $($but non-zero$)$.
<br clear=all>
 
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; For the&nbsp; &raquo;<b>burst error probability</b>&laquo;&nbsp; of a convolutional code holds:

::<math>{\rm Pr(burst\:error)} = {\rm Pr}\big \{ {\rm Decoder\hspace{0.15cm} leaves\hspace{0.15cm} the\hspace{0.15cm} correct\hspace{0.15cm}path}\hspace{0.15cm}{\rm at\hspace{0.15cm}time \hspace{0.10cm}\it t}\big \} 
\hspace{0.05cm}.</math>

*To simplify the notation for the following derivation,&nbsp; we always assume the zero sequence&nbsp; $\underline{0}$,&nbsp; which is shown in red in the drawn trellis as the&nbsp; "zero path"&nbsp; $\varphi_0$.

*All other paths&nbsp; $\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} $&nbsp; $($and many more$)$&nbsp; leave&nbsp; $\varphi_0$&nbsp; at time&nbsp; $t$. 

*They all belong to the path set&nbsp; ${\it \Phi}$,&nbsp; which is defined by the event&nbsp; "Viterbi decoder leaves the correct path at time&nbsp; $t$".&nbsp; This probability is calculated in the next section.}}<br>
  
== Burst error probability and Bhattacharyya bound ==
<br>
[[File:EN_KC_T_3_5_S5c.png|right|frame|Calculation of the burst error probability|class=fit]]
As in the earlier chapter&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|"Bounds for block error probability"]],&nbsp; we start from the pairwise error probability&nbsp; ${\rm Pr}\big [\varphi_0 &#8594; \varphi_i \big]$&nbsp; that the decoder&nbsp; <b>could</b>&nbsp; select the path&nbsp; $\varphi_i$&nbsp; instead of the path&nbsp; $\varphi_0$. 

All considered paths&nbsp; $\varphi_i$&nbsp; leave the zero path&nbsp; $\varphi_0$&nbsp; at time&nbsp; $t$;&nbsp; thus they all belong to the path set&nbsp; ${\it \Phi}$.<br>

The sought&nbsp; "burst error probability"&nbsp; is equal to the probability of the following union:

:$${\rm Pr(burst\:error)}= {\rm Pr}\left (\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}1}\big] \hspace{0.05cm}\cup\hspace{0.05cm}\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}2}\big]\hspace{0.05cm}\cup\hspace{0.05cm} \text{... }\hspace{0.05cm} \right )= {\rm Pr} \left ( \cup_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} \hspace{0.15cm} \big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big] \right )\hspace{0.05cm}.$$

*An upper bound for this is provided by the so-called&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|$\text{Union Bound}$]]:

::<math>{\rm Pr(burst\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.15cm} 
{\rm Pr}\big [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big ] = {\rm Pr(union \hspace{0.15cm}bound)}
\hspace{0.05cm}.</math>

*The pairwise error probability can be estimated using the&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya|$\text{Bhattacharyya bound}$]]:

::<math>{\rm Pr}\big [\underline {0} \mapsto \underline {x}_{\hspace{0.02cm}i}\big ]
\le \beta^{w_{\rm H}(\underline {x}_{\hspace{0.02cm}i})} =
\hspace{0.05cm} \beta^{w(\varphi_i)}\hspace{0.05cm}.</math>
  
:Here,
:*$w_{\rm H}(\underline{x}_i)$&nbsp; denotes the Hamming weight of the possible encoded sequence&nbsp; $\underline{x}_i,$

:*$w(\varphi_i)$&nbsp; the path weight of the corresponding path&nbsp; $\varphi_i &#8712; {\it \Phi},$

:*$\beta$&nbsp; the so-called&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya|$\text{Bhattacharyya parameter}$]].<br>
 +
 
  
 +
{{BlaueBox|TEXT=
By&nbsp; &raquo;'''summing over all paths'''&laquo;&nbsp; and comparing with the&nbsp; $($simple$)$&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function|$\text{path weighting enumerator function}$]]&nbsp; $T(X)$,&nbsp; we get the result:

::<math>{\rm Pr(burst\:error)} \le T(X = \beta),\hspace{0.5cm}{\rm with}\hspace{0.5cm}
T(X) = \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi} }\hspace{0.15cm}
\hspace{0.05cm} X^{w(\varphi_i)}\hspace{0.05cm}.</math>}}
 
  
 
{{GraueBox|TEXT= 
$\text{Example 4:}$&nbsp; For our standard encoder with
*code rate&nbsp; $R = 1/2$,

*memory&nbsp; $m = 2$,

*transfer function matrix&nbsp; $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D)$,


we have obtained the following&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function|$\text{path weighting enumerator function}$]]: 

::<math>T(X) =  X^5 + 2 \cdot X^6  + 4 \cdot X^7 + \ \text{...}  \hspace{0.1cm}
\hspace{0.05cm}.</math>

*Using the series expansion&nbsp; $1/(1-x) = 1 + x + x^2 + x^3 + \ \text{...} \hspace{0.15cm} $&nbsp; this can also be written as:

::<math>T(X) =  \frac{X^5}{1-2 \cdot X}
\hspace{0.05cm}.</math>

*The BSC model with the falsification probability&nbsp; $\varepsilon$&nbsp; provides the following Bhattacharyya bound:

::<math>{\rm Pr(burst\:error)} \le T(X = \beta)  = T\big ( X = 2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)} \big )
= \frac{(2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)})^5}{1- 4\cdot \sqrt{\varepsilon \cdot (1-\varepsilon)}}\hspace{0.05cm}.</math>

*In&nbsp; [[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14"]]&nbsp; this equation is to be evaluated numerically.}}<br>
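The Bhattacharyya bound from Example 4 is easy to evaluate numerically. The following sketch uses arbitrary sample values of $\varepsilon$; the guard $\beta < 1/2$ reflects that the geometric series behind $T(X) = X^5/(1-2X)$ only converges in that range:

```python
import math

# Sketch: Bhattacharyya burst error bound for the standard code,
#   Pr(burst error) <= T(beta) = beta^5 / (1 - 2*beta),
# with the BSC Bhattacharyya parameter beta = 2*sqrt(eps*(1-eps)).
def burst_error_bound(eps):
    beta = 2 * math.sqrt(eps * (1 - eps))
    if 2 * beta >= 1:                 # geometric series diverges otherwise
        return math.inf
    return beta**5 / (1 - 2 * beta)

for eps in (1e-4, 1e-3, 1e-2):        # sample falsification probabilities
    print(f"eps = {eps:.0e}  ->  bound = {burst_error_bound(eps):.3e}")
```

As expected, the bound shrinks rapidly with decreasing $\varepsilon$, since the dominant term behaves like $\beta^5 \approx (2\sqrt{\varepsilon})^5$.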
== Bit error probability and Viterbi bound ==
<br>
Finally,&nbsp; an upper bound for the bit error probability is given.&nbsp; According to the graph,&nbsp; we proceed as in&nbsp; [Liv10]<ref name='Liv10'>Liva, G.:&nbsp; Channel Coding.&nbsp; Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.</ref>&nbsp; and assume the following conditions:<br>

*The zero sequence&nbsp; $\underline{x} = \underline{0}$&nbsp; was sent &nbsp; &#8658; &nbsp; path&nbsp; $\varphi_0$.<br>

*The duration of a path deviation&nbsp; $($"error burst duration"$)$&nbsp; is denoted by&nbsp; $L$.<br>

*The distance between two bursts&nbsp; $($"inter-burst time"$)$&nbsp; is called&nbsp; $N$.<br>

*The Hamming weight of the error burst is denoted by&nbsp; $H$.<br>
[[File:P ID2715 KC T 3 5 S6a v1.png|right|frame|Definition of the description variables&nbsp; $L$,&nbsp; $N$&nbsp; and&nbsp; $H$|class=fit]]
For a rate&nbsp;$1/n$&nbsp; convolutional code &nbsp; &#8658; &nbsp; $k = 1$,&nbsp; i.e. one information bit per clock,&nbsp; the expected values&nbsp; ${\rm E}\big[L \big]$,&nbsp; ${\rm E}\big[N \big]$&nbsp; and&nbsp; ${\rm E}\big[H\big]$&nbsp; of the random variables defined above can be used to give an upper bound for the bit error probability:

::<math>{\rm Pr(bit\:error)} =  \frac{{\rm E}\big[H\big]}{{\rm E}\big[L\big] + {\rm E}\big[N\big]}\hspace{0.15cm} \le \hspace{0.15cm} \frac{{\rm E}\big[H\big]}{{\rm E}\big[N\big]}
\hspace{0.05cm}.</math>
  
It is assumed that
# &nbsp; the&nbsp; $($mean$)$&nbsp; duration of an error burst is in practice much smaller than the expected distance between two bursts,
# &nbsp; the&nbsp; $($mean$)$&nbsp; inter-burst time&nbsp; ${\rm E}\big[N\big]$&nbsp; is equal to the inverse of the burst error probability,
# &nbsp; the expected value in the numerator is estimated as follows:

::<math>{\rm E}\big[H \big]  \le  \frac{1}{\rm Pr(burst\:error)}\hspace{0.1cm} \cdot \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} 
\hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)}
\hspace{0.05cm}.</math>
  
In deriving this bound,&nbsp; the&nbsp; "pairwise error probability"&nbsp; ${\rm Pr}\big [\varphi_0 &#8594; \varphi_i \big]$&nbsp; and the&nbsp; "Bhattacharyya estimation"&nbsp; are used.&nbsp; Thus,&nbsp; with
*the path input weight&nbsp; $u(\varphi_i),$

*the path output weight&nbsp; $w(\varphi_i),$&nbsp; and

*the Bhattacharyya parameter&nbsp; $\beta$,


we obtain the following estimate for the bit error probability and refer to it as the&nbsp; &raquo;<b>Viterbi bound</b>&laquo;:

::<math>{\rm Pr(bit\:error)}\le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} 
\hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)}
\hspace{0.05cm}.</math>
This intermediate result can also be represented in another form.&nbsp; We recall the&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Enhanced_path_weighting_enumerator_function|$\text{enhanced path weighting enumerator function}$]]

::<math>T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm}  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)} 
\hspace{0.05cm}.</math>
  
If we differentiate this function with respect to the dummy input variable&nbsp; $U$,&nbsp; we get

::<math>\frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)-1} 
\hspace{0.05cm}.</math>
  
Finally,&nbsp; setting the dummy input variable&nbsp; $U = 1$,&nbsp; we see the connection to the above result:

::<math>\left [ \frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{ U=1}} = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \hspace{0.05cm}.</math>
  
 
{{BlaueBox|TEXT= 
$\text{Conclusion:}$&nbsp;  The&nbsp; &raquo;<b>bit error probability</b>&laquo;&nbsp; of a convolutional code can be estimated in closed form using the enhanced path weighting enumerator function:
::<math>{\rm Pr(bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} } 
\hspace{0.05cm}.</math>

*One speaks of the&nbsp; &raquo;<b>Viterbi bound</b>&laquo;.&nbsp; Here one differentiates the enhanced path weighting enumerator function with respect to the second parameter&nbsp; $U$&nbsp; and then sets
::$$X = \beta,\hspace{0.4cm} U = 1.$$}}<br>
 
  
''Hinweis:'' &nbsp; In&nbsp; [[Aufgaben:Aufgabe_3.14:_Fehlerwahrscheinlichkeitsschranken|Aufgabe 3.14]]&nbsp; werden die&nbsp; ''Viterbi&ndash;Schranke''&nbsp; und die&nbsp; ''Bhattacharyya&ndash;Schranke''&nbsp; für den Rate&ndash;$1/2$&ndash;Standardcode und das&nbsp; [[Channel_Coding/Signal_classification#Binary_Symmetric_Channel_.E2.80.93_BSC| BSC&ndash;Modell]]&nbsp; numerisch ausgewertet.<br>
 
  
[[File:P ID2716 KC T 3 5 S6b v2.png|right|frame|AWGN–Bitfehlerwahrscheinlichkeit von Faltungscodes]]
 
 
{{GraueBox|TEXT=   
 
{{GraueBox|TEXT=   
$\text{Beispiel 5:}$&nbsp; Die Grafik verdeutlicht die gute Korrekturfähigkeit der Faltungscodes  beim&nbsp; [[Channel_Coding/Signal_classification#AWGN.E2.80.93Kanal_bei_bin.C3.A4rem_Eingang| AWGN&ndash;Kanal]]
+
$\text{Example 5:}$&nbsp; The graph illustrates the good correction capability &nbsp; &rArr; &nbsp; small bit error rate&nbsp; $\rm (BER)$&nbsp; of convolutional codes at the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{$\rm AWGN$&nbsp; channel}$]].
*Rote Kreise kennzeichnen die Bitfehlerrate für unseren Rate&ndash;$1/2$&ndash;Standardcode mit Memory&nbsp; $m = 2$.<br>
+
[[File:EN_KC_T_3_5_S6.png|right|frame|AWGN bit error probability of convolutional codes]]
*Grüne Kreuze markieren einen Faltungscode mit&nbsp; $m = 6$, dem so genannten&nbsp; [[Channel_Coding/Codebeschreibung_mit_Zustands%E2%80%93_und_Trellisdiagramm#Definition_der_freien_Distanz| Industrie&ndash;Standardcode]].<br><br>
+
 
 +
*Red circles denote the&nbsp; $\rm BER$&nbsp; for our&nbsp; "rate&nbsp; $1/2$&nbsp; standard encoder"&nbsp; with memory&nbsp; $m = 2$.<br>
 +
 
 +
 
 +
*Green crosses mark a convolutional code with&nbsp; $m = 6$,&nbsp; the so-called&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#Definition_of_the_free_distance|$\text{industry standard code}$]].<br>
 +
 
 +
 
 +
* Codes with large memory&nbsp; $m$&nbsp; lead to large gains over uncoded transmission.
 +
 
  
Insbesondere Codes mit großem Gedächtnis&nbsp; $m$&nbsp; führen zu großen Gewinnen gegenüber uncodierter Übertragung (gestrichelte Kurve).}}
 
  
  
== Aufgaben zum Kapitel ==
+
<br><br><br><br>
 +
<u>Note:</u> &nbsp;
 +
 
 +
In&nbsp; [[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14"]],&nbsp; the&nbsp; "Viterbi bound"&nbsp; and the&nbsp; "Bhattacharyya bound"&nbsp; are  evaluated numerically for the&nbsp; "rate $1/2$&nbsp; standard encoder"&nbsp; and the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC model}$]].
 +
}}
 +
 
 +
 
 +
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:Aufgabe_3.12:_Pfadgewichtsfunktion|Aufgabe 3.12: Pfadgewichtsfunktion]]
+
[[Aufgaben:Exercise_3.12:_Path_Weighting_Function|Exercise 3.12: Path Weighting Function]]
  
[[Aufgaben:Aufgabe_3.12Z:_Ring_und_Rückkopplung|Aufgabe 3.12Z: Ring und Rückkopplung]]
+
[[Aufgaben:Exercise_3.12Z:_Ring_and_Feedback|Exercise 3.12Z: Ring and Feedback]]
  
[[Aufgaben:Aufgabe_3.13:_Nochmals_zu_den_Pfadgewichtsfunktionen|Aufgabe 3.13: Nochmals zu den Pfadgewichtsfunktionen]]
+
[[Aufgaben:Exercise_3.13:_Path_Weighting_Function_again|Exercise 3.13: Path Weighting Function again]]
  
[[Aufgaben:Aufgabe_3.14:_Fehlerwahrscheinlichkeitsschranken|Aufgabe 3.14: Fehlerwahrscheinlichkeitsschranken]]
+
[[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|Exercise 3.14: Error Probability Bounds]]
  
==Quellenverzeichnis==
+
==References==
 
<references/>
 
<references/>
  
 
{{Display}}
 
{{Display}}

Latest revision as of 19:24, 20 December 2022

Free distance vs. minimum distance


An important parameter regarding the error probability of linear block codes is the  "minimum distance"  between two code words   $\underline{x}$   and   $\underline{x}\hspace{0.05cm}'$:

\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x}) \hspace{0.05cm}.\]
  • The second part of the equation arises from the fact that every linear code also includes the zero word   ⇒   $(\underline{0})$.
  • It is therefore convenient to set  $\underline{x}\hspace{0.05cm}' = \underline{0}$,  so that the  $\text{Hamming distance}$   $d_{\rm H}(\underline{x}, \ \underline{0})$   gives the same result as the Hamming weight   $w_{\rm H}(\underline{x})$.


$\text{Example 1:}$  The table shows the  $16$  code words of an exemplary  $\text{HC (7, 4, 3)}$  $($see  $\text{Examples 6 and 7})$.

Code words of the considered  $(7, 4, 3)$ Hamming code

You can see:

  • All code words except the null word  $(\underline{0})$  contain at least three ones   ⇒   $d_{\rm min} = 3$.
  • There are
  1. seven code words with three  "ones"  $($highlighted in yellow$)$,
  2. seven with four  "ones"  $($highlighted in green$)$,  and
  3. one each with no  "ones" and seven  "ones".
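The weight distribution stated in Example 1 can be checked by a short enumeration. The sketch below is illustrative: it assumes a systematic generator matrix of a  $(7, 4, 3)$  Hamming code  $($equivalent to the code in the table$)$  and counts the Hamming weights of all  $2^4 = 16$  code words.

```python
from itertools import product

# A systematic generator matrix of a (7, 4, 3) Hamming code
# (assumed form; equivalent to the code in the table above).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(u):
    # modulo-2 vector-matrix product u * G
    return tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

# Hamming weights of all 2^4 = 16 code words
weights = [sum(encode(u)) for u in product((0, 1), repeat=4)]
distribution = {w: weights.count(w) for w in sorted(set(weights))}
print(distribution)   # weight enumerator 1 + 7*X^3 + 7*X^4 + X^7

d_min = min(w for w in weights if w > 0)
print(d_min)          # minimum distance 3
```

The result reproduces the table: seven code words of weight three, seven of weight four, and one each of weight zero and seven.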


The  »free distance«  $d_{\rm F}$  of a convolutional code  $(\mathcal{CC})$  is formulaically no different from the minimum distance of a linear block code:

\[d_{\rm F}(\mathcal{CC}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x}) \hspace{0.05cm}.\]
Some paths with  $w(\underline{x}) = d_{\rm F} = 5$

In the literature,  "$d_{∞}$"  is sometimes used instead of  "$d_{\rm F}$".

The graph shows three of the infinite paths with the minimum Hamming weight  $w_{\rm H, \ min}(\underline{x}) = d_{\rm F} = 5$.

  • The major difference from the minimum distance  $d_{\rm min}$  of other codes is that for convolutional codes one considers not information words and code words,  but  »$\text{semi–infinite}$«  sequences.
  • Each encoded sequence  $\underline{x}$  describes a path through the trellis.
  • The  "free distance"  is the smallest possible Hamming weight of such a path  $($except for the zero path$)$.
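The free distance can also be computed mechanically as a shortest-path search in the trellis. The following sketch assumes the rate  $1/2$  standard encoder with  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$  $($consistent with the sequence  $\underline{x} = (11, 10, 11)$  shown above$)$  and runs a Dijkstra search over the trellis states; the weight of the best path that diverges from the zero state and remerges is  $d_{\rm F}$.

```python
from heapq import heappush, heappop

def out_weight(u, s):
    # output Hamming weight for input bit u in state s = (u_{i-1}, u_{i-2});
    # assumes the standard encoder G(D) = (1 + D + D^2, 1 + D^2)
    return (u ^ s[0] ^ s[1]) + (u ^ s[1])

def free_distance():
    # Dijkstra over trellis states: diverge from the zero state with u = 1,
    # stop as soon as the search remerges with the zero state
    heap = [(out_weight(1, (0, 0)), (1, 0))]   # (accumulated weight, state)
    best = {}
    while heap:
        w, s = heappop(heap)
        if s == (0, 0):
            return w                           # remerged with the zero path
        if s in best and best[s] <= w:
            continue
        best[s] = w
        for u in (0, 1):
            heappush(heap, (w + out_weight(u, s), (u, s[0])))

print(free_distance())   # → 5
```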



Path weighting enumerator function


For any linear block code,  a weight enumerator function can be given in a simple way because of the finite number of code words  $\underline{x}$. 

For the  $\text{Example 1}$  in the previous section this is:

\[W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.\]
Some paths and their path weightings

In the case of a  $($non-terminated$)$  convolutional code,  no such weight function can be given,  since there are infinitely many,  infinitely long encoded sequences  $\underline{x}$  and thus also infinitely many trellis paths.

To get a grip on this problem,  we now assume the following:

  • As a reference for the trellis diagram,  we always choose the path of the encoded sequence  $\underline{x} = \underline{0}$  and call this the  "zero path"  $\varphi_0$.
  • We now consider only paths   $\varphi_j ∈ {\it \Phi}$  that all deviate from the zero path at a given time  $t$  and return to it at some point later.
  • Although only a fraction of all paths belongs to the set  ${\it \Phi} = \{\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} \}$,  this remainder set still contains an unbounded number of paths;  the zero path  $\varphi_0$  is not one of them.


In the above trellis some paths  $\varphi_j ∈ {\it \Phi}$  are drawn:

  • The yellow path  $\varphi_1$  belongs to the sequence  $\underline{x}_1 = (11, 10, 11)$  with Hamming weight  $w_{\rm H}(\underline{x}_1) = 5$.  Thus,  the path weighting  $w(\varphi_1) = 5$.  Due to the definition of the branching time  $t$  only this single path  $\varphi_1$  has the free distance  $d_{\rm F} = 5$  to the zero path   ⇒   $A_5 = 1$.
  • For the two green paths with corresponding sequences  $\underline{x}_2 = (11, 01, 01, 11)$  and  $\underline{x}_3 = (11, 10, 00, 10, 11)$   ⇒   $w(\varphi_2) = w(\varphi_3) = 6$.  No other path exhibits the path weighting  $6$.  We take this fact into account by the coefficient  $A_6 = 2$.
  • Also drawn is the gray path  $\varphi_4$,  associated with the sequence  $\underline{x}_4 = (11, 01, 10, 01, 11)$   ⇒   $w(\varphi_4) = 7$.  Also,  the sequences  $\underline{x}_5 = (11, 01, 01, 00, 10, 11)$,  $\underline{x}_6 = (11, 10, 00, 01, 01, 11)$  and  $\underline{x}_7 = (11, 10, 00, 10, 00, 10, 11)$  have the same path weighting  "$7$"    ⇒   $A_7 = 4$.

Thus,  the path weighting enumerator function is for this example:

\[T(X) = A_5 \cdot X^5 + A_6 \cdot X^6 + A_7 \cdot X^7 + \text{...} \hspace{0.1cm}= X^5 + 2 \cdot X^6 + 4 \cdot X^7+ \text{...}\hspace{0.1cm} \hspace{0.05cm}.\]

The definition of this function  $T(X)$  follows.

$\text{Definition:}$  For the  »path weighting enumerator function«  of a convolutional code holds:

\[T(X) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm} X^{w(\varphi_j) } \hspace{0.1cm}=\hspace{0.1cm} \sum_{w\hspace{0.05cm} =\hspace{0.05cm} d_{\rm F} }^{\infty}\hspace{0.1cm} A_w \cdot X^w \hspace{0.05cm}.\]
  1.   ${\it \Phi}$  denotes the set of all paths that leave the zero path  $\varphi_0$  exactly at the specified time  $t$  and return to it  $($sometime$)$  later.
  2.   According to the second equation part,  the summands are ordered by their path weightings  $w$  where  $A_w$  denotes the number of paths with weighting  $w$.
  3.   The sum starts with  $w = d_{\rm F}$.
  4.   The path weighting  $w(\varphi_j)$  is equal to the Hamming weight  $($number of  "ones"$)$  of the encoded sequence $\underline{x}_j$  associated to the path  $\varphi_j$:  $w({\varphi_j) = w_{\rm H}(\underline {x} }_j) \hspace{0.05cm}.$
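The coefficients  $A_w$  can be found by exhaustively walking the trellis. A minimal sketch, again assuming the standard encoder with  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$  $($function names and the pruning limit are illustrative choices$)$:

```python
def out_weight(u, s):
    # output Hamming weight for input bit u in state s = (u_{i-1}, u_{i-2});
    # assumes the standard encoder G(D) = (1 + D + D^2, 1 + D^2)
    return (u ^ s[0] ^ s[1]) + (u ^ s[1])

def path_weights(max_w):
    # count all trellis paths that diverge from the zero state at time t and
    # remerge, grouped by path weighting w  (the coefficients A_w of T(X))
    A = {}
    def walk(s, w):
        if w > max_w:
            return                       # prune: every cycle adds weight
        if s == (0, 0):
            A[w] = A.get(w, 0) + 1       # path has remerged with the zero path
            return
        for u in (0, 1):
            walk((u, s[0]), w + out_weight(u, s))
    walk((1, 0), out_weight(1, (0, 0)))  # diverge with input bit u = 1
    return A

print(path_weights(8))   # → {5: 1, 6: 2, 7: 4, 8: 8}
```

The counts reproduce  $T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + 8 \cdot X^8 + \text{...}$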


Note:  The weight enumerator function  $W(X)$  defined for linear block codes and the path weight function  $T(X)$  of convolutional codes have many similarities;  however,  they are not identical.     We consider again

  • the weight enumerator function of the  $(7, 4, 3)$ Hamming code:
\[W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7},\]
  • and the path weighting enumerator function of our  "standard convolutional encoder":
\[T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7+ 8 \cdot X^8+ \text{...} \]
  1.   Noticeable is the  "$1$"  in the first equation,  which is missing in the second one.  That is:
  2.   For the linear block codes,  the reference code word  $\underline{x}_i = \underline{0}$  is counted.
  3.   But the zero encoded sequence  $\underline{x}_i = \underline{0}$  and the zero path  $\varphi_0$  are excluded by definition for convolutional codes.


$\text{Author's personal opinion:}$ 

One could have defined  $W(X)$  without the  "$1$"  as well.  Among other things,  this would have avoided that the Bhattacharyya bound for linear block codes and that for convolutional codes differ by  "$-1$".


Enhanced path weighting enumerator function


The path weighting enumerator function  $T(X)$  only provides information regarding the weights of the encoded sequence  $\underline{x}$.

  • More information is obtained if the weights of the information sequence  $\underline{u}$  are also collected.
  • One then needs two formal parameters  $X$  and  $U$,  as can be seen from the following definition.


$\text{Definition:}$  The  »enhanced path weight enumerator function«  $\rm (EPWEF)$  is:

\[T_{\rm enh}(X,\ U) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm} X^{w(\varphi_j)} \cdot U^{ { u}(\varphi_j)} \hspace{0.1cm}=\hspace{0.1cm} \sum_{w} \sum_{u}\hspace{0.1cm} A_{w, \hspace{0.05cm}u} \cdot X^w \cdot U^{u} \hspace{0.05cm}.\]

All specifications for the   $T(X)$  definition in the last section apply.  In addition,  note:

  1.   The path input weight  $u(\varphi_j)$  is equal to the Hamming weight of the information sequence  $\underline{u}_j$  associated to the path.
  2.   It is expressed as a power of the formal parameter  $U$ .
  3.   $A_{w, \ u}$  denotes the number of paths  $\varphi_j$  with path output weight  $w(\varphi_j)$  and path input weight  $u(\varphi_j)$;  the control variable of the second sum is  $u$.
  4.   Setting the formal parameter  $U = 1$  in the enhanced path weighting enumerator function yields the original weighting enumerator function  $T(X)$.


For many  $($and all relevant$)$  convolutional codes,  the above equation can be simplified further:

\[T_{\rm enh}(X,\ U) =\hspace{0.1cm} \sum_{w \ = \ d_{\rm F} }^{\infty}\hspace{0.1cm} A_w \cdot X^w \cdot U^{u} \hspace{0.05cm}.\]


$\text{Example 2:}$  Thus, the enhanced path weighting enumerator function of our standard encoder is:

\[T_{\rm enh}(X,\ U) = U \cdot X^5 + 2 \cdot U^2 \cdot X^6 + 4 \cdot U^3 \cdot X^7+ \text{ ...} \hspace{0.1cm} \hspace{0.05cm}.\]
Some paths and their path weightings

Comparing this result with the trellis shown below, we can see:

  • The yellow highlighted path  – in  $T(X)$  marked by  $X^5$  – is composed of a blue arrow  $(u_i = 1)$  and two red arrows  $(u_i = 0)$.  Thus  $X^5$  becomes the extended term  $U X^5$.
  • The sequences of the two green paths are 
$$\underline{u}_2 = (1, 1, 0, 0) \hspace{0.15cm} ⇒ \hspace{0.15cm} \underline{x}_2 = (11, 01, 01, 11),$$
$$\underline{u}_3 = (1, 0, 1, 0, 0) \hspace{0.15cm} ⇒ \hspace{0.15cm} \underline{x}_3 = (11, 10, 00, 10, 11).$$
This gives the second term  $2 \cdot U^2X^6$.
  • The gray path  $($and the three undrawn paths$)$  together make the contribution  $4 \cdot U^3X^7$.  Each of these paths contains three blue arrows   ⇒   three  "ones"  in the associated information sequence.
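The coefficients  $A_{w,\hspace{0.05cm}u}$  of the enhanced function can be obtained by the same trellis walk, additionally accumulating the input weight. A sketch under the same assumption  $($standard encoder,  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2))$:

```python
def out_weight(u, s):
    # output Hamming weight for input bit u in state s = (u_{i-1}, u_{i-2});
    # assumes the standard encoder G(D) = (1 + D + D^2, 1 + D^2)
    return (u ^ s[0] ^ s[1]) + (u ^ s[1])

def enhanced_weights(max_w):
    # count remerging trellis paths by (output weight w, input weight u),
    # i.e. the coefficients A_{w,u} of T_enh(X, U)
    A = {}
    def walk(s, w, uw):
        if w > max_w:
            return
        if s == (0, 0):
            A[(w, uw)] = A.get((w, uw), 0) + 1
            return
        for u in (0, 1):
            walk((u, s[0]), w + out_weight(u, s), uw + u)
    walk((1, 0), out_weight(1, (0, 0)), 1)   # diverge with u = 1
    return A

print(enhanced_weights(7))   # → {(5, 1): 1, (6, 2): 2, (7, 3): 4}
```

The counts reproduce  $T_{\rm enh}(X,\ U) = U \cdot X^5 + 2 \cdot U^2 \cdot X^6 + 4 \cdot U^3 \cdot X^7 + \text{...}$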


Path weighting enumerator function from state transition diagram


There is an elegant way to determine the path weighting enumerator function  $T(X)$  and its enhancement directly from the state transition diagram.  This will be demonstrated here and in the following sections using our  $\text{standard convolutional encoder}$  as an example.

First,  the state transition diagram must be redrawn for this purpose.  The graphic shows this on the left in the previous form as diagram  $\rm (A)$,  while the new diagram  $\rm (B)$  is given on the right.  It can be seen:

State transition diagram in two different variants
  1.   The state  $S_0$  is split into the start state  $S_0$  and the end state  $S_0\hspace{0.01cm}'$.  Thus,  all paths of the trellis that start in state  $S_0$  and return to it at some point can also be traced in the right graph  $\rm (B)$.  Excluded are direct transitions from  $S_0$  to  $S_0\hspace{0.01cm}'$  and thus also the zero path.
  2.   In diagram  $\rm (A)$  the transitions are distinguishable by the colors red  $($for  $u_i = 0)$  and blue  $($for  $u_i = 1)$. 
  3.   The code words  $\underline{x}_i ∈ \{00, 01, 10, 11\}$  are noted at the transitions.  In diagram  $\rm (B)$  $(00)$  is expressed  by  $X^0 = 1$  and  $(11)$  by  $X^2$.
  4.   The code words  $(01)$  and  $(10)$  are now indistinguishable,  but they are uniformly denoted by  $X$ .
  5.   Phrased another way:   The code word  $\underline{x}_i$  is now represented as  $X^w$  where  $X$  is a dummy variable associated with the output.
  6.   $w = w_{\rm H}(\underline{x}_i)$  indicates the Hamming weight of the code word  $\underline{x}_i$.  For a rate $1/2$ code,  the exponent  $w$  is either  $0, \ 1$  or  $2$.
  7.   Also diagram  $\rm (B)$  omits the color coding.  The information bit  $u_i = 1$  is now denoted by  $U^1 = U$  and bit  $u_i = 0$  by  $U^0 = 1$.  The dummy variable  $U$  is thus assigned to the input sequence  $\underline{u}$ .

Rules for manipulating the state transition diagram


The goal of our calculations is to characterize the  $($arbitrarily complicated$)$  path from  $S_0$  to  $S_0\hspace{0.01cm}'$  by the extended path weighting enumerator function  $T_{\rm enh}(X, \ U)$.  For this we need rules to simplify the graph stepwise.

»Serial transitions«

Reduction of serial transitions  (1),  parallel transitions  (2),  a ring  (3),  a feedback  (4)

Two serial connections – denoted by  $A(X, \ U)$  and  $B(X, \ U)$  – can be replaced by a single connection whose weighting function is the product of the two.

»Parallel transitions«

Two parallel connections – denoted by  $A(X, \ U)$  and  $B(X, \ U)$  – are combined into a single connection by summing their weighting functions.

»Ring«

The adjacent constellation can be replaced by a single connection, and the following applies to the replacement:

\[E(X, U) = \frac{A(X, U) \cdot B(X, U)}{1- C(X, U)} \hspace{0.05cm}.\]

»Feedback«

Due to the feedback, two states can alternate here as often as desired. For this constellation applies:

\[F(X, U) = \frac{A(X, U) \cdot B(X, U)\cdot C(X, U)}{1- C(X, U)\cdot D(X, U)} \hspace{0.05cm}.\]

The equations given here for  "ring"  and  "feedback"  are proved in  "Exercise 3.12Z".
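The ring rule can be made plausible numerically: a path traverses  $A$,  loops through  $C$  some number  $k$  of times, and leaves via  $B$,  so the total weighting is the geometric series  $A \cdot B \cdot (1 + C + C^2 + \text{...}) = A \cdot B/(1-C)$.  A tiny check with illustrative values:

```python
# numeric check of the ring rule E = A*B / (1 - C): a path passes A,
# loops through C exactly k times, then passes B (example values, |C| < 1)
A, B, C = 0.3, 0.2, 0.4

series_sum = sum(A * C**k * B for k in range(60))   # sum over loop counts k
closed_form = A * B / (1 - C)
print(abs(series_sum - closed_form) < 1e-12)        # → True
```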

$\text{Example 3:}$  The above rules are now to be applied to our standard example.  In the graphic on the left you can see the modified diagram  $\rm (B)$.

Reduction of state transitions
  • First we replace the red highlighted detour from  $S_1$  to  $S_2$  via  $S_3$  in the diagram  $\rm (B)$  by the red connection  $T_1(X, \hspace{0.05cm} U)$  drawn in the diagram  $\rm (C)$.
  • According to the upper classification,  it is a  "ring"  with the labels  $A = C = U \cdot X$  and  $B = X$, and we obtain for the first reduction function:
\[T_1(X, \hspace{0.05cm} U) = \frac{U \cdot X^2}{1- U \cdot X} \hspace{0.05cm}.\]
  • Now we combine the parallel connections highlighted in blue in diagram  $\rm (C)$  and replace them by the blue connection in diagram  $\rm (D)$.  The second reduction function is thus:
$$T_2(X, \hspace{0.03cm}U) = T_1(X, \hspace{0.03cm}U) \hspace{-0.03cm}+\hspace{-0.03cm} X\hspace{-0.03cm} $$
$$\Rightarrow \hspace{0.3cm} T_2(X, \hspace{0.05cm}U) =\hspace{-0.03cm} \frac{U X^2\hspace{-0.03cm} +\hspace{-0.03cm} X \cdot (1-UX)}{1- U X}= \frac{X}{1- U X}\hspace{0.05cm}.$$
  • The entire graph  $\rm (D)$  can then be replaced by a single connection from  $S_0$  to  $S_0\hspace{0.01cm}'$.  According to the feedback rule,  one obtains for the  "enhanced path weighting enumerator function":
\[T_{\rm enh}(X, \hspace{0.05cm}U) = \frac{(U X^2) \cdot X^2 \cdot \frac{X}{1- U X} }{1- U \cdot \frac{X}{1- U X} } = \frac{U X^5}{1- U X- U X} = \frac{U X^5}{1- 2 \cdot U X} \hspace{0.05cm}.\]
  • With the series expansion  $1/(1 \, –x) = 1 + x + x^2 + x^3 + \ \text{...} \ $  can also be written for this purpose:
\[T_{\rm enh}(X, \hspace{0.05cm}U) = U X^5 \cdot \big [ 1 + 2 \hspace{0.05cm}UX + (2 \hspace{0.05cm}UX)^2 + (2 \hspace{0.05cm}UX)^3 + \text{...} \hspace{0.05cm} \big ] \hspace{0.05cm}.\]
  • Setting the formal input variable   $U = 1$,   we obtain the  "simple path weighting enumerator function",  which alone allows statements about the weighting distribution of the output sequence  $\underline{x}$: 
\[T(X) = X^5 \cdot \big [ 1 + 2 X + 4 X^2 + 8 X^3 +\text{...}\hspace{0.05cm} \big ] \hspace{0.05cm}.\]

We have already read the same result from the trellis diagram in the section  "Path weighting enumerator function".  There were

    • one yellow path with path weighting  $5$,
    • two green paths with weighting  $6$,
    • four paths with weighting  $7$,  among them the gray one,
    • and so on.
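The series expansion of the closed form can also be produced by polynomial long division. The helper below is a generic sketch  $($coefficient lists, lowest order first$)$  applied to  $T(X) = X^5/(1-2X)$:

```python
def series_coeffs(num, den, n):
    # first n power-series coefficients of num(X) / den(X) by long division;
    # num and den are coefficient lists, lowest order first
    num = num[:] + [0] * n
    out = []
    for i in range(n):
        c = num[i] / den[0]
        out.append(c)
        for j, d in enumerate(den):
            num[i + j] -= c * d      # subtract c * X^i * den(X)
    return out

# T(X) = X^5 / (1 - 2X)  ->  X^5 + 2*X^6 + 4*X^7 + 8*X^8 + ...
coeffs = series_coeffs([0, 0, 0, 0, 0, 1], [1, -2], 9)
print(coeffs)   # → [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 4.0, 8.0]
```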


Block error probability vs. burst error probability


The simple model according to the sketch is valid for linear block codes as well as for convolutional codes.

Simple transmission model including encoding and decoding


»Block codes«

For block codes denote  $\underline{u} = (u_1, \ \text{...} \hspace{0.05cm}, \ u_i, \ \text{...} \hspace{0.05cm}, \ u_k)$  and  $\underline{v} = (v_1, \ \text{...} \hspace{0.05cm}, v_i, \ \text{...} \hspace{0.05cm} \ , \ v_k)$  the information blocks at the input and output of the system.

Thus,  the following descriptive variables can be specified:

  • the »block error probability«   ${\rm Pr(block\:error)} = {\rm Pr}(\underline{v} ≠ \underline{u}),$
  • the »bit error probability«   ${\rm Pr(bit\:error)} = {\rm Pr}(v_i ≠ u_i).$

$\text{Please note:}$  In real transmission systems,  due to thermal noise the following always applies:

$${\rm Pr(bit\:error)} > 0\hspace{0.05cm},\hspace{0.5cm}{\rm Pr(block\:error)} > {\rm Pr(bit\:error)} \hspace{0.05cm}.$$

A simple attempt at an explanation:   If the decoder decides exactly one bit wrong in each block of length  $k$,  then

  • the average bit error probability is   ${\rm Pr(bit\:error)}= 1/k$,
  • while the block error probability is   ${\rm Pr(block\:error)}\equiv 1$.



»Convolutional codes«

For convolutional codes,  the block error probability cannot be specified,  since here   $\underline{u} = (u_1, \ u_2, \ \text{...} \hspace{0.05cm})$   and   $\underline{\upsilon} = (v_1, \ v_2, \ \text{...} \hspace{0.05cm})$   represent sequences.

Zero path  ${\it \varphi}_0$  and deviation paths  ${\it \varphi}_i$



Even the smallest possible code parameter  $k = 1$  leads here to the sequence length  $k \hspace{0.05cm}' → ∞$,  and

  • the block error probability would always result in  ${\rm Pr(block\hspace{0.1cm} error)}\equiv 1$,
  • even if the bit error probability is extremely small $\hspace{0.05cm}$ $($but non-zero$)$.


$\text{Definition:}$  For the  »burst error probability«  of a convolutional code holds:

\[{\rm Pr(burst\:error)} = {\rm Pr}\big \{ {\rm Decoder\hspace{0.15cm} leaves\hspace{0.15cm} the\hspace{0.15cm} correct\hspace{0.15cm}path}\hspace{0.15cm}{\rm at\hspace{0.15cm}time \hspace{0.10cm}\it t}\big \} \hspace{0.05cm}.\]
  • To simplify the notation for the following derivation,  we always assume that the zero sequence  $\underline{0}$  was sent;  it is shown in red in the drawn trellis as the  "zero path"  $\varphi_0$.
  • All other paths  $\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} $  $($and many more$)$  leave  $\varphi_0$  at time  $t$.
  • They all belong to the path set  ${\it \Phi}$,  which characterizes the event  "the Viterbi decoder leaves the correct path at time  $t$".  This probability is calculated in the next section.


Burst error probability and Bhattacharyya bound


Calculation of the burst error probability

As in the earlier chapter  "Bounds for block error probability",  we start from the pairwise error probability  ${\rm Pr}\big [\varphi_0 → \varphi_i \big]$  that the decoder could select the path  $\varphi_i$  instead of the path  $\varphi_0$.


All considered paths  $\varphi_i$  leave the zero path  $\varphi_0$  at time  $t$;  thus they all belong to the path set ${\it \Phi}$.

The sought  "burst error probability"  is equal to the probability of the following union of events:

$${\rm Pr(burst\:error)}= {\rm Pr}\left (\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}1}\big] \hspace{0.05cm}\cup\hspace{0.05cm}\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}2}\big]\hspace{0.05cm}\cup\hspace{0.05cm} \text{... }\hspace{0.05cm} \right )$$
$$\Rightarrow \hspace{0.3cm}{\rm Pr(burst\:error)}= {\rm Pr} \left ( \cup_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} \hspace{0.15cm} \big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big] \right )\hspace{0.05cm}.$$

This probability can be upper-bounded by the sum of the pairwise error probabilities  $($"union bound"$)$:

\[{\rm Pr(burst\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.15cm} {\rm Pr}\big [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big ] = {\rm Pr(union \hspace{0.15cm}bound)} \hspace{0.05cm}.\]

Each pairwise error probability can in turn be estimated with the Bhattacharyya bound  $($parameter  $\beta)$:

\[{\rm Pr}\big [\underline {0} \mapsto \underline {x}_{\hspace{0.02cm}i}\big ] \le \beta^{w_{\rm H}(\underline{x}_{\hspace{0.02cm}i})}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} {\rm Pr}\left [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\right ] \le \hspace{0.05cm} \beta^{w(\varphi_i)}\hspace{0.05cm}.\]

Here,

  • $w_{\rm H}(\underline{x}_i)$  denotes the Hamming weight of the possible encoded sequence  $\underline{x}_i$,
  • $w(\varphi_i)$  denotes the path weighting of the corresponding path  $\varphi_i ∈ {\it \Phi}$.


By  »summing over all paths«  and comparing with the (simple)  $\text{path weighting function}$  $T(X)$  we get the result:

\[{\rm Pr(burst\:error)} \le T(X = \beta),\hspace{0.5cm}{\rm with}\hspace{0.5cm} T(X) = \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi} }\hspace{0.15cm} \hspace{0.05cm} X^{w(\varphi_i)}\hspace{0.05cm}.\]


$\text{Example 4: }$  For our standard encoder with

  • code rate  $R = 1/2$,
  • memory  $m = 2$,
  • D transfer function  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,


we have obtained the following  $\text{path weighting enumerator function}$:

\[T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + \ \text{...} \hspace{0.1cm} = X^5 \cdot ( 1 + 2 \cdot X + 4 \cdot X^2+ \ \text{...} \hspace{0.1cm}) \hspace{0.05cm}.\]
  • Using the series expansion   $1/(1-x) = 1 + x + x^2 + x^3 + \ \text{...} \hspace{0.15cm} $,  this can also be written as:
\[T(X) = \frac{X^5}{1-2 \cdot X} \hspace{0.05cm}.\]
  • The BSC model provides the following Bhattacharyya bound with the falsification probability  $\varepsilon$:
\[{\rm Pr(burst\:error)} \le T(X = \beta) = T\big ( X = 2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)} \big ) = \frac{(2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)})^5}{1- 4\cdot \sqrt{\varepsilon \cdot (1-\varepsilon)}}\hspace{0.05cm}.\]
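Evaluating this bound numerically is straightforward. A sketch for the standard encoder's  $T(X) = X^5/(1-2X)$,  valid only while  $2\beta < 1$:

```python
import math

def burst_error_bound(eps):
    # Bhattacharyya bound Pr(burst error) <= T(X = beta) for the standard
    # encoder; the geometric series converges only while 2*beta < 1
    beta = 2 * math.sqrt(eps * (1 - eps))
    if 2 * beta >= 1:
        raise ValueError("bound diverges: 2*beta >= 1")
    return beta**5 / (1 - 2 * beta)

print(burst_error_bound(0.01))   # roughly 5e-4 for eps = 1 %
```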


Bit error probability and Viterbi bound


Finally,  an upper bound is given for the bit error probability.  According to the graph,  we proceed as in  [Liv10][1]  and assume the following conditions:

  • The zero sequence  $\underline{x} = \underline{0}$ was sent   ⇒   Path  $\varphi_0$.
  • The duration of a path deviation  $($"error burst duration"$)$  is denoted by  $L$.
  • The distance between two bursts  $($"inter-burst time"$)$  is called  $N$.
  • Let the Hamming weight of the error burst be  $H$.


For a rate $1/n$  convolutional code   ⇒   $k = 1$   ⇒   one information bit per clock,  the expected values  ${\rm E}\big[L \big]$,  ${\rm E}\big[N \big]$  and  ${\rm E}\big[H\big]$  of the random variables defined above can be used to give an upper bound for the bit error probability:

For the definition of the description variables  $L$,  $N$  and $H$
\[{\rm Pr(bit\:error)} = \frac{{\rm E}\big[H\big]}{{\rm E}[L] + {\rm E}\big[N\big]}\hspace{0.15cm} \le \hspace{0.15cm} \frac{{\rm E}\big[H\big]}{{\rm E}\big[N\big]} \hspace{0.05cm}.\]

It is assumed that

  1.   the  (mean)  duration of an error burst is in practice much smaller than the expected distance between two bursts,
  2.   the  $($mean$)$  inter-burst time  ${\rm E}\big[N\big]$  is equal to the inverse of the burst error probability,
  3.   the expected value in the numerator is estimated as follows:
\[{\rm E}\big[H \big] \le \frac{1}{\rm Pr(burst\:error)}\hspace{0.1cm} \cdot \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} \hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)} \hspace{0.05cm}.\]

In deriving this bound,  the  "pairwise error probability"  ${\rm Pr}\big [\varphi_0 → \varphi_i \big]$  and the  "Bhattacharyya estimation"  are used.  Thus we obtain with

  • the path input weighting  $u(\varphi_i),$
  • the path output weighting  $w(\varphi_i),$ and
  • the Bhattacharyya parameter  $\beta$


the following estimation for the bit error probability and refer to it as the  »Viterbi bound«:

\[{\rm Pr(bit\:error)}\le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} \hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)} \hspace{0.05cm}.\]

This intermediate result can also be represented in another form. We recall the  $\text{enhanced path weighting enumerator function}$

\[T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)} \hspace{0.05cm}.\]

Differentiating this function with respect to the dummy input variable  $U$,  we get

\[\frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)-1} \hspace{0.05cm}.\]

Finally,  if we set  $U = 1$  for the dummy input variable,  then we see the connection to the above result:

\[\left [ \frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{ U=1}} = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot X^{w(\varphi_j)} \hspace{0.05cm}.\]

$\text{Conclusion:}$  The  »bit error probability«  of a convolutional code can be estimated using the extended path weighting enumerator function in closed form:

\[{\rm Pr(bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} } \hspace{0.05cm}.\]
  • One speaks of the  »Viterbi bound«.  Here one differentiates the enhanced path weighting enumerator function with respect to the second parameter  $U$  and then sets 
$$X = \beta,\hspace{0.4cm} U = 1.$$
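For the standard encoder with  $T_{\rm enh}(X, U) = U X^5/(1-2UX)$,  the derivative with respect to  $U$  is  $X^5/(1-2UX)^2$  by the quotient rule,  so setting  $U = 1$,  $X = \beta$  gives the closed form  $\beta^5/(1-2\beta)^2$.  A numeric sketch for the BSC with Bhattacharyya parameter  $\beta = 2\sqrt{\varepsilon(1-\varepsilon)}$:

```python
import math

def viterbi_bound(eps):
    # d/dU [U X^5 / (1 - 2UX)] = X^5 / (1 - 2UX)^2;  set U = 1, X = beta
    beta = 2 * math.sqrt(eps * (1 - eps))
    if 2 * beta >= 1:
        raise ValueError("bound diverges: 2*beta >= 1")
    return beta**5 / (1 - 2 * beta)**2

print(viterbi_bound(0.01))   # roughly 8.6e-4 for eps = 1 %
```

As expected, this bit error bound is somewhat larger than the corresponding burst error bound  $\beta^5/(1-2\beta)$,  since each burst can falsify several bits.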



$\text{Example 5:}$  The graph illustrates the good correction capability   ⇒   small bit error rate  $\rm (BER)$  of convolutional codes at the  $\text{$\rm AWGN$  channel}$.

AWGN bit error probability of convolutional codes
  • Red circles denote the  $\rm BER$  for our  "rate  $1/2$  standard encoder"  with memory  $m = 2$.
  • Green crosses mark a convolutional code with  $m = 6$,  the so-called  "industry standard code".
  • Codes with large memory  $m$  lead to large gains over uncoded transmission  $($dashed curve$)$.







Note:  

In  "Exercise 3.14",  the  "Viterbi bound"  and the  "Bhattacharyya bound"  are evaluated numerically for the  "rate $1/2$  standard encoder"  and the  $\text{BSC model}$.


Exercises for the chapter


Exercise 3.12: Path Weighting Function

Exercise 3.12Z: Ring and Feedback

Exercise 3.13: Path Weighting Function again

Exercise 3.14: Error Probability Bounds

References

  1. Liva, G.:  Channel Coding.  Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.