== Free distance vs. minimum distance ==
<br>
An important parameter regarding the error probability of linear block codes is the&nbsp; "minimum distance"&nbsp; between two code words &nbsp; $\underline{x}$ &nbsp; and &nbsp; $\underline{x}\hspace{0.05cm}'$:
  
 
::<math>d_{\rm min}(\mathcal{C}) =
\min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} {\underline{x}\hspace{0.03cm}'} }}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')
= \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} {\underline{0}} }}\hspace{0.1cm}w_{\rm H}(\underline{x})
\hspace{0.05cm}.</math>
  
*The second part of the equation arises from the fact that every linear code also includes the zero word &nbsp; &rArr; &nbsp; $(\underline{0})$.

*It is therefore convenient to set&nbsp; $\underline{x}\hspace{0.05cm}' = \underline{0}$,&nbsp; so that the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding| $\text{Hamming  distance}$]] &nbsp; $d_{\rm H}(\underline{x}, \ \underline{0})$ &nbsp; gives the same result as the Hamming weight &nbsp; $w_{\rm H}(\underline{x})$.<br>


{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; The table shows the&nbsp; $16$&nbsp; code words of an exemplary&nbsp; [[Channel_Coding/Examples_of_Binary_Block_Codes#Hamming_Codes|$\text{HC (7, 4, 3)}$]]&nbsp; $($see &nbsp;$\text{Examples 6 and 7})$.<br>

[[File:P ID2684 KC T 3 5 S1 neu.png|right|frame| Code words of the considered&nbsp; $(7, 4, 3)$ Hamming code|class=fit]]

You can see:
*All code words except the null word&nbsp; $(\underline{0})$&nbsp; contain at least three ones &nbsp; &#8658; &nbsp; $d_{\rm min} = 3$.

*There are
:#seven code words with three&nbsp; "ones"&nbsp; $($highlighted in yellow$)$,
:#seven with four&nbsp; "ones"&nbsp; $($highlighted in green$)$,&nbsp; and
:#one each with no&nbsp; "ones"&nbsp; and seven&nbsp; "ones".}}
 
<br clear=all>
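The weight distribution of&nbsp; $\text{Example 1}$&nbsp; can be verified with a few lines of code.&nbsp; The following minimal Python sketch is an illustration only:&nbsp; the systematic generator matrix used below is an assumption,&nbsp; but any equivalent generator matrix of the&nbsp; $(7, 4, 3)$&nbsp; Hamming code yields the same weight distribution.

<pre>
# Sketch: weight distribution and minimum distance of a (7, 4, 3) Hamming code.
# Assumption: systematic generator matrix G = [I_4 | P]; any valid choice of the
# parity part P gives the same weight distribution 1, 7, 7, 1.
from itertools import product

G = [[1,0,0,0, 1,1,0],
     [0,1,0,0, 1,0,1],
     [0,0,1,0, 0,1,1],
     [0,0,0,1, 1,1,1]]

weights = []
for u in product([0,1], repeat=4):                   # all 2^4 information words
    x = [sum(ui*gi for ui, gi in zip(u, col)) % 2    # code word x = u * G (mod 2)
         for col in zip(*G)]
    weights.append(sum(x))                           # Hamming weight w_H(x)

print(min(w for w in weights if w > 0))  # d_min = 3  (zero word excluded)
print(sorted(weights))                   # [0, 3,...,3, 4,...,4, 7]  =>  W(X) = 1 + 7*X^3 + 7*X^4 + X^7
</pre>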
  
The&nbsp; &raquo;<b>free distance</b>&laquo;&nbsp; $d_{\rm F}$&nbsp; of a convolutional code&nbsp; $(\mathcal{CC})$&nbsp; is formulaically no different from the minimum distance of a linear block code:
  
 
::<math>d_{\rm F}(\mathcal{CC}) =
\min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} {\underline{x}\hspace{0.03cm}'} }}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')
= \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} {\underline{0}} }}\hspace{0.1cm}w_{\rm H}(\underline{x})
\hspace{0.05cm}.</math>
  
[[File:P ID2685 KC T 3 5 S1c v1.png|right|frame| Some paths with&nbsp; $w(\underline{x}) = d_{\rm F} = 5$|class=fit]]

In the literature,&nbsp; instead of&nbsp; "$d_{\rm F}$",&nbsp; sometimes&nbsp; "$d_{&#8734;}$"&nbsp; is also used.


The graph shows three of the infinitely many paths with the minimum Hamming weight&nbsp; $w_{\rm H, \ min}(\underline{x}) = d_{\rm F} = 5$.<br>


*The major difference to the minimum distance&nbsp; $d_{\rm min}$&nbsp; of other codes is
:#that for convolutional codes not information words and code words are to be considered,&nbsp;
:#but sequences with the property&nbsp; [[Channel_Coding/Basics_of_Convolutional_Coding#Requirements_and_definitions| &raquo;$\text{semi&ndash;infinite}$&laquo;]].&nbsp;

*Each encoded sequence&nbsp; $\underline{x}$&nbsp; describes a path through the trellis.

*The&nbsp; "free distance"&nbsp; is the smallest possible Hamming weight of such a path&nbsp; $($except for the zero path$)$.<br>
 
  
 
== Path weighting enumerator function==
<br>
For any linear block code,&nbsp; a weight enumerator function can be given in a simple way because of the finite number of code words&nbsp; $\underline{x}$.

For the&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Free_distance_vs._minimum_distance|$\text{Example 1}$]]&nbsp; in the previous section this is:
  
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.</math>
[[File:P ID2686 KC T 3 5 S2a v1.png|right|frame|Some paths and their path weightings|class=fit]]

In the case of a&nbsp; $($non-terminated$)$&nbsp; convolutional code,&nbsp; no such weight function can be given,&nbsp; since there are infinitely many,&nbsp; infinitely long encoded sequences&nbsp; $\underline{x}$&nbsp; and thus also infinitely many trellis paths.

To get a grip on this problem,&nbsp; we now assume the following:
*As a reference for the trellis diagram,&nbsp; we always choose the path of the encoded sequence&nbsp; $\underline{x} = \underline{0}$&nbsp; and call this the&nbsp; "zero path"&nbsp; $\varphi_0$.<br>

*We now consider only paths&nbsp; $\varphi_j &#8712; {\it \Phi}$&nbsp; that all deviate from the zero path at a given time&nbsp; $t$&nbsp; and return to it at some point later.<br>

*Although only a fraction of all paths belong to this set,&nbsp; ${\it \Phi} = \{\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} \}$&nbsp; still contains an unbounded number of paths;&nbsp; $\varphi_0$&nbsp; is not one of them.<br>
 
  
 
In the above trellis some paths&nbsp; $\varphi_j &#8712; {\it \Phi}$&nbsp; are drawn:
*The yellow path&nbsp; $\varphi_1$&nbsp; belongs to the sequence&nbsp; $\underline{x}_1 = (11, 10, 11)$&nbsp; with Hamming weight&nbsp; $w_{\rm H}(\underline{x}_1) = 5$.&nbsp; Thus,&nbsp; the path weighting is&nbsp; $w(\varphi_1) = 5$&nbsp; as well.&nbsp; Due to the definition of the branching time&nbsp; $t$,&nbsp; only this single path&nbsp; $\varphi_1$&nbsp; has the free distance&nbsp; $d_{\rm F} = 5$&nbsp; to the zero path &nbsp; &#8658; &nbsp; $A_5 = 1$.<br>

*For the two green paths with corresponding sequences&nbsp; $\underline{x}_2 = (11, 01, 01, 11)$&nbsp; and&nbsp; $\underline{x}_3 = (11, 10, 00, 10, 11)$ &nbsp; &rArr; &nbsp; $w(\varphi_2) = w(\varphi_3) = 6$.&nbsp; No other path exhibits the path weighting&nbsp; $6$.&nbsp; We take this fact into account by the coefficient&nbsp; $A_6 = 2$.<br>

*Also drawn is the gray path&nbsp; $\varphi_4$,&nbsp; associated with the sequence&nbsp; $\underline{x}_4 = (11, 01, 10, 01, 11)$ &nbsp; &#8658; &nbsp; $w(\varphi_4) = 7$.&nbsp; Also,&nbsp; the sequences&nbsp; $\underline{x}_5 = (11, 01, 01, 00, 10, 11)$,&nbsp; $\underline{x}_6 = (11, 10, 00, 01, 01, 11)$&nbsp; and&nbsp; $\underline{x}_7 = (11, 10, 00, 10, 00, 10, 11)$&nbsp; have the same path weighting&nbsp; "$7$" &nbsp; &#8658; &nbsp; $A_7 = 4$.<br>
  
Thus,&nbsp; the path weighting enumerator function is for this example:
  
 
::<math>T(X) = A_5 \cdot X^5 + A_6 \cdot X^6  + A_7 \cdot X^7 + \text{...} \hspace{0.1cm}=  X^5 + 2 \cdot X^6  + 4 \cdot X^7+ \text{...}\hspace{0.1cm}
\hspace{0.05cm}.</math>
  
The definition of this function&nbsp; $T(X)$&nbsp; follows.<br>
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; For the&nbsp; &raquo;<b>path weighting enumerator function</b>&laquo;&nbsp; of a convolutional code holds:
  
 
::<math>T(X) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm}  X^{w(\varphi_j) } \hspace{0.1cm}=\hspace{0.1cm} \sum_{w\hspace{0.05cm}  =\hspace{0.05cm} d_{\rm F} }^{\infty}\hspace{0.1cm}  A_w \cdot X^w
\hspace{0.05cm}.</math>
  
# &nbsp; ${\it \Phi}$&nbsp; denotes the set of all paths that leave the zero path&nbsp; $\varphi_0$&nbsp; exactly at the specified time&nbsp; $t$&nbsp; and return to it&nbsp; $($sometime$)$&nbsp; later.<br>
# &nbsp; According to the second equation part,&nbsp; the summands are ordered by their path weightings&nbsp; $w$,&nbsp; where&nbsp; $A_w$&nbsp; denotes the number of paths with weighting&nbsp; $w$.
# &nbsp; The sum starts with&nbsp; $w = d_{\rm F}$.<br>
# &nbsp; The path weighting&nbsp; $w(\varphi_j)$&nbsp; is equal to the Hamming weight&nbsp; $($number of&nbsp; "ones"$)$&nbsp; of the encoded sequence&nbsp; $\underline{x}_j$&nbsp; associated with the path&nbsp; $\varphi_j$: &nbsp; $w({\varphi_j) = w_{\rm H}(\underline {x} }_j) \hspace{0.05cm}.$}}<br>
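The coefficients&nbsp; $A_w$&nbsp; of this definition can also be determined by a small program.&nbsp; The following Python sketch is an illustration only&nbsp; $($the helper names are hypothetical$)$;&nbsp; it assumes our standard encoder with&nbsp; $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,&nbsp; whose trellis states are the register contents,&nbsp; and counts all paths that leave the zero state and return to it for the first time,&nbsp; grouped by their Hamming weight:

<pre>
# Sketch: path weight enumeration for the standard rate-1/2, memory-2 encoder.
# Assumption: G(D) = (1 + D + D^2, 1 + D^2); state = (u_{i-1}, u_{i-2}).
from collections import Counter

def next_step(state, u):
    s1, s2 = state
    y = (u ^ s1 ^ s2, u ^ s2)          # code bits for g1 = 111, g2 = 101
    return (u, s1), y[0] + y[1]        # next state, weight of the output pair

def path_weights(w_max):
    """Count paths that leave the all-zero state and first return to it,
    grouped by total Hamming weight w <= w_max  =>  coefficients A_w."""
    A = Counter()
    state0, w0 = next_step((0, 0), 1)  # mandatory deviation u = 1 at time t
    frontier = [(state0, w0)]
    while frontier:
        new_frontier = []
        for state, w in frontier:
            for u in (0, 1):
                nxt, dw = next_step(state, u)
                if w + dw > w_max:
                    continue
                if nxt == (0, 0):      # first return to the zero state
                    A[w + dw] += 1
                else:
                    new_frontier.append((nxt, w + dw))
        frontier = new_frontier
    return dict(sorted(A.items()))

print(path_weights(8))                 # {5: 1, 6: 2, 7: 4, 8: 8}
</pre>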
 
  
<u>Note:</u>&nbsp; The weight enumerator function&nbsp; $W(X)$&nbsp; defined for linear block codes and the path weighting enumerator function&nbsp; $T(X)$&nbsp; of convolutional codes have many similarities;&nbsp; however,&nbsp; they are not identical. &nbsp; We consider again
*the weight enumerator function of the&nbsp; $(7, 4, 3)$ Hamming code:
  
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7},</math>
 
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7},</math>
  
*and the path weighting enumerator function of our&nbsp; "standard convolutional encoder":
  
 
::<math>T(X) =  X^5 + 2 \cdot X^6  + 4 \cdot X^7+ 8 \cdot X^8+ \text{...} </math>
  
# &nbsp; Noticeable is the&nbsp; "$1$"&nbsp; in the first equation,&nbsp; which is missing in the second one.&nbsp; That is:
# &nbsp; For the linear block codes,&nbsp; the reference code word&nbsp; $\underline{x}_i = \underline{0}$&nbsp; is counted.
# &nbsp; But the zero encoded sequence&nbsp; $\underline{x}_i = \underline{0}$&nbsp; and the zero path&nbsp; $\varphi_0$&nbsp; are excluded by definition for convolutional codes.
 
  
 
{{GraueBox|TEXT=
$\text{Author's personal opinion:}$&nbsp;

One could have defined&nbsp; $W(X)$&nbsp; without the&nbsp; "$1$"&nbsp; as well.&nbsp; This would have avoided,&nbsp; among other things,&nbsp; that the Bhattacharyya bound for linear block codes and that for convolutional codes differ by&nbsp; "$-1$",&nbsp; as can be seen from the following equations:<br>

*[[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya| "Bhattacharyya bound for&nbsp; $\text{linear block codes}$"]]: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ${\rm Pr(block\:error)} \le W(X = \beta) -1
  \hspace{0.05cm},$

*[[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Burst_error_probability_and_Bhattacharyya_bound| "Bhattacharyya bound for&nbsp; $\text{convolutional codes}$"]]: &nbsp; &nbsp;${\rm Pr(burst\:error)} \le T(X = \beta)
  \hspace{0.05cm}.$}}
  
== Enhanced path weighting enumerator function ==
<br>
The path weighting enumerator function&nbsp; $T(X)$&nbsp; only provides information regarding the weights of the encoded sequence&nbsp; $\underline{x}$.
*More information is obtained if the weights of the information sequence&nbsp; $\underline{u}$&nbsp; are also collected.

*One then needs two formal parameters&nbsp; $X$&nbsp; and&nbsp; $U$,&nbsp; as can be seen from the following definition.<br>
  
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; The&nbsp; &raquo;<b>enhanced path weighting enumerator function</b>&laquo;&nbsp; $\rm (EPWEF)$&nbsp; is:

::<math>T_{\rm enh}(X,\ U) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm}  X^{w(\varphi_j)} \cdot U^{ { u}(\varphi_j)}  \hspace{0.1cm}=\hspace{0.1cm} \sum_{w} \sum_{u}\hspace{0.1cm}  A_{w, \hspace{0.05cm}u} \cdot X^w  \cdot U^{u}
\hspace{0.05cm}.</math>
  
All specifications for the&nbsp; $T(X)$&nbsp; definition in the last section apply.&nbsp; In addition,&nbsp; note:
# &nbsp; The path input weight&nbsp; $u(\varphi_j)$&nbsp; is equal to the Hamming weight of the information sequence&nbsp; $\underline{u}_j$&nbsp; associated with the path.
# &nbsp; It is expressed as a power of the formal parameter&nbsp; $U$.<br>
# &nbsp; $A_{w, \ u}$&nbsp; denotes the number of paths&nbsp; $\varphi_j$&nbsp; with path output weight&nbsp; $w(\varphi_j)$&nbsp; and path input weight&nbsp; $u(\varphi_j)$.&nbsp; The control variable for the second sum is&nbsp; $u$.<br>
# &nbsp; Setting the formal parameter&nbsp; $U = 1$&nbsp; in the enhanced path weighting enumerator function yields the original path weighting enumerator function&nbsp; $T(X)$.<br>


For many&nbsp; $($and all relevant$)$&nbsp; convolutional codes,&nbsp; the upper equation can still be simplified:
  
::<math>T_{\rm enh}(X,\ U) =\hspace{0.1cm} \sum_{w \ = \ d_{\rm F} }^{\infty}\hspace{0.1cm}  A_w \cdot X^w \cdot U^{u}
\hspace{0.05cm}.</math>}}<br>
  
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; Thus,&nbsp; the enhanced path weighting enumerator function of our standard encoder is:

::<math>T_{\rm enh}(X,\ U) =  U \cdot X^5 + 2  \cdot U^2 \cdot X^6  + 4  \cdot  U^3 \cdot X^7+ \text{ ...} \hspace{0.1cm}
\hspace{0.05cm}.</math>
  
[[File:P ID2702 KC T 3 5 S2a v1.png|right|frame|Some paths and their path weightings|class=fit]]
Comparing this result with the trellis shown on the right,&nbsp; we can see:

*The yellow highlighted path&nbsp; &ndash; in&nbsp; $T(X)$&nbsp; marked by&nbsp; $X^5$&nbsp; &ndash;&nbsp; is composed of a blue arrow&nbsp; $(u_i = 1)$&nbsp; and two red arrows&nbsp; $(u_i = 0)$.&nbsp; Thus&nbsp; $X^5$&nbsp; becomes the extended term&nbsp; $U  X^5$.<br>

*The sequences of the two green paths are
:$$\underline{u}_2 = (1, 1, 0, 0) \hspace{0.15cm} &#8658; \hspace{0.15cm} \underline{x}_2 = (11, 01, 01, 11),$$
:$$\underline{u}_3 = (1, 0, 1, 0, 0) \hspace{0.15cm} &#8658; \hspace{0.15cm} \underline{x}_3 = (11, 10, 00, 10, 11).$$
:This gives the second term&nbsp; $2 \cdot U^2X^6$.<br>

*The gray path&nbsp; $($and the three undrawn paths$)$&nbsp; together make the contribution&nbsp; $4 \cdot U^3X^7$.&nbsp; Each of these paths contains three blue arrows &nbsp; &#8658; &nbsp; three&nbsp; "ones"&nbsp; in the associated information sequence.<br>}}
  
  
 
== Path weighting enumerator function from state transition diagram ==
<br>
There is an elegant way to determine the path weighting enumerator function&nbsp; $T(X)$&nbsp; and its enhancement directly from the state transition diagram.&nbsp; This will be demonstrated here and in the following sections using our&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#State_definition_for_a_memory_register|$\text{standard convolutional encoder}$]]&nbsp; as an example.<br>
 
 
First,&nbsp; the state transition diagram must be redrawn for this purpose.&nbsp; The graphic shows this on the left in the previous form as diagram&nbsp; $\rm (A)$,&nbsp; while the new diagram&nbsp; $\rm (B)$&nbsp; is given on the right.&nbsp; It can be seen:

[[File:KC_T_3_5_Diagram_v3.png|right|frame|State transition diagram in two different variants|class=fit]]

# &nbsp; The state&nbsp; $S_0$&nbsp; is split into the start state&nbsp; $S_0$&nbsp; and the end state&nbsp; $S_0\hspace{0.01cm}'$.&nbsp; Thus,&nbsp; all paths of the trellis that start in state&nbsp; $S_0$&nbsp; and return to it at some point can also be traced in the right graph&nbsp; $\rm (B)$.&nbsp; Excluded are direct transitions from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$&nbsp; and thus also the zero path.<br>
# &nbsp; In diagram&nbsp; $\rm (A)$&nbsp; the transitions are distinguishable by the colors red&nbsp; $($for&nbsp; $u_i = 0)$&nbsp; and blue&nbsp; $($for&nbsp; $u_i = 1)$.
# &nbsp; The code words&nbsp; $\underline{x}_i &#8712; \{00, 01, 10, 11\}$&nbsp; are noted at the transitions.&nbsp; In diagram&nbsp; $\rm (B)$,&nbsp; $(00)$&nbsp; is expressed by&nbsp; $X^0 = 1$&nbsp; and&nbsp; $(11)$&nbsp; by&nbsp; $X^2$.
# &nbsp; The code words&nbsp; $(01)$&nbsp; and&nbsp; $(10)$&nbsp; are now indistinguishable,&nbsp; but they are uniformly denoted by&nbsp; $X$.<br>
# &nbsp; Phrased another way: &nbsp; The code word&nbsp; $\underline{x}_i$&nbsp; is now represented as&nbsp; $X^w$,&nbsp; where&nbsp; $X$&nbsp; is a dummy variable associated with the output.
# &nbsp; $w = w_{\rm H}(\underline{x}_i)$&nbsp; indicates the Hamming weight of the code word&nbsp; $\underline{x}_i$.&nbsp; For a rate&nbsp; $1/2$&nbsp; code,&nbsp; the exponent&nbsp; $w$&nbsp; is either&nbsp; $0, \ 1$&nbsp; or&nbsp; $2$.<br>
# &nbsp; Also diagram&nbsp; $\rm (B)$&nbsp; omits the color coding.&nbsp; The information bit&nbsp; $u_i = 1$&nbsp; is now denoted by&nbsp; $U^1 = U$&nbsp; and bit&nbsp; $u_i = 0$&nbsp; by&nbsp; $U^0 = 1$.&nbsp; The dummy variable&nbsp; $U$&nbsp; is thus assigned to the input sequence&nbsp; $\underline{u}$.<br>
  
 
== Rules for manipulating the state transition diagram ==
<br>
The goal of our calculations will be to characterize the&nbsp; $($arbitrarily complicated$)$&nbsp; path from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$&nbsp; by the enhanced path weighting enumerator function&nbsp; $T_{\rm enh}(X, \ U)$.&nbsp; For this we need rules to simplify the graph stepwise.<br>

&raquo;'''Serial transitions'''&laquo;
[[File:KC_T_3_5_Einzelteile_neu.png|right|frame|Reduction of serial transitions&nbsp; '''(1)''',&nbsp; parallel transitions&nbsp; '''(2)''',&nbsp; a ring&nbsp; '''(3)''',&nbsp; a feedback&nbsp; '''(4)''' ]]
  
 
Two serial connections&nbsp; &ndash;&nbsp; denoted by&nbsp; $A(X, \ U)$&nbsp; and&nbsp; $B(X, \ U)$&nbsp; &ndash;&nbsp; can be replaced by a single connection with the product of these ratings.<br>

&raquo;'''Parallel transitions'''&laquo;
 
  
 
Two parallel connections&nbsp; &ndash;&nbsp; denoted by&nbsp; $A(X, \ U)$&nbsp; and&nbsp; $B(X, \ U)$&nbsp; &ndash;&nbsp; are combined by the sum of their valuation functions.<br>

&raquo;'''Ring'''&laquo;
 
  
 
The adjacent constellation can be replaced by a single connection,&nbsp; and the following applies to the replacement:

::<math>E(X, U) =  \frac{A(X, U) \cdot B(X, U)}{1- C(X, U)}
\hspace{0.05cm}.</math>
&raquo;'''Feedback'''&laquo;
 
  
 
Due to the feedback,&nbsp; two states can alternate here as often as desired.&nbsp; With the incoming connection&nbsp; $A$,&nbsp; the forward connection&nbsp; $B$,&nbsp; the outgoing connection&nbsp; $C$&nbsp; and the feedback connection&nbsp; $D$,&nbsp; the following applies to this constellation&nbsp; $($consistent with its application in&nbsp; $\text{Example 3})$:

::<math>F(X, U) =  \frac{A(X, U) \cdot B(X, U) \cdot C(X, U)}{1- B(X, U) \cdot D(X, U)}
\hspace{0.05cm}.</math>
 
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp;
The above rules are now to be applied to our standard example.&nbsp; In the graphic on the left you can see the modified diagram&nbsp; $\rm (B)$.

[[File:EN_KC_T_3_5_S48.png|right|frame|Reduction of state transitions|class=fit]]
  
*First we replace the red highlighted detour from&nbsp; $S_1$&nbsp; to&nbsp; $S_2$&nbsp; via&nbsp; $S_3$&nbsp; in the diagram&nbsp; $\rm (B)$&nbsp; by the red connection&nbsp; $T_1(X, \hspace{0.05cm} U)$&nbsp; drawn in the diagram&nbsp; $\rm (C)$.

*According to the upper classification,&nbsp; it is a&nbsp; "ring"&nbsp; with the labels&nbsp; $A = C = U \cdot X$&nbsp; and&nbsp; $B = X$,&nbsp; and we obtain for the first reduction function:
  
 
::<math>T_1(X, \hspace{0.05cm} U) =  \frac{U \cdot X^2}{1- U \cdot X}
\hspace{0.05cm}.</math>
  
*Now we summarize the parallel connections according to the blue background in the diagram&nbsp; $\rm (C)$&nbsp; and replace them with the blue connection in the diagram&nbsp; $\rm (D)$.&nbsp; The second reduction function is thus:

:$$T_2(X, \hspace{0.05cm}U) =  T_1(X, \hspace{0.05cm}U) + X$$
:$$\Rightarrow  \hspace{0.3cm} T_2(X, \hspace{0.05cm}U) = \frac{U X^2 + X \cdot (1-UX)}{1- U  X} =  \frac{X}{1- U  X}\hspace{0.05cm}.$$
  
*The entire graph&nbsp; $\rm (D)$&nbsp; can then be replaced by a single connection from&nbsp; $S_0$&nbsp; to&nbsp; $S_0\hspace{0.01cm}'$.&nbsp; According to the feedback rule,&nbsp; one obtains for the&nbsp; "enhanced path weighting enumerator function":

::<math>T_{\rm enh}(X, \hspace{0.05cm}U) = \frac{(U X^2) \cdot X^2 \cdot \frac{X}{1- U  X} }{1- U  \cdot \frac{X}{1- U  X} } =  \frac{U X^5}{1- U  X- U  X} = \frac{U X^5}{1- 2 \cdot U  X}
\hspace{0.05cm}.</math>
  
*Setting the formal input variable &nbsp; $U = 1$,&nbsp; we obtain the&nbsp; "simple path weighting enumerator function",&nbsp; which alone allows statements about the weighting distribution of the output sequence&nbsp; $\underline{x}$:
  
 
::<math>T(X) = X^5 \cdot \big [ 1 + 2 X + 4  X^2  + 8  X^3 +\text{...}\hspace{0.05cm} \big ]
\hspace{0.05cm}.</math>
  
:We have already read the same result from the trellis diagram in the section&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function| "Path weighting enumerator function"]].&nbsp; There was
:*one yellow path with weighting&nbsp; $5$,
:*two green paths with weighting&nbsp; $6$,
:*four paths&nbsp; $($the gray one and three undrawn ones$)$&nbsp; with weighting&nbsp; $7$,
:*and so on.}}<br>
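The reduction of&nbsp; $\text{Example 3}$&nbsp; can also be reproduced symbolically.&nbsp; The following sketch is a hypothetical Python/SymPy illustration&nbsp; $($SymPy availability is an assumption,&nbsp; and the helper names&nbsp; "ring"&nbsp; and&nbsp; "feedback"&nbsp; simply encode the two rules given above$)$;&nbsp; it performs the three steps and expands the result into the power series stated in the example:

<pre>
# Sketch: symbolic reduction of the state transition diagram (Example 3).
# Assumption: Python with SymPy installed.
from sympy import symbols, cancel, series

X, U = symbols('X U')

def ring(A, B, C):                       # ring rule:     E = A*B / (1 - C)
    return cancel(A*B / (1 - C))

def feedback(A, B, C, D):                # feedback rule: F = A*B*C / (1 - B*D)
    return cancel(A*B*C / (1 - B*D))

T1 = ring(U*X, X, U*X)                   # red detour S1 -> S3 -> S2: U*X**2/(1 - U*X)
T2 = cancel(T1 + X)                      # parallel rule:             X/(1 - U*X)
T_enh = feedback(U*X**2, T2, X**2, U)    # whole graph S0 -> S0'

print(T_enh)                             # U*X**5/(1 - 2*U*X)
print(series(T_enh.subs(U, 1), X, 0, 9)) # X**5 + 2*X**6 + 4*X**7 + 8*X**8 + O(X**9)
</pre>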
  
 
== Block error probability vs. burst error probability ==
<br>
The simple model according to the sketch is valid for linear block codes as well as for convolutional codes.<br>

[[File:EN_KC_T_3_5_S5.png|right|frame|Simple transmission model including encoding and decoding|class=fit]]
  
  
&raquo;'''Block codes'''&laquo;
  
 
For block codes,&nbsp; $\underline{u} = (u_1, \ \text{...} \hspace{0.05cm}, \ u_i, \ \text{...} \hspace{0.05cm}, \ u_k)$&nbsp; and&nbsp; $\underline{v} = (v_1, \ \text{...} \hspace{0.05cm}, v_i, \ \text{...} \hspace{0.05cm}, \ v_k)$&nbsp; denote the information blocks at the input and output of the system.
  
Thus,&nbsp; the following descriptive variables can be specified:
*the&nbsp; &raquo;block error probability&laquo; &nbsp; ${\rm Pr(block\:error)} = {\rm Pr}(\underline{v} &ne; \underline{u}),$

*the&nbsp; &raquo;bit error probability&laquo; &nbsp; ${\rm Pr(bit\:error)} = {\rm Pr}(v_i &ne; u_i).$<br><br>
  
 
{{BlaueBox|TEXT=
$\text{Please note:}$&nbsp; In real transmission systems,&nbsp; due to thermal noise the following always applies:
:$${\rm Pr(bit\:error)} > 0\hspace{0.05cm},\hspace{0.5cm}{\rm Pr(block\:error)} > {\rm Pr(bit\:error)}
\hspace{0.05cm}.$$

A simple attempt at an explanation: &nbsp; If the decoder decides in each block of length&nbsp; $k$&nbsp; exactly one bit wrong,
*then the average bit error probability is &nbsp; ${\rm Pr(bit\:error)}= 1/k$,

*while for the block error probability &nbsp; ${\rm Pr(block\:error)}\equiv 1$&nbsp; holds.}}<br>
  
  
&raquo;'''Convolutional codes'''&laquo;
  
For convolutional codes,&nbsp; the block error probability cannot be specified,&nbsp; since here &nbsp; $\underline{u} = (u_1, \ u_2, \ \text{...} \hspace{0.05cm})$ &nbsp; and &nbsp; $\underline{\upsilon} = (v_1, \ v_2, \ \text{...} \hspace{0.05cm})$ &nbsp; represent sequences.

[[File:P ID2706 KC T 3 5 S5b v1.png|right|frame|Zero path&nbsp; ${\it \varphi}_0$&nbsp; and deviation paths&nbsp; ${\it \varphi}_i$|class=fit]]

Even the smallest possible code parameter&nbsp; $k = 1$&nbsp; leads here to the sequence length&nbsp; $k \hspace{0.05cm}' &#8594; &#8734;$,&nbsp; and
*the block error probability would always result in&nbsp; ${\rm Pr(block\:error)}\equiv 1$,

*even if the bit error probability is extremely small&nbsp; $($but non-zero$)$.
<br clear=all>
 
{{BlaueBox|TEXT=   
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; For the&nbsp; <b>burst error probability</b>&nbsp; of a convolutional code holds:
+
$\text{Definition:}$&nbsp; For the&nbsp; &raquo;<b>burst error probability</b>&nbsp;&laquo; of a convolutional code holds:
  
::<math>{\rm Pr(Burst\:error)} = {\rm Pr}\big \{ {\rm Decoder\hspace{0.15cm} leaves\hspace{0.15cm} the\hspace{0.15cm} correct\hspace{0.15cm}path}\hspace{0.15cm}{\rm at\hspace{0.15cm}time \hspace{0.15cm}t}\big \}   
+
::<math>{\rm Pr(burst\:error)} = {\rm Pr}\big \{ {\rm Decoder\hspace{0.15cm} leaves\hspace{0.15cm} the\hspace{0.15cm} correct\hspace{0.15cm}path}\hspace{0.15cm}{\rm at\hspace{0.15cm}time \hspace{0.10cm}\it t}\big \}   
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
  
*To simplify notation for the following derivation, we always assume the zero sequence&nbsp; $(\underline{0})$&nbsp; which is shown in red in the drawn trellis as the zero path&nbsp; $\varphi_0$&nbsp;.  
+
*To simplify the notation for the following derivation,&nbsp; we always assume the zero sequence&nbsp; $\underline{0}$ &nbsp; which is shown in red in the drawn trellis as the&nbsp; "zero path"&nbsp; $\varphi_0$.
*All other paths&nbsp; $\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} $&nbsp; (and many more) leave&nbsp; $\varphi_0$&nbsp; at time&nbsp; $t$. They all belong to the path set&nbsp; ${\it \Phi}$&nbsp; &#8658; &nbsp; "Viterbi decoder leaves the correct path at time&nbsp; $t$". This probability is calculated on the next page}}.<br>
+
 +
*All other paths&nbsp; $\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} $&nbsp; $($and many more$)$&nbsp; leave&nbsp; $\varphi_0$&nbsp; at time&nbsp; $t$.  
 +
 
 +
*They all belong to the path set&nbsp; ${\it \Phi}$&nbsp; which is defined as &nbsp;"Viterbi decoder leaves the correct path at time&nbsp; $t$".&nbsp; This probability is calculated in the next section.}}<br>
  
 
== Burst error probability and Bhattacharyya bound ==
<br>
[[File:EN_KC_T_3_5_S5c.png|right|frame|Calculation of the burst error probability|class=fit]]
We proceed as in the earlier chapter&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|"Bounds for block error probability"]]&nbsp; from the pairwise error probability&nbsp; ${\rm Pr}\big [\varphi_0 &#8594; \varphi_i \big]$&nbsp; that the decoder&nbsp; <b>could</b>&nbsp; select the path&nbsp; $\varphi_i$&nbsp; instead of the path&nbsp; $\varphi_0$.

All considered paths&nbsp; $\varphi_i$&nbsp; leave the zero path&nbsp; $\varphi_0$&nbsp; at time&nbsp; $t$;&nbsp; thus they all belong to the path set&nbsp; ${\it \Phi}$.<br>
  
The sought&nbsp; "burst error probability"&nbsp; is equal to the following union set:

:$${\rm Pr(burst\:error)}= {\rm Pr}\left (\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}1}\big] \hspace{0.05cm}\cup\hspace{0.05cm}\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}2}\big]\hspace{0.05cm}\cup\hspace{0.05cm} \text{... }\hspace{0.05cm} \right )$$
:$$\Rightarrow \hspace{0.3cm}{\rm Pr(burst\:error)}= {\rm Pr} \left ( \cup_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} \hspace{0.15cm} \big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big] \right )\hspace{0.05cm}.$$
  
*An upper bound for this is provided by the so-called&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#Union_Bound_of_the_block_error_probability|$\text{Union Bound}$]]:

::<math>{\rm Pr(burst\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.15cm}
{\rm Pr}\big [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big ] = {\rm Pr(union \hspace{0.15cm}bound)}
\hspace{0.05cm}.</math>
  
*The pairwise error probability can be estimated using the&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya|$\text{Bhattacharyya bound}$]]:
  
 
::<math>{\rm Pr}\big [\underline {0} \mapsto \underline {x}_{\hspace{0.02cm}i}\big ]
\le \beta^{w_{\rm H}(\underline {x}_{\hspace{0.02cm}i})} =
\hspace{0.05cm} \beta^{w(\varphi_i)}\hspace{0.05cm}.</math>
  
:Here,
:*$w_{\rm H}(\underline{x}_i)$&nbsp; is the Hamming weight of the possible encoded sequence&nbsp; $\underline{x}_i,$

:*$w(\varphi_i)$&nbsp; is the path weight of the corresponding path&nbsp; $\varphi_i &#8712; {\it \Phi},$

:*$\beta$&nbsp; is the so-called&nbsp; [[Channel_Coding/Bounds_for_Block_Error_Probability#The_upper_bound_according_to_Bhattacharyya|$\text{Bhattacharyya parameter}$]].<br>
  
  
{{BlaueBox|TEXT=
By&nbsp; &raquo;'''summing over all paths'''&laquo;&nbsp; and comparing with the&nbsp; $($simple$)$&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function|$\text{path weighting function}$]]&nbsp; $T(X)$,&nbsp; we get the result:

::<math>{\rm Pr(burst\:error)} \le T(X = \beta),\hspace{0.5cm}{\rm with}\hspace{0.5cm}
T(X) = \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi} }\hspace{0.15cm}
\hspace{0.05cm} X^{w(\varphi_i)}\hspace{0.05cm}.</math>}}
 
  
 
{{GraueBox|TEXT=
$\text{Example 4: }$&nbsp; For our standard encoder with
*code rate&nbsp; $R = 1/2$,

*memory&nbsp; $m = 2$,

*transfer function matrix&nbsp; $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,


we have obtained the following&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Path_weighting_enumerator_function|$\text{path weighting enumerator function}$]]:
  
 
::<math>T(X) =  X^5 + 2 \cdot X^6  + 4 \cdot X^7 + \ \text{...}  \hspace{0.1cm}
\hspace{0.05cm}.</math>
  
*Using the series expansion &nbsp; $1/(1 \, &ndash;x) = 1 + x + x^2 + x^3 + \ \text{...} \hspace{0.15cm} $&nbsp; this can also be written as:
  
 
::<math>T(X) =  \frac{X^5}{1-2 \cdot X}
\hspace{0.05cm}.</math>
  
*The BSC model provides the following Bhattacharyya bound with the falsification probability&nbsp; $\varepsilon$:
  
 
::<math>{\rm Pr(burst\:error)} \le T(X = \beta)  = T\big ( X = 2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)} \big )
= \frac{(2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)})^5}{1- 4\cdot \sqrt{\varepsilon \cdot (1-\varepsilon)}}\hspace{0.05cm}.</math>
 
*In&nbsp; [[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14"]]&nbsp; this equation is evaluated numerically.}}<br>
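The bound of&nbsp; $\text{Example 4}$&nbsp; is easy to evaluate numerically.&nbsp; The following minimal sketch is a plain-Python illustration&nbsp; $($the function name is a hypothetical choice$)$&nbsp; computing&nbsp; $T(X = \beta)$&nbsp; for a few values of the BSC falsification probability&nbsp; $\varepsilon$:

<pre>
# Sketch: Pr(burst error) <= T(beta) = beta^5/(1 - 2*beta)
# for the BSC with Bhattacharyya parameter beta = 2*sqrt(eps*(1-eps)).
from math import sqrt

def burst_error_bound(eps):
    beta = 2 * sqrt(eps * (1 - eps))
    assert 2 * beta < 1, "the geometric series only converges for beta < 1/2"
    return beta**5 / (1 - 2 * beta)

for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps = {eps:.0e}:  Pr(burst error) <= {burst_error_bound(eps):.3e}")
</pre>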
 
  
 
== Bit error probability and Viterbi bound ==
<br>
Finally,&nbsp; an upper bound is given for the bit error probability.&nbsp; According to the graph,&nbsp; we proceed as in&nbsp; [Liv10]<ref name='Liv10'>Liva, G.:&nbsp; Channel Coding.&nbsp; Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.</ref>&nbsp; and assume the following conditions:<br>

*The zero sequence&nbsp; $\underline{x} = \underline{0}$&nbsp; was sent &nbsp; &#8658; &nbsp; path&nbsp; $\varphi_0$.<br>

*The duration of a path deviation&nbsp; $($"error burst duration"$)$&nbsp; is denoted by&nbsp; $L$.<br>

*The distance between two bursts&nbsp; $($"inter-burst time"$)$&nbsp; we call&nbsp; $N$.<br>

*Let the Hamming weight of the error burst be&nbsp; $H$.<br>
 
  
  
[[File:P ID2715 KC T 3 5 S6a v1.png|right|frame|For the definition of the description variables&nbsp; $L$,&nbsp; $N$&nbsp; and&nbsp; $H$|class=fit]]
For a rate&nbsp; $1/n$&nbsp; convolutional code &nbsp; &#8658; &nbsp; $k = 1$,&nbsp; i.e. one information bit per clock,&nbsp; the expected values&nbsp; ${\rm E}\big[L \big]$,&nbsp; ${\rm E}\big[N \big]$&nbsp; and&nbsp; ${\rm E}\big[H\big]$&nbsp; of the random variables defined above can be used to give an upper bound for the bit error probability:
  
::<math>{\rm Pr(bit\:error)} =  \frac{{\rm E}\big[H\big]}{{\rm E}\big[L\big] + {\rm E}\big[N\big]}\hspace{0.15cm} \le \hspace{0.15cm} \frac{{\rm E}\big[H\big]}{{\rm E}\big[N\big]}
\hspace{0.05cm}.</math>
  
 
It is assumed that
# &nbsp; the&nbsp; $($mean$)$&nbsp; duration of an error burst is in practice much smaller than the expected distance between two bursts,
# &nbsp; the&nbsp; $($mean$)$&nbsp; inter-burst time&nbsp; ${\rm E}\big[N\big]$&nbsp; is equal to the inverse of the burst error probability,
# &nbsp; the expected value in the numerator is estimated as follows:

::<math>{\rm E}\big[H \big]  \le  \frac{1}{\rm Pr(burst\:error)}\hspace{0.1cm} \cdot \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm}
\hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)}
\hspace{0.05cm}.</math>
  
In deriving this bound, the <i>pairwise error probability</i>&nbsp; ${\rm Pr}\big [\varphi_0 &#8594; \varphi_i \big]$&nbsp; and the <i>Bhattacharyya estimation</i> are used. Thus one obtains with  
+
In deriving this bound,&nbsp; the&nbsp; "pairwise error probability"&nbsp; ${\rm Pr}\big [\varphi_0 &#8594; \varphi_i \big]$&nbsp; and the&nbsp; "Bhattacharyya estimation"&nbsp; are used.&nbsp; Thus we obtain with  
*the path input weighting&nbsp; $u(\varphi_i),$  
+
*the path input weighting&nbsp; $u(\varphi_i),$
*the path exit weighting&nbsp; $w(\varphi_i),$ and  
+
 +
*the path output weighting&nbsp; $w(\varphi_i),$ and  
 +
 
 
*the Bhattacharyya parameter&nbsp; $\beta$  
 
*the Bhattacharyya parameter&nbsp; $\beta$  
  
  
we obtain the following estimate for the bit error probability and refer to it as the&nbsp; <b>Viterbi bound</b>:
+
the following estimate for the bit error probability and refer to it as the&nbsp; &raquo;<b>Viterbi bound</b>&laquo;:
  
::<math>{\rm Pr(Bit\:error)}\le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm}  
+
::<math>{\rm Pr(bit\:error)}\le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm}  
 
\hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)}
 
\hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)}
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
This intermediate result can also be represented in another form.  We recall the&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Barriers#Enhanced_path_weighting_enumerator_function| "enhanced path weighting enumerator function"]]
+
This intermediate result can also be represented in another form.  We recall the&nbsp; [[Channel_Coding/Distance_Characteristics_and_Error_Probability_Bounds#Enhanced_path_weighting_enumerator_function|$\text{enhanced path weighting enumerator function}$]]
  
 
::<math>T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm}  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)}   
 
::<math>T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm}  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)}   
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
  
If we derive this function according to the dummy input variable&nbsp; $U$&nbsp;, we get
+
If we differentiate this function with respect to the dummy input variable&nbsp; $U$,&nbsp; we get
  
 
::<math>\frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)-1}   
 
::<math>\frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)-1}   
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
  
Finally, if we set $U = 1$ for the dummy&ndash;input variable&nbsp; we see the connection to the above result:
+
Finally,&nbsp; if we set&nbsp; $U = 1$&nbsp; for the dummy input variable,&nbsp; then we see the connection to the above result:
  
 
::<math>\left [ \frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{ U=1}} = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \hspace{0.05cm}.</math>
 
::<math>\left [ \frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{ U=1}} = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot  X^{w(\varphi_j)} \hspace{0.05cm}.</math>
  
 
{{BlaueBox|TEXT=   
 
{{BlaueBox|TEXT=   
$\text{Conclusion:}$&nbsp;  The&nbsp; <b>bit error probability</b>&nbsp; of a convolutional code can be estimated using the extended path weighting enumerator function in closed form:
+
$\text{Conclusion:}$&nbsp;  The&nbsp; &raquo;<b>bit error probability</b>&laquo;&nbsp; of a convolutional code can be estimated using the enhanced path weighting enumerator function in closed form:
::<math>{\rm Pr(Bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} }  
+
::<math>{\rm Pr(bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} }  
 
\hspace{0.05cm}.</math>
 
\hspace{0.05cm}.</math>
  
One speaks of the&nbsp; <b>Viterbi bound</b>. Here one derives the enhanced path weighting enumerator function after the second parameter&nbsp; $U$&nbsp; and then sets&nbsp;  
+
*One speaks of the&nbsp; &raquo;<b>Viterbi bound</b>&laquo;.&nbsp; Here one differentiates the enhanced path weighting enumerator function with respect to the second parameter&nbsp; $U$&nbsp; and then sets&nbsp; 
*$X = \beta$,&nbsp;
+
::$$X = \beta,\hspace{0.4cm} U = 1.$$}}<br>
*$U = 1$.}}<br>
 
  
Note: &nbsp; In&nbsp; [[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14"]]&nbsp; the&nbsp; ''Viterbi bound''&nbsp; and the&nbsp; ''Bhattacharyya bound''&nbsp; for the rate $1/2$ default code and the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|"BSC&ndash;Model"]]&nbsp; evaluated numerically.<br>
 
  
[[File:P ID2716 KC T 3 5 S6b v2.png|right|frame|AWGN bit error probability of convolutional codes]]
 
 
{{GraueBox|TEXT=   
 
{{GraueBox|TEXT=   
$\text{Example 5:}$&nbsp; The graph illustrates the good correction capability of the convolutional codes at&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_Binary_Input|"AWGN channel"]].
+
$\text{Example 5:}$&nbsp; The graph illustrates the good correction capability &nbsp; &rArr; &nbsp; small bit error rate&nbsp; $\rm (BER)$&nbsp; of convolutional codes at the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{$\rm AWGN$&nbsp; channel}$]].
*Red circles denote the bit error rate for our rate $1/2$ default code with memory&nbsp; $m = 2$.<br>
+
[[File:EN_KC_T_3_5_S6.png|right|frame|AWGN bit error probability of convolutional codes]]
*Green crosses mark a convolutional code with&nbsp; $m = 6$, the so-called&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#Definition_of_the_free_distance| "industry standard code"]].<br><br>
+
 
 +
*Red circles denote the&nbsp; $\rm BER$&nbsp; for our&nbsp; "rate&nbsp; $1/2$&nbsp; standard  encoder"&nbsp; with memory&nbsp; $m = 2$.<br>
 +
 
 +
 
 +
*Green crosses mark a convolutional code with&nbsp; $m = 6$,&nbsp; the so-called&nbsp; [[Channel_Coding/Code_Description_with_State_and_Trellis_Diagram#Definition_of_the_free_distance|$\text{industry standard code}$]].<br>
 +
 
 +
 
 +
* Codes with large memory&nbsp; $m$&nbsp; lead to large gains over uncoded transmission.
 +
 
 +
 
 +
 
 +
 
 +
<br><br><br><br>
 +
<u>Note:</u> &nbsp;
  
In particular, codes with large memory&nbsp; $m$&nbsp; lead to large gains over uncoded transmission (dashed curve).}}
+
In&nbsp; [[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14"]],&nbsp; the&nbsp; "Viterbi bound"&nbsp; and the&nbsp; "Bhattacharyya bound"&nbsp; are  evaluated numerically for the&nbsp; "rate $1/2$&nbsp; standard encoder"&nbsp; and the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC model}$]].
 +
}}
  
  
 
== Exercises for the chapter ==
 
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:Exercise_3.12:_Path_Weighting_Function|"Exercise 3.12: Path Weighting Function"]]
+
[[Aufgaben:Exercise_3.12:_Path_Weighting_Function|Exercise 3.12: Path Weighting Function]]
  
[[Aufgaben:Exercise_3.12Z:_Ring_and_Feedback|"Exercise 3.12Z: Ring and Feedback"]]
+
[[Aufgaben:Exercise_3.12Z:_Ring_and_Feedback|Exercise 3.12Z: Ring and Feedback]]
  
[[Aufgaben:Exercise_3.13:_Path_Weighting_Function_again|"Exercise 3.13: Path Weighting Function again"]]
+
[[Aufgaben:Exercise_3.13:_Path_Weighting_Function_again|Exercise 3.13: Path Weighting Function again]]
  
[[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|"Exercise 3.14: Error Probability Bounds"]]
+
[[Aufgaben:Exercise_3.14:_Error_Probability_Bounds|Exercise 3.14: Error Probability Bounds]]
  
 
==References==
 
==References==

Latest revision as of 19:24, 20 December 2022

Free distance vs. minimum distance


An important parameter regarding the error probability of linear block codes is the  "minimum distance"  between two code words   $\underline{x}$   and   $\underline{x}\hspace{0.05cm}'$:

\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x}) \hspace{0.05cm}.\]
  • The second part of the equation arises from the fact that every linear code also includes the zero word   ⇒   $(\underline{0})$.
  • It is therefore convenient to set  $\underline{x}\hspace{0.05cm}' = \underline{0}$,  so that the  $\text{Hamming distance}$   $d_{\rm H}(\underline{x}, \ \underline{0})$   gives the same result as the Hamming weight   $w_{\rm H}(\underline{x})$.


$\text{Example 1:}$  The table shows the  $16$  code words of an exemplary  $\text{HC (7, 4, 3)}$  $($see  $\text{Examples 6 and 7})$.

Code words of the considered  $(7, 4, 3)$ Hamming code

You can see:

  • All code words except the null word  $(\underline{0})$  contain at least three ones   ⇒   $d_{\rm min} = 3$.
  • There are
  1. seven code words with three  "ones"  $($highlighted in yellow$)$,
  2. seven with four  "ones"  $($highlighted in green$)$,  and
  3. one each with no  "ones" and seven  "ones".


The  »free distance«  $d_{\rm F}$  of a convolutional code  $(\mathcal{CC})$  is formulaically no different from the minimum distance of a linear block code:

\[d_{\rm F}(\mathcal{CC}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.05cm}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}\hspace{0.05cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.05cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{CC} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{0}}}\hspace{0.1cm}w_{\rm H}(\underline{x}) \hspace{0.05cm}.\]
Some paths with  $w(\underline{x}) = d_{\rm F} = 5$

In the literature,  "$d_{∞}$"  is sometimes used instead of  "$d_{\rm F}$".

The graph shows three of the infinite paths with the minimum Hamming weight  $w_{\rm H, \ min}(\underline{x}) = d_{\rm F} = 5$.

  • The major difference from the minimum distance  $d_{\rm min}$  of other codes is
  1. that for convolutional codes we do not consider information words and code words, 
  2. but sequences with the property  »$\text{semi–infinite}$«.
  • Each encoded sequence  $\underline{x}$  describes a path through the trellis.
  • The  "free distance"  is the smallest possible Hamming weight of such a path  $($except for the zero path$)$;  a small search sketch follows below.
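The free distance can also be determined programmatically by a shortest-path search over the trellis.  The following Python sketch is purely illustrative and not part of the article;  it assumes the rate  $1/2$  standard encoder of this chapter with memory  $m = 2$  and  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$.

import heapq

# Illustrative sketch (assumed encoder): rate-1/2 code with memory m = 2 and
# G(D) = (1 + D + D^2, 1 + D^2); the state is the register content (u_{i-1}, u_{i-2}).
def step(state, u):
    s1, s2 = state
    x1 = u ^ s1 ^ s2               # first coded bit:  1 + D + D^2
    x2 = u ^ s2                    # second coded bit: 1 + D^2
    return (u, s1), x1 + x2        # next state and Hamming weight of (x1, x2)

def free_distance():
    # Dijkstra search over all paths that leave the zero state (first input u = 1)
    # and merge back into it; the first return yields the minimum path weight d_F.
    start, w0 = step((0, 0), 1)
    heap, settled = [(w0, start)], {}
    while heap:
        w, s = heapq.heappop(heap)
        if s == (0, 0):
            return w
        if s in settled and settled[s] <= w:
            continue
        settled[s] = w
        for u in (0, 1):
            nxt, dw = step(s, u)
            heapq.heappush(heap, (w + dw, nxt))

print(free_distance())             # prints 5, i.e. d_F of this code

Since every branch weight is non-negative,  the first time the zero state is taken from the priority queue its accumulated weight is minimal,  which is exactly  $d_{\rm F} = 5$.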



Path weighting enumerator function


For any linear block code,  a weight enumerator function can be given in a simple way because of the finite number of code words  $\underline{x}$. 

For the  $\text{Example 1}$  in the previous section this is:

\[W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.\]
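For comparison,  this weight distribution can be reproduced by enumerating all  $2^4 = 16$  code words.  The generator matrix in the following sketch is one common systematic choice and is an assumption;  any equivalent  $(7, 4, 3)$  Hamming generator matrix yields the same distribution.

from itertools import product

# Illustrative sketch: weight distribution of a (7, 4, 3) Hamming code.
# G is one common systematic generator matrix (assumed; all equivalent
# choices give the same weight distribution 1, 7, 7, 1).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

counts = {}
for u in product((0, 1), repeat=4):                 # all 16 information words
    x = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    w = sum(x)                                      # Hamming weight of the code word
    counts[w] = counts.get(w, 0) + 1

print(dict(sorted(counts.items())))                 # {0: 1, 3: 7, 4: 7, 7: 1}

Reading off the counts gives exactly  $W(X) = 1 + 7 \cdot X^3 + 7 \cdot X^4 + X^7$.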
Some paths and their path weightings

In the case of a  $($non-terminated$)$  convolutional code,  no such weight function can be given,  since there are infinitely many,  infinitely long encoded sequences  $\underline{x}$  and thus also infinitely many trellis paths.

To get a grip on this problem,  we now assume the following:

  • As a reference for the trellis diagram,  we always choose the path of the encoded sequence  $\underline{x} = \underline{0}$  and call this the  "zero path"  $\varphi_0$.
  • We now consider only paths   $\varphi_j ∈ {\it \Phi}$  that all deviate from the zero path at a given time  $t$  and return to it at some point later.
  • Although only a fraction of all paths belongs to the set  ${\it \Phi} = \{\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} \}$,  this set still contains an unbounded number of paths.  The zero path  $\varphi_0$  is not one of them.


In the above trellis some paths  $\varphi_j ∈ {\it \Phi}$  are drawn:

  • The yellow path  $\varphi_1$  belongs to the sequence  $\underline{x}_1 = (11, 10, 11)$  with Hamming weight  $w_{\rm H}(\underline{x}_1) = 5$.  Thus,  the path weighting  $w(\varphi_1) = 5$.  Due to the definition of the branching time  $t$  only this single path  $\varphi_1$  has the free distance  $d_{\rm F} = 5$  to the zero path   ⇒   $A_5 = 1$.
  • For the two green paths with corresponding sequences  $\underline{x}_2 = (11, 01, 01, 11)$  and  $\underline{x}_3 = (11, 10, 00, 10, 11)$   ⇒   $w(\varphi_2) = w(\varphi_3) = 6$.  No other path exhibits the path weighting  $6$.  We take this fact into account by the coefficient  $A_6 = 2$.
  • Also drawn is the gray path  $\varphi_4$,  associated with the sequence  $\underline{x}_4 = (11, 01, 10, 01, 11)$   ⇒   $w(\varphi_4) = 7$.  Also,  the sequences  $\underline{x}_5 = (11, 01, 01, 00, 10, 11)$,  $\underline{x}_6 = (11, 10, 00, 01, 01, 11)$  and  $\underline{x}_7 = (11, 10, 00, 10, 00, 10, 11)$  have the same path weighting  "$7$"    ⇒   $A_7 = 4$.

Thus,  the path weighting enumerator function is for this example:

\[T(X) = A_5 \cdot X^5 + A_6 \cdot X^6 + A_7 \cdot X^7 + \text{...} \hspace{0.1cm}= X^5 + 2 \cdot X^6 + 4 \cdot X^7+ \text{...}\hspace{0.1cm} \hspace{0.05cm}.\]

The definition of this function  $T(X)$  follows.

$\text{Definition:}$  For the  »path weighting enumerator function«  of a convolutional code holds:

\[T(X) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm} X^{w(\varphi_j) } \hspace{0.1cm}=\hspace{0.1cm} \sum_{w\hspace{0.05cm} =\hspace{0.05cm} d_{\rm F} }^{\infty}\hspace{0.1cm} A_w \cdot X^w \hspace{0.05cm}.\]
  1.   ${\it \Phi}$  denotes the set of all paths that leave the zero path  $\varphi_0$  exactly at the specified time  $t$  and return to it  $($sometime$)$  later.
  2.   According to the second equation part,  the summands are ordered by their path weightings  $w$  where  $A_w$  denotes the number of paths with weighting  $w$.
  3.   The sum starts with  $w = d_{\rm F}$.
  4.   The path weighting  $w(\varphi_j)$  is equal to the Hamming weight  $($number of  "ones"$)$  of the encoded sequence  $\underline{x}_j$  associated to the path  $\varphi_j$:   $w(\varphi_j) = w_{\rm H}(\underline{x}_j) \hspace{0.05cm}.$


Note:  The weight enumerator function  $W(X)$  defined for linear block codes and the path weight function  $T(X)$  of convolutional codes have many similarities;  however,  they are not identical.     We consider again

  • the weight enumerator function of the  $(7, 4, 3)$ Hamming code:
\[W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7},\]
  • and the path weighting enumerator function of our  "standard convolutional encoder":
\[T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7+ 8 \cdot X^8+ \text{...} \]
  1.   The  "$1$"  in the first equation is noticeable;  it is missing in the second one.  The reason:
  2.   For the linear block codes,  the reference code word  $\underline{x}_i = \underline{0}$  is counted.
  3.   But the zero encoded sequence  $\underline{x}_i = \underline{0}$  and the zero path  $\varphi_0$  are excluded by definition for convolutional codes.


$\text{Author's personal opinion:}$ 

One could have defined  $W(X)$  without the  "$1$"  as well.  This would have avoided,  among other things,  that the Bhattacharyya bound for linear block codes and that for convolutional codes differ by  "$-1$",  as can be seen from the two bounds:

\[{\rm Pr(block\:error)} \le W(\beta) - 1\hspace{0.05cm},\hspace{0.8cm}{\rm Pr(burst\:error)} \le T(\beta) \hspace{0.05cm}.\]


Enhanced path weighting enumerator function


The path weighting enumerator function  $T(X)$  only provides information regarding the weights of the encoded sequence  $\underline{x}$.

  • More information is obtained if the weights of the information sequence  $\underline{u}$  are also collected.
  • One then needs two formal parameters  $X$  and  $U$,  as can be seen from the following definition.


$\text{Definition:}$  The  »enhanced path weight enumerator function«  $\rm (EPWEF)$  is:

\[T_{\rm enh}(X,\ U) = \sum_{\varphi_j \in {\it \Phi} }\hspace{0.1cm} X^{w(\varphi_j)} \cdot U^{ { u}(\varphi_j)} \hspace{0.1cm}=\hspace{0.1cm} \sum_{w} \sum_{u}\hspace{0.1cm} A_{w, \hspace{0.05cm}u} \cdot X^w \cdot U^{u} \hspace{0.05cm}.\]

All specifications for the   $T(X)$  definition in the last section apply.  In addition,  note:

  1.   The path input weight  $u(\varphi_j)$  is equal to the Hamming weight of the information sequence  $\underline{u}_j$  associated to the path.
  2.   It is expressed as a power of the formal parameter  $U$.
  3.   $A_{w, \ u}$  denotes the number of paths  $\varphi_j$  with path output weight  $w(\varphi_j)$  and path input weight  $u(\varphi_j)$.  The control variable of the second sum is  $u$.
  4.   Setting the formal parameter  $U = 1$  in the enhanced path weighting enumerator function yields the original path weighting enumerator function  $T(X)$.


For many  $($and all relevant$)$  convolutional codes,  the above equation can be simplified further:

\[T_{\rm enh}(X,\ U) =\hspace{0.1cm} \sum_{w \ = \ d_{\rm F} }^{\infty}\hspace{0.1cm} A_w \cdot X^w \cdot U^{u} \hspace{0.05cm}.\]


$\text{Example 2:}$  Thus, the enhanced path weighting enumerator function of our standard encoder is:

\[T_{\rm enh}(X,\ U) = U \cdot X^5 + 2 \cdot U^2 \cdot X^6 + 4 \cdot U^3 \cdot X^7+ \text{ ...} \hspace{0.1cm} \hspace{0.05cm}.\]
Some paths and their path weightings

Comparing this result with the trellis shown below, we can see:

  • The yellow highlighted path  – in  $T(X)$  marked by  $X^5$  – is composed of a blue arrow  $(u_i = 1)$  and two red arrows  $(u_i = 0)$.  Thus  $X^5$  becomes the extended term  $U X^5$.
  • The sequences of the two green paths are 
$$\underline{u}_2 = (1, 1, 0, 0) \hspace{0.15cm} ⇒ \hspace{0.15cm} \underline{x}_2 = (11, 01, 01, 11),$$
$$\underline{u}_3 = (1, 0, 1, 0, 0) \hspace{0.15cm} ⇒ \hspace{0.15cm} \underline{x}_3 = (11, 10, 00, 10, 11).$$
This gives the second term  $2 \cdot U^2X^6$.
  • The gray path  $($and the three undrawn paths$)$  together make the contribution  $4 \cdot U^3X^7$.  Each of these paths contains three blue arrows   ⇒   three  "ones"  in the associated information sequence.
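As a cross-check of Example 2,  the coefficients  $A_{w,\hspace{0.05cm}u}$  can be tallied by walking the trellis.  The sketch below is illustrative only;  it again assumes the standard encoder with  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,  and its termination relies on the code being non-catastrophic  $($no loop of output weight zero$)$.

from collections import Counter

# Illustrative sketch: tally A_{w,u} for all paths that leave the zero state
# and first merge back into it, pruned at a maximum output weight.
def step(state, u):
    s1, s2 = state                                  # register (u_{i-1}, u_{i-2})
    return (u, s1), (u ^ s1 ^ s2) + (u ^ s2)        # next state, output weight

def epwef_coefficients(max_w=7):
    A = Counter()
    s0, w0 = step((0, 0), 1)                        # every such path starts with u = 1
    stack = [(s0, w0, 1)]                           # (state, output weight, input weight)
    while stack:
        s, w, uw = stack.pop()
        if w > max_w:
            continue                                # prune: the weight never decreases
        if s == (0, 0):
            A[(w, uw)] += 1                         # merged back into the zero path
            continue
        for u in (0, 1):
            nxt, dw = step(s, u)
            stack.append((nxt, w + dw, uw + u))
    return dict(sorted(A.items()))

print(epwef_coefficients())   # {(5, 1): 1, (6, 2): 2, (7, 3): 4}

The output matches the first terms  $U \cdot X^5 + 2 \cdot U^2 \cdot X^6 + 4 \cdot U^3 \cdot X^7$  of Example 2;  setting  $U = 1$  reproduces the coefficients of  $T(X)$.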


Path weighting enumerator function from state transition diagram


There is an elegant way to determine the path weighting enumerator function  $T(X)$  and its enhancement directly from the state transition diagram.  This will be demonstrated here and in the following sections using our  $\text{standard convolutional encoder}$  as an example.

First,  the state transition diagram must be redrawn for this purpose.  The graphic shows this on the left in the previous form as diagram  $\rm (A)$,  while the new diagram  $\rm (B)$  is given on the right.  It can be seen:

State transition diagram in two different variants
  1.   The state  $S_0$  is split into the start state  $S_0$  and the end state  $S_0\hspace{0.01cm}'$.  Thus,  all paths of the trellis that start in state  $S_0$  and return to it at some point can also be traced in the right graph  $\rm (B)$.  Excluded are direct transitions from  $S_0$  to  $S_0\hspace{0.01cm}'$  and thus also the zero path.
  2.   In diagram  $\rm (A)$  the transitions are distinguishable by the colors red  $($for  $u_i = 0)$  and blue  $($for  $u_i = 1)$. 
  3.   The code words  $\underline{x}_i ∈ \{00, 01, 10, 11\}$  are noted at the transitions.  In diagram  $\rm (B)$  $(00)$  is expressed  by  $X^0 = 1$  and  $(11)$  by  $X^2$.
  4.   The code words  $(01)$  and  $(10)$  are now indistinguishable;  both are uniformly denoted by  $X$.
  5.   Phrased another way:   The code word  $\underline{x}_i$  is now represented as  $X^w$  where  $X$  is a dummy variable associated with the output.
  6.   $w = w_{\rm H}(\underline{x}_i)$  indicates the Hamming weight of the code word  $\underline{x}_i$.  For a rate $1/2$ code,  the exponent  $w$  is either  $0, \ 1$  or  $2$.
  7.   Also diagram  $\rm (B)$  omits the color coding.  The information bit  $u_i = 1$  is now denoted by  $U^1 = U$  and bit  $u_i = 0$  by  $U^0 = 1$.  The dummy variable  $U$  is thus assigned to the input sequence  $\underline{u}$ .

Rules for manipulating the state transition diagram


Goal of our calculations will be to characterize the  $($arbitrarily complicated$)$  path from  $S_0$  to  $S_0\hspace{0.01cm}'$  by the extended path weighting enumerator function  $T_{\rm enh}(X, \ U)$.  For this we need rules to simplify the graph stepwise.

»Serial transitions«

Reduction of serial transitions  (1),  parallel transitions  (2),  a ring  (3),  a feedback  (4)

Two serial connections – denoted by  $A(X, \ U)$  and  $B(X, \ U)$ – can be replaced by a single connection whose valuation function is the product  $A(X, \ U) \cdot B(X, \ U)$.

»Parallel transitions«

Two parallel connections – denoted by  $A(X, \ U)$  and  $B(X, \ U)$  – are combined by the sum of their valuation functions.

»Ring«

The adjacent constellation can be replaced by a single connection, and the following applies to the replacement:

\[E(X, U) = \frac{A(X, U) \cdot B(X, U)}{1- C(X, U)} \hspace{0.05cm}.\]

»Feedback«

Due to the feedback,  two states can alternate here as often as desired.  For this constellation,  the following applies:

\[F(X, U) = \frac{A(X, U) \cdot B(X, U)\cdot C(X, U)}{1- C(X, U)\cdot D(X, U)} \hspace{0.05cm}.\]

The equations for ring and feedback given here are proved in  "Exercise 3.12Z";  a quick symbolic check follows below.
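Independently of that exercise,  both formulas can be made plausible with a series check:  passing  $k$  times through the loop contributes the factor  $C^k$  $($ring$)$  or  $(C \cdot D)^k$  $($feedback$)$.  A small illustrative sympy sketch,  not part of the article:

import sympy as sp

# Illustrative check of the "ring" and "feedback" rules as geometric series.
A, B, C, D = sp.symbols('A B C D')

ring = A * B / (1 - C)
print(sp.series(ring, C, 0, 4))
# A*B + A*B*C + A*B*C**2 + A*B*C**3 + O(C**4)   ->  sum over k of A * C**k * B

feedback = A * B * C / (1 - C * D)
print(sp.series(feedback, D, 0, 3))
# A*B*C + A*B*C**2*D + A*B*C**3*D**2 + O(D**3)  ->  sum over k of A * C**(k+1) * D**k * B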

$\text{Example 3:}$  The above rules are now to be applied to our standard example.  In the graphic on the left you can see the modified diagram  $\rm (B)$.

Reduction of state transitions
  • First we replace the red highlighted detour from  $S_1$  to  $S_2$  via  $S_3$  in the diagram  $\rm (B)$  by the red connection  $T_1(X, \hspace{0.05cm} U)$  drawn in the diagram  $\rm (C)$.
  • According to the above classification,  it is a  "ring"  with the labels  $A = C = U \cdot X$  and  $B = X$,  and we obtain for the first reduction function:
\[T_1(X, \hspace{0.05cm} U) = \frac{U \cdot X^2}{1- U \cdot X} \hspace{0.05cm}.\]
  • Now we combine the parallel connections highlighted in blue in the diagram  $\rm (C)$  and replace them with the blue connection in the diagram  $\rm (D)$.  The second reduction function is thus:
$$T_2(X, \hspace{0.03cm}U) = T_1(X, \hspace{0.03cm}U) \hspace{-0.03cm}+\hspace{-0.03cm} X\hspace{-0.03cm} $$
$$\Rightarrow \hspace{0.3cm} T_2(X, \hspace{0.05cm}U) =\hspace{-0.03cm} \frac{U X^2\hspace{-0.03cm} +\hspace{-0.03cm} X \cdot (1-UX)}{1- U X}= \frac{X}{1- U X}\hspace{0.05cm}.$$
  • The entire graph  $\rm (D)$  can then be replaced by a single connection from  $S_0$  to  $S_0\hspace{0.01cm}'$.  According to the feedback rule,  one obtains for the  "enhanced path weighting enumerator function":
\[T_{\rm enh}(X, \hspace{0.05cm}U) = \frac{(U X^2) \cdot X^2 \cdot \frac{X}{1- U X} }{1- U \cdot \frac{X}{1- U X} } = \frac{U X^5}{1- U X- U X} = \frac{U X^5}{1- 2 \cdot U X} \hspace{0.05cm}.\]
  • Using the series expansion  $1/(1 - x) = 1 + x + x^2 + x^3 + \ \text{...} \ $,  this can also be written as:
\[T_{\rm enh}(X, \hspace{0.05cm}U) = U X^5 \cdot \big [ 1 + 2 \hspace{0.05cm}UX + (2 \hspace{0.05cm}UX)^2 + (2 \hspace{0.05cm}UX)^3 + \text{...} \hspace{0.05cm} \big ] \hspace{0.05cm}.\]
  • Setting the formal input variable   $U = 1$,   we obtain the  "simple path weighting enumerator function",  which alone allows statements about the weighting distribution of the output sequence  $\underline{x}$: 
\[T(X) = X^5 \cdot \big [ 1 + 2 X + 4 X^2 + 8 X^3 +\text{...}\hspace{0.05cm} \big ] \hspace{0.05cm}.\]

We have already read the same result from the trellis diagram in the section  "Path weighting enumerator function".  There were

    • one yellow path with weighting  $5$,
    • two green paths with weighting  $6$,
    • four paths with weighting  $7$  $($the gray one and three more not drawn$)$,
    • and so on.
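The stepwise reduction of Example 3 can also be replayed symbolically.  The following sketch is illustrative only;  the edge labels are those read off diagram  $\rm (B)$:

import sympy as sp

# Illustrative replay of Example 3 with sympy.
X, U = sp.symbols('X U')

T1 = (U * X) * X / (1 - U * X)          # ring: S1 -> S3 (U*X), loop S3 -> S3 (U*X), S3 -> S2 (X)
T2 = sp.simplify(T1 + X)                # parallel with the direct edge S1 -> S2 (X)
T_enh = sp.simplify((U * X**2) * X**2 * T2 / (1 - U * T2))   # feedback rule with D = U

print(T_enh)                            # -U*X**5/(2*U*X - 1)  =  U*X**5/(1 - 2*U*X)
print(sp.series(T_enh, X, 0, 8))        # U*X**5 + 2*U**2*X**6 + 4*U**3*X**7 + O(X**8)

The series coefficients agree with the path counts read off the trellis.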


Block error probability vs. burst error probability


The simple model according to the sketch is valid for linear block codes as well as for convolutional codes.

Simple transmission model including encoding and decoding


»Block codes«

For block codes,  $\underline{u} = (u_1, \ \text{...} \hspace{0.05cm}, \ u_i, \ \text{...} \hspace{0.05cm}, \ u_k)$  and  $\underline{v} = (v_1, \ \text{...} \hspace{0.05cm}, v_i, \ \text{...} \hspace{0.05cm} \ , \ v_k)$  denote the information blocks at the input and output of the system.

Thus,  the following descriptive variables can be specified:

  • the »block error probability«   ${\rm Pr(block\:error)} = {\rm Pr}(\underline{v} ≠ \underline{u}),$
  • the »bit error probability«   ${\rm Pr(bit\:error)} = {\rm Pr}(v_i ≠ u_i).$

$\text{Please note:}$  In real transmission systems,  due to thermal noise the following always applies:

$${\rm Pr(bit\:error)} > 0\hspace{0.05cm},\hspace{0.5cm}{\rm Pr(block\:error)} > {\rm Pr(bit\:error)} \hspace{0.05cm}.$$

A simple attempt at an explanation:   If in each block of length  $k$  the decoder decides exactly one bit wrongly,  then

  • the average bit error probability is   ${\rm Pr(bit\:error)}= 1/k$,
  • while for the block error probability   ${\rm Pr(block\:error)}\equiv 1$  holds.



»Convolutional codes«

For convolutional codes,  the block error probability cannot be specified,  since here   $\underline{u} = (u_1, \ u_2, \ \text{...} \hspace{0.05cm})$   and   $\underline{\upsilon} = (v_1, \ v_2, \ \text{...} \hspace{0.05cm})$   represent sequences.

Zero path  ${\it \varphi}_0$  and deviation paths  ${\it \varphi}_i$



Even the smallest possible code parameter  $k = 1$  leads here to the sequence length  $k \hspace{0.05cm}' → ∞$,  and

  • the block error probability would always result in  ${\rm Pr(block\hspace{0.1cm} error)}\equiv 1$,
  • even if the bit error probability is extremely small $\hspace{0.05cm}$ $($but non-zero$)$.


$\text{Definition:}$  For the  »burst error probability«  of a convolutional code holds:

\[{\rm Pr(burst\:error)} = {\rm Pr}\big \{ {\rm Decoder\hspace{0.15cm} leaves\hspace{0.15cm} the\hspace{0.15cm} correct\hspace{0.15cm}path}\hspace{0.15cm}{\rm at\hspace{0.15cm}time \hspace{0.10cm}\it t}\big \} \hspace{0.05cm}.\]
  • To simplify the notation for the following derivation,  we always assume the zero sequence  $\underline{0}$   which is shown in red in the drawn trellis as the  "zero path"  $\varphi_0$.
  • All other paths  $\varphi_1, \ \varphi_2, \ \varphi_3, \ \text{...} $  $($and many more$)$  leave  $\varphi_0$  at time  $t$.
  • They all belong to the path set  ${\it \Phi}$,  which corresponds to the event  "Viterbi decoder leaves the correct path at time  $t$".  This probability is calculated in the next section.


Burst error probability and Bhattacharyya bound


Calculation of the burst error probability

We proceed as in the earlier chapter  "Bounds for block error probability"  from the pairwise error probability  ${\rm Pr}\big [\varphi_0 → \varphi_i \big]$  that the decoder could select the path  $\varphi_i$  instead of the path  $\varphi_0$.


All considered paths  $\varphi_i$  leave the zero path  $\varphi_0$  at time  $t$;  thus they all belong to the path set ${\it \Phi}$.

The sought  "burst error probability"  is equal to the following union set:

$${\rm Pr(burst\:error)}= {\rm Pr}\left (\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}1}\big] \hspace{0.05cm}\cup\hspace{0.05cm}\big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}2}\big]\hspace{0.05cm}\cup\hspace{0.05cm} \text{... }\hspace{0.05cm} \right )$$
$$\Rightarrow \hspace{0.3cm}{\rm Pr(burst\:error)}= {\rm Pr} \left ( \cup_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} \hspace{0.15cm} \big[\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big] \right )\hspace{0.05cm}.$$

This probability is generally not exactly computable;  it is bounded above by the  "union bound":

\[{\rm Pr(burst\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.15cm} {\rm Pr}\big [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\big ] = {\rm Pr(union \hspace{0.15cm}bound)} \hspace{0.05cm}.\]

The pairwise error probability is in turn estimated with the  "Bhattacharyya bound"  $($Bhattacharyya parameter  $\beta)$:

\[{\rm Pr}\big [\underline {0} \mapsto \underline {x}_{\hspace{0.02cm}i}\big ] \le \beta^{w_{\rm H}({x}_{\hspace{0.02cm}i})}\hspace{0.3cm}\Rightarrow \hspace{0.3cm} {\rm Pr}\left [\varphi_{\hspace{0.02cm}0} \mapsto \varphi_{\hspace{0.02cm}i}\right ] \le \hspace{0.05cm} \beta^{w(\varphi_i)}\hspace{0.05cm}.\]
Here,

  • $w_{\rm H}(\underline{x}_i)$  denotes the Hamming weight of the possible encoded sequence  $\underline{x}_i$,
  • $w(\varphi_i)$  denotes the path weighting of the corresponding path  $\varphi_i ∈ {\it \Phi}$.


By  »summing over all paths«  and comparing with the (simple)  $\text{path weighting function}$  $T(X)$  we get the result:

\[{\rm Pr(burst\:error)} \le T(X = \beta),\hspace{0.5cm}{\rm with}\hspace{0.5cm} T(X) = \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi} }\hspace{0.15cm} \hspace{0.05cm} X^{w(\varphi_i)}\hspace{0.05cm}.\]


$\text{Example 4: }$  For our standard encoder with

  • code rate  $R = 1/2$,
  • memory  $m = 2$,
  • $D$  transfer function  $\mathbf{G}(D) = (1 + D + D^2, \ 1 + D^2)$,


we have obtained the following  $\text{path weighting enumerator function}$:

\[T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + \ \text{...} \hspace{0.1cm} = X^5 \cdot ( 1 + 2 \cdot X + 4 \cdot X^2+ \ \text{...} \hspace{0.1cm}) \hspace{0.05cm}.\]
  • Using the series expansion   $1/(1 - x) = 1 + x + x^2 + x^3 + \ \text{...} \hspace{0.15cm} $,  this can also be written as:
\[T(X) = \frac{X^5}{1-2 \cdot X} \hspace{0.05cm}.\]
  • The BSC model provides the following Bhattacharyya bound with the falsification probability  $\varepsilon$:
\[{\rm Pr(burst\:error)} \le T(X = \beta) = T\big ( X = 2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)} \big ) = \frac{(2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)})^5}{1- 4\cdot \sqrt{\varepsilon \cdot (1-\varepsilon)}}\hspace{0.05cm}.\]
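To get a feeling for the numbers,  this bound can be evaluated for a few falsification probabilities  $\varepsilon$  $($the values in the sketch are illustrative$)$.  Note that the bound is only meaningful for  $\beta < 1/2$,  i.e. roughly  $\varepsilon < 0.067$:

import math

# Illustrative evaluation of the Bhattacharyya bound from Example 4 for the
# BSC; the bound T(beta) = beta**5 / (1 - 2*beta) requires beta < 1/2.
for eps in (0.001, 0.01, 0.05):
    beta = 2 * math.sqrt(eps * (1 - eps))
    bound = beta**5 / (1 - 2 * beta)
    print(f"eps = {eps:5}:  beta = {beta:.4f},  Pr(burst error) <= {bound:.3e}")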


Bit error probability and Viterbi bound


Finally,  an upper bound is given for the bit error probability.  According to the graph,  we proceed as in  [Liv10][1]  and assume the following conditions:

  • The zero sequence  $\underline{x} = \underline{0}$ was sent   ⇒   Path  $\varphi_0$.
  • The duration of a path deviation  $($"error burst duration"$)$  is denoted by  $L$.
  • The distance between two bursts  $($"inter burst time"$)$  is denoted by  $N$.
  • The Hamming weight of the error burst is denoted by  $H$.


For a rate $1/n$  convolutional code   ⇒   $k = 1$   ⇒   one information bit per clock,  the expected values  ${\rm E}\big[L \big]$,  ${\rm E}\big[N \big]$  and  ${\rm E}\big[H\big]$  of the random variables defined above can be used to give an upper bound for the bit error probability:

For the definition of the description variables  $L$,  $N$  and $H$
\[{\rm Pr(bit\:error)} = \frac{{\rm E}\big[H\big]}{{\rm E}[L] + {\rm E}\big[N\big]}\hspace{0.15cm} \le \hspace{0.15cm} \frac{{\rm E}\big[H\big]}{{\rm E}\big[N\big]} \hspace{0.05cm}.\]

It is assumed that

  1.   the  (mean)  duration of an error burst is in practice much smaller than the expected distance between two bursts,
  2.   the  (mean)  inter burst time  ${\rm E}\big[N\big]$  is equal to the inverse of the burst error probability,
  3.   the expected value in the numerator is estimated as follows:
\[{\rm E}\big[H \big] \le \frac{1}{\rm Pr(burst\:error)}\hspace{0.1cm} \cdot \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} \hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)} \hspace{0.05cm}.\]

In deriving this bound,  the  "pairwise error probability"  ${\rm Pr}\big [\varphi_0 → \varphi_i \big]$  and the  "Bhattacharyya estimation"  are used.  Combining the three assumptions,  we thus obtain with

  • the path input weighting  $u(\varphi_i),$
  • the path output weighting  $w(\varphi_i),$ and
  • the Bhattacharyya parameter  $\beta$


the following estimate for the bit error probability and refer to it as the  »Viterbi bound«:

\[{\rm Pr(bit\:error)}\le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}}\hspace{0.05cm} \hspace{0.05cm} u(\varphi_i) \cdot \beta^{w(\varphi_i)} \hspace{0.05cm}.\]

This intermediate result can also be represented in another form. We recall the  $\text{enhanced path weighting enumerator function}$

\[T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)} \hspace{0.05cm}.\]

If we differentiate this function with respect to the dummy input variable  $U$,  we get

\[\frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot X^{w(\varphi_j)} \cdot U^{{ u}(\varphi_j)-1} \hspace{0.05cm}.\]

Finally,  if we set  $U = 1$  for the dummy input variable,  then we see the connection to the above result:

\[\left [ \frac {\rm d}{{\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{ U=1}} = \sum_{\varphi_j \in {\it \Phi}}\hspace{0.1cm} { u}(\varphi_j) \cdot X^{w(\varphi_j)} \hspace{0.05cm}.\]

$\text{Conclusion:}$  The  »bit error probability«  of a convolutional code can be estimated using the enhanced path weighting enumerator function in closed form:

\[{\rm Pr(bit\:error)} \le {\rm Pr(Viterbi)} = \left [ \frac {\rm d}{ {\rm d}U}\hspace{0.2cm}T_{\rm enh}(X, U) \right ]_{\substack{X=\beta \\ U=1} } \hspace{0.05cm}.\]
  • One speaks of the  »Viterbi bound«.  Here one differentiates the enhanced path weighting enumerator function with respect to the second parameter  $U$  and then sets 
$$X = \beta,\hspace{0.4cm} U = 1.$$
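For the standard encoder,  whose enhanced function  $T_{\rm enh}(X,\ U) = U \cdot X^5/(1 - 2 \cdot U \cdot X)$  was derived in Example 3,  the Viterbi bound can be evaluated in closed form.  An illustrative sympy sketch:

import sympy as sp

# Illustrative computation of the Viterbi bound for the standard encoder:
# differentiate T_enh with respect to U, then substitute U = 1 and X = beta.
X, U, beta = sp.symbols('X U beta', positive=True)
T_enh = U * X**5 / (1 - 2 * U * X)

viterbi = sp.diff(T_enh, U).subs({U: 1, X: beta})
print(sp.simplify(viterbi))            # beta**5/(2*beta - 1)**2

Hence for this code  ${\rm Pr(bit\:error)} \le \beta^5/(1 - 2\beta)^2$,  which for small  $\beta$  is only slightly larger than the burst error bound  $\beta^5/(1 - 2\beta)$.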



$\text{Example 5:}$  The graph illustrates the good correction capability   ⇒   small bit error rate  $\rm (BER)$  of convolutional codes at the  $\text{AWGN channel}$.

AWGN bit error probability of convolutional codes
  • Red circles denote the  $\rm BER$  for our  "rate  $1/2$  standard encoder"  with memory  $m = 2$.

  • Green crosses mark a convolutional code with  $m = 6$,  the so-called  "industry standard code".

  • Codes with large memory  $m$  lead to large gains over uncoded transmission  $($dashed curve$)$.







Note:  

In  "Exercise 3.14",  the  "Viterbi bound"  and the  "Bhattacharyya bound"  are evaluated numerically for the  "rate $1/2$  standard encoder"  and the  $\text{BSC model}$.


Exercises for the chapter


Exercise 3.12: Path Weighting Function

Exercise 3.12Z: Ring and Feedback

Exercise 3.13: Path Weighting Function again

Exercise 3.14: Error Probability Bounds

References

  1. Liva, G.:  Channel Coding.  Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.