== Block diagram and requirements for RS error correction ==
<br>
As in the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel#Block_diagram_and_requirements_for_Reed-Solomon_error_detection|"Decoding at the Erasure Channel"]]&nbsp; we consider a transmission system with Reed&ndash;Solomon coding characterized by the two code parameters&nbsp; $n=2^m-1$&nbsp; and&nbsp; $k$.&nbsp; With the generator matrix&nbsp; $\boldsymbol{\rm G}$&nbsp; the relation between the information word&nbsp; $\underline {u}$&nbsp; and the code word&nbsp; $\underline {c}$&nbsp; is:

[[File:EN_KC_T_2_5_S1_neu.png|right|frame|Transmission system with Reed-Solomon coding/decoding and $m$&ndash;BSC|class=fit]]

:$$\underline {c} = {\rm enc}(\underline {u}) = \underline {u} \cdot { \boldsymbol{\rm G}}
\hspace{0.3cm} {\rm with}$$
::$$ \underline {u} = (u_0, u_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_{k-1})\hspace{0.05cm},$$
::$$ \underline {c} = (c_0, c_1, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, c_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_{n-1})
\hspace{0.05cm}.$$

The symbols&nbsp; $u_i$&nbsp; and&nbsp; $c_i$&nbsp; originate from the field&nbsp; ${\rm GF}(q)$&nbsp; with
:$$q=n+1=2^m.$$
They are representable by&nbsp; $m$&nbsp; binary symbols&nbsp; ("bits").<br>
  
A comparison of the upper block diagram with the corresponding&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel#Block_diagram_and_requirements_for_Reed-Solomon_error_detection|$\text{block diagram for Reed-Solomon error detection}$]]&nbsp; shows:
# The main difference is in the discrete channel model&nbsp; $($highlighted in green$)$.&nbsp; Instead of the erasure channel&nbsp; ("$m$&ndash;BEC")&nbsp; now the&nbsp; "$m$&ndash;BSC"&nbsp; is considered.&nbsp; For each bit of the code symbol&nbsp; $c_i$&nbsp; the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{Binary Symmetric Channel}$]]&nbsp; $\rm (BSC)$&nbsp; is applied.&nbsp; A bit error with respect to the&nbsp; $i$&ndash;th bit of the code word results in&nbsp; $y_i \ne c_i$.<br>
# In the&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"last chapter"]]&nbsp; uncertain bits were already marked by erasures&nbsp; $\rm E$.&nbsp; The task of the&nbsp; "code word finder"&nbsp; $\rm (CWF)$&nbsp; was to reconstruct the decoding result&nbsp; $\underline {z}$&nbsp; from the garbled received word&nbsp; $\underline {y}$.
# If the number&nbsp; $e$&nbsp; of erasures is smaller than the minimum distance&nbsp; $d_{\rm min}$,&nbsp; this succeeds and we get &nbsp; $\underline {z} = \underline {c}$. &nbsp; Otherwise,&nbsp; the code word finder reports that it cannot decode the current received word&nbsp; $\underline {y}$. &nbsp; A wrong decision&nbsp; $(\underline {z} \ne \underline {c})$&nbsp; was excluded at the BEC.<br>
# In this chapter,&nbsp; the first decoder block is now referred to as&nbsp; "code word estimator"&nbsp; $\rm (CWS)$.&nbsp; The naming is to make clear that due to the&nbsp; $m$&ndash;BSC model,&nbsp; wrong decisions&nbsp; $(\underline {z} \ne \underline {c})$&nbsp; are inevitable,&nbsp; namely when multiple symbol errors distort the received word&nbsp; $\underline {y}$&nbsp; to a valid code word.<br><br>
  
{{BlaueBox|TEXT= 
$\text{Conclusions:}$&nbsp;
*The decoder's task is to determine its output vector&nbsp; $\underline {v}$&nbsp; so that it matches the information word&nbsp; $\underline {u}$&nbsp; "as well as possible".&nbsp; More precisely formulated:

::<math>{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.</math>

*Because of the deterministic mapping &nbsp; $\underline{c} = {\rm enc}(\underline{u})$ &nbsp; and &nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{z})$, &nbsp; the same requirement holds for:

::<math>{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{z} \ne \underline{c}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.</math>}}

The two blocks highlighted in yellow are not considered further.&nbsp; The focus of the considerations is now the&nbsp; "code word estimator"&nbsp; $\rm (CWS)$&nbsp; highlighted in red.<br>
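
The&nbsp; $m$&ndash;BSC model can be made concrete with a few lines of code.&nbsp; The following Python sketch&nbsp; (an illustration,&nbsp; not part of the original text)&nbsp; passes one&nbsp; ${\rm GF}(2^m)$&nbsp; symbol,&nbsp; coded as an&nbsp; $m$&ndash;bit integer,&nbsp; through&nbsp; $m$&nbsp; parallel BSCs;&nbsp; the crossover probability&nbsp; <code>eps</code>&nbsp; is an assumed parameter:

<syntaxhighlight lang="python">
import random

def m_bsc(symbol, m=3, eps=0.01):
    """Send one GF(2^m) symbol (an m-bit integer) over m parallel BSCs:
    each bit is flipped independently with probability eps (illustrative)."""
    for bit in range(m):
        if random.random() < eps:
            symbol ^= 1 << bit        # flip this bit
    return symbol

# A symbol error (y_i != c_i) occurs if at least one of the m bits flips,
# i.e. with probability 1 - (1 - eps)**m.
c = [0b010, 0b001, 0b100, 0b010, 0b100, 0b111, 0b111]   # code word of Example 1 below
y = [m_bsc(ci) for ci in c]                             # received word
</syntaxhighlight>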
  
== Possible code word estimators for Reed-Solomon decoding ==
<br>
[[File:EN_KC_T_2_5_S2.png|right|frame|Definition of the error vector&nbsp; $\underline{e}$|class=fit]]
The right sketch of this graphic illustrates the task of the&nbsp; $\rm CWS$,&nbsp; where here the channel model&nbsp; "$m$&ndash;BSC"&nbsp; is replaced by the additive&nbsp; &raquo;'''error vector'''&laquo;
:$$\underline{e} = \underline{y} - \underline{c}.$$
The sketch on the left illustrates the relationship between these vectors.<br>

This task is to be clarified by an example.<br>

{{GraueBox|TEXT=   
$\text{Example 1:}$&nbsp;
Let all symbols be elements of &nbsp; $\rm GF(2^3) = \{0,\ 1,\ \alpha^1,\ \alpha^2,\ \alpha^3,\ \alpha^4,\ \alpha^5,\ \alpha^6\}$. &nbsp; The adjacent table can be used for conversion
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]
*between the coefficient representation&nbsp; $($with the order&nbsp; $k_2$,&nbsp; $k_1$,&nbsp; $k_0)$
*and the exponent representation&nbsp; $($as powers of the primitive element&nbsp; $\alpha).$

In this example,&nbsp; the code word and the received word are given in coefficient notation:

::<math>\underline{c} = \Big ( (010), (001), (100),(010),(100),(111),(111)\Big )\hspace{0.05cm},</math>
::<math>\underline{y} =\Big ( (011), (001), (000),(010),(100),(111),(111)\Big )\hspace{0.05cm}.</math>

&rArr; &nbsp; This results in the following error vector&nbsp; $\underline{e} = \underline{y} - \underline{c}$:

::<math>\underline{e} \hspace{0.05cm} = \hspace{0.05cm} \Big ( (001), (000), (100), (000),(000),(000),(000)\Big )\hspace{0.05cm}.</math>

&rArr; &nbsp; Converted to the exponential representation,&nbsp; we get:

::<math>\underline{c} = \Big ( \alpha^1, \hspace{0.09cm}1\hspace{0.09cm}, \alpha^2,\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},</math>
::<math>\underline{y} = \Big ( \alpha^3, \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm},\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},</math>
::<math>\underline{e} = \Big ( \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm}, \hspace{0.05cm}\alpha^2,\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm}\Big )\hspace{0.05cm}.</math>}}<br>
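The conversion between the two representations can be reproduced programmatically.&nbsp; The following Python sketch builds the exponent&nbsp;/&nbsp;coefficient table of&nbsp; ${\rm GF}(2^3)$;&nbsp; the primitive polynomial&nbsp; $p(x) = x^3 + x + 1$&nbsp; is an assumption that is consistent with the values of this example&nbsp; $($e.g.&nbsp; $\alpha^3 \leftrightarrow (011))$:

<syntaxhighlight lang="python">
# Minimal sketch of the GF(2^3) table, assuming the primitive polynomial
# p(x) = x^3 + x + 1 (i.e. alpha^3 = alpha + 1), which reproduces the
# coefficient/exponent mapping of this example.
EXP = [0b001]                                   # EXP[i] = alpha^i as bits (k2 k1 k0)
for _ in range(6):
    v = EXP[-1] << 1                            # multiply by alpha
    EXP.append(v ^ 0b1011 if v & 0b1000 else v) # reduce modulo p(x)
LOG = {v: i for i, v in enumerate(EXP)}         # coefficients -> exponent

c = [0b010, 0b001, 0b100, 0b010, 0b100, 0b111, 0b111]   # code word
y = [0b011, 0b001, 0b000, 0b010, 0b100, 0b111, 0b111]   # received word
e = [yi ^ ci for yi, ci in zip(y, c)]                   # subtraction = XOR in GF(2^m)

as_exp = lambda v: '0' if v == 0 else f'alpha^{LOG[v]}'
print([as_exp(v) for v in e])   # -> ['alpha^0', '0', 'alpha^2', '0', '0', '0', '0']
</syntaxhighlight>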
  
<u>Note:</u> &nbsp; The task of the code word estimator&nbsp; $\rm (CWS)$&nbsp; is to find the most probable code word for&nbsp; $\underline{y}$&nbsp; and to pass its result&nbsp; $\underline{z} = \underline{c}_i$&nbsp; to the following mapping.

There are several ways to do this:
# &nbsp; Hard Decision Maximum Likelihood Decoding&nbsp;  $\text{(HD&ndash;MLD)}$,<br>
# &nbsp; Bounded Distance Decoding&nbsp; $\text{(BDD)}$,<br>
# &nbsp; Decoding&nbsp; "over half the minimum distance".<br><br>

These decoding principles are described below.<br>
  
  
<b>Hard Decision Maximum Likelihood Decoding &nbsp; $\text{(HD&ndash;MLD)}$:</b><br>

One chooses from all possible Reed&ndash;Solomon code words&nbsp; $\underline{c}_i$ &nbsp; $($of which there are in total&nbsp; $q^k)$ &nbsp; the one with the least&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming distance}$]]&nbsp; to the received word&nbsp; $\underline{y}$.&nbsp; Result:

::<math>\underline{z} = {\rm arg} \min_{\underline{c}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}_{\rm RS}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{c}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>

*The decision here is based on the maximum inference probability&nbsp; ${\rm Pr}(\underline{c}_i\hspace{0.05cm}|\hspace{0.05cm}\underline{y})$&nbsp; and leads to the best possible result.&nbsp; For more details see&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Maximum-likelihood_decision_at_the_BSC_channel|"Maximum-likelihood decision at the BSC channel"]].

*A decision is always made even if the number&nbsp; $r$&nbsp; of symbol errors is larger than the correction capability&nbsp; $t$&nbsp; of the code.&nbsp; However,&nbsp; in such a case the decoding result is very uncertain.

*It should be mentioned again that maximum likelihood decoding always decides.&nbsp; Decoding failure is impossible.&nbsp; But of course there are also wrong decisions.<br><br>
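
For very small codes,&nbsp; HD&ndash;MLD can be checked by brute force.&nbsp; The sketch below assumes a generic&nbsp; <code>codebook</code>&nbsp; $($any list of code words as tuples of&nbsp; ${\rm GF}(q)$&nbsp; symbols$)$;&nbsp; it only illustrates the decision rule,&nbsp; not an efficient decoder:

<syntaxhighlight lang="python">
def hamming_distance(a, b):
    """Number of symbol positions in which two words differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def hd_mld(y, codebook):
    """Brute-force HD-MLD sketch: return the code word nearest to y.
    A decision is always made; the search cost grows with |codebook| = q^k."""
    return min(codebook, key=lambda c: hamming_distance(y, c))
</syntaxhighlight>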
 
  
  
 
<b>Bounded Distance Decoding&nbsp; $\text{(BDD)}$:</b><br>

If the number&nbsp; $r$&nbsp; of symbol errors in the received word&nbsp; $\underline{y}$&nbsp; is not greater than the correction capability &nbsp; $t = \lfloor (d_{\rm min}- 1)/2 \rfloor$ &nbsp; of the code,&nbsp; one can correct the&nbsp; $r$&nbsp; symbol errors completely.&nbsp; However,&nbsp; it is also true:
*The case&nbsp; $r > t$&nbsp; leads to an abort of the decoding process with no result. &nbsp; In other words:

:*Only those received words are decoded that lie in a sphere with radius&nbsp; $t$&nbsp; around a code word;&nbsp; the decoding result is this sphere center.

:*Other received words are marked as undecodable,&nbsp; for example as&nbsp; "erasure".<br><br>
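
Extending the HD&ndash;MLD sketch above by the radius check gives a minimal BDD decision rule&nbsp; $($again with a hypothetical&nbsp; <code>codebook</code>;&nbsp; a real BDD decoder avoids this exhaustive search via the steps&nbsp; $\rm (A)$&nbsp; to&nbsp; $\rm (D)$&nbsp; described below$)$:

<syntaxhighlight lang="python">
def bdd(y, codebook, t):
    """BDD decision rule sketch: decode only if some code word lies within
    Hamming distance t of y, otherwise mark the word as an erasure (None).
    The brute-force search is for illustration only."""
    d_h = lambda a, b: sum(ai != bi for ai, bi in zip(a, b))
    best = min(codebook, key=lambda c: d_h(y, c))
    return best if d_h(y, best) <= t else None
</syntaxhighlight>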
  
  
 
<b>Decoding over half the minimum distance:</b><br>

Here,&nbsp; also in the case&nbsp; $r > t$,&nbsp; an attempt is made to decode the code word.&nbsp; However,&nbsp; in contrast to&nbsp; $\text{HD&ndash;MLD}$,&nbsp; which also decodes beyond half the minimum distance,&nbsp; a decoding failure is not per se excluded here.<br>

For the remainder of this chapter,&nbsp; we will deal exclusively with&nbsp; "Bounded Distance Decoding".&nbsp; The reason for this is the enormous complexity of&nbsp; "Maximum Likelihood Decoding",&nbsp; which is proportional to&nbsp; $q^{n-k}$.<br>
  
 
== Bounded Distance Decoding Procedure ==
<br>
In the following,&nbsp; the individual steps of the&nbsp; "bounded distance decoding"&nbsp; $\rm (BDD)$&nbsp; algorithm are described briefly and in a recipe-like manner.&nbsp; In the next sections,&nbsp; the individual points will be dealt with in more detail and the procedure will be illustrated using typical examples.<br>
 
 
  
$\rm (A)$&nbsp; &nbsp;$\text{Calculation and evaluation of syndrome}$ &nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD|$\text{Detailed description}$]]
# Calculate from the received word&nbsp; $\underline{y}$&nbsp; and the parity-check matrix&nbsp; $\boldsymbol{\rm H }$&nbsp; of the code the syndrome&nbsp; $\underline {s} = \underline {y}  \cdot \boldsymbol{\rm H }^{\rm T}$.
# If&nbsp; $\underline {s} =\underline {0}$,&nbsp; set the BDD output&nbsp; $\underline {z} =\underline {y}$&nbsp; and terminate the decoding process for this received word.<br>
# Otherwise,&nbsp; set the parameter&nbsp; $r = 1$&nbsp; and continue with step&nbsp; $\rm (B)$.<br><br>
  
$\rm (B)$&nbsp; &nbsp;$\text{Determine the actual symbol error count }\ r$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector|$\text{Detailed description}$]]
# Create and check the equations&nbsp; $\underline {\it \Lambda} _l \cdot\underline {s}^{\rm T} = 0$ &nbsp; for &nbsp; $l = 1,$ ... , $2 \cdot t -r$,&nbsp; assuming that the received word contains exactly &nbsp; $r$ &nbsp; symbol errors.<br>
# $\underline {\it \Lambda} _l $&nbsp; denotes the generalized&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Error_Locator_Polynomial_-_Definition_and_Properties|$\text{error locator polynomial}$]]&nbsp; $\text{(ELP)}$&nbsp; coefficient vectors.
# The parameter&nbsp; $t$&nbsp; denotes the correctability of the code.&nbsp; For the Reed&ndash;Solomon codes,&nbsp; uniformly&nbsp; $t = \lfloor (n-k)/2 \rfloor$.<br>
# If there is a unique solution,&nbsp; then continue with step&nbsp; $\rm (C)$. &nbsp;<br>In the received vector&nbsp; $\underline{y}$&nbsp; there are then indeed exactly&nbsp; $r$&nbsp; symbols falsified,&nbsp; and in the error vector&nbsp; $\underline{e}$&nbsp; there are&nbsp; $r$&nbsp; entries not equal to&nbsp; $0$.<br>
# Otherwise increase&nbsp; $r$&nbsp; by&nbsp; $1$. &nbsp; If&nbsp; $r &#8804; t$ &nbsp; &rArr; &nbsp; repeat step&nbsp; $\rm (B)$&nbsp; from the beginning: &nbsp; <br>The previously assumed&nbsp; $r$&nbsp; was obviously too small.&nbsp; Therefore now a new attempt with larger&nbsp; $r$.<br>
# If the new&nbsp; $r$&nbsp; is greater than the correction capability&nbsp; $t$&nbsp; of the code,&nbsp; the current received word cannot be decoded.&nbsp; End the decoding attempt with an&nbsp; "error message".<br><br>

$\rm (C)$&nbsp; &nbsp;$\text{Localization of the }\ r \text{ error locations}$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28C.29:_Localization_of_the_error_positions|$\text{Detailed description}$]]
# Create the&nbsp; error locator polynomial &nbsp; ${\it \Lambda}(x)$ &nbsp; and find its&nbsp; $r$&nbsp; zeros in&nbsp; ${\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.<br>
# An error at location&nbsp; $i$&nbsp; is present whenever&nbsp; ${\it \Lambda}(\alpha^{i}) = 0$.<br><br>

$\rm (D)$&nbsp; &nbsp;$\text{Determination of }\ r \text{ error values and correction}$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28D.29:_Final_error_correction|$\text{Detailed description}$]]
# Now the&nbsp; $r$&nbsp; error locations are known.&nbsp; Replacing in the received vector&nbsp; $\underline{y}$&nbsp; the wrong symbols with erasures &nbsp; &#8658; &nbsp; $y_i = \rm E$,&nbsp; if&nbsp; $e_i &ne; 0$, <br>we obtain a received word&nbsp; $\underline{y}$&nbsp; as in the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"Reed-Solomon Decoding at the Erasure Channel"]].<br>
# '''Or''':&nbsp; From the equation&nbsp; $\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$&nbsp; one arrives at a linear equation system for the faulty symbols&nbsp; $(e_i \ne 0)$,&nbsp; taking advantage of the error-free sites&nbsp; $(e_i = 0)$.<br><br>
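
The recipe can be condensed into a program skeleton.&nbsp; The following Python outline is only a structural sketch;&nbsp; the callables&nbsp; <code>syndrome</code>,&nbsp; <code>find_elp</code>,&nbsp; <code>elp_zeros</code>&nbsp; and&nbsp; <code>error_values</code>&nbsp; are hypothetical placeholders for the calculations of steps&nbsp; $\rm (A)$&nbsp; to&nbsp; $\rm (D)$:

<syntaxhighlight lang="python">
def bdd_decode(y, t, syndrome, find_elp, elp_zeros, error_values, gf_add):
    """Structural sketch of the BDD recipe (A)-(D) above. All step functions
    are handed in as callables -- hypothetical placeholders for the detailed
    procedures of the next sections:
      syndrome(y)          -> s = y * H^T over GF(q)             (step A)
      find_elp(s, r)       -> ELP coefficients, or None          (step B)
      elp_zeros(lam)       -> error positions I_EP               (step C)
      error_values(s, pos) -> error vector e with e * H^T = s    (step D)
    gf_add is the field addition, i.e. bitwise XOR for GF(2^m)."""
    s = syndrome(y)                              # (A) compute syndrome
    if all(sj == 0 for sj in s):
        return list(y)                           # (A) s = 0  ->  z = y
    for r in range(1, t + 1):                    # (B) try r = 1, 2, ..., t
        lam = find_elp(s, r)
        if lam is not None:                      # unique solution found
            positions = elp_zeros(lam)           # (C) localize the errors
            e = error_values(s, positions)       # (D) determine error values
            return [gf_add(yi, ei) for yi, ei in zip(y, e)]  # z = y - e
    raise ValueError("r > t: received word cannot be decoded")
</syntaxhighlight>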
 
  
 
== Step (A): Evaluation of the syndrome in BDD ==
<br>
As shown in the section&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|"Principle of Syndrome Decoding"]],&nbsp; the syndrome&nbsp; $\underline{s}$&nbsp; can be used to decode a linear code.
*With the received word&nbsp; $\underline{y}$ &nbsp; equal to code word&nbsp; $\underline{c}$&nbsp; plus error vector&nbsp; $\underline{e}$,&nbsp; the following applies:
  
 
::<math>\underline {s} = \underline {y}  \cdot { \boldsymbol{\rm H }}^{\rm T}= 
(\underline {c} + \underline {e})  \cdot { \boldsymbol{\rm H }}^{\rm T} =
\underline {c} \cdot { \boldsymbol{\rm H }}^{\rm T} + \underline {e} \cdot { \boldsymbol{\rm H }}^{\rm T}
\hspace{0.05cm}.</math>
  
*Since always &nbsp; $\underline {c}  \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline {0}$ &nbsp; holds,&nbsp; it follows from &nbsp; $\underline{s}= \underline{0}$ &nbsp; also &nbsp; $\underline {e}  \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline{0}$. &nbsp; That is:

*With very high probability,&nbsp; from &nbsp; $\underline{s}= \underline{0}$ &nbsp; one can also infer &nbsp; $\underline{e}= \underline{0}$ &nbsp; and thus the correct decoding result&nbsp; $\underline{z}= \underline{y}$.&nbsp; The decoding process would then be finished.<br>

*But there are also error patterns &nbsp; $\underline{e} \ne \underline{0}$ &nbsp; that lead to the syndrome &nbsp; $\underline{s}= \underline{0}$. &nbsp; Such patterns certainly contain more than&nbsp; $t$&nbsp; symbol errors.

*So here,&nbsp; too,&nbsp; it makes sense to abort the decoding process.&nbsp; All subsequent calculations would also not lead to success.<br><br>
  
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp;   
This and the following examples in the next sections are always based on the Reed&ndash;Solomon code&nbsp; $\text{RSC (7, 3, 5)}_8$,&nbsp; so that the conversions given in the adjacent table for&nbsp; $\rm GF(2^3)$&nbsp; can be used.&nbsp; The received word is:
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]

::<math>\underline{y}=\big (\alpha^3,\hspace{0.08cm} 1,\hspace{0.08cm} 0, \hspace{0.08cm}\alpha^1, \hspace{0.08cm} \alpha^2, \hspace{0.08cm} \alpha^5, \hspace{0.08cm} \alpha^5 \big).</math>

*With the&nbsp; [[Channel_Coding/Definition_and_Properties_of_Reed-Solomon_Codes#Generator_matrix_and_parity-check_matrix|$\text{parity-check matrix}$]]&nbsp; $\boldsymbol{\rm H }$&nbsp; the result for the syndrome is:

::<math>\underline {s} = \underline {y}  \cdot { \boldsymbol{\rm H } }^{\rm T}= 
\big (\alpha^3,\hspace{0.08cm} 1,\hspace{0.08cm} 0, \hspace{0.08cm}\alpha^1, \hspace{0.08cm} \alpha^2, \hspace{0.08cm} \alpha^5, \hspace{0.08cm} \alpha^5 \big) \cdot
\begin{pmatrix}
1 & 1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\
\alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\
\alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3
\end{pmatrix}.  </math>

*These vector&ndash;matrix multiplications give the result:

::<math>\underline {s} \hspace{-0.05cm} = \hspace{-0.05cm} (\alpha^3 , \alpha^3 , \alpha^3 , \alpha^3) + (\alpha^1 , \alpha^2 , \alpha^3 , \alpha^4) + (0,0,0,0) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,1,\alpha^3,\alpha^6)+(\alpha^6,\alpha^3,1,\alpha^4)\hspace{-0.05cm}+\hspace{-0.05cm}(\alpha^3,\alpha^1,\alpha^6,\alpha^4) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,\alpha^3,\alpha^2,\alpha^1)</math>
::<math>\Rightarrow  \hspace{0.3cm} \underline {s} =  \text{...} \hspace{0.05cm}=
(\alpha^5,\alpha^2,\alpha^3,\alpha^1)  \hspace{0.05cm}.</math>

*So the received word was falsified. &nbsp; Otherwise the result would have been&nbsp; $\underline{s}= \underline{0} = (0, 0, 0, 0)$.<br>

The description of the decoding process for the&nbsp; $\text{RSC (7, 3, 5)}_8$&nbsp; is continued in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector|$\text{Example 4}$]].}}<br>
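This calculation can be verified numerically.&nbsp; The sketch below again assumes the primitive polynomial&nbsp; $p(x)=x^3+x+1$&nbsp; for&nbsp; ${\rm GF}(2^3)$&nbsp; and the parity-check matrix entries&nbsp; $\alpha^{j\hspace{0.03cm}\cdot\hspace{0.03cm} i}$&nbsp; used above:

<syntaxhighlight lang="python">
# Sketch: verify the syndrome of Example 2, again assuming p(x) = x^3 + x + 1.
EXP = [0b001]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                       # multiplication via exp/log tables
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

a = lambda i: EXP[i % 7]                # shorthand for alpha^i
y = [a(3), a(0), 0, a(1), a(2), a(5), a(5)]     # received word (alpha^0 = 1)

# Syndrome s_j = sum over i of y_i * alpha^(j*i), for j = 1 .. 4 (addition = XOR):
s = []
for j in range(1, 5):
    acc = 0
    for i, yi in enumerate(y):
        acc ^= gf_mul(yi, a(j * i))
    s.append(acc)
print([f'alpha^{LOG[v]}' for v in s])   # -> ['alpha^5', 'alpha^2', 'alpha^3', 'alpha^1']
</syntaxhighlight>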
  
 
== Error Locator Polynomial - Definition and Properties ==
<br>
After the syndrome calculation in step&nbsp; $\rm (A)$&nbsp; with the result&nbsp; $\underline{s} \ne \underline{0}$&nbsp; we know
*that the received word&nbsp; $\underline{y}$ &nbsp; does not match the code word&nbsp; $\underline{c}$,&nbsp; respectively
*that the error vector&nbsp; $\underline{e} = (e_0, \hspace{0.05cm}e_1, \hspace{0.05cm}\text{ ...}\hspace{0.05cm} , e_{n-1})$&nbsp; certainly includes elements not equal to&nbsp; "$0$".<br><br>

However,&nbsp; we do not know how many symbols were falsified &nbsp; $(0 < r &#8804; n)$,&nbsp; nor can we name the positions of the error locations&nbsp; $(e_i &ne; 0)$&nbsp; in the error vector&nbsp; $\underline{e}$.&nbsp; An approach to this task is provided by the so-called&nbsp; "error locator polynomial"&nbsp; introduced by&nbsp; [https://en.wikipedia.org/wiki/W._Wesley_Peterson $\text{William Wesley Peterson}$]&nbsp; &rArr; &nbsp; see [Pet60]<ref name ='Pet60'>Peterson, W.W.:&nbsp; Encoding and Error-correction Procedures for the Bose-Chaudhuri Codes.&nbsp; IRE Transactions on Information Theory, IT-6, pp. 459&ndash;470, 1960.</ref>.&nbsp; It is also known as the&nbsp; "key equation".<br>
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; Let it be known that exactly&nbsp; $r$&nbsp; elements of the error vector&nbsp; $\underline{e}$&nbsp; are non-zero,&nbsp; recognizable by the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming weight}$]]&nbsp; $w_{\rm H}(\underline{e}) = r$.
*Also let the set &nbsp; ${I}_{\rm EP}$ &nbsp; of error positions be known:

::<math>I_{\rm EP} = \{ i \hspace{0.1cm}\vert \hspace{0.1cm} e_i \ne 0,\hspace{0.1cm} 0 \le i < n \}\hspace{0.05cm}.</math>

*Then for the&nbsp; &raquo;'''error locator polynomial'''&laquo; &nbsp;$\rm (ELP)$&nbsp; holds:

::<math>{\it \Lambda}(x)=x \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ].</math>}}<br>
 
  
&rArr; &nbsp; From the definition of the error locator polynomial we know:
*Because of the factor&nbsp; $x$&nbsp; in front of the product sign:&nbsp; ${\it \Lambda}(x= 0) = 0$.<br>

*Other&nbsp; $r$&nbsp; zeros result for&nbsp; $x = \alpha^{i}$ &nbsp; with &nbsp; $i \in I_{\rm EP}$,&nbsp; that is,&nbsp; for all error positions.<br>

*In contrast,&nbsp; the error locator polynomial has no zero for&nbsp; $i &#8713; I_{\rm EP}$ &nbsp; &#8658; &nbsp; $e_i = 0$: &nbsp; ${\it \Lambda}(x= \alpha^{i}) \ne0$.<br><br>

&rArr; &nbsp; So we search for the&nbsp; $r$&nbsp; non&ndash;trivial zeros of&nbsp; ${\it \Lambda}(x)$&nbsp; with argument&nbsp; $x &#8712; {\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.&nbsp; If we succeed,
:*we know the&nbsp; $r$&nbsp; error positions,
:*but not yet the actual error values&nbsp; $e_i &#8712; {\rm GF}(q)$.<br>
  
{{GraueBox|TEXT=
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]  
$\text{Example 3:}$&nbsp;
Let $n=7$ &nbsp; &#8658; &nbsp; $q=8$,&nbsp; $r=2$&nbsp; and&nbsp; $I_{\rm EP}  = \{2, \hspace{0.05cm}4\}$:  &nbsp; &nbsp; &nbsp; &nbsp;[[File:P ID2551 KC T 2 5 S5a.png]]<br>

*Thus for the&nbsp; "error locator polynomial"&nbsp; in&nbsp; ${\rm GF}(2^3)$&nbsp; we obtain:

::<math>{\it \Lambda}(x)=x \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^4)=  x \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^4) =x \cdot \big [x^2 \hspace{-0.05cm}+ \hspace{-0.05cm}(\alpha^2 + \alpha^4) \cdot x + \alpha^6\big ] </math>
::<math>\Rightarrow \hspace{0.3cm}  {\it \Lambda}(x)= x \cdot \big [\alpha^6 + \alpha \cdot x + x^2\big ]\hspace{0.05cm}.</math>

*The other zeros&nbsp; $($except for&nbsp; $x = 0)$&nbsp; naturally arise here for&nbsp; $x = \alpha^2$&nbsp; and&nbsp; $x = \alpha^4$,&nbsp; as a calculation shows:

::<math>{\it \Lambda}(x = \alpha^2)= x \cdot \big [\alpha^6 + \alpha \cdot \alpha^2 + (\alpha^2)^2\big ] = x \cdot \big [\alpha^6 + \alpha^3 + \alpha^4  \big ]= 0\hspace{0.05cm},</math>
::<math> {\it \Lambda}(x = \alpha^4)= x \cdot \big [\alpha^6 + \alpha \cdot \alpha^4 + (\alpha^4)^2\big ] =x \cdot \big [\alpha^6 + \alpha^5 + \alpha \big ]= 0\hspace{0.05cm}.</math>}}<br>
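The zero search of Example 3 can be automated in the style of a Chien search:&nbsp; simply evaluate&nbsp; ${\it \Lambda}(x)$&nbsp; for all&nbsp; $x = \alpha^0, \dots, \alpha^6$.&nbsp; Again a sketch assuming&nbsp; $p(x) = x^3 + x + 1$:

<syntaxhighlight lang="python">
# Sketch: find the zeros of Lambda(x) = x * (alpha^6 + alpha*x + x^2) in GF(2^3),
# assuming the primitive polynomial p(x) = x^3 + x + 1.
EXP = [0b001]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def elp(x):        # Lambda(x) with lambda_0 = alpha^6, lambda_1 = alpha (Example 3)
    bracket = EXP[6] ^ gf_mul(EXP[1], x) ^ gf_mul(x, x)
    return gf_mul(x, bracket)

# Chien-search style sweep over all non-zero field elements alpha^0 .. alpha^6:
zeros = [i for i in range(7) if elp(EXP[i]) == 0]
print(zeros)       # -> [2, 4]  =>  error positions I_EP = {2, 4}
</syntaxhighlight>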
  
For further derivation,&nbsp; we always assume the&nbsp; $\text{RSC (7, 3, 5)}_8$&nbsp; with the following parameter values:

:$$n=7, \hspace{0.3cm}k = 3, \hspace{0.3cm}d_{\rm min} = 5 \ \Rightarrow  \ t = (d_{\rm min} -1)/2 = 2.$$

*Let the number of symbol errors be&nbsp; $r = t = 2$.&nbsp; Thus the system of equations to be solved,&nbsp; with the auxiliary variables&nbsp; $L_i = {\it \Lambda}(\alpha^{i})$,&nbsp; reads:
  
 
::<math>L_0 = {\it \Lambda }(\alpha^0) = \alpha^0 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^0)^1 + (\alpha^0)^2 \right ] = {\it \lambda}_0  \cdot 1 + {\it \lambda}_1 \cdot  1 + 1 \hspace{0.05cm},</math>
::<math>\hspace{0.55cm}\vdots</math>
::<math> L_6 = {\it \Lambda }(\alpha^6) = \alpha^6 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^6)^1 + (\alpha^6)^2 \right ] = {\it \lambda}_0 \cdot \alpha^6 + {\it \lambda}_1 \cdot \alpha^{12} + \alpha^{18} \hspace{0.05cm}.</math>
  
*In vector form,&nbsp; this system of equations with the auxiliary vector &nbsp; $\underline{L} = (L_0, \hspace{0.05cm}L_1, \hspace{0.05cm}L_2,\hspace{0.05cm}L_3,\hspace{0.05cm}L_4,\hspace{0.05cm}L_5,\hspace{0.05cm}L_6)$ &nbsp; reads:

::<math>\underline {L}^{\rm T}=\begin{pmatrix}
1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 \\
\alpha^2 & \alpha^4 & \alpha^6 \\
\alpha^3 & \alpha^6 & \alpha^2 \\
\alpha^4 & \alpha^1 & \alpha^5 \\
\alpha^5 & \alpha^3 & \alpha^1 \\
\alpha^6 & \alpha^5 & \alpha^4
\end{pmatrix}
\cdot
\begin{pmatrix}
\lambda_0 \\
\lambda_1 \\
1
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*We now expand the ELP coefficient vector&nbsp; $\underline {\it \Lambda }$&nbsp; by appending zeros to the length&nbsp; $n-k$.&nbsp; Thus,&nbsp; in the considered example,&nbsp; we obtain &nbsp; $\underline {\it \Lambda } = ( \lambda_0,\hspace{0.05cm}\lambda_1,\hspace{0.05cm}1, \hspace{0.05cm}0)$ &nbsp; and the following vector equation:

::<math>\underline {L}^{\rm T}
=\begin{pmatrix}
1 & 1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\
\alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\
\alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3
\end{pmatrix}
\cdot
\begin{pmatrix}
\lambda_0 \\
\lambda_1 \\
1 \\
0
\end{pmatrix}
\hspace{0.05cm}.</math>
  
With this we have achieved:
# The&nbsp; $7&times 3$&nbsp; matrix has now become a&nbsp; $7&times 4$&nbsp; matrix.
# The fourth column can actually be filled arbitrarily,&nbsp; since all elements are multiplied by zeros.
# The addition chosen here gives the transposed&nbsp; [[Channel_Coding/Definition_and_Properties_of_Reed-Solomon_Codes#Generator_matrix_and_parity-check_matrix|$\text{parity-check matrix}$]]&nbsp; of&nbsp; $\text{RSC (7, 3, 5)}_8$.

Thus,&nbsp; one can write for the last vector equation:

:::<math>\underline {L}^{\rm T} = { \boldsymbol{\rm H }}^{\rm T} \cdot \underline {\it \Lambda }^{\rm T}
  \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}
\hspace{0.05cm}.</math>
  
*But since for the error locations&nbsp; $(e_i &ne; 0)$&nbsp; always&nbsp; $L_i = {\it \Lambda}(\alpha^{i}) = 0$&nbsp; holds,&nbsp; the product is&nbsp; $L_i \cdot e_i \equiv 0$,&nbsp; and one obtains as determining equation for the zeros of the error locator polynomial:

:::<math>\underline {L}  \cdot \underline {e}^{\rm T} = 0  \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 
\hspace{0.05cm}.</math>
  
 
{{BlaueBox|TEXT=   
$\text{Important intermediate result:}$&nbsp; The coefficients&nbsp; $\lambda_0$,&nbsp; $\lambda_1$, ... &nbsp;of the error locator polynomial&nbsp; ${\it \Lambda}(x)$&nbsp; must always satisfy the vector equation&nbsp; $\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 $.
*Here&nbsp; $\underline {\it \Lambda }$&nbsp; denotes the&nbsp; $\rm ELP$&nbsp; coefficient vector.

*$\underline {s }  = \underline {y }\cdot  \boldsymbol{\rm H }^{\rm T} $&nbsp; denotes the&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD|$\text{syndrome}$]].}}<br>
 
  
 
== Step (B): Set up/evaluate the ELP coefficient vector ==
<br>
Before we can consider this intermediate result for step&nbsp; $\rm (B)$,&nbsp; some generalizations need to be made.
[[File:P ID2550 KC T 2 5 S3 v1.png|right|frame|Shifted ELP coefficient vectors|class=fit]]
The reason for this is:
*The relationship &nbsp; $\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 $ &nbsp; yields only a single equation of determination.

*Thus the problem can be solved for&nbsp; $r = 1$&nbsp; if one is sure that indeed only one symbol has been falsified.<br>

*If one is not sure of this,&nbsp; but nevertheless performs the calculation for&nbsp; $r = 1$,&nbsp; one still needs a second equation&nbsp; (or even several)&nbsp; to verify the assumption.<br><br>

The property of the&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Error_Locator_Polynomial_-_Definition_and_Properties|$\text{error locator polynomial}$]]&nbsp; that&nbsp; ${\it \Lambda}(\alpha^{i}) = 0$&nbsp; holds only for&nbsp; $e_i &ne; 0$&nbsp; $($that means:&nbsp; the&nbsp; $i$&ndash;th symbol is falsified$)$&nbsp; is preserved when multiplying&nbsp; ${\it \Lambda}(x)$&nbsp; by arbitrary powers of&nbsp; $x$.

Each multiplication by&nbsp; $x$&nbsp; implies a shift of one place to the right for the ELP coefficient vector.<br>
<br clear=all>
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; 
The&nbsp; $\text{generalized ELP coefficient vectors}$ &nbsp; $\underline {\it \Lambda }_l$&nbsp; result from successive shifts with respect to&nbsp; $\underline {\it \Lambda }$:

::<math>{\it \Lambda}_l(x)=x^l \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x^l \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ]\hspace{0.05cm}.</math>

In this defining equation,&nbsp; $\underline {\it \Lambda }_1$&nbsp; corresponds to the previous&nbsp; $\underline {\it \Lambda }$.}}
  
  
The graph shows the occupancy under the assumption of&nbsp; $r$&nbsp; error locations in the error vector &nbsp;$\underline {e}$&nbsp; for
+
The upper graph shows the occupancy under the assumption of&nbsp; $r$&nbsp; error locations in the error vector &nbsp;$\underline {e}$&nbsp; for
*$r=1$ &nbsp; in the left panel (with blue background),
+
*$r=1$ &nbsp; in the left panel&nbsp; (with blue background),
*$r=2$ &nbsp; in the middle area (with red background),
+
*$r=2$ &nbsp; in the middle area&nbsp; (with red background),
*$r=3$ &nbsp; in the right area (with green background).
+
*$r=3$ &nbsp; in the right area&nbsp; (with green background).
  
  
 
One recognizes:
# The length of all&nbsp; $\underline {\it \Lambda }_l$&nbsp; is always&nbsp; $n-k$.&nbsp; Each vector contains&nbsp; $r$&nbsp; coefficients&nbsp; $\lambda_0$,&nbsp; $\lambda_1$, ... ,&nbsp; $\lambda_{r-1}$ &nbsp; &rArr; &nbsp; $0 &#8804; i < r$&nbsp; and one&nbsp; "1". <br>The remainder of each vector is padded with zeros.<br>
# For each&nbsp; $r$&nbsp; there are exactly&nbsp; $n-k-r$&nbsp; coefficient vectors&nbsp; $\underline {\it \Lambda }_l$,&nbsp; where&nbsp; $\underline {\it \Lambda }_l$&nbsp; always results from&nbsp; $\underline {\it \Lambda }_{l-1}$&nbsp; by a right shift of one position. <br>The vector&nbsp; $\underline {\it \Lambda }_{n-k-r}$&nbsp; always ends with a&nbsp; $1$.<br>
# The equation system&nbsp; $\underline {\it \Lambda }_l \cdot \underline {s}^{\rm T} = 0 $&nbsp; therefore leads to&nbsp; $n-k-r$&nbsp; equations. <br>The chosen approach for&nbsp; $r$&nbsp; is only correct if all equations lead to the same results for&nbsp; $\lambda_0$, ... , $\lambda_{r-1}$.<br>
# If this is not the case,&nbsp; one has to increase&nbsp; $r$&nbsp; and work on a new equation system,&nbsp; until a unique solution results from all equations for the current&nbsp; $r$&nbsp; $($a code sketch of this search follows below$)$.&nbsp; If the final&nbsp; $r$&nbsp; is greater than the correctability&nbsp; $t$&nbsp; of the code,&nbsp; the calculation can be terminated.&nbsp; The pending received word&nbsp; $\underline {y}$&nbsp; is then not decodable.<br>
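The hypothesis test of step&nbsp; $\rm (B)$&nbsp; can be made concrete with a short Python sketch.&nbsp; This is our own illustration,&nbsp; not part of the original text:&nbsp; it builds&nbsp; $\rm GF(2^3)$&nbsp; from the primitive polynomial&nbsp; $p(x) = x^3+x+1$&nbsp; $($consistent with the relation&nbsp; $\alpha^3 = \alpha + 1$&nbsp; used in the examples$)$&nbsp; and simply brute-forces the coefficients&nbsp; $\lambda_0, \text{...} , \lambda_{r-1}$&nbsp; for increasing&nbsp; $r$;&nbsp; all identifiers are illustrative.
<pre>
# Step (B) as a brute-force search: increase the error count hypothesis r until
# ALL n-k-r equations  Lambda_l . s^T = 0  are satisfied by one coefficient set.
from itertools import product

EXP = [1]                                    # EXP[i] = alpha^i in coefficient form (3 bits)
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:                           # reduce modulo x^3 + x + 1 (binary 1011)
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                            # multiplication in GF(8); addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def dot(u, v):                               # scalar product  u . v^T  in GF(8)
    acc = 0
    for a, b in zip(u, v):
        acc ^= gf_mul(a, b)
    return acc

def find_error_count(s, n=7, k=3, t=2):
    """Try r = 1, ..., t; return the first r whose lambdas satisfy all n-k-r equations."""
    for r in range(1, t + 1):
        for lams in product(range(8), repeat=r):
            lam = list(lams) + [1] + [0] * (n - k - r - 1)   # (lambda_0,...,lambda_{r-1},1,0,...)
            shifted = [[0] * l + lam[:n - k - l] for l in range(n - k - r)]
            if all(dot(v, s) == 0 for v in shifted):
                return r, lams
    return None, None                        # r > t: received word not decodable

s = [EXP[5], EXP[2], EXP[3], EXP[1]]         # syndrome from Example 2
r, lams = find_error_count(s)
print(r, ['-' if x == 0 else LOG[x] for x in lams])   # -> 2 [2, 6]
</pre>
For the syndrome of Example 2 the sketch reports&nbsp; $r = 2$&nbsp; with the exponents&nbsp; $(2,\ 6)$,&nbsp; i.e.&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$&nbsp; &ndash;&nbsp; exactly the result derived by hand in the following example.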
{{GraueBox|TEXT=  
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representation]] 
$\text{Example 4:}$&nbsp; The conditions stated in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]]&nbsp; still apply:
# There,&nbsp; due to the syndrome&nbsp; $\underline {s} = (\alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1) \ne \underline {0}$,&nbsp; it was demonstrated that the received vector&nbsp; $\underline {y}$&nbsp; was falsified &nbsp; &rArr; &nbsp; error vector&nbsp; $\underline {e} \ne \underline{0}$.
# Not known,&nbsp; however,&nbsp; is the actual symbol error count&nbsp; $r$.<br>

*Assuming a single falsified symbol&nbsp; $(r= 1)$,&nbsp; we obtain the following system of equations&nbsp; $($written here in matrix form$)$:

::<math>\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= 
\begin{pmatrix}
\lambda_0 & 1 & 0 & 0 \\
0 & \lambda_0 & 1 & 0 \\
0 & 0 & \lambda_0 & 1
\end{pmatrix} \cdot
\begin{pmatrix}
\alpha^5\\
\alpha^2\\
\alpha^3\\
\alpha
\end{pmatrix} \stackrel{!}{=}
\begin{pmatrix}
0\\
0\\
0
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*This equation system gives three different solutions for&nbsp; $\lambda_0$,&nbsp; so the assumption&nbsp; $r=1$&nbsp; cannot be correct:

::<math>\text{line 1:}\hspace{0.5cm}\alpha^5 \cdot \lambda_0 + \alpha^2 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{2-5}= \alpha^{-3}= \alpha^{4}\hspace{0.05cm},</math>
::<math>\text{line 2:}\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{3-2}=  \alpha\hspace{0.05cm},</math>
::<math>\text{line 3:}\hspace{0.5cm}\alpha^3 \cdot \lambda_0 + \alpha^1 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{1-3}=  \alpha^{-2} = \alpha^{5} \hspace{0.05cm}.</math>

*Therefore,&nbsp; we now set up another equation system,&nbsp; assuming&nbsp; $r = 2$:
  
 
::<math>\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= 
\begin{pmatrix}
\lambda_0 & \lambda_1 & 1 & 0 \\
0 & \lambda_0 & \lambda_1 & 1
\end{pmatrix} \cdot
\begin{pmatrix}
\alpha^5\\
\alpha^2\\
\alpha^3\\
\alpha
\end{pmatrix} \stackrel{!}{=}
\begin{pmatrix}
0\\
0
\end{pmatrix}\hspace{0.05cm}. </math>

*This leads to two equations for&nbsp; $\lambda_0$&nbsp; and&nbsp; $\lambda_1$:

::<math>\alpha^5 \cdot \lambda_0 + \alpha^2 \cdot \lambda_1 + \alpha^3 = 0 \hspace{0.05cm},\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 \cdot \lambda_1 + \alpha^1 = 0 \hspace{0.05cm}.</math>

*This equation system is now uniquely solvable.&nbsp; One gets&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$.&nbsp; This means: 
# The assumption that indeed&nbsp; $r = 2$&nbsp; positions of the received vector&nbsp; $\underline {y}$&nbsp; have been falsified is correct.<br>
# But it is not yet known which positions have been falsified.&nbsp; Just this much in advance: 
# It is not symbol positions&nbsp; '''2'''&nbsp; and&nbsp; '''6''',&nbsp; but the positions&nbsp; '''0'''&nbsp; and&nbsp; '''2''',&nbsp; as shown in the following&nbsp; $\text{Example 5}$&nbsp; (next section).}}<br> 
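As a quick numerical cross-check of this result&nbsp; $($a minimal sketch,&nbsp; again assuming the&nbsp; $\rm GF(2^3)$&nbsp; tables for&nbsp; $p(x) = x^3+x+1)$,&nbsp; both determining equations can be evaluated for&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$;&nbsp; both must vanish:
<pre>
# Check of Example 4: both equations must vanish for lambda_0 = alpha^2, lambda_1 = alpha^6.
EXP = [1, 2, 4, 3, 6, 7, 5]                  # alpha^0 ... alpha^6 for p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                            # multiplication in GF(8); addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

lam0, lam1 = EXP[2], EXP[6]
eq1 = gf_mul(EXP[5], lam0) ^ gf_mul(EXP[2], lam1) ^ EXP[3]   # alpha^5*l0 + alpha^2*l1 + alpha^3
eq2 = gf_mul(EXP[2], lam0) ^ gf_mul(EXP[3], lam1) ^ EXP[1]   # alpha^2*l0 + alpha^3*l1 + alpha^1
print(eq1, eq2)                              # -> 0 0
</pre>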
  
  
== Step (C): Localization of the error positions ==
<br>
After processing step&nbsp; $\rm(B)$,&nbsp; the following are known:
# the number&nbsp; $r$&nbsp; of error locations&nbsp; $e_i \ne 0$&nbsp; in the vector&nbsp; $\underline {e} = (e_0, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_i, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_{n-1})$,<br>
# the coefficients&nbsp; $\lambda_0, \hspace{0.05cm}\text{... }\hspace{0.05cm} , \lambda_{r-1}$&nbsp; of the error locator polynomial.<br><br>

Now the set of error positions has to be determined: &nbsp; $I_{\rm EP} = \{ i \hspace{0.1cm}| \hspace{0.1cm} e_i \ne 0,\hspace{0.3cm} 0 \le i < n \}\hspace{0.05cm}.$

There are two ways to do this&nbsp; $($both methods are used in the following example$)$:
*the so-called&nbsp; "Chien search",&nbsp; in which the sought zeros are determined by inserting all possible code symbols except the zero symbol &nbsp; &rArr; &nbsp; $(\alpha^0, \text{... }, \alpha^i, \text{... },\alpha^{n-1})$&nbsp; into the error locator polynomial&nbsp; $($see the sketch after this list$)$,<br>

*the evaluation of the equation&nbsp; $\underline {L} = (L_0, \text{... }, L_i, \text{... },L_{n-1} ) = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}$&nbsp; with the abbreviation&nbsp; $L_i = {\it \Lambda}(\alpha^i)$.<br>
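A possible sketch of the Chien search&nbsp; $($our own illustration;&nbsp; the&nbsp; $\rm GF(2^3)$&nbsp; tables again assume&nbsp; $p(x) = x^3+x+1)$:&nbsp; it inserts&nbsp; $\alpha^0, \text{...}, \alpha^6$&nbsp; into&nbsp; ${\it \Lambda}(x)$&nbsp; and collects the zeros,&nbsp; here already for the ELP coefficients found in Example 4:
<pre>
# Chien search: Lambda(x) = x * (alpha^2 + alpha^6 * x + x^2) over GF(2^3).
EXP = [1, 2, 4, 3, 6, 7, 5]                  # alpha^i for p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                            # multiplication in GF(8); addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

coeffs = [EXP[2], EXP[6], 1, 0]              # ELP coefficient vector (lambda_0, lambda_1, 1, 0)

def elp(x):                                  # Lambda(x) = sum_j coeffs[j] * x^(j+1)
    acc = 0
    for j, c in enumerate(coeffs):
        p = x
        for _ in range(j):                   # p = x^(j+1)
            p = gf_mul(p, x)
        acc ^= gf_mul(c, p)
    return acc

# insert all nonzero field elements alpha^0 ... alpha^6 -> zeros mark the error positions
I_EP = [i for i in range(7) if elp(EXP[i]) == 0]
print(I_EP)                                  # -> [0, 2]
</pre>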
 
  
{{GraueBox|TEXT=
$\text{Example 5:}$&nbsp; In&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector| $\text{Example 4}$]],&nbsp; under the boundary conditions stated in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]],&nbsp; it was determined that
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]] 

# $r= 2$&nbsp; symbol errors are present,&nbsp; and 
# the ELP coefficients are&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$. 

This results in the following error locator polynomial:

::<math>{\it \Lambda}(x)=x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+x^2 \big ]
=x \cdot \big [\alpha^2 + \alpha^6 \cdot x+x^2 \big ]\hspace{0.05cm}.</math>

&rArr; &nbsp; According to the Chien search one gets:

::<math>{\it \Lambda}(\alpha^0)\hspace{0.15cm}  =  \hspace{0.15cm}\alpha^0 \cdot \big [ \alpha^2 + \alpha^6 \cdot 1 + 1 \big ] =  \alpha^2 + (\alpha^2 + 1) + 1 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm},</math>
::<math>{\it \Lambda}(\alpha^1)\hspace{0.15cm}  =  \hspace{0.15cm}\alpha^1 \cdot \big [\alpha^2 + \alpha^6 \cdot \alpha^1 + \alpha^2\big ]= \alpha^1 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{\rm No\hspace{0.15cm} zero}\hspace{0.05cm},</math>
::<math>{\it \Lambda}(\alpha^2)\hspace{0.15cm}  =  \hspace{0.15cm}\alpha^2 \cdot \big [ \alpha^2 + \alpha^6 \cdot \alpha^2 + \alpha^4 \big ] =\alpha^4 + \alpha^{10} + \alpha^6 = \text{...}
=  \hspace{0.15cm}0  
 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm}.</math>

Thus the two error positions&nbsp; $i = 0$&nbsp; and&nbsp; $i = 2$&nbsp; are found &nbsp; &rArr; &nbsp; the error vector is: &nbsp; $\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)$.<br>

&rArr; &nbsp; The vector equation&nbsp; $\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H } }$&nbsp; gives the same result in a more compact form:

::<math>\underline {L} = \underline{\it \Lambda} \cdot  { \boldsymbol{\rm H } }  = (\alpha^2,\ \alpha^6,\ 1,\ 0) \cdot
\begin{pmatrix}
1 & \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\
1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 & \alpha^3 & \alpha^5 \\
1 & \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 & \alpha^1 & \alpha^4 \\
1 & \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 & \alpha^6 & \alpha^3 
\end{pmatrix}</math>
::<math>\Rightarrow \hspace{0.3cm} \underline {L} = (\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6,1 ,\alpha^1) + (\alpha^6,\alpha^1,\alpha^3,\alpha^5,1 ,\alpha^2,\alpha^4)+(1, \alpha^3,\alpha^6,\alpha^2,\alpha^5,\alpha^1,\alpha^4)</math>
::<math>\Rightarrow \hspace{0.3cm} \underline {L} = (0,\alpha^1,0,\alpha^2,\alpha^3,\alpha^5,\alpha^1)\hspace{0.3cm} \Rightarrow \hspace{0.3cm} L_0 = L_2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}
\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)\hspace{0.05cm}.</math>

The example continues with&nbsp; $\text{Example 6}$&nbsp; in the next section.}}<br>
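The second method can be sketched in the same style:&nbsp; the vector&ndash;matrix product&nbsp; $\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}$&nbsp; is evaluated element by element,&nbsp; and the positions with&nbsp; $L_i = 0$&nbsp; are the error positions&nbsp; $($again an illustrative sketch under the same&nbsp; $\rm GF(2^3)$&nbsp; assumptions$)$:
<pre>
# Evaluation of  L = Lambda . H  for Example 5 over GF(2^3).
EXP = [1, 2, 4, 3, 6, 7, 5]                  # alpha^i for p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                            # multiplication in GF(8); addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

Lam = [EXP[2], EXP[6], 1, 0]                 # ELP coefficient vector
H = [[EXP[((j + 1) * i) % 7] for i in range(7)] for j in range(4)]   # H[j][i] = alpha^((j+1)*i)

L = [0] * 7
for i in range(7):
    for j in range(4):
        L[i] ^= gf_mul(Lam[j], H[j][i])

print(['-' if x == 0 else LOG[x] for x in L])   # -> ['-', 1, '-', 2, 3, 5, 1]  =>  L_0 = L_2 = 0
</pre>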
  
 
== Step (D): Final error correction ==
<br>
In the last step,&nbsp; the&nbsp; $r$&nbsp; symbol errors have to be corrected,&nbsp; whose positions are known after completion of step&nbsp; $\rm (C)$:
*Marking the error positions in the received word&nbsp; $\underline {y}$&nbsp; as erasures&nbsp; $\rm E$,&nbsp; the corresponding code word&nbsp; $\underline {z}$&nbsp; can be found according to the description in the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"Reed-Solomon decoding at the erasure channel"]].<br>

*A second possibility is the determination of the error vector&nbsp; $\underline {e}$&nbsp; from the equation&nbsp; $\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$&nbsp; and the corresponding correction&nbsp; $\underline {z} = \underline {y} - \underline {e} $.&nbsp; This procedure is the basis of the following example&nbsp; $($and of the sketch below$)$.<br><br>
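Before the worked example,&nbsp; a small sketch of this second possibility&nbsp; $($illustrative Python under the same&nbsp; $\rm GF(2^3)$&nbsp; assumptions as above;&nbsp; for such a tiny system the error values are simply found by exhaustive search,&nbsp; where Gaussian elimination over&nbsp; $\rm GF(q)$&nbsp; would be used for long codes$)$:
<pre>
# Step (D), second variant: error values from  e . H^T = s  at known error positions.
from itertools import product

EXP = [1, 2, 4, 3, 6, 7, 5]                  # alpha^i over GF(2^3), p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):                            # multiplication in GF(8); addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

HT = [[EXP[(j * i) % 7] for j in range(1, 5)] for i in range(7)]  # transposed parity-check matrix

def syndrome(word):                          # s = word . H^T
    s = [0, 0, 0, 0]
    for i, sym in enumerate(word):
        for j in range(4):
            s[j] ^= gf_mul(sym, HT[i][j])
    return s

s_target = [EXP[5], EXP[2], EXP[3], EXP[1]]  # syndrome from Example 2
positions = [0, 2]                           # error positions from step (C)

for vals in product(range(1, 8), repeat=len(positions)):   # nonzero error values only
    e = [0] * 7
    for p, v in zip(positions, vals):
        e[p] = v
    if syndrome(e) == s_target:
        print(e)                             # -> [1, 0, 4, 0, 0, 0, 0]  =  (1, 0, alpha^2, 0, ...)
        break
</pre>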
  
{{GraueBox|TEXT=  
$\text{Example 6:}$&nbsp; We consider again the&nbsp; $\text{RSC (7, 3, 5)}_8$.&nbsp; In&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]],
*the received word&nbsp; $\underline{y}=\big (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0, \hspace{0.09cm}\alpha^1, \hspace{0.09cm} \alpha^2, \hspace{0.09cm} \alpha^5, \hspace{0.09cm} \alpha^5 \big)$&nbsp; was given,&nbsp; and

*from this the syndrome&nbsp; $\underline{s}=\big (\alpha^5,\hspace{0.09cm}\alpha^2, \hspace{0.09cm} \alpha^3, \hspace{0.09cm} \alpha^1\big)$&nbsp; was determined. 

According to the calculations in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28C.29:_Localization_of_the_error_positions| $\text{Example 5}$]],&nbsp; the error vector is&nbsp; $\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)$. <br>

[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]
From&nbsp; $\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$&nbsp; we now obtain the following governing equations for the error values&nbsp; $e_0$&nbsp; and&nbsp; $e_2$:

::<math>\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T}= \begin{pmatrix}
e_0 & 0 & e_2 & 0 & 0 & 0 & 0
\end{pmatrix}\cdot
\begin{pmatrix}
1 & 1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\
\alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\
\alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3 
\end{pmatrix} \stackrel{!}{=} \begin{pmatrix}
\alpha^5 & \alpha^2 & \alpha^3 & \alpha^1 
\end{pmatrix}  </math>
::<math>\Rightarrow  \hspace{0.3cm}e_0 \cdot (1,\ 1,\ 1,\ 1) + e_2 \cdot (\alpha^2,\ \alpha^4,\ \alpha^6,\ \alpha^1)\stackrel{!}{=}
( \alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1)</math>
::<math>\Rightarrow  \hspace{0.3cm}
e_0 + e_2 \cdot \alpha^2 = \alpha^5\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^4 = \alpha^2\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^6 = \alpha^3\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^1 = \alpha^1\hspace{0.05cm}.</math>

*All equations lead to the result&nbsp; $e_0 = 1$&nbsp; and&nbsp; $e_2 =\alpha^2$. 

*Thus,&nbsp; with&nbsp; $\underline {y} =  (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0,\hspace{0.09cm}  \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)$&nbsp; and&nbsp; $\underline {e}=  (1,\hspace{0.09cm} 0,\hspace{0.09cm}  \alpha^2,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0)$,&nbsp; the corrected code word is:

::<math>\underline {z} = \underline {y} - \underline {e} = \underline {y} + \underline {e}=
(\alpha^1,\hspace{0.09cm} 1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm}  \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)\hspace{0.05cm}. </math>

*This is a valid code word of the&nbsp; $\text{RSC (7, 3, 5)}_8$.}}<br>
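Putting steps&nbsp; $\rm (A)$&nbsp; to&nbsp; $\rm (D)$&nbsp; together,&nbsp; the decoding of this example can be reproduced end-to-end.&nbsp; The following sketch is again only an illustration of the four steps by brute force under the stated&nbsp; $\rm GF(2^3)$&nbsp; assumptions,&nbsp; not one of the fast algorithms discussed in the next section:
<pre>
# End-to-end bounded distance decoding of the received word from Example 2
# for the RSC (7, 3, 5)_8.
from itertools import product

EXP = [1, 2, 4, 3, 6, 7, 5]                       # alpha^i over GF(2^3), p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}
N, K, T = 7, 3, 2                                 # code parameters; t = (n-k)/2

def mul(a, b):                                    # field multiplication; addition is XOR
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def dot(u, v):                                    # scalar product in GF(8)
    acc = 0
    for a, b in zip(u, v):
        acc ^= mul(a, b)
    return acc

HT = [[EXP[(j * i) % 7] for j in range(1, 5)] for i in range(N)]  # transposed parity-check matrix

def syndrome(word):                               # s = word . H^T
    return [dot(word, [HT[i][j] for i in range(N)]) for j in range(N - K)]

def decode(y):
    s = syndrome(y)
    if s == [0] * (N - K):                        # step (A): syndrome zero -> accept y
        return list(y)
    for r in range(1, T + 1):                     # step (B): hypothesis test for r
        for lams in product(range(8), repeat=r):
            lam = list(lams) + [1] + [0] * (N - K - r - 1)
            if not all(dot([0] * l + lam[:N - K - l], s) == 0 for l in range(N - K - r)):
                continue
            # step (C): Chien search, Lambda(alpha^i) = sum_j lam[j] * alpha^(i*(j+1))
            pos = [i for i in range(N)
                   if dot(lam, [EXP[(i * (j + 1)) % 7] for j in range(N - K)]) == 0]
            # step (D): brute-force the error values at the found positions
            for vals in product(range(1, 8), repeat=len(pos)):
                e = [0] * N
                for p, v in zip(pos, vals):
                    e[p] = v
                if syndrome(e) == s:
                    return [a ^ b for a, b in zip(y, e)]      # z = y + e (= y - e in GF(2^m))
    return None                                   # r > t: received word not decodable

y = [EXP[3], 1, 0, EXP[1], EXP[2], EXP[5], EXP[5]]            # received word from Example 2
z = decode(y)
print(['0' if x == 0 else f'a^{LOG[x]}' for x in z])
# -> ['a^1', 'a^0', 'a^2', 'a^1', 'a^2', 'a^5', 'a^5']   (a^0 = 1; matches Example 6)
</pre>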
  
 
== Fast Reed-Solomon decoding ==
<br>
The class of Reed&ndash;Solomon codes was introduced as early as 1960 by the publication&nbsp; [RS60]<ref name='RS60'>Reed, I.S.; Solomon, G.:&nbsp; Polynomial Codes over Certain Finite Fields.&nbsp; J. Siam, Vol. 8, pp. 300-304, 1960.</ref>.&nbsp; However,&nbsp; their efficient decoding was not possible until a decade or two later.<br>

In the last sections we have demonstrated the so-called&nbsp; "Petersen algorithm"&nbsp; including the&nbsp; "Chien search"&nbsp; using the example of the&nbsp; $\text{RSC (7, 3, 5)}_8$,&nbsp; which can correct up to&nbsp; $t = 2$&nbsp; errors. 
*The decoding process focused on setting up and solving the key equation,&nbsp; where the zeros of a degree-2 polynomial had to be found among the seven nonzero elements of&nbsp; $\rm GF(2^3)$.

*It became apparent that even this algebraic decoding involves considerable effort.<br>

For codes used in practice with large code word length&nbsp; $n$&nbsp; and high correctability&nbsp; $t$,&nbsp; the decoding effort would explode if faster decoding algorithms had not been found. 

:For example,&nbsp; in the Reed&ndash;Solomon code&nbsp; $\text{RSC (255, 223, 33)}_{256}$&nbsp; of the ESA/NASA standard for satellite transmission mentioned earlier,&nbsp; up to&nbsp; $t = 16$&nbsp; zeros must be found among the&nbsp; $255$&nbsp; nonzero elements of&nbsp; $\rm GF(2^8)$&nbsp; to decode a single code word.&nbsp; And this in real time!<br>

Beginning in the late 1960s,&nbsp; many scientists sought faster decoding algorithms for Reed&ndash;Solomon codes:

# In the&nbsp; [https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Massey_algorithm $\text{Berlekamp&ndash;Massey algorithm}$]&nbsp; $\rm (BMA)$,&nbsp; the key equation&nbsp; $\underline {\it \Lambda}  \cdot \underline{s }^{\rm T} = 0$&nbsp; is represented as a feedback shift register,&nbsp; see for example&nbsp; [Mas69]<ref name ='Mas69'>Massey, J.L.:&nbsp; Shift Register Synthesis and BCH Decoding.&nbsp; IEEE Trans. on Information Theory, vol. IT-15, pp. 122–127, Jan. 1969.</ref>,&nbsp; [Fri96]<ref name ='Fri96'>Friedrichs, B.:&nbsp; Kanalcodierung – Grundlagen und Anwendungen in modernen Kommunikationssystemen.&nbsp; Berlin – Heidelberg: Springer, 1996.</ref>,&nbsp; [Bos99]<ref name ='Bos99'>Bossert, M.:&nbsp; Channel Coding for Telecommunications.&nbsp; Chichester: Wiley, 1999.</ref>.&nbsp; The problem is thus reduced to the synthesis of an autoregressive filter.&nbsp; The algorithm works much faster than the&nbsp; (more easily comprehensible)&nbsp; Petersen algorithm.<br>
# Somewhat later,&nbsp; in&nbsp; [SK+75]<ref name='SK+75'>Sugiyama, Y.; Kashara, M.; Hirasawa, S.; Namekawa, T.:&nbsp; A Method for Solving Key Equation for Decoding Goppa Codes.&nbsp; Information and Control, Vol. 27, pp. 87-99, 1975.</ref>,&nbsp; a decoding method based on the&nbsp; [https://en.wikipedia.org/wiki/Euclidean_algorithm $\text{Euclidean algorithm}$]&nbsp; was proposed.&nbsp; This provides the greatest common divisor of two integers,&nbsp; which is used for decoding.&nbsp; The Euclidean algorithm is comparably fast as the&nbsp; $\rm BMA$.&nbsp; More detailed information can be found in&nbsp; [Bos99]<ref name ='Bos99'></ref>&nbsp; and&nbsp; [Fri96]<ref name ='Fri96'></ref>.<br>
# Other efficient decoding methods for Reed&ndash;Solomon codes operate in the frequency domain,&nbsp; using the&nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Arguments_for_the_discrete_implementation_of_the_Fourier_transform|$\text{Discrete Fourier Transform}$]]&nbsp; $\rm (DFT)$&nbsp; in the field&nbsp; ${\rm GF}(q)$.<br><br>

The basic features of Reed&ndash;Solomon error correction were already developed in the 1960s.&nbsp; But up to the present time,&nbsp; the&nbsp; $($as fast as possible$)$&nbsp; algebraic decoding of these codes is a highly topical field of research.
  
 
== Exercises for the chapter==
<br>
[[Aufgaben:Exercise_2.12:_Decoding_at_RSC_(7,_4,_4)_to_Base_8|Exercise 2.12: Decoding at RSC (7, 4, 4) to Base 8]]

[[Aufgaben:Exercise_2.12Z:_Reed-Solomon_Syndrome_Calculation|Exercise 2.12Z: Reed-Solomon Syndrome Calculation]]

[[Aufgaben:Exercise_2.13:_Decoding_at_RSC_(7,_3,_5)_to_Base_8|Exercise 2.13: Decoding at RSC (7, 3, 5) to Base 8]]

[[Aufgaben:Exercise_2.14:_Petersen_Algorithm|Exercise 2.14: Petersen Algorithm]]

==References==
<references/>

{{Display}}

Latest revision as of 13:43, 24 November 2022

Block diagram and requirements for RS error correction


As in the chapter  "Decoding at the Erasure Channel"  we consider a transmission system with Reed–Solomon coding characterized by the two code parameters  $n=2^m-1$  and  $k$.  With the generator matrix  $\boldsymbol{\rm G}$  the relation between the information word  $\underline {u}$  and the code word  $\underline {c}$  is:

Transmission system with Reed-Solomon coding/decoding and $m$– BSC
$$\underline {c} = {\rm enc}(\underline {u}) = \underline {u} \cdot { \boldsymbol{\rm G}} \hspace{0.3cm} {\rm with}$$
$$ \underline {u} = (u_0, u_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_{k-1})\hspace{0.05cm},$$
$$ \underline {c} = (c_0, c_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_{n-1}) \hspace{0.05cm}.$$

The symbols  $u_i$  and  $c_i$  originate from the field  ${\rm GF}(q)$  with 

$$q=n+1=2^m.$$

They are representable by  $m$  binary symbols  ("bits").

A comparison of the upper block diagram with the corresponding  $\text{block diagram for Reed-Solomon error detection}$  shows:

  1. The main difference is in the discrete channel model  $($highlighted in green$)$.  Instead of the erasure channel  ("$m$–BEC")  now the  "$m$–BSC"  is considered.  For each bit of the code symbol  $c_i$  the  $\text{Binary Symmetric Channel}$  $\rm (BSC)$  is applied.  A bit error with respect to the  $i$–th bit of the code word results in  $y_i \ne c_i$.
  2. In the  "last chapter"  uncertain bits were already marked by erasures  $\rm E$.  The task of the  "code word finder"  $\rm (CWF)$  was to reconstruct the decoding result  $\underline {y}$  from the garbled received words  $\underline {z}$.
  3. If the number  $e$  of erasures is smaller than the minimum distance  $d_{\rm min}$,  this succeeds and we get   $\underline {z} = \underline {c}$.   Otherwise,  the code word finder reports that it cannot decode the current received word  $\underline {y}$ .   A wrong decision  $(\underline {z} \ne \underline {c})$  was excluded at the BEC.
  4. In this chapter,  the first decoder block is now referred to as  "code word estimator"  $\rm (CWS)$.  The naming is to make clear that due to the  $m$–BSC model,  wrong decisions  $(\underline {z} \ne \underline {c})$  are inevitable,  namely when multiple symbol errors distort the received word  $\underline {y}$  to a valid code word.

$\text{Conclusions:}$ 

  • The decoder's task is to determine its output vector  $\underline {v}$  so that it matches the information word  $\underline {u}$  "as well as possible".  More precisely formulated:
\[{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.\]
  • Because of deterministic mapping   $\underline{c} = {\rm enc}(\underline{u})$   and   $\underline{v} = {\rm enc}^{-1}(\underline{z})$   holds in the same way:
\[{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{z} \ne \underline{c}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.\]


The two blocks highlighted in yellow are not considered further.  The focus of the considerations is now the  "code word estimator"  $\rm (CWS)$  highlighted in red.

Possible code word estimators for Reed-Solomon decoding


Definition of the error vector  $\underline{e}$

The right sketch of this graphic illustrates the task of the  $\rm CWS$ 

  • where here the channel model  "$m$–BSC" is replaced
  • by the additive  »error vector« 
$$\underline{e} = \underline{y} - \underline{c}.$$

The sketch on the left illustrates the relationship between these vectors.


This task is to be clarified by an example.

$\text{Example 1:}$  Let all symbols be elements of   $\rm GF(2^3) \in \{0,\ 1,\ \alpha^1,\ \alpha^2,\ \alpha^3,\ \alpha^4,\ \alpha^5,\ \alpha^6\}$.   The adjacent table can be used for conversion

$\rm GF(2^3)$  representations
  • between the coefficient representation  $($with the order  $k_2$,  $k_1$,  $k_0)$
  • and the exponent representation  $($as powers of the primitive element  $\alpha).$


In this example,  the code word and received word are in coefficient notation:

\[\underline{c} = \Big ( (010), (001), (100),(010),(100),(111),(111)\Big )\hspace{0.05cm},\]
\[\underline{y} =\Big ( (011), (001), (000),(010),(100),(111),(111)\Big )\hspace{0.05cm}.\]

⇒   This results in the following error vector  $\underline{e} = \underline{y} - \underline{c}$:

\[\underline{e} \hspace{0.05cm} = \hspace{0.05cm} \Big ( (001), (000), (100), (000),(000),(000),(000)\Big )\hspace{0.05cm}.\]

⇒   Converted to the exponential representation,  we get:

\[\underline{c} = \Big ( \alpha^1, \hspace{0.09cm}1\hspace{0.09cm}, \alpha^2,\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},\]
\[\underline{y} =\Big ( \alpha^3, \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm},\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},\]
\[\underline{e} = \Big ( \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm}, \hspace{0.05cm}\alpha^2,\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm}\Big )\hspace{0.05cm}.\]


Note   Task of the code word estimator  $\rm (CWS)$  is to find the most probable code word to  $\underline{y}$  and to pass its result  $\underline{z} = \underline{c}_i$  to the following mapping.

There are several ways to do this:

  1.   Hard Decision Maximum Likelihood Decoding  $\text{(HD–MLD)}$,
  2.   Bounded Distance Decoding  $\text{(BDD)}$,
  3.   Decoding   "over half the minimum distance".


Hard Decision Maximum Likelihood Decoding   $\text{(HD–MLD)}$:

One chooses from all possible Reed–Solomon code words  $\underline{c}_i$   $($hierof there are in total  $q^k)$   the one with the least  $\text{Hamming distance}$  to the received word  $\underline{y}$.  Result:

\[\underline{z} = {\rm arg} \min_{\underline{c}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}_{\rm RS}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{c}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
  • A decision is always made even if the number  $r$  of symbol errors is larger than the correction capability  $t$  of the code.  However,  in such a case the decoding result is very uncertain.
  • It should be mentioned again that maximum likelihood decoding always decides.  Decoding failure is impossible.  But of course there are also wrong decisions.


Bounded Distance Decoding  $\text{(BDD)}$:

If the number  $r$  of symbol errors in the received word  $\underline{y}$  is not greater than the correction capability   $t = ⌊(d_{\rm min}- 1)/2⌋$   of the code,  one can correct the  $r$  symbol errors completely.  However,  it is also true:

  • The case  $r > t$  leads to an abort of the decoding process with no result.   In other words:
  • Only those received words towards the center of the sphere are decoded,  which lie in a sphere around it with radius  $t$.
  • Other received words are marked as undecodable,  for example as  "erasure".


Decoding over half the minimum distance:

Here also in the case  $r > t$  an attempt is made to decode the code word.  However,  in contrast to  $\text{HD–MLD}$,  which also decodes beyond half the minimum distance,  a decoding failure is not per se excluded here.

For the remainder of this chapter,  we will deal exclusively with  "Bounded Distance Decoding".  The reason for this is the enormous complexity of  "Maximum Likelihood Decoding"  proportional to  $q^{n-k}$.

Bounded Distance Decoding Procedure


In the following,  the individual steps of the  "bounded distance decoding"  $\rm (BDD)$  algorithm are described briefly and in a recipe-like manner.  In the next sections,  the individual points will be dealt with in more detail and the procedure will be illustrated using typical examples.


$\rm (A)$   $\text{Calculation and evaluation of syndrome}$   ⇒   $\text{Detailed description}$

  1. Calculate from the received word  $\underline{y}$  and the parity-check matrix  $\boldsymbol{\rm H }$  of the code the syndrome  $\underline {s} = \underline {y} \cdot \boldsymbol{\rm H }^{\rm T}$.
  2. If  $\underline {s} =\underline {0}$,  set the BDD output  $\underline {z} =\underline {y}$  and terminate the decoding process for that received word.
  3. Otherwise,  set the parameter  $r = 1$  and continue with step  $\rm (B)$ .


$\rm (B)$   $\text{Determine the actual symbol error count }\ r$  ⇒   $\text{Detailed description}$

  1. Create and check the equations  $\underline {\it \Lambda} _l \cdot\underline {s}^{\rm T} = 0$   for   $l = 1,$ ... , $2 \cdot t -r$  assuming that the received word contains exactly   $r$   symbol errors.
  2. $\underline {\it \Lambda} _l $  denotes the generalized  $\text{error locator polynomial}$  $\text{(ELP)}$  coefficient vectors.
  3. The parameter  $t$  denotes the correctability of the code.  For the Reed–Solomon codes,  uniformly  $t = ⌊(n-k)/2 ⌋$.
  4. If there is a unique solution,  then continue with step  $\rm (C)$.  
    In the received vector  $\underline{y}$  there are then indeed exactly  $r$   symbols falsified and in the error vector  $\underline{e}$  there are  $r$   entries not equal to $0$.
  5. Otherwise increase  $r$   by  $1$.   If  $r ≤ t$   ⇒   repeat step  $\rm (B)$  from the beginning:  
    The previously assumed  $r$  was obviously too small.  Therefore now a new attempt with larger  $r$.
  6. If the new  $r$  is greater than the correction capability  $t$  of the code,  the current word cannot be decoded.  End the decoding attempt with an  "error message".


$\rm (C)$   $\text{Localization of the }\ r \text{ error locations}$  ⇒   $\text{Detailed description}$

  1. Create the  error locator polynomial   ${\it \Lambda}(x)$   and find its  $r$  zeros in  ${\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.
  2. An error at location  $i$  is present whenever  ${\it \Lambda}(\alpha^{i}) = 0$ .


$\rm (D)$   $\text{Determination of }\ r \text{ error values and correction}$  ⇒   $\text{Detailed description}$

  1. Now the  $r$  error locations are known. Replacing in the received vector  $\underline{y}$  the wrong symbols with erasures   ⇒   $y_i = \rm E$,  if  $e_i ≠ 0$,
    we find the result  $\underline{y}$  corresponding to the chapter  "Reed-Solomon Decoding at the Erasure Channel".
  2. Or:  From the equation  $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$  one arrives at a linear equation system for the faulty symbols  $(e_i \ne 0)$  taking advantage of the error-free sites  $(e_i = 0)$.

Step (A): Evaluation of the syndrome in BDD


As shown in section  "Principle of Syndrome Decoding",  the syndrome  $\underline{s}$  can be used to decode a linear code.

  • With the received word  $\underline{y}$   equal to code word  $\underline{c}$  plus error vector  $\underline{e}$  applies:
\[\underline {s} = \underline {y} \cdot { \boldsymbol{\rm H }}^{\rm T}= \underline {c} \cdot { \boldsymbol{\rm H }}^{\rm T}+ \underline {e} \cdot { \boldsymbol{\rm H }}^{\rm T} \hspace{0.05cm}.\]
  • Since always   $\underline {c} \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline {0}$   holds,  it follows from   $\underline{s}= \underline{0}$   also   $\underline {e} \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline{0}$.   That is:
  • With very high probability,  from   $\underline{s}= \underline{0}$   it is also possible to infer   $\underline{e}= \underline{0}$   and thus also the correct decoding result  $\underline{z}= \underline{y}$.  The decoding process would be finished.
  • But there are also error patterns   $\underline{e} \ne \underline{0}$   that lead to the syndrome   $\underline{s}= \underline{0}$.   Such patterns certainly contain more than  $t$  symbol errors. 
  • So here it makes also sense to abort the decoding process.  All subsequent calculations would also not lead to success.

$\text{Example 2:}$  This and the following examples in the next sections are always based on the Reed–Solomon code  $\text{RSC (7, 3, 5)}_8$  so that the conversions given in the graph in  $\rm GF(2^3)$  can be used.  The received words are:

$\rm GF(2^3)$  representations
\[\underline{y}=\big (\alpha^3,\hspace{0.08cm} 1,\hspace{0.08cm} 0, \hspace{0.08cm}\alpha^1, \hspace{0.08cm} \alpha^2, \hspace{0.08cm} \alpha^5, \hspace{0.08cm} \alpha^5 \big).\]
\[\underline {s} = \underline {y} \cdot { \boldsymbol{\rm H } }^{\rm T}= \begin{pmatrix} \alpha^3, \hspace{0.05cm}1, \hspace{0.05cm}0, \hspace{0.05cm}\alpha^1, \hspace{0.05cm}\alpha^2, \hspace{0.05cm}\alpha^5, \hspace{0.05cm}\alpha^5 \end{pmatrix}\cdot \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\ \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\ \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\ \alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\ \alpha^6 & \alpha^5 & \alpha^4 & \alpha^3 \end{pmatrix}. \]
  • These vector–matrix multiplications gives the result:
\[\underline {s} \hspace{-0.05cm} = \hspace{-0.05cm} (\alpha^3 , \alpha^3 , \alpha^3 , \alpha^3) + (\alpha^1 , \alpha^2 , \alpha^3 , \alpha^4) + (0,0,0,0) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,1,\alpha^3,\alpha^6)+(\alpha^6,\alpha^3,1,\alpha^4)\hspace{-0.05cm}+\hspace{-0.05cm}(\alpha^3,\alpha^1,\alpha^6,\alpha^4) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,\alpha^3,\alpha^2,\alpha^1)\]
\[\Rightarrow \hspace{0.3cm} \underline {s} = \text{...} \hspace{0.05cm}= (\alpha^5,\alpha^2,\alpha^3,\alpha^1) \hspace{0.05cm}.\]
  • So the received word was falsified.   Otherwise it should have resulted in  $\underline{e}= \underline{0} = (0, 0, 0, 0)$ .


The description of the decoding process at  $\text{RSC (7, 3, 5)}_8$  is continued in  $\text{Example 4}$ 


Error Locator Polynomial - Definition and Properties


After the syndrome calculation in step  $\rm (A)$  with the result  $\underline{s} \ne \underline{0}$  we know,

  • that the received word  $\underline{y}$   does not match the code word  $\underline{c}$,  respectively
  • that the error vector  $\underline{e} = (e_0, \hspace{0.05cm}e_1, \hspace{0.05cm}\text{ ...}\hspace{0.05cm} , e_{n-1})$  certainly includes elements not equal to  "$0$". 

However,  we do not know how many symbols were falsified   $(0 < r ≤ n)$   nor we can name the positions of the error locations  $(e_i ≠ 0)$  in the error vector  $\underline{c}$.  An approach to this task is provided by the so-called  "error locator polynomial"  introduced by  $\text{William Wesley Peterson}$  ⇒   see [Pet60][1].  It is also known as  "key equation".

$\text{Definition:}$  Let it be known that exactly  $r$  elements of the error vector  $\underline{e}$  are non-zero,  recognizable by  $\text{Hamming weight}$  $w_{\rm H}(\underline{e}) = r$.

  • Also let the quantity   ${I}_{\rm EP}$   of error positions be known:
\[I_{\rm EP} = \{ i \hspace{0.1cm}\vert \hspace{0.1cm} e_i \ne 0,\hspace{0.1cm} 0 \le i < n \}\hspace{0.05cm}.\]
  • Then for the  »Error Locator Polynomial«  $\rm (ELP)$:
\[{\it \Lambda}(x)=x \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm FP} }(x-\alpha^i) =x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ].\]


⇒   From the error locator polynomial  we know:

  • Because of the factor  $x$  in front of the product sign:  ${\it \Lambda}(x= 0) = 0$.
  • Other  $r$  zeros result for  $x = \alpha^{i}$   with   $i \in I_{\rm EP}$,  that is,  for all error positions.
  • In contrast,  the  error locator polynomial for  $i ∉ I_{\rm EP}$   ⇒   $e_i = 0$  has no zero:   ${\it \Lambda}(x= \alpha^{i}) \ne0$.

⇒   So we search for the  $r$  non–trivial zeros of  ${\it \Lambda}(x)$  with argument  $x ∈ {\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.  If we succeed,

  • we know the  $r$  error positions,
  • but not yet the actual error values  $e_i ∈ {\rm GF}(q)$.
$\rm GF(2^3)$  representations

$\text{Example 3:}$  Let $n=7$   ⇒   $q=8$,  $r=2$  and  $I_{\rm FP} = \{2, \hspace{0.05cm}4\}$:        P ID2551 KC T 2 5 S5a.png

  • Thus for the  "error locator poynomial"  from  ${\rm GF}(2^3)$ is obtained:
\[{\it \Lambda}(x)=x \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^4)= x \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^4) =x \cdot \big [x^2 \hspace{-0.05cm}+ \hspace{-0.05cm}(\alpha^2 + \alpha^4) \cdot x + \alpha^6\big ] \]
\[\Rightarrow \hspace{0.3cm} {\it \Lambda}(x)= x \cdot \big [\alpha^6 + \alpha \cdot x + x^2\big ]\hspace{0.05cm}.\]
  • The other zeros $($except for  $x = 0)$  arise naturally here for  $x = \alpha^2$  and  $x = \alpha^4$, as a calculation shows:
\[{\it \Lambda}(x = \alpha^2)= x \cdot \big [\alpha^6 + \alpha \cdot \alpha^2 + (\alpha^2)^2\big ] = x \cdot \big [\alpha^6 + \alpha^3 + \alpha^4 \big ]= 0\hspace{0.05cm},\]
\[ {\it \Lambda}(x = \alpha^4)= x \cdot \big [\alpha^6 + \alpha \cdot \alpha^4 + (\alpha^4)^2\big ] =x \cdot \big [\alpha^6 + \alpha^5 + \alpha \big ]= 0\hspace{0.05cm}.\]


For further derivation,  we always assume the  $\text{RSC (7, 3, 5)}_8$  with the following parameter values:

$$n=7, \hspace{0.3cm}k = 3, \hspace{0.3cm}d_{\rm min} = 5 \ \Rightarrow \ t = (d_{\rm min} -1/2) = 2.$$
  • Let the number of symbol errors be  $r = t = 2$.  Thus the system of equations to be solved with the auxiliary variables  $L_i = {\it \Lambda}(\alpha^{i})$:
\[L_0 = {\it \Lambda }(\alpha^0) = \alpha^0 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^0)^1 + (\alpha^0)^2 \right ] = {\it \lambda}_0 \cdot 1 + {\it \lambda}_1 \cdot 1 + 1 \hspace{0.05cm},\]
\[L_1 = {\it \Lambda }(\alpha^1) =\alpha^1 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^1)^1 + (\alpha^1)^2 \right ] = {\it \lambda}_0 \cdot \alpha^1+ {\it \lambda}_1 \cdot \alpha^2 + \alpha^3 \hspace{0.05cm},\]
\[...\]
\[ L_6 = {\it \Lambda }(\alpha^6) = \alpha^6 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^6)^1 + (\alpha^6)^2 \right ] = {\it \lambda}_0 \cdot \alpha^6 + {\it \lambda}_1 \cdot \alpha^{12} + \alpha^{18} \hspace{0.05cm}.\]
  • In vector form,  this system of equations with the auxiliary vector is   $\underline{L} = (L_0, \hspace{0.05cm}L_1, \hspace{0.05cm}L_2,\hspace{0.05cm}L_3,\hspace{0.05cm}L_4,\hspace{0.05cm}L_5,\hspace{0.05cm}L_6)$:
\[\underline {L}^{\rm T}=\begin{pmatrix} L_0\\ L_1\\ L_2\\ L_3\\ L_4\\ L_5\\ L_6 \end{pmatrix} \hspace{0.15cm} = \hspace{0.15cm} \begin{pmatrix} 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 \\ \alpha^2 & \alpha^4 & \alpha^6 \\ \alpha^3 & \alpha^6 & \alpha^9 \\ \alpha^4 & \alpha^8 & \alpha^{12}\\ \alpha^5 & \alpha^{10} & \alpha^{15}\\ \alpha^6 & \alpha^{12} & \alpha^{18} \end{pmatrix} \hspace{0.15cm}\cdot \hspace{0.15cm} \begin{pmatrix} {\lambda}_0\\ {\lambda}_1\\ 1 \end{pmatrix} \hspace{0.05cm}.\]
  • We now expand the ELP coefficient vector  $\underline {\it \Lambda }$  by appending zeros to the length  $n-k$.  Thus,  in the considered example,  we obtain   ${\it \Lambda } = ( \lambda_0,\hspace{0.05cm}\lambda_1,\hspace{0.05cm}1, \hspace{0.05cm}0)$   and the following vector equation:
\[\underline {L}^{\rm T} \hspace{0.15cm} = \hspace{0.15cm} \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^8\\ \alpha^3 & \alpha^6 & \alpha^9 & \alpha^{12}\\ \alpha^4 & \alpha^8 & \alpha^{12} & \alpha^{16}\\ \alpha^5 & \alpha^{10} & \alpha^{15} & \alpha^{20}\\ \alpha^6 & \alpha^{12} & \alpha^{18} & \alpha^{24} \end{pmatrix} \hspace{0.15cm}\cdot \hspace{0.15cm} \begin{pmatrix} {\lambda}_0\\ {\lambda}_1\\ 1\\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • With this we have achieved:
  1. The  $7× 3$  matrix has now become a  $7× 4$ matrix.
  2. The fourth column can actually be filled arbitrarily,  since all elements are multiplied by zeros.
  3. The addition chosen here gives the transposed  $\text{parity-check matrix}$  of  $\text{RSC (7, 3, 5)}_8$.
  4. Thus,  one can write for the last vector equation:
\[\underline {L}^{\rm T} = { \boldsymbol{\rm H }}^{\rm T} \cdot \underline {\it \Lambda }^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm}\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \hspace{0.05cm}.\]
  • But since for the error locations  $(e_i ≠ 0)$  always  $L_i = {\it \Lambda}(\alpha^{i}) = 0$  holds,  the product  $L_i \cdot e_i \equiv 0$  and one obtains as determining equation for the zeros of the error locator polynomial:
\[\underline {L}^{\rm T} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {\it \Lambda } \cdot \underline {s}^{\rm T} = 0 \hspace{0.05cm}.\]

$\text{Important intermediate result:}$  The non-trivial zeros  $\lambda_0$, $\lambda_1$, ...  $(=0)$  of the error locator polynomial   ${\it \Lambda}(x)$  must always satisfy the vector equation  $\underline {\it \Lambda } \cdot \underline {s}^{\rm T} = 0 $  .

  • Hereby denotes  $\underline {\it \Lambda }$  the  $\rm ELP$  coefficient vector.
  • $\underline {s } = \underline {y }\cdot \boldsymbol{\rm H }^{\rm T} $  gives the  $\text{syndrome}$.


Step (B): Set up/evaluate the ELP coefficient vector


Before we can consider this intermediate result for step  $\rm (B)$  some generalizations need to be made. 

Shifted ELP coefficient vectors

The reason for this is:

  • The relationship   $\underline {\it \lambda } \cdot \underline {s}^{\rm T} = 0 $   yields only a single equation of determination.
  • Thus the problem can be solved for  $r = 1$  if one is sure that indeed only one symbol has been falsified.
  • If one is not sure of this,  but nevertheless performs the calculation for  $r = 1$,  one still needs a second equation  (or even several)  to verify the assumption.

The property of the  $\text{error locator polynomial}$ that  ${\it \Lambda}(\alpha^{i}) = 0$  only for  $e_i ≠ 0$  $($that means:  the $i$–th symbol is falsified$)$  is preserved when multiplying  ${\it \Lambda}(x)$  by arbitrary powers of  $x$. 

Each multiplication by  $x$  implies a shift of one place to the right for the ELP coefficient vector.

$\text{Definition:}$  The  $\text{generalized ELP coefficient vectors}$   $\underline {\it \Lambda }_l$  result from successive shifts with respect to  $\underline {\it \Lambda }_l$:

\[{\it \Lambda}_l(x)=x^l \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x^l \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ]\hspace{0.05cm}.\]

In this defining equation,  $\underline {\it \Lambda }_1$  corresponds to the previous  $\underline {\it \Lambda }$.


The upper graph shows the occupancy under the assumption of  $r$  error locations in the error vector  $\underline {e}$  for

  • $r=1$   in the left panel  (with blue background),
  • $r=2$   in the middle area  (with red background),
  • $r=3$   in the right area  (with green background).


One recognizes:

  1. The length of all  $\underline {\it \lambda }_l$  is always  $n-k$.  Each vector contains respectively  $r$  coefficients  $\lambda_0$,  $\lambda_1$, ... ,  $\lambda_{r-1}$   ⇒   $0 ≤ i < r$  and one  "1".
    The remainder of each vector is padded with zeros.
  2. For each  $r$  there are exactly  $n-k-r$  coefficient vectors   $\underline {\it \Lambda }_l$,  where   $\underline {\it \Lambda }_l$   results from   $\underline {\it \Lambda }_{l-1} $  always by right shifting by one position.
    The vector  $\underline {\it \Lambda }_{n-k-r}$  always ends with a  $1$.
  3. The equation system   $\underline {\it \Lambda }_l \cdot \underline {s}^{\rm T} = 0 $   therefore leads to  $n-k-r$  equations.
    The chosen approach for  $r$  is only correct if all equations lead to the same results for  $\lambda_0$, ... , $\lambda_{r-1}$ .
  4. If this is not the case,  one has to increase  $r$  and thus work on a new equation system,  and this until a unique solution results from all equations for the current  $r$.  If the finally  $r$  is greater than the correctability  $t$  of the code,  the calculation can be terminated.  The pending received word  $\underline {y}$  is then not decodable.


$\rm GF(2^3)$  representation

$\text{Example 4:}$  The conditions stated in the  $\text{Example 2}$  still apply:

  1. There,  due to the syndrome   $\underline {s} = (\alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1) ≠ \underline {0}$   it was also demonstrated that the received vector  $\underline {y}$  was falsified   ⇒   error vector   $\underline {e} \ne {0}$.
  2. Not known,  however,  is the actual symbol error count  $r$.
  • Assuming a single falsified symbol   $(r= 1)$   we obtain the following system of equations  $($written here in matrix form$)$:
\[\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= \begin{pmatrix} \lambda_0 & 1 & 0 & 0 \\ 0 & \lambda_0 & 1 & 0 \\ 0 & 0 & \lambda_0 & 1 \end{pmatrix} \cdot \begin{pmatrix} \alpha^5\\ \alpha^2\\ \alpha^3\\ \alpha \end{pmatrix} \stackrel{!}{=} \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • This equation system gives three different solutions for  $\lambda_0$,  which is not purposeful:
\[\text{line 1:}\hspace{0.5cm}\alpha^5 \cdot \lambda_0 + \alpha^2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{2-5}= \alpha^{-3}= \alpha^{4}\hspace{0.05cm},\]
\[\text{line 2:}\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{3-2}= \alpha\hspace{0.05cm},\]
\[\text{line 3:}\hspace{0.5cm}\alpha^3 \cdot \lambda_0 + \alpha^1 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{1-3}= \alpha^{-2} = \alpha^{5} \hspace{0.05cm}.\]
  • Therefore,  we now set up another equation system,  assuming   $r = 2$:
\[\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= \begin{pmatrix} \lambda_0 & \lambda_1 & 1 & 0 \\ 0 & \lambda_0 & \lambda_1 & 1 \end{pmatrix} \cdot \begin{pmatrix} \alpha^5\\ \alpha^2\\ \alpha^3\\ \alpha \end{pmatrix} \stackrel{!}{=} \begin{pmatrix} 0\\ 0 \end{pmatrix}\hspace{0.05cm}. \]
  • This leads to two equations for  $\lambda_0$  and  $\lambda_1$:
\[\alpha^5 \cdot \lambda_0 + \alpha^2 \cdot \lambda_1 + \alpha^3 = 0 \hspace{0.05cm},\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 \cdot \lambda_1 + \alpha^1 = 0 \hspace{0.05cm}.\]
  • This equation system is now clearly solvable.  One gets   $\lambda_0 = \alpha^2$   and   $\lambda_1 = \alpha^6$.  This means:
  1. The assumption that indeed  $r = 2$  positions of the received vector  $\underline {y}$  have been distorted is correct.
  2. But it is not yet known which positions have been falsified.  So much for now:
  3. It is not symbol positions  2  and  6,  but the positions  0  and  2,  as shown in the following  $\text{Example 5}$  (next section).



Step (C): Localization of the error positions


After processing step  $\rm(B)$  are known:

  1. the number  $r$  of error locations  $e_i ≠ 0$  in the vector  $\underline {e} = (e_0, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_i, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_{n-1})$,
  2. the coefficients  $\lambda_0, \hspace{0.05cm}\text{... }\hspace{0.05cm} , \lambda_{r-1}$  of the error locator polynomial  ${\it \Lambda}(x)$.

Now the set of error positions has to be determined:   $I_{\rm EP} = \{ i \hspace{0.1cm}| \hspace{0.1cm} e_i \ne 0,\hspace{0.3cm} 0 \le i < n \}\hspace{0.05cm}.$

There are two ways to do this  $($both methods are used in the following example$)$:

  • the so-called  "Chien search",  in which one determines the sought zeros by substituting all non-zero field elements   ⇒   $(\alpha^0, \text{... }, \alpha^i, \text{... },\alpha^{n-1})$  into the error locator polynomial,
  • the evaluation of the equation   $\underline {L} = (L_0, \text{... }, L_i, \text{... },L_{n-1} ) = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}$   with the abbreviation  $L_i = {\it \Lambda}(\alpha^i)$.  A program sketch of the Chien search follows below.
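
The Chien search can be sketched in a few lines.  The following Python fragment uses the  ${\rm GF}(2^3)$  tables from the sketch in step  $\rm (B)$;  the function name  chien_search  is again our own choice:

<syntaxhighlight lang="python">
# GF(2^3) tables as in the sketch for step (B)
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x = (x << 1) ^ (0b1011 if x & 0b100 else 0)
mul = lambda a, b: EXP[LOG[a] + LOG[b]] if a and b else 0

def chien_search(lams, n=7):
    """Evaluate Lambda(x) = x * (lambda_0 + ... + lambda_{r-1} x^(r-1) + x^r)
    at every x = alpha^i, i = 0, ..., n-1, and return the error positions."""
    coeffs = lams + [1]                 # polynomial inside the brackets
    positions = []
    for i in range(n):
        xi, xp, val = EXP[i], 1, 0      # xp runs through xi^0, xi^1, ...
        for c in coeffs:
            val ^= mul(c, xp)
            xp = mul(xp, xi)
        if val == 0:                    # leading factor x is non-zero anyway
            positions.append(i)
    return positions

print(chien_search([4, 5]))             # lambda_0 = alpha^2, lambda_1 = alpha^6  ->  [0, 2]
</syntaxhighlight>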


$\text{Example 5:}$  In  $\text{Example 4}$,  under the boundary conditions stated in  $\text{Example 2}$,  it was determined that

  1. $r= 2$  symbol errors are present,  and
  2. the ELP coefficients are  $\lambda_0 = \alpha^2$  and  $\lambda_1 = \alpha^6$.


This results in the following error locator polynomial:

\[{\it \Lambda}(x)=x \cdot \big [\lambda_0 + \lambda_1 \cdot x+x^2 \big ] =x \cdot \big [\alpha^2 + \alpha^6 \cdot x+x^2 \big ]\hspace{0.05cm}.\]

⇒   According to the Chien search one gets

\[{\it \Lambda}(\alpha^0)\hspace{0.15cm} = \hspace{0.15cm}\alpha^0 \cdot \big [ \alpha^2 + \alpha^6 \cdot 1 + 1 \big ] = \alpha^2 + (\alpha^2 + 1) + 1 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm},\]
\[{\it \Lambda}(\alpha^1)\hspace{0.15cm} = \hspace{0.15cm}\alpha^1 \cdot \big [\alpha^2 + \alpha^6 \cdot \alpha^1 + \alpha^2\big ]= \alpha^1 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{\rm No\hspace{0.15cm} zero}\hspace{0.05cm},\]
\[{\it \Lambda}(\alpha^2)\hspace{0.15cm} = \hspace{0.15cm}\alpha^2 \cdot \big [ \alpha^2 + \alpha^6 \cdot \alpha^2 + \alpha^4 \big ] =\alpha^4 + \alpha^{10} + \alpha^6 = \text{...} = \hspace{0.15cm}0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm}.\]

Thus the two error positions with  $i = 0$  and  $i = 2$  are found   ⇒   the error vector is:   $\underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)$.


⇒   The vector equation  $\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H } }$  gives the same result in a more compact form:

\[\underline {L} = \underline{\it \Lambda} \cdot { \boldsymbol{\rm H } } = (\alpha^2, \alpha^6, 1, 0) \cdot \begin{pmatrix} 1 & \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\ 1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 & \alpha^3 & \alpha^5 \\ 1 & \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 & \alpha^1 & \alpha^4 \\ 1 & \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 & \alpha^6 & \alpha^3 \end{pmatrix} \]
\[ \Rightarrow \hspace{0.3cm} \underline {L} = (\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6,1 ,\alpha^1) + (\alpha^6,\alpha^1,\alpha^3,\alpha^5,1 ,\alpha^2,\alpha^4)+(1, \alpha^3,\alpha^6,\alpha^2,\alpha^5,\alpha^1,\alpha^4) \]
\[ \Rightarrow \hspace{0.3cm} \underline {L} = (0,\alpha^1,0,\alpha^2,\alpha^3,\alpha^5,\alpha^1)\hspace{0.3cm} \Rightarrow \hspace{0.3cm} L_0 = L_2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)\hspace{0.05cm}.\]
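
The second method can be sketched just as briefly.  The following fragment computes  $\underline{L} = \underline{\it \Lambda} \cdot \boldsymbol{\rm H}$  element-wise,  using the matrix entries  $H_{l,\hspace{0.03cm}i} = \alpha^{(l+1)\hspace{0.03cm}i}$  of the parity-check matrix above and the  ${\rm GF}(2^3)$  tables introduced before  $($again only a sketch,  the function name is our choice$)$:

<syntaxhighlight lang="python">
# GF(2^3) tables as before
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x = (x << 1) ^ (0b1011 if x & 0b100 else 0)
mul = lambda a, b: EXP[LOG[a] + LOG[b]] if a and b else 0

def locator_vector(lam_ext, n=7):
    """L_i = Lambda(alpha^i), computed as the product Lambda . H
    with the matrix entries H_{l,i} = alpha^((l+1)*i)."""
    L = []
    for i in range(n):
        acc = 0
        for l, lam in enumerate(lam_ext):
            acc ^= mul(lam, EXP[((l + 1) * i) % 7])
        L.append(acc)
    return L

# Lambda = (lambda_0, lambda_1, 1, 0) = (alpha^2, alpha^6, 1, 0)
print(locator_vector([4, 5, 1, 0]))
# -> [0, 2, 0, 4, 3, 7, 2] = (0, alpha^1, 0, alpha^2, alpha^3, alpha^5, alpha^1):  L_0 = L_2 = 0
</syntaxhighlight>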

The example continues with  $\text{Example 6}$  in the next section.

Step (D): Final error correction


In the last step,  the  $r$  symbol errors,  whose positions are known after completion of step  $\rm (C)$,  have to be corrected:

  • If the error positions in the received word  $\underline {y}$  are marked as erasures  $\rm E$,  the corresponding code word  $\underline {z}$  can be found according to the description in the chapter  "Reed-Solomon decoding at the erasure channel".
  • A second possibility is to determine the error vector  $\underline {e}$  from the equation   $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$   and to correct according to   $\underline {z} = \underline {y} - \underline {e} $.   This procedure is the basis of the following example and of the program sketch below.
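
A minimal sketch of this second possibility  $($the error values are again found by exhaustive search over  ${\rm GF}(2^3)$,  which is sufficient for this toy code;  all names are our own choice$)$:

<syntaxhighlight lang="python">
from itertools import product

# GF(2^3) tables as before
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x = (x << 1) ^ (0b1011 if x & 0b100 else 0)
mul = lambda a, b: EXP[LOG[a] + LOG[b]] if a and b else 0

def correct(y, positions, s):
    """Determine the error values at the known positions from e . H^T = s
    (H_{l,i} = alpha^((l+1)*i)) and return the corrected word z = y + e."""
    for vals in product(range(1, 8), repeat=len(positions)):
        def row(l):                    # l-th component of e . H^T
            acc = 0
            for v, p in zip(vals, positions):
                acc ^= mul(v, EXP[((l + 1) * p) % 7])
            return acc
        if all(row(l) == s[l] for l in range(len(s))):
            e = dict(zip(positions, vals))
            return [yi ^ e.get(i, 0) for i, yi in enumerate(y)]
    return None                        # no consistent error pattern found

# Example 2/5:  y = (alpha^3, 1, 0, alpha^1, alpha^2, alpha^5, alpha^5),  errors at 0 and 2
print(correct([3, 1, 0, 2, 4, 7, 7], [0, 2], [7, 4, 3, 2]))
# -> [2, 1, 4, 2, 4, 7, 7] = (alpha^1, 1, alpha^2, alpha^1, alpha^2, alpha^5, alpha^5)
</syntaxhighlight>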

$\text{Example 6:}$  We consider again the  $\text{RSC (7, 3, 5)}_8$.  In  $\text{Example 2}$,

  • the received word was given as  $\underline{y}=\big (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0, \hspace{0.09cm}\alpha^1, \hspace{0.09cm} \alpha^2, \hspace{0.09cm} \alpha^5, \hspace{0.09cm} \alpha^5 \big)$,  and
  • from this the syndrome   $\underline{s}=\big (\alpha^5,\hspace{0.09cm}\alpha^2, \hspace{0.09cm} \alpha^3, \hspace{0.09cm} \alpha^1\big)$  was determined.


According to the calculations in  $\text{Example 5}$  the error vector is  $\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)$.


From  $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$   we now obtain the following governing equations for the error values  $e_0$  and  $e_2$:

\[\underline {e} \cdot \boldsymbol{\rm H }^{\rm T}= \begin{pmatrix} e_0 & 0 & e_2 & 0 & 0 & 0 & 0 \end{pmatrix}\cdot \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1\\ \alpha^3 & \alpha^6 & \alpha^2 & \alpha^{5}\\ \alpha^4 & \alpha^1 & \alpha^{5} & \alpha^{2}\\ \alpha^5 & \alpha^{3} & \alpha^{1} & \alpha^{6}\\ \alpha^6 & \alpha^{5} & \alpha^{4} & \alpha^{3} \end{pmatrix} \stackrel{!}{=} \underline {s} = \begin{pmatrix} \alpha^5 & \alpha^2 & \alpha^3 & \alpha^1 \end{pmatrix} \]
\[\Rightarrow \hspace{0.3cm}e_0 \cdot (1,\ 1,\ 1,\ 1) + e_2 \cdot (\alpha^2,\ \alpha^4,\ \alpha^6,\ \alpha^1)\stackrel{!}{=} ( \alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1)\]
\[\Rightarrow \hspace{0.3cm} e_0 + e_2 \cdot \alpha^2 = \alpha^5\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^4 = \alpha^2\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^6 = \alpha^3\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^1 = \alpha^1\hspace{0.05cm}.\]
  • All equations lead to the result  $e_0 = 1$  and  $e_2 =\alpha^2$.
  • Thus,  with   $\underline {y} = (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0,\hspace{0.09cm} \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)$   and   $\underline {e}= (1,\hspace{0.09cm} 0,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0)$,  the corrected code word is:
\[\underline {z} = \underline {y} - \underline {e} = \underline {y} + \underline {e}= (\alpha^1,\hspace{0.09cm} 1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)\hspace{0.05cm}. \]
  • This is a valid code word of the  $\text{RSC (7, 3, 5)}_8$,  as can be verified by recomputing the syndrome  $($see the sketch below$)$.
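
The final check can also be programmed:  recomputing the syndrome with  $H_{l,\hspace{0.03cm}i} = \alpha^{(l+1)\hspace{0.03cm}i}$  must give the all-zero vector for the corrected word.  A minimal sketch under the same conventions as above:

<syntaxhighlight lang="python">
# GF(2^3) tables as before
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x = (x << 1) ^ (0b1011 if x & 0b100 else 0)
mul = lambda a, b: EXP[LOG[a] + LOG[b]] if a and b else 0

def syndrome(w, nk=4):
    """s_l = sum_i w_i * alpha^((l+1)*i)  ->  all-zero for a valid code word."""
    s = []
    for l in range(nk):
        acc = 0
        for i, wi in enumerate(w):
            acc ^= mul(wi, EXP[((l + 1) * i) % 7])
        s.append(acc)
    return s

print(syndrome([3, 1, 0, 2, 4, 7, 7]))   # received word y   ->  [7, 4, 3, 2] = s
print(syndrome([2, 1, 4, 2, 4, 7, 7]))   # corrected word z  ->  [0, 0, 0, 0]
</syntaxhighlight>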


Fast Reed-Solomon decoding


The class of Reed–Solomon codes was introduced as early as 1960 by the publication  [RS60][2].  However,  their efficient decoding only became possible one to two decades later.

In the last sections we have demonstrated the so-called  "Petersen algorithm"  including the  "Chien search",  using the example of the  $\text{RSC (7, 3, 5)}_8$,  which can correct up to  $t = 2$  errors.

  • The decoding process focused on setting up and solving the key equation,  where the zeros of a degree-2 polynomial in the field  $\rm GF(2^3)$  had to be found.
  • It became clear that even this algebraic decoding involves considerable effort.


For codes used in practice with large code word length  $n$  and high correctability  $t$,  the decoding effort would explode if faster decoding algorithms had not been found.

For example,  for the Reed–Solomon code   $\text{RSC (255, 223, 33)}_{256}$  mentioned earlier,  which is used in the ESA/NASA standard for satellite transmission,  up to   $t = 16$   zeros must be found in the field   $\rm GF(2^8)$   to decode a single code word.  And this in real time!

Beginning in the late 1960s, many scientists sought faster decoding algorithms for Reed–Solomon codes:

  1. In the  $\text{Berlekamp–Massey algorithm}$  $\rm (BMA)$,  the key equation   $\underline {\it \Lambda} \cdot \underline{s }^{\rm T} = 0$   is interpreted as a feedback shift register;  see for example  [Mas69][3],  [Fri96][4],  [Bos99][5].  The problem is thus reduced to the synthesis of an autoregressive filter.  The algorithm works much faster than the  (more easily comprehensible)  Petersen algorithm;  a program sketch of the BMA follows after this list.
  2. Somewhat later,  a decoding method based on the  $\text{Euclidean algorithm}$  was proposed in  [SK+75][6].  This algorithm classically provides the greatest common divisor of two integers;  its polynomial version is used for decoding.  The Euclidean algorithm is comparable in speed to the  $\rm BMA$.  More detailed information can be found in  [Bos99][5]  and  [Fri96][4].
  3. Other efficient decoding methods of Reed–Solomon codes operate in the frequency domain,  using the  $\text{Discrete Fourier Transform}$  $\rm (DFT)$  of length  $n$  in the field  ${\rm GF}(q)$.
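
To illustrate the first item:  the following fragment is a minimal,  unoptimized sketch of the Berlekamp–Massey iteration over  ${\rm GF}(2^3)$,  applied to the syndrome of  $\text{Example 2}$.  Note that the connection polynomial  $C(x)$  of the synthesized shift register contains the ELP coefficients in reversed order  $(C_i = \lambda_{r-i})$;  all names are our own choice.

<syntaxhighlight lang="python">
# GF(2^3) tables as in the sketches above
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x = (x << 1) ^ (0b1011 if x & 0b100 else 0)
mul = lambda a, b: EXP[LOG[a] + LOG[b]] if a and b else 0
inv = lambda a: EXP[(7 - LOG[a]) % 7]

def berlekamp_massey(s):
    """Synthesize the shortest LFSR (connection polynomial C, length L)
    that generates the syndrome sequence s;  in GF(2^m) subtraction = XOR."""
    C, B = [1], [1]                    # current / previous connection polynomial
    L, m, b = 0, 1, 1                  # register length, shift count, old discrepancy
    for n in range(len(s)):
        d = s[n]                       # discrepancy between s[n] and LFSR output
        for i in range(1, L + 1):
            d ^= mul(C[i], s[n - i])
        if d == 0:
            m += 1
            continue
        coef, T = mul(d, inv(b)), C[:]
        C += [0] * max(0, len(B) + m - len(C))
        for i, bi in enumerate(B):     # C(x) <- C(x) - (d/b) * x^m * B(x)
            C[i + m] ^= mul(coef, bi)
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# Syndrome from Example 2:  s = (alpha^5, alpha^2, alpha^3, alpha^1)
print(berlekamp_massey([7, 4, 3, 2]))
# -> ([1, 5, 4], 2):  C = (1, alpha^6, alpha^2),  i.e. lambda_1 = alpha^6, lambda_0 = alpha^2
</syntaxhighlight>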

The basic features of Reed–Solomon error correction were already developed in the 1960s.  But up to the present day,  the fastest possible algebraic decoding of these codes is still a highly topical field of research.

Exercises for the chapter


Exercise 2.12: Decoding at RSC (7, 4, 4) to Base 8

Exercise 2.12Z: Reed-Solomon Syndrome Calculation

Exercise 2.13: Decoding at RSC (7, 3, 5) to Base 8

Exercise 2.14: Petersen Algorithm

References

  1. Peterson, W.W.:  Encoding and Error-Correction Procedures for the Bose-Chaudhuri Codes.  IRE Transactions on Information Theory, IT-6, pp. 459-470, 1960.
  2. Reed, I.S.; Solomon, G.:  Polynomial Codes over Certain Finite Fields.  J. Siam, Vol. 8, pp. 300-304, 1960.
  3. Massey, J.L.:  Shift Register Synthesis and BCH Decoding.  IEEE Trans. on Information Theory, vol. IT-15, pp. 122–127, Jan. 1969.
  4. Friedrichs, B.:  Kanalcodierung – Grundlagen und Anwendungen in modernen Kommunikationssystemen.  Berlin – Heidelberg: Springer, 1996.
  5. Bossert, M.:  Channel Coding for Telecommunications.  Chichester: Wiley, 1999.
  6. Sugiyama, Y.; Kasahara, M.; Hirasawa, S.; Namekawa, T.:  A Method for Solving Key Equation for Decoding Goppa Codes.  Information and Control, Vol. 27, pp. 87-99, 1975.