Channel Coding/Error Correction According to Reed-Solomon Coding

From LNTwww
{{Header
|Untermenü=Reed–Solomon–Codes and Their Decoding
|Vorherige Seite=Reed-Solomon Decoding for the Erasure Channel
|Nächste Seite=Error Probability and Areas of Application
}}
  
== Block diagram and requirements for RS error correction ==
<br>
As in the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel#Block_diagram_and_requirements_for_Reed-Solomon_error_detection|"Decoding at the Erasure Channel"]]&nbsp; we consider a transmission system with Reed&ndash;Solomon coding characterized by the two code parameters&nbsp; $n=2^m-1$&nbsp; and&nbsp; $k$.&nbsp; With the generator matrix&nbsp; $\boldsymbol{\rm G}$&nbsp; the relation between the information word&nbsp; $\underline {u}$&nbsp; and the code word&nbsp; $\underline {c}$&nbsp; is:
  
[[File:EN_KC_T_2_5_S1_neu.png|right|frame|Transmission system with Reed-Solomon coding/decoding and $m$&ndash;BSC|class=fit]]
:$$\underline {c} = {\rm enc}(\underline {u}) = \underline {u} \cdot { \boldsymbol{\rm G}}
\hspace{0.3cm} {\rm with}$$
::$$ \underline {u} = (u_0, u_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_{k-1})\hspace{0.05cm},$$
::$$ \underline {c} = (c_0, c_1, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, c_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_{n-1})
\hspace{0.05cm}.$$
  
The symbols&nbsp; $u_i$&nbsp; and&nbsp; $c_i$&nbsp; originate from the field&nbsp; ${\rm GF}(q)$&nbsp; with
:$$q=n+1=2^m.$$
They are representable by&nbsp; $m$&nbsp; binary symbols&nbsp; ("bits").<br>
  
A comparison of the upper block diagram with the corresponding&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel#Block_diagram_and_requirements_for_Reed-Solomon_error_detection|$\text{block diagram for Reed-Solomon error detection}$]]&nbsp; shows:
# The main difference is in the discrete channel model&nbsp; $($highlighted in green$)$.&nbsp; Instead of the erasure channel&nbsp; ("$m$&ndash;BEC")&nbsp; now the&nbsp; "$m$&ndash;BSC"&nbsp; is considered.&nbsp; For each bit of the code symbol&nbsp; $c_i$&nbsp; the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{Binary Symmetric Channel}$]]&nbsp; $\rm (BSC)$&nbsp; is applied.&nbsp; A bit error with respect to the&nbsp; $i$&ndash;th bit of the code word results in&nbsp; $y_i \ne c_i$.<br>
# In the&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"last chapter"]]&nbsp; uncertain bits were already marked by erasures&nbsp; $\rm E$.&nbsp; The task of the&nbsp; "code word finder"&nbsp; $\rm (CWF)$&nbsp; was to reconstruct the decoding result&nbsp; $\underline {z}$&nbsp; from the garbled received word&nbsp; $\underline {y}$.
# If the number&nbsp; $e$&nbsp; of erasures is smaller than the minimum distance&nbsp; $d_{\rm min}$,&nbsp; this succeeds and we get&nbsp; $\underline {z} = \underline {c}$.&nbsp; Otherwise,&nbsp; the code word finder reports that it cannot decode the current received word&nbsp; $\underline {y}$.&nbsp; A wrong decision&nbsp; $(\underline {z} \ne \underline {c})$&nbsp; was excluded at the BEC.<br>
# In this chapter,&nbsp; the first decoder block is referred to as the&nbsp; "code word estimator"&nbsp; $\rm (CWS)$.&nbsp; The naming is to make clear that due to the&nbsp; $m$&ndash;BSC model,&nbsp; wrong decisions&nbsp; $(\underline {z} \ne \underline {c})$&nbsp; are inevitable,&nbsp; namely when multiple symbol errors distort the received word&nbsp; $\underline {y}$&nbsp; to a valid code word.<br><br>
  
{{BlaueBox|TEXT= 
$\text{Conclusions:}$&nbsp; 
*The decoder's task is to determine its output vector&nbsp; $\underline {v}$&nbsp; so that it matches the information word&nbsp; $\underline {u}$&nbsp; "as well as possible".&nbsp; More precisely formulated:
  
::<math>{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.</math>
  
*Because of deterministic mapping &nbsp; $\underline{c} = {\rm enc}(\underline{u})$ &nbsp; and &nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{z})$ &nbsp; holds in the same way:
  
::<math>{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{z} \ne \underline{c}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.</math>}}
  
The two blocks highlighted in yellow are not considered further.&nbsp;  The focus of the considerations is now the&nbsp; "code word estimator"&nbsp; $\rm (CWS)$&nbsp; highlighted in red.<br>
  
== Possible code word estimators for Reed-Solomon decoding ==
<br>
[[File:EN_KC_T_2_5_S2.png|right|frame|Definition of the error vector&nbsp; $\underline{e}$|class=fit]]
The right sketch of this graphic illustrates the task of the&nbsp; $\rm CWS$,
*where here the channel model&nbsp; "$m$&ndash;BSC"&nbsp; is replaced
*by the additive&nbsp; &raquo;'''error vector'''&laquo;&nbsp;  
:$$\underline{e} = \underline{y} - \underline{c}.$$
The sketch on the left illustrates the relationship between these vectors.<br>
  
This task will now be illustrated by an example.<br>
  
{{GraueBox|TEXT= 
$\text{Example 1:}$&nbsp;
Let all symbols be elements of&nbsp; $\rm GF(2^3) = \{0,\ 1,\ \alpha^1,\ \alpha^2,\ \alpha^3,\ \alpha^4,\ \alpha^5,\ \alpha^6\}$.&nbsp; The adjacent table can be used for conversion
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]
*between the coefficient representation&nbsp; $($with the order&nbsp; $k_2$,&nbsp; $k_1$,&nbsp; $k_0)$
*and the exponent representation&nbsp; $($as powers of the primitive element&nbsp; $\alpha).$

In this example,&nbsp; the code word and received word are in coefficient notation:
::<math>\underline{c} = \Big ( (010), (001), (100),(010),(100),(111),(111)\Big )\hspace{0.05cm},</math>
::<math>\underline{y} =\Big ( (011), (001), (000),(010),(100),(111),(111)\Big )\hspace{0.05cm}.</math>

&rArr; &nbsp; This results in the following error vector&nbsp; $\underline{e} = \underline{y} - \underline{c}$:
::<math>\underline{e} \hspace{0.05cm} = \hspace{0.05cm} \Big ( (001), (000), (100), (000),(000),(000),(000)\Big )\hspace{0.05cm}.</math>

&rArr; &nbsp; Converted to the exponent representation,&nbsp; we get:
::<math>\underline{c} = \Big ( \alpha^1, \hspace{0.09cm}1\hspace{0.09cm}, \alpha^2,\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},</math>
::<math>\underline{y} =\Big ( \alpha^3, \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm},\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},</math>
::<math>\underline{e} = \Big ( \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm}, \hspace{0.05cm}\alpha^2,\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm}\Big )\hspace{0.05cm}.</math>}}<br>
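The field arithmetic of this example can be reproduced in a few lines. The following Python sketch is an illustration and not part of the original material; it assumes the primitive polynomial&nbsp; $p(x) = x^3 + x + 1$,&nbsp; which is consistent with the table's assignment&nbsp; $\alpha^3 = (011)$:

```python
# Minimal sketch: error vector e = y - c in GF(2^3).
# Assumption: the field is built from the primitive polynomial x^3 + x + 1,
# matching the table entry alpha^3 = (011); symbols are 3-bit integers k2 k1 k0.

def gf_log_table(m=3, poly=0b1011):
    """Map each nonzero element of GF(2^m) to its exponent i with alpha^i = element."""
    table, x = {}, 1
    for i in range(2**m - 1):
        table[x] = i
        x <<= 1                      # multiply by alpha
        if x & (1 << m):             # reduce modulo the primitive polynomial
            x ^= poly
    return table

log = gf_log_table()

c = [0b010, 0b001, 0b100, 0b010, 0b100, 0b111, 0b111]   # code word of the example
y = [0b011, 0b001, 0b000, 0b010, 0b100, 0b111, 0b111]   # received word

# In GF(2^m), subtraction equals addition equals bitwise XOR:
e = [yi ^ ci for yi, ci in zip(y, c)]

fmt = lambda s: '0' if s == 0 else ('1' if s == 1 else f'alpha^{log[s]}')
print([fmt(s) for s in e])           # error vector in exponent representation
```

In exponent representation this reproduces&nbsp; $\underline{e} = (1,\ 0,\ \alpha^2,\ 0,\ 0,\ 0,\ 0)$&nbsp; from the example above.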
  
<u>Note:</u>&nbsp; The task of the code word estimator&nbsp; $\rm (CWS)$&nbsp; is to find the code word&nbsp; $\underline{c}_i$&nbsp; that is most probable for the given&nbsp; $\underline{y}$&nbsp; and to pass its result&nbsp; $\underline{z} = \underline{c}_i$&nbsp; to the subsequent mapping.
  
There are several ways to do this:
# &nbsp; Hard Decision Maximum Likelihood Decoding&nbsp; $\text{(HD&ndash;MLD)}$,<br>
# &nbsp; Bounded Distance Decoding&nbsp; $\text{(BDD)}$,<br>
# &nbsp; Decoding&nbsp; "over half the minimum distance".<br><br>
  
<b>Hard Decision Maximum Likelihood Decoding &nbsp; $\text{(HD&ndash;MLD)}$:</b><br>
  
One chooses from all possible Reed&ndash;Solomon code words&nbsp; $\underline{c}_i$&nbsp; $($of which there are in total&nbsp; $q^k)$&nbsp; the one with the least&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming distance}$]]&nbsp; to the received word&nbsp; $\underline{y}$.&nbsp; Result:
  
::<math>\underline{z} = {\rm arg} \min_{\underline{c}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}_{\rm RS}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{c}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
  
*The decision here is based on the maximum inference probability&nbsp; ${\rm Pr}(\underline{c}_i\hspace{0.05cm}|\hspace{0.05cm}\underline{y})$&nbsp; and leads to the best possible result.&nbsp; For more details see&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Maximum-likelihood_decision_at_the_BSC_channel|"Maximum Likelihood Decision at the BSC Channel"]].
  
*A decision is always made even if the number&nbsp; $r$&nbsp; of symbol errors is larger than the correction capability&nbsp; $t$&nbsp; of the code.&nbsp; However,&nbsp; in such a case the decoding result is very uncertain.<br>
  
*It should be mentioned again that maximum likelihood decoding always decides.&nbsp; Decoding failure is impossible.&nbsp; But of course there are also wrong decisions.<br><br>
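The HD&ndash;MLD rule can be sketched as an exhaustive search over the codebook. The tiny symbol codebook below is purely hypothetical&nbsp; $($it is not a Reed&ndash;Solomon code$)$&nbsp; and serves only to illustrate the arg-min over the Hamming distance:

```python
# Hard Decision Maximum Likelihood Decoding (HD-MLD) as exhaustive search:
# among all codewords, pick the one closest to y in Hamming distance.
# The codebook below is hypothetical and only illustrates the principle.

def hamming_distance(a, b):
    """Number of symbol positions in which a and b differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def hd_mld(y, codebook):
    """Return z = arg min over all codewords c_i of d_H(y, c_i); always decides."""
    return min(codebook, key=lambda c: hamming_distance(y, c))

codebook = [(0, 0, 0, 0), (1, 2, 3, 4), (2, 4, 6, 1)]   # hypothetical codewords
z = hd_mld((1, 2, 0, 4), codebook)
print(z)    # the codeword differing from y in only one symbol
```

Note that the search always returns some codeword, which mirrors the statement above that ML decoding never fails; its complexity grows with the codebook size&nbsp; $q^k$.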
  
<b>Bounded Distance Decoding&nbsp; $\text{(BDD)}$:</b><br>
  
Hier wird auch im Fall <i>r</i> > <i>t</i> versucht, das Codewort zu decodieren. Im Gegensatz zu HD&ndash;MLD, das ebenfalls über die halbe Mindestdistanz hinaus decodiert, ist hier aber ein Decodierversagen nicht per se ausgeschlossen.<br>
+
If the number&nbsp; $r$&nbsp; of symbol errors in the received word&nbsp; $\underline{y}$&nbsp; is not greater than the correction capability&nbsp; $t = \lfloor (d_{\rm min}- 1)/2 \rfloor$&nbsp; of the code,&nbsp; one can correct the&nbsp; $r$&nbsp; symbol errors completely.&nbsp; However,&nbsp; it is also true:
*The case&nbsp; $r > t$&nbsp; leads to an abort of the decoding process with no result.&nbsp; In other words:
:*Only those received words are decoded to the sphere center&nbsp; $($a code word$)$&nbsp; that lie within a sphere of radius&nbsp; $t$&nbsp; around it.
:*Other received words are marked as undecodable,&nbsp; for example as&nbsp; "erasure".<br><br>
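The difference to HD&ndash;MLD can be sketched by adding the radius check. Again the codebook is hypothetical, and&nbsp; `None`&nbsp; plays the role of the erasure marker:

```python
# Bounded Distance Decoding (BDD) sketch: decode only if the closest codeword
# lies within Hamming radius t of y; otherwise declare the word undecodable.
# Hypothetical codebook; None stands for the "erasure" marker.

def hamming_distance(a, b):
    """Number of symbol positions in which a and b differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def bdd(y, codebook, t):
    """Return the closest codeword if it is within radius t, else None."""
    best = min(codebook, key=lambda c: hamming_distance(y, c))
    return best if hamming_distance(y, best) <= t else None

codebook = [(0, 0, 0, 0), (1, 2, 3, 4), (2, 4, 6, 1)]
print(bdd((1, 2, 0, 4), codebook, t=1))   # one symbol error -> decodable
print(bdd((0, 2, 6, 1), codebook, t=1))   # too far from every codeword -> None
```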
  
<b>Decoding over half the minimum distance</b>:<br>
<br>
 
Here also in the case&nbsp; $r > t$&nbsp; an attempt is made to decode the code word.&nbsp; However,&nbsp; in contrast to&nbsp; $\text{HD&ndash;MLD}$,&nbsp; which also decodes beyond half the minimum distance,&nbsp; a decoding failure is not per se excluded here.<br>
:For the remainder of this chapter,&nbsp; we will deal exclusively with&nbsp; "Bounded Distance Decoding".&nbsp; The reason for this is the enormous complexity of&nbsp; "Maximum Likelihood Decoding"&nbsp; proportional to&nbsp; $q^{n-k}$.<br>
  
== Bounded Distance Decoding Procedure ==
<br>
In the following,&nbsp; the individual steps of the&nbsp; "bounded distance decoding"&nbsp; $\rm (BDD)$&nbsp; algorithm are described briefly and in a recipe-like manner.&nbsp; In the next sections,&nbsp; the individual points will be dealt with in more detail and the procedure will be illustrated using typical examples.<br>
  
$\rm (A)$&nbsp; &nbsp;$\text{Calculation and evaluation of syndrome}$ &nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD|$\text{Detailed description}$]]
# Calculate from the received word&nbsp; $\underline{y}$&nbsp; and the parity-check matrix&nbsp; $\boldsymbol{\rm H }$&nbsp; of the code the syndrome&nbsp; $\underline {s} = \underline {y}  \cdot \boldsymbol{\rm H }^{\rm T}$.
# If&nbsp; $\underline {s} =\underline {0}$,&nbsp; set the BDD output&nbsp; $\underline {z} =\underline {y}$&nbsp; and terminate the decoding process for that received word.<br>
# Otherwise,&nbsp; set the parameter&nbsp; $r = 1$&nbsp; and continue with step&nbsp; $\rm (B)$.<br><br>
  
$\rm (B)$&nbsp; &nbsp;$\text{Determine the actual symbol error count }\ r$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector|$\text{Detailed description}$]]
# Create and check the equations&nbsp; $\underline {\it \Lambda} _l \cdot\underline {s}^{\rm T} = 0$&nbsp; for&nbsp; $l = 1,$ ... , $2 \cdot t -r$&nbsp; assuming that the received word contains exactly&nbsp; $r$&nbsp; symbol errors.<br>
# $\underline {\it \Lambda} _l $&nbsp; denotes the generalized&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Error_Locator_Polynomial_-_Definition_and_Properties|$\text{error locator polynomial}$]]&nbsp; $\text{(ELP)}$&nbsp; coefficient vectors.
# The parameter&nbsp; $t$&nbsp; denotes the correction capability of the code.&nbsp; For the Reed&ndash;Solomon codes,&nbsp; uniformly&nbsp; $t = \lfloor (n-k)/2 \rfloor$.<br>
# If there is a unique solution,&nbsp; then continue with step&nbsp; $\rm (C)$.&nbsp;<br>In the received vector&nbsp; $\underline{y}$&nbsp; exactly&nbsp; $r$&nbsp; symbols are then indeed falsified,&nbsp; and in the error vector&nbsp; $\underline{e}$&nbsp; there are&nbsp; $r$&nbsp; entries not equal to&nbsp; $0$.<br>
# Otherwise increase&nbsp; $r$&nbsp; by&nbsp; $1$.&nbsp; If&nbsp; $r \le t$&nbsp; &rArr;&nbsp; repeat step&nbsp; $\rm (B)$&nbsp; from the beginning:&nbsp; <br>The previously assumed&nbsp; $r$&nbsp; was obviously too small.&nbsp; Therefore now a new attempt with larger&nbsp; $r$.<br>
# If the new&nbsp; $r$&nbsp; is greater than the correction capability&nbsp; $t$&nbsp; of the code,&nbsp; the current received word cannot be decoded.&nbsp; End the decoding attempt with an&nbsp; "error message".<br><br>
  
$\rm (C)$&nbsp; &nbsp;$\text{Localization of the }\ r \text{ error locations}$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28C.29:_Localization_of_the_error_positions|$\text{Detailed description}$]]
# Create the&nbsp; error locator polynomial&nbsp; ${\it \Lambda}(x)$&nbsp; and find its&nbsp; $r$&nbsp; zeros in&nbsp; ${\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.<br>
# An error at location&nbsp; $i$&nbsp; is present whenever&nbsp; ${\it \Lambda}(\alpha^{i}) = 0$.<br><br>
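Step&nbsp; $\rm (C)$&nbsp; can be sketched for the error pattern of&nbsp; $\text{Example 1}$&nbsp; $($errors at positions&nbsp; $0$&nbsp; and&nbsp; $2)$. The ELP coefficients below are a constructed illustration:&nbsp; ${\it \Lambda}(x) = (x - 1)(x - \alpha^2) = x^2 + \alpha^6 x + \alpha^2$&nbsp; has its zeros exactly at the error positions. The field again uses the assumed primitive polynomial&nbsp; $x^3 + x + 1$:

```python
# Sketch of step (C): locate errors by evaluating the error locator polynomial
# Lambda(x) at all alpha^i; zeros mark error positions (a Chien-search-like scan).
# The coefficients of Lambda are constructed for the example's error pattern.

M, POLY, N = 3, 0b1011, 7                  # GF(2^3) via x^3 + x + 1 (assumption)
EXP = [0] * N                              # EXP[i] = alpha^i as 3-bit integer
x = 1
for i in range(N):
    EXP[i] = x
    x <<= 1                                # multiply by alpha
    if x & (1 << M):
        x ^= POLY                          # reduce modulo primitive polynomial

def gf_mul(a, b):
    """Multiply two field elements via exponent addition modulo N."""
    if a == 0 or b == 0:
        return 0
    return EXP[(EXP.index(a) + EXP.index(b)) % N]

def poly_eval(coeffs, x):
    """Evaluate a polynomial (highest degree first) at x via Horner's scheme."""
    acc = 0
    for c in coeffs:
        acc = gf_mul(acc, x) ^ c           # addition in GF(2^m) is XOR
    return acc

# Lambda(x) = x^2 + alpha^6 * x + alpha^2  (zeros at alpha^0 and alpha^2)
Lam = [1, EXP[6], EXP[2]]
error_positions = [i for i in range(N) if poly_eval(Lam, EXP[i]) == 0]
print(error_positions)                     # -> [0, 2]
```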
  
$\rm (D)$&nbsp; &nbsp;$\text{Determination of }\ r \text{ error values and correction}$&nbsp; &rArr; &nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28D.29:_Final_error_correction|$\text{Detailed description}$]]
# Now the&nbsp; $r$&nbsp; error locations are known.&nbsp; Replacing in the received vector&nbsp; $\underline{y}$&nbsp; the wrong symbols with erasures&nbsp; &#8658;&nbsp; $y_i = \rm E$&nbsp; if&nbsp; $e_i \ne 0$,<br>we find the result&nbsp; $\underline{z}$&nbsp; corresponding to the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"Reed-Solomon Decoding at the Erasure Channel"]].<br>
# '''Or''':&nbsp; From the equation&nbsp; $\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$&nbsp; one arrives at a linear equation system for the faulty symbols&nbsp; $(e_i \ne 0)$&nbsp; by taking advantage of the error-free positions&nbsp; $(e_i = 0)$.<br><br>
  
== Step (A): Evaluation of the syndrome in BDD ==
 
<br>
 
As shown in section&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Principle_of_syndrome_decoding|"Principle of Syndrome Decoding"]],&nbsp; the syndrome&nbsp; $\underline{s}$&nbsp; can be used to decode a linear code.  
  
*With the received word&nbsp; $\underline{y}$&nbsp; equal to code word&nbsp; $\underline{c}$&nbsp; plus error vector&nbsp; $\underline{e}$&nbsp; applies:
::<math>\underline {s} = \underline {y}  \cdot { \boldsymbol{\rm H }}^{\rm T}=  
\underline {c}  \cdot { \boldsymbol{\rm H }}^{\rm T}+
\underline {e}  \cdot { \boldsymbol{\rm H }}^{\rm T}
\hspace{0.05cm}.</math>
  
*Since always &nbsp; $\underline {c}  \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline {0}$ &nbsp; holds,&nbsp; it follows from &nbsp; $\underline{s}= \underline{0}$ &nbsp; also &nbsp; $\underline {e}  \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline{0}$. &nbsp; That is:
*With very high probability,&nbsp; from &nbsp; $\underline{s}= \underline{0}$ &nbsp; it is also possible to infer &nbsp; $\underline{e}= \underline{0}$ &nbsp; and thus also the correct decoding result&nbsp; $\underline{z}= \underline{y}$.&nbsp;  The decoding process would be finished.<br>
  
*But there are also error patterns &nbsp; $\underline{e} \ne \underline{0}$ &nbsp; that lead to the syndrome &nbsp; $\underline{s}= \underline{0}$. &nbsp; Such patterns certainly contain more than&nbsp; $t$&nbsp; symbol errors.&nbsp;
*So here it makes also sense to abort the decoding process.&nbsp; All subsequent calculations would also not lead to success.<br><br>
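Under the assumptions that the field is built from the primitive polynomial&nbsp; $x^3 + x + 1$&nbsp; and that the parity-check matrix of the&nbsp; $\text{RSC (7, 3, 5)}_8$&nbsp; has the entries&nbsp; $h_{ij} = \alpha^{i \hspace{0.03cm}\cdot \hspace{0.03cm} j}$&nbsp; $(i = 1,$ ... $, 4$;&nbsp; $j = 0,$ ... $, 6)$,&nbsp; the syndrome computation of the following example can be sketched as:

```python
# Sketch of the syndrome computation s = y * H^T over GF(2^3).
# Assumptions: primitive polynomial x^3 + x + 1, and an RSC (7, 3, 5)_8
# parity-check matrix with entries H[i][j] = alpha^(i*j), i = 1..4, j = 0..6.

M, POLY = 3, 0b1011
N = 2**M - 1                               # n = 7 symbols per word

EXP = [0] * N                              # EXP[i] = alpha^i as 3-bit integer
x = 1
for i in range(N):
    EXP[i] = x
    x <<= 1                                # multiply by alpha
    if x & (1 << M):
        x ^= POLY                          # reduce modulo primitive polynomial

def gf_mul(a, b):
    """Multiply two field elements via exponent addition modulo N."""
    if a == 0 or b == 0:
        return 0
    return EXP[(EXP.index(a) + EXP.index(b)) % N]

def syndrome(y):
    """s_i = sum over j of y_j * alpha^(i*j); addition in GF(2^m) is XOR."""
    s = []
    for i in range(1, 5):                  # 4 = n - k parity-check rows
        acc = 0
        for j, yj in enumerate(y):
            acc ^= gf_mul(yj, EXP[(i * j) % N])
        s.append(acc)
    return s

y = [0b011, 0b001, 0b000, 0b010, 0b100, 0b111, 0b111]   # received word below
print(syndrome(y))     # nonzero syndrome => y is not a valid code word
```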
  
{{GraueBox|TEXT= 
$\text{Example 2:}$&nbsp; 
This and the following examples in the next sections are always based on the Reed&ndash;Solomon code&nbsp; $\text{RSC (7, 3, 5)}_8$,&nbsp; so that the conversions given in the adjacent table for&nbsp; $\rm GF(2^3)$&nbsp; can be used.&nbsp; The received word is:
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]  
  
:<math>\underline {s} \hspace{-0.15cm} \hspace{-0.15cm} \underline {y}  \cdot { \boldsymbol{\rm H }}^{\rm T}=  
+
::<math>\underline{y}=\big (\alpha^3,\hspace{0.08cm} 1,\hspace{0.08cm} 0, \hspace{0.08cm}\alpha^1, \hspace{0.08cm} \alpha^2, \hspace{0.08cm} \alpha^5, \hspace{0.08cm} \alpha^5 \big).</math>
 +
 
 +
*With the&nbsp; [[Channel_Coding/Definition_and_Properties_of_Reed-Solomon_Codes#Generator_matrix_and_parity-check_matrix|$\text{parity-check matrix}$]]&nbsp; $\boldsymbol{\rm H }$&nbsp; results for the syndrome:
 +
 
 +
::<math>\underline {s} = \underline {y}  \cdot { \boldsymbol{\rm H } }^{\rm T}=  
 
\begin{pmatrix}
 
\begin{pmatrix}
 
\alpha^3, \hspace{0.05cm}1, \hspace{0.05cm}0, \hspace{0.05cm}\alpha^1, \hspace{0.05cm}\alpha^2, \hspace{0.05cm}\alpha^5, \hspace{0.05cm}\alpha^5
 
\alpha^3, \hspace{0.05cm}1, \hspace{0.05cm}0, \hspace{0.05cm}\alpha^1, \hspace{0.05cm}\alpha^2, \hspace{0.05cm}\alpha^5, \hspace{0.05cm}\alpha^5
Line 159: Line 187:
 
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
 
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
 
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3
 
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3
\end{pmatrix}  = </math>
+
\end{pmatrix}. </math>
:<math> \hspace{-0.15cm} = \hspace{-0.15cm}(\alpha^3 , \alpha^3 , \alpha^3 , \alpha^3) + (\alpha^1 , \alpha^2 , \alpha^3 , \alpha^4) + (0,0,0,0) + (\alpha^4,1,\alpha^3,\alpha^6)+</math>
+
*These vector&ndash;matrix multiplications gives the result:
:<math>\hspace{0.15cm}  +  \hspace{0.15cm}
+
::<math>\underline {s} \hspace{-0.05cm} = \hspace{-0.05cm} (\alpha^3 , \alpha^3 , \alpha^3 , \alpha^3) + (\alpha^1 , \alpha^2 , \alpha^3 , \alpha^4) + (0,0,0,0) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,1,\alpha^3,\alpha^6)+(\alpha^6,\alpha^3,1,\alpha^4)\hspace{-0.05cm}+\hspace{-0.05cm}(\alpha^3,\alpha^1,\alpha^6,\alpha^4) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,\alpha^3,\alpha^2,\alpha^1)</math>
(\alpha^6,\alpha^3,1,\alpha^4)+(\alpha^3,\alpha^1,\alpha^6,\alpha^4) + (\alpha^4,\alpha^3,\alpha^2,\alpha^1)= ... \hspace{0.05cm}=
+
::<math>\Rightarrow  \hspace{0.3cm} \underline {s} = \text{...} \hspace{0.05cm}=
 
(\alpha^5,\alpha^2,\alpha^3,\alpha^1)  \hspace{0.05cm}.</math>
 
(\alpha^5,\alpha^2,\alpha^3,\alpha^1)  \hspace{0.05cm}.</math>
  
*So the received word was falsified. &nbsp; Otherwise the result would have been&nbsp; $\underline{s}= \underline{0} = (0, 0, 0, 0)$.<br>


The description of the decoding process for the&nbsp; $\text{RSC (7, 3, 5)}_8$&nbsp; is continued in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector|$\text{Example 4}$]].}}<br>
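The syndrome computation of this example can be reproduced numerically. The following sketch builds $\rm GF(2^3)$ from the primitive polynomial $p(x) = x^3 + x + 1$ (the convention of this chapter, i.e. $\alpha^3 = \alpha + 1$) via exponent/log tables; all names such as `EXP`, `gf_mul` and `syndrome` are illustrative helpers, not part of the text:

```python
from functools import reduce
from operator import xor

# GF(2^3) built from the primitive polynomial p(x) = x^3 + x + 1,
# i.e. alpha^3 = alpha + 1.  Elements are 3-bit integers (b2 b1 b0).
EXP = [1, 2, 4, 3, 6, 7, 5]            # EXP[i] is alpha^i
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    """Multiplication in GF(8); addition there is plain XOR."""
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def syndrome(y, n=7, nk=4):
    """s = y . H^T with H^T rows (alpha^i, alpha^2i, alpha^3i, alpha^4i)."""
    return [reduce(xor, (gf_mul(y[i], EXP[(i * (j + 1)) % 7]) for i in range(n)))
            for j in range(nk)]

# Received word of Example 2:  y = (a^3, 1, 0, a^1, a^2, a^5, a^5)
y = [3, 1, 0, 2, 4, 7, 7]
print(syndrome(y))                      # -> [7, 4, 3, 2] = (a^5, a^2, a^3, a^1)
```

The printed syndrome matches the result $(\alpha^5,\alpha^2,\alpha^3,\alpha^1)$ derived above.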
  
== Error Locator Polynomial - Definition and Properties ==
<br>
After the syndrome calculation in step&nbsp; $\rm (A)$&nbsp; with the result&nbsp; $\underline{s} \ne \underline{0}$&nbsp; we know
*that the received word&nbsp; $\underline{y}$ &nbsp; does not match the code word&nbsp; $\underline{c}$,&nbsp; and<br>

*that the error vector&nbsp; $\underline{e} = (e_0, \hspace{0.05cm}e_1, \hspace{0.05cm}\text{ ...}\hspace{0.05cm} , e_{n-1})$&nbsp; certainly includes elements not equal to&nbsp; "$0$".<br><br>

However,&nbsp; we do not know how many symbols were falsified &nbsp; $(0 < r \le n)$,&nbsp; nor can we name the positions of the error locations&nbsp; $(e_i \ne 0)$&nbsp; in the error vector&nbsp; $\underline{e}$.&nbsp; An approach to this task is provided by the so-called&nbsp; "error locator polynomial"&nbsp; introduced by&nbsp; [https://en.wikipedia.org/wiki/W._Wesley_Peterson $\text{William Wesley Peterson}$]&nbsp; &rArr; &nbsp; see [Pet60]<ref name ='Pet60'>Peterson, W.W.:&nbsp; Encoding and Error-correction Procedures for the Bose-Chaudhuri Codes.&nbsp; IRE Transactions on Information Theory, IT-6:459-470, 1960.</ref>.&nbsp; It is also known as the&nbsp; "key equation".<br>

{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; Let it be known that exactly&nbsp; $r$&nbsp; elements of the error vector&nbsp; $\underline{e}$&nbsp; are non-zero,&nbsp; recognizable by the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming weight}$]]&nbsp; $w_{\rm H}(\underline{e}) = r$. 
*Also let the set &nbsp; ${I}_{\rm EP}$ &nbsp; of error positions be known:

::<math>I_{\rm EP} = \{ i \hspace{0.1cm}\vert \hspace{0.1cm} e_i \ne 0,\hspace{0.1cm} 0 \le i < n \}\hspace{0.05cm}.</math>

*Then the&nbsp; &raquo;'''Error Locator Polynomial'''&laquo; &nbsp;$\rm (ELP)$&nbsp; is given by:

::<math>{\it \Lambda}(x)=x \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ].</math>}}
 
  
&rArr; &nbsp; From this definition of the error locator polynomial we know:
*Because of the factor&nbsp; $x$&nbsp; in front of the product sign:&nbsp; ${\it \Lambda}(x= 0) = 0$.<br>

*Further&nbsp; $r$&nbsp; zeros result for&nbsp; $x = \alpha^{i}$ &nbsp; with &nbsp; $i \in I_{\rm EP}$,&nbsp; that is,&nbsp; for all error positions.<br>

*In contrast,&nbsp; for&nbsp; $i \notin I_{\rm EP}$ &nbsp; &#8658; &nbsp; $e_i = 0$,&nbsp; the error locator polynomial has no zero: &nbsp; ${\it \Lambda}(x= \alpha^{i}) \ne 0$.<br><br>

&rArr; &nbsp; So we search for the&nbsp; $r$&nbsp; non&ndash;trivial zeros of&nbsp; ${\it \Lambda}(x)$&nbsp; with argument&nbsp; $x \in {\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.&nbsp; If we succeed, 
:*we know the&nbsp; $r$&nbsp; error positions, 
:*but not yet the actual error values&nbsp; $e_i \in {\rm GF}(q)$.<br>
  
{{GraueBox|TEXT=
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]] 
$\text{Example 3:}$&nbsp;
Let $n=7$ &nbsp; &#8658; &nbsp; $q=8$,&nbsp; $r=2$&nbsp; and&nbsp; $I_{\rm EP} = \{2, \hspace{0.05cm}4\}$:  &nbsp; &nbsp; &nbsp; &nbsp;[[File:P ID2551 KC T 2 5 S5a.png]]<br>

*Thus one obtains for the&nbsp;  "error locator polynomial"&nbsp; from&nbsp; ${\rm GF}(2^3)$:

::<math>{\it \Lambda}(x)=x \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^4)=  x \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^4) =x \cdot \big [x^2 \hspace{-0.05cm}+ \hspace{-0.05cm}(\alpha^2 + \alpha^4) \cdot x + \alpha^6\big ] </math>
::<math>\Rightarrow \hspace{0.3cm}  {\it \Lambda}(x)= x \cdot \big [\alpha^6 + \alpha \cdot x + x^2\big ]\hspace{0.05cm}.</math>

*The two further zeros $($except for&nbsp; $x = 0)$&nbsp; arise here naturally for&nbsp; $x = \alpha^2$&nbsp; and&nbsp; $x = \alpha^4$,&nbsp; as the following check calculation shows:

::<math>{\it \Lambda}(x = \alpha^2)= \alpha^2 \cdot \big [\alpha^6 + \alpha \cdot \alpha^2 + (\alpha^2)^2\big ] = \alpha^2 \cdot \big [\alpha^6 + \alpha^3 + \alpha^4  \big ]= 0\hspace{0.05cm},</math>
::<math> {\it \Lambda}(x = \alpha^4)= \alpha^4 \cdot \big [\alpha^6 + \alpha \cdot \alpha^4 + (\alpha^4)^2\big ] =\alpha^4 \cdot \big [\alpha^6 + \alpha^5 + \alpha \big ]= 0\hspace{0.05cm}.</math>}}<br>
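The zero search of this example can also be sketched numerically. The block below is self-contained (table-based $\rm GF(2^3)$ arithmetic for $p(x) = x^3 + x + 1$, as in this chapter); the helper names `elp`, `EXP`, `gf_mul` are illustrative:

```python
EXP = [1, 2, 4, 3, 6, 7, 5]            # alpha^i in GF(2^3), p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    """Multiplication in GF(8); addition is XOR."""
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def elp(x, error_pos):
    """Lambda(x) = x * prod_{i in I_EP} (x - alpha^i); '-' equals '+' (XOR) here."""
    val = x
    for i in error_pos:
        val = gf_mul(val, x ^ EXP[i])
    return val

# I_EP = {2, 4}:  the non-trivial zeros are exactly alpha^2 and alpha^4
zeros = [i for i in range(7) if elp(EXP[i], [2, 4]) == 0]
print(zeros)                            # -> [2, 4]
```

Scanning all seven non-zero field elements finds zeros only at $x = \alpha^2$ and $x = \alpha^4$, in agreement with the check calculation above.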
  
For the further derivation,&nbsp; we always assume the&nbsp; $\text{RSC (7, 3, 5)}_8$&nbsp; with the following parameter values:

:$$n=7, \hspace{0.3cm}k = 3, \hspace{0.3cm}d_{\rm min} = 5 \ \Rightarrow \ t = (d_{\rm min} -1)/2 = 2.$$

*Let the number of symbol errors be&nbsp; $r = t = 2$.&nbsp; Thus the system of equations to be solved,&nbsp; with the auxiliary variables&nbsp; $L_i = {\it \Lambda}(\alpha^{i})$,&nbsp; reads:

::<math>L_0 = {\it \Lambda }(\alpha^0) = \alpha^0 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^0)^1 + (\alpha^0)^2 \right ] = {\it \lambda}_0  \cdot 1 + {\it \lambda}_1 \cdot  1 + 1 \hspace{0.05cm},</math>
::<math>L_1 = {\it \Lambda }(\alpha^1) =\alpha^1 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^1)^1 + (\alpha^1)^2 \right ] = {\it \lambda}_0 \cdot \alpha^1+ {\it \lambda}_1 \cdot \alpha^2 + \alpha^3 \hspace{0.05cm},</math>
:::::::::<math>\text{...}</math>
::<math> L_6 = {\it \Lambda }(\alpha^6) = \alpha^6 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^6)^1 + (\alpha^6)^2 \right ] = {\it \lambda}_0 \cdot \alpha^6 + {\it \lambda}_1 \cdot \alpha^{12} + \alpha^{18} \hspace{0.05cm}.</math>
  
*In vector form,&nbsp; this system of equations,&nbsp; with the auxiliary vector &nbsp; $\underline{L} = (L_0, \hspace{0.05cm}L_1, \hspace{0.05cm}L_2,\hspace{0.05cm}L_3,\hspace{0.05cm}L_4,\hspace{0.05cm}L_5,\hspace{0.05cm}L_6)$,&nbsp; reads:

::<math>\underline {L}^{\rm T}=\begin{pmatrix}
L_0\\
L_1\\
L_2\\
L_3\\
L_4\\
L_5\\
L_6
\end{pmatrix}
\hspace{0.15cm} = \hspace{0.15cm}
\begin{pmatrix}
1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 \\
\alpha^2 & \alpha^4 & \alpha^6 \\
\alpha^3 & \alpha^6 & \alpha^{9} \\
\alpha^4 & \alpha^{8} & \alpha^{12} \\
\alpha^5 & \alpha^{10} & \alpha^{15} \\
\alpha^6 & \alpha^{12} & \alpha^{18}
\end{pmatrix}
\cdot
\begin{pmatrix}
\lambda_0\\
\lambda_1\\
1
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*We now expand the ELP coefficient vector&nbsp; $\underline {\it \Lambda }$&nbsp; by appending zeros to the length&nbsp; $n-k$.&nbsp; Thus,&nbsp; in the considered example,&nbsp; we obtain &nbsp; $\underline{\it \Lambda } = ( \lambda_0,\hspace{0.05cm}\lambda_1,\hspace{0.05cm}1, \hspace{0.05cm}0)$ &nbsp; and the following vector equation:

::<math>\underline {L}^{\rm T}
\hspace{0.15cm} = \hspace{0.15cm}
\begin{pmatrix}
1 & 1 & 1 & 1\\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^{8}\\
\alpha^3 & \alpha^6 & \alpha^{9} & \alpha^{12}\\
\alpha^4 & \alpha^{8} & \alpha^{12} & \alpha^{16}\\
\alpha^5 & \alpha^{10} & \alpha^{15} & \alpha^{20}\\
\alpha^6 & \alpha^{12} & \alpha^{18} & \alpha^{24}
\end{pmatrix}
\cdot
\begin{pmatrix}
\lambda_0\\
\lambda_1\\
1\\
0
\end{pmatrix}
\hspace{0.05cm}.</math>

*With this we have achieved:
# The&nbsp; $7 \times 3$&nbsp; matrix has now become a&nbsp; $7 \times 4$&nbsp; matrix.
# The fourth column can actually be filled arbitrarily,&nbsp; since all its elements are multiplied by zero.
# The addition chosen here gives the transposed&nbsp; [[Channel_Coding/Definition_and_Properties_of_Reed-Solomon_Codes#Generator_matrix_and_parity-check_matrix| $\text{parity-check matrix}$]]&nbsp; of the&nbsp; $\text{RSC (7, 3, 5)}_8$.
#  Thus,&nbsp; one can write for the last vector equation:

:::<math>\underline {L}^{\rm T} = { \boldsymbol{\rm H }}^{\rm T} \cdot \underline {\it \Lambda }^{\rm T}
  \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}
\hspace{0.05cm}.</math>
  
*But since for the error locations&nbsp; $(e_i \ne 0)$&nbsp; always&nbsp; $L_i = {\it \Lambda}(\alpha^{i}) = 0$&nbsp; holds,&nbsp; the product&nbsp; $L_i \cdot e_i$&nbsp; is always zero,&nbsp; and one obtains as determining equation for the zeros of the error locator polynomial:

:::<math>\underline {L}  \cdot \underline {e}^{\rm T} = 0  \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 
\hspace{0.05cm}.</math>

{{BlaueBox|TEXT= 
$\text{Important intermediate result:}$&nbsp; The non-trivial zeros&nbsp; $(\ne 0)$&nbsp; of the error locator polynomial &nbsp; ${\it \Lambda}(x)$&nbsp; must always satisfy the vector equation&nbsp; $\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 $.
*Here,&nbsp; $\underline {\it \Lambda }$&nbsp; denotes the &nbsp;$\rm ELP$&nbsp; coefficient vector.

* $\underline {s }  = \underline {y }\cdot  \boldsymbol{\rm H }^{\rm T} $&nbsp; gives the&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD|$\text{syndrome}$]].}}<br>
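This intermediate result can be checked numerically for the running example: with the syndrome $\underline{s} = (\alpha^5,\alpha^2,\alpha^3,\alpha^1)$ from Example 2 and the coefficient vector $\underline{\it \Lambda} = (\alpha^2,\alpha^6,1,0)$ that is determined in Example 4, the scalar product over $\rm GF(2^3)$ indeed vanishes. A self-contained sketch (table-based $\rm GF(8)$ arithmetic; the helper names are illustrative):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]            # alpha^i in GF(2^3), p(x) = x^3 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    """Multiplication in GF(8); addition is XOR."""
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def dot(u, v):
    """Scalar product over GF(8): XOR-sum of element-wise products."""
    acc = 0
    for a, b in zip(u, v):
        acc ^= gf_mul(a, b)
    return acc

s   = [7, 4, 3, 2]                     # syndrome (a^5, a^2, a^3, a^1) of Example 2
Lam = [4, 5, 1, 0]                     # Lambda = (a^2, a^6, 1, 0), found in Example 4
print(dot(Lam, s))                     # -> 0, i.e. Lambda . s^T = 0 holds
```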


== Step (B): Set up/evaluate the ELP coefficient vector ==
<br>
Before we can illustrate the intermediate result&nbsp; $\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 $&nbsp; with an example,&nbsp; some generalizations need to be made. 
[[File:P ID2550 KC T 2 5 S3 v1.png|right|frame|Shifted ELP coefficient vectors|class=fit]]

The reason for this is:
*The relationship &nbsp; $\underline {\it \Lambda }  \cdot \underline {s}^{\rm T} = 0 $ &nbsp; yields only a single determining equation. 

*Thus the problem can be solved for&nbsp; $r = 1$&nbsp; if one is sure that indeed only one symbol has been falsified.<br>

*If one is not sure of this,&nbsp; but nevertheless performs the calculation for&nbsp; $r = 1$,&nbsp; one still needs a second equation&nbsp; (or even several)&nbsp; to verify the assumption.<br><br>
  
The property of the&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Error_Locator_Polynomial_-_Definition_and_Properties|$\text{error locator polynomial}$]]&nbsp; that&nbsp; ${\it \Lambda}(\alpha^{i}) = 0$&nbsp; holds only for&nbsp; $e_i \ne 0$&nbsp; $($that means:&nbsp; the $i$&ndash;th symbol is falsified$)$&nbsp; is preserved when multiplying&nbsp; ${\it \Lambda}(x)$&nbsp; by arbitrary powers of&nbsp; $x$.&nbsp; Each multiplication by&nbsp; $x$&nbsp; implies a shift of the ELP coefficient vector by one place to the right.<br>
<br clear=all>
  
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp; 
The&nbsp; $\text{generalized ELP coefficient vectors}$ &nbsp; $\underline {\it \Lambda }_l$&nbsp; result from successive shifts with respect to&nbsp; $\underline {\it \Lambda }$:

::<math>{\it \Lambda}_l(x)=x^l \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x^l \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ]\hspace{0.05cm}.</math>

In this defining equation,&nbsp; $\underline {\it \Lambda }_1$&nbsp; corresponds to the previous&nbsp; $\underline {\it \Lambda }$.}}
  

The graphic shows the occupancy of these vectors under the assumption of&nbsp; $r$&nbsp; error locations in the error vector &nbsp;$\underline {e}$&nbsp; for
*$r=1$ &nbsp; in the left area&nbsp; (with blue background),
*$r=2$ &nbsp; in the middle area&nbsp; (with red background),
*$r=3$ &nbsp; in the right area&nbsp; (with green background).

One recognizes:
# The length of all&nbsp; $\underline {\it \Lambda }_l$&nbsp; is always&nbsp; $n-k$.&nbsp; Each vector contains the&nbsp; $r$&nbsp; coefficients&nbsp; $\lambda_0$,&nbsp; $\lambda_1$, ... ,&nbsp; $\lambda_{r-1}$&nbsp; and one&nbsp; "$1$". <br>The remainder of each vector is padded with zeros.<br>
# For each&nbsp; $r$&nbsp; there are exactly&nbsp; $n-k-r$&nbsp; coefficient vectors &nbsp; $\underline {\it \Lambda }_l$,&nbsp; where &nbsp; $\underline {\it \Lambda }_l$ &nbsp; always results from &nbsp; $\underline {\it \Lambda }_{l-1} $&nbsp; by a right shift of one position. <br>The vector&nbsp; $\underline {\it \Lambda }_{n-k-r}$&nbsp; always ends with a&nbsp; "$1$".<br>
# The equation system &nbsp; $\underline {\it \Lambda }_l \cdot \underline {s}^{\rm T} = 0 $ &nbsp; therefore leads to&nbsp; $n-k-r$&nbsp; equations. <br>The chosen approach for&nbsp; $r$&nbsp; is only correct if all equations lead to the same results for&nbsp; $\lambda_0$, ... , $\lambda_{r-1}$.<br>
# If this is not the case,&nbsp; one has to increase&nbsp; $r$&nbsp; and thus work on a new equation system,&nbsp; until a unique solution results from all equations for the current&nbsp; $r$.&nbsp; If finally&nbsp; $r$&nbsp; is greater than the correction capability&nbsp; $t$&nbsp; of the code,&nbsp; the calculation can be terminated.&nbsp; The pending received word&nbsp; $\underline {y}$&nbsp; is then not decodable.<br>
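The construction of the shifted coefficient vectors $\underline{\it \Lambda}_l$ sketched in the graphic can be written generically; the function name below is hypothetical, not from the text:

```python
def shifted_elp_vectors(coeffs, nk):
    """All n-k-r right-shifted ELP coefficient vectors of length nk = n-k.

    coeffs = [lambda_0, ..., lambda_{r-1}, 1]: the r ELP coefficients plus
    the trailing '1' belonging to the x^r term."""
    r = len(coeffs) - 1
    # vector l: l leading zeros, the coefficients, then zero-padding to length nk
    return [[0] * l + coeffs + [0] * (nk - r - 1 - l) for l in range(nk - r)]

# r = 2, n - k = 4 (as for the RSC (7, 3, 5)_8); symbolic placeholders "l0", "l1":
rows = shifted_elp_vectors(["l0", "l1", 1], 4)
print(rows)   # -> [['l0', 'l1', 1, 0], [0, 'l0', 'l1', 1]]
```

For $r=1$ the same function yields the three vectors $(\lambda_0,1,0,0)$, $(0,\lambda_0,1,0)$, $(0,0,\lambda_0,1)$, matching the left area of the graphic.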
  

{{GraueBox|TEXT=
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representation]] 
$\text{Example 4:}$&nbsp; The conditions stated in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]]&nbsp; still apply:
# There,&nbsp; due to the syndrome &nbsp; $\underline {s} = (\alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1) \ne \underline {0}$ &nbsp; it was demonstrated that the received vector&nbsp; $\underline {y}$&nbsp; was falsified &nbsp; &rArr; &nbsp; error vector &nbsp; $\underline {e} \ne \underline{0}$.
# Not known,&nbsp; however,&nbsp; is the actual number&nbsp; $r$&nbsp; of symbol errors.<br>

*Assuming a single falsified symbol &nbsp; $(r= 1)$,&nbsp; we obtain the following system of equations&nbsp; $($written here in matrix form$)$:
 
  
::<math>\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= 
\begin{pmatrix}
\lambda_0 & 1 & 0 & 0  \\
0 & \lambda_0 & 1 & 0  \\
0 & 0 & \lambda_0 & 1  
\end{pmatrix} \cdot
\begin{pmatrix}
\alpha^5\\
\alpha^2\\
\alpha^3\\
\alpha^1
\end{pmatrix} =
\begin{pmatrix}
0\\
0\\
0
\end{pmatrix} 
\hspace{0.05cm}.</math>


*This equation system gives three different solutions for&nbsp; $\lambda_0$,&nbsp; which is contradictory&nbsp; $($the assumption&nbsp; $r=1$&nbsp; cannot be correct$)$:<br>

::<math>\text{line 1:}\hspace{0.5cm}\alpha^5 \cdot \lambda_0 + \alpha^2 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{2-5}= \alpha^{-3}= \alpha^{4}\hspace{0.05cm},</math>
::<math>\text{line 2:}\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{3-2}=  \alpha\hspace{0.05cm},</math>
::<math>\text{line 3:}\hspace{0.5cm}\alpha^3 \cdot \lambda_0 + \alpha^1 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
\lambda_0 = \alpha^{1-3}=  \alpha^{-2} = \alpha^{5} \hspace{0.05cm}.</math>
  

*Therefore,&nbsp; we now set up another equation system,&nbsp; this time assuming &nbsp; $r = 2$:

::<math>\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= 
\begin{pmatrix}
\lambda_0 & \lambda_1 & 1 & 0  \\
0 & \lambda_0 & \lambda_1 & 1  
\end{pmatrix} \cdot
\begin{pmatrix}
\alpha^5\\
\alpha^2\\
\alpha^3\\
\alpha^1
\end{pmatrix} =
\begin{pmatrix}
0\\ 
0
\end{pmatrix}\hspace{0.05cm}. </math>

*This leads to two equations for&nbsp; $\lambda_0$&nbsp; and&nbsp; $\lambda_1$:

::<math>\alpha^5 \cdot \lambda_0 + \alpha^2 \cdot \lambda_1 + \alpha^3 = 0 \hspace{0.05cm},\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 \cdot \lambda_1 + \alpha^1 = 0 \hspace{0.05cm}.</math>
  
Dieses Gleichungssystem ist nun eindeutig lösbar. Man erhält <i>&lambda;</i><sub>0</sub> = <i>&alpha;</i><sup>2</sup> und <i>&lambda;</i><sub>1</sub> = <i>&alpha;</i><sup>6</sup>. Das bedeutet: Die Annahme, dass tatsächlich <i>r</i> = 2 Positionen des Empfangsvektors <u><i>y</i></u> verfälscht wurden, ist richtig.<br>
+
*This equation system is now clearly solvable.&nbsp; One gets &nbsp; $\lambda_0 = \alpha^2$ &nbsp; and &nbsp; $\lambda_1 = \alpha^6$.&nbsp; This means:  
 +
# The assumption that indeed&nbsp; $r = 2$&nbsp; positions of the received vector&nbsp; $\underline {y}$&nbsp; have been distorted is correct.<br>
 +
# But it is not yet known which positions have been falsified.&nbsp; So much for now:
 +
# It is not symbol positions&nbsp; '''2'''&nbsp; and&nbsp; '''6''',&nbsp; but the positions&nbsp; '''0'''&nbsp; and&nbsp; '''2''',&nbsp; as shown in the following&nbsp; $\text{Example 5}$&nbsp; (next section).}}<br>  
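The two hypothesis tests above can be reproduced numerically. A minimal Python sketch, assuming the field&nbsp; $\rm GF(2^3)$&nbsp; with primitive polynomial&nbsp; $p(x) = x^3 + x + 1$&nbsp; used throughout this chapter (table and helper names are ours, not from the text):

```python
# GF(2^3) arithmetic with primitive polynomial p(x) = x^3 + x + 1, alpha = 0b010
EXP = [1] * 7                               # EXP[i] = alpha^i as 3-bit integer
for i in range(1, 7):
    v = EXP[i - 1] << 1                     # multiply by alpha
    if v & 0b1000:
        v ^= 0b1011                         # reduce modulo x^3 + x + 1
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}         # discrete logarithm of the nonzero elements

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

s = [EXP[5], EXP[2], EXP[3], EXP[1]]        # syndrome (alpha^5, alpha^2, alpha^3, alpha^1)

# Hypothesis r = 1: each matrix row demands lambda_0 = s_(j+1) / s_j
cands = [EXP[(LOG[s[j + 1]] - LOG[s[j]]) % 7] for j in range(3)]
print(len(set(cands)))                      # -> 3 different values: r = 1 is contradictory

# Hypothesis r = 2: lambda_0 = alpha^2, lambda_1 = alpha^6 satisfy both rows
l0, l1 = EXP[2], EXP[6]
for j in range(2):
    assert gf_mul(l0, s[j]) ^ gf_mul(l1, s[j + 1]) ^ s[j + 2] == 0
```

Note that addition and subtraction in&nbsp; $\rm GF(2^m)$&nbsp; are both the bit-wise XOR of the coefficient vectors, which is why the checks above use `^`.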
  
Man weiß aber noch nicht, welche Positionen verfälscht wurden. Soviel vorneweg; Es sind nicht die Positionen 2 und 6, sondern die Symbolpositionen 0 und 2, wie im folgenden [http://en.lntwww.de/Kanalcodierung/Fehlerkorrektur_nach_Reed%E2%80%93Solomon%E2%80%93Codierung#Schritt_.28C.29:_Lokalisierung_der_Fehlerstellen Beispiel C] gezeigt wird.{{end}}<br>
 
  
== Step (C): Localization of the error positions ==
<br>
After processing step&nbsp; $\rm(B)$,&nbsp; the following are known:

# the number&nbsp; $r$&nbsp; of error locations&nbsp; $e_i \ne 0$&nbsp; in the vector&nbsp; $\underline {e} = (e_0, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_i, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_{n-1})$,<br>
# the coefficients&nbsp; $\lambda_0, \hspace{0.05cm}\text{... }\hspace{0.05cm} , \lambda_{r-1}$&nbsp; of the error locator polynomial.<br><br>

Now the set of error positions has to be determined: &nbsp; $I_{\rm EP} = \{ i \hspace{0.1cm}| \hspace{0.1cm} e_i \ne 0,\hspace{0.3cm} 0 \le i < n \}\hspace{0.05cm}.$

There are two ways to do this&nbsp; $($both methods are used in the following example$)$:
*the so-called&nbsp; "Chien search",&nbsp; in which the zeros are determined by inserting all possible code symbols except the zero symbol &nbsp; &rArr; &nbsp; $(\alpha^0, \text{... }, \alpha^i, \text{... },\alpha^{n-1})$&nbsp; into the error locator polynomial,<br>

*the evaluation of the equation &nbsp; $\underline {L} = (L_0, \text{... }, L_i, \text{... },L_{n-1} ) = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}$ &nbsp; with the abbreviation&nbsp; $L_i = {\it \Lambda}(\alpha^i)$.<br>

{{GraueBox|TEXT=
$\text{Example 5:}$&nbsp;
In&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector| $\text{Example 4}$]],&nbsp; under the boundary conditions stated in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]],&nbsp; it was determined that
[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]] 

# $r= 2$&nbsp; symbol errors are present,&nbsp; and
# the ELP coefficients are&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$.
  
This results in the following error locator polynomial:

::<math>{\it \Lambda}(x)=x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+x^2 \big ]
=x \cdot \big [\alpha^2 + \alpha^6 \cdot x+x^2 \big ]\hspace{0.05cm}.</math>

&rArr; &nbsp; According to the Chien search one obtains:

::<math>{\it \Lambda}(\alpha^0) = \alpha^0 \cdot \big [ \alpha^2 + \alpha^6 \cdot 1 + 1 \big ] =  \alpha^2 + (\alpha^2 + 1) + 1 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm},</math>
::<math>{\it \Lambda}(\alpha^1) = \alpha^1 \cdot \big [\alpha^2 + \alpha^6 \cdot \alpha^1 + \alpha^2\big ]= \alpha^1 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{\rm No\hspace{0.15cm} zero}\hspace{0.05cm},</math>
::<math>{\it \Lambda}(\alpha^2) = \alpha^2 \cdot \big [ \alpha^2 + \alpha^6 \cdot \alpha^2 + \alpha^4 \big ] =\alpha^4 + \alpha^{10} + \alpha^6 = (\alpha^2 + \alpha) + (\alpha + 1) + (\alpha^2 + 1) = 0
\hspace{0.3cm} \Rightarrow  \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm}.</math>
  
Thus the two error positions&nbsp; $i = 0$&nbsp; and&nbsp; $i = 2$&nbsp; are found &nbsp; &rArr; &nbsp; the error vector is: &nbsp; $\underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)$.<br>

&rArr; &nbsp; The vector equation&nbsp; $\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H } }$&nbsp; gives the same result in a more compact form:

::<math>\underline {L} = \underline{\it \Lambda} \cdot  { \boldsymbol{\rm H } }  = (\alpha^2, \alpha^6, 1, 0) \cdot
\begin{pmatrix}
1 & \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\
1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 & \alpha^3 & \alpha^5 \\
1 & \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 & \alpha^1 & \alpha^4 \\
1 & \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 & \alpha^6 & \alpha^3
\end{pmatrix} </math>
::<math> \Rightarrow \hspace{0.3cm} \underline {L}   =  (\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6,1,\alpha^1)
+ (\alpha^6,\alpha^1,\alpha^3,\alpha^5,1,\alpha^2,\alpha^4)+(1,\alpha^3,\alpha^6,\alpha^2,\alpha^5,\alpha^1,\alpha^4) </math>
::<math> \Rightarrow \hspace{0.3cm} \underline {L}  = (0,\alpha^1,0,\alpha^2,\alpha^3,\alpha^5,\alpha^1)\hspace{0.3cm} \Rightarrow  \hspace{0.3cm}
L_0 = L_2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}
\underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)\hspace{0.05cm}.</math>

The example continues with&nbsp; $\text{Example 6}$&nbsp; in the next section.}}<br>
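The Chien search of this example can be sketched in a few lines of Python (same&nbsp; $\rm GF(2^3)$&nbsp; construction with&nbsp; $p(x) = x^3 + x + 1$; helper names are ours):

```python
# GF(2^3) tables: p(x) = x^3 + x + 1, alpha = 0b010
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

l0, l1 = EXP[2], EXP[6]                     # ELP coefficients lambda_0, lambda_1

def elp(x):                                 # Lambda(x) = x * (l0 + l1*x + x^2)
    return gf_mul(x, l0 ^ gf_mul(l1, x) ^ gf_mul(x, x))

# Chien search: insert all nonzero elements alpha^0 ... alpha^6
error_positions = [i for i in range(7) if elp(EXP[i]) == 0]
print(error_positions)                      # -> [0, 2]
```

The zeros of the error locator polynomial at&nbsp; $\alpha^0$&nbsp; and&nbsp; $\alpha^2$&nbsp; mark the falsified symbol positions&nbsp; $0$&nbsp; and&nbsp; $2$, in agreement with the example.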
  
== Step (D): Final error correction ==
<br>
In the last step,&nbsp; the&nbsp; $r$&nbsp; symbol errors have to be corrected,&nbsp; whose positions are known after completion of step&nbsp; $\rm (C)$:
*Marking the error positions in the received word&nbsp; $\underline {y}$&nbsp; as erasures&nbsp; $\rm E$,&nbsp; the corresponding code word&nbsp; $\underline {z}$&nbsp; can be found according to the description in the chapter&nbsp; [[Channel_Coding/Reed-Solomon_Decoding_for_the_Erasure_Channel|"Reed-Solomon decoding at the erasure channel"]].<br>

*A second possibility is to determine the error vector&nbsp; $\underline {e}$&nbsp; from the equation &nbsp; $\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$ &nbsp; and to correct according to &nbsp; $\underline {z} = \underline {y} - \underline {e} $. &nbsp; This procedure is the basis of the following example.<br><br>
  
{{GraueBox|TEXT= 
$\text{Example 6:}$&nbsp; We consider again the&nbsp; $\text{RSC (7, 3, 5)}_8$.&nbsp;  In&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28A.29:_Evaluation_of_the_syndrome_in_BDD| $\text{Example 2}$]],

*the received word&nbsp; $\underline{y}=\big (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0, \hspace{0.09cm}\alpha^1, \hspace{0.09cm} \alpha^2, \hspace{0.09cm} \alpha^5, \hspace{0.09cm} \alpha^5 \big)$&nbsp; was given,&nbsp; and

*from it the syndrome &nbsp; $\underline{s}=\big (\alpha^5,\hspace{0.09cm}\alpha^2, \hspace{0.09cm} \alpha^3, \hspace{0.09cm} \alpha^1\big)$&nbsp; was determined.

According to the calculations in&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28C.29:_Localization_of_the_error_positions| $\text{Example 5}$]],&nbsp; the error vector is&nbsp; $\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)$.<br>

[[File:EN_KC_Z_2_5_neu.png|right|frame|$\rm GF(2^3)$&nbsp; representations]]
From &nbsp;$\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$ &nbsp; we now obtain the following determining equations for the error values&nbsp; $e_0$&nbsp; and&nbsp; $e_2$:

::<math>\underline {e}  \cdot \boldsymbol{\rm H }^{\rm T}= \begin{pmatrix}
e_0 & 0 & e_2 & 0 & 0 & 0 & 0
\end{pmatrix}\cdot
\begin{pmatrix}
1 & 1 & 1 & 1 \\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\
\alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\
\alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\
\alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\
\alpha^6 & \alpha^5 & \alpha^4 & \alpha^3
\end{pmatrix} \stackrel{!}{=}
\begin{pmatrix}
\alpha^5 & \alpha^2 & \alpha^3 & \alpha^1 
\end{pmatrix}  </math>
 
::<math>\Rightarrow  \hspace{0.3cm}e_0 \cdot (1,\ 1,\ 1,\ 1) + e_2 \cdot (\alpha^2,\ \alpha^4,\ \alpha^6,\ \alpha^1)\stackrel{!}{=}
( \alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1)</math>
::<math>\Rightarrow  \hspace{0.3cm}
e_0 + e_2 \cdot \alpha^2 = \alpha^5\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^4 = \alpha^2\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^6 = \alpha^3\hspace{0.05cm},\hspace{0.2cm}
e_0 + e_2 \cdot \alpha^1 = \alpha^1\hspace{0.05cm}.</math>

*All four equations lead to the result &nbsp;$e_0 = 1$&nbsp; and &nbsp;$e_2 =\alpha^2$.

*Thus,&nbsp; with&nbsp; $\underline {y} =  (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0,\hspace{0.09cm}  \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)$ &nbsp; and &nbsp; $\underline {e}= (1,\hspace{0.09cm} 0,\hspace{0.09cm}  \alpha^2,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0)$,&nbsp; the corrected code word is:

::<math>\underline {z} = \underline {y} - \underline {e} = \underline {y} + \underline {e}=
(\alpha^1,\hspace{0.09cm} 1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm}  \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)\hspace{0.05cm}. </math>

*This is a valid code word of the&nbsp; $\text{RSC (7, 3, 5)}_8$.}}<br>
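Step&nbsp; $\rm (D)$&nbsp; can also be reproduced numerically. A sketch (same&nbsp; $\rm GF(2^3)$&nbsp; helpers as before, names are ours) that solves the first two determining equations for&nbsp; $e_0$&nbsp; and&nbsp; $e_2$&nbsp; and corrects the received word:

```python
# GF(2^3) tables: p(x) = x^3 + x + 1, alpha = 0b010
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_div(a, b):                           # a / b for b != 0
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

y = [EXP[3], 1, 0, EXP[1], EXP[2], EXP[5], EXP[5]]   # received word
s = [EXP[5], EXP[2], EXP[3], EXP[1]]                 # syndrome

# e0 + e2*alpha^2 = s0  and  e0 + e2*alpha^4 = s1
#   ->  e2 = (s0 + s1) / (alpha^2 + alpha^4)
e2 = gf_div(s[0] ^ s[1], EXP[2] ^ EXP[4])
e0 = s[0] ^ gf_mul(e2, EXP[2])

z = list(y)
z[0] ^= e0                                  # subtraction = addition = XOR in GF(2^m)
z[2] ^= e2
print(z == [EXP[1], 1, EXP[2], EXP[1], EXP[2], EXP[5], EXP[5]])   # -> True
```

The remaining two determining equations are redundant here; they serve as a consistency check on the hypothesis&nbsp; $r = 2$.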
== Fast Reed-Solomon decoding ==
<br>
The class of Reed&ndash;Solomon codes was introduced as early as 1960 by the publication&nbsp; [RS60]<ref name='RS60'>Reed, I.S.; Solomon, G.:&nbsp; Polynomial Codes over Certain Finite Fields.&nbsp; J. Siam, Vol. 8, pp. 300-304, 1960.</ref>.&nbsp; However,&nbsp; their efficient decoding only became possible one to two decades later.<br>

In the last sections we have demonstrated the so-called&nbsp; "Petersen algorithm"&nbsp; including the&nbsp; "Chien search"&nbsp; using the example of the&nbsp; $\text{RSC (7, 3, 5)}_8$,&nbsp; which can correct up to&nbsp; $t = 2$&nbsp; errors.
*The decoding process focused on setting up and solving the key equation,&nbsp; where the zeros of a degree-2 polynomial had to be found among the seven non-zero elements of&nbsp; $\rm GF(2^3)$.

*It became apparent that even this algebraic decoding involves considerable effort.<br>

For codes used in practice with large code word length&nbsp; $n$&nbsp; and high correction capability&nbsp; $t$,&nbsp; the decoding effort would explode if faster decoding algorithms had not been found.

:For example,&nbsp; for the Reed&ndash;Solomon code &nbsp; $\text{RSC (255, 223, 33)}_{256}$,&nbsp; adopted early in the ESA/NASA standard for satellite transmission,&nbsp; up to &nbsp; $t = 16$ &nbsp; zeros must be found among the&nbsp; $255$&nbsp; non-zero elements of&nbsp; $\rm GF(2^8)$&nbsp; in order to decode a single code word.&nbsp; And this in real time!<br>

Beginning in the late 1960s,&nbsp; many scientists sought faster decoding algorithms for Reed&ndash;Solomon codes:

#  In the&nbsp; [https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Massey_algorithm $\text{Berlekamp&ndash;Massey algorithm}$]&nbsp; $\rm (BMA)$,&nbsp; the key equation &nbsp; $\underline {\it \Lambda}  \cdot \underline{s }^{\rm T} = 0$ &nbsp; is represented as a feedback shift register,&nbsp; see for example&nbsp; [Mas69]<ref name ='Mas69'>Massey, J.L.:&nbsp; Shift Register Synthesis and BCH Decoding.&nbsp; IEEE Trans. on Information Theory, vol. IT-15, pp. 122-127, Jan. 1969.</ref>,&nbsp; [Fri96]<ref name ='Fri96'>Friedrichs, B.:&nbsp; Kanalcodierung &ndash; Grundlagen und Anwendungen in modernen Kommunikationssystemen.&nbsp; Berlin &ndash; Heidelberg: Springer, 1996.</ref>,&nbsp; [Bos99]<ref name ='Bos99'>Bossert, M.:&nbsp; Channel Coding for Telecommunications.&nbsp; Chichester: Wiley, 1999.</ref>.&nbsp; The problem is thus reduced to the synthesis of an autoregressive filter.&nbsp; The algorithm works much faster than the&nbsp; (more easily comprehensible)&nbsp; Petersen algorithm.<br>
# Somewhat later,&nbsp; in&nbsp; [SK+75]<ref name='SK+75'>Sugiyama, Y.; Kashara, M.; Hirasawa, S.; Namekawa, T.:&nbsp; A Method for Solving Key Equation for Decoding Goppa Codes.&nbsp; Information and Control, Vol. 27, pp. 87-99, 1975.</ref>,&nbsp; a decoding method based on the&nbsp; [https://en.wikipedia.org/wiki/Euclidean_algorithm $\text{Euclidean algorithm}$]&nbsp; was proposed.&nbsp; It computes the greatest common divisor of two integers&nbsp; $($or polynomials$)$,&nbsp; which is exploited for decoding.&nbsp; The Euclidean algorithm is comparable in speed to the&nbsp; $\rm BMA$.&nbsp; More detailed information can be found in&nbsp; [Bos99]<ref name ='Bos99'></ref> and&nbsp; [Fri96]<ref name ='Fri96'></ref>.<br>
# Other efficient decoding methods for Reed&ndash;Solomon codes operate in the frequency domain,&nbsp; using the&nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Arguments_for_the_discrete_implementation_of_the_Fourier_transform|$\text{Discrete Fourier Transform}$]]&nbsp; $\rm (DFT)$&nbsp; of length&nbsp; $n$&nbsp; in&nbsp; $\rm GF(2^m)$.<br><br>

The basic features of Reed&ndash;Solomon error correction were already developed in the 1960s.&nbsp; But to this day,&nbsp; the&nbsp; $($as fast as possible$)$&nbsp; algebraic decoding of these codes remains a highly topical field of research.
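To illustrate the idea behind the&nbsp; $\rm BMA$, the following sketch implements a compact textbook Berlekamp&ndash;Massey recursion over&nbsp; $\rm GF(2^3)$&nbsp; (this is our own formulation, not the exact algorithm from&nbsp; [Mas69]). Applied to the syndrome of the examples above, it reproduces the ELP coefficients&nbsp; $\lambda_0 = \alpha^2$&nbsp; and&nbsp; $\lambda_1 = \alpha^6$:

```python
# GF(2^3) tables: p(x) = x^3 + x + 1, alpha = 0b010
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_inv(a):
    return EXP[(-LOG[a]) % 7]

def berlekamp_massey(s):
    """Shortest LFSR C(x) = 1 + C[1]*x + ...  with  s_n = sum_i C[i]*s_(n-i)."""
    N = len(s)
    C, B = [1] + [0] * N, [1] + [0] * N     # current / previous connection polynomial
    L, m, b = 0, 1, 1
    for n in range(N):
        d = s[n]                            # discrepancy
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], s[n - i])
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = gf_mul(d, gf_inv(b))
        for i in range(N + 1 - m):
            C[i + m] ^= gf_mul(coef, B[i])  # C(x) -= (d/b) * x^m * B(x)
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

s = [EXP[5], EXP[2], EXP[3], EXP[1]]        # syndrome of the examples above
L, C = berlekamp_massey(s)
print(L, C[1] == EXP[6], C[2] == EXP[2])    # -> 2 True True  (lambda_1, lambda_0)
```

The register length&nbsp; $L = 2$&nbsp; equals the number of symbol errors&nbsp; $r$, and the feedback taps are exactly the ELP coefficients: the expensive matrix hypotheses of the Petersen algorithm are replaced by one pass over the syndrome.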
  
== Exercises for the chapter ==
<br>
[[Aufgaben:Exercise_2.12:_Decoding_at_RSC_(7,_4,_4)_to_Base_8|Exercise 2.12: Decoding at RSC (7, 4, 4) to Base 8]]

[[Aufgaben:Exercise_2.12Z:_Reed-Solomon_Syndrome_Calculation|Exercise 2.12Z: Reed-Solomon Syndrome Calculation]]

[[Aufgaben:Exercise_2.13:_Decoding_at_RSC_(7,_3,_5)_to_Base_8|Exercise 2.13: Decoding at RSC (7, 3, 5) to Base 8]]

[[Aufgaben:Exercise_2.14:_Petersen_Algorithm|Exercise 2.14: Petersen Algorithm]]

==References==
<references/>

{{Display}}

Latest revision as of 13:43, 24 November 2022

Block diagram and requirements for RS error correction


As in the chapter  "Decoding at the Erasure Channel"  we consider a transmission system with Reed–Solomon coding characterized by the two code parameters  $n=2^m-1$  and  $k$.  With the generator matrix  $\boldsymbol{\rm G}$  the relation between the information word  $\underline {u}$  and the code word  $\underline {c}$  is:

Transmission system with Reed-Solomon coding/decoding and $m$–BSC
$$\underline {c} = {\rm enc}(\underline {u}) = \underline {u} \cdot { \boldsymbol{\rm G}} \hspace{0.3cm} {\rm with}$$
$$ \underline {u} = (u_0, u_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_{k-1})\hspace{0.05cm},$$
$$ \underline {c} = (c_0, c_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, c_{n-1}) \hspace{0.05cm}.$$

The symbols  $u_i$  and  $c_i$  originate from the field  ${\rm GF}(q)$  with 

$$q=n+1=2^m.$$

They are representable by  $m$  binary symbols  ("bits").

A comparison of the upper block diagram with the corresponding  $\text{block diagram for Reed-Solomon error detection}$  shows:

  1. The main difference is in the discrete channel model  $($highlighted in green$)$.  Instead of the erasure channel  ("$m$–BEC")  now the  "$m$–BSC"  is considered.  For each bit of the code symbol  $c_i$  the  $\text{Binary Symmetric Channel}$  $\rm (BSC)$  is applied.  A bit error with respect to the  $i$–th bit of the code word results in  $y_i \ne c_i$.
  2. In the  "last chapter"  uncertain bits were already marked by erasures  $\rm E$.  The task of the  "code word finder"  $\rm (CWF)$  was to reconstruct the decoding result  $\underline {z}$  from the garbled received word  $\underline {y}$.
  3. If the number  $e$  of erasures is smaller than the minimum distance  $d_{\rm min}$,  this succeeds and we get   $\underline {z} = \underline {c}$.   Otherwise,  the code word finder reports that it cannot decode the current received word  $\underline {y}$ .   A wrong decision  $(\underline {z} \ne \underline {c})$  was excluded at the BEC.
  4. In this chapter,  the first decoder block is now referred to as  "code word estimator"  $\rm (CWS)$.  The naming is to make clear that due to the  $m$–BSC model,  wrong decisions  $(\underline {z} \ne \underline {c})$  are inevitable,  namely when multiple symbol errors distort the received word  $\underline {y}$  to a valid code word.

$\text{Conclusions:}$ 

  • The decoder's task is to determine its output vector  $\underline {v}$  so that it matches the information word  $\underline {u}$  "as well as possible".  More precisely formulated:
\[{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.\]
  • Because of deterministic mapping   $\underline{c} = {\rm enc}(\underline{u})$   and   $\underline{v} = {\rm enc}^{-1}(\underline{z})$   holds in the same way:
\[{ \rm Pr(block\hspace{0.15cm}error)} = { \rm Pr}( \underline{z} \ne \underline{c}) \stackrel{!}{=} { \rm Minimum}\hspace{0.05cm}.\]


The two blocks highlighted in yellow are not considered further.  The focus of the considerations is now the  "code word estimator"  $\rm (CWS)$  highlighted in red.

Possible code word estimators for Reed-Solomon decoding


Definition of the error vector  $\underline{e}$

The right sketch of this graphic illustrates the task of the  $\rm CWS$,  where the channel model  "$m$–BSC"  is replaced by the additive  »error vector«

$$\underline{e} = \underline{y} - \underline{c}.$$

The sketch on the left illustrates the relationship between these vectors.


This task is to be clarified by an example.

$\text{Example 1:}$  Let all symbols be elements of   $\rm GF(2^3) = \{0,\ 1,\ \alpha^1,\ \alpha^2,\ \alpha^3,\ \alpha^4,\ \alpha^5,\ \alpha^6\}$.   The adjacent table can be used for conversion

$\rm GF(2^3)$  representations
  • between the coefficient representation  $($with the order  $k_2$,  $k_1$,  $k_0)$
  • and the exponent representation  $($as powers of the primitive element  $\alpha).$


In this example,  the code word and received word are in coefficient notation:

\[\underline{c} = \Big ( (010), (001), (100),(010),(100),(111),(111)\Big )\hspace{0.05cm},\]
\[\underline{y} =\Big ( (011), (001), (000),(010),(100),(111),(111)\Big )\hspace{0.05cm}.\]

⇒   This results in the following error vector  $\underline{e} = \underline{y} - \underline{c}$:

\[\underline{e} \hspace{0.05cm} = \hspace{0.05cm} \Big ( (001), (000), (100), (000),(000),(000),(000)\Big )\hspace{0.05cm}.\]

⇒   Converted to the exponential representation,  we get:

\[\underline{c} = \Big ( \alpha^1, \hspace{0.09cm}1\hspace{0.09cm}, \alpha^2,\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},\]
\[\underline{y} =\Big ( \alpha^3, \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm},\alpha^1,\alpha^2,\alpha^5,\alpha^5\Big )\hspace{0.05cm},\]
\[\underline{e} = \Big ( \hspace{0.09cm}1\hspace{0.09cm}, \hspace{0.09cm}0\hspace{0.09cm}, \hspace{0.05cm}\alpha^2,\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm},\hspace{0.12cm}0\hspace{0.12cm}\Big )\hspace{0.05cm}.\]
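This arithmetic is easy to verify in a few lines of code. The sketch below (an illustration added here, not part of the original course text) encodes each  ${\rm GF}(2^3)$  element as the integer value of its coefficient triple; the table's assignment implies the relation  $\alpha^3 = \alpha + 1$  (primitive polynomial  $x^3+x+1)$,  and addition, which equals subtraction in  ${\rm GF}(2^m)$,  becomes a bitwise XOR:

```python
# Sketch of Example 1: GF(2^3) elements as integers 0..7 (coefficient triples k2 k1 k0).
# Assumes the primitive polynomial x^3 + x + 1, i.e. alpha^3 = alpha + 1.
EXP = [1, 2, 4, 3, 6, 7, 5]              # alpha^i in coefficient form, i = 0..6
LOG = {v: i for i, v in enumerate(EXP)}  # exponent of each nonzero element

def show(v):
    """Render a GF(8) element as '0', '1' or 'a^i'."""
    return "0" if v == 0 else ("1" if v == 1 else f"a^{LOG[v]}")

c = [0b010, 0b001, 0b100, 0b010, 0b100, 0b111, 0b111]   # code word
y = [0b011, 0b001, 0b000, 0b010, 0b100, 0b111, 0b111]   # received word
e = [yi ^ ci for yi, ci in zip(y, c)]                    # e = y - c = y + c (XOR)

print([show(v) for v in e])   # ['1', '0', 'a^2', '0', '0', '0', '0']
```

The printed error vector matches the exponent representation  $\underline{e} = (1,\ 0,\ \alpha^2,\ 0,\ 0,\ 0,\ 0)$  derived above.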


Note:   The task of the code word estimator  $\rm (CWS)$  is to find the most probable code word for  $\underline{y}$  and to pass its result  $\underline{z} = \underline{c}_i$  to the subsequent mapping.

There are several ways to do this:

  1.   Hard Decision Maximum Likelihood Decoding  $\text{(HD–MLD)}$,
  2.   Bounded Distance Decoding  $\text{(BDD)}$,
  3.   Decoding   "over half the minimum distance".


Hard Decision Maximum Likelihood Decoding   $\text{(HD–MLD)}$:

One chooses from all possible Reed–Solomon code words  $\underline{c}_i$   $($of which there are  $q^k$  in total$)$   the one with the least  $\text{Hamming distance}$  to the received word  $\underline{y}$.  Result:

\[\underline{z} = {\rm arg} \min_{\underline{c}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}_{\rm RS}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{c}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
  • A decision is always made even if the number  $r$  of symbol errors is larger than the correction capability  $t$  of the code.  However,  in such a case the decoding result is very uncertain.
  • It should be mentioned again that maximum likelihood decoding always decides.  Decoding failure is impossible.  But of course there are also wrong decisions.
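The minimum-distance rule of HD–MLD can be sketched generically. The codebook in the usage line below is a hypothetical two-word toy code (not a Reed–Solomon code); enumerating all  $q^k$  RS code words would follow the same pattern:

```python
def hamming_distance(a, b):
    """Number of symbol positions in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

def hd_mld(y, codebook):
    """Hard-decision ML decoding: return the code word closest to y
    in Hamming distance.  Always decides, even for more than t errors."""
    return min(codebook, key=lambda c: hamming_distance(y, c))

# Hypothetical toy codebook for illustration only:
codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(hd_mld((1, 1, 0, 1), codebook))   # (1, 1, 1, 1)
```

Note that `min` always returns some code word, which mirrors the statement above: a decoding failure is impossible with HD–MLD, but wrong decisions are not.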


Bounded Distance Decoding  $\text{(BDD)}$:

If the number  $r$  of symbol errors in the received word  $\underline{y}$  is not greater than the correction capability   $t = ⌊(d_{\rm min}- 1)/2⌋$   of the code,  one can correct the  $r$  symbol errors completely.  However,  it is also true:

  • The case  $r > t$  leads to an abort of the decoding process with no result.   In other words:
  • Only those received words are decoded that lie within a sphere of radius  $t$  around a code word;  the code word at the center of this sphere is the decoding result.
  • Other received words are marked as undecodable,  for example as  "erasure".


Decoding over half the minimum distance:

Here also in the case  $r > t$  an attempt is made to decode the code word.  However,  in contrast to  $\text{HD–MLD}$,  which also decodes beyond half the minimum distance,  a decoding failure is not per se excluded here.

For the remainder of this chapter,  we will deal exclusively with  "Bounded Distance Decoding".  The reason for this is the enormous complexity of  "Maximum Likelihood Decoding"  proportional to  $q^{n-k}$.

== Bounded Distance Decoding Procedure ==


In the following,  the individual steps of the  "bounded distance decoding"  $\rm (BDD)$  algorithm are described briefly and in a recipe-like manner.  In the next sections,  the individual points will be dealt with in more detail and the procedure will be illustrated using typical examples.


$\rm (A)$   $\text{Calculation and evaluation of syndrome}$   ⇒   $\text{Detailed description}$

  1. Calculate from the received word  $\underline{y}$  and the parity-check matrix  $\boldsymbol{\rm H }$  of the code the syndrome  $\underline {s} = \underline {y} \cdot \boldsymbol{\rm H }^{\rm T}$.
  2. If  $\underline {s} =\underline {0}$,  set the BDD output  $\underline {z} =\underline {y}$  and terminate the decoding process for that received word.
  3. Otherwise,  set the parameter  $r = 1$  and continue with step  $\rm (B)$ .


$\rm (B)$   $\text{Determine the actual symbol error count }\ r$  ⇒   $\text{Detailed description}$

  1. Create and check the equations  $\underline {\it \Lambda} _l \cdot\underline {s}^{\rm T} = 0$   for   $l = 1,$ ... , $2 \cdot t -r$  assuming that the received word contains exactly   $r$   symbol errors.
  2. $\underline {\it \Lambda} _l $  denotes the generalized  $\text{error locator polynomial}$  $\text{(ELP)}$  coefficient vectors.
  3. The parameter  $t$  denotes the correctability of the code.  For the Reed–Solomon codes,  uniformly  $t = ⌊(n-k)/2 ⌋$.
  4. If there is a unique solution,  then continue with step  $\rm (C)$.  
    In the received vector  $\underline{y}$  there are then indeed exactly  $r$   symbols falsified and in the error vector  $\underline{e}$  there are  $r$   entries not equal to $0$.
  5. Otherwise increase  $r$   by  $1$.   If  $r ≤ t$   ⇒   repeat step  $\rm (B)$  from the beginning:  
    The previously assumed  $r$  was obviously too small.  Therefore now a new attempt with larger  $r$.
  6. If the new  $r$  is greater than the correction capability  $t$  of the code,  the current word cannot be decoded.  End the decoding attempt with an  "error message".


$\rm (C)$   $\text{Localization of the }\ r \text{ error locations}$  ⇒   $\text{Detailed description}$

  1. Create the  error locator polynomial   ${\it \Lambda}(x)$   and find its  $r$  zeros in  ${\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.
  2. An error at location  $i$  is present whenever  ${\it \Lambda}(\alpha^{i}) = 0$ .


$\rm (D)$   $\text{Determination of }\ r \text{ error values and correction}$  ⇒   $\text{Detailed description}$

  1. Now the  $r$  error locations are known.  Replacing the falsified symbols in the received vector  $\underline{y}$  with erasures   ⇒   $y_i = \rm E$,  if  $e_i ≠ 0$,
    we find the result  $\underline{z}$  according to the chapter  "Reed-Solomon Decoding at the Erasure Channel".
  2. Or:  From the equation  $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$  one arrives at a linear equation system for the faulty symbols  $(e_i \ne 0)$  taking advantage of the error-free sites  $(e_i = 0)$.

== Step (A): Evaluation of the syndrome in BDD ==


As shown in section  "Principle of Syndrome Decoding",  the syndrome  $\underline{s}$  can be used to decode a linear code.

  • With the received word  $\underline{y}$   equal to code word  $\underline{c}$  plus error vector  $\underline{e}$  applies:
\[\underline {s} = \underline {y} \cdot { \boldsymbol{\rm H }}^{\rm T}= \underline {c} \cdot { \boldsymbol{\rm H }}^{\rm T}+ \underline {e} \cdot { \boldsymbol{\rm H }}^{\rm T} \hspace{0.05cm}.\]
  • Since always   $\underline {c} \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline {0}$   holds,  it follows from   $\underline{s}= \underline{0}$   also   $\underline {e} \cdot { \boldsymbol{\rm H }}^{\rm T} =\underline{0}$.   That is:
  • With very high probability,  from   $\underline{s}= \underline{0}$   it is also possible to infer   $\underline{e}= \underline{0}$   and thus also the correct decoding result  $\underline{z}= \underline{y}$.  The decoding process would be finished.
  • But there are also error patterns   $\underline{e} \ne \underline{0}$   that lead to the syndrome   $\underline{s}= \underline{0}$.   Such patterns certainly contain more than  $t$  symbol errors. 
  • So in this case,  too,  it makes sense to abort the decoding process.  All subsequent calculations would not lead to success either.

$\text{Example 2:}$  This and the following examples in the next sections are always based on the Reed–Solomon code  $\text{RSC (7, 3, 5)}_8$  so that the conversions given in the graph in  $\rm GF(2^3)$  can be used.  The received words are:

\[\underline{y}=\big (\alpha^3,\hspace{0.08cm} 1,\hspace{0.08cm} 0, \hspace{0.08cm}\alpha^1, \hspace{0.08cm} \alpha^2, \hspace{0.08cm} \alpha^5, \hspace{0.08cm} \alpha^5 \big).\]
\[\underline {s} = \underline {y} \cdot { \boldsymbol{\rm H } }^{\rm T}= \begin{pmatrix} \alpha^3, \hspace{0.05cm}1, \hspace{0.05cm}0, \hspace{0.05cm}\alpha^1, \hspace{0.05cm}\alpha^2, \hspace{0.05cm}\alpha^5, \hspace{0.05cm}\alpha^5 \end{pmatrix}\cdot \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 \\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 \\ \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 \\ \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 \\ \alpha^5 & \alpha^3 & \alpha^1 & \alpha^6 \\ \alpha^6 & \alpha^5 & \alpha^4 & \alpha^3 \end{pmatrix}. \]
  • This vector–matrix multiplication gives the result:
\[\underline {s} \hspace{-0.05cm} = \hspace{-0.05cm} (\alpha^3 , \alpha^3 , \alpha^3 , \alpha^3) + (\alpha^1 , \alpha^2 , \alpha^3 , \alpha^4) + (0,0,0,0) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,1,\alpha^3,\alpha^6)+(\alpha^6,\alpha^3,1,\alpha^4)\hspace{-0.05cm}+\hspace{-0.05cm}(\alpha^3,\alpha^1,\alpha^6,\alpha^4) \hspace{-0.05cm}+\hspace{-0.05cm} (\alpha^4,\alpha^3,\alpha^2,\alpha^1)\]
\[\Rightarrow \hspace{0.3cm} \underline {s} = \text{...} \hspace{0.05cm}= (\alpha^5,\alpha^2,\alpha^3,\alpha^1) \hspace{0.05cm}.\]
  • So the received word was falsified.   Otherwise the result would have been  $\underline{s}= \underline{0} = (0, 0, 0, 0)$.


The description of the decoding process for the  $\text{RSC (7, 3, 5)}_8$  is continued in  $\text{Example 4}$.
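The syndrome computation of Example 2 can be reproduced with a short script. This is a sketch assuming the same integer representation of  ${\rm GF}(2^3)$  as before  $($coefficient triples as integers,  $\alpha^3 = \alpha + 1)$  and the Vandermonde structure of the parity-check matrix shown above,  $H_{j,i} = \alpha^{i \hspace{0.03cm}\cdot \hspace{0.03cm} j}$  for rows  $j = 1, \text{...}, 4$:

```python
# Sketch of the syndrome computation in Example 2 for RSC(7,3,5)_8.
# GF(8) with alpha^3 = alpha + 1; elements are integers 0..7, addition is XOR.
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiplication in GF(8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 7]

# Parity-check matrix of RSC(7,3,5)_8: H[j][i] = alpha^(i*(j+1)), j = 0..3, i = 0..6
H = [[EXP[(i * (j + 1)) % 7] for i in range(7)] for j in range(4)]

y = [3, 1, 0, 2, 4, 7, 7]   # received word (a^3, 1, 0, a^1, a^2, a^5, a^5)

# s = y * H^T: one XOR-accumulated inner product per row of H
s = []
for row in H:
    acc = 0
    for yi, hi in zip(y, row):
        acc ^= gmul(yi, hi)
    s.append(acc)

print([LOG.get(v, "-") for v in s])   # exponents [5, 2, 3, 1] -> s = (a^5, a^2, a^3, a^1)
```

The nonzero syndrome confirms that the received word was falsified.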


== Error Locator Polynomial - Definition and Properties ==


After the syndrome calculation in step  $\rm (A)$  with the result  $\underline{s} \ne \underline{0}$  we know,

  • that the received word  $\underline{y}$   does not match the code word  $\underline{c}$,  respectively
  • that the error vector  $\underline{e} = (e_0, \hspace{0.05cm}e_1, \hspace{0.05cm}\text{ ...}\hspace{0.05cm} , e_{n-1})$  certainly includes elements not equal to  "$0$". 

However,  we know neither how many symbols were falsified   $(0 < r ≤ n)$,   nor can we name the positions of the error locations  $(e_i ≠ 0)$  in the error vector  $\underline{e}$.  An approach to this task is provided by the so-called  "error locator polynomial"  introduced by  $\text{William Wesley Peterson}$  ⇒   see [Pet60][1].  It is also known as the  "key equation".

$\text{Definition:}$  Let it be known that exactly  $r$  elements of the error vector  $\underline{e}$  are non-zero,  recognizable by  $\text{Hamming weight}$  $w_{\rm H}(\underline{e}) = r$.

  • Also let the quantity   ${I}_{\rm EP}$   of error positions be known:
\[I_{\rm EP} = \{ i \hspace{0.1cm}\vert \hspace{0.1cm} e_i \ne 0,\hspace{0.1cm} 0 \le i < n \}\hspace{0.05cm}.\]
  • Then for the  »Error Locator Polynomial«  $\rm (ELP)$:
\[{\it \Lambda}(x)=x \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ].\]


⇒   From the error locator polynomial  we know:

  • Because of the factor  $x$  in front of the product sign:  ${\it \Lambda}(x= 0) = 0$.
  • Other  $r$  zeros result for  $x = \alpha^{i}$   with   $i \in I_{\rm EP}$,  that is,  for all error positions.
  • In contrast,  the  error locator polynomial for  $i ∉ I_{\rm EP}$   ⇒   $e_i = 0$  has no zero:   ${\it \Lambda}(x= \alpha^{i}) \ne0$.

⇒   So we search for the  $r$  non–trivial zeros of  ${\it \Lambda}(x)$  with argument  $x ∈ {\rm GF}(q) \hspace{-0.05cm}\setminus \hspace{-0.05cm} \{0\}$.  If we succeed,

  • we know the  $r$  error positions,
  • but not yet the actual error values  $e_i ∈ {\rm GF}(q)$.

$\text{Example 3:}$  Let $n=7$   ⇒   $q=8$,  $r=2$  and  $I_{\rm EP} = \{2, \hspace{0.05cm}4\}$:

  • Thus for the  "error locator polynomial"  in  ${\rm GF}(2^3)$  one obtains:
\[{\it \Lambda}(x)=x \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}-\hspace{-0.05cm}\alpha^4)= x \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^2) \cdot (x\hspace{-0.05cm}+\hspace{-0.05cm}\alpha^4) =x \cdot \big [x^2 \hspace{-0.05cm}+ \hspace{-0.05cm}(\alpha^2 + \alpha^4) \cdot x + \alpha^6\big ] \]
\[\Rightarrow \hspace{0.3cm} {\it \Lambda}(x)= x \cdot \big [\alpha^6 + \alpha \cdot x + x^2\big ]\hspace{0.05cm}.\]
  • The other zeros $($except for  $x = 0)$  arise here for  $x = \alpha^2$  and  $x = \alpha^4$,  as a calculation shows:
\[{\it \Lambda}(x = \alpha^2)= \alpha^2 \cdot \big [\alpha^6 + \alpha \cdot \alpha^2 + (\alpha^2)^2\big ] = \alpha^2 \cdot \big [\alpha^6 + \alpha^3 + \alpha^4 \big ]= 0\hspace{0.05cm},\]
\[ {\it \Lambda}(x = \alpha^4)= \alpha^4 \cdot \big [\alpha^6 + \alpha \cdot \alpha^4 + (\alpha^4)^2\big ] =\alpha^4 \cdot \big [\alpha^6 + \alpha^5 + \alpha \big ]= 0\hspace{0.05cm}.\]
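The zero search of this example can be sketched as a small program, using the same assumed integer representation of  ${\rm GF}(2^3)$  $($coefficient triples as integers,  $\alpha^3 = \alpha + 1)$:

```python
# Sketch of Example 3: zeros of Lambda(x) = x*(x + a^2)*(x + a^4) over GF(8).
EXP = [1, 2, 4, 3, 6, 7, 5]              # alpha^i, with alpha^3 = alpha + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiplication in GF(8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 7]

def elp(x):
    """Lambda(x) = x * (x + a^2) * (x + a^4); '+' in GF(2^m) is XOR."""
    return gmul(x, gmul(x ^ EXP[2], x ^ EXP[4]))

# Search the nontrivial zeros among alpha^0 .. alpha^6:
zeros = [i for i in range(7) if elp(EXP[i]) == 0]
print(zeros)   # [2, 4]  ->  the error positions I_EP = {2, 4}
```

As stated above, the zero at  $x=0$  is trivial; only the exponents of the other zeros identify the error positions.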


For further derivation,  we always assume the  $\text{RSC (7, 3, 5)}_8$  with the following parameter values:

$$n=7, \hspace{0.3cm}k = 3, \hspace{0.3cm}d_{\rm min} = 5 \ \Rightarrow \ t = ⌊(d_{\rm min} -1)/2⌋ = 2.$$
  • Let the number of symbol errors be  $r = t = 2$.  Thus the system of equations to be solved with the auxiliary variables  $L_i = {\it \Lambda}(\alpha^{i})$:
\[L_0 = {\it \Lambda }(\alpha^0) = \alpha^0 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^0)^1 + (\alpha^0)^2 \right ] = {\it \lambda}_0 \cdot 1 + {\it \lambda}_1 \cdot 1 + 1 \hspace{0.05cm},\]
\[L_1 = {\it \Lambda }(\alpha^1) =\alpha^1 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^1)^1 + (\alpha^1)^2 \right ] = {\it \lambda}_0 \cdot \alpha^1+ {\it \lambda}_1 \cdot \alpha^2 + \alpha^3 \hspace{0.05cm},\]
\[...\]
\[ L_6 = {\it \Lambda }(\alpha^6) = \alpha^6 \cdot \left [ {\it \lambda}_0 + {\it \lambda}_1 \cdot (\alpha^6)^1 + (\alpha^6)^2 \right ] = {\it \lambda}_0 \cdot \alpha^6 + {\it \lambda}_1 \cdot \alpha^{12} + \alpha^{18} \hspace{0.05cm}.\]
  • In vector form,  this system of equations with the auxiliary vector is   $\underline{L} = (L_0, \hspace{0.05cm}L_1, \hspace{0.05cm}L_2,\hspace{0.05cm}L_3,\hspace{0.05cm}L_4,\hspace{0.05cm}L_5,\hspace{0.05cm}L_6)$:
\[\underline {L}^{\rm T}=\begin{pmatrix} L_0\\ L_1\\ L_2\\ L_3\\ L_4\\ L_5\\ L_6 \end{pmatrix} \hspace{0.15cm} = \hspace{0.15cm} \begin{pmatrix} 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 \\ \alpha^2 & \alpha^4 & \alpha^6 \\ \alpha^3 & \alpha^6 & \alpha^9 \\ \alpha^4 & \alpha^8 & \alpha^{12}\\ \alpha^5 & \alpha^{10} & \alpha^{15}\\ \alpha^6 & \alpha^{12} & \alpha^{18} \end{pmatrix} \hspace{0.15cm}\cdot \hspace{0.15cm} \begin{pmatrix} {\lambda}_0\\ {\lambda}_1\\ 1 \end{pmatrix} \hspace{0.05cm}.\]
  • We now expand the ELP coefficient vector  $\underline {\it \Lambda }$  by appending zeros to the length  $n-k$.  Thus,  in the considered example,  we obtain   ${\it \Lambda } = ( \lambda_0,\hspace{0.05cm}\lambda_1,\hspace{0.05cm}1, \hspace{0.05cm}0)$   and the following vector equation:
\[\underline {L}^{\rm T} \hspace{0.15cm} = \hspace{0.15cm} \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^8\\ \alpha^3 & \alpha^6 & \alpha^9 & \alpha^{12}\\ \alpha^4 & \alpha^8 & \alpha^{12} & \alpha^{16}\\ \alpha^5 & \alpha^{10} & \alpha^{15} & \alpha^{20}\\ \alpha^6 & \alpha^{12} & \alpha^{18} & \alpha^{24} \end{pmatrix} \hspace{0.15cm}\cdot \hspace{0.15cm} \begin{pmatrix} {\lambda}_0\\ {\lambda}_1\\ 1\\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • With this we have achieved:
  1. The  $7× 3$  matrix has now become a  $7× 4$ matrix.
  2. The fourth column can actually be filled arbitrarily,  since all elements are multiplied by zeros.
  3. The addition chosen here gives the transposed  $\text{parity-check matrix}$  of  $\text{RSC (7, 3, 5)}_8$.
  4. Thus,  one can write for the last vector equation:
\[\underline {L}^{\rm T} = { \boldsymbol{\rm H }}^{\rm T} \cdot \underline {\it \Lambda }^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm}\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \hspace{0.05cm}.\]
  • But since for the error locations  $(e_i ≠ 0)$  always  $L_i = {\it \Lambda}(\alpha^{i}) = 0$  holds,  the product  $L_i \cdot e_i \equiv 0$  and one obtains as determining equation for the zeros of the error locator polynomial:
\[\underline {L} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }} \cdot \underline {e}^{\rm T} = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {\it \Lambda } \cdot \underline {s}^{\rm T} = 0 \hspace{0.05cm}.\]

$\text{Important intermediate result:}$  The coefficients  $\lambda_0, \hspace{0.05cm}\lambda_1, \hspace{0.05cm}\text{...}\hspace{0.05cm} , \lambda_{r-1}$  of the error locator polynomial   ${\it \Lambda}(x)$  must always satisfy the vector equation  $\underline {\it \Lambda } \cdot \underline {s}^{\rm T} = 0 $.

  • Here  $\underline {\it \Lambda }$  denotes the  $\rm ELP$  coefficient vector.
  • $\underline {s } = \underline {y }\cdot \boldsymbol{\rm H }^{\rm T} $  denotes the  $\text{syndrome}$.


== Step (B): Set up/evaluate the ELP coefficient vector ==


Before we can consider this intermediate result for step  $\rm (B)$  some generalizations need to be made. 

Shifted ELP coefficient vectors

The reason for this is:

  • The relationship   $\underline {\it \Lambda } \cdot \underline {s}^{\rm T} = 0 $   yields only a single determining equation.
  • Thus the problem can be solved for  $r = 1$  if one is sure that indeed only one symbol has been falsified.
  • If one is not sure of this,  but nevertheless performs the calculation for  $r = 1$,  one still needs a second equation  (or even several)  to verify the assumption.

The property of the  $\text{error locator polynomial}$ that  ${\it \Lambda}(\alpha^{i}) = 0$  only for  $e_i ≠ 0$  $($that means:  the $i$–th symbol is falsified$)$  is preserved when multiplying  ${\it \Lambda}(x)$  by arbitrary powers of  $x$. 

Each multiplication by  $x$  implies a shift of one place to the right for the ELP coefficient vector.

$\text{Definition:}$  The  $\text{generalized ELP coefficient vectors}$   $\underline {\it \Lambda }_l$  result from successive shifts of  $\underline {\it \Lambda }_1$:

\[{\it \Lambda}_l(x)=x^l \cdot \prod_{i\hspace{0.05cm}\in \hspace{0.05cm} I_{\rm EP} }(x-\alpha^i) =x^l \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+\ldots+{\it \lambda}_{r-1} \cdot x^{r-1}+x^r \big ]\hspace{0.05cm}.\]

In this defining equation,  $\underline {\it \Lambda }_1$  corresponds to the previous  $\underline {\it \Lambda }$.


The upper graph shows the occupancy under the assumption of  $r$  error locations in the error vector  $\underline {e}$  for

  • $r=1$   in the left panel  (with blue background),
  • $r=2$   in the middle area  (with red background),
  • $r=3$   in the right area  (with green background).


One recognizes:

  1. The length of all  $\underline {\it \Lambda }_l$  is always  $n-k$.  Each vector contains  $r$  coefficients  $\lambda_0$,  $\lambda_1$, ... ,  $\lambda_{r-1}$   ⇒   $0 ≤ i < r$,  and one  "1".
    The remainder of each vector is padded with zeros.
  2. For each  $r$  there are exactly  $n-k-r$  coefficient vectors   $\underline {\it \Lambda }_l$,  where   $\underline {\it \Lambda }_l$   results from   $\underline {\it \Lambda }_{l-1} $  always by right shifting by one position.
    The vector  $\underline {\it \Lambda }_{n-k-r}$  always ends with a  $1$.
  3. The equation system   $\underline {\it \Lambda }_l \cdot \underline {s}^{\rm T} = 0 $   therefore leads to  $n-k-r$  equations.
    The chosen approach for  $r$  is only correct if all equations lead to the same results for  $\lambda_0$, ... , $\lambda_{r-1}$ .
  4. If this is not the case,  one has to increase  $r$  and thus work with a new equation system,  until a unique solution results from all equations for the current  $r$.  If the final  $r$  is greater than the correctability  $t$  of the code,  the calculation can be terminated.  The pending received word  $\underline {y}$  is then not decodable.



$\text{Example 4:}$  The conditions stated in  $\text{Example 2}$  still apply:

  1. There,  due to the syndrome   $\underline {s} = (\alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1) ≠ \underline {0}$,   it was demonstrated that the received vector  $\underline {y}$  was falsified   ⇒   error vector   $\underline {e} \ne \underline{0}$.
  2. Not known,  however,  is the actual symbol error count  $r$.
  • Assuming a single falsified symbol   $(r= 1)$   we obtain the following system of equations  $($written here in matrix form$)$:
\[\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= \begin{pmatrix} \lambda_0 & 1 & 0 & 0 \\ 0 & \lambda_0 & 1 & 0 \\ 0 & 0 & \lambda_0 & 1 \end{pmatrix} \cdot \begin{pmatrix} \alpha^5\\ \alpha^2\\ \alpha^3\\ \alpha \end{pmatrix} \stackrel{!}{=} \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • This equation system yields three different solutions for  $\lambda_0$,  so the assumption  $r = 1$  cannot be correct:
\[\text{line 1:}\hspace{0.5cm}\alpha^5 \cdot \lambda_0 + \alpha^2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{2-5}= \alpha^{-3}= \alpha^{4}\hspace{0.05cm},\]
\[\text{line 2:}\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{3-2}= \alpha\hspace{0.05cm},\]
\[\text{line 3:}\hspace{0.5cm}\alpha^3 \cdot \lambda_0 + \alpha^1 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \lambda_0 = \alpha^{1-3}= \alpha^{-2} = \alpha^{5} \hspace{0.05cm}.\]
  • Therefore,  we now set up another equation system,  assuming   $r = 2$:
\[\big ({ \boldsymbol{\it \Lambda } }_l \big) \cdot \underline {s} ^{\rm T}= \begin{pmatrix} \lambda_0 & \lambda_1 & 1 & 0 \\ 0 & \lambda_0 & \lambda_1 & 1 \end{pmatrix} \cdot \begin{pmatrix} \alpha^5\\ \alpha^2\\ \alpha^3\\ \alpha \end{pmatrix} \stackrel{!}{=} \begin{pmatrix} 0\\ 0 \end{pmatrix}\hspace{0.05cm}. \]
  • This leads to two equations for  $\lambda_0$  and  $\lambda_1$:
\[\alpha^5 \cdot \lambda_0 + \alpha^2 \cdot \lambda_1 + \alpha^3 = 0 \hspace{0.05cm},\hspace{0.5cm}\alpha^2 \cdot \lambda_0 + \alpha^3 \cdot \lambda_1 + \alpha^1 = 0 \hspace{0.05cm}.\]
  • This equation system is now clearly solvable.  One gets   $\lambda_0 = \alpha^2$   and   $\lambda_1 = \alpha^6$.  This means:
  1. The assumption that indeed  $r = 2$  positions of the received vector  $\underline {y}$  have been falsified is correct.
  2. However,  it is not yet known which positions have been falsified.
  3. These are not the symbol positions  2  and  6  $($the exponents of  $\lambda_0 = \alpha^2$  and  $\lambda_1 = \alpha^6)$,  but the positions  0  and  2,  as shown in the following  $\text{Example 5}$  (next section).
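The two-stage search of Example 4 can be sketched by brute force over all candidate coefficient pairs. For the small field  ${\rm GF}(8)$  this is perfectly feasible; it is a didactic sketch, not an efficient solver, and uses the same assumed integer representation as before:

```python
# Sketch of step (B) in Example 4: find (lambda0, lambda1) with
# Lambda_l * s^T = 0 for l = 1, 2 by exhaustive search over GF(8).
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiplication in GF(8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 7]

def dot(row, vec):
    """Inner product over GF(8): XOR-sum of elementwise products."""
    acc = 0
    for a, b in zip(row, vec):
        acc ^= gmul(a, b)
    return acc

s = [7, 4, 3, 2]   # syndrome (a^5, a^2, a^3, a^1) from Example 2

sols = [(l0, l1) for l0 in range(8) for l1 in range(8)
        if dot([l0, l1, 1, 0], s) == 0 and dot([0, l0, l1, 1], s) == 0]
print([(LOG[l0], LOG[l1]) for l0, l1 in sols])   # [(2, 6)] -> lambda0 = a^2, lambda1 = a^6
```

The search confirms that the  $r=2$  system has exactly one solution, which is the criterion for accepting the assumed symbol error count.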



== Step (C): Localization of the error positions ==


After processing step  $\rm(B)$,  the following are known:

  1. the number  $r$  of error locations  $e_i ≠ 0$  in the vector  $\underline {e} = (e_0, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_i, \hspace{0.05cm}\text{... }\hspace{0.05cm}, e_{n-1})$,
  2. the coefficients  $\lambda_0, \hspace{0.05cm}\text{... }\hspace{0.05cm} , \lambda_{r-1}$  of the error locator polynomial.

Now the set of error positions has to be determined:   $I_{\rm EP} = \{ i \hspace{0.1cm}| \hspace{0.1cm} e_i \ne 0,\hspace{0.3cm} 0 \le i < n \}\hspace{0.05cm}.$

There are two ways to do this  $($both methods are used in the following example$)$:

  • the so-called  "Chien search",  in which one determines the sought zeros by inserting all possible code symbols except the zero symbol   ⇒   $(\alpha^0, \text{... }, \alpha^i, \text{... },\alpha^{n-1})$  into the error locator polynomial,
  • the evaluation of the equation   $\underline {L} = (L_0, \text{... }, L_i, \text{... },L_{n-1} ) = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H }}$   with the abbreviation  $L_i = {\it \Lambda}(\alpha^i)$.


$\text{Example 5:}$  In  $\text{Example 4}$,  under the boundary conditions stated in  $\text{Example 2}$,  it was determined that

  1. $r= 2$  symbol errors are present,  and
  2. the ELP coefficients are  $\lambda_0 = \alpha^2$  and  $\lambda_1 = \alpha^6$.


This results in the following error locator polynomial:

\[{\it \Lambda}(x)=x \cdot \big [{\it \lambda}_0 + \lambda_1 \cdot x+x^2 \big ] =x \cdot \big [\alpha^2 + \alpha^6 \cdot x+x^2 \big ]\hspace{0.05cm}.\]

⇒   According to the Chien search one gets

\[{\it \Lambda}(\alpha^0)\hspace{0.15cm} = \hspace{0.15cm}\alpha^0 \cdot \big [ \alpha^2 + \alpha^6 \cdot 1 + 1 \big ] = \alpha^2 + (\alpha^2 + 1) + 1 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm},\]
\[{\it \Lambda}(\alpha^1)\hspace{0.15cm} = \hspace{0.15cm}\alpha^1 \cdot \big [\alpha^2 + \alpha^6 \cdot \alpha^1 + \alpha^2\big ]= \alpha^1 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{\rm No\hspace{0.15cm} zero}\hspace{0.05cm},\]
\[{\it \Lambda}(\alpha^2)\hspace{0.15cm} = \hspace{0.15cm}\alpha^2 \cdot \big [ \alpha^2 + \alpha^6 \cdot \alpha^2 + \alpha^4 \big ] =\alpha^4 + \alpha^{10} + \alpha^6 = \text{...} = \hspace{0.15cm}0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}{ \boldsymbol{\rm Zero} }\hspace{0.05cm}.\]

Thus the two error positions with  $i = 0$  and  $i = 2$  are found   ⇒   the error vector is:   $\underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)$ .


⇒   The vector equation  $\underline {L} = \underline {\it \Lambda } \cdot { \boldsymbol{\rm H } }$  gives the same result in a more compact form:

\[\underline {L} = \underline{\it \Lambda} \cdot { \boldsymbol{\rm H } } = (\alpha^2, \alpha^6, 1, 0) \cdot \begin{pmatrix} 1 & \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\ 1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 & \alpha^3 & \alpha^5 \\ 1 & \alpha^3 & \alpha^6 & \alpha^2 & \alpha^5 & \alpha^1 & \alpha^4 \\ 1 & \alpha^4 & \alpha^1 & \alpha^5 & \alpha^2 & \alpha^6 & \alpha^3 \end{pmatrix} \]
\[ \Rightarrow \hspace{0.3cm} \underline {L} = (\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6,1 ,\alpha^1) + (\alpha^6,\alpha^1,\alpha^3,\alpha^5,1 ,\alpha^2,\alpha^4)+(1, \alpha^3,\alpha^6,\alpha^2,\alpha^5,\alpha^1,\alpha^4) \]
\[ \Rightarrow \hspace{0.3cm} \underline {L} = (0,\alpha^1,0,\alpha^2,\alpha^3,\alpha^5,\alpha^1)\hspace{0.3cm} \Rightarrow \hspace{0.3cm} L_0 = L_2 = 0 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \underline {e} = (e_0, 0, e_2, 0, 0, 0, 0)\hspace{0.05cm}.\]

The example continues with  $\text{Example 6}$  in the next section.
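The Chien search of Example 5 can be sketched as follows, again with the assumed integer representation of  ${\rm GF}(2^3)$  $(\alpha^3 = \alpha + 1)$:

```python
# Sketch of the Chien search in Example 5:
# evaluate Lambda(x) = x*(a^2 + a^6*x + x^2) at all alpha^i, i = 0..6.
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiplication in GF(8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 7]

def elp(x):
    """Lambda(x) with lambda0 = a^2, lambda1 = a^6; '+' is XOR."""
    return gmul(x, EXP[2] ^ gmul(EXP[6], x) ^ gmul(x, x))

error_positions = [i for i in range(7) if elp(EXP[i]) == 0]
print(error_positions)   # [0, 2]  ->  e = (e0, 0, e2, 0, 0, 0, 0)
```

The loop over all  $\alpha^i$  is exactly the Chien search; evaluating  $\underline{L} = \underline{\it \Lambda} \cdot \boldsymbol{\rm H}$  instead would collect the same seven values in one vector-matrix product.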

== Step (D): Final error correction ==


In the last step the  $r$  symbol errors have to be corrected,  whose positions are known after completion of step  $\rm (C)$ :

  • Marking the error positions in the received word  $\underline {y}$  as erasures  $\rm E$,  the corresponding code word  $\underline {z}$  can be found according to the description in chapter  "Reed-Solomon decoding at the erasure channel".
  • A second possibility offers the determination of the error vector  $\underline {e}$  from the equation   $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$   and correcting accordingly   $\underline {z} = \underline {y} - \underline {e} $.   This procedure is the basis of the following example.

$\text{Example 6:}$  We consider again the  $\text{RSC (7, 3, 5)}_8$.  In   $\text{Example 2}$,

  • the received word  $\underline{y}=\big (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0, \hspace{0.09cm}\alpha^1, \hspace{0.09cm} \alpha^2, \hspace{0.09cm} \alpha^5, \hspace{0.09cm} \alpha^5 \big)$  was given,  and
  • from it the syndrome   $\underline{s}=\big (\alpha^5,\hspace{0.09cm}\alpha^2, \hspace{0.09cm} \alpha^3, \hspace{0.09cm} \alpha^1\big)$  was determined.


According to the calculations in  $\text{Example 5}$  the error vector is  $\underline {e} = (e_0,\ 0,\ e_2,\ 0,\ 0,\ 0,\ 0)$.


From  $\underline {e} \cdot \boldsymbol{\rm H }^{\rm T} = \underline {s}$   we now obtain the following governing equations for the error values  $e_0$  and  $e_2$:

\[\underline {e} \cdot \boldsymbol{\rm H }^{\rm T}= \begin{pmatrix} e_0 & 0 & e_2 & 0 & 0 & 0 & 0 \end{pmatrix}\cdot \begin{pmatrix} 1 & 1 & 1 & 1 \\ \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\ \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1\\ \alpha^3 & \alpha^6 & \alpha^2 & \alpha^{5}\\ \alpha^4 & \alpha^1 & \alpha^{5} & \alpha^{2}\\ \alpha^5 & \alpha^{3} & \alpha^{1} & \alpha^{6}\\ \alpha^6 & \alpha^{5} & \alpha^{4} & \alpha^{3} \end{pmatrix} \stackrel{!}{=} \underline {s} = \begin{pmatrix} \alpha^5 & \alpha^2 & \alpha^3 & \alpha^1 \end{pmatrix} \]
\[\Rightarrow \hspace{0.3cm}e_0 \cdot (1,\ 1,\ 1,\ 1) + e_2 \cdot (\alpha^2,\ \alpha^4,\ \alpha^6,\ \alpha^1)\stackrel{!}{=} ( \alpha^5,\ \alpha^2,\ \alpha^3,\ \alpha^1)\]
\[\Rightarrow \hspace{0.3cm} e_0 + e_2 \cdot \alpha^2 = \alpha^5\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^4 = \alpha^2\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^6 = \alpha^3\hspace{0.05cm},\hspace{0.2cm} e_0 + e_2 \cdot \alpha^1 = \alpha^1\hspace{0.05cm}.\]
  • All equations lead to the result  $e_0 = 1$  and  $e_2 =\alpha^2$.
  • Thus,  with  $\underline {y} = (\alpha^3,\hspace{0.09cm} 1,\hspace{0.09cm} 0,\hspace{0.09cm} \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)$   and   $\underline {e}= (1,\hspace{0.09cm} 0,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0,\hspace{0.09cm} 0)$,  the corrected code word is:
\[\underline {z} = \underline {y} - \underline {e} = \underline {y} + \underline {e}= (\alpha^1,\hspace{0.09cm} 1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^1,\hspace{0.09cm} \alpha^2,\hspace{0.09cm} \alpha^5,\hspace{0.09cm} \alpha^5)\hspace{0.05cm}. \]
  • This is a valid code word of the  $\text{RSC (7, 3, 5)}_8$.
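The final correction step can be sketched by brute-forcing the two error values from the determining equations  $\underline{e} \cdot \boldsymbol{\rm H}^{\rm T} = \underline{s}$  and then adding the error vector back onto  $\underline{y}$  (same assumed GF(8) representation as in the previous sketches):

```python
# Sketch of step (D) in Example 6: determine e0, e2 from e * H^T = s
# by exhaustive search, then correct z = y + e.
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiplication in GF(8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 7]

s = [7, 4, 3, 2]            # syndrome (a^5, a^2, a^3, a^1)
y = [3, 1, 0, 2, 4, 7, 7]   # received word (a^3, 1, 0, a^1, a^2, a^5, a^5)
col0 = [1, 1, 1, 1]         # column 0 of H^T  (error position 0)
col2 = [4, 6, 5, 2]         # column 2 of H^T: (a^2, a^4, a^6, a^1)

sol = [(e0, e2) for e0 in range(8) for e2 in range(8)
       if all(gmul(e0, col0[j]) ^ gmul(e2, col2[j]) == s[j] for j in range(4))]
e0, e2 = sol[0]             # e0 = 1, e2 = a^2

z = y[:]
z[0] ^= e0                  # correct position 0
z[2] ^= e2                  # correct position 2
print(z)   # [2, 1, 4, 2, 4, 7, 7]  ->  (a^1, 1, a^2, a^1, a^2, a^5, a^5)
```

Requiring all four equations simultaneously makes the solution unique, matching the statement above that every equation leads to  $e_0 = 1$  and  $e_2 = \alpha^2$.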


== Fast Reed-Solomon decoding ==


The class of Reed–Solomon codes was introduced as early as 1960 by the publication  [RS60][2].  However,  their efficient decoding only became possible one to two decades later.

In the last sections we have demonstrated the so-called  "Petersen algorithm"  including the  "Chien search"  using the example of the  $\text{RSC (7, 3, 5)}_8$,  which can correct up to  $t = 2$  errors. 

  • The decoding process focused on setting up and solving the key equation,  where the zeros of a degree-2 polynomial over the field  $\rm GF(8)$  had to be found.
  • It became apparent that even this algebraic decoding involves considerable effort.


For codes used in practice with large code word length  $n$  and high correctability  $t$  the decoding effort would explode if faster decoding algorithms had not been found. 

For example,  for the Reed–Solomon code   $\text{RSC (255, 223, 33)}_{256}$,  specified in the ESA/NASA standard for satellite transmission,  up to   $t = 16$   zeros must be found in the field   $\rm GF(256)$   to decode a single code word.  And this in real time!

Beginning in the late 1960s, many scientists sought faster decoding algorithms for Reed–Solomon codes:

  1. In the  $\text{Berlekamp–Massey algorithm}$  $\rm (BMA)$,  the key equation   $\underline {\it \Lambda} \cdot \underline{s }^{\rm T} = 0$   is represented as a feedback shift register,  see for example  [Mas69][3],  [Fri96][4],  [Bos99][5].  The problem is thus reduced to the synthesis of an autoregressive filter.  The algorithm works much faster than the  (more easily comprehensible)  Petersen algorithm.
  2. Somewhat later,  in  [SK+75][6],  a decoding method based on the  $\text{Euclidean algorithm}$  was proposed.  This determines the greatest common divisor of two integers  (applied here to polynomials),  which is used for decoding.  The Euclidean algorithm is comparably fast to the  $\rm BMA$.  More detailed information can be found in  [Bos99][5]  and  [Fri96][4].
  3. Other efficient decoding methods of Reed–Solomon codes operate in the frequency domain using the  $\text{Discrete Fourier Transform}$  $\rm (DFT)$  in the field  ${\rm GF}(n)$.

The basic features of Reed–Solomon error correction were already developed in the 1960s.  But up to the present time the  $($as fast as possible$)$  algebraic decoding of these codes is a highly topical field of research.

== Exercises for the chapter ==


Exercise 2.12: Decoding at RSC (7, 4, 4) to Base 8

Exercise 2.12Z: Reed-Solomon Syndrome Calculation

Exercise 2.13: Decoding at RSC (7, 3, 5) to Base 8

Exercise 2.14: Petersen Algorithm

== References ==

  1. Peterson, W.W.:  Encoding and Error-correction Procedures for the Bose-Chaudhuri Codes.  IRE Transactions on Information Theory, vol. IT-6, pp. 459-470, 1960.
  2. Reed, I.S.; Solomon, G.:  Polynomial Codes over Certain Finite Fields.  J. SIAM, vol. 8, pp. 300-304, 1960.
  3. Massey, J.L.:  Shift Register Synthesis and BCH Decoding.  IEEE Transactions on Information Theory, vol. IT-15, pp. 122-127, Jan. 1969.
  4. Friedrichs, B.:  Kanalcodierung – Grundlagen und Anwendungen in modernen Kommunikationssystemen.  Berlin – Heidelberg: Springer, 1996.
  5. Bossert, M.:  Channel Coding for Telecommunications.  Chichester: Wiley, 1999.
  6. Sugiyama, Y.; Kasahara, M.; Hirasawa, S.; Namekawa, T.:  A Method for Solving Key Equation for Decoding Goppa Codes.  Information and Control, vol. 27, pp. 87-99, 1975.