Aufgaben:Exercise 2.13: Decoding at RSC (7, 3, 5) to Base 8

{{quiz-Header|Buchseite=Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding}}
  
[[File:P_ID2561__KC_A_2_12.png|right|frame|ELP allocation schemes for&nbsp; $r = 1, \ r = 2, \ r = 3$]]

In&nbsp; [[Aufgaben:Exercise_2.12:_Decoding_at_RSC_(7,_4,_4)_to_Base_8|"Exercise 2.12"]]&nbsp; we used the so-called Petersen algorithm
*for error correction and
*for decoding

of the Reed&ndash;Solomon code&nbsp; $(7, \, 4, \, 4)_8$,&nbsp; which, due to its minimum distance&nbsp; $d_{\rm min} = 4$&nbsp; $(t = 1)$,&nbsp; can correct only one symbol error.

In this exercise we now consider the&nbsp; ${\rm RSC} \, (7, \, 3, \, 5)_8 \ \Rightarrow \ d_{\rm min} = 5 \ \Rightarrow \ t = 2$,&nbsp; whose parity-check matrix reads:
:$${ \boldsymbol{\rm H}} =
\begin{pmatrix}
1 & \alpha^1 & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6\\
1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha^1 & \alpha^{3} & \alpha^{5}\\
1 & \alpha^3 & \alpha^6 & \alpha^2 & \alpha^{5} & \alpha^{1} & \alpha^{4}\\
1 & \alpha^4 & \alpha^1 & \alpha^{5} & \alpha^{2} & \alpha^{6} & \alpha^{3}
\end{pmatrix} .$$

The received word under consideration &nbsp; $\underline{y} = (\alpha^2, \, \alpha^3, \, \alpha, \, \alpha^5, \, \alpha^4, \, \alpha^2, \, 1)$ &nbsp; results in the syndrome
:$$\underline{s} = \underline{y} \cdot \mathbf{H}^{\rm T} = (0, \, 1, \, \alpha^5, \, \alpha^2).$$
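All computations in this exercise take place in&nbsp; $\rm GF(2^3)$.&nbsp; The following is a minimal sketch of this arithmetic, assuming the field is generated by the primitive polynomial&nbsp; $p(x) = x^3 + x + 1$,&nbsp; i.e.&nbsp; $\alpha^3 = \alpha + 1$&nbsp; $($an assumption, but one consistent with the relations&nbsp; $\alpha + 1 = \alpha^3$&nbsp; and&nbsp; $\alpha^5 + \alpha^6 = \alpha$&nbsp; used in the sample solution$)$:

```python
# GF(2^3) arithmetic sketch, assuming the primitive polynomial
# p(x) = x^3 + x + 1, i.e. alpha^3 = alpha + 1.
# Field elements are represented as 3-bit polynomials over GF(2).
EXP = [1]                                         # EXP[i] = alpha^i
for _ in range(6):
    v = EXP[-1] << 1                              # multiply by alpha
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)   # reduce modulo p(x)
LOG = {v: i for i, v in enumerate(EXP)}           # discrete logarithm table

def gf_mul(a, b):
    """Multiply two field elements (addition of discrete logs mod 7)."""
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

# Addition is bitwise XOR of the bit patterns:
assert EXP[1] ^ EXP[0] == EXP[3]                  # alpha + 1         = alpha^3
assert EXP[5] ^ EXP[6] == EXP[1]                  # alpha^5 + alpha^6 = alpha
```

With these two tables, every syndrome and ELP computation in the following subtasks reduces to XOR and exponent addition modulo&nbsp; $7$.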

The further decoding procedure follows these theory pages:
* Step&nbsp; $\rm (B)$:&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28B.29:_Set_up.2Fevaluate_the_ELP_coefficient_vector|"Set up/evaluate the ELP coefficient vector"]]; &nbsp; "ELP"&nbsp; stands for&nbsp; "error locator polynomial",
* Step&nbsp; $\rm (C)$:&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28C.29:_Localization_of_the_error_positions|"Localization of the error positions"]],
* Step&nbsp; $\rm (D)$:&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Step_.28D.29:_Final_error_correction|"Final error correction"]].
Hints:
* This exercise belongs to the chapter&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding|"Error Correction According to Reed&ndash;Solomon Coding"]].
* The above graphic shows the assignments of the&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Error_Locator_Polynomial_-_Definition_and_Properties|"ELP coefficients"]]&nbsp; ${\it \underline{\Lambda}}_l$,&nbsp; assuming&nbsp; $r = 1$,&nbsp; $r = 2$&nbsp; and&nbsp; $r = 3$&nbsp; symbol errors in the received word, respectively.

===Questions===
 
<quiz display=simple>

{Which allocation schemes might be relevant for this exercise?
|type="[]"}
+ The scheme highlighted in blue&nbsp; $(r = 1)$.
+ The scheme highlighted in red&nbsp; $(r = 2)$.
- The scheme highlighted in green&nbsp; $(r = 3)$.
  

{Can the syndrome &nbsp; $\underline{s} = (0, \, 1, \, \alpha^5, \, \alpha^2)$ &nbsp; be caused by one symbol error?
|type="()"}
- YES.
+ NO.
  
{Can the syndrome &nbsp; $\underline{s} = (0, \, 1, \, \alpha^5, \, \alpha^2)$ &nbsp; be caused by two symbol errors?
|type="()"}
+ YES.
- NO.
  

{Which symbols of the code word have been falsified?
|type="[]"}
- Symbol 0,
- Symbol 1,
+ Symbol 2,
+ Symbol 3,
- Symbol 4,
- Symbol 5,
- Symbol 6.
  

{What is the error vector&nbsp; $\underline{e}$?&nbsp; Also give the decoding result&nbsp; $\underline{z} = \underline{y} + \underline{e}$.
|type="()"}
- $\underline{e} = (1, \, 0, \, 0, \, 0, \, 0, \, 0, \, \alpha^6)$,
- $\underline{e} = (\alpha^6, \, 0, \, 0, \, 0, \, 0, \, 0, \, 1)$,
+ $\underline{e} = (0, \, 0, \, 1, \, \alpha^6, \, 0, \, 0, \, 0)$,
- $\underline{e} = (0, \, 0, \, \alpha^6, \, 1, \, 0, \, 0, \, 0)$.
</quiz>
  
===Solution===
{{ML-Kopf}}
'''(1)'''&nbsp; Correct are the&nbsp; <u>proposed solutions 1 and 2</u>:
*The&nbsp; ${\rm RSC} \, (7, \, 3, \, 5)_8$&nbsp; can correct up to&nbsp; $t = 2$&nbsp; symbol errors.
*The actual number of symbol errors&nbsp; $r$&nbsp; must therefore not exceed this value.


[[File:EN_KC_Z_2_5_neu.png|right|frame|Conversion tables for&nbsp; $\rm GF(2^3)$]]
'''(2)'''&nbsp; Assuming&nbsp; $r = 1$,&nbsp; the&nbsp; $n-k-1$&nbsp; governing equations&nbsp; ${\it \underline{\Lambda}}_l \cdot \underline{s}^{\rm T} = 0$&nbsp; for&nbsp; $\lambda_0$&nbsp; are as follows:
:$$\lambda_0  \cdot 0 \hspace{-0.15cm} \ + \ \hspace{-0.15cm} 1 = 0 \hspace{0.75cm} \Rightarrow  \hspace{0.3cm} \lambda_0 \hspace{0.15cm} {\rm unspecified} \hspace{0.05cm},$$
:$$\lambda_0  \cdot 1 \hspace{-0.15cm} \ + \ \hspace{-0.15cm} \alpha^5 = 0  \hspace{0.55cm} \Rightarrow  \hspace{0.3cm} \lambda_0  = \alpha^5 \hspace{0.05cm},$$
:$$\lambda_0  \cdot \alpha^5 \hspace{-0.15cm} \ + \ \hspace{-0.15cm} \alpha^2 = 0 \hspace{0.3cm} \Rightarrow  \hspace{0.3cm} \lambda_0  = \alpha^{-3} = \alpha^4 \hspace{0.05cm}.$$

*The assumption&nbsp; $r = 1$&nbsp; would be justified only if all three equations yielded the same&nbsp; $\lambda_0$&nbsp; value.
*This is not the case here &nbsp; &#8658; &nbsp; Answer <u>NO</u>.
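The contradiction between the second and third equation can also be checked numerically. A self-contained sketch, assuming&nbsp; $\rm GF(2^3)$&nbsp; is generated by the primitive polynomial&nbsp; $x^3 + x + 1$&nbsp; $($consistent with the relations used in this solution$)$:

```python
# Check that the r = 1 assumption leads to contradictory lambda_0 values.
# GF(2^3) tables, assuming the primitive polynomial x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_div(a, b):
    """Divide two nonzero field elements (subtraction of discrete logs)."""
    return EXP[(LOG[a] - LOG[b]) % 7]

s = [0, EXP[0], EXP[5], EXP[2]]     # syndrome (0, 1, alpha^5, alpha^2)

lam0_a = gf_div(s[2], s[1])         # second equation:  lambda_0 = alpha^5
lam0_b = gf_div(s[3], s[2])         # third equation:   lambda_0 = alpha^4
assert lam0_a == EXP[5] and lam0_b == EXP[4]
assert lam0_a != lam0_b             # contradictory  =>  r = 1 is ruled out
```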


'''(3)'''&nbsp; Assuming the allocation scheme for&nbsp; $r = 2$,&nbsp; we obtain two determining equations for&nbsp; $\lambda_0$&nbsp; and&nbsp; $\lambda_1$:
:$$\lambda_0  \cdot 0 \hspace{-0.15cm} \ + \ \hspace{-0.15cm}  \lambda_1  \cdot 1 +\alpha^5 = 0  \hspace{0.38cm} \Rightarrow  \hspace{0.3cm} \lambda_1  = \alpha^5 \hspace{0.05cm},$$
:$$\lambda_0  \cdot 1 \hspace{-0.15cm} \ + \ \hspace{-0.15cm}  \lambda_1  \cdot \alpha^5 +\alpha^2 = 0  \hspace{0.15cm} \Rightarrow  \hspace{0.3cm} \lambda_0  = \alpha^{5+5} +\alpha^2 =  \alpha^{3} +\alpha^2 =\alpha^{5} \hspace{0.05cm}.$$

*The system of equations is solvable under the assumption&nbsp; $r = 2$ &nbsp; &#8658; &nbsp; Answer <u>YES</u>.
*The results&nbsp; $\lambda_0 = \lambda_1 = \alpha^5$&nbsp; obtained here are used in the next subtask.
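The two determining equations can be solved programmatically in the same way. A sketch, again assuming&nbsp; $\rm GF(2^3)$&nbsp; generated by&nbsp; $x^3 + x + 1$:

```python
# Solve the r = 2 equations  lambda_0*s_i + lambda_1*s_{i+1} + s_{i+2} = 0.
# GF(2^3) tables, assuming the primitive polynomial x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

s = [0, EXP[0], EXP[5], EXP[2]]      # s = (s_1, s_2, s_3, s_4), zero-indexed

# First equation (s_1 = 0):   lambda_1 * s_2 = s_3   =>  lambda_1 = alpha^5
lam1 = EXP[(LOG[s[2]] - LOG[s[1]]) % 7]
# Second equation (s_2 = 1):  lambda_0 = lambda_1 * s_3 + s_4
lam0 = gf_mul(lam1, s[2]) ^ s[3]

assert lam0 == EXP[5] and lam1 == EXP[5]   # lambda_0 = lambda_1 = alpha^5
```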


'''(4)'''&nbsp; With the result&nbsp; $\lambda_0 = \lambda_1 = \alpha^5$,&nbsp; the error locator polynomial&nbsp; $($"key equation"$)$&nbsp; is:
:$${\it \Lambda}(x)=x \cdot \big ({\it \lambda}_0 +  {\it \lambda}_1 \cdot x + x^2 \big )
=x \cdot \big (\alpha^5 + \alpha^5 \cdot x + x^2 \big )\hspace{0.05cm}.$$

*This function has zeros at&nbsp; $x = \alpha^2$&nbsp; and&nbsp; $x = \alpha^3$:
:$${\it \Lambda}(x = \alpha^2 )\hspace{-0.15cm} \ = \ \hspace{-0.15cm} \alpha^2 \cdot \big ( \alpha^5 + \alpha^7 + \alpha^4 \big ) = \alpha^2 \cdot \big ( \alpha^5 + 1 + \alpha^4 \big )= 0\hspace{0.05cm},$$
:$${\it \Lambda}(x = \alpha^3 )\hspace{-0.15cm} \ = \ \hspace{-0.15cm} \alpha^3 \cdot \big ( \alpha^5 + \alpha^8 + \alpha^6 \big ) = \alpha^3 \cdot \big ( \alpha^5 + \alpha + \alpha^6 \big )= 0\hspace{0.05cm}.$$

*Consequently, the symbols at positions&nbsp; '''2'''&nbsp; and&nbsp; '''3'''&nbsp; are falsified &nbsp; &#8658; &nbsp; <u>Solutions 3 and 4</u>.
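The zero search can be carried out as an exhaustive Chien-type search over all seven nonzero field elements. A sketch, again assuming&nbsp; $\rm GF(2^3)$&nbsp; generated by&nbsp; $x^3 + x + 1$;&nbsp; since&nbsp; $x = \alpha^i$&nbsp; is never zero, evaluating the bracket term suffices:

```python
# Find all alpha^i with Lambda(alpha^i) = 0  (Chien-type exhaustive search).
# GF(2^3) tables, assuming the primitive polynomial x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

lam0 = lam1 = EXP[5]                 # result of subtask (3)

# bracket term: lambda_0 + lambda_1 * x + x^2, evaluated at x = alpha^i
error_positions = [i for i in range(7)
                   if lam0 ^ gf_mul(lam1, EXP[i]) ^ gf_mul(EXP[i], EXP[i]) == 0]
assert error_positions == [2, 3]     # symbols 2 and 3 are falsified
```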


'''(5)'''&nbsp; Following the result of subtask&nbsp; '''(4)''',&nbsp; only the last two proposed solutions remain:
:$$\underline{e} = (0, \, 0, \, e_2, \, e_3, \, 0, \, 0, \, 0).$$

*The ansatz is therefore &nbsp; $\underline{e} \cdot \mathbf{H}^{\rm T} = \underline{s}$:
:$$(0, \, 0, \, e_2, \, e_3, \, 0, \, 0, \, 0) \cdot
\begin{pmatrix}
1 & 1 & 1 & 1\\
\alpha^1 & \alpha^2 & \alpha^3 & \alpha^4\\
\alpha^2 & \alpha^4 & \alpha^6 & \alpha^1\\
\alpha^3 & \alpha^6 & \alpha^2 & \alpha^5\\
\alpha^4 & \alpha^1 & \alpha^{5} & \alpha^2\\
\alpha^5 & \alpha^{3} & \alpha^{1} & \alpha^6\\
\alpha^6 & \alpha^{5} & \alpha^{4} & \alpha^3
\end{pmatrix}  \hspace{0.15cm}\stackrel{!}{=} \hspace{0.15cm}
(0, \, 1, \, \alpha^5, \, \alpha^2)
\hspace{0.05cm}. $$
:$$\Rightarrow  \hspace{0.3cm} e_2 \cdot \alpha^{2} \hspace{-0.15cm} \ + \ \hspace{-0.15cm}  e_3  \cdot \alpha^{3} = 0  \hspace{0.05cm},
\hspace{0.8cm} e_2  \cdot \alpha^{4} +  e_3  \cdot \alpha^{6} = 1\hspace{0.05cm},\hspace{0.8cm}
e_2 \cdot \alpha^{6} \hspace{-0.15cm} \ + \ \hspace{-0.15cm}  e_3  \cdot \alpha^{2} = \alpha^{5}  \hspace{0.05cm},
\hspace{0.6cm} e_2  \cdot \alpha^{1} +  e_3  \cdot \alpha^{5} = \alpha^{2} \hspace{0.05cm}. $$
  

*All four equations are satisfied with&nbsp; $e_2 = 1$&nbsp; and&nbsp; $e_3 = \alpha^6$ &nbsp; &#8658; &nbsp; <u>Proposed solution 3</u>:
:$$\underline {e} = (0, \hspace{0.05cm}0, \hspace{0.05cm}1, \hspace{0.05cm}\alpha^6, \hspace{0.05cm}0, \hspace{0.05cm}0, \hspace{0.05cm}0) \hspace{0.05cm}.$$
  

*With&nbsp; $\alpha + 1 = \alpha^3$&nbsp; and&nbsp; $\alpha^5 + \alpha^6 = \alpha$&nbsp; the given received word&nbsp; $\underline{y} = (\alpha^2, \, \alpha^3, \, \alpha, \, \alpha^5, \, \alpha^4, \, \alpha^2, \, 1)$&nbsp; leads to the decoding result
:$$\underline {z} = (\alpha^2, \hspace{0.05cm}\alpha^3, \hspace{0.05cm}\alpha^3, \hspace{0.05cm}\alpha, \hspace{0.05cm}\alpha^4, \hspace{0.05cm}\alpha^2, \hspace{0.05cm}1)
\hspace{0.05cm}. $$
  

*In&nbsp; [[Aufgaben:Exercise_2.07:_Reed-Solomon_Code_(7,_3,_5)_to_Base_8|"Exercise 2.7"]]&nbsp; it was shown that this is a valid code word of the&nbsp; ${\rm RSC} \, (7, \, 3, \, 5)_8$.
*The corresponding information word is&nbsp; $\underline{u} = (\alpha^4, \, 1, \, \alpha^3)$.
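The complete correction step, i.e. the check&nbsp; $\underline{e} \cdot \mathbf{H}^{\rm T} = \underline{s}$&nbsp; and the addition&nbsp; $\underline{z} = \underline{y} + \underline{e}$,&nbsp; can be sketched as follows, once more assuming&nbsp; $\rm GF(2^3)$&nbsp; generated by&nbsp; $x^3 + x + 1$:

```python
# Verify e * H^T = s and compute the decoding result z = y + e.
# GF(2^3) tables, assuming the primitive polynomial x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

# Parity-check matrix H_{i,j} = alpha^{i*j}, rows i = 1..4, columns j = 0..6
H = [[EXP[(i * j) % 7] for j in range(7)] for i in range(1, 5)]

e = [0, 0, EXP[0], EXP[6], 0, 0, 0]              # e = (0, 0, 1, alpha^6, 0, 0, 0)
syndrome = [0] * 4
for i in range(4):
    for j in range(7):
        syndrome[i] ^= gf_mul(e[j], H[i][j])
assert syndrome == [0, EXP[0], EXP[5], EXP[2]]   # = (0, 1, alpha^5, alpha^2)

y = [EXP[2], EXP[3], EXP[1], EXP[5], EXP[4], EXP[2], EXP[0]]
z = [a ^ b for a, b in zip(y, e)]                # correction by XOR addition
assert z == [EXP[2], EXP[3], EXP[3], EXP[1], EXP[4], EXP[2], EXP[0]]
```

The final assertion confirms that&nbsp; $\underline{z} = (\alpha^2, \, \alpha^3, \, \alpha^3, \, \alpha, \, \alpha^4, \, \alpha^2, \, 1)$,&nbsp; i.e. the stated decoding result.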
{{ML-Fuß}}


[[Category:Channel Coding: Exercises|^2.5 Reed-Solomon Error Correction^]]

Latest revision as of 17:30, 23 January 2023
