Channel Coding/Decoding of Linear Block Codes

== Block diagram and requirements ==
 
<br>
We start from the block diagram already shown in the chapter&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures|$\text{Channel Models and Decision Structures}$]],&nbsp; where the digital channel model used is mostly the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{Binary Symmetric Channel}$]]&nbsp;$\rm (BSC)$.

[[File:EN_KC_T_1_5_S1.png|right|frame|Block diagram for decoding block codes|class=fit]]

For code word estimation,&nbsp; we use the&nbsp; "Maximum Likelihood Decision"&nbsp; $\rm (ML)$,&nbsp; which for binary codes &nbsp; &#8658; &nbsp; $\underline{x}  \in {\rm GF}(2^n)$&nbsp; at the block level gives the same result as the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Definitions_of_the_different_optimal_receivers|$\text{MAP Receiver}$]].<br>
  
 
The task of the channel decoder can be described as follows:
*The vector&nbsp; $\underline{v}$&nbsp; after decoding&nbsp; (at the sink)&nbsp; should match the information word&nbsp; $\underline{u}$&nbsp; as well as possible.

*That is: &nbsp; The&nbsp; &raquo;'''block error probability'''&laquo;&nbsp; should be as small as possible:
 
::<math>{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.</math>
  
*Because of the deterministic assignments&nbsp; $\underline{x} = {\rm enc}(\underline{u})$&nbsp; resp.&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{z})$,&nbsp; it also holds that:
  
 
::<math>{ \rm Pr(block\:error)} = { \rm Pr}( \underline{z} \ne \underline{x}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.</math>
  
*Sought is the most likely sent code word&nbsp; $\underline{x}_i$&nbsp; for the given received word&nbsp; $\underline{y} = \underline{x} +\underline{e}$,&nbsp; which is passed on as result&nbsp; $\underline{z}$:
 
 
 
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}|\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.</math>
  
*For the BSC model,&nbsp; both &nbsp; $\underline{x}_i \in {\rm GF}(2^n)$ &nbsp; and &nbsp; $\underline{y}  \in {\rm GF}(2^n)$&nbsp; hold,&nbsp; so the maximum likelihood decision rule can also be written using the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding |$\text{Hamming distance}$]]&nbsp; $d_{\rm H}( \underline{y}, \, \underline{x}_i)$:
  
 
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
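For short codes this decision rule can be realized directly as a brute-force search over the code book.&nbsp; A minimal Python sketch&nbsp; (illustrative only;&nbsp; the function names are freely chosen, and the code book of the&nbsp; $\text{(5, 2, 3)}$&nbsp; code used in the later examples serves as test data):

<syntaxhighlight lang="python">
import numpy as np

def hamming_distance(a, b):
    """Number of positions in which two binary vectors differ."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def ml_decode_bsc(y, codebook):
    """Brute-force ML decision for the BSC:
    return the code word x_i with minimum Hamming distance d_H(y, x_i)."""
    return min(codebook, key=lambda x: hamming_distance(y, x))

# Code book of the systematic (5, 2, 3) code used in the examples further below
code = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (1, 1, 1, 0, 1)]
print(ml_decode_bsc((0, 1, 0, 0, 1), code))    # -> (0, 1, 0, 1, 1)
</syntaxhighlight>

The following sections show how the same decision can be organized via the syndrome.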
 
== Principle of syndrome decoding ==
<br>
Assumed here is an&nbsp; $(n, \, k)$&nbsp; block code with the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; and the systematic code words
  
 
::<math>\underline{x}\hspace{0.05cm} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_i, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, x_n)
  = (u_1, u_2, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, u_k, p_1, \hspace{0.05cm}\text{...}  \hspace{0.05cm}, p_{n-k})\hspace{0.05cm}. </math>
  
With the error vector&nbsp; $\underline{e}$,&nbsp; the following then applies to the received word:
  
 
::<math>\underline{y} = \underline{x} + \underline{e} \hspace{0.05cm}, \hspace{0.4cm} \underline{y} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{x} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{e} \in \hspace{0.1cm} {\rm GF}(2^n)
\hspace{0.05cm}.</math>
  
A bit error at position&nbsp; $i$ &nbsp; &rArr; &nbsp; $y_i &ne; x_i$&nbsp; is expressed by the&nbsp; &raquo;'''error coefficient'''&laquo;&nbsp; $e_i = 1$.<br>
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; The&nbsp; &raquo;'''syndrome'''&laquo;&nbsp; $\underline{s} = (s_0, s_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, s_{m-1})$&nbsp; is calculated&nbsp; (as row resp. column vector)&nbsp; from the received word&nbsp; $\underline{y}$&nbsp; and the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; as follows:
  
 
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T}\hspace{0.3cm}{\rm resp.}\hspace{0.3cm}
\underline{s}^{\rm T} = { \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}\hspace{0.05cm}.</math>
  
*The vector length of&nbsp; $\underline{s}$&nbsp; is equal to&nbsp; $m = n-k$&nbsp; $($the number of rows of&nbsp; $\boldsymbol{\rm H})$.}}<br>
  
 
The syndrome&nbsp; $\underline{s}$&nbsp; shows the following characteristics:
*Because of the equation&nbsp; $\underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{0}$,&nbsp; the syndrome&nbsp; $\underline{s}$&nbsp; does not depend on the code word&nbsp; $\underline{x}$&nbsp; but solely on the error vector&nbsp; $\underline{e}$:
  
 
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T}
= \hspace{0.05cm} \underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} + \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T}
= \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T}
\hspace{0.05cm}.</math>
  
*For sufficiently few bit errors,&nbsp; $\underline{s}$&nbsp; provides a clear indication of the error locations,&nbsp; allowing full error correction.<br><br>
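In&nbsp; $\rm GF(2)$&nbsp; the syndrome calculation is a simple matrix multiplication modulo&nbsp; $2$.&nbsp; A minimal Python sketch&nbsp; (illustrative names;&nbsp; as an assumption, the parity-check matrix of the&nbsp; $\text{(5, 2, 3)}$&nbsp; code from the later examples is used):

<syntaxhighlight lang="python">
import numpy as np

def syndrome(y, H):
    """Syndrome  s = y · H^T  over GF(2);  its length is m = n - k."""
    return np.mod(np.asarray(y) @ np.asarray(H).T, 2)

# Parity-check matrix of the systematic (5, 2, 3) code used in the later examples
H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])

x = np.array([0, 1, 0, 1, 1])          # a valid code word
e = np.array([0, 0, 0, 1, 0])          # error vector with a single "1"
print(syndrome(x, H))                  # -> [0 0 0]   (every code word gives s = 0)
print(syndrome((x + e) % 2, H))        # -> [0 1 0]
print(syndrome(e, H))                  # -> [0 1 0]   (s depends only on e)
</syntaxhighlight>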
  
 
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; Starting from the systematic&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{(7, 4, 3)}$ Hamming code]],&nbsp; the following result is obtained for the received vector&nbsp; $\underline{y} = (0, 1, 1, 1, 0, 0, 1)$:
  
 
::<math>{ \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}
= \begin{pmatrix}
1 &1 &1 &0 &1 &0 &0\\
0 &1 &1 &1 &0 &1 &0\\
1 &1 &0 &1 &0 &0 &1
\end{pmatrix} \cdot
\begin{pmatrix}
0 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1
\end{pmatrix} =
\begin{pmatrix}
0 \\ 1 \\ 1
\end{pmatrix} = \underline{s}^{\rm T} \hspace{0.05cm}.</math>
  
Comparing the syndrome with the&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Code_definition_by_the_parity-check_matrix|$\text{parity-check equations}$]]&nbsp; of the Hamming code,&nbsp; we see that
*most likely the fourth symbol&nbsp; $(x_4 = u_4)$&nbsp; of the code word has been falsified,<br>

*the code word estimator will thus yield the result&nbsp; $\underline{z} = (0, 1, 1, 0, 0, 0, 1)$,<br>

*the decision is correct only if at most one bit was falsified during transmission.<br><br>

Below are the required corrections for the&nbsp; $\text{(7, 4, 3)}$&nbsp; Hamming code resulting from the calculated syndrome&nbsp; $\underline{s}$&nbsp; corresponding to the columns of the parity-check matrix:
  
::<math>\underline{s} = (0, 0, 0) \hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm no\hspace{0.15cm} correction}\hspace{0.05cm};\hspace{0.8cm}\underline{s} = (1, 0, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm invert}\hspace{0.15cm}p_1\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_3\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_1\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_2\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_3\hspace{0.05cm};</math>
::<math>\underline{s} =(0, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_4\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_2\hspace{0.05cm}. </math>}}<br>
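The single-error correction of Example 1 can be sketched compactly:&nbsp; compute the syndrome,&nbsp; locate the matching column of&nbsp; $\boldsymbol{\rm H}$,&nbsp; and invert that bit.&nbsp; A minimal Python illustration&nbsp; (function names chosen freely):

<syntaxhighlight lang="python">
import numpy as np

# Parity-check matrix of the systematic (7, 4, 3) Hamming code from Example 1
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def correct_single_error(y, H):
    """Syndrome decoding of a Hamming code:
    if the syndrome equals a column of H, invert the corresponding bit."""
    y = np.array(y) % 2
    s = np.mod(H @ y, 2)                 # column vector  s^T = H · y^T
    if s.any():                          # s = 0  ->  y is already accepted
        for i in range(H.shape[1]):      # search for the matching column of H
            if np.array_equal(H[:, i], s):
                y[i] ^= 1                # invert bit i (assumes a single error)
                break
    return y

print(correct_single_error([0, 1, 1, 1, 0, 0, 1], H))   # -> [0 1 1 0 0 0 1]  =  z
</syntaxhighlight>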
  
 
== Generalization of syndrome coding ==
<br>
 
We continue to assume the BSC channel model.&nbsp; This means:
*The received vector&nbsp; $\underline{y}$ &nbsp; and the error vector&nbsp; $\underline{e}$ &nbsp; are elements of&nbsp; ${\rm GF}(2^n)$.

*The possible code words&nbsp; $\underline{x}_i$&nbsp; belong to the code&nbsp; $\mathcal{C}$,&nbsp; which spans a&nbsp; $k$-dimensional subspace of&nbsp; ${\rm GF}(2^n)$.
  
  
Under this assumption,&nbsp; we briefly summarize the results of the last sections:
*Syndrome decoding is a realization possibility of maximum likelihood decoding of block codes.&nbsp; One decides on the code word&nbsp; $\underline{x}_i$ &nbsp; with the least Hamming distance to the received word&nbsp; $\underline{y}$:
  
 
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
  
*But the syndrome decoding is also the search for the most probable error vector&nbsp; $\underline{e}$ that satisfies the condition&nbsp; $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$&nbsp;. The&nbsp; <i>syndrome</i>&nbsp; is thereby determined by the equation&nbsp; $\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} $&nbsp;.
+
*But the syndrome decoding is also the search for the most probable error vector&nbsp; $\underline{e}$&nbsp; that satisfies the condition&nbsp; $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$.&nbsp; The&nbsp; "syndrome"&nbsp; is thereby determined by the equation &nbsp;
 +
:$$\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} .$$
  
*With the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming weight}$]]&nbsp; $w_{\rm H}(\underline{e})$&nbsp; the second interpretation can also be mathematically formulated as follows:
  
 
::<math>\underline{z} = \underline{y} + {\rm arg} \min_{\underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} {\rm GF}(2^n)} \hspace{0.1cm}
w_{\rm H}(\underline{e}_{\hspace{0.03cm}i})\hspace{0.05cm}.</math>
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; Note that the error vector&nbsp; $\underline{e}$&nbsp; as well as the received vector&nbsp; $\underline{y}$&nbsp; is an element of&nbsp; ${\rm GF}(2^n)$,&nbsp; unlike the syndrome&nbsp; $\underline{s} \in {\rm GF}(2^m)$&nbsp; with the number&nbsp; $m = n-k$&nbsp; of parity-check equations.&nbsp; This means that
*the association between the syndrome&nbsp; $\underline{s}$&nbsp; and the error vector&nbsp; $\underline{e}$&nbsp; is not unique,&nbsp; but

*in each case&nbsp; $2^k$&nbsp; error vectors lead to the same syndrome&nbsp; $\underline{s}$,&nbsp; which are grouped together into a so-called&nbsp; "coset".}}<br>
  
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; The facts shall be illustrated by an example with the parameters&nbsp; $n = 5, \ k = 2$ &nbsp; &#8658; &nbsp; $m = n-k = 3$.
  
[[File:EN_KC_T_1_5_S3a.png|right|frame|Splitting the&nbsp; $2^k$&nbsp; error vectors into&nbsp; "cosets"|class=fit]]
  
 
You can see from this graph:
  
#The&nbsp; $2^n = 32$&nbsp; possible error vectors&nbsp; $\underline{e}$&nbsp; are divided into&nbsp; $2^m = 8$&nbsp; cosets&nbsp; ${\it \Psi}_0$,&nbsp; ... &nbsp;, ${\it \Psi}_7$.&nbsp; Explicitly drawn here are only the cosets&nbsp; ${\it \Psi}_0$&nbsp; and&nbsp; ${\it \Psi}_5$.<br><br>
#All&nbsp; $2^k = 4$&nbsp; error vectors of the coset&nbsp; ${\it \Psi}_\mu$&nbsp; lead to the syndrome&nbsp; $\underline{s}_\mu$.<br><br>
#Each coset&nbsp; ${\it \Psi}_\mu$&nbsp; has a&nbsp; "leader"&nbsp; $\underline{e}_\mu$,&nbsp; namely the one with the minimum Hamming weight.}}<br>
 
  
 
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; Starting from the systematic&nbsp; $\text{(5,&nbsp;2,&nbsp;3)}$&nbsp; code&nbsp; $\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0)  \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}$&nbsp; the syndrome decoding procedure is now described in detail.
[[File:EN_KC_T_1_5_S3b_neu2.png|right|frame|Example&nbsp; $\text{(5,&nbsp;2,&nbsp;3)}$ syndrome table with cosets|class=fit]]
*The generator matrix and the parity-check matrix are:
  
 
::<math>{ \boldsymbol{\rm G} }
= \begin{pmatrix}
1 &0 &1 &1 &0 \\
0 &1 &0 &1 &1
\end{pmatrix} \hspace{0.05cm},</math>
::<math>{ \boldsymbol{\rm H} }
= \begin{pmatrix}
1 &0 &1 &0 &0 \\
1 &1 &0 &1 &0 \\
0 &1 &0 &0 &1
\end{pmatrix} \hspace{0.05cm}.</math>
  
*The table summarizes the final result.&nbsp; Note:
#The index&nbsp; $\mu$&nbsp; is not identical to the binary value of&nbsp; $\underline{s}_\mu$.
#The order is rather given by the number of ones in the coset leader&nbsp; $\underline{e}_\mu$.
#For example,&nbsp; the syndrome&nbsp; $\underline{s}_5 = (1, 1, 0)$&nbsp; and the syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$,&nbsp; although their binary values&nbsp; $(6$&nbsp; and&nbsp; $5)$&nbsp; would suggest the opposite order.<br>
  
  
*To derive this table,&nbsp; note:
#Row 1 refers to the syndrome&nbsp; $\underline{s}_0 = (0, 0, 0)$&nbsp; and the associated coset&nbsp; ${\it \Psi}_0$.&nbsp; The most likely error sequence here is&nbsp; $(0, 0, 0, 0, 0)$ &nbsp; &#8658; &nbsp; no bit error,&nbsp; which we call the&nbsp; "coset leader"&nbsp; $\underline{e}_0$.
#The other entries in the first row,&nbsp; namely&nbsp; $(1, 0, 1, 1, 0 )$,&nbsp; $(0, 1, 0, 1, 1)$&nbsp; and&nbsp; $(1, 1, 1, 0, 1 )$,&nbsp; also each yield the syndrome&nbsp; $\underline{s}_0 = (0, 0, 0)$,&nbsp; but they result only from at least three bit errors and are correspondingly unlikely.<br>
#In rows 2 to 6,&nbsp; the respective coset leader&nbsp; $\underline{e}_\mu$&nbsp; contains exactly a single&nbsp; "$1$"&nbsp; $(\mu = 1$, ... , $5)$.&nbsp; Here&nbsp; $\underline{e}_\mu$&nbsp; is always the most likely error pattern of the class&nbsp; ${\it \Psi}_\mu$.&nbsp; The other group members result only from at least two bit errors.<br>
#The syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$&nbsp; is not possible with only one bit error.&nbsp; In creating the table,&nbsp; we then considered all&nbsp; $\binom{5}{2} = 10$&nbsp; error patterns&nbsp; $\underline{e}$&nbsp; with weight&nbsp; $w_{\rm H}(\underline{e}) = 2$.
#The first sequence found with syndrome&nbsp; $\underline{s}_6 = (1, 0, 1)$&nbsp; was chosen as coset leader&nbsp; $\underline{e}_6 = (1, 1, 0, 0, 0)$.&nbsp; With a different search order,&nbsp; the sequence&nbsp; $(0, 0, 1, 0, 1)$&nbsp; could also have resulted from&nbsp; ${\it \Psi}_6$.<br>
#A similar procedure was followed in determining the leader&nbsp; $\underline{e}_7 = (0, 1, 1, 0, 0)$&nbsp; of the coset&nbsp; ${\it \Psi}_7$,&nbsp; characterized by the common syndrome&nbsp; $\underline{s}_7 = (1, 1, 1)$.&nbsp; Also in the class&nbsp; ${\it \Psi}_7$&nbsp; there is another sequence with Hamming weight&nbsp; $w_{\rm H}(\underline{e}) = 2$,&nbsp; namely&nbsp; $(1, 0, 0, 0, 1)$.<br><br>

*The table only needs to be created once.&nbsp; First,&nbsp; the syndrome must be determined,&nbsp; e.g. for the received vector&nbsp;  $\underline{y} = (0, 1, 0, 0, 1)$:
 
 
::<math>\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T} = \begin{pmatrix}
0 &1 &0 &0 &1
\end{pmatrix} \cdot
\begin{pmatrix}
1 &1 &0 \\
0 &1 &1 \\
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1 \\
\end{pmatrix} = \begin{pmatrix}
0 &1 &0
\end{pmatrix}= \underline{s}_2
\hspace{0.05cm}.</math>
  
*Using the coset leader&nbsp; $\underline{e}_2 = (0, 0, 0, 1, 0)$&nbsp; from the above table&nbsp; $($red entry for &nbsp;$\mu =2)$,&nbsp; one finally arrives at the decoding result:
  
 
::<math>\underline{z} = \underline{y}  + \underline{e}_2  = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1) + (0,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0) = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1)
\hspace{0.05cm}.</math>}}<br>
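A possible brute-force construction of such a syndrome table,&nbsp; together with the subsequent table look-up,&nbsp; is sketched below in Python&nbsp; (illustrative names;&nbsp; only feasible for short codes, and ties between equally probable coset leaders are resolved by the search order, as noted above):

<syntaxhighlight lang="python">
from itertools import product
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])             # parity-check matrix of the (5, 2, 3) code
n = H.shape[1]

def syndrome(v, H):
    return tuple(np.mod(np.asarray(v) @ H.T, 2))

# Build the table once: for every syndrome keep the first error pattern of minimum
# Hamming weight (the "coset leader"); patterns are visited in order of increasing weight.
leader = {}
for e in sorted(product((0, 1), repeat=n), key=sum):
    leader.setdefault(syndrome(e, H), np.array(e))

def syndrome_decode(y, H):
    """Table look-up decoding:  z = y + e_mu  with the coset leader e_mu."""
    return np.mod(np.asarray(y) + leader[syndrome(y, H)], 2)

print(len(leader))                           # -> 8 cosets  (= 2^m with m = 3)
print(syndrome_decode([0, 1, 0, 0, 1], H))   # -> [0 1 0 1 1]   as in Example 3
</syntaxhighlight>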
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; From these examples it is already clear that&nbsp; &raquo;'''syndrome decoding involves considerable effort'''&laquo;&nbsp; if one cannot exploit certain properties,&nbsp; for example those of cyclic codes.

*For large block code lengths,&nbsp; this method fails completely.&nbsp; Thus,&nbsp; to decode a&nbsp; [https://en.wikipedia.org/wiki/BCH_code $\text{BCH code}$]&nbsp; $($the abbreviation stands for its inventors '''B'''ose, '''C'''haudhuri, '''H'''ocquenghem$)$&nbsp; with code parameters&nbsp; $n = 511$,&nbsp; $k = 259$&nbsp; and&nbsp; $d_{\rm min} = 61$,&nbsp; one would have to evaluate and store exactly&nbsp; $2^{511-259} \approx 10^{76}$&nbsp; error patterns of length&nbsp; $511$.

*Happily,&nbsp; however,&nbsp; there are special decoding algorithms for these and also for other codes of large block length,&nbsp; which lead to success with less effort.}}
  
 
== Coding gain - bit error rate with AWGN ==
<br>
We now consider the&nbsp; [[Digital_Signal_Transmission/Error_Probability_for_Baseband_Transmission#Definition_of_the_bit_error_rate|$\text{bit error rate}$]]&nbsp; for the following constellation:
[[File:EN_KC_T_1_5_S4.png|right|frame|Bit error rate for the&nbsp; $\text{(7, 4, 3)}$ Hamming code|class=fit]]
#Hamming code&nbsp; $\text{HC (7, 4, 3)}$,<br>
#AWGN&ndash;channel,&nbsp; characterized by the quotient&nbsp; $E_{\rm B}/N_0$&nbsp; (in dB),<br>
#Maximum Likelihood Decoder&nbsp; $\rm (ML)$&nbsp; with&nbsp; "Hard Decision"&nbsp; $\rm (HD)$&nbsp; and&nbsp; "Soft Decision"&nbsp; $\rm (SD)$,&nbsp; respectively.
 
  
  
 
It should be noted with regard to this graph:
*The black comparison curve applies e.g. to binary phase modulation&nbsp; $\rm (BPSK)$&nbsp; without coding.&nbsp; For the bit error rate&nbsp; $10^{-5}$&nbsp; one needs about&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 = 9.6 \, \rm dB$.<br>
  
*The red circles apply to the&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{HC (7, 4, 3)}$]]&nbsp; and hard decisions of the maximum likelihood decoder&nbsp; $\text{(ML&ndash;HD)}$.&nbsp; Syndrome decoding is a possible realization form for this.<br>
  
*This configuration brings an improvement over the comparison system only for&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 >6 \, \rm dB$.&nbsp; For&nbsp; $\rm BER =10^{-5}$&nbsp; one only needs&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.2 \, \rm dB$.<br>
  
*The green crosses for&nbsp; [[Channel_Coding/General_Description_of_Linear_Block_Codes#Some_properties_of_the_.287.2C_4.2C_3.29_Hamming_code|$\text{HC (7, 4, 3)}$]]&nbsp; and&nbsp; [[Channel_Coding/Soft-in_Soft-Out_Decoder| $\text{soft decision}$]]&nbsp; $\text{(ML&ndash;SD)}$&nbsp; are below the red curve throughout the range.&nbsp; For&nbsp; $\rm BER =10^{-5}$&nbsp; this gives&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0 \approx 7.8 \, \rm dB$.<br><br>
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; As the&nbsp; &raquo;'''coding gain'''&laquo;&nbsp; of a considered system configuration,
*characterized by its code and
*the way it is decoded,


we refer to the reduction of the&nbsp; $10 \cdot \lg \, E_{\rm B}/N_0$&nbsp; required for a given bit error rate&nbsp; $\rm (BER)$&nbsp; compared to the comparison system&nbsp; $($without coding$)$:

::<math>G_{\rm Code} (\hspace{0.05cm}\text{considered system} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) =10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0
\hspace{0.15cm}(\hspace{0.05cm}\text{comparison system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})-
10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0
\hspace{0.15cm}(\hspace{0.05cm}\text{considered system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})
\hspace{0.05cm}. </math>}}<br>
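For example,&nbsp; with the approximate values quoted above for&nbsp; $\rm BER =10^{-5}$,&nbsp; the coding gains of the two&nbsp; $\text{HC (7, 4, 3)}$&nbsp; configurations are:

::<math>G_{\rm Code} (\hspace{0.05cm}\text{HC (7, 4, 3), ML-HD} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) \approx 9.6\,{\rm dB} - 9.2\,{\rm dB} = 0.4\,{\rm dB}\hspace{0.05cm},</math>
::<math>G_{\rm Code} (\hspace{0.05cm}\text{HC (7, 4, 3), ML-SD} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) \approx 9.6\,{\rm dB} - 7.8\,{\rm dB} = 1.8\,{\rm dB}\hspace{0.05cm}.</math>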
  
 
== Decoding at the Binary Erasure Channel ==
<br>
Finally,&nbsp; it will be shown to what extent the decoder has to be modified
*if instead of the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC| $\text{BSC model}$]]&nbsp; ("Binary Symmetric Channel")

*the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC model}$]]&nbsp; ("Binary Erasure Channel")&nbsp; is used,


which does not produce errors but marks uncertain bits as&nbsp; "erasures".<br>
  
 
{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp; We consider again,&nbsp; as in&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Generalization_of_syndrome_coding|$\text{Example 3}$]],&nbsp; the systematic&nbsp; $\text{(5, 2, 3)}$&nbsp; block code
:$$\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.3cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.3cm}(1, 0, 1, 1, 0)  \hspace{0.05cm},\hspace{0.3cm}(1, 1, 1, 0, 1) \big \}.$$

The graphic shows the system model and gives exemplary values for the individual vectors.
*The left part of the picture&nbsp; (yellow background)&nbsp; is valid for the&nbsp; "BSC"&nbsp; with one bit error&nbsp; $(0 &#8594; 1)$&nbsp; at the third bit.

*The right part of the picture&nbsp; (green background)&nbsp; is for the&nbsp; "BEC"&nbsp; and shows two&nbsp; erasures&nbsp; $(\rm 1 &#8594; E)$&nbsp; at the second and the fourth bit.
 
[[File:EN_KC_T_1_5_S5.png|right|frame|Error correction with BSC and BEC]]

One recognizes:
*With the BSC,&nbsp; only&nbsp; $t = 1$&nbsp; bit error&nbsp; (marked in red in the left part)&nbsp; can be corrected due to&nbsp; $d_{\rm min} = 3$.&nbsp; If one restricts oneself to error detection,&nbsp; this works up to&nbsp; $e= d_{\rm min} -1 = 2$&nbsp; bit errors.<br>
  
*For the BEC,&nbsp; error detection makes no sense,&nbsp; because the channel itself already marks an uncertain bit as an&nbsp; "erasure"&nbsp; $\rm E$.&nbsp; The zeros and ones in the BEC received word&nbsp; $\underline{y}$&nbsp; are safe.&nbsp; Therefore the error correction works here up to&nbsp; $e = 2$&nbsp; erasures&nbsp; (marked in red in the right part)&nbsp; with certainty.<br>
  
*Also&nbsp; $e = 3$&nbsp; erasures are sometimes still correctable.&nbsp; For example,&nbsp; $\underline{y} \rm = (E, E, E, 1, 1)$&nbsp; can be corrected to&nbsp; $\underline{z} \rm = (0, 1, 0, 1, 1)$,&nbsp; since no other code word ends with two ones.&nbsp; But&nbsp; $\underline{y} \rm = (0, E, 0, E, E)$&nbsp; is not correctable,&nbsp; because besides&nbsp; $(0, 1, 0, 1, 1)$&nbsp; also the all-zero word allowed in the code matches it.<br>
  
*If it is ensured that there are no more than two erasures in any received word,&nbsp; the BEC block error probability&nbsp; ${\rm Pr}(\underline{z} \ne \underline{x}) = {\rm Pr}(\underline{v} \ne \underline{u}) \equiv 0$.&nbsp; In contrast,&nbsp; the corresponding block error probability in the BSC model always has a value greater than&nbsp; $0$.
  
  
Since after the BEC each received word is either decoded correctly or not at all,&nbsp; in the following we call the block&nbsp; $\underline{y} &#8594; \underline{z}$&nbsp; the&nbsp; "code word finder".&nbsp; An&nbsp; "estimation"&nbsp; takes place only in the BSC model.}}<br>
  
But how does the decoding of a received word &nbsp; $\underline{y}$ &nbsp; with erasures work algorithmically?
  
 
{{GraueBox|TEXT=
$\text{Example 5:}$&nbsp; Starting from the received word&nbsp; $\underline{y} \rm = (0, E, 0, E, 1)$&nbsp; in&nbsp; $\text{Example 4}$,&nbsp; we formally set the output of the code word finder to&nbsp; $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,&nbsp; where the symbols&nbsp; $z_2 \in \{0, \, 1\}$&nbsp; and&nbsp; $z_4 \in \{0, \, 1\}$&nbsp; are to be determined according to the following equation:
  
 
::<math>\underline{z} \cdot { \boldsymbol{\rm H} }^{\rm T}= \underline{0}
\hspace{0.3cm}{\rm resp.}\hspace{0.3cm}
{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}= \underline{0}^{\rm T} \hspace{0.05cm}.</math>
  
The task is now to implement this determination equation as efficiently as possible.&nbsp; The following calculation steps result:
*With the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; of the&nbsp; $\text{(5, 2, 3)}$&nbsp; block code and the vector&nbsp; $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,&nbsp; the above determination equation reads:
  
 
::<math>{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}
= \begin{pmatrix}
1 &0 &1 &0 &0 \\
1 &1 &0 &1 &0 \\
0 &1 &0 &0 &1
\end{pmatrix} \cdot
\begin{pmatrix}
0 \\ z_2 \\ 0 \\ z_4 \\ 1
\end{pmatrix} \stackrel{!}{=}
\begin{pmatrix}
0 \\ 0 \\ 0
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*We sum up the&nbsp; "correct bits"&nbsp; (German:&nbsp; "korrekt" &nbsp; &rArr; &nbsp; subscript&nbsp;"K")&nbsp; to the vector&nbsp; $\underline{z}_{\rm K}$&nbsp; and the&nbsp; "erased bits"&nbsp; to the vector&nbsp; $\underline{z}_{\rm E}$.&nbsp; Then we split the parity-check matrix&nbsp; $\boldsymbol{\rm H}$&nbsp; into the corresponding submatrices&nbsp; $\boldsymbol{\rm H}_{\rm K}$&nbsp; and&nbsp; $\boldsymbol{\rm H}_{\rm E}$:
  
 
::<math>\underline{z}_{\rm K} =(0, 0, 1)\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm K}=
\begin{pmatrix}
1 &1 &0 \\
1 &0 &0 \\
0 &0 &1
\end{pmatrix}
\hspace{0.3cm}\Rightarrow\hspace{0.3cm}
\text{columns 1, 3 and 5 of the parity-check matrix} \hspace{0.05cm},</math>
::<math>\underline{z}_{\rm E} = (z_2, z_4)\hspace{0.05cm},\hspace{0.35cm} { \boldsymbol{\rm H} }_{\rm E}=
\begin{pmatrix}
0 &0 \\
1 &1 \\
1 &0
\end{pmatrix}
 \hspace{1.05cm}\Rightarrow\hspace{0.3cm}
\text{columns 2 and 4 of the parity-check matrix} \hspace{0.05cm}.</math>
 
  
*Remembering that in&nbsp; $\rm GF(2)$&nbsp; subtraction equals addition,&nbsp; the above equation can be represented as follows:
  
 
::<math>{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T}
= { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}
\hspace{0.3cm}\Rightarrow\hspace{0.3cm}
\begin{pmatrix}
0 \\ 0 \\ 1
\end{pmatrix} =
\begin{pmatrix}
0 \\ z_2 + z_4 \\ z_2
\end{pmatrix}
\hspace{0.05cm}.</math>
  
*This leads to a linear system of equations with two equations for the two unknowns&nbsp; $z_2$&nbsp; and&nbsp; $z_4$&nbsp; $($each&nbsp; $0$&nbsp; or&nbsp; $1)$.

*From the last row follows&nbsp; $z_2 = 1$,&nbsp; and from the second row&nbsp; $z_2 + z_4 = 0$ &nbsp; &#8658; &nbsp; $z_4 = 1$.&nbsp; This gives the allowed code word&nbsp; $\underline{z} \rm = (0, 1, 0, 1, 1)$.}}<br>
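For this short code the determination of the erased bits can also be sketched directly in Python:&nbsp; the known bits are kept fixed and the few possible assignments of the erased positions are tested against&nbsp; $\boldsymbol{\rm H} \cdot \underline{z}^{\rm T} = \underline{0}$&nbsp; (illustrative names;&nbsp; for longer codes one would instead solve the&nbsp; $\rm GF(2)$&nbsp; system, e.g. by Gaussian elimination):

<syntaxhighlight lang="python">
from itertools import product
import numpy as np

H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])      # parity-check matrix of the (5, 2, 3) code

def bec_decode(y, H):
    """Erasure decoding: fill the positions marked 'E' so that the completed
    word z satisfies H · z^T = 0 over GF(2).
    Returns the first consistent code word; with too many erasures the
    solution need not be unique, and None is returned if none exists."""
    erased = [i for i, bit in enumerate(y) if bit == 'E']
    z = np.array([0 if bit == 'E' else int(bit) for bit in y])
    for trial in product((0, 1), repeat=len(erased)):   # at most 2^e candidates
        z[erased] = trial
        if not np.mod(H @ z, 2).any():                  # all parity-checks fulfilled
            return z
    return None

print(bec_decode([0, 'E', 0, 'E', 1], H))               # -> [0 1 0 1 1]  as in Example 5
</syntaxhighlight>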
  
 
== Exercises for the chapter ==

Latest revision as of 17:17, 30 November 2022

Block diagram and requirements


We start from the block diagram already shown in the chapter  $\text{Channel Models and Decision Structures}$  where the digital channel model used is mostly the  $\text{Binary Symmetric Channel}$ $\rm (BSC)$. 

For code word estimation,  we use the  "Maximum Likelihood Decision"  $\rm (ML)$,  which for binary codes   ⇒   $\underline{x} \in {\rm GF}(2^n)$  at the block level gives the same result as the  $\text{MAP Receiver}$.

Block diagram for decoding block codes

The task of the channel decoder can be described as follows:

  • The vector  $\underline{v}$  after decoding  (at the sink)  should match the information word  $\underline{u}$  as well as possible.
  • That is:   The  »block error probability«  should be as small as possible:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • Because of assignments  $\underline{x} = {\rm enc}(\underline{u})$  resp.  $\underline{v} = {\rm enc}^{-1}(\underline{z})$  also holds:
\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{z} \ne \underline{x}) \stackrel{!}{=} { \rm minimum}\hspace{0.05cm}.\]
  • Sought is the most likely sent code word  $\underline{y} = \underline{x} +\underline{e}$  for the given received word  $\underline{x}_i$,  which is passed on as result  $\underline{z}$:
\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}|\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.\]
  • For the BSC model,  both   $\underline{x}_i \in {\rm GF}(2^n)$   and   $\underline{y} \in {\rm GF}(2^n)$,  so the maximum likelihood decision rule can also be written using the  $\text{Hamming distance}$  $d_{\rm H}( \underline{y}, \, \underline{x}_i)$:
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]

Principle of syndrome decoding


Assumed here is a  $(n, \, k)$  block code with the parity-check matrix  $\boldsymbol{\rm H}$  and the systematic code words

\[\underline{x}\hspace{0.05cm} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_i, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n) = (u_1, u_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, u_k, p_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, p_{n-k})\hspace{0.05cm}. \]

With the error vector  $\underline{e}$  then applies to the received word:

\[\underline{y} = \underline{x} + \underline{e} \hspace{0.05cm}, \hspace{0.4cm} \underline{y} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{x} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.4cm} \underline{e} \in \hspace{0.1cm} {\rm GF}(2^n) \hspace{0.05cm}.\]

A bit error at position  $i$   ⇒   $y_i ≠ x_i$  is expressed by the  »error coefficient«  $e_i = 1$.

$\text{Definition:}$  The  »syndrome«  $\underline{s} = (s_0, s_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, s_{m-1})$  is calculated  (as row resp. column vector)  from the received word  $\underline{y}$  and the parity-check matrix  $\boldsymbol{\rm H}$  as follows:

\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T}\hspace{0.3cm}{\rm bzw.}\hspace{0.3cm} \underline{s}^{\rm T} = { \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T}\hspace{0.05cm}.\]
  • The vector length of  $\underline{s}$  is equal to  $m = n-k$  $($row number of  $\boldsymbol{\rm H})$.


The syndrome  $\underline{s}$  shows the following characteristics:

  • Because of the equation  $\underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{0}$   the syndrome  $\underline{s}$  does not depend on the code word  $\underline{x}$  but solely on the error vector  $\underline{e}$:
\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{x} \cdot { \boldsymbol{\rm H}}^{\rm T} + \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \hspace{0.05cm} \underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} \hspace{0.05cm}.\]
  • For sufficiently few bit errors,  $\underline{s}$  provides a clear indication of the error locations,  allowing full error correction.

$\text{Example 1:}$  Starting from the systematic  $\text{(7, 4, 3)}$ Hamming code,  the following result is obtained for the received vector  $\underline{y} = (0, 1, 1, 1, 0, 0, 1)$:

\[{ \boldsymbol{\rm H} } \cdot \underline{y}^{\rm T} = \begin{pmatrix} 1 &1 &1 &0 &1 &0 &0\\ 0 &1 &1 &1 &0 &1 &0\\ 1 &1 &0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \underline{s}^{\rm T} \hspace{0.05cm}.\]

Comparing the syndrome with the  $\text{parity-check equations}$  of the Hamming code,  we see that

  • most likely the fourth symbol  $(x_4 = u_4)$  of the code word has been falsified,
  • the code word estimator will thus yield the result  $\underline{z} = (0, 1, 1, 0, 0, 0, 1)$,
  • the decision is correct only if only one bit was falsified during transmission.

Below are the required corrections for the  $\text{(7, 4, 3)}$  Hamming code resulting from the calculated syndrome  $\underline{s}$  corresponding to the columns of the parity-check matrix:

\[\underline{s} = (0, 0, 0) \hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm no\hspace{0.15cm} correction}\hspace{0.05cm};\hspace{0.8cm}\underline{s} = (1, 0, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm}{\rm invert}\hspace{0.15cm}p_1\hspace{0.05cm};\]
\[\underline{s} =(0, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_3\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 0, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_1\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}p_2\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 0)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_3\hspace{0.05cm};\]
\[\underline{s} =(0, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_4\hspace{0.05cm};\hspace{1.63cm}\underline{s} = (1, 1, 1)\hspace{0.10cm} \Rightarrow\hspace{0.10cm} {\rm invert}\hspace{0.15cm}u_2\hspace{0.05cm}. \]


Generalization of syndrome coding


We continue to assume the BSC channel model. This means:

  • The received vector  $\underline{y}$   and the error vector  $\underline{e}$   are elements of  ${\rm GF}(2^n)$.
  • The possible code words  $\underline{x}_i$  belong to the code  $\mathcal{C}$,  which spans a  $(n-k)$-dimensional subspace of  ${\rm GF}(2^n)$.


Under this assumption,  we briefly summarize the results of the last sections:

  • Syndrome decoding is a realization possibility of maximum likelihood decoding of block codes.  One decides on the code word  $\underline{x}_i$   with the least Hamming distance to the received word  $\underline{y}$:
\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]
  • But the syndrome decoding is also the search for the most probable error vector  $\underline{e}$  that satisfies the condition  $\underline{e} \cdot { \boldsymbol{\rm H}}^{\rm T} = \underline{s}$.  The  "syndrome"  is thereby determined by the equation  
$$\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H}}^{\rm T} .$$
  • With the  $\text{Hamming weight}$  $w_{\rm H}(\underline{e})$  the second interpretation can also be mathematically formulated as follows:
\[\underline{z} = \underline{y} + {\rm arg} \min_{\underline{e}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} {\rm GF}(2^n)} \hspace{0.1cm} w_{\rm H}(\underline{e}_{\hspace{0.03cm}i})\hspace{0.05cm}.\]

$\text{Conclusion:}$  Note that the error vector  $\underline{e}$  as well as the received vector  $\underline{y}$  is an element of  ${\rm GF}(2^n)$  unlike the syndrome  $\underline{s} \in {\rm GF}(2^m)$  with number  $m = n-k$  of parity-check equations. This means,  that

  • the association between the syndrome  $\underline{s}$  and the error vector  $\underline{e}$  is not unique,  but
  • each  $2^k$  error vectors lead to the same syndrome  $\underline{s}$  which one groups together into a so-called   "coset".


$\text{Example 2:}$  The facts shall be illustrated by the example with parameters  $n = 5, \ k = 2$   ⇒   $m = n-k = 3$ .

Splitting the  $2^n$  error vectors into  "cosets"

You can see from this graph:

  1. The  $2^n = 32$  possible error vectors  $\underline{e}$  are divided into  $2^m = 8$  cosets  ${\it \Psi}_0$,  ...  , ${\it \Psi}_7$.  Explicitly drawn here are only the cosets  ${\it \Psi}_0$  and  ${\it \Psi}_5$.

  2. All  $2^k = 4$  error vectors of the coset  ${\it \Psi}_\mu$  lead to the syndrome  $\underline{s}_\mu$.

  3. Each coset  ${\it \Psi}_\mu$  has a  "coset leader"  $\underline{e}_\mu$,  namely the element with the minimum Hamming weight.


$\text{Example 3:}$  Starting from the systematic  $\text{(5, 2, 3)}$  code  $\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 0, 1) \big \}$  the syndrome decoding procedure is now described in detail.

Example  $\text{(5, 2, 3)}$ syndrome table with cosets
  • The generator matrix and the parity-check matrix are:
\[{ \boldsymbol{\rm G} } = \begin{pmatrix} 1 &0 &1 &1 &0 \\ 0 &1 &0 &1 &1 \end{pmatrix} \hspace{0.05cm},\]
\[{ \boldsymbol{\rm H} } = \begin{pmatrix} 1 &0 &1 &0 &0 \\ 1 &1 &0 &1 &0 \\ 0 &1 &0 &0 &1 \end{pmatrix} \hspace{0.05cm}.\]
  • The table summarizes the final result. Note:
  1. The index  $\mu$  is not identical to the binary value of  $\underline{s}_\mu$.
  2. The order is rather given by the number of ones in the coset leader  $\underline{e}_\mu$.
  3. For example,  the syndrome  $\underline{s}_5 = (1, 1, 0)$  and the syndrome  $\underline{s}_6 = (1, 0, 1)$,  although their binary values would suggest the reverse assignment.


  • To derive this table,  note:
  1. Row 1 refers to the syndrome  $\underline{s}_0 = (0, 0, 0)$  and the associated coset  ${\it \Psi}_0$.  The most likely error pattern here is  $(0, 0, 0, 0, 0)$   ⇒   no bit error,  which we call the  "coset leader"  $\underline{e}_0$.
  2. The other entries in the first row,  namely  $(1, 0, 1, 1, 0 )$,  $(0, 1, 0, 1, 1)$  and  $(1, 1, 1, 0, 1 )$,  also each yield the syndrome  $\underline{s}_0 = (0, 0, 0)$,  but they arise only with at least three bit errors and are correspondingly unlikely.
  3. In rows 2 to 6,  the respective coset leader  $\underline{e}_\mu$  contains exactly one  "$1$"  $(\mu = 1$, ... , $5)$.  Here  $\underline{e}_\mu$  is always the most likely error pattern of its coset  ${\it \Psi}_\mu$;  the other members of the coset arise only with at least two bit errors.
  4. The syndrome  $\underline{s}_6 = (1, 0, 1)$  is not possible with only one bit error.  In creating the table,  we therefore considered all  $\binom{5}{2} = 10$  error patterns  $\underline{e}$  with weight  $w_{\rm H}(\underline{e}) = 2$.
  5. The first sequence found with syndrome  $\underline{s}_6 = (1, 0, 1)$  was chosen as coset leader:  $\underline{e}_6 = (1, 1, 0, 0, 0)$.  With a different search order,  the sequence  $(0, 0, 1, 0, 1)$  from  ${\it \Psi}_6$  could also have been chosen.
  6. A similar procedure was followed in determining the leader  $\underline{e}_7 = (0, 1, 1, 0, 0)$  of the coset  ${\it \Psi}_7$,  characterized by the common syndrome  $\underline{s}_7 = (1, 1, 1)$.  In the class  ${\it \Psi}_7$,  too,  there is another sequence with Hamming weight  $w_{\rm H}(\underline{e}) = 2$,  namely  $(1, 0, 0, 0, 1)$.

  • The table only needs to be created once.  For decoding,  the syndrome must first be determined,  e.g. for the received vector  $\underline{y} = (0, 1, 0, 0, 1)$:
\[\underline{s} = \underline{y} \cdot { \boldsymbol{\rm H} }^{\rm T} = \begin{pmatrix} 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 1 &1 &0 \\ 0 &1 &1 \\ 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \\ \end{pmatrix} = \begin{pmatrix} 0 &1 &0 \end{pmatrix}= \underline{s}_2 \hspace{0.05cm}.\]
  • Using the coset leader  $\underline{e}_2 = (0, 0, 0, 1, 0)$  from the above table  $($red entry for  $\mu =2)$,  one finally arrives at the decoding result:
\[\underline{z} = \underline{y} + \underline{e}_2 = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1) + (0,\hspace{0.05cm} 0,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 0) = (0,\hspace{0.05cm} 1,\hspace{0.05cm} 0,\hspace{0.05cm} 1,\hspace{0.05cm} 1) \hspace{0.05cm}.\]
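
The steps of this example can be reproduced with a few lines of Python.  The sketch below builds the coset leader table from the parity-check matrix  $\boldsymbol{\rm H}$  given above by scanning the error patterns in order of increasing Hamming weight  (the first pattern found per syndrome becomes the leader)  and then decodes  $\underline{y} = (0, 1, 0, 0, 1)$:

```python
from itertools import combinations
import numpy as np

# Parity-check matrix of the (5, 2, 3) code from the example above
H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])
n = 5

def syndrome(v):
    return tuple(np.array(v) @ H.T % 2)

# Build the table: scan error patterns by increasing Hamming weight;
# the first pattern found for each syndrome becomes the coset leader.
leader = {}
for w in range(n + 1):
    for pos in combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(pos)] = 1
        leader.setdefault(syndrome(e), e)

def decode(y):
    y = np.array(y)
    return (y + leader[syndrome(y)]) % 2      # z = y + e_mu

print(decode([0, 1, 0, 0, 1]))                # -> [0 1 0 1 1], as above
```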


$\text{Conclusion:}$  From these examples it is already clear that  »syndrome decoding means a considerable effort«  if one cannot exploit special code properties,  e.g. those of cyclic codes.

  • For large block code lengths,  this method fails completely.  Thus,  to decode a  $\text{BCH code}$  $($the abbreviation stands for its inventors Bose,  Chaudhuri,  Hocquenghem$)$  with the code parameters  $n = 511$,  $k = 259$  and  $d_{\rm min} = 61$,  one would have to evaluate and store  $2^{511-259} \approx 10^{76}$  error patterns of length  $511$.
  • Fortunately,  however,  there are special decoding algorithms for these and also for other codes of large block length,  which lead to success with significantly less effort.

Coding gain - bit error rate with AWGN


We now consider the  $\text{bit error rate}$  for the following constellation:

Bit error rate for the  $\text{(7, 4, 3)}$ Hamming code
  1. Hamming code  $\text{HC (7, 4, 3)}$,
  2. AWGN channel,  characterized by the quotient  $E_{\rm B}/N_0$  (in dB),
  3. Maximum Likelihood Decoder  $\rm (ML)$  with  "Hard Decision"  $\rm (HD)$  and  "Soft Decision"  $\rm (SD)$,  respectively.


It should be noted with regard to this graph:

  • The black comparison curve applies,  for example,  to binary phase modulation  $\rm (BPSK)$  without coding.  For a bit error rate of  $10^{-5}$,  this requires about  $10 \cdot \lg \, E_{\rm B}/N_0 = 9.6 \, \rm dB$.
  • The red circles apply to the  $\text{HC (7, 4, 3)}$  and hard decisions of the maximum likelihood decoder  $\text{(ML–HD)}$.  Syndrome decoding is one possible realization of this.
  • This configuration brings an improvement over the comparison system only for  $10 \cdot \lg \, E_{\rm B}/N_0 >6 \, \rm dB$.  For  $\rm BER =10^{-5}$  one only needs  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 9.2 \, \rm dB$.
  • The green crosses for  $\text{HC (7, 4, 3)}$  and  $\text{soft decision}$  $\text{(ML–SD)}$  are below the red curve throughout the range.  For  $\rm BER =10^{-5}$  this gives  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 7.8 \, \rm dB$.

$\text{Definition:}$  The  »coding gain«  of a considered system configuration,

  • characterized by its code and
  • the way it is decoded,


is the reduction in  $10 \cdot \lg \, {E}_{\rm B}/N_0$  required for a given bit error rate  $\rm (BER)$  relative to the comparison system  $($without coding$)$:

\[G_{\rm Code} (\hspace{0.05cm}\text{considered system} \hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) =10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}\text{comparison system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm})- 10 \cdot {\rm lg}\hspace{0.1cm}{E}_{\rm B}/N_0 \hspace{0.15cm}(\hspace{0.05cm}\text{considered system}\hspace{0.05cm}\vert\hspace{0.05cm}{\rm BER}\hspace{0.05cm}) \hspace{0.05cm}. \]


Applied to the above graph, one obtains:

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-HD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 0.4\ {\rm dB}\hspace{0.05cm},\]

\[G_{\rm Code} (\hspace{0.05cm}{\rm Hamming \hspace{0.1cm}(7,\hspace{0.02cm}4,\hspace{0.02cm}3), ML-SD}\hspace{0.05cm}|\hspace{0.05cm}{\rm BER} = 10^{-5}\hspace{0.05cm}) = 1.8\ {\rm dB}\hspace{0.05cm}.\]
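
These numbers can be checked with a short script.  The sketch below  (Python with SciPy)  determines the  $10 \cdot \lg \, E_{\rm B}/N_0$  that uncoded BPSK needs for  ${\rm BER} = 10^{-5}$  from the relation  ${\rm BER} = {\rm Q}\big(\sqrt{2 \cdot E_{\rm B}/N_0}\big)$  and combines it with the values  $9.2\ \rm dB$  and  $7.8\ \rm dB$  read off the graph above:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def ber_bpsk_uncoded(ebn0_db):
    """Bit error rate of uncoded BPSK: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return norm.sf(np.sqrt(2 * ebn0))      # norm.sf(x) is the Q-function

target = 1e-5
ref_db = brentq(lambda x: ber_bpsk_uncoded(x) - target, 0.0, 15.0)
print(f"Uncoded BPSK at BER = 1e-5: {ref_db:.1f} dB")             # about 9.6 dB

# Required Eb/N0 values of the coded systems, read off the graph above
for name, needed_db in [("HC (7,4,3) ML-HD", 9.2), ("HC (7,4,3) ML-SD", 7.8)]:
    print(f"{name}: coding gain = {ref_db - needed_db:.1f} dB")   # 0.4 / 1.8 dB
```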

Decoding at the Binary Erasure Channel


Finally,  it will be shown to what extent the decoder has to be modified if,  instead of the BSC,  the BEC channel  (Binary Erasure Channel)  is used,  which does not produce errors but marks uncertain bits as  "erasures".

$\text{Example 4:}$  We consider again as in  $\text{Example 3}$  the systematic  $\text{(5, 2, 3)}$  block code 

$$\mathcal{C} = \big \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.3cm}(0, 1, 0, 1, 1) \hspace{0.05cm},\hspace{0.3cm}(1, 0, 1, 1, 0) \hspace{0.05cm},\hspace{0.3cm}(1, 1, 1, 0, 1) \big \}.$$

The graphic shows the system model and gives exemplary values for the individual vectors.

  • The left part of the picture  (yellow background)  is valid for  "BSC"  with one bit error  $(0 → 1)$  at the third bit.
  • The right part of the picture  (green background)  is for  "BEC"  and shows two  erasures  $(\rm 1 → E)$  at the second and the fourth bit.
Error correction with BSC and BEC


One recognizes:

  • With BSC only  $t = 1$  bit error  (marked in red in the left part)  can be corrected due to  $d_{\rm min} = 3$.  If one restricts oneself to error detection, this works up to  $e= d_{\rm min} -1 = 2$  bit errors.
  • For the BEC,  error detection makes no sense,  because the channel itself already marks each uncertain bit as an  "erasure"  $\rm E$.  The zeros and ones in the BEC received word  $\underline{y}$  are known to be correct.  Therefore the error correction works here with certainty up to  $e = 2$  erasures  (marked in red in the right part).
  • Even  $e = 3$  erasures can sometimes still be corrected.  For example,  $\underline{y} \rm = (E, E, E, 1, 1)$  can be corrected to  $\underline{z} \rm = (0, 1, 0, 1, 1)$,  since no other code word ends with two ones.  But  $\underline{y} \rm = (0, E, 0, E, E)$  is not correctable,  because besides  $(0, 1, 0, 1, 1)$  the all-zero word,  which is also allowed in the code,  matches the unerased bits.
  • If it is ensured that no received word contains more than two erasures,  the BEC block error probability is  ${\rm Pr}(\underline{z} \ne \underline{x}) = {\rm Pr}(\underline{v} \ne \underline{u}) \equiv 0$.  In contrast,  the corresponding block error probability in the BSC model is always greater than  $0$.


Since with the BEC each received word is either decoded correctly or not decoded at all,  we will from now on call the block  $\underline{y} → \underline{z}$  the  "code word finder".  An  "estimation"  takes place only in the BSC model.


But how does the decoding of a received word   $\underline{y}$   with erasures work algorithmically?

$\text{Example 5:}$  Starting from the received word  $\underline{y} \rm = (0, E, 0, E, 1)$  in  $\text{Example 4}$  we formally set the output of the code word finder to  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,  where the symbols  $z_2 \in \{0, \, 1\}$  and  $z_4 \in \{0, \, 1\}$  are to be determined according to the following equation:

\[\underline{z} \cdot { \boldsymbol{\rm H} }^{\rm T}= \underline{0} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T}= \underline{0}^{\rm T} \hspace{0.05cm}.\]

The task is now to implement this determination equation as efficiently as possible.  The following calculation steps result:

  • With the parity-check matrix  $\boldsymbol{\rm H}$  of the  $\text{(5, 2, 3)}$  block code and the vector  $\underline{z} \rm = (0, z_2, 0, z_4, 1)$,  the above determination equation reads:
\[{ \boldsymbol{\rm H} } \cdot \underline{z}^{\rm T} = \begin{pmatrix} 1 &0 &1 &0 &0\\ 1 &1 &0 &1 &0\\ 0 &1 &0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ z_2 \\ 0 \\ z_4 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \hspace{0.05cm}.\]
  • We sum up the  "correct bits"  (German:  "korrekt"   ⇒   subscript "K")  to the vector  $\underline{z}_{\rm K}$  and the  "erased bits"  to the vector  $\underline{z}_{\rm E}$.  Then we split the parity-check matrix  $\boldsymbol{\rm H}$  into the corresponding submatrices  $\boldsymbol{\rm H}_{\rm K}$  and  $\boldsymbol{\rm H}_{\rm E}$:
\[\underline{z}_{\rm K} =(0, 0, 1)\hspace{0.05cm},\hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm K}= \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \hspace{0.3cm}\Rightarrow\hspace{0.3cm} \text{Columns 1, 3 and 5 of the parity-check matrix} \hspace{0.05cm},\]
\[\underline{z}_{\rm E} = (z_2, z_4)\hspace{0.05cm},\hspace{0.35cm} { \boldsymbol{\rm H} }_{\rm E}= \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \hspace{1.05cm}\Rightarrow\hspace{0.3cm} \text{Columns 2 and 4 of the parity-check matrix} \hspace{0.05cm}.\]
  • Remembering that in  $\rm GF(2)$  subtraction equals addition,  the above equation can be represented as follows:
\[{ \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} + { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= \underline{0}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} { \boldsymbol{\rm H} }_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T}= { \boldsymbol{\rm H} }_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} \begin{pmatrix} 0 &0\\ 1 &1\\ 1 &0 \end{pmatrix} \cdot \begin{pmatrix} z_2 \\ z_4 \end{pmatrix} = \begin{pmatrix} 1 &1 &0\\ 1 &0 &0\\ 0 &0 &1 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \hspace{0.05cm}.\]
  • This leads to a linear system with three equations  (the first of which is trivially satisfied)  for the two unknowns  $z_2$  and  $z_4$  $($each  $0$  or  $1)$.
  • From the last row it follows that  $z_2 = 1$,  and from the second row  $z_2 + z_4 = 0$   ⇒   $z_4 = 1$.  This gives the valid code word  $\underline{z} \rm = (0, 1, 0, 1, 1)$.
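
A compact Python sketch of this procedure:  the parity-check matrix is split into  $\boldsymbol{\rm H}_{\rm K}$  and  $\boldsymbol{\rm H}_{\rm E}$  according to the erasure positions,  and the small system  $\boldsymbol{\rm H}_{\rm E} \cdot \underline{z}_{\rm E}^{\rm T} = \boldsymbol{\rm H}_{\rm K} \cdot \underline{z}_{\rm K}^{\rm T}$  is solved by Gaussian elimination over  ${\rm GF}(2)$  (assuming there are few enough erasures for the solution to be unique):

```python
import numpy as np

# Parity-check matrix of the (5, 2, 3) code from the examples above
H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])

def solve_gf2(A, b):
    """Gaussian elimination over GF(2); assumes the solution is unique."""
    A, b = A.copy(), b.copy()
    m, n = A.shape
    pivot_cols, row = [], 0
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]], b[[row, piv]] = A[[piv, row]], b[[piv, row]]   # swap rows
        for r in range(m):                    # clear the column in all other rows
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivot_cols.append(col)
        row += 1
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivot_cols):
        x[col] = b[r]
    return x

def bec_codeword_finder(y):
    """y is a list of 0, 1 and 'E' (erasure); returns the reconstructed code word."""
    erased = [i for i, bit in enumerate(y) if bit == 'E']
    known  = [i for i, bit in enumerate(y) if bit != 'E']
    z_K = np.array([y[i] for i in known], dtype=int)
    rhs = H[:, known] @ z_K % 2               # H_K * z_K^T
    z_E = solve_gf2(H[:, erased], rhs)        # solve H_E * z_E^T = H_K * z_K^T
    z = np.zeros(len(y), dtype=int)
    z[known], z[erased] = z_K, z_E
    return z

print(bec_codeword_finder([0, 'E', 0, 'E', 1]))   # -> [0 1 0 1 1]
```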


Exercises for the chapter


Exercise 1.11: Syndrome Decoding

Exercise 1.11Z: Syndrome Decoding again

Exercise 1.12: Hard Decision vs. Soft Decision

Exercise 1.12Z: Comparison of HC (7, 4, 3) and HC (8, 4, 4)

Exercise 1.13: Binary Erasure Channel Decoding

Exercise 1.13Z: Binary Erasure Channel Decoding again