Channel Coding/Error Probability and Application Areas

From LNTwww
 
{{Header
|Untermenü=Reed–Solomon–Codes and Their Decoding
|Vorherige Seite=Error Correction According to Reed-Solomon Coding
|Nächste Seite=Basics of Convolutional Coding
}}
  
== Block error probability for RSC and BDD ==
 
<br>
For the error probability calculation we start from the same block diagram as in the chapter&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding| "Error Correction According to Reed-Solomon Coding"]],&nbsp; but here we restrict the code word estimator&nbsp; $(\underline {y} &#8594; \underline {z})$&nbsp; to&nbsp; [[Channel_Coding/Error_Correction_According_to_Reed-Solomon_Coding#Bounded_Distance_Decoding_Procedure| $\text{Bounded Distance Decoding}$]]&nbsp; $\rm (BDD)$.&nbsp; For&nbsp; "Maximum Likelihood Decoding"&nbsp; the results are slightly better.<br>
  
[[File:EN_KC_T_2_6_S1.png|right|frame|System model with Reed&ndash;Solomon coding,&nbsp; $m$&ndash;BSC&nbsp; and&nbsp; ''Bounded Distance Decoding''|class=fit]]
  
Let the block error probability be defined as follows:
  
::<math>{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u})= { \rm Pr}( \underline{z} \ne \underline{c}) = { \rm Pr}( f >t) \hspace{0.05cm}.</math>
  
Due to the BDD assumption,&nbsp; the same simple result is obtained as for the binary block codes,&nbsp; namely the probability that the number&nbsp; $f$&nbsp; of errors in the block&nbsp; $($received word$)$&nbsp; is greater than the correctability&nbsp; $t$&nbsp; of the code.

Since the random variable&nbsp; $f$&nbsp; $($number of errors in the block$)$&nbsp; has a&nbsp; [[Theory_of_Stochastic_Signals/Binomial_Distribution#Probabilities_of_the_binomial_distribution| $\text{binomial distribution}$]]&nbsp; in the range&nbsp; $0 &#8804; f &#8804; n$,&nbsp; we obtain:

::<math>{\rm Pr(block\:error)}  =
\sum_{f = t + 1}^{n} {n \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{n-f} \hspace{0.05cm}.</math>
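This tail sum can be evaluated directly. A minimal Python sketch follows; the function name and the short-code parameters&nbsp; $n = 7$,&nbsp; $t = 2$,&nbsp; $\varepsilon_{\rm S} = 0.01$&nbsp; are illustrative and not taken from the text:

```python
from math import comb

def block_error_probability(n: int, t: int, eps_s: float) -> float:
    """Binomial tail Pr(f > t): probability that more than t of the n
    symbols are falsified on an m-BSC with symbol error probability eps_s."""
    return sum(
        comb(n, f) * eps_s**f * (1.0 - eps_s) ** (n - f)
        for f in range(t + 1, n + 1)
    )

# Illustrative (hypothetical) parameters: a short code with n = 7, t = 2
print(block_error_probability(7, 2, 0.01))
```

For large&nbsp; $n$&nbsp; only the first few summands matter when&nbsp; $\varepsilon_{\rm S}$&nbsp; is small, so the sum converges quickly.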
  
*But while in the first main chapter&nbsp; $c_i &#8712; \rm GF(2)$&nbsp; always applied,&nbsp; so that the&nbsp; $f$&nbsp; transmission errors were each&nbsp; &raquo;'''bit errors'''&laquo;,

*in Reed&ndash;Solomon coding a transmission error&nbsp; $(y_i \ne c_i)$&nbsp; is always understood to be a&nbsp; &raquo;'''symbol error'''&laquo;,&nbsp; because of&nbsp; $c_i &#8712; {\rm GF}(2^m)$&nbsp; and&nbsp; $y_i &#8712; {\rm GF}(2^m)$.


This results in the following differences:
# &nbsp; The discrete channel model used to describe the binary block codes is the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC| $\text{Binary Symmetric Channel}$]]&nbsp; $\rm (BSC)$.&nbsp; Each bit&nbsp; $c_i$&nbsp; of a code word is falsified&nbsp; $(y_i \ne c_i)$&nbsp; with probability&nbsp; $\varepsilon$&nbsp; and correctly transmitted&nbsp; $(y_i = c_i)$&nbsp; with probability&nbsp; $1-\varepsilon$.<br>
# &nbsp; For Reed&ndash;Solomon coding,&nbsp; one must replace the&nbsp; "1&ndash;BSC"&nbsp; model with the&nbsp; "$m$&ndash;BSC"&nbsp; model.&nbsp; A symbol&nbsp; $c_i$&nbsp; is falsified with probability&nbsp; $\varepsilon_{\rm S}$&nbsp; into another symbol&nbsp; $y_i$&nbsp; $($no matter which one$)$&nbsp; and arrives unfalsified at the receiver with probability&nbsp; $1-\varepsilon_{\rm S}$.<br><br>
  
 
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp;
We start from the BSC parameter&nbsp; $\varepsilon = 0.1$&nbsp; and consider Reed&ndash;Solomon code symbols&nbsp; $c_i &#8712; {\rm GF}(2^8)$ &nbsp; &#8658; &nbsp; $m = 8$, &nbsp; $q = 256$, &nbsp; $n = 255$.
  
*For a symbol falsification&nbsp; $(y_i \ne c_i)$&nbsp; a single wrong bit is already sufficient.

*Or,&nbsp; expressed differently: &nbsp; If&nbsp; $y_i = c_i$&nbsp; is to hold,&nbsp; then all&nbsp; $m = 8$&nbsp; bits of the code symbol must be transmitted correctly:
  
 
::<math>1 - \varepsilon_{\rm S} = (1 - \varepsilon)^m = 0.9^8 \approx 0.43
\hspace{0.05cm}.</math>
  
*Thus,&nbsp; for the&nbsp; "8&ndash;BSC"&nbsp; model,&nbsp; the falsification probability is&nbsp; $\varepsilon_{\rm S} &asymp; 0.57$.

*With the further assumption that the falsification of&nbsp; $c_i = \beta$&nbsp; into any other symbol&nbsp; $y_i = \gamma \ne \beta$&nbsp; is equally likely,&nbsp; we obtain:

:$${\rm Pr}(y_i  = \gamma\hspace{0.05cm}\vert \hspace{0.05cm}c_i  = \beta) = 0.57/255 \approx 0.223\%.$$}}<br>
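The bit-to-symbol conversion of Example 1 can be checked numerically; a small Python sketch with the values from the example:

```python
# 1-BSC -> 8-BSC conversion from Example 1: a symbol is correct
# only if all m bits of the symbol are transmitted correctly.
eps = 0.1                         # BSC bit falsification probability
m = 8                             # bits per symbol, c_i in GF(2^8)
q = 2**m                          # field size q = 256

eps_s = 1 - (1 - eps) ** m        # symbol falsification probability
pr_single = eps_s / (q - 1)       # one specific wrong symbol (uniform assumption)

print(round(eps_s, 2))            # 0.57
print(round(100 * pr_single, 3))  # 0.223  (percent)
```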
  
== Application of Reed&ndash;Solomon coding for binary channels ==
 
<br>
The prerequisites for the following calculation of the block error probability of a system with Reed&ndash;Solomon coding and conversion to binary symbols are summarized in the diagram:
[[File:EN_KC_T_2_6_S2a_neu_v3.png|right|frame|Application of Reed-Solomon coding for binary channels.&nbsp; At point<br>'''(1)''' &nbsp;$k$&ndash;bit symbols; &nbsp;  '''(2)''' &nbsp;$n$&ndash;bit symbols; &nbsp; '''(3)''' &nbsp;$n$&ndash;bit symbols;  &nbsp; '''(4)''' &nbsp;$(n \cdot m)$&ndash;bit blocks|class=fit]]

# Assume&nbsp; $(n,\ k)$ Reed&ndash;Solomon coding with symbols from&nbsp; ${\rm GF}(2^m)$.&nbsp; The smaller the code rate&nbsp; $R=k/n$,&nbsp; the less information can be transmitted at a fixed data rate.
# Each symbol is represented by&nbsp; $m$&nbsp; bits &nbsp; &#8658; &nbsp; "binary mapping".&nbsp; A block&nbsp; $($code word &nbsp;$\underline {c} )$&nbsp; thus consists of&nbsp; $n$&nbsp; symbols or&nbsp; $n \cdot m$&nbsp; binary characters&nbsp; $($bits$)$,&nbsp; combined in&nbsp; $\underline {c}_{\rm bin} $.
# In addition,&nbsp; the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN channel}$]],&nbsp; characterized by the parameter&nbsp; $E_{\rm B}/N_0 $,&nbsp; is assumed.&nbsp; According to this channel model the transmission is bipolar: &nbsp; "0" &nbsp; &#8596; &nbsp; $+1$&nbsp; and&nbsp; "1" &nbsp; &#8596; &nbsp; $-1$.
# The receiver makes hard decisions at the bit level.&nbsp; Before decoding,&nbsp; including error correction,&nbsp; the binary symbols are converted back to&nbsp; ${\rm GF}(2^m)$&nbsp; symbols.

The&nbsp; "Bounded Distance Decoding"&nbsp; equation from the last section,

::<math>{\rm Pr(block\:error)}  =
\sum_{f = t + 1}^{n} {n \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{n-f} \hspace{0.05cm},</math>
  
is based on the&nbsp; "$m$&ndash;BSC"&nbsp; channel model.&nbsp; Starting from the AWGN channel&nbsp; $($"Additive White Gaussian Noise"$)$,&nbsp; one arrives
*with the&nbsp; [[Theory_of_Stochastic_Signals/Gaussian_Distributed_Random_Variables#Exceedance_probability|$\text{complementary Gaussian error integral}$]]&nbsp; ${\rm Q}(x)$&nbsp; at the&nbsp; &raquo;'''bit error probability'''&laquo; &nbsp; &rArr; &nbsp; "1&ndash;BSC"&nbsp; parameter:
  
 
::<math>\varepsilon = {\rm Q} \big (\sqrt{2 \cdot E_{\rm S}/N_0} \big )
\hspace{0.05cm},</math>
  
*and from this at the&nbsp; &raquo;'''symbol error probability'''&laquo; &nbsp; &rArr; &nbsp; "$m$&ndash;BSC"&nbsp; parameter:
  
 
::<math>\varepsilon_{\rm S} = 1 - (1 - \varepsilon)^m
\hspace{0.05cm}.</math>
  
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; With
#&nbsp; the code rate &nbsp; $R=k/n = 239/255 = 0.9373$,
#&nbsp; the AWGN parameter &nbsp; $10 \cdot \lg  \ E_{\rm B}/N_0 = 7 \, \rm dB$ &nbsp; &#8658; &nbsp; $E_{\rm B}/N_0 &asymp; 5$,
#&nbsp; eight bits per symbol &nbsp; &#8658; &nbsp; $m = 8$,&nbsp; and
#&nbsp; the code length &nbsp; $n = 2^8-1 = 255$,
  
  
one obtains the following numerical values for

*the parameter&nbsp; $\varepsilon$&nbsp; of the&nbsp; "1&ndash;BSC"&nbsp; model &nbsp; &rArr; &nbsp; "bit error probability":

:$$\varepsilon = {\rm Q} \big (\sqrt{2 \cdot 0.9373 \cdot 5} \big ) =  {\rm Q} \big (3.06\big ) \approx 1.1 \cdot 10^{-3}=0.11 \%,$$

*the parameter&nbsp; $\varepsilon_{\rm S}$&nbsp; of the&nbsp; "8&ndash;BSC"&nbsp; model &nbsp; &rArr; &nbsp; "symbol error probability":

:$$\varepsilon_{\rm S} = 1 - (1 - 1.1 \cdot 10^{-3})^8 = 1 - 0.9989^8 = 1 - 0.9912 \approx 0.88 \%
\hspace{0.2cm}(\approx 8 \cdot \varepsilon)\hspace{0.05cm}.$$
  
&rArr; &nbsp; Every single symbol is thus transmitted error-free with a probability of more than&nbsp; $99$&nbsp; percent.}}<br>
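The chain&nbsp; AWGN&nbsp; &rarr;&nbsp; 1&ndash;BSC&nbsp; &rarr;&nbsp; $m$&ndash;BSC&nbsp; from Example 2 can be reproduced numerically. A Python sketch follows, using the standard identity&nbsp; ${\rm Q}(x) = 0.5 \cdot {\rm erfc}(x/\sqrt{2})$&nbsp; and the parameter values from the example:

```python
from math import erfc, sqrt

def Q(x: float) -> float:
    """Complementary Gaussian error integral Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

R = 239 / 255                # code rate of the RSC (255, 239, 17)
EbN0 = 10 ** (7 / 10)        # 10*lg Eb/N0 = 7 dB  =>  Eb/N0 = 5.01
m = 8                        # bits per code symbol

eps = Q(sqrt(2 * R * EbN0))  # "1-BSC" parameter (bit error probability)
eps_s = 1 - (1 - eps) ** m   # "8-BSC" parameter (symbol error probability)

print(eps)                   # about 1.1e-3
print(eps_s)                 # about 0.88e-2, i.e. roughly 8 * eps
```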
  
== BER of Reed&ndash;Solomon coding for binary channels ==
 
<br>
The following graph shows the block error probabilities given in&nbsp; [Liv10]<ref name='Liv10'>Liva, G.:&nbsp; Channel Coding.&nbsp; Lecture Notes, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.</ref>&nbsp; as a function of the AWGN quotient&nbsp; $10 \cdot \lg \ E_{\rm B}/N_0$.
Shown are the calculated curves&nbsp; ${ \rm Pr}( \underline{v} \ne \underline{u})$&nbsp; for two different Reed&ndash;Solomon codes corresponding to the&nbsp; "Deep Space Standards"&nbsp; of&nbsp; $\rm CCSDS$&nbsp; $($"Consultative Committee for Space Data Systems"$)$,&nbsp; namely

[[File:EN_KC_T_2_6_S2b.png|right|frame|Block error probabilities of two Reed-Solomon codes of length&nbsp; $n = 255$ <br><u>Note:</u> &nbsp; In&nbsp; [[Aufgaben:Exercise_2.15:_Block_Error_Probability_with_AWGN|"Exercise 2.15"]]&nbsp; you are to carry out this calculation in full for the&nbsp; $\text{RSC (7, 3, 5)}_{8}$&nbsp; &ndash; that is,&nbsp; for somewhat clearer parameters.|class=fit]]

# the&nbsp; $\text{RSC (255, 239, 17)}_{256}$&nbsp; with code rate&nbsp; $R=0.9373$,&nbsp; which can correct up to&nbsp; $t = 8$&nbsp; errors,&nbsp; and
# the&nbsp; $\text{RSC (255, 223, 33)}_{256}$&nbsp; with higher correction capability&nbsp; $(t = 16)$&nbsp; due to its smaller code rate.


We analyze the point highlighted in yellow in the graph,&nbsp; valid for
*the&nbsp; $\text{RSC (255, 239, 17)}_{256}$&nbsp; and
*$10 \cdot \lg \ E_{\rm B}/N_0 = 7.1 \,\rm dB$ &nbsp; &#8658; &nbsp; $\varepsilon_{\rm S} = 0.007$.
 
  
According to the graph,&nbsp; the corresponding block error probability is

::<math>{\rm Pr(block\:error)}  =
\sum_{f =9}^{255} {255 \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{255-f}\approx 10^{-4} \hspace{0.05cm}.</math>
  
Dominant here is the first term&nbsp; $($for&nbsp; $f=9)$,&nbsp; which alone already accounts for about&nbsp; $80\%$&nbsp; of the sum:
  
:$${255 \choose 9} \approx 1.1 \cdot 10^{16}\hspace{0.05cm},\hspace{0.15cm}
\varepsilon_{\rm S}^9 \approx 4 \cdot 10^{-20}\hspace{0.05cm},\hspace{0.15cm}
(1 -\varepsilon_{\rm S})^{246} \approx 0.18$$
::$$ \Rightarrow  \hspace{0.15cm} {\rm Pr}(f = 9) \approx 8 \cdot 10^{-5}
\hspace{0.05cm}.$$
  
This shows that the summation may be terminated after only a few terms.<br>
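That the first term dominates can be verified numerically; a Python sketch with the values&nbsp; $n = 255$,&nbsp; $t = 8$&nbsp; and&nbsp; $\varepsilon_{\rm S} = 0.007$&nbsp; used above:

```python
from math import comb

eps_s = 0.007                # symbol error probability at 7.1 dB
n, t = 255, 8                # RSC (255, 239, 17) corrects up to t = 8 errors

terms = [comb(n, f) * eps_s**f * (1 - eps_s) ** (n - f)
         for f in range(t + 1, n + 1)]
total = sum(terms)

print(terms[0])              # Pr(f = 9), about 8e-5
print(total)                 # block error probability, about 1e-4
print(terms[0] / total)      # about 0.8: the first term dominates
```

The ratio of consecutive terms is roughly&nbsp; $\varepsilon_{\rm S} \cdot (n-f)/(f+1)$,&nbsp; which for small&nbsp; $\varepsilon_{\rm S}$&nbsp; explains the rapid decay.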
  
The results shown in the graph can be summarized as follows:
# &nbsp; For small&nbsp; $E_{\rm B}/N_0$&nbsp; $($of the AWGN channel$)$,&nbsp; the Reed&ndash;Solomon codes are completely unsuitable.
# &nbsp; For&nbsp; $10 \cdot \lg \ E_{\rm B}/N_0 < 6 \,\rm dB$,&nbsp; both codes lie above the&nbsp; $($black$)$&nbsp; comparison curve for uncoded transmission.
# &nbsp; The calculation for the&nbsp; $\text{RSC (255, 223, 33)}_{256}$&nbsp; differs only in the lower summation limit&nbsp; $(f_{\rm min} = 17)$&nbsp; and in a slightly larger&nbsp; $\varepsilon_{\rm S}$ &nbsp;$($because&nbsp; $R = 0.8745)$.
# &nbsp; At&nbsp; $\rm BER = 10^{-6}$,&nbsp; this&nbsp; "red code"&nbsp; $($with&nbsp; $t = 16)$&nbsp; is only about&nbsp; $1 \, \rm dB$&nbsp; better than the&nbsp; "green code"&nbsp; $($with&nbsp; $t = 8)$.
# &nbsp; The results of both codes are thus rather disappointing.
  
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp;
*Reed&ndash;Solomon codes do not perform well on the memoryless AWGN channel.&nbsp; Both codes are more than&nbsp; $4 \, \rm dB$&nbsp; away from the information-theoretical&nbsp; [[Channel_Coding/Information_Theoretical_Limits_of_Channel_Coding#AWGN_channel_capacity_for_binary_input_signals| $\text{Shannon bound}$]].

*Reed&ndash;Solomon codes are,&nbsp; however,&nbsp; very effective on so-called&nbsp; [[Digital_Signal_Transmission/Burst_Error_Channels| $\text{burst error channels}$]].&nbsp; They are therefore mainly used for fading channels,&nbsp; storage systems,&nbsp; etc.

*Reed&ndash;Solomon codes are not perfect.&nbsp; The consequences of this are covered in&nbsp; [[Aufgaben:Exercise_2.16:_Bounded_Distance_Decoding:_Decision_Regions|"Exercise 2.16"]].}}<br>
 
  
== Typical applications with Reed&ndash;Solomon coding ==
 
<br>
Reed&ndash;Solomon coding is often applied according to the diagram together with an&nbsp; "inner code"&nbsp; in cascaded form.
[[File:EN_KC_T_2_6_S3.png|right|frame|Code scheme with cascading|class=fit]]

*The inner code is almost always a binary code,&nbsp; and in satellite and mobile communications often a convolutional code.

*If one wishes to study only the Reed&ndash;Solomon encoding/decoding,&nbsp; one replaces the gray-shaded components in a simulation with a single block called a&nbsp; "super channel".

*Such a&nbsp; &raquo;'''concatenated coding system'''&laquo;&nbsp; is particularly efficient if an interleaver is connected between the two encoders to further break up burst errors.
<br clear=all>
 
 
 
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; The diagram shows an example of such an interleaver,&nbsp; where we restrict ourselves to a&nbsp; $20 &times; 4$&nbsp; matrix.&nbsp; In practice,&nbsp; these matrices are significantly larger.<br>

[[File:EN_KC_T_2_6_S4_neu.png|right|frame|Interleaver matrix for&nbsp; $20 &times; 4$&nbsp; symbols|class=fit]]

The interleaver is written row by row and read out column by column&nbsp; $($blue labeling$)$.&nbsp; With the de&ndash;interleaver&nbsp; $($green labeling$)$&nbsp; the order is exactly reversed.

*The Reed&ndash;Solomon symbols are therefore not forwarded to the inner encoder consecutively,&nbsp; but in the specified order as symbols&nbsp; '''1, 21, 41, 61, 2, 22,'''&nbsp; etc.

*A contiguous area&nbsp; $($here the symbols&nbsp; '''42, 62, 3, 23, 43, 63, 4, 24''' &nbsp; &#8658; &nbsp; around the rectangle outlined in red below$)$&nbsp; is destroyed on the channel,&nbsp; for example by a scratch on the&nbsp; "storage medium"&nbsp; channel.

*After de&ndash;interleaving,&nbsp; the symbol order is again&nbsp; '''1, 2, 3,''' ...&nbsp; The red outlined circles show that this burst error has now been largely&nbsp; "broken up".}}
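The row-write/column-read principle of Example 3 can be sketched in a few lines of Python; the matrix geometry and the burst positions are taken from the example, while the function names are of course only illustrative:

```python
def interleave(symbols, rows=4, cols=20):
    """Write row by row into a rows x cols matrix, read out column by column."""
    assert len(symbols) == rows * cols
    matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=4, cols=20):
    """Inverse operation: write column by column, read out row by row."""
    assert len(symbols) == rows * cols
    matrix = [[None] * cols for _ in range(rows)]
    for i, s in enumerate(symbols):
        matrix[i % rows][i // rows] = s
    return [matrix[r][c] for r in range(rows) for c in range(cols)]

src = list(range(1, 81))          # symbols 1 ... 80
tx = interleave(src)
print(tx[:8])                     # [1, 21, 41, 61, 2, 22, 42, 62]

# A burst destroys 8 consecutive channel symbols: 42, 62, 3, 23, 43, 63, 4, 24
burst = set(tx[6:14])
rx = deinterleave(tx)
hit_positions = sorted(i + 1 for i, s in enumerate(rx) if s in burst)
print(hit_positions)              # [3, 4, 23, 24, 42, 43, 62, 63]
```

After de-interleaving, the eight destroyed symbols appear only in isolated pairs, which a code with&nbsp; $t \ge 2$&nbsp; per block can handle far more easily than the original burst.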
 
  
  
 
{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp; A very widespread application of Reed&ndash;Solomon coding &ndash; and also the most commercially successful &ndash; is the&nbsp; &raquo;'''Compact Disc'''&laquo;&nbsp; $\rm (CD)$,&nbsp; whose error correction mechanism has already been described in the&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Some_introductory_examples_of_error_correction| "introduction chapter"]]&nbsp; $($Example 7$)$&nbsp; of this book.&nbsp; Here the inner code is also a Reed&ndash;Solomon code,&nbsp; and the concatenated coding system can be described as follows:
# Both channels of the stereo audio signal are sampled at&nbsp; $\text{44.1 kHz}$&nbsp; and each sample is represented digitally with&nbsp; $32$&nbsp; bits&nbsp; $($four bytes$)$.
# Grouping six samples results in a frame&nbsp; $(192$&nbsp; bits$)$&nbsp; and thus&nbsp; $24$&nbsp; code symbols from the Galois field&nbsp; $\rm GF(2^8)$.&nbsp; Each code symbol thus corresponds to exactly one byte.
# The first Reed&ndash;Solomon code with rate&nbsp; $R_1 = 24/28$&nbsp; delivers&nbsp; $28$&nbsp; bytes,&nbsp; which are fed to an interleaver of size&nbsp; $28 &times; 109$&nbsp; bytes.&nbsp; The readout is done&nbsp; $($in a complicated way$)$&nbsp; diagonally.
# The interleaver distributes contiguous bytes over a large area of the entire disc.&nbsp; This resolves so-called&nbsp; "bursts",&nbsp; caused for example by scratches on the&nbsp; $\rm CD$.
# Together with the second Reed&ndash;Solomon code&nbsp; $($rate&nbsp; $R_2 = 28/32)$,&nbsp; the total rate is&nbsp; $R = (24/28) &middot; (28/32) = 3/4$.&nbsp; Both codes can each correct&nbsp; $t = 2$&nbsp; symbol errors.
# The two component codes&nbsp; $\text{RSC (28, 24, 5)}$&nbsp; and&nbsp; $\text{RSC (32, 28, 5)}$&nbsp; are each based on the Galois field&nbsp; $\rm GF(2^8)$,&nbsp; which would actually mean a code length of&nbsp; $n = 255$.
# The shorter component codes needed here result from the&nbsp; $\text{RSC (255, 251, 5)}_{256}$&nbsp; by&nbsp; "shortening" &nbsp; &rArr; &nbsp; see&nbsp; [[Aufgaben:Exercise_1.09Z:_Extension_and/or_Puncturing|"Exercise 1.9Z"]].&nbsp; This measure does not change the minimum distance&nbsp; $d_{\rm min}= 5$.
# With the subsequent&nbsp; [https://en.wikipedia.org/wiki/Eight-to-fourteen_modulation $\text{Eight-to-Fourteen Modulation}$]&nbsp; and further control symbols,&nbsp; one finally arrives at the final code rate&nbsp; $ R = 192/588 &asymp; 0.326$&nbsp; of the compact disc.<br><br>

On the page&nbsp; [[Channel_Coding/Zielsetzung_der_Kanalcodierung#Die_.E2.80.9EGeschlitzte_CD.E2.80.9D_.E2.80.93_eine_Demonstration_des_LNT_der_TUM |"Slotted CD"]]&nbsp; an audio example lets you convince yourself of the correction capability of this&nbsp; ''cross-interleaved Reed&ndash;Solomon coding'',&nbsp; but also recognize its limits.}}<br>
+
In the section&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#The_.22Slit_CD.22_-_a_demonstration_by_the_LNT_of_TUM | "Slit CD"]]&nbsp; you can see for yourself the correction capabilities of this&nbsp; "cross&ndash;interleaved Reed&ndash;Solomon coding"&nbsp; with the help of an audio example,&nbsp; but you will also see its limitations}}.<br>
  
== Aufgaben zum Kapitel==
+
== Exercises for the chapter==
 
<br>
 
<br>
[[Aufgaben:Aufgabe_2.15:_RS-Blockfehlerwahrscheinlichkeit_bei_AWGN|Aufgabe 2.15: RS-Blockfehlerwahrscheinlichkeit bei AWGN]]
+
[[Aufgaben:Exercise_2.15:_Block_Error_Probability_with_AWGN|Exercise 2.15: Block Error Probability with AWGN]]
  
[[Aufgaben:Aufgabe_2.15Z:_Nochmals_RS-Blockfehlerwahrscheinlichkeit|Aufgabe 2.15Z: Nochmals RS-Blockfehlerwahrscheinlichkeit]]
+
[[Aufgaben:Exercise_2.15Z:_Block_Error_Probability_once_more|Exercise 2.15Z: Block Error Probability once more]]
  
[[Aufgaben:Aufgabe_2.16:_Entscheidungskriterien_bei_BDD|Aufgabe 2.16: Entscheidungskriterien bei BDD]]
+
[[Aufgaben:Exercise_2.16:_Bounded_Distance_Decoding:_Decision_Regions|Exercise 2.16: Bounded Distance Decoding: Decision Regions]]
  
==Quellenverzeichnis==
+
==References==
 
<references/>
 
<references/>
  
 
{{Display}}
 
{{Display}}

Latest revision as of 15:51, 13 December 2022

Block error probability for RSC and BDD


For the error probability calculation we start from the same block diagram as in the chapter  "Error Correction According to Reed-Solomon Coding",  but here we restrict the code word estimator  $(\underline {y} → \underline {z})$  to  $\text{Bounded Distance Decoding}$  $\rm (BDD)$.  For  "Maximum Likelihood Decoding"  the results are slightly better.

System model with Reed–Solomon coding,  $m$–BSC  and  Bounded Distance Decoding

Let the block error probability be defined as follows:

\[{ \rm Pr(block\:error)} = { \rm Pr}( \underline{v} \ne \underline{u})= { \rm Pr}( \underline{z} \ne \underline{c}) = { \rm Pr}( f >t) \hspace{0.05cm}.\]

Due to the BDD assumption,  the same simple result is obtained as for the binary block codes,  namely the probability that the number  $f$  of errors in the block  $($received word$)$  is greater than the correctability  $t$  of the code.


Since the random variable  $f$  $($number of errors in the block$)$  is binomially distributed in the range   $0 ≤ f ≤ n$,   we obtain:

\[{\rm Pr(block\:error)} = \sum_{f = t + 1}^{n} {n \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{n-f} \hspace{0.05cm}.\]
  • While in the first main chapter   $c_i ∈ \rm GF(2)$   always held,  so that the  $f$  transmission errors were in each case  »bit errors«,
  • under Reed–Solomon coding a transmission error   $(y_i \ne c_i)$   is always understood to be a  »symbol error«,  because   $c_i ∈ {\rm GF}(2^m)$   as well as   $y_i ∈ {\rm GF}(2^m)$.
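The tail sum above is easy to evaluate numerically. A minimal Python sketch (the function name and the sanity checks are my own, not from the source):

```python
from math import comb

def block_error_prob(n: int, t: int, eps_s: float) -> float:
    """Pr(block error) under bounded distance decoding:
    probability that more than t of the n symbols are falsified
    on a channel with symbol falsification probability eps_s."""
    return sum(comb(n, f) * eps_s**f * (1 - eps_s)**(n - f)
               for f in range(t + 1, n + 1))

# Sanity checks:  t = n can always "correct"  ->  probability 0;
# t = 0 fails as soon as one symbol is wrong  ->  1 - (1 - eps_s)^n.
print(block_error_prob(7, 7, 0.1))   # → 0
print(block_error_prob(7, 0, 0.1))   # ≈ 1 - 0.9^7
```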


This results in the following differences:

  1.   The discrete channel model used to describe the binary block codes is the  $\text{Binary Symmetric Channel}$  $\rm (BSC)$.  Each bit  $c_i$  of a code word is falsified  $(y_i \ne c_i)$  with probability  $\varepsilon$  and correctly transmitted  $(y_i = c_i)$  with probability  $1-\varepsilon$.
  2.  For Reed–Solomon coding,  one must replace the  "1–BSC"  model with the  "m–BSC"  model.  A symbol  $c_i$  is falsified with probability  $\varepsilon_{\rm S}$  into another symbol  $y_i$   $($no matter which one$)$   and arrives with probability  $1-\varepsilon_{\rm S}$  unfalsified at the receiver.

$\text{Example 1:}$  We start from the BSC parameter  $\varepsilon = 0.1$  and consider Reed–Solomon code symbols  $c_i ∈ {\rm GF}(2^8)$   ⇒   $m = 8$,   $q = 256$,   $n = 255$.

  • For a symbol falsification   $(y_i \ne c_i)$   already one wrong bit is sufficient.
  • Or expressed differently:   If  $y_i = c_i$  is to be valid,  then all  $m = 8$ bits  of the code symbol must be transmitted correctly:
\[1 - \varepsilon_{\rm S} = (1 - \varepsilon)^m = 0.9^8 \approx 0.43 \hspace{0.05cm}.\]
  • Thus,  for the  "8–BSC"  model,  the falsification probability is  $\varepsilon_{\rm S} ≈ 0.57$.
  • With the further assumption that the falsification of  $c_i = \beta$  into any other symbol  $y_i = \gamma \ne \beta$  is equally likely,  we obtain:
$${\rm Pr}(y_i = \gamma\hspace{0.05cm}\vert \hspace{0.05cm}c_i = \beta) = 0.57/255 ≈ 0.223\%.$$
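The numbers of Example 1 can be checked in a few lines of Python (a sketch; the variable names are illustrative):

```python
# Example 1:  BSC parameter eps = 0.1, symbols from GF(2^8)
# =>  m = 8 bits per symbol, q = 256 possible symbols.
eps, m, q = 0.1, 8, 256

p_symbol_ok = (1 - eps) ** m     # all 8 bits must be correct
eps_s = 1 - p_symbol_ok          # symbol falsification probability

print(round(p_symbol_ok, 2))     # → 0.43
print(round(eps_s, 2))           # → 0.57
# Falsification into one specific of the q - 1 = 255 other symbols:
print(round(100 * eps_s / (q - 1), 3), "%")   # → 0.223 %
```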


Application of Reed–Solomon coding for binary channels


The prerequisites for the following calculation of the block error probability of a Reed–Solomon coding system,  including the conversion to binary symbols,  are summarized in the diagram:

Application of Reed-Solomon coding for binary channels.  At point:
(1)  $k$–bit symbols;   (2)  $n$–bit symbols;   (3)  $n$–bit symbols;   (4)  $(n \cdot m)$–bit blocks
  1. Assume  $(n,\ k)$ Reed–Solomon coding with symbols of  $c_i ∈ {\rm GF}(2^m)$.  The smaller the code rate  $R=k/n$,  the less information can be transmitted at a fixed data rate.
  2. Each symbol is represented by  $m$  bits   ⇒   "binary mapping".  A block  $($code word  $\underline {c} )$  thus consists of  $n$  symbols or of  $n \cdot m$  binary characters  $($bits$)$  combined in  $\underline {c}_{\rm bin} $.
  3. In addition,  the  $\text{AWGN Channel}$,  identified by the parameter  $E_{\rm B}/N_0$,  is assumed.  According to this channel model the transmission is bipolar:   "0"   ↔   $+1$  and  "1"   ↔   $-1$.
  4. The receiver makes hard decisions on bit level.  Before decoding including error correction,  the binary symbols are converted back to  ${\rm GF}(2^m)$  symbols.


The  "Bounded Distance Decoding"  equation from the last section is based on the  "m–BSC"  model:

\[{\rm Pr(block\:error)} = \sum_{f = t + 1}^{n} {n \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{n-f} \hspace{0.05cm},\]

Starting from the AWGN channel  ("Additive White Gaussian Noise"),  one first arrives at the falsification probability of the  "1–BSC"  model   ⇒   »bit error probability«:

\[\varepsilon = {\rm Q} \big (\sqrt{2 \cdot E_{\rm S}/N_0} \big ) = {\rm Q} \big (\sqrt{2 \cdot R \cdot E_{\rm B}/N_0} \big ) \hspace{0.05cm},\]
  • and from this to the  »symbol error probability«     ⇒     "m–BSC"  parameter:
\[\varepsilon_{\rm S} = 1 - (1 - \varepsilon)^m \hspace{0.05cm}.\]

$\text{Example 2:}$  One gets with

  1.   the code rate   $R=k/n = 239/255 = 0.9373$,
  2.   the AWGN parameter   $10 \cdot \lg \ E_{\rm B}/N_0 = 7 \, \rm dB$   ⇒   $E_{\rm B}/N_0 ≈ 5$,
  3.   eight bits per symbol   ⇒   $m = 8$,  and
  4.   the code length   $n = 2^8-1 = 255$


the following numerical values for

  • the parameter  $\varepsilon$  of the  "1–BSC"  model   ⇒   "bit error probability":
$$\varepsilon = {\rm Q} \big (\sqrt{2 \cdot 0.9373 \cdot 5} \big ) = {\rm Q} \big (3.06\big ) \approx 1.1 \cdot 10^{-3}=0.11 \%$$
  • the parameter  $\varepsilon_{\rm S}$  of the  "8–BSC"  model   ⇒   "symbol error probability"  or  "block error probability":
$$\varepsilon_{\rm S} = 1 - (1 - 1.1 \cdot 10^{-3})^8 = 1 - 0.9989^8 = 1 - 0.9912 \approx 0.88 \% \hspace{0.2cm}(\approx 8 \cdot \varepsilon)\hspace{0.05cm}.$$

⇒   So every single symbol is transmitted without error with a probability of more than  $99$  percent.
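The chain of calculations in Example 2 can be reproduced with a short Python sketch, using the standard identity ${\rm Q}(x) = \tfrac{1}{2}\,{\rm erfc}(x/\sqrt{2})$ (the variable names are my own):

```python
from math import erfc, sqrt

def Q(x: float) -> float:
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

R = 239 / 255                # code rate of the RSC (255, 239, 17)
EbN0 = 10 ** (7 / 10)        # 7 dB  =>  Eb/N0 ≈ 5
m = 8                        # bits per symbol

eps = Q(sqrt(2 * R * EbN0))      # "1-BSC" bit error probability
eps_s = 1 - (1 - eps) ** m       # "8-BSC" symbol error probability

print(f"eps   = {eps:.2e}")      # ≈ 1.1e-03
print(f"eps_s = {eps_s:.2%}")    # ≈ 0.88 %
```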


BER of Reed–Solomon coding for binary channels


The following graph shows the block error probabilities given in  [Liv10][1]  as a function of the AWGN quotient  $10 \cdot \lg \ E_{\rm B}/N_0$.

Shown are the calculated curves   ${ \rm Pr}( \underline{v} \ne \underline{u})$   for two different Reed–Solomon codes corresponding to the  "Deep Space Standards"  according to  $\rm CCSDS$  $($"Consultative Committee for Space Data Systems"$)$,  namely

Block error probabilities of two Reed-Solomon codes of length  $n = 255$
Note:   In  "Exercise 2.15"  you are to carry out this calculation in full for the  $\text{RSC (7, 3, 5)}_{8}$  –  i.e. for somewhat more manageable parameters.
  1. the  $\text{RSC (255, 239, 17)}_{256}$  with code rate  $R=0.9373$,  which can correct up to  $t = 8$  errors,  and
  2. the  $\text{RSC (255, 223, 33)}_{256}$  with higher correction capability  $(t = 16)$  due to smaller code rate.


We analyze the point highlighted in yellow in the graph,  valid for

  • the  $\text{RSC (255, 239, 17)}_{256}$,  and 
  • $10 \cdot \lg \ E_{\rm B}/N_0 = 7.1 \,\rm dB$   ⇒   symbol falsification probability  $\varepsilon_{\rm S} \approx 0.007$.


According to the graph,  the corresponding block error probability is

\[{\rm Pr(block\:error)} = \sum_{f =9}^{255} {255 \choose f} \cdot {\varepsilon_{\rm S}}^f \cdot (1 - \varepsilon_{\rm S})^{255-f}\approx 10^{-4} \hspace{0.05cm}.\]

Dominant here is the first term  $($for  $f=9)$,  which alone accounts for about  $80\%$  of the sum:

$${255 \choose 9} \approx 1.1 \cdot 10^{16}\hspace{0.05cm},\hspace{0.15cm} \varepsilon_{\rm S}^9 \approx 4 \cdot 10^{-20}\hspace{0.05cm},\hspace{0.15cm} (1 -\varepsilon_{\rm S})^{246} \approx 0.18$$
$$ \Rightarrow \hspace{0.15cm} {\rm Pr}(f = 9) \approx 8 \cdot 10^{-5} \hspace{0.05cm}.$$

This illustrates that the summation may be terminated after only a few terms.
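The numbers for the highlighted point can be verified directly; a short Python sketch, assuming a symbol falsification probability of $0.007$ as used in the calculation above:

```python
from math import comb

n, t, eps_s = 255, 8, 0.007   # RSC (255, 239, 17) at the yellow point

# All summands of the block error probability for f = t+1 ... n:
terms = [comb(n, f) * eps_s**f * (1 - eps_s)**(n - f)
         for f in range(t + 1, n + 1)]
p_block = sum(terms)

print(f"Pr(block error) ≈ {p_block:.1e}")              # ≈ 1e-4
print(f"share of the f = 9 term: {terms[0] / p_block:.0%}")
```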

The results shown in the graph can be summarized as follows:

  1.   For small  $E_{\rm B}/N_0$  $($of the AWGN channel$)$,  the Reed–Solomon codes are completely unsuitable.
  2.   For  $10 \cdot \lg \ E_{\rm B}/N_0 < 6 \,\rm dB$,  both codes lie above the  $($black$)$  comparison curve for uncoded transmission.
  3.   The  $\text{RSC (255, 223, 33)}_{256}$  calculation differs only in the lower summation limit  $(f_{\rm min} = 17)$  and by a slightly larger  $\varepsilon_{\rm S}$  $($because  $R = 0.8745)$.
  4.   For  $\rm BER = 10^{-6}$,  this  "red code"  $($with  $t = 16)$  is better only by about  $1 \, \rm dB$  than the  "green code"  $($with  $t = 8)$.
  5.   So the results of both codes are rather disappointing.


$\text{Conclusion:}$ 

  • Reed–Solomon codes perform rather poorly on the memoryless AWGN channel.  Both codes are more than  $4 \, \rm dB$  away from the information-theoretic  $\text{Shannon bound}$.
  • But Reed–Solomon codes are very effective with so-called  $\text{Burst Error Channels}$.  They are therefore mainly used for fading channels,  memory systems,  etc.
  • Reed–Solomon codes are not perfect.  The consequences of this are covered in  "Exercise 2.16".


Typical applications with Reed–Solomon coding


Reed–Solomon coding is often applied according to the diagram together with an  "inner code"  in cascaded form.

Code scheme with cascading
  • The inner code is almost always a binary code;  in satellite and mobile communications it is often a convolutional code.
  • If one wishes to study only Reed–Solomon encoding/decoding, one replaces the grayed components in a simulation with a single block called a  "super channel".
  • Concatenated coding systems are very efficient if an interleaver is inserted between the two encoders to further break up burst errors.


$\text{Example 3:}$  The diagram shows an example of such an interleaver,  where we restrict ourselves to a  $20 × 4$  matrix.  In practice,  these matrices are significantly larger.

Interleaver matrix for  $20 × 4$  symbols

The interleaver is written row by row and read out column by column  $($blue label$)$.  With the de–interleaver  $($green label$)$  the order is exactly reversed.

  • The Reed–Solomon symbols are therefore not forwarded to the inner coder consecutively,  but according to the specified sequence as symbol  1, 21, 41, 61, 2, 22, etc. .
  • A contiguous area  $($here the symbols  42, 62, 3, 23, 43, 63, 4, 24   ⇒   around the rectangle outlined in red below$)$  is destroyed on the channel,  for example by a scratch on the storage medium.
  • After de–interleaving,  the symbol sequence is again  1, 2, 3, ...    
    You can see from the red outlined circles that now this burst error has been largely  "broken up".
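The interleaving rule of Example 3 can be sketched in a few lines of Python (dimensions as in the example: 80 symbols written in row by row, read out column by column; the burst position matches the red rectangle):

```python
ROWS, COLS = 4, 20
symbols = list(range(1, ROWS * COLS + 1))      # symbols 1 ... 80

# Write row by row, read out column by column:
matrix = [symbols[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
tx = [matrix[r][c] for c in range(COLS) for r in range(ROWS)]
print(tx[:8])        # → [1, 21, 41, 61, 2, 22, 42, 62]

# A burst of 8 consecutive channel errors hits the symbols
# 42, 62, 3, 23, 43, 63, 4, 24 ...
burst = set(tx[6:14])

# ... but after de-interleaving (natural order 1, 2, 3, ...) the
# longest run of destroyed symbols is only 2:
runs, run = [], 0
for s in symbols:
    run = run + 1 if s in burst else 0
    runs.append(run)
print(max(runs))     # → 2
```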


$\text{Example 4:}$   A very widely used application of Reed–Solomon coding – and also the most commercially successful – is the »Compact Disc«  $\rm (CD)$,  whose error correction mechanism has already been described in the  "introduction chapter"  $($Example 7$)$  of this book.  Here the inner code is also a Reed–Solomon code,  and the concatenated coding system can be described as follows:

  1. Both channels of the stereo audio signal are sampled at  $\text{44.1 kHz}$  each,  and every sample is represented digitally with  $32$  bits  $($four bytes$)$.
  2. Grouping six samples results in a frame  $(192$  bits$)$  and thus  $24$  code symbols from the Galois field  $\rm GF(2^8)$.  Each code symbol thus corresponds to exactly one byte.
  3. The first Reed–Solomon code with rate  $R_1 = 24/28$  outputs  $28$  bytes,  which are fed to an interleaver of size  $28 × 109$  bytes.  These are read out diagonally,  in a rather intricate pattern.
  4. The interleaver distributes contiguous bytes over a large area of the entire disc.  This resolves so-called  "bursts"  e.g. caused by scratches on the  $\rm CD$.
  5. Together with the second Reed–Solomon code  $($rate  $R_2 = 28/32)$,  this gives a total rate of  $R = (24/28) · (28/32) = 3/4$.  Both codes can each correct  $t = 2$  symbol errors.
  6. The two component codes  $\text{RSC (28, 24, 5)}$  and  $\text{RSC (32, 28, 5)}$  are each based on the Galois field  $\rm GF(2^8)$,  which would actually imply a code length of  $n = 255$.
  7. The shorter component codes needed here result from the  $\text{RSC (255, 251, 5)}_{256}$  by  "shortening"   ⇒   see  "Exercise 1.9Z".  This measure does not change the minimum distance  $d_{\rm min}= 5$,  however.
  8. With the subsequent  $\text{Eight to Fourteen Modulation}$  and further control symbols,  one finally arrives at the final code rate  $R = 192/588 ≈ 0.326$  of the CD.
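The rate bookkeeping of this coding chain can be verified with a few lines of Python (a pure arithmetic check of the numbers stated above):

```python
from fractions import Fraction

# 6 stereo samples x 32 bits = 192 bits = 24 bytes per frame:
frame_bits = 6 * 32
print(frame_bits, frame_bits // 8)     # → 192 24

R1 = Fraction(24, 28)                  # RSC (28, 24, 5)
R2 = Fraction(28, 32)                  # RSC (32, 28, 5)
print(R1 * R2)                         # → 3/4   (combined RS rate)

# After EFM and further control symbols, the 192 source bits of a
# frame end up in 588 channel bits:
print(float(Fraction(192, 588)))       # ≈ 0.3265
```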

In the section  "Slit CD"  you can convince yourself of the correction capability of this  "cross–interleaved Reed–Solomon coding"  with the help of an audio example,  but you will also see its limitations.

Exercises for the chapter


Exercise 2.15: Block Error Probability with AWGN

Exercise 2.15Z: Block Error Probability once more

Exercise 2.16: Bounded Distance Decoding: Decision Regions

References

  1. Liva, G.:  Channel Coding.  Lecture Notes, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.