Difference between revisions of "Channel Coding/Objective of Channel Coding"

  
== The "Slotted CD" - A Demonstration By The LNT of TUM ==
+
== The "Slit CD" - A Demonstration By The LNT of TUM ==
 
<br>
At the end of the 1990s, staff of the&nbsp; [https://www.lnt.ei.tum.de/startseite/ Chair of Communications Engineering] of the [https://www.tum.de/die-tum/ Technical University of Munich],&nbsp; led by Professor&nbsp; [[Biographies_and_Bibliographies/Lehrstuhlinhaber_des_LNT#Prof._Dr.-Ing._Dr.-Ing._E.h._Joachim_Hagenauer_.281993-2006.29|Joachim Hagenauer]],&nbsp; deliberately damaged a music CD by milling in a total of three slits, each more than one millimetre wide. Thus, at each defect almost&nbsp; $4000$&nbsp; consecutive bits of the audio coding are missing.<br>
  
[[File:P ID2333 KC T 1 1 S2b.png|right|frame|&bdquo;Slit CD&rdquo; of the&nbsp; $\rm LNT/TUM$]]
  
The graphic shows the &bdquo;slit CD&rdquo;:  
*Both track 3 and track 14 have two such defective areas on each revolution.  
*You can get an impression of the music quality with the help of the two audio players (playback time approx. 15 seconds each).  
*The theory of this audio demo can be found in&nbsp; $\text{Example 7}$&nbsp; on the previous page.<br>
  
  
Track 14:
  
 
<lntmedia>file:A_ID59__14_1.mp3</lntmedia>
  
Track 3:
  
 
<lntmedia>file:A_ID60__3_1.mp3</lntmedia>
  
<br><b>Summary of this audio demo:</b>
*The CD's error correction is based on two serially concatenated&nbsp; [[Channel_Coding/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|Reed&ndash;Solomon codes]]&nbsp; and an&nbsp; [https://en.wikipedia.org/wiki/Eight-to-fourteen_modulation Eight&ndash;to&ndash;Fourteen modulation]. The total code rate for RS error correction is&nbsp; $R = 3/4$.<br>
  
*Just as important for the functioning of the CD as the codes is the interleaver placed between them, which distributes the erased bits (&bdquo;erasures&rdquo;) over a length of almost&nbsp; $2 \, \rm cm$.<br>
  
*In&nbsp; '''track 14'''&nbsp; the two defective areas are sufficiently far apart. Therefore, the Reed&ndash;Solomon decoder is able to reconstruct the missing data.<br>
  
*In&nbsp; '''track 3'''&nbsp; the two error blocks follow each other in very short succession, so that the correction algorithm fails. The result is an almost periodic clicking noise.<br><br>
  
We would like to thank Rainer Bauer,&nbsp; [[Biographies_and_Bibliographies/An_LNTwww_beteiligte_Mitarbeiter_und_Dozenten#Dr.-Ing._Thomas_Hindelang_.28am_LNT_von_1994-2000_und_2007-2012.29|Thomas Hindelang]]&nbsp; and&nbsp; [[Biographies_and_Bibliographies/An_LNTwww_beteiligte_Mitarbeiter_und_Dozenten#Manfred_J.C3.BCrgens_.28am_LNT_von_1981-2010.29|Manfred Jürgens]]&nbsp; for permission to use this audio demo.<br>
  
== Interplay Between Source and Channel Coding ==
 
<br>
The transmission of messages from natural sources such as speech, music, images and videos usually follows the discrete-time model sketched below.<br>
  
[[File:P ID2334 KC T 1 1 S3a v2.png|center|frame|Image Transmission With Source and Channel Coding|class=fit]]
  
The following should be noted about this graphic, taken from [Liv10]<ref name ='Liv10'>Liva, G.: ''Channel Coding.'' Lecture notes, Chair of Communications Engineering, TU München and DLR Oberpfaffenhofen, 2010.</ref>:
*Source and sink are digitised and represented by (approximately equally many) zeros and ones.<br>
  
*The source encoder compresses the binary data &ndash; in the example a digital photo &ndash; and thus reduces the redundancy of the source.<br>
  
*The channel encoder adds redundancy again, in a targeted manner, so that some of the errors occurring on the channel can be corrected in the channel decoder.<br>
  
*A discrete-time model with binary input and output is used here for the channel; it should also suitably take into account the components of the technical transmitting and receiving equipment (modulator, decision device, clock recovery).<br><br>
  
With correct dimensioning of source and channel coding, the quality of the received photo is sufficiently good, even if the sink symbol sequence will not exactly match the source symbol sequence due to non-correctable error patterns. One can detect a bit error (marked red) within the sink symbol sequence.<br>
  
 
{{GraueBox|TEXT=  
$\text{Example 8:}$&nbsp; For the above graphic it was assumed, as an example and in a strongly simplified way, that
*the source symbol sequence has only length&nbsp; $40$,
*the source encoder compresses the data by a factor of&nbsp; $40/16 = 2.5$, and
*the channel encoder adds&nbsp; $50\%$&nbsp; redundancy.  
  
  
Thus, only&nbsp; $24$&nbsp; encoder symbols have to be transmitted instead of&nbsp; $40$&nbsp; source symbols, which reduces the overall transmission rate by&nbsp; $40\%$.<br>
  
If one were to dispense with source coding and transmit the original photo in BMP format rather than the compressed JPG image, the quality would be comparable, but a bit rate higher by a factor of&nbsp; $2.5$, and thus much more effort, would be required.}}<br>
  
 
{{GraueBox|TEXT=  
$\text{Example 9:}$&nbsp; If one were to dispense with both source and channel coding, i.e. transmit the BMP data directly without error protection, the result would be extremely poor despite the&nbsp; $($by a factor of&nbsp; $40/24)$&nbsp; higher bit rate.<br>
  
[[File:P ID2335 KC T 1 1 S3b v2.png|center|frame|Image Transmission Without Source and Channel Coding|class=fit]]}}
  
  
 
{{GraueBox|TEXT=  
$\text{Example 10:}$&nbsp; Now we consider the case where the compressed data (for example JPG) is transmitted directly, without any error protection measures. <br>
  
[[File:P ID2336 KC T 1 1 S3c v2.png|center|frame|Image Transmission With Source Coding and Without Channel Coding|class=fit]]
  
*Since the compressed source has only little redundancy left, every single transmission error causes entire image blocks to be decoded incorrectly.<br>
*This coding scheme (source coding, but no channel coding) should definitely be avoided.}}
  
== Block Diagram and Requirements ==
 
<br>
In what follows, we assume the block diagram sketched here with channel encoder, digital channel and channel decoder.  
  
[[File:P ID2337 KC T 1 1 S4 v2.png|center|frame|Block Diagram Describing Channel Coding|class=fit]]
  
 
The following conditions apply:
*The vector&nbsp; $\underline{u} = (u_1, u_2, \text{...} \hspace{0.05cm}, u_k)$&nbsp; denotes an&nbsp; '''information block'''&nbsp; with&nbsp; $k$&nbsp; symbols.  
*Mostly we restrict ourselves to binary symbols (bits) &nbsp; &#8658; &nbsp; $u_i \in \{0, \, 1\}$&nbsp; for&nbsp; $i = 1, 2, \text{...} \hspace{0.05cm}, k$&nbsp; with equal occurrence probabilities for zeros and ones.<br>
  
  
*Each information block&nbsp; $\underline{u}$&nbsp; is represented by a&nbsp; '''codeword'''&nbsp; (or&nbsp; <i>code block</i>)&nbsp; $\underline{x} = (x_1, x_2, \text{...} \hspace{0.05cm}, x_n)$&nbsp; with&nbsp; $n \ge k$,&nbsp; $x_i \in \{0, \, 1\}$. One then speaks of a binary&nbsp; $(n, k)$&nbsp; block code&nbsp; $C$. We denote the mapping by&nbsp; $\underline{x} = {\rm enc}(\underline{u})$, where&nbsp; &bdquo;enc&rdquo;&nbsp; stands for&nbsp; &bdquo;encoder function&rdquo;.<br>
  
  
*The&nbsp; '''received word'''&nbsp; $\underline{y}$&nbsp; results from the codeword&nbsp; $\underline{x}$&nbsp; by&nbsp; [https://en.wikipedia.org/wiki/Modular_arithmetic modulo-2 addition]&nbsp; with the likewise binary error vector&nbsp; $\underline{e} = (e_1, e_2, \text{...} \hspace{0.05cm}, e_n)$, where&nbsp; &bdquo;$e_i= 1$&rdquo;&nbsp; represents a transmission error and&nbsp; &bdquo;$e_i= 0$&rdquo;&nbsp; indicates that the&nbsp; $i$&ndash;th bit of the codeword was transmitted correctly&nbsp; (a short sketch follows after this list). The following therefore applies:
  
 
::<math>\underline{y} = \underline{x} \oplus \underline{e} \hspace{0.05cm}, \hspace{0.5cm} y_i  =  x_i \oplus e_i \hspace{0.05cm}, \hspace{0.2cm} i = 1, \text{...} \hspace{0.05cm}  , n\hspace{0.05cm}, \hspace{0.5cm} x_i \hspace{-0.05cm} \in  \hspace{-0.05cm} \{ 0, 1 \}\hspace{0.05cm}, \hspace{0.5cm}e_i \in  \{ 0, 1 \}\hspace{0.5cm}
\Rightarrow \hspace{0.5cm}y_i \in  \{ 0, 1 \}\hspace{0.05cm}.</math>
  
  
*The description by the&nbsp; '''digital channel model'''&nbsp; &ndash; i.e. with binary input and output &ndash; is, however, only applicable if the transmission system makes hard decisions &ndash; see&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#AWGN.E2.80.93Kanal_bei_bin.C3.A4rem_Eingang|AWGN channel at binary input]]. Systems with&nbsp; [[Channel_Coding/Decodierung_linearer_Blockcodes#Codiergewinn_.E2.80.93_Bitfehlerrate_bei_AWGN|soft decision]]&nbsp; cannot be described by this simple model.<br>
  
  
*The vector&nbsp; $\underline{v}$&nbsp; after&nbsp; '''channel decoding'''&nbsp; has the same length&nbsp; $k$&nbsp; as the information block&nbsp; $\underline{u}$. We describe the decoding process with the &bdquo;decoder function&rdquo; as&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{y}) = {\rm dec}(\underline{y})$. In the error-free case,&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{y})$&nbsp; holds, analogous to&nbsp; $\underline{x} = {\rm enc}(\underline{u})$.<br>
  
  
*If the error vector&nbsp; $\underline{e} \ne \underline{0}$, then&nbsp; $\underline{y}$&nbsp; is usually not a valid element of the block code used, and the decoding is then not a pure mapping&nbsp; $\underline{y} \rightarrow \underline{v}$, but an estimate of&nbsp; $\underline{v}$&nbsp; based on maximum agreement (minimum error probability).<br>
  
== Important Definitions for Block Coding ==
 
<br>
As an example, we now consider the binary block code
  
 
:<math>\mathcal{C} = \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 0, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 1, 1) \}\hspace{0.05cm}.</math>
  
This code would be unsuitable for error detection or error correction. But it is constructed in such a way that the calculation of important descriptive quantities can be clearly illustrated:
*Each individual codeword&nbsp; $\underline{x}$&nbsp; is described by five bits. Throughout the book, we express this fact by the&nbsp; '''codeword length'''&nbsp; $n = 5$.
  
  
*The above code contains four elements. Thus the&nbsp; '''code size'''&nbsp; is&nbsp; $|C| = 4$. Accordingly, there are four unique mappings between&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$.
  
  
*The length of an information block&nbsp; $\underline{u}$ &nbsp; &rArr; &nbsp; '''information block length'''&nbsp; is denoted by&nbsp; $k$. Since&nbsp; $|C| = 2^k$&nbsp; holds for all binary codes, it follows from&nbsp; $|C| = 4$&nbsp; that&nbsp; $k = 2$. The mappings between&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$&nbsp; for the above code&nbsp; $C$&nbsp; are:
  
 
::<math>\underline{u_0} = (0, 0) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm}, \hspace{0.8cm}
\underline{u_1} = (0, 1) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 1, 0, 1, 0) = \underline{x_1}\hspace{0.05cm},</math>
::<math>\underline{u_2} = (1, 0) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(1, 0, 1, 0, 1) = \underline{x_2}\hspace{0.05cm}, \hspace{0.8cm}
\underline{u_3} = (1, 1) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(1, 1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.</math>
  
  
*The code has the&nbsp; '''code rate'''&nbsp; $R = k/n = 2/5$. Accordingly, its redundancy is&nbsp; $1-R$, i.e.&nbsp; $60\%$. Without error protection&nbsp; $($i.e. for the case&nbsp; $n = k)$&nbsp; the code rate would be&nbsp; $R = 1$.<br>
  
  
*A small code rate indicates that of the&nbsp; $n$&nbsp; bits of a codeword, very few actually carry information. For example, a repetition code&nbsp; $(k = 1)$&nbsp; with&nbsp; $n = 10$&nbsp; has a code rate&nbsp; $R = 0.1$.<br>
  
  
*The&nbsp; '''Hamming weight'''&nbsp; $w_{\rm H}(\underline{x})$&nbsp; of the codeword&nbsp; $\underline{x}$&nbsp; indicates the number of codeword elements&nbsp; $x_i \ne 0$. For a binary code &nbsp; &#8658; &nbsp; $x_i \in  \{0, \, 1\}$,&nbsp; $w_{\rm H}(\underline{x})$&nbsp; is equal to the sum&nbsp; $x_1 + x_2 + \hspace{0.05cm}\text{...} \hspace{0.05cm}+ x_n$. In the example:
  
 
::<math>w_{\rm H}(\underline{x}_0) = 0\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} w_{\rm H}(\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_3) = 5\hspace{0.05cm}.</math>
  
  
*The&nbsp; '''Hamming distance'''&nbsp; $d_{\rm H}(\underline{x}, \ \underline{x}\hspace{0.03cm}')$&nbsp; between the codewords&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.03cm}'$&nbsp; denotes the number of bit positions in which the two codewords differ:
  
 
::<math>d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_3) = 5\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_2) = 5\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_3) = 3\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_2, \hspace{0.05cm}\underline{x}_3) = 2\hspace{0.05cm}.</math>
  
  
*An important property of a code&nbsp; $C$&nbsp; that significantly influences its correction capability is the&nbsp; '''minimum distance'''&nbsp; between any two codewords:
  
 
::<math>d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}'\hspace{0.05cm}\in\hspace{0.05cm}\mathcal{C} \\ \underline{x}\hspace{0.05cm}\ne\hspace{0.05cm}\underline{x}\hspace{0.03cm}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}\hspace{0.03cm}')\hspace{0.05cm}.</math>
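The following minimal Python sketch verifies these quantities for the example code&nbsp; $\mathcal{C}$&nbsp; given above (the function names are ours, chosen for illustration):

<pre>
from itertools import combinations

# The example code C from above: four codewords of length n = 5
code = [(0,0,0,0,0), (0,1,0,1,0), (1,0,1,0,1), (1,1,1,1,1)]

def hamming_weight(x):
    """Number of codeword elements x_i != 0."""
    return sum(1 for xi in x if xi != 0)

def hamming_distance(x, y):
    """Number of bit positions in which x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

# Minimum distance: smallest Hamming distance over all codeword pairs
d_min = min(hamming_distance(x, y) for x, y in combinations(code, 2))

print([hamming_weight(x) for x in code])   # [0, 2, 3, 5]
print(d_min)                               # 2  ->  a (5, 2, 2) block code
</pre>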
  
 
{{BlaueBox|TEXT=  
$\text{Definition:}$&nbsp; An&nbsp; $(n, \hspace{0.05cm}k, \hspace{0.05cm}d_{\rm min})\text{ block code}$&nbsp; has codeword length&nbsp; $n$, information block length&nbsp; $k$&nbsp; and minimum distance&nbsp; $d_{\rm min}$.
*According to this nomenclature, the example considered here is a&nbsp; $(5, \hspace{0.05cm}2,\hspace{0.05cm} 2)$&nbsp; block code.
*Sometimes one omits the specification of&nbsp; $d_{\rm min}$&nbsp; and then speaks of an&nbsp; $(n,\hspace{0.05cm} k)$&nbsp; block code.}}<br>
  
== Examples of Error Detection and Correction ==
 
<br>
The quantities just defined will now be illustrated by two examples.
  
 
{{GraueBox|TEXT=  
$\text{Example 11:}$&nbsp; &nbsp; &nbsp;$\text{(4, 2, 2) block code}$  
  
In the graphic, the arrows pointing right illustrate the encoding process, those pointing left the decoding:
  
 
:$$\underline{u_0} = (0, 0) \leftrightarrow (0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm},\hspace{0.5cm}
\underline{u_1} = (0, 1) \leftrightarrow (0, 1, 0, 1) = \underline{x_1}\hspace{0.05cm},$$
:$$\underline{u_2} = (1, 0) \leftrightarrow (1, 0, 1, 0) = \underline{x_2}\hspace{0.05cm},\hspace{0.5cm}
\underline{u_3} = (1, 1) \leftrightarrow (1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.$$
  
[[File:P ID2532 KC T 1 1 S5a v2.png|center|frame|(4, 2, 2)–Block Code For Error Detection|class=fit]]
  
On the right, all&nbsp; $2^4 = 16$&nbsp; possible received words&nbsp; $\underline{y}$&nbsp; are shown:
*Of these,&nbsp; $2^n - 2^k = 12$&nbsp; can only have arisen through bit errors.  
*If the decoder receives such a &bdquo;white&rdquo; codeword, it detects an error, but it cannot correct it because of&nbsp; $d_{\rm min} = 2$.  
*For example, if it receives&nbsp; $\underline{y} = (0, 0, 0, 1)$, then&nbsp; $\underline{x_0} = (0, 0, 0, 0)$&nbsp; or&nbsp; $\underline{x_1} = (0, 1, 0, 1)$&nbsp; may have been sent with equal probability.}}<br>
  
[[File:P ID2533 KC T 1 1 S5b v2.png|right|frame|(5, 2, 3)–Block Code For Error Correction|class=fit]]
 
{{GraueBox|TEXT=  
$\text{Example 12:}$&nbsp; &nbsp; &nbsp;$\text{(5, 2, 3) block code}$  
  
Here, because of&nbsp; $k=2$, there are four valid codewords:
 
:$$\underline{x_0} = (0, 0, 0, 0, 0)\hspace{0.05cm},\hspace{0.5cm} \underline{x_1} =(0, 1, 0, 1, 1)\hspace{0.05cm},$$
:$$\underline{x_2} =(1, 0, 1, 1, 0)\hspace{0.05cm},\hspace{0.5cm}\underline{x_3} =(1, 1, 1, 0, 1).$$  
  
The graphic shows the receiver side; corrupted bits can be recognised by the italics.
 
<br clear=all>  
*Of the&nbsp; $2^n - 2^k = 28$&nbsp; invalid codewords,&nbsp; $20$&nbsp; can be assigned to a valid codeword (fill colour: &nbsp; red, green, blue or ochre), assuming that a single bit error is more likely than two or more.  
*For each valid codeword, there are five invalid codewords, each with only one corruption &nbsp; &rArr; &nbsp; Hamming distance&nbsp; $d_{\rm H} =1$. These are indicated in the respective square with red, green, blue or ochre background colour.  
*Error correction is possible for these due to the minimum distance&nbsp; $d_{\rm min} = 3$&nbsp; between the codewords.  
*Eight received words are not decodable. For example, the received word&nbsp; $\underline{y} = (0, 0, 1, 0, 1)$&nbsp; could have arisen from the codeword&nbsp; $\underline{x}_0 = (0, 0, 0, 0, 0)$&nbsp; but also from the codeword&nbsp; $\underline{x}_3 = (1, 1, 1, 0, 1)$. In both cases, two bit errors would have occurred.}}<br>
  
== On the Nomenclature in This Book ==
 
<br>
One of the objectives of our learning tutorial&nbsp; $\rm LNTwww$&nbsp; was to describe the entire field of communications engineering and the associated basic subjects with a uniform nomenclature. In this book,&nbsp; &bdquo;Channel Coding&rdquo;, which was tackled last, some changes in nomenclature have to be made after all. The reasons for this are:
*Coding theory is a largely self-contained subject and few authors of relevant reference books on the subject attempt to relate it to other aspects of digital signal transmission.<br>
  
*The authors of the most important books on channel coding &ndash; English-language and German &ndash; largely use a uniform nomenclature. We therefore do not take the liberty of squeezing the channel coding designations into our transmission-technology scheme.<br><br>
  
Some changes in nomenclature compared to the other&nbsp; $\rm LNTwww$&nbsp; books shall be mentioned here:
*All signals are represented by symbol sequences in vector notation. For example,&nbsp; $\underline{u} = (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$&nbsp; denotes the&nbsp; <i>source symbol sequence</i>&nbsp; and&nbsp; $\underline{v} = (v_1, v_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, v_k)$&nbsp; the&nbsp; <i>sink symbol sequence</i>. Previously, these symbol sequences were denoted by&nbsp; $\langle q_\nu \rangle$&nbsp; and&nbsp; $\langle v_\nu \rangle$, respectively.<br>
  
  
*The vector&nbsp; $\underline{x} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n)$&nbsp; now denotes the discrete-time equivalent of the transmitted signal&nbsp; $s(t)$, while the received signal&nbsp; $r(t)$&nbsp; is described by the vector&nbsp; $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$. The code rate is the quotient&nbsp; $R=k/n$ &nbsp; with &nbsp; $0 \le R \le 1$, and the number of check bits is given by&nbsp; $m = n-k$.
  
  
*In the first main chapter, the elements&nbsp; $u_i$&nbsp; and&nbsp; $v_i$&nbsp; $($each with index&nbsp; $i = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, k)$&nbsp; of the vectors&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{v}$&nbsp; are always binary&nbsp; $(0$&nbsp; or&nbsp; $1)$, as are the&nbsp; $n$&nbsp; elements&nbsp; $x_i$&nbsp; of the codeword&nbsp; $\underline{x}$. For the digital channel models&nbsp; ([[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Channel_.E2.80.93_BSC|BSC]],&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Erasure_Channel_.E2.80.93_BEC|BEC]],&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Error_.26_Erasure_Channel_.E2.80.93_BSEC|BSEC]])&nbsp; the same applies to the&nbsp; $n$&nbsp; received values&nbsp; $y_i \in \{0, 1\}$.
  
  
  
*The&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#AWGN.E2.80.93Kanal_bei_bin.C3.A4rem_Eingang|AWGN channel model]]&nbsp; is characterised by real-valued output values&nbsp; $y_i$. In this case, the&nbsp; <i>codeword estimator</i>&nbsp; extracts from the vector&nbsp; $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$&nbsp; the binary vector&nbsp; $\underline{z} = (z_1, z_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, z_n)$, which is to be compared with the codeword&nbsp; $\underline{x}$.
  
  
*The transition from&nbsp; $\underline{y}$&nbsp; to&nbsp; $\underline{z}$&nbsp; is made by threshold decision &nbsp; &#8658; &nbsp; <i>hard decision</i>, or according to the MAP criterion &nbsp; &#8658; &nbsp; <i>soft decision</i>. For equally likely input symbols, &bdquo;maximum likelihood&rdquo; estimation also leads to the minimum error rate.
  
  
*In the context of the AWGN model, it makes sense to represent the binary code symbols&nbsp; $x_i$&nbsp; in bipolar form (i.e. as&nbsp; $\pm1$). This does not change the statistical properties. In the following, we mark bipolar signalling with a tilde. Then:
  
 
::<math>\tilde{x}_i = 1 - 2 x_i  = \left\{ \begin{array}{c} +1\\
 -1  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm if} \hspace{0.15cm} x_i = 0\hspace{0.05cm},\\
{\rm if} \hspace{0.15cm}x_i = 1\hspace{0.05cm}.\\ \end{array}</math>
  
== Exercises for the Chapter ==
 
<br>
 
[[Aufgaben:1.1 Zur Kennzeichnung aller Bücher|Exercise 1.1: On the Labelling of All Books]]
 
[[Aufgaben:1.2Z_3D–Darstellung_von_Codes|Exercise 1.2Z: 3D Representation of Codes]]
  
==Sources==
  
 
<references/>
  
 
{{Display}}

Revision as of 20:22, 17 January 2021



# Overview of the First Main Chapter #


The first chapter deals with block codes for error detection and error correction and provides the basics for describing more effective codes such as the  Reed-Solomon codes  (see Chapter 2), the  convolutional codes  (Chapter 3), and the  iteratively decodable product codes  (turbo codes) and  low-density parity-check codes  (Chapter 4). We restrict ourselves here to binary codes.

This specific field is called  channel coding  in contrast to  source coding  (redundancy reduction for reasons of data compression) and to  line coding  (additional redundancy to adapt the digital signal to the spectral characteristics of the transmission medium).

In detail, it covers:

  • Definitions and introductory examples of error detection and error correction,
  • a brief review of appropriate channel models and decision maker structures,
  • known binary block codes such as single parity-check code, repetition code and Hamming code,
  • the general description of linear codes using generator matrix and check matrix,
  • the decoding possibilities for block codes, including syndrome decoding,
  • simple approximations and upper bounds for block error probability, and
  • an information-theoretic bound on channel coding.


Error Detection and Error Correction


Transmission errors occur in every message transmission system. It is possible to keep the probability  $p_{\rm S}$  of such a symbol error very small, for example by using a very large signal energy. However, the symbol error probability  $p_{\rm S} = 0$  is never achievable because of the Gaussian PDF (probability density function) of the ever-present thermal noise.

Particularly in the case of heavily disturbed channels and also for safety-critical applications, it is therefore essential to provide special protection for the data to be transmitted, adapted to the application and channel. For this purpose, redundancy is added at the transmitter and this redundancy is used at the receiver to reduce the number of decoding errors.

$\text{Definitions:}$

  • Error Detection:   The decoder checks the integrity of the received blocks and marks any errors found. If necessary, the receiver informs the transmitter about erroneous blocks via the return channel, so that the transmitter sends the corresponding block again.
  • Error Correction:   The decoder detects one (or more) bit errors and obtains further information about them, for example their positions in the transmitted block. In this way, it may be possible to completely correct the errors that have occurred.
  • Channel Coding  includes procedures for error detection as well as those for error correction.


All ARQ (Automatic Repeat Request) procedures use error detection only. Less redundancy is required for error detection than for error correction. One disadvantage of ARQ is its low throughput when channel quality is poor, i.e. when entire blocks of data must be frequently re-requested by the receiver.

In this book we mostly deal with  Forward Error Correction  (FEC), which leads to very small error rates if the channel is sufficiently good (large SNR). With worse channel conditions, nothing changes in the throughput, i.e. the same amount of information is transmitted. However, the error rate can then assume very large values.

Often FEC and ARQ methods are combined, and the redundancy is divided between them in such a way

  • that a small number of errors can still be corrected,
  • but that when many errors occur, a repetition of the block is requested.

Some Introductory Examples of Error Detection


$\text{Example 1:   Single Parity–Check Code (SPC)}$
If one extends  $k = 4$  information bits by a check bit (parity bit) in such a way that the number of ones in each code word is even, for example (with bold check bits)

$$0000\boldsymbol{0}, 0001\boldsymbol{1}, \text{...} , 1111\boldsymbol{0}, \text{...}\ ,$$

it is very easy to recognise a single error. Two errors within a code word, on the other hand, remain undetected.
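A minimal Python sketch of this single parity-check code (the helper names are illustrative):

<pre>
# Single parity-check code: append one check bit so the number of ones is even.
def encode_spc(bits):
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Even parity holds <=> no error or an even number of bit errors."""
    return sum(word) % 2 == 0

word = encode_spc([0, 0, 0, 1])   # -> [0, 0, 0, 1, 1]
print(parity_ok(word))            # True
word[0] ^= 1                      # one bit error
print(parity_ok(word))            # False -> error detected
word[1] ^= 1                      # a second bit error
print(parity_ok(word))            # True  -> two errors remain undetected
</pre>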



$\text{Example 2:   International Standard Book Number (ISBN)}$
Since the 1960s, all books have been given 10-digit codes (ISBN–10). Since 2007, the specification according to ISBN–13 is additionally obligatory. For example, these are for the reference book [Söd93]:

  • $\boldsymbol{3–540–57215–5}$  (for ISBN–10), and
  • $\boldsymbol{978–3–54057215–2}$  (for ISBN–13).


The last digit  $z_{10}$  for ISBN–10 results from the previous digits  $z_1 = 3$,  $z_2 = 5$, ... ,  $z_9 = 5$  according to the following calculation rule:

\[z_{10} = \left ( \sum_{i=1}^{9} \hspace{0.2cm} i \cdot z_i \right ) \hspace{-0.3cm} \mod 11 = (1 \cdot 3 + 2 \cdot 5 + ... + 9 \cdot 5 ) \hspace{-0.2cm} \mod 11 = 5 \hspace{0.05cm}. \]

Note that  $z_{10} = 10$  must be written as  $z_{10} = \rm X$  (Roman numeral representation of „10”), since the number  $10$  cannot be represented as a digit in the decimal system.

The same applies to the check digit for ISBN–13:

\[z_{13}= 10 - \left ( \sum_{i=1}^{12} \hspace{0.2cm} z_i \cdot 3^{(i+1) \hspace{-0.2cm} \mod 2} \right ) \hspace{-0.3cm} \mod 10 = 10 \hspace{-0.05cm}- \hspace{-0.05cm} \big [(9\hspace{-0.05cm}+\hspace{-0.05cm}8\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}0\hspace{-0.05cm}+\hspace{-0.05cm}7\hspace{-0.05cm}+\hspace{-0.05cm}1) \cdot 1 + (7\hspace{-0.05cm}+\hspace{-0.05cm}3\hspace{-0.05cm}+\hspace{-0.05cm}4\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}2\hspace{-0.05cm}+\hspace{-0.05cm}5) \cdot 3\big ] \hspace{-0.2cm} \mod 10 \]
\[\Rightarrow \hspace{0.3cm} z_{13}= 10 - (108 \hspace{-0.2cm} \mod 10) = 10 - 8 = 2 \hspace{0.05cm}. \]

With both variants, in contrast to the above parity check code (SPC), transposed digits such as  $57 \, \leftrightarrow 75$  are also recognised, since different positions are weighted differently here.
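Both calculation rules as a small Python sketch (the digit lists are the [Söd93] examples from above):

<pre>
# Check digit computation for ISBN-10 and ISBN-13 (hyphens removed beforehand).
def isbn10_check(digits9):
    """First nine digits -> check digit; 10 is written as the Roman numeral X."""
    z = sum(i * d for i, d in enumerate(digits9, start=1)) % 11
    return 'X' if z == 10 else str(z)

def isbn13_check(digits12):
    """First twelve digits, weighted alternately 1, 3, 1, 3, ..."""
    s = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return str((10 - s % 10) % 10)      # final % 10 maps the result 10 to 0

print(isbn10_check([3, 5, 4, 0, 5, 7, 2, 1, 5]))            # 5
print(isbn13_check([9, 7, 8, 3, 5, 4, 0, 5, 7, 2, 1, 5]))   # 2
</pre>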

$\text{Example 3:   Barcode (one-dimensional)}$
The most widely used error-detecting code worldwide is the bar code for marking products, for example according to  EAN–13  (European Article Number) with 13 digits.

1D–Barcode
  • These are represented by bars and gaps of different widths and can easily be decoded with an opto-electronic reader.
  • The first three digits indicate the country (for example Germany:   between 400 and 440), the next four or five digits the manufacturer and the product.
  • The last digit is the check digit  $z_{13}$, which is calculated exactly as for ISBN–13.

Some Introductory Examples of Error Correction


$\text{Example 4:   2D–Barcodes For Online–Tickets}$
If you book a Deutsche Bahn ticket online and print it out, you will find an example of a two-dimensional barcode, namely the  Aztec code developed in 1995 by Andy Longacre at the Welch Allyn company in the USA, with which data volumes of up to  $3000$  characters can be encoded. Thanks to the  Reed–Solomon error correction, reconstruction of the data content is possible even if up to  $40\%$  of the code has been destroyed, for example by bending the ticket or by coffee stains.

2D–Barcodes: Aztec– And QR–Code

On the right is a $\rm QR–Code$  (Quick Response) with associated content. The QR–code was developed in 1994 for the automotive industry in Japan to mark components and also allows error correction. In the meantime, the use of the QR–code has become very diverse. In Japan, it can be found on almost every advertising poster and on every business card. It is also becoming more and more popular in Germany.

All 2D barcodes have square markings to calibrate the reader. You can find details on this in [KM+09].


$\text{Example 5:   Codes for Satellite and Space Communications}$
One of the first areas of application of error correction methods was communication from/to satellites and space shuttles, i.e. transmission routes characterised by low transmission powers and large path losses. As early as 1977, channel coding was used in the  space–mission Voyager 1  to Neptune and Uranus, in the form of serial concatenation of a  Reed–Solomon–Code  and a  convolutional code.

Thus, the power parameter  $10 · \lg \; E_{\rm B}/N_0 \approx 2 \, \rm dB$ was already sufficient to achieve the required error rate  $5 · 10^{-5}$  (related to the compressed data after source coding). Without channel coding, on the other hand, almost  $9 \, \rm dB$  are required for the same error rate, i.e. a factor  $10^{0.7} ≈ 5$  greater transmission power.

The planned Mars project (data transmission from Mars to Earth with  $\rm 5W$  lasers) will also only be successful with a sophisticated coding scheme.

$\text{Example 6:   Channel Codes for Mobile Communications}$
A further and particularly high-turnover application area that would not function without channel coding is mobile communication. Here, unfavourable conditions without coding would result in error rates in the percentage range; due to shadowing and multipath propagation (echoes), the errors often occur in bursts. The error burst length is sometimes several hundred bits.

  • For voice transmission in the GSM system, the  $182$  most important (class 1a and 1b) of the total  260  bits of a voice frame  $(20 \, \rm ms)$  together with a few parity and tail bits are convolutionally coded $($with memory  $m = 4$  and rate  $R = 1/2)$  and scrambled. Together with the  $78$  less important and therefore uncoded bits of class 2, this results in the bit rate increasing from  $13 \, \rm kbit/s$  to  $22.4 \, \rm kbit/s$.
  • One uses the (relative) redundancy of  $r = (22.4 - 13)/22.4 ≈ 0.42$  for error correction. Note that, because of the definition used here,  $r = 0.42$  means that  $42\%$  of the encoded bits are redundant. With the reference value „bit rate of the uncoded sequence” one would get  $r = 9.4/13 \approx 0.72$, with the statement:   $72\%$  check bits are added to the information bits.
  • For  UMTS  (Universal Mobile Telecommunications System),  Convolutional Codes  with the rates  $R = 1/2$  or  $R = 1/3$  are used. In the UMTS–modes for higher data rates and correspondingly lower spreading factors, on the other hand, one uses  Turbo–Codes  of the rate  $R = 1/3$  and iterative decoding. Depending on the number of iterations, gains of up to  $3 \, \rm dB$ can be achieved compared to convolutional coding.


$\text{Example 7:   Error Protection of the Compact Disc}$
For a CD (Compact Disc), one uses cross–interleaved  Reed–Solomon–Codes  (RS) and then a so-called   Eight–to–Fourteen–Modulation. Redundancy is used for error detection and correction. This coding scheme shows the following characteristics:

  • The common code rate of the two RS–component codes is  $R_{\rm RS} = 24/28 · 28/32 = 3/4$. Through the 8–to–14–modulation and some control bits, one arrives at the total code rate  $R ≈ 1/3$.
  • In the case of statistically independent errors according to the  BSC–model  (Binary Symmetric Channel), a complete correction is possible as long as the bit error rate does not exceed the value  $10^{-3}$.
  • The CD–specific  Cross Interleaver  scrambles  $108$  blocks together so that the  $588$  bits of a block  $($each bit corresponds to approx.  $0.28 \, \rm µm)$  are distributed over approximately  $1.75\, \rm cm$.
  • With the code rate  $R ≈ 1/3$  one can correct approx.  $10\%$  „Erasures”. The lost values can be reconstructed (approximately) by interpolation   ⇒  Error concealment.

In summary: if a CD has a scratch of  $1.75\, \rm mm$  in length in the direction of play (i.e. more than  $6000$  consecutive erasures), still  $90\%$  of all the bits in a block are error-free, so that even the missing  $10\%$  can be reconstructed, or at least the erasures can be concealed so that they are not audible.

A demonstration of the CD's ability to correct follows on the next page.


.

The "Slit CD" - A Demonstration By The LNT of TUM


At the end of the 1990s, staff of the  Chair of Communications Engineering of the Technical University of Munich  led by professor  Joachim Hagenauer  eliberately damaged a music–CD by cutting a total of three slits, each more than one millimetre wide. Thus, almost  $4000$  consecutive bits of audio coding are missing with each defect.

„A Slit CD” of The  $\rm LNT/TUM$

The diagram shows the „slit CD”:

  • Both track 3 and track 14 have two such defective areas on each revolution.
  • You can visualise the music quality with the help of the two audio players (playback time approx. 15 seconds each).
  • The theory of this audio–demo can be found in the  $\text{Example 7}$  on the previous page.


Track 14:

Track 3:


Summary of this audio demo:

  • As important for the functioning of the CD as the codes is the interposed interleaver, which distributes the erased bits („Erasures”) over a length of almost  $2 \, \rm cm$ .
  • In  Track 14  the two defective areas are sufficiently far apart. Therefore, the Reed–Solomon–decoder is able to reconstruct the missing data.
  • In  Track 3  the two error blocks follow each other in a very short distance, so that the correction algorithm fails. The result is an almost periodic clacking noise.

We would like to thank Rainer Bauer,  Thomas Hindelang  and  Manfred Jürgensfor the permission to use this audio–demo.

Interplay Between Source and Channel Coding


Message transmission of natural sources such as speech, music, images, videos, etc. is usually done according to the discrete-time model outlined below.

Image Transmission With Source and Channel Coding

From [Liv10][3] the following should be noted:

  • Source and sink are digitised and represented by (approximately equal numbers of ) zeros and ones.
  • The source encoder compresses the binary data – in the example a digital photo – and thus reduces the redundancy of the source.
  • The channel encoder adds redundancy again, and specifically so that some of the errors that occurred on the channel can be corrected in the channel decoder.
  • A discrete-time model with binary input and output is used here for the channel, which should also suitably take into account the components of the technical transmit– and receive equipment (modulator, decision maker, clock recovery).

With correct dimensioning of sources– and channel coding, the quality of the received photo is sufficiently good, even if the sink symbol sequence will not exactly match the source symbol sequence due to error patterns that cannot be corrected. One can detect a (red marked) bit error within the sink symbol sequence.

$\text{Example 8:}$  For the above graph, it was assumed, as an example and for the sake of simplicity, that

  • the source symbol sequence has only the length  $40$ ,
  • the source encoder compresses the data by a factor of  $40/16 = 2.5$  and
  • the channel encoder adds  $50\%$  redundancy.


Thus, only  $24$  encoder symbols have to be transmitted instead of  $40$  source symbols, which reduces the overall transmission rate by  $40\%$ .

If one were to dispense with source coding and transmit the original photo in BMP–format rather than the compressed JPG–image, the quality would be comparable, but a bit rate higher by a factor of  $2.5$, and thus much more effort, would be required.
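
The rate bookkeeping of this example can be reproduced with a short Python sketch (all numbers taken from the assumptions above):

# Sketch: rate bookkeeping for Example 8.
k_source = 40                        # source symbols
k_compressed = int(k_source / 2.5)   # source coding: compression factor 2.5 -> 16
R = 2 / 3                            # channel code rate (50% added redundancy)
n = round(k_compressed / R)          # encoder symbols to be transmitted
print(n)                             # 24 instead of 40 source symbols
print(1 - n / k_source)              # 0.4 -> transmission rate reduced by 40%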


$\text{Example 9:}$  If one were to dispense with both source and channel coding, i.e. transmit the BMP–data directly without error protection, the result would be extremely poor despite the bit rate being higher by a factor of  $40/24$.

Image Transmission Without Source and Channel Coding


$\text{Example 10:}$  Let us now consider the case of directly transmitting the compressed data (for example JPG) without error protection measures.

Image Transmission With Source Coding But Without Channel Coding
  • Since the compressed source has little redundancy left, any single transmission error causes entire image blocks to be decoded incorrectly.
  • This coding scheme (source coding without channel coding) should be avoided at all costs.



Block Diagram and Requirements


In the following sections, we will start from the sketched block diagram with channel encoder, digital channel and channel decoder.

Block Diagram Describing Channel Coding

The following conditions apply:

  • The vector  $\underline{u} = (u_1, u_2, \text{...} \hspace{0.05cm}, u_k)$  denotes an  information block  with  $k$  symbols.
  • Mostly we restrict ourselves to binary symbols (bits)   ⇒   $u_i \in \{0, \, 1\}$ for $i = 1, 2, \text{...} \hspace{0.05cm}, k$  with equal occurrence probabilities for zeros and ones.


  • Each information block  $\underline{u}$  is represented by a  codeword  (or a  code block)  $\underline{x} = (x_1, x_2, \text{...} \hspace{0.05cm}, x_n)$  with  $n \ge k$  and  $x_i \in \{0, \, 1\}$. One then speaks of a binary  $(n, k)$–block code  $C$. We denote the assignment by  $\underline{x} = {\rm enc}(\underline{u})$, where „enc” stands for the encoder function.


  • The  receive word  $\underline{y}$  results from the codeword  $\underline{x}$  by  modulo–2 addition  with the likewise binary error vector  $\underline{e} = (e_1, e_2, \text{...} \hspace{0.05cm}, e_n)$, where „$e_i = 1$” represents a transmission error in the  $i$–th bit and „$e_i = 0$” indicates that the  $i$–th bit of the codeword was transmitted correctly. The following therefore applies:
\[\underline{y} = \underline{x} \oplus \underline{e} \hspace{0.05cm}, \hspace{0.5cm} y_i = x_i \oplus e_i \hspace{0.05cm}, \hspace{0.2cm} i = 1, \text{...} \hspace{0.05cm} , n\hspace{0.05cm}, x_i \hspace{-0.05cm} \in \hspace{-0.05cm} \{ 0, 1 \}\hspace{0.05cm}, \hspace{0.5cm}e_i \in \{ 0, 1 \}\hspace{0.5cm} \Rightarrow \hspace{0.5cm}y_i \in \{ 0, 1 \}\hspace{0.05cm}.\]
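
This bit-by-bit modulo–2 addition corresponds to the XOR operation, as the following minimal Python sketch illustrates (codeword and error vector chosen arbitrarily for illustration):

# Sketch: receive word y as modulo-2 sum of codeword x and error vector e.
x = [0, 1, 0, 1, 0]    # transmitted codeword
e = [0, 0, 1, 0, 0]    # error vector: the third bit is corrupted (e_3 = 1)
y = [xi ^ ei for xi, ei in zip(x, e)]   # y_i = x_i XOR e_i
print(y)               # [0, 1, 1, 1, 0] - differs from x in the third position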


The description by the  Digital Channel Model  – i.e. with binary input and output – is, however, only applicable if the transmission system makes hard decisions – see  AWGN–Channel at Binary Input. Systems with  soft decision  cannot be modelled with this simple model.


  • The vector  $\underline{v}$  after  channel decoding  has the same length  $k$  as the information block  $\underline{u}$. We describe the decoding process with the decoder function as  $\underline{v} = {\rm enc}^{-1}(\underline{y}) = {\rm dec}(\underline{y})$. In the error-free case  $(\underline{y} = \underline{x})$, analogous to  $\underline{x} = {\rm enc}(\underline{u})$, we then have  $\underline{v} = {\rm enc}^{-1}(\underline{x}) = \underline{u}$.


  • If the error vector  $\underline{e} \ne \underline{0}$, then  $\underline{y}$  is usually not a valid element of the block code used, and the decoding is then not a pure mapping  $\underline{y} \rightarrow \underline{v}$, but an estimation of  $\underline{v}$  based on maximum match (minimum error probability).

Important Definitions for Block Coding


We now consider the exemplary binary block code

\[\mathcal{C} = \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 0, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 1, 1) \}\hspace{0.05cm}.\]

This code would be unsuitable for the purpose of error detection or correction. But it is constructed in such a way that it clearly illustrates the calculation of important descriptive variables:

  • Each individual codeword  $\underline{x}$  is described by five bits. Throughout the book, we express this fact by the  codeword length  $n = 5$.


  • The above code contains four elements. Thus the  code size  is  $|C| = 4$. Accordingly, there are also four unique mappings between  $\underline{u}$  and  $\underline{x}$.


  • The length of an information block  $\underline{u}$   ⇒   information block length  is denoted by  $k$. Since for all binary codes  $|C| = 2^k$  holds, it follows from  $|C| = 4$  that  $k = 2$. The assignments between  $\underline{u}$  and  $\underline{x}$  for the above code  $C$  are:
\[\underline{u_0} = (0, 0) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm}, \hspace{0.8cm} \underline{u_1} = (0, 1) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 1, 0, 1, 0) = \underline{x_1}\hspace{0.05cm}, \]
\[\underline{u_2} = (1, 0)\hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 0, 1, 0, 1) = \underline{x_2}\hspace{0.05cm}, \hspace{0.8cm} \underline{u_3} = (1, 1) \hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.\]


  • The code has the  code rate  $R = k/n = 2/5$. Accordingly, its redundancy is  $1-R$, that is  $60\%$. Without error protection  $($i.e. for the case  $n = k)$  the code rate would be  $R = 1$.


  • A small code rate indicates that of the  $n$  bits of a codeword, very few actually carry information. For example, a repetition code  $(k = 1)$  with  $n = 10$  has a code rate  $R = 0.1$.


  • The  Hamming weight  $w_{\rm H}(\underline{x})$  of the codeword  $\underline{x}$  indicates the number of codeword elements  $x_i \ne 0$. For a binary code   ⇒   $x_i \in \{0, \, 1\}$,  $w_{\rm H}(\underline{x})$  is equal to the sum  $x_1 + x_2 + \hspace{0.05cm}\text{...} \hspace{0.05cm}+ x_n$. In the example:
\[w_{\rm H}(\underline{x}_0) = 0\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} w_{\rm H}(\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_3) = 5\hspace{0.05cm}. \]


  • The  Hamming–distance  $d_{\rm H}(\underline{x}, \ \underline{x}\hspace{0.03cm}')$  between the codewords  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$  denotes the number of bit positions in which the two codewords differ:
\[d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_3) = 5\hspace{0.05cm},\hspace{0.4cm} d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_2) = 5\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_3) = 3\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_2, \hspace{0.05cm}\underline{x}_3) = 2\hspace{0.05cm}.\]


  • An important property of a code  $C$  that significantly affects its error correction capability is the  minimum distance  between any two codewords:
\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}')\hspace{0.05cm}.\]
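
A short Python sketch verifies these quantities for the example code $\mathcal{C}$ (the helper functions are only illustrative):

from itertools import combinations

# The example code C from above (four codewords, length n = 5).
C = [(0,0,0,0,0), (0,1,0,1,0), (1,0,1,0,1), (1,1,1,1,1)]

def hamming_weight(x):
    # For binary codewords: number of ones = sum of the elements.
    return sum(x)

def hamming_distance(x, x2):
    # Number of positions in which the two codewords differ.
    return sum(a != b for a, b in zip(x, x2))

print([hamming_weight(x) for x in C])   # [0, 2, 3, 5]
d_min = min(hamming_distance(x, x2) for x, x2 in combinations(C, 2))
print(d_min)                            # 2  ->  a (5, 2, 2) block code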

$\text{Definition:}$  An  $(n, \hspace{0.05cm}k, \hspace{0.05cm}d_{\rm min})\text{–block code}$  has the codeword length  $n$, the information block length  $k$  and the minimum distance  $d_{\rm min}$.

  • According to this nomenclature, the example considered here is a  $(5, \hspace{0.05cm}2,\hspace{0.05cm} 2)$–block code.
  • Sometimes one omits the specification of  $d_{\rm min}$  and then speaks of an  $(n,\hspace{0.05cm} k)$–block code.


Examples of Error Detection and Correction


The variables just defined are now to be illustrated by two examples.

$\text{Example 11:}$     $\text{(4, 2, 2)–block code}$

In the graphic, the arrows pointing to the right illustrate the encoding process, those pointing to the left the decoding:

$$\underline{u_0} = (0, 0) \leftrightarrow (0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm},\hspace{0.5cm} \underline{u_1} = (0, 1) \leftrightarrow (0, 1, 0, 1) = \underline{x_1}\hspace{0.05cm},$$
$$\underline{u_2} = (1, 0) \leftrightarrow (1, 0, 1, 0) = \underline{x_2}\hspace{0.05cm},\hspace{0.5cm} \underline{u_3} = (1, 1) \leftrightarrow (1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.$$
(4, 2, 2)–Block Code For Error Detection

On the right, all  $2^4 = 16$  possible receive words  $\underline{y}$  are shown:

  • Of these,  $2^n - 2^k = 12$  can only be due to bit errors.
  • If the decoder receives such a „white” code word, it detects an error, but it cannot correct it because  $d_{\rm min} = 2$ .
  • For example, if it receives  $\underline{y} = (0, 0, 0, 1)$, then with equal probability  $\underline{x_0} = (0, 0, 0, 0)$  or  $\underline{x_1} = (0, 1, 0, 1)$  may have been sent.
(5, 2, 3)–Block Code For Error Correction

$\text{Example 12:}$     $\text{(5, 2, 3)–block code}$

Here, because of  $k=2$, there are four valid codewords:

$$\underline{x_0} = (0, 0, 0, 0, 0)\hspace{0.05cm},\hspace{0.5cm} \underline{x_1} =(0, 1, 0, 1, 1)\hspace{0.05cm},$$
$$\underline{x_2} =(1, 0, 1, 1, 0)\hspace{0.05cm},\hspace{0.5cm}\underline{x_3} =(1, 1, 1, 0, 1).$$

The graphic shows the receiver side, where you can recognise falsified bits by the italics.

  • Of the  $2^n - 2^k = 28$  invalid codewords, now  $20$  can be assigned to a valid codeword (fill colour:  red, green, blue or ochre), assuming that a single bit error is more likely than two or more.
  • For each valid codeword, there are five invalid codewords, each with only one corruption   ⇒   Hamming–Distance  $d_{\rm H} =1$. These are indicated in the respective square with red, green, blue or ochre background colour.
  • Error correction is possible for these due to the minimum distance  $d_{\rm min} = 3$  between the codewords.
  • Eight receive words are not decodable. For example, the receive word  $\underline{y} = (0, 0, 1, 0, 1)$  could have arisen from the codeword  $\underline{x}_0 = (0, 0, 0, 0, 0)$  but also from the codeword  $\underline{x}_3 = (1, 1, 1, 0, 1)$. In both cases, two bit errors would have occurred.
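
The decision procedure described here corresponds to minimum-distance decoding. A minimal Python sketch for the $(5, 2, 3)$–block code of this example (names chosen freely for illustration):

# Sketch: minimum Hamming distance decoding for the (5, 2, 3) block code.
code = {(0,0): (0,0,0,0,0), (0,1): (0,1,0,1,1),
        (1,0): (1,0,1,1,0), (1,1): (1,1,1,0,1)}

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(y):
    dists = {u: d_H(x, y) for u, x in code.items()}
    best = min(dists.values())
    candidates = [u for u, d in dists.items() if d == best]
    # A unique nearest codeword can be corrected; ties are not decodable.
    return candidates[0] if len(candidates) == 1 else None

print(decode((0,1,0,1,0)))   # (0, 1): the single bit error is corrected
print(decode((0,0,1,0,1)))   # None: two codewords at distance 2 -> ambiguous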


On the Nomenclature in This Book


One of the objectives of our learning tutorial  $\rm LNTwww$  was to describe the entire field of communications engineering and the associated basic subjects with a uniform nomenclature. In this most recently tackled book  „Channel Coding”, some changes in nomenclature nevertheless have to be made. The reasons for this are:

  • Coding theory is a largely self-contained subject and few authors of relevant reference books on the subject attempt to relate it to other aspects of digital signal transmission.
  • The authors of the most important books on channel coding – English-language and German – largely use a uniform nomenclature. We therefore do not take the liberty of forcing the established channel coding designations into our communications engineering scheme.

Some nomenclature changes compared to the other  $\rm LNTwww$  books shall be mentioned here:

  • All signals are represented by symbol sequences in vector notation. For example,  $\underline{u} = (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$  denotes the  source symbol sequence  and  $\underline{v} = (v_1, v_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, v_k)$  the  sink symbol sequence. Previously, these symbol sequences were designated  $\langle q_\nu \rangle$  and  $\langle v_\nu \rangle$, respectively.


  • The vector $\underline{x} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n)$   now denotes the discrete-time equivalent to the transmit signal  $s(t)$, while the receive signal  $r(t)$ is described by the vector  $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$ . The code rate is the quotient   $R=k/n$   with   $0 \le R \le 1$ and the number of check bits is given by  $m = n-k$.


  • In the first main chapter, the elements  $u_i$  and  $v_i$   $($each with index  $i = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, k)$  of the vectors  $\underline{u}$  and  $\underline{v}$  are always binary  $(0$  or  $1)$, as are the  $n$  elements  $x_i$  of the codeword  $\underline{x}$. For the digital channel models  (BSC, BEC, BSEC)  the same applies to the  $n$  received values  $y_i \in \{0, 1\}$.


  • The  AWGN–channel  is characterised by real-valued output values  $y_i$. The codeword estimator in this case extracts from the vector  $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$  the binary vector  $\underline{z} = (z_1, z_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, z_n)$, which is to be compared with the codeword  $\underline{x}$.


  • The transition from  $\underline{y}$  to  $\underline{z}$  is done by threshold decision   ⇒   Hard Decision or according to the MAP–criterion   ⇒   Soft Decision. For equally likely input symbols, the „Maximum Likelihood”–estimation also leads to the minimum error rate.


  • In the context of the AWGN model, it makes sense to represent the binary code symbols  $x_i$  in bipolar form (i.e. as  $\pm1$). This does not change the statistical properties. In the following, we denote bipolar signalling by a tilde. Then the following applies:
\[\tilde{x}_i = 1 - 2 x_i = \left\{ \begin{array}{c} +1\\ -1 \end{array} \right.\quad \begin{array}{*{1}c} {\rm if} \hspace{0.15cm} x_i = 0\hspace{0.05cm},\\ {\rm if} \hspace{0.15cm}x_i = 1\hspace{0.05cm}.\\ \end{array}\]
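
As a minimal Python sketch of this mapping (codeword chosen arbitrarily for illustration):

# Sketch: bipolar representation of binary code symbols.
x = [0, 1, 0, 1, 1]
x_tilde = [1 - 2 * xi for xi in x]   # 0 -> +1, 1 -> -1
print(x_tilde)                       # [1, -1, 1, -1, -1]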

Exercises For The Chapter


Exercise 1.1: On the Labelling of All Books

Exercise 1.2: Simple Binary Channel Code

Exercise 1.2Z: 3D Representation of Codes

Sources

  1. Söder, G.: Modellierung, Simulation und Optimierung von Nachrichtensystemen. Berlin – Heidelberg: Springer, 1993.
  2. Kötter, R.; Mayer, T.; Tüchler, M.; Schreckenbach, F.; Brauchle, J.: Channel Coding. Lecture notes, Lehrstuhl für Nachrichtentechnik, TU München, 2008.
  3. Liva, G.: Channel Coding. Lecture notes, Lehrstuhl für Nachrichtentechnik, TU München and DLR Oberpfaffenhofen, 2010.