Channel Coding/Objective of Channel Coding

{{FirstPage}}
 
{{Header|
Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=Signaldarstellung
|Nächste Seite=Channel Models and Decision Structures
 
 
}}
 
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==
 
<br>
 
The first chapter deals with&nbsp; &raquo;'''block codes for error detection and error correction'''&laquo;&nbsp; and provides the basics for describing more effective codes such as
:*the&nbsp; &raquo;Reed-Solomon codes&laquo;&nbsp; (see Chapter 2),
:*the&nbsp; &raquo;convolutional codes&laquo;&nbsp; (see Chapter 3),&nbsp; and
:*the&nbsp; &raquo;iteratively decodable product codes&laquo;&nbsp; ("turbo codes") and the&nbsp; &raquo;low-density parity-check codes&laquo;&nbsp; (see Chapter 4).
  
This specific field of coding is called&nbsp; &raquo;'''Channel Coding'''&laquo;&nbsp; in contrast to
:*&nbsp; &raquo;Source Coding&laquo;&nbsp; (redundancy reduction for reasons of data compression),&nbsp; and
:*&nbsp; &raquo;Line Coding&laquo;&nbsp; (additional redundancy to adapt the digital signal to the spectral characteristics of the transmission medium).
  
We restrict ourselves here to&nbsp; &raquo;binary codes&laquo;.&nbsp; In detail,&nbsp; this book covers:
  
#Definitions and introductory examples of&nbsp; &raquo;error detection and error correction&laquo;,
#a brief review of appropriate&nbsp; &raquo;channel models&laquo;&nbsp; and&nbsp; &raquo;decision device structures&laquo;,
#known binary block codes such as&nbsp; &raquo;single parity-check code&laquo;,&nbsp; &raquo;repetition code&laquo;&nbsp; and&nbsp; &raquo;Hamming code&laquo;,
#the general description of linear codes using&nbsp; &raquo;generator matrix&laquo;&nbsp; and&nbsp; &raquo;check matrix&laquo;,
#the decoding possibilities for block codes,&nbsp; including&nbsp; &raquo;syndrome decoding&laquo;,
#simple approximations and upper bounds for the&nbsp; &raquo;block error probability&laquo;,&nbsp; and
#an&nbsp; &raquo;information-theoretic bound&laquo;&nbsp; on channel coding.
  
== Error detection and error correction ==
<br>
Transmission errors occur in every digital transmission system.&nbsp; It is possible to keep the probability&nbsp; $p_{\rm B}$&nbsp; of such a bit error very small,&nbsp; for example by using a very large signal energy.&nbsp; However, the bit error probability&nbsp; $p_{\rm B} = 0$&nbsp; is never achievable because of the Gaussian PDF of the thermal noise that is always present.<br>
Particularly in the case of heavily disturbed channels and also for safety-critical applications,&nbsp; it is therefore essential to provide special protection for the data to be transmitted,&nbsp; adapted to the application and channel.&nbsp; For this purpose,&nbsp; redundancy is added at the transmitter and this redundancy is used at the receiver to reduce the number of decoding errors.
{{BlaueBox|TEXT= 
$\text{Definitions:}$
#&nbsp;&raquo;'''Error Detection'''&laquo;: &nbsp; The decoder checks the integrity of the received blocks and marks any errors found.&nbsp;  If necessary,&nbsp;  the receiver informs the transmitter about erroneous blocks via the return channel,&nbsp;  so that the transmitter sends the corresponding block again.<br><br>
#&nbsp;&raquo;'''Error Correction'''&laquo;: &nbsp; The decoder detects one&nbsp;  (or more)&nbsp; bit errors and provides further information about them,&nbsp;  for example their positions in the transmitted block.&nbsp;  In this way,&nbsp;  it may be possible to completely correct the errors that have occurred.<br><br>
#&nbsp;&raquo;'''Channel Coding'''&laquo;&nbsp; includes both procedures for&nbsp; &raquo;'''error detection'''&laquo;&nbsp; and those for&nbsp; &raquo;'''error correction'''&laquo;.<br>}}
All&nbsp; $\rm ARQ$&nbsp; ("Automatic Repeat Request")&nbsp; methods use error detection only.&nbsp;
*Less redundancy is required for error detection than for error correction.&nbsp;
*One disadvantage of ARQ is its low throughput when channel quality is poor,&nbsp; i.e. when entire blocks of data must be frequently re-requested by the receiver.<br>
In this book we mostly deal with&nbsp; $\rm FEC$&nbsp; ("Forward Error Correction")&nbsp;  which leads to very small bit error rates if the channel is sufficiently good &nbsp;  (large SNR).&nbsp;
*With worse channel conditions,&nbsp; nothing changes in the throughput,&nbsp; i.e. the same amount of information is transmitted.&nbsp;
*However,&nbsp; the symbol error rate can then assume very large values.<br>
Often FEC and ARQ methods are combined,&nbsp; and the redundancy is divided between them in such a way
*that a small number of errors can still be corrected,&nbsp; but
*when there are too many errors,&nbsp; a repetition of the block is requested.
  
== Some introductory examples of error detection ==
 
<br>
 
{{GraueBox|TEXT= 
$\text{Example 1: &nbsp; Single Parity-Check Code (SPC)}$
  
If one extends&nbsp; $k = 4$&nbsp; bits by a so-called&nbsp; "parity bit"&nbsp; in such a way that the sum of all ones is even,&nbsp; e.g.&nbsp; (with bold parity bits)<br>
  
:$$0000\hspace{0.03cm}\boldsymbol{0},\ 0001\hspace{0.03cm}\boldsymbol{1}, \text{...} ,\ 1111\hspace{0.03cm}\boldsymbol{0}, \text{...}\ ,$$
  
it is very easy to recognize a single error.&nbsp; Two errors within a code word, on the other hand, remain undetected. }}<br>
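
The detection rule can be sketched in a few lines of Python&nbsp; (the helper names are chosen here for illustration only):&nbsp; the appended bit makes the number of ones even,&nbsp; so one flipped bit violates the check,&nbsp; while two flipped bits cancel each other out.

<syntaxhighlight lang="python">
def add_parity(bits):
    """Append an even-parity bit so that the total number of ones is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Check whether the received word still has even parity."""
    return sum(word) % 2 == 0

code_word = add_parity([0, 0, 0, 1])   # -> [0, 0, 0, 1, 1]

single_error = code_word.copy()
single_error[0] ^= 1                   # one bit flipped
print(parity_ok(single_error))         # False -> error detected

double_error = code_word.copy()
double_error[0] ^= 1
double_error[1] ^= 1                   # two bits flipped
print(parity_ok(double_error))         # True -> errors remain undetected
</syntaxhighlight>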
  
{{GraueBox|TEXT=
$\text{Example 2: &nbsp; International Standard Book Number (ISBN)}$
  
Since the 1960s,&nbsp; all books have been given 10-digit codes&nbsp; ("ISBN&ndash;10").&nbsp; Since 2007,&nbsp; the specification according to&nbsp; "ISBN&ndash;13"&nbsp; is additionally obligatory.&nbsp; For example,&nbsp; these are for the reference book&nbsp; [Söd93]<ref name ='Söd93'>Söder, G.:&nbsp; Modellierung, Simulation und Optimierung von Nachrichtensystemen.&nbsp; Berlin - Heidelberg: Springer, 1993.</ref>:<br>
  
*$\boldsymbol{3&ndash;540&ndash;57215&ndash;5}$&nbsp; (for&nbsp; "ISBN&ndash;10"),
*$\boldsymbol{978&ndash;3&ndash;54057215&ndash;2}$&nbsp; (for&nbsp; "ISBN&ndash;13").
  
The last digit&nbsp; $z_{10}$&nbsp; for&nbsp; "ISBN&ndash;10"&nbsp; results from the previous digits&nbsp; $z_1 = 3$,&nbsp; $z_2 = 5$, ... ,&nbsp; $z_9 = 5$&nbsp; according to the following calculation rule:
  
::<math>z_{10} = \left ( \sum_{i=1}^{9} \hspace{0.2cm} i \cdot z_i \right ) \hspace{-0.3cm} \mod 11 =
(1 \cdot 3 + 2 \cdot 5 + \text{...} + 9 \cdot 5 ) \hspace{-0.2cm} \mod 11 = 5
\hspace{0.05cm}. </math>
  
:Note that&nbsp; $z_{10} = 10$&nbsp; must be written as&nbsp; $z_{10} = \rm X$&nbsp; (Roman numeral representation of&nbsp; "10"),&nbsp; since the number&nbsp; $10$&nbsp; cannot be represented as a single digit in the decimal system.<br>
  
The same applies to the check digit for&nbsp; "ISBN&ndash;13":
  
::<math>z_{13}= 10 - \left ( \sum_{i=1}^{12} \hspace{0.2cm}  z_i  \cdot 3^{(i+1) \hspace{-0.2cm} \mod 2}  \right ) \hspace{-0.3cm} \mod 10 = 10 \hspace{-0.05cm}- \hspace{-0.05cm} \big [(9\hspace{-0.05cm}+\hspace{-0.05cm}8\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}0\hspace{-0.05cm}+\hspace{-0.05cm}7\hspace{-0.05cm}+\hspace{-0.05cm}1) \cdot 1 + (7\hspace{-0.05cm}+\hspace{-0.05cm}3\hspace{-0.05cm}+\hspace{-0.05cm}4\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}2\hspace{-0.05cm}+\hspace{-0.05cm}5) \cdot 3\big ] \hspace{-0.2cm} \mod 10 </math>
:: <math>\Rightarrow \hspace{0.3cm} z_{13}=  10 - (108 \hspace{-0.2cm} \mod 10) = 10 - 8 = 2
\hspace{0.05cm}. </math>
  
With both variants,&nbsp; in contrast to the single parity-check code&nbsp; $\rm (SPC)$&nbsp; above,&nbsp; digit transpositions such as&nbsp; $57 \, \leftrightarrow \, 75$&nbsp; are also detected,&nbsp; since different positions are weighted differently here.}}
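
Both check equations are easy to verify programmatically;&nbsp; a minimal Python sketch&nbsp; (function names chosen here for illustration)&nbsp; reproduces the two calculations above:

<syntaxhighlight lang="python">
def isbn10_check_digit(digits):
    """Check digit z_10 from the nine digits z_1 ... z_9 (ISBN-10)."""
    s = sum(i * z for i, z in enumerate(digits, start=1)) % 11
    return "X" if s == 10 else str(s)         # 10 is written as the Roman numeral X

def isbn13_check_digit(digits):
    """Check digit z_13 from the twelve digits z_1 ... z_12 (ISBN-13)."""
    s = sum(z * (3 if i % 2 == 0 else 1) for i, z in enumerate(digits, start=1)) % 10
    return str((10 - s) % 10)                 # "% 10" maps the special case s = 0 to 0

print(isbn10_check_digit([3, 5, 4, 0, 5, 7, 2, 1, 5]))           # -> 5
print(isbn13_check_digit([9, 7, 8, 3, 5, 4, 0, 5, 7, 2, 1, 5]))  # -> 2
</syntaxhighlight>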
{{GraueBox|TEXT=
[[File:EN_KC_T_1_1_s2.png|right|frame|One-dimensional bar code]]
$\text{Example 3: &nbsp; Bar code&nbsp; (one-dimensional)}$
The most widely used error-detecting code worldwide is the&nbsp; "bar code"&nbsp; for marking products,&nbsp; for example according to&nbsp; EAN&ndash;13&nbsp; ("European Article Number")&nbsp; with 13 digits.
*These are represented by bars and gaps of different widths and can be easily decoded with an opto&ndash;electronic reader.  
  
*The first three digits indicate the country&nbsp; (e.g. Germany: &nbsp; 400 ... 440),&nbsp; the next four or five digits identify the producer and the product.
*The last digit is the check digit&nbsp; $z_{13}$,&nbsp; which is calculated exactly as for ISBN&ndash;13.}}
  
== Some introductory examples of error correction ==
 
<br>
 
{{GraueBox|TEXT=
$\text{Example 4: &nbsp; Two-dimensional bar codes for online tickets}$
If you book a railway ticket online and print it out,&nbsp; you will find an example of a two-dimensional bar code,&nbsp; namely the&nbsp; [https://en.wikipedia.org/wiki/Aztec_Code $\text{Aztec code}$]&nbsp; developed in 1995 by Andy Longacre at the Welch Allyn company in the USA,&nbsp; with which amounts of data up to&nbsp; $3000$&nbsp; characters can be encoded.&nbsp; Due to the&nbsp; [[Channel_Coding/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|$\text{Reed&ndash;Solomon error correction}$]],&nbsp; reconstruction of the data content is still possible even if up to&nbsp; $40\%$&nbsp; of the code has been destroyed,&nbsp; for example by bending the ticket or by coffee stains.<br>
  
On the right you see a&nbsp; $\rm QR\ code$&nbsp; ("Quick Response")&nbsp; with associated content.
[[File:EN_KC_T_1_1_S2a_v2.png|right|frame|Two-dimensional bar codes:&nbsp; Aztec and QR code|class=fit]]
*The QR code was developed in 1994 for the automotive industry in Japan to mark components and also allows error correction.  
  
*In the meantime,&nbsp; the use of the QR code has become very diverse.&nbsp; In Japan,&nbsp; it can be found on almost every advertising poster and on every business card.&nbsp; It is also becoming more and more popular in Europe.<br>
  
*All two-dimensional bar codes have square markings to calibrate the reader.&nbsp; You can find details on this in&nbsp; [KM+09]<ref>Kötter, R.; Mayer, T.; Tüchler, M.; Schreckenbach, F.; Brauchle, J.:&nbsp; Channel Coding. Lecture notes, Institute for Communications Engineering, TU München, 2008.</ref>.}}
  
  
{{GraueBox|TEXT=
$\text{Example 5: &nbsp; Codes for satellites and space communications}$
  
One of the first areas of application of error correction methods was communication from/to satellites and space shuttles,&nbsp; i.e. transmission routes characterized
*by low transmission powers
*and large path losses.  
  
As early as 1977,&nbsp; channel coding was used in the space mission&nbsp; "Voyager 1"&nbsp; to Neptune and Uranus,&nbsp; in the form of serial concatenation of a&nbsp; [[Channel_Coding/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|$\text{Reed&ndash;Solomon code}$]]&nbsp; and a&nbsp; [[Channel_Coding/Grundlagen_der_Faltungscodierung|$\text{convolutional code}$]].<br>
  
Thus,&nbsp; the power parameter&nbsp; $10 &middot; \lg \; E_{\rm B}/N_0 \approx 2 \, \rm dB$&nbsp; was already sufficient to achieve the required bit error rate&nbsp; $5 &middot; 10^{-5}$&nbsp; (related to the compressed data after source coding).&nbsp; Without channel coding,&nbsp; on the other hand,&nbsp; almost&nbsp; $9 \, \rm dB$&nbsp; are required for the same bit error rate,&nbsp; i.e. a factor&nbsp; $10^{0.7} &asymp; 5$&nbsp; greater transmission power.<br>
  
The planned Mars project&nbsp; (data transmission from Mars to Earth with&nbsp; $\rm 5\,W$&nbsp; lasers)&nbsp; will also only be successful with a sophisticated coding scheme.}}
  
{{GraueBox|TEXT=  
$\text{Example 6: &nbsp; Channel codes for mobile communications}$
 
  
A further and particularly high-turnover application area that would not function without channel coding is mobile communication.&nbsp; Here,&nbsp; unfavourable conditions without coding would result in error rates in the percentage range and,&nbsp; due to shadowing and multipath propagation&nbsp; (echoes),&nbsp; the errors often occur in bursts.&nbsp; The burst length is sometimes several hundred bits.
*For voice transmission in the&nbsp; [[Examples_of_Communication_Systems/Entire_GSM_Transmission_System|$\text{GSM system}$]],&nbsp; the&nbsp; $182$&nbsp; most important&nbsp; (class 1a and 1b)&nbsp; of the total&nbsp; $260$&nbsp; bits of a voice frame&nbsp; $(20 \, \rm ms)$,&nbsp; together with a few parity and tail bits,&nbsp; are convolutionally encoded&nbsp; $($with memory&nbsp; $m = 4$&nbsp; and code rate&nbsp; $R = 1/2)$&nbsp; and scrambled.&nbsp; Together with the&nbsp; $78$&nbsp; less important and therefore uncoded bits of class 2,&nbsp; this results in the bit rate increasing from&nbsp; $13 \, \rm kbit/s$&nbsp; to&nbsp; $22.4 \, \rm kbit/s$.<br>
  
*One uses the&nbsp; (relative)&nbsp; redundancy of&nbsp; $r = (22.4 - 13)/22.4 \approx 0.42$&nbsp; for error correction.&nbsp; Note that,&nbsp; because of the definition used here,&nbsp; $r = 0.42$&nbsp; states that&nbsp; $42\%$&nbsp; of the encoded bits are redundant.&nbsp; With the reference value&nbsp; "bit rate of the uncoded sequence"&nbsp; one would get&nbsp; $r = 9.4/13 \approx 0.72$,&nbsp; with the statement: &nbsp; $72\%$&nbsp; parity bits are added to the information bits.<br>
  
*For&nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS|$\text{UMTS}$]]&nbsp; ("Universal Mobile Telecommunications System"),&nbsp; [[Channel_Coding/Grundlagen_der_Faltungscodierung|$\text{convolutional codes}$]]&nbsp;  with the rates&nbsp; $R = 1/2$&nbsp; or&nbsp; $R = 1/3$&nbsp; are used.&nbsp; In the UMTS modes for higher data rates and correspondingly lower spreading factors,&nbsp; on the other hand,&nbsp; one uses&nbsp;  [[Channel_Coding/Grundlegendes_zu_den_Turbocodes|$\text{turbo codes}$]]&nbsp; with rate&nbsp; $R = 1/3$&nbsp; and iterative decoding.&nbsp; Depending on the number of iterations,&nbsp; gains of up to&nbsp; $3 \, \rm dB$&nbsp; can be achieved compared to convolutional coding.}}<br>
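
The two redundancy values mentioned in the GSM example differ only in their reference value;&nbsp; a short plausibility check in Python:

<syntaxhighlight lang="python">
# Bit rates of the GSM example above, in kbit/s
coded, uncoded = 22.4, 13.0

r_coded   = (coded - uncoded) / coded     # reference: bit rate of the coded sequence
r_uncoded = (coded - uncoded) / uncoded   # reference: bit rate of the uncoded sequence

print(round(r_coded, 2), round(r_uncoded, 2))   # 0.42 0.72
</syntaxhighlight>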
  
{{GraueBox|TEXT=
$\text{Example 7: &nbsp; Error protection of the compact disc}$
  
For a compact disc&nbsp; $\rm (CD)$,&nbsp; one uses&nbsp; "cross-interleaved"&nbsp; [[Channel_Coding/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|$\text{Reed&ndash;Solomon codes}$]]&nbsp; $\rm (RS)$&nbsp; and then a so-called&nbsp; [https://en.wikipedia.org/wiki/Eight-to-fourteen_modulation $\text{Eight&ndash;to&ndash;Fourteen modulation}$].&nbsp; Redundancy is used for error detection and correction.&nbsp; This coding scheme shows the following characteristics:
*The common code rate of the two RS&ndash;component codes is&nbsp; $R_{\rm RS} = 24/28 &middot; 28/32 = 3/4$.&nbsp; Through the 8&ndash;to&ndash;14 modulation and some control bits,&nbsp; one arrives at the total code rate&nbsp; $R &asymp; 1/3$.<br>
  
*In the case of statistically independent errors according to the&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC}$]]&nbsp;  model&nbsp; ("Binary Symmetric Channel"),&nbsp; a complete correction is possible as long as the bit error rate does not exceed the value&nbsp; $10^{-3}$.<br>
  
*The CD-specific&nbsp; "cross interleaver"&nbsp; scrambles&nbsp; $108$&nbsp; blocks together so that the&nbsp; $588$&nbsp; bits of a block&nbsp; $($each bit corresponds to approx.&nbsp; $0.28 \, \rm {&micro; m})$&nbsp; are distributed over approx.&nbsp; $1.75\, \rm cm$.<br>
  
*With the code rate&nbsp; $R &asymp; 1/3$&nbsp; one can correct approx.&nbsp; $10\%$&nbsp; erasures.&nbsp; The lost values can be reconstructed&nbsp; (approximately)&nbsp; by interpolation &nbsp; &#8658; &nbsp;"Error Concealment".<br><br>
  
In summary,&nbsp; if a compact disc has a scratch of&nbsp; $1.75\, \rm mm$&nbsp; in length in the direction of play&nbsp; $($i.e. more than&nbsp; $6000$&nbsp; consecutive erasures$)$,&nbsp; still&nbsp; $90\%$&nbsp; of all the bits in a block are error-free,&nbsp; so that even the missing&nbsp; $10\%$&nbsp; can be reconstructed,&nbsp; or at least the erasures can be concealed so that they are not audible.<br>
  
A demonstration of the CD's error correction capability follows in the next section.}}<br>
  
== The "Slit CD" &ndash; a demonstration by the LNT of TUM ==
 
<br>
 
At the end of the 1990s,&nbsp; members of the&nbsp;  [https://www.ce.cit.tum.de/en/lnt/home/ $\text{Institute for Communications Engineering}$]&nbsp; of the&nbsp; [https://www.tum.de/en/about-tum/ $\text{Technical University of Munich}$],&nbsp; led by Prof.&nbsp; [[Biographies_and_Bibliographies/Lehrstuhlinhaber_des_LNT#Prof._Dr.-Ing._Dr.-Ing._E.h._Joachim_Hagenauer_.281993-2006.29|$\text{Joachim Hagenauer}$]],&nbsp; deliberately damaged a music CD by cutting a total of three slits,&nbsp; each more than one millimetre wide.&nbsp; With each defect,&nbsp; almost&nbsp; $4000$&nbsp; consecutive bits of audio coding are missing.<br>
[[File:P ID2333 KC T 1 1 S2b.png|right|frame|"Slit CD"&nbsp;  of the&nbsp; $\rm LNT/TUM$]]
The diagram shows the&nbsp; "slit CD":
*Both track 3 and track 14 have two such defective areas on each revolution.
*You can visualise the music quality with the help of the two audio players&nbsp; (playback time approx. 15 seconds each).
*The theory of this audio demo can be found in&nbsp; $\text{Example 7}$&nbsp; of the previous section.<br>
Track 14:
  
<lntmedia>file:A_ID59__14_1.mp3</lntmedia>
  
Track 3:
  
<lntmedia>file:A_ID60__3_1.mp3</lntmedia>
  
<br><b>Summary of this audio demo:</b>
*The CD's error correction is based on two serially concatenated&nbsp; [[Channel_Coding/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|$\text{Reed&ndash;Solomon codes}$]]&nbsp; and an&nbsp; [https://en.wikipedia.org/wiki/Eight-to-fourteen_modulation $\text{Eight&ndash;to&ndash;Fourteen modulation}$].&nbsp; The total code rate for RS error correction is&nbsp; $R = 3/4$.<br>
  
*As important for the functioning of the compact disc as the codes is the interposed interleaver,&nbsp; which distributes the erased bits&nbsp; ("erasures")&nbsp; over a length of almost&nbsp; $2 \, \rm cm$.<br>
  
*In&nbsp; '''Track 14'''&nbsp; the two defective areas are sufficiently far apart.&nbsp; Therefore,&nbsp; the Reed&ndash;Solomon decoder is able to reconstruct the missing data.<br>
  
*In&nbsp; '''Track 3'''&nbsp; the two error blocks follow each other at a very short distance,&nbsp; so that the correction algorithm fails.&nbsp; The result is an almost periodic clacking noise.<br><br>
  
We would like to thank&nbsp; [[Biographies_and_Bibliographies/An_LNTwww_beteiligte_Mitarbeiter_und_Dozenten#Dr.-Ing._Thomas_Hindelang_.28at_LNT_from_1994-2000_und_2007-2012.29|$\text{Thomas Hindelang}$]],&nbsp;  Rainer Bauer, and&nbsp; Manfred Jürgens for the permission to use this audio&ndash;demo.<br>
  
== Interplay between source and channel coding ==
 
<br>
 
Transmission of natural sources such as speech,&nbsp; music,&nbsp; images,&nbsp; videos,&nbsp; etc. is usually done according to the discrete-time model outlined below.&nbsp; Regarding this diagram,&nbsp; taken from&nbsp; [Liv10]<ref name ='Liv10'>Liva, G.:&nbsp; Channel Coding.&nbsp; Lecture manuscript,&nbsp; Institute for Communications Engineering,&nbsp; TU München and DLR Oberpfaffenhofen, 2010.</ref>,&nbsp; the following should be noted:
*Source and sink are digitized and represented by&nbsp; (approximately equal numbers of)&nbsp; zeros and ones.<br>
*The source encoder compresses the binary data&nbsp; &ndash; in the following example a digital photo &ndash;&nbsp; and thus reduces the redundancy of the source.<br>
*The channel encoder adds redundancy again,&nbsp; and specifically so that some of the errors that occurred on the channel can be corrected in the channel decoder.<br>
*A discrete-time model with binary input and output is used here for the channel,&nbsp; which should also suitably take into account the components of the technical equipment  at transmitter and receiver&nbsp; (modulator,&nbsp; decision device,&nbsp; clock recovery).<br><br>
  
With correct dimensioning of source and channel coding,&nbsp; the quality of the received photo is sufficiently good,&nbsp; even if the sink symbol sequence will not exactly match the source symbol sequence due to error patterns that cannot be corrected.&nbsp; One can also detect the&nbsp; (marked in red)&nbsp; bit errors within the sink symbol sequence of the next example.<br>
  
{{GraueBox|TEXT=
$\text{Example 8:}$&nbsp; For the graph,&nbsp; it was assumed,&nbsp; as an example and for the sake of simplicity,&nbsp; that
[[File:EN_KC_T_1_1_S3a_v3.png|right|frame|Image transmission with source and channel coding|class=fit]]
*the source symbol sequence has only length&nbsp; $40$,
  
*the source encoder compresses the data by a factor of&nbsp; $40/16 = 2.5$, &nbsp; and
  
*the channel encoder adds&nbsp; $50\%$&nbsp; redundancy.  
  
Thus,&nbsp; only&nbsp; $24$&nbsp; encoded symbols have to be transmitted instead of&nbsp; $40$&nbsp; source symbols,&nbsp; which reduces the overall transmission rate by&nbsp; $40\%$.<br>
  
Für obige Grafik wurde beispielhaft angenommen, dass der Quellencoder die Daten um den Faktor 40/16 = 2.5 komprimiert und der Kanalcoder 50% Redundanz hinzufügt. Übertragen werden müssen also nur 60% aller Quellensymbole, zum Beispiel 24 statt 40.<br>
+
If one were to dispense with source encoding and transmit the original photo in BMP format rather than the compressed JPG image,&nbsp; the quality would be comparable,&nbsp; but a bit rate higher by a factor of&nbsp; $2.5$&nbsp; and thus much more effort would be required.}}<br>
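
The numbers of this example fit together as follows&nbsp; (a short plausibility check in Python;&nbsp; the variable names are chosen freely):

<syntaxhighlight lang="python">
source_symbols = 40                      # length of the source symbol sequence
compressed     = source_symbols / 2.5    # source coding:  16 symbols remain
encoded        = compressed * 1.5        # channel coding adds 50 %:  24 symbols

saving = 1 - encoded / source_symbols    # overall reduction of the transmission rate
print(int(encoded), saving)              # 24  0.4  ->  40 % fewer symbols to transmit
</syntaxhighlight>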
  
Würde man auf die Quellencodierung verzichten, in dem man das ursprüngliche Foto im BMP&ndash;Format übertragen würde und nicht das komprimierte JPG&ndash;Bild, so wäre die Qualität vergleichbar, aber eine um den Faktor 2.5 höhere Bitrate und damit sehr viel mehr Aufwand erforderlich.<br>
+
{{GraueBox|TEXT=
[[File:EN_KC_T_1_1_S3b_v3.png|right|frame|Image transmission without source and channel coding|class=fit]]
$\text{Example 9:}$&nbsp;  
<br><br><br><br>
If one were to dispense with both,
*source coding and
*channel coding,
  
i.e. transmit the BMP data directly without error protection,&nbsp; the result would be extremely poor despite the&nbsp; $($by a factor of&nbsp; $40/24)$&nbsp; greater bit rate.
<br clear=all>
[[File:EN_KC_T_1_1_S3c_v3.png|left|frame|Image transmission with source coding,&nbsp; but without channel coding|class=fit]]
<br><br>
&raquo;'''Source coding but no channel coding'''&laquo;
  
Now let us consider the case of directly transmitting the compressed data&nbsp; (e.g. JPG)&nbsp; without error protection measures.&nbsp; Then:
  
#The compressed source has only little redundancy left.
#Thus,&nbsp; any single transmission error will cause entire image blocks to be decoded incorrectly.<br>
#&raquo;'''This coding scheme should be avoided at all costs'''&laquo;.}}
  
 
  
== Block diagram and requirements ==
 
<br>
 
In the further sections,&nbsp; we will start from the sketched block diagram with channel encoder,&nbsp; digital channel and channel decoder.&nbsp; The following conditions apply:  
 
 
[[File:EN_KC_T_1_1_S4_v2.png|right|frame|Block diagram describing channel coding|class=fit]]
*The vector&nbsp; $\underline{u} = (u_1, u_2, \text{...} \hspace{0.05cm}, u_k)$&nbsp; denotes an&nbsp; &raquo;'''information block'''&laquo;&nbsp; with&nbsp; $k$&nbsp; symbols.&nbsp; We restrict ourselves to binary symbols&nbsp; (bits) &nbsp; &#8658; &nbsp; $u_i \in \{0, \, 1\}$ &nbsp; for $i = 1, 2, \text{...} \hspace{0.05cm}, k$ &nbsp; with equal occurrence probabilities for zeros and ones.<br>
  
*Each information block&nbsp; $\underline{u}$&nbsp; is represented by a&nbsp; &raquo;'''code word'''&laquo;&nbsp; (or&nbsp; "code block")&nbsp; $\underline{x} = (x_1, x_2, \text{...} \hspace{0.05cm}, x_n)$ &nbsp; with &nbsp; $n \ge k$,&nbsp; $x_i \in \{0, \, 1\}$.&nbsp; One then speaks of a binary&nbsp; $(n, k)$&nbsp; block code&nbsp; $C$.&nbsp; We denote the assignment by&nbsp; $\underline{x} = {\rm enc}(\underline{u})$,&nbsp; where&nbsp; "enc"&nbsp; stands for&nbsp; "encoder function".<br>
  
*The&nbsp; &raquo;'''received word'''&laquo;&nbsp;  $\underline{y}$&nbsp; results from the code word&nbsp; $\underline{x}$&nbsp; by the&nbsp;  [https://en.wikipedia.org/wiki/Modular_arithmetic $\text{modulo-2}$]&nbsp; sum with the likewise binary error vector&nbsp; $\underline{e} = (e_1, e_2, \text{...} \hspace{0.05cm}, e_n)$,&nbsp; where&nbsp; "$e_i = 1$"&nbsp; represents a transmission error and&nbsp; "$e_i = 0$"&nbsp; indicates that the&nbsp; $i$&ndash;th bit of the code word was transmitted correctly.&nbsp; The following therefore applies:
  
::<math>\underline{y} = \underline{x} \oplus \underline{e} \hspace{0.05cm}, \hspace{0.5cm} y_i  =  x_i \oplus e_i \hspace{0.05cm}, \hspace{0.2cm} i = 1, \text{...} \hspace{0.05cm}  , n\hspace{0.05cm}, </math>
::<math>x_i \hspace{-0.05cm} \in  \hspace{-0.05cm} \{ 0, 1 \}\hspace{0.05cm}, \hspace{0.5cm}e_i \in  \{ 0, 1 \}\hspace{0.5cm}
\Rightarrow \hspace{0.5cm}y_i  \in  \{ 0, 1 \}\hspace{0.05cm}.</math>
*The description by the&nbsp; &raquo;'''digital channel model'''&laquo;&nbsp; &ndash; i.e. with binary input and output &ndash; is,&nbsp; however,&nbsp; only applicable if the transmission system makes hard decisions &ndash; see section&nbsp; [[Channel_Coding/Decoding_of_Linear_Block_Codes#Coding_gain_-_bit_error_rate_with_AWGN|"AWGN channel at binary input"]].&nbsp; Systems with&nbsp;  [[Channel_Coding/Decodierung_linearer_Blockcodes#Codiergewinn_.E2.80.93_Bitfehlerrate_bei_AWGN|$\text{soft decision}$]]&nbsp; cannot be modelled with this simple model.<br>
  
*The vector&nbsp; $\underline{v}$&nbsp; after&nbsp; &raquo;'''channel decoding'''&laquo;&nbsp; has the same length&nbsp; $k$&nbsp; as the information block&nbsp; $\underline{u}$.&nbsp; We describe the decoding process with the&nbsp; "decoder function"&nbsp; as&nbsp; $\underline{v} = {\rm dec}(\underline{y})$.&nbsp; In the error-free case,&nbsp; analogous to&nbsp; $\underline{x} = {\rm enc}(\underline{u})$,&nbsp; we have&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{y})$.<br>
  
*If the error vector&nbsp; $\underline{e} \ne \underline{0}$,&nbsp; then&nbsp; $\underline{y}$&nbsp; is usually not a valid element of the block code used,&nbsp; and the decoding is then not a pure mapping&nbsp; $\underline{y} \rightarrow \underline{v}$,&nbsp; but an estimate of&nbsp; $\underline{v}$&nbsp; based on maximum match&nbsp; ("minimum error probability").<br>
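
The listed relations can be illustrated with a few lines of Python.&nbsp; The&nbsp; $(4, 2)$&nbsp; mapping used here is a toy assumption for demonstration purposes only,&nbsp; not one of the codes discussed later:

<syntaxhighlight lang="python">
# Toy (4, 2) block code: mapping u -> x = enc(u)
encoder = {
    (0, 0): (0, 0, 0, 0),
    (0, 1): (0, 1, 0, 1),
    (1, 0): (1, 0, 1, 0),
    (1, 1): (1, 1, 1, 1),
}

u = (0, 1)                                    # information block
x = encoder[u]                                # code word
e = (0, 0, 1, 0)                              # error vector with one transmission error
y = tuple(xi ^ ei for xi, ei in zip(x, e))    # received word y = x (+) e  (modulo-2)

print(y)                                      # (0, 1, 1, 1)
print(y in encoder.values())                  # False -> y is not a valid code word
</syntaxhighlight>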
  
== Important definitions for block coding ==
 
<br>
 
We now consider the exemplary binary block code
  
:<math>\mathcal{C} = \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 0, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 1, 1) \}\hspace{0.05cm}.</math>
  
This code would be unsuitable for the purpose of error detection or error correction.&nbsp; But it is constructed in such a way that it clearly illustrates the calculation of important descriptive variables:
*Here,&nbsp; each individual code word&nbsp; $\underline{x}$&nbsp; is described by five bits.&nbsp; Throughout the book,&nbsp; we express this fact by the&nbsp; &raquo;'''code word length'''&laquo;&nbsp;  $n = 5$.
  
*The above code contains four elements.&nbsp; Thus the&nbsp; &raquo;'''code size'''&laquo;&nbsp;  $|C| = 4$.&nbsp; Accordingly,&nbsp; there are also four unique mappings between&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$.
  
*The length of an information block&nbsp; $\underline{u}$ &nbsp; &rArr; &nbsp; &raquo;'''information block length'''&laquo;&nbsp; is denoted by&nbsp; $k$.&nbsp; Since for all binary codes&nbsp; $|C| = 2^k$&nbsp; holds,&nbsp; it follows from&nbsp; $|C| = 4$&nbsp; that&nbsp; $k = 2$.&nbsp; The assignments between&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{x}$&nbsp; in the above code&nbsp; $C$&nbsp; are:
  
::<math>\underline{u_0} = (0, 0) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm}, \hspace{0.8cm}
\underline{u_1} = (0, 1) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 1, 0, 1, 0) = \underline{x_1}\hspace{0.05cm}, </math>
  
::<math>\underline{u_2} = (1, 0)\hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 0, 1, 0, 1) = \underline{x_2}\hspace{0.05cm}, \hspace{0.8cm}
\underline{u_3} = (1, 1) \hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.</math>
  
*The code has the&nbsp; &raquo;'''code rate'''&laquo;&nbsp; $R = k/n = 2/5$.&nbsp; Accordingly,&nbsp; its redundancy is&nbsp; $1-R$,&nbsp; that is&nbsp; $60\%$.&nbsp; Without error protection&nbsp; $(n = k)$&nbsp; the code rate would be&nbsp; $R = 1$.<br>
  
*A small code rate indicates that of the&nbsp; $n$&nbsp; bits of a code word, very few actually carry information.&nbsp; A repetition code&nbsp; $(k = 1,\ n = 10)$&nbsp; has the code rate&nbsp; $R = 0.1$.<br>
  
*The&nbsp; &raquo;'''Hamming weight'''&laquo;&nbsp; $w_{\rm H}(\underline{x})$&nbsp; of the code word&nbsp; $\underline{x}$&nbsp; indicates the number of code word elements&nbsp; $x_i \ne 0$.&nbsp; For a binary code &nbsp; &rArr; &nbsp; $x_i \in  \{0, \, 1\}$,&nbsp; the Hamming weight&nbsp; $w_{\rm H}(\underline{x})$&nbsp; is equal to the sum&nbsp; $x_1 + x_2 + \hspace{0.05cm}\text{...} \hspace{0.05cm}+ x_n$.&nbsp; In the example:
  
::<math>w_{\rm H}(\underline{x}_0) = 0\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} w_{\rm H}(\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_3) = 5\hspace{0.05cm}.
</math>
  
*The&nbsp; &raquo;'''Hamming distance'''&laquo;&nbsp; $d_{\rm H}(\underline{x}, \ \underline{x}\hspace{0.03cm}')$&nbsp; between the code words&nbsp; $\underline{x}$&nbsp; and&nbsp; $\underline{x}\hspace{0.03cm}'$&nbsp; denotes the number of bit positions in which the two code words differ:
 
 
::<math>d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_3) = 5\hspace{0.05cm},\hspace{0.4cm}
d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_2) = 5\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_3) = 3\hspace{0.05cm}, \hspace{0.4cm}
d_{\rm H}(\underline{x}_2, \hspace{0.05cm}\underline{x}_3) = 2\hspace{0.05cm}.</math>
  
*An important property of a code&nbsp; $C$&nbsp; that significantly affects its error correction capability is the&nbsp; &raquo;'''minimum distance'''&laquo;&nbsp; between any two code words:
  
 
::<math>d_{\rm min}(\mathcal{C}) =
\min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}')\hspace{0.05cm}.</math>
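
For the example code&nbsp; $\mathcal{C}$&nbsp; given above,&nbsp; these quantities can be verified with a few lines of Python:

<syntaxhighlight lang="python">
from itertools import combinations

# The example code C with code word length n = 5
C = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 0), (1, 0, 1, 0, 1), (1, 1, 1, 1, 1)]

def hamming_weight(x):
    """Number of non-zero elements of the code word x."""
    return sum(x)

def hamming_distance(x, x2):
    """Number of bit positions in which x and x2 differ."""
    return sum(a != b for a, b in zip(x, x2))

print([hamming_weight(x) for x in C])              # [0, 2, 3, 5]
print(min(hamming_distance(x, x2)
          for x, x2 in combinations(C, 2)))        # d_min = 2
</syntaxhighlight>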
  
{{BlaueBox|TEXT=
 
$\text{Definition:}$&nbsp; An&nbsp; $(n, \hspace{0.05cm}k, \hspace{0.05cm}d_{\rm min})\text{ block code}$&nbsp; has
*the code word length&nbsp; $n$,  
*the information block length&nbsp; $k$,&nbsp; and
*the minimum distance&nbsp; $d_{\rm min}$.
  
According to this nomenclature, the example considered here is a&nbsp; $(5, \hspace{0.05cm}2,\hspace{0.05cm} 2)$&nbsp; block code.&nbsp; Sometimes one omits the specification of&nbsp; $d_{\rm min}$ &nbsp; &rArr; &nbsp; $(5,\hspace{0.05cm} 2)$&nbsp;  block code.}}<br>
== One example each of error detection and correction ==
 +
<br>
 +
The variables just defined are now to be illustrated by two examples.
  
[[File:P ID2532 KC T 1 1 S5a v2.png|(4, 2, 2)–Blockcode zur Fehlererkennung|class=fit]]<br>
+
{{GraueBox|TEXT=
 +
$\text{Example 10:}$&nbsp; &nbsp; &nbsp;$\text{(4, 2, 2) block code}$
 +
[[File:EN_KC_T_1_1_S5a_v3.png|right|frame|$\rm (4, 2, 2)$&nbsp; block code for error detection|class=fit]]  
  
Rechts sind alle 2<sup>4</sup> = 16 möglichen Empfangsworte <u><i>y</i></u> dargestellt. Von diesen können 2<sup><i>n</i></sup> &ndash; 2<sup><i>k</i></sup> = 12 nur durch Bitfehler entstanden sein. Empfängt der Decoder ein &bdquo;weißes&rdquo; Codewort, so erkennt er zwar einen Fehler, er kann diesen aber wegen <i>d</i><sub>min</sub> = 2 nicht korrigieren. Empfängt er <u><i>y</i></u> = (0, 0, 0, 1), so kann mit gleicher Wahrscheinlichkeit das Codewort <u><i>x</i></u><sub>0</sub>&nbsp;=&nbsp;(0,&nbsp;0,&nbsp;0,&nbsp;0) oder <u><i>x</i></u><sub>1</sub> = (0, 1, 0, 1) gesendet worden sein.<br>
+
In the graphic,&nbsp; the arrows
 +
*pointing to the right illustrate the encoding process,
 +
*pointing to the left illustrate the decoding process:
  
Im <b>Beispiel B</b> betrachten wir den <b>(5,&nbsp;2,&nbsp;3)&ndash;Blockcode </b> gemäß der zweiten Grafik mit den (gültigen) Codeworten (0, 0, 0, 0, 0), (0,&nbsp;1,&nbsp;0,&nbsp;1, 1), (1,&nbsp;0,&nbsp;1,&nbsp;1,&nbsp;0) und (1, 1, 1, 0, 1). Dargestellt ist die Empfängerseite. Von den 2<sup><i>n</i></sup> &ndash;  2<sup><i>k</i></sup> = 28 unzulässigen Codeworten lassen sich nun 20 einem gültigen Codewort (rot, grün, blau oder ocker) zuordnen, wenn man davon ausgeht, dass ein einziger  Bitfehler wahrscheinlicher ist als deren zwei. In der Grafik erkennt man verfälschte Bit an der Kursivschrift.<br>
+
:$$\underline{u_0} = (0, 0) \leftrightarrow (0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm},$$
 +
:$$\underline{u_1} = (0, 1) \leftrightarrow (0, 1, 0, 1) = \underline{x_1}\hspace{0.05cm},$$
 +
:$$\underline{u_2} = (1, 0) \leftrightarrow (1, 0, 1, 0) = \underline{x_2}\hspace{0.05cm},$$
 +
:$$\underline{u_3} = (1, 1) \leftrightarrow (1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.$$
  
[[File:P ID2533 KC T 1 1 S5b v2.png|(5, 2, 3)–Blockcode zur Fehlerkorrektur|class=fit]]<br>
+
On the right,&nbsp; all&nbsp; $2^4 = 16$&nbsp; possible received words&nbsp; $\underline{y}$ &nbsp; are shown:
 +
*Of these,&nbsp; $2^n - 2^k = 12$&nbsp; can only be due to bit errors.
 +
 +
*If the decoder receives such a&nbsp; "white"&nbsp; code word,&nbsp; it detects an error,&nbsp; but it cannot correct it because&nbsp; $d_{\rm min} = 2$.
 +
 +
*For example,&nbsp; if&nbsp; $\underline{y} = (0, 0, 0, 1)$&nbsp;  is received,&nbsp; then with equal probability&nbsp; $\underline{x_0} = (0, 0, 0, 0)$&nbsp; or&nbsp; $\underline{x_1} = (0, 1, 0, 1)$ may have been sent.}}
  
Die Fehlerkorrektur ist aufgrund der minimalen Distanz <i>d</i><sub>min</sub> = 3 zwischen den Codeworten möglich. Allerdings sind acht Empfangsworte nicht decodierbar. Beispielsweise könnte das Empfangswort <u><i>y</i></u>&nbsp;=&nbsp;(0,&nbsp;0,&nbsp;1,&nbsp;0,&nbsp;1) sowohl aus dem Codewort <u><i>x</i></u> = (0, 0, 0, 0, 0) durch zwei Bitfehler entstanden sein oder auch aus  <u><i>x</i></u> = (1, 1, 1, 0, 1). Auch in diesem Fall wären zwei Bitfehler aufgetreten.<br>
 
  
== Zur Nomenklatur in diesem Buch ==
+
{{GraueBox|TEXT=
<br>
+
$\text{Example 11:}$&nbsp; &nbsp; &nbsp;$\text{(5, 2, 3) block code}$
Eine Zielvorgabe unseres Lerntutorials <i>LNTwww</i> war, das gesamte Fachgebiet der Nachrichtentechnik und der zugehörigen Grundlagenfächer mit einheitlicher Nomenklatur zu beschreiben. In diesem zuletzt in Angriff genommenen LNTwww&ndash;Buch &bdquo;Einführung in die Kanalcodierung&rdquo; müssen nun doch einige Änderungen hinsichtlich der Nomenklatur vorgenommen werden. Die Gründe hierfür sind:
+
[[File:EN_KC_T_1_1_S54_v2.png|right|frame|$\rm (5, 2, 3)$&nbsp; block code for error correction|class=fit]]
*Die Codierungstheorie ist ein weitgehend in sich abgeschlossenes Fachgebiet und nur wenige Autoren von einschlägigen Fachbüchern zu diesem Gebiet versuchen, einen Zusammenhang mit anderen Aspekten der Digitalsignalübertragung herzustellen.<br>
 
  
*Die Autoren der wichtigsten Bücher zur Kanalcodierung &ndash; englischsprachige und auch deutsche &ndash; verwenden eine in weiten Bereichen einheitliche Nomenklatur. Deshalb erlauben wir uns nicht, die Bezeichnungen zur Kanalcodierung in unser Übertragungstechnik&ndash;Schema zu pressen.<br><br>
+
Here,&nbsp; because of&nbsp; $k=2$,&nbsp; there are again four valid code words:
 +
:$$\underline{x_0} = (0, 0, 0, 0, 0)\hspace{0.05cm},\hspace{0.5cm} \underline{x_1} =(0, 1, 0, 1, 1)\hspace{0.05cm},$$
 +
:$$\underline{x_2} =(1, 0, 1, 1, 0)\hspace{0.05cm},\hspace{0.5cm}\underline{x_3} =(1, 1, 1, 0, 1).$$
  
Die wichtigsten Änderungen gegenüber den anderen LNTwww&ndash;Büchern sind:
+
The graph shows the receiver side,&nbsp; where you can recognize falsified bits by the italics.
*Alle Signale werden durch die entsprechenden Symbolfolgen in Vektorschreibweise dargestellt. Beispielsweise kennzeichnet <i><u>u</u></i> = (<i>u</i><sub>1</sub>, <i>u</i><sub>2</sub>, ... , <i>u<sub>k</sub></i>) die <i>Quellensymbolfolge</i> und <i><u>&upsilon;</u></i>&nbsp;=&nbsp;(<i>&upsilon;</i><sub>1</sub>, <i>&upsilon;</i><sub>2</sub>, ... , <i>&upsilon;<sub>k</sub></i>) die <i>Sinkensymbolfolge</i>. Bisher wurden diese Symbolfolgen mit &#9001;<i>q<sub>&nu;</sub></i>&#9002; bzw. &#9001;<i>&upsilon;<sub>&nu;</sub></i>&#9002; bezeichnet.<br>
 
  
*Das zeitdiskrete Äquivalent zum <i>Sendesignal</i> <i>s</i>(<i>t</i>) bzw. zum <i>Empfangssignal</i> <i>r</i>(<i>t</i>) in anderen Büchern sind die Vektoren <i><u>x</u></i> = (<i>x</i><sub>1</sub>, <i>x</i><sub>2</sub>, ... , <i>x<sub>n</sub></i>) und <i><u>y</u></i> = (<i>y</i><sub>1</sub>, <i>y</i><sub>2</sub>, ... , <i>y<sub>n</sub></i>). Die <i>Coderate</i> ergibt sich zu <i>R</i> = <i>k</i>/<i>n</i> (Werte zwischen 0 und 1 &nbsp;&#8658;&nbsp; <i>n</i> &#8805; <i>k</i>), und <i>m</i> = <i>n</i> &ndash; <i>k</i> gibt die Anzahl der Prüfbits an.
+
*Of the&nbsp; $2^n - 2^k = 28$&nbsp; invalid code words,&nbsp; now&nbsp; $20$&nbsp; can be assigned to a valid code word&nbsp; (fill colour: &nbsp; red, green, blue or ochre),&nbsp; assuming that a single bit error is more likely than their two or more.
 +
 +
*For each valid code word,&nbsp; there are five invalid code words,&nbsp; each with only one falsification &nbsp; &rArr; &nbsp; Hamming distance&nbsp; $d_{\rm H} =1$.&nbsp; These are indicated in the respective square with red, green, blue or ochre background colour.
 +
 +
*Error correction is possible for these due to the minimum distance&nbsp; $d_{\rm min} = 3$&nbsp; between the code words.
 +
   
 +
*Eight received words are not decodable;&nbsp; the received word&nbsp; $\underline{y} = (0, 0, 1, 0, 1)$&nbsp; could have arisen from the code word&nbsp; $\underline{x}_0 = (0, 0, 0, 0, 0)$&nbsp; but also from the code word&nbsp; $\underline{x}_3 = (1, 1, 1, 0, 1)$.&nbsp; In both cases,&nbsp; two bit errors would have occurred.}}<br>
  
*Im Kapitel 1 sind die Elemente <i>u<sub>i</sub></i> und <i>&upsilon;<sub>i</sub></i> (jeweils <i>i</i> = 1, ... , <i>k</i>) der Vektoren <u><i>u</i></u> und <u><i>&upsilon;</i></u> stets binär (0 oder 1), ebenso wie die <i>n</i> Elemente <i>x<sub>i</sub></i> des Codewortes <u><i>x</i></u>. Bei digitalem Kanalmodell (BSC, BEC, BSEC &#8658; Kapitel 1.2) gilt auch für die <i>n</i> Empfangswerte <i>y<sub>i</sub></i> &#8712; {0, 1}.
+
== On the nomenclature in this book ==
 +
<br>
 +
One of the objectives of our learning tutorial&nbsp; $\rm LNTwww$&nbsp; was to describe the entire field of Communications Engineering and the associated basic subjects with uniform nomenclature.&nbsp; In this most recently tackled book&nbsp; "Channel Coding"&nbsp; some changes have to be made with regard to the nomenclature after all. The reasons for this are:
 +
*Coding theory is a largely self-contained subject and few authors of relevant reference books on the subject attempt to relate it to other aspects of digital signal transmission.
  
*Das AWGN&ndash;Kanalmodell (erste Seite von Kapitel 1.2) ist durch reellwertige Ausgangswerte  <i>y<sub>i</sub></i> gekennzeichnet. Der <i>Codewortschätzer</i> gewinnt daraus den binären Vektor <i><u>z</u></i> = (<i>z</i><sub>1</sub>, <i>z</i><sub>2</sub>, ... , <i>z<sub>n</sub></i>), der mit dem Codewort <i><u>x</u></i> zu vergleichen ist.
+
*The authors of the most important books on channel coding&nbsp; (English as well as German language)&nbsp; largely use a uniform nomenclature.&nbsp; We therefore do not take the liberty of squeezing the designations for channel coding into our communication technology scheme.<br>
  
*Der Übergang von <i><u>y</u></i> auf <i><u>z</u></i> erfolgt entweder durch Schwellenwertentscheidung &#8658; <i>Hard Decision</i> oder nach dem MAP&ndash;Kriterium &#8658; <i>Soft Decision</i>. Die Schätzung gemäß &bdquo;Maximum Likelihood&rdquo; führt bei gleichwahrscheinlichen Eingangssymbolen ebenfalls zur minimalen Fehlerrate.
 
  
*Im Zusammenhang mit dem AWGN&ndash;Modell macht es Sinn, die binären Codesymbole <i>x<sub>i</sub></i> bipolar (also als +1 oder &ndash;1) darzustellen. An den statistischen Eigenschaften ändert sich dadurch nichts. Wir kennzeichnen im Folgenden die bipolare Signalisierung durch eine Tilde. Dann gilt:
+
Some nomenclature changes compared to the other&nbsp; $\rm LNTwww$&nbsp; books&nbsp; shall be mentioned here:
 +
#All signals are represented by sequences of symbols in vector notation.&nbsp; For example,&nbsp; $\underline{u} = (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$&nbsp; is the &nbsp; &nbsp; "source symbol sequence and&nbsp; $\underline{v} = (v_1, v_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, v_k)$&nbsp; the&nbsp; "sink symbol sequence".&nbsp; Previously,&nbsp; these symbol sequences were designated&nbsp; $\langle q_\nu \rangle$&nbsp; and&nbsp; $\langle v_\nu \rangle$,&nbsp; respectively.<br><br>
 +
#The vector $\underline{x} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n)$ &nbsp; now denotes the discrete-time equivalent to the transmitted signal&nbsp; $s(t)$, while the received  signal&nbsp; $r(t)$ is described by the vector&nbsp; $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$&nbsp;. The code rate is the quotient </i>&nbsp; $R=k/n$ &nbsp; with &nbsp; $0 \le R \le 1$ and the number of check bits is given by&nbsp; $m = n-k$.<br><br>
 +
#In the first main chapter,&nbsp; the elements&nbsp; $u_i$&nbsp; and&nbsp; $v_i$&nbsp; &nbsp;$($each with index&nbsp; $i = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, k)$ &nbsp; of the vectors &nbsp; $\underline{u}$ &nbsp; and &nbsp; $\underline{v}$ &nbsp; are always binary&nbsp; $(0$&nbsp; or &nbsp;$1)$,&nbsp; as are the&nbsp; $n$&nbsp; elements&nbsp; $x_i$&nbsp; of the code word&nbsp; $\underline{x}$. &nbsp; For digital channel model&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{(BSC}$]],&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Erasure_Channel_.E2.80.93_BEC|$\text{BEC}$]],&nbsp; [[Channel_Coding/Kanalmodelle_und_Entscheiderstrukturen#Binary_Symmetric_Error_.26_Erasure_Channel_.E2.80.93_BSEC|$\text{BSEC})$]]&nbsp; this also applies to the&nbsp; $n$&nbsp; received values&nbsp; $y_i \in \{0, 1\}$.<br><br>
 +
#The&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN}$]]&nbsp;  channel&nbsp; is characterised by real-valued output values&nbsp; $y_i$.&nbsp; The&nbsp; "code word estimator"&nbsp; in this case extracts from the vector&nbsp;$\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$ &nbsp; the binary vector&nbsp; $\underline{z} = (z_1, z_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, z_n)$&nbsp; to be compared with the code word&nbsp; $\underline{x}$.<br><br>
 +
#The transition from &nbsp; $\underline{y}$ &nbsp; to &nbsp; $\underline{z}$ &nbsp; is done by threshold decision &nbsp; &#8658; &nbsp; "Hard Decision"&nbsp; or according"&nbsp; to the MAP criterion &nbsp; &#8658; &nbsp; "Soft Decision".&nbsp; For equally likely input symbols,&nbsp; the&nbsp; "maximum Likelihood estimation also leads to the minimum error rate.<br><br>
 +
#In the context of the AWGN model,&nbsp; it makes sense to represent binary code symbols&nbsp; $x_i$&nbsp; bipolar&nbsp; (i.e. $\pm1$).&nbsp; This does not change the statistical properties.&nbsp; In the following,&nbsp; we mark bipolar signalling with a tilde.&nbsp; Then applies:
  
::<math>\tilde{x}_i = 1 - 2 x_i  = \left\{ \begin{array}{c} +1\\
+
:::<math>\tilde{x}_i = 1 - 2 x_i  = \left\{ \begin{array}{c} +1\\
 
  -1  \end{array} \right.\quad
 
  -1  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm falls} \hspace{0.15cm} x_i = 0\hspace{0.05cm},\\
+
\begin{array}{*{1}c} {\rm if} \hspace{0.15cm} x_i = 0\hspace{0.05cm},\\
{\rm falls} \hspace{0.15cm}x_i = 1\hspace{0.05cm}.\\ \end{array}</math>
+
{\rm if} \hspace{0.15cm}x_i = 1\hspace{0.05cm}.\\ \end{array}</math>
  
== Aufgaben ==
+
== Exercises for the chapter ==
 
<br>
 
<br>
[[Aufgaben:1.1 Zur Kennzeichnung aller Bücher|A1.1 Zur Kennzeichnung aller Bücher]]
+
[[Aufgaben:Exercise_1.1:_For_Labeling_Books|Exercise 1.1: For Labeling Books]]
 +
 
 +
[[Aufgaben:Exercise_1.2:_A_Simple_Binary_Channel_Code|Exercise 1.2: A Simple Binary Channel Code]]
  
[[Aufgaben:1.2 Einfacher binärer Kanalcode|A1.2 Einfacher binärer Kanalcode]]
+
[[Aufgaben:Exercise_1.2Z:_Three-dimensional_Representation_of_Codes|Exercise 1.2Z: Three-dimensional Representation of Codes]]
  
[[Zusatzaufgaben:1.2 3D–Darstellung von Codes]]
+
==References==
  
==Quellenverzeichnis==
 
 
<references/>
 
<references/>
 +
 
{{Display}}
 
{{Display}}



# OVERVIEW OF THE FIRST MAIN CHAPTER #


The first chapter deals with  »block codes for error detection and error correction«  and provides the basics for describing more effective codes such as

  • the  »Reed-Solomon codes«  (see Chapter 2),
  • the  »convolutional codes«  (see Chapter 3),  and
  • the  »iteratively decodable product codes«  ("turbo codes") and the  »low-density parity-check codes«  (see Chapter 4).


This specific field of coding is called  »Channel Coding«  in contrast to

  •   »Source Coding«  (redundancy reduction for reasons of data compression),  and
  •   »Line Coding«  (additional redundancy to adapt the digital signal to the spectral characteristics of the transmission medium).


We restrict ourselves here to  »binary codes«.  In detail,  this book covers:

  1. Definitions and introductory examples of  »error detection and error correction«,
  2. a brief review of appropriate  »channel models«  and  »decision device structures«,
  3. known binary block codes such as  »single parity-check code«,  »repetition code«  and  »Hamming code«,
  4. the general description of linear codes using  »generator matrix«  and  »check matrix«,
  5. the decoding possibilities for block codes,  including  »syndrome decoding«,
  6. simple approximations and upper bounds for the  »block error probability«,  and
  7. an  »information-theoretic bound«  on channel coding.


Error detection and error correction


Transmission errors occur in every digital transmission system.  It is possible to keep the probability  $p_{\rm B}$  of such a bit error very small,  for example by using a very large signal energy.  However, the bit error probability  $p_{\rm B} = 0$  is never achievable because of the Gaussian PDF of the thermal noise that is always present.

Particularly in the case of heavily disturbed channels and also for safety-critical applications,  it is therefore essential to provide special protection for the data to be transmitted,  adapted to the application and channel.  For this purpose,  redundancy is added at the transmitter and this redundancy is used at the receiver to reduce the number of decoding errors.

$\text{Definitions:}$

  1.  »Error Detection«:   The decoder checks the integrity of the received blocks and marks any errors found.  If necessary,  the receiver informs the transmitter about erroneous blocks via the return channel,  so that the transmitter sends the corresponding block again.

  2.  »Error Correction«:   The decoder detects one  (or more)  bit errors and provides further information for them,  for example their positions in the transmitted block.  In this way,  it may be possible to completely correct the errors that have occurred.

  3.  »Channel Coding«  includes both procedures for  »error detection«  and those for  »error correction«.


All  $\rm ARQ$  ("Automatic Repeat Request")  methods use error detection only. 

  • Less redundancy is required for error detection than for error correction. 
  • One disadvantage of ARQ is its low throughput when channel quality is poor,  i.e. when entire blocks of data must be frequently re-requested by the receiver.


In this book we mostly deal with  $\rm FEC$  ("Forward Error Correction")  which leads to very small bit error rates if the channel is sufficiently good   (large SNR). 

  • With worse channel conditions,  nothing changes in the throughput,  i.e. the same amount of information is transmitted. 
  • However,  the symbol error rate can then assume very large values.


Often FEC and ARQ methods are combined,  and the redundancy is divided between them in such a way that

  • a small number of errors can still be corrected,  but
  • when there are too many errors,  a retransmission of the block is requested.

Some introductory examples of error detection


$\text{Example 1:}$   Single Parity-Check Code  $\rm (SPC)$

If one extends  $k = 4$  information bits by a so-called  "parity bit"  in such a way that the number of ones in the resulting code word is even,  e.g.  (parity bits in bold)

$$0000\hspace{0.03cm}\boldsymbol{0},\ 0001\hspace{0.03cm}\boldsymbol{1}, \text{...} ,\ 1111\hspace{0.03cm}\boldsymbol{0}, \text{...}\ ,$$

it is very easy to recognize a single error.  Two errors within a code word, on the other hand, remain undetected.
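
The even-parity rule can be sketched in a few lines of Python.  This is our own illustration,  not part of the original text;  the function names are chosen freely:

    def add_parity(bits):
        """Append a parity bit so that the total number of ones is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(word):
        """Even-parity check: True means "no error detected"."""
        return sum(word) % 2 == 0

    word = add_parity([0, 0, 0, 1])   # -> [0, 0, 0, 1, 1]
    print(parity_ok(word))            # True:  error-free word
    word[0] ^= 1                      # single bit error
    print(parity_ok(word))            # False: error detected
    word[1] ^= 1                      # a second bit error
    print(parity_ok(word))            # True:  two errors cancel -> undetected

The last call reproduces the statement above:  an even number of errors leaves the parity unchanged and therefore remains undetected.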


$\text{Example 2:   International Standard Book Number (ISBN)}$

Since the 1960s,  all books have been given 10-digit codes  ("ISBN–10").  Since 2007,  the specification according to  "ISBN–13"  is additionally obligatory.  For example,  these are for the reference book  [Söd93][1]:

  • $\boldsymbol{3–540–57215–5}$  (for  "ISBN–10"),
  • $\boldsymbol{978–3–54057215–2}$  (for  "ISBN–13").


The last digit  $z_{10}$  for  "ISBN–10"  results from the previous digits  $z_1 = 3$,  $z_2 = 5$, ... ,  $z_9 = 5$  according to the following calculation rule:

\[z_{10} = \left ( \sum_{i=1}^{9} \hspace{0.2cm} i \cdot z_i \right ) \hspace{-0.3cm} \mod 11 = (1 \cdot 3 + 2 \cdot 5 + ... + 9 \cdot 5 ) \hspace{-0.2cm} \mod 11 = 5 \hspace{0.05cm}. \]
Note that  $z_{10} = 10$  must be written as  $z_{10} = \rm X$  (roman numeral representation of  "10"),  since the number  $10$  cannot be represented as a digit in the decimal system.

The same applies to the check digit for  "ISBN–13":

\[z_{13}= 10 - \left ( \sum_{i=1}^{12} \hspace{0.2cm} z_i \cdot 3^{(i+1) \hspace{-0.2cm} \mod 2} \right ) \hspace{-0.3cm} \mod 10 = 10 \hspace{-0.05cm}- \hspace{-0.05cm} \big [(9\hspace{-0.05cm}+\hspace{-0.05cm}8\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}0\hspace{-0.05cm}+\hspace{-0.05cm}7\hspace{-0.05cm}+\hspace{-0.05cm}1) \cdot 1 + (7\hspace{-0.05cm}+\hspace{-0.05cm}3\hspace{-0.05cm}+\hspace{-0.05cm}4\hspace{-0.05cm}+\hspace{-0.05cm}5\hspace{-0.05cm}+\hspace{-0.05cm}2\hspace{-0.05cm}+\hspace{-0.05cm}5) \cdot 3\big ] \hspace{-0.2cm} \mod 10 \]
\[\Rightarrow \hspace{0.3cm} z_{13}= 10 - (108 \hspace{-0.2cm} \mod 10) = 10 - 8 = 2 \hspace{0.05cm}. \]

With both variants,  in contrast to the single parity-check code  $\rm (SPC)$  above,  digit transpositions such as  $57 \, \leftrightarrow 75$  are also recognized,  since the individual positions are weighted differently here.
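
Both calculation rules can be checked with a short Python sketch  (our own illustration;  the helper names are not from the original text).  The final  $\text{mod } 10$  in the ISBN–13 rule maps the special case  "weighted sum divisible by  $10$"  to the check digit  $0$:

    def isbn10_check(z):
        """Check digit per the ISBN-10 rule: (sum of i*z_i) mod 11; '10' becomes 'X'."""
        s = sum(i * zi for i, zi in enumerate(z, start=1)) % 11
        return 'X' if s == 10 else str(s)

    def isbn13_check(z):
        """Check digit per the ISBN-13 rule with alternating weights 1 and 3."""
        s = sum(zi * (1 if i % 2 == 1 else 3) for i, zi in enumerate(z, start=1)) % 10
        return str((10 - s) % 10)

    print(isbn10_check([3, 5, 4, 0, 5, 7, 2, 1, 5]))           # 5
    print(isbn13_check([9, 7, 8, 3, 5, 4, 0, 5, 7, 2, 1, 5]))  # 2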

One-dimensional bar code

$\text{Example 3:   Bar code  (one-dimensional)}$

The most widely used error-detecting code worldwide is the  "bar code"  for marking products,  for example according to  EAN–13  ("European Article Number")  with 13 digits.

  • These are represented by bars and gaps of different widths and can be easily decoded with an opto–electronic reader.
  • The first three digits indicate the country  (e.g. Germany:   400 ... 440),  the next four or five digits identify the producer and the product.
  • The last digit is the parity check digit  $z_{13}$,  which is calculated exactly as for ISBN–13.

Some introductory examples of error correction


$\text{Example 4:   Two-dimensional bar codes for online tickets}$

If you book a railway ticket online and print it out,  you will find an example of a two-dimensional bar code,  namely the  $\text{Aztec code}$  developed in 1995 by Andy Longacre at the Welch Allyn company in the USA,  with which up to  $3000$  characters can be encoded.  Due to the  $\text{Reed–Solomon error correction}$,  reconstruction of the data content is still possible even if up to  $40\%$  of the code has been destroyed,  for example by bending the ticket or by coffee stains.

On the right you see a  $\rm QR\ code$  ("Quick Response")  with associated content.

Two-dimensional bar codes:  Aztec and QR code
  • The QR code was developed in 1994 for the automotive industry in Japan to mark components and also allows error correction.
  • In the meantime,  the use of the QR code has become very diverse.  In Japan,  it can be found on almost every advertising poster and on every business card.  It is also becoming more and more popular in Europe.
  • All two-dimensional bar codes have square markings to calibrate the reader.  You can find details on this in  [KM+09][2].


$\text{Example 5:   Codes for satellites and space communications}$

One of the first areas of application of error correction methods was communication from/to satellites and space shuttles, i.e. transmission routes characterized

  • by low transmission powers
  • and large path losses.


As early as 1977,  channel coding was used in the  "Voyager"  space missions to the outer planets  (Voyager 2 continued on to Uranus and Neptune),  in the form of serial concatenation of a  $\text{Reed–Solomon code}$  and a  $\text{convolutional code}$.

Thus,  the power parameter  $10 · \lg \; E_{\rm B}/N_0 \approx 2 \, \rm dB$  was already sufficient to achieve the required bit error rate of  $5 · 10^{-5}$  (related to the compressed data after source coding).  Without channel coding,  on the other hand,  almost  $9 \, \rm dB$  is required for the same bit error rate,  i.e. a factor of  $10^{0.7} ≈ 5$  greater transmission power.

The planned Mars project  (data transmission from Mars to Earth with  $\rm 5W$  lasers) will also only be successful with a sophisticated coding scheme.


$\text{Example 6:   Channel codes for mobile communications}$

A further and particularly high-turnover application area that would not function without channel coding is mobile communication.  Here,  unfavourable conditions without coding would result in error rates in the percentage range and,  due to shadowing and multipath propagation  (echoes),  the errors often occur in bursts.  The error burst length is sometimes several hundred bits.

  • For voice transmission in the  $\text{GSM system}$,  the  $182$  most important  (class 1a and 1b)  of the total  $260$  bits of a voice frame  $(20 \, \rm ms)$,  together with a few parity and tail bits,  are convolutionally coded  $($with memory  $m = 4$  and code rate  $R = 1/2)$  and scrambled.  Together with the  $78$  less important and therefore uncoded bits of class 2,  this results in the bit rate increasing from  $13 \, \rm kbit/s$  to  $22.4 \, \rm kbit/s$.
  • One uses the  (relative)  redundancy of  $r = (22.4 - 13)/22.4 ≈ 0.42$  for error correction.  Note that with the definition used here,  $r = 0.42$  means that  $42\%$  of the encoded bits are redundant.  With the reference value  "bit rate of the uncoded sequence"  we would get  $r = 9.4/13 \approx 0.72$,  with the statement:   $72\%$  parity bits are added to the information bits  (see the numerical sketch after this list).
  • For  $\text{UMTS}$  ("Universal Mobile Telecommunications System"),  $\text{convolutional codes}$  with the rates  $R = 1/2$  or  $R = 1/3$  are used.  In the UMTS modes for higher data rates and correspondingly lower spreading factors,  on the other hand,  one uses  $\text{Turbo codes}$  of the rate  $R = 1/3$  and iterative decoding.  Depending on the number of iterations,  gains of up to  $3 \, \rm dB$  can be achieved compared to convolutional coding.
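
The two redundancy definitions from the second bullet point can be reproduced with elementary arithmetic;  the following lines are only a numerical illustration of the values given above:

    # GSM full-rate speech: 13 kbit/s uncoded, 22.4 kbit/s encoded (values from the text)
    r_enc = (22.4 - 13) / 22.4   # reference: encoded bit rate
    r_unc = (22.4 - 13) / 13     # reference: uncoded bit rate
    print(round(r_enc, 2))       # 0.42 -> 42% of the encoded bits are redundant
    print(round(r_unc, 2))       # 0.72 -> 72% parity bits are added to the info bits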


$\text{Example 7:   Error protection of the compact disc}$

For a compact disc  $\rm (CD)$,  one uses  "cross-interleaved"  $\text{Reed–Solomon codes}$  $\rm (RS)$  and then a so-called  $\text{Eight–to–Fourteen modulation}$.  Redundancy is used for error detection and correction.  This coding scheme shows the following characteristics:

  • The common code rate of the two RS–component codes is  $R_{\rm RS} = 24/28 · 28/32 = 3/4$.  Through the 8–to–14 modulation and some control bits,  one arrives at the total code rate  $R ≈ 1/3$.
  • In the case of statistically independent errors according to the  $\text{BSC}$  model  ("Binary Symmetric Channel"),  a complete correction is possible as long as the bit error rate does not exceed the value  $10^{-3}$.
  • The CD-specific  "Cross Interleaver"  scrambles  $108$  blocks together so that the  $588$  bits of a block  $($each bit corresponds to approx.  $0.28 \, \rm µm)$  are distributed over approx.  $1.75\, \rm cm$.
  • With the code rate  $R ≈ 1/3$  one can correct approx.  $10\%$  erasures.  The lost values can be reconstructed  (approximately)  by interpolation   ⇒  "Error Concealment".

In summary,  if a compact disc has a scratch of  $1.75\, \rm mm$  in length in the direction of play  $($i.e. more than  $6000$  consecutive erasures$)$,  still  $90\%$  of all the bits in a block are error-free,  so that even the missing  $10\%$  can be reconstructed,  or at least the erasures can be concealed so that they are not audible.

A demonstration of the CD's ability to correct follows in the next section.


The "Slit CD" - a demonstration by the LNT of TUM


At the end of the 1990s,  members of the  $\text{Institute for Communications Engineering}$  of the  $\text{Technical University of Munich}$,  led by Prof.  $\text{Joachim Hagenauer}$,  deliberately damaged a music CD by cutting a total of three slits,  each more than one millimetre wide.  With each defect,  almost  $4000$  consecutive bits of audio coding are missing.

"Slit CD"  of the  $\rm LNT/TUM$

The diagram shows the  "slit CD":

  • Both track 3 and track 14 have two such defective areas on each revolution.
  • You can visualise the music quality with the help of the two audio players  (playback time approx. 15 seconds each).
  • The theory of this audio–demo can be found in the  $\text{Example 7}$  in the previous section.




Summary of this audio demo:

  • As important for the functioning of the compact disc as the codes is the interposed interleaver,  which distributes the erased bits  ("erasures")  over a length of almost  $2 \, \rm cm$.
  • In  Track 14  the two defective areas are sufficiently far apart.  Therefore,  the Reed–Solomon decoder is able to reconstruct the missing data.
  • In  Track 3  the two error blocks follow each other at a very short distance,  so that the correction algorithm fails.  The result is an almost periodic clacking noise.

We would like to thank  $\text{Thomas Hindelang}$,  Rainer Bauer, and  Manfred Jürgens for permission to use this audio demo.

Interplay between source and channel coding


Transmission of natural sources such as speech,  music,  images,  videos,  etc. is usually done according to the discrete-time model outlined below.  From  [Liv10][3]  the following should be noted:

  • Source and sink are digitized and represented by  (approximately equal numbers of)  zeros and ones.
  • The source encoder compresses the binary data  – in the following example a digital photo –  and thus reduces the redundancy of the source.
  • The channel encoder adds redundancy again,  and specifically so that some of the errors that occurred on the channel can be corrected in the channel decoder.
  • A discrete-time model with binary input and output is used here for the channel,  which should also suitably take into account the components of the technical equipment at transmitter and receiver  (modulator,  decision device,  clock recovery).

With correct dimensioning of source and channel coding,  the quality of the received photo is sufficiently good,  even if the sink symbol sequence will not exactly match the source symbol sequence due to error patterns that cannot be corrected.  The red-marked bit errors within the sink symbol sequence of the next example illustrate this.

$\text{Example 8:}$  For the graph,  it was assumed,  as an example and for the sake of simplicity,  that

Image transmission with source and channel coding
  • the source symbol sequence has only the length  $40$, 
  • the source encoder compresses the data by a factor of  $40/16 = 2.5$,   and
  • the channel encoder adds  $50\%$  redundancy.


Thus, only  $24$  encoder symbols have to be transmitted instead of  $40$  source symbols, which reduces the overall transmission rate by  $40\%$ .

If one were to dispense with source encoding and transmit the original photo in BMP format rather than the compressed JPG image,  the quality would be comparable,  but a bit rate higher by a factor of  $2.5$,  and thus considerably more effort,  would be required.


Image transmission without source and channel coding

$\text{Example 9:}$ 



If one were to dispense with both,

  • source coding and
  • channel coding,


i.e. transmit the BMP data directly without error protection,  the result would be extremely poor despite a bit rate greater by a factor of  $40/24$.

Image transmission with source coding,  but without channel coding



»Source coding but no channel coding«

Now let's consider the case of directly transmitting the compressed data  (e.g. JPG)  without error-protection measures.  Then:

  1. The compressed source has only little redundancy left.
  2. Thus,  any single transmission error will cause entire blocks of images to be decoded incorrectly.
  3. »This coding scheme should be avoided at all costs«.



Block diagram and requirements


In the further sections,  we will start from the sketched block diagram with channel encoder,  digital channel and channel decoder.  The following conditions apply:

Block diagram describing channel coding
  • The vector  $\underline{u} = (u_1, u_2, \text{...} \hspace{0.05cm}, u_k)$  denotes an  »information block«  with  $k$  symbols.  We restrict ourselves to binary symbols  (bits)   ⇒   $u_i \in \{0, \, 1\}$   for $i = 1, 2, \text{...} \hspace{0.05cm}, k$   with equal occurrence probabilities for zeros and ones.
  • Each information block  $\underline{u}$  is represented by a  »code word«  (or  "code block")  $\underline{x} = (x_1, x_2, \text{...} \hspace{0.05cm}, x_n)$   with   $n \ge k$,  $x_i \in \{0, \, 1\}$.  One then speaks of a binary  $(n, k)$  block code  $C$.  We denote the assignment by  $\underline{x} = {\rm enc}(\underline{u})$,  where  "enc"  stands for  "encoder function".
  • The  »received word«  $\underline{y}$  results from the code word  $\underline{x}$  by the  $\text{modulo–2}$  sum  with the likewise binary error vector  $\underline{e} = (e_1, e_2, \text{...} \hspace{0.05cm}, e_n)$,  where  $e_i = 1$  represents a transmission error and  $e_i = 0$  indicates that the  $i$–th bit of the code word was transmitted correctly  (a short sketch of this channel model follows this list).  The following therefore applies:
\[\underline{y} = \underline{x} \oplus \underline{e} \hspace{0.05cm}, \hspace{0.5cm} y_i = x_i \oplus e_i \hspace{0.05cm}, \hspace{0.2cm} i = 1, \text{...} \hspace{0.05cm} , n\hspace{0.05cm}, \]
\[x_i \hspace{-0.05cm} \in \hspace{-0.05cm} \{ 0, 1 \}\hspace{0.05cm}, \hspace{0.5cm}e_i \in \{ 0, 1 \}\hspace{0.5cm} \Rightarrow \hspace{0.5cm}y_i \in \{ 0, 1 \}\hspace{0.05cm}.\]
  • The description by the  »digital channel model«  – i.e. with binary input and output – is,  however,  only applicable if the transmission system makes hard decisions – see section  "AWGN channel at binary input".  Systems with  $\text{soft decision}$  cannot be modelled with this simple model.
  • The vector  $\underline{v}$  after  »channel decoding«  has the same length  $k$  as the information block  $\underline{u}$.  We describe the decoding process with the  "decoder function"  as  $\underline{v} = {\rm enc}^{-1}(\underline{y}) = {\rm dec}(\underline{y})$.  In the error-free case,  decoding exactly inverts the encoding  $\underline{x} = {\rm enc}(\underline{u})$,  so that  $\underline{v} = \underline{u}$.
  • If the error vector  $\underline{e} \ne \underline{0}$,  then  $\underline{y}$  is usually not a valid element of the block code used,  and the decoding is then not a pure mapping  $\underline{y} \rightarrow \underline{v}$,  but an estimate of  $\underline{v}$  based on the best match   ("minimum error probability").
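
The modulo–2 channel model just described can be sketched as follows  (our own illustration;  here the error vector is drawn bit-wise with error probability  $p$,  which anticipates the BSC model of the next section):

    import random

    def digital_channel(x, p):
        """Return y = x XOR e, with each error bit e_i set with probability p."""
        e = [1 if random.random() < p else 0 for _ in x]
        y = [xi ^ ei for xi, ei in zip(x, e)]
        return y, e

    x = [0, 1, 0, 1, 0]               # a transmitted code word
    y, e = digital_channel(x, p=0.1)
    print(e, y)                       # y differs from x exactly where e_i = 1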

Important definitions for block coding


We now consider the exemplary binary block code

\[\mathcal{C} = \{ (0, 0, 0, 0, 0) \hspace{0.05cm},\hspace{0.15cm} (0, 1, 0, 1, 0) \hspace{0.05cm},\hspace{0.15cm}(1, 0, 1, 0, 1) \hspace{0.05cm},\hspace{0.15cm}(1, 1, 1, 1, 1) \}\hspace{0.05cm}.\]

This code would be unsuitable for the purpose of error detection or error correction.  But it is constructed in such a way that it clearly illustrates the calculation of important descriptive variables:

  • Here,  each individual code word  $\underline{x}$  is described by five bits.  Throughout the book,  we express this fact by the  »code word length«  $n = 5$.
  • The above code contains four elements.  Thus the  »code size«  $|C| = 4$.  Accordingly,  there are also four unique mappings between  $\underline{u}$  and  $\underline{x}$.
  • The length of an information block  $\underline{u}$   ⇒   »information block length«  is denoted by  $k$.  Since for all binary codes  $|C| = 2^k$  holds,  it follows from  $|C| = 4$  that  $k = 2$.  The assignments between  $\underline{u}$  and  $\underline{x}$  in the above code  $C$  are:
\[\underline{u_0} = (0, 0) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm}, \hspace{0.8cm} \underline{u_1} = (0, 1) \hspace{0.2cm}\leftrightarrow \hspace{0.2cm}(0, 1, 0, 1, 0) = \underline{x_1}\hspace{0.05cm}, \]
\[\underline{u_2} = (1, 0)\hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 0, 1, 0, 1) = \underline{x_2}\hspace{0.05cm}, \hspace{0.8cm} \underline{u_3} = (1, 1) \hspace{0.2cm} \leftrightarrow \hspace{0.2cm}(1, 1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.\]
  • The code has the  »code rate«  $R = k/n = 2/5$.  Accordingly,  its redundancy is  $1-R$,  that is  $60\%$.  Without error protection  $(n = k)$  the code rate  $R = 1$.
  • A small code rate indicates that of the  $n$  bits of a code word, very few actually carry information.  A repetition code  $(k = 1,\ n = 10)$  has the code rate  $R = 0.1$.
  • The  »Hamming weight«  $w_{\rm H}(\underline{x})$  of the code word  $\underline{x}$  indicates the number of nonzero code word elements  $(x_i = 1)$.  For a binary code,  $w_{\rm H}(\underline{x})$  is equal to the sum  $x_1 + x_2 + \hspace{0.05cm}\text{...} \hspace{0.05cm}+ x_n$.  In the example:
\[w_{\rm H}(\underline{x}_0) = 0\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} w_{\rm H}(\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm}w_{\rm H}(\underline{x}_3) = 5\hspace{0.05cm}. \]
  • The  »Hamming distance«  $d_{\rm H}(\underline{x}, \ \underline{x}\hspace{0.03cm}')$  between the code words  $\underline{x}$  and  $\underline{x}\hspace{0.03cm}'$  denotes the number of bit positions in which the two code words differ:
\[d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_1) = 2\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_2) = 3\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_0, \hspace{0.05cm}\underline{x}_3) = 5\hspace{0.05cm},\hspace{0.4cm} d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_2) = 5\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_1, \hspace{0.05cm}\underline{x}_3) = 3\hspace{0.05cm}, \hspace{0.4cm} d_{\rm H}(\underline{x}_2, \hspace{0.05cm}\underline{x}_3) = 2\hspace{0.05cm}.\]
  • An important property of a code  $C$  that significantly affects its ability to be corrected is the  »minimum distance«  between any two code words:
\[d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}' \hspace{0.05cm}\in \hspace{0.05cm} \mathcal{C} \\ {\underline{x}} \hspace{0.05cm}\ne \hspace{0.05cm} \underline{x}'}}\hspace{0.1cm}d_{\rm H}(\underline{x}, \hspace{0.05cm}\underline{x}')\hspace{0.05cm}.\]
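
Hamming weight,  Hamming distance and minimum distance of the example code can be verified with a few lines of Python  (our own sketch,  with freely chosen names):

    from itertools import combinations

    C = [(0,0,0,0,0), (0,1,0,1,0), (1,0,1,0,1), (1,1,1,1,1)]   # example code

    def w_H(x):                       # Hamming weight: number of ones
        return sum(x)

    def d_H(x, xp):                   # Hamming distance: differing positions
        return sum(a != b for a, b in zip(x, xp))

    print([w_H(x) for x in C])                                 # [0, 2, 3, 5]
    print(min(d_H(x, xp) for x, xp in combinations(C, 2)))     # d_min = 2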

$\text{Definition:}$  An  $(n, \hspace{0.05cm}k, \hspace{0.05cm}d_{\rm min})\text{ block code}$  has

  • the code word length  $n$,
  • the information block length  $k$,  and
  • the minimum distance  $d_{\rm min}$.


According to this nomenclature, the example considered here is a  $(5, \hspace{0.05cm}2,\hspace{0.05cm} 2)$  block code.  Sometimes one omits the specification of  $d_{\rm min}$   ⇒   $(5,\hspace{0.05cm} 2)$  block code.


One example each of error detection and correction


The variables just defined are now to be illustrated by two examples.

$\text{Example 10:}$     $\text{(4, 2, 2) block code}$

$\rm (4, 2, 2)$  block code for error detection

In the graphic,  the arrows

  • pointing to the right illustrate the encoding process,
  • pointing to the left illustrate the decoding process:
$$\underline{u_0} = (0, 0) \leftrightarrow (0, 0, 0, 0) = \underline{x_0}\hspace{0.05cm},$$
$$\underline{u_1} = (0, 1) \leftrightarrow (0, 1, 0, 1) = \underline{x_1}\hspace{0.05cm},$$
$$\underline{u_2} = (1, 0) \leftrightarrow (1, 0, 1, 0) = \underline{x_2}\hspace{0.05cm},$$
$$\underline{u_3} = (1, 1) \leftrightarrow (1, 1, 1, 1) = \underline{x_3}\hspace{0.05cm}.$$

On the right,  all  $2^4 = 16$  possible received words  $\underline{y}$   are shown:

  • Of these,  $2^n - 2^k = 12$  can only be due to bit errors.
  • If the decoder receives such a  "white"  code word,  it detects an error,  but it cannot correct it because  $d_{\rm min} = 2$.
  • For example,  if  $\underline{y} = (0, 0, 0, 1)$  is received,  then with equal probability  $\underline{x_0} = (0, 0, 0, 0)$  or  $\underline{x_1} = (0, 1, 0, 1)$ may have been sent.


$\text{Example 11:}$     $\text{(5, 2, 3) block code}$

$\rm (5, 2, 3)$  block code for error correction

Here,  because of  $k=2$,  there are again four valid code words:

$$\underline{x_0} = (0, 0, 0, 0, 0)\hspace{0.05cm},\hspace{0.5cm} \underline{x_1} =(0, 1, 0, 1, 1)\hspace{0.05cm},$$
$$\underline{x_2} =(1, 0, 1, 1, 0)\hspace{0.05cm},\hspace{0.5cm}\underline{x_3} =(1, 1, 1, 0, 1).$$

The graph shows the receiver side,  where you can recognize falsified bits by the italics.

  • Of the  $2^n - 2^k = 28$  invalid code words,  $20$  can now be assigned to a valid code word  (fill colour:   red, green, blue or ochre),  assuming that a single bit error is more likely than two or more.
  • For each valid code word,  there are five invalid code words,  each with only one falsification   ⇒   Hamming distance  $d_{\rm H} =1$.  These are indicated in the respective square with red, green, blue or ochre background colour.
  • Error correction is possible for these due to the minimum distance  $d_{\rm min} = 3$  between the code words.
  • Eight received words are not decodable;  the received word  $\underline{y} = (0, 0, 1, 0, 1)$  could have arisen from the code word  $\underline{x}_0 = (0, 0, 0, 0, 0)$  but also from the code word  $\underline{x}_3 = (1, 1, 1, 0, 1)$.  In both cases,  two bit errors would have occurred.
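
The decision rule illustrated by this example  –  assign a received word to the closest code word and report a failure if that code word is not unique  –  can be sketched as follows  (our own illustration,  not the notation of the original text):

    C = {(0,0,0,0,0): (0,0), (0,1,0,1,1): (0,1),               # (5, 2, 3) code
         (1,0,1,1,0): (1,0), (1,1,1,0,1): (1,1)}

    def d_H(x, y):
        return sum(a != b for a, b in zip(x, y))

    def decode(y):
        """Minimum-distance decoding; None if the closest code word is ambiguous."""
        dists = [(d_H(x, y), u) for x, u in C.items()]
        dmin = min(d for d, _ in dists)
        best = [u for d, u in dists if d == dmin]
        return best[0] if len(best) == 1 else None

    print(decode((0, 1, 0, 1, 1)))   # (0, 1): valid code word received
    print(decode((0, 0, 0, 1, 0)))   # (0, 0): single bit error corrected
    print(decode((0, 0, 1, 0, 1)))   # None:  tie between x_0 and x_3 (see above)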


On the nomenclature in this book


One of the objectives of our learning tutorial  $\rm LNTwww$  was to describe the entire field of Communications Engineering and the associated basic subjects with uniform nomenclature.  In this most recently tackled book  "Channel Coding"  some changes have to be made with regard to the nomenclature after all. The reasons for this are:

  • Coding theory is a largely self-contained subject and few authors of relevant reference books on the subject attempt to relate it to other aspects of digital signal transmission.
  • The authors of the most important books on channel coding  (English as well as German language)  largely use a uniform nomenclature.  We therefore do not take the liberty of squeezing the designations for channel coding into our communication technology scheme.


Some nomenclature changes compared to the other  $\rm LNTwww$  books  shall be mentioned here:

  1. All signals are represented by sequences of symbols in vector notation.  For example,  $\underline{u} = (u_1, u_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, u_k)$  is the  "source symbol sequence"  and  $\underline{v} = (v_1, v_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, v_k)$  the  "sink symbol sequence".  Previously,  these symbol sequences were designated  $\langle q_\nu \rangle$  and  $\langle v_\nu \rangle$,  respectively.

  2. The vector  $\underline{x} = (x_1, x_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, x_n)$  now denotes the discrete-time equivalent to the transmitted signal  $s(t)$,  while the received signal  $r(t)$  is described by the vector  $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$.  The code rate is the quotient  $R=k/n$  with  $0 \le R \le 1$,  and the number of check bits is given by  $m = n-k$.

  3. In the first main chapter,  the elements  $u_i$  and  $v_i$   $($each with index  $i = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, k)$   of the vectors   $\underline{u}$   and   $\underline{v}$   are always binary  $(0$  or  $1)$,  as are the  $n$  elements  $x_i$  of the code word  $\underline{x}$.   For the digital channel models  $\text{(BSC,}$  $\text{BEC,}$  $\text{BSEC)}$  this also applies to the  $n$  received values  $y_i \in \{0, 1\}$.

  4. The  $\text{AWGN}$  channel  is characterised by real-valued output values  $y_i$.  The  "code word estimator"  in this case extracts from the vector $\underline{y} = (y_1, y_2, \hspace{0.05cm}\text{...}\hspace{0.05cm}, y_n)$   the binary vector  $\underline{z} = (z_1, z_2, \hspace{0.05cm}\text{...} \hspace{0.05cm}, z_n)$  to be compared with the code word  $\underline{x}$.

  5. The transition from   $\underline{y}$   to   $\underline{z}$   is done either by threshold decision   ⇒   "Hard Decision",  or according to the MAP criterion   ⇒   "Soft Decision".  For equally likely input symbols,  "Maximum Likelihood"  estimation also leads to the minimum error rate.

  6. In the context of the AWGN model,  it makes sense to represent the binary code symbols  $x_i$  in bipolar form  (i.e. as  $\pm1$).  This does not change the statistical properties.  In the following,  we mark bipolar signalling with a tilde.  Then:
\[\tilde{x}_i = 1 - 2 x_i = \left\{ \begin{array}{c} +1\\ -1 \end{array} \right.\quad \begin{array}{*{1}c} {\rm if} \hspace{0.15cm} x_i = 0\hspace{0.05cm},\\ {\rm if} \hspace{0.15cm}x_i = 1\hspace{0.05cm}.\\ \end{array}\]
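
A minimal sketch of points 4 to 6,  assuming an AWGN channel whose noise standard deviation  $\sigma = 0.5$  is our own choice for illustration:

    import random

    def to_bipolar(x):                # x_i in {0, 1}  ->  1 - 2*x_i in {+1, -1}
        return [1 - 2 * xi for xi in x]

    def hard_decision(y):             # threshold at 0:  y_i > 0 -> 0,  else -> 1
        return [0 if yi > 0 else 1 for yi in y]

    x = [0, 1, 0, 1, 1]
    y = [xt + random.gauss(0, 0.5) for xt in to_bipolar(x)]    # real-valued AWGN output
    z = hard_decision(y)              # binary vector to be compared with x
    print(x, z)                       # z = x unless the noise flips a sign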

Exercises for the chapter


Exercise 1.1: For Labeling Books

Exercise 1.2: A Simple Binary Channel Code

Exercise 1.2Z: Three-dimensional Representation of Codes

References

  1. Söder, G.:  Modellierung, Simulation und Optimierung von Nachrichtensystemen.  Berlin - Heidelberg: Springer, 1993.
  2. Kötter, R.; Mayer, T.; Tüchler, M.; Schreckenbach, F.; Brauchle, J.:  Channel Coding. Lecture notes, Institute for Communications Engineering, TU München, 2008.
  3. Liva, G.:  Channel Coding.  Lecture manuscript,  Institute for Communications Engineering, TU München and DLR Oberpfaffenhofen, 2010.