Channel Coding/Information Theoretical Limits of Channel Coding

Channel coding theorem and channel capacity


We further consider a binary block code with  $k$  information bits per block and codewords of length  $n$, resulting in the code rate  $R=k/n$  with the unit "information bit/code symbol".

As early as 1948, the brilliant information theorist  Claude E. Shannon  studied the correctability of such codes in great depth and derived for each channel a limit that follows from information-theoretical considerations alone. To this day, no code has been found that exceeds this limit, and this will remain so.

$\text{Shannon's channel coding theorem:}$  For each channel with channel capacity  $C > 0$  there always exists (at least) one code whose error probability approaches zero as long as the code rate  $R$  is smaller than the channel capacity  $C$. The prerequisite for this is that the following holds for the block length of this code:   $n \to \infty$.


Notes: 

  • The statement "The error probability approaches zero" is not identical to the statement "The transmission is error-free". Example:   In an infinitely long sequence, finitely many symbols may still be corrupted even though the error probability tends to zero.
  • For some channels (but not for all), the error probability approaches zero even with  $R=C$.


The inverse of the channel coding theorem is also true and states:

$\text{Inverse:}$  If the code rate  $R$  is larger than the channel capacity  $C$, then an arbitrarily small error probability cannot be achieved in any case.

To derive and calculate the channel capacity, we first assume a digital channel with  $M_x$  possible input values  $x$  and  $M_y$  possible output values  $y$. Then the mean mutual information – briefly, the  "mutual information"  – between the random variable  $x$  at the channel input and the random variable  $y$  at its output is given by:

\[I(x; y) =\sum_{i= 1 }^{M_x} \hspace{0.15cm}\sum_{j= 1 }^{M_y} \hspace{0.15cm}{\rm Pr}(x_i, y_j) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y_j \hspace{0.05cm}| \hspace{0.05cm}x_i)}{{\rm Pr}(y_j)} = \sum_{i= 1 }^{M_x} \hspace{0.15cm}\sum_{j= 1 }^{M_y}\hspace{0.15cm}{\rm Pr}(y_j \hspace{0.05cm}| \hspace{0.05cm}x_i) \cdot {\rm Pr}(x_i) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y_j \hspace{0.05cm}| \hspace{0.05cm}x_i)}{\sum_{k= 1}^{M_x} {\rm Pr}(y_j \hspace{0.05cm}| \hspace{0.05cm}x_k) \cdot {\rm Pr}(x_k)} \hspace{0.05cm}.\]

In the transition from the first to the second equation,  "Bayes' theorem"  and the  "theorem of total probability"  were applied.
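The double sum can be evaluated directly for any discrete channel once the input distribution and the transition probabilities are specified. The following minimal sketch (Python; the channel and the input distributions are illustrative assumptions, anticipating the BSC example below) also shows that  $I(x; y)$  depends on the choice of  ${\rm Pr}(x_i)$:

import numpy as np

def mutual_information(p_x, p_y_given_x):
    """Evaluates the double sum above: I(x; y) in bit.
    p_x[i]            : Pr(x_i)
    p_y_given_x[i, j] : Pr(y_j | x_i)"""
    p_y = p_x @ p_y_given_x                      # theorem of total probability: Pr(y_j)
    i_xy = 0.0
    for i in range(len(p_x)):
        for j in range(p_y_given_x.shape[1]):
            if p_x[i] > 0 and p_y_given_x[i, j] > 0:
                i_xy += p_x[i] * p_y_given_x[i, j] * np.log2(p_y_given_x[i, j] / p_y[j])
    return i_xy

# Illustration: binary symmetric channel with corruption probability epsilon = 0.1
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
print(mutual_information(np.array([0.5, 0.5]), W))   # approx. 0.531 bit
print(mutual_information(np.array([0.7, 0.3]), W))   # approx. 0.456 bit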

Further, it should be noted:

  • The binary logarithm is denoted here by "log2". Occasionally we also use "ld" (from the German "Logarithmus dualis") for it in our learning tutorial.
  • In contrast to the book  "Information Theory"  we do not distinguish in the following between the random variables $($upper case letters  $X$  or  $Y)$  and their realizations $($lower case letters  $x$  or  $y)$.


$\text{Definition:}$  The  "channel capacity"  introduced by Shannon gives the maximum mutual information  $I(x; y)$  between the input variable  $x$  and the output variable  $y$:

\[C = \max_{{\rm Pr}(x_i)} \hspace{0.1cm} I(x; y) \hspace{0.05cm}.\]

The pseudo–unit "bit/channel use" must be added.


Since the maximization of the mutual information must be done over all possible (discrete) input distributions  ${\rm Pr}(x_i)$, the channel capacity is independent of the input and is thus a pure channel parameter.
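This maximization can also be carried out numerically; for discrete channels a standard approach is the Blahut–Arimoto algorithm, which is not discussed in this chapter. A minimal sketch (Python; an illustration, not part of the original text):

import numpy as np

def blahut_arimoto(W, n_iter=200):
    """Approximates the channel capacity C = max over Pr(x_i) of I(x; y).
    W[i, j] = Pr(y_j | x_i).  Result in bit/channel use."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])    # start with a uniform input distribution
    for _ in range(n_iter):
        q = p @ W                                # Pr(y_j) for the current input distribution
        with np.errstate(divide='ignore', invalid='ignore'):
            d = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        i_xy = float(p @ d) / np.log(2)          # mutual information of the current Pr(x_i)
        p = p * np.exp(d)                        # Blahut-Arimoto update of the input distribution
        p /= p.sum()
    return i_xy

# For the binary symmetric channel with epsilon = 0.1 (see the next section): approx. 0.531
eps = 0.1
print(blahut_arimoto(np.array([[1 - eps, eps], [eps, 1 - eps]])))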

Channel capacity of the BSC model


We now apply these definitions to the  "BSC model"  (Binary Symmetric Channel ):

\[I(x; y) = {\rm Pr}(y = 0 \hspace{0.03cm}| \hspace{0.03cm}x = 0) \cdot {\rm Pr}(x = 0) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y = 0 \hspace{0.03cm}| \hspace{0.03cm}x = 0)}{{\rm Pr}(y = 0)} + {\rm Pr}(y = 1 \hspace{0.03cm}| \hspace{0.03cm}x = 0) \cdot {\rm Pr}(x = 0) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y = 1 \hspace{0.03cm}| \hspace{0.03cm}x = 0)}{{\rm Pr}(y = 1)} + \]
\[\hspace{1.45cm} + \hspace{0.15cm}{\rm Pr}(y = 0 \hspace{0.05cm}| \hspace{0.05cm}x = 1) \cdot {\rm Pr}(x = 1) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y = 0 \hspace{0.05cm}| \hspace{0.05cm}x = 1)}{{\rm Pr}(y = 0)} + {\rm Pr}(y = 1 \hspace{0.05cm}| \hspace{0.05cm}x = 1) \cdot {\rm Pr}(x = 1) \cdot {\rm log_2 } \hspace{0.15cm}\frac{{\rm Pr}(y = 1 \hspace{0.05cm}| \hspace{0.05cm}x = 1)}{{\rm Pr}(y = 1)} \hspace{0.05cm}.\]
BSC channel model

The channel capacity is arrived at by the following considerations:

  • Maximization with respect to the input distribution leads to equally probable symbols:
\[{\rm Pr}(x = 0) = {\rm Pr}(x = 1) = 1/2 \hspace{0.05cm}.\]
  • Due to the symmetry evident from the model, the following then holds simultaneously:
\[{\rm Pr}(y = 0) = {\rm Pr}(y = 1) = 1/2 \hspace{0.05cm}.\]
  • We also consider the BSC transmission probabilities:
\[{\rm Pr}(y = 1 \hspace{0.05cm}| \hspace{0.05cm}x = 0) = {\rm Pr}(y = 0 \hspace{0.05cm}| \hspace{0.05cm}x = 1) = \varepsilon \hspace{0.05cm},\]
\[{\rm Pr}(y = 0 \hspace{0.05cm}| \hspace{0.05cm}x = 0) = {\rm Pr}(y = 1 \hspace{0.05cm}| \hspace{0.05cm}x = 1) = 1-\varepsilon \hspace{0.05cm}.\]
  • After combining two terms each, we thus obtain:
\[C \hspace{0.15cm} = \hspace{0.15cm} 2 \cdot 1/2 \cdot \varepsilon \cdot {\rm log_2 } \hspace{0.15cm}\frac{\varepsilon}{1/2 }+ 2 \cdot 1/2 \cdot (1- \varepsilon) \cdot {\rm log_2 } \hspace{0.15cm}\frac{1- \varepsilon}{1/2 } = \varepsilon \cdot {\rm log_2 } \hspace{0.15cm}2 - \varepsilon \cdot {\rm log_2 } \hspace{0.15cm} \frac{1}{\varepsilon }+ (1- \varepsilon) \cdot {\rm log_2 } \hspace{0.15cm} 2 - (1- \varepsilon) \cdot {\rm log_2 } \hspace{0.15cm}\frac{1}{1- \varepsilon}\]
\[\Rightarrow \hspace{0.3cm} C \hspace{0.15cm} = \hspace{0.15cm} 1 - H_{\rm bin}(\varepsilon). \]
  • The binary entropy function is used here:
\[H_{\rm bin}(\varepsilon) = \varepsilon \cdot {\rm log_2 } \hspace{0.15cm}\frac{1}{\varepsilon}+ (1- \varepsilon) \cdot {\rm log_2 } \hspace{0.15cm}\frac{1}{1- \varepsilon}\hspace{0.05cm}.\]

The right-hand part of the following graph shows the BSC channel capacity as a function of the corruption probability  $\varepsilon$. For comparison, the left-hand part shows the binary entropy function, which has already been defined in the chapter  "memoryless sources"  of the book "Information Theory".

Channel capacity of the BSC model

One can see from this representation:

  • Each corruption probability  $\varepsilon$  results in a channel capacity  $C(\varepsilon)$. According to the  "channel coding theorem",  error-free decoding is only possible if the code rate  $R$  is not greater than  $C(\varepsilon)$.


  • With  $\varepsilon = 10\%$, error-free decoding is impossible if the code rate  $R > 0.531$, because  $C(0.1) = 0.531$. At 50 percent corruption, error-free decoding is impossible even for an arbitrarily small code rate:   $C(0.5) = 0$.
  • From an information-theoretical point of view,  $\varepsilon = 1$  (inversion of all bits) is equivalent to  $\varepsilon = 0$  (error-free transmission). Similarly,  $\varepsilon = 0.9$  is equivalent to  $\varepsilon = 0.1$. Error-free decoding is achieved here simply by swapping the zeros and ones, i.e. by a so-called mapping.
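These statements can be checked directly from the closed-form result  $C(\varepsilon) = 1 - H_{\rm bin}(\varepsilon)$  derived above; a minimal numeric sketch (Python):

import numpy as np

def bsc_capacity(eps):
    """C(eps) = 1 - H_bin(eps) in bit/channel use."""
    if eps in (0.0, 1.0):                 # error-free transmission or inversion of all bits
        return 1.0
    return 1.0 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps)

for eps in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(eps, round(bsc_capacity(eps), 3))
# Output: C(0) = C(1) = 1, C(0.1) = C(0.9) = 0.531, C(0.5) = 0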

Channel capacity of the AWGN model


AWGN channel model

We now consider the  "AWGN channel"  (Additive White Gaussian Noise). Here the output signal is  $y = x + n$, where  $n$  describes a  "Gaussian distributed random variable"  whose expected values (moments) are:

$${\rm E}[n] = 0,\hspace{1cm} {\rm E}[n^2] = P_n.$$

Thus, regardless of the input signal  $x$  (analog or digital), the output signal  $y$  is always continuous-valued, and  $M_y \to\infty$  must be used in the  "equation for mutual information".

The calculation of the channel capacity for the AWGN channel is only sketched here in key points. The exact derivation can be found in the fourth main chapter "Discrete Value Information Theory" of the book  "Information theory".

  • The input quantity  $x$  optimized with respect to maximum mutual information will certainly be continuous-valued; that is, for the AWGN channel not only  $M_y \to\infty$  but also  $M_x \to\infty$  holds.
  • While for a discrete-valued input the optimization is carried out over all probabilities  ${\rm Pr}(x_i)$, it is now carried out over the  "PDF"  $f_x(x)$  of the input signal under the constraint of  "power limitation":
\[C = \max_{f_x(x)} \hspace{0.1cm} I(x; y)\hspace{0.05cm},\hspace{0.3cm}{\rm where \hspace{0.15cm} must \hspace{0.15cm} apply} \text{:}\hspace{0.15cm} {\rm E} \left [ x^2 \right ] \le P_x \hspace{0.05cm}.\]
  • The optimization also yields a Gaussian distribution for the input PDF   ⇒   $x$,  $n$  and  $y$  are Gaussian distributed according to the density functions  $f_x(x)$,  $f_n(n)$  and  $f_y(y)$. We name the corresponding powers  $P_x$,  $P_n$  and  $P_y$.
  • After a longer calculation, one obtains for the channel capacity, using the binary logarithm  $\log_2(\cdot)$ – again with the pseudo-unit "bit/channel use":
\[C = {\rm log_2 } \hspace{0.15cm} \sqrt{\frac{P_y}{P_n }} = {\rm log_2 } \hspace{0.15cm} \sqrt{\frac{P_x + P_n}{P_n }} = {1}/{2 } \cdot {\rm log_2 } \hspace{0.05cm}\left ( 1 + \frac{P_x}{P_n } \right )\hspace{0.05cm}.\]
  • If  $x$  describes a discrete-time signal with symbol rate  $1/T_{\rm S}$, it must be band-limited to  $B = 1/(2T_{\rm S})$, and the same bandwidth  $B$  must be applied to the noise signal  $n$   ⇒   "noise bandwidth":
\[P_x = \frac{E_{\rm S}}{T_{\rm S} } \hspace{0.05cm}, \hspace{0.4cm} P_n = \frac{N_0}{2T_{\rm S} }\hspace{0.05cm}. \]
  • Thus, the AWGN channel capacity can also be expressed by the  transmit energy per symbol $(E_{\rm S})$  and the  noise power density  $(N_0)$:
\[C = {1}/{2 } \cdot {\rm log_2 } \hspace{0.05cm}\left ( 1 + {2 E_{\rm S}}/{N_0 } \right )\hspace{0.05cm}, \hspace{1.9cm} \text {Unit:}\hspace{0.3cm} \frac{\rm bit}{\rm channel\:use}\hspace{0.05cm}.\]
  • The following equation gives the channel capacity per unit time (denoted by $^{\star})$:
\[C^{\star} = \frac{C}{T_{\rm S} } = B \cdot {\rm log_2 } \hspace{0.05cm}\left ( 1 + {2 E_{\rm S}}/{N_0 } \right )\hspace{0.05cm}, \hspace{0.8cm} \text {Unit:} \hspace{0.3cm} \frac{\rm bit}{\rm time\:unit}\hspace{0.05cm}.\]

$\text{Example 1:}$ 

  • For  $E_{\rm S}/N_0 = 7.5$   ⇒   $10 \cdot \lg \, E_{\rm S}/N_0 = 8.75 \, \rm dB$,  one obtains the channel capacity  $C = {1}/{2 } \cdot {\rm log_2 } \hspace{0.05cm} (16) = 2 \, \rm bit/channel\:use$.
  • For a channel with the (physical) bandwidth  $B = 4 \, \rm kHz$, which corresponds to the sampling rate  $f_{\rm A} = 8\, \rm kHz$, this results in  $C^\star = 16 \, \rm kbit/s$.
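Both values of this example can be reproduced directly from the two capacity formulas above; a minimal sketch (Python):

import numpy as np

es_n0 = 7.5                                  # E_S/N_0, i.e. 10*lg(7.5) = 8.75 dB
C = 0.5 * np.log2(1 + 2 * es_n0)             # capacity per channel use  ->  2.0
B = 4000.0                                   # physical bandwidth in Hz, so 1/T_S = 2B = 8 kHz
C_star = B * np.log2(1 + 2 * es_n0)          # capacity per unit time    ->  16000.0 bit/s
print(C, C_star)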


However, a comparison of different coding methods at constant  $E_{\rm S}$  (energy per transmitted symbol ) is not fair. Rather, for this comparison, the energy  $E_{\rm B}$  per useful bit  should be fixed. The following relationships apply:

\[E_{\rm S} = R \cdot E_{\rm B} \hspace{0.3cm} \Rightarrow \hspace{0.3cm} E_{\rm B} = E_{\rm S} / R \hspace{0.05cm}. \]

$\text{Channel Coding Theorem for the AWGN channel:}$ 

Error-free decoding $($at infinitely long blocks   ⇒   $n \to \infty)$  is always possible if the code rate  $R$  is smaller than the channel capacity  $C$:

\[R < C = {1}/{2 } \cdot {\rm log_2 } \hspace{0.05cm}\left ( 1 +2 \cdot R\cdot E_{\rm B}/{N_0 } \right )\hspace{0.05cm}.\]

For each code rate  $R$,  the required  $E_{\rm B}/N_0$  of the AWGN channel can thus be determined so that error-free decoding is just possible. For the limiting case  $R = C$  one obtains:

\[{E_{\rm B} }/{N_0} > \frac{2^{2R}-1}{2R } \hspace{0.05cm}.\]
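Evaluated numerically, this bound reproduces the values used in the following graph and discussion, for example about  $0 \, \rm dB$  for  $R = 0.5$  and  $-1.59 \, \rm dB$  in the limit  $R \to 0$. A minimal sketch (Python; the rate values are arbitrary examples):

import numpy as np

def min_ebn0_db(R):
    """Smallest E_B/N_0 in dB that still allows error-free decoding at code rate R."""
    return 10 * np.log10((2 ** (2 * R) - 1) / (2 * R))

for R in (1e-6, 0.25, 0.5, 1.0, 2.0):
    print(f"R = {R}:  {min_ebn0_db(R):.2f} dB")
# R -> 0 gives 10*lg(ln 2) = -1.59 dB (ultimate Shannon limit), R = 0.5 gives exactly 0 dB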


The graph summarizes the result, with the ordinate  $R$  plotted on a linear scale and the abscissa  $E_{\rm B}/{N_0 }$  plotted logarithmically.

Channel capacity of the AWGN channel
  • Error-free coding is not possible outside the blue area.
  • The blue limit curve indicates the channel capacity  $C$  of the AWGN channel.


From this graph and the above equation, the following can be deduced:

  • Channel capacity  $C$  increases somewhat less than linearly with  $10 \cdot \lg \, E_{\rm B}/N_0 $ . In the graph, some selected function values are indicated as blue crosses.
  • If  $10 \cdot \lg \, E_{\rm B}/N_0 < -1.59 \, \rm dB$, error-free decoding is impossible in principle. For the code rate  $R = 0.5$,  $10 \cdot \lg \, E_{\rm B}/N_0 > 0 \, \rm dB$  must hold   ⇒  $E_{\rm B} > N_0$.
  • For all binary codes,  $0 < R ≤ 1$  holds per se. Code rates  $R > 1$  are only possible with non-binary codes. For example, the maximum possible code rate of a quaternary code is  $R = \log_2 \, M_y = \log_2 \, 4 = 2$.
  • All one-dimensional modulation types – i.e., those methods that use only the in-phase or only the quadrature component, such as  "2–ASK",  "BPSK"  and  "2–FSK"  – must lie in the blue area of the present graphic.
  • As shown in the chapter  "Maximum code rate for QAM structures"  of the book "Information Theory", there is a "friendlier" limit curve for two-dimensional modulation types such as  "Quadrature Amplitude Modulation".

AWGN channel capacity for binary input signals


In this book we restrict ourselves mainly to binary codes, that is, to the Galois field  ${\rm GF}(2^n)$. With this

Conditional density functions for AWGN channel and binary input
  • firstly, the code rate is limited to the range  $R ≤ 1$,
  • secondly, even for  $R ≤ 1$  not the whole blue region is available (see the previous page).


The now valid region results from the  "general equation"  of mutual information by

  • the parameters  $M_x = 2$  and  $M_y \to \infty$,
  • bipolar signaling   ⇒   $x=0$   →   $\tilde{x} = +1$  and  $x=1$   →   $\tilde{x} = -1$,
  • the transition from conditional probabilities  ${\rm Pr}(\tilde{x}_i)$  to conditional probability density functions,
  • replacing the sum by an integration.

The optimization of the source leads to equally probable symbols:

\[{\rm Pr}(\tilde{x} = +1) = {\rm Pr}(\tilde{x} = -1) = 1/2 \hspace{0.05cm}. \]

This gives for the maximum of the mutual information, i.e. for the channel capacity:

\[C \hspace{-0.15cm} = {1}/{2} \cdot \int_{-\infty }^{+ \infty} \left [ f_{y\hspace{0.05cm} |\hspace{0.05cm}\tilde{x} = +1}(y)\cdot {\rm log_2 } \hspace{0.15cm}\frac {f_{y\hspace{0.05cm} |\hspace{0.05cm}\tilde{x} = +1}(y)}{f_y(y)} + f_{y\hspace{0.05cm} |\hspace{0.05cm}\tilde{x} = -1}(y)\cdot {\rm log_2 } \hspace{0.15cm}\frac {f_{y\hspace{0.05cm} |\hspace{0.05cm}\tilde{x} = -1}(y)}{f_y(y)} \right ]\hspace{0.1cm}{\rm d}y \hspace{0.05cm}.\]

The integral cannot be solved in mathematical closed form, but can only be evaluated numerically.

  • The green curve shows the result.
  • The blue curve gives for comparison the channel capacity for Gaussian distributed input signals derived on the last page.
AWGN channel capacity for binary input signals


It can be seen:

  • For  $10 \cdot \lg \, E_{\rm B}/N_0 < 0 \, \rm dB$  the two capacity curves differ only slightly.
  • Thus, for binary bipolar input, compared to the optimum (Gaussian input), the characteristic value  $10 \cdot \lg \, E_{\rm B}/N_0$  only needs to be increased by about  $0.1 \, \rm dB$  to also allow the code rate  $R = 0.5$.
  • From  $10 \cdot \lg \, E_{\rm B}/N_0 \approx 6 \, \rm dB$  onwards, the capacity  $C = 1 \, \rm bit/channel\:use$  of the AWGN channel for binary input is (almost) reached.
  • In between, the limit curve rises approximately exponentially.
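A minimal sketch of such a numerical evaluation (Python; an illustration, not part of the original text). For normalized amplitudes  $\tilde{x} = \pm 1$,  the noise variance follows from  $P_n/P_x = N_0/(2E_{\rm S})$;  the integration grid and step size are arbitrary choices:

import numpy as np

def bpsk_awgn_capacity(es_n0_db):
    """Capacity of the AWGN channel with bipolar binary input, in bit/channel use."""
    es_n0 = 10 ** (es_n0_db / 10)
    sigma = np.sqrt(1 / (2 * es_n0))              # noise rms value for amplitudes +1 / -1
    y = np.linspace(-1 - 10 * sigma, 1 + 10 * sigma, 20001)
    g = lambda m: np.exp(-(y - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    f_p, f_m = g(+1.0), g(-1.0)                   # conditional PDFs for x = +1 and x = -1
    f_y = 0.5 * (f_p + f_m)
    integrand = 0.5 * (f_p * np.log2((f_p + 1e-300) / f_y)
                       + f_m * np.log2((f_m + 1e-300) / f_y))
    return float(np.sum(integrand) * (y[1] - y[0]))   # rectangle-rule approximation

# Comparison with the Gaussian-input capacity C = 1/2 * log2(1 + 2*E_S/N_0):
for es_n0_db in (-10, 0, 6, 10):
    c_gauss = 0.5 * np.log2(1 + 2 * 10 ** (es_n0_db / 10))
    print(es_n0_db, "dB:", round(bpsk_awgn_capacity(es_n0_db), 3), "vs.", round(c_gauss, 3))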


Common channel codes versus channel capacity


We now show to what extent established channel codes approach the BPSK channel capacity (green curve). In the following graph, the rate  $R=k/n$  of these codes or the capacity  $C$  (with the additional pseudo–unit "bit/channel use") is plotted as the ordinate. It is further assumed:

  • the AWGN channel, characterized by  $10 \cdot \lg \, E_{\rm B}/N_0$  in dB, and
  • for the codes marked by crosses, a bit error rate (BER) of  $10^{-5}$.


Rates and required  $E_{\rm B}/{N_0}$  of different channel codes



$\text{Please note:}$ 

  • The channel capacity curves always apply to  $n \to \infty$  and  $\rm BER \to 0$ .
  • If one were to apply this strict requirement "error-free" also to the considered channel codes of finite code length  $n$, this would always require  $10 \cdot \lg \, E_{\rm B}/N_0 \to \infty$.
  • But this is an academic problem of little practical significance. For  $\rm BER = 10^{-10}$  a qualitatively and also quantitatively similar graph would result.


The following are some explanations of the data taken from the lecture [Liv10][1]:

  • The points  $\rm A$,  $\rm B$  and  $\rm C$  mark  "Hamming codes"  of different rates. For  $\rm BER = 10^{-5}$,  they all require more than  $10 \cdot \lg \, E_{\rm B}/N_0 = 8 \, \rm dB$.
  • The marker  $\rm D$  denotes the binary  "Golay code"  with rate  $1/2$  and the marker  $\rm E$  denotes a  "Reed–Muller code". This very low rate code was used as early as 1971 on the Mariner 9 spacecraft.
  • The  "Reed–Solomon codes"  (RS codes) are discussed in detail in the second main chapter. The point  $\rm F$  marks a high-rate RS code  $(R = 223/255 \approx 0.87)$  requiring  $10 \cdot \lg \, E_{\rm B}/N_0 < 6 \, \rm dB$.
  • The markers  $\rm G$  and  $\rm H$  denote exemplary  "convolutional codes"  of medium rate. The code  $\rm G$  was used as early as 1972 on the Pioneer 10 mission.
  • The channel coding of the Voyager–mission in the late 1970s is marked  $\rm I$ . It is the concatenation of a  $\text{(2, 1, 7)}$–convolutional code with a Reed–Solomon code, as described in the fourth main chapter.

Note:   For convolutional codes in particular, the third code parameter has a different meaning than for block codes. $\text{CC (2, 1, 32)}$, for example, indicates the memory  $m = 32$.

Rates and required  $E_{\rm B}/N_0$  for iterative coding methods



Much better results can be achieved with iterative decoding, as the second graph shows.

  • This means:  The individual marker points are much closer to the capacity curve.
  • The solid blue curve previously labeled "Gauss capacity" is here called "Gauss (real)".


Here are some more explanations about this graph:

  • Red crosses mark so-called  Turbo codes  according to  CCSDS  (Consultative Committee for Space Data Systems), each with  $k = 6920$  information bits and different code lengths  $n$. These codes, invented by  Claude Berrou  around 1990, can be decoded iteratively. The (red) marks are each less than  $1 \, \rm dB$  away from the Shannon bound.
  • Similar behavior is shown by the  LDPC codes  (Low Density Parity–check Codes) marked by white rectangles, which have been used since 2006 for  DVB–S(2)  (Digital Video Broadcast over Satellite). Due to the sparse occupancy of the parity-check matrix  $\boldsymbol {\rm H}$  (with ones), they are very well suited for iterative decoding by means of factor graphs  and  exit charts. See [Hag02][2].
  • The black dots mark the  LDPC codes  specified by CCSDS, all of which assume a constant number of information bits  $(k = 16384)$. In contrast, for all white rectangles the code length  $n = 64800$  is constant, while the number  $k$  of information bits changes according to the rate  $R = k/n$.
  • Around the year 2000, many researchers had the ambition to approach the Shannon bound to within fractions of a  $\rm dB$. The yellow cross marks such a result from [CFRU01][3]: an irregular rate-1/2 LDPC code of code length  $n = 2 \cdot 10^6$  was used.

$\text{Conclusion:}$  It should be noted that Shannon recognized and proved as early as 1948 that no one-dimensional modulation method can lie to the left of the solid AWGN limit curve "Gaussian (real)".

  • For two-dimensional methods such as QAM and multilevel PSK, on the other hand, the "Gaussian (complex)" limit curve applies, which is drawn here as a dashed line and always lies to the left of the solid curve.
  • For more details, see the  "Maximum code rate for QAM structures"  section of the "Information Theory" book.
  • Also, this limit curve has now been nearly reached with QAM procedures and very long channel codes, without ever being exceeded.


Exercises for the chapter


Exercise 1.17: On the Channel Coding Theorem

Exercise 1.17Z: BPSK Channel Capacity

References

  1. Liva, G.: Channel Coding. Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.
  2. Hagenauer, J.: The Turbo Principle in Mobile Communications. In: Int. Symp. on Information Theory and Its Applications, Oct. 2010, PDF document.
  3. Chung, S.Y.; Forney Jr., G.D.; Richardson, T.J.; Urbanke, R.: On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit. In: IEEE Communications Letters, vol. 5, no. 2 (2001), pp. 58–60.