
AWGN Channel Capacity for Discrete-Valued Input


AWGN model for discrete-time band-limited signals


At the end of the  last chapter , the AWGN model was used according to the left graph, characterised by the two random variables  X  and  Y  at the input and output and the stochastic noise  N  as the result of a zero-mean Gaussian random process   ⇒   „white noise” with variance  σ_N^2.  The noise power  P_N  is thus also equal to  σ_N^2.

Two largely equivalent models for the AWGN channel

The maximum mutual information  I(X; Y)  between input and output   ⇒   channel capacity  C  is obtained when the input PDF  f_X(x)  is Gaussian.  With the transmit power  P_X = σ_X^2   ⇒   variance of the random variable  X,  the channel capacity equation is:

C = 1/2 \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + {P_X}/{P_N}) \hspace{0.05cm}.
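For a first numerical orientation, this equation can be evaluated with a few lines of Python.  The following is only a minimal sketch of ours; the function name  awgn_capacity  is freely chosen:

import math

def awgn_capacity(snr):
    # Capacity of the real AWGN channel in bit/channel use;
    # snr = P_X/P_N as a linear ratio (not in dB).
    return 0.5 * math.log2(1.0 + snr)

print(awgn_capacity(15.0))   # P_X/P_N = 15  =>  C = 1/2 * log2(16) = 2 bit/channel use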

Now we describe the AWGN channel model according to the case sketched on the right, where the sequence  〈X_ν〉  is applied to the channel input, with a spacing of  T_{\rm A}  between successive values.  This sequence is the discrete-time equivalent of the continuous-time signal  X(t)  after band limiting and sampling.

The relationship between the two models can be established by means of a graph, which is described in more detail below.

  The  \text{important insights}:

  • In the right-hand model, the same relationship  Y_ν = X_ν + N_ν  applies at the sampling times  ν·T_{\rm A}  as in the left-hand model.
  • The noise component  N_ν  is now to be modelled by white noise band-limited to  ±B,  with the two-sided power density  {\it Φ}_N(f) = N_0/2,  where  B = 1/(2T_{\rm A})  must hold   ⇒   see  sampling theorem.


\text{ Interpretation:}

In the modified model, we assume an infinite sequence  〈X_ν〉  of Gaussian random variables impressed on a  Dirac comb  p_δ(t).  The resulting discrete-time signal is thus:

X_{\delta}(t) = T_{\rm A} \cdot \hspace{-0.1cm} \sum_{\nu = - \infty }^{+\infty} X_{\nu} \cdot \delta(t- \nu \cdot T_{\rm A} )\hspace{0.05cm}.

The spacing of all (weighted) Dirac functions is uniformly  T_{\rm A}.

AWGN model considering time discretisation and band limitation

Through the interpolation filter with the impulse response  h(t)  as well as the frequency response  H(f), where

h(t) = 1/T_{\rm A} \cdot {\rm si}(\pi \cdot t/T_{\rm A}) \quad \circ\!\!\!-\!\!\!-\!\!\!-\!\!\bullet \quad H(f) = \left\{ \begin{array}{c} 1 \\ 0 \\ \end{array} \right. \begin{array}{*{20}c} {\rm{for}} \hspace{0.3cm} |f| \le B, \\ {\rm{for}} \hspace{0.3cm} |f| > B, \\ \end{array} \hspace{0.5cm} B = \frac{1}{2T_{\rm A}}

must hold, the continuous-time signal  X(t)  is obtained with the following properties:

  • The samples  X(ν·T_{\rm A})  are identical to the input values  X_ν  for all integers  ν,  which can be justified by the equidistant zeros of the  sinc function   ⇒   \text{si}(x) = \sin(x)/x.
  • According to the sampling theorem,  X(t)  is ideally bandlimited to the spectral range  ±B , as the above calculation has shown   ⇒   rectangular frequency response  H(f)  of the one-sided bandwidth  B.


\text{Noise power:}  After adding the noise  N(t)  with the (two-sided) power density  {\it Φ}_N(f) = N_0/2,  the matched filter  \rm (MF)  with sinc-shaped impulse response follows.  The following then applies to the  noise power at the MF output:

P_N = {\rm E}\big[N_\nu^2 \big] = \frac{N_0}{2T_{\rm A} } = N_0 \cdot B\hspace{0.05cm}.


\text{Proof:}  With  B = 1/(2T_{\rm A} )  one obtains for the impulse response  h_{\rm E}(t)  and the spectral function  H_{\rm E}(f):

h_{\rm E}(t) = 2B \cdot {\rm si}(2\pi \cdot B \cdot t) \quad \circ\!\!\!-\!\!\!-\!\!\!-\!\!\bullet \quad H_{\rm E}(f) = \left\{ \begin{array}{c} 1 \\ 0 \\ \end{array} \right. \begin{array}{*{20}c} \text{for} \hspace{0.3cm} \vert f \vert \le B, \\ \text{for} \hspace{0.3cm} \vert f \vert > B. \\ \end{array}

According to the insights of the book  Theory of Stochastic Signals,  it follows:

P_N = \int_{-\infty}^{+\infty} \hspace{-0.3cm} {\it \Phi}_N (f) \cdot \vert H_{\rm E}(f)\vert^2 \hspace{0.15cm}{\rm d}f = \int_{-B}^{+B} \hspace{-0.3cm} {\it \Phi}_N (f) \hspace{0.15cm}{\rm d}f = \frac{N_0}{2} \cdot 2B = N_0 \cdot B \hspace{0.05cm}.


Further:

  • If one samples the matched–filter output signal at equidistant intervals   T_{\rm A} , the same constellation as before results for the time instants  ν ·T_{\rm A} , namely:   Y_ν = X_ν + N_ν.
  • The noise component  N_ν  in the discrete-time output signal   Y_ν  is thus „band limited” and „white”.  The channel capacity equation thus needs to be adjusted only slightly.
  • With the  \text{energy per symbol}   E_{\rm S} = P_X \cdot T_{\rm A}   ⇒   transmit energy within a „symbol duration”  T_{\rm A},  the following then holds:
C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac {P_X}{N_0 \cdot B}) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac {2 \cdot P_X \cdot T_{\rm A}}{N_0}) = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac {2 \cdot E_{\rm S}}{N_0}) \hspace{0.05cm}.


The channel capacity  C  as a function of  E_{\rm S}/N_0


\text{Example 1:}  The graph shows the variation of the AWGN channel capacity as a function of the quotient  E_{\rm S}/N_0,  for which the left coordinate axis and the red labels apply:

C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot E_{\rm S} }{N_0}) \hspace{0.5cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/channel\hspace{0.15cm} use} \hspace{0.05cm}.

The (pseudo–)unit is sometimes also referred to as „bit/source symbol” or „bit/symbol” for short.

Channel capacities  C  and  C^{\hspace{0.05cm}*}  via  E_{\rm S}/N_0

The right (blue) axis label takes into account the relation  B = 1/(2T_{\rm A})  and thus provides an upper bound for the bit rate  R  of a digital system that is still possible over this AWGN channel:

C^{\hspace{0.05cm}*} = \frac{C}{T_{\rm A} } = B \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot E_{\rm S} }{N_0}) \hspace{1.0cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/second} \hspace{0.05cm}.
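As a numerical illustration, both quantities can be evaluated together.  The parameter values in the following sketch are our own assumptions, not taken from the text:

import math

T_A = 1e-4                        # assumed sampling distance: 0.1 ms
B = 1.0 / (2.0 * T_A)             # one-sided bandwidth: 5 kHz
es_n0 = 10.0 ** (10.0 / 10.0)     # assumed 10*lg(E_S/N_0) = 10 dB

C = 0.5 * math.log2(1.0 + 2.0 * es_n0)   # in bit/channel use
C_star = C / T_A                         # in bit/second, equal to B*log2(1 + 2*E_S/N_0)
print(C, C_star)                         # approx. 2.196 bit/channel use, approx. 21960 bit/s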


\text{Example 2:}  Often the quotient of symbol energy  (E_{\rm S})  and AWGN noise power density  (N_0)  is given logarithmically.

AWGN channel capacities  C  and  C^{\hspace{0.05cm}*}  as a function of  10 \cdot \lg \ E_{\rm S}/N_0
  • This graph shows the channel capacities  C  and  C^{\hspace{0.05cm}*}  respectively as a function of  10 · \lg (E_{\rm S}/N_0)  in the range from  -20 \ \rm dB  to  +30 \ \rm dB.
  • From about  10 \ \rm dB  onwards, a (nearly) linear curve results here  (see the numerical sketch below).
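This behaviour is plausible, since for large  E_{\rm S}/N_0  we have  C ≈ 1/2 · \log_2 (2 E_{\rm S}/N_0),  i.e. a gain of about  0.83  bit per  5 \ \rm dB.  A small numerical sketch of ours:

import math

def capacity(es_n0_db):
    # C = 1/2 * log2(1 + 2*E_S/N_0) in bit/channel use
    return 0.5 * math.log2(1.0 + 2.0 * 10.0 ** (es_n0_db / 10.0))

for db in range(10, 35, 5):
    print(db, round(capacity(db), 3))
# 10 dB: 2.196   15 dB: 3.003   20 dB: 3.826   25 dB: 4.654   30 dB: 5.483
# step per 5 dB -> 0.5 * log2(10**0.5) = 0.830 bit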

System model for the interpretation of the AWGN channel capacity


In order to discuss the  channel coding theorem  in the context of the AWGN channel, we still need an „encoder”, but here it is characterized in information-theoretic terms by the code rate  R  alone.

Model for the interpretation of the AWGN channel capacity

The graphic describes the message system considered by Shannon with the blocks source, coder, (AWGN) channel, decoder and receiver.  In the background you can see an original picture from a paper about the Shannon theory.  We have drawn in red some designations and explanations for the following text:

  • The source symbol  U  comes from an alphabet with  M_U = |U| = 2^k  symbols and can be represented by  k  equally probable statistically independent binary symbols.
  • The alphabet of the code symbol  X  has the size  M_X = |X| = 2^n,  where  n  results from the code rate  R = k/n.
  • Thus, code rate  R = 1  implies  n = k,  while  n > k  leads to code rate  R < 1.


  \rm Channel coding theorem 

This states that there is (at least) one code of rate  R  that leads to symbol error probability  p_{\rm S} = \text{Pr}(V ≠ U) \equiv 0  if the following conditions are satisfied:

  • The code rate  R  is not larger than the channel capacity  C.
  • Such a suitable code is infinitely long:   n → ∞.  Therefore, a Gaussian distribution  f_X(x)  at the channel input is indeed possible, which has always been the basis of the previous calculation of the AWGN channel capacity:
C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot E_{\rm S} }{N_0}) \hspace{0.5cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/channel\hspace{0.15cm} use} \hspace{0.05cm}.
  • Thus, the channel input quantity  X  is continuous in value.  The same is true for  U  and for the quantities  Y  and  V  after the AWGN channel.
  • For a system comparison, however, the energy per symbol  (E_{\rm S})  is unsuitable.  A comparison should rather be based on the energy  E_{\rm B}  per information bit   ⇒   „energy per bit” for short.  Thus, with  E_{\rm B} = E_{\rm S}/R,  the following also holds  (a numerical sketch follows below):
C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot R \cdot E_{\rm B} }{N_0}) \hspace{0.2cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/channel\hspace{0.15cm} use} \hspace{0.05cm}.
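At the capacity limit one may set  R = C;  rearranging the last equation then gives the required  E_{\rm B}/N_0  explicitly as  (2^{2C}-1)/(2C).  A minimal numerical sketch of ours:

import math

def eb_n0_required(C):
    # solves C = 1/2 * log2(1 + 2*C*E_B/N_0) for E_B/N_0 (with R = C)
    return (2.0 ** (2.0 * C) - 1.0) / (2.0 * C)

for C in (1, 2, 3, 4, 5):
    print(C, round(10.0 * math.log10(eb_n0_required(C)), 2))
# 1.76 dB, 5.74 dB, 10.21 dB, 15.03 dB, 20.1 dB  (compare the green numbers in Example 3)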


These two equations are discussed on the next page.


The channel capacity  C  as a function of  E_{\rm B}/N_0


\text{Example 3:}  The graph for this example shows the AWGN channel capacity  C  as a function

  • of  10 · \lg (E_{\rm S}/N_0)   ⇒   red curve:
C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot E_{\rm S} }{N_0}) \hspace{1.5cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/channel\hspace{0.15cm} use\hspace{0.15cm} (or\hspace{-0.15cm}: \hspace{0.05cm}bit/symbol)} \hspace{0.05cm}.

            \text{Red numbers:}   Capacity  C  in  „\rm bit/symbol”  for   10 · \lg (E_{\rm S}/N_0) = -20 \ \rm dB, -15 \ \rm dB, ... , +30\ \rm dB.

  • of  10 · \lg (E_{\rm B}/N_0)   ⇒   green curve:
C = {1}/{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac { 2 \cdot R \cdot E_{\rm B} }{N_0}) \hspace{1.2cm}{\rm unit\hspace{-0.15cm}: \hspace{0.05cm}bit/channel\hspace{0.15cm} use\hspace{0.1cm} (or\hspace{-0.15cm}: \hspace{0.05cm}bit/symbol)} \hspace{0.05cm}.

            \text{Green numbers:}   Required  10 · \lg (E_{\rm B}/N_0)  in  „\rm dB” for    C = 0,\ 1,  ... ,  5  in „\rm bit/symbol”.

The detailed  C(E_{\rm B}/N_0)  calculation can be found in  Task 4.8  and the corresponding sample solution.

The AWGN channel capacity in two different representations

In the following, we interpret the (green)  C(E_{\rm B}/N_0) result in comparison to the (red)  C(E_{\rm S}/N_0) curve:

  • Because of  E_{\rm S} = R · E_{\rm B},  the intersection of both curves is at  C (= R) = 1  (bit/symbol).  There,  10 · \lg (E_{\rm S}/N_0) = 1.76\ \rm dB  and  10 · \lg (E_{\rm B}/N_0) = 1.76\ \rm dB  are required alike.
  • In the range  C > 1,  the green curve always lies above the red curve.  For example, for  10 · \lg (E_{\rm B}/N_0) = 20\ \rm dB  the channel capacity is  C ≈ 5;  for  10 · \lg (E_{\rm S}/N_0) = 20\ \rm dB  it is only  C ≈ 3.83.
  • A comparison in the horizontal direction shows that the channel capacity  C = 3  bit/symbol is already achievable with  10 · \lg (E_{\rm B}/N_0) \approx 10\ \rm dB,  whereas one needs  10 · \lg (E_{\rm S}/N_0) \approx 15\ \rm dB.
  • In the range  C < 1,  the red curve is always above the green one.  For any  E_{\rm S}/N_0 > 0,  C > 0;  thus, for a logarithmic abscissa as in the present plot, the red curve extends to „minus infinity”.
  • In contrast, the green curve ends at  E_{\rm B}/N_0 = \ln (2) = 0.693   ⇒   10 · \lg (E_{\rm B}/N_0) = -1.59\ \rm dB   ⇒   the absolute limit for (error-free) transmission over the AWGN channel  (see the derivation below).
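The derivation of this limit: setting  R = C  in the last capacity equation gives  E_{\rm B}/N_0 = (2^{2C}-1)/(2C),  and letting  C → 0:

\lim_{C \hspace{0.05cm}\to \hspace{0.05cm}0}\hspace{0.1cm} \frac{2^{2C}-1}{2C} = \lim_{C \hspace{0.05cm}\to \hspace{0.05cm}0}\hspace{0.1cm} \frac{{\rm e}^{2C \hspace{0.03cm}\cdot \hspace{0.03cm}{\rm ln}\hspace{0.05cm} 2}-1}{2C} = {\rm ln}\hspace{0.05cm} 2 \approx 0.693 \hspace{0.3cm} \Rightarrow \hspace{0.3cm} 10 \cdot {\rm lg} \hspace{0.1cm} ({\rm ln}\hspace{0.05cm} 2) \approx -1.59\,{\rm dB} \hspace{0.05cm}.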


AWGN channel capacity for binary input signals


On the previous pages of this chapter, we always assumed a Gaussian distributed, i.e. continuous-valued, AWGN input  X  in accordance with Shannon theory.  Now we consider the binary case and thus only now do justice to the chapter title „AWGN channel capacity for discrete-valued input”.

Calculation of the AWGN channel capacity for BPSK

The diagram shows the underlying block diagram for  Binary Phase Shift Keying  (BPSK) with binary input  U  and binary output  V.  The best possible coding should achieve a vanishingly small error probability  \text{Pr}(V ≠ U).

  • The coder output is characterized by the binary random variable  X \hspace{0.03cm}' = \{0, 1\}   ⇒   M_{X'} = 2, while the output  Y  of the AWGN channel remains continuous-valued:   M_Y → ∞.
  • Mapping  X = 1 - 2X\hspace{0.03cm} '  takes us from the unipolar representation to the bipolar (antipodal) description more suitable for BPSK:   X\hspace{0.03cm} ' = 0 → \ X = +1; \hspace{0.5cm} X\hspace{0.03cm} ' = 1 → X = -1.
Conditional PDFs for  X = -1  (red) and  X = +1  (blue)
  • The AWGN channel is characterised by two conditional probability density functions:
f_{Y\hspace{0.05cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.03cm}{X}=+1) =\frac{1}{\sqrt{2\pi\sigma^2}} \cdot {\rm exp}\left [-\frac{(y - 1)^2} { 2 \sigma^2}\right ] \hspace{0.05cm},\hspace{0.5cm}\text{short form:} \ \ f_{Y\hspace{0.05cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.03cm}+1)\hspace{0.05cm},
f_{Y\hspace{0.05cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.03cm}{X}=-1) =\frac{1}{\sqrt{2\pi\sigma^2}} \cdot {\rm exp}\left [-\frac{(y + 1)^2} { 2 \sigma^2}\right ] \hspace{0.05cm},\hspace{0.5cm}\text{short form:} \ \ f_{Y\hspace{0.05cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.03cm}-1)\hspace{0.05cm}.
  • Since here the signal  X  is normalised to  ±1    ⇒   power  1  instead of  P_X, the variance of the AWGN noise  N  must be normalised in the same way:   σ^2 = P_N/P_X.
  • The receiver makes a   maximum likelihood decision from the real-valued random variable  Y  (at the AWGN channel output).  The receiver output  V  is binary  (0  or  1).


Based on this model, we now calculate the channel capacity of the AWGN channel.

For a binary input variable  X,  in general  \text{Pr}(X = -1) = 1 - \text{Pr}(X = +1),  and the channel capacity is obtained by maximising the mutual information over  \text{Pr}(X = +1):

C_{\rm BPSK} = \max_{ {\rm Pr}({X} =+1)} \hspace{-0.15cm} I(X;Y) \hspace{0.05cm}.

Due to the symmetrical channel, it is obvious that the input probabilities are

{\rm Pr}(X =+1) = {\rm Pr}(X =-1) = 0.5

will lead to the optimum.  According to the page  calculation of mutual information with additive noise , there are several calculation possibilities:

\begin{align*}C_{\rm BPSK} & = h(X) + h(Y) - h(XY)\hspace{0.05cm},\\ C_{\rm BPSK} & = h(Y) - h(Y|X)\hspace{0.05cm},\\ C_{\rm BPSK} & = h(X) - h(X|Y)\hspace{0.05cm}. \end{align*}

All results still have to be supplemented by the pseudo-unit „bit/channel use”.  We choose the middle equation here:

  • The conditional differential entropy required for this is equal to
h(Y\hspace{0.03cm}|\hspace{0.03cm}X) = h(N) = 1/2 \cdot {\rm log}_2 \hspace{0.1cm}(2\pi{\rm e}\cdot \sigma^2) \hspace{0.05cm}.
  • The differential entropy  h(Y)  is completely determined by the PDF  f_Y(y).  Using the conditional probability density functions defined and sketched above, we obtain:
f_Y(y) = {1}/{2} \cdot \big [ f_{Y\hspace{0.03cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.05cm}{X}=-1) + f_{Y\hspace{0.03cm}|\hspace{0.03cm}{X}}(y\hspace{0.05cm}|\hspace{0.05cm}{X}=+1) \big ]\hspace{0.3cm} \Rightarrow \hspace{0.3cm} h(Y) \hspace{-0.01cm}=\hspace{0.05cm} -\hspace{-0.7cm} \int\limits_{y \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}(f_Y)} \hspace{-0.65cm} f_Y(y) \cdot {\rm log}_2 \hspace{0.1cm} [f_Y(y)] \hspace{0.1cm}{\rm d}y \hspace{0.05cm}.

It is obvious that  h(Y)  can only be determined by numerical integration, especially if one considers that  f_Y(y)  in the overlap region results from the sum of the two conditional Gaussian functions.
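One possible numerical evaluation of  C_{\rm BPSK} = h(Y) - h(N)  is sketched below.  This is our own illustration using SciPy; all function names are freely chosen:

import numpy as np
from scipy.integrate import quad

def c_bpsk(sigma):
    # C_BPSK = h(Y) - h(N) in bit/channel use; sigma^2 = P_N/P_X (normalised)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)
    def f_y(y):
        # mixture PDF of Y for Pr(X = +1) = Pr(X = -1) = 0.5
        return 0.5 * norm * (np.exp(-(y - 1.0)**2 / (2.0 * sigma**2))
                             + np.exp(-(y + 1.0)**2 / (2.0 * sigma**2)))
    def integrand(y):
        p = f_y(y)
        return -p * np.log2(p) if p > 0.0 else 0.0
    h_y, _ = quad(integrand, -1.0 - 8.0 * sigma, 1.0 + 8.0 * sigma)   # h(Y)
    h_n = 0.5 * np.log2(2.0 * np.pi * np.e * sigma**2)                # h(N)
    return h_y - h_n

print(c_bpsk(1.0))   # moderate noise: clearly below 1 bit/channel use
print(c_bpsk(0.1))   # low noise: approaches the maximum of 1 bit/channel use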

In the following graph, three curves are plotted over the abscissa  10 · \lg (E_{\rm B}/N_0):

  • the channel capacity  C_{\rm Gaussian}, drawn in blue, valid for a Gaussian input quantity  X   ⇒   M_X → ∞,
  • the channel capacity  C_{\rm BPSK}  drawn in green for the random quantity  X = (+1, –1), and
  • the red horizontal line marked „BPSK without coding”.


Comparison of the channel capacity limits  C_{\rm BPSK}  and  C_{\rm Gaussian} 

These curves can be interpreted as follows:

  • The green curve  C_{\rm BPSK}  indicates the maximum permissible code rate  R  of  Binary Phase Shift Keying  (BPSK) at which the bit error probability  p_{\rm B} \equiv 0  is possible for the given  E_{\rm B}/N_0  by best possible coding.
  • For all BPSK systems with coordinates  (10 · \lg \ E_{\rm B}/N_0, \ R)  in the „green range”,  p_{\rm B} \equiv 0  is achievable in principle.  It is the task of communications engineers to find suitable codes for this.
  • The BPSK curve always lies below the absolute Shannon limit curve  C_{\rm Gaussian}  for  M_X → ∞  (blue curve).  In the lower range,  C_{\rm BPSK} ≈ C_{\rm Gaussian}.  For example, a BPSK system with  R = 1/2  only has to provide a  0.1\ \rm dB  larger  E_{\rm B}/N_0  than required by the (absolute) channel capacity  C_{\rm Gaussian}.
  • For finite  E_{\rm B}/N_0,  C_{\rm BPSK} < 1  always applies   ⇒   see  Task 4.9Z.  A BPSK with  R = 1  (and thus without coding) will therefore always result in a bit error probability  p_{\rm B} > 0.
  • The error probabilities of such a BPSK system without coding  (with  R = 1)  are indicated on the red horizontal line.  To achieve  p_{\rm B} ≤ 10^{–5},  one needs at least  10 · \lg (E_{\rm B}/N_0) = 9.6\ \rm dB.
  • According to the chapter  error probability of the optimal BPSK system  in the book „Digital Signal Transmission”, these probabilities result in
p_{\rm B} = {\rm Q} \left ( \sqrt{S \hspace{-0.06cm}N\hspace{-0.06cm}R}\right ) \hspace{0.45cm} {\rm with } \hspace{0.45cm} S\hspace{-0.06cm}N\hspace{-0.06cm}R = 2\cdot E_{\rm B}/{N_0} \hspace{0.05cm}.
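This relationship can be checked numerically; the following short sketch of ours implements  {\rm Q}(x)  via the complementary error function:

from math import erfc, sqrt

def Q(x):
    # complementary Gaussian error function: Q(x) = 1/2 * erfc(x/sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

eb_n0 = 10.0 ** (9.6 / 10.0)     # 10*lg(E_B/N_0) = 9.6 dB
print(Q(sqrt(2.0 * eb_n0)))      # approx. 1e-5, consistent with the red horizontal line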

Hints:

  • In the above graph,  10 · \lg (SNR)  is drawn as a second, additional abscissa axis.
  • The function  {\rm Q}(x)  is called the  complementary Gaussian error function.


Comparison between theory and practice


Two graphs are used to show how far established channel codes approach the BPSK channel capacity (green curve).  The rate  R = k/n  of these codes or the capacity  C  (if the pseudo-unit „bit/channel use” is added) is plotted as the ordinate.

The following is assumed:

  • the AWGN channel, marked by  10 · \lg (E_{\rm B}/N_0) in dB, and
  • for the realised codes marked by crosses, a  bit error rate  (BER) of  10^{–5}.


Note that the channel capacity curves are always for  n → ∞  and  \rm BER \equiv 0 .

  • If one were to apply this strict „error-free” requirement also to the considered channel codes of finite code length  n,  then  10 · \lg \ (E_{\rm B}/N_0) \to \infty  would always be required.
  • However, this is a rather academic problem, i.e. of little practical relevance.  For  \text{BER} = 10^{–10} , a qualitatively similar graph would result.


\text{Example 4:}  The graph shows the characteristics of early systems with channel coding and classical decoding.

  • Some explanations of the data follow, taken from the lecture notes  [Liv10][1].
  • The links in these explanations often refer to the  \rm LNTwww  book  Channel Coding.
Rates and required  E_{\rm B}/{N_0}  of different channel codes
  • Points  \rm A,  \rm B  and  \rm C  mark  Hamming codes  of rates  R = 4/7 ≈ 0.57,  R ≈ 0.73  and  R ≈ 0.84,  respectively.  For  \text{BER} = 10^{–5},  these very early codes (from 1950) all require  10 · \lg (E_{\rm B}/N_0) > 8\ \rm dB.
  • Point  \rm D  indicates the binary  Golay code  of rate  R = 1/2,  and point  \rm E  a  Reed–Muller code.  This very low-rate code was already used in 1971 on the  Mariner 9  space probe.
  • The  Reed–Solomon codes  (RS–Codes, ca. 1960)  are a class of cyclic block codes.  \rm F marks an RS code of rate  223/255 \approx 0.875  and a required  E_{\rm B}/N_0 < 6 \ \rm dB.
  • \rm G  and  \rm H  denote two  convolutional codes  (CC) of medium rate.  The code  \rm G  was already used in 1972 for the  Pioneer 10 mission.
  • The channel coding of the  Voyager mission  at the end of the 1970s is marked  \rm I . It is the  concatenation  of a  \text{(2, 1, 7)} convolutional code with a Reed-Solomon code.


It should be noted that in the convolutional codes the third identifier parameter has a different meaning than in the block codes.  For example, the identifier  \text{(2, 1, 32)}  indicates the memory  m = 32 .


Rates and required  E_{\rm B}/{N_0}  for iterative coding methods

\text{Example 5:}  The early channel codes mentioned in  \text{example 4}  are still relatively far from the channel capacity curve.  

This was probably also a reason why the author of this learning tutorial did not recognise the great practical significance of information theory when he became acquainted with it during his studies in the early 1970s.

The view changed significantly when very long channel codes together with iterative decoding appeared in the 1990s.  The new marker points are much closer to the capacity limit curve.
Here are a few more explanations of this graph:

  • Red crosses mark the so-called  turbo codes  according to  \rm CCSDS  (Consultative Committee for Space Data Systems)  with  k = 6920  information bits each and different code lengths  n = k/R.  These codes, invented by  Claude Berrou  around 1990, can be decoded iteratively.  The (red) markers each lie less than  1 \ \rm dB  from the Shannon limit.
  • The  LDPC codes  (low-density parity-check codes)  with constant code length  n = 64800   ⇒   white rectangles, behave similarly.  They have been used since 2006 for  DVB–S2  (Digital Video Broadcast over Satellite)  and, owing to the sparse occupancy of ones in their parity-check matrix, are very well suited for iterative decoding by means of  factor graphs  and  EXIT charts.
  • Black dots mark the  LDPC codes  specified by  \rm CCSDS  with a constant number of information bits  (k = 16384)  and variable word length  n = k/R.  This code class requires a similar  E_{\rm B}/N_0  to the red crosses and the white rectangles.
  • Around the year 2000, many researchers had the ambition to approach the Shannon limit to within fractions of a  \rm dB.  The yellow cross marks such a result  (0.0045 \ \rm dB)  from  [CFRU01][2],  obtained with an irregular LDPC code of rate  R =1/2  and length  n = 10^7.


\text{Conclusion:}  At this point, the brilliance and foresight of  Claude E. Shannon  shall be emphasised once again:

  • In 1948 he developed a theory, unknown until then, that reveals the possibilities but also the limits of digital signal transmission.
  • At that time, the first ideas on digital message transmission were just ten years old   ⇒   pulse code modulation  (Alec Reeves, 1938), and even the pocket calculator arrived only more than twenty years later.
  • Shannon's work shows us that great things can also be accomplished without gigantic computers.


Channel capacity of the complex AWGN channel


Higher-order modulation methods can each be represented by an in-phase and a quadrature component.  These include, for example,

  • M–QAM   ⇒   quadrature amplitude modulation;  M ≥ 4  signal space points in a square arrangement,
  • M–PSK   ⇒   M ≥ 4  signal space points in a circular arrangement.


In the  equivalent low-pass range,  the two components can also be described as the  real part  and  imaginary part  of a complex noise term  N.

  • All of the above methods are two-dimensional.
  • The (complex) AWGN channel thus provides  K = 2  independent Gaussian channels.
  • According to the page  parallel Gaussian channels,  the capacity of such a channel is therefore:
2D PDF of the complex Gaussian noise
C_{\text{ Gaussian, complex} }= C_{\rm total} ( K=2) = {\rm log}_2 \hspace{0.1cm} ( 1 + \frac{P_X/2}{\sigma^2}) \hspace{0.05cm}.
  • P_X  denotes the total useful power of the in-phase and quadrature components.
  • In contrast, the variance  σ^2  of the noise refers to one dimension only:   σ^2 = σ_{\rm I}^2 = σ_{\rm Q}^2.


The sketch shows the 2D PDF  f_N(n)  of the Gaussian noise process  N  over the two axes

  • N_{\rm I}  (in-phase component, real part) and
  • N_{\rm Q}  (quadrature component, imaginary part).


Darker areas of the rotationally symmetric PDF  f_N(n)  around the origin indicate stronger noise.  Owing to the rotational invariance  (σ_{\rm I} = σ_{\rm Q}),  the variance of the complex Gaussian noise  N  satisfies:

\sigma_N^2 = \sigma_{\rm I}^2 + \sigma_{\rm Q}^2 = 2\cdot \sigma^2 \hspace{0.05cm}.

This allows the channel capacity to be expressed as follows:

C_{\text{ Gaussian, complex} }= {\rm log}_2 \hspace{0.1cm} ( 1 + \frac{P_X}{\sigma_N^2}) = {\rm log}_2 \hspace{0.1cm} ( 1 + SNR) \hspace{0.05cm}.

The equation is evaluated numerically on the next page.  It can already be said, however, that the signal-to-noise power ratio will be:

SNR = {P_X}/{\sigma_N^2} \hspace{0.05cm}.
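For a first orientation before that evaluation, the capacity formula can be tabulated with a short sketch of ours:

import math

def c_complex(snr_db):
    # C = log2(1 + SNR) in bit/channel use, with SNR = P_X/sigma_N^2
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

for db in (0, 10, 20, 30):
    print(db, round(c_complex(db), 2))   # 1.0, 3.46, 6.66, 9.97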

Maximum code rate for QAM structures


Channel capacity of BPSK and  M–QAM

The graph shows the capacity of the complex AWGN channel as a red curve:

C_{\text{ Gaussian, complex} }= {\rm log}_2 \hspace{0.1cm} ( 1 + SNR) \hspace{0.05cm}.
  • The unit of this channel capacity is „bit/channel use” or „bit/source symbol”.
  • The abscissa is the signal-to-noise power ratio  10 · \lg (SNR)  with  {SNR} = P_X/σ_N^2.
  • The graph was taken from  [Göb10][3].  We thank  Bernhard Göbel,  our former colleague at the LÜT, for his consent to use this figure and for his great support of our learning tutorial during his entire active time.


In accordance with Shannon theory, the red curve is again based on a Gaussian distribution  f_X(x)  at the input.  In addition, ten further capacity curves for discrete-valued input are drawn in:


This representation reveals:

  • The BPSK curve and all  M–QAM curves lie to the right of the red Shannon limit curve.  At small  SNR,  however, all curves are almost indistinguishable from the red curve.
  • The final value of all curves for discrete-valued input signals is  K = \log_2 (M).  For  SNR \to ∞  one obtains, for example,  C_{\rm BPSK} = 1 \ \rm bit/symbol  and  C_{\rm 4-QAM} = C_{\rm QPSK} = 2\ \rm bit/symbol.
  • The blue markers show that a  \rm 2^{10}–QAM  with  10 · \lg (SNR) ≈ 27 \ \rm dB  allows a code rate of  R ≈ 8.2.  The distance to the Shannon curve here is  1.53\ \rm dB.
  • The  shaping gain  amounts to  10 · \lg (π \cdot {\rm e}/6) = 1.53 \ \rm dB  (see the calculation below).  This improvement can be achieved by rearranging the  2^{10} = 32^2  square-arranged signal space points so that a Gaussian-like input PDF results   ⇒   signal shaping.
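The numerical value of the shaping gain follows directly:

10 \cdot {\rm lg} \hspace{0.1cm} \frac{\pi \cdot {\rm e} }{6} = 10 \cdot {\rm lg} \hspace{0.1cm} (1.4233) \approx 1.53\ {\rm dB} \hspace{0.05cm}.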


\text{Conclusion:}  In  Task 4.10  the AWGN capacity curves of BPSK and QPSK are discussed:

  • Starting from the abscissa  10 · \lg (E_{\rm B}/N_0)  with  E_{\rm B}  (energy per information bit),  one arrives at the QPSK curve by doubling the BPSK curve:
C_{\rm QPSK}\big [10 \cdot {\rm lg} \hspace{0.1cm}(E_{\rm B}/{N_0})\big ] = 2 \cdot C_{\rm BPSK}\big [10 \cdot {\rm lg} \hspace{0.1cm}(E_{\rm B}/{N_0}) \big ] .
  • If, however, BPSK and QPSK are compared at the same energy  E_{\rm S}  per information symbol, then:
C_{\rm QPSK}[10 \cdot {\rm lg} \hspace{0.1cm}(E_{\rm S}/{N_0})] = 2 \cdot C_{\rm BPSK}[10 \cdot {\rm lg} \hspace{0.1cm}(E_{\rm S}/{N_0}) - 3\,{\rm dB}] .
This takes into account that with QPSK the energy in one dimension is only  E_{\rm S}/2.

Exercises for the chapter


Task 4.8: Numerical evaluation of the AWGN channel capacity

Task 4.8Z: What does the AWGN channel capacity curve say?

Task 4.9: Higher-order modulation

Task 4.9Z: Is the channel capacity C ≡ 1 possible with BPSK?

Task 4.10: QPSK channel capacity

References

  1. Liva, G.: Channel Coding. Lecture notes, Chair of Communications Engineering, TU München and DLR Oberpfaffenhofen, 2010.
  2. Chung, S.Y.; Forney Jr., G.D.; Richardson, T.J.; Urbanke, R.: On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit. In: IEEE Communications Letters, vol. 5, no. 2 (2001), pp. 58–60.
  3. Göbel, B.: Information-Theoretic Aspects of Fiber-Optic Communication Channels. Dissertation. TU München. Verlag Dr. Hut, Reihe Informationstechnik, ISBN 978-3-86853-713-0, 2010.