Difference between revisions of "Information Theory/AWGN Channel Capacity for Continuous-Valued Input"

From LNTwww

Revision as of 14:36, 20 August 2021


Mutual information between value-continuous random variables


In the chapter  Information-theoretical model of digital signal transmission  the  "mutual information" between the two value-discrete random variables  $X$  and  $Y$  was given, among other things, in the following form:

$$I(X;Y) = \hspace{0.5cm} \sum_{\hspace{-0.9cm}y \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{Y}\hspace{-0.08cm})} \hspace{-1.1cm}\sum_{\hspace{1.3cm} x \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{X}\hspace{-0.08cm})} \hspace{-0.9cm} P_{XY}(x, y) \cdot {\rm log} \hspace{0.1cm} \frac{ P_{XY}(x, y)}{P_{X}(x) \cdot P_{Y}(y)} \hspace{0.05cm}.$$

This equation simultaneously corresponds to the  Kullback–Leibler distance  between the joint probability function  $P_{XY}$  and the product of the two individual probability functions  $P_X$  and  $P_Y$ :

$$I(X;Y) = D(P_{XY} \hspace{0.05cm} || \hspace{0.05cm}P_{X} \cdot P_{Y}) \hspace{0.05cm}.$$

In order to derive the mutual information  $I(X; Y)$  between two value-continuous random variables  $X$  and  $Y$,  one proceeds as follows, where a prime marks a quantised variable:

  • One quantises the random variables  $X$  and  $Y$  $($with the quantisation intervals  ${\it Δ}x$  and  ${\it Δ}y)$  and thus obtains the probability functions  $P_{X\hspace{0.01cm}′}$  and  $P_{Y\hspace{0.01cm}′}$.
  • The  "vectors"  $P_{X\hspace{0.01cm}′}$  and  $P_{Y\hspace{0.01cm}′}$  become infinitely long after the limit transitions  ${\it Δ}x → 0,\hspace{0.1cm} {\it Δ}y → 0$,  and the joint PMF  $P_{X\hspace{0.01cm}′\hspace{0.08cm}Y\hspace{0.01cm}′}$  is also infinitely extended in area.
  • These limit transitions yield the probability density functions of the continuous random variables according to the following equations:
$$f_X(x_{\mu}) = \frac{P_{X\hspace{0.01cm}'}(x_{\mu})}{\it \Delta_x} \hspace{0.05cm}, \hspace{0.3cm}f_Y(y_{\mu}) = \frac{P_{Y\hspace{0.01cm}'}(y_{\mu})}{\it \Delta_y} \hspace{0.05cm}, \hspace{0.3cm}f_{XY}(x_{\mu}\hspace{0.05cm}, y_{\mu}) = \frac{P_{X\hspace{0.01cm}'\hspace{0.03cm}Y\hspace{0.01cm}'}(x_{\mu}\hspace{0.05cm}, y_{\mu})} {{\it \Delta_x} \cdot {\it \Delta_y}} \hspace{0.05cm}.$$
  • The double sum in the above equation, after renaming  $Δx → {\rm d}x$  and  $Δy → {\rm d}y$,  becomes the equation valid for value-continuous random variables:
$$I(X;Y) = \hspace{0.5cm} \int\limits_{\hspace{-0.9cm}y \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{Y}\hspace{-0.08cm})} \hspace{-1.1cm}\int\limits_{\hspace{1.3cm} x \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{X}\hspace{-0.08cm})} \hspace{-0.9cm} f_{XY}(x, y) \cdot {\rm log} \hspace{0.1cm} \frac{ f_{XY}(x, y) } {f_{X}(x) \cdot f_{Y}(y)} \hspace{0.15cm}{\rm d}x\hspace{0.15cm}{\rm d}y \hspace{0.05cm}.$$

$\text{Conclusion:}$  By splitting this double integral, it is also possible to write for the  »mutual information«:

$$I(X;Y) = h(X) + h(Y) - h(XY)\hspace{0.05cm}.$$

The  »joint differential entropy«

$$h(XY) = - \hspace{-0.3cm}\int\limits_{\hspace{-0.9cm}y \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{Y}\hspace{-0.08cm})} \hspace{-1.1cm}\int\limits_{\hspace{1.3cm} x \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{X}\hspace{-0.08cm})} \hspace{-0.9cm} f_{XY}(x, y) \cdot {\rm log} \hspace{0.1cm} \hspace{0.1cm} \big[f_{XY}(x, y) \big] \hspace{0.15cm}{\rm d}x\hspace{0.15cm}{\rm d}y$$

and the two  »differential single entropies«

$$h(X) = -\hspace{-0.7cm} \int\limits_{x \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}\hspace{0.03cm} (\hspace{-0.03cm}f_X)} \hspace{-0.35cm} f_X(x) \cdot {\rm log} \hspace{0.1cm} \big[f_X(x)\big] \hspace{0.1cm}{\rm d}x \hspace{0.05cm},\hspace{0.5cm} h(Y) = -\hspace{-0.7cm} \int\limits_{y \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}\hspace{0.03cm} (\hspace{-0.03cm}f_Y)} \hspace{-0.35cm} f_Y(y) \cdot {\rm log} \hspace{0.1cm} \big[f_Y(y)\big] \hspace{0.1cm}{\rm d}y \hspace{0.05cm}.$$
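As a quick numerical sanity check of these definitions, the following sketch (all names and parameter values are our own choices, not from the text) evaluates a differential entropy by the midpoint rule and compares it with the known closed form  $h(X) = 1/2 · \log_2(2πe σ^2)$  for a Gaussian PDF:

```python
import math

def h_gauss_closed(sigma2):
    # Closed form for a Gaussian: h(X) = 1/2 * log2(2*pi*e*sigma^2)  [bit]
    return 0.5 * math.log2(2 * math.pi * math.e * sigma2)

def h_numeric(pdf, lo, hi, n=200_000):
    # -integral f(x) * log2 f(x) dx, approximated by the midpoint rule
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        f = pdf(lo + (i + 0.5) * dx)
        if f > 0.0:
            total -= f * math.log2(f) * dx
    return total

sigma2 = 1.0
gauss = lambda x: math.exp(-x * x / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

print(h_gauss_closed(sigma2))      # ≈ 2.0471 bit
print(h_numeric(gauss, -10, 10))   # ≈ 2.0471 bit (agrees with the closed form)
```

The same numerical routine works for any of the PDFs discussed in this chapter, as long as the integration range covers the effective support.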

On equivocation and irrelevance


We continue to assume the value-continuous mutual information  $I(X;Y) = h(X) + h(Y) - h(XY)$.  This representation is also found in the following diagram (left graph).

[Figure: Representation of the mutual information for value-continuous random variables]

From this you can see that the mutual information can also be represented as follows:

$$I(X;Y) = h(Y) - h(Y \hspace{-0.1cm}\mid \hspace{-0.1cm} X) =h(X) - h(X \hspace{-0.1cm}\mid \hspace{-0.1cm} Y)\hspace{0.05cm}.$$

These fundamental information-theoretical relationships can also be read from the graph on the right. 

This directional representation is particularly suitable for communication systems.

The outflowing or inflowing differential entropy characterises

  • the  equivocation:
$$h(X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y) = - \hspace{-0.3cm}\int\limits_{\hspace{-0.9cm}y \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{Y}\hspace{-0.08cm})} \hspace{-1.1cm}\int\limits_{\hspace{1.3cm} x \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{X}\hspace{-0.08cm})} \hspace{-0.9cm} f_{XY}(x, y) \cdot {\rm log} \hspace{0.1cm} \hspace{0.1cm} \big [{f_{\hspace{0.03cm}X \mid \hspace{0.03cm} Y} (x \hspace{-0.05cm}\mid \hspace{-0.05cm} y)} \big] \hspace{0.15cm}{\rm d}x\hspace{0.15cm}{\rm d}y,$$
  • the  irrelevance:
$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) = - \hspace{-0.3cm}\int\limits_{\hspace{-0.9cm}y \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{Y}\hspace{-0.08cm})} \hspace{-1.1cm}\int\limits_{\hspace{1.3cm} x \hspace{0.1cm}\in \hspace{0.1cm}{\rm supp}\hspace{0.05cm} (P_{X}\hspace{-0.08cm})} \hspace{-0.9cm} f_{XY}(x, y) \cdot {\rm log} \hspace{0.1cm} \hspace{0.1cm} \big [{f_{\hspace{0.03cm}Y \mid \hspace{0.03cm} X} (y \hspace{-0.05cm}\mid \hspace{-0.05cm} x)} \big] \hspace{0.15cm}{\rm d}x\hspace{0.15cm}{\rm d}y.$$

The significance of these two information-theoretic quantities will be discussed in more detail in  Exercise 4.5Z.

If one compares the graphical representations of the mutual information for value-discrete and for value-continuous random variables, the only distinguishing feature is that each (capital)  $H$  (entropy;  $\ge 0$)  has been replaced by a (lowercase)  $h$  (differential entropy;  can be positive, negative or zero).

  • Otherwise, the mutual information is the same in both representations and  $I(X; Y) ≥ 0$  always applies.
  • In the following, we mostly use the  "binary logarithm"   ⇒   $\log_2$  and thus obtain the mutual information with the pseudo-unit  "bit".


Calculation of mutual information with additive noise


We now consider a very simple model of message transmission:

  • The random variable  $X$  stands for the  (zero-mean)  transmitted signal and is characterised by the PDF  $f_X(x)$  and the variance  $σ_X^2$.  The transmission power is  $P_X = σ_X^2$.
  • The additive noise  $N$  is given by the  (zero-mean)  PDF  $f_N(n)$  and the noise power  $P_N = σ_N^2$.
  • If  $X$  and  $N$  are assumed to be statistically independent   ⇒   signal-independent noise, then  $\text{E}\big[X · N \big] = \text{E}\big[X \big] · \text{E}\big[N\big] = 0$.
  • The received signal is  $Y = X + N$.  The output PDF  $f_Y(y)$  can be calculated with the convolution operation    ⇒   $f_Y(y) = f_X(x) ∗ f_N(n)$.
[Figure: Message transmission system with additive noise]
  • For the received power  (variance)  the following holds:
$$P_Y = \sigma_Y^2 = {\rm E}\big[Y^2\big] = {\rm E}\big[(X+N)^2\big] = {\rm E}\big[X^2\big] + {\rm E}\big[N^2\big] = \sigma_X^2 + \sigma_N^2 $$
$$\Rightarrow \hspace{0.3cm} P_Y = P_X + P_N \hspace{0.05cm}.$$
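This power addition is easy to verify by simulation.  The sketch below (parameter values chosen freely for illustration) draws independent samples of  $X$  and  $N$  and estimates the received power:

```python
import random

random.seed(42)
SAMPLES = 200_000
sigma_x, sigma_n = 1.0, 0.5        # assumed example values: P_X = 1, P_N = 0.25

x = [random.gauss(0.0, sigma_x) for _ in range(SAMPLES)]
n = [random.gauss(0.0, sigma_n) for _ in range(SAMPLES)]
y = [xi + ni for xi, ni in zip(x, n)]

P_Y = sum(v * v for v in y) / SAMPLES
print(P_Y)                          # close to P_X + P_N = 1.25
```

Because  $X$  and  $N$  are drawn independently, the cross term  $\text{E}[X · N]$  averages out and the estimate converges to  $P_X + P_N$.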

The sketched probability density functions  (rectangular or trapezoidal)  are only intended to clarify the calculation process and have no practical relevance.
To calculate the mutual information between input  $X$  and output  $Y$,  there are three possibilities according to the graphic in the previous subchapter:

  • Calculation according to  $I(X, Y) = h(X) + h(Y) - h(XY)$:
The first two terms can be calculated in a simple way from  $f_X(x)$  and  $f_Y(y)$  respectively.  The  "joint differential entropy"  $h(XY)$ is problematic.  For this, one needs the 2D joint PDF  $f_{XY}(x, y)$, which is usually not given directly.
  • Calculation according to  $I(X, Y) = h(Y) - h(Y|X)$:
Here  $h(Y|X)$  denotes the  "differential irrelevance".  It holds that  $h(Y|X) = h(X + N|X) = h(N)$, so that  $I(X; Y)$  is very easy to calculate via the equation  $f_Y(y) = f_X(x) ∗ f_N(n)$  if $f_X(x)$  and $f_N(n)$  are known.
  • Calculation according to  $I(X, Y) = h(X) - h(X|Y)$:
According to this equation, however, one needs the  "differential equivocation"  $h(X|Y)$, which is more difficult to state than $h(Y|X)$.

$\text{Conclusion:}$  In the following we use the middle equation and write for the mutual information between the input  $X$  and the output  $Y$  of a  "transmission system in the presence of additive and uncorrelated noise"  $N$:

$$I(X;Y) \hspace{-0.05cm} = \hspace{-0.01cm} h(Y) \hspace{-0.01cm}- \hspace{-0.01cm}h(N) \hspace{-0.01cm}=\hspace{-0.05cm} -\hspace{-0.7cm} \int\limits_{y \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}(f_Y)} \hspace{-0.65cm} f_Y(y) \cdot {\rm log} \hspace{0.1cm} \big[f_Y(y)\big] \hspace{0.1cm}{\rm d}y +\hspace{-0.7cm} \int\limits_{n \hspace{0.05cm}\in \hspace{0.05cm}{\rm supp}(f_N)} \hspace{-0.65cm} f_N(n) \cdot {\rm log} \hspace{0.1cm} \big[f_N(n)\big] \hspace{0.1cm}{\rm d}n\hspace{0.05cm}.$$
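This recipe can be carried out numerically.  The following sketch (grid parameters, the uniform input and the noise power are our own choices) convolves a uniform input PDF with a Gaussian noise PDF on a grid and evaluates  $I(X;Y) = h(Y) - h(N)$;  since the input is not Gaussian, the result must stay below the Gaussian-input value  $1/2 · \log_2(1 + P_X/P_N)$:

```python
import math

dx = 0.01
M = 800
grid = [i * dx for i in range(-M, M + 1)]   # axis from -8 to +8

a = math.sqrt(3.0)                          # uniform input on [-a, a]  ->  P_X = a^2/3 = 1
f_X = [1.0 / (2 * a) if abs(v) <= a else 0.0 for v in grid]

P_N = 0.25                                  # assumed noise power
f_N = [math.exp(-v * v / (2 * P_N)) / math.sqrt(2 * math.pi * P_N) for v in grid]

# output PDF via discrete convolution:  f_Y(y) = (f_X * f_N)(y)
support = [j for j, v in enumerate(f_X) if v > 0.0]
f_Y = []
for i in range(len(grid)):
    s = 0.0
    for j in support:
        k = i - j + M                       # index of grid[i] - grid[j]
        if 0 <= k < len(grid):
            s += f_X[j] * f_N[k]
    f_Y.append(s * dx)

h_Y = -sum(f * math.log2(f) * dx for f in f_Y if f > 0.0)
h_N = 0.5 * math.log2(2 * math.pi * math.e * P_N)   # closed form for Gaussian noise
I = h_Y - h_N
C = 0.5 * math.log2(1.0 + 1.0 / P_N)                # Gaussian-input value, P_X = 1
print(I, C)                                          # 0 < I < C must hold
```

Note that  $h(N)$  is taken from the closed form, exactly as the equation above suggests: only  $h(Y)$  requires the convolution  $f_Y(y) = f_X(x) ∗ f_N(n)$.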


Channel capacity of the AWGN channel


[Figure: Derivation of the AWGN channel capacity]

If one specifies the probability density function of the noise in the previous  general system model  as Gaussian corresponding to

$$f_N(n) = \frac{1}{\sqrt{2\pi \sigma_N^2}} \cdot {\rm e}^{ - \hspace{0.05cm}{n^2}/(2 \sigma_N^2) } \hspace{0.05cm}, $$

we obtain the model sketched on the right for calculating the channel capacity of the so-called  AWGN channel  ("Additive White Gaussian Noise"). 

In the following, we usually replace the variance  $\sigma_N^2$  by the power  $P_N$.

We know from previous sections:

  • The  channel capacity  $C_{\rm AWGN}$  specifies the maximum mutual information  $I(X; Y)$  between the input quantity  $X$  and the output quantity  $Y$  of the AWGN channel.  The maximisation refers to the best possible input PDF.  Thus, under the  power constraint the following applies:
$$C_{\rm AWGN} = \max_{f_X:\hspace{0.1cm} {\rm E}[X^2 ] \le P_X} \hspace{-0.35cm} I(X;Y) = -h(N) + \max_{f_X:\hspace{0.1cm} {\rm E}[X^2] \le P_X} \hspace{-0.35cm} h(Y) \hspace{0.05cm}.$$
It is already taken into account that the maximisation relates solely to the differential entropy  $h(Y)$   ⇒   probability density function  $f_Y(y)$.  Indeed, for a given noise power  $P_N$,  $h(N) = 1/2 · \log_2 (2π{\rm e} · P_N)$  is a constant.
  • The maximum for  $h(Y)$  is obtained for a Gaussian PDF  $f_Y(y)$  with  $P_Y = P_X + P_N$,  see the page  "Maximum differential entropy under power constraint":
$${\rm max}\big[h(Y)\big] = 1/2 · \log_2 \big[2πe · (P_X + P_N)\big].$$
  • However, the output PDF  $f_Y(y) = f_X(x) ∗ f_N(n)$  is Gaussian only if both  $f_X(x)$  and  $f_N(n)$  are Gaussian functions.  A striking saying about the convolution operation is:  Gaussian remains Gaussian, and non-Gaussian never becomes (exactly) Gaussian.


[Figure: Numerical results for the AWGN channel capacity as a function of  ${P_X}/{P_N}$]

$\text{Conclusion:}$  For the AWGN channel   ⇒   Gaussian noise PDF  $f_N(n)$,  the channel capacity is obtained exactly when the input PDF  $f_X(x)$  is also Gaussian:

$$C_{\rm AWGN} = h_{\rm max}(Y) - h(N) = 1/2 \cdot {\rm log}_2 \hspace{0.1cm} {P_Y}/{P_N}$$
$$\Rightarrow \hspace{0.3cm} C_{\rm AWGN}= 1/2 \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + P_X/P_N) \hspace{0.05cm}.$$
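For orientation, this small sketch (our own helper, directly implementing the formula above) evaluates the capacity for a few signal-to-noise ratios:

```python
import math

def c_awgn(px_over_pn):
    # C_AWGN = 1/2 * log2(1 + P_X/P_N)  in bit per channel use
    return 0.5 * math.log2(1.0 + px_over_pn)

for snr_db in (0, 10, 20):
    ratio = 10.0 ** (snr_db / 10.0)
    print(f"{snr_db:2d} dB  ->  C = {c_awgn(ratio):.4f} bit")
    # prints C = 0.5000, 1.7297 and 3.3291 bit
```

The logarithmic growth is clearly visible: each additional 10 dB adds only about 1.66 bit per channel use at high SNR.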


Parallel Gaussian channels


[Figure: Parallel AWGN channels]

We now consider, according to the graph,  $K$  parallel Gaussian channels  $X_1 → Y_1$,  ... ,  $X_k → Y_k$,  ... ,  $X_K → Y_K$.

  • We call the transmission powers in the  $K$  channels
$$P_1 = \text{E}[X_1^2], \hspace{0.15cm}\text{...}\hspace{0.15cm} ,\ P_k = \text{E}[X_k^2], \hspace{0.15cm}\text{...}\hspace{0.15cm} ,\ P_K = \text{E}[X_K^2].$$
  • The  $K$  noise powers can also be different:
$$σ_1^2, \hspace{0.15cm}\text{...}\hspace{0.15cm} ,\ σ_k^2, \hspace{0.15cm}\text{...}\hspace{0.15cm} ,\ σ_K^2.$$


We are now looking for the maximum mutual information  $I(X_1, \hspace{0.15cm}\text{...}\hspace{0.15cm}, X_K\hspace{0.05cm};\hspace{0.05cm}Y_1, \hspace{0.15cm}\text{...}\hspace{0.15cm}, Y_K) $  between

  • the  $K$  input variables  $X_1$,  ... , $X_K$  and
  • the  $K$ output variables  $Y_1$ , ... , $Y_K$,


which we call the  total channel capacity  of this AWGN configuration.

$\text{Agreement:}$  Assume a power constraint for the total system.  That is:  the sum of all powers  $P_k$  in the  $K$  individual channels must not exceed the specified value  $P_X$:

$$P_1 + \hspace{0.05cm}\text{...}\hspace{0.05cm}+ P_K = \hspace{0.1cm} \sum_{k= 1}^K \hspace{0.1cm}{\rm E} \left [ X_k^2\right ] \le P_{X} \hspace{0.05cm}.$$


Under the only slightly restrictive assumption of independent noise sources  $N_1$,  ... ,  $N_K$,  one can write for the mutual information after some intermediate steps:

$$I(X_1, \hspace{0.05cm}\text{...}\hspace{0.05cm}, X_K\hspace{0.05cm};\hspace{0.05cm}Y_1,\hspace{0.05cm}\text{...}\hspace{0.05cm}, Y_K) = h(Y_1, ... \hspace{0.05cm}, Y_K ) - \hspace{0.1cm} \sum_{k= 1}^K \hspace{0.1cm} h(N_k)\hspace{0.05cm}.$$

The following upper bound can be specified for this:

$$I(X_1,\hspace{0.05cm}\text{...}\hspace{0.05cm}, X_K\hspace{0.05cm};\hspace{0.05cm}Y_1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, Y_K) \hspace{0.2cm} \le \hspace{0.1cm} \hspace{0.1cm} \sum_{k= 1}^K \hspace{0.1cm} \big[h(Y_k) - h(N_k)\big] \hspace{0.2cm} \le \hspace{0.1cm} 1/2 \cdot \sum_{k= 1}^K \hspace{0.1cm} {\rm log}_2 \hspace{0.1cm} ( 1 + {P_k}/{\sigma_k^2}) \hspace{0.05cm}.$$
  • The equal sign (identity) is valid for zero-mean Gaussian input variables  $X_k$  as well as for statistically independent disturbances  $N_k$.
  • One arrives from this equation at the  maximum mutual information   ⇒   channel capacity  if the total transmission power  $P_X$  is distributed in the best possible way, taking into account the different disturbances of the individual channels  $(σ_k^2)$.
  • This optimisation problem can again be solved elegantly with the method of  Lagrange multipliers.  The following example only explains the result.
[Figure: Best possible power division for  $K = 4$  ("water-filling")]

$\text{Example 1:}$  We consider  $K = 4$  parallel Gaussian channels with four different noise powers  $σ_1^2$,  ... ,  $σ_4^2$  according to the adjacent figure  (faint green background).

  • The best possible distribution of the transmission power among the four channels is sought.
  • If one were to slowly fill this profile with water, the water would initially flow only into  $\text{channel 2}$.
  • If one continues to pour, some water will also accumulate in  $\text{channel 1}$  and later also in  $\text{channel 4}$.


The drawn  "water level"  $H$  describes exactly the point at which the sum  $P_1 + P_2 + P_4$  corresponds to the total available transmission power  $P_X$:

  • The optimal power distribution for this example results in  $P_2 > P_1 > P_4$  as well as  $P_3 = 0$.
  • Only with a larger transmission power  $P_X$  would a small power  $P_3$  also be allocated to the third channel.


This allocation procedure is called the  "water-filling"  algorithm.
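The water-filling idea translates directly into a short bisection on the water level  $H$.  The sketch below is our own; the noise profile is hypothetical and merely mimics the ordering of the  $K = 4$  example (channel 2 quietest, channel 3 loudest):

```python
def water_filling(noise_powers, p_total, tol=1e-9):
    # Find the water level H such that sum_k max(0, H - sigma_k^2) = p_total,
    # then allocate P_k = max(0, H - sigma_k^2).
    lo, hi = min(noise_powers), max(noise_powers) + p_total
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        used = sum(max(0.0, level - s2) for s2 in noise_powers)
        if used > p_total:
            hi = level      # too much water poured in -> lower the level
        else:
            lo = level
    return [max(0.0, lo - s2) for s2 in noise_powers]

sigma2 = [2.0, 1.0, 6.0, 3.0]      # hypothetical noise powers for channels 1..4
P = water_filling(sigma2, p_total=4.0)
print([round(p, 3) for p in P])    # [1.333, 2.333, 0.0, 0.333]  ->  P_2 > P_1 > P_4, P_3 = 0
```

The bisection works because the "used power" is a monotonically increasing function of the level  $H$;  channels whose noise power exceeds the final level receive nothing, exactly as in the example.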


$\text{Example 2:}$  If all  $K$  Gaussian channels are equally disturbed   ⇒   $σ_1^2 = \hspace{0.15cm}\text{...}\hspace{0.15cm} = σ_K^2 = P_N$,  one should naturally distribute the total available transmission power  $P_X$  equally to all channels:   $P_k = P_X/K$.  For the total capacity one then obtains:

[Figure: Capacity for  $K$  parallel channels]
$$C_{\rm total} = \frac{ K}{2} \cdot {\rm log}_2 \hspace{0.1cm} ( 1 + \frac{P_X}{K \cdot P_N}) \hspace{0.05cm}.$$

The graph shows the total capacity as a function of  $P_X/P_N$  for  $K = 1$,  $K = 2$  and  $K = 3$:

  • For  $P_X/P_N = 10 \ ⇒ \ 10 · \text{lg} (P_X/P_N) = 10 \ \text{dB}$,  the total capacity becomes approximately  $50\%$  larger if the total power  $P_X$  is divided equally between two channels:   $P_1 = P_2 = P_X/2$.
  • In the borderline case  $P_X/P_N → ∞$ , the total capacity increases by a factor  $K$    ⇒   doubling at $K = 2$.
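The  $50\%$  statement can be checked directly with our own helper, following the total-capacity formula above:

```python
import math

def c_total(K, px_over_pn):
    # C_total = K/2 * log2(1 + (P_X/P_N)/K) for K equally disturbed channels
    return 0.5 * K * math.log2(1.0 + px_over_pn / K)

ratio = 10.0                               # i.e. 10 * lg(P_X/P_N) = 10 dB
c1, c2, c3 = (c_total(K, ratio) for K in (1, 2, 3))
print(c1, c2, c2 / c1)                     # gain of K = 2 over K = 1: factor ~1.49
```

At 10 dB the factor is about 1.49, i.e. roughly the quoted 50 % gain; as  $P_X/P_N → ∞$  the argument of the logarithm dominates and the total capacity grows by the full factor  $K$.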


The two identical and independent channels can be realised in different ways, for example by multiplexing in time, frequency or space.

However, the case  $K = 2$  can also be realised by using orthogonal basis functions such as  "cosine"  and  "sine".

Relevant tasks


Exercise 4.5: Mutual Information from 2D-PDF

Exercise 4.5Z: Mutual Information Once Again

Exercise 4.6: AWGN Channel Capacity

Exercise 4.7: Several Parallel Gaussian Channels

Exercise 4.7Z: On the Water-Filling Algorithm