{{quiz-Header|Buchseite=Information_Theory/AWGN_Channel_Capacity_for_Continuous_Input
 
}}

[[File:P_ID2899__Inf_A_4_6.png|right|frame|Flowchart of the information]]
We start from the&nbsp; [[Information_Theory/AWGN–Kanalkapazität_bei_wertkontinuierlichem_Eingang#Channel_capacity_of_the_AWGN_channel|AWGN channel model]]:
* $X$  denotes the input (transmitter).
* $N$&nbsp; stands for Gaussian distributed noise.
* $Y = X + N$&nbsp; describes the output (receiver) in the case of additive noise.
  
For the probability density function&nbsp; (PDF)&nbsp; of the noise, the following holds:
:$$f_N(n) = \frac{1}{\sqrt{2\pi \hspace{0.03cm}\sigma_{\hspace{-0.05cm}N}^2}} \cdot {\rm e}^{
- \hspace{0.05cm}{n^2}\hspace{-0.05cm}/{(2 \hspace{0.03cm} \sigma_{\hspace{-0.05cm}N}^2) }} \hspace{0.05cm}.$$
Since the random variable&nbsp; $N$&nbsp; is zero mean &nbsp; &#8658; &nbsp; $m_N = 0$,&nbsp; we can equate the variance&nbsp; $\sigma_{\hspace{-0.05cm}N}^2$&nbsp; with the power&nbsp; $P_N$.&nbsp; In this case, the differential entropy of the random variable&nbsp; $N$&nbsp; can be given&nbsp; (with the pseudo&ndash;unit&nbsp; "bit")&nbsp; as follows:
:$$h(N) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot P_N \right )\hspace{0.05cm}.$$
In this exercise,&nbsp; $P_N = 1 \hspace{0.15cm} \rm mW$&nbsp; is given.&nbsp; It should be noted:
  
* The power&nbsp; $P_N$&nbsp; in the above equation, like the variance&nbsp; $\sigma_{\hspace{-0.05cm}N}^2$,&nbsp; must be dimensionless.
  
* To work with this equation, the physical quantity&nbsp; $P_N$&nbsp; must be suitably normalized, for example according to&nbsp; $P_N = 1 \hspace{0.15cm} \rm mW$ &nbsp;&nbsp; &#8658; &nbsp;&nbsp; $P_N\hspace{0.01cm}' = 1$.
  
* With a different normalization, for example&nbsp; $P_N = 1 \hspace{0.15cm} \rm mW$ &nbsp;&nbsp; &#8658; &nbsp;&nbsp; $P_N\hspace{0.01cm}' = 0.001$,&nbsp; a completely different numerical value would result for&nbsp; $h(N)$,&nbsp; as illustrated by the sketch below.
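
The dependence on the normalization can also be checked numerically. The following short Python sketch is our own illustration (not part of the original exercise; the helper name <tt>h_gauss</tt> is freely chosen) and evaluates the differential entropy for the two normalizations mentioned above:

<syntaxhighlight lang="python">
import math

def h_gauss(p):
    """Differential entropy in bit of a zero-mean Gaussian variable with dimensionless power p."""
    return 0.5 * math.log2(2 * math.pi * math.e * p)

print(h_gauss(1.0))      # P_N' = 1      (power expressed in mW)  ->  about +2.047 bit
print(h_gauss(0.001))    # P_N' = 0.001  (power expressed in W)   ->  about -2.936 bit
</syntaxhighlight>

The physical noise power is identical in both calls; only the normalization differs, and with it the value of&nbsp; $h(N)$.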
  
Furthermore,&nbsp; you can take the following into account when solving this exercise:
* The channel capacity is defined as the maximum mutual information between input&nbsp; $X$&nbsp; and output&nbsp; $Y$&nbsp; with the best possible input distribution:
:$$C = \max_{\hspace{-0.15cm}f_X:\hspace{0.05cm} {\rm E}[X^2] \le P_X} \hspace{-0.2cm}  I(X;Y)   
 
\hspace{0.05cm}.$$
  
*The channel capacity of the AWGN channel is:
:$$C_{\rm AWGN} = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + \frac{P_X}{P_N} \right )
= {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + \frac{P_{\hspace{-0.05cm}X}\hspace{0.01cm}'}{P_{\hspace{-0.05cm}N}\hspace{0.01cm}'} \right )\hspace{0.05cm}.$$
:It can be seen:&nbsp; The channel capacity&nbsp; $C$&nbsp; and also the mutual information&nbsp; $I(X;Y)$&nbsp; are independent of the above normalization, in contrast to the differential entropies&nbsp; (see also the sketch after this list).
* With a Gaussian noise PDF&nbsp; $f_N(n)$,&nbsp; a likewise Gaussian input PDF&nbsp; $f_X(x)$&nbsp; leads to the maximum mutual information and thus to the channel capacity.
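
The statement that the channel capacity does not depend on the chosen normalization can be verified with a minimal Python sketch (our own addition; all variable names are assumptions):

<syntaxhighlight lang="python">
import math

def c_awgn(p_x, p_n):
    """AWGN channel capacity in bit per channel use for transmit power p_x and noise power p_n."""
    return 0.5 * math.log2(1 + p_x / p_n)

print(round(c_awgn(15e-3, 1e-3), 6))   # powers in watts    ->  2.0 bit
print(round(c_awgn(15.0, 1.0), 6))     # normalized powers  ->  2.0 bit  (same value, only the ratio matters)
</syntaxhighlight>
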
Hints:
*The exercise belongs to the chapter&nbsp; [[Information_Theory/AWGN–Kanalkapazität_bei_wertkontinuierlichem_Eingang|AWGN channel capacity with continuous input]].
*Since the results are to be given in&nbsp; "bit",&nbsp; "log"&nbsp; is to be understood as&nbsp; "log<sub>2</sub>"&nbsp; in the equations.
===Questions===
  
 
<quiz display=simple>
 
{What transmission power is required for&nbsp; $C = 2 \ \rm bit$?
|type="{}"}
$P_X \ = \ { 15 3% }\ \rm mW$
{Under which conditions is&nbsp; $I(X;Y) = 2 \ \rm bit$&nbsp; achievable at all?
 
|type="[]"}
 
+ $P_X$&nbsp; is determined as in&nbsp; '''(1)'''&nbsp; or larger.
+ The random variable&nbsp; $X$&nbsp; is Gaussian distributed.
+ The random variable&nbsp; $X$&nbsp; is zero mean.
+ The random variables&nbsp; $X$&nbsp; and&nbsp; $N$&nbsp; are uncorrelated.
- The random variables&nbsp; $X$&nbsp; and&nbsp; $Y$&nbsp; are uncorrelated.
  
  
{Calculate the differential entropies of the random variables&nbsp; $N$,&nbsp; $X$&nbsp; and&nbsp; $Y$&nbsp; with appropriate normalization, <br>for example,&nbsp; $P_N = 1 \hspace{0.15cm} \rm mW$ &nbsp;&nbsp; &#8658; &nbsp;&nbsp; $P_N\hspace{0.01cm}' = 1$.
 
|type="{}"}
 
$h(N) \ = \ { 2.047 3% }\ \rm bit$
$h(X) \ = \ { 4 3% }\ \rm bit$
$h(Y) \ = \ { 4.047 3% }\ \rm bit$
{What are the other information-theoretic descriptive quantities?
|type="{}"}
$h(Y|X) \ = \ { 2.047 3% }\ \rm bit$
$h(X|Y) \ = \ { 2 3% }\ \rm bit$
$h(XY) \ = \ { 6.047 3% }\ \rm bit$
{What quantities would result for the same&nbsp; $P_X$&nbsp; in the limiting case&nbsp; $P_N\hspace{0.01cm}' \to 0$?
|type="{}"}
$h(X) \ = \ { 4 3% }\ \rm bit$
$h(Y|X) \ = \ { 0 3% }\ \rm bit$
$h(Y) \ = \ { 4 3% }\ \rm bit$
$I(X;Y) \ = \ { 4 3% }\ \rm bit$
$h(X|Y) \ = \ { 0. }\ \rm bit$
  
  
</quiz>
  
===Solution===
 
{{ML-Kopf}}
'''(1)'''&nbsp; The equation for the AWGN channel capacity in&nbsp; "bit"&nbsp; is:
:$$C_{\rm bit} = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + {P_X}/{P_N} \right )\hspace{0.05cm}.$$
:With&nbsp; $C_{\rm bit} = 2$&nbsp; this results in:
:$$4 \stackrel{!}{=} {\rm log}_2\hspace{0.05cm}\left ( 1 + {P_X}/{P_N} \right )
\hspace{0.3cm}\Rightarrow \hspace{0.3cm} 1 + {P_X}/{P_N} \stackrel {!}{=} 2^4 = 16
\hspace{0.3cm}\Rightarrow \hspace{0.3cm} P_X = 15 \cdot P_N
\hspace{0.15cm}\underline{= 15\,{\rm mW}}
\hspace{0.05cm}. $$
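
The numerical value can also be obtained by inverting the capacity equation for the transmit power; a small Python sketch of ours (variable names assumed):

<syntaxhighlight lang="python">
C_bit = 2.0      # required channel capacity in bit
P_N   = 1.0      # given noise power in mW

# Invert  C = 1/2 * log2(1 + P_X/P_N)  for the transmit power P_X:
P_X = (2 ** (2 * C_bit) - 1) * P_N
print(P_X)       # 15.0  (mW)
</syntaxhighlight>
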
'''(2)'''&nbsp; Correct are&nbsp; <u>proposed solutions 1 through 4</u>.&nbsp; Justification:
* For&nbsp; $P_X < 15 \ \rm mW$&nbsp; the mutual information&nbsp; $I(X;Y)$&nbsp; will always be less than&nbsp; $2$&nbsp; bit,&nbsp; regardless of all other conditions.
* With&nbsp; $P_X = 15 \ \rm mW$&nbsp; the maximum mutual information&nbsp; $I(X;Y) = 2$&nbsp; bit is only achievable if the input quantity&nbsp; $X$&nbsp; is Gaussian distributed.&nbsp; <br>The output quantity&nbsp; $Y$&nbsp; is then also Gaussian distributed.
* If the random variable&nbsp; $X$&nbsp; has a constant component&nbsp; (mean)&nbsp; $m_X \ne 0$,&nbsp; then for given&nbsp; $P_X$&nbsp; the variance&nbsp; $\sigma_X^2 = P_X - m_X^2$&nbsp; is smaller, and it holds &nbsp; <br>$I(X;Y) = 1/2 \cdot {\rm log}_2 \hspace{0.05cm}(1 + \sigma_X^2/P_N) < 2$&nbsp; bit&nbsp; (see the sketch after this list).
* The precondition for the given channel capacity equation is that&nbsp; $X$&nbsp; and&nbsp; $N$&nbsp; are uncorrelated.&nbsp; If, on the other hand, the random variables&nbsp; $X$&nbsp; and&nbsp; $Y$&nbsp; were uncorrelated, then&nbsp; $I(X;Y) = 0$&nbsp; would result.
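
The effect of a constant component can be made concrete with a few lines of Python (our own sketch; the chosen values of&nbsp; $m_X$&nbsp; are purely illustrative):

<syntaxhighlight lang="python">
import math

P_X, P_N = 15.0, 1.0                   # normalized powers:  P_X' = 15,  P_N' = 1

def i_max(m_x):
    """Maximum mutual information in bit if X has mean m_x but unchanged total power P_X."""
    sigma2_x = P_X - m_x ** 2          # only the variance contributes to the useful signal
    return 0.5 * math.log2(1 + sigma2_x / P_N)

for m_x in (0.0, 1.0, 2.0, 3.0):
    print(m_x, round(i_max(m_x), 3))   # 2.0 bit only for m_x = 0, strictly less otherwise
</syntaxhighlight>
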
'''(3)'''&nbsp; The given equation for differential entropy makes sense only for dimensionless power.&nbsp; With the proposed normalization, one obtains:
[[File: P_ID2901__Inf_A_4_6c.png |right|frame|Information-theoretical values with the AWGN channel]]
* For&nbsp; $P_N = 1 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; &nbsp;$P_N\hspace{0.05cm}' = 1$:
:$$h(N) \  =  \ {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot 1 \right )
  =  \ {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 17.08 \right )
\hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm},$$
* For&nbsp; $P_X = 15 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; &nbsp;$P_X\hspace{0.01cm}' = 15$:
:$$h(X) \  =  \ {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e} \cdot 15 \right )  =  {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 2\pi {\rm e}  \right ) +
{1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left (15 \right )
\hspace{0.15cm}\underline{= 4.000\,{\rm bit}}\hspace{0.05cm}, $$
* For&nbsp; $P_Y = P_X + P_N = 16 \ \rm mW$&nbsp; &nbsp;&#8658;&nbsp; $P_Y\hspace{0.01cm}' = 16$:
:$$h(Y) = 2.047\,{\rm bit} + 2.000\,{\rm bit}
\hspace{0.15cm}\underline{= 4.047\,{\rm bit}}\hspace{0.05cm}.$$
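
These three values can be reproduced with the following Python sketch (our own addition; the helper name is freely chosen):

<syntaxhighlight lang="python">
import math

def h_gauss(p):
    """Differential entropy in bit of a zero-mean Gaussian variable with dimensionless power p."""
    return 0.5 * math.log2(2 * math.pi * math.e * p)

print(h_gauss(1))    # h(N) for P_N' = 1   ->  2.0471...  (approx. 2.047 bit)
print(h_gauss(15))   # h(X) for P_X' = 15  ->  4.0005...  (approx. 4.000 bit)
print(h_gauss(16))   # h(Y) for P_Y' = 16  ->  4.0471...  (approx. 4.047 bit)
</syntaxhighlight>
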
'''(4)'''&nbsp; The differential irrelevance for the AWGN channel:
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) = h(N) \hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm}.$$
*However,&nbsp; according to the adjacent graph,&nbsp; the following also holds:
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) = h(Y) - I(X;Y) = 4.047 \,{\rm bit} - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 2.047\,{\rm bit}}\hspace{0.05cm}.$$
*From this,&nbsp; the differential equivocation can be calculated as follows:
:$$h(X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y) = h(X) - I(X;Y) = 4.000 \,{\rm bit} - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 2.000\,{\rm bit}}\hspace{0.05cm}.$$
[[File: P_ID2900__Inf_A_4_6e.png |right|frame|Information-theoretical values with the ideal channel]]
*Finally,&nbsp; the differential joint entropy is also given,&nbsp; which cannot be read directly from the above diagram:
:$$h(XY) = h(X) + h(Y) - I(X;Y) = 4.000 \,{\rm bit} + 4.047 \,{\rm bit}  - 2 \,{\rm bit} \hspace{0.15cm}\underline{= 6.047\,{\rm bit}}\hspace{0.05cm}.$$
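
Numerically, continuing the sketch from point&nbsp; '''(3)'''&nbsp; (again our own illustration, not part of the original solution):

<syntaxhighlight lang="python">
import math

h_gauss = lambda p: 0.5 * math.log2(2 * math.pi * math.e * p)
h_N, h_X, h_Y = h_gauss(1), h_gauss(15), h_gauss(16)
I_XY = 0.5 * math.log2(1 + 15 / 1)     # mutual information: exactly 2 bit

print(h_Y - I_XY)                      # irrelevance  h(Y|X)  ->  2.0471...  (= h_N, approx. 2.047 bit)
print(h_X - I_XY)                      # equivocation h(X|Y)  ->  2.0005...  (approx. 2.000 bit)
print(h_X + h_Y - I_XY)                # joint entropy h(XY)  ->  6.0476...  (the solution rounds to 6.047 bit)
</syntaxhighlight>
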
'''(5)'''&nbsp; For the ideal channel with&nbsp; $h(X)\hspace{0.15cm}\underline{=  4.000 \,{\rm bit}}$:
:$$h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X) \ = \ h(N) \hspace{0.15cm}\underline{= 0\,{\rm (bit)}}\hspace{0.05cm},$$
:$$h(Y) \ = \ h(X) \hspace{0.15cm}\underline{= 4\,{\rm bit}}\hspace{0.05cm},$$
:$$I(X;Y) \ = \ h(Y) - h(Y \hspace{-0.05cm}\mid \hspace{-0.05cm} X)\hspace{0.15cm}\underline{= 4\,{\rm bit}}\hspace{0.05cm},$$
:$$h(X \hspace{-0.05cm}\mid \hspace{-0.05cm} Y) \ = \ h(X) - I(X;Y)\hspace{0.15cm}\underline{= 0\,{\rm (bit)}}\hspace{0.05cm}.$$
*The graph shows these quantities in a flowchart.&nbsp; The same diagram would result in the discrete-valued case with&nbsp; $M = 16$&nbsp; equally probable symbols &nbsp; &#8658; &nbsp; $H(X) = 4.000\,{\rm bit}$.
*One would only have to replace each&nbsp; $h$&nbsp; by an&nbsp; $H$.
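
The analogy to the discrete case can be checked directly (a short sketch of ours):

<syntaxhighlight lang="python">
import math

M = 16                  # number of equally probable symbols in the discrete case
print(math.log2(M))     # H(X) = 4.0 bit, matching h(X) = 4.000 bit of the continuous example above
</syntaxhighlight>
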
{{ML-Fuß}}
  
  
  
[[Category:Information Theory: Exercises|^4.2 AWGN and Value-Continuous Input^]]
