Exercise 3.10: Mutual Information at the BSC
We consider the Binary Symmetric Channel (BSC). The following parameter values are valid for the whole exercise:
- Crossover probability:  $\varepsilon = 0.1$,
- Probability for  $0$:  $p_0 = 0.2$,
- Probability for  $1$:  $p_1 = 0.8$.
Thus the probability mass function of the source is  $P_X(X) = (0.2,\ 0.8)$,  and the source entropy is:
- $H(X) = p_0 \cdot \log_2 \frac{1}{p_0} + p_1 \cdot \log_2 \frac{1}{p_1} = H_{\rm bin}(0.2) = 0.7219\ {\rm bit}.$
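For a quick numerical check, the binary entropy function can be evaluated with a minimal Python sketch (the helper name h_bin is ours, not part of the exercise); it reproduces  $H(X) = H_{\rm bin}(0.2) \approx 0.7219$  bit:

```python
from math import log2

def h_bin(p):
    """Binary entropy function H_bin(p) in bit, for 0 < p < 1."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

p0 = 0.2                      # Pr(X = 0)
print(round(h_bin(p0), 4))    # -> 0.7219  (source entropy H(X) in bit)
```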
The task is to determine:
- the probability function of the sink:
- $P_Y(Y) = \big(P_Y(0),\ P_Y(1)\big),$
- the joint probability function:
- $P_{XY}(X,Y) = \begin{pmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{pmatrix},$
- the mutual information:
- $I(X;Y) = {\rm E}\left[\log_2 \frac{P_{XY}(X,Y)}{P_X(X) \cdot P_Y(Y)}\right],$
- the equivocation:
- $H(X|Y) = {\rm E}\left[\log_2 \frac{1}{P_{X|Y}(X|Y)}\right],$
- the irrelevance:
- $H(Y|X) = {\rm E}\left[\log_2 \frac{1}{P_{Y|X}(Y|X)}\right].$
Hints:
- The exercise belongs to the chapter Application to Digital Signal Transmission.
- Reference is made in particular to the page Mutual information calculation for the binary channel.
- In Exercise 3.10Z the channel capacity  $C_{\rm BSC}$  of the BSC model is calculated.
- It is obtained as the maximum of the mutual information  $I(X;Y)$  over the symbol probabilities  $p_0$  and  $p_1 = 1 - p_0$  (see the numerical sketch below).
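As an illustration of this hint (not part of the original exercise), the following Python sketch maximizes  $I(X;Y)$  numerically over  $p_0$  by a simple grid search; for  $\varepsilon = 0.1$  the maximum lies at  $p_0 = 0.5$  and agrees with the known closed form  $C_{\rm BSC} = 1 - H_{\rm bin}(\varepsilon) \approx 0.531$  bit:

```python
from math import log2

eps = 0.1  # crossover probability

def mutual_info(p0, eps):
    """I(X;Y) of the BSC in bit, for source probabilities (p0, 1 - p0)."""
    p1 = 1 - p0
    px = [p0, p1]
    pxy = [[p0 * (1 - eps), p0 * eps],             # joint probabilities P_XY(x, y)
           [p1 * eps, p1 * (1 - eps)]]
    py = [pxy[0][y] + pxy[1][y] for y in (0, 1)]   # sink probabilities
    return sum(pxy[x][y] * log2(pxy[x][y] / (px[x] * py[y]))
               for x in (0, 1) for y in (0, 1))

# coarse grid search over p0 (endpoints excluded to avoid log(0))
grid = [i / 1000 for i in range(1, 1000)]
best_p0 = max(grid, key=lambda p: mutual_info(p, eps))
print(best_p0, round(mutual_info(best_p0, eps), 3))   # -> 0.5 0.531  (capacity C_BSC)
print(round(mutual_info(0.2, eps), 4))                # -> 0.3578    (this exercise)
```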
Questions
Solution
(1) In general, and with the numerical values  $p_0 = 0.2$  and  $\varepsilon = 0.1$,  the quantities sought are:
- $P_{XY}(0,0) = p_0 \cdot (1 - \varepsilon) = \underline{0.18}, \quad P_{XY}(0,1) = p_0 \cdot \varepsilon = \underline{0.02},$
- $P_{XY}(1,0) = p_1 \cdot \varepsilon = \underline{0.08}, \quad P_{XY}(1,1) = p_1 \cdot (1 - \varepsilon) = \underline{0.72}.$
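A minimal check of these joint probabilities (plain Python, variable names ours):

```python
p0, p1, eps = 0.2, 0.8, 0.1

# joint probabilities P_XY(x, y) = Pr(X = x) * Pr(Y = y | X = x)
pxy = {(0, 0): p0 * (1 - eps),
       (0, 1): p0 * eps,
       (1, 0): p1 * eps,
       (1, 1): p1 * (1 - eps)}
print({k: round(v, 2) for k, v in pxy.items()})
# -> {(0, 0): 0.18, (0, 1): 0.02, (1, 0): 0.08, (1, 1): 0.72}
```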
(2) In general:
- $P_Y(Y) = \big[{\rm Pr}(Y = 0),\ {\rm Pr}(Y = 1)\big] = (p_0,\ p_1) \cdot \begin{pmatrix} 1 - \varepsilon & \varepsilon \\ \varepsilon & 1 - \varepsilon \end{pmatrix}.$
This gives the following numerical values:
- ${\rm Pr}(Y = 0) = p_0 \cdot (1 - \varepsilon) + p_1 \cdot \varepsilon = 0.2 \cdot 0.9 + 0.8 \cdot 0.1 = \underline{0.26},$
- ${\rm Pr}(Y = 1) = p_0 \cdot \varepsilon + p_1 \cdot (1 - \varepsilon) = 0.2 \cdot 0.1 + 0.8 \cdot 0.9 = \underline{0.74}.$
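The same vector-matrix product can be reproduced with a short sketch (same assumed variable names as above):

```python
p0, p1, eps = 0.2, 0.8, 0.1
channel = [[1 - eps, eps],          # P_{Y|X}(y|x): rows = x, columns = y
           [eps, 1 - eps]]

# Pr(Y = y) = p0 * P_{Y|X}(y|0) + p1 * P_{Y|X}(y|1)
py = [p0 * channel[0][y] + p1 * channel[1][y] for y in (0, 1)]
print([round(p, 2) for p in py])    # -> [0.26, 0.74]
```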
(3) For the mutual information, according to the definition, with  $p_0 = 0.2$,  $p_1 = 0.8$  and  $\varepsilon = 0.1$:
- $I(X;Y) = {\rm E}\left[\log_2 \frac{P_{XY}(X,Y)}{P_X(X) \cdot P_Y(Y)}\right] \Rightarrow$
- $I(X;Y) = 0.18 \cdot \log_2 \frac{0.18}{0.2 \cdot 0.26} + 0.02 \cdot \log_2 \frac{0.02}{0.2 \cdot 0.74} + 0.08 \cdot \log_2 \frac{0.08}{0.8 \cdot 0.26} + 0.72 \cdot \log_2 \frac{0.72}{0.8 \cdot 0.74} = \underline{0.3578\ {\rm bit}}.$
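The four-term sum can be verified numerically (a self-contained sketch under the same assumptions as the earlier snippets):

```python
from math import log2

p0, p1, eps = 0.2, 0.8, 0.1
px  = [p0, p1]
pxy = [[p0 * (1 - eps), p0 * eps],              # joint probabilities P_XY(x, y)
       [p1 * eps, p1 * (1 - eps)]]
py  = [pxy[0][y] + pxy[1][y] for y in (0, 1)]   # sink probabilities (0.26, 0.74)

# I(X;Y) = E[ log2( P_XY / (P_X * P_Y) ) ], expectation taken over the joint PMF
i_xy = sum(pxy[x][y] * log2(pxy[x][y] / (px[x] * py[y]))
           for x in (0, 1) for y in (0, 1))
print(round(i_xy, 4))    # -> 0.3578 bit
```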
(4) With the source entropy  $H(X)$  given above, we obtain for the equivocation:
- $H(X|Y) = H(X) - I(X;Y) = 0.7219 - 0.3578 = \underline{0.3642\ {\rm bit}}.$
- However, one could also apply the general definition with the inference probabilities  $P_{X|Y}(\cdot)$:
- $H(X|Y) = {\rm E}\left[\log_2 \frac{1}{P_{X|Y}(X|Y)}\right] = {\rm E}\left[\log_2 \frac{P_Y(Y)}{P_{XY}(X,Y)}\right].$
- In the example, the same result  $H(X|Y) = 0.3642\ {\rm bit}$  is also obtained according to this calculation rule:
- $H(X|Y) = 0.18 \cdot \log_2 \frac{0.26}{0.18} + 0.02 \cdot \log_2 \frac{0.74}{0.02} + 0.08 \cdot \log_2 \frac{0.26}{0.08} + 0.72 \cdot \log_2 \frac{0.74}{0.72}.$
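Both routes to the equivocation can be checked with a short sketch (same assumptions and variable names as before):

```python
from math import log2

p0, p1, eps = 0.2, 0.8, 0.1
px  = [p0, p1]
pxy = [[p0 * (1 - eps), p0 * eps],
       [p1 * eps, p1 * (1 - eps)]]
py  = [pxy[0][y] + pxy[1][y] for y in (0, 1)]

# route 1: H(X|Y) = H(X) - I(X;Y)
h_x  = sum(p * log2(1 / p) for p in px)
i_xy = sum(pxy[x][y] * log2(pxy[x][y] / (px[x] * py[y]))
           for x in (0, 1) for y in (0, 1))
print(round(h_x - i_xy, 4))        # -> 0.3642 bit

# route 2: H(X|Y) = E[ log2( P_Y / P_XY ) ]
h_x_given_y = sum(pxy[x][y] * log2(py[y] / pxy[x][y])
                  for x in (0, 1) for y in (0, 1))
print(round(h_x_given_y, 4))       # -> 0.3642 bit
```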
(5) The second proposed solution is correct:
- In the case of disturbed transmission  ($\varepsilon > 0$)  the uncertainty regarding the sink is always greater than the uncertainty regarding the source. Here one obtains the numerical value:
- $H(Y) = H_{\rm bin}(0.26) = 0.8268\ {\rm bit}.$
- With error-free transmission  ($\varepsilon = 0$),  on the other hand,  $P_Y(\cdot) = P_X(\cdot)$  and thus  $H(Y) = H(X)$  would apply.
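A quick check of the sink entropy, reusing the binary entropy helper assumed earlier:

```python
from math import log2

def h_bin(p):
    """Binary entropy function H_bin(p) in bit."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

print(round(h_bin(0.26), 3))   # -> 0.827  (sink entropy H(Y))
print(round(h_bin(0.20), 3))   # -> 0.722  (source entropy H(X), for comparison)
```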
(6) Here, too, the second proposed solution is correct:
- Because of  $I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)$,  $H(Y|X)$  is greater than  $H(X|Y)$  by the same amount by which  $H(Y)$  is greater than  $H(X)$:
- $H(Y|X) = H(Y) - I(X;Y) = 0.8268 - 0.3578 = 0.469\ {\rm bit}.$
- Direct calculation gives the same result  $H(Y|X) = 0.469\ {\rm bit}$:
- $H(Y|X) = {\rm E}\left[\log_2 \frac{1}{P_{Y|X}(Y|X)}\right] = 0.18 \cdot \log_2 \frac{1}{0.9} + 0.02 \cdot \log_2 \frac{1}{0.1} + 0.08 \cdot \log_2 \frac{1}{0.1} + 0.72 \cdot \log_2 \frac{1}{0.9}.$
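A final numerical check (same assumed setup as in the earlier snippets); note that the sum equals the binary entropy of the crossover probability,  $H(Y|X) = H_{\rm bin}(\varepsilon)$,  since every source symbol sees the same error probability:

```python
from math import log2

p0, p1, eps = 0.2, 0.8, 0.1
pxy = [[p0 * (1 - eps), p0 * eps],
       [p1 * eps, p1 * (1 - eps)]]
p_y_given_x = [[1 - eps, eps],
               [eps, 1 - eps]]

# H(Y|X) = E[ log2( 1 / P_{Y|X} ) ], expectation over the joint PMF
h_y_given_x = sum(pxy[x][y] * log2(1 / p_y_given_x[x][y])
                  for x in (0, 1) for y in (0, 1))
print(round(h_y_given_x, 3))   # -> 0.469 bit

# equals H_bin(eps) = eps*log2(1/eps) + (1-eps)*log2(1/(1-eps))
print(round(eps * log2(1 / eps) + (1 - eps) * log2(1 / (1 - eps)), 3))   # -> 0.469
```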