Exercise 2.8: Huffman Application for a Markov Source


Binary symmetric Markov source

We consider the binary symmetric Markov source according to the graph, which is completely described by the single parameter

$$q = {\rm Pr}(\boldsymbol{\rm X}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm X}) = {\rm Pr}(\boldsymbol{\rm Y}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm Y}).$$

  • The given source symbol sequences correspond to the conditional probabilities  $q = 0.2$  and  $q = 0.8$, respectively.
  • In subtask  (1)  it has to be clarified which symbol sequence – the red or the blue one – was generated with  $q = 0.2$  and which with  $q = 0.8$.


The properties of Markov sources are described in detail in the chapter  Message Sources with Memory.  Due to the symmetry assumed here with regard to the binary symbols  $\rm X$  and  $\rm Y$, some considerable simplifications result, as derived in  Exercise 1.5Z:

  • The symbols  $\rm X$  and  $\rm Y$  are equally probable, that is,  $p_{\rm X} = p_{\rm Y} = 0.5$ holds. 
    Thus the first entropy approximation is:   $H_1 = 1\,\,{\rm bit/source\:symbol}\hspace{0.05cm}.$
  • The entropy of the Markov source for  $q = 0.2$  as well as for  $q = 0.8$  results in
$$H = q \cdot {\rm log_2}\hspace{0.15cm}\frac{1}{q} + (1-q) \cdot {\rm log_2}\hspace{0.15cm}\frac{1}{1-q} = 0.722\,\,{\rm bit/source\:symbol}\hspace{0.05cm}.$$
  • For Markov sources, all entropy approximations  $H_k$  of order  $k \ge 2$  are determined by  $H_1$  and  $H = H_{k \to \infty}$ :
$$H_k = {1}/{k}\cdot \big [ H_1 + (k-1) \cdot H \big ] \hspace{0.05cm}.$$
  • The following numerical values again apply equally to  $q = 0.2$  and  $q = 0.8$ :
$$H_2 = {1}/{2}\cdot \big [ H_1 + H \big ] = 0.861\,\,{\rm bit/source\:symbol}\hspace{0.05cm},$$
$$H_3 = {1}/{3} \cdot \big [ H_1 + 2H \big ] = 0.815\,\,{\rm bit/source\:symbol}\hspace{0.05cm}.$$
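These relations are easy to check numerically. The following minimal Python sketch (our own addition, not part of the original exercise; the function name is a free choice) evaluates  $H$,  $H_2$  and  $H_3$  for  $q = 0.8$:

```python
import math

def markov_entropies(q: float, k: int):
    """Entropy approximations of the binary symmetric Markov source.

    H1 is the first entropy approximation (equally probable symbols),
    H  is the source entropy (binary entropy function of q),
    Hk follows from H_k = (H1 + (k-1)*H) / k.
    """
    H1 = 1.0                                    # bit/source symbol
    H = q * math.log2(1 / q) + (1 - q) * math.log2(1 / (1 - q))
    Hk = (H1 + (k - 1) * H) / k
    return H1, H, Hk

for k in (2, 3):
    _, H, Hk = markov_entropies(0.8, k)
    print(f"H = {H:.3f}, H_{k} = {Hk:.3f}")     # H = 0.722, H_2 = 0.861, H_3 = 0.815
```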

In this exercise, the Huffman algorithm is to be applied to  $k$–tuples, where we restrict ourselves to  $k = 2$  and  $k = 3$.





Questions

1

Which of the example sequences given above applies to  $q = 0.8$?

the red source symbol sequence  1,
the blue source symbol sequence  2.

2

Which of the following statements are true?

The direct application of Huffman is also useful here.
Huffman makes sense when forming two-tuples  $(k = 2)$.
Huffman makes sense when forming three-tuples  $(k = 3)$.

3

What are the probabilities of two-tuples  $(k = 2)$  for  $\underline{q = 0.8}$?

$p_{\rm A} = \rm Pr(XX)\ = \ $

$p_{\rm B} = \rm Pr(XY)\ = \ $

$p_{\rm C} = \rm Pr(YX)\ = \ $

$p_{\rm D} = \rm Pr(YY)\ = \ $

4

Find the Huffman code for  $\underline{k = 2}$.  What is the mean codeword length in this case?
$L_{\rm M} \ = \ $

$\ \rm bit/source\:symbol$

5

What is the bound on the mean codeword length when two-tuples are formed  $(k = 2)$? Interpretation.

$L_{\rm M} \ge H_1 = 1.000$  $\ \rm bit/source\:symbol$
$L_{\rm M} \ge H_2 \approx 0.861$  $\ \rm bit/source\:symbol$
$L_{\rm M} \ge H_3 \approx 0.815$  $\ \rm bit/source\:symbol$
$L_{\rm M} \ge H_{k \to \infty} \approx 0.722$  $\ \rm bit/source\:symbol$
$L_{\rm M} \ge 0.5$  $\ \rm bit/source\:symbol$

6

What are the probabilities of the three-tuples  $(k = 3)$  for  $\underline{q = 0.8}$?

$p_{\rm A} = \rm Pr(XXX)\ = \ $

$p_{\rm B} = \rm Pr(XXY)\ = \ $

$p_{\rm C} = \rm Pr(XYX)\ = \ $

$p_{\rm D} = \rm Pr(XYY)\ = \ $

$p_{\rm E} = \rm Pr(YXX)\ = \ $

$p_{\rm F} = \rm Pr(YXY)\ = \ $

$p_{\rm G} = \rm Pr(YYX)\ = \ $

$p_{\rm H} = \rm Pr(YYY)\ = \ $

7

Find the Huffman code for $\underline{k = 3}$.  What is the mean codeword length in this case?

$L_{\rm M} \ = \ $

$\ \rm bit/source\:symbol$


Solution

(1)  Suggested solution 2 is correct:

  • In the blue source symbol sequence  2  one recognizes far fewer symbol changes than in the red sequence.
  • The symbol sequence  2  was generated with the parameter  $q = {\rm Pr}(\boldsymbol{\rm X}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm X}) = {\rm Pr}(\boldsymbol{\rm Y}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm Y}) = 0.8$  and the red symbol sequence  1  with  $q = 0.2$.


(2)  Answers 2 and 3 are correct:

  • Since the source symbols  $\rm X$  and  $\rm Y$  were assumed here to be equally probable, the direct application of Huffman makes no sense.
  • In contrast, the internal statistical dependencies of the Markov source can be exploited for data compression by forming  $k$–tuples  $(k ≥ 2)$.
  • The larger  $k$  is, the closer the mean codeword length  $L_{\rm M}$  comes to the entropy  $H$.


(3)  The symbol probabilities are  $p_{\rm X} = p_{\rm Y} = 0.5$.  This gives for the two-tuples:

$$p_{\rm A} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XX}) = p_{\rm X} \cdot {\rm Pr}(\boldsymbol{\rm X}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm X}) = 0.5 \cdot q = 0.5 \cdot 0.8 \hspace{0.15cm}\underline{ = 0.4} \hspace{0.05cm},$$
$$p_{\rm B} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XY}) = p_{\rm X} \cdot {\rm Pr}(\boldsymbol{\rm Y}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm X}) = 0.5 \cdot (1-q)= 0.5 \cdot 0.2 \hspace{0.15cm}\underline{ = 0.1} \hspace{0.05cm},$$
$$p_{\rm C} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm YX}) = p_{\rm Y} \cdot {\rm Pr}(\boldsymbol{\rm X}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm Y}) = 0.5 \cdot (1-q)= 0.5 \cdot 0.2 \hspace{0.15cm}\underline{ = 0.1} \hspace{0.05cm},$$
$$p_{\rm D} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm YY}) = p_{\rm Y} \cdot {\rm Pr}(\boldsymbol{\rm Y}\hspace{0.05cm}|\hspace{0.05cm}\boldsymbol{\rm Y}) = 0.5 \cdot q = 0.5 \cdot 0.8\hspace{0.15cm}\underline{ = 0.4} \hspace{0.05cm}.$$
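As a cross-check, these two-tuple probabilities can be generated directly from the transition probabilities of the Markov chain. A short Python sketch (our own illustration, not part of the original solution):

```python
q = 0.8
p_sym = {'X': 0.5, 'Y': 0.5}                  # stationary symbol probabilities
# conditional probabilities Pr(second | first) of the symmetric Markov chain
trans = {('X', 'X'): q, ('X', 'Y'): 1 - q,
         ('Y', 'X'): 1 - q, ('Y', 'Y'): q}

pairs = {a + b: round(p_sym[a] * trans[(a, b)], 4)   # rounded against float noise
         for a in 'XY' for b in 'XY'}
print(pairs)   # {'XX': 0.4, 'XY': 0.1, 'YX': 0.1, 'YY': 0.4}
```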


Huffman coding for $k = 2$

(4)  The adjacent screen capture of the (earlier) SWF program  Shannon–Fano and Huffman Coding  shows the construction of the Huffman code for  $k = 2$  with the probabilities just calculated.

  • Thus, the mean codeword length is:
$$L_{\rm M}\hspace{0.01cm}' = 0.4 \cdot 1 + 0.4 \cdot 2 + (0.1 + 0.1) \cdot 3 = 1.8\,\,{\rm bit/two-tuple}$$
$$\Rightarrow\hspace{0.3cm}L_{\rm M} = {L_{\rm M}\hspace{0.01cm}'}/{2}\hspace{0.15cm}\underline{ = 0.9\,{\rm bit/source\:symbol}}\hspace{0.05cm}.$$
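The Huffman construction itself can also be reproduced in a few lines. The sketch below is a generic textbook implementation using a binary heap (our own sketch, not the SWF program's code); it computes only the codeword lengths, which is all that is needed for  $L_{\rm M}$:

```python
import heapq
from itertools import count

def huffman_lengths(probs: dict) -> dict:
    """Return the Huffman codeword length for each symbol."""
    tie = count()                 # unique tie-breaker, avoids comparing dicts
    heap = [(p, next(tie), {s: 0}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)   # merge the two least probable nodes
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

pairs = {'XX': 0.4, 'YY': 0.4, 'XY': 0.1, 'YX': 0.1}
lengths = huffman_lengths(pairs)
L_tuple = sum(pairs[s] * lengths[s] for s in pairs)
print(round(L_tuple, 3), round(L_tuple / 2, 3))
# 1.8 bit/two-tuple  ->  0.9 bit/source symbol
```

Applied to the eight three-tuple probabilities from subtask  (6), the same function should reproduce  $L_{\rm M}' = 2.52$  bit/three-tuple from subtask  (7).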


(5)  Suggested solution 2 is correct:

  • According to the source coding theorem,  $L_{\rm M} ≥ H$  holds.
  • However, if one applies Huffman coding and ignores the dependencies between non-adjacent symbols  $(k = 2)$, the lower bound on the codeword length is not  $H = 0.722$  but  $H_2 = 0.861$  (the unit "bit/source symbol" is omitted for the rest of the exercise).
  • The result of subtask  (4)  was  $L_{\rm M} = 0.9$.
  • If there were an asymmetric Markov chain such that the probabilities  $p_{\rm A}$, ... , $p_{\rm D}$  took the values  $50\%$,  $25\%$  and twice  $12.5\%$, the mean codeword length would be  $L_{\rm M} = 0.875$  (see the short check after this list).
  • However, the exact parameters of this asymmetric Markov source are not known even to the task creator (G. Söder).
  • Nor is it known how the value  $0.875$  could be reduced to  $0.861$.  In any case, the Huffman algorithm is unsuitable for this.
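For completeness, the value  $L_{\rm M} = 0.875$  can be verified directly: since the probabilities  $0.5$,  $0.25$,  $0.125$,  $0.125$  are powers of two (a dyadic distribution), the Huffman codeword lengths are simply  $1$,  $2$,  $3$,  $3$:

$$L_{\rm M}\hspace{0.01cm}' = 0.5 \cdot 1 + 0.25 \cdot 2 + 2 \cdot 0.125 \cdot 3 = 1.75\,\,{\rm bit/two-tuple}\hspace{0.3cm}\Rightarrow\hspace{0.3cm}L_{\rm M} = {L_{\rm M}\hspace{0.01cm}'}/{2} = 0.875\,\,{\rm bit/source\:symbol}\hspace{0.05cm}.$$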


(6)  With  $q = 0.8$  and  $1 - q = 0.2$  we get:

$$p_{\rm A} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XXX}) = 0.5 \cdot q^2 \hspace{0.15cm}\underline{ = 0.32} = p_{\rm H} = {\rm Pr}(\boldsymbol{\rm YYY})\hspace{0.05cm},$$
$$p_{\rm B} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XXY}) = 0.5 \cdot q \cdot (1-q) \hspace{0.15cm}\underline{ = 0.08}= p_{\rm G} = {\rm Pr}(\boldsymbol{\rm YYX}) \hspace{0.05cm},$$
$$p_{\rm C} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XYX}) = 0.5 \cdot (1-q)^2\hspace{0.15cm}\underline{ = 0.02} = p_{\rm F}= {\rm Pr}(\boldsymbol{\rm YXY}) \hspace{0.05cm},$$
$$p_{\rm D} \hspace{0.2cm} = \hspace{0.2cm} {\rm Pr}(\boldsymbol{\rm XYY}) = 0.5 \cdot (1-q) \cdot q \hspace{0.15cm}\underline{ = 0.08} = p_{\rm E} = {\rm Pr}(\boldsymbol{\rm YXX})\hspace{0.05cm}.$$
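These triple probabilities, too, follow mechanically from the chain rule along the tuple. A minimal Python sketch (again our own illustration) that works for arbitrary tuple length  $k$:

```python
from itertools import product

q = 0.8
start = {'X': 0.5, 'Y': 0.5}                  # stationary symbol probabilities
trans = {('X', 'X'): q, ('X', 'Y'): 1 - q,
         ('Y', 'X'): 1 - q, ('Y', 'Y'): q}

def tuple_probs(k: int) -> dict:
    """Probabilities of all k-tuples emitted by the Markov source."""
    out = {}
    for seq in product('XY', repeat=k):
        p = start[seq[0]]
        for prev, nxt in zip(seq, seq[1:]):   # chain rule along the tuple
            p *= trans[(prev, nxt)]
        out[''.join(seq)] = round(p, 4)
    return out

print(tuple_probs(3))
# XXX, YYY: 0.32   XXY, XYY, YXX, YYX: 0.08   XYX, YXY: 0.02
```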


(7)  The screen capture of the (earlier) SWF program  Shannon–Fano and Huffman Coding  illustrates the construction of the Huffman code for  $k = 3$.  This gives for the mean codeword length:

$$L_{\rm M}\hspace{0.01cm}' = 0.64 \cdot 2 + 0.24 \cdot 3 + 0.08 \cdot 4 + 0.04 \cdot 5 = 2.52\,\,{\rm bit/three-tuple}\hspace{0.3cm} \Rightarrow\hspace{0.3cm}L_{\rm M} = {L_{\rm M}\hspace{0.01cm}'}/{3}\hspace{0.15cm}\underline{ = 0.84\,{\rm bit/source\:symbol}}\hspace{0.05cm}.$$
Huffman coding for $k = 3$
  • One can see the improvement over subtask  (4).
  • The mean codeword length  $L_{\rm M}$  now falls below the bound  $H_2 = 0.861$  that applies for  $k = 2$.
  • The new bound for  $k = 3$  is  $H_3 = 0.815$.
  • However, to reach the source entropy  $H = 0.722$  (more precisely: to get within some small  $ε$  of this final value), one would have to form infinitely long tuples  $(k → ∞)$.