Exercise 2.7Z: Huffman Coding for Two-Tuples of a Ternary Source


[Figure: Huffman tree for a ternary source]

We consider the same situation as in  Exercise A2.7:  

  • The Huffman algorithm leads to a better result, i.e. to a smaller average codeword length  $L_{\rm M}$, if one does not apply it to individual symbols but forms  $k$–tuples beforehand.  
  • This increases the symbol set size from  $M$  to  $M\hspace{0.03cm}' = M^k$.


For the message source considered here, the following applies:

  • Symbol set size:   $M = 3$,
  • Symbol set:   $\{$ $\rm X$,  $\rm Y$,  $\rm Z$ $\}$,
  • Probabilities:   $p_{\rm X} = 0.7$,  $p_{\rm Y} = 0.2$,  $p_{\rm Z} = 0.1$,
  • Entropy:   $H = 1.157 \ \rm bit/ternary\hspace{0.12cm}symbol$.
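
The entropy value can be verified numerically. A minimal Python sketch (the symbol probabilities are taken from the list above; the variable names are our own):

from math import log2

# Source statistics given in the problem statement
probs = {"X": 0.7, "Y": 0.2, "Z": 0.1}

# H = -sum p * log2(p)  ->  1.157 bit/ternary symbol
H = -sum(p * log2(p) for p in probs.values())
print(f"H = {H:.3f} bit/ternary symbol")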


The graph shows the Huffman tree when the Huffman algorithm is applied to single symbols  $(k= 1)$.
In subtask  (2)  you are to give the corresponding Huffman code when two-tuples are formed beforehand  $(k=2)$.



Hints:

  • The task belongs to the chapter Entropy Coding according to Huffman.
  • In particular, reference is made to the page Application of Huffman coding to  $k$-tuples.

Questions

1

What is the average codeword length when the Huffman algorithm is applied directly to the ternary source symbols  $\rm X$,  $\rm Y$  and  $\rm Z$?

$\underline{k=1}\text{:} \hspace{0.25cm}L_{\rm M} \ = \ $

$\ \rm bit/source\hspace{0.12cm}symbol$

2

What are the tuple probabilities here?  In particular:

$p_{\rm A} = \rm Pr(XX)\ = \ $

$p_{\rm B} = \rm Pr(XY)\ = \ $

$p_{\rm C} = \rm Pr(XZ)\ = \ $

3

What is the average codeword length if you first form two-tuples and apply the Huffman algorithm to them?

$\underline{k=2}\text{:} \hspace{0.25cm}L_{\rm M} \ = \ $

$\ \rm bit/source\hspace{0.12cm}symbol$

4

Which of the following statements are true when more than two ternary symbols are combined  $(k>2)$?

$L_{\rm M}$  decreases monotonically with increasing  $k$.
$L_{\rm M}$  does not change when  $k$  is increased.
For  $k= 3$  you get  $L_{\rm M} = 1.05 \ \rm bit/source\hspace{0.12cm}symbol$.


Solution

(1)  The average codeword length with  $p_{\rm X} = 0.7$,  $L_{\rm X} = 1$,  $p_{\rm Y} = 0.2$,  $L_{\rm Y} = 2$,  $p_{\rm Z} = 0.1$,  $L_{\rm Z} = 2$  is

$$L_{\rm M} = p_{\rm X} \cdot 1 + (p_{\rm Y} + p_{\rm Z}) \cdot 2 \hspace{0.15cm}\underline{= 1.3\,\,{\rm bit/source\hspace{0.12cm}symbol}}\hspace{0.05cm}. $$
  • This value is still well above the source entropy  $H = 1.157 \ \rm bit/source\hspace{0.12cm}symbol$.
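
This result can be reproduced with a compact Huffman routine. The following Python sketch (the helper huffman_lengths is our own, not part of the exercise) merges the two least probable entries step by step, exactly as in the tree construction:

import heapq
from itertools import count

def huffman_lengths(probs):
    """Return the binary Huffman codeword length for each entry of probs."""
    tie = count()                      # tie-breaker so heapq never compares lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, ids1 = heapq.heappop(heap)
        p2, _, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:          # each merge adds one bit to these codes
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), ids1 + ids2))
    return lengths

probs = [0.7, 0.2, 0.1]                # p_X, p_Y, p_Z
L = huffman_lengths(probs)             # -> [1, 2, 2]
print(sum(p * l for p, l in zip(probs, L)))   # ≈ 1.3 bit/source symbol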


(2)  There are  $M\hspace{0.03cm}' = M^k = 3^2 = 9$  two-tuples with the following probabilities:

[Figure: Huffman tree for ternary source and two-tuples]
$$p_{\rm A} = \rm Pr(XX) = 0.7 \cdot 0.7\hspace{0.15cm}\underline{= 0.49},$$
$$p_{\rm B} = \rm Pr(XY) = 0.7 \cdot 0.2\hspace{0.15cm}\underline{= 0.14},$$
$$p_{\rm C} = \rm Pr(XZ) = 0.7 \cdot 0.1\hspace{0.15cm}\underline{= 0.07},$$
$$p_{\rm D} = \rm Pr(YX) = 0.2 \cdot 0.7 = 0.14,$$
$$p_{\rm E} = \rm Pr(YY) = 0.2 \cdot 0.2 = 0.04,$$
$$p_{\rm F} = \rm Pr(YZ) = 0.2 \cdot 0.1 = 0.02,$$
$$p_{\rm G} = \rm Pr(ZX) = 0.1 \cdot 0.7 = 0.07,$$
$$p_{\rm H} = \rm Pr(ZY) = 0.1 \cdot 0.2 = 0.02,$$
$$p_{\rm I} = \rm Pr(ZZ) = 0.1 \cdot 0.1 = 0.01.$$
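
Since consecutive symbols are statistically independent, each tuple probability is the product of the two single-symbol probabilities. A short sketch (variable names are our own):

from itertools import product

p = {"X": 0.7, "Y": 0.2, "Z": 0.1}

# Pr(two-tuple) = product of the single-symbol probabilities
pairs = {a + b: p[a] * p[b] for a, b in product("XYZ", repeat=2)}
print(pairs)                # {'XX': 0.49, 'XY': 0.14, ..., 'ZZ': 0.01}
print(sum(pairs.values()))  # ≈ 1.0 (sanity check, up to floating-point rounding)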


(3)  The graph shows the Huffman tree for the case  $k = 2$.  Thus we obtain

  • for the individual two-tuples the following binary codewords:
    $\rm XX = A$   →   0,     $\rm XY = B$   →   111,     $\rm XZ = C$   →   1011,
    $\rm YX = D$   →   110,     $\rm YY = E$   →   1000,     $\rm YZ = F$   →   10010,
    $\rm ZX = G$   →   1010,     $\rm ZY = H$   →   100111,     $\rm ZZ = I$   →   100110;
  • for the average codeword length:
$$L_{\rm M}\hspace{0.01cm}' = 0.49 \cdot 1 + (0.14 + 0.14) \cdot 3 + (0.07 + 0.04 + 0.07) \cdot 4 + 0.02 \cdot 5 + (0.02 + 0.01) \cdot 6 = 2.33\,\,{\rm bit/two\hspace{0.12cm}tuple}$$
$$\Rightarrow\hspace{0.3cm}L_{\rm M} = {L_{\rm M}\hspace{0.01cm}'}/{2}\hspace{0.15cm}\underline{ = 1.165\,\,{\rm bit/source\hspace{0.12cm}symbol}}\hspace{0.05cm}.$$
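
Both numbers and the prefix-free property of the code table can be cross-checked directly. A sketch (the dictionaries simply transcribe the assignment and the probabilities above); the Kraft sum equal to 1 reflects the full binary code tree:

code = {"XX": "0",    "XY": "111",    "XZ": "1011",
        "YX": "110",  "YY": "1000",   "YZ": "10010",
        "ZX": "1010", "ZY": "100111", "ZZ": "100110"}
prob = {"XX": 0.49, "XY": 0.14, "XZ": 0.07,
        "YX": 0.14, "YY": 0.04, "YZ": 0.02,
        "ZX": 0.07, "ZY": 0.02, "ZZ": 0.01}

L2 = sum(prob[t] * len(c) for t, c in code.items())
print(L2, L2 / 2)                                  # ≈ 2.33 and ≈ 1.165

# No codeword is a prefix of another (prefix-free check)
words = list(code.values())
assert not any(a != b and b.startswith(a) for a in words for b in words)

print(sum(2.0 ** -len(c) for c in code.values()))  # Kraft sum -> 1.0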


(4)  Statement 1 is correct, even though  $L_{\rm M}$  decreases only very slowly as  $k$  increases.

  • The last statement is false because  $L_{\rm M}$  cannot be smaller than  $H = 1.157 \ \rm bit/source\hspace{0.12cm}symbol$  even for  $k \to \infty$.
  • But the second statement is not necessarily correct either:   Since  $L_{\rm M} > H$  still applies for  $k = 2$,  $k = 3$  can lead to a further improvement.
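
This behaviour can also be observed numerically: build a Huffman code for all  $3^k$  tuple probabilities and divide the average tuple codeword length by  $k$. A Python sketch under the same independence assumption (the helper reuses the heapq idea from above and exploits the fact that the average codeword length equals the sum of all merged node weights; all names are our own):

import heapq
from itertools import count, product
from math import log2, prod

def huffman_avg_length(probs):
    """Average codeword length = sum of the weights of all merged nodes."""
    tie = count()                      # tie-breaker for equal probabilities
    heap = [(p, next(tie)) for p in probs]
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2
        heapq.heappush(heap, (p1 + p2, next(tie)))
    return total

p = [0.7, 0.2, 0.1]
print("H =", -sum(q * log2(q) for q in p))         # ≈ 1.157
for k in (1, 2, 3):
    tuples = [prod(c) for c in product(p, repeat=k)]
    print(k, huffman_avg_length(tuples) / k)       # 1.3, 1.165, then a k = 3
                                                   # value between H and 1.165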