Exercise 2.7: Huffman Application for Binary Two-Tuples

From LNTwww
 
  
{{quiz-Header|Buchseite=Information_Theory/Entropy_Coding_According_to_Huffman
}}
  
[[File:P_ID2455__Inf_A_2_7.png|right|frame|Three binary codes for&nbsp; $M = 4$]]
The application of the Huffman algorithm in its original form assumes a symbol set size&nbsp; $M > 2$&nbsp; and is therefore unsuitable for data compression of binary sources.
  
However, if one combines several consecutive binary characters of the source into a new symbol, one can usefully apply Huffman data compression to the new symbol set.
  
In this task we start from the source symbol set&nbsp; $\{$ $\rm X$,&nbsp; $\rm Y$ $\}$ &nbsp;&#8658;&nbsp; $M = 2$&nbsp; and form two-tuples according to the table on the right with the new symbol set&nbsp; $\{$ $\rm A$,&nbsp; $\rm B$,&nbsp; $\rm C$,&nbsp; $\rm D$ $\}$ &nbsp;&#8658;&nbsp; $(M\hspace{0.03cm}' = M^2 = 4)$.
  
For example, the binary source symbol sequence&nbsp; $\rm XYXXYXXXYY$&nbsp; thus becomes the quaternary sequence&nbsp; $\rm BACAD$.
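This tuple formation can be sketched in a few lines of Python; note that the pair-to-symbol assignment used here is inferred from this one example (the table in the figure defines it authoritatively):

```python
# Map pairs of source symbols to the new quaternary alphabet.
# The assignment XX->A, XY->B, YX->C, YY->D is inferred from the
# example sequence in the text.
tuple_map = {"XX": "A", "XY": "B", "YX": "C", "YY": "D"}

def to_tuples(seq):
    """Group a binary source sequence into two-tuples (len(seq) must be even)."""
    return "".join(tuple_map[seq[i:i + 2]] for i in range(0, len(seq), 2))

print(to_tuples("XYXXYXXXYY"))  # -> BACAD
```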
  
Furthermore, three codes are given in the table, some of which have been created by the Huffman algorithm.&nbsp; The binary output sequences then result for our example as follows:

* $\text{Code 1}$: &nbsp;&nbsp;<b>1011011100</b>,
* $\text{Code 2}$:&nbsp;&nbsp; <b>0110011000</b>,
* $\text{Code 3}$:&nbsp;&nbsp; <b>10011001110</b>.
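These output sequences can be reproduced programmatically. In the sketch below, only the Code 1 table is stated explicitly in the solution; the Code 2 and Code 3 tables are reconstructed from the given output sequences and should be regarded as an assumption (the table in the figure is authoritative):

```python
# Encode the tuple sequence BACAD with each code table.
# Code 1 is the fixed-length code from the solution; the Code 2 and
# Code 3 tables are reconstructed from the stated output sequences.
codes = {
    "Code 1": {"A": "11", "B": "10", "C": "01",  "D": "00"},
    "Code 2": {"A": "1",  "B": "01", "C": "001", "D": "000"},
    "Code 3": {"A": "0",  "B": "10", "C": "110", "D": "1110"},
}

def encode(tuples, table):
    """Concatenate the codewords for a sequence of tuple symbols."""
    return "".join(table[s] for s in tuples)

for name, table in codes.items():
    print(name, encode("BACAD", table))
# -> 1011011100, 0110011000, 10011001110
```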
 
  
Again for understanding:
*From the original symbol set&nbsp; $\{$ $\rm X$,&nbsp; $\rm Y$ $\}$,&nbsp; a quaternary set with the symbol set&nbsp; $\{$ $\rm A$,&nbsp; $\rm B$,&nbsp; $\rm C$,&nbsp; $\rm D$ $\}$&nbsp; is obtained by forming two-tuples.
*The sequence length&nbsp; $N$&nbsp; is thereby halved to&nbsp; $N\hspace{0.03cm}' = N/2$.
*Huffman coding again results in a binary sequence whose symbol set is designated&nbsp; $\{$<b>0</b>,&nbsp; <b>1</b>$\}$&nbsp; for better differentiation.
*The application of Huffman coding makes sense exactly when the length&nbsp; $L$&nbsp; of the output sequence is (on statistical average) smaller than&nbsp; $N$.
  
This task is intended to clarify which of the given binary codes make sense under which boundary conditions.
*Let the binary message source&nbsp; $\{$ $\rm X$,&nbsp; $\rm Y$ $\}$&nbsp; be memoryless and be described solely by the symbol probability&nbsp; $p_{\rm X}$.
*The second probability is then always&nbsp; $p_{\rm Y} = 1 - p_{\rm X}$.
  
<u>Hints:</u>
*The exercise belongs to the chapter&nbsp; [[Information_Theory/Entropiecodierung_nach_Huffman|Entropy Coding according to Huffman]].
*In particular, reference is made to the page&nbsp; [[Information_Theory/Entropiecodierung_nach_Huffman#Application_of_Huffman_coding_to_.7F.27.22.60UNIQ-MathJax168-QINU.60.22.27.7F.E2.80.93tuples|Application of Huffman coding to&nbsp; $k$-tuples]].
*The average code word length per two-tuple is&nbsp; $L_{\rm M}\hspace{0.03cm}' = p_{\rm A} \cdot L_{\rm A} + p_{\rm B} \cdot L_{\rm B} + p_{\rm C} \cdot L_{\rm C} + p_{\rm D} \cdot L_{\rm D}\hspace{0.05cm}$.&nbsp; With respect to a source symbol,&nbsp; $L_{\rm M} = L_{\rm M}\hspace{0.03cm}'/2$.
*A comparable task with ternary input symbols is treated in&nbsp; [[Aufgaben:Exercise_2.7Z:_Huffman_Coding_for_Two-Tuples_of_a_Ternary_Source|Exercise 2.7Z]].
*The idea for this task arose during a lecture by&nbsp; [http://www.uni-ulm.de/nt/staff/professors/fischer/ Prof. Robert Fischer]&nbsp; from the University of Ulm at the Technical University of Munich on the topic of <br> &nbsp; &nbsp; "Der goldene Schnitt in der Nachrichtentechnik" &nbsp; &rArr; &nbsp; "The golden ratio in communications technology".
===Questions===
  
 
<quiz display=simple>
{Give the average code word lengths for a redundancy-free binary source.
|type="{}"}
$\text{Code 1}$:&nbsp; $L_{\rm M} \ = \ $ { 1 1% } $\ \rm bit/source\hspace{0.15cm}symbol$
$\text{Code 2}$:&nbsp; $L_{\rm M} \ = \ $ { 1.125 1% } $\ \rm bit/source\hspace{0.15cm}symbol$
$\text{Code 3}$:&nbsp; $L_{\rm M} \ = \ $ { 1.25 1% } $\ \rm bit/source\hspace{0.15cm}symbol$
  
  
  
{Determine the Huffman code with respect to two-tuples for&nbsp; $p_{\rm X}= 0.6$.
|type="()"}
+ The result is&nbsp; $\text{Code 1}$.
- The result is&nbsp; $\text{Code 2}$.
- The result is&nbsp; $\text{Code 3}$.
  
  
{What is the average code word length of the best Huffman code for&nbsp; $p_{\rm X}= 0.6$?
|type="{}"}
$L_{\rm M} \ = \ $ { 1 1% } $\ \rm bit/source\hspace{0.15cm}symbol$
  
  
{Determine the Huffman code with respect to two-tuples for&nbsp; $p_{\rm X}= 0.8$.
|type="()"}
- The result is&nbsp; $\text{Code 1}$.
+ The result is&nbsp; $\text{Code 2}$.
- The result is&nbsp; $\text{Code 3}$.
  
  
{What is the average code word length of the best Huffman code for&nbsp; $p_{\rm X}= 0.8$?
|type="{}"}
$L_{\rm M} \ = \ $ { 0.78 1% } $\ \rm bit/source\hspace{0.15cm}symbol$
  
  
{In what range may the probability&nbsp; $p_{\rm X}$&nbsp; of the symbol&nbsp; $\rm X$&nbsp; lie so that&nbsp; $\text{Code 1}$&nbsp; results from the Huffman algorithm?
|type="{}"}
$p_\text{X, min}\ = \ $ { 0.382 1% }
$p_\text{X, max}\ = \ $ { 0.618 1% }
  
  
</quiz>
  
===Solution===
{{ML-Kopf}}
'''(1)'''&nbsp; With a redundancy-free binary source&nbsp; $(p_{\rm X} = p_{\rm Y} = 0.5)$&nbsp; one obtains&nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$&nbsp; and with the given equation:
:$$L_{\rm M} =  \big [ \hspace{0.05cm}p_{\rm A} \cdot L_{\rm A} + p_{\rm B} \cdot L_{\rm B} + p_{\rm C} \cdot L_{\rm C} + p_{\rm D} \cdot L_{\rm D} \hspace{0.05cm} \big ] / 2 = \big [ \hspace{0.05cm} L_{\rm A} +  L_{\rm B} +  L_{\rm C} +  L_{\rm D}\hspace{0.05cm} \big ] / 8 \hspace{0.05cm}.$$
Taking into account the given assignments, we obtain for
* $\text{Code 1}$:&nbsp;&nbsp;&nbsp; $L_{\rm M} \hspace{0.15cm}\underline{= 1.000 \ {\rm bit/source \hspace{0.15cm}symbol} }$,
* $\text{Code 2}$:&nbsp;&nbsp;&nbsp; $L_{\rm M} \hspace{0.15cm}\underline{= 1.125 \ {\rm bit/source \hspace{0.15cm}symbol} }$,
* $\text{Code 3}$:&nbsp;&nbsp;&nbsp; $L_{\rm M} \hspace{0.15cm}\underline{= 1.250 \ {\rm bit/source \hspace{0.15cm}symbol} }$.


In the course of the task it will become apparent that the first two codes are quite possible as a result of the Huffman algorithm (of course only with suitable symbol probabilities).&nbsp; $\text{Code 3}$&nbsp; is also prefix-free, but never optimal in terms of average code word length.
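These three values can be checked with a short numerical sketch; the per-tuple codeword lengths below follow from the codes, where the lengths $(1, 2, 3, 4)$ assumed for $\text{Code 3}$ are reconstructed from its output sequence:

```python
# Check of subtask (1): with p_A = ... = p_D = 1/4, the average length
# per source symbol is (L_A + L_B + L_C + L_D)/8.
# Per-tuple codeword lengths; those of Code 3 are an assumption
# reconstructed from its stated output sequence.
lengths = {
    "Code 1": (2, 2, 2, 2),
    "Code 2": (1, 2, 3, 3),
    "Code 3": (1, 2, 3, 4),
}
for name, L in lengths.items():
    print(name, sum(L) / 8)  # -> 1.0, 1.125, 1.25
```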
 
  
'''(2)'''&nbsp; The probabilities of the possible two-tuples are:
:$$p_{\rm A} = 0.6^2 = 0.36 \hspace{0.05cm}, \hspace{0.4cm}p_{\rm B}= 0.6 \cdot 0.4 = 0.24 =  p_{\rm C} \hspace{0.05cm},\hspace{0.4cm}
p_{\rm D}= 0.4^2 = 0.16 \hspace{0.05cm}.$$
*This gives the&nbsp; <u>tree diagram on the left</u>&nbsp; (in the adjacent graph) and the following Huffman code:
[[File:P_ID2456__Inf_A_2_7b.png|right|frame|Huffman tree diagram for two different two-tuple constellations&nbsp; ("Schritt" &nbsp; &rArr; &nbsp; EN:&nbsp; "step")]]
:&nbsp;&nbsp;&nbsp; $\rm A$ &nbsp; &#8594; &nbsp; <b>11</b>,&nbsp;&nbsp; $\rm B$ &nbsp; &#8594; &nbsp; <b>10</b>,&nbsp;&nbsp; $\rm C$ &nbsp; &#8594; &nbsp; <b>01</b>,&nbsp;&nbsp; $\rm D$ &nbsp; &#8594; &nbsp; <b>00</b>.
*This is&nbsp; $\text{Code 1}$ &nbsp;&nbsp;&#8658;&nbsp;&nbsp; <u>solution suggestion 1</u>.
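The Huffman construction can be sketched with a priority queue. This illustrative implementation tracks only the codeword lengths, which suffice for the average code word length (ties between equal probabilities may resolve differently than in the tree diagram, but the length multiset is the same):

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Return the Huffman codeword length of each symbol (a sketch)."""
    tie = count()  # tie-breaker so equal probabilities never compare symbols
    heap = [(p, next(tie), [s]) for s, p in probs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every symbol in a merged node gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return lengths

# Two-tuple probabilities for p_X = 0.6: all codewords get length 2 -> Code 1.
print(huffman_lengths({"A": 0.36, "B": 0.24, "C": 0.24, "D": 0.16}))
# For p_X = 0.8 the length multiset is {1, 2, 3, 3} -> Code 2.
print(sorted(huffman_lengths({"A": 0.64, "B": 0.16, "C": 0.16, "D": 0.04}).values()))
```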
'''(3)'''&nbsp; Each two-tuple is represented by two bits.&nbsp; Thus
:$$L_{\rm M} \hspace{0.15cm}\underline{= 1.000 \ {\rm bit/source \hspace{0.15cm}symbol} }.$$
  
'''(4)'''&nbsp; Here the probabilities of the individual two-tuples are:
:$$p_{\rm A} = 0.8^2 = 0.64 \hspace{0.05cm}, \hspace{0.4cm}p_{\rm B}= 0.8 \cdot 0.2 = 0.16 = p_{\rm C} \hspace{0.05cm},\hspace{0.4cm}
p_{\rm D}= 0.2^2 = 0.04 \hspace{0.05cm}. $$

According to the <u>tree diagram on the right</u> the result is now&nbsp; $\text{Code 2}$ &nbsp;&nbsp;&#8658;&nbsp;&nbsp; <u>solution suggestion 2</u>:

:&nbsp;&nbsp;&nbsp; $\rm A$ &nbsp; &#8594; &nbsp; <b>1</b>,&nbsp;&nbsp; $\rm B$ &nbsp; &#8594; &nbsp; <b>01</b>,&nbsp;&nbsp; $\rm C$ &nbsp; &#8594; &nbsp; <b>001</b>,&nbsp;&nbsp; $\rm D$ &nbsp; &#8594; &nbsp; <b>000</b>.
  
'''(5)'''&nbsp; For&nbsp; $\text{Code 2}$&nbsp; the average two-tuple length and hence the average code word length are:
:$$L_{\rm M}\hspace{0.01cm}' = 0.64 \cdot 1 + 0.16 \cdot 2 + (0.16 + 0.04) \cdot 3 = 1.56\,{\rm bit/two\hspace{0.08cm}tuple}\hspace{0.3cm}
\Rightarrow\hspace{0.3cm}L_{\rm M} = {L_{\rm M}\hspace{0.01cm}'}/{2}\hspace{0.15cm}\underline{  = 0.78\,{\rm bit/source \:symbol}}\hspace{0.05cm}.$$
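A quick numerical cross-check of this value (codeword lengths $1, 2, 3, 3$ for $\rm A$, $\rm B$, $\rm C$, $\rm D$):

```python
# Check of subtask (5): average length of Code 2 for p_X = 0.8.
p = 0.8
probs   = {"A": p * p, "B": p * (1 - p), "C": (1 - p) * p, "D": (1 - p) ** 2}
lengths = {"A": 1, "B": 2, "C": 3, "D": 3}

L_tuple = sum(probs[s] * lengths[s] for s in probs)  # bit per two-tuple
print(round(L_tuple, 4), round(L_tuple / 2, 4))      # -> 1.56 0.78
```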
  
'''(6)'''&nbsp; From the earlier subtasks it has been found:
*For &nbsp;$p_{\rm X} = 0.6$,&nbsp; $\text{Code 1}$&nbsp; is optimal and its average code word length is&nbsp; $L_{\rm M} = 1$&nbsp; bit/source symbol, independent of&nbsp; $p_{\rm X}$.
*For &nbsp;$p_{\rm X} = 0.8$,&nbsp; according to subtask&nbsp; '''(4)''',&nbsp; $\text{Code 2}$&nbsp; is optimal and the average code word length is&nbsp; $L_{\rm M} = 0.78$&nbsp; bit/source symbol.


The sought maximum value&nbsp; $p_\text{X, max}$&nbsp; will thus lie between&nbsp; $0.6$&nbsp; and&nbsp; $0.8$.&nbsp; The determining condition is that for the limiting case&nbsp; $p_\text{X} = p_\text{X, max}$&nbsp; both codes have exactly the same average code word length&nbsp; $L_{\rm M} = 1$&nbsp; bit/source symbol, i.e.&nbsp; $L_{\rm M}\hspace{0.03cm}' = 2$&nbsp; bit/two-tuple.

*With the abbreviation&nbsp; $p = p_\text{X, max}$&nbsp; this condition reads:
:$$L_{\rm M}\hspace{0.01cm}'\hspace{0.15cm}{\rm (Code \hspace{0.15cm}2)} = p^2 \cdot 1 + p \cdot (1-p) \cdot 2 + p \cdot (1-p) \cdot 3
+  (1-p)^2 \cdot 3 \stackrel{!}{=} 2 \hspace{0.05cm}.$$
*This leads to the numerical result:
:$$p^2 + p - 1 \stackrel{!}{=} 0 \hspace{0.3cm}\Rightarrow\hspace{0.3cm}
  p_{\rm X,\hspace{0.05cm}max} = p = \frac{\sqrt{5}-1}{2} \hspace{0.15cm}\underline{  \approx 0.618}
  \hspace{0.05cm}.$$
*Since the basic Huffman structure does not change when&nbsp; $\rm X$&nbsp; and&nbsp; $\rm Y$&nbsp; are swapped, the lower bound is:
:$$p_{\rm X,\hspace{0.05cm}min} = 1 -  p_{\rm X,\hspace{0.05cm}max}\hspace{0.15cm}\underline{  \approx 0.382}
  \hspace{0.05cm}.$$

Representing the two-tuples by bit sequences of different lengths&nbsp; $\text{(Code 2)}$&nbsp; therefore only makes sense if the symbol probabilities of&nbsp; $\rm X$&nbsp; and&nbsp; $\rm Y$&nbsp; differ significantly.&nbsp; If, on the other hand, they lie between&nbsp; $0.382$&nbsp; and&nbsp; $0.618$,&nbsp; $\text{Code 1}$&nbsp; is to be applied.


<u>Note:</u> &nbsp; The division of a line segment of length&nbsp; $1$&nbsp; into two sections of length&nbsp; $0.618$...&nbsp; and&nbsp; $0.382$...&nbsp; is called the&nbsp; [https://en.wikipedia.org/wiki/Golden_ratio '''golden ratio'''], which is encountered again and again in the most diverse fields.
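The determining equation and its golden-ratio root can also be verified numerically:

```python
import math

# Check of subtask (6): Code 1 always needs 2 bit per two-tuple, while
# Code 2 (codeword lengths 1, 2, 3, 3 for A, B, C, D) needs
#   L'(p) = p^2*1 + p*(1-p)*2 + (1-p)*p*3 + (1-p)^2*3.
def L_code2(p):
    return p * p * 1 + p * (1 - p) * 2 + (1 - p) * p * 3 + (1 - p) ** 2 * 3

# Setting L'(p) = 2 gives p^2 + p - 1 = 0, whose positive root is the
# golden-ratio value (sqrt(5) - 1)/2.
p_max = (math.sqrt(5) - 1) / 2
print(round(p_max, 3), round(L_code2(p_max), 6))  # -> 0.618 2.0
```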
  
 
{{ML-Fuß}}
  
  
[[Category:Information Theory: Exercises|^2.3 Entropy Coding according to Huffman^]]

Latest revision as of 16:56, 1 November 2022
