Exercise 2.6: About the Huffman Coding

From LNTwww

{{quiz-Header|Buchseite=Information_Theory/Entropy_Coding_According_to_Huffman
}}

[[File:EN_Inf_A_2_6_v2.png|right|frame|Tree diagram of the Huffman method]]
We consider here a source symbol sequence&nbsp; $\langle q_\nu  \rangle$&nbsp; with symbol set size&nbsp; $M = 8$:
:$$q_{\nu} \in \{ \hspace{0.05cm}q_{\mu} \} = \{ \boldsymbol{\rm A} \hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm B}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm C}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm D}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm E}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm F}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm G}\hspace{0.05cm},\hspace{0.05cm} \boldsymbol{\rm H}\hspace{0.05cm} \}\hspace{0.05cm}.$$
If the symbols are equally probable, i.e.&nbsp; $p_{\rm A} =  p_{\rm B} = \text{...} = p_{\rm H} = 1/M$,&nbsp; then source coding makes no sense.&nbsp; Already with the dual code&nbsp; $\rm A$&nbsp; &#8594;&nbsp; <b>000</b>,&nbsp; $\rm B$&nbsp; &#8594;&nbsp; <b>001</b>, ... ,&nbsp; $\rm H$&nbsp; &#8594;&nbsp; <b>111</b>, the average code word length&nbsp; $L_{\rm M}$&nbsp; reaches its lower bound&nbsp; $H$&nbsp; according to the source coding theorem&nbsp; $(H$&nbsp; denotes the&nbsp; "source entropy"&nbsp; here$)$:

:$$L_{\rm M,\hspace{0.08cm}min} = H = 3 \hspace{0.15cm}{\rm bit/source \:symbol} \hspace{0.05cm}.$$
However, let the symbol probabilities be given as follows:
 
:$$p_{\rm A} \hspace{-0.05cm}= \hspace{-0.05cm} 0.04  \hspace{0.05cm},\hspace{0.1cm}p_{\rm B} \hspace{-0.05cm}= \hspace{-0.05cm} 0.08  \hspace{0.05cm},\hspace{0.1cm}p_{\rm C} \hspace{-0.05cm}= \hspace{-0.05cm} 0.14  \hspace{0.05cm},\hspace{0.1cm}
p_{\rm D} \hspace{-0.05cm}= \hspace{-0.05cm} 0.25  \hspace{0.05cm},\hspace{0.1cm} p_{\rm E} \hspace{-0.05cm}= \hspace{-0.05cm} 0.24  \hspace{0.05cm},\hspace{0.1cm}p_{\rm F} \hspace{-0.05cm}= \hspace{-0.05cm} 0.12  \hspace{0.05cm},\hspace{0.1cm}p_{\rm G} \hspace{-0.05cm}= \hspace{-0.05cm} 0.10  \hspace{0.05cm},\hspace{0.1cm}
p_{\rm H} \hspace{-0.05cm}= \hspace{-0.05cm} 0.03  \hspace{0.05cm}.$$
*So here we have a redundant  source that can be compressed by Huffman coding.
 +
*This algorithm was published by&nbsp; [https://en.wikipedia.org/wiki/David_A._Huffman David Albert Huffman]&nbsp; &ndash; shortly after Shannon's groundbreaking work on information theory &ndash; and allows the construction of optimal prefix-free codes.
  
The algorithm is given here without derivation and proof, whereby we restrict ourselves to binary codes &nbsp; &rArr; &nbsp; the encoded sequence consists only of zeros and ones:
  
:'''(1)''' &nbsp; Order the symbols according to decreasing probabilities of occurrence.
  
:'''(2)''' &nbsp;  Combine the two least likely symbols into a new symbol.
  
:'''(3)''' &nbsp;  Repeat steps&nbsp; '''(1)'''&nbsp; and&nbsp; '''(2)''', until only two (combined) symbols remain.
  
:'''(4)''' &nbsp;  Binary encode the more probable set of symbols with&nbsp; <b>1</b>&nbsp;, the other set with&nbsp; <b>0</b>.
  
:'''(5)''' &nbsp;  Step by step (from bottom to top) add&nbsp; <b>1</b>&nbsp; or&nbsp; <b>0</b>  to the split partial codes.
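The five steps above can be sketched in a few lines of Python.&nbsp; This is only an illustrative implementation using the standard&nbsp; `heapq`&nbsp; module; the function and variable names are not part of the exercise, and with other probability sets ties may lead to different (but equally optimal) codes:

```python
# Hedged sketch of binary Huffman coding for the probabilities of this exercise.
import heapq
from itertools import count

def huffman(probs):
    """Return a dict symbol -> binary code string (prefix-free)."""
    tiebreak = count()  # unique counter so heap comparisons never reach the dicts
    # Each heap entry: (probability, tiebreaker, {symbol: partial code})
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, set0 = heapq.heappop(heap)  # least probable set   -> prefix "0"
        p1, _, set1 = heapq.heappop(heap)  # more probable set    -> prefix "1"
        merged = {s: "0" + c for s, c in set0.items()}
        merged.update({s: "1" + c for s, c in set1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"A": 0.04, "B": 0.08, "C": 0.14, "D": 0.25,
         "E": 0.24, "F": 0.12, "G": 0.10, "H": 0.03}
codes = huffman(probs)
```

Since no ties occur for these probabilities, the merges reproduce exactly the tree diagram above.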
  
This algorithm is often illustrated by a tree diagram. The diagram above shows this for the case at hand.
  
You have the following tasks:
  
:'''(a)''' &nbsp; Assign the symbols&nbsp; $\rm A$, ... , $\rm H$&nbsp; to the inputs labelled&nbsp; '''[1]''', ... , '''[8]'''&nbsp;.
:'''(b)''' &nbsp; Determine the sum probabilities&nbsp; $U$, ... ,&nbsp; $Z$&nbsp; and&nbsp; $R$&nbsp; ("root").
  
:'''(c)''' &nbsp; Assign the symbols&nbsp; $\rm A$, ... ,&nbsp; $\rm H$&nbsp; to the corresponding Huffman binary sequences.&nbsp; A red connection corresponds to a&nbsp; <b>1</b>,&nbsp; a blue connection to a&nbsp; <b>0</b>.
  
You will notice that the average code word length for Huffman coding,
 +
:$$L_{\rm M} =  \sum_{\mu = 1}^{M}\hspace{0.05cm} p_{\mu} \cdot L_{\mu}, $$
 +
is only slightly larger than the source entropy&nbsp; $H$.&nbsp; In this equation, the following values apply to the present case:
 +
*$M = 8$,&nbsp;  $p_1 = p_{\rm A}$,&nbsp; ... ,&nbsp; $p_8 = p_{\rm H}$.
 +
*The respective number of bits of the code symbols for&nbsp; $\rm A$, ... ,&nbsp; $\rm H$&nbsp; is denoted by&nbsp; $L_1$, ... ,&nbsp; $L_8$&nbsp;.
  
  
<u>Hints:</u>
*The exercise belongs to the chapter&nbsp; [[Information_Theory/Entropiecodierung_nach_Huffman|Entropy Coding according to Huffman]].
*In particular, reference is made to the pages
**[[Information_Theory/Entropiecodierung_nach_Huffman#The_Huffman_algorithm|The Huffman algorithm]],
**[[Information_Theory/Entropiecodierung_nach_Huffman#Representation_of_the_Huffman_code_as_a_tree_diagram|Representation of the Huffman code as a tree diagram]].
*To check your results, please refer to the (German language) SWF module&nbsp; [[Applets:Huffman_Shannon_Fano|Coding according to Huffman and Shannon/Fano]].


===Questions===
  
 
<quiz display=simple>
{Which inputs in the tree diagram stand for
 
|type="{}"}
Input number $ \ = \ $  { 7 } &nbsp; &rArr; &nbsp; symbol $\rm A$
Input number $ \ = \ $  { 6 } &nbsp; &rArr; &nbsp; symbol $\rm B$
Input number $ \ = \ $  { 3 } &nbsp; &rArr; &nbsp; symbol $\rm C$
Input number $ \ = \ $  { 1 } &nbsp; &rArr; &nbsp; symbol $\rm D$
  
  
{What numerical values (probabilities) should be at the nodes in the tree diagram?
 
|type="{}"}
Probability  $ \ = \ $ { 0.07 1% } &nbsp; at node $\rm U$
Probability  $ \ = \ $ { 0.15 1% } &nbsp; at node $\rm V$
Probability  $ \ = \ $ { 0.22 1% } &nbsp; at node $\rm W$
Probability  $ \ = \ $ { 0.29 1% } &nbsp; at node $\rm X$
Probability  $ \ = \ $ { 0.46 1% } &nbsp; at node $\rm Y$
Probability  $ \ = \ $ { 0.54 1% } &nbsp; at node $\rm Z$
Probability  $ \ = \ $ { 1 1% } &nbsp; at $\rm root$
  
  
  
{Which binary codes (represented with zeros and ones) result for
 
|type="{}"}
Binary code $ \ = \ $ { 11101 } &nbsp; &rArr; &nbsp; symbol $\rm A$
Binary code $ \ = \ $ { 1111 } &nbsp; &rArr; &nbsp; symbol $\rm B$
Binary code $ \ = \ $ { 110 } &nbsp; &rArr; &nbsp; symbol $\rm C$
Binary code $ \ = \ $ { 10 } &nbsp; &rArr; &nbsp; symbol $\rm D$
  
  
{What is the average code word length?
 
|type="{}"}
$L_{\rm M} \ = \ $ { 2.73 3% } $\ \rm bit/source \hspace{0.15cm} symbol$
  
  
{What is the source entropy&nbsp; $H$?  
|type="()"}
+ $H = 2.71\ \rm  bit/source \hspace{0.15cm}symbol.$
- $H = 2.75\ \rm  bit/source \hspace{0.15cm}symbol.$
- $H = 3.00\ \rm  bit/source \hspace{0.15cm}symbol.$
  
  
</quiz>
  
===Solution===
{{ML-Kopf}}
'''(1)'''&nbsp; Before the Huffman algorithm, the symbols must be sorted according to their probabilities of occurrence.
 +
*Since the two least likely symbols are already combined in the first step, the probabilities of occurrence decrease from top to bottom <br>(from left to right in the diagram for this sample solution).
 +
*By comparison with the data sheet, one obtains:
 +
 
 +
:Symbol $\rm A$:&nbsp;  <u>Input 7</u>,  &nbsp; &nbsp; Symbol $\rm B$:&nbsp;  <u>Input 6</u>, &nbsp; &nbsp; Symbol $\rm C$:&nbsp;  <u>Input 3</u>, &nbsp; &nbsp; Symbol $\rm D$:&nbsp;  <u>Input 1</u>.
 +
 
 +
[[File:EN_Inf_Z_2_6b.png|right|frame|Tree diagram adapted to the exercise]]
 +
 
 +
 
 +
'''(2)'''&nbsp; The node&nbsp; $\rm R$&nbsp; is the root of the tree.&nbsp; It is always assigned&nbsp; $\underline{R=1}$,&nbsp; regardless of the occurrence probabilities.
  
The following applies to the other values:
:Step 1:&nbsp;&nbsp;&nbsp; $p_{\rm U} = p_{\rm A} + p_{\rm H} = 0.04 + 0.03 \hspace{0.15cm}\underline{ =0.07}$,
  
:Step 2:&nbsp;&nbsp;&nbsp; $p_{\rm V} = p_{\rm U} + p_{\rm B} = 0.07 + 0.08 \hspace{0.15cm}\underline{ =0.15}$,
  
:Step 3:&nbsp;&nbsp;&nbsp; $p_{\rm W} = p_{\rm F} + p_{\rm G} = 0.12 + 0.10 \hspace{0.15cm}\underline{ =0.22}$,
  
:Step 4:&nbsp;&nbsp;&nbsp; $p_{\rm X} = p_{\rm V} + p_{\rm C} = 0.15 + 0.14  \hspace{0.15cm}\underline{ =0.29}$,
  
:Step 5:&nbsp;&nbsp;&nbsp; $p_{\rm Y} = p_{\rm W} + p_{\rm E} = 0.22 + 0.24  \hspace{0.15cm}\underline{ =0.46}$,
  
:Step 6:&nbsp;&nbsp;&nbsp; $p_{\rm Z} = p_{\rm X} + p_{\rm D} = 0.29 + 0.25  \hspace{0.15cm}\underline{ =0.54}$.
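The six sums can be reproduced with a few lines of Python.&nbsp; This is a plain arithmetic check; the variable names follow the node labels in the tree diagram:

```python
# Occurrence probabilities from the data sheet
p = {"A": 0.04, "B": 0.08, "C": 0.14, "D": 0.25,
     "E": 0.24, "F": 0.12, "G": 0.10, "H": 0.03}

U = p["A"] + p["H"]   # step 1: 0.07
V = U + p["B"]        # step 2: 0.15
W = p["F"] + p["G"]   # step 3: 0.22
X = V + p["C"]        # step 4: 0.29
Y = W + p["E"]        # step 5: 0.46
Z = X + p["D"]        # step 6: 0.54
R = Y + Z             # root:   1.00
```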
 +
<br clear=all>
 +
'''(3)'''&nbsp; The Huffman code for symbol&nbsp; $\rm A$&nbsp; is obtained by tracing the path from&nbsp; $\rm root$&nbsp; (yellow dot) to symbol &nbsp; $\rm A$&nbsp; and assigning a&nbsp; <b>1</b>&nbsp; to each red line and a&nbsp; <b>0</b> to each blue one.
  
* Symbol $\rm A$:&nbsp;&nbsp;&nbsp; red&ndash;red&ndash;red&ndash;blue&ndash;red &#8594; <b><u>11101</u></b>,
  
* Symbol $\rm B$:&nbsp;&nbsp;&nbsp; red&ndash;red&ndash;red&ndash;red &#8594; <b><u>1111</u></b>,
* Symbol $\rm C$:&nbsp;&nbsp;&nbsp; red&ndash;red&ndash;blue &#8594; <b><u>110</u></b>,
  
* Symbol $\rm D$:&nbsp;&nbsp;&nbsp; red&ndash;blue &#8594; <b><u>10</u></b>,
  
* Symbol $\rm E$:&nbsp;&nbsp;&nbsp; blue&ndash;red &#8594; <b>01</b>,
  
* Symbol $\rm F$:&nbsp;&nbsp;&nbsp; blue&ndash;blue&ndash;red &#8594; <b>001</b>,
  
* Symbol $\rm G$:&nbsp;&nbsp;&nbsp; blue&ndash;blue&ndash;blue &#8594; <b>000</b>,
  
* Symbol $\rm H$:&nbsp;&nbsp;&nbsp; red&ndash;red&ndash;red&ndash;blue&ndash;blue &#8594; <b>11100</b>.
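The code table read off the tree can be checked for the prefix property with a short Python sketch&nbsp; (the helper functions `is_prefix_free` and `decode` are illustrative, not part of the exercise):

```python
# Code table from the tree diagram above
codes = {"A": "11101", "B": "1111", "C": "110", "D": "10",
         "E": "01",    "F": "001",  "G": "000", "H": "11100"}

def is_prefix_free(table):
    """True if no code word is a prefix of another code word."""
    words = list(table.values())
    return not any(a != b and b.startswith(a) for a in words for b in words)

def decode(bits, table):
    """Decode a prefix-free bit stream: read bits until a code word matches."""
    inverse = {c: s for s, c in table.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)
```

Because the code is prefix-free, any encoded sequence such as&nbsp; <b>100111101</b>&nbsp; $($for $\rm DEA)$&nbsp; can be decoded unambiguously without separators.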
  
'''(4)'''&nbsp; The coding under point&nbsp; '''(3)'''&nbsp; has shown that
  
* the symbols&nbsp; $\rm D$&nbsp; and&nbsp; $\rm E$&nbsp; are each represented with two bits,
  
* the symbols&nbsp; $\rm C$,&nbsp; $\rm F$&nbsp; and&nbsp; $\rm G$&nbsp; with three bits,
  
* the symbol&nbsp; $\rm B$&nbsp; with four bits,&nbsp; and
  
* the symbols&nbsp; $\rm A$&nbsp; and&nbsp; $\rm H$&nbsp; with five bits.
  
Thus, for the average code word length (in "bit/source symbol"):
 +
:$$L_{\rm M} \hspace{0.2cm} =  \hspace{0.2cm}  (p_{\rm D} + p_{\rm E}) \cdot 2 + (p_{\rm C} + p_{\rm F} + p_{\rm G}) \cdot 3  + p_{\rm B} \cdot 4 +(p_{\rm A} + p_{\rm H}) \cdot 5 =  0.49 \cdot 2 + 0.36 \cdot 3 +0.08 \cdot 4 +0.07 \cdot 5  
 +
\hspace{0.15cm}\underline{= 2.73}\hspace{0.05cm}.$$
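The calculation can be checked numerically with a short Python sketch, using the probabilities from the data sheet and the code word lengths found in point&nbsp; '''(3)''':

```python
# Probabilities and code word lengths for the eight source symbols
p = {"A": 0.04, "B": 0.08, "C": 0.14, "D": 0.25,
     "E": 0.24, "F": 0.12, "G": 0.10, "H": 0.03}
L = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 2,
     "F": 3, "G": 3, "H": 5}

# Average code word length L_M = sum over p_mu * L_mu, in bit/source symbol
L_M = sum(p[s] * L[s] for s in p)
```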
  
'''(5)'''&nbsp; Only <u>answer 1</u> is correct:
 +
*The average code word length&nbsp; $L_{\rm M}$&nbsp; cannot be smaller than the source entropy&nbsp; $H$.&nbsp; This eliminates answers 2 and 3.
 +
*From the given occurrence probabilities, the source entropy can indeed be calculated as&nbsp; $H = 2.71$&nbsp; bit/source symbol.
 +
*It can be seen that this Huffman coding almost reaches the limit given by the source coding theorem,&nbsp; $L_{\rm M,\hspace{0.08cm}min} = H = 2.71$&nbsp; bit/source symbol.
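The entropy can be recomputed from the given probabilities with a minimal sketch using Python's standard `math` module (only an illustrative check; the result is then rounded as in the answer above):

```python
from math import log2

# Occurrence probabilities from the data sheet
p = [0.04, 0.08, 0.14, 0.25, 0.24, 0.12, 0.10, 0.03]

# Source entropy H = -sum p_mu * log2(p_mu), in bit/source symbol;
# slightly below the average code word length L_M = 2.73
H = -sum(q * log2(q) for q in p)
```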
 
{{ML-Fuß}}
  
  
  
[[Category:Information Theory: Exercises|^2.3 Entropy Coding according to Huffman^]]

Latest revision as of 16:54, 1 November 2022
