Exercise 2.13Z: Combination of BWT and MTF

From LNTwww
 
<u>Hints:</u>
*The exercise belongs to the chapter&nbsp; [[Information_Theory/Further_Source_Coding_Methods|Further Source Coding Methods]].
*In particular, reference is made to the section&nbsp; [[Information_Theory/Further_Source_Coding_Methods#Burrows.E2.80.93Wheeler_transformation|Burrows&ndash;Wheeler Transformation]].
*Information on the Huffman code can be found in the chapter&nbsp; [[Information_Theory/Entropiecodierung_nach_Huffman|Entropy Coding according to Huffman]];&nbsp; however, this information is not necessary for solving this exercise.
  

Latest revision as of 00:05, 13 November 2022

Scheme for Burrows-Wheeler data compression

We refer to the theory section  Application Scenario for the Burrows-Wheeler Transform  and consider the coding system sketched in the figure above, consisting of the following blocks:

  • "Burrows–Wheeler Transform"  $\rm (BWT)$  as described in  Exercise 2.13;  the character sets at the input and the output of the BWT are the same:   $\{$ $\rm D$,  $\rm E$,  $\rm I$,  $\rm M$,  $\rm N$,  $\rm S$ $\}$;
  • "Move–to–Front"  $\rm (MTF)$, a sorting algorithm that outputs a string of the same length  $($  $N = 12$  in the example$)$,  but with a different alphabet  $\{$012345$\}$;
  • $\rm RLC0$  – a run-length encoding specifically for  0, which is (as) frequent according to  $\rm BWT$  and  $\rm MTF$;  all other indices are not changed by  $\rm RLC0$ ;
  • $\rm Huffman$  as described in the chapter  Entropy coding according to Huffman; frequent characters are represented by short binary sequences and rare ones by long ones.


The  $\rm MTF$  algorithm can be described as follows for  $M = 6$  input symbols:

  • The output sequence of the  $\rm MTF$  is a string of indices from the set
       $I = \{ 0, 1, 2, 3, 4, 5 \}$.
  • Before starting the actual  $\rm MTF$ algorithm, the possible input symbols are sorted lexicographically and assigned to the following indices:
        $\rm D$   →   0,     $\rm E$   →   1,     $\rm I$   →   2,     $\rm M$   →   3,     $\rm N$   →   4,    $\rm S$   →   5.
  • Let the  $\rm MTF$ input string here be  $\rm N\hspace{0.05cm}M\hspace{0.05cm}S\hspace{0.05cm}D\hspace{0.05cm}E\hspace{0.05cm}E\hspace{0.05cm}E\hspace{0.05cm}N\hspace{0.05cm}I\hspace{0.05cm}I\hspace{0.05cm}I\hspace{0.05cm}N$.  This was the  $\rm BWT$ result in  Exercise 2.13.  The first  $\rm N$  is represented as  $I = 4$  according to the default setting.
  • Then the  $\rm N$  is placed at the beginning in the sorting, so that after the coding step  $i = 1$  the assignment holds:
        $\rm N$   →   0,     $\rm D$   →   1,     $\rm E$   →   2,     $\rm I$   →   3,     $\rm M$   →   4,    $\rm S$   →   5.
  • Continue in the same way until the entire input text has been processed.  If a character is already at position  0, no reordering is necessary.
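The steps just described can be sketched in a few lines of Python (an illustration added here, not part of the original exercise; the function name `mtf_encode` is our choice):

```python
def mtf_encode(text, alphabet):
    """Move-to-Front: emit each symbol's current list position, then move it to the front."""
    table = sorted(alphabet)           # lexicographic preassignment (step i = 0)
    indices = []
    for ch in text:
        i = table.index(ch)            # current position of the input character
        indices.append(i)
        table.insert(0, table.pop(i))  # move that character to the front
    return indices

# BWT result from Exercise 2.13 as the MTF input string:
print(mtf_encode("NMSDEEENIIIN", "DEIMNS"))
# → [4, 4, 5, 3, 4, 0, 0, 4, 5, 0, 0, 1]
```

Applied to the BWT result, the sketch reproduces the index sequence examined in question (3).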




Questions

1.  Which statements are true for block  $\rm BWT$  of the coding system?

The input character set is  $\{ \hspace{0.05cm}\rm D,\hspace{0.05cm} E,\hspace{0.05cm} I,\hspace{0.05cm} M,\hspace{0.05cm} N , \hspace{0.05cm} S \}$.
The output character set is $\{ \hspace{0.05cm}\rm D,\hspace{0.05cm} E,\hspace{0.05cm} I,\hspace{0.05cm} M,\hspace{0.05cm} N , \hspace{0.05cm} S \}$.
In the output sequence, all  $M = 6$  characters occur grouped together.

2.  Which statements are true for the block  $\rm MTF$  of the coding system?

The output character set is  $\{ \hspace{0.05cm}\rm D,\hspace{0.05cm} E,\hspace{0.05cm} I,\hspace{0.05cm} M,\hspace{0.05cm} N , \hspace{0.05cm} S \}$.
The output character set is  $\{ \hspace{0.05cm}\rm 0,\hspace{0.05cm} 1,\hspace{0.05cm} 2,\hspace{0.05cm} 3,\hspace{0.05cm} 4 , \hspace{0.05cm} 5 \}$.
The MTF output sequence has length  $N = 12$.

3.  What is the  $\rm MTF$  output sequence?

$\rm 230000100405$,
$\rm 445340045001$,
$\rm 543120345123$.

4.  Which statements apply to block  $\rm RLC0$  of the coding system?

The input value  $0$  receives special treatment.
The more often a  $0$  occurs, the more effective this block is.
The best would be  $\rm Pr(0) ≈ Pr(1) ≈ \text{...} ≈ Pr(5)$.

5.  Which statements are true for the final block  "Huffman"?

The output sequence is binary.
It yields the smallest possible average code word length.
The dimensioning depends on the other blocks.


Solution

Example of the MTF algorithm

(1)  The graph in the information section shows that solution suggestions 1 and 2 are correct and suggestion 3 is incorrect:

  • $\rm E$  and  $\rm I$  occur grouped together,
  • but not the  $\rm N$  characters.


(2) Proposed solutions 2 and 3 are correct:

  • The input sequence is processed  "character by character". 
    Thus, the output sequence also has the length  $N = 12$.
  • In fact, the input set  $\{ \hspace{0.05cm}\rm D,\hspace{0.05cm} E,\hspace{0.05cm} I,\hspace{0.05cm} M,\hspace{0.05cm} N , \hspace{0.05cm} S \}$  is converted into the output set  $\{ \hspace{0.05cm}\rm 0,\hspace{0.05cm} 1,\hspace{0.05cm} 2,\hspace{0.05cm} 3,\hspace{0.05cm} 4 , \hspace{0.05cm} 5 \}$.
  • However, not by simple  "mapping",  but by an algorithm which is outlined below.


(3)  Correct is solution suggestion 2:

  • The table shows the MTF algorithm.  The step  $i=0$  (red background) indicates the preassignment.  The MTF input is highlighted in yellow, the output in green.
  • In step  $i=1$,  the input character  $\rm N$  corresponding to column  $i=0$  is represented by index  $I = 4$.  Subsequently,  $\rm N$  is sorted to the front, while the order of the other characters remains the same.
  • The input character  $\rm M$  in the second step is also given the index  $I = 4$,  according to column  $i=2$.  One continues in the same way until the twelfth character  $\rm N$,  to which the index  $I = 1$  is assigned.
  • You can see from the above table that at the times  $i=6$,  $i=7$,  $i=10$  and  $i=11$  the output index is  $I = 0$.
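Since the receiver can maintain the same sorted list and perform the same reordering step by step, the MTF mapping is uniquely invertible. A minimal Python sketch of the inverse mapping (our illustration; `mtf_decode` is not part of the exercise):

```python
def mtf_decode(indices, alphabet):
    """Inverse Move-to-Front: look up each index in the current list,
    output that symbol, then move it to the front (same rule as the encoder)."""
    table = sorted(alphabet)  # identical preassignment as at the transmitter
    out = []
    for i in indices:
        ch = table.pop(i)     # symbol currently at position i
        out.append(ch)
        table.insert(0, ch)   # receiver performs the same reordering
    return "".join(out)

print(mtf_decode([4, 4, 5, 3, 4, 0, 0, 4, 5, 0, 0, 1], "DEIMNS"))
# → NMSDEEENIIIN
```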


(4)  Statements 1 and 2  are correct:

  • The preprocessing blocks  "BWT"  and  "MTF"  have the sole task of generating as many zeros as possible.
  • Statement 3 is therefore wrong:  $\rm RLC0$  only pays off if the index  $0$  occurs much more frequently than the other indices;  with equal probabilities there would be hardly any zero runs to compress.
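To illustrate the effect, here is one possible RLC0 variant in Python. The exercise does not fix the exact output format, so encoding each run of zeros as a pair `(0, run_length)` is our assumption; the example index sequence is the MTF output from question (3).

```python
def rlc0_encode(indices):
    """Run-length coding for the index 0 only: a run of k zeros becomes the
    pair (0, k); all non-zero indices pass through unchanged.
    (One possible variant - the exercise does not specify the format.)"""
    out = []
    run = 0
    for i in indices:
        if i == 0:
            run += 1              # extend the current zero run
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(i)         # non-zero indices are not changed by RLC0
    if run:
        out.append((0, run))      # flush a trailing zero run
    return out

print(rlc0_encode([4, 4, 5, 3, 4, 0, 0, 4, 5, 0, 0, 1]))
# → [4, 4, 5, 3, 4, (0, 2), 4, 5, (0, 2), 1]
```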


(5)  All statements  are correct.

  • You can find more information on the Huffman algorithm in the chapter  "Entropy coding according to Huffman".
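As a rough illustration of solution (5), namely that frequent symbols receive short codewords and rare ones long codewords, here is a heapq-based Python sketch that computes only the Huffman codeword lengths (our illustration; the exercise itself provides no concrete probabilities, so the BWT result is reused as example data):

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return the Huffman codeword length per symbol, derived from the symbol
    frequencies (sketch for illustration; ties are broken arbitrarily)."""
    freq = Counter(symbols)
    if len(freq) == 1:                   # degenerate case: a single symbol
        return {s: 1 for s in freq}
    heap = [(f, [s]) for s, f in sorted(freq.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freq}
    while len(heap) > 1:
        f1, g1 = heapq.heappop(heap)     # two least frequent groups
        f2, g2 = heapq.heappop(heap)
        for s in g1 + g2:                # each merge adds one bit to all leaves below
            depth[s] += 1
        heapq.heappush(heap, (f1 + f2, g1 + g2))
    return depth

lengths = huffman_code_lengths("NMSDEEENIIIN")
# the frequent characters E, I, N receive shorter codewords than D, M, S
```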