Information Theory/Discrete Sources with Memory&nbsp; (from LNTwww)
{{Header
|Untermenü=Entropy of Discrete Sources
|Vorherige Seite=Discrete Memoryless Sources
|Nächste Seite=Natural Discrete Sources
}}
  
==A simple introductory example==
<br>
 
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp;
At the&nbsp; [[Information_Theory/Discrete_Memoryless_Sources#Model_and_requirements|"beginning of the first chapter"]]&nbsp; we considered a memoryless message source with the symbol set&nbsp; $\rm \{A, \ B, \ C, \ D\}$ &nbsp; ⇒ &nbsp; $M = 4$.&nbsp; An exemplary symbol sequence is shown again in the following figure as source&nbsp; $\rm Q1$.

*With the symbol probabilities&nbsp; $p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} p_{\rm D} = 0.1\hspace{0.05cm}$&nbsp; the entropy is
:$$H \hspace{-0.05cm}= 0.4 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.2} + 0.1 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.1} \approx 1.84 \hspace{0.05cm}{\rm bit/symbol}
 \hspace{0.01cm}.$$
*Due to the unequal symbol probabilities, the entropy is smaller than the decision content&nbsp; $H_0 = \log_2 M = 2 \hspace{0.05cm} \rm bit/symbol$.

[[File:EN_Inf_T_1_2_S1a.png|right|frame|Quaternary source without/with memory]]
<br><br>
The source&nbsp; $\rm Q2$&nbsp; is largely identical to the source&nbsp; $\rm Q1$,&nbsp; except that each individual symbol is output not just once, but twice in a row:&nbsp; $\rm A ⇒ AA$,&nbsp; $\rm B ⇒ BB$,&nbsp; and so on.
*It is obvious that&nbsp; $\rm Q2$&nbsp; has a smaller entropy (uncertainty) than&nbsp; $\rm Q1$.
*Because of the simple repetition code, the entropy
:$$H = 1.84/2 = 0.92 \hspace{0.05cm} \rm bit/symbol$$
:is now only half as large, although the occurrence probabilities have not changed.}}
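The two entropy values above can be reproduced with a short sketch (the helper name `entropy` is our own, not from the text):

```python
from math import log2

def entropy(probs):
    """H = sum of p * log2(1/p) over all symbols, in bit/symbol."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Source Q1: p_A = 0.4, p_B = 0.3, p_C = 0.2, p_D = 0.1
H_Q1 = entropy([0.4, 0.3, 0.2, 0.1])   # ≈ 1.84 bit/symbol

# Source Q2 outputs every symbol twice, so per symbol only half remains
H_Q2 = H_Q1 / 2                        # ≈ 0.92 bit/symbol
print(round(H_Q1, 2), round(H_Q2, 2))
```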
  
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp;
This example shows:
#The entropy of a source with memory is smaller than the entropy of a memoryless source with the same symbol probabilities.
#The statistical bindings within the sequence&nbsp; $〈 q_ν 〉$&nbsp; now have to be taken into account as well,
#namely the dependence of the symbol&nbsp; $q_ν$&nbsp; on the predecessor symbols&nbsp; $q_{ν-1}$,&nbsp; $q_{ν-2}$, ... }}
 
   
 
   
 
 
 
 
==Entropy with respect to two-tuples==
<br>
We continue to look at the source symbol sequence&nbsp; $〈 q_1, \hspace{0.05cm} q_2,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}, q_{ν-1}, \hspace{0.05cm}q_ν, \hspace{0.05cm}\hspace{0.05cm}q_{ν+1} ,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}〉$&nbsp; and now consider the entropy of two successive source symbols.
*All source symbols&nbsp; $q_ν$&nbsp; are taken from an alphabet with the symbol set size&nbsp; $M$,&nbsp; so that for the combination&nbsp; $(q_ν, \hspace{0.05cm}q_{ν+1})$&nbsp; there are exactly&nbsp; $M^2$&nbsp; possible symbol pairs with the following&nbsp; [[Theory_of_Stochastic_Signals/Set_Theory_Basics#Intersection_set|"joint probabilities"]]:
 
   
 
   
 
:$${\rm Pr}(q_{\nu}\cap q_{\nu+1}) = {\rm Pr}(q_{\nu}) \cdot {\rm Pr}( q_{\nu+1}\hspace{0.03cm}|\hspace{0.03cm}q_{\nu})
  \hspace{0.05cm}.$$
  
*From this, the&nbsp; &raquo;'''joint entropy'''&laquo;&nbsp; of two consecutive symbols can be computed:
:$$H_2\hspace{0.05cm}' = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} \ \sum_{q_{\nu+1}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm} q_{\mu}\hspace{0.01cm} \}}\hspace{-0.1cm}{\rm Pr}(q_{\nu}\cap q_{\nu+1}) \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu}\cap q_{\nu+1})} \hspace{0.4cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/two&ndash;tuple})
  \hspace{0.05cm}.$$
  
:The index "2" symbolizes that the entropy calculated in this way refers to&nbsp; &raquo;'''two-tuples'''&laquo;.
  
 
*To get the average information content per symbol,&nbsp; $H_2\hspace{0.05cm}'$&nbsp; has to be halved:
:$$H_2 = {H_2\hspace{0.05cm}'}/{2}  \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})
  \hspace{0.05cm}.$$
  
*In order to achieve a consistent nomenclature, we now denote the entropy defined in the chapter&nbsp; [[Information_Theory/Discrete_Memoryless_Sources#Model_and_requirements|"Discrete Memoryless Sources"]]&nbsp; by&nbsp; $H_1$:
:$$H_1 = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} {\rm Pr}(q_{\nu}) \cdot {\rm log_2}\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu})} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})
  \hspace{0.05cm}.$$
  
*The index "1" indicates that&nbsp; $H_1$&nbsp; considers only the symbol probabilities and not the statistical bindings between symbols within the sequence.&nbsp; With the decision content&nbsp; $H_0 = \log_2 \ M$&nbsp; the following relation results:
:$$H_0 \ge H_1 \ge H_2
  \hspace{0.05cm}.$$
  
*If the sequence elements are statistically independent,&nbsp; $H_2 = H_1$.
  
The previous equations each specify an&nbsp; "ensemble average".&nbsp; The probabilities required for the calculation of&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; can, however, also be calculated as time averages from a very long sequence or, more precisely, approximated by the corresponding&nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli.27s_law_of_large_numbers|"relative frequencies"]].

Let us now illustrate the calculation of the entropy approximations&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; with three examples.
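This time-averaging approach can be sketched in a few lines. The function name `h1_time_average` and the artificial test sequence are our own; the sequence merely has the relative frequencies&nbsp; $0.5/0.3/0.2$, since the ordering is irrelevant for&nbsp; $H_1$:

```python
from collections import Counter
from math import log2

def h1_time_average(seq):
    """First entropy approximation H1: the symbol probabilities are
    replaced by relative frequencies measured over the sequence."""
    counts = Counter(seq)
    n = len(seq)
    return sum(c / n * log2(n / c) for c in counts.values())

# Artificial 50-symbol ternary sequence with relative frequencies 0.5/0.3/0.2
seq = "A" * 25 + "B" * 15 + "C" * 10
print(round(h1_time_average(seq), 3))   # ≈ 1.49 bit/symbol
```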
[[File:Inf_T_1_2_S2_vers2.png|right|frame|Ternary symbol sequence and formation of two-tuples]]
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp;
We first look at the sequence&nbsp; $〈 q_1$, ... , $q_{50} \rangle $&nbsp; shown in the figure, where the sequence elements&nbsp; $q_ν$&nbsp; originate from the alphabet&nbsp; $\rm \{A, \ B, \ C \}$ &nbsp; ⇒ &nbsp; the symbol set size is&nbsp; $M = 3$.

*By time averaging over the&nbsp; $50$&nbsp; symbols one obtains the symbol probabilities&nbsp; $p_{\rm A} ≈ 0.5$, &nbsp; $p_{\rm B} ≈ 0.3$ &nbsp;and&nbsp; $p_{\rm C} ≈ 0.2$,&nbsp; with which the first-order entropy approximation can be calculated:
:$$H_1 = 0.5 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.5} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.2}  \approx \, 1.486 \,{\rm bit/symbol}
 \hspace{0.05cm}.$$
*Because the symbols are not equally probable,&nbsp; $H_1 < H_0 = 1.585 \hspace{0.05cm} \rm bit/symbol$.&nbsp; As an approximation for the probabilities of the two-tuples one obtains from the above sequence:
:$$\begin{align*}p_{\rm AA} \hspace{-0.1cm}& = \hspace{-0.1cm} 14/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AB} = 8/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AC} = 3/49\hspace{0.05cm}, \\
  p_{\rm BA} \hspace{-0.1cm}& = \hspace{0.07cm} 7/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm BB} = 2/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm BC} = 5/49\hspace{0.05cm}, \\
  p_{\rm CA} \hspace{-0.1cm}& = \hspace{0.07cm} 4/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm CB} = 5/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm CC} = 1/49\hspace{0.05cm}.\end{align*}$$
*Please note that only&nbsp; $49$&nbsp; two-tuples&nbsp; $(\rm AA$, ... , $\rm CC)$&nbsp; can be formed from the&nbsp; $50$&nbsp; sequence elements; they are marked in different colors in the figure.
*The entropy approximation&nbsp; $H_2$&nbsp; calculated from these should actually be equal to&nbsp; $H_1$,&nbsp; since the given symbol sequence comes from a memoryless source.
*Because of the short sequence length&nbsp; $N = 50$&nbsp; and the resulting statistical inaccuracy, however, a smaller value results: &nbsp;
:$$H_2 ≈ 1.39\hspace{0.05cm} \rm bit/symbol.$$}}
  
  
 
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp;
Now let us consider a&nbsp; &raquo;'''memoryless&nbsp; binary source'''&laquo;&nbsp; with equally probable symbols, i.e.&nbsp; $p_{\rm A} = p_{\rm B} = 1/2$.&nbsp; The first twenty sequence elements are &nbsp; $〈 q_ν 〉 =\rm BBAAABAABBBBBAAAABAB$ ...
*Because of the equally probable binary symbols,&nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.
*The joint probability&nbsp; $p_{\rm AB}$&nbsp; of the combination&nbsp; $\rm AB$&nbsp; is equal to&nbsp; $p_{\rm A} \cdot p_{\rm B} = 1/4$.&nbsp; Likewise&nbsp; $p_{\rm AA} = p_{\rm BB} = p_{\rm BA} = 1/4$.
*This gives for the second entropy approximation:
:$$H_2 = {1}/{2} \cdot \big [ {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 + {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 \big ] = 1 \,{\rm bit/symbol} = H_1 = H_0
 \hspace{0.05cm}.$$

<u>Note</u>: &nbsp; Due to the short length of the given sequence, the probabilities obtained from it differ slightly:&nbsp; $p_{\rm AA} = 6/19$,&nbsp; $p_{\rm BB} = 5/19$,&nbsp; $p_{\rm AB} = p_{\rm BA} = 4/19$.}}
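The calculation in Example 3 can be sketched directly from the four joint probabilities (variable names are our own):

```python
from math import log2

# Memoryless binary source, p_A = p_B = 1/2: each of the four
# two-tuples AA, AB, BA, BB has joint probability 1/4
pair_probs = [0.25, 0.25, 0.25, 0.25]

H2_prime = sum(p * log2(1 / p) for p in pair_probs)  # bit/two-tuple
H2 = H2_prime / 2                                    # bit/symbol
print(H2_prime, H2)                                  # 2.0 1.0
```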
  
  
 
{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp;
The third sequence considered here results from the binary sequence of&nbsp; $\text{Example 3}$&nbsp; by applying a simple repetition code:
:$$〈 q_ν 〉 =\rm BbBbAaAaAaBbAaAaBbBb \text{...} $$
*The repeated symbols are marked by corresponding lowercase letters.&nbsp; Nevertheless,&nbsp; $M=2$&nbsp; still applies.
*Because of the equally probable binary symbols, here too&nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.
*As shown in&nbsp; [[Aufgaben:Exercise_1.3:_Entropy_Approximations|"Exercise 1.3"]],&nbsp; the joint probabilities are now&nbsp; $p_{\rm AA}=p_{\rm BB} = 3/8$&nbsp; and&nbsp; $p_{\rm AB}=p_{\rm BA} = 1/8$.&nbsp; Hence:
:$$\begin{align*}H_2 ={1}/{2} \cdot \big [ 2 \cdot {3}/{8} \cdot {\rm log}_2\hspace{0.1cm} {8}/{3} +
  2 \cdot {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8\big ] = {3}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 - {3}/{8} \cdot{\rm log}_2\hspace{0.1cm}3 + {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 \approx 0.906 \,{\rm bit/symbol} < H_1 = H_0
 \hspace{0.05cm}.\end{align*}$$

A closer look at the situation leads to the following conclusions:
*The entropy should actually be&nbsp; $H = 0.5 \hspace{0.05cm} \rm bit/symbol$&nbsp; (every second symbol provides no new information).
*The second entropy approximation&nbsp; $H_2 = 0.906 \hspace{0.05cm} \rm bit/symbol$,&nbsp; however, is significantly larger than the entropy&nbsp; $H$.
*The second-order approximation is therefore not sufficient to determine the entropy.&nbsp; Rather, one must consider larger contiguous blocks of&nbsp; $k > 2$&nbsp; symbols.
*In the following, such a block is referred to as a&nbsp; "$k$&ndash;tuple".}}
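The value&nbsp; $H_2 \approx 0.906 \hspace{0.05cm} \rm bit/symbol$&nbsp; from Example 4 can be checked numerically (a minimal sketch; the variable names are ours):

```python
from math import log2

# Joint probabilities after the repetition code: AA, BB, AB, BA
pair_probs = [3/8, 3/8, 1/8, 1/8]

H2 = sum(p * log2(1 / p) for p in pair_probs) / 2   # bit/symbol
print(round(H2, 3))                                 # 0.906
```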
  
 
 
 
 
==Generalization to $k$-tuples and passage to the limit==
<br>
As an abbreviation, using the joint probability&nbsp; $p_i^{(k)}$&nbsp; of a&nbsp; $k$&ndash;tuple, we write in general:
:$$H_k = \frac{1}{k} \cdot \sum_{i=1}^{M^k} p_i^{(k)} \cdot {\rm log}_2\hspace{0.1cm} \frac{1}{p_i^{(k)}} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})
  \hspace{0.05cm}.$$
  
*The running variable&nbsp; $i$&nbsp; stands for one of the&nbsp; $M^k$&nbsp; tuples.
*The previously calculated approximation&nbsp; $H_2$&nbsp; results for&nbsp; $k = 2$.
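The general formula for&nbsp; $H_k$&nbsp; translates directly into code (the function name `hk` is our own):

```python
from math import log2

def hk(tuple_probs, k):
    """Entropy approximation H_k in bit/symbol, given the joint
    probabilities p_i^(k) of all M^k possible k-tuples."""
    return sum(p * log2(1 / p) for p in tuple_probs if p > 0) / k

# k = 1 reduces to H1 (probabilities of Example 1),
# k = 2 reduces to H2 (pair probabilities of Example 4)
print(round(hk([0.4, 0.3, 0.2, 0.1], 1), 3))   # 1.846
print(round(hk([3/8, 3/8, 1/8, 1/8], 2), 3))   # 0.906
```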
 
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp;
The&nbsp; &raquo;'''entropy of a discrete source with memory'''&laquo;&nbsp; is the following limit:
:$$H = \lim_{k \rightarrow \infty }H_k \hspace{0.05cm}.$$

*For the&nbsp; &raquo;'''entropy approximations'''&laquo;&nbsp; $H_k$&nbsp; the following relations apply&nbsp; $($here&nbsp; $H_0=\log_2  M$&nbsp; is the decision content$)$:
:$$H \le \text{...} \le H_k \le \text{...} \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$}}
  
  
Except for a few special cases (see the following example), the computational effort increases with increasing&nbsp; $k$,&nbsp; and of course it also depends on the symbol set size&nbsp; $M$:
*To calculate&nbsp; $H_{10}$&nbsp; for a binary source&nbsp; $(M = 2)$,&nbsp; one must average over&nbsp; $2^{10} = 1024$&nbsp; terms.
*With each further increase of&nbsp; $k$&nbsp; by&nbsp; $1$,&nbsp; the number of summation terms doubles.
*For a quaternary source&nbsp; $(M = 4)$,&nbsp; the determination of&nbsp; $H_{10}$&nbsp; already requires averaging over&nbsp; $4^{10} = 1\hspace{0.08cm}048\hspace{0.08cm}576$&nbsp; summation terms.
*Considering that each of these&nbsp; $4^{10} =2^{20} >10^6$&nbsp; $k$&ndash;tuples should occur about&nbsp; $100$&nbsp; times (a statistical rule of thumb) in simulation/time averaging to ensure sufficient accuracy, it follows that in this case the sequence length should be greater than&nbsp; $N = 10^8$.
  
  
 
{{GraueBox|TEXT=
$\text{Example 5:}$&nbsp;
We consider an alternating binary sequence &nbsp; ⇒ &nbsp; $〈 q_ν 〉 =\rm ABABABAB$ ... &nbsp;.&nbsp; Accordingly,&nbsp; $H_0 = H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.

In this special case, to determine the&nbsp; $H_k$&nbsp; approximation one only ever has to average over two joint probabilities, regardless of&nbsp; $k$:
* $k = 2$: &nbsp;&nbsp; $p_{\rm AB} = p_{\rm BA} = 1/2$ &nbsp; &nbsp; ⇒ &nbsp; &nbsp; $H_2 = 1/2 \hspace{0.2cm} \rm bit/symbol$,
* $k = 3$: &nbsp;&nbsp; $p_{\rm ABA} = p_{\rm BAB} = 1/2$ &nbsp; &nbsp; ⇒ &nbsp; &nbsp; $H_3 = 1/3 \hspace{0.2cm} \rm bit/symbol$,
* $k = 4$: &nbsp;&nbsp; $p_{\rm ABAB} = p_{\rm BABA} = 1/2$ &nbsp; &nbsp; ⇒ &nbsp; &nbsp; $H_4 = 1/4 \hspace{0.2cm} \rm bit/symbol$.

The (actual) entropy of this alternating binary sequence is therefore
:$$H = \lim_{k \rightarrow \infty }{1}/{k} = 0 \hspace{0.05cm}.$$

The result was to be expected, since the considered sequence contains only minimal information, which does not affect the final entropy value&nbsp; $H$,&nbsp; namely: <br> &nbsp; &nbsp; "Does&nbsp; $\rm A$&nbsp; occur at the even or the odd time instants?"

One can see that&nbsp; $H_k$&nbsp; approaches this final value&nbsp; $H = 0$&nbsp; only very slowly:&nbsp; The twentieth entropy approximation still yields&nbsp; $H_{20} = 0.05 \hspace{0.05cm} \rm bit/symbol$. }}
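The values&nbsp; $H_k = 1/k$&nbsp; of Example 5 can also be obtained by time averaging over the overlapping&nbsp; $k$-tuples of a long alternating sequence. A sketch (function name and sequence length are our own choices):

```python
from collections import Counter
from math import log2

def hk_empirical(seq, k):
    """H_k estimated by time averaging: the joint probabilities are
    replaced by relative frequencies of the overlapping k-tuples."""
    tuples = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    n = len(tuples)
    return sum(c / n * log2(n / c) for c in Counter(tuples).values()) / k

seq = "AB" * 500                     # alternating sequence ABABAB...
for k in (2, 4, 20):
    # only two distinct k-tuples occur, each with probability ~1/2
    print(k, round(hk_empirical(seq, k), 3))
```

The printed approximations are 0.5, 0.25 and 0.05 bit/symbol, i.e.&nbsp; $H_k \approx 1/k$, as stated in the example.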
 
  
  
 
{{BlaueBox|TEXT=
$\text{Summary of the results of the last sections:}$&nbsp;

*In general, the following applies to the&nbsp; &raquo;'''entropy of a message source'''&laquo;:
:$$H \le \text{...} \le H_3 \le H_2 \le H_1 \le H_0
  \hspace{0.05cm}.$$
*A&nbsp; &raquo;'''redundancy-free source'''&laquo;&nbsp; exists if all&nbsp; $M$&nbsp; symbols are equally probable and there are no statistical bindings within the sequence. <br>For such a source&nbsp; $(r$&nbsp; denotes the&nbsp; ''relative redundancy''$)$:
:$$H = H_0 = H_1 = H_2 = H_3 = \text{...}\hspace{0.5cm}
\Rightarrow \hspace{0.5cm} r = \frac{H_0 - H}{H_0}= 0 \hspace{0.05cm}.$$
*A&nbsp; &raquo;'''memoryless source'''&laquo;&nbsp; can certainly be redundant&nbsp; $(r> 0)$.&nbsp; This redundancy is then solely due to the deviation of the symbol probabilities from the uniform distribution.&nbsp; Here the following relations hold:
:$$H = H_1 = H_2 = H_3 = \text{...} \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}0 \le r = \frac{H_0 - H_1}{H_0}< 1 \hspace{0.05cm}.$$
*The corresponding condition for a&nbsp; &raquo;'''source with memory'''&laquo;&nbsp; is:
:$$ H <\text{...} < H_3 < H_2 < H_1 \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm} 0 < r = \frac{H_0 - H}{H_0}\le1 \hspace{0.05cm}.$$
*If&nbsp; $H_2 < H_1$,&nbsp; then also&nbsp; $H_3 < H_2$, &nbsp; $H_4 < H_3$,&nbsp; etc. &nbsp; &rArr; &nbsp; In the general equation above, the&nbsp; "≤"&nbsp; sign must then be replaced by the&nbsp; "<"&nbsp; sign.
*If the symbols are equally probable, then again&nbsp; $H_1 = H_0$,&nbsp; while&nbsp; $H_1 < H_0$&nbsp; applies to symbols that are not equally probable.}}
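As a small numerical illustration of the relative redundancy, written with the sign convention&nbsp; $r = (H_0 - H)/H_0$&nbsp; so that&nbsp; $r \ge 0$&nbsp; (a sketch; the helper `relative_redundancy` is our own):

```python
from math import log2

def relative_redundancy(probs, M):
    """r = (H0 - H)/H0 with H0 = log2(M); for a memoryless source
    the entropy H equals the first approximation H1."""
    H0 = log2(M)
    H = sum(p * log2(1 / p) for p in probs if p > 0)
    return (H0 - H) / H0

# Memoryless quaternary source Q1 from Example 1
r = relative_redundancy([0.4, 0.3, 0.2, 0.1], 4)
print(round(r, 3))   # ≈ 0.077, i.e. Q1 is slightly redundant
```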
 
 
 
 
  
==Die Entropie des AMI–Codes  ==
+
==The entropy of the AMI code ==
 
<br>
In chapter&nbsp; [[Digital_Signal_Transmission/Symbolwise_Coding_with_Pseudo-Ternary_Codes#Properties_of_the_AMI_code|"Symbol-wise Coding with Pseudo-Ternary Codes"]]&nbsp; of the book&nbsp; "Digital Signal Transmission", among other things, the AMI pseudo-ternary code is discussed.
[[File:EN_Inf_T_1_2_S4.png|right|frame|Signals and symbol sequences for AMI code]]
*This converts the binary sequence&nbsp; $〈 q_ν 〉$&nbsp; with&nbsp; $q_ν ∈ \{ \rm L, \ H \}$&nbsp; into the ternary sequence&nbsp; $〈 c_ν 〉$&nbsp; with&nbsp; $c_ν ∈ \{ \rm M, \ N, \ P \}$.
*The names of the source symbols stand for&nbsp; $\rm L$(ow)&nbsp; and&nbsp; $\rm H$(igh),&nbsp; and those of the code symbols for&nbsp; $\rm M$(inus),&nbsp; $\rm N$(ull, i.e. "zero")&nbsp; and&nbsp; $\rm P$(lus).
  
  
The coding rule of the AMI code ("Alternate Mark Inversion") is:
 
  
*Each binary symbol&nbsp; $q_ν =\rm L$&nbsp; is represented by the code symbol&nbsp; $c_ν =\rm N$&nbsp;.
*In contrast,&nbsp; $q_ν =\rm H$&nbsp; is coded alternately with&nbsp; $c_ν =\rm P$&nbsp; and&nbsp; $c_ν =\rm M$ &nbsp; ⇒ &nbsp; name&nbsp; "Alternate Mark Inversion".
  
*This special encoding adds redundancy with the sole purpose of ensuring that the encoded sequence does not contain a DC component.  
 
<br clear=all>
However, we do not consider the spectral properties of the AMI code here, but interpret this code information-theoretically:
*Based on the symbol set size&nbsp; $M = 3$,&nbsp; the decision content of the (ternary) encoded sequence is&nbsp; $H_0 = \log_2 \ 3 ≈ 1.585 \hspace{0.05cm} \rm bit/symbol$.&nbsp; The first entropy approximation yields&nbsp; $H_1 = 1.5 \hspace{0.05cm} \rm bit/symbol$, as the following calculation shows:
 
    
 
    
 
:$$p_{\rm H} = p_{\rm L} = 1/2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}
p_{\rm N} = p_{\rm L} = 1/2\hspace{0.05cm},\hspace{0.2cm}p_{\rm M} = p_{\rm P}= p_{\rm H}/2 = 1/4\hspace{0.3cm}
\Rightarrow \hspace{0.3cm} H_1 = 1/2 \cdot {\rm log}_2\hspace{0.1cm}2 + 2 \cdot 1/4 \cdot{\rm log}_2\hspace{0.1cm}4 = 1.5 \,{\rm bit/symbol}
  \hspace{0.05cm}.$$
  
*Let us now look at two-tuples.&nbsp; With the AMI code,&nbsp; $\rm P$&nbsp; cannot follow&nbsp; $\rm P$,&nbsp; and&nbsp; $\rm M$&nbsp; cannot follow&nbsp; $\rm M$.&nbsp; The probability for&nbsp; $\rm NN$&nbsp; is equal to&nbsp; $p_{\rm L} \cdot p_{\rm L} = 1/4$.&nbsp; All other (six) two-tuples occur with probability&nbsp; $1/8$.&nbsp; From this, the second entropy approximation follows:
:$$H_2 = 1/2 \cdot \big [ 1/4 \cdot {\rm log_2}\hspace{0.1cm}4 + 6 \cdot 1/8 \cdot {\rm log_2}\hspace{0.1cm}8 \big ] = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  
*For the further entropy approximations&nbsp; $H_3$,&nbsp; $H_4$, ...&nbsp; and the actual entropy&nbsp; $H$,&nbsp; the following will hold:
:$$ H < \hspace{0.05cm}\text{...}\hspace{0.05cm} < H_5 < H_4 < H_3 < H_2 = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  
*In this example, exceptionally, we know the actual entropy&nbsp; $H$&nbsp; of the encoded sequence&nbsp; $〈 c_ν 〉$: &nbsp; Since the coder neither adds nor loses information, the entropy&nbsp; $H = 1 \,{\rm bit/symbol} $&nbsp; is the same as that of the redundancy-free binary sequence $〈 q_ν 〉$.
  
  
[[Aufgaben:Exercise_1.4:_Entropy_Approximations_for_the_AMI_Code|"Exercise 1.4"]]&nbsp; shows the considerable effort required to calculate the entropy approximation&nbsp; $H_3$. &nbsp; Moreover,&nbsp; $H_3$&nbsp; still deviates significantly from the final value&nbsp; $H = 1 \,{\rm bit/symbol} $.&nbsp; The result is obtained faster if the AMI code is described by a Markov chain, as explained in the next section.
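The numerical values&nbsp; $H_0$,&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; given above can be verified with a few lines of Python.&nbsp; This is only a sketch; the symbol and two-tuple probabilities are taken directly from the text:

```python
from math import log2

# First-order symbol probabilities of the AMI-coded sequence
p1 = {"N": 0.5, "M": 0.25, "P": 0.25}
H1 = sum(p * log2(1 / p) for p in p1.values())   # first entropy approximation

# Two-tuple probabilities: "PP" and "MM" cannot occur,
# "NN" has probability 1/4, the six remaining tuples 1/8 each
p2 = {"NN": 0.25, "NM": 0.125, "NP": 0.125, "MN": 0.125,
      "MP": 0.125, "PN": 0.125, "PM": 0.125}
H2 = 0.5 * sum(p * log2(1 / p) for p in p2.values())   # halved: per symbol

print(round(log2(3), 3), H1, H2)   # 1.585 1.5 1.375
```

Note that the two-tuple entropy is divided by two, so that&nbsp; $H_2$&nbsp; again has the unit "bit/symbol".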
  
  
==Binary sources with Markov properties ==  
 
<br>
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&nbsp; $M = 2$&nbsp; states]]
Sequences with statistical bindings between the sequence elements (symbols) are often modeled by&nbsp; [[Theory_of_Stochastic_Signals/Markov_Chains|$\text{Markov processes}$]]&nbsp; where we limit ourselves here to first-order Markov processes.&nbsp; First we consider a binary Markov process&nbsp; $(M = 2)$&nbsp; with the states (symbols)&nbsp; $\rm A$&nbsp; and&nbsp; $\rm B$.
  
On the right you can see the transition diagram for a first-order binary Markov process.&nbsp; Of the four transition probabilities shown, however, only two are freely selectable, for example
 +
* $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = \rm Pr(A\hspace{0.01cm}|\hspace{0.01cm}B)$ &nbsp; ⇒ &nbsp; conditional probability that&nbsp; $\rm A$&nbsp; follows&nbsp; $\rm B$.
  
* $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = \rm Pr(B\hspace{0.01cm}|\hspace{0.01cm}A)$  &nbsp; ⇒ &nbsp; conditional probability that&nbsp; $\rm B$&nbsp; follows&nbsp; $\rm A$&nbsp;.
 
  
  
For the other two transition probabilities:&nbsp; $p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} = 1- p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}$ &nbsp;and &nbsp; $p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} = 1- p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}
  \hspace{0.05cm}.$
  
Due to the presupposed properties of&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function_(ACF)#Stationary_random_processes|$\text{stationarity}$]]&nbsp; and&nbsp; [[Theory_of_Stochastic_Signals/Auto-Correlation_Function_(ACF)#Ergodic_random_processes|$\text{ergodicity}$]],&nbsp; the following applies to the state or symbol probabilities:
 
   
 
   
 
:$$p_{\rm A} = {\rm Pr}({\rm A}) = \frac{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}
\hspace{0.05cm},\hspace{0.5cm}
p_{\rm B} = {\rm Pr}({\rm B}) = \frac{p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}
  \hspace{0.05cm}.$$
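This relation is easy to evaluate numerically.&nbsp; The following Python sketch computes the ergodic probabilities for illustrative (assumed) transition probabilities&nbsp; $p_{\rm A|B} = 0.2$&nbsp; and&nbsp; $p_{\rm B|A} = 0.4$:

```python
# Stationary (ergodic) state probabilities of a binary first-order
# Markov chain from its two free transition probabilities.
# The numerical values below are illustrative assumptions.
def stationary(p_a_given_b: float, p_b_given_a: float) -> tuple[float, float]:
    denom = p_a_given_b + p_b_given_a
    p_a = p_a_given_b / denom
    return p_a, 1.0 - p_a

pA, pB = stationary(p_a_given_b=0.2, p_b_given_a=0.4)
print(pA, pB)   # pA = 1/3, pB = 2/3: state B is visited twice as often
```

As a plausibility check:&nbsp; $p_{\rm A} \cdot p_{\rm A|A} + p_{\rm B} \cdot p_{\rm A|B} = p_{\rm A}$&nbsp; holds for these values, so the distribution is indeed stationary.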
  
These equations allow first information&ndash;theoretical statements about the Markov processes:
* For&nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$&nbsp; the symbols are equally likely &nbsp; ⇒ &nbsp; $p_{\text{A}} = p_{\text{B}}= 0.5$.&nbsp; The first entropy approximation yields&nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$, independent of the actual values of the (conditional) transition probabilities&nbsp; $p_{\text{A|B}}$&nbsp; and&nbsp; $p_{\text{B|A}}$.
*But the source entropy&nbsp; $H$&nbsp; as the limit value&nbsp; $($for&nbsp; $k \to \infty)$&nbsp; of the&nbsp; [[Information_Theory/Discrete_Sources_with_Memory#Generalization_to_.7F.27.22.60UNIQ-MathJax111-QINU.60.22.27.7F-tuple_and_boundary_crossing|"Entropy approximation of order&nbsp; $k$"]] &nbsp; &rArr; &nbsp; $H_k$&nbsp; depends very much on the actual values of&nbsp; $p_{\text{A|B}}$ &nbsp;and&nbsp; $p_{\text{B|A}}$,&nbsp; and not only on their quotient.&nbsp; This is shown by the following example.
  
  
 
{{GraueBox|TEXT=
$\text{Example 6:}$&nbsp;
We consider three binary symmetric Markov sources that differ in the numerical values of the symmetric transition probabilities&nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$.&nbsp; For the symbol probabilities this implies&nbsp; $p_{\rm A} = p_{\rm B}= 0.5$, and the other transition probabilities then have the values
  
[[File:EN_Inf_T_1_2_S5b.png|right|frame|Three examples of binary Markov sources with&nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$&nbsp;]]  
 
:$$p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 1 - p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } =
p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}B} }.$$
  
*The middle (blue) sequence with&nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.5$&nbsp; has the entropy&nbsp; $H ≈ 1 \hspace{0.1cm}  \rm bit/symbol$.&nbsp; That means: &nbsp; In this special case there are no statistical bindings within the sequence.
  
*The left (red) sequence with&nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.2$&nbsp; shows fewer changes between&nbsp; $\rm A$&nbsp; and&nbsp; $\rm B$.&nbsp; Due to statistical dependencies between neighboring symbols, the entropy is now smaller:&nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$.
  
*The right (green) symbol sequence with&nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.8$&nbsp; has the exact same entropy&nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&nbsp; as the red sequence.&nbsp; Here you can see many regions with alternating symbols&nbsp; $($... $\rm ABABAB$ ... $)$.}}
  
  
Two further remarks on this example:
#If one had not exploited the Markov properties of the red and green sequences, the respective result&nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&nbsp; would have been obtained only after lengthy calculations.
#The following sections show that for a source with Markov properties the final value&nbsp; $H$&nbsp; can be determined from the entropy approximations&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; alone. &nbsp; Likewise, all entropy approximations&nbsp; $H_k$&nbsp; for&nbsp; $k$&ndash;tuples can then be calculated from&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; in a simple manner &nbsp; ⇒ &nbsp; $H_3$,&nbsp; $H_4$,&nbsp; $H_5$, ... ,&nbsp; $H_{100}$, ...
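The first remark can be illustrated empirically: without exploiting the Markov structure, one can only estimate&nbsp; $H$&nbsp; from relative frequencies of a long simulated sequence.&nbsp; The following Python sketch does this for the red sequence&nbsp; $(p = 0.2)$, anticipating the relation&nbsp; $H = 2 \cdot H_2 - H_1$&nbsp; derived in the next sections; the sequence length and seed are arbitrary assumptions:

```python
import random
from collections import Counter
from math import log2

# Empirical entropy estimate for a symmetric binary Markov source with
# transition probability p = 0.2: estimate H1 and H2 from relative
# frequencies, then use H = 2*H2 - H1 (valid for Markov sources).
random.seed(1)
p, n = 0.2, 200_000
sym, seq = "A", []
for _ in range(n):
    seq.append(sym)
    if random.random() < p:               # change state with probability p
        sym = "B" if sym == "A" else "A"

c1 = Counter(seq)
H1 = sum(v / n * log2(n / v) for v in c1.values())

pairs = list(zip(seq, seq[1:]))
m = len(pairs)
c2 = Counter(pairs)
H2 = 0.5 * sum(v / m * log2(m / v) for v in c2.values())  # bit/symbol

H = 2 * H2 - H1
print(round(H, 3))   # close to 0.722 bit/symbol, as stated in Example 6
```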
 
 
 
   
 
   
== Simplified entropy calculation for Markov sources ==
 
<br>
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&nbsp; $M = 2$&nbsp; states]]
We continue to assume the first-order symmetric binary Markov source.&nbsp; As in the previous section, we use the following nomenclature for
*the transition probabilities &nbsp; &rArr; &nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,&nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}A}}= 1- p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$, &nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}B}} = 1 - p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &nbsp;
*the ergodic probabilities &nbsp; &rArr; &nbsp; $p_{\text{A}}$&nbsp; and&nbsp; $p_{\text{B}}$,

*the joint probabilities, for example &nbsp; &rArr; &nbsp; $p_{\text{AB}} = p_{\text{A}} \cdot p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$.
  
  
We now compute the&nbsp; [[Information_Theory/Discrete_Sources_with_Memory#Entropy_with_respect_to_two-tuples|"Entropy of a two-tuple"]]&nbsp; (with the unit "bit/two-tuple"):
 
   
 
   
:$$H_2\hspace{0.05cm}' = p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}
  \hspace{0.05cm}.$$
  
If one now replaces the logarithms of the products by corresponding sums of logarithms, one gets the result&nbsp; $H_2\hspace{0.05cm}' = H_1 + H_{\text{M}}$&nbsp; with  
 
:$$H_1 = p_{\rm A}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = p_{\rm A}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = H_{\rm bin} (p_{\rm A})= H_{\rm bin} (p_{\rm B})
  \hspace{0.05cm},$$

:$$H_{\rm M} = p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}
  \hspace{0.05cm}.$$
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; Thus the&nbsp; &raquo;'''second entropy approximation'''&laquo;&nbsp; (with the unit "bit/symbol")&nbsp; is:
:$$H_2 = {1}/{2} \cdot {H_2\hspace{0.05cm}'} = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big]
  \hspace{0.05cm}.$$}}
  
  
Note:
#The first summand&nbsp; $H_1$ &nbsp; ⇒ &nbsp; first entropy approximation depends only on the symbol probabilities.
#For a symmetrical Markov process &nbsp; ⇒ &nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} $ &nbsp; ⇒ &nbsp; $p_{\text{A}} = p_{\text{B}} = 1/2$ &nbsp; the result for this first summand is&nbsp; $H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.
#The second summand&nbsp; $H_{\text{M}}$&nbsp; must be calculated according to the second of the two upper equations.  
#For a symmetrical Markov process one obtains&nbsp; $H_{\text{M}} = H_{\text{bin}}(p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}})$.
  
  
Now this result is extended to the&nbsp; $k$&ndash;th entropy approximation.&nbsp; Here we exploit the advantage of Markov sources over other sources that the entropy calculation for&nbsp; $k$&ndash;tuples is very simple.&nbsp; For every Markov source the following applies:
 
   
 
   
 
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.3cm} \Rightarrow \hspace{0.3cm}
  H_2 = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big ]\hspace{0.05cm}, \hspace{0.3cm}
  H_3 ={1}/{3} \cdot \big [ H_{\rm 1} + 2 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.3cm}
  H_4 = {1}/{4} \cdot \big [ H_{\rm 1} + 3 \cdot H_{\rm M}\big ]
  \hspace{0.05cm},\hspace{0.15cm}{\rm etc.}$$
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; In the limit&nbsp; $k \to \infty$, one obtains the actual source entropy:
:$$H = \lim_{k \rightarrow \infty} H_k = H_{\rm M} \hspace{0.05cm}.$$
From this simple result important insights for the entropy calculation follow:
*For Markov sources it is sufficient to determine the entropy approximations&nbsp; $H_1$&nbsp; and&nbsp; $H_2$.&nbsp; Thus, the entropy of a Markov source is
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm}.$$
  
*$H_1$&nbsp; and&nbsp; $H_2$&nbsp; also determine all further entropy approximations&nbsp; $H_k$&nbsp; for&nbsp; $k \ge 3$:
 
   
 
   
:$$H_k = \frac{2-k}{k} \cdot H_{\rm 1} + \frac{2\cdot (k-1)}{k} \cdot H_{\rm 2}
  \hspace{0.05cm}.$$}}
  
  
However, these approximations are not very important.&nbsp; Mostly only the limit value&nbsp; $H$&nbsp; is of interest.&nbsp; For sources without Markov properties the approximations&nbsp; $H_k$&nbsp; are calculated only to be able to estimate this limit value, i.e. the actual entropy.
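For the symmetric binary Markov source, these formulas can be evaluated directly.&nbsp; The following Python sketch reproduces the value&nbsp; $H ≈ 0.72 \hspace{0.1cm} \rm bit/symbol$&nbsp; of Example 6 for the transition probability&nbsp; $p = 0.2$:

```python
from math import log2

def H_bin(p: float) -> float:
    """Binary entropy function in bit."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

# Symmetric binary first-order Markov source (red sequence of Example 6):
# equally probable symbols -> H1 = 1 bit/symbol, and H_M = H_bin(p).
p = 0.2
H1, HM = 1.0, H_bin(p)

# Entropy approximations H_1 ... H_5 and the source entropy H = 2*H2 - H1
Hk = [(H1 + (k - 1) * HM) / k for k in range(1, 6)]
H = 2 * Hk[1] - H1

print([round(h, 3) for h in Hk], round(H, 3))  # Hk decreases toward H ≈ 0.722
```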
<u>Notes</u>:
*In the&nbsp; [[Aufgaben:Exercise_1.5:_Binary_Markov_Source|"Exercise 1.5"]]&nbsp; the above equations are applied to the more general case of an asymmetric binary source.
  
*All equations in this section also apply to non-binary Markov sources&nbsp; $(M > 2)$, as shown in the next section.
 
  
  
  
==Non-binary Markov sources ==
 
<br>
[[File:EN_Inf_T_1_2_S6.png|right|frame|Ternary and quaternary first-order Markov sources]]
  
The following equations apply to each Markov source regardless of the symbol set size&nbsp; $M$:
 
   
 
   
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm},$$
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.05cm},$$
:$$ \lim_{k \rightarrow \infty} H_k = H
  \hspace{0.05cm}.$$
  
These equations allow the simple calculation of the entropy&nbsp; $H$&nbsp; from the approximations&nbsp; $H_1$&nbsp; and&nbsp; $H_2$.
We now look at the transition diagrams sketched on the right:
*A ternary Markov source&nbsp; $\rm MQ3$&nbsp; $(M = 3$,&nbsp; blue coloring$)$,&nbsp; and
  
*a quaternary Markov source&nbsp; $\rm MQ4$&nbsp; $(M = 4$,&nbsp; red coloring$)$.
 
 
<br clear=all>
In&nbsp; [[Aufgaben:Exercise_1.6:_Non-Binary_Markov_Sources|"Exercise 1.6"]]&nbsp; the entropy approximations&nbsp; $H_k$&nbsp; and the source entropy&nbsp; $H$&nbsp; as the limit of&nbsp; $H_k$&nbsp; for&nbsp; $k \to \infty$&nbsp; are calculated.&nbsp; The results are compiled in the following figure.&nbsp; All entropies given there have the unit&nbsp; "bit/symbol".
[[File:EN_Inf_T_1_2_S6b.png|right|frame|Entropies for&nbsp; $\rm MQ3$,&nbsp; $\rm MQ4$&nbsp; and the&nbsp; $\rm AMI$&nbsp; code]]

These results can be interpreted as follows:
*For the ternary Markov source&nbsp; $\rm MQ3$&nbsp; the entropy approximations decrease continuously from&nbsp; $H_1 = 1.500$&nbsp; via&nbsp; $H_2 = 1.375$&nbsp; down to the limit&nbsp; $H = 1.250$.&nbsp; Because of&nbsp; $M = 3$&nbsp; the decision content is&nbsp; $H_0 = 1.585$.

*For the quaternary Markov source&nbsp; $\rm MQ4$&nbsp; one obtains&nbsp; $H_0 = H_1 = 2.000$&nbsp; (since there are four equally probable states) and&nbsp; $H_2 = 1.500$. &nbsp; From the&nbsp; $H_1$&nbsp; and&nbsp; $H_2$&nbsp; values all entropy approximations&nbsp; $H_k$&nbsp; and the final value&nbsp; $H = 1.000$&nbsp; can be calculated.

*The two models&nbsp; $\rm MQ3$&nbsp; and&nbsp; $\rm MQ4$&nbsp; were created in the attempt to describe the&nbsp; [[Information_Theory/Discrete_Sources_with_Memory#The_entropy_of_the_AMI_code|"AMI code"]]&nbsp; information-theoretically by Markov sources.&nbsp; The symbols&nbsp; $\rm M$,&nbsp; $\rm N$&nbsp; and&nbsp; $\rm P$&nbsp; stand for&nbsp; "minus",&nbsp; "zero"&nbsp; and&nbsp; "plus".

*The entropy approximations&nbsp; $H_1$,&nbsp; $H_2$&nbsp; and&nbsp; $H_3$&nbsp; of the AMI code (green markers) were calculated in&nbsp; [[Aufgaben:Aufgabe 1.4: Entropienäherungen für den AMI-Code|"Exercise 1.4"]].&nbsp; The calculation of&nbsp; $H_4$,&nbsp; $H_5$, ... had to be omitted for reasons of effort.&nbsp; However, the final value of&nbsp; $H_k$&nbsp; for&nbsp; $k \to \infty$ &nbsp; ⇒ &nbsp; $H = 1.000$&nbsp; is known.

*One can see that the Markov model&nbsp; $\rm MQ3$&nbsp; yields exactly the same numerical values&nbsp; $H_0 = 1.585$,&nbsp; $H_1 = 1.500$&nbsp; and&nbsp; $H_2 = 1.375$&nbsp; as the AMI code. &nbsp; In contrast,&nbsp; $H_3$&nbsp; differs&nbsp; $(1.333$&nbsp; instead of&nbsp; $1.292)$&nbsp; and especially the final value&nbsp; $H$&nbsp; $(1.250$&nbsp; compared to&nbsp; $1.000)$.

*The model&nbsp; $\rm MQ4$&nbsp; $(M = 4)$&nbsp; differs from the AMI code&nbsp; $(M = 3)$&nbsp; with respect to the decision content&nbsp; $H_0$&nbsp; and also with respect to all entropy approximations&nbsp; $H_k$. &nbsp; Nevertheless,&nbsp; $\rm MQ4$&nbsp; is the more suitable model for the AMI code, since the final value&nbsp; $H = 1.000$&nbsp; agrees.

*The model&nbsp; $\rm MQ3$&nbsp; yields entropy values that are too large, since the sequences&nbsp; $\rm PNP$&nbsp; and&nbsp; $\rm MNM$&nbsp; are possible here, which cannot occur in the AMI code. &nbsp; Already in the approximation&nbsp; $H_3$&nbsp; the difference is slightly noticeable; in the final value&nbsp; $H$&nbsp; it is even clearly visible&nbsp; $(1.250$&nbsp; instead of&nbsp; $1.000)$.
  
 
  
In the model&nbsp; $\rm MQ4$&nbsp; the state&nbsp; "zero"&nbsp; was split into two states&nbsp; $\rm N$&nbsp; and&nbsp; $\rm O$&nbsp; (see upper right figure in this section):
*The following applies to the state&nbsp; $\rm N$: &nbsp; The current binary symbol&nbsp; $\rm L$&nbsp; is encoded with the amplitude value&nbsp; $0$&nbsp; (zero), as per the AMI rule.&nbsp; The next occurring&nbsp; $\rm H$&nbsp;symbol, on the other hand, is encoded as&nbsp; $-1$&nbsp; (minus), because the last&nbsp; $\rm H$&nbsp;symbol was encoded as&nbsp; $+1$&nbsp; (plus).

*Also in the state&nbsp; $\rm O$&nbsp; the current binary symbol&nbsp; $\rm L$&nbsp; is represented by the ternary value&nbsp; $0$.&nbsp; In contrast to the state&nbsp; $\rm N$,&nbsp; however, the next occurring&nbsp; $\rm H$&nbsp;symbol is now encoded as&nbsp; $+1$&nbsp; (plus), since the last&nbsp; $\rm H$&nbsp;symbol was encoded as&nbsp; $-1$&nbsp; (minus).
 
  
  
 
{{BlaueBox|TEXT=  
$\text{Conclusion:}$&nbsp; 
#The&nbsp; $\rm MQ4$&nbsp;output sequence actually follows the rules of the AMI code and has the entropy&nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$. 
#However, because of the new state&nbsp; $\rm O$,&nbsp; the decision content&nbsp; $H_0 = 2.000 \hspace{0.15cm} \rm bit/symbol$&nbsp; $($compared to&nbsp; $1.585 \hspace{0.15cm} \rm bit/symbol)$&nbsp; is now clearly too large. 
#Also all&nbsp; $H_k$&nbsp;approximations are larger than for the AMI code.&nbsp; Only for &nbsp;$k \to \infty$&nbsp; do the model&nbsp; $\rm MQ4$&nbsp; and the AMI code match exactly: &nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.}}
 
  
 
   
 
   
==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_1.3:_Entropy_Approximations|Exercise 1.3: Entropy Approximations]]

[[Aufgaben:Exercise_1.4:_Entropy_Approximations_for_the_AMI_Code|Exercise 1.4: Entropy Approximations for the AMI Code]]

[[Aufgaben:Exercise_1.4Z:_Entropy_of_the_AMI_Code|Exercise 1.4Z: Entropy of the AMI Code]]

[[Aufgaben:Exercise_1.5:_Binary_Markov_Source|Exercise 1.5: Binary Markov Source]]

[[Aufgaben:Exercise_1.5Z:_Symmetrical_Markov_Source|Exercise 1.5Z: Symmetrical Markov Source]]

[[Aufgaben:Exercise_1.6:_Non-Binary_Markov_Sources|Exercise 1.6: Non-Binary Markov Sources]]

[[Aufgaben:Exercise_1.6Z:_Ternary_Markov_Source|Exercise 1.6Z: Ternary Markov Source]]

{{Display}}

Latest revision as of 16:19, 14 February 2023

A simple introductory example


$\text{Example 1:}$  At the  "beginning of the first chapter"  we considered a memoryless message source with the symbol set  $\rm \{A, \ B, \ C, \ D\}$   ⇒   $M = 4$.   An exemplary symbol sequence is shown again in the following figure as source  $\rm Q1$.

  • With the symbol probabilities  $p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} p_{\rm D} = 0.1\hspace{0.05cm}$  the entropy is
$$H \hspace{-0.05cm}= 0.4 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.2} + 0.1 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.1} \approx 1.84 \hspace{0.05cm}{\rm bit/symbol} \hspace{0.01cm}.$$
  • Due to the unequal symbol probabilities the entropy is smaller than  $H_0 = \log_2 M = 2 \hspace{0.05cm} \rm bit/symbol$.
Quaternary source without/with memory



The source  $\rm Q2$  is almost identical to the source  $\rm Q1$, except that each individual symbol is output not only once, but twice in a row:  $\rm A ⇒ AA$,  $\rm B ⇒ BB$,  and so on.

  • It is obvious that  $\rm Q2$  has a smaller entropy (uncertainty) than  $\rm Q1$.
  • Because of the simple repetition code,  the entropy
$$H = 1.84/2 = 0.92 \hspace{0.05cm} \rm bit/symbol$$
has only half the size, although the occurrence probabilities have not changed.
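Both entropy values can be reproduced in a few lines; a minimal sketch:

```python
import math

def entropy(probs):
    """Entropy of a memoryless source in bit/symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

H_q1 = entropy([0.4, 0.3, 0.2, 0.1])   # source Q1: about 1.846 bit/symbol
H_q2 = H_q1 / 2                        # source Q2: every symbol output twice
```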


$\text{Conclusion:}$  This example shows:

  1. The entropy of a source with memory is smaller than the entropy of a memoryless source with equal symbol probabilities.
  2. The statistical bindings within the sequence  $〈 q_ν 〉$  now have to be considered,
  3. namely the dependence of the symbol  $q_ν$  on the predecessor symbols  $q_{ν-1}$,  $q_{ν-2}$, ...


Entropy with respect to two-tuples


We continue to look at the source symbol sequence  $〈 q_1, \hspace{0.05cm} q_2,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}, q_{ν-1}, \hspace{0.05cm}q_ν, \hspace{0.05cm}q_{ν+1}, \hspace{0.05cm}\text{...} \hspace{0.05cm}〉$  and now consider the entropy of two successive source symbols.

  • All source symbols  $q_ν$  are taken from an alphabet with the symbol set size  $M$,  so that for the combination  $(q_ν, \hspace{0.05cm}q_{ν+1})$  there are exactly  $M^2$  possible symbol pairs with the  "joint probabilities":
$${\rm Pr}(q_{\nu}\cap q_{\nu+1}) = {\rm Pr}(q_{\nu}) \cdot {\rm Pr}( q_{\nu+1}\hspace{0.03cm}|\hspace{0.03cm}q_{\nu}) \hspace{0.05cm}.$$
  • From this, the  »compound entropy«  of two consecutive symbols can be computed:
$$H_2\hspace{0.05cm}' = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} \ \sum_{q_{\nu+1}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm} q_{\mu}\hspace{0.01cm} \}}\hspace{-0.1cm}{\rm Pr}(q_{\nu}\cap q_{\nu+1}) \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu}\cap q_{\nu+1})} \hspace{0.4cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/two–tuple}) \hspace{0.05cm}.$$
The index "2" symbolizes that the entropy thus calculated refers to  »two-tuples«.
  • To get the average information content per symbol,  $H_2\hspace{0.05cm}'$  has to be divided in half:
$$H_2 = {H_2\hspace{0.05cm}'}/{2} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol}) \hspace{0.05cm}.$$
  • Analogously, the  »first entropy approximation«  follows from the symbol probabilities alone:
$$H_1 = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} {\rm Pr}(q_{\nu}) \cdot {\rm log_2}\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu})} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol}) \hspace{0.05cm}.$$
  • The index "1" is supposed to indicate that  $H_1$  considers only the symbol probabilities and not statistical bindings between symbols within the sequence.  With  $H_0 = \log_2 \ M$  the following size relation results:
$$H_0 \ge H_1 \ge H_2 \hspace{0.05cm}.$$
  • With statistical independence of the sequence elements   ⇒   $H_2 = H_1$.


The previous equations each indicate an  "ensemble mean value".   The probabilities required for the calculation of  $H_1$  and  $H_2$  can, however, also be calculated as time averages from a very long sequence or, more precisely, approximated by the corresponding  "relative frequencies".

Let us now illustrate the calculation of entropy approximations  $H_1$  and  $H_2$  with three examples.

Ternary symbol sequence and formation of two-tuples

$\text{Example 2:}$  We will first look at the sequence  $〈 q_1$, ... , $q_{50} \rangle $  according to the graphic, where the sequence elements  $q_ν$  originate from the alphabet $\rm \{A, \ B, \ C \}$   ⇒   the symbol set size is  $M = 3$.

  • By time averaging over  $50$  symbols one gets the symbol probabilities  $p_{\rm A} ≈ 0.5$,   $p_{\rm B} ≈ 0.3$  and  $p_{\rm C} ≈ 0.2$, with which one can calculate the first order entropy approximation:
$$H_1 = 0.5 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.5} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.2} \approx \, 1.486 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  • Due to the not equally probable symbols  $H_1 < H_0 = 1.585 \hspace{0.05cm} \rm bit/symbol$.  As an approximation for the probabilities of two-tuples one obtains from the above sequence:
$$\begin{align*}p_{\rm AA} & = 14/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AB} = 8/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AC} = 3/49\hspace{0.05cm}, \\ p_{\rm BA} & = 7/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm BB} = 2/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm BC} = 5/49\hspace{0.05cm}, \\ p_{\rm CA} & = 4/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm CB} = 5/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm CC} = 1/49\hspace{0.05cm}.\end{align*}$$
  • Please note that from the  $50$  sequence elements only  $49$  two-tuples  $(\rm AA$, ... , $\rm CC)$  can be formed, which are marked in different colors in the graphic.
  1. The entropy approximation  $H_2$  should actually be equal to  $H_1$  since the given symbol sequence comes from a memoryless source.
  2. Because of the short sequence length  $N = 50$  and the resulting statistical inaccuracy, however, a smaller value results:  
$$H_2 ≈ 1.39\hspace{0.05cm} \rm bit/symbol.$$
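The time-averaging procedure described above can be sketched as follows.  The generated test sequence and the seed are illustrative assumptions, chosen long enough that for a memoryless source  $H_2 ≈ H_1$  indeed results:

```python
import math
import random
from collections import Counter

def estimate_H1_H2(seq):
    """Estimate H1 and H2 (bit/symbol) from relative frequencies."""
    n = len(seq)
    H1 = sum(c / n * math.log2(n / c) for c in Counter(seq).values())
    pairs = Counter(zip(seq, seq[1:]))      # N symbols yield N-1 two-tuples
    m = n - 1
    H2 = sum(c / m * math.log2(m / c) for c in pairs.values()) / 2
    return H1, H2

random.seed(1)   # hypothetical memoryless test sequence with p = (0.5, 0.3, 0.2)
seq = random.choices("ABC", weights=[0.5, 0.3, 0.2], k=100_000)
H1, H2 = estimate_H1_H2(seq)   # both close to 1.485 bit/symbol
```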


$\text{Example 3:}$  Now let us consider a  »memoryless binary source«  with equally probable symbols, i.e.  $p_{\rm A} = p_{\rm B} = 1/2$.  The first twenty sequence elements are   $〈 q_ν 〉 =\rm BBAAABAABBBBBAAAABAB$ ...

  • Because of the equally probable binary symbols   $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.
  • The compound probability  $p_{\rm AB}$  of the combination  $\rm AB$  is equal to  $p_{\rm A} \cdot p_{\rm B} = 1/4$.  Also $p_{\rm AA} = p_{\rm BB} = p_{\rm BA} = 1/4$.
  • This gives for the second entropy approximation
$$H_2 = {1}/{2} \cdot \big [ {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 + {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 \big ] = 1 \,{\rm bit/symbol} = H_1 = H_0 \hspace{0.05cm}.$$

Note:   Due to the short length of the given sequence the probabilities are slightly different:  $p_{\rm AA} = 6/19$,  $p_{\rm BB} = 5/19$,  $p_{\rm AB} = p_{\rm BA} = 4/19$.


$\text{Example 4:}$  The third sequence considered here results from the binary sequence of  $\text{Example 3}$  by using a simple repeat code:

$$〈 q_ν 〉 =\rm BbBbAaAaAaBbAaAaBbBb \text{...} $$
  • The repeated symbols are marked by corresponding lower case letters.  Nevertheless,  $M=2$  still applies.
  • Because of the equally probable binary symbols, this also results in  $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.
  • As shown in  "Exercise 1.3"  for the compound probabilities we obtain  $p_{\rm AA}=p_{\rm BB} = 3/8$  and  $p_{\rm AB}=p_{\rm BA} = 1/8$.  Hence
$$\begin{align*}H_2 ={1}/{2} \cdot \big [ 2 \cdot {3}/{8} \cdot {\rm log}_2\hspace{0.1cm} {8}/{3} + 2 \cdot {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8\big ] = {3}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 - {3}/{8} \cdot{\rm log}_2\hspace{0.1cm}3 + {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 \approx 0.906 \,{\rm bit/symbol} < H_1 = H_0 \hspace{0.05cm}.\end{align*}$$

A closer look at the task at hand leads to the following conclusion:

  • The entropy should actually be  $H = 0.5 \hspace{0.05cm} \rm bit/symbol$  (every second symbol does not provide new information).
  • The second entropy approximation  $H_2 = 0.906 \hspace{0.05cm} \rm bit/symbol$  is much larger than the entropy  $H$.
  • For the entropy determination, the second order approximation is not sufficient.  Rather, one must consider larger contiguous blocks of  $k > 2$  symbols.
  • In the following, such a block is referred to as  "$k$–tuple".
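The value  $H_2 ≈ 0.906$  can be verified directly from the given two-tuple probabilities; a short sketch:

```python
import math

# two-tuple probabilities of the repetition-coded source (AA, BB, AB, BA)
pairs = [3/8, 3/8, 1/8, 1/8]

# H2 = half the entropy of the two-tuple distribution -> about 0.906 bit/symbol,
# clearly above the true entropy H = 0.5 bit/symbol
H2 = sum(p * math.log2(1 / p) for p in pairs) / 2
```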


Generalization to $k$-tuples and passage to the limit


Using the abbreviation  $p_i^{(k)}$  for the compound probability of a  $k$–tuple, we can write in general:

$$H_k = \frac{1}{k} \cdot \sum_{i=1}^{M^k} p_i^{(k)} \cdot {\rm log}_2\hspace{0.1cm} \frac{1}{p_i^{(k)}} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol}) \hspace{0.05cm}.$$
  • The control variable  $i$  stands for one of the  $M^k$  tuples. 
  • The previously calculated approximation  $H_2$  results with  $k = 2$.


$\text{Definition:}$  The  »entropy of a discrete source with memory«  is given by the limit:

$$H = \lim_{k \rightarrow \infty }H_k \hspace{0.05cm}.$$
  • For the  »entropy approximations«  $H_k$  the following relations apply  $(H_0=\log_2 M)$:
$$H \le \text{...} \le H_k \le \text{...} \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$


The computational effort will increase with increasing  $k$  except for a few special cases (see the following example) and depends on the symbol set size  $M$  of course:

  • For the calculation of  $H_{10}$  for a binary source  $(M = 2)$  one must average over  $2^{10} = 1024$  terms.
  • With each further increase of  $k$  by  $1$  the number of sum terms doubles.
  • In case of a quaternary source  $(M = 4)$  it must already be averaged over  $4^{10} = 1\hspace{0.08cm}048\hspace{0.08cm}576$  summation terms for the determination of  $H_{10}$.
  • Considering that each of these  $4^{10} =2^{20} >10^6$  $k$–tuples should occur about  $100$  times in simulation/time averaging (statistical guideline) to ensure sufficient accuracy, it follows that the sequence length in this case should be greater than  $N = 10^8$.


$\text{Example 5:}$  We consider an alternating binary sequence   ⇒   $〈 q_ν 〉 =\rm ABABABAB$ ...  .  Accordingly it holds that  $H_0 = H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.

In this special case, the  $H_k$  approximation can be determined, independently of  $k$,  by averaging over only two compound probabilities:

  • $k = 2$:    $p_{\rm AB} = p_{\rm BA} = 1/2$     ⇒     $H_2 = 1/2 \hspace{0.2cm} \rm bit/symbol$,
  • $k = 3$:    $p_{\rm ABA} = p_{\rm BAB} = 1/2$     ⇒     $H_3 = 1/3 \hspace{0.2cm} \rm bit/symbol$,
  • $k = 4$:    $p_{\rm ABAB} = p_{\rm BABA} = 1/2$     ⇒     $H_4 = 1/4 \hspace{0.2cm} \rm bit/symbol$.


The entropy of this alternating binary sequence is therefore

$$H = \lim_{k \rightarrow \infty }{1}/{k} = 0 \hspace{0.05cm}.$$

The result was to be expected, since the considered sequence has only minimal information, which does not affect the entropy end value  $H$,  namely:
    "Does  $\rm A$  occur at the even or the odd times?"

You can see that  $H_k$  comes closer to this final value  $H = 0$  very slowly:  The twentieth entropy approximation still returns  $H_{20} = 0.05 \hspace{0.05cm} \rm bit/symbol$.
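The approximations of this example can also be obtained by time averaging over the  $k$–tuples of a sufficiently long alternating sequence; a minimal sketch:

```python
import math
from collections import Counter

def estimate_Hk(seq, k):
    """Entropy approximation H_k (bit/symbol) from the relative
    frequencies of all k-tuples in the sequence."""
    tuples = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(tuples.values())
    return sum(c / n * math.log2(n / c) for c in tuples.values()) / k

seq = "AB" * 1000 + "A"        # alternating sequence with N = 2001 symbols
H2 = estimate_Hk(seq, 2)       # -> 1/2
H4 = estimate_Hk(seq, 4)       # -> 1/4
H20 = estimate_Hk(seq, 20)     # -> 0.05
```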


$\text{Summary of the results of the last sections:}$ 

  • Generally it applies to the  »entropy of a message source«:
$$H \le \text{...} \le H_3 \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$
  • A  »redundancy-free source«   exists if all  $M$  symbols are equally probable and there are no statistical bindings within the sequence.
    For such a source the following holds  $(r$  denotes the relative redundancy$)$:
$$H = H_0 = H_1 = H_2 = H_3 = \text{...}\hspace{0.5cm} \Rightarrow \hspace{0.5cm} r = \frac{H_0 - H}{H_0}= 0 \hspace{0.05cm}.$$
  • A  »memoryless source«   can be quite redundant  $(r> 0)$.  This redundancy then is solely due to the deviation of the symbol probabilities from the uniform distribution.  Here the following relations are valid:
$$H = H_1 = H_2 = H_3 = \text{...} \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}0 \le r = \frac{H_0 - H_1}{H_0}< 1 \hspace{0.05cm}.$$
  • The corresponding condition for a  »source with memory«  is
$$ H <\text{...} < H_3 < H_2 < H_1 \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm} 0 < r = \frac{H_0 - H}{H_0}\le1 \hspace{0.05cm}.$$
  • If  $H_2 < H_1$, then  $H_3 < H_2$,   $H_4 < H_3$, etc.   ⇒   In the general equation, the  "≤" character must be replaced by the  "<" character.
  • If the symbols are equally probable, then again  $H_1 = H_0$,  while  $H_1 < H_0$  applies to symbols which are not equally probable.


The entropy of the AMI code


In chapter  "Symbol-wise Coding with Pseudo-Ternary Codes"  of the book  "Digital Signal Transmission", among other things, the AMI pseudo-ternary code is discussed.

Signals and symbol sequences for AMI code
  • This converts the binary sequence  $〈 q_ν 〉$  with  $q_ν ∈ \{ \rm L, \ H \}$  into the ternary sequence  $〈 c_ν 〉$  with  $c_ν ∈ \{ \rm M, \ N, \ P \}$.
  • The names of the source symbols stand for  $\rm L$(ow)  and  $\rm H$(igh)  and those of the code symbols for  $\rm M$(inus),  $\rm P$(lus),  and  $\rm N$(ull)   ⇒   "Zero".


The coding rule of the AMI code ("Alternate Mark Inversion") is:

  • Each binary symbol  $q_ν =\rm L$  is represented by the code symbol  $c_ν =\rm N$ .
  • In contrast,  $q_ν =\rm H$  is alternately encoded as  $c_ν =\rm P$  and  $c_ν =\rm M$   ⇒   hence the name  "Alternate Mark Inversion".
  • This special encoding adds redundancy with the sole purpose of ensuring that the encoded sequence does not contain a DC component.


However, we do not consider the spectral properties of the AMI code here, but interpret this code information-theoretically:

  • Based on the symbol set size  $M = 3$  the decision content of the (ternary) encoded sequence is equal to  $H_0 = \log_2 \ 3 ≈ 1.585 \hspace{0.05cm} \rm bit/symbol$.  The first entropy approximation returns  $H_1 = 1.5 \hspace{0.05cm} \rm bit/symbol$, as shown in the following calculation:
$$p_{\rm H} = p_{\rm L} = 1/2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm} p_{\rm N} = p_{\rm L} = 1/2\hspace{0.05cm},\hspace{0.2cm}p_{\rm M} = p_{\rm P}= p_{\rm H}/2 = 1/4\hspace{0.3cm} \Rightarrow \hspace{0.3cm} H_1 = 1/2 \cdot {\rm log}_2\hspace{0.1cm}2 + 2 \cdot 1/4 \cdot{\rm log}_2\hspace{0.1cm}4 = 1.5 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  • Let's now look at two-tuples.  With the AMI code,  $\rm P$  cannot follow  $\rm P$  and  $\rm M$  cannot follow  $\rm M$.  The probability for  $\rm NN$  is equal to  $p_{\rm L} \cdot p_{\rm L} = 1/4$.  All other (six) two-tuples occur with the probability  $1/8$.  From this follows for the second entropy approximation:
$$H_2 = 1/2 \cdot \big [ 1/4 \cdot {\rm log_2}\hspace{0.1cm}4 + 6 \cdot 1/8 \cdot {\rm log_2}\hspace{0.1cm}8 \big ] = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  • For the further entropy approximations  $H_3$,  $H_4$, ...  and the actual entropy  $H$  the following applies:
$$ H < \hspace{0.05cm}\text{...}\hspace{0.05cm} < H_5 < H_4 < H_3 < H_2 = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$
  • Exceptionally in this example we know the actual entropy  $H$  of the encoded sequence  $〈 c_ν 〉$:   Since no new information is added by the coder and no information is lost, it is the same entropy  $H = 1 \,{\rm bit/symbol} $  as the one of the redundancy-free binary sequence $〈 q_ν 〉$.


The  "Exercise 1.4"  shows the considerable effort required to calculate the entropy approximation  $H_3$.   Moreover,  $H_3$  still deviates significantly from the final value  $H = 1 \,{\rm bit/symbol} $.  A faster result is achieved if the AMI code is described by a Markov chain as explained in the next section.
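The AMI coding rule and the approximations  $H_1$  and  $H_2$  can also be checked by simulation.  In the following sketch the binary input sequence and the seed are illustrative assumptions:

```python
import math
import random
from collections import Counter

def ami_encode(binary_seq):
    """AMI rule:  'L' -> 'N';  'H' -> alternately 'P' and 'M'."""
    out, last = [], 'M'          # with this start value the first 'H' becomes 'P'
    for q in binary_seq:
        if q == 'L':
            out.append('N')
        else:
            last = 'P' if last == 'M' else 'M'
            out.append(last)
    return ''.join(out)

random.seed(2)   # hypothetical redundancy-free binary input sequence
c = ami_encode(random.choices('LH', k=100_000))

n = len(c)
H1 = sum(v / n * math.log2(n / v) for v in Counter(c).values())             # -> 1.5
pairs = Counter(c[i:i + 2] for i in range(n - 1))
H2 = sum(v / (n - 1) * math.log2((n - 1) / v) for v in pairs.values()) / 2  # -> 1.375
```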


Binary sources with Markov properties


Markov processes with  $M = 2$  states

Sequences with statistical bindings between the sequence elements (symbols) are often modeled by  $\text{Markov processes}$  where we limit ourselves here to first-order Markov processes.  First we consider a binary Markov process  $(M = 2)$  with the states (symbols)  $\rm A$  and  $\rm B$.

On the right you can see the transition diagram of a first-order binary Markov process.  However, only two of the four transition probabilities given there are freely selectable, for example

  • $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = \rm Pr(A\hspace{0.01cm}|\hspace{0.01cm}B)$   ⇒   conditional probability that  $\rm A$  follows  $\rm B$ .
  • $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = \rm Pr(B\hspace{0.01cm}|\hspace{0.01cm}A)$   ⇒   conditional probability that  $\rm B$  follows  $\rm A$ .


For the other two transition probabilities:  $p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} = 1- p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}$  and   $p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} = 1- p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \hspace{0.05cm}.$

Due to the presupposed properties  $\text{Stationarity}$  and  $\text{Ergodicity}$  the following applies to the state or symbol probabilities:

$$p_{\rm A} = {\rm Pr}({\rm A}) = \frac{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} \hspace{0.05cm}, \hspace{0.5cm}p_{\rm B} = {\rm Pr}({\rm B}) = \frac{p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} \hspace{0.05cm}.$$

These equations allow first information–theoretical statements about the Markov processes:

  • For  $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$  the symbols are equally likely   ⇒   $p_{\text{A}} = p_{\text{B}}= 0.5$.  The first entropy approximation returns  $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$, independent of the actual values of the (conditional) transition probabilities  $p_{\text{A|B}}$  and  $p_{\text{B|A}}$.
  • But the source entropy  $H$  as the limit value  $($for  $k \to \infty)$  of the  "Entropy approximation of order  $k$"   ⇒   $H_k$  depends very much on the actual values of  $p_{\text{A|B}}$  and  $p_{\text{B|A}}$  and not only on their quotient.  This is shown by the following example.
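The stationary state probabilities given above can be evaluated directly; a minimal sketch with hypothetical transition probabilities  $p_{\text{A|B}} = 0.3$  and  $p_{\text{B|A}} = 0.1$:

```python
def stationary_probs(p_A_given_B, p_B_given_A):
    """Ergodic state probabilities of a binary first-order Markov chain."""
    s = p_A_given_B + p_B_given_A
    return p_A_given_B / s, p_B_given_A / s

# hypothetical transition probabilities
pA, pB = stationary_probs(0.3, 0.1)   # -> pA = 0.75, pB = 0.25
```

The balance equation  $p_{\text{A}} \cdot p_{\text{B|A}} = p_{\text{B}} \cdot p_{\text{A|B}}$  of the stationary chain then holds automatically.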


$\text{Example 6:}$  We consider three binary symmetric Markov sources that are represented by the numerical values of the symmetric transition probabilities  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$.  For the symbol probabilities the following applies:  $p_{\rm A} = p_{\rm B}= 0.5$, and the other transition probabilities have the values

Three examples of binary Markov sources with  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$ 
$$p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 1 - p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}B} }.$$
  • The middle (blue) sequence with  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.5$  has the entropy  $H ≈ 1 \hspace{0.1cm} \rm bit/symbol$.  That means:   In this special case there are no statistical bindings within the sequence.
  • The left (red) sequence with  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.2$  has less changes between  $\rm A$  and  $\rm B$.  Due to statistical dependencies between neighboring symbols the entropy is now smaller:  $H ≈ 0.72 \hspace{0.1cm} \rm bit/symbol$.
  • The right (green) symbol sequence with  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.8$  has the exact same entropy  $H ≈ 0.72 \hspace{0.1cm} \rm bit/symbol$  as the red sequence.  Here you can see many regions with alternating symbols  $($... $\rm ABABAB$ ... $)$.


This example is worth noting:

  1. If you had not used the Markov properties of the red and green sequences, you would have reached the respective result  $H ≈ 0.72 \hspace{0.1cm} \rm bit/symbol$  only after lengthy calculations.
  2. The following sections show that for a source with Markov properties the final value  $H$  can be determined from the entropy approximations  $H_1$  and  $H_2$  alone.   Likewise, all entropy approximations  $H_k$  for  $k$–tuples can then be calculated from  $H_1$  and  $H_2$  in a simple manner   ⇒   $H_3$,  $H_4$,  $H_5$, ...   $H_{100}$, ...
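For the symmetric case  $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = p$  the final value reduces to the binary entropy function,  $H = H_{\rm bin}(p)$  (this follows from  $H = H_{\rm M}$,  derived in the next section).  A short check of the values from  $\text{Example 6}$:

```python
import math

def H_bin(p):
    """Binary entropy function in bit."""
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

H_red   = H_bin(0.2)   # left (red) sequence:    about 0.722 bit/symbol
H_blue  = H_bin(0.5)   # middle (blue) sequence: 1.000 bit/symbol
H_green = H_bin(0.8)   # right (green) sequence: about 0.722 bit/symbol
```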


Simplified entropy calculation for Markov sources


Markov processes with  $M = 2$  states

We continue to assume the first-order binary Markov source.  As in the previous section, we use the following nomenclature for

  • the transition probabilities   ⇒   $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$,   $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,  $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}A}}= 1- p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,   $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}B}} = 1 - p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$,  
  • the ergodic probabilities   ⇒   $p_{\text{A}}$  and  $p_{\text{B}}$,
  • the compound probabilities, for example   ⇒   $p_{\text{AB}} = p_{\text{A}} \cdot p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$.


We now compute the  "Entropy of a two-tuple"  (with the unit "bit/two-tuple"):

$$H_2\hspace{0.05cm}' = p_{\rm A} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}} \hspace{0.05cm}.$$

If one now replaces the logarithms of the products by corresponding sums of logarithms, one gets the result  $H_2\hspace{0.05cm}' = H_1 + H_{\text{M}}$  with

$$H_1 = p_{\rm A} \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B} \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = p_{\rm A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = H_{\rm bin} (p_{\rm A})= H_{\rm bin} (p_{\rm B}) \hspace{0.05cm},$$
$$H_{\rm M}= p_{\rm A} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B} \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B} \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}} \hspace{0.05cm}.$$

$\text{Conclusion:}$  Thus the  »second entropy approximation«  (with the unit "bit/symbol")  is:

$$H_2 = {1}/{2} \cdot {H_2\hspace{0.05cm}'} = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big] \hspace{0.05cm}.$$


It is to be noted:

  1. The first summand  $H_1$   ⇒   first entropy approximation depends only on the symbol probabilities.
  2. For a symmetrical Markov process   ⇒   $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} $   ⇒   $p_{\text{A}} = p_{\text{B}} = 1/2$   the result for this first summand is  $H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.
  3. The second summand  $H_{\text{M}}$  must be calculated according to the second of the two upper equations.
  4. For a symmetrical Markov process you get  $H_{\text{M}} = H_{\text{bin}}(p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}})$.
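The decomposition  $H_2 = (H_1 + H_{\rm M})/2$  can be checked numerically.  The following Python sketch assumes a symmetric binary Markov source with the illustrative transition probability  $p_{\rm A|B} = p_{\rm B|A} = 0.25$  (a value chosen only for this example, not taken from the text):

```python
from math import log2

def h_bin(p):
    """Binary entropy function H_bin(p) in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

# Symmetric first-order binary Markov source:
# illustrative transition probability p = Pr(A|B) = Pr(B|A)
p = 0.25
p_A = 0.5            # ergodic probabilities in the symmetric case: p_A = p_B = 1/2

H1 = h_bin(p_A)      # first entropy approximation: 1 bit/symbol
HM = h_bin(p)        # Markov entropy H_M = H_bin(p), see point 4 above
H2 = (H1 + HM) / 2   # second entropy approximation

print(f"H1 = {H1:.3f}, HM = {HM:.3f}, H2 = {H2:.3f} bit/symbol")
```

With these numbers one obtains  $H_1 = 1.000$,  $H_{\rm M} \approx 0.811$  and  $H_2 \approx 0.906$  bit/symbol.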


Now, this result is extended to the  $k$–th entropy approximation.  Here we exploit an advantage of Markov sources over other sources:  for them, the entropy calculation for  $k$–tuples is very simple.  For every Markov source the following applies:

$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.3cm} \Rightarrow \hspace{0.3cm} H_2 = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big ]\hspace{0.05cm}, \hspace{0.3cm} H_3 ={1}/{3} \cdot \big [ H_{\rm 1} + 2 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.3cm} H_4 = {1}/{4} \cdot \big [ H_{\rm 1} + 3 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.15cm}{\rm etc.}$$

$\text{Conclusion:}$  In the limit  $k \to \infty$  one obtains the actual source entropy:

$$H = \lim_{k \rightarrow \infty} H_k = H_{\rm M} \hspace{0.05cm}.$$

From this simple result important insights for the entropy calculation follow:

  • For Markov sources it is sufficient to determine the entropy approximations  $H_1$  and  $H_2$.  Thus, the entropy of a Markov source is
$$H = 2 \cdot H_2 - H_{\rm 1} \hspace{0.05cm}.$$
  • The values  $H_1$  and  $H_2$  also fix all further entropy approximations  $H_k$  for  $k \ge 3$:
$$H_k = \frac{2-k}{k} \cdot H_{\rm 1} + \frac{2\cdot (k-1)}{k} \cdot H_{\rm 2} \hspace{0.05cm}.$$


However, these approximations are not very important.  Mostly only the limit value  $H$  is of interest.  For sources without Markov properties the approximations  $H_k$  are calculated only to be able to estimate this limit value, i.e. the actual entropy.
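As a short numerical check, the sketch below (Python; the  $H_1$  and  $H_2$  values are the illustrative ones of a symmetric binary Markov source with  $p_{\rm A|B} = 0.25$,  not values from the text) shows how  $H_k$  converges from above towards  $H = 2 \cdot H_2 - H_1 = H_{\rm M}$:

```python
def H_limit(H1, H2):
    """Entropy of a first-order Markov source:  H = 2*H2 - H1."""
    return 2 * H2 - H1

def H_k(k, H1, H2):
    """k-th entropy approximation:  H_k = (2-k)/k * H1 + 2*(k-1)/k * H2."""
    return (2 - k) / k * H1 + 2 * (k - 1) / k * H2

# Illustrative values of a symmetric binary Markov source (p = 0.25)
H1, H2 = 1.0, 0.905639

for k in (1, 2, 3, 4, 10, 100):
    print(k, round(H_k(k, H1, H2), 4))       # decreases monotonically ...
print("H =", round(H_limit(H1, H2), 4))      # ... towards H = H_M
```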


Notes:

  • In  "Exercise 1.5"  the above equations are applied to the more general case of an asymmetric binary source.
  • All equations in this section also apply to non-binary Markov sources  $(M > 2)$,  as shown in the next section.


Non-binary Markov sources


Ternary and quaternary first-order Markov sources

The following equations apply to each Markov source regardless of the symbol set size  $M$:

$$H = 2 \cdot H_2 - H_{\rm 1} \hspace{0.05cm},$$
$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.05cm},$$
$$ \lim_{k \rightarrow \infty} H_k = H \hspace{0.05cm}.$$

These equations allow the simple calculation of the entropy  $H$  from the approximations  $H_1$  and  $H_2$.
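For non-binary sources the same computation goes through with a transition matrix.  A minimal Python sketch, assuming a hypothetical symmetric ternary matrix (illustrative values only, not the  $\rm MQ3$  parameters of the figure, which are given in Exercise 1.6):

```python
from math import log2

# Hypothetical ternary transition matrix: P[i][j] = Pr(next symbol j | current symbol i)
# -- illustrative values only, NOT the MQ3 parameters of the figure
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
M = len(P)

# Ergodic (stationary) probabilities via power iteration
pi = [1.0 / M] * M
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(M)) for j in range(M)]

H1 = sum(p * log2(1 / p) for p in pi)                  # first entropy approximation
HM = sum(pi[i] * P[i][j] * log2(1 / P[i][j])           # Markov entropy H_M = H
         for i in range(M) for j in range(M))
H2 = (H1 + HM) / 2                                     # second entropy approximation

print(round(H1, 3), round(HM, 3), round(2 * H2 - H1, 3))  # 2*H2 - H1 reproduces H = H_M
```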

We now look at the transition diagrams sketched on the right:

  • a ternary Markov source  $\rm MQ3$  $(M = 3$,  blue coloring$)$,  and
  • a quaternary Markov source  $\rm MQ4$  $(M = 4$,  red coloring$)$.


In  "Exercise 1.6"  the entropy approximations  $H_k$  are calculated, as well as the total entropy  $H$  as the limit of  $H_k$  for  $k \to \infty$.   The results are shown in the following figure.  All entropies specified there have the unit "bit/symbol".

Entropies for  $\rm MQ3$,  $\rm MQ4$  and the  $\rm AMI$  code

These results can be interpreted as follows:

  • For the ternary Markov source  $\rm MQ3$  the entropy approximations decrease continuously from  $H_1 = 1.500$  via  $H_2 = 1.375$  down to the limit  $H = 1.250$.  Because of  $M = 3$:   $H_0 = 1.585$.
  • For the quaternary Markov source  $\rm MQ4$  one obtains  $H_0 = H_1 = 2.000$  (since there are four equally probable states)  and  $H_2 = 1.500$.   From the  $H_1$  and  $H_2$  values, all entropy approximations  $H_k$  and the final value  $H = 1.000$  can be calculated.
  • The two models  $\rm MQ3$  and  $\rm MQ4$  arose from the attempt to describe the  "AMI code"  information-theoretically by means of Markov sources.  The symbols  $\rm M$,  $\rm N$  and  $\rm P$  stand for  "minus",  "zero"  and  "plus".
  • The entropy approximations  $H_1$,  $H_2$  and  $H_3$  of the AMI code (green markers) were calculated in  "Exercise 1.4".  The calculation of  $H_4$,  $H_5$, ...  had to be omitted for reasons of effort.  But the final value of  $H_k$  for  $k \to \infty$   ⇒   $H = 1.000$  is known.
  • You can see that the Markov model  $\rm MQ3$  yields exactly the same numerical values as the AMI code for  $H_0 = 1.585$,  $H_1 = 1.500$  and  $H_2 = 1.375$.   On the other hand,  $H_3$  differs  $(1.333$  instead of  $1.292)$  and especially the final value  $H$  $(1.250$  compared to  $1.000)$.
  • The model  $\rm MQ4$  $(M = 4)$  differs from the AMI code  $(M = 3)$  with respect to the decision content  $H_0$  and also with respect to all entropy approximations  $H_k$.   Nevertheless,  $\rm MQ4$  is the more suitable model for the AMI code, since the final value  $H = 1.000$  is the same.
  • The model  $\rm MQ3$  yields entropy values that are too large, since the sequences  $\rm PNP$  and  $\rm MNM$  are possible here, which cannot occur in the AMI code.   Already with the approximation  $H_3$  the difference is slightly noticeable;  in the final value  $H$  it is even clearly visible  $(1.250$  instead of  $1.000)$.


In the model  $\rm MQ4$  the state  "Zero"  was split into two states  $\rm N$  and  $\rm O$  (see upper right figure in this section):

  • For the state  $\rm N$  the following applies:   The current binary symbol  $\rm L$  is encoded with the amplitude value  $0$  (zero), as per the AMI rule.  The next occurring  $\rm H$  symbol, on the other hand, is encoded as  $-1$  (minus), because the last  $\rm H$  symbol was encoded as  $+1$  (plus).
  • Also in the state  $\rm O$  the current binary symbol  $\rm L$  is represented with the ternary value  $0$.  In contrast to the state  $\rm N$,  however, the next occurring  $\rm H$  symbol is now encoded as  $+1$  (plus), since the last  $\rm H$  symbol was encoded as  $-1$  (minus).
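The two bullet points describe nothing other than the alternating sign memory of the AMI code.  A small illustrative encoder (the function name and the  $\rm L$/$\rm H$  input alphabet follow the text;  the initial polarity is a free convention):

```python
def ami_encode(binary_symbols):
    """Sketch of the AMI rule: 'L' -> 0; successive 'H' symbols alternate +1 / -1.
    The sign memory last_h plays the role of the two states N and O in MQ4."""
    out, last_h = [], -1          # convention: the first 'H' is encoded as +1
    for b in binary_symbols:
        if b == 'L':
            out.append(0)         # states N and O both emit the ternary value 0
        else:
            last_h = -last_h      # alternate sign -- this is what distinguishes N from O
            out.append(last_h)
    return out

print(ami_encode('HLLHLHH'))      # [1, 0, 0, -1, 0, 1, -1]
```

Sequences like  $\rm PNP$  or  $\rm MNM$  can never occur in this output, which is exactly why the  $\rm MQ3$  model overestimates the entropy.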


$\text{Conclusion:}$ 

  1. The  $\rm MQ4$ output sequence actually follows the rules of the AMI code and has entropy  $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.
  2. But because of the new state  $\rm O$  now  $H_0 = 2.000 \hspace{0.15cm} \rm bit/symbol$  $($against  $1.585 \hspace{0.15cm} \rm bit/symbol)$  is clearly too large.
  3. Also all  $H_k$  approximations are larger than with the AMI code.  Only for  $k \to \infty$  do the model  $\rm MQ4$  and the AMI code match exactly:   $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.


Exercises for the chapter


Exercise 1.3: Entropy Approximations

Exercise 1.4: Entropy Approximations for the AMI Code

Exercise 1.4Z: Entropy of the AMI Code

Exercise 1.5: Binary Markov Source

Exercise 1.5Z: Symmetrical Markov Source

Exercise 1.6: Non-Binary Markov Sources

Exercise 1.6Z: Ternary Markov Source