Theory of Stochastic Signals/Markov Chains

Considered scenario


Finally,  we consider the case where a random experiment is carried out continuously and a new event  $E_ν$  occurs at each discrete time  $(ν = 1, 2, 3, \text{...})$.  Here,  the following shall hold:

[Figure: Possible events and event sequence]
$$E_\nu \in G = \{ E_{\rm 1}, E_{\rm 2}, \hspace{0.1cm}\text{...}\hspace{0.1cm}, E_\mu , \hspace{0.1cm}\text{...}\hspace{0.1cm}, E_M \}.$$

This nomenclature, which is not quite proper mathematically, means  (see the figure):

  • The  $M$  possible events are numbered consecutively with the index  $μ$.
  • The index  $ν$  names the discrete time points at which events occur.


For simplicity,  we restrict ourselves in the following to the case  $M = 2$  with the universal set  $G = \{ A, \ B \}$.  Furthermore,  let the following hold:

  • The probability of event  $E_ν$  may well depend on all previous events  – that is,  on the events  $E_{ν\hspace{0.05cm}-1}, \hspace{0.15cm} E_{ν\hspace{0.05cm}-2}, \hspace{0.15cm} E_{ν\hspace{0.05cm}-3}$,  and so on. 
  • This statement also means that in this chapter we consider a  »sequence of events with internal statistical bindings«.


This scenario is a special case of a discrete-time, discrete-value random process, which will be discussed in more detail in the chapter  »Random Processes«.  

$\text{Example 1:}$  Cards are drawn one after the other from a deck of  $32$  cards  $($including four aces$)$.  With the events

  • $A:=$  »the drawn card is an ace«, 
  • $B = \overline{A}:=$  »the drawn card is not an ace«, 


the probabilities at time  $ν = 1$  are:

$${\rm Pr} (A_{\rm 1}) ={4}/{32}= {1}/{8},\hspace{0.5cm}{\rm Pr} (B_{\rm 1}) = {28}/{32}= {7}/{8}.$$

The probability  ${\rm Pr} (A_{\rm 2})$  that an ace is drawn as the second card  $(ν = 2)$  now depends on

  • whether an ace was drawn at time  $ν = 1$   ⇒   ${\rm Pr} (A_{\rm 2}) = 3/31 < 1/8$,
  • or whether no ace was drawn at time  $ν = 1$   ⇒   ${\rm Pr} (A_{\rm 2}) = 4/31 > 1/8$.


Also,  the probabilities  ${\rm Pr} (A_{\nu})$  at later times  $ν$  always depend on the occurrence or non-occurrence of all previous events  $E_1, \hspace{0.1cm}\text{...}\hspace{0.1cm}, E_{ν\hspace{0.05cm}–1}$.

General definition of a Markov chain


In special cases,  which however occur very frequently,  the scenario described above can be modeled by a Markov chain.

$\text{Definition:}$  A  $k$-th order  »Markov Chain«  serves as a model for discrete-time,  discrete-value processes whose event probabilities at time  $ν$

  •    depend on the previous events   $E_{ν\hspace{0.05cm}–1}, \hspace{0.15cm}\text{...}\hspace{0.15cm}, E_{ν\hspace{0.05cm}–k}$,  and
  •   can be expressed by  $M^{k+1}$  conditional probabilities.


[Figure: Second order Markov chain]

The diagram illustrates this issue using  $k = 2$  as an example. 

  • For  $M = 2$  there are  $2^{k+1}$  such conditional probabilities: 
$${\rm Pr} ( E_\nu \hspace {0.05cm}\vert \hspace {0.05cm}E_{\nu {\rm -1 } },\hspace {0.15cm}\text{...}\hspace {0.15cm}, E_{\nu { -k } }).$$
  • With  $E_{\nu }\in \{ A, B \}, \hspace {0.15cm}\text{...}\hspace {0.15cm}, E_{\nu { -k } } \in \{ A, B \}$  these are:
$${\rm Pr} ( A_\nu \hspace {0.05cm}\vert \hspace {0.05cm}A_{\nu {\rm -1 } }, A_{\nu { -2 } }),\hspace {0.5cm} {\rm Pr} ( B_\nu \hspace {0.05cm}\vert \hspace {0.05cm}A_{\nu {\rm -1 } }, A_{\nu { -2 } }),$$
$${\rm Pr} ( A_\nu \hspace {0.05cm}\vert \hspace {0.05cm}A_{\nu {\rm -1 } },B_{\nu { -2 } }),\hspace {0.5cm}{\rm Pr} ( B_\nu \hspace {0.05cm}\vert \hspace {0.05cm}A_{\nu {\rm -1 } }, B_{\nu { -2 } }),$$
$${\rm Pr} ( A_\nu \hspace {0.05cm}\vert \hspace {0.05cm}B_{\nu {\rm -1 } }, A_{\nu { -2 } }),\hspace {0.5cm}{\rm Pr} ( B_\nu \hspace {0.05cm}\vert \hspace {0.05cm}B_{\nu {\rm -1 } }, A_{\nu { -2 } }),$$
$${\rm Pr} ( A_\nu \hspace {0.05cm}\vert \hspace {0.05cm}B_{\nu {\rm -1 } }, B_{\nu { -2 } }),\hspace {0.5cm}{\rm Pr} ( B_\nu \hspace {0.05cm}\vert \hspace {0.05cm}B_{\nu {\rm -1 } }, B_{\nu { -2 } }).$$
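As a quick plausibility check of the count  $M^{k+1}$,  the following minimal Python sketch  (purely illustrative, not part of the original text)  enumerates all conditioning patterns for  $M = 2$  and  $k = 2$:

    from itertools import product

    events = ["A", "B"]            # M = 2 possible events
    k = 2                          # order of the Markov chain

    # One conditional probability Pr(E_nu | E_(nu-1), ..., E_(nu-k)) exists
    # for every combination of the current event and the k preceding events.
    patterns = list(product(events, repeat=k + 1))
    for current, *history in patterns:
        condition = ", ".join(f"{h}_(nu-{i + 1})" for i, h in enumerate(history))
        print(f"Pr({current}_nu | {condition})")

    print(len(patterns))           # 2**(k + 1) = 8 conditional probabilities

The printed list contains exactly the eight conditional probabilities given above,  only in a different order.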

$\text{Example 2:}$  Natural languages can often be described by Markov chains,  although their order  $k$  tends to infinity.

In this example,  however,  the texts are only approximated by second order Markov chains.

[Figure: Synthetically generated texts, based on a German or an English book template]

The graphic shows two synthetically generated texts:

  • The left text was synthetically generated starting from a German book template with bindings up to second order.
  • For the right text,  an English book template was used.


Despite the restriction to  $k = 2$,  one recognizes

  1. many $($short$)$ German words on the left,
  2. many $($short$)$ English words on the right,
  3. that German words are on average longer than English words.


There is no meaningful content,  but the structure of the respective language is recognizable.

First order Markov chain


In the following, we always restrict ourselves to the special case  $k =1$.

$\text{Definition:}$ 

$(1)$  In a  »first order Markov chain«  only the statistical binding to the last event is considered,  which in practice is usually the strongest.

$(2)$  For example,  a  »binary first order Markov chain«  ⇒   universal set  $G = \{ A, \ B \}$  has the following event probabilities at time  $\nu$:

$${\rm Pr}(A_\nu) = {\rm Pr}(A_\nu \hspace{0.05cm} \vert \hspace{0.05cm}A_{\nu - 1}) \cdot {\rm Pr}(A_{\nu - 1}) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(A_\nu \hspace{0.05cm} \vert \hspace{0.05cm}B_{\nu - 1}) \cdot {\rm Pr}(B_{\nu - 1}) ,$$
$${\rm Pr}(B_\nu) = {\rm Pr}(B_\nu \hspace{0.05cm} \vert \hspace{0.05cm}A_{\nu - 1}) \cdot {\rm Pr}(A_{\nu - 1}) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(B_\nu \hspace{0.05cm} \vert \hspace{0.05cm}B_{\nu - 1}) \cdot {\rm Pr}(B_{\nu - 1}) .$$


Regarding these equations,  we note:

  • ${\rm Pr}(A_\nu)$  is shorthand for the probability that at time  $ν$  the event   $E_ν = A = \overline{B}$  occurs,  and it holds:
$${\rm Pr}(B_\nu) = 1 - {\rm Pr}(A_\nu).$$
  • At each time there are four transition probabilities  ${\rm Pr}(E_ν\hspace{0.05cm} |\hspace{0.05cm} E_{ν–1})$,  but only two of them are independent,  because it holds:
$${\rm Pr}(B_\nu \hspace{0.05cm} | \hspace{0.05cm}A_{\nu - 1}) = 1 - {\rm Pr}(A_\nu \hspace{0.05cm} | \hspace{0.05cm}A_{\nu - 1}),$$
$${\rm Pr}(A_\nu \hspace{0.05cm} | \hspace{0.05cm}B_{\nu - 1}) = 1 - {\rm Pr}(B_\nu \hspace{0.05cm} | \hspace{0.05cm}B_{\nu - 1}).$$
  • Generalizing this last statement:  For a Markov chain with  $M$ events,  there are exactly  $M · (M – 1)$  independent transition probabilities at each time  $ν$.
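A single update step of such a binary chain can be sketched in a few lines of Python;  the function and variable names as well as the numerical values are only illustrative assumptions  (the transition probabilities reappear in Example 3 below):

    def step(p_A_prev, p_A_given_A, p_A_given_B):
        """One time step of a binary first order Markov chain:
        returns Pr(A_nu) from Pr(A_(nu-1)) and the two independent
        transition probabilities Pr(A|A) and Pr(A|B)."""
        p_B_prev = 1.0 - p_A_prev                        # Pr(B_(nu-1)) = 1 - Pr(A_(nu-1))
        return p_A_given_A * p_A_prev + p_A_given_B * p_B_prev

    # Illustrative call with Pr(A|A) = 0.2, Pr(A|B) = 0.6 and Pr(A_(nu-1)) = 0.5:
    print(step(0.5, p_A_given_A=0.2, p_A_given_B=0.6))   # 0.2*0.5 + 0.6*0.5 = 0.4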


$\text{Example 3:}$  With the given transition probabilities

$${\rm Pr}(A_ν\hspace{0.05cm}\vert\hspace{0.05cm} A_{ν–1}) = 0.2, \hspace{0.5cm} {\rm Pr}(B_ν\hspace{0.05cm} \vert\hspace{0.05cm} B_{ν–1}) = 0.4,$$

the other two transition probabilities are also uniquely specified:

$${\rm Pr}(B_ν\hspace{0.05cm} \vert\hspace{0.05cm} A_{ν–1}) = 1- 0.2 = 0.8, \hspace{0.5cm} {\rm Pr}(A_ν\hspace{0.05cm} \vert\hspace{0.05cm} B_{ν–1}) = 1- 0.4 = 0.6.$$

Homogeneous Markov chains


Markov chains are usually applicable to practical problems only if further restrictive conditions are imposed.

$\text{Definition:}$ 

$(1)$  If all transition probabilities are independent of time  $ν$,  the Markov chain is  »homogeneous«.

$(2)$  In the case  $M = 2$  we use the following abbreviations:

$${\rm Pr}(A \hspace{0.05cm} \vert \hspace{0.05cm}A) = {\rm Pr}(A_\nu \hspace{0.05cm}\vert \hspace{0.05cm}A_{\nu - 1}) , \hspace{0.5cm} {\rm Pr}(A \hspace{0.05cm} \vert\hspace{0.05cm}B) = {\rm Pr}(A_\nu \hspace{0.05cm} \vert \hspace{0.05cm}B_{\nu - 1}) ,$$
$${\rm Pr}(B \hspace{0.05cm} \vert\hspace{0.05cm}A) = {\rm Pr}(B_\nu \hspace{0.05cm} \vert \hspace{0.05cm}A_{\nu - 1}) , \hspace{0.5cm} {\rm Pr}(B \hspace{0.05cm} \vert \hspace{0.05cm}B) = {\rm Pr}(B_\nu \hspace{0.05cm} \vert \hspace{0.05cm}B_{\nu - 1}) .$$


[Figure: First order homogeneous Markov chain]

Thus,  the event probabilities of a binary homogeneous Markov chain are:

$${\rm Pr}(A_\nu) = {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}A) \cdot {\rm Pr}(A_{\nu - 1}) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) \cdot {\rm Pr}(B_{\nu - 1}) ,$$
$${\rm Pr}(B_\nu) = {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) \cdot {\rm Pr}(A_{\nu - 1}) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}B) \cdot {\rm Pr}(B_{\nu - 1}) .$$

These are absolute probabilities in contrast to the conditional transition probabilities.

We can also read from the Markov diagram:

  • The sum of the  »outgoing arrows«  of an event is always one:
$${\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}A) + {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) =1,$$
$${\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) + {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}B) =1.$$
  • For the sums of the  »incoming arrows«  this restriction does not apply:
$$ 0 < {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}A) + {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B)< 2,$$
$$0 < {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) + {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}B)< 2.$$

Stationary probabilities


Important properties of random processes are  »$\text{stationarity}$«  and  »$\text{ergodicity}$«.  Although these terms are not defined until the fourth main chapter  »Random Variables with Statistical Dependence«,  we apply them here already in advance to Markov chains.

$\text{Definition:}$ 

$(1)$  If,  in addition to the transition probabilities,  all event probabilities of a Markov chain are independent of time  $ν$,  then the Markov chain is called  »stationary«. 

$(2)$  One then omits the index  $ν$  and writes in the binary case:

$${\rm Pr}(A_\nu ) = {\rm Pr}(A ), \hspace{0.5 cm} {\rm Pr}(B_\nu ) = {\rm Pr}(B).$$

$(3)$  These quantities are also called  »ergodic probabilities«  of the Markov chain.


Stationary Markov chains have the special features mentioned below:

  • To calculate the ergodic probabilities of a binary Markov chain  $(M =2)$  one can use the following equations:
$${\rm Pr}(A) = {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}A) \cdot {\rm Pr}(A) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) \cdot {\rm Pr}(B) ,$$
$${\rm Pr}(B) = {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) \cdot {\rm Pr}(A) \hspace{0.1cm} + \hspace{0.1cm} {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}B) \cdot {\rm Pr}(B) .$$
  • Since these two equations are linearly dependent,  only one of them may be used.  As the second determining equation one can use,  for example:
$${\rm Pr}(A) + {\rm Pr}(B) = 1.$$
  • From these,  the ergodic probabilities follow as
$${\rm Pr}(A) = \frac {{\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) }{{\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) + {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) }\hspace{0.1cm}, $$
$${\rm Pr}(B) = \frac {{\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) }{{\rm Pr}(A \hspace{0.05cm} | \hspace{0.05cm}B) + {\rm Pr}(B \hspace{0.05cm} | \hspace{0.05cm}A) }.$$
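These two formulas can be evaluated with the following minimal Python sketch;  the function name is an illustrative assumption and the input values are the transition probabilities of Example 3:

    def ergodic_probs(p_A_given_B, p_B_given_A):
        """Ergodic probabilities of a stationary binary Markov chain,
        computed from the two independent transition probabilities."""
        denom = p_A_given_B + p_B_given_A
        return p_A_given_B / denom, p_B_given_A / denom

    # From Example 3:  Pr(A|A) = 0.2  =>  Pr(B|A) = 0.8,   Pr(B|B) = 0.4  =>  Pr(A|B) = 0.6
    p_A, p_B = ergodic_probs(p_A_given_B=0.6, p_B_given_A=0.8)
    print(round(p_A, 4), round(p_B, 4))      # 0.4286 0.5714,  with  p_A + p_B = 1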

$\text{Example 4:}$  We consider a stationary and binary Markov chain with events  $A$  and $B$  and transition probabilities  ${\rm Pr}(A \hspace{0.05cm} \vert\hspace{0.05cm}A) = 0.4$  and  ${\rm Pr}(B \hspace{0.05cm} \vert \hspace{0.05cm}B) = 0.8$.

[Figure: Transient response of the event probabilities (three decimal places)]

Furthermore,  we assume that each realization starts with event  $A$  at starting time  $ν = 0$.  We then obtain the event probabilities listed on the right:

In a strict sense,  this is a non-stationary Markov chain,  but it is  $($nearly$)$  settled after a short time:

  1. At later times  $(ν > 5)$,  the event probabilities  ${\rm Pr}(A_ν) \approx 1/4$  and  ${\rm Pr}(B_ν) \approx 3/4$  no longer change significantly.
  2. However,  from  ${\rm Pr}(A_{ν=5}) = 0.250$  and  ${\rm Pr}(B_{ν=5}) = 0.750$  it should not be concluded that the Markov chain has already settled perfectly at time  $ν = 5$.
  3. Indeed,  the exact values are:
$${\rm Pr}(A_{ν=5}) = 0.25024,\hspace{0.5cm}{\rm Pr}(B_{ν=5}) = 0.74976.$$
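The transient response of this example can be reproduced by simply iterating the first order recursion;  the following Python sketch  (illustrative code, not from the original text)  starts with  ${\rm Pr}(A_0) = 1$:

    # Example 4:  Pr(A|A) = 0.4,  Pr(B|B) = 0.8   =>   Pr(A|B) = 0.2,  Pr(B|A) = 0.6
    p_A_given_A, p_A_given_B = 0.4, 0.2

    p_A = 1.0                        # each realization starts with event A  =>  Pr(A_0) = 1
    for nu in range(1, 6):
        # Pr(A_nu) = Pr(A|A) * Pr(A_(nu-1)) + Pr(A|B) * Pr(B_(nu-1))
        p_A = p_A_given_A * p_A + p_A_given_B * (1.0 - p_A)
        print(nu, round(p_A, 5), round(1.0 - p_A, 5))

    # nu = 5 yields Pr(A_5) = 0.25024 and Pr(B_5) = 0.74976;
    # the values approach the ergodic probabilities 1/4 and 3/4.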


$\text{Conclusion:}$  For a stationary Markov chain,  the ergodic probabilities are always reached to a good approximation long after the chain is switched on  $(ν → ∞)$,  regardless of the initial conditions  ${\rm Pr}(A_0)$  and  ${\rm Pr}(B_0) = 1 - {\rm Pr}(A_0)$.

Matrix-vector representation


Homogeneous Markov chains can be represented very compactly with vectors and matrices.  This is especially recommended for more than two events:

$$E_\nu \in G = \{ E_{\rm 1},\ E_{\rm 2}, \hspace{0.1cm}\text{...}\hspace{0.1cm},\ E_\mu , \hspace{0.1cm}\text{...}\hspace{0.1cm},\ E_M \}.$$

For the matrix-vector representation,  we use the following nomenclature:

  • We summarize the  $M$  probabilities at time  $ν$  in a column vector:
$${\mathbf{p}^{(\nu)}} = \left[ \begin{array}{c} {p_{\rm 1}}^{(\nu)} \\ \dots \\ {p_{M}}^{(\nu)} \end{array} \right] \hspace{0.5cm}{\rm with} \hspace{0.5cm} {p_{\mu}}^{(\nu)} = {\rm Pr}(E_\nu = E_\mu ).$$
  • The transition probabilities are expressed by an  $M \times M$ matrix:
$${\mathbf{P}} =\left( p_{ij} \right) = \left[ \begin{array}{cccc} p_{11} & p_{12} & \cdots & p_{1M} \\ p_{21} & p_{22}& \cdots & p_{2M} \\ \dots & \dots & \dots & \dots \\ p_{M1} & p_{M2} & \cdots & p_{MM} \end{array} \right] \hspace{0.5cm}{\rm with} \hspace{0.5cm} {p_{ij}} = {\rm Pr}(E_{\nu +1 } = E_j \hspace{0.05cm}| \hspace{0.05cm} E_{\nu } = E_i).$$

The accompanying figure illustrates this nomenclature using the example  $M = 2$.  The new event probability vector after one step is:

[Figure: Notation of the transition probabilities]
$${\mathbf{p}^{(\nu + 1)}} = {\mathbf{P}}^{\rm T} \cdot {\mathbf{p}^{(\nu )}} .$$

Here,  ${\mathbf{P}^{\rm T}}$  denotes the  »transposed matrix«  of  $\mathbf{P}$.  Thus,  after  $n$  steps the result is

$${\mathbf{p}^{(\nu +{\it n})}} = \left( {\mathbf{P}}^{\rm T} \right)^n \cdot {\mathbf{p}^{(\nu )}} .$$

In the limit  $(n → ∞)$  the Markov chain then always reaches stationarity:

$$\lim_{n \to\infty}\hspace{0.1cm}{\mathbf{p}^{(\nu + n)}} = \lim_{n \to\infty} \left( {\mathbf{P}}^{\rm T} \right)^n \cdot {\mathbf{p}^{(\nu )}} = {\mathbf{p}}_{\rm erg}= \left[ \begin{array}{c} {p_{\rm 1}} \\ \dots \\ {p_{M}} \end{array} \right] .$$

$\text{Definition:}$  If the transition matrix  ${\mathbf{P} }$  is multiplied by itself infinitely many times and the result is denoted by  ${\mathbf{P} }_{\rm erg}$,  the resulting matrix consists of  $M$  identical rows:

$${\mathbf{P} }_{\rm erg} = \lim_{n \to\infty} {\mathbf{P} }^n = \left[ \begin{array}{cccc} p_{1} & p_{2} & \cdots & p_{M} \\ p_{1} & p_{2}& \cdots & p_{M} \\ \dots & \dots & \dots & \dots \\ p_{1} & p_{2} & \cdots & p_{M} \end{array} \right] .$$

The probabilities  $p_1, \hspace{0.15cm}\text{...}\hspace{0.15cm}, p_M$  in each of these rows are called the  »ergodic probabilities«.


$\text{Example 5:}$  We consider a Markov chain with the events  $E_1,  E_2$  and  $E_3$   ⇒   $M=3$.  The graph shows

[Figure: Markov diagram with three events]
  • on the left the Markov diagram,
  • on the upper right the transition matrix  $\mathbf{P}$,
  • below that,  its powers  $\mathbf{P}^2$  and  $\mathbf{P}^3$.


In the Markov diagram,  all transition probabilities with the numerical value  $1/2$  are shown in blue;   green indicates the probability  $1/4$.

  • At the start  $(ν =0)$  all events are equally probable.  For  $ν = 1$  then holds:
$${\mathbf{p}^{(1)} } = {\mathbf{P} }^{\rm T} \cdot {\mathbf{p}^{(0 )} }= \left[ \begin{array}{ccc} 1/2 & 1/2& 1/2 \\ 1/4 & 0 & 1/2 \\ 1/4& 1/2& 0 \end{array} \right] \left[ \begin{array}{c} 1/3 \\ 1/3 \\ 1/3 \end{array} \right] = \left[ \begin{array}{c} 1/2 \\ 1/4 \\ 1/4 \end{array} \right] .$$
From this it can be seen that we get from the matrix  $\mathbf{P}$  to the transposed matrix  ${\mathbf{P}^{\rm T} }$  by exchanging rows and columns.
  • At time  $ν =2$  $($and also at all later times$)$  the same probabilities result:
$${\mathbf{p}^{(2)} } = {\mathbf{P} }^{\rm T} \cdot {\mathbf{p}^{(1 )} }= \left[ \begin{array}{ccc} 1/2 & 1/2& 1/2 \\ 1/4 & 0 & 1/2 \\ 1/4& 1/2& 0 \end{array} \right] \left[ \begin{array}{c} 1/2 \\ 1/4 \\ 1/4 \end{array} \right] = \left[ \begin{array}{c} 1/2 \\ 1/4 \\ 1/4 \end{array} \right] .$$
This means:  The ergodic probabilities of the three events are  $1/2$  $($for  $E_1)$,  $1/4$  $($for  $E_2)$,  and $1/4$  $($for  $E_3)$.
  • This result could have been read directly from the ergodic matrix:
$${\mathbf{P} }_{\rm erg} = \lim_{n \to\infty} {\mathbf{P} }^n = \left[ \begin{array}{ccc} 1/2 & 1/4 & 1/4 \\ 1/2 & 1/4 & 1/4 \\ 1/2 & 1/4 & 1/4 \end{array} \right] .$$
This is obtained by repeatedly multiplying the transition matrix by itself.
  • In the figure above,  the powers  $\mathbf{P}^2$  and  $\mathbf{P}^3$  are given;  they already approximate the matrix  $\mathbf{P}_{\rm erg}$.
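The calculations of this example can also be checked numerically,  for instance with NumPy;  the following sketch assumes the transition matrix  $\mathbf{P}$  shown in the figure  (each row sums to one):

    import numpy as np

    # Transition matrix P of Example 5 (row i holds the probabilities leaving event E_i):
    P = np.array([[1/2, 1/4, 1/4],
                  [1/2, 0,   1/2],
                  [1/2, 1/2, 0  ]])

    p0 = np.full(3, 1/3)                     # at nu = 0 all three events are equally probable
    p1 = P.T @ p0                            # event probabilities at nu = 1
    print(p1)                                # approximately [0.5, 0.25, 0.25]

    P_erg = np.linalg.matrix_power(P, 50)    # P^n for large n approximates the ergodic matrix
    print(P_erg)                             # each row is approximately [0.5, 0.25, 0.25]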

Exercises for the chapter

Exercise 1.6: Transition Probabilities

Exercise 1.6Z: Ergodic Probabilities

Exercise 1.7: Ternary Markov Chain

Exercise 1.7Z: BARBARA Generator