Theory of Stochastic Signals/Some Basic Definitions

From LNTwww
 
{{FirstPage}}
{{Header
|Untermenü=Probability Calculation
|Nächste Seite=Mengentheoretische Grundlagen
}}
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==
<br>
This first chapter gives a brief summary of&nbsp; &raquo;'''probability calculation'''&laquo;,&nbsp; which many of you surely already know from your school days and which is an important prerequisite for understanding the chapters that follow.
  
 
This chapter includes
# some&nbsp; &raquo;definitions&laquo;&nbsp; such as&nbsp; &raquo;random experiment&laquo;,&nbsp; &raquo;outcome&laquo;,&nbsp; &raquo;event&laquo;,&nbsp; and&nbsp; &raquo;probability&laquo;,
# the&nbsp; &raquo;set-theoretical basics&laquo;&nbsp; relevant for probability theory,
# the clarification of&nbsp; &raquo;statistical dependence&laquo;&nbsp; and&nbsp; &raquo;statistical independence&laquo;,
# the mathematical treatment of statistical dependence by&nbsp; &raquo;Markov chains&laquo;.
  
  
==Experiment and outcome==
<br>
The starting point of any statistical investigation is a&nbsp; &raquo;'''random experiment'''&laquo;.&nbsp; By this,&nbsp; one understands
*an experiment that can be repeated as often as desired under the same conditions with an uncertain&nbsp; &raquo;'''outcome'''&laquo;&nbsp; &nbsp;$($German:&nbsp; "$\rm E\hspace{0.02cm}$rgebnis"$)$&nbsp; $E$,
*in which,&nbsp; however,&nbsp; the set&nbsp;  $ \{E_μ \}$&nbsp; of the possible outcomes is specifiable.
  
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; The number of possible outcomes is called the&nbsp; &raquo;'''outcome set size'''&laquo;&nbsp; $M$.&nbsp; The following holds:
:$$E_\mu \in G = \{E_\mu\}=  \{E_1, \hspace{0.1cm}\text{...} \hspace{0.1cm}, E_M \} .$$
#The variable&nbsp; $μ$&nbsp; can take all integer values between&nbsp; $1$&nbsp; and&nbsp; $M$.&nbsp;   
#$G = \{E_\mu\}$&nbsp; is called the event space or the&nbsp; &raquo;'''universal set'''&laquo;&nbsp; $($German:&nbsp; "Grundmenge" &nbsp; &rArr; &nbsp; letter:&nbsp; "G"$)$&nbsp; with&nbsp; $M$&nbsp; possible outcomes.}}
  
  
 
{{GraueBox|TEXT=   
$\text{Example 1:}$&nbsp;  
*In the experiment&nbsp; &raquo;coin toss&laquo;&nbsp; there are only two possible outcomes,&nbsp; namely&nbsp; &raquo;heads&laquo;&nbsp; and&nbsp; &raquo;tails&laquo; &nbsp; ⇒  &nbsp; $M = 2$.&nbsp;   
*In contrast,&nbsp; in the random experiment&nbsp; &raquo;throwing a roulette ball&laquo;&nbsp; a total of&nbsp; $M = 37$&nbsp; different outcomes are possible,&nbsp; and it holds for the universal set in this case:
:$$G = \{E_\mu\} = \{0, 1, 2, \text{...} \hspace{0.1cm} , 36\}.$$}}
  
 
==Classical definition of probability==
<br>
We assume that each trial results in exactly one outcome from&nbsp; $G$&nbsp; and that each of these&nbsp; $M$&nbsp; outcomes is equally possible&nbsp; $($without preference or disadvantage$)$.
  
 
{{BlaueBox|TEXT=   
$\text{Definition:}$&nbsp; With this assumption,&nbsp; each outcome&nbsp; $E_μ$&nbsp; has the same&nbsp; &raquo;'''probability'''&laquo;:
:$$\Pr (E_\mu) = 1/{M}.$$}}
  
  
This is the&nbsp; &raquo;classical definition of probability&laquo;.&nbsp; ${\rm Pr}(\text{...} )$&nbsp; stands for&nbsp; &raquo;probability&laquo;&nbsp; and is to be understood as a mathematical function.
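As a plausibility check,&nbsp; the classical definition can be illustrated by a short simulation.&nbsp; The following Python sketch&nbsp; $($an added illustration,&nbsp; not part of the original text$)$&nbsp; draws many outcomes of a fair die&nbsp; $(M = 6)$&nbsp; and shows that the relative frequencies approach&nbsp; ${\rm Pr}(E_\mu) = 1/M$:

```python
import random

random.seed(1)                       # fixed seed for reproducibility

M = 6                                # size of the universal set G (fair die, assumed)
N = 100_000                          # number of repetitions of the random experiment
counts = {mu: 0 for mu in range(1, M + 1)}

for _ in range(N):
    outcome = random.randint(1, M)   # one trial: an outcome drawn uniformly from G
    counts[outcome] += 1

for mu in sorted(counts):
    # each relative frequency tends to Pr(E_mu) = 1/M ≈ 0.1667
    print(mu, counts[mu] / N)
```

With growing&nbsp; $N$&nbsp; the deviations from&nbsp; $1/M$&nbsp; shrink on the order of&nbsp; $1/\sqrt{N}$;&nbsp; this is the empirical counterpart of the classical definition.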
  
 
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp;
In the random experiment&nbsp; &raquo;coin toss&laquo;,&nbsp; the probabilities of the two possible outcomes are
:$$\rm Pr(heads)=Pr(tails)=1/2.$$
*This assumes that each attempt ends either with&nbsp; &raquo;heads&laquo;&nbsp; or with&nbsp; &raquo;tails&laquo;&nbsp; and that the coin cannot come to rest on its edge during an attempt.
*In the experiment&nbsp; &raquo;throwing a roulette ball&laquo;&nbsp; the probabilities&nbsp; ${\rm Pr}( E_μ) = 1/37$&nbsp; are equal for all numbers from&nbsp; $0$&nbsp; to&nbsp; $36$&nbsp; <br>only if the roulette table has not been manipulated.}}
  
  
Note: &nbsp; '''Probability theory – and the statistics based on it – can only provide well-founded statements if all implicitly agreed conditions are actually fulfilled'''.&nbsp;
*Checking these conditions is not the task of statistics,&nbsp; but of those who use them.&nbsp;
*Since this basic rule is often violated,&nbsp; statistics has a much worse reputation in society than it actually deserves.
  
==Event and event probability==
<br>
{{BlaueBox|TEXT=   
$\text{Definitions:}$&nbsp;
'''(1)''' &nbsp; By an&nbsp; &raquo;'''event'''&laquo;&nbsp; we mean a set or collection of outcomes.&nbsp; We refer to the set of all events as the&nbsp; &raquo;'''event set'''&laquo;&nbsp; $\{A_i \}$.
::Since the number&nbsp; $I$&nbsp; of possible events&nbsp; $\{A_i \}$&nbsp; is generally not the same as the number&nbsp; $M$&nbsp; of possible outcomes&nbsp; $($that is,&nbsp; the elements of&nbsp; $G = \{ E_μ \})$,&nbsp; different indices are chosen here.
'''(2)''' &nbsp; If an event&nbsp; $A_i$&nbsp; is composed of&nbsp; $K$&nbsp; $($elementary$)$&nbsp; outcomes,&nbsp; the&nbsp; &raquo;'''event probability'''&laquo;&nbsp; is defined as follows:
 
:$${\rm Pr} (A_i) = \frac{K}{M} = \frac{\rm Number\hspace{0.1cm}of\hspace{0.1cm}favorable\hspace{0.1cm}outcomes}{\rm Number\hspace{0.1cm}of\hspace{0.1cm}possible\hspace{0.1cm}outcomes}.$$}}
  
  
This equation is called the&nbsp; [https://en.wikipedia.org/wiki/Pierre-Simon_Laplace &raquo;'''Laplace probability definition'''&laquo;].   
*Here,&nbsp; &raquo;favorable outcomes&laquo;&nbsp; are those outcomes that belong to the composite event&nbsp; $A_i$.
*From this definition it is already clear that a probability must always lie between&nbsp; $0$&nbsp; and&nbsp; $1$&nbsp; $($including these two limits$)$.  
  
  
 
{{GraueBox|TEXT=   
$\text{Example 3:}$&nbsp;
We now consider the experiment&nbsp; &raquo;throwing a die&laquo;.&nbsp; The possible outcomes&nbsp; $($number of pips$)$&nbsp; are thus&nbsp;  $E_μ ∈ G = \{1, 2, 3, 4, 5, 6\}$.  
  
 
Let us now define two events&nbsp; $(I = 2)$,&nbsp; viz.
* $A_1 = \big[$the outcome is even$\big] = \{2, 4, 6\}$,&nbsp; and  
* $A_2 = \big[$the outcome is odd$\big] = \{1, 3, 5\}$,  
  
  
then the event set&nbsp;  $\{A_1, A_2\}$&nbsp; is equal to the universal set&nbsp; $G$.&nbsp; For this example,&nbsp; the events&nbsp; $A_1$&nbsp; and&nbsp; $A_2$&nbsp; represent a so-called&nbsp; [[Theory_of_Stochastic_Signals/Set_Theory_Basics#Complete_system|&raquo;complete system&laquo;]].
  
On the other hand,&nbsp; the further event set&nbsp;  $\{A_3, A_4\}$&nbsp; is not equal to the universal set&nbsp; $G$,&nbsp; if we define the single events as follows:
* $A_3 = \big[$the outcome is smaller than&nbsp; 3$\big] = \{1, 2\}$,
* $A_4 =\big[$the outcome is bigger than&nbsp; 3$\big] = \{4, 5, 6\}$.  
  
  
Here,&nbsp; the event set&nbsp; $\{A_3, A_4\}$&nbsp; does not include the element&nbsp; $3$.&nbsp; The probabilities of the events defined here are&nbsp;  
:$${\rm Pr}( A_3) = 1/3,\ {\rm Pr}( A_1) ={\rm Pr}(A_2) = {\rm Pr}(A_4) = 1/2.$$}}
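Example 3 can also be reproduced mechanically from the Laplace definition.&nbsp; The following Python sketch&nbsp; $($an added illustration;&nbsp; the set names mirror the example$)$&nbsp; computes&nbsp; ${\rm Pr}(A_i) = K/M$&nbsp; as the ratio of favorable to possible outcomes:

```python
from fractions import Fraction

G = {1, 2, 3, 4, 5, 6}              # universal set of the die experiment

def pr(event):
    """Laplace probability: number of favorable / number of possible outcomes."""
    favorable = event & G            # only outcomes that lie in G count
    return Fraction(len(favorable), len(G))

A1 = {2, 4, 6}                       # the outcome is even
A2 = {1, 3, 5}                       # the outcome is odd
A3 = {1, 2}                          # the outcome is smaller than 3
A4 = {4, 5, 6}                       # the outcome is bigger than 3

print(pr(A1), pr(A2), pr(A3), pr(A4))   # 1/2 1/2 1/3 1/2
print(A1 | A2 == G)                  # {A1, A2} covers G (complete system): True
print(A3 | A4 == G)                  # {A3, A4} misses the element 3: False
```

The two final lines make the distinction of the example explicit:&nbsp; only the first event set reproduces the universal set&nbsp; $G$.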
  
  
&rArr; &nbsp; The topic of this chapter is illustrated with examples in the&nbsp; $($German language$)$&nbsp; learning video&nbsp;<br> &nbsp; &nbsp;  &nbsp; &nbsp; &nbsp; &nbsp; [[Klassische_Definition_der_Wahrscheinlickeit_(Lernvideo)|&raquo;Klassische Definition der Wahrscheinlickeit&laquo;]] &nbsp; &rArr; &nbsp; &raquo;Classical definition of probability&laquo;.
  
  
 
==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_1.1:_A_Special_Dice_Game|Exercise 1.1: A Special Dice Game]]

[[Aufgaben:Exercise_1.1Z:_Sum_of_Two_Ternary_Signals|Exercise 1.1Z: Sum of Two Ternary Signals]]
  
  
 
{{Display}}

Latest revision as of 17:47, 30 November 2023

