Exercise 4.15: PDF and Covariance Matrix

From LNTwww
  
{{quiz-Header|Buchseite=Theory_of_Stochastic_Signals/Generalization_to_N-Dimensional_Random_Variables
}}
  
[[File:P_ID669__Sto_A_4_15.png |right|frame|Two covariance matrices]]
We consider here the three-dimensional random variable  $\mathbf{x}$,  whose covariance matrix  $\mathbf{K}_{\mathbf{x}}$,  given in general form,  is shown in the upper graph.  The random variable has the following properties:
  
* The three components are Gaussian distributed,  and the elements of the covariance matrix satisfy
:$$K_{ij} = \sigma_i \cdot \sigma_j \cdot \rho_{ij}.$$
* The elements on the main diagonal are known:
:$$ K_{11} =1, \ K_{22} =0, \ K_{33} =0.25.$$
* The correlation coefficient between the components  $x_1$  and  $x_3$  is  $\rho_{13} = 0.8$.
  
  
In the second part of the exercise,  consider the random variable  $\mathbf{y}$  with the two components  $y_1$  and  $y_2$,  whose covariance matrix  $\mathbf{K}_{\mathbf{y}}$  is determined by the given numerical values  $(1, \ 0.4, \ 0.25)$.
  
The probability density function  $\rm (PDF)$  of a zero-mean two-dimensional Gaussian random variable  $\mathbf{y}$  reads,  according to the page  [[Theory_of_Stochastic_Signals/Generalization_to_N-Dimensional_Random_Variables#Relationship_between_covariance_matrix_and_PDF|"Relationship between covariance matrix and PDF"]],  with  $N = 2$:

:$$\mathbf{f_y}(\mathbf{y})  =  \frac{1}{{2 \pi \cdot \sqrt{|\mathbf{K_y}|}}}\cdot {\rm e}^{-{1}/{2} \hspace{0.05cm}\cdot\hspace{0.05cm} \mathbf{y} ^{\rm T}\hspace{0.05cm}\cdot\hspace{0.05cm}\mathbf{K_y}^{-1} \hspace{0.05cm}\cdot\hspace{0.05cm} \mathbf{y}  }=  C \cdot  {\rm e}^{-(\gamma_1 \hspace{0.05cm}\cdot\hspace{0.05cm} y_1^2 \hspace{0.1cm}+\hspace{0.1cm} \gamma_2 \hspace{0.05cm}\cdot\hspace{0.05cm} y_2^2 \hspace{0.1cm}+\hspace{0.1cm}\gamma_{12} \hspace{0.05cm}\cdot\hspace{0.05cm} y_1 \hspace{0.05cm}\cdot\hspace{0.05cm} y_2) }.$$
  
*In subtasks  '''(5)'''  and  '''(6)''',  the prefactor  $C$  and the further PDF coefficients  $\gamma_1$,  $\gamma_2$  and  $\gamma_{12}$  are to be calculated according to this vector representation.
*In contrast,  with the conventional approach according to the chapter  [[Theory_of_Stochastic_Signals/Two-Dimensional_Gaussian_Random_Variables#Probability_density_function_and_cumulative_distribution_function|"Two-Dimensional Gaussian Random Variables"]],  the corresponding equation would be:
:$$f_{y_1,\hspace{0.1cm}y_2}(y_1,y_2)=\frac{\rm 1}{\rm 2\pi \sigma_1 \sigma_2 \sqrt{\rm 1-\rho^2}}\cdot\exp\Bigg[-\frac{\rm 1}{\rm 2 (1-\rho^{\rm 2})}\cdot(\frac { y_1^{\rm 2}}{\sigma_1^{\rm 2}}+\frac { y_2^{\rm 2}}{\sigma_2^{\rm 2}}-\rm 2\rho \frac{{\it y}_1{\it y}_2}{\sigma_1 \cdot \sigma_2}) \rm \Bigg].$$
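The equivalence of the two representations can be illustrated with a short NumPy sketch  (supplementary material;  the function names and the example values below are chosen purely for illustration and are not those of the exercise):
<syntaxhighlight lang="python">
import numpy as np

def pdf_vector_form(y, K):
    """Zero-mean two-dimensional Gaussian PDF in the vector/matrix representation."""
    y = np.asarray(y, dtype=float)
    C = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(K)))
    return C * np.exp(-0.5 * y @ np.linalg.inv(K) @ y)

def pdf_conventional_form(y1, y2, sigma1, sigma2, rho):
    """The same PDF in the conventional (sigma_1, sigma_2, rho) notation."""
    C = 1.0 / (2 * np.pi * sigma1 * sigma2 * np.sqrt(1 - rho**2))
    q = (y1**2 / sigma1**2 + y2**2 / sigma2**2
         - 2 * rho * y1 * y2 / (sigma1 * sigma2))
    return C * np.exp(-q / (2 * (1 - rho**2)))

# Generic example values (deliberately not the values of this exercise)
sigma1, sigma2, rho = 1.5, 0.7, 0.3
K = np.array([[sigma1**2,             rho * sigma1 * sigma2],
              [rho * sigma1 * sigma2, sigma2**2            ]])

print(pdf_vector_form([0.4, -0.2], K))                         # both calls print the same density
print(pdf_conventional_form(0.4, -0.2, sigma1, sigma2, rho))
</syntaxhighlight>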
  
  
Hints:
*The exercise belongs to the chapter  [[Theory_of_Stochastic_Signals/Generalization_to_N-Dimensional_Random_Variables|"Generalization to N-Dimensional Random Variables"]].
*Some basics on the use of vectors and matrices can be found in the sections  [[Theory_of_Stochastic_Signals/Generalization_to_N-Dimensional_Random_Variables#Basics_of_matrix_operations:_Determinant_of_a_matrix|"Determinant of a Matrix"]]  and  [[Theory_of_Stochastic_Signals/Generalization_to_N-Dimensional_Random_Variables#Basics_of_matrix_operations:_Inverse_of_a_matrix|"Inverse of a Matrix"]].
*Reference is also made to the chapter  [[Theory_of_Stochastic_Signals/Two-Dimensional_Gaussian_Random_Variables|"Two-Dimensional Gaussian Random Variables"]].
 
   
 
   
  
  
  
===Questions===
  
 
<quiz display=simple>
{Which of the following statements are true?
|type="[]"}
- The random variable&nbsp; $\mathbf{x}$&nbsp; is certainly zero-mean.
+ The matrix elements&nbsp; $K_{12}$,&nbsp; $K_{21}$,&nbsp; $K_{23}$&nbsp; and&nbsp; $K_{32}$&nbsp; are zero.
- It holds that&nbsp; $K_{31} = -K_{13}$.
  
  
{Calculate the matrix element in the last row and first column.
|type="{}"}
$K_\text{31} \ = \ $ { 0.4 3% }
  
  
{Calculate the determinant&nbsp; $|\mathbf{K}_{\mathbf{y}}|$.
|type="{}"}
$|\mathbf{K}_{\mathbf{y}}| \ = \ $ { 0.09 3% }
  
  
{Calculate the inverse matrix&nbsp; $\mathbf{I}_{\mathbf{y}} = \mathbf{K}_{\mathbf{y}}^{-1}$&nbsp; with matrix elements&nbsp; $I_{ij}$:
|type="{}"}
$I_\text{11} \ = \ $ { 2.777 3% }
$I_\text{12} \ = \ $ { -4.454--4.434 }
$I_\text{21} \ = \ $ { -4.454--4.434 }
$I_\text{22} \ = \ $ { 11.111 3% }
  
  
{Calculate the prefactor&nbsp; $C$&nbsp; of the two-dimensional probability density function.&nbsp; Compare the result with the formula given in the theory section.
|type="{}"}
$C\ = \ $ { 0.531 3% }
  
  
{Determine the coefficients in the argument of the exponential function.&nbsp; Compare the result with the two-dimensional PDF equation.
|type="{}"}
$\gamma_1 \ = \ $ { 1.389 3% }
$\gamma_2 \ = \ $ { 5.556 3% }
$\gamma_{12}\ = \ $ { -4.454--4.434 }
  
  
</quiz>
  
===Solution===
{{ML-Kopf}}
'''(1)'''&nbsp; Only&nbsp; <u>the proposed solution 2</u>&nbsp; is correct:
*From the covariance matrix&nbsp; $\mathbf{K}_{\mathbf{x}}$&nbsp; alone it is not possible to say whether the underlying random variable&nbsp; $\mathbf{x}$&nbsp; is zero-mean or has a nonzero mean,&nbsp; since any mean value&nbsp; $\mathbf{m}$&nbsp; is already subtracted out.
*To make statements about the mean,&nbsp; the correlation matrix&nbsp; $\mathbf{R}_{\mathbf{x}}$&nbsp; would have to be known.
*From&nbsp; $K_{22} = \sigma_2^2 = 0$&nbsp; it necessarily follows that all other elements in the second row&nbsp; $(K_{21}, K_{23})$&nbsp; and the second column&nbsp; $(K_{12}, K_{32})$&nbsp; are also zero,&nbsp; since&nbsp; $K_{2j} = \sigma_2 \cdot \sigma_j \cdot \rho_{2j} = 0$&nbsp; for&nbsp; $\sigma_2 = 0$.
*The third statement,&nbsp; on the other hand,&nbsp; is false: &nbsp; the elements are symmetric about the main diagonal,&nbsp; so&nbsp; $K_{31} = K_{13}$&nbsp; must always hold.
  
  
  
[[File:P_ID2915__Sto_A_4_15a.png|right|frame|Complete covariance matrix]]
'''(2)'''&nbsp; From&nbsp; $K_{11} = 1$&nbsp; and&nbsp; $K_{33} = 0.25$&nbsp; it follows directly that&nbsp; $\sigma_1 = 1$&nbsp; and&nbsp; $\sigma_3 = 0.5$.
*Together with the correlation coefficient&nbsp; $\rho_{13} = 0.8$&nbsp; (see the exercise statement),&nbsp; we thus obtain:
:$$K_{13} = K_{31} = \sigma_1 \cdot \sigma_3 \cdot \rho_{13}\hspace{0.15cm}\underline{= 0.4}.$$
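The element-wise construction&nbsp; $K_{ij} = \sigma_i \cdot \sigma_j \cdot \rho_{ij}$&nbsp; can be reproduced with a small NumPy sketch  (supplementary to the solution;  variable names chosen for illustration):
<syntaxhighlight lang="python">
import numpy as np

# Standard deviations of the three components (sigma_2 = 0 follows from K_22 = 0)
sigma = np.array([1.0, 0.0, 0.5])
rho_13 = 0.8

# Correlation coefficient matrix: ones on the main diagonal, rho_13 between x_1 and x_3
rho = np.eye(3)
rho[0, 2] = rho[2, 0] = rho_13

# K_ij = sigma_i * sigma_j * rho_ij
K_x = np.outer(sigma, sigma) * rho
print(K_x[2, 0])   # 0.4  ->  K_31 = K_13
</syntaxhighlight>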
  
  
  
'''(3)'''&nbsp; The determinant of the matrix&nbsp; $\mathbf{K_y}$&nbsp; is:
:$$|{\mathbf{K_y}}| = 1 \cdot 0.25 - 0.4 \cdot 0.4 \hspace{0.15cm}\underline{= 0.09}.$$
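A quick numerical cross-check of this value,  assuming NumPy  (supplementary sketch):
<syntaxhighlight lang="python">
import numpy as np

# Covariance matrix of y with the given values (1, 0.4, 0.25)
K_y = np.array([[1.0, 0.4],
                [0.4, 0.25]])

det_K_y = np.linalg.det(K_y)   # 1*0.25 - 0.4*0.4
print(round(det_K_y, 6))       # 0.09
</syntaxhighlight>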
  
  
'''(4)'''&nbsp; According to the statements on the pages&nbsp; "Determinant of a Matrix"&nbsp; and&nbsp; "Inverse of a Matrix",&nbsp; the following holds:
:$${\mathbf{I_y}} = {\mathbf{K_y}}^{-1} =
\frac{1}{|{\mathbf{K_y}}|}\cdot \left[
\begin{array}{cc}
0.25 & -0.4 \\
-0.4 & 1
\end{array} \right].$$
*With&nbsp; $|\mathbf{K_y}|= 0.09$&nbsp; it therefore further holds:
:$$I_{11} = {25}/{9}\hspace{0.15cm}\underline{ = 2.777};\hspace{0.3cm} I_{12} = I_{21} = -40/9 \hspace{0.15cm}\underline{ = -4.444};\hspace{0.3cm}I_{22} = {100}/{9} \hspace{0.15cm}\underline{= 11.111}.$$
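The inverse can likewise be checked numerically  (supplementary NumPy sketch;  the comment gives the exact fractions):
<syntaxhighlight lang="python">
import numpy as np

K_y = np.array([[1.0, 0.4],
                [0.4, 0.25]])

I_y = np.linalg.inv(K_y)
# I_y = (1/0.09) * [[0.25, -0.4], [-0.4, 1]] = [[25/9, -40/9], [-40/9, 100/9]]
print(I_y)
</syntaxhighlight>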
  
  
'''(5)'''&nbsp; A comparison of&nbsp; $\mathbf{K_y}$&nbsp; and&nbsp; $\mathbf{K_x}$&nbsp; with the constraint&nbsp; $K_{22} = 0$&nbsp; shows that&nbsp; $\mathbf{x}$&nbsp; and&nbsp; $\mathbf{y}$&nbsp; are identical random variables if one sets&nbsp; $y_1 = x_1$&nbsp; and&nbsp; $y_2 = x_3$.
*Thus,&nbsp; the PDF parameters are:
:$$\sigma_1 =1, \hspace{0.3cm} \sigma_2 =0.5, \hspace{0.3cm} \rho =0.8.$$
*The prefactor according to the general PDF definition is thus:
:$$C =\frac{\rm 1}{\rm 2\pi \cdot \sigma_1 \cdot \sigma_2 \cdot \sqrt{\rm 1-\rho^2}}= \frac{\rm 1}{\rm 2\pi \cdot 1 \cdot 0.5 \cdot 0.6}= \frac{1}{0.6 \cdot \pi} \hspace{0.15cm}\underline{\approx 0.531}.$$
*With the determinant calculated in subtask&nbsp; '''(3)'''&nbsp; we get the same result:
:$$C =\frac{\rm 1}{\rm 2\pi \sqrt{|{\mathbf{K_y}}|}}= \frac{\rm 1}{\rm 2\pi \sqrt{0.09}} = \frac{1}{0.6 \cdot \pi}.$$
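Both ways of computing the prefactor can be verified with a few lines of NumPy  (supplementary sketch;  variable names chosen for illustration):
<syntaxhighlight lang="python">
import numpy as np

sigma_1, sigma_2, rho = 1.0, 0.5, 0.8
K_y = np.array([[1.0, 0.4],
                [0.4, 0.25]])

# Prefactor in the conventional (sigma, rho) form ...
C_conventional = 1 / (2 * np.pi * sigma_1 * sigma_2 * np.sqrt(1 - rho**2))
# ... and in the form based on the determinant of the covariance matrix
C_matrix = 1 / (2 * np.pi * np.sqrt(np.linalg.det(K_y)))

print(C_conventional, C_matrix)   # both approx. 0.531 = 1/(0.6*pi)
</syntaxhighlight>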
  
  
'''(6)'''&nbsp; The inverse matrix computed in subtask&nbsp; '''(4)'''&nbsp; can also be written as follows:
:$${\mathbf{I_y}} = \frac{5}{9}\cdot \left[
\begin{array}{cc}
5 & -8 \\
-8 & 20
\end{array} \right].$$
*So the argument&nbsp; $A$&nbsp; of the exponential function is:
:$$A = -\frac{1}{2}\cdot{\mathbf{y}}^{\rm T}\cdot{\mathbf{I_y}}\cdot{\mathbf{y}} = -\frac{5}{18}\cdot{\mathbf{y}}^{\rm T}\cdot \left[
\begin{array}{cc}
5 & -8 \\
-8 & 20
\end{array} \right]\cdot{\mathbf{y}} =-\frac{5}{18}\left( 5 \cdot y_1^2 + 20 \cdot y_2^2 -16 \cdot y_1 \cdot y_2\right).$$
*By comparing coefficients,&nbsp; we get:
:$$\gamma_1 = \frac{25}{18} \approx 1.389; \hspace{0.3cm} \gamma_2 = \frac{100}{18} \approx 5.556; \hspace{0.3cm} \gamma_{12} = -\frac{80}{18} \approx -4.444.$$
*According to the conventional procedure,&nbsp; the same numerical values result:
:$$\gamma_1 =\frac{\rm 1}{\rm 2\cdot \sigma_1^2 \cdot ({\rm 1-\rho^2})}= \frac{\rm 1}{\rm 2 \cdot 1 \cdot 0.36} \hspace{0.15cm}\underline{ \approx 1.389},$$
:$$\gamma_2 =\frac{\rm 1}{\rm 2 \cdot\sigma_2^2 \cdot ({\rm 1-\rho^2})}= \frac{\rm 1}{\rm 2 \cdot 0.25 \cdot 0.36} = 4 \cdot \gamma_1 \hspace{0.15cm}\underline{\approx 5.556},$$
:$$\gamma_{12} =-\frac{\rho}{ \sigma_1 \cdot \sigma_2 \cdot ({\rm 1-\rho^2})}= -\frac{\rm 0.8}{\rm 1 \cdot 0.5 \cdot 0.36} \hspace{0.15cm}\underline{ \approx -4.444}.$$
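A supplementary NumPy sketch confirms that both routes lead to the same coefficients  (variable names chosen for illustration):
<syntaxhighlight lang="python">
import numpy as np

sigma_1, sigma_2, rho = 1.0, 0.5, 0.8
K_y = np.array([[1.0, 0.4],
                [0.4, 0.25]])

# Matrix form: exponent = -1/2 * y^T * K_y^(-1) * y = -(g1*y1^2 + g2*y2^2 + g12*y1*y2)
I_y = np.linalg.inv(K_y)
g1, g2, g12 = 0.5 * I_y[0, 0], 0.5 * I_y[1, 1], I_y[0, 1]

# Conventional two-dimensional Gaussian form for comparison
g1_c  = 1 / (2 * sigma_1**2 * (1 - rho**2))
g2_c  = 1 / (2 * sigma_2**2 * (1 - rho**2))
g12_c = -rho / (sigma_1 * sigma_2 * (1 - rho**2))

print(g1, g2, g12)         # approx. 1.389, 5.556, -4.444
print(g1_c, g2_c, g12_c)   # the same values
</syntaxhighlight>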
 
{{ML-Fuß}}
