{{Header
|Untermenü=Binary Block Codes for Channel Coding
|Vorherige Seite=Zielsetzung der Kanalcodierung
|Nächste Seite=Beispiele binärer Blockcodes
}}
  
== AWGN channel at binary input ==
<br>
We consider the well-known discrete-time&nbsp; [[Modulation_Methods/Quality_Criteria#Some_remarks_on_the_AWGN_channel_model|$\text{AWGN}$]]&nbsp; channel model according to the left graph:
*The binary and discrete-time message signal&nbsp; $x$&nbsp; takes the values&nbsp; $0$&nbsp; and&nbsp; $1$&nbsp; with equal probability:
:$${\rm Pr}(x = 0) ={\rm Pr}(x = 1) = 1/2.$$

*We now transform the unipolar variable&nbsp; $x \in \{0,\ 1 \}$&nbsp; into the bipolar variable&nbsp; $\tilde{x} \in \{+1, -1 \}$,&nbsp; which is more suitable for signal transmission.&nbsp; Then:

[[File:EN_KC_T_1_2_S1.png|right|frame|Probability density function&nbsp; $\rm (PDF)$&nbsp; of the AWGN channel|class=fit]]

:$${\rm Pr}(\tilde{x} =+1) ={\rm Pr}(\tilde{x} =-1) = 1/2.$$

*The transmission is affected by&nbsp; [[Digital_Signal_Transmission/System_Components_of_a_Baseband_Transmission_System#Transmission_channel_and_interference| "additive white Gaussian noise"]]&nbsp; $\rm (AWGN)$&nbsp; $n$&nbsp; with&nbsp; (normalized)&nbsp; noise power&nbsp; $\sigma^2 = N_0/(2E_{\rm S})$.&nbsp; The standard deviation&nbsp; (rms value)&nbsp; of the Gaussian PDF is&nbsp; $\sigma$.<br>
  
*Because of the Gaussian PDF,&nbsp; the output signal&nbsp; $y = \tilde{x} +n$&nbsp; can take on any real value in the range&nbsp; $-\infty$&nbsp; to&nbsp; $+\infty$.&nbsp; That means:

*The AWGN output value&nbsp; $y$&nbsp; is therefore discrete in time like&nbsp; $x$&nbsp; $($resp. $\tilde{x})$,&nbsp; but in contrast to the latter it is continuous in value.<br>
<br clear=all>
The graph on the right shows&nbsp; $($in blue and red,&nbsp; respectively$)$&nbsp; the conditional probability density functions:
::<math>f_{y \hspace{0.03cm}| \hspace{0.03cm}x=0 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=0 ) =
\frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e}^{ -  (y-1)^2/(2\sigma^2) }\hspace{0.05cm},</math>
::<math>f_{y \hspace{0.03cm}| \hspace{0.03cm}x=1 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=1 ) =
\frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e}^{ -  (y+1)^2/(2\sigma^2) }\hspace{0.05cm}.</math>
  
Not shown is the total&nbsp; $($unconditional$)$&nbsp; PDF,&nbsp; for which,&nbsp; in the case of equally probable symbols,&nbsp; the following applies:
  
::<math>f_y(y) = {1}/{2} \cdot  \big [ f_{y \hspace{0.03cm}| \hspace{0.03cm}x=0 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=0 ) +
f_{y \hspace{0.03cm}| \hspace{0.03cm}x=1 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=1 )\big ]\hspace{0.05cm}.</math>

The two shaded areas&nbsp; $($each&nbsp; $\varepsilon)$&nbsp; mark decision errors under the condition
*$x=0$ &nbsp; &#8658; &nbsp; $\tilde{x} = +1$&nbsp; (blue),

*$x=1$ &nbsp; &#8658; &nbsp; $\tilde{x} = -1$&nbsp; (red),

when hard decisions are made:
::<math>z = \left\{ \begin{array}{c} 0\\
  1  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm if} \hspace{0.15cm} y > 0\hspace{0.05cm},\\
{\rm if} \hspace{0.15cm}y < 0\hspace{0.05cm}.\\ \end{array}</math>
  
For equally probable input symbols, the bit error probability&nbsp; ${\rm Pr}(z \ne x)$&nbsp; is then also&nbsp; $\varepsilon$. With the&nbsp; [[Theory_of_Stochastic_Signals/Gaussian_Distributed_Random_Variables#Exceedance_probability|"complementary Gaussian error function"]]&nbsp; ${\rm Q}(x)$&nbsp; the following holds:
  
::<math>\varepsilon = {\rm Q}(1/\sigma) = {\rm Q}(\sqrt{\rho}) =
\frac {1}{\sqrt{2\pi} } \cdot \int_{\sqrt{\rho}}^{\infty}{\rm e}^{- \alpha^2/2} \hspace{0.1cm}{\rm d}\alpha
\hspace{0.05cm}.</math>
  
where&nbsp; $\rho = 1/\sigma^2 = 2 \cdot E_{\rm S}/N_0$&nbsp; denotes the signal&ndash;to&ndash;noise ratio&nbsp; $\rm (SNR)$&nbsp; before the decision,&nbsp; using the following system quantities:
*$E_{\rm S}$&nbsp; is the signal energy per symbol&nbsp; (without coding: &nbsp;$E_{\rm S}=E_{\rm B}$,&nbsp; thus equal to the signal energy per bit),<br>
*$N_0$&nbsp; denotes the constant&nbsp; (one-sided)&nbsp; noise power density of the AWGN channel.<br><br>
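The relation&nbsp; $\varepsilon = {\rm Q}(1/\sigma)$&nbsp; can be checked with a few lines of simulation.&nbsp; The following Python sketch is only an illustration of the model above;&nbsp; the helper name&nbsp; <code>q_func</code>&nbsp; and the values&nbsp; $\sigma = 0.5$&nbsp; and&nbsp; $10^6$&nbsp; bits are freely chosen assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import erfc

def q_func(x):
    """Complementary Gaussian error function Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2))

rng = np.random.default_rng(seed=1)
sigma = 0.5                        # assumed noise rms value  =>  rho = 1/sigma^2 = 4
n_bits = 1_000_000

x = rng.integers(0, 2, n_bits)     # equally probable source bits 0/1
x_tilde = 1 - 2 * x                # unipolar -> bipolar:  0 -> +1,  1 -> -1
y = x_tilde + sigma * rng.standard_normal(n_bits)   # discrete-time AWGN output
z = np.where(y > 0, 0, 1)          # hard decision at G = 0

print("simulated error rate:", np.mean(z != x))
print("eps = Q(1/sigma)    :", q_func(1 / sigma))
</syntaxhighlight>

For this assumed&nbsp; $\sigma = 0.5$&nbsp; both printed values should be close to&nbsp; ${\rm Q}(2) \approx 2.28\%$.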
 
 
:<u>Notes:</u> &nbsp; The presented facts are clarified with the&nbsp; $($German language$)$&nbsp; SWF applet&nbsp; [[Applets:Fehlerwahrscheinlichkeit|"Symbolfehlerwahrscheinlichkeit von Digitalsystemen"]] &nbsp; <br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &rArr;  "Symbol error probability of digital systems"
 
== Binary Symmetric Channel – BSC ==
<br>
The AWGN channel model is not a digital channel model as we have presupposed in the paragraph&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Block_diagram_and_requirements|"Block diagram and prerequisites"]]&nbsp; for the introductory description of channel coding methods.&nbsp; However,&nbsp; if we take into account a hard decision,&nbsp; we arrive at the digital model&nbsp; "Binary Symmetric Channel"&nbsp; $\rm (BSC)$:<br>
[[File:EN_KC_T_1_2_S2_neu.png|right|frame|BSC model and relation with the AWGN model.&nbsp;<u>Note:</u>&nbsp; In the AWGN model,&nbsp; we have denoted the binary output as&nbsp; $z \in \{0, \hspace{0.05cm}1\}$.&nbsp; For the digital channel models&nbsp; (BSC, BEC, BSEC),&nbsp; we now denote the discrete-value output again by&nbsp; $y$.&nbsp; To avoid confusion,&nbsp; we now call the output signal of the AWGN model&nbsp; $y_{\rm A}$;&nbsp; the analog received signal is then&nbsp; $y_{\rm A} = \tilde{x} +n$.|class=fit]]
*We choose the two falsification probabilities&nbsp; ${\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x=0)$&nbsp; and&nbsp; ${\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x=1)$&nbsp; each to be
::<math>\varepsilon =  {\rm Q}(\sqrt{\rho})\hspace{0.05cm}.</math>
  
*Then the connection to the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input|$\text{AWGN}$]]&nbsp; channel model&nbsp; is established.

*The decision line is at&nbsp; $G = 0$,&nbsp; which also gives rise to the property&nbsp; "symmetrical".<br>
The BSC model provides a statistically independent error sequence and is thus suitable for modeling memoryless feedback-free channels,&nbsp; which are considered without exception in this book.<br>
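A minimal Python sketch of this model&nbsp; (the function name and the value&nbsp; $\varepsilon = 0.1$&nbsp; are assumptions for illustration only):&nbsp; every bit is falsified independently with probability&nbsp; $\varepsilon$,&nbsp; so the resulting error sequence contains no bursts.

<syntaxhighlight lang="python">
import numpy as np

def bsc(x, eps, rng):
    """Binary Symmetric Channel: flip each bit independently with probability eps."""
    errors = rng.random(x.shape) < eps          # statistically independent error sequence
    return np.bitwise_xor(x, errors.astype(int)), errors

rng = np.random.default_rng(seed=2)
x = rng.integers(0, 2, 20)                      # transmitted bits
y, e = bsc(x, eps=0.1, rng=rng)                 # received bits and error positions

print("sent    :", x)
print("received:", y)
print("errors  :", e.astype(int))               # memoryless, unlike the bundle error models discussed next
</syntaxhighlight>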
  
For the description of memory-affected channels,&nbsp; other models must be used,&nbsp; which are discussed in the fifth main chapter of the book&nbsp; "Digital Signal Transmission",&nbsp; e.g.&nbsp; "bundle error channels"&nbsp; according to the
*&nbsp; [[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_Gilbert-Elliott|$\text{Gilbert&ndash;Elliott model}$]],<br>

*&nbsp; [[Digital_Signal_Transmission/Burst_Error_Channels#Channel_model_according_to_McCullough|$\text{McCullough model}$]].<br><br>

{{GraueBox|TEXT=
[[File:P ID2342 KC T 1 2 S2b.png|right|frame|Statistically independent errors&nbsp; (left)&nbsp; and bundle errors&nbsp; (right)|class=fit]]
$\text{Example 1:}$&nbsp;  The figure shows
*the original image in the middle,

*statistically independent errors according to the BSC model&nbsp; (left),

*so-called&nbsp; "bundle errors"&nbsp; according to Gilbert&ndash;Elliott&nbsp; (right).


The bit error rate is&nbsp; $10\%$&nbsp; in both cases.

From the bundle error structure in the right graph,&nbsp; it can be seen that the image was transmitted line by line.}}<br>
  
 
== Binary Erasure Channel – BEC ==
 
<br>
The BSC model only provides the statements&nbsp; "correct"&nbsp; and&nbsp; "incorrect".&nbsp; However,&nbsp; some receivers&nbsp; &ndash; such as the so-called&nbsp;  [[Channel_Coding/Soft-in_Soft-Out_Decoder#Hard_Decision_vs._Soft_Decision|$\text{soft&ndash;in soft&ndash;out decoder}$]]&nbsp; &ndash; can also provide some information about the certainty of the decision,&nbsp; although they must of course be informed about which of their input values are certain and which are rather uncertain.<br>

The&nbsp; "Binary Erasure Channel"&nbsp; $\rm (BEC)$&nbsp; provides such information.&nbsp; On the basis of the graph you can see:
[[File:EN_KC_T_1_2_S3_neu.png|right|frame|BEC and connection with the AWGN model|class=fit]]
*The input alphabet of the BEC model is binary &nbsp; &#8658; &nbsp; $x \in \{0, \hspace{0.05cm}1\}$ and the output alphabet is ternary &nbsp; &#8658; &nbsp;  $y \in \{0, \hspace{0.05cm}1, \hspace{0.05cm}\rm E\}$.
  
*An&nbsp; "$\rm E$"&nbsp; denotes an uncertain decision.&nbsp; This new&nbsp; "symbol"&nbsp; stands for&nbsp; "Erasure".

*Bit errors are excluded by the BEC model per se.&nbsp; An uncertain decision&nbsp; $\rm (E)$&nbsp; is made with probability&nbsp; $\lambda$,&nbsp; while the probability for a correct&nbsp; (and at the same time certain)&nbsp; decision is&nbsp; $1-\lambda$.
*In the upper right,&nbsp; the relationship between BEC and AWGN  model is shown,&nbsp; with the erasure area&nbsp; $\rm (E)$&nbsp; highlighted in gray.  
  
*In contrast to the BSC model,&nbsp; there are now two decision lines&nbsp; $G_0 = G$ &nbsp;and&nbsp; $G_1 = -G$.&nbsp; It holds:

::<math>\lambda =  {\rm Q}\big[\sqrt{\rho} \cdot (1 - G)\big]\hspace{0.05cm}.</math>
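The following Python sketch illustrates how the BEC arises from the AWGN model with the two decision lines&nbsp; $\pm G$.&nbsp; The values&nbsp; $\sigma = 0.25$&nbsp; and&nbsp; $G = 0.5$&nbsp; are assumptions,&nbsp; chosen so large an SNR that bit errors are practically impossible:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2))

rng = np.random.default_rng(seed=3)
sigma, G, n = 0.25, 0.5, 1_000_000        # assumed example values  =>  rho = 16
rho = 1 / sigma**2

x = rng.integers(0, 2, n)
y_a = (1 - 2 * x) + sigma * rng.standard_normal(n)         # analog AWGN output y_A

# ternary output: 0 above +G, 1 below -G, erasure "E" (coded as -1) in between
y = np.where(y_a > G, 0, np.where(y_a < -G, 1, -1))

print("simulated erasure probability:", np.mean(y == -1))
print("lambda = Q[sqrt(rho)*(1-G)]  :", q_func(np.sqrt(rho) * (1 - G)))
print("simulated bit errors         :", np.mean((y != -1) & (y != x)))   # practically zero
</syntaxhighlight>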
  
We refer here again to the following applets:
*[[Applets:Fehlerwahrscheinlichkeit|"Symbol error probability of digital systems"]],

*[[Applets:Complementary_Gaussian_Error_Functions|"Complementary Gaussian Error Function"]].
 
 
   
 
   
 
== Binary Symmetric Error & Erasure Channel – BSEC ==
 
<br>
The BEC model is rather unrealistic &nbsp;$($error probability:&nbsp; $0)$&nbsp; and only an approximation for an extremely large signal&ndash;to&ndash;noise&ndash;power ratio&nbsp; $\rho$.

Stronger noise&nbsp; $($a smaller&nbsp; $\rho)$&nbsp; would be better served by the&nbsp; "Binary Symmetric Error & Erasure Channel"&nbsp; $\rm (BSEC)$&nbsp; with the two parameters
*Falsification probability &nbsp; $\varepsilon = {\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x=0)= {\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x=1)$,<br>
*Erasure probability &nbsp; $\lambda = {\rm Pr}(y = {\rm E}\hspace{0.05cm}|\hspace{0.05cm} x=0)= {\rm Pr}(y = {\rm E}\hspace{0.05cm}|\hspace{0.05cm} x=1).$
  
As with the BEC model,&nbsp; $x \in \{0, \hspace{0.05cm}1\}$ &nbsp;and&nbsp; $y \in \{0, \hspace{0.05cm}1, \hspace{0.05cm}\rm E\}$.<br>
{{GraueBox|TEXT=   
$\text{Example 2:}$&nbsp;  We consider the BSEC model with the two decision lines
[[File:EN_KC_T_1_2_S4_neu.png|right|frame|BSEC and connection with the AWGN model|class=fit]]
*$G_0 = G = 0.5$,
*$G_1 = -G = -0.5$.
  
The parameters&nbsp; $\varepsilon$&nbsp; and&nbsp; $\lambda$&nbsp; are fixed by the&nbsp; $\rm SNR$&nbsp; $\rho=1/\sigma^2$&nbsp; of the comparable AWGN channel.
  
*For&nbsp; $\sigma = 0.5$  &nbsp; &#8658; &nbsp; $\rho = 4$&nbsp; (assumed for the sketched PDF):
:$$\varepsilon = {\rm Q}\big[\sqrt{\rho} \cdot (1 + G)\big] = {\rm Q}(3) \approx 0.14\%\hspace{0.05cm},$$
:$${\it \lambda} =  {\rm Q}\big[\sqrt{\rho} \cdot (1 - G)\big] -  \varepsilon = {\rm Q}(1) - {\rm Q}(3) $$
:$$\Rightarrow \hspace{0.3cm}{\it \lambda}  \approx 15.87\% -  0.14\% = 15.73\%\hspace{0.05cm},$$
*For&nbsp; $\sigma = 0.25$  &nbsp; &#8658; &nbsp; $\rho = 16$:
  
:$$\varepsilon  = {\rm Q}(6) \approx 10^{-10}\hspace{0.05cm},$$
:$${\it \lambda} = {\rm Q}(2)  \approx 2.27\%\hspace{0.05cm}.$$
:Here,&nbsp; the BSEC model could be replaced by the simpler BEC variant without serious differences.}}<br>
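The numbers in Example 2 can be reproduced directly from the two formulas&nbsp; $\varepsilon = {\rm Q}\big[\sqrt{\rho} \cdot (1 + G)\big]$&nbsp; and&nbsp; $\lambda = {\rm Q}\big[\sqrt{\rho} \cdot (1 - G)\big] - \varepsilon$,&nbsp; for instance with the following small Python sketch&nbsp; (the helper name is an assumption):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2))

G = 0.5
for sigma in (0.5, 0.25):                       # the two cases of Example 2
    rho = 1 / sigma**2
    eps = q_func(np.sqrt(rho) * (1 + G))        # falsification probability
    lam = q_func(np.sqrt(rho) * (1 - G)) - eps  # erasure probability
    print(f"sigma = {sigma}:  eps = {eps:.3e},  lambda = {lam:.4f}")

# sigma = 0.5  ->  eps = Q(3) ~ 0.14 %,  lambda ~ 15.73 %
# sigma = 0.25 ->  eps = Q(6) (negligibly small),  lambda ~ 2.27 %
</syntaxhighlight>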
  
== Criteria&nbsp; &raquo;Maximum-a-posteriori&laquo;&nbsp; and&nbsp; &raquo;Maximum-Likelihood&laquo; ==
<br>
We now start from the model sketched below and apply the methods already described in chapter&nbsp; [[Digital_Signal_Transmission/Structure_of_the_Optimal_Receiver|"Structure of the optimal receiver"]]&nbsp; of the book&nbsp; "Digital Signal Transmission"&nbsp; to the decoding process.<br>
  
{{BlaueBox|TEXT=
$\text{The task of the channel decoder}$&nbsp;
[[File:EN_KC_T_1_2_S5.png|right|frame|Model for the description of MAP and ML decoding|class=fit]]
*is to determine the vector&nbsp; $\underline{v}$&nbsp; so
*that it matches the information word&nbsp; $\underline{u}$&nbsp; "as well as possible".
  
Formulated a little more precisely:
*Minimizing the&nbsp; &raquo;'''block error probability'''&laquo;&nbsp;
:$${\rm Pr(block \:error)} = {\rm Pr}(\underline{v} \ne \underline{u}) $$
:related to the vectors&nbsp; $\underline{u}$&nbsp; and&nbsp; $\underline{v}$&nbsp; of length&nbsp; $k$.
  
*Because of the unique assignment&nbsp; $\underline{x} = {\rm enc}(\underline{u})$&nbsp; by the channel encoder and&nbsp; $\underline{v} = {\rm enc}^{-1}(\underline{z})$&nbsp; by the channel decoder,&nbsp; the following applies in the same way:
::<math>{\rm Pr(block \:error)} = {\rm Pr}(\underline{z} \ne \underline{x})\hspace{0.05cm}. </math>}}
  
The channel decoder in the above model consists of two parts:
*The&nbsp; "code word estimator"&nbsp; determines from the received vector&nbsp; $\underline{y}$&nbsp; an estimate&nbsp; $\underline{z} \in \mathcal{C}$&nbsp; according to a given criterion.<br>
  
*From the&nbsp; (received)&nbsp; code word&nbsp; $\underline{z}$&nbsp; the information word&nbsp; $\underline{v}$&nbsp; is determined by&nbsp; "simple mapping";&nbsp; it should match&nbsp; $\underline{u}$&nbsp; as closely as possible.<br><br>

There are a total of four different variants for the code word estimator,&nbsp; namely
#the maximum&ndash;a&ndash;posteriori&nbsp; $\rm (MAP)$&nbsp; receiver for the entire code word&nbsp; $\underline{x}$,<br>
#the maximum&ndash;a&ndash;posteriori&nbsp; $\rm (MAP)$&nbsp; receiver for the individual code bits&nbsp; $x_i$,<br>
#the maximum&ndash;likelihood&nbsp; $\rm (ML)$&nbsp; receiver for the entire code word&nbsp; $\underline{x}$,<br>
#the maximum&ndash;likelihood&nbsp; $\rm (ML)$&nbsp; receiver for the individual code bits&nbsp; $x_i$.<br><br>
  
Their definitions follow in the next section.&nbsp; First of all,&nbsp; however,&nbsp; the essential distinguishing feature between&nbsp; $\rm MAP$&nbsp; and&nbsp; $\rm ML$:
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp;
*A MAP receiver,&nbsp; in contrast to the ML receiver,&nbsp; also considers different occurrence probabilities for the entire code word or for its individual bits.<br>
  
*If all code words&nbsp; $\underline{x}$&nbsp; and thus all bits&nbsp; $x_i$&nbsp; of the code words are equally probable,&nbsp; the simpler ML receiver is equivalent to the corresponding MAP receiver.}}<br><br>
== Definitions of the different optimal receivers ==
 
<br>
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; The&nbsp; "block&ndash;wise maximum&ndash;a&ndash;posteriori receiver"&nbsp; &ndash; for short: &nbsp;&raquo;<b>block&ndash;wise MAP</b>&laquo; &ndash;  decides among the&nbsp; $2^k$&nbsp; code words&nbsp; $\underline{x}_i \in \mathcal{C}$&nbsp; for the code word with the highest a&ndash;posteriori probability:
  
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.03cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \vert\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.</math>

*${\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}\vert \hspace{0.05cm} \underline{y} )$&nbsp; is the&nbsp; [[Theory_of_Stochastic_Signals/Statistical_Dependence_and_Independence#Conditional_Probability|"conditional probability"]]&nbsp; that&nbsp; $\underline{x}_i$&nbsp; was sent,&nbsp; when&nbsp; $\underline{y}$&nbsp; is received.}}<br>
  
We now try to simplify this decision rule step by step.&nbsp; The inference probability can be transformed according to&nbsp; "Bayes rule"&nbsp; as follows:
  
::<math>{\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}\vert \hspace{0.05cm} \underline{y} ) =
  \frac{{\rm Pr}( \underline{y} \hspace{0.08cm} |\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) \cdot {\rm Pr}( \underline{x}_{\hspace{0.03cm}i}  )}{{\rm Pr}( \underline{y}  )} \hspace{0.05cm}.</math>
*The probability&nbsp; ${\rm Pr}( \underline{y}) $&nbsp; is independent of&nbsp; $\underline{x}_i$&nbsp; and need not be considered in maximization.

*Moreover,&nbsp; if all&nbsp; $2^k$&nbsp; information words&nbsp; $\underline{u}_i$&nbsp; are equally probable,&nbsp; then the maximization can also dispense with the contribution&nbsp; ${\rm Pr}( \underline{x}_{\hspace{0.03cm}i} ) = 2^{-k}$&nbsp; in the numerator.<br>
  
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp;  The&nbsp; "block&ndash;wise maximum&ndash;likelihood receiver"&nbsp; &ndash; for short: &nbsp;&raquo;'''block&ndash;wise ML'''&laquo;&nbsp; &ndash; decides among the&nbsp; $2^k$&nbsp; allowed code words&nbsp; $\underline{x}_i \in \mathcal{C}$&nbsp; for the code word with the highest transition probability:
  
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} {\rm Pr}( \underline{y}  \hspace{0.05cm}\vert\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) \hspace{0.05cm}.</math>

*The conditional probability&nbsp; ${\rm Pr}( \underline{y}  \hspace{0.05cm}\vert\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} )$&nbsp; is now to be understood in the forward direction,&nbsp; namely as the probability that the vector&nbsp; $\underline{y}$&nbsp; is received when the code word&nbsp; $\underline{x}_i$&nbsp; has been sent.<br>
*In the following, we always use the maximum&ndash;likelihood receiver at the block level.&nbsp; Due to the assumed equally likely information words,&nbsp; this also always provides the best possible decision.}}<br>
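Both block-wise receivers can be written down in a few lines by brute force over all codewords.&nbsp; The following Python sketch uses an assumed toy code&nbsp; $($the $(3, 1)$ repetition code$)$&nbsp; and a BSC with falsification probability&nbsp; $\varepsilon$;&nbsp; all names and numerical values are chosen for illustration only:

<syntaxhighlight lang="python">
import numpy as np

codebook = [np.array([0, 0, 0]), np.array([1, 1, 1])]   # toy (3,1) repetition code

def pr_y_given_x(y, x, eps):
    """BSC transition probability Pr(y|x) = eps^dH * (1-eps)^(n-dH)."""
    d_h = np.sum(y != x)
    return eps**d_h * (1 - eps)**(len(y) - d_h)

def block_map(y, eps, priors):
    # block-wise MAP: maximize Pr(y|x_i) * Pr(x_i)   (Pr(y) is a common factor)
    scores = [pr_y_given_x(y, x, eps) * p for x, p in zip(codebook, priors)]
    return codebook[int(np.argmax(scores))]

def block_ml(y, eps):
    # block-wise ML: maximize Pr(y|x_i) only
    scores = [pr_y_given_x(y, x, eps) for x in codebook]
    return codebook[int(np.argmax(scores))]

y, eps = np.array([1, 0, 1]), 0.1
print("ML  decision               :", block_ml(y, eps))                 # -> [1 1 1]
print("MAP, equal priors          :", block_map(y, eps, [0.5, 0.5]))    # same as ML
print("MAP, strongly biased priors:", block_map(y, eps, [0.99, 0.01]))  # -> [0 0 0]: the prior outweighs the channel
</syntaxhighlight>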
  
However,&nbsp; the situation is different at the bit level.&nbsp; The goal of iterative decoding is precisely to estimate probabilities for all code bits&nbsp; $x_i \in \{0, 1\}$&nbsp; and to pass these on to the next stage.&nbsp; For this one needs a MAP receiver.<br>
  
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp;  The&nbsp; "bit&ndash;wise maximum&ndash;a&ndash;posteriori receiver" &ndash; for short:&nbsp; &raquo;'''bit&ndash;wise MAP'''&laquo;&nbsp; &ndash; selects for each individual code bit&nbsp; $x_i$&nbsp; the value &nbsp;$(0$ or $1)$&nbsp; with the highest inference probability&nbsp; ${\rm Pr}( {x}_{\hspace{0.03cm}i}\vert \hspace{0.05cm} \underline{y} )$:
  
::<math>\underline{z} = {\rm arg}\hspace{-0.1cm}{ \max_{ {x}_{\hspace{0.03cm}i} \hspace{0.03cm} \in \hspace{0.05cm} \{0, 1\} } \hspace{0.03cm} {\rm Pr}( {x}_{\hspace{0.03cm}i}\vert \hspace{0.05cm} \underline{y} ) \hspace{0.05cm} }.</math>}}<br>
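Continuing the same toy example&nbsp; (again only a sketch with assumed names,&nbsp; not a prescribed implementation),&nbsp; a bit-wise MAP receiver sums&nbsp; ${\rm Pr}(\underline{y}\hspace{0.05cm}|\hspace{0.05cm}\underline{x})\cdot{\rm Pr}(\underline{x})$&nbsp; over all codewords with&nbsp; $x_i = 0$&nbsp; resp.&nbsp; $x_i = 1$&nbsp; and selects the larger value for each bit:

<syntaxhighlight lang="python">
import numpy as np

codebook = [np.array([0, 0, 0]), np.array([1, 1, 1])]    # toy (3,1) repetition code
priors   = [0.5, 0.5]                                     # equally probable codewords

def pr_y_given_x(y, x, eps):
    d_h = np.sum(y != x)
    return eps**d_h * (1 - eps)**(len(y) - d_h)

def bitwise_map(y, eps):
    """Per-bit a-posteriori probabilities Pr(x_i = b | y) and the resulting decisions."""
    joint = np.array([pr_y_given_x(y, x, eps) * p for x, p in zip(codebook, priors)])
    decisions, app = [], []
    for i in range(len(y)):
        p0 = sum(joint[k] for k, x in enumerate(codebook) if x[i] == 0)
        p1 = sum(joint[k] for k, x in enumerate(codebook) if x[i] == 1)
        app.append(p1 / (p0 + p1))            # normalized a-posteriori probability of "1"
        decisions.append(int(p1 > p0))
    return np.array(decisions), np.array(app)

z, app = bitwise_map(np.array([1, 0, 1]), eps=0.1)
print("bit-wise MAP decisions:", z)     # -> [1 1 1]
print("Pr(x_i = 1 | y)       :", app)   # soft values that could be passed to a next stage
</syntaxhighlight>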
== Maximum-likelihood decision at the BSC channel ==
<br>
We now apply the maximum&ndash;likelihood criterion to the memoryless&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#Binary_Symmetric_Channel_.E2.80.93_BSC|$\text{BSC channel}$]].&nbsp; Then the following holds:
 
::<math>{\rm Pr}( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) =
\prod\limits_{l=1}^{n} {\rm Pr}( y_l  \hspace{0.05cm}|\hspace{0.05cm} x_l ) \hspace{0.5cm}{\rm with}\hspace{0.5cm}
{\rm Pr}( y_l  \hspace{0.05cm}|\hspace{0.05cm} x_l ) =
  \left\{ \begin{array}{c} 1 - \varepsilon\\
   \varepsilon  \end{array} \right.\quad
\begin{array}{*{1}c} {\rm if} \hspace{0.15cm} y_l = x_l \hspace{0.05cm},\\
{\rm if} \hspace{0.15cm}y_l \ne x_l\hspace{0.05cm}\\ \end{array}
\hspace{0.05cm}.</math>

::<math>\Rightarrow \hspace{0.3cm} {\rm Pr}( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) =
\varepsilon^{d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot
(1-\varepsilon)^{n-d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})}
\hspace{0.05cm}.</math>
  
{{BlaueBox|TEXT=
$\text{Proof:}$&nbsp;  This result can be justified as follows:
*The&nbsp; [[Channel_Coding/Objective_of_Channel_Coding#Important_definitions_for_block_coding|$\text{Hamming distance}$]]&nbsp; $d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$&nbsp; specifies the number of bit positions in which the two words&nbsp; $\underline{y}$&nbsp; and&nbsp; $\underline{x}_{\hspace{0.03cm}i}$,&nbsp; each with&nbsp; $n$&nbsp; binary elements,&nbsp; differ.&nbsp; Example: &nbsp; The Hamming distance between&nbsp; $\underline{y}= (0, 1, 0, 1, 0, 1, 1)$&nbsp; and&nbsp; $\underline{x}_{\hspace{0.03cm}i} = (0, 1, 0, 0, 1, 1, 1)$&nbsp; is&nbsp; $2$.<br>

*In&nbsp; $n - d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$&nbsp; positions the two vectors&nbsp; $\underline{y}$&nbsp; and&nbsp; $\underline{x}_{\hspace{0.03cm}i}$&nbsp; thus do not differ.&nbsp;  In the above example,&nbsp; five of the&nbsp; $n = 7$&nbsp; bits are identical.
  
*Finally,&nbsp; one arrives at the above equation by substituting the falsification probability&nbsp; $\varepsilon$&nbsp; or its complement&nbsp; $1-\varepsilon$.}}<br>
  
The approach to maximum&ndash;likelihood decoding is to find the code word&nbsp; $\underline{x}_{\hspace{0.03cm}i}$&nbsp; that maximizes the transition probability&nbsp; ${\rm Pr}( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} )$:
  
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
\left [
\varepsilon^{d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot
(1-\varepsilon)^{n-d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})}
\right ] \hspace{0.05cm}.</math>
  
Since the logarithm is a monotonically increasing function,&nbsp; the same result is obtained after the following maximization:
  
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
L(\underline{x}_{\hspace{0.03cm}i})\hspace{0.5cm} {\rm with}\hspace{0.5cm} L(\underline{x}_{\hspace{0.03cm}i}) = \ln \left [
\varepsilon^{d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot
(1-\varepsilon)^{n-d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})}
\right ] </math>

::<math> \Rightarrow \hspace{0.3cm} L(\underline{x}_{\hspace{0.03cm}i}) = d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i}) \cdot \ln
\hspace{0.05cm} \varepsilon + \big [n -d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\big ] \cdot \ln
\hspace{0.05cm} (1- \varepsilon) = \ln \frac{\varepsilon}{1-\varepsilon} \cdot d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i}) + n \cdot \ln
\hspace{0.05cm} (1- \varepsilon)
\hspace{0.05cm}.</math>
  
Here we have to take into account:
*The second term of this equation is independent of&nbsp; $\underline{x}_{\hspace{0.03cm}i}$&nbsp; and need not be considered further for maximization.

*Also,&nbsp; the factor in front of the Hamming distance is the same for all&nbsp; $\underline{x}_{\hspace{0.03cm}i}$.

*Since&nbsp; $\ln \, {\varepsilon}/(1-\varepsilon)$&nbsp; is negative&nbsp; (at least for&nbsp; $\varepsilon <0.5$,&nbsp; which can be assumed without much restriction),&nbsp; maximization becomes minimization,&nbsp; and the following final result is obtained:<br>
  
{{BlaueBox|TEXT=
$\text{Maximum&ndash;likelihood decision at the BSC channel:}$&nbsp;  
  
Choose from the&nbsp; $2^k$&nbsp; allowed code words&nbsp; $\underline{x}_{\hspace{0.03cm}i}$&nbsp; the one with the&nbsp; &raquo;'''least Hamming distance'''&laquo;&nbsp; $d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$&nbsp; to the received vector&nbsp; $\underline{y}$:
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm}
d_{\rm H}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}, \hspace{0.2cm}
\underline{y} \in {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.</math>}}
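A direct Python sketch of this rule&nbsp; (the&nbsp; $(5, 1)$&nbsp; repetition code and the received word are assumptions for illustration only):

<syntaxhighlight lang="python">
import numpy as np

def ml_bsc_decoder(y, codebook):
    """ML decision at the BSC: choose the codeword with the least Hamming distance to y."""
    distances = [int(np.sum(y != x)) for x in codebook]      # d_H(y, x_i)
    return codebook[int(np.argmin(distances))], distances

codebook = [np.zeros(5, dtype=int), np.ones(5, dtype=int)]   # toy (5,1) repetition code
y = np.array([1, 0, 1, 1, 0])                                # received word, two bits falsified
z, d = ml_bsc_decoder(y, codebook)
print("ML decision:", z, " distances:", d)                   # -> [1 1 1 1 1]  (d_H = 2 beats d_H = 3)
</syntaxhighlight>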
  
Applications of ML/BSC decision can be found in the following sections:
*[[Channel_Coding/Examples_of_Binary_Block_Codes#Single_Parity-check_Codes|$\text{SPC}$]]&nbsp; $($"single parity&ndash;check code"$)$.<br>
  
*[[Channel_Coding/Examples_of_Binary_Block_Codes#Repetition_Codes|$\text{RC}$]]&nbsp; $($"repetition code"$)$.<br>
== Maximum-likelihood decision at the AWGN channel ==
 
<br>
 
The AWGN model for an&nbsp; $(n, k)$&nbsp; block code differs from the&nbsp; [[Channel_Coding/Channel_Models_and_Decision_Structures#AWGN_channel_at_binary_input| $\text{model}$]]&nbsp; in the first section of this chapter in that for&nbsp; $x$, &nbsp;$\tilde{x}$&nbsp; and &nbsp;$y$&nbsp; the corresponding vectors&nbsp; $\underline{x}$, &nbsp;$\underline{\tilde{x}}$&nbsp; and &nbsp;$\underline{y}$&nbsp; must now be used,&nbsp; each consisting of&nbsp; $n$&nbsp; elements.

The steps to derive the maximum&ndash;likelihood decision for AWGN are given below only in bullet form:
*The AWGN channel is memoryless per se&nbsp; (the&nbsp; "White"&nbsp; in the name stands for this).&nbsp; Thus,&nbsp; for the conditional probability density function&nbsp; $\rm (PDF)$,&nbsp; it can be written:
  
 
::<math>f( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}} ) =
\prod\limits_{l=1}^{n} f(y_l \hspace{0.03cm}| \hspace{0.03cm}\tilde{x}_l )
\hspace{0.05cm}.</math>
  
*The conditional PDF is&nbsp; "Gaussian"&nbsp; for each individual code element&nbsp; $(l = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, n)$.&nbsp; Thus,&nbsp; the entire PDF also satisfies a&nbsp; (one-dimensional)&nbsp; Gaussian distribution:
  
 
::<math>f({y_l \hspace{0.03cm}| \hspace{0.03cm}\tilde{x}_l }) =
\frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot \exp \left [ -  \frac {(y_l - \tilde{x}_l)^2}{2\sigma^2} \right ]\hspace{0.3cm}
\Rightarrow \hspace{0.3cm}  f( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}} ) =
\frac {1}{(2\pi)^{n/2} \cdot \sigma^n } \cdot \exp \left [ -  \frac {1}{2\sigma^2} \cdot
\sum_{l=1}^{n} \hspace{0.2cm}(y_l - \tilde{x}_l)^2
  \right ] \hspace{0.05cm}.</math>
  
 
*Since&nbsp; $\underline{y}$&nbsp; is now value-continuous rather than value-discrete as in the BSC model,&nbsp; according to the maximum-likelihood decision rule&nbsp; probability densities&nbsp; must now be examined rather than probabilities.&nbsp; The optimal result is:
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
f( \underline{y}  \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}}_i )\hspace{0.05cm}, \hspace{0.5cm}
\underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.</math>
  
*In algebra,&nbsp; the distance between two points&nbsp; $\underline{y}$&nbsp; and&nbsp; $\underline{\tilde{x}}$&nbsp; in&nbsp; $n$&ndash;dimensional space is called the&nbsp; [https://en.wikipedia.org/wiki/Euclidean_distance $\text{Euclidean distance}$],&nbsp; named after the Greek mathematician&nbsp; [https://en.wikipedia.org/wiki/Euclid $\text{Euclid}$]&nbsp; who lived in the third century BC:
  
 
::<math>d_{\rm E}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{\tilde{x}}) =
\sqrt{\sum_{l=1}^{n} \hspace{0.2cm}(y_l - \tilde{x}_l)^2}
\hspace{0.05cm}.</math>
  
*Thus,&nbsp; taking into account that the first factor of the PDF&nbsp; $f( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}_i} )$&nbsp; is constant,&nbsp; the maximum-likelihood decision rule at the AWGN channel reads for any block code:
  
 
::<math>\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm}
\exp \left [ - \frac{d_{\rm E}^2(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{\tilde{x}}_i)}{2\sigma^2} \right ]\hspace{0.05cm}, \hspace{0.5cm}
\underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.</math>
  
*After a few more intermediate steps, you arrive at the result:<br>
  
{{BlaueBox|TEXT= 
$\text{Maximum&ndash;likelihood decision at the AWGN channel:}$&nbsp;
Choose from the&nbsp; $2^k$&nbsp; allowed code words&nbsp; $\underline{x}_{\hspace{0.03cm}i}$&nbsp; the one with the&nbsp; &raquo;'''smallest Euclidean distance'''&laquo;&nbsp; $d_{\rm E}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$&nbsp; to the received vector &nbsp;$\underline{y}$:
::<math>\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm}
d_{\rm E}(\underline{y}  \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}, \hspace{0.8cm}
\underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.</math>}}
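Analogously to the BSC case,&nbsp; the rule can be sketched in Python&nbsp; $($the toy&nbsp; $(3, 1)$&nbsp; repetition code and the soft received values are assumptions for illustration$)$;&nbsp; note that a bit-wise hard decision of the same received vector would not even yield a valid codeword:

<syntaxhighlight lang="python">
import numpy as np

def ml_awgn_decoder(y, codebook):
    """ML decision at the AWGN channel: codeword with the smallest Euclidean distance to y."""
    tx = [1 - 2 * x for x in codebook]                           # 0/1 codewords -> bipolar +1/-1
    d_e = [float(np.sqrt(np.sum((y - xt) ** 2))) for xt in tx]   # d_E(y, x~_i)
    return codebook[int(np.argmin(d_e))], d_e

codebook = [np.zeros(3, dtype=int), np.ones(3, dtype=int)]       # toy (3,1) repetition code
y = np.array([0.9, -0.2, 0.3])                                   # noisy bipolar samples
z, d_e = ml_awgn_decoder(y, codebook)
print("ML decision:", z)        # -> [0 0 0]: y is closer to (+1,+1,+1) than to (-1,-1,-1)
print("d_E values :", d_e)      # a bit-wise hard decision would give (0, 1, 0), not a codeword
</syntaxhighlight>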
  
==Exercises for the chapter==
 
<br>
 
[[Aufgaben:Exercise_1.3:_Channel_Models_BSC_-_BEC_-_BSEC_-_AWGN|Exercise 1.3: Channel Models&nbsp; "BSC - BEC - BSEC - AWGN"]]
  
[[Aufgaben:Exercise_1.4:_Maximum_Likelihood_Decision|Exercise 1.4: Maximum Likelihood Decision]]
  
 
{{Display}}
 
{{Display}}

Latest revision as of 18:57, 21 November 2022


AWGN channel at binary input


We consider the well-known discrete-time  $\text{AWGN}$  channel model according to the left graph:

  • The binary and discrete-time message signal  $x$  takes the values  $0$  and  $1$  with equal probability:
$${\rm Pr}(x = 0) ={\rm Pr}(x = 1) = 1/2.$$
  • We now transform the unipolar variable  $x \in \{0,\ 1 \}$  into the bipolar variable  $\tilde{x} \in \{+1, -1 \}$  more suitable for signal transmission.  Then holds:
Probability density function  $\rm (PDF)$  of the AWGN channel
$${\rm Pr}(\tilde{x} =+1) ={\rm Pr}(\tilde{x} =-1) = 1/2.$$
  • The transmission is affected by  "additive white Gaussian noise"  $\rm (AWGN)$  $n$  with  (normalized)  noise power  $\sigma^2 = N_0/E_{\rm B}$.  The root mean square  (rms value)  of the Gaussian PDF is  $\sigma$.
  • Because of the Gaussian PDF,  the output signal  $y = \tilde{x} +n$  can take on any real value in the range  $-\infty$  to  $+\infty$.  That means:
  • The AWGN output value  $y$  is therefore discrete in time like  $x$   $($resp. $\tilde{x})$  but in contrast to the latter it is continuous in value.


The graph on the right shows  $($in blue resp. red color$)$  the conditional probability density functions:

\[f_{y \hspace{0.03cm}| \hspace{0.03cm}x=0 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=0 )\hspace{-0.1cm} = \hspace{-0.1cm} \frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e}^{ - (y-1)^2/(2\sigma^2) }\hspace{0.05cm},\]
\[f_{y \hspace{0.03cm}| \hspace{0.03cm}x=1 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=1 )\hspace{-0.1cm} = \hspace{-0.1cm} \frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot {\rm e}^{ - (y+1)^2/(2\sigma^2) }\hspace{0.05cm}.\]

Not shown is the total  $($unconditional$)$  PDF,  for which applies in the case of equally probable symbols:

\[f_y(y) = {1}/{2} \cdot \big [ f_{y \hspace{0.03cm}| \hspace{0.03cm}x=0 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=0 ) + f_{y \hspace{0.03cm}| \hspace{0.03cm}x=1 } \hspace{0.05cm} (y \hspace{0.05cm}| \hspace{0.05cm}x=1 )\big ]\hspace{0.05cm}.\]
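
These densities can be evaluated directly.  The following Python sketch  $($a minimal illustration with an arbitrarily chosen  $\sigma)$  computes the two conditional PDFs and the total PDF on a fine grid and checks numerically that  $f_y(y)$  has unit area:

import numpy as np

sigma = 0.5                                  # example noise rms value
y = np.linspace(-4, 4, 2001)                 # value-continuous output range (truncated)

f_y_given_0 = np.exp(-(y - 1)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
f_y_given_1 = np.exp(-(y + 1)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
f_y = 0.5 * (f_y_given_0 + f_y_given_1)      # total PDF for equally probable symbols

print("area under f_y(y):", np.sum(f_y) * (y[1] - y[0]))   # close to 1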

The two shaded areas  $($each  $\varepsilon)$  mark decision errors under the condition 

  • $x=0$   ⇒   $\tilde{x} = +1$  (blue),
  • $x=1$   ⇒   $\tilde{x} = -1$  (red),


when hard decisions are made:

\[z = \left\{ \begin{array}{c} 0\\ 1 \end{array} \right.\quad \begin{array}{*{1}c} {\rm if} \hspace{0.15cm} y > 0\hspace{0.05cm},\\ {\rm if} \hspace{0.15cm}y < 0\hspace{0.05cm}.\\ \end{array}\]

For equally probable input symbols, the bit error probability  ${\rm Pr}(z \ne x)$  is then also  $\varepsilon$. With the  "complementary Gaussian error function"  ${\rm Q}(x)$  the following holds:

\[\varepsilon = {\rm Q}(1/\sigma) = {\rm Q}(\sqrt{\rho}) = \frac {1}{\sqrt{2\pi} } \cdot \int_{\sqrt{\rho}}^{\infty}{\rm e}^{- \alpha^2/2} \hspace{0.1cm}{\rm d}\alpha \hspace{0.05cm}.\]

where  $\rho = 1/\sigma^2 = 2 \cdot E_{\rm S}/N_0$  denotes the signal–to–noise ratio  $\rm (SNR)$  before the decision,  using the following system quantities:

  • $E_{\rm S}$  is the signal energy per symbol  (without coding:  $E_{\rm S}=E_{\rm B}$,  thus equal to the signal energy per bit),
  • $N_0$  denotes the constant  (one-sided)  noise power density of the AWGN channel.
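
The relation  $\varepsilon = {\rm Q}(\sqrt{\rho})$  can be checked numerically.  The following Python sketch  $($the value of  $\sigma$  is chosen arbitrarily$)$  evaluates the Q function via the complementary error function and compares the result with a simulated hard decision at  $G = 0$:

import math
import numpy as np

def Q(x):
    # complementary Gaussian error function
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma = 0.5                                   # example noise rms value
rho = 1 / sigma**2                            # SNR before the decision
eps_analytic = Q(math.sqrt(rho))

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 100_000)               # binary source symbols
xt = 1 - 2*x                                  # unipolar -> bipolar:  0 -> +1,  1 -> -1
y = xt + sigma * rng.standard_normal(x.size)  # AWGN channel output
z = (y < 0).astype(int)                       # hard decision at G = 0
eps_simulated = np.mean(z != x)

print(f"Q(sqrt(rho)) = {eps_analytic:.4f},  simulated: {eps_simulated:.4f}")

For  $\sigma = 0.5$,  both values are close to  ${\rm Q}(2) \approx 2.28\%$.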

Notes:   The presented facts are clarified with the  $($German language$)$  SWF applet  "Symbolfehlerwahrscheinlichkeit von Digitalsystemen"   ⇒   "Symbol error probability of digital systems".

Binary Symmetric Channel – BSC


The AWGN channel model is not a digital channel model as we have presupposed in the section  "Block diagram and prerequisites"  for the introductory description of channel coding methods.  However,  if we take into account a hard decision,  we arrive at the digital model  "Binary Symmetric Channel"  $\rm (BSC)$:

BSC model and relation with the AWGN model.

Note:  In the AWGN model,  we have denoted the binary output as  $z \in \{0, \hspace{0.05cm}1\}$.  For the digital channel models  (BSC, BEC, BSEC),  we now denote the discrete-value output again by  $y$.  To avoid confusion,  we now denote the output signal of the AWGN model by  $y_{\rm A}$,  and the analog received value is then  $y_{\rm A} = \tilde{x} +n$.
  • We choose the falsification probabilities  ${\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x=0)$  resp.  ${\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x=1)$  to be
\[\varepsilon = {\rm Q}(\sqrt{\rho})\hspace{0.05cm}.\]
  • Then the connection to the  $\text{AWGN}$  channel model  is established. 
  • The decision line is at  $G = 0$,  which also gives rise to the property  "symmetrical".


The BSC model provides a statistically independent error sequence and is thus suitable for modeling memoryless feedback-free channels,  which are considered without exception in this book.
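
Such a memoryless BSC can be reproduced in a few lines.  A minimal Python sketch  $($the falsification probability  $\varepsilon = 0.1$  is chosen arbitrarily,  matching the bit error rate of Example 1 below$)$:

import numpy as np

def bsc(x, epsilon, rng):
    # Binary Symmetric Channel:  flip each bit independently with probability epsilon
    errors = rng.random(x.size) < epsilon
    return np.bitwise_xor(x, errors.astype(x.dtype))

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 100_000)              # transmitted bits
y = bsc(x, epsilon=0.1, rng=rng)             # received bits
print("relative error frequency:", np.mean(y != x))   # close to epsilon = 0.1

Since each bit is falsified independently,  the generated error sequence is statistically independent,  as required for the memoryless channels considered in this book.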

For the description of memory-affected channels,  other models must be used,  which are discussed in the fifth main chapter of the book "Digital Signal Transmission",  e.g.  "bundle error channels"  according to the Gilbert–Elliott model.

Statistically independent errors  (left)  and bundle errors  (right)

$\text{Example 1:}$  The figure shows

  • the original image in the middle,
  • statistically independent errors according to the BSC model  (left),
  • so-called  "bundle errors"  according to Gilbert–Elliott  (right).


The bit error rate is $10\%$ in both cases.

From the bundle error structure in the right graph,  it can be seen that the image was transmitted line by line.


Binary Erasure Channel – BEC


The BSC model only provides the statements  "correct"  and  "incorrect".  However,  some receivers  – such as the so-called  $\text{soft–in soft–out decoder}$  – can also provide some information about the certainty of the decision,  although they must of course be informed about which of their input values are certain and which are rather uncertain.

The  "Binary Erasure Channel"  $\rm (BEC)$  provides such information.  On the basis of the graph you can see:

BEC and connection with the AWGN model
  • The input alphabet of the BEC model is binary   ⇒   $x ∈ \{0, \hspace{0.05cm}1\}$ and the output alphabet is ternary   ⇒   $y ∈ \{0, \hspace{0.05cm}1, \hspace{0.05cm}\rm E\}$.
  • An  "$\rm E$"  denotes an uncertain decision.  This new  "symbol"  stands for  "Erasure".
  • Bit errors are excluded by the BEC model per se.  An unsafe decision  $\rm (E)$  is made with probability  $\lambda$,  while the probability for a correct  (and at the same time safe)  decision  is  $1-\lambda$ .
  • In the upper right,  the relationship between BEC and AWGN model is shown,  with the erasure area  $\rm (E)$  highlighted in gray.
  • In contrast to the BSC model,  there are now two decision lines  $G_0 = G$  and  $G_1 = -G$.  It holds:
\[\lambda = {\rm Q}\big[\sqrt{\rho} \cdot (1 - G)\big]\hspace{0.05cm}.\]
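
The relationship can also be checked by simulation:  all values of the analog output  $y_{\rm A}$  between the two decision lines  $\pm G$  are mapped to the erasure symbol  $\rm E$.  A minimal Python sketch  $($with arbitrarily chosen  $\sigma$  and  $G$;  for an ideal BEC,  the falsification probability would additionally have to be negligible$)$:

import math
import numpy as np

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma, G = 0.25, 0.5                         # example values
rho = 1 / sigma**2
rng = np.random.default_rng(3)
x = rng.integers(0, 2, 200_000)
xt = 1 - 2*x                                 # bipolar transmit symbols
yA = xt + sigma * rng.standard_normal(x.size)

erasures = np.abs(yA) < G                    # ternary decision:  |y_A| < G   ->   "E"
print("simulated erasure rate:", np.mean(erasures))
print("lambda = Q[sqrt(rho)*(1-G)]:", Q(math.sqrt(rho) * (1 - G)))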



Binary Symmetric Error & Erasure Channel – BSEC


The BEC model is rather unrealistic  $($error probability:  $0)$  and only an approximation for an extremely large signal–to–noise–power ratio  $\rho$.

Stronger noise  $($a smaller  $\rho)$  would be better served by the  "Binary Symmetric Error & Erasure Channel"  $\rm (BSEC)$  with the two parameters

  • Falsification probability   $\varepsilon = {\rm Pr}(y = 1\hspace{0.05cm}|\hspace{0.05cm} x=0)= {\rm Pr}(y = 0\hspace{0.05cm}|\hspace{0.05cm} x=1)$,
  • Erasure probability   $\lambda = {\rm Pr}(y = {\rm E}\hspace{0.05cm}|\hspace{0.05cm} x=0)= {\rm Pr}(y = {\rm E}\hspace{0.05cm}|\hspace{0.05cm} x=1).$


As with the BEC model,  $x ∈ \{0, \hspace{0.05cm}1\}$  and  $y ∈ \{0, \hspace{0.05cm}1, \hspace{0.05cm}\rm E\}$.

$\text{Example 2:}$  We consider the BSEC model with the two decision lines 

BSEC and connection with the AWGN model
  • $G_0 = G = 0.5$,
  • $G_1 = -G = -0.5$.


The parameters  $\varepsilon$  and  $\lambda$  are fixed by the  $\rm SNR$  $\rho=1/\sigma^2$  of the comparable AWGN channel.

  • For  $\sigma = 0.5$   ⇒   $\rho = 4$  (assumed for the sketched PDF):
$$\varepsilon = {\rm Q}\big[\sqrt{\rho} \cdot (1 + G)\big] = {\rm Q}(3) \approx 0.14\%\hspace{0.05cm},$$
$${\it \lambda} = {\rm Q}\big[\sqrt{\rho} \cdot (1 - G)\big] - \varepsilon = {\rm Q}(1) - {\rm Q}(3) $$
$$\Rightarrow \hspace{0.3cm}{\it \lambda} \approx 15.87\% - 0.14\% = 15.73\%\hspace{0.05cm},$$
  • For  $\sigma = 0.25$   ⇒   $\rho = 16$:
$$\varepsilon = {\rm Q}(6) \approx 10^{-9}\hspace{0.05cm},$$
$${\it \lambda} = {\rm Q}(2) \approx 2.27\%\hspace{0.05cm}.$$
Here,  the BSEC model could be replaced by the simpler BEC variant without serious differences.
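
The numerical values of this example can be reproduced directly from the Q function,  for example with the following short Python sketch:

import math

def Q(x):
    # complementary Gaussian error function
    return 0.5 * math.erfc(x / math.sqrt(2))

G = 0.5
for sigma in (0.5, 0.25):
    rho = 1 / sigma**2
    eps = Q(math.sqrt(rho) * (1 + G))        # falsification probability
    lam = Q(math.sqrt(rho) * (1 - G)) - eps  # erasure probability
    print(f"sigma = {sigma}:  epsilon = {eps:.2e},  lambda = {lam:.4f}")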


Criteria  »Maximum-a-posteriori«  and  »Maximum-Likelihood«


We now start from the model sketched below and apply the methods already described in chapter  "Structure of the optimal receiver"  of the book  "Digital Signal Transmission"  to the decoding process.

Model for the description of MAP and ML decoding

$\text{The task of the channel decoder}$  is to determine the vector  $\underline{v}$  so that it matches the information word  $\underline{u}$  "as well as possible".


Formulated a little more precisely:

  • Minimizing the  »block error probability« 
$${\rm Pr(block \:error)} = {\rm Pr}(\underline{v} \ne \underline{u}) $$
related to the vectors  $\underline{u}$  and  $\underline{v}$  of length  $k$ .
  • Because of the unique assignment  $\underline{x} = {\rm enc}(\underline{u})$  by the channel encoder and  $\underline{v} = {\rm enc}^{-1}(\underline{z})$  by the channel decoder,  the following applies in the same way:
\[{\rm Pr(block \:error)} = {\rm Pr}(\underline{z} \ne \underline{x})\hspace{0.05cm}. \]


The channel decoder in the above model consists of two parts:

  • The  "code word estimator"  determines from the received vector  $\underline{y}$  an estimate  $\underline{z} \in \mathcal{C}$  according to a given criterion.
  • From the  (received)  code word  $\underline{z}$  the information word  $\underline{v}$  is determined by  "simple mapping".  This should match  $\underline{u}$.

There are a total of four different variants for the code word estimator, viz.

  1. the maximum–a–posteriori  $\rm (MAP)$  receiver for the entire code word  $\underline{x}$,
  2. the maximum–a–posteriori  $\rm (MAP)$  receiver for the individual code bits  $x_i$,
  3. the maximum–likelihood  $\rm (ML)$  receiver for the entire code word  $\underline{x}$,
  4. the maximum–likelihood  $\rm (ML)$  receiver for the individual code bits  $x_i$.

Their definitions follow in the next section.  First of all,  however,  the essential distinguishing feature between  $\rm MAP$  and  $\rm ML$:

$\text{Conclusion:}$ 

  • A MAP receiver,  in contrast to the ML receiver,  also considers different occurrence probabilities for the entire code word or for its individual bits.
  • If all code words  $\underline{x}$  and thus all bits  $x_i$  of the code words are equally probable,  the simpler ML receiver is equivalent to the corresponding MAP receiver.



Definitions of the different optimal receivers


$\text{Definition:}$  The  "block–wise maximum–a–posteriori receiver"  – for short:  »block–wise MAP« – decides among the  $2^k$  code words  $\underline{x}_i \in \mathcal{C}$  for the code word with the highest a–posteriori probability:

\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.03cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \vert\hspace{0.05cm} \underline{y} ) \hspace{0.05cm}.\]
  • ${\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}\vert \hspace{0.05cm} \underline{y} )$  is the  "conditional probability"  that  $\underline{x}_i$  was sent,  when  $\underline{y}$  is received.


We now try to simplify this decision rule step by step.  The inference probability can be transformed according to  "Bayes rule"  as follows:

\[{\rm Pr}( \underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm}\vert \hspace{0.05cm} \underline{y} ) = \frac{{\rm Pr}( \underline{y} \hspace{0.08cm} |\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) \cdot {\rm Pr}( \underline{x}_{\hspace{0.03cm}i} )}{{\rm Pr}( \underline{y} )} \hspace{0.05cm}.\]
  • The probability  ${\rm Pr}( \underline{y}) $  is independent of  $\underline{x}_i$  and need not be considered in maximization.
  • Moreover,  if all  $2^k$  information words  $\underline{u}_i$  are equally probable,  then the maximization can also dispense with the contribution  ${\rm Pr}( \underline{x}_{\hspace{0.03cm}i} ) = 2^{-k}$  in the numerator.


$\text{Definition:}$  The  "block–wise maximum–likelihood receiver"  – for short:  »block–wise ML«  – decides among the  $2^k$  allowed code words  $\underline{x}_i \in \mathcal{C}$  for the code word with the highest transition probability:

\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} {\rm Pr}( \underline{y} \hspace{0.05cm}\vert\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) \hspace{0.05cm}.\]
  • The conditional probability  ${\rm Pr}( \underline{y} \hspace{0.05cm}\vert\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} )$  is now to be understood in the forward direction,  namely as the probability that the vector  $\underline{y}$  is received when the code word  $\underline{x}_i$  has been sent.
  • In the following, we always use the maximum–likelihood receiver at the block level.  Due to the assumed equally likely information words,  this also always provides the best possible decision.


However,  the situation is different on the bit level.  The goal of iterative decoding is precisely to estimate probabilities for all code bits  $x_i \in \{0, 1\}$  and to pass these on to the next stage.  For this,  one needs a MAP receiver.

$\text{Definition:}$  The  "bit–wise maximum–a–posteriori receiver" – for short:  »bit–wise MAP«  – selects for each individual code bit  $x_i$  the value  $(0$ or $1)$  with the highest inference probability  ${\rm Pr}( {x}_{\hspace{0.03cm}i}\vert \hspace{0.05cm} \underline{y} )$:

\[\underline{z} = {\rm arg}\hspace{-0.1cm}{ \max_{ {x}_{\hspace{0.03cm}i} \hspace{0.03cm} \in \hspace{0.05cm} \{0, 1\} } \hspace{0.03cm} {\rm Pr}( {x}_{\hspace{0.03cm}i}\vert \hspace{0.05cm} \underline{y} ) \hspace{0.05cm} }.\]
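
For a small example code,  the bit-wise MAP rule can be written out directly:  for each bit position,  one sums the likelihoods of all code words carrying a  $0$  resp. a  $1$  at this position.  The following Python sketch assumes an AWGN channel with bipolar transmission and equally probable code words;  the  $(3, 2)$  single parity-check code and the received vector are chosen only for illustration:

import numpy as np

C = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])   # (3, 2) single parity-check code
sigma = 0.7                                                  # example noise rms value
xt = 1 - 2*C                                                 # bipolar code words
y = np.array([0.9, -0.2, 0.3])                               # example received vector

# likelihoods f(y | x_i);  common factors cancel in the ratio below
like = np.exp(-np.sum((y - xt)**2, axis=1) / (2 * sigma**2))

for i in range(C.shape[1]):
    p1 = like[C[:, i] == 1].sum()            # sum over code words with x_i = 1
    p0 = like[C[:, i] == 0].sum()            # sum over code words with x_i = 0
    prob1 = p1 / (p0 + p1)                   # Pr(x_i = 1 | y) for equally probable code words
    print(f"bit {i}:  Pr(x_i = 1 | y) = {prob1:.3f}   ->   decision: {int(prob1 > 0.5)}")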


Maximum-likelihood decision at the BSC channel


We now apply the maximum–likelihood criterion to the memoryless  $\text{BSC channel}$.  Then holds:

\[{\rm Pr}( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) = \prod\limits_{l=1}^{n} {\rm Pr}( y_l \hspace{0.05cm}|\hspace{0.05cm} x_l ) \hspace{0.5cm}{\rm with}\hspace{0.5cm} {\rm Pr}( y_l \hspace{0.05cm}|\hspace{0.05cm} x_l ) = \left\{ \begin{array}{c} 1 - \varepsilon\\ \varepsilon \end{array} \right.\quad \begin{array}{*{1}c} {\rm if} \hspace{0.15cm} y_l = x_l \hspace{0.05cm},\\ {\rm if} \hspace{0.15cm}y_l \ne x_l\hspace{0.05cm}.\\ \end{array} \hspace{0.05cm}.\]
\[\Rightarrow \hspace{0.3cm} {\rm Pr}( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} ) = \varepsilon^{d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot (1-\varepsilon)^{n-d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \hspace{0.05cm}.\]

$\text{Proof:}$  This result can be justified as follows:

  • The  $\text{Hamming distance}$  $d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$  specifies the number of bit positions in which the words  $\underline{y}$  and  $\underline{x}_{\hspace{0.03cm}i}$,  each with  $n$  binary elements,  differ.  Example:   The Hamming distance between  $\underline{y}= (0, 1, 0, 1, 0, 1, 1)$  and  $\underline{x}_{\hspace{0.03cm}i} = (0, 1, 0, 0, 1, 1, 1)$  is  $2$.
  • In  $n - d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$  positions thus the two vectors  $\underline{y}$  and  $\underline{x}_{\hspace{0.03cm}i}$  do not differ.  In the above example,  five of the  $n = 7$  bits are identical.
  • Finally,  one arrives at the above equation by substituting the falsification probability  $\varepsilon$  or its complement  $1-\varepsilon$.


The approach to maximum–likelihood decoding is to find the code word  $\underline{x}_{\hspace{0.03cm}i}$  that maximizes the transition probability  ${\rm Pr}( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{x}_{\hspace{0.03cm}i} )$:

\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} \left [ \varepsilon^{d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot (1-\varepsilon)^{n-d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \right ] \hspace{0.05cm}.\]

Since the logarithm is a monotonically increasing function,  the same result is obtained after the following maximization:

\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} L(\underline{x}_{\hspace{0.03cm}i})\hspace{0.5cm} {\rm with}\hspace{0.5cm} L(\underline{x}_{\hspace{0.03cm}i}) = \ln \left [ \varepsilon^{d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \cdot (1-\varepsilon)^{n-d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})} \right ] \]
\[ \Rightarrow \hspace{0.3cm} L(\underline{x}_{\hspace{0.03cm}i}) = d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i}) \cdot \ln \hspace{0.05cm} \varepsilon + \big [n -d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\big ] \cdot \ln \hspace{0.05cm} (1- \varepsilon) = \ln \frac{\varepsilon}{1-\varepsilon} \cdot d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i}) + n \cdot \ln \hspace{0.05cm} (1- \varepsilon) \hspace{0.05cm}.\]

Here we have to take into account:

  • The second term of this equation is independent of  $\underline{x}_{\hspace{0.03cm}i}$  and need not be considered further for maximization.
  • Also,  the factor in front of the Hamming distance is the same for all  $\underline{x}_{\hspace{0.03cm}i}$.
  • Since  $\ln \, {\varepsilon}/(1-\varepsilon)$  is negative  (at least for  $\varepsilon <0.5$,  which can be assumed without much restriction),  maximization becomes minimization,  and the following final result is obtained:


$\text{Maximum–likelihood decision at the BSC channel:}$ 

Choose from the  $2^k$  allowed code words  $\underline{x}_{\hspace{0.03cm}i}$  the one with the  »least Hamming distance«  $d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$  to the received vector  $\underline{y}$:

\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} d_{\rm H}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}, \hspace{0.2cm} \underline{y} \in {\rm GF}(2^n) \hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.\]
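
For small codes,  this rule can be implemented by exhaustive search.  The following Python sketch  $($code and received word chosen arbitrarily for illustration$)$  computes all Hamming distances and also confirms that,  for  $\varepsilon < 0.5$,  the code word with minimum Hamming distance is exactly the one maximizing  $\varepsilon^{d_{\rm H}} \cdot (1-\varepsilon)^{n-d_{\rm H}}$:

import numpy as np

eps = 0.1                                    # example falsification probability
C = np.array([[0, 0, 0, 0, 0],               # example code with 2^k = 4 code words
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1]])
y = np.array([1, 1, 1, 1, 1])                # example received word
n = C.shape[1]

d_H = np.sum(C != y, axis=1)                 # Hamming distances to all code words
like = eps**d_H * (1 - eps)**(n - d_H)       # transition probabilities Pr(y | x_i)
z = C[np.argmin(d_H)]                        # ML decision:  minimum Hamming distance

print("d_H =", d_H, "  Pr(y|x_i) =", np.round(like, 5))
print("argmin d_H == argmax Pr(y|x_i):", np.argmin(d_H) == np.argmax(like))
print("z =", z)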



Maximum-likelihood decision at the AWGN channel


The AWGN model for an  $(n, k)$  block code differs from the  $\text{model}$  in the first chapter section in that for  $x$,  $\tilde{x}$  and  $y$  the corresponding vectors  $\underline{x}$,  $\underline{\tilde{x}}$  and  $\underline{y}$  must now be used,  each consisting of  $n$  elements.

The steps to derive the maximum–likelihood decision for AWGN are given below only in bullet form:

  • The AWGN channel is memoryless per se  (the  "White"  in the name stands for this).  Thus,  for the conditional probability density function  $\rm (PDF)$,  it can be written:
\[f( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}} ) = \prod\limits_{l=1}^{n} f( y_l \hspace{0.05cm}|\hspace{0.05cm} \tilde{x}_l ) \hspace{0.05cm}.\]
  • The conditional PDF is  "Gaussian"  for each individual code element  $(l = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, n)$.  Thus,  the entire PDF,  as the product of  $n$  one-dimensional Gaussian densities,  is also Gaussian:
\[f({y_l \hspace{0.03cm}| \hspace{0.03cm}\tilde{x}_l }) = \frac {1}{\sqrt{2\pi} \cdot \sigma } \cdot \exp \left [ - \frac {(y_l - \tilde{x}_l)^2}{2\sigma^2} \right ]\hspace{0.3cm} \Rightarrow \hspace{0.3cm} f( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}} ) = \frac {1}{(2\pi)^{n/2} \cdot \sigma^n } \cdot \exp \left [ - \frac {1}{2\sigma^2} \cdot \sum_{l=1}^{n} \hspace{0.2cm}(y_l - \tilde{x}_l)^2 \right ] \hspace{0.05cm}.\]
  • Since  $\underline{y}$  is now continuous in value rather than discrete in value as in the BSC model,  the maximum-likelihood decision rule must be based on probability densities rather than on probabilities.  The optimal result is:
\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} f( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}}_i )\hspace{0.05cm}, \hspace{0.5cm} \underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.\]
  • In algebra,  the distance between two points  $\underline{y}$  and  $\underline{\tilde{x}}$  in  $n$–dimensional space is called the  $\text{Euclidean distance}$,  named after the Greek mathematician  $\text{Euclid}$  who lived in the third century BC:
\[d_{\rm E}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{\tilde{x}}) = \sqrt{\sum_{l=1}^{n} \hspace{0.2cm}(y_l - \tilde{x}_l)^2}\hspace{0.05cm},\hspace{0.8cm} \underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in \mathcal{C} \hspace{0.05cm}.\]
  • Thus,  taking into account that the first factor of the PDF  $f( \underline{y} \hspace{0.05cm}|\hspace{0.05cm} \underline{\tilde{x}_i} )$  is constant,  the maximum-likelihood decision rule at the AWGN channel for any block code reads:
\[\underline{z} = {\rm arg} \max_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C}} \hspace{0.1cm} \exp \left [ - \frac {d_{\rm E}^2(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{\tilde{x}}_i)}{2\sigma^2} \right ]\hspace{0.05cm}, \hspace{0.8cm} \underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.\]
  • After a few more intermediate steps, you arrive at the result:


$\text{Maximum–likelihood decision at the AWGN channel:}$ 

Choose from the  $2^k$  allowed code words  $\underline{x}_{\hspace{0.03cm}i}$  the one with the  »smallest Euclidean distance«  $d_{\rm E}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})$  to the received vector  $\underline{y}$:

\[\underline{z} = {\rm arg} \min_{\underline{x}_{\hspace{0.03cm}i} \hspace{0.05cm} \in \hspace{0.05cm} \mathcal{C} } \hspace{0.1cm} d_{\rm E}(\underline{y} \hspace{0.05cm}, \hspace{0.1cm}\underline{x}_{\hspace{0.03cm}i})\hspace{0.05cm}, \hspace{0.8cm} \underline{y} \in R^n\hspace{0.05cm}, \hspace{0.2cm}\underline{x}_{\hspace{0.03cm}i}\in {\rm GF}(2^n) \hspace{0.05cm}.\]
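
Analogously to the BSC case,  the soft-decision rule can be sketched by exhaustive search over the Euclidean distances.  In the following Python sketch,  the  $($arbitrarily chosen$)$  example code of the previous sketch is reused and the received vector is now real-valued:

import numpy as np

C = np.array([[0, 0, 0, 0, 0],               # same example code as above
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1]])
xt = 1 - 2*C                                 # bipolar code words:  0 -> +1,  1 -> -1
y = np.array([0.3, -0.8, -1.1, 0.2, -0.6])   # example real-valued received vector

d_E = np.sqrt(np.sum((y - xt)**2, axis=1))   # Euclidean distances to all code words
z = C[np.argmin(d_E)]                        # ML decision:  minimum Euclidean distance
print("distances:", np.round(d_E, 2), "  ->   z =", z)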

Exercises for the chapter


Exercise 1.3: Channel Models  "BSC - BEC - BSEC - AWGN"

Exercise 1.4: Maximum Likelihood Decision