Channel Coding/Bounds for Block Error Probability
Distance spectrum of a linear code
We further assume a linear and binary (n, k) block code C. A major goal of code design is to keep the "block error probability" Pr(u_≠v_)=Pr(z_≠x_) as low as possible. This is achieved, among other things, by ensuring that
- the minimum distance dmin between two codewords x_ and x_′ is as large as possible, so that up to t=⌊(dmin−1)/2⌋ bit errors can be corrected;
- at the same time the minimum distance dmin ⇒ worst case occurs as rarely as possible, with respect to all allowed codewords.
Definition: We denote by Wi(x_) the number of codewords x_′∈C at Hamming distance i from the considered codeword x_ of the same code C, where:
- <math>W_i(\underline{x}) = \big| \{ \underline{x}' \in \mathcal{C} \; | \; d_{\rm H}(\underline{x}, \underline{x}') = i \} \big|.</math>
- The absolute value bars here denote the number of codewords x_′ that satisfy the condition dH(x_,x_′)=i.
This value is also called multiplicity.
Example 1: We consider the (5,2) block code C with the generator matrix
<math>\mathbf{G} = \begin{pmatrix} 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix}.</math>
The table shows the Hamming distances
- between all codewords x_i
- and the reference words x_0, ... , x_3.
One recognizes that, independently of the reference word x_i (a numerical check is given below):
- W0=1, W1=W2=0, W3=2, W4=1
- ⇒ dmin=3.
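The multiplicities found in Example 1 can easily be verified numerically. The following minimal Python sketch (an illustration added here, not part of the original text) generates all 2^k codewords from the generator matrix G given above and counts their Hamming weights:

<syntaxhighlight lang="python">
from itertools import product
from collections import Counter

# Generator matrix of the exemplary (5, 2) block code
G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]
k, n = len(G), len(G[0])

# All 2^k codewords: x = u . G (modulo 2)
codewords = []
for u in product((0, 1), repeat=k):
    x = tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
    codewords.append(x)

# For a linear code the distance spectrum equals the weight distribution,
# so it suffices to count the Hamming weights of all codewords.
spectrum = Counter(sum(x) for x in codewords)
print(sorted(codewords))
print({i: spectrum.get(i, 0) for i in range(n + 1)})
# -> W_0 = 1, W_1 = W_2 = 0, W_3 = 2, W_4 = 1, hence d_min = 3
</syntaxhighlight>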
Not only in this example, but in any linear code, the same multiplicities Wi result for each codeword. Furthermore, since the all-zero word 0_=(0,0, ...,0) is part of every linear binary code, the above definition can also be formulated as follows:
Definition: The distance spectrum of a linear binary (n,k) block code is the set {Wi} with i=0,1, ... , n. Here Wi gives the number of codewords x_∈C with Hamming weight wH(x_)=i .
Often one describes the set {Wi} also as a polynomial with a pseudovariable X:
- <math>\{W_i\} \hspace{0.3cm} \Leftrightarrow \hspace{0.3cm} W(X) = \sum_{i=0}^{n} W_i \cdot X^{i} = W_0 + W_1 \cdot X + W_2 \cdot X^{2} + \text{...} + W_n \cdot X^{n}.</math>
W(X) is also called the weight enumerator function.
For example, the weight enumerator function of the (5,2) code C={(0,0,0,0,0),(0,1,0,1,1),(1,0,1,1,0),(1,1,1,0,1)} of Example 1 is
- <math>W(X) = 1 + 2 \cdot X^{3} + X^{4}.</math>
As can be seen from the "table of its code words" , for the (7,4,3) Hamming code:
- W(X)=1+7⋅X3+7⋅X4+X7.
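The weight enumerator function of the (7, 4, 3) Hamming code can be checked in the same way as in the sketch above. Note that the systematic generator matrix used below is one common choice for HC (7, 4, 3) and is assumed here for illustration only; any valid generator matrix of the code yields the same weight distribution.

<syntaxhighlight lang="python">
from itertools import product
from collections import Counter

# A common systematic generator matrix of HC (7, 4, 3) - assumed here for illustration
G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]
k, n = len(G), len(G[0])

# Hamming weight of every codeword x = u . G (modulo 2)
weights = Counter(
    sum(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
    for u in product((0, 1), repeat=k)
)
print(dict(sorted(weights.items())))
# -> {0: 1, 3: 7, 4: 7, 7: 1}, i.e. W(X) = 1 + 7*X^3 + 7*X^4 + X^7
</syntaxhighlight>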
The transformation of the distance spectrum {Wi} into the weight enumerator function W(X) also offers great numerical advantages in some exercises. For example, if the weight enumerator function W(X) of an (n, k) block code C is known, then the following holds for the corresponding "dual" (n, n−k) code Cdual:
- <math>W_{\rm Dual}(X) = \frac{(1+X)^{n}}{2^{k}} \cdot W \left( \frac{1-X}{1+X} \right).</math>
Example 2: We are looking for the weight enumerator function W(X) of the "single parity–check code" with n=6, k=5 ⇒ SPC (6, 5).
This is obtained by comparing all 2^5 = 32 codewords with the all-zero word:
- <math>W_{\rm SPC\,(6,\,5)}(X) = 1 + 15 \cdot X^{2} + 15 \cdot X^{4} + X^{6}.</math>
Using the above equation, one obtains the same result much faster:
- The code dual to the SPC (6, 5) is the "repetition code" RC (6, 1) with only two codewords (0,0,0,0,0,0) and (1,1,1,1,1,1):
- <math>W_{\rm RC\,(6,\,1)}(X) = 1 + X^{6}.</math>
- From this, the weight enumerator function of the SPC (6, 5) follows according to the above equation with n=6, k=1:
- <math>W_{\rm SPC\,(6,\,5)}(X) = \frac{(1+X)^{6}}{2^{1}} \cdot \left[ 1 + \left( \frac{1-X}{1+X} \right)^{6} \right] = 1/2 \cdot \big[ (1+X)^{6} + (1-X)^{6} \big] = 1 + 15 \cdot X^{2} + 15 \cdot X^{4} + X^{6}.</math>
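Both routes of Example 2 can be reproduced numerically. A minimal sketch (Python, standard library only) first determines the weight distribution of the SPC (6, 5) by direct enumeration of all even-parity words and then spot-checks the duality relation; the test values of X are chosen arbitrarily:

<syntaxhighlight lang="python">
from itertools import product
from collections import Counter

# Route 1: direct enumeration - the SPC (6, 5) consists of all length-6 words with even parity
spc_words = [w for w in product((0, 1), repeat=6) if sum(w) % 2 == 0]
print(len(spc_words), Counter(sum(w) for w in spc_words))
# -> 32 codewords with weight distribution {0: 1, 2: 15, 4: 15, 6: 1}

# Route 2: spot-check of the duality relation W_dual(X) = (1+X)^n / 2^k * W((1-X)/(1+X))
W_RC  = lambda x: 1 + x**6                        # repetition code RC (6, 1):  n = 6, k = 1
W_SPC = lambda x: 1 + 15*x**2 + 15*x**4 + x**6    # result of the enumeration above
for x in (0.1, 0.5, 2.0):
    lhs = W_SPC(x)
    rhs = (1 + x)**6 / 2**1 * W_RC((1 - x) / (1 + x))
    print(round(lhs, 10), round(rhs, 10))          # both columns coincide
</syntaxhighlight>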
Union Bound of the block error probability
We consider, as in Example 1 on the distance spectrum, the (5,2) block code C={x_0,x_1,x_2,x_3} and assume that the codeword x_0 has been sent. The graphic illustrates the situation.
In the error-free case, the codeword estimator would then return z_=x_0. Otherwise, a block error would occur (that is, z_≠x_0 and accordingly v_≠u_0) with probability
- <math>{\rm Pr(block\:error)} = {\rm Pr} \big( [\underline{x}_0 \mapsto \underline{x}_1] \hspace{0.1cm}\cup\hspace{0.1cm} [\underline{x}_0 \mapsto \underline{x}_2] \hspace{0.1cm}\cup\hspace{0.1cm} [\underline{x}_0 \mapsto \underline{x}_3] \big).</math>
The event "corruption of x_0 to x_1" occurs for a given received word y_ according to the "Maximum Likelihood Decision Rule" "block-wise ML" exactly when holds for the conditional probability density function:
- [x_0↦x_1]⇔f(x_0|y_)<f(x_1|y_).
Since [x_0↦x_1], [x_0↦x_2], [x_0↦x_3] are not necessarily disjoint events (that is, not necessarily mutually exclusive), the "probability of the union set" is less than or equal to the sum of the individual probabilities:
<math>{\rm Pr(block\:error)} \le {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] + {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_2] + {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_3].</math>
One calls this upper bound for the (block) error probability the Union Bound. This was already used in the chapter "approximation of the error probability" of the book "Digital Signal Transmission".
We generalize and formalize these results under the assumption that both x_ and x_′ belong to the code C .
Then the following calculation rules apply:
- Block error probability:
- <math>{\rm Pr(block\:error)} = {\rm Pr} \left( \bigcup_{\underline{x}' \ne \underline{x}} \big[ \underline{x} \mapsto \underline{x}' \big] \right),</math>
- Upper bound according to the Union Bound:
- <math>{\rm Pr(Union\:Bound)} = \sum_{\underline{x}' \ne \underline{x}} {\rm Pr} \big[ \underline{x} \mapsto \underline{x}' \big] \ge {\rm Pr(block\:error)},</math>
- Pairwise error probability (according to the MAP or ML criterion):
- <math>{\rm Pr} \big[ \underline{x} \mapsto \underline{x}' \big] = {\rm Pr} \big[ f(\underline{x} \hspace{0.05cm}|\hspace{0.05cm} \underline{y}) \le f(\underline{x}' \hspace{0.05cm}|\hspace{0.05cm} \underline{y}) \big].</math>
On the next pages, these results are applied to different channels.
Union Bound for the BSC model
We further consider the exemplary (5,2) code: C={(0,0,0,0,0),(0,1,0,1,1),(1,0,1,1,0),(1,1,1,0,1)} and for the digital channel we use the BSC model (Binary Symmetric Channel ):
- Pr(y=1|x=0)=Pr(y=0|x=1)=Pr(e=1)=ε,
- Pr(y=0|x=0)=Pr(y=1|x=1)=Pr(e=0)=1−ε.
Then the following holds (see the graphic below):
- The codewords x_0=(0,0,0,0,0) and x_1=(0,1,0,1,1) differ in d=3 bits, where d indicates the Hamming distance between x_0 and x_1.
- An incorrect decoding result [x_0↦x_1] is obtained whenever at least two of the three bits at bit positions 2, 4, and 5 are corrupted.
- In contrast, the bit positions 1 and 3 do not matter here, since they are the same for x_0 and x_1 .
Since the considered code can correct t=⌊(d−1)/2⌋=1 error, the following holds:
- <math>{\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] = \sum_{i=t+1}^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1-\varepsilon)^{d-i} = {3 \choose 2} \cdot \varepsilon^{2} \cdot (1-\varepsilon) + {3 \choose 3} \cdot \varepsilon^{3} = 3 \cdot \varepsilon^{2} \cdot (1-\varepsilon) + \varepsilon^{3} = {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_2].</math>
This takes into account that x_0=(0,0,0,0,0) and x_2=(1,0,1,1,0) also differ in three bit positions.
The code words x_0=(0,0,0,0,0) and x_3=(1,1,1,0,1) on the other hand differ in four bit positions:
- Wrong decoding of the block therefore certainly occurs if three or four bits are corrupted.
- A corruption of exactly two bits also results in a block error with 50 percent probability, if one assumes a random decision in this case.
- <math>{\rm Pr}[\underline{x}_0 \mapsto \underline{x}_3] = \varepsilon^{4} + 4 \cdot \varepsilon^{3} \cdot (1-\varepsilon) + 1/2 \cdot 6 \cdot \varepsilon^{2} \cdot (1-\varepsilon)^{2}.</math>
This results in the "Union Bound":
- <math>{\rm Pr(Union\:Bound)} = {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] + {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_2] + {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_3] \ge {\rm Pr(block\:error)}.</math>
Example 3: The table summarizes the results for different values of the BSC parameter ε.
It is worth mentioning that the probabilities Pr[x_0↦x_1] and Pr[x_0↦x_3], although calculated in completely different ways, yield exactly the same numerical result (a numerical check is given below).
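Indeed, both expressions simplify to the same polynomial in ε. A minimal Python sketch evaluates the pairwise corruption probabilities and the resulting Union Bound of the (5, 2) code; the ε values chosen here are only examples and not necessarily those of the table:

<syntaxhighlight lang="python">
# Pairwise corruption probabilities of the (5, 2) code on the BSC, as derived above
def pr_d3(eps):   # x_0 -> x_1 and x_0 -> x_2 (Hamming distance d = 3)
    return 3 * eps**2 * (1 - eps) + eps**3

def pr_d4(eps):   # x_0 -> x_3 (Hamming distance d = 4, ties decided at random)
    return eps**4 + 4 * eps**3 * (1 - eps) + 0.5 * 6 * eps**2 * (1 - eps)**2

for eps in (0.1, 0.05, 0.01):
    union_bound = 2 * pr_d3(eps) + pr_d4(eps)
    print(eps, pr_d3(eps), pr_d4(eps), union_bound)

# pr_d3 and pr_d4 print the same value for every eps: both expressions
# simplify to 3*eps^2 - 2*eps^3, which explains the observation above.
</syntaxhighlight>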
The upper bound according to Bhattacharyya
Another upper bound on the block error probability was given by Bhattacharyya:
- <math>{\rm Pr(block\:error)} \le W(X=\beta) - 1 = {\rm Pr(Bhattacharyya)}.</math>
It should be noted that:
- W(X) is the weight enumerator function defined above, which characterizes the channel code used.
- The Bhattacharyya parameter β characterizes the digital channel. For example:
- <math>\beta = \left\{ \begin{array}{ll} \lambda & \text{for the BEC model}, \\ \sqrt{4 \cdot \varepsilon \cdot (1-\varepsilon)} & \text{for the BSC model}, \\ {\rm e}^{-R \hspace{0.05cm}\cdot\hspace{0.05cm} E_{\rm B}/N_0} & \text{for the AWGN model}. \end{array} \right.</math>
- The Bhattacharyya bound is always (and usually significantly) above the curve for the "Union Bound". With the goal of finding a uniform bound for all channels, much coarser estimates must be made here than for the "Union Bound".
We restrict ourselves here to the Bhattacharyya bound for the BSC model. Its pairwise corruption probability was derived above:
- <math>{\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] = \sum_{i=\lfloor (d-1)/2 \rfloor + 1}^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1-\varepsilon)^{d-i} = \sum_{i=\lceil d/2 \rceil}^{d} {d \choose i} \cdot \varepsilon^{i} \cdot (1-\varepsilon)^{d-i}.</math>
- Here, ε = Pr(y=1|x=0) = Pr(y=0|x=1) < 0.5 denotes the corruption probability of the BSC model.
- d = dH(x_0,x_1) indicates the Hamming distance between the considered codewords.
To arrive at the Bhattacharyya bound, the following estimates must be made:
- For all i ≥ d/2 it holds that ε^i · (1−ε)^(d−i) ≤ [ε · (1−ε)]^(d/2), and therefore:
- <math>{\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] \le \big[ \varepsilon \cdot (1-\varepsilon) \big]^{d/2} \cdot \sum_{i=\lceil d/2 \rceil}^{d} {d \choose i}.</math>
- Extending the lower limit of the summation variable i gives:
- <math>\sum_{i=\lceil d/2 \rceil}^{d} {d \choose i} \hspace{0.15cm} < \hspace{0.15cm} \sum_{i=0}^{d} {d \choose i} = 2^{d} \hspace{0.5cm}\Rightarrow\hspace{0.5cm} {\rm Pr}[\underline{x}_0 \mapsto \underline{x}_1] \le \beta^{d} \hspace{0.5cm}{\rm with}\hspace{0.5cm} \beta = 2 \cdot \sqrt{\varepsilon \cdot (1-\varepsilon)}.</math>
- Re-sorting according to the Hamming weights Wi (the Hamming distance d=i occurs Wi times) yields:
- <math>{\rm Pr(block\:error)} \le \sum_{i=1}^{n} W_i \cdot \beta^{i} = W_1 \cdot \beta + W_2 \cdot \beta^{2} + \text{...} + W_n \cdot \beta^{n}.</math>
- With the weight enumerator function W(X) = 1 + W1⋅X + W2⋅X^2 + ... + Wn⋅X^n, it finally follows that (a numerical check of this chain of estimates is given below):
- <math>{\rm Pr(block\:error)} \le W(X=\beta) - 1 = {\rm Pr(Bhattacharyya)}.</math>
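The chain of estimates above can be illustrated numerically. The following minimal Python sketch compares the exact pairwise corruption probability with its Bhattacharyya estimate β^d; the value ε = 0.1 is chosen only for illustration and is not taken from the text:

<syntaxhighlight lang="python">
from math import comb, ceil, sqrt

def pairwise_exact(d, eps):
    # Sum_{i = ceil(d/2)}^{d}  C(d, i) * eps^i * (1 - eps)^(d - i)
    return sum(comb(d, i) * eps**i * (1 - eps)**(d - i)
               for i in range(ceil(d / 2), d + 1))

eps  = 0.1
beta = 2 * sqrt(eps * (1 - eps))      # Bhattacharyya parameter of the BSC (here beta = 0.6)
for d in (3, 4, 7):
    print(d, pairwise_exact(d, eps), beta**d)

# In each row the exact value is (much) smaller than beta^d,
# reflecting how coarse the two estimates made above are.
</syntaxhighlight>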
Example 4: The table summarizes the Bhattacharyya results for different values of the BSC parameter ε for the exemplary (5, 2) code.
For this code:
- W0=1, W1=W2=0, W3=2, W4=1
- <math>\Rightarrow \hspace{0.3cm} W(X) = 1 + 2 \cdot X^{3} + X^{4}.</math>
Thus, the Bhattacharyya bound can be calculated:
- <math>{\rm Pr(Bhattacharyya)} = W(\beta) - 1 = 2 \cdot \beta^{3} + \beta^{4}.</math>
This provides an (often rough) upper bound on the block error probability:
- <math>{\rm Pr(block\:error)} \le {\rm Pr(Bhattacharyya)}.</math>
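To see how much coarser the bound of Example 4 is compared with the Union Bound of Example 3, both can be evaluated side by side. A minimal Python sketch with illustrative ε values (again not necessarily those of the tables):

<syntaxhighlight lang="python">
from math import sqrt

def union_bound(eps):                 # Union Bound of the (5, 2) code (Example 3)
    pr_d3 = 3 * eps**2 * (1 - eps) + eps**3
    pr_d4 = eps**4 + 4 * eps**3 * (1 - eps) + 3 * eps**2 * (1 - eps)**2
    return 2 * pr_d3 + pr_d4

def bhattacharyya(eps):               # W(beta) - 1 = 2*beta^3 + beta^4 (Example 4)
    beta = 2 * sqrt(eps * (1 - eps))
    return 2 * beta**3 + beta**4

for eps in (0.1, 0.05, 0.01):
    print(eps, union_bound(eps), bhattacharyya(eps))
    # the Bhattacharyya bound is consistently the (much) coarser of the two
</syntaxhighlight>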
Conclusion: Based on Example 3 and Example 4 (on this page) for the simple, but practically less relevant (5, 2) block code, and anticipating Example 5 (on the next page) for the (7, 4, 3) Hamming code, we summarize:
- The block error probability of a coding system often cannot be specified analytically and must be determined by simulation. The same applies to the bit error probability.
- The Union Bound provides an upper bound for the block error probability. For many applications (especially short codes) the Union Bound is only slightly above the actual error probability.
- The Bhattacharyya bound for the BEC channel is about a factor of 2 above the Union Bound – see Exercise 1.14. For the BSC and the AWGN channel, the gap is much larger; a factor of 10 (or more) is not uncommon.
- The Bhattacharyya bound W(β)−1 looks very simple at first sight. Nevertheless, here too one needs knowledge of the exact weight enumerator function W(X) of the code.
- If the transmission channel (BEC, BSC, AWGN or variations thereof) and its parameters are known, nothing speaks against using the (more accurate) "Union Bound" as an upper bound for the block error probability.
Bounds for the (7, 4, 3) Hamming code on the AWGN channel
Finally, we consider the block error probability and its bounds ("Union Bound" and "Bhattacharyya bound" ) for the following configuration:
- AWGN channel, characterized by the quotient EB/N0,
- Hamming code HC(7, 4, 3) ⇒ R=4/7, W(X)−1 = 7·X^3 + 7·X^4 + X^7,
- "Soft Decision" according to the maximum–likelihood criterion.
Example 5: The results are summarized in the graphic.
- In contrast to the graphic in the section Codiergewinn – Bitfehlerrate bei AWGN ("coding gain – bit error rate for AWGN"), the block error rate is shown here and not the bit error rate.
- Approximately, the latter is smaller by the factor dmin/k if, as here, dmin < k. In the present example dmin/k = 0.75.
- Only the points for integer dB values were calculated; the dashed lines were interpolated.
- The (blue) numerical values given on the right apply to 10·lg EB/N0 = 8 dB ⇒ EB/N0 ≈ 6.31 (blue vertical line).
The green crosses mark the "Union Bound". According to it:
- <math>{\rm Pr(block\:error)} \le \sum_{i=d_{\rm min}}^{n} W_i \cdot {\rm Q} \big( \sqrt{i \cdot 2 \cdot R \cdot E_{\rm B}/N_0} \big)</math>
- <math>\Rightarrow \hspace{0.3cm} {\rm Pr(block\:error)} \approx 7 \cdot {\rm Q}(4.65) + 7 \cdot {\rm Q}(5.37) + {\rm Q}(7.10) \approx 7 \cdot 1.66 \cdot 10^{-6} + 7 \cdot 3.93 \cdot 10^{-8} + 10^{-9} \approx 1.2 \cdot 10^{-5}.</math>
- The numerical values make clear that the Union Bound is essentially determined by its first term:
- <math>{\rm Pr(Union\:Bound)} \approx W_{d_{\rm min}} \cdot {\rm Q} \big( \sqrt{d_{\rm min} \cdot 2 \cdot R \cdot E_{\rm B}/N_0} \big) \approx 1.16 \cdot 10^{-5}.</math>
- However, this so-called "Truncated Union Bound" is no longer a true bound on the block error probability for all applications, but should be understood as an approximation.
- The Bhattacharyya bound is marked in the graphic by red dots. Due to the strongly simplified Chernoff–Rubin bound Q(x) ≤ e^(−x²/2), it lies well above the Union Bound.
- For example, for 10·lg EB/N0 = 8 dB and β = e^(−R·EB/N0) ≈ 0.027, one obtains a value more than ten times larger than the Union Bound (a numerical check is given below):
- <math>{\rm Pr(Bhattacharyya)} = W(\beta) - 1 = 7 \cdot \beta^{3} + 7 \cdot \beta^{4} + \beta^{7} \approx 1.44 \cdot 10^{-4}.</math>
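The two numerical values for 10·lg EB/N0 = 8 dB can be reproduced with a few lines. A minimal sketch using only the Python standard library, where Q(x) is evaluated via the complementary error function:

<syntaxhighlight lang="python">
from math import erfc, exp, sqrt

def Q(x):                                 # Gaussian tail function Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

R = 4 / 7                                 # code rate of HC (7, 4, 3)
W = {3: 7, 4: 7, 7: 1}                    # W(X) - 1 = 7*X^3 + 7*X^4 + X^7

EbN0 = 10 ** (8 / 10)                     # 8 dB  =>  E_B/N_0 ~ 6.31

union_bound   = sum(Wi * Q(sqrt(2 * i * R * EbN0)) for i, Wi in W.items())
beta          = exp(-R * EbN0)            # Bhattacharyya parameter of the AWGN channel (~0.027)
bhattacharyya = sum(Wi * beta**i for i, Wi in W.items())

print(union_bound)                        # ~ 1.2e-5
print(bhattacharyya)                      # ~ 1.44e-4
</syntaxhighlight>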
Exercises for the chapter
Exercise 1.14: Bhattacharyya Bound for BEC
Exercise 1.15: Distance Spectra of HC (7, 4, 3) and HC (8, 4, 4)
Exercise 1.16: Error Probability Bounds for AWGN
Exercise 1.16Z: Bounds for the Gaussian Error Function