Channel Coding/Distance Characteristics and Error Probability Bounds
Contents
- 1 Free distance vs. minimum distance
- 2 Path weighting enumerator function
- 3 Enhanced path weighting enumerator function
- 4 Path weighting enumerator function from state transition diagram
- 5 Rules for manipulating the state transition diagram
- 6 Block error probability vs. burst error probability
- 7 Burst error probability and Bhattacharyya bound
- 8 Bit error probability and Viterbi bound
- 9 Exercises for the chapter
- 10 References
Free distance vs. minimum distance
An important parameter regarding the error probability of linear block codes is the "minimum distance" between two code words x_ and x_′:
::<math>d_{\rm min}(\mathcal{C}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}' \hspace{0.05cm}\in\hspace{0.05cm} \mathcal{C} \\ \underline{x} \hspace{0.05cm}\ne\hspace{0.05cm} \underline{x}\hspace{0.03cm}'}} \hspace{0.1cm} d_{\rm H}(\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in\hspace{0.05cm} \mathcal{C} \\ \underline{x} \hspace{0.05cm}\ne\hspace{0.05cm} \underline{0}}} \hspace{0.1cm} w_{\rm H}(\underline{x})\hspace{0.05cm}.</math>
- The second part of the equation arises from the fact that every linear code also includes the zero word ⇒ (0_).
- It is therefore convenient to set x_′=0_, so that the Hamming distance dH(x_, 0_) gives the same result as the Hamming weight wH(x_).
Example 1: The table shows the 16 code words of an exemplary (7, 4, 3) Hamming code (see Examples 6 and 7).
You can see:
- All code words except the null word (0_) contain at least three ones ⇒ dmin=3.
- There are
- seven code words with three "ones" (highlighted in yellow),
- seven with four "ones" (highlighted in green), and
- one each with no "ones" and seven "ones".
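The weight distribution can be verified directly by enumerating all 2^k = 16 code words. Below is a minimal Python sketch, assuming one possible systematic generator matrix of the (7, 4, 3) Hamming code (any equivalent generator matrix yields the same weight distribution):

<syntaxhighlight lang="python">
from itertools import product
from collections import Counter
import numpy as np

# One possible systematic generator matrix of a (7, 4, 3) Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Hamming weights of all non-zero code words
weights = [int(((np.array(u) @ G) % 2).sum())
           for u in product([0, 1], repeat=4) if any(u)]

print(min(weights))                      # d_min = 3
print(sorted(Counter(weights).items())) # [(3, 7), (4, 7), (7, 1)]
</syntaxhighlight>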
The »free distance« dF of a convolutional code (CC) is formally no different from the minimum distance of a linear block code:
::<math>d_{\rm F}(\mathcal{CC}) = \min_{\substack{\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}' \hspace{0.05cm}\in\hspace{0.05cm} \mathcal{CC} \\ \underline{x} \hspace{0.05cm}\ne\hspace{0.05cm} \underline{x}\hspace{0.03cm}'}} \hspace{0.1cm} d_{\rm H}(\underline{x},\hspace{0.05cm}\underline{x}\hspace{0.03cm}') = \min_{\substack{\underline{x} \hspace{0.05cm}\in\hspace{0.05cm} \mathcal{CC} \\ \underline{x} \hspace{0.05cm}\ne\hspace{0.05cm} \underline{0}}} \hspace{0.1cm} w_{\rm H}(\underline{x})\hspace{0.05cm}.</math>
In the literature, "d∞" is sometimes used instead of "dF".
The graph shows three of the infinitely many paths with the minimum Hamming weight wH, min(x_) = dF = 5.
- The major difference from the minimum distance dmin of other codes is that, for convolutional codes, one does not consider information words and code words, but sequences with the property »semi-infinite«.
- Each encoded sequence x_ describes a path through the trellis.
- The "free distance" is the smallest possible Hamming weight of such a path (except for the zero path).
Path weighting enumerator function
For any linear block code, a weight enumerator function can be given in a simple way because of the finite number of code words x_.
For Example 1 in the previous section this is:
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm}.</math>
In the case of a (non-terminated) convolutional code, no such weight function can be given, since there are infinitely many, infinitely long encoded sequences x_ and thus also infinitely many trellis paths.
To get a grip on this problem, we now assume the following:
- As a reference for the trellis diagram, we always choose the path of the encoded sequence x_=0_ and call this the "zero path" φ0.
- We now consider only paths φj∈Φ that all deviate from the zero path at a given time t and return to it at some point later.
- Although only a fraction of all paths belong to the set Φ, this remainder set Φ = {φ1, φ2, φ3, ...} still contains an unbounded number of paths. The zero path φ0 is not one of them.
In the above trellis some paths φj∈Φ are drawn:
- The yellow path φ1 belongs to the sequence x_1=(11,10,11) with Hamming weight wH(x_1)=5. Thus, the path weighting w(φ1)=5. Due to the definition of the branching time t only this single path φ1 has the free distance dF=5 to the zero path ⇒ A5=1.
- For the two green paths with the corresponding sequences x_2 = (11, 01, 01, 11) and x_3 = (11, 10, 00, 10, 11), respectively ⇒ w(φ2) = w(φ3) = 6. No other path exhibits the path weighting 6. We take this fact into account by the coefficient A6 = 2.
- Also drawn is the gray path φ4, associated with the sequence x_4=(11,01,10,01,11) ⇒ w(φ4)=7. Also, the sequences x_5=(11,01,01,00,10,11), x_6=(11,10,00,01,01,11) and x_7=(11,10,00,10,00,10,11) have the same path weighting "7" ⇒ A7=4.
Thus, for this example, the path weighting enumerator function is:
::<math>T(X) = A_5 \cdot X^5 + A_6 \cdot X^6 + A_7 \cdot X^7 + \text{...} \hspace{0.1cm} = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + \text{...} \hspace{0.1cm}.</math>
The definition of this function T(X) follows.
Definition: For the »path weighting enumerator function« of a convolutional code holds:
::<math>T(X) = \sum_{\varphi_{\hspace{0.02cm}j} \hspace{0.05cm}\in\hspace{0.05cm} {\it \Phi}} X^{w(\varphi_{j})} = \sum_{w\hspace{0.05cm}=\hspace{0.05cm}d_{\rm F}}^{\infty} A_w \cdot X^{w}\hspace{0.05cm}.</math>
- Φ denotes the set of all paths that leave the zero path φ0 exactly at the specified time t and return to it (sometime) later.
- According to the second equation part, the summands are ordered by their path weightings w where Aw denotes the number of paths with weighting w.
- The sum starts with w=dF.
- The path weighting w(φj) is equal to the Hamming weight (number of "ones") of the encoded sequence x_j associated to the path φj: w(φj)=wH(x_j).
Note: The weight enumerator function W(X) defined for linear block codes and the path weight function T(X) of convolutional codes have many similarities; however, they are not identical. We consider again
- the weight enumerator function of the (7,4,3) Hamming code:
::<math>W(X) = 1 + 7 \cdot X^{3} + 7 \cdot X^{4} + X^{7}\hspace{0.05cm},</math>
- and the path weighting enumerator function of our "standard convolutional encoder":
::<math>T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + 8 \cdot X^8 + \text{...}</math>
- Noticeable is the "1" in the first equation, which is missing in the second one. That is:
- For the linear block codes, the reference code word x_i=0_ is counted.
- But the zero encoded sequence x_i=0_ and the zero path φ0 are excluded by definition for convolutional codes.
Author's personal opinion:
One could have defined W(X) without the "1" as well. Among other things, this would have avoided the Bhattacharyya bound for linear block codes and the one for convolutional codes differing by "−1", as can be seen from the following equations:
- "Bhattacharyya bound for linear block codes": Pr(blockerror)≤W(X=β)−1,
- "Bhattacharyya bound for convolutional codes": Pr(bursterror)≤T(X=β).
Enhanced path weighting enumerator function
The path weighting enumerator function T(X) only provides information regarding the weights of the encoded sequence x_.
- More information is obtained if the weights of the information sequence u_ are also collected.
- One then needs two formal parameters X and U, as can be seen from the following definition.
Definition: The »enhanced path weight enumerator function« (EPWEF) is:
::<math>T_{\rm enh}(X,\ U) = \sum_{\varphi_{\hspace{0.02cm}j} \hspace{0.05cm}\in\hspace{0.05cm} {\it \Phi}} X^{w(\varphi_{j})} \cdot U^{u(\varphi_{j})} = \sum_{w} \sum_{u} A_{w,\hspace{0.05cm}u} \cdot X^{w} \cdot U^{u}\hspace{0.05cm}.</math>
All specifications for the T(X) definition in the last section apply. In addition, note:
- The path input weight u(φj) is equal to the Hamming weight of the information sequence u_j associated to the path.
- It is expressed as a power of the formal parameter U.
- Aw, u denotes the number of paths φj with path output weight w(φj) and path input weight u(φj). The control variable for the second sum is u.
- Setting the formal parameter U=1 in the enhanced path weighting enumerator function yields the original weighting enumerator function T(X).
For many (and all relevant) convolutional codes, the above equation can be simplified further, since each output weight w then occurs together with a single input weight u:
::<math>T_{\rm enh}(X,\ U) = \sum_{w\hspace{0.05cm}=\hspace{0.05cm}d_{\rm F}}^{\infty} A_w \cdot X^{w} \cdot U^{u}\hspace{0.05cm}.</math>
Example 2: Thus, the enhanced path weighting enumerator function of our standard encoder is:
::<math>T_{\rm enh}(X,\ U) = U \cdot X^5 + 2 \cdot U^2 \cdot X^6 + 4 \cdot U^3 \cdot X^7 + \text{ ...} \hspace{0.1cm}.</math>
Comparing this result with the trellis shown below, we can see:
- The yellow highlighted path – marked in T(X) by <math>X^5</math> – is composed of a blue arrow (ui=1) and two red arrows (ui=0). Thus <math>X^5</math> becomes the extended term <math>U \cdot X^5</math>.
- The sequences of the two green paths are
- u_2 = (1, 1, 0, 0) ⇒ x_2 = (11, 01, 01, 11),
- u_3 = (1, 0, 1, 0, 0) ⇒ x_3 = (11, 10, 00, 10, 11).
- This gives the second term <math>2 \cdot U^2 \cdot X^6</math>.
- The gray path (and the three paths not drawn) together make the contribution <math>4 \cdot U^3 \cdot X^7</math>. Each of these paths contains three blue arrows ⇒ three "ones" in the associated information sequence.
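These coefficients can be checked by a bounded enumeration of all detours in the trellis. A sketch in Python, again assuming the generators G1(D) = 1 + D + D² and G2(D) = 1 + D² of the standard encoder:

<syntaxhighlight lang="python">
from collections import Counter

def step(state, u):
    """One trellis transition; state = (u[i-1], u[i-2])."""
    u1, u2 = state
    w = (u ^ u1 ^ u2) + (u ^ u2)   # output weights of G1 = 1+D+D^2 and G2 = 1+D^2
    return (u, u1), w

def detour_coefficients(w_max):
    """Count A_{w,u} over all detours that leave the all-zero state once and return."""
    A = Counter()
    first, w0 = step((0, 0), 1)    # leaving the zero path requires u = 1
    stack = [(first, w0, 1)]       # (state, output weight w, input weight u)
    while stack:
        state, w, u_in = stack.pop()
        if w > w_max:
            continue               # prune: no zero-weight cycles exist, so this terminates
        if state == (0, 0):
            A[(w, u_in)] += 1      # detour complete: back on the zero path
            continue
        for u in (0, 1):
            nxt, dw = step(state, u)
            stack.append((nxt, w + dw, u_in + u))
    return A

print(sorted(detour_coefficients(8).items()))
# [((5, 1), 1), ((6, 2), 2), ((7, 3), 4), ((8, 4), 8)]
</syntaxhighlight>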
Path weighting enumerator function from state transition diagram
There is an elegant way to determine the path weighting enumerator function T(X) and its enhancement directly from the state transition diagram. This will be demonstrated here and in the following sections using our standard convolutional encoder as an example.
First, the state transition diagram must be redrawn for this purpose. The graphic shows this on the left in the previous form as diagram (A), while the new diagram (B) is given on the right. It can be seen:
- The state S0 is split into the start state S0 and the end state S0′. Thus, all paths of the trellis that start in state S0 and return to it at some point can also be traced in the right graph (B). Excluded are direct transitions from S0 to S0′ and thus also the zero path.
- In diagram (A) the transitions are distinguishable by the colors red (for ui=0) and blue (for ui=1).
- The code words x_i ∈ {00, 01, 10, 11} are noted at the transitions. In diagram (B), (00) is expressed by <math>X^0 = 1</math> and (11) by <math>X^2</math>.
- The code words (01) and (10) are now indistinguishable; they are uniformly denoted by <math>X</math>.
- Phrased another way: The code word x_i is now represented as <math>X^w</math>, where <math>X</math> is a dummy variable associated with the output.
- w = wH(x_i) indicates the Hamming weight of the code word x_i. For a rate 1/2 code, the exponent w is either 0, 1 or 2.
- Diagram (B) also omits the color coding. The information bit ui = 1 is now denoted by <math>U^1 = U</math> and the bit ui = 0 by <math>U^0 = 1</math>. The dummy variable <math>U</math> is thus assigned to the input sequence u_.
Rules for manipulating the state transition diagram
The goal of our calculations is to characterize the (arbitrarily complicated) path from S0 to S0′ by the enhanced path weighting enumerator function Tenh(X, U). For this we need rules to simplify the graph step by step.
»Serial transitions«
Two serial connections – denoted by A(X, U) and B(X, U) – can be replaced by a single connection with the product of their valuation functions.
»Parallel transitions«
Two parallel connections – denoted by A(X, U) and B(X, U) – are combined by the sum of their valuation functions.
»Ring«
The adjacent constellation can be replaced by a single connection, and the following applies to the replacement:
::<math>E(X,\hspace{0.05cm} U) = \frac{A(X,\hspace{0.05cm} U) \cdot B(X,\hspace{0.05cm} U)}{1 - C(X,\hspace{0.05cm} U)}\hspace{0.05cm}.</math>
»Feedback«
Due to the feedback, two states can alternate here as often as desired. For this constellation applies:
::<math>F(X,\hspace{0.05cm} U) = \frac{A(X,\hspace{0.05cm} U) \cdot B(X,\hspace{0.05cm} U) \cdot C(X,\hspace{0.05cm} U)}{1 - C(X,\hspace{0.05cm} U) \cdot D(X,\hspace{0.05cm} U)}\hspace{0.05cm}.</math>
The equations for ring and feedback given here are to be proved in Exercise 3.12Z.
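Pending that proof, both rules can at least be made plausible with a computer algebra system: expanding the closed-form expressions as geometric series yields one summand per number of loop traversals. A minimal sympy sketch (A, B, C, D treated as scalar valuation functions):

<syntaxhighlight lang="python">
import sympy as sp

A, B, C, D, g = sp.symbols('A B C D g')

# Ring rule: pass A, traverse the loop C any number of times, pass B
print(sp.series(A*B/(1 - C), C, 0, 4))
# A*B + A*B*C + A*B*C**2 + A*B*C**3 + O(C**4)

# Feedback rule: A, C, B in series; every round trip adds the loop gain g = C*D
print(sp.series(A*B*C/(1 - g), g, 0, 4).removeO().subs(g, C*D))
# A*B*C + A*B*C**2*D + A*B*C**3*D**2 + A*B*C**4*D**3
</syntaxhighlight>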
Example 3: The above rules are now to be applied to our standard example. In the graphic on the left you can see the modified diagram (B).
- First we replace the red highlighted detour from S1 to S2 via S3 in the diagram (B) by the red connection T1(X,U) drawn in the diagram (C).
- According to the classification above, it is a "ring" with the labels A = C = U · X and B = X, and we obtain for the first reduction function:
::<math>T_1(X,\hspace{0.05cm}U) = \frac{U \cdot X^2}{1 - U \cdot X}\hspace{0.05cm}.</math>
- Now we combine the parallel connections highlighted in blue in diagram (C) and replace them with the blue connection in diagram (D). The second reduction function is thus:
::<math>T_2(X,\hspace{0.05cm}U) = T_1(X,\hspace{0.05cm}U) + X</math>
::<math>\Rightarrow \hspace{0.3cm} T_2(X,\hspace{0.05cm}U) = \frac{UX^2 + X \cdot (1-UX)}{1-UX} = \frac{X}{1-UX}\hspace{0.05cm}.</math>
- The entire graph (D) can then be replaced by a single connection from S0 to S0′. According to the feedback rule, one obtains for the "enhanced path weighting enumerator function":
::<math>T_{\rm enh}(X,\hspace{0.05cm}U) = \frac{(UX^2) \cdot X^2 \cdot \dfrac{X}{1-UX}}{1 - U \cdot \dfrac{X}{1-UX}} = \frac{UX^5}{1-UX-UX} = \frac{UX^5}{1-2 \cdot UX}\hspace{0.05cm}.</math>
- Using the series expansion 1/(1 − x) = 1 + x + x² + x³ + ..., this can also be written as:
::<math>T_{\rm enh}(X,\hspace{0.05cm}U) = UX^5 \cdot \big[1 + 2\,UX + (2\,UX)^2 + (2\,UX)^3 + \text{...}\big]\hspace{0.05cm}.</math>
- Setting the formal input variable U=1, we obtain the "simple path weighting enumerator function", which alone allows statements about the weighting distribution of the output sequence x_:
::<math>T(X) = X^5 \cdot \big[1 + 2\,X + 4\,X^2 + 8\,X^3 + \text{...}\big]\hspace{0.05cm}.</math>
We have already read the same result from the trellis diagram in the section "Path weighting enumerator function". There was
- one yellow path with path weight 5,
- two green paths with path weight 6,
- four paths with path weight 7 (the gray one and three paths not drawn),
- and so on.
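The three reduction steps can also be reproduced symbolically. A sympy sketch, with the valuations read off from diagram (D) (A = UX², B = X², C = T2, and D = U for the return transition S2 → S1, as can be seen in the denominator above):

<syntaxhighlight lang="python">
import sympy as sp

X, U = sp.symbols('X U')

def ring(A, B, C):
    """Ring rule: serial A, B with a self-loop C in between."""
    return A * B / (1 - C)

def feedback(A, B, C, D):
    """Feedback rule: A, C, B in series; C and D form the loop."""
    return A * B * C / (1 - C * D)

T1 = ring(U*X, X, U*X)                     # detour S1 -> S3 -> S2
T2 = sp.cancel(T1 + X)                     # parallel with the direct S1 -> S2
T_enh = sp.cancel(feedback(U*X**2, X**2, T2, U))

print(T_enh)                               # -U*X**5/(2*U*X - 1)  ==  U*X^5/(1 - 2UX)
print(sp.series(T_enh, X, 0, 9))
# U*X**5 + 2*U**2*X**6 + 4*U**3*X**7 + 8*U**4*X**8 + O(X**9)
print(sp.series(T_enh.subs(U, 1), X, 0, 9))
# X**5 + 2*X**6 + 4*X**7 + 8*X**8 + O(X**9)
</syntaxhighlight>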
Block error probability vs. burst error probability
The simple model according to the sketch is valid for linear block codes as well as for convolutional codes.
»Block codes«
For block codes, u_ = (u1, ..., ui, ..., uk) and v_ = (v1, ..., vi, ..., vk) denote the information blocks at the input and output of the system.
Thus, the following descriptive variables can be specified:
- the »block error probability« Pr(blockerror)=Pr(v_≠u_),
- the »bit error probability« Pr(biterror)=Pr(vi≠ui).
Please note: In real transmission systems, due to thermal noise the following always applies:
::<math>{\rm Pr(bit\:error)} > 0\hspace{0.05cm},\hspace{0.5cm} {\rm Pr(block\:error)} > {\rm Pr(bit\:error)}\hspace{0.05cm}.</math>
A simple attempt at an explanation: If the decoder decides exactly one bit wrong in each block of length k,
- then the average bit error probability is Pr(bit error) = 1/k,
- while for the block error probability Pr(block error) ≡ 1 holds.
»Convolutional codes«
For convolutional codes, the block error probability cannot be specified, since here u_ = (u1, u2, ...) and v_ = (v1, v2, ...) represent sequences.
Even the smallest possible code parameter k = 1 leads here to the sequence length k′ → ∞, and
- the block error probability would always result in Pr(blockerror)≡1,
- even if the bit error probability is extremely small (but non-zero).
Definition: For the »burst error probability« of a convolutional code holds:
::<math>{\rm Pr(burst\:error)} = {\rm Pr}\big\{{\rm decoder\hspace{0.15cm}leaves\hspace{0.15cm}the\hspace{0.15cm}correct\hspace{0.15cm}path\hspace{0.15cm}at\hspace{0.15cm}time\hspace{0.15cm}}t\big\}\hspace{0.05cm}.</math>
- To simplify the notation for the following derivation, we always assume the zero sequence 0_ which is shown in red in the drawn trellis as the "zero path" φ0.
- All other paths φ1, φ2, φ3, ... (and many more) leave φ0 at time t.
- They all belong to the path set Φ which is defined as "Viterbi decoder leaves the correct path at time t". This probability is calculated in the next section.
Burst error probability and Bhattacharyya bound
We proceed as in the earlier chapter "Bounds for block error probability" from the pairwise error probability Pr[φ0 ↦ φi] that the decoder could select the path φi instead of the path φ0.
All considered paths φi leave the zero path φ0 at time t; thus they all belong to the path set Φ.
The sought "burst error probability" is equal to the following union set:
- Pr(bursterror)=Pr([φ0↦φ1]∪[φ0↦φ2]∪... )
- ⇒Pr(bursterror)=Pr(∪φi∈Φ[φ0↦φi]).
- An upper bound for this is provided by the so-called Union Bound:
::<math>{\rm Pr(burst\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} {\rm Pr}[\varphi_0 \mapsto \varphi_i] = {\rm Pr(union\:bound)}\hspace{0.05cm}.</math>
- The pairwise error probability can be estimated using the Bhattacharyya bound:
::<math>{\rm Pr}[\underline{0} \mapsto \underline{x}_i] \le \beta^{w_{\rm H}(\underline{x}_i)} \hspace{0.3cm}\Rightarrow\hspace{0.3cm} {\rm Pr}[\varphi_0 \mapsto \varphi_i] \le \beta^{w(\varphi_i)}\hspace{0.05cm}.</math>
- Here,
- wH(x_i) denotes the Hamming weight of the possible encoded sequence x_i,
- w(φi) the path weight of the corresponding path φi ∈ Φ, and
- β the so-called Bhattacharyya parameter.
By »summing over all paths« and comparing with the (simple) path weighting function T(X) we get the result:
::<math>{\rm Pr(burst\:error)} \le T(X = \beta)\hspace{0.05cm},\hspace{0.5cm}{\rm with}\hspace{0.5cm} T(X) = \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} X^{w(\varphi_i)}\hspace{0.05cm}.</math>
Example 4: For our standard encoder with
- code rate R=1/2,
- memory m=2,
- D–transfer function <math>\mathbf{G}(D) = (1 + D + D^2,\ 1 + D^2)</math>,
we have obtained the following path weighting enumerator function:
::<math>T(X) = X^5 + 2 \cdot X^6 + 4 \cdot X^7 + \text{...} \hspace{0.1cm} = X^5 \cdot (1 + 2 \cdot X + 4 \cdot X^2 + \text{...})\hspace{0.05cm}.</math>
- Using the series expansion 1/(1 − x) = 1 + x + x² + x³ + ..., this can also be written as:
::<math>T(X) = \frac{X^5}{1 - 2 \cdot X}\hspace{0.05cm}.</math>
- The BSC model provides the following Bhattacharyya bound with the falsification probability ε:
::<math>{\rm Pr(burst\:error)} \le T(X = \beta) = T\big(X = 2\sqrt{\varepsilon \cdot (1-\varepsilon)}\big) = \frac{\big(2\sqrt{\varepsilon \cdot (1-\varepsilon)}\,\big)^5}{1 - 4\sqrt{\varepsilon \cdot (1-\varepsilon)}}\hspace{0.05cm}.</math>
- In the "Exercise 3.14" this equation is to be evaluated numerically.
Bit error probability and Viterbi bound
Finally, an upper bound is given for the bit error probability. According to the graph, we proceed as in [Liv10][1] and assume the following conditions:
- The zero sequence x_=0_ was sent ⇒ Path φ0.
- The duration of a path deviation ("error burst duration") is denoted by L.
- The distance between two bursts ("inter-burst time") is called N.
- Let the Hamming weight of the error burst be H.
For a rate 1/n convolutional code ⇒ k = 1 ⇒ one information bit per clock, the expected values E[L], E[N] and E[H] of the random variables defined above can be used to give an upper bound for the bit error probability:
::<math>{\rm Pr(bit\:error)} = \frac{{\rm E}[H]}{{\rm E}[L] + {\rm E}[N]} \le \frac{{\rm E}[H]}{{\rm E}[N]}\hspace{0.05cm}.</math>
It is assumed that
- the (mean) duration of an error burst is in practice much smaller than the expected distance between two bursts,
- the (mean) inter burst time E[N] is equal to the inverse of the burst error probability,
- the expected value in the numerator is estimated as follows:
::<math>{\rm E}[H] \le \frac{1}{\rm Pr(burst\:error)} \cdot \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} u(\varphi_i) \cdot \beta^{w(\varphi_i)}\hspace{0.05cm}.</math>
In deriving this bound, the "pairwise error probability" Pr[φ0→φi] and the "Bhattacharyya estimation" are used. Thus we obtain with
- the path input weighting u(φi),
- the path output weighting w(φi), and
- the Bhattacharyya parameter β
the following estimation for the bit error probability and refer to it as the »Viterbi bound«:
::<math>{\rm Pr(bit\:error)} \le \sum_{\varphi_{\hspace{0.02cm}i} \in {\it \Phi}} u(\varphi_i) \cdot \beta^{w(\varphi_i)}\hspace{0.05cm}.</math>
This intermediate result can also be represented in another form. We recall the enhanced path weighting enumerator function
::<math>T_{\rm enh}(X,\ U) = \sum_{\varphi_{\hspace{0.02cm}j} \in {\it \Phi}} X^{w(\varphi_j)} \cdot U^{u(\varphi_j)}\hspace{0.05cm}.</math>
If we differentiate this function with respect to the dummy input variable U, we get
::<math>\frac{\rm d}{{\rm d}U}\hspace{0.1cm} T_{\rm enh}(X,\ U) = \sum_{\varphi_{\hspace{0.02cm}j} \in {\it \Phi}} u(\varphi_j) \cdot X^{w(\varphi_j)} \cdot U^{u(\varphi_j)-1}\hspace{0.05cm}.</math>
Finally, if we set U = 1 for the dummy input variable, then we see the connection to the above result:
::<math>\left[\frac{\rm d}{{\rm d}U}\hspace{0.1cm} T_{\rm enh}(X,\ U)\right]_{U\hspace{0.05cm}=\hspace{0.05cm}1} = \sum_{\varphi_{\hspace{0.02cm}j} \in {\it \Phi}} u(\varphi_j) \cdot X^{w(\varphi_j)}\hspace{0.05cm}.</math>
Conclusion: The »bit error probability« of a convolutional code can be estimated in closed form using the enhanced path weighting enumerator function:
::<math>{\rm Pr(bit\:error)} \le {\rm Pr(Viterbi)} = \left[\frac{\rm d}{{\rm d}U}\hspace{0.1cm} T_{\rm enh}(X,\ U)\right]_{X\hspace{0.05cm}=\hspace{0.05cm}\beta,\ U\hspace{0.05cm}=\hspace{0.05cm}1}\hspace{0.05cm}.</math>
- One speaks of the »Viterbi bound«. Here one differentiates the enhanced path weighting enumerator function with respect to the second parameter U and then sets
- X = β, U = 1.
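For our standard encoder with Tenh(X, U) = UX⁵/(1 − 2UX), the differentiation can be carried out in closed form: one obtains Pr(Viterbi) = β⁵/(1 − 2β)². A sympy sketch verifying this and comparing it numerically with the Bhattacharyya bound of the previous section:

<syntaxhighlight lang="python">
import sympy as sp
from math import sqrt

X, U = sp.symbols('X U')
T_enh = U * X**5 / (1 - 2*U*X)

# Viterbi bound: differentiate w.r.t. U, then set U = 1
viterbi = sp.simplify(sp.diff(T_enh, U).subs(U, 1))
print(viterbi)                             # X**5/(2*X - 1)**2  ==  X^5/(1 - 2X)^2

for eps in (1e-2, 1e-3):                   # BSC falsification probability
    beta = 2 * sqrt(eps * (1 - eps))       # Bhattacharyya parameter
    bit_bound = float(viterbi.subs(X, beta))
    burst_bound = beta**5 / (1 - 2*beta)
    print(f"eps = {eps:.0e}:  Viterbi <= {bit_bound:.3e},  "
          f"Bhattacharyya <= {burst_bound:.3e}")
</syntaxhighlight>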
Example 5: The graph illustrates the good correction capability ⇒ small bit error rate (BER) of convolutional codes at the AWGN channel.
- Red circles denote the BER for our "rate 1/2 standard encoder" with memory m=2.
- Green crosses mark a convolutional code with m=6, the so-called industry standard code.
- Codes with large memory m lead to large gains over uncoded transmission.
Note:
In "Exercise 3.14", the "Viterbi bound" and the "Bhattacharyya bound" are evaluated numerically for the "rate 1/2 standard encoder" and the BSC model.
Exercises for the chapter
Exercise 3.12: Path Weighting Function
Exercise 3.12Z: Ring and Feedback
Exercise 3.13: Path Weighting Function again
Exercise 3.14: Error Probability Bounds
References
- ↑ Liva, G.: Channel Coding. Lecture manuscript, Chair of Communications Engineering, TU Munich and DLR Oberpfaffenhofen, 2010.