Symbolwise coding vs. blockwise coding
In transmission coding, a distinction is made between two fundamentally different methods:
Symbolwise coding
- Here, a code symbol cν is generated with each incoming source symbol qν, which can depend not only on the current symbol but also on previous symbols qν−1, qν−2, ...
- A common feature of all transmission codes for symbolwise coding is that the symbol duration Tc of the usually multilevel and redundant encoder signal c(t) equals the bit duration Tq of the source signal, which is assumed to be binary and redundancy-free.
Details can be found in the chapter "Symbolwise Coding with Pseudo-Ternary Codes".
Blockwise coding
- Here, a block of mq binary source symbols (Mq=2) of bit duration Tq is mapped one-to-one to a sequence of mc code symbols from an alphabet with code symbol set size Mc≥2.
- For the symbol duration of a code symbol then holds:
- T_c = \frac{m_q}{m_c} \cdot T_q \hspace{0.05cm},
- The relative redundancy of a block code is in general
- r_c = 1- \frac{R_q}{R_c} = 1- \frac{T_c}{T_q} \cdot \frac{{\rm log_2}\,(M_q)}{{\rm log_2}\,(M_c)} = 1- \frac{T_c}{T_q \cdot {\rm log_2}\,(M_c)}\hspace{0.05cm}.
More detailed information on the block codes can be found in the chapter "Block Coding with 4B3T Codes".
Example 1: For the "pseudo-ternary codes", increasing the number of levels from Mq=2 to Mc=3 for the same symbol duration (Tc=Tq) adds a relative redundancy of rc=1−1/log2(3)≈37%.
In contrast, the so-called "4B3T codes" operate at block level with the code parameters mq=4, Mq=2, mc=3 and Mc=3 and have a relative redundancy of approx. 16%. Because of Tc/Tq=4/3, the transmitted signal s(t) is lower in frequency here than for uncoded transmission, which reduces the costly bandwidth requirement and is also advantageous for many channels from a transmission point of view.
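As a quick plausibility check of these two redundancy figures, the following Python sketch evaluates the general formula rc = 1 − (mq·log2(Mq))/(mc·log2(Mc)) for the pseudo-ternary case (mq=mc=1) and for the 4B3T parameters. The function name relative_redundancy is chosen here only for illustration.

```python
from math import log2

def relative_redundancy(m_q: int, M_q: int, m_c: int, M_c: int) -> float:
    """Relative redundancy r_c = 1 - (m_q * log2(M_q)) / (m_c * log2(M_c))."""
    return 1 - (m_q * log2(M_q)) / (m_c * log2(M_c))

# Pseudo-ternary codes: symbolwise (m_q = m_c = 1), binary source, ternary code symbols
print(f"pseudo-ternary: r_c = {relative_redundancy(1, 2, 1, 3):.1%}")   # ~36.9 %

# 4B3T block code: four binary symbols are mapped to three ternary symbols
print(f"4B3T:           r_c = {relative_redundancy(4, 2, 3, 3):.1%}")   # ~15.9 %
```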
Quaternary signal with rc≡0 and ternary signal with rc≈0
A special case of a block code is a redundancy-free multilevel code.
- Starting from the redundancy-free binary source signal q(t) with bit duration Tq,
- an Mc–level code signal c(t) with symbol duration Tc=Tq⋅log2(Mc) is generated.
Thus, the relative redundancy is given by:
- r_c = 1- \frac{T_c}{T_q \cdot {\rm log_2}\,(M_c)} = 1- \frac{m_q}{m_c \cdot {\rm log_2}\,(M_c)} \rightarrow 0\hspace{0.05cm}.
Thereby holds:
- If Mc is a power of two, then mq=log2(Mc) binary symbols are combined into a single code symbol (mc=1). In this case, the relative redundancy is actually rc=0.
- If Mc is not a power of two, completely redundancy-free block coding is not possible. For example, if mq=3 binary symbols are encoded by mc=2 ternary symbols and Tc=1.5⋅Tq is set, a relative redundancy of rc=1−1.5/log2(3)≈5% remains (see the numerical sketch after this list).
- Encoding a block of 128 binary symbols with 81 ternary symbols results in a relative code redundancy of less than rc=0.3%.
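The residual redundancy of the two ternary examples just mentioned can be reproduced with a few lines of Python; the helper block_redundancy assumes a binary, redundancy-free source (Mq=2) and is named only for this sketch.

```python
from math import log2

def block_redundancy(m_q: int, m_c: int, M_c: int) -> float:
    """r_c = 1 - m_q / (m_c * log2(M_c)) for a binary, redundancy-free source."""
    return 1 - m_q / (m_c * log2(M_c))

# 3 binary symbols -> 2 ternary symbols  (T_c = 1.5 * T_q)
print(f"3B2T:    r_c = {block_redundancy(3, 2, 3):.2%}")     # ~5.4 %

# 128 binary symbols -> 81 ternary symbols
print(f"128B81T: r_c = {block_redundancy(128, 81, 3):.2%}")  # ~0.3 %

# Sanity check: there must be at least as many code words as source blocks
assert 3**2 >= 2**3 and 3**81 >= 2**128
```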
To simplify the notation and to align the nomenclature with the "first main chapter", we use in the following
- the bit duration TB=Tq of the redundancy-free binary source signal,
- the symbol duration T=Tc of the encoder signal and the transmitted signal, and
- the number M=Mc of levels.
This results in the identical form for the transmitted signal as for the binary transmission, but with different amplitude coefficients:
- s(t) = \sum_{\nu = -\infty}^{+\infty} a_\nu \cdot g_s(t - \nu \cdot T)\hspace{0.4cm}{\rm with}\hspace{0.4cm} a_\nu \in \{a_1, \text{...}, a_\mu , \text{...}, a_M\}\hspace{0.05cm}.
- In principle, the amplitude coefficients aν can be assigned arbitrarily – but uniquely – to the encoder symbols cν. It is convenient to choose equal distances between adjacent amplitude coefficients.
- Thus, for bipolar signaling (−1≤aν≤+1), the following applies to the possible amplitude coefficients with index μ=1, ... , M:
- a_\mu = \frac{2\mu - M - 1}{M-1}\hspace{0.05cm}.
- Independent of the level number M, this yields the outer amplitude coefficients a1=−1 and aM=+1; see also the short sketch after this list.
- For a ternary signal (M=3), the possible amplitude coefficients are −1, 0 and +1.
- For a quaternary signal (M=4), the coefficients are −1, −1/3, +1/3 and +1.
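The following minimal sketch simply evaluates the coefficient formula aμ=(2μ−M−1)/(M−1) for M=3 and M=4 and reproduces the values just listed.

```python
def amplitude_coefficients(M: int) -> list[float]:
    """Equidistant bipolar amplitude coefficients a_mu = (2*mu - M - 1) / (M - 1)."""
    return [(2 * mu - M - 1) / (M - 1) for mu in range(1, M + 1)]

print(amplitude_coefficients(3))   # [-1.0, 0.0, 1.0]
print(amplitude_coefficients(4))   # [-1.0, -0.333..., 0.333..., 1.0]
```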
Example 2: The graphic above shows the quaternary redundancy-free transmitted signal s4(t) with the possible amplitude coefficients ±1 and ±1/3, which results from the binary source signal q(t) shown in the center.
- Two binary symbols each are combined to a quaternary coefficient according to the table with red background. The symbol duration T of the signal s4(t) is twice the bit duration TB (previously: Tq) of the source signal.
- If q(t) is redundancy-free, the resulting quaternary signal is also redundancy-free, i.e., the possible amplitude coefficients ±1 and ±1/3 are equally probable and there are no statistical dependencies within the sequence ⟨aν⟩.
The lower plot shows the (almost) redundancy-free ternary signal s3(t) and the mapping of three binary symbols each to two ternary symbols.
- The possible amplitude coefficients are −1, 0 and +1, and the symbol duration of the coded signal is T=3/2⋅TB.
- It can be seen from the green mapping table that the coefficients +1 and −1 occur somewhat more frequently than the coefficient aν=0. This results in the above-mentioned relative redundancy of 5%.
- However, from the very short signal section – only eight ternary symbols corresponding to twelve binary symbols – this property is not apparent.
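Since the red mapping table itself is not reproduced here, the following Python sketch assumes a natural (dual) assignment of two bits to one quaternary coefficient; the seed and the sequence length are arbitrary. It merely illustrates how a redundancy-free quaternary coefficient sequence with T=2⋅TB emerges from the binary source sequence.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
bits = rng.integers(0, 2, size=12)        # redundancy-free binary source sequence

# Assumed dual mapping: two bits -> one quaternary coefficient from {-1, -1/3, +1/3, +1}
dual_map = {(0, 0): -1.0, (0, 1): -1/3, (1, 0): +1/3, (1, 1): +1.0}
a4 = [dual_map[(bits[2*k], bits[2*k + 1])] for k in range(len(bits) // 2)]

print("source bits    :", bits.tolist())
print("quaternary a_nu:", [round(a, 3) for a in a4])   # symbol duration T = 2 * T_B
```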
ACF and PSD of a multilevel signal
For a redundancy-free coded M–level bipolar digital signal s(t), the following holds for the "discrete auto-correlation function" (ACF) of the amplitude coefficients and for the corresponding "power-spectral density" (PSD):
- \varphi_a(\lambda) = \begin{cases} \frac{M+1}{3 \cdot (M-1)} & {\rm for}\ \lambda = 0, \\ 0 & {\rm for}\ \lambda \ne 0 \end{cases} \hspace{0.4cm}\Rightarrow \hspace{0.4cm} \Phi_a(f) = \frac{M+1}{3 \cdot (M-1)} = {\rm const.}
Considering the spectral shaping by the basic transmission pulse gs(t) with spectrum Gs(f), we obtain:
- \varphi_s(\tau) = \frac{M+1}{3 \cdot (M-1)} \cdot \varphi^{^{\bullet}}_{gs}(\tau) \hspace{0.3cm} \circ\!\!-\!\!\bullet \hspace{0.3cm} \Phi_s(f) = \frac{M+1}{3 \cdot (M-1)} \cdot |G_s(f)|^2\hspace{0.05cm}.
One can see from these equations:
- In the case of redundancy-free multilevel coding, the shape of ACF and PSD is determined solely by the basic transmission pulse gs(t).
- For the same pulse shape, the magnitude of the ACF is lower than that of the redundancy-free binary signal by the factor φa(λ=0)=E[aν²]=(M+1)/(3⋅(M−1)).
- This factor describes the lower signal power of the multilevel signal due to the M−2 inner amplitude coefficients. For M=3 this factor is equal to 2/3, for M=4 it is equal to 5/9.
- However, a fair comparison between binary and multilevel signal with the same information flow (same equivalent bit rate) should also take into account the different symbol durations. This shows that a multilevel signal requires less bandwidth than the binary signal due to the narrower PSD when the same information is transmitted.
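The factor (M+1)/(3⋅(M−1)) can be verified by simulation: the sketch below draws equally probable, statistically independent amplitude coefficients and compares the empirical mean of aν² with the theoretical ACF value φa(0). Sequence length and seed are arbitrary.

```python
import numpy as np

def phi_a_theory(M: int) -> float:
    """Discrete ACF value phi_a(0) = E[a_nu^2] for redundancy-free M-level coding."""
    return (M + 1) / (3 * (M - 1))

rng = np.random.default_rng(seed=0)
for M in (2, 3, 4):
    a_mu = np.array([(2 * mu - M - 1) / (M - 1) for mu in range(1, M + 1)])
    a_nu = rng.choice(a_mu, size=200_000)      # equally probable, independent coefficients
    print(f"M={M}: simulated E[a^2] = {np.mean(a_nu**2):.3f}, "
          f"theory (M+1)/(3(M-1)) = {phi_a_theory(M):.3f}")
```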
Example 3: We assume a binary source with bit rate RB=1 Mbit/s, so that the bit duration TB=1 µs.
- For binary transmission (M=2), the symbol duration of the transmitted signal is T=TB and the auto-correlation function shown in blue in the left graph results for NRZ rectangular pulses (assuming s0²=10 mW).
- For the quaternary system (M=4), the ACF is also triangular, but lower by a factor of 5/9 and twice as wide because of T=2⋅TB.
The sinc²–shaped power-spectral density in the binary case (blue curve) has the maximum value Φs(f=0)=10⁻⁸ W/Hz (the area of the blue ACF triangle) for the signal parameters selected here. The first zero is at f=1 MHz.
- The PSD of the quaternary signal (red curve) is only half as wide and slightly higher: Φs(f=0)≈1.1⋅10⁻⁸ W/Hz.
- This value corresponds to the area of the red ACF triangle, which compared to the blue one is lower (factor 5/9≈0.55) and wider (factor 2); the short calculation below confirms both PSD values.
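A minimal check of the two PSD maxima quoted in this example, using only the given parameters (TB = 1 µs, s0² = 10 mW) and the triangle areas:

```python
from math import log2

T_B = 1e-6        # bit duration for R_B = 1 Mbit/s
s0_sq = 10e-3     # s0^2 = 10 mW

for M, label in ((2, "binary"), (4, "quaternary")):
    T = T_B * log2(M)                          # symbol duration of the NRZ signal
    phi_0 = (M + 1) / (3 * (M - 1)) * s0_sq    # ACF value at tau = 0 (triangle height)
    psd_0 = phi_0 * T                          # triangle area = Phi_s(f = 0)
    print(f"{label:10s}: phi_s(0) = {phi_0 * 1e3:.2f} mW,  Phi_s(0) = {psd_0:.2e} W/Hz")
```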
Error probability of a multilevel system
The diagram on the right shows the eye diagrams
- of a binary transmission system (M=2),
- a ternary transmission system (M=3) and
- a quaternary transmission system (M=4).
Here, a cosine rolloff characteristic is assumed for the overall system HS(f)⋅HK(f)⋅HE(f) of transmitter, channel and receiver, so that intersymbol interference does not play a role. The rolloff factor is r=0.5. The noise is assumed to be negligible.
The eye diagram is used to estimate intersymbol interference. A detailed description follows in the section "Definition and statements of the eye diagram". However, the following text should be understandable even without detailed knowledge.
It can be seen from the above diagrams:
- In the binary system (M=2), there is only one decision threshold: E1=0. A transmission error occurs if the noise component dN(TD) at the detection time is greater than +s0 (if dS(TD)=−s0 ) or if dN(TD) is less than −s0 (if dS(TD)=+s0 ).
- In the case of the ternary system (M=3), two eye openings and two decision thresholds E1=−s0/2 and E2=+s0/2 can be recognized. The distance of the possible useful detection signal values dS(TD) from the nearest threshold is s0/2 in each case. The outer amplitude values (dS(TD)=±s0) can only be falsified in one direction each, while dS(TD)=0 is limited by two thresholds.
- Accordingly, an amplitude coefficient aν=0 is falsified twice as often as aν=+1 or aν=−1. For AWGN noise with rms value σd and equally probable amplitude coefficients, the following holds for the "symbol error probability" according to the section "Definition of the bit error probability":
- p_{\rm S} = \frac{1}{3} \cdot \left[ {\rm Q}\left(\frac{s_0/2}{\sigma_d}\right) + 2 \cdot {\rm Q}\left(\frac{s_0/2}{\sigma_d}\right) + {\rm Q}\left(\frac{s_0/2}{\sigma_d}\right) \right] = \frac{4}{3} \cdot {\rm Q}\left(\frac{s_0/2}{\sigma_d}\right)\hspace{0.05cm}.
- Please note that this equation no longer specifies the bit error probability pB, but the "symbol error probability" pS. The corresponding a posteriori parameters are "bit error rate" (BER) and "symbol error rate" (SER). More details are given in the "last section" of this chapter.
For the quaternary system (M=4) with the possible amplitude values ±s0 and ±s0/3,
- there are three eye-openings, and
- thus also three decision thresholds at E1=−2s0/3, E2=0 and E3=+2s0/3.
Taking into account the occurrence probabilities (1/4 for equally probable symbols) and the six possibilities of falsification (see arrows in the graph), we obtain:
- p_{\rm S} = \frac{6}{4} \cdot {\rm Q}\left(\frac{s_0/3}{\sigma_d}\right)\hspace{0.05cm}.
Conclusion: In general, the symbol error probability for M–level digital signal transmission is:
- p_{\rm S} = \frac{2 + 2 \cdot (M-2)}{M} \cdot {\rm Q}\left(\frac{s_0/(M-1)}{\sigma_d^{(M)}}\right) = \frac{2 \cdot (M-1)}{M} \cdot {\rm Q}\left(\frac{s_0}{\sigma_d^{(M)} \cdot (M-1)}\right)\hspace{0.05cm}.
- The notation σd(M) is intended to make clear that the rms value of the noise component dN(t) depends significantly on the number M.
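A small Python helper for this boxed result, written here only as an illustration: Q(x) is expressed via the complementary error function, and p_S is evaluated for an arbitrarily chosen ratio s0/σd = 8.

```python
from math import sqrt, erfc

def Q(x: float) -> float:
    """Complementary Gaussian error integral Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def p_S(M: int, s0_over_sigma: float) -> float:
    """Symbol error probability p_S = 2(M-1)/M * Q( s0 / ((M-1) * sigma_d) )."""
    return 2 * (M - 1) / M * Q(s0_over_sigma / (M - 1))

for M in (2, 3, 4):
    print(f"M={M}: p_S = {p_S(M, 8.0):.2e}")   # example: s0/sigma_d = 8
```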
Comparison between binary system and multilevel system
For this system comparison under fair conditions, the following are assumed:
- Let the equivalent bit rate RB=1/TB be constant. Depending on the number M, the symbol duration of the encoder signal and the transmitted signal is thus:
- T = T_{\rm B} \cdot {\rm log_2}\,(M)\hspace{0.05cm}.
- The Nyquist condition is satisfied by a "root–root characteristic" with rolloff factor r. Furthermore, no intersymbol interference occurs. The detection noise power is:
- \sigma_d^2 = \frac{N_0}{2T}\hspace{0.05cm}.
- The comparison of the symbol error probabilities pS is performed for "power limitation". The energy per bit for M–level transmission is:
- E_{\rm B} = \frac{M+1}{3 \cdot (M-1)} \cdot s_0^2 \cdot T_{\rm B}\hspace{0.05cm}.
Substituting these equations into the general result of the last section, we obtain for the symbol error probability:
- p_{\rm S} = \frac{2 \cdot (M-1)}{M} \cdot {\rm Q}\left(\sqrt{\frac{s_0^2/(M-1)^2}{\sigma_d^2}}\right) = \frac{2 \cdot (M-1)}{M} \cdot {\rm Q}\left(\sqrt{\frac{3 \cdot {\rm log_2}\,(M)}{M^2-1} \cdot \frac{2 \cdot E_{\rm B}}{N_0}}\right)
- \Rightarrow \hspace{0.3cm} p_{\rm S} = K_1 \cdot {\rm Q}\left(\sqrt{K_2 \cdot \frac{2 \cdot E_{\rm B}}{N_0}}\right)\hspace{0.05cm}.
For M=2, set K1=K2=1. For larger level numbers, one obtains for the symbol error probability that can be achieved with M–level redundancy-free coding:
- M=3: K1=1.333, K2=0.594;   M=4: K1=1.500, K2=0.400;
- M=5: K1=1.600, K2=0.290;   M=6: K1=1.666, K2=0.221;
- M=7: K1=1.714, K2=0.175;   M=8: K1=1.750, K2=0.143.
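The listed constants follow directly from K1 = 2⋅(M−1)/M and K2 = 3⋅log2(M)/(M²−1); the following short sketch reproduces the table.

```python
from math import log2

def K1(M: int) -> float:
    return 2 * (M - 1) / M

def K2_power(M: int) -> float:
    """Power limitation: K2 = 3 * log2(M) / (M^2 - 1)."""
    return 3 * log2(M) / (M**2 - 1)

for M in range(3, 9):
    print(f"M={M}: K1 = {K1(M):.3f}, K2 = {K2_power(M):.3f}")
```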
The graph summarizes the results for M–level redundancy-free coding.
- Plotted are the symbol error probabilities pS over the abscissa 10⋅lg(EB/N0).
- All systems are optimal for the respective M, assuming the AWGN channel and power limitation.
- Due to the double logarithmic representation chosen here, a K2 value smaller than 1 leads to a parallel shift of the error probability curve to the right.
- If K1>1 applies, the curve shifts upwards compared to the binary system (K1=1).
System comparison under the constraint of power limitation: The above curves can be interpreted as follows:
- Regarding symbol error probability, the binary system (M=2) is superior to the multilevel systems. Already with 10⋅lg(EB/N0)=12 dB one reaches pS<10⁻⁸. For the quaternary system (M=4), 10⋅lg(EB/N0)>16 dB must be expended to reach the same symbol error probability pS=10⁻⁸ (see the short check after this list).
- However, this statement is valid only for a distortion-free channel, i.e., for HK(f)=1. For distorting transmission channels, on the other hand, a higher-level system can provide a significant improvement because of the significantly smaller noise component of the detection signal (after the equalizer).
- For the AWGN channel, the only advantage of a higher-level transmission is the lower bandwidth requirement due to the smaller equivalent bit rate, which plays only a minor role in baseband transmission in contrast to digital carrier frequency systems, e.g. "quadrature amplitude modulation" (QAM).
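The 12 dB and 16 dB figures mentioned in the first bullet can be reproduced by numerically inverting pS = K1⋅Q(√(K2⋅2⋅EB/N0)); the bisection below is only a quick sketch, not an optimized solver.

```python
from math import sqrt, erfc, log10, log2

def Q(x: float) -> float:
    return 0.5 * erfc(x / sqrt(2))

def required_EbN0_dB(M: int, target_pS: float = 1e-8) -> float:
    """Numerically invert p_S = K1 * Q(sqrt(K2 * 2*EB/N0)) for power limitation."""
    K1 = 2 * (M - 1) / M
    K2 = 3 * log2(M) / (M**2 - 1)
    lo, hi = 0.0, 1e4                        # search interval for EB/N0 (linear scale)
    for _ in range(100):                     # simple bisection
        mid = 0.5 * (lo + hi)
        if K1 * Q(sqrt(K2 * 2 * mid)) > target_pS:
            lo = mid                         # error probability still too high
        else:
            hi = mid
    return 10 * log10(hi)

for M in (2, 4):
    print(f"M={M}: 10*lg(EB/N0) = {required_EbN0_dB(M):.2f} dB for p_S = 1e-8")
```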
System comparison under the peak limitation constraint:
- With the constraint "peak limitation", the combination of rectangular gs(t) and rectangular hE(t) leads to the optimum regardless of the level number M.
- The loss of the multilevel system compared to the binary system is here even greater than with power limitation.
- This can be seen from the factor K2 decreasing with M, for which then applies:
- p_{\rm S} = K_1 \cdot {\rm Q}\left(\sqrt{K_2 \cdot \frac{2 \cdot s_0^2 \cdot T_{\rm B}}{N_0}}\right) \hspace{0.4cm}{\rm with}\hspace{0.4cm} K_2 = \frac{{\rm log_2}\,(M)}{(M-1)^2}\hspace{0.05cm}.
- The constant K1 is unchanged from the above specification for power limitation, while K2 is smaller by a factor of 3:
- M=3: K1=1.333, K2=0.198;   M=4: K1=1.500, K2=0.133;
- M=5: K1=1.600, K2=0.097;   M=6: K1=1.666, K2=0.074;
- M=7: K1=1.714, K2=0.058;   M=8: K1=1.750, K2=0.048.
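Following the earlier statement that a value K2 < 1 shifts the error probability curve to the right in the chosen representation, the tabulated K2 values translate into the following horizontal shifts relative to the binary system; the sketch uses only the values given in the table above.

```python
from math import log10

# K2 values for peak limitation, as tabulated above
K2_peak = {3: 0.198, 4: 0.133, 5: 0.097, 6: 0.074, 7: 0.058, 8: 0.048}

for M, K2 in K2_peak.items():
    # A value K2 < 1 shifts the curve to the right by 10 * lg(1/K2) dB
    print(f"M={M}: shift of about {10 * log10(1 / K2):.1f} dB relative to M=2")
```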
Symbol and bit error probability
In a multilevel transmission system, one must distinguish between the "symbol error probability" and the "bit error probability", which are given here both as ensemble averages and as time averages:
- The symbol error probability refers to the M–level and possibly redundant sequences ⟨cν⟩ and ⟨wν⟩:
- p_{\rm S} = \overline{{\rm Pr} (w_\nu \ne c_\nu)} = \lim_{N \to \infty} \frac{1}{N} \cdot \sum \limits^{N} _{\nu = 1} {\rm Pr} (w_\nu \ne c_\nu) \hspace{0.05cm}.
- The bit error probability describes the falsifications with respect to the binary sequences \langle q_\nu \rangle and \langle v_\nu \rangle of source and sink:
- p_{\rm B} = \overline{{\rm Pr} (v_\nu \ne q_\nu)} = \lim_{N \to \infty} \frac{1}{N} \cdot \sum \limits^{N} _{\nu = 1} {\rm Pr} (v_\nu \ne q_\nu) \hspace{0.05cm}.
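Both definitions are time averages and can therefore be estimated directly from simulated symbol sequences. The following sketch uses a purely hypothetical error model (each quaternary symbol is replaced by a random other symbol with probability 0.01) and an assumed dual mapping, solely to show how SER and BER estimates are formed from the sequences ⟨cν⟩, ⟨wν⟩ and ⟨qν⟩, ⟨vν⟩.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 100_000

# Hypothetical quaternary transmission: symbols 0..3, each falsified to a random
# other symbol with probability 0.01 (only to illustrate the two estimators)
c = rng.integers(0, 4, size=N)                              # encoder symbols <c_nu>
hit = rng.random(N) < 0.01
w = np.where(hit, (c + rng.integers(1, 4, size=N)) % 4, c)  # sink-side symbols <w_nu>

# Assumed dual mapping: each quaternary symbol carries two bits (MSB, LSB)
q = np.stack(((c >> 1) & 1, c & 1), axis=1).ravel()         # source bits <q_nu>
v = np.stack(((w >> 1) & 1, w & 1), axis=1).ravel()         # sink bits   <v_nu>

print("SER estimate:", np.mean(w != c))   # time average of Pr(w_nu != c_nu)
print("BER estimate:", np.mean(v != q))   # time average of Pr(v_nu != q_nu)
```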
The diagram illustrates these two definitions and is also valid for the next chapters. The block "coder" causes
- in the present chapter a redundancy-free coding,
- in the "following chapter" a blockwise transmission coding, and finally
- in the "last chapter" symbolwise coding with pseudo-ternary codes.
Conclusion:
- For multilevel and/or coded transmission, a distinction must be made between the bit error probability p_{\rm B} and the symbol error probability p_{\rm S}. Only in the case of the redundancy-free binary system does p_{\rm B} = p_{\rm S} apply.
- In general, the symbol error probability p_{\rm S} can be calculated somewhat more easily than the bit error probability p_{\rm B} for redundancy-containing multilevel systems.
- However, a comparison of systems with different level numbers M or different types of coding should always be based on the bit error probability p_{\rm B} for reasons of fairness. The mapping between the source and coded symbols must also be taken into account, as shown in the following example.
Example 4: We consider a quaternary transmission system whose transmission behavior can be characterized as follows (see left sketch in the graphic):
- The falsification probability to a neighboring symbol is
- p={\rm Q}\big [s_0/(3\sigma_d)\big ].
- A falsification to a non-adjacent symbol is excluded.
- The model takes into account the two falsification possibilities of the inner symbols.
For equally probable binary source symbols q_\nu the quaternary code symbols c_\nu also occur with equal probability. Thus, we obtain for the symbol error probability:
- p_{\rm S} ={1}/{4}\cdot (2 \cdot p + 2 \cdot 2 \cdot p) = {3}/{2} \cdot p\hspace{0.05cm}.
To calculate the bit error probability, one must also consider the mapping between the binary and the quaternary symbols:
- In "dual coding" according to the table with yellow background, one symbol error (w_\nu \ne c_\nu) can result in one or two bit errors (v_\nu \ne q_\nu). Of the six corruption possibilities at the quaternary symbol level, four result in one bit error each and only the two inner ones result in two bit errors. It follows:
- p_{\rm B} = {1}/{4}\cdot (4 \cdot 1 \cdot p + 2 \cdot 2 \cdot p ) \cdot {1}/{2} = p\hspace{0.05cm}.
- The factor 1/2 takes into account that a quaternary symbol contains two binary symbols.
- In contrast, in the so-called "Gray coding" according to the table with green background, the mapping between the binary symbols and the quaternary symbols is chosen in such a way that each symbol error results in exactly one bit error. From this follows:
- p_{\rm B} = {1}/{4}\cdot (4 \cdot 1 \cdot p + 2 \cdot 1 \cdot p ) \cdot {1}/{2} = {3}/{4} \cdot p\hspace{0.05cm}.
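The three results of this example can also be checked by a short simulation. The sketch below implements the neighbor-error model described above (each adjacent threshold is crossed with probability p, inner symbols having two neighbors) and assumes the usual dual and Gray tables, since the colored tables themselves are not reproduced here; p = 0.01, the seed and the sequence length are arbitrary.

```python
import numpy as np

p = 0.01                      # falsification probability to one neighboring symbol
rng = np.random.default_rng(seed=3)
N = 200_000

dual = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}   # assumed dual (natural) mapping
gray = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # assumed Gray mapping

c = rng.integers(0, 4, size=N)            # equally probable quaternary symbols
u = rng.random(N)
up   = (u < p) & (c < 3)                  # falsification to the upper neighbor
down = (u >= p) & (u < 2 * p) & (c > 0)   # falsification to the lower neighbor
w = c + up.astype(int) - down.astype(int)

print("p_S :", np.mean(w != c), "   expected 3/2*p =", 1.5 * p)

for name, mapping in (("dual", dual), ("Gray", gray)):
    bits_c = np.array([mapping[x] for x in c]).ravel()
    bits_w = np.array([mapping[x] for x in w]).ravel()
    print(f"p_B ({name}):", np.mean(bits_w != bits_c))
# expected: p_B(dual) = p = 0.01,   p_B(Gray) = 3/4*p = 0.0075
```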
Exercises for the chapter
Exercise 2.3: Binary Signal and Quaternary Signal
Exercise 2.4: Dual Code and Gray Code
Exercise 2.4Z: Error Probabilities for the Octal System
Exercise 2.5: Ternary Signal Transmission