
Difference between revisions of "Modulation Methods/Pulse Code Modulation"

From LNTwww
Revision as of 11:24, 8 April 2022

# OVERVIEW OF THE FOURTH MAIN CHAPTER #


The fourth chapter deals with the digital modulation methods  »amplitude shift keying«  (ASK),  »phase shift keying«  (PSK)  and  »frequency shift keying«  (FSK)  as well as some modifications derived from them.  Most of the properties of the analog modulation methods mentioned in the last two chapters still apply.  Differences result from the now required  »decision component«  of the receiver.

We restrict ourselves here essentially to the  »system-theoretical and transmission aspects«.  The error probability is given only for ideal conditions.  The derivations and the consideration of non-ideal boundary conditions can be found in the book  "Digital Signal Transmission".

The following topics are treated in detail:

  • the  »pulse code modulation«  (PCM)  and its components  "Sampling"  –  "Quantization"  –   "Coding",
  • the  »linear modulation«  ASK, BPSK and DPSK  and associated demodulators,
  • the  »quadrature amplitude modulation«  (QAM)  and more complicated signal space mappings,
  • the  »frequency shift keying«  (FSK)  as an example of non-linear digital modulation,
  • the FSK with  »continuous phase matching«  (CPM),  especially the  (G)MSK  method.


Principle and block diagram


Almost all modulation methods used today work digitally.  Their advantages have already been mentioned in the  first chapter  of this book.  The first concept for digital signal transmission was already developed in 1938 by  Alec Reeves  and has also been used in practice since the 1960s under the name  "Pulse Code Modulation"  (PCM).  Even though many of the digital modulation methods conceived in recent years differ from PCM in detail,  it is very well suited to explain the principle of all these methods.

The task of the PCM system is

  • to convert the analog source signal  q(t)  into the binary signal  qC(t)  – this process is also called   A/D conversion,
  • to transmit this signal over the channel,  where the receiver-side signal  vC(t)  is also binary because of the decision,
  • to reconstruct from the binary signal  vC(t)  the analog  (continuous-value as well as continuous-time)  sink signal  v(t)    ⇒   D/A conversion.
Principle of Pulse Code Modulation  (PCM)

q(t) ∘—∙ Q(f)   ⇒   source signal   (from German:  "Quellensignal"),  analog
qA(t) ∘—∙ QA(f)   ⇒   sampled source signal   (from German:  "abgetastet"   ⇒   "A")
qQ(t) ∘—∙ QQ(f)   ⇒   quantized source signal   (from German:  "quantisiert"   ⇒   "Q")
qC(t) ∘—∙ QC(f)   ⇒   coded source signal   (from German:  "codiert"   ⇒   "C"),  binary
s(t) ∘—∙ S(f)   ⇒   transmitted signal   (from German:  "Sendesignal"),  digital
n(t)   ⇒   noise signal,  characterized by the power-spectral density  Φn(f),  analog
r(t) = s(t) ∗ hK(t) + n(t)   ⇒   received signal,  hK(t) ∘—∙ HK(f),  analog
  Note:   Spectrum  R(f)  cannot be specified due to the stochastic component  n(t).
vC(t) ∘—∙ VC(f)   ⇒   signal after decision,  binary
vQ(t) ∘—∙ VQ(f)   ⇒   signal after PCM decoding,  M–level
  Note:   On the receiver side,  there is no counterpart to  "Quantization".
v(t) ∘—∙ V(f)   ⇒   sink signal,  analog


Further it should be noted to this PCM block diagram:

  • The PCM transmitter  ("A/D converter")  is composed of three function blocks  Sampling - Quantization - PCM Coding  which will be described in more detail in the next sections.
  • The gray-background block  "Digital Transmission System"  shows  "transmitter"  (modulation),  "receiver"  (with decision unit),  and  "analog transmission channel"   ⇒   channel frequency response  HK(f)  and noise power-spectral density  Φn(f).
  • This block is covered in the first three chapters of the book  Digital Signal Transmission.  In chapter 5 of the same book,  you will find  Digital Channel Models  that phenomenologically describe the transmission behavior using the signals  qC(t)  and  vC(t).
  • Further, it can be seen from the block diagram that there is no equivalent for  "quantization"  at the receiver-side.  Therefore,  even with error-free transmission,  i.e.,  for  vC(t)=qC(t),  the analog sink signal  v(t)  will differ from the source signal  q(t).
  • As a measure of the quality of the digital transmission system,  we use the  Signal-to-Noise Power Ratio   ⇒   in short:   Sink-SNR  as the quotient of the powers of source signal  q(t)  and error signal  ε(t) = v(t) − q(t):
$$\rho_v = \frac{P_q}{P_\varepsilon}\quad{\rm with}\quad P_q = \overline{[q(t)]^2},\quad P_\varepsilon = \overline{[v(t)-q(t)]^2}.$$
  • Here,  an ideal amplitude matching is assumed,  so that in the ideal case  (that is:   sampling according to the sampling theorem,  best possible signal reconstruction,  infinitely fine quantization)  the sink signal  v(t)  would exactly match the source signal  q(t).


⇒   We would like to refer you already here to the three-part  (German language)  learning video  "Pulse Code Modulation"  which contains all aspects of PCM.  Its principle is explained in detail in the first part of the video.

Sampling and signal reconstruction


Sampling  – that is, time discretization of the analog signal  q(t) –  was covered in detail in the chapter  "Discrete-Time Signal Representation"  of the book  "Signal Representation."  Here follows a brief summary of that section.

Time domain representation of sampling

The graph illustrates the sampling in the time domain: 

  • The  (blue)  source signal  q(t)  is  "continuous-time",  the (green) signal sampled at a distance  TA  is  "discrete-time". 
  • The sampling can be represented by multiplying the analog signal  q(t)  by the  Dirac comb in the time domain  ⇒   pδ(t):
$$q_{\rm A}(t) = q(t)\cdot p_\delta(t)\quad{\rm with}\quad p_\delta(t) = \sum_{\nu=-\infty}^{+\infty} T_{\rm A}\cdot\delta(t-\nu\cdot T_{\rm A}).$$
  • The Dirac delta function at  t = ν·TA  has the weight  TA·q(ν·TA).  Since  δ(t)  has the unit  "1/s",  qA(t)  has the same unit as  q(t),  e.g.  "V".
  • The Fourier transform of the Dirac comb  pδ(t)  is also a Dirac comb,  but now in the frequency domain   ⇒   Pδ(f).  The spacing of the individual Dirac delta lines is  fA=1/TA,  and all weights of  Pδ(f)  are  1:
$$p_\delta(t) = \sum_{\nu=-\infty}^{+\infty} T_{\rm A}\cdot\delta(t-\nu\cdot T_{\rm A})\quad\circ\!-\!\bullet\quad P_\delta(f) = \sum_{\mu=-\infty}^{+\infty} \delta(f-\mu\cdot f_{\rm A}).$$
  • The spectrum  QA(f)  of the sampled source signal  qA(t)  is obtained from the  Convolution Theorem,  where  Q(f) ∘—∙ q(t):
$$Q_{\rm A}(f) = Q(f)\star P_\delta(f) = \sum_{\mu=-\infty}^{+\infty} Q(f-\mu\cdot f_{\rm A}).$$

⇒   We refer you to part 2 of the  (German language)  learning video  "Pulse Code Modulation"  which explains sampling and signal reconstruction in terms of system theory.
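The periodization of the spectrum and the ideal reconstruction can also be checked numerically. The following Python sketch (our own illustration, not from the original text; the example signal q(t), the finite window of 801 samples and all variable names are assumptions) samples a signal band-limited to 5 kHz at fA = 20 kHz and recovers an intermediate value by sinc interpolation, which corresponds to the ideal rectangular low-pass:

```python
import numpy as np

# Sketch of ideal sampling and sinc reconstruction (our own illustration).
# Parameters as in Example 1: f_N,max = 5 kHz, fA = 20 kHz => TA = 50 us.
f_A = 20e3
T_A = 1 / f_A

def q(t):
    # example source signal, band-limited to 5 kHz (an assumption)
    return np.cos(2 * np.pi * 3e3 * t) + 0.5 * np.sin(2 * np.pi * 5e3 * t)

nu = np.arange(-400, 401)          # finite window of sample indices
samples = q(nu * T_A)              # the values q(nu*TA)

def v(t):
    # ideal rectangular low-pass  <=>  sinc interpolation of the samples
    return float(np.sum(samples * np.sinc((t - nu * T_A) / T_A)))

t0 = 1.23e-4                       # arbitrary instant between sampling points
print(q(t0), v(t0))                # the two values agree closely
```

Because the interpolation window is finite, the reconstruction here is only approximate; with an infinite sum, v(t) = q(t) would hold exactly, as stated by the sampling theorem.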

Example 1:  The graph schematically shows the spectrum  Q(f)  of an analog source signal  q(t)  with frequencies up to  fN, max=5 kHz.

Periodic continuation of the spectrum by sampling
  • If one samples  q(t)  with the sampling rate  fA = 20 kHz  (i.e., at the distance  TA = 50 µs),  one obtains the periodic spectrum  QA(f)  sketched in green.


  • Since the Dirac delta functions are infinitely narrow,  qA(t)  also contains arbitrarily high frequency components;  accordingly,  QA(f)  extends to infinity  (middle graph).


  • Drawn below  (in red)  is the spectrum  QA(f)  of the sampled source signal for the sampling parameters  TA=100 µs   ⇒   fA=10 kHz.


Conclusion:  From this example,  the following important lessons can be learned regarding sampling:

  1. If  Q(f)  contains frequencies up to  fN,max,  then according to the  Sampling Theorem  the sampling rate  fA ≥ 2·fN,max  should be chosen.  With a smaller sampling rate  fA  (thus larger spacing  TA),  overlaps of the periodized spectra occur,  i.e. irreversible distortions.
  2. If exactly  fA = 2·fN,max,  as in the lower graph of  Example 1,  then  Q(f)  can be completely reconstructed from  QA(f)  by an ideal rectangular low-pass filter  H(f)  with cutoff frequency  fG = fA/2.  The same facts apply in the  PCM system  to extract  V(f)  from  VQ(f)  in the best possible way.
  3. On the other hand,  if sampling is performed with  fA>2fN, max  as in the middle graph of the example,  a low-pass filter  H(f)  with a smaller slope can also be used on the receiver side for signal reconstruction,  as long as the following condition is met:
$$H(f) = \begin{cases} 1 & {\rm for}\quad |f| \le f_{\rm N,\,max}, \\ 0 & {\rm for}\quad |f| \ge f_{\rm A}-f_{\rm N,\,max}. \end{cases}$$

Natural and discrete sampling


Multiplication by the Dirac comb provides only an idealized description of the sampling,  since a Dirac delta function  (duration  TR → 0,  height  1/TR)  is not realizable.  In practice,  the  "Dirac comb"  pδ(t)  must be replaced by a  "rectangular pulse comb"  pR(t)  with rectangle duration  TR  (see upper sketch):

Rectangular comb  (on the top),  natural and discrete sampling
$$p_{\rm R}(t) = \sum_{\nu=-\infty}^{+\infty} g_{\rm R}(t-\nu\cdot T_{\rm A}),$$
$$g_{\rm R}(t) = \begin{cases} 1 & {\rm for}\quad |t| < T_{\rm R}/2, \\ 1/2 & {\rm for}\quad |t| = T_{\rm R}/2, \\ 0 & {\rm for}\quad |t| > T_{\rm R}/2. \end{cases}$$

TR  should be significantly smaller than the sampling distance  TA.

The graphic shows two different sampling methods using the comb  pR(t):

  • In  natural sampling  the sampled signal  qA(t)  is obtained by multiplying the analog source signal  q(t)  by  pR(t).  Thus,  in the ranges where  pR(t) = 1,  qA(t)  has the same progression as  q(t).
  • In  discrete sampling  the signal  q(t)  is  – at least mentally –  first multiplied by the Dirac comb  pδ(t).  Then each Dirac delta impulse  TA·δ(t − ν·TA)  is replaced by a rectangular pulse  gR(t − ν·TA).


Here and in the following frequency domain consideration,  an acausal description form is chosen for simplicity. 

For a  (causal)  realization,  gR(t) = 1  would have to hold in the range from  0  to  TR  and not,  as here,  for  −TR/2 < t < TR/2.


Frequency domain view of natural sampling


Definition:  The  natural sampling  can be represented by the convolution theorem in the spectral domain as follows:

$$q_{\rm A}(t) = p_{\rm R}(t)\cdot q(t) = \left[\frac{1}{T_{\rm A}}\cdot p_\delta(t)\star g_{\rm R}(t)\right]\cdot q(t)\quad\circ\!-\!\bullet\quad Q_{\rm A}(f) = \left[P_\delta(f)\cdot\frac{G_{\rm R}(f)}{T_{\rm A}}\right]\star Q(f) = P_{\rm R}(f)\star Q(f).$$


The graph shows the result for

  • an  (unrealistic)  rectangular spectrum  Q(f) = Q0  limited to the range  |f| ≤ 4 kHz,
  • the sampling rate  fA=10 kHz   ⇒   TA=100 µs,  and
  • the rectangular pulse duration  TR=25 µs   ⇒   TR/TA=0.25.
Spectrum in natural sampling with rectangular comb


One can see from this plot:

  1. The spectrum  PR(f)  is,  in contrast to  Pδ(f),  not a Dirac comb with all weights equal to  1;  rather,  the weights are here evaluated by the function  GR(f)/TA = TR/TA · sinc(f·TR).
  2. Because of the zero of the  sinc-function,  the Dirac lines vanish here at  ±4·fA.
  3. The spectrum  QA(f)  results from the convolution with  Q(f).  The rectangle around  f = 0  has height  TR/TA·Q0;  the proportions around  μ·fA  (μ ≠ 0)  are lower.
  4. If one uses for signal reconstruction an ideal rectangular low-pass
$$H(f) = \begin{cases} T_{\rm A}/T_{\rm R} = 4 & {\rm for}\quad |f| < f_{\rm A}/2, \\ 0 & {\rm for}\quad |f| > f_{\rm A}/2, \end{cases}$$
then the output spectrum is  V(f) = Q(f)   ⇒   v(t) = q(t).


Conclusion: 

  • For natural sampling,  a rectangular low-pass filter is sufficient for signal reconstruction,  as for ideal sampling  (with Dirac comb).
  • However,  for amplitude matching in the passband,  a gain by the factor  TA/TR  must be considered.


Frequency domain view of discrete sampling


Definition:  In  discrete sampling  the multiplication of the Dirac comb  pδ(t)  with the source signal  q(t)  takes place first  – at least mentally –  and only afterwards the convolution with the rectangular pulse  gR(t):

$$q_{\rm A}(t) = \left[\frac{1}{T_{\rm A}}\cdot p_\delta(t)\cdot q(t)\right]\star g_{\rm R}(t)\quad\circ\!-\!\bullet\quad Q_{\rm A}(f) = \left[P_\delta(f)\star Q(f)\right]\cdot\frac{G_{\rm R}(f)}{T_{\rm A}}.$$
  • It is irrelevant,  but quite convenient,  that here the factor  1/TA  has been added to the evaluation function  GR(f).
  • Thus,  GR(f)/TA = TR/TA · sinc(f·TR).


Spectrum for discrete sampling with a rectangular comb
  • The upper graph shows  (highlighted in green)  the spectral function  Pδ(f)Q(f)  after ideal sampling. 
  • In contrast,  discrete sampling with a rectangular comb yields the spectrum  QA(f)  corresponding to the lower graph.


You can see from this plot:

  1. Each of the infinitely many partial spectra now has a different shape.  Only the middle spectrum around  f=0  is important;
  2. All other spectral components are removed at the receiver side by the low-pass of the signal reconstruction.
  3. If one uses for this low-pass again a rectangular filter with the gain  TA/TR  in the passband,  one obtains for the output spectrum:  
$$V(f) = Q(f)\cdot{\rm sinc}(f\cdot T_{\rm R}).$$


Conclusion:  Discrete sampling and rectangular filtering result in attenuation distortions  according to the weighting function  sinc(fTR).

  • These are stronger,  the larger  TR  is.  Only in the limiting case  TR → 0  does  sinc(f·TR) = 1  hold.
  • However,  ideal equalization can fully compensate for these linear attenuation distortions.  To obtain  V(f) = Q(f)  resp.  v(t) = q(t),  the following must then hold:
$$H(f) = \begin{cases} (T_{\rm A}/T_{\rm R})/{\rm sinc}(f\cdot T_{\rm R}) & {\rm for}\quad |f| < f_{\rm A}/2, \\ 0 & {\rm for}\quad |f| > f_{\rm A}/2. \end{cases}$$
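The attenuation factor sinc(f·TR) and the required equalizer gain can be evaluated directly. The following Python sketch (our own illustration; the variable names are assumptions) uses the parameters of the example, TA = 100 µs and TR = 25 µs:

```python
import numpy as np

# Attenuation sinc(f*TR) of discrete sampling and the equalizer gain
# H(f) = (TA/TR)/sinc(f*TR) in the passband |f| < fA/2.
# Parameters from the example: TA = 100 us => fA = 10 kHz, TR = 25 us.
T_A, T_R = 100e-6, 25e-6

for f in (0.0, 2e3, 4e3):                  # passband frequencies in Hz
    att = np.sinc(f * T_R)                 # np.sinc(x) = sin(pi*x)/(pi*x)
    H = (T_A / T_R) / att                  # required equalizer gain
    print(f"f = {f/1e3:3.0f} kHz: sinc = {att:.4f}, H(f) = {H:.3f}")
```

At f = 0 the equalizer gain equals TA/TR = 4; towards the band edge it grows slightly, compensating the sinc roll-off.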


Quantization and quantization noise


The second functional unit  Quantization  of the PCM transmitter is used for value discretization.

  • For this purpose the whole value range of the analog source signal  (e.g.,  the range ±qmax)  is divided into  M  intervals.
  • Each sample  qA(ν·TA)  is then assigned to a representative  qQ(ν·TA)  of the associated interval  (e.g., the interval center).


Example 2:  The graph illustrates the unit  "quantization"  using the quantization step number  M=8  as an example.

To illustrate  "quantization"  with  M=8  steps
  • In fact,  a power of two is always chosen for  M  in practice because of the subsequent binary coding.
  • Each of the samples  qA(νTA)  marked by circles is replaced by the corresponding quantized value  qQ(νTA).  The quantized values are entered as crosses.
  • However,  this process of value discretization is associated with an irreversible falsification.
  • The falsification  εν = qQ(ν·TA) − qA(ν·TA)  depends on the quantization level number  M.  The following bound applies:
$$|\varepsilon_\nu| < \frac{1}{2}\cdot\frac{2}{M}\cdot q_{\rm max} = \frac{q_{\rm max}}{M}.$$


Definition:  One refers to the second moment of the error quantity  εν  as  quantization noise power:

$$P_{\rm Q} = \frac{1}{2N+1}\cdot\sum_{\nu=-N}^{+N}\varepsilon_\nu^2 \approx \frac{1}{N\cdot T_{\rm A}}\cdot\int_0^{N\cdot T_{\rm A}}\varepsilon(t)^2\,{\rm d}t\quad{\rm with}\quad \varepsilon(t) = q_{\rm Q}(t)-q(t).$$


Notes:

  • For calculating the quantization noise power  PQ  the given approximation of  "spontaneous quantization"  is usually used. 
  • Here,  one ignores sampling and forms the error signal from the continuous-time signals  qQ(t)  and  q(t).
  • PQ  also depends on the source signal  q(t).  Assuming that  q(t)  takes all values between  ±qmax  with equal probability and the quantizer is designed exactly for this range,  we get according to  Exercise 4.4:
$$P_{\rm Q} = \frac{q_{\rm max}^2}{3\cdot M^2}.$$
  • In a speech or music signal,  arbitrarily large amplitude values can occur  – even if only very rarely.  In this case,  qmax  is usually taken to be that amplitude value which is exceeded  (in magnitude)  only  1%  of the time.
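The formula PQ = q²max/(3·M²) can be verified with a short Monte-Carlo simulation. The following Python sketch (our own illustration; the quantizer design and all names are assumptions) quantizes uniformly distributed samples with M = 8 intervals and compares the measured noise power with the formula:

```python
import numpy as np

# Monte-Carlo check of P_Q = q_max^2/(3*M^2) for a uniformly distributed
# source signal and a quantizer designed exactly for the range +/-q_max
# (our own illustration; interval centers serve as representatives).
rng = np.random.default_rng(1)
q_max, M = 1.0, 8
delta = 2 * q_max / M                          # width of one interval

q = rng.uniform(-q_max, q_max, 1_000_000)      # uniform source samples
idx = np.floor((q + q_max) / delta)            # interval number 0 ... M-1
q_Q = -q_max + (idx + 0.5) * delta             # quantized values

P_Q = np.mean((q_Q - q) ** 2)                  # measured noise power
print(P_Q, q_max ** 2 / (3 * M ** 2))          # both approx. 0.0052
```

The measured value agrees with the formula because the quantization error is uniformly distributed over one interval, so its variance is (2·qmax/M)²/12 = q²max/(3·M²).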

PCM encoding and decoding


The block  PCM coding  is used to convert the discrete-time  (after sampling)  and discrete-value  (after quantization with  M  steps)  signal values  qQ(ν·TA)  into a sequence of  N = log2(M)  binary values  (logarithm to base 2   ⇒   "binary logarithm").

Example 3:  Each binary value   ⇒   bit is represented by a rectangle of duration  TB=TA/N  resulting in the signal  qC(t).  You can see:

PCM coding with the dual code  (M=8, N=3)
  • Here,  the  "dual code"  is used   ⇒   the quantization intervals  μ  are numbered consecutively from  0  to  M1  and then written in simple binary.  With  M=8  for example  μ=6   ⇔   110.
  • The three symbols of the binary coded signal  qC(t)  are obtained by replacing  0  by  L  ("Low") and  1  by  H  ("High").  This gives in the example the sequence  "HHL HHL LLH LHL HLH LHH".
  • The bit duration  TB  is here shorter than the sampling distance  TA=1/fA  by a factor  N=log2(M)=3.  So,  the bit rate is  RB=log2(M)fA.
  • If one uses the same mapping in decoding  (vCvQ)  as in coding  (qQqC),  then,  if there are no transmission errors:     vQ(νTA)=qQ(νTA).
  • An alternative to dual code is  "Gray code",  where adjacent binary values differ only in one bit.  For  N=3:
  μ=0LLL,     μ=1LLH,     μ=2LHH,     μ=3:   LHL,
  μ=4HHL,     μ=5HHH,     μ=6HLH,     μ=7HLL.
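The two mappings can be sketched in a few lines of Python (our own illustration; the function names dual and gray are hypothetical). The Gray code is generated by the usual rule mu XOR (mu >> 1), which reproduces exactly the table above:

```python
# Dual code and Gray code for M = 8 intervals (N = 3 bits), in the L/H
# notation of the text (function names are our own).
N = 3

def dual(mu):
    # natural binary ("dual") code word of the interval number mu
    bits = format(mu, f"0{N}b")
    return bits.replace("0", "L").replace("1", "H")

def gray(mu):
    # binary-reflected Gray code: mu XOR (mu >> 1)
    bits = format(mu ^ (mu >> 1), f"0{N}b")
    return bits.replace("0", "L").replace("1", "H")

for mu in range(2 ** N):
    print(mu, dual(mu), gray(mu))
```

A quick check confirms the Gray property: any two adjacent code words differ in exactly one bit.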

Signal-to-noise power ratio


The digital  "pulse code modulation"  (PCM)  is now compared to analog modulation methods  (AM, FM)  regarding the achievable sink SNR  ρv=Pq/Pε  with AWGN noise.  As denoted in previous chapters  (for example)  ξ=αK2PS/(N0BNF)  the  "performance parameter"  ξ  summarizes different influences:

Sink SNR at AM,  FM,  and  PCM 30/32
  1. The channel transmission factor  αK  (quadratic),
  2. the transmit power  PS  (linear),
  3. the AWGN noise power density  N0  (reciprocal),  and
  4. the signal bandwidth  BNF  (reciprocal);  for a harmonic oscillation:   signal frequency  fN  instead of  BNF.


The two comparison curves for  amplitude modulation  and  frequency modulation can be described as follows:

  • Double-sideband amplitude modulation  (DSB–AM)  without carrier:
$$\rho_v = \xi\quad\Rightarrow\quad 10\cdot{\rm lg}\,\rho_v = 10\cdot{\rm lg}\,\xi.$$
  • Frequency modulation  (FM)  with modulation index  η = 3:
$$\rho_v = 3/2\cdot\eta^2\cdot\xi = 13.5\cdot\xi\quad\Rightarrow\quad 10\cdot{\rm lg}\,\rho_v = 10\cdot{\rm lg}\,\xi + 11.3\ {\rm dB}.$$

The curve for the  PCM 30/32  system should be interpreted as follows:

  • If the performance parameter  ξ  is sufficiently large,  then no transmission errors occur.  The error signal  ε(t) = v(t) − q(t)  is then due solely to quantization  (Pε = PQ).
  • With the quantization step number  M = 2^N,  the following approximately holds in this case:
$$\rho_v = \frac{P_q}{P_\varepsilon} = M^2 = 2^{2N}\quad\Rightarrow\quad 10\cdot{\rm lg}\,\rho_v = 20\cdot{\rm lg}\,M = N\cdot 6.02\ {\rm dB},$$
$$N = 8,\ M = 256{:}\quad 10\cdot{\rm lg}\,\rho_v = 48.16\ {\rm dB}.$$
Note that the given equation is exactly valid only for a sawtooth shaped source signal.  However,  for a cosine shaped signal the deviation from it is not very large.
  • As  ξ  decreases  (smaller transmit power or larger noise power density),  the transmission errors increase.  Thus  Pε>PQ  and the sink-to-noise ratio becomes smaller.
  • PCM  (with M=256)  is superior to the analog methods  (AM and FM)  only in the lower and middle  ξ–range.  But if transmission errors do not play a role anymore,  no improvement can be achieved by a larger  ξ  (horizontal curve section with yellow background).
  • An improvement is only achieved by increasing  N  (number of bits per sample)  ⇒   larger  M=2N  (number of quantization steps).   For example, for a  Compact Disc  (CD)  with parameter  N=16   ⇒   M=65536  the sink SNR is: 
10·lg ρv=96.32 dB.
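The rule 10·lg ρv = N·6.02 dB is easy to evaluate, e.g. for the PCM 30/32 system (N = 8) and the Compact Disc (N = 16). A short Python sketch (the function name snr_db is our own):

```python
import math

# Sink SNR without transmission errors: rho_v = M^2 = 2^(2N), i.e.
# 10*lg(rho_v) = N * 6.02 dB (exact only for a sawtooth shaped source
# signal; the function name is our own).
def snr_db(N):
    M = 2 ** N                              # number of quantization steps
    return 10 * math.log10(M ** 2)          # = 20*lg(M)

for N in (8, 16):                           # PCM 30/32 resp. Compact Disc
    print(f"N = {N:2d}: 10*lg(rho_v) = {snr_db(N):.2f} dB")
```

Each additional bit per sample raises the quantization-limited sink SNR by about 6.02 dB.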

Example 4:  The following graph shows the limiting influence of quantization:

  • Here,  transmission errors are excluded.  Sampling and signal reconstruction are best fit to  q(t).
  • White dotted is the source signal  q(t),  green dotted is the sink signal  v(t)  after PCM with  N=4   ⇒   M=16.
  • Sampling times are marked by crosses.


This image can be interpreted as follows:

Influence of quantization with  N=4  and  N=8


  • With  N=8   ⇒   M=256  the sink signal  v(t)  cannot be distinguished with the naked eye from the source signal  q(t).  The white dotted signal curve applies approximately to both.
  • But from the signal-to-noise ratio  10·lg ρv = 47.8 dB  it can be seen that the quantization noise power  Pε  is smaller than the power  Pq  of the source signal only by the factor  1.6·10⁻⁵.
  • This SNR would already be clearly audible with a speech or music signal.
  • Although  q(t)  is neither sawtooth nor cosine shaped,  but is composed of several frequency components,  the approximation  ρv ≈ M²   ⇒   10·lg ρv = 48.16 dB  deviates insignificantly from the actual value.
  • In contrast,  for  N=4   ⇒   M=16  the deviations between sink signal (marked in green) and source signal (marked in white) can already be seen in the image,  which is also quantitatively expressed by the very small SNR  10·lg ρυ=28.2 dB.

Influence of transmission errors


Starting from the same analog signal  q(t)  as in the last section and a linear quantization with  N=8 bits   ⇒   M=256  the effects of transmission errors are now illustrated using the respective sink signal  v(t).

Influence of a transmission error in bit 5 for the dual code,  meaning that the lowest quantization interval  (μ = 0)  is represented by  LLLL LLLL  and the highest interval  (μ = 255)  by  HHHH HHHH.
Table:  Results of the bit error analysis.  Note:   10·lg ρv  was calculated from the presented signal of duration  10·TA  (only  10·8 = 80  bits)   ⇒   each transmission error corresponds to a bit error rate of  1.25%.
  • The white dots mark the source signal  q(t).  Without transmission errors the sink signal  v(t)  has the same course when quantization is neglected.
  • Now,  exactly one bit of the fifth sample  q(5·TA) = −0.715  is corrupted,  where this sample has been coded as  LLHL LHLL.






The results of the error analysis shown in the graph and the table below can be summarized as follows:

  • If only the last bit   ⇒   "Least Significant Bit"   ⇒   (LSB)  of the binary word is corrupted  (LLHL LHLL   ⇒   LLHL LHLH,  white curve),  then no difference from error-free transmission is visible to the naked eye. Nevertheless,  the signal-to-noise ratio is reduced by   3.5 dB.
  • An error of the fourth last bit leads to a clearly detectable distortion by eight quantization steps  (LLHLLHLL ⇒ LLHLHHLL,  green curve):   v(5·TA) − q(5·TA) = 8/256 · 2 = 0.0625,  and the signal-to-noise ratio drops to  10·lg ρv = 28.2 dB.
  • Finally,  the red curve shows the case where the  MSB  ("Most Significant Bit")  is corrupted:   LLHLLHLL ⇒ HLHLLHLL   ⇒   distortion  v(5·TA) − q(5·TA) = 1  (corresponding to half the modulation range).  The SNR is now only about  4 dB.
  • At all sampling times except  5·TA,  v(t)  matches  q(t)  exactly except for the quantization error.  Outside these points marked by yellow crosses,  the single error at  5·TA  leads to strong deviations in an extended range,  due to the interpolation with the  sinc-shaped impulse response of the reconstruction low-pass  H(f).
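The three deviations quoted above (one step, 0.0625 and 1) follow directly from the dual-code word LLHL LHLL ⇒ μ = 36. A Python sketch (our own illustration; the helper amplitude() mapping an interval number to its center in the range −1 ... +1 is an assumption):

```python
# Deviations caused by single bit errors in the dual-code word LLHL LHLL
# (interval mu = 36 of M = 256); amplitude range -1 ... +1, so one
# quantization step corresponds to 2/256 (helper names are our own).
M = 256
word = "LLHLLHLL"
mu = int(word.replace("L", "0").replace("H", "1"), 2)     # = 36

def amplitude(mu):
    # interval center in the amplitude range -1 ... +1
    return -1 + (mu + 0.5) * 2 / M

for pos, label in ((7, "LSB"), (4, "4th-last bit"), (0, "MSB")):
    mu_err = mu ^ (1 << (7 - pos))          # flip one bit of the word
    print(label, amplitude(mu_err) - amplitude(mu))
```

Flipping the LSB shifts the value by one quantization step (2/256), the fourth-last bit by eight steps (0.0625), and the MSB by 128 steps (1.0, half the modulation range), matching the error analysis above.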


Estimation of SNR degradation due to transmission errors


Now we will try to determine the SNR curve of the PCM system taking into account  (approximately)  bit errors.  We start from the following block diagram and further assume:

For calculating the SNR curve of the PCM system;  bit errors are taken into account
  • Each sample  qA(νT)  is quantized by  M  steps and represented by  N=log2(M)  bits.  In the example:  M=8   ⇒   N=3.
  • The binary representation of  qQ(ν·T)  yields the coefficients  ak  (k = 1, ..., N),  which can be falsified by bit errors to the coefficients  bk.  Both  ak  and  bk  are  ±1,  respectively.
  • A bit error  (bk ≠ ak)  occurs with probability  pB.  Each bit is equally likely to be falsified,  and in each PCM word there is at most one error   ⇒   only one of the  N  bits can be wrong.


From the diagram given in the graph,  it can be seen for  N=3  and natural binary coding  ("Dual Code"):

  • A falsification of  a1  changes the value  qQ(νT)  by  ±A.
  • A falsification of  a2  changes the value  qQ(νT)  by  ±A/2.
  • A falsification of  a3  changes the value  qQ(ν·T)  by  ±A/4.


For the case when  (only)  the coefficient  ak  was falsified,  we obtain by generalization for the deviation:

$$\varepsilon_k = v_{\rm Q}(\nu\cdot T) - q_{\rm Q}(\nu\cdot T) = -a_k\cdot A\cdot 2^{-k+1}.$$

After averaging over all falsification values  εk   (with  1kN)   taking into account the bit error probability  pB  we obtain for the  "error noise power":

$$P_{\rm E} = {\rm E}\big[\varepsilon_k^2\big] = \sum_{k=1}^{N} p_{\rm B}\cdot\left(a_k\cdot A\cdot 2^{-k+1}\right)^2 = p_{\rm B}\cdot A^2\cdot\sum_{k=0}^{N-1} 2^{-2k} = p_{\rm B}\cdot A^2\cdot\frac{1-2^{-2N}}{1-2^{-2}} \approx 4/3\cdot p_{\rm B}\cdot A^2.$$
  • Here the sum formula of the geometric series and the approximation  1 − 2^(−2N) ≈ 1  are used.
  • For  N = 8   ⇒   M = 256  the associated relative error is about  10⁻⁵.


Excluding transmission errors,  the signal-to-noise power ratio  ρv=PS/PQ  has been found,  where for a uniformly distributed source signal  (e.g. sawtooth-shaped)  the signal power and quantization noise power are to be calculated as follows:

Sink SNR for PCM considering bit errors
$$P_{\rm S} = A^2/3,\qquad P_{\rm Q} = A^2/3\cdot 2^{-2N}.$$

Taking into account the transmission errors,  the above result gives:

$$\rho_v = \frac{P_{\rm S}}{P_{\rm Q}+P_{\rm E}} = \frac{A^2/3}{A^2/3\cdot 2^{-2N} + A^2\cdot 4/3\cdot p_{\rm B}} = \frac{1}{2^{-2N}+4\cdot p_{\rm B}}.$$

The graph shows  10·lg ρv  as a function of the  (logarithmized)  power parameter  ξ = PS/(N0·BNF),  where  BNF  indicates the source signal bandwidth.  Let the constant channel transmission factor be ideally  αK = 1.  Then the following holds:

  • For AWGN noise and the optimum binary system,  the performance parameter is also  ξ=EB/N0  (energy per bit related to noise power density).  The bit error probability is then given by the Gaussian error function  Q(x):
$$p_{\rm B} = {\rm Q}\left(\sqrt{2\cdot E_{\rm B}/N_0}\right).$$
  • For  N = 8   ⇒   2^(−2N) = 1.5·10⁻⁵  and  10·lg ξ = 6 dB   ⇒   pB = 0.0024  (point marked in red),  the result is:
$$\rho_v = \frac{1}{1.5\cdot 10^{-5} + 4\cdot 0.0024} \approx 100\quad\Rightarrow\quad 10\cdot{\rm lg}\,\rho_v \approx 20\ {\rm dB}.$$
  • This small  ρv  value is due to the term  4·0.0024  in the denominator  (influence of the transmission errors),  while in the horizontal section of the curve for each  N  (number of bits per sample)  the term  2^(−2N)  dominates  –  i.e.,  the quantization noise.
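The red point of the graph can be reproduced numerically. A Python sketch (our own illustration) evaluates ρv = 1/(2^(−2N) + 4·pB) with pB = Q(√(2·ξ)) for N = 8 and 10·lg ξ = 6 dB:

```python
import math

# Reproduce the red point of the graph: N = 8, 10*lg(xi) = 6 dB
# (our own illustration; Q is the complementary Gaussian error function).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

N = 8
xi = 10 ** (6 / 10)                         # 10*lg(xi) = 6 dB
pB = Q(math.sqrt(2 * xi))                   # bit error probability, approx. 0.0024
rho_v = 1 / (2 ** (-2 * N) + 4 * pB)        # sink SNR with bit errors
print(pB, 10 * math.log10(rho_v))           # approx. 20 dB
```

For larger ξ the term 4·pB vanishes rapidly and ρv saturates at 2^(2N), i.e. at the quantization-limited 48.16 dB for N = 8.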

Non-linear quantization


Often the quantization intervals are not chosen equally large,  but one uses a finer quantization for the inner amplitude range than for large amplitudes.  There are several reasons for this:

Uniform quantization of a speech signal
  • In audio signals,  distortions of the quiet signal components  (i.e. values near the zero line)  are subjectively perceived as more disturbing than an impairment of large amplitude values.
  • Such an uneven quantization also leads to a larger sink SNR for such a music or speech signal,  because here the signal amplitude is not uniformly distributed.


The graph shows a speech signal  q(t)  and its amplitude distribution  fq(q)   ⇒   Probability density function  (PDF).

This is the  Laplace distribution,  which can be approximated as follows:

  • by a continuous-valued two-sided exponential distribution,  and
  • by a Dirac delta function  δ(q)  to account for the speech pauses  (magenta colored).


In the graph,  nonlinear quantization is only indicated,  e.g. by means of the 13-segment characteristic,  which is described in more detail in  Exercise 4.5:

  • The quantization intervals here become wider and wider towards the edges section by section.
  • The more frequent small amplitudes,  on the other hand,  are quantized very finely.


Compression and expansion


Non-uniform quantization can be realized, for example, by

Realization of a non-uniform quantization
  • the sampled values  qA(νTA)  are first deformed by a nonlinear characteristic  qK(qA),  and
  • subsequently,  the resulting output values  qK(ν·TA)  are uniformly quantized.


This results in the signal chain sketched on the right.

Conclusion:  Such a non-uniform quantization means:

  • Through the nonlinear characteristic  qK(qA)   ⇒   small signal values are amplified and large values are attenuated   ⇒   "compression".
  • This deliberate signal distortion is undone at the receiver by the inverse function  vE(υQ)    ⇒   "expansion".
  • The total process of transmitter-side compression and receiver-side expansion is also called  "companding."


For the PCM system 30/32,  the  "Comité Consultatif International Télégraphique et Téléphonique"  (CCITT)  recommended the so-called  "A–characteristic":

$$y(x) = \begin{cases} \dfrac{1+\ln(A\cdot x)}{1+\ln(A)} & {\rm for}\quad 1/A \le x \le 1, \\ \dfrac{A\cdot x}{1+\ln(A)} & {\rm for}\quad -1/A \le x \le 1/A, \\ -\dfrac{1+\ln(-A\cdot x)}{1+\ln(A)} & {\rm for}\quad -1 \le x \le -1/A. \end{cases}$$
  • Here,  x = qA(ν·TA)  and  y = qK(ν·TA)  are used as abbreviations.
  • This characteristic curve with the value  A = 87.56  introduced in practice has a continuously changing slope.
  • For more details on this type of non-uniform quantization,  see the  Exercise 4.6.
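A minimal Python sketch of A-law companding (our own illustration; the names compress and expand are hypothetical) implements y(x) for A = 87.56 together with its inverse and checks that expansion undoes the compression:

```python
import math

# Sketch of A-law companding with A = 87.56 (our own illustration;
# the names compress/expand are hypothetical).
A = 87.56

def compress(x):
    # the A-characteristic y(x) for -1 <= x <= +1
    if abs(x) <= 1 / A:
        return A * x / (1 + math.log(A))
    return math.copysign((1 + math.log(A * abs(x))) / (1 + math.log(A)), x)

def expand(y):
    # inverse characteristic used at the receiver
    y_lin = 1 / (1 + math.log(A))               # |y| at the segment boundary
    if abs(y) <= y_lin:
        return y * (1 + math.log(A)) / A
    return math.copysign(math.exp(abs(y) * (1 + math.log(A)) - 1) / A, y)

for x in (-0.5, 0.001, 0.8):
    assert abs(expand(compress(x)) - x) < 1e-12  # expansion undoes compression
print(compress(0.001), compress(0.8))
```

Small amplitudes are amplified by about A/(1 + ln A) ≈ 16, while large amplitudes are attenuated, matching the description of "compression" above.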


⇒   Note:   In the third part of the  (German language)  learning video  "Pulse Code Modulation"  are covered:

  • the definition of signal-to-noise power ratio  (SNR),
  • the influence of quantization noise and transmission errors,
  • the differences between linear and non-linear quantization.


Exercises for the chapter


Exercise 4.1: PCM System 30/32

Exercise 4.2: Low-Pass for Signal Reconstruction

Exercise 4.2Z: About the Sampling Theorem

Exercise 4.3: Natural and Discrete Sampling

Exercise 4.4: About the Quantization Noise

Exercise 4.4Z: Signal-to-Noise Ratio with PCM

Exercise 4.5: Non-Linear Quantization

Exercise 4.6: Quantization Characteristics