Difference between revisions of "Modulation Methods/Pulse Code Modulation"

From LNTwww
 
 
== # OVERVIEW OF THE FOURTH MAIN CHAPTER # ==
<br>
The fourth chapter deals with the digital modulation methods&nbsp; &raquo;'''amplitude shift keying'''&laquo;&nbsp; $\rm (ASK)$,&nbsp; &raquo;'''phase shift keying'''&laquo;&nbsp; $\rm (PSK)$&nbsp; and&nbsp; &raquo;'''frequency shift keying'''&laquo;&nbsp; $\rm (FSK)$&nbsp; as well as some modifications derived from them.&nbsp; Most of the properties of the analog modulation methods mentioned in the last two chapters still apply.&nbsp; Differences result from the now required&nbsp; &raquo;decision component&laquo;&nbsp; of the receiver.
  
We restrict ourselves here essentially to the&nbsp; &raquo;system-theoretical and transmission aspects&laquo;.&nbsp; The error probability is given only for ideal conditions.&nbsp; The derivations and the consideration of non-ideal boundary conditions can be found in the book&nbsp; "Digital Signal Transmission".
  
 
The following topics are treated in detail:
#the &nbsp;&raquo;pulse code modulation&laquo;&nbsp; $\rm (PCM)$&nbsp; and its components&nbsp; "sampling"&nbsp; &ndash; &nbsp;"quantization"&nbsp; &ndash; &nbsp; "encoding",
#the &nbsp;&raquo;linear modulation&laquo;&nbsp; $\rm ASK$,&nbsp; $\rm BPSK$,&nbsp; $\rm DPSK$&nbsp; and associated demodulators,
# the &nbsp;&raquo;quadrature amplitude modulation&laquo;&nbsp; $\rm (QAM)$&nbsp; and more complicated signal space mappings,
#the&nbsp; &raquo;frequency shift keying&laquo;&nbsp; $\rm (FSK)$&nbsp; as an example of non-linear digital modulation,
#the FSK with &nbsp;&raquo;continuous phase matching&laquo;&nbsp; $\rm (CPM)$,&nbsp; especially the&nbsp; $\rm (G)MSK$&nbsp; method.
  
  
 
==Principle and block diagram==
<br>
Almost all modulation methods used today work digitally.&nbsp; Their advantages have already been mentioned in the&nbsp; [[Modulation_Methods/Objectives_of_Modulation_and_Demodulation#Advantages_of_digital_modulation_methods|"first chapter"]]&nbsp; of this book.&nbsp; The first concept for digital signal transmission was already developed in 1938 by&nbsp; [https://en.wikipedia.org/wiki/Alec_Reeves $\text{Alec Reeves}$]&nbsp; and has also been used in practice since the 1960s under the name &nbsp;"Pulse Code Modulation"&nbsp; $\rm (PCM)$.&nbsp; Even though many of the digital modulation methods conceived in recent years differ from PCM in detail,&nbsp; it is very well suited to explain the principle of all these methods.
  
The task of the PCM system is
*to convert the analog source signal&nbsp; $q(t)$&nbsp; into the binary signal&nbsp; $q_{\rm C}(t)$&nbsp; &ndash; this process is also called&nbsp; &raquo;'''A/D conversion'''&laquo;,
*to transmit this signal over the channel,&nbsp; where the receiver-side signal&nbsp; $v_{\rm C}(t)$&nbsp; is also binary because of the decision,
*to reconstruct from the binary signal&nbsp; $v_{\rm C}(t)$&nbsp; the analog&nbsp; (continuous-value as well as continuous-time)&nbsp; sink signal&nbsp; $v(t)$ &nbsp; ⇒ &nbsp; &raquo;'''D/A conversion'''&laquo;.
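These three steps and the subsequent decoding can be sketched numerically.&nbsp; A minimal Python sketch,&nbsp; assuming a uniform 8-bit quantizer and illustrative values&nbsp; $($a sampling rate of&nbsp; $8\ \rm kHz$&nbsp; and a&nbsp; $440\ \rm Hz$&nbsp; tone$)$&nbsp; that are not taken from the text:

```python
import numpy as np

# Minimal sketch of the PCM chain (all values illustrative, not from the text):
# A/D conversion = sampling + quantization + binary coding, then D/A decoding.
f_A, N_bits = 8_000, 8                      # sampling rate in Hz, bits per sample
t = np.arange(0, 0.01, 1 / f_A)             # sampling instants nu * T_A
q_A = np.sin(2 * np.pi * 440 * t)           # sampled source signal, range [-1, 1]

levels = 2 ** N_bits                        # M = 256 quantization levels
idx = np.clip(((q_A + 1) / 2 * levels).astype(int), 0, levels - 1)
q_C = [format(i, "08b") for i in idx]       # PCM coding: one 8-bit word per sample

# D/A conversion: decode the words back to amplitudes (quantization itself
# has no inverse; the residual error is bounded by the step size 2/levels).
v_Q = np.array([int(b, 2) for b in q_C]) / levels * 2 - 1 + 1 / levels
print(np.max(np.abs(v_Q - q_A)) < 2 / levels)   # True
```

Even with error-free transmission of the words&nbsp; $q_{\rm C}$,&nbsp; the decoded amplitudes differ from&nbsp; $q_{\rm A}$&nbsp; by up to half a quantization step.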
  
[[File:EN_Mod_T_4_1_S1_v2.png|right|frame|Principle of Pulse Code Modulation&nbsp; $\rm (PCM)$<br><br>
$q(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q(f)$ &nbsp; &rArr; &nbsp; source signal &nbsp; (from German:&nbsp; "Quellensignal"),&nbsp; analog<br>
$q_{\rm A}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm A}(f)$ &nbsp; &rArr; &nbsp; sampled source signal &nbsp; (from German:&nbsp; "abgetastet" &nbsp; &rArr; &nbsp; "A")<br>
$q_{\rm Q}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm Q}(f)$ &nbsp; &rArr; &nbsp; quantized source signal &nbsp; (from German:&nbsp; "quantisiert" &nbsp; &rArr; &nbsp; "Q")<br>
$q_{\rm C}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm C}(f)$ &nbsp; &rArr; &nbsp; coded source signal &nbsp; (from German:&nbsp; "codiert" &nbsp; &rArr; &nbsp; "C"),&nbsp; binary<br>
$s(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ S(f)$ &nbsp; &rArr; &nbsp; transmitted signal &nbsp; (from German:&nbsp; "Sendesignal"),&nbsp; digital<br>
$n(t)$ &nbsp; &rArr; &nbsp; noise signal,&nbsp; characterized by the power-spectral density&nbsp; ${\it Φ}_n(f)$,&nbsp; analog<br>
$r(t)= s(t) \star h_{\rm K}(t) + n(t)$ &nbsp; &rArr; &nbsp; received signal,&nbsp; $h_{\rm K}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ H_{\rm K}(f)$,&nbsp; analog<br>
&nbsp; Note: &nbsp; The spectrum&nbsp; $R(f)$&nbsp; cannot be specified due to the stochastic component&nbsp; $n(t)$.<br>
$v_{\rm C}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V_{\rm C}(f)$ &nbsp; &rArr; &nbsp; signal after decision,&nbsp; binary<br>
$v_{\rm Q}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V_{\rm Q}(f)$ &nbsp; &rArr; &nbsp; signal after PCM decoding,&nbsp; $M$&ndash;level<br>
&nbsp; Note: &nbsp; On the receiver side,&nbsp; there is no counterpart to quantization<br>
$v(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V(f)$ &nbsp; &rArr; &nbsp; sink signal,&nbsp; analog]]
  
 
  
Further, it should be noted regarding this PCM block diagram:
*The PCM transmitter&nbsp; ("A/D converter")&nbsp; is composed of three function blocks &nbsp;&raquo;'''Sampling - Quantization - PCM Coding'''&laquo;&nbsp; which will be described in more detail in the next sections.
  
*The gray-background block&nbsp; "Digital Transmission System"&nbsp; shows&nbsp; "transmitter"&nbsp; (modulation),&nbsp;  "receiver"&nbsp; (with decision unit),&nbsp; and&nbsp; "analog transmission channel" &nbsp; &rArr; &nbsp; channel frequency response&nbsp; $H_{\rm K}(f)$&nbsp; and noise power-spectral density&nbsp; ${\it Φ}_n(f)$.  
  
*This block is covered in the first three chapters of the book&nbsp; [[Digital_Signal_Transmission|"Digital Signal Transmission"]].&nbsp; In chapter 5 of the same book,&nbsp; you will find&nbsp; [[Digital_Signal_Transmission/Parameters_of_Digital_Channel_Models|$\text{digital channel models}$]]&nbsp; that phenomenologically describe the transmission behavior using the signals&nbsp; $q_{\rm C}(t)$&nbsp; and&nbsp; $v_{\rm C}(t)$.  
  
*Further, it can be seen from the block diagram that there is no equivalent for&nbsp; "quantization"&nbsp; at the receiver-side.&nbsp; Therefore,&nbsp; even with error-free transmission,&nbsp; i.e.,&nbsp; for&nbsp; $v_{\rm C}(t) = q_{\rm C}(t)$,&nbsp; the analog sink signal&nbsp; $v(t)$&nbsp; will differ from the source signal&nbsp; $q(t)$.  
  
*As a measure of the quality of the digital transmission system,&nbsp;  we use the&nbsp; [[Modulation_Methods/Quality_Criteria#Signal.E2.80.93to.E2.80.93noise_.28power.29_ratio|$\text{Signal-to-Noise Power Ratio}$]] &nbsp; &rArr; &nbsp; in short: &nbsp; &raquo;'''Sink-SNR'''&laquo;&nbsp; as the quotient of the powers of source signal&nbsp; $q(t)$&nbsp; and error signal&nbsp; $ε(t) = v(t) - q(t)$:  
 
 
 
 
:$$\rho_{v} = \frac{P_q}{P_\varepsilon}\hspace{0.3cm} {\rm with}\hspace{0.3cm}P_q = \overline{[q(t)]^2},
\hspace{0.2cm}P_\varepsilon = \overline{[v(t) - q(t)]^2}\hspace{0.05cm}.$$
  
*Here,&nbsp; an ideal amplitude matching is assumed,&nbsp; so that in the ideal case&nbsp; (that is: &nbsp; sampling according to the sampling theorem,&nbsp; best possible signal reconstruction,&nbsp; infinitely fine quantization)&nbsp; the sink signal&nbsp; $v(t)$&nbsp; would exactly match the source signal&nbsp; $q(t)$.
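The sink SNR&nbsp; $ρ_v$&nbsp; can be estimated numerically.&nbsp; A minimal sketch,&nbsp; assuming that an 8-bit uniform quantizer is the only impairment&nbsp; $($error-free transmission,&nbsp; ideal amplitude matching$)$;&nbsp; the sine source and all parameter values are illustrative:

```python
import numpy as np

# Sketch: estimate the sink SNR rho_v = P_q / P_eps for a toy chain in which
# quantization is the only impairment (error-free transmission assumed).
# All values are illustrative: a sine source and an 8-bit uniform quantizer.
t = np.linspace(0, 1, 1000, endpoint=False)
q = np.sin(2 * np.pi * 3 * t)               # source signal q(t)

M = 256                                     # quantization levels (8 bit)
delta = 2.0 / M                             # step size for amplitudes in [-1, 1)
v = (np.floor(q / delta) + 0.5) * delta     # sink signal v(t) after quantization

P_q = np.mean(q ** 2)                       # power of the source signal
P_eps = np.mean((v - q) ** 2)               # power of the error signal eps(t)
rho_v = P_q / P_eps
print(f"10*log10(rho_v) = {10 * np.log10(rho_v):.1f} dB")   # close to 50 dB
```

The result is close to the well-known&nbsp; $6\ {\rm dB}$&ndash;per&ndash;bit rule for a full-scale sine&nbsp; $(6.02 \cdot 8 + 1.76 ≈ 49.9\ \rm dB)$.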
 
<br clear=all>
&rArr; &nbsp; We would like to refer you already here to the three-part&nbsp; (German language)&nbsp; learning video&nbsp; [[Pulscodemodulation_(Lernvideo)|"Pulse Code Modulation"]]&nbsp; which contains all aspects of PCM.&nbsp; Its principle is explained in detail in the first part of the video.
 
  
 
==Sampling and signal reconstruction==
<br>
Sampling&nbsp; &ndash; that is, time discretization of the analog signal&nbsp; $q(t)$ &ndash;&nbsp; was covered in detail in the chapter&nbsp; [[Signal_Representation/Discrete-Time_Signal_Representation|"Discrete-Time Signal Representation"]]&nbsp; of the book&nbsp; "Signal Representation."&nbsp; Here follows a brief summary of that section.
  
[[File:EN_Mod_T_4_1_S2a.png |right|frame|Time domain representation of sampling]]
  
The graph illustrates sampling in the time domain:
*The&nbsp; (blue)&nbsp; source signal&nbsp; $q(t)$&nbsp; is&nbsp; "continuous-time",&nbsp; the (green) signal sampled at a distance&nbsp; $T_{\rm A}$&nbsp; is&nbsp; "discrete-time".&nbsp;  
*The sampling can be represented by multiplying the analog signal&nbsp; $q(t)$&nbsp; by the&nbsp; [[Signal_Representation/Discrete-Time_Signal_Representation#Dirac_comb_in_time_and_frequency_domain|$\text{Dirac comb in the time domain}$]]&nbsp; &rArr; &nbsp; $p_δ(t)$:
:$$q_{\rm A}(t) = q(t) \cdot p_{\delta}(t)\hspace{0.3cm} {\rm with}\hspace{0.3cm}p_{\delta}(t)= \sum_{\nu = -\infty}^{\infty}T_{\rm A}\cdot \delta(t - \nu \cdot T_{\rm A}) \hspace{0.05cm}.$$
*The Dirac delta function at&nbsp; $t = ν \cdot T_{\rm A}$&nbsp; has the weight&nbsp; $T_{\rm A} \cdot q(ν \cdot T_{\rm A})$.&nbsp; Since&nbsp; $δ(t)$&nbsp; has the unit&nbsp; "$\rm 1/s$",&nbsp; $q_{\rm A}(t)$&nbsp; has the same unit as&nbsp; $q(t)$,&nbsp; e.g.&nbsp; "V".
*The Fourier transform of the Dirac comb&nbsp; $p_δ(t)$&nbsp; is also a Dirac comb,&nbsp; but now in the frequency domain &nbsp; &rArr; &nbsp; $P_δ(f)$.&nbsp; The spacing of the individual Dirac delta lines is&nbsp; $f_{\rm A} = 1/T_{\rm A}$,&nbsp; and all weights of&nbsp; $P_δ(f)$&nbsp; are&nbsp; $1$:
 
:$$p_{\delta}(t)= \sum_{\nu = -\infty}^{+\infty}T_{\rm A}\cdot \delta(t - \nu \cdot T_{\rm A})
\hspace{0.2cm}\circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\, \hspace{0.2cm} P_{\delta}(f)= \sum_{\mu = -\infty}^{+\infty} \delta(f - \mu \cdot f_{\rm A}) \hspace{0.05cm}.$$
  
*The spectrum&nbsp; $Q_{\rm A}(f)$&nbsp; of the sampled source signal&nbsp; $q_{\rm A}(t)$&nbsp; is obtained from the&nbsp; [[Signal_Representation/The_Convolution_Theorem_and_Operation|$\text{Convolution Theorem}$]],&nbsp; where&nbsp; $Q(f)\hspace{0.2cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ\, \hspace{0.2cm} q(t)$&nbsp; denotes the continuous spectrum of the analog signal:
 
:$$Q_{\rm A}(f) = Q(f) \star P_{\delta}(f)= \sum_{\mu = -\infty}^{+\infty} Q(f - \mu \cdot f_{\rm A}) \hspace{0.05cm}.$$
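The periodic continuation of the spectrum can be observed directly in the sample values.&nbsp; A small sketch with illustrative frequencies:&nbsp; a&nbsp; $7\ \rm kHz$&nbsp; tone sampled at&nbsp; $f_{\rm A} = 10\ \rm kHz$&nbsp; yields exactly the same samples as a&nbsp; $3\ \rm kHz$&nbsp; tone,&nbsp; because&nbsp; $7\ {\rm kHz} - f_{\rm A} = -3\ \rm kHz$&nbsp; falls onto the same line of the periodic&nbsp; $Q_{\rm A}(f)$:

```python
import numpy as np

# Sketch of the periodic spectral continuation: with the illustrative rate
# f_A = 10 kHz, a 7 kHz tone and a 3 kHz tone yield identical samples,
# because 7 kHz - f_A = -3 kHz falls onto the same line of Q_A(f).
f_A = 10_000.0
T_A = 1.0 / f_A
n = np.arange(100)                                # 100 samples = 10 ms
tone_7k = np.cos(2 * np.pi * 7_000 * n * T_A)
tone_3k = np.cos(2 * np.pi * 3_000 * n * T_A)
print(np.max(np.abs(tone_7k - tone_3k)))          # numerically zero
```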
  
&rArr; &nbsp; We refer you to part 2 of the&nbsp; (German language)&nbsp; learning video&nbsp; [[Pulscodemodulation_(Lernvideo)|"Pulse Code Modulation"]]&nbsp; which explains sampling and signal reconstruction in terms of system theory.  
  
 
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp; The graph schematically shows the spectrum&nbsp; $Q(f)$&nbsp; of an analog source signal&nbsp; $q(t)$&nbsp; with frequencies up to&nbsp; $f_{\rm N, \ max} = 5 \ \rm kHz$.  
  
[[File:P_ID1593__Mod_T_4_1_S2b_neu.png |right|frame| Periodic continuation of the spectrum by sampling]]
  
*If one samples&nbsp; $q(t)$&nbsp; with the sampling rate&nbsp; $f_{\rm A} = 20 \ \rm kHz$&nbsp; $($so at the respective distance&nbsp; $T_{\rm A} = 50 \ \rm &micro; s)$,&nbsp; one obtains the periodic spectrum&nbsp; $Q_{\rm A}(f)$&nbsp;  sketched in green.
  
*Since the Dirac delta functions are infinitely narrow,&nbsp; $q_{\rm A}(t)$&nbsp; also contains arbitrarily high frequency components,&nbsp; and accordingly&nbsp; $Q_{\rm A}(f)$&nbsp; extends to infinity&nbsp; (middle graph).
*Drawn below&nbsp; (in red)&nbsp; is the spectrum&nbsp; $Q_{\rm A}(f)$&nbsp; of the sampled source signal for the sampling parameters&nbsp; $T_{\rm A} = 100 \ \rm &micro; s$ &nbsp; ⇒ &nbsp; $f_{\rm A} = 10 \ \rm kHz$. }}
  
  
 
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp;
From this example,&nbsp; the following important lessons can be learned regarding sampling:  
#If&nbsp; $Q(f)$&nbsp; contains frequencies up to&nbsp; $f_\text{N, max}$,&nbsp; then according to the&nbsp; [[Signal_Representation/Discrete-Time_Signal_Representation#Sampling_theorem|$\text{Sampling Theorem}$]]&nbsp; the sampling rate&nbsp; $f_{\rm A} ≥ 2 \cdot f_\text{N, max}$&nbsp; should be chosen.&nbsp; At a smaller sampling rate&nbsp; $f_{\rm A}$&nbsp; $($thus larger spacing $T_{\rm A})$,&nbsp; overlaps of the periodized spectra occur,&nbsp; i.e.,&nbsp; irreversible distortions.
 
#If exactly&nbsp; $f_{\rm A} = 2 \cdot f_\text{N, max}$&nbsp; as in the lower graph of&nbsp; $\text{Example 1}$,&nbsp; then&nbsp; $Q(f)$&nbsp; can be completely reconstructed from&nbsp; $Q_{\rm A}(f)$&nbsp; by an ideal rectangular low-pass filter&nbsp; $H(f)$&nbsp; with cutoff frequency&nbsp; $f_{\rm G} = f_{\rm A}/2$.&nbsp; The same applies in the&nbsp; [[Modulation_Methods/Pulse_Code_Modulation#Principle_and_block_diagram|$\text{PCM system}$]]&nbsp; to extract&nbsp; $V(f)$&nbsp; from&nbsp; $V_{\rm Q}(f)$&nbsp; in the best possible way.
#On the other hand,&nbsp; if sampling is performed with&nbsp; $f_{\rm A} > 2 \cdot f_\text{N, max}$&nbsp; as in the middle graph of the example,&nbsp; a low-pass filter&nbsp; $H(f)$&nbsp; with a smaller slope can also be used on the receiver side for signal reconstruction,&nbsp; as long as the following condition is met:  
 
::$$H(f) = \left\{ \begin{array}{l} 1  \\ 0 \\  \end{array} \right.\quad \begin{array}{*{5}c}{\rm{for} }
 
 
\\{\rm{for} }  \\ \end{array}\begin{array}{*{10}c} {\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert \le f_{\rm N, \hspace{0.05cm}max},}  \\ {\hspace{0.04cm}\left \vert\hspace{0.005cm} f \hspace{0.05cm} \right \vert \ge f_{\rm A}- f_{\rm N, \hspace{0.05cm}max}.}  \\ \end{array}$$}}
 
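The ideal rectangular low-pass with cutoff&nbsp; $f_{\rm G} = f_{\rm A}/2$&nbsp; acts in the time domain as sinc interpolation of the samples&nbsp; $q(ν \cdot T_{\rm A})$.&nbsp; A minimal numerical sketch with an illustrative&nbsp; $3\ \rm kHz$&nbsp; tone sampled at&nbsp; $f_{\rm A} = 10\ \rm kHz$&nbsp; $($the sum is truncated to a finite window,&nbsp; so the reconstruction is only approximate$)$:

```python
import numpy as np

# Sketch: the ideal rectangular low-pass with cutoff f_G = f_A/2 acts in the
# time domain as sinc interpolation of the samples q(nu*T_A). Illustrative
# values: 3 kHz tone, f_A = 10 kHz; the sum is truncated, hence approximate.
f_A = 10_000.0
T_A = 1.0 / f_A
nu = np.arange(-200, 201)                          # finite window of samples
samples = np.cos(2 * np.pi * 3_000 * nu * T_A)     # q(nu*T_A)

def reconstruct(t):
    """v(t) = sum_nu q(nu*T_A) * sinc((t - nu*T_A) / T_A)."""
    return np.sum(samples * np.sinc((t - nu * T_A) / T_A))

t0 = 1.23e-4                                       # an off-grid time instant
print(abs(reconstruct(t0) - np.cos(2 * np.pi * 3_000 * t0)))  # small
```

Note that&nbsp; `np.sinc(x)`&nbsp; implements the normalized&nbsp; ${\rm sinc}(x) = \sin(\pi x)/(\pi x)$,&nbsp; which is exactly the impulse response of the rectangular low-pass scaled to&nbsp; $T_{\rm A}$.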
  
 
==Natural and discrete sampling==
<br>
Multiplication by the Dirac comb provides only an idealized description of the sampling,&nbsp; since a Dirac delta function&nbsp; $($duration $T_{\rm R} → 0$,&nbsp; height $1/T_{\rm R} → ∞)$&nbsp; is not realizable.&nbsp; In practice,&nbsp; the&nbsp; "Dirac comb"&nbsp; $p_δ(t)$&nbsp; must be replaced by a&nbsp; "rectangular pulse comb"&nbsp; $p_{\rm R}(t)$&nbsp; with rectangle duration&nbsp; $T_{\rm R}$&nbsp; (see upper sketch):
[[File: EN_Mod_T_4_1_S3a.png |right|frame| Rectangular comb&nbsp; (on the top),&nbsp; natural and discrete sampling]]
:$$p_{\rm R}(t)= \sum_{\nu = -\infty}^{+\infty}g_{\rm R}(t - \nu \cdot T_{\rm A}),$$
:$$g_{\rm R}(t) = \left\{ \begin{array}{l} 1  \\ 1/2 \\ 0 \\  \end{array} \right.\quad
\begin{array}{*{5}c}{\rm{for}}\\{\rm{for}} \\{\rm{for}} \\ \end{array}\begin{array}{*{10}c}{\hspace{0.04cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} < T_{\rm R}/2\hspace{0.05cm},  \\{\hspace{0.04cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} = T_{\rm R}/2\hspace{0.05cm}, \\
{\hspace{0.005cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} > T_{\rm R}/2\hspace{0.05cm}. \\
 
\end{array}$$
$T_{\rm R}$&nbsp; should be significantly smaller than the sampling distance&nbsp; $T_{\rm A}$.  
  
The graph shows two different sampling methods using the comb&nbsp; $p_{\rm R}(t)$:
*In&nbsp; &raquo;'''natural sampling'''&laquo;&nbsp; the sampled signal&nbsp; $q_{\rm A}(t)$&nbsp; is obtained by multiplying the analog source signal&nbsp; $q(t)$&nbsp; by&nbsp; $p_{\rm R}(t)$. &nbsp; Thus in the ranges&nbsp; $p_{\rm R}(t) = 1$,&nbsp; $q_{\rm A}(t)$&nbsp; has the same progression as&nbsp; $q(t)$.
*In&nbsp; &raquo;'''discrete sampling'''&laquo;&nbsp; the signal&nbsp; $q(t)$&nbsp; is&nbsp; &ndash; at least mentally &ndash;&nbsp; first multiplied by the Dirac comb&nbsp; $p_δ(t)$.&nbsp; Then each Dirac delta pulse&nbsp; $T_{\rm A} \cdot δ(t - ν \cdot T_{\rm A})$&nbsp; is replaced by a rectangular pulse&nbsp; $g_{\rm R}(t - ν \cdot T_{\rm A})$.
  
Here and in the following frequency domain consideration,&nbsp; an acausal description form is chosen for simplicity.&nbsp;
  
For a&nbsp; (causal)&nbsp; realization,&nbsp; $g_{\rm R}(t) = 1$&nbsp; would have to hold in the range from&nbsp; $0$&nbsp; to&nbsp; $T_{\rm R}$&nbsp; and not as here for&nbsp; $ -T_{\rm R}/2 < t < T_{\rm R}/2.$  
 
<br clear=all>
 
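The two sampling variants can be contrasted numerically.&nbsp; A sketch with the illustrative values&nbsp; $T_{\rm A} = 100\ \rm µs$&nbsp; and&nbsp; $T_{\rm R} = 25\ \rm µs$:&nbsp; inside each rectangle,&nbsp; the naturally sampled signal follows&nbsp; $q(t)$,&nbsp; while the discretely sampled signal is constant:

```python
import numpy as np

# Sketch contrasting natural and discrete sampling with a rectangular comb.
# Illustrative values: T_A = 100 us, T_R = 25 us, a 1 kHz source signal.
T_A, T_R = 100e-6, 25e-6
t = np.arange(0, 2e-3, 1e-6)                    # 1 us time grid over 2 ms
q = np.sin(2 * np.pi * 1_000 * t)               # analog source signal q(t)

# Rectangular comb p_R(t): equals 1 while |t - nu*T_A| < T_R/2
p_R = (np.abs(((t + T_A / 2) % T_A) - T_A / 2) < T_R / 2).astype(float)

# Natural sampling: q_A(t) follows q(t) inside each rectangle
q_natural = q * p_R

# Discrete sampling: each rectangle carries the constant value q(nu*T_A)
nearest = np.round(t / T_A) * T_A               # nearest sampling instant
q_discrete = p_R * np.sin(2 * np.pi * 1_000 * nearest)
```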
 
  
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; The&nbsp;
&raquo;'''natural sampling'''&laquo;&nbsp; can be represented by the convolution theorem in the spectral domain as follows:
 
:$$q_{\rm A}(t) = p_{\rm R}(t) \cdot q(t) = \left [ \frac{1}{T_{\rm A} } \cdot p_{\rm \delta}(t) \star g_{\rm R}(t)\right ]\cdot q(t) \hspace{0.3cm}
\Rightarrow \hspace{0.3cm}Q_{\rm A}(f) = \left [ P_{\rm \delta}(f) \cdot \frac{1}{T_{\rm A} } \cdot G_{\rm R}(f) \right ] \star Q(f) = P_{\rm R}(f) \star Q(f)\hspace{0.05cm}.$$}}
  
 
The graph shows the result for
*an&nbsp; (unrealistic)&nbsp; rectangular spectrum&nbsp; $Q(f) = Q_0$&nbsp; limited to the range&nbsp; $|f| ≤ 4 \ \rm kHz$,  
*the sampling rate&nbsp; $f_{\rm A} = 10 \ \rm kHz$ &nbsp; ⇒ &nbsp; $T_{\rm A} = 100 \ \rm &micro; s$,&nbsp; and  
*the rectangular pulse duration&nbsp; $T_{\rm R} = 25 \ \rm &micro; s$ &nbsp; ⇒ &nbsp; $T_{\rm R}/T_{\rm A} = 0.25$.  
  
[[File:EN_Mod_T_4_1_S3b.png |right|frame| Spectrum in natural sampling with rectangular comb]]
  
 
  
 
One can see from this plot:
#The spectrum&nbsp; $P_{\rm R}(f)$&nbsp; is,&nbsp; in contrast to&nbsp; $P_δ(f)$&nbsp; $($all weights equal $1)$,&nbsp; not a Dirac comb;&nbsp; rather,&nbsp; its weights follow the function&nbsp; $G_{\rm R}(f)/T_{\rm A} = T_{\rm R}/T_{\rm A} \cdot {\rm sinc}(f\cdot T_{\rm R})$.
#Because of the zero of the&nbsp; $\rm sinc$-function,&nbsp; the Dirac delta lines vanish here at&nbsp; $±4f_{\rm A}$.  
 
#The spectrum&nbsp; $Q_{\rm A}(f)$&nbsp; results from the convolution with&nbsp; $Q(f)$.&nbsp; The rectangle around&nbsp; $f = 0$&nbsp; has height&nbsp; $T_{\rm R}/T_{\rm A} \cdot Q_0$,&nbsp; the proportions around&nbsp; $\mu \cdot f_{\rm A} \ (\mu ≠ 0)$&nbsp; are lower.  
#If one uses for signal reconstruction an ideal,&nbsp; rectangular low-pass
 
+
::$$H(f) = \left\{ \begin{array}{l} T_{\rm A}/T_{\rm R} = 4  \\ 0 \\  \end{array} \right.\quad
*If one uses an ideal, rectangular lowpass for signal reconstruction.
 
:$$H(f) = \left\{ \begin{array}{l} T_{\rm A}/T_{\rm R} = 4  \\ 0 \\  \end{array} \right.\quad
 
 
\begin{array}{*{5}c}{\rm{for}}\\{\rm{for}}  \\ \end{array}\begin{array}{*{10}c}
 
\begin{array}{*{5}c}{\rm{for}}\\{\rm{for}}  \\ \end{array}\begin{array}{*{10}c}
 
{\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| < f_{\rm A}/2}\hspace{0.05cm},  \\
 
{\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| < f_{\rm A}/2}\hspace{0.05cm},  \\
 
{\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| > f_{\rm A}/2}\hspace{0.05cm},  \\
 
{\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| > f_{\rm A}/2}\hspace{0.05cm},  \\
\end{array}$$
+
\end{array},$$
:so for the output spectrum&nbsp; $V(f) = Q(f)$&nbsp; and accordingly&nbsp; $v(t) = q(t)$.
+
::then for the output spectrum&nbsp; $V(f) = Q(f)$ &nbsp; &rArr; &nbsp; $v(t) = q(t)$.
 +
 
  
 
{{BlaueBox|TEXT=
 
{{BlaueBox|TEXT=
 
$\text{Conclusion:}$&nbsp;  
 
$\text{Conclusion:}$&nbsp;  
*For natural sampling, a rectangular&ndash;low-pass filter is sufficient for signal reconstruction as for ideal sampling (with Dirac pulse).
+
*For natural sampling,&nbsp; '''a rectangular&ndash;low-pass filter is sufficient for signal reconstruction'''&nbsp; as for ideal sampling&nbsp; (with Dirac comb).
*However, for amplitude matching in the passband, a gain by the factor&nbsp; $T_{\rm A}/T_{\rm R}$&nbsp; must be considered. }}
+
*However,&nbsp; for amplitude matching in the passband,&nbsp; a gain by the factor&nbsp; $T_{\rm A}/T_{\rm R}$&nbsp; must be considered. }}
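The vanishing of the Dirac lines at&nbsp; $±4f_{\rm A}$&nbsp; can be checked numerically.&nbsp; The following short sketch is not part of the original derivation;&nbsp; it only evaluates the weight function with the example values&nbsp; $T_{\rm R}/T_{\rm A} = 0.25$&nbsp; from above,&nbsp; using NumPy's normalized convention&nbsp; ${\rm sinc}(x) = \sin(πx)/(πx)$,&nbsp; which matches the notation of this article:

```python
import numpy as np

T_R_over_T_A = 0.25                      # T_R = 25 µs, T_A = 100 µs (example values)

# Weight of the Dirac line at f = µ·f_A:  G_R(µ·f_A)/T_A = T_R/T_A · sinc(µ·f_A·T_R),
# where f_A·T_R = T_R/T_A because f_A = 1/T_A.
for mu in range(5):
    w = T_R_over_T_A * np.sinc(mu * T_R_over_T_A)
    print(f"µ = ±{mu}:  line weight = {w:.4f}")
```

The weight falls from&nbsp; $0.25$&nbsp; at&nbsp; $f = 0$&nbsp; to exactly zero at&nbsp; $\mu = ±4$,&nbsp; since&nbsp; ${\rm sinc}(1) = 0$.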
  
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp;
In&nbsp; &raquo;'''discrete sampling'''&laquo;&nbsp; the multiplication of the Dirac comb&nbsp; $p_δ(t)$&nbsp; with the source signal&nbsp; $q(t)$&nbsp; takes place first&nbsp; &ndash; at least mentally &ndash;&nbsp; and only afterwards the convolution with the rectangular pulse&nbsp; $g_{\rm R}(t)$:

:$$q_{\rm A}(t) = \big [ {1}/{T_{\rm A} } \cdot p_{\rm \delta}(t)
\cdot q(t)\big ]\star g_{\rm R}(t) \hspace{0.3cm} \Rightarrow \hspace{0.3cm}Q_{\rm A}(f) = \big [ P_{\rm \delta}(f) \star Q(f) \big ] \cdot G_{\rm R}(f)/{T_{\rm A} } \hspace{0.05cm}.$$
*It is irrelevant,&nbsp; but quite convenient,&nbsp; that here the factor&nbsp; $1/T_{\rm A}$&nbsp; has been added to the evaluation function&nbsp; $G_{\rm R}(f)$.
*Thus,&nbsp; $G_{\rm R}(f)/T_{\rm A} = T_{\rm R}/T_{\rm A} \cdot {\rm sinc}(f\cdot T_{\rm R}).$}}
[[File:EN_Mod_T_4_1_S3c_neu.png|right|frame| Spectrum when discretely sampled with a rectangular comb]]
*The upper graph shows&nbsp; (highlighted in green)&nbsp; the spectral function&nbsp; $P_δ(f) \star Q(f)$&nbsp; after ideal sampling.
*In contrast,&nbsp; discrete sampling with a rectangular comb yields the spectrum&nbsp; $Q_{\rm A}(f)$&nbsp; corresponding to the lower graph.


You can see from this plot:
#Each of the infinitely many partial spectra now has a different shape.&nbsp; Only the middle spectrum around&nbsp; $f = 0$&nbsp; is important.
#All other spectral components are removed at the receiver side by the low-pass of the signal reconstruction.
#If one uses for this low-pass again a rectangular filter with the gain&nbsp; $T_{\rm A}/T_{\rm R}$&nbsp; in the passband,&nbsp; one obtains for the output spectrum:
:$$V(f) = Q(f) \cdot {\rm sinc}(f \cdot T_{\rm R}) \hspace{0.05cm}.$$
 
<br clear=all>
{{BlaueBox|TEXT=
$\text{Conclusion:}$&nbsp; '''Discrete sampling and rectangular filtering result in attenuation distortions'''&nbsp; according to the weighting function&nbsp; ${\rm sinc}(f \cdot T_{\rm R})$.
*These become stronger the larger&nbsp; $T_{\rm R}$&nbsp; is.&nbsp; Only in the limiting case&nbsp; $T_{\rm R} → 0$&nbsp; does&nbsp; ${\rm sinc}(f\cdot T_{\rm R}) = 1$&nbsp; hold.
*However,&nbsp; ideal equalization can fully compensate for these linear attenuation distortions.&nbsp; To obtain&nbsp; $V(f) = Q(f)$&nbsp; resp.&nbsp; $v(t) = q(t)$,&nbsp; the following must hold:
:$$H(f) = \left\{ \begin{array}{l} (T_{\rm A}/T_{\rm R})/{\rm sinc}(f \cdot T_{\rm R})  \\ 0 \\  \end{array} \right.\quad\begin{array}{*{5}c}{\rm{for} }\\{\rm{for} }  \\ \end{array}\begin{array}{*{10}c}
{\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert < f_{\rm A}/2}\hspace{0.05cm},  \\
{\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert > f_{\rm A}/2.}  \\
\end{array}$$}}
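How large these attenuation distortions actually are can be estimated quickly.&nbsp; The following sketch is not part of the original text and assumes the example value&nbsp; $T_{\rm R}/T_{\rm A} = 0.25$&nbsp; from above;&nbsp; it evaluates the droop&nbsp; ${\rm sinc}(f \cdot T_{\rm R})$&nbsp; and the equalizing filter&nbsp; $H(f)$&nbsp; across the passband&nbsp; $|f| < f_{\rm A}/2$:

```python
import numpy as np

T_A = 1.0                                # normalized sampling distance
T_R = 0.25 * T_A                         # rectangular pulse duration (example value)
f_A = 1.0 / T_A

f = np.linspace(0.0, f_A / 2, 6)         # frequencies inside the passband
droop = np.sinc(f * T_R)                 # attenuation distortion ~ sinc(f·T_R)
H_eq = (T_A / T_R) / droop               # equalizer H(f) for |f| < f_A/2

for fi, d, h in zip(f, droop, H_eq):
    print(f"f = {fi:4.2f}·f_A:  sinc droop = {d:.4f},  H(f) = {h:.4f}")
```

At the band edge&nbsp; $f = f_{\rm A}/2$&nbsp; the argument is&nbsp; $f \cdot T_{\rm R} = 0.125$&nbsp; and the droop is about&nbsp; $0.974$&nbsp; $($roughly $0.23 \ \rm dB)$,&nbsp; which illustrates why short sampling pulses need only mild equalization.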
 
   
 
   
 
==Quantization and quantization noise==
<br>
The second functional unit&nbsp; &raquo;'''Quantization'''&laquo;&nbsp; of the PCM transmitter is used for value discretization.
*For this purpose the whole value range of the analog source signal&nbsp; $($e.g.,&nbsp; the range $± q_{\rm max})$&nbsp; is divided into&nbsp; $M$&nbsp; intervals.
*Each sample&nbsp; $q_{\rm A}(ν ⋅ T_{\rm A})$&nbsp; is then assigned a representative&nbsp; $q_{\rm Q}(ν ⋅ T_{\rm A})$&nbsp; of the associated interval&nbsp; $($e.g.,&nbsp; the interval center$)$.
  
  
 
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp; The graph illustrates the unit&nbsp; "quantization"&nbsp; using the quantization step number&nbsp; $M = 8$&nbsp; as an example.

[[File:Mod_T_4_1_S4a_vers2.png |right|frame| To illustrate&nbsp; "quantization"&nbsp; with&nbsp; $M = 8$&nbsp; steps]]

*In fact,&nbsp; a power of two is always chosen for&nbsp; $M$&nbsp; in practice because of the subsequent binary coding.
*Each of the samples&nbsp; $q_{\rm A}(ν \cdot T_{\rm A})$&nbsp; marked by circles is replaced by the corresponding quantized value&nbsp; $q_{\rm Q}(ν \cdot T_{\rm A})$.&nbsp; The quantized values are entered as crosses.
*However,&nbsp; this process of value discretization is associated with an irreversible falsification.
*The falsification&nbsp; $ε_ν = q_{\rm Q}(ν \cdot T_{\rm A}) \ - \ q_{\rm A}(ν \cdot T_{\rm A})$&nbsp; depends on the quantization level number&nbsp; $M$.&nbsp; The following bound applies:
:$$\vert \varepsilon_{\nu} \vert < {1}/{2} \cdot2/M \cdot q_{\rm max}= {q_{\rm max} }/{M}\hspace{0.05cm}.$$}}
  
  
 
{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp; One refers to the second moment of the error quantity&nbsp; $ε_ν$&nbsp; as the&nbsp; &raquo;'''quantization noise power'''&laquo;:
:$$P_{\rm Q} = \frac{1}{2N+1 } \cdot\sum_{\nu = -N}^{+N}\varepsilon_{\nu}^2 \approx \frac{1}{N \cdot
T_{\rm A} } \cdot \int_{0}^{N \cdot T_{\rm A} }\varepsilon(t)^2 \hspace{0.05cm}{\rm d}t \hspace{0.3cm} {\rm with}\hspace{0.3cm}\varepsilon(t) = q_{\rm Q}(t) - q(t) \hspace{0.05cm}.$$}}

Notes:
*For calculating the quantization noise power&nbsp; $P_{\rm Q}$&nbsp; the given approximation of&nbsp; "spontaneous quantization"&nbsp; is usually used.
*Here,&nbsp; one ignores sampling and forms the error signal from the continuous-time signals&nbsp; $q_{\rm Q}(t)$&nbsp; and&nbsp; $q(t)$.
*$P_{\rm Q}$&nbsp; also depends on the source signal&nbsp; $q(t)$.&nbsp; Assuming that&nbsp; $q(t)$&nbsp; takes all values between&nbsp; $±q_{\rm max}$&nbsp; with equal probability and the quantizer is designed exactly for this range,&nbsp; we get according to&nbsp; [[Aufgaben:Aufgabe_4.4:_Zum_Quantisierungsrauschen| "Exercise 4.4"]]:
:$$P_{\rm Q} = \frac{q_{\rm max}^2}{3 \cdot M^2 } \hspace{0.05cm}.$$
*In a speech or music signal,&nbsp; arbitrarily large amplitude values can occur&nbsp; &ndash; even if only very rarely.&nbsp; In this case,&nbsp; for&nbsp; $q_{\rm max}$&nbsp; usually that amplitude value is used which is exceeded&nbsp; (in amplitude)&nbsp; only at&nbsp; $1\%$&nbsp; of all times.
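The formula&nbsp; $P_{\rm Q} = q_{\rm max}^2/(3 \cdot M^2)$&nbsp; can be verified by simulation.&nbsp; The following sketch is not part of the original text;&nbsp; it draws uniformly distributed samples instead of a concrete&nbsp; $q(t)$&nbsp; and quantizes them to interval centers,&nbsp; as in Example 2:

```python
import numpy as np

rng = np.random.default_rng(1)
q_max, M = 1.0, 8                        # modulation range ±q_max, M quantization steps
q = rng.uniform(-q_max, q_max, 100_000)  # uniformly distributed source samples

# Uniform quantization: map each sample to the center of its interval
delta = 2 * q_max / M                    # interval width
idx = np.clip(np.floor((q + q_max) / delta), 0, M - 1)
q_Q = -q_max + (idx + 0.5) * delta       # interval centers as representatives

eps = q_Q - q                            # quantization error, |eps| < q_max/M
P_Q = np.mean(eps**2)                    # simulated quantization noise power
print(P_Q, q_max**2 / (3 * M**2))        # simulation vs. the formula above
```

The simulated value agrees with&nbsp; $q_{\rm max}^2/(3 \cdot M^2) = 1/192 ≈ 0.0052$&nbsp; up to statistical fluctuations,&nbsp; and the error magnitude never exceeds the bound&nbsp; $q_{\rm max}/M$&nbsp; from Example 2.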
  
 
==PCM encoding and decoding==
<br>
The block&nbsp; &raquo;'''PCM coding'''&laquo;&nbsp; is used to convert the discrete-time&nbsp; (after sampling)&nbsp; and discrete-value&nbsp; (after quantization with&nbsp; $M$&nbsp; steps)&nbsp; signal values&nbsp; $q_{\rm Q}(ν \cdot T_{\rm A})$&nbsp; into a sequence of&nbsp; $N = {\rm log_2}(M)$&nbsp; binary values. &nbsp; Logarithm to base 2 &nbsp; ⇒ &nbsp; "binary logarithm".
  
 
{{GraueBox|TEXT=
$\text{Example 3:}$&nbsp; Each binary value &nbsp; &rArr; &nbsp; bit is represented by a rectangle of duration&nbsp; $T_{\rm B} = T_{\rm A}/N$,&nbsp; resulting in the signal&nbsp; $q_{\rm C}(t)$.&nbsp; You can see:

[[File: Mod_T_4_1_S5a_vers2.png|right|frame | PCM coding with the dual code&nbsp; $(M = 8,\ N = 3)$]]

*Here,&nbsp; the&nbsp; "dual code"&nbsp; is used &nbsp; &rArr; &nbsp; the quantization intervals&nbsp; $\mu$&nbsp; are numbered consecutively from&nbsp; $0$&nbsp; to&nbsp; $M-1$&nbsp; and then written in simple binary.&nbsp; With&nbsp; $M = 8$,&nbsp; for example,&nbsp; $\mu = 6$ &nbsp; ⇔ &nbsp; '''110'''.
*The three symbols of the binary encoded signal&nbsp; $q_{\rm C}(t)$&nbsp; are obtained by replacing&nbsp; '''0'''&nbsp; by&nbsp; '''L'''&nbsp; ("Low")&nbsp; and&nbsp; '''1'''&nbsp; by&nbsp; '''H'''&nbsp; ("High").&nbsp; This gives in the example the sequence&nbsp; "'''HHL HHL LLH LHL HLH LHH'''".
*The bit duration&nbsp; $T_{\rm B}$&nbsp; is here shorter than the sampling distance&nbsp; $T_{\rm A} = 1/f_{\rm A}$&nbsp; by a factor&nbsp; $N = {\rm log_2}(M) = 3$.&nbsp; So,&nbsp; the bit rate is&nbsp; $R_{\rm B} = {\rm log_2}(M) \cdot f_{\rm A}$.
*If one uses the same mapping in decoding&nbsp; $(v_{\rm C} &nbsp; ⇒ &nbsp; v_{\rm Q})$&nbsp; as in encoding&nbsp; $(q_{\rm Q} &nbsp; ⇒ &nbsp; q_{\rm C})$,&nbsp; then,&nbsp; if there are no transmission errors: &nbsp; &nbsp; $v_{\rm Q}(ν \cdot T_{\rm A}) = q_{\rm Q}(ν \cdot T_{\rm A}). $
*An alternative to the dual code is the&nbsp; "Gray code",&nbsp; where adjacent binary values differ only in one bit.&nbsp; For&nbsp; $N = 3$:
:&nbsp; $\mu = 0$:&nbsp; '''LLL''', &nbsp; &nbsp; $\mu = 1$:&nbsp; '''LLH''', &nbsp; &nbsp; $\mu = 2$:&nbsp; '''LHH''', &nbsp; &nbsp; $\mu = 3$: &nbsp; '''LHL''',
:&nbsp; $\mu = 4$:&nbsp; '''HHL''', &nbsp; &nbsp; $\mu = 5$:&nbsp; '''HHH''', &nbsp; &nbsp; $\mu =6$:&nbsp; '''HLH''', &nbsp; &nbsp; $\mu = 7$:&nbsp; '''HLL'''. }}
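The two mappings of Example 3 can be sketched in a few lines of code&nbsp; (not part of the original text).&nbsp; The Gray code is generated here by the usual binary-reflected rule&nbsp; $\mu → \mu ⊕ (\mu \gg 1)$,&nbsp; which for&nbsp; $N = 3$&nbsp; reproduces exactly the table above:

```python
def dual_code(mu: int, N: int) -> str:
    """Dual code: interval number mu in plain binary, with 0 -> L and 1 -> H."""
    bits = format(mu, f"0{N}b")
    return bits.translate(str.maketrans("01", "LH"))

def gray_code(mu: int, N: int) -> str:
    """Binary-reflected Gray code: adjacent mu differ in exactly one bit."""
    return dual_code(mu ^ (mu >> 1), N)

N = 3
for mu in range(2 ** N):
    print(f"µ = {mu}: dual = {dual_code(mu, N)}, Gray = {gray_code(mu, N)}")
    # e.g. µ = 6: dual = HHL, Gray = HLH
```

For instance,&nbsp; `dual_code(6, 3)`&nbsp; yields&nbsp; '''HHL'''&nbsp; $($binary '''110'''$)$,&nbsp; matching the example with&nbsp; $\mu = 6$&nbsp; above.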
  
 
==Signal-to-noise power ratio==
<br>
The digital&nbsp; "pulse code modulation"&nbsp; $\rm (PCM)$&nbsp; is now compared to analog modulation methods&nbsp; $\rm (AM, \ FM)$&nbsp; regarding the achievable sink SNR&nbsp; $ρ_v = P_q/P_ε$&nbsp; with AWGN noise.&nbsp; As denoted in previous chapters&nbsp; [[Modulation_Methods/Influence_of_Noise_on_Systems_with_Angle_Modulation|$\text{(for example)}$]],&nbsp; the&nbsp; "performance parameter"&nbsp; $ξ = {α_{\rm K}}^2 \cdot P_{\rm S}/(N_0 \cdot B_{\rm NF})$&nbsp; summarizes different influences:

[[File:EN_Mod_T_4_1_S6a.png |right|frame| Sink SNR at AM,&nbsp; FM,&nbsp; and&nbsp; PCM 30/32 ]]

#The channel transmission factor&nbsp; $α_{\rm K}$&nbsp; (quadratic),
#the transmit power&nbsp; $P_{\rm S}$&nbsp; (linear),
#the AWGN noise power density&nbsp; $N_0$&nbsp; (reciprocal),&nbsp; and
#the signal bandwidth&nbsp; $B_{\rm NF}$&nbsp; (reciprocal);&nbsp; for a harmonic oscillation: &nbsp; signal frequency&nbsp; $f_{\rm N}$&nbsp; instead of&nbsp; $B_{\rm NF}$.


The two comparison curves for&nbsp; [[Modulation_Methods/Envelope_Demodulation#Influence_of_additive_white_Gaussian_noise|$\text{amplitude modulation}$]]&nbsp; and&nbsp; [[Modulation_Methods/Influence_of_Noise_on_Systems_with_Angle_Modulation#System_comparison_of_AM.2C_PM_and_FM_with_respect_to_noise|$\text{frequency modulation}$]]&nbsp; can be described as follows:
*Double-sideband amplitude modulation&nbsp; $\text{(DSB&ndash;AM)}$&nbsp; without carrier&nbsp; $(m \to \infty)$:
:$$ρ_v = ξ \ ⇒ \ 10 \cdot \lg \ ρ_v = 10 \cdot \lg \ ξ.$$
*Frequency modulation&nbsp; $\text{(FM)}$&nbsp; with modulation index&nbsp; $η = 3$:
:$$ρ_v = 3/2 \cdot η^2 \cdot ξ = 13.5 \cdot ξ \ ⇒ \ 10 \cdot \lg \ ρ_v = 10 \cdot \lg \ ξ + 11.3 \ \rm dB.$$

The curve for the&nbsp; [https://en.wikipedia.org/wiki/PCM30 $\text{PCM 30/32}$]&nbsp; system should be interpreted as follows:
*If the performance parameter &nbsp;$ξ$&nbsp; is sufficiently large,&nbsp; then no transmission errors occur.&nbsp; The error signal&nbsp; $ε(t) = v(t) \ - \ q(t)$&nbsp; is then due to quantization alone&nbsp; $(P_ε = P_{\rm Q})$.
*With the quantization step number&nbsp; $M = 2^N$&nbsp; the following holds approximately in this case:
:$$\rho_{v} = \frac{P_q}{P_\varepsilon}= M^2 = 2^{2N} \hspace{0.3cm}\Rightarrow \hspace{0.3cm} 10 \cdot {\rm lg}\hspace{0.1cm}\rho_{v}=20 \cdot {\rm lg}\hspace{0.1cm}M = N \cdot 6.02\,{\rm dB}$$
:$$ \Rightarrow \hspace{0.3cm} N = 8, \hspace{0.05cm} M =256\text{:}\hspace{0.2cm}10 \cdot {\rm lg}\hspace{0.1cm}\rho_{v}=48.16\,{\rm dB}\hspace{0.05cm}.$$
:Note that the given equation is exactly valid only for a sawtooth shaped source signal. &nbsp; However,&nbsp; for a cosine shaped signal the deviation from this is not very large.
*As &nbsp;$ξ$&nbsp; decreases&nbsp; (smaller transmit power or larger noise power density),&nbsp; the transmission errors increase.&nbsp; Thus &nbsp;$P_ε > P_{\rm Q}$&nbsp; and the sink-to-noise ratio becomes smaller.
*PCM&nbsp; $($with $M = 256)$&nbsp; is superior to the analog methods&nbsp; $($AM and FM$)$&nbsp; only in the lower and middle &nbsp;$ξ$&ndash;range.&nbsp; But if transmission errors do not play a role anymore,&nbsp; no improvement can be achieved by a larger &nbsp;$ξ$&nbsp; $($horizontal curve section with yellow background$)$.
*An improvement is only achieved by increasing &nbsp;$N$&nbsp; $($number of bits per sample$)$&nbsp; &rArr; &nbsp; larger&nbsp; $M = 2^N$&nbsp; $($number of quantization steps$)$. &nbsp; For example,&nbsp; for a&nbsp; &raquo;'''Compact Disc'''&laquo;&nbsp; $\rm (CD)$&nbsp; with parameter&nbsp; $N = 16$ &nbsp; ⇒ &nbsp; $M = 65536$&nbsp; the sink SNR is:
:$$10 \cdot \lg \ ρ_v = 96.32 \ \rm dB.$$
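The quantization-limited SNR values used in this section follow directly from&nbsp; $10 \cdot {\rm lg}\hspace{0.1cm}ρ_v = 20 \cdot {\rm lg}\hspace{0.1cm}M$.&nbsp; A small sketch&nbsp; (not part of the original text)&nbsp; evaluating this for&nbsp; $N = 3,\ 8,\ 16$:

```python
import math

for N in (3, 8, 16):                     # bits per sample
    M = 2 ** N                           # number of quantization steps
    snr_db = 20 * math.log10(M)          # 10·lg(ρ_v) = 10·lg(M²) = N · 6.02 dB
    print(f"N = {N:2d}, M = {M:5d}:  10·lg ρ_v = {snr_db:.2f} dB")
```

For&nbsp; $N = 8$&nbsp; this yields the&nbsp; $48.16 \ \rm dB$&nbsp; stated above,&nbsp; and for the Compact Disc&nbsp; $(N = 16)$&nbsp; approximately&nbsp; $96.3 \ \rm dB$.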
  
 
{{GraueBox|TEXT=
$\text{Example 4:}$&nbsp;
The following graph shows the limiting influence of quantization:
*Here,&nbsp; transmission errors are excluded.&nbsp; Sampling and signal reconstruction are best fit to&nbsp; $q(t)$.
*White dotted is the source signal&nbsp; $q(t)$,&nbsp; green dotted is the sink signal&nbsp; $v(t)$&nbsp; after PCM with&nbsp; $N = 4$ &nbsp; ⇒ &nbsp; $M = 16$.
*Sampling times are marked by crosses.


[[File:EN_Mod_T_4_1_S6b.png|right|frame|Influence of quantization with&nbsp; $N = 4$&nbsp; and&nbsp; $N = 8$<br><br><br>]]
This image can be interpreted as follows:
*With&nbsp; $N = 8$ &nbsp; ⇒ &nbsp; $M = 256$&nbsp; the sink signal&nbsp; $v(t)$&nbsp; cannot be distinguished with the naked eye from the source signal&nbsp; $q(t)$.&nbsp; The white dotted signal curve applies approximately to both.
*But from the signal-to-noise ratio&nbsp; $10 \cdot \lg \ ρ_v = 47.8 \ \rm dB$&nbsp; it can be seen that the quantization noise power&nbsp; $P_\varepsilon$&nbsp; is still&nbsp; $1.6 \cdot 10^{-5}$&nbsp; times the power&nbsp; $P_q$&nbsp; of the source signal.
*This SNR would already be clearly audible with a speech or music signal.
*Although&nbsp; $q(t)$&nbsp; is neither sawtooth nor cosine shaped,&nbsp; but is composed of several frequency components,&nbsp; the approximation &nbsp;$ρ_v ≈ M^2$ &nbsp; ⇒ &nbsp; $10 \cdot \lg \ ρ_v = 48.16 \ \rm dB$&nbsp; deviates only insignificantly from the actual value.
*In contrast,&nbsp; for &nbsp;$N = 4$ &nbsp; ⇒ &nbsp; $M = 16$&nbsp; the deviations between sink signal&nbsp; (marked in green)&nbsp; and source signal&nbsp; (marked in white)&nbsp; can already be seen in the image,&nbsp; which is also quantitatively expressed by the very small SNR &nbsp;$10 \cdot \lg \ ρ_v = 28.2 \ \rm dB$. }}
  
 
==Influence of transmission errors==
 
==Influence of transmission errors==
 
<br>
 
<br>
Starting from the same analog signal&nbsp; $q(t)$&nbsp; as in the last section and a linear quantization with &nbsp;$N = 8$ bits &nbsp; ⇒ &nbsp; $M = 256$&nbsp; the effects of transmission errors are now illustrated using the respective sink signal&nbsp; $v(t)$&nbsp;.
+
Starting from the same analog signal&nbsp; $q(t)$&nbsp; as in the last section and a linear quantization with &nbsp;$N = 8$ bits &nbsp; ⇒ &nbsp; $M = 256$&nbsp; the effects of transmission errors are now illustrated using the respective sink signal&nbsp; $v(t)$.
  
[[File:EN_Mod_T_4_1_S7a.png |center|frame| Influence of a transmission error concerning&nbsp; '''Bit 5''''&nbsp; at the dual code]]
+
[[File:EN_Mod_T_4_1_S7a.png |right|frame| Influence of a transmission error concerning&nbsp; '''Bit 5'''&nbsp; at the dual code, meaning that the lowest quantization interval&nbsp; $(\mu = 0)$&nbsp; is represented with&nbsp; '''LLLL LLLL'''&nbsp; and the highest interval&nbsp; $(\mu = 255)$&nbsp; is represented with&nbsp; '''HHHH HHHH'''.]]
  
*The white dots again mark the source signal&nbsp; $q(t)$.&nbsp; Without transmission error the sink signal&nbsp; $v(t)$&nbsp; has the same course when neglecting quantization.
+
[[File:EN_Mod_T_4_1_S7b.png |right|frame| Table:&nbsp; Results of the bit error analysis. &nbsp;Note: &nbsp; &nbsp; $10 · \lg \ ρ_v$&nbsp; was calculated from the presented signal of duration&nbsp; $10 \cdot T_{\rm A}$&nbsp; $($only&nbsp; $10 \cdot 8 = 80$&nbsp; bits$)$ &nbsp; &rArr; &nbsp;   each transmission error corresponds to a bit error rate of&nbsp; $1.25\%$.]]
*Now, exactly one bit of the fifth sample at a time&nbsp; $q(5 - T_{\rm A}) = -0.715$&nbsp; is corrupted, where this sample has been coded as&nbsp; '''LLHL LHLL''''&nbsp; . &nbsp; This graph is based on dual code, meaning that the lowest quantization interval&nbsp; $(\mu = 0)$&nbsp; is represented with&nbsp; '''LLLL LLLL''''&nbsp; and the highest interval&nbsp; $(\mu = 255)$&nbsp; is represented with&nbsp; '''HHHH HHHH'''&nbsp;.
 
  
[[File:EN_Mod_T_4_1_S7b.png |right|frame| Table showing the results of the bit error analysis]]
+
*The white dots mark the source signal&nbsp; $q(t)$.&nbsp; Without transmission error the sink signal&nbsp; $v(t)$&nbsp; has the same course when neglecting quantization.
 +
*Now,&nbsp; exactly one bit of the fifth sample&nbsp; $q(5 \cdot T_{\rm A}) = -0.715$&nbsp; is falsified,&nbsp; where this sample has been encoded  as&nbsp; '''LLHL LHLL'''.
 +
<br><br><br><br><br>
  
 +
The results of the error analysis shown in the graph and the table below can be summarized as follows:
 +
*If only the last bit &nbsp; &rArr; &nbsp; "Least Significant Bit" &nbsp; &rArr; &nbsp; $\rm (LSB)$&nbsp; of the binary word is falsified&nbsp; $($'''LLHL LHL<u>L</u> &nbsp; ⇒ &nbsp; LLHL LHL<u>H</u>''',&nbsp;  white curve$)$,&nbsp; then no difference from error-free transmission is visible to the naked eye. Nevertheless,&nbsp; the signal-to-noise ratio is reduced by &nbsp; $3.5 \ \rm dB$.
*An error of the fourth last bit leads to a clearly detectable distortion by eight quantization steps &nbsp; $($'''LLHL<u>L</u>HLL ⇒ LLHL<u>H</u>HLL''',&nbsp; green curve$)$: &nbsp; $v(5T_{\rm A}) \ - \ q(5T_{\rm A}) = 8/256 \cdot 2 = 0.0625$&nbsp; and the signal-to-noise ratio drops to &nbsp; $10 \cdot \lg \ ρ_υ = 28.2 \ \rm dB$.
*Finally,&nbsp; the red curve shows the case where the&nbsp; $\rm MSB$&nbsp; ("Most Significant Bit")&nbsp; is falsified: &nbsp; '''<u>L</u>LHLLHLL ⇒ <u>H</u>LHLLHLL''' &nbsp; &rArr;  &nbsp; distortion&nbsp; $v(5T_{\rm A}) \ - \ q(5T_{\rm A}) = 1$&nbsp; $($corresponding to half the modulation range$)$.&nbsp; The SNR is now only about &nbsp; $4 \ \rm dB$.
*At all sampling times except&nbsp; $5T_{\rm A}$,&nbsp; $v(t)$&nbsp; matches exactly with&nbsp; $q(t)$&nbsp; except for the quantization error.&nbsp; Outside these points marked by yellow crosses,&nbsp; the single error at&nbsp; $5T_{\rm A}$&nbsp; leads to strong deviations in an extended range,&nbsp; due to the interpolation with the&nbsp; $\rm sinc$-shaped impulse response of the reconstruction low-pass&nbsp; $H(f)$.
  
==Estimation of SNR degradation due to transmission errors==
<br>
Now we will try to&nbsp; (approximately)&nbsp; determine the SNR curve of the PCM system taking bit errors into account.&nbsp; We start from the following block diagram and further assume:
[[File:EN_Mod_T_4_1_S7c.png |right|frame|For calculating the SNR curve  of the PCM system;&nbsp; bit errors are taken into account]]
  
*Each sample&nbsp; $q_{\rm A}(νT)$&nbsp; is quantized by&nbsp; $M$&nbsp; steps and represented by&nbsp; $N = {\rm log_2} (M)$&nbsp; bits.&nbsp; In the example:&nbsp; $M = 8$ &nbsp; ⇒ &nbsp; $N = 3$.
<br clear=all>
*The binary representation of&nbsp; $q_{\rm Q}(νT)$&nbsp; yields the coefficients&nbsp; $a_k\, (k = 1, \text{...} \hspace{0.08cm}, N)$,&nbsp; which can be falsified by bit errors to the coefficients&nbsp; $b_k$.&nbsp; Both&nbsp; $a_k$&nbsp; and&nbsp; $b_k$&nbsp; are&nbsp; $±1$,&nbsp; respectively.
*A bit error&nbsp; $(b_k ≠ a_k)$&nbsp; occurs with probability&nbsp; $p_{\rm B}$.&nbsp; Each bit is equally likely to be falsified and in each PCM word there is at most one error &nbsp; &rArr; &nbsp; only one of the&nbsp; $N$&nbsp; bits can be wrong.
  
  
From the diagram given in the graph,&nbsp; it can be seen for&nbsp; $N = 3$&nbsp; and natural binary coding&nbsp; ("Dual Code"):
*A falsification of&nbsp; $a_1$&nbsp; changes the value&nbsp; $q_{\rm Q}(νT)$&nbsp; by&nbsp; $±A$.
*A falsification of&nbsp; $a_2$&nbsp; changes the  value&nbsp; $q_{\rm Q}(νT)$&nbsp; by&nbsp; $±A/2$.
*A falsification of&nbsp; $a_3$&nbsp; changes the value&nbsp; $q_{\rm Q}(νT)$&nbsp; by&nbsp; $±A/4$.
For the case when&nbsp; (only)&nbsp; the coefficient&nbsp; $a_k$&nbsp; was falsified,&nbsp; we obtain by generalization for the deviation:
 
:$$\varepsilon_k = υ_{\rm Q}(νT) \ - \ q_{\rm Q}(νT)= - a_k \cdot A \cdot 2^{-k +1}
  \hspace{0.05cm}.$$
After averaging over all falsification values&nbsp; $ε_k$ &nbsp; (with&nbsp; $1 ≤ k ≤ N)$ &nbsp; taking into account the bit error probability&nbsp; $p_{\rm B}$&nbsp; we obtain for the&nbsp; "error noise power": 
:$$P_{\rm E}= {\rm E}\big[\varepsilon_k^2 \big] = \sum\limits^{N}_{k = 1} p_{\rm B} \cdot \left ( - a_k \cdot A \cdot 2^{-k +1} \right )^2 =\ p_{\rm B} \cdot A^2 \cdot \sum\limits^{N-1}_{k = 0} 2^{-2k } = p_{\rm B} \cdot A^2 \cdot \frac{1- 2^{-2N }}{1- 2^{-2 }} \approx {4}/{3} \cdot p_{\rm B} \cdot A^2 \hspace{0.05cm}.$$
  
*Here are used the sum formula of the geometric series and the approximation&nbsp; $1 - 2^{-2N } ≈ 1$.  
*For&nbsp; $N = 8$ &nbsp; ⇒ &nbsp; $M = 256$&nbsp; the associated relative error is about&nbsp; $\rm 10^{-5}$.  
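The geometric-series step above is easy to check numerically.&nbsp; The following Python sketch&nbsp; $($function names are illustrative,&nbsp; not from the text$)$&nbsp; sums the error powers of the&nbsp; $N$&nbsp; bit positions directly and compares the result with the closed-form approximation&nbsp; ${4}/{3} \cdot p_{\rm B} \cdot A^2$:

```python
# Numerical check of the error noise power P_E (illustrative names):
# a single bit error in bit k changes the quantized value by ±A·2^(-k+1),
# and each of the N bit positions is falsified with probability p_B.

def error_noise_power_exact(p_B, A, N):
    """Direct sum over all N bit positions (finite geometric series)."""
    return sum(p_B * (A * 2.0 ** (-k + 1)) ** 2 for k in range(1, N + 1))

def error_noise_power_approx(p_B, A):
    """Approximation 4/3 · p_B · A², valid for 1 - 2^(-2N) ≈ 1."""
    return 4.0 / 3.0 * p_B * A ** 2

p_B, A, N = 0.0024, 1.0, 8
exact  = error_noise_power_exact(p_B, A, N)
approx = error_noise_power_approx(p_B, A)
rel_err = (approx - exact) / approx   # equals 2^(-2N), i.e. about 1.5·10^(-5) for N = 8
```

For&nbsp; $N = 8$&nbsp; the relative error of the approximation comes out as&nbsp; $2^{-16} ≈ 1.5 \cdot 10^{-5}$,&nbsp; in agreement with the statement above.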
 
 
  
  
Excluding transmission errors,&nbsp; the signal-to-noise power ratio&nbsp; $ρ_v = P_{\rm S}/P_{\rm Q}$&nbsp; has been found,&nbsp; where for a uniformly distributed source signal&nbsp; $($e.g. sawtooth-shaped$)$&nbsp; the signal power and quantization noise power are to be calculated as follows:
 
[[File:P_ID1904__Mod_T_4_1_S7d_ganz_neu.png |right|frame| Sink SNR for PCM considering bit errors]]  
 
:$$P_{\rm S}={A^2}/{3}\hspace{0.05cm},\hspace{0.3cm}P_{\rm Q}= {A^2}/{3} \cdot 2^{-2N } \hspace{0.05cm}.$$
 
Taking into account the transmission errors,&nbsp; the above result gives:  
:$$\rho_{\upsilon}= \frac{P_{\rm S}}{P_{\rm Q}+P_{\rm E}} = \frac{A^2/3}{A^2/3 \cdot 2^{-2N } + A^2/3 \cdot 4 \cdot p_{\rm B}} = \frac{1}{ 2^{-2N } + 4 \cdot p_{\rm B}} \hspace{0.05cm}.$$
  
The graph shows &nbsp;$10 \cdot \lg ρ_v$&nbsp; as a function of the&nbsp; (logarithmized)&nbsp; power parameter&nbsp; $ξ = P_{\rm S}/(N_0 \cdot B_{\rm NF})$,&nbsp; where&nbsp; $B_{\rm NF}$&nbsp; denotes the source signal bandwidth.&nbsp; Let the constant channel transmission factor be ideally&nbsp; $α_{\rm K} = 1$.&nbsp; The following then holds:
  
*For AWGN noise and the optimum binary system,&nbsp; the power parameter is also&nbsp; $ξ = E_{\rm B}/N_0$&nbsp; $($energy per bit related to noise power density$)$.&nbsp; The bit error probability is then given by the Gaussian error function&nbsp; ${\rm Q}(x)$:
:$$p_{\rm B}= {\rm Q} \left ( \sqrt{{2E_{\rm B}}/{N_0} }\right ) \hspace{0.05cm}.$$
 
*For&nbsp; $N = 8$ &nbsp; ⇒ &nbsp; $ 2^{-2{\it N} } = 1.5 \cdot 10^{-5}$&nbsp; and&nbsp; $10 \cdot \lg \ ξ = 6 \ \rm dB$ &nbsp; ⇒ &nbsp; $p_{\rm B} = 0.0024$&nbsp; $($point marked in red$)$&nbsp; results:  
 
:$$\rho_{\upsilon}= \frac{1}{ 1.5 \cdot 10^{-5} + 4 \cdot 0.0024} \approx 100 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}10 \cdot {\rm lg} \hspace{0.15cm}\rho_{\upsilon}\approx 20\,{\rm dB}
  \hspace{0.05cm}.$$
*This small &nbsp;$ρ_v$ value goes back to the term &nbsp;$4 · 0.0024$&nbsp; in the denominator&nbsp; $($influence of the transmission errors$)$&nbsp; while in the horizontal section of the curve for each&nbsp; $N$&nbsp; (number of bits per sample) the term &nbsp;$\rm 2^{-2{\it N} }$&nbsp; dominates - i.e. the quantization noise.
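The red point in the graph can be reproduced with a few lines of Python&nbsp; $($a sketch under the stated assumptions:&nbsp; dual code,&nbsp; at most one bit error per PCM word,&nbsp; ${\rm Q}(x)$&nbsp; implemented via the standard&nbsp; `erfc`&nbsp; function$)$:

```python
import math

def Q(x):
    """Gaussian error integral Q(x) = 0.5·erfc(x/√2)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sink_snr_db(xi_db, N):
    """10·lg ρ_v for power parameter 10·lg ξ (in dB) and N bits per sample."""
    xi = 10.0 ** (xi_db / 10.0)      # ξ = E_B/N_0 for the optimal binary system
    p_B = Q(math.sqrt(2.0 * xi))     # bit error probability for AWGN
    rho_v = 1.0 / (2.0 ** (-2 * N) + 4.0 * p_B)
    return 10.0 * math.log10(rho_v)

# Red point: N = 8, 10·lg ξ = 6 dB → p_B ≈ 0.0024, 10·lg ρ_v ≈ 20 dB
print(round(sink_snr_db(6.0, 8), 1))
```

Increasing&nbsp; `xi_db`&nbsp; drives&nbsp; $p_{\rm B}$&nbsp; towards zero,&nbsp; so that the curve saturates at the quantization-noise limit&nbsp; $10 \cdot \lg 2^{2N} ≈ 48.2 \ \rm dB$&nbsp; for&nbsp; $N = 8$.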
==Non-linear quantization==
 
<br>
 
Often the quantization intervals are not chosen equally large,&nbsp; but one uses a finer quantization for the inner amplitude range than for large amplitudes.&nbsp; There are several reasons for this:
[[File:EN_Mod_T_4_1_S8a.png|right|frame|Uniform quantization of a speech signal]]
*In audio signals,&nbsp; distortions of the quiet signal components&nbsp; (i.e. values near the zero line)&nbsp; are subjectively perceived as more disturbing than an impairment of large amplitude values.  
*Such an uneven quantization also leads to a larger sink SNR for such a music or speech signal,&nbsp; because here the signal amplitude is not uniformly distributed.  
  
  
The graph shows a speech signal&nbsp; $q(t)$&nbsp; and its amplitude distribution&nbsp; $f_q(q)$ &nbsp; &rArr; &nbsp; [[Theory_of_Stochastic_Signals/Probability_Density_Function|$\text{Probability density function}$]]&nbsp; $\rm (PDF)$.
  
This is the&nbsp; [[Theory_of_Stochastic_Signals/Exponentially_Distributed_Random_Variables#Two-sided_exponential_distribution_-_Laplace_distribution|$\text{Laplace distribution}$]],&nbsp; which can be approximated as follows:   
*by a continuous-valued two-sided exponential distribution,&nbsp; and  
*by a Dirac delta function&nbsp; $δ(q)$&nbsp; to account for the speech pauses&nbsp; (magenta colored).
 
  
 
   
 
   
In the graph,&nbsp; non-linear quantization is only indicated,&nbsp; e.g. by means of the 13-segment characteristic,&nbsp; which is described in more detail in&nbsp; [[Aufgaben:Exercise_4.5:_Non-Linear_Quantization|"Exercise 4.5"]]:
 
*The quantization intervals here become wider and wider towards the edges section by section.  
 
*The more frequent small amplitudes,&nbsp; on the other hand,&nbsp; are quantized very finely.  
 
<br clear=all>
 
==Compression and expansion==
 
 
<br>
 
Non-uniform quantization can be realized, for example, by
[[File:EN_Mod_T_4_1_S8b.png |right|frame| Realization of a non-uniform quantization]]  
*the sampled values &nbsp;$q_{\rm A}(ν \cdot T_{\rm A})$&nbsp; are first deformed by a nonlinear characteristic &nbsp;$q_{\rm K}(q_{\rm A})$,&nbsp; and
*subsequently,&nbsp; the resulting output values &nbsp;$q_{\rm K}(ν · T_{\rm A})$&nbsp; are uniformly quantized.
  
  
This results in the signal chain sketched on the right.
 
<br clear=all>
 
{{BlaueBox|TEXT=
 
$\text{Conclusion:}$&nbsp; Such a non-uniform quantization means:  
*Through the nonlinear characteristic&nbsp; $q_{\rm K}(q_{\rm A})$ &nbsp; &rArr; &nbsp; small signal values are amplified and large values are attenuated &nbsp; ⇒ &nbsp; &raquo;'''compression'''&laquo;.  
*This deliberate signal distortion is undone at the receiver by the inverse function&nbsp; $v_{\rm E}(υ_{\rm Q})$&nbsp; &nbsp; ⇒ &nbsp; &raquo;'''expansion'''&laquo;.  
*The total process of transmitter-side compression and receiver-side expansion is also called&nbsp; &raquo;'''companding'''&laquo;.}}
  
  
For the PCM system 30/32, the&nbsp; "Comité Consultatif International des Télégraphique et Téléphonique"&nbsp; $\rm (CCITT)$&nbsp; recommended the so-called&nbsp; "A&ndash;characteristic":  
:$$y(x) = \left\{ \begin{array}{l} \frac{1 + {\rm ln}(A \cdot x)}{1 + {\rm ln}(A)}  \\ \frac{A \cdot x}{1 + {\rm ln}(A)}  \\ - \frac{1 + {\rm ln}( - A \cdot x)}{1 + {\rm ln}(A)} \\  \end{array} \right.\quad\begin{array}{*{5}c}{\rm{for}}\\{\rm{for}}\\{\rm{for}}  \\ \end{array}\begin{array}{*{10}c}1/A \le x \le 1\hspace{0.05cm}, \\ - 1/A \le x \le 1/A\hspace{0.05cm}, \\ - 1 \le x \le - 1/A\hspace{0.05cm}.  \\ \end{array}$$
  
*Here,&nbsp; for abbreviation &nbsp; $x = q_{\rm A}(ν \cdot T_{\rm A})$ &nbsp; and&nbsp; $y = q_{\rm K}(ν \cdot T_{\rm A})$ &nbsp; are used.
 
*This characteristic curve with the value &nbsp;$A = 87.56$&nbsp; introduced in practice has a constantly changing slope.  
 
*For more details on this type of non-uniform quantization,&nbsp; see the&nbsp; [[Aufgaben:Exercise_4.6:_Quantization_Characteristics|"Exercise 4.6"]].   
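The A&ndash;characteristic and its inverse can be sketched in Python as follows&nbsp; $($the function names&nbsp; `a_law_compress`&nbsp; and&nbsp; `a_law_expand`&nbsp; are our own;&nbsp; only the positive branch follows the formula directly,&nbsp; the negative branch uses the odd symmetry of&nbsp; $y(x))$:

```python
import math

A = 87.56  # value introduced in practice

def a_law_compress(x):
    """A-characteristic y(x) for -1 ≤ x ≤ 1  (transmitter-side compression)."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    if x < 1.0 / A:                                  # inner (linear) segment
        y = A * x / (1.0 + math.log(A))
    else:                                            # outer (logarithmic) segment
        y = (1.0 + math.log(A * x)) / (1.0 + math.log(A))
    return sign * y

def a_law_expand(y):
    """Inverse characteristic  (receiver-side expansion)."""
    sign = 1.0 if y >= 0 else -1.0
    y = abs(y)
    if y < 1.0 / (1.0 + math.log(A)):                # inner segment: invert linear part
        x = y * (1.0 + math.log(A)) / A
    else:                                            # outer segment: invert logarithm
        x = math.exp(y * (1.0 + math.log(A)) - 1.0) / A
    return sign * x

# companding: the expansion undoes the deliberate signal distortion
for x in (-1.0, -0.3, -0.005, 0.0, 0.005, 0.3, 1.0):
    assert abs(a_law_expand(a_law_compress(x)) - x) < 1e-12
```

Note that the slope of the inner segment is&nbsp; $A/(1 + {\rm ln}\ A) ≈ 16$,&nbsp; which is exactly the amplification of small signal values mentioned in the conclusion above.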
  
  
&rArr; &nbsp; ''Note:'' &nbsp; In the third part of the&nbsp; (German language)&nbsp; learning video&nbsp; [[Pulscodemodulation_(Lernvideo)|"Pulse Code Modulation"]]&nbsp; are covered:  
*the definition of signal-to-noise power ratio&nbsp; $\rm (SNR)$,  
 
*the influence of quantization noise and transmission errors,  
 
*the differences between linear and non-linear quantization.
  
  

Latest revision as of 14:29, 23 January 2023


In detail are treated:

  1. the  »pulse code modulation«  $\rm (PCM)$  and its components  "sampling"  –  "quantization"  –   "encoding",
  2. the  »linear modulation«  $\rm ASK$,  $\rm BPSK$,  $\rm DPSK$  and associated demodulators,
  3. the  »quadrature amplitude modulation«  $\rm (QAM)$  and more complicated signal space mappings,
  4. the  »frequency shift keying«  $\rm (FSK$)  as an example of non-linear digital modulation,
  5. the FSK with  »continuous phase matching«  $\rm (CPM)$,  especially the  $\rm (G)MSK$  method.


Principle and block diagram


Almost all modulation methods used today work digitally.  Their advantages have already been mentioned in the  "first chapter"  of this book.  The first concept for digital signal transmission was already developed in 1938 by  $\text{Alec Reeves}$  and has also been used in practice since the 1960s under the name  "Pulse Code Modulation"  $\rm (PCM)$.  Even though many of the digital modulation methods conceived in recent years differ from PCM in detail,  it is very well suited to explain the principle of all these methods.

The task of the PCM system is

  • to convert the analog source signal  $q(t)$  into the binary signal  $q_{\rm C}(t)$  – this process is also called   »A/D conversion«,
  • to transmit this signal over the channel,  where the receiver-side signal  $v_{\rm C}(t)$  is also binary because of the decision,
  • to reconstruct from the binary signal  $v_{\rm C}(t)$  the analog  (continuous-value as well as continuous-time)  sink signal  $v(t)$    ⇒   »D/A conversion«.
Principle of Pulse Code Modulation  $\rm (PCM)$

$q(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q(f)$   ⇒   source signal   (from German:  "Quellensignal"),  analog
$q_{\rm A}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm A}(f)$   ⇒   sampled source signal   (from German:  "abgetastet"   ⇒   "A")
$q_{\rm Q}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm Q}(f)$   ⇒   quantized source signal   (from German:  "quantisiert"   ⇒   "Q")
$q_{\rm C}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ Q_{\rm C}(f)$   ⇒   coded source signal   (from German:  "codiert"   ⇒   "C"),  binary
$s(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ S(f)$   ⇒   transmitted signal   (from German:  "Sendesignal"),  digital
$n(t)$   ⇒   noise signal,  characterized by the power-spectral density  ${\it Φ}_n(f)$,  analog
$r(t)= s(t) \star h_{\rm K}(t) + n(t)$   ⇒   received signal,  $h_{\rm K}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ H_{\rm K}(f)$,  analog
  Note:   Spectrum  $R(f)$  can not be specified due to the stochastic component  $n(t)$.
$v_{\rm C}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V_{\rm C}(f)$   ⇒   signal after decision,  binary
$v_{\rm Q}(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V_{\rm Q}(f)$   ⇒   signal after PCM decoding,  $M$–level
  Note:   On the receiver side,  there is no counterpart to  "Quantization"
$v(t)\ \circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\,\ V(f)$   ⇒   sink signal,  analog


Further it should be noted to this PCM block diagram:

  • The PCM transmitter  ("A/D converter")  is composed of three function blocks  »Sampling - Quantization - PCM Coding«  which will be described in more detail in the next sections.
  • The gray-background block  "Digital Transmission System"  shows  "transmitter"  (modulation),  "receiver"  (with decision unit),  and  "analog transmission channel"   ⇒   channel frequency response  $H_{\rm K}(f)$  and noise power-spectral density  ${\it Φ}_n(f)$.
  • Further, it can be seen from the block diagram that there is no equivalent for  "quantization"  at the receiver-side.  Therefore,  even with error-free transmission,  i.e.,  for  $v_{\rm C}(t) = q_{\rm C}(t)$,  the analog sink signal  $v(t)$  will differ from the source signal  $q(t)$.
  • As a measure of the quality of the digital transmission system,  we use the  $\text{Signal-to-Noise Power Ratio}$   ⇒   in short:   »Sink-SNR«  as the quotient of the powers of source signal  $q(t)$  and error signal  $ε(t) = v(t) - q(t)$:
$$\rho_{v} = \frac{P_q}{P_\varepsilon}\hspace{0.3cm} {\rm with}\hspace{0.3cm}P_q = \overline{[q(t)]^2}, \hspace{0.2cm}P_\varepsilon = \overline{[v(t) - q(t)]^2}\hspace{0.05cm}.$$
  • Here,  an ideal amplitude matching is assumed,  so that in the ideal case  (that is:   sampling according to the sampling theorem,  best possible signal reconstruction,  infinitely fine quantization)  the sink signal  $v(t)$  would exactly match the source signal  $q(t)$.
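The sink SNR defined above is easy to estimate numerically;&nbsp; the sketch below&nbsp; $($function and array names are our own$)$&nbsp; approximates the powers&nbsp; $P_q$&nbsp; and&nbsp; $P_ε$&nbsp; by time averages over sampled signal values:

```python
import math

def sink_snr_db(q, v):
    """10·lg ρ_v from equally long sample lists of q(t) and v(t)."""
    P_q   = sum(x * x for x in q) / len(q)                    # power of q(t)
    P_eps = sum((y - x) ** 2 for x, y in zip(q, v)) / len(q)  # power of ε(t) = v(t) - q(t)
    return 10.0 * math.log10(P_q / P_eps)

# toy example: v(t) deviates from q(t) by 1 % of its amplitude
q = [math.sin(0.01 * n) for n in range(1000)]
v = [1.01 * x for x in q]
print(round(sink_snr_db(q, v), 1))   # a 1 % amplitude error gives ≈ 40 dB
```

The toy example also illustrates why ideal amplitude matching is assumed:&nbsp; a constant gain error alone already limits the achievable sink SNR.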


⇒   We would like to refer you already here to the three-part  (German language)  learning video  "Pulse Code Modulation"  which contains all aspects of PCM.  Its principle is explained in detail in the first part of the video.

Sampling and signal reconstruction


Sampling  – that is, time discretization of the analog signal  $q(t)$ –  was covered in detail in the chapter  "Discrete-Time Signal Representation"  of the book  "Signal Representation."  Here follows a brief summary of that section.

Time domain representation of sampling

The graph illustrates the sampling in the time domain: 

  • The  (blue)  source signal  $q(t)$  is  "continuous-time",  the (green) signal sampled at a distance  $T_{\rm A}$  is  "discrete-time". 
  • The sampling can be represented by multiplying the analog signal  $q(t)$  by the  $\text{Dirac comb in the time domain}$  ⇒   $p_δ(t)$:
$$q_{\rm A}(t) = q(t) \cdot p_{\delta}(t)\hspace{0.3cm} {\rm with}\hspace{0.3cm}p_{\delta}(t)= \sum_{\nu = -\infty}^{\infty}T_{\rm A}\cdot \delta(t - \nu \cdot T_{\rm A}) \hspace{0.05cm}.$$
  • The Dirac delta function at  $t = ν \cdot T_{\rm A}$  has the weight  $T_{\rm A} \cdot q(ν \cdot T_{\rm A})$.  Since  $δ(t)$  has the unit  "$\rm 1/s$",  $q_{\rm A}(t)$  has the same unit as  $q(t)$,  e.g.  "V".
  • The Fourier transform of the Dirac comb  $p_δ(t)$  is also a Dirac comb,  but now in the frequency domain   ⇒   $P_δ(f)$.  The spacing of the individual Dirac delta lines is  $f_{\rm A} = 1/T_{\rm A}$,  and all weights of  $P_δ(f)$  are  $1$:
$$p_{\delta}(t)= \sum_{\nu = -\infty}^{+\infty}T_{\rm A}\cdot \delta(t - \nu \cdot T_{\rm A}) \hspace{0.2cm}\circ\!\!-\!\!\!-\!\!\!-\!\!\bullet\, \hspace{0.2cm} P_{\delta}(f)= \sum_{\mu = -\infty}^{+\infty} \delta(f - \mu \cdot f_{\rm A}) \hspace{0.05cm}.$$
  • The spectrum  $Q_{\rm A}(f)$  of the sampled source signal  $q_{\rm A}(t)$  is obtained from the  $\text{Convolution Theorem}$, where  $Q(f)\hspace{0.2cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ\, \hspace{0.2cm} q(t):$ 
$$Q_{\rm A}(f) = Q(f) \star P_{\delta}(f)= \sum_{\mu = -\infty}^{+\infty} Q(f - \mu \cdot f_{\rm A}) \hspace{0.05cm}.$$

⇒   We refer you to part 2 of the  (German language)  learning video  "Pulse Code Modulation"  which explains sampling and signal reconstruction in terms of system theory.

$\text{Example 1:}$  The graph schematically shows the spectrum  $Q(f)$  of an analog source signal  $q(t)$  with frequencies up to  $f_{\rm N, \ max} = 5 \ \rm kHz$.

Periodic continuation of the spectrum by sampling
  • If one samples  $q(t)$  with the sampling rate  $f_{\rm A} = 20 \ \rm kHz$  $($so at the respective distance  $T_{\rm A} = 50 \ \rm µ s)$,  one obtains the periodic spectrum  $Q_{\rm A}(f)$  sketched in green.


  • Since the Dirac delta functions are infinitely narrow,  $q_{\rm A}(t)$  also contains arbitrary high frequency components and accordingly  $Q_{\rm A}(f)$  is extended to infinity (middle graph).


  • Drawn below  (in red)  is the spectrum  $Q_{\rm A}(f)$  of the sampled source signal for the sampling parameters  $T_{\rm A} = 100 \ \rm µ s$   ⇒   $f_{\rm A} = 10 \ \rm kHz$.


$\text{Conclusion:}$  From this example,  the following important lessons can be learned regarding sampling:

  1. If  $Q(f)$  contains frequencies up to  $f_\text{N, max}$,  then according to the  $\text{Sampling Theorem}$  the sampling rate  $f_{\rm A} ≥ 2 \cdot f_\text{N, max}$  should be chosen.  At smaller sampling rate  $f_{\rm A}$  $($thus larger spacing $T_{\rm A})$  overlaps of the periodized spectra occur,  i.e. irreversible distortions.
  2. If exactly  $f_{\rm A} = 2 \cdot f_\text{N, max}$  as in the lower graph of  $\text{Example 1}$,  then  $Q(f)$  can be completely reconstructed from  $Q_{\rm A}(f)$  by an ideal rectangular low-pass filter  $H(f)$  with cutoff frequency  $f_{\rm G} = f_{\rm A}/2$.  The same facts apply in the  $\text{PCM system}$  to extract  $V(f)$  from  $V_{\rm Q}(f)$  in the best possible way.
  3. On the other hand,  if sampling is performed with  $f_{\rm A} > 2 \cdot f_\text{N, max}$  as in the middle graph of the example,  a low-pass filter  $H(f)$  with a smaller slope can also be used on the receiver side for signal reconstruction,  as long as the following condition is met:
$$H(f) = \left\{ \begin{array}{l} 1 \\ 0 \\ \end{array} \right.\quad \begin{array}{*{5}c}{\rm{for} } \\{\rm{for} } \\ \end{array}\begin{array}{*{10}c} {\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert \le f_{\rm N, \hspace{0.05cm}max},} \\ {\hspace{0.04cm}\left \vert\hspace{0.005cm} f \hspace{0.05cm} \right \vert \ge f_{\rm A}- f_{\rm N, \hspace{0.05cm}max}.} \\ \end{array}$$
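Statement 2 can be tried out numerically.&nbsp; The following Python sketch&nbsp; $($all parameter values are our own choice$)$&nbsp; samples a band-limited signal with&nbsp; $f_\text{N, max} = 3 \ \rm kHz$&nbsp; at&nbsp; $f_{\rm A} = 10 \ \rm kHz$&nbsp; and reconstructs it by the truncated cardinal series,&nbsp; i.e. by the ideal rectangular low-pass with cutoff frequency&nbsp; $f_{\rm G} = f_{\rm A}/2$:

```python
import math

f1, f2 = 1.0, 3.0                  # signal frequencies in kHz  (f_max = 3 kHz)
f_A = 10.0                         # sampling rate in kHz  (> 2·f_max)
T_A = 1.0 / f_A                    # sampling distance in ms

def q(t):
    """Band-limited test signal (sum of two sinusoids below f_A/2)."""
    return math.sin(2 * math.pi * f1 * t) + 0.5 * math.cos(2 * math.pi * f2 * t)

def sinc(x):
    """Normalized sinc(x) = sin(πx)/(πx)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def v(t, n_max=2000):
    """Truncated cardinal series: samples weighted with sinc((t - ν·T_A)/T_A)."""
    return sum(q(nu * T_A) * sinc((t - nu * T_A) / T_A)
               for nu in range(-n_max, n_max + 1))

t = 0.123                          # test instant near the window center
assert abs(v(t) - q(t)) < 1e-2     # reconstruction ≈ original signal
```

Sampling the same signal with&nbsp; $f_{\rm A} < 2 \cdot f_\text{N, max}$&nbsp; would make the assertion fail:&nbsp; the overlapping spectral repetitions can no longer be separated by any low-pass.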

Natural and discrete sampling


Multiplication by the Dirac comb provides only an idealized description of the sampling,  since a Dirac delta function  $($duration $T_{\rm R} → 0$,  height $1/T_{\rm R} → ∞)$  is not realizable.  In practice,  the  "Dirac comb"  $p_δ(t)$  must be replaced by a  "rectangular pulse comb"  $p_{\rm R}(t)$  with rectangle duration  $T_{\rm R}$  (see upper sketch):

Rectangular comb  (on the top),  natural and discrete sampling
$$p_{\rm R}(t)= \sum_{\nu = -\infty}^{+\infty}g_{\rm R}(t - \nu \cdot T_{\rm A}),$$
$$g_{\rm R}(t) = \left\{ \begin{array}{l} 1 \\ 1/2 \\ 0 \\ \end{array} \right.\quad \begin{array}{*{5}c}{\rm{for}}\\{\rm{for}} \\{\rm{for}} \\ \end{array}\begin{array}{*{10}c}{\hspace{0.04cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} < T_{\rm R}/2\hspace{0.05cm}, \\{\hspace{0.04cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} = T_{\rm R}/2\hspace{0.05cm}, \\ {\hspace{0.005cm}\left|\hspace{0.06cm} t \hspace{0.05cm} \right|} > T_{\rm R}/2\hspace{0.05cm}. \\ \end{array}$$

$T_{\rm R}$  should be significantly smaller than the sampling distance  $T_{\rm A}$.

The graphic shows two different sampling methods using the comb  $p_{\rm R}(t)$:

  • In  »natural sampling«  the sampled signal  $q_{\rm A}(t)$  is obtained by multiplying the analog source signal  $q(t)$  by  $p_{\rm R}(t)$.   Thus in the ranges  $p_{\rm R}(t) = 1$,  $q_{\rm A}(t)$  has the same progression as  $q(t)$.
  • In  »discrete sampling«  the signal  $q(t)$  is  – at least mentally – first multiplied by the Dirac comb  $p_δ(t)$.  Then each Dirac delta pulse   $T_{\rm A} \cdot δ(t - ν \cdot T_{\rm A})$  is replaced by a rectangular pulse  $g_{\rm R}(t - ν \cdot T_{\rm A})$.


Here and in the following frequency domain consideration,  an acausal description form is chosen for simplicity. 

For a  (causal)  realization,  $g_{\rm R}(t) = 1$  would have to hold in the range from  $0$  to  $T_{\rm R}$  and not as here for  $ -T_{\rm R}/2 < t < T_{\rm R}/2.$
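The two methods can be contrasted on a fine time grid.  The following Python sketch (an illustration with the parameters  $T_{\rm A} = 100\ \rm µ s$,  $T_{\rm R} = 25\ \rm µ s$  from the example; the  $3\ \rm kHz$  cosine source signal is an assumption)  builds both sampled signals and shows that they agree only at the instants  $ν \cdot T_{\rm A}$:

```python
import numpy as np

T_A = 100e-6                       # sampling distance
T_R = 25e-6                        # rectangle duration, T_R << T_A
dt  = 1e-6                         # simulation time resolution
t   = np.arange(0, 10 * T_A, dt)
f_N = 3e3                          # example source frequency (assumption)
q   = np.cos(2 * np.pi * f_N * t)  # source signal q(t)

# rectangular comb p_R(t): 1 within +/- T_R/2 around every nu*T_A
p_R = (np.abs(((t + T_A / 2) % T_A) - T_A / 2) < T_R / 2).astype(float)

# natural sampling: q_A(t) follows q(t) wherever p_R(t) = 1
q_nat = p_R * q

# discrete sampling: every rectangle carries the constant value q(nu*T_A)
nu    = np.round(t / T_A)
q_dis = p_R * np.cos(2 * np.pi * f_N * nu * T_A)

# within a rectangle the two methods differ, except at t = nu*T_A itself
print(np.max(np.abs(q_nat - q_dis)[p_R == 1]))
```

The printed maximum difference is small but nonzero, since within each rectangle the naturally sampled signal still follows  $q(t)$  while the discretely sampled one stays constant.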


Frequency domain view of natural sampling


$\text{Definition:}$  The  »natural sampling«  can be represented by the convolution theorem in the spectral domain as follows:

$$q_{\rm A}(t) = p_{\rm R}(t) \cdot q(t) = \left [ \frac{1}{T_{\rm A} } \cdot p_{\rm \delta}(t) \star g_{\rm R}(t)\right ]\cdot q(t) \hspace{0.3cm} \Rightarrow \hspace{0.3cm}Q_{\rm A}(f) = \left [ P_{\rm \delta}(f) \cdot \frac{1}{T_{\rm A} } \cdot G_{\rm R}(f) \right ] \star Q(f) = P_{\rm R}(f) \star Q(f)\hspace{0.05cm}.$$


The graph shows the result for

  • an  (unrealistic)  rectangular spectrum  $Q(f) = Q_0$  limited to the range  $|f| ≤ 4 \ \rm kHz$,
  • the sampling rate  $f_{\rm A} = 10 \ \rm kHz$   ⇒   $T_{\rm A} = 100 \ \rm µ s$,  and
  • the rectangular pulse duration  $T_{\rm R} = 25 \ \rm µ s$   ⇒   $T_{\rm R}/T_{\rm A} = 0.25$.
Spectrum in natural sampling with rectangular comb


One can see from this plot:

  1. The spectrum  $P_{\rm R}(f)$  is in contrast to  $P_δ(f)$  not a Dirac comb  $($all weights equal $1)$,  but the weights here are evaluated to the function  $G_{\rm R}(f)/T_{\rm A} = T_{\rm R}/T_{\rm A} \cdot {\rm sinc}(f\cdot T_{\rm R})$.
  2. Because of the zero of the  $\rm sinc$-function,  the Dirac delta lines vanish here at  $±4f_{\rm A}$.
  3. The spectrum  $Q_{\rm A}(f)$  results from the convolution with  $Q(f)$.  The rectangle around  $f = 0$  has height  $T_{\rm R}/T_{\rm A} \cdot Q_0$,  the proportions around  $\mu \cdot f_{\rm A} \ (\mu ≠ 0)$  are lower.
  4. If one uses for signal reconstruction an ideal,  rectangular low-pass
$$H(f) = \left\{ \begin{array}{l} T_{\rm A}/T_{\rm R} = 4 \\ 0 \\ \end{array} \right.\quad \begin{array}{*{5}c}{\rm{for}}\\{\rm{for}} \\ \end{array}\begin{array}{*{10}c} {\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| < f_{\rm A}/2}\hspace{0.05cm}, \\ {\hspace{0.04cm}\left| \hspace{0.005cm} f\hspace{0.05cm} \right| > f_{\rm A}/2}\hspace{0.05cm}, \\ \end{array}$$
then for the output spectrum  $V(f) = Q(f)$   ⇒   $v(t) = q(t)$.


$\text{Conclusion:}$ 

  • For natural sampling,  a rectangular low-pass filter is sufficient for signal reconstruction,  just as for ideal sampling  (with Dirac comb).
  • However,  for amplitude matching in the passband,  a gain by the factor  $T_{\rm A}/T_{\rm R}$  must be considered.


Frequency domain view of discrete sampling


$\text{Definition:}$  In  »discrete sampling«  the multiplication of the Dirac comb  $p_δ(t)$  with the source signal  $q(t)$  takes place first  – at least mentally –  and only afterwards the convolution with the rectangular pulse  $g_{\rm R}(t)$:

$$q_{\rm A}(t) = \big [ {1}/{T_{\rm A} } \cdot p_{\rm \delta}(t) \cdot q(t)\big ]\star g_{\rm R}(t) \hspace{0.3cm} \Rightarrow \hspace{0.3cm}Q_{\rm A}(f) = \big [ P_{\rm \delta}(f) \star Q(f) \big ] \cdot G_{\rm R}(f)/{T_{\rm A} } \hspace{0.05cm}.$$
  • It is irrelevant,  but quite convenient,  that here the factor  $1/T_{\rm A}$  has been added to the evaluation function  $G_{\rm R}(f)$.
  • Thus,  $G_{\rm R}(f)/T_{\rm A} = T_{\rm R}/T_{\rm A} \cdot {\rm sinc}(fT_{\rm R}).$


Spectrum when discretely sampled with a rectangular comb
  • The upper graph shows  (highlighted in green)  the spectral function  $P_δ(f) \star Q(f)$  after ideal sampling. 
  • In contrast,  discrete sampling with a rectangular comb yields the spectrum  $Q_{\rm A}(f)$  corresponding to the lower graph.


You can see from this plot:

  1. Each of the infinitely many partial spectra now has a different shape.  Only the middle spectrum around  $f = 0$  is important.
  2. All other spectral components are removed at the receiver side by the low-pass of the signal reconstruction.
  3. If one uses for this low-pass again a rectangular filter with the gain  $T_{\rm A}/T_{\rm R}$  in the passband,  one obtains for the output spectrum:  
$$V(f) = Q(f) \cdot {\rm sinc}(f \cdot T_{\rm R}) \hspace{0.05cm}.$$


$\text{Conclusion:}$  Discrete sampling and rectangular filtering result in attenuation distortions  according to the weighting function  ${\rm sinc}(f \cdot T_{\rm R})$.

  • The larger  $T_{\rm R}$  is,  the stronger these distortions are.  Only in the limiting case  $T_{\rm R} → 0$  does  ${\rm sinc}(f\cdot T_{\rm R}) = 1$  hold.
  • However,  ideal equalization can fully compensate for these linear attenuation distortions.  To obtain  $V(f) = Q(f)$  or,  equivalently,  $v(t) = q(t)$,  the following must hold:
$$H(f) = \left\{ \begin{array}{l} (T_{\rm A}/T_{\rm R})/{\rm sinc}(f \cdot T_{\rm R}) \\ 0 \\ \end{array} \right.\quad\begin{array}{*{5}c}{\rm{for} }\\{\rm{for} } \\ \end{array}\begin{array}{*{10}c} {\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert < f_{\rm A}/2}\hspace{0.05cm}, \\ {\hspace{0.04cm}\left \vert \hspace{0.005cm} f\hspace{0.05cm} \right \vert > f_{\rm A}/2.} \\ \end{array}$$
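This equalizing low-pass can be written down directly.  The short Python sketch below (an illustration with the example values  $T_{\rm A} = 100\ \rm µ s$,  $T_{\rm R} = 25\ \rm µ s$;  the function name is hypothetical)  evaluates  $H(f)$  inside the passband:

```python
import numpy as np

T_A = 100e-6                 # sampling distance
T_R = 25e-6                  # rectangle duration
f_A = 1 / T_A                # sampling rate (10 kHz)

def H_eq(f):
    """Reconstruction low-pass with built-in sinc equalization
       (ideal rectangular passband |f| < f_A/2 assumed)."""
    if abs(f) >= f_A / 2:
        return 0.0
    # np.sinc(x) = sin(pi*x)/(pi*x); |f*T_R| < 0.125 here, so no zero crossing
    return (T_A / T_R) / float(np.sinc(f * T_R))

print(H_eq(0.0))     # 4.0: only the amplitude matching T_A/T_R remains
print(H_eq(4e3))     # slightly above 4: compensates sinc(f*T_R) < 1
```

At the band edge the equalizer gain exceeds  $T_{\rm A}/T_{\rm R}$  only slightly, since  $T_{\rm R} \ll T_{\rm A}$  keeps the sinc distortion weak within the passband.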


Quantization and quantization noise


The second functional unit  »Quantization«  of the PCM transmitter is used for value discretization.

  • For this purpose the whole value range of the analog source signal  $($e.g.,  the range $± q_{\rm max})$  is divided into  $M$  intervals.
  • Each sample  $q_{\rm A}(ν ⋅ T_{\rm A})$  is then assigned to a representative  $q_{\rm Q}(ν ⋅ T_{\rm A})$  of the associated interval  (e.g.,  the interval center) .


$\text{Example 2:}$  The graph illustrates the unit  "quantization"  using the quantization step number  $M = 8$  as an example.

To illustrate  "quantization"  with  $M = 8$  steps
  • In fact,  a power of two is always chosen for  $M$  in practice because of the subsequent binary coding.
  • Each of the samples  $q_{\rm A}(ν \cdot T_{\rm A})$  marked by circles is replaced by the corresponding quantized value  $q_{\rm Q}(ν \cdot T_{\rm A})$.  The quantized values are entered as crosses.
  • However,  this process of value discretization is associated with an irreversible falsification.
  • The falsification  $ε_ν = q_{\rm Q}(ν \cdot T_{\rm A}) \ - \ q_{\rm A}(ν \cdot T_{\rm A})$  depends on the quantization level number  $M$.  The following bound applies:
$$\vert \varepsilon_{\nu} \vert < {1}/{2} \cdot2/M \cdot q_{\rm max}= {q_{\rm max} }/{M}\hspace{0.05cm}.$$
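The quantization rule and the error bound above can be made concrete.  The following Python sketch (an illustration; the function name and the uniformly distributed test signal are assumptions)  implements uniform quantization to interval centers and checks that  $|ε_ν| < q_{\rm max}/M$:

```python
import numpy as np

def quantize(x, M=8, q_max=1.0):
    """Uniform quantization to the M interval centers in [-q_max, +q_max]."""
    delta = 2 * q_max / M                   # interval width
    idx = np.floor((x + q_max) / delta)     # interval index 0 .. M-1
    idx = np.clip(idx, 0, M - 1)            # clip values at the range limits
    return -q_max + (idx + 0.5) * delta     # representative: interval center

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)             # test signal inside +/- q_max
eps = quantize(x) - x                       # quantization error
print(np.max(np.abs(eps)))                  # < q_max/M = 0.125, as bounded above
```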


$\text{Definition:}$  One refers to the second moment of the error quantity  $ε_ν$  as  »quantization noise power«:

$$P_{\rm Q} = \frac{1}{2N+1 } \cdot\sum_{\nu = -N}^{+N}\varepsilon_{\nu}^2 \approx \frac{1}{N \cdot T_{\rm A} } \cdot \int_{0}^{N \cdot T_{\rm A} }\varepsilon(t)^2 \hspace{0.05cm}{\rm d}t \hspace{0.3cm} {\rm with}\hspace{0.3cm}\varepsilon(t) = q_{\rm Q}(t) - q(t) \hspace{0.05cm}.$$


Notes:

  • For calculating the quantization noise power  $P_{\rm Q}$  the given approximation of  "spontaneous quantization"  is usually used. 
  • Here,  one ignores sampling and forms the error signal from the continuous-time signals  $q_{\rm Q}(t)$  and  $q(t)$.
  • $P_{\rm Q}$  also depends on the source signal  $q(t)$.  Assuming that  $q(t)$  takes all values between  $±q_{\rm max}$  with equal probability and the quantizer is designed exactly for this range,  we get accordingly  "Exercise 4.4":
$$P_{\rm Q} = \frac{q_{\rm max}^2}{3 \cdot M^2 } \hspace{0.05cm}.$$
  • In a speech or music signal,  arbitrarily large amplitude values can occur,  even if only very rarely.  In this case,  $q_{\rm max}$  is usually set to that amplitude value which is exceeded  (in magnitude)  during only  $1\%$  of the time.
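The closed-form result  $P_{\rm Q} = q_{\rm max}^2/(3 M^2)$  for a uniformly distributed source signal can be verified by simulation.  A minimal Python sketch (an illustration; the function name and sample count are assumptions):

```python
import numpy as np

def quantization_noise_power(M, q_max=1.0, n=1_000_000, seed=0):
    """Monte-Carlo estimate of P_Q for a signal uniform in [-q_max, +q_max]."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(-q_max, q_max, n)
    delta = 2 * q_max / M
    idx = np.clip(np.floor((q + q_max) / delta), 0, M - 1)
    q_Q = -q_max + (idx + 0.5) * delta      # quantize to interval centers
    return np.mean((q_Q - q) ** 2)          # second moment of the error

for M in (8, 64, 256):
    print(M, quantization_noise_power(M), 1 / (3 * M**2))  # simulation vs. formula
```

The simulated values agree with  $q_{\rm max}^2/(3 M^2)$  to within the Monte-Carlo accuracy of well under one percent.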

PCM encoding and decoding


The block  »PCM coding«  is used to convert the discrete-time   (after sampling)   and discrete-value  (after quantization with  $M$  steps)  signal values  $q_{\rm Q}(ν \cdot T_{\rm A})$  into a sequence of  $N = {\rm log_2}(M)$  binary values.   Logarithm to base 2   ⇒   "binary logarithm".

$\text{Example 3:}$  Each binary value   ⇒   bit is represented by a rectangle of duration  $T_{\rm B} = T_{\rm A}/N$  resulting in the signal  $q_{\rm C}(t)$.  You can see:

PCM coding with the dual code  $(M = 8,\ N = 3)$
  • Here,  the  "dual code"  is used   ⇒   the quantization intervals  $\mu$  are numbered consecutively from  $0$  to  $M-1$  and then written in simple binary.  With  $M = 8$  for example  $\mu = 6$   ⇔   110.
  • The three symbols of the binary encoded signal  $q_{\rm C}(t)$  are obtained by replacing  0  by  L  ("Low") and  1  by  H  ("High").  This gives in the example the sequence  "HHL HHL LLH LHL HLH LHH".
  • The bit duration  $T_{\rm B}$  is here shorter than the sampling distance  $T_{\rm A} = 1/f_{\rm A}$  by a factor  $N = {\rm log_2}(M) = 3$.  So,  the bit rate is  $R_{\rm B} = {\rm log_2}(M) \cdot f_{\rm A}$.
  • If one uses the same mapping in decoding  $(v_{\rm C}   ⇒   v_{\rm Q})$  as in encoding   $(q_{\rm Q}   ⇒   q_{\rm C})$,  then,  if there are no transmission errors:     $v_{\rm Q}(ν \cdot T_{\rm A}) = q_{\rm Q}(ν \cdot T_{\rm A}). $
  • An alternative to dual code is  "Gray code",  where adjacent binary values differ only in one bit.  For  $N = 3$:
  $\mu = 0$:  LLL,     $\mu = 1$:  LLH,     $\mu = 2$:  LHH,     $\mu = 3$:   LHL,
  $\mu = 4$:  HHL,     $\mu = 5$:  HHH,     $\mu =6$:  HLH,     $\mu = 7$:  HLL.
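Both mappings are easy to express in code.  The following Python sketch (an illustration; the function names are hypothetical)  produces the dual code of  $\text{Example 3}$  and the binary-reflected Gray code, which reproduces the table above for  $N = 3$:

```python
def dual_code(mu, N=3):
    """Natural binary: interval number mu written with N bits, 0 -> L, 1 -> H."""
    bits = format(mu, f'0{N}b')
    return bits.translate(str.maketrans('01', 'LH'))

def gray_code(mu, N=3):
    """Binary-reflected Gray code: adjacent values differ in exactly one bit."""
    return dual_code(mu ^ (mu >> 1), N)

print(dual_code(6))                        # HHL  (mu = 6  <->  110)
print([gray_code(mu) for mu in range(8)])  # LLL LLH LHH LHL HHL HHH HLH HLL
```

The defining property of the Gray code, that neighboring quantization intervals differ in only one bit, limits the damage of a single bit error to one quantization step.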

Signal-to-noise power ratio


The digital  "pulse code modulation"  $\rm (PCM)$  is now compared to the analog modulation methods  $\rm (AM, \ FM)$  regarding the achievable sink SNR  $ρ_v = P_q/P_ε$  with AWGN noise.  As in previous chapters,  the  "performance parameter"  $ξ = {α_{\rm K}}^2 \cdot P_{\rm S}/(N_0 \cdot B_{\rm NF})$  summarizes several influences:

Sink SNR at AM,  FM,  and  PCM 30/32
  1. The channel transmission factor  $α_{\rm K}$  (quadratic),
  2. the transmit power  $P_{\rm S}$  (linear),
  3. the AWGN noise power density  $N_0$  (reciprocal),  and
  4. the signal bandwidth  $B_{\rm NF}$  (reciprocal);  for a harmonic oscillation:   signal frequency  $f_{\rm N}$  instead of  $B_{\rm NF}$.


The two comparison curves for  $\text{amplitude modulation}$  and  $\text{frequency modulation}$ can be described as follows:

  • Double-sideband amplitude modulation  $\text{(DSB–AM)}$  without carrier  $(m \to \infty)$:
$$ρ_v = ξ \ ⇒ \ 10 · \lg ρ_v = 10 · \lg \ ξ.$$
  • Frequency modulation  $\text{(FM)}$  with modulation index  $η = 3$:  
$$ρ_v = 3/2 \cdot η^2 \cdot ξ = 13.5 \cdot ξ \ ⇒ \ 10 · \lg \ ρ_v = 10 · \lg \ ξ + 11.3 \ \rm dB.$$

The curve for the  $\text{PCM 30/32}$  system should be interpreted as follows:

  • If the performance parameter  $ξ$  is sufficiently large,  then no transmission errors occur.  The error signal  $ε(t) = v(t) \ - \ q(t)$  is then due solely to quantization  $(P_ε = P_{\rm Q})$.
  • With the quantization step number  $M = 2^N$,  the following approximation holds in this case:
$$\rho_{v} = \frac{P_q}{P_\varepsilon}= M^2 = 2^{2N} \hspace{0.3cm}\Rightarrow \hspace{0.3cm} 10 \cdot {\rm lg}\hspace{0.1cm}\rho_{v}=20 \cdot {\rm lg}\hspace{0.1cm}M = N \cdot 6.02\,{\rm dB}$$
$$ \Rightarrow \hspace{0.3cm} N = 8, \hspace{0.05cm} M =256\text{:}\hspace{0.2cm}10 \cdot {\rm lg}\hspace{0.1cm}\rho_{v}=48.16\,{\rm dB}\hspace{0.05cm}.$$
Note that the given equation is exactly valid only for a sawtooth-shaped source signal.   However,  for a cosine-shaped signal the deviation from it is not very large.
  • As  $ξ$  decreases  (smaller transmit power or larger noise power density),  the transmission errors increase.  Thus  $P_ε > P_{\rm Q}$  and the sink-to-noise ratio becomes smaller.
  • PCM  $($with $M = 256)$  is superior to the analog methods  $($AM and FM$)$  only in the lower and middle  $ξ$–range.  But if transmission errors do not play a role anymore,  no improvement can be achieved by a larger  $ξ$  $($horizontal curve section with yellow background$)$.
  • An improvement is only achieved by increasing  $N$  $($number of bits per sample$)$  ⇒   larger  $M = 2^N$  $($number of quantization steps$)$.   For example, for a  »Compact Disc«  $\rm (CD)$  with parameter  $N = 16$   ⇒   $M = 65536$  the sink SNR is: 
$$10 · \lg \ ρ_v = 96.32 \ \rm dB.$$
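The  "6 dB per bit"  rule can be evaluated directly.  A minimal Python sketch (an illustration; the function name is hypothetical)  computes the exact value  $10 · \lg(2^{2N})$  for the two cases quoted above:

```python
import math

def pcm_snr_db(N):
    """Sink SNR 10*lg(2^(2N)) in dB; exact for a sawtooth source signal."""
    return 10 * math.log10(2 ** (2 * N))

print(round(pcm_snr_db(8), 2))    # 48.16 dB  (M = 256)
print(round(pcm_snr_db(16), 2))   # 96.33 dB  (CD; the N*6.02 dB rule gives 96.32)
```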

$\text{Example 4:}$  The following graph shows the limiting influence of quantization:

  • Here,  transmission errors are excluded.  Sampling and signal reconstruction are best fit to  $q(t)$.
  • White dotted is the source signal  $q(t)$,  green dotted is the sink signal  $v(t)$  after PCM with  $N = 4$   ⇒   $M = 16$.
  • Sampling times are marked by crosses.


This image can be interpreted as follows:

Influence of quantization with  $N = 4$  and  $N = 8$


  • With  $N = 8$   ⇒   $M = 256$  the sink signal  $v(t)$  cannot be distinguished with the naked eye from the source signal  $q(t)$.  The white dotted signal curve applies approximately to both.
  • But from the signal-to-noise ratio  $10 · \lg \ ρ_v = 47.8 \ \rm dB$  it can be seen that the quantization noise power  $P_\varepsilon$  is only smaller by a factor  $1.6 \cdot 10^{-5}$  than the power  $P_q$  of the source signal. 
  • This SNR would already be clearly audible with a speech or music signal.
  • Although  $q(t)$  is neither sawtooth nor cosine shaped,  but is composed of several frequency components,  the approximation  $ρ_v ≈ M^2$   ⇒   $10 · \lg \ ρ_υ = 48.16 \ \rm dB$  deviates insignificantly from the actual value.
  • In contrast,  for  $N = 4$   ⇒   $M = 16$  the deviations between sink signal (marked in green) and source signal (marked in white) can already be seen in the image,  which is also quantitatively expressed by the very small SNR  $10 · \lg \ ρ_υ = 28.2 \ \rm dB$.

Influence of transmission errors


Starting from the same analog signal  $q(t)$  as in the last section and a linear quantization with  $N = 8$ bits   ⇒   $M = 256$  the effects of transmission errors are now illustrated using the respective sink signal  $v(t)$.

Influence of a transmission error in  Bit 5  with the dual code;  here the lowest quantization interval  $(\mu = 0)$  is represented by  LLLL LLLL  and the highest interval  $(\mu = 255)$  by  HHHH HHHH.
Table:  Results of the bit error analysis.  Note:     $10 · \lg \ ρ_v$  was calculated from the presented signal of duration  $10 \cdot T_{\rm A}$  $($only  $10 \cdot 8 = 80$  bits$)$   ⇒   each transmission error corresponds to a bit error rate of  $1.25\%$.
  • The white dots mark the source signal  $q(t)$.  Without transmission error the sink signal  $v(t)$  has the same course when neglecting quantization.
  • Now,  exactly one bit of the fifth sample  $q(5 \cdot T_{\rm A}) = -0.715$  is falsified,  where this sample has been encoded as  LLHL LHLL.






The results of the error analysis shown in the graph and the table below can be summarized as follows:

  • If only the last bit   ⇒   "Least Significant Bit"   ⇒   $\rm (LSB)$  of the binary word is falsified  $($LLHL LHLL   ⇒   LLHL LHLH,  white curve$)$,  then no difference from error-free transmission is visible to the naked eye. Nevertheless,  the signal-to-noise ratio is reduced by   $3.5 \ \rm dB$.
  • An error of the fourth last bit leads to a clearly detectable falsification by eight quantization steps   $($LLHLLHLL ⇒ LLHLHHLL,  green curve$)$:   $v(5T_{\rm A}) \ - \ q(5T_{\rm A}) = 8/256 \cdot 2 = 0.0625$,  and the signal-to-noise ratio drops to   $10 · \lg \ ρ_v = 28.2 \ \rm dB$.
  • Finally,  the red curve shows the case where the  $\rm MSB$  ("Most Significant Bit")  is falsified:   LLHLLHLL ⇒ HLHLLHLL   ⇒   distortion  $v(5T_{\rm A}) \ - \ q(5T_{\rm A}) = 1$  $($corresponding to half the modulation range$)$.  The SNR is now only about   $4 \ \rm dB$.
  • At all sampling times except  $5T_{\rm A}$  $($marked by yellow crosses$)$,  $v(t)$  matches  $q(t)$  exactly,  apart from the quantization error.  Between these points,  the single error at  $5T_{\rm A}$  leads to strong deviations over an extended range,  due to the interpolation with the  $\rm sinc$-shaped impulse response of the reconstruction low-pass  $H(f)$.


Estimation of SNR degradation due to transmission errors


Now we will try to  (approximately)  determine the SNR curve of the PCM system taking bit errors into account.  We start from the following block diagram and further assume:

For calculating the SNR curve of the PCM system;  bit errors are taken into account
  • Each sample  $q_{\rm A}(νT)$  is quantized by  $M$  steps and represented by  $N = {\rm log_2} (M)$  bits.  In the example:  $M = 8$   ⇒   $N = 3$.
  • The binary representation of  $q_{\rm Q}(νT)$  yields the coefficients  $a_k\, (k = 1, \text{...} \hspace{0.08cm}, N)$,  which can be falsified by bit errors to the coefficients  $b_k$.  Both  $a_k$  and  $b_k$  are  $±1$,  respectively.
  • A bit error  $(b_k ≠ a_k)$  occurs with probability  $p_{\rm B}$.  Each bit is equally likely to be falsified and in each PCM word there is at most one error   ⇒   only one of the  $N$  bits can be wrong.


From the diagram given in the graph,  it can be seen for  $N = 3$  and natural binary coding  ("Dual Code"):

  • A falsification of  $a_1$  changes the value  $q_{\rm Q}(νT)$  by  $±A$.
  • A falsification of  $a_2$  changes the value  $q_{\rm Q}(νT)$  by  $±A/2$.
  • A falsification of  $a_3$  changes the value  $q_{\rm Q}(νT)$  by  $±A/4$.


For the case when  (only)  the coefficient  $a_k$  was falsified,  we obtain by generalization for the deviation:

$$\varepsilon_k = v_{\rm Q}(νT) \ - \ q_{\rm Q}(νT)= - a_k \cdot A \cdot 2^{-k +1} \hspace{0.05cm}.$$

After averaging over all falsification values  $ε_k$   (with  $1 ≤ k ≤ N)$   taking into account the bit error probability  $p_{\rm B}$  we obtain for the  "error noise power":

$$P_{\rm E}= {\rm E}\big[\varepsilon_k^2 \big] = \sum\limits^{N}_{k = 1} p_{\rm B} \cdot \left ( - a_k \cdot A \cdot 2^{-k +1} \right )^2 =\ p_{\rm B} \cdot A^2 \cdot \sum\limits^{N-1}_{k = 0} 2^{-2k } = p_{\rm B} \cdot A^2 \cdot \frac{1- 2^{-2N }}{1- 2^{-2 }} \approx {4}/{3} \cdot p_{\rm B} \cdot A^2 \hspace{0.05cm}.$$
  • Here the sum formula of the geometric series and the approximation  $1 - 2^{-2N } ≈ 1$  have been used.
  • For  $N = 8$   ⇒   $M = 256$  the associated relative error is about  $\rm 10^{-5}$.
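The geometric-series step can be verified numerically.  A short Python sketch (an illustration; the factor  $p_{\rm B} \cdot A^2$  is left out, so only the sum itself is compared with its closed form and with the  $4/3$  approximation):

```python
def sum_exact(N):
    """Partial sum  sum_{k=0}^{N-1} 2^(-2k)  of the geometric series."""
    return sum(2.0 ** (-2 * k) for k in range(N))

for N in (3, 8):
    closed = (1 - 2.0 ** (-2 * N)) / (1 - 2.0 ** (-2))  # closed form
    print(N, sum_exact(N), closed, 4 / 3)               # vs. the 4/3 approximation
```

For  $N = 8$  the relative deviation of the  $4/3$  approximation is exactly  $2^{-2N} ≈ 1.5 \cdot 10^{-5}$,  as stated above.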


Excluding transmission errors,  the signal-to-noise power ratio  $ρ_v = P_{\rm S}/P_{\rm Q}$  has been found,  where for a uniformly distributed source signal  $($e.g. sawtooth-shaped$)$  the signal power and quantization noise power are to be calculated as follows:

Sink SNR for PCM considering bit errors
$$P_{\rm S}={A^2}/{3}\hspace{0.05cm},\hspace{0.3cm}P_{\rm Q}= {A^2}/{3} \cdot 2^{-2N } \hspace{0.05cm}.$$

Taking into account the transmission errors,  the above result gives:

$$\rho_{\upsilon}= \frac{P_{\rm S}}{P_{\rm Q}+P_{\rm E}} = \frac{A^2/3}{A^2/3 \cdot 2^{-2N } + A^2/3 \cdot 4 \cdot p_{\rm B}} = \frac{1}{ 2^{-2N } + 4 \cdot p_{\rm B}} \hspace{0.05cm}.$$

The graph shows  $10 \cdot \lg ρ_v$  as a function of the  (logarithmized)  performance parameter  $ξ = P_{\rm S}/(N_0 \cdot B_{\rm NF})$,  where  $B_{\rm NF}$  denotes the source signal bandwidth.  Let the constant channel transmission factor be ideally  $α_{\rm K} = 1$.  Then the following holds:

  • For AWGN noise and the optimum binary system,  the performance parameter is also  $ξ = E_{\rm B}/N_0$  $($energy per bit related to noise power density$)$.  The bit error probability is then given by the Gaussian error function  ${\rm Q}(x)$:
$$p_{\rm B}= {\rm Q} \left ( \sqrt{{2E_{\rm B}}/{N_0} }\right ) \hspace{0.05cm}.$$
  • For  $N = 8$   ⇒   $ 2^{-2{\it N} } = 1.5 \cdot 10^{-5}$  and  $10 \cdot \lg \ ξ = 6 \ \rm dB$   ⇒   $p_{\rm B} = 0.0024$  $($point marked in red$)$  results:
$$\rho_{\upsilon}= \frac{1}{ 1.5 \cdot 10^{-5} + 4 \cdot 0.0024} \approx 100 \hspace{0.3cm} \Rightarrow \hspace{0.3cm}10 \cdot {\rm lg} \hspace{0.15cm}\rho_{\upsilon}\approx 20\,{\rm dB} \hspace{0.05cm}.$$
  • This small  $ρ_v$  value is due to the term  $4 · 0.0024$  in the denominator  $($influence of the transmission errors$)$,  while in the horizontal section of the curve,  for each  $N$  $($number of bits per sample$)$,  the term  $\rm 2^{-2{\it N} }$  dominates,  i.e. the quantization noise.
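The numbers above follow directly from the formulas of this section.  A minimal Python sketch (an illustration; the function names are hypothetical)  evaluates  $ρ_v = 1/(2^{-2N} + 4 \cdot p_{\rm B})$  at the red point  $10 · \lg \ ξ = 6\ \rm dB$:

```python
import math

def Q(x):
    """Complementary Gaussian distribution function Q(x) via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pcm_sink_snr(xi, N=8):
    """rho_v = 1 / (2^(-2N) + 4*p_B)  with  p_B = Q(sqrt(2*xi)),  xi = E_B/N_0."""
    p_B = Q(math.sqrt(2 * xi))
    return 1 / (2 ** (-2 * N) + 4 * p_B)

xi = 10 ** (6 / 10)               # performance parameter for 10*lg(xi) = 6 dB
rho_v = pcm_sink_snr(xi)
print(rho_v)                      # roughly 100  =>  about 20 dB
```

For large  $ξ$  the bit error term vanishes and  $ρ_v$  saturates at  $2^{2N}$,  which is the horizontal curve section discussed above.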

Non-linear quantization


Often the quantization intervals are not chosen equally large,  but one uses a finer quantization for the inner amplitude range than for large amplitudes.  There are several reasons for this:

Uniform quantization of a speech signal
  • In audio signals,  distortions of the quiet signal components  (i.e. values near the zero line)  are subjectively perceived as more disturbing than an impairment of large amplitude values.
  • Such a non-uniform quantization also leads to a larger sink SNR for a music or speech signal,  because here the signal amplitude is not uniformly distributed.


The graph shows a speech signal  $q(t)$  and its amplitude distribution  $f_q(q)$   ⇒   $\text{Probability density function}$  $\rm (PDF)$.

This is the  $\text{Laplace distribution}$,  which can be approximated as follows:

  • by a continuous-valued two-sided exponential distribution,  and
  • by a Dirac delta function  $δ(q)$  to account for the speech pauses  (magenta colored).


In the graph,  non-linear quantization is only indicated,  e.g. by means of the 13-segment characteristic,  which is described in more detail in  "Exercise 4.5":

  • The quantization intervals here become wider and wider towards the edges section by section.
  • The more frequent small amplitudes,  on the other hand,  are quantized very finely.


Compression and expansion


Non-uniform quantization can be realized,  for example,  as follows:

Realization of a non-uniform quantization
  • the sampled values  $q_{\rm A}(ν \cdot T_{\rm A})$  are first deformed by a nonlinear characteristic  $q_{\rm K}(q_{\rm A})$,  and
  • subsequently,  the resulting output values  $q_{\rm K}(ν · T_{\rm A})$  are uniformly quantized.


This results in the signal chain sketched on the right.

$\text{Conclusion:}$  Such a non-uniform quantization means:

  • Through the nonlinear characteristic  $q_{\rm K}(q_{\rm A})$   ⇒   small signal values are amplified and large values are attenuated   ⇒   »compression«.
  • This deliberate signal distortion is undone at the receiver by the inverse function  $v_{\rm E}(υ_{\rm Q})$    ⇒   »expansion«.
  • The total process of transmitter-side compression and receiver-side expansion is also called  »companding«.


For the PCM system 30/32,  the  "Comité Consultatif International Télégraphique et Téléphonique"  $\rm (CCITT)$  recommended the so-called  "A–characteristic":

$$y(x) = \left\{ \begin{array}{l} \frac{1 + {\rm ln}(A \cdot x)}{1 + {\rm ln}(A)} \\ \frac{A \cdot x}{1 + {\rm ln}(A)} \\ - \frac{1 + {\rm ln}( - A \cdot x)}{1 + {\rm ln}(A)} \\ \end{array} \right.\quad\begin{array}{*{5}c}{\rm{for}}\\{\rm{for}}\\{\rm{for}} \\ \end{array}\begin{array}{*{10}c}1/A \le x \le 1\hspace{0.05cm}, \\ - 1/A \le x \le 1/A\hspace{0.05cm}, \\ - 1 \le x \le - 1/A\hspace{0.05cm}. \\ \end{array}$$
  • Here,  for abbreviation   $x = q_{\rm A}(ν \cdot T_{\rm A})$   and  $y = q_{\rm K}(ν \cdot T_{\rm A})$   are used.
  • This characteristic curve with the value  $A = 87.56$  introduced in practice has a constantly changing slope.
  • For more details on this type of non-uniform quantization,  see the  "Exercise 4.6".
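The A–characteristic and its receiver-side inverse can be sketched in a few lines.  The following Python illustration (the function names are hypothetical; the inverse is derived by solving the two branches of  $y(x)$  for  $x$,  an assumption consistent with the definition above)  also checks the compression of small amplitudes:

```python
import math

A = 87.56   # A-law parameter used in practice

def compress(x):
    """A-characteristic y(x) for -1 <= x <= 1 (odd symmetric)."""
    s, ax = math.copysign(1.0, x), abs(x)
    if ax < 1 / A:                                   # linear inner branch
        return s * A * ax / (1 + math.log(A))
    return s * (1 + math.log(A * ax)) / (1 + math.log(A))

def expand(y):
    """Inverse characteristic applied at the receiver (expansion)."""
    s, ay = math.copysign(1.0, y), abs(y)
    y0 = 1 / (1 + math.log(A))                       # compressor output at x = 1/A
    if ay < y0:
        return s * ay * (1 + math.log(A)) / A
    return s * math.exp(ay * (1 + math.log(A)) - 1) / A

print(compress(0.01))     # about 0.16: small inputs are strongly amplified
print(compress(1.0))      # 1.0: the range limit is preserved
```

Expansion exactly undoes the compression, so without quantization the companding chain is distortion-free; the benefit appears only once the compressed values are uniformly quantized.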


⇒   Note:   In the third part of the  (German language)  learning video  "Pulse Code Modulation"  are covered:

  • the definition of signal-to-noise power ratio  $\rm (SNR)$,
  • the influence of quantization noise and transmission errors,
  • the differences between linear and non-linear quantization.


Exercises for the chapter


Exercise 4.1: PCM System 30/32

Exercise 4.2: Low-Pass for Signal Reconstruction

Exercise 4.2Z: About the Sampling Theorem

Exercise 4.3: Natural and Discrete Sampling

Exercise 4.4: About the Quantization Noise

Exercise 4.4Z: Signal-to-Noise Ratio with PCM

Exercise 4.5: Non-Linear Quantization

Exercise 4.6: Quantization Characteristics