Linear and Time Invariant Systems/System Description in Frequency Domain

{{FirstPage}}
{{Header|
Untermenü=Basics of System Theory
|Nächste Seite=System Description in Time Domain
}}


== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==
<br>
*In the book&nbsp; &raquo;Signal Representation&laquo;&nbsp; you were familiarized with the mathematical description of deterministic signals in the time and frequency domain.
  
*The second book&nbsp; &raquo;Linear Time-Invariant Systems&laquo;&nbsp; now describes&nbsp; what changes a signal or its spectrum undergoes through a transmission system&nbsp; and how these changes can be captured mathematically.
*In the first chapter,&nbsp; the basics of the so-called&nbsp; &raquo;Systems Theory&laquo;&nbsp; are presented,&nbsp; which allows a uniform and simple description of such systems.&nbsp; We start with the system description in the frequency domain.
{{BlaueBox|TEXT= 
$\text{Please note:}$&nbsp;
*The&nbsp; &raquo;'''system'''&laquo;&nbsp; can be a simple circuit as well as a complete, highly complicated transmission system with a multitude of components.
*Here it is only assumed that the system has the two properties&nbsp; &raquo;'''linear'''&laquo;&nbsp; and&nbsp; &raquo;'''time-invariant'''&laquo;.}}

==The cause-and-effect principle==
<br>
[[File:P_ID775__LZI_T_1_1_S1_neu.png|right|frame|Simplest system model|class=fit]]
We always consider here the simple model outlined on the right.&nbsp; This arrangement is to be interpreted as follows: 
  
*The focus is on the so-called&nbsp; &raquo;system&laquo;,&nbsp; which is largely abstracted in its function &nbsp; &rArr; &nbsp; &raquo;black box&laquo;.&nbsp; Nothing is known in detail about the realization of the system.
*The time-dependent input&nbsp; $x(t)$&nbsp; acting on this system is also referred to as the&nbsp; &raquo;'''cause function'''&laquo;&nbsp; in the following.
*At the system output,&nbsp; the&nbsp; &raquo;'''effect function'''&laquo;&nbsp; $y(t)$&nbsp; appears  as the system response  to the input function&nbsp; $x(t)$.


{{BlaueBox|TEXT=
$\text{Please note:}$&nbsp;
#The&nbsp; &raquo;system&laquo;&nbsp; can generally be of any kind and is not limited to communications technology alone.&nbsp; In fact,&nbsp; attempts are also made in other fields of science,&nbsp; such as the natural sciences,&nbsp; economics and business administration,&nbsp; sociology and political science,&nbsp; to capture and describe causal relationships between different variables by means of the cause-and-effect principle. 
#However,&nbsp; the methods used for these phenomenological systems theories differ significantly from the approach in Communications Engineering,&nbsp; which is outlined in this first main chapter of the present book&nbsp; &raquo;Linear Time-Invariant Systems&laquo;.}}

==Application in Communications Engineering==
<br>
The cause-and-effect principle can also be applied in Communications Engineering,&nbsp; for example to describe two-terminal circuits,&nbsp; also referred to as one-ports.&nbsp; Here,&nbsp; one can consider the current curve&nbsp; $i(t)$&nbsp; as the cause function and the voltage&nbsp; $u(t)$&nbsp; as the effect function.&nbsp; By observing the I/U relationships,&nbsp; conclusions can be drawn about the properties of the actually unknown one-port.
[[File:EN_LZI_T_1_1_S2.png|right|frame|General model of signal transmission|class=fit]]
In Germany,&nbsp; [https://en.wikipedia.org/wiki/Karl_K%C3%BCpfm%C3%BCller $\text{Karl Küpfmüller}$]&nbsp; introduced the term&nbsp; &raquo;Systems Theory&laquo;&nbsp; for the first time in 1949.&nbsp;
He understands it as a method for describing complex causal relationships in natural sciences and technology,&nbsp; based on a spectral transformation,&nbsp; e.g. the&nbsp;
[[Signal_Representation/Fourier_Transform_and_its_Inverse#The_first_Fourier_integral|&raquo;Fourier transform&laquo;]].
  
A transmission system can be described entirely in terms of systems theory.&nbsp; Here,
*the&nbsp; &raquo;cause function&laquo;&nbsp; is the input signal&nbsp; $x(t)$&nbsp; or its spectrum&nbsp; $X(f)$,
*the&nbsp; &raquo;effect function&laquo;&nbsp; is the output signal&nbsp; $y(t)$&nbsp; or its spectrum&nbsp; $Y(f)$.  
  
Also in the following graphs,&nbsp; input variables are mostly drawn in blue,&nbsp; output variables in red and system variables in green.
<br clear=all>
{{GraueBox|TEXT=
$\text{Example 1:}$&nbsp;
(1)&nbsp; For example,&nbsp; if the&nbsp; &raquo;system&laquo;&nbsp; describes a given linear electric circuit,&nbsp; then for a known input signal&nbsp; $x(t)$&nbsp; the output signal&nbsp; $y(t)$&nbsp; can be predicted with the help of&nbsp; "Systems Theory".
  
(2)&nbsp; A second task of&nbsp; "Systems Theory"&nbsp; is to classify the transmission system by measuring&nbsp; $y(t)$&nbsp; for a known&nbsp; $x(t)$,&nbsp; but without knowing the system in detail.
  
(3)&nbsp; If&nbsp; $x(t)$&nbsp;describes, for instance, the voice of a caller in Hamburg and&nbsp; $y(t)$&nbsp;the recording of an answering machine in Munich, then the&nbsp; &raquo;transmission system&laquo;&nbsp; consists of the following components:  
  
:Microphone &ndash; telephone &ndash; electrical line &ndash; signal converter $($electrical-optical$)$ &ndash; fiber optic cable &ndash; optical amplifier &ndash; signal converter $($optical-electrical$)$ &ndash; receiver filter $($for equalization and noise limitation$)$ &ndash; &nbsp; . . . . . . &nbsp; &ndash; electromagnetic transducer. }}
==Prerequisites for the application of Systems Theory==
<br>
The model of a transmission system given above holds generally and independently of the boundary conditions.&nbsp; However,&nbsp; the application of systems theory requires some additional limiting preconditions.

Unless explicitly stated otherwise,&nbsp; the following shall always apply:
*Both&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$&nbsp; are&nbsp; &raquo;'''deterministic'''&laquo;&nbsp; signals.&nbsp; Otherwise,&nbsp; one must proceed according to the chapter&nbsp; [[Theory_of_Stochastic_Signals/Stochastische_Systemtheorie|&raquo;Stochastic System Theory&laquo;]].
  
*The system is&nbsp; &raquo;'''linear'''&laquo;.&nbsp; This can be seen,&nbsp; for example,&nbsp; from the fact that a harmonic oscillation &nbsp;$x(t)$&nbsp; at the input also results in a harmonic oscillation&nbsp; $y(t)$&nbsp; of the same frequency at the output:
:$$x(t) = A_x \cdot \cos(\omega_0 \hspace{0.05cm}t - \varphi_x)\hspace{0.2cm}\Rightarrow \hspace{0.2cm} y(t) = A_y \cdot\cos(\omega_0 \hspace{0.05cm}t - \varphi_y).$$
*New frequencies do not arise.&nbsp; Only amplitude and phase of the harmonic oscillation can be changed.&nbsp; Nonlinear systems are treated in the chapter&nbsp; [[Linear_and_Time_Invariant_Systems/Nonlinear_Distortions|&raquo;Nonlinear Distortions&laquo;]].
*Because of linearity, the superposition principle is applicable.&nbsp; This states that due to&nbsp; $x_1(t) ⇒  y_1(t)$&nbsp;  and&nbsp; $x_2(t)  ⇒  y_2(t)$&nbsp; the following mapping also necessarily holds:
:$$x_1(t) + x_2(t) \hspace{0.1cm}\Rightarrow \hspace{0.1cm} y_1(t) + y_2(t).$$
*The system is&nbsp; &raquo;'''time-invariant'''&laquo;.&nbsp; This means that an input signal shifted by&nbsp; $\tau$&nbsp; results in the same output signal,&nbsp; but this is also delayed by&nbsp; $\tau$:
:$$x(t - \tau) \hspace{0.1cm}\Rightarrow \hspace{0.1cm} y(t -\tau)\hspace{0.4cm}{\rm if} \hspace{0.4cm}x(t )\hspace{0.2cm}\Rightarrow \hspace{0.1cm} y(t).$$
:Time-varying systems are discussed in the book&nbsp; [[Mobile_Communications|&raquo;Mobile Communications&laquo;]].
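Both properties can also be checked numerically.&nbsp; A minimal Python sketch&nbsp; $($assuming a simple first-order discrete-time filter as the unknown&nbsp; &raquo;black box&laquo;$)$&nbsp; verifies the superposition principle and the time invariance defined above:
<syntaxhighlight lang="python">
# Minimal sketch: check superposition and time invariance for an assumed example system.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
b, a = [0.1], [1.0, -0.9]              # assumed first-order recursive low-pass


def system(x):
    """Assumed LTI 'black box': maps a cause function x onto an effect function y."""
    return lfilter(b, a, x)


x1 = rng.standard_normal(1000)
x2 = rng.standard_normal(1000)

# Superposition principle:  x1(t) + x2(t)  =>  y1(t) + y2(t)
print("linear:", np.allclose(system(x1 + x2), system(x1) + system(x2)))

# Time invariance:  x(t - tau)  =>  y(t - tau)   (delay realized by zero padding)
tau = 37
y_shifted = system(np.concatenate([np.zeros(tau), x1]))
y_delayed = np.concatenate([np.zeros(tau), system(x1)])
print("time-invariant:", np.allclose(y_shifted, y_delayed))
</syntaxhighlight>
Both checks print&nbsp; "True"&nbsp; for this example;&nbsp; a nonlinear or time-varying system would fail at least one of them.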
  
{{BlaueBox|TEXT=
$\text{Please note:}$&nbsp;
If all the conditions listed here are fulfilled, one deals with a&nbsp; &raquo;'''linear time-invariant system'''&laquo;,&nbsp; abbreviated&nbsp; $\rm LTI$&nbsp; system.}}  
==Frequency response  &ndash; Transfer function==
<br>
We assume an LTI system whose input and output spectra&nbsp; $X(f)$&nbsp; and&nbsp; $Y(f)$&nbsp; are known or can be derived from the time signals&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$&nbsp; via&nbsp; [[Signal_Representation/Fourier_Transform_and_its_Inverse#The_first_Fourier_integral|&raquo;Fourier transform&laquo;]].
  
[[File:EN_LZI_T_1_1_S4.png|right|frame|Definition of the frequency response|class=fit]]
{{BlaueBox|TEXT= 
$\text{Definition:}$&nbsp;
The behaviour of a system is described in the frequency domain by the&nbsp; &raquo;'''frequency response'''&laquo;:
:$$H(f) = \frac{Y(f)}{X(f)}= \frac{ {\rm response\:function} }{ {\rm cause\:function} }.$$
Other terms for&nbsp; $H(f)$&nbsp; are&nbsp; &raquo;system function&laquo;&nbsp; and&nbsp; &raquo;transfer function&laquo;. }}
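A minimal Python sketch illustrates this definition numerically,&nbsp; assuming a simple first-order discrete-time filter as the example system:&nbsp; the frequency response is estimated as the quotient of the two spectra obtained from one sampled input/output record.
<syntaxhighlight lang="python">
# Minimal sketch: estimate H(f) = Y(f)/X(f) from sampled cause and effect functions,
# using the FFT as a numerical stand-in for the Fourier transform.
import numpy as np
from scipy.signal import lfilter

fs, n = 8000.0, 4096                     # assumed sampling rate in Hz and record length
rng = np.random.default_rng(1)
x = rng.standard_normal(n)               # broadband cause function x(t)
y = lfilter([0.2], [1.0, -0.8], x)       # effect function y(t) of the assumed system

X = np.fft.rfft(x)                       # input spectrum  X(f)
Y = np.fft.rfft(y)                       # output spectrum Y(f)
f = np.fft.rfftfreq(n, d=1.0 / fs)       # frequency grid in Hz
H = Y / X                                # frequency response estimate

# The assumed filter has the exact DC gain 0.2/(1 - 0.8) = 1; the estimate is close to it.
print(f"|H| near f = 0: {abs(H[1]):.2f},   |H| at f = {f[-1]:.0f} Hz: {abs(H[-1]):.2f}")
</syntaxhighlight>
In a real measurement the same quotient is formed from the measured spectra;&nbsp; averaging over several records reduces the influence of noise and edge effects.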
  
{{GraueBox|TEXT=
$\text{Example 2:}$&nbsp;
The signal &nbsp; $x(t)$&nbsp; with real spectrum&nbsp; $X(f)$&nbsp; $($blue curve$)$&nbsp; is applied to the input of an LTI system.&nbsp;
[[File:P_ID778__LZI_T_1_1_S4b_neu.png |right|frame|Input spectrum,&nbsp; output spectrum&nbsp; and&nbsp; frequency response|class=fit]]
The measured output spectrum&nbsp; $Y(f)$&nbsp; &ndash; marked red in the graph &ndash;
*is larger than&nbsp; $X(f)$&nbsp; at frequencies lower than&nbsp; $2 \ \rm kHz$,&nbsp;
*has a steeper slope in the region around&nbsp; $2 \ \rm kHz$,&nbsp; and
*above&nbsp; $2.8 \ \rm kHz$&nbsp; the signal&nbsp; $y(t)$&nbsp; has no spectral components.
The green circles mark some measuring points of the frequency response&nbsp; $H(f) = Y(f)/X(f)$&nbsp; which is real,&nbsp; too.
#At low frequencies it holds&nbsp; $H(f)>1$:&nbsp; In this range the LTI system has an amplifying effect.
#The frequency roll-off of&nbsp; $H(f)$&nbsp; is similar to that of&nbsp; $Y(f)$,&nbsp; but not identical to it.}}


==Properties of the frequency response==
<br>
The frequency response&nbsp; $H(f)$&nbsp; is a central variable in the description of communication systems.

Some properties of this important system characteristic are listed below:
*The frequency response describes the LTI system on its own.&nbsp; It can be calculated, for example, from the linear components of an electrical network.&nbsp; With a different input signal&nbsp; $x(t)$&nbsp; and a correspondingly different output signal&nbsp; $y(t)$&nbsp; the result is exactly the same frequency response&nbsp; $H(f)$.
*The frequency response can have a&nbsp; "unit".&nbsp; For example, if one considers the voltage curve&nbsp; $u(t)$&nbsp; as cause and the current&nbsp; $i(t)$&nbsp; as effect for a one-port, the frequency response&nbsp; $H(f) = I(f)/U(f)$&nbsp; has the unit&nbsp; $\rm A/V$.&nbsp; $I(f)$&nbsp; and&nbsp; $U(f)$&nbsp; are the Fourier transforms of&nbsp; $i(t)$&nbsp; and&nbsp; $u(t)$, respectively.
*In the following we only consider&nbsp; &raquo;'''two-port networks'''&laquo;&nbsp; or so-called&nbsp; &raquo;'''quadripoles'''&laquo;.&nbsp; Moreover, without loss of generality, we usually assume that&nbsp; $x(t)$&nbsp; and&nbsp; $y(t)$&nbsp; are both voltages.&nbsp; In this case&nbsp; $H(f)$&nbsp; is always dimensionless.
*Since the spectra&nbsp; $X(f)$&nbsp; and&nbsp; $Y(f)$&nbsp; are generally complex, the frequency response&nbsp; $H(f)$&nbsp; is also a complex function.&nbsp; The magnitude&nbsp; $|H(f)|$&nbsp; is called the&nbsp; &raquo;'''amplitude response'''&laquo;&nbsp; or the&nbsp; "magnitude frequency response".
*This is also often represented in logarithmic form and called the&nbsp; "attenuation curve":
:$$a(f) = - \ln |H(f)| = - 20 \cdot \lg |H(f)|.$$
*Depending on whether the first form with the natural logarithm or the second with the decadic logarithm is used, the pseudo-unit&nbsp; "neper"&nbsp; $\rm (Np)$&nbsp; or&nbsp; "decibel"&nbsp; $\rm (dB)$&nbsp; must be added.
*The&nbsp; &raquo;'''phase response'''&laquo;&nbsp; can be calculated from&nbsp; $H(f)$&nbsp; in the following way:
:$$b(f) = - {\rm arc} \hspace{0.1cm}H(f) \hspace{0.2cm}{\rm in\hspace{0.1cm}radian \hspace{0.1cm}(rad)}.$$


{{BlaueBox|TEXT=
&nbsp; $\text{Thus, the total frequency response can also be represented as follows}$:
:$$H(f) = \vert H(f)\vert \cdot {\rm e}^{ - {\rm j} \hspace{0.05cm} \cdot\hspace{0.05cm} b(f)} = {\rm e}^{-a(f)}\cdot {\rm e}^{ - {\rm j}\hspace{0.05cm} \cdot \hspace{0.05cm} b(f)}.$$}}
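A minimal Python sketch,&nbsp; assuming the first-order low-pass&nbsp; $H(f) = 1/(1 + {\rm j}\cdot f/f_0)$&nbsp; as example,&nbsp; evaluates amplitude response,&nbsp; attenuation in neper and decibel as well as phase response,&nbsp; and checks the polar representation given in the box above:
<syntaxhighlight lang="python">
# Minimal sketch: amplitude response, attenuation (Np and dB) and phase response.
import numpy as np

f0 = 1000.0                                   # assumed 3 dB frequency in Hz
f = np.array([100.0, 1000.0, 10000.0])        # a few test frequencies in Hz
H = 1.0 / (1.0 + 1j * f / f0)                 # assumed first-order low-pass

amplitude = np.abs(H)                         # |H(f)|
a_np  = -np.log(amplitude)                    # a(f) in neper:   -ln|H(f)|
a_db  = -20.0 * np.log10(amplitude)           # a(f) in decibel: -20*lg|H(f)|
b_rad = -np.angle(H)                          # b(f) in radian:  -arc H(f)

# 1 Np corresponds to 20/ln(10) = 8.686... dB, and |H(f)| = exp(-a(f)) with a(f) in neper.
print(np.allclose(a_db, a_np * 20.0 / np.log(10.0)))        # True
print(np.allclose(H, np.exp(-a_np) * np.exp(-1j * b_rad)))  # True: polar representation
</syntaxhighlight>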


==Low-pass, high-pass, band-pass and band-stop filters==
<br>
[[File:EN_LZI_T_1_1_S6.png|right|frame|Left:&nbsp; Low-pass and high-pass;&nbsp; cut-off frequency&nbsp; $f_{\rm G}$&nbsp; $($German:&nbsp; Grenzfrequenz &nbsp; &rArr; &nbsp; $\rm G)$;<br>Right:&nbsp; Band-pass;&nbsp; lower cut-off frequency&nbsp; $f_{\rm U}$,&nbsp; upper cut-off frequency&nbsp; $f_{\rm O}$|class=fit]]
According to the amplitude response&nbsp; $|H(f)|$&nbsp; one distinguishes between
*&raquo;'''Low-pass filters'''&laquo;: &nbsp;Signal components tend to be more attenuated with increasing frequency.
*&raquo;'''High-pass filters'''&laquo;: &nbsp;Here, high-frequency signal components are attenuated less than low-frequency ones.&nbsp; A direct signal&nbsp; $($that is, a signal component with the frequency&nbsp; $f = 0)$&nbsp; cannot be transmitted via a high-pass filter.
*&raquo;'''Band-pass filters'''&laquo;: &nbsp;There is a preferred frequency called the center frequency&nbsp; $f_{\rm M}$.&nbsp; The further away the frequency of a signal component is from&nbsp; $f_{\rm M}$,&nbsp; the more it will be attenuated.
*&raquo;'''Band-stop filters'''&laquo;: &nbsp;This is the counterpart to the band-pass filter and it holds&nbsp; $|H(f_{\rm M})| ≈ 0$.&nbsp; However, very low-frequency and very high-frequency signal components are passed with little attenuation.
The graph shows on the left the amplitude responses of the filter types&nbsp; "low-pass"&nbsp; $\rm (LP)$&nbsp; and&nbsp; "high-pass"&nbsp; $\rm (HP)$,&nbsp; and on the right&nbsp; "band-pass"&nbsp; $\rm (BP)$.

Also shown in the sketch are the&nbsp; &raquo;'''cut-off frequencies'''&laquo;.&nbsp; Here these always denote 3 dB cut-off frequencies,&nbsp; for example according to the following definition.

{{BlaueBox|TEXT=
$\text{Definition:}$&nbsp;
The&nbsp; &raquo;'''3 dB cut-off frequency'''&laquo;&nbsp; of a low-pass filter specifies the frequency&nbsp; $f_{\rm G}$,&nbsp; for which holds:
:$$\vert H(f = f_{\rm G})\vert = {1}/{\sqrt{2} } \cdot \vert H(f = 0)\vert \hspace{0.5cm}\Rightarrow\hspace{0.5cm} \vert H(f = f_{\rm G})\vert^2 = {1}/{2} \cdot  \vert H(f = 0)  \vert^2.$$}}


#Note that there are also a number of other definitions for the cut-off frequency.
#These can be found in the section&nbsp; [[Linear_and_Time_Invariant_Systems/Some_Low-Pass_Functions_in_Systems_Theory#General_remarks|&raquo;General Remarks&laquo;]]&nbsp; in the chapter&nbsp; &raquo;Some Low-Pass Functions in Systems Theory&laquo;.
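A minimal Python sketch,&nbsp; again assuming a first-order low-pass as example,&nbsp; locates the 3 dB cut-off frequency on a dense frequency grid and compares it with the analytic value,&nbsp; which for this filter is&nbsp; $f_{\rm G} = f_0$:
<syntaxhighlight lang="python">
# Minimal sketch: locate the 3 dB cut-off frequency of an assumed first-order low-pass.
import numpy as np

f0 = 2000.0                                     # assumed filter parameter in Hz
f = np.linspace(0.0, 10000.0, 100001)           # frequency grid with 0.1 Hz spacing
H = 1.0 / (1.0 + 1j * f / f0)                   # assumed low-pass H(f) = 1/(1 + j*f/f0)

target = np.abs(H[0]) / np.sqrt(2.0)            # |H(f_G)| = |H(0)| / sqrt(2)
f_G = f[np.argmin(np.abs(np.abs(H) - target))]  # grid point closest to the 3 dB condition
print(f"numerical f_G = {f_G:.1f} Hz,  analytic f_G = f0 = {f0:.1f} Hz")
</syntaxhighlight>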
  
==Test signals for measuring the frequency response==
<br>
For measuring the frequency response&nbsp; $H(f)$&nbsp; any input signal&nbsp; $x(t)$&nbsp; with spectrum&nbsp; $X(f)$&nbsp; is suitable,&nbsp; as long as&nbsp; $X(f)$&nbsp; has no zeros (in the range of interest).&nbsp; By measuring the output spectrum&nbsp; $Y(f)$&nbsp; the frequency response can thus be determined in a simple way:
:$$H(f) = \frac{Y(f)}{X(f)}.$$
In particular,&nbsp; the following input signals are suitable:
*&raquo;'''Dirac delta function'''&laquo;&nbsp; $x(t) = K · δ(t)$ &nbsp; &rArr; &nbsp; spectrum&nbsp; $X(f) = K$:
:Thus, the frequency response has the same shape in magnitude and phase as the output spectrum&nbsp; $Y(f)$,&nbsp; and it holds&nbsp; $H(f) = 1/K · Y(f)$. <br>If one approximates the Dirac delta function by a narrow rectangle of equal area&nbsp; $K$, then&nbsp; $H(f)$&nbsp; must be corrected by means of a&nbsp; ${\rm sin}(x)/x$&ndash;function.
*&raquo;'''Dirac comb'''&laquo; &nbsp; &ndash; &nbsp; the infinite sum of equally weighted Dirac delta functions at the time interval&nbsp; $T_{\rm A}$:
:This leads according to the chapter&nbsp;  [[Signal_Representation/Discrete-Time_Signal_Representation#Dirac_comb_in_time_and_frequency_domain|&raquo;Discrete-Time Signal Representation&laquo;]]&nbsp; to a Dirac comb in the frequency domain with distance&nbsp; $f_{\rm A} =1/T_{\rm A}$.&nbsp; This allows a discrete frequency measurement of&nbsp; $H(f)$&nbsp; with the spectral samples spaced&nbsp; $f_{\rm A}$.  
*&raquo;'''Harmonic oscillation'''&laquo;&nbsp; $x(t) = A_x · \cos (2πf_0t - φ_x)$ &nbsp; &rArr; &nbsp; Dirac-shaped spectrum at&nbsp; $\pm f_0$:
:The output signal&nbsp; $y(t) = A_y · \cos(2πf_0t - φ_y)$&nbsp; is an oscillation of the same frequency $f_0$.&nbsp; The frequency response for&nbsp; $f_0 \gt 0$&nbsp; is:  
:$$H(f_0) = \frac{Y(f_0)}{X(f_0)} = \frac{A_y}{A_x}\cdot{\rm e}^{\hspace{0.05cm} {\rm j} \hspace{0.05cm} \cdot \hspace{0.05cm} (\varphi_x - \varphi_y)}.$$  
:To determine the total frequency response&nbsp; $H(f)$&nbsp; an infinite number of measurements at different frequencies $f_0$&nbsp; are required.
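The last method can be turned into a small numerical experiment.&nbsp; A minimal Python sketch,&nbsp; assuming a first-order discrete-time filter and the parameter values given in the code,&nbsp; excites the system with one harmonic oscillation,&nbsp; extracts&nbsp; $A_y$&nbsp; and&nbsp; $\varphi_y$&nbsp; from the steady-state output and forms&nbsp; $H(f_0)$&nbsp; according to the formula above:
<syntaxhighlight lang="python">
# Minimal sketch: measure H(f0) at a single frequency with a harmonic oscillation.
import numpy as np
from scipy.signal import lfilter

fs, f0 = 8000.0, 500.0                      # assumed sampling rate and test frequency in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)              # cause function with A_x = 1, phi_x = 0

b, a = [0.25], [1.0, -0.75]                 # assumed example system
y = lfilter(b, a, x)

# Keep only the second half of the record, where the transient has decayed.
y_ss, t_ss = y[len(y) // 2:], t[len(t) // 2:]

# Correlate with cos and sin to extract A_y and phi_y of  y(t) = A_y*cos(2*pi*f0*t - phi_y).
c = 2.0 * np.mean(y_ss * np.cos(2 * np.pi * f0 * t_ss))
s = 2.0 * np.mean(y_ss * np.sin(2 * np.pi * f0 * t_ss))
A_y, phi_y = np.hypot(c, s), np.arctan2(s, c)

H_meas = A_y * np.exp(1j * (0.0 - phi_y))   # H(f0) = (A_y/A_x) * exp(j*(phi_x - phi_y))
H_ref = 0.25 / (1.0 - 0.75 * np.exp(-2j * np.pi * f0 / fs))  # analytic value of the assumed filter
print(abs(H_meas), abs(H_ref))              # both magnitudes agree closely
</syntaxhighlight>
Repeating this measurement for many test frequencies&nbsp; $f_0$&nbsp; yields the frequency response point by point,&nbsp; as stated above.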


==Exercises for the chapter==
<br>
[[Aufgaben:Exercise_1.1:_Simple_Filter_Functions| Exercise 1.1: Simple Filter Functions]]

[[Aufgaben:Exercise_1.1Z:_Low-Pass_Filter_of_1st_and_2nd_Order|Exercise 1.1Z: Low-Pass Filter of 1st and 2nd Order]]

[[Aufgaben:Exercise_1.2:_Coaxial_Cable|Exercise 1.2: Coaxial Cable]]

[[Aufgaben:Exercise_1.2Z:_Measurement_of_the_Frequency_Response|Exercise 1.2Z: Measurement of the Frequency Response]]
 
{{Display}}