The Fourier transform of 1/t is essentially a step function. We shall see that convolution with 1/(πt) leads to an important integral transform. Specifically, the Fourier transform of 1/t is −iπ sgn(ω). This pair and its dual are shown in Figure 1. Since 1/t has a pole at the origin, its Fourier transform integral diverges there, so 1/t has a transform only in a limiting sense. (It can be evaluated using contour integration.) Likewise, the Fourier integral of sgn(t), in computing either the forward or the inverse transform, does not exist in the conventional sense because sgn(t) is not absolutely integrable. The transform of sgn(t) can be defined by considering a sequence of transformable functions that approaches sgn(t) in the limit. We do not let these mathematical inconveniences deter us any more than they did in our previous discussion of δ functions and sinusoids, for the pair 1/t ↔ −iπ sgn(ω) has some intriguing properties.
Since 1/t is real and odd, its Fourier transform is odd and purely imaginary. But, more interestingly, its magnitude spectrum is obviously constant. (Could there be a delta function lurking nearby?) The interest in this transform pair comes from convolving a function g(t) with 1/(πt) in the time domain. This convolution integral, called the Hilbert transform, is as follows:

ĝ(t) = (1/π) PV ∫ g(τ)/(t − τ) dτ.
This transform arises in many applications in digital communications and mathematical physics. The Cauchy principal value (PV), which is a kind of average familiar to those knowledgeable in contour integration, is to be used over the singularity of the integrand. This mathematical inconvenience is avoided in the frequency domain, where we can easily visualize the effect of the Hilbert transform.
Multiplication by −i sgn(ω) in the frequency domain produces a 90° phase shift at all frequencies. The phase of the spectrum is shifted by a constant 90° for all positive frequencies and by 90° in the opposite direction for all negative frequencies. The magnitude spectrum is unchanged, since the magnitude of −i sgn(ω) is flat. The Hilbert transform operation in the frequency domain is summarized in Figure 2: the Fourier transform of the given function has its phase shifted by 90°, in opposite directions for positive and negative frequencies; then the inverse Fourier transform produces the time-domain result.
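This frequency-domain view translates directly into a few lines of code. Here is a minimal numerical sketch (in Python with NumPy, my choice of illustration language, not something this text prescribes), using the common sign convention in which positive frequencies are multiplied by −i: the Hilbert transform of a cosine should come out as a sine.

```python
import numpy as np

def hilbert_transform(x):
    """Hilbert transform via the frequency domain: multiply the
    spectrum by -i*sign(f), a 90-degree phase shift with opposite
    signs for positive and negative frequencies."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))          # positive and negative frequencies
    return np.fft.ifft(-1j * np.sign(f) * X).real

# The Hilbert transform of cos(t) is sin(t), and of sin(t) is -cos(t).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(np.allclose(hilbert_transform(np.cos(t)), np.sin(t)))    # True
print(np.allclose(hilbert_transform(np.sin(t)), -np.cos(t)))   # True
```

Note that the DC bin is multiplied by sign(0) = 0, so any mean value of the signal is removed; that matches the continuous theory, where the Hilbert transform of a constant is zero.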
The exact 90° phase shift [including its independence of frequency] occurs in several different instances in wave propagation: the reflection of electromagnetic waves from metals at a certain angle of incidence involves an exact 90° phase shift independent of frequency; special teleseismic ray paths produce the same type of phase shift; and the far-field propagation from point sources of all types of waves includes a 90° phase shift at all frequencies.
Just one comment before we delve further: the trick in DSP, digital control, and digital communications is to treat all possible practical signals via a few elementary or test signals. The δ function is an elementary signal; so also are the complex exponentials or sinusoids, the unit step function, the boxcar, the sinc pulse, and the signum function.
We continue our light study with transforms of sinusoids.
The Fourier transform of sines and cosines plays a central role in the study of linear time-invariant systems for three closely related reasons: (1) sinusoids are eigenfunctions of LTI systems; (2) the spectrum consists of the eigenvalues associated with these eigenfunctions; and (3) the weighting factor (sometimes called the kernel) in the Fourier transform is a complex sinusoid. Clearly, we want to include sinusoids in our Fourier transform repertoire.
We have already seen that the Fourier transform of a complex exponential is a δ function located at the frequency of the complex sinusoid. That is,

FT{e^{iω₀t}} = 2π δ(ω − ω₀) and FT{e^{−iω₀t}} = 2π δ(ω + ω₀).

The Fourier transforms of the cosine and the sine follow directly from these two equations. Adding the two equations (and dividing by 2) gives

FT{cos ω₀t} = π[δ(ω − ω₀) + δ(ω + ω₀)],    (I)

while subtracting them (and dividing by 2i) gives

FT{sin ω₀t} = −iπ[δ(ω − ω₀) − δ(ω + ω₀)].    (II)
These transforms are just the combinations of δ functions that are required by the basic symmetry properties of the Fourier transform: because the cosine is a real and even function, its transform is real and symmetric; the sine is real and odd, and therefore it has an odd, imaginary transform.
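Equations I and II have exact discrete analogues that are easy to verify with the DFT. A short check (Python/NumPy assumed here as the illustration language): the DFT of a whole-cycle cosine is a pair of real, symmetric spikes, and that of a sine is a pair of imaginary, antisymmetric spikes, the DFT stand-ins for the ± frequency δ functions.

```python
import numpy as np

N, k0 = 64, 5                       # N samples, k0 whole cycles
n = np.arange(N)
C = np.fft.fft(np.cos(2 * np.pi * k0 * n / N))
S = np.fft.fft(np.sin(2 * np.pi * k0 * n / N))

# Cosine: two real, symmetric spikes of weight N/2 at bins k0 and N-k0.
print(np.isclose(C[k0], N / 2), np.isclose(C[N - k0], N / 2))    # True True
# Sine: two imaginary, antisymmetric spikes -i*N/2 and +i*N/2.
print(np.isclose(S[k0], -1j * N / 2), np.isclose(S[N - k0], 1j * N / 2))  # True True
```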
Alternatively, we could derive the sine and cosine transforms by using the phase-shift theorem: shifting a function along the axis in one domain introduces a complex sinusoid in the other domain. For example, if we want to generate the duals of the pairs in equations I and II, we apply the phase-shift theorem to the δ function and write

FT{δ(t − t₀)} = e^{−iωt₀} and FT{δ(t + t₀)} = e^{iωt₀}.

Adding and subtracting these two equations gives

FT{δ(t − t₀) + δ(t + t₀)} = 2 cos ωt₀    (III)

and

FT{δ(t − t₀) − δ(t + t₀)} = −2i sin ωt₀.    (IV)
The sine and cosine transforms and their duals are shown in Figure 1.
Thus, the Fourier transforms of sines and cosines can be viewed as the result of forcing certain symmetries onto the δ function transform after it is shifted along the axis: shifting the δ function off the origin in the frequency domain and then requiring a symmetric spectrum results in equation I; an antisymmetric spectrum leads to equation II. Analogous statements apply to equations III and IV for δ function shifts along the time axis.
The δ functions in equations I and II make for easy convolutions, leading to an important observation. For example, convolving equation I in the frequency domain with a given function's spectrum, which corresponds to multiplying by cos ω₀t in the time domain, gives

FT{f(t) cos ω₀t} = ½[F(ω − ω₀) + F(ω + ω₀)].

This equation and its sine-wave counterpart are sometimes referred to as the modulation theorem. They show that modulating the amplitude of a function by a sinusoid shifts the function's spectrum to higher frequencies by an amount equal to the modulating frequency. This effect is well understood in television and radio work, where a sinusoidal carrier is modulated by the program signal. Obtaining transforms of sines, cosines, and related functions has been a good example of exploiting the basic Fourier transform properties. Our next example is not so obliging.
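The modulation theorem holds exactly for the DFT as well, with the shifts taken circularly. A quick numerical confirmation (Python/NumPy, my choice for illustration): modulate a smooth pulse by a cosine and compare its spectrum with the half-weight shifted copies of the original spectrum.

```python
import numpy as np

N, f0 = 1024, 100                  # number of samples; carrier in DFT bins
n = np.arange(N)
x = np.exp(-0.5 * ((n - N // 2) / 20.0) ** 2)   # smooth baseband pulse
m = x * np.cos(2 * np.pi * f0 * n / N)          # amplitude modulation

X, M = np.fft.fft(x), np.fft.fft(m)
# Modulation theorem: M(k) = (X(k - f0) + X(k + f0)) / 2, circularly.
predicted = 0.5 * (np.roll(X, f0) + np.roll(X, -f0))
print(np.allclose(M, predicted))   # True
```

The spectrum of the pulse reappears centred on the carrier bins ±f0 with half the amplitude, which is exactly the spectrum-shifting effect described above.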
Symmetry properties and a few simple theorems play a central role in our thinking about the DFT. These same properties carry over to the Fourier integral transform as the DFT sums pass over into Fourier integrals. Since these properties are generally quite easy to prove (and, hopefully, you are already familiar with them), we will simply list them in this section. (The proofs are exercises for you :-))
The symmetry properties are, of course, crucial for understanding and manipulating Fourier transforms. For a pair f(t) ↔ F(ω), they can be summarized by

f(−t) ↔ F(−ω)    (8a)

and

f*(t) ↔ F*(−ω).    (8b)
The application of these basic symmetry properties leads to further combinations such as

f*(−t) ↔ F*(ω),

and for two cases of special interest we can show that

if f(t) is real, then F(ω) is Hermitian: F(−ω) = F*(ω),

and if f(t) is imaginary, then F(ω) is anti-Hermitian: F(−ω) = −F*(ω).
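These two special cases are easy to see numerically with the DFT, where the negative frequency −ω lives at index N − k. A small check (Python/NumPy assumed): a real signal gives a Hermitian spectrum, an imaginary one an anti-Hermitian spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # an arbitrary real signal
X = np.fft.fft(x)

# Hermitian symmetry: X[k] == conj(X[N-k]) for k = 1..N-1, the discrete
# analogue of F(-w) = conj(F(w)); the DC bin X[0] is purely real.
print(np.allclose(X[1:], np.conj(X[1:][::-1])))       # True
print(np.isclose(X[0].imag, 0.0))                     # True

# An imaginary signal gives an anti-Hermitian spectrum instead.
Y = np.fft.fft(1j * x)
print(np.allclose(Y[1:], -np.conj(Y[1:][::-1])))      # True
```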
The similarity theorem results from a simple change of variable in the Fourier integral:

f(at) ↔ (1/|a|) F(ω/a),

and likewise for the familiar shift theorem:

f(t − t₀) ↔ e^{−iωt₀} F(ω).
The power theorem, which states that

∫ f(t) g*(t) dt = (1/2π) ∫ F(ω) G*(ω) dω,

can be specialized to Rayleigh's theorem by setting g = f:

∫ |f(t)|² dt = (1/2π) ∫ |F(ω)|² dω.
In more mathematical works, Rayleigh’s theorem is sometimes called Plancherel’s theorem.
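Rayleigh's theorem has a direct DFT counterpart that is worth checking once in code; the only wrinkle is the normalization factor, which lands differently depending on the DFT convention. A sketch (Python/NumPy assumed, with NumPy's unnormalized forward DFT):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
X = np.fft.fft(x)

# Rayleigh's (Parseval's) theorem: the energy is the same in both
# domains; the 1/N reflects numpy's unnormalized forward DFT.
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(energy_time, energy_freq))   # True
```

The 1/N here plays the same bookkeeping role as the 1/2π in the continuous statement above.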
Of course, the important and powerful convolution theorem — meaning linear convolution — is valid in continuum theory also:

f(t) * g(t) ↔ F(ω) G(ω).
The many variations of the convolution theorem arising from the symmetry properties of the Fourier transform apply as well. For example, the autocorrelation theorem, in its general (cross-correlation) form, is as follows:

∫ f(τ) g*(τ − t) dτ ↔ F(ω) G*(ω).

The function F(ω)G*(ω) is called the cross-power spectrum. When we set g = f, this equation states the important result that the Fourier transform of the autocorrelation of a function is its power spectrum:

∫ f(τ) f*(τ − t) dτ ↔ |F(ω)|².
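The autocorrelation result is also the basis of the standard FFT trick for computing correlations. A small numerical confirmation (Python/NumPy assumed; the DFT version uses circular autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(128)
X = np.fft.fft(x)

# Autocorrelation theorem: the inverse DFT of the power spectrum |X|^2
# is the circular autocorrelation of x.
autocorr = np.fft.ifft(np.abs(X) ** 2).real
# Direct circular autocorrelation r[k] = sum_n x[n] * x[(n + k) mod N].
direct = np.array([np.sum(x * np.roll(x, -k)) for k in range(len(x))])
print(np.allclose(autocorr, direct))   # True
```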
The formal similarity between these continuous-theory properties and those of the DFT makes them easy to remember and to visualize. But there are essential differences between the two. The DFT, with its finite sum, has no convergence questions. The Fourier integral transform, on the other hand, has demanded the attention of some of our greatest mathematicians to elucidate its convergence properties. As we know, the absolute-integrability condition is only a start; it can be relaxed — quite easily at the heuristic level — to include the sine wave/δ function pair. The sine wave's ill behaviour is characteristic of a wide class of functions of interest in DSP that do not decay sufficiently fast at infinity to possess a Fourier transform in the normal sense.
Everything should be as simple as possible, but not simpler. — Albert Einstein
I wish to provide a fast track to the high priesthood of digital signal processing. The number of professionals requiring knowledge of DSP is vast, because the amazing electronics revolution (after the fabrication of the first IC chip by Fairchild Semiconductor, USA) has made convenient collection and processing of digital data available to many disciplines. (Sounds like Big Data/Analytics!) The fields of interest are impossible to enumerate; the applications of DSP are limited only by human imagination. I heard at a TI DSP conference that a dentist in the UK had used DSP to reduce the noise of his drilling machine (his reasoning: the cutting of teeth is not so painful, but the noise of the tool creates a scare in the minds of patients). Applications of DSP range widely, from astrophysics, meteorology, geophysics, and computer science to VLSI design, control systems, communications, radar, speech analysis/synthesis, medical technology, and, of course, finance/economics.
I wish to minimize the mathematical paraphernalia in this series. There are many elementary texts on DSP, and I do not claim any novelty in the presentation; but this series might help both the enthusiast and the expert. Towards the goal of simplifying the mathematics involved, I assume that signals are deterministic, thereby avoiding a heavy reliance on the theory of random variables and stochastic processes. The reader need only have a familiarity with differential and integral calculus, complex numbers, and simple matrix algebra. For the purpose of this article, I assume you are aware of the basic concepts of signals and systems, sampled data and the Z-transform, the concept and computation of the frequency response, and the DFT. I will talk a bit about the DFT, though.
Digital signal processing has become extremely important in recent years because of digital electronics. In treating the processing of analog signals that require any reasonable amount of computation, it is usually beneficial to digitize the signals and to use digital computers for their subsequent processing. The advantage results both from the extremely high computation speeds of modern digital computers and from the flexibility afforded by computer programs, which can be stored in software or firmware or hardwired into the design. Low-cost VLSI chips have made this approach beneficial even for devices limited to special-purpose computing applications or restricted by throwaway economics. This computational asset has been a major impetus for thinking of signals as discrete-time sequences. An additional advantage of representing signals in discrete time is their pleasant mathematical simplicity: continuous-time theory requires far more advanced mathematics than the algebra of polynomials, some simple trigonometric function theory, and the behaviour of the geometric series that we have employed. The digital revolution seduces us into viewing every situation in its terms; still, we are haunted by the concept of underlying continuous relationships. We need to know the effect of digitizing continuous-time signals, and the essential difference between these digitized signals and those signals that are inherently digital from the start. Are continuous-time and discrete-time versions simply alternate models of the real physical world, from which we are free to choose? Some say yes; yet there are essential differences.
For example, meteorological data, such as the barometric pressure at a given location, would certainly seem to be a continuous signal that conceptually extends infinitely far from the past into the future. Physical considerations force us to conclude that its spectrum is bandlimited. The pressure wave from a nuclear blast, on the other hand, has a definite beginning and extends with decaying amplitude infinitely long thereafter. I will show you soon that such a signal must have a frequency spectrum that is not bandlimited, and therefore this signal cannot be digitized without aliasing. Still other signals seem inherently digital: the price of a stock, for example, is determined only at discrete trading times. Furthermore, no business lasts forever; its stock's trading has a definite beginning and ending. The stock quote's spectrum must be repetitive as well as inherently broadband.
The spectra of the signals in these three examples are quite different. Clearly, to apply digital signal processing in an intelligent manner, we need to know more about continuous-time functions. We need to develop a continuous-time theory of signals, and then use it to gain insight into its relationship with DSP.
The Fourier Integral Transform Developed from the DFT
Our development of mathematical machinery will follow a natural course, motivated only by considering LTI digital systems and operators. The concept of the spectrum arises because sinusoids are eigenfunctions of LTI systems. The convenience of sampling the spectrum at equally spaced intervals gives rise to the DFT. The DFT, with its equally spaced points in both time and frequency, places us in a position to take the limit easily and pass over to continuous time and frequency. We start with the inverse transform

f(tₙ) = (1/N) Σₖ F(ωₖ) e^{iωₖtₙ}    (I)

and recognize that this sum may be evaluated for any t; it may be considered a continuous function of time. [Just like the similar sum for the spectral response of an LTI operator, equation I can be evaluated at any time.] The frequency interval used in this summation is

Δω = 2π/(NΔt).

Therefore, the kth frequency (in radians per unit time) is

ωₖ = kΔω = 2πk/(NΔt),

and as N becomes infinite,

Δω → dω and ωₖ → ω,

giving for the limit of the sum in equation I (writing 1/N = ΔtΔω/2π and absorbing the factor Δt into F)

f(t) = (1/2π) ∫ F(ω) e^{iωt} dω.    (3)

In the limit, the sum becomes an integration — a continuously infinite number of terms separated by the infinitesimal frequency interval dω.
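The remark that the sum in equation I can be evaluated at any t, not just at the sample instants, deserves a numerical aside. A small sketch (Python/NumPy assumed as the illustration language): evaluating the inverse-DFT sum at a non-integer time interpolates between the samples, provided the frequencies are taken symmetrically about zero.

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)      # samples of a 3-cycle cosine
X = np.fft.fft(x)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies, + and -

def inverse_sum(t):
    """Evaluate the inverse-DFT sum of equation I at an arbitrary time t."""
    return np.sum(X * np.exp(2j * np.pi * k * t / N)).real / N

print(inverse_sum(5.0))   # reproduces the sample x[5]
print(inverse_sum(5.5))   # the band-limited value between samples
```

At t = 5.5 the sum returns cos(2π·3·5.5/32), the value of the underlying continuous cosine, which is exactly the continuous-function-of-time reading of equation I.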
Now, both f(t) and F(ω) are continuous functions. To invert this equation, the orthogonality relation of the sampled complex sinusoids,

(1/N) Σₖ e^{i2πkn/N} e^{−i2πkm/N} = δₙₘ,

must likewise be converted to continuous time and frequency. The same limiting process, with Δω → dω and the sum over k passing into an integral over all frequencies, gives the continuous version:

(1/2π) ∫ e^{iω(t − t′)} dω = δ(t − t′).    (4)
Before continuing, we need to elaborate a little on this result: the Kronecker δ has gone over in the limit into a continuous function called the Dirac δ function. Strictly speaking, it is not a function in the mathematical sense at all; nonetheless, it has been put on a firm mathematical basis by Lighthill. For our purposes, the δ function can be thought of as the limiting form of a narrow symmetrical pulse located at t = t′ whose width goes to zero and whose height goes to infinity in such a way that its area stays constant and normalized to unity:

∫ δ(t − t′) dt = 1.
Figure 1 shows an example of this limiting concept, along with the development of the sampling property of the δ function:

∫ f(t) δ(t − t₀) dt = f(t₀),

where f(t) is any continuous function of time and f(t₀) is its sampled value at t = t₀. This sampling property of the δ function provides an important connection between continuous-time theory and discrete-time theory. In addition, the orthogonality of complex sinusoids expressed by Equation 4 clearly possesses a companion relation, obtained simply by relabeling variables, as follows:

∫ e^{iω′t} e^{−iωt} dt = 2π δ(ω − ω′).    (5)
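The narrow-pulse picture of the sampling property is easy to reproduce numerically. A sketch (Python/NumPy assumed): replace the δ function by a unit-area Gaussian of width eps and watch the integral of f(t)·δ(t − t₀) approach f(t₀) as eps shrinks.

```python
import numpy as np

def delta_pulse(t, t0, eps):
    """Unit-area Gaussian of width eps centred at t0; a delta as eps -> 0."""
    return np.exp(-0.5 * ((t - t0) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]
f = np.cos(t)                      # any continuous test function
t0 = 1.5

# Sampling property: integral of f(t) * delta(t - t0) dt -> f(t0).
for eps in (0.5, 0.1, 0.02):
    val = np.sum(f * delta_pulse(t, t0, eps)) * dt    # Riemann sum
    print(eps, val)                # approaches cos(1.5) as eps shrinks

sampled = np.sum(f * delta_pulse(t, t0, 0.02)) * dt
print(np.isclose(sampled, np.cos(t0), atol=1e-3))     # True
```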
Now we can return to find the inverse of Equation 3 by using Equation 5. Multiplying both sides of Equation 3 by e^{−iω′t} and integrating over time gives

∫ f(t) e^{−iω′t} dt = (1/2π) ∫ [∫ F(ω) e^{iωt} dω] e^{−iω′t} dt,

which is equal to

(1/2π) ∫ F(ω) [∫ e^{i(ω − ω′)t} dt] dω.

The last integral on the right side yields the δ function 2π δ(ω − ω′) from Equation 5. Thus, we get

F(ω′) = ∫ f(t) e^{−iω′t} dt,

which is the desired relation giving F in terms of f. For reference, we rewrite this result and Equation 3 as a transform pair:

F(ω) = ∫ f(t) e^{−iωt} dt    (6A)

f(t) = (1/2π) ∫ F(ω) e^{iωt} dω    (6B)
This pair of equations affords a continuous-time and continuous-frequency Fourier transformation, collectively called simply the Fourier transform. Sometimes one is more specific and calls equation 6A the forward Fourier transform of f, and equation 6B the inverse Fourier transform of F. Note the logical resemblance of these equations to the DFT. Again, as in the DFT case, there is an obvious duality of the Fourier transform that results from an interchange of time and frequency by a simple relabeling, or redefinition, of variables. One consequence of this duality is the lack of a standard definition for the Fourier transform: sometimes the forward and inverse versions of the transform in equations 6 are interchanged. Different placements of the factor of 2π provide further possibilities for definitions of the Fourier transform.
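The pair 6A/6B can be spot-checked numerically on a function whose transform is known in closed form. A sketch (Python/NumPy assumed; the Gaussian test function and the Riemann-sum quadrature are my choices): the forward transform of exp(−t²/2) should be √(2π)·exp(−ω²/2).

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 40001)
dt = t[1] - t[0]
f = np.exp(-t ** 2 / 2)            # Gaussian test function

def forward_ft(w):
    """Equation 6A, F(w) = integral f(t) e^{-iwt} dt, as a Riemann sum."""
    return np.sum(f * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, 2.5):
    exact = np.sqrt(2 * np.pi) * np.exp(-w ** 2 / 2)
    print(w, forward_ft(w).real, exact)   # numeric and closed form agree
```

The Gaussian decays so fast that truncating the integral at ±20 and summing on a fine grid gives essentially machine-precision agreement.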
As we shall see, the similarities between the Fourier transform and the DFT will allow us to exploit the computational advantages of the DFT. But, the differences between the Fourier transform and the DFT, though perhaps few in number, are profound in character. We have approached the Fourier transform from a desire to represent both time and frequency as continuous variables. The resulting transformation equations contain integrals over all values of these variables from minus infinity to plus infinity. This property is a double-edged sword. On the one hand, it does let us represent signals that the DFT does not allow, such as a one-sided transient that decays infinitely far into the future. But, on the other hand, to exactly compute the frequency response of such a signal using a numerical scheme, we would need a continuously infinite number of data points. We will see how to deal with, but not completely solve, this problem later. Another concern, which is nonexistent for the DFT, arises because of the Fourier transform’s integrals; we need to know something of their convergence properties.
The convergence of Fourier integrals is a fascinating subject of Fourier theory, explored by famous mathematicians such as Plancherel, Titchmarsh, and Wiener. Various conditions have been found that prove the convergence of Fourier integrals for functions displaying rather strange behaviour compared to our view of naturally occurring signals. Because our interest is limited to realistic signals and systems, we can afford to start our discussion with an over-restrictive (sufficient but not necessary) convergence condition: that f(t) be absolutely integrable over the open interval (−∞, ∞), that is,

∫ |f(t)| dt < ∞.
Under this condition, the repeated integral

(1/2π) ∫ [∫ f(τ) e^{−iωτ} dτ] e^{iωt} dω,

called the Fourier integral representation of f, converges to the average value of f at a discontinuity. That is, it converges to

½ [f(t₀⁺) + f(t₀⁻)]

when there is a discontinuity at t = t₀.
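This midpoint behaviour can be seen numerically. A sketch (Python/NumPy assumed; the boxcar test signal and the quadrature scheme are my choices): the boxcar f(t) = 1 for |t| < 1 has transform 2 sin(ω)/ω; truncating the inverse integral at |ω| < W and evaluating it at the jump t = 1 gives values tending to the average (0 + 1)/2 = 0.5.

```python
import numpy as np

def partial_inverse_at_jump(W, n=200001):
    """Truncated Fourier integral representation of the boxcar,
    evaluated at the discontinuity t = 1, for cutoff frequency W."""
    w = np.linspace(1e-12, W, n)       # skip the removable point w = 0
    dw = w[1] - w[0]
    # real part of (1/2pi) * int_{-W}^{W} [2 sin(w)/w] e^{iwt} dw at t = 1
    return (2 / np.pi) * np.sum(np.sin(w) * np.cos(w) / w) * dw

for W in (10, 100, 1000):
    print(W, partial_inverse_at_jump(W))   # tends towards 0.5
```

The convergence is slow and oscillatory (this is the Gibbs phenomenon at work), but the limit is unmistakably the average of the two one-sided values.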
Some functions, such as step functions, impulses, and sinusoids, never really occur in nature; nonetheless, they are very convenient for thinking about signals and systems. The absolute-integrability condition immediately disqualifies many of these favourite functions; clearly, any periodic function, including the sinusoids themselves, is excluded from the functions possessing Fourier transform pairs if we accept this condition. However, a sufficiently rich class of functions possessing Fourier integral transforms will result if we allow the Dirac δ function to be included. Lighthill has shown with mathematical rigour how to include δ functions in Fourier integral theory. We simply note that Equation 5 is, indeed, the Fourier transform of a complex sinusoid. This equation shows that the spectrum of e^{iω′t} is eminently reasonable; it contains exactly one pure frequency, at ω′.
Furthermore, after our discussion of the convolution theorem in the next section/blog, we will show how Wiener was able to include signals such as periodic functions and random noise in frequency analysis. Even though these signals do not possess a Fourier integral transform, they may have a power spectrum.