## Interpolation, Decimation and Multiplexing

In DSP, we frequently need to change the sampling rate of existing data. Increasing the number of samples per unit time, sometimes called upsampling, amounts to interpolation. Decreasing the number of samples per unit time, sometimes called downsampling, is decimation of the data. (The original meaning of the word decimation comes from losing one-tenth of an army through battle or from self-punishment; we apply it to data using various reduction ratios.) Of course, interpolation and decimation can occur in frequency as well as time.

In fact, we have already encountered frequency domain interpolation; zero padding in time followed by the DFT interpolates the hidden sinc functions in the DFT spectrum. We can also do the opposite: zero padding in the frequency domain produces an interpolated time function. We will now investigate this type of upsampling, applied to interpolation of time domain data, in a little greater detail.

Consider the discrete data stream shown in Fig 1a along with its continuous spectrum. As we now realize, this DFT spectrum has different possible interpretations, depending on our data model. For purposes of discussion, let us say that this data results from sampling a band-limited (or nearly band-limited) continuous signal. Then, in the limit of a very long data window, sampled at a sufficiently high rate, no leakage or aliasing occurs. Time domain interpolation will correctly recover the original analog signal if it does not alter the spectrum in Fig 1a. The periodicity induced into the spectrum by the data sampling process can be eliminated by extracting just one replica. This extraction, accomplished by frequency domain multiplication with the boxcar shown on the right side of Fig 1b, convolves the discrete time domain data with a continuous sinc function, reproducing the original analog signal. It thus seems evident that a truly band-limited signal can be recovered completely from its sampled version, provided that the sampling rate is sufficiently high and that the sample is sufficiently long. The statement is commonly made that a band-limited analog signal can be uniquely recovered from its sampled version provided that it is sampled at a rate greater than twice the highest frequency contained in its spectrum; this statement is called the Sampling Theorem. Several aspects of this theorem have been proved in mathematical detail in many reference texts. However, from our previous discussions in these blogs, any such band-limited signal must be infinitely long, making the exact determination of its spectrum impossible in the first place.
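Written out explicitly (this is the standard textbook form, with sampling interval $T$; it is not quoted from the figures), the reconstruction promised by the Sampling Theorem is

$$
x(t) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\,
\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad
\operatorname{sinc}(u) \;=\; \frac{\sin \pi u}{\pi u}
$$

Each sample contributes one shifted sinc, and at every sample instant $t = mT$ all the sincs but one vanish, so the reconstruction passes exactly through the data.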

Thus, in practice, we must always be content with an approximate reconstruction of the original analog signal. Preferring a digital scheme for this reconstruction, we convolve the boxcar spectral window of Fig 1b with the sampling function shown in Fig 1c. The result tells us how to exploit the DFT for the recovery of the analog signal: use zero padding in the frequency domain. In our example, we use $2:1$ zero padding, which produces the midpoint interpolation operator shown in Fig 1d. The result of this operator acting on the original data in Fig 1a is shown in Fig 1e. In the frequency domain, one simply appends zeros to the DFT spectrum. It is interesting to note that during the convolution the time domain sinc operator appropriately has its zeros aligned with the unknown midpoints, except at the point currently being interpolated; every interpolated point is a linear combination of all of the original points, weighted by the sinc function; see Fig 1f. This interpolation, sometimes called sinc interpolation, can only be carried out approximately because the sinc function must be truncated somewhere. In the frequency domain, truncating the sinc manifests itself as a convolution of the ideal low pass filter of Fig 1d with a narrow sinc arising from the truncation of the interpolating sinc operator. As a result, the final upsampled data has the same spectrum as the original data only to some approximation.
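A minimal sketch of this frequency domain zero padding, using NumPy's FFT routines (the function name and the $2:1$ factor are illustrative choices, not from the figures):

```python
import numpy as np

def upsample_fft(x, factor=2):
    """Interpolate x by zero padding its DFT spectrum (sinc interpolation)."""
    N = len(x)
    X = np.fft.fft(x)
    M = factor * N
    Y = np.zeros(M, dtype=complex)
    h = N // 2
    Y[:h] = X[:h]             # positive-frequency half of the spectrum
    Y[M - (N - h):] = X[h:]   # negative-frequency half, moved to the end
    # scale by the factor so the sample amplitudes are preserved
    return np.fft.ifft(Y).real * factor

# a signal band-limited well below Nyquist interpolates exactly:
n = np.arange(16)
x = np.cos(2 * np.pi * 3 * n / 16)
y = upsample_fft(x)          # 32 points; originals reappear at even indices
```

Note that the original samples are untouched (`y[::2]` equals `x`); only the midpoints are newly computed, exactly as the midpoint interpolation operator of Fig 1d suggests.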

Decimation, or downsampling, is the reverse of sinc interpolation. To decimate $2:1$ with no loss of information from the original data, the data must be oversampled to begin with. Fig 2a shows data that is oversampled by about $2:1$, so that its spectrum has very little energy in the upper half of the Nyquist interval. As is usually done, we low pass filter in preparation for decimation. In our example of Fig 2b, the upper half of the Nyquist interval has been filtered out with an appropriate filter. The $2:1$ decimation operation then simply consists of extracting every other sample in the time domain. This operation can be viewed as multiplication in time, and convolution in frequency, with the sampling function shown in Fig 2c. The decimated signal, in Fig 2d, now has a new sampling rate and Nyquist frequency; its spectrum just fills in to meet the new Nyquist criterion. The lowpass filtering has ensured that no aliasing occurs in the decimated data.
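The two-step recipe (lowpass, then keep every other sample) can be sketched as follows; for brevity this uses an ideal brick-wall filter applied in the DFT domain, which is an illustrative stand-in for the "appropriate filter" of Fig 2b:

```python
import numpy as np

def decimate2(x):
    """2:1 decimation: half-band lowpass, then extract every other sample."""
    N = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(N)           # normalized frequencies in (-0.5, 0.5]
    X[np.abs(f) >= 0.25] = 0.0      # zero the upper half of the Nyquist interval
    lowpassed = np.fft.ifft(X).real
    return lowpassed[::2]           # new sampling rate, new Nyquist frequency

# a low tone survives; a tone in the upper half-band is removed, not aliased
n = np.arange(32)
x = np.cos(2 * np.pi * 2 * n / 32) + np.cos(2 * np.pi * 12 * n / 32)
y = decimate2(x)
```

Without the lowpass step, the bin-12 component would alias into the decimated spectrum; with it, `y` is simply the low tone resampled at half the rate.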

The next two examples of manipulating data and their spectra employ combinations of filtering, sampling, interpolation and decimation. Consider the spectrum shown in Fig 3a, which is divided into four separate bands. Each of these bands contains information that we wish to separate from the original spectrum. In one important case in communications applications, each frequency band contains an independent information channel. The modulation theorem, expressed in continuous form, shows that if we modulate a given channel with a sinusoid of frequency $\omega_{0}$, the spectrum is translated by $\pm\omega_{0}$. Thus, each of the four frequency bands of Fig 3a could represent separate channels formed by frequency division multiplexing (FDM), using an appropriate carrier frequency, $\omega_{0}$, $2\omega_{0}$, $3\omega_{0}$ and $4\omega_{0}$, for each band. Analog versions of FDM have been used extensively for years in communications applications such as AM radio, stereo broadcasting, television and radiotelemetry. Digital FDM is similar, except that the spectrum is repetitive.
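The modulation theorem is easy to check numerically. In this small sketch (the carrier at bin 64 and the baseband tone at bin 4 are arbitrary illustrative choices), multiplying by the carrier translates the baseband line to the sum and difference frequencies:

```python
import numpy as np

N = 256
n = np.arange(N)
baseband = np.cos(2 * np.pi * 4 * n / N)    # channel content at bin 4
carrier = np.cos(2 * np.pi * 64 * n / N)    # carrier frequency at bin 64
modulated = baseband * carrier              # multiplication in time ...

# ... is translation in frequency: energy moves to bins 64 - 4 and 64 + 4
X = np.abs(np.fft.fft(modulated))
peaks = np.sort(np.argsort(X[:N // 2])[-2:])   # the two strongest bins
```

With four carriers at $\omega_{0}, 2\omega_{0}, 3\omega_{0}, 4\omega_{0}$, four such channels would stack side by side across the Nyquist interval as in Fig 3a.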

Recovering a given channel, called demodulation or demultiplexing, is accomplished by first isolating the selected channel with a bandpass filter and then decimating the result. Fig 3 shows channel three demultiplexed by filtering followed by a $4:1$ decimation. The resulting digital data has a new sampling rate that meets the Nyquist criterion. If the original channels are well-sampled, gaps, called guard bands, occur between the spectral bands of Fig 3a.
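A sketch of this bandpass-then-decimate recovery, again using an ideal DFT-domain bandpass as a stand-in for a practical filter (the function name and the zero-based band numbering are illustrative assumptions):

```python
import numpy as np

def demux(x, band, num_bands=4):
    """Recover one FDM channel: ideal bandpass, then num_bands:1 decimation."""
    N = len(x)
    X = np.fft.fft(x)
    f = np.abs(np.fft.fftfreq(N))       # |normalized frequency|, 0 to 0.5
    lo = band * 0.5 / num_bands         # edges of the selected band
    hi = (band + 1) * 0.5 / num_bands
    X[(f < lo) | (f >= hi)] = 0.0       # bandpass: keep only the chosen band
    filtered = np.fft.ifft(X).real
    return filtered[::num_bands]        # decimation aliases the band to baseband

# a tone in the third band (band index 2, |f| in [0.25, 0.375)) of 64 points
n = np.arange(64)
x = np.cos(2 * np.pi * 18 * n / 64)
y = demux(x, band=2)                    # 16 points at the new sampling rate
```

The decimation deliberately aliases the isolated band down into the new, smaller Nyquist interval; because the bandpass filter removed everything else, no information is corrupted.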

Another application of isolating a given frequency band in this fashion occurs when we simply desire to pick off a given portion of the spectrum of a signal for more detailed examination and processing. In this case, the original spectrum of Fig 3a belongs to just one digital signal, and the bands are portions of the spectrum of special interest. In our example then, band three has been selected for closer examination. The process has given us time domain data that require only one-fourth of the original samples, an important savings in some applications where further processing on the spectrum is desired, such as in spectral estimation. When used in this fashion, this procedure is called zoom processing because it zooms in on the spectrum of interest.

For our second example of multiplexing, we address a situation that is complementary to FDM. In FDM, the information channels are mixed in a complicated way in the time domain because of the modulating sinusoids, but the channels are quite separate in the frequency domain. The reverse situation has the channels easily separated in time, but mixed in frequency. For our example, we consider only two different digital information channels. An obvious way to combine them in time is to interlace the samples, with every other sample belonging to a given channel; this scheme is called time division multiplexing (TDM). Multiplexing and demultiplexing in the time domain are then a simple matter of taking every other sample. However, let us explore the frequency behaviour of this process.

In Fig 4a, we show one of the two data channels, called channel A. It is oversampled by $2:1$ so that its spectrum occupies only one-half of the Nyquist interval. Decimation using the sampling function of Fig 4b yields the result shown in Fig 4c. But, instead of redefining the sampling rate as in normal decimation, we put a twist into the processing by interpreting the result of Fig 4c as having the same sampling rate as the original data. Thus, the time domain data has zeros at every other point. This zero interlacing produces a spectrum that is folded at one-half the Nyquist frequency, as shown. To conserve energy under this interpretation, the spectrum must be renormalized to one-half the original values. The other channel, channel B, is similarly oversampled by $2:1$ and then decimated by the shifted sampling function shown in Fig 4d. Again, its spectral amplitudes are reduced by a factor of one-half as a consequence of the zero interlacing. Finally, the TDM is completed by adding the results of the two channels: Figures 4c and 4e sum to Fig 4f. As anticipated in TDM, while the time data are easily separated, the frequency data are mixed. Even so, note that the Nyquist interval is now filled with nonredundant information that can be used to separate the spectra of the two channels, since $A+B$ and $A-B$ are linearly independent. Clearly, TDM demultiplexing could be done in either domain.
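The whole TDM round trip can be sketched in a few lines. The two tones standing in for channels A and B are illustrative; both are oversampled $2:1$ so their spectra sit in the lower half of the Nyquist interval, as in Fig 4a:

```python
import numpy as np

N = 64
n = np.arange(N)
a = np.cos(2 * np.pi * 3 * n / N)   # channel A, oversampled 2:1
b = np.cos(2 * np.pi * 5 * n / N)   # channel B, oversampled 2:1

# zero-interlace each channel at the ORIGINAL rate, then sum: that is TDM
az = np.zeros(N); az[0::2] = a[0::2]   # Fig 4c: A with zeros interlaced
bz = np.zeros(N); bz[1::2] = b[1::2]   # Fig 4e: B on the shifted sample grid
tdm = az + bz                          # Fig 4f

# the folded spectrum: channel A's line at bin 3 acquires a half-amplitude
# replica at bin 3 + N/2 = 35 (channel B folds the same way)
Az = np.abs(np.fft.fft(az))

# demultiplexing in the time domain is trivial
a_rec, b_rec = tdm[0::2], tdm[1::2]
```

The spectrum check makes the renormalization explicit: each spectral line of the zero-interlaced channel appears twice, at half the original amplitude, which is exactly the folding at one-half the Nyquist frequency described above.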