
Digital Signal Processing

João Marques de Carvalho, Edmar Candeia Gurjão,
Luciana Ribeiro Veloso, and Carlos Danilo Miranda Regis

MOMENTUM PRESS, LLC, NEW YORK


Digital Signal Processing

Copyright © Momentum Press®, LLC, 2019.

All rights reserved. No part of this publication may be reproduced, stored


in a retrieval system, or transmitted in any form or by any means—
electronic, mechanical, photocopy, recording, or any other—except for
brief quotations, not to exceed 400 words, without the prior permission
of the publisher.

First published by Momentum Press®, LLC


222 East 46th Street, New York, NY 10017
www.momentumpress.net

ISBN-13: 978-1-94708-390-5 (print)


ISBN-13: 978-1-94708-391-2 (e-book)

Momentum Press Communications and Signal Processing Collection

Collection ISSN: 2377-4223 (print)


Collection ISSN: 2377-4231 (electronic)

Cover and interior design by Exeter Premedia Services Private Ltd.,
Chennai, India

10 9 8 7 6 5 4 3 2 1

Printed in the United States of America


Abstract

In this book, we present the fundamentals of digital signal processing


(DSP) in a format which is both concise and accessible to anyone with an
engineering or sciences background. Without sacrificing the fundamental
theory and concepts, we provide the reader with knowledge that prepares
him or her for further reading or for using available DSP tools.
The initial chapters provide the background for proper understanding
of DSP techniques. We start by presenting discrete-time (digital) signals
and systems, with emphasis on the class of linear shift invariant (LSI)
systems, as those are used to model most systems of practical interest.
Next, we introduce the Fourier and the z transforms, which are
essential signal and system analysis tools, followed by signal sampling
and analog-to-digital (A/D) conversion.
The discrete Fourier transform (DFT) and its computationally effi-
cient form, the fast Fourier transform (FFT), are examined next. FFT
algorithms have been widely used in different DSP applications, either in
software or in hardware implementation, for real-time signal processing in
the frequency domain at very high sampling rates.
The largest chapter of this book is dedicated to digital filters, which
are an essential component of most DSP systems. We analyze and com-
pare the two main filter classes: infinite impulse response (IIR) and finite
impulse response (FIR) and the structures used for their implementation.
We also describe how to specify a digital filter and the main design tech-
niques for both IIR and FIR filters. Examples are presented throughout
the chapter.
In the final chapter, we present examples of applying DSP techniques
to practical problems from several areas, including time and
frequency analysis of signals, biomedical signal processing, audio
processing, and digital communications.

Keywords

A/D conversion; digital signal processing; digital filters; fast Fourier
transform; Fourier transform; signal sampling
Contents

List of Figures ix
List of Tables xv
Preface xvii
Acknowledgments xxi
1  Discrete-Time Signals and Systems 1
1.1 Introduction 1
1.2  Properties of Discrete-Time Signals 2
1.3  The Unit Step and Unit Impulse Signals 8
1.4 Complex Exponential and Sinusoidal Signals 10
1.5  Discrete-Time Systems 13
1.6  Linear Shift-Invariant Systems 19
1.7  Chapter Overview 22
2  Discrete-Time Signal Transforms 23
2.1 Introduction 23
2.2 Signals as Linear Combinations of Complex Exponentials 24
2.3  Fourier Series for Periodic Signals 25
2.4  The Fourier Transform 26
2.5 The z Transform 37
2.6  Chapter Overview 53
3  Sampling and Analog to Digital Conversion 55
3.1 Introduction 55
3.2  Signal Sampling and Reconstruction 55
3.3  Analog-to-Digital Conversion 59
3.4  Chapter Overview 65
4  Discrete Fourier Transform and Fast Fourier
Transform 65
4.1 Introduction 65
4.2  Discrete Fourier Transform 66

4.3  Circular Convolution 68


4.4  Overlap-Add Method 70
4.5 Windowing 73
4.6  The Fast Fourier Transform 73
4.7  Chapter Overview 78
5  Digital Filters 79
5.1 Introduction 79
5.2  FIR and IIR Filters 80
5.3  Magnitude and Phase of Digital Filters 84
5.4  Comparing FIR and IIR Filters 88
5.5  Specifying a Digital Filter 89
5.6 IIR Filter Design with the Bilinear Transformation 91
5.7  Designing FIR Filters 103
5.8  Chapter Overview 119
6  Applications 121
6.1 Introduction 121
6.2  Spectrum Peak Detection 121
6.3  Signal Envelope Detection 122
6.4  Processing Electromyographic Signals 123
6.5  Generating Audio Effects 124
6.6  Removing Harmonic Interference 127
6.7  Digital Down Converter 129
6.8 Decimation in Software-Defined Radios 130
6.9  Polynomial Multiplication 131
6.10 Fourier Analysis of Signals with Time-Varying Spectral Components 132
6.11  Chapter Overview 133
Recommended Readings 137
About the Author 139
Index 141
List of Figures

Figure 1.1. (a) Continuous-time signal s(t), (b) discrete-time signal sd(n) = s(nT). 2
Figure 1.2. Discrete-time periodic signal (segment) with period N0. 3
Figure 1.3. (a) Signal x(n), (b) xe(n), even component of x(n), (c) xo(n), odd component of x(n). 5
Figure 1.4. Time shifting: (a) original signal v(n), (b) delayed signal g(n) = v(n−2). 6
Figure 1.5. Time reversal: (a) original signal g(n), (b) reversed signal g(−n). 7
Figure 1.6. Time scaling: (a) original signal x(n) = sin(πn/6), (b) compressed signal g(n), (c) expanded signal v(n). 8
Figure 1.7. (a) Unit step signal u(n), (b) unit impulse signal δ(n). 9
Figure 1.8. Signal x(n) for Example 1.5. 9
Figure 1.9. Signal x(n) for Example 1.6. 10
Figure 1.10. One period of sinusoidal signal x(n) = sin(πn/6). 11
Figure 1.11. Non-periodic cosine x(n) = cos(2n/3) for Example 1.7. 12
Figure 1.12. Representation of a discrete-time system. 14
Figure 1.13. Accumulator of Example 1.8: (a) input signal x(n) = u(n), (b) output signal y(n). 14
Figure 1.14. LSI system represented by its impulse response h(n). 19
Figure 1.15. Equivalences among serial connections of LSI systems. 21
Figure 1.16. (a) Parallel connection of two LSI systems, (b) equivalent single system. 21
Figure 2.1. Example 2.2 for N1 = 2 and N = 20: (a) periodic impulse train, (b) Fourier series coefficients. 26
Figure 2.2. Example 2.3: (a) magnitude and (b) phase of X(e^{jω}). 28

Figure 2.3. Example 2.4: Magnitude response of the average filter. 35


Figure 2.4. The z plane and the unit circle. 37
Figure 2.5. ROC representation: R− < |z| = r < R+. 38
Figure 2.6. ROC and pole-zero plot for Example 2.6. 40
Figure 2.7. ROC and pole-zero plot for Example 2.7. 41
Figure 2.8. ROCs for Example 2.8: (a) X1(z): |z| > 1/3, (b) X2(z): |z| < 1/2, (c) X(z): 1/3 < |z| < 1/2. 45
Figure 3.1. Block diagram of analog to digital conversion. 55
Figure 3.2. (a) Continuous-time signal x(t) and (b) frequency spectrum of x(t). 56
Figure 3.3. (a) Analog signal x(t), (b) impulse train s(t), and (c) product xs(t) = x(t)s(t). 56
Figure 3.4. Frequency spectrum of the impulse train s(t). 57
Figure 3.5. Spectrum of the sampled signal xs(t) with Ωs > 2Ωx. 57
Figure 3.6. Signal reconstruction in the frequency domain: (a) Xs(jΩ), (b) HLP(jΩ), and (c) Xr(jΩ). 58
Figure 3.7. (a) Sampled signal xs(t) and (b) recovered signal xr(t). 59
Figure 3.8. Quantization levels for uniform quantization. 60
Figure 3.9. Quantization levels for non-uniform quantization. 61
Figure 3.10. Block diagram for decimation. 63
Figure 3.11. Discrete-time interpolation: (a) original x(n), (b) interpolated xi(n), using L = T/Ti = 4. 63
Figure 3.12. Block diagram for interpolation. 64
Figure 4.1. Magnitude of the Fourier transform of signal x(n) in Example 4.1. 66
Figure 4.2. N = 8 samples taken from X(e^{jω}) (dashed lines) of Example 4.1, producing X(k). 67
Figure 4.3. Signal x(n) obtained with zero padding for N = 16. 67
Figure 4.4. Magnitude of the DFT for Example 4.1 calculated with N = 16. 68
Figure 4.5. Example 4.2: (a) Signal x(n) with L = 4, (b) signal h(n) with M = 4, and (c) result of circular convolution between x(n) and h(n). 69
Figure 4.6. Linear convolution obtained with N = M + L − 1 in the circular convolution for Example 4.3. 69

Figure 4.7. Application of the overlap-add method to a signal x(n) to calculate linear convolution between x(n) and h(n). 71
Figure 4.8. N = 16-point DFT of x(n) = cos(πn/2). 72
Figure 4.9. N = 16-point DFT of x(n) = cos(11πn/20). 72
Figure 4.10. (a) Discrete Hanning window, (b) DFT of the discrete Hanning window. 74
Figure 4.11. Magnitude of the N = 16-point DFT of x(n) = cos(11πn/20) obtained with a Hanning window. 74
Figure 4.12. Butterfly structure representation. 76
Figure 4.13. Butterfly for N/2. 76
Figure 4.14. Structure composed of butterflies for N = 8. 77
Figure 5.1. Frequency response of ideal filters: (a) lowpass and
(b) highpass. 80
Figure 5.2. Effects of ideal filtering: (a) spectrum of a hypothetical
­signal, (b) lowpass filtered spectrum, and (c) highpass
filtered spectrum. 80
Figure 5.3. Tapped delay line implementation of an FIR filter. 82
Figure 5.4. Canonic implementation of a recursive (IIR) filter of
order N.82
Figure 5.5. Flow graph for the FIR structure of Figure 5.3. 83
Figure 5.6. Flow graph for the IIR structure of Figure 5.4. 83
Figure 5.7. Flow graph for an IIR second-order section. 83
Figure 5.8. Illustration of filtering on the magnitude and phase
of a ­signal. 85
Figure 5.9. Frequency response for an ideal lowpass filter. 87
Figure 5.10. Example of frequency response for a real lowpass filter. 87
Figure 5.11. Example of lowpass filter specification. 90
Figure 5.12. Non-linear relation between ω and Ω for bilinear transformation. 93
Figure 5.13. Mapping of Ha(e^{jω}) into H(e^{jω}) with bilinear transformation. 94
Figure 5.14. Flow graph for an IIR filter from Example 5.1. 95
Figure 5.15. Magnitude and phase responses of IIR Butterworth lowpass filter of Example 5.2. 97

Figure 5.16. Magnitude and phase responses of IIR Chebyshev


­lowpass filter of Example 5.3. 99
Figure 5.17. Magnitude and phase responses of IIR Elliptic
lowpass filter of Example 5.4. 100
Figure 5.18. Magnitude and phase responses of H(z) for
Example 5.5. 103
Figure 5.19. Rectangular window: w(n) and |W(e^{jω})|. 107

Figure 5.20. Windowing in the time domain. 108


Figure 5.21. Windowing in the frequency domain. 109
Figure 5.22. Commonly used windows for an FIR filter design. 111
Figure 5.23. Impulse and magnitude responses for Example 5.6
with Hanning window and N = 124. 112
Figure 5.24. Magnitude responses for Example 1.7 (a) rectangular
with N = 36, (b) Bartlett with N = 122, (d) Hamming
with N = 132, and (e) Blackman window with N = 220. 112
Figure 5.25. Impulse and magnitude responses for Example 5.7. 114
Figure 5.26. Frequency sampling design: (a) sampling Hd(e^{jω}), (b) interpolating between samples to obtain H(e^{jω}). 116
Figure 5.27. |Hd(k)| for Example 5.8. 117
Figure 5.28. Impulse and magnitude responses for Example 5.8. 118
Figure 6.1. Points for FFT interpolation to obtain the peak adjustment factor d. 122
Figure 6.2. AM-DSB signal x(n) and its interpolated envelope signal e(n). 124
Figure 6.3. Electromyographic signal representing three biceps
­contractions: (a) signal, (b) signal filtered with
Butterworth filter, and (c) signal filtered with
Hamming filter. 125
Figure 6.4. Universal filter block diagram for delay generation. 126
Figure 6.5. Block diagram for Schroeder reverb generation. 127
Figure 6.6. Frequency response (magnitude) of a comb filter for
D = 6. 128
Figure 6.7. Magnitude of 256 points FFT for a 650 Hz
signal plus 256 Hz and 512 Hz interference. 128
Figure 6.8. 256 points FFT magnitude of the signal filtered by a
comb filter with D = 10 to remove interference. 129

Figure 6.9. Block diagram of a DDC. 130


Figure 6.10. STFT: (a) signal, (b), (c), (d), and (e) windowed segments. 133
Figure 6.11. (a) 1,024 points FFT absolute value and (b) STFT of
the signal in Figure 6.10 (a). 134
List of Tables

Table 2.1.  Basic Fourier transform pairs 28


Table 2.2.  Properties of the Fourier transform 33
Table 2.3.  Some commonly found signals and their z transforms 44
Table 2.4.  Main z transform properties 49
Table 5.1. Transformations to map a prototype low-pass filter into
other filter types 102
Table 5.2.  Impulse responses for linear-phase FIR filters 105
Table 5.3. Magnitude response |Hd(e^{jω})| and impulse response hd(n) of standard frequency-selective filters with cutoff frequency ωc and sample delay α. 106
Table 5.4.  Commonly used windows for an FIR filter design 110
Table 5.5.  Windows features for an FIR filter design 110
Table 6.1. Configurations of a universal filter for the example of Figure 6.4. 127
Preface

Motivation for this book comes from the assessment that technological
knowledge transmission should not be restricted to the traditional sequence
of courses present in most undergraduate programs. Those courses have
had, and will maintain in the foreseeable future, a vital role in the formation
of qualified professionals. Nevertheless, there is an ever-growing need for
texts written in a concise (although not less accurate) format to teach in a
few hours the basic theory and essential techniques of a subject, as well as
to provide understanding of some related applications. This book aims to
fill this gap, being directed to people who need to acquire the fundamen-
tals of digital signal processing (DSP) in a short time, during a three-hour
flight, for example.
DSP works with digital representations of signals, modifying those
in order to improve their analysis, transmission, and reception. Starting in
the 1960s, the use of DSP has been growing continuously, because of the
development of powerful and efficient methods, particularly filter design
techniques and fast Fourier transform (FFT) algorithms, opening several
application areas. The advancement of integrated circuits technology
­further contributes to the popularization of DSP, allowing for high-speed
implementations of complex processes.
In this book, we present the fundamentals of DSP in a concise format
accessible to anyone with an adequate mathematical background. We do
not assume a previous course on signals and systems, as is usually the
case with undergraduate textbooks. Knowledge of differential and integral
calculus and of finite and infinite sequences and series, as well as complex
numbers, is the basis needed to read and understand the present text.
Our book should be of interest to anyone with an engineering or
sciences background who must attend a technical meeting on short notice
or who needs to use a DSP software tool, but does not have enough
time to learn all the theory behind the involved methods. As an example,
DSP textbooks typically describe in detail optimization techniques for
selecting equiripple FIR filter parameters or for minimizing the mean-square
error of IIR filter frequency responses. Those details were originally
justified because the filter designer often had to implement those optimization
algorithms. However, many software libraries and toolboxes are currently
available with filter design and implementation modules, requiring only a
set of specifications from the user. Thus, the designer can focus on deter-
mining the best set of filter specifications for the problem at hand, instead
of spending time to learn all involved methods. In addition, this book can
also be used as a textbook for an advanced undergraduate or graduate
course in DSP with emphasis on problem solving. This type of course usu-
ally starts with the analysis of an application problem, from which theory
is introduced only as required to progress toward a solution.
We start the book in Chapter 1 looking at the fundamentals of dis-
crete-time signals and systems. Signals properties are presented, and the
unit step, unit impulse, and complex exponential signals are defined and
analyzed. After describing the basic properties of discrete-time systems,
we focus on linear shift invariant systems, as those are used to model most
systems of practical interest. The discrete convolution operation is defined
and its computation analyzed.
Chapter 2 is dedicated to the discrete-time Fourier transform and the z
transform. Those are the tools that allow analysis of discrete-time signals
and systems in the frequency and z domains, respectively, providing the
basis for most DSP techniques. Both transforms express signals as linear
combinations of complex exponentials, as described in the chapter. The
relationship between Fourier and z transforms, the concepts of conver-
gence and region of convergence, frequency response, and examples of
applying transforms to the analysis and implementation of discrete-time
systems are also presented in Chapter 2.
Signal sampling and analog to digital (A/D) conversion is the sub-
ject of Chapter 3. We start by analyzing uniform time sampling of analog
signals, establishing the fundamental limit of the Nyquist theorem and
defining aliasing and the need for anti-aliasing filtering. Analog signal
reconstruction from its samples is also described both in time and fre-
quency domains. Next, quantization and encoding of sampled signals are
examined in this chapter. Uniform and non-uniform quantization, binary
representation of quantized samples, and intrinsic errors in analog to dig-
ital conversion are analyzed and A/D converter parameters described.
Finally, Chapter 3 analyzes sampling and reconstruction, or sampling rate
reduction and increase, of discrete-time signals performed by the decima-
tion and interpolation operations, respectively.
In Chapter 4, we introduce the discrete Fourier transform (DFT).
While the discrete-time Fourier transform presented in Chapter 2 is
periodic and continuous in frequency, the DFT provides a finite-length
discrete frequency domain representation of a signal. We present the concept
of circular convolution, show how the DFT is applied to an infinite-length
signal, and analyze the effects of leakage and windowing on the DFT.
Despite its potential for DSP applications, the DFT was initially of
limited practical use due to its high computational complexity. The
number of complex multiplications required for a direct DFT calculation
of a size-N signal is of order N². Only with the advent of the family
of algorithms known as the fast Fourier transform (FFT) was this complexity
reduced to order N log2 N, making it feasible for implementation
in DSP systems.
The decimation in time formulation of the FFT is derived in Chapter 4,
as well as the structure known as butterfly, which implements the basic
computation for that FFT form. We show how butterflies can be combined
to calculate larger-size FFTs.
Filters are the main components of most DSP systems; thus, proper
understanding of specification, design, and synthesis of digital filters is
fundamental for a good understanding of those systems. This is the sub-
ject of Chapter 5. We start by analyzing and comparing the two main filter
classes: infinite impulse response (IIR) and finite impulse response (FIR)
and the structures used for their implementation. Applying the bilinear
transformation to design lowpass IIR filters from analog filters is examined,
followed by a description of the z-domain transforms that allow other
frequency-selective filters to be obtained from a lowpass prototype. Design
examples are provided.
Still in Chapter 5, we examine three linear-phase FIR filter design
techniques: windowing, optimal equiripple, and frequency sampling. We
describe the main features of each method and provide design examples.
Relative advantages and drawbacks are discussed, and a comparison
among them is presented.
We finally describe in Chapter 6 how DSP techniques can provide
the solution to practical problems chosen from several application areas,
which include time and frequency analysis of signals, biomedical signal
processing, audio processing, and digital communications.
Much has been written on the subjects presented in this book. Instead
of inserting reference citations along the text, we opted for providing a
set of Recommended Readings at the end. We believe that this approach
makes for a more fluid reading, in tune with the objective of this book.
By looking up the indexes of the recommended texts, readers can
further explore any of the topics covered here. Although it reflects our
preference, the list of Recommended Readings is not exhaustive by any

criteria, and many other textbooks and technical papers are available on
DSP-related subjects.
Finally, we hope that this book will be helpful to professionals and
students in need of an introduction to the area of DSP.
João Marques de Carvalho, Edmar Candeia Gurjão, Luciana Ribeiro
Veloso, and Carlos Danilo Miranda Regis
Acknowledgments

The authors express their gratitude to all those who made this book
­possible, especially to their families for the continuous support.
The authors are thankful to Professor Orlando Baiocchi from the
University of Washington, Tacoma, United States, who supported this
project from the beginning and helped with the reviewing process.
The authors also acknowledge the support of Joel Stein from
Momentum Press, who provided the documents and guidance required
during the writing process.
CHAPTER 1

Discrete-Time Signals
and Systems

1.1 Introduction

A discrete-time signal is a sequence of values usually representing the


behavior of a physical phenomenon. In electrical engineering problems,
those values are samples of a continuously varying electrical signal
taken at a uniform rate, called the sampling rate or sampling frequency. The
inverse of the sampling rate is called the sampling interval or sampling period.
Figure 1.1 shows the graphical representations of a continuous-time signal
and its discrete-time uniformly sampled version. For the latter, time is
normalized by the sampling period, becoming the index n. This can be
stated as:

sd(n) = s(nT)  (1.1)

where T is the sampling period.


In practice, continuous-time or analog signals are electrical
events (voltage or current) representing the behavior of some physical
phenomenon such as speech or temperature as a function of time. Devices
known as transducers are utilized to convert physical variations (of
pressure or temperature, for example) into voltage or electrical current
changes, thus creating an electrical signal. To be digitally processed,
electrical signals have to be sampled (time discretized), quantized, and
encoded, thus becoming a digital signal.
Therefore, a digital signal is a discrete-time signal, as represented in
Figure 1.1(b), for which the amplitude is quantized, that is, constrained
to assume values in a finite set. Quantization is usually accomplished by
rounding or truncating the amplitude sample to the nearest value in the
discrete set. It is always present in digital signal processing, as samples
must be stored in finite-length registers. Chapter 3 analyzes the sampling
and quantization processes.

Figure 1.1. (a) Continuous-time signal s(t), (b) discrete-time signal sd(n) = s(nT).
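The sampling and quantization steps just described can be sketched in a few lines of Python. The sinusoid, the sampling period, and the 3-bit uniform quantizer below are illustrative assumptions, not values from the text:

```python
import math

T = 0.01  # assumed sampling period, in seconds

def s(t):
    """Hypothetical continuous-time signal s(t)."""
    return math.sin(2 * math.pi * 5 * t)

# Sampling: sd(n) = s(nT), as in Equation 1.1
sd = [s(n * T) for n in range(-4, 5)]

# Quantization: round each sample to the nearest level of a
# uniform 3-bit quantizer covering the range [-1, 1] (8 levels).
step = 2 / (2**3 - 1)
sq = [round(x / step) * step for x in sd]

# Every quantized value lies on the finite set of levels.
assert all(abs(x / step - round(x / step)) < 1e-9 for x in sq)
```

Rounding each sample to the nearest level is the "rounding" form of quantization mentioned above; using fewer bits makes the quantization error correspondingly larger.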
In this chapter, we present the main properties of discrete-time
signals, demonstrate how those signals are affected by operations on the
independent variable, and introduce some signals that are important in
digital signal processing.

1.2  Properties of Discrete-Time Signals

Most properties of continuous-time (analog) signals are also common to


discrete-time (digital) signals. In this section, we review some properties
relevant to digital signal processing.

1.2.1 Periodicity

A discrete-time signal is periodic if there exists an integer N such that:

x(n) = x(n + N ) (1.2)

for any value of n. The integer N is called the period of the signal.
Figure 1.2. Discrete-time periodic signal (segment) with period N0.

Equation 1.2 holds for any integer multiple of N. Figure 1.2 shows a
periodic signal where N0 is the smallest value that period N can assume,
called the fundamental period. Thus, this signal is periodic for any period
N = kN0, with k an integer. Equation 1.2 thus generalizes to:

x(n) = x(n + kN0)  ∀ n, k ∈ ℤ.  (1.3)
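The periodicity condition of Equations 1.2 and 1.3 is easy to verify numerically. The repeating pattern below is a made-up example, not a signal from the text:

```python
def x(n):
    # Hypothetical periodic signal: repeats the pattern [2, 1, 0, 1]
    return [2, 1, 0, 1][n % 4]

N0 = 4  # fundamental period of this signal

# Equation 1.3: x(n) = x(n + k*N0) for all n and any integer k
assert all(x(n) == x(n + k * N0)
           for n in range(-8, 8) for k in range(-3, 4))

# No smaller positive N satisfies Equation 1.2, so N0 is fundamental.
assert all(any(x(n) != x(n + N) for n in range(8)) for N in (1, 2, 3))
```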

1.2.2  Power and Energy

The energy of a discrete-time signal x(n) is defined as:

E = Σ_{n=−∞}^{∞} |x(n)|².  (1.4)

The average power of a discrete-time signal is defined as:

P = lim_{N→∞} (1/N) Σ_{n=−N/2}^{N/2−1} |x(n)|².  (1.5)

If x(n) has finite energy (E < ∞), then P = 0, and the signal is called
an energy signal. If E is infinite and P is finite and non-null, then x(n) is
known as a power signal.
All finite duration signals with finite amplitude are energy signals. All
periodic signals are power signals, but not all power signals are necessar-
ily periodic.
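As a numerical sketch of these definitions, the fragment below computes the energy of a hypothetical finite-duration signal and approximates the average power of the periodic signal x(n) = (−1)^n by truncating the limit in Equation 1.5 at a large N:

```python
# Energy of a finite-duration signal (Equation 1.4): an energy signal.
x = [1, 2, 2, 1]          # nonzero only for 0 <= n <= 3 (illustrative)
E = sum(v**2 for v in x)  # |x(n)|^2 summed over all n
assert E == 10

# Average power of the periodic signal x(n) = (-1)^n (Equation 1.5):
# its energy is infinite, but P is finite and non-null -> power signal.
N = 1000  # large truncation length standing in for N -> infinity
P = sum(((-1)**n)**2 for n in range(-N // 2, N // 2)) / N
assert abs(P - 1.0) < 1e-9
```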

1.2.3 Even and Odd Signals

An even signal is symmetric with respect to the vertical axis. Therefore,


for an even signal, we have:

x(n) = x(− n).


4  •   Digital Signal Processing

If a signal is antisymmetric with respect to the vertical axis, that is, if:

x(n) = −x(−n)

it is called an odd signal.


Any signal x(n) can be expressed as the sum of an even component
xe(n) and an odd component xo(n) such that:

x(n) = xe (n) + xo (n).

The even and odd components of the signal can be obtained as:

xe(n) = (1/2)[x(n) + x(−n)]  (1.6)

and

xo(n) = (1/2)[x(n) − x(−n)].  (1.7)
Example 1.1. Consider the following discrete-time signal:

 1, 0 ≤ n ≤ 4
x ( n) = 
0, otherwise.

The even and odd components of this signal can be obtained, respec-
tively, from Equations 1.6 and 1.7, yielding:

1
 2 , 1 ≤| n |≤ 4

xe (n) =  1, n = 0

 0, otherwise

and

 1
 2, 1≤ n ≤ 4


xo (n) =  1
− , − 4 ≤ n ≤ −1
 2

 0, otherwise.

Signals x(n), xe(n) and xo(n) for this example are shown in Figure 1.3.
Figure 1.3. (a) Signal x(n), (b) xe(n), even component of x(n), (c) xo(n), odd component of x(n).
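The decomposition of Example 1.1 can be checked numerically with Equations 1.6 and 1.7; a minimal Python sketch:

```python
def x(n):
    # Signal from Example 1.1: 1 for 0 <= n <= 4, 0 otherwise
    return 1 if 0 <= n <= 4 else 0

def xe(n):  # even component, Equation 1.6
    return (x(n) + x(-n)) / 2

def xo(n):  # odd component, Equation 1.7
    return (x(n) - x(-n)) / 2

for n in range(-6, 7):
    assert xe(n) == xe(-n)        # even symmetry
    assert xo(n) == -xo(-n)       # odd (antisymmetric) symmetry
    assert xe(n) + xo(n) == x(n)  # the decomposition recovers x(n)

# Matches the closed-form components given in Example 1.1
assert xe(0) == 1 and xe(3) == 0.5 and xo(3) == 0.5 and xo(-3) == -0.5
```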

1.2.4 Operations on the Independent Variable

Discrete-time signals are functions of the discrete variable n. Therefore, all


operations defined for functions, such as sum and multiplication, are also
valid for signals. Of particular interest to us is how signals are affected by
operations performed on the independent variable, which we examine next.

1.2.4.1 Time Shifting

Time shifting of a signal is accomplished by replacing the independent


variable n by n − n0. If n0 is a positive integer, the signal is delayed; otherwise
(negative n0), the signal is advanced. Delaying means that the signal
is right-shifted, whereas advancing implies a left shift.

Example 1.2. Consider v(n) = (1/5)(2n² + n). Shifting v(n) by n0 = +2
means to delay it by 2:

g(n) = v(n − n0) = v(n − 2)

g(n) = (1/5)[2(n − 2)² + (n − 2)] = (1/5)(2n² − 7n + 6)
Figure 1.4. Time shifting: (a) original signal v(n), (b) delayed signal g(n) = v(n − 2).

A segment of the delayed signal g(n) = v(n − 2) is illustrated in
Figure 1.4.

1.2.4.2 Time Reversal

Time reversing a signal consists in changing the sign of the independent


variable n. Thus, the time-reversed version of a discrete-time signal x(n)
is x(−n). This reversion has the effect of reflecting the signal about the
vertical axis.

Example 1.3. Let us time-reverse the signal g(n) = (1/5)(2n² − 7n + 6) from
Example 1.2:

g(−n) = (1/5)[2(−n)² − 7(−n) + 6]

g(−n) = (1/5)(2n² + 7n + 6)

Both g(n) and g(−n) are illustrated in Figure 1.5.
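Examples 1.2 and 1.3 can be verified numerically; a short Python sketch of the shift and the reversal:

```python
def v(n):
    # Signal from Example 1.2: v(n) = (1/5)(2n^2 + n)
    return (2 * n**2 + n) / 5

def g(n):
    # Delay by n0 = 2: g(n) = v(n - 2)
    return v(n - 2)

# g(n) matches the closed form (1/5)(2n^2 - 7n + 6) from Example 1.2
assert all(abs(g(n) - (2*n**2 - 7*n + 6) / 5) < 1e-9 for n in range(-5, 6))

# The time-reversed g(-n) matches (1/5)(2n^2 + 7n + 6) from Example 1.3
assert all(abs(g(-n) - (2*n**2 + 7*n + 6) / 5) < 1e-9 for n in range(-5, 6))
```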

1.2.4.3 Time Scaling

Time scaling by an integer factor α is achieved by multiplying or dividing
the independent variable n by α, with |α| > 1. Multiplication implies
time compression, whereas division implies time expansion. Therefore,
x(αn) and x(n/α) are, respectively, time-compressed and time-expanded
versions of x(n).

Figure 1.5. Time reversal: (a) original signal g(n), (b) reversed signal g(−n).
For discrete-time signals, compression corresponds to sampling the signal at
rate α, meaning that, for each α samples, one is preserved and the others
are discarded. Inversely, expansion in discrete time means that α − 1
samples are inserted between two original ones. The values of the inserted
samples must be estimated by some interpolation technique.

Example 1.4. Consider the discrete-time sinusoidal signal x(n) = sin(πn/6)
(more on sinusoidal signals in Section 1.4). Compressing this signal by a
factor α = 2, we obtain the signal g(n) given by:

g(n) = x(2n) = sin(2πn/6) = sin(πn/3).

Expansion of x(n) by factor 2 corresponds to signal v(n):

v(n) = x(n/2) = sin((π/6)(n/2)) = sin(πn/12) for n even; 0 for n odd.

Figure 1.6 shows signals x(n), g(n), and v(n) for 0 ≤ n ≤ 12 . One full
cycle of x(n) fits in that interval, as shown in Figure 1.6(a). Due to time
compression, two cycles of g(n) fit in the same interval. However, it can be
seen in Figure 1.6(b) that some samples of x(n) are lost to produce g(n).
In this example, expansion is accomplished by inserting a zero between each
two original values of x(n). As a result, only half a cycle of v(n) fits in the
Figure 1.6. Time scaling: (a) original signal x(n) = sin(πn/6), (b) compressed signal g(n), (c) expanded signal v(n).

interval, as shown in Figure 1.6(c). Other techniques can be used to fill the
gaps between samples, such as replication of the previous value or linear
interpolation.
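The compression and expansion of Example 1.4 can be sketched as follows; the integer-division indexing used for v(n) is just one way of writing the "n even" branch:

```python
import math

def x(n):
    # Signal from Example 1.4
    return math.sin(math.pi * n / 6)

def g(n):
    # Compression by alpha = 2: keep one of every two samples
    return x(2 * n)

def v(n):
    # Expansion by 2: insert a zero between consecutive original samples
    return x(n // 2) if n % 2 == 0 else 0.0

# Compression yields sin(pi*n/3), discarding odd-indexed samples of x(n)...
assert all(abs(g(n) - math.sin(math.pi * n / 3)) < 1e-9 for n in range(13))
# ...while expansion preserves every original sample at the even indices.
assert all(abs(v(2 * n) - x(n)) < 1e-9 for n in range(13))
```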

1.3 The Unit Step and Unit Impulse Signals

The discrete-time unit step signal is called u(n) and is defined as:

u(n) = 1 for n ≥ 0; 0 for n < 0  (1.8)

as shown in Figure 1.7(a).
The discrete-time unit impulse signal, called δ(n), is shown in
Figure 1.7(b) and is defined as:

δ(n) = 1 for n = 0; 0 for n ≠ 0.  (1.9)
Figure 1.7. (a) Unit step signal u(n), (b) unit impulse signal δ(n).

Signals δ(n) and u(n) are related by the following equation:

u(n) = Σ_{k=−∞}^{n} δ(k).  (1.10)

Signals δ(n) and u(n) play a very important role in the study of
digital signals and systems. These signals can be used to express other
discrete-time signals, as shown in the following examples.
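Equation 1.10, and its converse (the impulse as the first difference of the step), can be checked with a few lines of Python; truncating the infinite sum at k = −10 is valid here because δ(k) vanishes for k < 0:

```python
def delta(n):  # unit impulse, Equation 1.9
    return 1 if n == 0 else 0

def u(n):      # unit step, Equation 1.8
    return 1 if n >= 0 else 0

# Equation 1.10: u(n) is the running sum of delta(k) up to k = n.
assert all(u(n) == sum(delta(k) for k in range(-10, n + 1))
           for n in range(-5, 6))

# Conversely, the impulse is the first difference of the step.
assert all(delta(n) == u(n) - u(n - 1) for n in range(-5, 6))
```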

Example 1.5. Consider the following signal:

 1  n −2
 , n≥2
x ( n) =  2 
 0, n<2

shown in Figure 1.8. An alternative expression for this signal can be


obtained with a shifted unit step signal as:

x(n) = (1/2)^{n−2} u(n − 2).

Figure 1.8. Signal x(n) for Example 1.5.


Figure 1.9. Signal x(n) for Example 1.6.

Example 1.6. Consider the finite duration signal shown in Figure 1.9 and
given by:

x(n) = n/2 for 0 ≤ n ≤ 4; 0 otherwise.

Signal x(n) can be expressed using two unit steps, one of which is shifted
to n = 5:

x(n) = (n/2)[u(n) − u(n − 5)].
Alternatively, this signal can be expressed in terms of a sum of shifted
weighted impulses:

x(n) = 0δ(n) + 0.5δ(n − 1) + δ(n − 2) + 1.5δ(n − 3) + 2δ(n − 4).
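That the two representations of Example 1.6 describe the same signal can be confirmed numerically; a minimal sketch:

```python
def u(n):      # unit step
    return 1 if n >= 0 else 0

def delta(n):  # unit impulse
    return 1 if n == 0 else 0

def x_steps(n):
    # Representation with two unit steps: x(n) = (n/2)[u(n) - u(n - 5)]
    return (n / 2) * (u(n) - u(n - 5))

def x_impulses(n):
    # Representation as a sum of shifted, weighted impulses
    weights = {0: 0, 1: 0.5, 2: 1, 3: 1.5, 4: 2}
    return sum(w * delta(n - k) for k, w in weights.items())

# Both expressions agree for every n, inside and outside 0 <= n <= 4.
assert all(x_steps(n) == x_impulses(n) for n in range(-5, 10))
```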

1.4 Complex Exponential and Sinusoidal


Signals

A discrete-time complex exponential signal is defined as:

x(n) = z^n = r^n e^(jωn)        (1.11)

where z is a complex parameter given by z = r e^(jω). When ω = 0, the imaginary component of z is null, and the complex exponential becomes a real exponential signal x(n) = r^n. In this case, for |r| > 1, we have an increasing exponential, whereas for |r| < 1, x(n) is a decreasing exponential.

Of particular interest is the case |r| = 1, which implies z = e^(jω₀). From Euler's formula, we have:

x(n) = e^(jω₀n) = cos(ω₀n) + j sin(ω₀n)        (1.12)

where

cos(ω₀n) = (1/2)[e^(jω₀n) + e^(−jω₀n)],        (1.13)

and

sin(ω₀n) = (1/2j)[e^(jω₀n) − e^(−jω₀n)].        (1.14)
Equations 1.13 and 1.14 define, respectively, the discrete-time cosine
and sine signals, jointly known as sinusoidal signals. A general expression
for this class of signals is given by:

x(n) = cos(ω₀n + φ),        (1.15)

where φ is the phase of the sinusoidal signal. For φ = 0, we have x(n) = cos(ω₀n), whereas φ = −π/2 implies that x(n) = sin(ω₀n). Figure 1.10 shows one period (N = 12) of x(n) = sin(πn/6).

1.4.1  Periodicity of Discrete-Time Exponentials

Assuming the existence of a period N such that x(n) = x(n + N) (Section 1.2.1) for the complex exponential defined in Equation 1.12, we have:

x(n + N) = e^(jω₀(n+N)) = e^(jω₀n) · e^(jω₀N) = x(n)

Figure 1.10.  One period of sinusoidal signal x(n) = sin(πn/6).

which leads to:


e^(jω₀n) · e^(jω₀N) = e^(jω₀n).        (1.16)
Equation 1.16 implies that, if x(n) is periodic, then ω₀N must be such that e^(jω₀N) = 1. This condition is satisfied for ω₀N = 2πm, where m is an integer, that is, for values of ω₀N that are integer multiples of 2π.
The fundamental period is, thus, given by:

N/m = 2π/ω₀        (1.17)

or

ω₀/(2π) = m/N.        (1.18)
The fundamental frequency, defined as:

ω₀ = (2π/N)m rad        (1.19)

is inversely proportional to the fundamental period. As N is dimensionless, ω₀ is expressed in radians, the dimension of 2π.
Equation 1.18 states that the discrete-time complex exponential e^(jω₀n) is periodic if and only if ω₀/(2π) is a rational number, that is, a number that can be expressed as the ratio between two integers (m/N). Due to the equivalence between the exponential and sinusoidal signals, expressed in Equations 1.12, 1.13, and 1.14, the condition of Equation 1.18 also applies to the latter class of signals. That periodicity condition is characteristic of discrete-time signals, as continuous-time complex exponentials, as well as cosines and sines, are always periodic.

Example 1.7. Figure 1.11 shows a segment of signal x(n) = cos((2/3)n). Despite being a cosine, for this signal ω₀ = 2/3 and ω₀/(2π) = 1/(3π), which is not a rational number. Signal x(n), therefore, is non-periodic.

Figure 1.11.  Non-periodic cosine x(n) = cos((2/3)n) for Example 1.7.

1.4.2 Harmonically Related Exponentials

A set of discrete-time complex exponentials is harmonic if all signals in the set share a common period N. This implies that those signals' frequencies are integer multiples of 2π/N. Thus, the set of harmonically related exponentials is defined as:

φk(n) = e^(j(2π/N)kn),  k = 0, ±1, ±2, ···.        (1.20)

Due to the periodicity of the discrete complex exponential, we have

φk+N(n) = e^(j(k+N)(2π/N)n) = e^(jk(2π/N)n) e^(j2πn) = φk(n).        (1.21)

Equation 1.21 shows that the signals are periodic with respect to the variable k with period N. This periodicity implies that there are only N distinct harmonic signals in the set defined by Equation 1.20; these signals are φ0(n) = 1, φ1(n) = e^(j2πn/N), φ2(n) = e^(j4πn/N), ···, φN−1(n) = e^(−j2πn/N). Any value of k outside the interval [0, N − 1] leads to one of the signals aforementioned, that is, φN(n) = φ0(n), φN+1(n) = φ1(n), φN+2(n) = φ2(n), and so on. Thus, the set of harmonically related discrete-time exponentials is finite, of size N. This is distinct from the continuous-time case, where all harmonically related exponentials of the form e^(j(2π/T)kt) (k an integer) are unique, and therefore, the set of signals φk(t) is infinite.
Harmonically related exponentials form the basis for representing periodic discrete-time signals by Fourier series, as described in Chapter 2.
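This N-fold periodicity in k is easy to verify numerically. The sketch below (ours, assuming N = 8 purely for illustration) checks that φk+N(n) and φk(n) coincide sample by sample:

```python
import cmath
from math import pi, isclose

N = 8  # assumed common period of the harmonic set

def phi(k, n):
    """Harmonically related exponential phi_k(n) = exp(j*2*pi*k*n/N)."""
    return cmath.exp(1j * 2 * pi * k * n / N)

# Only N distinct signals exist: phi_{k+N}(n) equals phi_k(n) for every n
for k in range(4):
    for n in range(16):
        assert isclose(abs(phi(k + N, n) - phi(k, n)), 0.0, abs_tol=1e-9)
```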

1.5 Discrete-Time Systems

A system is a transformation that modifies signals to adapt them to an intended application. This operation can be expressed as:

y(n) = T[x(n)].        (1.22)

Figure 1.12 shows the graphical representation of a discrete-time system. Signals x(n) and y(n) are the input and output signals, respectively.
Transformation T[⋅] can be implemented as a hardware device that pro-
cesses signal samples in real time or as a software-implemented operation.
Figure 1.12.  Representation of a discrete-time system.


Figure 1.13.  Accumulator of Example 1.8: (a) input signal x(n) = u(n),
(b) output signal y(n).

Example 1.8. The accumulator is a system that adds up past and present values of x(n) to produce y(n). Its characteristic transformation is given by:

y(n) = T[x(n)] = Σ_{k=−∞}^{n} x(k).        (1.23)

Figure 1.13 shows the output signal y(n) produced by the accumulator of Equation 1.23 for a unit step signal applied at the input.
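For finite-length inputs, the accumulator of Equation 1.23 reduces to a running sum. A minimal Python sketch (ours, not from the text; samples before the start of the list are taken as zero) reproduces the ramp of Figure 1.13 for a unit step input:

```python
def accumulator(x):
    """Running sum y(n) = sum of x(k) for k <= n, for a finite input list x
    (the input is assumed zero before the first sample)."""
    y, total = [], 0
    for sample in x:
        total += sample
        y.append(total)
    return y

# Unit step input u(n) for n = 0..4 produces the ramp of Figure 1.13
assert accumulator([1, 1, 1, 1, 1]) == [1, 2, 3, 4, 5]
```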

1.5.1  Properties of Discrete-Time Systems

The most relevant properties of discrete-time systems are described next.

1.5.1.1 Linearity

A linear system simultaneously obeys the homogeneity and additivity properties, that is:

Homogeneity: T[ax(n)] = aT[x(n)]

Additivity: T[x1(n) + x2(n)] = T[x1(n)] + T[x2(n)].

The combination of these two properties is known as the superposition property or superposition principle, which is the condition for linearity, expressed as:

Linearity: T[ax1(n) + bx2(n)] = aT[x1(n)] + bT[x2(n)].        (1.24)

Thus, the superposition principle states that, if the input signal x(n) to
a linear system can be decomposed into a weighted sum (or linear com-
bination) of components x1(n) and x2(n), the output of the system is the
weighted sum of the outputs produced individually by each component.
The sum weights (or coefficients of the linear combination) at the output
are the same as in the input.
Equation 1.24 holds for any number of signals combined at the input:

 ∞  ∞ ∞
T  ∑ bi xi ( n)  = ∑ biT  xi ( n)  = ∑ bi yi ( n) (1.25)
i =−∞  i =−∞ i =−∞

where

yi(n) = T[xi(n)]        (1.26)

and bi are the linear combination coefficients.


Signals yi(n) = T[xi(n)], i = 0, ±1, ±2, ±3, ..., in Equation 1.26 are the outputs produced by the system in response to the xi(n) components of input signal x(n), respectively. Thus, Equations 1.25 and 1.26 imply that
the output of a linear system to any linear combination of input signals is
the linear combination of the outputs produced by each of the input com-
ponents individually. The set of coefficients bi at the input is preserved at
the output.

Example 1.9. Consider the system characterized by:

y(n) = T[x(n)] = 3x(n) + 5.        (1.27)

To determine whether the aforementioned system is linear, it suffices to verify whether Equation 1.27 satisfies the superposition principle. Let us assume that x1(n) and x2(n) are two input signals and their respective outputs are y1(n) = 3x1(n) + 5 and y2(n) = 3x2(n) + 5. A linear combination of the two output signals is:

ay1(n) + by2(n) = a[3x1(n) + 5] + b[3x2(n) + 5]
             = 3[ax1(n) + bx2(n)] + 5(a + b)        (1.28)

where a and b are constants.
Let the signal x3(n) = ax1(n) + bx2(n), with the same constants a and b, be also input to the system to produce output y3(n) = 3x3(n) + 5. We then have:

y3(n) = 3x3(n) + 5 = 3[ax1(n) + bx2(n)] + 5.        (1.29)

Equations 1.28 and 1.29 are distinct, and therefore, the system is non-linear. Observe that the non-linearity of this system is due to the term 5 added to 3x(n) in Equation 1.27. If this term were null, the system would become y(n) = 3x(n), which is a linear system.
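The failure of superposition in Example 1.9 can also be demonstrated numerically. In this sketch (ours; the sample values and coefficients are arbitrary), the system is applied sample by sample:

```python
def T(x):
    """System of Example 1.9: y(n) = 3*x(n) + 5, applied sample by sample."""
    return [3 * v + 5 for v in x]

x1, x2 = [1, 2, 3], [4, 5, 6]
a, b = 2, 3

lhs = T([a * u + b * v for u, v in zip(x1, x2)])       # T[a*x1 + b*x2]
rhs = [a * u + b * v for u, v in zip(T(x1), T(x2))]    # a*T[x1] + b*T[x2]
assert lhs != rhs   # superposition fails: the system is non-linear

# Dropping the additive constant restores linearity
S = lambda x: [3 * v for v in x]
lhs2 = S([a * u + b * v for u, v in zip(x1, x2)])
rhs2 = [a * u + b * v for u, v in zip(S(x1), S(x2))]
assert lhs2 == rhs2
```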

Example 1.10. To determine the linearity of the accumulator defined in Equation 1.23, we follow the same procedure as in the previous example, that is:

y1(n) = Σ_{k=−∞}^{n} x1(k)

y2(n) = Σ_{k=−∞}^{n} x2(k)        (1.30)

where x1(n) and x2(n) are two input signals. Therefore, for constant coef-
ficients a and b:

ay1(n) + by2(n) = a Σ_{k=−∞}^{n} x1(k) + b Σ_{k=−∞}^{n} x2(k)
             = Σ_{k=−∞}^{n} [ax1(k) + bx2(k)].        (1.31)

For input signal x3(n) = ax1(n) + bx2(n), the output y3(n) is given by:

y3(n) = Σ_{k=−∞}^{n} x3(k) = Σ_{k=−∞}^{n} [ax1(k) + bx2(k)].        (1.32)

Equations 1.31 and 1.32 are equal, which means that the accumulator
is a linear system.

1.5.1.2  Shift Invariance

Assuming that x(n) and y(n) are the input and output signals, respectively, for a shift invariant system:

T[x(n − n0)] = y(n − n0).        (1.33)

The characteristic transformation T[·] of a shift invariant system is not a function of the independent variable n; consequently, the system
does not change with time. Thus, the output signal y(n) does not depend
on the particular value of n at which x(n) is applied to the system input.
To determine whether a system is shift-invariant, it suffices to verify
whether T[⋅] satisfies Equation 1.33. If it does not, then the system is
shift-varying.

Example 1.11. The accumulator of Equation 1.23 is a shift-invariant system. To verify this, consider its characteristic equation:

T[x(n − n0)] = Σ_{l=−∞}^{n} x(l − n0).        (1.34)

Making k = l − n0 and replacing it in the preceding equation, we obtain:

T[x(n − n0)] = Σ_{k=−∞}^{n−n0} x(k) = y(n − n0).

Example 1.12. An example of a shift-varying system is given by:

y(n) = n^2 x(n).

As the input signal is multiplied by n^2, the characteristic transformation is a function of n, which makes the system shift-varying. Verification is reasonably straightforward. For the shifted input, the system output is:

T[x(n − n0)] = n^2 x(n − n0),

and the shifted output (from Equation 1.33) is:

y(n − n0) = (n − n0)^2 x(n − n0).

Because y(n − n0) is not equal to T[x(n − n0)], the system is shift-varying.
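The same comparison can be carried out numerically. The sketch below (ours; the test input is an arbitrary choice) evaluates both sides of Equation 1.33 for the system of Example 1.12:

```python
def T(x, n):
    """System of Example 1.12: y(n) = n^2 * x(n); x is a function of n."""
    return n ** 2 * x(n)

x = lambda n: 1.0 if 0 <= n <= 3 else 0.0   # arbitrary test input (assumption)
n0 = 2                                       # shift amount

# Output for the shifted input vs. the shifted output, over a few samples
shifted_input_out = [T(lambda n: x(n - n0), n) for n in range(8)]
shifted_output    = [(n - n0) ** 2 * x(n - n0) for n in range(8)]
assert shifted_input_out != shifted_output   # the system is shift-varying
```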

1.5.1.3 Causality

Causality implies that the output of the system at a given instant n only
depends on past and/or present values of the input signal. Thus, for a
causal system, there is a cause-and-effect relationship between the input
and output signals. There must be a present or past event at the input for
any event produced at the system output. Causal systems are also known
as non-anticipative as the output does not depend on future input values.
Any physical system is causal because future inputs cannot be processed
in the real world.
The accumulator is an example of a causal system, as its output is the sum of all past and present input values.

1.5.1.4 Stability

A system is stable if a bounded input signal always results in a bounded output signal. Thus, for a stable system, we have:

|x(n)| ≤ B ⇒ |y(n)| ≤ L · B        (1.35)

for finite L and B.
It is easy to see that the accumulator is an unstable system, as it performs a sum over infinitely many input values.

1.5.1.5 Invertibility

A system is invertible when the input signal can be recovered from the
output signal. Thus, for any invertible system T[·], there exists an inverse system G[·] such that G[·] = T⁻¹[·]. Invertibility implies that distinct
inputs produce distinct outputs; otherwise, input recovery would not
be possible.

Example 1.13. The accumulator is an example of an invertible system. Its inverse is the system given by:

G[x(n)] = x(n) − x(n − 1).

1.6  Linear Shift-Invariant Systems

A system that simultaneously obeys the properties of linearity and shift invariance is a linear shift-invariant (LSI) system. The relevance of this class of systems comes from the fact that a large number of digital signal processing systems can be modeled as LSI systems within some practical limits. This is very convenient because, for LSI systems, the relationship between the input and output signals is given by an operation called convolution, and the system is identified by its response to an input impulse signal, called the impulse response. These characteristics provide a practical procedure to determine the output of an LSI system for any input signal, in the discrete-time domain as well as in the frequency domain through the use of transforms.

1.6.1 Impulse Response and Discrete-Time Convolution

The impulse response of a discrete-time LSI system is called h(n). It is defined as the output signal produced in response to an impulse applied at the system input. Thus, for an LSI system characterized by transformation T,

h(n) = T[δ(n)].

It can be shown that the output y(n) of an LSI system to any input
signal x(n) is the result of the discrete-time convolution between x(n) and
the impulse response h(n). This operation is defined as:


y(n) = x(n) ∗ h(n) = Σ_{k=−∞}^{∞} x(k)h(n − k).        (1.36)

Equation 1.36 allows one to determine the output of an LSI system for any input signal, provided that h(n) is known. Thus, an LSI system is characterized, or identified, by its impulse response h(n), as illustrated in Figure 1.14.

Figure 1.14.  LSI system represented by its impulse response h(n).

Computation of Equation 1.36 will not be examined here, as it is seldom performed in practice. Most often, in signal processing problems, time convolution is replaced by a frequency domain multiplication, as will be seen in Chapter 2. Derivation of the convolution sum (Equation 1.36), as well as details about its computation, can be found in signals and systems analysis textbooks.
Convolution is an operation that can be performed between any two signals. Thus, the convolution between signals g(n) and v(n) is expressed as:

g(n) ∗ v(n) = Σ_{k=−∞}^{∞} g(k)v(n − k).
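For finite-length sequences, Equation 1.36 can be evaluated directly. The sketch below (ours, not from the text; both sequences are assumed to start at n = 0 and be zero elsewhere) mirrors the sum term by term:

```python
def convolve(x, h):
    """Direct evaluation of the convolution sum (Equation 1.36) for
    finite-length sequences x(n), h(n) starting at n = 0."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Convolving with a shifted impulse just shifts the signal
assert convolve([1, 2, 3], [0, 1]) == [0, 1, 2, 3]
# A short two-tap example, worked out by hand
assert convolve([1, 1, 1], [1, 1]) == [1, 2, 2, 1]
```

In practice, as the text notes, this double loop is usually replaced by FFT-based multiplication in the frequency domain; the direct form is shown only to make the definition concrete.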

1.6.2  Properties of LSI Systems

It is straightforward to show that convolution is a commutative operation:

g(n) ∗ v(n) = v(n) ∗ g(n) = Σ_{k=−∞}^{∞} v(k)g(n − k),

associative:

g(n) ∗ [v(n) ∗ z(n)] = [g(n) ∗ v(n)] ∗ z(n),

and distributive:

g(n) ∗ [v(n) + z(n)] = [g(n) ∗ v(n)] + [g(n) ∗ z(n)].

The preceding properties have implications for the interconnection of LSI systems. Specifically, they imply that the order in which LSI systems are serially connected is irrelevant, as illustrated in Figures 1.15 (a)
and (b). The serial connection is also equivalent to a single system with
h(n) given by the convolution of the impulse responses of the connected
systems, as in Figure 1.15 (c). A parallel connection of LSI systems is
equivalent to a single system with h(n) given by the sum of the impulse
responses of the connected systems, as shown in Figure 1.16. Although
illustrated for two systems, these equivalences are valid for any number of
connected LSI systems.
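These interconnection equivalences are easy to confirm numerically for short finite impulse responses. A sketch (ours; `convolve` and `add` are helper names we chose, and the sequences are arbitrary):

```python
def convolve(x, h):
    """Direct convolution of finite sequences starting at n = 0."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def add(a, b):
    """Sample-wise sum of two sequences, zero-padding the shorter one."""
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [u + v for u, v in zip(a, b)]

x  = [1, -2, 3]
h1 = [1, 1]
h2 = [2, 0, 1]

# Serial connection: order is irrelevant, and equals one system with h1 * h2
assert convolve(convolve(x, h1), h2) == convolve(convolve(x, h2), h1)
assert convolve(convolve(x, h1), h2) == convolve(x, convolve(h1, h2))
# Parallel connection: equals one system with impulse response h1 + h2
assert add(convolve(x, h1), convolve(x, h2)) == convolve(x, add(h1, h2))
```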
In Section 1.5.1, we learned that a system is stable when a bounded
input signal always produces a bounded output signal. For an LSI system,
Figure 1.15.  Equivalences among serial connections of LSI systems.

Figure 1.16.  (a) Parallel connection of two LSI systems, (b) equivalent single system.

this property implies that the impulse response is absolutely summable, that is, h(n) must obey:

Σ_{n=−∞}^{∞} |h(n)| < ∞.        (1.37)

Thus, the impulse response of a stable LSI system is either a finite duration sequence or an infinite convergent sequence, both assuming finite values only.

Example 1.14. The LSI system with impulse response h1(n) = (1/2)^n u(n) is stable, while the LSI system with impulse response h2(n) = u(n) is unstable. To verify for h1(n), we can calculate:

Σ_{n=−∞}^{∞} |h1(n)| = Σ_{n=0}^{∞} (1/2)^n = 2 < ∞.

Therefore, h1(n) is absolutely summable. For h2(n), we have:

Σ_{n=−∞}^{∞} |h2(n)| = Σ_{n=0}^{∞} 1 = ∞.        (1.38)

Thus, h2(n) is not absolutely summable.
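A finite partial-sum check cannot prove Equation 1.37, but it illustrates the contrast between h1(n) and h2(n). A hedged numeric sketch (ours; `is_absolutely_summable` and its cutoffs are arbitrary choices):

```python
def is_absolutely_summable(h, n_max=200, bound=1e6):
    """Crude numeric check of Equation 1.37 by a partial sum over n = 0..n_max-1.
    A finite test cannot prove convergence, only suggest it."""
    total = sum(abs(h(n)) for n in range(n_max))
    return total < bound, total

# h1(n) = (1/2)^n u(n): the partial sums approach 2
summable, total = is_absolutely_summable(lambda n: 0.5 ** n)
assert summable and abs(total - 2.0) < 1e-6

# h2(n) = u(n): the partial sums grow without bound (linearly in n_max)
_, total2 = is_absolutely_summable(lambda n: 1.0)
assert total2 == 200.0
```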


The impulse response of a causal LSI system must obey:

h(n) = 0,  n < 0.        (1.39)

This property derives from the fact that causal systems are non-anticipative. As the impulse signal is null for n < 0, no event occurs at the input of the LSI system before n = 0 to cause an output event. Thus, h(n) must be 0 for n < 0.

1.7 Chapter Overview

In this chapter, we introduce the basic concepts related to discrete-time signals and systems. We start by defining discrete-time and digital signals and analyze the main properties presented by those signals. The unit impulse and unit step signals, both of which play a relevant role in the area of digital signal processing, are also defined, and some of their applications are exemplified. We also introduce the complex exponential and sinusoidal signals, showing the relationship between them, as well as how the concepts of period and frequency apply to these two types of discrete-time signals. We then introduce discrete-time systems and define their main properties.
The final section of this chapter is dedicated to the class of LSI systems, used to model most systems of practical interest. The input and output for LSI systems are related through the convolution operation, which is defined and analyzed. We end the chapter by describing the properties of LSI systems and how they characterize the corresponding impulse responses.
Index

A
A/D conversion, 59
aliasing, 57
audio signals, 124
average filter, 34

B
bilinear transformation, 91
butterfly, 76

C
causal average filter, 34
circular convolution, 68
complex exponentials, 10
  fundamental frequency, 12
  harmonic exponentials, 13
  periodicity, 12
convolution, 31

D
decimation, 62, 75
DFT leakage, 70
difference equations, 32, 81
  constant coefficients, 32
  LSI systems, 32
digital down converter, 129
digital filters, 79
  cut-off frequency, 79
  distortion-free, 86
  FIR filters, 80, 88
  frequency response, 87
  group delay, 88
  highpass filter, 79
  ideal lowpass filter, 86
  IIR filters, 80, 88
  impulse response, 80
  linear phase, 85
  lowpass filter, 79
  magnitude and phase, 84
  magnitude frequency response, 84
  non-recursive filters, 81
  passband, 89
  phase frequency response, 84
  recursive filters, 81
  specifications, 89
  stopband, 89
  tapped delay line, 81
  transition, 89
digital signal processing, 1
discrete Fourier transform (DFT), 66
discrete time convolution, 19
discrete-time signal, 1
  complex exponential, 10
  energy, 3
  even signal, 3
  fundamental period, 3
  odd signal, 3
  periodicity, 2
  power, 3
  sinusoidal signals, 10
  time reversal, 6
  time scaling, 6
  time shifting, 5
  unit impulse, 8
  unit step, 8
discrete time systems, 13
  causal system, 18
  invertible system, 18
  linear system, 14
  shift invariant system, 17
  stable system, 18

E
Electromyography (EMG) signals, 123
energy density spectrum, 32
energy signal, 3
Euler's formula, 25
even signal, 3
extremal frequencies, 115

F
fast Fourier transform (FFT), 73
FIR filters, 80
  alternation theorem, 115
  Bartlett window, 110
  Blackman window, 110
  Chebyshev approximation, 115
  equiripple design, 114
  frequency sampling, 115
  Gibbs phenomenon, 107
  Hamming window, 110
  Hanning window, 110
  Kaiser window, 113
  linear-phase, 103
  Parks and McClellan, 115
  rectangular window, 105
  Remez exchange algorithm, 115
  windowing, 104
Fourier properties, 27
  periodicity, 29
  shifting, 29
  symmetry, 30
Fourier series, 23, 25
  analysis equation, 25
  synthesis equation, 25
Fourier transform, 26
  inverse Fourier transform, 26
Fourier transform properties, 27
  conjugation and conjugate symmetry, 30
  convolution, 31
  differentiation in frequency, 31
  linearity, 29
  multiplication, 31
  Parseval's relation, 32
  periodicity, 29
  time and frequency shifting, 29
  time reversal, 31
frequency response, 34
fundamental period, 3

G
Gibbs phenomenon, 107

H
half-power frequency, 96
harmonically related complex exponentials, 25
harmonic interference, 127
Hilbert transformer, 122

I
IIR filters, 80
  bilinear transformation, 88, 92
  Butterworth filter, 96
  Chebyshev filter, 97
  Elliptic filter, 98
impulse response, 19
interpolation, 62

L
linear shift-invariant systems, 19
  discrete time convolution, 19
  impulse response, 19
  properties, 20
low pass filter, 34

N
non-anticipative, 18
non-recursive filters, 81
non-uniform quantization, 60
Nyquist theorem, 57

O
odd signal, 3
order of the filter, 81
overlap-add method, 72

P
Parseval, 32
partial fractions, 36
periodic convolution, 108
pole-zero plot, 39
polynomial multiplication, 131
power signal, 3
prewarp, 92

Q
quantization, 60

R
recovered signal, 55
recursive filters, 81
region of convergence (ROC), 38

S
sampled signal, 55
sampling interval/sampling period, 1
sampling rate/sampling frequency, 1
short-time Fourier transform (STFT), 132
signal envelope detection, 122
signals, 55
  A/D conversion, 59
  sampling and reconstruction, 55
sinusoidal signals, 10, 11, 25
software-defined radio (SDR), 130
spectrum peak detection, 121
superposition property/superposition principle, 15
system function, 24

T
transducers, 1
transfer function, 51
transforms, 23
  Fourier transform, 23, 25
  z transform, 23

U
unit impulse signal, 8
unit step signal, 8
uniform quantization, 60

W
waterfall graph, 132
windowing, 73

Z
z transform, 37
  analysis of LSI systems, 51
  causal and stable LSI system, 51
  causal signals, 41
  convergence, 38
  finite duration signals, 41
  left-sided signals, 42
  pole-zero cancelation, 50
  pole-zero plot, 39
  properties, 48
  region of convergence, 38
  right-sided signals, 42
  ROC, 39
  two-sided signals, 43
  unit circle, 37
  z plane, 37