
SPIE Press Book

Random Processes for Image and Signal Processing
Author(s): Edward R. Dougherty

Book Description

Part of the SPIE/IEEE Series on Imaging Science and Engineering. This book provides a framework for understanding the ensemble of temporal, spatial, and higher-dimensional processes in science and engineering that vary randomly from observation to observation. Suitable as a text for undergraduate and graduate students with a strong background in probability, and as a graduate text for image processing courses.


Book Details

Date Published: 27 October 1998
Pages: 616
ISBN: 9780819425133
Volume: PM44

Table of Contents
Chapter 1. Probability Theory
1.1 Probability Space
1.1.1. Events
1.1.2. Conditional Probability
1.2. Random Variables
1.2.1. Probability Distributions
1.2.2. Probability Densities
1.2.3. Functions of a Random Variable
1.3. Moments
1.3.1. Expectation and Variance
1.3.2. Moment-Generating Function
1.4. Important Probability Distributions
1.4.1. Binomial Distribution
1.4.2. Poisson Distribution
1.4.3. Normal Distribution
1.4.4. Gamma Distribution
1.4.5. Beta Distribution
1.4.6. Computer Simulation
1.5. Multivariate Distributions
1.5.1. Jointly Distributed Random Variables
1.5.2. Conditioning
1.5.3. Independence
1.6. Functions of Several Random Variables
1.6.1. Basic Arithmetic Functions of Two Random Variables
1.6.2. Distributions of Sums of Independent Random Variables
1.6.3. Joint Distributions of Output Random Variables
1.6.4. Expectation of a Function of Several Random Variables
1.6.5. Covariance
1.6.6. Multivariate Normal Distribution
1.7. Laws of Large Numbers
1.7.1. Weak Law of Large Numbers
1.7.2. Strong Law of Large Numbers
1.7.3. Central Limit Theorem
1.8. Parametric Estimation via Random Samples
1.8.1. Random-Sample Estimators
1.8.2. Sample Mean and Sample Variance
1.8.3. Minimum-Variance Unbiased Estimators
1.8.4. Method of Moments
1.8.5. Order Statistics
1.9. Maximum-Likelihood Estimation
1.9.1. Maximum-Likelihood Estimators
1.9.2. Additive Noise
1.9.3. Minimum Noise
1.10 Entropy
1.10.1. Uncertainty
1.10.2. Information
1.10.3. Entropy of a Random Vector
1.11 Source Coding
1.11.1. Prefix Codes
1.11.2. Optimal Coding
Exercises for Chapter 1
Chapter 2. Random Processes
2.1. Random Functions
2.2. Moments of a Random Function
2.2.1. Mean and Covariance Functions
2.2.2. Mean and Covariance of a Sum
2.3. Differentiation
2.3.1. Differentiation of Random Functions
2.3.2. Mean-Square Differentiability
2.4. Integration
2.5. Mean Ergodicity
2.6. Poisson Process
2.6.1. One-dimensional Poisson Model
2.6.2. Derivative of the Poisson Process
2.6.3. Properties of Poisson Points
2.6.4. Axiomatic Formulation of Poisson Temporal and Spatial Processes
2.7. Wiener Process and White Noise
2.7.1. White Noise
2.7.2. Random Walk
2.7.3. Wiener Process
2.8. Stationarity
2.8.1. Wide-Sense Stationarity
2.8.2. Mean-Ergodicity for WS Stationary Processes
2.8.3. Covariance-Ergodicity for WS Stationary Processes
2.8.4. Strict-Sense Stationarity
2.9. Estimation
2.10. Linear Systems
2.10.1. Commutation of a Linear Operator with Expectation
2.10.2. Representation of Linear Operators
Exercises for Chapter 2
Chapter 3. Canonical Representation
3.1. Canonical Expansions
3.1.1. Fourier Representation and Projections
3.1.2. Canonical Expansion of the Covariance Function
3.2. Karhunen-Loeve Expansion
3.2.1 The Karhunen-Loeve Theorem
3.2.2 Discrete Karhunen-Loeve Expansion
3.2.3 Canonical Expansions with Orthonormal Coordinate Functions
3.2.4 Relation to Data Compression
3.3. Noncanonical Representation
3.3.1 Generalized Bessel Inequality
3.3.2 Decorrelation
3.4. Trigonometric Representation
3.4.1. Trigonometric Fourier Series
3.4.2. Generalized Fourier Coefficients for WS Stationary Processes
3.4.3. Mean-Square Periodic WS Stationary Processes
3.5. Canonical Expansions as Transforms
3.5.1. Orthonormal Transforms of Random Functions
3.5.2. Fourier Descriptors
3.6. Transform Coding
3.6.1. Karhunen-Loeve Compression
3.6.2. Transform Compression Using an Arbitrary Orthonormal System
3.6.3. Walsh-Hadamard Transform
3.6.4. Discrete Cosine Transform
3.6.5. Transform Coding for Digital Images
3.6.6. Optimality of the Karhunen-Loeve Transform
3.7. Coefficients Generated by Linear Functionals
3.7.1. Coefficients From Integral Functionals
3.7.2. Generating Bi-Orthogonal Function Systems
3.8. Canonical Expansion of the Covariance Function
3.8.1. Derivation of a Canonical Expansion from a Covariance Expansion
3.8.2. Constructing a Canonical Expansion for the Covariance Function
3.9. Integral Canonical Expansions
3.9.1. Construction via Integral Functional Coefficients
3.9.2. Construction from a Covariance Expansion
3.10. Power Spectral Density
3.10.1. The Power-Spectral-Density/Covariance Transform Pair
3.10.2. Power Spectral Density and Linear Operators
3.10.3. Integral Canonical Representation of a WS Stationary Random Function
3.11. Canonical Expansions of Vector Random Functions
3.11.1. Vector Random Functions
3.11.2. Canonical Expansions for Vector Random Functions
3.11.3. Finite Sets of Random Vectors
3.12. Canonical Representation over a Discrete Set
Exercises for Chapter 3
Chapter 4. Optimal Filtering
4.1. Optimal Mean-Square-Error Filters
4.1.1. Conditional Expectation
4.1.2. Optimal Nonlinear Filter
4.1.3. Optimal Filter for Jointly Normal Random Variables
4.1.4. Multiple Observation Variables
4.1.5. Bayesian Parametric Estimation
4.2. Optimal Finite-Observation Linear Filters
4.2.1. Linear Filters and the Orthogonality Principle
4.2.2. Design of the Optimal Linear Filter
4.2.3. Optimal Linear Filter in the Jointly Gaussian Case
4.2.4. Role of Wide-Sense Stationarity
4.2.5. Signal-plus-Noise Model
4.2.6. Edge Detection
4.3. Steepest Descent
4.3.1. Steepest Descent Iterative Algorithm
4.3.2. Convergence of the Steepest-Descent Algorithm
4.3.3. Least-Mean-Square Adaptive Algorithm
4.3.4. Convergence of the LMS Algorithm
4.3.5. Nonstationary Processes
4.4. Least-Squares Estimation
4.4.1. Pseudoinverse Estimator
4.4.2. Least-Squares Estimation for Nonwhite Noise
4.4.3. Multiple Linear Regression
4.4.4. Least-Squares Image Restoration
4.5. Optimal Linear Filters for Random Vectors
4.5.1. Optimal Linear Filter for Linearly Dependent Observations
4.5.2. Optimal Estimation of Random Vectors
4.5.3. Optimal Linear Filters for Random Vectors
4.6. Recursive Linear Filters
4.6.1. Recursive Generation of Direct Sums
4.6.2. Static Recursive Optimal Linear Filtering
4.6.3. Dynamic Recursive Optimal Linear Filtering
4.7. Optimal Infinite-Observation Linear Filters
4.7.1. Wiener-Hopf Equation
4.7.2. Wiener Filter
4.8. Optimal Linear Filters in the Context of a Linear Model
4.8.1. The Linear Signal Model
4.8.2. Procedure for Finding the Optimal Linear Filter
4.8.3. Additive White Noise
4.8.4. Discrete Domains
4.9. Optimal Linear Filters via Canonical Expansions
4.9.1. Integral Decomposition into White Noise
4.9.2. Integral Equations Involving the Autocorrelation Function
4.9.3. Solution Via Discrete Canonical Expansions
4.10. Optimal Binary Filters
4.10.1. Binary Conditional Expectation
4.10.2. Boolean Functions and Optimal Translation-Invariant Filters
4.10.3. Optimal Increasing Filters
4.11. Pattern Classification
4.11.1. Optimal Classifiers
4.11.2. Gaussian Maximum-Likelihood Classification
4.11.3. Linear Discriminants
4.12. Neural Networks
4.12.1. Two-Layer Neural Networks
4.12.2. Steepest Descent for Nonquadratic Error Surfaces
4.12.3. Sum-of-Squares Error
4.12.4. Error Back-Propagation
4.12.5. Error Back-Propagation for Multiple Outputs
4.12.6. Adaptive Network Design
Exercises for Chapter 4
Chapter 5. Random Models
5.1. Markov Chains
5.1.1. Chapman-Kolmogorov Equations
5.1.2. Transition Probability Matrix
5.1.3. Markov Processes
5.2. Steady-State Distributions for Discrete-Time Markov Chains
5.2.1. Long-Run Behavior of a Two-State Markov Chain
5.2.2. Classification of States
5.2.3. Steady-State and Stationary Distributions
5.2.4. Long-Run Behavior of Finite Markov Chains
5.2.5. Long-Run Behavior of Markov Chains with Infinite State Spaces
5.3. Steady-State Distributions for Continuous-Time Markov Chains
5.3.1. Steady-State Distribution of an Irreducible Continuous-Time Markov Chain
5.3.2. Birth-Death Model Queues
5.3.3. Forward and Backward Kolmogorov Equations
5.4. Markov Random Fields
5.4.1. Neighborhood Systems
5.4.2. Determination by Conditional Probabilities
5.4.3. Gibbs Distributions
5.5. Random Boolean Model
5.5.1. Germ-Grain Model
5.5.2. Vacancy
5.5.3. Hitting
5.5.4. Linear Boolean Model
5.6. Granulometries
5.6.1. Openings
5.6.2. Classification by Granulometric Moments
5.6.3. Adaptive Reconstructive Openings
5.7. Random Sets
5.7.1. Hit-or-Miss Topology
5.7.2. Convergence and Continuity
5.7.3. RACS
5.7.4. Capacity Functional
Exercises for Chapter 5
Bibliography

Preface

Science and engineering deal with temporal, spatial, and higher-dimensional processes that vary randomly from observation to observation. While one might study a specific observation for its algebraic or topological content, deterministic analysis does not provide a framework for understanding the ensemble of observations, nor does it provide a mechanism for prediction of future events. A system that functions over time needs to be designed in accordance with the manner in which input processes vary over time. System performance needs to be measured in terms of expected behavior and other statistical characteristics concerning operation on random inputs. There is an input random process, a transformation, and an output random process. System analysis begins with the input process and, based on the transformation, derives characteristics of the output process; system synthesis begins with an input process and a desired output process, and derives a transformation that estimates the desired output from the input.

Image and (one-dimensional) signal processing concern the analysis and synthesis of linear and nonlinear systems that operate on spatial and temporal random functions. As areas of applied science, image and signal processing mark off a region within the overall domain of random processes that suits the set of applications they encompass. Because our major concern is algorithm development, our major interest is operator synthesis. We focus on three basic problems: representation, filter design, and modeling. These are not independent; they form a unity that is the key to algorithm development in a stochastic framework. The end goal is design of a filter (operator). If the input process is given in terms of a representation that is compatible with the form of the desired filter, then design is enhanced. Ultimately, the filter is to be used on some class of real-world images or signals. Therefore we need models that fit real processes and whose mathematical structure facilitates design of filters to extract desired structural information.

My goal is not just to present the theory along with applications, but also to help students intuitively appreciate random functions. Were this a mathematics book, I would have taken the mathematical approach of stating general theorems and then giving corollaries for special situations. Instead, I have often begun with special cases in which probabilistic insight is more readily achievable. Moreover, I have not taken a theorem-proof approach. When provided, proofs appear in the main body of the text and are clearly delineated; in other cases, proofs are omitted or only outlines of the conceptual arguments are given. The intent is to state theorems carefully and to draw clear distinctions between rigorous mathematical arguments and heuristic explanations. When a proof can be given at a mathematical level commensurate with the text and when it enhances conceptual understanding, it is usually provided; in other cases, the effort is to explain subtleties of the definitions and properties concerning random functions, and to state conditions under which a proposition applies. Attention is drawn to the differences between deterministic concepts and their random counterparts, for instance, in the mean-square calculus, orthonormal representation, and linear filtering. Such differences are sometimes glossed over in method books; however, lack of differentiation between random and deterministic analysis can lead to misinterpretation of experimental results and misuse of techniques.

My motivation for the book comes from my experience teaching graduate-level image processing and finding that I had to teach random processes as well. Even students who have taken a course on random processes have often done so in the context of linear operators on signals. This approach is inadequate for image processing. Nonlinear operators play a widening role in image processing, and the spatial nature of imaging makes it significantly different from one-dimensional signal processing. Moreover, students who have some background in stochastic processes often lack a unified view in terms of canonical representation and orthogonal projections in inner product spaces.

The book can be used for courses in a number of ways. For students with a strong background in probability and statistics, it can form a one-semester course on random processes, with the obvious necessity of omitting a number of sections. Since the first chapter provides the essential probability and estimation theory that would normally be taught in a one-semester undergraduate course, the book can be used for a full-year course for students who lack a good undergraduate probability course. The book is structured with this use in mind. My experience has been that very few engineering students come to graduate school having an adequate background in probability theory. Finally, owing to the large number of imaging applications, with the addition of some supplementary papers, the book can be used for a graduate course on image processing; indeed, I have taken such an approach here at Texas A&M. I suspect that a similar approach can be used for signal processing. For research-oriented departments, cookbook-style texts are totally inadequate and future researchers receive significant benefit from learning their specialty in the proper mathematical framework. To aid in course design, a diagram describing the essential logical structure of the sections follows the preface.

The first chapter covers basic probability theory, with attention paid to multivariate distributions and functions of several random variables. The probability theory concludes with a section on laws of large numbers. There follows a general section on parametric estimation. Maximum-likelihood estimators are covered and applied to a constant signal corrupted by various noise models. Estimation plays an important role throughout the book because it is not enough to know the probabilistic theory behind algorithm design; one also needs to be aware of the problems associated with estimating an optimal filter from sample signals. The chapter concludes with sections on entropy and coding.
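As a small numerical illustration of the maximum-likelihood material mentioned above, the following Python sketch (not taken from the book; the signal value, noise level, and sample size are arbitrary assumptions, and NumPy is used for the simulation) treats a constant signal corrupted by additive zero-mean Gaussian noise, a case in which the maximum-likelihood estimator reduces to the sample mean.

```python
# Hedged sketch (illustrative only): maximum-likelihood estimation of a
# constant signal theta observed in additive zero-mean Gaussian noise.
# For X_k = theta + N_k with N_k ~ N(0, sigma^2) i.i.d., the likelihood is
# maximized by the sample mean, so the MLE coincides with the sample mean.
import numpy as np

rng = np.random.default_rng(0)

theta = 2.5        # true constant signal (arbitrary illustrative value)
sigma = 1.0        # noise standard deviation (assumed known)
n = 1000           # number of independent observations

samples = theta + sigma * rng.standard_normal(n)   # X_k = theta + N_k
theta_mle = samples.mean()                          # MLE under Gaussian noise

print(f"true theta = {theta}, MLE estimate = {theta_mle:.4f}")
```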

The second chapter covers the basic properties of random functions typically found in general texts on engineering random processes. Differences are mainly in orientation, but this is important. The stochastic problems of image processing differ substantially from those of one-dimensional signal processing. Because the latter is more mature as a discipline, books on random processes tend to orient their image-signal processing applications towards signals. Historically, such an approach is natural; nevertheless, it is often not sufficient to give a definition, property, or application for signal processing and then say that, as presented, it is suitable for image processing. Many stochastic problems that are straightforward in one dimension become either very difficult or intractable in two dimensions. I try to take a balanced approach with the basic theory and continue to pay attention to both one- and two-dimensional processes throughout the text.

The third chapter treats canonical representation in the natural context of Fourier expansions in inner product spaces. The Karhunen-Loeve expansion is covered in detail: the meaning of the expansion, its relation to deterministic Fourier representation, and the role played by the eigenfunctions resulting from the covariance function. There follow sections on noncanonical representation, trigonometric representation of wide-sense stationary processes, and the role of canonical expansions as transforms. There is a substantial section on transform coding. It is placed in the context of the Karhunen-Loeve expansion, so that transform efficiency for other transforms, such as the discrete cosine and Walsh-Hadamard transforms, can be better understood. The text then goes into the general theory of discrete canonical expansions whose coefficients are generated by linear functionals. Properties of the coefficients and related coordinate functions are discussed. Because a canonical expansion of a random function can be derived from a canonical expansion of its covariance function, covariance expansions are discussed. Integral canonical expansions are theoretically more difficult and rely on the theory of generalized functions. Therefore the subject is treated formally with the understanding that we often consider random functions whose covariance functions are distributions. Integral canonical expansions provide an appropriate framework for discussion of the power spectral density and the Wiener-Khintchin theory. We are not merely concerned with the power spectral density as the Fourier transform of a covariance function; we are also concerned with white-noise representation of random functions. Representation is discussed in the context of necessary and sufficient conditions for an integral canonical expansion. The next section introduces vector random functions and their canonical representation. The chapter closes with an algorithm that produces a canonical representation over a discrete set that can be applied in very general circumstances.
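The compression role of the discrete Karhunen-Loeve expansion can be seen in a few lines of code. The sketch below (illustrative only; the synthetic data, dimensions, and number of retained terms are arbitrary assumptions, and NumPy supplies the linear algebra) estimates a covariance matrix from centered samples, expands the samples in its eigenvector basis, and truncates the expansion to the components with the largest eigenvalues.

```python
# Hedged sketch (not the book's code): discrete Karhunen-Loeve expansion of a
# zero-mean random vector and truncation for compression. Eigenvectors of the
# covariance matrix act as orthonormal coordinate functions; keeping the
# components with the largest eigenvalues minimizes mean-square error among
# truncated orthonormal expansions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated data: 500 realizations of an 8-dimensional random vector.
n, d = 500, 8
A = rng.standard_normal((d, d))
X = rng.standard_normal((n, d)) @ A.T
X -= X.mean(axis=0)                      # work with the centered process

K = np.cov(X, rowvar=False)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(K)     # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # reorder descending
eigvecs = eigvecs[:, order]

k = 3                                    # number of retained coefficients
Z = X @ eigvecs[:, :k]                   # KL coefficients (decorrelated)
X_hat = Z @ eigvecs[:, :k].T             # truncated reconstruction

mse = np.mean((X - X_hat) ** 2)
print(f"mean-square reconstruction error with {k} of {d} terms: {mse:.4f}")
```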

The fourth chapter treats filter design. The basic theory of optimal mean-square-error filters is covered first. The next eight sections are committed to optimal linear filters. Coverage begins with application of the orthogonality principle to find the optimal linear filter for a finite number of observations. A number of examples are provided to explain both the algebraic and statistical aspects of the theory. The steepest-descent and LMS iterative algorithms are covered next, including convergence. Vector estimation begins with least-squares estimation of a nonrandom vector, with the best estimator given in terms of the pseudoinverse of the design matrix. Because we assume that the columns of the design matrix are linearly independent, the pseudoinverse takes a simple form. The next section treats random vectors and requires the pseudoinverse of an autocorrelation matrix that may be singular. The main theorem follows directly from a basic theorem on finding projections onto subspaces spanned by linearly dependent vectors in an inner product space. It is the main result for optimal finite-observation linear filters. The last section on finite-observation filters discusses recursive linear filters, concluding with the Kalman filter. Recursive linear filters are based on the fact that projections onto direct sums of subspaces can be decomposed into sums of projections. This principle is introduced and both static and dynamic recursive filtering follow from it. The last three sections on optimal linear filtering involve infinite observations. The Kolmogorov theory places optimal linear filtering into the context of projections onto subspaces generated by operators (in our case, integral operators). The Wiener-Hopf equation is derived and the Wiener filter for wide-sense stationary processes is covered. The next section considers optimal linear filtering in the context of a linear model. This approach can lead to an optimal filter being derived from a system of linear equations; in other cases, it leads to the solution of Wiener-Hopf-like equations that are simpler than the original equation. Optimal linear filtering culminates with the derivation of optimal filters in the framework of canonical expansions. Because it provides a general method for design, it may be considered the main section on optimal linear filtering. Having concluded coverage of linear filters, the chapter turns to nonlinear filters. There is extensive coverage of optimal binary filters, which are a key concern of digital image processing. For discrete binary images, optimization stays close to the probabilistic nature of the processes and optimal versus suboptimal design is appreciable at a very low level. Next, pattern classification is treated in the context of filter design and the Gaussian maximum-likelihood classifier is obtained under the appropriate model conditions. The chapter concludes with neural networks. Error back-propagation is discussed in the context of sum-of-squares error and adaptive network design. Overall, the chapter emphasizes the fundamental role of filter representation. This begins with the integral representation of linear filters and canonical decomposition of random functions, continues with the morphological representation of binary filters, and concludes with the representational power of neural-network filters.
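For the finite-observation case, the orthogonality principle reduces optimal linear filter design to solving the normal equations: the estimation error must be orthogonal to every observation, which gives R w = p, with R the observation autocorrelation matrix and p the cross-correlation between observations and the desired signal. The sketch below (an illustration under an assumed signal-plus-noise model, not the book's procedure; the mixing weights and noise level are arbitrary) estimates R and p from samples and solves for the filter weights.

```python
# Hedged sketch (illustrative only): optimal finite-observation linear filter
# via the orthogonality principle. Solving R w = p gives the weights that
# minimize the mean-square error E[(Y - w^T X)^2].
import numpy as np

rng = np.random.default_rng(2)

# Assumed signal-plus-noise model: observations X = Y * h + noise.
n, d = 5000, 4
Y = rng.standard_normal(n)
h = np.array([1.0, 0.6, 0.3, 0.1])                 # illustrative mixing weights
X = np.outer(Y, h) + 0.5 * rng.standard_normal((n, d))

R = X.T @ X / n                                    # estimated autocorrelation E[X X^T]
p = X.T @ Y / n                                    # estimated cross-correlation E[X Y]
w = np.linalg.solve(R, p)                          # optimal linear filter weights

Y_hat = X @ w
mse = np.mean((Y - Y_hat) ** 2)
print("filter weights:", np.round(w, 3))
print(f"mean-square error: {mse:.4f}")
```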

The final chapter treats random models. A major portion is devoted to discrete- and continuous-time Markov chains. The range of application for these models is extensive, including engineering, computer science, and operations research. The key role of the Chapman-Kolmogorov equations is emphasized. Steady-state and stationary distributions are covered for both finite and infinite state spaces. Conditions for existence are given and methods for finding stationary distributions are explored. Special treatment is given to the birth-death model, random walks, and queues. Passing to two dimensions, Markov random fields and Gibbs distributions are discussed. Next comes the random Boolean model, which is fundamental to coverage processes, filtering, and texture analysis. Vacancy and hitting are discussed, and the simple linear Boolean model is examined in some detail. Granulometries are treated in the following section, which describes granulometric classification and adaptive design of openings to filter clutter in the signal-union-noise model. The final section of the book provides elements of the theory of random closed sets and is at a higher mathematical level than the rest of the text. Its goal is twofold: to explain how the hit-or-miss topology arises naturally from the probabilistic requirements of a random closed set and to present the capacity theorem.
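A minimal numerical illustration of the steady-state behavior described for finite, irreducible discrete-time Markov chains: the sketch below (with an arbitrary illustrative transition matrix, not an example from the book) computes the stationary distribution as the normalized left eigenvector of the transition matrix at eigenvalue one and checks it against simulated long-run state frequencies.

```python
# Hedged sketch (illustrative only): stationary distribution of a finite,
# irreducible discrete-time Markov chain. The stationary distribution pi
# satisfies pi P = pi with its components summing to one.
import numpy as np

rng = np.random.default_rng(3)

# Arbitrary 3-state transition probability matrix (rows sum to one).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

eigvals, eigvecs = np.linalg.eig(P.T)              # left eigenvectors of P
idx = np.argmin(np.abs(eigvals - 1.0))             # eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()                                     # normalize to a distribution

# Empirical check by simulating the chain.
steps, state = 50_000, 0
counts = np.zeros(3)
for _ in range(steps):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

print("stationary distribution:", np.round(pi, 4))
print("empirical frequencies:  ", np.round(counts / steps, 4))
```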

I hope my readers find this book both instructive and enjoyable, and that it provides them with insight and knowledge useful to the pursuit of ground-breaking research. For me, a person trained in analysis, it represents a growing appreciation of the profound differences between deterministic and stochastic scientific epistemology.

Edward R. Dougherty

