Proceedings Volume 1610

Curves and Surfaces in Computer Vision and Graphics II

Martine J. Silbermann, Hemant D. Tagare
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 February 1992
Contents: 7 Sessions, 34 Papers, 0 Presentations
Conference: Robotics '91, 1991
Volume Number: 1610

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Approximation in Signal Processing
  • Curves and Surfaces I
  • Approximation I
  • Approximation II
  • Curves and Surfaces II
  • Image Processing I
  • Image Processing II
Approximation in Signal Processing
Simplification of digital filtering algorithms using multirate concepts (Invited Paper)
Sanjit K. Mitra
The polyphase decomposition of a sequence is a useful tool in multirate signal processing such as in the design of computationally efficient decimators and interpolators, the design of analysis/synthesis filter banks, and the development of fast discrete transform algorithms. This paper reviews a recently introduced generalization of the polyphase decomposition concept and outlines some of its applications in the simplification of digital filtering algorithms such as the design and implementation of finite-impulse-response (FIR) digital filters, the design of decimators and interpolators, and discrete Fourier transform computations.
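As a concrete illustration of the basic polyphase idea (our sketch, not the paper's generalized decomposition), the following Python/NumPy snippet implements an FIR decimator in which the filter is split into M subfilters so that every multiplication runs at the low output rate; the filter and signal are arbitrary stand-ins.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Compute np.convolve(x, h)[::M] via a polyphase decomposition of h,
    so that every multiplication runs at the low (decimated) rate."""
    n_out = (len(x) + len(h) - 1 + M - 1) // M
    y = np.zeros(n_out)
    c = np.convolve(x[0::M], h[0::M])          # phase 0: no delay
    y[:len(c)] += c
    for k in range(1, M):                      # phases 1..M-1: one-sample delay
        c = np.convolve(x[k::M], h[M - k::M])
        y[1:1 + len(c)] += c
    return y

# Sanity check against direct filter-then-downsample.
rng = np.random.default_rng(0)
x, h, M = rng.standard_normal(64), rng.standard_normal(9), 4
assert np.allclose(polyphase_decimate(x, h, M), np.convolve(x, h)[::M])
```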
Controlled redundancy in interpolation-based neural nets
Harsha M. Wabgaonkar, Allen R. Stubberud
In this paper, we deal with the problem of associative memory synthesis via multivariate interpolation. We present an abstract yet simple formalism for detecting and eliminating redundant input data from the set of exemplars. The remaining pairs are then stored so as to introduce controlled redundancy by replication of the corresponding neurons. The redundancy is detected via orthogonalization carried out in a Reproducing Kernel Hilbert Space setting.
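To make the orthogonalization step concrete, here is a hedged Python sketch (our illustration, not the authors' formalism): each exemplar is kept only if its image in a Gaussian-kernel RKHS is not already, up to a tolerance, in the span of the exemplars kept so far. The kernel choice, tolerance, and data are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, s=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * s * s))

def select_independent(X, tol=1e-6, s=1.0):
    """Greedily keep exemplars whose feature-space image is not (numerically)
    in the span of those already kept; the rest are redundant."""
    kept = []
    for i, x in enumerate(X):
        kxx = gaussian_kernel(x, x, s)
        if kept:
            K = np.array([[gaussian_kernel(X[p], X[q], s) for q in kept] for p in kept])
            k = np.array([gaussian_kernel(X[p], x, s) for p in kept])
            resid = kxx - k @ np.linalg.solve(K, k)   # squared residual norm
        else:
            resid = kxx
        if resid > tol:
            kept.append(i)
    return kept

X = np.array([[0.0], [1.0], [1.0], [2.0]])   # one duplicate exemplar
print(select_independent(X))                  # -> [0, 1, 3]
```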
Curves and Surfaces I
Tensor products and hierarchical fitting
David R. Forsey, Richard H. Bartels
We survey research in progress involving the hierarchical fitting of regular data to tensor product splines. The approach is suitable for data in any dimension and for splines in any number of variables.
Approximation I
Surface reconstruction with discontinuities
Thin-plate spline fitting is often used to fit smooth surfaces to a collection of data. However, discontinuities in the data are smoothed away by this technique, because several underlying assumptions of spline fitting do not hold when discontinuities are present. Data points across a discontinuity act as outliers, and the theory of robust statistics can be used to deal with them. In this paper, we present one way to apply the theory of robust statistics to the problem of discontinuous surface fitting.
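A minimal sketch of the robust-fitting idea, assuming a Huber-style weight function and an ordinary least-squares inner loop (the paper's actual estimator and spline model may differ):

```python
import numpy as np

def irls_fit(A, z, delta=1.0, iters=20):
    """Iteratively reweighted least squares with Huber weights: residuals
    larger than delta are downweighted, so samples across a discontinuity
    stop dominating the fit."""
    w = np.ones(len(z))
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * z, rcond=None)
        r = np.abs(z - A @ coef)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return coef

# A flat profile with a step: ordinary least squares tilts the fit,
# while the robust fit stays close to the dominant flat sheet.
x = np.linspace(0, 1, 50)
z = np.where(x < 0.8, 0.0, 5.0)
A = np.column_stack([np.ones_like(x), x])       # 1-D section for brevity
print(irls_fit(A, z), np.linalg.lstsq(A, z, rcond=None)[0])
```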
Vector field interpolation using robust statistics
Jialin Zhong, John Juyang Weng, Thomas S. Huang
This paper investigates the problem of interpolating, under physical constraints, 3-D vector fields from sample vectors at irregular positions. This problem arises from the analysis of fluid motion, but our results can also be used in such areas as geometric modeling, data approximation, and the analysis of other types of nonrigid motion. The algorithm proposed in this paper combines generalized multivariate quadratic interpolation and physical constraints in one step to form an over-determined linear equation system. The least squares solution of this system gives the coefficients of the interpolation. Since the interpolation is done in one step and is noniterative, it is computationally efficient. We use methods from robust statistics to detect outliers in the sample data, so that the result remains stable in the presence of gross errors. Another merit of our scheme is that, by incorporating physical constraints into the linear equation system, the algorithm takes into account the characteristics of the vector field and is much less sensitive to noise. The algorithm is applied to both synthesized and real data representing 3-D fluid vector fields. With the application to 3-D fluid flow in mind, we study the applicability of physical constraints in fluid kinematics and analyze the sources of noise in the real data acquisition. A comparison of our algorithm with previous work shows its robustness. The results of interpolating real flow data are presented.
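The following Python sketch illustrates the one-step, over-determined formulation on a deliberately simplified model: a linear velocity field with an incompressibility (zero-divergence) constraint appended as an extra weighted equation. The paper uses generalized multivariate quadratic interpolation; the linear field here is only for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((30, 3))
A_true = np.array([[0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])        # a divergence-free rotation
vel = pts @ A_true.T + 0.05 * rng.standard_normal((30, 3))

# Unknowns: the 9 entries of A and the 3 entries of b (12 total).
rows, rhs = [], []
for p, v in zip(pts, vel):
    for i in range(3):                      # one equation per velocity component
        r = np.zeros(12)
        r[3 * i:3 * i + 3] = p              # A[i, :] . p
        r[9 + i] = 1.0                      # + b[i]
        rows.append(r); rhs.append(v[i])
# Physical constraint: div v = trace(A) = 0, appended with a large weight.
c = np.zeros(12); c[0] = c[4] = c[8] = 1.0
rows.append(100.0 * c); rhs.append(0.0)

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
A_fit = sol[:9].reshape(3, 3)
print(np.trace(A_fit))                      # ~0: the constraint is respected
```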
Recovering signals from inaccurate data
Boleslaw Z. Kacewicz, Marek A. Kowalski
We present results on optimal recovery of band- and energy-limited signals from their inaccurate samples or inner products involving prolate spheroidal wave functions.
Curves and Surfaces I
How to compute offsets without self-intersection (Invited Paper)
Ching-Shoei Chiang, Christoph M. Hoffmann, Robert E. Lynch
Traditional techniques for computing offsets are local in nature and lack good criteria for eliminating possible self-intersections of the offset. Methods based on integrating differential equations and image processing do not lack such criteria but seem to require constructing the solution in the ambient space, i.e., in one dimension larger than the offset. We investigate such methods.
Approximation I
Morphological decomposition of natural surfaces (Invited Paper)
Bianca Falcidieno, Michela Spagnuolo
The main goal of surface characterization is to reduce a surface to a compact symbolic description that efficiently stores information about its morphological structure. In the context of polyhedral surfaces, characteristic regions, i.e., regions of convex, concave, planar, and saddle shape, are proposed as structural surface components and are defined by taking into account geometric relationships between triangles. Based on these areal features, a symbolic representation of the surface called the characteristic region configuration graph is produced, where the nodes correspond to the surface regions while the arcs and hyperarcs correspond to the surface's characteristic lines and points.
Curves and Surfaces I
Modeling with multivariate B-spline surfaces over arbitrary triangulations
Phillip Fong, Hans-Peter Seidel
This paper describes the first results of a test implementation of the new multivariate B-splines as recently developed for quadratics and cubics. The surface scheme is based on blending functions and control points and allows us to model C^(k-1)-continuous piecewise polynomial surfaces of degree k over arbitrary triangulations of the parameter plane. The surface scheme exhibits both affine invariance and the convex hull property, and the control points can be used to manipulate the shape of the surface locally. Additional degrees of freedom in the underlying knot net allow for the modeling of discontinuities. Explicit formulas are given for the representation of polynomials and piecewise polynomials as linear combinations of B-splines.
Fast space-variant texture-filtering techniques
Eugene Fiume, Robert Lansdale
The advent of 'photorealistic' rendering has caused a great increase in the use of discrete texture mapping for image synthesis. Strategies to avoid the introduction of visible artifacts in texture mapping are varied. Most often, texture filtering is at the heart of these strategies. A particularly difficult problem to solve efficiently arises when a filtering operation varies significantly from pixel to pixel, because it is often not clear how to perform preprocessing whose cost would be amortized over many pixels. We call such filtering operations space-variant filtering. In this paper, we describe a variety of such techniques, and we describe our recent attempts to develop constant-time implementations of space-variant filtering operations.
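For reference, a direct (non-constant-time) space-variant Gaussian filter looks like the NumPy sketch below; its per-pixel cost grows with the kernel radius, which is exactly the inefficiency constant-time techniques aim to remove. The 3-sigma support and the test image are assumptions.

```python
import numpy as np

def space_variant_blur(img, sigma):
    """Blur with a Gaussian whose width varies per pixel (sigma is an array
    the same shape as img). Direct evaluation: O(r^2) work per pixel."""
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            s = max(sigma[y, x], 1e-3)
            r = int(3 * s) + 1                       # 3-sigma support
            ys = slice(max(y - r, 0), min(y + r + 1, H))
            xs = slice(max(x - r, 0), min(x + r + 1, W))
            yy, xx = np.mgrid[ys, xs]
            w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * s * s))
            out[y, x] = np.sum(w * img[ys, xs]) / np.sum(w)
    return out

img = np.zeros((32, 32)); img[16, 16] = 1.0
sig = np.fromfunction(lambda y, x: 0.5 + 2.0 * x / 31, (32, 32))
out = space_variant_blur(img, sig)   # the blur widens from left to right
```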
C++ splines classes for prototyping
Allan H. Vermeulen, Richard H. Bartels
C++ classes for the building and prototyping of spline applications are described. Flexibility, extensibility, and maximal code reuse are achieved by separating the concept of a basis from the implementations of curves, functions, and surfaces.
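The key design idea, separating the basis from the curve that uses it, can be sketched in a few lines (shown here in Python rather than the paper's C++; the class and method names are ours):

```python
from math import comb
import numpy as np

class Basis:
    """Abstract basis: the only interface a curve needs."""
    def size(self): raise NotImplementedError
    def evaluate(self, i, t): raise NotImplementedError  # i-th basis function at t

class BernsteinBasis(Basis):
    def __init__(self, degree): self.n = degree
    def size(self): return self.n + 1
    def evaluate(self, i, t):
        return comb(self.n, i) * t ** i * (1 - t) ** (self.n - i)

class Curve:
    """A curve is a basis plus control points; swapping in another basis
    (B-spline, ...) leaves this class untouched."""
    def __init__(self, basis, ctrl):
        assert len(ctrl) == basis.size()
        self.basis, self.ctrl = basis, np.asarray(ctrl, float)
    def __call__(self, t):
        return sum(self.basis.evaluate(i, t) * self.ctrl[i]
                   for i in range(self.basis.size()))

c = Curve(BernsteinBasis(2), [[0, 0], [1, 2], [2, 0]])
print(c(0.5))    # [1. 1.]: midpoint of the quadratic Bezier
```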
Approximation II
Knot insertion algorithms for splines (Invited Paper)
Phillip J. Barry
Knot insertion is one of the most important tools for spline curves in computer graphics and geometric modeling. This paper is a survey of knot insertion. In particular, it lists certain knot insertion algorithms for B-spline curves, discusses uses of these algorithms, and shows how these algorithms can be extended to rational B-spline curves and geometrically continuous splines.
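As one concrete instance of the algorithms surveyed, here is a hedged Python sketch of Boehm's single-knot-insertion step; the knot vector and control points in the example are arbitrary.

```python
import numpy as np

def insert_knot(U, P, p, u):
    """Boehm's algorithm: insert u once into knot vector U of a degree-p
    B-spline with control points P. The curve itself is unchanged."""
    U = np.asarray(U, float); P = np.asarray(P, float)
    k = np.searchsorted(U, u, side='right') - 1        # span: U[k] <= u < U[k+1]
    Q = []
    for i in range(k - p + 1, k + 1):
        a = (u - U[i]) / (U[i + p] - U[i])
        Q.append((1 - a) * P[i - 1] + a * P[i])
    newP = np.vstack([P[:k - p + 1], Q, P[k:]])
    newU = np.insert(U, k + 1, u)
    return newU, newP

# Quadratic B-spline (p=2) on a clamped knot vector; insert u = 0.5.
U = [0, 0, 0, 1, 1, 1]
P = [[0, 0], [1, 2], [2, 0]]
newU, newP = insert_knot(U, P, 2, 0.5)
print(newP)          # 4 control points: (0,0), (0.5,1), (1.5,1), (2,0)
```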
Knot removal for scattered data
Alain Le Mehaute, Yvon Lafranche
We present a strategy for reducing the number of knots in the representation of a piecewise polynomial approximation of a function defined on scattered data, without perturbing the approximation by more than a given tolerance. The method removes some (or all) of the interior knots; their number and location are determined automatically. Applications include data approximation, data storage, and image reconstruction.
Curves and Surfaces II
Rendering of NURBS curves (Invited Paper)
Les A. Piegl
Integer versions of subdivision and corner cutting algorithms of NURBS curves are presented. The algorithms are used to render NURBS curves of any degree on a raster device by either computing a polygonal approximation or by turning on pixels that are closest to the curve. Because of the integer arithmetic used, the algorithms are easily cast in hardware.
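A floating-point sketch of the underlying corner-cutting recursion for a (non-rational) Bezier segment is given below; the paper's contribution is the all-integer formulation, which this sketch does not attempt. The half-pixel flatness tolerance is an assumption.

```python
import numpy as np

def subdivide(P):
    """de Casteljau corner cutting at t = 1/2: split a Bezier control
    polygon P into left and right halves."""
    P = np.asarray(P, float)
    left, right = [P[0]], [P[-1]]
    while len(P) > 1:
        P = 0.5 * (P[:-1] + P[1:])
        left.append(P[0]); right.append(P[-1])
    return np.array(left), np.array(right)[::-1]

def flatten(P, pts, tol=0.5):
    """Recursive corner cutting until each piece is flat, yielding a
    polygonal approximation of the curve. By linear precision of the
    Bernstein basis, the control-point deviation from the chord bounds
    the curve's deviation."""
    chord = np.linspace(P[0], P[-1], len(P))
    if np.max(np.abs(P - chord)) < tol:
        pts.append(P[-1]); return
    L, R = subdivide(P)
    flatten(L, pts, tol); flatten(R, pts, tol)

pts = [np.array([0.0, 0.0])]
flatten(np.array([[0, 0], [20, 40], [40, 0]], float), pts)
print(len(pts))   # vertices of a polyline within half a pixel of the curve
```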
Resolvents and their applications in computer-aided geometric design
Hang K. Du, Ronald N. Goldman
Elimination theory was originally developed in the 18th and 19th centuries to find solvability criteria for systems of polynomial equations. We present some new resolvent methods suitable for solving important problems in computer-aided geometric design (CAGD), including implicitization, inversion, intersection, computation of self-intersection points, and detection of unfaithful parametrizations of rational curves and surfaces.
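Implicitization by eliminating the parameter with a resultant can be reproduced in a few lines of Python with SymPy (our elementary example, not the paper's resolvent construction):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
# Parametric curve: x(t) = t^2, y(t) = t^3 (the semicubical parabola).
f = x - t**2
g = y - t**3
# Eliminating t via the resultant yields the implicit equation.
implicit = sp.resultant(f, g, t)
print(implicit)     # x**3 - y**2 (up to sign)
```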
Bezier representation for cubic surface patches
Suresh Kumar Lodha, Joe D. Warren
We describe a new method for creating rectangular Bezier surface patches on an implicit cubic surface. Traditional techniques for representing surfaces have relied on parametric representations, which, in general, generate surfaces of implicit degree eight in the case of rectangular Bezier surfaces with rational biquadratic parametrization. We thus achieve low-degree algebraic surface patch construction by reducing the implicit degree from eight to three. The construction uses a rectangular biquadratic Bezier control polyhedron, embedded within a tetrahedron and satisfying a projective constraint. The control polyhedron and the resulting cubic surface patch satisfy all of the standard properties of parametric Bezier surfaces, including interpolation of the corners of the control polyhedron and the convex hull property.
Classification using normal curves
Helmut Pottmann, Anthony D. DeRose
The analysis of parametric curves for the presence of shape characteristics such as loops, cusps, and inflections is considered. In contrast to previous methods that use heavily algebraic techniques to study polynomial cubic curves, our approach, based on the classical notion of normal curves, is predominantly geometric. The approach not only provides new insight into the cubic polynomial case, it can also be applied to rational curves and to curves of degree greater than three. We demonstrate the use of the technique to analyze planar cubic curves, planar rational cubic curves, and polynomial curves of degree four in the plane and in three-space.
Image Processing I
Electronic skeletons: modeling skeletal structures with piecewise algebraic surfaces (Invited Paper)
Chandrajit L. Bajaj
We present two algorithms to construct C1-smooth models of skeletal structures from CT/NMR voxel data. The boundary of the reconstructed models consists of a C1-continuous mesh of triangular algebraic surface patches. One algorithm first constructs C1-continuous piecewise conic contours on each of the CT/NMR data slices and then uses piecewise triangular algebraic surface patches to C1-interpolate the contours on adjacent slices. The other algorithm works directly in voxel space and replaces an initial C0 triangular facet approximation of the model with a highly compressed C1-continuous mesh of triangular algebraic surface patches. Both schemes are adaptive, yielding a higher density of patches in regions of higher curvature.
Range image segmentation by controlled-continuity spline approximation for parallel computation
Gilbert Maitre, Heinz Huegli
A classical approach formulates surface reconstruction as a variational problem using two-dimensional surfaces defined by generalized spline functions. We present such an approach for range image segmentation. The distinction of our approach lies in the way discontinuities are detected. The spline is constrained to stay within a certain maximal distance of the discrete measured data, but is free as long as that maximum distance is not reached. Discontinuities emerge at points where the maximum-distance constraint binds the spline. This method leads to a relaxation algorithm that solves the segmentation iteratively by locally applying a relation that, for the membrane spline, is close to the diffusion equation. Being iterative and local, the algorithm is well suited to parallelism. We applied the method to range data from laser scanners using two different surface models: the membrane spline (more adequate for polyhedral objects) and the thin-plate spline (more adequate for curved objects). The results illustrate the practical performance of this method, which is simple, parallel, and controlled by few parameters.
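A minimal NumPy sketch of this kind of constrained relaxation, assuming a membrane (local-averaging) model, a synthetic step image, and our own criterion for marking where the constraint binds; np.roll wraps at the image borders, which a real implementation would treat explicitly:

```python
import numpy as np

def membrane_segment(z, tol, iters=500):
    """Relax a membrane spline toward local averages while clamping it to
    stay within +/- tol of the data z. Pixels where the clamp actually
    bends the surface are discontinuity candidates."""
    u = z.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.clip(avg, z - tol, z + tol)
    return u, np.abs(avg - u) > 1e-6      # surface + active-constraint mask

z = np.zeros((32, 32)); z[:, 16:] = 10.0       # two planar faces, one step edge
u, edges = membrane_segment(z, tol=0.5)
print(np.where(edges.any(axis=0))[0])          # columns 15, 16 (plus 0, 31 from wrap)
```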
Geometric modeling for computer vision
David J. Kriegman, Jean Ponce
Computer vision faces the challenge of exploiting the CAD models available for many manufactured objects. Recently, a variety of new algorithms have been developed that rely on symbolic object representations for recognizing curved three-dimensional objects from images. This paper presents a modeling system for constructing these representations from parametric and implicit algebraic surface specifications. A combination of homotopy continuation and curve tracing is used to compute set operations. Specifically, an algorithm for tracing curves defined implicitly in R^(n+1) by n polynomial equations in n + 1 variables is presented that correctly characterizes the topology of the intersection curves and respects singular points. The curve tracing algorithm is also used to render line drawings for which parametric representations of occluding contours are unavailable. Hidden lines are easily removed by explicitly constructing an image structure graph of the smooth curve branches and singular points; it is then sufficient to trace a ray at a single point on each curve branch. A preliminary parallel implementation, distributed over a network of SPARCstations, appears promising. For objects modeled by this system, algorithms for constructing their aspect graphs, computing their stable poses, and recognizing them in different types of images have been developed.
Performance in noise of a diffusion-based shape descriptor
Murray H. Loew, S. Hwang
A diffusion-like process, analogous to the thermodynamic diffusion of heat or of gas molecules, is used to describe the shape of two- or three-dimensional objects. It is effective at identifying extrema of curvature as would be used to segment the boundary, and also at characterizing the types of line segments that lie between the extrema. Both of those operations are essential for qualitative descriptions of images, as would be required by an approach based on geometrical icons (geons). The region need not be convex. The descriptor is invariant to several common transformations, including rotation. It can be implemented easily on parallel machines, does not pose problems with the definition of slope, and appears to be capable of dealing with the matching of partially occluded objects. The descriptor's performance is essentially independent of user-supplied parameters. It is shown that noise does not affect the accuracy of identification of the extrema -- a simple stopping rule for the process ensures that the structural parts of the boundary are preserved while the noise is suppressed. The procedure is compared and contrasted to scale-based boundary-description methods.
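The flavor of the approach, diffusion smoothing of a boundary signal followed by extraction of curvature extrema, can be sketched as follows (the synthetic star-shaped boundary, the explicit diffusion step, and the iteration count are our assumptions, not the paper's descriptor):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
r = 1 + 0.3 * np.cos(4 * theta)                  # star-shaped test boundary
x, y = r * np.cos(theta), r * np.sin(theta)

def d(v):                                        # periodic central difference
    return (np.roll(v, -1) - np.roll(v, 1)) / 2

dx, dy, ddx, ddy = d(x), d(y), d(d(x)), d(d(y))
k = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5   # discrete curvature

for _ in range(50):                              # diffusion-like smoothing
    k += 0.25 * (np.roll(k, 1) - 2 * k + np.roll(k, -1))

peaks = [i for i in range(256)
         if k[i] > k[i - 1] and k[i] > k[(i + 1) % 256]]
print(len(peaks))                                # 4: one extremum per lobe survives
```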
Local cardinal spline interpolation and its application to image processing
Andrew K. Chan, Charles K. Chui, Jun Zha, et al.
This paper presents an image interpolation algorithm using the recently developed local cardinal interpolatory spline (LCIS). The procedures for constructing the LCIS basis are described for both the univariate and bivariate cases. This new LCIS algorithm is very efficient and can be implemented easily. Without the need for a mapping procedure, this method is faster than any other polynomial interpolation approach. The C2 property of the LCIS also allows a gradient operator to be constructed for edge detection. Both the image interpolation and edge detection algorithms are compared with existing methods in two different examples.
Image smoothing and differentiation with regularized filters
The fundamental problem of smoothing and differentiating noisy images is ill-posed, and common differentiation filters give very unreliable results. We examine several sources of error and show a way to eliminate them. In particular: (1) we show that regularization-based filters perform better than the Gaussian, assuming the data changes slowly relative to the noise; (2) truncation of an infinite filter is very damaging for derivatives, so the common idealized regularization methods cannot be used. We construct finite, discrete regularization-based filters using a spline approximation.
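One standard way to build a finite smoothing-derivative filter is local polynomial least squares (the Savitzky-Golay construction), which we use below as a stand-in for the paper's spline-based filters; the window size, degree, and test signal are assumptions.

```python
import numpy as np

def lsq_derivative_filter(half, deg=3):
    """Finite derivative filter: fit a degree-`deg` polynomial to each
    window of 2*half+1 samples by least squares and differentiate it at
    the window center."""
    x = np.arange(-half, half + 1)
    V = np.vander(x, deg + 1, increasing=True)     # columns: 1, x, x^2, ...
    # Row 1 of the pseudoinverse maps samples to the x-coefficient, f'(0).
    return np.linalg.pinv(V)[1]

h = lsq_derivative_filter(4)
t = np.linspace(0, 2 * np.pi, 64)
noisy = np.sin(t) + 0.02 * np.random.default_rng(2).standard_normal(64)
d = np.convolve(noisy, h[::-1], mode='same') / (t[1] - t[0])
print(d[32], np.cos(t[32]))   # the filtered derivative tracks cos(t) within the noise
```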
Image smoothing with shape invariance and L1 curvature constraints
Michael Blauer, Martin D. Levine
We investigate the regularization problem with a robust L1 (absolute value) smoothness metric. This metric provides the best tradeoff between the standard L2 norm, which is nonrobust and strongly smoothing, and the nonconvex metrics associated with line-process methods, which require very costly stochastic optimization. We show that a solution can be obtained from a convex analog neural network by replacing the nondifferentiable L1 function with a smoothed version corresponding to a Huber estimator. The resulting analog network implements a direct descent process on the original cost function and involves sigmoid nonlinearities that are the derivative of the smoothed L1 norm. For digital computation, however, the direct descent approach is very ineffective because the objective function is nearly nondifferentiable and the line search cannot be performed analytically. We address this problem by introducing a shape invariance constraint, derived from the original image by assuming that the sign of the response to the regularization kernel remains unchanged. The problem may then be transformed into a sparse quadratic program with linear inequality constraints. In turn, this program may be solved very effectively through its dual by a coordinate relaxation algorithm that involves only nonnegativity constraints. The resulting algorithm is extremely simple and amounts to iterative application of a nonlinear operator, consisting of convolution followed by rectification, on an auxiliary Lagrangian image. The algorithm admits very simple parallel and asynchronous implementation.
Corner detection using spline wavelets
Andrew K. Chan, Charles K. Chui, Jun Zha, et al.
Detection of corners in an image is very useful in computer vision and pattern recognition. The existing algorithms for corner detection seem to be insufficient in many situations. The corner detection algorithm proposed in this paper is based on spline-wavelet decompositions. Corner and edge detectors are constructed from the 2-D wavelet transform coefficients. A somewhat sophisticated thresholding technique is applied to remove noise and minor irregularities in the images. Noise can be further reduced if additional processing is applied to the component images at all resolutions. Information on the edges and corners is contained in the component images in all the octaves to facilitate precise localization. A real-time wavelet decomposition algorithm is developed for the corner and edge detectors. It is very efficient and requires very little memory, since most of the computations involve only simple moving average operations and sub-sampling.
Image Processing II
Qualitative descriptors for digital contour segments (Invited Paper)
Kathryn Shaker Baummer, Kie Bum Eom, Murray H. Loew
Earlier studies of the human vision system suggest that a complex object may be recognized from the qualitative features associated with its boundary. Since the human vision system uses qualitative features for object recognition, it is extremely robust to deformation caused by noise, obstruction, scale change, or optical distortion. We considered various qualitative descriptors for digital images. The descriptors developed in this research are based on the estimated curvature function from digital boundaries and can discriminate features such as straightness of a contour segment, perpendicularity of two contour segments, parallelness of two straight contour segments, parallelness of two curved contour segments in a global sense, and the direction of convergence if two straight contour segments are not parallel. We demonstrated that the qualitative descriptors developed in this paper can be applied for identification of elementary shapes, such as cylinders, bricks, cones, etc. We also discussed the recognition of more complex shapes after decomposing them into simpler components.
Contour lines of a C1 surface defined over a triangulation
Marc Daniel
We describe a method for finding contour lines of a C1 surface that may contain edges with only C0 continuity. The grid supporting the marching step for each line is triangular. Each triangle of the initial triangulation is uniformly subdivided, with an independent recursion depth controlled by the edge lengths of its subtriangles and the altitude differences between the associated vertices. Subtriangles within each triangle are numbered, and contouring is based on a set of functions supplying the connections between subtriangles and triangles. Break-line crossings are detected. As in all intersection problems, the processing of particular cases entails a significant increase in complexity; solutions are given. Optional B-spline smoothing of contour lines is provided. It preserves existing ridges, and adaptable G1 continuity at the junction points of closed curves is supplied.
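The per-triangle contouring step reduces to intersecting a level with the linear interpolant over each (sub)triangle; a minimal Python version (our sketch, ignoring break lines and the special cases discussed above):

```python
import numpy as np

def contour_triangle(p, f, level):
    """Return the contour segment of the linear interpolant of f over one
    triangle p (3 x 2 vertex array), or None if the level does not cross it."""
    pts = []
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        if (f[i] - level) * (f[j] - level) < 0:       # edge straddles the level
            t = (level - f[i]) / (f[j] - f[i])
            pts.append(p[i] + t * (p[j] - p[i]))
    return np.array(pts) if len(pts) == 2 else None

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])
print(contour_triangle(tri, vals, 0.5))   # segment from (0.5, 0) to (0, 0.25)
```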
Approximation II
Matrix approach of the convergence analysis of recursive subdivision algorithms for parametric surfaces
Ruibin Qu
In this paper, we construct a general binary subdivision algorithm (BSA) for surfaces over uniform triangulations and then present a matrix approach to its convergence analysis. In the analysis, the idea of 'cross differences of directional divided differences' (CDD) is introduced, and it is shown that the convergence of the scheme is characterized by its CDD. This approach is a generalization of the 'dyadic parametrization' technique first used by Dyn, Gregory, and Levin to analyze uniform BSAs for curves. Conditions for the scheme to generate Cn (n >= 0) surfaces are studied. As an example, the explicit C0 and C1 convergence conditions of the 'butterfly scheme' (introduced by Dyn, Gregory, and Levin) and of a 10-point interpolatory BSA are formulated.
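For intuition, the curve analogue analyzed by Dyn, Gregory, and Levin, the 4-point interpolatory scheme, is easy to state; the butterfly scheme is its surface counterpart. A hedged Python sketch (the example polygon and iteration count are ours):

```python
import numpy as np

def four_point(P, w=1 / 16.0):
    """One round of the Dyn-Gregory-Levin 4-point interpolatory scheme on a
    closed control polygon P (n x d). Old points are kept; a new point is
    inserted on each edge. The scheme is C1 for a range of w including 1/16."""
    n = len(P)
    new = np.empty((2 * n, P.shape[1]))
    for i in range(n):
        new[2 * i] = P[i]
        new[2 * i + 1] = ((0.5 + w) * (P[i] + P[(i + 1) % n])
                          - w * (P[(i - 1) % n] + P[(i + 2) % n]))
    return new

P = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # a square
for _ in range(4):
    P = four_point(P)
print(len(P))   # 64 points on a smooth closed curve through the original corners
```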
Image Processing II
Surface modeling in heart motion analysis
This paper presents a new approach to left ventricle surface modeling for analyzing the dynamic behavior of the heart from biplane cineangiograms. We decompose the motion and deformation analysis of the left ventricle into two stages by modeling its moving surface in a coarse-to-fine fashion. Such two-step surface modeling enables us to formulate the complex motion analysis as a series of well-defined parameter estimation problems. We model the globally deformable surface by a parametrized family of surfaces known as superquadrics, which are able to model expansion/contraction and twisting deformations. The deformation parameters are obtained by fitting the given 3-D data to the superquadric modeling primitives. The residues of such a fitting measure the local deformations that the global modeling primitives are unable to capture. These residues are then interpolated by spherical harmonic surface modeling primitives to form a residue surface. Local deformation tensor analysis of the left ventricle is therefore based on the overall surface constructed by superposing the local spherical harmonic residue surfaces on the global superquadric surface. Animations of the estimated dynamic surface are generated using scientific visualization techniques in order to examine the complex spatial and temporal behavior of the left ventricle. These animations are consistent with a priori knowledge of the left ventricle and hence demonstrate the success of the surface modeling algorithm.
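The global primitive is easy to reproduce: the following Python sketch samples a superquadric from its standard parametric form and checks the implicit equation (the sizes and exponents are arbitrary; the paper additionally estimates twisting and deformation parameters, which this sketch omits):

```python
import numpy as np

def spow(v, e):
    return np.sign(v) * np.abs(v) ** e          # signed power, keeps the octant

def superquadric(a1, a2, a3, e1, e2, n=40):
    """Sample the standard parametric form of a superquadric surface."""
    eta, w = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, n),
                         np.linspace(-np.pi, np.pi, n))
    x = a1 * spow(np.cos(eta), e1) * spow(np.cos(w), e2)
    y = a2 * spow(np.cos(eta), e1) * spow(np.sin(w), e2)
    z = a3 * spow(np.sin(eta), e1)
    return x, y, z

a1, a2, a3, e1, e2 = 1.0, 1.0, 1.5, 0.8, 1.0
x, y, z = superquadric(a1, a2, a3, e1, e2)
# The samples satisfy the superquadric implicit equation:
F = ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
     + np.abs(z / a3) ** (2 / e1))
print(np.allclose(F, 1.0))                       # True
```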
Measurement of brain compartment volumes from MRI data using region growing and mixed-volume methods
Gilbert R. Hillman, T. A. Kent, Jacob M. Agris
MRI data were collected from normal control subjects and from patients having AIDS or closed head injury (CHI). Both T1-weighted and T2-weighted images were collected; in some cases transverse images were made, while in others coronal images were collected. The images were analyzed in three ways, each of which determined white matter (WM), gray matter (GM), and CSF volume, and the results were compared. Segmentation by interactive threshold selection or manual outlining is still the most commonly used method for compartment volume analysis. This method was applied to the control and AIDS brains, with repetition by two trained observers. These results were compared with those from a second method in which each voxel is viewed statistically as a mixture of whichever two compartments are closest at that anatomical location. Automatic thresholding, followed by regional skeletonization, is used to identify a small set of pixels in each slice that represent the central portion of each of the three regions of interest. All pixels in the slice are then subjected to an interpolation procedure in which their fractional composition is determined from these three intensity values. The intensities from the T1-weighted images show contrast between GM, WM, and CSF, and these are used for the volume computation. Geometric information from the T2-weighted image is used for the location of the authentic compartment locations, as these images show strong contrast between the CSF and the skull. The third method studied is based on a three-dimensional region-growing algorithm that estimates each volume compartment by growing a volume from a seed point, limiting the growth based on spatial and feature criteria. The feature bounds are set restrictively so that GM, WM, and CSF regions are not contiguous, leaving a volume of mixed voxels between the regions. The distributions of intensities in each region are then used to interpolate the most likely composition of the unassigned voxels, so that volume mixing is assumed only in the spaces between the assigned regions. This method is quite robust, requiring little operator judgement. The volumes obtained by these methods are not substantially different, and the methods differ primarily with respect to interoperator variability and convenience of use. The third method also differs from the others in that it treats the set of slices as a single 3-dimensional data set, making better use of regional information. All methods reveal significant changes in brain compartment volume in cases of CNS pathology.
Dynamical systems approach to low-level integration of motion and binocular stereopsis
Edward J. Altman
The representation of trajectories for moving edges by a nonlinear dynamical system is presented as a technique for the integration of motion and binocular disparity. The integration is accomplished through the monolithic use of dynamical systems as the computational mechanism. In this approach, the motion of a simple feature through a small region of the image is modeled by a dynamical system with three variables. Visual processing by the model dynamics occurs within a retinotopic array of identical dynamical systems with nearest neighbor coupling. A form of segmentation, called feature linking, is accomplished by the convergence of neighboring array elements to the same stable mode of coherent oscillations involving only those dynamical systems that respond to the co-occurrence of the same feature and motion. Each model dynamic for a moving feature is driven by concurrent inputs from a pair of Gabor filters in phase quadrature. The stable modes of the model dynamics consist of limit cycles with nearly linear phase. The co-occurrence of a particular feature and velocity of motion in the image sequence optimally selects a particular limit cycle. The integration of motion and binocular stereopsis is achieved through the use of the linear phase property of the limit cycles to recover the positional disparity between similar features moving through corresponding regions in the retinotopic arrays for the left and right images. Bifurcation theory is applied to rigorously describe the convergence properties of the dynamical systems. The focus of this paper concerns the mapping of visual motion onto analog dynamical systems as an initial step toward a theory of low-level integration of motion and binocular stereopsis.
Kinematics of interface evolution with application to active contour models
In physics, a large class of problems, such as crystal growth or flame-front propagation, concerns the motion of a deformable boundary separating time-dependent domains, where the interface itself satisfies an equation of motion. Recently, similar questions have been introduced in image processing through the concept of physically based active contour models, or snakes. In snake modeling, a deformable boundary endowed with elastic properties interacts with a constant external field derived from image properties. In the most general case, the interfacial motion is governed by a set of partial differential equations that nonlinearly couple interface intrinsics and external fields. In this paper, we present a general study of the kinematics of deformable regular (d-1)-dimensional interfaces evolving according to a first-order dynamic in a d-dimensional (d >= 2) space, in terms of their intrinsic geometric properties. We formulate local equations of motion and derive evolution theorems. These results are then applied to the kinematic study of a specific 2-dimensional active contour model whose optimization is performed via a first-order deformation process. This provides significant insight into the instantaneous behavior of snake-like models as well as the nature of their steady states.
Curves and Surfaces I
Approximation of sweep surfaces by tensor product NURBS
Mark Bloomenthal, Richard F. Riesenfeld
Methods for approximating sweep surfaces by tensor product NURBS are presented. Sweep surfaces are generated by sweeping a (possibly deforming) NURBS cross-section curve along a NURBS axis curve. A general form for NURBS approximation to sweep surfaces is derived and expressed in terms of the approximation to a set of offset curves of the axis curve. The actual algorithm used to construct the surface approximation depends on the nature of the desired deformation and change in orientation that the cross-section undergoes as it is swept along the axis.
Approximation in Signal Processing
Fast approximate factor analysis
The principal orthogonal factor analysis or Karhunen-Loeve algorithm may be sped up by a low-complexity preprocessing step. A fast transform is selected from a large library of wavelet-like orthonormal bases so as to maximize transform coding gain for an ensemble of vectors. Only the top few coefficients in the new basis, in order of variance across the ensemble, are then decorrelated by diagonalizing the autocovariance matrix. The method has computational complexity O(d^2 log d + d'^3) for training and O(d log d + d'^2) for classifying a d-dimensional system, where d' << d. One application is described: the reduction of an ensemble of 16,384-pixel face images to a 10-parameter space using a desktop computer, retaining 90% of the variance of the ensemble.
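A toy Python version of the pipeline, substituting a fixed Haar transform for the paper's best-basis search over a wavelet-packet library (the ensemble, dimensions, and d' = 8 are assumptions):

```python
import numpy as np

def haar(v):
    """Orthonormal Haar transform of a vector whose length is a power of 2."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2)
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2)
        v[:n // 2], v[n // 2:n] = a, d
        n //= 2
    return v

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64)) @ rng.standard_normal((64, 64))  # correlated ensemble
W = np.apply_along_axis(haar, 1, X)            # fast transform: O(d) per vector
keep = np.argsort(W.var(axis=0))[::-1][:8]     # top d' = 8 coefficients by variance
Y = W[:, keep] - W[:, keep].mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Y.T))     # diagonalize only a d' x d' matrix
Z = Y @ evecs                                  # approximate Karhunen-Loeve coordinates
print(Z.shape)                                 # (200, 8)
```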