Proceedings Volume 0975

Advanced Algorithms and Architectures for Signal Processing III

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 February 1988
Contents: 1 Session, 37 Papers, 0 Presentations
Conference: 32nd Annual International Technical Symposium on Optical and Optoelectronic Applied Science and Engineering 1988
Volume Number: 0975

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
All Papers
Old And New Algorithms For Toeplitz Systems
Richard P. Brent
Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
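As background for the O(n^2) algorithms the survey covers, a minimal sketch (using SciPy's Levinson-type solver as a stand-in, not any specific algorithm from the paper) of solving a symmetric positive definite Toeplitz system and checking it against a general dense solve:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Symmetric positive definite Toeplitz system T x = b, defined
# entirely by its first column c (first row = c for symmetric T).
c = np.array([4.0, 2.0, 1.0, 0.5])
b = np.array([1.0, 2.0, 3.0, 4.0])

# Levinson-type O(n^2) solver that exploits the Toeplitz structure.
x_fast = solve_toeplitz(c, b)

# General dense O(n^3) solver that ignores the structure.
x_dense = np.linalg.solve(toeplitz(c), b)

print(np.allclose(x_fast, x_dense))
```

The structured solver touches only the 2n-1 defining entries of T, which is what makes O(n^2) work (and the O(n(log n)^2) methods mentioned above) possible.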
A Theoretical Foundation For The Weighted Checksum Scheme
Cynthia J. Anfinson, Richard P. Brent, Franklin T. Luk
The weighted checksum scheme has been proposed as a low-cost error detection procedure for parallel matrix computations. Error correction has proved to be a much more difficult problem to solve than detection when using weighted checksums. In this paper we provide a theoretical basis for the correction problem. We show that for a distance d+1 weighted checksum scheme, if at most ⌊d/2⌋ errors occur then we can determine exactly how many errors have occurred. We further show that in this case we can correct the errors and give a procedure for doing so.
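To make the idea concrete, here is a hypothetical distance-3 weighted checksum sketch (weights 1 and i+1, one correctable error), illustrating the detect/locate/correct cycle rather than the paper's general distance-(d+1) construction:

```python
import numpy as np

# Protect a data vector with two checksums: unit weights and
# position weights 1..n. A single error e at position k produces
# syndromes d1 = e and d2 = (k+1)*e, so k is recovered as d2/d1 - 1.
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
w = np.arange(1, len(x) + 1)        # weights 1, 2, ..., n
s1, s2 = x.sum(), (w * x).sum()     # stored checksums

# A transient fault corrupts one entry during computation.
y = x.copy()
y[2] += 7.0

# Syndromes against the stored checksums.
d1 = y.sum() - s1                   # error magnitude e
d2 = (w * y).sum() - s2             # position-weighted error
if d1 != 0:
    pos = int(round(d2 / d1)) - 1   # locate the faulty entry
    y[pos] -= d1                    # and correct it
print(np.allclose(y, x))
```

With more weighted checksum vectors the same syndrome logic extends to multiple errors, which is the correction question the paper formalizes.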
Systolic Array For Solving Toeplitz Systems Of Equations
J. Chun, V. Roychowdhury, T. Kailath
Many problems of geophysics, image processing and time series analysis involve the problem of solving Toeplitz systems of equations. We present a fast parallel O(mn) algorithm that solves both square and over-determined Toeplitz systems of equations. The solution is obtained directly from the triangular factorization without using back-substitution. This avoids separate factorization and back-substitution sections, which complicate architectural implementation. This also enables us to eliminate intermediate memory to store the triangular factor. The parallel implementation is carried out in two steps. First, Regular Iterative Algorithms (RIAs) for solving Toeplitz systems of equations are formulated systematically from the mathematical description of our algorithm. The advantage of having RIAs is that the process of mapping the algorithms on regular processor arrays can be done in a systematic manner.
Fast Adaptive RLS Algorithms: A Generalized Inverse Unification
Sanzheng Qiao
This paper presents a generalized inverse unification of some important fast adaptive recursive least squares (RLS) algorithms. This unification exhibits the inside view of those algorithms. Moreover, based on this unification, a new and more stable algorithm for the initial state in prewindowed signal case is given.
A Fast QR-Based Array-Processing Algorithm
James P. Reilly, W. G. Chen, K. M. Wong
We propose a new projection-based algorithm for estimating the angles of arrival of plane waves incident onto arrays of sensors. The method is based on a single QR decomposition of the signal covariance matrix; hence, it is much faster than eigen-based methods which require many QR decompositions. It is shown that optimum performance is attained only if the columns of the covariance matrix are permuted in a prescribed manner before the QR decomposition proceeds. An adjunct to the angle of arrival estimation process is a new eigenvalue-free technique for estimating the number of incident signals. There is no performance penalty associated with either of these new methods. The real-time performance of this technique is enhanced through the use of systolic arrays. A novel systolic array structure is proposed for extracting both the Q and R matrices generated by the QR decomposition.
Approximate Inversion Of Positive Definite Matrices, Specified On A Multiple Band
H. Nelis, E. Deprettere, P. Dewilde
A fast algorithm is presented which can be used to compute an approximate inverse of a positive definite matrix if that matrix is specified only on a multiple band. The approximate inverse is the inverse of a matrix that closely matches the partially specified matrix. It has zeros in the positions that correspond to unspecified entries in the partially specified matrix. It is closely related to the so-called maximum-entropy extension of this matrix. The algorithm is very well suited for implementation on an array processor.
Convergence Of Parallel Block Jacobi Methods
Gautam Shroff, Robert Schreiber
The convergence of a class of Jacobi methods for eigenvalue and singular value problems is established. This class includes some parallel block Jacobi methods that can be efficiently implemented on parallel architectures of different granularities. Membership of any Jacobi method in this class can be easily determined.
Matrix Downdating Techniques For Signal Processing
Adam W. Bojanczyk, Allan O. Steinhardt
We are concerned with a problem of finding the triangular (Banachiewicz-Cholesky) factor of the covariance matrix after deleting observations from the corresponding linear least squares equations. Such a problem, often referred to as downdating, arises in classical signal processing as well as in various other broad areas of computing. Examples include recursive least squares estimation and filtering with a sliding rectangular window in adaptive signal processing, outlier suppression and robust regression in statistics, and the modification of Hessian matrices in the numerical solution of non-linear equations. Formally the problem can be described as follows: given an n x n lower triangular matrix L and an n-dimensional vector x such that LL^T - xx^T > 0, find an n x n lower triangular matrix L' such that L'L'^T = LL^T - xx^T. We will look at the following issues relevant to the downdating problem: stability, rank-1 downdating algorithms, and generalization to modifications of higher rank.
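A minimal rank-1 downdating sketch in the LINPACK style, using hyperbolic rotations (a standard textbook scheme, not necessarily the algorithms analyzed in the paper):

```python
import numpy as np

def chol_downdate(L, x):
    """Rank-1 downdate: given lower triangular L and vector x with
    L L^T - x x^T positive definite, return L' with
    L' L'^T = L L^T - x x^T, via hyperbolic rotations."""
    L, x = L.copy(), x.copy()
    n = L.shape[0]
    for k in range(n):
        r = np.sqrt(L[k, k]**2 - x[k]**2)   # fails if not downdatable
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] - s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

A = np.array([[4.0, 2.0], [2.0, 2.0]])
L = np.linalg.cholesky(A)
x = np.array([1.0, 0.5])
Lt = chol_downdate(L, x)
print(np.allclose(Lt @ Lt.T, A - np.outer(x, x)))
```

The square root of a difference at each step is where the numerical stability issues mentioned above enter: when LL^T - xx^T is nearly singular, r is computed with heavy cancellation.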
Generalized Minimum Norm And Constrained Total Least Squares With Applications To Array Signal Processing
Michael D. Zoltowski
The minimum norm Total Least Squares (TLS) solution to the linear system of equations AX = B for the case where the rank of the composite matrix [A | B] under no-noise/error-free conditions is strictly less than the number of columns in A is derived. The resulting method of solution is applied to the covariance level ESPRIT problem encountered in the field of sensor array signal processing. The TLS concept is targeted in light of the fact that the linear system of equations derived from ESPRIT at the covariance level is of the form AX = B where A and B are both in error. In contrast, conventional Least Squares only accounts for errors in B. The derivation is based on a projection operator interpretation of TLS analogous to the situation with conventional Least Squares in which the solution to AX = B is obtained by first projecting the columns of B onto range(A). The projection operator approach is further utilized to derive the constrained TLS solution when certain columns of A are known to be exact, i.e., free of error. The applicability of the constrained TLS solution for this case to the problem of estimating the source covariance matrix in a uniformly-spaced linear array scenario is discussed.
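For orientation, the classical full-rank TLS solution via a single SVD of the composite matrix [A | b] (the textbook baseline, not the generalized minimum-norm or constrained solutions the paper derives):

```python
import numpy as np

# Classical TLS for A x ~ b: the solution lies in the right singular
# vector of [A | b] belonging to the smallest singular value.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((20, 2))
b = A @ x_true                      # noise-free, so TLS recovers x_true

C = np.hstack([A, b[:, None]])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                          # smallest-singular-value direction
x_tls = -v[:-1] / v[-1]             # valid when the last component != 0
print(np.allclose(x_tls, x_true))
```

The degenerate case the abstract addresses is exactly when that last component vanishes (rank of [A | b] below the number of columns of A), where the classical formula breaks down and a minimum-norm generalization is needed.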
Near-Field Source Parameter Estimation Using A Spatial Wigner Distribution Approach
A. L. Swindlehurst, T. Kailath
An important problem in many applications is the estimation of emitter location parameters based on information received from an array of sensors. To date, the majority of techniques proposed for this problem are restricted to the special case of far-field emitters; that is, the wavefronts impinging on the array are assumed to be planar. More general methods are required, however, when sources are located close to the array (i.e., in the near-field) since the inherent curvature of the wavefront is no longer negligible. In this paper, we propose the use of high-resolution signal subspace methods in conjunction with a spatial version of the well-known Wigner-Ville distribution to solve the problem of source localization in the near-field. Advantages of this approach are that accurate source range/DOA estimates are obtained with relatively good resolution, and without computation or search of a three-dimensional spectral surface. The effectiveness of the algorithm is demonstrated by extensive simulations, and its performance relative to the Cramer-Rao bound is also presented.
Multi-Frequency Angle-Of-Arrival Estimation: An Experimental Evaluation
Vytas Kezys, Simon Haykin
The intent of this paper is to describe the performance enhancement available to angle-of-arrival estimation through the use of a multi-frequency capability. A maximum likelihood estimate has been formulated, based on a model that takes into account a priori information regarding the multipath geometry and the frequency dependence of the received signal. Simulation and field measurements are used to demonstrate the improved performance and to validate the model.
Signal Subspace Processing Of Experimental Radio Data
Gordon E. Martin
The research related to this paper was concerned with the application of EigenVector-EigenValue (EVEV) signal processing techniques to experimental data. The signal subspace methods of Schmidt (called MUSIC), Johnson, and Pisarenko were considered and compared with results of conventional beamformers. Almost all oral and written papers regarding these EVEV processors involve theoretical studies, possibly using simulated data and incoherent noise, but not experimental data. Contrary to that trend, we have reported behavior of EVEV processors using experimental data in this and other papers. The data used here are predominantly due to an HF radio experiment, but the distribution of eigenvalues is also reported for acoustic data. The paper emphasizes two general subtopics of signal subspace processing. First, the eigenvalues of sampled covariance matrices are examined and related to those of incoherent noise. These results include actual data, none of which we found to be Gaussian incoherent noise. A new test related to the ratio of eigenvalues is developed. The MDL and AIC criteria give misleading results with actual noise. Second, directional responses of EVEV and conventional processors are compared using HF radio data that has high signal-to-noise ratio in the non-Gaussian noise. MUSIC is found to have very favorable directional characteristics.
Adaptive Cancellation Of Correlated Signals In A Multiple Beam Antenna System
Yanping Lee, John Litva
This paper presents a new adaptive cancelling technique which can be implemented on multiple beam antenna systems. One of its notable features is that it exhibits efficient operation even with highly correlated target and interference signals.
Direction Finding Using Dynamic Phase Correction
James A. Cadzow, Hal Arnold, Steve Kocsis
In array signal processing applications, it can happen that amplitude and phase distortions are inadvertently introduced into the constituent sensor signals through electronic equipment miscalibration or non omni-directional sensors. In some cases, this distortion may be artificially induced through inaccurate sensor location. Whatever the case, the ability to provide useful direction-of-arrival estimates is adversely affected by these factors. In this paper, a procedure is presented for correcting amplitude and phase distortions.
Use Of Higher-Order Statistics In Signal Processing And System Theory: An Update
Jerry M. Mendel
During the past few years there has been an increasing interest in applying higher-order statistics, namely cumulants, and their associated Fourier transforms, polyspectra, to a wide range of signal processing and system theory problems. Cumulants and polyspectra can make a big difference in those problems where signals are non-Gaussian and systems are nonminimum phase (or, nonlinear). This paper provides a brief overview of much of the work that has occurred when parametric models are used in conjunction with higher-order statistics. It covers: identification of MA processes, identification of AR processes, identification of ARMA processes, order determination, calculation of cumulants, calculation of polyspectra, extensions to multi-channel and two-dimensional systems, and applications.
A Systolic Array For Efficient Execution Of The Radon And Inverse Radon Transforms
A. J. De Groot, S. G. Azevedo, D. J. Schneberk, et al.
The Systolic Processor with a Reconfigurable Interconnection Network of Transputers (SPRINT) [1] is a sixty-four-element multiprocessor developed at Lawrence Livermore National Laboratory to evaluate systolic algorithms and architectures experimentally. The processors are interconnected in a reconfigurable network which can emulate networks such as the two-dimensional mesh, the triangular mesh, the tree, and the shuffle-exchange network. New systolic algorithms and architectures are described which perform the Radon transform [8] and inverse Radon transform with efficiency arbitrarily close to 100%. High efficiency is possible with any connected network topology, even with low communication bandwidth. The results of the algorithms executed on the SPRINT compare closely with theory.
Use Of Homomorphic Signal Processing Techniques For The Estimation Of Absorbance Spectra As Encountered In Fourier Transform Spectroscopy.
D. J. Gingras
In this paper, we describe a method, using homomorphic signal processing (HSP) techniques, for the computation of absorbance spectra as encountered in Fourier Transform Spectroscopy (FTS). These techniques have already found applications in a number of fields including image enhancement, speech, and geophysical signal processing. They are based on a generalization of the class of linear operators. Linear systems are easy to analyse and are particularly useful in separating signals combined by addition, which is a direct consequence of the property of superposition. In this paper, we will see that the nonlinearity of the transmittance model, introduced to describe the attenuation of the radiation through the sample medium, belongs to the class of nonlinear systems which obey a generalized principle of superposition. They are known as homomorphic systems. The two main practical types of homomorphic systems are the one for multiplication (logarithm operator) and the one for convolution (Z transform).
Time-Frequency Signal Analysis And Synthesis The Choice Of A Method And Its Application
Boualem Boashash
In this paper, the problem of choosing a method for time-frequency signal analysis is discussed. It is shown that a natural approach leads to the introduction of the concepts of the analytic signal and instantaneous frequency. The Wigner-Ville Distribution (WVD) is a method of analysis based upon these concepts and it is shown that an accurate Time-Frequency representation of a signal can be obtained by using the WVD for the analysis of a class of signals referred to as "asymptotic". For this class of signals, the instantaneous frequency describes an important physical parameter characteristic of the process under investigation. The WVD procedure for signal analysis and synthesis is outlined and its properties are reviewed for deterministic and random signals.
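The analytic signal and instantaneous frequency mentioned above can be sketched in a few lines (a standard Hilbert-transform construction, not the paper's full WVD machinery):

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous frequency of a real tone: form the analytic signal
# via the Hilbert transform, then differentiate its unwrapped phase.
fs, f0 = 1000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)

z = hilbert(x)                               # analytic signal
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Away from the edges, the estimate sits at the tone frequency.
print(round(float(np.median(inst_freq[100:-100])), 1))
```

For a pure tone the phase derivative is constant; for the "asymptotic" signals discussed above it traces the physically meaningful frequency law of the process.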
Instantaneous Frequency, Its Standard Deviation And Multicomponent Signals
Leon Cohen, Chongmoon Lee
We consider instantaneous frequency and its variance using the bilinear joint time-frequency distributions. It is well known that these distributions give the instantaneous frequency as the time derivative of the phase. We show that they also lead to a reasonable definition for the standard deviation of instantaneous frequency, namely σ_ω^2(t) = (A'(t)/A(t))^2, where A(t) is the amplitude of the signal. We demonstrate the relationship with the bandwidth of the spectrum. We also derive the corresponding quantities for the short-time Fourier spectrum and show the relation to and consistency with the above definition. The concept of local spread of frequencies is used to define and clarify the meaning of multicomponent signals. It is argued that the breaking up of a signal into components is a local phenomenon and that the criterion for a meaningful decomposition is that the standard deviations of instantaneous frequency of each part about their own individual instantaneous frequencies be well separated and small in comparison to the standard deviations of the signal. In addition, we consider the new distributions of Choi and Williams which dramatically enhance the interpretive value and use of bilinear distributions. These distributions suppress the interference terms while preserving the desirable characteristics of the distributions. This is particularly the case for multicomponent signals. We show that if a class of distributions yields a certain expectation value, then the cross terms of the different distributions within that class contribute the identical value towards expectation values. Hence, even though the cross terms may be reduced, they nonetheless contribute an identical amount towards an expectation value.
Application Of The Wigner-Ville Distribution To The Identification Of Machine Noise
Boualem Boashash, Peter O'Shea
The theory of signal detection using the Wigner-Ville Distribution (WVD) and the Cross Wigner-Ville Distribution (XWVD) is reviewed, and applied to the signaturing, detection, and identification of some specific machine sounds - the individual cylinder firings of a marine engine. For this task, a four-step procedure has been devised. The Autocorrelation Function (ACF) is first employed for ascertaining the number of engine cylinders and the firing rate of the engine. Cross-correlation techniques are then used for detecting the occurrence of cylinder firing events. This is followed by the use of WVD- and XWVD-based analyses to produce high resolution Time-Frequency signatures, and finally 2D correlations are employed for identification of the cylinders. The proposed methodology is applied to real data.
Recursion In Wigner Distribution
Moeness G. Amin
The problem of finding a recursive structure to evaluate the Wigner distribution (WD) is investigated. Recursive formulas for updating the smoothed Wigner distribution are derived for on-line data processing of non-stationary processes. Recursion is established by first defining a sliding data window and then selecting the time-lag window to satisfy either a "direct" or "indirect" recursion condition. The former requires calculating the Pseudo-WD (PWD) prior to averaging and leads to a "computationally block-invariance" property. On the other hand, the "indirect" recursion, which is based on a running Fourier transform, does not require explicit calculation of the PWD, and provides a "computationally lag-invariance" property. Both properties are discussed in the paper.
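A direct (non-recursive) pseudo-Wigner slice, shown here only as a reference point for what the recursive updates above compute more cheaply:

```python
import numpy as np

def pwvd_slice(x, n, L):
    """Pseudo-Wigner-Ville slice at time n with lag half-length L:
    W(n, theta) = sum_m x[n+m] conj(x[n-m]) e^{-2j theta m}.
    Illustrative direct evaluation, not the paper's recursion."""
    m = np.arange(-L, L + 1)
    kernel = x[n + m] * np.conj(x[n - m])
    return np.abs(np.fft.fft(kernel))

# For a complex exponential at normalized frequency f0, the slice
# peaks at 2*f0 (the factor 2 comes from the two-sided lag kernel).
f0, N, L = 0.1, 256, 63
x = np.exp(2j * np.pi * f0 * np.arange(N))
W = pwvd_slice(x, N // 2, L)
bins = np.fft.fftfreq(2 * L + 1)
print(abs(bins[np.argmax(W)] - 2 * f0) < 0.01)
```

Sliding n by one sample recomputes the whole lag kernel and FFT from scratch; the "direct" and "indirect" recursions above amortize exactly this cost.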
Performance Comparison Of Wigner-Ville Based Techniques To Standard Fm-Discriminators For Estimating Instantaneous Frequency Of A Rapidly Slewing Fm Sinusoid In The Presence Of Noise
Fred J. Harris, Hana Abu Salem
We compare the performance of the Discrete Wigner-Ville distribution (DWV) technique to a number of standard techniques for estimating the spectral content of one or more slowly varying sinusoids in the presence of additive white noise. These alternate techniques include the sliding, windowed Discrete Fourier transform or short-time Fourier transform (STFT), a standard phase derivative discriminator, and an adaptive Least Mean Square line canceller. To make the comparison interesting we have chosen to examine a range of signal to noise ratios which bracket the threshold performance regions of the standard spectral estimators.
A Model For The Analysis Of Fault-Tolerant Signal Processing Architectures
V. S. S. Nair, J. A. Abraham
This paper develops a new model, using matrices, for the analysis of fault-tolerant multiprocessor systems. The relationship between processors computing useful data, the output data, and the check processors is defined in terms of matrix entries. Unlike the matrix based models proposed previously for the analysis of digital systems, this model uses only numerical computations rather than logical operations for the analysis of a system. We present algorithms to evaluate the fault detection and location capability of the system. These algorithms are much less complex than the existing ones. We also use the new model to analyze some fault-tolerant architectures proposed for signal processing applications.
Multiple Error Algorithm-Based Fault Tolerance For Matrix Triangularizations
Haesun Park
The checksum methods have been known as the most efficient fault-tolerant matrix triangularization schemes on systolic arrays in the presence of a single transient error. But it is not realistic to expect that at most one transient error occurs during any computation. In this paper, we extend the existing checksum schemes and introduce a block checksum scheme for multiple transient errors applicable to the fault tolerant matrix LU decomposition, Gaussian elimination with pairwise pivoting, and the QR decomposition. The block checksum scheme can detect, locate, and correct one transient error in each submatrix of a given matrix. Then we introduce examples that show that even one transient error can make the corrected results by factorization updates useless due to rounding errors. We also show that by introducing d weighted checksum vectors, we can detect all the transient errors that occur in a maximum of d different columns in matrix triangularizations.
A Novel Fault Tolerance Technique For Recursive Least Squares Minimization
Cynthia J. Anfinson, Franklin T. Luk, Eric K. Torng
Existing fault tolerance schemes have often been ignored by systolic array designers because they are too costly and unwieldy to implement. With this in mind, we have developed a new technique specially tailored for recursive least squares minimization that emphasizes simplicity. We propose a new decoding scheme that allows for error detection while wasting no precious processor cycles and preserving the basic structure of the systolic array. We will show that errors can be detected by examining a single scalar. The technique can be implemented with negligible algorithmic modification and little additional hardware. The simplicity of our method invites its use in future systolic arrays.
Synchronous And Asynchronous Algorithms For Matrix Transposition On MCAP
Nasser G. Azari, Adam W. Bojanczyk, Soo-Young Lee
Matrix transposition is one of the major tasks in image and signal processing and matrix decompositions. This paper presents algorithms for transposing a matrix on a mesh-connected array processor (MCAP). These algorithms make very efficient use of the processing elements (PE's) in parallel. We discuss both synchronous and asynchronous algorithms. In the synchronous approach, algorithms use a global clock to synchronize the communications between PE's. The number of time units required by synchronous algorithms for transposing an m x n matrix (n ≥ m) on an n x n MCAP is 2(n - 1). The synchronous algorithms eliminate simultaneous requests for using channels between PE's. Clock skews and delays are inevitable problems when we have a large array size (large n). An asynchronous (self-timed) approach is proposed to circumvent this problem. The feasibility of the asynchronous algorithm has been demonstrated by the simulation of the algorithm for different sizes of matrices.
A Parallel VLSI Direction Finding Algorithm
Alle-Jan van der Veen, Ed F. Deprettere
In this paper, we present a parallel VLSI architecture that is matched to a class of direction (frequency, pole) finding algorithms of type ESPRIT. The problem is modeled in such a way that it allows a fully parallel VLSI implementation that is easy to partition, using unitary transformations only. The hard problem, the generalized Schur decomposition of a matrix pencil, is tackled using a modified Stewart Jacobi approach that improves convergence and simplifies parameter computations. The proposed architecture is a fixed size, 2-layer Jacobi iteration array that is matched to all sub-problems of the main problem: 2 QR-factorizations, 2 SVD's and a single GSD-problem. The arithmetic used is (pipelined) Cordic.
Implementation Of An SVD Processor Using Redundant CORDIC
Milos D. Ercegovac, Tomas Lang
An implementation of the diagonal and off-diagonal processors for an array performing the singular value decomposition (SVD) is presented. The implementation uses a modification of the CORDIC module that utilizes carry-save addition instead of carry-propagate addition, resulting in a significant improvement in speed. Moreover, the calculation of the angles and of the two-sided rotation are overlapped. To achieve this overlapping, the calculation of the rotation angles includes an on-line module. Finally, the carry-save calculation and the overlapping result in a variable CORDIC scaling factor. This factor is computed and the correction performed by on-line division. Pipelining and rotation interleaving are used to reduce the implementation complexity. The speed is evaluated and compared with that obtained when conventional CORDIC modules are used.
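For reference, the conventional (non-redundant, carry-propagate) CORDIC rotation that the paper's carry-save/on-line design improves upon; a textbook sketch, not the authors' implementation:

```python
import math

def cordic_rotate(theta, iters=40):
    """Conventional CORDIC rotation computing (cos theta, sin theta)
    for |theta| within the ~1.74 rad convergence range. Each step
    rotates by +/- atan(2^-i) using only shifts and adds."""
    x, y, z = 1.0, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)
    # Fixed iteration count => constant scale factor K = prod 1/sqrt(1+4^-i).
    k = 1.0
    for i in range(iters):
        k /= math.sqrt(1.0 + 4.0**-i)
    return x * k, y * k

c, s = cordic_rotate(0.5)
print(abs(c - math.cos(0.5)) < 1e-9, abs(s - math.sin(0.5)) < 1e-9)
```

Because the rotation direction d is fixed to +/-1, the scale factor is a known constant; the redundant (carry-save) variant described above gives up this constancy for speed, which is why it must compute the scaling on-line and correct for it by division.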
Error Effects On The Processing Of Adaptive Array Data Using The BOC
Mustafa A. G. Abushagur, Mohamad Habli
The Bimodal Optical Computer (BOC) is considered for Adaptive Phased Array Radar (APAR) data processing. The effect of the errors in the BOC on the optimum weight calculations for the interference canceling are studied. Computer simulations for five and nine element APARs are presented.
Recursive Matrix Inverse Update On An Optical Processor
David P. Casasent, Edward J. Baranoski
A high accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution of the parameters in the algorithm are addressed and the advantages of optical over digital linear algebraic processors are advanced.
A Highly Reconfigurable Array Of Powerful Processors
R. Cohn, H. T. Kung, O. Menzilcioglu, et al.
This paper presents a highly reconfigurable architecture for two-dimensional (2D) arrays of powerful processors. Because of its high degree of reconfigurability the architecture can provide fault tolerance with efficient array utilization and support application programs requiring different interconnection structures. The proposed 2D array incorporates a flexible interconnection network using a mechanism called virtual channels. Ideally, the interconnection mechanism of a reconfigurable array would be infinitely reliable and flexible. Our evaluation results, based on the simulation of real programs for an array of Warp processors (a powerful processor developed at Carnegie Mellon and manufactured by GE), show that we can approach this goal with a modestly complex switch design.
High Discrimination Detection Bound And Model Order Control
Ira J. Clarke
It is easily demonstrated by simulation that modern digital signal analysis algorithms, such as 'MUSIC', have the potential to discriminate (detect) two spectrally similar signal components at up to about two orders of magnitude better resolution than predicted by the long established Rayleigh criterion. The basic questions addressed in the paper are: a) what 'extra information' is utilised by high discrimination algorithms; b) what is the detection limit for a 'perfect' algorithm; and c) can we design better high discrimination signal extraction and parameter estimation algorithms based on a deeper understanding of the underlying information handling principles? The paper concludes by proposing a stable multi-stage data decomposition procedure in which model order is controlled by directly measuring the 'effective signal to noise ratio' of possible signal components. The technique is relatively efficient, is generally applicable and has the potential to be remarkably robust.
A New Criterion For The Determination Of The Number Of Signals In High-Resolution Array Processing
K. M. Wong, Q. T. Zhang, J. P. Reilly, et al.
A problem central to most high resolution methods of array processing is the determination of the number of signals K from a finite set of observations. Two commonly used criteria which do not employ the use of a subjective threshold are the AIC and MDL, both of which consist of a likelihood-function term and a penalty term. The idea is to find a number K such that either of the criteria is minimized. Examination of both criteria reveals that the likelihood function encompasses irrelevant parameters, resulting in the relatively inaccurate estimation of the relevant parameters. A new criterion is proposed here such that a new likelihood function is derived consisting of only the parameters having a bearing on the determination of K. The improvement of the accuracy in the estimation of the relevant parameters is evaluated. This new criterion is put to the test and computer simulations indicate that considerable improvement in performance is attained.
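As context, the classical Wax-Kailath MDL criterion that this paper's new criterion improves upon, computed from sample-covariance eigenvalues (a standard baseline sketch, not the proposed method):

```python
import numpy as np

def mdl(eigvals, N):
    """Wax-Kailath MDL estimate of the number of signals from the
    eigenvalues of a sample covariance matrix built from N snapshots.
    The likelihood term compares geometric and arithmetic means of
    the presumed noise eigenvalues; the penalty term grows with k."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    n = len(lam)
    scores = []
    for k in range(n):
        tail = lam[k:]                         # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(tail)))      # geometric mean
        a = np.mean(tail)                      # arithmetic mean
        loglik = -(n - k) * N * np.log(g / a)
        penalty = 0.5 * k * (2 * n - k) * np.log(N)
        scores.append(loglik + penalty)
    return int(np.argmin(scores))

# Two strong signal eigenvalues above a flat noise floor of four.
print(mdl([10.0, 5.0, 1.0, 1.0, 1.0, 1.0], N=1000))  # → 2
```

When the trailing eigenvalues are all equal, g/a = 1 and the likelihood term vanishes, so the minimum lands at the true signal count; with real, non-flat noise this term misbehaves, which is the failure mode the new criterion targets.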
Invariance Techniques And High-Resolution Null Steering
R. Roy, T. Kailath
Over the past several decades, a significant amount of research has been performed in the area of high-resolution signal parameter estimation. It is a problem of significance in many signal processing applications including direction-of-arrival estimation in which the locations of multiple sources whose radiation is received by an array of sensors are sought. Much of the research has focussed on approaches based on the formation of optimal weight or copy vectors, procedures derived from the conventional practice of beamforming. This class of approaches to parameter estimation problems has come to be known as high-resolution spectral analysis/beamforming since the introduction of the maximum entropy (MEM) method by Burg in 1967, and the maximum-likelihood (ML) method by Capon in 1969. These techniques provide increased resolution and accuracy over their predecessors (including conventional beamforming), but suffer from model mismatch. MUSIC and ESPRIT are recently developed geometric techniques that exploit the underlying model and thereby achieve significant improvements in performance. In this paper, these techniques are summarized. From basic physical principles, it is shown that ESPRIT is actually a multidimensional null steering algorithm, an interpretation with significant intuitive appeal. Finally, optimal signal copy vectors that naturally arise from the algorithm are presented, and their properties as beamforming vectors for this class of problems are discussed.
Reduced-Dimension Beam-Space Broad-Band Source Localization: Preprocessor Design
Kevin M. Buckley, Xiao Liang Xu
Data from a set of conventional beamformers, each steered to a point in location (and frequency), are analyzed in beam-space processing. By selecting a location sector of interest, and by using only those beamformers which are steered within this sector, processing is in a Reduced-Dimension Beam-Space (RDBS). For spatial-spectrum estimation, advantages of processing in a RDBS rather than in element-space include: reduction in data and therefore computation required for spatial-spectral analysis, reduction in resolution thresholds, and attenuation of out-of-sector sources through spatial filtering. A beam-space preprocessor structure provides the element-space to RDBS transformation. For broad-band source processing, its objectives are data reduction, spatial filtering and broad-band source focusing. In this paper we investigate beam-space preprocessor design.
Self Calibration Techniques For High-Resolution Array Processing
Benjamin Friedlander
Eigenstructure-based direction finding techniques are capable of resolving closely spaced sources, but are very sensitive to errors in the calibration of the array. A first order sensitivity analysis is performed to quantify this sensitivity. A self-calibration technique based on simultaneous estimation of the directions-of-arrival and of the uncertain array parameters is proposed to alleviate the problems caused by this sensitivity. The properties of the proposed technique are studied using the Cramer-Rao bound, and a class of practical joint estimation algorithms is presented.
Efficient MVDR Processing Using A Systolic Array
J. G. McWhirter, T. J. Shepherd
An efficient systolic array for computing the Minimum Variance Distortionless Response (MVDR) from an adaptive antenna array is described. It is fully pipelined and based on a numerically stable algorithm which requires O(p2+Kp) arithmetic operations per sample time where p is the number of antenna elements and K is the number of look direction constraints.
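The closed form behind MVDR, w = R^{-1}a / (a^H R^{-1} a), evaluated here with a direct dense solve purely for illustration (the systolic array above computes the same response recursively, without ever forming R^{-1}):

```python
import numpy as np

# MVDR weights for a uniform linear array: minimize output power
# subject to unit gain in the look direction.
p, u = 6, 0.2                         # elements; normalized look direction
a = np.exp(2j * np.pi * u * np.arange(p))   # steering vector

# Sample covariance from synthetic snapshots, diagonally loaded.
rng = np.random.default_rng(1)
X = rng.standard_normal((p, 200)) + 1j * rng.standard_normal((p, 200))
R = X @ X.conj().T / 200 + np.eye(p)

w = np.linalg.solve(R, a)             # R^{-1} a
w /= a.conj() @ w                     # normalize: distortionless response

print(np.isclose(w.conj() @ a, 1.0))  # unit gain toward the look direction
```

The O(p^2 + Kp) per-sample cost quoted above comes from updating a triangular factorization of R recursively rather than re-solving this p x p system at every snapshot.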