Proceedings Volume 7444

Mathematics for Signal and Information Processing


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 21 August 2009
Contents: 11 Sessions, 37 Papers, 0 Presentations
Conference: SPIE Optical Engineering + Applications 2009
Volume Number: 7444

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Compression
  • Error Modeling and Analysis I
  • Error Modeling and Analysis II
  • Compressive Computing and Security
  • Pattern Recognition with Applications
  • Poster Session: Mathematics of Data/Image Coding, Compression, and Encryption with Applications XII
  • Computer Arithmetic
  • Time Frequency
  • Implementation I
  • Implementation II
  • Hybrid Signal/Image Processing: Joint Session With Conference 7442
Compression
The optimum approximation of a multidimensional filter bank having analysis filters with small nonlinear characteristics
First, we present the optimum interpolation approximation for multi-dimensional vector signals. The approximation exhibits high performance in that it minimizes various worst-case measures of the approximation error simultaneously. Second, we consider a restricted set of multi-dimensional vector signals for which all elements of the corresponding generalized spectrum vector are functions of separable variables, and we present the optimum interpolation approximation for this restricted set. Moreover, based on this property, by setting the variables to be identical with each other in the approximation, we present an optimum interpolation approximation for a generalized filter bank with generalized nonlinear analysis filters. This approximation exhibits high performance similar to that of the approximations above. Finally, as a practical application of the optimum interpolation approximation for multi-dimensional vector signals, we present a discrete numerical solution of linear partial differential equations with many independent variables.
The optimum discrete running approximation of multidimensional time-limited signals
In this paper, we present an integrated discussion of the space-limited but approximately band-limited n-dimensional running discrete approximation that minimizes various continuous worst-case measures of error simultaneously. First, we introduce the optimum approximation that uses a fixed finite number of sample values and a running approximation that scans the sample values along the time axis. Second, we derive another filter bank having both an extended number of transmission paths and a cutoff frequency above the actual Nyquist frequency. Third, we obtain continuous space-limited n-dimensional interpolation functions satisfying a condition called extended discrete orthogonality. Finally, we derive a set of signals and a discrete FIR filter bank that satisfy the two conditions of the optimum approximation.
Supporting image algebra in the Matlab programming language for compression research
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years beginning in 1984. It has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision programs. The University of Florida has been associated with implementations supporting the languages FORTRAN, Ada, Lisp, and C++. The latter effort produced a class library, iac++, that supports image algebra programming in C++. Since image processing and computer vision are generally performed with array-based operands, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, this new implementation offers exciting possibilities for supporting a large group of users. The control over an object's computational resources that Matlab provides to the algorithm designer means that the image algebra Matlab (IAM) library can employ versatile representations for the operands and operations of the algebra. In this paper, we first outline the purpose and structure of image algebra, then present IAM notation in relation to the preceding (iac++) implementation. We then provide examples to show how IAM is more convenient and more readily supports efficient algorithm development. Additionally, we show how image algebra and IAM can be employed in compression algorithm development and analysis.
Error Modeling and Analysis I
Analysis of filtering techniques and image quality in pixel duplicated images
When images undergo filtering operations, valuable information can be lost beyond the intended noise or frequencies because neighboring pixels are averaged. When the image is enlarged by duplicating pixels, such filtering effects can be reduced and more information retained, which can be critical when analyzing image content automatically. Analysis of retinal images can reveal many diseases at an early stage, provided that minor changes departing from a normal retinal scan can be identified and enhanced. In this paper, typical filtering techniques are applied to an early-stage diabetic retinopathy image that has undergone digital pixel duplication. The same techniques are applied to the original images for comparison. The effects of filtering are then demonstrated for both pixel-duplicated and original images to show the information retention capability of pixel duplication. Image quality is computed based on published metrics. Our analysis shows that pixel duplication is effective in retaining information under smoothing operations such as mean filtering in the spatial domain, as well as lowpass and highpass filtering in the frequency domain, depending on the filter window size. Blocking effects due to image compression and pixel duplication become apparent in frequency analysis.
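As a rough, hypothetical illustration of the pixel-duplication idea summarized above (not the authors' code), the following numpy/scipy sketch duplicates each pixel of a synthetic image, applies the same 3x3 mean filter to both versions, and compares how far each smoothed result drifts from the original; the synthetic data and the 2x duplication factor are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def duplicate_pixels(image, factor=2):
    """Enlarge an image by integer pixel duplication (no interpolation)."""
    return np.kron(image, np.ones((factor, factor), dtype=image.dtype))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(float)

smoothed_direct = uniform_filter(original, size=3)                        # 3x3 mean filter
smoothed_dup = uniform_filter(duplicate_pixels(original), size=3)[::2, ::2]

# A 3x3 window on the duplicated image spans fewer distinct source pixels,
# so less detail is averaged away and the result stays closer to the original.
print(np.abs(smoothed_direct - original).mean(),
      np.abs(smoothed_dup - original).mean())
```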
Spatially adaptive image quality metrics for perceptual image quality assessment
The problem of objective image quality assessment has been studied for a couple of decades, but with emerging multimedia technologies it has become increasingly important. This paper presents an approach to predicting the perceived quality of compressed images that incorporates real visual attention coordinates. Information about visual attention is not usually taken into account in models for image quality assessment. The impact of the region of interest on the estimation accuracy of a simple image quality metric was investigated in our previous papers. The gaze coordinates were calculated from calibrated electro-oculogram records of human observers while they watched a number of test images. This paper further investigates this idea using data from more observers. The obtained mean opinion scores of perceived image quality and the eye tracking data are used to verify the potential improvement in assessment accuracy for a simple image quality metric.
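A minimal sketch of the general idea of folding gaze data into a simple metric (this is not the metric used in the paper): a Gaussian region-of-interest weight centered on a measured gaze coordinate turns plain MSE/PSNR into an attention-weighted score. The Gaussian profile, its width, and the synthetic data are assumptions.

```python
import numpy as np

def gaze_weighted_psnr(reference, distorted, gaze_xy, sigma=30.0, peak=255.0):
    """PSNR computed from an MSE weighted by a Gaussian ROI around the gaze point."""
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy                                   # (column, row) of the gaze point
    weight = np.exp(-((xx - gx) ** 2 + (yy - gy) ** 2) / (2.0 * sigma ** 2))
    weight /= weight.sum()
    wmse = np.sum(weight * (reference.astype(float) - distorted.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / max(wmse, 1e-12))

# Synthetic usage: distortion far from the gaze point lowers the score less.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(240, 320)).astype(float)
dist = ref.copy()
dist[200:230, 280:310] += 20.0                          # distortion in a corner
print(gaze_weighted_psnr(ref, dist, gaze_xy=(160, 120)))  # gaze at image center
```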
Precise accounting of bit errors in floating-point computations
Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relationship to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
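A toy sketch of the bit-accounting idea under stated assumptions (fixed-point operands, truncation on right shifts, no rounding step): one integer counter per bit position is incremented whenever a nonzero bit is discarded at that position, so, as the abstract notes, error tracking itself needs only integer additions.

```python
import random

BIT_COUNTERS = {}  # bit position -> number of truncation events observed there

def truncating_rshift(value, shift):
    """Right-shift a fixed-point integer, counting every discarded nonzero bit."""
    for pos in range(shift):
        if (value >> pos) & 1:
            BIT_COUNTERS[pos] = BIT_COUNTERS.get(pos, 0) + 1  # integer addition only
    return value >> shift

def cascade(x, y, stages=4, frac_bits=12):
    """Toy computational cascade: repeated fixed-point multiply plus renormalization."""
    acc = x
    for _ in range(stages):
        acc = truncating_rshift(acc * y, frac_bits)  # renormalize after each product
    return acc

random.seed(0)
for _ in range(1000):
    cascade(random.getrandbits(16), random.getrandbits(16))
print(sorted(BIT_COUNTERS.items()))  # per-bit-position truncation counts over the runs
```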
Error Modeling and Analysis II
Error mitigation for CCSDS compressed imager data
To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSDS Rice-based compression, such errors result in a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method capable of using the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as metadata in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.
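The paper's statistical estimator is not reproduced here; the sketch below only illustrates the interpolation baseline it is compared against, filling a run of dummy values along a scan line by 1-D linear interpolation. The dummy fill value is an assumption.

```python
import numpy as np

DUMMY = -9999.0  # assumed fill value marking lost data in a scan line

def interpolate_scanline(scanline, dummy=DUMMY):
    """Fill contiguous runs of dummy values by 1-D linear interpolation."""
    line = scanline.astype(float).copy()
    lost = line == dummy
    good = ~lost
    line[lost] = np.interp(np.flatnonzero(lost), np.flatnonzero(good), line[good])
    return line

# Example: a scan line with a burst of lost samples.
scan = np.array([10.0, 11.0, 12.0, DUMMY, DUMMY, DUMMY, 18.0, 19.0])
print(interpolate_scanline(scan))
```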
Compression of turbulence-affected video signals
Shahar Mahpod, Yitzhak Yitzhaky
A video signal obtained through a relatively long-distance atmospheric medium suffers from blur and spatiotemporal image movements caused by air turbulence. These phenomena, which reduce the visual quality of the signal, also reduce the compression efficiency of motion-estimation based video compression techniques and increase the required bandwidth of the compressed signal. The reduction in compression efficiency results from the large number of random local image movements, which differ from one frame to the next as a result of the turbulence. In this research we examined ways to increase the compression ratio by developing and comparing two approaches. In the first approach, a pre-processing image restoration is performed first, which includes reduction of the random movements in the video signal and optionally de-blurring the images; a standard compression process is then carried out. In this case, the final decompressed video signal is a restored version of the recorded one. The second approach attempts to predict turbulence-induced motion vectors from the latest images in the sequence; here the final decompressed image should be as close as possible to the recorded image (including the spatiotemporal movements). It was found that the first approach improves the compression ratio. For the second approach, it was found that after applying a short temporal median filter to the video sequence the turbulence-induced optical flow can be predicted very well, but this result was not sufficient to produce a significant improvement at this stage.
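A small sketch of the short temporal median mentioned in the second approach, assuming the video is available as a T x H x W array; it is a generic running median over time, not the authors' implementation.

```python
import numpy as np

def temporal_median(frames, window=3):
    """Running temporal median over a video sequence (frames: T x H x W array).

    A short temporal median suppresses the random, frame-to-frame local movements
    induced by turbulence, making the remaining motion easier to predict or encode.
    """
    frames = np.asarray(frames, dtype=float)
    half = window // 2
    padded = np.pad(frames, ((half, half), (0, 0), (0, 0)), mode='edge')
    out = np.empty_like(frames)
    for t in range(frames.shape[0]):
        out[t] = np.median(padded[t:t + window], axis=0)
    return out

# Hypothetical usage: stabilized = temporal_median(video_frames, window=3), then feed
# the stabilized sequence to a standard motion-estimation based encoder.
```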
Design of distributed sub-band networks having the minimum total weighted energy based on the concept of generating function of networks
We present two topics in this paper. The first is the optimum running approximation of signals by an FIR filter bank that minimizes various worst-case continuous measures of error simultaneously. As a direct application, we obtain a favorable sub-band multi-input multi-output transmission system, useful for multi-path sensor networks, in which each channel independently attains the minimum transmission power and the minimum approximation error at the same time. We assume that the Fourier transform F(ω) of a signal f(t) is approximately band-limited below a Nyquist frequency, with only small side-lobes above the Nyquist frequency. We introduce a positive low-pass weight function and define a set of signals Ξ such that the weighted square integral of F(ω) with respect to this weight function is bounded by a given positive constant A. First, we consider a finite number of signals densely scattered in this initial set of signals and present one-to-one correspondences between a signal and its running approximation, or the corresponding approximation error, in a certain small segment of the time domain. Based on this one-to-one correspondence, we show that any continuous worst-case measure of error in any time-limited interval can be expressed by the corresponding measure of error in the small segment on the time axis. Combining this one-to-one correspondence with the optimum approximation proved by Kida, we present a running approximation that simultaneously minimizes various continuous worst-case measures of error and the continuous upper limits of many measures of the approximation formula.
Compressive Computing and Security
Compression for data archiving and backup revisited
Data deduplication is a simple, dictionary-based compression method that has become very popular in storage archiving and backup. It has the advantage of direct, random access to any piece ("chunk") of a file in one table lookup; that is not the case with differential file compression, the other common storage archival method. The compression efficiency (chunk matching) of deduplication improves for smaller chunk sizes; however, the sequence of hashes replacing the deduplicated object (file) then grows significantly. Within the sequence of chunks into which an object is decomposed, sub-sequences of adjacent chunks tend to repeat. We exploit this insight first in an online scheme used to reduce the amount of hash metadata generated. With each newly created entry in the chunk repository we add a "chronological" pointer linking this entry with the next new entry, in time order. When the hashes produced by the chunker follow the chronological pointers, we encode them as a "sequence of hashes" by specifying the first hash in the sequence and the length of the sequence. The resulting hash metadata is orders of magnitude smaller than what a customary compression algorithm (gzip) achieves, and this has a significant impact on the overall deduplication efficiency when relatively small chunks are used. A second scheme is also introduced that optimizes the chunk sizes by joining repeated sub-sequences of small chunks into new "super chunks," with the constraint of achieving practically the same matching performance. We employ suffix arrays to find these repeating sub-sequences and to determine a new encoding that covers the original sequence. As a result, fewer chunks are used to represent a file, reducing fragmentation (i.e., the number of disk accesses needed to reconstruct the file) and requiring fewer entries in the chunk dictionary and fewer hashes to encode a file. As the experimental results show, this method provides more than a tenfold reduction in fragmentation and more than a fivefold reduction in the number of repository entries while achieving similar or slightly better overall deduplication ratios.
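A minimal sketch of the "chronological pointer" encoding described above, under simplifying assumptions (fixed-size chunking and SHA-1 in place of the real chunker): chunk hashes are mapped to creation-order indices, and runs of consecutive indices collapse into (first index, run length) pairs.

```python
import hashlib

CHUNK_SIZE = 4096   # assumed fixed chunk size; real systems use content-defined chunking
repository = {}     # chunk hash -> creation index (chronological order)

def chunk_hashes(data, size=CHUNK_SIZE):
    return [hashlib.sha1(data[i:i + size]).hexdigest() for i in range(0, len(data), size)]

def deduplicate(data):
    """Return the file as a list of repository indices, adding new chunks as needed."""
    indices = []
    for h in chunk_hashes(data):
        if h not in repository:
            repository[h] = len(repository)   # chronological pointer = creation order
        indices.append(repository[h])
    return indices

def encode_runs(indices):
    """Collapse consecutive creation-order indices into (first_index, run_length) pairs."""
    runs = []
    for idx in indices:
        if runs and idx == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((idx, 1))
    return runs

# Two copies of an 8-chunk file: each copy collapses into a single (0, 8) run.
data = b''.join(bytes([i]) * CHUNK_SIZE for i in range(8))
print(encode_runs(deduplicate(data + data)))
```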
Evidence of tampering in watermark identification
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
Pattern Recognition with Applications
Algorithms for the detection of chewing behavior in dietary monitoring applications
Mark S. Schmalz, Abdelsalam Helal, Andres Mendez-Vasquez
The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
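A hypothetical sketch of the spectral test described above: given the mean intensity of the perioral region per frame (extraction of that region is assumed, not shown), the fraction of spectral power in the 0.5-2.5 Hz band separates chewing-like periodic motion from non-periodic motion.

```python
import numpy as np

def chewing_score(mouth_intensity, fps, band=(0.5, 2.5)):
    """Fraction of spectral power in the assumed chewing band (0.5-2.5 Hz).

    mouth_intensity: 1-D array of the mean pixel value of the mouth ROI per frame.
    """
    x = np.asarray(mouth_intensity, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / max(spectrum[1:].sum(), 1e-12)  # exclude DC

# Toy usage: a 1.5 Hz oscillation (chewing-like) scores high, random motion scores low.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
print(chewing_score(np.sin(2 * np.pi * 1.5 * t), fps))
print(chewing_score(np.random.default_rng(0).normal(size=t.size), fps))
```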
BiFS-based approaches to remote display for mobile thin clients
M. Mitrea, P. Simoens, B. Joveski, et al.
Within the framework of the FP7 European MobiThin project, the present study addresses the issue of remote display representation for mobile thin clients. The main challenge is to design a compression algorithm for heterogeneous content (text, graphics, image, and video) with low-complexity decoding. As a first step in this direction, we propose a novel software architecture based on BiFS - Binary Format for Scenes (MPEG-4 Part 11). On the server side, the graphical content is parsed, converted, and binary encoded into the BiFS format. This content is then streamed to the terminal, where it is played on a simple MPEG player. The viability of this solution is validated by comparing it to the most intensively used wired solutions, e.g., VNC (Virtual Network Computing).
A generic approach to haptic modeling of textile artifacts
Haptic modeling of textiles has attracted significant interest over the last decade. In spite of extensive research, no generic system has been proposed. Previous work mainly assumes that textiles have a 2D planar structure and requires time-consuming measurement of textile properties in the construction of the mechanical model. A novel approach for haptic modeling of textiles is proposed to overcome these shortcomings. The method is generic, assumes a 3D structure for the textile, and deploys computational intelligence to estimate the mechanical properties of the textile. The approach is designed primarily for the display of textile artifacts in museums. The haptic model is constructed by superimposing the mechanical model of the textile over its geometrical model. Digital image processing is applied to a still image of the textile to identify its pattern and structure through a fuzzy rule-based algorithm. The 3D geometric model of the artifact is automatically generated in VRML based on the identified pattern and structure obtained from the textile image. Selected mechanical properties of the textile are estimated by an artificial neural network, using the textile's geometric characteristics and yarn properties as inputs. The estimated mechanical properties are then used in the construction of the textile mechanical model. The proposed system is introduced and the developed algorithms are described. Validation of the method indicates the feasibility of the approach and its superiority to other haptic modeling algorithms.
Poster Session: Mathematics of Data/Image Coding, Compression, and Encryption with Applications XII
Hyperspectral image compression using low complexity integer KLT and three-dimensional asymmetric significance tree
A lossy-to-lossless three-dimensional (3D) compression of hyperspectral images is presented. In the spectral dimension, a low-complexity reversible integer Karhunen-Loève transform (KLT) is used to fully exploit the spectral redundancy, while a two-dimensional spatial combinative lifting algorithm (SCLA)-based integer wavelet transform is applied in the spatial dimensions. In the low-complexity KLT, the covariance matrix is computed from a subset of vectors that is pseudorandomly selected from the complete set of spectral vectors. The transform matrix is factorized into triangular elementary reversible matrices (TERM) for reversible integer mapping, and the lifting scheme is applied to implement the integer KLT. A 3D asymmetric significance tree structure is then constructed from the 3D asymmetric orientation tree in the 3D transform domain. Each coefficient is encoded by the significance test of the 3D asymmetric significance tree node at each bitplane, instead of using ordered lists to track the significance status of the tree or block sets and coefficients. The algorithm has low complexity and supports lossy-to-lossless progressive transmission.
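A floating-point sketch of the subset-based covariance step described above (the paper additionally factorizes the transform into TERM matrices for reversible integer mapping, which is not shown): the spectral KLT is estimated from a pseudorandomly selected fraction of the spectral vectors.

```python
import numpy as np

def low_complexity_klt(cube, subset_fraction=0.05, seed=0):
    """Estimate a spectral KLT from a pseudorandom subset of spectral vectors.

    cube: hyperspectral image of shape (bands, rows, cols).
    Returns the (bands x bands) transform matrix and the decorrelated cube.
    """
    bands, rows, cols = cube.shape
    pixels = cube.reshape(bands, -1)                   # spectral vectors as columns
    rng = np.random.default_rng(seed)
    n_subset = max(2, int(subset_fraction * pixels.shape[1]))
    idx = rng.choice(pixels.shape[1], size=n_subset, replace=False)
    cov = np.cov(pixels[:, idx])                       # covariance from the subset only
    _, eigvecs = np.linalg.eigh(cov)
    klt = eigvecs[:, ::-1].T                           # rows = principal directions
    decorrelated = klt @ (pixels - pixels.mean(axis=1, keepdims=True))
    return klt, decorrelated.reshape(bands, rows, cols)

# Usage with a synthetic cube: klt, out = low_complexity_klt(np.random.rand(32, 64, 64))
```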
Computer Arithmetic
Implementation of a speculative Ling adder
Malhar Mehta, Amith Kumar Nuggehalli Ramachandra, Earl E. Swartzlander Jr.
A large number of adder designs are available based on the constraints of a particular application, e.g., speed, fanout, wire complexity, area, power consumption, etc. However, a lower bound has been set on the speed of these adders, and it has not been possible to design reliable adders faster than this bound. This paper deals with the design and implementation of a speculative adder that takes advantage of the probabilistic dependence of the maximum carry-propagate chain length on the adder operand size. That is, this type of adder is designed to produce correct results for the vast majority of inputs, namely those whose carry-propagate chains are shorter than the length for which the adder has been designed. An improvement to an earlier speculative adder design is proposed, using Ling equations to speed it up. The resulting speculative adder, called the ACLA, is compared with the earlier design and with traditional adders such as the Ling and Kogge-Stone adders in terms of area, delay, and number of gates required. The ACLA is at least 9.8% faster and 20% smaller than the previous design. A circuit for error detection and correction has also been implemented, resulting in the Reliable Adder (RA). When implemented as a sequential circuit, the combination of ACLA and RA can significantly increase the average speed of the adder unit.
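A quick simulation of the probabilistic argument behind speculative adders, using the longest propagate (a XOR b) run as a proxy for the carry-chain length: for random operands it is almost always far shorter than the operand width, so an adder built for a limited chain length is correct for the vast majority of inputs. The 64-bit width and 12-bit chain limit are arbitrary assumptions.

```python
import random

def longest_propagate_chain(a, b, width):
    """Length of the longest run of bit positions with propagate = a XOR b = 1."""
    p = a ^ b
    longest = run = 0
    for i in range(width):
        run = run + 1 if (p >> i) & 1 else 0
        longest = max(longest, run)
    return longest

def speculation_hit_rate(width=64, chain_limit=12, trials=100_000, seed=0):
    """Fraction of random operand pairs whose propagate chains fit within chain_limit."""
    rng = random.Random(seed)
    hits = sum(longest_propagate_chain(rng.getrandbits(width), rng.getrandbits(width), width)
               <= chain_limit for _ in range(trials))
    return hits / trials

print(speculation_hit_rate())   # typically very close to 1.0 for 64-bit operands
```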
A design of complex square root for FPGA implementation
Dong Wang, Milos D. Ercegovac
We present a design for FPGA implementation of a complex square root algorithm for fixed-point operands in radix-4 representation. The design consists of (i) argument prescaling, (ii) residual recurrence, and (iii) result postscaling. These parts share logic resources and optimize the use of resources on the FPGA devices used for implementation. Table-building methods for prescaling and postscaling are analyzed and efficient design approaches are discussed. The design is implemented on an Altera Stratix-II FPGA for several argument precisions and compared in cost, latency, and power with an IP-based design. The results show advantages of the proposed design in cost, delay, and power.
Floating-point arithmetic in embedded and reconfigurable computing systems
Syed Gilani, Michael Schulte, Katherine Compton, et al.
Modern embedded and reconfigurable systems need to support a wide range of applications, many of which may significantly benefit from hardware support for floating-point arithmetic. Some of these applications include 3D graphics, multiple-input multiple-output (MIMO) wireless communication algorithms, orthogonal frequency division multiplexing (OFDM) based systems, and digital filters. Many of these applications have real-time constraints that cannot tolerate the high latency of software-emulated floating-point arithmetic. Moreover, software emulation can lead to higher energy consumption that may be unsuitable for applications in power-constrained environments. This paper examines applications that can potentially benefit from hardware support for floating-point arithmetic and discusses some approaches taken for floating-point arithmetic in embedded and reconfigurable systems. Precision and range analysis is performed on emerging applications in the MIMO wireless communications domain to investigate the potential for low-power floating-point units that utilize reduced precision and exponent range.
Optimizing elliptic curve scalar multiplication for small scalars
Pascal Giorgi, Laurent Imbert, Thomas Izard
On an elliptic curve, the multiplication of a point P by a scalar k is defined by a series of operations over the field of definition of the curve E, usually a finite field Fq. The computational cost of [k]P = P + P + ... + P (k times) is therefore expressed as the number of field operations (additions, multiplications, inversions). Scalar multiplication is usually computed using variants of the binary algorithm (double-and-add, NAF, wNAF, etc.). If s is a small integer, optimized formulas for [s]P can be used within an s-ary algorithm or with double-base methods with bases 2 and s. Optimized formulas exist for very small scalars (s ≤ 5); however, the exponential growth of the number of field operations makes their derivation a very difficult task when s > 5. We present a generic method to automate transformations of formulas for elliptic curves over prime fields in various systems of coordinates. Our method uses a directed acyclic graph structure to find possible common subexpressions appearing in the formula, together with several arithmetic transformations. It produces efficient formulas to compute [s]P for a large set of small scalars s. In particular, we present a faster formula for [5]P in Jacobian coordinates. Moreover, our program can produce code for various mathematical software packages (Magma) and libraries (PACE).
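For orientation, a generic double-and-add sketch over a short Weierstrass curve in affine coordinates (the paper works with optimized formulas in Jacobian coordinates, which avoid the modular inversions used here); the tiny curve parameters are illustrative only, not cryptographically meaningful.

```python
def ec_add(P, Q, a, p):
    """Add two affine points on y^2 = x^3 + ax + b (mod p); None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                          # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p      # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p             # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add computation of [k]P."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)                                # double
        if bit == '1':
            R = ec_add(R, P, a, p)                            # add
    return R

# Toy curve y^2 = x^3 + 2x + 3 over F_97; (3, 6) lies on it since 36 = 27 + 6 + 3 (mod 97).
p, a = 97, 2
print(scalar_mult(5, (3, 6), a, p))                           # [5]P
```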
High-speed floating-point divider with reduced area
This paper presents a new implementation of a floating-point divider unit with competitive performance and reduced area, based on proposed modifications to the recursive equations of the Goldschmidt algorithm. The Goldschmidt algorithm takes advantage of parallelism in the Newton-Raphson method while retaining the same quadratic convergence. However, the recursive equations in the Goldschmidt algorithm consist of a series of multiplications with full-precision operands, so it suffers from large area consumption. In this paper, the recursive equations are modified to replace the full-precision multipliers with smaller multipliers and squarers. Implementations of a floating-point reciprocal unit and divider using this modification are presented. Synthesis results show around 20% to 40% area reduction compared to an implementation based on the conventional Goldschmidt algorithm.
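A scalar sketch of the conventional Goldschmidt recurrence the paper starts from (not the modified equations it proposes): numerator and denominator are both multiplied by a correction factor 2 - d, and the error roughly squares at each step.

```python
def goldschmidt_divide(a, b, iterations=4):
    """Goldschmidt division a/b, assuming b has been scaled into [0.5, 1)."""
    n, d = a, b
    for _ in range(iterations):
        f = 2.0 - d              # correction factor
        n *= f                   # these two multiplications are independent,
        d *= f                   # which is the parallelism the hardware exploits
    return n

# Convergence check: the error roughly squares at every iteration.
a, b = 1.2345, 0.75
for it in range(1, 5):
    print(it, abs(goldschmidt_divide(a, b, it) - a / b))
```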
On the design of a radix-10 online floating-point multiplier
This paper describes an approach to the design and implementation of a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up tables) as well as the cycle time and total latency. The routing delay, which was not optimized, is the major component of the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
Pseudo-random generator based on Chinese Remainder Theorem
Jean Claude Bajard, Heinrich Hördegen
Pseudo-random generators (PRGs) are fundamental in cryptography. They are used at different levels in cipher protocols and need to satisfy certain properties to qualify as robust. NIST proposes criteria and a test suite that gives information on the behavior of a PRG. In this work, we present a PRG constructed from conversions between different residue systems representing the elements of GF(2)[X]. In this approach, we use pairs of co-prime polynomials of degree k and a state vector of 2k bits. The algebraic properties are broken by using different independent pairs during the process. Since the method is reversible, it can also be used as a symmetric cryptosystem. We evaluate the cost of such a system, taking into account that some operations are commonly implemented on crypto-processors. We give the results of the different NIST tests and explain this choice compared to others found in the literature. We describe the behavior of this PRG and explain how the different rounds are chained to ensure secure randomness.
Implementation of sort-based counters
Ryan Nett, Jay Fletcher, Earl E. Swartzlander Jr.
Binary sorting is a well-defined problem that has a range of proposed solutions. Svoboda showed how one particular sorting technique can be used to implement a parallel counter. Among other uses, this type of counter can be used to perform binary multiplication. This paper presents several extensions to Svoboda's work using multi-bit sorting groups as the base building blocks for sorting. Additionally, the existing bitonic sorting algorithm is modeled, synthesized, and compared to Svoboda sorting for a range of word sizes.
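A software sketch of the counting-by-sorting idea, assuming a power-of-two number of input bits: a recursive bitonic network sorts the bits into a thermometer code, and the count of ones is read off as the position of the 1-to-0 boundary. This illustrates the principle only; it is not Svoboda's circuit or the synthesized designs compared in the paper.

```python
def bitonic_merge(seq, ascending):
    """Merge a bitonic sequence with compare-exchange stages."""
    if len(seq) == 1:
        return seq
    half = len(seq) // 2
    seq = list(seq)
    for i in range(half):
        a, b = seq[i], seq[i + half]
        if (a > b) == ascending:                  # compare-exchange
            seq[i], seq[i + half] = b, a
    return bitonic_merge(seq[:half], ascending) + bitonic_merge(seq[half:], ascending)

def bitonic_sort(seq, ascending=True):
    """Recursive bitonic sort of a list whose length is a power of two."""
    if len(seq) <= 1:
        return list(seq)
    half = len(seq) // 2
    first = bitonic_sort(seq[:half], True)
    second = bitonic_sort(seq[half:], False)
    return bitonic_merge(first + second, ascending)

def sort_based_count(bits):
    """Sorted descending, the bits form a thermometer code 1...10...0; the boundary is the count."""
    ordered = bitonic_sort(bits, ascending=False)
    return ordered.index(0) if 0 in ordered else len(ordered)

print(sort_based_count([1, 0, 1, 1, 0, 1, 0, 0]))   # -> 4
```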
Arithmetic operators for on-the-fly evaluation of TRNGs
Renaud Santoro, Arnaud Tisserand, Olivier Sentieys, et al.
Many cryptosystems embed a high-quality true random number generator (TRNG). The randomness quality of a TRNG output stream depends on its implementation and may vary with changes in the environment such as power supply, temperature, or electromagnetic interference. Attacking a TRNG may be an effective way to decrease the security of a cryptosystem, leading for instance to weaker keys or bad padding values. In order to protect TRNGs, on-the-fly evaluation of their randomness quality must be integrated on the chip. In this paper, we present preliminary results of the FPGA implementation of functional units dedicated to statistical tests for on-the-fly randomness evaluation. The entropy test requires evaluating the harmonic series at certain ranks, and its approximation is usually costly. We propose a multiple-interval polynomial approximation: decomposing the whole domain into small sub-intervals leads to a good trade-off between the degree of the polynomial (i.e., multiplier cost) and the memory resources required to store the coefficients for all sub-intervals.
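A sketch of the multiple-interval polynomial approximation idea for the harmonic series, with an arbitrary range, interval count, and degree: one low-degree polynomial is fitted per sub-interval, trading polynomial degree (multiplier cost) against the number of stored coefficient sets (memory).

```python
import numpy as np

def harmonic(n):
    """Exact harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return float(np.sum(1.0 / np.arange(1, n + 1)))

def build_piecewise_tables(n_max=4096, intervals=8, degree=2):
    """Fit one degree-`degree` polynomial per sub-interval of [1, n_max]."""
    edges = np.linspace(1, n_max, intervals + 1).astype(int)
    tables = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.arange(lo, hi + 1)
        ys = np.array([harmonic(int(x)) for x in xs])
        tables.append((lo, hi, np.polyfit(xs, ys, degree)))
    return tables

def approx_harmonic(n, tables):
    for lo, hi, coeffs in tables:
        if lo <= n <= hi:
            return np.polyval(coeffs, n)
    raise ValueError("rank outside tabulated range")

tables = build_piecewise_tables()
worst = max(abs(approx_harmonic(n, tables) - harmonic(n)) for n in range(1, 4097, 37))
print(worst)   # more intervals or a higher degree tighten this worst-case error
```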
Time Frequency
Clutter model with dispersion
We describe clutter noise models that involve propagation effects, namely dispersion and frequency-dependent attenuation. By way of examples, we show that amplitude and phase are highly correlated, in contrast to the assumptions used in the derivation of the Rayleigh distribution. We also discuss the formulation of the clutter problem in phase space.
Moment feature variability in uncertain propagation channels
Greg Okopal, Patrick J. Loughlin
In underwater automatic target recognition via active sonar, the transmitted sonar pulse and the returning target backscatter can undergo significant distortion due to channel effects, such as frequency-dependent attenuation (damping) and dispersion, as well as random effects due to noise and other channel variability. These propagation-induced effects can be detrimental to classification because the observed backscatter depends not only on the target but also on the propagation environment and on how far the wave has traveled, resulting in increased variability in the received sonar signals. Using a recently developed phase-space approximation for dispersive propagation, we present a method for analyzing these effects on temporal and spectral moment features of the propagating signal, including uncertainty in certain channel parameters, in particular target distance.
GMSK co-channel demodulation
D. J. Nelson, J. R. Hopkins
Gaussian Minimum Shift Keying (GMSK) is a modulation method used by GSM phone networks and by the Automatic Identification System (AIS) used by commercial ships. Typically these systems transmit data in short bursts and accommodate a large number of users by time, frequency, and power management. Co-channel interference is not a problem unless the system is heavily loaded; the system load is a function of the density of users and the footprint of the receiver. We consider the problem of demodulation of burst GMSK signals in the presence of severe noise and co-channel interference. We further examine the problem of signal detection and blind estimation and tracking of all of the parameters required in the demodulation process. These parameters include carrier frequency, carrier phase, baud rate, baud phase, modulation index, and the start and duration of the signal.
A comparison of two methods for demodulating a target AIS signal through a collision with an interfering AIS signal
Automatic Identification Systems (AIS) are commonly used in navigation for collision avoidance, and AIS signals (GMSK modulation) contain a vessel's identity, position, course, and speed - information which is also vital in safeguarding U.S. ports. AIS systems employ Self-Organizing Time Division Multiple Access (SOTDMA) regions in which users broadcast in dedicated time slots to prevent AIS collisions. However, AIS signals broadcast from outside a SOTDMA region may collide with those originating inside, so demodulation in co-channel interference is desirable. In this article we compare two methods for performing such demodulation. The first method involves Laurent's amplitude-modulated pulse (AMP) decomposition of constant-amplitude binary phase-modulated signals. Kaleh has demonstrated that this method is highly accurate for demodulating a single GMSK signal in additive white Gaussian noise (AWGN). Here we evaluate the performance of this Laurent-Kaleh method for demodulating a target AIS signal through a collision with an interfering AIS signal. We also introduce a second, far simpler demodulation method which employs a set of filters matched to the tribit states and phases of GMSK signals. We compute the bit error rate (BER) for these two methods in demodulating a target AIS signal through a collision with another AIS signal, both as a function of the signal-to-interference ratio (SIR) and as a function of the carrier frequency difference (CFD) between the two signals. Our experiments show that there is no outstanding advantage for either method over a wide range of SIR and CFD values. However, the matched filter approach is conceptually much simpler and easier to motivate and implement, while the Laurent-Kaleh method involves a highly complex and non-intuitive signal decomposition.
All-pole and all-zero models of human and cat head related transfer functions
Head-related transfer functions (HRTFs) are generally measured at a finite set of locations, so models are needed to synthesize HRTFs at all other locations, and at finer resolution than the measured data, to create complete virtual auditory displays (VADs). In this paper, real cepstrum analysis is used to represent minimum-phase HRTFs in the time domain. Minimum-phase all-pole and all-zero models are presented to model DTFs, the directional components of HRTFs, with the redundant wavelet transform used for spectral smoothing. Modeling only the direction-dependent component of the HRTFs and using a suitable smoothing technique permit modeling with low-order filters. The linear prediction coefficients technique was used to find the all-pole model coefficients, while the coefficients of the all-zero models were obtained by using a rectangular window to truncate the original impulse response of the measured DTFs. These models are applied and evaluated on human and cat HRTFs. Model orders were chosen according to error-criterion comparisons with previously published studies that were supported by human subjective tests, and according to their ability to preserve the main spectral features that provide the critical cues to sound source location. All-pole and all-zero models of orders as low as 25 successfully modeled the DTFs. Both models presented in this study showed promising, tractable, systematic movements of the model poles and zeros with changes in sound source direction that may be used to build future models.
Conditional and joint positive time-frequency distributions: a brief overview
We give a brief review of ideas and insights derived from the study and application of positive time-frequency distributions, in the nearly 25 years since their formulation by Cohen, Posch and Zaparovanny. Associated topics discussed include instantaneous frequency and conditional moments, the "time varying spectrum" and joint versus conditional distributions, the uncertainty principle, kernel design, cross terms, AM-FM signal decompositions, among others. Many of the conventionally held ideas in time-frequency analysis are challenged by results from positive time-frequency distributions.
Implementation I
White balance in a color imaging device with electrically tunable color filters
G. Langfelder, F. Zaraga, A. Longoni
A new method for white balance, which compensates for changes in the illuminant spectrum by changing the native chromatic reference system accordingly, is presented. A set of base color filters is selected in the sensor according to the scene illuminant, in order to keep the chromatic components of a white object independent of the illuminant. By contrast, conventional white balance methods do not change the native color space, but change the chromatic coordinates in order to adjust the white vector direction within the same space. The development over the last ten years of CMOS color sensors for digital imaging whose color reconstruction principle is based on the absorption properties of silicon, rather than on the presence of color filters, makes the new method applicable in a straightforward manner. An implementation of this method with the Transverse Field Detector, a color pixel with electrically tunable spectral responses, is discussed. The experimental results show that this method is effective for scene illuminants ranging from standard D75 to standard A (i.e., for scene correlated color temperatures from 7500 K to 2850 K). The color reconstruction error specific to each set of electrically selected filters, measured in a perceptual color space after the subsequent color correction, does not change significantly over the tested tuning interval.
Implementation II
Fast computation of local correlation coefficients on graphics processing units
Georgios Papamakarios, Georgios Rizos, Nikos P. Pitsianis, et al.
This paper presents an acceleration method, using both algorithmic and architectural means, for fast calculation of local correlation coefficients, a basic image-based information processing step for template or pattern matching, image registration, motion or change detection and estimation, compensation of changes, or compression of representations, among other information processing objectives. For real-time applications, the complexity in arithmetic operations as well as in programming and memory access latency has been a divisive issue between the so-called correlation-based methods and the Fourier-domain methods. In the presented method, the complexity of calculating local correlation coefficients is reduced via an equivalent reformulation that leads to efficient array operations or enables the use of multi-dimensional fast Fourier transforms, without losing or sacrificing local and non-linear changes or characteristics.
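One standard reformulation of local correlation coefficients in this spirit (a CPU/numpy sketch, not the authors' GPU code): the per-window sums are rewritten as convolutions of the image with the zero-mean template and with an all-ones window, which can then be evaluated with FFTs.

```python
import numpy as np
from scipy.signal import fftconvolve

def local_corrcoef(image, template, eps=1e-12):
    """Map of local correlation coefficients between `image` windows and `template`."""
    t = template - template.mean()
    n = template.size
    ones = np.ones_like(template)
    # Cross-correlation of the image with the zero-mean template (flipped kernel).
    num = fftconvolve(image, t[::-1, ::-1], mode='valid')
    # Sliding sums give the local mean/energy of the image under each window.
    s1 = fftconvolve(image, ones, mode='valid')
    s2 = fftconvolve(image ** 2, ones, mode='valid')
    image_var = np.clip(s2 - s1 ** 2 / n, 0, None)
    denom = np.sqrt(image_var * (t ** 2).sum())
    return num / (denom + eps)

# Usage: the peak of the map recovers the location a template was cut from.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
tpl = img[40:56, 60:76]
print(np.unravel_index(np.argmax(local_corrcoef(img, tpl)), (113, 113)))  # -> (40, 60)
```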
Automated optimization of look-up table implementation for function evaluation on FPGAs
L. Deng, C. Chakrabarti, N. Pitsianis, et al.
This paper presents a systematic approach for the automatic generation of look-up tables (LUTs) for function evaluation, with minimization of hardware resources, on field-programmable gate arrays (FPGAs). The class of functions supported by this approach includes sine, cosine, exponentials, Gaussians, the central B-splines, and certain cylinder functions that are frequently used in signal, image, and data processing applications. In order to meet customer requirements in accuracy and speed as well as constraints on the use of area and on-chip memory, the function evaluation is based on numerical approximation with Taylor polynomials. Customized data precisions are supported in both fixed-point and floating-point representations. The optimization procedure involves a search in the three-dimensional design space of data precision, sampling density, and approximation degree. It utilizes both model-based estimates and gradient-based information gathered during the search. The approach was tested with actual synthesis results on the Xilinx Virtex-2 Pro FPGA platform.
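A minimal model of the piecewise-Taylor LUT idea, with an assumed function (sine), range, segment count, and degree: Taylor coefficients are tabulated at segment centers and evaluated with Horner's rule, and the printed maximum error shows the precision obtained for this particular choice of sampling density and degree.

```python
import math
import numpy as np

def build_taylor_lut(derivs, lo, hi, segments):
    """Tabulate Taylor coefficients at the center of each segment.

    derivs: callables [f, f', f'', ...]; their number fixes the approximation degree.
    """
    centers = lo + (np.arange(segments) + 0.5) * (hi - lo) / segments
    table = np.array([[d(c) / math.factorial(k) for k, d in enumerate(derivs)]
                      for c in centers])
    return centers, table

def lut_eval(x, lo, hi, centers, table):
    """Piecewise Taylor evaluation: pick the segment, then run Horner on (x - center)."""
    x = np.asarray(x, dtype=float)
    seg = np.clip(((x - lo) / (hi - lo) * len(centers)).astype(int), 0, len(centers) - 1)
    dx = x - centers[seg]
    y = np.zeros_like(dx)
    for k in range(table.shape[1] - 1, -1, -1):
        y = y * dx + table[seg, k]
    return y

# Degree-2 approximation of sin on [0, pi/2] with 16 stored coefficient sets.
derivs = [np.sin, np.cos, lambda t: -np.sin(t)]
centers, table = build_taylor_lut(derivs, 0.0, np.pi / 2, segments=16)
x = np.linspace(0.0, np.pi / 2, 1000)
print(np.max(np.abs(lut_eval(x, 0.0, np.pi / 2, centers, table) - np.sin(x))))
```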
A scalable multi-FPGA framework for real-time digital signal processing
K. M. Irick, M. DeBole, S. Park, et al.
FPGAs have emerged as the preferred platform for implementing real-time signal processing applications. In sub-45 nm technologies, FPGAs offer significant cost and design-time advantages over application-specific custom chips and consume significantly less power than general-purpose processors while maintaining or improving performance. Moreover, FPGAs are more advantageous than GPUs in their support for control-intensive applications, custom bit-precision operations, and diverse system interface protocols. Nonetheless, a significant inhibitor to the widespread adoption of FPGAs has been the expertise required to effectively realize functional designs that maximize application performance. While there have been several academic and commercial efforts to improve the usability of FPGAs, they have primarily focused on easing the tasks of an expert FPGA designer rather than increasing the usability offered to an application developer. In this work, the design of a scalable algorithmic-level design framework for FPGAs, AlgoFLEX, is described. AlgoFLEX offers rapid algorithmic-level composition and exploration while maintaining the performance realizable from a fully custom, albeit difficult and laborious, design effort. The framework masks aspects of accelerator implementation, mapping, and communication while exposing appropriate algorithm tuning facilities to developers and system integrators. The effectiveness of the AlgoFLEX framework is demonstrated by rapidly mapping a class of image and signal processing applications to a multi-FPGA platform.
Conditioning properties of the LLL algorithm
Although the LLL algorithm [1] was originally developed for lattice basis reduction, the method can also be used [2] to reduce the condition number of a matrix. In this paper, we propose a pivoted LLL algorithm that further improves the conditioning. Our experimental results demonstrate that this pivoting scheme works well in practice.
Hybrid Signal/Image Processing: Joint Session With Conference 7442
Image denoising and quality assessment through the Rényi entropy
Salvador Gabarda, Raphael Redondo, Elena Gil, et al.
This paper presents a new image denoising method based on truncating the original noisy coefficients of a pseudo-Wigner distribution (PWD) calculated through 1D directional windows. The method has been tested on images affected by both additive and multiplicative noise. The coefficients are selected according to their local directionality to take the image anisotropy into account. Next, the PWD is inverted and the set of different directional images is averaged. When a ground-truth reference image is available, the peak signal-to-noise ratio (PSNR) metric is used to evaluate the resulting denoised images in comparison with other alternative methods. The described method is based on the use of the Rényi entropy extracted from a joint spatial-frequency representation such as the Wigner distribution. A comparison with other competitive techniques is described and tested on real-world images. In particular, experimental results are presented for synthetic aperture radar (SAR) and retinal imaging, showing the effectiveness of the method in comparison with other alternative techniques through the use of two different no-reference image quality metrics.
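For reference, a small sketch of the Rényi entropy measure the method relies on, applied to any non-negative, normalized joint space-frequency map; the order α = 3 and the toy distributions are assumptions for illustration.

```python
import numpy as np

def renyi_entropy(distribution, alpha=3.0, eps=1e-12):
    """Rényi entropy R_alpha = 1/(1 - alpha) * log2( sum P^alpha ) of a normalized map."""
    p = np.abs(np.asarray(distribution, dtype=float))   # Wigner-type maps may be signed
    p = p / max(p.sum(), eps)                            # normalize to a probability-like map
    return np.log2(np.power(p + eps, alpha).sum()) / (1.0 - alpha)

# A peaky (well-organized) distribution has lower Rényi entropy than a flat one.
peaky = np.zeros((16, 16))
peaky[8, 8] = 1.0
flat = np.ones((16, 16))
print(renyi_entropy(peaky), renyi_entropy(flat))   # -> roughly 0 and 8
```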
Restoration of dichromatic images gained from CCD/CMOS camera by iterative detection networks with fragmented marginalization at the symbol block level
Image capture by CCD/CMOS cameras is encumbered by two fundamental perturbing influences: time-invariant blurring (image convolution with a fixed kernel) and time-variant noise. Both influences can be successfully removed by iterative detection networks (IDNs), which effectively and suboptimally (iteratively) solve the 2D MAP criterion by decomposing the image into small areas, preferably down to the individual pixel level if the noise distribution allows it (statistically independent noise). Nevertheless, this task is extremely demanding numerically, and contemporary IDNs are therefore limited to the restoration of dichromatic images. IDNs are composed of statistical devices (SISO modules) that are kept as simple as possible, and they fall into two basic groups: those with variable topology (exactly matched to the blurring kernel) and those with fixed topology, the same for all possible kernels. The paper deals with the second group of IDNs, specifically IDNs whose SISO modules are concatenated in three directions (horizontal, vertical, and diagonal). The advantages of this ordering are application flexibility (it can be applied comfortably to many irregular kernels) and low demands on the number of memory devices in the IDN. This IDN type is implemented in two different variants that suppress defocusing in the lens of a CCD/CMOS sensing system, and it is verified on dichromatic 2D barcode detection.
Image capturing by CCD/CMOS cameras is encumbered with two fundamental perturbing influences. Time invariant blurring (image convolution with fixed kernel) and time variant noises. Both of these influences can be successfully eliminated by the iterative detection networks (IDNs), that effectively and suboptimally (iteratively) solve 2D MAP criterion through the image decomposition to the small areas. Preferably to the individual pixel level, if this allows the noise distribution (statistically independent noise). Nevertheless, this task is so extremely numerically exacting and therefore the contemporary IDNs are limited only for restorations of dichromatic images. The IDNs are composed of certain, as simple as possible, statistical devices (SISO modules) and can be separated into two basic groups with variable topology (exactly matched to the blurring kernel) and with fixed topology, same for all possible kernels. The paper deals with second group of IDNs, concretely with IDNs whose SISO modules are concatenated in three directions (horizontal, vertical and diagonal). Advantages of such ordering rests in the application flexibility (can be comfortable applied to many irregular cores) and also in the low exigencies to number of memory devices it the IDN. The mentioned IDN type will be implemented in the two different variants suppressing defocusing in the lens of CCD/CMOS sensing system and will be verified in the sphere of a dichromatic 2D barcode detection.