A biologically inspired approach to signal compression

Using lossy compression theory to model how the brain receives sensory data may suggest new approaches to image and signal compression.
27 February 2007, SPIE Newsroom. DOI: 10.1117/2.1200702.0644

Stimuli observed by our sensory systems are communicated to the brain by short-duration electrical pulses known as action potentials, or spikes.1 Spike trains are generated by specialized sensory cells and then propagate to the brain along nerve fibers, and different neural systems encode information in spike trains in a variety of ways. Mathematical and engineering approaches are increasingly helping neuroscientists understand and quantify these neural coding mechanisms. Given that biology solves many tasks far better than our current technology, knowledge gained from this computational-neuroscience approach should lead to novel bio-inspired and biomimetic engineering.

Despite many previous studies of information transmission in neural systems, one aspect, lossy compression, has been largely unexplored. Unlike lossless compression (e.g., zipping computer files), lossy compression deliberately discards information in order to reduce storage or communication costs. Decompressing the result gives data that, although distorted, is close enough to the original to remain useful; familiar examples include the JPEG and MP3 standards. The goal of an optimized data-acquisition system is to provide only the minimum amount of information required to achieve a given task, and one suspects that biology has found many near-optimal solutions to this problem.
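As a toy illustration (my own sketch, not an example from the research described here), uniformly quantizing a sampled signal is about the simplest possible lossy scheme: coarser quantization stores fewer bits per sample at the price of a larger reconstruction error.

```python
import numpy as np

# Minimal lossy-compression sketch: a k-bit uniform quantizer over [-1, 1].
# Quantizing discards information; 'decompressing' (mapping codes back to
# quantizer levels) recovers a distorted but still recognizable signal.
x = np.sin(np.linspace(0, 2 * np.pi, 1000))         # original signal in [-1, 1]

for bits in (2, 4, 8):
    step = 2.0 / 2 ** bits                           # quantizer step size
    x_hat = np.clip(np.round(x / step) * step, -1, 1)
    mse = np.mean((x - x_hat) ** 2)
    print(f"{bits} bits/sample -> MSE = {mse:.2e}")  # MSE falls as rate rises
```

Each run prints a smaller mean square error as the bit budget grows, which is the distortion-versus-rate trade-off in its plainest form.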

The dual aims of my research are to use information-theoretic tools to model sensory neural systems as lossy compression schemes, and to design artificial compression methods inspired by this modeling. Any such method is likely to be useful in environments resembling those faced by biological systems, such as low-power distributed sensor networks making noisy measurements. Designing a lossy compression method involves two conflicting requirements: minimizing the distortion and maximizing the amount of compression. My initial work has quantified this trade-off between distortion and compression in simple neural ‘rate-coding’ models.2
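The classical benchmark for this trade-off is the rate-distortion function of information theory. For a memoryless Gaussian source with variance $\sigma^2$ under the MSE criterion it has a well-known closed form (a textbook result, quoted here for orientation rather than taken from the neural models):

$$D(R) = \sigma^2 \, 2^{-2R},$$

where $R$ is the rate in bits per sample. Every additional bit of rate therefore reduces the minimum achievable MSE by a factor of four, about 6 dB.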

Although some neurons accurately encode stimulus features in the precise intervals between spikes, such timing codes fail for time-varying analog stimuli, owing to non-zero refractory times (the minimum time between spikes) and random noise. Instead, the average spike rate encodes stimulus strength: a weak stimulus elicits only a few spikes, while a strong stimulus generates many. Some information is necessarily lost, because an average can be formed only from a finite number of spikes. Measuring the information loss incurred in rate coding requires defining a measure of distortion, and this is one of the main unsolved questions in this kind of computational-neuroscience research: what is an appropriate measure of distortion?
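A small simulation makes the loss concrete. Purely for illustration, assume Poisson spiking at a rate proportional to stimulus strength (a common simplification, not one of the models used in the work described here), with squared error as a provisional yardstick: the decoder's error shrinks as the counting window lengthens, but never vanishes for a finite window.

```python
import numpy as np

# Hypothetical rate-coding sketch: spikes arrive as a Poisson process with
# rate gain * s; the decoder estimates the stimulus s from the spike count
# observed in a window of duration T. All parameter values are assumed.
rng = np.random.default_rng(1)
gain = 100.0                                   # spikes/s per unit stimulus
s = rng.uniform(0.2, 1.0, size=50_000)         # random stimulus strengths

for T in (0.01, 0.1, 1.0):                     # counting-window durations (s)
    counts = rng.poisson(gain * s * T)         # spikes counted in the window
    s_hat = counts / (gain * T)                # decode: spike rate / gain
    print(f"T = {T:5.2f} s -> MSE = {np.mean((s - s_hat) ** 2):.2e}")
```

The error falls roughly in inverse proportion to the window duration, since longer windows average over more spikes.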

In initial studies,2 I sidestepped this question by adopting a standard information-theoretic measure of distortion: mean square error (MSE). I have completed simulations in which I treat a single FitzHugh-Nagumo neuron model as a scalar quantizer, a specific form of lossy compression that also occurs in analog-to-digital converter (ADC) circuits, and measure the theoretically smallest MSE achievable under various assumptions about timescales and noise. This approach draws on results on a phenomenon called suprathreshold stochastic resonance,3,4 which originated in statistical physics.
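The essence of that setup can be sketched without the FitzHugh-Nagumo dynamics. In the simplest suprathreshold stochastic resonance model, N identical threshold units, each perturbed by independent noise, observe a common stimulus, and the population spike count acts as the quantizer output. The sketch below is an illustrative simplification with assumed parameters, not a reproduction of my simulations; its point is that the decoding MSE is smallest at a non-zero noise level.

```python
import numpy as np

# Suprathreshold stochastic resonance, minimal form: N identical binary
# threshold units with independent noise jointly quantize an input x into
# a population count n in {0, ..., N}. All parameter values are assumed.
rng = np.random.default_rng(2)
N = 31                                         # number of threshold units
x = rng.normal(0.0, 1.0, size=100_000)         # Gaussian stimulus samples

for sigma in (0.01, 0.3, 1.0, 3.0):            # internal noise level per unit
    noise = rng.normal(0.0, sigma, size=(len(x), N))
    n = np.sum(x[:, None] + noise > 0.0, axis=1)    # spike count across units
    # Decode with the empirical conditional mean E[x | n] for each count.
    x_hat = np.zeros_like(x)
    for k in range(N + 1):
        mask = n == k
        if mask.any():
            x_hat[mask] = x[mask].mean()
    print(f"noise sigma = {sigma:.2f} -> MSE = {np.mean((x - x_hat) ** 2):.3f}")
```

With no noise, all units fire together and only one bit is conveyed; moderate noise diversifies the units' responses and lowers the MSE; excessive noise raises it again.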

Three timescales determine the MSE performance: the refractory time; the duration of a counting window (the biophysical memory); and the stimulus correlation time (how rapidly the stimulus varies). Together, these timescales define the achievable resolution of rate coding. If biologically relevant assumptions are made about these timescales, or if they are known experimentally, quantitative comparisons of the compression performance of different sensory systems can be made.
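A back-of-envelope version of this bound, using purely hypothetical numbers: the refractory time caps how many spikes fit in one counting window, which in turn caps the number of distinguishable count levels.

```python
import math

# Hypothetical resolution bound: a counting window of duration T_window can
# hold at most T_window / t_refractory spikes, so noise-free rate coding can
# distinguish at most that many + 1 count levels (counts 0..max_spikes).
t_refractory = 2e-3                            # s, minimum inter-spike interval
T_window = 100e-3                              # s, biophysical counting window
max_spikes = int(T_window / t_refractory)      # 50 spikes per window
levels = max_spikes + 1
print(f"{levels} levels ~ {math.log2(levels):.1f} bits per window")
```

Noise and a short stimulus correlation time only reduce this figure further, which is why all three timescales enter the MSE calculation.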

Part of the motivation for this work is to understand how sensory neurons efficiently encode stimuli even when signal-to-noise ratios are very low.1 My lossy compression approach will be extended by studying more biologically realistic models, such as models of hair cells in the auditory system, and by further assessing the effects of noise. In particular, random noise in the form of dither signals can be beneficial in ADCs and in image processing, and it is anticipated that random noise in neural coding confers similar benefits for the lossy compression of sensory stimuli. Studies of this type may help resolve the question of whether, and when, stochastic resonance is used by neurons.1
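The ADC analogy can be made concrete with the textbook dither effect (again a generic illustration, not one of the planned neural models): a constant input smaller than one quantizer step is invisible to a coarse quantizer on its own, but adding noise before quantization and averaging the outputs recovers it.

```python
import numpy as np

# Classic dither demonstration: averaging many dithered quantizations of a
# sub-step constant recovers its value; the undithered quantizer cannot.
rng = np.random.default_rng(3)
step = 1.0                                     # quantizer step size

def quantize(v):
    return np.round(v / step) * step           # mid-tread uniform quantizer

x = 0.3                                        # true value, below half a step
plain = quantize(np.full(10_000, x)).mean()    # always rounds to 0.0
dither = rng.uniform(-step / 2, step / 2, size=10_000)
dithered = quantize(x + dither).mean()         # mean converges to ~0.3
print(f"without dither: {plain:.3f}, with dither: {dithered:.3f}")
```

The same averaging role is played in rate coding by the spike count, which is one reason to expect noise to be similarly useful there.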


Mark D. McDonnell
School of Electrical and Electronic Engineering,
The University of Adelaide
Adelaide, Australia 

Mark D. McDonnell is currently a postdoctoral research fellow in the School of Electrical and Electronic Engineering at the University of Adelaide. He was awarded his PhD (summa cum laude) from the same school in August 2006, and received the Alumni Postgraduate University Medal for his thesis. A book on the topic of stochastic resonance is scheduled for publication by Cambridge University Press in 2007.