
Electronic Imaging & Signal Processing

Biological color vision inspires artificial color processing

Artificial color, a new spectral-discrimination approach for cameras inspired by color vision in nature, offers advantages over other methods such as hyperspectral imaging.
17 April 2006, SPIE Newsroom. DOI: 10.1117/2.1200603.0099

Simple cameras that use only a few bands do not offer the wonderful spectral discrimination capability of hyperspectral cameras that record a full spectrum at each pixel. Unfortunately, hyperspectral cameras are inevitably more complicated, more expensive, and less sensitive (because they share the available light per pixel among many detections) than simpler cameras. In nature, on the other hand, many creatures have excellent spectral discrimination using only a few channels. We can learn from the color vision of biological organisms how to produce artificial-color systems that have the discrimination capabilities of hyperspectral cameras as well as the simplicity and sensitivity of the throw-away cameras you can buy in a store.

In biological color, the scene is detected using broad, overlapping spectral-sensitivity curves. One curve won't do, because it produces no spectral contrast. That is why we see no color at night, when only one kind of sensor cell (our rods) is working. Humans use from two to four types of cone cells for color vision. We call those who use only two 'color blind' and those who use three 'normal.' Perhaps 10% of people (all of them women) use four kinds of cone cells. Mantis shrimp use about a dozen kinds.
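The value of a second, overlapping channel can be sketched numerically. In this toy model (all curves and spectra are invented for illustration, not measured data), two broad Gaussian sensitivity curves respond to two different reflectance spectra; a single channel's total response barely separates them, but the ratio of the two channels' responses does:

```python
# Toy illustration: two broad, overlapping Gaussian sensitivity curves
# (centers and widths chosen arbitrarily) sampled over the visible band.
import math

wavelengths = list(range(400, 701, 10))  # nm, 400-700 in 10 nm steps

def sensitivity(lam, center, width=80.0):
    """Broad Gaussian sensitivity curve (a stand-in for a cone response)."""
    return math.exp(-((lam - center) / width) ** 2)

def channel_response(spectrum, center):
    """Integrate a spectrum against one sensitivity curve (Riemann sum)."""
    return sum(spectrum(l) * sensitivity(l, center) for l in wavelengths)

# Two invented reflectance spectra: a narrow "leaf" peak and a broader,
# slightly blue-shifted "frog" peak sitting on a flat baseline.
leaf = lambda l: math.exp(-((l - 550) / 40.0) ** 2)
frog = lambda l: 0.6 * math.exp(-((l - 530) / 60.0) ** 2) + 0.2

for name, spec in [("leaf", leaf), ("frog", frog)]:
    short = channel_response(spec, 520)   # greener channel
    long_ = channel_response(spec, 580)   # redder channel
    print(name, round(short / long_, 3))  # the ratio carries the contrast
```

With one channel there is only a brightness number; with two overlapping channels, the ratio becomes a crude spectral discriminant of exactly the kind the text describes.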

Our brains compute a Bayesian model of the object that must be out there in the world to have caused the detected pattern on our retinas. The brain uses both the pixel values from the various cone cells and the spatial content of the scene—plus expectations, fears, and so forth—to compute generally useful spectral discriminants, and then attributes them (now called colors) to the Bayesian model. We perceive that model. It is the percept, not the object, that has color. In the artificial-color method,1,2 we use two or three spectrally overlapping sensitivity curves to gather information, process the resulting data electronically, and attribute the discriminant to the object. Ideally, the sensitivity curves should be problem-specific, but I will illustrate here with the familiar RGB (red-green-blue) channels of a color camera.

Consider Figure 1, taken with an ordinary digital color camera (whose red, green, and blue filters overlap considerably). The frog is doing a very good job blending in with its background. We took 20 pixels at random from four regions: frog, dark green leaves, light green leaves, and dark shadows. Pattern-recognition experts will appreciate that 20 is a very small number, but it allows us to show how powerful artificial color is. We then trained our algorithm to recognize those pixels judged spectrally to belong to the class ‘frog but not dark leaves and not light leaves and not dark shadows.’ This is a very specialized color that will not be suitable for other problems. We then retained the image pixels with that artificial color and set the other pixels to white.
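The extraction step above can be sketched in a few lines. The training pixels and RGB values below are synthetic stand-ins (the article does not give its data), and a nearest-centroid rule substitutes for the authors' unspecified trained algorithm; the mechanics—train on labeled pixels, keep the 'frog' class, whiten the rest—are the same:

```python
# Minimal sketch of per-pixel artificial-color classification.
# Training data and the nearest-centroid rule are illustrative assumptions.

def centroid(samples):
    """Mean RGB value of a list of (r, g, b) training pixels."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def classify(pixel, centroids):
    """Assign a pixel to the class with the nearest RGB centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: dist2(pixel, centroids[name]))

# The article used 20 pixels per region; a few synthetic ones show the idea.
training = {
    "frog":         [(90, 120, 60), (95, 118, 62), (88, 125, 58)],
    "dark_leaves":  [(30, 60, 25), (28, 55, 22)],
    "light_leaves": [(120, 180, 90), (125, 175, 95)],
    "shadow":       [(10, 12, 8), (12, 15, 10)],
}
centroids = {name: centroid(px) for name, px in training.items()}

def extract_frog(image):
    """Keep pixels whose artificial color is 'frog'; set the rest to white."""
    return [[px if classify(px, centroids) == "frog" else (255, 255, 255)
             for px in row] for row in image]
```

Note that the classifier is deliberately one-sided: it only has to separate 'frog' from the three background classes present in this scene, which is why such a tiny training set suffices.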

Figure 1. This frog has evolved a color that hides it well from predators with vision like ours, which makes it hard to spot in this ordinary color image.

Figure 2 shows the frog extracted from its background using artificial color alone. Clearly we have recognized what amounts to an arbitrary set strictly on the basis of its red-green-blue components.

Figure 2. Our artificial-color method allowed us to extract the frog from its camouflaging background.

Of course, we do not have to work only in the visible.3 We can design curves that allow useful discrimination in the infrared, for instance. In other work, we have improved the images with rank-order filters and with mathematical morphology,4 applied logic to the binary filters,5 applied fuzzy-logic techniques to the classification mechanism, and applied our artificial-color method successfully to test problems in biometrics,6 passport control, counterfeit-currency detection, and oil discrimination.
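One of the post-processing steps mentioned, rank-order filtering, can be sketched as a 3x3 median filter applied to the binary artificial-color mask; this is an illustrative stand-in for the authors' pipeline, not their exact method. The median removes isolated misclassified pixels while leaving solid regions intact:

```python
# Illustrative 3x3 median (rank-order) filter on a binary mask,
# with edge handling by replicating border cells.

def median3x3(mask):
    """Replace each cell with the median of its 3x3 neighborhood."""
    h, w = len(mask), len(mask[0])

    def at(r, c):
        # Clamp out-of-range indices to the nearest valid cell.
        return mask[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]

    out = []
    for r in range(h):
        row = []
        for c in range(w):
            vals = sorted(at(r + dr, c + dc)
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            row.append(vals[4])  # middle of the 9 sorted values
        out.append(row)
    return out
```

An isolated 'frog' pixel surrounded by background is voted down by its eight neighbors, so speckle noise in the classified image disappears after one pass.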

H. John Caulfield
Alabama A&M University Research Institute
Normal, AL
Professor H. John Caulfield is Chief Scientist of the Alabama A&M University Research Institute. He has published numerous books, book chapters, and journal papers. He has been or is now editor or editorial board member of 15 journals and has chaired international meetings for SPIE, OSA, IEEE, Gordon Research Conferences, and the Engineering Foundation. In addition, Dr. Caulfield is an SPIE Fellow and has won many awards from SPIE, including the President's Award, the Governor's Award, the Dennis Gabor Award, and the Gold Medal. He is a former editor of Optical Engineering.

1. H. J. Caulfield, Artificial Color, Vol. 51, pp. 463-465, 2003.
2. J. Fu, H. J. Caulfield, S. R. Pulusani, Artificial Color Vision: a preliminary study, J. Electronic Imaging, Vol. 13, pp. 553-558, 2004.
3. J. Fu, H. J. Caulfield, Artificial and biological color band design as spectral compression, Image and Vision Computing, Vol. 23, no. 8, pp. 761-766, 2005.
4. J. Fu, H. J. Caulfield, T. Mizell, Applying Median Filtering with Artificial Color, J. Imaging Science and Technology, Vol. 49, no. 5, pp. 498-504, 2005.