Optical Design & Engineering
New diffractive optical data on the retinex mechanism in color vision
Understanding the diffractive optical system of the human eye provides a new interference-based interpretation of experiments on paradoxically colored shadows and of Edwin Land's retinex data.
25 January 2007, SPIE Newsroom. DOI: 10.1117/2.1200611.0505
The perception of color by the human eye is based on additive trichromatic (red, green, and blue, or RGB) processing. The mechanism adapts to illuminants: this color constancy allows perceived colors to remain relatively constant under varying illumination conditions.
Within the framework of the NAMIROS project,1 we are focusing our efforts on verifying whether the diffractive-optical hardware located in the aperture and image space of the human eye could account for adaptive performance as well as for color processing in opponent color space.2 Part of the project concerns the analysis of the data available for paradoxically colored shadows in twilight and of Edwin Land's retinex data. (‘Retinex’ refers to data processed by the retina and the cortex.)
Human color vision involves reducing the visible spectrum (380–760nm) to a triplet of RGB sensitivity curves, described by overlapping Gaussian curves with maxima (λmax) at 559nm (R), 537nm (G), and 447nm (B) under medium-sunlight adaptation conditions. A white ‘physics’ equienergy spectrum is thus transformed into the RGB equilibrium known to physiology. In 1903, A. König was the first to report that, under yellowish gaslight adaptation, the RGB Gaussian curves shift in register along the spectral intensity distribution in the visible.3 He also showed that the 25:24:20 ratio observed for the λmax triplet remains nearly constant as the shift proceeds, namely from 571nm (R), 549nm (G), and 450nm (B) for sunlight to 595nm (R), 565nm (G), and 465nm (B) for gaslight. However, König did not comment on this adaptive RGB shift, probably because a plausible theory was lacking at the time.
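König's near-constant ratio can be checked directly from the λmax triplets quoted above. A minimal numerical sketch (the normalization of each triplet to B = 20 is our choice, made only to allow comparison with the 25:24:20 figure):

```python
# lambda_max triplets (nm) for the three adaptation states quoted in the text.
TRIPLETS = {
    "medium sunlight": (559, 537, 447),
    "sunlight":        (571, 549, 450),
    "gaslight":        (595, 565, 465),
}

for name, (r, g, b) in TRIPLETS.items():
    # Normalize to B = 20 so each triplet can be compared with 25:24:20.
    scale = 20.0 / b
    print(f"{name:16s} R:G:B = {r * scale:.1f} : {g * scale:.1f} : 20.0")
```

All three triplets come out close to 25 : 24 : 20, consistent with König's observation that the ratio is nearly preserved as the curves shift.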
It is well known that an optical imaging system can be complemented by diffractive-optical hardware. Nano- and micrometer-scale multilayer gratings in aperture space can diffract ‘global’ information, i.e. the integrated spectral intensities radiated from all illuminants and reflecting objects into the pupil of the eye. In 1948, Frederick Zernike described how, “in the aperture of the lens, the intensity is equal at all points.”4 The gratings in the aperture spread this global information homogeneously over the entire image space. ‘Local’ data available in image space, i.e. the spectral intensities reflected by visible objects, are diffracted together with the global data by 3D gratings located in image space. Both data sets are then transformed into RGB von Laue/Bragg interference maxima in reciprocal space (in the Fresnel near field). This diffractive-optical correlator hardware thus effectively relates local and global RGB data by interference contrast and allows us to define human vision as “a spatial calculation involving the whole image.”5
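The first step of this processing, reducing a spectral intensity distribution to an RGB triplet by multiplication with the three overlapping Gaussian sensitivity curves, can be sketched as follows. This is a simplified illustration, not the authors' model: the common curve width of 50nm and the 10nm sampling step are our assumptions.

```python
import math

# Sunlight-adaptation sensitivity maxima (nm) quoted in the article.
LAMBDA_MAX = {"R": 559.0, "G": 537.0, "B": 447.0}
SIGMA = 50.0  # assumed common Gaussian width in nm (illustration only)

def gaussian(lam, lam_max, sigma=SIGMA):
    """Relative sensitivity of one channel at wavelength lam (nm)."""
    return math.exp(-((lam - lam_max) ** 2) / (2.0 * sigma ** 2))

def to_rgb(spectrum):
    """Reduce a {wavelength_nm: intensity} spectrum to an RGB triplet by
    multiplying it with the three overlapping Gaussian sensitivity curves
    and summing over wavelength."""
    return {ch: sum(i * gaussian(lam, lmax) for lam, i in spectrum.items())
            for ch, lmax in LAMBDA_MAX.items()}

# A white 'physics' equienergy spectrum, sampled every 10nm over 380-760nm.
white = {lam: 1.0 for lam in range(380, 761, 10)}
rgb = to_rgb(white)
print(rgb)
```

With shifted λmax values (e.g. the gaslight triplet), the same routine models the adaptive register shift described above.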
We recently reported on experiments describing paradoxically colored shadows in twilight conditions.2 In twilight lighting, two lights (B1 and B2) of opponent colors cast two shadows (S1 and S2). The two lights combine to give white light, and we observed that the shadows appeared in the opponent colors: a blue B1 and a yellow B2 together cast a blue shadow S1 and a yellow shadow S2. But the combination of a white B2 and a blue B1 also produced a yellow, paradoxically colored, shadow S2 instead of a grey one (33% R, 33% G, and 33% B). The color seen at S2 clearly resulted from a transformation of all data into RGB (i.e. multiplication of their spectral intensity distributions by the three Gaussians under global white, B1+B2, adaptation) and from a differentiation between local (S2) and global (B1+B2) data in RGB space. When the color of B1 was varied, with B2 always kept white, all opponent colors in the color circle successively appeared. These experiments showed that our eyes do not see what is physically real, i.e. a white spectrum at S2, but rather what the diffractive-optical correlator hardware in the eye has calculated by relating “local onto global” RGB data. Illuminant B1 drives the optical system into the out-of-balance opponent color state at S2, thus constructing the observed paradoxical color. Paradoxically colored shadows are not illusions per se, but rather the logical result of the calculation rules used by the optical correlators relating local onto global RGB data by matrix multiplication and division in vector space, i.e. in the reciprocal space of diffractive optics.
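The "local onto global" logic behind the paradoxical shadow can be sketched numerically. This is a deliberately simplified model of our own devising (each light is reduced to an RGB weight triplet, and the invented values for B1 are illustrative only):

```python
# Simplified sketch: represent each twilight source as an RGB triplet, form
# the global adaptation state B1 + B2, and relate local onto global by
# per-channel division (the 'division in vector space' described above).

def relate_local_to_global(local, global_rgb):
    """Per-channel ratio of local to global RGB data, renormalized to sum 1."""
    ratios = [l / g for l, g in zip(local, global_rgb)]
    total = sum(ratios)
    return [r / total for r in ratios]

B1 = (0.10, 0.20, 0.70)   # bluish illuminant (R, G, B weights; invented values)
B2 = (1/3, 1/3, 1/3)      # white illuminant

global_rgb = [a + b for a, b in zip(B1, B2)]

# Shadow S2 is blocked from B1, so locally it receives only the white B2.
# Physically the spectrum at S2 is white, yet the relation comes out yellow:
s2 = relate_local_to_global(B2, global_rgb)
print([round(c, 2) for c in s2])  # R and G exceed B: a yellowish shadow
```

Making B1 greener or redder in this toy model correspondingly pushes S2 toward the opponent color, mirroring the color-circle behavior observed in the experiments.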
The Mondrian retinex experiments performed by Land6 featured adaptation to a constant white background illumination B1 while the color of an illuminant B2 irradiating a Mondrian (a display of numerous colored patches) was varied in three spectral RGB bands, namely 630nm (R), 530nm (G), and 450nm (B). The triplet of ‘energy-at-eye’ data reflected from a grey patch towards the eye in the same RGB bands was kept constant.
Our analysis showed that the introduction of color into illuminant B2, and the variation of this color, is the driving force behind the opponent colors always seen by human observers. This evident rule was not grasped by Land. In these experiments, the same logic is at work as for paradoxically colored shadows. Illuminants are not discounted in human color vision; rather, ‘reflectances,’ i.e. relations between local energy-at-eye data and global illuminant (B2) RGB data, are the optimal predictors of the colors seen by observers in the experiments. The successive correlation of the local energy-at-eye RGB data (always kept constant) with the variable RGB data of illuminant B2 and with the constant background illumination B1 leads to the same predictor of the color seen as the diffractive-optical matrix multiplication and division in RGB (multiplication of the spectral intensity distribution by the three RGB Gaussians, followed by relating the local and global RGB data).2 The choice of a bluish illuminant B2 (+22% for 450nm (B) compared with white) leads to a paradoxical yellow patch (+6% in G and +14% in R). With a bluish-green illuminant B2 characterized by +4% for 450nm (B) and +28% for 530nm (G), a red color results (+31% in R). In contrast, a purple illuminant B2 (+11% for R and +1% for B) creates a green patch (+16% in G). What happens for blue-yellow can thus be varied through all colors in opponent color space.
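The reflectance-style predictor can be illustrated with a short sketch: the energy-at-eye triplet from the grey patch is held constant while illuminant B2 varies, and the predicted color is the per-channel relation of local to global data. The triplet values here are illustrative stand-ins, not Land's measured photometric data.

```python
# Constant local RGB data (energy-at-eye from the grey patch, equal per band).
ENERGY_AT_EYE = (1.0, 1.0, 1.0)

def predicted_color(illuminant_b2):
    """Relate the constant local triplet to the global illuminant B2 triplet
    channel by channel, then renormalize so the components sum to 1."""
    ratios = [e / i for e, i in zip(ENERGY_AT_EYE, illuminant_b2)]
    total = sum(ratios)
    return tuple(r / total for r in ratios)

bluish_b2 = (1.0, 1.0, 1.22)  # +22% in the 450nm (B) band relative to white
r, g, b = predicted_color(bluish_b2)
print(f"R={r:.2f} G={g:.2f} B={b:.2f}")  # B is suppressed: the grey patch reads as yellowish
```

A bluish-green or purple B2 triplet plugged into the same function shifts the prediction toward red or green respectively, matching the qualitative pattern reported for the Mondrian experiments.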
Since Land did not measure photometric data for the background illuminant B1, we intend to produce a more complete data set covering all intervening physical variables during the second part of the NAMIROS project, with results expected in 2007.
CORRSYS 3D Sensors AG