Light propagation explains our inverted retina

A simulation tool borrowed from optical engineering shows that the retina's inverted structure improves our visual acuity.
18 October 2010
Erez N. Ribak and Amichai M. Labin

Our eyes are built like a digital camera: the lens sits in front, and the detector, the retina, at the back. Light is converted into electrical signals by photoreceptor cells (see Figure 1). These are divided into cones, which discriminate colors, and rods, which are color blind but much more light-sensitive and which support night vision. But unlike cameras, the retina contains transparent layers of neurons in front of the photoreceptors. These neurons process the detected image and wire it to the brain, but because they sit in front, they also distort the image. The key question is why this wiring is not done behind the detector cells. Photoreceptors are placed behind the neurons in all vertebrates, indicating that this arrangement is evolutionarily efficient.1 The inverted retina has puzzled researchers since the structure and function of the eye were first elucidated in the late 19th century.

While neurons run along the retina, across the light path, glial (Müller) cells cross the retina along the light-propagation direction, traversing the neural layers. Each of these cells is attached to one cone and a few rods. They are also shaped like funnels, narrowing down towards the photoreceptors (see Figure 1). They were long considered (among other functions) mechanical and metabolic supports for the neurons. But retinal glial cells can also transmit light, because their refractive index is higher than that of their surroundings.2
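The index contrast behind this guiding is small, but even a small contrast yields a useful acceptance cone. As a rough step-index-fiber analogy (the indices below are illustrative assumptions, not measurements from our model), a minimal sketch in Python:

    import numpy as np

    # Assumed illustrative refractive indices (not values from our model)
    n_cell, n_bg, n_vitreous = 1.38, 1.36, 1.336

    # Numerical aperture of a step-index guide, and the corresponding
    # acceptance half-angle for light arriving through the vitreous
    na = np.sqrt(n_cell**2 - n_bg**2)
    theta = np.degrees(np.arcsin(na / n_vitreous))
    print(f"NA = {na:.2f}, acceptance half-angle = {theta:.1f} deg")  # ~0.23, ~10 deg

Even an index step of only 0.02 thus admits a cone of roughly ten degrees, comparable to the angular extent of a dilated pupil seen from the retina.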


Figure 1. Section through the retina. Except for the absorbing layers at the bottom (R), all parts are transparent; colors are for demonstration purposes only. The retina is mostly composed of neuron layers (L) and their nuclei (N). Light arrives from the pupil (top) through the vitreous humor (V) and is captured in the funnels of the Müller cells (M), where it is concentrated down to the cones (C). The rest of the light is scattered into the narrower rod photoreceptors that surround the cones in the photoreceptor layer (P). The thickness of the retina, except at its very center, is one-quarter to one-half of a millimeter.

To understand how light passes through the glial cells, we constructed a 3D optical model of the human retina.3 It contained an array of such cells against a background of a random, slightly scattering, layered volume representing the neurons and their nuclei. Each cell was given its own refractive index, with slight random fluctuations and slight wiggles, as suggested by actual shape variations. Our purpose was to examine how well the array of glial cells collectively preserves resolution. In particular, we wanted to quantify the interaction between neighboring, parallel cells, that is, the coupling between their electromagnetic fields.4 To pass the light field through this volume, we used the split-step Fourier beam-propagation method, a tool usually employed in designing waveguides or planar optics.5 We tried various wavelengths and entrance angles.
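For readers who want to experiment, here is a minimal two-dimensional sketch of the split-step Fourier method applied to an array of funnel-shaped guides. All geometry and index values are simplified, illustrative assumptions, not our full 3D model:

    import numpy as np

    def confined_fraction(tilt_deg, wavelength=0.5e-6):
        """Propagate a tilted Gaussian beam through a model array of
        funnel-shaped guides; return the power fraction reaching the
        central cell's end face."""
        n_bg, n_cell, spacing = 1.36, 1.38, 10e-6  # assumed indices, cell spacing
        k0 = 2 * np.pi / wavelength
        nx, dx = 1024, 0.05e-6                     # ~51 um transverse window
        nz, dz = 400, 0.5e-6                       # ~200 um of retinal depth
        x = (np.arange(nx) - nx // 2) * dx
        kx = 2 * np.pi * np.fft.fftfreq(nx, dx)

        # Tilted Gaussian beam entering from the vitreous
        tilt = np.deg2rad(tilt_deg)
        field = np.exp(-(x / 5e-6) ** 2) * np.exp(1j * k0 * n_bg * np.sin(tilt) * x)

        # Paraxial diffraction over half a step, applied in the Fourier domain
        half_step = np.exp(-1j * kx**2 * dz / (4 * k0 * n_bg))
        for iz in range(nz):
            # Funnel shape: cell radius tapers from 3 um down to 1 um with depth
            radius = 3e-6 - 2e-6 * iz / nz
            xc = np.remainder(x + spacing / 2, spacing) - spacing / 2
            n = np.where(np.abs(xc) < radius, n_cell, n_bg)
            # Split step: half diffraction, index-induced phase, half diffraction
            field = np.fft.ifft(half_step * np.fft.fft(field))
            field = field * np.exp(1j * k0 * (n - n_bg) * dz)
            field = np.fft.ifft(half_step * np.fft.fft(field))

        power = np.abs(field) ** 2
        return power[np.abs(x) < 1e-6].sum() / power.sum()

    print(confined_fraction(2.0))  # near-axial green light stays mostly confined

The full model launched broad fields through many cells in three dimensions; this sketch only reproduces the qualitative behavior of a single tilted beam.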

We found very limited coupling into neighboring glial cells for small incidence angles. In other words, only light that came through the center of the pupil was captured by the glial cells, concentrated, and guided directly to the cones (see Figure 2). Evidently, the retina developed into a natural optical waveguide array, tailored to almost perfectly preserve images obtained through a narrower pupil.
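In the confined_fraction sketch above, this trend can be reproduced qualitatively by sweeping the tilt angle and wavelength (the exact numbers depend entirely on the assumed geometry and indices):

    for tilt in (0.0, 2.0, 4.0, 8.0):                      # degrees off-axis
        for wl, name in ((450e-9, "blue"), (550e-9, "green")):
            frac = confined_fraction(tilt, wavelength=wl)
            print(f"{tilt:3.0f} deg, {name:5s}: {frac:.2f} of power confined")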


Figure 2. (left) Light concentration in the glial-cell array, rotated by two degrees. Green light follows the central cell C, and very little leaks to the neighbor N. (right) The glial cells are rotated by eight degrees and cannot hold on to blue light. The decoupled light mostly arrives at rods between the two cells, and only a small fraction is captured by the neighboring cell. This high angle corresponds to light arriving from the outskirts of the pupil, which is dilated in darkness.

Unguided light, whether leaking from neighboring cells or arriving from the pupil's periphery, was rejected from the guides, scattered, and detected by the more sensitive rods. Apart from the obvious role in colorless night vision, this makes it easier for the eye to distinguish the observed scene from clutter, thus improving acuity. (An online video shows this partial removal of oblique light.6)

We also found that it made little difference whether this light was blue, green, or red. This explains why we are not sensitive to the difference in focus between colors. The eye suffers significant chromatic aberration: blue light is focused approximately 0.25 mm in front of red light. This aberration forces ophthalmologists to use a single color, or to correct for the color aberration,7 in order to see a sharp focus. It was thought that processing by the neural network takes care of this focus error, but it now appears that guiding by the glial cells removes this color ambiguity.
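As a rough back-of-the-envelope conversion (taking a reduced-eye posterior focal length of about 22 mm and a vitreous index of 1.336 as assumed round numbers, not values from our model), this 0.25 mm focal shift corresponds to a defocus of

    \Delta P \;\approx\; \frac{n\,\Delta z}{f'^{\,2}}
             \;=\; \frac{1.336 \times 0.25\,\mathrm{mm}}{(22\,\mathrm{mm})^{2}}
             \;\approx\; 0.7\ \mathrm{diopters},

a blur large enough to be noticeable if the eye had to resolve all colors in a single focal plane.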

In summary, the retina has developed its inverted shape to improve the directionality of intercepted light beams, enhance visual acuity, increase immunity to scatter and clutter, concentrate more light into the cones, and overcome chromatic aberration. We are now assessing the effect of ocular aberrations on acuity, to explore what happens when the beam hitting the retina is spread wider and its phase is more random.


Erez N. Ribak, Amichai M. Labin
Faculty of Physics
Technion—Israel Institute of Technology
Haifa, Israel

Erez Ribak works in optics, astronomy and astrophysics, condensed matter, and other fields. He has authored more than 160 papers.

Amichai Labin is a graduate student.

