Designing the basic format for an optical system is a process very different from lens design. It involves determining the components of the system and their locations in order to produce a system that will meet a set of required characteristics. It is the layout of the system up to, but not including, the lens-design stage, in which the lens designer determines the detailed component configurations needed to do the job.
The initial layout of an optical system is almost always done using the thin-lens concept. A thin lens has a thickness of zero, an obvious impossibility but a very valuable simplification of the process, because a thin component of a system can be represented simply by a power and a location. Both principal planes of a thin lens are coincident with the lens itself. The thin lens is a concept, not a reality; the lenses are later expanded to match real components with physical thickness.
step by step
The first step in the process should always be to establish the requirements and specifications. One should try to collect all of the specifications before beginning the design. The following, which will probably be included in a typical set, can serve as a checklist:
- The purpose of the system (obvious but often overlooked).
- Wavelength, bandwidth, spectral response, or distribution.
- Aperture diameter.
- Focal length, or magnifying power if afocal. For finite conjugates, the track length (object-to-image distance) and magnification are sufficient; a focal-length specification is redundant and may overconstrain the final lens design.
- Numerical aperture (NA) or f-number (f/#). This may be the infinity f/#, or NA, or the "working" f/#, but it should be so defined.
- Object size and distance, image size, angular fields of view, image orientation.
- Performance: resolution, modulation transfer function (MTF) at prescribed spatial frequencies, radial energy distribution, encircled or ensquared energy.
- Sensor characteristics: dimensions, spectral response, pixel size and number, the aerial image modulation (AIM) curve of the sensor, system type (visual, photographic, projection, laser, etc.).
- Physical requirements: spatial limitations, size and location of entrance and exit pupils, cold stop, glare (Lyot) stop, bends or folds needed.
- Ambient conditions.
- Thermal stability requirements.
- Illumination and vignetting.
Once you've assembled this list, the second step, and an important one, is to question or challenge the specifications. Are they really necessary? Can the tough ones be eased? Has the bar been set higher than necessary, just to be safe?
The third step is to ascertain that the specifications are self-consistent. In an afocal system, for example, the magnifying power must equal the beam expansion factor, the ratio of apparent field to real field, and the ratio of entrance-pupil to exit-pupil diameter. In a telescope, the eye-lens diameter is determined by the eye relief and the apparent field. In a finite conjugate system, the magnification equals the ratio of image distance to object distance, and it also equals the ratio of object side NA to image side NA (or image side f/# to object side f/#). Check to be sure that your system requirements don't contradict one another.
Next, resolve any incongruities. Compare the performance specs with known limits to determine whether they are reasonable. The Rayleigh limit for point resolution of a perfect lens is 0.61λ/NA; the Sparrow and line-resolution limits are 0.5λ/NA. For an infinitely distant object, the Rayleigh angular resolution limit is 1.22λ/D radians, where D is the entrance-pupil diameter and λ is the wavelength. For a visual system, the Rayleigh resolution limit in seconds of arc equals 5.5 divided by the pupil diameter in inches; for the Sparrow limit, use 4.5/D.
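These limits are easy to check numerically. A minimal sketch, using an assumed visible wavelength of 0.55 µm and example values for the NA and pupil diameter (the 0.25 NA is mine, not from the article):

```python
import math

# Diffraction limits quoted in the text, for an assumed visible
# wavelength of 0.55 um (0.55e-3 mm).
wavelength = 0.55e-3          # mm
NA = 0.25                     # numerical aperture (assumed example value)

rayleigh_point = 0.61 * wavelength / NA   # Rayleigh point-resolution limit, mm
sparrow = 0.5 * wavelength / NA           # Sparrow / line-resolution limit, mm

# Angular limit for a distant object with a 1-inch (25.4 mm) pupil:
D = 25.4                                  # entrance-pupil diameter, mm
theta = 1.22 * wavelength / D             # radians
arcsec = math.degrees(theta) * 3600       # ~5.45 arc seconds, i.e. ~5.5/D(in.)
```

The last line reproduces the article's rule of thumb: about 5.5 arc seconds divided by the pupil diameter in inches.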
Figure 1. In this set of MTF curves, the best possible image contrast for an ordinary optical system is shown in curve A. Curves B through F show the image contrast for systems with wavefront deformations of λ/4, λ/2, 3λ/4, λ, and 2λ. A system with a quarter-wave deformation is often called diffraction limited.
The MTF cut-off frequency is 2NA/λ, or 1/(λ · f/#). In the visible region, the cut-off frequency is about 1800/(f/#) lines per millimeter. Plotting the MTF, or image contrast, against spatial frequency shows how varying the optical path difference can affect a system's imaging capabilities (figure 1). Under ideal, bright conditions, the resolution of the human eye is about one minute of arc; performance falls off as scene brightness decreases. Thus the resolution limit imposed by the eye on the eye-telescope combination is one minute divided by the telescope magnification. Remember, although the final performance of any system will be largely determined by the quality of the lens design, at this point we are concerned only with the limits imposed on the system by the general layout.
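The 1800/(f/#) rule of thumb follows directly from the cut-off formula at a visible wavelength; a quick check with an assumed f/4 system:

```python
# MTF cut-off check: 1/(lambda * f/#) at an assumed visible
# wavelength of 0.55 um should be roughly 1800/(f/#) lines per mm.
wavelength = 0.55e-3                 # mm
fnum = 4.0                           # assumed example f-number
cutoff = 1.0 / (wavelength * fnum)   # ~455 lines/mm, vs 1800/4 = 450
```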
The magnification of a magnifier or a compound microscope is, by convention, given as 10 in. divided by its focal length f in inches. This convention assumes a comparison with the object viewed from a distance of 10 in. If one wishes to express magnification as a comparison with the view from some other distance D, the magnification factor is simply D/f. A positive magnification produces an erect image; a negative magnification, as in a compound microscope, indicates an inverted image.
getting specific
An optical system usually falls into one of the following categories:
- Single component
- Two-component (telephoto, retrofocus, relay, etc.)
- Afocal (e.g. telescope)
- Afocal plus a prime lens
- Afocal plus a scanner
- Periscope (relay, fiber optics, grin rod)
- Three (or more) component.
The single-component system is simple because it is completely defined by its focal length, aperture, magnification, and field of view. With an object at infinity, the magnification is zero, but the image size is the focal length times the angle subtended by the object.
The two-component system is the most widely encountered, and a few very simple expressions serve to handle the layout of this type. For the object at infinity, or at a large distance, they are:
fa = DF/(F-B)
fb = -DB/(F-B-D),
where fa and fb are the focal lengths of the two components, F is the desired focal length of the combination, D is the spacing between the components, and B is the distance from the second component to the focal plane. These equations are probably the most widely used in all system layout work. Note that for mirror systems the mirror radius is simply twice the component focal length; a concave mirror has a positive focal length; a convex mirror, negative.
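The two equations can be coded directly. A minimal sketch (the numeric example, a unit-focal-length telephoto with D = B = 0.4, is my own, not from the article):

```python
# Thin-lens layout of a two-component system with a distant object,
# using the text's equations:
#   fa = D*F / (F - B)
#   fb = -D*B / (F - B - D)
# F = combined focal length, D = component spacing, B = back focus.

def two_component(F, D, B):
    """Return (fa, fb), the focal lengths of the two components."""
    fa = D * F / (F - B)
    fb = -D * B / (F - B - D)
    return fa, fb

# Example: unit focal length, telephoto ratio (D + B)/F = 0.8,
# split here as D = 0.4, B = 0.4.
fa, fb = two_component(F=1.0, D=0.4, B=0.4)   # fa = 2/3, fb = -0.8
```

As the text notes, the negative fb marks the rear component of a telephoto as a diverging element.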
Figure 2. Two-component lens systems include (a) telephoto, (b) retrofocus, and (c) relay, in which a negative focal length provides an erect image. The mirror equivalents are (a) Cassegrain, (b) Schwarzschild, and (c) Gregorian.
There are several widely known special configurations for the two-component system. If the focal length F is positive and longer than the overall system length (D+B), the system is called a telephoto. This gives a long focal length (and the correspondingly large image) in a small package (figure 2). The telephoto ratio is (D+B)/F; if this ratio is less than one, the system is regarded as telephoto. The mirror equivalent of the telephoto is the Cassegrain configuration.
If the focal length is positive and the back focus B is longer than the focal length, the result is a reversed telephoto, or retrofocus. This sort of system is used when a long working distance B is needed to accommodate prisms or mirrors in this space. The mirror equivalent, called the Schwarzschild system, is rarely used except in small systems such as microscope objectives because the concave mirror diameter must be several times as large as the aperture.
The relay system is produced if one uses a negative focal length in the equations above. This produces an erect image, and the relay lens is often referred to as an erector lens. Note that the focal length of the combination equals the focal length of component (a) multiplied by the magnification produced by component (b). The Gregorian mirror system is the reflecting equivalent.
Component focal lengths for afocal systems are given by
fa = MD/(M-1)
fb = D/(1-M),
where M is the angular magnifying power and D is the system length. Note that a negative M indicates an inverted image. Examples of refracting afocal systems include an ordinary Keplerian telescope, a Galilean telescope, and a lens-erecting telescope.
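A direct coding of the afocal equations, with an assumed worked example (the 4x Keplerian numbers are mine):

```python
# Afocal (telescopic) thin-lens layout from the text's equations:
#   fa = M*D/(M - 1),  fb = D/(1 - M)
# M = angular magnifying power, D = overall system length.

def afocal(M, D):
    """Return (fa, fb) for an afocal two-component system."""
    fa = M * D / (M - 1.0)
    fb = D / (1.0 - M)
    return fa, fb

# Keplerian telescope of 4x power: M = -4 (inverted image) gives two
# positive components whose focal lengths sum to the system length.
fa, fb = afocal(M=-4.0, D=1.0)   # fa = 0.8, fb = 0.2
```

A positive M (e.g. M = +4) instead yields a negative fb, the Galilean form with its erect image.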
There are equally simple expressions for a system where both conjugates are finite:
fa = msd/(ms-md-s')
fb = ds'/(d-ms+s'),
where s is the object distance, s' is the image distance, d is the space between the components, and m is the magnification (equal to the image size divided by the object size). Note that in general a change in the sign of the magnification not only indicates an erect or inverted image but also may produce two quite different systems, one of which may be vastly preferable to the other.
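The finite-conjugate equations can be sketched the same way. This assumes the usual sign convention in which distances measured to the left are negative (so a real object in front of the first component has negative s); the worked numbers are my own, chosen so the result can be checked by hand:

```python
# Finite-conjugate two-component layout from the text's equations:
#   fa = m*s*d / (m*s - m*d - s')
#   fb = d*s' / (d - m*s + s')
# s = object distance from component (a) (signed; negative for a real
# object to its left), sp = image distance from component (b),
# d = component spacing, m = magnification (image size / object size).

def finite_conjugate(m, s, sp, d):
    """Return (fa, fb) for a finite-conjugate two-component system."""
    fa = m * s * d / (m * s - m * d - sp)
    fb = d * sp / (d - m * s + sp)
    return fa, fb

# Assumed example: two unit-focal-length lenses one unit apart, object
# two units in front, give m = -0.5 and an image 0.5 behind lens (b).
fa, fb = finite_conjugate(m=-0.5, s=-2.0, sp=0.5, d=1.0)   # fa = fb = 1.0
```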
A field lens is a lens (usually positive) placed close to an internal image to make the off-axis image rays converge so that the diameters needed for the subsequent optics are not unreasonably large. Field lenses are commonly encountered in conjunction with eyepieces, arranged to direct the edge-of-the-field rays through the clear aperture of the eyelens. A converging field lens will shorten the eye relief (the distance from the eyelens to the exit pupil) of a telescope; a diverging field lens will lengthen it, but at the cost of requiring a larger eye lens to pass the rays. Another classic application of the field lens is in a periscope or endoscope. The periscope consists of alternating relay and field lenses and allows a wide-angular-field image to be carried through a long, narrow space. An initial image is formed by the objective, then passed along by the relay lens after the field lens directs the rays to converge. Note that the relay lenses pass the system image, whereas the field lenses relay the images of the pupils of the objective and relay lenses.
a question of distance
An optical system can, to the first order, be described by component focal lengths and spacings (i.e., by powers and spacings, where power is simply the reciprocal of the focal length). If there are more degrees of freedom than those required to define a configuration, the extra variables may be used to reduce or minimize the component powers. As an example, let's look at a telephoto system with a unit focal length and a telephoto ratio of 0.8. This requires that (D+B)/F be equal to 0.8, but we are free to choose D and B, subject only to the restriction that (D+B) = 0.8F (see table below).
It is apparent that the minimum total (absolute) power lies between the systems (D = 0.4, B = 0.4) and (D = 0.5, B = 0.3); actually, it's pretty close to D = 0.46, B = 0.34. As a rough rule of thumb, we can estimate that because this system has the least power, it likely will have the smallest aberration residuals, the least sensitivity to mis-spacing and misalignment, and the lowest fabrication cost. This is a very crude estimation because the final result will depend largely on what the lens designer uses for the individual component configurations. But at the very least, this approach gives the lens designer an optimum starting point.
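The minimum can be verified by a brute-force scan, a minimal sketch combining the telephoto constraint with the two-component equations:

```python
# Telephoto power-minimization check: unit focal length F = 1 and
# telephoto ratio (D + B)/F = 0.8, so B = 0.8 - D.  Total absolute
# power |1/fa| + |1/fb| uses the two-component equations in the text.

def total_power(D, F=1.0, ratio=0.8):
    B = ratio * F - D
    fa = D * F / (F - B)
    fb = -D * B / (F - B - D)
    return abs(1.0 / fa) + abs(1.0 / fb)

# Scan D in 0.001 steps over a range that keeps both components real.
best_D = min((d / 1000.0 for d in range(100, 700)), key=total_power)
# best_D lands near 0.46, matching the text's estimate.
```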
Figure 3. This summary table shows some basic optical configurations classified by field angles and apertures.
Actually all of this can be accomplished with commercially available optical-design software. First, create a merit (or defect) function with targets for the desired system properties such as length, focal length, magnification, and other items from the specification list as appropriate. Next, set up the system as a series of zero-thickness plano-convex or plano-concave lenses, and designate the appropriate radii and spacings as variables. Admittedly, a bit of knowledge and foresight helps at this point (figure 3).
If there are one or more extra degrees of freedom, add a term to the merit function that forces the sum of the absolute values of the surface curvatures (the curvature is the reciprocal radius) to zero. This technique minimizes the powers. Another approach is minimizing the sum of the squares of the curvatures.
The program will not only solve the problem (assuming that there is a solution) but will find an optimum solution. Note, however, that if there is more than one solution, a typical program will seek out the one nearest to the starting system. Obviously it is beneficial to understand the optical principles of the basic system configuration when selecting the starting system.
It is always wise to make a sketch of the system, including the ray bundles for the on-axis and off-axis imagery. This helps to avoid an utterly ridiculous layout. It is possible to calculate the thin-lens ray paths using the ray-tracing equations. The change in ray direction or slope is given by u' = u - yφ, and the ray height at the next component is given by y2 = y + du', where u' is the ray slope after passing through the component, u is the ray slope before passing through the component, y is the height at which the ray strikes the component, φ is the component power (reciprocal focal length), y2 is the ray height at the next component, and d is the spacing to the next component. The equations are applied iteratively, component by component.
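The iterative trace (refraction u' = u - yφ, then transfer y2 = y + du') is a few lines of code. A minimal sketch, checked against the one case that can be done in one's head:

```python
# Paraxial thin-lens ray trace, applied component by component:
#   u' = u - y*phi      (refraction at a component of power phi)
#   y2 = y + d*u'       (transfer across the spacing d that follows)

def trace(y, u, components):
    """Propagate a ray (height y, slope u) through a list of
    (power, spacing-to-next) pairs; return the final (y, u)."""
    for phi, d in components:
        u = u - y * phi        # refraction
        y = y + d * u          # transfer
    return y, u

# Sanity check: a ray parallel to the axis at height 1, through a single
# thin lens of power 0.5 (focal length 2), should cross the axis after
# travelling the focal distance d = 2.
y, u = trace(y=1.0, u=0.0, components=[(0.5, 2.0)])   # y = 0.0, u = -0.5
```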
One can make a rough guess as to the type of design to use for each component by determining the f/# and angular field for each component. With these factors, and experience, it is possible to estimate just how complex a construction will be required for each component. The basics of optical-system design are relatively simple and straightforward; the real trick is to avoid asking for the impossible.
1. R. Kingslake, Optical System Design, Academic Press, New York, 1983.
2. W. J. Smith, Modern Optical Engineering: The Design of Optical Systems, 3rd ed., McGraw-Hill, New York, 2000.
3. W. J. Smith, Modern Lens Design, McGraw-Hill, New York, 1992.
designer by default
"I didn't want to be an optical designer," says Warren Smith, chief scientist and consultant for Kaiser Electro-Optics (Carlsbad, CA). "When I took the lens-design course at the University of Rochester, I didn't even buy the textbook." Yet today, Smith is the author of Modern Optical Engineering, a fundamental text in optical design.
When he graduated in the early 1940s, he found himself as part of a top-secret project in Oak Ridge, TN. "At the time, all I was told was that this was the most important thing I could do for the war effort," says Smith. It wasn't until a couple of weeks later that he realized he was part of the project to build an atomic bomb, developing equipment to separate U235 from U238 by mass spectrograph.
Smith got into lens designing after World War II, when he went to work for an optical manufacturer in Chicago. "I went out and bought a copy of Conrady, [the textbook he was supposed to buy for his course], read it three times cover to cover, reviewed my class notes, and proceeded to learn by making mistakes and asking questions of my betters," he says.
Years later, Smith was invited to write a couple of chapters for a handbook on military IR technology. "I hemmed and hawed, but eventually I agreed," says Smith. Those chapters led to McGraw-Hill asking him to write an entire book on optical design. But Smith took some convincing. "Then they came back and mentioned money, and I couldn't refuse," he says with a laugh.
The rest, as they say, is history. "It turns out that the approach I took at the time and the development of the field were all very fortuitous," says Smith. Originally published in 1966, Modern Optical Engineering is in its third edition. Smith also wrote Modern Lens Design and a systems layout book. "All told, I guess the books add up to about 50,000 copies sold," says Smith. In addition, he has authored more than 34 papers, holds five patents, and serves as an expert witness in patent cases.
"But the book is my prized accomplishment," he says.
Laurie Ann Toupin
The term "non-imaging optics" refers to any optical system that is not intended to produce a high-fidelity image. Non-imaging optics encompasses a vast array of illumination applications, such as residential and commercial lighting; automotive lighting; computer, video, PDA, and telephone displays; solar-energy collection; indicator lights; instrument panels; integrating spheres and laboratory instruments; medical instruments; and so on. Although some of these applications do not require optical design by an engineer, many of them do.
The design process for non-imaging optics is analogous to that for imaging optics. Compared to the mature state of software for imaging optics design, though, software-design tools available for non-imaging optics are still in preadolescence. In particular, automatic optimization of illumination systems is still fairly crude, with most optimization being done "by hand" using parametric studies of performance. Both disciplines use geometric ray tracing to evaluate designs, but non-imaging designs may incorporate scattering elements, faceted reflectors, or arrays of micro-optical components to achieve a particular illumination pattern.
In the early stages of a non-imaging design, the engineer invents a starting design from first principles or from a previous design and traces a few rays to learn how it performs. The design may be created in a ray-tracing program, in a CAD program, or by combining elements from both. Accurately evaluating the performance of a non-imaging design usually requires many rays, typically thousands to millions, traced with a Monte Carlo ray-tracing program.
To illustrate the difference between imaging and non-imaging design, consider the performance requirements of imaging versus non-imaging systems. The requirement of an imaging system is to form an accurate image of whatever object or scene is presented to it. The designer (and design software) achieves this by minimizing an error function for a collection of point objects at representative locations in the field of view for different conjugates and different wavelengths.
A non-imaging system has only one object: the light source. The source may be artificial or natural, but the designer is free to tailor the design for that one source or object. In contrast to an imaging system, the requirement for a non-imaging system is to create a particular light distribution, usually a smooth one with no rapid spatial or angular variations. Forming an image is often undesirable in a non-imaging system since many artificial sources have structures and irregularities that would spoil the smoothness of the image. For example, in the design of a narrow-beam illumination system such as an automotive headlight or a flashlight, the naïve guess is to use a point source and parabolic reflector to create the illumination pattern. However, real sources are not points of light, and the approach described causes the source geometry to be imaged or imprinted on the illumination pattern, spoiling the uniformity. Instead, a designer working with a non-imaging system will often break up the reflector into facets or otherwise perturb the shape to blur or smear the image of the source and achieve the desired pattern.
Another challenge in illumination systems design is the choice and accurate modeling of sources. This choice may be influenced by spatial or angular variation of the emitter, as well as by cost, output, package size, ruggedness, or color. The shape of the emitter and its angular output often have a strong effect on the performance of the design. Modeling sources accurately is a challenging but important part of non-imaging optical design because of the sensitivity of the system to inaccuracies in the model. An associated problem is establishing tolerances due to variations in source manufacture or assembly of the source into the illumination system. This sensitivity may make the illumination system unfeasible as a production item.
Edward Freniere, Lambda Research Corp.
Warren Smith is chief scientist and consultant for Kaiser Electro-Optics in Carlsbad, CA.