
Electronic Imaging & Signal Processing

Picture Perfect

Image processing and analysis improves microscopic images captured with fiber bundles.

From oemagazine November/December 2004
31 November 2004, SPIE Newsroom. DOI: 10.1117/2.5200411.0008

Rarely will cellular biologists intentionally degrade a microscopic image, but our group at the Microscale Life Sciences Center (MLSC) at the University of Washington (Seattle, WA) is doing just that. At MLSC we work together in teams of engineers, analytical chemists, and biologists to design and build single-cell analysis systems, and we often find that heat, chemistry, or physical constraints prevent the use of a traditional microscope for imaging.1 Fiber optic imaging bundles provide one solution to this problem.

A fiber optic imaging bundle consists of an array of optical fibers arranged in a coherent bundle so that the relative position of each fiber remains constant and an image can be transmitted from one end of the bundle to the other. Imaging bundles are widely used as chemical sensors and biosensors.2 Two factors limit their use in microscopic imaging. First, each fiber effectively transmits one pixel of light, limiting the resolution of the final, pixelated image to the packing density of the fibers. Second, the lower-index cladding that surrounds each fiber to prevent leakage of light from the cores creates a honeycomb effect that is superimposed over the final image. These two factors combine to yield heavily pixelated, poor-quality images not normally useful for biological research. Image processing, however, provides an answer. Our team at MLSC has designed new software based on image processing and analysis that allows us to extract real-time information from these types of images.3

We are building a system to automatically monitor the cell-division status of a single yeast cell at 100 sites across a 2 in. x 3 in. microfluidic device. The optical system detects a trapped yeast cell at each location and an imaging algorithm determines whether or not the trapped cell is budding a daughter yeast cell. Rather than scan 100 sites with a motorized xy-stage and possibly miss a budding event, our team chose to monitor each site with a compact microlens assembly and a fiber optic imaging bundle.

In the monitoring system, we project the image of the cell specimen onto the distal tip of the fiber optic bundle at 4X total magnification using back-to-back microscope objectives; in future systems, a microlens assembly will replace these objectives (see figure 1). The 30,000 germanium-oxide-doped silica fiber cores of the imaging bundle are spaced 2.21 µm apart, center to center. The image travels through the bundle and is projected onto the CCD plane of a video camera by a 10X objective. An image-capture card in a desktop computer converts the CCD image into a 640 x 480 matrix of 8-bit numbers, or pixels.

Subtraction and Filtering

Figure 1. In the image processing/analysis system, a 10X and a 2.5X microscope objective placed back-to-back magnify the image of the specimen by a factor of four onto the distal tip of the fiber optic imaging bundle. The third microscope objective magnifies both the specimen image and the bundle image onto the CCD plane of the video camera.

Figure 2. Raw image shows honeycomb effect on image. Yeast cells two and four are budding; cells one and three are not. Each fiber transmits only one intensity of light, effectively reducing the number of usable pixels in the image.

An unprocessed image of four yeast cells shows the pixelation and honeycomb effects discussed above (see figure 2). To identify the objects—in this case, yeast cells—we tested three image-analysis algorithms: image subtraction, frequency filtering, and spatial filtering, in conjunction with a pixel-value threshold for each segment of the image. Image subtraction is based on the assumption that the background (the cladding and any flaws in the fiber or microfluidic device) will remain constant over the lifetime of the experiment. We can thus continually subtract each pixel of the background image from the corresponding value of the yeast-cell image to yield a corrected image. This corrected version is black everywhere except for the fiber cores that display the yeast cell.
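The subtraction step can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not the MLSC software: the function name, the residual threshold of 30, and the synthetic 64 x 64 images are assumptions made for the example.

```python
import numpy as np

def subtract_background(image, background, threshold=30):
    """Subtract a static background frame (cladding and device flaws)
    pixel by pixel, then keep only pixels whose residual exceeds a
    threshold. The threshold value here is illustrative."""
    diff = image.astype(np.int16) - background.astype(np.int16)
    corrected = np.clip(diff, 0, 255).astype(np.uint8)
    # Pixels brighter than the background by more than `threshold`
    # are treated as signal (fiber cores showing the cell).
    return corrected > threshold

# Synthetic demo: a flat background with one bright "cell" region.
background = np.full((64, 64), 40, dtype=np.uint8)
image = background.copy()
image[20:30, 20:30] = 120  # cell adds intensity on top of background
mask = subtract_background(image, background)
```

In practice the background frame would be captured once from a blank microfluidic device and reused for every subsequent frame, which is exactly why the method fails if the hardware drifts.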

The image subtraction method works only when the hardware is kept extremely stable. We label any two or more adjacent white pixels as a region. A change in regional illumination, or a submicron translational shift in the honeycomb pattern caused by something as simple as heavy footsteps, can cause the subtraction method to fail. A more robust image-processing method is needed if the system is to operate for days in a working biology lab.

The second image-analysis method uses a 2-D frequency band-reject filter. The honeycomb pattern repeats at a roughly periodic frequency that appears as an elliptical band on a 2-D discrete Fourier transform (DFT). A band-reject filter in the shape of this ellipse effectively eliminates this frequency content. The inverse DFT then restores the image to the spatial domain. The band-reject filter successfully removes the honeycomb pattern and leaves the (already poor) yeast-cell resolution intact. Although this method works, it requires more computation and introduces more noise than the spatial-filtering alternative.
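The band-reject idea can be sketched with NumPy's FFT routines. For simplicity this sketch uses a circular band rather than the elliptical band matched to the honeycomb, and the radii, image size, and the single-frequency "honeycomb" pattern are all illustrative assumptions.

```python
import numpy as np

def band_reject(image, r_lo, r_hi):
    """Zero out a radial band of spatial frequencies via the 2-D DFT,
    then invert back to the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    r = np.hypot(y - rows / 2, x - cols / 2)
    # Reject every frequency whose radius falls inside the band.
    F[(r >= r_lo) & (r <= r_hi)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Demo: a flat image plus a periodic pattern standing in for the
# honeycomb; the pattern lands at radius 16 in the frequency plane.
n = 64
x = np.arange(n)
pattern = 20 * np.cos(2 * np.pi * 16 * x / n)
image = np.full((n, n), 100.0) + pattern[None, :]
clean = band_reject(image, 12, 20)  # pattern removed, flat field kept
```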

The third image-analysis method uses a low-pass spatial filter in conjunction with a regional threshold. When an image is thresholded, each pixel is converted to a digital value of one or zero depending on whether the pixel value is above or below a given threshold. We cannot immediately threshold figure 2 because the intensity of the cladding approximately equals the intensity of the yeast cells. On the other hand, shifting the cladding-pixel intensity to be significantly higher than the yeast cell intensity permits thresholding at a level between those two intensities to remove the cladding while retaining the yeast cells. We accomplish this shift by performing a 2-D convolution of the image with a 7 x 7 low-pass filter mask.

The convolution has the same effect as replacing the value of each pixel in the image with an average of the values in a 7 x 7 grid surrounding the pixel. We choose the width of the filter (seven pixels) to be larger than the width of the cladding (between two and four pixels), so that the value of every cladding pixel is raised to a level above the threshold. The much wider yeast cells, which are 4 to 10 µm (25 to 60 pixels) in diameter, are smoothed rather than eliminated by the process.
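The low-pass step can be sketched as a plain averaging convolution. This is a stand-in for the MLSC software, not the original code: the intensity values, the stripe geometry standing in for the cladding, and the threshold of 120 are illustrative assumptions.

```python
import numpy as np

def box_filter(image, size=7):
    """Convolve with a size x size averaging mask: each pixel is
    replaced by the mean of its neighborhood, as described for the
    7 x 7 low-pass filter in the text."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

# Demo: bright fiber cores (200) crossed by 2-pixel-wide dark cladding
# stripes (50), plus one 30-pixel-wide dark "yeast cell" (50).
img = np.full((64, 64), 200.0)
img[:, np.arange(64) % 6 < 2] = 50.0   # thin cladding stripes
img[17:47, 17:47] = 50.0               # wide cell region
smoothed = box_filter(img)
mask = smoothed < 120  # cell survives thresholding; cladding does not
```

Because the 7-pixel window is wider than the cladding, every cladding pixel averages with its bright neighbors and rises above the threshold, while pixels deep inside the much wider cell stay dark.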

To perform the process, we first capture a background image, taken from a blank microfluidic device, and the software segments it into 64 equal sections to account for uneven illumination across the image. The algorithms then calculate the threshold value, defined as the mean minus one-half the standard deviation, for each section. After the image is filtered, the binary threshold is performed to leave white (pixel value = 1) yeast cells on a black (pixel value = 0) background.
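The per-section threshold rule (mean minus one-half the standard deviation over an 8 x 8 grid of 64 sections) can be sketched as follows; the helper name and the uniform synthetic background are assumptions for the example.

```python
import numpy as np

def regional_thresholds(background, grid=8):
    """Split the background image into grid x grid equal sections and
    compute each section's threshold as mean - 0.5 * std, following
    the rule given in the text (8 x 8 = 64 sections)."""
    h, w = background.shape
    sh, sw = h // grid, w // grid
    thresholds = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            section = background[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
            thresholds[i, j] = section.mean() - 0.5 * section.std()
    return thresholds

# Demo: a perfectly uniform background yields identical thresholds;
# uneven illumination would give each section its own value.
bg = np.full((64, 64), 100.0)
t = regional_thresholds(bg)
```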

Improving the Binary Image

So far we have discussed ways to eliminate the honeycomb effect, but it is possible to actually improve the binary image through a series of morphological operations.4 First, we open the image with a five-pixel-diameter disc-structuring element to remove small noise in the image. The subsequent closing operation smooths the boundary and closes any gaps in the yeast cell image. At this point, almost all the noise in the image has been removed. Note that it is difficult to quantify the improvement to resolution, because the image is now binary. The resolution of the processed image only needs to be good enough to succeed in the final goal of the image analysis, however, which in this case is bud recognition.
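The opening and closing operations can be spelled out from their primitives (in practice a library routine such as those in scipy.ndimage would be used; writing dilation and erosion by hand here just makes the mechanics explicit, and the mask geometry is an illustrative assumption).

```python
import numpy as np

def disc(diameter=5):
    """Five-pixel-diameter disc structuring element, as in the text."""
    r = diameter // 2
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def dilate(mask, se):
    """Binary dilation: a pixel turns on if any structuring-element
    neighbor is on."""
    h, w = mask.shape
    r = se.shape[0] // 2
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(se.shape[0]):
        for dx in range(se.shape[1]):
            if se[dy, dx]:
                out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(mask, se):
    # Erosion via duality: erode(A) = complement(dilate(complement(A)))
    # for a symmetric structuring element.
    return ~dilate(~mask, se)

# Demo: a square "cell" with a one-pixel gap, plus an isolated noise
# pixel. Opening removes the noise; closing fills the gap.
mask = np.zeros((32, 32), dtype=bool)
mask[8:23, 8:23] = True   # yeast cell
mask[15, 15] = False      # small gap inside the cell
mask[2, 2] = True         # isolated noise pixel
se = disc(5)
opened = dilate(erode(mask, se), se)
cleaned = erode(dilate(opened, se), se)
```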

Figure 3. The processed image shows that the honeycomb effect has been removed from the image to leave the shape of the yeast cells. The image is now binary and the moments of each cell are calculated to classify the cell as budding or not budding a daughter cell.

The final image-processing step segments the remaining white areas into regions. Only those regions greater than 400 pixels—approximately equivalent to the size of a 2-µm yeast cell—are retained in the final image (see figure 3). We then use geometric invariant moments to classify yeast cells as budding or not budding. A budding yeast cell has a more elliptical shape than a non-budding yeast cell, which is reflected in its second moment. A budding yeast cell is also more asymmetrical, or skewed, about its centroid than a non-budding yeast cell, which is reflected in its third moment. The software converts the second and third central moments to their invariant forms to account for variations in yeast cell size and rotation, then applies empirically determined thresholds to classify the cell as budding or non-budding.
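The moment computation behind the budding test can be sketched as below. This shows only the size-normalizable central moments and an elongation measure derived from the second moments; the full rotation-invariant combinations (and the empirical thresholds from the article) are omitted, and the function names and synthetic cell shapes are assumptions.

```python
import numpy as np

def central_moment(mask, p, q):
    """Central moment of a binary region about its centroid; dividing
    by mu_00 raised to ((p + q)/2 + 1) would give the scale-invariant
    form used to normalize for cell size."""
    ys, xs = np.nonzero(mask)
    yc, xc = ys.mean(), xs.mean()
    return float((((ys - yc) ** p) * ((xs - xc) ** q)).sum())

def elongation(mask):
    """Ratio of the principal second moments: ~1 for a round cell,
    larger for the more elliptical shape of a budding cell."""
    mu20 = central_moment(mask, 2, 0)
    mu02 = central_moment(mask, 0, 2)
    mu11 = central_moment(mask, 1, 1)
    t = mu20 + mu02
    d = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    return (t + d) / (t - d)

# Synthetic regions: a round cell and an elongated (budding-like) one.
y, x = np.ogrid[-20:21, -20:21]
round_cell = (x ** 2 + y ** 2) <= 100        # radius-10 disc
budding = (x ** 2 / 4 + y ** 2) <= 100       # 2:1 ellipse
```

A classifier in this spirit would compare such shape measures (and the analogous third-moment skewness) against empirically determined cutoffs to label each region budding or non-budding.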

Using this image-analysis system, our team is automating the yeast pedigree analysis to study the correlation between genomic instability, cancer, and aging.5 The use of the system is not limited to the application described above. It is more generally applicable to object recognition and classification. Users can easily alter the hardware and software of this system for other applications in which a conventional microscope is not practicable and the high spatial resolution of a conventional microscope is not required. Detecting fluorescence inside a bacterium, for example, would simply require increasing the distal tip magnification of the fiber bundle to 8X or 16X to improve resolution, and rewriting the classification step of the algorithm to correlate fluorescence inside the cell to different stages of the cell cycle. To track the activity of a large number of macrophages, the magnification can be decreased to 2X to improve the field of view, and the classification step can be written to detect the moment when each macrophage engulfs a bacterium.

The possibilities for the approach are numerous, as long as the cell biologist is willing to trade the resolution of a conventional microscope for a well-engineered system. Fiber optic imaging bundles provide design flexibility not possible with a conventional microscope, and image processing and analysis provide a means to eliminate noise and extract real-time information from the image. oe


References

1. M. Lidstrom and D. Meldrum, Nature Reviews Microbiology 1, p. 158 (2003).
2. O. Wolfbeis, Anal. Chem. 76, p. 3269 (2004).
3. J. Koschwanez et al., Rev. Sci. Inst. 75, p. 1363 (2004).
4. R. Gonzalez and R. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ (2002).
5. M. McMurray and D. Gottschling, Science 301, p. 1859 (2003).

John Koschwanez, Deirdre Meldrum
John Koschwanez is a graduate research assistant and Deirdre Meldrum is a professor of electrical engineering and director of the Microscale Life Sciences Center at the University of Washington, Seattle, WA.