Calibrating multiple microscopes with a smartphone

A smartphone's high-resolution LCD enables efficient and accurate calibration of an inexpensive array of handheld microscopes for measuring dynamic events over a large field of view.
01 April 2014
Peter Bajcsy, Mary Brady and Jacob Siegel

How does one build an inexpensive array of handheld microscopes for measuring microscopic dynamic events over a large field of view (FOV)? The challenges of building such an instrument lie in estimating the spatial, temporal, and color properties of each handheld microscope, as well as in seamlessly integrating the individual fields of view into one large FOV. Similar calibration challenges for inexpensive camera arrays have been encountered and researched in close-range photogrammetry and multicamera computer vision applications.1–4 That previous research aimed to reconstruct 3D scenes, whereas our ultimate objective is to image live cells in a culture dish. An entire dish 10cm in diameter cannot be imaged at the rate of cell-state dynamics with a single-camera microscope and a motorized stage: in our experience, acquiring about 17% of the dish (18×22 tiles with 10% overlap) takes around 22 minutes with a Zeiss motorized stage.

To explore these calibration challenges with arrays of microscopes, we first assembled a linear array of two digital handheld microscopes (Dino-Lite AM-413MT5, 12 frames per second, 1280×1024-pixel images) and connected them to a computer via USB. These microscopes are currently used primarily for skin and scalp dermatology and for printed circuit board inspection. Although macroscale calibration methodologies can be applied at the microscale, microscopic resolution imposes much tighter quality specifications on calibration objects and therefore much higher costs. As a result, the same calibration objects cannot be used. We assessed smartphone high-resolution displays (e.g., the iPhone 4S retina display, 0.079mm/pixel) as alternative calibration objects by comparing them to traditional calibration objects (i.e., a Gretag Macbeth color chart, a stage micrometer for pixel-to-millimeter conversion, and a set of shapes with known locations for pose estimation): see Figure 1.


Figure 1. Traditional objects used for (top left) color, (top right) spatial, and (bottom left) pose calibrations. Bottom right: These objects are replaced by an iPhone LCD rendering a virtual object.
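
As a minimal sketch of the acquisition side, the snippet below grabs one frame from each USB-connected microscope with OpenCV. The device indices and frame-size settings are assumptions for illustration; the article does not describe the actual acquisition software.

```python
# Hypothetical sketch: grab frames from two USB handheld microscopes with OpenCV.
# Device indices (0, 1) depend on the host; the frame size matches the AM-413MT5 spec.
import cv2

CAMERA_INDICES = [0, 1]      # one index per microscope on the USB bus (assumption)
FRAME_SIZE = (1280, 1024)    # native image dimensions reported for the microscopes

caps = []
for idx in CAMERA_INDICES:
    cap = cv2.VideoCapture(idx)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_SIZE[0])
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_SIZE[1])
    caps.append(cap)

def grab_frames():
    """Return one frame per microscope (BGR arrays), or None for a failed read."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        frames.append(frame if ok else None)
    return frames

frames = grab_frames()
for cap in caps:
    cap.release()
```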

Next, we developed calibration methods that perform pixel-to-millimeter conversion, red-green-blue color normalization, and microscope pose estimation using the high-resolution LCD of an iPhone. The iPhone LCD is placed under the array of microscopes as illustrated in Figure 1 (right). It renders temporally varying pixel intensities that represent a dynamic virtual calibration object. We currently use three virtual calibration objects. The first is a dynamic web page that varies the intensity of each red, green, and blue channel for color calibration: see Figure 2 (right). The left image in Figure 2 shows a microscope image of a green square printed on paper, imaged in the configuration shown in Figure 1 (top left). The constraints of printing and of imaging paper give the pixels a variety of colors and intensities and yield a static, semi-regular structure. The right image in Figure 2 shows a microscope image of a green pattern rendered by a smartphone LCD; it has very little variation in color and intensity, a very regular structure, and changes intensity and color over time at a known rate (i.e., green, intensity 255 → green, intensity 0 → red, intensity 255 → …).


Figure 2. A microscopic image of a green square in the Gretag Macbeth color chart printed by a color printer (left). A microscopic image of an iPhone LCD screen rendering a green color (right). The LCD is rendering a set of temporally changing distinct colors from the Gretag Macbeth chart.
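
The color normalization step reduces to comparing the average colors each microscope records for the same rendered patches and then fitting a linear color transformation between them, as in Table 1. The following is a minimal sketch of that step, assuming the per-patch average RGB values have already been extracted; the numbers shown are illustrative placeholders, not measured data.

```python
# Minimal sketch of the linear color normalization step.
# camera_a / camera_b hold average RGB per rendered patch for two microscopes
# (placeholder values, not measurements from the article).
import numpy as np

camera_a = np.array([[212.0, 31.0, 28.0], [40.0, 198.0, 45.0], [35.0, 44.0, 205.0]])
camera_b = np.array([[205.0, 38.0, 33.0], [47.0, 190.0, 52.0], [41.0, 50.0, 198.0]])

def mean_color_distance(a, b):
    """Average Euclidean distance between corresponding RGB triplets."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Fit a 3x3 matrix M (least squares) that maps camera B colors onto camera A colors.
# More patches than three would make the fit better conditioned.
M, *_ = np.linalg.lstsq(camera_b, camera_a, rcond=None)
camera_b_corrected = camera_b @ M

print("distance before:", mean_color_distance(camera_a, camera_b))
print("distance after: ", mean_color_distance(camera_a, camera_b_corrected))
```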

The second is a static web page with a checkerboard pattern for spatial calibration (see Figure 3). The third is a dynamic web page with moving lines in two orthogonal directions with known line spacing and motion vectors for pose estimation (see Figure 4). Our custom-developed software processes the LCD renderings captured by each microscope to determine the calibration parameters. The preliminary accuracy results for traditional and virtual calibration objects are summarized in Table 1.


Figure 3. Spatial calibration of an approximately 1mm² field of view (FOV) using an image of (left) a traditional stage micrometer and (middle) an iPhone LCD screen, with (right) automated detection of the virtual object rendered by the LCD.
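
A hedged sketch of the pixel-to-millimeter conversion follows, using OpenCV's checkerboard detector on a microscope image of the rendered pattern. The image name, the checkerboard geometry, and the rendered square size are assumptions for illustration; only the 0.079mm LCD pixel pitch comes from the text.

```python
# Sketch: estimate camera pixels per millimeter from an imaged LCD checkerboard.
import cv2
import numpy as np

SQUARE_LCD_PIXELS = 10                     # rendered checker square size, in LCD pixels (assumption)
SQUARE_MM = SQUARE_LCD_PIXELS * 0.079      # physical square size via the iPhone 4S pixel pitch
PATTERN = (7, 5)                           # interior corners of the rendered checkerboard (assumption)

image = cv2.imread("microscope_view_of_lcd.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(image, PATTERN)
if found:
    corners = cv2.cornerSubPix(
        image, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    grid = corners.reshape(PATTERN[1], PATTERN[0], 2)
    # Average spacing between neighboring corners along rows, in camera pixels.
    spacing_px = np.mean(np.linalg.norm(np.diff(grid, axis=1), axis=2))
    print("camera pixels per mm:", spacing_px / SQUARE_MM)
```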

Figure 4. Left: A microscopic image of one tick mark on a ruler with millimeter accuracy. Right: A microscopic image of an iPhone LCD screen with pulsing lines used as a virtual calibration object.
Table 1. Summary and comparison of calibration results for traditional and virtual calibration objects. The pose angles are illustrated in Figure 5.
Calibration type | Metric | Traditional | Virtual
Color | Initial Euclidean distance between average camera colors | 15.92 | 7.48
Color | Euclidean distance after linear color transformation | 14.72 | 6.86
Spatial resolution | Mean of pixel-to-mm measurements | 710.12 | 711.16
Spatial resolution | Standard deviation of pixel-to-mm measurements | 2.35 | 4.56
Pose | Distance between microscope camera centers (mm) | Roughly 34–35 | 34.57
Pose | Angle to next camera, β (degrees) | 77 | 76.4
Pose | Angle between cameras, δ (degrees) | 79 | 79.8

Figure 5. Definition of angles to next camera (β) and between cameras (δ) for pose estimation.
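
Once each microscope's extrinsics (rotation R and translation t) have been estimated against the virtual line pattern (e.g., with cv2.solvePnP), the pose quantities in Table 1 can be derived from them. The sketch below uses one plausible reading of the Figure 5 definitions (β as the angle between a camera's optical axis and the baseline to the next camera, δ as the angle between the two optical axes); it is not the authors' exact formulation.

```python
# Rough sketch: pose summary for two cameras given world-to-camera extrinsics [R | t].
import numpy as np

def camera_center(R, t):
    """Camera center in world coordinates for extrinsics [R | t]."""
    return -R.T @ t

def pose_summary(R1, t1, R2, t2):
    c1, c2 = camera_center(R1, t1), camera_center(R2, t2)
    baseline = c2 - c1
    # Optical axes (camera z-axes) expressed in world coordinates.
    axis1, axis2 = R1.T @ np.array([0.0, 0.0, 1.0]), R2.T @ np.array([0.0, 0.0, 1.0])

    def angle_deg(u, v):
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    return {
        "distance_mm": float(np.linalg.norm(baseline)),  # distance between camera centers
        "beta_deg": angle_deg(axis1, baseline),          # assumed reading of beta
        "delta_deg": angle_deg(axis1, axis2),            # assumed reading of delta
    }
```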

Our results show that virtual-object-based calibration is overall as accurate as physical-object-based calibration (see Table 1). In other words, the results of camera integration are similar whether we use traditional or virtual calibration objects. However, the virtual calibration objects have several key advantages. They can be changed quickly and at negligible cost. Imaging LCD-rendered objects yields a higher signal-to-noise ratio than imaging traditional calibration objects (see Figure 3). In addition, they can include dynamic patterns, which allows calibration data to be acquired at higher rates (see Table 2). Higher acquisition rates are important for achieving higher statistical significance in the calibration results.

Table 2. Calibration data acquisition rates for traditional and virtual objects.
Calibration type/object | Traditional (data/minute) | Virtual (data/minute)
Color | 2 | 600
Resolution | 1 | 3600
Pose | 0.1 | 8

In the future, we plan to investigate the relationship between virtual object rendering and the display properties,5 and to acquire real video streams to study live cells, nematodes, and insect behavior.

This work has been supported by the National Institute of Standards and Technology (NIST) 2013 Summer Undergraduate Research Fellowship (SURF) Program. We would like to thank Ganesh Saiprasad and Kiran Bhadriraju at NIST for providing additional comments on the work.

Disclaimer

Commercial products are identified in this document to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products identified are necessarily the best available for the purpose.


Peter Bajcsy, Mary Brady
NIST
Gaithersburg, MD

Peter Bajcsy is a computer scientist at NIST working on automatic transfer of image content to knowledge. His scientific interests include image processing, machine learning, and computer and machine vision. He has co-authored more than 24 journal papers, eight book chapters, and close to 100 conference papers.

Mary Brady is the manager of the Information Systems Group in NIST's Information Technology Laboratory. The group focuses on developing measurements, standards, and underlying technologies that foster innovation throughout the information life cycle from collection and analysis to sharing and preservation.

Jacob Siegel
University of Maryland at College Park
College Park, MD

Jacob Siegel is a computer engineering major at the University of Maryland College Park. He participated in the NIST 2013 SURF program. His research interests include camera calibration and image processing.


References:
1. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, M. Levoy, High performance imaging using large camera arrays, ACM Trans. Graph. 24(3), p. 765-776, 2005.
2. F. Remondino, C. Fraser, Digital camera calibration methods: considerations and comparisons, Int'l Archiv. Photogram. Remote Sens. Spat. Inf. Sci. 36(5), p. 266-272, 2006. http://www.isprs.org/proceedings/XXXVI/part5/paper/REMO_616.pdf
3. A. Ilie, G. Welch, Ensuring color consistency across multiple cameras, 10th IEEE Int'l Conf. Comp. Vis. (ICCV'05) 2, p. 1268-1275, 2005. doi:10.1109/ICCV.2005.88
4. P. Bajcsy, R. Kooper, Integration of data across disparate sensing systems over both time and space to design smart environments, in C. Turcu ed., Sustainable Radio Frequency Identification Solutions, ch 17,  InTech, 2010.
5. J. M. Libert, P. A. Boynton, E. F. Kelley, An assessment standard for the evaluation of display measurement capabilities, 8th Color Imag. Conf. 2, p. 217-221, 2000. See also http://color.org/events/medical/Boynton.pdf