
Electronic Imaging & Signal Processing

In living color

Creating digital archives of paintings requires imaging and reproduction methods more complex than standard red-green-blue sensor and display techniques.

From oemagazine January 2003
31 January 2003, SPIE Newsroom. DOI: 10.1117/2.5200301.0003

Multispectral imaging technology is attracting increasing attention in the fields of color engineering and image science. In contrast to systems for remote sensing, which often include a large number of channels across the near IR spectral region, multispectral imaging systems for color engineering and image science generally focus on visible wavelengths (400 to 700 nm). Multiple images of a single field-of-view are captured in more than three wavelength bands in this range.

This multispectral imaging allows high-grade color image-processing systems to perform important tasks that are not possible with ordinary red-green-blue (RGB) color imaging. First, multispectral images can be used for accurate colorimetry and color image reproduction because the color-matching functions of the human visual system are described more precisely by combining the sensitivity functions of a multispectral camera than by using RGB camera sensitivities. Second, the multispectral images can be used for estimating surface spectral reflectance of an object in a natural scene, which is an inherent physical property of its surface. Numerous applications, including object recognition and color appearance prediction under arbitrary illumination conditions, require this information.

digital archiving

One recent application of multispectral imaging that has attracted much attention is the digital archiving of art paintings. Spectral reflectance information is more useful than RGB color information for recording and reproducing paintings as digital images. RGB color images are device dependent and valid only for fixed illumination and viewing conditions. Once we acquire all of a painting's surface-spectral reflectances, however, we can produce images with accurate color appearance under arbitrary illumination conditions.

Spectral reflectance information, however, is not sufficient for rendering realistic images of most art paintings, including oil paintings. Oil paintings consist of thick layers of oil paint applied to a canvas. The surfaces are rough and glossy. Specular highlights often appear on the object surfaces because of varnish top coats. Thus, in addition to spectral reflectance, we need shape information describing the surface geometry and a reflection model describing the optical behavior of the surface. With this data, the variation in appearance of the paintings under arbitrary conditions of illumination and viewing can be rendered as a set of color images by 3-D computer-graphics techniques.

A direct way to acquire the shape data of object surfaces is to use a laser rangefinder.1 Unfortunately, such a device makes unavoidable errors when measuring colored paintings that include specularity, because such surfaces differ from the white matte surfaces for which it is designed. We consider the surface shape of a painting to be a rough plane rather than a 3-D curved surface. Our recent studies show that the surface normal vectors of small facets representing the detailed structures of the rough surface can be estimated from camera data without using a laser rangefinder.2,3

Digitally archiving art paintings involves a number of steps, from measurement to image rendering. The archivists acquire both the spectral reflectance and surface shape data using the multispectral images from a multiband camera. Using the surface reflectance and normal data, they mathematically model the surface light reflection. Finally, they combine all the data to render color images of the paintings with realistic shading effects under arbitrary illumination and viewing conditions.

measuring system

Our system for measuring art paintings uses a six-band camera for spectral imaging. The camera system consists of a monochrome CCD camera, a standard photographic lens, six color filters, and a PC. By combining the filter transmittances and the spectral sensitivity function of the monochrome CCD camera, we create composite spectral sensitivity functions.
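As a rough illustration, the composite sensitivities can be computed by multiplying each filter's transmittance by the sensor's spectral sensitivity on a common wavelength grid. The Gaussian filter and sensor curves below are stand-ins, not the actual measured functions:

```python
import numpy as np

# Hypothetical illustration: wavelength sampling over the visible range.
wavelengths = np.arange(400, 701, 10)          # 400-700 nm, 10-nm steps

# Assumed (made-up) monochrome CCD sensitivity: broad, peaking near 550 nm.
ccd_sensitivity = np.exp(-0.5 * ((wavelengths - 550) / 120.0) ** 2)

# Assumed transmittances of six color filters, modeled as Gaussians
# centered at six band positions across the visible spectrum.
centers = np.array([420, 470, 520, 570, 620, 670])
filters = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 30.0) ** 2)

# Composite sensitivity of each band = filter transmittance x CCD sensitivity.
composite = filters * ccd_sensitivity[None, :]   # shape (6, n_wavelengths)
```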

Figure 1. A stationary monochrome 494 x 768 CCD camera with 27-µm² pixels is equipped with a number of colored filters to capture multiple images of a painting. Meanwhile, the light source shining down at a 45° angle is rotated around the painting. This provides information about the shape of the surface.

We acquire multiple images of the same object surface for different illumination directions. The camera aims at the object surface from vertically above the painting (see figure 1). A lamp rotates around the optical axis in a plane between the camera and the object. The camera captures multiband images under eight different illumination directions with 45° increments between them. The angle of elevation of the light source is always about 45°.

The use of multiple illuminations offers two advantages. First, we can choose the most reliable function of spectral reflectance without noise effects (specularity and shadowing). Second, we can estimate the surface normal vector at each pixel point using the change in shading as the illumination direction changes. In practice, we use a photometric stereo method to compute the surface normal. We can determine the surface normal if we can observe light reflected from the surface from three different illumination directions.
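The photometric stereo step can be sketched as follows. Assuming Lambertian shading and known unit light directions, the albedo-scaled normal at a pixel solves a small least-squares system. The light directions below mimic the eight 45°-elevation positions described above; all values are illustrative:

```python
import numpy as np

def estimate_normal(light_dirs, intensities):
    """Photometric-stereo sketch: recover the unit surface normal at one
    pixel from Lambertian intensities observed under known light directions.
    Solves I = L @ (albedo * n) in the least-squares sense."""
    L = np.asarray(light_dirs, dtype=float)       # (k, 3) unit light vectors
    I = np.asarray(intensities, dtype=float)      # (k,) observed intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)     # g = albedo * normal
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Eight lights at 45-deg elevation, in 45-deg azimuth steps (as in the setup).
az = np.deg2rad(np.arange(0.0, 360.0, 45.0))
el = np.deg2rad(45.0)
lights = np.stack([np.cos(el) * np.cos(az),
                   np.cos(el) * np.sin(az),
                   np.full_like(az, np.sin(el))], axis=1)

# Synthetic pixel with a known (hypothetical) normal and albedo 0.8.
true_n = np.array([0.2, -0.1, 1.0])
true_n = true_n / np.linalg.norm(true_n)
obs = 0.8 * np.clip(lights @ true_n, 0, None)     # Lambertian shading
n_hat, rho = estimate_normal(lights, obs)
```

With noiseless Lambertian data the three-unknown system is solved exactly; with real camera data the redundant illumination directions also let shadowed or specular observations be discarded before the fit.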

We use a linear finite-dimensional model for representing surface spectral reflectances. This model significantly reduces the number of unknown parameters if the reflectance functions with continuous spectra are represented by only a small number of basis functions.

We can express the spectral reflectance function S(λ) as

S(λ) = σ1S1(λ) + σ2S2(λ) + ... + σnSn(λ)          [1]

where {Si(λ)} is the set of basis functions, and {σi} is the set of weights. Because we use six spectral bands, the number of basis functions n must be six or fewer. To determine the basis functions, we used a database of surface spectral reflectances for multiple objects. We selected the five principal components of the set of reflectances as the basis functions.

With basis functions known in advance, the problem of estimating the reflectance becomes instead a problem of inferring the set of weight coefficients {σi} from the camera outputs. In order to estimate the weights σ1, σ2, ..., σ5, we use the average of the camera outputs for different illumination directions after eliminating highlight and shadow effects. Given the illuminant spectrum and the spectral sensitivity functions for the camera, we can calculate the estimates of the weights at each pixel from six sensor outputs. The final step of reflectance estimation is to correct the non-uniformity of illumination across the paintings. A standard white board provides a reference for calibrating the non-uniformity.
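The weight estimation amounts to solving a small linear system: each of the six sensor outputs is a linear combination of the weights, with coefficients given by integrals of basis reflectance, illuminant, and sensitivity. The sketch below uses randomly generated stand-ins for those functions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_basis, n_wl = 6, 5, 31        # six channels, five basis functions

# Assumed stand-ins: basis reflectances S_i, illuminant E, sensitivities h_j.
S = rng.uniform(0.0, 1.0, (n_basis, n_wl))     # basis functions S_i(lam)
E = np.ones(n_wl)                              # flat illuminant E(lam)
H = rng.uniform(0.0, 1.0, (n_bands, n_wl))     # camera sensitivities h_j(lam)

# System matrix: camera output c_j = sum_i sigma_i * sum_lam S_i*E*h_j.
A = H @ (S * E).T                              # shape (6, 5)

sigma_true = np.array([0.6, 0.2, -0.1, 0.05, 0.02])
c = A @ sigma_true                             # six noiseless sensor outputs

# Least-squares estimate of the weights from the six outputs,
# then reconstruction of the continuous reflectance via equation 1.
sigma_hat, *_ = np.linalg.lstsq(A, c, rcond=None)
reflectance_hat = sigma_hat @ S                # estimated S(lam)
```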

image rendering

We use a 3-D light reflection model for creating computer-graphics images. The surface layers of art paintings can be considered an inhomogeneous dielectric material. Light reflected from the inhomogeneous object surface is composed of two additive components: the body (diffuse) reflection component and the interface (specular) reflection component. The spectral intensity of the reflected light Y is described as a function of the geometric parameters and the wavelength as follows:

Y(θ,λ) = α(θ)S(λ)E(λ) + β(θ)E(λ)          [2]

where θ includes the directional angles describing the reflection geometry (such as incident angle, viewing angle, and phase angle), and E(λ) is the spectral power distribution of the illumination. The first term of the right-hand side in equation 2 represents the body reflection in which α(θ) is the shading factor, while the second term represents the interface reflection in which β(θ) is the scale factor.

Because the body reflection is caused by light scattering among the pigments of paints, this component produces the object color. The surface-spectral reflectance of the interface component for inhomogeneous dielectric materials like paints is constant over the visible wavelength region. Therefore, the interface reflection component has the same spectral composition as the illumination. The spatial distributions of these two reflection components are quite different. The direction of the interface reflection is restricted to a narrow angular interval in the direction that the light would be reflected from a mirror. Conversely, the body-reflection component emerges equally in all directions (Lambertian surface).

We use the Torrance-Sparrow model for determining the geometric factors α(θ) and β(θ). We have α(θ) = cosθ with the incident angle θ. This model is particularly precise in describing the interface component β(θ), which consists of several terms, including the index of surface roughness and the Fresnel spectral reflectance. In order to determine the most suitable function of β(θ) for a practical painting, the unknown parameters in β(θ) are estimated from the specular reflection component of the painting's image data.
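A minimal sketch of the reflection model in equation 2 follows, using a simplified Torrance-Sparrow specular factor: a Gaussian facet distribution divided by the cosine of the viewing angle, with the Fresnel and geometric-attenuation terms folded into a single constant. All curves and constants are illustrative, not fitted values:

```python
import numpy as np

def reflected_spectrum(S, E, n, l, v, k_s=0.3, roughness=0.1):
    """Dichromatic model sketch, Y = alpha*S*E + beta*E, with a simplified
    Torrance-Sparrow specular factor (Gaussian facet distribution; Fresnel
    and geometric-attenuation terms folded into the constant k_s)."""
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    alpha = max(np.dot(n, l), 0.0)                   # body term: cos(incident)
    h = (l + v) / np.linalg.norm(l + v)              # halfway vector
    delta = np.arccos(np.clip(np.dot(n, h), -1, 1))  # facet angle
    beta = k_s * np.exp(-(delta / roughness) ** 2) / max(np.dot(n, v), 1e-6)
    return alpha * S * E + beta * E                  # spectral radiance Y(lam)

wl = np.arange(400, 701, 10)
S = 0.2 + 0.6 * np.exp(-0.5 * ((wl - 620) / 40.0) ** 2)   # reddish paint
E = np.ones_like(wl, dtype=float)                          # flat illuminant
n = np.array([0.0, 0.0, 1.0])
l = np.array([1.0, 0.0, 1.0])      # mirror geometry: strong highlight
v = np.array([-1.0, 0.0, 1.0])
Y = reflected_spectrum(S, E, n, l, v)
```

In the mirror geometry above the interface term adds a constant (illuminant-colored) offset to the body spectrum; moving the viewpoint away from the specular direction makes the Gaussian facet term collapse and leaves only the body color.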

A ray-tracing algorithm allows us to render the image using this 3-D reflection model. In order to execute the algorithm, we provide the spatial coordinates of a target painting, a light source, and a viewpoint. The set of the estimated surface reflectances and the estimated surface normals at all pixel points is used as the physical data of the target object. Moreover, the properties of the illuminant spectrum E(λ) of the lighting condition are assumed a priori.
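Putting the pieces together, a per-pixel rendering pass might look like the following sketch, which shades hypothetical reflectance and normal maps with an assumed illuminant and converts the resulting spectra to RGB with crude stand-in weights (the interface term is omitted for brevity):

```python
import numpy as np

# Minimal per-pixel rendering sketch with hypothetical data: combine
# estimated reflectances and normals with an assumed illuminant to shade
# a small image. A full renderer would add the Torrance-Sparrow
# interface component and trace rays from a chosen viewpoint.
h, w, n_wl = 4, 4, 31
wl = np.arange(400, 701, 10)

reflectance = np.full((h, w, n_wl), 0.5)            # estimated S(lam) per pixel
normals = np.zeros((h, w, 3)); normals[..., 2] = 1  # estimated normals (flat)
E = np.ones(n_wl)                                   # assumed illuminant E(lam)

light = np.array([1.0, 1.0, 2.0])
light = light / np.linalg.norm(light)

# Crude spectrum-to-RGB weights (stand-ins for colorimetric integration).
rgb_weights = np.stack([np.exp(-0.5 * ((wl - c) / 40.0) ** 2)
                        for c in (610, 550, 460)])
rgb_weights /= rgb_weights.sum(axis=1, keepdims=True)

shading = np.clip(normals @ light, 0, None)          # Lambertian cos term
spectra = shading[..., None] * reflectance * E       # Y(lam) per pixel
image = spectra @ rgb_weights.T                      # (h, w, 3) rendered RGB
```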

Figure 2. Experimental results using an oil painting (a). The original painting was divided into regions and imaged in multiple spectral bands. After recombining and using this data to estimate the surface reflectances and normals, computer graphics can render a computer-graphics image of the painting, illuminated by the slanting rays of an incandescent lamp (b), as though we were viewing it from a slant. Using the same estimated data, we can also render the painting as illuminated by a D65 daylight lamp (c). This compares well to the original illuminated by a real D65 lamp (d).

For example, consider an oil painting illuminated by an incandescent lamp (see figure 2). We imaged an area of about 140 mm x 200 mm; the area of the image corresponding to a pixel is about 0.176 mm x 0.193 mm. In this experiment we could not compare this resolution with the brush-stroke width; however, we believe the resolution is fine enough to estimate the average surface roughness of the painting.

We divided the entire object surface into a 2 x 2 matrix of sub-regions, after which the multiband camera photographed each region at nine illumination directions. The four multispectral images were then combined into a large image of about 800 x 1000 pixels.

Figure 2(b) shows the image-rendering result based on the estimated surface reflectances and normals from the image data in (a). The picture represents a computer-graphics image under the conditions of the slanting rays of an incandescent lamp and the slanting viewpoint. We can observe the surface roughness and the gloss on the image.

Figure 2(c) shows a computer-graphics image based on the same estimated data under a D65 daylight lamp but with the same viewing and lighting geometries as in (b). For comparison, Figure 2 (d) shows the observed image of the original under a real D65 lamp.

Comparing the images (b) and (c) to the originals (a) and (d), respectively, suggests that our multispectral technique performs accurate color image reproduction for this art painting. The process described above thus creates realistic images of painted objects under arbitrary conditions of illumination and viewing, allowing us to digitally archive paintings and render realistic reproductions of them. Such technology will allow the broad, accurate dissemination of many artistic works. oe


1. S. Tominaga, T. Matsumoto, et al., Proc. 9th Color Imaging Conf., pp. 337-341 (2001).

2. S. Tominaga, N. Tanaka, et al., Proc. of SPIE, Color Imaging, Vol. 4663, pp. 27-34 (2002).

3. N. Tanaka and S. Tominaga, Proc. 1st ICIS, pp. 387-388 (2002).

Shoji Tominaga
Shoji Tominaga is professor of engineering informatics at Osaka Electro-Communication University, Osaka, Japan.