For almost a century, archeologists have been flying in small airplanes to photograph archeologically induced anomalies in the landscape. Besides easily visible standing material relics—see (1) in Figure 1—partly eroded archeological structures may be revealed when thrown into relief by low-slanting sunlight (2). Buried residues change the properties of the soil matrix and may be disclosed by differences in the color or height of vegetation on top of remains (3) or distinct tonal differences in plowed soil (4). Once such archeological marks are detected from the air, the aircraft circles them so they can be photographed from various positions.
Before relevant information can be extracted from the photographs, each pixel must be positioned on its true location on the Earth's surface. This georeferencing process ideally corrects the main geometric deformations that occur during imaging: those induced by topographical relief, tilt of the camera axis, and distortion of the optics. Generally, individual photographs are georeferenced using simple but suboptimal methods such as (planar) rectification and polynomial correction. Some archeologists, however, apply more expensive and time-consuming orthorectification workflows to accurately correct these deformations. We propose an approach that aims to combine the best properties of both procedures.
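To give a flavor of the simpler of these two procedures: planar rectification maps image pixels onto an assumed flat ground plane with a 3x3 homography fitted to at least four GCPs, which is precisely why it breaks down over pronounced relief. The following Python sketch uses the standard direct linear transform (DLT); the point coordinates are hypothetical and serve only as illustration.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate a 3x3 planar homography from >=4 point pairs via DLT."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # The homography is the right null vector of the design matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map image pixels to ground coordinates (homogeneous divide)."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Polynomial correction generalizes this idea with higher-order terms, but neither method models relief displacement, which is what the orthorectification workflow below addresses.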
Figure 1. Four different archeological marks: (1) ruins; (2) shadows; (3) vegetation; (4) soil.
This workflow exploits recent improvements in computer hardware and computer vision algorithms. In the first stage, a structure-from-motion (SfM) algorithm incorporates all aerial photographs covering the same part of the landscape: see (1) in Figure 2. Using correspondences among images, the SfM stage yields a sparse point cloud that represents the scene geometry, the internal camera parameters, and the camera positions (2). Afterwards, a dense multiview stereo algorithm uses these data to generate a detailed 3D mesh. With at least three ground control points (GCPs), this 3D model (and the camera positions) can be embedded into an absolute coordinate framework. Because all necessary information is then available, a detailed and accurate orthophoto can be produced.1,2
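The embedding step with at least three GCPs amounts to estimating a seven-parameter similarity (Helmert) transform, i.e., a scale, rotation, and translation, between model and world coordinates. A minimal sketch using Umeyama's closed-form solution follows; the choice of algorithm is our assumption (SfM packages may solve this step differently), and the coordinates in the usage example are hypothetical.

```python
import numpy as np

def similarity_transform(model_pts, world_pts):
    """Closed-form 3D similarity (scale s, rotation R, translation t)
    mapping model coordinates onto world GCP coordinates (Umeyama)."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(world_pts, float)
    mu_p, mu_q = P.mean(0), Q.mean(0)
    Pc, Qc = P - mu_p, Q - mu_q
    # The SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Pc.var(0).sum()
    t = mu_q - s * R @ mu_p
    return s, R, t
```

With noise-free, non-coplanar control points the transform is recovered exactly; with real GCP measurements it is a least-squares fit, and the residuals feed the accuracy figures reported below.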
Figure 2. (1) Two out of 27 oblique images used in a 3D reconstruction of the scene shown in Figure 3. (2) Sparse point cloud and extracted camera positions.
Figure 3. The Ricina orthophoto calculated from 27 oblique images and enhanced for display of vegetation marks. The area in the red box shows the buried remains of the Roman amphitheater.
A case study illustrates the workflow. In June 2009, we acquired a series of 27 oblique aerial photographs above the Roman town of Ricina (central Adriatic Italy) with a compact digital camera. All the photographs depict vegetation marks related to this imperial Roman town, of which only the theater building is fully visible today: see (1) in Figure 2. This specific image series serves to show once again the immense value of archeological aerial photography, since the existence of a Roman amphitheater (see Figure 3, red rectangle) was previously unknown. Georeferencing the 3D model yielded a horizontal root mean square error (RMSE) of 0.31m and a vertical RMSE of 0.15m.
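Such accuracy figures are conventionally computed from the residuals between the georeferenced model and the surveyed control (or check) point coordinates, with the horizontal and vertical components reported separately. A small sketch of that computation (the coordinates in the usage example are hypothetical, not the Ricina data):

```python
import numpy as np

def gcp_rmse(measured, reference):
    """Horizontal (XY) and vertical (Z) RMSE of GCP residuals, in metres.

    Both arguments are (n, 3) arrays of X, Y, Z coordinates.
    """
    r = np.asarray(measured, float) - np.asarray(reference, float)
    horiz = np.sqrt(np.mean(r[:, 0] ** 2 + r[:, 1] ** 2))
    vert = np.sqrt(np.mean(r[:, 2] ** 2))
    return horiz, vert
```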
After applying histogram stretching to enhance the vegetation marks, the final orthophoto (calculated with 10cm grid spacing) nicely displays all archeologically relevant features in their accurate position (see Figure 3). The total orthophoto production time was approximately 70min. Compared with conventional georeferencing on an image-by-image basis, this result shows a substantial saving in total processing time with the additional benefit of obtaining an orthorectified overview image with a positional accuracy that might be hard (or impossible) to attain using conventional low-cost packages.
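The histogram stretching mentioned above can be illustrated with a percentile-based linear stretch, a common choice for pulling subtle vegetation marks out of a low-contrast band; the exact enhancement applied to the Ricina orthophoto is not specified here, so this is only an indicative sketch.

```python
import numpy as np

def stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch between two percentiles, output 0-255.

    Values below the low percentile map to 0, above the high percentile
    to 255; everything in between is scaled linearly.
    """
    lo, hi = np.percentile(band, [low_pct, high_pct])
    out = (band.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

Clipping a few percent at each tail sacrifices the extreme values so that the bulk of the histogram, where the crop-mark contrast lives, spans the full display range.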
Additionally, this method enables incorporation of images that would previously have been considered ‘unusable’: some images in the data set recorded only three GCPs, or poorly distributed ones, and neither situation is adequate for most georeferencing approaches. Our orthophoto procedure integrates all imagery into one photomosaic, making the search for suitable ground control much easier. Because the entire process generates a detailed 3D model and camera calibration parameters, the output is a true orthophoto in which all possible tilt, lens, and terrain displacements are taken into account.
Orthophoto production is critically important in aerial archeology. The approach we have presented here is straightforward and requires no assumptions regarding the camera or the topography of the scene, nor any existing photogrammetric or computer vision expertise. All that is needed is a collection of overlapping aerial images. We note, finally, that the processing is computationally demanding and that image alignments might sometimes be suboptimal. Because the current approach is semi-automatic, we plan to establish a workflow with automated selection of GCPs in the near future. Such a development would offer possibilities for consistent and fast creation of archeologically relevant cartographic data in rapidly changing landscapes.
Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology (LBI ArchPro)
Geert Verhoeven received his MS and PhD in archeology from Ghent University (Belgium), in 2002 and 2009, respectively. Since then, he has been lecturing on archeological methods. His research at the Vienna-based LBI ArchPro institute focuses on extracting information from airborne photographic and hyperspectral data sets.
1. G. Verhoeven, Taking computer vision aloft: archaeological three-dimensional reconstructions from aerial photographs with PhotoScan, Archaeol. Prospect. 18(1), pp. 67-73, 2011. doi:10.1002/arp.399
2. G. Verhoeven, M. Doneus, C. Briese, F. Vermeulen, Mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs, J. Archaeol. Sci. 39(7), pp. 2060-2070, 2012. doi:10.1016/j.jas.2012.02.022