Processing of 3-line imagery on a digital photogrammetric workstation
Author(s):
Ruediger Brand;
Timm Ohlhof;
Manfred Stephani
One of the main topics in photogrammetry is the development of automatic processing techniques, e.g. for aerotriangulation and stereo restitution, both starting from digital aerial imagery. At the same time, digital cameras and sensors are gaining more and more importance. One of the most promising sensor concepts makes use of three linear CCD arrays. This paper deals with the special geometry of 3-line imagery and the consequences for its processing on a digital photogrammetric workstation.
Evaluation of a new method of satellite scanner resection
Author(s):
Fergal P. Shevlin
A method of satellite scanner resection which uses a coplanarity constraint, as opposed to the conventional collinearity constraint, is evaluated. It relies on a simplification of the resection problem achieved by adjusting ground control point coordinates and image-forming ray direction vectors using satellite motion data typically recorded during image acquisition. The problems caused by ground control points not being sufficiently dispersed throughout the scene are examined, and the coplanarity resection technique is found to be more reliable in this case. Both simulated and actual data are used to quantify the performance of the technique, and some insight into the problems associated with its usage is provided.
Automatic aerotriangulation: concept, realization, and results
Author(s):
L. Tang;
Jochen M. Braun;
R. Debitsch
An automatic aerotriangulation system is described. Its design and development were driven by the requirements of photogrammetric practice. The system consists of five components: block preparation, fully automatic tie point determination, semi-automatic control point measurement, an interface to diverse block adjustment programs, and block post-processing. A relational database handles the communication among the individual components. Fully automatic tie point determination plays the key role in the whole system; its realization follows the principle of image connection and thus exploits the technical potential to reach the highest level of automation. The system is now in daily operational use. Results achieved with the system are promising and show that automatic aerotriangulation provides higher reliability of results and far greater economy for photogrammetric practice than before.
Automated exterior orientation of large-scale imagery using map data
Author(s):
Bjarke Moller Pedersen
The automation of photogrammetric orientation procedures is a topic of major interest. Several methods to automate the measurement of ground control points as well have been suggested, but at present there is no general solution to the problem. The approach presented in this paper follows a coarse-to-fine strategy by means of an object hierarchy extracted from an existing digital map. Large objects can be applied at the coarser levels of an image pyramid, smaller but better-defined objects at the finer levels. Tests on a single pair of large-scale aerial images were conducted successfully and seem to indicate that there are no critical parameters.
Potential of digital photogrammetric systems
Author(s):
Werner Mayr
This paper first gives an overview of the status quo of applied digital photogrammetry from the point of view of a commercial system designer. Second, the potentials of a digital photogrammetric system are discussed. As it turns out, process automation appears to have the greatest impact on future digital photogrammetric systems, and current products apparently already possess some of the features required to realize the discussed potentials. Following a production flow chart, the paper discusses all applications and then focuses on feature extraction and sensor fusion as important examples of the potential of a digital photogrammetric system. The conclusions briefly summarize the results and give an outlook from the author's personal point of view.
Automatic aerotriangulation with frame and three-line imagery
Author(s):
Christian Heipke;
Wilhelm Mayr;
Christian Wiedemann;
Heinrich Ebner
In this paper an approach for automatic aerotriangulation (AAT) is presented, which is designed for frame and three-line imagery. We focus on the extraction of conjugate points, because the geometric differences between frame and three-line imagery can be considered well known and only lead to different modules at the implementation stage. Our approach uses point features and a coarse-to-fine strategy based on image pyramids. To extract conjugate points we employ feature-based matching of image pairs on all pyramid levels. After matching all overlapping pairs of images, manifold conjugate point tuples are generated and checked for geometric consistency individually as well as in their local neighborhood. Subsequently, the exterior orientation parameters for the whole block are calculated on each pyramid level in a robust bundle adjustment, together with 3D coordinates for the conjugate point tuples in an arbitrary reference system. This information serves as initial values on the next lower pyramid level. Control information is not necessary a priori, but can be introduced at any stage of processing. The approach has been tested with various imagery. A few hundred well-distributed conjugate points were extracted in all cases. In particular, a large number of many-ray points, which are essential for a stable block geometry, was detected. The standard deviation of all image coordinates lies between 0.3 and 0.4 pixels. These results constitute a proof-of-concept and demonstrate the feasibility of the presented approach.
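As a rough illustration of the coarse-to-fine strategy mentioned above, the following minimal sketch refines an approximate conjugate point through an image pyramid with normalized cross-correlation. All function names and parameters are hypothetical; the paper's feature-based matching and robust bundle adjustment are considerably more elaborate.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Crude image pyramid by 2x2 block averaging (level 0 = full resolution)."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                           a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def transfer_point(left_pyr, right_pyr, pt_left, pt_right_coarse, win=4, search=2):
    """Refine an approximate conjugate point from the coarsest to the finest level.
    pt_left is given at full resolution, pt_right_coarse at the coarsest level;
    both points are assumed to lie well inside the images."""
    v, u = pt_right_coarse
    for level in range(len(left_pyr) - 1, -1, -1):
        left, right = left_pyr[level], right_pyr[level]
        y, x = pt_left[0] // 2 ** level, pt_left[1] // 2 ** level
        patch = left[y - win:y + win + 1, x - win:x + win + 1]
        best, v_best, u_best = -2.0, v, u
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = right[v + dy - win:v + dy + win + 1,
                             u + dx - win:u + dx + win + 1]
                if cand.shape != patch.shape:
                    continue
                s = ncc(patch, cand)
                if s > best:
                    best, v_best, u_best = s, v + dy, u + dx
        # image coordinates double when moving to the next finer level
        v, u = (2 * v_best, 2 * u_best) if level > 0 else (v_best, u_best)
    return v, u
```

In the actual system the matched point tuples feed a robust bundle adjustment on every pyramid level, whose exterior orientations and 3D point coordinates then serve as initial values for the next finer level.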
MOMS-02/D2 DTM generation using intensity-based least-squares matching techniques
Author(s):
Dieter Fritsch;
Michael Kiefner;
Franz Schneider
The optical MOMS-02 three-line imaging camera provided stereo data at resolutions of 13.5 m and 4.5 m during the second German Spacelab (D2) experiment on board the Space Shuttle during flight STS-55 from April 26 to May 6, 1993. For the verification of the stereo module, digital terrain models are derived by photogrammetric image matching using intensity-based and feature-based methods, respectively. These verifications concentrate on two test sites for which ground control points are available. The paper presents a comparison of the matching results when area-based and feature-based methods are used for automatic DTM generation. Interestingly, the accuracy level could be increased by a factor of 2 when area-based image matching is used for the point transfer computation. This improvement in accuracy is verified in both test sites, the Australian and the Andes test site. Furthermore, the last part of the paper presents first experimental results of a simulation for the matching of three channels of different pixel resolution, coming very close to the MOMS-02 architecture.
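For readers unfamiliar with intensity-based least-squares matching, the sketch below estimates a sub-pixel patch shift by Gauss-Newton iterations. It models only two geometric shift parameters, whereas the MOMS-02 processing uses a richer geometric and radiometric model; all names here are illustrative.

```python
import numpy as np

def bilinear(img, yy, xx):
    """Bilinear resampling of img at (possibly fractional) positions yy, xx."""
    y0 = np.clip(np.floor(yy).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xx).astype(int), 0, img.shape[1] - 2)
    fy, fx = yy - y0, xx - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1] +
            fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

def lsm_shift(template, search, x0, y0, iters=10):
    """Least-squares matching with a pure shift model: estimate (dx, dy) so that
    search(y0 + y + dy, x0 + x + dx) best fits template(y, x)."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = dy = 0.0
    for _ in range(iters):
        patch = bilinear(search, ys + y0 + dy, xs + x0 + dx)
        gy, gx = np.gradient(patch)                     # image derivatives
        A = np.column_stack([gx.ravel(), gy.ravel()])   # design matrix
        r = (template - patch).ravel()                  # intensity residuals
        upd, *_ = np.linalg.lstsq(A, r, rcond=None)     # least-squares update
        dx += upd[0]
        dy += upd[1]
        if np.hypot(upd[0], upd[1]) < 1e-3:             # convergence test
            break
    return dx, dy
```

A typical call would pass a small template around a feature in the nadir channel, a search window from the forward or backward channel, and an initial position (x0, y0) supplied by a coarser pre-matching step.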
Adaptive automatic terrain extraction
Author(s):
Bingcai Zhang;
Scott Miller
Automatic terrain extraction (ATE) is a key component of digital photogrammetric software. Image correlation has been widely used in ATE and has proved to be a reliable and accurate algorithm. A successful implementation of image correlation largely depends on a set of correct parameters which control the algorithm. This set of parameters should change adaptively according to several characteristics, including terrain type, signal power, flying height, X and Y parallax, and image noise level. This paper discusses a new adaptive automatic terrain extraction (AATE) system which uses an inference engine to generate the set of image correlation parameters. In addition to the inference engine, AATE can exploit multiple images and multiple bands, and it contains improved methods for correcting image noise and Y-parallax errors. The overall result is a more user-friendly and productive system which generates substantially more accurate digital elevation data than the previous non-adaptive ATE. Owing to its adaptivity, AATE works well on both large- and small-scale images. This paper presents the theoretical foundation of AATE and the issues arising in the practical implementation, and also presents comparison results between non-adaptive ATE and AATE for various terrain types and flying heights.
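The following toy sketch shows what a rule-based selection of correlation parameters might look like; the rules, thresholds, and parameter names are invented for illustration and are not those of the AATE inference engine.

```python
# Hypothetical rule base mapping scene characteristics to correlation parameters.
def choose_correlation_params(terrain, signal_power, flying_height_m, noise_level):
    params = {"window": 9, "min_correlation": 0.7, "search_range_px": 10}
    if terrain == "flat":
        params["window"] = 13            # a larger window is safe on smooth terrain
    elif terrain == "mountainous":
        params["window"] = 7             # a smaller window limits geometric distortion
        params["search_range_px"] = 20   # expect larger x-parallax
    if signal_power < 50.0:              # weak texture: enlarge window, relax threshold
        params["window"] += 4
        params["min_correlation"] -= 0.1
    if noise_level > 5.0:                # noisy imagery: demand stronger evidence
        params["min_correlation"] += 0.05
    if flying_height_m > 5000.0:         # small-scale imagery: shorter search range
        params["search_range_px"] = max(5, params["search_range_px"] // 2)
    return params

print(choose_correlation_params("mountainous", 30.0, 2500.0, 6.0))
```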
Structural matching for nonmetric images
Author(s):
Younian Wang
Image matching is a basic issue in computer vision and in digital photogrammetry. For non-metric images the area-based and feature-based matching methods are not as suitable as for aerial images, because there are usually no initial values available. In this paper a structure-based image matching method is introduced, which can recognize corresponding image objects fully automatically without any a priori information and without any assumptions about relations in the digital images. Examples of the application of this matching method to non-metric images in close-range photogrammetry and to SPOT and MOMS remotely sensed images are demonstrated. The results show that the highest automation level of image matching has been reached with the developed method.
Feature matching for automatic generation of distortionless digital orthophoto
Author(s):
Maxim Fradkin;
Uzi Ethrog
The conventional method of automatic orthophoto generation, based on an existing DTM, is essentially incapable of yielding distortionless rectification of 3-D surface objects, especially in the case of large-scale man-made environments. Outlining the proposed alternative method of orthophoto generation, we concentrate on the incorporation of a feature stereo matching technique based on a multi-primitive hierarchical approach. First, a feature hierarchy, consisting of line segments, parallel segments, vertices, edges, edge contours, and closed polygons, is generated simultaneously in two images. By applying rigorous geometric constraints, this significantly reduces the number of spurious features generated, thereby increasing the efficiency and reliability of the grouping process as well as of the subsequent matching procedure. A top-down matching algorithm, utilizing the maximal clique technique, propagates matching results through the hierarchy levels, employing various hierarchical and topological relationships established between the features.
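The maximal clique step can be pictured as follows: candidate feature correspondences are the nodes of a consistency graph, mutually consistent candidates are connected, and the largest clique is accepted as the match set. The brute-force sketch below, with an invented consistency test and data, is only meant to make the idea concrete.

```python
from itertools import combinations

def max_clique(nodes, compatible):
    """Brute-force maximum clique; adequate for the small candidate sets
    that remain after rigorous geometric filtering."""
    for k in range(len(nodes), 0, -1):
        for subset in combinations(nodes, k):
            if all(compatible(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

# candidate correspondences: (feature id in image 1, feature id in image 2)
candidates = [(1, "a"), (2, "b"), (3, "c"), (1, "b")]

def compatible(c1, c2):
    # two candidates conflict if they reuse a feature from either image
    return c1[0] != c2[0] and c1[1] != c2[1]

print(max_clique(candidates, compatible))   # e.g. [(1, 'a'), (2, 'b'), (3, 'c')]
```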
First results of parallel global object reconstruction using digitized aerial photographs
Author(s):
Mikael Holm;
Susanna Rautakorpi
Global object reconstruction, or global matching, is a general model for digital photogrammetry, integrating area-based multi-image matching, point determination, object surface reconstruction and ortho-image generation. Using this model, the unknown quantities are estimated directly from the pixel intensity values and from control information in a nonlinear least squares adjustment. The unknown quantities are the geometric and radiometric parameters of the approximation of the object surface (e.g. the heights of a digital terrain model and the brightness values of each point on the surface) and the orientation parameters of the images. Because the method is rather computation-intensive, it is now being implemented on parallel computer architectures. In the first phase, digitized aerial photographs are used in testing the system. In this paper the first, and very preliminary, results are presented.
Comparative analysis of hierarchical triangulated irregular networks to represent 3D elevation in terrain databases
Author(s):
Mahdi Abdelguerfi;
Chris Wynne;
Edgar Cooper;
Roy V. Ladner;
Kevin B. Shaw
Three-dimensional terrain representation plays an important role in a number of terrain database applications. Hierarchical triangulated irregular networks (TINs) provide a variable-resolution terrain representation that is based on a nested triangulation of the terrain. This paper compares and analyzes existing hierarchical triangulation techniques. The comparative analysis takes into account how aesthetically appealing and how accurate the resulting terrain representation is. Parameters such as adjacency, slivers, and streaks are used to provide a measure of how aesthetically appealing the terrain representation is. Slivers occur when the triangulation produces thin, slivery triangles; streaks appear when too many triangulations are done at a given vertex. Simple mathematical expressions are derived for these parameters, thereby providing a fairer and more easily duplicated comparison. In addition to meeting the adjacency requirement, an aesthetically pleasing hierarchical TIN generation algorithm is expected to reduce both slivers and streaks while maintaining accuracy. A comparative analysis of a number of existing approaches shows that a variant of a method originally proposed by Scarlatos exhibits better overall performance.
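As a rough illustration of such measures, the sketch below computes a triangle compactness value (small for slivers) and the number of triangles incident on each vertex (large where streaks may occur). The exact expressions derived in the paper may differ from these.

```python
import numpy as np

def thinness(p0, p1, p2):
    """Compactness measure: 1.0 for an equilateral triangle, near 0 for slivers."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    a2 = float(((p1 - p0) ** 2).sum())
    b2 = float(((p2 - p1) ** 2).sum())
    c2 = float(((p0 - p2) ** 2).sum())
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                     (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return 4.0 * np.sqrt(3.0) * area / (a2 + b2 + c2)

def vertex_degrees(triangles):
    """Triangles incident on each vertex; very high counts hint at streaks."""
    deg = {}
    for tri in triangles:
        for v in tri:
            deg[v] = deg.get(v, 0) + 1
    return deg

print(thinness((0.0, 0.0), (1.0, 0.0), (0.5, 0.866)))      # ~1.0, well shaped
print(thinness((0.0, 0.0), (1.0, 0.0), (0.5, 0.02)))       # near 0, a sliver
print(vertex_degrees([(0, 1, 2), (0, 2, 3), (0, 3, 4)]))   # vertex 0 used 3 times
```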
Acquisition of 3D urban models by analysis of aerial images, digital surface models, and existing 2D building information
Author(s):
Norbert Haala;
Karl-Heinrich Anders
For a task like 3D building reconstruction, there are three main data sources carrying the information required for highly automated data acquisition. These data sources are aerial images; digital surface models (DSM), which can either be derived by stereo matching from aerial images or be measured directly by scanning laser systems; and, at least for highly developed countries, existing 2D GIS information on the ground plans or usage of buildings. The way these different data sources should be utilized in a 3D building reconstruction process depends on the distinctive characteristics of the different, partly complementary types of information they contain. Image data contains much information, but just this complexity causes enormous problems for the automatic interpretation of this data type. The GIS as a secondary data source provides information on the 2D shape, i.e. the ground plan of a building, which is very reliable, although information on the third dimension is missing and therefore has to be provided by other data sources. As the information of a DSM is restricted to surface geometry, the interpretation of this kind of data is easier than the interpretation of image data. Nevertheless, due to insufficient spatial resolution or quality of the DSM, optimal results can only be achieved by combining all data sources. Within this paper two approaches aiming at the combination of aerial images, digital surface models and existing ground plans for the reconstruction of three-dimensional buildings are demonstrated.
Automatic building extraction using a combination of spatial data and digital photogrammetry
Author(s):
Jussi Lammi
This paper studies the automatic extraction of buildings using a combination of spatial data and digital photogrammetry. A heuristic search offers one way to update existing two-dimensional vector data into 2.5- or three-dimensional data. The objective is to find a criterion function for a heuristic search for edges in images, and to determine how this should be used for the extraction of buildings when two-dimensional basement data is available. The technique proposed is based on edge detection applied to image data at those positions where discontinuities are expected. The edge detection is done in object space, which makes the implementation of an edge-finding algorithm straightforward and also ensures that the method is directly applicable to the use of multiple images. Edge detection by search is carried out using a hierarchical search strategy. The method achieves sub-pixel accuracy in edge finding, provided the step size used is smaller than the pixel size of the images. The method was used to construct three-dimensional models of buildings in Tampere. In the test, edge detection by search worked well in most cases, and the pull-in range of the method was large.
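The sketch below illustrates edge detection by search in one dimension: the image is sampled along a profile perpendicular to the predicted edge with a step smaller than one pixel, and the offset of the strongest grey-level change is returned. The object-to-image projection of the actual method is abstracted away here, and all names are hypothetical.

```python
import numpy as np

def sample(img, y, x):
    """Bilinear interpolation of a grey-level image at a fractional position."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1] +
            fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

def find_edge(img, center, normal, half_range=3.0, step=0.25):
    """Return the offset (in pixels, along `normal`) of the strongest grey-level
    change; a step smaller than one pixel gives sub-pixel localization."""
    offsets = np.arange(-half_range, half_range + step, step)
    profile = np.array([sample(img, center[0] + t * normal[0],
                                    center[1] + t * normal[1]) for t in offsets])
    grad = np.abs(np.gradient(profile))
    return offsets[np.argmax(grad)]

yy, xx = np.mgrid[0:40, 0:40]
img = 100.0 / (1.0 + np.exp(-(xx - 20.0)))   # smooth vertical edge near column 20
print(find_edge(img, center=(20.0, 18.0), normal=(0.0, 1.0)))  # close to +2.0
```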
Object-oriented software design in semiautomatic building extraction
Author(s):
Eberhard Guelch;
Hardo Mueller
Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only to the data but also to the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.
Geometric constraints on hypothesis generation for monocular building extraction
Author(s):
Jefferey A. Shufelt
A recurring issue in data-driven feature extraction systems is the combinatorics of search in hypothesis space. Brute-force attempts at feature generation, where all possible combinations of edges are evaluated, lead to exponential growth in the size of the search space. In monocular building extraction systems, this difficulty is encountered in creating plausible building model hypotheses from raw edge segments extracted from aerial imagery. This work presents constraints on intermediate feature generation, based on vanishing point geometry derived from a photogrammetric camera model, that significantly reduce the search space. Qualitative and quantitative results are presented in the context of PIVOT, a fully automated monocular building extraction system.
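A vanishing-point constraint of this kind can be pictured as a simple pruning test: keep an edge segment only if its supporting line passes close to the vanishing point predicted from the camera model. The sketch below uses invented coordinates and thresholds, not PIVOT's actual parameters.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross(np.array([p[0], p[1], 1.0]), np.array([q[0], q[1], 1.0]))

def consistent_with_vp(seg, vp, tol_px=2.0):
    """Distance from the vanishing point to the segment's supporting line."""
    l = line_through(*seg)
    d = abs(l @ np.array([vp[0], vp[1], 1.0])) / np.hypot(l[0], l[1])
    return d < tol_px

vertical_vp = (512.0, -40000.0)   # hypothetical vanishing point of vertical edges
edges = [((500.0, 100.0), (499.7, 1100.0)),   # nearly aligned with the vertical VP
         ((100.0, 300.0), (400.0, 302.0))]    # roughly horizontal edge
kept = [e for e in edges if consistent_with_vp(e, vertical_vp)]
print(len(kept), "of", len(edges), "edges survive the constraint")  # 1 of 2
```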
Surface microstructure extraction from multiple aerial images
Author(s):
Xiaoguang Wang;
Allen R. Hanson;
Robert T. Collins;
Jeffrey M. DeHart
In this paper we present a system that recovers building facet images from multiple source images and, as a first step towards detailed analysis of microstructures, extracts windows from walls. The system employs a sophisticated multi-image texture mapping technique to eliminate the corrupting effects of shadows and occlusions and to find a 'best piece representation' of each facet. The system is model-driven, providing a context-based environment for microstructure analysis. The window extraction module focuses attention on wall facets, attempting to extract the 2-D window patterns attached to the walls using an oriented region growing technique. High-level knowledge is incorporated to simplify the computation of symbolic window extraction. The algorithms are typically useful in urban sites. Experiments show successful applications of this approach to site model refinement.
Improving reconstruction of man-made objects from sensor images by machine learning
Author(s):
Roman Englert;
Armin B. Cremers
In this paper we present a new approach for the acquisition and analysis of background knowledge used for the 3D reconstruction of man-made objects, in this case buildings. Buildings can easily be represented as parameterized graphs, from which p-subisomorphic graphs are computed. P-graphs are defined, and an upper-bound complexity estimate for the computation of p-subisomorphisms is given. In order to reduce the search space we discuss several pruning mechanisms. The background knowledge requires classification in order to obtain a probability distribution which serves as a priori knowledge for 3D building reconstruction. We therefore apply an alternative view of nearest-neighbor classification to the measured knowledge in order to learn, based on a complete seed and a noise model, a distribution of this knowledge. An application to an extensive scene consisting of 1846 building clusters, represented as p-graphs in order to estimate a probability distribution of corner nodes, demonstrates the effectiveness of our approach. An evaluation using information coding theory determines the information gain provided by the estimated distribution in comparison with no available a priori knowledge.
Extraction of 3D linear features from multiple images by LSB-snakes
Author(s):
Armin Gruen;
Haihong Li
In general, the snakes, or active contour models, feature extraction algorithm integrates both photometric and geometric constraints with an initial estimate of the location of the feature of interest through an integral measure referred to as the total energy of the snakes; the local minimum of this energy defines the feature of interest. To improve the stability and convergence of the snake solution, we propose a new implementation based on parametric B-spline approximation. Furthermore, the energies and solutions are formulated in a least squares context and extended to integrate multiple images in a fully 3-D mode. This novel concept of LSB-Snakes (least squares B-spline snakes) improves active contour models considerably through three new elements: (1) the exploitation of any a priori known geometric (e.g. splines for a smooth curve) and photometric information to constrain the solution, (2) the simultaneous use of any number of images through the integration of camera models, and (3) the possibility of internal quality control through computation of the covariance matrix of the estimated parameters. The mathematical model of LSB-Snakes is formulated as a combined least squares adjustment. The observation equations consist of the equations formulating the matching of a generic object model with image data, and those that express the geometric constraints and the location of operator-given seed points. By connecting image and object space through the camera models, any number of images can be accommodated simultaneously. Compared to the classical two-image approach, this multi-image mode allows us to control blunders, such as occlusions, which may appear in some of the images, very well. The issues related to the mathematical modeling of the proposed method are discussed and experimental results are shown.
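To make the B-spline part concrete, the sketch below fits cubic B-spline control points to noisy 2-D curve points in a least squares sense, with second differences of the control points acting as smoothness pseudo-observations. The full LSB-Snakes model additionally includes grey-level observation equations and couples any number of images through their camera models; this reduced version only illustrates the estimation principle.

```python
import numpy as np

def bspline_basis(t, knots, i, k):
    """Cox-de Boor recursion for the basis function N_{i,k}(t)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    d1 = knots[i + k] - knots[i]
    d2 = knots[i + k + 1] - knots[i + 1]
    a = 0.0 if d1 == 0 else (t - knots[i]) / d1 * bspline_basis(t, knots, i, k - 1)
    b = 0.0 if d2 == 0 else (knots[i + k + 1] - t) / d2 * bspline_basis(t, knots, i + 1, k - 1)
    return a + b

def fit_lsb(points, n_ctrl=6, degree=3, smooth=0.1):
    """Estimate B-spline control points from observed curve points."""
    m = len(points)
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_ctrl - degree + 1),
                            np.ones(degree)])
    params = np.linspace(0, 1 - 1e-9, m)
    B = np.array([[bspline_basis(t, knots, i, degree) for i in range(n_ctrl)]
                  for t in params])
    # smoothness pseudo-observations: second differences of control points ~ 0
    D = np.zeros((n_ctrl - 2, n_ctrl))
    for i in range(n_ctrl - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([B, smooth * D])
    rhs = np.vstack([np.asarray(points, float), np.zeros((n_ctrl - 2, 2))])
    ctrl, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return ctrl

ts = np.linspace(0, 2 * np.pi, 40)
noisy = np.column_stack([ts, np.sin(ts) + 0.05 * np.random.randn(ts.size)])
print(fit_lsb(noisy).round(2))
```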
Semantic objects and context for finding roads
Author(s):
Albert Baumgartner;
Carsten T. Steger;
Helmut Mayer;
Wolfgang Eckstein
This paper presents a multi-resolution approach for the automatic extraction of roads from digital aerial imagery. Roads are modeled as a network of intersections and links between the intersections. For different context regions, i.e., rural, forest, and urban areas, the model describes different relations between background objects, e.g., buildings or trees, and semantic road objects, e.g., road-parts, road-segments, road-links, and intersections. The classification of the image into context regions is done by texture analysis. The approach to detecting roads is based on the extraction of edges in a high-resolution image and the extraction of lines in an image of reduced resolution. Using both resolution levels and explicit knowledge about roads, hypotheses for roadsides are generated. The roadsides are used to construct quadrilaterals representing road-parts and polygons representing intersections. Neighboring road-parts are chained into road-segments. Road-links, i.e., the roads between two intersections, are built by grouping road-segments and closing gaps between road-segments. Road-links are constructed using knowledge about context.
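The chaining of road-parts into road-segments can be sketched as a greedy grouping on endpoint proximity and direction agreement, as in the toy example below; the representation and thresholds are illustrative only, not those of the paper.

```python
import numpy as np

def direction(part):
    """Unit direction vector of a road-part given as (start, end) points."""
    (x1, y1), (x2, y2) = part
    v = np.array([x2 - x1, y2 - y1], float)
    return v / np.linalg.norm(v)

def chainable(a, b, max_gap=5.0, max_angle_deg=20.0):
    """Road-parts a and b can be chained if their endpoints are close and
    their directions agree."""
    gap = np.linalg.norm(np.array(a[1], float) - np.array(b[0], float))
    cosang = abs(direction(a) @ direction(b))
    return gap < max_gap and cosang > np.cos(np.radians(max_angle_deg))

parts = [((0, 0), (10, 1)), ((12, 1), (22, 2)), ((50, 50), (60, 50))]
segments = [[parts[0]]]
for p in parts[1:]:
    if chainable(segments[-1][-1], p):
        segments[-1].append(p)      # extend the current road-segment
    else:
        segments.append([p])        # start a new road-segment
print(len(segments), "road-segments from", len(parts), "road-parts")
```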
Automatic road extraction from grey-level images based on object database
Author(s):
Ghislaine Bordes;
Gerard Giraudon;
Olivier Jamet
In this paper, an automatic road extraction system is described. The main originality of this system is the use of an existing database as a guide for road extraction in aerial images. The database used is a cartographic database. Its geometric accuracy is about 20 meters, whereas the image resolution is 0.5 m and the accuracy required for the result is one meter. The road extraction strategy is a top-down strategy in which the cartographic database is used to generate road hypotheses. Then, road extraction in the image is constrained by the road hypotheses. The different stages of the interpretation process are described. The results obtained by this system are presented and discussed.
3-D approach for semiautomatic extraction of man-made objects from large-scale aerial images
Author(s):
Amnon Krupnik;
Lea Topel;
Liora Sahar
Automatic identification and extraction of man-made objects from aerial images for cartographic purposes are currently the most challenging problems in photogrammetry. Automating the identification and extraction procedures can significantly improve the efficiency of photogrammetric tasks such as map compilation, DEM and orthophoto generation, and city modeling. This paper presents an approach for object extraction from large-scale aerial images. The approach comprises two key concepts: extraction is performed from the object viewpoint, and objects are extracted semiautomatically. Integration of both concepts into a conventional, well-established photogrammetric procedure complies with the primary requirement of the mapping community, which is to have an efficient procedure while maintaining the necessary accuracy and reliability. The paper presents the idea and the motivation, and discusses basic tools for extracting two types of common man-made objects: buildings and roads. Preliminary results are shown and discussed as well.
Interpretation of networks from satellite data: adequation between the imagery and cartographic applications
Author(s):
Jean-Paul Sempere
The planimetric layers of the IGN cartographic database, namely the transportation and hydrographic networks, have to be updated annually. Since this application needs a fast and reliable production process, the potential use of high-resolution satellites such as the forthcoming SPOT 5 had to be evaluated. This study used three test sites in France representing three different types of landscape. The networks were extracted using on-screen monoscopic viewing of CNES 5 m panchromatic and 10 m multispectral simulations and compared with the corresponding networks of the database; topographic maps at a scale of 1:25,000 were used as a complementary reference for objects interpreted on the simulations but not included in the database. The results showed a clear lack of reliability of the SPOT 5 simulations, whether panchromatic or multispectral, for the interpretation of hydrography. Some promising results were obtained on the road network, which have to be confirmed with an operational study on larger test sites. Satellite imagery is in any case likely to remain a complementary asset for map updating in the near future.
Map feature examination of RADARSAT for geospatial utility and imagery enhancement opportunities
Author(s):
Gary A. Duncan;
William H. Heidbreder;
James Hammack;
Casimir Szpak
This paper discusses multisensor tests conducted to examine the geospatial information potential of RADARSAT imagery. The focus of the tests is to develop a metric for determining which map features present in optical or radar imagery could benefit from the use of multisensor enhancement techniques. Visual and geospatial differences between optical and radar imagery will be studied. Data collection and analysis will be based on a product-source prediction capability (PSPC) and other related modeling and analysis tools.
Preliminary results on the analysis of HYDICE data for information fusion in cartographic feature extraction
Author(s):
Stephen J. Ford;
Dirk Kalp;
J. Chris McGlone;
David M. McKeown Jr.
This paper discusses ongoing research in the analysis of airborne hyperspectral imagery with application to cartographic feature extraction and surface material attribution. Preliminary results, based upon the processing and analysis of hyperspectral data acquired by the Naval Research Laboratory's (NRL) Hyperspectral Digital Imagery Collection Experiment (HYDICE) over Fort Hood, Texas in late 1995, are shown. Significant research issues in geopositioning, multisensor registration, spectral analysis, and surface material classification are discussed. The research goal is to measure the utility of hyperspectral imagery acquired with high spatial resolution (2 meter GSD) to support automated cartographic feature extraction. Our hypothesis is that the addition of a hyperspectral dataset, with spatial resolution comparable to panchromatic mapping imagery, enables opportunities to exploit the inherent spectral information of the hyperspectral imagery to aid in urban scene analysis for cartographic feature extraction and spatial database population. Test areas selected from the Fort Hood dataset will illustrate the process flow and serve to show current research results.
Segmentation design for an automatic multisource registration
Author(s):
Renaud Ruskone;
Ian J. Dowman
This paper describes the early phases of an automatic multisource registration method. The method is part of a more general project that aims to register several data sources and detect changes between them. We describe more specifically the requirements in terms of input data, the segmentation method used, and the preprocessing needed to adapt the feature extraction to different data sources. The segmentation is a classic region-growing method that relies on a threshold taking into account the radiometric properties of each region. After the coarse definition of the segments, an iterative process merges them according to similarity measures.
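A classic region-growing segmentation of the kind referred to above can be sketched as follows; the radiometric similarity test is simplified here and the subsequent merging stage is omitted, so this is only an illustration of the principle, not the actual method.

```python
import numpy as np

def region_grow(img, thresh=10.0):
    """Label pixels by flood fill: a pixel joins a region if its grey value
    lies within `thresh` of the region's running mean."""
    labels = -np.ones(img.shape, int)
    next_label = 0
    for seed in zip(*np.where(labels < 0)):   # visit every pixel once
        if labels[seed] >= 0:
            continue
        stack, total, count = [seed], 0.0, 0
        labels[seed] = next_label
        while stack:
            y, x = stack.pop()
            total += img[y, x]
            count += 1
            mean = total / count
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and labels[ny, nx] < 0
                        and abs(img[ny, nx] - mean) < thresh):
                    labels[ny, nx] = next_label
                    stack.append((ny, nx))
        next_label += 1
    return labels

img = np.array([[10, 11, 50, 52],
                [12, 11, 51, 53],
                [10, 12, 49, 50]], float)
print(region_grow(img))   # two regions: the dark left block and the bright right block
```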
Object model construction by invariance and photogrammetry
Author(s):
Hazem F. Barakat;
Edward M. Mikhail
Several recent invariance techniques, such as trilinearity (the trifocal tensor), the cross-ratio of planes, the BC-invariant, and factorization of the fundamental matrix, have been extensively analyzed. Significant characteristics which distinguish them from equivalent photogrammetric techniques have been determined and assessed. Test results from simulated and real data, particularly related to the construction of imaged objects, are presented and compared to results obtained from photogrammetry. Conclusions are drawn, particularly with respect to the relative performance of the various methods, and recommendations are made for continuing research.
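As a small worked example of the kind of projective invariant involved, the script below checks numerically that the cross-ratio of four collinear points is unchanged by an arbitrary homography (a one-dimensional analogue of the plane-based cross-ratio named above). The numbers are arbitrary and unrelated to the paper's test data.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1-D coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

H = np.array([[2.0,   0.3,   5.0],
              [0.1,   1.5,  -2.0],
              [0.001, 0.002, 1.0]])      # arbitrary homography

pts = [1.0, 2.0, 4.0, 7.0]               # positions along the line y = 0
mapped = []
for x in pts:
    p = H @ np.array([x, 0.0, 1.0])
    mapped.append(p[:2] / p[2])           # mapped points are again collinear

# parameterize the mapped points by distance from the first one
t = [np.linalg.norm(m - mapped[0]) for m in mapped]
print(cross_ratio(*pts), cross_ratio(*t))   # both values are 1.25
```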
Model-to-SAR image registration
Author(s):
Richard W. Ely;
Joseph A. Di Girolamo
Model-based image understanding applications such as model-supported exploitation require accurate model-to-image registration. Manual photogrammetric control of imagery is currently a time-consuming process. For areas which are exploited repeatedly, a database of control features can be built and automatically registered to new images of the scene. To minimize the effort involved in creating the registration control feature database, it is highly desirable to use the same features on images of all sensor types (EO, IR, and SAR). This paper discusses the registration of features constructed on EO imagery to SAR imagery.
Processes to support error estimation for model supported positioning
Author(s):
Richard W. Ely;
James C. Lundgren;
Walter J. Mueller
The model-supported positioning project was initiated to develop an automatic process for the control of imagery. Imagery is controlled using a rigorous photogrammetric process to adjust the image support data. Because the process of locating the control in the image and adjusting the image support data is fully automatic, processes must be put in place to ensure an adequate distribution of control in the image, as well as a capability to check the accuracy of the resultant controlled imagery. Additionally, since the control is located in the image using an automatic vector-to-image matching technique, a process must be put in place to assess the accuracy of the measured point in the image. This paper addresses the approaches taken to incorporate these processes within the model-supported positioning prototype.
Triangulated irregular network (TIN) representation quality as a function of source data resolution and polygon budget constraints
Author(s):
Robert F. Richbourg;
Tim Stone
High-resolution digital elevation models (DEMs) are becoming increasingly available for use as source data in the process of creating synthetic environments to support simulation systems. Several data sets that provide elevation points at 1 meter intervals on the earth's surface are now available. However, multiple transformations must often be applied to the raw source data before it is suitable for use by any simulation system. These transformations have an impact on the fidelity of the final synthetic (simulation) environment that is difficult to quantify. Furthermore, only intuition currently supports the claim that higher-resolution source data necessarily results in higher-fidelity simulation data as a product of the transformation process. This paper documents an attempt to measure fidelity differences in final simulation synthetic environments that can be directly attributed to the resolution of the source data. Specifically, several lower-resolution DEMs are generated from a single high-resolution (1 meter horizontal spacing) source DEM, and all are used as source data for TIN construction. Automated planning software is applied to each and used as a metric to measure TIN quality.
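The kind of experiment described can be pictured with the following sketch, which derives coarser grids from a synthetic 1 m DEM and reports the vertical RMS error reintroduced at the original posting. This is purely illustrative; the paper's fidelity measure is instead based on TIN construction and automated planning software.

```python
import numpy as np

def subsample(dem, factor):
    """Derive a coarser DEM by keeping every `factor`-th post."""
    return dem[::factor, ::factor]

def upsample_nearest(dem, factor, shape):
    """Bring a coarse DEM back to the original posting by nearest-neighbor repetition."""
    out = np.repeat(np.repeat(dem, factor, axis=0), factor, axis=1)
    return out[:shape[0], :shape[1]]

def rms_error(dem_hi, factor):
    """Vertical RMS difference between the source DEM and its coarse approximation."""
    approx = upsample_nearest(subsample(dem_hi, factor), factor, dem_hi.shape)
    return float(np.sqrt(np.mean((dem_hi - approx) ** 2)))

# synthetic 1 m grid with a rolling surface
y, x = np.mgrid[0:256, 0:256]
dem = 50.0 + 10.0 * np.sin(x / 20.0) * np.cos(y / 25.0)
for f in (2, 4, 8, 16):
    print(f"{f} m posting: RMS error {rms_error(dem, f):.2f} m")
```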