Aviation Applications
Enhanced vision systems: results of simulation and operational tests
Today's aircrews have to handle increasingly complex situations. The most critical tasks in civil aviation are landing approaches and taxiing, and under bad weather conditions in particular the crew faces a tremendous workload. DLR's Institute of Flight Guidance has therefore developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. Some elements of this concept have been presented in previous contributions, such as 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' (Doehler and Bollmeyer, 1996). The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated that combines different levels of information, such as terrain model data, processed sensor images, aircraft state vectors and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter wave radar. This sophisticated HiVision radar is at present one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution concludes with a short video presentation.
Passive millimeter-wave video camera for aviation applications
Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take-off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.
The AWARD program
Frank Leppert
The All Weather ARrival and Departure (AWARD) program is supported by the European Commission under the Brite-EuRam III structure. Following the VERSATILE preparatory program, it started in June 1996 and is planned to finish at the end of 1999. The program consortium consists of ten partners, including a major airline, aircraft and equipment manufacturers, research and test centers, and a university. Contractors from France, Germany, Great Britain, Italy and The Netherlands are coordinated by Sextant Avionique. AWARD's main objective is to demonstrate the efficiency of vision systems under adverse weather conditions. In order to evaluate the added benefits of these concepts within the approach, landing, taxi and takeoff phases of aircraft operations, two applications are developed: (1) an Enhanced Vision System (EVS) based on Head Up Display enhancement with Forward Looking Infrared (FLIR) and Millimeter Wave Radar (MMWR) images; (2) a Synthetic Vision System (SVS) displaying overlaid symbology on a perspective presentation of the environment, through the combination of a database and accurate positioning systems. The evaluation of these two test systems will focus on: (1) performance and human acceptability aspects, assessed according to human factors criteria as well as integration within realistic environments, using the NLR Research Flight Simulator and the DLR ATTAS flight test aircraft; (2) reliability and integrity aspects, through a theoretical certification/system study which will propose guidelines for certification and address the impact on the system architecture. The paper presents the work structure of AWARD in order to show the key points addressed in this program.
Controlled digital elevation data decimation for flight applications
In future aircraft cockpit designs SVS databases will be used to display 3D physical and virtual information to pilots. One of the key elements is reliable display of terrain elevation data in order to increase situation awareness. Displayed data and the outside world must match in position and altitude. Therefore, terrain elevation data with specified error values are required to guarantee reliability and integrity. The determination of the database error involves two major steps. The first step applies to the generation of the primary database. Today several companies are starting to provide elevation models generated by different methods from independent sources. If not stated, the error of these databases has to be calculated and verified by the use of reference data and statistical methods. Some commonly used models (like DTED) were investigated, their errors estimated, and compared. The second step is required for the preparation of data to be used in an SVS. For most of today's graphics machines the amount of data is too large to be drawn at an acceptable frame rate. Therefore, polygonal decimation is used to reduce the number of triangles to be rendered. Most decimation algorithms were developed for the visual quality of the decimated terrain. Their parameters do not allow an error-bounded decimation because they are based on criteria like 'face angle' or 'bounding box size.' For an SVS, however, it is necessary to know the absolute error introduced by the decimation. An algorithm was developed to eliminate vertices only if the newly introduced error is smaller than a given threshold. In addition, this algorithm tries to preserve important features such as ridgelines. Knowing the maximal altitude error of a certain position and the error introduced by decimation, it is possible to derive a worst-case elevation error for that point.
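The error-bounded vertex elimination can be illustrated compactly. Below is a minimal Python sketch of the idea, assuming a regular-grid DEM: a vertex is dropped only when the interpolation error it would introduce stays below a threshold and the vertex does not sit on a ridge-like feature. The function name, the averaging scheme and the ridge criterion are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def decimate_grid(dem, max_err, ridge_thresh=5.0):
    """Error-bounded decimation sketch for a regular-grid DEM (metres).

    A vertex is marked removable only if replacing it by the average of
    its 4-neighbours changes the surface by less than max_err and it is
    not a ridge-like point. Thresholds are illustrative assumptions.
    """
    rows, cols = dem.shape
    keep = np.ones_like(dem, dtype=bool)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            interp = 0.25 * (dem[r - 1, c] + dem[r + 1, c] +
                             dem[r, c - 1] + dem[r, c + 1])
            err = abs(dem[r, c] - interp)
            # Ridge proxy: strong deviation from the straight line
            # through opposite neighbours along either grid axis.
            ridge = max(abs(2 * dem[r, c] - dem[r - 1, c] - dem[r + 1, c]),
                        abs(2 * dem[r, c] - dem[r, c - 1] - dem[r, c + 1]))
            if err < max_err and ridge < ridge_thresh:
                keep[r, c] = False  # safe to eliminate this vertex
    return keep
```

With a known max_err, the worst-case elevation error of a decimated point is bounded by the primary database error plus max_err, which is the property the abstract requires.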
Second decision in the EVS concept: an experiment to evaluate pilot ability
Bruno Aymeric,
Alain Leger,
Thierry Kostoj
The scope of this research is the use of an infrared sensor image, projected on a HUD, to land an aircraft in 200 m RVR (CAT III) when the airfield is equipped only for CAT I. The corresponding operational scenario requires that the pilot perform a second decision on direct visual cues at 50 ft (CAT III DH). This second decision is the core of the concept and appears to be one of the most acute problems facing the EVS concept. As a starting point, we conducted an experiment to test the ability of pilots to make a correct decision in abnormal situations, using the SXT part-task simulator. Results show overall correct behavior of the pilots despite a workload much higher than it would be in real operations. Their comments during and after each trial demonstrate a correct awareness of their situation with respect to the real runway at 50 ft (direct visual cues). However, a few instances of incorrect decisions occurred and are discussed. The conclusion is that it seems possible to propose a two-decision procedure, but further experiments are required. Lessons learned for setting up these experiments are presented.
Design of a perspective flight guidance display for a synthetic vision system
Martin Gross,
Udo Mayer,
Rainer Kaufhold
Adverse weather conditions affect flight safety as well as the productivity of the air traffic industry. The problem becomes evident in the airport area (taxiing, takeoff, approach and landing). Productivity goes down because the resources of the airport cannot be used optimally, and canceled and delayed flights lead directly to additional costs for the airlines. Against the background of problems expected to worsen with predicted increases in air traffic, the European Union launched the project AWARD (All Weather ARrival and Departure) in June 1996. Eleven European aerospace companies and research institutions are participating, and the project will be finished by the end of 1999. The subject of AWARD is the development of a Synthetic Vision System (based on databases and navigation) and an Enhanced Vision System (based on sensors such as FLIR and MMWR). Darmstadt University of Technology is responsible for the development of the SVS prototype. The SVS application depends on precise navigation, databases of terrain and flight-relevant information, and a flight guidance display. The objective is to allow landings under CAT III a/b conditions independently of CAT III ILS airport installations. One goal of the SVS is to enhance the situation awareness of pilots during all airport area operations by designing an appropriate man-machine interface for the display. This paper describes the current state of the research and development of the Synthetic Vision System being developed in AWARD, the methodology used to identify the information that should be displayed, the human factors which influenced the basic design of the SVS, and some of the planned activities for the flight simulation tests.
Performance assessment of various imaging sensors in fog
All systems operating in the visible and infrared bands of the spectrum are subject to severe performance degradation when used in adverse weather conditions like fog, snow or rain. This is particularly true for active systems such as rangefinders, laser designators, lidars and active imaging sensors, where the laser beam suffers attenuation, turbulence and scattering from the aerosols present in the atmospheric path. This paper presents the active imaging performance of ALBEDOS in fog, determined by observing reference targets through a 22-m controlled-environment chamber in which fogs of various densities and droplet sizes were generated in a calibrated manner. ALBEDOS is an acronym for Airborne Laser-Based Enhanced Detection and Observation System; it is based on a compact, powerful laser diode illuminator and a range-gated intensified CCD camera, and is capable of detecting and identifying people or objects in complete darkness and, to some extent, in adverse weather conditions. In this paper, we compare the efficiency of the range-gated active imager in fog with that of a far-infrared thermal imager and of a low-light-level camera operating in continuous mode.
Evolutionary approach to introduce 3D into the cockpit
Perspective flightpath displays and the depiction of 3-D terrain are regarded as a potential means to increase safety. Although the technology to generate such presentations in real time is available, other issues must still be resolved before they can be safely introduced. This paper focuses on some of the major obstacles which are still present. It discusses several objections to perspective flightpath displays and shows why most of them are no longer justified. The potential for an increase in safety is related to navigation, guidance, and control task requirements, and potential implementations of varying complexity to satisfy these requirements are discussed. This classification allows a gradual transition from today's 2-D symbolic displays to future spatial displays. The paper proposes an approach which supports an evolutionary introduction of 3-D navigation displays into the cockpit.
PMMW/DGPS/GPS integrated situation awareness system
Integrating a Passive Millimeter Wave (PMMW) camera, the Global Positioning System (GPS), and the Differential Global Positioning System (DGPS) provides a pilot with a visual precision approach and landing in inclement weather conditions, conceivably down to CAT III conditions. A DARPA-funded, NASA Langley-managed Technology Reinvestment Program (TRP) consortium consisting of Honeywell, TRW, Boeing, and Composite Optics Corporations is demonstrating the PMMW camera. The TRW-developed PMMW camera displays the runway through fog, smoke, and clouds in day or night conditions. The Global Air Traffic Program Office entered into a Cooperative Research and Development Agreement (CRDA) with Honeywell to demonstrate DGPS. The Honeywell-developed DGPS provides precision navigational data to within 1 m of error, where GPS has 100 m of error. In inclement weather the runway approach is initiated using GPS data until a range where DGPS data can be received. The runway is presented to the pilot as the PMMW image viewed via a Head-Up Display (HUD) or Head-Mounted Display (HMD). At a range where DGPS data is available, precise runway and horizon symbology is computed in the Flight Display Computer and overlaid on the PMMW image. Image processing algorithms operate on the PMMW image to identify and highlight obstacles on the runway. The integrated system provides the pilot with enhanced situation awareness of the runway approach in inclement weather. When a DGPS ground station is not available at the landing area, image processing algorithms (again operating on the PMMW image) generate the runway and horizon symbology; GPS provides the algorithm with initial conditions for runway location and perspective, and the algorithm then locates and highlights the runway and any obstacles on it. Honeywell Technology Center is performing research on integrating the PMMW, DGPS, and GPS technologies to provide the pilot with the most necessary features of each system, namely visibility, accuracy, obstacle detection, runway overlay, horizon symbology and availability.
Development of a 3D stereoscopic flight guidance display
Matthias Hammer,
Stephan K. M. Muecke,
Udo Mayer
As part of an interdisciplinary research project sponsored by the German Research Foundation (DFG), the Darmstadt University of Technology is investigating the potential offered by stereoscopic flight guidance displays for improving pilot situation awareness. The research aims to formulate ergonomic design recommendations for this type of display. Recent developments in display technology offer new opportunities to improve human-machine interfaces in the cockpit. The utilization of three-dimensional display symbology, as a depiction of three-dimensional sensor or database information, has become accepted practice in modern enhanced and synthetic vision systems. Nevertheless, the information is depicted on a conventional two-dimensional screen, which can cause problems and errors during the cognitive process of depth perception. The application of stereoscopic technology can add a depth cue which is intuitively grasped by the observer. The project concentrates on stereoscopic perspective flight guidance displays as a head-down display; the extension to other display types, such as navigation or head-up displays, is possible. Because of the complexity of a modern synthetic vision display, the project contains experiments at different levels of abstraction, ranging from classic parameter experiments to flight simulator tests. The stereoscopic layout takes into consideration specific informational needs within different flight phases and is evaluated by means of pilot performance and pilot strain.
Automotive Applications
Real-time fusion of low-light CCD and uncooled IR imagery for color night vision
We present an approach to color night vision through fusion of information derived from visible and thermal infrared sensors. Building on work reported at SPIE in 1996 and 1997, we show how opponent-color processing and center-surround shunting neural networks can achieve informative multi-band image fusion. In particular, by emulating spatial and color processing in the retina, we demonstrate an effective strategy for multi-sensor color night vision. We have developed a real-time visible/IR fusion processor from multiple C80 DSP chips using commercially available Matrox Genesis boards, which we use in conjunction with the Lincoln Lab low-light CCD and a Raytheon TI Systems uncooled IR camera. Limited human factors testing of visible/IR fusion is presented, showing improvements in human performance using our color-fused imagery relative to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrast to human opponent-color processing will yield fused image products of high image quality and utility.
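The center-surround shunting operation at the heart of such architectures can be sketched briefly. The Python fragment below, a rough sketch only, applies a contrast normalization of the shunting form (C - S)/(A + C + S) to each band and maps single-band and opponent contrasts onto RGB channels; the constants, the Gaussian surround and the channel assignment are assumptions, not the authors' tuned design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt(band, surround_sigma=4.0, a=0.5):
    """Center-surround shunting operator: (C - S) / (a + C + S).

    `band` is a float image scaled to [0, 1]; the surround S is a
    Gaussian-blurred copy of the input. Sigma and `a` are assumptions.
    """
    surround = gaussian_filter(band, surround_sigma)
    return (band - surround) / (a + band + surround)

def fuse_visible_ir(visible, ir):
    """Map enhanced and opponent contrasts onto RGB (assumed mapping)."""
    vis_c = shunt(visible)        # enhanced visible contrast
    ir_c = shunt(ir)              # enhanced thermal contrast
    opp = shunt(visible - ir)     # visible-vs-IR opponent contrast
    rgb = np.stack([ir_c, vis_c, opp], axis=-1)
    rgb -= rgb.min()
    return rgb / (rgb.max() + 1e-9)   # normalized false-color image
```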
Computer vision for driver assistance systems
Uwe Handmann,
Thomas Kalinke,
Christos Tzomakas,
et al.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach combines sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main benefit of this approach is the integrative coupling of different algorithms providing partly redundant information.
Driver performance-based assessment of thermal display degradation effects
The Driver's Vision Enhancer (DVE) is a thermal sensor and display combination currently being procured for use in U.S. Army combat and tactical wheeled vehicles. During the DVE production process, a given number of sensor or display pixels may either vary from the desired luminance values (nonuniform) or be inactive (nonresponsive). The amount and distribution of pixel luminance nonuniformity (NU) and nonresponsivity (NR) allowable in production DVEs is a significant cost factor. No driver performance-based criteria exist for determining the maximum amount of allowable NU and NR. For safety reasons, these characteristics are specified conservatively. This paper describes an experiment to assess the effects of different levels of display NU and NR on Army drivers' ability to identify scene features and obstacles using a simulated DVE display and videotaped driving scenarios. Baseline, NU, and NR display conditions were simulated using real-time image processing techniques and a computer graphics workstation. The results indicate that there is a small, but statistically insignificant decrease in identification performance with the NU conditions tested. The pattern of the performance-based results is consistent with drivers' subjective assessments of display adequacy. The implications of the results for specifying NU and NR criteria for the DVE display are discussed.
Model-based car tracking through the integration of search and estimation
Hichem Sahli,
Mark J. W. Mertens,
Jan P.H. Cornelis
In this work we address the problem of detecting and tracking moving vehicles in image sequences of highway scenes recorded by a moving camera. The proposed method uses a simple parameterized vehicle shape (object model) and a vehicle motion model for intra-frame matching and recursive estimation of the position, orientation, velocity and shape parameters of the tracked vehicle. In our approach the vehicle detection/identification (matching) is tackled in an optimization framework, implemented as a hypothesis generation and testing method, in which the current set of hypotheses (vehicle shapes) is evaluated and modified until a maximum-cost instantiation is found. The estimation of the vehicle position, orientation and velocity is based on a ground plane motion constraint and a Kalman filtering approach. Results on real-world highway scenes are presented and open problems are discussed.
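The recursive estimation step can be illustrated with a standard Kalman filter over a ground-plane state. The Python sketch below tracks position and velocity under a constant-velocity model; the state layout, time step and noise covariances are illustrative assumptions, and the paper's filter additionally estimates orientation and shape parameters.

```python
import numpy as np

class VehicleTracker:
    """Constant-velocity Kalman filter sketch for ground-plane tracking.

    State x = [px, py, vx, vy]; measurements z are image-derived
    ground-plane positions. All matrices here are assumptions.
    """
    def __init__(self, dt=0.04):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt   # position += velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0  # we observe position only
        self.Q = np.eye(4) * 0.05          # process noise
        self.R = np.eye(2) * 0.5           # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                  # predicted position

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

The prediction also supplies a natural search region for the next frame's hypothesis generation, which is one way the search and estimation stages can be integrated.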
Employing range imagery for vision-based driver assistance
Karin Sobottka,
Horst Bunke
Most research in vision-based driver assistance has utilized graylevel or color image sequences. Since the spatial arrangement of scene objects is often more relevant than the reflected brightness information, there has recently been increasing interest in range sensors for collision avoidance systems. In our approach to obstacle detection and tracking, obstacles are defined as non-traversable objects, so obstacle detection is done by checking the traversability of the environment in the sensor's field of view. Once an obstacle is detected, it is tracked along the time axis. Robust long-term tracking is performed by analyzing the spatial arrangement of obstacles. Our tracking scheme handles problems such as occlusion and the appearance or disappearance of scene objects. To be robust against segmentation errors and poor reflection properties of scene objects, splitting of obstacles is taken into account. Our approach was tested on 11 range image sequences consisting of 447 frames. Different scenarios such as driving along a curve, oncoming traffic, high relative velocity between vehicles, and heavy traffic were investigated.
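Once the range data are binned into a ground-plane grid, the traversability test reduces to simple comparisons. The following Python sketch flags a cell as an obstacle when the height step or slope to a neighbouring cell exceeds assumed vehicle limits; the grid construction and all thresholds are assumptions, not the paper's actual criteria.

```python
import numpy as np

def obstacle_cells(height_grid, cell_size=0.25, max_step=0.15, max_slope=0.6):
    """Mark non-traversable cells of a ground-height grid (sketch).

    A cell is an obstacle if the height difference to any 4-neighbour
    exceeds max_step metres, or the implied slope exceeds max_slope.
    Border cells wrap via np.roll; ignore the outer ring in practice.
    """
    h = np.asarray(height_grid, dtype=float)
    obstacle = np.zeros(h.shape, dtype=bool)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        step = np.abs(h - np.roll(h, (dr, dc), axis=(0, 1)))
        obstacle |= (step > max_step) | (step / cell_size > max_slope)
    return obstacle
```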
Tracking lane and pavement edges using deformable templates
Experiments with the LOIS (Likelihood Of Image Shape) Lane detector have demonstrated that the use of a deformable template approach allows robust detection of lane boundaries in visual images. The same algorithm has been applied to detect pavement edges in millimeter wave radar images. In addition to ground vehicle applications involving lane sensing, the algorithm is applicable to airplane applications for tracking runways in either visual or radar data. Previous work on LOIS has focused on the problem of detecting lane edges in individual frames. This paper describes extensions to the LOIS algorithm which allow it to smoothly track lane edges through maneuvers such as lane changes.
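A deformable-template tracker of this kind can be sketched in a few lines. The Python fragment below scores a parabolic lane-edge template against an edge-magnitude image and, for tracking, searches only a small parameter box around the previous frame's best fit; the image-space parabola and the search spans are simplifying assumptions and stand in for the actual LOIS likelihood.

```python
import numpy as np
from itertools import product

def lane_score(edge_img, a, b, c):
    """Score the template col = a*r**2 + b*r + c over the lower image
    half (the road region); higher total edge magnitude is better."""
    rows = np.arange(edge_img.shape[0] // 2, edge_img.shape[0])
    cols = (a * rows ** 2 + b * rows + c).astype(int)
    ok = (cols >= 0) & (cols < edge_img.shape[1])
    return edge_img[rows[ok], cols[ok]].sum()

def track_lane(edge_img, prev, spans=(1e-5, 0.05, 8.0), steps=5):
    """Frame-to-frame tracking sketch: search a small box of template
    parameters centred on the previous best fit, so smooth manoeuvres
    such as lane changes are followed without a global search."""
    best, best_score = prev, -1.0
    grids = [np.linspace(p - s, p + s, steps) for p, s in zip(prev, spans)]
    for cand in product(*grids):
        score = lane_score(edge_img, *cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```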
Additional Papers
Pictogram road signs detection and understanding in outdoor scenes
Salvatore Vitabile,
Filippo Sorbello
Automotive Applications
Sensor-guided parking system for a carlike robot
Kaichum Jiang,
L. D. Seneviratne
This paper presents an automated parking strategy for a car-like mobile robot. The study considers general parking maneuver cases for a rectangular robot, including parallel parking. The robot is constructed to simulate a conventional car, which is subject to non-holonomic constraints and thus has only two degrees of freedom. The parking space is considered rectangular and is detected by ultrasonic sensors mounted on the robot. A motion planning algorithm develops a collision-free path for parking, taking into account the non-holonomic constraints acting on the car-like robot. Research into general car maneuvers has been conducted and useful results have been achieved. The motion planning algorithm uses these results, combined with the configuration space method, to produce a collision-free path for parallel parking, depending on the parking space detected. A control program in the form of a graphical user interface has been developed so that users can operate the system with ease. The strategy is implemented on a modified B12 mobile robot and has the potential for application in automobiles.
Driving Miss Bradley: performance measurement to support thermal driving
Dino Piccione,
Donald A. Ferrett
The Driver's Vision Enhancer (DVE) program is providing a system to enlarge the driving envelope for the community of military wheeled and tracked vehicles. The DVE, an IR device, provides the driver with images of the forward scene under night and adverse day conditions. During the DVE development program, several questions emerged requiring performance-based data to resolve. A comprehensive program to provide the Project Manager, Night Vision/Reconnaissance, Surveillance and Target Acquisition with driver performance data that will aid in the decision-making process is described in this paper. The program involves several linked efforts including: the relative merits of the DVE and night vision goggles (NVG); drivers' ability to detect the presence of drop-offs when using the DVE and NVG; the effect on performance of various levels of nonuniformity and nonresponsiveness in the display/sensor system; the analysis of drivers' vision using an eye-tracker in a vehicle; and the evaluation of candidate symbology to enhance the DVE's utility in the M2 Bradley. The data collected will aid in making decisions on how to write a system specification to reduce cost without sacrificing driver performance, gain an understanding of how drivers use the DVE in operational settings, and determine where training is needed to enhance safety and reduce risk on the battlefield.
Combined binocular and monocular stereo-vision system for unstructured terrain navigation
The purpose of the system described in this paper is to equip an off-road vehicle with a robust and reliable passive ranging system capable of providing information about the local terrain. This information should be sufficient to allow a separate navigation system to make appropriate path-planning decisions for autonomous travel between specified way-points.
Experience of the ARGO autonomous vehicle
Massimo Bertozzi,
Alberto Broggi,
Gianni Conte,
et al.
This paper presents and discusses the first results obtained with the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image yields the road geometry in front of the vehicle. The generality of the underlying approach makes it possible to detect generic obstacles (without constraints on shape, color, or symmetry) and to detect lane markings even in dark and strong-shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire 3 b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and an LED-based control panel.
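The obstacle-detection cue can be illustrated with inverse perspective mapping, the technique underlying GOLD: once both stereo images are remapped onto the road plane, anything rising above that plane disagrees between the two remapped views. A minimal Python sketch, assuming the remapping has been computed upstream:

```python
import numpy as np

def ipm_obstacle_mask(left_remap, right_remap, thresh=25):
    """Flag pixels where two road-plane remappings disagree (sketch).

    left_remap / right_remap: uint8 inverse-perspective-mapped views of
    the same road area. Road-plane texture matches in both; anything
    above the plane is displaced differently in each view, so a large
    absolute difference is an obstacle cue. The threshold is an
    assumption, not GOLD's tuned value.
    """
    diff = np.abs(left_remap.astype(np.int16) - right_remap.astype(np.int16))
    return diff > thresh
```

The disagreement regions would then be grouped to localize each obstacle on the road.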
Algorithmic solution for autonomous vision-based off-road navigation
A vision-based navigation system is a basic tool for autonomous operation of unmanned vehicles. For off-road navigation this means that the vehicle, equipped with a stereo vision system and perhaps a laser ranging device, must be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with autonomous rovers. For example, in the LEDA Moon exploration project currently under study by the European Space Agency (ESA), the vehicle (rover) should perform the following operations in autonomous mode: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk map generation, local path planning, camera position update during motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high-precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to give navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction lead to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.
Additional Papers
Unsupervised color vision system for driving unmanned vehicles
Konstantine I. Kiy
In this paper, an unsupervised color vision system aimed at driving vehicles in real time on roads in a fast-changing, unknown real-world environment is described. The algorithms of the image processing unit can deal even with low-saturation images and images with poor contrast. They are aimed not merely at road finding, but at complete image classification and analysis and its qualitative and conceptual description as well. The algorithms also exploit a new concept of image texture and implement a new fast method of texture description for use in real-time image segmentation. A subsystem of logical inference, based on the proposed image description and applied to the qualitative analysis of road scenes, is also presented. This subsystem is able to search in real time for objects matching a qualitative description. Examples that illustrate the operation of the system are presented. The core part of the system, which does not take into account the particular properties of the objects in the scene, can be used for a wide variety of problems (for instance, traffic control, automated landing, and so on).
Robotics Applications
Obstacle detection by segment grouping in mobile robot navigation
This paper describes a technique for detecting unknown obstacles using a stereo pair of TV cameras, for mobile robot navigation purposes. Three-dimensional information is recovered by matching segments. Moreover, a feature grouping technique is used to produce a coarse obstacle reconstruction that is nevertheless sufficient for detecting the free (obstacle-free) space in the environment. The advantage of using such a reconstructed obstacle map is twofold: higher resolution than maps obtained with active sensors such as ultrasonics, and detection of obstacles at greater distances than active sensors allow. Results on experimental stereo images, acquired in our laboratory, are presented in order to illustrate the reliability of the technique.
Visual feedback enhancement for telerobotics applications
Ryad Chellali
Visual feedback is one of the main sensory channels in telepresence, teleoperation and telerobotics applications. Most of the time this channel is passive, i.e., the video image of the remote scene is displayed as is or sometimes 'augmented.' This leads to some limitations; in particular, the operator must pay constant attention to interpret the whole image and extract the relevant information in order to react. The critical resource here (the operator's attention) is used poorly, and the corresponding time could instead be spent improving the tele-tasks. In this paper we present a new approach to help the operator detect remote events and relieve him of some low-level information processing. In addition, this approach enables the use of a small-bandwidth communication channel (the equivalent problem in frequency space). The developed method is based on image motion analysis. This information makes it possible to extract unexpected events and display this reduced volume of information to the operator. Results of this approach are presented and discussed.
Calibration and guidance of agents based on tubes
Wiek A. Vervoort
In order to correct the position and orientation of autonomous mobile robots in buildings, TL (fluorescent) tubes can be used as landmarks. Low-budget, properly calibrated cameras looking up towards the ceiling can be used to detect the position and orientation of the tubes and to correct the dead-reckoned robot position according to that measurement. The camera calibration and image recognition algorithms work for tubes that may be on or off, and under different lighting conditions outside the building. The algorithms are described and are shown to be extremely simple. Experiments show that by using this method the positioning of the robots can be kept accurate, especially in orientation. The expected and measured errors are explained theoretically.
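As an illustration of how simple such a correction can be, the Python sketch below estimates a tube's image-plane angle by principal-component analysis of bright pixels and blends the implied heading with odometry. The PCA step, the blending gain and the angle convention are assumptions rather than the authors' algorithm.

```python
import numpy as np

def tube_angle(img, rel_bright=0.8):
    """Image-plane angle of a bright ceiling tube via PCA (sketch).

    Pixels above rel_bright * max are treated as tube pixels; the
    principal axis of their scatter gives the tube direction.
    """
    ys, xs = np.nonzero(img > rel_bright * img.max())
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals, evecs = np.linalg.eigh(pts @ pts.T)
    major = evecs[:, np.argmax(evals)]          # tube axis in the image
    return np.arctan2(major[1], major[0])

def corrected_heading(odometry_heading, tube_world_angle, measured_angle, k=0.8):
    """Blend the dead-reckoned heading with the tube-derived heading.

    tube_world_angle is the known orientation of the observed tube in
    the building frame; k is an assumed correction gain in [0, 1].
    """
    innovation = (tube_world_angle - measured_angle) - odometry_heading
    innovation = (innovation + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return odometry_heading + k * innovation
```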
Video-based traffic monitoring system
Edmond Chin Ping Chang
A number of real-time video traffic monitoring systems are currently being developed by the Texas Transportation Institute (TTI), Texas A&M University System, to demonstrate how new Intelligent Transportation Systems (ITS) technologies, such as wireless communication, ISDN communication, standard phone-line video compression, real-time video surveillance, and video image processing, can be better integrated. These systems illustrate how video-based traffic monitoring and resource sharing can improve system efficiency and increase operational safety for operating agencies anywhere in the world.
Unauthorized access identification in restricted areas
Guido Tascini,
Antonella Carbonaro,
Primo Zingaretti
The paper describes a system to control vehicle access in restricted areas. The aim of the system is to signal vehicles whose license plates do not belong to a specific database. Its two main characteristics are adaptation to different environmental conditions and identification of a vehicle by processing the license-plate pattern as a whole, without recognizing individual characters. The system implements a recognition engine constituted by two modules. First, the system analyzes the video-recorded sequences to select a frame in which the license plate satisfies pre-defined constraints, and extracts the license-plate template on which matching with the model templates stored in the database will be performed. Second, vehicle identification is performed by a genetic template matching that, without requiring high computational complexity, adapts to normal environmental variations by exploiting learning capabilities. The implemented system, forced to distinguish only between authorized and unauthorized vehicles according to a threshold on the genetic fitness function, shows robust performance on Italian cars, but is adaptable to different license-plate models and is independent of outdoor conditions.
Vision-based approach to automate spraying in crop fields
This paper presents an approach to the vision tasks to be performed in a vehicle navigation application in crop fields. The objective is to automate chemical spraying by autonomous navigation and machine vision. A camera is used as the sensor device, and a bar of spraying nozzles is provided to perform the spraying. The proposed solution consists of recovering maps of the environment from the image sequence and exploring them to locate the path to follow and the nozzles that have to be switched on. The motion parameters of the vehicle are used to place the images in the map and are computed by a feature tracking method. The plants and weeds are identified through segmentation; the features to be tracked are computed from the contours of the plants. Results of all the steps involved, on real image sequences, are presented.
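As an illustration of the segmentation and nozzle-switching steps, the Python sketch below uses an excess-green index, a common stand-in for vegetation segmentation (the paper's actual method may differ), and maps image columns onto the spray bar; the thresholds and nozzle count are assumptions.

```python
import numpy as np

def vegetation_mask(rgb, thresh=20.0):
    """Excess-green segmentation sketch: 2G - R - B > thresh.

    rgb is an HxWx3 uint8 image; the threshold is an assumption and
    would be tuned (or learned) for the actual field conditions.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2.0 * g - r - b) > thresh

def nozzles_to_open(weed_mask, n_nozzles=8, min_pixels=50):
    """Open a nozzle when weed pixels in its column band exceed a
    small count; band boundaries mirror the spray-bar geometry."""
    bands = np.array_split(weed_mask, n_nozzles, axis=1)
    return [int(band.sum()) > min_pixels for band in bands]
```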
Self-location for indoor navigation of autonomous vehicles
Accurate position estimation is a fundamental requirement for mobile robot navigation. The positioning problem consists of maintaining, in real time, a reliable estimate of the robot location with respect to a reference frame in the environment. A fast landmark-based position estimation method is presented in this paper. The technique combines the orientation of the mobile robot from a heading sensor (a compass) with observations of landmarks from a vision sensor (a CCD camera). Knowing the positions of the landmarks in a fixed coordinate system and the orientation of the optical axis of the camera, it is possible to recover the robot position by simple geometric considerations. Experiments conducted in our laboratory demonstrate the reliability of the method and suggest its applicability in the context of autonomous robot navigation.
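The "simple geometric considerations" can be made concrete. With the compass fixing the robot heading, each landmark observed at a known bearing constrains the robot to a line through that landmark, and two or more such lines intersect at the robot position. A minimal Python sketch under these assumptions:

```python
import numpy as np

def locate(landmarks, bearings, heading):
    """Least-squares intersection of bearing lines (sketch).

    landmarks: [(x, y), ...] known world positions.
    bearings:  image-derived bearing of each landmark relative to the
               robot axis (radians); heading: compass heading (radians).
    Each observation puts the robot on the line through the landmark
    with world direction a = heading + bearing:
        sin(a) * x - cos(a) * y = sin(a) * px - cos(a) * py
    Two or more landmarks give a least-squares position fix.
    """
    A, rhs = [], []
    for (px, py), b in zip(landmarks, bearings):
        a = heading + b
        A.append([np.sin(a), -np.cos(a)])
        rhs.append(np.sin(a) * px - np.cos(a) * py)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return pos   # estimated (x, y)
```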
Approach to full pose estimation for an automatic control system based on vision
Georgii Khachaturov,
Hugo Moncayo
The application area of the developed 3D pose estimator may be spacecraft docking or any industrial robot control problem that permits putting a visual mark on the operational object. The input information for measuring the complete pose (dimension 6) of a solid object is given by a single-view grayscale image. A new method of measurement is presented. It makes use of a visual mark with spatially distributed features. A Fourier transform technique is used to measure mark image parameters that convey information about the pose parameters. The method is direct: it does not perform any preliminary extraction of local features. Working in the spectral domain, the method is based on a simple search for maxima, unlike the complicated logic of matching-based algorithms working in the spatial domain. The method permits achieving the highest theoretically possible precision for the class of model-based methods. Robustness is another advantage of the approach.
View-based methods for relative reconstruction of 3D scenes from several 2D images
Suppose we have two or more images of a 3D scene. From these views alone, we would like to infer the (x,y,z) coordinates of the object points in the scene (to reconstruct the scene). The most general standard methods require either prior knowledge of the camera models (intersection methods) or prior knowledge of the (x,y,z) coordinates of some of the object points, from which the camera models can be inferred (resection, followed by intersection). When neither alternative is available, a special technique called relative orientation enables a scale model of a scene to be reconstructed from two images, but only when the internal parameters of both cameras are identical. In this paper, we discuss alternatives to relative orientation that do not require knowledge of the internal parameters of the imaging systems. These techniques, which we call view-based relative reconstruction, determine the object-space coordinates up to a 3D projective transformation. The reconstructed points are then exemplars of a projective orbit of representations that are chosen to reside in a particular representation called a canonical frame. Two strategies are described for choosing this canonical frame: (1) projectively simplify the object model and the imaging equations; and (2) projectively simplify the camera model and the imaging equations. In each case, we solve the resulting simplified system of imaging equations to retrieve exemplar points. Both strategies are successful on synthetic imagery, but may be differently suited to various real-world applications.
Additional Papers
Man-machine stereo-TV computer vision system for noncontact measurement
Sergey V. Petuchov,
Vadim F. Vasiliev,
Victor M. Ivaniugin
The structural description of a scene and/or the geometric characteristics of scene objects are insufficient for many robot control tasks. The complexity of natural scenes, as well as the great variety of tasks, makes the human operator indispensable for image interpretation. His responsibility lies in indicating the regions (objects) of interest and in helping to establish a good hypothesis about the location of the object in difficult identification situations. The man-machine computer vision stereo-measurement system (CVMS) allows systems for navigation and control of mobile and manipulating teleoperated robots to be built in a new fashion, making them more adaptive to changes in external conditions. This paper gives a description of the CVMS for non-contact measurements. Three-dimensional coordinates of object points are determined once the human operator indicates them with a mouse. The measurement points are indicated in a monocular image, so special glasses are not required for stereo viewing. The system baseline may be increased compared with the distance between human eyes, so measurement accuracy may also be increased. The CVMS consists of one or two TV cameras and a personal computer equipped with an image input/output board. The system breadboard was tested on a remotely controlled transport robot.
Efficient road sign detection and recognition algorithm
In this paper, an efficient method for detecting and recognizing road signs in real-world scenes is presented. The method is composed of three parts: detecting road signs, i.e. road sign segmentation; shape recognition of the segmented areas; and hierarchical identification of the meaning of the detected signs. We employ a texture image and RG and BY color-opponent images for road sign segmentation, a few grid lines and a bounding box for recognizing segmented sign shapes, and a compressed eigenspace representation for identifying detected signs. The method has proven to be very efficient, robust, and easy to implement. Furthermore, it can cope with a substantial amount of rotation of the detected sign and offers considerable potential for further improvement and real-time use.
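The color-opponent representation named above can be sketched directly. The Python fragment below computes normalized RG and BY opponent channels and thresholds the RG channel as a candidate mask for red-rimmed signs; the normalization and threshold are assumptions, not the paper's exact formulation.

```python
import numpy as np

def opponent_images(rgb):
    """Normalized RG and BY color-opponent channels (sketch)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = (r - g) / (r + g + 1e-6)                          # red-green
    by = (b - 0.5 * (r + g)) / (b + 0.5 * (r + g) + 1e-6)  # blue-yellow
    return rg, by

def red_sign_candidates(rgb, thresh=0.25):
    """Candidate mask for red-rimmed signs: strong RG response."""
    rg, _ = opponent_images(rgb)
    return rg > thresh
```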
Terrestrial navigation based on integrated GPS and INS
Sam S. Ge,
Terence K. L. Goh,
T. Y. Jiang,
et al.
The Global Positioning System (GPS) and Inertial Navigation System (INS) have complementary features that can be exploited in an integrated system, resulting in improved navigation performance. The INS is able to provide accurate aiding data on short-term vehicle dynamics, while the GPS provides accurate data on long-term vehicle dynamics. In this paper, a complete solution is presented for terrestrial navigation based on integrated GPS and INS using a Kalman filtering technique.
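A loosely coupled integration of this kind is commonly realized as an error-state Kalman filter: the INS is propagated at high rate, and each GPS fix updates an estimate of the slowly drifting INS error. The one-axis Python sketch below illustrates that structure; the noise values are assumptions, and the paper's actual filter design may differ.

```python
import numpy as np

class GpsInsFilter:
    """Error-state Kalman filter sketch for one axis of GPS/INS fusion.

    State x = [INS position error, INS velocity error]. The INS output
    is corrected by the estimated error whenever a GPS fix arrives.
    """
    def __init__(self, dt=0.01):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # error propagation
        self.Q = np.diag([1e-4, 1e-3])              # assumed INS drift noise
        self.H = np.array([[1.0, 0.0]])             # GPS observes position
        self.R = np.array([[25.0]])                 # assumed GPS variance, m^2

    def propagate(self):
        """Run at INS rate between GPS fixes."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def gps_update(self, ins_pos, gps_pos):
        """Fuse a GPS fix; returns the corrected position estimate."""
        y = (ins_pos - gps_pos) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return ins_pos - self.x[0]
```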
Robotics Applications
PRIMUS: realization aspects of an autonomous unmanned robot
Ingo Schwartz
The experimental program PRIMUS (PRogram of Intelligent Mobile Unmanned Systems) is intended to demonstrate the autonomous driving of an unmanned robot in open terrain. The goal is to achieve the highest possible degree of autonomy. A small tracked vehicle (Wiesel 2) is used as the robot vehicle. This tank is configured as a 'drive-by-wire' system and is therefore well suited for the adaptation of control computers. For navigation and orientation in open terrain a sensor package is integrated. To detect obstacles, the scene in the driving corridor of the robot is scanned 4 times per second by a 3D range image camera (LADAR). The measured 3D range image is converted into a 2D obstacle map and used as input for the calculation of an obstacle-free path. The combination of local navigation (obstacle avoidance) and global navigation leads to collision-free driving in open terrain to a predefined goal point at velocities of up to 25 km/h. In addition, a contour tracker with a TV camera as sensor is implemented, which makes it possible to follow contours (e.g., the edge of a meadow) or to drive on paved and unpaved roads at velocities up to 50 km/h. Driving in open terrain places high demands on the real-time implementation of all sub-functions in the system. For the most part the described functions will be coded in the programming language Ada. The software will be embedded in a distributed VMEbus-based multicomputer/multiprocessor system. Up to 20 PowerPC 603 CPUs and some 68030/40 CPUs are used to build a high-performance computer system. The hardware is adapted to the environmental conditions of the tracked vehicle.
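The conversion of the 3D range image into a 2D obstacle map can be illustrated compactly: LADAR returns are binned into ground cells, and a cell whose height spread exceeds what the tracks can negotiate becomes an obstacle. A Python sketch with assumed cell size and step threshold:

```python
import numpy as np

def range_points_to_obstacle_map(points, cell=0.5, extent=40.0, max_step=0.4):
    """Bin LADAR points (x forward, y left, z up, metres) into a grid
    and mark cells with a large height spread as obstacles (sketch).

    cell, extent and max_step are assumptions; the real system would
    derive them from the vehicle's mobility characteristics.
    """
    n = int(extent / cell)
    zmin = np.full((n, n), np.inf)
    zmax = np.full((n, n), -np.inf)
    for x, y, z in points:
        col = int(x / cell)                    # distance ahead
        row = int((y + extent / 2) / cell)     # lateral offset
        if 0 <= row < n and 0 <= col < n:
            zmin[row, col] = min(zmin[row, col], z)
            zmax[row, col] = max(zmax[row, col], z)
    seen = np.isfinite(zmin)                   # cells with any return
    return seen & ((zmax - zmin) > max_step)   # obstacle cells
```

The resulting map is then the input to the local, obstacle-avoiding path planner described above.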
Additional Papers
Low-contrast image enhancement by means of an inverse pseudocoherent synthetic imaging method with control contrast and space-resolving power levels
Alexander M. Akhmetshin
Low-contrast image multiresolution analysis based on a virtual vision method of variable pole location
Alexander M. Akhmetshin,
Igor B. Berezovsky
Aviation Applications
Ultrawideband synthetic vision sensor for airborne wire detection
Robert J. Fontana,
J. Frederick Larrick,
Jeffrey E. Cade,
et al.
A low cost, miniature ultra wideband (UWB) radar has demonstrated the ability to detect suspended wires and other small obstacles at distances exceeding several hundred feet using an average output power of less than 10 microwatts. Originally developed as a high precision UWB radar altimeter for the Navy's Program Executive Office for Unmanned Aerial Vehicles and Cruise Missiles, an improved sensitivity version was recently developed for the Naval Surface Warfare Center (NSWC Dahlgren Division) as part of the Marine Corps Warfighting Laboratory's Hummingbird program for rotary wing platforms. Utilizing a short pulse waveform of approximately 2.5 nanoseconds in duration, the receiver processor exploits the leading edge of the radar return pulse to achieve range resolutions of less than one foot. The resultant 400 MHz bandwidth spectrum produces both a broad frequency excitation for enhanced detection, as well as a low probability of intercept and detection (LPI/D) signature for covert applications. This paper describes the design and development of the ultra wideband sensor, as well as performance results achieved during field testing at NSWC's Dahlgren, VA facility. These results are compared with those achieved with a high resolution EHF radar and a laser-based detection system.
New sector imaging radar for enhanced vision: SIREV
Franz Witte,
Thomas Sutor,
Ruediger Scheunemann
The DLR radar system SIREV is a forward-looking airborne radar with a fixed antenna mounted on the fuselage of an aircraft or helicopter. It is able to operate at different frequencies from L-band up to Ka-band and delivers high-quality radar images of a sector ahead of the flight path. The radar images, generated in real time, look very similar to optical images. Depending on the application, they can also include a 3D view as well as ground elevation information (e.g. obstacles). Due to the all-weather capability of the system and its ability to present radar images very similar to optical images, either as a top view (mapping mode) or as a pilot view (central perspective), the system is especially qualified for: (1) navigation support to the pilot under IMC flight conditions; (2) autonomous landing approaches; (3) taxi support on the ground; (4) dropping of goods or airborne troops. The system is currently under development. First simulations with data from DLR's SAR system have been performed. The expected image quality and resolution of the SIREV system cannot presently be achieved by any other system.