Proceedings Volume 1007

Mobile Robots III


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 10 March 1989
Contents: 1 Sessions, 44 Papers, 0 Presentations
Conference: Cambridge Symposium on Advances in Intelligent Robotics Systems, 1988
Volume Number: 1007

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

All Papers
Planning With Perceptual Milestones To Control Uncertainty In Robot Navigation
Claude L. Fennema Jr., Edward M. Riseman, Allen R. Hanson
Because of the realities of converting commands into motion, there is always a degree of uncertainty as to how accurately an intended action has been executed. This situation has received considerable attention in the study of robot navigation, and the common solution is to calculate the degree of uncertainty in position resulting from a chain of commands and to take measures to reduce this uncertainty when it exceeds some limit. These calculations necessarily make assumptions about the factors which introduce uncertainty, such as wheel slippage per foot or backlash. It is difficult, however, to foresee what all these factors are, and even more difficult, in some cases, to develop good models for them. This paper addresses the problem from a different perspective. Rather than calculating the degree of positional uncertainty, the approach controls it by what might be called "perceptual servoing", accomplished using a combination of low-level tracking and high-level perceptual verification of "milestones", sensory events the robot should perceive during the execution of a plan. Perception, planning and execution are integrated in this system: the planning activities are able to reason about what should be perceived at various stages so that milestones can be generated, and execution activities use perception to continuously monitor progress. After a brief exposition of a new robot project, of which this effort is a part, this paper describes a hierarchical representation of space into locales or space packets. This representation is designed to facilitate navigational planning, reasoning about perceptual milestones, and perception of the milestones themselves. Following this discussion, the planning, execution and monitoring activities are described. Planning is done hierarchically, taking advantage of the hierarchical representation of space, and incrementally, expanding only the early parts of the trip in an effort to keep memory requirements and replanning computation under control. Execution is done in small steps, using low-level perception to monitor progress. Finally, plan monitoring activities construct operators from descriptions in the model and use milestones to verify that the overall physical motion is conforming to plan. The research project described here is quite new and results are just becoming available. Consequently, this paper focuses on the design of what is a very top-down system which integrates perception and reasoning activities, striving towards continuous and plan-directed perception, used here to control the amount of uncertainty.
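As a purely illustrative sketch of the milestone idea (not the authors' system), the loop below executes a plan in small steps and falls back on replanning when an expected milestone is never perceived; the callables plan, move, perceive and replan are hypothetical placeholders.

    # Illustrative sketch only: execute a plan in small steps and verify "perceptual
    # milestones" instead of integrating positional uncertainty.  All callables are
    # hypothetical placeholders, not the authors' interfaces.
    def execute_with_milestones(plan, move, perceive, replan, max_retries=3):
        """plan: list of (action, expected_milestone); milestone may be None."""
        i = 0
        while i < len(plan):
            action, milestone = plan[i]
            move(action)                                  # low-level step toward the goal
            if milestone is not None:
                retries = 0
                while not perceive(milestone):            # high-level milestone verification
                    retries += 1
                    if retries > max_retries:
                        # milestone never observed: uncertainty too large, replan the rest
                        plan = plan[:i + 1] + replan(milestone)
                        break
            i += 1
        return plan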
Path Planning And Collision Avoidance For An Indoor Security Robot
Gary Gilbreath, H. R. Everett
Any mobile robot which must operate in a dynamically changing indoor environment, such as an office, laboratory, or warehouse, must be able to detect and successfully avoid unexpected obstacles. Transient objects such as chairs, doors, trash cans, etc. change position or state frequently, and thus cannot be assigned a static representation in an "absolute" X-Y planview map of the workspace. The most simplistic path planning scheme therefore assumes there are no transient objects in this global model for the initial "find-path" operation. For collision avoidance purposes, a secondary "relative" model of the robot's immediate surroundings is created from real world sensor data collected as the robot is moving, and used to find a path around each individual obstruction as it is encountered. No information regarding the position of permanent objects is available in this smaller relative model, and the position of each transient object is forgotten as soon as it no longer obstructs the path. Conversely, if the absolute position of each detected obstruction is simply recorded in the global map, the resulting model eventually fills up with clutter and the find-path operation fails because no free path exists. This paper discusses a robust approach for map maintenance implemented on a prototype security robot, wherein transient objects are added to the global map as they are encountered, and removed from the model later if no longer detected at the same location. In this manner, subsequent find-path operations will avoid previously identified obstructions, and information on the location of both permanent as well as transient objects is available when reacting to the discovery of a new obstruction.
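A minimal sketch of that map-maintenance rule, assuming a cell-based global map and simple hit/miss counters; the thresholds and data structures are assumptions, not the prototype's code.

    # Sketch of the rule: transient objects enter the global map when detected and are
    # dropped again once repeatedly not re-detected at the same location.  Permanent
    # structure is assumed to live in a separate static map.
    ADD_AFTER, REMOVE_AFTER = 2, 5      # assumed confirmation / forgetting thresholds

    def update_transient_map(transient, hits, misses, detections, visible):
        """transient: set of blocked (x, y) cells; detections: cells sensed as occupied;
        visible: cells the sensors could observe on this pass."""
        for cell in detections:
            hits[cell] = hits.get(cell, 0) + 1
            misses[cell] = 0
            if hits[cell] >= ADD_AFTER:
                transient.add(cell)               # future find-path calls will avoid it
        for cell in visible - detections:
            if cell in transient:
                misses[cell] = misses.get(cell, 0) + 1
                if misses[cell] >= REMOVE_AFTER:
                    transient.discard(cell)       # object has moved on; forget it
                    hits[cell] = 0
        return transient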
Finding Out Critical Points For Real-Time Path Planning
Wei Chen
Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, interprocess communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot will determine the world map by roving through the workspace. The computational complexity for building and searching such a map is not more than O(n^2). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty is generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world to plan a path, and their computational cost is not low. They are not applicable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the special application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment; easily update the world model to spell out the global-path map and to reflect changes in the robot environment; and decide, in real time and with limited resources, where the robot must go and which direction the range sensor should point. The method presented here assumes that the data from the range sensors has been processed by a signal processing unit. The path planner will guide the scan of the range sensor, find critical points, decide where the robot should go and which point is a potential critical point, generate the path map, and monitor the robot's moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
A Minimum Path Algorithm Among 3D-Polyhedral Objects
Aysin Yeltekin
In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove. The algorithm will be implemented in the software package we are developing in the C language. The theorem we introduce states that: "Given the initial point I, the final point F, and a set S of a finite number of static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects." We prove the theorem and develop the following algorithm, based on the theorem, to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the line IF. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. The algorithm then chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.
Parallel Off-Road Perception Processing On The Autonomous Land Vehicle
R. Terry Dunlay, Steven B. Seida
This paper describes off-road perception techniques developed for the Autonomous Land Vehicle (ALV). Processing is performed on a parallel processor, the Warp Machine, a 100 MFLOPS systolic array computer. An overview of the complete off-road system is presented, followed by detailed descriptions of the parallel perception processing techniques used in classifying outdoor terrain. Range images are used to determine a Cartesian height map representation of the area in front of the ALV. Then one of a number of different methods of classifying terrain regions in the Cartesian map can be used. The resulting traversability map, consisting of 3 categories (traversable, obstacle, or unknown), is used in path planning. The advantages and limitations of the different approaches are examined and results from each are presented.
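The traversability labeling can be illustrated with a simple height-variation test over the Cartesian height map; the threshold and the use of NaN for cells with no range return are assumptions, and this is not the ALV's actual classifier.

    import numpy as np

    # Illustrative sketch: label each cell of a Cartesian height map as traversable (0),
    # obstacle (1), or unknown (2) from local height variation.  Thresholds are assumed.
    def traversability(height_map, step_threshold=0.3):
        rows, cols = height_map.shape
        labels = np.full((rows, cols), 2, dtype=np.int8)        # default: unknown
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                window = height_map[r - 1:r + 2, c - 1:c + 2]
                if np.isnan(window).any():
                    continue                                     # insufficient range data
                relief = window.max() - window.min()             # local height variation
                labels[r, c] = 1 if relief > step_threshold else 0
        return labels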
ALV Closeup: Finding Road When Offroad
Steven Seida
Flexible and capable autonomous navigation for military missions requires hybrid road and offroad navigation. A vehicle may travel cross country where roads are not available, too exposed, or too damaged. It may also have to navigate existing roads when greater vehicle speeds are desired and exposure risk is acceptable. One critical component of the hybrid road/offroad navigation is the "Road from Offroad" problem: the ability of the vehicle to identify, navigate towards, and enter a paved roadway from an offroad position. A relatively fast (35 seconds) and reliable method developed for the Autonomous Land Vehicle (ALV) is presented. The algorithm runs on a general purpose processor and starts with a raster scan based image segmenter, followed by a rule-based region merging step. A classification scheme that identifies road areas is then applied to the largest regions created by the rule based merge. Simple descriptions of likely road areas in the image are then
Road Boundary Detection
J. Sowers, R. Mehrotra, I. K. Sethi
In recent years, several researchers have examined the problem of autonomous navigation of pathways. A fundamental problem that remains is the efficient extraction of the boundary information. In this paper a method is presented for extracting road boundaries in one pass of an image. This method uses the statistical information regarding the intensity levels present in the image along with some geometrical constraints concerning the road.
Terrain Classification Using Texture For The ALV
Marty Marra, R. Terry Dunlay, Don Mathis
Off-road navigation is a very demanding visual task in which texture can play an important role. Travel on a smooth road or path can be done with greater speed and safety in general than on rough natural terrain. In addition, recognition of off-road terrain types can aid in finding the fastest and safest route through a given area. Implementations of two texture methods for identifying certain terrain features in video imagery are briefly discussed. The first method uses edge and morphological filters to identify roadways from off-road. The second method uses a neural net to identify several terrain types based on color, directional texture, global variance and location in the image. Plans to integrate the terrain labeled image produced by the latter method into the ALV's perception system are also discussed.
Mars Rover Local Navigation And Hazard Avoidance
B. H. Wilcox, D. B. Gennery, A. H. Mishkin
A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between Earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.
Hazard Avoidance For A Mars Rover
Andrew J. Spiessbach
The challenging geology of the surface of Mars, when coupled with the impossibility of continuous remote driving from Earth, dictates the need for autonomous hazard detection, recognition and possibly hazard avoidance capabilities onboard any robotic Mars roving vehicle. The main technical issues represented by terrain hazards are accidental damage and vehicle entrapment. Several approaches to vehicle design geared to prevent such immobilization threats are identified. The gamut of alternatives for rover autonomy is also presented, and the applicability of the various options for the Mars Rover/Sample Return (MRSR) mission is assessed in the context of the technology state of the art for hazard sensors and processing algorithms.
Mars Rover Concept Development
Louis S. McTamaney, Scott Y. Harmon
FMC conducted a carefully structured study effort designed to develop an extensive set of mobility and navigation concepts for a planetary exploration vehicle. This focused study commenced with a requirements analysis that defined operational and functional requirements and constraints. Next, the physical system, its value system (evaluation parameters and design criteria) and the martian terrain design environment (surface models) were developed simultaneously. Mobility (the basic vehicle structure, suspension, and power train) and navigation (both global and local navigation including global reference, local terrain assessment, and guidance and control) concepts were developed independently and then integrated. Special attention was paid to the interdependency of navigation and locomotion in order to strike the appropriate balance between autonomy and mobility. Finally, the most promising concepts were subjected to a rigorous, structured evaluation process to derive the best candidate systems. To support the evaluation process, a three-layer computer model of the martian surface was developed. This model was based on the 1/64° Digital Elevation Model (DEM) of Mars developed by USGS Flagstaff. Local surface roughness based on measured martian slope distribution and power spectral density was superimposed on the DEM. Rocks based on H. Moore's distribution model were then added. To assess performance, selected concepts were modeled using DADSR from Computer Aided Software Inc., and simulations were run with the vehicle traversing the martian surface model, including one meter high vertical steps and one meter wide crevasses. During the course of this study, over 300 mobility concepts and more than 400 navigation concepts generated during a series of brainstorming sessions evolved into three promising candidate systems. The evolution of these systems is discussed; design details, conclusions and recommendations are presented.
Simple Autonomous Mars Walker
Stanley J. Larimer, Thomas R. Lisec, Andrew J. Spiessbach
Under a contract with NASA's Jet Propulsion Laboratory, Martin Marietta has developed several alternative rover concepts for unmanned exploration of the planet Mars. One of those concepts, the "Walking Beam", is the subject of this paper. This concept was developed with the goal of achieving many of the capabilities of more sophisticated articulated-leg walkers with a much simpler, more robust, less computationally demanding and more power efficient design. It consists of two large-base tripods nested one within the other which alternately translate with respect to each other along a 5 meter beam to propel the vehicle. The semi-autonomous navigation system relies on terrain geometry sensors and tactile feedback from each foot to autonomously select a path which avoids hazards along a route designated from Earth. Both mobility and navigation features of this concept are discussed including a top-level description of the vehicle's physical characteristics, deployment strategy, mobility elements, sensor suite, theory of operation, navigation and control processes, and estimated performance.
Perception For Rugged Terrain
In So Kweon, Martial Hebert, Takeo Kanade
To perform planetary exploration without human supervision, a complete autonomous robot must be able to model its environment and locate itself while exploring its surroundings. To this end, estimating motion between sensor views and merging the views into a coherent map are two important problems. In this paper, we present a 3-D perception system for building a geometrical description of rugged terrain environments from range data. We propose an intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions. We present an algorithm, called the Locus method, to convert a range image into an elevation map. An uncertainty model based on this algorithm is developed. Based on this elevation map and the Locus method, we propose a terrain matching algorithm which does not assume any correspondences between range images. The algorithm consists of two stages: first, a feature-based matching algorithm computes an initial transform; second, an iconic terrain matching algorithm that minimizes the correlation between two range images is applied to merge multiple range images into a uniform representation. We present terrain modeling results on real range images of rugged terrain.
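A much simpler stand-in for the elevation-map representation (plain point binning rather than the Locus method) shows the intended data structure: per-cell height, per-cell uncertainty, and an explicit marker for cells with no range return. The grid size and cell dimensions below are assumptions.

    import numpy as np

    # Illustrative sketch of an elevation map with per-cell uncertainty and an explicit
    # label (NaN) for occluded / unseen cells.  This is simple point binning, not the
    # authors' Locus method.
    def build_elevation_map(points, cell=0.25, size=100):
        """points: (N, 3) array of x, y, z terrain points in the sensor frame."""
        elev = np.full((size, size), np.nan)        # NaN marks occluded / unseen cells
        var = np.zeros((size, size))                # crude per-cell uncertainty
        count = np.zeros((size, size))
        for x, y, z in points:
            i, j = int(x / cell) + size // 2, int(y / cell) + size // 2
            if 0 <= i < size and 0 <= j < size:
                count[i, j] += 1
                old = 0.0 if np.isnan(elev[i, j]) else elev[i, j]
                mean = old + (z - old) / count[i, j]            # running mean height
                var[i, j] += (z - old) * (z - mean)             # running variance numerator
                elev[i, j] = mean
        return elev, var / np.maximum(count, 1)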
Advanced Rover For Planetary Exploration
David D. Wright, Wendell Chun
Planetary exploration goals for the next several decades and the eventual requirement for manned exploration lead naturally to a robotic vehicle in the shape of a human. As an exploration vehicle, the human design has some impressive mobility advantages. While a biped robot suffers a locomotion penalty on smooth terrain, it is more efficient in the rough terrain which is geologically interesting enough to justify the trip in the first place. Several new technologies based on existing materials and processes are combined to synthesize a reasonably detailed concept design for an android rover.
Mobility Study For A Lunar Rover
David A. Easter, Corinne M. Buoni, Lisa A. McCauley
One of the human exploration initiatives being explored by the National Aeronautics and Space Administration (NASA) involves a program that would build on and extend the legacy of the Apollo program through scientific research and exploration. The construction and operation of a very low frequency radio astronomy observatory on the far side of the Moon is one of the many scientific endeavors being considered that would require a robotic lunar surface vehicle system. A supervisory controlled rover vehicle has been conceptually defined to support the construction of this large array in the 2000 to 2010 time frame. This paper describes the observatory, briefly discusses the vehicle design/operational requirements, identifies key system/subsystem options and trades, and presents the vehicle concept developed. Particular emphasis is placed on vehicle mobility system considerations.
Video Compression For Remote Vehicle Driving
Martin Herman, Karen Chaconas, Marilyn Nashman, et al.
In order to effectively drive a remote ground vehicle using video images, the operator must be provided with a natural, real-time video sequence and the imagery must be accurate and detailed enough so that the operator can make mobility and survivability decisions. Unfortunately, high data rate communication channels are often not feasible for this task. To accomplish remote driving using a low data rate channel, video compression techniques must be incorporated. This paper discusses the remote vehicle driving problem and describes several video compression algorithms that have been implemented on PIPE, a real-time pipelined image processing machine. The paper then discusses how these algorithms are evaluated on real-world remote driving tests. Finally, advanced techniques for video compression are proposed.
Collision Avoidance Under Uncertainty
Bijan G. Mobasseri
Reasoning with uncertainty has increasingly become an important issue in robot path planning. The injection of doubt into the knowledge of obstacle location calls for capabilities not available in conventional path planners. This work redefines the concept of optimality within the context of an uncertainty grid and proposes a class of cost functions capable of incorporating such human-like attitudes as conservative and aggressive behavior.
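One hypothetical form such a cost function could take (not necessarily the one proposed in the paper) attaches a risk-attitude parameter to the occupancy probability stored in each grid cell.

    # Illustrative sketch of an attitude-dependent cell cost on an uncertainty grid.
    # p is the probability that a cell is occupied; alpha > 1 is conservative,
    # alpha < 1 is aggressive.  The functional form and constants are assumptions.
    def cell_cost(p, alpha=2.0, collision_penalty=100.0, step_cost=1.0):
        return step_cost + collision_penalty * (p ** (1.0 / alpha))

    # A conservative planner (alpha = 4) already pays heavily for mildly uncertain
    # cells, while an aggressive one (alpha = 0.5) only avoids near-certain obstacles:
    #   p = 0.2  ->  conservative cost ~ 68,  aggressive cost = 5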
KL Techniques For Optimal Processing Of Time Sequential Imagery
Pieter Vermeulen, David Casasent
Time sequential imagery is difficult to analyze because of its high dimensionality. This paper advances a new algorithm that screens input data in an intelligent way, discards data with negligible information and uses the remaining images to represent the sequence in an optimal compact form. We present data to illustrate how this algorithm can be used to do novelty filtering, novelty detection, segmentation, background independent modelling and classification.
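A rough sketch of the screening idea, using the Karhunen-Loeve (principal component) basis of the frames retained so far and keeping a new frame only when its projection residual is large; the rank and threshold are assumptions, not the authors' algorithm.

    import numpy as np

    # Illustrative sketch of KL-style screening of an image sequence: a frame is kept
    # only if its residual against the subspace of already-kept frames is large.
    def screen_sequence(frames, k=8, threshold=0.1):
        """frames: iterable of equally sized 2-D arrays; returns the retained frames."""
        kept = []
        basis = None
        for frame in frames:
            x = frame.astype(float).ravel()
            x = x / (np.linalg.norm(x) + 1e-12)
            if basis is None:
                residual = 1.0
            else:
                coeffs = basis.T @ x                 # KL expansion coefficients
                residual = np.linalg.norm(x - basis @ coeffs)
            if residual > threshold:                 # novel frame: keep it
                kept.append(frame)
                data = np.stack([f.astype(float).ravel() for f in kept], axis=1)
                u, _, _ = np.linalg.svd(data, full_matrices=False)
                basis = u[:, :k]                     # orthonormal basis of kept frames
        return kept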
Collision Avoidance Navigation For Mobile Articulated Robotic Systems
Alexander Yu-Kuang Chen
This paper presents an initial attempt to build a complete framework which will support the collision-free spatial planning for advanced robotic systems. The Inverse Spatial Planning Problem (ISPP) is defined to determine the joint variables and the base mobility such that the designated targets can be dextrously reached at terminal time without encountering any collision. A new approach for solving the ISPP is introduced which includes world modeling, the associated inverse kinematic problem (IKP) solver, collision detection and avoidance algorithms and sensor data fusion consideration. The SRAARS (Scientific Research Associates Advanced Robotic Systems) is employed to demonstrate the current status of the new ISPP solver.
Color Vision For Road Following
Jill D. Crisman, Charles E. Thorpe
At Carnegie Mellon University, we have two new vision systems for outdoor road following. The first system, called SCARF (Supervised Classification Applied to Road Following), is designed to be fast and robust when the vehicle is running in both sunshine and shadows under constant illumination. The second system, UNSCARF (UNSupervised Classification Applied to Road Following), is slower, but provides good results even if the sun is alternately covered by clouds or uncovered. SCARF incorporates our results from our previous experience with road tracking by supervised classification. It is an adaptive supervised classification scheme using color data from two cameras to form a new six dimensional color space. The road is localized by a Hough space technique. SCARF is specifically designed for fast implementation on the WARP supercomputer, an experimental parallel architecture developed at Carnegie Mellon. UNSCARF uses an unsupervised classification algorithm to group the pixels in the image into regions. The road is detected by finding the set of regions which, grouped together, best match the road shape. UNSCARF can be expanded easily to perform unsupervised classification on any number of features, and to use any combination of constraints to select the best combination of regions. The basic unsupervised classification segmentation will also have applications outside the realm of road following.
Optimal Path Planning By Cost Wave Propagation In Metric Configuration Space
Leo Dorst, Karen Trovato
In this paper we give a solution to the problem of finding an optimal path for an arbitrary robot among static obstacles, from its present state to the closest of a set of goal states. The path is optimal in the sense of minimizing some criterion (minimum motion, distance, effort, etc.). Our solution is the well-known heuristic search method `A* with h = 0' (no heuristic), applied to a graph representing configuration space, with a metric incorporating the optimization criterion. It is applicable to any robot, unrestricted in the number of degrees of freedom, and capable of handling a broad class of optimization criteria. The method works fairly fast for a few degrees of freedom. Examples in two dimensions are given.
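A compact sketch of cost-wave propagation, i.e. A* with h = 0 (Dijkstra's algorithm), run from the set of goal states over a discretized configuration space; the neighbors() and edge_cost() callables, which encode the free space and the optimization criterion, are assumed interfaces rather than the paper's code.

    import heapq

    # Illustrative sketch of 'A* with h = 0' cost-wave propagation over configuration-
    # space cells, started from all goal states at once.
    def propagate_costs(goals, neighbors, edge_cost):
        cost = {g: 0.0 for g in goals}
        backptr = {}
        frontier = [(0.0, g) for g in goals]
        heapq.heapify(frontier)
        while frontier:
            c, state = heapq.heappop(frontier)
            if c > cost.get(state, float("inf")):
                continue                              # stale queue entry
            for nxt in neighbors(state):              # free (non-obstacle) neighbors only
                new_cost = c + edge_cost(state, nxt)
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    backptr[nxt] = state              # points 'downhill' toward a goal
                    heapq.heappush(frontier, (new_cost, nxt))
        return cost, backptr

    # The optimal path from any start state is read off by following backpointers:
    def extract_path(start, backptr):
        path = [start]
        while path[-1] in backptr:
            path.append(backptr[path[-1]])
        return path                                   # ends at the closest goal state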
Synthesis Of Reflexive Behavior For A Mobile Robot Based Upon A Stimulus-Response Paradigm
Tracy L. Anderson, Max Donath
An approach based upon the use of multiple primitive reflexive behaviors is presented in which the control of behavior is separated from the computational devices which affect behavior. Each primitive reflexive behavior is completely uncoupled and is based upon a stimulus/response paradigm which does not incorporate heuristics, path planning, or memory. We present experimental results for a simulated mobile robot operating in an unknown environment. The approach has also been verified using a real robot with onboard parallel processing, operating in an unknown and limited dynamic environment.
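The stimulus/response idea can be sketched as a set of memoryless behaviors plus a fixed-priority arbiter; the particular behaviors, thresholds and arbitration rule below are illustrative assumptions, not the authors' formulation.

    # Illustrative sketch of the stimulus/response paradigm: each primitive behavior
    # maps the current stimulus directly to a response, with no memory, map, or plan.
    def avoid(sonar):
        """sonar: dict of range readings in meters, e.g. {'front': 1.2, 'left': 3.0}."""
        if sonar["front"] < 0.5:
            return ("turn", 30.0)            # rotate away from the obstacle
        return None                          # stimulus absent: behavior stays silent

    def wander(sonar):
        return ("forward", 0.2)              # default response when nothing else fires

    BEHAVIORS = [avoid, wander]              # earlier entries have higher priority

    def reflex_step(sonar):
        for behavior in BEHAVIORS:
            response = behavior(sonar)
            if response is not None:
                return response              # first active behavior wins this cycle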
An Algorithm For Planning An Optimum Collision-Free Path In Structured Environment
M. G. Radicci, M. Interesse, A. Distante
An efficient algorithm for planning collision-free paths in a planar workspace, cluttered by known obstacles, is presented. The mobile entity is assumed to move with a fixed orientation. Our approach adopts the transformation of the obstacles from the Cartesian space into the configuration space (C-space). Moreover, the feasible paths inside the workspace are represented through a set of straight line trajectories (spines), constituting the axes of natural freeways between C-space obstacles. The path planner finds the optimum path for the mobile point, from the start to the target position, displacing it from spine to spine at their intersection points. The proposed approach was applied to the three-dimensional gross motion planning of an industrial robot (a PUMA 560). In the experimental phase it always yielded satisfactory results, finding optimum paths even in highly cluttered environments.
Model-Based Visual Navigation System For A Mobile Robot
Juha Roning, Matti Pietikainen, Mikko Lindholm, et al.
A model-based visual navigation system developed at the University of Oulu is described. The system consists of a hierarchy of processing levels where low level stages are data driven and the highest stage is model driven. With passive three-camera stereovision a sparse depth map of the scene is produced. Descriptions of important local scene structures are extracted using the properties of edge segments and relations between neighboring segments. The nature of the environment and world constraints are exploited to support extraction of the description. Prior knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements.
Development And Guidance Of A Mobile Robot With Object Avoidance Properties
Aysin Yeltekin, Kasmijan Javoori
In this work we have developed an autonomous mobile robot with capabilities for object avoidance as well as finding the minimum path among two-dimensional polygonal objects. The mobile unit moves on wheels and is controlled by the main control software loaded onto the machine. A remote controller activates and stops the operation of the main controller. The vehicle can find its way among static polygonal objects with the help of a vision sensor. The main controller is responsible for executing the main navigation software as well as commanding the driving circuits for the wheel motors and the vision system. The vision system consists of a microdigital camera used as the vision sensor for object detection. A stepper motor aligns the camera with the direction of motion along the path of the mobile robot. The software package consists of a main navigation program written in the C language and functional subroutines written in assembly for fast execution and easy access to hardware control. The autonomous mobile robot is a self-guiding unit and does not need teleoperators or other guidance units.
Supervisory Control Of Multiple Vehicles
Thurston L. Brooks
Economically mandated force limitations, combined with a steadily shrinking pool of candidates endowed with the requisite skill levels, dictate that modern military systems must multiply the individual soldier's performance. An optimal solution within the realm of current technology can be achieved by recognizing that (1) machines are more effective than humans when the task is routine and specified, and (2) humans process complex data sets and deal with the unpredictable better than machines. These observations lead naturally to a philosophy in which the human's role becomes a higher level function associated with planning, teaching, initiating, monitoring, and intervening when the machine gets into trouble, while the machine performs the codifiable tasks with deliberate efficiency. Martin Marietta is developing the capability for a single operator to simultaneously control multiple remote weapons platforms through the development of a supervisory controls testbed. This effort is described.
Hierarchical Control Of A Mobile Robot With A Blackboard Based System
Thomas Skillman, Wolf Kohn, Robb Graham
Based on our experience with two mobile robot control projects implemented on blackboard based computing systems, a control system design approach has been developed for declarative representation of the control problem and an algorithm decomposition which automatically creates hierarchical controllers. The target computing system architecture, blackboard or otherwise, is captured declaratively in a complexity measure. This measure is used during the decomposition of the controller to optimize the controller's performance on the target computing architecture. Our design approach is presented, and the computational complexity measure is described. A new mobile robot testbed, Ulysses, and a potential force field control approach will be used to test our declarative hierarchical controller design procedure.
Algorithms For All-Wheel Steering
David D. Wright
Formulas for steering a complex robotic vehicle are derived with examples. Taken together, the formulas constitute an algorithm which converts commands from a joystick or a trajectory planner into wheel speeds and steering angles. The algorithm is applied to a robotic vehicle in which all wheels are independently steered and powered. The algorithm maps operator requests onto the control space of the vehicle. Steering all the wheels introduces new command options due to increased mobility and quickness of response thereby creating a virtual vehicle for use in route and task planning which is the highest-common-multiple of land vehicle capabilities. For example, an all-wheel steering vehicle can tack uphill, step over a trench or simultaneously rotate while translating -- maneuvers not even possible for most vehicles. These and other examples, including a comparison with an exact solution, add perspective to the mathematical derivations.
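The core of such an algorithm is the kinematic mapping from a commanded body velocity (vx, vy) and yaw rate omega to a speed and steering angle at each wheel, v_i = v + omega x r_i. The sketch below illustrates that mapping for an assumed rectangular four-wheel layout; the paper's actual formulas may be organized differently.

    import math

    # Illustrative sketch of all-wheel-steering kinematics: each wheel's required
    # velocity is the body translation plus the rotational term omega x r.
    WHEELS = {"FL": (1.0, 0.6), "FR": (1.0, -0.6), "RL": (-1.0, 0.6), "RR": (-1.0, -0.6)}

    def wheel_commands(vx, vy, omega):
        """vx, vy in m/s (body frame), omega in rad/s; returns {wheel: (angle_rad, speed)}."""
        commands = {}
        for name, (x, y) in WHEELS.items():
            wx = vx - omega * y                 # x-component of v + omega x r
            wy = vy + omega * x                 # y-component of v + omega x r
            commands[name] = (math.atan2(wy, wx), math.hypot(wx, wy))
        return commands

    # Simultaneous rotation while translating is simply a nonzero omega combined with
    # a nonzero (vx, vy); a sideways "crab" step is (0, vy) with omega = 0.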
Towards A Modular Robotic Architecture
Walter A. Aviles, Robin T. Laird, Margaret E. Myers
Despite the seemingly dramatic differences in the configuration of individual robots, all mobile platforms are required to perform the same basic tasks and gather certain basic types of information about their environment. In recognition of this fact, work has been initiated on a high-level robotic control and integration architecture known as the Generic Robotic Processing Architecture (GRPA). GRPA is designed to provide a flexible, standard interface between users and a wide variety of robotic processing systems. Under GRPA, the user is able to "compile" a hardware description of a particular robot into data structures used by its Virtual Systems Interface (VSI). Through the VSI, disparate types of sensors and actuators can be addressed in a consistent manner. Changing a data object in the VSI initiates a corresponding change in the physical world. On top of the VSI are Teleoperated, Supervisory, Autonomous and Reflexive control levels. Within each control level, GRPA provides standardized, device independent routines which interact with the VSI to effect changes in the robot.
A Pipeline Architecture For Near Real-Time Stereo Range Detection
Ali E. Kayaalp, James L. Eckman
Following successful demonstrations in road-following, research efforts in autonomous navigation have focused on cross-country navigation. In cross-country navigation, autonomous vehicles need to detect obstacles and maneuver around them using on-board electronics. Due to the dynamic nature of the environment (moving vehicle/moving obstacle), positions of obstacles need to be constantly tracked in order to avoid collisions. This requires a high speed range detection system. Furthermore, for military applications a passive ranging system is desirable in order to reduce the detectability of the vehicle. In this paper we present a pipeline architecture that performs correlation-based feature matching between two images in near real-time. Based on this architecture, a binocular stereo range detection system has been implemented using DATACUBE image processing boards. The system matches 512 x 512 pixel stereo image pairs in approximately 5 seconds using a disparity range of 128 pixels, and 256 x 256 pixel stereo image pairs in approximately one second using a disparity range of 64 pixels. The range information provided by the system is processed and displayed in different forms using a TAAC-1 graphics accelerator board attached to the SUN-4 host system. We present the feature matching algorithm, its hardware implementation, and show results obtained using the binocular stereo matching system. Finally, we discuss future directions and other possible applications of the system.
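For reference, the correlation-based matching that the pipeline performs in hardware can be sketched in a few lines of software, here as a windowed sum-of-absolute-differences search over the disparity range; this is illustrative only and not the DATACUBE implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    # Illustrative sketch of correlation-based stereo matching: for each pixel, search
    # a disparity range along the scanline and keep the shift with the best window score.
    def stereo_sad(left, right, max_disp=64, win=5):
        """left, right: rectified grayscale images as 2-D arrays of equal shape."""
        best = np.full(left.shape, np.inf)
        disparity = np.zeros(left.shape, dtype=np.int16)
        for d in range(max_disp):
            shifted = np.empty(right.shape, dtype=float)
            shifted[:, d:] = right[:, :right.shape[1] - d]
            shifted[:, :d] = 1e6                                 # no valid match at this shift
            score = uniform_filter(np.abs(left.astype(float) - shifted), size=win)
            better = score < best
            best[better] = score[better]
            disparity[better] = d                                # best disparity so far
        return disparity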
Advances In Guidance Systems For Industrial Automated Guided Vehicles
Wiley D. Holcombe, Stephen L. Dickerson, James W. Larsen, et al.
We have focused our research on Automated Guided Vehicles (AGVs) which travel over a network of allowed paths defined in software and navigate using data from several types of sensors. This technology should be more flexible to install and modify, and opens the door to added capabilities and innovative control strategies. We are experimenting with a prototype, test-bed vehicle which navigates by dead reckoning from odometry and by periodically measuring its absolute position with a simple machine vision system. Ongoing research seeks methods of adaptive improvement of system parameters affecting guidance. We have also developed a novel correlating camera that "watches the floor go by" to make non-contact displacement measurements that can complement odometry. Accuracies of 1 part in 3000 appear to be achievable from this device. For navigating inside semi-trailers, we have investigated a wall-following application of ultrasonic ranging.
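A minimal sketch of that navigation scheme, dead reckoning from wheel odometry corrected by occasional absolute fixes, assuming differential-drive geometry; the test-bed's actual drive configuration and interfaces may differ.

    import math

    # Illustrative sketch: accumulate pose from wheel odometry between periodic
    # absolute position fixes supplied by the vision system.
    class DeadReckoner:
        def __init__(self, x=0.0, y=0.0, heading=0.0, track=0.5):
            self.x, self.y, self.heading = x, y, heading
            self.track = track                      # wheel separation in meters (assumed)

        def odometry_update(self, d_left, d_right):
            """d_left, d_right: wheel travel since the last update, in meters."""
            d = (d_left + d_right) / 2.0
            self.heading += (d_right - d_left) / self.track
            self.x += d * math.cos(self.heading)
            self.y += d * math.sin(self.heading)

        def absolute_fix(self, x, y, heading):
            """Reset accumulated drift when an absolute position measurement arrives."""
            self.x, self.y, self.heading = x, y, heading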
TMAP - A Versatile Mobile Robot
Joel A. Weiss, Richard K. Simmons
TMAP, the Teleoperated Mobile All-purpose Platform, provides the Army with a low-cost, lightweight, flexibly designed, modularly expandable platform for support of maneuver forces and light infantry units. The highly mobile, four wheel drive, diesel-hydraulic platform is controllable at distances of up to 4 km from a portable operator control unit using either fiber optic or RF control links. The Martin Marietta TMAP system is based on a hierarchical task decomposition Real-time Control System architecture that readily supports interchange of mission packages and provides the capability for simple incorporation of supervisory control concepts leading to increased system autonomy and resulting force multiplication. TMAP has been designed to support a variety of missions including target designation, anti-armor, anti-air, countermine, and reconnaissance/surveillance. As a target designation system, TMAP will provide the soldier with increased survivability and effectiveness by providing substantial combat standoff and the firepower effectiveness of several manual designator operators. Force-on-force analysis of simulated TMAP engagements indicates that TMAP should provide significant force multiplication for the Army in Air-Land Battle 2000.
Issues Concerning The Development Of A Mobile Platform For Health Care Applications
Larry W. Korba, Ramiro Liscano, David Green, et al.
There are a number of problems that must yet be overcome before robotic technology can be applied in a hospital or a home care setting. The four basic problems are: cost, safety, finding appropriate applications and developing application specific solutions. Advanced robotics technology is now costly because of the complexity associated with autonomous systems. In any application, it is most important that the safety of the individuals using or exposed to the vehicle is ensured. Often in the health care field, innovative and useful new devices require an inordinate amount of time before they are accepted. The technical and ergonomic problems associated with any application must be solved so that cost containment, safety, ease of use, and quality of life are ensured. This paper discusses these issues in relation to our own development of an autonomous vehicle for health care applications. In this advancement, a commercially available platform is being equipped with an on-board, multiprocessor computer system and a variety of sensor systems. In order to develop pertinent solutions to the technical problems, there must be a framework wherein there is a focus upon the practical issues associated with the end application.
Helpmate: A Mobile Robot For Transport Applications
Bala Krishnamurthy, Bruce Barrows, Steve King, et al.
HelpMate is a mobile robotic materials transport system that performs fetch and carry tasks at Danbury Hospital. It navigates along the main arteries of the hospital, crossing between buildings via interconnecting corridors and uses infrared communication links to communicate with the elevator controller. HelpMate has been designed to work safely around humans, smoothly rerouting its local path to avoid obstacles while maintaining its mission. Safety features include both non-contact and contact obstacle sensing, emergency stop switches, auto/manual mode switches, flashing warning lights, turn indicators, and a failsafe controls design. HelpMate uses odometry, sonar and infrared proximity sensors, and vision as navigation inputs. An onboard card reader provides authorized personnel access to run time control and cargo transfer. Sensor information collected en route is used to build and maintain local navigation maps. A general knowledge of the structured properties of the world is assumed, and used both in collecting and rationalizing the sensor information and updating the robot's local knowledge base. All navigation and path planning is conducted under the direction of onboard processors.
A Methodology Of Map-Guided Autonomous Navigation With Range Sensor In Dynamic Environment
Alex C.C. Meng, Marty Wand, Vincent S.S. Hwang
This paper describes work on a map-guided robot (AGV) maneuvering in a dynamic factory-floor environment. The robot is equipped with a narrow-beam sonar to detect obstacles. In factory applications, the location and orientation of the robot will be determined by external sensors. A path from the start to the target location will be digitized into a sequence of waypoints, and the robot navigates locally by dead reckoning between two adjacent waypoints. Autonomous navigation, in this setup, is viewed in terms of three components: automatic path planning for the given floor layout and the start and target locations of the robot; automatic path replanning after detecting obstacles; and local navigation, which should always lead the robot out of a trap (cul-de-sac) if one occurs. For any given floor layout, we model the passageways between obstacles (processing machines) as a connected graph known as the Voronoi graph. From the robot's point of view, the graph is the map used to plan paths and navigate. Path planning is based on the retraction method. After an obstacle is detected, path replanning will be invoked. Algorithms for path planning, path replanning, and local navigation are given.
A Very Compact Real Time 3-D Range Sensor For Mobile Robot Applications
Francois Blais, Marc Rioux, Jacques Domey, et al.
A very compact 3-D range sensor, ideal for robotic applications and especially for mobile robots, is described. The principle of this sensor is based on the use of a double aperture mask in place of the diaphragm of a standard camera lens. The basic optical principle of the system and the image processing algorithms to extract and compute the 3-D coordinates of the scene are described. A first prototype used to evaluate the characteristics of the range sensor and to acquire panoramic images of a room is shown. Performances and experimental results are presented.
Robotic Navigation Within Complex Structures
Timothy R. Larson, Susan Boltinghouse
Autonomous navigation and piloting of mobile robotic vehicles throughout complex structures such as a nuclear facility, without requiring the installation of special fixtures within the structure, is becoming increasingly important. This paper discusses the development and implementation of navigation algorithms which use the measurements from an on-board scanning range sensor. The navigation algorithms include a Global Path Planner, a Local Path Planner and a Location Identification System. The Global Path Planner selects the optimum path for the mobile robot, employing a stored database containing information about routes throughout the facility. The designated path is defined in terms of local goal coordinates. The Local Path Planner must navigate between the commanded local goal locations, detecting and circumventing any unexpected obstacles which impede the progression of the robot. The Transport Window Concept is employed to accomplish local path planning. The Location Identification System employs the facility database to correlate stored information with measurement data from the scanning range sensor to precisely position the robot and periodically update its on-board Inertial Navigation System. The Local Path Planning Algorithms have been implemented and functionally verified using the Odetics 3D Laser Imaging Radar mounted on a mobile robot navigating through a modelled portion of a nuclear power plant.
Real Time Imaging Rangefinder For Autonomous Land Vehicles
J. Richard Kerr
A three-dimensional sensor that achieves 50 microsteradian resolution over a 90 X 40 degree field of view (FOV) at full video frame rates has been designed for robotic vehicles. A combination of coarse and fine range resolution provides sensing from one to approximately 100 meters with short-range accuracies of less than 10 cm. The system utilizes an eyesafe diode laser configuration along with proprietary mechanical scanning elements, wide-field relay optics, and avalanche photodiode detectors. Range determination is accomplished with dual subcarrier modulation which results in the output of an unambiguous, binary word on a pixel-by-pixel basis. The approach also provides for electronic pitch stabilization.
A Structured Lighting Vision System For Dynamic Obstacle Avoidance With A Mobile Robot
T. J. Essenmacher, V. G. Grafe, G. S. Davidson, et al.
Sandia National Laboratories is using a mobile robot as the focus of a research effort in artificial intelligence and robotics. One of the sensors on board this robot is a structured lighting vision system for dynamic obstacle avoidance and position updating. A 10 MIPS transputer is used in the structured lighting system to collect and process the image data. Descriptions of the system, sensor hardware, and software algorithms are presented. System limitations are also discussed.
A Computational Structure For Enforcing Reactive Behavior In A Mobile Robot
Tracy L. Anderson, Max Donath
An approach is presented which allows for the construction of robot behavior in terms of both spatially and temporally ordered sets of primitive reflexive behaviors. Each primitive reflexive behavior is based upon a stimulus/response model in which the devices which affect behavior are separated from the mechanism for controlling behavior. The approach allows for the explicit representation of complex forms of behavior through the use of abstraction. Behavior is represented in terms of component behaviors and acts to enforce declarative constraints between when a particular component behavior is active and when specific events occur in the environment and/or internal to the robot. In addition, we discuss the need for reactive behavior and illustrate how such behavior can be constructed using the proposed representation.
Navigation By Path Remembering
Jonathan Connell
We present a scheme for navigation that does not rely on a detailed world map or a precise inertial guidance system. Our approach is based on a multiple-agent control system in which there are a variety of local navigation behaviors that run in parallel. These behaviors arbitrate among themselves to select the most applicable action in any given situation. Meanwhile, there is another process which watches for "choice points" -- places where it is possible to travel in one of several distinct directions -- and simply records the qualitative direction actually travelled at each point. Our representation of the robot's path is then a sequence of these choices which can be replayed in reverse to get the robot back to its starting location. An important point is that instead of attempting to build a complete internal map, we use the world as its own representation. Another feature is that the robot does not even care which algorithm(s) are employed to get it from one point to the next. The individual agents communicate through the world to the path rememberer by turning the robot.
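The path-remembering idea reduces to recording one qualitative choice per choice point and replaying the reversed, mirrored sequence on the way home; the callables below that detect choice points and execute turns are hypothetical stand-ins for the paper's behavior-based agents.

    # Illustrative sketch of path remembering: record only the qualitative choice made
    # at each "choice point" and replay the reversed sequence to return to the start.
    REVERSE = {"left": "right", "right": "left", "straight": "straight"}

    def outbound(step, at_choice_point, chosen_direction, at_goal):
        path = []
        while not at_goal():
            if at_choice_point():
                path.append(chosen_direction())   # e.g. "left", "right", "straight"
            step()                                # local behaviors handle the details
        return path

    def homeward(path, step, at_choice_point, turn, at_home):
        choices = [REVERSE[c] for c in reversed(path)]
        while not at_home():
            if at_choice_point() and choices:
                turn(choices.pop(0))              # replay the remembered choices backwards
            step()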
Infrared People Sensors For Mobile Robots
Herbert E.M. Viggh, Anita M. Flynn
The MIT Mobile Robot Group is developing a new small, low power, autonomous robot which will interact with humans in dynamically changing environments. This paper describes a new sensor system based on a collection of pyroelectric elements which will enable the robot to initiate behaviors that include approaching and following humans. Pyroelectric detectors were chosen because they offer the capability of filtering out human infrared signatures from ambient surroundings, while also providing the advantages of simplicity, small package size, low power consumption and low cost. However, pyroelectric elements require relative motion between the sensor and the human to allow detection. In addition, while pyroelectric sensors are able to signal whether or not a target has moved through the field of view, they have no means of determining range or orientation to the targets, capabilities that would be desirable for a mobile robot. This paper describes a sensor configuration which gets around these problems and provides a low power means for a robot to ascertain distance and direction to people in its surroundings. The sensor system provides several information output channels which feed into the control system on different levels, triggering a variety of behaviors. Fixed sensors on the robot are used to detect multiple people or to ascertain the direction of a moving person. Rotating crossed sensors use geometric relationships to determine range and orientation to stationary targets. Using a behavior-based subsumption architecture control system, the sensors and information processing modules are organized so that the required behaviors can be produced without recourse to sensor fusion, enabling very low computational overhead.
Artificial Intelligence For A Safer And More Efficient Car Driving
Giovanni Adorni
In this paper a project, PROMETHEUS, is described in which fourteen of Europe's leading car manufacturers are to join with approximately forty research institutes and governmental agencies to make the traffic of Europe safer, more efficient and more economical. The PROMETHEUS project is divided into seven areas. In this paper one of the seven areas, PRO-ART, is described. PRO-ART is aimed at clarifying the need for and the principles of the artificial intelligence to be used in the next generation automobile. After a brief description of the overall project, a description of the seven-year PRO-ART Italian research programme will be given.
Homeostatic Control For A Mobile Robot: Dynamic Replanning In Hazardous Environments
Ronald C. Arkin
A longstanding goal of robotics has been to introduce intelligent machines into environments that are dangerous to humans. These environments also pose hazards to the robots themselves. By embedding sensing devices as a means for monitoring the internal state of the robot, dynamic plan reformulation can occur in situations that threaten the existence of the robot. A method exploiting an analogy to the endocrine control system is forwarded as the preferred method for homeostatic control - the maintenance of a safe internal environment for the machine. Examples are given describing the impact of fuel reserve depletion and global temperature stress. A methodology using signal schemas as a means to supplement the existing motor schema control found in the Autonomous Robot Architecture (AuRA) is presented.