Proceedings Volume 0635

Applications of Artificial Intelligence III

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 26 March 1986
Contents: 1 Session, 87 Papers, 0 Presentations
Conference: 1986 Technical Symposium Southeast
Volume Number: 0635

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

All Papers
A Comprehensive Evaluation of Expert System Tools
John F. Gilmore, Kirt Pulaski, Chuck Howard
Current trends in knowledge-based computing have produced a large number of expert system building tools. This onslaught of high-tech software stems from the discovery that expert systems can be effectively applied to a variety of industrial and military problem domains. A variety of vendors provide expert system prototyping and development tools which greatly accelerate the construction of intelligent software. Today's expert system tool generally provides the user with a friendly interface, an efficient inference engine, and formalisms that simplify the creation of a domain knowledge base. This paper presents a formalism for expert system tool evaluation and critiques an exhaustive variety of commercially available tools.
Expert System For Model Management
Yuan-chwen You
The use of models has grown rapidly due to the proliferation of ad hoc computing in decision support systems, in particular the overwhelming acceptance of spreadsheet packages. These phenomena reflect the need for a powerful mechanism for controlling and utilizing modeling resources, and it is expected that a model base management system will be developed to accomplish this purpose. Modeling practice requires knowledge most decision makers do not possess, namely the use of different models and the procedural details of model instantiation. To relieve this burden, a prototype expert system with modeling expertise to facilitate the utilization of models is under development. The domain of model management is not restricted to the mathematical models of management science and operations research alone. In fact, a software program is itself a model, and the reusability of models makes a model a form of reusable software. Therefore, the scope of model management is broadened to include foundation software and other reusable software. The modeling characteristics of this wide range of models are explored. A hybrid knowledge representation scheme is proposed as a framework for constructing a prototype knowledge base. This paper shows how the modeling knowledge in this knowledge base can be used to automate the manipulation of models, such as instantiation, selection, synthesis and sequencing of models, as well as the utilization of reusable software. This work suggests a possible future direction for developing a software development automation methodology.
Expert Systems In Medical Studies - A New Twist
James R. Slagle, John M. Long, Michael R. Wick, et al.
The use of experts to evaluate large amounts of trial data results in increasingly expensive and time consuming research. We are investigating the role expert systems can play in reducing the time and expense of research projects. Current methods in large clinical studies for evaluating data are often crude and superficial. We have developed, for a large clinical trial, an expert system for analysis of treadmill exercise ECG test results. In the cases we are studying, a patient is given a treadmill exercise ECG test once a year for five years. Pairs of these exercise tests are then evaluated by cardiologists to determine the condition of the patient's heart. The results of our system show great promise for the use of expert systems in reducing the time and expense of large clinical trials.
Sentinel: An Expert System Decision Aid For A Command, Control And Communications Operator
Daniel L. Tobat, Steven K. Rogers, Stephen E. Cross
The growing complexity and quantity of information used in command, control and communications (C3) networks makes it essential to reduce the workload on the operators of these networks. SENTINEL is an expert system which functions as a decision aid for the strategic missile warning officer, using a simulation of a C3 network that involves multiple missile launches and up to 20 countries. In this research, a blackboard model expert system using rule bases and object oriented programming techniques was developed that permits SENTINEL to deal with uncertainty and offer several layers of explanation. SENTINEL deals with uncertainty by using Cohen's endorsement theory and the pattern recognition techniques of feature sets and prototypes. SENTINEL analyzes the causes of reported events into higher level, yet less precise forms to offer an abstract layer of explanation. The results are applicable to further expert system or decision aid development for C3 networks.
Improved Cartographic Classification Via Expert Systems
Mark F. Doherty, Carolyn M. Bjorklund, Connie Y. Wang, et al.
Statistical classification algorithms currently achieve good, but not perfect, results when classifying complex aerial images into eight or more classes. We show results from one tree classifier which provided between 50% and 65% accuracy in an unsupervised mode. This paper explains how our Advanced Cartographic Expert System (ACES) can be utilized to improve this classification accuracy.
GEST - The Generic Expert System Tool
John F. Gilmore, David Ho, Chuck Howard
The development cycle of an expert system can be decreased if an effective expert system tool (EST) is used. This paper describes the Generic Expert System Tool (GEST) developed by the Artificial Intelligence Branch of the Georgia Tech Research Institute. GEST was developed to be as general purpose as possible while incorporating all of the basic features required of an EST used for real-world applications. This paper outlines GEST's basic software architecture and highlights a variety of its processing elements. A discussion of future enhancements currently being implemented to broaden GEST's application domains is also provided.
An Expert System For Labeling Segments In Forward Looking Infrared (Flir) Imagery
G. A. Roberts
An expert system for labeling high priority potential targets, low priority potential targets, roads, trees, forests, and potential clearings in FLIR imagery is presented. This expert system consists of three stages: the initial labeling experts, initial label conflict resolution, and a final relaxation labeling stage. The techniques used in these stages are presented. Examples of segmentation and segment labeling are shown.
A Flight Expert System (FLES) For On-Board Fault Monitoring And Diagnosis
M. Ali, D. A. Scharnhorst, C. S. Ai, et al.
The increasing complexity of modern aircraft creates a need for a larger number of caution and warning devices. But more alerts require more memorization and higher work loads for the pilot and tend to induce a higher probability of errors. Therefore, we have developed an architecture for a flight expert system (FLES) to assist pilots in monitoring, diagnosing and recovering from in-flight faults. A prototype of FLES has been implemented. A sensor simulation model was developed and employed to provide FLES with the airplane status information during the diagnostic process. The simulator is based partly on the Lockheed Advanced Concept System (ACS), a future generation airplane, and partly on the Boeing 737, an existing airplane. A distinction between two types of faults, maladjustments and malfunctions, has led us to take two approaches to fault diagnosis. These approaches are evident in two FLES subsystems: the flight phase monitor and the sensor interrupt handler. The specific problem addressed in these subsystems has been that of integrating information received from multiple sensors with domain knowledge in order to assess abnormal situations during airplane flight. This paper describes our reasons for handling malfunctions and maladjustments separately and the use of domain knowledge in the diagnosis of each.
Forward-Chaining Versus A Graph Approach As The Inference Engine In Expert Systems
Richard E. Neapolitan
Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to be true. Based on the verity of some assertions, the rules deduce as many new conclusions as possible. A standard technique used to make these deductions is forward-chaining. In forward-chaining, the program or 'inference engine' cycles through the rules. At each rule, the premises for the rule are checked against the current true assertions. If all the premises are found, the conclusion is added to the list of true assertions. At that point it is necessary to start over at the first rule, since the new conclusion may be a premise in a rule already checked. Therefore, each time a new conclusion is deduced it is necessary to start the rule checking procedure over. This process continues until no new conclusions are added and the end of the list of rules is reached. The above process, although quite costly in terms of CPU cycles due to the necessity of repeatedly starting the process over, is necessary if the rules contain 'pattern variables'. An example of such a rule is, 'IF X IS A BACTERIA, THEN X CAN BE TREATED WITH ANTIBIOTICS'. Since the rule can lead to conclusions for many values of X, it is necessary to check each premise in the rule against every true assertion, producing an association list to be used in the checking of the next premise. However, if the rule does not contain variable data, as is the case in many current expert systems, then a rule can lead to only one conclusion. In this case, the rules can be stored in a graph, and the true assertions in an assertion list. The assertion list is traversed only once; at each assertion a premise is triggered in all the rules which have that assertion as a premise. When all premises for a rule trigger, the rule's conclusion is added to the END of the list of assertions. It must be added at the end so that it will eventually be used to make further deductions. In the current paper, the two methods are described in detail, the relative advantages of each are discussed, and a benchmark comparing the CPU cycles consumed by each is included. It is also shown that, in the case of reasoning under uncertainty, it is possible to properly combine the certainties derived from rules arguing for the same conclusion when the graph approach is used.
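As a rough illustration of the contrast described above, the following sketch (with a hypothetical propositional rule set) shows naive forward chaining, which restarts its rule scan after every new conclusion, beside a premise-indexed graph approach that traverses the assertion list only once. It is an assumed minimal rendering, not the paper's benchmark code.

```python
# Hypothetical propositional rules (no pattern variables): (set_of_premises, conclusion).
RULES = [
    ({"a", "b"}, "c"),
    ({"c"}, "d"),
    ({"d", "a"}, "e"),
]

def forward_chain(facts, rules):
    """Naive forward chaining: restart the rule scan after every new conclusion."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
                break          # start over at the first rule, as described above
    return facts

def graph_chain(facts, rules):
    """Graph approach: index rules by premise and traverse the assertion list once."""
    index = {}                                  # premise -> indices of rules containing it
    remaining = [set(p) for p, _ in rules]      # unmet premises per rule
    for i, (premises, _) in enumerate(rules):
        for p in premises:
            index.setdefault(p, []).append(i)
    agenda = list(facts)
    known = set(facts)
    for fact in agenda:                         # new conclusions are appended to the END
        for i in index.get(fact, []):
            remaining[i].discard(fact)
            if not remaining[i]:
                conclusion = rules[i][1]
                if conclusion not in known:
                    known.add(conclusion)
                    agenda.append(conclusion)
    return known

print(forward_chain({"a", "b"}, RULES))   # {'a', 'b', 'c', 'd', 'e'}
print(graph_chain({"a", "b"}, RULES))     # same closure, single pass over assertions
```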
Doss - An Expert System For Large Scale Design
Powell J. Whalen, Theodore F. Skowronski
The Delivery Operations Support System (DOSS) is the automated provisioning system used by AT&T-IS to order, configure, schedule, and track the daily activity associated with providing business customers with telecommunications equipment. At the core of this computer complex is a custom-designed expert system providing optimum communication and data processing system arrangements for equipment assemblies of over 15,000 independent parts. Each arrangement is tailor-engineered to match customer needs as specified by salesperson input. The key elements of this computer application were researched, tested, and developed at AT&T Bell Labs and AT&T-IS Labs over the past 4 years. The administration of the product knowledge base of rules, facts, and user input was moved from computer programmers to product specialists over 2 years ago. The load on the product design sub-system of DOSS is currently running at over 1,000 engineered systems per day. The resultant order accuracy to manufacturing, compensation, and scheduling is estimated as falling in the range of 95% to 99% perfect designs.
Application Of The CSRL Language To The Design Of Diagnostic Expert Systems: The Moodis Experience, A Preliminary Report
Angelo Bravos, Howard Hill, James Choca, et al.
Computer technology is rapidly becoming an inseparable part of many health science specialties. Recently, a new area of computer technology, namely Artificial Intelligence, has been applied toward assisting the medical experts in their diagnostic and therapeutic decision making process. MOODIS is an experimental diagnostic expert system which assists Psychiatry specialists in diagnosing human Mood Disorders, better known as Affective Disorders. Its diagnostic methodology is patterned after MDX, a diagnostic expert system developed at LAIR (Laboratory for Artificial Intelligence Research) of Ohio State University. MOODIS is implemented in CSRL (Conceptual Structures Representation Language) also developed at LAIR. This paper describes MOODIS in terms of conceptualization and requirements, and discusses why the MDX approach and CSRL were chosen.
HAIM OMLET: An Expert System For Research In Orthomodular Lattices And Related Structures
D. D. Dankel II, R. V. Rodriguez, F. D. Anger
This paper describes research towards the construction of an expert system combining the brute force power of algorithmic computation and the inductive reasoning power of a rule-based inference engine in the mathematical area of discrete structures. Little research has been conducted on extending existing expert systems' technology to computationally complex areas. This research addresses the extension of expert systems into areas such as these, where the process of inference by itself will not produce the proper results. Additionally, the research will demonstrate the benefits of combining inference engines and mathematical algorithms to attack computationally complex problems. The specific aim is to produce an expert system which embodies expert level knowledge of orthomodular lattices, graphs, structure spaces, boolean algebras, incidence relations, and projective configurations. The resulting system, implemented on a micro-computer, will provide researchers a powerful and accessible tool for exploring these discrete structures. The system's "shell" will provide a structure for developing other expert systems with similar capabilities in such related areas as coding theory, categories, monoids, automata theory, and non-standard logics.
ESP: An Expert System For Computer Performance Management
Andrew P. Levine
ESP is a prototype Expert System that advises on the computer performance management of IBM MVS installations. Its goals are to increase the productivity of experienced performance analysts and expertly advise newcomers to the field. This paper discusses ESP's domain of application, required inputs, design philosophy, and overall architecture. ESP has been operational about a year. Although it is an experimental system that requires more testing, its initial success demonstrates the potential of Expert System technology to computer performance management.
A Real-Time Knowledge Based Expert System For Diagnostic Problem Solving
Juan Carlos Esteva, Robert G. Reynolds
This paper is a preliminary report of a real-time expert system which is concerned with the detection and diagnosis of electrical deviations in on-board vehicle-based electrical systems. The target systems are being tested at radio frequencies to evaluate their capability to be operated at designed levels of efficiency in an electromagnetic environment. The measurement of this capability is known as ElectroMagnetic Compatibility (EMC). The Intelligent Deviation Diagnosis (IDD) system consists of two basic modules: the Automatic Data Acquisition Module (ADAM) and the Diagnosis System (DS). In this paper only the diagnosis system is described.
Economy In Expert System Construction: The AEGIS Combat System Maintenance Advisor
George Drastal, Tom DuBois, Lorin McAndrews, et al.
We present the design philosophy used in constructing the AEGIS Combat System Maintenance Advisor, a rule-based expert system with over 2000 rules that was completed in less than two years. We attribute part of the success of this project to the decision to have the domain expert actively involved in the process of writing rules, and to the development of a knowledge acquisition tool that checked his work. The project was also noteworthy because it includes a compiler which retargets the knowledge base to alternate run-time environments. The techniques used in this project are general and apply to any propositional rule language for expert systems.
A Program Error Localization Expert System
Bogdan Korel
Error localization in program debugging is the process of identifying program statements which cause incorrect behavior. This paper describes a prototype error localization expert system which guides a programmer during debugging of Pascal programs. The system is interactive: it queries the programmer for the correctness of the program behavior and uses answers to focus the programmer's attention on an erroneous part of the program (in particular, it can localize a faulty statement). The system differs from previous approaches in that it makes use of the knowledge of program structure rather than the knowledge at the level of symptom-fault rules. The knowledge of program structure is represented by a dependence network, which is based on the concept of a dependence relationship between program instructions. The inspiration behind using the dependence network is that any instruction in the execution trace from the beginning to a position of incorrectness could conceivably have been responsible for the faulty behavior. Using the dependence network as a guide to which instruction to examine seems to be an effective way to focus the programmer's attention appropriately. The dependence network is used by the error-locating reasoning mechanism to guide the construction, evaluation, and modification of hypotheses of possible causes of the error. Backtracking reasoning has been implemented in the reasoning mechanism. This type of reasoning is, in some sense, just an abstraction and elaboration of what experienced programmers do intuitively. The expert system frees the programmer from the trial-and-error process that is typical when locating the source of an error with a traditional break-and-examine debugger.
Expert System Makes Image Processing Easier
Gregory Y. Tang
We address the issue of using an expert system to help a user who is not an expert in digital image processing to make appropriate use of image processing software. The EJAUNDICE expert system builder is selected as the tool to construct the expert system, and the r-85 image processing workstation is selected as the target machine. Relevant knowledge is organized as triples and rules. Two types of knowledge are used extensively in establishing the knowledge base. One is the desire type, which represents what the user wants; the other is the available type, which represents what is available in r-85. An example is given to demonstrate how the knowledge base is constructed. The expert system shows more flexibility than the menu-driven approach.
Expert Measurement System For Ultrasonically Characterizing Material Properties
Richard K. Elsley, Ming-Shong Lan
When a human expert performs laboratory measurements, he uses a number of evaluation and decision making techniques that are not usually included in automated measurement systems. These include method selection, method discovery and heuristic evaluation of data and results. This paper describes a preliminary Expert Measurement System that adds these expert thought processes to conventional automated measurements. This Expert Measurement System is the Material Characterization Expert System (MCES). It measures physical properties of materials by the use of ultrasonic waves. Its performance is close to that of a human expert and it operates much more quickly.
Integrating Information From Thermal And Visual Images For Scene Analysis
N. Nandhakumar, J. K. Aggarwal
A new approach has been developed for computer perception of scenes. The approach is based on the integration of information extracted from thermal images and visual images, which provides new information not available by processing either image alone. The thermal behavior of scene objects has been studied in terms of surface heat fluxes. The thermal image is used to measure surface temperature and the visual image provides surface absorptivity and relative orientation. These parameters used together provide estimates of surface heat fluxes. Features based on these estimates are shown to be more meaningful and specific in distinguishing scene components.
TESS - The Tactical Expert System
John F. Gilmore, Melinda M. Fox, Alicia L. Stevenson, et al.
A prime application area for expert system technology is that of automatic target recognition. Existing target recognizers by design are not capable of exploiting all of the information contained in a scene during their classification process (contextual cueing of targets, for example). This paper describes a Tactical Expert System (TESS) research project being developed at the Georgia Tech Research Institute. Developed on a Symbolics Lisp machine, TESS is a prototype expert system for interpreting the tactical implications of infrared scenes. This paper highlights the basic target recognition features of TESS and describes several areas of active research being performed to enhance the TESS system.
A Modified Hough Transform For Detecting Lines In Digital Imagery
Arthur V. Forman Jr.
A variation on the Hough technique for the detection of lines in digital imagery is presented. The proposed parameterization of the Hough space fully exploits the available image resolution. Results are presented illustrating the utility of the approach for finding roads in aerial FLIR (forward-looking infrared) imagery. Finally, a parallel implementation of the algorithm is outlined.
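For readers unfamiliar with the technique, a minimal sketch of the conventional rho-theta Hough accumulator follows; the paper's modified parameterization is not reproduced here, and the function and parameter names are illustrative only.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, rho_res=1.0):
    """Conventional rho-theta Hough accumulator for line detection.

    edge_mask: 2-D boolean array of edge pixels. This is the textbook
    parameterization (rho = x*cos(theta) + y*sin(theta)), not the modified
    one proposed in the paper above.
    """
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-diag, diag + 1, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        r = x * cos_t + y * sin_t                       # rho for every theta
        idx = np.round((r + diag) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1               # vote in the accumulator
    return acc, rhos, thetas

# Peaks in `acc` correspond to lines; each (rho, theta) pair defines one line.
```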
A System To Recognize Objects In 3-D Images
Griff Bilbro, Wesley Snyder
We are developing a system to recognize objects portrayed in 3-D images. Currently the system is implemented as a C programming environment: a control program, a suite of generally useful subroutines, and a shared data structure. An application is constructed by writing one or more procedural models for finding a specific object in an image. In operation, the system is initialized by a supervisor module which later provides certain facilities to each of the procedural models. Initialization consists of filtering the 8-bit z(x,y) image, segmenting it into smooth patches, and recording various properties of those patches in a "scene graph". Each procedural model then examines the scene graph and either identifies the object of interest or rejects the entire image. Each model includes a program of operations to acquire such decisive information. Currently, the environment supports subroutines that (1) analytically fit and classify a specified patch, (2) merge two or more patches (or unmerge some previously blended patches), (3) "defuzz" a patch by directing the absorption of small neighboring patches according to a specified criterion, (4) resegment a portion of the image in some special way, and (5) construct and process lists of patches ordered by location, cardinality, area, or shape criteria.
Real-Time Image Understanding
T. C. Rearick
Future image understanding systems must be able to respond to scene dynamics within a fraction of a second if they are to be useful in real-time applications. Current image understanding systems are not only very limited in capability, but they are painfully slow. One approach to achieving real-time image understanding is to build faster hardware. This paper presents a different approach. Strategies for implementing real-time image understanding systems are discussed which offer alternatives to traditional computational paradigms. Requirements for real-time operation are shown to constrain the selection of artificial intelligence (AI) methodologies. These strategies are currently being tested in an experimental prototype vision system. The topics discussed in this paper are applicable to other non-vision AI applications as well.
A Rule Based System For Automated Industrial Inspection
Ahmed M. Darwish, Anil K. Jain
This paper describes an automated visual inspection system for complex industrial purposes. The system is composed of four main interacting divisions. These represent knowledge, processing modules, data structures and the master program. The knowledge division is composed of a model and a set of rules. The model describes geometrical, positional and relational properties of the items under test. Rules are further classified into representational, procedural and control rules. Representational rules are a set of design rules that should be followed by items under inspection. Procedural rules are the specific steps that should be executed to ensure the validity of representational rules. Control rules are responsible for the sequence of different levels of system activities. Processing modules are simple image processing, morphological and pattern recognition procedures, each assigned a particular job. They share data at different levels of representation. The master program, supervised by the procedural and control rules, chooses the appropriate processing module to continuously update the data or create a new pictorial structure in order to achieve specific goals. The separation between the rules and the master program makes it a learning system in the sense that it can be taught new models, design rules and inspection algorithms. Thus, it can be easily adapted to different applications.
A Stereo Model Based Upon Mechanisms Of Human Binocular Vision
N. C. Griswold, C. P. Yeh
In computer vision, the idea of using stereo cameras for depth perception has been motivated by the fact that in human vision one percept can arise from two retinal images as a result of the process called "fusion". Nevertheless, most stereo algorithms are generally concerned with finding a solution to obtaining depth and three-dimensional shape irrespective of its relevance to the human system. Recent progress in the study of the brain mechanisms of vision has opened new vistas in computer vision research. This paper investigates this knowledge base and its applicability to improving the technique of computer stereo vision. In this regard, (1) a stereo vision model in conjunction with evidence from the neurophysiology of the human binocular system is established herein; (2) a computationally efficient algorithm to implement this model is developed. This algorithm has been tested on both computer-generated and real scene images. The results from all directional subimages are combined to obtain a complete description of the target surface from disparity measurements.
Measurement Of The 3D Radius Of Curvature Using The Facet Approach
Richard S. Loe, Thomas J. Laffey
In this paper the facet model approach is used to calculate the local radius of curvature at each pixel in simulated range imagery. Previous work with the facet model emphasized the interpretation of a 2D image as a surface in three dimensions. This interpretation is complicated by illumination and reflectance of the scene. Range imagery is ideally suited for interpretation under this model since the raw data consists of a sampling of the range to points on surfaces in three dimensions. The images studied consisted of cylinders and planes in which various amounts of noise have been introduced. The effects of varying the window size used in the facet fit are also investigated. The results not only show that accurate curvature measurements can be made in the presence of significant noise but also indicate that the curvature values provide for a rich description of the scene.
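A minimal facet-style sketch of the idea follows: fit a quadratic surface to each window of a range image by least squares and read curvature off the fitted coefficients. The window size, fitting basis, and use of mean curvature are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def local_curvature(z, win=5):
    """Fit a quadratic facet to each window of a range image and return mean curvature.

    Within each win x win window, fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by
    least squares, then evaluate the mean curvature of the fitted surface at the
    window centre (radius of curvature ~ 1/|H| where H is nonzero).
    """
    r = win // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    # Design matrix for the quadratic facet, one row per window pixel.
    A = np.column_stack([np.ones(win * win), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, xs.ravel() * ys.ravel(), ys.ravel() ** 2])
    H = np.zeros_like(z, dtype=float)
    for i in range(r, z.shape[0] - r):
        for j in range(r, z.shape[1] - r):
            patch = z[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a, b, c, d, e, f = np.linalg.lstsq(A, patch, rcond=None)[0]
            zx, zy, zxx, zxy, zyy = b, c, 2 * d, e, 2 * f
            # Mean curvature of the graph z(x, y) at the centre pixel.
            H[i, j] = ((1 + zy ** 2) * zxx - 2 * zx * zy * zxy + (1 + zx ** 2) * zyy) \
                      / (2 * (1 + zx ** 2 + zy ** 2) ** 1.5)
    return H
```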
Object Interpretation Using Boundary Based Perceptually Valid Features
Deborah Walters
An important perceptual task for both human and machine vision is to be able to interpret images in terms of distinct objects. This paper presents a technique for object interpretation in line drawings. The method is based on the use of features which have special perceptual significance for human vision. By using such features, and by devising an orientation-boundary representation, a simple, efficient algorithm can be used to interpret line drawings which can contain both straight and curved lines, and can depict any type of object.
A Prototype Knowledge-Based System To Aid The Oceanographic Image Analyst
Matthew Lybanon, John D. McKendrick, R. E. Blake, et al.
Satellite imagery of the oceans has become an invaluable tool for the oceanographer, adding the breadth of synoptic coverage to the depth of in situ measurements at a few points. But the deluge of data and the labor-intensive nature of current methods of processing data pose serious problems for operational interpreters. A prototype expert system with a knowledge base of mesoscale oceanographic features is being developed as a step towards a more automated environment to support the oceanographic image analyst.
VEST - The Visual Expert System Testbed
Stephen D. Tynor, Chi-Cheung Tsang, Kurt Gingher, et al.
Vision systems have been utilized in a variety of applications over the last thirty years. As each application is usually considered to be unique, many organizations have reinvented (or recoded) existing algorithms rather than drawing from outside sources. This paper describes the VEST computer vision system which provides the user with a core of the most widely used computer vision algorithms. Combined with its knowledge base system architecture, VEST allows the user to create an advanced computer vision baseline system to exploit specific application domains.
Error Analysis For A Two-Camera Stereo Vision System
J. H. Nurre, E. L. Hall
Stereo imaging is useful for many machine vision and robot guidance applications. Accuracy of the technique is an important research issue. In this paper, an analytical, mathematical approach to determining the measurement error in two-camera stereo systems is developed. For three-dimensional measurements, two perspective images of the scene must be correlated. Using analysis, it is possible to determine the expected error of the measurements in the field of view. Error equations are presented for the two- and three-dimensional cases. One key result points to the fact that optimum measurements at a distance can be made in the center of the field of view. Also, increasing the resolution at the center of the image plane can greatly enhance the accuracy of the measurements.
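For orientation, the standard parallel-axis stereo relations below show why range error grows with distance; these are textbook equations under an assumed parallel-camera geometry, not necessarily the exact error equations derived in the paper.

```latex
% Parallel-axis stereo with focal length f, baseline b, disparity d = x_l - x_r.
\begin{align*}
Z &= \frac{f\,b}{d}, \\
\left|\frac{\partial Z}{\partial d}\right| &= \frac{f\,b}{d^{2}} = \frac{Z^{2}}{f\,b},
\qquad\text{so}\qquad
\delta Z \approx \frac{Z^{2}}{f\,b}\,\delta d .
\end{align*}
```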
A Logical Basis In The Layered Computer Vision Systems Model
Y. J. Tejwani
In this paper a four-layer computer vision system model is described. The model uses a finite memory scratch pad. In this model planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationships between primitive points. The relationship between points can be of the direct type or the indirect type. Entities are goals which are satisfied by a set of clauses. The grammar used to construct these clauses is examined.
A Computer Vision System For Understanding The Movement Of A Wave Field
Goffredo G. Pieroni, Olin G. Johnson
Mathematical modeling of seismic phenomena is an important tool for producing synthetic wavefield snapshots as well as synthetic seismic traces. The generation of those models is mainly carried out as an aid for studying the propagation of a signal when it is transmitted through or reflected from a given horizon. In fact, information regarding the geometry of the horizon and the nature of the materials, the contact of which gives rise to the horizon, is fundamental in geological exploration. The snapshots mentioned above form a sequence of images showing a two-dimensional representation of the wave field evolving into refracted and reflected waves during a given period of time. By analyzing the behavior of the reflected and refracted waves a human observer can extract parameters like velocity of the waves, geometry of the horizon, reflection and refraction parameters, and nature of the materials. This is an intelligent process which infers properties of objects by relating aspects of an apparently totally different nature (like a sequence of pictures where arcs of circles are delineated). It is interesting to observe that it is possible for the human eye to classify the waves residing in each configuration, tracking them from one image to the next and giving, finally, a synthetic description of the environment. Frequently a problem arises when reflected, refracted, and wrap-around waves form complex configurations. In these cases the distinction of the components of the wavefield becomes difficult. An automatic system, which is able to emulate the behaviour of a human observer analyzing a simple sequence of synthetic snapshots, is presented.
Cylinder Detection And Measurement In Range Data
Robert Y. Li
This paper is concerned with the problem of detecting and measuring cylindrical objects in range data. Two different approaches are developed for the detection phase. The first approach involves a sequential process of segmenting and classifying every potential surface. The classification is based on the decomposition of a quadratic surface model. The second approach utilizes the Hough transform to detect the cylindrical pixels. The transform may produce significant clusters in the parameter space. By performing a least-squares fit to the detected pixels, one can derive useful information about the sizes and locations of the cylindrical objects. Synthetic range images are generated to test these ideas.
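A hedged sketch of the first, model-decomposition approach follows: fit a general quadric to the range points and inspect the eigenvalues of its quadratic form (a circular cylinder has one near-zero eigenvalue). The fitting procedure shown is an assumed illustration, not the paper's exact decomposition.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of a general quadric to 3-D range points (N x 3 array).

    Fit x^T A x + b^T x + c = 0 and inspect the eigenvalues of A: a circular
    cylinder gives two (nearly) equal eigenvalues and one near zero, with the
    near-zero eigenvector along the cylinder axis.
    """
    x, y, z = points.T
    # Columns: x^2, y^2, z^2, xy, xz, yz, x, y, z, 1
    D = np.column_stack([x * x, y * y, z * z, x * y, x * z, y * z,
                         x, y, z, np.ones_like(x)])
    # Homogeneous least squares: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    q = vt[-1]
    A = np.array([[q[0],     q[3] / 2, q[4] / 2],
                  [q[3] / 2, q[1],     q[5] / 2],
                  [q[4] / 2, q[5] / 2, q[2]]])
    eigvals, eigvecs = np.linalg.eigh(A)
    return q, eigvals, eigvecs   # one eigenvalue ~ 0  =>  cylindrical surface
```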
GENSCHED - A Real World Hierarchical Planning Knowledge-Based System
Antonio C. Semeco, Bryan D. Williams, Stefan Roth, et al.
This article describes the design and implementation of GENSCHED, a hierarchical planning system for scheduling production orders in manufacturing facilities. In a typical manufacturing application, orders for the production of certain items arrive continuously and must be scheduled to minimize tardiness, wait-in-process time, and early completion in addition to maximizing throughput and resource utilization. In many cases, arriving orders generate manufacturing requirements beyond the capacity of the plant and compromises must be made. Manufacturing operations desire the capability to rearrange the backlog of orders to expedite higher-priority ones, and to estimate the effect of newly arriving orders on the current backlog. GENSCHED features a hierarchical planner which takes advantage of the repetitive nature of the plans to efficiently generate valid schedules. A user interface allows manual and automatic scheduling and "what-if" processing of production orders. Finally, a rule-based subsystem for entering and maintaining domain-specific knowledge is exploited to improve schedules and minimize search.
A Knowledge-Based Approach To Ship Identification
R. W. McLaren, H.-Y. Lin
A knowledge-based ("expert") classifier was designed for classifying ship silhouettes generated from forward-looking infrared (FLIR) imagery. A knowledge base is constructed based on interviews with a U.S. Navy officer along with confirming evidence from Jane's Fighting Ships. This knowledge provides the means to set up a rule-based sequential decision tree or net. The conditions of the production rules deal with silhouette "humps" and their properties such as the number, spacing, relative placement on the deck line, and the like. This classifier was applied to a sample consisting of about 500 silhouettes distributed approximately uniformly over eight classes of ship targets. Results were equal to or better than those obtained using a more "conventional" classifier.
Development Of Model-Based Fault-Identification Systems On Microcomputers
Magdi Ragheb, Dennis Gvillo
The development of Model-Based Production-Rule Analysis Systems for the identification of faults in Engineering Systems is discussed. Model-Based systems address the modelling of devices based on knowledge about their structure and behavior, in contrast to Rule-Based systems, which use rules based solely on human expertise. The methodology presented uses the Fault-Tree Analysis technique for problem representation to generate Goal-Trees simulating the behavior of system components. Application of the methodology to a Knowledge-Base for the identification of the dominant accident sequences, consequences, and recommended recovery and mitigation actions for a Pressurized Water Reactor (PWR) is demonstrated. The accident sequences were generated using probabilistic risk analysis methods and analyzed with the Transient Reactor Analysis Code (TRAC). The Analysis System uses a backward-chaining, deduction-oriented antecedent-consequent logic. Typical case results are given, and the aspects of using the methodology for Fault-Identification are discussed.
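A minimal, generic fault-tree evaluation sketch follows to make the representation concrete; the gate structure, event names, and probabilities are illustrative assumptions and are unrelated to the actual PWR knowledge base.

```python
# Generic fault-tree evaluation: gates combine independent basic-event probabilities.

def evaluate(node, p_basic):
    """node: ('AND'|'OR', [children]) or a basic-event name."""
    if isinstance(node, str):
        return p_basic[node]
    gate, children = node
    probs = [evaluate(c, p_basic) for c in children]
    if gate == "AND":                      # all inputs must fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    if gate == "OR":                       # any input failing suffices
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

# Hypothetical top event: pump failure AND (valve stuck OR sensor fault)
tree = ("AND", ["pump", ("OR", ["valve", "sensor"])])
print(evaluate(tree, {"pump": 1e-3, "valve": 5e-4, "sensor": 2e-3}))
```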
Design Issues For A Knowledge Based Controller For A Track-While-Scan Radar System
Edwin R. Addison
Traditional track-while-scan radar systems use a conventional algorithmic approach to the control function. Typically, a fixed amount of time is allocated to search volumes based on geometry and a uniform division of resources according to some fixed rule. Advancement of radar has introduced many new flexibilities into system components, such as electronically steerable phased arrays which allow practically instantaneous switching, digital and multiple beamforming, increased computing capability which allows maintaining track on very large numbers of targets, etc. Radar system design is more and more software and less hardware oriented. Consequently, additional flexibility is available to the TWS designer. Specifically such decisions as how long to dwell on an individual target, when and how quickly to search certain subvolumes, when to stop looking for a missing target, and so forth, add complexities to the control design. To best optimize and exploit these resources, a real-time expert system can be used. This paper explores the ramifications of using an expert system to perform this task. Specific expert system design issues are addressed.
A Rule-Based Interpretation System For Signal Images
Zhen Zhang, M. Simaan
A rule-based interpretation system for images consisting of signals (such as seismic images) is presented in this paper. A test run of the interpreter on a piece of real seismic data shows that it can successfully segment the seismic image into regions of common signal character. The system consists of two substructures: a texture analyzer and an intelligent interpreter. The texture analyzer adapts the "texture energy measurement" method developed by Laws to extract discriminant features from the texture-like signal image. The major function of the analyzer is to assign a vector of initial certainty factors (CFs) to each texel in the image based on the extracted feature measures. The elements of the CF vector essentially correspond to the degree of membership of the texel to each of the texture regions in the image. The intelligent interpreter, which is the rule-based system, is made up of a knowledge database, a reasoning engine and a parallel region growing controller. Whenever a growing region requests to classify one of its boundary texels, a fact list is formed carrying information about the texel and its neighborhood. The interpreter takes the fact list and searches in the knowledge database to determine if any rules can be exercised. In the case of a seismic image, the rules are mostly geologic in nature. After a sequence of executions of rules and manipulations of CF numbers, a final CF vector emerges. If this vector favors the requesting region by having its corresponding component element exceed a preset threshold, the texel is classified to this texture class and merged into the requesting region. A noticeable advantage of the intelligent interpreter over other conventional classification techniques is its capability of patching small areas in the image for which the original data apparently do not provide enough discriminant information. An example illustrating these results on real seismic data from the Gulf of Mexico is presented.
Knowledge-Based Functional-Symbol Understanding In Electronic Circuit Diagram Interpretation
C. L. Huang, J. T. Tou
The AUTORED system is a computer-based system for automatic reading of electronic circuit diagrams, which was developed several years ago. This paper presents some of our new results in AUTORED research. The design of AUTORED consists of two major components: automatic interpretation of electronic diagrams, and organization of interpretation results into a knowledge base for CAD applications. An electronic circuit diagram may be segmented into three parts which are the graphical functional symbols, the connection line segments, and the denotations. New techniques for extracting symbols and denotations from the circuit diagram are presented in this paper. These techniques are designed for junction and corner extraction, line segment tracing and linking, line segment classification, connection-line segment removal and blocking, symbol locating and denotation character grouping. A knowledge base is developed to facilitate the tracing, template-matching, and categorization processes.
Resource Limitation Issues In Real-Time Intelligent Systems
Peter E. Green
This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyor belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that combine heuristic and approximate evidence in making real-time decisions.
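One common way to bound the growth of alternate hypotheses, in the spirit of the techniques described above, is beam pruning; the sketch below is a generic illustration with assumed function names, not the HEARSAY-based system's actual control code.

```python
import heapq

def beam_expand(hypotheses, expand, score, beam_width=10):
    """Limit combinatorial hypothesis growth by keeping only the best candidates.

    `expand` maps a hypothesis to its successors and `score` ranks plausibility;
    both are assumed, application-specific callables.
    """
    successors = []
    for h in hypotheses:
        successors.extend(expand(h))
    # Keep only the beam_width most plausible hypotheses; the rest are dropped
    # so that per-cycle processing time stays bounded.
    return heapq.nlargest(beam_width, successors, key=score)
```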
Classification Of Textured Surfaces Based On Reflection Data
Sharayu Tulpule, Charles H. Knapp
A statistical approach to classification of surfaces based on the spatial intensity distribution of reflected light is proposed. In this approach, a surface is modelled as a collection of randomly oriented mirror-like micro-facets which give rise to a spatial intensity distribution of reflected light. A correlation matrix is formulated based on the co-occurrence of two given reflected intensities at points separated by a given angular distance. Features based on this matrix are proposed, and two classification schemes based on maximum-likelihood and nearest neighbor decision rules are implemented on these features. Experimental results for the classification schemes are presented for a variety of sample surfaces such as paper, cloth, felt, cork, etc. Success rates of
Three-Dimensional Motion Analysis Using Shape Change Information
Tzay Y. Young, Seetharaman Gunasekaran
This paper describes a method that extracts three-dimensional (3D) motion information by analyzing 2D shape changes of object faces in an image sequence. An object image is segmented into regions, each region corresponding to a face of the object. With orthographic projections, 3D rotation angles can be computed from a set of linear shape change parameters. For perspective projections, under certain conditions the effect on shape changes of planar faces can be taken into consideration by including quadratic parameters. Iterative algorithms for estimation of linear and quadratic shape change parameters are derived and implemented, using an operator formulation. 3D translation and rotation parameters are calculated from the shape change parameters. Experimental results on segmentation and parameter estimation are presented.
Analytical Identification Of The Calibration Matrices Using The Two Plane Model
Alberto Izaguirre, Pearl Pu, John Summers
A new method of camera calibration is proposed for active visual sensing for use in 3D analysis. The method, based on a variation of the two plane camera model, permits the calibration of a mobile camera as a function of the position and orientation of the camera. The algorithm for identification of the calibration matrices is divided into two steps: first, calibration for a fixed position of the camera; and second, calibration as a function of the position and orientation of the camera, utilizing the prior information from the fixed position. The calibration procedure for the fixed position is a linear least-squares method; the calibration procedure for the second step has been modified so as to obtain an analytical solution rather than a numerical iteration as in.
Gross Segmentation Of Color Images Of Natural Scenes For Computer Vision Systems
Mehmet Celenk, Stanley H. Smith
This paper describes a new systematic method for gross segmentation of color images of natural scenes. It is developed within the context of the human visual system and mathematical pattern recognition theory. The eventual goal of the research is to integrate these two concepts to obtain visually distinct image segments which are more reliable and tractable for the higher-level analysis or interpretation processes involved in a computer vision system. This new computational technique is proposed in accordance with human color perception to detect image clusters efficiently using only one-dimensional (1-D) histograms of the L*, H°, C cylindrical coordinates of the (L*, a*, b*) uniform color system selected as the feature space. The method is employed together with the Fisher linear discriminant function to isolate and extract the detected image clusters correctly. In order to obtain the features most useful for a given image, a new feature extraction technique is proposed. It is a statistical-structural method which makes use of the spatial and spectral information contained in the local areas of the image domain. A set of smoothing and line templates are developed and used to refine the extracted image regions in the spatial domain. They can also be applied to binary images for smoothing purposes.
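A small sketch of the cylindrical-coordinate step follows: converting (L*, a*, b*) to (L*, C*, H°) and forming the 1-D histograms mentioned above. The bin counts and ranges are illustrative assumptions; the paper's cluster-detection and Fisher-discriminant stages are not reproduced.

```python
import numpy as np

def lab_to_lch(L, a, b):
    """Convert (L*, a*, b*) arrays to the cylindrical (L*, C*, H°) coordinates."""
    C = np.hypot(a, b)                              # chroma
    H = np.degrees(np.arctan2(b, a)) % 360.0        # hue angle in degrees
    return L, C, H

def channel_histograms(L, C, H, bins=64):
    """One-dimensional histograms of the three cylindrical coordinates.

    Peak/valley analysis of these 1-D histograms (a simplified stand-in for the
    paper's cluster-detection step) yields candidate colour clusters.
    """
    hL, _ = np.histogram(L, bins=bins, range=(0, 100))
    hC, _ = np.histogram(C, bins=bins, range=(0, 128))
    hH, _ = np.histogram(H, bins=bins, range=(0, 360))
    return hL, hC, hH
```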
A True 2D Edge Detector
Tom Miltonberger, Hans Muller
Line finding is a very basic and important step in the low level vision process. Lines are important because they represent the border between two regions and thus help to define and distinguish the regions. Lines may also represent physical objects in themselves (at low resolution a tank barrel looks like a ridge line). In the past, ad hoc approaches to line finding have been most prevalent. Recently, Canny has taken a more rigorous approach to edge detection. He has developed an optimal edgel (i.e., individual edge pixel) detector. This detector is optimal under the assumption of additive Gaussian noise and with the constraint that multiple responses from the same edge should be minimized. For a step edge in white Gaussian noise this operator can be closely approximated by convolving the image with a Gaussian mask and then calculating the gradient. A non-maximal suppression operation (in the direction of the gradient at each pixel) is applied to the image. The resulting image is then thresholded. The width of the smoothing Gaussian and the threshold are determined by performance constraints: probability of detection, probability of false alarm, and localization error of the edge. If one accepts the edge model and performance criteria given by Canny, this operator is optimal for detecting and estimating the amplitude and direction of individual edgels (i.e., it is optimal for detecting the 1D edge profile). Unfortunately, Canny's method of grouping individual edgels into lines is not optimal and is in fact quite ad hoc. We have taken a more rigorous approach to extending the Canny 1D edgel detector into an optimal 2D edge detector. Our method applies optimal detection and estimation techniques to the 2D problem. Optimality is determined with respect to the universally most powerful (UNIP) detector for 2D edges. We have been able to develop the optimal detector for a number of edge models. A detector for a straight-edge model with constant but unknown amplitude has been implemented. In the implementation we closely approximate the UNIP detector. This paper describes our approach to the problem.
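For concreteness, a simplified Canny-style edgel detector of the form summarized above (Gaussian smoothing, gradient, non-maximum suppression, single threshold) is sketched below; the parameter values are assumptions, and the authors' 2D grouping stage is not included.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def canny_edgels(image, sigma=1.5, threshold=4.0):
    """Simplified Canny-style edgel detection (no edgel grouping)."""
    smoothed = gaussian_filter(image.astype(float), sigma)   # Gaussian smoothing
    gy, gx = np.gradient(smoothed)                           # image gradient
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0           # gradient direction
    out = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # roughly horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                  # one diagonal
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            elif a < 112.5:                 # roughly vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                           # the other diagonal
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]       # local maximum along the gradient
    return out > threshold                  # thresholded edgel map
```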
A Filter Design Approach To Textured Image Segmentation
Parvez K. Bashir
A scheme is presented for segmenting textured images using a filter design approach. The discrete Fourier transform (DFT) of each texture in the training set is computed. Using the DFT information, an empirical method for evaluating the texel size (in pixels) for the training set of textures is given. A separable filter template of a particular texel size is designed for each texture based on the DFT. Each textured image is then convolved with these sets of filter templates. A training feature vector is stored in the classifier for each texture by summing the outputs of the filtered images. For texture classification or segmentation, a texture mosaic consisting of one or more textures in the data set is convolved with the same set of filter templates applied in the training procedure. Each filtered image output is summed within image blocks of a particular texel size. A feature vector is computed for each block and fed into a minimum distance classifier. Classification accuracies of more than 90% are achieved using a set of four textures from the Brodatz album of textures.
Segmentation And Global Parameter Estimation Of Textured Images Modelled By Markov Random Fields
Fernand S. Cohen, Zhigang Fan
This paper is concerned with identifying and estimating the parameters of the different texture regions that comprise a textured image. A textured region here is modelled by a Markov Random Field (MRF). The MRF is parameterized by a parameter vector α and has a noncausal structure. We assume no a priori knowledge about the different texture regions, their associated texture parameters, or the number of textured regions present. The image is partitioned into disjoint square windows and a maximum likelihood estimate (MLE) (or a sufficient statistic) α* for α (for a fixed-order model) is obtained in each window. The components of α* are viewed as features, and α* as a feature vector. The windows are grouped into different texture regions based on feature selection and clustering analysis of the α* vectors in the different windows. To simplify the clustering process, the dimensionality of the feature vector is reduced via a Karhunen-Loeve decomposition of the between-to-within scatter matrix of the α* vectors. Each α* is projected onto the dominant mode (eigenvector) of the scatter matrix. The projected data is used in the clustering process. The clustering is achieved by minimizing a within-group variance criterion which has been weighted by a factor that explicitly depends on the number of groups. To reduce the computational cost associated with this method, it is accompanied by a "valley method". Finally, by exploiting the asymptotic normality of the MLE, we compute the global MLE α* for each textured region by properly combining the locally estimated MLEs α* in the various windows that comprise the region. The global MLE α* for a region is nothing but an appropriately weighted linear combination of the local MLE set {αk*}.
Symbolic Simulation Of Engineering Systems On A Supercomputer
Magdi Ragheb, Dennis Gvillo, Henry Makowitz
Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the methodologies presented for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed.
A Comparison Of The Mycin Model For Reasoning Under Uncertainty To A Probability Based Model
Richard E. Neapolitan
Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to hold. Based on the verity of some assertions, the rules deduce new conclusions. In many cases, neither the rules nor the assertions are known with certainty. The system must then be able to obtain a measure of partial belief in the conclusion based upon measures of partial belief in the assertions and the rule. A problem arises when two or more rules (items of evidence) argue for the same conclusion. As proven in , certain assumptions concerning the independence of the two items of evidence are necessary before the certainties can be combined. In the current paper, it is shown how the well-known MYCIN model combines the certainties from two items of evidence. The validity of the model is then proven based on the model's assumptions of independence of evidence. The assumptions are that the evidence must be independent in the whole space, in the space of the conclusion, and in the space of the complement of the conclusion. Next a probability-based model is described and compared to the MYCIN model. It is proven that the probabilistic assumptions for this model are weaker (independence is necessary only in the space of the conclusion and the space of the complement of the conclusion), and therefore more appealing. An example is given to show how the added assumption in the MYCIN model is, in fact, the most restrictive assumption. It is also proven that, when two rules argue for the same conclusion, the combinatoric method in a MYCIN version of the probability-based model yields a higher combined certainty than that in the MYCIN model. It is finally concluded that the probability-based model, in light of the comparison, is the better choice.
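The parallel-combination formula of the published MYCIN/EMYCIN model, which the paper analyzes, can be stated compactly; the snippet below gives the standard formula only and does not reproduce the paper's probability-based alternative.

```python
def combine_cf(x, y):
    """Standard MYCIN/EMYCIN parallel combination of two certainty factors
    supporting the same conclusion (the published formula, not necessarily
    the exact variant analyzed in the paper)."""
    if x >= 0 and y >= 0:
        return x + y - x * y          # two supporting items of evidence
    if x < 0 and y < 0:
        return x + y + x * y          # two disconfirming items of evidence
    return (x + y) / (1 - min(abs(x), abs(y)))   # mixed evidence

print(combine_cf(0.6, 0.5))   # 0.8 -- the combined belief exceeds either alone
```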
Study Of The Different Methods For Combining Evidence
Yizong Cheng, Rangasami L. Kashyap
Suppose there are several mutually exclusive hypotheses H1, ..., Hn about the state of nature. Let B(Hi|E) denote the conditional belief that Hi is true given the evidence E. The conditional belief can be a real number in the range [0, 1], an interval of these numbers, or a linguistic variable like "likely". Let B(Hi|E1) and B(Hi|E2) stand for the belief in the truth of the hypothesis Hi given evidence E1 and E2 separately, E1 and E2 being two "independent" bodies of evidence (the concept of independence will be discussed later). Let B(Hi|E1E2) denote the belief in the truth of Hi given both E1 and E2. E1 and E2 together may also be considered as a body of evidence, but we distinguish it from E1 or E2 alone by calling the former compound and the latter atomic. Clearly B(Hi|E1E2) is some function of the individual belief variables B(Hi|E1) and B(Hi|E2), i=1,...,n. There are several methods for computing the integrated belief or conditional probability function from its constituents, two notable ones being the Bayesian approach and the Dempster-Shafer approach with its historical antecedents due to Bernoulli and Lambert. Our aim in this paper is to study the axioms and consequences of the various approaches and evaluate whether they satisfy some common axioms which decision makers expect from these rules for combining various types of evidence.
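For reference, the textbook Dempster-Shafer rule of combination for two independent bodies of evidence is sketched below; the mass assignments in the example are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two independent bodies of evidence.

    m1, m2: mass functions mapping frozensets of hypotheses to masses summing to 1.
    This is the textbook Dempster-Shafer rule, shown for comparison with the
    Bayesian approach discussed above.
    """
    combined = {}
    conflict = 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB                    # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

H1, H2 = frozenset({"H1"}), frozenset({"H2"})
theta = H1 | H2                                    # the full frame of discernment
m1 = {H1: 0.6, theta: 0.4}
m2 = {H1: 0.5, H2: 0.3, theta: 0.2}
print(dempster_combine(m1, m2))                    # normalized combined masses
```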
Decision Support For Fuzzy, Probabilistic And Control Processes: A Prolog Assistant
Y. J. Tejwani, R. A. Jones
Most of the systems encountered by decision makers are dependent on fuzzy variables. In this paper a few of these systems are studied. These systems are called Variable Fuzzy Systems (VFS). The necessary details about operations on fuzzy sub-sets are reviewed. The deductive processes involved in reaching decisions about fuzzy systems are examined. The implementation of these processes in Prolog is discussed. The Prolog programs (assistant) used for decision support in some of these variable fuzzy systems are listed.
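The paper's assistant is written in Prolog; as a language-neutral illustration of the fuzzy sub-set operations it reviews, the sketch below shows the usual min/max definitions of intersection, union and complement over discrete membership grades. The supplier names and grades are hypothetical.

```python
# Membership grades of elements in two fuzzy subsets.
cheap = {"supplier_a": 0.8, "supplier_b": 0.4, "supplier_c": 0.1}
fast  = {"supplier_a": 0.3, "supplier_b": 0.9, "supplier_c": 0.6}

def fuzzy_and(f, g):
    """Intersection of fuzzy subsets: pointwise minimum of grades."""
    return {x: min(f.get(x, 0.0), g.get(x, 0.0)) for x in set(f) | set(g)}

def fuzzy_or(f, g):
    """Union of fuzzy subsets: pointwise maximum of grades."""
    return {x: max(f.get(x, 0.0), g.get(x, 0.0)) for x in set(f) | set(g)}

def fuzzy_not(f):
    """Complement of a fuzzy subset."""
    return {x: 1.0 - m for x, m in f.items()}

# "cheap AND fast" ranks supplier_b highest (grade 0.4).
print(max(fuzzy_and(cheap, fast).items(), key=lambda kv: kv[1]))
```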
Using Prototypes For Knowledge-Based Consultation And Teaching
Henrik Nordin
The knowledge-based approach to software development involves formulating the domain knowledge in a declarative way and leaving the procedural control to an inference engine. An important problem with this method of developing software is that it is difficult to express and represent domain-dependent control knowledge, e.g. an expert's strategies. In this paper we propose the use of prototypes, based on an expert's typical cases, to represent domain-dependent control knowledge. We discuss both a consultation system and a knowledge-based tutor working with prototypes.
A Conceptual Clustering Scheme For Frame-Based Knowledge Organisation
H. Krishna Murthy, N. Narasimha Murty
Expert systems are strongly characterised by their use of a large collection of domain-specific knowledge acquired from human experts. Several data structures have been proposed and used in the past for storing this knowledge. Some of the popularly used structures are (i) frames, (ii) scripts, (iii) semantic nets and (iv) production rules. Knowledge acquired from the expert over a length of time tends to be inconsistent and redundant, thus requiring an enormous amount of storage space. Clustering algorithms can be successfully employed to modify the knowledge base so as to make it concise and consistent. A similarity measure has been defined between patterns represented as production rules and semantic nets and used for clustering. A frame is a structure that captures the hierarchical relationship between several concepts characterizing a pattern and/or subpatterns. Here a frame is viewed as a collection of concepts which are related to one another through a binary predicate. Clustering is used to group various patterns belonging to a set of classes and obtain a super concept. Similarity between two patterns is defined using a linear structure corresponding to the original hierarchical representation. This conversion is obtained using a suitable tree-traversal scheme. Several operations defined on the hierarchical data structure are used to define the similarity measure among patterns and pattern classes. The similarity measure between patterns represented as concept frames is used along with a hierarchical agglomerative clustering algorithm to reduce the size of the knowledge base.
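A minimal sketch of the kind of hierarchical agglomerative clustering described, assuming patterns have already been linearized into concept sequences and using a simple concept-overlap similarity; the similarity function and threshold are illustrative assumptions, not the paper's measure.

```python
def agglomerate(patterns, similarity, threshold):
    """Single-linkage agglomerative clustering: repeatedly merge the two
    most similar clusters until no pair exceeds the similarity threshold."""
    clusters = [[p] for p in patterns]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(similarity(a, b) for a in clusters[i] for b in clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i].extend(clusters.pop(j))
    return clusters

# Toy similarity between linearized concept sequences: fraction of shared concepts.
def concept_overlap(a, b):
    return len(set(a) & set(b)) / len(set(a) | set(b))

frames = [("engine", "piston"), ("engine", "valve"), ("wing", "flap")]
print(agglomerate(frames, concept_overlap, threshold=0.3))
```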
An Extendible Graph Processor For Knowledge Engineering
B. J. Garner, E. Tsui
The theory of conceptual graphs offers a uniform and powerful knowledge representation formalism, and an extendible graph processor has been implemented to process domain dependent knowledge that is encoded in canonical graphs. Functional components in the extendible graph processor are described. The language PROLOG is used to implement canonical graphs and the processing tools of the extendible graph processor. Applications of the conceptual graph model are highlighted with a detailed example of schema/script processing.
A Cause Based Method Of Knowledge Representation And Its Application To Lift Scheduling
Alan Howson, Duncan Gillies
The traditional way of encoding knowledge for an expert system is to use the Horn clause. This allows easy implementation of goal-oriented search procedures, but requires that the data base be complete and consistent. An alternative scheme is to use cause-based systems, where an identified cause can lead to one or more effects. With a knowledge base encoded in this way, it is possible to use a learning procedure to determine which effects are related to which cause. In this way rules can be continually inferred from the input data stream. The method has been successfully tried on two systems with dynamic properties. These are a lift scheduler and a system for discovering patterns of digits which are randomly embedded in noise.
Automatic Linking Process Of Seismogram Using Branch And Bound Search
Kou-Yuan Huang, K. S. Fu, Z. S. Lin
A pattern growing technique is proposed for the automatic linking process of seismograms. The branch and bound search algorithm and the distance calculation of the Gram approximate function are used to find the best way to correlate the seismic reflectors. The experimental results on simulated seismograms and real seismic data are quite good.
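As a generic illustration of branch-and-bound linking (not the paper's Gram-function distance), the sketch below links one reflector across traces by expanding partial linkings in order of accumulated time jump and pruning any partial linking that already costs more than the best complete one found; the pick times are hypothetical.

```python
import heapq

def link_reflector(picks):
    """Branch-and-bound linking of one reflector across traces.
    picks[t] lists candidate reflection times on trace t; the branch cost
    is the time jump between consecutive traces, and partial linkings
    whose cost already exceeds the incumbent are pruned."""
    best_cost, best_path = float("inf"), None
    frontier = [(0.0, (t,)) for t in picks[0]]
    heapq.heapify(frontier)
    while frontier:
        cost, path = heapq.heappop(frontier)
        if cost >= best_cost:
            continue                         # bound: cannot beat the incumbent
        if len(path) == len(picks):
            best_cost, best_path = cost, path
            continue
        for t in picks[len(path)]:
            heapq.heappush(frontier, (cost + abs(t - path[-1]), path + (t,)))
    return best_path, best_cost

picks = [[0.80, 1.20], [0.82, 1.40], [0.85, 1.35]]
print(link_reflector(picks))   # links 0.80 -> 0.82 -> 0.85
```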
Automatic Recognition Of Primitive Changes In Manufacturing Process Signals
P. L. Love, M. Simaan
Manufacturing processes are generally monitored by observing sampled process signals. The purpose of this monitoring is to ensure process, and thereby product, consistency and to help diagnose causes of process instability. The interpretation of process signals requires the recognition of what we refer to here as primitive variations, or changes, in signal values which are typically buried in a background of other process related variations and random noise. These primitive variations include changes such as positive or negative sharp peaks, sudden step-like increases or decreases, or gradual ramp-like variations in the signals. Such changes in a given signal indicate a process change which, when combined with corresponding changes in other signals, could lead to the identification of the cause, or at least a rank order of possible causes, which produced these changes. In this paper, we discuss a two-level AI-based procedure for automatic recognition of these primitive changes. This procedure essentially involves applying syntactic analysis either directly to the raw process signals or, whenever not possible, to a filtered version of them. The first level, therefore, involves applying special purpose nonlinear filters which are designed to enhance or isolate a particular primitive variation in the signal. The second level consists of a Signal Interpreter process written in LISP. This process analyses the filtered signals and produces a data structure which represents the primitive variations. A description of the entire interpretation system will be presented, and an example illustrating the application of the method to a process signal of an actual aluminum sheet rolling mill will be shown.
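As one hedged example of the two-level idea, the sketch below uses a median-based nonlinear filter to enhance step-like changes and a second stage that labels candidate primitives; the specific filter and threshold are illustrative choices, not the paper's special-purpose filters or its LISP Signal Interpreter.

```python
from statistics import median

def step_enhancer(signal, window=5):
    """A simple nonlinear filter: median-smooth the signal, then take first
    differences so that step-like changes stand out against noise."""
    half = window // 2
    smoothed = [median(signal[max(0, i - half): i + half + 1])
                for i in range(len(signal))]
    return [smoothed[i + 1] - smoothed[i] for i in range(len(smoothed) - 1)]

def label_steps(diff, threshold):
    """Second stage: label samples whose filtered value exceeds a threshold
    as candidate 'step up' / 'step down' primitives."""
    return [(i, "step_up" if d > 0 else "step_down")
            for i, d in enumerate(diff) if abs(d) > threshold]

signal = [1.0] * 20 + [4.0] * 20            # a clean step, for illustration
print(label_steps(step_enhancer(signal), threshold=1.0))
```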
A Robust Machine Translation System
Rika Yoshi
This paper presents an expectation-based Japanese-to-English translation system called JETR. JETR is designed to translate recipes and other instruction booklets containing ungrammatical and abbreviated sentences. JETR is able to preserve the syntactic style of the source text without carrying syntactic information in the internal representation. JETR's inferencer is able to determine the number (plural or singular) of nouns, fill ellipses and resolve pronoun references.
An Expert System For Diagnosis In Traditional Chinese Medicine
Tao Li, Luyuan Fang, Gordon Stokes
An expert system is being developed for diagnosis in traditional Chinese medicine. This system employs both deductive and differential reasoning. In addition, randomization is added to the system for the selection of hypotheses and production rules. This helps in improving the system's performance and flexibility.
Learning Techniques Applied To Multi-Font Character Recognition
J.-J. Cannat, Y. Kodratoff
In this paper we present the usefulness of symbolic learning techniques for multi-font character recognition. In our already existing models of learning, knowledge is provided and the goal is to find a generalization of given examples, while for our present model of character recognition knowledge has to be found, or rather modified, in order to discover a discriminating generalization. An inventive refining of knowledge has allowed us to achieve multi-font character recognition.
Automatic Pattern Recognition System With Self-Learning Algorithm Based On Feature Template Matching
Masato Nakashima, Tetsuo Koezuka, Noriyuki Hiraoka, et al.
A new self-learning technique has been developed to increase recognition efficiency and improve operability of an automatic pattern recognition system. The new algorithm can automatically make feature templates that emphasize the difference between similar patterns. This algorithm compares all the templates with each other by cross-correlating and picks out similar pattern pairs. The differences between similar patterns are extracted as the feature templates. This system can automatically carry out this procedure for 100 template patterns in 5 minutes.
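A rough sketch of the described procedure, assuming templates are equally sized NumPy arrays; the correlation threshold and the way the difference region is binarized are our assumptions, not the paper's.

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized templates."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() /
                 (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-9))

def feature_templates(templates, similarity_threshold=0.9):
    """For every pair of templates that correlate above the threshold, keep
    only the region where they differ as a discriminating feature template."""
    features = {}
    names = list(templates)
    for i, na in enumerate(names):
        for nb in names[i + 1:]:
            if normalized_cross_correlation(templates[na], templates[nb]) > similarity_threshold:
                diff = np.abs(templates[na] - templates[nb])
                features[(na, nb)] = (diff > diff.mean()).astype(np.uint8)
    return features
```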
Scavenger: An Experimental Rete Compiler
David Bridgeland, Larry Lafferty
The Rete algorithm is a well-known method for increasing the speed of pattern matching in production systems. The Rete technique requires that an expert system rule base be compiled into a network structure. The Scavenger compiler provides a means for specifying the criteria by which a Rete network should be constructed. Work with the Scavenger compiler should provide insight into the characteristics of efficient Rete networks.
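The sketch below illustrates only one aspect of the Rete idea, sharing condition tests across rules through common alpha memories, and omits beta joins and variable bindings; the rule and attribute names are invented.

```python
from collections import defaultdict

class AlphaNetwork:
    """Minimal sketch of Rete-style condition sharing: each working-memory
    element is tested once per distinct condition, and every rule using
    that condition reads from the same alpha memory."""

    def __init__(self, rules):
        # rules: name -> list of (attribute, value) conditions
        self.rules = rules
        self.alpha_memory = defaultdict(list)   # condition -> matching WMEs

    def add_wme(self, wme: dict):
        # Each distinct condition is tested exactly once per element.
        for cond in {c for conds in self.rules.values() for c in conds}:
            attr, value = cond
            if wme.get(attr) == value:
                self.alpha_memory[cond].append(wme)

    def fireable(self):
        # A rule is fireable when every one of its alpha memories is non-empty.
        return [name for name, conds in self.rules.items()
                if all(self.alpha_memory[c] for c in conds)]

rules = {"hot-alarm": [("sensor", "temp"), ("state", "high")],
         "temp-log":  [("sensor", "temp")]}
net = AlphaNetwork(rules)
net.add_wme({"sensor": "temp", "state": "high"})
print(net.fireable())   # ['hot-alarm', 'temp-log']
```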
Visual Navigation By Tracking Of Environmental Points
Amit Bandopadhay, Dana H. Ballard
An observer can facilitate the computation of egomotion parameters by tracking a point in the environment. The optical flow field generated by the tracking motion is analyzed for a translating and rotating observer. It is assumed that the observer tracks accurately, without significant slippage. In this case the constraint on the egomotion parameters due to the optical flow field is much simpler than that for the non-tracking case. A principal advantage of the proposed scheme is that the computational process does not need to know the tracking velocity. The present method provides a mathematical formulation for the problem, which was also addressed in a qualitative way by researchers such as Cutting. A simple algebraic analysis determines when the parameters of motion can be computed uniquely.
Fast Path Planning In Unstructured, Dynamic, 3-D Worlds
Martin Herman
Issues dealing with fast motion planning in unstructured, dynamic 3-D worlds are discussed, and a fast path planning system under development at NBS is described. It is argued that an octree representation of the obstacles in the world leads to fast path planning algorithms. The system we are developing performs the path search in an octree space, and uses a hybrid search technique that combines hypothesize and test, hill climbing, A*, and multiresolution grid search.
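As a hedged illustration of one of the search components mentioned, the following is a plain A* loop over a discrete space; the paper's planner searches octree cells and mixes in hypothesize-and-test, hill climbing, and multiresolution grids, which this sketch does not attempt.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, cost, heuristic):
    """Plain A* search over a discrete space of cells."""
    tie = count()                                     # heap tie-breaker
    frontier = [(heuristic(start, goal), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                              # reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt, goal), next(tie), ng, nxt, node))
    return None

# 4-connected grid example with obstacles given as a set of blocked cells.
blocked = {(1, 1), (1, 2)}
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4 and (x + dx, y + dy) not in blocked]
print(a_star((0, 0), (3, 3), neighbors,
             cost=lambda a, b: 1.0,
             heuristic=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])))
```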
Sensor Driven Robot Systems Testbed
Raymond W. Harrigan
Intelligent robot systems which operate in a semiautonomous fashion (i.e., the operator serves only as a high level supervisor) require the integration of sensors and mechanisms coupled by intelligent software. Since such robot systems operate in the real world with imperfect knowledge of that world, error recovery is an important aspect of any control strategy. Configured with a 6 degree-of-freedom robot manipulator, two dimensional vision and force sensing, the Sensor Driven Robot Systems Testbed offers an environment for the development of control concepts for intelligent machine systems. The Sensor Driven Robot Systems Testbed has led to the development of active sensing concepts using complementary sensors and a highly modular control software concept. Both concepts are currently being implemented within the testbed environment.
Automatic 3D Reconstruction From Serial Section Electron Micrographs
Peter G Selfridge
Neuron tracing of serial sections is the process of reconstructing the three-dimensional structure of a neuron from a series of two-dimensional cross sectional images. Automatic neuron tracing is a challenging computer vision problem. This paper first describes the CARTOS-ACE neuron tracing system developed at Columbia University. It then describes the program TRACER-1 and presents a detailed example of its execution. It then examines TRACER-1's performance more generally and discusses further improvements, many of which will require encoding more knowledge into the next version of the program.
Knowledge Acquisition For Autonomous Navigation
Francis B. Hoogterp, Steven A. Caito
The design of a robotic vehicle requires a unique knowledge of the world in which the vehicle is to operate. This paper investigates the potential of a knowledge acquisition system for such a vehicle, based on a scanning laser radar sensor system that can provide 3-D surface descriptions of the surveyed terrain simultaneously with an active reflectance image of the scene. The advantages and disadvantages of this sensor are explored along with algorithms for efficiently extracting the type of knowledge required to control a robotic vehicle. Specifically the output of this knowledge acquisition system must consist of the location and extent of the road, the presence, size and location of any obstacles, and the location of road intersections, all in real time. Based on this knowledge of the scene, a desired local path can be planned for the robotic vehicle.
Cyclopion, An Autonomous Guided Vehicle For Factory Use
Nikolai Eberhardt, Meghanad D. Wagh
This paper reports on the concept, software and design of a laboratory model of an autonomous guided vehicle, i.e. a freely moving robot that orients itself in space with the aid of fixed passive optical beacons and follows a path, as given by coordinate points previously stored in its memory. Communication to a base station takes place via a digital radio link. Commands are received and status is reported back. The goal is to design a system that can be implemented essentially with existing technology and will not have to rely on optical pattern recognition from a television camera - an approach certainly more elegant, but at the present time rather immature.
The Structure Of A Fuzzy Production System For Autonomous Robot Control
Can Isik, Alexander Meystel
A knowledge-based controller for an autonomous mobile robot is realized as a hierarchy of production systems. The hierarchical structure is achieved following the information hierarchy of the system. A high level path planning is possible by utilizing the incomplete world description. More detailed linguistic information, obtained from sensors that cover the close surroundings, enables the lower level planning and control of the robot motion. A linguistic model is developed by describing the relationships among the entities of the world description. This model is then transformed into the form of rules of motion control. The inexactness of the world description is modeled using the tools of fuzzy set theory, leading to a production system with a fuzzy database and a redundant rule base.
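A toy sketch of rule evaluation in the spirit of a fuzzy production system: two linguistic rules map an inexact obstacle-distance description to a steering command by weighting crisp outputs with membership grades. The membership functions, angles and distances are invented for illustration and are not the paper's rule base.

```python
def near(d):
    """Membership in the linguistic value NEAR (full below 1 m, zero above 3 m)."""
    if d <= 1.0:
        return 1.0
    if d >= 3.0:
        return 0.0
    return (3.0 - d) / 2.0

def far(d):
    """Membership in the linguistic value FAR (complement of NEAR here)."""
    return 1.0 - near(d)

def fuzzy_steering(obstacle_distance_m: float) -> float:
    """Two linguistic rules:
         IF obstacle is NEAR THEN steer SHARPLY  (40 degrees)
         IF obstacle is FAR  THEN steer SLIGHTLY ( 5 degrees)
       combined by a weighted-average defuzzification."""
    w_near, w_far = near(obstacle_distance_m), far(obstacle_distance_m)
    return (w_near * 40.0 + w_far * 5.0) / (w_near + w_far)

print(fuzzy_steering(1.5))   # closer obstacles produce sharper turns
```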
Determination Of The Most Probable Point From Non-Concurrent Lines
Kyongtae Bae
From a given camera position and orientation we can draw a ray from the camera focus through a point in the image plane (this gives two equations). To locate a three-dimensional point (three unknown coordinates) on the ray at least one more equation is required. When another camera or another perspective is used, the number of equations becomes four and the system is overdetermined, and is to be solved in terms of the best estimation. This work compares the application to the point triangulation problem of the least squares method as opposed to the frequently used linear regression method. Since all coefficients of the equations are subject to error, the linear regression method is not conceptually correct. However, it is commonly used because its straightforward computations provide reasonable solutions in most cases. In our approach the least squares method is formulated by equally weighting the coefficients, thereby obtaining a solution from the principal components of the scatter matrix of the coefficients. Since we have four equations for the three unknowns, we can have four exact solutions from combinations of three out of the four equations. Geometrically each equation represents a plane in 3-D space, and each exact solution represents a vertex of a tetrahedron formed by the four equations. After normalizing the solutions to be scale-invariant, we compute the volume of the normalized tetrahedron. When the volume is greater than one, our least squares approach is significantly different from the usual linear regression method. In other words, when the residual error sum is very large, the usual method fails to give a correct solution, and, therefore, should be considered valid only for relatively small or purely random errors.
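A minimal sketch of the equally weighted least squares formulation described: each ray contributes plane equations, the rows are normalized for scale invariance, and the solution is read off the smallest principal component of the coefficient scatter matrix (computed here via SVD). The example planes are hypothetical.

```python
import numpy as np

def most_probable_point(planes):
    """Each row of `planes` is (a, b, c, d) for a plane a*x + b*y + c*z = d.
    Write the system homogeneously as [a b c -d] . [x y z 1]^T = 0 and take
    the right singular vector with the smallest singular value; this treats
    all coefficients as equally error-prone, unlike ordinary regression,
    which assumes only d is noisy."""
    A = np.asarray(planes, dtype=float)
    M = np.column_stack([A[:, :3], -A[:, 3]])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)   # scale-invariant rows
    _, _, vt = np.linalg.svd(M)
    v = vt[-1]                                         # smallest principal component
    return v[:3] / v[3]

# Four nearly concurrent planes (e.g. two rays seen from two camera views).
planes = [(1, 0, 0, 1.0), (0, 1, 0, 2.0), (0, 0, 1, 3.0), (1, 1, 1, 6.05)]
print(most_probable_point(planes))   # close to (1, 2, 3)
```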
An Integrated VLSI Design Environment
Bruce W. Suter, Kevin D. Reilly
On-going research on the development of an integrated VLSI design environment is presented. Although numerous VLSI design tools have been developed, previous approaches tend to attack the VLSI design environment in a fragmented fashion. This difficulty can be overcome by using an integrated simulation environment as the cornerstone upon which the VLSI design environment is built. Among the elements to be considered in an integrated VLSI design environment are multi-level functional/behavioral design specifications; simulators, optimizers, and design rule checkers; a natural language user interface; and a knowledge base. In an environment context, new opportunities appear; for example, the ability to generate revisions of VLSI devices in a relatively straightforward manner over the lifecycle of a family of devices. Moreover, this endeavor can be viewed as a multi-level silicon compiler embedded in a simulation environment.
"Real-Time Computer Vision Using Intelligent Hardware"
Gregory M. Cox, Donald Fronek, Rahn Merrill
Recent advances in multiprocessor techniques and high speed memory have provided the basis for new computer architectures and real-time processing. This paper describes some special hardware implemented in a real-time vision system, detailing real-time processing of raster algorithms in concert with a linear sensor output. The algorithms and computer architecture considerations are detailed and justified. The research application details the hardware and software that are necessary to process image pixels in less than 100 nanoseconds. As a result of real-time processing, a certain "intelligence" is brought to the effort in the form of object identification. During object segmentation, algorithms provide a scoring procedure which identifies features within the image. The effort is supported in part by the Army Research Office, National Science Foundation and the Earth Technology Corporation.
AI Based Connector Assembly
W. J. McClay, P. J. MacVicar-Whelan
A 1000 rule prototype knowledge based system composed of a network of knowledge bases that aids shop floor personnel doing electrical connector assembly to select the correct tooling and materials for each job is described. It is currently in trial use and has demonstrated 100% accuracy and significant reductions in process specification search time. A production prototype is being developed.
Adaptive Decoder For An Adaptive Learning Controller
D. Politis, W. Licata
This paper discusses the implementation of an Adaptive Decoder (AD) in the Adaptive Learning Controller (ALC) algorithms developed by Barto, based on the neuron modeling work of Klopf. It is shown that using an Adaptive Decoder that shifts from coarse to fine space partition at a chosen instant improves ALC performance significantly, by decreasing the required learning time and reducing the operating bounds of the control variables.
Discrimination Of Upright Objects From Flat-Lying Objects In Automated Guidance Of Roving Robots
Malek Adjouadi
In the design of a computer vision system which is to analyze two-dimensional images of real-world scenes, it is imperative that we understand the principles underlying the image projections of upright objects (obstacles or landmarks to be avoided or identified) and flat-lying objects (cast shadows, texture change, etc.). This understanding is particularly important in the automated guidance of roving robots. To this end, this study begins with a presentation of a modular structure for the interpretation of real-world scenes, identifying pertinent problems for the safe and enhanced guidance of roving robots. This is followed by a mathematical framework developed to bring about enhanced scene interpretation. This mathematical framework includes the derivation of the geometrical formulae relating the two-dimensional image to the three-dimensional real world, and an analysis of the perspective effect present in the two-dimensional image. With this mathematical basis established, image techniques are developed to assess the image projections of the objects in question, and to exploit the fact that upright objects, within the scope of the stated problem, are not affected by the perspective effect. Finally, in an attempt to recover the depth information, the proposed techniques are complemented by an algorithm designed to measure the disparity that exists in stereo images.
Rxpert: An Intelligent Computer System For Drug Interactions
Brian G. Gayle, Douglas D. Dankel II
Drug interactions have become very prevalent in modern medicine. With the increasing numbers of techniques for disease diagnosis and marketed medications for disease treatment, multiple drug patient care has become a customary practice. Consequently, the potential of harmful drug induced effects has become, and will continue to be, an enormous problem in providing patient care. A computer reference aid, knowledgeable about drug interactions, would be of great value to medical personnel in eliminating untoward drug effects in patient care. This research involved the development of such a system called RxPERT. RxPERT is an intelligent, easy to use, interactive computer system with expert level knowledge of drug interactions, implemented for use on an IBM-PC microcomputer. The microcomputer implementation allows the program to be highly accessible, and its information promptly retrievable for office as well as institutional settings. The system requires input of a patient history and patient drug requirements. Throughout the user/computer interactive session the user can review explanations of the possible interactions. Ultimately a listing of drug interactions and overall rating of the drug regimen instituted is displayed.
An Extension Matrix Approach To The General Covering Problem
Jiarong Hong, Ryszard Michalski, Carl Uhrik
A new approach, called the extension matrix (EM) approach, for describing and solving the general covering problem (GCP) is proposed. The paper emphasizes that the GCP is NP-hard and describes an approximately optimal covering algorithm, AE1. AE1 incorporates the EM approach with a variety of heuristic search strategies. Results show the new algorithm to be efficient and useful for large scale problems.
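For contrast with AE1, the following is the standard greedy approximation to a covering problem, choosing at each step the subset covering the most still-uncovered elements; it illustrates heuristic near-optimal covering only and is not the extension matrix method. The rule names and element sets are hypothetical.

```python
def greedy_cover(universe, subsets):
    """Greedy approximation to the covering problem: repeatedly pick the
    subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name, best = max(subsets.items(), key=lambda kv: len(uncovered & set(kv[1])))
        if not uncovered & set(best):
            break                       # remaining elements cannot be covered
        chosen.append(name)
        uncovered -= set(best)
    return chosen, uncovered

subsets = {"r1": {1, 2, 3}, "r2": {3, 4}, "r3": {4, 5, 6}, "r4": {1, 6}}
print(greedy_cover(range(1, 7), subsets))   # (['r1', 'r3'], set())
```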
A Transitive Model For Artificial Intelligence Applications
John Dwyer
A wide range of mathematical techniques have been applied to artificial intelligence problems and some techniques have proved more suitable than others for certain types of problem. We formally define a mathematical model which incorporates some of these successful techniques and we discuss its intrinsic properties. Universal applicability of the model is demonstrated through specific applications to problems drawn from rule-based systems, digital hardware design and constraint satisfaction networks. We also give indications of potential applications to other artificial intelligence problems, including knowledge engineering.
Rule-Based Geometrical Reasoning For The Interpretation Of Line Drawings
Elizabeth T. Whitaker, Michael N. Huhns
The design of a knowledge-based system for inferring the three-dimensional structure of an object based only on three orthogonal views of it is presented. The three views are line drawings obtained from digitized two-dimensional images of the front, top, and one side of the object. The system attempts to make the same deductions as would be made by a draftsman observing these line drawings. Although the information in only three views is insufficient to uniquely characterize an object, a draftsman is still able to infer a reasonable three-dimensional interpretation by making appropriate assumptions about hidden surfaces and using intuition about the structure of objects in three-space. The system incorporates similar intuitive knowledge and reasoning techniques which enable it to make the same assumptions. The knowledge which is used by the system is encoded as productions in the rule-based language OPS5. The system successfully produces a correct three-dimensional description for many simple polyhedral objects. It can be easily expanded to include new polyhedral objects simply by adding new production rules.
Intelligent Computer-Aided Design By Modeling Chip Layout As A Meta-Planning Problem
William P.-C. Ho
We present an approach to VLSI chip layout (placement and routing) based on a new meta-planning paradigm. By modeling placement and routing as separate planning problems, they can then each be solved within that paradigm. In planning terminology, placement is the conjunction of subgoals, each of which is to place one component; routing is the conjunction of subgoals, each of which is to route one net. As in any planning problem, the complexity of each of these problems is caused by the subgoal interaction in which the solution of one subgoal greatly impacts the ways in which subsequent subgoals may be solved. Meta-planning directly addresses the control task of managing this interaction. Our meta-planning paradigm organizes meta-level decision knowledge into two control policies: graceful retreat, which selects the most critical subgoal to solve next, and least impact, which selects the solution of that subgoal which uses the least crucial resources. This knowledge is organized in a tie-breaking, layered structure which filters the selection candidates until one most critical subgoal, and the solution of it that uses the least crucial resources, remain. The result is a dynamic, interaction-sensitive, constructive solution to the layout problem.
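A small sketch of the tie-breaking, layered filtering described: each layer scores the surviving candidates and keeps only the top scorers, until one remains. The net records and the two criteria standing in for "graceful retreat" and "least impact" are hypothetical, not the paper's knowledge base.

```python
def select(candidates, layered_criteria):
    """Layered tie-breaking: each criterion scores the surviving candidates
    and only the top scorers pass to the next layer, until one remains."""
    survivors = list(candidates)
    for criterion in layered_criteria:
        best = max(criterion(c) for c in survivors)
        survivors = [c for c in survivors if criterion(c) == best]
        if len(survivors) == 1:
            break
    return survivors[0]

# Hypothetical routing subgoals: prefer the most constrained net first,
# then the shorter one as a secondary tie-break.
nets = [{"name": "clk", "free_tracks": 1, "length": 40},
        {"name": "d0",  "free_tracks": 3, "length": 12},
        {"name": "d1",  "free_tracks": 1, "length": 18}]
print(select(nets, [lambda n: -n["free_tracks"], lambda n: -n["length"]]))
```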
Forecasting Artificial Intelligence Demand
David R. Wheeler, Charles Shelley
Forecasts are major components of the decision analysis process. When accurate, estimates of future economic activity associated with specific courses of action can correctly set corporate strategy in an uncertain environment. When inaccurate, they can lead to bankruptcy. The basic trouble with most forecasts is that they are not made by forecasters.
Intelligent Tutor For Elementary Spanish
Christine Poinsette, Manton Matthews
In this paper we discuss an intelligent tutor for elementary Spanish. The system consists of an ideal student model, a model of common student misconceptions, a model of the particular student's knowledge of Spanish, an ATN parser, and a conceptual dependency framework for semantics. It has a collection of short introductory stories which are presented to the user. The system operates by presenting the story, generating questions about the story, parsing the user's reply, analyzing the reply semantically, generating a response, and finally generating a follow-up question. The responses to these questions are parsed and, if syntactically correct, then checked semantically. However, if there is a syntactic error the system uses the individualized model of the user and the common misconception model to try to understand what the user intended. The particular questions that are chosen are tailored to the user using the model of the individual user.
Well Performed Systems
Chen-Yu Sheu
In this paper we present a paradigm for designing or operating systems with multiple, possibly conflicting, optimization criteria. Adopting compromise as a basis, the concept of "well-performed systems" has been applied to a version of the multi-processor scheduling problem. Based on this concept, unlike approaches in conventional scheduling problems, we allow multiple disciplines in the scheduler. The performance of the system is formulated as conjunctive goals, and the degree to which these goals are satisfied classifies the system into several operating levels. We construct rules to relate different optimization plans to different operating levels and apply these optimization plans whenever needed. Simulation studies have been conducted, and the results show that allowing multiple disciplines outperforms a system with only one discipline enforced.
Embedding Explanation Mechanism With User Interface
A. Imamiya, A. Kondoh
This paper describes embedding an explanation mechanism within a UIMS called the G-system, which is being developed at Yamanashi University. A UIMS is an application-independent user interface management system, which can serve as the user interface for a variety of functional systems. One of the important facilities of the G-system is the explanation facility. This facility provides online assistance for applications in the G-system, through which users can obtain consistent assistance that displays summary information, command descriptions, explanations of error messages and other online documentation. The use of the explanation facility and knowledge database provides a common interface between many types of applications and the user interface system, and separates the functions of application and user interaction.
An Intelligent System In Lisp For Information Retrieval Applied To The Paleontology Of The Amazon Basin
Emmanuel P. Lopes Passos, Paulo Cesar Ferreira de Souza Cunha
A model of knowledge representation, in LISP, is shown and implemented, based on graphical and geometrical methods for determining relationships. Application examples of the model are proposed for distinct areas of knowledge, and information retrieval for the Amazon Basin is implemented.
Modular Real-Time Image Processing Hardware As A Means To Offload Computationally Intensive Tasks In Artificial Vision
Robert J. Berger, Barry Unger
Transforming large quantities of information from the video signal to more symbolic representations is the first step in any pattern recognition or machine vision application. Traditionally this step is so computationally intensive, that most researchers have had to limit themselves to simple algorithms that lacked flexibility and generality or to non-real-time problems. The other choice has been to skip the higher level AI or judgemental techniques and use all the computational bandwidth for signal processing and very limited problem domains. This paper describes a family of real-time modular hardware, "MaxVideo" that allows the user to cost-efficiently configure an image-processing "front end" that can handle most of the tasks associated with the first steps in machine vision applications, especially those that involve parallel calculations such as convolutions, rotations, erosion / dilation, and feature list extractions. This front end image processor is viewed as transforming the raw video data into the more digestible symbolic "raw primal sketch" of coherent lines and geometric elements. This offloads the more general purpose host, allowing it to effectively handle the higher level knowledge representation / manipulation tasks. As higher level vision techniques are understood, the architecture described allows them to be integrated into hardware.
Flexible Template Matching For Autonomous Classification
John Merchant, Timothy J. Boyd
The target recognition problem is complicated by the fact that the target is doubly "unknown". The sensed image is obviously a function of target class: a fundamental problem of target recognition has been that the form of the sensed image is also a strong function of target "geometry". The differences of class, upon which classification necessarily depends, may therefore be completely swamped by the irrelevant differences of view. Human recognition of targets is generally unaffected by this problem. A new method of Automatic Target Recognition, called Flexible Template Matching, is described. It eliminates "geometry" from the recognition problem by means of a new registration algorithm. This automatically brings the unknown target image and each one, in turn, of a set of class-defining template images (one per class) into mutual registration, without requiring any prior knowledge of the differences of view that may exist between them. The elimination of all irrelevant differences of view (scale, position, rotation, aspect, etc.) allows for an optimum match-decision to identify the one true template, based upon computation (using Bayes' formula) of the probability, for each template, that the observed match differences are a typical sample of the match-differences known to occur in a (registered) true match of the target and its template. Since the range and aspect of the target are provided as a by-product of the registration action, the match-error statistics used can be selected according to the observed position and orientation of the target.
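A hedged sketch of the match decision: after registration, each class has an observed match error and known error statistics, and the class maximizing the posterior is chosen. The Gaussian error model, class names and numbers below are our assumptions, not the paper's statistics.

```python
import numpy as np

def classify(match_errors, error_stats, priors):
    """Pick the template class that maximizes the posterior probability of
    the observed (post-registration) match error, assuming a Gaussian
    error model per class."""
    log_post = {}
    for cls, err in match_errors.items():
        mu, sigma = error_stats[cls]
        log_lik = -0.5 * ((err - mu) / sigma) ** 2 - np.log(sigma)
        log_post[cls] = log_lik + np.log(priors[cls])
    return max(log_post, key=log_post.get)

match_errors = {"tank": 0.9, "truck": 2.6, "jeep": 3.1}      # residuals after registration
error_stats  = {"tank": (1.0, 0.4), "truck": (1.0, 0.5), "jeep": (1.0, 0.5)}
priors       = {"tank": 1 / 3, "truck": 1 / 3, "jeep": 1 / 3}
print(classify(match_errors, error_stats, priors))           # 'tank'
```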