Tracing maps or other line drawings for feature identification, both concurrent and subsequent, is complicated by the difficulty of vectoring the tracing process in an optimal manner. The digitized features may have nearly infinite variation in size and shape, and it is important to measure the original shape, including width, curvature, length, and continuity; thus image enhancement is undesirable. Without processing such as line thinning, many traditional tracing algorithms result in backtracking or random wandering rather than following feature trends. The problem is especially acute where features (such as lines) intersect. The need to intelligently monitor feature tracing is the subject of the study presented in this paper. This paper reports on software developed for an IBM PS/2 Model 80, with an 80386 processor, to control a mobile mask that focuses on a limited portion of the feature being traced. During tracing, the mask surrounds a portion of the feature, and an investigation of attributes becomes manageable because the field of view is restricted by the mask. After feature information has been extracted from the current location, the mask is vectored to a new location (based on current trend information) that is optimal for continuing to follow the feature. Feature identification and trace vectoring are performed by using programming-language (Turbo Pascal 5.0) manipulation of Boolean functions to simulate knowledge-base rules. During the software development stage, direct program code is much more efficient than coupling an inference engine to the tracing software. In future research, integration with an inference engine will permit efficient user-initiated strategy changes for analyzing increasingly complex features.
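The abstract contains no code; the following is a minimal Python sketch of the moving-mask idea under stated assumptions: a binary raster, a 3x3 mask (the 8-neighbourhood) around the current pixel, and a single Boolean-style rule that prefers the neighbour most consistent with the current trend so the trace does not backtrack at intersections. All names and parameters are illustrative and this is not the authors' Turbo Pascal implementation.

```python
# Hedged sketch only: a moving-mask line follower over a binary image.
import numpy as np

def trace_feature(img, start, max_steps=1000):
    """Follow a thin feature in a binary image (1 = feature pixel) from `start`."""
    path = [start]
    visited = {start}
    trend = None                              # current direction as (dr, dc)

    for _ in range(max_steps):
        r, c = path[-1]
        # the "mask": look only at the 8 neighbours of the current pixel
        candidates = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if ((dr, dc) != (0, 0)
                        and 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and img[nr, nc] and (nr, nc) not in visited):
                    candidates.append((dr, dc))
        if not candidates:
            break                             # feature ends inside the mask
        if trend is None:
            dr, dc = candidates[0]
        else:
            # rule: keep following the current trend, which suppresses
            # backtracking and random wandering at intersections
            dr, dc = max(candidates, key=lambda d: d[0]*trend[0] + d[1]*trend[1])
        trend = (dr, dc)
        path.append((r + dr, c + dc))
        visited.add(path[-1])
    return path
```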
An algorithm for finding a single good path through the set of edge points detected by the gradient-of-Gaussian operator is discussed. First, an algorithm for finding contours at one scale is presented; extensions of that algorithm which use multiple scales to improve the detection of weak edges follow. Edge points are linked along the gradient-maxima ridges using a weighted tree search algorithm. The weights at each point measure noise, curvature, gradient magnitude, and contour length. In the multiple-scale algorithm, the search for a contour proceeds as for the single scale, using the largest scale, until a best partial contour at that scale has been found. The next finer scale is then chosen, and the neighborhood around the end points of the contour is examined to determine possible edge points in a direction similar to that at the end point of the contour. The original algorithm is then followed for each of the points satisfying this condition, and the best is chosen as an extension of the original edge. A second algorithm uses gradient information obtained at multiple scales in the non-maxima suppression operation. The coarsest scale is used first; the edge points are then shifted as non-maxima suppression is applied at finer scales. Both algorithms improve the detection of edge contours with little increase in noise. The second also reduces the delocalization that occurs at larger scales.
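For concreteness, here is a hedged Python sketch of two low-level ingredients named above, the gradient-of-Gaussian operator and non-maxima suppression at a single scale; the multi-scale extensions would repeat these at several values of sigma. Function names and the direction quantisation are illustrative assumptions, not the paper's code.

```python
# Sketch: gradient-of-Gaussian edge strength plus single-scale non-maxima suppression.
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_of_gaussian(img, sigma):
    smoothed = gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(smoothed)            # derivatives along rows, columns
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    return mag, ang

def non_maxima_suppression(mag, ang):
    """Keep only points that are local maxima along the gradient direction."""
    out = np.zeros_like(mag)
    # quantise the gradient direction to one of four neighbour axes
    q = (np.round(ang / (np.pi / 4)) % 4).astype(int)
    offsets = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1)}
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            dr, dc = offsets[q[r, c]]
            if mag[r, c] >= mag[r + dr, c + dc] and mag[r, c] >= mag[r - dr, c - dc]:
                out[r, c] = mag[r, c]
    return out
```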
Autonomous vehicles often perform navigation and path planning using hierarchical control systems. These systems separate high and low level reasoning through an abstraction of the planning problem. For reasoning about terrain information, we present a method of abstraction that retains the finest level of resolution while progressing through greater levels of abstraction. Abstraction arises from a continuum of Gaussian smoothed terrain surfaces; each smoothed surface describes the terrain at a different scale of abstraction. We refer to this continuum as scale-space. For each level of abstraction, important features can be extracted from land elevation data for planning purposes. In this paper, we present this abstraction method, a graph representation for retaining scale-space information, and examples of how features from various levels of abstraction influence planning at different levels of a hierarchical control system.
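As a concrete illustration of the Gaussian scale-space idea described above, here is a minimal Python sketch under stated assumptions (a 2-D elevation grid and a handful of illustrative sigma values); it is not the authors' implementation or graph representation.

```python
# Sketch: one smoothed elevation grid per scale of abstraction.
import numpy as np
from scipy.ndimage import gaussian_filter

def terrain_scale_space(elevation, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return {sigma: smoothed elevation grid} for a 2-D elevation array."""
    elevation = np.asarray(elevation, dtype=float)
    return {s: gaussian_filter(elevation, s) for s in sigmas}

# Coarse levels answer "where are the major ridges and valleys?" for high-level
# planning; the finest level keeps full resolution for low-level path execution.
```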
This paper extends our previous research on a highly structured and compact algebraic representation of grey-level images. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, we have devised an innovative, efficient edge detection scheme. We have developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower with this new edge detection scheme. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The feature extractor and model matcher are being incorporated into a distributed robot control system. Model matching is accomplished using both top-down and bottom-up processing: a priori sensor and world model information are used to constrain the search of the image space for features, while extracted image information is used to update the model.
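One ingredient of the feature extractor, the Hough transform for straight lines, is standard enough to sketch in Python; the polynomial image algebra, the edge detector, and the line follower are not reproduced here, and all parameter values are illustrative.

```python
# Sketch: accumulate votes in (rho, theta) space for a binary edge image.
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=200):
    h, w = edge_img.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            r_idx = int(np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1))
            acc[r_idx, t_idx] += 1
    return acc, rhos, thetas   # peaks in `acc` correspond to candidate lines
```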
This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly alike; in fact, the differences in appearance among species can be significant. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products it makes, which range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, what constitutes a removable defect can and does vary. The vision system must be one that can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.
Intelligence analysts are frequently faced with the task of developing and maintaining systems for the classification of large, noisy, incomplete, and highly dynamic datasets. Until recently, analysts have had only two methodologies to bring to bear upon this task. The first requires that the analyst work with the data manually until a "feel" is developed for it. The second involves the application of classical statistical techniques such as discriminant analysis and numerical taxonomy. Unfortunately, these techniques often yield unintuitive decision rules and clusterings, or demand unrealistic distributional assumptions. Because these traditional techniques are not always applicable, knowledge acquisition has been a "bottleneck" for building rule-based systems. However, new automatic techniques drawn from the domain of machine learning are being developed that address both of these problems: they do not require such distributional assumptions, and they tend to deliver readily interpretable clusterings and decision rules. This paper describes experiments that explore the applicability of two such systems, MOCA and CART, as tools to help analysts cope with large quantities of intelligence data.
A learning classifier system (LCS) that learns rules for controlling a mathematical model of the liquid level in a vessel has been developed by the Bureau of Mines. LCSs resemble familiar production rule-based systems that incorporate a human expert's knowledge. However, in classifier systems the production rules are represented by strings of characters rather than in linguistic terms. This paper presents two specific examples in which an LCS produces a rule set for controlling liquid level whose performance is comparable to the performance of a human expert's rule set. In the first example, the LCS learns a rule that has been deleted from an author-supplied data base of effective rules. In the second example, the LCS learns rules to supplement a set of rules provided by the authors which included one rule detrimental to controlling liquid level. The LCS-generated rule set obtains a higher level of control over the liquid level system.
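To make concrete how a classifier system represents rules as character strings rather than linguistic terms, here is a minimal Python sketch of ternary condition matching; the strings shown are illustrative and not drawn from the Bureau of Mines rule base.

```python
# Sketch: a ternary condition ('0', '1', '#' = don't-care) matched against a binary message.
def matches(condition, message):
    """True if every non-'#' position of the condition equals the message bit."""
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

# Example: a rule whose condition fires whenever the two high-order bits of a
# (hypothetical) level-sensor message read "10", regardless of the remaining bits.
rule_condition = "10####"
sensor_message = "101101"
print(matches(rule_condition, sensor_message))   # True
```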
Practical knowledge-based systems need to reason in terms of knowledge that is already available in databases. This type of knowledge is usually represented as tables acquired from external databases and published reports. Knowledge based systems provide a means for reasoning about entities at a higher level of abstraction. What is needed in many of today's expert systems is a link between the knowledge base and external databases. One such approach is a frame-based database management system. Package Expert (PEx) designs packages for integrated circuits. The thrust of our work is to bring together diverse technologies, data and design knowledge in a coherent system. PEx uses design rules to reason about properties of chips and potential packages, including dimensions, possible materials and packaging requirements. This information is available in existing databases. PEx needs to deal with the following types of information consistently: material databases which are in several formats; technology databases, also in several formats; and parts files which contain dimensional information. It is inefficient and inelegant to have rules access the database directly. Instead, PEx uses a frame-based hierarchical knowledge management approach to databases. Frames serve as the interface between rule-based knowledge and databases. We describe PEx and the use of frames in database retrieval. We first give an overview and the design evolution of the expert system. Next, we describe the system implementation. Finally, we describe how the rules in the expert system access the databases via frames.
CAUSA is an environment for modeling and simulation of dynamic systems on a quantitative level. The environment provides a conceptual framework including primitives like objects, processes and causal dependencies which allow the modeling of a broad class of complex systems. The facility of simulation allows the quantitative and qualitative inspection and empirical investigation of the behavior of the modeled system. CAUSA is implemented in Knowledge-Craft and runs on a Symbolics 3640.
We are developing a computer vision system to automatically detect and track human motion across the international border between the United States and Mexico. Fundamental requirements are that the system work in real time, under varying environmental conditions, with relatively inexpensive hardware. The work we describe is applicable to a wide range of multiple-object tracking problems. This paper describes the algorithm we have developed to detect and track moving objects. The algorithm is based on the notion of path coherence. However, the original path-coherence algorithm is not suitable for our application because it requires noise-free images, requires all trajectories to be present from the first image through the last, and makes multiple passes through the trajectory points as each new image is acquired; such a procedure is not acceptable for real-time applications. We have previously reported on the front end of our system, which takes video images and determines the areas that represent changing objects. Thus the input to the tracking portion of the system is a binary image representing the changing pixels. In this paper we present a detailed description of the tracking algorithm, its implementation in Smalltalk-80, and samples of its operation. We also discuss system performance as a function of trajectory complexity and image noise level.
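As background on the path-coherence notion, here is a hedged Python sketch of a smoothness cost that penalises abrupt changes in direction and speed between consecutive frames; the weights and exact functional form are illustrative assumptions and not necessarily the formulation used in this paper or its Smalltalk-80 implementation.

```python
# Sketch: a path-coherence style deviation measure for linking detections across frames.
import math

def deviation(p_prev, p_curr, p_next, w_dir=0.1, w_speed=0.9):
    v1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    v2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 1.0
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    dir_term = 0.5 * (1.0 - cos_angle)                        # 0 when straight
    speed_term = 1.0 - 2.0 * math.sqrt(n1 * n2) / (n1 + n2)   # 0 when speed is constant
    return w_dir * dir_term + w_speed * speed_term

# A greedy tracker would extend each trajectory with the detection in the new
# frame that minimises this deviation, subject to a maximum allowed value.
```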
This paper presents a detailed description and a comparative analysis of the algorithms used to determine the position and orientation of an object in real time. The exemplary object, a freely moving goldfish in an aquarium, provides "real-world" motion with definable characteristics (the fish never swims upside-down) and the complexities of a non-rigid body. For simplicity of implementation, and since a restricted and stationary viewing domain (the fish tank) exists, we reduced the problem of obtaining 3-D correspondence information to trivial alignment calculations by using two cameras viewing the object orthogonally. We applied symbolic processing techniques to recognize the 3-D orientation of a moving object of known identity in real time. Assuming motion, each new frame (sensed by the two cameras) provides images of the object's profile, which has most likely undergone translation, rotation, scaling, and/or bending since the previous frame. We developed an expert system that uses heuristics of the object's motion behavior, in the form of rules, together with information obtained via low-level image processing (such as numerical inertial-axis calculations) to dynamically estimate the object's orientation. An inference engine provides these estimates at frame rates of up to 10 per second, which is essentially real time. The advantages of the rule-based approach to orientation recognition are compared with other pattern recognition techniques; results of an investigation of statistical pattern recognition, neural networks, and procedural techniques for orientation recognition are included. We implemented the algorithms in a rapid-prototyping environment, the TI Explorer, equipped with an Odyssey and custom imaging hardware. A brief overview of the workstation is included to clarify one motivation for our choice of algorithms. These algorithms exploit two facets of the prototype image processing and understanding workstation: its low-level (segmentation) and high-level (rule-based recognition) vision capabilities.
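The inertial-axis cue mentioned above is a standard image-moment computation; the following Python sketch shows the axis of least inertia of a binary silhouette. Names are illustrative, and the rule-based reasoning layered on top of such cues is not reproduced here.

```python
# Sketch: principal (inertial) axis of a binary silhouette via second-order central moments.
import numpy as np

def inertial_axis_angle(mask):
    """Angle (radians) of the axis of least inertia of a binary silhouette."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20 = np.sum(x * x)
    mu02 = np.sum(y * y)
    mu11 = np.sum(x * y)
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```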
Automatic target recognition (ATR) is one of the most challenging tasks for a computer vision system. It involves the determination of objects in natural scenes in different weather conditions and in the presence of both active and passive countermeasures and battlefield contaminants. This high degree of variability introduces considerable uncertainty into the vision processes in an ATR. This mandates both a flexible control structure capable of adapting as conditions change and a method for managing the uncertainty to aggregate evidence. The desired flexibility can be achieved with a rule-based system in which the knowledge of the effects of scene content and ancillary information on algorithm choices and parameter values can be modeled and manipulated. In this paper, we describe such a system. The uncertainty is modelled by a combination of fuzzy set theory and Dempster-Shafer belief theory. Several variations of these methodologies within the rule-based structure are explored. The results are compared using sequences of forward looking infrared images.
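Since Dempster-Shafer belief theory is one of the uncertainty mechanisms named above, a minimal Python sketch of Dempster's rule of combination follows; the mass assignments are illustrative, and the paper's fuzzy-set machinery and rule-based control structure are not reproduced.

```python
# Sketch: Dempster's rule for two independent bodies of evidence over the same frame.
def combine(m1, m2):
    """m1, m2: {frozenset of hypotheses: mass}. Returns the normalised combination."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative example: two detectors weighing in on {tank, truck}.
m1 = {frozenset({"tank"}): 0.6, frozenset({"tank", "truck"}): 0.4}
m2 = {frozenset({"truck"}): 0.3, frozenset({"tank", "truck"}): 0.7}
print(combine(m1, m2))
```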
Human subjects easily perceive and extensively use shape regularities such as symmetry or periodicity when they are confronted with the task of object description and recognition. A computer vision algorithm is presented which emulates such behaviour in that it similarly makes use of shape redundancies for the concise description and meaningful segmentation of object contours. This can be compared with the way in which designers proceed in using CAD/CAM. In order to make the problem more accessible to computer programming, the contours are analyzed in so-called 'arc length space'. This novel mapping facilitates the detection and elimination of regularities under a broad range of viewing conditions and yields a natural basis for the formulation of the corresponding model compression rules. Several of the regularities which have traditionally been treated separately, are given a unified substrate.
The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context information about, and the inherent structure of, the observed image scene have been encoded. To identify an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are studying principally the relational aspect and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We use the object-oriented programming language C++ for designing, testing, and implementing the abstracted entity classes and the operation structures that have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.
This paper describes the development by Digital Equipment Corporation of a tool to build expert systems for diagnosing problems in computer modules. The Expert's Toolkit (ET) consists of two parts: the Knowledge Acquisition Tool (KAT) and the Intelligent Decision Engine (IDE). KAT enables expert technicians to capture their problem-solving strategies in a knowledge base. IDE provides other technicians and operators on the manufacturing floor with access to the expertise captured in the knowledge base. The ET system has evolved into a generic expert system building "shell." While ET has been used successfully to build diagnostic systems in other domains, this paper focuses on the computer module problem domain. It describes the problem domain, the design of ET, its basic features, constraints and limitations, the required hardware and software, and the lessons we learned.
An expert system that supports inexperienced users in the application of the SPIDER image processing software package is described in this paper. The proposed structure of the knowledge base allows a generalized approach to software-configuration expert systems which is applicable in other problem domains as well. The system has been completely implemented and tested on a number of problems.
Thermal infrared images of the ocean obtained from satellite sensors are widely used for the study of ocean dynamics. The derivation of mesoscale ocean information from satellite data depends to a large extent on the correct interpretation of infrared oceanographic images. The difficulty of the image analysis and understanding problem for oceanographic images is due in large part to the lack of precise mathematical descriptions of ocean features, coupled with the time-varying nature of these features and the complication that the view of the ocean surface is typically obscured by clouds, sometimes almost completely. To address this problem, the present paper describes a hybrid technique that combines a nonlinear probabilistic relaxation method with an expert system for the oceanographic image interpretation problem, and presents a unified mathematical framework that helps in solving it. The paper highlights the advantages of using contextual information in the feature labeling algorithm and emphasizes the need for feedback from the high-level modules to the intermediate modules in an automatic image interpretation system. Important results from the series of experiments conducted at the Remote Sensing Branch, NORDA, on NOAA AVHRR imagery data are presented. Key words: feature labeling, feature extraction, oceanic features, edge detection, knowledge-based systems, expert system, relaxation, infrared imagery.
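For readers unfamiliar with relaxation labeling, here is a hedged Python sketch of one nonlinear probabilistic relaxation step, in which each feature's label probabilities are updated from the support given by neighbouring features through a compatibility matrix. Array shapes and the support rule are illustrative assumptions; the paper's exact formulation is not reproduced.

```python
# Sketch: one nonlinear relaxation-labeling update.
import numpy as np

def relaxation_step(P, neighbors, R):
    """P: (n_features, n_labels) probabilities (rows sum to 1).
    neighbors: list of non-empty neighbour index lists, one per feature.
    R: (n_labels, n_labels) compatibility matrix with values in [0, 1]."""
    n, _ = P.shape
    P_new = np.empty_like(P)
    for i in range(n):
        # support q_i(l) = average over neighbours j of sum_l' R[l, l'] * P_j(l')
        q = np.mean([R @ P[j] for j in neighbors[i]], axis=0)
        unnorm = P[i] * q
        P_new[i] = unnorm / unnorm.sum()
    return P_new
```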
A General Expert Systems Package for Science and Engineering that is now being developed consists of two Automatic Deductive, two Learning, and eight Service Systems, each of which is an Expert System that can function independently. Operational prototypes exist for nine of the twelve systems. The Service Expert Systems perform operations that are common to many problems in science and engineering, such as high-speed information management, data compression, transporting high-level language code to different machine environments, curve fitting, and data description. The high-speed information management system, SOLID, which uses minimal storage, is both data (or information) and logically independent. SOLID executes all operations (retrieve, store, delete, and update) at very high speeds in bounded time. The data compression system, INTEGRAL, compresses and decompresses bit strings at rates often in excess of 8 MBaud without loss of even a single significant binary bit, yielding savings as high as 99.98%.
Mathematical morphology, "algebra of shape", is used in many image processing applications including machine vision recognition, visually guided robot vision systems, biomedical image processing, and low-level vision problems. The fact that mathematical morphology deals with shape characteristics of an image makes it a tool for object recognition, defect detection, and feature extraction. The analogy between the morphological operations for shape and the convolution operations for signals suggests the dominance of mathematical morphology in the image processing applications related to shape, especially in machine vision.
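As a concrete illustration of the morphological primitives referred to above, here is a small Python sketch of erosion and dilation with a 3x3 structuring element, from which opening, closing, and most shape-analysis operators are built; the array contents are illustrative.

```python
# Sketch: the two primitive morphological operations on a binary image.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:6] = True                             # a small rectangular blob
se = np.ones((3, 3), dtype=bool)                 # 3x3 structuring element

eroded = binary_erosion(img, structure=se)       # shrinks the blob, removes fine detail
dilated = binary_dilation(img, structure=se)     # grows the blob, fills small gaps
opened = binary_dilation(eroded, structure=se)   # erosion then dilation = opening
```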
A novel technique is presented for segmenting seismic waveforms. The method produces waveform segments which closely correspond to explosion and earthquake signal onsets as well as additional structure of interest. Noise spikes or glitches are also successfully isolated. The approach uses threshold parameters obtained from human segmentation judgment tests and requires only simple, time domain calculations.
An algorithm for segmenting range images of industrial parts is presented in this paper. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The segmentation of images (range or intensity) is based on edge detection or region growing techniques. The algorithm presented in this paper segments range images by detecting discontinuities. There are three types of discontinuities in range images: jump, crease, and smooth edges. The detection of a jump edge is relatively easy and can be accomplished using edge detection techniques developed for intensity images. Crease and smooth edges are difficult to detect, especially in the presence of noise. Our approach is based on the analysis of the difference between the input and the filtered images. We show that, at an edge, the difference after Gaussian smoothing has a maximum in a direction perpendicular to the edge. Closed connected regions are then obtained by eroding the image once, and an iterative region-growing least-squares fit is used to obtain the final segmented image. The performance of the proposed algorithm on a number of range images is presented.
The segmentation problem in image processing still falls short of an optimal solution. Obtaining a suitable segmentation requires domain-specific knowledge, and the optimal insertion of this knowledge into the segmentation process is the major issue. The complex human visual system first extracts reliable intrinsic information from the input and then applies knowledge stepwise at each stage of visual processing, from the retinal to the cognitive levels. A near-optimal segmentation scheme should approximate this approach. In this paper we present a segmentation algorithm based on this approach. Using only intrinsic information from a noise-refined copy of the input image, we identify and group pixels that are geometrically related into regions. The partitioning is successively refined by region merging, using only general rules of perceptual grouping. Control strategies for judging connectivity and homogeneity are based on basic topological and geometric rules. This general, geometry-guided segmentation has the advantage of being domain independent. At an intermediate level we begin introducing domain-specific knowledge in further region merging. Our applications are in cell physiology, and we first exploit general knowledge from cytology. Progressively, we increase the use of knowledge in the definition of merging rules. Higher up the segmentation hierarchy, we include rules specific to a given branch of cell physiology and directly linked to characteristics of cell organelles. Merging stops when all existing regions are matchable with particular features and/or further merging is senseless or impossible using the defined rules. The results presented are from insect physiology, where for some time there has been disagreement among researchers on the number, type, and distinguishing characteristics of the various hemocytes identified so far. The idea is to bring the various parties to a compromise by producing a Computer-Based Reference Classification of Hemocytes.
Split and merge is a computationally efficient region segmentation technique suitable for detecting objects or surfaces in a given image. Despite its superior performance, it suffers from large memory usage and excessive computation time. This paper describes a parallel implementation of the split-and-merge algorithm on a 16-node hypercube processor in order to reduce processing time to a level acceptable for real-time applications. Three methods are proposed to parallelize the operation of the algorithm using the nearest-neighbor (mesh) topology, which can be mapped onto the hypercube architecture. A comparison of the described techniques is given, and processing results on real-world images are presented.
The Rational Tree Machine (RTM) is a type of graph reduction architecture, for the processing of logic programs, currently under development at Boeing's High Technology Center. The architecture is composed of three types of functional modules: tree memory, automata constructor, and automata simulation modules. A previous paper described the overall architecture and its functionality. This paper will concentrate on a detailed description of the automata constructor.
A parallel language has to match or reflect the underlying hardware in order to use its resources efficiently. Though every parallel language has to embody some kind of parallel machine model, no existing language states this model explicitly. The Parallaxis parallel programming environment takes a different approach: the system comprises the specification of the parallel algorithm and of the parallel hardware as well. Parallaxis has been designed for single-instruction, multiple-data (SIMD) system architectures consisting of identical processing elements (PEs) with local memory; data exchange is handled by message passing through a local network. In Parallaxis, the hardware structure is specified at the beginning of each program to establish the environment for coding the parallel algorithm. This is necessary for actually arranging the topology on a reconfigurable system, but it is also useful for performing a simulation or simply for stating the topology used. Parallelizable AI applications that demonstrate Parallaxis' usefulness include computer vision, production systems, neural networks, and robot control.
In this paper, a multiprocessor system based on an OR-parallel execution model is proposed. Our OR-parallel execution model addresses the following features: (1) run-time intelligent backtracking, (2) distributed process control and execution, (3) minimization of data communication between processors, and (4) minimization of parallel-processing management overhead. Special hardware modules, such as an Intelligent Backtracking Controller and a Forward Execution Controller, are designed to further enhance these features at run time. A bus-connected multiprocessor system is designed to experiment with the proposed OR-parallel execution model. Recent simulation results indicate that the OR-parallel execution model can be used successfully for the parallel processing of most non-deterministic Prolog applications, such as database systems, rule-based expert systems, natural language processing, and theorem proving.
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social-system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process that relies primarily on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail, an implementation of it is presented, and the process is illustrated by an example.
A highly accurate stereo vision system has been developed at the National Research Council of Canada as part of an on-machine dimensional inspection system. The approach is designed to eliminate or minimize the well-known difficulties of stereo vision. Its main features, besides the potentially high accuracy, are flexibility, cost effectiveness, and speed.
A new technique for computing intrinsic surface properties is developed in this research. Intrinsic surface properties refer to those properties of a surface which are not affected by the choice of the coordinate system, the position of the viewer relative to the surface, and the particular parametric representation used to describe the imaged surface. Since intrinsic properties are characteristics of a surface, they are ideal for the purposes of representation and recognition. The intrinsic properties which we are interested in are the principal curvatures, the intrinsic distance, and the lines of curvature. We propose to adopt a structured lighting sensing configuration where a grid pattern is projected to encode the object surfaces for analysis. At each stripe junction, the curvature of the projected stripe on the object surface is computed and related to that of the normal section which shares the same tangential direction as the projected curve. The principal curvatures and their directions at the stripe junction under consideration are then recovered using Euler's theorem. Application of this technique to represent and discriminate several simple surface types is also addressed.
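As background for the recovery step mentioned above, Euler's theorem (a standard differential-geometry identity, stated here in generic notation rather than the paper's) relates the normal curvature in a tangent direction making angle $\theta$ with the first principal direction to the principal curvatures:

$$\kappa_n(\theta) = \kappa_1 \cos^2\theta + \kappa_2 \sin^2\theta .$$

Measuring $\kappa_n$ along two or more known tangent directions at a stripe junction therefore yields a small system of equations from which the principal curvatures $\kappa_1$, $\kappa_2$ and their directions can be recovered.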
In this paper an uncertainty analysis is performed of a system for the estimation of visual motion and depth from known egomotion. The motion strategy of the observer is constrained so as to track, during the movement, the point in space that projects onto the image center (the fixation point). The estimation of the optic flow is performed in two steps: first, the velocity field is computed for each image pair at the zero-crossing points; second, the optic flow of a long sequence is obtained by matching corresponding contours between successive images. The model used is analysed, and the uncertainty of each partial flow is determined on the basis of the independent parameters. The matching between image pairs causes an error propagation in the estimated velocity; a method is proposed to reduce the error using the velocity estimate relative to successive images. In this way the variance of the global flow field (both in magnitude and direction) is determined and is used to compute the uncertainty in depth. This formulation makes it possible to estimate the precision of the algorithm and the improvement in the accuracy of the measures that can be achieved by varying some key parameters, such as the number of images used. An experiment performed on a real image sequence is presented.
This paper discusses a knowledge-based system for diagnostic problem solving based on a multi-level representational structure and associated reasoning methods. The motivation behind this approach is to combine shallow evidential models for fault diagnosis with deep qualitative models that derive behavior from structural descriptions. In addition, the reasoning scheme utilizes historical data based on past experience for diagnosis. Using this integrated framework, we concentrate on the following issues: (i) Multi-level knowledge based system design, and (ii) Reasoning systems that exploit the multi-level representational structure for diagnostic problem solving. This system is applied to the diagnosis of a complex electro-mechanical system, specifically, the upper cargo door of the DC-10 aircraft in use at Federal Express Corporation.
It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.
This paper describes a knowledge based approach to automatically generate Lisp programs using the Greedy method of algorithm design. The system's knowledge base is composed of heuristics for recognizing problems amenable to the Greedy method and knowledge about the Greedy strategy itself (i.e., rules for local optimization, constraint satisfaction, candidate ordering and candidate selection). The system has been able to generate programs for a wide variety of problems including the job-scheduling problem, the 0-1 knapsack problem, the minimal spanning tree problem, and the problem of arranging files on tape to minimize access time. For the special class of problems called matroids, the synthesized program provides optimal solutions, whereas for most other problems the solutions are near-optimal.
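For one of the problems named above, arranging files on tape to minimize access time, the Greedy strategy is easy to illustrate; the following Python example uses the shortest-file-first rule, which is optimal for this problem. The synthesis system itself is not reproduced here, and the file lengths are illustrative.

```python
# Illustrative Greedy example: order files on a tape to minimise mean retrieval time.
def order_files_on_tape(lengths):
    """Return file indices in the order that minimises average access time."""
    return sorted(range(len(lengths)), key=lambda i: lengths[i])

def mean_access_time(lengths, order):
    total, prefix = 0, 0
    for i in order:
        prefix += lengths[i]   # reading file i requires scanning everything before it
        total += prefix
    return total / len(lengths)

lengths = [12, 3, 7]                      # illustrative file lengths
best = order_files_on_tape(lengths)       # -> [1, 2, 0]
print(best, mean_access_time(lengths, best))
```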
A prototypical on-line model-based system, LASALLE1, developed at the University of Illinois in collaboration with the Illinois Department of Nuclear Safety (IDNS), is described. Its main purpose is to interpret about 300 signals, updated every two minutes at IDNS from the LaSalle Nuclear Power Plant, and to diagnose possible abnormal conditions. It is written in VAX/VMS OPS5 and operates in both on-line and testing modes. Its knowledge base encodes operator and plant actions pertaining to Emergency Operating Procedure (EOP) A-01, a procedure driven by the reactor's coolant level and pressure signals whose purpose is to shut down the reactor, maintain adequate core cooling, and reduce the reactor pressure and temperature to cold-shutdown conditions (about 90 to 200 °F). The monitoring of the procedure is performed from the perspective of Emergency Preparedness. Two major issues are addressed in this system. The first is the management of the short-term, or working, memory of the system. LASALLE1 must reach its inferences, display its conclusions, and update a message file every two minutes, before a new set of data arrives from the plant. This was achieved by superimposing additional layers of control over the inferencing strategies inherent in OPS5 and by developing special rules for the management of used or outdated information. The second issue is the representation of information and its uncertainty. The concepts of information granularity and performance level, which are based on a coupling of probability theory and the theory of fuzzy sets, are used for this purpose. The estimation of the performance level incorporates a mathematical methodology that accounts for two types of uncertainty encountered in monitoring physical systems: random uncertainty, in the form of probability density functions generated by observations, measurements, and sensor data, and fuzzy uncertainty, represented by membership functions based on symbolic, stochastic, or numerical models estimating the "plausible", "possible", or "expected" values of the system parameters. Examples from both the on-line mode and the testing mode of the system are discussed to illustrate the present methodology.
This paper presents a conceptual model of object-based knowledge representation for the inspection and evaluation of structural integrity. Use of the model is illustrated with a practical knowledge-based system (KBS) developed in the domain of bridge fatigue evaluation. An overview of the open-system architecture of the model is first introduced, followed by a discussion of the object abstraction in the domain knowledge. Three major kinds of entities represented by objects are categorized in the model: physical objects for structural topology and interconnectivity as well as individual physical components, equation objects for quantitative analysis, and abstract objects for qualitative assessment and reasoning. The knowledge representation scheme adopted by this KBS embodies the notions of self-inferencing and self-indexing of knowledge in objects. Basic inferencing mechanisms of the model are also described. Finally, a critique of the hybrid representation is presented along with suggestions for its enhancement.
This paper describes work in progress on scheduling and planning paths for map-guided robots (AGVs) maneuvering in a dynamic factory-floor environment. A schedule for a group of robots consists of a set of paths to be traversed and the start, stop, and wait times at each point of a single robot's path segment. The optimal schedule minimizes the finish time of all the robots' tasks. We show that the optimal routing problem for multiple robots is computationally intractable (NP-complete). Our methodology is to generate reasonably good schedules within an acceptable time by combining techniques from two different disciplines: we use a time-map management system from artificial intelligence (AI) research to generate an initial schedule quickly, then pass the initial schedule to an iterative refinement loop (or local search) that gradually improves it locally using an optimization process from operations research (OR). We also briefly recount the basic principles of map-guided autonomous navigation, which have been published in [Meng88b].
Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.
This paper deals with the use of the potential field approach as a means of collision avoidance and path planning for the "Generalized Mover's Problem" in the presence of obstacles. Although this is an important and fundamental problem, relatively little has been done with this approach over the last several years, and only a few of the path planning algorithms developed work directly with continuous state spaces. The potential field approach was developed by Khatib in 1980. There is, however, considerable room for improvement and expansion of Khatib's algorithm: Khatib's potential functions were not defined for the interaction between two general objects, but only for the interaction between a certain subclass of objects and a point. Thus, the goal sought in this paper is to extend Khatib's potential function method to the case of the interaction between two general objects, rather than just between a point and an object.
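For orientation, here is a minimal Python sketch of the classical point-robot potential field (the baseline that the paper proposes to extend to object-object interaction): an attractive quadratic well at the goal plus a repulsive term near each obstacle, with the robot descending the combined gradient. Gains, the influence radius, and the coordinates are illustrative assumptions.

```python
# Sketch: one gradient-descent step on an attractive-plus-repulsive potential field.
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, step=0.05):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = -k_att * (pos - goal)                       # attractive force toward the goal
    for obs in obstacles:
        d_vec = pos - np.asarray(obs, float)
        rho = np.linalg.norm(d_vec)
        if 0.0 < rho < rho0:                            # inside the obstacle's influence radius
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d_vec / rho)
    return pos + step * force

pos = np.array([0.0, 0.0])
for _ in range(200):                                    # simple descent toward the goal
    pos = potential_step(pos, goal=[10.0, 5.0], obstacles=[[5.0, 3.5]])
```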
Hierarchic decomposition has been shown to be an effective technique for efficiently generating near-optimal paths through a defended area. The objective is to avoid the combinatorial difficulties associated with searching large graphs by confining the search to specific, advantageous subregions. Within the subregions, paths are constructed using the A* heuristic search algorithm. The decomposition into subdomains introduces local horizon effects into the path generation process; the local horizon problem arises as a consequence of the required connection of paths generated by independent search procedures in contiguous subdomains. Several techniques are introduced to facilitate the inter-region handoff. The effects of subdomain size and handoff technique on path quality and efficiency are analyzed.
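The A* search used within each subregion is standard; the following Python sketch shows a generic version that expands the node minimising g (cost so far) plus h (an admissible estimate to the goal). The graph, costs, and heuristic supplied by a caller are illustrative assumptions, not the paper's terrain model.

```python
# Sketch: generic A* best-first graph search.
import heapq, itertools

def a_star(start, goal, neighbors, cost, h):
    """neighbors(n) -> iterable of nodes; cost(a, b) -> edge cost; h(n) -> heuristic."""
    tie = itertools.count()                 # tiebreaker so nodes are never compared directly
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                        # already expanded with a better cost
        came_from[node] = parent
        if node == goal:                    # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

# On a 4-connected grid one might use, for example:
#   neighbors = lambda n: [(n[0]+1, n[1]), (n[0]-1, n[1]), (n[0], n[1]+1), (n[0], n[1]-1)]
#   cost = lambda a, b: 1
#   h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
```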
To guide a robot in a time-varying environment, the initially planned path must be updated as the contents of the scene change. This research addresses the problem of planning a collision-free path for efficient visual guidance in scenes involving moving obstacles. We present a time-efficient path planning method based on a quadtree data structure and parallel processing techniques, applied to the top view of a dynamic environment. Motion analysis is performed on a sequence of images taken from the top view of the scene at certain time intervals. The extracted motion information is then used to select as the best path the shortest one that passes nearer to obstacles moving away from a reference point (e.g., the camera, the robot, or a segment of the initially planned path).
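As a toy illustration of the quadtree idea only (the occupancy map, cell labels, and stopping rule are assumptions, not the authors' implementation), the sketch below recursively splits a top-view occupancy grid until every quadrant is uniformly free or occupied, which is the decomposition a planner can then search.

```python
# Recursive quadtree decomposition of a square occupancy grid (illustrative sketch).
def build_quadtree(grid, r0, c0, size):
    """Return 'free', 'occupied', or a list of four child quadrants (NW, NE, SW, SE)."""
    cells = [grid[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)]
    if all(v == 0 for v in cells):
        return "free"
    if all(v == 1 for v in cells):
        return "occupied"
    half = size // 2
    return [build_quadtree(grid, r0 + dr, c0 + dc, half)
            for dr in (0, half) for dc in (0, half)]

occupancy = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(build_quadtree(occupancy, 0, 0, 4))
# ['free', 'free', 'free', 'occupied'] : three free quadrants and one occupied block
```

In a dynamic scene the occupied cells change between frames, so only the affected quadrants need to be re-split and re-searched.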
Large software projects contain a wealth of tested code. By extracting and reusing portions of code from these projects, programmer productivity can be greatly increased. Software reusability relieves the programmer of rewriting existing code, so a greater amount of time can be spent on more innovative tasks in the software life cycle. This paper presents the preliminary results of a knowledge-based software reusability (SR) system in an image processing application. Key issues in software reusability are addressed and the implementation of a vision-based system is discussed. The results show that challenges do exist in measuring the closeness of algorithms to user specifications; however, the key factor in determining the effectiveness of software reusability is the time saved.
Artificial Intelligence (AI) has found its way into Computer Aided Education (CAE), and several systems have been constructed that demonstrate its advantages. We believe that images (graphic or real) play an important role in learning; however, using images beyond mere illustration requires techniques such as AI. We shall develop the application of AI in an image-based CAE and briefly present the system under construction that demonstrates our concept. We shall also elaborate a methodology for constructing such a system. Furthermore, we shall briefly present the pedagogical and psychological activities in a learning process. Under the pedagogical and psychological aspects of learning, we shall develop areas such as the importance of the image in learning, both as a pedagogical object and as a means for obtaining psychological information about the learner. We shall develop the learner's model, its use, what to build into it, and how. Under the application of AI in an image-based CAE, we shall develop the importance of AI in exploiting the knowledge base in the learning environment and its application as a means of implementing pedagogical strategies.
Our reflection on the elements of existing image processing systems (image processing, symbol interpretation level, control mode, level of extracted features) and the corresponding use of Artificial Intelligence leads us to the definition of the SARPI system. This system performs the extraction of intermediate-level features. In the present first step of implementation, we limit ourselves to line segments. Each is associated with a descriptor including several parameters (position, angle, length, cross contrast, ...) and the precision of each of these parameters. SARPI applies to single or multiple feature detection: it finds the requested feature(s) and produces a total or partial (as requested) description. SARPI takes as input the set of requested parameters and the available values of some feature parameters (typically a qualitative measure of contrast). Its main part is a control module that automatically generates an image processing sequence to solve the problem (extraction of the requested feature parameters). Rules divide the problem into elementary subproblems according to the kind of input parameters and select an elementary function set according to the requested and known parameters; in this way, if the known information is insufficient, the control module selects and executes elementary functions that look for the missing information. Each of these elementary functions is pre-associated with image procedures, and heuristics select the appropriate procedures according to the values of the input parameters. The parameters of the image processes are controlled automatically by the precision required of the feature parameters. In particular, the sampling steps of the parameters ρ and θ of the Hough transform are calculated from the requested precision of the feature parameters. The selected image processing operations are applied to a region of the image calculated from the approximate position of the features, if given, or to the entire image otherwise. The system has been tested on images of industrial objects under different conditions of illumination and contrast.
In this paper, a method is proposed for integrating information derived from the thermal infrared (TIR) and visual modes, based on the edge (dissimilarity) properties of the two types of images. The method uses the Sobel operator to extract and classify edge points according to their strengths. A point-by-point comparison of edges from the visual and TIR images is made. Heuristics, supported by the physical processes affecting image formation, are employed to give a probabilistic interpretation to each point in the images based on the strengths of the edge points detected in the two images at that location. The validity of this approach is tested in a laboratory environment.
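To make the comparison step concrete, here is a small sketch (the threshold, the labels, and the fusion rule are illustrative assumptions, not the paper's heuristics) that computes Sobel gradient magnitudes for both modalities and labels each pixel by which image shows a strong edge.

```python
# Sobel edge strength per modality, followed by a simple point-by-point comparison.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude at each interior pixel of a 2-D intensity array."""
    h, w = img.shape
    mag = np.zeros_like(img, dtype=float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            gx = float((patch * SOBEL_X).sum())
            gy = float((patch * SOBEL_Y).sum())
            mag[r, c] = (gx * gx + gy * gy) ** 0.5
    return mag

def fuse(visual, tir, threshold=50.0):
    """Label each pixel by which modality (or both) shows a strong edge."""
    v, t = sobel_magnitude(visual) > threshold, sobel_magnitude(tir) > threshold
    labels = np.full(visual.shape, "none", dtype=object)
    labels[v & t] = "both"
    labels[v & ~t] = "visual-only"
    labels[~v & t] = "tir-only"
    return labels

step = np.zeros((6, 6)); step[:, 3:] = 255.0          # vertical step edge in the visual image
print(fuse(step, np.zeros((6, 6)))[2, 3])             # "visual-only"
```

The paper's contribution lies in the physics-based heuristics that turn such per-pixel agreement or disagreement into a probabilistic interpretation; the snippet only shows the mechanical comparison.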
Rule-based reasoning when applied to locating destination addresses on mail pieces can enhance system performance and accuracy. One of the critical steps in the automatic reading and sorting of mail by machine is in locating the block of text that is the destination address on a mail piece. This is complicated by the variation of global structure on mail piece faces, e.g., return and destination addresses can be anywhere on the mail piece, in any orientation and of any size. Compounding the problem is the addition of extraneous text and graphics such as advertising.
Interpretation of X-ray motion pictures of the heart (cineventriculograms of the left ventricle) is complicated by the low contrast of the images and the elastic motion of the heart. We describe a framework for applying knowledge, in the form of diagnostically relevant models of the heart in motion, to the problem of placing the heart boundary in each frame of the motion sequence. We employ a blackboard architecture [Nii 86; Weymouth 87] as the basis for the image interpretation. In this framework, local features, such as edges, are grouped to build a complete description of the moving heart. The knowledge is organized in a hierarchy, with knowledge sources (KSs) operating on different levels of the hierarchy. Opportunistic problem-solving techniques are used to control the order of activation of both data-directed and goal-driven KSs.
The affine-transformation matching scheme proposed by Hummel and Wolfson (1988) is very efficient in a model-based matching system, not only in terms of the computational complexity involved, but also in terms of the simplicity of the method. This paper addresses the implementation of the affine-invariant point matching, applied to the problem of recognizing and determining the pose of sheet metal parts. It points out errors that can occur with this method due to quantization, stability, symmetry, and noise problems. By beginning with an explicit noise model which the Hummel and Wolfson technique lacks, we can derive an optimal approach which overcomes these problems. We show that results obtained with the new algorithm are clearly better than the results from the original method.
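For readers unfamiliar with the underlying idea, the sketch below illustrates the affine-invariant coordinates on which such matching schemes rest; it is a generic reconstruction with made-up points and an arbitrary transform, not the optimal noise-modeled estimator the paper derives.

```python
# Affine coordinates of a point with respect to an ordered basis triple: these
# (alpha, beta) values are unchanged by any invertible affine transformation,
# which is what makes affine-invariant point matching possible.
import numpy as np

def affine_coords(p, basis):
    """Return (alpha, beta) such that p = b0 + alpha*(b1 - b0) + beta*(b2 - b0)."""
    b0, b1, b2 = (np.asarray(b, dtype=float) for b in basis)
    M = np.column_stack((b1 - b0, b2 - b0))        # 2x2 basis matrix
    return np.linalg.solve(M, np.asarray(p, dtype=float) - b0)

basis = [(0, 0), (1, 0), (0, 1)]
point = (0.25, 0.5)
alpha, beta = affine_coords(point, basis)

# Apply an arbitrary affine map and verify the coordinates are preserved.
A, t = np.array([[2.0, 0.3], [-0.5, 1.5]]), np.array([4.0, -1.0])
mapped_basis = [A @ np.array(b, dtype=float) + t for b in basis]
mapped_point = A @ np.array(point) + t
print(np.allclose(affine_coords(mapped_point, mapped_basis), (alpha, beta)))  # True
```

The quantization, stability, and noise problems the paper analyzes arise precisely because, with real image measurements, these coordinates are only approximately preserved.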
Artificial intelligence is becoming an increasingly important subject of study for computer scientists and engineering designers, as well as professionals in other fields. Even though AI technology is a relatively new discipline, many of its concepts have already found practical applications. Expert systems, in particular, have made significant contributions to technologies in such fields as business, medicine, engineering design, chemistry, and particle physics. This paper describes an expert system developed to aid the mechanical designer with the preliminary design of variable-stroke internal-combustion engines. The expert system accomplishes its task by generating and evaluating a large number of design alternatives represented in the form of graphs. Through the application of structural and design rules directly to the graphs, optimal and near-optimal preliminary design configurations of engines are deduced.
One of the most difficult tasks a warship crew has to undertake is the ship's own defense. Nowadays, a warship is threatened by a variety of threats, launched from long distances and different platforms. The problem space of the warship is thus a highly multiaxis space. On the other hand, the warship is equipped with a variety of defense systems. A successful defense can be considered as a mapping of the solution space onto the problem space. Unfortunately, there are no known algorithmic solutions to this mapping problem that can satisfy the warship defense problem.
The engagement of pen-aided, nuclear-armed Ballistic Missile Re-entry Vehicles (RVs) by a Theatre Missile Defence (TMD) system requires a robust and adaptive discrimination system to distinguish warheads from accompanying decoys and other penetration aids. TMD systems will be characterised by their electronic countermeasure environments and the short flight times of the ballistic missile threat. In such environments, time is of the essence for TMD commanders to make effective decisions about the allocation of defence weapon systems. The identification and classification, i.e., the discrimination, of warheads in a theatre environment is therefore especially stressing, requiring detailed analysis and quantification.
Neural networks and expert systems provide different ways to reduce the programming effort required to build complex systems. Adaptive neural networks are programmed merely by training them with examples. Rule-based expert systems are developed incrementally merely by adding rules. Although neural networks seem best suited for low-level sensory processing and expert systems seem best suited for high-level symbolic processing, strikingly similar issues arise when these approaches are used in large-scale applications. Illustrative examples of such applications are presented and discussed.
In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: Navlab for the Autonomous Land Vehicle and Ambler for the Mars Rover. These two systems serve different purposes: the Navlab is a four-wheeled vehicle (van) for road and open terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.
In this talk I will describe recent progress in developing problem-solving architectures that learn from interactions with external environments. Previous work in planning and problem solving has often ignored the special constraints and uncertainties that arise from interacting with a real environment. In addition, learning was seen as an add-on that could be ignored until the planning system was complete. Similarly, most learning work has been carried out in simulated domains, or at least in domains where interaction with an environment is minimized. Recently, several systems have been developed that attempt to integrate learning and problem solving while interacting with an external environment.
A robot vision system should perform a variety of tasks accurately and robustly. The system should possess the capabilities to acquire images in a complex environment, perform low-level image processing, extract primitive features and critical information such as depth, and derive a high-level description of the scene. This paper describes extensions of a practical vision system that can acquire and process images under different viewing geometries and extract 3-D information from a pair of stereo images acquired with a single camera mounted on the hand of a robot, using a region-based stereo matching scheme. The performance of the system is tested through several experiments on a laboratory testbed.
This paper describes an architecture for the control of robotic devices, and in particular of anthropomorphic hands, characterized by a hierarchical structure in which every level of the architecture contains data and control functions with varying degrees of abstraction. The bottom levels of the hierarchy interface directly with sensors and actuators and process raw data and motor commands. Higher levels perform more symbolic types of tasks, such as the application of Boolean rules and general planning operations. The implementation of each layer has to be consistent with its type of operation and its requirements for real-time control. In the paper we present one implementation of the rule level with a Boolean artificial neural network that would have a response time sufficient for producing reflex corrective action at the actuator level.
This paper presents a new algorithm for determining the position and orientation of objects. The problem is formulated as an optimization problem using dual number quaternions. It is shown that this reduces to an eigenvalue problem for which standard software library routines can be used to obtain the solution.
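The paper's formulation uses dual-number quaternions; as a loosely related, widely known illustration of how pose estimation can reduce to an eigenvalue problem, the sketch below recovers a best-fit rotation from matched 3-D points with Horn's unit-quaternion method. This is a stand-in technique, not the authors' algorithm, and the point sets are made up for the check at the end.

```python
# Rotation from point correspondences via a 4x4 symmetric eigenvalue problem
# (Horn's unit-quaternion method; shown here only as a related illustration).
import numpy as np

def rotation_from_correspondences(src, dst):
    """Best-fit rotation, as a unit quaternion (w, x, y, z), mapping centered src points onto dst."""
    src = np.asarray(src, float) - np.mean(src, axis=0)
    dst = np.asarray(dst, float) - np.mean(dst, axis=0)
    S = src.T @ dst                                    # 3x3 correlation matrix
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    eigenvalues, eigenvectors = np.linalg.eigh(N)
    return eigenvectors[:, np.argmax(eigenvalues)]     # quaternion, defined up to an overall sign

# Quick check: points rotated by 90 degrees about z should give q ~ (cos 45, 0, 0, sin 45).
pts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
rot = [(-y, x, z) for x, y, z in pts]
print(rotation_from_correspondences(pts, rot))
```

Dual-number quaternions extend this idea so that rotation and translation are estimated jointly, which is the formulation the paper develops.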
This project explored a system for developing a knowledge base of phonetic rules for use in a subsequent, larger project for working with the speech impaired. The overall project needs several supporting knowledge bases, including one for "normal" phonemes. The concern here was developing a learning system for constructing this knowledge base, incorporating the pronunciations of a range of normal speakers. Thirteen speakers were used, training the system on the 17 vowel sounds. The system was developed on a microcomputer (PC), which is likely to be the most readily available machine to users of the overall system. The results were partially successful; suggestions for further work and improvement are discussed.
Many expert systems and relational database systems store factual information in the form of attribute values of objects. Problems arise in transforming from that attribute (frame) database representation into English surface structure and in transforming the English surface structure into a representation that references information in the frame database. In this paper we consider mainly the generation process, as it is in this area that we have made the most significant progress. In its interaction with the user, the expert system must generate questions, declarations, and uncertain declarations. Attributes such as COLOR, LENGTH, and ILLUMINATION can be referenced using the template "<attribute name> of <object>" for both questions and declarations. However, many other attributes, such as RATTLES in "What is RATTLES of the light bulb?" and HAS_STREP_THROAT in "HAS_STREP_THROAT of Dan is true.", do not fit this template. We examined over 300 attributes from several knowledge bases and have grouped them into 16 classes. For each class there is one "question" template, one "declaration" template, and one "uncertain declaration" template for generating English surface structure. The internal database identifiers (e.g., HAS_STREP_THROAT and DISEASE_35) must also be replaced by output synonyms. Classifying each attribute, in combination with synonym translation, markedly improved the English surface structure that the system generated. In the area of understanding, synonym translation and knowledge of attribute properties, such as legal values, have resulted in a robust database query capability.
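A hedged sketch of the template idea follows: each attribute class carries question, declaration, and uncertain-declaration templates, and internal identifiers are replaced by output synonyms before filling. The class names, template wordings, and synonym table are assumptions for illustration, not the 16 classes of the paper.

```python
# Class-specific surface-structure templates plus synonym translation (illustrative sketch).
TEMPLATES = {
    "measurable": {                      # e.g. COLOR, LENGTH, ILLUMINATION
        "question":    "What is the {attr} of the {obj}?",
        "declaration": "The {attr} of the {obj} is {val}.",
        "uncertain":   "The {attr} of the {obj} may be {val}.",
    },
    "boolean_state": {                   # e.g. RATTLES, HAS_STREP_THROAT
        "question":    "Does the {obj} {attr}?",
        "declaration": "The {obj} {attr}.",
        "uncertain":   "The {obj} may {attr}.",
    },
}

SYNONYMS = {"HAS_STREP_THROAT": "have strep throat", "RATTLES": "rattle",
            "COLOR": "color", "DISEASE_35": "strep throat"}

def generate(kind, attr_class, attr, obj, val=None):
    """Fill the template for the attribute's class, using output synonyms for identifiers."""
    template = TEMPLATES[attr_class][kind]
    return template.format(attr=SYNONYMS.get(attr, attr.lower()), obj=obj, val=val)

print(generate("question", "boolean_state", "RATTLES", "light bulb"))
# Does the light bulb rattle?
print(generate("uncertain", "boolean_state", "HAS_STREP_THROAT", "patient"))
# The patient may have strep throat.
```

The awkward outputs quoted in the abstract are what the single generic template produces; routing each attribute to its class-specific template is what removes them.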
A tactical (sentence) text generator designed within an object-oriented paradigm is described. The approach is based on work by Laurence Danlos, and has been extended by message well-formedness checking, message transformational operators (e.g., PASSIVIZE), and linguistic extensions (e.g., negative, interrogative and imperative sentences). The object-oriented approach is shown to provide modularity, making the generator easily extensible, and localization of data and procedures for improved maintainability. The application of this text generator to three Lockheed research projects is also described.
We present a man-machine interface in natural language for a Data Base Management System called VORAS, based upon semantics. This DBMS can also be used for knowledge representation and is well suited to the design of queries in natural language. VORAS is an object-oriented DBMS developed from a specific model of representation called PDM (Property Driven Model). A user may write a query in natural language. There is no syntactic level; most of the analysis is done at a semantic level, so it is possible to use a shorthand style.
A CAD-to-vision system is a computer system that inputs a CAD model of an object and outputs a vision model and matching procedure by which that object can be recognized and/or its position and orientation determined. CAD-model-based systems are extremely useful for industrial vision tasks where a number of different manufactured parts must be automatically manipulated and/or inspected. Another area where vision systems based on CAD models are becoming important is the United States space program. Since the space station and space vehicles are recent or even current designs, we can expect to have CAD models of these objects to work with. Vision tasks in space such as docking and tracking of vehicles, guided assembly tasks, and inspection of the space station itself for cracks and other problems can rely on model-directed vision techniques.
An important application of vision in aerial reconnaissance is the detection and classification of changes between images of the same scene taken at different times. There are a variety of factors that make this an exceedingly difficult problem: 1) The images are ordinarily taken from (slightly) different vantage points. If the images are taken at sufficiently high altitudes, then the images can, in principle, be registered by a single global coordinate transformation. Otherwise (and this is often the case) the images must be treated as a stereo pair, and the corresponding disparity field must be computed in order for the images to be compared.
A technique for analyzing dense range images of a pile of simple but unknown objects is discussed. The technique analyzes the configuration of objects in the pile and uses concepts such as stability, viewpoint independence, and object solidity to hypothesize the shapes and sizes of the objects. These hypotheses are analyzed using the known geometry of the range sensor to rule out inconsistent configurations. The final result of the analysis is one or more descriptions of the 3-D scene, each of which is consistent with the sensed data and with the constraints imposed by the physics of objects in contact.
The AI approach to vision has been heralded as reducing the computational burden of traditional bottom-up systems by applying knowledge-based control. AI-style systems use knowledge to focus attention and processing resources on the most promising hypotheses and combine information from multiple knowledge sources and/or sensors. Our case study compares the complexity of an intermediate-level grouping task with and without top-down control. The results provide clear empirical support for the claim that knowledge-directed control reduces the computation required for object identification. The Rectilinear Line Grouping System (RLGS) is a bottom-up line grouping system designed to extract manmade structures from static images. The Schema System is a knowledge-based system shell for controlling computer vision tasks. In this paper we consider the task of finding instances of two objects (telephone poles and road signs) in complex natural scenes. First we apply the RLGS in the original bottom-up manner in which it was designed, noting the number of line relations that must be computed, as well as the complexity of the graph matching task that must be performed. Then we place the RLGS primitives under the direction of the Schema System, noting the reduction in required computation.
Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.
The circuits in a military command and control network are expected to operate continuously in spite of changes and damage, and must be restored within minutes should they fail. Using object-oriented design methods as a basis for development, this paper describes an approach to the decentralized allocation of circuits in such a network. Aided by local knowledge-based allocation assistants that collectively guide a circuit-restoration message to its destination via available routes, a node operator at the destination authorizes one of the proposed circuit routes for allocation of resources. In such a network, a common software-based assistant distributed to each node can aid operators in the rapid reconstruction of a badly damaged network. This approach also provides a planning aid for tactical network designers, who can use it to model nodes, trunk lines, channels, and circuits. Distributed allocation allows network planners to forgo stored restoration plans as their principal means of maintaining service; in time of war, such plans cannot accommodate multiple outages or respond to rapidly changing needs. Node operators, making use of knowledge-based allocation assistants, can make the necessary decisions.
A military operations order contains instructions for the maneuver and fire support elements, among others. In practice, possible maneuver plans are constructed from an approved course of action and it is the responsibility of the field artillery staff representative to respond with supporting fire support plans. The Organization for Combat tool (OFC) can produce fire support plans of a quality commensurate with Command and General Staff School graduates, validated through the system's successful completion of the appropriate course final examinations.
We describe the essential features of an expert system that helps nonexpert designers analyze and design computer networks. The major components of this expert system are the executive, the knowledge management system, the user interface system, the modeler, the analyzer and the synthesizer. A combination of object-oriented techniques, rules and frames is used to represent the application's descriptive and problem-solving knowledge. Principles of planning systems are used to design a network satisfying the user's requirements.
Intelligent behaviour in image processing is highly desirable. In this context 'intelligence' includes the efficacious use of techniques, consideration of context and adaptivity to situations. Analysis of human pattern recognition and interpretation can give powerful insights into the possibilities for automation or part automation of the image analysis process. A study has been made of the analysis of photoelastic fringe patterns. Low level image processing techniques can be very powerful, but they suffer from a number of deficiencies, notably a lack of discernment and the imposition of characteristics upon the image. Aspects of knowledge about the physical problem, the problem geometry, the relationships between fringes within a pattern and the characteristics imposed by low-level processing have all been employed to produce better results than those obtainable from low-level processing.
Navigation of autonomous vehicles in environments where the exact locations of obstacles are known has been the focus of research for two decades. More recently, algorithms for controlling progress through unknown environments have been proposed. The use of knowledge-based systems for studying the behavior of an autonomous vehicle has not received much study. A knowledge-driven autonomous system simulation was developed that enabled an autonomous mobile system to move in a two-dimensional environment and to use a simulated ranging/vision sensor to test whether a selected goal position was visible or whether the goal was obscured by one of multiple polygonal obstacles. As the mobile system gains information about the location of obstacles, that information is added to the system's knowledge base. Considerable attention was given to computing which vertices were mutually visible in the multi-obstacle environment, and that computation was carried out in Lisp. The study relied on a program implemented in a generalized decision-making paradigm, OPS5.
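As a sketch of the mutual-visibility computation mentioned above (the obstacle representation and the proper-crossing test are assumptions for illustration, not the Lisp implementation used in the study), two points are declared mutually visible when the segment joining them properly crosses no obstacle edge.

```python
# Mutual visibility of two points among polygonal obstacles (illustrative sketch).
def ccw(a, b, c):
    """Positive if a-b-c turns counterclockwise, negative if clockwise, 0 if collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True when segments p1-p2 and q1-q2 properly intersect (no shared endpoints or tangency)."""
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def visible(a, b, obstacles):
    """a and b are mutually visible if no polygon edge properly blocks the segment a-b."""
    for poly in obstacles:
        for i in range(len(poly)):
            if segments_cross(a, b, poly[i], poly[(i + 1) % len(poly)]):
                return False
    return True

square = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(visible((0, 0), (3, 2), [square]))   # False: the segment passes through the square
print(visible((0, 0), (3, 0), [square]))   # True: the segment passes below it
```

Running this test over all vertex pairs yields the visibility graph on which a shortest-path planner can then operate.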
Processing and analysis of terrain data are essential tasks of path planning and navigation. This paper describes an assortment of massively parallel algorithms for processing and analysis of terrain data to support path planning and navigation. These algorithms were implemented on the Martin Marietta geometric arithmetic parallel processor array consisting of 108 x 384 processing elements. Their applications in path planning and navigation of low-altitude combat aircraft are discussed.
This paper describes the specific advantages of the knowledge-based approach for the assembly process planning. The assembly plan is elaborated off-line from the high level description of the final product to be achieved. Practical implementations and results, relative to the assembly of an electric signalling relay, are presented and discussed.
An autonomous real-time navigation system can be organized into four hierarchical layers, which are presented in this paper. Path planning and kinematic path generation, two A.I.-based processes, are described with emphasis on their heuristics. The software implementation allows real-time obstacle avoidance, and its results are discussed.
A learning autonomous robot must learn from sensor data and must decide what topics to learn about. We present the method of resolution-limited quantization for learning from sensor data and the method of histogram density to guide the process of topic selection. The methods are complementary in that they use the same knowledge representation. We describe a program, GRID, which implements these methods. We present an example run of this program learning in the domain of a simulated mobile robot.
An adaptive pattern recognition network is described that has several internal feature selection layers. Bayes rule combines features and derives each layer from its predecessor, starting from two features per node in the first internal layer. Nodes in higher-order layers involve more features than those in lower-order layers. Each node in the last internal layer involves all the input features and is constructed from a different combination of features. A confidence combination layer then combines the recognition confidences of the nodes in the last internal layer; this layer dynamically selects only the most significant (weighted) nodes for each class. Our network provides rapid incremental learning from new training samples, dynamic introduction of new classes and new features, and the exclusion of existing classes and features without retraining on the modified data. We illustrate our method by comparing empirical error rates obtained by applying the layered network, a single-internal-layer network, and the Bayes quadratic decision rule to the ubiquitous IRIS data.
This paper presents some basic algorithms for manipulating decision trees with thresholds. The algorithms are based on discrete decision theory. This algebraic approach to discrete decision theory provides, in particular, syntactic techniques for reducing the size of decision trees. If one takes the view that the object of a learning algorithm is to give an economical representation of the observations, then this reduction technique provides the key to a method of learning. The basic algorithms that support the incremental learning of decision trees are discussed, together with the modifications required to perform reasonable learning when threshold decisions are present. The main algorithm discussed is an incremental learning algorithm which works by maintaining an association irreducible tree representing the observations. At each iteration a new observation is added and an efficient reduction of the tree enlarged by that example is undertaken. The results of some simple experiments are discussed which suggest that this method of learning holds promise and may in some situations outperform standard heuristic techniques.
As the need for knowledge-based systems increases, an increasing number of domain experts are becoming interested in taking a more active part in building knowledge-based systems. However, such a domain expert often must deal with a large number of unfamiliar terms, concepts, facts, procedures, and principles based on different approaches and schools of thought. He (for brevity, we shall use masculine pronouns for both genders) may need the help of a knowledge engineer (KE) in building the knowledge-based system but may encounter a number of problems. For instance, much of the early interaction between him and the knowledge engineer may be spent in educating each other about their separate kinds of expertise. Since the knowledge engineer will usually be ignorant of the knowledge domain while the domain expert (DE) will have little knowledge about knowledge-based systems, a great deal of time will be wasted on these issues as the DE and the KE train each other to the point where a fruitful interaction can occur. In some situations, it may not even be possible for the DE to find a suitable KE to work with because he has no time to train the latter in his domain. This engenders the need for the DE to be more knowledgeable about knowledge-based systems and for KEs to find methods and techniques which will allow them to learn new domains as fast as they can. In any event, it is likely that the process of building knowledge-based systems will be smoother and more efficient if the domain expert is knowledgeable about the methods and techniques of knowledge-based systems building.
We take advantage of improvements in optical devices for multimedia data storage, together with the growing use of AI, to apply AI techniques and ideas to very flexible retrieval from an image database. We study a progressive retrieval based on "relevance feedback", processed through a man-machine dialogue in which images play an important role. In the knowledge-based system we propose, EXPRIM, the user chooses fitting images from a first set of images displayed to him (following a textual request or browsing in the image base). The system then attempts to formulate a better request by trying to understand the user's need through his choices: the chosen images are positive illustrations of it, while the rejected ones are negative illustrations. This may be viewed as machine learning from positive and negative examples, the concept to be learned being the user's need. In the machine-learning-based prototype we have written in SMALLTALK on a SUN coupled with a videodisk reader, we have tried to compare, adapt, and mix some existing learning techniques. The prototype in hand is being tested on a pilot application and progressively enhanced by adding and changing various heuristics of the knowledge base. We assume that, beyond pure image retrieval, this kind of progressive requesting system may be very well suited to other applications, especially image-based Computer Aided Education and diagnosis by image.
The generalization properties of a class of neural architectures can be modelled mathematically. The model is a parallel predicate calculus based on pattern recognition and self-organization of long-term memory in a neural network. It may provide the basis for adaptive expert systems capable of inductive learning and rapid processing in a highly complex and changing environment.
In its simplest form, linear interpolation on a discrete grid reduces to a special case of the subjective-contour problem: finding the straightest path between two boundary points of the grid. Linear interpolation using local information is hard because straight lines running counter to the grid do not appear straight locally. We present a network which performs approximate linear interpolation using simple arithmetic elements with nearest-neighbor interconnections. The network represents the line between two boundary points as a profile of activation across the grid of elements. The activation of each element counts the number of direct grid paths from that element to the two boundary points. Because these counts can span an enormous range, the network must trade off its performance against the limited dynamic range of real elements.
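The following sketch reproduces that activation profile with exact integer counting (the grid size and boundary points are made up): each element's activation is the number of direct, monotone grid paths linking it to the two boundary points, so the largest values fall on the discrete approximation to the straight line, and the counts grow quickly enough to show the dynamic-range problem noted above.

```python
# Activation profile from counting direct (monotone) grid paths to two boundary points.
from math import comb

def direct_paths(p, q):
    """Number of monotone grid paths between p and q (steps only toward q)."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return comb(dx + dy, dx)

def activation_profile(size, a, b):
    """Activation of each element = paths to a * paths to b, i.e. paths through the element."""
    return [[direct_paths((r, c), a) * direct_paths((r, c), b) for c in range(size)]
            for r in range(size)]

for row in activation_profile(5, (0, 0), (4, 4)):
    print(row)
# The largest activations in each row lie along the diagonal joining the two boundary points.
```

Even on this 5x5 grid the counts already vary by nearly two orders of magnitude; on large grids they overflow any fixed dynamic range, which is why the network described above must approximate.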
This paper describes an experiment in recognizing simple hand-drawn shapes on the basis of generic features which are psychologically motivated. A coarse coding scheme is used to represent the input features. The input features are mapped to the appropriate output category in a single-layer neural network using three different learning rules: the Hebbian rule, the Delta rule, and a modification of the Hebbian rule. The shape recognition algorithm was tested in three different domains with results comparable to conventional recognition techniques. The advantage of the scheme proposed here is its generality, and its ability to learn from examples.
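As a minimal, generic illustration of the single-layer learning described above (the coarse-coded patterns, learning rate, and epoch count are invented for the example, and only the Delta rule is shown, not the Hebbian variants), the sketch below trains a linear layer to map feature vectors to two shape categories.

```python
# Single-layer network trained with the Delta (LMS) rule on coarse-coded features.
import numpy as np

def train_delta(features, targets, epochs=200, lr=0.1):
    """features: (n_samples, n_features); targets: (n_samples, n_classes), one-hot."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(features.shape[1], targets.shape[1]))
    for _ in range(epochs):
        output = features @ W                                        # linear units
        W += lr * features.T @ (targets - output) / len(features)    # Delta rule update
    return W

def classify(W, x):
    return int(np.argmax(x @ W))

# Toy "coarse-coded" patterns for two shape categories.
X = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]], dtype=float)
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
W = train_delta(X, Y)
print([classify(W, x) for x in X])    # expected: [0, 0, 1, 1]
```

The coarse coding itself, which is where the psychologically motivated generic features enter, is outside the scope of this snippet.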
A technique using neural networks as a means of diagnosing specific abnormal conditions or problems in nuclear power plants is investigated and found to be feasible. The technique is based on the fact that each physical state of the plant can be represented by a unique pattern of instrument readings, which can be related to the condition of the plant. Neural networks are used to relate this pattern to the fault or problem.
Pitch deposits in the production of pulp from wood are a very significant problem in the pulp and paper industry; conservative estimates place the cost at around $30 million a year. At present, problems in this domain are handled by human experts whose time might otherwise be available for research. Development of an expert system in this area would be expected to have several beneficial effects: quantitatively, it would decrease downtime, allowing for greater production, and qualitatively, it would decrease the occurrence of contamination in the final pulp.
We have developed a proof-of-concept prototype of an expert system for tuning particle beam accelerators. It is designed to function as an intelligent assistant for an operator. In its present form it implements the strategies and reasoning followed by the operator for steering through the beam transport section of the Advanced Test Accelerator at Lawrence Livermore Laboratory's Site 300. The system is implemented in the language LISP using the Artificial Intelligence concepts of frames, daemons, and a representation we developed called a Monitored Decision Script.
An expert system for diagnosing defects and analyzing their causes in the manufacture of integrated circuit (IC) boards has been designed and built. The system architecture supports emulation of two important aspects of an expert's problem solving: first, the expert's ability to diagnose the defect quickly after a cursory examination of the IC board; second, the expert's ability to refocus attention quickly on likely defect candidates if the initial considerations fail. Our approach uses rule classification and system architecture to accomplish this. The additional benefits are a faster response time, a dramatic reduction in the size of the rule base, and the ability for the user to select a level of expertise in the expert system to match his own.
The purpose of this expert system is to assess a predisposition to bleeding in a patient undergoing a tonsillectomy and/or adenoidectomy as may occur with patients who have certain blood conditions such as hemophilia and von Willebrand's disease. This goal is achieved by establishing a correlation between the patients' responses to a medical questionnaire and the relative quantities of blood lost during the operation.
The COULTER COUNTER® Model S Plus Series instruments are automated clinical hematology blood cell analyzers which measure the count, volume, and population distribution of red blood cells, white blood cells, and platelets, and hemoglobin, from patient blood samples. In the clinical laboratory environment, instrument startup consists of a number of component and system checks to assure proper operation and calibration and to ensure that reliable results are produced on patient samples. If a startup check fails, troubleshooting procedures are provided to assist the operator in determining the cause of the error. Troubleshooting requires expertise in instrument operation, troubleshooting procedures, and evaluation of the data produced. This expert system is designed and developed to assist the startup diagnostics of COULTER COUNTER Model S Plus Series instruments. The system reads data produced by the instrument and validates it against expected values. If the values are not all correct, troubleshooting starts. Troubleshooting is handled for the most common subsystem problems, those the operator has the equipment and knowledge to handle, and those that are cheapest and quickest to fix. The expert system restarts the startup sequence whenever troubleshooting has been successful, or recommends calling Customer Service when it is unsuccessful.
Analyses of the shop scheduling domain indicate that the objective of scheduling is the determination and satisfaction of a large number of diverse constraints. Many researchers have explored the possibilities of scheduling with the assistance of dispatching rules, algorithms, heuristics, and knowledge-based systems. This paper describes the development of an experimental knowledge-based planning and scheduling system which marries traditional planning and scheduling algorithms with a knowledge-based problem-solving methodology in an integrated blackboard architecture. This system embodies scheduling methods and techniques which attempt to minimize one or a combination of scheduling parameters including completion time, average completion time, lateness, tardiness, and flow time. Preliminary results utilizing a test-case factory involved in part production are presented.
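As a small, hedged example of the classical scheduling algorithms such a system can embody, the Python sketch below applies the earliest-due-date (EDD) rule, which minimizes maximum lateness on a single machine; the job data are invented and the sketch is not the paper's system.

# EDD dispatching on one machine: sequence jobs by due date and report max lateness.
jobs = [("A", 4, 10), ("B", 2, 6), ("C", 6, 14)]   # (name, processing time, due date)

def edd_schedule(jobs):
    order = sorted(jobs, key=lambda j: j[2])        # earliest due date first
    t, max_late = 0, float("-inf")
    for name, proc, due in order:
        t += proc                                   # completion time of this job
        max_late = max(max_late, t - due)           # lateness (negative = early)
    return [j[0] for j in order], max_late

print(edd_schedule(jobs))                           # (['B', 'A', 'C'], -2)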
Manufacturing applications of expert systems offer industry methods to increase overall productivity while lowering operating costs. Knowledge-based system technology has long since reached the point where proof-of-principle systems can evolve into realizable aids for engineers in all phases of the manufacturing process. Reflecting this growing area of interest to manufacturing concerns, this paper reviews several knowledge-based systems in the areas of diagnostics, planning and scheduling, control, and design. Systems in each of these areas are discussed with an emphasis on their application, knowledge-based system design elements, and level of implementation.
This interim report describes an expert system prototype being developed to palletize boxes of unequal sizes. Finding feasible arrangements for boxes of various sizes is an important problem, particularly for distribution centers. Previous efforts have focused on palletizing equal size boxes, or on the problem of unequal size containers where the inequalities are only in two dimensions. Though useful starting points, both approaches are overly restrictive in their scope. This report describes an expert system prototype being developed at Georgia Tech that integrates domain heuristics with packing algorithms to produce optimal packing configurations. As the nature of the palletizing problem varies for each distribution center, the expert system uses several application-specific assumptions to tailor the system to a specific pallet packing approach. Results demonstrating optimal palletizing for column stacking domains are presented and future plans are outlined.
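For readers unfamiliar with column stacking, the following Python sketch shows one simple heuristic of that family: boxes with the same footprint are stacked into columns up to a pallet height limit. The pallet dimensions, box format, and grouping rule are assumptions of this sketch and do not represent the Georgia Tech prototype.

from collections import defaultdict

PALLET_HEIGHT = 60    # assumed height limit, in inches

def column_stack(boxes):
    """boxes: list of (footprint_w, footprint_d, height). Returns stacked columns."""
    groups = defaultdict(list)
    for w, d, h in boxes:
        groups[(w, d)].append(h)                    # one column family per footprint
    columns = []
    for (w, d), heights in groups.items():
        column, used = [], 0
        for h in sorted(heights, reverse=True):
            if used + h > PALLET_HEIGHT:            # column full: start a new one
                columns.append(((w, d), column))
                column, used = [], 0
            column.append(h)
            used += h
        if column:
            columns.append(((w, d), column))
    return columns

print(column_stack([(12, 10, 20), (12, 10, 25), (12, 10, 30), (8, 8, 15)]))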
This paper illustrates the development of a prototype expert system to simulate the current practice of log bucking/allocation operations at a wood product manufacturing facility with three different mills. Limitations with the prototype and technical needs to develop a full scale expert system for log bucking/allocation operations are discussed.
A car assembly plant is a large and complex system. Many thousands of tasks must be performed correctly in appropriate sequence if the plant is to achieve its objective of producing correctly assembled vehicles at the design rate. Consequently the design, construction, and running of such plants presents formidable management problems. The source of many of these problems is the interdependence of the numerous operations that must be carried out in the process of building a car. This paper describes CADAVER (Car Assembly line Dependency and VERification), a prototype system developed at CMI which is intended to assist plant designers and managers to understand and manipulate the interdependency relationships which exist between tasks carried out in the plant.
This paper describes a knowledge-based prototype that inspects and quality assures software components. The prototype model, which offers a singular representation of these components, is used to automate both the mechanical and nonmechanical activities in the quality assurance (QA) process. It will be shown that the prototype, in addition to automating the QA process, provides a novel approach to understanding code. Our approaches are compared with recent approaches to code understanding. The paper also presents the results of an experiment with several classes of nonsyntactic bugs. It is argued that a structured environment, as facilitated by our unique architecture along with "software development standards" used in the QA process, is essential for meaningful analysis of code. Initial success with the prototype has generated several interesting directions for future work.
Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lacks persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition of and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend selectively to the more critical stimuli or requirements. Our match algorithm uses priorities to focus matching: it compares the priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.
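The following Python sketch illustrates the two-partition idea in the simplest possible terms: a constant-time table lookup plays the role of the automatic partition, and a priority queue orders the work of the controlled partition. The message names, priorities, and data structures are assumptions of this sketch and are not the authors' Rete-based implementation.

import heapq

# "Automatic" partition: critical condition -> immediate reactive response (O(1) lookup).
AUTOMATIC = {
    "OVERTEMP": "shut_down_heater",
    "PROXIMITY_ALARM": "halt_motion",
}

controlled_agenda = []          # (negated priority, message): heapq is a min-heap

def post(message, priority=0):
    """Route a message: critical ones fire reactively, others join the agenda."""
    if message in AUTOMATIC:
        return ("react", AUTOMATIC[message])        # interrupts deliberation
    heapq.heappush(controlled_agenda, (-priority, message))
    return ("queued", message)

def step_controlled():
    """One cycle of the search-based partition, highest-priority match first."""
    if controlled_agenda:
        _, message = heapq.heappop(controlled_agenda)
        return ("deliberate", message)
    return ("idle", None)

print(post("sensor_update", priority=2))
print(post("OVERTEMP"))         # handled immediately by the automatic partition
print(step_controlled())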
Temporal reasoning, which is a way of pursuing goals and drawing inferences based on events occurring over time, plays an important role in automated planning systems and in common sense reasoning generally. This work is an attempt to explore the problems involved in reasoning over time, which typically involve updating a plan structure with changing world patterns. This requires developing an appropriate knowledge representation in addition to a plan generation system. A deductive retrieval mechanism, tailored to the needs of temporal retrievals, has been implemented. Uncertainty due to incomplete information and indecision is resolved using fuzzy values and dynamic resolution over a temporal database. Imprecise temporal information is captured in fuzzy intervals, each made up of a beginning hour and an ending hour. The system can find the tightest possible bounds on a possible event or step in a plan. The system user provides the constraint information for plan development, which is combined with basic domain information in the knowledge base. A plan, or set of steps satisfying the temporal constraints, is then presented based upon the constraints and domain information. A fuzzy belief in the chance of the plan's success is associated with the information provided by the system.
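A minimal Python sketch of how "tightest possible bounds" might be derived from hour intervals is shown below. The interval contents, the min/max intersection rule, and the crude belief measure are assumptions of this sketch, not the paper's fuzzy machinery.

def tightest_bounds(constraints):
    """Each constraint is an (earliest_hour, latest_hour) interval.
    The tightest bound is their intersection; None means the constraints conflict."""
    lo = max(c[0] for c in constraints)
    hi = min(c[1] for c in constraints)
    return (lo, hi) if lo <= hi else None

def belief(interval, constraints):
    """Crude fuzzy belief: fraction of constraints that fully contain the bound."""
    if interval is None:
        return 0.0
    lo, hi = interval
    inside = sum(1 for a, b in constraints if a <= lo and hi <= b)
    return inside / len(constraints)

cs = [(8, 17), (10, 14), (9, 16)]       # e.g. admissible hours for one plan step
bounds = tightest_bounds(cs)
print(bounds, belief(bounds, cs))       # (10, 14) 1.0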
Studies of the coherence of knowledge-based systems (KBSs) are very often carried out in parallel with the acquisition process (TEIRESIAS, ONCOCIN, CHECK). We must now also consider coherence studies carried out after the acquisition process and before the knowledge is used: such studies take place during the validation process of a KBS. They have to be specified by contract between the vendor-author and the customer-user, and they contribute to the acceptance process.
This paper presents the JIBUS expert system, installed in its "deferred time" version at the European Space Agency (ESA) since July 1985. This system aims at helping test centre operators in their job of managing the batteries on board satellites.
The Expert Requirements Expression and System Synthesis (EXPRESS) environment is being developed at the Lockheed Software Technology Center in Palo Alto, California. EXPRESS provides rapid prototyping and will support full-scale engineering development (FSED) via integrated, knowledge-based, executable specifications and related capabilities. That is, EXPRESS provides "automatic programming" via two key technologies: (1) executable specifications, written in very high-level languages (VHLLs), and (2) knowledge base technology. Users of EXPRESS, however, will be primarily aerospace systems engineers and applications specialists. Most of them will not be familiar with these technologies, and many of them will be infrequent users. The EXPRESS human interface therefore emphasizes ease of learning, use, and remembering for these users.
In this paper we present a design for a multilevel planning system which addresses some of the real-time aspects of planning for threat response. Our approach utilizes multiple knowledge representations and a coupled system of AI and conventional models. The motivation behind this design is to maximize the amount and depth of knowledge which can be utilized, depending upon the amount of time available to plan. This research is part of the continued development of Grumman's Rapid Expert Assessment to Counter Threats (REACT) system, designed to aid pilots in air combat decision making. REACT consists of cooperating expert systems which communicate through, and are controlled by, a blackboard architecture. This paper concentrates on the REACT module which deals with fast-response planning for combat maneuvering at low altitude over hilly terrain. REACT research has led to many interesting and potentially useful results applicable to general autonomous vehicle control architectures. In particular, work on integrating into REACT the capability to reason about the tactical use of terrain has suggested guidelines for knowledge base design and data management, system and language specifications, and planner architectures pertinent to real-time coupled systems. We also describe the associated implementation progress, in which the experimental planner is being integrated into the multi-language modular system.
The satisfiability problem is shown to be a formulation suitable for problems in planning, scheduling, solid modeling, and related areas which cannot be solved by the unsatisfiability formulation (logic programming). A reduction procedure is developed to prove satisfiability, using a geometrical interpretation of the logical expression to be satisfied. The procedure is shown to be applicable to first-order clauses as well.
This paper concerns the efficient management of very large logic programs stored in secondary storage by proposing a physical data organization scheme called an extended concatenated code word (ECCW), which is based on the surrogate file concept. The ECCW is constructed by concatenating transformed code words obtained from the arguments. Associated with each code word are two fields: a tag field and a value field. The tag field can represent any argument type, including lists and structured terms as well as variables and constants. The value field contains the transformed representation of the corresponding argument according to the content of its tag field. The ECCW uses several storage encoding techniques: multilevel coding to represent nested structures using a normalizing storage model, tagged coding to discriminate attribute types such as variables, lists, complex terms, and constants, and storage partitioning and tag collection to reduce the search space.
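The Python sketch below shows, in miniature, how a tag field and a value field per argument could be concatenated into a single surrogate word. The tag codes, field widths, and CRC-based value transform are assumptions of this sketch; they are not the ECCW definition given in the paper.

import zlib

TAGS = {"constant": 0b00, "variable": 0b01, "list": 0b10, "structure": 0b11}

def code_word(arg):
    """Return (tag, value) for one argument; value is a 16-bit transform."""
    if isinstance(arg, str) and arg[:1].isupper():
        return TAGS["variable"], 0                   # variables carry no value
    if isinstance(arg, list):
        tag = TAGS["list"]
    elif isinstance(arg, tuple):
        tag = TAGS["structure"]
    else:
        tag = TAGS["constant"]
    return tag, zlib.crc32(repr(arg).encode()) & 0xFFFF

def eccw(args):
    """Concatenate per-argument code words into one integer surrogate."""
    word = 0
    for arg in args:
        tag, value = code_word(arg)
        word = (word << 18) | (tag << 16) | value    # 2-bit tag + 16-bit value
    return word

print(hex(eccw(["father", "X", ["a", "b"]])))        # surrogate for one fact's arguments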
Reconfiguration is a method of by-passing hardware failures in a fault tolerant oil well logging system being built by Schlumberger. An expert system, implemented in PROLOG, has been built which, given a failure, aids in reconfiguring the system. Deriving reconfigurations exhibits characteristics of the frame problem in AI: modifications to the system made to by-pass failures may invalidate other parts of the system, which in turn must be modified, and so on. It is important, however, that changes unrelated to the initial modification not be made. To address this problem, first we describe the well logging system as a relation scheme, and note the data-dependencies which exist over its attributes. This allows us to describe the well logging system as a set of normalized relations. The normalization is such that it groups together, at a level of finest granularity, those attributes, and only those attributes, which determine or are determined by one another. These normalized relations serve as the basis for reasoning about change in a search for all, and only those, reconfigurations with changes which are a consequence of by-passing the failure.
In this paper we present a new scheme for intelligent backtracking in Horn-clause programs. It is to be observed that variables give more information about the cause of a failure than predicates do. The scheme suggested in this paper makes use of this information to eliminate a great deal of redundant backtracking. The suggested scheme requires less overhead than the scheme proposed by Vipin Kumar and is easy to implement. Our scheme makes use of the observation that a variable's instantiated value is not altered unless there is a failure on that variable and the system backtracks to the step at which the variable was instantiated. Such a feature is not exploited by conventional Prolog interpreters. We present an algorithm for the proposed intelligent backtracking scheme and illustrate it with examples. The extra overhead required by this algorithm is the failure list, which contains the variables that could have caused the failure; the changed list, which contains the variables whose values have changed during backtracking; and the variable look-up table, which contains the step numbers at which the variables were instantiated.
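The toy Python sketch below shows only the bookkeeping named in the abstract (failure list, changed list, variable look-up table) and how it could select a backtrack point; the binding model and the choice of "most recent culprit step" are simplifications of this sketch, not the authors' algorithm.

instantiated_at = {}   # variable -> step number at which it was bound
bindings = {}          # variable -> current value
failure_list = []      # variables that could have caused the last failure
changed_list = []      # variables whose values changed while backtracking

def bind(var, value, step):
    bindings[var] = value
    instantiated_at[var] = step

def fail(vars_in_failed_goal):
    """Record candidate culprits and return the step to backtrack to:
    the most recent step at which any culprit variable was instantiated."""
    failure_list[:] = [v for v in vars_in_failed_goal if v in instantiated_at]
    if not failure_list:
        return None                                  # nothing to undo
    return max(instantiated_at[v] for v in failure_list)

def undo_after(step):
    """Unbind everything instantiated after the chosen backtrack point."""
    for var in [v for v, s in instantiated_at.items() if s > step]:
        changed_list.append(var)
        del bindings[var], instantiated_at[var]

bind("X", 1, step=1); bind("Y", 2, step=2); bind("Z", 3, step=3)
target = fail(["Y", "Z"])      # failure involves Y and Z -> backtrack to step 3
undo_after(target - 1)         # keep bindings made strictly before that step
print(target, bindings, changed_list)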
Current advances in technology and the sophistication of the modern battlefield have encouraged researchers to explore the concept of autonomy as the next step in vehicle technology. As a result of this, a number of autonomous and semi-autonomous vehicle systems are being developed for land, sea, and airborne applications. This paper describes a sampling of the most prominent of these systems to provide an overview of how this technology is evolving.
Autonomous systems require the ability to analyze their environment and develop responsive plans of action. Autonomous vehicle research has led to the development of several land, sea, and air vehicle prototypes. These systems integrate vision, diagnostics, planning, situation assessment, tactical reasoning, and intelligent control at a variety of levels to function in limited environments or in computer simulation. Route planning in these systems has historically relied on purely numerical computations unable to adapt to the dynamic nature of the world. This paper describes a knowledge-based system for autonomous route planning that has been applied to airborne vehicles. The specific focus is the vehicle model knowledge source, which validates routes against the physical capabilities of the helicopter system. An overview of the autonomous helicopter is presented to establish system context, and specific results in validated route planning are given.
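As a hedged illustration of what a vehicle-model route check can look like, the following Python sketch validates route legs against assumed performance limits; the limit values, the leg format, and the speed/climb tests are assumptions of this sketch and do not represent the actual helicopter model.

MAX_SPEED_KTS = 140    # assumed maximum cruise speed
MAX_CLIMB_FPM = 1500   # assumed maximum climb rate

def validate_route(legs):
    """legs: list of (distance_nm, altitude_change_ft, time_min).
    Returns indices of legs the vehicle model rejects."""
    rejected = []
    for i, (dist, dalt, minutes) in enumerate(legs):
        speed = dist / (minutes / 60.0)              # knots required for this leg
        climb = abs(dalt) / minutes                  # feet per minute required
        if speed > MAX_SPEED_KTS or climb > MAX_CLIMB_FPM:
            rejected.append(i)
    return rejected

route = [(10, 500, 5), (30, 4000, 10), (8, 200, 4)]
print(validate_route(route))   # leg 1 needs 180 kts -> rejected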
It is widely expected that combat aircraft of the 1990s will carry a new generation of avionic systems with more highly integrated hardware and software, involving innovative software for signal processing and sensor fusion and, especially, expert system software to reduce pilot workload and improve system performance. AI theories, methodologies, and techniques seem generally adequate to these purposes, even for complex applications such as pilot assistance. In some cases it is not yet completely clear whether the state of the art in this technology is adequate to meet the needs of such a complex project, and we are still in a phase in which the cost-effectiveness of AI techniques must be fully demonstrated. Many companies are carrying out research and projects to evaluate the suitability, maturity, and costs of these techniques. An effective approach to the acquisition and use of AI techniques may be the definition of a broad project involving the development of prototypes with increasing functions and performance. The real challenge of intensive and rapid prototyping is twofold: from the technical point of view, one can investigate technologies and gather information on the suitability and adequacy of certain techniques; from the project management point of view, one can redefine the purposes of the project and their timing in light of the experience gained. In this paper we describe the methodologies and techniques employed to develop an expert system for pilot assistance during route planning or replanning, the functional characteristics of a first prototype running on a Lisp machine, and its current architecture. This prototype is able to provide the pilot with dynamic information about the geography of the terrain (accessing an object-oriented database), the tactical situation, the weather conditions, and the current state of the aircraft; in addition, static information about threat characteristics, fuel consumption, aircraft configuration, and the pre-planned route is available through an interface simulating a Head-Down Display. The system is able to plan complete ground attack missions and, in particular, to suggest the most suitable push-up point for the run-in on target. The experience gathered during the development of this prototype has been very useful in defining the architecture of a more powerful prototype which is now being developed.
A multisensorial vision system for autonomous vehicle driving is presented that operates in outdoor natural environments. The system, currently under development in our laboratories, will be able to integrate data provided by different sensors in order to achieve a more reliable description of a scene and to meet safety requirements. We chose to perform high-level symbolic fusion of the data to better accomplish the recognition task. A knowledge-based approach is followed, which provides a more accurate solution; in particular, it makes it possible to integrate both the physical data furnished by each channel and different fusion strategies, by using an appropriate control structure. The high complexity of data integration is reduced by acquiring, filtering, segmenting, and extracting features from each sensor channel. Production rules, divided into groups according to specific goals, drive the fusion process, linking to a symbolic frame all the segmented regions characterized by similar properties. As a first application, road and obstacle detection is performed. A particular fusion strategy is tested that integrates results obtained separately by applying the recognition module to each sensor according to the related model description. Preliminary results are very promising and confirm the validity of the proposed approach.
With the growing degree of office automation and the decreasing costs of storage devices, it becomes more and more attractive to store optically scanned documents such as letters or reports in electronic form. The need for a good paper-computer interface therefore becomes increasingly important. This interface must convert paper documents into an electronic representation that captures not only their contents but also their layout and logical structure. We propose a procedure that describes the layout of a document page by dividing it recursively into nested rectangular areas; a semantic meaning is assigned to each area by means of logical labels. The procedure is used as a basis for mapping a hierarchical document layout onto the semantic meaning of the parts of the document. We analyse the layout of a document using a best-first search in this tessellation structure. The search is directed by a measure of similarity between the layout pattern in the model and the layout of the actual document. The validity of a hypothesis for the semantic labelling of a layout block can then be verified: the evidence either supports the hypothesis or initiates the generation of a new one. The method has been implemented in Common Lisp on a SUN 3/60 workstation and has been run on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of the approach.
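In the spirit of that best-first, model-directed labelling (though not the authors' Common Lisp implementation), the Python sketch below scores candidate (block, label) hypotheses by similarity to a small layout model and assigns labels most-promising-first; the model rectangles, block format, and scoring function are invented for illustration.

import heapq

MODEL = {                       # expected (x, y, w, h), in page fractions, per logical label
    "sender":  (0.05, 0.05, 0.30, 0.10),
    "date":    (0.65, 0.05, 0.30, 0.05),
    "subject": (0.05, 0.30, 0.90, 0.05),
    "body":    (0.05, 0.40, 0.90, 0.50),
}

def similarity(block, proto):
    """Higher when the block's rectangle is close to the prototype rectangle."""
    return -sum(abs(a - b) for a, b in zip(block, proto))

def label_blocks(blocks):
    agenda = []                                     # max-heap via negated similarity
    for i, blk in enumerate(blocks):
        for lab, proto in MODEL.items():
            heapq.heappush(agenda, (-similarity(blk, proto), i, lab))
    assigned, used_blocks, used_labels = {}, set(), set()
    while agenda and len(used_labels) < len(MODEL):
        _, i, lab = heapq.heappop(agenda)           # best remaining hypothesis first
        if i not in used_blocks and lab not in used_labels:
            assigned[lab] = blocks[i]
            used_blocks.add(i); used_labels.add(lab)
    return assigned

print(label_blocks([(0.06, 0.04, 0.28, 0.09), (0.64, 0.06, 0.31, 0.05)]))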
The 3-D vision system we developed uses laser scanning and simultaneously produces range and intensity images covering a wide area. 3-D vision is indispensable in image processing for factory automation. Conventional, practical slit-light techniques using a TV camera have a limited, narrow measurement area, take too long to accept input images, and cannot produce range and intensity images simultaneously. We developed a camera we call the 3-D imager and a vision system based on it. The 3-D imager uses a laser diode beam to scan the measured area and obtains range and intensity data at all points on the scan line. Range measurement is based on triangulation. The vision system, which consists of a 32-bit CPU (68020) and 12M bytes of image memory, has three main features: (1) 3-D measurement covers a 2048-by-3076-pixel image formed in one image input sequence. (2) Measurement is fast: the system takes 12 seconds to produce data for an entire 6-million-pixel area. (3) The system processes range and intensity data simultaneously. The 256-height-level range image is used to determine an object's shape, and the 256-gray-level intensity image to determine surface texture, markings, and other features. When used to inspect PC boards, the system detected missing, shifted, and floating components. The inspection resolution is 125 µm along the X and Y axes and 30 µm along the Z axis.
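For readers unfamiliar with laser triangulation, the short Python sketch below shows the range principle for one common geometry (spot offset measured toward the laser); the baseline, focal length, and angles are assumed values, not the 3-D imager's actual optics.

import math

BASELINE_MM = 100.0    # laser-to-detector separation (assumed)
FOCAL_MM = 25.0        # detector lens focal length (assumed)

def range_from_offset(spot_offset_mm, beam_angle_deg):
    """Estimate range Z from the imaged spot offset x and beam deflection angle theta:
    Z = B / (tan(theta) + x/f) for this sign convention."""
    theta = math.radians(beam_angle_deg)
    return BASELINE_MM / (math.tan(theta) + spot_offset_mm / FOCAL_MM)

for offset in (0.5, 1.0, 2.0):            # larger spot offset -> nearer surface
    print(offset, round(range_from_offset(offset, 10.0), 1))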
We have developed a versatile pattern inspection algorithm that we call Radial Matching, which generates pattern code dictionaries and enables many different patterns to be inspected by changing the dictionary contents. The system inspects all defect types by analyzing pattern attributes and connections without referencing layout artwork. It measures a copper pattern radially in eight directions, with sensors 22.5 degrees apart. Length and orientation data are converted to 16-bit codes that can express all printed wiring board (PWB) patterns, covering both ordinary patterns and defects. Dictionaries are generated as PWB patterns are inspected, and good and defective codes are classified. These features enable a code dictionary to be generated using only the first PWB. Experiments show that original patterns can be described using 27% of the codes. Code classification for inspection can be done on a 100 mm² area. The new algorithm has been implemented in a system in use at a Fujitsu plant.
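To illustrate the radial-measurement and code-dictionary idea in the simplest terms, the Python sketch below measures copper run lengths in a few directions from a point on a binary image and packs them into an integer code collected into a "good" dictionary; the direction count, run-length cap, and packing are assumptions of this sketch, not Fujitsu's Radial Matching algorithm.

import numpy as np

DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def radial_code(img, y, x, max_run=15):
    """Measure the copper run length in each direction from (y, x) and pack
    the eight 4-bit lengths into one integer code."""
    code = 0
    for dy, dx in DIRS:
        run, yy, xx = 0, y + dy, x + dx
        while (0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]
               and img[yy, xx] and run < max_run):
            run += 1; yy += dy; xx += dx
        code = (code << 4) | run
    return code

def build_dictionary(img, points):
    """Codes observed on a known-good board form the 'good' dictionary."""
    return {radial_code(img, y, x) for y, x in points}

img = np.zeros((20, 20), dtype=bool)
img[10, 5:15] = True                           # a horizontal copper trace
good = build_dictionary(img, [(10, 8), (10, 10)])
print(len(good), radial_code(img, 10, 12) in good)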