Finding the best state assignment for implementing a synchronous sequential circuit is important for reducing silicon area or chip count in many digital designs. This State Assignment Problem (SAP) belongs to a broader class of combinatorial optimization problems than the well-studied traveling salesman problem, which can be formulated as a special case of SAP. The search for a good solution is considerably more involved for the SAP than for the traveling salesman problem due to a much larger number of equivalent solutions, and no effective heuristic has been found so far to cater to all types of circuits. In this paper, a matrix representation is used as the genotype for a Genetic Algorithm (GA) approach to this problem. A novel selection mechanism is introduced, and suitable genetic operators for crossover and mutation are constructed. The properties of each of these elements of the GA are discussed, and an analysis of the parameters that influence the algorithm is given. A canonical form for a solution is defined to significantly reduce the search space and the number of local minima. Simulation results for scalable examples show that the GA approach yields results comparable to those obtained using competing heuristics. Although a GA does not seem to be the tool of choice for use in a sequential von Neumann machine, the results obtained are good enough to encourage further research on distributed-processing GA machines that can exploit its intrinsic parallelism.
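The cost being minimized can be sketched concretely. A minimal version, with a 4-state machine and invented pairwise "adjacency" weights (the paper's GA searches this space via a matrix genotype rather than the exhaustive enumeration used here, which is only feasible for tiny machines):

```python
from itertools import permutations

# Assign distinct binary codes to states so that heavily weighted state
# pairs receive codes differing in few bits. The weights below are invented
# for illustration; a GA replaces this exhaustive search for larger machines.
CODES = [0b00, 0b01, 0b10, 0b11]
ADJ = {(0, 1): 3, (1, 2): 2, (0, 2): 1, (2, 3): 2}  # state pair -> weight

def hamming(a, b):
    return bin(a ^ b).count("1")

def cost(assign):  # assign[s] = index of the code given to state s
    return sum(w * hamming(CODES[assign[i]], CODES[assign[j]])
               for (i, j), w in ADJ.items())

best = min(permutations(range(4)), key=cost)
best_cost = cost(best)
```

Note that even here the all-ones ideal is unreachable: with 2-bit codes, Hamming distances around a triangle of states must have even sum, so one pair is forced to distance 2.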
We use a Generalized Hough transform (GHT) to detect and track instances of a class of sonar signals. This class consists of a four-dimensional set of curves and hence requires a four-dimensional transform space for the GHT. Many of the signals we need to detect are very weak. Such signals yield peaks in the transform space which are both very narrow and not far above the random background variations. Finding such peaks is difficult. Exhaustive search over a predetermined discretization of the transform space will yield a nearly optimal point for a sufficiently fine discretization. However, even with an intelligently chosen discretization, exhaustive search requires searching over (and hence evaluating) many points in the transform space. We have therefore developed a genetic algorithm to search the transform space more efficiently. Designing the genetic algorithm to work properly has required experimentation with a number of its parameters. The most important of these are (1) the representation, (2) the population size, and (3) the number of runs.
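The search problem can be sketched as follows: replace the GHT accumulator with a hypothetical sharply peaked function over a discretized 4-D space, and let a small GA (truncation selection, uniform crossover, single-gene mutation) climb toward the peak. The bounds, peak location, and GA settings below are invented for illustration, not taken from the paper.

```python
import random

BOUNDS = [64, 64, 16, 64]          # discretization of the 4-D transform space
TARGET = (12, 40, 7, 25)           # hypothetical peak location

def transform_value(p):            # stand-in accumulator: narrow peak
    d = sum((a - b) ** 2 for a, b in zip(p, TARGET))
    return 100.0 / (1.0 + d)

def mutate(p, rng):                # perturb one coordinate by a small step
    i = rng.randrange(4)
    q = list(p)
    q[i] = min(BOUNDS[i] - 1, max(0, q[i] + rng.choice((-2, -1, 1, 2))))
    return tuple(q)

def crossover(a, b, rng):          # uniform crossover on the 4 coordinates
    return tuple(a[i] if rng.random() < 0.5 else b[i] for i in range(4))

def ga_search(pop_size=60, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(n) for n in BOUNDS) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=transform_value, reverse=True)
        elite = pop[:pop_size // 4]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append(mutate(crossover(a, b, rng), rng))
        pop = elite + children
    return max(pop, key=transform_value)

best = ga_search()
```

The GA evaluates only a few thousand points here, versus roughly four million for exhaustive search over the same discretization.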
We have developed an algorithm to detect the presence of narrowband signals and track the time evolution of their center frequencies. This algorithm has 35 parameters whose optimal values depend on (among other things): (1) the expected dynamics of the signals, (2) the background statistics, and (3) the clutter (i.e., the number of simultaneous signals). Manually optimizing these parameters is a difficult task not only because of the large number of parameters but also because of the interdependence of their effects on performance. We have therefore devised an automated method for optimizing the parameters. It has three basic components: (1) a 'truth' database with a graphical interface for easy manual entry of 'truth', (2) a scoring function which is a linear combination of six subscores (three evaluating detection performance and three evaluating tracking performance), and (3) a distributed genetic algorithm which optimizes the parameter values for a particular truth database. We have used this procedure to optimize the parameter values for a variety of signal types and environmental conditions. The results have been improved performance as well as the ability to make the algorithm adaptive: as the system detects changes in the environmental conditions, it can switch to a different set of parameters.
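The scoring function can be sketched as a weighted sum of six subscores, three for detection and three for tracking. The subscore names and weights below are invented for illustration; the paper's actual subscores are computed against the truth database.

```python
# Hypothetical weights: positive for desirable subscores, negative for
# penalty subscores. All names and values are assumptions for this sketch.
WEIGHTS = {
    "detect_hits": 3.0, "detect_false_alarms": -2.0, "detect_latency": -1.0,
    "track_accuracy": 2.0, "track_continuity": 1.5, "track_fragmentation": -1.5,
}

def score(subscores):
    # overall score = linear combination of the six subscores
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# two hypothetical parameter settings scored against the same truth data
setting_a = {"detect_hits": 0.9, "detect_false_alarms": 0.1,
             "detect_latency": 0.2, "track_accuracy": 0.8,
             "track_continuity": 0.7, "track_fragmentation": 0.1}
setting_b = {"detect_hits": 0.6, "detect_false_alarms": 0.4,
             "detect_latency": 0.5, "track_accuracy": 0.5,
             "track_continuity": 0.4, "track_fragmentation": 0.4}
better = max([setting_a, setting_b], key=score)
```

The genetic algorithm then only needs this scalar score as its fitness function over the 35-dimensional parameter space.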
This paper discusses the training of product neural networks using genetic algorithms. Two unusual techniques are combined; product units are employed in addition to the traditional summing units and a genetic algorithm is used to train the network rather than using backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima can affect the performance of a genetic algorithm, and one method of overcoming this is presented.
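The distinction between the two unit types is easy to state in code. A product unit raises each input to a trainable power and multiplies the results, while a summing unit takes a weighted sum through a squashing function; the weights below are illustrative, not a trained CMOS-switch model.

```python
import math

def product_unit(x, w):
    # output = prod_i x_i ** w_i  (inputs assumed positive)
    return math.prod(xi ** wi for xi, wi in zip(x, w))

def summing_unit(x, w, b):
    # conventional unit: squashed weighted sum
    return math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + b)

x = [2.0, 4.0]
p = product_unit(x, [1.0, 0.5])     # 2^1 * 4^0.5 = 4.0
s = summing_unit([p], [0.25], 0.0)  # tanh(0.25 * 4.0) = tanh(1.0)
```

Because the exponents are trainable, a product unit can learn power-law relationships (such as a width scaling law) that a summing unit would need many nodes to approximate; the non-differentiability issues this creates are one reason a genetic algorithm, rather than backpropagation, is attractive for training.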
The cell represents the basic unit of life. It can be interpreted as a chemical machine that solves special problems. Our present knowledge of molecular biology allows the metabolism to be characterized as a processing method. This method is an evolutionary product which has been developed over millions of years. First we present the analyzed features of the metabolism. Then we compare this processing method with methods discussed in computer science. The comparison shows that there is no method in the field of computer science which uses all the metabolic features. This is the reason why we formalize the metabolic processing method. In this paper we choose a grammatical formalism. A genetic grammar is the basis of the metabolic system, which represents the metabolic processing method. The basic unit of this system (the logic unit) will be shown. This allows a discussion of the complexity of realizing the metabolic system in hardware.
This paper briefly reviews the two currently dominant paradigms in machine learning--the connectionist network (CN) models and symbol processing (SP) systems; argues for the centrality of knowledge representation frameworks in learning; examines a range of representations in increasing order of complexity, along with measures of similarity or distance appropriate for each of them; introduces the notion of a generalized distance measure (GDM); and presents a class of GDM-based inductive learning algorithms (GDML). GDML are motivated by the need for an integration of SP and CN approaches to machine learning. GDM offer a natural generalization of the notion of distance or measure of mismatch used in a variety of pattern recognition techniques (e.g., k-nearest neighbor classifiers, neural networks using radial basis functions, and so on) to a range of structured representations such as strings, trees, pyramids, association nets, conceptual graphs, etc., which include those used in computer vision and syntactic approaches to pattern recognition. GDML are a natural extension of generative or constructive learning algorithms for neural networks that enable an adaptive and parsimonious determination of the network topology as well as the desired weights as a function of learning. Applications of GDML include tasks such as planning, concept learning, and 2- and 3-dimensional object recognition. GDML offer a basis for a natural integration of SP and CN approaches to the construction of intelligent systems that perceive, learn, and act.
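The core GDM observation can be made concrete: the same nearest-neighbor rule works over numeric vectors and over symbolic strings once the distance measure is swapped, here Euclidean distance versus Levenshtein edit distance. The labels and examples are invented for this sketch.

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def nearest_neighbor(query, examples, dist):
    # the classifier is parameterized by the distance, not the representation
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

vec_label = nearest_neighbor((1.0, 2.0),
                             [((0.0, 0.0), "low"), ((10.0, 10.0), "high")],
                             euclidean)
str_label = nearest_neighbor("bat",
                             [("cat", "animal"), ("car", "vehicle")],
                             edit_distance)
```

Extending the distance to trees, graphs, or conceptual structures (rather than strings) follows the same pattern.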
This paper applies learning techniques to make engineering optimization more efficient and reliable. When the function to be optimized is highly non-linear, the search space generally forms several disjoint convex regions. Unless gradient-descent search is begun in the right region, the solution found will be suboptimal. This paper formalizes the task of learning effective search control for choosing which regions to explore to find a solution close to the global optimum. It defines a utility function for measuring the quality of search control. The paper defines and experimentally compares three algorithms that seek to find search control knowledge of maximum utility. The best algorithm, UTILITYID3, gives a speedup of 4.4 over full search (of all convex regions) while sacrificing only 5% in average solution quality.
Although a great deal of research on explanation-based learning (EBL) has been done, the adaptability of EBL to examples has not been fully addressed. In this paper, we propose a preliminary analysis for computing the utility of EBL in a logic programming environment. The research contributes to adaptive EBL by providing theoretical criteria for selecting useful learned rules and removing useless ones. First, we define a whole EBL framework including both problem solving and learning. Next, the utility of EBL is defined as a function of two variables: the number of test examples and a probability distribution on test examples. Then we present a method to determine the utility function solely from the trace of problem solving on training examples, not test examples. By analyzing the utility function, we obtain a sufficient precondition for EBL's utility and are able to determine whether the EBL system should learn. Finally, we present results on computing EBL's utility in examples. Interestingly, we find that EBL can degrade problem solving even in a domain theory as simple as Mitchell's SAFE-TO-STACK.
DARPA and Lockheed's Pilot's Associate (PA) represents one of the largest and most complex artificially intelligent systems constructed to date. Its architecture of five modular, cooperative expert systems poses a knowledge engineering problem unique in its scope, though not in its basic nature. The knowledge bases for each of PA's modules will be very large, constantly changing (in response to new tactics and new technological capabilities), and highly specialized for the task of the specific module. For efficiency, each module must contain only that knowledge necessary for its task, yet for cooperation, each system's knowledge must be consistent with the others'. Machine learning approaches hold the promise of greatly reducing knowledge acquisition and knowledge engineering time and of making the entire PA system more flexible, more accurate, and more consistent. We present the results of a three-year program investigating an Explanation-Based Learning approach to acquiring new plans from a simulator-based learning scenario and then propagating this knowledge to two of the five PA modules--as a tactical plan which focuses on changing world states for the Tactics Planner module, and as a list of pilot information needs for the dynamic display configuration algorithm used in the Pilot-Vehicle Interface module.
In this paper, we discuss a multiscale analysis of discrete boundaries represented by a generalized chain code. We briefly review the multiresolution analysis which makes possible the construction of wavelets, scaling functions, and the associated filters. Using a discrete contour generated by the Freeman chain code, we approximate it at several resolution levels and obtain its wavelet representation. We then reconstruct the original discrete contour, also encoded by chain code, from its wavelet representation. Examples of implementation results are presented, with evaluations of entropy and signal-to-noise ratio to confirm the visual quality of the reconstruction.
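The decompose-then-reconstruct machinery can be illustrated with the simplest case, a one-level Haar analysis/synthesis pair on a 1-D signal. The paper's filters are tied to the chain-code setting; this sketch only shows the multiresolution idea of splitting a signal into a coarse approximation plus details, then recovering it exactly.

```python
def haar_analyze(x):
    # one level of Haar analysis: pairwise averages (coarse approximation)
    # and pairwise half-differences (detail coefficients)
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_synthesize(approx, detail):
    # perfect reconstruction: each pair is recovered as (a + d, a - d)
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 0.0]
a, d = haar_analyze(signal)
rec = haar_synthesize(a, d)
```

Applying `haar_analyze` recursively to the approximation yields the full multiresolution pyramid used at "several resolution levels" in the abstract.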
This paper describes a modular method for classification of 1-D signals which utilizes the shift-invariant MultiScale WAvelet Representation (MSWAR). The classification employs three modules: a representation module that uses a generalization of the multiresolution wavelet representation; a measurement module that uses local and global measures to establish similarity between the reference and observed signals; and a classification module that employs a set of decision rules. These rules are derived from theoretical and experimental considerations and, under specified conditions, guarantee the correct classification of observed signals with five types of deformities.
Time delays inherent in the control systems of current and proposed adaptive optics systems could be eliminated by predicting atmospherically-distorted wavefronts a short time ahead. An error-backpropagation neural network trained on real astronomical data has demonstrated that time series of wavefront tips and tilts (slopes) in the visible, and piston (displacement) in the infrared, are predictable to a degree which would improve the operation of an adaptive optics telescope.
Butterworth wavelets are introduced and it is shown that they constitute a large class of orthonormal wavelets. Advantages of this approach are the simplicity of the analyzing wavelet design, connections with the digital filter design techniques, FIR and IIR type of implementations and computational savings in the IIR case which gives rise to fast wavelet transform algorithms. A mirror representation and nonorthogonal wavelet expansion are discussed in this context.
A defining characteristic of any dynamic scene is that its information content changes over time, and a major problem is how to adapt to these variations so that an automatic scene analyzer, such as an object recognizer, performs at its optimum. In this paper we examine the use of image and signal metrics for characterizing scene variations, describe an automated system for extracting these quality measures, and show how these metrics can be used for the automatic adaptation of an object recognition system, along with the resulting improvement in that system's performance.
The theory of adaptive Bayesian networks is summarized. A detailed discussion of the Adaptive Cluster Expansion (ACE) network is presented. ACE is a scalable Bayesian network designed specifically for high-dimensional applications, such as image processing.
A model for time delays smaller than the sampling interval is proposed. Based on this model, a new adaptive time delay estimation algorithm is developed. Stochastic averaging is used to show the convergence property of the adaptive algorithm. Computer simulations are also presented.
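A sub-sample delay can be estimated adaptively along these general lines: the delay estimate is updated by a normalized gradient step against a linearly interpolated reference. The signal, step size, and interpolation scheme below are illustrative assumptions, not the paper's specific algorithm (whose convergence is analyzed via stochastic averaging).

```python
import math

N = 400
TRUE_D = 0.37                      # delay in samples, below one interval
x = [math.sin(0.07 * n) for n in range(N)]
y = [math.sin(0.07 * (n - TRUE_D)) for n in range(N)]  # delayed copy

def interp(sig, t):                # linear interpolation at fractional index
    i = int(t)
    f = t - i
    return (1 - f) * sig[i] + f * sig[i + 1]

d, mu, eps = 0.0, 0.2, 1e-4
for n in range(2, N - 2):
    e = y[n] - interp(x, n - d)                            # prediction error
    slope = interp(x, n - d + 0.5) - interp(x, n - d - 0.5)  # approx x'(n-d)
    # normalized gradient descent on e^2 with respect to d
    d -= mu * e * slope / (slope * slope + eps)
```

Since the error is approximately `x'(n-d) * (d - TRUE_D)`, the normalized step contracts the delay error geometrically wherever the signal slope is nonzero.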
In this paper we propose a cybernetic approach to behavior-based robotics. We present a distributed adaptive control architecture for coordinating the different motivations and behaviors of an autonomous vehicle. The system is based on the Zurich Model of Social Motivation, a cybernetic approach to mammalian behavior by the Swiss ethologist Bischof. Our system controls a simulated autonomous robot by teaching a reflective associative memory to propose an action based on the input of eight range sensors. The emerging behavior at every stage reflects the system's experience and is robust in unexpected situations.
Meaningful objects in a scene move with purpose. The ability to induce visual expectations from such purpose is important in visual observation. By regarding the spatio-temporal regularities in the moving patterns of an object in the scene as a network of temporally dependent belief hypotheses, visual expectations can be represented by the most likely combinations of the hypotheses based on updating the network in response to instantaneous visual evidence. A particular type of probabilistic single-path Directed Acyclic Graph (DAG) belief network, the Hidden Markov Model (HMM), can be used to represent the 'hidden' regularities behind the apparently random moves of an object in a scene and reproduce such regularities as 'blind', and therefore insensitive, expectations. By adaptively adjusting such a probabilistic belief network with observed visual evidence instantaneously, a Visual Augmented Hidden Markov Model (VAHMM) can be used to model and produce dynamic expectations of a moving object in the scene. In particular, using tracked moving service vehicles at an airport docking stand as visual cues, we present how a VAHMM can be constructed first to represent the probabilistic spatially dependent relationships in the typical moving patterns of a type of vehicle, and then to adjust the weighting parameters of such dependencies dynamically with instantaneous new visual evidence. We describe the use of such a model to generate in time the probabilistic expectations of an observed object and discuss some possible initial applications of such a framework for providing selective attention in visual observation.
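The belief-updating mechanism the VAHMM builds on is the standard HMM forward step: predict through the transition model, then reweight by the instantaneous evidence. The states and probabilities below are invented (loosely modeled on phases of a service-vehicle docking sequence), not the paper's model.

```python
states = ["approach", "service", "depart"]
T = [[0.7, 0.3, 0.0],              # transition probabilities between states
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]]
E = {"moving":  [0.8, 0.1, 0.7],   # P(observation | state)
     "stopped": [0.2, 0.9, 0.3]}

def update(belief, obs):
    # one step of the HMM forward algorithm: predict, weight, normalize
    predicted = [sum(belief[i] * T[i][j] for i in range(3)) for j in range(3)]
    weighted = [predicted[j] * E[obs][j] for j in range(3)]
    z = sum(weighted)
    return [w / z for w in weighted]

belief = [1.0, 0.0, 0.0]           # the vehicle starts in 'approach'
for obs in ["moving", "stopped", "stopped"]:
    belief = update(belief, obs)
likely = states[max(range(3), key=belief.__getitem__)]
```

The "augmented" part of the VAHMM would additionally adjust `T` and `E` themselves as new visual evidence accumulates, rather than treating them as fixed.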
Neural-based nonlinear system identification and control suffers from the problem of slow convergence, and selection of a suitable architecture for a problem is made through trial and error. There is a need for an algorithm that would provide an efficient solution to these problems. This paper presents one possible solution. Unlike the backpropagation algorithm that trains a fixed structure, in the algorithm presented in this paper, the network is built slowly in a step-by-step fashion. This evolving architecture methodology permits an optimal allocation of hidden nodes that avoids training on outliers and, at the same time, provides sufficient complexity for the approximation of a data set. Through simulation examples we show that this algorithm also exhibits faster convergence properties than the usual multi-layered neural network algorithms. Finally, we examine some common ideas between this architecture and fuzzy logic systems.
Our ultimate goal is to develop neural-like cognitive sensory processing within non-neuronal systems. Toward this end, computational models are being developed for selectively attending the task-relevant parts of composite sensory excitations in an example sound processing application. Significant stimulus partials are selectively attended through the use of generalized neural adaptive beamformers. Computational components are being tested by experiment in the laboratory and also by use of recordings from sensor deployments in the ocean. Results will be presented. These computational components are being integrated into a comprehensive processing architecture that simultaneously attends memory according to stimuli, attends stimuli according to memory, and attends stimuli and memory according to an ongoing thought process. The proposed neural architecture is potentially very fast when implemented in special hardware.
In designing a feedforward neural network for numerical computation using the backpropagation algorithm, it is essential to know that the resulting network has a practical global minimum, meaning that convergence to a stationary solution can be achieved in reasonable time and using a network of reasonable size. This is in contrast to theoretical results indicating that any square-integrable (L2) function can be computed assuming that an unlimited number of neurons are available. A class of problems is discussed that does not fit into this category. Although these problems are conceptually simple, it is shown that in practice convergence to a stationary solution can only be approximate and very costly. Computer simulation results are shown, and concepts are presented that can improve the performance by a careful redesign of the problem.
Corporations need better real-time monitoring and control systems to improve productivity by monitoring quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.
The search space for backpropagation (BP) is usually of high dimensionality, which slows convergence. Moreover, local minima abound, so the danger of falling into a shallow one is great. In order to limit the search space of BP in a sensible way, we incorporate domain knowledge into the training process. A two-phase backpropagation algorithm is presented. In the first phase, the directions of the weight vectors of the first (and possibly the only) hidden layer are constrained to remain the same as, for example, those of linear discriminants or principal components; the directions are chosen based on the problem at hand. Then, in the second phase, the constraints are removed and the standard backpropagation algorithm takes over to further minimize the error function. The first phase swiftly situates the weight vectors in a good position (relatively low error), which can serve as the initialization of standard backpropagation. Other speed-up techniques can be used in both phases. The generality of its application, its simplicity, and the shorter training time it requires make this approach attractive.
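The first phase can be sketched for the principal-component case: extract the leading principal direction from the data (here via power iteration on the sample covariance) and align the hidden weight vector with it before unconstrained backpropagation takes over. The data, dimensions, and scale factor are illustrative assumptions.

```python
import random

rng = random.Random(0)
# correlated 2-D data: variance is largest along the (1, 1) direction
data = [(t + 0.1 * rng.gauss(0, 1), t + 0.1 * rng.gauss(0, 1))
        for t in [rng.gauss(0, 1) for _ in range(500)]]

def covariance(pts):
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    cxx = sum((p[0] - mx) ** 2 for p in pts) / n
    cyy = sum((p[1] - my) ** 2 for p in pts) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    return [[cxx, cxy], [cxy, cyy]]

def leading_eigvec(C, iters=100):
    # power iteration: repeatedly apply C and renormalize
    v = [1.0, 0.0]
    for _ in range(iters):
        w = [C[0][0] * v[0] + C[0][1] * v[1],
             C[1][0] * v[0] + C[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

v = leading_eigvec(covariance(data))
# phase 1 trains only the magnitude along v (and the output layer);
# phase 2 releases the direction constraint for standard BP
hidden_init = [(0.5 * v[0], 0.5 * v[1])]
```

For this data the recovered direction is close to (1, 1) normalized, so the hidden unit starts aligned with the axis of greatest variation rather than at a random orientation.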
In this report we present our tools for prototyping adaptive user interfaces in the context of real-time musical instrument control. Characteristic of most human communication is the simultaneous use of classified events and estimated parameters. We have integrated a neural network object into the MAX language to explore adaptive user interfaces that consider these facets of human communication. By placing the neural processing in the context of a flexible real-time musical programming environment, we can rapidly prototype experiments on applications of adaptive interfaces and learning systems to musical problems. We have trained networks to recognize gestures from a Mathews radio baton, Nintendo Power Glove™, and MIDI keyboard gestural input devices. In one experiment, a network successfully extracted classification and attribute data from gestural contours transduced by a continuous space controller, suggesting their application in the interpretation of conducting gestures and musical instrument control. We discuss network architectures, low-level features extracted for the networks to operate on, training methods, and musical applications of adaptive techniques.
We consider the problem of adaptive reception of multiple sources by using a set of multiple simultaneous beams. This can be achieved by high-resolution direction finding followed by optimal beamforming for each source. In the first stage, the source estimation processor determines the number of sources and their angular locations. In the second stage, the information derived in the first stage is used to form multiple simultaneous beams. Each beam is designed such that it has optimal reception of one source while nulling out all the other sources.
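The second stage can be sketched in miniature: with two estimated arrival angles in hand, solve for weights of a two-element, half-wavelength array that give unit gain toward one source and a null toward the other. The angles are illustrative; real systems use more elements, more constraints, and optimal (noise-aware) beamforming.

```python
import cmath
import math

def steering(theta):
    # array response of a 2-element, half-wavelength-spaced line array
    return [1.0, cmath.exp(-1j * math.pi * math.sin(theta))]

def solve2(A, b):
    # Cramer's rule for a 2x2 complex linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def beam_weights(theta_pass, theta_null):
    # impose w^H a(theta_pass) = 1 and w^H a(theta_null) = 0
    x = solve2([steering(theta_pass), steering(theta_null)], [1.0, 0.0])
    return [xi.conjugate() for xi in x]

def gain(w, theta):
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, steering(theta))))

w = beam_weights(0.5, -0.3)        # pass 0.5 rad, null -0.3 rad
```

Forming one such beam per estimated source yields the set of simultaneous beams described above, each passing its own source while nulling the rest.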
The identification and classification of underwater acoustic signals is an extremely difficult problem because of low SNRs and a high degree of variability in the signals emanated from the same type of sound source. Since different classification techniques have different inductive biases, a single method cannot give the best results for all signal types. Rather, more accurate and robust classification can be obtained by combining the outputs (evidences) of multiple classifiers based on neural network and/or statistical pattern recognition techniques. In this paper, four approaches to evidence combination are presented and compared using realistic oceanic data. The first method uses an entropy-based weighting of individual classifier outputs. The second is based on combination of confidence factors in a manner similar to that used in MYCIN. The other two methods, majority voting and averaging, incur little extra computational overhead. All of these techniques give better results than the best individual classifier, and also provide a basis for detecting outliers and 'false alarms'.
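Two of the combination schemes can be sketched directly. The entropy weighting below (weighting each classifier by how far its output distribution is from maximum entropy) is one plausible reading of the first method, not the paper's exact formula; the MYCIN-style confidence combination is omitted.

```python
import math

def entropy(p):
    # Shannon entropy of a class-probability vector
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def entropy_weighted_combine(outputs):
    # outputs: one class-probability vector per classifier
    # A confident (low-entropy) classifier gets a larger weight.
    max_h = math.log(len(outputs[0]))
    weights = [max_h - entropy(p) + 1e-9 for p in outputs]
    total = sum(weights)
    n = len(outputs[0])
    return [sum(w * p[c] for w, p in zip(weights, outputs)) / total
            for c in range(n)]

def majority_vote(outputs):
    # each classifier votes for its top class; ties broken arbitrarily
    votes = [max(range(len(p)), key=p.__getitem__) for p in outputs]
    return max(set(votes), key=votes.count)
```

With three classifiers where one is very confident, the entropy weighting lets that classifier dominate the fused decision, while majority voting treats all three equally.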
Real-time detection and tracking of multiple targets in passive underwater sonar systems is essential for fast and correct response generation. In this paper, we exploit a model of the human visual early processing of sensory information in designing a smart system for the detection of moving targets and reliable estimation of their velocities. Psychophysical studies indicate that motion information is extracted by a system that responds to oriented spatiotemporal energy. Motion energy detection is modeled by a linear high-pass temporal filter (simple frame-to-frame difference) and a spatial band-pass Gaussian filtering stage (Laplacian of the Gaussian image) followed by a squarer and summation. The energy image is searched for multiple maxima, depending on the number of targets of interest, and a square window highlights each detected target. Detected targets are labeled according to their motion energy level; this labeling adds a confidence level to each detected target and helps reduce false alarms. A reliable estimate of the labeled target's velocity magnitude and direction is produced by tracking the energy maxima location from one frame to another. As in the case of the human visual system, there is always a trade-off between accuracy and reliability of the velocity estimate. The algorithm was tested on simulated data. The synthetic data consists of image data in rectangular coordinates, spanning a fixed area, and with a fixed frame rate. The image value was generated as the number of detections over the frame interval at equivalent pixel locations. The intensity was modeled as a Poisson process and was varied spatially to simulate a non-uniform probability of detection.
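The detection pipeline (temporal difference, spatial band-pass, squarer, maximum search) can be sketched as follows. A plain 3x3 Laplacian stands in for the Laplacian-of-Gaussian stage, and only the single strongest maximum is reported; kernel size and thresholds are illustrative assumptions.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)  # stand-in for LoG

def motion_energy(prev, curr):
    diff = curr - prev                      # temporal high-pass: frame difference
    out = np.zeros_like(diff)
    h, w = diff.shape
    for i in range(1, h - 1):               # spatial band-pass on the interior
        for j in range(1, w - 1):
            out[i, j] = np.sum(diff[i - 1:i + 2, j - 1:j + 2] * LAPLACIAN)
    return out ** 2                         # squarer; windowed sums give energy

def detect_target(energy):
    # location of the strongest motion-energy response (one target)
    return np.unravel_index(np.argmax(energy), energy.shape)
```

Repeating `detect_target` on the energy image with previously found maxima suppressed would yield the multiple-target case, and tracking the returned locations across frames gives the velocity estimate.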
Learning Methodologies: Techniques and Applications
We present a methodology that allows collections of intelligent systems to automatically learn communication strategies, so that they can exchange information and coordinate their problem solving activity. In our methodology, communication between agents is determined by the agents themselves, which weigh the progress of their individual problem solving activities against the communication needs of their surrounding agents. Through learning, communication lines between agents may be established or disconnected and communication frequencies modified, and the system can also react to dynamic changes in the environment that may force agents to cease to exist or to be added. We have established dynamic, quantitative measures of the usefulness of a fact, the cost of a fact, the work load of an agent, and the selfishness of an agent (a measure indicating an agent's preference between transmitting information versus performing individual problem solving), and use these values to adapt the communication between intelligent agents. In this paper we present the theoretical foundations of our work together with experimental results and performance statistics of networks of agents involved in cooperative problem solving activities.
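The role of the four measures can be illustrated with a toy decision rule. The combination below (and the threshold) is an invented illustration of how such normalized measures might interact, not the paper's calibrated model.

```python
def should_transmit(usefulness, cost, work_load, selfishness):
    """Decide whether an agent shares a fact with a neighbor.

    All four measures are assumed normalized to [0, 1]. The multiplicative
    weighting and the 0.1 threshold are illustrative choices only.
    """
    benefit = usefulness - cost          # net value of sending the fact
    capacity = 1.0 - work_load           # heavily loaded agents communicate less
    willingness = 1.0 - selfishness      # selfish agents prefer local work
    return benefit * capacity * willingness > 0.1
```

A useful, cheap fact held by an idle, cooperative agent is transmitted; a marginal fact held by a busy, selfish agent is not.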
The recent introduction of compact large-capacity memories has opened up possibilities for more aggressive use of data in learning systems. Instead of using complex, global models for the data, we investigate the use of well-known but modified and extended local non-parametric methods for learning. Our focus is on learning problems where a large set of data is available to the system designer. Such applications include speech recognition, character recognition and local weather prediction. The general system we present, called an Adaptive Memory (AM), is adaptive in the sense that some part of the sample data is stored and a local non-parametric model is updated when new training data become available. This makes training possible throughout the usable lifetime of the system, in contrast with many popular learning algorithms, such as neural networks and other parametric methods, that have a distinct learning phase. In the past, designers of learning systems have been reluctant to store data samples in memory because of the inherent slowness of searching and storing. However, with the advent of parallel searching algorithms and high-speed large memories, the AM approach is competitive with parametric methods and may ultimately exceed their performance for a large class of problems.
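The core idea, training by insertion plus a local non-parametric prediction, can be sketched with a k-nearest-neighbor memory. The class name and the brute-force search are illustrative; the paper's AM relies on parallel search rather than the linear scan shown here.

```python
import math

class AdaptiveMemory:
    """Store samples; predict with a local non-parametric (k-NN) model.

    Training is just insertion, so learning can continue throughout the
    system's lifetime with no separate training phase.
    """
    def __init__(self, k=3):
        self.k = k
        self.samples = []   # list of (feature_vector, label)

    def train(self, x, label):
        self.samples.append((x, label))   # no global model to refit

    def predict(self, x):
        # brute-force nearest-neighbor search (a parallel search in the AM)
        nearest = sorted(self.samples, key=lambda s: math.dist(s[0], x))[:self.k]
        labels = [lab for _, lab in nearest]
        return max(set(labels), key=labels.count)   # local majority vote
```

New samples can be added at any time between predictions, which is exactly the property that distinguishes the AM from methods with a distinct learning phase.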
In this paper we wish to present an algorithmic technique for the generation of Fuzzy system rule bases and composite membership functions. This technique is based on data taken from a representative system either under classical control or under the control of a human operator (i.e., an expert). The algorithm may be automated as a computer program that accepts control-system input-output pairs. Such an automated process could ease Fuzzy control system design and implementation, and could be used to construct Fuzzy systems that 'learn'.
The knowledge acquisition bottleneck has become the major impediment to the development and application of effective information systems. To remove this bottleneck, new form understanding techniques must be introduced to automatically acquire knowledge from documents. In this study, a document is considered to have two structures: a geometric structure and a logical structure. They play a key role in knowledge acquisition, which can be viewed as a process of acquiring these structures. Extracting the geometric structure from a document is document analysis; mapping the geometric structure onto the logical structure is document understanding. Form understanding based on the form description language (FDL) combines both. This method consists of two phases: (1) Form description using the FDL, which contains two parts: (a) structure analysis based on the form structure description (FSD); (b) item location based on the item description (IDP). (2) Mapping the form description onto the form structure: the FDL is interpreted and mapped onto the structure. This phase includes the following stages: (a) interpretation of the form description; (b) form mapping; (c) item image production; (d) item component selection; (e) learning.
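The geometric-to-logical mapping step can be illustrated with a toy sketch. The dictionary below is a hypothetical, drastically simplified stand-in for an FDL item description (the real FDL syntax is not given in this abstract), and the region format is invented for illustration.

```python
# Hypothetical stand-in for an FDL item description: each logical item is
# named together with the rectangle (x, y, w, h) where it appears on the form.
FORM_DESCRIPTION = {
    "name": (0, 0, 100, 20),
    "date": (120, 0, 60, 20),
}

def contains(rect, point):
    x, y, w, h = rect
    px, py = point
    return x <= px < x + w and y <= py < y + h

def map_to_logical(regions, description=FORM_DESCRIPTION):
    # regions: geometric structure from document analysis, as (centre, text)
    logical = {}
    for centre, text in regions:
        for item, rect in description.items():
            if contains(rect, centre):
                logical[item] = text   # the document-understanding step
    return logical
```

Given extracted regions and their positions, the mapping returns logical fields such as `name` and `date`, which is the essence of turning a geometric structure into a logical one.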