Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367683
Recent developments in the theory of evolutionary computation offer evidence and proofs that overturn several conventionally held beliefs. In particular, the no free lunch theorem and other related theorems show that there can be no best evolutionary algorithm, and that no particular variation operator or selection mechanism provides a general advantage over another choice. Furthermore, the fundamental nature of the notion of schema processing is called into question by recent theory showing that the schema theorem does not hold when schema fitness is stochastic. Moreover, the analysis that underlies schema theory, namely the k-armed bandit analysis, does not generate a sampling plan that yields an optimal allocation of trials, as has been suggested in the literature for almost 25 years. The importance of these new findings is discussed in the context of future progress in the field of evolutionary computation.
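For reference, the core no free lunch claim can be written compactly; the following is the standard Wolpert and Macready formulation and is quoted here only as a reminder of the result under discussion, not as part of the abstract:

\[
\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right),
\]

where the sum runs over all cost functions \(f:\mathcal{X}\to\mathcal{Y}\), \(d_m^{y}\) denotes the sequence of cost values observed after \(m\) distinct evaluations, and \(a_1, a_2\) are any two non-repeating search algorithms; averaged over all functions, their performance distributions coincide.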
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367691
In this paper an approach is described for segmenting medical images. We use active contour models, also known as snakes, and we propose an energy minimization procedure based on Genetic Algorithms (GA). The widely recognized power of deformable models stems from their ability to segment anatomic structures by exploiting constraints derived from the image data together with a priori knowledge about the location, size, and shape of these structures. The application of snakes to extract regions of interest is, however, not without limitations. As is well known, a number of problems are associated with this approach, such as initialization, the existence of multiple minima, and the selection of elasticity parameters. We propose the use of GAs to overcome these limits. GAs offer a global search procedure that has shown its robustness in many tasks, and they are not limited by restrictive assumptions such as the availability of derivatives of the goal function. GAs operate on a coding of the parameters (the positions and the total number of snake points), and their fitness function is the total snake energy. We employ a modified version of the image energy which considers both the magnitude and the direction of the gradient and the Laplacian of Gaussian. Experimental results on synthetic images as well as on medical images are reported. The images used in this work are ocular fundus images, for which snakes prove very useful in segmenting the Foveal Avascular Zone. The experiments performed with ocular fundus images show that the proposed method is promising for the early detection of diabetic retinopathy.
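The following toy sketch (not the authors' implementation) shows the general shape of such a GA: a chromosome is a fixed-length list of snake point coordinates, the fitness is a simplified total snake energy (continuity and curvature terms plus an image term built from gradient magnitude only), and standard tournament selection, one-point crossover, and Gaussian mutation are applied. The paper additionally encodes the number of snake points and uses gradient direction and the Laplacian of Gaussian in the image energy; those refinements are omitted here.

```python
import numpy as np

def snake_energy(points, grad_mag, alpha=0.5, beta=0.5, gamma=1.0):
    """Internal (continuity + curvature) plus image energy for a closed snake (toy version)."""
    d1 = np.roll(points, -1, axis=0) - points                                  # first differences
    d2 = np.roll(points, -1, axis=0) - 2 * points + np.roll(points, 1, axis=0) # second differences
    e_int = alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)
    rows = np.clip(points[:, 0].astype(int), 0, grad_mag.shape[0] - 1)
    cols = np.clip(points[:, 1].astype(int), 0, grad_mag.shape[1] - 1)
    e_img = -gamma * np.sum(grad_mag[rows, cols])                              # strong edges lower the energy
    return e_int + e_img

def ga_minimize(grad_mag, n_points=20, pop_size=60, generations=200,
                sigma=2.0, rng=np.random.default_rng(0)):
    h, w = grad_mag.shape
    # each individual is an (n_points, 2) array of (row, col) snake coordinates
    pop = rng.uniform([0, 0], [h, w], size=(pop_size, n_points, 2))
    for _ in range(generations):
        fitness = np.array([snake_energy(ind, grad_mag) for ind in pop])
        # tournament selection (lower energy wins)
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # one-point crossover on the list of snake points
        cut = rng.integers(1, n_points, size=pop_size)
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            c = cut[i]
            children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
        # Gaussian mutation of a random subset of point coordinates
        children += rng.normal(0.0, sigma, size=children.shape) * (rng.random(children.shape) < 0.1)
        pop = children
    fitness = np.array([snake_energy(ind, grad_mag) for ind in pop])
    return pop[np.argmin(fitness)]
```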
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367697
We describe the implementation and performance of a genetic algorithm which generates image feature extraction algorithms for remote sensing applications. We describe our basis set of primitive image operators and present our chromosomal representation of a complete algorithm. Our initial application has been geospatial feature extraction using publicly available multi-spectral aerial-photography data sets. We present the preliminary results of our analysis of the efficiency of the classic genetic operations of crossover and mutation for our application, and discuss our choice of evolutionary control parameters. We exhibit some of our evolved algorithms, and discuss possible avenues for future progress.
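As a rough illustration of the chromosome idea (the primitive operator set, fixed chromosome length, and fitness measure below are hypothetical stand-ins, not the paper's basis set), an evolved "algorithm" can be encoded as an ordered list of primitive image operators and scored against a training mask:

```python
import numpy as np

rng = np.random.default_rng(0)

# a small, invented basis set of primitive image operators
PRIMITIVES = {
    "grad_x": lambda im: np.abs(np.diff(im, axis=1, append=im[:, -1:])),
    "grad_y": lambda im: np.abs(np.diff(im, axis=0, append=im[-1:, :])),
    "smooth": lambda im: (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
                          + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5.0,
    "thresh": lambda im: (im > im.mean()).astype(float),
    "invert": lambda im: im.max() - im,
}
OPS = list(PRIMITIVES)

def run_chromosome(chrom, image):
    """A chromosome is simply an ordered list of primitive operator names."""
    out = image.astype(float)
    for gene in chrom:
        out = PRIMITIVES[gene](out)
    return out

def fitness(chrom, image, target_mask):
    """Pixelwise agreement of the evolved algorithm's output with a training mask."""
    return np.mean((run_chromosome(chrom, image) > 0.5) == target_mask)

# toy evolution loop with one-point crossover and point mutation on operator strings
image = rng.random((64, 64)); target = image > 0.5
pop = [list(rng.choice(OPS, size=4)) for _ in range(30)]
for gen in range(20):
    pop.sort(key=lambda c: -fitness(c, image, target))
    parents, children = pop[:10], []
    for _ in range(20):
        a, b = rng.choice(10, 2)
        cut = rng.integers(1, 4)
        child = parents[a][:cut] + parents[b][cut:]
        if rng.random() < 0.3:
            child[rng.integers(4)] = rng.choice(OPS)
        children.append(child)
    pop = parents + children
print(pop[0], fitness(pop[0], image, target))
```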
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367702
This study involves diploid genetic algorithms, in which a diploid representation of individuals is used. This type of representation allows characteristics that may not be visible in the current population to be preserved in the structure of the individuals and then be expressed in a later generation. Thus it prevents traits that may be useful from being lost. It also helps add diversity to the genetic pool of the population. In conformance with the diploid representation of individuals, a reproductive scheme is employed which models the meiotic cell division for gamete formation in diploid organisms in nature. A domination strategy is applied for mapping an individual's genotype onto its phenotype. The domination factor of each allele at each location is determined by way of a statistical scan of the population in the previous generation. Classical operators such as crossover and mutation are also used in the new reproductive routine. The next generation of individuals is chosen via a fitness-proportional method from among the parents and the offspring combined. To prevent early convergence and the takeover of the population by certain individuals over generations, an age counter is added. The effectiveness of this algorithm is shown by comparing it with the simple genetic algorithm on various test functions.
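A minimal sketch of the dominance machinery described above (variable names and the update rule are hypothetical; the meiotic reproduction scheme, fitness-proportional selection, and age counter are left out):

```python
import numpy as np

rng = np.random.default_rng(1)

def dominance_map(prev_phenotypes):
    """Statistical scan of the previous generation: at each locus the allele value
    expressed more often is taken as dominant (one plausible rule, assumed here)."""
    return (prev_phenotypes.mean(axis=0) >= 0.5).astype(int)

def express(diploid_pop, dom):
    """Map a genotype (two homologous binary chromosomes) onto a phenotype.
    Where the two alleles agree, the shared value is expressed; where they
    disagree, the currently dominant allele is expressed."""
    c1, c2 = diploid_pop[:, 0, :], diploid_pop[:, 1, :]
    return np.where(c1 == c2, c1, dom)

# toy usage: 30 individuals, 16 loci, two chromosomes each
pop = rng.integers(0, 2, size=(30, 2, 16))
dom = rng.integers(0, 2, size=16)            # initial dominance chosen at random
for generation in range(5):
    pheno = express(pop, dom)
    dom = dominance_map(pheno)               # update dominance from this generation
    # selection, meiosis-style recombination, mutation, and aging would follow here
```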
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367703
This paper looks at the general problem of resource allocation in telecommunication networks. It gives an overview of the problem and argues for adaptive methods in the complex telecommunication environment. In particular, it discusses a general methodology known as reinforcement learning. The paper presents two examples: admission control in packet data networks, and battery management for mobile communication.
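A minimal tabular Q-learning sketch of the admission-control idea, with an invented toy traffic model (occupancy-level states, admit/reject actions, a congestion-penalized reward); the paper's actual formulation is richer and not reproduced here:

```python
import numpy as np

# Hypothetical toy model: state = number of admitted flows (0..CAPACITY),
# action 1 = admit the arriving request, action 0 = reject it.
CAPACITY = 10
rng = np.random.default_rng(0)
Q = np.zeros((CAPACITY + 1, 2))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Reward for admitting a flow, with a penalty that grows as the link gets congested."""
    if action == 1 and state < CAPACITY:
        reward = 1.0 - 2.0 * (state / CAPACITY) ** 2
        next_state = state + 1
    else:
        reward, next_state = 0.0, state
    if rng.random() < 0.3 and next_state > 0:      # flows also depart at random
        next_state -= 1
    return reward, next_state

state = 0
for t in range(50_000):
    action = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # standard one-step Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("admit? per occupancy level:", np.argmax(Q, axis=1))
```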
Y. Ahmet Sekercioglu, Andreas Pitsillides, Athanasios V. Vasilakos
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367704
Designing effective control strategies for Asynchronous Transfer Mode (ATM) networks is known to be difficult because of the complexity of the structure of networks, the nature of the services supported, and the variety of dynamic parameters involved. Additionally, the uncertainties involved in identifying the network parameters make analytical modeling of ATM networks almost impossible. This renders the application of classical control system design methods (which rely on the availability of these models) to the problem even harder. Consequently, a number of researchers are looking at alternative non-analytical control system design and modeling techniques that have the ability to cope with these difficulties, in order to devise effective, robust ATM network management schemes. These schemes employ artificial neural networks, fuzzy systems, and design methods based on evolutionary computation. In this survey, the current state of ATM network management research employing these techniques, as reported in the technical literature, is summarized. The salient features of the methods employed are reviewed.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367705
In this paper a new algorithmic and hardware approach to real-time processing, computing, compression, and transmission of multi-media information (video, imagery, audio, sensor, telemetry, and computer data), in the form of synchronized data, is proposed. The proposed approach, called Soft Computing and Soft Communication, leads to multi-media throughput minimization and data homogenization.
Zhixiang Chen, Xiannong Meng, Richard K. Fox, Richard H. Fowler
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367706
In this paper we study the problem of searching documents over the World Wide Web through training perceptrons. We consider that web documents can be represented by vectors of n boolean attributes. A search process can be viewed as a way of classifying documents over the web according to the user's requirements. We design a perceptron training algorithm for the search engine, and give a bound on the number of trials needed to search for any collection of documents represented by a disjunction of the relevant boolean attributes.
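For flavor, a standard online perceptron learning a monotone disjunction over boolean attribute vectors looks like the sketch below; the attribute count, target attributes, and data model are invented, and the paper's specific algorithm and its trial bound are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                               # number of boolean attributes per document
relevant = [2, 7, 19]                # hypothetical target concept: x2 OR x7 OR x19

def label(x):
    return 1 if x[relevant].any() else 0

# online perceptron with a bias term; documents arrive one at a time
w, b, mistakes = np.zeros(n), 0.0, 0
for t in range(5_000):
    x = (rng.random(n) < 0.1).astype(float)     # sparse boolean document vector
    y = label(x)
    y_hat = 1 if w @ x + b > 0 else 0
    if y_hat != y:                              # update only on mistakes
        mistakes += 1
        w += (y - y_hat) * x
        b += (y - y_hat)

print("mistakes made while learning the disjunction:", mistakes)
```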
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367684
This work evolves from the concept of deterministic annealing (DA) as a useful tool to solve non-convex optimization problems. DA is used in order to avoid local minima of the given application specific cost function in which traditional techniques get trapped. It is derived within a probabilistic framework from basic information theoretic principles. The application specific cost is minimized subject to a level of randomness (Shannon entropy), which is gradually lowered. A hard (non random) solution emerges at the limit of low temperature after the system goes through an annealing process. This paper deals with the important and useful application of DA to vector quantization of images. An extension of the basic algorithm by incorporating a structural constraint of mass or density is used to allow optimization of vector quantizers. The constrained algorithm is modified to work for a set of systems to generate a more generalized codebook. Experimental results show considerable performance gains over conventional methods.
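A bare-bones deterministic-annealing vector quantizer, without the mass/density constraint introduced in the paper, might look as follows; the cooling schedule and symmetry-breaking perturbation are arbitrary choices made for this sketch:

```python
import numpy as np

def da_vector_quantizer(X, n_codes=8, T0=5.0, T_min=0.01, cooling=0.9,
                        rng=np.random.default_rng(0)):
    """Unconstrained deterministic-annealing VQ: soft (Gibbs) assignments at
    temperature T, centroid re-estimation, then gradual cooling toward a hard quantizer."""
    codebook = X.mean(axis=0) + 1e-3 * rng.standard_normal((n_codes, X.shape[1]))
    T = T0
    while T > T_min:
        for _ in range(20):
            d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            # association probabilities P(code j | sample x) at temperature T
            p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
            p /= p.sum(axis=1, keepdims=True)
            codebook = (p.T @ X) / p.sum(axis=0)[:, None]
        codebook += 1e-4 * rng.standard_normal(codebook.shape)  # break symmetry at phase splits
        T *= cooling
    return codebook

# toy usage on random 2-D "image blocks"
X = np.random.default_rng(1).standard_normal((1000, 2))
print(da_vector_quantizer(X, n_codes=4))
```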
Gerhard Dangelmayr, Sabino Gadaleta, Douglas Hundley, Michael J. Kirby
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367685
Topology preserving maps derived from neural network learning algorithms are well suited to approximate probability distributions from data sets. We use such algorithms to generate maps which allow the prediction of future events from a sample time series. Our approach relies on computing transition probabilities, modeling the time series as a Markov process. Thus the technique can be applied to stochastic as well as to deterministic chaotic data, and it also permits the computation of `error bars' for estimating the quality of predictions. We apply the method to the prediction of measured chaotic and noisy time series.
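The transition-probability idea can be sketched as follows; note that for brevity the state space is built here by simple quantile binning of a scalar series rather than by a topology-preserving map, and the "error bar" is a crude one-step spread rather than the paper's estimate:

```python
import numpy as np

def markov_predictor(series, n_states=16):
    """Quantize a scalar time series into states, estimate transition probabilities,
    and return a one-step predictor with a simple spread ('error bar') estimate."""
    edges = np.quantile(series, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(series, edges)
    centers = np.array([series[states == s].mean() for s in range(n_states)])
    counts = np.ones((n_states, n_states))               # Laplace smoothing
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    def predict(x):
        s = np.digitize([x], edges)[0]
        mean = P[s] @ centers
        spread = np.sqrt(P[s] @ (centers - mean) ** 2)    # crude one-step error bar
        return mean, spread
    return predict

# toy usage: noisy observations of a chaotic logistic map
rng = np.random.default_rng(0)
x = np.empty(5000); x[0] = 0.3
for t in range(4999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
x_obs = x + 0.01 * rng.standard_normal(5000)
predict = markov_predictor(x_obs[:4000])
print(predict(x_obs[4000]))
```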
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367686
Assessing the conflict potential of an international situation is very important in the exercise of Defence duty. Mastering a formal method allowing the detection of risky situations is a necessity. Our aim was to develop a highly operational method twinned with a computer simulation tool which can explore a huge number of potential war zones and can test many hypotheses with high accuracy within reasonable time. We use a multi-agent system to describe an international situation. The agent coding allows us to give computer existence to very abstract concepts such as a government, the economy, the armed forces, or foreign policy. We give these agents fuzzy rules of behavior; those rules represent human expertise. To benchmark our model, we used the Falklands War for our first simulations. The main distortion between the historical reality and our simulations comes from our fuzzy controller, which causes a great loss of information. We are going to change it to a more efficient one in order to fit the historical reality. Agent coding with fuzzy rules allows human experts to stay close to their statements and expertise, and they can handle this kind of tool quite easily.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367687
In modeling brightness perception, one problem of high biological relevance is how luminance information is transmitted into the primary visual cortex. This is especially interesting in the light of recent neurophysiological studies, which suggest that simple cells respond, albeit weakly, to homogeneously illuminated surfaces. This indicates that simple cells possess far more functional complexity than the widespread notion of mere line and edge detectors suggests. Here we present new neural circuits for modeling even and odd simple cells, capable of transmitting brightness information without using an extra `luminance channel'. Although these circuits taken by themselves cannot yet be regarded as a full brightness model, they might give some insight into why the visual system uses certain processing strategies. These include, e.g., the segregation into ON and OFF channels and the mutual inhibition of simple cell pairs which are in an anti-phase relation. These simple cell circuits turn out to be robust against noise, and thus might find application in a border detection scheme, besides being a building block for a more sophisticated brightness model.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367688
A quantum mechanical approach has been used to develop a model of the dynamics of the neural ribonucleic acid molecule. Macro and micro Fermi-Pasta-Ulam recurrence has been considered as a principal information carrier in a neuron.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367689
We have tested several predictive algorithms to determine their ability to learn from and find relationships between large numbers of variables. The purpose of this test is to produce control algorithms for sophisticated devices like particle accelerators. In particular, we use COMFORT, a particle accelerator simulator, to generate large amounts of data. We then compare results among several fundamentally different types of algorithms, including least squares and hybrid neural networks. Our data indicate which algorithms are preferable on the basis of performance and training times.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367690
The study is concerned with the fundamentals of granular computing and its use in system modeling and system simulation. In contrast to numerically driven identification techniques, in granular modeling we concentrate on building meaningful information granules in the space of experimental data and on forming the ensuing model as a web of associations between such constructs. As such, models are designed at the level of information granules and generate results in the same granular, rather than purely numeric, format. First, we elaborate on the role of information granules viewed as basic building modules exploited in model development. Second, we show how information granules are constructed. It is shown how to express relationships (links) between information granules; in this case two measures of linkage are discussed, namely a relevance index and a notion of fuzzy correlation. Granular computing invokes a number of layers whose existence is implied by different levels of information granularity. We show how to move between these layers by using transformations that encode and decode information granules. Subsequently, some generic architectures of granular modeling are discussed.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367692
A new method to study large scale neural networks is presented in this paper. The basis is the use of Feynman-like diagrams. These diagrams allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the many-body problem. The proposed method is applied to a very simple structure composed of a string of neurons with interactions among them. It is shown that a new behavior appears at the end of the row. This behavior is different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the proper frequencies of the individual neurons. An application to an optical neural network is reported.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367693
IPs (Intellectual Properties) are becoming increasingly essential in today's electronic system design. One of the important issues in design reuse is IP selection, i.e., finding an existing solution that best matches the user's expectations. This paper describes an Internet-based intelligent software system (software agent) that helps the user pick out the optimal designs among those marketed by IP vendors. The Software Agent for IP Selection (SAFIPS) conducts dialogues with both the IP users and the IP vendors, narrowing the choices by evaluating general characteristics first, followed by matching at the behavioral, RTL, logic, and physical levels. The SAFIPS system conducts reasoning based on fuzzy logic rules derived in the process of the software agent's dialogues with the IP users and vendors. In addition to the dialogue system and the fuzzy logic inference system, SAFIPS includes an HDL simulator and a fuzzy logic evaluator that are used to measure how well the user's behavioral model matches the IP vendor's model.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367694
This paper provides an overview of our recent work in the development of neural network models for optimization and control of electronic manufacturing processes. The concept of physical-neural network models and model transfer are described and demonstrated to be effective in building accurate neural network models economically. Process diagnostic techniques using multiple neural networks are reviewed and shown to be accurate for fault diagnosis. Finally, recent strategies in integration of statistical and neural network tools for process control are discussed. Several examples from electronics manufacturing such as chemical vapor deposition and fine pitch stencil printing are described to illustrate application of the basic concepts discussed.
Andrew H. Sung, Hujun J. Li, Shih-Hsien Chang, Reid Grigg
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367695
In this paper, a technique is presented for using neural networks as an aid for solving nonlinear engineering problems, which are encountered in optimization, simulations and modeling, or complex engineering calculations. Iterative algorithms are often used to find the solutions of such problems. For many large-scale engineering problems, finding good starting points for the iterative algorithms is the key to good performance. We describe using neural networks to select starting points for the iterative algorithms for nonlinear systems. Since input/output training data are often easily obtained from the problem description or from the system equations, a neural network can be trained to serve as a rough model of the underlying problem. After the neural network is trained, it is used to select starting points for the iterative algorithms. We illustrate the method with four small nonlinear equation groups; two real applications in petroleum engineering are also given to demonstrate the method's potential in engineering applications.
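The idea can be illustrated on a one-parameter toy problem: train a rough regression model on (parameter, known solution) pairs, then hand its prediction to Newton's method as a starting point. The cubic test equation, network size, and training schedule below are invented for illustration; a tiny tanh network trained by plain gradient descent stands in for the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parametric problem: find the real root x(p) of x^3 + p*x - 1 = 0, p > 0.
def real_root(p):
    r = np.roots([1.0, 0.0, p, -1.0])
    return r[np.abs(r.imag) < 1e-9].real[0]

# training data map the problem parameter p to its known solution x(p)
p_train = rng.uniform(0.1, 5.0, 400)
x_train = np.array([real_root(p) for p in p_train])

# tiny 1-10-1 tanh network trained by full-batch gradient descent
W1, b1 = rng.standard_normal((10, 1)) * 0.5, np.zeros(10)
W2, b2 = rng.standard_normal(10) * 0.5, 0.0
lr = 0.01
for epoch in range(3000):
    h = np.tanh(p_train[:, None] @ W1.T + b1)       # hidden activations, shape (400, 10)
    pred = h @ W2 + b2
    err = pred - x_train
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)           # back-propagate through tanh
    gW1 = gh.T @ p_train[:, None] / len(err); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def nn_start(p):
    """Rough model of the problem: predicted solution used as a starting point."""
    return float(np.tanh(np.array([[p]]) @ W1.T + b1) @ W2 + b2)

def newton(p, x0, iters=20):
    x = x0
    for _ in range(iters):
        x -= (x**3 + p * x - 1) / (3 * x**2 + p)
    return x

p = 2.5
print("NN start:", nn_start(p), "-> Newton solution:", newton(p, nn_start(p)))
```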
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367696
This paper describes several applications of neural networks and fuzzy logic in petroleum engineering that have been, or are being, developed recently at New Mexico Tech. These real-world applications include a fuzzy controller for drilling operation; a neural network model to predict the cement bonding quality in oil well completion; using neural networks and fuzzy logic to rank the importance of input parameters; and using fuzzy reasoning to interpret log curves. We also briefly describe two ongoing, large-scale projects on the development of a fuzzy expert system for prospect risk assessment in oil exploration; and on combining neural networks and fuzzy logic to tackle the large-scale simulation problem of history matching, a long-standing difficult problem in reservoir modeling.
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367698
Satellite payloads are fast increasing in complexity, resulting in commensurate growth in the cost of manufacturing and operation. A need exists for a software tool which would assist engineers in the production and operation of satellite systems. We have designed and implemented a software tool which performs part of this task. The tool aids a test engineer in debugging satellite payloads during system testing. At this stage of satellite integration and testing, both the tested payload and the testing equipment represent complicated systems consisting of a very large number of components and devices. When an error is detected during execution of a test procedure, the tool presents to the engineer a ranked list of potential sources of the error and a list of recommended further tests. On this basis the engineer decides whether to perform some of the recommended additional tests or to replace the suspect component. The tool has been installed in a payload testing facility. The tool is based on Bayesian networks, a graphical method of representing uncertainty in terms of probabilistic influences. The Bayesian network was configured using detailed flow diagrams of testing procedures and block diagrams of the payload and testing hardware. The conditional and prior probability values were initially obtained from experts and refined in later stages of design. The Bayesian network provided a very informative model of the payload and testing equipment and inspired many new ideas regarding future test procedures and testing equipment configurations. The tool is the first step in developing a family of tools for various phases of satellite integration and operation.
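A drastically reduced sketch of the ranking step: two hypothetical fault hypotheses (test equipment vs. payload device), expert-style prior and conditional probabilities, and exact enumeration to rank suspects given an observed test failure. Real payload models involve far more nodes and evidence types; all numbers below are invented.

```python
import itertools

# Hypothetical two-suspect diagnosis net: either the test equipment (E) or the
# payload device under test (D) may be faulty; a failed procedure (F) is observed.
P_E = 0.02                    # prior fault probability of the test equipment
P_D = 0.05                    # prior fault probability of the payload device
# P(F = fail | E, D): conditional probability table of the kind elicited from experts
P_F = {(0, 0): 0.01, (0, 1): 0.90, (1, 0): 0.80, (1, 1): 0.97}

def posterior_given_failure():
    """Enumerate the joint distribution and condition on the observed failure."""
    joint = {}
    for e, d in itertools.product([0, 1], repeat=2):
        pe = P_E if e else 1 - P_E
        pd = P_D if d else 1 - P_D
        joint[(e, d)] = pe * pd * P_F[(e, d)]
    z = sum(joint.values())
    p_equipment = sum(v for (e, d), v in joint.items() if e == 1) / z
    p_device = sum(v for (e, d), v in joint.items() if d == 1) / z
    return sorted([("payload device", p_device), ("test equipment", p_equipment)],
                  key=lambda kv: -kv[1])

print(posterior_given_failure())   # ranked list of potential sources of the error
```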
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367699
In this paper we introduce an adaptive image segmentation neural network based on a Gaussian mixture classifier that is able to accommodate unlabeled data in the training process to improve generalization when labeled data are insufficient. The classifier is trained by maximizing the joint likelihood of features and labels over the entire data set (labeled and unlabeled). The classifier builds grey-level images containing estimates of the class posteriors (as many images as classes) that feed the segmentation algorithm. The paper is focused on the adaptive classification part of the algorithm. The classification tests are performed over Landsat TM mini-scenes. We assess the efficiency of the adaptive classifier depending on the model complexity and the proportion of labeled to unlabeled data.
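A compact sketch of the joint-likelihood training idea: one Gaussian per class with diagonal covariance, labeled samples with responsibilities clamped to their class, unlabeled samples updated by EM. The paper's classifier and feature set are richer; everything below is a simplified stand-in.

```python
import numpy as np

def semisup_gmm(X_lab, y_lab, X_unl, n_classes, n_iter=50):
    """EM for a one-Gaussian-per-class mixture, maximizing the joint likelihood
    of labeled and unlabeled samples (diagonal covariances for simplicity)."""
    X = np.vstack([X_lab, X_unl])
    n_lab, d = X_lab.shape
    # responsibilities: fixed one-hot for labeled data, uniform start for unlabeled
    R = np.full((len(X), n_classes), 1.0 / n_classes)
    R[:n_lab] = np.eye(n_classes)[y_lab]
    for _ in range(n_iter):
        # M-step: class priors, means, and diagonal variances from responsibilities
        Nk = R.sum(axis=0)
        pi = Nk / Nk.sum()
        mu = (R.T @ X) / Nk[:, None]
        var = (R.T @ X**2) / Nk[:, None] - mu**2 + 1e-6
        # E-step: recompute posteriors only for the unlabeled block
        logp = (-0.5 * (((X[n_lab:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        post = np.exp(logp)
        R[n_lab:] = post / post.sum(axis=1, keepdims=True)
    return pi, mu, var, R[n_lab:]    # class posteriors play the role of the grey-level maps

# toy usage: two 2-D classes, 10 labeled and 500 unlabeled samples
rng = np.random.default_rng(0)
A = rng.normal([0, 0], 1, (255, 2)); B = rng.normal([4, 4], 1, (255, 2))
X_lab = np.vstack([A[:5], B[:5]]); y_lab = np.array([0]*5 + [1]*5)
X_unl = np.vstack([A[5:], B[5:]])
pi, mu, var, post = semisup_gmm(X_lab, y_lab, X_unl, n_classes=2)
print(mu)
```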
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367700
Imaging technology has extended itself from performing gauging on machined parts, to verifying labeling on consumer products, to quality inspection of a variety of man-made and natural materials. Much of this has been made possible by faster computers and algorithms used to extract useful information from the image. In applications to agricultural material, specifically tobacco leaves, the tremendous amount of natural variability in color and texture creates new challenges for image feature extraction. As with many imaging applications, the problem can be expressed as `I see it in the image, how can I get the computer to recognize it?' In this application, the goal is to measure the amount of thick stem pieces in an image of tobacco leaves. By backlighting the leaf, the stems appear dark on a lighter background. The difference in lightness of leaf versus darkness of stem depends on the orientation of the leaf and the amount of folding. Because of this, any image thresholding approach must be adaptive. Another factor that allows us to distinguish the stem from the leaf is shape: the stem is long and narrow, while dark folded leaf is larger and more oblate. These criteria, under the image collection limitations, make this a good application for fuzzy logic. Several generalized classification algorithms, such as fuzzy c-means and fuzzy learning vector quantization, are evaluated and compared. In addition, fuzzy thresholding based on image shape and compactness is applied to this application.
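Of the classifiers mentioned, fuzzy c-means is the easiest to sketch; the toy below clusters backlit grey levels into a dark (stem) and a light (leaf) group, giving an adaptive, image-specific threshold. The shape/compactness criteria and the fuzzy LVQ comparison are not shown, and all numbers are invented.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, rng=np.random.default_rng(0)):
    """Standard fuzzy c-means on scalar features (here: pixel grey levels)."""
    x = x.reshape(-1, 1).astype(float)
    U = rng.random((len(x), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ x) / Um.sum(axis=0)[:, None]
        d = np.abs(x - centers.T) + 1e-9                   # (n, c) distances to centers
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers.ravel(), U

# toy backlit "image": dark stem pixels near grey level 40, brighter folded leaf near 150
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 8, 500), rng.normal(150, 25, 4500)])
centers, U = fuzzy_c_means(pixels)
dark = int(np.argmin(centers))
stem_fraction = np.mean(U[:, dark] > 0.5)                  # adaptive, image-specific cut
print(centers, stem_fraction)
```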
Proceedings Volume Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II, (1999) https://doi.org/10.1117/12.367701
Semisupervised classification is one approach to converting multiband optical and infrared imagery into landcover maps. First, a sample of image pixels is extracted and clustered into several classes. The analyst next combines the clusters by hand to create a smaller set of groups that correspond to a useful landcover classification. The remaining image pixels are then assigned to one of the aggregated cluster groups by use of a per-pixel classifier. Since the cluster aggregation process frequently creates groups with multivariate shapes ill-suited for parametric classifiers, there has been renewed interest in nonparametric methods for the task. This research reports the results of an experiment conducted on six Landsat TM images to compare the accuracy of pixel assignment performed by four nearest neighbor classifiers and two neural network paradigms in a semisupervised context. In all the experiments, both the neighbor-based classifiers and the neural networks assigned pixels with higher accuracy than the maximum likelihood approach. There was little substantive difference in accuracy among the neighbor-based classifiers, but the feed-forward network was significantly superior to the probabilistic neural network. The feed-forward network classifier generally produced the highest accuracy on all six of the images, but it was not significantly better than the accuracy produced by the best neighbor-based classifier.
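The semisupervised pipeline itself is simple to sketch: cluster a pixel sample, let the analyst aggregate clusters into landcover groups by hand, then label the remaining pixels with a nonparametric per-pixel rule (a 1-nearest-neighbour rule here). Everything below, including the cluster-to-group mapping and the random "bands", is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for sampled multiband pixels: 600 samples with 4 "bands"
sample = rng.normal(size=(600, 4))

# step 1: unsupervised clustering of the sample (plain k-means, 8 clusters)
def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, cluster_id = kmeans(sample, k=8)

# step 2: the analyst aggregates clusters into landcover groups by hand
# (hypothetical mapping for illustration only)
group_of_cluster = {0: "water", 1: "forest", 2: "forest", 3: "urban",
                    4: "urban", 5: "crop", 6: "crop", 7: "water"}

# step 3: assign remaining pixels with a 1-nearest-neighbour rule against the
# labeled sample (one of the nonparametric per-pixel classifiers compared)
def assign(pixels):
    d2 = ((pixels[:, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return [group_of_cluster[int(cluster_id[i])] for i in nearest]

print(assign(rng.normal(size=(5, 4))))
```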