We propose a new partial volume (PV) segmentation scheme to extract the bladder wall for computer-aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only better tissue contrast between the bladder wall and the bladder lumen but also multispectral information. Because the multispectral images are spatially registered over three-dimensional space, the information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies each voxel as percentages of the tissues it contains. It preserves both morphology and texture information and captures tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on two bladder states (full and empty of urine) to extract both geometrical and texture information. Moreover, acquiring both transverse and coronal MR scans eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible for mass screening and lesion detection in virtual cystoscopy (VC).
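To make the voxel-level quantification above concrete, here is a minimal sketch, not the authors' algorithm, of estimating per-voxel tissue fractions from multispectral intensities by non-negative least squares against assumed tissue signatures; the signatures, channels, and normalization are illustrative placeholders.

```python
# A minimal sketch (not the authors' algorithm) of voxel-wise partial volume
# estimation: each multispectral voxel is modeled as a non-negative mixture of
# known tissue signatures, and the mixture fractions are normalized to sum to 1.
import numpy as np
from scipy.optimize import nnls

# Hypothetical mean signatures of K tissues over M spectral channels (one column per tissue).
tissue_signatures = np.array([
    [0.95, 0.10],   # channel 1 (e.g. T2-weighted): urine, bladder wall
    [0.20, 0.60],   # channel 2 (e.g. T1-weighted): urine, bladder wall
])                  # shape (M, K)

def partial_volume_fractions(voxel, signatures):
    """Return per-tissue fractions in [0, 1] that approximately sum to 1."""
    coeffs, _ = nnls(signatures, voxel)        # non-negative least squares fit
    total = coeffs.sum()
    return coeffs / total if total > 0 else coeffs

voxel = np.array([0.55, 0.40])                 # observed multispectral intensities
print(partial_volume_fractions(voxel, tissue_signatures))
```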
Automatic target recognition (ATR) using forward-looking infrared (FLIR) imagery is a challenging problem because of the highly unpredictable nature of target thermal signatures. The high variability of target signatures, target obscuration, and background clutter distort the target features used by the detection stage to identify a potential target. Consequently, the detection stage produces a large number of false alarms, and the distorted features also make accurate classification of the detected targets difficult. The proposed technique, in essence, attempts to repair the distorted features of the targets to improve detection and classification accuracy. It performs feature extraction in two steps: first, feature vectors are extracted and classified as either complete or incomplete using feed-forward neural networks; the incomplete features are then transformed into complete features. These repaired features can then be used to identify and classify the targets.
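A hedged sketch of the two-step feature-repair process described above; the network architectures, the synthetic training data, and the use of scikit-learn's MLPClassifier/MLPRegressor are illustrative assumptions, not the paper's implementation.

```python
# Step 1: a feed-forward network labels each feature vector as complete or
# incomplete. Step 2: a second network maps incomplete vectors to repaired
# ("complete") vectors. All data and sizes are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
complete = rng.normal(0.0, 1.0, size=(200, 16))             # intact target features
incomplete = complete[:100] * rng.uniform(0, 1, (100, 16))   # distorted copies

# Step 1: complete-vs-incomplete classifier.
X = np.vstack([complete, incomplete])
y = np.array([1] * len(complete) + [0] * len(incomplete))
gate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

# Step 2: regressor that transforms incomplete features into complete ones.
repair = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(
    incomplete, complete[:100])

def extract_features(v):
    """Pass complete features through; repair those flagged as incomplete."""
    if gate.predict(v.reshape(1, -1))[0] == 1:
        return v
    return repair.predict(v.reshape(1, -1))[0]
```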
In this paper, we investigate several fusion techniques for designing a composite classifier to improve the performance (probability of correct classification) of FLIR ATR. The motivation behind fusing ATR algorithms is that if each contributing technique in a fusion algorithm (composite classifier) emphasizes learning at least some target features that are not learned by the other contributing techniques, the fusion may improve the overall probability of correct classification of the composite classifier. In this research, we use four ATR algorithms for fusion and design the composite classifiers with averaged Bayes classifier, committee of experts, stacked-generalization, winner-takes-all, and ranking-based fusion techniques. The experimental results show an improvement of more than 5% over the best individual performance.
A modular clutter-rejection technique that uses region-based principal component analysis (PCA) is proposed. A major problem in FLIR ATR is the poorly centered targets generated by the preprocessing stage. Our modular clutter-rejection system uses static as well as dynamic region of interest (ROI) extraction to overcome the problem of poorly centered targets. In static ROI extraction, the center of the representative ROI coincides with the center of the potential target image. In dynamic ROI extraction, a representative ROI is moved in several directions with respect to the center of the potential target image to extract a number of ROIs. Each module in the proposed system applies region-based PCA to generate the feature vectors, which are subsequently used to make a decision about the identity of the potential target. Region-based PCA uses topological features of the targets to reject false alarms. In this technique, a potential target is divided into several regions and a PCA is performed on each region to extract regional feature vectors. We propose using regional feature vectors of arbitrary shapes and dimensions that are optimized for the topology of a target in a particular region. These regional feature vectors are then used by a two-class classifier based on the learning vector quantization to decide whether a potential target is a false alarm or a real target. We also present experimental results using real-life data to evaluate and compare the performance of the clutter-rejection systems with static and dynamic ROI extraction.
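The following sketch illustrates the region-based PCA plus LVQ pipeline described above under assumed details (a 2x2 region layout, placeholder feature dimensions, and a plain LVQ1 update); it is not the authors' implementation.

```python
# Region-based PCA clutter rejection: split each target chip into fixed
# regions, fit one PCA per region, concatenate regional features, and feed
# them to a simple two-class LVQ1 classifier (target vs. clutter).
import numpy as np
from sklearn.decomposition import PCA

def split_regions(chip, rows=2, cols=2):
    """Split an HxW chip into rows*cols rectangular regions (flattened)."""
    h, w = chip.shape
    return [chip[i * h // rows:(i + 1) * h // rows,
                 j * w // cols:(j + 1) * w // cols].ravel()
            for i in range(rows) for j in range(cols)]

def regional_features(chips, pcas=None, dims=(8, 8, 4, 4)):
    """Fit (or reuse) one PCA per region; return concatenated feature vectors."""
    regions = [np.array([split_regions(c)[k] for c in chips]) for k in range(4)]
    if pcas is None:
        pcas = [PCA(n_components=d).fit(r) for r, d in zip(regions, dims)]
    feats = np.hstack([p.transform(r) for p, r in zip(pcas, regions)])
    return feats, pcas

def lvq1_train(X, y, prototypes, labels, lr=0.05, epochs=20):
    """Plain LVQ1: move the winning prototype toward (same class) or away from
    (different class) each training sample."""
    for _ in range(epochs):
        for x, t in zip(X, y):
            k = np.argmin(((prototypes - x) ** 2).sum(axis=1))
            step = lr if labels[k] == t else -lr
            prototypes[k] += step * (x - prototypes[k])
    return prototypes
```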
This paper presents a new lossless image compression technique called modular differential pulse code modulation. The proposed technique consists of a VQ classifier and several neural network class predictors. The classifier uses the four previously encoded pixels to identify the class of the current pixel (the pixel to be predicted). The current pixel is then predicted by the corresponding class predictor. Experimental results demonstrate that the proposed technique reduces the bit rate by as much as 10 percent compared with lossless JPEG.
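As an illustration of the modular DPCM prediction step described above, the sketch below classifies the four causal neighbors against a toy VQ codebook and applies a per-class predictor; the codebook values and the linear per-class predictors (standing in for the paper's neural network class predictors) are assumptions.

```python
# The four causal neighbors (W, NW, N, NE) of the current pixel are matched to
# a small VQ codebook of contexts; the matching class selects the predictor.
import numpy as np

codebook = np.array([[10., 10., 10., 10.],     # flat context
                     [10., 10., 200., 200.],   # horizontal-edge context
                     [10., 200., 200., 10.]])  # diagonal-edge context
class_weights = np.array([[0.25, 0.25, 0.25, 0.25],   # averaging predictor
                          [0.0,  0.0,  0.5,  0.5],    # favor upper neighbors
                          [0.5,  0.0,  0.0,  0.5]])   # favor W and NE

def predict_pixel(img, i, j):
    """Classify the causal context, then predict img[i, j] with that class
    (interior pixels only; borders need special handling)."""
    context = np.array([img[i, j - 1], img[i - 1, j - 1],
                        img[i - 1, j], img[i - 1, j + 1]], dtype=float)
    cls = np.argmin(((codebook - context) ** 2).sum(axis=1))
    return class_weights[cls] @ context

# Lossless coding would transmit the integer residual img[i, j] - round(pred);
# the decoder repeats the same classification, so no side information is sent.
```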
The preprocessing or detection stage of an automatic target recognition system extracts areas containing potential targets from a battlefield scene. These potential target images are then sent to the classification stage to determine the identity of the targets. It is highly desirable at the preprocessing stage to minimize the incorrect rejection rate; this, however, results in a high false alarm rate. In this paper, we present a new technique to reject the false alarms (clutter images) produced by the preprocessing stage. Our technique, region-based principal component analysis (PCA), uses topological features of the targets to reject false alarms. A potential target is divided into several regions and a PCA is performed on each region to extract regional feature vectors. We propose using regional feature vectors of arbitrary shapes and dimensions that are optimized for the topology of a target in a particular region. These regional feature vectors are then used by a two-class classifier based on learning vector quantization to decide whether a potential target is a false alarm or a real target.
Neural networks are highly parallel architectures that have been used successfully in pattern matching, clustering, and image coding applications. In this paper, we review neural network based techniques that have been used in image coding applications. The neural networks covered in this paper include the multilayer perceptron (MLP), the competitive neural network (CNN), the frequency-sensitive competitive neural network (FS-CNN), and the self-organizing feature map (SOFM) network. All of the above-mentioned neural networks except the MLP are trained using competitive learning and are used for designing the vector quantizer codebook. The major problem with competitive learning is that some of the neurons may get little or no chance to win the competition, which may lead to a codebook containing several untrained or insufficiently trained codevectors. There are several possible ways to solve this problem; the FS-CNN and the SOFM offer solutions to the under-utilization of neurons. We present design algorithms for the above-mentioned neural networks and evaluate and compare their performance on several standard monochrome images.
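The frequency-sensitive competitive learning mentioned above can be sketched as follows; the codebook size, learning rate, and fairness rule (scaling distances by win counts) are a generic FSCL formulation rather than the specific design algorithms reviewed in the paper.

```python
# Frequency-sensitive competitive learning for codebook design: the distance of
# each codevector is scaled by its win count, so rarely winning codevectors
# become easier to win and neuron under-utilization is reduced.
import numpy as np

def fscl_codebook(training_vectors, codebook_size=64, lr=0.05, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    codebook = training_vectors[rng.choice(len(training_vectors), codebook_size,
                                           replace=False)].astype(float)
    wins = np.ones(codebook_size)                          # fairness counters
    for _ in range(epochs):
        for x in training_vectors:
            d = ((codebook - x) ** 2).sum(axis=1) * wins   # frequency-weighted distance
            k = np.argmin(d)
            codebook[k] += lr * (x - codebook[k])          # move winner toward x
            wins[k] += 1
    return codebook
```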
Composite classifiers that are constructed by combining a number of component classifiers have been designed and evaluated on the problem of automatic target recognition (ATR) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers. A number of classifier fusion algorithms, which combine the outputs of all the component classifiers, and classifier selection algorithms, which use a cascade architecture that relies on a subset of the component classifiers, are analyzed. Each composite classifier is implemented and tested on a large data set of real FLIR images. The performances of the proposed composite classifiers are compared based on their classification ability and computational complexity. It is demonstrated that the composite classifier based on a cascade architecture greatly reduces computational complexity with a statistically insignificant decrease in performance in comparison to standard classifier fusion algorithms.
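A hedged sketch of the cascade (classifier-selection) idea described above: the cheaper component classifier answers first and the more expensive one is consulted only when confidence is low, which is how the cascade cuts average computational cost. The two-component interface and threshold are assumptions for illustration.

```python
# Generic two-stage cascade: each component classifier is a callable returning
# (label, confidence). The expensive component is only invoked when the cheap
# component is not confident enough.
def cascade_classify(x, cheap_clf, expensive_clf, confidence_threshold=0.9):
    label, confidence = cheap_clf(x)           # e.g. the LVQ-based component
    if confidence >= confidence_threshold:
        return label                            # most inputs stop here
    return expensive_clf(x)[0]                  # e.g. the modular-network component
```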
A modular neural network classifier has been applied to the problem of automatic target recognition (ATR) using forward-looking infrared (FLIR) imagery. This modular network classifier consists of several neural networks (expert networks) for classification. Each expert network in the modular network classifier receives as input features extracted from only a local region of the target, known as a receptive field, and is trained independently of the other expert networks. The classification decisions of the individual expert networks are combined to determine the final classification. Our experiments show that this modular network classifier is superior to a fully connected neural network classifier in terms of complexity (number of weights to be learned) and performance (probability of correct classification). The proposed classifier shows a high immunity to clutter and target obscuration due to the independence of the individual neural networks in the modular network. Performance of the proposed classifier is further improved by the use of multi-resolution features and by the introduction of a higher-level neural network on top of the expert networks, a method known as stacked generalization.
The mission of the Department of Defense (DoD) Counter-drug Technology Development Program Office's Face Recognition Technology (FERET) program is to develop automatic face recognition systems, from the development of detection and recognition algorithms in the laboratory through their demonstration in a prototype real-time system. To achieve this objective, the program supports research in face recognition algorithms, the collection of a large database of facial images, independent testing and evaluation of face recognition algorithms, the construction of real-time demonstration systems, and the integration of algorithms into those demonstration systems. The FERET program has established baseline performance for face recognition. The Army Research Laboratory (ARL) has been the technical agent for the Advanced Research Projects Agency since 1993, managing development of the recognition algorithms, database collection, and algorithm testing. Currently, ARL is managing the development of several prototype face recognition systems that will demonstrate complete real-time video face identification in an access control mission. This paper gives an overview of the FERET program, presents recent performance results of the face recognition algorithms evaluated, and addresses the future direction of the program and its applications for DoD and law enforcement.
Although the traditional method of overlay (O/L) measurement (relative comparison between two levels) has proved to be a practical and cost-effective way of measuring overlay, in the future this method will have to be supplemented by techniques that measure feature position in an absolute coordinate system and then compare the output with the database, rather than with some other level whose accuracy remains to be established. The use of well-calibrated Coordinate Measuring Instruments (CMIs) is one way to achieve the desired accuracy. But calibrating CMIs is a chicken-or-egg dilemma: you cannot calibrate one without an accurately measured artifact, and you cannot make the artifact without a well-calibrated instrument. Or so it seems. Positional self-calibration methods were invented to solve this problem and show great promise, but there are still many subtleties that must be resolved before such methods can be trusted. This paper explains the geometric basis for lattice methods of self-calibration and concludes with a theorem that demonstrates one of the striking difficulties that must be faced when relying on self-calibration algorithms.
We present the design of an automatic target recognition (ATR) system that is part of a hybrid system incorporating some domain knowledge. This design obtains an adaptive trade-off between training performance and memorization capacity by decomposing the learning process with respect to a relevant hidden variable. The probability of correct classification over 10 target classes is 73.4%. The probability of correct classification between the target class and the clutter class (where clutters are the false alarms obtained from another ATR) is 95.1%. These performances can be improved by reducing the memorization capacity of the system, since our estimate shows that it is currently too large.
In this paper, a compression algorithm is developed to compress SAR imagery at very low bit rates. A new vector quantization (VQ) technique called the predictive residual vector quantizer (PRVQ) is presented for encoding the SAR imagery. A variable-rate VQ scheme called the entropy-constrained PRVQ (EC-PRVQ), obtained by imposing a constraint on the output entropy of the PRVQ, is also designed. Experimental results are presented for both PRVQ and EC-PRVQ at high compression ratios. The encoded images are also compared with those of a wavelet-based coder.
In this paper, an adaptive neural network vector predictor is designed in order to improve the performance of the predictive component of the predictive vector quantizer (PVQ). The proposed vector predictor consists of a set of dedicated predictors (experts), where each predictor is optimized for a particular class of input vectors. In our simulations, we used five multi-layer perceptrons (MLPs) to design our expert predictors. Each MLP predictor is trained separately using a set of training vectors that belong to a particular class. The class identity of each training vector is determined by its directional variances. In our current implementation, one predictor is optimized for stationary blocks and the four other predictors are designed for horizontal, vertical, 45-degree, and 135-degree diagonally oriented edge blocks. The back-propagation algorithm is used for training each network. The directional variances of the neighboring blocks are used to select the appropriate expert predictor for the current input block; therefore, no overhead information needs to be transmitted to inform the receiver about the predictor selection. Our simulations show that the proposed scheme gives an improvement of more than 1 dB over a single MLP predictor. The perceptual quality of the predicted images is also significantly improved.
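The predictor-selection rule described above can be sketched roughly as follows; the variance definitions, the flatness threshold, and the orientation naming are assumptions, and in the actual scheme the variances are computed from already-decoded neighboring blocks so that no side information is needed.

```python
# Classify a block as stationary or as one of four edge orientations using
# directional variances of pixel differences; the expert predictor for that
# class is then used.
import numpy as np

def directional_variances(block):
    """Variances of pixel differences along four orientations of a 2-D block."""
    return {
        "horizontal": np.var(np.diff(block, axis=1)),
        "vertical":   np.var(np.diff(block, axis=0)),
        "diag_45":    np.var(block[1:, :-1] - block[:-1, 1:]),
        "diag_135":   np.var(block[1:, 1:] - block[:-1, :-1]),
    }

def select_expert(block, flat_threshold=1.0):
    """Return the class label used to pick one of the five expert predictors."""
    v = directional_variances(block)
    if max(v.values()) < flat_threshold:
        return "stationary"
    # Differences taken along the edge direction are small, so the orientation
    # with the smallest difference-variance indicates the edge orientation.
    return min(v, key=v.get)
```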
In this paper we present a new scheme for color image compression. The proposed scheme exploits the correlation between the basic color components (red, green, and blue: RGB) by predicting two color components given one color component. Specifically, this scheme employs neural network predictors to predict the red and blue color components using the encoded (reconstructed) green color component. The prediction error is further quantized using vector quantization. The performance of the proposed scheme is evaluated and compared with that of the JPEG.
Finite-state vector quantization (FSVQ) is known to give better performance than memoryless vector quantization (VQ). Recently, a new scheme that incorporates a finite memory into a residual vector quantizer (RVQ) has been developed. This scheme is referred to as finite-state RVQ (FSRVQ). FSRVQ gives better performance than conventional FSVQ with a substantial reduction in the memory requirement. The codebook search complexity of an FSRVQ is also reduced in comparison with that of the conventional FSVQ scheme. This paper presents a new variable-rate VQ scheme called entropy-constrained finite-state residual vector quantization (EC-FSRVQ). EC-FSRVQ is designed by incorporating a constraint on the output entropy of an FSRVQ during the design process. This scheme is intended for low bit rate applications due to its low codebook search complexity and memory requirements. Experimental results show that the EC-FSRVQ outperforms JPEG at low bit rates.
A major problem with a VQ-based image compression scheme is its codebook search complexity. Recently, a new VQ scheme called the predictive residual vector quantizer (PRVQ) was proposed, with performance very close to that of the predictive vector quantizer (PVQ) at very low search complexity. This paper presents a new variable-rate VQ scheme called entropy-constrained PRVQ (EC-PRVQ), which is designed by imposing a constraint on the output entropy of the PRVQ. We emphasize the design of EC-PRVQ for bit rates ranging from 0.2 bpp to 1.0 bpp, corresponding to compression ratios of 8:1 to 40:1, the range likely to be used by most real-life applications permitting lossy compression. The proposed EC-PRVQ is found to give good rate-distortion performance and clearly outperforms the state-of-the-art image compression algorithm developed by the Joint Photographic Experts Group (JPEG). The robustness of EC-PRVQ is demonstrated by encoding several test images taken from outside the training data. EC-PRVQ not only gives better performance than JPEG at a manageable encoder complexity, but also retains the inherent simplicity of the VQ decoder.
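The entropy constraint amounts to encoding with a Lagrangian cost rather than pure distortion; the sketch below shows this generic selection rule (not the specific EC-PRVQ design algorithm), where the per-codevector bit costs and lambda are assumptions.

```python
# Entropy-constrained codevector selection: minimize distortion + lambda * rate
# instead of distortion alone; sweeping lambda traces out the rate-distortion curve.
import numpy as np

def ec_encode(residual, codebook, codeword_lengths, lam):
    """Return the index minimizing squared error plus lambda times bit cost."""
    distortion = ((codebook - residual) ** 2).sum(axis=1)
    cost = distortion + lam * codeword_lengths
    return int(np.argmin(cost))

# codeword_lengths would typically be -log2(p_i) for the empirical codevector
# probabilities p_i estimated during training.
```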
A major problem with a VQ-based image compression scheme is its codebook search complexity. Recently, a Predictive Residual Vector Quantizer (PRVQ) was proposed in Ref. 8. This scheme has a very low search complexity and its performance is very close to that of the Predictive Vector Quantizer (PVQ). This paper presents a new VQ scheme called the Variable-Rate PRVQ (VR-PRVQ), which is designed by imposing a constraint on the output entropy of the PRVQ. The proposed VR-PRVQ is found to give an excellent rate-distortion performance and clearly outperforms the state-of-the-art image compression algorithm developed by the Joint Photographic Experts Group (JPEG).
This paper presents a new FSVQ scheme called Finite-State Residual Vector Quantization (FSRVQ) in which each state uses a Residual Vector Quantizer (RVQ) to encode the input vector. Furthermore, a novel tree-structured competitive neural network is proposed to jointly design the next-state and the state-RVQ codebooks for the proposed FSRVQ. Joint optimization of the next-state function and the state-RVQ codebooks eliminates a large number of redundant states in the conventional FSVQ design; consequently, the memory requirements are substantially reduced in the proposed FSRVQ scheme. The proposed FSRVQ can be designed for high bit rates due to its very low memory requirements and low search complexity of the state-RVQs. Simulation results show that the proposed FSRVQ scheme outperforms the conventional FSVQ schemes both in terms of memory requirements and perceptual quality of the reconstructed image. The proposed FSRVQ scheme also outperforms JPEG (current standard for still image compression) at low bit rates.
A vector predictor is an integral part of the predictive vector quantization (PVQ) scheme. The performance of a predictor deteriorates as the vector dimension (block size) is increased, which makes it necessary to investigate new design techniques for vector predictors that perform better than a conventional vector predictor. This paper investigates several neural network configurations that can be employed to design a vector predictor. The first is the multi-layer perceptron; its drawback is a long convergence time, which is undesirable when on-line training of the neural network is required. Another neural network, the functional link neural network, has been shown to converge quickly, and its use as a vector predictor is also investigated. The third neural network investigated is a recurrent network; it is similar to the multi-layer perceptron except that part of the predicted output is fed back to the hidden layer(s) in an attempt to further improve the current prediction. Finally, the use of a radial-basis function (RBF) network is also investigated for designing the vector predictor. The performances of the above-mentioned neural network vector predictors are evaluated and compared with that of a conventional linear vector predictor.
The major problems with finite-state vector quantization (FSVQ) are the lack of accurate prediction of the current state, the state codebook design, and the amount of memory required to store all the state codebooks. This paper presents a new FSVQ scheme called finite-state residual vector quantization (FSRVQ), in which a neural network based state prediction is used. Furthermore, a novel tree-structured competitive neural network is used to jointly design the next-state and the state codebooks for the proposed FSRVQ. The proposed FSRVQ scheme differs from conventional FSVQ in that the state codebooks encode the residual vectors instead of the original vectors. The neural network predictor predicts the current block based on the four previously encoded blocks. The index of the codevector closest to the predicted vector (in the Euclidean distance sense) represents the current state. The residual vector obtained by subtracting the predicted vector from the original vector is then encoded using the current state codebook. The neural network predictor is trained using the back-propagation learning algorithm. The next-state codebook and the corresponding state codebooks are jointly designed using the tree-structured competitive neural network. This joint optimization eliminates a large number of unnecessary states, which in turn reduces the memory requirement by several orders of magnitude compared to ordinary FSVQ.
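A simplified sketch of the FSRVQ encoding step as described above; the predictor is passed in as an arbitrary callable standing in for the trained neural network, and only a single residual-quantization stage is shown in place of the cascaded state-RVQ.

```python
# FSRVQ encoding of one block: predict from the four causal neighbors, map the
# prediction to the nearest next-state codevector to obtain the state, then
# quantize the residual with that state's codebook.
import numpy as np

def fsrvq_encode_block(block, neighbors, predictor,
                       next_state_codebook, state_codebooks):
    """block: current vector; neighbors: four previously *encoded* vectors."""
    predicted = predictor(np.concatenate(neighbors))
    # Current state = index of the next-state codevector closest to the prediction.
    state = int(np.argmin(((next_state_codebook - predicted) ** 2).sum(axis=1)))
    residual = block - predicted
    # One residual stage shown; the state-RVQ actually cascades several small stages.
    codebook = state_codebooks[state]
    index = int(np.argmin(((codebook - residual) ** 2).sum(axis=1)))
    return state, index            # the decoder recomputes 'state' on its own
```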
This paper presents a new technique for designing a jointly optimized multi-stage vector quantizer, also known as the residual vector quantizer (RVQ). In the conventional stage-by-stage design procedure, each stage codebook is optimized only for that particular stage's distortion and does not account for the distortion from the subsequent stages. The overall performance can be improved, however, if each stage codebook is optimized by minimizing the distortion from the subsequent stage quantizers as well as the distortion from the previous stage quantizers. This can only be achieved when the stage codebooks are jointly designed for each other. The proposed codebook design procedure is based on a multi-layer competitive neural network in which each layer represents one stage of the RVQ, and the weights connecting these layers form the corresponding stage codebooks. The joint design of the RVQ's codebooks is formulated as a nonlinearly constrained optimization task based on a Lagrangian error function, and the proposed procedure seeks a locally optimal solution by iteratively solving the equations derived from this Lagrangian. Simulation results show an improvement in the performance of an RVQ designed with the proposed joint optimization technique compared to the stage-by-stage design, in which both the Generalized Lloyd Algorithm and the Kohonen Learning Algorithm were used to design each stage codebook independently, as well as compared to the conventional joint-optimization technique.
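In hedged, generic notation (not necessarily the paper's), the joint design objective and one standard joint-optimality condition for RVQ stage codebooks can be written as follows, where the training set, stage indices, and assignment sets are as defined in the comments; the paper's Lagrangian additionally encodes the stage-wise encoding constraints.

```latex
% Joint RVQ design objective over a training set \mathcal{T} with P stages,
% where i_p(x) is the index selected at stage p for training vector x:
E \;=\; \sum_{x \in \mathcal{T}} \Bigl\| \, x - \sum_{p=1}^{P} c^{(p)}_{i_p(x)} \Bigr\|^{2},
\qquad
% Joint-optimality (centroid) condition for stage-p codevector j, with the
% encoders and the other stages held fixed; S^{(p)}_{j} is the set of training
% vectors assigned to codevector j at stage p:
c^{(p)}_{j} \;=\; \frac{1}{\lvert S^{(p)}_{j} \rvert}
\sum_{x \in S^{(p)}_{j}} \Bigl( x - \sum_{q \neq p} c^{(q)}_{i_q(x)} \Bigr).
```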
In this paper, a finite-state vector quantizer called Dynamic Finite-State Vector Quantization (DFSVQ) is investigated with regard to its subcodebook construction. In DFSVQ, each input vector is encoded by a small codebook, called the subcodebook, which is created from a much larger codebook called the supercodebook. The subcodebook is constructed by selecting (through a reordering procedure) a set of appropriate codevectors from the supercodebook. The performance of the DFSVQ depends on this reordering procedure; therefore, several reordering procedures are introduced and their performances are evaluated in this paper. The reordering procedures investigated are the conditional histogram, address prediction, vector prediction, nearest neighbor design, and the frequency of usage of codevectors. The performance of the reordering procedures is evaluated by comparing their hit ratios (the proportion of blocks encoded by the subcodebook) and their computational complexity. Experimental results are presented for both still images and video. It is found that the conditional histogram performs best for still images and the nearest neighbor design performs best for video.
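One of the reordering procedures compared above, selection by frequency of usage, can be sketched as follows; the subcodebook size, hit tolerance, and fallback-to-supercodebook rule are illustrative assumptions rather than the paper's exact procedure.

```python
# DFSVQ-style subcodebook by frequency of usage: the most frequently used
# supercodebook vectors are promoted into the small subcodebook; blocks that
# miss the subcodebook fall back to a full supercodebook search.
import numpy as np

def build_subcodebook(usage_counts, supercodebook, subcodebook_size=32):
    """Pick the most frequently used supercodebook vectors (ties broken by index)."""
    order = np.argsort(-usage_counts, kind="stable")[:subcodebook_size]
    return order, supercodebook[order]

def encode_block(block, supercodebook, usage_counts, subcodebook_size=32, tol=50.0):
    sub_idx, sub = build_subcodebook(usage_counts, supercodebook, subcodebook_size)
    d = ((sub - block) ** 2).sum(axis=1)
    if d.min() <= tol:                       # "hit": the small subcodebook suffices
        best = int(sub_idx[int(np.argmin(d))])
    else:                                    # "miss": search the full supercodebook
        best = int(np.argmin(((supercodebook - block) ** 2).sum(axis=1)))
    usage_counts[best] += 1                  # update statistics for the next block
    return best
```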
A new predictive vector quantization (PVQ) technique capable of exploiting the nonlinear dependencies, in addition to the linear dependencies, that exist between adjacent blocks (vectors) of pixels is introduced. The two components of the PVQ scheme, the vector predictor and the vector quantizer, are implemented by two different classes of neural networks. A multilayer perceptron is used for the predictive component, and Kohonen self-organizing feature maps are used to design the codebook for the vector quantizer. The multilayer perceptron uses the nonlinearity of its processing units to perform a nonlinear vector prediction. The second component of the PVQ scheme vector quantizes the residual vector that is formed by subtracting the output of the perceptron from the original input vector. The joint optimization of the two components of the PVQ scheme is also carried out. Simulation results are presented for still images with high visual quality.
A review is presented of cluster tool concepts, their potential advantages for future IC manufacturing, approaches to cluster tools and cluster tool technologies. As wafer size increases and device feature size decreases, cluster tools should play a more central role in future IC manufacturing, although there are several problems to be overcome before cluster tools are available for a broad spectrum of IC technologies.