We show how unsupervised ANN modeling of image fusion in the human visual system (HVS) can embody the mathematics of ICA to achieve blind source de-mixing of remote sensing images. We have shown how the two-eye model extends to multiple spectral bands, where each pixel has a large footprint on the ground. MLRS gives the percentage composition of ground radiation sources within the footprint and thus overcomes the so-called 'boundary error' coined by Tucker in connection with the over-estimation of Amazon deforestation.
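The idea of recovering the percentage composition of sources within a large pixel footprint can be illustrated with a toy linear-unmixing sketch. This is a generic sub-pixel composition estimate, not the authors' MLRS algorithm, and the two endmember spectra are invented for illustration:

```python
def unmix_fraction(mixed, end1, end2):
    """Least-squares fraction a solving mixed ~ a*end1 + (1-a)*end2."""
    num = sum((m - e2) * (e1 - e2) for m, e1, e2 in zip(mixed, end1, end2))
    den = sum((e1 - e2) ** 2 for e1, e2 in zip(end1, end2))
    return num / den

forest = [0.05, 0.08, 0.45, 0.50]   # hypothetical 4-band reflectance spectra
soil   = [0.20, 0.25, 0.30, 0.35]
pixel = [0.7 * f + 0.3 * s for f, s in zip(forest, soil)]  # mixed footprint
print(round(unmix_fraction(pixel, forest, soil), 3))       # 0.7 (70% forest)
```

A pixel straddling a forest/clearing boundary is thus reported as a fraction rather than forced into one class, which is the source of the boundary error discussed above.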
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
Major spin-offs from NASA's multi- and hyperspectral imaging remote sensing technology developed for Earth resources monitoring are creative techniques that combine and integrate spectral with spatial methods. Such techniques are finding use in medicine, agriculture, manufacturing, forensics, and an ever-expanding list of other applications. Many such applications are easier to implement using a sensor design different from the pushbroom or whiskbroom air- or space-borne counterparts. This need is met by using a variety of electronically tunable filters that are mounted in front of a monochrome camera to produce a stack of images at a sequence of wavelengths, forming the familiar 'image cube'. The combined spectral/spatial analysis offered by such image cubes takes advantage of tools borrowed from spatial image processing, chemometrics and spectroscopy, and new custom exploitation tools developed specifically for these applications. Imaging spectroscopy is particularly useful for non-homogeneous samples or scenes; examples include spatial classification based on spectral signatures, use of spectral libraries for material identification, mixture composition analysis, plume detection, etc. This paper reviews available tunable filters, system design considerations, general analysis techniques for retrieving the intrinsic scene properties from the measurements, and applications and examples.
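One of the simplest exploitation tools of this kind, classifying a pixel of the image cube against a spectral library, can be sketched as follows. The spectral angle mapper used here is a standard technique, and the 4-band library values are hypothetical:

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra; insensitive to overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# hypothetical 4-band spectral library and one pixel spectrum from the cube
library = {
    "vegetation": [0.05, 0.08, 0.45, 0.50],
    "soil":       [0.20, 0.25, 0.30, 0.35],
    "water":      [0.10, 0.06, 0.03, 0.02],
}
pixel = [0.06, 0.09, 0.42, 0.48]
best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print(best)   # vegetation
```

Repeating this per pixel yields the spatial classification map mentioned in the abstract.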
Landsat satellite images from the mid-1980s and early 1990s were used to map tropical forest extent and deforestation in approximately 800,000 km2 of Amazonian Bolivia. Forest cover extent, including tropical deciduous forest, totaled 472,000 km2, while the area of natural non-forest formation totaled 298,000 km2. The area deforested totaled 15,000 km2 in the mid-1980s and 28,800 km2 by the early 1990s. The rate of tropical deforestation in the > 1,000 mm y-1 precipitation forest zone of Bolivia was 2,200 km2 y-1 from 1985-1986 to 1992-1994. We document a spatially concentrated 'deforestation zone' in Santa Cruz Department, where > 60 percent of the Bolivian deforestation is occurring at an accelerating rate in areas of tropical deciduous dry forest.
The radio plasma imager (RPI) is a low-power radar on board the IMAGE spacecraft to be launched early in the year 2000. The principal science objective of RPI is to characterize the plasma in the Earth's magnetosphere by radio frequency imaging. A key product of RPI is the plasmagram, a map of radio signal strength vs. echo delay-time vs. frequency, on which magnetospheric structures appear as curves of varying intensity. Noise and other emissions will also appear on RPI plasmagrams and, when strong enough, will obscure the radar echoes. RPI echoes from the Earth's magnetopause will be of particular importance since the magnetopause is the first region that the solar wind impacts before producing geomagnetic storms. To aid in the analysis of RPI plasmagrams and find all echoes from the Earth's magnetopause, a computer program has been developed to automatically detect and enhance the radar echoes. The technique presented is derived within a Bayesian framework and centers on the construction and analysis of a likelihood function connecting magnetospheric structures and RPI plasmagrams. Once this technique has been perfected on archival IMAGE data, it will be recorded and used on board the IMAGE spacecraft in a series of tests, thereby enabling organizations such as the NOAA SEC to perform real-time analysis of space weather.
Some recent developments in the rapidly advancing field of solar astronomy from space are described. 3D imaging of the Sun's corona, improved imaging of the coronal mass ejections that cause electromagnetic disturbances at the Earth, and observations of comets approaching and striking the Sun are discussed.
Future space-based imaging systems require increasingly large aperture sizes to keep pace with the demand for higher spatial resolution for both Earth and space science missions. The cost and weight become increasingly prohibitive for telescopes and instruments with apertures greater than 1 meter. A number of solutions are possible and are under investigation; these include deployable segmented aperture systems, sparse aperture systems, interferometric imaging systems, and computational deconvolution and super-resolution techniques. The commonality of these techniques lies in increased reliance on sophisticated computational and information-theoretic techniques. We give an overview of the complex optical and image processing techniques required for such systems to become operational.
Wavelet-based image registration has previously been proposed by the authors. In previous work, maxima obtained from orthogonal Daubechies filters as well as from Simoncelli steerable filters were utilized and compared to register images in a multi-resolution fashion. The first comparative results between both types of filters showed that, despite the lack of translation-invariance of the orthogonal filters, both types of filters gave very encouraging results for non-noisy data and small transformations. But the accuracy obtained with orthogonal filters seemed to degrade very quickly for large rotations and large amounts of noise, while results obtained with steerable filters appeared much more stable under these conditions. In this work, we perform a systematic study of the robustness of such methods as a function of translation, rotation and noise parameters, for both types of filters and using data from the Landsat Thematic Mapper.
The Earth's temperature has risen approximately 0.5 degrees C in the last 150 years. Because the atmospheric concentration of carbon dioxide has increased nearly 30 percent since the industrial revolution, a common conjecture, supported by various climate models, is that anthropogenic greenhouse gases have contributed to global warming. Another probable factor in the warming is the natural variation of solar irradiance. Although the variation is as small as 0.1 percent, it is hypothesized to contribute to part of the temperature rise. Warmer or cooler ocean temperature in one part of the globe may manifest as abnormally wet or dry weather patterns some months or years later in another part of the globe. Furthermore, the lower atmosphere can be affected through its coupling with the stratosphere, after the stratospheric ozone absorbs the UV portion of the solar irradiance. In this paper, we use wavelet transforms based on the Morlet wavelet to analyze the time-frequency properties of several datasets, including the radiation budget measurements, the long-term total solar irradiance time series, and long-term temperature records at two locations, one in each of the Northern and Southern Hemispheres. The main solar cycle, approximately 11 years, is identified in the long-term total solar irradiance time series. The wavelet transforms of the temperature datasets show the annual cycle but not the solar cycle. Some correlation is seen between the length of the solar cycle extracted from the wavelet transform and the Northern Hemisphere temperature time series. The absence of the 11-year cycle in a time series does not necessarily imply that the geophysical parameter is not affected by the solar cycle; rather, it simply reflects the complex nature of the Earth's response to climate forcings.
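How a Morlet-based analysis picks out a dominant period in a time series can be sketched minimally: a direct inner product with the wavelet at a few candidate scales rather than a full continuous wavelet transform, and a synthetic cosine standing in for the irradiance data:

```python
import cmath, math

def morlet_response(signal, dt, freq, w0=6.0):
    """|<signal, complex Morlet wavelet>| with the wavelet tuned to `freq`
    and centred on the record (unnormalized; enough to compare scales)."""
    scale = w0 / (2 * math.pi * freq)
    t0 = 0.5 * len(signal) * dt
    acc = 0j
    for k, x in enumerate(signal):
        t = (k * dt - t0) / scale
        acc += x * cmath.exp(-1j * w0 * t) * math.exp(-0.5 * t * t)
    return abs(acc)

dt = 0.25                                # years per sample
sig = [math.cos(2 * math.pi * k * dt / 11.0) for k in range(400)]  # 11-yr cycle
resp = {p: morlet_response(sig, dt, 1.0 / p) for p in (5, 11, 22)}
print(max(resp, key=resp.get))           # 11: the dominant period in years
```

In the paper's setting the responses are computed over a continuum of scales and over time, so the period of the cycle can also be tracked as it drifts.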
This paper presents current results for our method for the estimation of oceanic surface velocity fields using wavelet decomposition of Sea Surface Temperature (SST) images of the same region taken within a known time interval. Wavelet decompositions are performed on both images to obtain approximations that are compared for local displacements at various resolution levels. The velocity vector field generated from coarser images is refined by the addition of velocity vector components at more detailed resolution levels. Results of the technique applied to artificial and satellite images are presented.
The current Global Positioning System signal tracking techniques are vulnerable to bit transitions in the baseband navigation signal. We present new signal tracking techniques as lock discriminators in carrier frequency and code delay domains derived within the framework of wavelet theory that are robust to these bit transitions. We then present the performance analysis of the proposed lock discriminators comparatively with currently used techniques on real data.
Wavelet image processing of Landsat images has been successfully used for surveillance of deforestation and crop detection. However, boundaries between classes and differentiation between similar crops remain difficult problems. High spatial resolution has been successfully achieved by a number of diverse approaches. High spectral resolution, especially between homogeneous areas, should similarly be achievable. Spatial resolution at boundaries and spectral resolution between homogeneous regions remain ongoing challenges and will be specifically addressed in this paper.
Reticle systems are considered the classical approach for estimating the position of a target in a given field of view and are widely used in IR seekers. Due to their simplicity and low cost, since only a few detectors are used, reticle seekers are still in use and are the subject of further research. However, the major disadvantage of reticle trackers has proven to be their sensitivity to IR countermeasures such as flares and jammers. When redesigned adequately, reticle trackers produce output signals that are linear convolutive combinations of the reticle transmission functions, which are considered the source signals in the context of Independent Component Analysis (ICA) theory. Each function corresponds to a single optical source position. This enables an ICA neural network to be applied to the optical tracker output signals, yielding at its outputs the recovered reticle transmission functions. The position of each optical source is obtained by applying an appropriate demodulation method to the recovered source signals. The three conditions necessary for ICA theory to work are shown to be fulfilled in principle for any kind of reticle geometry.
The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are equal numbers of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
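The geometric intuition behind sparsity-based separation can be shown in a toy setting: when the sources are sparse, most active samples of the mixture vector lie along a single column of the mixing matrix, so the mixing directions can be read off the mixture scatter directly. Disjoint source supports are assumed here purely to keep the sketch short; this illustrates the principle, not the paper's objective functions:

```python
import math

# two sparse sources; disjoint active samples keep the sketch short
n = 200
s1 = [math.sin(k) if k % 10 == 0 else 0.0 for k in range(n)]
s2 = [math.cos(k) if k % 10 == 5 else 0.0 for k in range(n)]
A = [[1.0, 0.5], [0.3, 1.0]]      # mixing matrix, unknown to the separator
x = [(A[0][0]*a + A[0][1]*b, A[1][0]*a + A[1][1]*b) for a, b in zip(s1, s2)]

# every active mixture sample lies on one column of A, so the mixing
# directions appear as distinct angles in the mixture scatter
dirs = sorted({round(math.atan2(v, u) % math.pi, 6)
               for u, v in x if abs(u) + abs(v) > 1e-9})
expected = sorted(round(math.atan2(A[1][j], A[0][j]) % math.pi, 6) for j in (0, 1))
print(dirs == expected)   # True: both column directions recovered
```

With overlapping supports the directions become clusters rather than exact lines, which is why a probabilistic objective function is needed in practice.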
Information Technology Application in Multispectral and Hyperspectral Remote Sensing
Recently, Candes and Donoho introduced the curvelet transform, a new multiscale representation suited for objects which are smooth away from discontinuities across curves. Their proposal was intended for functions f defined on the continuum plane R^2. In this paper, we consider the problem of realizing this transform for digital data. We describe a strategy for computing a digital curvelet transform, we describe a software environment, Curvelet 256, implementing this strategy in the case of 256 × 256 images, and we describe some experiments we have conducted using it. Examples are available for viewing by web browser.
The image formation process associated with coherent imaging sensors is particularly sensitive to, and is often corrupted by, non-stationary processes. In the case of SAR, non-stationary processes result from motion within the scene, variable radar cross section, multi-path, topographic variations, sensor anomalies, and deficiencies in the image formation processing chain. Conversely, stationary processes result in image signatures that appear literal to the eye, e.g., urban infrastructure, vegetation, and natural terrain. In analyzing SAR signal history, two objectives unfold. One is to obtain a well-focused image devoid of distortions and non-literal artifacts. The second is the detection and value-added exploitation of the non-stationary signatures. Note that the roles of signal and clutter are reversed for these two objectives. The notion that joint time-frequency (JTF) techniques may prove useful in accomplishing these objectives has spurred limited investigations in the field of coherent radar imaging systems. This paper addresses SAR image formation processing, the complex response function for a point source, and SAR JTF image formation implementations. Each of these topics is described within the context of applying JTF processing to all aspects of SAR image formation and analysis.
A new method for analyzing nonlinear and nonstationary data has been developed. The key part of the method is the Empirical Mode Decomposition, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and therefore highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the IMFs yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the result is an energy-frequency-time distribution, designated the Hilbert Spectrum. Comparisons with wavelet and windowed Fourier analysis show the new method offers much better temporal and frequency resolution.
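The defining property of an IMF, that the numbers of zero-crossings and extrema agree to within one, can be checked numerically. The sketch below shows that a pure tone satisfies it while a two-tone mixture does not, which is exactly why the sifting procedure is needed to pull the modes apart:

```python
import math

def counts(x):
    """Numbers of zero-crossings and local extrema of a sampled signal."""
    zc = sum(1 for a, b in zip(x, x[1:]) if a == 0 or a * b < 0)
    ex = sum(1 for a, b, c in zip(x, x[1:], x[2:]) if (b - a) * (c - b) < 0)
    return zc, ex

t = [0.01 * k for k in range(1000)]
mono = [math.sin(2 * math.pi * s) for s in t]                   # single tone
mixed = [m + 0.4 * math.sin(10 * math.pi * s) for m, s in zip(mono, t)]
zc1, ex1 = counts(mono)
zc2, ex2 = counts(mixed)
print(abs(zc1 - ex1) <= 1, abs(zc2 - ex2) <= 1)   # True False
```

The 5 Hz ripple riding on the 1 Hz tone creates many extrema without matching zero-crossings, so the mixture fails the IMF test and must be sifted.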
Wavelet transforms and time-frequency distributions are powerful techniques for analysis of nonstationary biomedical signals. This paper investigates three applications of these techniques to multichannel electroencephalography (EEG) for the diagnosis of epilepsy. Wavelet transforms are utilized to detect the onset of seizures at different sites of subdural electrodes, and to extract spike patterns from EEG data recorded from the scalp. Time-frequency distributions are applied to characterize the early activity of seizures.
In this paper, we introduce the concept of the micro-Doppler phenomenon. Micro-Doppler, induced by mechanically vibrating or rotating structures on a target, is potentially useful for detection, classification and recognition of targets. While the Doppler frequency due to the target body is constant, the micro-Doppler due to vibrating or rotating structures of a target is a function of the dwell time. This time-varying Doppler signature in the time-frequency domain may provide further information for target detection, classification and recognition.
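The contrast between a constant body Doppler and a time-varying micro-Doppler can be reproduced with a small simulation. Instantaneous frequency is estimated here from phase increments of the complex returns, and the carrier and vibration parameters are arbitrary choices for the sketch:

```python
import cmath, math

fs = 1000.0                 # sample rate (Hz), arbitrary for this sketch
fb = 50.0                   # constant body Doppler (Hz)
fv, fm = 8.0, 2.0           # peak Doppler deviation and vibration rate (Hz)
n = 1000

def inst_freq(sig, fs):
    """Instantaneous frequency (Hz) from phase increments of a complex signal."""
    return [cmath.phase(b * a.conjugate()) * fs / (2 * math.pi)
            for a, b in zip(sig, sig[1:])]

body = [cmath.exp(2j * math.pi * fb * k / fs) for k in range(n)]
# a vibrating scatterer adds a sinusoidal phase modulation to the return
vib = [cmath.exp(2j * math.pi * fb * k / fs
                 + 1j * (fv / fm) * math.sin(2 * math.pi * fm * k / fs))
       for k in range(n)]

f_body, f_vib = inst_freq(body, fs), inst_freq(vib, fs)
spread_body = max(f_body) - min(f_body)   # ~0 Hz: constant Doppler
spread_vib = max(f_vib) - min(f_vib)      # ~2*fv = 16 Hz: micro-Doppler sweep
print(round(spread_body, 3), round(spread_vib, 1))
```

A time-frequency transform of the same returns would show the body as a flat line at fb and the vibrating scatterer as a sinusoidal trace around it, which is the signature exploited for classification.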
Finite Impulse Response (FIR) filters have been the major players in the wavelets and multiresolution analysis field, mainly due to their ease of design and understandable nature, as well as their well-behaved characteristics such as stability and linear phase response. However, it has been demonstrated that in a number of cases IIR filters are more appropriate. This paper describes a new solution of the wavelet conditions to derive stable IIR filters that are matched in a least-squares sense to specified frequency responses. The derived solution treats the numerator and denominator of the rational transfer function independently. This solution can be applied to any IIR wavelet filter bank satisfying the orthogonality and perfect reconstruction conditions. Based on the proposed solution, a new IIR wavelet filter bank with the low-pass filter matched to a desired frequency response is developed.
The time-frequency tiling, bit allocation and quantizer of most perceptual coding algorithms are either fixed or controlled by a perceptual model. The large variety of existing audio signals, each exhibiting different coding requirements due to their different temporal and spectral fine structure, suggests the use of a signal-adaptive algorithm. The framework described in this paper makes use of a signal-adaptive wavelet filterbank which allows any node of the wavelet-packet tree to be switched individually. Therefore each subband can have an individual time segmentation, and the overall time-frequency tiling can be adapted to the signal using optimization techniques. A rate-distortion optimality criterion can be defined which minimizes the distortion for a given rate in every subband, based on a perceptual model. Due to the additivity of the rate and distortion measures over disjoint covers of the input signal, an overall cost function including the cost of filterbank switching can be defined. By the use of dynamic programming techniques, the wavelet-packet tree can be pruned based on a top-down or bottom-up 'split-merge' decision in every node of the wavelet tree. Additionally, we can profit from temporal masking due to the fact that each subband can have an individual segmentation in time without introducing time-domain artifacts such as pre-echo distortion.
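The split-merge pruning step can be sketched as a small dynamic program over a toy tree: a node is split only when the children's optimal costs plus the switching penalty undercut keeping the node as a leaf. The costs below are invented; in the actual coder they would come from the perceptual rate-distortion model:

```python
def prune(node, switch_cost):
    """Bottom-up split/merge decision: split a node only if the children's
    optimal costs plus the switching penalty beat keeping it as a leaf."""
    leaf_cost, children = node
    if not children:
        return leaf_cost, node
    kids = [prune(c, switch_cost) for c in children]
    split_cost = sum(c for c, _ in kids) + switch_cost
    if split_cost < leaf_cost:
        return split_cost, (leaf_cost, [t for _, t in kids])
    return leaf_cost, (leaf_cost, [])        # merge: discard the subtree

# toy wavelet-packet tree as (cost-if-leaf, children); the costs stand in
# for perceptual distortion at the allotted rate
tree = (10.0, [(6.0, []), (7.0, [(2.0, []), (3.0, [])])])
cost, pruned = prune(tree, switch_cost=0.5)
print(cost, pruned)   # splitting the root would cost 6.0 + 5.5 + 0.5 = 12.0 > 10.0
```

Because the cost measure is additive over disjoint covers, this local decision at each node is globally optimal, which is what makes the dynamic-programming prune valid.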
A nonlinear speech signal decomposition based on the Volterra-Wiener functional series is described. A nonlinear filter bank structure is proposed for phoneme recognition.
Frequency estimation/determination has applications in various areas, where the sampling rate is usually above the Nyquist rate. In some applications, it is preferred that the range of the frequencies be as large as possible for a given sampling rate, and in some applications the sampling rate is below the Nyquist rate. In both cases, frequency estimation from undersampled waveforms is needed. In this paper, we study the range problem and present an efficient algorithm to determine multiple frequencies from multiple undersampled waveforms with sampling rates below the Nyquist rate.
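For complex (I/Q) sampling at rate q, a tone at f Hz aliases to f mod q, so two coprime sub-Nyquist rates determine an integer frequency up to their product via the Chinese Remainder Theorem, a classical ingredient of such algorithms (this is a sketch of the single-frequency case, not the paper's multi-frequency method):

```python
def crt(r1, q1, r2, q2):
    """Unique f in [0, q1*q2) with f = r1 (mod q1) and f = r2 (mod q2)."""
    inv = pow(q1, -1, q2)            # modular inverse; q1, q2 must be coprime
    return (r1 + q1 * ((r2 - r1) * inv % q2)) % (q1 * q2)

f_true = 777                         # Hz; well above both sampling rates
q1, q2 = 35, 36                      # coprime sub-Nyquist sampling rates (Hz)
r1, r2 = f_true % q1, f_true % q2    # aliased frequencies actually observed
print(crt(r1, q1, r2, q2))           # 777: recovered despite undersampling
```

The multi-frequency case is harder because the residues from the two channels must first be paired correctly, which is part of what the paper's algorithm addresses.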
The Fourier transform (FT) is often used to analyze transient and non-stationary signals even when such signals are not periodic in nature. We demonstrate how an adaptive wavelet transform (WT) can bring out signal details that the traditional FT cannot, as first shown by Szu et al. in 1992. The magnitude plot of the complex Morlet wavelet shows the evolution of the signal's energy in both time and frequency, while the phase plot pinpoints signal discontinuities at various scales. This information can be used to build a compact model and approximate representation of signals characterized by pulses. We are able to infer the physics of devices that generate EM pulses.
Invisible digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. In recent years it has been recognized that embedding information in a transform domain leads to more robust watermarks. In particular, several approaches based on the wavelet transform have been proposed to address the problem of image watermarking. The advantage of the wavelet transform relative to the DFT or DCT is that it allows for localized watermarking of the image. A major difficulty, however, in watermarking in any transform domain lies in the fact that constraints on the allowable distortion at any pixel are specified in the spatial domain. In order to insert an invisible watermark, the current trend has been to model the Human Visual System and specify a masking function which yields the allowable distortion for any pixel. This complex function combines contrast, luminance, color, texture and edges. The watermark is then inserted in the transform domain and the inverse transform computed. The watermark is finally adjusted to satisfy the constraints on the pixel distortions. However, this method is highly suboptimal, since it leads to irreversible losses at the embedding stage: the watermark is adjusted in the spatial domain with no regard for the consequences in the transform domain.
In this paper a wavelet-domain data hiding technique for color images is presented. The characteristics of this transform domain are well suited for masking considerations, due to its good localization in both space and frequency. The differing sensitivity to each color can be exploited in order to increase the watermarking power, and therefore its reliability. Thanks to the multi-resolution nature of the DWT, blind watermark detection can be performed iteratively. The technique has been successfully evaluated against attacks such as JPEG and JPEG-2000 compression, filtering, cropping, dithering and D/A-A/D conversions.
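A stripped-down version of additive embedding in a wavelet detail band, with correlation-based blind detection, can be sketched in one dimension. A smooth ramp stands in for the host image row so the detector's behavior is easy to verify; the actual scheme operates on a full 2-D DWT of the color channels with perceptual weighting:

```python
import random

def haar(x):
    """One-level Haar transform: averages and details of adjacent pairs."""
    a = [(u + v) / 2 for u, v in zip(x[::2], x[1::2])]
    d = [(u - v) / 2 for u, v in zip(x[::2], x[1::2])]
    return a, d

def ihaar(a, d):
    out = []
    for m, w in zip(a, d):
        out += [m + w, m - w]
    return out

random.seed(1)
host = list(range(64))                            # smooth toy host signal
pn = [random.choice((-1, 1)) for _ in range(32)]  # watermark PN sequence
alpha = 2.0                                       # embedding strength

a, d = haar(host)
marked = ihaar(a, [w + alpha * p for w, p in zip(d, pn)])

# blind detection: correlate the detail band of the received signal with
# the PN sequence; no original image is needed
_, d_rx = haar(marked)
corr = sum(w * p for w, p in zip(d_rx, pn)) / len(pn)
print(corr > alpha / 2)   # True: the watermark is detected blindly
```

In the multi-resolution setting this detection is repeated level by level, which is what allows the iterative blind detection mentioned above.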
Laser-based ultrasonic (LBU) measurement shows great promise for on-line monitoring of weld quality in tailor-welded blanks. Tailor-welded blanks are steel blanks made from plates of different thickness and/or properties butt-welded together; they are used in automobile manufacturing to provide body, frame, and closure panels. LBU uses a pulsed laser to generate the ultrasound and a continuous-wave laser interferometer to detect the ultrasound at the point of interrogation to perform ultrasonic inspection. LBU enables in-process measurement since there is no sensor contact or near-contact with the workpiece. The authors are using laser-generated plate waves that propagate from one plate into the weld nugget as a means of detecting defects.
In the past decade, wavelet filters have been widely applied to signal processing. In effect, wavelet filters are perfect reconstruction filter banks (PRFBs). In most prior research, however, the filter banks and wavelets operate on real-valued or complex-valued signals. In this paper, PRFBs operating over integer quotient rings (IQRs) are introduced; we denote an IQR by Z/(q). Algorithms for constructing such filter banks are proposed, and the PRFB design can be carried out either in the time or the frequency domain. We demonstrate that some classical, well-known filter tap coefficients can even be transformed into values over Z/(q) in a simple and straightforward way. We emphasize that, to achieve perfect reconstruction (PR), the filters need not operate on elements of a field: operating on elements of an IQR achieves PR with proper choices of the ring and the filter tap coefficients. The designed filter banks can be orthogonal or biorthogonal. Based on a PRFB over an IQR, to which we refer as an IQR-PRFB, a perfect reconstruction transmultiplexer (PRTM), to which we refer as an IQR-PRTM, can be derived. Through the IQR-PRTM, multiplexing and multiple access in a multi-user digital communication system can be realized. The IQR-PRTM effectively decomposes the communication signal space into several orthogonal subspaces, with each multiplexed user sending his message in one of them. If some of the orthogonal subspaces are reserved for parity checks, then error correction can be performed at the receiving end. In the proposed schemes, the data to be transmitted must be represented with elements of Z/(q), which is easily done. A modulation and demodulation/detection scheme used in conjunction with the IQR-PRTM is also proposed.
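The perfect reconstruction property over Z/(q) can be illustrated with a minimal lifting scheme: each lifting step is inverted exactly by subtracting the same mod-q prediction, so PR holds over any ring. The particular predict/update steps and q = 256 below are illustrative assumptions, not the paper's construction:

```python
Q = 256  # work over the integer quotient ring Z/(256)

def analyze(x, q=Q):
    # Split into even/odd phases, then lift, with all arithmetic mod q.
    even, odd = x[0::2], x[1::2]
    d = [(o - e) % q for e, o in zip(even, odd)]           # predict step
    s = [(e + di // 2) % q for e, di in zip(even, d)]      # update step
    return s, d

def synthesize(s, d, q=Q):
    # Undo the lifting steps in reverse order; since d is transmitted,
    # the same prediction can be subtracted back exactly, giving PR.
    even = [(si - di // 2) % q for si, di in zip(s, d)]
    odd = [(di + e) % q for di, e in zip(d, even)]
    x = []
    for e, o in zip(even, odd):
        x += [e, o]
    return x
```

All coefficients stay in {0, ..., q-1}, yet reconstruction is exact, which is the point of the IQR construction.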
There is a great amount of similarity within a set of medical images. Set Redundancy Compression (SRC) has shown that compressing a set of similar images jointly provides better compression than compressing the individual images of the set. SRC is based on predicting the other images in the set from a smaller subset (which can be as small as one image). This paper presents a new wavelet-based method for predicting the intermediate images in a set of similar medical images. The technique uses the correlation between coefficients in the wavelet transforms of the image set to produce a better image prediction than direct image prediction.
In this paper, the possibility of using an orthonormal basis to train a collection of artificial neural networks in a face recognition task is discussed. This basis is selected from a dictionary of orthonormal bases consisting of wavelet packets. Here, a basis is obtained by maximizing a certain discriminant measure among the classes of training images. Once such a basis is selected, its basis vectors are ordered according to their power of discrimination, and the first N most discriminant basis vectors are retained for image decomposition purposes. By projecting all training images onto each of these N basis vectors in turn, N versions of the training set at different spatial/scale resolutions are created. Next, N multilayer feed-forward neural networks are trained independently on the N resolution-specific training sets. After the networks have been trained, they are combined to form an ensemble. Our proposed method takes advantage of the fact that the dimensionality of the pattern recognition problem at hand is reduced while the important information is retained, and, at the same time, some correlations between neighboring inputs are preserved. Furthermore, the performance of the proposed network improves over that of a single neural network as a result of the ensemble and the nonlinear property of neural networks. Finally, the method is applied to a face recognition task using the Yale Face Database. The experimental results show that our method outperforms both a conventional back-propagation network and a wavelet packet parallel consensual neural network in terms of computation and generalization ability.
The translation of knowledge contained in databanks into linguistically interpretable fuzzy rules has proven difficult in real applications. The lack of interpretability of fuzzy systems generated with neurofuzzy approaches has been found to be a major problem. A solution to this problem is furnished by multiresolution techniques. A dictionary of functions forming a multiresolution analysis is used as the set of candidate membership functions. The membership functions are chosen among the family of scaling functions that are symmetric, everywhere positive, and have a single maximum; this family includes, among others, splines and some radial functions. The main advantage of using a dictionary of membership functions is that each term, such as 'small' or 'large', is well defined beforehand and is not modified during learning. After reviewing the connection between a Takagi-Sugeno fuzzy model and spline modeling, we show how a multiresolution fuzzy system can be developed from data by using wavelet techniques. For regularly spaced data points, a matching pursuit algorithm may be used to determine appropriate fuzzy rules and membership functions. For on-line problems, biorthogonal spline wavenets are used to determine the fuzzy rules and the resolution of the membership functions. An alternative technique, based on a wavelet estimator, is also presented. Multiresolution fuzzy techniques, also known as 'fuzzy-wavelet' techniques, have found applications in fire detection. For instance, wavelet analysis has been combined with fuzzy logic in flame detectors for on-line signal processing. The resulting algorithms have greatly contributed to translating a new understanding of flame dynamics into algorithms capable of discriminating between a real fire and possible interferences, such as those caused by the sun's radiation.
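A minimal sketch of the central idea that the terms are fixed beforehand: triangular (order-2 B-spline) membership functions on a uniform knot grid form a partition of unity, and only the rule consequents of a zero-order Takagi-Sugeno model would be fit during learning. The centers, the width, and the zero-order model are illustrative assumptions:

```python
def triangle(x, center, width):
    # Order-2 B-spline (triangular) membership function; these are
    # symmetric, everywhere non-negative, and have a single maximum.
    return max(0.0, 1.0 - abs(x - center) / width)

def takagi_sugeno(x, centers, consequents, width):
    # Zero-order Takagi-Sugeno model: the membership terms ('small',
    # 'medium', ...) are defined once and never modified during
    # learning; only the consequents are adapted.
    w = [triangle(x, c, width) for c in centers]
    return sum(wi * yi for wi, yi in zip(w, consequents)) / sum(w)
```

With centers on a uniform grid and width equal to the grid spacing, the memberships sum to one everywhere on the domain, so the model interpolates the consequent values at the knots and every rule keeps its linguistic meaning.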
While discrete wavelet transforms offer a powerful combination of computational efficiency and compact representation for a broad range of signals, they are often designed without any prior knowledge of the signals under analysis. In this paper, we provide a methodology for constructing customized wavelets and multirate filter banks through the application of a generalized cost function on available training data. In particular, we design wavelets that provide maximal discrimination between several signal classes, with the cost function directly tied to classification performance. Since the relationship between the filter coefficients and correct classification may be exceedingly complicated, the optimization is performed using a genetic algorithm. The multirate filter bank is implemented in a lattice-type structure, known as lifting, which facilitates the incorporation of constraints on the search space. In addition to demonstrating the successful design of signal-adaptive wavelets, this paper validates the use of genetic algorithms as a powerful class of tools for complex system optimization. The method is applied to acoustic scattering data, with classification performance evaluated against both non-adaptive biorthogonal wavelets and signal-adaptive wavelets based on linear predictive constraints.
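The flavor of the search can be sketched with a toy genetic algorithm that evolves a single lifting ("predict") coefficient. Here the cost is the energy of the prediction residual on a training signal, a simple stand-in for the paper's classification-based cost; the population size, mutation scale, and seed are illustrative assumptions:

```python
import random

def detail_energy(a, x):
    # Cost of a candidate lifting "predict" coefficient: energy of the
    # residual d[n] = odd[n] - a*(even[n] + even[n+1]).
    even, odd = x[0::2], x[1::2]
    return sum((odd[n] - a * (even[n] + even[n + 1])) ** 2
               for n in range(len(odd) - 1))

def evolve(x, pop=20, gens=40, seed=0):
    # Toy genetic algorithm: truncation selection plus Gaussian mutation
    # over a single lifting coefficient.  (The paper ties the cost to
    # classification performance; residual energy is a stand-in here.)
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: detail_energy(a, x))
        parents = population[: pop // 2]                  # elitist selection
        children = [p + rng.gauss(0, 0.05) for p in parents]
        population = parents + children
    return min(population, key=lambda a: detail_energy(a, x))
```

On a linear ramp the residual vanishes at a = 0.5, so the evolved coefficient should converge there; working inside the lifting structure keeps every candidate a valid perfect reconstruction filter bank, which is exactly why lifting suits this kind of constrained search.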
As is well known, Zernike polynomials find broad application in the solution of many problems of computational optics, and they are particularly attractive for their unique properties over a circular aperture. Zernike circle polynomials are used for describing both classical aberrations in optical systems and aberrations related to atmospheric turbulence. Among the several numerical techniques for solving for the Zernike coefficients, the least-squares matrix inversion method and the Gram-Schmidt orthogonalization method can become ill-conditioned due to improper data sampling. In this article, we present a 2D discrete wavelet transform (DWT) technique to find the 3rd-order spherical and coma aberration coefficients. The method offers great improvement over the least-squares matrix inversion method and the Gram-Schmidt orthogonalization method in both the accuracy and the speed of fitting the aberration coefficients. Furthermore, the coefficients obtained through the 2D DWT are independent of the order of the polynomial expansion, so an accurate value can be found from the fitted data.
The propagation of optical pulses in nonlinear optical fibers is described by the nonlinear Schrodinger (NLS) equation. This equation can generally be solved exactly using the inverse scattering method or, for more detailed analysis, through the use of numerical techniques. Perhaps the best-known numerical technique for solving the NLS equation is the split-step Fourier method, which effects a solution by assuming that the dispersion and nonlinear effects act independently during pulse propagation along the fiber. In this paper we describe an alternative numerical solution of the NLS equation using an adaptive wavelet transform technique, carried out entirely in the wavelet domain. This technique differs from previous work on wavelet solutions of the NLS equation, in which a 'split-step wavelet' method performed the linear analysis in the wavelet domain while the nonlinear portion was done in the space domain. Our method takes full advantage of the set of wavelet coefficients, thus allowing the flexibility to investigate pulse propagation entirely in either the wavelet or the space domain. Additionally, the method is fully adaptive in that it is capable of accurately tracking steep gradients which may occur during the numerical simulation.
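For reference, the baseline split-step Fourier method mentioned above can be sketched as follows for the normalized focusing NLS equation i u_z + (1/2) u_tt + |u|^2 u = 0; the naive O(N^2) DFT, the grid, and the step sizes are illustrative (an FFT would be used in practice):

```python
import cmath
import math

def dft(u, sign):
    # Unitary DFT in naive O(N^2) form; an FFT would be used in practice.
    n = len(u)
    return [sum(u[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) / math.sqrt(n) for k in range(n)]

def split_step(u, dz, steps, dt):
    # Split-step Fourier solver: dispersion and nonlinearity are assumed
    # to act independently over each small step dz.  The nonlinearity is
    # a pointwise phase rotation in time; the dispersion is a phase
    # multiplication in the Fourier domain.
    n = len(u)
    k = [2 * math.pi * (i if i < n // 2 else i - n) / (n * dt)
         for i in range(n)]
    for _ in range(steps):
        u = [v * cmath.exp(1j * abs(v) ** 2 * dz) for v in u]   # nonlinear step
        spec = dft(u, -1)
        spec = [si * cmath.exp(-0.5j * ki * ki * dz)            # dispersion step
                for si, ki in zip(spec, k)]
        u = dft(spec, +1)
    return u
```

Since both sub-steps multiply by unit-modulus phase factors, the discrete pulse energy is conserved exactly, which makes a convenient sanity check for any implementation.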
This research deals with finite-length, perfect reconstruction, two-channel orthonormal wavelet filters. For these filters, the coefficients of the filter of maximum order of regularity are unique and known. Previous research has also developed a parameterization to generate the coefficients of all filters of order of regularity one. The current research presents a parameterization for generating the coefficients of these filters with order of regularity between one and the maximum possible.
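As a concrete instance of such a parameterization (the classic one-parameter family of length-4 orthonormal CQF filters, not necessarily the parameterization developed in this research), the filter below satisfies the orthonormality conditions for every angle, and theta = pi/3 recovers the maximally regular Daubechies D4 filter:

```python
import math

def length4_filter(theta):
    # One-parameter family of length-4 orthonormal (CQF) lowpass filters.
    # For every theta: sum(h) = sqrt(2), sum(h^2) = 1, and the double-
    # shift orthogonality h0*h2 + h1*h3 = 0 holds; theta = pi/3 gives
    # the maximally regular Daubechies D4 filter.
    c, s = math.cos(theta), math.sin(theta)
    r = 1.0 / (2.0 * math.sqrt(2.0))
    return [r * (1 - c + s), r * (1 + c + s),
            r * (1 + c - s), r * (1 - c - s)]
```

Sweeping theta traverses all length-4 orthonormal filters with at least one vanishing moment, which is exactly the kind of family the regularity-constrained parameterization generalizes.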
This paper presents wavelet-based methods for characterizing clutter in IR and SAR images. With our methods, the operating parameters of automatic target recognition (ATR) systems can automatically adapt to local clutter conditions. Structured clutter, which can confuse ATR systems, possesses correlation across scale in the wavelet domain. We model this correlation using wavelet-domain hidden Markov trees, for which efficient parameter estimation algorithms exist. Based on these models, we develop analytical methods for estimating the false alarm rates of mean-squared-error classifiers. These methods are equally useful for determining threshold levels for constant false alarm rate detectors.
In this paper we present an approach to constructing second-generation interpolating wavelets to compress the class of integral operators of the form ∫K(x,y)dy over an unstructured grid in 3D. This approach results in a scheme that generally requires O(N) storage at O(N) cost. Moreover, analytical estimates of the stiffness matrix coefficients are derived. Numerical results are presented for a second-kind formulation of the Laplace equation.
Wavelets have a tremendous ability to extract signals from noisy environments. However, the use of wavelets can be computationally expensive, and the number of computations increases with the size of the wavelet family. Here a wavelet family is combined into a single complex-valued filter that can then be used to extract information from an input signal. The advantage is that the expense of computation is that of a single correlation rather than the several correlations required by the wavelet family. This new filter is constructed using a phase-encoded fractional power filter and offers the user the option of manipulating the trade-off between generalization and discrimination that is inherent in first-order filtering. The result is a computationally cheaper method of using wavelets to detect signals embedded in noise.
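The computational saving can be illustrated by packing two real filters into one complex-valued filter, so a single correlation pass returns both responses at once. The simple real/imaginary packing below only illustrates the bookkeeping; the paper's actual construction is a phase-encoded fractional power filter:

```python
def correlate(x, f):
    # Valid-mode correlation of signal x with filter f.
    m = len(f)
    return [sum(x[i + j] * f[j] for j in range(m))
            for i in range(len(x) - m + 1)]

def complex_correlate(x, f_re, f_im):
    # Pack two real wavelet filters into one complex-valued filter: a
    # single correlation then carries both responses (as the real and
    # imaginary parts), halving the number of passes over the data.
    f = [complex(a, b) for a, b in zip(f_re, f_im)]
    return correlate(x, f)
```

For a real input, the real part of the complex output equals the correlation with the first filter and the imaginary part equals the correlation with the second, so nothing is lost by combining them.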
In order to realize feedback control of variable polarity plasma arc weld formation during the welding process, the characteristic geometrical size of the keyhole must be extracted. Multiscale edge detection based on the wavelet transform is equivalent to finding the local maxima of a wavelet transform. Using the multiscale edge properties provided by wavelet theory, edge points are detected as the maxima of the modulus of the gradient vector, taken along the direction in which the gradient vector points in the image plane. Edge points with a large modulus value correspond to the sharper intensity variations of the image. At coarse scales, the modulus maxima have different positions than at fine scales and only the sharp edges are detected; at fine scales, many maxima are created by image noise. We must therefore integrate this multiscale information to find the best scale at which the edges are well discriminated from noise. Finally, a new method of peak analysis for threshold selection is proposed. It is based on the wavelet transform, which provides a multiscale analysis of the information in the histogram: we show that detecting the zero-crossings or local extrema of a wavelet transform of the histogram gives a complete characterization of the peaks in the histogram. Many experiments show that these methods are effective for extracting the geometry parameters of the keyhole from the keyhole image in real-time image processing.
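A single-scale sketch of the modulus-maxima edge detector described above (the paper works across several wavelet scales; central differences over a plain 2-D list stand in for the wavelet gradient here, and the threshold is an illustrative assumption):

```python
def modulus_maxima_edges(img, threshold=0.5):
    # Gradient modulus via central differences; keep points where the
    # modulus exceeds the threshold and is a local maximum along the
    # dominant gradient axis (a simplified direction test).
    h, w = len(img), len(img[0])
    gx = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2
           for x in range(w)] for y in range(h)]
    gy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2
           for x in range(w)] for y in range(h)]
    mod = [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
            for x in range(w)] for y in range(h)]
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            m = mod[y][x]
            if m < threshold:
                continue
            if abs(gx[y][x]) >= abs(gy[y][x]):
                if m >= mod[y][x - 1] and m >= mod[y][x + 1]:
                    edges.add((y, x))
            elif m >= mod[y - 1][x] and m >= mod[y + 1][x]:
                edges.add((y, x))
    return edges
```

On a step image, the retained maxima cluster on the step itself, which is how the keyhole boundary would be localized before the geometry parameters are measured.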
We design a new compactly supported interpolating wavelet, the distributed approximating functional (DAF) wavelet, for biomedical signal/image processing. The DAF class is a smooth, continuous interpolating function system which is symmetric and fast-decaying. DAF neural networks are designed for time-varying electrocardiogram signal filtering; the neural nets use the Hermite-DAF as the basis function and implement a 3-layer structure. DAF wavelets and the corresponding subband filters are constructed for image processing. Edge-enhancement normalization and device-adapted visual group normalization algorithms are presented which sharpen the desired image features without prior knowledge of the spatial characteristics of the images. We design a nonlinear multiscale gradient-stretch method for feature extraction from mammograms, and a fractal technique is introduced to characterize microcalcifications in localized regions of breast tissue. We employ a DAF wavelet-based multiscale edge detection and Dijkstra fractal technique to identify microcalcification regions, and use a stochastic thresholding method to detect the calcified spots. The combined perceptual techniques produce natural, high-quality images based on the human vision system. The underlying technologies significantly facilitate the creation of generic signal processing and computer-aided diagnostic systems. The system is implemented in the JAVA language, which is cross-platform friendly and well suited to telemedicine applications.
On the basis of the complementary characteristics of SAR and optical images, two new image fusion algorithms based on the wavelet transform are presented to fuse a SAR image with an optical image. In the first method, the wavelet decompositions of the SAR image and the optical image are computed, the corresponding decomposition coefficients are compared and the larger one is retained as the new decomposition coefficient, and reconstruction then yields the fused image. In the second method, the wavelet decomposition is applied to both images; the high-frequency information of the wavelet transform of the SAR image and the detail information of the wavelet transform of the optical image are combined into a new set of decomposition coefficients, from which a new fused image is reconstructed. Both methods achieve good results in experiments, with the first method outperforming the second.
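The first fusion rule can be sketched in a few lines: transform both sources, keep the larger-magnitude coefficient at each position, and reconstruct. A one-level 1-D Haar transform applied to a single row stands in for the paper's 2-D wavelet transform:

```python
def haar(x):
    # One-level Haar DWT: approximation and detail coefficients.
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def ihaar(a, d):
    # Inverse one-level Haar DWT.
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse_rows(sar_row, opt_row):
    # Max-magnitude fusion rule: transform both sources and keep,
    # coefficient by coefficient, the one with the larger magnitude.
    a1, d1 = haar(sar_row)
    a2, d2 = haar(opt_row)
    a = [max(p, q, key=abs) for p, q in zip(a1, a2)]
    d = [max(p, q, key=abs) for p, q in zip(d1, d2)]
    return ihaar(a, d)
```

The fused row inherits the SAR image's strong localized feature where it dominates and the optical image's background elsewhere, which is the intended behavior of the comparison rule.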
Diagnostically lossless compression techniques are essential for the archival and communication of medical images. In this paper, an automated wavelet-based background noise removal method, i.e., a diagnostically lossless compression method, is proposed. First, the wavelet transform modulus maxima procedure produces the modulus maxima image, which contains the sharp changes in intensity that are used to locate the edges of the image. Then the Graham scan algorithm is used to determine the convex hull of the wavelet modulus maxima image and extract the foreground of the image, which contains the entire diagnostic region. Histogram analyses are applied to the non-diagnostic region, which is approximated by the part of the image outside the convex hull. After setting all pixels in the non-diagnostic region to zero intensity, a higher compression ratio, without any loss of the data used for diagnosis, is achieved with the UNIX utilities compress and pack, and with lossless JPEG. Furthermore, an image of a smaller rectangular region containing all of the diagnostic region is constructed to further improve the achieved compression ratio.
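The hull step can be sketched as follows; Andrew's monotone-chain algorithm is used here as a compact equivalent of the Graham scan (same O(n log n) result), with the modulus-maxima points assumed to be given as (x, y) pairs:

```python
def convex_hull(points):
    # Andrew's monotone-chain convex hull, an O(n log n) equivalent of
    # the Graham scan.  Returns the hull vertices in counter-clockwise
    # order; pixels outside this hull approximate the non-diagnostic
    # region and can be set to zero intensity.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b (turn direction).
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Zeroing everything outside the returned hull before running the lossless coder is what yields the improved compression ratio without touching any diagnostic pixel.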
This paper introduces an approach to radar target recognition from range profiles, based on the combination of the wavelet transform and an evolutionary neural network. The whole recognition process consists of two main stages. The first stage is concerned with feature extraction, where the goal is to find the small number of features from the high-dimensional feature space that retains all the information needed for accurate recognition. The second stage is concerned with classification of the patterns based on their reduced features. As the radar echo, i.e., the range profile, is a nonstationary signal, the Mallat algorithm is used to select and compress the features of the range profiles in the first stage. In the second stage, we use an evolutionary neural network as the classifier; that is, we construct a feed-forward neural network classifier using a hybrid evolutionary algorithm based on evolutionary programming. The experimental results show that the whole target recognition system has a simple structure and good generalization ability.
There are several numerical techniques for solving for the values of aberration coefficients. One classical technique is Gaussian elimination, described in most standard numerical analysis textbooks, such as Ralston's text; however, the conventional direct inversion method is numerically unstable. When obtaining the Zernike coefficients from a sampled wavefront with inherent measurement noise, the classical least-squares matrix inversion method and the Gram-Schmidt orthogonalization method can become ill-conditioned due to improper data sampling. In this paper, we present a continuous wavelet transform (CWT) technique to find the defocus and 3rd-order spherical aberration coefficients. The proposed technique is superior to the conventional methods in two ways: (1) it is much faster, especially in applications with few sampling points, and (2) it is more accurate in fitting aberration coefficients, particularly in the presence of noise. Furthermore, the aberration coefficients determined through the CWT are independent of the order of the polynomial expansion, so a true value can be found from the fitted data.
This paper presents the development of an innovative wavelet-based diagnostic methodology to perform real-time detection of mechanical chaos occurring in high-speed, high-performance rotor-dynamic systems. The objective is to provide an early warning if macroscopic and/or microscopic faults are detected, in time to prevent catastrophic mechanical failures that could compromise safety and cause expensive downtime. In this paper, we make use of the popular discrete wavelet transform (DWT) operating in the time domain, which has been demonstrated to be superior to Fourier-based methods and is highly effective in identifying small variations of parameters in rapid transient events. In addition, the DWT and its extension, the wavelet packet transform, are real-time implementable on many DSP chips. We present a comparison between the wavelet approach and the traditional technique for identifying chaotic signals, using synthetic and laboratory-generated data.
With the growing complexity of today's ICs, it is desirable to improve traditional testing methods to increase IC manufacturing throughput. If one can select a limited number of points from a given test pattern at which to test the circuit, the time required for testing can be greatly reduced. This paper presents a method for circuit testing that combines the wavelet transform with test point selection. An example using an 8-bit D/A converter demonstrates the algorithm.
For the first time, blind source de-mixing is applied to authenticity protection for multimedia products. We give an overview of the current state of multimedia authenticity protection, including the requirements of various multimedia applications, based on ICA, which seeks a statistically factorized probability density and yields fast de-mixing computation using unsupervised ANNs. We describe how this blind de-mixing capability extends signal processing from the conventional one-sensor approach to a multi-sensor approach, as in the two eyes and two ears of the human sensory system, but packaged in a spatio-temporal multiplexing fashion. For trademark security, a covert ICA component can serve as a dormant digital watermark embedded within the multimedia data: unauthorized removal of the trademark, as in plagiarism, would degrade the quality of the content data. We show how these new approaches contribute to a flexible, robust, and relatively secure system for protecting the authenticity of multimedia products.
We present a model of 3D visual perception which uses a notion of similarity that preserves meaningful neighborhood relationships between different visual stimuli. The notion involves a similarity graph-based representation that seeks to capture local connectivity in the similarity domain. An end-to-end computational implementation using wavelet features and a multilayer neural network architecture is presented. The usefulness of the similarity representation for visual reasoning is explored.
We wish to generalize the covariance matrix approach (PCA) by means of statistical Independent Component Analysis (ICA), which has been implemented efficiently by Bell and Sejnowski using ANN methodology. The gain in statistics is the loss of geometry. In this research, we preserve the texture geometry with a so-called local ICA, in order to extract independent features separately from each class of natural textures. To avoid the curse of dimensionality due to the local ICA, we furthermore use a divide-and-conquer strategy. A single ICA basis vector is chosen from each texture class, based on the maximum associative recall over the class training set. Subsequently, another ICA basis vector is chosen, if necessary, to minimize the false alarm rate, namely the spread of the confusion matrix. For the visible remote sensing application, we have designed such an optimum classifier of all natural scene textures with a minimum spread of the confusion matrix.
This paper analyzes day and night images using a novel concept called hard and soft singularity maps (SMs), which are biologically extracted from the laterally redundant data. Consequently, a unique correspondence exists among neighboring frames in terms of the different slope values at image corners, solving the optical flow problem for video compression. In this paper, efficient computational methods, namely min-max picking and a next-order Cellular Neural Network implementing the anti-diffusion Laplacian, obtain the SM without the convolution broadening inherent in the Sobel and Canny edge operators. However, the differentiation operation may produce false singularities under noise, and thus we apply Hermitian wavelets to handle the noisy singularities.
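A minimal sketch of the min-max picking step, under the assumption that a singularity is marked wherever a pixel is the strict extremum of its 3x3 neighborhood. Because the test is a pointwise comparison rather than a convolution, the detected singularity is not broadened by a smoothing kernel, which is the contrast with Sobel/Canny that the abstract draws.

```python
import numpy as np

def min_max_picking(img):
    """Mark pixels that are strict local minima or maxima of their 3x3 neighborhood.
    Pointwise comparison only, so singularities are not broadened by a kernel."""
    h, w = img.shape
    sm = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nbhd = img[i - 1:i + 2, j - 1:j + 2].astype(float).copy()
            center = nbhd[1, 1]
            nbhd[1, 1] = np.nan           # exclude the center itself
            others = nbhd[~np.isnan(nbhd)]
            if center > others.max() or center < others.min():
                sm[i, j] = True
    return sm

img = np.zeros((5, 5))
img[2, 2] = 10.0                          # an isolated bright singularity
sm = min_max_picking(img)
print(sm[2, 2])  # True
```

A noise-robust version, as the abstract indicates, would precede this picking with a Hermitian-wavelet analysis so that differentiation noise does not masquerade as singularities.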
Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily his nose and eyes; sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis (ICA) bases separately for each facial region over the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes, no, abstain' and are tallied for a potential alarm. Moreover, an innocent outsider shall fall below the alarm threshold and is allowed to pass the checkpoint. The resulting plot of probability of detection (PD) versus false-alarm rate (FAR), i.e., the ROC curve, is obtained.
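The region-wise voting can be sketched as a simple tally. The +1/-1/0 encoding of 'yes/no/abstain' and the alarm threshold are illustrative assumptions, not the paper's calibrated values; sweeping the threshold is what would trace out the PD-versus-FAR (ROC) curve.

```python
def tally_votes(region_votes, threshold=2):
    """Tally per-region votes ('yes', 'no', 'abstain') and raise an alarm when
    the net yes-count meets the threshold. Encoding: yes=+1, no=-1, abstain=0."""
    score = sum({'yes': 1, 'no': -1, 'abstain': 0}[v] for v in region_votes)
    return score, score >= threshold

# Sunglasses hide the eyes (that region abstains), but other regions still vote:
votes = {'eyes': 'abstain', 'nose': 'yes', 'mouth': 'yes', 'chin': 'no'}
score, alarm = tally_votes(votes.values())
print(score, alarm)  # 1 False
```

An innocent outsider would accumulate mostly 'no' and 'abstain' votes, keeping the score below the threshold, which is the pass-the-checkpoint behavior the abstract describes.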