Natural scenes are not band-limited and for most contemporary sampled imaging systems, the (pre-sampling) image formation subsystem frequency response extends well beyond the sampling passband. For these two reasons, most sampled imaging systems -- particularly staring-array systems -- produce aliasing. That is, the sampling process causes (high) spatial frequencies beyond the sampling passband to fold into (lower) spatial frequencies within the sampling passband. When the aliased, sampled image data is then reconstructed, usually by image display, potentially significant image degradation can be produced. This is a well-established theoretical result which can be (and has been, by many) verified experimentally. In this paper we argue that, for the purposes of system design and digital image processing, aliasing should be treated as signal-dependent, additive noise. The argument is both theoretical and experimental. That is, we present a model-based justification for this argument. Moreover, by using a computational simulation based on this model, we process (high resolution images of) natural scenes in a way which enables the `aliased component' of the reconstructed image to be isolated unambiguously. We demonstrate that our model-based argument leads naturally to system design metrics which quantify the extent of aliasing. And, by illustrating several `aliased component' images, we provide a qualitative assessment of aliasing as noise.
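The `aliased component' isolation lends itself to a short simulation. The sketch below is one plausible reading of the procedure rather than the authors' code: the scene is ideally band-limited to the sampling passband, both versions are sampled and reconstructed by pixel replication, and the difference between the two reconstructions is attributed to aliasing. The chirp test scene, the sampling factor, and the zero-order-hold display model are illustrative assumptions.

```python
import numpy as np

def reconstruct(samples, factor):
    """Zero-order-hold reconstruction (simple display model) back to full resolution."""
    return np.kron(samples, np.ones((factor, factor)))

def aliased_component(scene, factor):
    """Isolate the aliased part of a sampled/reconstructed image (illustrative only).

    The pre-sample image is band-limited by an ideal low-pass filter matched to
    the sampling passband; the difference between the reconstruction of the raw
    samples and that of the band-limited samples is due to spectral folding.
    """
    F = np.fft.fft2(scene)
    h, w = scene.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    passband = (np.abs(fy) < 0.5 / factor) & (np.abs(fx) < 0.5 / factor)
    bandlimited = np.real(np.fft.ifft2(F * passband))

    sampled_raw = scene[::factor, ::factor]        # sampling without pre-filter
    sampled_bl = bandlimited[::factor, ::factor]   # sampling after ideal pre-filter
    return reconstruct(sampled_raw, factor) - reconstruct(sampled_bl, factor)

# Example: a chirp-like test scene with energy well beyond the sampling passband.
y, x = np.mgrid[0:256, 0:256]
scene = np.sin(0.002 * (x ** 2 + y ** 2))
alias_img = aliased_component(scene, factor=4)
print("RMS of aliased component:", float(alias_img.std()))
```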
It is shown that images with missing pixels can be approximately and usefully reconstructed using an interpolation function derived from as few as 10% of the original pixels in the image. The interpolation procedure uses Gaussian radial basis functions to `hammer' a plane to the given data points, where the basis function widths are calculated according to diagonal dominance criteria. In this paper the effectiveness of the reconstruction technique is investigated as a function of the degree of diagonal dominance.
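A minimal sketch of Gaussian radial basis function interpolation from a sparse subset of pixels, in the spirit of the abstract. The basis-function width below is a simple nearest-centre heuristic rather than the paper's diagonal-dominance criterion, and the 32 x 32 test image and 10% sampling rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic "image" so the dense RBF linear system stays tractable.
n = 32
yy, xx = np.mgrid[0:n, 0:n]
image = np.sin(xx / 5.0) * np.cos(yy / 7.0)

# Keep roughly 10% of the pixels as interpolation centres.
mask = rng.random(image.shape) < 0.10
centres = np.column_stack(np.nonzero(mask)).astype(float)
values = image[mask]

# Basis-function width: mean nearest-centre spacing (a heuristic stand-in for
# the paper's diagonal-dominance rule).
d2 = ((centres[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
sigma = np.sqrt((d2 + np.diag(np.full(len(values), np.inf))).min(axis=1)).mean()

# Solve for the RBF weights, then evaluate the interpolant on the full grid.
weights = np.linalg.solve(np.exp(-d2 / (2 * sigma ** 2)), values)
grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
r2 = ((grid[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
recon = (np.exp(-r2 / (2 * sigma ** 2)) @ weights).reshape(image.shape)

print("RMS error of the reconstruction:", float(np.sqrt(np.mean((recon - image) ** 2))))
```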
Edge detection is one of the most important stages in a pattern recognition chain based on vision. Many studies have been devoted to the problems of edge detection; however, many difficulties remain (thick edges, sensitivity to noise, large computation time and memory requirements). In this paper we present an optimization of an edge detection method using oriented gradients by introducing the notion of visual perception to validate or reject edge points. This notion has been modeled from the visual assessments of several simulated images.
In this paper we present a class of order statistic filters named adaptive vector median. These filters generalize the median by using the concept of the median vector. The principal advantage of using median vectors is a significant reduction of computation. The proposed filter uses an adaptive algorithm that examines whether individual pixels are contaminated by impulsive noise, and if not, no filtering is performed on these pixels. The combination of the vector median and the adaptive algorithm results in a computationally efficient filter that preserves much of the image fine detail while removing impulsive noise, and also avoids alteration of noise-free images. Computer simulation results and comparisons with other filters are presented.
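A rough sketch of the two ingredients described: a vector median over a 3 x 3 window (the window vector minimizing the summed distance to the others) and an adaptive test that replaces a pixel only when it looks impulsive. The distance threshold and the test image are illustrative assumptions, not values from the paper.

```python
import numpy as np

def vector_median(window):
    """Vector median of a (k, 3) set of colour vectors: the member vector whose
    summed Euclidean distance to all the others is smallest."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1).sum(axis=1)
    return window[np.argmin(d)]

def adaptive_vector_median(img, threshold=60.0):
    """Filter only pixels that look impulsive (far from the local vector median)."""
    out = img.copy()
    h, w, _ = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2].reshape(-1, 3).astype(float)
            vm = vector_median(window)
            if np.linalg.norm(img[i, j].astype(float) - vm) > threshold:
                out[i, j] = vm          # replace only suspected impulses
    return out

# Example: a colour image corrupted by salt-and-pepper impulses.
rng = np.random.default_rng(1)
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = clean.copy()
hits = rng.random((64, 64)) < 0.05
noisy[hits] = rng.choice([0, 255], size=(hits.sum(), 3)).astype(np.uint8)
print("changed pixels:", int((adaptive_vector_median(noisy) != noisy).any(axis=-1).sum()))
```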
By using the biorthogonal wavelet transform, we develop a hierarchical planar curve descriptor which decomposes a curve into components of different scales so that the coarsest scale carries the global approximation information while other finer scale components contain the local detailed information. It is shown that the wavelet descriptor can be computed effectively and has many nice properties as a representation tool, e.g. invariance, uniqueness, and stability. We discuss several applications of the wavelet descriptor, including character recognition and shape deformation. Experimental results are given to illustrate the performance of the proposed wavelet descriptor.
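A small PyWavelets sketch of the descriptor idea: decompose the x(t) and y(t) coordinate sequences of a closed curve with a biorthogonal wavelet, keeping the coarsest approximation for the global shape and the detail coefficients for local structure. The `bior2.2' filter, the decomposition level, and the test curve are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
import pywt

# A closed planar curve sampled at 256 points (an ellipse with a small wiggle).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = 1.0 * np.cos(t) + 0.05 * np.cos(9 * t)
y = 0.6 * np.sin(t) + 0.05 * np.sin(9 * t)

# Multiscale descriptor: biorthogonal wavelet decomposition of each coordinate.
coeffs_x = pywt.wavedec(x, 'bior2.2', mode='periodization', level=4)
coeffs_y = pywt.wavedec(y, 'bior2.2', mode='periodization', level=4)

# The coarsest-scale approximation alone gives the global shape ...
approx_only_x = [coeffs_x[0]] + [np.zeros_like(c) for c in coeffs_x[1:]]
global_x = pywt.waverec(approx_only_x, 'bior2.2', mode='periodization')

# ... while the detail coefficients carry the local wiggle.
detail_energy = sum(float(np.sum(c ** 2)) for c in coeffs_x[1:])
print("global-shape RMS error in x:", float(np.sqrt(np.mean((global_x - x) ** 2))))
print("detail energy in x         :", detail_energy)
```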
Spatial transformations (STs) constitute an important class of image operations, which include the well-known affine transformation, image rotation, scaling, warping, etc. Less well known are the anisomorphic transformations among cartographic projections such as the Mercator, gnomonic, and equal-area formats. In this preliminary study, we introduce a unifying theory of spatial transformation, expressed in terms of the Image Algebra, a rigorous, inherently parallel notation for image and signal processing. Via such theory, we can predict the implementational cost of various STs. Since spatial operations are frequently I/O-intensive, we first analyze the I/O performance of well-known architectures, in order to determine their suitability for ST implementation. Analyses are verified by simulation, with emphasis upon vision-based navigation applications. An additional applications area concerns the remapping of visual receptive fields, which facilitates visual rehabilitation in the presence of retinal damage.
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
A statistical analysis of the noise in speckle patterns and a mathematical model of the speckle pattern are presented, and an adaptive suboptimal digital image filtering method in the spatial domain, designed with this noise in view, is proposed. A local variable-parameter exponential transformation and normalization are first used to reduce the serious inhomogeneity of the grey-level distribution caused by the speckle diffraction halo. The statistical characteristics of the noise in the speckle pattern are then analyzed and the mathematical model of the speckle pattern is established. On the basis of this analysis and modelling, an adaptive suboptimal image filtering method that can simultaneously remove the various noises in a speckle pattern is described. The method combines the characteristics of a linear optimal filter and a nonlinear adaptive filter, takes the local fringe direction as an adaptive parameter so that neighborhood statistics and filtering proceed within the maximum stable region around the processed point, and also takes the local statistical characteristics of the image into account. Experimental results show that the statistical noise analysis and the proposed mathematical model describe the characteristics of speckle patterns rather accurately, and that the proposed filtering method effectively removes the various noises while retaining useful information.
This paper provides a common mathematical framework for analyzing image fidelity losses in rectangularly and hexagonally sampled digital imaging systems. The fidelity losses considered are due to blurring during image formation, aliasing due to undersampling, and imperfect reconstruction. The analysis of the individual and combined effects of these losses is based upon an idealized, noiseless, continuous-discrete-continuous end-to-end digital imaging system model consisting of four independent system components: an input scene, an image gathering point spread function, a sampling function, and an image reconstruction function. The generalized sampling function encompasses both rectangular and hexagonal sampling lattices. Quantification of the image fidelity losses is accomplished via the mean-squared-error (MSE) metrics: imaging fidelity loss, sampling and reconstruction fidelity loss, and end-to-end fidelity loss. Shift-variant sampling effects are accounted for with an expected value analysis. This mathematical framework is used as the basis for a series of simulations comparing a regular rectangular (square) sampling grid to a regular hexagonal sampling grid for a variety of image formation and image reconstruction conditions.
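A compact sketch of the end-to-end chain on a square lattice and of the MSE-based fidelity-loss measurements it supports. The Gaussian PSF, the sampling ratio, and the bilinear reconstruction below stand in for the paper's general image-gathering, sampling, and reconstruction functions; the hexagonal lattice and the expected-value treatment of shift-variant sampling are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def fidelity_loss(scene, degraded):
    """Normalised mean-squared-error fidelity loss relative to the input scene."""
    return float(np.mean((scene - degraded) ** 2) / np.mean(scene ** 2))

rng = np.random.default_rng(2)
scene = gaussian_filter(rng.standard_normal((256, 256)), 2.0)   # stand-in input scene

psf_blurred = gaussian_filter(scene, sigma=1.5)      # image-gathering PSF (illustrative)
sampled = psf_blurred[::4, ::4]                      # square sampling lattice
reconstructed = zoom(sampled, 4, order=1)            # bilinear reconstruction

print("imaging fidelity loss   :", fidelity_loss(scene, psf_blurred))
print("end-to-end fidelity loss:", fidelity_loss(scene, reconstructed))
```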
Contrast enhancement is one of the fundamental techniques in the processing chain for improving the quality of an under- or overexposed image. The existing enhancement methods are often used in an interactive manner, which entails a subjective choice of the method and of the parameters on which it depends. We have therefore developed a method which allows the automatic enhancement of images by introducing decisional criteria for the choice of the adapted transfer function. The decisional criteria are based on the dynamics and the first two statistical moments of the histogram of the image to be processed.
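A hedged sketch of the decision logic: compute the histogram's dynamic range and first two moments, then pick a transfer function accordingly. The thresholds and candidate transfer functions below are assumptions for illustration; the paper's actual decisional criteria are not reproduced.

```python
import numpy as np

def auto_enhance(img):
    """Choose a grey-level transfer function from simple histogram statistics."""
    g = img.astype(float)
    lo, hi = g.min(), g.max()
    mean, std = g.mean(), g.std()

    if hi - lo < 128:                       # narrow dynamic range -> linear stretch
        out = (g - lo) / max(hi - lo, 1.0) * 255.0
    elif mean < 96:                         # underexposed -> brightening gamma
        out = 255.0 * (g / 255.0) ** 0.6
    elif mean > 160:                        # overexposed -> darkening gamma
        out = 255.0 * (g / 255.0) ** 1.6
    else:                                   # otherwise a mild stretch about the mean
        out = np.clip(mean + (g - mean) * (48.0 / max(std, 1.0)), 0, 255)
    return out.astype(np.uint8)

dark = (np.random.default_rng(3).random((64, 64)) * 80).astype(np.uint8)
print("mean before/after:", float(dark.mean()), float(auto_enhance(dark).mean()))
```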
We assess redundancy reduction in image coding in terms of the information acquired by the image-gathering process and the amount of data required to convey this information. A clear distinction is made between the theoretically minimum rate of data transmission, as measured by the entropy of the completely decorrelated data, and the actual rate of data transmission, as measured by the entropy of the encoded (incompletely decorrelated) data. It is shown that the information efficiency of the visual communication channel depends not only on the characteristics of the radiance field and the decorrelation algorithm, as is generally perceived, but also on the design of the image-gathering device, as is commonly ignored.
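The rate comparison rests on first-order entropies of raw versus (incompletely) decorrelated data; a minimal sketch of that measurement on a synthetic correlated image:

```python
import numpy as np

def entropy_bits(symbols):
    """First-order entropy (bits/symbol) of an integer-valued array."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
# Correlated stand-in image: integrated noise quantised to 8 bits.
img = np.cumsum(np.cumsum(rng.standard_normal((256, 256)), axis=0), axis=1)
img = np.round(255 * (img - img.min()) / (img.max() - img.min())).astype(int)

# A previous-pixel predictor is an incomplete decorrelator.
residual = np.diff(img, axis=1)

print("entropy of raw data      :", entropy_bits(img), "bits/pixel")
print("entropy of residual data :", entropy_bits(residual), "bits/pixel")
```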
Gray-scale DF-expression (GDF) has attracted some research interest recently. GDF-expression exploits data correlation on every bit-plane and represents the image hierarchically. In this paper, a predictive GDF-expression (PGDF) for lossless image compression is proposed which applies a predictive step to increase the spatial correlation on the MSB bit-planes. The predictive GDF is compared with predictive Huffman encoding. Extensive experiments show that PGDF can achieve much better compression results than GDF and slightly better results than predictive Huffman encoding for most of the test images.
This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
JPEG has already found wide acceptance for still frame image compression. The quantization matrices (QMs) play a critical role in the performance of the JPEG algorithm but there has been a lack of effective QM design tools. As a result, sub-optimal QMs have commonly been used and JPEG has been judged to be inappropriate for some applications. It is our contention that JPEG is even more widely applicable than `common knowledge' would admit. This paper describes a low-cost design tool that has been developed and is currently being successfully applied to design QMs for various sensors including IR, SAR, medical, scanned maps, and fingerprints.
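For reference, a sketch of where a quantization matrix enters the pipeline: an 8 x 8 block is DCT-transformed, divided element-wise by the QM, and rounded. The matrix shown is the familiar JPEG luminance example table, not a sensor-specific table produced by the design tool described here.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix.
k = np.arange(8)[:, None]
n = np.arange(8)[None, :]
C = np.sqrt(2.0 / 8.0) * np.cos(np.pi * (2 * n + 1) * k / 16.0)
C[0, :] /= np.sqrt(2.0)

# Standard JPEG luminance table, used only as an example QM.
QM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def quantise_block(block, qm):
    """Forward DCT, quantise with the QM, then dequantise and invert."""
    coeffs = C @ (block - 128.0) @ C.T
    q = np.round(coeffs / qm)
    return (C.T @ (q * qm) @ C) + 128.0

block = np.clip(128 + 40 * np.random.default_rng(5).standard_normal((8, 8)), 0, 255)
print("block MSE with this QM:", float(np.mean((quantise_block(block, QM) - block) ** 2)))
```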
The last decade has seen extensive research in the area of image compression (source coding). Of equal importance is the forward error correction (channel coding) required to maintain the fidelity of reconstructed images when they are transmitted through a noisy communications channel. This paper presents a design methodology based on end-to-end analysis of the communication channel from the sensor to the display monitor to develop the appropriate bit error model for the communications channel. Channels containing all white Gaussian and/or burst errors are addressed. These channels include rf data links, satellite relay links, digital tape recorders, and optical recording systems. An overview of the forward error correction codes required for error correction of these channels is also presented. The codes that are discussed include 1/2 rate convolutional, Reed-Solomon, concatenated 1/2 rate convolutional and Reed-Solomon, and 2-dimensional Reed-Solomon product codes. Extending the error correction capability of these codes through both block and helical interleaving is addressed.
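Of the techniques listed, block interleaving is easy to illustrate: symbols are written into a rows x columns array and read out by columns, so a channel burst is dispersed into isolated symbol errors that a short code can correct. The array size and burst below are arbitrary illustrative choices.

```python
import numpy as np

def block_interleave(symbols, rows, cols):
    """Write row-by-row into a rows x cols array, read out column-by-column."""
    return symbols.reshape(rows, cols).T.ravel()

def block_deinterleave(symbols, rows, cols):
    return symbols.reshape(cols, rows).T.ravel()

data = np.arange(48)                       # 48 code symbols
tx = block_interleave(data, 6, 8)

# A burst of 5 consecutive channel errors ...
rx = tx.copy()
rx[10:15] = -1

# ... appears as isolated, widely separated errors after de-interleaving,
# which a short block code can then correct symbol by symbol.
recovered = block_deinterleave(rx, 6, 8)
print("error positions after de-interleaving:", np.nonzero(recovered != data)[0])
```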
Peanoscanning was used to obtain the pixels from an image by following a scan path described by a space-filling curve, the Peano-Hilbert curve. The Peanoscanned data were then compressed without loss of information by direct Huffman, arithmetic, and LZW coding, as well as predictive and transform coding. The results were then compared with the results obtained by compressing equivalent raster scanned data to study the effect of Peanoscanning on image compression. In our implementation, tested on seven natural images, Peano-differential coding with an entropy coder gave the best results of reversible compression from 8 bits/pixel to about 5 bits/pixel. An efficient implementation of the Peanoscanning operation based on the symmetry exhibited by the Peano-Hilbert curve is also suggested.
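A sketch of the scanning-plus-differencing idea using the standard iterative Hilbert index-to-coordinate conversion (not the symmetry-based implementation the paper suggests), followed by a first-order entropy estimate of the differences:

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map a distance d along a Hilbert curve of side 2**order to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def entropy_bits(symbols):
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

order = 6                                              # 64 x 64 test image
side = 1 << order
yy, xx = np.mgrid[0:side, 0:side]
img = (128 + 60 * np.sin(xx / 6.0) * np.cos(yy / 9.0)).astype(int)

path = [hilbert_d2xy(order, d) for d in range(side * side)]
peano_scan = np.array([img[y, x] for x, y in path])
raster_scan = img.ravel()

print("entropy of raster differences:", entropy_bits(np.diff(raster_scan)))
print("entropy of Peano differences :", entropy_bits(np.diff(peano_scan)))
```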
An infrared focal plane has been simulated, designed and fabricated which mimics the form and function of the vertebrate retina. The `Neuromorphic' focal plane has the capability of performing pixel-based sensor fusion and real-time local contrast enhancement, much like the response of the human eye. The device makes use of an indium antimonide detector array with a 3-5 micrometer spectral response, and a switched-capacitor resistive network to compute a real-time 2D spatial average. This device permits the outputs of other sensors to be combined on-chip with the infrared detections of the focal plane itself. The resulting real-time analog processed information thus represents the combined information of many sensors, with the advantage that analog spatial and temporal signal processing is performed at the focal plane. A Gaussian subtraction method is used to produce the pixel output, which when displayed produces an image with enhanced edges, representing spatial and temporal derivatives in the scene. The spatial and temporal responses of the device are tunable during operation, permitting the operator to `peak up' the response of the array to spatially and temporally varying signals. Such an array adapts to ambient illumination conditions without loss of detection performance. This paper reviews the Neuromorphic infrared focal plane from initial operational simulations to detailed design characteristics, and concludes with a presentation of preliminary operational data for the device as well as videotaped imagery.
This paper presents a computer vision system being developed at the Pattern Analysis and Machine Intelligence (PAMI) Lab of the University of Waterloo and at the Vision, Intelligence and Robotics Technologies Corporation (VIRTEK) in support of the Canadian Space Autonomous Robotics Project. This system was originally developed for flexible manufacturing and guidance of autonomous roving vehicles. In the last few years, it has been engineered to support the operations of the Mobile Service System (MSS) (or its equivalent) for the Space Station Project. In the near term, this vision system will provide vision capability for the recognition, location and tracking of payloads as well as for relating the spatial information to the manipulator for capturing, manipulating and berthing payloads. In the long term, it will serve in the role of inspection, surveillance and servicing of the Station. Its technologies will be continually expanded and upgraded to meet the demand as the needs of the Space Station evolve and grow. Its spin-off technologies will benefit the industrial sectors as well.
An optical processor for zero-crossing edge detection is presented which consists of two defocused imaging systems to perform the Gaussian convolutions and a VLSI ferroelectric liquid crystal spatial light modulator (SLM) to determine the zero-crossings. The zero-crossing SLM is a 32 x 32 array of pixels located on 100 micrometer centers. Each pixel contains a phototransistor, an auto-scaling amplifier, a zero-crossing detection circuit, and a liquid crystal modulating pad. Electrical and optical characteristics of the zero-crossing SLM are presented along with experimental results of the system.
As part of research into automated document imaging, an algorithm has been developed to detect the orientation (portrait/landscape) of a binary image page. The detection is based on an analysis of projection profiles, vertical and horizontal variances on a page, and a technique to reduce the impact of non-textual data (blanks, graphics, forms, line art, large fonts, and dithered images) on the page orientation result. The algorithm performed well on test images independent of text dominance. The performance of the algorithm has been evaluated using a sample size of several thousand images of medical journal pages. The algorithm is capable of detecting the page orientation at an accuracy rate of 99.92-99.93%.
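A stripped-down sketch of the projection-profile reasoning: text lines strongly modulate the profile taken perpendicular to them, so the axis with the larger profile variance indicates the reading direction. The suppression of non-textual data described in the abstract is omitted here.

```python
import numpy as np

def detect_orientation(binary_page):
    """Guess portrait vs. landscape text orientation from projection profiles."""
    row_profile = binary_page.sum(axis=1).astype(float)   # horizontal projection
    col_profile = binary_page.sum(axis=0).astype(float)   # vertical projection
    return "portrait" if row_profile.var() > col_profile.var() else "landscape"

# Synthetic page: dark text lines running horizontally (portrait reading order).
page = np.zeros((200, 150), dtype=int)
for top in range(10, 190, 12):
    page[top:top + 6, 10:140] = 1

print(detect_orientation(page))           # expected: portrait
print(detect_orientation(page.T))         # expected: landscape
```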
A new automatic target recognition (ATR) technique called the fractal transform is presented. The fractal transform is a combination of fractal image processing and the Hough transform. A measurement of the power spectral density (PSD) of an input scene was performed using a ring-wedge detector to obtain a log (PSD) vs. log (spatial frequency) plot. By analyzing the log-log plot with the Hough transform and a neural network, the ATR operation based on the fractal transform is achieved.
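A sketch of the measurement stage, assuming a simulated ring detector: radially average the 2-D power spectrum and fit a straight line on the log-log plot. The Hough-transform and neural-network stages of the ATR chain are not reproduced, and the 1/f-shaped test scene is synthetic.

```python
import numpy as np

def radial_psd(image, n_rings=32):
    """Ring-averaged power spectral density (mimics a ring detector)."""
    psd = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(1, min(h, w) / 2, n_rings + 1)
    freqs, power = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        freqs.append((lo + hi) / 2)
        power.append(psd[ring].mean())
    return np.array(freqs), np.array(power)

# Fractal-like test scene: spectrally shaped (1/f) noise.
rng = np.random.default_rng(6)
f = np.fft.fftfreq(256)
rad = np.hypot(*np.meshgrid(f, f))
rad[0, 0] = np.inf                                   # suppress the DC term
scene = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((256, 256))) / rad))

freqs, power = radial_psd(scene)
slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
print("log-log PSD slope (relates to the fractal dimension):", slope)
```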
An improved method of estimating fractal surface dimensions has been developed. The accuracy of this method is illustrated using artificially generated fractal surfaces. A concept of linear dimension that differs slightly from the usual one is developed, allowing a direct link between it and the corresponding surface dimension estimate. These methods are applied to a series of images of lava flows, representing a variety of physical and chemical conditions. These include lavas from California, Idaho, and Hawaii, as well as some extraterrestrial flows. The fractal surface dimension estimates are presented, as well as the fractal line dimensions where appropriate.
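The abstract does not give the improved estimator itself; as a point of reference, the classic differential box-counting estimator can be sketched as follows (the block sizes and the synthetic surface are illustrative, and this is not the authors' method):

```python
import numpy as np

def differential_box_count(surface, sizes=(2, 4, 8, 16, 32)):
    """Estimate a fractal surface dimension by differential box counting
    (Sarkar-Chaudhuri style), used here only as a stand-in estimator."""
    m = surface.shape[0]
    g = surface.max() + 1                 # number of grey levels
    counts = []
    for s in sizes:
        h = g * s / m                     # box height for this block size
        n_r = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                block = surface[i:i + s, j:j + s]
                n_r += int(np.ceil((block.max() + 1) / h) - np.ceil((block.min() + 1) / h)) + 1
        counts.append(n_r)
    # D is the slope of log N_r against log(1/r), with r = s/m.
    return np.polyfit(np.log(m / np.array(sizes)), np.log(counts), 1)[0]

rng = np.random.default_rng(7)
rough = np.cumsum(np.cumsum(rng.standard_normal((128, 128)), 0), 1)
rough = np.round(255 * (rough - rough.min()) / (rough.max() - rough.min())).astype(int)
print("estimated surface dimension:", differential_box_count(rough))
```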
Many natural backgrounds, such as radar clutter from the ocean surface, which were previously thought to be random may be chaotic. Because of the finite dimensionality of a chaotic background, a non-linear signal processor can be trained as a global predictor. The results of a continuing study of polynomial neural nets (PNN), used for global prediction, are described. Encouraging results have been obtained with PNNs for both signal processing (time series) and images. Since PNNs can be trained to predict chaotic backgrounds, threshold target images can be detected by subtracting the predicted background from the target plus background. In this paper we summarize the basis for PNN processing and present recent PNN image processing results using as a chaotic background video images of the ocean surface.
Image edges are an important feature of an image. Usually, a Laplacian or Sobel operator is used to obtain the edges. In this paper, we use a fractal method to obtain them. After introducing the Fractal Brownian Random (FBR) field, we give the definition of the Discrete Fractal Brownian Increase Random (DFBIR) field and discuss its properties; we then apply the DFBIR field to detect the edges of an image. From the parameters H and D of the DFBIR field, we define a measure M = αH + βD. From the M value at each pixel, we can detect the edges of the image.
The nonlinear distributed optical system with rotation in feedback contour is considered. A particular case of rotating waves is described by bifurcational periodical solutions in the form of power series expansion in terms of a small parameter. Conditions of its stability are obtained. The finite-dimensionality of the studied system is defined in terms of the number of determining modes closely connected with the Hausdorff dimension of the attractor.
As part of research into document image processing, an algorithm has been developed to detect the degree of skew in a scanned binary image. The principal components of the algorithm are component labelling, a procedure to reduce the amount of data to be processed, a technique to minimize the effect of non-textual data (graphics, forms, line art, large fonts, and dithered images) on the measurement of skew angle, and the Hough transform. The performance of the algorithm has been evaluated using a sample size of several hundred images of medical journal pages. Evaluation shows that a skew angle may be detected with an accuracy of about 0.50 degree.
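As a stand-in for the Hough accumulation, the same skew estimate can be sketched as a brute-force search for the rotation angle that maximizes the variance of the horizontal projection profile; component labelling and the non-text suppression step are omitted, and the angle grid is arbitrary.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_correction_angle(page, search=np.arange(-5.0, 5.01, 0.25)):
    """Return the rotation angle (degrees) that best re-aligns text lines,
    i.e. the angle maximising the variance of the horizontal projection."""
    best_angle, best_score = 0.0, -1.0
    for angle in search:
        rotated = rotate(page.astype(float), angle, reshape=False, order=1)
        score = rotated.sum(axis=1).var()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# Synthetic page with horizontal text lines, then skewed by 2 degrees.
page = np.zeros((200, 200))
for top in range(20, 180, 12):
    page[top:top + 5, 20:180] = 1
skewed = rotate(page, 2.0, reshape=False, order=1)

print("estimated correction angle (degrees):", estimate_correction_angle(skewed))
```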
To accurately model imagery for converting a single image source to generate imagery for another sensor, it is necessary to develop feature classification techniques that can define significant features in the imagery so that the scattering properties of the incident radiation can be used as a technique to model the desired bands of the electromagnetic spectrum. This paper will be concerned with the extraction of features where either color infrared (CIR) or electro-optical (EO) imagery are available as the baseline source materials. The feature extraction process will be initiated using first-order techniques for the initial classification. This initial classification will be followed by higher-order, more computationally intensive methods. Since higher-order methods are usually specific to certain features, a battery of higher-order methods will be required to classify the features in an entire scene. These various classified features will be linked using techniques of image analysis. These techniques have been used to generate a sequence of images, where CIR imagery was converted to thermal infrared (TIR) and synthetic aperture radar (SAR), for mission simulation and planning. The input images are processed initially to define regions based on some measure of homogeneity within regions of the image. This processing could be based on texture or on measures of signature content within different bands of a multispectral image. Both automatic and manual classification techniques, including synergistic combinations, are applicable for this stage of processing. The precise form of the processing should also be guided by whether the regions being processed consist of manmade or natural regions. This information is very useful since manmade structures usually consist of regularly shaped, rigid regions, whereas natural objects are less well defined and usually exhibit more randomness. Thus, for manmade object feature extraction, it is appropriate to use techniques for extracting lines, regions, ellipses/circles, or other regular-shaped regions with some regular periodicity occurring within selected, larger subregions. On the other hand, naturally occurring features could be more accurately extracted using texture methods or metrics defined based on the energy content of different bands of the electromagnetic spectrum. The feature extraction techniques discussed in this paper are hierarchical in nature to reduce the computational requirements imposed on developing the large set of images required for realistic mission training. The first-order classification is most efficiently implemented by using the three bands of a CIR image, which can be transformed from red, green, and blue to intensity, hue, and saturation. This transformation has the twofold effect of making the process independent of the total received intensity, while at the same time reducing the three-parameter lookup table to a two-parameter lookup table. The details and accuracy of these techniques, which have been implemented for mission simulation and planning, will be presented in the paper. Combining these techniques with texture methods, which are based on regularity measures of regions, leads to a refined classification. The final step in the procedure is to perform image analysis as a refinement procedure for the classification. Based on the scenes being analyzed and some known a priori scene content, detailed feature extraction procedures can be developed for specific features.
Analysis of these features allows decision rules to be constructed based on the features and their interrelationships. These decision rules allow the system developer to encode the proper constraints into the algorithmic processes to determine when the feature actually exists and its limiting boundaries. As more practice and expertise is developed in the image conversion process, these image analysis techniques will become more and more automated.
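A small sketch of the RGB-to-intensity/hue/saturation step mentioned above, using the common HSI formulation (the authors' exact transform may differ): it factors out the total intensity and leaves a two-parameter (hue, saturation) lookup for the first-order classification.

```python
import numpy as np

def rgb_to_ihs(rgb):
    """Convert an (h, w, 3) float image with values in [0, 1] to intensity,
    hue, and saturation using the common HSI formulas."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - rgb.min(axis=-1) / np.maximum(intensity, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2 * np.pi - theta)
    return intensity, hue, saturation

rng = np.random.default_rng(8)
cir = rng.random((4, 4, 3))                  # stand-in for the three CIR bands
i_band, h_band, s_band = rgb_to_ihs(cir)
print(i_band.shape, float(h_band.min()), float(s_band.max()))
```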
The neon spreading effect has attracted attention since the 1970s. It has been assumed that this kind of illusory spreading of color, resembling the diffuse light that issues from a glowing neon tube, arises at some stage of the visual system where a physical process introduces an error. In this paper, experimental results show that the brightness difference between the grid and the crosses is the precondition for neon spreading to appear. We propose that this kind of illusion is a result of visual image processing and can be interpreted in terms of bandpass filtering involving filters of several spectral widths. A computer simulation of the neon spreading effect by Gaussian filtering also supports this proposal.
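A minimal sketch of the bandpass-filtering account, assuming difference-of-Gaussians filters of several widths applied to a grid-and-cross test pattern; the stimulus geometry and filter widths are illustrative, not those of the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(img, sigma_centre, sigma_surround):
    """Difference-of-Gaussians: a simple bandpass filter of tunable spectral width."""
    return gaussian_filter(img, sigma_centre) - gaussian_filter(img, sigma_surround)

# Grid-and-cross stimulus: dark grid lines, with crosses of a different brightness.
img = np.full((128, 128), 200.0)
img[::16, :] = 60.0            # grid lines
img[:, ::16] = 60.0
img[::16, ::16] = 140.0        # crosses at the intersections

# Filtering at several spectral widths; the filtered responses spread brightness
# around the crosses, which is the effect the simulation examines.
responses = [bandpass(img, s, 2.5 * s) for s in (1.0, 2.0, 4.0)]
print([float(np.abs(r).max()) for r in responses])
```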
In this paper, we describe a method to represent and classify unvoiced sounds using the concept of super wavelets. A super wavelet is a linear combination of wavelets that can itself be treated as a wavelet. Since unvoiced sounds are high-frequency and noise-like, we use Daubechies' wavelet of order three to generate the super wavelet. The parameters of the wavelet for representation and classification of unvoiced sounds are generated using neural networks. Even though this paper addresses the problems of both signal representation and classification, the emphasis is on the classification problem, since it is natural to adaptively tune wavelets in conjunction with training the classifier in order to select the wavelet coefficients which contain the most information for discriminating between the classes. We demonstrate the applicability of this method for the representation and classification of unvoiced sounds with representative examples.
This paper presents an implementation of the wavelet transform on parallel computers. The computation time of the wavelet transform on conventional computers limits its applications in several areas of signal processing and data compression. We examine some problems encountered when parallelizing such a code and compare three different SIMD computers on this basis: a Connection Machine 2/200, a SYMPATI-2 line processor, and a MasPar MP-1.
A Smartt interferometry based opto-electronic wavelet processor is described. It is capable of delivering the true wavelet transform rather than just the amplitude at video rate for image coding and other applications. High quality reconstructed images were obtained from the experiment. The system is compact, fieldable and suitable for practical applications.
A new adaptive digital halftoning method is presented where the screen function is designed to be adaptive to the local variation of image intensity. Wavelet transform is used to extract the information of this variation by estimating the gradient at each pixel. Our initial experimental results have shown improved visual effect of the halftoned images.
Close attention has been paid to climatic change because of its important influence on the economy and society. The warming of the global climate over the last one hundred years is now one of the hot topics in studies of climatic change. It is generally acknowledged that the increase of greenhouse gases, such as CO2, which enhances the greenhouse effect, is responsible for the warming of the global climate in the 20th century, especially in the 1980s. In this paper, global warming is reexamined using the wavelet transform "microscope". Wavelet analysis is a mathematical technique introduced recently for seismic data and acoustic signals. It provides a two-dimensional unfolding of one-dimensional signals, resolving both time and frequency as independent variables.
A stationary band-limited process is used to construct a wavelet basis. This basis is modified to obtain a biorthogonal sequence which in turn is used to obtain a series representation of the process with uncorrelated coefficients.
There are (in general) three common ways to arrive at a wavelet basis for R^2. The first (separable) way takes the tensor product of two one-dimensional wavelets and their associated scaling functions: one wavelet-scaling function pair ψ(x)φ(y), one wavelet-wavelet pair ψ(x)ψ(y), and one scaling function-wavelet pair φ(x)ψ(y), leading to three `detail' terms per scale and translate. This leads to the familiar Mallat representation of the image. The second, less frequently used basis is formed by taking all dilations of the product of two one-dimensional wavelets, leading to one detail term per single scale and translate, but far more scales, since the dilation changes independently between the two wavelets. In the third way, the basis is formed not by a tensor product but by dilation of a set of functions supported on a lattice, the translates of which complete the integer lattice. This technique leads to k detail terms corresponding to the k wavelets in the basis. In this paper, non-separable two-dimensional Haar wavelets are considered to represent images for compression purposes. The irregular (self-similar) support of non-separable wavelets makes them a natural candidate for image compression. A fast algorithm for decomposition and reconstruction of images in terms of the non-separable Haar wavelets is discussed, and the connection between the `best basis' representation and quad (n-ary) trees is examined.
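For orientation, the first (separable, Mallat-style) construction is easy to write down; a single-level 2-D Haar step with perfect reconstruction is sketched below. The non-separable lattice construction that is the subject of the paper is not reproduced here.

```python
import numpy as np

def haar2d_level(img):
    """One level of the separable 2-D Haar decomposition: approximation (LL)
    plus the three detail subbands of the Mallat representation."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # rows: scaling
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # rows: wavelet
    ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)      # phi(x) phi(y)
    lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)      # phi(x) psi(y)
    hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)      # psi(x) phi(y)
    hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)      # psi(x) psi(y)
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    a = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    d = np.zeros_like(a)
    a[0::2, :], a[1::2, :] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[0::2, :], d[1::2, :] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    img = np.zeros((a.shape[0], a.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return img

x = np.random.default_rng(9).random((8, 8))
print(np.allclose(haar2d_inverse(*haar2d_level(x)), x))    # perfect reconstruction
```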
In this paper, the wavelet transform is used for the purpose of noise reduction and signal enhancement in order to aid in the detection of randomly occurring short duration signals in noisy environments with signal to noise ratios of about -30 dB. The noise is characterized as being additive and consists of correlated interference as well as Gaussian noise. Such problems are encountered in many applications, such as health diagnostics (e.g. electrocardiograms, echo-cardiograms and electroencephalograms), underwater acoustics and geophysical applications where a signature signal passes through multiple media. The wavelet transform, with its basis functions localized both in time and frequency, provides the user with a signal representation suitable for detection purposes. Following the introduction, a brief description of the problem with the characteristics of the signal to be detected and the noise that is present in the environment is given. Then, background information on the wavelet transform is presented. Finally, our results obtained by applying the wavelet transform to signal detection are shown.
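A small PyWavelets sketch of the detection idea, at a far friendlier SNR than the paper's -30 dB and with illustrative choices of wavelet, level count, and threshold rule: decompose, keep only detail coefficients above a noise-scaled threshold, and look for the resulting energy concentration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(10)
n = 2048
signal = np.zeros(n)
signal[1000:1064] = 6.0 * np.hanning(64) * np.sin(0.9 * np.arange(64))
observed = signal + rng.standard_normal(n)              # transient buried in noise

coeffs = pywt.wavedec(observed, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise-level estimate
thresh = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
kept = [coeffs[0] * 0] + [pywt.threshold(c, thresh, mode='hard') for c in coeffs[1:]]
detail_only = pywt.waverec(kept, 'db4')[:n]

# Detection statistic: short-time energy of the surviving detail content.
energy = np.convolve(detail_only ** 2, np.ones(64), mode='same')
print("peak of detection statistic (transient inserted at sample 1000):",
      int(np.argmax(energy)))
```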
This paper deals with the system for rational sampling rate change. Two efficient implementations are given. The first structure shows the relationship between the polyphase decomposition of the output and the linear, periodically time-varying (LPTV) nature of the system. The second system refines the first one step further to give the polyphase matrix implementation. Frequency domain relations are given for both systems. The first system is a single-input/multi-output time-varying system, and in this case the z-domain relation is not a transfer function. However, the polyphase matrix can be represented as a multi-input/multi- output time-invariant system, and as such has a transfer matrix, which is given.
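A sketch of rational rate change by the textbook route (expand by L, low-pass at min(π/L, π/M), decimate by M), with SciPy's polyphase resampler as a cross-check; the LPTV and polyphase-matrix formulations developed in the paper are not reproduced.

```python
import numpy as np
from scipy.signal import firwin, lfilter, resample_poly

def rational_resample(x, up, down, taps=101):
    """Rate change by up/down: zero insertion, low-pass filtering, decimation."""
    expanded = np.zeros(len(x) * up)
    expanded[::up] = x                                   # zero insertion
    cutoff = 1.0 / max(up, down)                         # as a fraction of Nyquist
    h = firwin(taps, cutoff) * up                        # interpolation gain 'up'
    filtered = lfilter(h, 1.0, expanded)
    return filtered[::down]                              # decimation

fs = 48000.0
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)

y_direct = rational_resample(x, up=3, down=2)            # 48 kHz -> 72 kHz (L/M = 3/2)
y_scipy = resample_poly(x, 3, 2)                         # polyphase implementation
print(len(x), len(y_direct), len(y_scipy))
```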
This paper develops design constraints for alias free and perfect reconstruction (PR) rational two channel filter banks. The polyphase transfer matrix for the overall system is developed, and it is shown that for a subset of all rational systems, the relations reduce to a polyphase matrix similar to that of the classic M-channel maximally decimated system. Following well known results, it is then straightforward to give constraints in terms of the analysis and synthesis filters for alias free, as well as PR systems.
The freedom of choosing an appropriate kernel of a linear transform, which is given to us by the recent mathematical foundation of the wavelet transform, is exploited fully and is generally called the adaptive wavelet transform. However, there are several levels of adaptivity: (1) Optimum Coefficients: adjustable transform coefficients chosen with respect to a fixed mother kernel for better invariant signal representation; (2) Super-Mother: grouping different scales of daughter wavelets of the same or different mother wavelets at different shift locations into a new family called a superposition mother kernel for better speech signal classification; (3) Variational Calculus to determine ab initio a constrained-optimization mother for a specific task. The tradeoff between the mathematical rigor of complete orthonormality and order-N speed on the one hand, and adaptive flexibility on the other, is finally up to the users' decisions to get their jobs done with the desirable properties. Then, to illustrate (1), a new invariant optoelectronic architecture with a wedge-shaped filter in the WT domain is given for scale-invariant signal classification by neural networks.
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite- length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
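Two of the named ingredients, symmetric boundary handling and scalar quantization of wavelet subbands, can be sketched with PyWavelets as below; the actual 64-subband decomposition tree, the standard's bin widths, and the Huffman stage are not reproduced, and the test image is a random stand-in for a 500 dpi scan.

```python
import numpy as np
import pywt

rng = np.random.default_rng(11)
fingerprint = rng.random((256, 256)) * 255.0          # stand-in for a fingerprint scan

# Symmetric boundary handling via PyWavelets' 'symmetric' mode.
coeffs = pywt.wavedec2(fingerprint, 'bior4.4', mode='symmetric', level=4)

def quantise(c, step):
    return np.round(c / step) * step

step = 8.0                                            # illustrative uniform bin width
q_coeffs = [quantise(coeffs[0], step)]
q_coeffs += [tuple(quantise(band, step) for band in level) for level in coeffs[1:]]

recon = pywt.waverec2(q_coeffs, 'bior4.4', mode='symmetric')[:256, :256]
mse = np.mean((recon - fingerprint) ** 2)
print("PSNR after scalar quantisation: %.1f dB" % (10 * np.log10(255.0 ** 2 / mse)))
```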
In this paper a review of some significant recent theoretical connections between fractional Brownian motion, wavelets, and low-frequency 1/f-type noise with a spectrum of the form ω^(-α), 1 ≤ α ≤ 2, is presented. Fractional Brownian motion is a parsimonious model (it depends on two parameters) that links the covariance of the sample path of a random signal with its power spectrum. The wavelet transform of fractional Brownian motion has a correlation function and spectral distribution that are known. The applicability of the theory is illustrated using data from an Amber focal plane array by showing that the wavelet transform can decorrelate a 1/f-type fixed pattern noise spectrum in a predictable fashion.
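A sketch of the scaling behaviour that underlies the decorrelation claim, on synthetic data rather than the Amber focal-plane measurements: for ω^(-α) noise the log2-variance of the wavelet detail coefficients grows roughly linearly across octaves with slope close to α.

```python
import numpy as np
import pywt

rng = np.random.default_rng(12)
n, alpha = 2 ** 14, 1.5

# Synthesize 1/f^alpha noise by spectral shaping of white noise.
freqs = np.fft.rfftfreq(n)
shaping = np.zeros_like(freqs)
shaping[1:] = freqs[1:] ** (-alpha / 2.0)        # amplitude ~ f^(-alpha/2) -> PSD ~ f^(-alpha)
spectrum = shaping * (rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs)))
x = np.fft.irfft(spectrum, n)

# Variance of the detail coefficients per octave, finest to coarsest.
coeffs = pywt.wavedec(x, 'db3', level=8)
log_var = [np.log2(np.var(d)) for d in coeffs[-1:0:-1]]
octaves = np.arange(1, len(log_var) + 1)
slope = np.polyfit(octaves, log_var, 1)[0]
print("fitted slope (should be near alpha = %.1f): %.2f" % (alpha, slope))
```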
Wavelet processing followed by a neural network classifier is shown to give higher blob detection rate and lower false alarm rate than simply classifying single pixels by their spectral characteristics. An on-center, off-surround wavelet is shown to be highly effective in removing constant-mean background areas, as well as ramping intensity variations that can occur due to camera nonuniformities or illumination differences. Only a single wavelet dilation is tested in a case study, but it is argued that wavelets at different scales will play a useful role in general. Adaptive wavelet techniques are discussed for registration and sensor fusion.
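A short check of the background-rejection property, assuming a difference-of-Gaussians kernel as the on-center/off-surround wavelet: constant and linearly ramping backgrounds are nulled in the image interior while a compact blob survives. The filter widths and the test scene are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def on_center_off_surround(img, sigma_c=1.5, sigma_s=4.0):
    """Difference-of-Gaussians approximation to an on-centre/off-surround wavelet."""
    return gaussian_filter(img, sigma_c) - gaussian_filter(img, sigma_s)

yy, xx = np.mgrid[0:128, 0:128].astype(float)
constant = np.full((128, 128), 50.0)
ramp = 0.3 * xx + 0.1 * yy                      # illumination / camera-nonuniformity ramp
blob = 20.0 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 18.0)

response = on_center_off_surround(constant + ramp + blob)
interior = response[20:-20, 20:-20]             # ignore image-border effects

print("response far from the blob (should be ~0):", float(np.abs(interior[:16, :16]).max()))
print("response at the blob centre              :", float(response[64, 64]))
```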
The same word uttered by different people has different waveforms. It has also been observed that the same word uttered by the same person has different waveforms at different times. This difference can be characterized by time-domain dilation effects in the waveform. In our experiment a set of words was selected and each word was uttered eight times by five different speakers. The objective of this work is to extract a wavelet basis function for the speech data generated by each individual speaker. The wavelet filter coefficients are then used as a feature set and fed into a neural network-based speaker recognition system. This is an attempt to cascade a wavelet network (wavenet) for feature extraction and a neural network (neural-net) for classification, applied to speaker recognition. The results are very promising and show good prospects for coupling wavelet networks and neural networks.
A novel method, N-Wavelet Coding, for pattern detection and classification based on the wavelet transform and coding theory is introduced in this paper. Using this detection and classification technique it is possible to reduce the false alarm rate while maintaining a constant probability of detection and noise level. A set of periodic continuous waveforms comprising six classes, spanning cross-correlation coefficients between 0.68 and 0.99, is used to evaluate the N-Wavelet Coding technique. It is found that by increasing the length N of the N-Wavelet Coding it is possible to decrease the false alarm rate while maintaining a constant probability of detection.
Paper from the Russian Conference on Iconics and Thermovision Systems (TeMP'91)
Image simulation based on real images is discussed, using the technique of fractal representation. In the first stage, fractal features are calculated from real images, and in the second stage these features are used to simulate images.
The problems of utilizing the Chinese remainder theorem (CRT) for image coding in systems for image registration, storage, processing, and translation are touched upon. The number-theoretic algorithms using the CRT permit the digital image structure to be adapted to the parameters of iconic systems. The image transformation based on the CRT is isomorphic with respect to linear operators. Classical number-theoretic coding based on number and polynomial residues does not allow one to realize the full possibilities of iconic systems. A new method of image coding, based on a nonclassical use of the CRT, is suggested. This method allows one to use the speed of response and accuracy of iconic systems more successfully. The main factors restricting the possibilities of the method are analyzed.
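For reference, the classical residue-number encoding that the paper starts from can be sketched as follows (the nonclassical variant proposed here is not reproduced): pixel values are represented by residues modulo pairwise-coprime moduli, on which linear operations can act channel by channel, and are recombined with the CRT.

```python
import numpy as np

MODULI = (7, 11, 13)          # pairwise coprime; dynamic range 7*11*13 = 1001 > 255

def to_residues(values, moduli=MODULI):
    """Encode integer pixel values as residue channels (one per modulus)."""
    return [np.mod(values, m) for m in moduli]

def from_residues(residues, moduli=MODULI):
    """Recombine residue channels with the Chinese remainder theorem."""
    M = int(np.prod(moduli))
    result = np.zeros_like(residues[0], dtype=np.int64)
    for r, m in zip(residues, moduli):
        Mi = M // m
        inv = pow(Mi, -1, m)              # modular inverse of Mi modulo m
        result = (result + r.astype(np.int64) * Mi * inv) % M
    return result

img = np.arange(256, dtype=np.int64).reshape(16, 16)   # all 8-bit grey levels
channels = to_residues(img)
print("lossless round trip:", bool(np.array_equal(from_residues(channels), img)))
```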