A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5 megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
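The abstract does not spell out the registration method; as an illustrative sketch only, the following fragment shows one standard way to estimate the offset between two overlapping frames by phase correlation and to paste a registered frame into a large composite canvas. The function names and the use of phase correlation are assumptions for the example, not a description of the study's actual pipeline.

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the integer (row, col) offset of `frame` relative to `ref`
    via phase correlation (one common registration approach; not
    necessarily the method used in the study)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(frame)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks beyond the half-size as negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

def paste(canvas, frame, top, left):
    """Copy a registered frame into the virtual composite at (top, left)."""
    h, w = frame.shape
    canvas[top:top + h, left:left + w] = frame
    return canvas
```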
We describe PIP (Prolog image processing)--a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of systems, developed under the collective title Prolog+, whose implementation has involved the third author. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image processing hardware, the present system implements the image processing routines in software. The second is that the present system is hierarchical in structure: the top level of the hierarchy emulates Prolog+, but a flexible infrastructure underneath supports more sophisticated image manipulation, which we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. Although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the paper.
A facility has been developed for producing high quality tomographs of order one micrometer resolution. Three dimensional volumes derived from groups of adjacent tomographic slices are then viewed and navigated in a stereographic viewing facility. This facility is being applied to problems in geological evaluation of oil reservoir rock, medical imaging, protein chemistry, and CADCAM.
Textron has designed and built a high-powered CO2 laser radar for long range targeting and remote sensing. This is a coherent, multi-wavelength system with a 2D, wide-band image processing capability. The digital processor produces several output products from the transmitter return signals, including range, velocity, angle, and 2D range-Doppler images of hard-body targets (LADAR mode). In addition, the processor sorts and reports on data acquired from gaseous targets by wavelength and integrated path absorption (LIDAR mode). The digital processor has been developed from commercial components, with a SUN SPARC 20 serving as the operator workstation and display. The digital output products are produced in real time and stored off-line for post-mission analysis and further target enhancements. This LADAR is distinguished from other designs primarily by the waveforms produced by the laser for target interrogation. The digital processing algorithms are designed to extract certain features through operation on each of the two waveforms: a pulse-tone, designed for target acquisition and track, and a pulse-burst, designed for 2D imaging. The algorithms are categorized by function as acquisition/track, 2D imaging, integrated absorption for gaseous targets, and post-mission enhancements such as tomographic reconstruction from multiple looks at targets from different perspectives. Field tests are now in progress, and results acquired from February to June 1996 will be reported. The digital imaging system, its architecture, algorithms, simulations, and products will be described.
In this paper, an MIMD parallel system is proposed for image processing applications. The system comprises four worker processors and one master processor, connected over a local area network. Results of using the system to implement low-level image processing algorithms and to recognize objects in parallel will be reported.
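The abstract does not describe the software running on the master and worker processors; the sketch below only illustrates the general master/worker partitioning of a low-level operation, using Python multiprocessing on one machine in place of the paper's local area network. All names and the choice of a 3x3 box filter are assumptions for illustration.

```python
import numpy as np
from multiprocessing import Pool

def smooth_strip(strip):
    """Low-level operation run by a worker: 3x3 box filter on one strip.
    Each strip is padded locally, so results differ slightly at seams."""
    padded = np.pad(strip, 1, mode="edge")
    out = np.zeros(strip.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + strip.shape[0],
                          1 + dc:1 + dc + strip.shape[1]]
    return out / 9.0

def master(image, n_workers=4):
    """Master: partition the image, farm strips out to workers, reassemble."""
    strips = np.array_split(image, n_workers, axis=0)
    with Pool(n_workers) as pool:
        results = pool.map(smooth_strip, strips)
    return np.vstack(results)

if __name__ == "__main__":
    img = np.random.rand(512, 512)
    print(master(img).shape)
```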
Secreted skin oil is a complex mixture of lipids, cholesterol, fatty acids, and a large number of other components. Its composition varies among individuals and with changes in physiology. In this paper, the feasibility of obtaining reproducible spectra of skin oils from individuals with a very simple, noninvasive technique is reported. Using pattern recognition algorithms, spectra could be classified on the basis of ethnicity and gender. Differences in spectra between individuals were larger than those between replicate samples taken from the same individual. While there are easier techniques for gender and ethnic identification, our purpose in this paper is to show that information of some value exists in skin-oil spectra. We believe that this approach could be used for practical discrimination problems, such as distinguishing high from low cholesterol levels, if confirmatory information for training such classifiers were available.
The domain of industrial automation has been of heightened interest since the introduction of robotic vision in the workplace. The system discussed in this abstract is designed to be utilized in the areas of machine and robotic vision for the industrial field. It can serve as an inspection or location-finding device for real-time closed-loop automated systems. The performance advantage of this system is gained through the use of a massively parallel architecture. The system is built on a VLSI wafer, where a full application-specific system is fabricated on a chip. These design techniques take advantage of massively parallel hardware at the lower stages of image processing, since increasing the performance of this stage can significantly improve the total throughput of the system. The design procedure is based on custom ASIC hardware that is optimized for the task at hand. Additionally, the elements have been designed in a modular and reusable manner. The processing of system data is accomplished primarily by parallel hardware units, with firmware assigned the responsibility of configuring and defining the task. This system incorporates the design theorems of image processing with the fundamentals of high-speed architectures. This design technique, in conjunction with parallel processing principles, will overcome the time limitations that have persisted in current approaches. The system is designed with three functional layers, each composed of parallel architectures and local control. A main control system is also present to monitor overall system functionality and issue queuing commands for the processes in a non-blocking manner. The proposed system theory and design will be presented with the VLSI layout, simulation and modeling in the CADENCE design environment. Test results and prototype examples are used to determine the success of the system and test its limitations.
A novel syntactic region growing and recognition algorithm called SRG will be presented. The primary function of the SRG algorithm is the detection of structured regions of interest in a given image. The recognition technique operates on selected region features and is the subject of a companion paper. The SRG algorithm can be outlined as follows. Preprocessing is performed to isolate the kernels of potential regions. First-level regions are grown until region transitions are detected. Regions sharing the same transition boundary are merged. Second-level regions are grown and merged if they are within the same transition boundary. The growing process is repeated as necessary until the whole image or a specified region of interest is covered. Features such as area and depth are computed for regions at each level and are used for recognition. The recognition is based on region features without regard to region level. The algorithm was designed for analysis of nondestructive evaluation images. In particular, it was successfully tested on x-ray images and on ultrasonic contact scan images of ceramic specimens for detection of microstructural defects. The results of these tests are included in the paper.
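The transition-boundary test that stops growth is defined in the paper itself; the sketch below is only a generic illustration of growing one region from a kernel pixel, using a simple gray-level tolerance as a stand-in for that test.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, tol):
    """Grow a region from one kernel pixel, adding 4-neighbors whose gray
    value stays within `tol` of the seed value (a stand-in for the paper's
    transition-boundary test, which is not specified in the abstract)."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    region[seed] = True
    seed_val = float(image[seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(float(image[nr, nc]) - seed_val) <= tol:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region
```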
Using information theory, this investigation proposes a novel technique for shape description which is invariant to translation and rotation and, in most cases, also to scale. The new numeric shape descriptor is based on a measure of the rotational information content of an image. In this paper we first review some popular metric shape description features. These features are then used to analyze the feasibility of using rotational information for shape description, and by means of a comparative study we show how the rotational information is related to well known metric shape descriptors such as area, circularity and elongation. Finally, the results obtained are discussed and analyzed, and conclusions are drawn in terms of the suitability of the technique for shape description in image recognition problems.
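As background to the comparative study, the sketch below computes the classic metric descriptors the abstract mentions (area, circularity and elongation) for a binary shape, using common textbook definitions; the paper's exact formulations, and its rotational-information measure, are not reproduced here.

```python
import numpy as np

def metric_shape_descriptors(mask):
    """Area, circularity and elongation of a binary shape.  These are the
    common textbook definitions; the paper's exact formulations may differ."""
    mask = mask.astype(bool)
    area = mask.sum()
    # Crude perimeter estimate: foreground pixels with at least one
    # 4-connected background neighbor.
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = boundary.sum()
    circularity = 4.0 * np.pi * area / (perimeter ** 2)
    # Elongation from the eigenvalues of the second central moments.
    ys, xs = np.nonzero(mask)
    yc, xc = ys.mean(), xs.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    common = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam1 = (mu20 + mu02 + common) / 2
    lam2 = (mu20 + mu02 - common) / 2
    elongation = np.sqrt(lam1 / lam2) if lam2 > 0 else np.inf
    return area, circularity, elongation
```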
Visual tracking is a basic skill needed for active vision, traffic surveillance and many robotic applications. The inherent dynamics are studied in this paper. The result is that an optimum in dynamic tracking performance is reached when the time to process the image is equal to the time necessary to sample the image. This is valid for all systems using conventional CCD cameras and area windows, as practically all present approaches do. This optimum applies to tracking within a camera as well as to the active vision approach of steering the camera. It is valid for square, circular, and multiple windows. It is further shown that reducing sampling time can improve tracking only to a certain degree with present-technology cameras.
A new definition for the connected components of gray images that takes into account both the gray values of the pixels and the differences of the gray values of neighboring pixels is investigated. Our definition depends on two parameters, ε and δ, so we call these components the (ε, δ)-components. We describe a method to find the (ε, δ)-components for a given image. We discuss the applications of (ε, δ)-components to segmentation and understanding of gray images. We describe a method to study images at an intermediate level through the histograms of these components. For appropriate values of ε and δ, an object in an image may be represented by such a component. We discuss a method to adjust the values of ε and δ so that object extraction and segmentation may be done by locating the corresponding components. Our approach provides a possible path of transition from low-level computer vision to higher-level vision. Since we do not make any assumptions about the formation model of the image data, the proposed method can be applied to many types of images.
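The precise (ε, δ) connectivity rule is developed in the paper; purely for illustration, the sketch below labels components under one plausible reading of it, in which a neighboring pixel joins a component when its gray value is within δ of the current pixel and within ε of the component's seed value.

```python
import numpy as np
from collections import deque

def eps_delta_components(image, eps, delta):
    """Label connected components under an illustrative (eps, delta) rule:
    a neighbor joins a component if its gray value differs from the current
    pixel by at most `delta` and from the component seed by at most `eps`.
    (An assumption for this sketch; see the paper for the exact definition.)"""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sr in range(h):
        for sc in range(w):
            if labels[sr, sc]:
                continue
            next_label += 1
            labels[sr, sc] = next_label
            seed_val = float(image[sr, sc])
            queue = deque([(sr, sc)])
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w and not labels[nr, nc]:
                        v = float(image[nr, nc])
                        if (abs(v - float(image[r, c])) <= delta
                                and abs(v - seed_val) <= eps):
                            labels[nr, nc] = next_label
                            queue.append((nr, nc))
    return labels
```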
We have attained initial functionality of a real-time acuity-based video transformation. It matches the transmitted local resolution of video images to the eccentrically varying acuity of the viewer's visual system. In previous variable-resolution imagery, variable blockiness produces disturbing aliasing effects. We show how probabilistic methods can be used to perform anti-aliasing on variable-resolution images, so that smoothing interpolation need not be done to defeat the aliasing. Especially when used in dynamic imaging, the methods consistently reduce the high-frequency artifacts perceived by the human eye. The effectiveness of these techniques has been demonstrated with the NASA/Texas Instruments Programmable Remapper, which is able to apply the anti-aliasing methods on the fly to the low-bandwidth, acuity-based video signal. Video imagery will be shown to demonstrate the technique.
We describe an infrared stereo imaging method for the 3D target tracking of distant moving point sources. In scenes which are typically lacking in significant features, correspondence between the two camera images is simplified by the application of a triple temporal filter to consecutive frames of data. This filter simultaneously accentuates the target and suppresses background clutter. We apply this stereo tracking technique to experimental range measurements in which the target is tracked with sub-pixel precision.
The resolution achievable in imaging objects in space from ground-based telescopes is limited by atmospheric turbulence. If enough naturally occurring illumination is available, then speckle imaging techniques can be used to recover the original object phase from short-exposure images. Analogous techniques exist for recovering the phase of a laser-illuminated object from measurements of either the incoherent Fourier modulus or the coherent Fourier modulus. In both cases many exposures are required to accumulate sufficient statistics. In the case of coherent illumination, the lack of a priori information concerning the object makes image reconstruction very difficult. In this paper we discuss one approach to circumventing these difficulties, in which multiple modulated laser beams are reflected off an object and the relative phase between the beams is measured at a simple light-bucket receiver. The original object phase is recovered from the phase differences using an iterative reconstructor.
In astronomical imaging, the level of the zero-frequency component of an image is usually unknown relative to all other components. This problem arises because the overall object brightness cannot easily be separated from the background when viewing small, faint objects. This affects image interpretability and is therefore a problem that is ubiquitous in high-resolution astronomical imaging. Potential solutions include various interpolation techniques and image-constraint techniques. These approaches are described, and performance is evaluated with an optimal interpolator that accounts for sample density, signal-to-noise ratio, and the object's overall shape. Novel analytic expressions are obtained which provide insight into the limitations of any restoration approach, and practical means for achieving those limits.
This paper briefly describes the field of diagnostic radar imaging and discusses how the techniques of mathematical morphology may be brought to bear on the various forms of data commonly used in it, to accomplish different types of tasks. These tasks include tests of data quality, analyses of clutter and radar parameters, and analyses of scattering centers and the mechanisms that give rise to them. Examples are given of morphological processing that can be performed on raw or reconstructed phase history data, on high resolution range profiles, on single magnitude images or sequences of images, and on images based on the image-domain phase. New results are described which suggest that the image phase itself may carry useful information about scattering mechanism types. As this paper shows, mathematical morphology provides a little-utilized yet rich set of tools for the analysis of shape-related phenomena in diagnostic imaging radar data.
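As an illustration of the kind of morphological operation the paper applies to range profiles and images, the sketch below implements gray-scale opening of a 1-D range profile with a flat structuring element, which suppresses bright spikes narrower than the element; the structuring-element size and the choice of opening are illustrative, not taken from the paper.

```python
import numpy as np

def erode(signal, size):
    """Gray-scale erosion of a 1-D range profile with a flat structuring
    element of `size` samples."""
    pad = size // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.min([padded[i:i + len(signal)] for i in range(size)], axis=0)

def dilate(signal, size):
    """Gray-scale dilation with the same flat structuring element."""
    pad = size // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.max([padded[i:i + len(signal)] for i in range(size)], axis=0)

def opening(signal, size=5):
    """Opening (erosion followed by dilation) removes bright spikes narrower
    than the structuring element -- one simple clutter test that could be
    applied to a high-resolution range profile."""
    return dilate(erode(signal, size), size)
```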
Non-linear filters have proved to be powerful tools in image analysis, enhancement and restoration. Filters such as the ranked-order, median, alpha-trimmed mean, and weighted median filters and the class of morphological filters have been investigated thoroughly and have been shown to be indispensable in certain image processing applications. Another interesting, though unrelated, research area has been the use of fuzzy set theoretic operations in image processing. The underlying idea is to use fuzzy membership grades to represent image characteristics and features. This paper seeks to unify these two methodologies, resulting in an interesting approach to image enhancement problems. The emphasis is on using non-linear operations to generate a local fuzzy property plane, which is then used as an adaptive filter based on local image characteristics. In this paper, we propose a novel localized filtering approach similar to non-linear filters, the point of departure being that we work in the fuzzy property plane rather than in the image domain itself. For instance, we may realize a fuzzy implementation of the median filter, which traditionally operates on actual pixel intensities, so that it operates on fuzzy memberships instead. In our experiments, we consider locally adaptive contrast enhancement of x-ray images, which typically have low contrast. The results are compared with traditional enhancement techniques such as histogram equalization. The results provide better subjective quality than the other approaches, as is also evident from the histograms of the processed images.
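The membership function and enhancement operator used by the authors are not given in the abstract; the sketch below illustrates the general idea with generic choices: intensities are mapped to a fuzzy membership plane, the median filter is applied to the memberships rather than the intensities, a simple intensification step is applied, and the result is mapped back to gray levels.

```python
import numpy as np

def fuzzy_median_enhance(image, window=3):
    """Illustrative fuzzy-property-plane filtering: map intensities to
    membership grades, median-filter the membership plane, apply a simple
    contrast intensification, and map back.  The membership and
    intensification functions below are generic choices, not necessarily
    those of the paper."""
    lo, hi = float(image.min()), float(image.max())
    mu = (image - lo) / (hi - lo + 1e-12)          # membership plane in [0, 1]

    pad = window // 2
    padded = np.pad(mu, pad, mode="edge")
    # Median filter computed on membership grades rather than intensities.
    stack = [padded[r:r + mu.shape[0], c:c + mu.shape[1]]
             for r in range(window) for c in range(window)]
    mu_med = np.median(np.stack(stack), axis=0)

    # Contrast intensification: push membership grades away from 0.5.
    mu_int = np.where(mu_med <= 0.5, 2 * mu_med ** 2,
                      1 - 2 * (1 - mu_med) ** 2)
    return lo + mu_int * (hi - lo)                  # back to the gray scale
```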
The human visual system performs the tasks of dynamic range compression and color constancy almost effortlessly. The same tasks pose a very challenging problem for imaging systems whose dynamic range is restricted by either the dynamic response of film, in the case of analog cameras, or by the analog-to-digital converters, in the case of digital cameras. The images thus formed are unable to encompass the wide dynamic range present in most natural scenes. Whereas the human visual system is quite tolerant to spectral changes in lighting conditions, these strongly affect both the film response for analog cameras and the filter responses for digital cameras, leading to incorrect color formulation in the acquired image. Our multiscale retinex, based in part on Edwin Land's work on color constancy, provides a fast, simple, and automatic technique for simultaneous dynamic range compression and accurate color rendition. The retinex algorithm is non-linear and global in extent: the output at a point is also a function of its surround. A comparison with conventional dynamic range compression techniques, such as the application of point non-linearities, is presented. The applications of such an algorithm are many, from medical imaging to remote sensing, and from commercial photography to color transmission.
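The essential single-channel retinex computation is well known: at each scale, subtract the log of a Gaussian-blurred surround from the log of the image, then combine the scales. The sketch below uses typical illustrative scales and equal weights; the authors' exact scales, weights, and color-restoration steps are not specified in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250)):
    """Single-channel multiscale retinex: a weighted sum, over several
    scales, of log(image) - log(Gaussian surround).  The scales and equal
    weights here are illustrative values, not the authors' settings."""
    x = channel.astype(float) + 1.0               # avoid log(0)
    out = np.zeros_like(x)
    for sigma in sigmas:
        surround = gaussian_filter(x, sigma)
        out += np.log(x) - np.log(surround)
    return out / len(sigmas)
```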
One of the most challenging image processing applications is image restoration, which refers to methods for removing various sources of distortion that may have corrupted the ideal image data. Most image restoration techniques were developed to process one image at a time and do not take advantage of the availability of multiple frames of essentially the same scene. More recently, advances in image processing hardware have allowed us to process a sequence of images at or near frame rate, and new techniques have been developed to process multiple frames of images for various applications. In this paper we propose a novel iterative restoration technique, based on Kaczmarz's method, that processes a sequence of frames and produces output images with higher resolution and larger signal-to-noise ratio than the input image sequence. The proposed method provides a natural and simple way to improve image resolution by exploiting the relative scene motion from frame to frame. This relative motion can be due to the motion of the imaging platform and/or the motion of objects in the scene. We discuss the first case in some detail, show how to apply the novel iterative method to the problem, and present experiments using real data to demonstrate the algorithm's performance.
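Kaczmarz's method itself is the classical row-action iteration for a linear system; the sketch below shows that iteration in its generic form. How the multiframe imaging model (frame-to-frame motion, sampling, blur) is encoded in the system matrix is the substance of the paper and is not reproduced here.

```python
import numpy as np

def kaczmarz(A, b, sweeps=20, x0=None):
    """Classical Kaczmarz iteration for A x = b: cycle through the rows and
    project the current estimate onto each row's hyperplane.  In the
    multiframe setting, each row would encode how one low-resolution pixel
    samples the high-resolution scene after the estimated motion (that
    system matrix is what the paper constructs; it is not shown here)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```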
For signal representation, it is always desirable that a signal be represented using a minimum number of parameters. An important criterion in signal representation is the orthogonality of the constituent basis functions of a transform. There are various orthogonal transforms, such as the Karhunen-Loeve, discrete cosine, Haar, and discrete Fourier transforms, but the choice of a particular transform in a given application depends on the amount of reconstruction error that can be tolerated and the computational resources available. The approximate Fourier expansion (AFE) for non-periodic signals with theoretically uncorrelated coefficients has previously been proposed. In this paper, we will give a system interpretation of the AFE using generalized harmonic analysis. Furthermore, we will investigate some mathematical properties of the discrete AFE. Finally, we will apply the AFE to images and show that, for purposes of decorrelation, it is better than the discrete Fourier transform. For comparison, the results will also be compared with the discrete cosine transform. Computer simulation results will also be presented.
A random edge is the center of an in-focus strip formed by viewing a vertical surface through a conventional microscope. The in-focus strip is the image of the part of the surface that lies inside the depth of field of the microscope. The image of the part of the surface that lies outside the depth of field is blurred and contains only a low-frequency illumination profile. The location of this random edge has been used to make internal measurements within holes as small as 200 microns in diameter. A specialized edge detector has been designed to find the center of the in-focus region. The detector first locates the in-focus strip. It then extracts the surface background detail from the out-of-focus part of the image. This out-of-focus information is used to normalize the intensity distribution over the image. The image is then smoothed to remove random noise. The location of the center of the in-focus strip is then found by applying a weighting function constructed from the positions of individual details located within the edge region. This method has been used to measure inside a 0.25 mm diameter hole. The hole is used to carry coolant through a turbine blade. The accuracy of the measurement is better than +/- micrometers.
Visual pattern recognition and visual object recognition are central aspects of high-level computer vision systems. This paper describes a method of recognizing patterns and objects in digital images containing several types of objects in different positions. The moment invariants of such real-world, noise-containing images are processed by a neural network, which performs the pattern classification. Two learning methods are adopted for training the network: the conjugate gradient and the Levenberg-Marquardt algorithms, both in conjunction with simulated annealing, for different sets of error conditions and features. Real images are used for testing the net's correct class assignments and rejections. We present results and comments focusing on the system's capacity to generalize, even in the presence of noise, geometrical transformations, object shadows and other types of image degradation. One advantage of the artificial neural network employed is its low execution time, allowing the system to be integrated into an industrial assembly line for automated visual inspection.
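The abstract does not say which moment invariants are used as network inputs; as an illustration, the sketch below computes the first two Hu moment invariants of a gray image, which are invariant to translation, rotation and scale.

```python
import numpy as np

def hu_invariants_first_two(image):
    """First two Hu moment invariants of a gray image, invariant to
    translation, rotation and scale.  Which invariants (and how many) feed
    the paper's neural network is not specified in the abstract."""
    ys, xs = np.indices(image.shape)
    m00 = image.sum()
    xc = (xs * image).sum() / m00
    yc = (ys * image).sum() / m00

    def mu(p, q):
        """Central moment of order (p, q)."""
        return (((xs - xc) ** p) * ((ys - yc) ** q) * image).sum()

    def eta(p, q):
        """Scale-normalized central moment."""
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```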
Motivated by recent interest in intelligent transportation systems, this paper considers the problem of tracking diverse vehicles as they traverse a roadway instrumented with video cameras. From vehicle tracks it is straightforward to compute basic traffic parameters such as flow, speed, and concentration. The vehicles to be tracked can be dense and we assume that computational resources are limited. Therefore, we cannot consider 3D processing but rather must partition the problem as much as possible into 1D or 2D problems. The key simplifying aspect is that the vehicles follow known tracks.
An iterative algorithm is presented that sends the information of a real and non-negative image to its Fourier phase. The result is a complex image with uniform amplitude in the Fourier domain. Thus in the frequency domain all the information is in the phase and the amplitude is a constant for all frequencies. In the space domain the image, although complex, has the desired property that its absolute value is the same as the original real image. Matched filtering is a common procedure for image recognition, with the conjugate of the Fourier transform of the model being the frequency response of the filter. It is shown that using these new complex images instead of the ordinary real images makes the output of the filter very peaked in the case of a match and widely spread in the case of a mismatch. Thus the new filter has highly superior performance over conventional matched filtering. It is also shown that, owing to the above properties, the new filter performs very well when the filter's input is highly corrupted by additive noise.
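The paper's iterative algorithm also preserves the space-domain absolute value; the simplified sketch below only illustrates the end state it aims for (unit Fourier amplitude, information carried by the phase) and the conjugate-transform matched filter used for recognition.

```python
import numpy as np

def phase_only_image(image):
    """Send the information of a real, non-negative image into its Fourier
    phase: force the Fourier magnitude to one and transform back.  (The
    paper reaches this state with an iterative algorithm that also preserves
    the space-domain absolute value; this one-step version is a simplified
    illustration.)"""
    spectrum = np.fft.fft2(image)
    flat = np.exp(1j * np.angle(spectrum))          # unit amplitude, same phase
    return np.fft.ifft2(flat)

def matched_filter_output(scene, model):
    """Matched filtering: the filter's frequency response is the conjugate
    of the model's Fourier transform."""
    S = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(model, s=scene.shape))
    return np.fft.ifft2(S * H)
```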
In this paper we use symbolic data analysis in color image processing. First, we define symbolic objects, and we also describe a method of clustering in order to find the optimal number of homogeneous parts in the image. Next, we describe similarity and dissimilarity measures for symbolic objects, and we define a new similarity measure between symbolic objects with quantitative variables. Finally we present results with real images, using the new similarity between pixels, taking into account their color and also their neighborhood.
The problem of object recognition in images regardless of scale and orientation is considered in this paper. A framework is used to train and to recognize or classify a transformed object. A set of features obtained from the short-time Fourier transform of the object is used for scale- and rotation-invariant recognition. An analysis window is used to compute the short-time Fourier transform. The Fourier magnitudes in the polar domain constitute the scale-invariant and rotation-invariant features. Since short-time sections are used in this method, the features are more separable because of the localization of the window, which is useful for discriminating variants of very similar objects. The recognition system is tested for different sets of scales and rotations of several objects. This framework performed well for the range of scales and orientations of the objects considered. The framework is computationally efficient and showed robustness in the presence of noise. The tasks involved are simple and the framework can be used for real-time applications.
A difficult problem in imaging systems is the degradation of images caused by motion. This problem is common when the imaging system is in moving vehicles such as tanks or planes, and even when the camera is held by human hands. For correct restoration of the degraded image we need to know the point spread function (PSF) of the blurring system. In this paper we propose a method to identify important parameters with which to characterize the PSF of the blur, given only the blurred image itself. A first step of this method was suggested in a former paper, where only the blur extent parameter was considered. The identification method here is based on the concept that image characteristics along the direction of motion are different from the characteristics in other directions. Depending on the PSF shape, the homogeneity and the smoothness of the blurred image in the motion direction are higher than in other directions. Furthermore, in the motion direction correlation exists between the pixels forming the blur of the original unblurred objects. The method proposed here identifies the direction and the extent of the PSF of the blur and evaluates its shape, which depends on the type of motion during the exposure. Correct identification of the PSF parameters permits fast high-resolution restoration of the blurred image.
We present a technique for direction-sensitive image restoration based on the polynomial transform. The polynomial transform is an image description model which incorporates important properties of visual perception, such as the Gaussian-derivative model of early vision. The polynomial transform basically consists of a local description of an image. Localization is achieved by multiplying the image with overlapping window functions. In the case of the discrete polynomial transform, the contents of the image within every position of the analysis window are represented by a finite set of coefficients. These coefficients correspond to the weights in a polynomial expansion that reconstructs the image within the window function. It has been shown how the polynomial transform can be used to design efficient noise-reduction algorithms by adaptively transforming the coefficients of every window according to the image contents. Other types of transformations on the polynomial coefficients lead to different image-restoration applications, such as deblurring, coding, and interpolation. In all cases, the restored image is obtained by means of an inverse polynomial transform, which consists of interpolating the transformed coefficients with pattern functions that are products of a polynomial and a window function. We show in this paper how image restoration, namely noise reduction and deblurring, based on the polynomial transform can be improved by detecting the position and orientation of relevant edges in the image.
We present our results to date on the application of super-resolution techniques to passive millimeter-wave imagery and discuss the merits of both linear and non-linear methods, giving an indication of the improvement that can be obtained. Passive millimeter-wave imagery is potentially useful where visibility through poor weather is required. Its spatial resolution, however, is severely restricted by the diffraction limit of the optics. Super-resolution methods may be used to increase this spatial resolution, but often at the expense of processing time. Linear methods may be implemented in real time, whereas non-linear methods, which are required to restore images with lost spatial frequencies, are more time consuming. There is clearly a trade-off between resolution and processing time. In order to make any useful comparisons it is necessary to quantify any improvements; we do this by investigating the resolution and spatial frequency content of the images. We have applied our super-resolution algorithms to conventional images as well as to millimetric bar pattern images acquired at 94 and 140 GHz. These methods give excellent results, providing a significant quantifiable increase in spatial resolution with only a small reduction in the final signal-to-noise ratio. Comparisons will be made between the results obtained with various super-resolution algorithms.
In the field of human animation, hair represents one of the most challenging problems and has therefore been one of the least satisfactory aspects of human images rendered to date. This paper proposes a method to generate a realistic hair model for individuals based on image processing. The analysis and recognition of hair strands by image processing provides valuable data, particularly the hair outline and the flow direction of the hair, for the rendering of a realistic hair model for individuals. The image is binarized prior to line extraction, and the hair region is determined by a series of expansions and contractions. These data provide the basic guidance for the generation of the hair model. A simplified spring model is used for the hair modeling. In this spring system, a strand of hair is modeled as a series of interconnected masses, springs and hinges. Hair strands are randomly generated on the skull. The outline region acquired through image processing ensures that the randomly generated hair strands fall neatly into the hair region. These strands are randomly rotated to shuffle the hair strands. In the case of hair strands falling out of the outline region, weights are added or reduced at the interconnected masses in order to move the strand back into the hair region. The lines extracted by image processing serve as guidelines directing the hair strands to point in the desired direction. This realistic hair model can find many applications in the generation of synthetic humans and creatures in movies, multimedia and computer game productions.
Penumbral images of neutron distributions from laser fusion experiments provided by large-aperture imaging systems are degraded by signal-dependent noise. Most image processing algorithms assume that the signal and the noise are stationary. The purpose of this paper is to introduce a new approach using a locally adaptive non-stationary filter. This method modifies the stationary Wiener filter approach by trading off noise removal against resolution. We shall describe the method and show some results.
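The exact adaptation rule is described in the paper; the sketch below shows a generic locally adaptive Wiener-type filter of the kind outlined in the abstract, in which the amount of smoothing at each pixel depends on the local variance relative to an assumed noise variance. The window size and noise-variance input are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(image, noise_var, window=7):
    """Locally adaptive (non-stationary) Wiener-type filter: in flat regions
    (local variance near the noise variance) the output tends to the local
    mean, while near resolved detail the pixel is left largely intact.
    The paper's exact adaptation rule may differ from this generic form."""
    x = image.astype(float)
    mean = uniform_filter(x, window)
    mean_sq = uniform_filter(x ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (x - mean)
```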
Synthetic aperture radar (SAR) images have proved to be useful for a variety of land-use analysis tasks, ranging from ice-flow monitoring to flood-damage assessment. SAR images can also be used to derive terrain elevations, thereby greatly increasing their utility. The computation of height information from SAR images is usually accomplished from a stereo pair or an interferometric pair. These two algorithmically different approaches each have their own strengths and weaknesses, but one feature they share is the need for two SAR images. The algorithm presented in this paper requires only a single SAR image, from which terrain information is extracted. A brief outline of the signal-processing algorithm will be presented. It will be clearly shown how our new approach differs from previously presented approaches. Data collected by the shuttle imaging radar-C (SIR-C) over the Lucky Rise area of the Mohave Desert will be used to assess the performance of our new signal-processing algorithm. A detected image will be shown that contains height variation of greater than 500 meters. A new terrain map will be generated by our algorithm and a contour map made from the terrain map. It will be shown that overlaying the resulting contour map on top of the detected image greatly increases the utility of the SIR-C image.
We report on the development of an automated visual inspection system for liquid crystal display (LCD) modules. The system comprises a highly graphical software program, a unique image processing algorithm, a PC interface card for driving different types of LCD modules, and a universal test jig for holding these modules. The system is able to detect faulty or malfunctioning LCD modules with minimal intervention from a human operator; allows testing to be carried out to evaluate display quality measures such as contrast ratio; is able to create new test databases for new LCD modules; allows permanent storage/backup of test databases for each individual LCD module; and provides a friendly user interface for ease of operation. In addition, the system has the following features: different access levels for different users; an editor for creating software drivers for driving different types of LCD modules; and a graphical help facility.
Recently, a new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization and the simplest form of scalar quantization, formalize the problem of optimizing their joint application, and develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive with the very best coding algorithms in the literature.
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
We introduce a novel image-adaptive encoding scheme for the baseline JPEG standard that maximizes the decoded image quality without compromising compatibility with current JPEG decoders. Our algorithm jointly optimizes quantizer selection, coefficient 'thresholding', and entropy coding within a rate-distortion (R-D) framework. It unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection by Wu and Gersho, and R-D optimal coefficient thresholding by Ramchandran and Vetterli. By formulating an algorithm which optimizes these two operations jointly, we have obtained performance that is the best in the reported literature for JPEG-compatible coding. In fact the performance of this JPEG coder is comparable to that of more complex 'state-of-the-art' image coding schemes: e.g., for the benchmark 512 by 512 'Lenna' image at a coding rate of 1 bit per pixel, our algorithm achieves a peak signal to noise ratio of 39.6 dB, which represents a gain of 1.7 dB over JPEG using the example Q-matrix with a customized Huffman entropy coder, and even slightly exceeds the published performance of Shapiro's celebrated embedded zerotree wavelet coding scheme. Furthermore, with the choice of appropriate visually-based error metrics, noticeable subjective improvement has been achieved as well. The reason for our algorithm's superior performance can be attributed to its conceptual equivalence to the application of entropy-constrained vector quantization design principles to a JPEG-compatible framework. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intra-frame MPEG and subband or wavelet coding.
Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system--transform, quantization, and entropy coding--should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE-quantizer, and four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
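As a hedged illustration of the Lagrange-multiplier view of quantizer design, the sketch below picks a uniform stepsize for one subband by minimizing the cost J = D + lambda * R, with the rate estimated from the entropy of the quantized indices. The candidate stepsizes and the rate model are assumptions for the example, not the paper's optimized quantizers.

```python
import numpy as np

def pick_stepsize(coeffs, lam, candidates=(2, 4, 8, 16, 32, 64)):
    """Choose a uniform quantizer stepsize for one subband by minimizing the
    Lagrangian cost J = D + lambda * R, where D is the mean-squared
    quantization error and R is an entropy estimate of the quantized indices
    in bits per coefficient.  Candidate stepsizes and the rate model are
    illustrative choices."""
    best = None
    for q in candidates:
        idx = np.round(coeffs / q)
        recon = idx * q
        distortion = np.mean((coeffs - recon) ** 2)
        _, counts = np.unique(idx, return_counts=True)
        p = counts / counts.sum()
        rate = -(p * np.log2(p)).sum()
        cost = distortion + lam * rate
        if best is None or cost < best[0]:
            best = (cost, q)
    return best[1]
```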
We describe a joint source/channel allocation scheme for transmitting images lossily over block erasure channels such as the Internet. The goal is to reduce image transmission latency. Our subband-level and bitplane-level optimization procedures give rise to an embedded channel coding strategy. Source and channel coding bits are allocated in order to minimize an expected distortion measure. More perceptually important low frequency channels of images are shielded heavily using channel codes; higher frequencies are shielded lightly. The result is a more efficient use of channel codes that can reduce channel coding overhead. This reduction is most pronounced on bursty channels for which the uniform application of channel codes is expensive. We derive optimal source/channel coding tradeoffs for our block erasure channel.
A novel wavelet packet image coder is introduced in this paper. It is based on our previous work on wavelet image coding using space-frequency quantization (SFQ), in which zerotree quantization and scalar quantization are jointly optimized in a rate-distortion sense. In this paper, we extend the powerful SFQ coding paradigm from the wavelet transform to the more general wavelet packet transformation. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filter bank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder is capable of achieving the best coding performance among those in the published literature.
A new distortion measure, entropy-weighted mean square error, is introduced to enhance the perceptual quality of reconstructed images from subband vector quantization schemes. The measure is based on the observation that the subbands containing more information ought to be more accurately represented than those which contain less. A compatible feature extractor for a non-linear interpolative vector quantization scheme is proposed in order to extend the method to higher dimensional vector spaces without incurring an excessive computational burden. The experimental results confirm the predictions of improved perceptual quality.
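One plausible reading of such a measure (our own sketch; weighting by the normalized empirical entropy of each original subband is an assumption, not the paper's exact definition) is a per-subband MSE weighted by subband entropy:

import numpy as np

def band_entropy(band, bins=256):
    """Empirical first-order entropy of a subband, in bits per sample."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_mse(orig_bands, rec_bands):
    """Sum of per-subband MSEs, each weighted by the original subband's normalized entropy."""
    weights = np.array([band_entropy(b) for b in orig_bands])
    weights = weights / weights.sum()
    mses = np.array([np.mean((o - r) ** 2) for o, r in zip(orig_bands, rec_bands)])
    return float(np.dot(weights, mses))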
Motion-compensated prediction is a key technique for achieving high compression performance for a video sequence. Exhaustive search produces the best predictor block but requires intensive computation and expensive hardware. This paper presents a 4:1 checker-board algorithm that reduces the computational complexity of exhaustive search by a factor of 8 while maintaining similar video quality. The algorithm subsamples block pixels by four to one, and subsamples search locations by two to one. The resulting architecture is simple and scalable, and is suitable for real-time encoding.
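A rough Python sketch of the idea follows (hypothetical details: the particular 4:1 pixel subsampling pattern, the 2:1 checkerboard of search locations, and the SAD criterion are our assumptions about how such a scheme might look, not the paper's exact design):

import numpy as np

def subsampled_sad(block, cand):
    """SAD over a 4:1 pixel subsampling (every other pixel of every other row)."""
    return np.abs(block[::2, ::2] - cand[::2, ::2]).sum()

def motion_search(cur, ref, bx, by, bsize=16, radius=8):
    """Best displacement for one block; candidate locations are thinned 2:1 in a
    checkerboard pattern over the search window."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best = (None, 0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy + dx) & 1:            # skip half of the search locations
                continue
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            cost = subsampled_sad(block, cand)
            if best[0] is None or cost < best[0]:
                best = (cost, dy, dx)
    return best  # (SAD, dy, dx)

Evaluating one quarter of the pixels at half of the candidate locations is where the roughly eight-fold reduction in computation comes from.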
Digital imaging offers major advantages over conventional film radiology, especially with respect to image quality, the speed with which the images can be viewed, the ability to perform image processing, and the potential for computer-aided diagnosis. A typical mammographic image requires 10 million pixels of data, assuming 50-micrometer square pixels. Currently, no single sensor can satisfy these specifications. One approach to acquiring full-breast digital images utilizes multiple sub-images from two 1024 by 1024 pixel charge coupled devices. This paper describes how the full-breast image is obtained by translating the sensor apparatus and 'stitching' the sub-images together. Radiologists desire seamless full-breast images, so a 'blending' process was developed to prevent visible seams in the full-breast image. Also, flaws in the detection system are removed by image processing techniques. Finally, the process of enhancing an image for film printing is described.
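A common way to avoid visible seams is to feather the two sub-images across their overlap; the sketch below (a generic linear blend, not necessarily the system's own method, and the horizontal-overlap geometry is an assumption) ramps the weights from one sub-image to the other:

import numpy as np

def blend_horizontal(left, right, overlap):
    """Feather two sub-images that share `overlap` columns: weights ramp linearly
    from the left image to the right image across the shared region."""
    h = left.shape[0]
    w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w), dtype=np.float64)
    out[:, :left.shape[1]] = left
    out[:, w - right.shape[1]:] = right
    ramp = np.linspace(0.0, 1.0, overlap)        # 0 -> pure left, 1 -> pure right
    lo = left.shape[1] - overlap
    out[:, lo:left.shape[1]] = (1 - ramp) * left[:, lo:] + ramp * right[:, :overlap]
    return out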
It is well known that the respiratory motion during computed tomography (CT) causes artifacts that can mimic disease and could lead to mis-diagnosis. In this paper, we present an adaptive weighting scheme for motion artifacts suppression. The weighting scheme is based on the observation that the motion artifacts are caused by the inconsistency in the projection data set at the beginning and end of a scan. In general, the larger the discrepancy between the projections in these two segments, the more pronounced the artifacts. We make use of the redundant data samples in a CT scan and try to minimize the contributions of these views to the final image. By incorporating the information obtained from the external patient motion measurements, the amount of suppression can be tailored to the data set to achieve the best compromise between the patient motion artifacts and the image noise. This method has been applied to real patient scans and its advantages have been demonstrated.
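One way such a weighting scheme might look in code (purely illustrative: the cosine taper, the overlap length, and the use of a scalar motion score are our assumptions, and the renormalization of conjugate views needed in a real reconstruction is only noted in the comment):

import numpy as np

def view_weights(n_views, overlap_views, motion_score):
    """Smoothly de-emphasize the redundant views at the start and end of the scan.
    motion_score in [0, 1] comes from an external patient-motion measurement: 0 means
    no suppression, 1 means full suppression of the overlap views. In an actual
    reconstruction, the weights of conjugate (redundant) view pairs must also be
    renormalized so that each pair sums to one."""
    w = np.ones(n_views)
    t = 0.5 * (1 - np.cos(np.linspace(0, np.pi, overlap_views)))  # smooth 0 -> 1 taper
    w[:overlap_views] = 1 - motion_score * (1 - t)                # low at scan start, rising to 1
    w[-overlap_views:] = 1 - motion_score * t                     # falling toward scan end
    return w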
Single photon emission computed tomography (SPECT) performs real 3D functional imaging in nuclear medicine. Even though the visual interpretation of nuclear medicine images by experienced physicians predominates in diagnostics, quantitative image analysis can reduce subjective observer influences and offers the possibility of using objective numerical criteria. An essential problem of the quantitative evaluation of SPECT images is the determination of functional volumes from reconstructed SPECT data. This is the prerequisite for the estimation of the absolute radionuclide concentration as a high-level quantitation. In this paper we present a combined technique for the determination of functional volumes in SPECT data including elements of voxel-based clustering methods and edge-based fuzzy segmentation.
A reliable autofocus is necessary for any combined image processing and automated microscope system that analyzes scan areas larger than a single field. Autofocus functions for brightfield microscopy have been reported in the literature. Autofocus implementations for fluorescence microscopy have to deal with some technical difficulties, mainly due to the incoherence and faintness of the fluorescent light. In this presentation, autofocus procedures based on image content information are introduced for fluorescence microscopy. A Leitz MPV II fluorescence microscope with x, y, z stepping motors was used as the basic equipment. The microscope images were captured with an intensified target camera, digitized with a frame grabber, and analyzed with a PC. We have constructed a digital autofocus system as well as an analogue autofocus detector. In the case of digital autofocus, the frame grabber images were analyzed while changing the z-position of the microscope slide. Different focus functions were investigated, including: 1) edge-finding algorithms such as the LoG and Canny operators, and 2) various autocorrelation algorithms. In contrast to the digital evaluation of focus criteria, we also constructed an electronic board with several differentiators and integrators. This module is designed for high speed autofocusing and is directly coupled to the output of the camera. It makes it possible to obtain a focus value for the current image within one video cycle. All methods and functions were tested in different situations and the results compared.
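As an illustration of an image-content focus criterion evaluated across a z-stack (a generic sketch; the gradient-energy and Laplacian-of-Gaussian measures below stand in for, and are not identical to, the focus functions studied in the paper):

import numpy as np
from scipy import ndimage

def gradient_energy(img):
    """Simple image-content focus measure: sum of squared intensity gradients."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float((gx**2 + gy**2).sum())

def log_energy(img, sigma=1.5):
    """Alternative focus measure based on the Laplacian-of-Gaussian response."""
    return float((ndimage.gaussian_laplace(img.astype(np.float64), sigma) ** 2).sum())

def best_focus(z_stack, measure=gradient_energy):
    """Return the index of the sharpest slice in a sequence of images acquired
    while stepping the z position."""
    scores = [measure(frame) for frame in z_stack]
    return int(np.argmax(scores)), scores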
A modified run-length algorithm with Peano-Hilbert scanning is described. In this approach, quantization is integrated within the run-length calculation and is controlled by a radius parameter. A zero value of the radius produces lossless coding, while a value greater than zero produces lossy coding. The Peano-Hilbert curve scans a 2D array with local plane-filling priority. Compared with standard raster and zigzag scanning, Peano-Hilbert scanning preserves larger pixel covariance as the distance between pixels increases. It therefore enhances the run-length compression ratio as the degree of lossiness increases. The output of the run-length encoder is further entropy coded using a decomposed Huffman encoder. The proposed method can be applied directly to a raw image or combined with DPCM, wavelet, or subband transforms. If the transform is also implemented with integer arithmetic, such as in lossless JPEG or a reversible wavelet transform, a unified lossy and lossless compression is achieved. The method is simpler than the JPEG standard, and yet achieves roughly equivalent performance when combined with DPCM. Decoding is very fast as no de-quantization procedure is needed. Experiments with various types of images have shown its speed advantage and compression efficiency.
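The two ingredients can be sketched in a few lines of Python: the standard Hilbert index-to-coordinate mapping, and a run-length pass in which a run continues while pixels stay within the radius of the run's reference value (radius = 0 reproduces lossless runs). This is our own minimal illustration; the decomposed Huffman stage and the transform options are omitted.

import numpy as np

def d2xy(n, d):
    """Convert a Hilbert-curve index d into (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_runlength(img, radius):
    """Run-length code the image along a Peano-Hilbert scan; a run is extended while
    pixels stay within `radius` of the run's reference value (radius = 0 is lossless)."""
    n = img.shape[0]                      # assumes a square, power-of-two-sized image
    runs = []
    ref, length = None, 0
    for d in range(n * n):
        x, y = d2xy(n, d)
        v = int(img[y, x])
        if ref is not None and abs(v - ref) <= radius:
            length += 1
        else:
            if ref is not None:
                runs.append((ref, length))
            ref, length = v, 1
    runs.append((ref, length))
    return runs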
An algorithm for methodically deriving rate-distortion points for transform coefficient selection schemes for images is presented. The idea is to iteratively generate a set of convex hulls from which a composite operational rate-distortion curve is derived. Although this approach can be used to generate optimal interior rate-distortion points, the complexity is high. A fast suboptimal approach is then proposed which is based upon a modified version of threshold selection. In the modified threshold selection algorithm, each transform block operates on a point along a non-convex rate-distortion curve which is generated from rank ordering of coefficients in the block. Simulations of this fast algorithm using finely quantized DCT coefficients from an image with separate coding of amplitudes and runlengths show that very good rate-distortion performance can be obtained. These simulations also suggest that the modified threshold selection curve tends to lie within the first few convex hulls generated from the composite shell method. The modified threshold selection algorithm provides a fast way for achieving good rate-distortion performance in transform coding systems.
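The convex-hull ingredient is standard; as a small sketch (our own, using Andrew's monotone chain; how the composite curve iterates over successive hulls is not reproduced here), the points of an operational rate-distortion set that survive the lower hull are exactly those reachable by minimizing D + lambda*R for some lambda:

def cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull of operational (rate, distortion) points; the surviving points
    are those attainable by a Lagrangian D + lambda * R selection for some lambda >= 0."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# Example: points above the hull are never selected by any Lagrangian sweep.
print(lower_hull([(1, 10.0), (2, 6.0), (2, 7.5), (3, 4.5), (4, 4.0), (5, 3.9)]))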
Lossy compression techniques provide far greater compression ratios than lossless ones and are, therefore, usually preferred in image processing applications. However, as more and more applications of digital image processing have to combine image compression and highly automated image analysis, it becomes critically important to study the interrelations between image compression and feature extraction. In this contribution we present a clear and systematic comparison of contemporary general-purpose lossy image compression techniques with respect to fundamental features, namely lines and edges detected in images. To this end, a representative set of benchmark edge detection and line extraction operators is applied to original and compressed images. The effects are studied in detail, delivering clear guidelines as to which combination of compression technique and edge detection algorithm is best suited for specific applications.
Linear prediction schemes, such as JPEG or BJPEG, are simple and normally result in a significant reduction in source entropy. Occasionally, the entropy of the prediction error becomes greater than that of the original image. Such situations frequently occur when the image data has discrete gray-levels located within certain intervals. To alleviate this problem, various authors have suggested different preprocessing methods. However, the techniques reported require two passes. In this paper, we extend the definition of Lehmer-type inversions from permutations to multiset permutations and present a one-pass algorithm based on inversions of a multiset permutation. We obtain comparable results when applying JPEG, and even better results when applying BJPEG, to the preprocessed image, which is treated as a multiset permutation.
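To give a flavor of inversion counting on a multiset (a forward counting sketch only, under our own assumptions: it replaces each pixel by the number of earlier pixels with a strictly greater gray level, and it does not reproduce the paper's invertible one-pass construction):

import numpy as np

def inversion_counts(pixels, levels=256):
    """One-pass, Lehmer-style counting: each pixel is replaced by the number of
    previously seen pixels with a strictly greater gray level. This only illustrates
    the counting step; inverting the mapping requires additional structure."""
    seen = np.zeros(levels, dtype=np.int64)   # running histogram of gray levels seen so far
    out = np.empty(len(pixels), dtype=np.int64)
    total = 0
    for i, v in enumerate(pixels):
        out[i] = total - int(seen[:v + 1].sum())   # count of earlier pixels > v
        seen[v] += 1
        total += 1
    return out, seen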
This paper reports a new fully adaptive DPCM architecture targeted at meeting high-speed video source coding requirements. The presence of two feedback loops in the ADPCM computational scheme, viz., the feedback necessary for the prediction and that necessary for adapting the prediction, inherently limits the sampling rate that can be supported. In order to systolize/pipeline these computations, we delay both the adaptation and prediction computations so that the required algorithmic delays are now available in the feedback loops. We create a 2-slow simulator of the system, use a system clock that is twice as fast as the sample rate, retime register delays to systolize/pipeline the computations, and project the prediction and adaptation computations onto a common set of multiply-accumulate processor modules. This yields pipelined systolic arrays for both the coder and the decoder, using a significantly reduced adaptation delay and minimal algorithmic modifications compared with recently proposed architectures, implying improved convergence behavior of the adaptive predictors and higher SNR in the received video frames at high sample rates, in addition to a hardware-efficient, reduced-area implementation.
The processing of compressed data generally yields decreased computational cost, due to the requirement of fewer operations in the presence of fewer data. We have previously shown that the automated recognition of small targets in compressed imagery, called compressive ATR, is feasible for a variety of compressed image formats. For example, an image with domain size |X|, when tessellated into k x l-pixel blocks and processed by a block-compression transform such as vector quantization (VQ), has compressed domain size |Y| = |X|/kl pixels. The characterization of each k x l-pixel block in terms of one or more parameters of a compressed pixel facilitates the processing of the O(|Y|) compressed pixels with a computational saving on the order of the domain compression ratio CRD = |X|/|Y| = kl. In practice, typical computational efficiencies of approximately one-half the domain compression ratio have been realized via processing imagery compressed by VQ, block truncation coding (BTC), and visual pattern image coding (VPIC). In this paper, we extend our previous research in compressive ATR to include the processing of compressed stereoscopic imagery. We begin with a brief review of stereo vision and the correspondence problem, as well as theory fundamental to the processing of compressed data. We then summarize VQ, BTC, and VPIC compression. In Part 2 of this series, we map a cepstrum-based stereo matching algorithm to stereoscopic images represented by the aforementioned compressive formats. Analyses emphasize computational cost and stereo disparity error. Algorithms are expressed in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Since image algebra has been implemented on numerous sequential and parallel computers, our algorithms are feasible and widely portable.
In this paper, a high-quality and, in many respects, efficient wavelet color image codec (WCIC) is introduced. This codec achieves high compression ratios with little degradation in visual quality. WCIC simultaneously exploits several characteristics of wavelet transforms. The wavelet transform is formed using wavelet packets via lattice structures. A bit allocation algorithm is developed to distribute the bits between and within the color components such that minimal mean-squared error is introduced in each color component. This proves to be beneficial at low bit rates, where it keeps the color distribution in balance. The wavelet coefficients are quantized using either trellis or zero-tree quantizers, depending on their statistics. Quantized coefficients are further compressed with adaptive arithmetic coding, resulting in the final coded bit stream. This codec is applied to a wide range of images, from computer animations to natural pictures. The results are compared to other algorithms, including fractals and JPEG, and the relative merits of the proposed algorithm are presented.
Automation of the task of iridodiagnostics is based on the results of iris analysis by PC and requires solving a series of complicated problems. We developed a computational system for iridodiagnostics that allows a physician-iridologist to work with a patient's iris image at different scales, to process it on a PC, and to create a database of image files in order to store accumulated experience for later use.
In this paper, we present a new method for adjusting the pseudo-color distribution of a photographic image. This method is based on the technique of grating-modulated pseudo-color encoding of photographic phase images. A reflecting ray path is introduced into a 4f system to produce diffraction from a dual phase grating, so that the pseudo-color at the output plane can be adjusted. A theoretical analysis of this method, experimental demonstration results, and a chromaticity analysis are also presented.
This paper proposes a new kind of high-precision optical-electronic imaging tracking technology using optical SDF filtering and electronic SSDA matching. It overcomes the drawbacks of a purely electronic image correlator. Experimental results show that this new image tracking technology can be used efficiently for the recognition and precision tracking of targets against complex ground backgrounds.
This paper proposes a method of evaluating x-ray tube focal spots and the corresponding image sharpness by computer simulation based on transfer function theory. This theory was chosen because it provides both quantitative and qualitative measures of radiographic system performance, allowing less subjective evaluations and better predictions about the characteristics of the imaging process. The present method uses as input data the effective focal spot dimensions at the field center and the value of the target angulation. An ideal pinhole scanning the entire radiation field is simulated, which allows the point spread functions (PSFs) to be obtained for any region of interest. The modulation transfer functions (MTFs) are then determined by 2D Fourier transformation of the PSFs. This makes it possible to evaluate the focal spot projection at all field locations and therefore to predict the sharpness of the associated image. Furthermore, the computer simulation greatly reduces the number of practical procedures required to obtain the data needed for MTF evaluation of radiographic systems.
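The PSF-to-MTF step is standard and can be sketched in a few lines (our own illustration; the simulated pinhole scan and the field-dependent PSF generation are not shown, and the elongated Gaussian focal spot in the example is a hypothetical stand-in):

import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the 2D Fourier transform of the PSF."""
    psf = psf.astype(np.float64)
    psf = psf / psf.sum()                     # unit-area PSF
    otf = np.fft.fftshift(np.fft.fft2(psf))
    mtf = np.abs(otf)
    return mtf / mtf.max()                    # DC value normalized to 1

# Example: an elongated Gaussian focal-spot projection gives an anisotropic MTF.
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 / (2 * 6.0**2) + y**2 / (2 * 2.0**2)))
mtf = mtf_from_psf(psf)
print(mtf.shape, mtf[64, 64])                 # the zero-frequency value equals 1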
This paper presents a novel system that simulates time-of-flight MR angiography (TOF-MRA). At the current stage, we focus on steady blood flow. We first create a 3D blood vessel model obeying the branching rules of real vascular systems. In the model, laminar flows inside blood vessels are modeled using nested cylinders with a different velocity in each cylinder shell. Pseudo TOF-MRA images are generated by voxelizing the model. The voxelization is based on flow-related enhancement, which is the basis of TOF-MRA. Slices of different resolutions and thicknesses can be generated with different settings for the maximum velocity of blood flow as well as for the orientation of the phase and frequency encoding axes.
During sonar image acquisition, several physical phenomena strongly degrade the final image. We examine two degradations in particular: the energy loss as a function of the distance traveled by the acoustic wave, and speckle noise. For the first phenomenon, we propose a corrective post-processing step that takes some acquisition parameters into account. For the second phenomenon, we apply filtering algorithms such as the maximum a posteriori filter, a modified version of the Lee algorithm, or an adaptive weighted median filter. The results are shown on real sonar images.
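For reference, a basic (unmodified) Lee filter looks like the sketch below; the paper's modified version and the other filters differ, and the window size and noise-variance estimate here are placeholder choices:

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7, noise_var=None):
    """Basic Lee speckle filter: estimate = local mean + k * (pixel - local mean),
    with gain k = max(0, local var - noise var) / local var."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img**2, window)
    var = sq_mean - mean**2
    if noise_var is None:
        noise_var = float(np.median(var))     # crude global noise-variance estimate
    k = np.clip(var - noise_var, 0, None) / np.maximum(var, 1e-12)
    return mean + k * (img - mean)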
Transverse interactions and pattern formation processes occurring in a passive ring resonator with a thin Kerr medium are investigated theoretically and numerically. In this system, diffraction effects can cause spatial destabilization of the laser beam profile, with the formation of regular structures in the transverse plane. A theoretical model for the evolution of the intracavity field and the nonlinear phase modulation in the medium was elaborated. A system of equations for the steady-state phase and intensity was derived, and a linear stability analysis was carried out. We make some simple predictions about the spatial stability of a plane wave and prove the existence of instability domains linked to the onset of transverse modulation of the field profile. Results of numerical simulations are also presented.
Character recognition in the industrial world is a complex problem because of the variety of substrates and types of writing. Currently, the systems employed are chosen on a case-by-case basis. To address this problem, we present a system using two neural networks. The first one, a semi-supervised network, determines the characteristics required for character recognition. It extracts a representation that is invariant to the position, size, and orientation of the character. The output of this first network is connected to the second one, a supervised network that performs the classification task. Thanks to its prior training, this network recognizes the character. Depending on the quality of the recognition, the system may return to the first network the information required to improve the view of the character. This qualitative signal allows the model extraction process to be modified. The interest of this system lies in the cooperation between the two types of networks: the first carries out the vision task, the second performs multifont recognition, and their cooperation makes it possible to recognize the character whatever the substrate and the marking.
Recently, the retrieval of an image based on its content from an efficiently compressed database has attracted significant research interest. Most such work to date assumes that the sample image has the same orientation as the retrieved image, and performs poorly if the orientation of the observed image differs from that of the image in the database. However, there are instances where comparisons of images with different orientations are needed. We propose a new image coding and retrieval method which allows the retrieval of images that may be oriented differently than, but are otherwise similar to, the sample image. In this method, we introduce the idea of using an absolute reference angle to represent the orientation of an image. An image can be coded into two separate parts: part one containing the orientation information, and part two containing the texture information, which is orientation independent. Then an image retrieval algorithm which only compares the rotationally invariant parts of the images is employed to achieve orientation-independent retrieval. We propose and analyze several mathematical forms of the absolute reference angle. Application of our method to fractal-coded image retrieval is discussed and demonstrated.
An evolutionary method for object shape extraction is proposed on the basis of gray-scale morphological structures. Artificial individuals built up from gray-scale morphological operators are mapped into 2D data representation structures. These operator series are then manipulated to produce new generations. The normalized correlation between the filtering results and the contributing input image areas is calculated as the fitness. The extracted objects are obtained by carrying out the filtering with the best-fit operator series. This method requires no preliminary knowledge of the object shape, and no constraints are imposed on image background or smoothness. The evolutionary approach provides a global and directed search over a large number of possible morphological operators and a method that can be applied to a wide range of images. As a concrete application, the method is utilized for the shape extraction of speckles and other skin deformities. Ultraviolet- and blue-filtered camera images are used as input. In order to obtain a fast method, the algorithm is executed on a multiprocessor basis. Detecting the shape of skin objects originating from benign and malignant skin deformities such as speckles and melanomas has great medical and cosmetic importance as well.
In this presentation, image processing technology is applied to measure the cross-sectional dimensions of hot-rolled section steel traveling on the rolling line. With a light sheet intersecting the workpiece, a bright section outline appears on its outer surface. The entire image of this outline is formed by four TV cameras which image the outline from four directions. The method of obtaining the measured dimensions from this image is discussed in detail. Four kinds of rolled section steel have been measured by this method, and laboratory results show that the uncertainty is about 0.2 mm over a measurement range of 100 mm.
The magnitude of the image geometric unsharpness depends on the location in the field where the object is hit by the x-ray beam. This phenomenon is known as the field characteristics and is caused by the target plane angulation, which yields different effective focal spot sizes and shapes when the focal spot is 'seen' from different directions and locations in the x-ray field. Due to the effect of the field characteristics, a more detailed evaluation of focal spot behavior in radiology systems is needed. Hence the focal spot should be evaluated at all field locations, which is very complex with experimental procedures, although feasible by computer simulation. This work describes an algorithm for determining the size and shape of effective focal spots at any location of the radiation field, on the basis of the focal spot size measurement at the field center. The results obtained by the program have agreed with those obtained by pinhole matrix exposures in several radiology facilities. The program has proved efficient in computing the size of the focal spot projections for mammography systems, with a standard deviation around 0.03-0.04 mm.
This work proposes a new technique that allows the sharpness of radiologic images to be evaluated by computer simulation. The simulation provides the image from objects placed at any field position, so that the radiologist can determine in advance the image sharpness to be obtained in actual examinations. It also provides advance knowledge of whether the system will be able to image a particular detail. The x-ray source is simulated from its representation by the point spread function, which is plotted in a matrix in the image plane. The object is also calculated and plotted in an object matrix. The resultant image matrix is calculated from these two matrices. The validity of the simulation was verified by comparing the simulated images with actual images obtained from single phantom exposures.
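When a single PSF is taken as valid over the region of interest, the image-matrix computation reduces to a convolution of the object matrix with that PSF; the sketch below illustrates this step only (the Gaussian focal-spot PSF, the grid spacing, and the bar-pattern object are hypothetical choices, and position-dependent PSF handling is omitted):

import numpy as np
from scipy.signal import fftconvolve

def simulate_image(object_matrix, psf):
    """Simulated radiographic image for one field position: convolution of the object
    matrix with the focal-spot PSF valid at that position (unit-area normalized)."""
    psf = psf / psf.sum()
    return fftconvolve(object_matrix, psf, mode="same")

# Example: a thin bar pattern blurred by a Gaussian focal-spot PSF (hypothetical scale).
obj = np.zeros((256, 256))
obj[:, 120:124] = 1.0
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
img = simulate_image(obj, psf)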
A partially correlated polarimetric K-distribution model is presented to characterize the statistical properties of multi-look polarimetric SAR data acquired over urban areas densely covered with buildings and roads. The model is a generalization of the usual polarimetric K-distribution and is formulated in terms of a matrix product model in which different texture variables are assigned to different polarization channels. The joint distribution of these texture variables is assumed to be partially correlated Gamma. Testing of the generalized K-distribution model was conducted using NASA/JPL polarimetric SAR data of such an urban area, showing that the generalized K-model is more precise than the usual K-model in characterizing the statistical properties of complex urban polarimetric SAR backscattering.
SAR provides 2D high-resolution images of the earth, but its coherent imaging principle burdens its images with serious speckle noise, which severely reduces image quality. Speckle reduction and edge-information preservation are conflicting goals in SAR image enhancement that are difficult to reconcile with present processing methods. The algorithm put forward in this paper resolves this contradiction: experiments show that it reduces speckle effectively while preserving image edge information well.
A structural codebook design algorithm is proposed in this paper, aimed at considerably reducing the computational burden during the coding process of vector quantization while incurring almost no extra storage cost. Kohonen's self-organizing feature map algorithm is improved so as to tackle two problems in codebook design. Since the address of the currently coded block is often the same as that of the previously coded neighboring blocks, especially in flat regions, an address-dependent vector quantization (ADVQ) scheme is proposed to further reduce the bit rate and computational complexity. The simulation results show that the ADVQ scheme can achieve a bit-rate reduction of 37 percent for the typical image 'Lena' while the computational complexity is reduced by a factor of 20.
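For orientation, a plain (unimproved) Kohonen self-organizing feature map for VQ codebook design looks like the sketch below; the paper's structural improvements and the ADVQ addressing scheme are not reproduced, and the learning-rate and neighborhood schedules are placeholder choices:

import numpy as np

def som_codebook(train_vecs, codebook_size=256, epochs=5, lr0=0.5, radius0=None, seed=0):
    """Train a 1D self-organizing map whose neuron weights serve as the VQ codebook.
    Both the learning rate and the neighborhood radius decay over time."""
    rng = np.random.default_rng(seed)
    dim = train_vecs.shape[1]
    codebook = rng.standard_normal((codebook_size, dim)) * train_vecs.std() + train_vecs.mean()
    if radius0 is None:
        radius0 = codebook_size / 4
    n_steps = epochs * len(train_vecs)
    step = 0
    for _ in range(epochs):
        for v in train_vecs[rng.permutation(len(train_vecs))]:
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            radius = max(1.0, radius0 * (1 - frac))
            winner = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))  # best-matching unit
            dist = np.abs(np.arange(codebook_size) - winner)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))                # neighborhood function
            codebook += lr * h[:, None] * (v - codebook)
            step += 1
    return codebook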
The early detection of breast cancer is essential for increasing the survival rate of the disease. Today, mammography is the only breast screening technique capable of detecting breast cancer at a very early stage. The presence of a breast tumor is indicated by certain features on the mammogram. One sign of malignancy is the presence of clusters of fine, granular microcalcifications. We present here a three-step method for detecting and characterizing these microcalcifications. We begin with the detection of potential candidates; the aim of this first step is to detect all the pixels that could belong to a microcalcification. Then we focus on our specific region growing technique, which provides an accurate extraction of the shape of the region corresponding to each detected seed. This second step is essential because microcalcification shape is a very important feature for the diagnosis. It is then possible to determine precise parameters to characterize these microcalcifications. This three-step method has been evaluated on a set of images from the Mammographic Image Analysis Society database.
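A minimal intensity-tolerance region-growing routine of the kind the second step relies on is sketched below (our own generic version with 4-connectivity and a fixed tolerance; the paper's specific growing criterion is not reproduced). Shape parameters such as area and compactness can then be computed from the returned mask.

import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` (row, col): a 4-connected pixel joins the region if its
    intensity is within `tol` of the seed intensity. Returns a boolean mask of the region."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and abs(float(img[rr, cc]) - ref) <= tol:
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask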
We propose a composite adaptive technique for color image compression. We apply a mapping transform from (R, G, B) space to (Yd, Cr, Cb) space to represent the image as a luminance image. The proposed compression technique is based on the discrete cosine transform (DCT) with a block size of 8 by 8 pixels. The proposed adaptive DCT technique depends on the maximum luminance value and the difference between this maximum and the minimum one in each block. This helps to select one quantization matrix out of 16 different matrices. The quantization matrices represent the whole range of block types. As an error correction step, we create a 3D matrix that represents the whole image: the X axis is the number of blocks per row, the Y axis is the number of blocks per column, and the Z axis is the 256 colors of the image. This replaces the coding of the Cr, Cb components and solves the problem in cases where the same luminance value represents more than one color. The technique achieves a high compression ratio, in the range of 0.65 bpp for the Yd component of a smooth image to 1 bpp for a detailed image, in addition to overheads in the range from 0.15 to 0.25 bpp for the error correction matrix. The technique gives a higher compression ratio compared to well-known techniques for standard images, such as Lenna, while maintaining the same quality.
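The matrix-selection step might look like the sketch below (entirely hypothetical details: the thresholds, the 4x4 classification of blocks by maximum and range, and the placeholder tables are our assumptions, used only to illustrate picking one of 16 matrices from per-block statistics):

import numpy as np

def select_quant_matrix(block, matrices, max_thresholds, range_thresholds):
    """Pick one of 16 quantization matrices from the block's maximum luminance and its
    max-min range. `matrices` is indexed as matrices[max_class][range_class]."""
    bmax = float(block.max())
    brange = bmax - float(block.min())
    max_class = int(np.searchsorted(max_thresholds, bmax))        # 0..3
    range_class = int(np.searchsorted(range_thresholds, brange))  # 0..3
    return matrices[max_class][range_class]

# Hypothetical thresholds (3 cut points -> 4 classes each) and placeholder tables.
max_thresholds = [64, 128, 192]
range_thresholds = [16, 48, 112]
base = np.full((8, 8), 16.0)                                      # placeholder base table
matrices = [[base * (1 + 0.25 * (i + j)) for j in range(4)] for i in range(4)]
block = np.random.default_rng(1).integers(0, 200, (8, 8))
Q = select_quant_matrix(block, matrices, max_thresholds, range_thresholds)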
In this paper, a method for resolving two overlapping signals is described for the case when both signals are Gaussians with equal half-widths. Using the wavelet transform of the original signal, which contains such a superposition of two Gaussians, we express the shift between the peaks in terms of the wavelet image maxima. This enables us to develop a fast method to determine the positions and amplitudes of the Gaussian sources with satisfactory accuracy. A comparison with the Fourier method often applied to this problem is presented.
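The setting can be reproduced numerically with a short sketch (our own illustration: a Mexican-hat wavelet at a single scale and a simple local-maximum search; the paper's analytic relation between the maxima and the peak shift is not given here):

import numpy as np

def mexican_hat(width, scale):
    """Second derivative of a Gaussian ('Mexican hat') sampled on [-width, width]."""
    t = np.arange(-width, width + 1, dtype=np.float64)
    s2 = scale ** 2
    return (1 - t**2 / s2) * np.exp(-t**2 / (2 * s2))

def wavelet_maxima(signal, scale):
    """Positions of local maxima of the wavelet transform at one scale."""
    w = np.convolve(signal, mexican_hat(4 * scale, scale), mode="same")
    idx = np.where((w[1:-1] > w[:-2]) & (w[1:-1] > w[2:]))[0] + 1
    return idx, w

# Two equal-width Gaussians whose separation is comparable to their width.
x = np.arange(512, dtype=np.float64)
sigma = 12.0
signal = np.exp(-(x - 250) ** 2 / (2 * sigma**2)) + 0.7 * np.exp(-(x - 270) ** 2 / (2 * sigma**2))
peaks, w = wavelet_maxima(signal, scale=sigma)
print(peaks)   # the transform maxima carry the information about the peak positions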
Matching a stereo image pair is an important problem in computer vision. Only when the stereo matching problem is solved can accurate location or measurement of objects be realized. In this paper, an integrated stereo matching approach is presented. Unlike most stereo matching approaches, it integrates area-based and feature-based primitives, which allows it to take advantage of the unique attributes of each of these techniques. The feature-based process is used to match image features; it can provide a more precise sparse disparity map and accurate location of discontinuities. The area-based process is used to match the continuous surfaces and can provide a dense disparity map. Stereo matching with an adaptive window is adopted in the area-based process, which makes the area-based results highly precise. An integration process is also used in this approach: it combines the results of the feature-based and area-based processes, so that the approach can provide not only a dense disparity map but also an accurate location of discontinuities. The approach has been tested on synthetic and natural images. From the results of the matched wedding cake and the matched aircraft model, we can see that the surfaces and configurations are well reconstructed. The integrated stereo matching approach can be used for 3D part recognition in intelligent assembly systems and in computer vision.
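The area-based component can be illustrated with a plain window-based disparity search (a generic SSD sketch over a fixed window for one pixel of a rectified pair; the adaptive-window selection, the feature-based stage, and the integration step of the paper are not shown):

import numpy as np

def ssd_disparity(left, right, row, col, max_disp, half_win=4):
    """Area-based match for one pixel of the (rectified) left image: return the disparity
    whose window SSD against the right image is smallest. Assumes the pixel is far
    enough from the image borders for a full window."""
    win_l = left[row - half_win:row + half_win + 1,
                 col - half_win:col + half_win + 1].astype(np.float64)
    best_d, best_cost = 0, None
    for d in range(max_disp + 1):
        c = col - d
        if c - half_win < 0:
            break
        win_r = right[row - half_win:row + half_win + 1,
                      c - half_win:c + half_win + 1].astype(np.float64)
        cost = ((win_l - win_r) ** 2).sum()
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d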
This work reports and illustrates the application of morphological and topological algorithms to enhance sketch contours in infrared photographs of wood paintings. The procedure involves two steps: the application of a morphological chain to enhance the desired sketches, followed by a topological chain to 'clean up' the undesired artifacts generated in the first step.
In Part 1 of this two-part series, we show that the processing of compressed data generally yields decreased computational cost, due to the requirement of fewer operations in the presence of fewer data. We described several compressive transformations in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. In this paper, we continue our development of compressive automated target recognition algorithms to include compressive stereo matching. In particular, we elucidate techniques for mapping a cepstrum-based stereo matching algorithm to stereoscopic images represented by the aforementioned compressive formats. Analyses emphasize computational cost, stereo disparity error, and applicability to ATR over various classes of surveillance imagery. Since the study notation, image algebra, has been implemented on numerous sequential and parallel computers, our algorithms are feasible and widely portable.