The visual pattern image coding (VPIC) technique appears to be a powerful competitor to VQ approaches for image coding, since no codebook is needed. In VPIC, edge patterns within 4 by 4 blocks are detected and preserved. We report on the results of an alternative edge-preserving lossy compression algorithm that employs an edge-detection operator to identify edge pixels for special treatment. A combination of the zero-tree approach of subband coding and the deterministic prediction approach of JBIG identifies blocks without edge pixels. Predictive coding is heavily employed in compressing the block mean values. A smoothing algorithm applied to the means of the 4 by 4 blocks serves the dual purpose of smoothing the mean field and of predicting whether an edge pixel value is above or below the mean.
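As a rough illustration of the block and edge bookkeeping such a coder needs (not the authors' algorithm; the 4 by 4 block size is taken from the abstract, while the Sobel operator and the threshold are assumptions), the following sketch computes the block means and flags the blocks that contain edge pixels:

    import numpy as np
    from scipy.ndimage import sobel

    def block_means_and_edge_flags(img, block=4, edge_thresh=60.0):
        h, w = img.shape
        h, w = h - h % block, w - w % block                  # crop to a multiple of the block size
        img = img[:h, :w].astype(float)
        # block means (the values that would be compressed with predictive coding)
        means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        # Sobel gradient magnitude marks the edge pixels that get special treatment
        grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
        edge = grad > edge_thresh
        # a block is an "edge block" if it contains at least one edge pixel
        edge_block = edge.reshape(h // block, block, w // block, block).any(axis=(1, 3))
        return means, edge_block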
Image coding requires an effective representation of images to provide dimensionality reduction, a quantization strategy to maintain quality, and finally the error-free encoding of the quantized coefficients. In the coding of quantized coefficients, Huffman coding and arithmetic coding have been used most commonly and are suggested as alternatives in the JPEG standard. In some recent work, zerotree coding has been proposed as an alternative method that considers the dependence of quantized coefficients from subband to subband, and thus appears as a generalization of the context-based approach often used with arithmetic coding. In this paper, we review these approaches and discuss them as special cases of an analysis-based approach to the coding of coefficients. The requirements on causality and computational complexity implied by arithmetic and zerotree coding are studied, and other schemes, suggested by image analysis, are proposed for the choice of the predictive coefficient contexts.
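The subband-to-subband dependence mentioned above can be made concrete with a small experiment. The sketch below assumes a plain two-level Haar decomposition and an arbitrary significance threshold (neither is the paper's coder); it estimates how often a fine-scale coefficient is significant given the state of its coarser-scale parent, which is the statistical dependence zerotree coding exploits:

    import numpy as np

    def haar2d(x):
        a, b = x[0::2, 0::2], x[0::2, 1::2]
        c, d = x[1::2, 0::2], x[1::2, 1::2]
        ll = (a + b + c + d) / 2.0
        lh = (a - b + c - d) / 2.0
        hl = (a + b - c - d) / 2.0
        hh = (a - b - c + d) / 2.0
        return ll, lh, hl, hh

    def parent_child_significance(img, thresh=16.0):
        ll1, lh1, hl1, hh1 = haar2d(img.astype(float))
        ll2, lh2, hl2, hh2 = haar2d(ll1)
        probs = {}
        for name, child, parent in (("LH", lh1, lh2), ("HL", hl1, hl2), ("HH", hh1, hh2)):
            par = np.repeat(np.repeat(parent, 2, axis=0), 2, axis=1)   # upsample the parent band
            cs, ps = np.abs(child) > thresh, np.abs(par) > thresh
            probs[name] = (cs[ps].mean() if ps.any() else 0.0,         # P(child sig | parent sig)
                           cs[~ps].mean() if (~ps).any() else 0.0)     # P(child sig | parent insig)
        return probs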
In previous work, we reported on the benefits of noise reduction prior to the coding of very high quality images: perceptual transparency can be achieved with a significant improvement in compression compared to error-free codes. In this paper, we examine the benefits of preprocessing when the quality requirements are not as high and perceptible distortion results. The use of data-dependent anisotropic diffusion that maintains image structure, edges, and transitions in luminance or color is beneficial in controlling the spatial distribution of the errors introduced by coding. Thus, the merit of preprocessing lies in the control of coding errors. In this preliminary study, we only consider preprocessing prior to the use of the standard JPEG and MPEG coding techniques.
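For concreteness, a Perona-Malik-style diffusion is one common form of data-dependent anisotropic diffusion; the paper does not say that this exact scheme was used, and the iteration count, conductance constant, and periodic border handling below are assumptions:

    import numpy as np

    def anisotropic_diffusion(img, n_iter=15, kappa=25.0, step=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # differences towards the four neighbours (np.roll wraps at the borders for brevity)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            # edge-stopping conductances: near 1 in flat areas, small across strong edges
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u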
Compression with reversible embedded wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. 'Reversible' wavelets are nonlinear filters which implement exact-reconstruction systems with minimal-precision integer arithmetic. Wavelet coefficients are encoded in a bit-significance embedded order, allowing lossy compression by truncating the compressed data. Lossless coding of wavelet coefficients is unique to CREW. In fact, most of the coded data is created by the less significant bits of the coefficients. CREW's context-based coding, called Horizon coding, takes advantage of the spatial and spectral information available in the wavelet domain and adapts well to the less significant bits. In applications where the size of an image is large, it is desirable to perform compression in one pass using far less workspace memory than the size of the image. A buffering scheme which allows a one-pass implementation with reasonable memory is described.
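A minimal example of a reversible (integer-to-integer) wavelet step in this spirit is the S transform; the sketch below is illustrative only and is not CREW's actual filter pair:

    import numpy as np

    def s_transform(x):
        # assumes an even-length integer signal
        a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        low = (a + b) >> 1           # truncated mean, stays an integer
        high = a - b                 # difference carries the detail
        return low, high

    def inverse_s_transform(low, high):
        a = low + ((high + 1) >> 1)
        b = a - high
        x = np.empty(a.size + b.size, dtype=np.int64)
        x[0::2], x[1::2] = a, b
        return x

    x = np.array([5, 2, 7, 7, 0, 255], dtype=np.int64)
    low, high = s_transform(x)
    assert np.array_equal(inverse_s_transform(low, high), x)   # exact reconstruction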
An image coding scheme using a set of image visual patterns is introduced. These patterns are constructed to represent two basic types of image patterns (uniform and oriented) over small blocks of an image. The coding system characterizes an image by its local features, and further approximates each image block by a block pattern. Algorithms for pattern classification, computation of pattern parameters, and image reconstruction from these parameters are presented; these provide the necessary tools for applying the proposed coding method to various images. Satisfactory coded images have been obtained, and compression ratios on the order of 15 to 1 have been achieved.
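A hedged sketch of the uniform/oriented block classification idea (the block size, the gradient-based test, and the threshold are assumptions, not the paper's parameters):

    import numpy as np

    def classify_block(block, uniform_thresh=8.0):
        gy, gx = np.gradient(block.astype(float))
        strength = np.hypot(gx, gy).mean()
        if strength < uniform_thresh:
            return "uniform", block.mean(), None          # flat block: the mean alone suffices
        # dominant orientation of the block's intensity gradient, in degrees
        angle = np.degrees(np.arctan2(gy.sum(), gx.sum())) % 180.0
        return "oriented", block.mean(), angle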
Many video compression systems use motion-compensated prediction to take advantage of the temporal redundancy. Typically, the motion compensation is done on a block-by-block basis using a fixed block size. This paper examines methods to improve motion estimation and compensation through the use of optical flow and adaptive block-based methods. The basic idea in both approaches is the same: to use larger blocks for regions of the image that have less detailed texture, and to use smaller blocks for more complicated regions. The best way to obtain the motion vectors for each block is examined in detail.
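As a baseline for the fixed-block-size case, a full-search block matcher can be sketched as follows (the block size, search radius, and SAD criterion are assumptions):

    import numpy as np

    def full_search_mv(cur, ref, y, x, block=16, radius=7):
        tgt = cur[y:y + block, x:x + block].astype(float)
        best, best_mv = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                    continue                              # candidate falls outside the reference frame
                sad = np.abs(tgt - ref[yy:yy + block, xx:xx + block]).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
        return best_mv, best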
It is clear that the compression of Meteosat radiometric data is not a classical problem. A first challenge consists in image coding with the intention of preserving the quality of the measurements, which is quite different from image coding with the intention of preserving visual quality. The latter problem is extensively addressed in the literature. The way in which the problem of the errors is addressed is quite different in the two approaches. When visual quality is concerned, visual criteria are used and the error amplitude can be very large in some places. On the other hand, if measured features are extracted from radiometric data such as meteorological images, it is necessary that the reconstruction errors do not exceed some threshold depending on the required precision. A second challenge concerns the construction of a progressive coding scheme which allows the progressive transmission of the image data while avoiding the artifacts of a block coding scheme. In the present work, only the measurement data compression problem has been considered and the tests were carried out accordingly; however, these methods also perform quite well when visual quality is addressed. This paper presents a progressive wavelet coding scheme that processes the incoming data in scanning order. The comparison with the JPEG standard shows that the progressive wavelet method always performs better as far as distortion is concerned.
In this paper, we propose a new fast algorithm for motion vector (MV) estimation based on the correlation of MVs in spatially adjacent and hierarchically related blocks. We select a set of MV candidates from the MV information obtained at the coarser resolution level and from spatially neighboring blocks at the same level, and then perform a further local search to refine the MV result. Experiments are performed on several test video sequences to demonstrate the excellent performance of the method. The proposed algorithm achieves a speed-up factor ranging from 140 to 200 with only a 2-8% MSE increase in comparison with the full-search block matching algorithm.
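The candidate-plus-refinement idea can be sketched as follows; the candidate sources (the coarser-level MV and neighbouring MVs) follow the description above, while the SAD criterion and the one-pixel refinement radius are assumptions:

    import numpy as np

    def sad(cur, ref, y, x, mv, block):
        dy, dx = mv
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
            return np.inf
        return np.abs(cur[y:y + block, x:x + block].astype(float)
                      - ref[yy:yy + block, xx:xx + block]).sum()

    def refine_mv(cur, ref, y, x, candidates, block=16, radius=1):
        # pick the best of the predicted candidates ...
        best_mv = min(candidates, key=lambda mv: sad(cur, ref, y, x, mv, block))
        best = sad(cur, ref, y, x, best_mv, block)
        # ... then perform a small local search around it
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                mv = (best_mv[0] + dy, best_mv[1] + dx)
                cost = sad(cur, ref, y, x, mv, block)
                if cost < best:
                    best, best_mv = cost, mv
        return best_mv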
The effectiveness of image recognition methods is strongly dependent on the process of intelligent image information compression. This paper presents the Radon-Fourier transformation as a tool for this task. It is shown that only two slices of the Radon-Fourier transformation are sufficient to classify the set of medical images. Some classification results for a single transformation slice are presented. These results are compared with full 2D Fourier transformation processing.
A method for restoring, enhancing, segmenting, and coding textures in SAR images is presented. Image restoration and enhancement are achieved by means of a polynomial-transform-based algorithm. The first-order polynomial transform coefficients provide an efficient tool for edge detection. These coefficients are transformed in order to adaptively reduce noise and enhance relevant edges in the image. The restored image is obtained by means of an inverse polynomial transform. Texture segmentation is based on an automatic region growing algorithm using the pixel context and an automatic selection of the critical parameters involved in the process. The value of these parameters is determined from the texture models employed and is optimized according to conditions given by local textures. Based on the texture models, a distance between texture classes has been worked out to measure the goodness of the segmentation. A texture map is also produced, in which morphological parameters, including a measure of texture structure, are provided. The image coding algorithm is adapted to the properties of human visual perception, i.e., it separates the texture from the edge information. Multiscale edges are detected from the local maxima of the dyadic wavelet transform modulus. Relevant edges are selected and coded in the wavelet domain on a geometric basis. The remaining wavelet coefficients carry mostly texture information. Another wavelet representation is computed on the image reconstructed from the coded edges. Relevant textures are selected and coded in this alternative wavelet domain using an energy-based criterion. The reconstruction is done by adding the coded edge image to the coded texture image.
This paper describes a technique for extracting facial patterns from color images recorded in different imaging media. First, we extracted the skin-color pixels on the basis of a statistical probability analysis of the ellipsoid distributions of the skin-color pixels' hue and saturation within lightness intervals. The skin-color regions were then segmented and labeled by applying binary digital image processing techniques. We measured the values of 20 pattern variables in each region. The facial patterns were extracted from the skin-color regions by evaluating their pattern variable values using a knowledge-based multistep filtering technique. As verified by the experimental results, this technique can extract facial patterns from color images simply and automatically. It is significant for operations or systems where locating and detecting the facial pattern is critically important.
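A simplified sketch of the first stage (the ellipsoidal hue/saturation bounds per lightness interval are replaced here by assumed box thresholds, so this is not the paper's statistical model):

    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    def skin_mask(rgb_image, hue_max=0.14, sat_range=(0.15, 0.75), light_min=0.2):
        # rgb_image: uint8 array of shape (H, W, 3); returns a boolean skin-candidate mask
        hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        return (h <= hue_max) & (s >= sat_range[0]) & (s <= sat_range[1]) & (v >= light_min)

The resulting binary mask could then be labeled into regions (e.g. with scipy.ndimage.label) before the per-region pattern variables are evaluated.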
A new iterative approach to the spectral extrapolation of an image is introduced. In the space domain it is only assumed that the image is non-negative, and in the frequency domain it is assumed that its low-frequency components are known. A modified version of the method of projections onto convex sets with relaxation parameters is implemented. It is shown that the proposed iterative method is nonexpansive and in some cases contractive; given the same constraints, it converges faster than the Gerchberg iterative superresolution algorithm and gives a better result. It is also shown that in the cases when the algorithm becomes contractive it performs better in the presence of small amounts of noise in the low-frequency components.
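The flavor of the iteration can be seen in the following Gerchberg/POCS-style baseline, which alternates the two constraints but omits the relaxation that distinguishes the proposed method (the mask layout and iteration count are assumptions):

    import numpy as np

    def extrapolate_spectrum(known_spectrum, low_mask, n_iter=200):
        # known_spectrum: full-size FFT array, valid only where the boolean low_mask is True
        f = known_spectrum * low_mask
        for _ in range(n_iter):
            x = np.fft.ifft2(f).real
            x = np.clip(x, 0.0, None)                  # space-domain constraint: non-negativity
            f = np.fft.fft2(x)
            f[low_mask] = known_spectrum[low_mask]     # frequency-domain constraint: keep known lows
        return np.fft.ifft2(f).real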
Maximum entropy (MEM) algorithms can be used to restore imagery within a region of corrupted data, the necessary condition being that the point spread function (PSF) is sufficiently large with respect to the region of corrupted data. In most cases MEM will give a result which may not be the result desired; in general the error assessment is qualitative and the restored image appears cosmetically more pleasing to the eye. This paper presents a characterization of one MEM algorithm which estimates an object consistent with Boltzmann statistics within a corrupted region of the detector array. Two pre-fix Hubble Space Telescope (HST) Faint Object Camera (FOC) images are chosen as examples. The characterization consists of an assessment of photometric accuracy, precision, and resolution. The results presented here are parameterized in terms of signal-to-noise ratio and the size of the corrupted region. This study is conducted with a set of simulated data which closely match those of the HST FOC. As a demonstration, we apply these techniques to an actual data set obtained from the HST FOC. These FOC data are corrupted in a region which we restore using a synthetic PSF.
We present a technique for image processing and coding based on the polynomial transform. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In this article we use the polynomial transform to build applications, astronomical images being the main target. The applications that we propose are developed on pyramidal structures. Their purpose is to analyze an astronomical image at different spatial scales, leading to applications such as coding based on deblurring the image from its representation at a lower scale followed by a prediction scheme, and the reduction of noise in images produced by an astronomical acquisition system.
In this paper a novel higher-order neural network structure is proposed. A combination of a Sigma-Pi network (as feature extractor) in the first layer and a functional link network (as classifier) in the second layer is applied to a difficult problem of real-world gray-scale image recognition (human face classification) invariant to translation, change of scale, and in-plane rotation. The theoretical background of the problem as well as theoretical and practical results are presented.
The homomorphic filtering approach to image processing is well known as a way of compressing image dynamic range and increasing contrast. According to this approach, the input signal is assumed to consist of two multiplicative components: background and details. The standard processing of such signals involves a logarithm operation, separation into the two components using low-pass and high-pass filters, addition of the component estimates multiplied by different gain coefficients, and an exponential operation. In this paper we propose to use a median filter to derive the estimates of the multiplicative components. It was found that the proposed homomorphic filter has several useful properties in remote sensing image enhancement applications. Experimental results for simulated and real image processing are presented in the paper.
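A hedged sketch of the proposed idea, with the median filter providing the background estimate in the log domain (the window size and gain coefficients are assumptions):

    import numpy as np
    from scipy.ndimage import median_filter

    def homomorphic_median(img, size=15, gain_low=0.6, gain_high=1.8):
        logi = np.log1p(img.astype(float))            # multiplicative model becomes additive in the log domain
        background = median_filter(logi, size=size)   # slowly varying background estimate
        details = logi - background                   # detail component
        out = np.expm1(gain_low * background + gain_high * details)
        return np.clip(out, 0, 255)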
The importance of the nonuniformity (NU) correction of solid-state image sensors (IS) lies in enhancing the signal-to-noise ratio, especially in low-contrast signal detection. There are various methods for the NU correction of an IS, but some of the traditional methods require complex circuitry and are difficult to adjust, while others are slow and cannot satisfy real-time correction requirements. In this paper, a new method for NU correction is introduced, in which the nonuniformity is corrected by using the memory's function conversion. The chief advantages of this method are that the correction circuit is simple and easy to adjust, the conversion rate is relatively high (≥ 5 MHz), and the correction range is wide.
The research presented here focuses on the general problem of finding tools and methods to compare and evaluate parallel architectures in a particular field: computer vision. As there are several different parallel architectures proposed for machine vision, some means of comparison between them is necessary in order to employ the most suitable architecture for a given application. 'Benchmarks' are the most popular tools for machine speed comparison, but they do not give any information on the most convenient hardware structures for the implementation of a given vision problem. This paper tries to overcome this weakness by proposing a definition of the concept of a tool for the evaluation of parallel architectures (more general than a benchmark), and provides a characterization of the chosen algorithms. Taking into account the different ways to process data, it is necessary to consider two different classes of machines, MISD and (MIMD, SPMD, SIMD), offering different programming models and thus leading to two classes of algorithms. Consequently, two algorithms, one for each class, are proposed: 1) the extraction of connected components, and 2) a parallel region growing algorithm with data reorganization. The second algorithm tests the capabilities of the architecture to support the following: i) pyramidal data structures (initial region step), ii) a merge procedure between global and global information (regions adjacent to the growing region), and iii) a parallel merge procedure between local and global information (points adjacent to the growing region).
This paper proposes a simple and automatic method for recognizing the light sources used with various color negative film brands by means of digital image processing. First, we stretched the image obtained from a negative based on standardized scaling factors, then extracted the dominant color component among the red, green, and blue components of the stretched image. The dominant color component became the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of any single film brand and of all brands, with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass processing and printing machines.
The aim of this method is to yield a 3D perceptual description of indoor scenes in order to make the interpretation easier. Its first characteristic is that this description is built with planar facets by using an edge segmentation. First, an iterative and cooperative algorithm creates a 2D perceptual description by matching and grouping the segments simultaneously. Then, this 2D description is used to approximate each set of segments by a 3D planar facet in order to compute the initial 3D description. After some checks at different levels, we look for relations between these facets, and an algorithm groups the facets approximating parallel planes. The 3D perceptual description will make the interpretation of the scene easier.
Cornsweet and Yellott invented the nonlinear intensity dependent spread (IDS) filter based on the human visual system. They showed that this filter shares certain characteristics with the human visual system, such as Mach bands, Weber's law, and Ricco's law, which account for its bandpass characteristic, brightness constancy, and trade-off between stability and resolution. Until now, because of its nonlinearity and mathematical complexity, the study of its response has been limited to simple images such as step edges or sinusoidal gratings. Also, no good inverse models have been introduced that would allow this filter to be used for image compression. In this paper we provide a more complete model for the IDS filter and its inverse and show that, for all circularly symmetric spread functions, the bandpass characteristic of the IDS filter can be modeled as a spatial summation of spatially varying high-pass filters, and that the high-pass filter can be modeled as the Laplacian of a low-pass filter. We then show that the image can be recovered by inverting the effects of the Laplacian, followed by a deblurring stage, and then computing the reciprocal of the result.
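The claim that the high-pass part can be modeled as the Laplacian of a low-pass filter can be checked numerically for a Gaussian spread function; the sketch below is an illustration of that modeling step only, not of the IDS filter itself:

    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_laplace, laplace

    rng = np.random.default_rng(0)
    img, sigma = rng.random((128, 128)), 2.0
    bandpass_a = gaussian_laplace(img, sigma)            # Laplacian-of-Gaussian in one step
    bandpass_b = laplace(gaussian_filter(img, sigma))    # Laplacian applied after the low-pass filter
    # the two agree up to the discretization of the Laplacian operator
    print(np.abs(bandpass_a - bandpass_b).max(), np.abs(bandpass_a).max())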
In this paper a method for object recognition is proposed. This method combines local characteristics of the image, such as the location and intensity of edges, with prior information about the possible shape of the object in order to recognize objects in a noisy or faded image; finding objects in an aerial photograph is a good example of such a task. The method implements this idea by defining a function whose terms are measures of certain properties of the image. Each of these measures is defined so as to take its minimum value when the corresponding property is best met. This function is called the objective function. In our approach the object recognition problem is defined as the minimization of an objective function with terms that include the sharpness of the edges of the object, the smoothness of the object, and its level of similarity with the predefined models. Simulated annealing has been employed for the minimization of this objective function. Fourier descriptors have been used to represent the shape of the objects which, with some modifications, can yield a rotation-, scale-, and translation-invariant recognition system. The results obtained by applying the method to aerial photographs indicate its good ability to locate and recognize complex regions.
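A standard normalization of Fourier descriptors yields the invariances mentioned above; the sketch below follows that common recipe and may differ in detail from the authors' modifications:

    import numpy as np

    def fourier_descriptors(contour_xy, n_keep=16):
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]    # closed contour as complex samples
        F = np.fft.fft(z)
        F[0] = 0.0                                      # drop the DC term: translation invariance
        mags = np.abs(F)                                # drop the phase: rotation/start-point invariance
        mags /= mags[1] if mags[1] != 0 else 1.0        # normalize by the first harmonic: scale invariance
        return mags[1:n_keep + 1]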
High-resolution 3D information is useful in computer vision. The most common methods of acquiring 3D data are stereo techniques and laser range finders, but both have some problems in applications. In this paper, a novel stereo image matching algorithm directed by range images is proposed from the viewpoint of sensor fusion. First, the transformation between the range images and the camera images is established; then information extracted from the range images is used to constrain the search in stereo matching, since 3D feature points can be computed quickly from range data. As a result, the workload of point correspondence in stereo matching can be drastically reduced. Experiments have demonstrated the efficiency of the proposed method.
Image segmentation is almost always a necessary step in image processing. The threshold algorithms employed are based on the detection of local minima in the gray-level histogram of the entire image. In automatic cell recognition equipment, such as chromosome analysis or micronuclei counting systems, flexible and adaptive thresholds are required to account for variations in the gray-level intensities of the background and of the specimen. We have studied three different methods of threshold determination: 1) a statistical procedure, which uses interclass entropy maximization of the gray-level histogram; the iterative algorithm can be used for multithreshold segmentation, iteration step i contributing 2^(i-1) thresholds; 2) a numerical approach, which detects local minima in the gray-level histogram; the algorithm must be tailored and optimized for specific applications such as cell recognition, with two different thresholds for cell nucleus and cell cytoplasm segmentation; 3) an artificial neural network, which is trained with learning sets of image histograms and the corresponding interactively determined thresholds. We have investigated feed-forward networks with one and two layers, respectively. The gray-level frequencies are used as inputs for the net, and the number of different thresholds per image determines the output channels. We have tested and compared these different threshold algorithms for practical use in fluorescence microscopy as well as in bright-field microscopy. The implementation and the results are presented and discussed.
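For the first method, a Kapur-style two-class entropy-maximizing threshold is sketched below; the iterative multithreshold extension described above applies such a split repeatedly. Eight-bit gray levels are assumed:

    import numpy as np

    def entropy_threshold(img, bins=256):
        hist, _ = np.histogram(img, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, bins - 1):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 <= 0 or p1 <= 0:
                continue
            q0, q1 = p[:t] / p0, p[t:] / p1
            h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))   # entropy of the background class
            h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))   # entropy of the object class
            if h0 + h1 > best_h:
                best_h, best_t = h0 + h1, t                 # maximize the interclass entropy sum
        return best_t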
Neutron source images of a compressed DT target burn region were obtained in 1994 on the Phebus laser facility at CEL-V by using a penumbral imaging technique. The symmetric biconical aperture generates signal-dependent noise which degrades the spatial resolution of the system. The recorded image must be corrected for both instrumental effects and noise; then, by deconvolution with the point spread function of the aperture, we obtain the neutron source distribution. The information structure contained in the raw image and the reconstruction techniques used are explained. Practical and theoretical problems that need to be resolved, including the amount of information in the raw data, are discussed. Results and a comparative evaluation of these methods are also presented.
In the development of an automatic signature verification system, the primary problem that must be resolved is to find stable signature features. In this paper, a simple and flexible distortion measurement approach is proposed to resolve the instability problem of signatures. The total distortion between an input signature and a template signature is a combination of both static and dynamic distortions. We also select some features from the fast Fourier transform (FFT) spectrum by means of a window function and a special weighting function. Linear correlation is used to measure the similarity between the spectrum of an input signature and the spectrum of the template signature. Experimental results have shown that the two-layer decision process has a high verification rate.
This paper presents a system based on image processing and dedicated to counting the number of pedestrians walking through subway corridors. Results of its evaluation under real conditions and on a significant number of passengers are given. The paper describes the experimental prototype, able to deal with several cameras and working in real time on a PC, and discusses the difficulties met during the field trial procedure. The system can process images from several cameras hanging from the ceiling (the number of cameras is a function of the corridor width). Detection is achieved using cooperative techniques: a particular texture is laid on the ground and viewed by the camera(s). This method allows us to enhance detection accuracy, especially in poor lighting conditions. The local destruction of this texture in the corresponding image gives relevant information about the presence of people. Segmentation of the moving parts is then performed using mathematical morphology operators: detected pixels are thresholded and filtered, and finally all the connected parts are labelled. The last step consists of tracking the moving parts provided by the previous step: a sequential analysis of the object images is done and a movement prediction is performed. Finally, this tracking allows us to compute the number of passengers crossing a fictive line in both directions. This research has been undertaken jointly by INRETS and RATP (the French subway operator for Paris).
A new approach to optic flow calculation based on a well-known image interpolation technique is presented. In this approach, the minimization of the squared differences between a second-order Taylor expansion of pixel values in an image patch and the associated pixel values in the next image is used to calculate the optic flow for four degrees of freedom. As a result, two translations, a rotation, and a dilation can be calculated for every patch. The results presented here are obtained without performing any spatial or temporal presmoothing. Therefore, the algorithm is computationally inexpensive and can be implemented in parallel for every single patch, which contributes to more efficient (real-time) implementation of optic flow methods. For the two translations, using synthetic images, an accuracy similar to that reported in the study by Barron et al. (1994) is obtained.
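A reduced version of the idea, solving the least-squares system for the two translation parameters only and using a first-order expansion (the paper additionally estimates rotation and dilation per patch from a second-order expansion):

    import numpy as np

    def patch_translation(prev_patch, next_patch):
        prev_patch = prev_patch.astype(float)
        next_patch = next_patch.astype(float)
        gy, gx = np.gradient(prev_patch)                  # spatial derivatives of the patch
        gt = next_patch - prev_patch                      # temporal difference
        A = np.stack([gx.ravel(), gy.ravel()], axis=1)
        b = -gt.ravel()
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares solution of A [u v]^T = b
        return u, v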
In this paper, we utilize a Bayesian estimator based on a Markov random field prior image model in order to reconstruct a full-resolution color image from a single-chip CCD sensor. The fact that the image is a color image is incorporated by using coupled Markov random fields, one field for each color component. The observation is the sampled color image. The Bayesian estimate is approximately computed by using a deterministic method due to Besag called iterated conditional modes. A numerical example illustrating the performance of our proposed approach and that of an alternative interpolation approach is presented.
This paper describes work using historical film material, including what is believed to be the world's first feature-length film. The digital processing of historical film material permits many new facilities: digital restoration, electronic storage, automated indexing, and electronic delivery, to name a few. Although the work aims ultimately to support all of these facilities, this paper concentrates upon automatic scene change detection, brightness correction, and frame registration. These processes are fundamental to a more complete and complex processing system but, by themselves, could be immediately used in computer-assisted film cataloging.
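Minimal sketches of shot-change detection by histogram differencing and of a mean-matching brightness correction (the threshold and the gain model are assumptions; the paper's detectors may differ):

    import numpy as np

    def scene_changes(frames, bins=64, thresh=0.4):
        cuts, prev_hist = [], None
        for i, f in enumerate(frames):
            hist, _ = np.histogram(f, bins=bins, range=(0, 256))
            hist = hist / max(hist.sum(), 1)
            if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > thresh:
                cuts.append(i)                            # large histogram change: likely a cut
            prev_hist = hist
        return cuts

    def match_brightness(frame, reference):
        # simple gain so that the frame's mean brightness matches the reference frame's
        gain = reference.mean() / max(frame.mean(), 1e-6)
        return np.clip(frame.astype(float) * gain, 0, 255)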
In order to automatically analyze electron immunomicroscopy images, we have developed a computerized scheme for the detection and characterization of nanometer-size metal particles. The method is based upon selectively reconstructing an orthogonal wavelet decomposition of the image through the use of wavelet coefficient thresholding. The method is designed to work irrespective of the background and of the structural content of the image. Results are presented for the analysis of immunogold labelling of muscle tissues.
This paper presents a discussion and results in the field of automatic face identification. The implementation possibilities are presented, and the retained choices are motivated. The objective is to identify the person whose image is available from a grey-level camera. The approach is to extract characteristics that are then classified against characteristics extracted from a database. One section is devoted to the importance of a proper acquisition method, based on profile images. Several sections are more technical and deal with the profile extraction, the computation of the curvature, and the way the characteristics are derived. This is naturally followed by practical results. Finally, some perspectives are listed for integrating the present work into a practical application where several hundred people must be identified.
In recent years there has been increasing interest in good techniques for identifying the order of an autoregression. Although several testing procedures have been proposed for testing the adequacy of an estimated time series model, the available tests are not satisfactory: they are subjective, and the power of many of the tests to discriminate between different models is not good. Many testing procedures for identifying the order of an autoregressive model are based on the information theoretic criteria introduced by Akaike and by Schwartz and Rissanen. In this paper, a completely different technique is presented to recognize the order of an autoregressive model. It avoids the setting of a penalty function, the choice of which depends on the criterion used. The approach taken here is to apply the theory of generalized likelihood ratio testing for composite hypotheses. A procedure is used for calculating the exact likelihood function for a Gaussian zero-mean autoregressive process, and we show that the true maximum likelihood parameter estimates for this process can be obtained. A generalized likelihood ratio test, coupled with the proposed estimation scheme, is used to solve the order recognition problem. The decision rule about the model order for data observed from a Gaussian zero-mean autoregressive process is based on generalized log-likelihood ratio statistics, which are computed and combined by Fisher's method to form a new test for comparing model orders. In the new technique, the autoregressive order is estimated by increasing the order of the autoregression until the corresponding null hypothesis is accepted; the autoregressive model with the lowest order for which this hypothesis is accepted gives the estimated order. In essence, the proposed technique is an iterative procedure consisting of repeated use of the above decision rule. The results of computer simulations are presented as evidence of the validity of the theoretical predictions of performance.
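The increase-the-order-until-accepted rule can be sketched with a conditional least-squares AR fit and a nested likelihood ratio test; this is a simplified stand-in for the exact-likelihood estimation and the Fisher-combined statistics described above:

    import numpy as np
    from scipy.stats import chi2

    def ar_residual_var(x, p):
        # conditional least-squares fit of a zero-mean AR(p) model; returns the residual variance
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])   # lagged regressors
        y = x[p:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return (y - X @ a).var()

    def select_order(x, max_p=10, alpha=0.05):
        n = len(x)
        for p in range(1, max_p):
            s_p, s_p1 = ar_residual_var(x, p), ar_residual_var(x, p + 1)
            lr = n * np.log(s_p / s_p1)                   # generalized log-likelihood ratio statistic
            if lr < chi2.ppf(1 - alpha, df=1):            # the extra coefficient is not significant
                return p
        return max_p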
We present a novel scheme for the optical implementation of the hit-miss transform (HMT) in one step with only one optical correlator. This scheme uses complementary encoding to represent each pixel of the binary images, in which a pixel is encoded with two cells and the encoding of pixels with value 1 is complementary to that of pixels with value 0. Using an incoherent optical correlator to perform the correlation, the complementary-encoding hit-miss transform is demonstrated. Simulation and experimental results are given.
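For reference, the digital hit-miss transform that the optical scheme implements in one step can be written as two correlations, one on the image and one on its complement (a standard formulation, not the optical encoding itself):

    import numpy as np
    from scipy.ndimage import correlate

    def hit_miss(binary_img, hit, miss):
        img = binary_img.astype(int)
        # a pixel passes when every "hit" cell sees a 1 and every "miss" cell sees a 0
        hits = correlate(img, hit.astype(int), mode='constant') == hit.sum()
        misses = correlate(1 - img, miss.astype(int), mode='constant') == miss.sum()
        return hits & misses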
This paper presents a new bond-percolation-based model to determine the clique potential parameters of a Gibbs-Markov model used in image segmentation. Previously, experimentally determined fixed values, independent of the underlying image data, were used for these parameters. Using the proposed model, the clique potential parameters are derived as a function of local characteristics of the image under consideration. The suitability of this approach to multiscale processing via application of the renormalization group transformation is also discussed.
Depth-from-defocus methods are based on the relation between depth and the amount of defocus at each image point. Choosing a proper model for the defocusing operator and extracting its parameters is the main idea of this method. Generally, the defocusing operator can be modeled as a linear, circularly symmetric, low-pass, space-variant filter. In the proposed method the filter's bandwidth is used as a measure of depth. The importance of the method is the introduction of a simple procedure for extracting the filter's bandwidth that is mathematically well founded. No prior assumption about the shape of the filter is required, and a simple and efficient way to obtain the measure of depth is presented. Experimental results of applying the proposed measure to synthetic and real images indicate its high capability of resolving depth in a single image.
This paper proposes an implementation of tetrahedral interpolation based on a direct-addressing structure of look-up tables for fast device-dependent/independent color transformation. CRT and ink jet printer models were used for exemplification and the procedure was used for colorimetric calibration, but the method is generally applicable to any color device. The models are based on look-up tables and tetrahedral interpolation of the LUT values. The direct transformation completely avoids the searching procedure for the tetrahedron that includes the interpolated color. For the reverse transformation, the searching procedure is applied to a reduced set of tetrahedra and is optimized based on the use of the direct-addressing table structure. The time performance of the algorithm depends on the number of calibration colors and on whether the direct or inverse transformation is applied. The volume of out-of-gamut colors to be mapped also influences the performance of the transformations. The color gamuts of a CRT and of an ink jet printer are represented and compared in Lab and Luv color coordinates in order to select the optimum space in which to perform the mapping of the out-of-gamut colors.
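The core interpolation step can be sketched as follows, using the standard sort-based decomposition of each LUT cube into six tetrahedra (the LUT layout and the normalization of inputs to [0, 1] are assumptions):

    import numpy as np

    def tetra_interp(lut, rgb):
        # lut: (N, N, N, 3) table over [0, 1]^3; rgb: input triple in [0, 1]
        n = lut.shape[0] - 1
        pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * n
        base = np.minimum(pos.astype(int), n - 1)          # lower corner of the enclosing cube
        frac = pos - base
        order = np.argsort(-frac)                          # sorted fractions select the tetrahedron
        f = frac[order]
        corners = [base.copy()]
        for ax in order:                                   # walk corner to corner along the sorted axes
            nxt = corners[-1].copy()
            nxt[ax] += 1
            corners.append(nxt)
        weights = np.array([1.0 - f[0], f[0] - f[1], f[1] - f[2], f[2]])
        out = np.zeros(lut.shape[-1])
        for w, c in zip(weights, corners):
            out += w * lut[c[0], c[1], c[2]]
        return out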
This paper proposes an algorithm for approximate affine plane matching of aerial images. The concept of a projective transform associated with the partial Hausdorff distance is used in order to reduce the computational burden and meet real-time constraints. A potential parallelization of the algorithm for SPMD computers is proposed as well. A running-time evaluation of the sequential algorithm is given.
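The directed partial Hausdorff distance at the heart of such matching can be sketched as follows; the k-th ranked nearest-neighbour distance replaces the maximum so that a fraction of outliers is tolerated (the rank fraction is an assumption):

    import numpy as np
    from scipy.spatial import cKDTree

    def partial_hausdorff(A, B, fraction=0.8):
        # directed partial Hausdorff distance from point set A to point set B
        d, _ = cKDTree(B).query(A)           # nearest-neighbour distance for every point of A
        d.sort()
        k = max(int(fraction * len(d)) - 1, 0)
        return d[k]                          # the k-th ranked distance ignores the worst (1 - fraction)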
A color image segmentation scheme for use in demanding applications is presented. A complexity reduction is performed by transforming the 3D RGB color space into a 1D length space. The constraints that are put on this transform will be enumerated, and a specific Hilbert fractal-based transform will be selected. The segmentation scheme has been designed to preserve the intrinsic multiresolution clustering properties, allowing for a fast and consistent iteration towards a given range of regions or contour pixels. VLSI implementation will be discussed.
The field of knowledge sharing designed from conceptual specifications (ontologies) and the field of logical modeling of goal achievement have an impact on processing design through the use of image data explanations, from sensors to the physics which deals with the real world. We report in this paper some illustrations and preliminary models dealing with the enhancement of the relations which exist between conceptualization and image processing (IP). In this framework, the paper focuses on elementary models which are in correspondence with categories of IP objects, functions, and properties. The aim of this approach is to formalize conceptual domains from conceptual specifications (ontologies), and to use these conceptual models in order to guarantee the adequacy between domains and to establish a taxonomy of each domain.
Comparing shapes is important for both recognition and geometry-based grouping. In the case of perspective views of planar shapes, such comparison will typically be based on projective invariants. Yet there are quite a number of practical cases where one can do better and exploit simpler invariants of subgroups of the planar projectivities. This paper sketches a systematic approach, based on subgroups defined by the structures they keep fixed in the image. In particular, the paper focuses on fixed points, fixed lines, lines of fixed points, and pencils of fixed lines. A complete classification of the resulting subgroups is given and the most interesting cases are identified. For these cases invariants are given and examples illustrate their use. In order to illustrate the wider scope, perspectively skewed mirror symmetry is discussed, since it entails a different type of fixed structure.
In this paper, we investigate the problem of optimal feature selection for texture recognition for the case when the statistical properties of the image general population are satisfactorily represented by an a priori classified training set of small size (i.e., the number of images in the training set is much smaller than the number of pixels in an image). We examine criteria defined by the trace norm of a certain self-conjugate operator constructed in a special manner from the elements of the training set. The Karhunen-Loeve expansion, the Hotelling criterion, and some of their modifications are considered for the recognition of computer-generated regular textures distorted with white noise. A comparative analysis of the efficiency of the criteria is presented for several possible classifications of the training set.
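A sketch of the small-training-set setting for a Karhunen-Loeve basis: with far fewer images than pixels, the basis vectors are obtained from the small Gram matrix of the training set (this illustrates the KL expansion only, not the trace-norm criteria studied in the paper):

    import numpy as np

    def kl_basis(train_images, n_components=8):
        X = np.stack([im.ravel().astype(float) for im in train_images])   # rows = training images
        mean = X.mean(axis=0)
        Xc = X - mean
        # N images with N << number of pixels: work with the N x N Gram matrix
        U, s, _ = np.linalg.svd(Xc @ Xc.T)
        basis = Xc.T @ U[:, :n_components]                                 # map back to pixel space
        basis /= np.linalg.norm(basis, axis=0, keepdims=True) + 1e-12
        return mean, basis                                                 # features: basis.T @ (x - mean)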
In this paper, we study the invariant subspace under the fractional linear transform in the dual space, and give the general solution for the trihedrons corresponding to a line drawing.
This paper focuses on the problem of the massively parallel implementation of image processing algorithms. In previous theoretical studies, the parallel software requirements for implementing image processing algorithms were pointed out, and a test algorithm representative enough of the requirements for edge and region segmentation was chosen. Our goal here is to detail its implementation. The proposed test algorithm was implemented with the data-parallel programming model on the Connection Machine CM-5 in C*, which is an extension of the C programming language. The crucial points of the parallel implementation are underlined. Edge point detection requires only parallel operations and regular communications. Conversely, region extraction and edge chaining require irregular communications; therefore, for better efficiency, in both cases the original algorithms were modified. These studies are related to the problem of finding tools and methods to compare and evaluate parallel architectures; one of the two algorithms proposed there is derived from this one.
This paper proposes an approach to the A3 (adequation: algorithm-architecture) problem in the context of image processing (a region growing algorithm) and parallel heterogeneous processing. The matching of the software graph extracted from an application and the graph extracted from the underlying parallel computer is guided by a cost function. The different parameters of the cost function are quantified by means of indices which characterize the hardware usage at the run time of the applications.
Currently, most portable digital imaging systems use standard 3.5 inch floppies for storage of images and data. Driven by an increasing demand for smaller, lighter, lower power devices, the space restrictions for a digital storage device have become more severe. One solution is to provide a separate device containing the floppy drive, with its own power source, connected to the imaging system with external cables. A better method is the use of PCMCIA memory cards. PCMCIA (Personal Computer Memory Card International Association) memory cards have many advantages over floppy drives. They are smaller, more rugged, faster, and require less power. PCMCIA memory cards can be formatted to accept DOS files, making it easy to move files to a PC for archiving and image processing. Including a PCMCIA socket and controller circuitry in an imaging system provides a convenient upgrade path for future capabilities. The problems associated with adding PCMCIA memory card storage to a portable imaging system are formidable, however. There is little or no off-the-shelf software to support these cards in an embedded system. Since it is a relatively new technology, there is very little public, practical knowledge to guide the embedded systems designer. The PCMCIA standards are lengthy and complicated, and still changing. Memory card technology is changing and card prices are still much higher than for floppies. These and other issues were encountered on a recent development project at Inframetrics. This paper discusses how these issues were addressed and assesses where PCMCIA is headed.
This paper describes a new approach for computer-based visual grouping. A number of computational principles are defined, related to results from neurophysiological and psychophysical experiments. The grouping principles are subdivided into two groups: 'first-order processes' perform local operations on 'basic' features such as luminance, color, and orientation, while 'second-order processes' consider bilocal interactions (stereo, optical flow, texture, symmetry). The computational scheme developed in this paper relies on the solution of a set of nonlinear differential equations, referred to as 'coupled diffusion maps', which obey the prescribed computational principles. Several maps, corresponding to different features, evolve in parallel, while all computations within and between the maps are localized in a small neighborhood. Moreover, interactions between maps are bidirectional and retinotopically organized, properties that also underlie processing in the human visual system. Within this framework, new techniques are proposed and developed for, e.g., the segmentation of oriented textures, stereo analysis, and optical flow detection. Experiments show that the underlying algorithms are successful for first-order as well as second-order grouping processes and demonstrate the promising possibilities such a framework offers for a large number of low-level vision tasks.
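A minimal sketch of the coupled-diffusion idea, assuming the simplest possible discretization: two retinotopically aligned feature maps each evolve by a local (nearest-neighbor) diffusion term and exchange information through a bidirectional coupling term. The explicit Euler update and linear coupling are assumptions for illustration only, not the paper's nonlinear equations.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian: all computations stay within a small neighborhood."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def coupled_diffusion(u, v, steps=200, dt=0.1, d=0.5, k=0.2):
    """Two feature maps diffuse in parallel and interact bidirectionally."""
    for _ in range(steps):
        du = d * laplacian(u) + k * (v - u)   # map u is pulled toward map v
        dv = d * laplacian(v) + k * (u - v)   # map v is pulled toward map u
        u, v = u + dt * du, v + dt * dv
    return u, v

rng = np.random.default_rng(2)
lum, orient = rng.random((2, 64, 64))         # e.g. luminance and orientation maps
lum, orient = coupled_diffusion(lum, orient)
```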
The classical raster (i.e., row-by-row) image scan does not match the data processing flow internal to the pyramid structure obtained by the 2D fast wavelet transform of a 2^N × 2^N image with a (2γ + 1) × (2γ + 1) mother wavelet, thereby introducing large latencies, significant memory requirements, and irregular processor activity in parallelized implementations. A new algorithm is proposed in which all image data are scanned following a fractal, space-filling curve, which, compared to the raster scan, offers the following advantages: i) it reduces the calculation memory by almost a factor of 2 while maintaining a simple address calculation scheme; ii) the latency in the first N − γ − 3 levels of the pyramid, which contain a high percentage of the pyramid data, is minimized, leading to improved block-oriented post-processing capabilities (e.g., vector quantization for image compression); iii) the calculations are spread more uniformly over one frame slot; and iv) the process is naturally subdivided into similar subproblems, increasing the granularity of the algorithm without introducing severe communication bottlenecks for parallel architectures.
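The abstract does not name the particular fractal curve; as one example, the sketch below generates a Morton (Z-order) scan of a 2^N × 2^N image, in which the four children of every quadtree cell are visited contiguously, the property that favors block-oriented post-processing. The choice of the Morton curve is an assumption for illustration.

```python
def morton_index(y, x, n_bits):
    """Interleave the bits of (y, x) to get the Z-order position of a pixel."""
    z = 0
    for b in range(n_bits):
        z |= ((x >> b) & 1) << (2 * b)        # x bits go to even positions
        z |= ((y >> b) & 1) << (2 * b + 1)    # y bits go to odd positions
    return z

def zorder_scan(n):
    """Scan order for a 2**n x 2**n image along the Z (Morton) curve."""
    side = 1 << n
    coords = [(y, x) for y in range(side) for x in range(side)]
    return sorted(coords, key=lambda p: morton_index(p[0], p[1], n))

print(zorder_scan(2))   # 4x4 image: each 2x2 block is visited contiguously
```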
A theory for designing optical-electronic image processing computer systems is presented. A model of parallel image processing systems is considered, based on the principle of processing-function decomposition. The possibilities for implementing different image processing operations with optical and electronic computing means are analyzed. A structural model of a computer system, organized as a pipeline of parallel computing devices, is examined. The time expenditure of the system while processing an image or a series of images is evaluated, and its dependence on the pipeline length and on the ratio of optical to electronic devices and their processing times is examined. A design method for image processing systems operating in static mode is presented.
New architectures of image processing computer systems based on Fourier-descriptor analysis algorithms have been developed. A new method of organizing the computing process on the basis of Fourier-descriptor image features is proposed. The structures of two problem-oriented optical-electronic computer systems are developed, and the time expenditure of the identification step and the total time expenditure of the systems are estimated.
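For reference, here is a minimal sketch of the Fourier-descriptor feature computation itself, independent of the optical-electronic architecture: descriptors of a closed contour normalized for translation, scale, and rotation/starting point. The normalization choices are standard assumptions, not necessarily those of the paper.

```python
import numpy as np

def fourier_descriptors(contour, n_keep=10):
    """Fourier descriptors of a closed contour given as (x, y) points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # encode the points as complex numbers
    Z = np.fft.fft(z)
    Z[0] = 0.0                               # drop the DC term: translation invariance
    Z = Z / np.abs(Z[1])                     # divide by the first harmonic: scale invariance
    return np.abs(Z[1:n_keep + 1])           # magnitudes: rotation/start-point invariance

theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
ellipse = np.stack([3.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)
print(fourier_descriptors(ellipse, n_keep=5))
```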
We propose a 3D edge detector for 3D gray-level voxel data, generalizing the well-known Sobel operator for 2D images. Three differentiation operators are defined, corresponding to the three principal directions (coordinate axes). Each operator computes the sum of the intensity differences between certain pairs of neighbors of a voxel along the corresponding principal direction. The proposed edge detector is the square root of the average of the squares of the three differentiation operators. An implementation of the proposed operator is demonstrated on biological images.
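A hedged sketch of the operator as described: one differentiation operator per axis, formed by summing intensity differences between the neighbor pairs straddling each voxel, followed by the square root of the average of the squared responses. The equal weighting of the neighbor pairs is an assumption; the paper's operator may weight them differently.

```python
import numpy as np

def sobel3d(vol):
    """3D Sobel-like edge magnitude for a gray-level voxel volume."""
    vol = vol.astype(float)
    responses = []
    for axis in range(3):
        # Pairwise differences between the neighbors straddling each voxel
        # along this principal direction.
        diff = np.roll(vol, -1, axis=axis) - np.roll(vol, 1, axis=axis)
        # Sum the differences over the 3x3 plane of neighbor pairs
        # perpendicular to the axis (equal weights assumed).
        g = diff
        for other in (a for a in range(3) if a != axis):
            g = g + np.roll(g, 1, axis=other) + np.roll(g, -1, axis=other)
        responses.append(g)
    # Square root of the average of the squared directional responses.
    return np.sqrt(sum(g * g for g in responses) / 3.0)

vol = np.zeros((32, 32, 32))
vol[:, :, 16:] = 1.0              # a flat intensity step along the third axis
edges = sobel3d(vol)              # strongest response on the step boundary
```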
Visualization of large multidimensional magnetic resonance images (MRI) can be aided by reducing the noise and redundancy in the data. We present details of an automatic data compression and region segmentation technique applied to medical MRI data sampled over a wide range of inversion recovery times (TI). The example images were brain slices, each sampled with 15 different TI values varying from 10 ms to 10 s. Visually, details emerged as TI increased, but some features faded at higher values. A principal component analysis reduced the data by over two-thirds without noticeable loss of detail. Conventional image clustering and segmentation techniques fail to produce satisfactory results on MR images. Among stochastic methods, independent Gaussian random field (IGRF) models were found to be suitable when the region classes have differing grey-level means. We developed an automatic image segmentation technique, based on the stochastic nature of the images, that operates in two stages. First, the IGRF model parameters are estimated using a modified fuzzy clustering method. Second, image segmentation is formulated as a statistical inference problem: using a maximum likelihood function, we estimate the class of each pixel from the IGRF model parameters. The paper elaborates on this approach and presents practical results.
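A minimal sketch of the principal-component reduction step described above, treating each pixel as a 15-dimensional vector of TI samples and keeping only the leading components; the number of retained components and the function names are assumptions for illustration.

```python
import numpy as np

def pca_reduce(stack, n_keep):
    """Reduce a multi-TI stack (n_ti, rows, cols) to n_keep component images."""
    n_ti, rows, cols = stack.shape
    X = stack.reshape(n_ti, -1).T            # one n_ti-dimensional vector per pixel
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)      # small (n_ti x n_ti) covariance matrix
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    comps = evecs[:, ::-1][:, :n_keep]       # leading principal directions
    scores = Xc @ comps                      # projected pixel vectors
    explained = evals[::-1][:n_keep].sum() / evals.sum()
    return scores.T.reshape(n_keep, rows, cols), explained

stack = np.random.default_rng(3).random((15, 128, 128))   # 15 TI samples per pixel
reduced, frac = pca_reduce(stack, n_keep=5)                # 15 images -> 5 images
```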
Active contours, known as snakes, have been widely used for medical image analysis. In this paper a new algorithm for 3D segmentation is proposed based on the snake technique; it incorporates both gradient and textural features as image forces, together with a new external force that improves the robustness of a standard snake to image noise. After the contour has been well defined on one slice of a volume data set by a 2D segmentation method, the contours of the subsequent slices can be obtained by our modified snake method. Experiments on medical volume images demonstrate the validity of the algorithm.
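The paper's textural image force and new external force are not specified in the abstract, so the sketch below shows only a classical explicit snake update with a smoothness (internal) force and a gradient-based external force; it is a baseline illustration under those assumptions, not the proposed algorithm.

```python
import numpy as np

def evolve_snake(pts, edge_map, alpha=0.2, gamma=1.0, iters=200):
    """Explicit evolution of a closed snake toward high values of edge_map.

    pts: (n, 2) array of (row, col) contour points; edge_map: 2D array whose
    ridges (e.g. gradient magnitude) attract the contour.
    """
    fy, fx = np.gradient(edge_map)            # external force field (points uphill)
    rows, cols = edge_map.shape
    for _ in range(iters):
        # Internal force: discrete second derivative keeps the contour smooth.
        internal = np.roll(pts, 1, axis=0) - 2.0 * pts + np.roll(pts, -1, axis=0)
        # External force sampled at the nearest pixel of each contour point.
        r = np.clip(np.rint(pts[:, 0]).astype(int), 0, rows - 1)
        c = np.clip(np.rint(pts[:, 1]).astype(int), 0, cols - 1)
        external = np.stack([fy[r, c], fx[r, c]], axis=1)
        pts = pts + gamma * (alpha * internal + external)
    return pts

# Synthetic slice: a bright square whose boundary forms the edge map.
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
edge_map = np.hypot(*np.gradient(img))
theta = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
init = np.stack([64 + 50 * np.sin(theta), 64 + 50 * np.cos(theta)], axis=1)
contour = evolve_snake(init, edge_map)
```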
We propose a new method to find the corpus callosum automatically in sagittal brain MR images. First, we calculate the statistical characteristics of the corpus callosum and obtain shape information. The recognition algorithm consists of two stages: extracting regions satisfying the statistical characteristics (gray-level distributions) of the corpus callosum, and finding a region matching the shape information. An innovative feature of the algorithm is that we adaptively relax the statistical requirement until we find a region matching the shape information. To match the shape information, we propose a new directed-window region growing algorithm instead of conventional contour matching. Experiments show promising results.
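A hedged sketch of the adaptive-relaxation idea: widen the accepted gray-level band around the corpus callosum statistics until some connected region passes a shape test. The shape_ok predicate and the relaxation schedule are hypothetical placeholders, and the region analysis here uses plain connected components rather than the paper's directed-window region growing.

```python
import numpy as np
from scipy import ndimage   # connected-component labelling

def find_region(img, cc_mean, cc_std, shape_ok, k0=1.0, dk=0.25, k_max=4.0):
    """Relax the gray-level requirement until a region matches the shape test.

    Pixels within k standard deviations of the expected corpus-callosum mean
    are kept; k grows until some connected region passes shape_ok, a
    hypothetical predicate standing in for the paper's shape information.
    """
    k = k0
    while k <= k_max:
        mask = np.abs(img - cc_mean) <= k * cc_std     # statistical requirement
        labels, n = ndimage.label(mask)                # candidate regions
        for lab in range(1, n + 1):
            region = labels == lab
            if shape_ok(region):
                return region
        k += dk                                        # adaptively relax and retry
    return None

shape_ok = lambda region: 30 < region.sum() < 5000     # placeholder shape test
img = np.random.default_rng(5).normal(100.0, 20.0, (256, 256))
region = find_region(img, cc_mean=150.0, cc_std=10.0, shape_ok=shape_ok)
```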
When carrying out medical imaging based on detection of isotopic radiation from internal organs such as the lungs or heart, distortion and blur arise as a result of organ motion during breathing and blood supply. Consequently, image quality declines despite the use of expensive high-resolution devices, so such devices are not exploited fully, and there is a need to overcome the problem in alternative ways. One such alternative is image restoration. We have developed a method for numerically calculating the optical transfer function (OTF) for any type of image motion. The purpose of this research is the restoration of original isotope images (of the lungs) by reconstruction methods that depend on the OTF of the real-time relative motion between the object and the imaging system. This research uses different algorithms for the reconstruction of an image according to the OTF of the lung motion, which occurs in several directions simultaneously. One way of handling the 3D movement is to decompose the image into several portions, restore each portion according to its motion characteristics, and then combine all the portions back into a single image. An additional complication is that the image was recorded at different angles. The application of this research is in medical systems requiring high-resolution imaging; the main advantage of this approach is its low cost compared to conventional approaches.
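As a simplified illustration of OTF-based restoration, the sketch below builds the OTF of uniform linear motion from its point spread function and restores one blurred image portion with a Wiener filter. The linear-motion model, the noise-to-signal ratio, and the circular-convolution assumption are illustrative stand-ins, not the paper's numerical OTF calculation for arbitrary motion.

```python
import numpy as np

def linear_motion_otf(shape, blur_len, axis=1):
    """OTF of uniform linear motion over blur_len pixels along one axis."""
    psf = np.zeros(shape)
    index = [0] * len(shape)
    for i in range(blur_len):                 # a line of equal weights
        index[axis] = i
        psf[tuple(index)] = 1.0 / blur_len
    return np.fft.fft2(psf)

def wiener_restore(blurred, otf, nsr=0.01):
    """Wiener restoration of one image portion, given its motion OTF."""
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

img = np.random.default_rng(4).random((128, 128))    # stand-in image portion
otf = linear_motion_otf(img.shape, blur_len=7)       # horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf)
```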
In this paper, we describe a 3D computed tomography (3D CT) system using monochromatic x-rays generated by synchrotron radiation, which performs a direct reconstruction of the 3D volume image of an object from its cone-beam projections. For the development of the 3D CT system, the scanning orbit of the x-ray source required to obtain complete 3D information about an object and the corresponding 3D image reconstruction algorithm are considered. Computer simulation studies demonstrate the validity of the proposed scanning method and reconstruction algorithm. A prototype experimental system was constructed, and basic phantom examinations and material-specific CT images obtained by energy subtraction with this system are shown.
Digital image processing is an important area in medical applications. In this paper we present 3D reconstruction and 3D calculation for medical images. We have developed a medical image processing system (NAI200) that provides the preprocessing needed before 3D reconstruction or calculation; for example, zoom, negation, exponential and logarithmic transforms, histogram equalization, scaling, filtering, and other operations are included. 3D reconstruction and 3D calculation require the outlines of organs or disease foci, so we introduce several methods for drawing these outlines automatically. Different organs call for different methods; entropy-based methods are important, but in the great majority of cases manual outlining may be the simplest and most precise. Stereo images of organs and foci are obtained not only from slice images but also in other ways, which we describe. For 3D calculation, we first compute the areas in the slice images and then use these areas to calculate the volumes of organs or foci; we also explain how to calculate a patient's ventricular volumes. All of these functions are provided by our NAI200 medical image processing system, and its other capabilities are also mentioned.
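A minimal sketch of the slice-area approach to volume calculation mentioned above: count the pixels inside the outline on each slice, convert to area with the pixel spacing, and sum area times slice thickness. The spacing and thickness values are assumptions for illustration.

```python
import numpy as np

def volume_from_slices(masks, pixel_spacing_mm, slice_thickness_mm):
    """Volume of an organ or focus from its binary outline masks, slice by slice.

    masks: (n_slices, rows, cols) boolean array, True inside the outline.
    """
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]        # mm^2 per pixel
    slice_areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area
    return float(slice_areas.sum() * slice_thickness_mm)          # mm^3

masks = np.zeros((10, 64, 64), dtype=bool)
masks[:, 20:40, 20:40] = True                 # a 20x20 pixel region on 10 slices
print(volume_from_slices(masks, (0.5, 0.5), 5.0))   # 400 * 0.25 * 10 * 5 = 5000 mm^3
```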
Nikolai A. Konovalov, Vladimir B. Veselovsky, Valeri B. Furmanov, Vladimir I. Kovalenko, Nikolay I. Lakhno, Nikolai D. Kovika, Leonid V. Novikov, Vadim N. Scherbina, Lyubov M. Zlydennaya
To automate and control the technological process of high-frequency welding of pipes of different diameters, a hardware-software complex (APC) was designed. The APC makes it possible to perform cine and television recording of the process under production conditions at the '159-529' mill of the Novomoskovsky Tube Rolling Mill (Ukraine). With the help of the APC under real mill operating conditions, the variation of the flashing-zone length and of the convergence angle as functions of the pipe welding speed was investigated, and the zone of bridges (jumpers) was determined. Comparison of theoretical and experimental data made it possible to determine the welding speed that yields the highest-quality welded joints over the pipe product range of the '159-529' mill.