From the standpoint of engineering application and design, this paper investigates the basic technologies of a dual-band (3-5 µm, 8-14 µm) imaging system. The ratio of the radiant energy in the two bands differs between temperature regions. The results indicate that when the target and background lie in different temperature regions, the decision rule for discriminating targets can be adapted according to the contrast of their dual-band energy. A calculation model is given by which the transition temperature can be computed accurately under various conditions, and the relationship among temperature, the surface emissivities of target and background, and the dual-band radiant-energy ratio can be determined. The relation between the signal-to-noise ratio (SNR) and the minimum resolvable temperature difference (MRTD) of a dual-band imaging system is discussed and analyzed in detail, and the difference between single-band and dual-band imaging systems is described. The calculations show that the SNR and MRTD of a dual-band imaging system are 5-10 times better than those of a single-band system. A principal scheme for realization is investigated. Finally, a practical method of dual-band image fusion is studied: with dual-band (3-5 µm, 8-14 µm) detectors of the same size on the same focal plane, the processing and computational complexity of the dual-band system can be reduced, and analysis of the imaging characteristics leads us to propose pixel-level fusion realized with a main-self fusion structure.
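The temperature dependence that drives the band-ratio decision rule can be illustrated with a short numerical sketch. This is not the paper's calculation model, only a plain Planck-law band integration with standard radiation constants; `band_emittance` and `band_ratio` are illustrative names:

```python
import numpy as np

C1 = 3.7418e8    # first radiation constant, W·µm^4·m^-2 (wavelength in µm)
C2 = 1.4388e4    # second radiation constant, µm·K

def band_emittance(T, lo, hi, eps=1.0, n=400):
    """Blackbody band emittance (W/m^2): Planck's law integrated over [lo, hi] µm."""
    lam = np.linspace(lo, hi, n)
    M = eps * C1 / lam**5 / (np.exp(C2 / (lam * T)) - 1.0)
    return np.sum((M[:-1] + M[1:]) * np.diff(lam)) / 2.0   # trapezoid rule

def band_ratio(T):
    """Ratio of 3-5 µm to 8-14 µm radiant emittance: rises steeply with T."""
    return band_emittance(T, 3.0, 5.0) / band_emittance(T, 8.0, 14.0)
```

Because the ratio is a steep monotonic function of temperature, a transition temperature where the target/background dual-band contrast flips can be found by simple bisection on the contrast once the two emissivities are fixed.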
Image fusion has attracted increasing attention for its ability to integrate information from multiple sensors, and it has found applications in many fields such as medicine, remote sensing, computer vision, and weather forecasting. In this paper, several pixel-level fusion algorithms are implemented and their effects analyzed. A new, efficient method named the Contrast Modulation-Pyramid algorithm is developed. Its implementation on a Digital Signal Processor is designed and the corresponding software written. The results show that image fusion can be completed at real-time or quasi-real-time speed.
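As a rough illustration of pixel-level fusion, here is a simple local-contrast selection rule. This is a minimal sketch, not the Contrast Modulation-Pyramid algorithm itself; `local_contrast` and `fuse_by_contrast` are illustrative names:

```python
import numpy as np

def local_contrast(img, k=3):
    """Local variance in a k x k window (a simple contrast measure)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

def fuse_by_contrast(a, b, k=3):
    """Pixel-level fusion: at each pixel keep the source with higher local contrast."""
    mask = local_contrast(a, k) >= local_contrast(b, k)
    return np.where(mask, a, b)

# Toy example: image `a` carries detail on the left half, `b` on the right.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (16, 16))
a = scene.copy(); a[:, 8:] = scene[:, 8:].mean()   # right half flattened in a
b = scene.copy(); b[:, :8] = scene[:, :8].mean()   # left half flattened in b
fused = fuse_by_contrast(a, b)
```

The fused image recovers detail from whichever source preserves it; a pyramid method applies the same selection idea across multiple scales.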
A new hardware implementation of histogram equalization using a Field Programmable Gate Array (FPGA) is presented. Histogram equalization is an effective means of image enhancement, but its real-time processing requires a great deal of memory and very high processing speed. The logic-cell architecture of the Xilinx XC4000 family is well suited to real-time pixel-level operations such as histogram equalization. A core is constructed that performs both the histogram statistics and the equalization. As a result, the circuit is more efficient than earlier designs and takes very little time to complete the calculation and generate the look-up table. The equalization technique is described and implementation results on a Xilinx XC4010 FPGA are presented.
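The histogram-statistics and look-up-table stages that the FPGA core implements can be sketched in software (a standard formulation, not the core's exact datapath):

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit grayscale image: build the
    histogram, form the CDF, and map it into a 256-entry look-up table,
    mirroring the statistics / LUT stages of a hardware core."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first nonzero CDF value
    n = img.size
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp confined to [50, 90] spreads to the full 0-255 range.
img = np.tile(np.linspace(50, 90, 64).astype(np.uint8), (64, 1))
out = equalize(img)
```

In hardware, the histogram pass and the LUT application are pipelined so each pixel is touched only twice.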
The detection of point targets in complex environments is of critical importance to Infrared Search and Track (IRST) systems. This paper presents wavelet techniques for analyzing and improving the detection performance of IRST systems; only single-frame processing is addressed. The wavelet decomposition has proven to be an effective method for analyzing the singularities of a signal. By analyzing how pulse signals, square waves, and white noise transform across different wavelet scales, an efficient way to extract point targets in the presence of large clouds is presented. This approach outperforms local adaptive matched filters.
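A minimal single-frame sketch of the idea follows, using image-minus-local-mean as a crude stand-in for wavelet detail coefficients (the paper's multiscale analysis is more refined; `highpass` and `detect_points` are illustrative names):

```python
import numpy as np

def highpass(img, k=5):
    """Image minus its local mean: point targets survive, smooth cloud
    backgrounds are suppressed, much like a fine-scale detail band."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return img - win.mean(axis=(-1, -2))

def detect_points(img, nsigma=5.0):
    """Threshold the detail residual at nsigma standard deviations."""
    d = highpass(img)
    return np.argwhere(d > nsigma * d.std())

# Synthetic frame: smooth "cloud" gradient + noise + one point target.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
frame = 0.02 * x + 0.01 * y + rng.normal(0, 0.05, (64, 64))
frame[40, 23] += 3.0                     # the point target
hits = detect_points(frame)
```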
The hardware structure of a nonlinear video-authoring system is described in this paper. Employing the TMS320C80, one of the most powerful multimedia video processors, a PCI-based real-time nonlinear video-authoring system has been built that also serves as a general-purpose high-performance image/graphics processing platform. The system can decode and encode a standard composite/separate NTSC/PAL video signal simultaneously. Because the TMS320C80 has a MIMD structure optimized for video processing and can perform 2 billion RISC-like operations per second, a single-chip MVP suffices to process the digital video/graphics data. By using the AMCC S5933 as the PCI controller and a synchronous bidirectional FIFO as the shared dual-port memory, a 66 MB/s bidirectional PCI data path is established between the PC host and the add-in video-authoring board. Through the PCI interface, the authoring system can not only make convenient use of the rich video and graphics resources that already exist on the PC, but also store the processed video data to an Ultra SCSI hard disk in real time.
Many textures, such as woven fabrics, consist of repeating texture elements (textons). To handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. The method uses adaptive wavelet bases matched to the texture pattern: the 2D wavelet transform employs two different adaptive orthonormal wavelet bases, one for rows and one for columns, which differ from the Daubechies wavelet bases. The orthonormal wavelet bases for rows and columns are generated by a genetic algorithm. Experimental results demonstrate the ability of the adaptive wavelet bases to characterize the texture and locate defects in it.
This paper presents a neural-network method for characterizing a color video camera. A multilayer feedforward network trained with the error back-propagation learning rule is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to the RGB color space. Using a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as test data, the results show a mean error of 2.9 for the training data and 4.0 for the test data in a 256^3 RGB space.
In this paper, a new hybrid neural-network model for gray-level image recognition is presented. Through image segmentation based on vector quantization, carried out by Kohonen's self-organizing feature map, the gray-level image is mapped onto a Hopfield network in which each neuron has several states. The performance of this model is compared with that of the traditional model. The new model not only uses fewer neurons and interconnections but also has better error-correction capability.
The wavelet transform is a relatively new branch of mathematics that has developed rapidly in recent years. Because it overcomes some shortcomings of the Fourier transform, wavelet analysis has received increasing attention and is widely used in many fields of engineering, especially image processing. In this paper, after a wavelet pyramidal decomposition of the image, different quantization and coding schemes are applied to each subimage in accordance with the statistical and distribution properties of its coefficients. Computer simulation shows that this compression system attains good reconstructed image quality while maintaining a satisfactory compression ratio.
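A one-level version of the scheme can be sketched with a Haar decomposition and per-subband uniform quantization. The step sizes here are illustrative, not the paper's; `haar2` and `ihaar2` are illustrative names:

```python
import numpy as np

def haar2(a):
    """One level of the 2D Haar transform: returns (LL, LH, HL, HH)."""
    a = a.astype(float)
    ll = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4
    lh = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hl = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hh = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 4
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def quantize(band, step):
    """Uniform quantizer (encode + decode in one step)."""
    return np.round(band / step) * step

# Coarse steps for the detail subbands, a fine step for the approximation,
# reflecting their different statistics.
rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (32, 32))
ll, lh, hl, hh = haar2(img)
rec = ihaar2(quantize(ll, 1), quantize(lh, 8), quantize(hl, 8), quantize(hh, 16))
mse = ((rec - img) ** 2).mean()
```

In a full pyramid the LL band is decomposed recursively, and the quantized coefficients are entropy coded.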
A wide field-of-view (FOV) optical-electronic camera system has been developed for commercial and scientific applications. The FOV of this system is partitioned into four sub-FOVs, imaged respectively by four CCDs, each with its own lens. An interactive, multiprocessor image processing system has been designed for target acquisition and tracking. Moving targets are detected by image comparison in the pre-processing modules and displayed on a CRT screen. The operator selects the target to be tracked and indicates its rough position; the target-tracking modules then extract its accurate position frame by frame.
In this paper, a method for segmentation and recognition in a sub-image system is presented. A self-adaptive algorithm for segmenting the rosette-scanning graph in real time is discussed in detail. Finally, experimental results are also discussed.
The CMOS image sensor is the enabling technology for bringing videoconferencing over the PSTN to the public. Its major advantage over the traditional CCD is that more circuitry can be integrated on the same sensor chip. In this paper, we propose a new lossless pre-compression algorithm targeted for CMOS integration. The algorithm achieves a lossless compression ratio of about 4 and requires no multiplication or division. We evaluate its performance and introduce a hardware architecture that can easily be integrated into a CMOS image sensor chip.
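The paper's algorithm is not reproduced here, but the general shape of a multiply-free lossless pre-compressor, neighbor prediction followed by a shift-only Rice code, looks like this (illustrative names throughout):

```python
def rice_encode(vals, k=2):
    """Rice (Golomb power-of-2) code: quotient in unary + k remainder bits.
    Only shifts, masks and adds are used, no multiply or divide."""
    bits = []
    for v in vals:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0] + [(r >> i) & 1 for i in range(k - 1, -1, -1)]
    return bits

def rice_decode(bits, count, k=2):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i]:
            q += 1; i += 1
        i += 1                                   # skip the unary terminator
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]; i += 1
        out.append((q << k) | r)
    return out

def zigzag(d):   return (d << 1) ^ (d >> 31)     # map signed diff to unsigned
def unzigzag(u): return (u >> 1) ^ -(u & 1)

row = [100, 101, 103, 103, 102, 104, 104, 105]   # one scan line of pixels
diffs = [row[0] - 128] + [b - a for a, b in zip(row, row[1:])]  # seed = mid-gray
stream = rice_encode([zigzag(d) for d in diffs])

# Decoder: undo the code, then integrate the differences.
rec, prev = [], 128
for d in (unzigzag(u) for u in rice_decode(stream, len(row))):
    prev += d
    rec.append(prev)
```

Because natural-image neighbor differences are small, the unary quotients stay short and the stream is shorter than the raw 8 bits per pixel.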
Images photographed through a submarine periscope are blurred because of energy dissipation in the periscope's optical path and poor viewing conditions. Such images can be processed by the Fourier transform, but the results are not good. In this paper, we process the images with wavelets. Our work includes removing image noise, enhancing image edges, and enlarging and reducing the images. Experiments show that the results are improved.
Up to now, almost all conventional CRT colorimetric prediction models have been based on the principle of tristimulus-value superposition. However, through a series of experiments we have found that an "interactive effect" exists among the RGB channels of a CRT; this effect invalidates the superposition principle and therefore introduces an "interactive error" into conventional CRT colorimetric prediction models. Our experimental results show that the errors caused by this interactive effect are often larger than 10 units as computed with the CIELUV color-difference formula.
3D image processing is an important problem in modern scientific and technological research. With the development of optical technology, the laser confocal scanning microscope (LCSM) has been used successfully as an advanced 3D imaging instrument in medical research. This paper discusses mathematical-morphology methods for processing 3D images, applied to 3D cell images formed by an LCSM system. Starting from 2D mathematical morphology, the paper generalizes to various 3D morphological theories and offers a series of 3D morphological image processing methods for different cases. Finally, these methods are used to process 3D cell images acquired with the laser confocal scanning microscope system.
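A minimal sketch of 3D binary morphology with a 6-connected (face-neighbor) structuring element follows; it uses `np.roll`, so boundaries are periodic, which is acceptable away from the volume edges (`dilate3d`, `erode3d`, `open3d` are illustrative names):

```python
import numpy as np

def dilate3d(vol):
    """Binary 3D dilation with a 6-connected structuring element."""
    out = vol.copy()
    for axis in range(3):
        for shift in (1, -1):
            out |= np.roll(vol, shift, axis=axis)
    return out

def erode3d(vol):
    """Binary 3D erosion: a voxel survives only if all 6 neighbors are set."""
    out = vol.copy()
    for axis in range(3):
        for shift in (1, -1):
            out &= np.roll(vol, shift, axis=axis)
    return out

def open3d(vol):
    """Opening (erosion then dilation) removes specks smaller than the element."""
    return dilate3d(erode3d(vol))

# A solid 5x5x5 "cell" plus a single-voxel speck of noise.
vol = np.zeros((16, 16, 16), dtype=bool)
vol[4:9, 4:9, 4:9] = True
vol[12, 12, 12] = True
cleaned = open3d(vol)
```

Opening removes the isolated speck while keeping the cell body, the same behavior used to clean confocal cell volumes.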
In this paper, a novel object-oriented hierarchical image and video segmentation algorithm based on 2D entropic thresholding is proposed, in which the local variance contrast is selected for generating the 2D entropic surface because this parameter indicates edge strength accurately. The extracted object is first represented coarsely by a group of 4 x 4 blocks; an intra-block edge-extraction procedure and a joint spatiotemporal similarity test among neighboring blocks are then performed to determine the meaningful real objects. Experimental results confirm that the proposed hierarchical algorithm can be very useful for MPEG-4 applications, such as forming the Video Object Plane automatically and selecting the coding pattern adaptively. A novel fast algorithm is also introduced to reduce the search burden. Moreover, this unsupervised algorithm makes automatic image and video segmentation possible.
Traditional methods of computing color similarity from color histograms have some disadvantages. For example, two images may look very similar in color, yet their computed similarity may be zero or very small, because the intersection of their histograms is null or the distance between their histograms is very large. In this paper, we first propose a new definition of color similarity between two color images, then derive a color-similarity formula based on the color histogram that avoids the shortcomings of the traditional methods, and finally present experimental results showing that our method provides satisfactory retrieval results.
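A toy version of the failure mode, and one possible cross-bin remedy, can be sketched as follows. The quadratic-form similarity shown here is a generic stand-in; the paper derives its own formula (`crossbin_sim` is an illustrative name):

```python
import numpy as np

def intersection_sim(h1, h2):
    """Classical histogram intersection: zero when bins never coincide."""
    return float(np.minimum(h1, h2).sum())

def crossbin_sim(h1, h2, sigma=1.0):
    """Cross-bin similarity: nearby bins count as partially similar,
    so visually close colors in adjacent bins still match."""
    n = len(h1)
    idx = np.arange(n)
    A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    return float(h1 @ A @ h2)

# Two images whose colors fall entirely in adjacent bins: visually similar,
# yet their histograms do not overlap at all.
h1 = np.zeros(16); h1[5] = 1.0
h2 = np.zeros(16); h2[6] = 1.0
```

The intersection measure reports zero similarity for this pair, while the cross-bin measure reports a high value, which is exactly the defect the paper's definition is meant to remove.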
Image segmentation consists of dividing an image into non-intersecting, dissimilar but meaningful regions (object and background). Thresholding is a commonly employed technique for segmenting images, and many methods for automatic threshold selection use an optimization process over some specific criterion function. Recently, several thresholding methods based on minimizing the cross-entropy of images have been proposed. Cross-entropy measures the information discrepancy between two probability distributions; derived from cross-entropy, fuzzy divergence measures the dissimilarity between two fuzzy sets. In this paper, we present four new algorithms for optimal threshold selection based on different criteria integrating cross-entropy and fuzzy divergence: the first is a minimum cross-entropy algorithm based on the hypothesis of a uniform probability distribution; the second is a maximum between-class cross-entropy algorithm using posterior probabilities; the third is a modified version of an existing method based on maximum between-class fuzzy divergence; and the last is a minimum fuzzy-divergence algorithm. For the last two algorithms, we construct a new fuzzy membership function that accounts for the gray-level probability distributions of object and background pixels about their mean values. The effectiveness and generality of the proposed algorithms have been compared with recent techniques based on related principles and evaluated on real images using a uniformity measure and a shape measure. Results showing the superiority of the proposed algorithms are presented.
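The cross-entropy family of criteria can be sketched with a minimum cross-entropy threshold in the spirit of Li and Lee's classic formulation; the paper's four algorithms differ in their exact criteria (`min_cross_entropy_threshold` is an illustrative name):

```python
import numpy as np

def min_cross_entropy_threshold(img):
    """Choose t minimizing the cross-entropy between the image and its
    two-level version built from the class means below/above t
    (the reduced Li-Lee criterion)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    g = np.arange(256, dtype=float) + 1e-12   # avoid log(0) at gray level 0
    best_t, best_eta = 1, np.inf
    for t in range(1, 256):
        w1, w2 = hist[:t], hist[t:]
        if w1.sum() == 0 or w2.sum() == 0:
            continue
        m1 = (w1 * g[:t]).sum() / w1.sum()    # class means
        m2 = (w2 * g[t:]).sum() / w2.sum()
        eta = -((w1 * g[:t]).sum() * np.log(m1) +
                (w2 * g[t:]).sum() * np.log(m2))
        if eta < best_eta:
            best_eta, best_t = eta, t
    return best_t

# Bimodal test image: background around gray 60, object around gray 180.
rng = np.random.default_rng(3)
img = np.clip(np.concatenate([rng.normal(60, 8, 3000),
                              rng.normal(180, 8, 1000)]), 0, 255).astype(np.uint8)
t = min_cross_entropy_threshold(img)
```

For two well-separated modes the resulting threshold falls between them, near the logarithmic mean of the class means.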
Automation of measurement is a generally acknowledged need in applications of Solid State Nuclear Track Detector (SSNTD) techniques. Developments in digital image processing and microcomputer technology afford opportunities for continuous improvement of image-based track-measurement systems. SSNTD techniques are finding increasing application, but automation of measurement, especially with image analysis techniques, remains a subject to be developed. A project toward automated track measurement was therefore initiated in our group. As a result of this effort, a study on the development of an automated image analysis system for nuclear track counting is presented in this report.
From the viewpoint of optical information theory, we study the transfer of an image's information content through the imaging integral equation. We expand the images of the manuscript in a Karhunen-Loève series over a complete set of orthogonal eigenfunctions; the Karhunen-Loève basis consists of prolate spheroidal functions. The resolution of the images is represented by the number of significant eigenvalues (the degrees of freedom) of the expansion. We analyze the influence of the shape of the scanning device's sampling aperture on the definition (resolution) and quality of the color-separation image. By analyzing the definition and signal-to-noise ratio of the color-separation images, we conclude that, under the same sampling scheme and the same number of scanning lines, the quality of color-separation images sampled with a square aperture is better than that of images sampled with a round one.
Time is the most important resource in a real-time video image processing system; because of time limitations, only a few structures and algorithms can be used. Establishing a high-speed data-access path is one way to address this problem: it transfers the video signal between the I/O ports and the kernel processor with minimal delay, while look-up tables and other analysis, adjustment, or calculation rules are applied through medium- or low-speed data paths. The data path is chosen according to the degree of data correlation: data with closer correlation are handled through the low-speed path, and data with looser correlation through the medium-speed path. Theoretical analysis and experiment show that this is a highly effective structure for real-time video image processing. This paper presents the features of video images and real-time processing systems, introduces the basic theory of the data-access hierarchy, discusses the status and relation of the high-, medium-, and low-speed data paths, and analyzes the rules for distributing data among them. Examples using these data paths (a TV tracking and measuring system and a laser-signal alarm system) are given.
Random noise from the turbulent atmosphere causes serious degradation of video image quality, and the cutoff frequency of the result is very low when video frames are simply added to improve the SNR; this phenomenon is especially obvious in astronomical video imaging. A new reference point on the speckle image, which is neither the maximum-value point nor the centroid, was found, and a new image reconstruction method based on this point was developed: an iterative shift-and-add (ISA) method. The initial estimate of this method can be the result of the traditional shift-and-add method, of speckle interferometry (autocorrelation), and so on; the CLEAN method is used for the deconvolution step within ISA. Diffraction-limited reconstructions of video image sequences taken through the turbulent atmosphere show that this method has a high signal-to-noise ratio and a wide dynamic range. Finally, the equipment used to detect and store the dim images is introduced, and some high-resolution results from video images are given.
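A toy version of iterative shift-and-add follows: each frame is aligned to the current estimate by FFT cross-correlation, averaged, and the process repeated. The CLEAN deconvolution step and the paper's special reference point are omitted (`shift_and_add` is an illustrative name):

```python
import numpy as np

def shift_and_add(frames, ref, n_iter=3):
    """Iterative shift-and-add: align each frame to the current estimate
    (circular cross-correlation via FFT), average, repeat. `ref` seeds the
    iteration, e.g. a plain shift-and-add result."""
    est = ref.astype(float)
    for _ in range(n_iter):
        acc = np.zeros_like(est)
        E = np.fft.fft2(est)
        for f in frames:
            cc = np.fft.ifft2(np.fft.fft2(f) * np.conj(E)).real
            dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
            acc += np.roll(f, (-dy, -dx), axis=(0, 1))   # circular shift
        est = acc / len(frames)
    return est

# Frames: a two-star pattern at random shifts plus readout noise.
rng = np.random.default_rng(4)
obj = np.zeros((32, 32)); obj[16, 16] = 10.0; obj[16, 20] = 6.0
frames = [np.roll(obj, (rng.integers(-5, 6), rng.integers(-5, 6)), axis=(0, 1))
          + rng.normal(0, 0.3, obj.shape) for _ in range(20)]
est = shift_and_add(frames, frames[0])
```

Averaging after alignment keeps the peak sharp while the noise floor drops roughly as the square root of the number of frames.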
This paper classifies user-function interactions on the basis of their characteristics, the number of users involved, and the interaction range. It illustrates the reasons, conditions, and environments of interactions by analyzing several examples, in order to reach a deeper understanding of service interaction in intelligent networks. Analysis of service run time shows that the basic cause of service interaction is contention for network resources. This analysis provides a foundation for further research into technical means of resolving these problems.
In this paper, a method of smoothing variable-bit-rate (VBR) MPEG traffic is proposed. A buffer whose capacity exceeds the peak group-of-pictures (GOP) bandwidth of the MPEG stream, and whose output rate is controlled by the distribution of the GOP sequence, is connected to the source. The burstiness of the buffer's output stream is decreased, and the stream's autocorrelation function becomes non-increasing and non-convex. For the smoothed MPEG stream, the GOP sequence is the elementary unit used for modeling, and we apply a marked renewal process to model GOP-smoothed VBR MPEG traffic. A numerical study simulating a target VBR MPEG video source with the marked renewal model shows not only that the model's bandwidth distribution accurately matches that of the target source sequence, but also that its leading autocorrelation approximates both the long-range and the short-range dependence of VBR MPEG traffic. In addition, the model's parameters are easy to estimate. We conclude that GOP-smoothed VBR MPEG video traffic can not only be transferred more efficiently but also be analyzed well with a marked renewal traffic model.
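The smoothing step can be sketched at GOP granularity. The frame sizes below are an illustrative IBBPBB-style pattern, not a real trace, and `smooth_by_gop` is an illustrative name:

```python
import numpy as np

def smooth_by_gop(frame_sizes, gop_len=12):
    """GOP-level smoothing: the buffer drains each GOP at that GOP's mean
    frame rate, so within-GOP (I/P/B) burstiness disappears while
    GOP-to-GOP variation is preserved."""
    out = np.empty_like(frame_sizes, dtype=float)
    for i in range(0, len(frame_sizes), gop_len):
        gop = frame_sizes[i:i + gop_len]
        out[i:i + gop_len] = gop.mean()
    return out

# Toy MPEG trace: big I frames, medium P, small B, mild GOP-to-GOP drift.
rng = np.random.default_rng(5)
pattern = np.array([20, 5, 5, 10, 5, 5, 10, 5, 5, 10, 5, 5], dtype=float)
trace = np.concatenate([pattern * rng.uniform(0.8, 1.2) for _ in range(10)])
smoothed = smooth_by_gop(trace)
buffer_needed = float(np.max(np.cumsum(trace - smoothed)))  # peak backlog
```

The positive peak backlog is the buffer capacity the scheme requires, consistent with the requirement that the buffer exceed the peak GOP bandwidth.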
In this paper, we describe an architecture that integrates a distributed multimedia real-time monitoring and control system with an ATM network. It is constructed from scratch to take advantage of ATM AAL5 functionality and to provide application-to-application QoS guarantees. To implement these guarantees, the architecture provides mechanisms to (1) manage resources efficiently for distributed real-time multimedia applications and protocol processing, and (2) minimize the overhead associated with data movement and context switching. The major components of the architecture are (1) the Resource Management Unit, responsible for efficient CPU scheduling, dynamic network bandwidth allocation, real-time buffer-space reservation, and other device management; (2) the Connection Management Unit, which implements QoS negotiation and QoS translation among application QoS, system QoS, and network QoS, and also performs admission control; and (3) the Data Transfer Unit, in which, to move data efficiently, we develop a real-time thread concept and separate the control and data paths so as to manage data transfer while avoiding unnecessary data copying.
Wavelet analysis is a time-frequency analysis lying between time-domain analysis and Fourier frequency-domain analysis: a wavelet function serves as the integral kernel of the wavelet transform and is well localized in both time and frequency. A wavelet function is obtained from a mother wavelet by shifting and dilation, and the effective sampling interval adjusts itself as the signal's frequency components change, so signal detail can be focused on at will. In this paper, a novel method based on the wavelet transform is presented for detecting the rectangular pulse signal of a pulsed laser radar submerged in noise. For a rectangular signal, the wavelet-transform coefficient attains its maximum when an optimal pair of wavelet parameters is selected; this acts as a filter, improving the signal-to-noise ratio so that the signal can be detected.
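The parameter-scan idea can be sketched with a unit-energy box in place of the optimum wavelet: scan widths, keep the one with the largest coefficient, and read the pulse position off the peak (`box_response` is an illustrative name):

```python
import numpy as np

def box_response(sig, width):
    """Correlate the signal with a unit-energy box, a crude stand-in for a
    rectangle-matched wavelet: the coefficient peaks when the box width and
    position match the pulse."""
    kern = np.ones(width) / np.sqrt(width)
    return np.convolve(sig, kern, mode="valid")

# Rectangular laser-radar return buried in unit-variance noise.
rng = np.random.default_rng(6)
n, width, pos, amp = 512, 16, 200, 2.0
sig = rng.normal(0.0, 1.0, n)
sig[pos:pos + width] += amp

# Scan a range of box widths and keep the one with the largest response.
best = max(range(4, 64, 4), key=lambda w: np.abs(box_response(sig, w)).max())
resp = box_response(sig, best)
peak_idx = int(np.argmax(np.abs(resp)))
```

The matched coefficient grows like amplitude times the square root of the pulse width, which is the SNR gain the scale selection exploits.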
The features of image edges are very important in applications such as medical and remote sensing images. The JPEG image compression standard, which has gained wide application, exploits the characteristics of the human visual system: it quantizes the low-frequency DCT coefficients finely and the high-frequency DCT coefficients coarsely. However, JPEG does not specifically treat image edges, whose energy is partly hidden in the high-frequency DCT coefficients. By analyzing the spectrum of image edge features, this paper derives the energy distribution of the DCT coefficients of an image subblock containing a single straight edge, through theoretical deduction and illustration by numerical examples, and designs a corresponding quantization table (Q table) for JPEG. Experiments show that the Q table does not affect subblocks without a single straight edge and effectively preserves the edges of subblocks that contain one.
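The claimed energy distribution can be illustrated for a subblock with a single vertical step edge, whose 2-D DCT energy falls entirely in the first coefficient row (a numerical illustration using the standard orthonormal DCT-II, not the paper's full derivation):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix()
block = np.zeros((8, 8))
block[:, 4:] = 255.0            # subblock with a single vertical step edge
coef = C @ block @ C.T          # 2-D DCT of the block

row0 = np.sum(coef[0] ** 2)     # energy in the first row (zero vertical freq.)
rest = np.sum(coef[1:] ** 2)    # energy everywhere else (numerically ~0)
```

An edge-aware Q table would therefore quantize the coefficients along such a row finely while leaving the remaining high-frequency entries coarse.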
Real-time low-light-level (LLL) image processing is an important developmental subject in the area of LLL night vision. There is an essential distinction between LLL TV images and ordinary TV images, so conventional digital image processing techniques are not suitable for LLL images. In this paper, on the basis of an analysis of the main noise sources of the LLL CCD TV system developed in our laboratory, a new class of filters, topology mode filters, is put forward and applied to LLL images. Topology mode filters include linear combinations of mode filters, the weighted-mode filter, the iterated-mode filter and the adaptive-mode filter. An LLL image processing system and an LLL TV signal and noise test and analysis system were set up specially for this work. By using topology mode filters, LLL image quality is improved greatly; the processing results are presented. Theory and experimental results show that topology mode filters outperform the simple mode filter.
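The basic (unweighted) mode filter that the topology mode filters build on can be sketched as follows; the weighted, iterated and adaptive variants described in the paper are not reproduced here:

```python
import numpy as np
from collections import Counter

def mode_filter(img, k=3):
    """Replace each pixel by the most frequent gray value in a k x k window.
    A simple mode filter: effective against impulse-type noise because an
    isolated outlier can never be the local mode."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k].ravel()
            out[i, j] = Counter(win.tolist()).most_common(1)[0][0]
    return out

img = np.full((8, 8), 100, dtype=np.int64)
img[3, 3] = 255              # isolated impulse-noise pixel
clean = mode_filter(img)     # impulse removed by the local mode
```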
For static low-light-level (LLL) image sequences, temporal filtering is one of the best techniques for suppressing noise and improving the signal-to-noise ratio. In a dynamic image sequence, however, a moving target may be blurred and image quality may deteriorate under temporal filtering. In this paper, a motion-compensated temporal filtering enhancement technique is presented. In this approach, inter-frame gray-level interpolation of moving pixels fills in the frames missed by multi-frame accumulation, thereby realizing motion compensation. Satisfactory results are thus obtained in dynamic LLL image sequence processing.
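The benefit of compensating motion before multi-frame accumulation can be sketched as follows (known integer shifts stand in for the paper's inter-frame gray interpolation; target size, motion and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
frames, shifts = [], [0, 2, 4, 6]      # object moves 2 px/frame (assumed known)
base = np.zeros((16, 32))
base[6:10, 2:6] = 200.0                # small bright target
for s in shifts:
    frames.append(np.roll(base, s, axis=1) + rng.normal(0, 20, base.shape))

naive = np.mean(frames, axis=0)        # plain accumulation smears the target
aligned = [np.roll(f, -s, axis=1) for f, s in zip(frames, shifts)]
compensated = np.mean(aligned, axis=0) # target stays registered while
                                       # the noise still averages down
```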
In this paper, the characteristics of thermal images and visual images are analyzed. We present their differential-signal distributions and gray-level distributions: the profiles of the differential-signal distributions of thermal images are sharper than those of visual images, which means the correlation between adjacent pixels is stronger in thermal images than in visual images; furthermore, the gray-level distribution of thermal images is more concentrated than that of visual images. These characteristics demonstrate the compression potential of thermal images.
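The adjacent-pixel-correlation claim can be illustrated on synthetic stand-ins, a smooth image versus a noisy detailed one (these are not measured thermal/visual data):

```python
import numpy as np

rng = np.random.default_rng(2)

def adjacent_correlation(img):
    # Pearson correlation between horizontally adjacent pixels
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

# Synthetic stand-ins: a smooth, low-detail "thermal-like" image versus a
# high-detail "visual-like" image (illustration only)
x = np.linspace(0, 1, 128)
thermal = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) * 50 + 128
visual = rng.normal(128, 40, (128, 128))

tc = adjacent_correlation(thermal)
vc = adjacent_correlation(visual)
# Stronger adjacent-pixel correlation implies smaller inter-pixel differences,
# i.e. more compression potential for the smooth image.
```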
Recently, vector quantization (VQ) has received considerable attention and has become an effective tool for image compression because of its high compression ratio and simple decoding process. To reduce the computational complexity of searching and archiving, tree search can be used in codebook generation, which is a major problem of VQ. The codebook can be generated by a clustering algorithm that selects the most significant vectors of a training set so as to minimize the coding error when all the training-set vectors are encoded. The genetic algorithm (GA), a global search method with high robustness, is very effective at finding optimal or near-optimal solutions to complex, nonlinear problems. This paper presents a new technique for designing a tree-structured vector quantizer using an adaptive genetic algorithm. The difference between the adaptive GA (AGA) and the standard GA is that the former varies the probabilities of crossover and mutation according to the fitness values of solutions, thus improving performance. Experimental results have shown that applying the AGA to clustering can accurately locate the cluster centers. In this paper, the AGA is used in tree-structured VQ to generate the codebook of every node. It is shown theoretically and experimentally that the images reconstructed by this method have high visual quality.
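The fitness-dependent rate idea can be sketched with a Srinivas-Patnaik-style rule (an assumption for illustration; the abstract does not give the paper's exact formula):

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Adaptive crossover/mutation probabilities: solutions fitter than the
    population average get proportionally smaller pc/pm (good solutions are
    protected), while below-average solutions get the maximum rates.
    f is the relevant fitness (the larger parent fitness for pc, the
    individual's fitness for pm)."""
    if f < f_avg or f_max == f_avg:
        return pc_max, pm_max
    scale = (f_max - f) / (f_max - f_avg)
    return pc_max * scale, pm_max * scale

best = adaptive_rates(10.0, 5.0, 10.0)   # best individual: fully protected
avg = adaptive_rates(4.0, 5.0, 10.0)     # below-average: full rates
```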
This paper is concerned with an application of computer control techniques in an intelligent system for testing the inner walls of pipes. The system uses an omnibearing optoelectronic detector as a special-purpose input device and a microcomputer as the master controller for image detection and accurate positioning of the detector at any moment. It is primarily used for real-time inspection and recognition of the quality of the inner walls of high-accuracy metal pipes in the oil and chemical industries, and its fundamental functions include information acquisition, image processing and recognition, classification and output display.
This paper studies methods for detecting ship targets in visual images. To deal with problems such as black-and-white polarity reversal and the disturbance of sea spray, a target segmentation method based on a feature field is presented. The method transforms the original gray-level image, in which target segmentation is difficult, into an improved image in the feature field, where segmentation is easier. Experimental results illustrating the merits of this method are also presented.
Multispectral image fusion is a process of synthesizing and processing multispectral image data. The wavelet transform is a multiresolution method used to decompose images into detail and average channels. In this paper, on the basis of optoelectronic imaging techniques, the imaging process and features of low-light-level (LLL) TV systems are analyzed, and the principle and processing structure of multispectral LLL TV image fusion are discussed. An algorithm and an experimental study of the fusion of LLL dual-spectrum images in the wavelet-coefficient space are carried out with the help of an LLL CCD TV system and a computer. A proper combination of an LLL camera and optical filters is used to carry out dual-spectrum image fusion experiments at night on outdoor static scenes with the aid of a computer. The experimental results indicate that multispectral LLL image fusion improves the capability of recognizing targets.
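A wavelet-coefficient-space fusion of the kind described can be sketched with a one-level Haar decomposition; the average/max-magnitude combination rule below is a common choice assumed for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def haar_level1(img):
    # One-level 2-D Haar decomposition into approximation and detail subbands
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def fuse(img1, img2):
    """Pixel-level dual-spectrum fusion: average the approximation subbands,
    keep the larger-magnitude detail coefficient from either band."""
    (a1, h1, v1, d1), (a2, h2, v2, d2) = haar_level1(img1), haar_level1(img2)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return (a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2)

x = np.zeros((8, 8)); x[:, 5:] = 100.0   # edge visible in band 1 only
y = np.full((8, 8), 50.0)                # flat in band 2
fa, fh, fv, fd = fuse(x, y)              # band-1 edge survives fusion
```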
Real processes generally vary nonlinearly. A common approach is to treat a nonlinear problem with a linear regression model; the parameter estimates obtained this way are unbiased, but the precision of the curve fit is low. This paper puts forward a method that uses artificial neural networks for nonlinear regression analysis. An artificial neural network is made up of a great number of interconnected nonlinear processing units and constitutes a large-scale nonlinear self-adaptive information processing system, so it can handle nonlinear regression problems properly. Based on the BP model of multi-layer feed-forward neural networks, we obtain a set of weight values that minimizes the error between the network output and the expected output. Finally we apply the method to the dynamic analysis of the groundwater level, with satisfactory results.
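A minimal numpy sketch of BP-network nonlinear regression (the network size, learning rate and the sine-curve target are illustrative assumptions, not the paper's groundwater data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Nonlinear regression target (illustrative stand-in for measured data)
X = np.linspace(-1, 1, 64)[:, None]
y = np.sin(np.pi * X)

# One hidden layer with tanh activation: a minimal BP network
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.1
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                      # dE/d(out) for squared error
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A linear regression of y on X would leave most of the sine curve unexplained; the trained network drives the mean squared error close to zero.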
Recent research has demonstrated that the backpropagation neural network classifier is a useful tool for multispectral remote sensing image classification. However, its training time is too long and its generalization ability is not good enough. Here, a new method is developed that both accelerates training and increases classification accuracy. The method is composed of two steps. First, a simple penalty term is added to the conventional squared error to increase the network's generalization ability. Second, the fixed-factor method is used to find the optimal learning rate. We have applied it to the classification of Landsat MSS data. The results show that the training time is much shorter and that classification accuracy is increased as well. The results are also compared with those of the maximum-likelihood method, demonstrating that the back-propagation neural network classifier is more efficient.
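The first step, adding a penalty term to the squared error, can be sketched as follows; a generic weight-decay penalty is assumed here, since the abstract does not spell out the exact form of the paper's penalty term:

```python
import numpy as np

def loss_with_penalty(err, weights, lam=1e-3):
    """Conventional squared error plus a simple penalty on the weights
    (weight decay), which discourages large weights and thereby improves a
    back-propagation classifier's generalization. The exact penalty used in
    the paper is assumed to be of this general form."""
    return 0.5 * np.mean(err ** 2) + lam * sum(np.sum(w ** 2) for w in weights)

err = np.array([0.1, -0.2])                       # output-layer errors
w = [np.array([[1.0, -1.0]]), np.array([[0.5]])]  # toy weight matrices
base = 0.5 * np.mean(err ** 2)                    # plain squared error
penalized = loss_with_penalty(err, w)             # adds a positive term
```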
To meet the requirement of calculating the centroid of a binary image quickly in real-time signal processing, the authors used TI's second-generation DSP, the TMS320C25, gave full play to its capabilities, simplified the traditional centroid-calculation algorithm, and calculated the image centroid through a combination of software and hardware.
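The moment-based centroid computation that such a DSP implementation simplifies can be sketched as follows; the projection trick of summing rows and columns first is an illustrative simplification, not the authors' exact hardware/software split:

```python
import numpy as np

def binary_centroid(img):
    """Centroid of a binary image from zeroth- and first-order moments:
    x_c = M10 / M00, y_c = M01 / M00. Row and column projections replace
    the full double loop, the kind of simplification a fixed-point DSP
    implementation benefits from."""
    m00 = img.sum()
    rows = img.sum(axis=1)          # row projection
    cols = img.sum(axis=0)          # column projection
    yc = (np.arange(img.shape[0]) * rows).sum() / m00
    xc = (np.arange(img.shape[1]) * cols).sum() / m00
    return yc, xc

img = np.zeros((16, 16), dtype=np.int64)
img[4:8, 6:12] = 1                  # rectangular target
yc, xc = binary_centroid(img)       # -> (5.5, 8.5)
```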
A new genetic algorithm for the restoration of images degraded by blur and random noise is put forward in this paper. In comparison with other methods, the present method not only introduces new objective-function terms into the fitness but also introduces several effective genetic operators. In particular, the notion of pixel correlation is used to predict the direction in which gray-scale values change. As a result, the number of generations in the evolution process is reduced, and both the fitness of the restored image and its subjective visual quality are improved. Finally, the effectiveness of the method is verified by computer simulation.
Variable bit rate (VBR) video traffic has a crucial impact on ATM network performance, design and management because of its wide bandwidth, high burstiness and strong correlation. Previous studies focus on the bandwidth distribution and autocorrelation of VBR video sources; we investigate the importance of the multiple correlation of VBR video traffic in ATM buffering systems. We choose two models to generate VBR video traffic: the auto-regressive (AR) model and the discrete auto-regressive (DAR) model. The two models have the same bandwidth distribution and autocorrelation but different multiple correlation. Numerical experiments are conducted to compare the queuing performance of a finite ATM buffer loaded with AR and DAR sources. The results show that (1) the cell loss rates (CLRs) of the two input sources agree closely when the CLR is large; (2) the error between them becomes more evident as the CLR decreases; and (3) this error becomes non-negligible when the CLR is less than 10^-5, as occurs in ATM practice. Our study shows that the multiple correlation of VBR video traffic has an important impact on queuing performance in realistic network applications and must be taken into account to model VBR video sources and perform more reliable queuing analysis.
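The kind of buffering experiment described can be sketched with an AR(1) source feeding a finite FIFO buffer (all rates, the buffer size and the AR coefficient below are illustrative; the companion DAR model is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
# AR(1) source: x[n] = a*x[n-1] + e[n], clipped at zero -- one of the two
# classical VBR video bit-rate models discussed (a DAR source would share
# the same marginal distribution and autocorrelation).
a, n = 0.9, 50_000
e = rng.normal(10 * (1 - a), 3 * np.sqrt(1 - a * a), n)
x = np.empty(n)
x[0] = 10.0
for i in range(1, n):
    x[i] = a * x[i - 1] + e[i]
arr = np.maximum(x, 0.0)            # cells arriving per slot

# Finite FIFO buffer served at a fixed rate; overflow counts as cell loss
buf, cap, service, lost = 0.0, 100.0, 10.5, 0.0
for cells in arr:
    buf = max(buf + cells - service, 0.0)
    if buf > cap:
        lost += buf - cap
        buf = cap
clr = lost / arr.sum()              # empirical cell loss rate
```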
In this paper, an absolute difference measurement (ADM) is proposed for optical gray-scale image recognition. ADM measures the similarity between gray-scale images, and the measured value reflects the similarity linearly. Using optical correlation and a novel cycle-encoding method, gray-scale image matching based on the absolute difference measurement can be realized simply. The new method can be used in coherent or incoherent correlators. In this paper, a compact hybrid image recognition unit based on an incoherent correlator is constructed. The system has good fault tolerance to rotation distortion, Gaussian noise and information loss. In the experiments, pictures of military objects are processed; the recognition speed is 12 frames per second and the recognition accuracy is more than 95 percent.
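A digital sketch of the measure (the optical realization via correlation and cycle encoding is not reproduced; the scaling of the score into [0, 1] is an assumption for illustration):

```python
import numpy as np

def adm_similarity(a, b, levels=255):
    """Absolute-difference measure between two gray-scale images, scaled so
    that identical images score 1 and maximally different images score 0.
    The score decreases linearly with the summed absolute difference."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return 1.0 - np.abs(a - b).sum() / (levels * a.size)

img = np.arange(16, dtype=np.float64).reshape(4, 4) * 17
same = adm_similarity(img, img)            # identical images score 1.0
diff = adm_similarity(img, 255 - img)      # dissimilar images score lower
```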
In this paper we use the concept of the uncertain pixel to improve the distortion invariance of the morphological hit-or-miss transformation (HMT). The novel method, called the joint rank-order multi-value hit-or-miss transformation (JRMHMT), reduces the decision weight of pixels that are easily disturbed by rotation and scaling distortion. Without losing information from the input images, JRMHMT can realize precise recognition between two images and has better invariance to scaling and rotation than ordinary HMT methods. Comparisons of JRMHMT with ordinary HMT methods, in both theoretical analysis and experiments, are given in this paper. Based on an incoherent correlator and a novel multi-value complementary encoding method, JRMHMT is realized for image recognition and obtains satisfactory results.
Image-based rendering generates views of a scene by 2D image transformation instead of the difficult physical simulation of conventional graphics, with its hard reconstruction problem, and thus greatly improves the performance of virtual reality applications. In this paper we develop efficient methods for synthesizing novel views, and for adding simulated shading to them, given a collection of example images with depth or in correspondence. The example images are warped to compress storage based on the overlap of their fields of view (FOV). Derived views are generated by warping the compressed images and are then passed to a post-lighting simulation step that adds lighting to produce the final images. The approach has two significant advantages. First, it generates warped images from multiple sample images in real time. Second, it adds simulated shading to the final images, which not only improves the realism of the artificial scene but also achieves seamless combination of real and artificial scenes.
Based on the blobworld method, we propose a blob-centric image querying scheme that comprises several new techniques in content-based image retrieval. We report our research results on the database structure and maintenance algorithms for image indexing. We further conduct a performance comparison of image retrieval efficiency for three possible retrieval methods: the naive method, the representative-blobs method, and the indexing method. Our quantitative analysis shows over 90% reduction in query response time with the representative-blobs method and the indexing method.
In this paper, we extend the matrix features used for textured monochrome images to color images and define six different vector features. These vector features, extracted from the matrices for color images, carry useful information about not only the colors but also the textures of a color image. Since the proposed features are simply vectors, they adapt well to the indexing schemes widely used in content-based image retrieval. We also analyze the mutual relationships of the matrices when proper constraints are placed on the matrix calculations. In the experiments, we show the performance and applicability of the six proposed vector features in content-based image retrieval and verify their usefulness.
In this paper, we propose a novel scheme for constructing a color pattern retrieval system that can recommend color patterns matching a desired human feeling. The desired feeling is represented as a 9-D vector in linguistic image scales; each component of the vector is the degree of the feeling on the corresponding linguistic image scale. The system recommends color patterns of the desired feeling after comparing the given query vector with the emotional features of the stored color patterns. To construct the system, the emotional features are taken from the outputs of a neural network whose inputs are the physical features extracted from the color patterns. For indexing, a hierarchical clustering method with the fuzzy c-means algorithm is used. To verify the scheme, a set of 368 color images for textile design was selected and used in experiments. Using the proposed retrieval scheme, we obtained promising results, even though several problems remain. We believe this pilot system can be improved to find corresponding textile designs, wallpapers, or gallery pictures for linguistic queries of human feelings.
To support the philosophy of the MPEG-4 video coding standard, each frame of a video sequence must be represented in terms of video object planes (VOPs). Several automatic methods for segmenting moving objects have been developed. Such algorithms separate the foreground from the background with a change detection mask obtained from the difference image between two successive frames. They therefore cannot represent each individual video object in a single frame separately, i.e., they suffer from the object correspondence problem. In addition, these algorithms are somewhat immature: they fail to obtain desirable segmentation results for all kinds of image sequences, because no adequate mathematical model or similarity measure for extracting the video object has been defined. However, if the user can define the video objects in the first frame, or newly appearing video objects, by a partially or completely user-assisted method such as the snake algorithm, good segmentation results can be obtained over the following frames. Such semi-automatic segmentation may be more practical for generating VOPs of moving objects. In this paper, we propose a new user-assisted video segmentation algorithm consisting of two steps: intra-frame segmentation and inter-frame segmentation. Intra-frame segmentation is applied to the first frame of the image sequence and to frames in which new video objects appear; the user can manually define the newly appearing video objects. Inter-frame segmentation is applied to the following consecutive frames, in which the user-defined video objects are segmented automatically by object tracking.
Motion is one of the most prominent features of video. For content-based video retrieval, the motion trajectory is the intuitive specification of motion features. In this paper, approaches for video retrieval via a single motion trajectory and via multiple motion trajectories are addressed. For retrieval via a single motion trajectory, the trajectory is modeled as a sequence of segments, each represented by its slope; two quantitative similarity measures and corresponding algorithms based on sequence similarity are presented. For retrieval via multiple motion trajectories, the trajectories of the video are modeled as a sequence of symbolic pictures, and four quantitative similarity measures and algorithms, also based on sequence similarity, are proposed. All the proposed algorithms are developed using dynamic programming.
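The dynamic-programming alignment of slope sequences can be sketched as follows; the match cost and gap penalty below are illustrative assumptions, since the paper defines several concrete similarity measures:

```python
def trajectory_distance(s1, s2, gap=1.0):
    """Dynamic-programming alignment of two trajectories, each modeled as a
    sequence of segment slopes. Matching two segments costs the absolute
    slope difference; skipping a segment costs `gap`. Classic edit-distance
    recurrence, filled row by row."""
    n, m = len(s1), len(s2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j - 1] + abs(s1[i - 1] - s2[j - 1]),
                          D[i - 1][j] + gap,      # skip a segment of s1
                          D[i][j - 1] + gap)      # skip a segment of s2
    return D[n][m]

a = [0.5, 1.0, -0.5]
same = trajectory_distance(a, a)                  # identical: distance 0
short = trajectory_distance(a, [0.5, -0.5])       # one skipped segment: 1.0
```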
In this paper, we examine new motion compensation methods based on the affine or bilinear transformation and derive fast algorithms for both transformations using vector relationships. We also develop a motion estimation method that is more effective than the conventional image warping method in terms of computational complexity, reconstructed image quality, and the number of coding bits. The performance of the proposed motion compensation method, which combines the affine or bilinear transformation with the proposed adaptive partial matching, is evaluated experimentally. We simulate the proposed method in a DCT-based coder by encoding CIF (Common Intermediate Format) images at bitrates below 64 kb/s. The proposed adaptive partial matching reduces the computational complexity to about 50% of that of the hexagonal matching method while maintaining comparable image quality.
Video object segmentation is an important component of object-based video coding schemes such as MPEG-4. A fast and robust video segmentation technique, which aims at efficient foreground and background separation via an effective combination of motion and color information, is proposed in this work. First, a non-parametric gradient-based iterative color clustering algorithm, the mean shift algorithm, is employed to provide robust dominant color regions according to color similarity. Using the dominant color information from previous frames as the initial guess for the next frame reduces the computational time to 50%. Next, moving regions are identified by a motion detection method based on the frame intensity difference, which circumvents the complexity of motion estimation for the whole frame. Only moving regions are further merged or split according to a region-based affine motion model. Furthermore, the sizes, colors, and motion information of homogeneous regions are tracked to increase the temporal and spatial consistency of extracted objects. The proposed system is evaluated on several typical MPEG-4 test sequences and provides very consistent and accurate object boundaries throughout the entire test sequences.
In this paper, we present a general framework for the segmentation of surfaces represented by 3D scattered data. The method is based on an anisotropic diffusion scheme; contextual information at each data point drives the selection of optimal directions that locally represent the shape. Graph-based representations are well adapted to embody this kind of knowledge, so we introduce two structures, the minimal and maximal escarpment trees. An intrinsic segmentation of the data is then operated by label diffusion over these structures. The process has two important stages: the first extracts atomic regions, which are combined in the second, merging stage. A novel aspect of our method is its ability to detect features of arbitrary topological type, such as crease edges or boundaries between two smooth regions. The method has proven effective, as demonstrated below on both synthetic and real data.
Two images of the same scene may be corrupted by noise, with different portions blurred differently. In this paper, we propose to use image fusion and image restoration techniques to improve the quality of such images. The proposed algorithms are aided by image registration, which estimates the matching parameters between the images to be registered: the scale factor, the angle of rotation and the translation. To save computation time in the registration process, we apply the wavelet transform to perform multi-resolution registration. The gradient that guides the search for feature points between the images is computed with a derivative operator derived from wavelet theory. Experimental results show that the two original images can be aligned and registered successfully; then, with fusion or restoration, a clearer version of the scene than either original image is obtained.
In a fixed-rate vector quantization system, an image is divided into smaller blocks, and each block is usually independently encoded by an index of the same length that points to the closest codevector in the codebook. Recently, an algorithm called search-order coding has been proposed to further reduce the bit rate by encoding the indexes without introducing any extra encoding distortion. In this paper, we present an improved algorithm that extends the idea of search-order coding by encoding the indexes pair by pair instead of one by one. With this extension, four types of search results are considered. Simulation results indicate that our algorithm achieves a bit rate up to 5.74% lower than that of the search-order coding algorithm.
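The baseline single-index scheme the paper extends can be sketched as follows. This is a minimal illustration, not the paper's pair-by-pair variant: previously coded indexes are scanned in a fixed order, and a match among the first 2**m candidates is signalled with a short position code instead of the full index. The flag-bit layout and the recency-based search order are assumptions for illustration.

```python
# Minimal sketch of the basic (single-index) search-order coding idea.
# A hit costs 1 + m bits (flag + position); a miss costs 1 + index_bits bits
# (flag + raw index). No distortion is added: the decoder recovers the exact
# index either way.

def soc_encode(indexes, index_bits=8, m=2):
    """Return (symbols, total_bits) for a list of VQ indexes."""
    out, bits = [], 0
    for i, idx in enumerate(indexes):
        # search order: most recently coded indexes first (illustrative)
        history = indexes[max(0, i - 2 ** m):i][::-1]
        if idx in history:
            out.append(("hit", history.index(idx)))
            bits += 1 + m           # flag bit + position in search order
        else:
            out.append(("miss", idx))
            bits += 1 + index_bits  # flag bit + raw index
    return out, bits
```

Neighboring blocks of an image frequently quantize to the same codevector, so hits dominate and the average rate drops well below `index_bits` per block.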
This work proposes the use of the external wavelet transform (EWT) and the scalable binary description (S-BiD) technique to reduce the memory requirement and processing time of wavelet-based image compression methods, in the wavelet-transform stage as well as in the quantization and coding of wavelet coefficients. With EWT, the wavelet transform is performed on an image stored in a less expensive but slower buffer with the help of a small, fast cache. With the S-BiD technique, wavelet coefficients are partitioned into wavelet blocks, each small enough to be fetched into the cache and coded completely. Wavelet coefficients are quantized into a set of interleaved bit-streams that describe the three layers of the quantization hierarchy: blocks, subbands, and coefficients, respectively. The resulting codec finds wide application: it can be easily implemented on a DSP or used to compress a very large image without tiling, and it generates good compressed images even with a cache memory of less than 4 Kbytes. With a cache memory of 16 Kbytes, its PSNR performance is comparable with that of other codecs while its coding speed is very fast. The maximum size of the wavelet block that can be processed is limited by the size of the cache memory, and the coding efficiency of the codec increases with larger block sizes.
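The cache-sized block partitioning and the top (block) layer of the hierarchy can be sketched as follows. This is an illustration of the layering idea only, not S-BiD itself: the block size is arbitrary, and a block's most significant bit-plane stands in for its block-layer description (so entirely insignificant blocks can be skipped cheaply).

```python
# Illustrative sketch of the block/subband/coefficient layering: coefficients
# are split into fixed-size wavelet blocks small enough for a cache, and each
# block is first described by its most significant bit-plane.

def partition_blocks(coeffs, block_size):
    """Split a flat coefficient list into cache-sized blocks."""
    return [coeffs[i:i + block_size] for i in range(0, len(coeffs), block_size)]

def describe_block(block):
    """Top layer of the hierarchy: the most significant bit-plane of the
    block. A result of 0 means the whole block is insignificant."""
    peak = max(abs(c) for c in block)
    return peak.bit_length()

def encode(coeffs, block_size=4):
    """Pair each block with its block-layer description; lower layers
    (subbands, coefficients) would be coded inside each block."""
    blocks = partition_blocks(coeffs, block_size)
    return [(describe_block(b), b) for b in blocks]
```

Because each block is fetched once and coded completely before moving on, the working set never exceeds one block plus bookkeeping, which is how a few-kilobyte cache suffices.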
A Multimedia Instruction on Demand (MID) system serves the purpose of providing an environment for lecture design, lecture annotation, and lecture review over networks. To support real-time interactive multimedia playback for such an application, the underlying networks are required to provide a resource management mechanism that enforces the reservation policy. In our design, the MID server and MID gateway consist of the following mechanisms: a resource management agent, an admission control agent, a packet classifier, and a packet scheduler. We make use of the framework of the Resource ReSerVation Protocol (RSVP) to devise and implement a network resource management mechanism that controls end-to-end packet delays and bandwidth allocation for the designed MID system. Our contributions in the present paper are as follows: (1) a network resource management scheme is designed to support real-time multimedia over the Internet, and (2) an experimental test bed is established to measure system performance. The developed scheme is currently being implemented in the Multimedia Information Networking laboratory at Tamkang University.
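The role of the admission control agent can be illustrated with a minimal sketch, assuming a simple bandwidth-only flow model: a new reservation is admitted only while the sum of reserved rates stays within link capacity. The class name, capacity figure, and flow model are illustrative; real RSVP admission control also accounts for delay bounds and burst sizes.

```python
# Hedged sketch of a bandwidth-based admission check such an agent might
# perform when an RSVP-style reservation request arrives.

class AdmissionControl:
    def __init__(self, link_capacity_kbps):
        self.capacity = link_capacity_kbps
        self.reserved = 0.0  # total bandwidth already committed to flows

    def admit(self, flow_kbps):
        """Admit the flow only if total reservations stay within capacity."""
        if self.reserved + flow_kbps <= self.capacity:
            self.reserved += flow_kbps
            return True
        return False
```

Once a flow is admitted, the packet classifier maps its packets to the reservation and the packet scheduler enforces the committed rate.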
Due to packet loss and random delay in the Internet, the transmission of real-time voice over such an environment is a complex and challenging issue, and many researchers have devoted themselves to this active research area. In the present paper, we develop an adaptive scheme to wrestle with the above problems. To overcome jitter and packet loss, the adaptive voice synchronization scheme is constructed in a feedback configuration. This scheme consists of the following mechanisms: (1) a QoS control mechanism, (2) an adaptive playback mechanism, and (3) a redundant packet sensing mechanism. As is well known, voice is a crucial medium for networked multimedia systems, and a voice tool is extremely valuable in supporting applications such as computer-supported cooperative work or multimedia instruction on demand. Therefore, the results obtained in the present work are of benefit to many networked multimedia systems.
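An adaptive playback mechanism of the kind listed above is commonly built on an autoregressive estimate of network delay and its variation, with the playout deadline set a few deviations beyond the mean to absorb jitter. The sketch below follows that classic estimator; the smoothing constant and the 4x safety factor are conventional choices, not values from this paper.

```python
# Sketch of an adaptive playout estimator: smooth the per-packet delay and
# its deviation, then schedule playout a safety margin beyond the estimate.

class AdaptivePlayout:
    def __init__(self, alpha=0.998):
        self.alpha = alpha
        self.d = None   # smoothed delay estimate (ms)
        self.v = 0.0    # smoothed delay deviation (ms)

    def update(self, delay):
        """Feed one packet's measured delay; return the playout offset to
        use for the next talkspurt."""
        if self.d is None:
            self.d = delay
        else:
            self.d = self.alpha * self.d + (1 - self.alpha) * delay
            self.v = self.alpha * self.v + (1 - self.alpha) * abs(delay - self.d)
        return self.d + 4 * self.v
```

The feedback configuration in the abstract would feed loss and lateness statistics back into such an estimator (and into the QoS control and redundancy mechanisms) to trade delay against loss continuously.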
The transmission of animated 2D graphics via the wireless link of a personal digital assistant (PDA) system is investigated in this work. An animated graphic sequence is often stored in the animated GIF format and transmitted via wired networks. Due to the much narrower bandwidth of a wireless channel, the same approach cannot be used directly in the wireless environment. This paper presents a new approach to the wireless transmission of animated sequences. The basic elements of 2D graphics include points, lines, polygons, and various color-filling commands; furthermore, to achieve the animation effect, a reference frame is often used to reduce the redundancy between adjacent frames. All of these can be considered rendering commands. The major difference between 2D graphics and the GIF image is that the former operates at the structure level while the latter operates at the pixel level (i.e., as a bitmap). By exploiting this unique feature, we obtain a more efficient coding method for each component frame: instead of transmitting the rendered bitmap as done with animated GIF, rendering commands are encoded and transmitted to the client's PDA, and the rendering process is performed at the client side. With the proposed scheme, the compression standard is built upon a common set of rendering commands. The final integration involves buffer control and error protection. Experiments demonstrate that animated sequences can be transmitted within a channel of bandwidth less than 1 Kbps while maintaining excellent image quality.
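The structure-level idea can be sketched as follows, under assumptions not taken from the paper: commands are modelled as plain tuples, and a frame is transmitted as the difference between its command list and the reference frame's, so static geometry costs nothing to resend. The command encoding and delta format here are invented for illustration.

```python
# Illustrative sketch of command-level transmission: send additions and
# deletions relative to a reference frame's command list, not rendered pixels.

def encode_frame(commands, reference=None):
    """Delta against the reference frame: commands to add and to remove."""
    reference = reference or []
    added = [c for c in commands if c not in reference]
    removed = [c for c in reference if c not in commands]
    return {"add": added, "remove": removed}

def decode_frame(delta, reference=None):
    """Client side: rebuild the frame's command list, then render locally."""
    reference = reference or []
    kept = [c for c in reference if c not in delta["remove"]]
    return kept + delta["add"]
```

A handful of short command tuples per frame is what makes sub-1 Kbps transmission plausible, whereas even a heavily compressed bitmap of the same frame would be orders of magnitude larger.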
We present a robust real-time video coding scheme that complies with the H.263 standard. By utilizing a feedback channel, the macroblocks (MBs) corrupted by transmission errors are accurately evaluated and precisely tracked in the encoder. Because the dependency trees do not span into unaffected areas unnecessarily, error propagation is terminated completely by INTRA refreshing only the affected MBs. Our simulations show significant video quality improvements in error-prone environments.
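The tracking step can be sketched abstractly. This is a simplified model, not the paper's implementation: motion dependencies are reduced to "MB i of frame t predicts from MBs deps[t][i] of frame t-1", and a reported loss is propagated forward frame by frame to find which current MBs are contaminated and must be INTRA refreshed.

```python
# Hedged sketch of feedback-based error tracking: propagate MBs reported
# lost through successive frames' prediction dependencies.

def track_corruption(lost_mbs, deps_per_frame):
    """lost_mbs: set of MB indexes the decoder reported lost.
    deps_per_frame: one dict per subsequent frame, mapping each MB to the
    list of previous-frame MBs it predicts from.
    Returns the MBs to INTRA refresh in the latest frame."""
    corrupted = set(lost_mbs)
    for deps in deps_per_frame:
        corrupted = {mb for mb, sources in deps.items()
                     if corrupted & set(sources)}
    return corrupted
```

Only the MBs in the final set are refreshed, which is far cheaper than refreshing a whole frame or region whenever any loss is reported.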
This paper describes the design, implementation, and evaluation of a Continuous Media (CM) system comprising a CM server and CM clients. TCP/IP has been shown to be unsuitable for distributed applications that require high network bandwidth and timing-criticality; UDP/IP is one of the alternatives. However, because UDP is a lossy protocol, many issues arise when implementing distributed CM applications, some of which are discussed in this report. Moreover, QoS (Quality of Service) introduces further difficulties: most QoS metrics known so far assume that the communication channel is lossless. Since we use UDP for CM data transmission, we adopt a new QoS metric applicable to lossy streams to evaluate the performance of our server.
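The paper's metric is not reproduced here; the sketch below shows one plausible shape for a lossy-stream QoS measure, scoring timeliness only over the packets that actually arrived rather than assuming a lossless channel. The deadline and the two-component form are illustrative assumptions.

```python
# Illustrative lossy-stream QoS sketch: report delivery and timeliness
# separately, judging on-time arrival only over delivered packets.

def lossy_stream_qos(packets, deadline_ms):
    """packets: list of (arrived, delay_ms) tuples.
    Returns (delivery_ratio, on_time_ratio)."""
    if not packets:
        return 0.0, 0.0
    delivered = [d for arrived, d in packets if arrived]
    delivery_ratio = len(delivered) / len(packets)
    on_time = sum(1 for d in delivered if d <= deadline_ms)
    on_time_ratio = on_time / len(delivered) if delivered else 0.0
    return delivery_ratio, on_time_ratio
```

Separating the two ratios keeps a lossy but punctual stream distinguishable from a lossless but late one, which a single lossless-channel metric conflates.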
In a video database, a large amount of information involving video, audio, and/or images needs to be stored and managed. There is therefore an important need for novel techniques and systems that can provide efficient retrieval of the voluminous data stored in the video database. While content-based retrieval provides a direct and intuitive approach to video data access, it is inadequate from the viewpoint of efficient video data management: many (semantic) features of video data cannot be extracted from the video itself automatically, and video objects may share annotations or descriptions. Consequently, it is necessary and cost-effective to complement content-based retrieval with high-level (declarative) query-based retrieval in the video database system. In this paper, we describe a high-level query language called CAROL (for Cluster And ROle Language), which is being developed on top of a versatile object database system. Supported by the underlying object model extended with clusters and roles, CAROL offers a number of interesting features seldom available together in a single query language, including top-down search in a context-dependent manner and bottom-up and/or horizontal search in a context-independent manner, in addition to the traditional search supported by conventional object-oriented database systems. This language constitutes an important component of an ongoing project aimed at developing a declarative video data retrieval system at the Hong Kong Polytechnic University.
Recently, with the rapid advances in Internet technologies, many related tools and applications have been developed, including improvements in browsers, enhancements in server functionality, and the standardization of network protocols. These developments have influenced database architecture, which has evolved from centralized, to distributed, to networked systems. While it has its own advantages, a network database also encounters the problem of storage and retrieval for decentralized data. Specifically, how to perform a join operation efficiently is a difficult problem, since data transmission over the network is expensive and minimizing the transmission cost of a join is intrinsically hard to solve; the problem is even more important and difficult when a multi-join is carried out. In this paper, we investigate the problem of multi-join execution in a network database and develop a scheduler that is able not only to effectively decompose a multi-join into a proper join and semi-join sequence but also to efficiently conduct its execution. In addition, we utilize related technologies, including Java applets, JDBC, etc., to implement a Web-based network database. In this network database system, users can access the data and issue multi-join queries through a proper Web interface, and the system takes full advantage of the devised scheduler to perform a multi-join query so as to improve overall system performance. Due to the increasing popularity of the Internet, the use of multi-joins is expected to become even more frequent, and their execution, without proper scheduling, becomes the bottleneck of a network database. In view of this, we believe this study is very timely and the results are of both theoretical and practical importance.
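The semi-join reduction such a scheduler relies on can be sketched in miniature. Relations are modelled here as lists of dicts, an assumption for illustration: before shipping relation R to the site holding S, only S's join-column values are sent back, and R is filtered to the tuples that can actually join, so less data crosses the network.

```python
# Sketch of semi-join reduction followed by a plain hash join.

def semi_join(r_rows, s_join_values, key):
    """R left-semijoin S on `key`: keep only R tuples whose key appears
    in S. Only S's key column crosses the network, not whole tuples."""
    s_keys = set(s_join_values)
    return [row for row in r_rows if row[key] in s_keys]

def join(r_rows, s_rows, key):
    """Hash join on `key`, applied after the semi-join has shrunk R."""
    index = {}
    for s in s_rows:
        index.setdefault(s[key], []).append(s)
    return [{**r, **s} for r in r_rows for s in index.get(r[key], [])]
```

For a multi-join, the scheduler's job is to order such reductions across sites so that each relation is shrunk as much as possible before any full tuples are shipped.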
In some applications, such as a cyber museum, efficient content-based retrieval of multimedia data is necessary. Content-based retrieval should be based on visual features such as color, shape, and texture. In this paper, we develop a content-based multimedia information retrieval system for a cyber museum, specifically for china (porcelain) images. For this purpose, we first extract not only keywords from the caption or text information of the china images, but also visual features from the images themselves. In addition, we propose an integrated indexing scheme that supports feature-based retrieval from the china images on features such as color and shape. We also implement a user interface as a Java applet on the World Wide Web. Finally, we show from our experimental results that our multimedia information retrieval system achieves good retrieval effectiveness.
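Color-feature retrieval of the kind described can be sketched minimally: each image is reduced to a normalized color histogram, and retrieval ranks database images by histogram intersection with the query. The bin count, the similarity choice, and the flat-dict database are illustrative; the paper's integrated index also covers shape and keywords.

```python
# Minimal color-histogram retrieval sketch over quantized pixel values.

def color_histogram(pixels, bins=4):
    """Normalized histogram over quantized values in [0, 256)."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // 256] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank(query_hist, db):
    """Return image ids sorted by decreasing similarity to the query."""
    return sorted(db, key=lambda img_id: -intersection(query_hist, db[img_id]))
```

Histogram intersection tolerates small lighting and quantization shifts, which is one reason color histograms remain a robust first-stage feature for museum-style image collections.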