This PDF file contains the front matter associated with SPIE Proceedings Volume 8334, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Fourth International Conference on Digital Image Processing (ICDIP 2012)
A regularization approach is introduced into the online identification of the inverse model for predistortion. It is based on a modified backpropagation Levenberg-Marquardt algorithm with a sliding window. Adaptive predistorters with feedback were identified under both direct-learning and indirect-learning architectures, and the choice of sliding-window length is discussed. The algorithm is tested on the identification of an infinite-impulse-response Wiener predistorter and compared with the Recursive Prediction Error Method (RPEM) and the Nonlinear Filtered Least-Mean-Square (NFxLMS) algorithm; the proposed algorithm is found to be much more efficient than either of the other techniques. The extracted parameter values are also smaller than those obtained by the ordinary least-squares algorithm, since the proposed algorithm constrains the L2-norm of the parameters.
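The shrinking effect of the L2-norm constraint can be illustrated on a one-parameter toy problem (hypothetical data; the paper's Wiener predistorter has many parameters and a full Levenberg-Marquardt iteration): the regularized least-squares estimate is pulled toward zero as the penalty grows.

```python
def ridge_gain(xs, ys, lam):
    """Least-squares estimate of g in y = g*x with an L2 penalty lam*g^2.
    Closed form: g = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]               # exact gain is 2
g_ols = ridge_gain(xs, ys, 0.0)    # ordinary least squares
g_reg = ridge_gain(xs, ys, 1.0)    # L2-regularized: smaller in magnitude
```

The same mechanism appears in the full algorithm as the damping/penalty term added to the Gauss-Newton normal equations.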
High-speed railway operation is characterized by high speed, high traffic density, and strong dependence on control equipment; its safety is affected by many interacting factors, which make high-speed railway operation complex and high-risk. Adopting a "basic theory, key technology and simulation experiment" approach and building on the "proactive safety" concept, this paper proposes a visual, intelligent, and integrated monitoring and early-warning platform for safe high-speed railway operation, described in terms of its objectives, architecture, and functions, to achieve comprehensive monitoring, safety warning, decision support, and information transfer and sharing across the whole operation.
To address the shortcomings of existing software for wavelet-based image processing, this paper introduces a core algorithm of the JPEG2000 still-image compression standard: the reversible 5/3 integer wavelet transform. The algorithm was refined and implemented in Visual C++ using the nonstandard decomposition method, resulting in a highly portable multi-document platform. The platform performs the 5/3 integer wavelet transform and its inverse on grayscale images. The wavelet coefficients can also be stored for further research such as filtering, image encoding and decoding, and digital watermarking. Moreover, the processed wavelet coefficients can be displayed as an image, which makes the results more intuitive. The application of the platform is illustrated with an example, which verifies that the 5/3 integer wavelet transform is indeed reversible and lossless: the original image is reconstructed exactly. The platform can also be applied to other image processing fields.
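The reversibility of the 5/3 transform follows from the lifting scheme: each integer lifting step is subtracted back exactly on inversion. A minimal 1-D sketch (Python rather than the paper's Visual C++; the simple mirror/clamp boundary handling here is an assumption, not necessarily the paper's):

```python
def fwd_53(x):
    """Forward reversible 5/3 integer wavelet transform (lifting scheme),
    the lossless filter of JPEG2000. len(x) must be even.
    Returns (lowpass s, highpass d)."""
    n = len(x)
    def px(i):                      # mirror extension in sample space
        if i >= n:
            i = 2 * n - 2 - i
        return x[i]
    h = n // 2
    # predict step: detail = odd sample minus average of even neighbors
    d = [px(2*i + 1) - (px(2*i) + px(2*i + 2)) // 2 for i in range(h)]
    def pd(i):                      # clamp extension for detail coeffs
        return d[max(0, min(h - 1, i))]
    # update step: smooth = even sample plus rounded detail average
    s = [px(2*i) + (pd(i - 1) + pd(i) + 2) // 4 for i in range(h)]
    return s, d

def inv_53(s, d):
    """Inverse 5/3 transform: undoes each lifting step exactly."""
    h = len(s)
    def pd(i):
        return d[max(0, min(h - 1, i))]
    even = [s[i] - (pd(i - 1) + pd(i) + 2) // 4 for i in range(h)]
    def pe(i):                      # matches the forward mirror rule
        return even[min(i, h - 1)]
    x = []
    for i in range(h):
        x.append(even[i])
        x.append(d[i] + (pe(i) + pe(i + 1)) // 2)
    return x

x = [3, 7, 1, 8, 2, 9, 4, 6]
s, d = fwd_53(x)
assert inv_53(s, d) == x            # lossless: exact reconstruction
```

The 2-D nonstandard decomposition applies this 1-D transform alternately to rows and columns of each low-pass subband.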
In this paper a new concept, Circulation Course Resources (CCR), is introduced: course resources that circulate from students listening to a classroom lecture, through camera shooting, video coding, video storage, and the video server, back to students learning via video on demand (VOD). The course-video creation system and the network teaching system are presented separately as parts of the CCR architecture. To connect the two systems, a middle system called the Bridge System is designed and modeled with UML. The core application design of the Bridge System is expressed through its class design and main database design. The functions of the Bridge System include moving course videos from one system to the other automatically and converting the important data of the two systems into a uniform format. The CCR architecture has been put into practice and has achieved satisfactory results.
PCA, LDA, and LPP are the three most representative subspace face recognition approaches. In this paper we show that they can be unified under the same framework, constructed using graph embedding. PCA serves as an evaluation benchmark for face recognition, while both LDA and LPP have achieved superior performance on the YALE face database. A unified view of the three methods greatly helps in understanding the family of subspace methods and in improving them further.
This study uses 3D virtual reality technology to create the "Mackay campus environmental education and digital culture 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and to model the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology we completed the environmental education and digital culture Mackay campus. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of the buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are modeled at their actual size, which gives users a more realistic feel. As for the navigation route, the system does not force users along a fixed path but instead allows them to freely choose the route they take to view the historical sites on the platform.
This paper describes an image segmentation algorithm based on the Non-Subsampled Contourlet Transform (NSCT) and the Normalized Cut (N-Cut). The NSCT is multiresolution, localized, multidirectional, and anisotropic with low redundancy, so it captures high-dimensional singularities in an image more effectively. Segmentation is accomplished by applying the normalized-cut criterion after the NSCT. Experimental results show efficient image contour extraction while avoiding the over-segmentation and under-segmentation problems of the plain N-Cut algorithm. In addition, because the low-resolution subbands contain less information, the processing time is greatly shortened while better segmentation results are still obtained.
An improved video median noise reduction algorithm is presented in this paper for the "120" emergency-vehicle terminal monitoring system. The causes of noise in the video images of many 120 ambulance video terminal monitoring devices are analyzed, and a space rigid-body model of a self-adaptive median noise reduction filter is established to reduce the noise introduced during video image transmission. Noise reduction experiments on video images show that the proposed video median noise reduction algorithm is superior to the traditional adaptive filtering method, because the new method performs joint spatio-temporal noise reduction.
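The paper's space rigid-body model is not detailed in the abstract; the space-time joint filtering idea it relies on can be sketched with a generic spatio-temporal median, which takes the median over a neighborhood in x, y, and t rather than in a single frame (a minimal illustration, not the paper's adaptive filter):

```python
import statistics

def st_median(frames, t, y, x, r=1):
    """Spatio-temporal median at (t, y, x): median over an r-neighborhood
    in space and time. frames is a list of 2-D grayscale lists; borders
    are clamped."""
    T, H, W = len(frames), len(frames[0]), len(frames[0][0])
    vals = []
    for dt in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                tt = min(max(t + dt, 0), T - 1)
                yy = min(max(y + dy, 0), H - 1)
                xx = min(max(x + dx, 0), W - 1)
                vals.append(frames[tt][yy][xx])
    return statistics.median(vals)
```

An impulse spike that appears in a single frame is outvoted by the 26 clean neighbors in the 3x3x3 window, which a purely spatial 3x3 median cannot always guarantee.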
A method that can distinguish tower cranes from other objects in an image is proposed in this paper. It combines the advantages of morphological theory and geometric features to identify tower cranes accurately. The algorithm uses morphological operations to remove noise and segment the image; geometric features with thresholds are then used to extract the tower cranes. To test the algorithm's practical applicability, we apply it to another image and check the result. The experiments show that the approach can locate tower cranes precisely and count them with 100% accuracy. It can be applied to identifying tower cranes in small regions.
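The morphological noise-removal step mentioned above is typically an opening (erosion followed by dilation), which deletes specks smaller than the structuring element while preserving larger shapes. A minimal binary-image sketch with a 3x3 window (an illustrative stand-in; the paper's exact operators and thresholds are not specified in the abstract):

```python
def _window(img, op):
    """Apply op (min = erosion, max = dilation) over each pixel's
    3x3 neighborhood; the window is clipped at the image border."""
    H, W = len(img), len(img[0])
    return [[op(img[yy][xx]
                for yy in range(max(0, y - 1), min(H, y + 2))
                for xx in range(max(0, x - 1), min(W, x + 2)))
             for x in range(W)] for y in range(H)]

def opening(img):
    """Morphological opening: erosion then dilation. Removes specks
    smaller than the 3x3 structuring element, keeps larger blobs."""
    return _window(_window(img, min), max)

img = [[0] * 7 for _ in range(7)]
img[1][1] = 1                       # isolated noise pixel
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 1               # a solid 3x3 object
clean = opening(img)                # speck removed, object kept
```

Connected components of the cleaned image can then be filtered by geometric features (e.g. elongation, area) to single out crane-like shapes.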
This paper presents a fractal-dimension calculation technique for highway pavement images, used to detect pavement cracking. First, the impulse noise generated by pavement unevenness is removed by median filtering the pavement image. Second, the fractal dimension of each fissure is calculated according to the crack's fractal features. Finally, pavement damage is detected and located with respect to the fractal dimension. These three steps realize damage localization and damage-rate computation for the pavement. For a number of randomly chosen images, the experimental results show that the fractal dimension of pavement crack regions basically lies between 2.25 and 2.99.
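The abstract does not state which fractal estimator is used (the reported 2.25-2.99 range suggests an intensity-surface dimension rather than a binary one); box counting is the most common choice and illustrates the principle: count occupied boxes N(s) at several scales s and fit the slope of log N(s) against log(1/s).

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set by box counting:
    least-squares slope of log N(s) versus log(1/s).
    points: iterable of (x, y) in the unit square; sizes: box edge lengths."""
    logs = []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    num = sum((a - mx) * (b - my) for a, b in logs)
    den = sum((a - mx) ** 2 for a, _ in logs)
    return num / den

# a thin curve (crack-like) should come out with dimension close to 1
line = {(i / 1000.0, 0.5) for i in range(1000)}
dim = box_counting_dimension(line, [0.1, 0.05, 0.025])
```

A crack region's dimension rises as the crack branches and fills more of the plane, which is what makes the dimension usable as a damage indicator.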
This paper presents an image-based pavement crack detection method using fractal-dimension features and designs a self-adapting algorithm for the fractal-dimension interval of pavement regions. Through image pretreatment, fractal-dimension calculation, and self-adapting calculation of the dimension interval, we obtain a location image of the damaged pavement. Experimental results for transverse cracks, longitudinal cracks, net-shaped cracks, and pits are contrasted with those of the Sobel operator. The results show that the two have similar capability in representing cracks, but the proposed method is more flexible in representing crack size and calculating the damage ratio.
Mobile image applications currently spend considerable computation displaying images. A true-color raw image contains millions of colors and consumes high computational power in most mobile image applications, while mobile devices are expected to offer only modest processing power and minimal storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower-quality image display. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). The TMT rests on a simple matrix-based mathematical framework, and its coefficients are real rational numbers. Image dithering based on the TMT has the potential to provide better efficiency and simplicity. Preliminary experiments show promising results in terms of reconstruction error and image visual texture.
With the development of educational software, digital educational games have become an important part of life, entertainment, and education. How to make full use of digital games' teaching functions and educate through entertainment has therefore become a focus of current research. This thesis connects educational games with collaborative learning, the currently popular teaching model, and derives a digital game-based collaborative learning model combined with teaching practice.
An HTTP-based video transmission system has been built on a peer-to-peer (P2P) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated subnetworks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model speeds up transmission of the video stream data, which is encoded into fragments using the JPEG codec. To make the system capable of conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
Noise is unwanted information present in an image; it should be removed without disturbing the useful information. Image de-noising is a very active research area in image processing. In this study, three classes of noise-degraded images are used: additive noise, multiplicative noise, and impulse noise. Several de-noising algorithms exist, but each has its own assumptions, advantages, and limitations. Histogram multithresholding gives rise to explicit peaks, which simplifies finding thresholds when dissecting the image histogram. The proposed method uses histogram multithreshold segmentation as the first step, followed by statistical features and pattern classifiers to identify the noise type. Simple filters are used to obtain noise samples, and noise identification is achieved with the proposed method, which yields higher accuracy than the first method for classifying the noise types.
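The abstract does not list which statistical features are used; one commonly used discriminator for the noise types it names is the excess kurtosis of the noise residual, shown here purely as an illustration (synthetic 1-D signals, hypothetical parameters): impulse noise is heavy-tailed, while additive Gaussian noise is not.

```python
import random
import statistics

def residual_kurtosis(noisy, clean):
    """Excess kurtosis of the noise residual (noisy - clean).
    Impulse noise is heavy-tailed (kurtosis >> 0); additive Gaussian
    noise gives a value near 0 -- a simple feature for separating
    the noise types."""
    r = [a - b for a, b in zip(noisy, clean)]
    m = statistics.fmean(r)
    var = statistics.fmean([(v - m) ** 2 for v in r])
    return statistics.fmean([(v - m) ** 4 for v in r]) / var ** 2 - 3.0

random.seed(0)
clean = [0.0] * 5000
gaussian = [v + random.gauss(0, 1) for v in clean]
impulse = [v + (100.0 if random.random() < 0.02 else 0.0) for v in clean]
k_gauss = residual_kurtosis(gaussian, clean)     # near 0
k_imp = residual_kurtosis(impulse, clean)        # large and positive
```

In practice the "clean" reference is approximated by a filtered version of the noisy image, as the abstract's "simple filters" step suggests.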
Several algorithms have been implemented to solve the problem of text categorization. Most work in this area has targeted English text, whereas little research has been conducted on Arabic text. The nature of Arabic text differs from that of English text, however, and pre-processing Arabic text is more challenging. In this paper an experimental study was conducted on three techniques for Arabic text classification: Discriminative Multinomial Naive Bayes (DMNB), Naive Bayes (NB), and the IBK algorithm. The paper aims to assess the accuracy of each classifier and to determine which is more accurate for Arabic text classification with stop-word elimination. The accuracy of each classifier is measured by the percentage-split (holdout) and K-fold cross-validation methods, along with the time needed to classify Arabic text.
A double Nd:YAG regenerative amplification picosecond pulse laser is demonstrated using semiconductor saturable absorber mirror (SESAM) mode-locking and regenerative amplification, with a BBO crystal as the Pockels-cell (PC) electro-optic crystal. The laser produces 20.71 ps pulses at a 10 kHz repetition rate, and the output power is up to 4 W, much larger than that of the system without pre-amplification. This result lays a foundation for the subsequent amplification stages.
We report on a picosecond pulse laser produced by a grating stretcher and a regenerative amplifier. Guided by the experimental setup design and numerical simulation, a mode-locked output pulse energy of 7.5 mJ at a 1 kHz repetition rate is obtained at 1064 nm, with the pulse width stretched from 8.5 ps to 106.4 ps. The results indicate that this system lays a good foundation for multichannel amplification to reach higher pulse energies.
An LD end-pumped Nd:YVO4 all-solid-state picosecond pulse laser for micromachining was demonstrated using semiconductor saturable absorber mirror (SESAM) mode-locking and regenerative amplification, with a BBO crystal as the electro-optic crystal and diode-side-pumped Nd:YAG. A 1064 nm laser was obtained with 1.47 mJ single-pulse energy and 15 ps pulse width at a 1 kHz repetition rate, and the pulse energy fluctuation was less than 0.6% over 3 hours of operation. Finally, the beam was focused through a galvanometer scanner, machining a 0.5 mm-thick steel plate with an aperture radius of 25.5 μm.
Statistical approaches have become important tools that pervade our daily life. In this paper we present a new technique for analyzing the two principal components of a given object by calculating its orientation over the occupied coordinates using the mean, variance, and covariance statistical functions. By exploiting the relationships among these statistical functions, we extract the angle of the processed object. For pattern recognition applications, the object can then be rotated accordingly to overcome the rotation perturbation that hinders the extraction of unified features. This is especially valuable for object recognition, where otherwise many samples per pose must be stored, making the growing database a noticeable processing burden. We achieve dramatic results with almost zero computation time, since the statistical functions applied need little processing time.
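The relationship between the second-order statistics and the orientation can be made explicit: the principal-axis angle of a 2-D point set is theta = 0.5 * atan2(2*cov_xy, var_x - var_y). A minimal sketch (synthetic points; the paper's exact derivation may differ):

```python
import math

def orientation_angle(points):
    """Principal-axis angle of a 2-D point set from its second-order
    statistics: theta = 0.5 * atan2(2*cov_xy, var_x - var_y)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    var_x = sum((x - mx) ** 2 for x, _ in points) / n
    var_y = sum((y - my) ** 2 for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * cov, var_x - var_y)

# points scattered along a line at 30 degrees: the angle is recovered
pts = [(t * math.cos(math.pi / 6), t * math.sin(math.pi / 6))
       for t in range(-10, 11)]
theta = orientation_angle(pts)
```

Rotating the object's coordinates by -theta then yields the pose-normalized representation used for recognition.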
Remote sensing is a very useful method for data collection in open spaces, especially in precision agriculture, and has been widely used for over a century. This paper presents the development of methodologies for identifying a surface model of grasslands and pastures based on chosen guidelines and properties. The model will be used to automate the monitoring of grasslands based on the analysis of spatial data and the computer analysis of automatically obtained aerial photographs.
In this work a mesh-growing surface reconstruction method adapted to noisy and sparse data is presented. The method realizes an interpolating growing model and introduces a Complex Propagation scheme that integrates both inertial and tangential propagation terms. The features and basic functionality of the method are described, and a number of experiments are carried out on 3D ultrasound data acquired from 2D tracked free-hand and sequentially triggered scanning systems.
This paper introduces a novel global thresholding approach that exploits the product of gradient magnitudes (PGM). The PGM of an image is obtained by multiplying the responses of the first derivative of Gaussian (FDoG) filter at three adjacent spatial scales. The output threshold is selected as the one that maximizes a new objective function of the gray-level variable t, defined as the ratio of the mean PGM values of the boundary and non-boundary regions of the binary image obtained by thresholding at t. In an analysis of 35 real images from different application areas, our results show that the proposed method performs bilevel thresholding on images with different histogram patterns, including unimodal, bimodal, multimodal, and comb-like shapes, with segmentation quality superior to five popular thresholding algorithms.
This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, edge points are located more accurately than without the optimization and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry, we use skeleton extraction to measure the length of a single crack; to calculate the crack area, we construct an area template via a logical bitwise AND operation on the crack image. Experiments show that the errors between this crack detection method and manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement and can provide valid data for proper planning and performance of bridge maintenance and rehabilitation.
A stereo vision system is a useful method for gathering the depth of objects and features in an environment. This paper presents a region of interest (ROI) in the disparity map that is analyzed to estimate distance in stereo vision applications. The application here is a mobile robot that navigates using a pair of cameras, which act as a stereo vision sensor. The ROI is a reference sight of the stereo camera in which the pixel intensities of the disparity map determine the distance or depth via an algorithm. The stereo baseline uses a horizontal configuration. The matching process uses the block matching technique, which is briefly described together with the performance of its output. The disparity map is generated by the algorithm with reference to the left image coordinates, using the Sum of Absolute Differences (SAD) developed in Matlab.
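The SAD block-matching step described above can be sketched as follows (Python rather than the paper's Matlab; synthetic images, a hypothetical block size, and a horizontal left-reference search, matching the paper's configuration):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def disparity_at(left, right, y, x, bsize, max_d):
    """Disparity at (y, x) in the left image: slide a bsize x bsize block
    leftwards over the right image and keep the offset with minimum SAD."""
    blk_l = [row[x:x + bsize] for row in left[y:y + bsize]]
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_d, x) + 1):
        blk_r = [row[x - d:x - d + bsize] for row in right[y:y + bsize]]
        cost = sad(blk_l, blk_r)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# synthetic pair: the right view is the left view shifted by 2 pixels
left = [[(y * 8 + x) * 7 % 251 for x in range(8)] for y in range(8)]
right = [[left[y][x + 2] if x + 2 < 8 else 0 for x in range(8)]
         for y in range(8)]
disp = disparity_at(left, right, 2, 4, 3, 4)
```

Larger disparities correspond to nearer objects; with a calibrated horizontal baseline, depth is proportional to baseline times focal length divided by disparity.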
Laboratory digital image analysis is an important approach to studying the microstructure of granular materials, which plays a significant role in governing the macroscale behavior of granular soils, but it is highly complicated and difficult. To avoid this, a new method is proposed that performs digital image analysis of the microstructure of granular soils based on numerical simulation. A series of numerical models is developed to simulate plane-strain tests on granular soil. Based on the numerical results, two methods are proposed to emulate the laboratory digital-image method: the RENCI 3D slicer method, a direct analogue of the laboratory method, and the geometric algorithm method, which takes advantage of the numerical data and is considered more accurate and able to extract more information about the microstructure. Analyses of the local void ratio distribution and the particle orientation distribution are performed as in physical laboratory experiments. The proposed numerical digital image analysis method proves to be a valid and more efficient approach to stereological analysis of the microstructure of granular soils.
A Time-Interleaved Analog-to-Digital Converter (TIADC) is an efficient way to achieve higher sampling rates in medium-to-high resolution applications. However, TIADC performance suffers from mismatch errors among the sub-channels. This paper presents a method to estimate the channel mismatch errors from the sub-channels' output data. The proposed method introduces an input-dependent estimation model (IDEM), based on an equivalent transfer function that includes the mismatch errors, to calculate the standard deviation of the mismatch errors. The spurious-free dynamic range (SFDR) is then evaluated by applying a multi-tone sinusoidal input signal. Simulation results show that the method achieves about 45 dB of SFDR enhancement.
Considering the low-visibility characteristic of night images, we propose an improved image enhancement method based on multi-scale Retinex. In our method, we replace the Gaussian filter with a blur filter and the logarithmic gain with a linear gain to reduce the computational complexity. The proposed algorithm first separates the color image into three independent R, G, B channels, then employs our improved multi-scale Retinex algorithm to enhance the contrast of each channel. Finally, the enhanced image is obtained by recombining the three enhanced R, G, B results. Extensive experiments show that the proposed algorithm can quickly and efficiently improve the contrast of night images and gives a better visual effect than similar methods.
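The modified pipeline can be sketched as below. This is a minimal illustration rather than the authors' implementation: the box-blur surround, the set of scales, and the linear rescaling to [0, 255] are assumptions made for the sketch.

```python
import numpy as np

def box_blur(channel, k):
    """Box-mean blur standing in for the Gaussian surround (assumed filter)."""
    pad = k // 2
    padded = np.pad(channel.astype(float), pad, mode="edge")
    h, w = channel.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multiscale_retinex_linear(channel, scales=(3, 5, 7)):
    """Linear-gain Retinex: average, over scales, of the difference between
    the channel and its blurred surround, then rescale linearly to [0, 255]."""
    channel = channel.astype(float)
    acc = sum(channel - box_blur(channel, k) for k in scales) / len(scales)
    lo, hi = acc.min(), acc.max()
    return (acc - lo) / (hi - lo + 1e-9) * 255.0

def enhance_rgb(img):
    """Enhance each of the R, G, B channels independently and recombine."""
    return np.dstack([multiscale_retinex_linear(img[..., c]) for c in range(3)])
```

Replacing the logarithm with a linear gain avoids one transcendental evaluation per pixel per scale, which is the source of the speed-up claimed above.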
When a document is fed to a scanner, either mechanically or by a human operator, for digitization, it suffers from some degree of skew or tilt. Skew detection is one of the first operations applied to scanned documents when converting data to a digital format. Its aim is to align an image before processing, because text segmentation and recognition methods require properly aligned lines. This paper presents a new approach for the skew detection and correction of documents. Particle Swarm Optimization (PSO) is used to solve the skew optimization: a new objective function based on the local maxima and minima of the projection profile is formulated, and PSO is used to find the angle that maximizes it. The proposed method is compared with existing methods such as the Hough transform and the Fourier transform. Experimental results show that the proposed method corrects the skew with a maximum error of ±1°.
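The optimization step can be illustrated with a small sketch. The objective below scores an angle by the total variation of the horizontal projection profile, a stand-in for the paper's local maxima/minima formulation, which the abstract does not fully specify; the PSO coefficients and the nearest-neighbour rotation helper are likewise assumptions.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (helper for the sketch)."""
    h, w = img.shape
    a = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: sample the source at the un-rotated coordinates
    sy = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
    sx = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return img[sy, sx]

def profile_objective(img, angle):
    """Score an angle by the total variation of the horizontal projection
    profile: well-aligned text lines give sharp alternating maxima/minima."""
    profile = rotate_nn(img, angle).sum(axis=1)
    return np.abs(np.diff(profile)).sum()

def pso_deskew(img, n_particles=16, iters=40, bounds=(-12.0, 12.0), seed=0):
    """Plain global-best PSO over the candidate correction angle."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([profile_objective(img, p) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([profile_objective(img, p) for p in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest
```

Because the search variable is a single continuous angle, PSO avoids the angle quantization inherent to accumulator-based methods such as the Hough transform.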
Automatic target recognition based on image fusion fuses the target images provided by a variety of sensors so as to improve recognition accuracy and robustness and obtain better recognition performance. This paper presents a feature fusion method that uses both global and local structure information for feature extraction and target classification: the global structure information is obtained through the scatter matrix, while the local structure information is obtained by constructing the Laplacian matrix of the nearest-neighbour graph. In the criterion function, the relative importance of the two kinds of information in different applications is regulated by a regulatory factor.
Landslides are among the prominent geohazards continually affecting tropical countries, including Malaysia. Frequent occurrences of landslides on hillslopes during heavy rainy periods have resulted in public fear for the safety of life and property. Over the past 25 years, many landslide occurrences have been reported in the Klang Valley, especially in hilly-terrain residential areas. A landslide monitoring scheme is therefore crucial and should be carried out continuously. Various techniques have been used to monitor landslide activity, such as conventional geotechnical and geodetic techniques; each has its own advantages and limitations. This study therefore focuses on the effectiveness of combining GPS technology and inclinometer techniques for landslide monitoring. The study area is a residential area in Section 5, Wangsa Maju, Kuala Lumpur, Malaysia. The inclinometer instrument was placed at five (5) selected monitoring points, and three (3) epochs of inclinometer measurements were made. At the same time, GPS observations were carried out separately for three (3) epochs using GPS static techniques. The GPS network consists of four (4) control points and eleven (11) monitoring points. The GPS observation data were validated, processed and adjusted using two (2) adjustment software packages, namely Trimble Geomatic Office (TGO) version 1.6 and GPS Adjustment and Deformation Analysis (GADA). The results show that the GPS technique can be combined with the inclinometer technique to detect horizontal displacements up to ±30 mm and vertical displacements of less than ±50 mm.
This paper introduces an image retrieval algorithm based on SIFT features. Each image is transformed into a set of feature vectors, and the similarity between two images is computed as the Euclidean distance between their features. Experiments show that the algorithm performs well on specific objects.
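The retrieval step can be sketched as a nearest-neighbour search over descriptors under Euclidean distance. The sketch assumes the 128-dimensional SIFT descriptors have already been extracted by an external detector, and the distance threshold is an illustrative value, not one from the paper.

```python
import numpy as np

def match_descriptors(query_desc, db_desc, max_dist=200.0):
    """For each query SIFT descriptor, find the nearest database descriptor
    by Euclidean distance and count matches under a distance threshold."""
    # pairwise Euclidean distances, shape (n_query, n_db)
    d = np.linalg.norm(query_desc[:, None, :] - db_desc[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return int((nearest < max_dist).sum())

def rank_images(query_desc, database):
    """Score each database image by its number of descriptor matches."""
    scores = {name: match_descriptors(query_desc, desc)
              for name, desc in database.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

The image with the most matched descriptors is returned first, which is the behaviour the abstract's per-object retrieval relies on.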
This paper presents a multilayer segmentation of stereo images with reference to the displacement between the left and right images. A stereo image is given as left and right components, and the corresponding matching is found by drawing random lines parallel to the x-axis, keeping the y-coordinate constant. Edges are found in both the right and left images together with their pixel positions, and the number of edges found in the two components is recorded. The edge values are then clustered with respect to the deviation found in the matching correspondence, and a rough distance is calculated using the deviation clusters. The number of clusters represents the number of layers of the segmentation. Once the layers are determined, the whole image is segmented with zero crossing, taking the displacement as the layer parameter. The algorithm is implemented and tested for single and multiple objects at various distances in feet.
This paper presents a new and efficient method for the detection of concentric ellipses with the same orientation in
images on the basis of the Hough Transform (HT). In order to meet real application requirements of high detection
accuracy and low time and space costs, the method detects the parameters of ellipses separately with a five-step
algorithm by using the special geometry properties of concentric ellipses instead of gradient information or central
symmetric points. Some experimental results on both synthetic and real images show that the method is more efficient,
accurate, and robust than the Randomized Hough Transform (RHT) method.
Driver fatigue is a major cause of traffic accidents. To improve traffic safety, this paper proposes a novel method for fatigue pattern detection based on parallel Gabor filtering and the 1-Nearest Neighbor (1-NN) algorithm. In the algorithm, parallel Gabor wavelets and feature orientation fusion are first employed to obtain multi-scale orientation-fused image features, since the facial features of tired drivers differ from those of alert drivers. Then, in the classification phase, the multi-scale 1-NN algorithm is used to classify the extracted facial image features for fatigue pattern detection. Experimental results show that the new method can effectively recognize driver fatigue patterns, and the performance of real-time fatigue detection with multiple processors is improved compared with single-CPU computing environments.
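The two stages can be sketched minimally as below. The Gabor parameters (kernel size, wavelength, sigma, four orientations) and the toy "orientation fusion" that averages the absolute filter response per orientation are assumptions; the real system extracts far richer multi-scale features from face images.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    """Real Gabor kernel at orientation theta (parameters assumed)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, orientations=4, size=7):
    """Toy orientation fusion: mean absolute Gabor response per orientation."""
    feats = []
    h, w = img.shape
    for k in range(orientations):
        kern = gabor_kernel(size, theta=np.pi * k / orientations)
        resp = np.zeros((h - size + 1, w - size + 1))
        for i in range(resp.shape[0]):        # valid correlation
            for j in range(resp.shape[1]):
                resp[i, j] = (img[i:i + size, j:j + size] * kern).sum()
        feats.append(np.abs(resp).mean())
    return np.array(feats)

def one_nn_classify(train_feats, train_labels, test_feat):
    """1-Nearest Neighbour: return the label of the closest training vector."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(d.argmin())]
```

Since each orientation's filtering is independent, the Gabor bank is the natural place to parallelize across processors, which is where the reported speed-up comes from.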
Iris localization plays an important role in iris recognition systems. Traditional iris localization methods based on the Canny operator and the integro-differential operator are affected by reflections, illumination inconsistency and eyelashes. In this paper, we introduce an accurate iris localization method for low-quality iris images. First, a reflection removal method is used to interpolate over specular reflections. Then, we utilize the Probable Boundary (Pb) edge detection operator to detect the pupillary boundary with fewer interference points. Moreover, we optimize the Hough transform to obtain highly accurate results. Experimental results demonstrate that the localization results of the proposed method are more accurate than those of other methods.
In this paper, we propose a new secret image sharing scheme based on chaotic system and Shamir's method. The
new scheme protects the shadow images with confidentiality and loss-tolerance simultaneously. In the new scheme, we
generate the key sequence based on chaotic system and then encrypt the original image during the sharing phase.
Experimental results and analysis of the proposed scheme demonstrate better performance than other schemes and confirm a high capability to resist brute-force attacks.
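The key-generation and encryption part of the sharing phase can be sketched with the logistic map. The map parameter, the seed, and the byte quantisation below are assumptions, and the subsequent Shamir polynomial sharing of the encrypted image is omitted.

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, mu=3.99):
    """Key sequence from the logistic map x -> mu*x*(1-x), chaotic for mu near 4."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)   # quantise each state to one key byte

def encrypt_image(pixels, x0=0.3141):
    """XOR the flattened 8-bit image with the chaotic keystream; in the scheme
    this happens before the Shamir shadow images are generated."""
    flat = pixels.ravel()
    ks = logistic_keystream(flat.size, x0)
    return (flat ^ ks).reshape(pixels.shape)
```

XOR is an involution, so applying the same keystream again decrypts; the sensitivity of the map to `x0` is what makes brute-forcing the key space impractical.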
This paper analyses the performance of subspace signal processing techniques applied to ground penetrating radar (GPR) images in order to reduce the amount of clutter and noise in the measured GPR image. The two methods considered in this work are Principal Component Analysis (PCA) and Independent Component Analysis (ICA), and an approach combining the two techniques to improve their effectiveness on GPR data is proposed.
The experiments performed to gather GPR data and evaluate the proposed algorithms are also described. The aim of the experiments is to replicate conditions found in water reservoirs, where cracks and holes in the reservoir foundations and joints cause excessive water leakage and losses to water companies and the UK economy in general. The performance of the implemented algorithms is discussed and compared with the results achieved by a highly skilled human GPR image analyst.
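A common PCA formulation of B-scan clutter removal is sketched below: the mean trace and the first principal component, which capture the horizontally correlated background such as the ground bounce, are subtracted. This is a generic sketch, not necessarily the paper's exact preprocessing.

```python
import numpy as np

def pca_clutter_removal(bscan, n_remove=1):
    """Remove clutter from a GPR B-scan (rows = depth samples, cols = traces)
    by subtracting the mean trace and the first n_remove principal components,
    which model the background common to all traces."""
    X = bscan.astype(float)
    Xc = X - X.mean(axis=1, keepdims=True)        # subtract the mean trace
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    clutter = (U[:, :n_remove] * s[:n_remove]) @ Vt[:n_remove, :]
    return Xc - clutter
```

Targets such as cracks appear in only a few traces, so they contribute little to the leading components and survive the subtraction, while the trace-to-trace correlated clutter is suppressed.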
In this paper a hybrid method for the segmentation of 2-D medical images of human organs is proposed. Markov random fields (MRFs) are employed to capture the object's global properties, and the result of the MRF segmentation is used to recalculate the gradient-vector-flow-based snake external forces in a further snake progression step. An MRF MAP solution is estimated using the MMD algorithm. To reduce the computation time, MRF estimation is restricted to sites where the state uncertainty is higher than a threshold value. Tests on MR slices and ultrasound images of the left ventricle of the human heart are reported.
In this paper, we propose a novel method to filter keypoints and reduce redundancy. SIFT (Scale Invariant Feature Transform) is one of the most robust and widely used methods for image matching and object recognition, being robust to illumination changes, image scaling and rotation. However, SIFT generates a large number of redundant keypoints in the background of the scene. Based on saliency detection and salient region selection, our method prunes the keypoints outside the selected salient region. The experimental results show that although the repeatability of our method is slightly lower than that of the original SIFT (by less than 6%), the number of keypoints is significantly reduced (by more than 33%).
Based on compressed sensing, a new bit-plane image coding method is presented. Because different bit-planes carry different importance, the method is robust to bit errors, and it has the advantages of a simple structure and easy software and hardware implementation. Since the values of an image bit-plane are 1 or 0, the one-order difference matrix is chosen as the sparsifying transform matrix, and simulation shows that it yields sparser representations. A general 8-bit image has 8 bit-planes, the eighth being the most significant, so more measurement vectors can be allocated to it to improve reconstruction precision. At the same time, this kind of image codec scheme can meet many application demands. The method partitions an image into 8 bit-planes, applies the orthonormal transform using the one-order difference matrix to each bit-plane, and then forms multiple descriptions from random measurements of each bit-plane. At the decoding end, the original image is reconstructed approximately or exactly from the received bit streams using the OMP algorithm. The proposed method can construct more descriptions with lower complexity because the bit-plane measurement process is simple and easy to realise in hardware. Experimental results show that the proposed method can reconstruct images at different precisions and can easily generate more descriptions.
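The decomposition and the per-plane sparsifying transform can be sketched as follows; the measurement and OMP decoding stages are omitted here, and the row-wise form of the one-order difference is an assumption made for the sketch.

```python
import numpy as np

def to_bitplanes(img):
    """Split an 8-bit image into 8 binary bit-planes; plane 7 is the MSB."""
    return [(img >> b) & 1 for b in range(8)]

def from_bitplanes(planes):
    """Exact reconstruction: weighted sum of the planes."""
    return sum((p.astype(np.uint8) << b) for b, p in enumerate(planes))

def first_order_difference(plane):
    """One-order difference along rows: mostly-constant binary planes become
    sparse, with nonzeros only at bit transitions (and the kept first column)."""
    d = plane.astype(np.int16)
    return np.concatenate([d[:, :1], np.diff(d, axis=1)], axis=1)
```

Higher planes vary slowly across the image, so their difference-domain representations are the sparsest; this is what justifies spending more measurements on the MSB planes.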
In order to handle massive concurrent access to streaming media, this paper designs a streaming media service based on cloud computing technologies. Following an analysis of the actual demand, the designed system comprises three parts: a streaming media resource center, streaming media edge nodes and an intelligent load balancing system. The streaming media resource center manages and distributes streaming media resources; the streaming media edge nodes respond directly to streaming playback requests; and the intelligent load balancing system automatically schedules system load according to the current state of user requests. Experiments show that the system has good performance and practical value.
Compressed sensing is a novel signal sampling theory that has emerged recently: signals can be sampled far below the Nyquist rate. This paper introduces compressed sensing theory into infrared video, proposes a new residual reconstruction algorithm, and establishes a new infrared video codec model with a random Gaussian matrix as the measurement matrix and the orthogonal matching pursuit algorithm as the reconstruction method. The reconstruction of infrared video frames is performed on the Matlab platform. The simulation results verify that the proposed algorithm provides good visual quality and a clear speed-up compared with the conventional algorithm.
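The codec's core loop, random Gaussian measurement followed by OMP reconstruction, can be sketched on a synthetic sparse frame. The dimensions, sparsity level and coefficient values are illustrative only, and the residual-reconstruction refinement proposed in the paper is not reproduced.

```python
import numpy as np

def omp(Phi, y, n_iters):
    """Orthogonal Matching Pursuit: greedily build the support of a sparse x
    such that y ≈ Phi @ x, refitting by least squares at every step."""
    residual = y.astype(float).copy()
    support, coef = [], np.zeros(0)
    for _ in range(n_iters):
        if np.linalg.norm(residual) < 1e-10:
            break
        # pick the column most correlated with the current residual
        j = int(np.abs(Phi.T @ residual).argmax())
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# measurement step: random Gaussian matrix, as in the codec model
rng = np.random.default_rng(7)
n, m, k = 64, 32, 4                               # frame length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([1.0, -1.0, 1.5, -1.5])
Phi = rng.normal(size=(m, n)) / np.sqrt(m)        # Gaussian measurement matrix
y = Phi @ x_true                                  # compressed measurements
x_hat = omp(Phi, y, n_iters=2 * k)                # reconstruction
```

Only m = 32 measurements are kept for a length-64 frame, i.e. sampling at half the nominal rate, yet the sparse signal is recovered from them.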
A robust key point detector plays a crucial role in obtaining good tracking features. The main challenge in outdoor tracking is illumination change due to causes such as weather fluctuation and occlusion. This paper approaches the illumination change problem by transforming the input image with a colour constancy algorithm before applying the SURF detector. The masked grey world approach is chosen because of its ability to perform well under local as well as global illumination change: every image is transformed to imitate the canonical illuminant, and a Gaussian distribution is used to model the global change. The simulation results show that the average number of detected key points increases by 69.92%. Moreover, the improved cases far outweigh the degraded ones, with the former improving by 215.23% on average. The approach is suitable for tracking applications where sudden illumination change occurs frequently and robust key point detection is needed.
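The colour constancy preprocessing can be sketched with the plain global grey-world transform; the masked variant used in the paper additionally excludes some pixels (e.g. near-saturated ones) from the statistics, which is not reproduced here.

```python
import numpy as np

def grey_world(img):
    """Global grey-world colour constancy: scale each channel so its mean
    equals the average of the channel means, discounting the illuminant."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-9)
    return np.clip(img * gains, 0.0, 255.0)
```

Running the detector on the normalized image means a scene photographed under different illuminants yields similar inputs, which is why the key point count becomes more stable.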
Camera calibration is the basis of putting computer vision technology into practice. This paper proposes a new camera-calibration-based method for measuring gear diameters and analyses the errors from calibration and measurement. The method first obtains the intrinsic and extrinsic parameters by camera calibration, then transforms the feature points extracted from the image plane of the gear from image coordinates to 3D world coordinates, and finally computes the distance between the feature points to obtain diameter values. The experimental results demonstrate that the method is simple, quick and easy to implement, highly precise, and hardly limited by the size of the target.
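The coordinate transformation step can be sketched as back-projecting a pixel through the calibrated camera onto the gear plane, taken here as the world plane Z = 0 (an assumption); the distance between two back-projected rim points then gives the diameter.

```python
import numpy as np

def image_to_world(u, v, K, R, t, z_world=0.0):
    """Back-project pixel (u, v) onto the world plane Z = z_world, given the
    intrinsics K and extrinsics (R, t) with x_cam = R @ x_world + t."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    Rinv = R.T
    origin = -Rinv @ t                                   # camera centre in world frame
    direction = Rinv @ ray_cam                           # ray direction in world frame
    s = (z_world - origin[2]) / direction[2]             # intersect with Z = z_world
    return origin + s * direction

def measured_diameter(p1_px, p2_px, K, R, t):
    """Gear diameter as the world-space distance between two rim feature points."""
    w1 = image_to_world(*p1_px, K, R, t)
    w2 = image_to_world(*p2_px, K, R, t)
    return float(np.linalg.norm(w1 - w2))
```

Because the measurement happens in world coordinates, the result is independent of how large the gear appears in the image, matching the claim that the method is hardly limited by target size.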
This paper studies all-derivable points in operator algebras. Using linear mappings, matrix algebra operations and related results from nest algebra theory, we show that the matrix whose (1,2) entry is the unit operator and whose (2,2) entry is an invertible operator is an all-derivable point of the second-order operator matrix algebra.
In this paper, an effective slow-motion replay detection method for tennis videos containing logo transitions is proposed. The method is based on the colour auto-correlogram and proceeds in the following steps: first, detect the candidate logo transition areas in the video frame sequence; second, generate the logo template; then use the colour auto-correlogram for similarity matching between video frames and the logo template within the candidate logo transition areas; finally, select logo frames according to the matching results and locate the borders of the slow motion accurately using the brightness change during the logo transition. Experiments show that, unlike previous approaches, this method greatly improves the border-locating accuracy and can also be used for other sports videos with logo transitions. In addition, since the algorithm only processes the content in the central area of the video frames, its speed is greatly improved.
A method for measuring laser energy density distribution is presented in this paper. The dot-matrix method and the CCD imaging method are combined in a long-range pulse laser energy density measurement system. The laser energy data are received by detectors on a reflective board, and the laser spot reflected by the board is acquired by a CCD camera. The relation between the laser spot image gray levels and the energy data is obtained by image processing algorithms, and the energy density distribution is analyzed according to the laser energy model. The principle and process of the algorithms are presented in detail.
Flight safety is a very important issue for the aviation industry. Analyzing flight accidents on the basis of 2-dimensional images can hardly illustrate the complex injuries of passengers in the flight cabin; moving from 2-dimensional to 3-dimensional space to illustrate a flight accident is therefore a challenge. This study proposes a particle swarm optimization approach for improving the identification of objects in 2-dimensional images. The recognition results provide the information for building 3-dimensional systems for flight accident investigators. The experiments show that it is a feasible approach for improving the identification of image objects.
Several spectral-spatial classification methods have recently been presented and applied to pattern recognition in hyperspectral imagery. However, the present methods are mainly suited to classifying images with large spatial structures, despite achieving classification accuracies above 90%. To classify hyperspectral images with both larger and smaller spatial structures, a novel spectral-spatial classification method was presented and tested on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image with 145×145 pixels and 220 bands. First, a spectral mixture analysis using minimum noise fraction (MNF) was applied to the AVIRIS image. Based on the obtained n-dimensional eigenimage, a support vector machine (SVM) was used to classify the AVIRIS image. Simultaneously, mathematical-morphology-based image gradients were calculated for the n dimensions of the eigenimage so as to obtain n watershed segmentation images. Finally, the SVM classification map was turned into several new ones through a series of post-processing steps. The experimental results verify that the proposed spectral-spatial classification method is capable of detecting both larger and smaller spatial structures in hyperspectral imagery.
Remote telescope control is of great importance for astronomical site testing. Based on the ASCOM standard, a prototype remote telescope control system has been implemented. In this paper, the details of the system design, for both the server end and the client end, are introduced. We tested the prototype over a narrow-band dial-up network connection and successfully controlled a real remote telescope. The result indicates that controlling remote telescopes and other devices with ASCOM is effective.
In this paper, we propose a method to mitigate the problem of assistant lane marks being corrupted by pulse noise, and we define a method to measure the assistant lane marks' error rate objectively. To mitigate the problem, we mainly replace the Canny edge detection with Sobel edge detection, and we use a Gaussian filter to suppress noise. Finally, we improve the ellipse ROI size in the tracking stage, raising performance from 32 to 39 frames per second (FPS). In the past, the assistant lane marks' error rate was judged very subjectively; to avoid this, we propose an objective definition of the error rate as a standard, and we use the performance and the error rate together to choose the ellipse ROI parameter.
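The Sobel replacement for Canny can be sketched directly. This computes only the gradient magnitude, without the Gaussian pre-filtering or the tracking stages described above, and the loop-based correlation is for clarity rather than speed.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude (the lighter-weight replacement for Canny)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    img = img.astype(float)
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):            # valid 3x3 correlation
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)
```

Sobel needs only two small correlations per pixel and no hysteresis thresholding, which is where the FPS gain over Canny comes from.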
Aims: To design a filter that can separate the target from the background in a given image.
Methods: The original gray image is down-sampled by rejecting alternate pixel values along rows and columns. A 3-step down-sampling is used to avoid major information loss. A 3-step up-sampling is then performed by replicating the lower row and the right column of the down-sampled data matrix to restore the original matrix size. The image matrix thus obtained is subtracted from the original image.
Results: The iterative down-sampling and up-sampling yields the background information; subtracting it from the original image recovers the target, thus filtering out the background.
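The Methods section above can be sketched as below. For simplicity the up-sampling step replicates every row and column rather than specifically the lower row and right column, an assumed simplification that restores the same matrix size.

```python
import numpy as np

def downsample(m):
    """Keep every alternate pixel along rows and columns."""
    return m[::2, ::2]

def upsample(m):
    """Double each dimension by replicating rows and columns."""
    return np.repeat(np.repeat(m, 2, axis=0), 2, axis=1)

def background_filter(img, steps=3):
    """3-step down-sampling then 3-step up-sampling estimates the smooth
    background; subtracting it from the original leaves the target."""
    bg = img.astype(float)
    for _ in range(steps):
        bg = downsample(bg)
    for _ in range(steps):
        bg = upsample(bg)
    bg = bg[:img.shape[0], :img.shape[1]]   # crop back to the original size
    return img.astype(float) - bg
```

Small targets vanish during the down-sampling while the slowly varying background survives it, so the difference image isolates the target.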
This paper presents a method that integrates the local fractal dimension and a threshold filter into a region growing algorithm for segmenting man-made regions, i.e., regions occupied by man-made objects. First, we propose a new, more stable and accurate method of estimating the local fractal dimension of a natural image. Then, the local fractal dimension feature is integrated into the region growing algorithm. Finally, we apply the threshold filter twice to produce a nearly accurate segmentation of the man-made regions. The effectiveness of the proposed method in accurately segmenting man-made regions is confirmed through computer simulations.
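As context, the classical box-counting estimator that local fractal-dimension features build on can be sketched as follows. The paper proposes its own local estimator, which differs from this; the code below is only an illustrative baseline on a binary mask.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Classic box-counting fractal dimension of a binary mask: count the
    occupied s x s boxes at several scales and fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing any pixel
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields dimension 2 and a straight line yields dimension 1, the expected limiting cases; textured man-made vs. natural regions fall in between.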
In view of the rapid growth of Web data mining, this paper proposes a new Web mining method based on support vector machines and clustering. Building on the theory that support vectors do not appear inside the correctly separated regions of the two sample classes, it introduces clustering concepts such as the centroid, the class radius, and the distance from the class center. These make it possible to remove non-support vectors quickly and accurately, preserving the generalization ability of the algorithm. Experiments show that the improved algorithm can quickly and accurately exclude redundant training samples while retaining good generalization.
Undistortion methods based on perspective invariants play an important role in computer vision. The key of these
methods is how to choose a proper measure to describe the perspective invariance of undistorted image features. We
propose a new measure based on the homography between the control points in undistorted image and the pattern. The
less the distortion is, the less the mapping error is. A new lens distortion calibration method is also put forward which
uses this measure to search for accurate distortion parameters by iterative optimization. Compared with other proposed
measures based on perspective invariants, our measure is both concise and comprehensive. Both synthetic and real
experiments show that our method performs well in both accuracy and runtime.
As the capture frequency of a motion capture system increases, a large amount of redundant data is produced. Based on the resolution limits of the human eye, this paper uses as few sensors as possible to capture movement data for skeletal animation. Then, based on a three-dimensional virtual human model, more lifelike movement details such as the skin surface are generated. In this way, the motion of the virtual human is described both in general terms and with high-fidelity detail.
Internet-based communications are evolving at a tremendous rate, and encryption has become an important way to protect data resources, especially on the Internet, intranets and extranets. Steganography is a process of hiding data inside a sharing medium. A technique for hiding multiple images in multiple sharing images is proposed. A covering image whose pixel area is more than twice that of a secret image is chosen, which makes it possible to embed multiple secret images inside it. The secret images are encrypted using simple XOR operations over the covering images, and the covering images are transmitted over the communication medium to the authenticated user based on key management. Decryption recovers the secret images for the user at the other end. The scheme suffers a one-bit information loss in the last bit of each byte, which is negligible.
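The XOR encryption step is symmetric, which is what makes decryption at the receiver straightforward. A minimal sketch of that step (the paper's key management and multi-image packing are not shown, and the function names are illustrative):

```python
import numpy as np

def xor_embed(cover, secret):
    """Encrypt a secret image against a cover image with bitwise XOR,
    producing the share that is actually transmitted."""
    return np.bitwise_xor(cover, secret)

def xor_extract(cover, share):
    """XOR is its own inverse: cover ^ (cover ^ secret) == secret."""
    return np.bitwise_xor(cover, share)
```

Because XOR is involutive, an authenticated receiver holding the cover image recovers the secret exactly.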
In this paper, I study the theory of fuzzy logic control of a 2R robot, analyze and introduce it in detail, and then apply it to robot tracking control. The validity of the control scheme is verified by an end-effector linear trajectory tracking test on a 2R robotic manipulator under fuzzy logic control. The scheme does not depend on an exact mathematical model and can effectively handle the influence of nonlinearity and uncertainty.
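A fuzzy logic controller of this kind maps the tracking error to a control action through membership functions, a rule base, and defuzzification. The miniature single-input example below is only illustrative; the paper's actual rule base, membership shapes and scaling are not given, so everything here is an assumption.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_torque(error):
    """Tiny Mamdani-style rule base for one joint: three rules map the
    tracking error to a torque by weighted-average defuzzification."""
    mu = {'neg': tri(error, -2, -1, 0),
          'zero': tri(error, -1, 0, 1),
          'pos': tri(error, 0, 1, 2)}
    centers = {'neg': -1.0, 'zero': 0.0, 'pos': 1.0}  # output singletons
    num = sum(mu[k] * centers[k] for k in mu)
    den = sum(mu.values()) + 1e-12
    return num / den
```

Note that no plant model appears anywhere in the controller, which is the model-free property the abstract emphasizes.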
This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification) and stereo vision, which lets the robot obtain global coordinates with good accuracy while quickly adapting to new, unfamiliar environments. The method uses RFID tags as artificial landmarks: the 3D coordinates of each tag in the global coordinate system are written in its IC memory, which the robot reads through an RFID reader; meanwhile, the 3D coordinates of the tag in the robot coordinate system are measured using stereo vision. Combined with the robot's attitude transformation matrix from the pose measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than other localization methods.
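The coordinate transformation at the core of the method, combining the tag's stored global coordinates, its stereo-measured coordinates in the robot frame, and the attitude rotation matrix, can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def robot_global_position(tag_global, tag_in_robot, R_attitude):
    """Recover the robot's global position from one RFID landmark:
    tag_global   -- tag coordinates read from its IC memory (global frame),
    tag_in_robot -- tag coordinates measured by stereo vision (robot frame),
    R_attitude   -- rotation matrix from the pose measuring system.
    Since tag_global = robot_global + R @ tag_in_robot, solve for
    robot_global."""
    return np.asarray(tag_global) - R_attitude @ np.asarray(tag_in_robot)
```

With a 90-degree yaw attitude, a tag at (5, 4, 1) seen at (1, -3, 1) in the robot frame places the robot at (2, 3, 0).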
Air void content is a crucial parameter for asphalt mixes. However, the air void distribution cannot be captured by traditional laboratory methods, so most previous research focused only on the average volume of air voids. Yet specimens with the same total volume of air voids but different air void distributions exhibit distinct mechanical behaviors. Computed Tomography was therefore applied: a program was developed in Matlab® to process the CT images, and the grayscale threshold value was calculated from the laboratory results. Specimens with different compaction levels were scanned and the vertical air void distribution was analyzed.
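The per-slice thresholding used to obtain a vertical air-void profile can be sketched as below (in NumPy rather than Matlab, and with the laboratory-calibrated grayscale threshold taken as a given parameter):

```python
import numpy as np

def air_void_profile(ct_volume, threshold):
    """Per-slice air-void content from a stack of CT images
    (shape: slices x height x width): pixels darker than the calibrated
    grayscale threshold are counted as air voids; return the void
    fraction of each slice from top to bottom."""
    voids = ct_volume < threshold
    return voids.reshape(voids.shape[0], -1).mean(axis=1)
```

Plotting the returned fractions against slice depth gives the vertical air-void distribution compared across compaction levels.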
Composting is one of the best methods for sewage sludge management. The early identification of the young
compost stage in composted material is important. The proposed method for determining the degree of maturity of composted material containing sewage sludge uses selected topologies of artificial neural networks. The learning processes
of these networks will be carried out with the use of the information contained in digital images of composted material. It
is important that acquisition of these images was carried out under constant lighting and exposure conditions on a
suitable acquisition stand. The objectives of presented study were: to develop a stand for image acquisition of composted
material, to determine the spectral distribution for used light sources and illuminance distribution for visible light, to
determine the parameters for image acquisition of composted material. A suitable stand, consisting of three photographic chambers illuminated with visible light, UV-A light and mixed light, was developed. The spectral distribution of the
used light sources and the illuminance distribution for visible light were analyzed and considered satisfactory. Image
acquisition parameters, such as focal length, ISO sensitivity, aperture and exposure time, were specified.
With the rapid development of network systems, demands on Internet quality are becoming stricter. Traditional Internet QoS routing management mechanisms cannot keep up with the rapid growth in network bandwidth, transmission delay, delay jitter, error rate and so on. Therefore, this paper designs a QoS management mechanism based on AntNet: it defines the exchange types in the network and the associated path-finding algorithm. Simulation experiments on the NS2 platform show that the designed model can improve the routing success rate under both light and heavy network loads. The results provide a reference for research on QoS routing management mechanisms.
In any country, natural mineral resources are considered the backbone of industrial development and of the country's economic growth. Exploration and mining of mineral ores, and manufacturing and marketing these ores, add value to the country's national income.
Geographic Information Systems (GIS) technology has an advantage over other information systems because it
combines the conventional query operations with the ability to display and analyze spatial data from maps, satellite
imagery, and aerial photography. Knowing the importance of mineral ores as a pillar of the economy, this paper concentrates on mineral resources in Libya. GIS technology was used to identify mineral resources in Libya: geodatabases were designed and all available information was stored in them. The information was collected from scientific researchers and from geological and mining studies. The database also included the
Libyan international boundaries, the administrative boundaries and the oil and gas fields and pipelines, and such maps as
geophysical and geological maps. Thus a comprehensive database was created containing all the information available
concerning mineral resources in Libya.
In the automotive industry, outline design is its life and creative design is its soul. Computer-aided technology has been widely used in the automotive industry and has attracted more and more attention. This paper chiefly introduces the application of computer-aided technologies including CAD, CAM and CAE, analyzes the process of automotive structural design, and describes the development trends of computer-aided design.
RFID is a technology developed in the nineties that uses wireless communication to achieve non-contact data reading. It has a great advantage over traditional technologies in reading data wirelessly, and it is widely used in transportation, material management systems, medical care and other areas. This paper mainly introduces an RFID application in medical temperature measurement that can acquire and trace a patient's temperature in real time. It first introduces the structure of the RFID system, then studies and realizes the gathering and storage of the patient's temperature, and finally implements the RFID anti-collision algorithm.
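The paper does not specify which anti-collision algorithm it realizes; framed slotted ALOHA is one standard RFID choice and illustrates the idea. Each tag randomizes over reply slots, and only slots occupied by exactly one tag yield a successful read. The sketch below simulates that process (names and frame size are assumptions):

```python
import numpy as np

def aloha_round(n_tags, n_slots, rng):
    """One round of framed slotted ALOHA: each tag picks a random slot;
    slots with exactly one tag are read successfully."""
    slots = rng.integers(0, n_slots, n_tags)
    counts = np.bincount(slots, minlength=n_slots)
    return int((counts == 1).sum())

def read_all_tags(n_tags, n_slots=16, seed=0):
    """Repeat ALOHA rounds until every tag has been identified;
    return the number of rounds used."""
    rng = np.random.default_rng(seed)
    rounds = 0
    while n_tags > 0:
        n_tags -= aloha_round(n_tags, n_slots, rng)
        rounds += 1
    return rounds
```

A single tag is always read in one round; larger populations need several rounds as collisions thin out.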
Because freight trains in China transport goods on freight lines throughout the country and do not depart from or return to an engine shed for long periods, the quality of their wheel sets cannot be monitored effectively. This paper presents a system that uses a laser and a high-speed camera and applies non-contact light-section technology to obtain precise wheel set profile parameters. A clamping-track mounting method avoids complex modification of the railway ballast. An improved image-tracking algorithm for extracting the central line from the profile curve is described in detail: to obtain a one-pixel-wide, continuous profile line, local gray-maximum points are used as control points to steer the tracking direction. Results from practical experiments show that the system is suited to a detection environment of high speed and high vibration and can effectively detect the wheel set geometric parameters with high accuracy. The system fills a gap in wheel set detection for main-line freight trains and sheds light on monitoring wheel set quality.
In this paper, the ant colony system algorithm (ACSA) is used to detect the edges of grayscale images. The novelty of the proposed method is that the artificial ants used for detecting edges have a global memory capacity. Moreover, good results have been obtained using only one ant. A large number of experiments were conducted to determine suitable parameters for the ACSA, and their results show the effectiveness of the proposed method.
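A toy version of single-ant edge detection with memory can convey the idea: the ant prefers neighbouring pixels with large local intensity variation, penalizes revisiting pixels (its "global memory"), and deposits pheromone proportional to that variation, so thresholding the pheromone map yields edges. All parameters and the movement rule here are illustrative, not the tuned ACSA values from the experiments.

```python
import numpy as np

def ant_edge_detect(img, steps=2000, seed=0):
    """Toy single-ant edge detector: a random walk biased toward high
    intensity variation, with a visited-pixel penalty and pheromone
    deposit; returns the accumulated pheromone map."""
    img = np.asarray(img, float)
    h, w = img.shape
    gy, gx = np.gradient(img)
    g = np.abs(gy) + np.abs(gx)          # heuristic: local intensity variation
    pher = np.zeros((h, w))
    visited = np.zeros((h, w), bool)
    rng = np.random.default_rng(seed)
    y, x = h // 2, w // 2                # arbitrary start at the image centre
    for _ in range(steps):
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
        weights = np.array([(g[p] + 1e-6) * (0.1 if visited[p] else 1.0)
                            for p in nbrs])
        y, x = nbrs[rng.choice(len(nbrs), p=weights / weights.sum())]
        visited[y, x] = True
        pher[y, x] += g[y, x]            # deposit pheromone on variation
    return pher
```

On a step-edge image the pheromone concentrates on the edge columns, while flat regions (zero variation) receive no deposit at all.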
Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are common techniques for extracting robust features that can be used to match different viewpoints of a scene. Both methods involve three main stages: feature extraction, orientation assignment, and feature descriptor extraction for matching. SURF is more computationally efficient than SIFT because integral images are used for the convolutions, reducing computation time. However, neither method focuses much on the matching technique itself. This paper introduces a method that helps improve rotational matching accuracy by establishing a decision matrix and an approximated rotational angle between two corresponding images. The proposed method generally improves the matching rate by around 10% to 20% in terms of accuracy.
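One way to realize the "approximated rotational angle" idea is to let every tentative match vote with the orientation difference of its two keypoints and keep only the matches consistent with the dominant rotation. This sketch is our reading of the approach under that assumption, not the authors' exact decision matrix:

```python
import numpy as np

def filter_matches_by_rotation(angles_a, angles_b, bin_width=10.0):
    """Each tentative match contributes the orientation difference of its
    keypoints (degrees). The most-voted difference approximates the
    rotation between the two images; matches far from it are rejected.
    Returns (estimated rotation, boolean keep mask)."""
    diffs = (np.asarray(angles_b) - np.asarray(angles_a)) % 360.0
    bins = (diffs // bin_width).astype(int)
    dominant_bin = np.bincount(bins).argmax()
    estimated_rotation = (dominant_bin + 0.5) * bin_width
    keep = bins == dominant_bin
    return estimated_rotation, keep
```

With four matches rotated by 45 degrees and one outlier, the outlier is the only match rejected.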
In this paper, we set up a mathematical model for the airport ground service problem. The model's objective function combines cost and time, both of which we aim to minimize. Based on an analysis of the scheduling characteristics, we use a multi-population co-evolutionary Memetic algorithm (MAMC) with an elitist strategy to solve the model. The results show that our algorithm outperforms the genetic algorithm on this problem and that it converges, so it offers a better optimization of the airport ground service problem.
In this paper, we present an improved approach to recognize human action based on the BOW model and the pLSA
model. We propose an improved feature with optical flow to build our bag of words. This feature is able to reduce the
high dimension of the pure optical flow template and also has abundant motion information. Then, we use the topic
model of pLSA (probabilistic Latent Semantic Analysis) to classify human actions in a special way. We find that the
existing methods lead to some mismatching of words due to the k-means clustering approach. To reduce the probability
of mismatching, we add the spatial information to each word and improve the training and testing approach. Our
recognition approach is tested on two datasets, KTH and WEIZMANN, and the results show good performance.
The rapid development of computer technology has promoted the application of the cloud computing platform, which is essentially a resource service model that, after adjustment in multiple respects, meets users' needs for different resources. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile Internet cloud computing platform in operation.
The popularization of computer technology has driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing distributes computations across a large number of distributed computers and thereby implements a connected service over multiple computers. Digital libraries, as a typical application of cloud computing, can therefore be used to analyze its key technologies.
Privacy-preserving data mining has developed rapidly in just a few years, but it still faces many challenges. First, the level of privacy is defined differently in different fields, so the degree to which privacy-preserving data mining technology protects private information also differs; presenting a unified privacy definition and measure is therefore an urgent issue. Second, most research in privacy-preserving data mining is at present confined to theoretical study.
In this paper, by applying the Taylor expansion, the authors study the asymptotic properties of the kernel density estimator fn(e) of an unknown error density f(e) in a nonparametric regression model. They then study the choice of the smoothing parameter in fn(e). Finally, an approximate confidence interval for f(e) is given.
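For concreteness, a Gaussian kernel density estimator of the residual density, with Silverman's rule of thumb as one common default for the smoothing parameter (the paper studies this choice in more detail, and its bandwidth may differ):

```python
import numpy as np

def kernel_density(residuals, x, bandwidth=None):
    """Gaussian-kernel estimate f_n(e) of an error density, evaluated at
    the points x. If no bandwidth is given, Silverman's rule of thumb
    h = 1.06 * s * n^(-1/5) is used."""
    e = np.asarray(residuals, float)
    n = e.size
    if bandwidth is None:
        bandwidth = 1.06 * e.std(ddof=1) * n ** (-1 / 5)
    u = (np.asarray(x, float)[:, None] - e[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))
```

On standard normal residuals the estimate integrates to about one and peaks near the true density value 0.3989 at zero.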
A hydraulic exciting system consisting of a pipeline and a wave-exciter has been constructed to study the laws of hydraulic pipeline vibration control. By controlling the inverter frequency, the opening and closing of the shock device produce periodic vibration in the hydraulic pipe. An excitation test system was established, and the vibration signals at different points of the pipeline were collected and analyzed to derive the law of pipeline vibration. The results show that, at the same excitation frequency, the pipeline vibration frequency decreases as the system pressure increases. When the frequency and pressure are fixed, the vibration waveforms at different points on the pipe are the same, with almost no phase differences, but their amplitudes differ: pipe vibration close to the hydraulic cylinder is slightly more intense than that near the wave-exciter.
The project aimed to produce an identification model that allows automatic recognition of malting barley varieties, using computer image analysis and artificial neural networks. Based on an analysis of the biological material, the authors selected a set of features describing the physical parameters that allow identification of the varieties. Image analysis of digital photographs of barley samples allowed the extraction of the varieties' characteristics, which were then used as learning data for an artificial neural network. The trained multilayer perceptron network exhibits identification abilities at the level of human abilities.
In this paper, the Shanghai female shot-putter Shou Qianwen was selected as the research object, and biomechanical methods were used to analyze her shot-putting technique. The analysis showed that, at the gliding stage, the maximum swinging velocity of her left leg emerged too early, the transition between the take-off and swinging techniques was not ideal, and the angle between the calf and the ground when her right leg left the ground was too large. At the transition stage, the velocity of her body's center of gravity fluctuated too much and the single-support time was too long. At the final exertion stage, her body posture before release was sufficient, but the exertion movement was too hasty and the hip movements were insufficient; this was reflected in an indistinct exertion point, an insignificant acceleration effect, a release speed that was not very fast, and a release angle that was too low.
By using the classification properties of Kohonen-type networks (Tipping 1996), a neural model was built for the quality-based identification of tomatoes. The resulting empirical data in the form of digital images of tomatoes at various stages
of storage were subsequently used to draw up a topological SOFM (Self-Organizing Feature Map) which features cluster
centers of "comparable" cases (Tadeusiewicz 1997, Boniecki 2008). Radial neurons from the Kohonen topological map
were labeled appropriately to allow for the practical quality-based classification of tomatoes (De Grano 2007).
Developing a security policy (SP) is a sensitive task, because the SP itself can lead to security weaknesses if it does not conform to the required security properties. Hence, appropriate techniques are necessary to overcome such problems, and these techniques must accompany the policy throughout its deployment phases. The main contribution of this paper is the proposition of three such activities: validation, testing and multi-SP conflict management. Our techniques are inspired by the well-established techniques of software engineering, with which we have found some similarities in the security domain.
Image matting plays an important role in image editing and computer vision. Alpha matting is to solve the problem
of softly extracting the foreground object from an image. The problem is inherently ill-posed. Existing matting
approaches usually use a given trimap to estimate the alpha value for each unknown pixel. So the results rely on not only
the different algorithms, but also the given trimap. In this paper, we present an easy interactive matting method based on
the sample-search matting. Users only need to draw a few strokes or points on the foreground and background to extract the foreground object. The method can then easily collect the samples needed to compute the alpha value for every image pixel.
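Given candidate foreground/background samples, the alpha value at each pixel follows from the compositing equation I = αF + (1−α)B by projecting the observed color onto the F–B color line. The sketch below shows only this standard estimation step; the stroke interface and sample search that supply F and B are not reproduced.

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Per-pixel alpha from the matting equation I = alpha*F + (1-alpha)*B,
    given one foreground sample F and one background sample B per pixel
    (arrays of shape (..., 3)); alpha is the projection of I - B onto F - B."""
    I, F, B = (np.asarray(a, float) for a in (I, F, B))
    num = ((I - B) * (F - B)).sum(axis=-1)
    den = ((F - B) ** 2).sum(axis=-1)
    return np.clip(num / np.maximum(den, 1e-12), 0.0, 1.0)
```

A pixel composited with a known alpha is recovered exactly when the true samples are supplied.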
The paper presents the possibilities of neural image analysis of microalgae content in large-scale algae production for use as biomass. With the growing conflict in Europe between crops produced for feed and for energy purposes, algae production seems to be a very efficient way to produce large amounts of biomass outside conventional agronomy. For stable microalgae production, however, the key point of culture management is rapid estimation of the algae population and assessment of its developmental stage. Traditionally, the microalgae content is checked by long microscopic analyses, which cannot be used in large-scale industrial cultivation; moreover, highly specialized personnel are required for algal determination. The main aim of this study is therefore to assess the feasibility of automatic image analysis of microalgae content using an artificial neural network. Preliminary results show that selecting an artificial neural network topology for microalgae identification allowed the selection of teaching variables obtained from image analysis. The selected neural model, working on data from computer image analysis, can carry out algae identification and counting. On the basis of these preliminary tests it is possible to count the algae in the photographs, and additional information on their size and color allows unrestricted categorization.
The purpose of the project was to identify the degree of organic matter decomposition by means of a neural
model based on graphical information derived from image analysis. Empirical data (photographs of compost content
at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009,
Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial
neural network, which demonstrates that the process is non-linear.
In this paper, we study the problem of upsampling noisy images. We compare location-based upsampling with
bilateral filtering and discuss the relationship between bilateral filtering and noisy-image upsampling. To obtain a
better upsampling result for noisy images, we propose an improved method that upsamples a noisy image using
bilateral filtering combined with edge extraction. Experimental results show that the method is effective and stable.
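The upsample-then-smooth idea above can be sketched with a brute-force bilateral filter; the kernel radius and the two sigmas are illustrative assumptions, not the paper's parameters, and the edge-extraction step is omitted.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: each output pixel is a weighted average of
    its neighbors, with weights combining spatial closeness and range
    (intensity) similarity, so edges are preserved while noise is smoothed."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def upsample_noisy(img, factor=2):
    """Nearest-neighbor upsampling followed by bilateral smoothing."""
    up = np.kron(img, np.ones((factor, factor)))   # duplicate each pixel
    return bilateral_filter(up)
```

On a noisy but flat image, the filtered upsampled result has visibly lower variance while a step edge would remain sharp, which is the property the paper exploits.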
Colored-process simulation is often used in wide-band signal processing. Whether the target process is Gaussian or
non-Gaussian, the main method of generating colored processes is to pass a white driving sequence through an
autoregressive (AR) filter so that the output fits an assigned power spectrum. After the general technique of fitting an
assigned power spectrum with an AR model is discussed from quantitative and qualitative aspects, practical
colored-process simulation methods are given. Finally, a numerical example is presented.
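The white-noise-through-AR-filter construction can be sketched as follows; the second-order coefficients are illustrative assumptions (in practice they would be fitted to the assigned power spectrum, e.g. by the Yule-Walker equations).

```python
import numpy as np

def ar_colored_noise(a, n, sigma=1.0, seed=0):
    """Pass white Gaussian driving noise through an all-pole AR filter.

    a: AR coefficients [a1, ..., ap] in x[t] = w[t] - sum_k a_k * x[t-k],
    so the output spectrum is shaped by 1/|A(e^{jw})|^2."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, n)          # white driving sequence
    p = len(a)
    x = np.zeros(n)
    for t in range(n):
        acc = w[t]
        for k in range(1, p + 1):
            if t - k >= 0:
                acc -= a[k - 1] * x[t - k]
        x[t] = acc
    return x

# Example: a stable AR(2) filter (poles inside the unit circle) that
# concentrates power at low frequencies.
x = ar_colored_noise([-1.5, 0.7], 10000)
```

Comparing the periodogram of `x` with that of the white input shows the low-frequency emphasis imposed by this particular filter.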
The Gaussian mixture is one of the most useful non-Gaussian probability distributions in signal processing. Sampling
of processes obeying this distribution, i.e. the numerical simulation of a Gaussian mixture, can be realized with
Bernoulli-trial sampling, acceptance-rejection sampling, or composition sampling. All these approaches are discussed
in detail in this paper. Finally, a numerical example is presented.
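Of the three approaches, composition sampling is the most direct and can be sketched as follows; the two-component weights, means, and standard deviations are assumed values for illustration.

```python
import numpy as np

def sample_gaussian_mixture(weights, means, stds, n, seed=0):
    """Composition sampling: first draw a component index with probability
    equal to its mixture weight, then draw from that Gaussian component."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)  # component per sample
    return rng.normal(np.asarray(means)[comp], np.asarray(stds)[comp])

# Illustrative two-component mixture: 0.3*N(-2, 1) + 0.7*N(3, 0.25).
samples = sample_gaussian_mixture([0.3, 0.7], [-2.0, 3.0], [1.0, 0.5], 100000)
```

The sample mean should approach the weighted mean 0.3*(-2) + 0.7*3 = 1.5 as n grows, which gives a quick sanity check of the sampler.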
As one of the most outstanding embedded real-time systems, the QNX operating system is widely used in many key
areas. The increasingly mature virtual machine technique is brought into QNX graphical interface program
development to establish a "host machine-target machine" mode, so that the development platform for QNX
graphical interface programs can be set up on a single computer. Test results indicate that the platform clearly
increases the efficiency of QNX interface program development, and the approach is also suggested for the
development of other system programs.
This paper presents a hybrid collision detection algorithm. First, the algorithm builds sphere and OBB hierarchical
bounding volumes for every model in the virtual scene, and then uses sphere-sphere intersection tests to quickly
exclude non-intersecting models. For pairs of models that may intersect, OBB intersection tests exclude the
non-intersecting parts, reducing the PSO search space to the nodes where collisions may occur. The algorithm can
exclude non-intersecting models quickly and avoids the slow convergence and premature maturity caused by an
overly large PSO target space. It also reduces the large memory consumption and slow update rate of hierarchical
bounding volume algorithms. Finally, the hybrid collision detection algorithm is verified experimentally and
compared with an OBB-based algorithm and a random collision detection algorithm based on an improved PSO.
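The sphere broad phase described above can be sketched as follows; the model representation (center, radius pairs) is an assumption of this sketch, and the OBB test and PSO narrow phase are omitted.

```python
import numpy as np

def spheres_intersect(c1, r1, c2, r2):
    """Broad-phase test: two bounding spheres intersect iff the distance
    between their centers does not exceed the sum of their radii."""
    d2 = np.sum((np.asarray(c1, float) - np.asarray(c2, float))**2)
    return d2 <= (r1 + r2)**2      # compare squared distances: no sqrt needed

def broad_phase(models):
    """Return index pairs of models whose bounding spheres overlap; only these
    pairs would be passed on to the OBB tests and the PSO search."""
    pairs = []
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            (c1, r1), (c2, r2) = models[i], models[j]
            if spheres_intersect(c1, r1, c2, r2):
                pairs.append((i, j))
    return pairs
```

Because the sphere test is a single squared-distance comparison, it discards most non-intersecting pairs before the far more expensive OBB and PSO stages run.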
The indentation depth of a resistance spot welding joint is closely related to its quality, and a digital image of the
joint surface is used as the information source. An artificial-intelligence algorithm for evaluating the indentation
depth is put forward. First, by analyzing the characteristics of images of the spot welding joint surface, the first ring
area S1, the second ring area S2, the total area S, the area ratio coefficient K1 between the total area and the first ring
area, and the area ratio coefficient K2 between the total area and the second ring area are extracted as evaluation
factors of the indentation depth. Based on correlation analysis between these evaluation factors and the indentation
depth, S2, S, and K1 are selected as its characteristic parameters. Second, a support vector machine (SVM) model for
predicting the indentation depth is established. The model takes S2, S, K1, the welding current I, and the electrode
pressure F as the input vector and the actual indentation depth hT of the welding spot as the target vector. Test results
show a correlation coefficient of 0.9958 between model predictions and actual measured values, so the indentation
depth of the welding spot can be predicted by means of the SVM evaluation algorithm.
This paper proposes a user credit assessment model based on a clustering ensemble, aimed at the problem of users
illegally spreading pirated and pornographic media content within user self-service oriented broadband-network new
media platforms. The idea is to assess new media users' credit by establishing an indicator system based on user
credit behaviors; illegal users can then be found according to the assessment results, curbing the bad video and audio
transmitted on the network.
The proposed clustering ensemble model integrates two advantages: swarm intelligence clustering is well suited to
user credit behavior analysis, and K-means clustering can eliminate the scattered users left in the result of swarm
intelligence clustering, so that all users' credit classification is realized automatically.
Verification experiments are carried out on a standard credit application dataset from the UCI machine learning
repository. The statistical results of a comparative experiment with a single swarm intelligence clustering model
indicate that the clustering ensemble model has a stronger ability to distinguish creditworthiness, especially in
predicting the user clusters with the best and worst credit, which helps operators take incentive or punitive measures
accurately. Moreover, compared with the experimental results of a Logistic-regression-based model under the same
conditions, the clustering ensemble model is more robust and has better prediction accuracy.
Owing to the evolution of Electronic Learning (E-Learning), one can easily obtain desired information on a computer
or mobile system connected to the Internet. Currently, E-Learning materials are easily accessible on desktop
computer systems, but in the future most of this information will also be available on small digital devices such as
mobile phones and PDAs. Most E-Learning materials are paid, and the customer has to pay the entire amount through
a credit/debit card system, so it is very important to study the security of credit/debit card numbers. The present paper
is an attempt in this direction: a security technique is presented to secure the credit/debit card numbers supplied over
the Internet to access E-Learning materials or to make any other kind of purchase through the Internet. A well-known
method, the Data Cube Technique, is used to design the security model of the credit/debit card system. The major
objective of this paper is to design a practical electronic payment protocol offering the safest and most secure mode
of transaction. The technique may reduce fraudulent transactions, which exceed 20% at the global level.
The network communication between the master controller and external equipment is studied in this paper. The
system uses the Client/Server (C/S) mode, which is economical and robust and has the advantage of extensibility:
adding more clients and servers can improve the performance of the system as the workload increases. Network
communication programming between two Windows NT machines is presented, and the working process,
programming steps, technical characteristics, etc. of the C/S mode are described in detail.
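The C/S working process above can be sketched with a minimal TCP echo pair; the echo behavior and port handling are illustrative assumptions standing in for the paper's controller/equipment protocol.

```python
import socket
import threading

def run_echo_server(host="127.0.0.1", port=0):
    """Minimal TCP server: accept one client, echo its request back, close.
    Binding to port 0 lets the OS pick a free ephemeral port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # echo the request back
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]             # actual port chosen by the OS

def client_request(port, message):
    """Client side: connect to the server, send a request, return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message)
        return c.recv(1024)
```

Adding capacity in this mode means starting more such server instances and pointing additional clients at them, which is the extensibility property the abstract refers to.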
Because the immune clonal selection algorithm with a dynamic threshold strategy has advantages in optimizing
multiple parameters, a novel approach that uses this algorithm to optimize a dynamic recursive Elman neural network
is proposed in the paper. The concrete structure of the recursive network, the connection weights, the initial values of
the context units, etc. are determined automatically by evolutionary training and learning. The construction and
design of dynamic recursive Elman neural networks can thus be realized, providing a new and effective approach for
optimizing dynamic recursive neural networks with the immune clonal selection algorithm.
To suppress speckle noise and preserve edge information in ultrasound images, the nonsubsampled contourlet
transform (NSCT) is applied to decompose the ultrasound image into NSCT subbands. The multiplicative speckle
noise in the NSCT high-frequency subbands can be expressed in additive form. A thresholding method is applied to
extract and preserve strong edge coefficients in each NSCT subband, and then an equation based on the Bayesian
minimum mean square error (MMSE) criterion is derived to despeckle the other NSCT coefficients. Finally, the
despeckled image is reconstructed by the inverse NSCT. Experimental results on synthetic speckle and clinical
ultrasound images show that the proposed method outperforms several ultrasound despeckling methods in terms of
speckle-reduction and edge-preservation indices.
This study was undertaken to develop machine vision-based raisin detection technology. Supervised color image
segmentation using a permutation-coded Genetic Algorithm (GA) identifying regions in Hue-Saturation-Intensity
(HSI) color space (GAHSI) for desired and undesired raisin detection was successfully implemented. Images were
captured to explore the possibility of using GAHSI to locate desired-raisin and undesired-raisin regions in color space
simultaneously. In this research, images were processed separately using three segmentation methods: K-means
clustering in L*a*b* color space, GAHSI for a single image, and a GA for a single image in Red-Green-Blue (RGB)
color space (GARGB). The GAHSI results provided evidence for the existence and separability of such regions.
When compared with cluster analysis-based segmentation results, the GAHSI method showed no significant
difference.
Using Geographic Information System (GIS) and spatial analysis technology, the spatial variation and distribution of
soil organic carbon (SOC) under different vegetation were studied in the west, middle and east sections of the Qilian
Mountains, a cold and humid region of northwest China in Gansu province. In August 2004, 44 surface soil samples
(0-20 cm) were collected under different vegetation in the upper reaches of the Shiyang, Heihe and Shule Rivers, the
three inland river basins of the Hexi corridor. The results are as follows. (a) Mean SOC follows the trend middle >
east > west in the Qilian Mountains. Influenced by SOC, soil properties including soil texture, TN, TP, TK and CEC
follow the same trend, while soil pH and CaCO3 show the reverse, and there is a close relationship between SOC and
the particle-size fraction below 50 μm. (b) SOC content under different vegetation is higher in the middle and east
sections than in the west, and under the joint effects of vegetation and precipitation the spatial distribution of SOC
varies markedly with longitude, latitude and altitude.
With the development of network information technology, education faces increasingly serious challenges. Computer
multimedia applications break with traditional foreign language teaching and bring new challenges and opportunities
for education. Through multimedia, the teaching process is enriched with animation, images, voice and text, which
can improve learners' initiative and greatly raise learning efficiency. Traditional foreign language teaching relies on
learning from text alone; with this method, theoretical performance is good but practical application is poor. Even
after long experience with computer multimedia in foreign language teaching, many teachers still hold prejudices
against it, so the method has not achieved its full effect. For all these reasons, this research is significant for
improving the quality of foreign language teaching.
With the rapid development of network technology, online transactions have become more and more common. In
this paper, we first introduce the principles and the technical foundation of SET, and then analyze the process of
designing a system based on the SET electronic business procedure. On this basis, we design a payment system for
electronic business. It not only has practical significance for large, medium-sized and small corporations, but also
provides guidance for programmers and designers implementing Electronic Commerce (EC).
This paper proposes an efficient and robust technique for face recognition. The proposed technique combines the
Daubechies wavelet transform D10, Principal Component Analysis (PCA) and multiscale fusion. Features are
extracted using PCA on the original and multiscale images, and multiscale fusion is used to combine the results of
PCA and wavelet-transformed PCA to achieve better performance. The main idea is to utilize the discriminant
information of various subbands rather than relying on a single scale. The multiscale experts are finally fused using
the sum rule. Extensive experimental results on the AT&T database show that recognition performance is improved
by the proposed method.
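The PCA projection and sum-rule fusion steps can be sketched as follows; the wavelet subband generation is omitted, and treating each expert's output as a directly comparable score vector is an assumption of this sketch.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector samples X (n_samples x dim); return the mean and
    the top-k principal axes (rows of Vt from the SVD of centered data)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, W):
    """Project samples onto the k principal axes."""
    return (X - mu) @ W.T

def sum_rule_fuse(score_lists):
    """Sum-rule fusion: add the matching scores produced by each multiscale
    expert and decide on the fused totals."""
    return np.sum(score_lists, axis=0)
```

In the paper's setting, one expert would score gallery identities from PCA on the original image and another from PCA on each wavelet-transformed scale, with the summed scores giving the final decision.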
Information security knowledge is more and more important to students in universities of finance and economics;
however, mastering information security skills is not easy for them. Schema theory is applied to information security
teaching to help students improve their skills. The teaching results show a significant difference in final-exam and
practice-exam scores between the proposed model and a regular teaching model.
The rapid development of intelligent assistive technology to replace a human caregiver in assisting people with
dementia with activities of daily living (ADLs) promises to reduce the cost of care, especially the cost of training and
hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which
depends on the intent (the type of ADL) and the environment where the activity is performed. This paper gives an
overview of the potential of computer-vision-based sensing agents in assistive systems and of how they can be
generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between
existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and
physical impairments of people with dementia.
A buyer-seller watermarking protocol combines watermarking with cryptography to provide copyright and copy
protection for the seller while preserving the buyer's right to privacy. It enables a seller to identify a malicious buyer
from a pirated copy, while preventing the seller from framing an innocent buyer and providing anonymity to the
buyer. Many buyer-seller watermarking protocols have been proposed that apply ever more cryptographic schemes to
solve common problems such as the customer's rights problem, the unbinding problem, buyer anonymity, and the
buyer's participation in dispute resolution. Most of them are impractical, however, because the buyer may have no
knowledge of cryptography. Another issue is the large number of steps needed to complete the protocols: a buyer has
to interact with different parties many times, which is very inconvenient. To overcome these drawbacks, this paper
proposes a dual watermarking scheme in the encrypted domain. Since neither watermark is generated by the buyer, a
lay buyer can use the protocol.
In this paper, we propose a new hybrid approach for image segmentation. The proposed approach exploits spatial
fuzzy c-means to cluster image pixels into homogeneous regions. To improve the performance of fuzzy c-means on
segmentation problems, we employ the gravitational search algorithm, which is inspired by Newton's law of gravity.
The gravitational search algorithm is incorporated into fuzzy c-means to take advantage of its ability to find the
optimum cluster centers that minimize the fuzzy c-means fitness function. Experimental results show the
effectiveness of the proposed method in segmenting different types of images as compared to classical fuzzy c-means.
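The fuzzy c-means objective that the gravitational search algorithm would minimize can be sketched with the classical alternating updates; the fuzzifier m = 2 and the alternating-update center search (standing in for the paper's GSA search) are assumptions of this sketch.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Classical fuzzy c-means on row-vector samples X (n x d).
    Alternates membership updates and weighted center updates; the paper's
    variant would instead search the centers with a gravitational search
    algorithm so as to minimize this same objective."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]   # init from data points
    for _ in range(iters):
        # Distances from every sample to every center (n x c), kept positive.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :])**(2 / (m - 1)), axis=2)
        # Center update: weighted mean with weights u^m.
        centers = (u.T**m @ X) / np.sum(u.T**m, axis=1, keepdims=True)
    return u, centers
```

For pixel clustering, X would hold per-pixel feature vectors (e.g. intensity or color), and each pixel is assigned to the cluster with its highest membership.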
The algorithm for single-image super-resolution combines a learning-based method with the sparse representation of
signals. In the training phase, the correlation between the sparse representations of high-resolution and low-resolution
patches of the same image with respect to their dictionaries is used to jointly train two dictionaries, one for
high-resolution and one for low-resolution patches. In the super-resolution phase, the sparse representation of each
patch of the low-resolution image is found, and the high-resolution image is produced by applying the corresponding
coefficients to the high-resolution dictionary obtained above. Because the learned dictionary is a compact
representation of the patches, the method demands less computational cost. Three experiments validate the algorithm.
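The per-patch reconstruction step can be sketched with orthogonal matching pursuit as the sparse coder; OMP, the sparsity level k, and the coupled-dictionary coefficient reuse are stated assumptions of this sketch, not necessarily the paper's exact solver.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit all selected coefficients by
    least squares before updating the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def super_resolve_patch(Dl, Dh, y_low, k=3):
    """Sparse-code the low-resolution patch over Dl, then reuse the same
    coefficients with the paired high-resolution dictionary Dh (the
    coupled-dictionary assumption of the training phase)."""
    return Dh @ omp(Dl, y_low, k)
```

Running this over all low-resolution patches and averaging the overlapping high-resolution outputs yields the super-resolved image.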
In this paper, a novel player detection method based on the One-Class SVM (OCSVM) is proposed, inspired by both
the player detection problem and the properties of the OCSVM. In this method, candidate regions are first obtained
by local-entropy and local-range analysis. A set of training samples is then obtained by several predefined rules on
shape and area. These samples are used to train two OCSVM models, one using a color feature and the other a
gradient feature. Finally, we locate the player regions by fusing the detection results of the two models. Extensive
experiments demonstrate the effectiveness and efficiency of the proposed method.
Secure multi-party computation (SMC) protocols have been proposed for entities (organizations or individuals) that
do not fully trust each other but need to share sensitive information. Many types of entities need to collect, analyze,
and disseminate data rapidly and accurately without exposing sensitive information to unauthorized or untrusted
parties. Solutions based on secure multi-party computation guarantee privacy and correctness at extra communication
and computation cost, and the communication overhead is often too high to be practical. This high overhead
motivates us to extend SMC to the cloud environment, whose large computation and communication capacity allows
SMC to be run between multiple clouds (private, public or hybrid). A cloud may encompass many high-capacity
servers that act as hosts participating in the computation (IaaS and PaaS) of the final result, controlled by a Cloud
Trusted Authority (CTA) for secret sharing within the cloud. Communication between two clouds is controlled by a
High Level Trusted Authority (HLTA), one of the hosts in a cloud that provides MgaaS (Management as a Service).
Because of the high security risk in clouds, the HLTA generates and distributes public and private keys, using a
Carmichael-R-Prime-RSA algorithm, for the exchange of private data in SMC between itself and the clouds. Within a
cloud, the CTA creates a group key, based on keys sent by the HLTA, for secure communication between the hosts
when exchanging intermediate values and shares for the computation of the final result. Since the scheme exploits the
high availability and scalability of clouds to increase computation power, it makes SMC for privacy-preserving data
mining practical at low cost for the clients.
Zhang and Wang in 2007 proposed the exploiting modification direction (EMD) method, in which n pixels are used
as an embedding unit and a digit in base (2n+1) can be concealed by modifying at most one pixel value by one
grayscale level. When an overflow or underflow problem occurs, EMD adds one to or subtracts one from one of the
saturated pixels in the same pixel group and re-embeds the digit. However, if the number of saturated pixels is
considerable, modifying them may create a critical vulnerability to LSB-based steganalyzers. This paper proposes a
method that leaves the saturated pixels untouched and embeds data only in the non-saturated ones. Because no
saturated pixels are modified, the stego image is insensitive to detection by LSB-based steganalyzers. The
experimental results reveal that the proposed method is more robust to LSB-based steganalyzers than the original
EMD method when the number of saturated pixels is considerable.
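The base-(2n+1) embedding that both methods build on can be sketched as follows, using the standard EMD extraction function f(p) = (sum of i*p_i) mod (2n+1); the saturated-pixel handling that distinguishes the two methods is deliberately omitted.

```python
def emd_extract(pixels):
    """EMD extraction function: f = (sum_i i * p_i) mod (2n+1), with 1-indexed
    weights over the n pixels of the group."""
    n = len(pixels)
    return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

def emd_embed(pixels, digit):
    """Embed one base-(2n+1) digit into a group of n pixels by changing at most
    one pixel value by +/-1 (overflow/underflow handling omitted here)."""
    n = len(pixels)
    base = 2 * n + 1
    d = (digit - emd_extract(pixels)) % base   # required change of f, mod base
    out = list(pixels)
    if d == 0:
        return out                   # digit already embedded, nothing to do
    if d <= n:
        out[d - 1] += 1              # +1 on pixel d shifts f by +d
    else:
        out[base - d - 1] -= 1       # -1 on pixel (base-d) shifts f by +d mod base
    return out
```

For n = 2 pixels this hides one base-5 digit per pair, and extraction always recovers the embedded digit exactly.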
This paper presents a new image segmentation method that fuses Normalized Cut (NCut) eigenvector maps. In this
method, we fuse the eigenvector maps by maximizing the salient contour signals and suppressing the non-maximum
ones. We then use the OWT-UCM method to produce the segmentation from the soft contour map generated from the
fused eigenvector maps and local contour cues. We evaluate the method on the BSDS500 database; experimental
results show that the proposed method is more accurate and preserves large meaningful regions.
In this paper, we used an efficient dye laser with a 0.03 cm-1 bandwidth pumped by the second harmonic of an Nd:YAG laser. Saturation parameters obtained from the Nodvik equation are used to determine the output power of the amplifier. To investigate the effect of flow rate on laser performance, the jet stream was varied. The maximum stored power density at different flow rates was compared, and at every stage the laser efficiency was calculated for a constant pump power. An average efficiency of 28.32% was obtained across all experiments.
Visual tracking is still an open problem because one needs to discriminate between the target object and the background over long durations. A major problem with conventional adaptive tracking is that the target object is incorrectly learnt (adapted) during runtime, resulting in poor tracker performance. In this paper, we address this problem by proposing a validation-update strategy that minimizes the error of updating with false patches. The classifier we use is based on a boosted ensemble of Local Dominant Orientation (LDO) features. However, since LDO features contain binary values that are unsuitable for classification, we have added a process to the online boosting learning algorithm that accommodates the two binary values "0" and "1". We elevate tracker performance by pairing the classifier with normalized cross-correlation of patches tracked by a Lucas-Kanade tracker. In the experiments conducted, we compare our method with two other state-of-the-art adaptive trackers on the BoBot dataset. Our method yields good tracking performance under the variety of scenarios posed by the BoBot dataset.
With the progress of digital technology, new media have emerged that transmit content over networks, mobile phones, and digital television; among these, digital TV holds advantages over the other media. The emergence and development of digital TV will induce a profound change in the broadcasting and television industry chain. This paper first discusses the transformation of digital television in its profit model, mode of operation, and mode of transmission in order to construct a precision-guided communication theory; it then analyzes the properties and marketing nature of precision-guided communication to construct a precision-guided communication marketing mode; it further puts forward concrete steps and strategies for implementing precision-guided communication marketing; finally, the author summarizes four conclusions.
Recently, Zhang proposed a reversible data hiding scheme for encrypted images with low computational complexity, made up of image encryption, data embedding, and data-extraction/image-recovery phases. During the last phase, the embedded data are extracted according to a smoothness measuring function evaluated on each non-overlapping block. However, not all pixels in a block are considered in his approach, which may cause a higher error rate when extracting the embedded data. In this paper, we propose a novel smoothness evaluating scheme to overcome this problem. Based on Zhang's approach, we divide the pixels in each block into three portions: the four corners, the four edges, and the remaining pixels. The smoothness of a whole block is determined by summing the smoothness of the three portions and is used to extract the embedded data and recover the image. Experimental results show that the proposed scheme effectively reduces the error rate of data-extraction/image-recovery. For a typical test image such as Lena, with a block size of 8 by 8, the error rate of our approach is less than 0.6% while that of Zhang's method exceeds 12%. Moreover, the error rate drops to zero when the block size is 12 by 12.
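The abstract does not give the smoothness function itself; one hypothetical reading of the three-portion idea, scoring corners, edges, and interior pixels alike against their in-block neighbours, can be sketched as follows (the paper's actual formula may differ):

```python
def block_smoothness(block):
    """Sum, over every pixel (corners, edges, interior alike), of the
    absolute difference from the mean of its in-bounds 4-neighbours.
    Hypothetical variant; the paper's exact formula may differ."""
    h, w = len(block), len(block[0])
    total = 0.0
    for i in range(h):
        for j in range(w):
            # average over the 4-neighbours that fall inside the block
            nbrs = [block[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            total += abs(block[i][j] - sum(nbrs) / len(nbrs))
    return total
```

A correctly decrypted (natural) block scores low; a wrongly decrypted (noisy) block scores high, which is what drives the extraction decision.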
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences among commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, the SCface database, this approach is validated and compared to the results of a holistic approach on grayscale images. The results show that the technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
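A simple global-statistics variant of such remapping, matching an image's mean and standard deviation to reference statistics learned from training data, can be sketched as follows (the paper's learned mapping is likely richer than this two-moment version):

```python
def match_statistics(pixels, ref_mean, ref_std):
    """Remap intensities so their mean/std match reference statistics
    (a simple global two-moment stand-in for the learned tone mapping)."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0
    # shift/scale each pixel, then clamp to the valid gray-level range
    return [min(255.0, max(0.0, ref_mean + (p - mean) * ref_std / std))
            for p in pixels]
```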
Since two-dimensional principal component analysis (2DPCA) was introduced to face recognition, many 2D-based approaches have been developed. However, less attention has been paid to classification methods based on the 2D image matrix. Considering that the feature extracted by 2DPCA-based methods is a matrix rather than a single vector as in PCA-based methods, a new distance measure is proposed that considers the rows of the feature matrix. Unlike previous methods, which depend on the columns or on the whole feature matrix, the proposed measure is combined with the k-nearest neighbour rule instead of the 1-nearest neighbour rule. Moreover, the proposed method alleviates the drawback of 2DPCA-based algorithms compared to PCA-based algorithms, namely the increase in the number of coefficients. Experimental results on a well-known face database show that as the number of training images per class increases, the proposed method's accuracy also increases until it surpasses all compared methods in terms of accuracy and storage capacity.
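A minimal sketch of a row-wise matrix distance combined with k-NN voting follows; the unweighted row sum and the toy gallery are assumptions, and the paper's measure may weight rows differently:

```python
def row_distance(A, B):
    """Distance between two 2DPCA feature matrices, accumulated row by
    row as a sum of per-row Euclidean norms (illustrative definition)."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(row_a, row_b)) ** 0.5
        for row_a, row_b in zip(A, B)
    )

def knn_classify(query, gallery, labels, k=3):
    """k-nearest-neighbour majority vote using the row-wise distance."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: row_distance(query, gallery[i]))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)
```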
Diabetes is a chronic illness that requires continuous medical care and patient self-management education to prevent acute complications and to reduce the risk of long-term complications. This paper presents the study and development of an algorithm for an initial-stage expert system that provides diagnoses to pregnant women suffering from Gestational Diabetes Mellitus (GDM) by means of the Oral Glucose Tolerance Test (OGTT).
Wavelets are widely used in the signal processing field. As the simplest wavelet framework, the Haar wavelet is very popular because it is memory-efficient, fast, and easy to implement. The Haar wavelet can be employed to process images for purposes such as image compression. In this paper, the effectiveness of Haar wavelet and grayscale features is evaluated using template matching as the principal technique. Results show that the Haar wavelet feature is more relevant for the facial feature detection task than the grayscale feature.
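A single level of the Haar decomposition, from which such features are derived, can be sketched as follows (the unnormalised averaging/differencing variant; normalisation conventions vary):

```python
def haar_1d(signal):
    """One Haar level: pairwise averages (approximation) followed by
    pairwise differences (detail); input length must be even."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs + diffs

def haar_2d(image):
    """One decomposition level of a 2-D image: transform rows, then columns."""
    rows = [haar_1d(list(r)) for r in image]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

After one level, the top-left quadrant holds a half-resolution approximation of the image and the other quadrants hold edge-like detail coefficients.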
Kernel Entropy Component Analysis (KECA) is a newer method than Kernel Principal Component Analysis (KPCA) for data transformation and dimensionality reduction in face recognition. Although almost all previous research has shown KECA to be the superior and more appropriate method compared to KPCA, this paper compares the significance of KPCA in handling face pose in surveillance images against KECA. A comparative analysis is made to signify the importance of Kernel Principal Component Analysis for pose-invariant face recognition in surveillance.
In H.264/AVC, the rate-distortion (R-D) model plays an important role in rate control and mode decision for efficient video compression. In general, the R-D model comprises a rate-quantization (R-Q) model and a distortion-quantization (D-Q) model. We previously studied the frame-level D-Q model, which is meaningful for frame-level rate control optimization. However, a basic-unit-level R-D model is crucial for precise rate control and efficient mode decision, so an in-depth analysis of the D-Q model at the macroblock level is necessary. In this paper, we test several existing D-Q models, give a fair comparison among them, and study D-Q modeling in depth in terms of accuracy, complexity, and applications. Finally, we show the advantages and disadvantages of these models. This work is meaningful for future optimization of efficient video coding algorithms.
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm that binds a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. To tolerate the intrinsic fuzziness of varying fingerprints, vector quantization and error-correction techniques are introduced to transform the fingerprint template before binding it with the key, after fingerprint registration and extraction of the fingerprint's global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
With the development of computer network technology, the Internet exhibits a high bandwidth-delay-product characteristic. Traditional congestion control algorithms suffer from transfer inefficiency over high-delay networks and are not suitable for today's networks. This paper proposes a congestion control algorithm that combines loss and delay signals to adjust the congestion window. The algorithm is based on the BIC algorithm and uses the packet loss rate and the delay time to adjust the congestion window, achieving high throughput in high-latency environments.
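As an illustration of the loss-plus-delay idea (not the paper's actual algorithm), a BIC-style binary-increase step damped by measured queueing delay might look like this; all constants and the damping form are assumptions:

```python
def adjust_cwnd(cwnd, w_max, rtt, base_rtt, loss):
    """Hypothetical loss+delay hybrid in the spirit of BIC: binary-search
    growth toward w_max, damped as queueing delay (rtt - base_rtt) rises.
    Returns (new_cwnd, new_w_max); all constants are illustrative."""
    if loss:
        w_max = cwnd                        # remember the window at loss
        return max(cwnd * 0.8, 2), w_max    # multiplicative decrease
    step = max((w_max - cwnd) / 2, 1)       # BIC-style binary increase
    queue_delay = max(rtt - base_rtt, 0)
    damping = base_rtt / (base_rtt + queue_delay)  # shrink step as delay grows
    return cwnd + step * damping, w_max
```

With no queueing delay the window jumps halfway to the previous loss point; as delay builds, growth slows before a loss is forced.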
In feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore combining global and local features. As a result, they perform less effectively on models with similar global shapes but different local shapes. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structural feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system realizing this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm effectively improves precision and recall.
This paper presents an efficient image encryption scheme based on chaotic systems and singular value decomposition. In this scheme, the image pixel positions are scrambled using chaotic systems with variable control parameters. To further strengthen the security, the pixel gray values are modified using a combination of singular value decomposition (SVD) and a chaotic polynomial map. Simulation results justify the feasibility of the proposed scheme for image encryption.
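The position-scrambling stage can be illustrated with a logistic-map permutation; the particular map, the key values, and the sort-based permutation below are illustrative choices, not the paper's exact construction:

```python
def logistic_sequence(x0, r, length):
    """Iterate the logistic map x -> r * x * (1 - x) to get a chaotic
    sequence; (x0, r) act as the key."""
    seq, x = [], x0
    for _ in range(length):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def scramble(pixels, x0=0.3456, r=3.99):
    """Permute pixel positions by the sort order of the chaotic sequence."""
    seq = logistic_sequence(x0, r, len(pixels))
    order = sorted(range(len(pixels)), key=seq.__getitem__)
    return [pixels[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation to recover the original pixel order."""
    out = [0] * len(scrambled)
    for pos, i in enumerate(order):
        out[i] = scrambled[pos]
    return out
```

Decryption with the same key regenerates the same permutation, so the scrambling is exactly reversible.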
This paper introduces the structure of the CDIO model. Analysis of existing problems in current computer-science teaching shows that reform of major courses is necessary, so this paper applies the model to the reform of major computer-science courses, taking the operating systems course as an example. It integrates the teaching content of operating systems, uses case-based, task-driven, and collaborative teaching methods, and improves the assessment system. Questionnaires from the teaching practice show that the mode improves students' understanding of theory and, through experimental teaching, exercises their practical and innovative abilities. The improvement is evident.
This paper proposes a novel method based on multi-beam interference suppression in satellite navigation systems. The method is able to suppress interference without prior information about the interference's location, and to resolve the conflict between the depth of the zero-trappers (nulls) and the gain of the satellite signals. Extensive simulation experiments verify the performance of the multi-beam interference suppression method, and the results show the superiority of the proposed method.
We propose in this paper an image noise severity measurement method that correlates well with human quality perception of noise in images. In our approach, a 32x32-pixel mask is used to compute the differences between the original and noise-degraded images in terms of statistical means and outlier values. These differences are formulated and then compared to the quality scores from subjective evaluations. The degraded images were distorted by two common types of random image noise: Gaussian white noise and impulse noise. Experimental results showed that this approach obtained higher correlation than the classical Peak Signal-to-Noise Ratio (PSNR) method.
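The PSNR baseline the authors compare against is standard and can be computed as:

```python
import math

def psnr(original, degraded, peak=255.0):
    """Classical Peak Signal-to-Noise Ratio between two equal-sized
    images (given as flattened pixel lists), in decibels."""
    mse = sum((o - d) ** 2 for o, d in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)
```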
First, this paper establishes the air cushion vehicle's bottom waveform, obtained by superimposing the wave-making deformation of the calm-water surface on the wave forms of the external environment. Using the continuity equation of flow to establish the air flow system, the paper then builds a more comprehensive mathematical model of 6-DOF motion control for air cushion vehicles. On this basis, we also build a beach model sloping from water to land, as well as models of ditches and obstacles. We have further predicted the craft's dynamic trim as the air cushion vehicle climbs the beach. The forecast results reflect the basic motion characteristics of the air cushion vehicle, and the simulation results can be used to further study its manoeuvrability and to debug and optimize the manoeuvrability-control system.
This paper proposes a new descriptor to identify the petal shape of a blooming flower from digital images captured in natural scenes. The proposed descriptor can be used as one of the features in a computer-aided flower recognition system, besides commonly used features such as the number of petals and color. Experiments were conducted on Malaysian flowers with the same number of petals and similar colors across different species. 35 images from 7 species were used as the training set to establish reference values of the petal-shape descriptor, and 7 new images were used as the testing set. The descriptor calculated from the testing set is then compared to the reference values from the training set to recognize the flowers. With the given data set, a full identification rate was obtained.
Most multi-object detection and tracking techniques suffer from the well-known "multi-object occlusion" problem. The abundant nodes of wireless video sensor networks (WVSNs) can be utilized to solve the problem, although the video nodes in a WVSN have limited computational capability and energy. To achieve effective multi-object tracking using a WVSN, the main contributions of our proposed method are: (1) the limits of the field of view (FOV) of each video node are utilized to establish consistent labeling of the objects across different views; (2) mobile agents are employed to communicate among network nodes, so the objects are assigned correct labels after multi-object occlusion. The performance of the approach has been demonstrated on real-world data, and the experimental results show that the proposed method effectively resolves multi-object occlusions and meets the requirements of WVSNs.
This paper provides a novel analysis of existing interpolation techniques and suggests improvements for more accurate orthorectification of satellite imagery. Traditional methods for measuring geo-location use Ground Control Points (GCPs), and their accuracy depends on the accuracy of the GCPs. The accuracy of geo-locations can also be improved by using a Digital Elevation Model (DEM), which incorporates topographic relief displacement when measuring geographic locations. Since this accuracy depends on the resolution of the DEM, in our study the accuracy of geo-locations was assessed using interpolated DEMs of multiple resolutions. The comparative analysis showed that the accuracy of geo-locations can be improved by increasing the resolution of the DEM through interpolation.
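A common way to raise DEM resolution by interpolation is bilinear resampling, sketched below; the paper evaluates several interpolation techniques, of which this is only one:

```python
def bilinear(dem, x, y):
    """Bilinear interpolation of a DEM (row-major grid of elevations)
    at fractional grid coordinates (x = column, y = row)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, len(dem[0]) - 1)
    y1 = min(y0 + 1, len(dem) - 1)
    top = dem[y0][x0] * (1 - dx) + dem[y0][x1] * dx
    bot = dem[y1][x0] * (1 - dx) + dem[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def upsample(dem, factor):
    """Resample the DEM onto a grid `factor` times finer in each axis."""
    h = (len(dem) - 1) * factor + 1
    w = (len(dem[0]) - 1) * factor + 1
    return [[bilinear(dem, j / factor, i / factor) for j in range(w)]
            for i in range(h)]
```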
Stone relics are a precious historical and cultural heritage. However, they face long-term weathering due to the particular environments in which they exist, which makes their protection difficult. To protect these cultural relics, appropriate measures should be formulated according to their existing condition, with targeted analysis of their weathering. In this paper, the major factors influencing stone relic weathering are analyzed based on a decision tree model.
With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they occupy more storage space and consume more bandwidth when transferred over the Internet, so it is necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation analyzes DCT-based compression. First, the principle of the DCT is presented; it is central to image compression because the technique is so widely used. Second, we deepen the understanding of the DCT using Matlab, covering the DCT-based image compression process and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and further algorithms will yield compressed images of high quality; image compression technology will be widely used in networks and communications in the future.
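The core transform can be sketched outside Matlab as well; this is the textbook O(N^4) 2-D DCT-II, adequate for illustrating the principle even though real codecs use fast factorizations:

```python
import math

def dct_2d(block):
    """Direct 2-D DCT-II of an N x N block, as used in DCT-based image
    compression (written for clarity, not speed)."""
    n = len(block)

    def c(k):  # orthonormal scaling factors
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                    for i in range(n) for j in range(n))
            out[u][v] = c(u) * c(v) * s
    return out
```

For a flat block all the energy lands in the DC coefficient `out[0][0]`, which is why quantizing away the small AC coefficients compresses smooth regions so well.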
Traditionally, college students have used flash cards as a tool to remember large bodies of knowledge, such as nomenclature, structures, and reactions in chemistry. Educational and information technology have enabled flash cards to be viewed on computers, e.g. as slides in PowerPoint, serving as channels for drill and feedback for learners. The current generation of students is more adept with information technology and mobile computing devices; for example, they use their mobile phones intensively every day. The trend of using the mobile phone as an educational tool is analyzed, and an educational technology initiative is proposed that uses mobile-phone flash card applications to help students learn biology and chemistry. Experiments show that users responded positively to these mobile flash cards.
Embedded wireless WiFi technology is one of the current hot spots in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Given WiFi's advantages of requiring no wiring, simple operation, and stable transmission, the paper then gives a system design for applying embedded wireless WiFi technology in a motion capture system. It also verifies the effectiveness of the design through the WiFi-based wireless sensor hardware and software.
This article uses hybrids of a recursive method and the Monte Carlo method to solve differential equations, in this article the Schrödinger equation for an atom, by two new methods: one uses the gross energy as the fitness function, while the other applies a special substitution together with a delta function chosen randomly from the domain. For the examples implemented in our program, the results produced on the computer agree with the algebraic solutions, which shows that the new methods are effective for solving differential equations.
In recent years, computer-based testing has become an effective method to evaluate students' overall learning
progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test
assembling systems that can automatically generate test sheets based on given parameters of test items. A good multi-subject test sheet depends not only on the quality of the test items but also on the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a
multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent
genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes
hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered
criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are
conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets
that meet specified requirements and achieve good performance.
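The item-selection core of such an approach can be illustrated with a toy genetic algorithm over a single criterion (mean item difficulty); the population size, mutation rate, and single-criterion fitness are all illustrative stand-ins for the paper's layered, multi-criteria GAHP setup:

```python
import random

def fitness(sheet, items, target):
    """Negative distance between the sheet's mean item difficulty and the
    target (a toy single-criterion stand-in for layered criteria)."""
    mean = sum(items[i] for i in sheet) / len(sheet)
    return -abs(mean - target)

def generate_sheet(items, sheet_size, target, pop=30, gens=60, seed=1):
    """Tiny genetic algorithm: two-way tournament selection plus point
    mutation over item indices (duplicates allowed in this toy version)."""
    rng = random.Random(seed)
    popn = [rng.sample(range(len(items)), sheet_size) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(popn, 2)          # tournament of two parents
            child = list(max(a, b, key=lambda s: fitness(s, items, target)))
            if rng.random() < 0.3:              # mutate one gene
                child[rng.randrange(sheet_size)] = rng.randrange(len(items))
            nxt.append(child)
        popn = nxt
    return max(popn, key=lambda s: fitness(s, items, target))
```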
Path and server diversities have been used to guarantee reliable video streaming communication over wireless
networks. In this paper, server diversity over mobile wireless ad hoc networks (MANETs) is implemented. Particularly,
multipoint-to-point transmission together with multiple description coding (MDC) and forward error correction (FEC)
technique is used to enhance the quality of service of video streaming over the wireless lossy networks. Additionally, the
dynamic source routing (DSR) protocol is used to discover maximally disjoint routes for each sender and to distribute the
workload evenly within the MANETs for video streaming applications. An NS-2 simulation study demonstrates the
feasibility of the proposed mechanism and shows that the approach achieves better video streaming quality in terms
of playable frame rate, reliability, and real-time performance on the receiving side.
To address quality control problems in coal transport, RFID technology is proposed for the coal transportation
process. A whole-process RFID traceability system from coal production to consumption is designed, and an
integration platform for coal supply chain logistics tracking is built, forming a coal supply chain traceability and
transport tracking system that provides consumers with increasingly transparent tracking and monitoring of coal
quality information.
Currently, direct transport and combined transport are the main forms of coal transportation in China, with cars,
trains, and ships as the means of transport. In the booming networked environment, RFID technology will be applied
to coal logistics, providing an opportunity for tracking coal throughout the transportation process.
In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe
extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper a unified
framework is proposed to detect shot boundaries and extract the keyframe of each shot. A music video is first
segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC)
analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the
keyframe within a shot. Experimental results show the framework is effective and performs well.
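To make the pipeline concrete, here is a minimal, hypothetical histogram-difference shot detector. The paper's actual features are illumination-invariant chromaticity histograms in an ICA feature space; this sketch substitutes plain intensity histograms and a fixed distance threshold:

```python
# Frames are flat lists of intensity values in [0, 256); a shot boundary is
# declared where consecutive frame histograms differ by more than a threshold.

def histogram(frame, bins=4, max_val=256):
    h = [0] * bins
    for px in frame:
        h[px * bins // max_val] += 1
    n = len(frame)
    return [c / n for c in h]          # normalize so frames of any size compare

def hist_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def shot_boundaries(frames, threshold=0.5):
    """Indices i such that frame i starts a new shot."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if hist_distance(hists[i - 1], hists[i]) > threshold]
```

For example, three dark frames followed by three bright frames yield a single boundary at index 3.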
Underwater wireless sensor networks (UWSNs) are a subclass of wireless sensor networks. Underwater sensor
deployment is a significant challenge due to the characteristics of UWSNs and underwater environment. Recent
researches for UWSNs deployment mostly focus on the maintenance of network connectivity and maximum
communication coverage. However, the broadcast nature of the transmission medium incurs various types of security
attacks. This paper studies the security issues and threats of UWSN topology. Based on a cluster-based topology, an
underwater cluster-based security scheme (U-CBSS) is presented to defend against these attacks.
Improving the quality and speed of image reconstruction is important, and a hot topic for ECT researchers, in order
to realize on-line measurement of multi-phase flow parameters in high-speed industrial processes using electrical
capacitance tomography (ECT). In this paper, key issues in designing a high-speed ECT system, including algorithms
and circuits (the small-capacitance measurement module, data acquisition control module, and communication
module), are analyzed and discussed. To a great extent, the speed of the hardware circuit depends on the performance
of the small-capacitance measurement circuit. This paper presents AC-based and active differential capacitance
measuring circuits, which are suitable for rapid ECT imaging. The real-time performance and image reconstruction
process of the ECT system are discussed, and key technologies and methods for a high-speed ECT system are given.
With the development of power electronics, inverters and their control methods are applied in more and more fields.
Current hysteretic-band PWM is a simple and reliable control technique, but the harmonic distortion of the load
current has large amplitude in the sidebands around the switching frequency. This paper introduces a varied
hysteretic-band current control technique (VHBCC), which spreads the spectral content of the load current around the
switching-frequency sidebands and thus reduces the harmonic distortion of the load current. Finally, simulation
results prove the method feasible and effective.
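For orientation, here is a toy bang-bang sketch of the fixed-band baseline that VHBCC refines: the switch applies +Vdc or -Vdc so the load current stays within a band of width 2h around the reference. The first-order R-L load model and all circuit values are illustrative assumptions, and the varied-band modulation of h itself is omitted:

```python
def simulate_hysteresis(i_ref=5.0, h=0.2, vdc=100.0, r=1.0, l=0.01,
                        dt=1e-5, steps=5000):
    """Euler simulation of fixed-band hysteretic current control on an R-L load."""
    i, v = 0.0, vdc
    trace = []
    for _ in range(steps):
        if i > i_ref + h:
            v = -vdc               # upper band crossed: switch low
        elif i < i_ref - h:
            v = vdc                # lower band crossed: switch high
        i += dt * (v - r * i) / l  # di/dt = (v - R*i) / L
        trace.append(i)
    return trace
```

After the initial rise, the current chatters in a triangular ripple around `i_ref`; the paper's contribution is to vary `h` over time so this ripple's spectrum is spread rather than concentrated at one switching frequency.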
More and more researchers are concerned with the issue of common-mode voltage (CMV) in high-voltage, high-power
converters. A novel common-mode voltage suppression scheme based on a zero-vector PWM strategy (ZVPWM) is
presented in this paper. Taking a diode-clamped five-level converter as an example, the principle of the zero-vector
common-mode voltage PWM (ZCMVPWM) suppression method is studied in detail. The ZCMVPWM suppression
strategy comprises four important parts: locating the sector of the reference voltage vector, locating its small
triangular sub-sector, synthesizing the reference vector, and calculating the operating time of each vector. The
principles of these four parts are illustrated in detail and the corresponding MATLAB models are established. System
simulation and experimental results are provided, offering reference value for the development and research of
multi-level converters.
Software requirements should be divided into functional requirements and non-functional requirements, the latter
including run-time quality attributes, development-time quality attributes, and constraints. In software engineering,
the rate of software rework remains high because of human factors and failure to grasp the key factors. From a
practical point of view, six key steps in requirements analysis are proposed to resist development rework and
increase the success rate of software development.
The 23 GoF design patterns are proven to be effective against changes in demand, but abuse of design patterns is
common because developers chase them blindly. Such abuse virtually increases the difficulty of code maintenance
and is more harmful than not using any design patterns at all. In this paper, a method of choosing design patterns
according to test-first development is advocated, so that design patterns are applied rationally, starting from code
that uses no design patterns.
This study uses ZigBee wireless sensor network technology to locate and track construction workers. With ZigBee
devices as network elements and construction workers as mobile nodes, the wireless sensor network and the
construction workers are organically combined. This paper describes the hardware design of the ZigBee-based
positioning and tracking system, including the CC2430-based positioning sub-stations, base locators, mobile
positioning devices, and transmission interface circuits, as well as the software design of the system. An RSSI-based
positioning technique for construction workers is then proposed. A positioning and tracking system for a
construction site is given as an example, which demonstrates that the system is clearly helpful to construction site
management.
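RSSI-based positioning typically starts from the log-distance path-loss model, which converts a received signal strength into a range estimate before trilateration. The sketch below illustrates that conversion; the reference power A (RSSI at 1 m) and path-loss exponent n are illustrative assumptions for a CC2430-class radio, not calibrated values from the paper:

```python
import math

def rssi_to_distance(rssi_dbm, a=-45.0, n=2.5):
    """Invert RSSI(d) = A - 10*n*log10(d) to estimate distance in metres."""
    return 10 ** ((a - rssi_dbm) / (10 * n))

def distance_to_rssi(d, a=-45.0, n=2.5):
    """Forward model: expected RSSI in dBm at distance d metres."""
    return a - 10 * n * math.log10(d)
```

With these assumed parameters, a reading of -70 dBm maps to roughly 10 m; in practice A and n must be calibrated on site, and several such ranges are combined to fix a worker's position.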
In today's rapidly developing information age, large amounts of data are transmitted over computer networks, and
transmission imposes a heavy burden when the information is not compressed. This paper introduces a randomized
incremental construction algorithm for the compressed quadtree, describes its implementation steps, and analyzes its
validity and running time.
Perceptual hashing is an important research branch of multimedia security, but the performance evaluation of
perceptual hash algorithms has not been well studied so far. Traditionally, a perceptual hash algorithm is evaluated
in terms of robustness and discriminability. Such criteria are not sufficient to judge an algorithm, especially one
working in authentication mode. In this paper, a new evaluation method is proposed that assesses an
authentication-oriented perceptual hash algorithm by its capability to distinguish malicious tampering from
content-preserving operations. A new metric, the Identical Perceptual Distance, is also defined to quantify this
ability. Thorough tests on several representative published perceptual hash algorithms show the feasibility of the
proposed evaluation method.
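The kind of comparison such an evaluation rests on is the normalized Hamming distance between binary hashes: content-preserving operations should leave the distance small, malicious tampering should not. The Identical Perceptual Distance metric itself is defined in the paper; the sketch below only illustrates the underlying distance and a hypothetical acceptance threshold:

```python
def hamming_distance(h1, h2):
    """Normalized Hamming distance between two equal-length bit strings."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def is_authentic(hash_original, hash_received, threshold=0.25):
    """Accept if the hashes differ in at most `threshold` of their bits."""
    return hamming_distance(hash_original, hash_received) <= threshold
```

Evaluating an algorithm then amounts to checking how well some threshold separates the distance distributions of benign and malicious modifications.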
This paper presents a new technique, named Improved Local Line Binary Pattern (ILLBP), which is an operator for
illumination-robust face recognition from a single training image. In order to empirically demonstrate effectiveness of
the proposed approach, we use Principal Component Analysis-Nearest Neighbour (PCA-NN) and multi-class Support
Vector Machine (SVM) as the classifiers. Comparisons to the Local Line Binary Pattern (LLBP) on Yale Face Database
B are also conducted. The advantages of our technique include higher accuracy, lower complexity, and faster
computation compared to the LLBP technique.
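For orientation, here is the classic 3x3 LBP computation that this family of descriptors builds on: each neighbor contributes a bit set when it is at least the center pixel. LLBP and the paper's ILLBP compare pixels along horizontal and vertical lines rather than a 3x3 ring; this sketch only illustrates the shared thresholding-and-binary-coding principle:

```python
# Clockwise neighbor offsets starting at the top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP code of pixel (r, c) in a 2-D list of intensities."""
    center = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A histogram of such codes over a face region gives an illumination-robust texture descriptor, which is then fed to a classifier such as PCA-NN or an SVM.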
When the jamming signal is strong enough, the performance of spread spectrum (SS) communications declines
quickly. Exploiting the independence between the SS signal and the jamming signal in baseband, a novel
anti-jamming framework for SS communication based on independent component analysis (ICA) is proposed in this
paper. The framework utilizes the particular structure of SS communications to reduce the degree of mixing in the
received signals. Owing to the advantages of ICA, no prior knowledge of the jamming signal is needed, so operations
such as detection and parameter estimation of the jamming signal can be avoided. The proposed framework handles
common jamming signals in a unified way, without the need to switch among several anti-jamming techniques for
different jamming types. The validity of the proposed method is finally verified by numerical simulation results.
In this paper, we propose methods to perform large-scale circuit simulation of MOSFET circuits containing the lossy
coupled transmission lines commonly encountered in modern circuit design. We utilize the fast multi-rate ITA
(Iterated Timing Analysis) algorithm and a full time-domain transmission line calculation algorithm based on the
Method of Characteristics. Various methods to speed up the transmission line calculation are presented. All proposed
methods have been implemented and tested to demonstrate their superior performance.
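As a minimal illustration of the Method of Characteristics viewpoint (far short of the paper's lossy, coupled, multi-rate treatment): along the characteristics of a lossless line, waves travel undistorted with delay tau = length/velocity, so a matched line behaves as a pure transport delay. The source impedance, characteristic impedance, and delay below are illustrative assumptions:

```python
from collections import deque

def simulate_matched_line(v_source, z0=50.0, zs=50.0, delay_steps=3):
    """Far-end voltage of a matched lossless line driven through Zs."""
    line = deque([0.0] * delay_steps)    # wave samples in flight on the line
    v_far = []
    for vs in v_source:
        v_launch = vs * z0 / (zs + z0)   # resistive divider at the sending end
        line.append(v_launch)
        v_far.append(line.popleft())     # matched load: no reflected wave
    return v_far
```

A unit step through a matched 50-ohm source appears at the far end after the line delay at half amplitude; losses and coupling turn this pure delay into the convolution-based characteristic equations the paper accelerates.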
A modified waveform relaxation algorithm with transmission line calculation ability is proposed to perform
large-scale circuit simulation of MOSFET circuits with lossy coupled transmission lines. The adopted full
time-domain transmission line calculation algorithm, based on the Method of Characteristics, is equipped with a
time-step control scheme to improve calculation efficiency. All proposed methods have been implemented in a
simulation program and used to simulate several circuits; the simulation results confirm the effectiveness of the
proposed methods.
Mobile computing devices have many limitations, such as relatively small user interfaces and slow computing speed.
Augmented reality, which requires face pose estimation, can serve as an HCI and entertainment tool. For real-time
implementation of head pose estimation on resource-limited mobile platforms, various constraints must be met while
retaining sufficient estimation accuracy. The proposed face pose estimation method meets this objective:
experimental results on a test Android mobile device delivered satisfactory performance in real time and with good
accuracy.
Thumbnail images provide mobile computing users with an easier and smoother image-browsing experience.
Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the
original image often contain little information. This research investigates the ability of computer vision systems to
identify key components of images. We evaluate automatic cropping techniques based on 1) a general method that
detects salient portions of images and 2) automatic face detection. Both methods have been implemented on a mobile
computing platform and, for comparison, a desktop platform. The experimental results demonstrate the validity of
the proposed approaches.
Providing authentication and confidentiality for databases during information exchange over insecure networks is a
significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to
construct a multiparty secret-sharing decision tree for implementation in medical transactions.
Constructing an initial target template is key to all template matching techniques. This paper presents an updated
Snake model and applies it to the initial template construction process in a vehicle image tracking system. The
updated Snake model is improved mainly in two respects: one is reconstructing its internal and external energy
functions, and the other is adaptively modifying the weights. The updated Snake model relaxes the requirements on
selecting the initial control points and speeds up the contour search to a certain extent, so as to meet the system's
real-time demands. Besides, the number of contour feature points can adapt to the size and complexity of the target,
making the target contour more precise.
Sensors are important parts of a navigation system, and decentralized filtering for sensor fault tolerance is of great
significance for improving system reliability. A method combining federated filtering and covariance intersection
filtering is adopted to achieve decentralized, fault-tolerant filtering, and the navigation accuracy and fault tolerance
are analyzed. The simulation results indicate that implementing decentralized filtering in an integrated navigation
system composed of multiple sensors brings high accuracy and improves fault tolerance thanks to information
redundancy. Thus, more complete and accurate navigation information about the moving sensor carrier can be
provided.
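The covariance intersection (CI) rule at the heart of such a scheme fuses two estimates with unknown cross-correlation: 1/P = w/P1 + (1-w)/P2 and x = P*(w*x1/P1 + (1-w)*x2/P2) for a weight 0 <= w <= 1. The scalar sketch below uses a simple grid search for w that minimizes the fused variance, an illustrative choice rather than the paper's exact scheme:

```python
def ci_fuse(x1, p1, x2, p2, w):
    """Covariance intersection of (x1, P1) and (x2, P2) with weight w."""
    p = 1.0 / (w / p1 + (1.0 - w) / p2)
    x = p * (w * x1 / p1 + (1.0 - w) * x2 / p2)
    return x, p

def ci_fuse_best(x1, p1, x2, p2, steps=100):
    """Pick the w in (0, 1) giving the smallest fused variance."""
    candidates = [ci_fuse(x1, p1, x2, p2, k / steps)
                  for k in range(1, steps)]
    return min(candidates, key=lambda xp: xp[1])
```

Unlike a Kalman update, CI never claims a variance smaller than its best input, which is exactly what makes it safe when sensor errors may be correlated in unknown ways.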
To combine rugged terrain and abuttals efficiently and accurately in current GIS, an improved method is presented
that renders abuttals on rugged terrain by combining buffer objects with the ZP+ algorithm, which derives from
Z-pass. We create the shadow volume by extruding the abuttal data and render it in real time, determining the width
of the shadow volume dynamically using a screen-space error metric so that the volume fits the scene closely. To
improve the rendering rate, we store the volume vertex arrays in buffer objects. Experiments indicate that the method
improves program performance, combines abuttals and terrain closely, and meets display requirements.
Random forest is a popular classification algorithm used to build ensemble models of decision tree classifiers.
However, owing to the complexity of unbalanced data distribution in high dimensional space, a random forest may
include bad trees that produce incorrect results. This paper proposes an improved random forest algorithm with tree
selection methods, particularly designed for analyzing unbalanced data. The novel tree selection
methods are developed for making random forest framework well suited to classify unbalanced data. Experimental
results on unbalanced datasets with diverse characteristics have demonstrated that the proposed method could generate a
random forest model with higher performance than the random forests generated by Breiman's method.
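A toy sketch of the tree-selection idea: score each tree on a validation set with a class-balance-aware measure (here, balanced accuracy) and keep only trees above a cutoff, so trees that ignore the minority class drop out of the ensemble. The stub "trees" (plain callables) and the cutoff are illustrative; the paper's selection criteria are more elaborate:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = set(y_true)
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def select_trees(trees, x_val, y_val, cutoff=0.6):
    """Keep trees whose balanced accuracy on the validation set >= cutoff."""
    kept = []
    for tree in trees:
        preds = [tree(x) for x in x_val]
        if balanced_accuracy(y_val, preds) >= cutoff:
            kept.append(tree)
    return kept
```

A majority-class predictor scores only 0.5 balanced accuracy however unbalanced the data, so it is filtered out while trees that also recognize the minority class survive.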
Traditional blog retrieval, which focuses only on topical relevance, no longer satisfies many blog users, who may
wish to follow bloggers whose posts express in-depth thoughts and analysis on the reported issues. This paper
focuses on the problem of finding blogs that are both relevant and in-depth with respect to a user's query. We use the
L-Qtf coefficient, a kind of pivoted normalization weighting coefficient, to analyze the posts in blogs, and we also
discuss the effect of different in-depth analysis coefficients based on the L-Qtf coefficient. We propose an improved
framework for an in-depth-facet blog distillation system to retrieve in-depth blogs for a query, and set up
comparative experiments. Experimental results on the BLOG08 dataset show that the improved system is more
effective than our prior system in the TREC 2009 blog track.
The theoretical significance of sub-optimum research is that it provides a valuable methodology for the theory of
system cooperation, the development of dissipative structures theory, and practical uncertain problems, and it has
great application value in economic decision-making theory. In future studies, how to build a research framework for
largest sub-optimum analysis, and how to choose between the optimum and non-optimum of a decision-making
project, are both preconditions for research in this area.
Short-loop constructs are common in process models derived from the event logs of most information systems, but
current algorithms cannot satisfactorily differentiate length-one loops from length-two loops when the sets of traces
they can execute are identical. We first put forward a method based on conformance checking techniques to handle
this problem. Next, using a Petri-net-based representation, some new ordering relations are defined to detect the
short loops. Finally, an algorithm is proposed and proven to discover process models with short loops correctly. The
improved approach in this paper can also be applied in other process mining techniques.
The nuclei of epithelial cells in Pap smears are important risk indicators of cervical cancer. Pathologists use changes
in nuclear area to determine whether cells are normal or abnormal, so correct measurement of nuclear area is
important in Pap smear assessment. Our paper presents a novel approach to analyzing the shape of nuclei in Pap
smear images and measuring their area. We conducted a study to measure nuclear area automatically by counting the
pixels contained in each segmented nucleus. For comparison, we also measured nuclear area using an ellipse area
approximation. The result of the t-test confirmed similarity between the elliptical area approximation and the
automatically segmented nuclear area at the 0.5% level of significance.
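The two area measures being compared reduce to counting mask pixels versus evaluating pi*a*b from the fitted semi-axes. The sketch below illustrates both on a hypothetical binary "nucleus" mask, not real Pap smear data:

```python
import math

def pixel_area(mask):
    """Area in pixels of a binary mask given as a list of rows of 0/1."""
    return sum(sum(row) for row in mask)

def ellipse_area(semi_major, semi_minor):
    """Area of the fitted ellipse approximation, pi * a * b."""
    return math.pi * semi_major * semi_minor
```

For a rasterized disc of radius r, `pixel_area` approaches `ellipse_area(r, r)` as r grows, which is why the two measures can agree statistically on real nuclei despite discretization.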
Image inpainting refers to the process of reconstructing damaged regions of an image or removing unwanted objects
from it. We need the most appropriate way of restoring the image to its original state while ensuring that the result
achieves the best artistic effect. Common methods are based on PDEs or on texture synthesis, but when the damaged
region is too large or a semantic fragment is completely missing, these methods do not perform well. Rather than
inpainting using only the original image data, inpainting with image content from other pictures can help. With the
improvement of research in image retrieval, we can take advantage of results retrieved with particular criteria: using
image patches from the retrieval results, we can perform better inpainting and achieve results more in line with the
expected visual effect. This paper presents the algorithm, and experimental results show its high efficiency and
quality; unexpected results due to random computation show its creative potential.
In non-photorealistic rendering (NPR), Chinese ink painting is a traditional NPR style of China. In this paper, we
propose an image-based, ink-diffusion-based method for Chinese ink painting NPR, so that users without painting
experience can automatically convert an ordinary image into a Chinese ink painting. As is well known, ink is the
essential pigment of Chinese ink painting, and the various ink shade effects are produced by ink mixed with water;
in addition, ink diffusion along boundaries is a very important aspect of the style. To realize these effects, we present
a Chinese ink painting NPR method based on ink diffusion. We use a Mean Shift image segmentation algorithm to
preprocess the input image into regions with different tones. Then we detect the edges of the segmented regions,
taking the edge points as the starting points for diffusion, and assign each point an ink value corresponding to its
gray value. Meanwhile, a new algorithm simulating ink diffusion is proposed to make the segmented image look like
a black-ink painting. The results in this paper demonstrate that our method is promising.
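As a purely hypothetical stand-in for the ink-diffusion simulation (the paper's algorithm diffuses from region edges with water-dependent behavior), one diffusion step on an ink grid can be modeled by moving each cell a fraction `rate` toward the average of its 4-neighborhood:

```python
def diffuse_step(ink, rate=0.25):
    """One Jacobi-style diffusion step on a 2-D grid of ink values."""
    rows, cols = len(ink), len(ink[0])
    out = [row[:] for row in ink]
    for i in range(rows):
        for j in range(cols):
            neigh = [ink[x][y]
                     for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= x < rows and 0 <= y < cols]
            avg = sum(neigh) / len(neigh)
            out[i][j] = ink[i][j] + rate * (avg - ink[i][j])
    return out
```

Repeating the step softens sharp segmentation edges into the feathered boundaries characteristic of wet ink; a full simulation would also vary the rate with the local water content.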
In order to determine the appropriate number of thresholds that a grey-level image should be thresholded by so that
the resulting image preserves as much information as possible from the original image using the fewest possible bits,
we propose in this paper a novel criterion for multilevel image thresholding. The criterion is a weighted sum of within-class
variance and the number of edge pixels in the thresholded image. To determine the appropriate number of thresholds, an
image has to be thresholded iteratively with increasing number of thresholds by any standard thresholding method and
the solution that minimizes the proposed criterion is chosen as the appropriate solution. We also present an efficient
technique to compute the number of edge pixels. Experiments on a variety of real-world images show that the proposed
criterion gives visually more consistent results compared to the most widely used automatic thresholding criterion
proposed by Yen et al. (1995).
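The criterion's structure can be sketched in one dimension: for a candidate set of thresholds, the score is the within-class variance plus a weight times the number of "edge pixels" (here, adjacent samples falling in different classes). The weight and the toy signal are illustrative; the paper applies this to 2-D images and iterates over the number of thresholds:

```python
def classify(v, thresholds):
    """Index of the class that value v falls into."""
    return sum(v > t for t in thresholds)

def within_class_variance(values, thresholds):
    """Mean squared deviation of each value from its class mean."""
    classes = {}
    for v in values:
        classes.setdefault(classify(v, thresholds), []).append(v)
    total = 0.0
    for vals in classes.values():
        mean = sum(vals) / len(vals)
        total += sum((v - mean) ** 2 for v in vals)
    return total / len(values)

def edge_count(values, thresholds):
    """Adjacent pairs assigned to different classes (1-D 'edge pixels')."""
    return sum(classify(a, thresholds) != classify(b, thresholds)
               for a, b in zip(values, values[1:]))

def criterion(values, thresholds, weight=0.1):
    return (within_class_variance(values, thresholds)
            + weight * edge_count(values, thresholds))
```

A threshold that cleanly separates the two plateaus of a step signal scores far lower than one that lumps them together, and the edge term penalizes over-fragmented solutions as the number of thresholds grows.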