The problem of pattern recognition is formulated as classification in statistical learning theory. Vapnik constructed a class of learning algorithms called support vector machines (SVMs) to solve the problem. The algorithm not only has a strong theoretical foundation but also provides a powerful tool for solving real-life problems. It still has some drawbacks, however. Two of them are: 1) the computational complexity of finding the optimal separating hyperplane is quite high in the linearly separable case, and 2) in the linearly non-separable case, for any given sample set it is hard to choose a proper nonlinear mapping (kernel function) such that the sample set becomes linearly separable in the new space after the mapping. To overcome these drawbacks, we present some new approaches. The main ideas and some experimental results of these approaches are given.
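For context on the first drawback, in the linearly separable case the optimal separating hyperplane is obtained from the standard hard-margin quadratic program (stated here for reference, not as the paper's formulation):
\[
\min_{\mathbf{w},\,b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^2 \quad \text{s.t.} \quad y_i\,(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1,\quad i = 1,\dots,l,
\]
whose dual has one variable per training sample, which is the source of the computational cost mentioned above.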
Kernel-based methods, and Support Vector Machines (SVMs)\cite{Vapnik1998,Smola1998} in particular, are a class of learning methods that can be used for non-linear regression estimation. They have often achieved state-of-the-art performance in the many areas where they have been applied. The class of functions they choose from is determined by a kernel function, whose form is of central importance to kernel-based methods. In this paper, I give a brief description of the core concepts of kernel-based methods and SVMs, along with some new ideas for creating kernels with multiscale and interpretability characteristics.
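As a minimal illustration of how a kernel determines the function class, the sketch below computes a Gaussian RBF Gram matrix; the bandwidth sigma is an assumed free parameter and is not taken from the paper.

    import numpy as np

    def rbf_gram(X, sigma=1.0):
        """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
        sq_norms = np.sum(X ** 2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
        return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma ** 2))

    # Example: Gram matrix for 5 samples with 3 features each
    K = rbf_gram(np.random.rand(5, 3), sigma=0.5)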
In this paper we present an automated region-growing algorithm designed to extract pipe-like organs, i.e., the bronchus, colon, and blood vessels, from 3D CT images taken by a helical CT scanner. The algorithm, called the 3D scanline-filling algorithm, is a variant of region growing. It is very fast, and its parameters can be adjusted automatically. It can be utilized in a Virtual Endoscopy System (VES), which is an important computer-aided diagnosis method. The data produced by this algorithm can then be sent to the VES for further processing, visualization, and display on screen. The physician can observe the inner organ on the screen just as with real endoscopy, but without causing the patient any pain.
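As a hedged sketch of the region-growing idea only (a simple breadth-first 6-connected grower, not the paper's scanline-filling variant; the intensity window lo/hi is an assumed parameter):

    from collections import deque
    import numpy as np

    def region_grow_3d(volume, seed, lo, hi):
        """Grow a 6-connected region from `seed` (a (z, y, x) tuple),
        keeping voxels whose value lies in [lo, hi]."""
        mask = np.zeros(volume.shape, dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                        and lo <= volume[nz, ny, nx] <= hi):
                    mask[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        return mask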
This paper presents a new approach to the extraction of video objects and the segmentation of video object planes from a video sequence. The method is based on a morphological motion filter using connected operators and our proposed new filtering criterion. The morphological motion filter aims to detect motion that is distinct from that of the background, and thereby locates independently moving physical objects in the scene. An object tracker then matches a two-dimensional binary model of the object against subsequent frames using the Hausdorff distance; the best match found indicates the translation the object has undergone. An object matcher based on an active contour model is then used to match the object at the new location. From the resulting series of binary contours we can extract the moving object. Experiments show that our algorithm can extract objects from moving backgrounds efficiently.
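For context, the Hausdorff distance used for model matching can be sketched as follows; this brute-force form over two point sets is illustrative only and is not the paper's implementation.

    import numpy as np

    def directed_hausdorff(A, B):
        """max over a in A of the distance to the nearest b in B (A, B: N x 2 arrays)."""
        dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return dists.min(axis=1).max()

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two point sets."""
        return max(directed_hausdorff(A, B), directed_hausdorff(B, A))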
In this paper, existing segmentation approaches are summarized. The Otsu method is found to be suitable for real-time implementation but sensitive to target size: if the target is too small compared with the background, it often cannot be segmented. To solve this problem, a task-guided segmentation is proposed. In this model, following the idea of attention, image segmentation is carried out under the guidance of the task at hand. A model of task-guided segmentation is proposed and applied to real-time IR ship recognition. Experimental results demonstrate the effectiveness of the proposed approach.
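For reference, the standard Otsu threshold maximizes the between-class variance of the gray-level histogram; a minimal sketch for 8-bit images (not the paper's task-guided variant):

    import numpy as np

    def otsu_threshold(image):
        """Return the gray level that maximizes between-class variance."""
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        omega = np.cumsum(p)                     # class-0 probability up to each level
        mu = np.cumsum(p * np.arange(256))       # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.argmax(np.nan_to_num(sigma_b2)))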
Even though the edges of a texture image may have various orientations and their locations in the image may be random, in the magnitude of the Fourier transform of the image the contributions of all edges with the same orientation are stacked up along the direction perpendicular to those edges. This special phenomenon is called the auto-registration of the magnitude spectrum. In this paper we propose a method that exploits the auto-registration property of magnitude spectra for texture identification and image segmentation.
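As an illustrative sketch only (assuming a grayscale input and 1-degree orientation bins; not the paper's method), the energy of the magnitude spectrum can be accumulated per orientation to reveal the stacked contribution of edges sharing a direction:

    import numpy as np

    def spectrum_orientation_energy(image, n_bins=180):
        """Histogram of FFT-magnitude energy versus orientation (0-180 degrees)."""
        mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        h, w = mag.shape
        v, u = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        theta = np.degrees(np.arctan2(v, u)) % 180.0
        bins = np.minimum((theta / (180.0 / n_bins)).astype(int), n_bins - 1)
        return np.bincount(bins.ravel(), weights=(mag ** 2).ravel(), minlength=n_bins)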
Image segmentation is a typical problem of image analysis. The aim is to partition a grayscale image into disjoint regions of coherent luminance or homogeneous texture. There are many region-based, contour-based, and joint region-contour segmentation methods for dealing with different types of images. Here we treat a color-coherent region as a special texture and use a new texture descriptor, based on histograms of local Fourier coefficients, adapted to this extended texture representation. A texture-based image segmentation algorithm is then proposed. By utilizing the texture features of a region and the K-means clustering algorithm we obtain a coarse segmentation of the image; a final segmentation is then obtained by iteratively refining the region boundaries. Because our texture feature is also suitable for gray-coherent regions, the algorithm protects such regions from over-segmentation. At the same time, we improve the boundary refinement by dividing it into three steps: horizontal refinement, vertical refinement, and boundary-integrity checking. We also propose pre-processing and post-processing methods for this algorithm. The segmentation performance is demonstrated on several synthetic texture images and aerial images.
Image segmentation is the foundation of high-level digital image processing and is widely applied in many areas; it is also a classic, difficult problem in advanced information processing. Because of its importance and difficulty, image segmentation has motivated a large number of researchers, and quite a number of segmentation ideas and algorithms have been proposed over the years. So far, however, it is very difficult to provide a generally effective segmentation algorithm, and furthermore there is no objective criterion for evaluating the performance of image segmentation algorithms. In theoretical research on image segmentation there are two main directions: 1) improving the classic segmentation algorithms, and 2) providing novel ideas and novel approaches. In applications, image segmentation is mainly used in ATR (Automatic Target Recognition) systems and industrial inspection, because segmentation can not only greatly compress the data and reduce the required memory but also simplify the analysis and subsequent processing steps. In this paper, we propose a new application of image segmentation: image matching in real-time systems. The key problem is to provide the segmentation methods best suited to the subsequent matching processing, and a rule for selecting the segmentation method is also provided. Using the traditional matching scheme, the experimental results show that the performance of the segmentation algorithm (2-D Otsu) is unstable; the correct matching probability (CMP) increases rapidly as the size of the real images (the basic matching unit) becomes larger, and compared with traditional methods, some results are better than direct gray-level image matching when the real image becomes large, but the average CMP (ACMP) is not good. To improve the ACMP, we provide a new method for image matching, combined matching: every three sequential images are grouped, and the group is used as the basic matching unit.
This paper deals with a class of variational problems arising from image segmentation. After introducing the mathematical model of this problem, we analyze its rationality in detail. At the same time, two numerical methods are proposed to solve it, and the equivalence of the two corresponding equations is proved briefly. Finally, we apply this result to approximate a minimization problem that was introduced by D. Mumford and J. Shah to study image segmentation.
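For context, the Mumford-Shah functional referred to above is commonly written (with g the observed image, u its piecewise-smooth approximation, and K the edge set) as
\[
E(u, K) = \int_{\Omega \setminus K} |\nabla u|^2 \, dx \;+\; \alpha \int_{\Omega} (u - g)^2 \, dx \;+\; \beta \, \mathcal{H}^1(K),
\]
where \(\mathcal{H}^1(K)\) is the one-dimensional Hausdorff measure (total length) of K and the minimization is taken over both u and K.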
The Genetic Algorithm (GA) is derived from the mechanics of genetic adaptation in biological systems and can search the global space of a given application effectively. The proposed algorithm introduces three parameters, fitmax, fitmin, and fitave, to measure how close the individuals are to each other, so as to improve the Adaptive Genetic Algorithm (AGA) proposed by M. Srinivas. At the same time, an elitist strategy is employed to protect the best individual of each generation, and Remainder Stochastic Sampling with Replacement (RSSR) is employed in the proposed Improved Adaptive Genetic Algorithm (IAGA) to improve the basic reproduction operator. The proposed IAGA is applied to image segmentation. The experimental results exhibit satisfactory segmentation and demonstrate its learning capability. By adaptively determining the crossover probability pc and mutation probability pm for the whole generation, it strikes a balance between two competing goals: sustaining the capacity for global convergence and converging rapidly to the global optimum.
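As a hedged sketch of how fitness statistics can drive the adaptive probabilities (in the spirit of Srinivas-style AGA; the constants k1-k4 and the exact use of the fitness statistics are illustrative assumptions, not the paper's formulas):

    def adaptive_probabilities(fit, fit_max, fit_ave, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
        """Return (pc, pm) for an individual with fitness `fit`.

        High-fitness individuals receive smaller pc/pm (protection);
        low-fitness individuals receive larger pc/pm (more disruption)."""
        if fit_max == fit_ave:            # population nearly converged: keep disruption high
            return k3, k4
        if fit >= fit_ave:
            scale = (fit_max - fit) / (fit_max - fit_ave)
            return k1 * scale, k2 * scale
        return k3, k4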
The gradient histogram is unimodal. From statistics of the gradient histograms of a large number of images, it has been found that the gradient histograms of the object and the background can be fitted with χ² (chi-squared) distribution density functions of different degrees of freedom. A method for automatically estimating the threshold for gradient-image segmentation is presented and proved.
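For reference (standard form, not specific to the paper), the χ² density with k degrees of freedom is
\[
f(x; k) = \frac{1}{2^{k/2}\,\Gamma(k/2)}\, x^{k/2 - 1}\, e^{-x/2}, \qquad x > 0,
\]
so fitting the object and background gradient histograms amounts to choosing a suitable degree of freedom (and scale) for each mode.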
A new image segmentation method with a multi-agent structure is proposed to cope with image variability and the intrinsic complexity of segmentation tasks. The method includes agents with nine different functions, including control, multi-valued segmentation, region splitting, region merging, marking, regional feature calculation, spatial relationship analysis, and confidence calculation. Besides communicating directly in the form of instructions and answers, the agents coordinate their work through the reception and manipulation of various data (original and processing-generated) on a blackboard. The final segmentation results, obtained jointly by the multiple agents, take the confidence of each segmented subregion as a component of the region feature vector. The object identification algorithm then makes use of domain knowledge of the objects and of the features with confidence components to detect and identify objects in the segmentation results. The flexibility, adaptability, and robustness of this method have been confirmed by experimental results.
An approach to color image segmentation is proposed based on the contributions of color features to segmentation rather than on the choice of a particular color space. Unlike previous methods, a self-organizing feature map (SOFM) is used to construct the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. Our study shows that the feature-encoding approach offers great promise in automating and optimizing color image segmentation.
Face recognition is an area of emerging research that offers great challenges, mainly under adverse conditions. This paper addresses face images in which approximately 20% of the face is occluded or poorly illuminated, as well as images involving disguises such as scarves, sunglasses, or masks. The presented techniques use three different eigenfeatures: eigeneyes, eigennose, and eigenmouth. Even in these unfavorable situations, recognition rates reached 87%. Images were taken from the Yale Face Database.
In automatic face recognition, extracting strongly discriminatory features is very important. In this paper a new approach to extracting powerful local discriminatory features is introduced. Instead of using traditional wavelet features, the authors examine multiscale local statistical characteristics of important wavelet subbands to obtain strong discriminatory features. To efficiently exploit the extracted multiscale local discriminatory features (MLDFs), an integrated recognition system is developed in which multiple classifiers first perform coarse classification, and a decision-fusion scheme that assigns different priorities to the classifiers then makes the final recognition decision. Our experiments show that this technique achieves performance superior to popular methods such as PCA/Eigenface, HMM, wavelet features, and neural networks.
This paper presents an improved version of traditional eigenface methods. Much of the previous work on eigenspace methods builds only one eigenface space from the eigenfaces of different persons, utilizing only one or a very limited number of face images per individual. The information in a single facial image is very limited, so traditional methods have difficulty coping with differences among facial images caused by changes of age, emotion, illumination, and hairstyle. We take advantage of facial images of the same person obtained at different ages, under different conditions, and with different emotions. For every individual we construct an eigenface subspace separately; that is, multiple eigenface spaces are constructed for one face database. Experiments show that the improved algorithm is distortion-invariant to some extent.
This paper presents a new scheme for face localization against a complex background. A Difference Offset of Gaussian filter is first introduced to calculate a feature inertia surface of the image. Skeletonization of the inertia surface is then carried out by a combination of fast construction of Euclidean distance maps and morphological operations. After that, face regions in the resulting skeleton image are detected by a quadratic Gabor filter, and the faceness of each located area is verified by a modified retinal model. Experiments in practical applications have shown the feasibility of the scheme.
This paper introduces a video-based face verification system. Stereo vision and a multiple-related-template matching method are used to segment the facial region from the background. Facial features are extracted for geometrical normalization, followed by illumination normalization. Faces are represented by normalized face images with some corner points masked out. A Support Vector Machine classifier is trained for each person for verification. Experimental results demonstrate the superior performance of this method in complex real environments.
Recent research results show that the recognition rates of existing mouth-shape recognition systems are not high enough, because of improper feature selection and extraction for mouth-shape images and misclassification of features lying on the boundaries between categories. This paper presents a statistical approach, called CPMSR, for mouth-shape recognition at the phoneme level. The feature-extraction module of this approach is based on research results in phonetics and on personal investigation at a school for the deaf. The analysis module employs the Support Vector Machine (SVM) technique, which is a useful tool for dealing with boundary points. With these improvements our experiments achieved a satisfactory recognition rate of over 90% for the mouth shapes of 5 vowels and 24 consonants.
Face recognition has a wide range of applications. In the existing literature, most algorithms deal with static images with simple backgrounds and are used only for ID-photo recognition. It is therefore necessary to study the whole process of multi-pose face recognition against a complex background. In this paper an automatic multi-pose face recognition system using multiple features is presented. It consists of the following steps: face detection based on skin color and multiple verification; detection and location of the facial organs based on a combination of iterative search driven by a confidence function and template matching at the candidate points, with analysis of multiple features and their projections; feature extraction for recognition; and a recognition decision based on a hierarchical face model with division of the face poses. The new approach is applied to 420 color images that contain multi-pose faces with two visible eyes against complex backgrounds, and the results are satisfactory.
In this paper, a 3-D face recognition system is developed using a modified neural network. The modified network is constructed by substituting each neuron in the hidden layer of a conventional multilayer perceptron with a circular structure of neurons; the resulting system is called the cylindrical-structure hidden-layer neural network (CHL-NN). The system is applied to a real 3-D face image database consisting of 5 Indonesian subjects. The images are taken under four different expressions: neutral, smile, laugh, and free expression. The 2-D images are obtained by gradually changing the viewpoint, successively varying the camera position from -90 to +90 degrees in 15-degree intervals. The experimental results show that an average recognition rate of 60% is achieved when the images are used in the spatial domain. The system is then improved by transforming the images from the spatial domain into an eigenspace domain: the Karhunen-Loeve transform is used so that each spatial-domain image is represented as a point in the eigenspace, and the Fisherface method is then utilized for feature extraction in that space. Using the same database and experimental procedure, the average recognition rate of the system increases to 84%.
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. At level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. At level 3, a face mosaic image model, which is well matched to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions; this model includes edge and gray-level rules. Experimental results show that the approach is robust and fast, with wide application prospects in human-computer interaction, video telephony, and related areas.
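As an illustrative sketch only of hue/saturation skin masking (the bounds below are placeholder values and OpenCV's HSV convention is assumed; they are not the paper's skin model):

    import numpy as np
    import cv2

    def skin_mask(bgr_image, h_range=(0, 25), s_range=(40, 255)):
        """Rough skin-likeness mask from hue and saturation in HSV space."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        h, s, _ = cv2.split(hsv)
        mask = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_range[0]) & (s <= s_range[1])
        return mask.astype(np.uint8) * 255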
This paper describes a detecting and tracking system based on stereo video. Facial features are extracted in the first frame pair and tracked in the following frames, with stereo vision applied to enhance the correctness and precision of the feature points. During detection, the face position and feature regions are determined first; an Active Appearance Model (AAM) is then used to find the accurate positions of the feature points, and finally the camera parameters are used to perform a stereo reprojection. The same reprojection is applied after the Kanade-Lucas-Tomasi tracker during tracking, assuming that the head is a rigid object. Experimental results demonstrate that the AAM and stereo reprojection improve robustness and accuracy remarkably.
In this paper we propose a face detection method based on multiple sources of information, developed from knowledge-based methods. In contrast to traditional approaches, we analyze not only the intensity information but also some intrinsic properties of the intensity image, such as its orientation pattern. We establish the template and rules by combining intensity, orientation, and an anisotropy measure obtained from the image. Moreover, we use Bayes' rule to determine the parameters of the rules automatically. Experiments show that our method achieves an obvious improvement in recognition rate.
In this paper, a dynamic gesture recognition system is presented that requires no special hardware other than a webcam. The system is based on a novel method combining Principal Component Analysis (PCA) with hierarchical multi-scale theory and Discrete Hidden Markov Models (DHMMs). We use a hierarchical decision tree based on multiscale theory. First, we convolve all members of the training data with a Gaussian kernel, which blurs differences between images and reduces their separation in feature space; this reduces the number of eigenvectors needed to describe the data. A principal component space is computed from the convolved data, and the data in this space are divided into two clusters using the k-means algorithm. The level of blurring is then reduced and PCA is applied to each cluster separately, forming a new principal component space for each. Each of these spaces is then divided into two, and the process is repeated. We thus produce a binary tree of principal component spaces in which each level of the tree represents a different degree of blurring. The search time is proportional to the depth of the tree, which makes it possible to search hundreds of gestures in real time. The output of the decision tree is then fed into the DHMM to recognize temporal information.
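A hedged sketch of the tree construction described above, using scikit-learn and scipy for brevity; the blur schedule, number of components, and stopping rule are assumed parameters, not the paper's settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def build_pca_tree(images, sigmas, n_components=10, depth=0):
        """Recursively build a binary tree of PCA spaces over progressively
        less-blurred images. `images`: array (n, h, w); `sigmas`: blur per depth."""
        if depth >= len(sigmas) or len(images) < 2:
            return {"images": images}                       # leaf node
        blurred = np.array([gaussian_filter(im, sigmas[depth]) for im in images])
        flat = blurred.reshape(len(images), -1)
        pca = PCA(n_components=min(n_components, len(images) - 1)).fit(flat)
        coords = pca.transform(flat)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(coords)
        return {"pca": pca,
                "children": [build_pca_tree(images[labels == c], sigmas,
                                            n_components, depth + 1) for c in (0, 1)]}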
A new unsupervised multiresolution pyramidal edge detector, based on adaptive weighted fuzzy mean (AWFM) filters, is presented in this paper. The algorithm first constructs a pyramidal structure by repeated AWFM filtering and subsampling of the original image. It then uses multiple heuristic linking criteria between the edge nodes of adjacent levels and treats the linkage as a fuzzy model, which is trained offline. Through this fuzzy linking model, the boundaries detected at coarse resolution are propagated and refined down to the bottom level in a coarse-to-fine edge detection. Validation experiments demonstrate that the proposed approach has superior performance compared with a standard fixed-resolution detector and a previous multiresolution approach, especially in impulse-noise environments.
Extracting edges effectively from noisy images is a difficult problem in pattern recognition. In this paper, we present a stochastic heuristic search algorithm for extracting edges in noisy images. We use repeated random searches to obtain various possible independent-edge trajectories in the edge image, then self-reinforce and accumulate the search trajectories, and finally extract the edges from the accumulated self-reinforcement results. Our technique effectively combines the local information of edge points with the global information of independent edge curves. Extensive experiments and comparisons with previous heuristic search algorithms show that our method can extract edges effectively while suppressing noise.
A recognition method for harbor contours is presented in this paper. The chain code of the contour represents the orientation and curvature of the contour, and from it the control points are detected using the extrema of a wavelet transform. A primitive of the harbor contour is composed of four consecutive control points satisfying certain criteria. Three scale- and rotation-invariant features of the primitive, and the derived recognition rule in the form of a shape energy, are described in the paper.
The edges in an image can be considered boundary lines between the classes to be analyzed and distinguished, and determining those boundary lines is important for image edge detection. For edges in high-dimensional spectral image data, the low-density zone in spectral space can be regarded as the search range for the boundary between different objects. This paper discusses the principles and methods of density analysis for high-dimensional spectral image data. One key issue is determining the statistical unit of the high-dimensional space. The approaches include a method based on gray-level combinations in spectral space, a method of statistics starting from the first pixel of the image, a method that takes the first component of a principal component transformation of the spectral space as a reference, and a method that determines the unit hypersphere from a sample set. Experiments using one of these methods have shown the effectiveness of spectral-space density analysis and are discussed.
In texture analysis, the selection of window size has a great influence on the effectiveness of the extracted features and on computing speed. This paper employs the Gauss-Markov random field (GMRF) model to describe textures; a least-squares approach is employed to estimate the field parameters and is proved to be unbiased. Because the estimation expression may have no solution, a modification to it is presented. Based on the unbiasedness of the parameter estimation, we present a window-size selection approach for texture primitives, and experiments show that our approach is very effective.
A combined optimal K-L expansion method (COKLT) is proposed in this paper. The first (c-1) axes of the COKLT are optimal under the Fisher criterion, and the remaining axes are optimal under the entropy criterion. Experimental results on the CENPARMI handwritten numeral database show that the proposed method is effective.
This paper demonstrates a novel criterion for both feature ranking and feature selection using Support Vector Machines (SVMs). The method analyzes the importance of a feature subset using a bound on the expected error probability of an SVM. In addition, a scheme for feature ranking based on SVMs is presented. Experiments show that the proposed schemes perform well in feature ranking/selection and that the risk-bound-based criterion is superior to several other criteria.
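One bound of this kind, stated here for context only (the paper may use a different variant), is the radius-margin bound for hard-margin SVMs,
\[
\mathbb{E}\left[P_{\mathrm{err}}\right] \;\le\; \frac{1}{l}\,\mathbb{E}\!\left[\frac{R^2}{\gamma^2}\right],
\]
where l is the number of training samples, R is the radius of the smallest sphere enclosing the training data in feature space, and γ is the margin; re-evaluating such a bound with a feature removed yields a ranking criterion for that feature.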
The corner is an important local feature of an image. This paper proposes a new corner detection algorithm based on multiple intensity features and edge points. A fast adaptive SUSAN detector based on local gray-level features is first presented for detecting candidate corners. This improved method can detect features automatically and quickly in images of different contrast through a self-adjusting threshold determined by the intensity gradient magnitude. To detect corners on blurry edges, the detection threshold is lowered, so the candidate corners also include some edge points. After the candidate corners are ordered along the boundary by an edge-element method, the angles between approximately straight edge lines are calculated. The edge points contained in the candidate set are then removed, since they show no significant discontinuous change in boundary direction, and the false corners due to quantization are also removed by our method. In this way, the true corners are retained. Experimental results show that the proposed algorithm has good corner detection and localization capability for images of different contrast.
Extracting the human body framework from images remains, in general, an open and significant problem. In this paper we propose a systematic approach under the strict fundamental assumption that a single front-view standing human body, dressed in long sleeves and long trousers, is to be extracted from the image. The three main steps of our approach are face detection, segmentation of the clothes and trousers regions, and positioning of all the body parts. Face detection is performed by means of skin-color extraction and face template matching. Based on the determined hue quantization and color texture features, a texture-similarity measure is designed for the segmentation of the clothes and trousers. In accordance with the entropy concept of information theory, homogeneous and inhomogeneous information is derived from similarity measurements of the human body image. The images obtained from the homogeneous and inhomogeneous information are then combined with the defined relationships of a human body framework to locate the trunk region, arm parts, hip region, and leg parts. Experiments have confirmed the feasibility of the proposed approach.
A new algorithm for estimating the Hurst parameter is introduced in this work. A remote sensing texture is modeled as an fBm (fractional Brownian motion) process. Since fBm is characterized by only one Hurst parameter, it is not flexible enough to model short-term correlation structure, and extended models have been proposed to address this. Noting that the trajectory of the logarithmic delta variances is deterministic and that the slopes k(s) of its piecewise-linear segments characterize the specific texture, we use k(s)/2 to estimate the multiscale Hurst parameters of the digital image. Since the new features characterize textures in a multiscale way and are consistent with the characteristics of natural processes, they perform better than existing features based on fractal models and wavelet transforms.
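For context, fBm increments satisfy the standard scaling law
\[
\mathbb{E}\left[\,|B_H(t+s) - B_H(t)|^2\,\right] = \sigma^2\,|s|^{2H},
\]
so the logarithm of the increment (delta) variance is linear in the logarithm of the scale with slope 2H; the slopes k(s) of the piecewise-linear fit therefore yield multiscale Hurst estimates H(s) = k(s)/2.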
This paper presents a model for the extraction and representation of spatial knowledge using the Hough Transform. The purpose is to extract line segments and specific relations between them to represent knowledge. Using the Hough Transform, we extract from the polar representation the line segments that approximate an object. We consider segment rotations from 0 to 180 degrees and quantize the possible segment orientations into n values, which constitute the alphabet of our model. We then define relations between the extracted segments with respect to their endpoints and interiors. From these relations and the alphabet, we represent an object as a pair (S, R), where S is the vector of segments and R is the vector of relations between the components of S. The similarity of two objects depends on the distance between their representations. Promising results are obtained for character recognition.
Finding line segments in an intensity image is one of the most fundamental issues in computer vision and in many industrial applications. Many methods have been presented, but robust line-segment extraction is still a difficult and open problem. In this paper a local-window method based on a line model is described for extracting linear features from low-quality images. The method uses a blob-coloring technique to extract potential line blobs in a local window, and a line segment is then discriminated using the line model.
We propose a new approach to modeling texture in 2D color images based on opponent correlation functions computed within and between the sensor bands of a red-green-blue image. The new opponent correlations cover all possible opponent autocorrelations and cross-correlations of a color image. Invariants of the opponent correlations are computed with the aid of Zernike moments to provide rotation, translation, and scale invariance. The method can be used for classification (even under slight illumination changes) and unsupervised segmentation of color texture in a variety of image processing and pattern recognition tasks. One of the most important points of the opponent correlation functions is their efficient computation compared with the direct spatial correlation method, which requires heavy computation.
Feature extraction is very important to pattern recognition. For many image recognition tasks, it is very hard to extract explicit geometrical features of the images directly; in such cases, global feature extraction is often used. Principal Component Analysis (PCA) is a typical global feature extraction method. However, PCA assumes a Gaussian distribution of the image population and produces a set of compact features, namely the coefficients of the basis functions with the largest eigenvalues. Compared with the compact features of PCA, sparse features seem more attractive for recognition tasks. In this paper, three algorithms that produce sparse features are studied. Independent Component Analysis (ICA) and sparse coding (SP) can describe non-Gaussian distributions; discriminatory sparse coding (DSP) is a variation of SP that incorporates the class-label information of the training samples. Experimental results on face recognition show that sparse features have an advantage over compact features, and DSP gives the best results because of the clustering property of its features.
Because SAR generates images by coherent processing of the scattered signal, the images are usually highly susceptible to speckle. A new mother wavelet function was therefore constructed and extended into a wavelet function family. It possesses a narrow low-pass band, and its waveform is distributed symmetrically about the impulse-response coefficients in the time domain, which gives it excellent performance in dealing with wave signals. On an original 1024x1024 ocean SAR image, the Wiener filter algorithm and the new wavelet filter were applied to suppress speckle, and the edge information was then extracted with the Prewitt filter. Comparing several popular mother wavelet functions with the constructed one under the same processing method and the same image, the results show that the new mother wavelet function is better than the others at preserving the image's edge information, although its speckle-suppression performance is not as good as theirs. Based on the above discussion, it is clear that the wavelet filter is a very effective method of speckle filtering and that the constructed mother wavelet function plays an important role in extracting SAR image edge-feature information.
This paper integrates the virtues of thresholding and region growing, using the gray-level frequency distribution and local spatial information simultaneously to segment an image through a multi-gray-level character (a multi-dimensional histogram). The technique is founded on the model that the multi-dimensional histogram of an image fits a multi-dimensional Gaussian mixture distribution. The paper derives the parameter formulas of the Gaussian mixture model based on maximum-likelihood estimation, and the Bayes discrimination rule then guides the segmentation. Theoretical analysis and experiments indicate that the new technique overcomes two main shortcomings of existing similar techniques.
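For context (generic form, not the paper's derivation), once the mixture parameters are estimated, the Bayes rule assigns a pixel with feature vector x to the class k that maximizes the posterior,
\[
\hat{k}(\mathbf{x}) = \arg\max_k\; \pi_k\, \mathcal{N}(\mathbf{x}\mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),
\]
where \(\pi_k\), \(\boldsymbol{\mu}_k\), and \(\boldsymbol{\Sigma}_k\) are the mixing weight, mean, and covariance of the k-th Gaussian component.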
In this paper, we present an algorithm that implements the automatic detection of faces and facial features in color images. First, we use the chroma-chart technique to determine the skin regions and then separate touching regions using morphological operators. Second, for each skin region we extract candidate facial-feature regions using morphological valley operators. Third, we match the candidate facial features against our proposed face model to determine the optimal combination of eyes and mouth, obtaining the final description of the faces' geometric information. In this work, we improve the performance of the chroma chart, introduce morphological operators into skin-region analysis and facial-feature detection, and put forward a matching scheme that is fast and stable. The whole process is completely automatic and is robust to lighting conditions, shading, tilt, multiple faces, etc.
In this paper a new image segmentation method is presented that uses a modified Mahalanobis distance (M-Mahalanobis distance) as the criterion for classifying pixels in the (gray level, average gray level) scatter plot. Experimental results show that this algorithm gives better segmentation, with stronger robustness and faster speed, for images with low contrast, many details, and noise.
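For reference, the (unmodified) Mahalanobis distance between a feature vector x = (gray level, local average gray level) and a class with mean μ and covariance Σ is
\[
d_M(\mathbf{x}) = \sqrt{(\mathbf{x}-\boldsymbol{\mu})^{\mathsf T}\,\boldsymbol{\Sigma}^{-1}\,(\mathbf{x}-\boldsymbol{\mu})};
\]
the paper's M-Mahalanobis distance adapts this criterion for classifying points of the 2-D scatter plot.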
Generally speaking, image analysis is the extraction of useful data and information from an image, and image segmentation is an important method in image analysis. In this paper, building on previous theory and experiments and with practical application in mind, we adopt an image segmentation method based on edge extraction, namely a line-scan spatial band-pass filter. This method can greatly reduce the amount of information and provides adequate preparation for automatic recognition by the system after segmentation.
First, the basic concepts of order-morphology transformations on gray-level images are introduced. Two generalized edge operators based on order-morphology filtering are then defined. Finally, the theory and method of these edge operators with percentile p equal to 1/2 are applied to noise suppression and edge extraction. Experiments show that, compared with common edge extraction methods, this method has good characteristics for suppressing noise while extracting edges.
A linear feature extraction method for infrared images is presented in this paper. It includes three major steps: image preprocessing using fuzzy feature transformation and fuzzy enhancement, computation of edge strength and direction maps, and a profile-maximum method. Comparisons with other edge detection methods, together with accuracy and robustness experiments, demonstrate its better positional accuracy and reliability. The results show that the proposed algorithm can solve the linear feature extraction problem for infrared images.
In this article, some basic problems of level set methods are discussed, such as the method used to preserve the distance function, the existence and uniqueness of solutions of the level set equations, and the analysis of singular points. It is shown that if solutions of the level set equations subject to the distance-function restriction exist, they must be the signed distance function to the evolving surface, and that a unique solution exists in a neighborhood of the initial zero level set; however, uniqueness of the solutions is hard to guarantee away from the initial zero level set. An important property of the singular points is given, which is a necessary and sufficient condition for distinguishing singular points from ordinary points. These results consolidate the theoretical basis of level set methods. In addition, a method for estimating the width of the narrow band is presented in order to avoid singular points during the iterative process of the level set methods. The implementation of the theory is demonstrated on real and synthetic images.
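For context, the level set formulation evolves a function φ whose zero level set is the moving front, typically via
\[
\frac{\partial \phi}{\partial t} + F\,|\nabla \phi| = 0, \qquad \phi(\mathbf{x}, 0) = \pm\, d(\mathbf{x}, \Gamma_0),
\]
where F is the speed in the normal direction and d is the distance to the initial front \(\Gamma_0\); the signed-distance property discussed above corresponds to maintaining \(|\nabla \phi| = 1\) during the evolution.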
This paper not only discusses precision tracking using segmentation in infrared (IR) images, but also describes how to implement the precision tracking with segmentation without relying on a priori information. The method is not limited to the single-target case; it can also be extended to multiple-target tracking. For both cases we carried out Monte Carlo simulations. The extensive simulation tests show that this tracking method succeeds not only in recognizing and tracking targets automatically but also in tracking a specified target; even against complicated noise backgrounds it achieves good results and high precision.
This paper proposes a new segmentation method based on a stable-state relation. The relation derives from the observation that, as the threshold changes, an object's contour may grow or shrink, but boundary points are visited by contours more often than interior points, and this relation is usually stable. Minimum area and edge intensity are the only two parameters required. Under the control of these two parameters, the method selects contours in the original image, accumulates them into a contour image, extracts contours from that contour image, and then performs merge-split processing and region growing step by step. In effect, it integrates gray value, edge information, and spatial connectivity. Experiments show that it extracts multi-level objects well even when the image is non-uniformly illuminated, making it general and practical.
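A minimal sketch of the counting idea behind the stable-state relation follows: sweeping the threshold and accumulating, per pixel, how often it falls on the thresholded contour makes boundary points stand out against interior points. The threshold range and step, and the inner-boundary contour test, are assumptions.

```python
# Count, per pixel, how often it lies on the contour of the thresholded region
# as the threshold sweeps; boundary pixels accumulate far higher counts.
import numpy as np
from scipy.ndimage import binary_erosion

def contour_visit_counts(image, thresholds=range(0, 256, 8)):
    counts = np.zeros(image.shape, dtype=int)
    for t in thresholds:
        mask = image >= t
        # Contour = mask minus its erosion (inner boundary of the region).
        contour = mask & ~binary_erosion(mask)
        counts += contour
    return counts   # high counts indicate likely object boundary points
```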
Road cracks in the highway surface are very dangerous to traffic; they should be found and repaired as early as possible. We therefore designed a system for automatically detecting cracks in the highway surface. The system involves several key steps. First, image acquisition must use a high-quality camera because of the high vehicle speed. In addition, the raw data volume is very large, so large storage media and effective compression are needed. Because the illumination is strongly affected by the environment, preprocessing such as image reconstruction and enhancement is essential. The cracks themselves are very thin and hard to detect, so segmentation is rather difficult. This paper proposes a new segmentation method to detect such tiny cracks, even those only 2 mm wide. In this algorithm, we first perform edge detection to obtain seeds for the subsequent line growing, then delete the false candidates and extract the crack information. The method is sufficiently accurate and fast.
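The seed-then-grow step can be sketched as follows: strong edge responses provide seeds, dark connected components touched by a seed are kept as grown crack lines, and short components are discarded as false detections. All thresholds, the gradient-based edge detector, and the 8-connectivity are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_cracks(image, edge_thresh=40.0, dark_thresh=100, min_length=30):
    img = image.astype(float)
    gy, gx = np.gradient(img)
    seeds = np.hypot(gx, gy) > edge_thresh            # edge detection -> seed pixels
    dark = image < dark_thresh                        # cracks appear darker than the road
    labels, n = ndi.label(dark, structure=np.ones((3, 3)))     # candidate crack regions
    seeded = sorted(set(np.unique(labels[seeds & dark])) - {0})  # regions "grown" from a seed
    if not seeded:
        return np.zeros_like(dark)
    sizes = ndi.sum(dark, labels, index=seeded)
    keep = [lab for lab, s in zip(seeded, sizes) if s >= min_length]  # drop false, short ones
    return np.isin(labels, keep)
```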
This paper focuses on an effective face recognition algorithm for poor-quality video data and its real-time implementation. As the fundamental step of our approach, a fast face detection method based on color information is presented. Instead of performing pixel-based color segmentation on each single face image, we incorporate color information into a face detection scheme based on spatio-temporal filtering of image sequences, which reduces the noise in surveillance video. Our face recognition method is based on principal component analysis (PCA), which is effective and fast for surveillance video in comparison with feature-based methods. For the training set, a large database of face images of each subject, digitized at three head orientations, is set up. We use separate eigenspaces for the different head orientations, so that the collection of images taken from each orientation has its own eigenspace. For real-time implementation, an automatic face detection and recognition system built on the TI digital signal processor (DSP) TMS320C6201 is described.
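A compact sketch of the view-specific eigenspace idea: one PCA eigenspace is built per head orientation, a probe is assigned to the view whose eigenspace reconstructs it best, and it is then matched by nearest neighbour in that face space. The component count and the nearest-neighbour rule are assumptions.

```python
# One eigenspace per head orientation; the probe is classified in the view
# with the smallest reconstruction error, by nearest neighbour in face space.
import numpy as np

def build_eigenspace(train, n_components=20):
    """train: (n_images, n_pixels) array of faces for one head orientation."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:n_components]                       # principal directions
    coords = (train - mean) @ basis.T               # training projections
    return mean, basis, coords

def recognize(probe, eigenspaces, labels_per_view):
    best = None
    for view, (mean, basis, coords) in eigenspaces.items():
        proj = (probe - mean) @ basis.T
        recon_err = np.linalg.norm(probe - mean - proj @ basis)
        d = np.linalg.norm(coords - proj, axis=1)   # nearest neighbour in face space
        cand = (recon_err, labels_per_view[view][d.argmin()])
        best = cand if best is None or cand[0] < best[0] else best
    return best[1]
```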
Corner points are among the most important feature points in computer vision and pattern recognition. In this paper we introduce a new boundary-based corner detection method that uses the wavelet transform for its ability to detect sharp variations. Our idea is derived from Jiann-Shu Lee et al.'s algorithm, but, unlike them, we represent the orientation profile in an almost continuous way. Theoretical analysis and experimental results show that our method is effective in detecting corners.
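The following sketch shows the general shape of boundary-based corner detection on an orientation profile: the tangent orientation along the contour is analysed with a Haar-like wavelet, and large-modulus local maxima of the response are taken as corner candidates. The analysing function, scale, and threshold are assumptions, not the algorithm of the paper.

```python
# Sharp variations of the tangent orientation along a closed boundary give
# large wavelet-transform moduli; local maxima above a threshold are corners.
import numpy as np

def corner_candidates(contour, scale=8, thresh=0.5):
    """contour: (N, 2) array of ordered points of a closed boundary."""
    d = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    theta = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))        # orientation profile
    haar = np.concatenate([np.ones(scale), -np.ones(scale)]) / (2 * scale)
    wrapped = np.concatenate([theta, theta[:2 * scale]])    # circular extension
    resp = np.abs(np.convolve(wrapped, haar, mode="valid"))[:len(theta)]
    # Local maxima of the response above the threshold are corner candidates.
    peaks = (resp > thresh) & (resp >= np.roll(resp, 1)) & (resp >= np.roll(resp, -1))
    return np.nonzero(peaks)[0]
```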
In this paper, we present a method for feature extraction on low-contrast magnetic domain images of magneto-optical (MO) recording films. The method consists of three steps: first, Lee filtering is adopted for pre-filtering and noise reduction; next, gradient feature segmentation separates the object area from the background; finally, a common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction from low-contrast images.
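For reference, a minimal Lee filter of the kind mentioned in the pre-filtering step is sketched below; the window size and the noise-variance estimate are assumptions.

```python
# Lee filter: each pixel becomes a weighted mix of the local mean and the raw
# value, with more weight on the raw value where the local variance is high
# (edges) and more weight on the mean in flat, noisy areas.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, size=5, noise_var=None):
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    var = mean_sq - mean * mean
    if noise_var is None:
        noise_var = np.median(var)        # crude noise-variance estimate
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)
```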
The 2-D entropic thresholding method is a very effective approach for image segmentation, but its computational complexity of up to O(L^4) greatly limits its application. Wu et al. proposed a fast recursive algorithm based on Abutaleb's 2-D entropic thresholding method, which reduces the computational complexity to O(L^2) at a memory cost of 2L floating-point words. This paper presents a novel fast search algorithm for the optimal thresholding vector, which reduces the computational complexity to about 3L + w^2, where w is less than 10, with very little memory cost.
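To make the complexity figures concrete, the sketch below evaluates a simplified two-region 2-D entropy criterion in O(1) per candidate (via cumulative sums over the 2-D histogram), scans a coarse path of about L candidates, and then searches exhaustively inside a small w x w window, mirroring the 3L + w^2 structure claimed above. The simplified criterion and the choice of coarse path are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def entropy_2d(cums, s, t):
    """Simplified two-region 2-D entropy criterion (Abutaleb-style)."""
    P, H = cums                           # cumulative probability and cumulative -p*log(p)
    p_bg = P[s, t]
    p_fg = 1.0 - p_bg
    if p_bg <= 0.0 or p_fg <= 0.0:
        return -np.inf
    h_bg = H[s, t] / p_bg + np.log(p_bg)
    h_fg = (H[-1, -1] - H[s, t]) / p_fg + np.log(p_fg)
    return h_bg + h_fg

def fast_threshold(hist2d, w=9):
    p = hist2d.astype(float) / hist2d.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, -p * np.log(p), 0.0)
    P = p.cumsum(0).cumsum(1)
    H = plogp.cumsum(0).cumsum(1)
    L = p.shape[0]
    # Coarse pass along the histogram diagonal: about L criterion evaluations.
    coarse = max(range(L), key=lambda k: entropy_2d((P, H), k, k))
    # Local pass: exhaustive search in a w x w window around the coarse optimum.
    lo, hi = max(0, coarse - w // 2), min(L, coarse + w // 2 + 1)
    return max(((s, t) for s in range(lo, hi) for t in range(lo, hi)),
               key=lambda st: entropy_2d((P, H), *st))
```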
This paper describes a technique to segment overlapping and touching chromosomes of human metaphase cells. Automated chromosome classification has been an important pattern recognition problem for decades, and numerous attempts have been made to characterize chromosome band patterns, but successful separation of touching and overlapping chromosomes is vital for correct classification. Since chromosomes are non-rigid objects, common methods for separating touching objects are not usable. We propose a method that uses shape concavity and convexity information, topology analysis, and pale band paths to segment touching and overlapping chromosomes. To detect the concavity and convexity information, we first pre-segment the chromosomes and obtain the edges of the overlapping and touching chromosomes: after filtering the original image with an edge-preserving filter, we apply Otsu's segmentation method and extract the chromosome boundaries. The boundary can then be used to segment the overlapping and touching chromosomes by detecting concave and convex points from the boundary information. Most traditional boundary-based algorithms detect corners in two steps: first, a smoothed version of the curvature is computed at every point along the contour; second, the positions of curvature maxima are detected and thresholded as corner points. Recently, the wavelet transform has been adopted in corner detection algorithms. Since overlapping metaphase chromosomes have corners at multiple scales, we adopt a multi-scale corner detection method based on Hua's method. For touching chromosomes, it is convenient to split them using pale paths: starting from concave corner points, a search algorithm is presented that traces three pixels into the object along the normal vector (to avoid stopping at the initial boundary) and continues until it reaches another boundary or a previously traced route. For overlapping chromosomes, this search algorithm fails, so we propose a topology-information-based method for analyzing overlapping and touching chromosomes. Mihail Popescu adopted the Cross Section Sequence Graph (CSSG) method for shape analysis, and Gady Agam proposed a discrete curvature function for splitting touching and overlapping chromosomes, but due to the non-rigid nature of chromosomes it is hard to determine their actual topological structure. In this paper we propose a new method to derive the topology information of chromosomes and obtain good results in chromosome segmentation.
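The concave-corner detection that seeds the splitting paths can be sketched as follows: the turning angle of the (sub-sampled) boundary is computed at every point and, for a counter-clockwise contour, sharp right turns are reported as concave points. The smoothing span and angle threshold are assumptions.

```python
# Concave points on a closed, counter-clockwise boundary: points where the
# contour makes a sharp right turn (negative signed turning angle).
import numpy as np

def concave_points(contour, span=5, angle_thresh=0.4):
    """contour: (N, 2) ordered points of a closed, counter-clockwise boundary."""
    v1 = contour - np.roll(contour, span, axis=0)      # incoming direction
    v2 = np.roll(contour, -span, axis=0) - contour     # outgoing direction
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    turn = np.arctan2(cross, dot)    # + left turn (convex), - right turn (concave)
    return np.nonzero(turn < -angle_thresh)[0]
```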
This paper presents the application of several methods to cell image segmentation. A cell's DNA value, area, and perimeter are important parameters in cancer diagnosis and prognosis. Automatic counting of cells and calculation of these parameters are desirable, since manual operation is tedious, time-consuming, and prone to inaccuracies. With the development of image analysis technology, it has become possible to achieve this goal, and a cell image analysis system was developed in our laboratory to perform the process automatically and accurately. In this process the most pivotal step is segmentation of the objects; difficulties may arise for various reasons such as poor contrast, touching and overlapping cells, and image acquisition hardware artifacts. This paper illustrates an approach based on several effective segmentation methods that solve typical problems encountered in such applications. These segmentation methods have been used in our system, and the test results show that they are valid and work well on these complex problems. Using the techniques described in this paper, we can count the cells and obtain the parameters successfully.
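Once the cells are segmented, the measurement step amounts to labelling connected components and computing per-cell parameters; the sketch below reports area, a crude boundary-pixel perimeter, and an integrated-intensity stand-in for the DNA value. The perimeter estimate and 8-connectivity are assumptions.

```python
# Label segmented cells and report the parameters named in the abstract.
import numpy as np
from scipy import ndimage as ndi

def measure_cells(mask, intensity=None):
    labels, n = ndi.label(mask, structure=np.ones((3, 3)))
    results = []
    for i in range(1, n + 1):
        cell = labels == i
        area = int(cell.sum())
        boundary = cell & ~ndi.binary_erosion(cell)
        perimeter = int(boundary.sum())   # crude perimeter: boundary pixel count
        dna = float(intensity[cell].sum()) if intensity is not None else None
        results.append({"area": area, "perimeter": perimeter, "integrated_intensity": dna})
    return results
```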
Image segmentation, the process that subdivides an image into its constituents and extracts the parts of interest, is one of the most important operations in many image analysis problems. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks (CNNs). A 3x3 CNN cloning template is applied, which performs smoothing and handles well the conflict between noise resistance and the detection of edges of complex shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but depend on the input gray-scale values. The network is similar in construction to the Contour Extraction CNN, but the algorithm differs in several respects. Experimental results show that the second-order difference CNN has good edge-detection capability: it is better than the Contour Extraction CNN at detecting detail and more effective than the Laplacian of Gaussian (LoG) algorithm.
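For comparison, the constant-coefficient special case of a second-order difference template is the familiar 3x3 Laplacian shown below; in the CNN described above, the control-template coefficients are instead recomputed from the local input gray values. The threshold here is an assumption.

```python
# Constant-coefficient second-order difference (Laplacian) as a baseline; the
# paper's CNN makes these coefficients depend on the input gray levels.
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN_3x3 = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)

def second_order_edges(image, thresh=20.0):
    response = convolve(image.astype(float), LAPLACIAN_3x3, mode="nearest")
    return np.abs(response) > thresh
```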
In this paper, fundus images are first pre-processed in order to enhance them. Then the snake model is improved: the initial snake control points are obtained automatically, and a centripetal force is added to the snake model. This improvement not only speeds up convergence during snake evolution but also remedies shortcomings of the classical snake model. Using this new snake model, the contours of the optic disc and optic cup in fundus images are extracted successfully, and a satisfactory result is obtained.
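For context, the classical snake energy and one plausible form of the added centripetal force can be written as follows; the energy is the textbook formulation, while the centripetal term is an assumed illustration, not the paper's exact expression.

```latex
% Classical active-contour (snake) energy (textbook form):
E[v] = \int_0^1 \Big( \tfrac{\alpha}{2}\,\lvert v'(s)\rvert^2
       + \tfrac{\beta}{2}\,\lvert v''(s)\rvert^2
       + E_{\mathrm{ext}}\big(v(s)\big) \Big)\,ds .
% One assumed form of an added centripetal force, pulling each snake point
% v(s) toward the contour centroid \bar{v} with weight \gamma:
F_{\mathrm{cp}}(s) = \gamma\,\frac{\bar{v} - v(s)}{\lVert \bar{v} - v(s)\rVert}.
```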
In this paper, a rapid knowledge-based segmentation method for vehicle license plates is presented. Building on the image segmentation, the location accuracy and real-time performance are improved by scanning the binarized image in the row and column directions; in addition, the location and size information of the license plate is exploited to reduce the area to be examined. We have carried out a series of experiments on vehicle images of different kinds taken at different distances. The experimental results show that the proposed method locates plates accurately and is robust.
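The row/column scanning can be sketched with projection profiles of the binarized image: dense bands of foreground pixels bound the plate region, and prior knowledge of plate size rejects implausible candidates. The density threshold and size limits are assumptions.

```python
# Locate a plate candidate from row/column projections of a binary image,
# then check it against the known plate size range.
import numpy as np

def locate_plate(binary, min_w=80, max_w=240, min_h=20, max_h=80, density=0.15):
    rows = binary.sum(axis=1) / binary.shape[1]      # row-direction scan
    cols = binary.sum(axis=0) / binary.shape[0]      # column-direction scan
    r = np.nonzero(rows > density)[0]
    c = np.nonzero(cols > density)[0]
    if r.size == 0 or c.size == 0:
        return None
    top, bottom, left, right = r[0], r[-1], c[0], c[-1]
    h, w = bottom - top, right - left
    # Size knowledge rejects implausible candidates.
    if min_h <= h <= max_h and min_w <= w <= max_w:
        return (top, left, bottom, right)
    return None
```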
When using videophones or distance learning, people want to see human faces as realistically as possible even at very low bit rates, so how to synthesize a human face for delivery over networks such as the Internet and the PSTN has drawn much attention. Conventional techniques based on low-level features cannot perform the desired operation, while model-based methods need much prior knowledge. The authors present a new algorithm for human face synthesis that produces a virtual face, guided by the human vision system, at bit rates ranging from several kb/s to tens of kb/s. An adaptive face image filter (AFIF) is used to attenuate noise while preserving face edges and details. A facial region detection method identifies the pixels that belong to a face. Then, with a novel facial texture interpolation method whose key feature is a group of diffusion functions used for interpolation, the face is rendered in gray scale. Finally, color is rendered over the whole face in a scalable manner.
In order to monitor, analyze, and evaluate the aging, health, and nutritional status of our skin, currently the most efficient way is to analyze dermal images using modern digital image processing techniques. As the preprocessing stage of such a skin analysis system, in this paper we propose a method for extracting the centerlines of the creases that form a mesh over the dermis in a clinical dermal image. The method consists of the following steps: first, feature-space transformation, which maps the pixels from the gray-scale space into a feature space according to their regional energy; second, filtering, in which a morphological filter is designed to remove noise and spurious minima (the amount of filtering can be tuned manually to obtain different results); third, marker selection, using a top-hat transformation based on morphological reconstruction to obtain a reliable set of markers; finally, the watershed transformation, which extracts the central axes and forms a network of lines. Experiments show that this method is effective for extracting roof-edge centerlines.
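A compressed sketch of the marker-controlled watershed stage is given below; note that, for brevity, the markers here come from h-minima of the feature image rather than from the top-hat-by-reconstruction described above, and the filter depth h is an assumption.

```python
# Marker-controlled watershed: significant regional minima seed the basins and
# the watershed lines between basins approximate the crease centerline network.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def crease_centerlines(feature, h=10):
    f = feature.astype(float)
    markers, _ = ndi.label(h_minima(f, h))           # markers: significant minima
    labels = watershed(f, markers, watershed_line=True)
    return labels == 0                                # watershed lines ~ centerlines
```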
This paper presents an automatic approach for extracting a face and estimating the facial expression from a gray-scale image. To improve the execution time and the tolerance to variation, wavelet decomposition is used in our approach for feature extraction. The main steps of the proposed approach are as follows. First, each input image is preprocessed with normalization, which includes translation, rotation, scaling, and light-source adjustment. Second, the wavelet decomposition transform is applied to decompose the image into different levels, and the information extracted from these levels is used to estimate the face region with a template-matching method. Finally, the facial expression is estimated from the detected face region using an active shape template. Experimental results confirm the feasibility of the proposed approach.
On the basis of facial geometric features and a muscle model, this paper presents a generation method for two-dimensional human face images. The method uses a feature mesh model to describe the facial components accurately, and takes the displacement of contour feature points as the parameter for facial image morphing and expression synthesis. At the same time, it controls the interaction between facial components and the variation of the facial contour by using an integral face model. Experimental results show that the facial images generated by this algorithm are accurate and that the synthesized expressions appear more natural.
It is difficult to segment cloud images because of the complicated, varied shapes and blurry edges of clouds. In this paper, we present an idea and a realizable design for self-adaptive segmentation by means of mathematical morphology. The created models accurately capture characteristics of clouds such as shape, scale, and temperature. Practical tests show that the segmentation models are self-adaptive, the program is general, and the algorithm is highly efficient thanks to parallel operation.
An adaptive approach to small-object segmentation based on genetic algorithms is proposed. A new parameter, the scale of the object area's percentage, is introduced; it overcomes the P-tile method's requirement of knowing the exact percentage of the object area while still making effective use of the small-object characteristic. A genetic algorithm forms the skeleton of the new approach and dynamically locates the optimal threshold in the search space. The proposed algorithm can be extended to segment images containing objects of arbitrary size simply by changing the setting of the new parameter. Experimental results indicate that the proposed algorithm achieves better segmentation quality and lower computational cost than the conventional Otsu method.
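The sketch below shows one way a genetic algorithm can search for a threshold while honouring an expected object-area scale: the fitness rewards class separation and penalises deviation of the segmented fraction from the target percentage. The population size, number of generations, fitness form, and variation operators are assumptions, not the paper's design.

```python
import numpy as np

def ga_threshold(image, target_scale=0.02, pop_size=20, generations=40, seed=0):
    rng = np.random.default_rng(seed)

    def fitness(t):
        fg, bg = image[image >= t], image[image < t]
        if fg.size == 0 or bg.size == 0:
            return -np.inf
        frac = fg.size / image.size
        # Reward class separation, penalise deviation from the expected object scale.
        return abs(fg.mean() - bg.mean()) - 1000.0 * (frac - target_scale) ** 2

    population = rng.integers(0, 256, size=pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t) for t in population])
        parents = population[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
        offspring = parents[rng.integers(0, parents.size, pop_size - parents.size)]
        offspring = offspring + rng.integers(-8, 9, offspring.size)       # mutation
        population = np.clip(np.concatenate([parents, offspring]), 0, 255)
    return int(max(population, key=fitness))
```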