In this paper, we present a Support Vector Machine (SVM) based pixel classifier for a semi-automated segmentation algorithm that detects neuronal membrane structures in stacks of electron microscopy images of brain tissue samples. For each pixel, the algorithm extracts a high-dimensional feature vector from center-surround patches together with several distinct edge-sensitive features, and the classifier is trained on a labeled dataset to separate neuronal membrane structures from background. A threshold condition is then applied to remove regions smaller than a given size, and morphological operations, such as filling of the detected objects, are performed to obtain compact objects. The performance of the segmentation method is evaluated on unseen data against the respective ground truth using three distinct error measures: pixel error, warping error, and Rand error, as well as a pixel-by-pixel accuracy measure. The trained SVM classifier achieves best values of 0.23, 0.016, and 0.15 on these three error measures, respectively, while the best pixel-by-pixel accuracy reaches 77% on the given dataset. The results presented here are one step further towards exploring possible solutions to hard problems such as segmentation in medical image analysis. In the future, we plan to extend this work to a 3D segmentation approach for 3D datasets, both to retain the topological structures in the data and to ease further analysis.
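The pipeline described above can be illustrated with a minimal sketch (not the authors' implementation): each pixel is described by the raw intensities of a small center-surround patch, and an RBF-kernel SVM is trained to label membrane versus background. The patch radius, kernel choice, and toy image are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

def patch_features(img, r=2):
    """Center-surround patch features for every pixel (simplified sketch).

    Each pixel is described by the intensities of the (2r+1)x(2r+1)
    patch around it; image borders are handled by reflection.
    """
    padded = np.pad(img, r, mode="reflect")
    h, w = img.shape
    feats = np.empty((h * w, (2 * r + 1) ** 2))
    for i in range(h):
        for j in range(w):
            feats[i * w + j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()
    return feats

# toy training image: a bright vertical "membrane" on a dark background
img = np.zeros((16, 16))
img[:, 7:9] = 1.0
labels = (img > 0.5).ravel().astype(int)  # 1 = membrane, 0 = background

X = patch_features(img)
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
pred = clf.predict(X).reshape(img.shape)
print((pred == img.astype(int)).mean())  # pixel-by-pixel training accuracy
```

A real system would of course train and test on disjoint image stacks and follow the classifier with the size-threshold and morphological post-processing steps the abstract describes.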
As the usage of 3D models increases, so does the importance of developing accurate 3D shape retrieval algorithms. A
common approach is to calculate a shape descriptor for each object, which can then be compared to determine two
objects' similarity. However, these descriptors are often evaluated independently and on different datasets, making them
difficult to compare. Using the SHREC 2011 Shape Retrieval Contest of Non-rigid 3D Watertight Meshes dataset, we
systematically evaluate a collection of local shape descriptors. We apply each descriptor to the bag-of-words paradigm
and assess the effects of varying the dictionary's size and the number of sample points. In addition, several salient point
detection methods are used to choose sample points; these methods are compared to each other and to random selection.
Finally, information from two local descriptors is combined in two ways and changes in performance are investigated.
This paper presents results of these experiments.
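The bag-of-words paradigm evaluated here can be sketched as follows: local descriptors pooled from all shapes are clustered into a dictionary of "words", and each shape is then represented by a normalized histogram of word occurrences. The descriptor values below are random stand-ins, and the dictionary size `k` is one of the parameters the abstract says is varied.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# stand-ins for local shape descriptors sampled from two 3D models
desc_a = rng.normal(0.0, 1.0, size=(200, 16))
desc_b = rng.normal(0.5, 1.0, size=(200, 16))

# build the dictionary by clustering the pooled descriptors
k = 32  # dictionary size: a tuning knob in the experiments
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
    np.vstack([desc_a, desc_b]))

def bow_histogram(desc, kmeans):
    """Quantize each descriptor to its nearest word; return an L1-normalized histogram."""
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

h_a = bow_histogram(desc_a, kmeans)
h_b = bow_histogram(desc_b, kmeans)
# a histogram distance (here L1) serves as the shape dissimilarity
print(np.abs(h_a - h_b).sum())
```

Varying `k` and the number of sampled points, as the experiments do, trades off the discriminative power of the histograms against quantization noise.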
In this paper we describe a new formulation of 3D salient local features on a voxel grid, inspired by the Scale Invariant Feature Transform (SIFT). We use it to identify salient keypoints (invariant points) on a 3D voxelized model and compute invariant 3D local feature descriptors at these keypoints. We then use the bag-of-words approach on the 3D local features to represent the 3D models for shape retrieval. An advantage of the method is that it applies to rigid as well as articulated and deformable 3D models. Finally, the approach is applied to 3D shape retrieval on the McGill articulated shape benchmark, and the retrieval results are presented and compared to other methods.
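A SIFT-style keypoint detector on a voxel grid can be sketched in simplified form: build a difference-of-Gaussians (DoG) response on the occupancy grid and keep local extrema above a contrast threshold. This toy version uses a single scale pair and no orientation assignment, so it is only an illustration of the idea, not the paper's detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(voxels, sigma=1.0, k=1.6, thresh=0.02):
    """Difference-of-Gaussians extrema on a voxel grid (simplified sketch:
    one scale pair, local-maximum test only, no orientation assignment)."""
    dog = gaussian_filter(voxels, k * sigma) - gaussian_filter(voxels, sigma)
    local_max = (dog == maximum_filter(dog, size=3)) & (np.abs(dog) > thresh)
    return np.argwhere(local_max)  # (N, 3) array of keypoint voxel coordinates

# toy voxelized model: a small filled cube inside a 32^3 grid
grid = np.zeros((32, 32, 32))
grid[12:20, 12:20, 12:20] = 1.0
kps = dog_keypoints(grid)
print(len(kps))
```

In the full method, local feature descriptors computed at such keypoints feed the bag-of-words representation; because keypoints respond to local surface structure, the representation tolerates articulation and deformation.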
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results.
These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based
techniques also allow users to search with various queries such as binary images, range images and even 2D
sketches.
The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments,
Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods
describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal
Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization to describe the shape
information of the views. We treat the depth images obtained from various points on the view sphere as 2D intensity
images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of
categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set
conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
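The data-driven subspace idea can be sketched with PCA (one of the three models the abstract names): flattened depth views from the whole database train the subspace, each view is described by its projection coefficients, and retrieval ranks views by distance in that subspace. The view counts and image size below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# stand-in for depth images rendered from points on the view sphere:
# 60 views, each a 32x32 depth map flattened to a vector
views = rng.random((60, 32 * 32))

# train a data-driven subspace over the database of views
pca = PCA(n_components=16).fit(views)

# each view is described by its low-dimensional projection coefficients
codes = pca.transform(views)
print(codes.shape)  # (60, 16)

# retrieval: rank views by Euclidean distance in the subspace
query = codes[0]
dists = np.linalg.norm(codes - query, axis=1)
ranking = np.argsort(dists)
print(ranking[0])  # the query itself ranks first (distance 0)
```

Independent Component Analysis or Nonnegative Matrix Factorization would slot into the same pipeline in place of PCA, changing only how the subspace basis is learned.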
KEYWORDS: Image fusion, Facial recognition systems, 3D image processing, Databases, Principal component analysis, Biometrics, Mahalanobis distance, Digital cameras, Digital imaging, Light sources and illumination
In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to recognition based on color map information. The 3D surface and color map data come from the CAESAR anthropometric database. Using a principal component analysis algorithm, we find that recognition performance does not differ greatly between 3D surface and color map information. We also examine techniques for combining the 3D surface and color map information for multi-modal recognition using different fusion approaches, and show that fusion yields a significant improvement in results. The effectiveness of the various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
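One common fusion approach for combining two modalities is score-level fusion: match scores from each modality are normalized to a common range and combined with a weighted sum. The sketch below illustrates this scheme with made-up scores; the weight `w` and the toy numbers are hypothetical, not values from the paper.

```python
import numpy as np

def minmax_normalize(scores):
    """Scale match scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(shape_scores, color_scores, w=0.5):
    """Weighted-sum score-level fusion of two modalities
    (a simple common scheme; w is a hypothetical tuning parameter)."""
    return (w * minmax_normalize(shape_scores)
            + (1 - w) * minmax_normalize(color_scores))

# toy match scores of one probe against 4 gallery subjects
shape = [0.9, 0.4, 0.3, 0.2]   # 3D surface modality favors subject 0
color = [0.5, 0.6, 0.2, 0.1]   # color map modality is ambiguous between 0 and 1
fused = fuse(shape, color)
print(int(np.argmax(fused)))   # fusion resolves the match to subject 0
```

Other fusion levels (feature-level concatenation before PCA, or decision-level voting) fit the same multi-modal setting; score-level fusion is shown here only because it is the simplest to illustrate.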
Conference Committee Involvement (6)
Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2015
10 February 2015 | San Francisco, California, United States
3D Image Processing, Measurement (3DIPM), and Applications 2014
5 February 2014 | San Francisco, California, United States
3D Image Processing (3DIP) and Applications 2013
6 February 2013 | Burlingame, California, United States
3D Image Processing (3DIP) and Applications 2012
24 January 2012 | Burlingame, California, United States
3D Image Processing (3DIP) and Applications II
26 January 2011 | San Francisco Airport, California, United States
3D Image Processing (3DIP) and Applications
18 January 2010 | San Jose, California, United States