The behavior of nine 3D shape descriptors computed on the surface of 3D face models is studied.
The set of descriptors includes six curvature-based descriptors, SPIN Images, Folded SPIN Images, and Fingerprints.
Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template
composed of 28 anatomical regions is used to segment the models and to extract the locations of different
landmarks and fiducial points. Vertices are grouped by region, by region boundaries, and by subsampled versions
of both. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions
and to identify key points on the facial surface. The experiment includes testing with data from neutral faces
and faces showing expressions. Also, to assess the usefulness of the bending-invariant canonical form
(BICF) in handling variations due to facial expressions, the descriptors are computed both directly from the surface
and from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices
were analyzed.
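As an illustration of how this kind of per-region analysis can be organized, the sketch below computes a simple per-vertex curvature proxy on a triangulated face mesh and summarizes it over anatomical regions. It is only a sketch: the per-vertex region labels are assumed to come from the 28-region template, and the normal-deviation proxy is a hypothetical stand-in for the nine descriptors actually studied.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals of a triangle mesh."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    fn = np.cross(v1 - v0, v2 - v0)              # face normals (length ~ 2 * area)
    vn = np.zeros_like(vertices)
    for k in range(3):
        np.add.at(vn, faces[:, k], fn)           # accumulate onto incident vertices
    return vn / (np.linalg.norm(vn, axis=1, keepdims=True) + 1e-12)

def curvature_proxy(vertices, faces):
    """Per-vertex curvature proxy: mean angular deviation between the normal
    at a vertex and the normals of its one-ring neighbours."""
    vn = vertex_normals(vertices, faces)
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    dots = np.einsum('ij,ij->i', vn[edges[:, 0]], vn[edges[:, 1]])
    ang = np.arccos(np.clip(dots, -1.0, 1.0))
    acc = np.zeros(len(vertices))
    cnt = np.zeros(len(vertices))
    for col in (0, 1):
        np.add.at(acc, edges[:, col], ang)
        np.add.at(cnt, edges[:, col], 1.0)
    return acc / np.maximum(cnt, 1.0)

def summarize_by_region(values, region_labels):
    """Mean and standard deviation of a per-vertex descriptor in each region."""
    return {int(r): (float(values[region_labels == r].mean()),
                     float(values[region_labels == r].std()))
            for r in np.unique(region_labels)}
```

A per-region summary of this kind (one row per anatomical region, one column per descriptor statistic) is the sort of table over which value distributions and relevance can then be compared.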
An extension of Bayesian Shape Models (BSM) to 3D space is presented. The extension is based on the
inclusion of shape information into the fitting functions. The shape information consists of 3D shape descriptors
derived from curvature and selected according to feature relevance. We also introduce the use of functions that
define the separation of face regions. To extract the features, the 3D BSM is deformed iteratively by searching
for the vertices that best match the shape, using a point distribution model obtained from a training dataset.
As a result of the fitting process, a 3D face model oriented in a frontal position and segmented into 48 regions
is obtained; 15 landmarks are then extracted from this model. The 3D BSM
was trained with 150 3D face models from two different databases, and evaluated using a leave-one-out scheme.
The model segmentation and the landmark locations were compared against a ground-truth segmentation and
point locations. From this comparison, the proposed system achieves an accuracy of approximately one millimeter, and the orientation of the models in the frontal position has an average error of approximately 1.5 degrees.
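For orientation, the sketch below shows one generic iteration scheme for fitting a point distribution model of the kind mentioned above. It is not the fitting procedure of this work: the `find_targets` callback, the Kabsch alignment step, and the clamping of shape parameters to three standard deviations are assumptions standing in for the descriptor-based fitting functions and region-separation functions described here.

```python
import numpy as np

def fit_pdm(mean_shape, modes, eigvals, find_targets, n_iter=10, clamp=3.0):
    """Generic iterative point-distribution-model fitting (a sketch).

    mean_shape   : (N, 3) mean vertex/landmark positions from training
    modes        : (3N, K) orthonormal PCA modes of shape variation
    eigvals      : (K,) variances (eigenvalues) associated with the modes
    find_targets : callable returning the (N, 3) candidate positions that
                   best match the current shape (hypothetical; assumed to be
                   driven by the shape descriptors)
    """
    b = np.zeros(modes.shape[1])                       # shape parameters
    for _ in range(n_iter):
        shape = mean_shape + (modes @ b).reshape(-1, 3)
        targets = find_targets(shape)

        # Rigid (Kabsch) alignment: rotation taking the targets onto the shape.
        P = targets - targets.mean(axis=0)
        Q = shape - shape.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        aligned = P @ R.T + shape.mean(axis=0)         # targets in the model frame

        # Project the residual onto the modes and clamp each parameter to
        # +/- clamp * sqrt(eigenvalue), a common plausibility constraint.
        b = modes.T @ (aligned - mean_shape).ravel()
        b = np.clip(b, -clamp * np.sqrt(eigvals), clamp * np.sqrt(eigvals))

    return mean_shape + (modes @ b).reshape(-1, 3), b
```

In a setting like the one described, `find_targets` would rank candidate vertices by how well their curvature-based descriptors match the expected values for each region or landmark, and the recovered rotation gives the frontal orientation of the fitted model.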