We address kinship verification, a challenging problem in computer vision and pattern discovery with applications such as organizing photo albums, recognizing resemblances among humans, and finding missing children. We present a system for facial kinship verification based on several texture descriptors (local binary patterns, local ternary patterns, local directional patterns, local phase quantization, and binarized statistical image features) combined with a pyramid multilevel (PML) face representation for feature extraction, together with our proposed paired feature representation and robust feature selection to reduce the number of features. The proposed approach consists of three main stages: (1) face preprocessing, (2) feature extraction and selection, and (3) kinship verification. Extensive experiments are conducted on five publicly available databases (Cornell, UB KinFace, Family 101, KinFaceW-I, and KinFaceW-II). In addition, we report extensive experiments for each stage to identify the most suitable settings, and we compare our approach with state-of-the-art methods; these comparisons show that our results are stable and competitive.
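The abstract does not give implementation details, so the following Python sketch is only a plausible reading of the PML idea: the face is rescaled across pyramid levels, each level is split into blocks, and a uniform-LBP histogram per block is concatenated into one vector. The function name, block layout, and all parameters are our assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import resize

def pml_lbp(face, levels=3, radius=1, n_points=8):
    """Toy pyramid multilevel (PML) LBP descriptor (illustrative only).

    At pyramid level l (1-based) the face is divided into l x l blocks;
    a uniform-LBP histogram is computed per block and all histograms
    are concatenated. Block counts and LBP parameters are assumptions,
    not the settings used in the paper.
    """
    n_bins = n_points + 2                      # uniform LBP yields P + 2 codes
    feats = []
    for l in range(1, levels + 1):
        h = w = 32 * l                         # rescale so blocks divide evenly
        img = resize(face, (h, w), anti_aliasing=True)
        codes = local_binary_pattern(img, n_points, radius, method="uniform")
        bh, bw = h // l, w // l
        for i in range(l):
            for j in range(l):
                block = codes[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                hist, _ = np.histogram(block, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
    return np.concatenate(feats)
```

The same loop applies unchanged to the other descriptors named in the abstract by swapping the per-block operator; a pair of faces can then be represented, for instance, by combining the two resulting vectors, though the paper's paired representation is its own contribution and is not reproduced here.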
The use of several images of various modalities has proved useful for solving problems arising in many different applications of remote sensing. The main reason is that each image of a given modality conveys its own part of specific information, which can be integrated into a single model in order to improve our knowledge of a given area. Given the large amount of available data, any such integration must be performed automatically. At the very first stage of an automated integration process, a rather direct problem arises: given a region of interest within a first image, the question is to find its equivalent within a second image acquired over the same scene but with a different modality. This problem is difficult because the decision to match two regions must rely on the part of the information common to the two images, even if their modalities are quite different. In this paper, we propose a new method to address this problem.
Image registration is a major issue in the field of remote sensing because it provides support for integrating information from two or more images into a model that represents our knowledge of a given application. It may be used to compare the content of two segmented images captured by the same sensor at different times, but it may also be used to extract and assemble information from images captured by various sensors corresponding to different modalities (optical, radar, etc.).
The registration of images from different modalities is a very difficult problem, not only because the data representations differ (e.g., vectors for multispectral images and scalar values for radar ones) but also, and especially, because an important part of the information differs from one image to another (e.g., hyperspectral signature versus radar response). Yet any registration process is based, explicitly or not, on matching the information common to the two images.
The problem we are interested in is to develop a generic approach that enables the registration of two images from different modalities when their spatial representations are related by a rigid transformation. This situation occurs frequently, and it requires a very robust and accurate registration process to provide the spatial correspondence.
First, we show that this registration problem between images from different modalities can be reduced to a matching problem between binary images. Many approaches exist for tackling this problem, and we give an overview of them. However, we must take into account the specificity of our context: we must select only those points of both images that are associated with the same information, so as to perform the pairing that leads to the registration parameters.
The approach we propose is a Hough-like method that separates relevant from non-relevant pairings, the Hough space being a representation of the rigid-transformation parameters. To characterize the relevant items in each image, we propose a new primitive that provides a local representation of patterns in binary images. We give a complete description of this approach, along with registration results for various types of images.
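The paper defines its own primitive and voting scheme; purely as an illustration of the underlying Hough-voting idea, here is a minimal Python sketch. It assumes each binary-image point carries a local orientation (our assumption; the paper's primitive is richer), so every cross-image pairing votes for one cell of a (theta, tx, ty) accumulator, and relevant pairings pile up on the true rigid transform while spurious ones scatter.

```python
import numpy as np

def hough_rigid(pts_a, ang_a, pts_b, ang_b,
                theta_bins=36, t_bins=64, t_max=100.0):
    """Toy Hough voting over rigid-transform parameters (theta, tx, ty).

    pts_a, pts_b: (N, 2) and (M, 2) keypoint coordinates in the two
    binary images; ang_a, ang_b: a local orientation per keypoint.
    Bin counts and the translation range t_max are illustrative.
    """
    acc = np.zeros((theta_bins, t_bins, t_bins))
    for (xa, ya), aa in zip(pts_a, ang_a):
        for (xb, yb), ab in zip(pts_b, ang_b):
            theta = (ab - aa) % (2 * np.pi)        # rotation implied by the pair
            c, s = np.cos(theta), np.sin(theta)
            tx = xb - (c * xa - s * ya)            # from b = R(theta) @ a + t
            ty = yb - (s * xa + c * ya)
            it = int(theta / (2 * np.pi) * theta_bins) % theta_bins
            ix = int((tx + t_max) / (2 * t_max) * t_bins)
            iy = int((ty + t_max) / (2 * t_max) * t_bins)
            if 0 <= ix < t_bins and 0 <= iy < t_bins:
                acc[it, ix, iy] += 1
    # The densest cell is the transform supported by the most pairings.
    it, ix, iy = np.unravel_index(acc.argmax(), acc.shape)
    theta = (it + 0.5) / theta_bins * 2 * np.pi
    tx = (ix + 0.5) / t_bins * 2 * t_max - t_max
    ty = (iy + 0.5) / t_bins * 2 * t_max - t_max
    return theta, tx, ty
```

The separation between relevant and non-relevant pairings falls out of the accumulator itself: pairings that vote far from the winning cell can be discarded before a final least-squares refinement of the transform.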
In this paper, we propose a new formalism that makes it possible to take image textural features into account in a very robust and selective way. This approach also allows these features to be visualized, so that experts can efficiently supervise an image segmentation process based on texture analysis. The texture concept has been studied through different approaches. One of them, based on the notion of ordered local extrema, is very promising. Unfortunately, this approach does not handle texture directionality, and the mathematical morphology formalism on which it is based does not permit extensions to this feature. This led us to design a new formalism for texture representation that is able to include directionality features. It produces a representation of the relevant texture features in the form of a surface z = f(x, y). The visualization of this surface gives experts sufficient information to discriminate between different textures.
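The abstract does not detail the formalism itself, so the sketch below only illustrates the output format it describes: rendering a directional texture feature as a surface z = f(x, y) that an expert can inspect. The feature used here, local gradient-orientation coherence from the structure tensor, is a plainly labeled stand-in, not the paper's method.

```python
import numpy as np
from scipy import ndimage

def texture_surface(img, win=9):
    """Map a directional texture feature over an image as z = f(x, y).

    Stand-in feature (NOT the paper's formalism): gradient-orientation
    coherence, i.e. the anisotropy of the structure tensor smoothed over
    a win x win neighbourhood. High z means strongly oriented texture;
    low z means isotropic texture.
    """
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    jxx = ndimage.uniform_filter(gx * gx, win)   # structure-tensor entries
    jyy = ndimage.uniform_filter(gy * gy, win)
    jxy = ndimage.uniform_filter(gx * gy, win)
    tmp = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1 = (jxx + jyy + tmp) / 2                   # tensor eigenvalues
    l2 = (jxx + jyy - tmp) / 2
    return (l1 - l2) / (l1 + l2 + 1e-12)         # coherence surface in [0, 1]
```

Plotting the returned array as a 3-D surface (or a heat map) gives the kind of visual summary the abstract describes: regions whose textures differ in directionality appear as visibly distinct reliefs.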