In this work we investigate the relationships between features representing images, fusion schemes for these
features, and kernel types used in a Web-based Adaptive Image Retrieval System (AIRS). Using the Kernel Rocchio
learning method, several kernels with polynomial and Gaussian forms are applied to general images represented
by annotations and by color histograms in the RGB and HSV color spaces. We propose different fusion schemes
that incorporate kernel selector components, and we perform experiments to study the relationships between
a concatenated feature vector and several kernel types. Experimental results show that an appropriate kernel can
significantly improve the performance of the retrieval system.
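For concreteness, the sketch below illustrates the kind of kernel-space Rocchio scoring described above, applied to a concatenated annotation/histogram vector. This is a minimal sketch, not the paper's implementation: the kernel parameters (degree, gamma), the equal weighting of relevant and non-relevant examples, and the helper names (`fuse`, `kernel_rocchio_score`) are illustrative assumptions.

```python
import numpy as np

def poly_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel: K(x, y) = (x . y + c)^degree."""
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

def kernel_rocchio_score(x, relevant, non_relevant, kernel):
    """Rocchio prototype evaluated in kernel feature space: mean
    similarity to relevant feedback images minus mean similarity to
    non-relevant ones. Equal weights are assumed here; the paper's
    exact Rocchio weighting is not reproduced."""
    pos = np.mean([kernel(x, r) for r in relevant]) if len(relevant) else 0.0
    neg = np.mean([kernel(x, n) for n in non_relevant]) if len(non_relevant) else 0.0
    return pos - neg

def fuse(annotation_vec, rgb_hist, hsv_hist):
    """Illustrative early fusion: concatenate the annotation vector
    with the RGB and HSV histograms into one feature vector."""
    return np.concatenate([annotation_vec, rgb_hist, hsv_hist])
```

After each feedback round, the system would rank all database images by `kernel_rocchio_score` and display the top results; a kernel selector component would then choose among candidate kernels, for example by held-out retrieval precision.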
KEYWORDS: Image retrieval, Data modeling, Expectation maximization algorithms, Feature extraction, RGB color model, Systems modeling, Visualization, Computing systems, Image processing, Commercial off the shelf technology
The goal of this work is to investigate the applicability of two probabilistic approaches, the Three-Way Aspect
Model (TWAPM) and Co-training, to retrieving images represented by multiple feature types from large image
collections. We test these approaches in a learning context via relevance feedback from the user. Our experiments
show that Co-training may be a better choice than TWAPM for a Web-based Image Retrieval System.
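To make the co-training side concrete, here is a minimal co-training loop over two feature views of the same images (for example, textual annotations and color histograms). It is a sketch under assumptions: the classifiers follow a scikit-learn-style `fit`/`predict_proba` interface, and the number of rounds and of pseudo-labeled images added per round are illustrative, not the paper's settings.

```python
import numpy as np

def co_training(clf_a, clf_b, L_a, L_b, y, U_a, U_b, rounds=5, k=2):
    """Minimal co-training over two aligned views of the same images.

    clf_a / clf_b: classifiers with fit(X, y) and predict_proba(X).
    L_a / L_b: labeled features per view; y: their labels.
    U_a / U_b: unlabeled features per view, kept index-aligned.
    Each round, each classifier pseudo-labels the k unlabeled images
    it is most confident about; those images join the shared labeled
    pool used to retrain both classifiers."""
    L_a, L_b, y = list(L_a), list(L_b), list(y)
    U_a, U_b = list(U_a), list(U_b)
    for _ in range(rounds):
        if not U_a:
            break
        clf_a.fit(np.array(L_a), np.array(y))
        clf_b.fit(np.array(L_b), np.array(y))
        for clf, U in ((clf_a, U_a), (clf_b, U_b)):
            if not U:
                break
            proba = clf.predict_proba(np.array(U))
            picks = np.argsort(proba.max(axis=1))[-k:]  # most confident
            for i in sorted(picks, reverse=True):       # pop high index first
                y.append(clf.classes_[proba[i].argmax()])
                L_a.append(U_a.pop(i))                  # keep views aligned
                L_b.append(U_b.pop(i))
    return clf_a, clf_b
```

In a relevance-feedback setting, the labeled pool would start from the images the user has judged, and the unlabeled pool would be the remainder of the collection.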
The goal of this paper is to investigate kernel selection for a Web-based AIRS. Using the Kernel Perceptron learning method, several kernels with polynomial and Gaussian Radial Basis Function (RBF) forms (6 polynomials and 6 RBFs) are applied to general images represented by color histograms in the RGB and HSV color spaces. Experimental results on the test collections show that performance varies significantly between kernel types and that choosing an appropriate kernel is important.
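The sketch below shows a dual-form kernel perceptron trained from a precomputed Gram matrix, together with a 6 + 6 candidate family of polynomial and RBF kernels of the kind the abstract mentions. The specific degrees and RBF widths are illustrative assumptions; the paper's exact parameter values are not given here.

```python
import numpy as np

def kernel_perceptron(K, y, epochs=10):
    """Dual-form kernel perceptron. K is the n x n Gram matrix over
    the training images; y holds labels in {-1, +1} (relevant /
    non-relevant). Returns dual coefficients alpha; the decision
    value for example i is sum_j alpha[j] * y[j] * K[i, j]."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:  # misclassified
                alpha[i] += 1.0
    return alpha

# A 6 + 6 candidate kernel family over feature matrices X (n x d)
# and Y (m x d); degrees and widths are illustrative choices.
poly_kernels = [lambda X, Y, d=d: (X @ Y.T + 1.0) ** d
                for d in range(1, 7)]
rbf_kernels = [lambda X, Y, g=g: np.exp(
                   -g * ((X ** 2).sum(1)[:, None]
                         + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T))
               for g in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0)]
```

Kernel selection then amounts to training one perceptron per candidate Gram matrix and keeping the kernel with the best validation precision.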
In an Adaptive Image Retrieval System (AIRS), user-system interaction is built through an interface that supports the relevance feedback process. Most existing image retrieval systems simply display the result list of images (or their thumbnails) in a 2D grid, without conveying any information about the relationships between images. In this context, we propose a new interactive multiple-view interface for our AIRS, in which each view illustrates these relationships through visual attributes (colors, shapes, proximities). We identify two types of users for an AIRS: a user who seeks images, whom we refer to as an end-user, and a user who designs and studies the collection and the retrieval system, whom we refer to as a researcher-user. With such views, the interface lets the user (end-user or researcher-user) interact with the system more effectively, by exposing more information about the request sent to the system and, through a better understanding of the results, by showing how to refine the query iteratively. Our qualitative evaluation of these multiple views in an AIRS shows that each view has its own limitations and benefits; together, however, the views offer complementary information that helps the user improve his or her search effectiveness.