Photo and video mosaicing have drawn considerable research interest in recent years. Most existing work, however, focuses on how to match images or video frames. This paper presents techniques for handling practical issues that arise when generating panoramic photos. Our experiments show that a simple translational motion model gives more robust results than an affine model for horizontally panned image sequences. Since some misalignment between two images always remains, no matter how well the matching is done, we propose a stitching method that finds a line of best agreement between the two images, making the misalignment less visible. We also present methods for correcting camera exposure changes and for blending the stitching line between the images, and we show panoramas generated from both still images and video.
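The two core ideas of this abstract, estimating a purely translational shift between overlapping frames and then cutting along a line where the images agree best, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exhaustive SAD search, the search radius `max_shift`, and the choice of a vertical seam column are all assumptions for the sketch.

```python
import numpy as np

def best_translation(ref, img, max_shift=8):
    """Exhaustive search for the integer (dy, dx) shift minimizing the
    mean absolute difference over the overlap of the two images.
    (Illustrative sketch; the paper's matching method may differ.)"""
    best, best_err = (0, 0), np.inf
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping window of ref and of img shifted by (dy, dx)
            y0, y1 = max(0, dy), min(h, h + dy)
            x0, x1 = max(0, dx), min(w, w + dx)
            a = ref[y0:y1, x0:x1]
            b = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            err = np.mean(np.abs(a.astype(float) - b.astype(float)))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def best_seam_column(a, b):
    """Given two aligned overlap regions, pick the column where they
    agree best, so the stitch cuts along a least-visible line."""
    col_err = np.abs(a.astype(float) - b.astype(float)).mean(axis=0)
    return int(np.argmin(col_err))
```

A translational model has only two parameters, which is one intuition for why it is more robust than a six-parameter affine fit when the true camera motion is a horizontal pan.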
We present a prototype system for managing and searching collections of personal digital images. The system allows a collection to be stored across a mixture of local and remote computers and managed seamlessly, and it provides multiple ways of organizing and viewing the same collection. It also provides a search function that combines face detection, low-level color, texture, and edge features, and digital camera capture settings to deliver high-quality search that is computed at the server but available from any networked device accessing the photo collection. Evaluations of the search facility using human relevancy experiments are provided.
The essential motivation for an object-based approach to video coding is the content-based functionality that such a coding scheme makes possible. In this work we present a region-based video coder that uses a segmentation map obtained from the previously reconstructed frame, thereby eliminating the need to transmit expensive shape information to the decoder. While the inspiration for this work derives from earlier work by Yokoyama et al., there are major differences between our work and that effort in the segmentation scheme employed, the motion model, and the handling of overlapped and uncovered regions. We use an edge-flow-based segmentation scheme, which produces consistent segmentation results over a variety of natural images. Because it combines luminance, chrominance, and texture information for image segmentation, it is well suited to segmenting real-world images. For motion compensation, we choose an affine model and use hierarchical region matching for accurate affine parameter estimation. Heuristic techniques eliminate overlapped and uncovered regions after motion compensation. Extensive coding results from our implementation are presented.
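The six-parameter affine motion model mentioned above maps each pixel coordinate of a region to its motion-compensated position. A minimal sketch of that mapping (the parameter naming `a1..a6` is our own convention, not the paper's):

```python
import numpy as np

def affine_motion(coords, params):
    """Apply a 6-parameter affine motion model to an (N, 2) array of
    (x, y) pixel coordinates:
        (x, y) -> (a1*x + a2*y + a3, a4*x + a5*y + a6)
    Translation is the special case a1 = a5 = 1, a2 = a4 = 0."""
    a1, a2, a3, a4, a5, a6 = params
    x, y = coords[:, 0], coords[:, 1]
    return np.stack([a1 * x + a2 * y + a3,
                     a4 * x + a5 * y + a6], axis=1)
```

An affine model can represent translation, rotation, scaling, and shear of a region, which is why it is a common choice for region-wise motion compensation; the hierarchical region matching described in the abstract is one way to estimate the six parameters robustly.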
KEYWORDS: Video, Image segmentation, Feature extraction, Video compression, Video surveillance, Video processing, Motion estimation, Data storage, Motion models, Information visualization
There is a growing need for new representations of video that allow not only compact storage of data but also content-based functionalities such as search and manipulation of objects. We present here a prototype system, called NeTra-V, that is currently being developed to address some of these content related issues. The system has a two-stage video processing structure: a global feature extraction and clustering stage, and a local feature extraction and object-based representation stage. Key aspects of the system include a new spatio-temporal segmentation and object-tracking scheme, and a hierarchical object-based video representation model. The spatio-temporal segmentation scheme combines the color/texture image segmentation and affine motion estimation techniques. Experimental results show that the proposed approach can handle large motion. The output of the segmentation, the alpha plane as it is referred to in the MPEG-4 terminology, can be used to compute local image properties. This local information forms the low-level content description module in our video representation. Experimental results illustrating spatio-temporal segmentation and tracking are provided.
Currently, quite a few image retrieval systems use color and texture as features to search images. However, because they rely on global features, these methods often retrieve results that make little perceptual sense. It is necessary to constrain feature extraction to homogeneous regions so that the relevant information within those regions can be well represented. This paper describes our recent work on an image segmentation algorithm useful for processing large and diverse collections of image data. A compact color feature representation better suited to these segmented regions is also proposed. Using the color and texture features with a region-based search, we achieve very good retrieval performance compared to search based on entire images.
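A region-based search of the kind described above can be sketched as: compute a compact color feature per segmented region, then rank database regions by distance to the query region. This is a minimal illustration under our own assumptions (a quantized RGB histogram with 8 bins per channel and L1 distance); the paper's compact color representation is more sophisticated.

```python
import numpy as np

def region_histogram(pixels, bins=8):
    """Quantized color histogram of one segmented region.
    `pixels` is an (N, 3) array of RGB values in [0, 255]."""
    q = np.clip(pixels // (256 // bins), 0, bins - 1).astype(int)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def rank_regions(query_hist, region_hists):
    """Rank database regions by L1 distance to the query histogram;
    the closest region comes first."""
    dists = [np.abs(query_hist - h).sum() for h in region_hists]
    return np.argsort(dists)
```

Because features are computed per region rather than over the whole image, a query for a red car is matched against red regions specifically, rather than being diluted by the background, which is the perceptual advantage the abstract points to.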