We present a novel mixture-of-links model for segmenting an object observed from multiple viewpoints. Each component of the mixture represents a temporal linkage between superpixels across all viewpoints, thereby expressing inter-view consistency. The principal goal is to find the maximum a posteriori estimate of the appearance models and the exact bounding box of the object in each view. To this end, segmentation is cast as finding more comprehensive and accurate samples with the mixture-of-links model. In contrast to most existing multi-view co-segmentation methods, which rely on time-consuming 3D information, our method uses only 2D cues, achieving faster speed without sacrificing accuracy. The experimental results confirm the effectiveness of our approach.
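The abstract does not specify the model's implementation, but the core idea — mixture components that link superpixels across viewpoints, fit by a maximum a posteriori style estimate — can be illustrated with a toy sketch. The code below is an assumption-laden stand-in, not the authors' method: it uses a generic 1-D Gaussian mixture fit by EM, where each scalar value stands in for a superpixel's appearance feature, each `(view_id, value)` pair is a superpixel tagged with its viewpoint, and each mixture component plays the role of one cross-view link. The function name `em_links` and all parameters are hypothetical.

```python
import math

def em_links(features, k=2, iters=100):
    """features: list of (view_id, value) pairs, e.g. a mean-colour feature
    per superpixel tagged with its viewpoint.  Returns one component label
    per superpixel; each component acts as a cross-view 'link' (hypothetical
    stand-in for the paper's mixture-of-links model)."""
    vals = [v for _, v in features]
    lo, hi = min(vals), max(vals)
    # Spread the link means across the data range; start with a narrow variance.
    mu = [lo + j * (hi - lo) / max(k - 1, 1) for j in range(k)]
    var = [((hi - lo) / (2 * k)) ** 2 + 1e-6] * k
    pi = [1.0 / k] * k
    resp = []
    for _ in range(iters):
        # E-step: responsibility of every link component for every superpixel,
        # pooled over all viewpoints -- this pooling is what ties views together.
        resp = []
        for _, x in features:
            w = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            s = sum(w) or 1e-12
            resp.append([wj / s for wj in w])
        # M-step: refit each link's appearance parameters from all views jointly.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            mu[j] = sum(r[j] * x for r, (_, x) in zip(resp, features)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, (_, x) in zip(resp, features)) / nj, 1e-6)
            pi[j] = nj / len(features)
    # MAP-style labelling: assign each superpixel to its most responsible link.
    return [max(range(k), key=lambda j, r=r: r[j]) for r in resp]

# Two viewpoints, each containing dark (background-like) and bright
# (foreground-like) superpixels; the fitted links group them across views.
views = [(0, 0.10), (0, 0.15), (0, 0.80),
         (1, 0.12), (1, 0.82), (1, 0.85)]
labels = em_links(views)
```

Because responsibilities are accumulated over superpixels from every viewpoint, the fitted components group appearance-similar regions across views, which is the inter-view consistency the abstract describes; the actual model would additionally encode temporal linkage and bounding-box geometry.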