In this paper, we study an integrated framework for generating expressive sketch animation from real video with user interaction. It consists of two main steps: (i) image sketch computation by a learning-based edge detector, and (ii) temporal sketch propagation by a robust stochastic matching algorithm. In the first step, given a video clip, the edge probability map of each frame is computed by a discriminative model trained on a collection of diverse features. A template sketch is then flexibly extracted from the first frame by threshold tuning, with user intervention allowed to refine the template. In the second step, this template is matched and localized against the image sketches of subsequent frames by the graph-based matching algorithm, and user interaction is allowed to sequentially correct the matching results. A number of sketch animations generated from real videos are presented in the experiments to verify the framework.
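The two steps can be illustrated with a minimal sketch in Python. The paper's discriminative edge detector and graph-based stochastic matching are not specified here, so this example substitutes a gradient-magnitude proxy for the edge probability map and a naive nearest-neighbor snap for the matching; the function names (`edge_probability`, `extract_sketch`, `propagate_template`) and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def edge_probability(frame):
    """Toy stand-in for the learned discriminative edge detector:
    normalized gradient magnitude as a pseudo edge-probability map."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def extract_sketch(prob_map, threshold=0.5):
    """Threshold-tune the probability map into a binary sketch;
    in the paper the user adjusts this threshold interactively."""
    return prob_map >= threshold

def propagate_template(template_pts, next_sketch):
    """Naive proxy for the graph-based matching step: snap each
    template point to its nearest edge pixel in the next frame."""
    edge_pts = np.argwhere(next_sketch)
    matched = [edge_pts[np.linalg.norm(edge_pts - p, axis=1).argmin()]
               for p in template_pts]
    return np.array(matched)

# Synthetic first frame: a bright square on a dark background.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 1.0
sketch = extract_sketch(edge_probability(frame))

# Next frame: the same square shifted by one pixel.
frame2 = np.zeros((32, 32))
frame2[9:25, 9:25] = 1.0
sketch2 = extract_sketch(edge_probability(frame2))

# Propagate the template sketch to the next frame.
template_pts = np.argwhere(sketch)
matched = propagate_template(template_pts, sketch2)
```

In the actual framework, the nearest-neighbor step would be replaced by the robust stochastic graph matching, which preserves the sketch's structural relations rather than matching points independently.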