This paper proposes a novel method that exploits three source features for video captioning. It fuses global video features with local object and regional features to model the relationships among objects and their motions, and it uses object tags rather than raw visual features to guide the generation of descriptions. Specifically, the multiple features are first extracted by pretrained models and treated as separate inputs alongside the video frames. An object-awareness attention block is then designed to fuse the different feature streams and to learn a joint video representation that carries both visual and linguistic semantics. Experiments on the MSVD and MSR-VTT datasets demonstrate the effectiveness of the proposed method, and ablation studies verify the contribution of each component.
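The abstract does not spell out the internals of the object-awareness attention block, but one plausible reading is a two-stage cross-attention: global frame features attend over local object features, and the result then attends over embeddings of the detected object tags. The sketch below illustrates that reading; the class name, layer choices, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the fusion idea: cross-attention that lets
# global frame features attend over local object features and then
# over linguistic tag embeddings. All names and shapes are assumptions.
import torch
import torch.nn as nn

class ObjectAwarenessAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.obj_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.tag_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, frame_feats, object_feats, tag_embeds):
        # frame_feats:  (B, T, d) global features per sampled frame
        # object_feats: (B, N, d) local region/object features
        # tag_embeds:   (B, K, d) embeddings of detected object tags
        # Stage 1: frames query the detected objects.
        x, _ = self.obj_attn(frame_feats, object_feats, object_feats)
        x = self.norm1(frame_feats + x)
        # Stage 2: the fused features query the tag embeddings,
        # yielding a joint visual-linguistic video representation.
        y, _ = self.tag_attn(x, tag_embeds, tag_embeds)
        return self.norm2(x + y)

# Usage with random tensors standing in for pretrained-model features.
B, T, N, K, d = 2, 16, 10, 5, 512
block = ObjectAwarenessAttention(d_model=d)
joint = block(torch.randn(B, T, d), torch.randn(B, N, d), torch.randn(B, K, d))
print(joint.shape)  # torch.Size([2, 16, 512])
```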