As a tool for expressing the common semantics of objects, language can describe the attributes and locations of objects within the scope of human vision. Searching for the location of an object in the visual field through natural language is an important human capability, and designing a mechanism that learns this ability is a major challenge for computer vision. Most existing object localization methods rely on strongly supervised annotations of the training set to train the model; however, these models lack interpretability and require expensive labels that are difficult to obtain. Facing these challenges, we propose a new method for localizing objects in fine-grained images from natural language descriptions. First, we propose a model that learns the semantically related parts shared between fine-grained images and language descriptions, achieving accurate localization without strong supervisory signals. In addition, we improve the contrastive loss function so that natural language descriptions better match the target regions of fine-grained images. Multi-scale fusion techniques are used to improve the model's ability to capture details in fine-grained images. Comprehensive experiments demonstrate that the proposed method achieves strong localization results on the CUB200-2011 dataset, and the proposed model shows strong zero-shot learning ability on untrained data.
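The abstract does not specify the exact form of the improved contrastive loss, so the following is only a minimal sketch of a standard symmetric InfoNCE-style region-text contrastive objective, assuming pooled region and sentence embeddings of equal dimension; the function name, dimensions, and temperature value are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code): a symmetric InfoNCE-style
# contrastive loss that pulls matched region/description embeddings together and
# pushes mismatched pairs apart within a batch.
import torch
import torch.nn.functional as F


def contrastive_matching_loss(region_emb: torch.Tensor,
                              text_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """region_emb, text_emb: (batch, dim) pooled embeddings of an attended
    image region and its natural-language description; row i of each tensor
    is assumed to be a matched pair."""
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Cosine-similarity logits between every region and every description.
    logits = region_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: region-to-text and text-to-region directions.
    loss_r2t = F.cross_entropy(logits, targets)
    loss_t2r = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_r2t + loss_t2r)


if __name__ == "__main__":
    regions = torch.randn(8, 256)   # e.g., multi-scale fused region features
    texts = torch.randn(8, 256)     # e.g., sentence-encoder outputs
    print(contrastive_matching_loss(regions, texts).item())
```

In such a setup, the region features could come from the multi-scale fusion stage mentioned in the abstract, so that fine-grained details contribute to the matching score; the actual loss modification proposed in the paper may differ.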