The visually impaired community, especially blind people, needs support from advanced technologies to help them understand and answer questions about image content. In the multi-modal area, Visual Question Answering (VQA) is a notable cutting-edge task that requires combining images and text via a co-attention mechanism. Inspired by the Deep Co-attention Layer, we propose a Bi-direction Co-Attention VT-Transformer network that jointly learns visual and textual features. Through our system, the relationships and interactions between objects of the two modalities are digested and combined into a shared, meaningful space. Moreover, using a consistent Transformer architecture for both the feature extractors and the multi-modal attention function allows us to reduce the number of attention layers as well as the computational cost. Through experimental results and ablation studies, our model achieves promising performance compared with existing approaches and a uni-directional mechanism on the VizWiz-VQA 2020 dataset for blind people.
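As a rough illustration of the bi-directional co-attention idea summarized above, the following PyTorch sketch cross-attends textual and visual features in both directions (question tokens attend to image regions, and image regions attend to question tokens). The module name, hidden size, and head count are assumptions chosen for illustration; they are not the paper's exact configuration.

# Minimal sketch of a bi-directional co-attention block, assuming
# 512-dimensional features and 8 attention heads (illustrative values only).
import torch
import torch.nn as nn


class BiDirectionalCoAttention(nn.Module):
    """Cross-attends visual and textual features in both directions."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Text queries attend to image keys/values, and vice versa.
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_image = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # text_feats: (batch, n_tokens, dim); image_feats: (batch, n_regions, dim)
        attended_text, _ = self.text_to_image(text_feats, image_feats, image_feats)
        attended_image, _ = self.image_to_text(image_feats, text_feats, text_feats)
        # Residual connections keep each modality's original information.
        text_out = self.norm_text(text_feats + attended_text)
        image_out = self.norm_image(image_feats + attended_image)
        return text_out, image_out


if __name__ == "__main__":
    block = BiDirectionalCoAttention()
    text = torch.randn(2, 20, 512)   # e.g. question token embeddings
    image = torch.randn(2, 49, 512)  # e.g. image region/patch embeddings
    fused_text, fused_image = block(text, image)
    print(fused_text.shape, fused_image.shape)

In this sketch each modality both queries and is queried, which is the essential difference from a uni-directional mechanism in which only one modality (typically the question) attends to the other.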