With the development of information technology and big data, the volume of multimodal data carrying opinion bias has exploded. Compared with unimodal data, multimodal data can identify users' emotional tendencies more accurately and enrich emotional attitudes from a variety of perspectives. To effectively capture both the consistency of sentiment representations between image and text semantics and the variability in their contributions to sentiment analysis, an image-text fusion sentiment analysis method based on text attention is proposed. The method constructs attention-based models to extract text and image features, emphasizes the words that carry sentiment information, and builds a sentiment classification model through multimodal feature fusion. Experimental results show that the proposed fusion method achieves better accuracy and F1 scores than baseline models.
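The abstract does not specify the architecture, but the core idea it describes (word-level attention over text features, late fusion with image features) can be sketched as follows. This is a minimal illustration assuming PyTorch; the module name TextAttentionFusion, the layer sizes, and the use of simple concatenation fusion are assumptions, not the authors' implementation.

```python
# Sketch of text-attention-weighted image-text fusion for sentiment
# classification (illustrative; not the paper's actual model).
import torch
import torch.nn as nn

class TextAttentionFusion(nn.Module):
    def __init__(self, text_dim=300, img_dim=2048, hidden=256, n_classes=3):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.img_proj = nn.Linear(img_dim, hidden)
        # Scores each word so sentiment-bearing tokens dominate the text vector.
        self.word_attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden * 2, n_classes)

    def forward(self, word_embs, img_feat):
        # word_embs: (batch, seq_len, text_dim); img_feat: (batch, img_dim)
        h = torch.tanh(self.text_proj(word_embs))        # (B, T, H)
        alpha = torch.softmax(self.word_attn(h), dim=1)  # attention over words
        text_vec = (alpha * h).sum(dim=1)                # (B, H)
        img_vec = torch.tanh(self.img_proj(img_feat))    # (B, H)
        fused = torch.cat([text_vec, img_vec], dim=-1)   # late feature fusion
        return self.classifier(fused)

# Usage:
# logits = TextAttentionFusion()(torch.randn(8, 20, 300), torch.randn(8, 2048))
```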
Traffic flow prediction is a key technology in intelligent transportation systems and plays a crucial role in their operation. Obtaining the dynamically changing spatial relationships of a road network without prior knowledge is a central challenge in current traffic flow prediction. To address the spatiotemporal dynamics of traffic flow, this paper proposes ASTCNN, a traffic flow prediction method based on attention and temporal convolutional networks. The method uses temporal convolution to extract long-term dependencies in traffic flow time series, and an adaptive node embedding attention mechanism to model the dynamic correlation among road network nodes without requiring prior knowledge of the graph. By stacking temporal convolution and attention network layers, it effectively mines dynamic spatial correlations and nonlinear temporal dependencies at different time scales, enhancing the model's spatiotemporal representation ability. Experimental results on two real traffic datasets show that the method is concise and effective, and its predictive performance is superior to common baseline models.
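The two ingredients the abstract names can be sketched in a few lines, assuming PyTorch: a dilated causal 1-D convolution for long-range temporal dependencies, and learnable node embeddings whose inner products yield an attention (adaptive adjacency) matrix, so no predefined road graph is needed. Class names, embedding size, and the softmax-over-ReLU scoring are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of adaptive node-embedding spatial attention plus temporal
# convolution for traffic flow (illustrative; not the ASTCNN source code).
import torch
import torch.nn as nn

class AdaptiveSpatialAttention(nn.Module):
    def __init__(self, n_nodes, emb_dim=10):
        super().__init__()
        # Learned per-node embeddings replace prior knowledge of the graph.
        self.node_emb = nn.Parameter(torch.randn(n_nodes, emb_dim))

    def forward(self, x):
        # x: (batch, n_nodes, features)
        scores = torch.relu(self.node_emb @ self.node_emb.T)  # (N, N)
        attn = torch.softmax(scores, dim=-1)                  # adaptive adjacency
        return attn @ x                                       # mix node features

class TemporalConv(nn.Module):
    def __init__(self, channels, kernel=3, dilation=2):
        super().__init__()
        pad = (kernel - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel,
                              padding=pad, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time); trim right padding to keep causality
        return self.conv(x)[..., : x.size(-1)]
```

Stacking alternating TemporalConv and AdaptiveSpatialAttention layers, as the abstract describes, lets each level capture spatial correlation at a different temporal receptive field.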
Traffic signs contain important traffic information. Traditional traffic sign detection methods suffer from low detection accuracy because traffic signs occupy only a small area of the image. To address this, a traffic sign detection algorithm based on an improved YOLOv4 is proposed. First, the 13 × 13 large-receptive-field detection layer is removed from the YOLOv4 structure and a 104 × 104 detection layer is added, which captures more global feature information and improves detection accuracy. An attention mechanism is also introduced: an scSE module is appended to each of the three feature layers extracted by the backbone network, making the network focus on the target region and improving detection ability. Second, to speed up network convergence and further improve accuracy, a dynamic residual connection is added to the backbone network, promoting the propagation of well-performing signals, and a decoupled detection head computes the classification and localization tasks in separate branches. Evaluated on the CCTSDB traffic sign dataset, the method reaches a mAP of 97.68%, 3.78 percentage points higher than YOLOv4. Convergence experiments show that the improved model converges faster, and compared with other models it detects smaller traffic signs better, meeting practical needs for high-precision detection.
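The scSE module the abstract attaches to the backbone's feature layers is the concurrent spatial and channel squeeze-and-excitation block of Roy et al.: a channel branch reweights feature maps per channel, a spatial branch reweights them per pixel, and the two recalibrations are combined. A minimal sketch assuming PyTorch follows; the reduction ratio and additive combination are common choices, not necessarily this paper's exact settings.

```python
# Sketch of an scSE (concurrent spatial and channel squeeze-and-excitation)
# block as inserted after a backbone feature layer (illustrative settings).
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel SE: squeeze spatial dims, excite per-channel weights.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial SE: a 1x1 conv produces a per-pixel attention map.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, channels, h, w); sum the two recalibrated feature maps
        return x * self.cse(x) + x * self.sse(x)
```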