
A Multimodal Sentiment Analysis Approach Based on a Joint Chained Interactive Attention Mechanism

Qiu, Keyuan; Zhang, Yingjie; Zhao, Jiaxu; Zhang, Shun; Wang, Qian; Chen, Feng

Authors

Keyuan Qiu

Yingjie Zhang

Jiaxu Zhao

Shun Zhang

Qian Wang qian.wang@durham.ac.uk
Academic Visitor

Feng Chen



Contributors

Ioannis Hatzilygeroudis
Editor

Abstract

The objective of multimodal sentiment analysis is to accurately extract and integrate feature information from text, image, and audio data in order to identify the speaker's emotional state. Although multimodal fusion schemes have made some progress in this field, previous studies still lack adequate approaches to inter-modal information consistency and to the fusion of different categories of features within a single modality. To effectively extract the sentiment-consistency information shared among video, audio, and text, this study proposes a multimodal sentiment analysis method based on joint chained interactive attention (VAE-JCIA, Video Audio Essay–Joint Chain Interactive Attention). In this approach, a 3D CNN extracts facial features from video, a Conformer extracts audio features, and a Funnel-Transformer extracts text features. A joint attention mechanism then identifies the key regions in which sentiment information remains consistent across video, audio, and text, yielding, for each modality, reinforced features that encode the consistency information of the other two modalities. Inter-modal feature interactions are handled by chained interactive attention, and multimodal feature fusion is used to perform emotion classification efficiently. The method is validated experimentally on the CMU-MOSEI and IEMOCAP datasets, and the results demonstrate that it significantly improves the performance of the multimodal sentiment analysis model.
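The abstract describes the fusion flow at a high level only. The sketch below is an illustrative reconstruction, not the authors' code: the feature dimension `D`, the number of attention heads, the residual connections, the binary classification head, and the random tensors standing in for the 3D CNN, Conformer, and Funnel-Transformer encoder outputs are all assumptions, and both the joint attention and the chained interactive attention are condensed into standard multi-head cross-attention calls.

```python
# Hypothetical sketch of the VAE-JCIA fusion flow described in the abstract.
# Encoder internals (3D CNN, Conformer, Funnel-Transformer) are replaced by
# placeholder tensors; only the attention-based fusion is illustrated.
import torch
import torch.nn as nn

D = 256  # assumed shared feature dimension

class JCIAFusion(nn.Module):
    def __init__(self, d=D, heads=4):
        super().__init__()
        # Joint attention: each modality attends to the concatenation of the
        # other two, yielding "reinforced" consistency-aware features.
        self.joint = nn.MultiheadAttention(d, heads, batch_first=True)
        # Chained interactive attention: pairwise cross-attention applied in
        # sequence (video -> audio, then the result -> text).
        self.chain1 = nn.MultiheadAttention(d, heads, batch_first=True)
        self.chain2 = nn.MultiheadAttention(d, heads, batch_first=True)
        self.classifier = nn.Linear(d, 2)  # assumed binary sentiment head

    def reinforce(self, query, others):
        ctx = torch.cat(others, dim=1)        # context from the other two modalities
        out, _ = self.joint(query, ctx, ctx)  # consistency-aware features
        return out + query                    # residual connection (assumed)

    def forward(self, v, a, t):
        # v, a, t: (batch, seq, D) features from the video/audio/text encoders
        v_r = self.reinforce(v, [a, t])
        a_r = self.reinforce(a, [v, t])
        t_r = self.reinforce(t, [v, a])
        # Chained interaction: fuse video with audio, then the result with text.
        va, _ = self.chain1(v_r, a_r, a_r)
        vat, _ = self.chain2(va, t_r, t_r)
        return self.classifier(vat.mean(dim=1))  # pooled emotion logits

# Toy usage with random tensors standing in for real encoder outputs.
model = JCIAFusion()
v = torch.randn(8, 16, D)    # video features (e.g. from a 3D CNN)
a = torch.randn(8, 40, D)    # audio features (e.g. from a Conformer)
t = torch.randn(8, 24, D)    # text features (e.g. from a Funnel-Transformer)
print(model(v, a, t).shape)  # torch.Size([8, 2])
```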

Citation

Qiu, K., Zhang, Y., Zhao, J., Zhang, S., Wang, Q., & Chen, F. (2024). A Multimodal Sentiment Analysis Approach Based on a Joint Chained Interactive Attention Mechanism. Electronics, 13(10), Article 1922. https://doi.org/10.3390/electronics13101922

Journal Article Type: Article
Acceptance Date: May 10, 2024
Online Publication Date: May 14, 2024
Publication Date: May 2, 2024
Deposit Date: Jun 13, 2024
Publicly Available Date: Jun 13, 2024
Journal: Electronics
Publisher: MDPI
Peer Reviewed: Peer Reviewed
Volume: 13
Issue: 10
Article Number: 1922
DOI: https://doi.org/10.3390/electronics13101922
Keywords: model optimization, decision fusion, sentiment analysis, deep learning, attention mechanisms
Public URL: https://durham-repository.worktribe.com/output/2480507
