Show simple item record
dc.contributor.author | Banerjee, Debapriya | |
dc.contributor.author | Lygerakis, Fotios | |
dc.contributor.author | Makedon, Fillia | |
dc.date.accessioned | 2023-07-24T21:35:33Z | |
dc.date.available | 2023-07-24T21:35:33Z | |
dc.date.issued | 2021-07-02 | |
dc.identifier.uri | http://hdl.handle.net/10106/31579 | |
dc.description.abstract | Multi-modal sentiment analysis plays an important role in providing better interactive experiences to users. Each modality in multi-modal data can provide a different viewpoint or reveal unique aspects of a user’s emotional state. In this work, we use the text, audio, and visual modalities from the MOSI dataset and propose a novel fusion technique using a multi-head attention LSTM network. Finally, we perform a classification task and evaluate its performance. | en_US |
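The abstract describes multi-head attention applied over per-modality representations, followed by late fusion and classification. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: it assumes the three modalities have already been encoded to a common feature dimension (e.g. by per-modality LSTMs), and all weights, dimensions, and the three-class output are illustrative placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads, rng):
    """Scaled dot-product attention over rows of X, split into heads.

    X: (seq_len, d_model) — here, one row per modality representation.
    Weights are random placeholders; a real model would learn them.
    """
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_k, (h + 1) * d_k)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_k)   # (seq_len, seq_len)
        heads.append(softmax(scores) @ V[:, s])       # (seq_len, d_k)
    return np.concatenate(heads, axis=-1)             # (seq_len, d_model)

rng = np.random.default_rng(0)
d_model = 8
# Hypothetical per-modality feature vectors (text, audio, visual),
# assumed already encoded to a common dimension.
text, audio, visual = (rng.standard_normal(d_model) for _ in range(3))
X = np.stack([text, audio, visual])                   # (3, d_model)

attended = multi_head_attention(X, num_heads=2, rng=rng)
fused = attended.mean(axis=0)                         # late fusion across modalities
W_out = rng.standard_normal((d_model, 3)) * 0.1       # 3 sentiment classes (placeholder)
probs = softmax(fused @ W_out)
print(probs.shape)
```

In a trained model the attention and output weights would be learned jointly, and the averaging step could be replaced by any pooling or gating scheme; the sketch only shows how attention lets each modality weigh the others before fusion.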
dc.language.iso | en_US | en_US |
dc.publisher | ACM | en_US |
dc.subject | late fusion, multi-modal sentiment analysis, multi-head attention recurrent neural networks | en_US |
dc.title | Sequential Late Fusion Technique for Multi-modal Sentiment Analysis | en_US |
dc.type | Article | en_US |
Files in this item
- Name:
- 3453892.3461009.pdf
- Size:
- 878.5 KB
- Format:
- PDF
- Description:
- Journal Article
This item appears in the following Collection(s)