dc.contributor.author: Banerjee, Debapriya
dc.contributor.author: Lygerakis, Fotios
dc.contributor.author: Makedon, Fillia
dc.date.accessioned: 2023-07-24T21:35:33Z
dc.date.available: 2023-07-24T21:35:33Z
dc.date.issued: 2021-07-02
dc.identifier.uri: http://hdl.handle.net/10106/31579
dc.description.abstract: Multi-modal sentiment analysis plays an important role in providing better interactive experiences to users. Each modality in multi-modal data can provide a different viewpoint or reveal unique aspects of a user’s emotional state. In this work, we use the text, audio, and visual modalities from the MOSI dataset and propose a novel fusion technique using a multi-head attention LSTM network. Finally, we perform a classification task and evaluate its performance.
dc.language.iso: en_US
dc.publisher: ACM
dc.subject: late fusion; multi-modal sentiment analysis; multi-head attention recurrent neural networks
dc.title: Sequential Late Fusion Technique for Multi-modal Sentiment Analysis
dc.type: Article
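
The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch sketch of late fusion with multi-head attention over per-modality LSTM encoders; it is not the authors' actual model. The class name, feature dimensions (300-d text, 74-d audio, 47-d visual, roughly in line with common MOSI feature sets), hidden size, head count, and two-class output are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class LateFusionSentimentClassifier(nn.Module):
    """Hypothetical late-fusion sketch: one LSTM per modality,
    multi-head attention over the per-modality summaries, then a
    sentiment classification head. All dimensions are illustrative."""

    def __init__(self, text_dim=300, audio_dim=74, visual_dim=47,
                 hidden_dim=64, num_heads=4, num_classes=2):
        super().__init__()
        # Late fusion: each modality is encoded independently and only
        # combined at the fusion step.
        self.text_lstm = nn.LSTM(text_dim, hidden_dim, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.visual_lstm = nn.LSTM(visual_dim, hidden_dim, batch_first=True)
        # Multi-head attention fuses the three unimodal representations.
        self.fusion_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                                 batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text, audio, visual):
        # Use the final hidden state of each unimodal LSTM as its summary.
        _, (h_t, _) = self.text_lstm(text)
        _, (h_a, _) = self.audio_lstm(audio)
        _, (h_v, _) = self.visual_lstm(visual)
        # Stack summaries into a length-3 "modality sequence": (B, 3, H).
        modalities = torch.stack([h_t[-1], h_a[-1], h_v[-1]], dim=1)
        # Self-attention across modalities, then pool and classify.
        fused, _ = self.fusion_attn(modalities, modalities, modalities)
        return self.classifier(fused.mean(dim=1))


# Usage with random MOSI-like feature shapes (batch of 8, 20 timesteps).
model = LateFusionSentimentClassifier()
logits = model(torch.randn(8, 20, 300),   # text features
               torch.randn(8, 20, 74),    # audio features
               torch.randn(8, 20, 47))    # visual features
print(logits.shape)  # torch.Size([8, 2])
```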

