AI-Enabled Multimodal Framework for Emotion Classification and Sentiment Analysis

Authors

  • V Sunil Kumar, Dr. Piyush Kumar Pareek

Keywords

multimodal sentiment analysis, emotion recognition, speech–text–vision fusion, cross-attention, self-supervised learning, fairness, calibration, deployment

Abstract

Human affect is inherently multimodal, expressed through prosody, lexical choice, facial dynamics, and body cues. Yet many deployed systems still rely on a single channel (usually text), limiting robustness in natural

Published

2024-07-20

How to Cite

V Sunil Kumar, & Dr. Piyush Kumar Pareek. (2024). AI-Enabled Multimodal Framework for Emotion Classification and Sentiment Analysis. Journal of Computational Analysis and Applications (JoCAAA), 33(07), 2655–2662. Retrieved from https://www.eudoxuspress.com/index.php/pub/article/view/3742

Section: Articles