An Intelligent Multimodal Emotion Recognition System for E-Learning


Mohamed Ben Ammar, Jihane Ben Slimane

Abstract

The purpose of this research paper is to introduce an Intelligent Multimodal Emotion Recognition System (IMERS) that aims to improve the e-learning process by accurately perceiving and reacting to the emotional states of learners. IMERS integrates information from three primary modalities: facial expressions, voice, and text. Multimodal fusion addresses the constraints of single-modality systems and yields a more comprehensive and precise understanding of emotions. The paper highlights the following: First, we discuss the structure of multimodal fusion, specifically the architecture of IMERS, including its components for data preparation, feature extraction, decision fusion, and sentiment classification. Each modality employs distinct deep learning algorithms tailored to its unique properties. Furthermore, our assessment of IMERS covers its proficiency in discerning emotions within e-learning environments, as evidenced by its correct detection of primary emotions across diverse datasets. Another area of emphasis is personalized learning applications, in which we demonstrate how IMERS customizes learning experiences by adapting instruction, offering targeted feedback, and cultivating an emotionally supportive learning environment.
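To make the decision-fusion stage concrete, the sketch below shows one common way such a step can be realized: each modality's classifier (facial, voice, text) outputs a probability distribution over emotion labels, and the fused prediction is a confidence-weighted average of those distributions. This is a minimal illustration only; the emotion label set, the `fuse_decisions` function, and the example weights are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

# Emotion labels assumed for illustration; the paper's exact label set may differ.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def fuse_decisions(modality_probs, weights=None):
    """Weighted decision-level fusion of per-modality emotion probabilities.

    modality_probs: dict mapping modality name -> probability vector over EMOTIONS.
    weights: optional dict of per-modality confidence weights (defaults to uniform).
    Returns the fused probability vector and the predicted emotion label.
    """
    names = list(modality_probs)
    if weights is None:
        weights = {name: 1.0 for name in names}
    w = np.array([weights[name] for name in names], dtype=float)
    w /= w.sum()  # normalize so the fused vector remains a probability distribution
    stacked = np.stack([modality_probs[name] for name in names])  # (n_modalities, n_emotions)
    fused = w @ stacked
    return fused, EMOTIONS[int(np.argmax(fused))]

# Example: softmax outputs from the facial, voice, and text classifiers (illustrative values only).
probs = {
    "face":  np.array([0.60, 0.10, 0.05, 0.15, 0.10]),
    "voice": np.array([0.40, 0.25, 0.10, 0.10, 0.15]),
    "text":  np.array([0.55, 0.15, 0.05, 0.10, 0.15]),
}
fused, label = fuse_decisions(probs, weights={"face": 0.5, "voice": 0.25, "text": 0.25})
print(label, fused.round(3))
```

Weighting the facial modality more heavily here is purely illustrative; in practice such weights would be tuned on validation data or learned jointly with the per-modality classifiers.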

Article Details

How to Cite
Ben Slimane, J., & Ben Ammar, M. (2024). An Intelligent Multimodal Emotion Recognition System for E-Learning. International Journal on Recent and Innovation Trends in Computing and Communication, 11(10), 2847–2852. Retrieved from https://ijritcc.org/index.php/ijritcc/article/view/10315