Emotion Detection and Classification using Hybrid Feature Selection and Deep Learning Techniques
Abstract
Image sentiment analysis has gained significant attention due to the increasing availability of user-generated content on platforms such as social media, e-commerce websites, and online reviews. The core of our approach is a deep learning model that combines the strengths of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks: the CNN component captures local dependencies and learns high-level features, while the LSTM component captures long-term dependencies and maintains contextual information. By fusing these two components, the model captures both local and global context, leading to improved sentiment analysis performance. During execution, a context is first selected from the input image and a visual feature vector is generated for caption generation. The EfficientNetB7 model is applied to construct an image description for every individual picture. Two approaches are used to classify sentiment labels: an attention-based LSTM and a Gated Recurrent Unit (GRU) with greedy decoding. The proposed research is organized into three phases. Phase 1 describes the data preprocessing and normalization techniques and demonstrates training with a ResNet-101 deep CNN classifier. Phase 2 extracts features from the selected context of the input image; the context is selected based on the objects detected in the image, and a visual caption is generated for the entire dataset. The generated captions are used dynamically for model training and testing on both datasets, with the EfficientNet module generating the visual context from the selected contexts. Finally, in Phase 3 a classification model is built using a deep convolutional neural network (DCNN). The proposed algorithm classifies the full training and test datasets under different cross-validation settings (5-fold, 10-fold, and 15-fold), and several activation functions are used to evaluate the proposed algorithm. The highest accuracy of the proposed model is 96.20%, obtained with the sigmoid activation function under 15-fold cross-validation.
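The abstract does not disclose implementation details, so the following minimal Keras sketch is only an illustration of the kind of CNN and LSTM fusion it describes: EfficientNetB7 visual features on one branch, caption tokens through an LSTM on the other, joined before a sigmoid sentiment output. All layer sizes, the vocabulary size, and the caption length are assumed values, not the authors' configuration.

```python
# Illustrative sketch only: layer sizes, vocabulary size, and caption length
# below are assumptions, not the architecture reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB7

VOCAB_SIZE = 10000    # assumed caption vocabulary size
MAX_CAPTION_LEN = 30  # assumed maximum caption length

# Visual branch: EfficientNetB7 backbone followed by a small CNN head that
# learns high-level local features from the image.
image_input = layers.Input(shape=(600, 600, 3), name="image")
backbone = EfficientNetB7(include_top=False, weights=None, input_tensor=image_input)
x = layers.Conv2D(256, 3, padding="same", activation="relu")(backbone.output)
x = layers.GlobalAveragePooling2D()(x)
visual_features = layers.Dense(256, activation="relu")(x)

# Textual branch: generated caption tokens pass through an embedding and an
# LSTM that maintains longer-range contextual information.
caption_input = layers.Input(shape=(MAX_CAPTION_LEN,), name="caption")
e = layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(caption_input)
caption_features = layers.LSTM(256)(e)

# Fusion of the local (CNN) and contextual (LSTM) representations, ending in
# a sigmoid output, the activation reported for the best result in the abstract.
fused = layers.concatenate([visual_features, caption_features])
fused = layers.Dense(128, activation="relu")(fused)
output = layers.Dense(1, activation="sigmoid", name="sentiment")(fused)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Under this sketch, training would pass an image tensor and its generated caption token sequence together; cross-validation (5-, 10-, or 15-fold, as in the abstract) would simply re-fit the model on each fold split.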