Voice Feature Extraction for Gender and Emotion Recognition

Vani Nair
Pooja Pillai
Anupama Subramanian
Sarah Khalife
Dr. Madhu Nashipudimath

Abstract

Voice recognition plays a key role in spoken communication, helping to identify the emotions reflected in a person's voice. Gender classification from speech is widely used in Human-Computer Interaction (HCI), since gender is not easy for a computer to identify. This motivated the development of a model for voice feature extraction for emotion and gender recognition. A speech signal carries semantic information and speaker information (gender, age, emotional state), accompanied by noise. Female and male voices differ in their acoustic and perceptual characteristics, and the variety of emotions each speaker conveys adds further variation. Exploring this area requires pre-processing of the data before feature extraction, which is necessary to increase accuracy. The proposed model proceeds through data extraction, pre-processing with a Voice Activity Detector (VAD), feature extraction with Mel-Frequency Cepstral Coefficients (MFCC), feature reduction by Principal Component Analysis (PCA), and a Support Vector Machine (SVM) classifier. This combination of techniques produced improved results that can be useful in the healthcare sector, virtual assistants, security applications, and other fields within the Human-Machine Interaction domain.
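The MFCC → PCA → SVM stages of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic 13-dimensional vectors stand in for real MFCC features (which would come from a VAD-cleaned audio signal via a library such as librosa), and the two class labels stand in for gender or emotion categories.

```python
# Hypothetical sketch of the abstract's pipeline: MFCC features -> PCA -> SVM.
# Synthetic feature vectors replace real MFCCs extracted from speech.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class = 100

# Two synthetic "speaker classes" with shifted means, 13 coefficients each,
# mimicking the dimensionality of a common MFCC configuration.
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_class, 13)),
    rng.normal(1.5, 1.0, (n_per_class, 13)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# PCA reduces the feature space before the SVM classifier, mirroring the
# feature-reduction step named in the abstract.
model = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In a real system, `X` would be a matrix of per-utterance MFCC statistics, and the number of PCA components and the SVM kernel would be tuned on validation data.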

Article Details

How to Cite
Nair, V., P. Pillai, Anupama Subramanian, S. Khalife, and D. M. Nashipudimath. “Voice Feature Extraction for Gender and Emotion Recognition”. International Journal on Recent and Innovation Trends in Computing and Communication, vol. 9, no. 5, May 2021, pp. 17-22, doi:10.17762/ijritcc.v9i5.5463.