A Survey of Deep Learning Approaches for Natural Language Processing Tasks

Suresh Dodda, Navin Kamuni, Jyothi Swaroop Arlagadda, Venkata Sai Mahesh Vuppalapati, Preetham Vemasani

Abstract

In recent years, deep learning has become the dominant approach to solving difficult natural language processing (NLP) problems. By combining large neural network architectures with massive volumes of training data, deep learning models have attained state-of-the-art performance across a wide range of NLP applications, including text summarization, sentiment analysis, named entity recognition, and language translation. In this paper, we survey the most important deep learning methods and how they have been applied to different NLP tasks. We cover the fundamental neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, as well as more recent developments such as BERT and GPT-3. For each method, we discuss its guiding principles, strengths, limitations, and notable NLP applications. To illustrate the relative merits of the various models, we also report their comparative performance on standard benchmark datasets. Finally, we highlight current challenges and promising directions for future research in deep learning for NLP. The aim of this survey is to give researchers and practitioners in natural language processing a high-level perspective on how to apply deep learning effectively in their work.

Article Details

How to Cite
Dodda, S., Kamuni, N., Arlagadda, J. S., Vuppalapati, V. S. M., & Vemasani, P. (2021). A Survey of Deep Learning Approaches for Natural Language Processing Tasks. International Journal on Recent and Innovation Trends in Computing and Communication, 9(12), 27–36. Retrieved from https://ijritcc.org/index.php/ijritcc/article/view/10357
Section
Articles