A Mobile Application Framework to Classify Philippine Currency Images to Audio Labels Using Deep Learning


Mary Grace Abellano Buban, Joyce Cadiz Malubay, Natividad Ballesteros Concepcion, Lilibeth Abellano Buban


This research presents a mobile application framework designed to empower visually impaired individuals in Legazpi City by providing real-time audio feedback for currency identification. Leveraging deep learning techniques, the proposed framework employs a robust model trained on a comprehensive dataset of Philippine currency images. The model accurately classifies various denominations of bills and coins, enabling the development of an inclusive solution for the visually impaired community. The researchers employed a qualitative approach that included a focus group discussion, with respondents chosen through purposive sampling; participants included masseuses, chiropractors, herbal street vendors, and students. The selected participants contributed to the focus group discussion through an online meeting, and an in-depth informal interview was conducted to gather additional information for the development of the architectural framework. The results of this study indicate that implementing the architectural framework would allow these groups to identify money more easily, increasing efficiency and reducing errors in cash transactions. Audio labels are particularly helpful for visually impaired individuals, as they provide an accessible way to handle and identify money independently.
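The paper does not publish its model or code, but the classification-to-audio step it describes can be sketched in outline. The snippet below is an illustrative assumption, not the authors' implementation: the class list, function names, and confidence threshold are hypothetical, showing only how a classifier's top prediction might be mapped to the phrase a text-to-speech engine would speak.

```python
# Illustrative sketch (not from the paper): mapping a classifier's
# predicted class index to a spoken audio label for Philippine currency.
# The class list, threshold, and names below are assumptions.

PHP_CLASSES = [
    "1 peso coin", "5 peso coin", "10 peso coin", "20 peso coin",
    "20 peso bill", "50 peso bill", "100 peso bill",
    "200 peso bill", "500 peso bill", "1000 peso bill",
]

def index_to_audio_label(class_index: int, confidence: float,
                         threshold: float = 0.80) -> str:
    """Convert a model's top prediction into the phrase a TTS engine speaks.

    Low-confidence predictions become a retry prompt rather than a
    possibly wrong denomination, since misreading cash is costly.
    """
    if not 0 <= class_index < len(PHP_CLASSES):
        raise ValueError(f"unknown class index: {class_index}")
    if confidence < threshold:
        return "Unable to identify the money. Please try again."
    return f"This is a {PHP_CLASSES[class_index]}."

# Hypothetical model output: class index 6 at 97% confidence.
print(index_to_audio_label(6, 0.97))  # → "This is a 100 peso bill."
print(index_to_audio_label(6, 0.40))  # → retry prompt
```

In a deployed app, the returned string would be passed to the platform's text-to-speech API (e.g. Android TextToSpeech) to produce the audio feedback the framework describes.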

Article Details

How to Cite
Buban, M. G. A., Malubay, J. C., Concepcion, N. B., & Buban, L. A. (2024). A Mobile Application Framework to Classify Philippine Currency Images to Audio Labels Using Deep Learning. International Journal on Recent and Innovation Trends in Computing and Communication, 12(2), 78–84. Retrieved from https://ijritcc.org/index.php/ijritcc/article/view/10459