Visual Eureka: Navigating Images Through Textual Queries


Zarinabegam Mundargi, Siddhant Kokane, Rishikesh Makode, Shivdas Nakil, Manthan Manalwar, Mohit Burchunde


Progress in text extraction technologies has been uneven. Notable tools such as Google Lens proficiently extract text from images, yet software tailored to the reciprocal task of searching images by their textual content remains scarce. This work proposes a software solution for image retrieval through text search. The core technical idea is to use image metadata as a repository for textual data linked to each image: optical character recognition (OCR) and related text extraction algorithms decipher the text in an image, which is then stored in its metadata. Indexing this metadata supports an efficient search mechanism that lets users query images by specific text-related parameters. A user interface integrates these functions, providing an intuitive platform for entering text queries and retrieving matching images with high precision. Scalability and performance optimization measures keep the system responsive as datasets grow, improving both the utility of image search and user convenience and operational efficiency in visual data retrieval.
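The pipeline the abstract describes, extracting text from images, storing it alongside each image, indexing it, and answering text queries, can be sketched as an inverted index. The sketch below is illustrative only: the OCR step is stubbed with hard-coded sample strings (a real system would call an OCR engine such as Tesseract), and all function names and sample data are assumptions, not the authors' implementation.

```python
# Minimal sketch of the OCR-index-search pipeline from the abstract.
# extract_text() stands in for a real OCR engine; the sample strings
# below are invented placeholders for extracted image text.
from collections import defaultdict

def extract_text(image_path):
    """Placeholder for OCR: return the text 'found' in the image."""
    sample_ocr = {
        "receipt.png": "Total due 42.00 thank you",
        "poster.jpg": "Jazz night Friday at the Blue Note",
        "slide.png": "Quarterly results total revenue up",
    }
    return sample_ocr.get(image_path, "")

def build_index(image_paths):
    """Map each lowercased word to the set of images containing it."""
    index = defaultdict(set)
    for path in image_paths:
        for word in extract_text(path).lower().split():
            index[word].add(path)
    return index

def search(index, query):
    """Return images whose extracted text contains every query word."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

index = build_index(["receipt.png", "poster.jpg", "slide.png"])
print(sorted(search(index, "total")))      # images whose text mentions "total"
print(sorted(search(index, "blue note")))  # multi-word (AND) query
```

In a production system the index would live in a database or search engine rather than in memory, and the OCR output would also be written into each file's metadata fields, as the abstract proposes, so the text travels with the image.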

Article Details

How to Cite
Mundargi, Z., Kokane, S., Makode, R., Nakil, S., Manalwar, M., & Burchunde, M. (2024). Visual Eureka: Navigating Images Through Textual Queries. International Journal on Recent and Innovation Trends in Computing and Communication, 12(2), 242–246. Retrieved from