Edge Computing for AI and ML: Enhancing Performance and Privacy in Data Analysis
Abstract
Centralised cloud computing paradigms face growing difficulties with latency, bandwidth, privacy, and security as the data volumes produced by sensors and Internet of Things (IoT) devices grow exponentially. Edge computing, which moves computation and storage closer to the data sources, offers a promising approach to these constraints. This paradigm shift enables real-time processing, reduces network congestion, and improves data privacy. This article investigates the potential of edge computing to improve the efficiency and confidentiality of data analysis applications powered by artificial intelligence (AI) and machine learning (ML). We provide a thorough analysis of recent developments in edge computing frameworks, algorithms, and architectures that enable secure and fast training and inference of AI/ML models at the edge. We also discuss the main open challenges and promising directions for future research. Our findings lay the groundwork for future intelligent edge systems by demonstrating the substantial advantages of edge computing in enabling low-latency, energy-efficient, and privacy-preserving AI/ML applications.