Improvisation of Incremental Computing in Hadoop Architecture with File Caching

Mr. Alhad V. Alsi, Prof. A. P. Bodkhe

Abstract

Incremental data processing is a difficult problem: it requires well-defined algorithms and a runtime system that support continuous progress in computation. Many online data sets are elastic in nature; new entries are added as the application progresses. Hadoop is dedicated to the processing of distributed data and is used to manipulate large volumes of it, covering not only storage but also computation and processing. Hadoop is therefore suited to data-centric applications in which data plays a vital role in decision making. Existing systems for incremental bulk data processing can handle updates efficiently, but they are not compatible with non-incremental systems such as MapReduce and, more importantly, require the programmer to implement application-specific incremental methods, which increases algorithm and code complexity. This paper therefore discusses various aspects of incremental computation and file caching in the Hadoop architecture.
DOI: 10.17762/ijritcc2321-8169.150633
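The core idea summarized in the abstract, namely recomputing only what has changed and reusing cached partial results instead of reprocessing the whole data set, can be illustrated with a minimal standalone Java sketch. This is not the authors' Hadoop implementation: the class name IncrementalWordCount, the ".cache" side file, and the byte-offset scheme for detecting newly appended input are all illustrative assumptions chosen to keep the example self-contained.

import java.io.*;
import java.nio.file.*;
import java.util.*;

// A sketch of incremental computation with file caching: word counts
// and the byte offset already processed are cached between runs, so
// each run reads only the portion of the input appended since last time.
public class IncrementalWordCount {

    public static void main(String[] args) throws IOException {
        Path input = Paths.get(args[0]);            // growing input file
        Path cache = Paths.get(args[0] + ".cache"); // state from the last run

        long processedBytes = 0;
        Map<String, Long> counts = new HashMap<>();

        // Load the cached state (offset plus partial counts), if any.
        if (Files.exists(cache)) {
            try (BufferedReader r = Files.newBufferedReader(cache)) {
                processedBytes = Long.parseLong(r.readLine());
                String line;
                while ((line = r.readLine()) != null) {
                    String[] kv = line.split("\t");
                    counts.put(kv[0], Long.parseLong(kv[1]));
                }
            }
        }

        // Incremental step: process only bytes appended since the last run.
        try (RandomAccessFile raf = new RandomAccessFile(input.toFile(), "r")) {
            raf.seek(processedBytes);
            String line;
            while ((line = raf.readLine()) != null) {
                for (String word : line.trim().split("\\s+")) {
                    if (!word.isEmpty()) counts.merge(word, 1L, Long::sum);
                }
            }
            processedBytes = raf.getFilePointer();
        }

        // Persist the updated state so the next run can reuse it.
        try (BufferedWriter w = Files.newBufferedWriter(cache)) {
            w.write(Long.toString(processedBytes));
            w.newLine();
            for (Map.Entry<String, Long> e : counts.entrySet()) {
                w.write(e.getKey() + "\t" + e.getValue());
                w.newLine();
            }
        }

        System.out.println("Distinct words so far: " + counts.size());
    }
}

Running the sketch repeatedly on a log file that only grows (java IncrementalWordCount access.log) shows the intended effect: the first run pays the full cost, while every later run touches only the new entries, which is the update pattern the paper contrasts with non-incremental MapReduce jobs that recompute from scratch.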

Article Details

How to Cite
Alsi, A. V., &amp; Bodkhe, A. P. (2015). Improvisation of Incremental Computing in Hadoop Architecture with File Caching. International Journal on Recent and Innovation Trends in Computing and Communication, 3(6), 3639–3642. https://doi.org/10.17762/ijritcc.v3i6.4509
Section
Articles