
HadoopTrajectory: a Hadoop spatiotemporal data processing extension

Research Authors
Mohamed Bakli, Mahmoud Sakr, Taysir Hassan A. Soliman
Research Journal
Journal of Geographical Systems
Research Co-authors
Research Classification
1
Publisher
Springer-Verlag GmbH Germany, part of Springer Nature 2019
Research Issue
NULL
Research URL
https://link.springer.com/article/10.1007/s10109-019-00292-4
Research Year
2019
Research Pages
NULL
Research Abstract

Recent advances in location tracking technologies and the widespread use of location-aware applications have resulted in big datasets of moving object trajectories. While a few research prototypes for moving object databases exist, there is a lack of systems that can process big spatiotemporal data. This work proposes HadoopTrajectory, a Hadoop extension for spatiotemporal data processing. The extension adds spatiotemporal types and operators to the Hadoop core. These types and operators can be used directly in MapReduce programs, allowing Hadoop users to write spatiotemporal data analytics programs. The storage layer of Hadoop, HDFS, is extended with types that represent trajectory data and their corresponding input and output functions, as well as with file splitters and record readers. This enables Hadoop to read big files of moving object trajectories, such as vehicle GPS tracks, and split them over worker nodes for distributed processing. The storage layer is also extended with spatiotemporal indexes that help filter the data before splitting it over the worker nodes. Several data access functions are provided so that the MapReduce layer can work with this data. The MapReduce layer is extended with trajectory processing operators, for instance to compute the length of a trajectory in meters. This paper describes the extension and evaluates it using a synthetic dataset and a real dataset. Comparisons with non-Hadoop systems and with standard Hadoop are given. The extension accounts for about 11,601 lines of Java code.
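
To make the programming model described in the abstract concrete, the sketch below shows roughly what a MapReduce job computing per-object trajectory lengths could look like on top of such an extension. It is a minimal illustration, not code from the paper: the trajectory-specific names (Trajectory, its length() operator, and TrajectoryInputFormat) are hypothetical placeholders standing in for the extension's types, record readers, and operators, while the surrounding classes are the standard Hadoop MapReduce API.

```java
// Illustrative sketch only. Trajectory and TrajectoryInputFormat are
// hypothetical placeholders for HadoopTrajectory's actual types; the
// rest is standard Hadoop MapReduce.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TrajectoryLengthJob {

  // Map: the record reader hands one trajectory per record;
  // emit (moving object id, trajectory length in meters).
  public static class LengthMapper
      extends Mapper<Text, Trajectory, Text, DoubleWritable> {
    @Override
    protected void map(Text objectId, Trajectory trajectory, Context context)
        throws IOException, InterruptedException {
      // length() is assumed to return meters, mirroring the length
      // operator mentioned in the abstract (hypothetical signature).
      context.write(objectId, new DoubleWritable(trajectory.length()));
    }
  }

  // Reduce: sum the lengths of all trajectory records per moving object.
  public static class SumReducer
      extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text objectId, Iterable<DoubleWritable> lengths,
        Context context) throws IOException, InterruptedException {
      double total = 0.0;
      for (DoubleWritable len : lengths) {
        total += len.get();
      }
      context.write(objectId, new DoubleWritable(total));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "trajectory length");
    job.setJarByClass(TrajectoryLengthJob.class);
    // Hypothetical input format that would wrap the extension's file
    // splitter and record reader for GPS track files stored in HDFS.
    job.setInputFormatClass(TrajectoryInputFormat.class);
    job.setMapperClass(LengthMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In this reading of the architecture, the spatiotemporal value types and operators live in the map and reduce functions, while the input format, splitter, and record reader on the HDFS side are what let whole trajectories (rather than arbitrary byte ranges) be distributed to the worker nodes.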