Fontana, S., Cattaneo, D., Ballardini, A. L., Vaghi, M., & Sorrenti, D. G. (2021). A benchmark for point clouds registration algorithms. Robotics and Autonomous Systems, 140 (June 2021). [10.1016/j.robot.2021.103734].
A benchmark for point clouds registration algorithms
Fontana S.; Cattaneo D.; Ballardini A. L.; Vaghi M.; Sorrenti D. G.
2021
Abstract
Point cloud registration is a fundamental step in many point cloud processing pipelines; however, most algorithms are tested on data collected ad hoc and not shared with the research community. These data often cover only a very limited set of use cases, so the results cannot be generalized. The public datasets proposed so far, taken individually, cover only a few kinds of environments and mostly a single sensor. For these reasons, we developed a benchmark for localization and mapping applications using multiple publicly available datasets. In this way, we are able to cover many kinds of environments and many kinds of sensors that can produce point clouds. Furthermore, the ground truth has been thoroughly inspected and evaluated to ensure its quality. For some of the datasets, the accuracy of the ground-truth measuring system was not reported by the original authors; therefore, we estimated it with our own novel method, based on an iterative registration algorithm. Along with the data, we provide a broad set of registration problems, chosen to cover different types of initial misalignment, various degrees of overlap, and different kinds of registration problems. Lastly, we propose a metric to measure the performance of registration algorithms: it combines the commonly used rotation and translation errors into a single score, to allow an objective comparison of the alignments. This work aims to encourage authors to use a public and shared benchmark, instead of data collected ad hoc, to ensure objectivity and repeatability, two fundamental characteristics of any scientific field.
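The combined metric is only described here at a high level. As a minimal sketch of how rotation and translation errors can be folded into a single score, the snippet below measures the mean displacement of the source points under the estimated transform relative to the ground-truth transform; the function name and this particular formulation are illustrative assumptions, not necessarily the paper's actual definition.

import numpy as np

def combined_registration_error(points, T_est, T_gt):
    """Single-score registration error (illustrative sketch, not the
    paper's definition): mean displacement of the source points under
    the estimated transform with respect to the ground-truth transform.

    points : (N, 3) array, source point cloud
    T_est  : (4, 4) homogeneous transform estimated by the algorithm
    T_gt   : (4, 4) ground-truth homogeneous transform
    """
    # Promote the points to homogeneous coordinates.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    # Apply both transforms to the same cloud.
    p_est = (T_est @ homo.T).T[:, :3]
    p_gt = (T_gt @ homo.T).T[:, :3]
    # The mean per-point distance grows with both the rotation and the
    # translation error, yielding one scalar in the cloud's own units.
    return float(np.mean(np.linalg.norm(p_est - p_gt, axis=1)))

A score of this form expresses rotation and translation errors in the same unit (the cloud's metric unit), since the displacement caused by a rotational error scales with the distance of the points from the rotation centre; this is one common rationale for combining the two errors instead of reporting them separately.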