DurLAR: A High-Fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
Li, Li; Ismail, Khalid N.; Shum, Hubert P.H.; Breckon, Toby P.
Authors
Luis Li li.li4@durham.ac.uk
PGR Student, Doctor of Philosophy
Dr Khalid Ismail khalid.n.ismail@durham.ac.uk
Academic Visitor
Professor Hubert Shum hubert.shum@durham.ac.uk
Professor
Professor Toby Breckon toby.breckon@durham.ac.uk
Professor
Abstract
We present DurLAR, a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near-infrared) and reflectivity imagery, as well as a sample benchmark task using depth estimation for autonomous driving applications. Our driving platform is equipped with a high-resolution 128-channel LiDAR, a 2 MPix stereo camera, a lux meter and a GNSS/INS system. Ambient and reflectivity images are made available alongside the LiDAR point clouds to facilitate multi-modal use of concurrent ambient and reflectivity scene information. Leveraging DurLAR, whose resolution exceeds that of prior benchmarks, we consider the task of monocular depth estimation and use this increased availability of higher-resolution, yet sparse, ground truth scene depth information to propose a novel joint supervised/self-supervised loss formulation. We compare performance over our new DurLAR dataset, the established KITTI benchmark and the Cityscapes dataset. Our evaluation shows that the joint use of supervised and self-supervised loss terms, enabled by the superior ground truth resolution and availability within DurLAR, improves both the quantitative and qualitative performance of leading contemporary monocular depth estimation approaches (RMSE = 3.639, SqRel = 0.936).
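The abstract describes combining a supervised loss on the sparse LiDAR ground truth with a self-supervised photometric term, but this record does not carry the paper's formulation. As a rough orientation only, below is a minimal PyTorch-style sketch of one way such a joint loss can be assembled; the function name, the weights `w_sup`/`w_self`, the L1 supervised term and the externally computed `photometric_err` are all illustrative assumptions, not the actual DurLAR loss.

```python
import torch
import torch.nn.functional as F

def joint_depth_loss(pred_depth: torch.Tensor,
                     lidar_depth: torch.Tensor,
                     photometric_err: torch.Tensor,
                     w_sup: float = 1.0,
                     w_self: float = 1.0) -> torch.Tensor:
    """Hypothetical joint supervised/self-supervised depth loss.

    - Supervised term: L1 error on pixels where sparse LiDAR ground
      truth exists (depth > 0 marks a valid return).
    - Self-supervised term: mean of a precomputed per-pixel
      photometric reprojection error (e.g. an SSIM/L1 mix).
    Weights and norms are assumptions, not the paper's formulation.
    """
    valid = lidar_depth > 0  # sparse LiDAR projection mask
    sup = F.l1_loss(pred_depth[valid], lidar_depth[valid])
    self_sup = photometric_err.mean()
    return w_sup * sup + w_self * self_sup
```

The key design point conveyed by the abstract is that the denser 128-channel ground truth makes the supervised mask cover far more pixels than on older benchmarks, so the supervised term can meaningfully complement, rather than merely regularise, the self-supervised one.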
Citation
Li, L., Ismail, K. N., Shum, H. P. H., & Breckon, T. P. (2021). DurLAR: A High-Fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications. In Proceedings of the International Conference on 3D Vision (3DV) (pp. 1227-1237). https://doi.org/10.1109/3dv53792.2021.00130
| Presentation Conference Type | Conference Paper (Published) |
| --- | --- |
| Conference Name | International Conference on 3D Vision |
| Start Date | Dec 1, 2021 |
| End Date | Dec 3, 2021 |
| Publication Date | Dec 2021 |
| Deposit Date | Oct 25, 2021 |
| Publicly Available Date | Dec 4, 2021 |
| Pages | 1227-1237 |
| DOI | https://doi.org/10.1109/3dv53792.2021.00130 |
| Public URL | https://durham-repository.worktribe.com/output/1138941 |
| Publisher URL | https://doi.ieeecomputersociety.org/10.1109/3DV53792.2021.00130 |
Files
Accepted Conference Proceeding (PDF, 12.3 MB)
You might also like
Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation
(2023)
Presentation / Conference Contribution
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training
(2024)
Presentation / Conference Contribution
RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for LiDAR Semantic Segmentation
(2024)
Presentation / Conference Contribution
Towards Open-World Object-based Anomaly Detection via Self-Supervised Outlier Synthesis
(2024)
Presentation / Conference Contribution
U3DS³: Unsupervised 3D Semantic Scene Segmentation
(2024)
Presentation / Conference Contribution