360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
Feng, Qi; Shum, Hubert P.H.; Morishima, Shigeo
Abstract
Single-view depth estimation from omnidirectional images has gained popularity with its wide range of applications such as autonomous driving and scene reconstruction. Although data-driven learning-based methods demonstrate significant potential in this field, scarce training data and ineffective 360 estimation algorithms are still two key limitations hindering accurate estimation across diverse domains. In this work, we first establish a large-scale dataset with varied settings called Depth360 to tackle the training data problem. This is achieved by exploring the use of a plenteous source of data, 360 videos from the internet, using a test-time training method that leverages unique information in each omnidirectional sequence. With novel geometric and temporal constraints, our method generates consistent and convincing depth samples to facilitate single-view estimation. We then propose an end-to-end two-branch multi-task learning network, SegFuse, that mimics the human eye to effectively learn from the dataset and estimate high-quality depth maps from diverse monocular RGB images. With a peripheral branch that uses equirectangular projection for depth estimation and a foveal branch that uses cubemap projection for semantic segmentation, our method predicts consistent global depth while maintaining sharp details at local regions. Experimental results show favorable performance against the state-of-the-art methods.
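The foveal branch described above takes a cubemap re-projection of the equirectangular (ERP) input, since cube faces suffer far less distortion near the poles than ERP does. As a rough illustration of that standard resampling step — not the authors' implementation; the function name, face conventions, and nearest-neighbour sampling are illustrative assumptions — a minimal NumPy sketch:

```python
import numpy as np

def equirect_to_cubeface(erp, face_size, face="front"):
    """Sample one cube face from an equirectangular (ERP) image.

    erp: H x W x 3 array in equirectangular projection.
    Nearest-neighbour sampling is used for brevity; a real pipeline
    would interpolate bilinearly.
    """
    H, W = erp.shape[:2]
    # Pixel grid on the face plane, normalised to [-1, 1] (y grows upward).
    a = np.linspace(-1.0, 1.0, face_size)
    xg, yg = np.meshgrid(a, -a)
    ones = np.ones_like(xg)

    # 3D ray directions for each face of a unit cube (conventions assumed).
    faces = {
        "front": ( xg,    yg,  ones),
        "back":  (-xg,    yg, -ones),
        "right": ( ones,  yg,  -xg),
        "left":  (-ones,  yg,   xg),
        "up":    ( xg,  ones,  -yg),
        "down":  ( xg, -ones,   yg),
    }
    x, y, z = faces[face]

    # Ray direction -> spherical coordinates.
    lon = np.arctan2(x, z)                      # [-pi, pi]
    lat = np.arctan2(y, np.sqrt(x**2 + z**2))   # [-pi/2, pi/2]

    # Spherical coordinates -> ERP pixel coordinates (wrap longitude).
    u = ((lon + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = ((np.pi / 2 - lat) / np.pi * H).astype(int).clip(0, H - 1)
    return erp[v, u]
```

For example, `equirect_to_cubeface(erp_image, 256, "front")` yields one 256x256 face; repeating over all six faces produces the cubemap input a segmentation branch of this kind would consume, while the depth branch operates on the ERP image directly.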
Citation
Feng, Q., Shum, H. P. H., & Morishima, S. (2022, March). 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network. Presented at IEEE Conference on Virtual Reality and 3D User Interfaces, Christchurch, New Zealand.
| Field | Value |
|---|---|
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | IEEE Conference on Virtual Reality and 3D User Interfaces |
| Start Date | Mar 12, 2022 |
| End Date | Mar 16, 2022 |
| Acceptance Date | Dec 22, 2021 |
| Online Publication Date | Apr 20, 2022 |
| Publication Date | 2022 |
| Deposit Date | Jan 21, 2022 |
| Publicly Available Date | Jan 21, 2022 |
| Series ISSN | 2642-5246, 2642-5254 |
| DOI | https://doi.org/10.1109/vr51125.2022.00087 |
| Public URL | https://durham-repository.worktribe.com/output/1137732 |
Files
Accepted Conference Proceeding (PDF, 8.7 MB)
Copyright Statement
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.