On Depth Error from Spherical Camera Calibration within Omnidirectional Stereo Vision
Groom, M.; Breckon, T.P.
As a depth sensing approach, whilst stereo vision provides a good compromise between accuracy and cost, a key limitation is the limited field of view of the conventional cameras used within most stereo configurations. By contrast, the use of spherical cameras within a stereo configuration offers omnidirectional stereo sensing. However, despite the presence of significant image distortion in spherical camera images, only very limited attempts have been made to study and quantify omnidirectional stereo depth accuracy. In this paper we construct such an omnidirectional stereo system, capable of real-time 360° disparity map reconstruction, as the basis for such a study. We first investigate the accuracy of using a standard spherical camera model for calibration combined with a longitude-latitude projection for omnidirectional stereo, and show that the depth error increases significantly as the angle from the camera optical axis approaches the limits of the camera field of view. We then consider an alternative calibration approach via the use of perspective undistortion with a conventional pinhole camera model, allowing omnidirectional cameras to be mapped to a conventional rectilinear stereo formulation. We find that, conversely, this proposed approach exhibits improved depth accuracy at large angles from the camera optical axis when compared to omnidirectional stereo depth based on a spherical camera model calibration.
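The depth accuracy behaviour discussed above follows from the standard disparity-to-depth relation used in rectilinear stereo, Z = f·b/d, whose first-order error grows quadratically with depth. A minimal sketch of this relation (illustrative only, not the paper's pipeline; the focal length, baseline, and disparity-error values are assumed for the example):

```python
# Illustrative sketch: rectilinear stereo depth from disparity and its
# first-order error under a fixed disparity uncertainty. Values for
# focal length, baseline, and disparity error are assumed, not from
# the paper.

def depth_from_disparity(d_px, f_px, baseline_m):
    """Depth Z = f * b / d for disparity d (pixels)."""
    return f_px * baseline_m / d_px

def depth_error(d_px, f_px, baseline_m, disp_err_px=0.5):
    """First-order depth error: |dZ/dd| * delta_d = Z^2 * delta_d / (f * b).

    A fixed disparity error therefore produces a depth error that grows
    quadratically with depth (and hence worsens for distant points).
    """
    z = depth_from_disparity(d_px, f_px, baseline_m)
    return z ** 2 * disp_err_px / (f_px * baseline_m)

if __name__ == "__main__":
    f, b = 700.0, 0.12  # assumed focal length (px) and baseline (m)
    for d in (70.0, 7.0):  # a near point vs a far point
        print(f"d={d:5.1f} px  Z={depth_from_disparity(d, f, b):6.2f} m  "
              f"dZ={depth_error(d, f, b):5.3f} m")
```

In the spherical (longitude-latitude) formulation studied in the paper, disparity is angular rather than pixel-based, and the effective error additionally depends on the angle from the optical axis, which is the effect the paper quantifies.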
Groom, M., & Breckon, T. (2022). On Depth Error from Spherical Camera Calibration within Omnidirectional Stereo Vision.
|Conference: 26th International Conference on Pattern Recognition, Aug 21–25, 2022
|Acceptance Date: May 17, 2022
|Online Publication Date: Aug 22, 2022
|Deposit Date: Jun 15, 2022
|Publicly Available Date: Aug 25, 2022
|Publisher: Institute of Electrical and Electronics Engineers
Accepted Conference Proceeding
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.