C2SPoint: A classification-to-saliency network for point cloud saliency detection
Jiang, Zhaoyi; Ding, Luyun; Tam, Gary; Song, Chao; Li, Frederick W.B.; Yang, Bailin
Authors
Zhaoyi Jiang
Luyun Ding
Gary Tam
Chao Song
Dr Frederick Li (frederick.li@durham.ac.uk), Associate Professor
Bailin Yang
Abstract
Point cloud saliency detection is an important technique that supports downstream tasks in 3D graphics and vision, such as 3D model simplification, compression, reconstruction and viewpoint selection. Existing approaches often rely on hand-crafted features and are only applicable to specific datasets. In this paper, we propose a novel weakly supervised classification network, called C2SPoint, which directly performs saliency detection on point clouds. Unlike previous methods that require per-point saliency annotations, C2SPoint only requires category labels for the point clouds during training. The network consists of two branches: a Classification branch and a Saliency branch. The former is composed of two Adaptive Set Abstraction layers for feature extraction and a Saliency Transform layer for learning saliency knowledge from the classification network. The latter introduces a multi-scale point-cluster similarity matrix that propagates the saliency of each cluster to the points within it, yielding point-level saliency predictions. Experimental results demonstrate the effectiveness of our method in point cloud saliency detection, with improvements of 2% in both AUC and NSS over state-of-the-art methods.
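For readers who want a concrete picture of the pipeline the abstract describes, the following is a minimal, hypothetical PyTorch sketch of the general classification-to-saliency idea: only category labels supervise training, cluster-level saliency is derived from the classifier's evidence, and point-level saliency is obtained by propagating it through a point-cluster similarity matrix. This is not the authors' implementation; the class name C2SSketch, the random cluster selection (standing in for farthest point sampling), the plain MLP (standing in for the Adaptive Set Abstraction layers), and the class-activation-style cluster saliency (standing in for the Saliency Transform layer) are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of a weakly supervised classification-to-saliency
# pipeline. Names and design choices are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C2SSketch(nn.Module):
    def __init__(self, num_classes=40, feat_dim=64, num_clusters=32):
        super().__init__()
        self.num_clusters = num_clusters
        # Per-point feature extractor (a plain MLP standing in for the
        # Adaptive Set Abstraction layers of the paper).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Classification head over pooled cluster features; category labels
        # are the only supervision used during training.
        self.cls_head = nn.Linear(feat_dim, num_classes)

    def forward(self, xyz):
        # xyz: (B, N, 3) point coordinates.
        B, N, _ = xyz.shape
        feats = self.point_mlp(xyz)                                 # (B, N, F)

        # Select cluster centres (random subsample here; a real pipeline
        # would use farthest point sampling or similar).
        idx = torch.randperm(N)[: self.num_clusters]
        centre_feats = feats[:, idx, :]                             # (B, C, F)

        # Classification branch: pool cluster features and predict the category.
        logits = self.cls_head(centre_feats.max(dim=1).values)     # (B, K)

        # Cluster saliency from class-activation-style evidence: weight each
        # cluster feature by the classifier weights of the predicted class.
        pred = logits.argmax(dim=1)                                 # (B,)
        w = self.cls_head.weight[pred]                              # (B, F)
        cluster_sal = torch.sigmoid(
            torch.einsum('bcf,bf->bc', centre_feats, w))            # (B, C)

        # Saliency branch: propagate cluster saliency to every point through a
        # softmax-normalised point-cluster feature-similarity matrix.
        sim = torch.einsum('bnf,bcf->bnc',
                           F.normalize(feats, dim=-1),
                           F.normalize(centre_feats, dim=-1))       # (B, N, C)
        point_sal = torch.einsum('bnc,bc->bn', sim.softmax(dim=-1), cluster_sal)
        return logits, point_sal


if __name__ == "__main__":
    model = C2SSketch()
    pts = torch.rand(2, 1024, 3)
    logits, sal = model(pts)
    print(logits.shape, sal.shape)  # torch.Size([2, 40]) torch.Size([2, 1024])
```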
Citation
Jiang, Z., Ding, L., Tam, G., Song, C., Li, F. W., & Yang, B. (2023). C2SPoint: A classification-to-saliency network for point cloud saliency detection. Computers and Graphics, 115, 274-284. https://doi.org/10.1016/j.cag.2023.07.003
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Jul 3, 2023 |
| Online Publication Date | Jul 8, 2023 |
| Publication Date | 2023 |
| Deposit Date | Jul 14, 2023 |
| Publicly Available Date | Jul 9, 2024 |
| Journal | Computers & Graphics |
| Print ISSN | 0097-8493 |
| Electronic ISSN | 0097-8493 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| Volume | 115 |
| Pages | 274-284 |
| DOI | https://doi.org/10.1016/j.cag.2023.07.003 |
| Public URL | https://durham-repository.worktribe.com/output/1168730 |
Files
Accepted Journal Article (PDF, 5.4 MB)
Publisher Licence URL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Copyright Statement
© 2023. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/