No-reference synthetic image quality assessment with convolutional neural network and local image saliency
Wang, Xiaochuan; Liang, Xiaohui; Yang, Bailin; Li, Frederick W.B.
Abstract
Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Synthetic images generated by DIBR-based systems typically contain various distortions, particularly geometric distortions caused by object dis-occlusion. Ensuring the quality of synthetic images is therefore critical to maintaining adequate service quality in such systems. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images because they are insensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on a convolutional neural network, introducing local image saliency as prediction weights. Because suitable training data are scarce, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. The results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
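The abstract describes the method only at a high level. The sketch below is purely illustrative, assuming a small patch-based CNN regressor and a local-variance saliency proxy (neither is the authors' actual architecture or saliency model); it shows how per-patch CNN quality scores can be pooled with saliency weights into a single no-reference quality estimate.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact method):
# a tiny CNN scores image patches, and per-patch saliency weights pool the
# scores into one no-reference quality estimate.
import torch
import torch.nn as nn


class PatchQualityCNN(nn.Module):
    """Tiny illustrative CNN that regresses a quality score per 32x32 grayscale patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, patches):                 # patches: (N, 1, 32, 32)
        x = self.features(patches).flatten(1)   # (N, 64)
        return self.regressor(x).squeeze(1)     # (N,) per-patch scores


def saliency_weighted_score(image, model, patch=32):
    """Cut a grayscale image into non-overlapping patches, score each patch with
    the CNN, and pool the scores weighted by a local-variance saliency proxy."""
    # image: (1, H, W) tensor in [0, 1]; H and W assumed divisible by `patch`
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.reshape(1, -1, patch, patch).transpose(0, 1)  # (N, 1, p, p)
    with torch.no_grad():
        scores = model(patches)                                     # (N,)
    weights = patches.flatten(1).var(dim=1) + 1e-6                  # saliency proxy
    return float((scores * weights).sum() / weights.sum())


# Usage (random input, untrained network, for illustration only):
quality = saliency_weighted_score(torch.rand(1, 256, 256), PatchQualityCNN().eval())
print(quality)
```

In the paper itself, the saliency weights come from a local image saliency measure and the network is trained on DIBR-specific distortions; the variance-based weighting above merely stands in for that idea.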
Citation
Wang, X., Liang, X., Yang, B., & Li, F. W. B. (2019). No-reference synthetic image quality assessment with convolutional neural network and local image saliency. Computational Visual Media, 5(2), 193-208. https://doi.org/10.1007/s41095-019-0131-6
| Field | Value |
| --- | --- |
| Journal Article Type | Article |
| Acceptance Date | Jan 27, 2019 |
| Online Publication Date | Mar 30, 2019 |
| Publication Date | Jun 1, 2019 |
| Deposit Date | Jul 12, 2019 |
| Publicly Available Date | Jul 12, 2019 |
| Journal | Computational Visual Media |
| Print ISSN | 2096-0433 |
| Electronic ISSN | 2096-0662 |
| Publisher | SpringerOpen |
| Peer Reviewed | Peer Reviewed |
| Volume | 5 |
| Issue | 2 |
| Pages | 193-208 |
| DOI | https://doi.org/10.1007/s41095-019-0131-6 |
| Public URL | https://durham-repository.worktribe.com/output/1292541 |
Files
Published Journal Article (PDF, 2.6 MB)
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/
Copyright Statement
© The Author(s) 2019
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
You might also like
Advances in Web-Based Learning - ICWL 2015
(2015)
Book
Tackling Data Bias in Painting Classification with Style Transfer
(2023)
Presentation / Conference Contribution
Aesthetic Enhancement via Color Area and Location Awareness
(2022)
Presentation / Conference Contribution
STIT: Spatio-Temporal Interaction Transformers for Human-Object Interaction Recognition in Videos
(2022)
Presentation / Conference Contribution
STGAE: Spatial-Temporal Graph Auto-Encoder for Hand Motion Denoising
(2021)
Presentation / Conference Contribution