Domain Adaptation via Image Style Transfer
Authors
Atapour-Abarghouei, A.
Professor Toby Breckon (toby.breckon@durham.ac.uk)
Editors
Hemanth Venkateswara
Sethuraman Panchanathan
Abstract
While recent growth in modern machine learning techniques has led to remarkable strides in computer vision applications, one of the most significant challenges facing learning-based vision systems is the scarcity of the large, high-fidelity datasets required for training large-scale models. This has made transfer learning and domain adaptation a highly active area of research, in which the objective is to adapt a model trained on data from one domain so that it performs well on previously unseen data from a different domain. In this chapter, we use monocular depth estimation as a means of demonstrating a new perspective on domain adaptation. Most monocular depth estimation approaches either rely on large quantities of ground truth depth data, which is extremely expensive and difficult to obtain, or alternatively predict disparity as an intermediate step using a secondary supervisory signal, leading to blurring and other artefacts. Training a depth estimation model on pixel-perfect synthetic depth images can resolve most of these issues, but introduces the problem of domain shift from synthetic to real-world data. Here, we take advantage of recent advances in image style transfer and its connection with domain adaptation to predict depth from a single colour image, based on training over a large corpus of synthetic data obtained from a virtual environment. Experimental results point to the impressive capabilities of style transfer used as a means of adapting the model to unseen data from a different domain.
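The approach the abstract describes amounts to a two-stage pipeline at inference time: translate the real-world input image into the synthetic style the depth model was trained on, then run the synthetically trained depth estimator on the translated image. A minimal sketch of that pipeline follows; `style_transfer` and `depth_net` are hypothetical stand-ins (identity mapping and a channel mean), not the authors' actual trained models:

```python
import numpy as np

def style_transfer(real_image: np.ndarray) -> np.ndarray:
    """Hypothetical generator G: maps a real-domain RGB image into the
    synthetic (virtual-environment) style the depth model was trained on.
    Stand-in here: identity mapping."""
    return real_image

def depth_net(synthetic_style_image: np.ndarray) -> np.ndarray:
    """Hypothetical depth estimator trained on pixel-perfect synthetic
    depth. Stand-in here: mean over the colour channels, giving one
    value per pixel in place of a real depth prediction."""
    return synthetic_style_image.mean(axis=-1)

def predict_depth(real_image: np.ndarray) -> np.ndarray:
    # Domain adaptation at inference time: translate the input into the
    # training (synthetic) domain before estimating depth.
    adapted = style_transfer(real_image)
    return depth_net(adapted)

rgb = np.random.rand(192, 640, 3).astype(np.float32)  # H x W x 3 colour image
depth = predict_depth(rgb)
print(depth.shape)  # one depth value per pixel: (192, 640)
```

The design choice illustrated is that the depth network never sees real-domain pixels directly; the domain gap is handled entirely by the image-to-image translation stage.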
Citation
Atapour-Abarghouei, A., & Breckon, T. (2020). Domain Adaptation via Image Style Transfer. In H. Venkateswara, & S. Panchanathan (Eds.), Domain adaptation in computer vision with deep learning (137-156). Springer Verlag. https://doi.org/10.1007/978-3-030-45529-3_8
| Field | Value |
| --- | --- |
| Online Publication Date | Aug 19, 2020 |
| Publication Date | 2020 |
| Deposit Date | Aug 25, 2020 |
| Publicly Available Date | Aug 19, 2022 |
| Publisher | Springer Verlag |
| Pages | 137-156 |
| Book Title | Domain adaptation in computer vision with deep learning |
| ISBN | 9783030455286 |
| DOI | https://doi.org/10.1007/978-3-030-45529-3_8 |
| Public URL | https://durham-repository.worktribe.com/output/1656366 |
Files
Accepted Book Chapter (PDF, 2.9 MB)
Copyright Statement
This is a post-peer-review, pre-copyedit version of a chapter published in Domain adaptation in computer vision with deep learning. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-45529-3_8