Research Repository

DrawGAN: Multi-view Generative Model Inspired By The Artist's Drawing Method

Yang, Bailin; Chen, Zheng; Li, Frederick W. B.; Sun, Haoqiang; Cai, Jianlu

Authors

Bailin Yang

Zheng Chen

Frederick W. B. Li

Haoqiang Sun

Jianlu Cai



Abstract

We present a novel approach for modeling artists' drawing processes using an architecture that combines an unconditional generative adversarial network (GAN) with a multi-view generator and multiple discriminators. Our method excels at synthesizing various types of drawings, including line drawings, shaded drawings, and color drawings, achieving high quality and robustness. Notably, our approach surpasses existing state-of-the-art unconditional GANs. The key novelty of our approach lies in its architecture design, which closely mirrors the typical sequence of an artist's drawing process, leading to significantly enhanced image quality. Through experimental results on few-shot datasets, we demonstrate the potential of leveraging a multi-view generative model to enhance feature knowledge and modulate the image generation process. Our proposed method holds great promise for advancing AI in the visual arts and opens up new avenues for research and creative practice.
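To make the architectural idea in the abstract concrete, the staged "multi-view generator with per-view discriminators" design can be sketched in miniature: one shared latent code is decoded into a line-drawing view, then a shading view conditioned on the lines, then a color view conditioned on both, with a separate discriminator scoring each view. This is a minimal numpy illustration of that pattern only; all class names, layer sizes, and the linear-layer parameterization are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiViewGenerator:
    """Toy sketch: one latent code is decoded stage by stage, echoing an
    artist's workflow -- line drawing first, then shading conditioned on
    the lines, then color conditioned on both earlier views."""
    def __init__(self, z_dim=8, img_dim=16):
        # Illustrative single linear layers; a real GAN would use deep conv nets.
        self.w_line = rng.normal(0, 0.1, size=(z_dim, img_dim))
        self.w_shade = rng.normal(0, 0.1, size=(z_dim + img_dim, img_dim))
        self.w_color = rng.normal(0, 0.1, size=(z_dim + 2 * img_dim, img_dim))

    def __call__(self, z):
        line = np.tanh(z @ self.w_line)
        shade = np.tanh(np.concatenate([z, line], axis=1) @ self.w_shade)
        color = np.tanh(np.concatenate([z, line, shade], axis=1) @ self.w_color)
        return {"line": line, "shading": shade, "color": color}

class ViewDiscriminator:
    """One discriminator per view; each scores only its own drawing type."""
    def __init__(self, img_dim=16):
        self.w = rng.normal(0, 0.1, size=(img_dim, 1))

    def __call__(self, x):
        return sigmoid(x @ self.w)  # probability-like realness score

gen = MultiViewGenerator()
discs = {view: ViewDiscriminator() for view in ("line", "shading", "color")}

z = rng.normal(size=(4, 8))                       # batch of 4 latent codes
views = gen(z)                                    # three views per sample
scores = {v: discs[v](img) for v, img in views.items()}
```

Chaining each view on the previous ones is what distinguishes this staged design from training three independent generators: later stages can reuse structure established by the earlier ones, which is the intuition the abstract attributes to the artist-inspired ordering.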

Citation

Yang, B., Chen, Z., Li, F. W. B., Sun, H., & Cai, J. (in press). DrawGAN: Multi-view Generative Model Inspired By The Artist's Drawing Method.

Conference Name: Computer Graphics International 2023
Conference Location: Shanghai, China
Start Date: Aug 28, 2023
End Date: Sep 1, 2023
Acceptance Date: Jun 9, 2023
Deposit Date: Sep 12, 2023
Publicly Available Date: Sep 27, 2023
Publisher: Springer Nature
Public URL: https://durham-repository.worktribe.com/output/1735773