
∞-Diff: Infinite Resolution Diffusion with Subsampled Mollified States

Bond-Taylor, Sam; Willcocks, Chris G.

Authors

Sam Bond-Taylor samuel.e.bond-taylor@durham.ac.uk
PGR Student, Doctor of Philosophy



Abstract

This paper introduces ∞-Diff, a generative diffusion model defined in an infinite-dimensional Hilbert space, which can model infinite resolution data. By training on randomly sampled subsets of coordinates and denoising content only at those locations, we learn a continuous function for arbitrary resolution sampling. Unlike prior neural field-based infinite-dimensional models, which use point-wise functions requiring latent compression, our method employs non-local integral operators to map between Hilbert spaces, allowing spatial context aggregation. This is achieved with an efficient multi-scale function-space architecture that operates directly on raw sparse coordinates, coupled with a mollified diffusion process that smooths out irregularities. Experiments on high-resolution datasets show that even at an 8× subsampling rate, our model still produces high-quality samples. This leads to significant run-time and memory savings, delivers samples with lower FID scores, and scales beyond the training resolution while retaining detail.
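The core training idea described above, denoising only at a randomly sampled subset of coordinates, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the helper names (`subsample_coords`, `noisy_state_at`), the cosine noise schedule, and the use of a random array as a stand-in image are all assumptions made for illustration.

```python
import numpy as np

def subsample_coords(h, w, rate, rng):
    """Pick a random fraction 1/rate of the pixel coordinates (hypothetical helper)."""
    n = h * w
    idx = rng.choice(n, size=n // rate, replace=False)
    return np.stack([idx // w, idx % w], axis=1)  # (k, 2) integer coordinates

def noisy_state_at(img, coords, t, rng):
    """Forward-diffuse only at the sampled coordinates:
    x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps."""
    alpha_bar = np.cos(0.5 * np.pi * t) ** 2  # simple cosine schedule (assumption)
    x0 = img[coords[:, 0], coords[:, 1]]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
img = rng.standard_normal((256, 256))                 # stand-in for a continuous image
coords = subsample_coords(256, 256, rate=8, rng=rng)  # 8x subsampling, as in the paper
x_t, eps = noisy_state_at(img, coords, t=0.5, rng=rng)
# A denoiser would then be trained to predict eps from (coords, x_t, t)
# at only these locations, so the cost per step scales with the subset size.
```

Because the loss is computed only at the sampled locations, an 8× subsampling rate reduces per-step memory and compute roughly eightfold, which is the source of the savings reported in the abstract.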

Citation

Bond-Taylor, S., & Willcocks, C. G. (in press). ∞-Diff: Infinite Resolution Diffusion with Subsampled Mollified States. In The Twelfth International Conference on Learning Representations.

Conference Name The International Conference on Learning Representations (ICLR)
Conference Location Vienna, Austria
Start Date May 7, 2024
End Date May 11, 2024
Acceptance Date Jan 16, 2024
Deposit Date Mar 14, 2024
Book Title The Twelfth International Conference on Learning Representations
Public URL https://durham-repository.worktribe.com/output/2328620
Publisher URL https://openreview.net/forum?id=OUeIBFhyem
Related Public URLs https://iclr.cc/
https://iclr.cc/virtual/2024/poster/18741