UQDM, a new form of diffusion model, enables practical progressive compression with a single unconditional model, sidestepping the computational intractability of Gaussian channel simulation by using universal quantization instead.
Diffusion probabilistic models have achieved mainstream success in many generative modeling tasks, from image generation to solving inverse problems. A distinct feature of these models is that they correspond to deep hierarchical latent variable models optimizing a variational evidence lower bound (ELBO) on the data likelihood. Drawing on a basic connection between likelihood modeling and compression, we explore the potential of diffusion models for progressive coding, resulting in a sequence of bits that can be incrementally transmitted and decoded with progressively improving reconstruction quality. Unlike prior work based on Gaussian diffusion or conditional diffusion models, we propose a new form of diffusion model with uniform noise in the forward process, whose negative ELBO corresponds to the end-to-end compression cost using universal quantization. We obtain promising first results on image compression, achieving competitive rate-distortion-realism results on a wide range of bit-rates with a single model, bringing neural codecs a step closer to practical deployment.
A progressive compression algorithm produces lossy reconstructions whose quality improves as more bits are received, up to a lossless reconstruction. This yields variable-rate compression from a single bitstream, which is highly desirable in practical applications. We follow the same conceptual framework as Ho et al. (2020) and Theis et al. (2022); crucially, however, we avoid Gaussian channel simulation and bits-back coding by instead using universal quantization.
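The key property of universal quantization (subtractive dithered quantization) that makes this possible: rounding with a dither shared between encoder and decoder yields a reconstruction error that is uniform and independent of the input, i.e., decoding is distributionally identical to adding uniform noise. A minimal NumPy sketch on toy data (not the paper's codec):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)   # toy "latents" to be compressed

# Dither shared by encoder and decoder (e.g., from a synchronized PRNG seed).
u = rng.uniform(-0.5, 0.5, size=x.shape)

q = np.round(x - u)    # integer symbols, sent losslessly with an entropy coder
x_hat = q + u          # decoder-side reconstruction

# The error is uniform on (-1/2, 1/2) and independent of x, so the decoder's
# output looks exactly like x corrupted by additive uniform noise.
err = x_hat - x
```

Because the channel "add uniform noise" can thus be simulated exactly at the cost of entropy-coding integers, no Gaussian channel simulation or bits-back scheme is needed.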
To this end, our diffusion model replaces the Gaussian conditional distributions with uniform distributions, allowing for end-to-end training in which the negative ELBO (NELBO) training objective directly corresponds to the lossless coding cost of our progressive codec. For more details and results, see our paper.
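To make the rate-distortion behavior concrete, here is a toy multi-rate sketch: dithered quantization of the same data at successively finer step sizes costs more bits per sample (measured as the empirical entropy of the transmitted symbols) while reducing distortion, mimicking how a progressive bitstream trades rate for quality. The step-size schedule below is hypothetical, purely for illustration, and is not the paper's learned hierarchy:

```python
import numpy as np

def empirical_entropy_bits(q):
    """Empirical Shannon entropy of the integer symbols, in bits per sample."""
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=50_000)  # toy data standing in for latents

results = []
for d in [2.0, 1.0, 0.5, 0.25]:       # hypothetical coarse-to-fine step sizes
    u = rng.uniform(-0.5, 0.5, size=x.shape)
    q = np.round(x / d - u)           # symbols to entropy-code at this stage
    x_hat = (q + u) * d               # reconstruction at this fidelity
    rate = empirical_entropy_bits(q)  # ideal lossless coding cost per sample
    mse = float(np.mean((x_hat - x) ** 2))
    results.append((d, rate, mse))

# Finer steps cost more bits but yield lower distortion: for dithered
# quantization the error is uniform on (-d/2, d/2), so MSE ≈ d**2 / 12.
```

In the actual codec, each stage refines the previous one within a single bitstream rather than re-quantizing from scratch, but the same rate-for-distortion trade governs every step.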
@inproceedings{yang2025universal,
  title     = {Progressive Compression with Universally Quantized Diffusion Models},
  author    = {Yibo Yang and Justus Will and Stephan Mandt},
  booktitle = {International Conference on Learning Representations},
  year      = {2025}
}