Lossy Compression With Pretrained Diffusion Models

Jeremy Vonderfecht

Portland State University

Feng Liu

Portland State University

ICLR 2025

We present a lossy compression method that can leverage state-of-the-art diffusion models for entropy coding. Our method works zero-shot, requiring no additional training of the diffusion model or any ancillary networks. We apply the DiffC algorithm¹ to Stable Diffusion 1.5, 2.1, XL, and Flux-dev. We demonstrate that our method is competitive with other state-of-the-art generative compression methods at ultra-low bitrates.
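The core idea behind DiffC (Theis et al., 2022) is that a sender and receiver who share the same diffusion model can communicate an image through relative entropy coding: at each reverse-diffusion step, the sender transmits a sample from its posterior over the next noisy latent, and the ideal bit cost is the KL divergence between that posterior and the model's denoising prior. The sketch below is purely illustrative (not the paper's implementation): it computes this ideal per-step cost for diagonal Gaussians, with all means, variances, and dimensions chosen arbitrarily for the example.

```python
import numpy as np

def gaussian_kl_bits(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between diagonal Gaussians, in bits.

    Under ideal relative entropy coding, this is the expected cost of
    transmitting a sample from q (the sender's posterior, which depends
    on the image) to a receiver who only knows p (the shared diffusion
    model's denoising prior).
    """
    kl_nats = (np.log(sigma_p / sigma_q)
               + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
               - 0.5)
    return kl_nats.sum() / np.log(2)

# Toy example: a 4-dim latent at one reverse-diffusion step.
rng = np.random.default_rng(0)
mu_p = np.zeros(4)                            # model's predicted mean
mu_q = mu_p + 0.1 * rng.standard_normal(4)    # posterior mean (sender side)
step_bits = gaussian_kl_bits(mu_q, np.full(4, 0.9), mu_p, np.full(4, 1.0))
```

Summing this cost over all timesteps gives the total ideal bitrate; truncating the reverse process earlier trades bits for reconstruction quality.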

[Interactive figure: rate-distortion curves, bits per pixel (bpp) vs. PSNR (dB) and LPIPS, comparing DiffC, MS-ILLM, DiffEIC, and PerCo.]

The following video shows lossy reconstructions at each of Flux’s 1,000 timesteps:

Citation

@inproceedings{vonderfecht2025lossy,
  title={Lossy Compression with Pretrained Diffusion Models},
  author={Jeremy Vonderfecht and Feng Liu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=raUnLe0Z04}
}


Footnotes

  1. Theis, L., Salimans, T., Hoffman, M. D., & Mentzer, F. (2022). Lossy compression with Gaussian diffusion. arXiv preprint arXiv:2206.08889.