We present a lossy compression method that can leverage state-of-the-art diffusion models for entropy coding. Our method works zero-shot, requiring no additional training of the diffusion model or any ancillary networks. We apply the DiffC algorithm¹ to Stable Diffusion 1.5, 2.1, XL, and Flux-dev, and demonstrate that our method is competitive with other state-of-the-art generative compression methods at ultra-low bitrates.
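To give a sense of how a diffusion model can serve as an entropy code, here is a toy, hedged sketch of the core primitive that DiffC (Theis et al., 2022)¹ builds on: communicating a sample from a posterior q using only a shared prior p and shared randomness (often called reverse channel coding or minimal random coding). This is not our implementation or the paper's algorithm; the Gaussian distributions, candidate count, and function names are illustrative assumptions. The encoder and decoder draw the same candidate pool from the prior with a shared seed, and only a candidate index is transmitted:

```python
import numpy as np

N_CANDIDATES = 1024  # index costs ~log2(N) bits to transmit (illustrative choice)
SHARED_SEED = 0      # shared randomness between encoder and decoder

def _candidates(seed=SHARED_SEED, n=N_CANDIDATES):
    # Both sides regenerate the same candidate pool from the prior p = N(0, 1).
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n)

def encode(mu_q, sigma_q, seed=SHARED_SEED):
    """Pick a candidate approximately distributed as q = N(mu_q, sigma_q^2)."""
    z = _candidates(seed)
    # Importance weights w_i ∝ q(z_i) / p(z_i), computed in log space.
    log_q = -0.5 * ((z - mu_q) / sigma_q) ** 2 - np.log(sigma_q)
    log_p = -0.5 * z ** 2
    log_w = log_q - log_p
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    rng = np.random.default_rng(seed + 1)  # selection noise, also shared-seeded here
    return int(rng.choice(len(z), p=probs))  # this index is the "message"

def decode(index, seed=SHARED_SEED):
    """Recover the exact sample the encoder selected, from the index alone."""
    return _candidates(seed)[index]
```

In DiffC-style compression this primitive is applied step by step along the diffusion trajectory, with the pretrained diffusion model supplying the prior at each timestep, so the cost of each step is roughly the KL divergence between posterior and prior rather than a fixed codebook size.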
The following video shows lossy reconstructions at each of Flux’s 1,000 timesteps:
@inproceedings{vonderfecht2025lossy,
title={Lossy Compression with Pretrained Diffusion Models},
author={Jeremy Vonderfecht and Feng Liu},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=raUnLe0Z04}
}
Theis, L., Salimans, T., Hoffman, M. D., & Mentzer, F. (2022). Lossy compression with Gaussian diffusion. arXiv preprint arXiv:2206.08889. ↩