Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models

¹VCIP, CS, Nankai University, ²Mohamed bin Zayed University of AI, ³Linköping University
⁓Harbin Engineering University, ⁵Universitat Autònoma de Barcelona

*Equal contribution   †Corresponding author

Our approach can easily be combined with various diffusion-model-based tasks (such as text-to-image, personalized generation, and video generation) and various sampling strategies (such as 50-step DDIM and 20-step DPM-Solver) to achieve training-free acceleration.
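To make the sampler compatibility concrete, below is a minimal sketch using the Hugging Face diffusers API of the two sampler settings mentioned above. The encoder-propagation acceleration itself would be applied by patching the pipeline's UNet (see the code release); that patch is omitted here, so this only shows the ordinary configurations the method plugs into.

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a photo of an astronaut riding a horse"

# 50-step DDIM sampling
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
image_ddim = pipe(prompt, num_inference_steps=50).images[0]

# 20-step DPM-Solver sampling
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image_dpm = pipe(prompt, num_inference_steps=20).images[0]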

Abstract

One of the key components within diffusion models is the UNet used for noise prediction. While several works have explored basic properties of the UNet decoder, its encoder largely remains unexplored. In this work, we conduct the first comprehensive study of the UNet encoder. We empirically analyze the encoder features and provide insights into important questions regarding how they change during the inference process. In particular, we find that encoder features change gently across time-steps, whereas decoder features exhibit substantial variations. This finding inspires us to omit the encoder at certain adjacent time-steps and cyclically reuse the encoder features from previous time-steps in the decoder. Based on this observation, we introduce a simple yet effective encoder propagation scheme that accelerates diffusion sampling for a diverse set of tasks. Our propagation scheme further allows us to run the decoder in parallel at those adjacent time-steps. Additionally, we introduce a prior noise injection method to improve the texture details of the generated images. Besides the standard text-to-image task, we validate our approach on other tasks: text-to-video, personalized generation, and reference-guided generation. Without using any knowledge distillation technique, our approach accelerates sampling in both Stable Diffusion (SD) and DeepFloyd-IF by 41% and 24%, respectively, while maintaining high-quality generation.
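As a rough illustration of the two ideas in the abstract, the sketch below caches the encoder's skip features at selected "key" time-steps and reuses them at the adjacent non-key steps, where only the decoder runs; it also blends a small amount of the initial noise back into the latent as one plausible reading of prior noise injection. All names here (encode, decode, scheduler_step, alpha) are hypothetical stand-ins for illustration, not the paper's actual API.

from typing import Callable, Sequence, Set
import torch

def inject_prior_noise(latents: torch.Tensor, z_T: torch.Tensor,
                       alpha: float = 0.003) -> torch.Tensor:
    # Hypothetical reading of prior noise injection: blend a small amount of
    # the initial noise z_T back into the latent to preserve texture detail.
    return latents + alpha * z_T

def sample_with_encoder_propagation(
    encode: Callable,          # hypothetical: encoder half of the UNet, returns skip features
    decode: Callable,          # hypothetical: decoder half, consumes skip features
    scheduler_step: Callable,  # sampler update, e.g. one DDIM step x_t -> x_{t-1}
    z_T: torch.Tensor,         # initial Gaussian latent
    timesteps: Sequence[int],  # descending diffusion time-steps
    key_steps: Set[int],       # time-steps where the encoder is actually run
    cond: torch.Tensor,        # text embedding
) -> torch.Tensor:
    latents, cached = z_T, None
    for t in timesteps:
        if cached is None or t in key_steps:
            # Encoder features change gently, so recompute them only at key steps.
            cached = encode(latents, t, cond)
        # The decoder runs at every step; at non-key steps it reuses the cached
        # encoder features, which is what makes skipping the encoder (and, in
        # the paper, running the decoder in parallel at those steps) possible.
        noise_pred = decode(latents, t, cond, cached)
        latents = scheduler_step(latents, noise_pred, t)
        latents = inject_prior_noise(latents, z_T)
    return latents

Choosing which time-steps count as "key" is the crux of the scheme; the paper studies both uniform and non-uniform selection strategies.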

Qualitative results


~1.4x acceleration for Text2Video-Zero, original video (left) and ours (right)

 

~1.5x acceleration for VideoFusion, original video (left) and ours (right)

 

Quantitative results



Video Presentation

Coming Soon

BibTeX


@misc{li2023faster,
  title={Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models},
  author={Senmao Li and Taihang Hu and Fahad Shahbaz Khan and Linxuan Li and Shiqi Yang and Yaxing Wang and Ming-Ming Cheng and Jian Yang},
  year={2023},
  eprint={2312.09608},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}