Knowledge Distillation in Diffusion Models
Project Description
This project explores knowledge distillation and fine-tuning techniques to reduce the parameter count of diffusion models. Diffusion models demand significant GPU resources during inference, which hinders their deployment on edge devices. By adopting knowledge distillation and fine-tuning, we aim to mitigate this issue and make diffusion models more practical to run on such devices.
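To illustrate the kind of distillation the project targets, the sketch below trains a smaller student denoiser to match a frozen teacher's noise predictions at randomly sampled timesteps. The TinyDenoiser class, the simplified noising schedule, and all hyperparameters are placeholder assumptions for illustration, not part of the project specification; a real setup would use pretrained U-Net denoisers and a proper variance schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a denoiser; a real project would use pretrained
# U-Nets (e.g., from an existing diffusion codebase) as teacher and student.
class TinyDenoiser(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x_t, t):
        # A real denoiser also conditions on the timestep t; omitted for brevity.
        return self.net(x_t)

teacher = TinyDenoiser(channels=128).eval()   # assumed pretrained, kept frozen
student = TinyDenoiser(channels=32)           # smaller model to be distilled
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

def distillation_step(x0, num_timesteps=1000):
    """One distillation step: the student mimics the teacher's noise prediction."""
    t = torch.randint(0, num_timesteps, (x0.size(0),))
    noise = torch.randn_like(x0)
    # Simplified linear schedule purely for illustration.
    alpha = 1.0 - t.float().div(num_timesteps).view(-1, 1, 1, 1)
    x_t = alpha.sqrt() * x0 + (1 - alpha).sqrt() * noise

    with torch.no_grad():
        target = teacher(x_t, t)              # teacher's prediction (no gradients)
    pred = student(x_t, t)                    # student's prediction
    loss = F.mse_loss(pred, target)           # match the teacher's outputs

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    batch = torch.randn(4, 3, 32, 32)         # dummy image batch
    print(distillation_step(batch))
```

In practice the distillation loss is usually combined with the standard denoising objective on real data, and the student is fine-tuned afterwards to recover quality lost from the reduced parameter count.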
Supervisor
YEUNG, Sai Kit
Quota
5
Course type
UROP1000
UROP1100
UROP2100
UROP3100
UROP3200
UROP4100
Applicant's Roles
Investigate existing techniques to reduce the size of diffusion models and propose a novel diffusion-based framework.
Applicant's Learning Objectives
Implement and evaluate an efficient framework for diffusion models.
Complexity of the project
Moderate