personalization-with-text-to-image-diffusion-models-feb2024

Get familiar with different fine-tuning techniques for text-to-image models, and learn how to teach a diffusion model a concept of your choosing

🎨 Fine-tuning text-to-image diffusion models for personalization and subject-driven generation

Presentation: Personalization of Diffusion Models with 🧨Diffusers

📚 Workshop description

During the workshop you will get familiar with different fine-tuning techniques for text-to-image models and learn how to easily teach a diffusion model a concept of your choosing (a special style, a pet, faces, etc.) with as few as 3 images depicting your concept.

🛠️ Requirements

Python >= 3.10 and some acquaintance with diffusion models and text-to-image models.

NOTE 💡 While we will briefly go over diffusion models, and Stable Diffusion specifically, we will not go into detail; we assume some familiarity with the diffusion process and the architecture of Stable Diffusion models.

TIP 💌 If you're not familiar with diffusion models but are interested in doing this workshop, check out this (free & open-source) introductory diffusion class 🤓

▶️ Usage

  • Clone the repository
  • Start Jupyter Lab and navigate to the workshop folder, or use Google Colab and import the Jupyter notebooks there.
  • Open the first workshop notebook
    • [Option 1] Install the requirements with pip install -r requirements.txt
    • [Option 2] Run the Setup cells in the notebook
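The steps above can be sketched in the terminal as follows (the clone URL and folder name are assumptions; use the clone URL shown on this repository's GitHub page):

```shell
# Clone the repository (replace <repo-url> with the URL from the GitHub page)
git clone <repo-url>
cd personalization-with-text-to-image-diffusion-models-feb2024

# Option 1: install the requirements directly
pip install -r requirements.txt

# Start Jupyter Lab, then open the first workshop notebook
jupyter lab
```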

🎬 Video recording

Re-watch this YouTube stream

🤝 Credits

This workshop was set up by @pyladiesams and @linoytsaban