Dhruv Nair
@kinjal-1007 You will have to install diffusers from main before running your code. `pip install git+https://github.com/huggingface/diffusers.git`
@Eduardishion @kinjal-1007 This issue only affects single file model loading. You can either try loading the model using `from_pretrained`:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
```
...
It's looking really nice. Obviously there are a lot of intricacies here that I might not have picked up, so in my initial pass I just tried to focus on...
@cdutr Sounds like a good idea. Would you like to try and add this processor?
Hi @LawJarp-A, I think we would need TeaCache to be implemented in a model-agnostic way in order to merge the PR. The First Block Cache implementation is a good...
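For reference, this is roughly what a model-agnostic cache looks like from the user side: a minimal sketch assuming the `CacheMixin.enable_cache` API and `FirstBlockCacheConfig` (which may require a recent diffusers install); the pipeline, prompt, and threshold below are just placeholders.

```python
import torch
from diffusers import FluxPipeline, FirstBlockCacheConfig  # FirstBlockCacheConfig assumed available in your diffusers version

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# The cache is attached to the transformer through the shared CacheMixin interface,
# so the same call works for any model that inherits from it.
pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=0.2))

image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```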
Sounds like a good option to me. @DefTruth would you like to work on adding it?
Hi @Passenger12138, could you try enabling group offloading before compiling?
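A minimal sketch of what I mean, assuming you are compiling the transformer; `enable_group_offload` is the model-level API, and the model id, prompt, and offload settings below are placeholders.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Attach the group offloading hooks to the transformer first, then compile it.
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
)
pipe.transformer.compile()

# The other components still need to be on the execution device
# (or offloaded the same way).
pipe.text_encoder.to("cuda")
pipe.text_encoder_2.to("cuda")
pipe.vae.to("cuda")

image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```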
@rodjjo It seems like you have a working pipeline based on the code you provided, so why not open a PR for it? Could you also share some results here?
@EmmaThompson123 To load the FP8 checkpoint, can you try running

```python
transformer = FluxTransformer2DModel.from_single_file(
    "/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev/flux1-dev-fp8.safetensors",
    torch_dtype=torch.bfloat16,
)
```

The first time you try to load the model, we will attempt to fetch...
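Once the transformer loads, it can be passed into the pipeline so the remaining components come from the Hub repo. A rough sketch; the repo id and the offloading choice below are just for illustration.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the FP8 single-file checkpoint into the transformer.
transformer = FluxTransformer2DModel.from_single_file(
    "/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev/flux1-dev-fp8.safetensors",
    torch_dtype=torch.bfloat16,
)

# Pass the transformer in so the other components are pulled from the Hub repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
```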
@sayakpaul Just for my reference could you point me to some Pixart Single File checkpoints that are widely used?