diffusers
Update stable_diffusion_tensorrt_txt2img.py
What does this PR do?
Fixes #6984
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the contributor guideline?
- [x] Did you read our philosophy doc (important for complex PRs)?
- [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
@yiyixuxu
Hi @yiyixuxu, I have created a sample draft file based on my understanding of the problem. I would love to discuss it and edit the file as needed. Please review. Thanks
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi:
Can you make sure the code example runs as expected? https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#tensorrt-text2image-stable-diffusion-pipeline
Hi @yiyixuxu, so I need to run this code sample in an online notebook and verify the output, right?
Hi @yiyixuxu, I tried running the example you suggested on Google Colab, but it runs out of memory. Should I run it on my laptop instead? Will it just run out of memory there too?
Hi @yiyixuxu, could you please review my notebook here? I am trying to run the example, but the Colab memory blows up every time. I am using the T4 GPU. Thanks
Hi @standardAI, I didn't get your comment. I need to find some function in TensorRTStableDiffusionPipeline that is the same as one from StableDiffusionPipeline, right? You said some function is in TensorRT which is not in TensorRT, so I'm confused.
Find one or several `self.something` that:
1. Are used/called in TensorRTStableDiffusionPipeline
2. Haven't been defined in TensorRTStableDiffusionPipeline
3. Are defined in StableDiffusionPipeline
4. Haven't been defined in DiffusionPipeline

Then copy their definitions from StableDiffusionPipeline into TensorRTStableDiffusionPipeline, and add the "# Copied from ..." phrase on top of each copied definition.
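As a concrete illustration, the "# Copied from ..." convention duplicates the method body verbatim and marks where it came from. A toy sketch (hypothetical method name, illustrative body; not the real pipeline code):

```python
# Toy sketch of the "# Copied from ..." convention described above
# (hypothetical method name and illustrative body, not real pipeline code).

class StableDiffusionPipeline:
    def normalize_prompt(self, prompt):
        # Illustrative helper: trim whitespace and lowercase the prompt.
        return prompt.strip().lower()


class TensorRTStableDiffusionPipeline:
    # Copied from StableDiffusionPipeline.normalize_prompt
    def normalize_prompt(self, prompt):
        # The body is duplicated verbatim from the source pipeline; the marker
        # comment above records the origin so the repo's copy-consistency
        # checks can keep the two definitions in sync.
        return prompt.strip().lower()
```

The marker comment is what the diffusers tooling scans for, so it should sit directly above the copied definition.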
Hi @standardAI, I have made the changes. Should I try to run the Colab notebook again? Why is the memory blowing up in that case?
Hi @standardAI, done!
Good work! I need to be AFK for now; I will review further tomorrow if needed.
Hi @standardAI, thank you for your kind suggestions. I have made the necessary changes and I think it is good to go. @yiyixuxu, could you please start the tests so it can be merged? I would like to see if it merges properly now. Thanks
Hi:
Can you make sure the code example runs as expected? https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#tensorrt-text2image-stable-diffusion-pipeline
Try this example to see if it works.
Hi @standardAI can you please review?
Hi @asfiyab-nvidia, thank you again for your TensorRT-version contribution!
Could you examine this refactor being done by @Bhavay-2001? Now, your implementation is supposed to inherit from DiffusionPipeline rather than StableDiffusionPipeline. Unfortunately, I know nothing about TensorRT.
A few comments
- Please update the example usage in the README as well.
- Based on my testing, the pipeline needs a few more updates in the __init__ call. The first error I'm encountering is below; fixing it leads to other issues that need to be addressed as well.
ValueError: Pipeline <class 'diffusers_modules.local.stable_diffusion_tensorrt_txt2img.TensorRTStableDiffusionPipeline'> expected {'scheduler', 'tokenizer', 'unet', 'text_encoder', 'vae', 'safety_checker', 'feature_extractor', 'image_encoder'}, but only {'scheduler', 'tokenizer', 'unet', 'text_encoder', 'vae'} were passed.
Script I'm using to test (adapted from the example in the community readme)
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines import DiffusionPipeline
# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
custom_pipeline="/workspace/diffusers/examples/community/stable_diffusion_tensorrt_txt2img.py",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
pipe = pipe.to("cuda")
prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
image = pipe(prompt).images[0]
image.save('tensorrt_mt_fuji.png')
Thank you very much for your feedback @asfiyab-nvidia!
Hi @asfiyab-nvidia, should I delete these parameters from the __init__, or provide default values for them? Those parameters are not defined in the DiffusionPipeline class.
When you add a proper class attribute, this error disappears. Please examine other pipelines.
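If the attribute being hinted at is the one used by other diffusers pipelines, it is `_optional_components`. A self-contained toy sketch (simplified registration logic and hypothetical class names, not the real diffusers code) of why it makes the ValueError above disappear:

```python
# Toy model (not the real diffusers code) of how component registration raises
# the ValueError quoted above, and how `_optional_components` exempts
# components that are allowed to be absent or None.

EXPECTED = {"scheduler", "tokenizer", "unet", "text_encoder", "vae",
            "safety_checker", "feature_extractor", "image_encoder"}


class ToyDiffusionPipeline:
    _optional_components = []

    def register_modules(self, **modules):
        passed = {name for name, module in modules.items() if module is not None}
        # Components that are expected, not passed, and not declared optional
        # trigger the error.
        missing = EXPECTED - passed - set(self._optional_components)
        if missing:
            raise ValueError(
                f"Pipeline expected {EXPECTED}, but only {passed} were passed."
            )


class ToyTensorRTPipeline(ToyDiffusionPipeline):
    # Declaring these as optional lets the pipeline be constructed without
    # them, mirroring the class attribute on e.g. StableDiffusionPipeline.
    _optional_components = ["safety_checker", "feature_extractor", "image_encoder"]
```

With only the five core components passed, `ToyTensorRTPipeline().register_modules(...)` succeeds, while the base class raises the same kind of ValueError quoted above.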
@Bhavay-2001 nice progress! @standardAI @asfiyab-nvidia thank you all for helping out! 😇😇😇
if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
Hi @standardAI, are you saying I should add these lines of code in order to bring in logger, deprecate, and the ValueError?
Sorry, I was not able to quote the exact same line from your file but I tried my best to make it easier for you.
Right 👍.
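For context, the `steps_offset` line quoted above belongs to a standard block in diffusers pipelines that warns about an outdated scheduler config and patches it. A simplified, self-contained sketch, with a plain `FutureWarning` standing in for the `logger`/`deprecate` helpers:

```python
import warnings


def patch_steps_offset(config: dict) -> dict:
    # If the scheduler config carries an outdated steps_offset, emit a
    # deprecation-style warning and return a copy with the value fixed to 1.
    if config.get("steps_offset", 1) != 1:
        warnings.warn(
            f"`steps_offset` is {config['steps_offset']} but should be 1; "
            "updating the scheduler config.",
            FutureWarning,
        )
        config = dict(config)
        config["steps_offset"] = 1
    return config
```

In the real pipeline, the warning text and version string come from diffusers' own `deprecate` helper rather than `warnings.warn`.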
I have added them for all the parameters except image_encoder, because I couldn't find its usage beyond TensorRT.
Also, the other two TensorRT files, inpaint and img2img, don't have these class attributes. Do we have to change them as well?
Thanks
The PR has been moved here.