Question about Canny and Scribble Control in imagen3_editing.ipynb: Control Strength and Scribble Application Process
File Name
https://github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/imagen3_editing.ipynb
What happened?
Hello,
I have a question regarding the imagen3_editing.ipynb notebook, specifically concerning the Canny and Scribble ControlNet controls.
Firstly, in the Canny control example, Canny edge detection appears to be applied to the input image as a preprocessing step before the result is used as the control image. I'm interested in how to control the strength or influence of this Canny control: is there a parameter or setting that adjusts how closely the generated image follows the Canny edges, i.e., a way to make the generation more or less reliant on the control?
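For context on what that preprocessing step produces, here is a toy gradient-threshold edge detector (a simplified stand-in for real Canny, which additionally does Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; `edge_map` is an illustrative name, not anything from the notebook). It shows the kind of binary white-on-black edge image the model is conditioned on:

```python
def edge_map(gray, threshold=32):
    """Toy edge detector: a simplified stand-in for Canny.

    `gray` is a 2-D list of 0-255 intensities. Returns a same-sized
    map with 255 where the local intensity gradient exceeds
    `threshold`, else 0 (borders are left at 0).
    """
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # central-difference gradients in x and y
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 255
    return edges
```

Varying the thresholds in the edge-detection step changes how much detail ends up in the control image, which is one indirect way to make the generation more or less constrained even if the API itself exposes no strength parameter.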
Secondly, the Scribble control example has no equivalent explicit preprocessing step. Could you clarify how to apply Scribble control correctly within imagen3_editing.ipynb? In particular, is a specific type of input image expected for Scribble, and are there particular steps needed to prepare a scribble image for use as a control?
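My working assumption (not confirmed by the notebook) is that a scribble control image is already in its final form, i.e., hand-drawn line strokes on a plain background, so no Canny-style preprocessing is needed. A minimal sketch of preparing such an image (`make_scribble` is a hypothetical helper, and the white-strokes-on-black polarity is an assumption worth checking against the notebook's sample inputs):

```python
def make_scribble(width, height, strokes):
    """Hypothetical scribble-preparation helper.

    Assumption: the model expects white strokes on a black background;
    verify the polarity against the notebook's sample scribble inputs.
    `strokes` is a list of ((x0, y0), (x1, y1)) line segments.
    """
    canvas = [[0] * width for _ in range(height)]
    for (x0, y0), (x1, y1) in strokes:
        # naive line rasterization by stepping along the segment
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = x0 + (x1 - x0) * i // steps
            y = y0 + (y1 - y0) * i // steps
            canvas[y][x] = 255
    return canvas
```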
Any guidance on controlling the Canny strength and the proper application of Scribble control would be greatly appreciated.
Thank you for your help!
Code of Conduct
- [x] I agree to follow this project's Code of Conduct
Hi @katiemn
We were just wondering if there are any updates.
Hello, I updated https://github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/imagen3_customization.ipynb with an example of using Controlled Customization with a scribble input image.