runzehe
Thank you for your attention to our work! We implement zero-init by setting the weight and bias of the final `to_out` linear layer of the attention module to 0 (most...
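For reference, a minimal sketch of this zero-init in PyTorch. It assumes a diffusers-style attention module where `to_out` is either a `Linear` layer or a `ModuleList` whose first entry is the `Linear` projection; adapt the attribute names to your own module:

```python
import torch.nn as nn

def zero_init_to_out(attn: nn.Module) -> None:
    """Zero-initialize the final output projection of an attention module,
    so the new block contributes nothing at the start of training."""
    # In diffusers, Attention.to_out is a ModuleList of [Linear, Dropout];
    # elsewhere it may be a plain Linear. Handle both cases.
    to_out = attn.to_out[0] if isinstance(attn.to_out, nn.ModuleList) else attn.to_out
    nn.init.zeros_(to_out.weight)
    if to_out.bias is not None:
        nn.init.zeros_(to_out.bias)
```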
> By the way, the arXiv link to your paper on the homepage misleadingly directs to another of your papers.

@hrz2000 Thanks for the reminder~~ I will correct it now hahaha
Our experiments use 8 × 80GB NVIDIA A100 GPUs with a total batch size of 512, at 256 × 256 image resolution, following InstructPix2Pix. This configuration can also...
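To make the batch arithmetic concrete, a small sketch of the data-parallel split (the variable names are illustrative, not taken from the released training code):

```python
# Values reported above; the per-GPU split is simple arithmetic,
# not a quote of the actual training script.
num_gpus = 8              # 8 x 80GB NVIDIA A100
total_batch_size = 512    # effective/global batch size
resolution = 256          # image resolution, following InstructPix2Pix

per_gpu_batch_size = total_batch_size // num_gpus  # = 64 samples per GPU
# With gradient accumulation this can be split further:
# total = num_gpus * per_gpu_micro_batch * grad_accum_steps
print(per_gpu_batch_size)
```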
Thank you for your interest in this work! Yes, this is a problem worth exploring: inpainting-based methods are currently unable to handle operations with large differences...