HumanGaussian
Ablation study and SD3 ControlNet questions
Hi,
Thank you for your great work!
After reading your paper and code, I have several questions about the ablation study as follows.
- Comparing ablation configs (B) and (C), the extra negative prompt indeed alleviates the over-saturation problem, but it introduces many floating artifacts. Why is that?
- It seems the purpose of Configs (D) and (E) is to remove the artifacts created by Config (C). Instead of SDS, could variational score distillation (VSD) help circumvent the need for the negative guidance?
- I want to build a project on top of your work, so I am trying to run a finer-grained ablation study, e.g., Config (ABCE) without (D), to better understand the benefits of the dual-branch guidance. However, I am new to this field, and the code in `dual_branch_guidance.py` seems highly integrated to me. Could you guide me on how to conduct such an ablation? For example, can I simply toggle flags, or use `stable_diffusion_controlnet.py` instead of `dual_branch_guidance.py`? (Note that the Stable Diffusion versions used in `stable_diffusion_controlnet.py` and `dual_branch_guidance.py` differ; is it possible to simply change the SD version in `stable_diffusion_controlnet.py` to the latest one?)
- When I tried new prompts, I noticed that the SD v2 guidance does not align well with the prompt: e.g., when I ask for a "black coat" and a "green skirt", the model always generates overall green clothing. I also compared the text-to-image behavior of SD2 and the latest SD3, and the results show similar trends. Therefore, I would like to use SD3 and its pose-conditioned ControlNet. Is it possible to slightly modify your project to make that work?
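For context on my Config (C) question above, here is my current understanding of how a negative prompt enters the SDS-style gradient, written as a minimal self-contained sketch (NumPy for easy checking; the function name `sds_grad_with_negative` and the weights `s`, `s_neg`, `w` are my own illustrative choices, not identifiers from your code):

```python
import numpy as np

def sds_grad_with_negative(eps_pos, eps_neg, eps_uncond, eps_true,
                           s=7.5, s_neg=5.0, w=1.0):
    """Sketch of an SDS-style gradient with an extra negative-prompt term.

    eps_pos / eps_neg / eps_uncond: predicted noise under the positive
    prompt, the negative prompt, and the unconditional (empty) prompt.
    eps_true: the noise actually added at this timestep.
    The negative score is *subtracted*, pushing the sample away from the
    negative prompt's direction.
    """
    guided = (eps_uncond
              + s * (eps_pos - eps_uncond)
              - s_neg * (eps_neg - eps_uncond))
    return w * (guided - eps_true)

# Toy tensors just to check the arithmetic: with eps_uncond = eps_true = 0,
# eps_pos = +1, eps_neg = -1, s = 2, s_neg = 1, the gradient is 2*1 - 1*(-1) = 3.
e = np.zeros((1, 4, 8, 8))
g = sds_grad_with_negative(e + 1.0, e - 1.0, e, e, s=2.0, s_neg=1.0)
print(g.mean())  # 3.0
```

Is this roughly the formulation you use, i.e., is my intuition correct that a large `s_neg` can over-correct the score and thereby cause the floaters I observed?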
Thank you again for your fantastic project. Your answers are highly appreciated. Also, could you let me know if you are open to further discussions?
Best, Shurui
@CM-BF Hey! Did you ever figure it out? I'm also interested in modifying the code to work with other SD models. The fact that it's all based on SD2.x is very limiting. Also, it looks like we have to use the custom-trained texture model?
Most of the community-released checkpoints are based on SD1.5 and similar.
Thanks!