Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization
A PyTorch implementation of the paper Key-Locked Rank One Editing for Text-to-Image Personalization
Restored from ./ckpt/v2-1_512-ema-pruned.ckpt with 38 missing and 2 unexpected keys.
Missing Keys: ['logvar', 'C_inv', 'target_input', 'model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.target_output', 'model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.target_output', 'model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.target_output',...
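Missing keys like these typically come from restoring a base checkpoint into a model that defines additional, concept-specific parameters ('logvar', 'C_inv', 'target_input', the '*.attn2.to_k/to_v.target_output' entries), which the base v2-1 checkpoint does not contain. A minimal sketch of this kind of non-strict load, assuming a generic PyTorch model (the function name and path here are illustrative, not the repository's exact loading code):

```python
import torch

# Minimal sketch (not the repository's exact code): restoring a base
# Stable Diffusion checkpoint into a model that adds extra parameters
# reports those parameters as "missing keys" when strict=False is used.
def load_base_checkpoint(model, ckpt_path="./ckpt/v2-1_512-ema-pruned.ckpt"):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    # strict=False lets the load succeed even though the new parameters
    # are absent from the base checkpoint; they are initialized and
    # trained afterwards.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"Restored from {ckpt_path} with {len(missing)} missing "
          f"and {len(unexpected)} unexpected keys")
    return missing, unexpected
```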
Hi Team, great work! I am trying your code. I was able to run it and try some customization with v1, but with v2 I am facing issues...
I'm not super experienced and don't know if something is actually going wrong. I managed to get the training going and am seeing results that go *somewhat* in the right...
Hi Team, I am able to run the SD v2 code using the command `python main.py --name actor --base ./configs/perfusion_custom_sd_v2.yaml --basedir ./ckpt -t True --gpus 0,` and the training runs fine...
Hi author, thanks for this excellent work. I would like to ask whether there is any plan to support a WebUI extension? It would be very useful. Thanks
Hi author, thanks for this excellent work. I would like to ask whether there is any plan to support SDXL-1.0? It would be very useful. Thanks
```
  File "/home/code/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 461, in <module>
    model = instantiate_from_config(config.model)
  File "/home/code/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 137, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/code/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/perfusion.py", line 65, in __init__
    self.embedding_manager = instantiate_from_config(personalization_config, embedder=self.cond_stage_model)
  File "/home/code/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/util.py",...
```
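For context, `instantiate_from_config` in this traceback is the usual ldm-style config factory: it imports the class named by the `target` field of the YAML config and constructs it with the `params` block. A sketch of the helpers it relies on (the repository's own versions live in main.py and ldm/util.py); failures at this point usually mean the `params` in the config do not match the constructor signature of the `target` class:

```python
import importlib

# Sketch of the ldm-style config factory referenced in the traceback above.
def get_obj_from_str(string, reload=False):
    module, cls = string.rsplit(".", 1)
    if reload:
        importlib.reload(importlib.import_module(module))
    return getattr(importlib.import_module(module), cls)

def instantiate_from_config(config):
    if "target" not in config:
        raise KeyError("Expected key `target` to instantiate.")
    # Import the class named in the YAML and construct it with its params.
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
```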
Firstly, I would like to extend my congratulations on your acceptance to SIGGRAPH. I find your work quite interesting and appreciate your contribution to the field. And I got a...
Hi Author, I found that if I use teddy.ckpt and herburn.ckpt to get multi-concept results, the generated images don't include any teddy.
Hi Author, Thanks for your excellent work. Would you explain a little bit about how to use CLIP similarity to automatically select a balanced weight? I can help with that....
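One plausible way to automate this (a sketch only, not the paper's or repository's exact procedure): generate samples at several candidate weights, score each sample with CLIP image-text similarity against the prompt (editability) and CLIP image-image similarity against the reference concept photos (fidelity), then keep the weight that best balances the two. The model name and helper below are assumptions for illustration:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical helper: compute the two CLIP similarities used to judge a
# candidate weight. How the candidate images are generated, and how the two
# scores are traded off, is left to the caller.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_scores(image: Image.Image, prompt: str, concept_image: Image.Image):
    inputs = processor(text=[prompt], images=[image, concept_image],
                       return_tensors="pt", padding=True)
    image_feats = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_feats = model.get_text_features(input_ids=inputs["input_ids"],
                                         attention_mask=inputs["attention_mask"])
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    text_sim = (image_feats[0] @ text_feats[0]).item()    # prompt alignment
    image_sim = (image_feats[0] @ image_feats[1]).item()  # concept fidelity
    return text_sim, image_sim
```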