Bingliang Li
> Yes. We will release the pretrained models ASAP.

Hi, just want to ask: is there any update? I'm planning to do further research with your model, could you please...
@loris2222 @MaslikovEgor @OptimusLime @willyfh @qiruiw @saurabhya Just want to report my test results on @loris2222's model (RefCOCO val): ``` 2022-10-27 18:05:35 | INFO | __main__:76 - => loading checkpoint 'exp/refcoco/CRIS_R50/best_model.pth'...
@RubenBMHMendes You have to download the pre-trained CLIP model first (it's not `best_model.pth`) from [here](https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt); it comes from the [official CLIP repo](https://github.com/openai/CLIP/blob/main/clip/clip.py). Then put `best_model.pth` at `exp/refcoco/CRIS_R50` and you are all set.
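In case it helps, here is a minimal sketch of the setup described above. The `pretrain/` directory name is an assumption (adjust to wherever your config expects the CLIP backbone); the `exp/refcoco/CRIS_R50` path follows the checkpoint path shown in the evaluation log earlier in this thread.

```python
# Sketch of the file layout for evaluating CRIS with a released checkpoint.
# Paths are assumptions based on this thread; adjust to your own checkout/config.
import os
import urllib.request

CLIP_URL = ("https://openaipublic.azureedge.net/clip/models/"
            "afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt")

os.makedirs("pretrain", exist_ok=True)               # assumed location for the CLIP backbone
os.makedirs("exp/refcoco/CRIS_R50", exist_ok=True)   # where best_model.pth should live

# Download the pretrained CLIP RN50 backbone (this is NOT best_model.pth).
urllib.request.urlretrieve(CLIP_URL, "pretrain/RN50.pt")

# The released CRIS checkpoint is then placed manually at:
#   exp/refcoco/CRIS_R50/best_model.pth
```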
This can be an internet issue; the following solved it for me. Run this in the command line: `HF_ENDPOINT=https://hf-mirror.com python` # Or just put this prefix before your xx.py and ignore the rest. Then...
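If you prefer to keep it inside the script rather than prefixing the command, here is a minimal sketch. The model id used in the call is purely illustrative; the key point is that `HF_ENDPOINT` must be set before any Hugging Face library is imported.

```python
# Point the Hugging Face client at the hf-mirror endpoint from inside the script.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must be set before importing huggingface_hub

from huggingface_hub import snapshot_download

# Illustrative repo id only; replace with the model your script actually downloads.
snapshot_download("openai/clip-vit-base-patch32")
```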
Hi, may I ask whether you found a method for SMPL-X clothing?
I have the same question: does the model accept any non-generated image and a given caption? I would like to use this model for zero-shot object localization.
I managed to achieve this using this plugin: [link](https://github.com/kousw/stable-diffusion-webui-daam). Use img2img, set steps to 1 and denoising strength to 0, and you are all set!
May I ask whether the code for AudioLDM 2.0 is ready to be released?
Hi, may I ask whether you found a solution to this (rendering the human with textures)?
Great work! I would say Dock expose is one of the best pieces of open-source software I have found in years: great features and actively maintained!