Gong Chao
Hi, thanks for your amazing work! I find it very interesting and would like to train the models myself. Could you provide the training code?
Hi, do you remember me? Sorry to bother you again. I am a beginner with stable diffusion models and have some problems when running your code. Did you use a...
Hi, what amazing work! I noticed that you evaluated BrushNet on EditBench in your paper. In EditBench's annotation file, there are several kinds of prompts: prompt_full, prompt_mask-simple, prompt_mask-rich...
Hi! I have a question: it seems that after training stage 1, the LLaVA + Q-Former output is aligned with the CLIP text space. Could we directly use the LLaVA and Q-Former after...
Hi, thanks for your excellent work. [Here](https://github.com/SunzeY/AlphaCLIP/issues/25#issuecomment-1917471111) in January you mentioned that you would release the data, which is a key contribution of your CVPR paper. So could you...
Hi, many thanks for your excellent work! Your [web demo](https://huggingface.co/spaces/Zery/Alpha-CLIP_LLaVA-1.5) currently reports a "502 Bad Gateway" error. Could you fix it?
When I open the demo website link you provide in the README, http://111.0.123.204:8000/, I get an HTTP ERROR 502.
Thanks for your excellent work! I want to convert an Osprey-format checkpoint to the Hugging Face format. How should I do this?
Such excellent work! I am reading the EMMA paper and noticed that you didn't give details about the face encoder and image encoder. What are the encoders...
Hi, I am reading your excellent work and have a question: is the UNet you use a classical T2I UNet or an inpainting UNet?