Chen Wang
In the `replace_cross_attention` method of AttentionReplace, why is `attn_replace` not used? My understanding from the paper is that we have to replace `attn_base` with the corresponding layers from `attn_replace`. Thank you!
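A minimal numpy sketch of the injection the question is about, under the assumption that in the word-swap case the edited branch simply reuses the base prompt's cross-attention maps (which would explain why `attn_replace` appears unused); the shapes and the omission of any token-alignment mapper are simplifications, not the repository's actual implementation:

```python
import numpy as np

def replace_cross_attention(attn_base, attn_replace):
    # Hypothetical word-swap injection: discard the edited branch's own
    # cross-attention (attn_replace) and broadcast the base prompt's
    # maps into its place, so both branches attend identically.
    return np.broadcast_to(attn_base, attn_replace.shape).copy()

# Usage: base maps shaped (heads, pixels, tokens), edited branch adds a
# batch dimension; the output takes the edited branch's shape but the
# base branch's values.
base = np.ones((2, 64, 8))
edited = np.zeros((1, 2, 64, 8))
out = replace_cross_attention(base, edited)
```

In the real code a learned or index-based mapper would additionally realign `attn_base` from the base prompt's tokens to the edited prompt's tokens before injection.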
It seems that the default p=1, q=1 does not work as well, and the paper doesn't specify the actual values, only mentioning to "set p, q to low values".
Is it possible to share the fine-tuning code of zero123plus? I tried to reimplement it as described in the paper, using the same batch size and number of steps, but the results are far from...
Hi, thanks for the amazing work. Are there any instructions on how to use the text-to-3D functionality?
I tried to set up the Docker container and run run_custom.py on the provided milk data with use_gui 0. However, after running for a while, the code crashes at some point. Since...
Hi authors, thanks for the great work! I am interested in which object categories from OmniObject3D you used for evaluation.
When I tried to render the Gaussians of a large scene (9M Gaussians) with many cameras (20), the later ones seem to just crash. (I used the same 20...