leonardodora
Sorry, I can't find any code for image sampling in your repo, so I wonder how I can sample conditioned images.
Hi, I tried your DUC method as the decoder in my network (similar to MobileNet, for high performance), but there are many grid artifacts in the predicted alpha. The loss seems normal. Could you...
I know the loss may be a little high. Can you tell me what learning rate you used to reach an EMD loss of about 0.075?
In FaceVerseModel.py there is an error when batch_size > 1. How can I fix it? Thanks!
I have run into some problems with the environment. By the way, does CUDA 11.2 work with FaceVerse?
Hi! It happened when the first checkpoint was saved. I used scripts/sky/train_256.sh and didn't modify anything except a typo.
Hi, it failed when changing the color from blue to green (or another color). But the attention map of the t-shirt is correct, and I think the task is somewhat easy....
Hi, when I train the t2v model from scratch, the loss becomes NaN. I know it is important to have a pretrained model like PixArt, but it is hard to explain...
Since the VAE of Open-Sora is different from Latte's, can the weights from Latte be used directly? Or did your team train a Latte model from scratch?
Excellent work! But is there any plan to release the training code? I want to try some other subjects like dishes or food.