Xingyi Yang
I use an A6000 for my experiments. I am trying to figure out how to further reduce the memory cost. 🙏
I would be very grateful if you could share your version🙏. I haven't looked at the VideoCrafter repository in quite some time, so I might need a bit of time...
Thanks for the advice! That is actually an important technical component of our project (I guess one of the most important parts). Will upload this week! Stay tuned!
Hello @shaoshitong. Thank you so much for your interest. I have sent you an email.
Great question! Will do shortly.
Dear @mxy0610, from your error message, it's unclear what settings you're using for your training. However, it seems you might be encountering issues due to using a single GPU...
Could you please provide more context about the problem? For example, the environment version, system details, and any other relevant information. Also, are you using `ROCm` devices?
Can you check your GPU utilization? This is still slow. I use 8x A5000 GPUs and train for around 1-2 days for 300 epochs.
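(If it helps, here is a minimal sketch for checking utilization from inside the training process; it assumes a CUDA build of PyTorch and the `pynvml` package, which `torch.cuda.utilization` relies on.)

```python
# Minimal sketch: print per-device utilization and PyTorch-allocated memory.
# Requires `pip install pynvml` for torch.cuda.utilization to work.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        util = torch.cuda.utilization(i)               # percent over the last sample window
        mem = torch.cuda.memory_allocated(i) / 2**30   # GiB currently allocated by PyTorch
        print(f"cuda:{i} utilization={util}% allocated={mem:.2f} GiB")
```

If utilization stays low while training is slow, the bottleneck is usually the data pipeline rather than the model itself.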
Hello @1-2-3-4-0 @zxk72, someone else also reported this problem to me, and I'm looking for solutions. Could you please also look at this issue? https://github.com/pytorch/pytorch/issues/21819
@bravoYJ Hello, my intuition is that the group number should not be set too large, as it may negate the parameter-saving benefits. I have not tested scenarios where the group...
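(For intuition, here is a minimal sketch under a hypothetical weight-sharing setup, not necessarily this project's actual scheme: if each group keeps its own shared value, the number of unique parameters grows with the group count, so a very large group number erases the savings.)

```python
# Hypothetical illustration: partition a layer's weights into `groups`,
# with each group represented by a single shared parameter.
# Unique parameters = groups, so savings shrink as the group number grows.

NUM_WEIGHTS = 2 ** 16  # e.g., a 256x256 linear layer

for groups in (4, 64, 1024, NUM_WEIGHTS):
    ratio = groups / NUM_WEIGHTS  # fraction of the original parameters kept
    print(f"groups={groups:>6}: keep {ratio:.4%} of the original parameters")
```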