MagicDrive

About training 240 frames

Open · LivingTom opened this issue 1 year ago · 15 comments

Hi, thanks again for open-sourcing this. I found there is no difference between the 16-frame YAML and the 61-frame YAML except `sc_attn_index`, so I'm wondering whether I can train on 240 frames just by changing the model's `sc_attn_index`. Looking forward to your reply!

LivingTom avatar Oct 23 '24 08:10 LivingTom

It is possible. Actually, we are limited by GPU memory (80G A800), so we only train up to 60 frames.

flymin avatar Oct 23 '24 09:10 flymin
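[Editorial aside] One plausible reading of `sc_attn_index` is that it lists, for each frame, which frames it cross-attends to; under that assumption, extending the config to more frames is mostly index bookkeeping. The helper below is hypothetical (the name, the sliding-window semantics, and the list-of-lists layout are guesses, not the repo's actual format) and is only meant to illustrate the idea:

```python
def build_sc_attn_index(num_frames, window=2):
    """Hypothetical helper: for each frame, attend to itself and its
    `window` nearest preceding frames, clamped at the sequence start.
    Check the repo's YAML configs for the real semantics of sc_attn_index."""
    return [
        [max(0, f - d) for d in range(window, -1, -1)]
        for f in range(num_frames)
    ]

# With window=1, each frame attends to its previous frame and itself.
print(build_sc_attn_index(4, window=1))  # [[0, 0], [0, 1], [1, 2], [2, 3]]
print(len(build_sc_attn_index(240)))     # 240 — scales to any frame count
```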

> It is possible. Actually, we are limited by GPU memory (80G A800), so we only train up to 60 frames.

Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?

LivingTom avatar Oct 23 '24 09:10 LivingTom

Another question: did you ever consider using SVD (Stable Video Diffusion) to generate video? Is the quality of SVD-generated video not good enough?

LivingTom avatar Oct 23 '24 09:10 LivingTom

> Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?

We've done a lot to save GPU memory. You may check the details of our implementation.

> Another question: did you ever consider using SVD (Stable Video Diffusion) to generate video? Is the quality of SVD-generated video not good enough?

Currently, you can refer to Vista, which is based on SVD but without fine-grained controllability. In our new work, we will discuss the related problem. The new paper will come out soon. Stay tuned.

flymin avatar Oct 23 '24 10:10 flymin

> We've done a lot to save GPU memory. You may check the details of our implementation.
>
> Currently, you can refer to Vista, which is based on SVD but without fine-grained controllability. In our new work, we will discuss the related problem. The new paper will come out soon. Stay tuned.

Thanks a lot.

LivingTom avatar Oct 24 '24 01:10 LivingTom

> It is possible. Actually, we are limited by GPU memory (80G A800), so we only train up to 60 frames.

May I also ask what the difference is between video generation and image generation? Is it just a matter of increasing the batch size?

LivingTom avatar Oct 24 '24 01:10 LivingTom

> May I also ask what the difference is between video generation and image generation? Is it just a matter of increasing the batch size?

They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-resolution image generation models only support training with batch size 1. The training/inference cost of video can easily explode, and the model needs to gain more capability.

flymin avatar Oct 24 '24 02:10 flymin
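[Editorial aside] The resource gap described above can be made concrete with a back-of-the-envelope calculation. The shapes below are illustrative (a typical fp16 latent layout), not MagicDrive's actual configuration:

```python
# Rough activation-memory comparison: image vs. video latents.
# Shapes and dtype (fp16) are illustrative, not MagicDrive's real config.

def tensor_bytes(shape, bytes_per_elem=2):
    """Memory for one tensor with the given shape (fp16 by default)."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem

# Image latent: (batch, channels, height, width)
image = tensor_bytes((1, 4, 64, 64))

# Video latent adds a temporal dim: (batch, channels, frames, height, width)
video_16 = tensor_bytes((1, 4, 16, 64, 64))
video_240 = tensor_bytes((1, 4, 240, 64, 64))

print(image, video_16, video_240)   # 32768 524288 7864320
print(video_240 // image)           # 240 — memory grows linearly with frames
```

And that is only the latents: the intermediate activations inside the UNet scale the same way per frame, while temporal attention adds cost that grows faster than linearly in the frame count.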

> They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-resolution image generation models only support training with batch size 1. The training/inference cost of video can easily explode, and the model needs to gain more capability.

Thanks for your answer. I noticed that you don't use any images to generate latents (you only use BEV) when generating video. My question: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for generating long videos?

LivingTom avatar Oct 24 '24 03:10 LivingTom

> Thanks for your answer. I noticed that you don't use any images to generate latents (you only use BEV) when generating video. My question: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for generating long videos?

I think you want to ask about future frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Anyway, it relies on the capability of the video generation model.

flymin avatar Oct 24 '24 05:10 flymin
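[Editorial aside] The inpainting-style trick mentioned above is usually a latent-replacement loop in the spirit of RePaint: at every denoising step, the latents of the observed frames are overwritten with a re-noised copy of the ground truth, so the model only has to fill in the unobserved (future) frames. The sketch below is generic and self-contained, NOT MagicDrive's API; `toy_denoise_step` and the linear `add_noise` schedule stand in for a real pretrained video diffusion model and its scheduler:

```python
import numpy as np

def toy_denoise_step(x, t):
    # Stand-in for model(x, t) -> x_{t-1}; just shrinks toward zero here.
    return x * 0.9

def add_noise(x0, noise, t, total_steps):
    # Toy forward process: linear blend between clean latents and noise.
    alpha = 1.0 - t / total_steps
    return alpha * x0 + (1.0 - alpha) * noise

def predict_future(known_latents, known_mask, steps=50, seed=0):
    """known_latents: (B, C, T, H, W); known_mask is 1 on observed frames."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(known_latents.shape)  # start from pure noise
    for t in reversed(range(steps)):
        noise = rng.standard_normal(known_latents.shape)
        known_t = add_noise(known_latents, noise, t, steps)
        # Overwrite the observed frames at the current noise level, so the
        # denoiser only has to "fill in" the remaining (future) frames.
        x = known_mask * known_t + (1.0 - known_mask) * x
        x = toy_denoise_step(x, t)
    return x

latents = np.zeros((1, 4, 16, 8, 8))   # 16-frame latent video (toy shapes)
mask = np.zeros((1, 1, 16, 1, 1))
mask[:, :, :8] = 1.0                   # first 8 frames observed, last 8 generated
out = predict_future(latents, mask)
print(out.shape)                       # (1, 4, 16, 8, 8)
```

The key design point is that nothing here retrains the model: conditioning on the 8 given frames happens purely at inference time, which is why, as the maintainer notes, the result still depends entirely on the underlying video model's capability.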

> I think you want to ask about future frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Anyway, it relies on the capability of the video generation model.

Thank you very much.

LivingTom avatar Oct 24 '24 05:10 LivingTom

> There are some inference tricks to do so, similar to image inpainting. Anyway, it relies on the capability of the video generation model.

Could you please offer me some help with those inference tricks, or share some links? I'd like to try them.

LivingTom avatar Oct 24 '24 08:10 LivingTom

This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon.

github-actions[bot] avatar Oct 31 '24 16:10 github-actions[bot]

Sorry, I cannot provide that, because I haven't tried any of them personally. I think a quick search will give you the answer.

flymin avatar Nov 18 '24 08:11 flymin

> Sorry, I cannot provide that, because I haven't tried any of them personally. I think a quick search will give you the answer.

That's OK! Thanks for your reply, and I look forward to your new research.

LivingTom avatar Nov 18 '24 09:11 LivingTom

This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon.

github-actions[bot] avatar Nov 25 '24 16:11 github-actions[bot]