
GPU memory requirement for inference

Open caoandong opened this issue 1 year ago • 4 comments

Hi, thank you so much for open sourcing this amazing work!

I'm wondering what's the memory requirement to run the inference script? I tested the script verbatim on an A100 40G machine and it went OOM. Curious if we need to use a 80G machine instead, or is there something obvious that I'm missing?

Thanks!

caoandong avatar Jul 29 '24 01:07 caoandong

It requires more than 40 GB for 2 seconds of 720p video in my early experiments; a 3-second video needs ~71 GB of VRAM without upscaling (up_scale = 1).

Another question: can we use something like FlashAttention to reduce VRAM usage?

JC1DA avatar Jul 29 '24 02:07 JC1DA

> I'm wondering what's the memory requirement to run the inference script? I tested the script verbatim on an A100 40G machine and it went OOM. Curious if we need to use an 80G machine instead, or is there something obvious that I'm missing?

At this time, an A100 80G machine is required for high-resolution (~2K) and high-frame-rate (fps >= 24) video generation. You can lower "up_scale" or "target_fps" to avoid OOM, but the visual results will drop noticeably in quality.
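The trade-off described above can be sketched with a back-of-envelope cost model. The quadratic effect of `up_scale` and the linear effect of `target_fps` are assumptions about how the video tensor grows, not measurements from VEnhancer itself, and the default values here are hypothetical:

```python
def relative_memory_cost(up_scale: float, target_fps: float,
                         base_scale: float = 4.0, base_fps: float = 24.0) -> float:
    """Rough relative VRAM footprint vs. assumed default settings.

    Frame area grows with up_scale**2; frame count grows with target_fps.
    This is an illustrative model, not VEnhancer's actual memory profile.
    """
    return (up_scale / base_scale) ** 2 * (target_fps / base_fps)

# Halving up_scale alone shrinks the assumed tensor footprint to a quarter:
print(relative_memory_cost(up_scale=2.0, target_fps=24.0))  # 0.25
```

Under this model, dropping target_fps from 24 to 16 saves about a third of the footprint, consistent with the point that either knob trades output quality for memory.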

hejingwenhejingwen avatar Jul 29 '24 12:07 hejingwenhejingwen

> It requires more than 40 GB for 2 seconds of 720p video in my early experiments; a 3-second video needs ~71 GB of VRAM without upscaling (up_scale = 1). Another question: can we use something like FlashAttention to reduce VRAM usage?

You are correct; this algorithm is expensive. Actually, we have already incorporated xformers for attention computation. You can reduce the number of sampling steps to achieve faster inference, but the quality will undoubtedly drop. We will design more efficient sampling strategies in the future.
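For readers curious why memory-efficient attention (xformers, FlashAttention) lowers peak VRAM: it processes queries in chunks so the full N x N score matrix is never materialized at once. A pure-Python toy sketch of the idea, not the fused CUDA kernel either library actually ships:

```python
import math

def chunked_attention(q, k, v, chunk=2):
    """Softmax attention over lists of d-dim vectors, computed query-chunk
    by query-chunk: at most O(chunk * N) scores live at a time instead of
    the full O(N^2) matrix. Output is identical for any chunk size."""
    d = len(q[0])
    out = []
    for start in range(0, len(q), chunk):
        for qi in q[start:start + chunk]:
            # scores for this query only; O(N) temporary memory
            scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d)
                      for kj in k]
            m = max(scores)                      # numerically stable softmax
            w = [math.exp(s - m) for s in scores]
            z = sum(w)
            out.append([sum(wi * vj[t] for wi, vj in zip(w, v)) / z
                        for t in range(d)])
    return out
```

Because only the loop order changes, the result matches unchunked attention exactly; the real kernels add tiling and online softmax on top of this same idea.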

hejingwenhejingwen avatar Jul 29 '24 13:07 hejingwenhejingwen
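The step-count lever mentioned in the reply above can also be sketched. The 1000-step training schedule and the evenly spaced selection are common diffusion-sampler conventions assumed here, not necessarily VEnhancer's actual scheduler:

```python
def sampling_timesteps(num_steps: int, train_steps: int = 1000) -> list[int]:
    """Evenly spaced denoising timesteps, descending (a common convention)."""
    stride = train_steps // num_steps
    return list(range(train_steps - 1, -1, -stride))[:num_steps]

# Each timestep is one full denoising pass, so 15 steps runs roughly 3.3x
# faster than 50 at the same peak VRAM, traded against output quality.
print(len(sampling_timesteps(50)))  # 50
print(len(sampling_timesteps(15)))  # 15
```

Note that fewer steps speeds inference but does not reduce peak memory, which is why the OOM question above still needs the up_scale/target_fps knobs.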

How long did inference take for the 2-second 720p video? @JC1DA

O-O1024 avatar Jul 29 '24 22:07 O-O1024