
Error launching gradio_demo.py under Windows

SoftologyPro opened this issue 10 months ago · 1 comment

python gradio_demo.py

gives this error and then aborts:

Traceback (most recent call last):
  File "D:\Tests\StreamingT2V\StreamingT2V\t2v_enhanced\gradio_demo.py", line 43, in <module>
    msxl_model = init_v2v_model(cfg_v2v)
TypeError: init_v2v_model() missing 1 required positional argument: 'device'

SoftologyPro · Apr 13 '24 08:04

OK, I worked this one out. Line 43 needs the device parameter added, i.e.

    msxl_model = init_v2v_model(cfg_v2v, device)

and line 48 likewise:

    stream_cli, stream_model = init_streamingt2v_model(ckpt_file_streaming_t2v, result_fol, device)

With those two changes it continues past the errors.
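The error pattern above can be reproduced in isolation. Note that init_v2v_model here is a stand-in stub with the same signature shape, not the real StreamingT2V function; it only illustrates why Python raises the TypeError and why passing device fixes it:

```python
# Stub with the same two-argument shape as the real init_v2v_model
# (cfg plus a required positional 'device'). Illustration only.
def init_v2v_model(cfg, device):
    return {"cfg": cfg, "device": device}

try:
    # Calling with only cfg, as the original line 43 did:
    init_v2v_model({"model": "msxl"})
except TypeError as e:
    print(e)  # missing 1 required positional argument: 'device'

# Fixed call: pass the device explicitly, e.g. "cuda:0" or "cpu".
model = init_v2v_model({"model": "msxl"}, "cuda:0")
print(model["device"])
```

The same reasoning applies to init_streamingt2v_model on line 48: any required positional parameter without a default must be supplied at the call site.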

It does seem to struggle at the "Caching example 1/6" stage on a local 24 GB GPU. Can you recommend some settings to get the Gradio demo starting faster on a 24 GB GPU? The 24 GB fills up and it then spills into system RAM, which slows everything down badly. I can comment out the examples to get past this and get the Gradio UI launching.

There are also these warnings:

    It seems like you have activated model offloading by calling enable_model_cpu_offload, but are now manually moving the pipeline to GPU. It is strongly recommended against doing so as memory gains from offloading are likely to be lost. Offloading automatically takes care of moving the individual components vae, image_encoder, unet, scheduler, feature_extractor to GPU when needed. To make sure offloading works as expected, you should consider moving the pipeline back to CPU: pipeline.to('cpu') or removing the move altogether if you use offloading.
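The conflict behind that warning can be sketched without the real library. FakePipeline below is a stand-in, not the diffusers API: enable_model_cpu_offload keeps components on CPU and moves each one to GPU only while it runs, so a later manual .to("cuda") moves everything up front and defeats the savings:

```python
# Minimal simulation of the offload-vs-manual-move conflict (stand-in class,
# not the real diffusers pipeline).
class FakePipeline:
    def __init__(self):
        self.offload_enabled = False
        self.device = "cpu"

    def enable_model_cpu_offload(self):
        # With offloading, components stay on CPU and are swapped to GPU
        # one at a time during inference.
        self.offload_enabled = True

    def to(self, device):
        if self.offload_enabled and device != "cpu":
            # This is the situation the diffusers warning describes.
            print("warning: manual move to GPU defeats cpu offload")
        self.device = device
        return self

pipe = FakePipeline()
pipe.enable_model_cpu_offload()
pipe.to("cuda")  # triggers the warning, as in the log above
```

On a 24 GB card, the usual remedy matching the warning text is to keep offloading enabled and delete the manual pipeline.to(...) move in the demo script.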

Basically the Gradio demo is a no-go on a local 24 GB GPU. Running the inference script directly allows 50 frames; above 50 it spills over into system RAM and takes forever.

SoftologyPro · Apr 13 '24 10:04