Jiatong (Julius) Han
Hi @jiacxie, thanks for your question and insights. Could you share more details, and perhaps some statistics, on your findings? We would be happy to learn about this and...
It is due to a deadlock in the Hugging Face tokenizers library. Can you follow the error message and set `export TOKENIZERS_PARALLELISM=false`?
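If you are launching from a Python script rather than the shell, a minimal equivalent sketch is to set the variable before any tokenizer is imported:

```python
# Equivalent to `export TOKENIZERS_PARALLELISM=false`; must run before
# transformers/tokenizers is imported so the setting takes effect.
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```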
You may adjust the `num_frames` and `fps` in the config file to control the output duration. See [here](https://github.com/hpcaitech/Open-Sora/blob/06507f744f9f7e5e8d300d9ae446bc8d351ed00c/configs/opensora/inference/64x512x512.py#L4) for an example.
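For reference, a hypothetical config excerpt (values illustrative, not copied from the linked file):

```python
# Hypothetical inference config excerpt; output duration ≈ num_frames / fps seconds.
num_frames = 64            # total frames to generate
fps = 8                    # frame rate of the saved video
image_size = (512, 512)    # height, width of each frame
```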
Can @zhengzangw please take a look?
Thanks. I believe they are only the same when the batch size equals one. Could @zhengzangw please confirm?
Would [this](https://github.com/hpcaitech/Open-Sora/blob/ee909a7d6611bcc9c5cf1ac055a7e9cc74157e09/configs/opensora-v1-2/inference/sample.py#L4) be what you intend?
Can you set `CUDA_VISIBLE_DEVICES=0` before the inference command? If that does not help, print the device of the input texts and the T5 encoder model to see if they are on...
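For example, a debugging sketch along these lines (`text_tokens` and `t5_encoder` are placeholder names for your actual variables):

```python
# Hypothetical check: confirm the text inputs and the T5 encoder sit on the same device.
print(text_tokens.device)                    # device of the input tensor, e.g. cuda:0
print(next(t5_encoder.parameters()).device)  # device holding the encoder's weights
```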
I'd suggest reinstalling: `conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia`
Can you run `pip list | grep torch`? It looks like you may not have a CUDA-enabled `torch` build installed, which would prevent GPU use. Check if `nvcc --version` and `python -c "import...
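My guess at the truncated check is a CUDA sanity test along these lines:

```python
# Sketch of a CUDA sanity check; the versions printed should match your driver/toolkit.
import torch
print(torch.__version__)          # installed torch version (a "+cpu" suffix means no CUDA)
print(torch.version.cuda)         # CUDA version torch was built with (None for CPU builds)
print(torch.cuda.is_available())  # True only if a usable GPU is visible
```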
Looks good to me. What was your command to run `inference.py`?