
AttributeError: 'NoneType' object has no attribute 'shape'

zhang-sir029 opened this issue 10 months ago · 2 comments

System Info / 系統信息

Please help me deal with this error: "AttributeError: 'NoneType' object has no attribute 'shape'".

Model: CogVideoX1.5-5B-I2V (latest). Repository: https://github.com/THUDM/CogVideo.git

(CogVideo) zyq@zyq-MS-7D89:/home/satadisk/CogVideo/inference$ python gradio_web_demo.py
Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 3.42it/s]
Loading pipeline components...: 100%|██████████| 5/5 [00:01<00:00, 2.83it/s]
/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/utils.py:1021: UserWarning: Expected at least 4 arguments for function <function generate at 0x7f5e4f0adcf0>, received 3.
  warnings.warn(

  • Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/blocks.py", line 2098, in process_api
    result = await self.call_function(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/blocks.py", line 1645, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/utils.py", line 883, in wrapper
    response = f(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/gradio/utils.py", line 883, in wrapper
    response = f(*args, **kwargs)
  File "/home/satadisk/CogVideo/inference/gradio_web_demo.py", line 180, in generate
    tensor = infer(prompt, num_inference_steps, guidance_scale, progress=progress)
  File "/home/satadisk/CogVideo/inference/gradio_web_demo.py", line 100, in infer
    video = pipe(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/diffusers/pipelines/cogvideo/pipeline_cogvideox.py", line 710, in __call__
    noise_pred = self.transformer(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/diffusers/models/transformers/cogvideox_transformer_3d.py", line 470, in forward
    ofs_emb = self.ofs_proj(ofs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/diffusers/models/embeddings.py", line 1325, in forward
    t_emb = get_timestep_embedding(
  File "/home/zyq/anaconda3/envs/CogVideo/lib/python3.10/site-packages/diffusers/models/embeddings.py", line 54, in get_timestep_embedding
    assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array"
AttributeError: 'NoneType' object has no attribute 'shape'
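If I read the traceback correctly, the failure happens in the transformer's ofs_proj: the CogVideoX1.5 I2V transformer expects an extra ofs conditioning value, but the text-to-video pipeline used by gradio_web_demo.py never passes one, so None reaches get_timestep_embedding. A minimal guard one could add after loading the pipeline (only a sketch, not an official fix; I am assuming the config key is named ofs_embed_dim, which may differ across diffusers versions):

    # Sketch: detect an I2V checkpoint that was loaded into the T2V pipeline.
    # "ofs_embed_dim" is assumed to be the transformer config key set by the
    # CogVideoX1.5 I2V checkpoints; adjust if your diffusers version differs.
    if getattr(pipe.transformer.config, "ofs_embed_dim", None) is not None:
        raise ValueError(
            "This checkpoint looks like an image-to-video model; "
            "load it with CogVideoXImageToVideoPipeline instead of CogVideoXPipeline."
        )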

pip list:

(CogVideo) zyq@zyq-MS-7D89:/home/satadisk/CogVideo$ pip list
Package Version
accelerate 1.3.0
aiofiles 23.2.1
aiohappyeyeballs 2.4.6
aiohttp 3.11.12
aiosignal 1.3.2
annotated-types 0.7.0
anyio 4.8.0
async-timeout 5.0.1
attrs 25.1.0
boto3 1.36.21
botocore 1.36.21
braceexpand 0.1.7
certifi 2025.1.31
charset-normalizer 3.4.1
click 8.1.8
cpm-kernels 1.0.11
datasets 3.3.0
decorator 5.1.1
deepspeed 0.16.3
diffusers 0.32.2
dill 0.3.8
distro 1.9.0
einops 0.8.1
exceptiongroup 1.2.2
fastapi 0.115.8
ffmpy 0.5.0
filelock 3.17.0
frozenlist 1.5.0
fsspec 2024.12.0
gradio 5.16.0
gradio_client 1.7.0
h11 0.14.0
hjson 3.1.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.28.1
idna 3.10
imageio 2.37.0
imageio-ffmpeg 0.6.0
importlib_metadata 8.6.1
Jinja2 3.1.5
jiter 0.8.2
jmespath 1.0.1
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
moviepy 2.1.2
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
multiprocess 0.70.16
networkx 3.4.2
ninja 1.11.1.3
numpy 1.26.0
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
openai 1.63.0
orjson 3.10.15
packaging 24.2
pandas 2.2.3
pillow 10.4.0
pip 25.0
proglog 0.1.10
propcache 0.2.1
protobuf 5.29.3
psutil 7.0.0
py-cpuinfo 9.0.0
pyarrow 19.0.0
pydantic 2.10.6
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.19.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.20
pytz 2025.1
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.9.6
s3transfer 0.11.2
safehttpx 0.1.6
safetensors 0.5.2
scikit-video 1.1.11
scipy 1.15.2
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 75.8.0
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
starlette 0.45.3
SwissArmyTransformer 0.4.12
sympy 1.13.1
tensorboardX 2.6.2.2
tokenizers 0.21.0
tomlkit 0.12.0
torch 2.6.0
torchvision 0.21.0
tqdm 4.67.1
transformers 4.48.3
triton 3.2.0
typer 0.15.1
typing_extensions 4.12.2
tzdata 2025.1
urllib3 2.3.0
uvicorn 0.34.0
webdataset 0.2.111
websockets 12.0
wheel 0.45.1
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0

Information / 问题信息

  • [x] The official example scripts / 官方的示例脚本
  • [ ] My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

1. git clone https://github.com/THUDM/CogVideo.git
2. conda create -n CogVideo python=3.10 -y
3. (base) zyq@zyq-MS-7D89:/home/satadisk/CogVideo$ conda activate CogVideo
4. (CogVideo) zyq@zyq-MS-7D89:/home/satadisk/CogVideo$ pip install -r requirements.txt
5. Download the model: modelscope download --model ZhipuAI/CogVideoX1.5-5B-I2V --local_dir './CogVideoX1.5-5B-I2V'
6. Modify the model path in gradio_web_demo.py: pipe = CogVideoXPipeline.from_pretrained("/home/satadisk/CogVideo/CogVideoX1.5-5B-I2V/", torch_dtype=torch.bfloat16).to("cuda")
7. Launch the gradio demo.
8. Enter a text prompt and start generating a video; the error above is raised (a standalone script that should hit the same error is sketched below).
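For completeness, a short standalone script that should reproduce the same error without gradio (a sketch only; the prompt and step count are placeholders, the model path is the one from steps 5-6):

    import torch
    from diffusers import CogVideoXPipeline

    # Load the I2V checkpoint with the text-to-video pipeline, exactly as the
    # modified gradio_web_demo.py does in step 6.
    pipe = CogVideoXPipeline.from_pretrained(
        "/home/satadisk/CogVideo/CogVideoX1.5-5B-I2V/",
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # This call should fail inside the transformer with
    # AttributeError: 'NoneType' object has no attribute 'shape'
    video = pipe(prompt="a cat walking on grass", num_inference_steps=50).frames[0]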

Expected behavior / 期待表现

The video should be generated normally.

zhang-sir029 · Feb 18, 2025

@zhipuch Can you help me reproduce this? I seem to have seen this problem more than once.

zRzRzRzRzRzRzR · Feb 20, 2025

The gradio demo currently only supports the T2V version. I2V is image-to-video, not text-to-video.
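For image-to-video generation, the checkpoint can be run directly with the image-to-video pipeline in diffusers instead of the gradio demo. A minimal sketch (the local model path, input image, prompt, and output settings are placeholders):

    import torch
    from diffusers import CogVideoXImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    # Use the image-to-video pipeline for the CogVideoX1.5-5B-I2V checkpoint.
    pipe = CogVideoXImageToVideoPipeline.from_pretrained(
        "/home/satadisk/CogVideo/CogVideoX1.5-5B-I2V/",
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    image = load_image("input.jpg")  # first frame to animate
    video = pipe(
        prompt="a description of the desired motion",
        image=image,
        num_inference_steps=50,
        guidance_scale=6.0,
    ).frames[0]
    export_to_video(video, "output.mp4", fps=8)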

zhipuch · Feb 24, 2025