generative-models
Hello, why do I only get one frame of video when using sv3d_u and sv3d_p?
Execution result:
python scripts/sampling/simple_video_sample.py --input_path assets/test_image.png --version sv3d_u
VideoTransformerBlock is using checkpointing (x16)
Initialized embedder #0: FrozenOpenCLIPImagePredictionEmbedder with 683800065 params. Trainable: False
Initialized embedder #1: VideoPredictionEmbedderWithEncoder with 83653863 params. Trainable: False
Initialized embedder #2: ConcatTimestepEmbedderND with 0 params. Trainable: False
Restored from checkpoints/sv3d_u.safetensors with 0 missing and 0 unexpected keys
/opt/dragonplus/generative-models/venv/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Or: streamlit run scripts/demo/video_sampling.py
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
You can now view your Streamlit app in your browser.
Network URL: http://xxx
External URL: http://xxx
VideoTransformerBlock is using checkpointing (x16)
Initialized embedder #0: FrozenOpenCLIPImagePredictionEmbedder with 683800065 params. Trainable: False
Initialized embedder #1: VideoPredictionEmbedderWithEncoder with 83653863 params. Trainable: False
Initialized embedder #2: ConcatTimestepEmbedderND with 0 params. Trainable: False
Loading model from checkpoints/sv3d_u.safetensors
576 576 None
2024-03-21 18:52:16.771 Uncaught app exception
Traceback (most recent call last):
File "/opt/dragonplus/generative-models/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "/opt/dragonplus/generative-models/scripts/demo/video_sampling.py", line 192, in
@kaikaizhangdragonplus Hi, I don't know if my answer is helpful, but the issue mostly seems to come from the imageio library, so instead of imageio you can save the video frame by frame with cv2. However, looking at your error logs, the input data may also need checking: the fact that a NoneType was passed as input data implies that either the input data is corrupted or the path was set incorrectly, so it could not be read.
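A minimal sketch of that input check, assuming the same assets/test_image.png path from the log above (adjust to your own input):

import os
import cv2

input_path = "assets/test_image.png"

# Fail early if the path is wrong.
assert os.path.isfile(input_path), f"Input path does not exist: {input_path}"

# cv2.imread returns None (instead of raising) when it cannot decode the file,
# which is exactly the kind of NoneType that later blows up inside the sampling script.
img = cv2.imread(input_path, cv2.IMREAD_UNCHANGED)
assert img is not None, f"cv2 could not decode the image at {input_path}"
print("Input image OK:", img.shape, img.dtype)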
Thanks!
It's an imageio issue; search for it here.
It should be something like this:
import cv2
import numpy as np

# imageio.mimwrite(video_path, vid)
# cv2.imwrite(video_path, vid)
frame0 = vid[0, :, :, :].squeeze()
out = cv2.VideoWriter(
    video_path,
    cv2.VideoWriter_fourcc(*"mp4v"),      # MPEG-4 codec
    20.0,                                 # frames per second
    (frame0.shape[1], frame0.shape[0]),   # OpenCV expects (width, height)
)
for frame in vid:
    # frames are RGB uint8; OpenCV expects BGR, so reverse the channel axis
    out.write(np.ascontiguousarray(frame[:, :, ::-1]))
out.release()
Same issue.
Problem solved?
Thank you very much, confirmed this worked for me. The video is playable in the VLC desktop app, but not here in the GitHub web player; not sure what the issue with the codec is.
https://github.com/Stability-AI/generative-models/assets/158145/b91519a9-25b6-4ae1-ad92-1ff4967f3fcf
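One likely explanation (an assumption, not confirmed in this thread): the mp4v fourcc produces MPEG-4 Part 2 video, which browsers generally will not play inline, while H.264 with a yuv420p pixel format usually is playable on the web. A minimal re-encode sketch, assuming ffmpeg is installed and with placeholder input/output filenames:

import subprocess

# Placeholder filenames; substitute the actual SV3D output path.
src = "outputs/sv3d_u_output.mp4"
dst = "outputs/sv3d_u_output_h264.mp4"

# Re-encode with libx264 and yuv420p so browser players can decode it.
subprocess.run(
    ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-pix_fmt", "yuv420p", dst],
    check=True,
)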