Where should the models be downloaded to?
The error I get when starting app.py is:
Traceback (most recent call last):
  File "D:\Tests\SkyReels-A2\SkyReels-A2\app.py", line 110, in <module>
    infer = ModelInference()
  File "D:\Tests\SkyReels-A2\SkyReels-A2\app.py", line 39, in __init__
    self._image_encoder = CLIPVisionModel.from_pretrained(self._pipeline_path, subfolder="image_encoder", torch_dtype=torch.float32)
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\modeling_utils.py", line 4078, in from_pretrained
    resolved_config_file = cached_file(
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 266, in cached_file
    file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 470, in cached_files
    resolved_files = [
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 471, in <listcomp>
    _get_cache_file_to_return(path_or_repo_id, filename, cache_dir, revision) for filename in full_filenames
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 134, in _get_cache_file_to_return
    resolved_file = try_to_load_from_cache(path_or_repo_id, full_filename, cache_dir=cache_dir, revision=revision)
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 154, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/model'. Use `repo_type` argument if needed.
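For context, the failing check can be reproduced standalone with the validator function from the traceback: '/path/to/model' does not exist on disk, so transformers falls back to treating it as a Hub repo id, which the validator rejects.

# Minimal reproduction of the validation step shown in the traceback above.
from huggingface_hub.utils._validators import validate_repo_id

validate_repo_id("Skywork/SkyReels-A2")  # fine: 'namespace/repo_name'
validate_repo_id("/path/to/model")       # raises HFValidationError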
So I run
huggingface-cli download Skywork/SkyReels-A2 --local-dir image_encoder --exclude "*.git*" "README.md" "docs"
in the same directory as app.py to download the models into the image_encoder directory.
Then starting app.py gives the same error.
Where/how do I need to download the models so app.py finds them correctly at startup?
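For reference, a minimal sketch of how that call in app.py resolves local paths (the layout is inferred from the subfolder argument in the traceback, not from the repo docs):

import torch
from transformers import CLIPVisionModel

# from_pretrained(path, subfolder="image_encoder") looks for
# <path>/image_encoder/config.json, so <path> must be the repo root that
# CONTAINS image_encoder/, not the image_encoder folder itself. Downloading
# the whole repo into a folder named image_encoder nests it one level too
# deep (image_encoder/image_encoder/...).
model = CLIPVisionModel.from_pretrained(
    "./models",                  # repo root with an image_encoder/ subfolder
    subfolder="image_encoder",
    torch_dtype=torch.float32,
)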
OK, found it in app.py: self._pipeline_path is still hard-coded to the '/path/to/model' placeholder.
Maybe set the default location to
self._pipeline_path = "./models/"
and tell users to download into the models directory with
huggingface-cli download Skywork/SkyReels-A2 --local-dir models --exclude "*.git*" "README.md" "docs"
and disable Gradio's share option by default, for security.
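Something like this sketch of the suggested defaults (the class structure is taken from the traceback; the launch call is an assumed standard Gradio one, not the actual app.py code):

class ModelInference:
    def __init__(self):
        # Default to a local models/ folder instead of the '/path/to/model'
        # placeholder, matching the download command above.
        self._pipeline_path = "./models/"
        ...

# And at the bottom of app.py, assuming a standard Gradio launch:
demo.launch(share=False)  # no public tunnel unless the user opts in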
How long should generation take? The first example (man holding a teddy bear) is still going after 10 minutes on a 4090.
After 700 seconds it had only just started the 0/50 "vae combine: before vae repeat: True" step.
All 24 GB of GPU VRAM is maxed out, so is this not meant for local use yet?
@SoftologyPro We updated the inference code to automatically download the model to the corresponding path.
When I run the latest app.py I still get this error:
File "D:\Tests\SkyReels-A2\SkyReels-A2\app.py", line 110, in <module>
infer = ModelInference()
File "D:\Tests\SkyReels-A2\SkyReels-A2\app.py", line 39, in __init__
self._image_encoder = CLIPVisionModel.from_pretrained(self._pipeline_path, subfolder="image_encoder", torch_dtype=torch.float32)
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\modeling_utils.py", line 279, in _wrapper
return func(*args, **kwargs)
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\modeling_utils.py", line 4078, in from_pretrained
resolved_config_file = cached_file(
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 266, in cached_file
file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 470, in cached_files
resolved_files = [
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 471, in <listcomp>
_get_cache_file_to_return(path_or_repo_id, filename, cache_dir, revision) for filename in full_filenames
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\transformers\utils\hub.py", line 134, in _get_cache_file_to_return
resolved_file = try_to_load_from_cache(path_or_repo_id, full_filename, cache_dir=cache_dir, revision=revision)
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 106, in _inner_fn
validate_repo_id(arg_value)
File "D:\Tests\SkyReels-A2\SkyReels-A2\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 154, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/model'. Use `repo_type` argument if needed.
@SoftologyPro I have fixed this error. Try again.
OK, thanks, that works now.
Is this supposed to be runnable on a 24 GB consumer GPU?
@SoftologyPro We now support user-level GPU inference on the RTX 4090. You can read the inference section of the README for details.
Do you mean "Set the offload_switch of infer_MGPU.py to True, and you can run it on RTX 4090"?
If so, can you add that as a checkbox option to the Gradio UI (and have it turned on by default)? Then it should be faster and more usable.
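My understanding of what that switch would toggle, as a rough sketch assuming a diffusers-style CPU offload (the actual infer_MGPU.py implementation may differ):

import torch
from diffusers import DiffusionPipeline

offload_switch = True  # the flag being discussed

pipe = DiffusionPipeline.from_pretrained("./models", torch_dtype=torch.bfloat16)
if offload_switch:
    # Keep submodules on the CPU and move each one to the GPU only while
    # it runs: slower per step, but a much smaller peak VRAM footprint.
    pipe.enable_model_cpu_offload()
else:
    pipe.to("cuda")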
I did try editing infer_MGPU.py to set offload_switch=True.
The initial "vae combine: before vae repeat: True" line now appears much faster; before, it took over 700 seconds to show up.
VRAM is still maxed out at 24 GB.
460 seconds until the 0/50 progress line appeared. It has been over 25 minutes now and it still has not reached 1/50.
I too am curious about timing. I used the 14B 540p DF model (544x960 video) on a RunPod node with an L40 (48 GB VRAM), and it took almost an hour to make a 12-second video (3 clips of 4 seconds each).
CUDA 12.8, driver 565.57.01, torch 2.6.0+cu124, Python 3.12.
Using a ComfyUI workflow.